Heh, low comments on C++ posts now. A sign of the times. My two cents anyway.
I've been using C++ for a decade. Of all the warts, they all pale in comparison to the default initialization behavior. After seeing thousands of bugs, the worst have essentially been caused by cascading surprises from initialization UB written by newbies.

The easiest, simplest fix is to default-initialize with a value. That's what everyone expects anyway. Use the Python mentality here. Make UB initialization an EXPLICIT choice with a keyword. If you want garbage in your variable and you think that's okay for a tiny performance improvement, then you should have to say it with a keyword. Don't just leave it up to some tiny invisible visual detail no one looks at when they skim code (the missing parens). It really is that easy for the language designers.

When thinking about backward compatibility... keep in mind that the old code was arguably already broken. There's not a good reason to keep letting it compile. Add a flag for --unsafe-initialization-i-cause-trouble if you really want to keep it.
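To make the "missing parens" detail concrete, here's a minimal sketch (the `point` aggregate is made up for illustration):

```cpp
#include <cassert>

// For an aggregate with no user-provided constructor,
// default-initialization leaves members indeterminate,
// while value-initialization zeroes them.
struct point { int x; int y; };

void demo() {
    point a;           // default-init: a.x, a.y indeterminate (UB to read)
    point b{};         // value-init: both members zeroed
    point c = point(); // also value-init, via the easily-missed parens
    assert(b.x == 0 && b.y == 0);
    assert(c.x == 0 && c.y == 0);
    (void)a;           // deliberately never read
}
```

The entire difference between `point a;` and `point a{};` is two characters that nobody notices when skimming.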
C++, I still love you. We're still friends.
juliangmp · 9h ago
> When thinking about backward compatibility... keep in mind that the old code was arguably already broken. There's not a good reason to keep letting it compile.
Oh how I wish the C++ committee and compiler authors would adopt this way of thinking...
Sadly we're dealing with an ecosystem where you have to curate your compiler options and also use clang-tidy to avoid even the simplest mistakes :/
Like it's insane to me how -Wconversion is not the default behavior.
motorest · 5h ago
> Oh how I wish the C++ committee and compiler authors would adopt this way of thinking...
I disagree. If you expect anyone to adopt your new standard revision, the very least you need to do is ensure their code won't break just by flipping a flag. You're talking about production software, much of which has decades' worth of commit history; you simply cannot go through each and every line of a >1M LoC codebase. That's the difference between managing production-grade infrastructure and hobbyist projects.
dwattttt · 14m ago
> If you expect anyone to adopt your new standard revision, the very least you need to do is ensure their code won't break just by flipping s flag.
Why would you expect that a new revision can't cause existing code to stop compiling? Insisting on that means "new" revisions can't fix old problems, and one thing you always get more of over time is perspective.
If you don't want your code "broken", don't migrate to a new standard. That's the point of supporting old standards. Don't hobble new standards because you both want new things, but don't want old things to change.
johannes1234321 · 1h ago
The answer there is better tooling, and the foundation for tooling that can do such maintenance somewhat automatically already exists — in the simplest case by just adding the keywords that request the old behavior.
But the annoyance comes when dealing with multiple compilers and versions. Then you have to add more compatibility macros all over. Say, when being a library vendor trying to support broad range of customers.
motorest · 1h ago
> The option there is better tooling (...)
The tooling already exists. The bulk of the criticism in this thread is clearly made from a position of ignorance. For example, all major compilers already provide flags to enable checks for uninitialized variables being used. Onboarding a static code analysis tool nowadays requires setting a flag in CMake.
These discussions would be constructive if those engaging in them had any experience at all with the language and tooling. Instead the goal seems to be parroting clichés out of ignorance: not even knowing what a reserved word means, and using that as an argument to rewrite software in other languages, is somehow presented as worth stating.
monkeyelite · 2h ago
And the cost of this is that every time I open a project in another language it’s broken and I have to make changes to fix all their little breaking changes.
zahlman · 6h ago
>Oh how I wish the C++ committee and compiler authors would adopt this way of thinking
Many different committees, organizations etc. could benefit, IMO.
josefx · 5h ago
> keep in mind that the old code was arguably already broken
The code is only broken if the data is used before anything is written to it. A lot of uninitialized data is wrapped by APIs that prevent reading before something was written — for example the capacity of a standard vector, or IO buffers that should only access bytes that were already stored in them. I have also worked with a significant number of APIs that expect a large array of POD types and then tell you how many entries they filled.
> for a tiny performance improvement
Given that Linux allocates physical memory pages only when they are touched, and many containers intentionally grow faster than they are used? Leaving unused objects untouched reduces the number of page faults and the memory use significantly.
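As a sketch of the write-before-read pattern being described (`fake_read` is a made-up stand-in for an OS or library call that reports how much it wrote):

```cpp
#include <cassert>
#include <cstring>

// Only bytes [0, n) are ever read back, so pre-zeroing the whole
// buffer is pure cost. The callee tells the caller how much is valid.
std::size_t fake_read(char* dst, std::size_t cap) {
    const char msg[] = "hi";
    std::size_t n = (sizeof msg - 1 < cap) ? sizeof msg - 1 : cap;
    std::memcpy(dst, msg, n);
    return n;   // caller's contract: only dst[0..n) may be read
}

std::size_t demo() {
    char buf[4096];   // intentionally uninitialized
    std::size_t n = fake_read(buf, sizeof buf);
    assert(n == 2 && buf[0] == 'h' && buf[1] == 'i');
    return n;
}
```

Forcing zero-initialization of `buf` here would touch every page of the buffer even though only two bytes are ever used.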
riehwvfbk · 4h ago
You are very very unlikely to trigger Linux overcommit behavior by not initializing a member variable. It's even more unlikely for this to be a good thing.
In effect, you are assuming that your uninitialized and initialized variables straddle a page boundary. This is obviously not going to be a common occurrence. In the common case you are allocating something on the heap. That heap chunk descriptor before your block has to be written, triggering a page fault.
Besides: taking a page fault, entering the kernel, modifying the page table page (possibly merging some VMAs in the process) and exiting back to userspace is going to be A LOT slower than writing that variable.
OK you say, but what if I have a giant array of these things that spans many pages. In that case your performance and memory usage are going to be highly unpredictable (after all, initializing a single thing in a page would materialize that whole page).
OK, but vectors. They double in size, right? Well, the default allocator for vectors will actually zero-initialize the new elements. You could write a non-initializing allocator and use it for your vectors - and this is in line with "you have to say it explicitly to get dangerous behavior".
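The non-initializing allocator mentioned here is a well-known idiom; a minimal sketch (not a standard facility, names are made up):

```cpp
#include <cassert>
#include <memory>
#include <new>
#include <utility>
#include <vector>

// An allocator that default-initializes elements instead of
// value-initializing them, so vector growth skips the zero-fill
// for trivial types.
template <class T, class A = std::allocator<T>>
struct default_init_allocator : A {
    using A::A;

    template <class U> struct rebind {
        using other = default_init_allocator<
            U, typename std::allocator_traits<A>::template rebind_alloc<U>>;
    };

    // No-argument construct: default-initialization (no zeroing for PODs).
    template <class U>
    void construct(U* p) {
        ::new (static_cast<void*>(p)) U;
    }

    // All other constructions are forwarded to the underlying allocator.
    template <class U, class... Args>
    void construct(U* p, Args&&... args) {
        std::allocator_traits<A>::construct(static_cast<A&>(*this), p,
                                            std::forward<Args>(args)...);
    }
};

void demo() {
    std::vector<int, default_init_allocator<int>> v;
    v.resize(1024);   // elements left indeterminate, not zeroed
    v[0] = 42;        // write before read, per the API contract
    assert(v.size() == 1024 && v[0] == 42);
}
```

This is exactly the "say it explicitly to get dangerous behavior" shape: the opt-out is visible in the vector's type.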
josefx · 3h ago
> In effect, you are assuming that your uninitialized and initialized variables straddle a page boundary
You are assuming that I am working with small data structures, don't use arrays of data, don't have large amounts of POD members, ... .
> That heap chunk descriptor before your block has to be written, triggering a page fault.
So you allocate one out of hundreds of pages? The cost is significantly less than the alternative.
> In that case your performance and memory usage are going to be highly unpredictable (after all, initializing a single thing in a page would materialize that whole page).
As opposed to initializing thousands of pages you will never use at once? Or allocating single pages when they are needed?
> Well, the default allocator for vectors will actually zero-initialize the new elements.
I reliably get garbage data after the first reserve/shrink_to_fit calls. Not sure why the first one returns all zeros; I wouldn't rely on it.
jchw · 2h ago
> You are assuming that I am working with small data structures, don't use arrays of data, don't have large amounts of POD members, ... .
Sounds like a great set of use cases for explicit syntax to opt out of automatic initialization.
motorest · 4h ago
> You are very very unlikely to trigger Linux overcommit behavior by not initializing a member variable.
The problem with your assumption is that you're just arguing that it's ok for code to be needlessly buggy if you believe the odds this bug is triggered are low. OP points out a known failure mode and explains how a feature eliminates it. You intentionally ignore it for no reason.
This assumption is baffling when, in the exact same thread, you see people whining about C++ for allowing memory-related bugs to exist.
yorwba · 2h ago
Linux overcommit is not a bug, it's a feature. The argument isn't that it's okay for code to be buggy if the odds of triggering the bug are low, it's that it's okay for code to not make use of a feature if the odds of benefiting from that feature are low.
motorest · 1h ago
> Linux overcommit is not a bug, it's a feature.
You failed to read what I wrote. I was referring to why clients would choose not to initialize early — to avoid scenarios such as Linux overcommitting — not claiming that Linux had a bug.
yorwba · 49m ago
Overcommit is an optimization where virtual memory that is allocated but unused is not mapped to physical memory. If you want to avoid this (for some reason), choosing not to initialize early is not going to have the intended effect.
tails4e · 2h ago
Especially when doing the right/safe thing by default is at worst a minor performance hit. They could change the default to be sane and provide a backwards-compatible switch or pragma to revert to the less safe version. They could, but for some reason never seem to make such positive changes.
vrighter · 1h ago
That's the `undefined` keyword in Zig. I love it. It makes UB opt-in and explicit.
loeg · 9h ago
Compilers should add this as a non-standard extension, right? -ftrivial-auto-var-init=zero is a partial solution to a related problem, but it seems like they could just... not have UB here. It can't be that helpful for optimization.
Matheus28 · 9h ago
Yes but it’s not portable. If zero initialization were the default and you had to opt-in with [[uninitialized]] for each declaration it’d be a lot safer. Unfortunately I don’t think that will happen any time soon.
tialaramex · 7h ago
You probably don't want zero initialization if you can help it.
Ideally, what you want is what Rust and many modern languages do: programs which don't explain what they wanted don't compile, so, when you forget to initialize that won't compile. A Rust programmer can write "Don't initialize this 1024 byte buffer" and get the same (absence of) code but it's a hell of a mouthful - so they won't do it by mistake.
The next best option, which is what C++26 will ship, is what they called "Erroneous Behaviour". Under EB it's defined as an error not to initialize something you use, but it is also defined what happens, so you can't have awful UB problems — typically the vendor specifies which bit pattern is written to an "uninitialized" object, and that's the pattern you will observe.
Why not zero? Unfortunately zero is too often a "magic" value in C and C++. It's the Unix root user, it's often an invalid or reserved state for things. So while zero may be faster in some cases, it's usually a bad choice and should be avoided.
motorest · 3h ago
> Ideally, what you want is what Rust and many modern languages do: programs which don't explain what they wanted don't compile, so, when you forget to initialize that won't compile.
I think you're confusing things. You're arguing about static code analysis being able to identify uninitialized var reads. All C++ compilers already provide support for flags such as -Wuninitialized.
dwattttt · 8m ago
> You're arguing about static code analysis being able to identify uninitialized var reads.
(Safe) Rust does guarantee to identify uninitialised variable reads, but I believe the point is that you can get the optimisation of not forcing early initialisation in Rust, you just have to be explicit that that's what you want (you use the MaybeUninit type); you're forced to be clear that that's what you meant, not just by forgetting parens.
leni536 · 4h ago
Something like that is heading into C++26 actually. Except the initialization is not to zero, but to some unspecified value (with the explicit intention of not leaking garbage), and implementations are allowed to trap. It's called "erroneous values".
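A sketch of that C++26 direction (based on P2795's "erroneous behavior"; the `[[indeterminate]]` attribute is C++26-only, and older compilers will merely warn about an unknown attribute):

```cpp
// Reading an uninitialized automatic variable becomes "erroneous" in
// C++26: it yields some concrete implementation-chosen value or traps,
// but is no longer plain UB.
int erroneous_read() {
    int x;       // C++26: erroneous value on read, diagnosable, not UB
    return x;
}

// The explicit opt-out back into the old uninitialized semantics:
int opted_out() {
    int y [[indeterminate]];  // "I really want no initialization"
    y = 7;                    // must still be written before any read
    return y;
}
```

Note how the unsafe case is now the one that requires extra syntax, which is the inversion the top comment asks for.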
loeg · 8h ago
I don't really care if it isn't portable. I only have to work with Clang, personally.
> If zero initialization were the default and you had to opt-in with [[uninitialized]] for each declaration it’d be a lot safer.
I support that, too. Just seems harder than getting a flag into Clang or GCC.
motorest · 54m ago
> I don't really care if it isn't portable.
You don't care because your job is not to ensure that a new release of C++ doesn't break production code. You gaze at your navel and pretend that's the universe everyone is bound to. But there are others using C++, and using it in production software. Some of them care, and your subjective opinions have no impact on everyone else's requirements.
> I only have to work with Clang, personally.
Read Clang's manual and check what compiler flags you need to flip to get that behavior. It's already there.
ryandrake · 8h ago
Portability is always for the other guy’s sake, not your own. That’s why so many people don’t care about it.
loeg · 7h ago
Again, I'm not opposed to the idea, it just seems more challenging logistically.
MichaelRo · 6h ago
>> Of all the warts, they all pale in comparison to the default initialization behavior.
Come on. That's nothing compared to the horrors that lie in manual memory management. Like I've never worked with a C++ based application that doesn't have crashes lurking all around, so bad that even a core dump leaves you clueless as to what's happening. Couple OOP involving hundreds of classes and 50-level-deep calls with hundreds of threads and you're hating your life when trying to find the cause for yet another crash.
kaashif · 5h ago
50 levels deep? With some of the template metaprogramming I've seen, looking at just the types for just one level will not only fill your screen, but take up megabytes on disk...
nlehuen · 11h ago
Not to worry, there is a 278-page book about initialization in C++!
(I don't know whether it's good or not, I just find it fascinating that it exists)
bhk · 8h ago
Wow! Exhibit 1 for the prosecution.
kazinator · 7h ago
C++ doesn't have initiation hazing rituals, but initialization hazing rituals. (One of which is that book.)
nitrogen99 · 10h ago
Well, authors are incentivized to write long books. Having said that, it obviously doesn't take away from the fact that C++ init is indeed bonkers.
harry8 · 9h ago
What would be the incentive for making this a long book? Couldn't be money.
jcelerier · 8h ago
It is, actually. It's been shown that longer books sell more copies because they are considered more trustworthy, so authors are incentivized to artificially drag them out longer than they actually need to be.
Analemma_ · 9h ago
I imagine if I'd managed to actually memorize all of C++'s initialization rules, I'd probably have to write a book too just to get it all out, or I'd lose my sanity.
sph · 2h ago
Then you can proudly put “C++ initialization consultant” on your resumé and get paid $1000 a day fixing class constructors at Fortune 500 companies.
agent327 · 3h ago
The answer to this is to replace default-init by zero-init. This removes all special cases and all surprise, at a cost that is minimal (demonstrated experimentally by its implementation in things like Windows and Chrome) or even negative. Doing so would make software safer, and more reproducible, and it would make the object model more sound by removing the strange zombie state that exists only for primitive types.
Of course we should provide a mechanism to allow large arrays to remain uninitialized, but this should be an explicit choice, rather than the default behaviour.
However, will it happen? It's arguably the easiest thing C++ could do to make software safer, but there appears to be no interest in the committee to do anything with safety other than talk about it.
shultays · 11m ago
> Of course we should provide a mechanism to allow large arrays to remain uninitialized, but this should be an explicit choice, rather than the default behaviour.
First you say the "cost is minimal, even negative", and then you're already arguing against it in the next paragraph.
monkeyelite · 2h ago
We all agree that poor defaults were chosen in C++ across the board. We have learned a lot about languages since then.
The question is what to do about it - balancing the cost of change to code and to engineers who learned it.
> but there appears to be no interest in the committee to do anything with safety other than talk about it.
There is plenty of interest in improving C++ safety. It’s a regular topic of discussion.
Part of that discussion is how it will help actual code bases that exist.
Should the committee do some breaking changes to make HN commenters happier, who don’t even use the language?
agent327 · 45m ago
I was not proposing sweeping changes to all the defaults in C++, I was proposing to adopt a single, specific change. That change does not break any existing code, removes pitfalls from the language, and has already been tried by industry and found to be beneficial. Why is it not in C++26?
https://open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2754r0... provides what appears to be the answer to this question: "No tools will be able to detect existing logical errors since they will become indistinguishable from intentional zero initialization. The declarations int i; and int i = 0; would have precisely the same meaning." ...yes, they would. _That's the point_. The paper has it exactly the wrong way around: currently tools cannot distinguish between logical error and intentional deferred initialization, but having explicit syntax for the latter would make the intention clear 100% of the time. Leaving a landmine in the language just because it gives you more warnings is madness. The warning wouldn't be needed to begin with, if there were no landmine.
I'm not sure what you mean with "who don't even use the language". Are you implying that only people that program professionally in C++ have any stake in reliable software?
Initialization does look insane. But as with most C++ complexity this is inherent.
Lists of the “good parts” of C++ over C usually include RAII. But if we imagine starting with C and adding C++ features to see when the complexity explodes, I think the worst offender is the constructor/destructor pair.
They require the language to perfectly track the lifetime of every member of every structure. If you resize a vector, every entry must call a constructor. If exceptions are possible, it must insert little cleanup calls into all possible code paths.
Want to make a copy of something? Who is responsible for calling constructor/destructor. Want to make a struct? What if one member requires construction? How do you handle a union?
The result is micromanaging and turning most operations into O(n) init/cleanup calls.
The modern C approach avoids all of this and allows you to manage pieces of memory - rather than values. Zero initialize or leave uninitialized.
So what do we lose? Well classes own resources. If you have a vector<MyObject> and MyObject has a member vector<Items> then we should be able to cleanup without looking inside each member of each element.
I think we should separate resource allocation from use. Allocators are the things that care about cleanup, move, etc. This should be the exception - rather than the default way to think about structs.
gpderetta · 28m ago
> Want to make a copy of something? Who is responsible for calling constructor/destructor.
What do you mean? The compiler will do it for you.
> This should be the exception - rather than the default way to think about structs.
The way that RAII in C++ recursively constructs and destroys arbitrary object graphs is extremely powerful. It is something that very few other languages have (Rust; any others?). It should definitely be the default.
> I think we should separate resource allocation from use. Allocators are the things that care about cleanup, move, etc.
I'm not sure what you mean by use. If you mean we should separate allocation from construction, I agree! But then so does C++. They are tied by default, but it is easy to separate them if you need it.
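A minimal sketch of that separation, using plain `operator new` for raw storage plus placement new for construction:

```cpp
#include <memory>
#include <new>
#include <string>

// Allocation and construction are tied by default in C++,
// but can be separated explicitly when needed.
std::string demo() {
    void* raw = ::operator new(sizeof(std::string));  // allocate only
    auto* s = ::new (raw) std::string("hello");       // construct later
    std::string copy = *s;
    std::destroy_at(s);      // destroy without freeing (C++17)
    ::operator delete(raw);  // free separately
    return copy;
}
```

This is also exactly what `std::vector` does internally: it allocates raw capacity up front and constructs elements one at a time as they are added.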
ts4z · 10h ago
This is a specialization of the general statement that C++ is bonkers.
MichaelMoser123 · 2h ago
and putting structure instances into an array so that you can refer to them via indexes of the array entries (as the only escape from being maimed by the borrow checker) is normal?
The book Beautiful C++: 30 Core Guidelines for Writing Clean, Safe, and Fast Code recommends initializing/providing default values for member variables in default member initializers instead of the initializer list used here.
"""
Default member initializers define a default value at the point of declaration. If there is a member that cannot be defined in such a way, it suggests that there may be no legal mechanism by which a default constructor can be defined.
"""
markhahn · 9h ago
Most of that actually just makes sense if you approach it from the historic, low-level, minimalist direction. But maybe if you're coming from some other, higher-comfort language...
frollogaston · 8h ago
Coming from C, none of this made sense to me. Wut is `foo() = default;`? If you want a default value of 0, why isn't it just
    struct foo {
        int a = 0;
    };
In Python, which is higher-level ofc, I still have to do `foo = 0`, nice and clear.
Maxatar · 7h ago
`foo() = default;` is an explicit way to generate a default constructor for `foo`. The default constructor works by recursively calling the default constructors for all class instance fields. In C++ there are a bunch of rules about when a class has a default constructor or not, but by explicitly declaring one you are guaranteed to have it so long as all your class instance fields have default constructors.
Your example of having a field called `a` that is initialized to 0 is perfectly valid C++ as well but it's not the same as an explicitly declared default constructor.
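A small sketch of that rule (`no_default` and `restored` are made-up names):

```cpp
#include <type_traits>

// Declaring any constructor suppresses the implicit default
// constructor; `= default` asks the compiler to generate it anyway.
struct no_default {
    no_default(int) {}    // implicit default ctor is now gone
};

struct restored {
    restored(int) {}
    restored() = default; // explicitly regenerated
    int a = 0;            // default member initializer it will use
};

static_assert(!std::is_default_constructible_v<no_default>);
static_assert(std::is_default_constructible_v<restored>);
```

So `restored r;` compiles and `r.a` starts at 0, while `no_default d;` is a compile error.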
motorest · 4h ago
> Coming from C, none of this made sense to me. Wut is `foo() = default;`?
C does not have member functions, let alone special member functions such as constructors. It's understandable that someone with a C background who never had any experience using a language besides C would struggle with this sort of info.
C++ improved upon C's developer experience by introducing the concept of special member functions. These are functions which the compiler conveniently generates for you when you write a simple class. This covers constructors (copy constructors and move constructors too). This is extremely convenient and eliminates the need for a ton of boilerplate code.
C++ is also smart enough to back off when generating something might surprise you. If you add anything to a basic class that would violate the assumptions behind the default implementations of any of these special member functions, C++ simply doesn't define them.
Now, just because you prevented C++ from automatically defining your constructors, that does not mean you don't want them without having to add your boilerplate code. Thus, C++ allows developers to define these special member functions using default implementations. That's what the default keyword is used for.
Now, to me this sort of complaining just sounds like nitpicking. The whole purpose of special member functions and default implementations is to help developers avoid writing boilerplate code to have basic implementations of member functions you probably need anyway. For basic, predictable cases, C++ steps in and helps you out. If you prevent C++ from stepping in, it won't. Is this hard to understand?
More baffling, you do not have to deal with these scenarios if you just declare and define the special member functions you actually want. This was exactly how this feature was designed to work. Is this too hard to follow or understand?
I think the problem with C++ is that some people who are clearly talking out of ignorance feel the need to fabricate arguments about problems you will experience if you a) don't know what you are doing at all and aren't even interested in learning, b) you want to go way out of your way to nitpick about a tool you don't even use. Here we are, complaining about a keyword. If we go through the comments, most of the people doing the bulk of the whining don't even know what it means or how it's used. They seem to be invested in complaining about things they never learned about. Wild.
zabzonk · 8h ago
> If you want a default value of 0, why isn't it ...
It is.
frollogaston · 3h ago
I know that works too, but there are also other unclear ways to do it.
kazinator · 7h ago
> This rule makes sense when you think about it
No, it is bonkers; stick to your consistent point, please.
These two should have exactly the same effect:
    bar() = default;       // inside class declaration
    bar::bar() = default;  // outside class declaration
The only difference between them should be analogous to the difference between an inline and non-inline function.
For instance, it might be that the latter one is slower than the former, because the compiler doesn't know from the class declaration that the default constructor is actually not user-defined but default. How it would work is that a non-inline definition is emitted, which dutifully performs the initialization, and that definition is actually called.
That's what non-bonkers might look like, in any case.
I.e. both examples are rewritten by the compiler into something like

    bar::bar() { __default_init; }

where __default_init is a fictitious placeholder for the implementation's code-generation strategy for doing that default initialization. It would behave the same way, other than being inlined in the one case and not in the other.
Another way that it could be non-bonkers is if default were simply not allowed outside of the class declaration.
    bar::bar() default; // error, too late; class declared already!
Something that has no hope of working right and is easily detectable by syntax alone should be diagnosed. If default only works right when it is present at class declaration time, then ban it elsewhere.
e-dant · 10h ago
Let the language die, hope it goes quicker than cobol.
trealira · 6h ago
C++ is not going anywhere. It's even still used in gamedev to make new games. It's used in HPC and scientific computing. Windows applications often use it. And so on.
> COBOL Language Frontend Merged For GCC 15 Compiler
> Written by Michael Larabel in GNU on 11 March 2025 at 06:22 AM EDT.
greesil · 6h ago
I don't think it's going anywhere, too much existing code that's still useful. People STILL use Fortran 77 for goodness sake.
lblume · 3h ago
Fortran may still be used but is considered functionally dead nonetheless. Nobody is hiring Fortran devs anymore (and those who do put themselves in a really hard market position). Yet, learning C++ might still be a more valuable skill than learning Rust.
kergonath · 3h ago
Fortran 77 is dead. Fortran is not, and yes, people still get hired to use it. Just maybe not in your field.
bdangubic · 9h ago
“quicker than cobol” means it will die in the next 100 years (maybe) :)
jandrewrogers · 3h ago
For better or worse, modern C++ is still the most capable and expressive systems language. To replace it, we need (at a minimum) a language with similar capability and expressiveness in the low-level systems domain. The options are really thin; Zig probably comes the closest but it is a bit austere coming from recent C++ versions.
I think we can do significantly better than C++ as a systems language. We just haven’t landed on a language that really nails the design.
indigoabstract · 3h ago
I think this saying applies here pretty well: Horses don't die when the dogs want them to.
gosub100 · 10h ago
COBOL is alive and well. Why would a company rewrite a codebase that has decades of error free functionality? What do they get?
cheema33 · 9h ago
> Why would a company rewrite a codebase that has decades of error free functionality? What do they get?
All well and good if it is something you do not have to modify/maintain on a regular basis. But, if you do, then the ROI on replacing it might be high, depending on how much pain it is to keep maintaining it.
We have an old web app written in asp.net web forms. It mostly works. But we have to maintain it and add functionality to it. And that is where the pain is. We've been doing it for a few years but the amount of pain it is to work on it is quite high. So we are slowly replacing it. One page at a time.
gosub100 · 8h ago
the insurance companies running COBOL don't care. it's cheaper to pay a cowboy $X00,000/yr to keep the gravy dispenser running than trying to modify it. by definition, this is code that's been in use for decades. Why change it?
jimbob45 · 8h ago
I suspect the committee agrees with you. I think they’ve anticipated a competitor coming to kill C++ for two decades now and see themselves as keeping C++ on life support for those who need it.
It’s shameful that there’s no good successor to C++ outside of C# and Java (and those really aren’t successors). Carbon was the closest we came and Google seems to have preemptively dropped it.
> The addition of a safety design is a shift in our milestones for v0.1, and you can see the difference here. Both of these are fundamental parts of v0.1, and will take long enough that the earliest date for v0.1 is pushed out to the end of 2026
Look, no one is more excited than me for this, but this is reaching Star Citizen levels of delays.
This idea that everything must be initialized (i.e. no undefined or non-deterministic behavior) should never be forced upon a language like C++, which rightly assumes the programmer should have the final say. I don't want training wheels put on C++ -- I want C++ to do exactly and only what the programmer specifies and no more. If the programmer wants to have uninitialized memory -- that is her business.
Maxatar · 8h ago
It's so ironic hearing a comment like this. If what you really want is for C++ to do only what you strictly specified, then you'd always release your software with all optimizations disabled.
But I'm going to go out on a limb here and guess you don't do that. You actually do allow the C++ compiler to make assumptions that are not explicitly in your code, like reorder instructions, hoist invariants, eliminate redundant loads and stores, vectorize loops, inline functions, etc...
All of these things I listed are based on the compiler not doing strictly what you specified but rather reinterpreting the source code in service of speed... but when it comes to the compiler reinterpreting the source code in service of safety.... oh no... that's not allowed, those are training wheels that real programmers don't want...
Here's the deal... if you want uninitialized variables, then explicitly have a way to declare a variable to be uninitialized, like:
    int x = void;
This way for the very very rare cases where it makes a performance difference, you can explicitly specify that you want this behavior... and for the overwhelming majority of cases where it makes no performance impact, we get the safe and well specified behavior.
monkeyelite · 2h ago
> It's so ironic hearing a comment like this. If what you really want is for C++ to do only what you strictly specified, then you'd always release your software with all optimizations disabled
The whole idea of optimizations is producing code that’s equivalent to the naive version you wrote. There is no inconsistency here.
RUnconcerned · 25m ago
Optimizations are not "exactly and only what the programmer specifies and no more". They actually fall into the "more" category, believe it or not.
waynecochran · 5h ago
The whole advantage of UB is that it places fewer constraints on what the optimizer can do. If I say something does not need to be initialized, I am giving the optimizer the freedom to do more!
TheBicPen · 4h ago
So what's the issue with introducing explicit syntax to do exactly that if you want to? A safe default does not preclude you from opting out of safety with a bit of syntax or perhaps a compiler flag.
monkeyelite · 2h ago
The issue is that the language was already designed with the old behavior.
trealira · 3h ago
Nah, they'd never add new syntax like that, given it's inconsistent with the rest of C++.
If they added an explicit uninitialized value representation to the language, I bet it would look something like this:
int x {std::uninitialized<int>::value};
gpderetta · 22m ago
C++ hasn't done it this way for nullptr or nullopt, why would it do it for an explicit uninitialized?
frollogaston · 8h ago
How about int x = 0 if you want 0. Just `int x;` doesn't make it clear that you want 0.
kstrauser · 7h ago
Safe defaults matter. If you're using x to index into an array, and it's randomly initialized as ±2,000,000,000 because that's what happened to be in that RAM location when the program launched, and you use it before explicitly setting it, you're gonna have a bad time.
And if you used it with a default value of 0, you're going to end up operating on the 0th item in the array. That's probably a bug and it may even be a crasher if the array has length 0 and you end up corrupting something important, but the odds of it being disastrous are much lower.
yxhuvud · 9h ago
The discussion about what should be the default behavior and of what should be the opt-in behavior is very different from what should be possible. It is definitely clear that in c++, it must be possible to not initialize variables.
Would it really be that unreasonable to have initialisation be opt-out instead of opt-in? You'd still have just as much control, but it would be harder to shoot yourself in the foot by mistake. Instead it would be slightly more easy to get programs that can be optimised.
frollogaston · 8h ago
C++ is supposed to be an extension of C, so I wouldn't expect things to be initialized by default, even though personally I'm using C++ for things where it'd be nice.
I'm more annoyed that C++ has some way to default-zero-init but it's so confusing that you can accidentally do it wrong. There should be only one very clear way to do this, like you have to put "= 0" if you want an int member to init to 0. If you're still concerned about safety, enable warnings for uninitialized members.
gpderetta · 21m ago
my_type my_var = {}; almost always does the right thing.
The almost is unfortunate.
loeg · 9h ago
As someone who has to work in C++ day in and day out: please, give me the fucking training wheels. I don't want UB if I declare an object `A a;` instead of `A a{};`. At least make it a compiler error I can enable!
ryandrake · 8h ago
Ideally, there would be a keyword for it. So ‘A a;’ would not compile. You’d need to do ‘A a{};’ or something like ‘noinit A a;’ to tell the compiler you’re sure you know what you are doing!
waynecochran · 5h ago
Not me. I want to give the optimizer the freedom to do its thing. If I say something does not need to be initialized, then the optimizer has one less constraint to worry about.
wiseowise · 4h ago
We’ve already understood you don’t want sane language design, you don’t need to repeat it ten times.
90s_dev · 8h ago
That's the inherent tension, though, isn't it?
A programmer wants the compiler to accept code that looks like a stupid mistake when he knows it's not.
But he also wants to have the compiler make sure he isn't making stupid mistakes by accident.
Unfortunately C++ ended up with a set of defaults (i.e., the most ergonomic ways of doing things) that are almost always the least safe. During most of C++'s development, performance was king and so safety became opt-in.
Many of these can't be blamed on C holdover. For example Vector.at(i) versus Vector[i] – most people default to the latter and don't think twice about the safety implications. The irony is that most of the time when people use std::vector, performance is irrelevant and they'd be much better off with a safe default.
Alas, we made our bed and now we have to lie in it.
kstrauser · 8h ago
By that logic, you'd have to dislike the situations where C++ does already initialize variables to defined values, like `int i;`, because they're removing your control and forcing training wheels upon you.
So, do you?
jcelerier · 8h ago
int i;
does not initialize the value.
kstrauser · 8h ago
It's a gotcha to be sure. Sometimes it does, sometimes it doesn't. From a reference[0]:
#include <string>

struct T1 { int mem; };

struct T2
{
    int mem;
    T2() {} // “mem” is not in the initializer list
};

int n; // static non-class, a two-phase initialization is done:
       // 1) zero-initialization initializes n to zero
       // 2) default-initialization does nothing, leaving n being zero

int main()
{
    [[maybe_unused]]
    int n;             // non-class, the value is indeterminate
    std::string s;     // class, calls default constructor, the value is ""
    std::string a[2];  // array, default-initializes the elements, the value is {"", ""}
    // int& r;         // Error: a reference
    // const int n;    // Error: a const non-class
    // const T1 t1;    // Error: const class with implicit default constructor
    [[maybe_unused]]
    T1 t1;             // class, calls implicit default constructor
    const T2 t2;       // const class, calls the user-provided default constructor
                       // t2.mem is default-initialized
}
The `int n;` at namespace scope is initialized to 0 per the standard. The `int n;` inside `main` is not. And `struct T1 { int mem; };` will have `mem` initialized to 0 if `T1` is instantiated like `T1 t1{};`, but not if it's instantiated like `T1 t1;`. There's no way to tell from looking at `struct T1 {...}` alone how the members will be initialized; it depends entirely on how it's instantiated.
> "There's a great language somewhere deep inside of C++"
or something to that effect.
portaltonowhere · 7h ago
Unless `i` is global…
waynecochran · 5h ago
Most cases, e.g. local var declaration. `int i` does not initialize i.
charlotte-fyi · 9h ago
The entire problem is that what the programmer wants to do and what the program actually does isn't always clear to the programmer.
GrantMoyer · 9h ago
The problem is that the initialization semantics are so complex in C++ that almost no programmer is actually empowered to exercise their final say, and no programmer without significant effort.
And that's not just said out of unfamiliarity. I'm a professional C++ developer, and I often find I'm more familiar with C++'s more arcane semantics than many of my professional C++ developer co-workers.
titzer · 7h ago
> I want C++ do exactly and only what the programmer specifies and no more.
Most programmers aren't that good and you're mostly running other people's code. Bad defaults that lead to exploitable security bugs is...bad defaults. If you want something to be uninitialized because you know it then you should be forced to scream it at the compiler.
tonyhart7 · 9h ago
"If the programmer wants to have uninitialized memory -- that is her business."
idk, seems like years of academic effort and research wasted if we do it the way C++ does it
vjvjvjvjghv · 9h ago
The dev should have the option to turn it off but I think that removing a lot of undefined and non deterministic behavior would be a good thing. When I did C++ I initialized everything and when there was a bug it could usually be reproduced. There are a few cases where it makes sense performance wise to not initialize but those cases are very small compared to most other code where undefined behavior causes a ton of intermittent bugs.
anon-3988 · 9h ago
If they want the program to do exactly what is told they won't get to have optimization.
> The results show that, in the cases we evaluated, the performance gains from exploiting UB are minimal. Furthermore, in the cases where performance regresses, it can often be recovered by either small to moderate changes to the compiler or by using link-time optimizations.
waynecochran · 5h ago
That's the whole point of UB -- it leaves open more possibilities for optimization. If everything is nailed down, then the options are more restricted.
alexvitkov · 7h ago
This is not even worth thinking about, just type " = {}" on every struct/class member and every variable declaration, and forget about all this nonsense.
dataflow · 7h ago
That's a bad idea. It defeats tools (warnings, sanitizers, etc.) that try to tell you you have forgotten to place the semantically correct value in your variables.
If you want indiscriminate initialization, a compiler flag is the way, not forcing it in the source code.
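For reference, the flags in question already exist in mainstream compilers. A sketch of the usual options (exact spellings and availability vary by compiler and version; `main.cpp` is a placeholder):

```shell
# Warn about reads of uninitialized variables
# (-Wmaybe-uninitialized is GCC-specific):
g++ -Wall -Wuninitialized -Wmaybe-uninitialized main.cpp

# Force-initialize all automatic variables as a hardening measure
# (Clang, and recent GCC; 'pattern' tends to surface bugs, 'zero' hides them cheaply):
clang++ -ftrivial-auto-var-init=pattern main.cpp
clang++ -ftrivial-auto-var-init=zero main.cpp

# Catch reads of uninitialized memory at runtime (Clang's MemorySanitizer):
clang++ -fsanitize=memory -g main.cpp && ./a.out
```

This is the "flag, not source" approach the comment advocates: the semantics of the code stay honest, and the hardening is a build-system decision.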
jandrewrogers · 10h ago
I largely prefer modern C++ as systems languages go but there is no getting around the fact that the initialization story in C++ is a hot mess. Fortunately, it mostly does what you need it to even if you don't understand it.
vjvjvjvjghv · 9h ago
And sometimes it doesn’t do what you think it does.
timewizard · 10h ago
> Explicitly initialize your variables, and if you ever fall in to the trap of thinking C++ is a sane language, remember this
It's a systems language. Systems are not sane. They are dominated by nuance. In any case the language gives you a choice in what you pay for. It's nice to be able to allocate something like a copy or network buffer without having to pay for initialization that I don't need.
creata · 6h ago
C and Rust both tend to be more sane than C++, though, so you can't just pin it on C++ being a systems programming language.
vacuity · 10h ago
I think in this case it's not amiss to mention Rust. Rust gives a compile error if it's not certain a variable is initialized. Option is the standard dynamic representation of this, and works nicely in the context of all Rust code. MaybeUninit is the `unsafe` variant that is offered for performance-critical situations.
wffurr · 7h ago
>> Systems are not sane.
“The systems programmer has seen the terrors of the world and understood the intrinsic horror of existence.”
That may have made sense in the days of < 100 MHz CPUs but today I wish they would amend the standard to reduce UB by default and only add risky optimizations with specific flags, after the programmer has analyzed them for each file.
jcelerier · 9h ago
> That may have made sense in the days of < 100 MHz CPUs
you don't know how much C++ code is being written for 100-200MHz CPUs everyday
I have a codebase that is right now C++23 and soon I hope C++26 targeting from Teensy 3.2 (72 MHz) to ESP32 (240 MHz).
Let me tell you, I'm fighting for microseconds every time I work with this.
vjvjvjvjghv · 9h ago
I bet even there you have only a few spots where it really makes a difference. It’s good to have the option but I think the default behavior should be safer.
jcelerier · 8h ago
I don't know, way too often my perf traces are evenly distributed across a few hundred functions (at best), without any clear outlier.
gosub100 · 8h ago
"how much code" =/= how many developers.
the people who care about clock ticks should be the ones inconvenienced, not ordinary joes who are maintaining a FOSS package that is ultimately struck by a 0-day. It still takes a swiss-cheese lineup to get there, for sure. but one of the holes in the cheese is C++'s default behavior, trying to optimize like it's 1994.
jcelerier · 8h ago
> the people who care about clock ticks
I mean that's pretty much the main reason for using c++ isn't it? Video games, real-time media processing, CPU ai inference, network middleware, embedded, desktop apps where you don't want startup time to take more than a few milliseconds...
PaulDavisThe1st · 4h ago
it's not about startup time. it's about computational bandwidth and latency once running.
gosub100 · 8h ago
No, it's not a dichotomy of having uninitialized data and fast startup or wait several milliseconds for a jvm or interpreter to load a gigabyte of heap allocated crap.
timewizard · 9h ago
CPU speed is not memory bandwidth. Latency and contention always exist. Long lived processes are not always the norm.
C++ sucks, it's too hard to use, the compiler should generate stores all over the place to preemptively initialize everything!
Software is too bloated, if we optimized more we could use old hardware!
Maxatar · 8h ago
I'm not familiar with programming languages that generate redundant stores in order to initialize anything.
Usually what happens is the language requires you to initialize the variable before it's read for the first time, but this doesn't have to be at the point of declaration. Like in Java you can declare a variable, do other stuff, and then initialize it later... so long as you initialize it before reading from it.
Note that in C++, reading from a variable before writing to it is undefined behavior, so it's not particularly clear what benefit you're getting from this.
josefx · 4h ago
> Note that in C++, reading from a variable before writing to it is undefined behavior, so it's not particularly clear what benefit you're getting from this.
The compiler cannot always tell if a variable will be written to before it is accessed. If you have a 100 KB network buffer and you call int read = opaque_read(buffer); the compiler cannot tell how much of the buffer was written, or whether anything was written at all, so it would be forced to initialize every byte to zero. A programmer can read the API docs, see that only the first read bytes are valid, and use the buffer without ever touching anything uninitialized. Now add in that you can pass mutable pointers and references to nearly anything in C++, and the compiler has a much harder time telling whether it has to initialize arguments passed to functions or whether the function is doing the initialization for it.
jorhannn · 2h ago
>Note that in C++, reading from a variable before writing to it is undefined behavior
They are finally fixing that in C++26 where it's no longer undefined behavior, it's "erroneous behavior" which will require a diagnostic and it has to have some value and compilers aren't allowed to break your code anymore.
zahlman · 6h ago
> Note that in C++, reading from a variable before writing to it is undefined behavior, so it's not particularly clear what benefit you're getting from this.
You gain the benefit that the compiler can assume the code path in question is impossible to reach, even if there's an obvious way to reach it. To my understanding, this can theoretically back-propagate all the way to `main()` and make the entire program a no-op.
Why would you expect that a new revision can never break existing code? That would mean "new" revisions can't fix old problems, and one thing you always get more of over time is perspective.
If you don't want your code "broken", don't migrate to a new standard. That's the point of supporting old standards. Don't hobble new standards because you both want new things, but don't want old things to change.
But the annoyance comes when dealing with multiple compilers and versions. Then you have to add more compatibility macros all over, say, as a library vendor trying to support a broad range of customers.
The tooling already exists. The bulk of the criticism in this thread is clearly made from a position of ignorance. For example, all major compilers already provide flags to enable checks for uninitialized variables being used. Onboarding a static code analysis tool nowadays requires setting a flag in CMake.
These discussions would be constructive if those engaging in them had any experience at all with the language and tooling. But no, the goal seems to be to parrot cliches out of ignorance. Complaining that you don't know what a reserved word means, and using that as an argument to rewrite software in other languages, is somehow presented as worth stating.
Many different committees, organizations etc. could benefit, IMO.
The code is only broken if the data is used before anything is written to it. A lot of uninitialized data is wrapped by APIs that prevent reading before something was written to it, for example the capacity of a standard vector; buffers for IO should only access bytes that were already stored in them. I have also worked with a significant number of APIs that expect a large array of POD types and then tell you how many entries they filled.
> for a tiny performance improvement
Given how Linux allocates memory pages only if they are touched, and how many containers intentionally grow faster than they are used? It reduces the amount of page faults and memory use significantly if only the used objects get touched at all.
In effect, you are assuming that your uninitialized and initialized variables straddle a page boundary. This is obviously not going to be a common occurrence. In the common case you are allocating something on the heap. That heap chunk descriptor before your block has to be written, triggering a page fault.
Besides: taking a page fault, entering the kernel, modifying the page table page (possibly merging some VMAs in the process) and exiting back to userspace is going to be A LOT slower than writing that variable.
OK you say, but what if I have a giant array of these things that spans many pages. In that case your performance and memory usage are going to be highly unpredictable (after all, initializing a single thing in a page would materialize that whole page).
OK, but vectors. They double in size, right? Well, the default allocator for vectors will actually zero-initialize the new elements. You could write a non-initializing allocator and use it for your vectors - and this is in line with "you have to say it explicitly to get dangerous behavior".
You are assuming that I am working with small data structures, don't use arrays of data, don't have large amounts of POD members, ... .
> That heap chunk descriptor before your block has to be written, triggering a page fault.
So you allocate one out of hundreds of pages? The cost is significantly less than the alternative.
> In that case your performance and memory usage are going to be highly unpredictable (after all, initializing a single thing in a page would materialize that whole page).
As opposed to initializing thousands of pages you will never use at once? Or allocating single pages when they are needed?
> Well, the default allocator for vectors will actually zero-initialize the new elements.
I reliably get garbage data after the first reserve/shrink_to_fit calls. Not sure why the first one returns all zero, I wouldn't rely on it.
Sounds like a great set of use cases for explicit syntax to opt out of automatic initialization.
The problem with your assumption is that you're just arguing that it's ok for code to be needlessly buggy if you believe the odds this bug is triggered are low. OP points out a known failure mode and explains how a feature eliminates it. You intentionally ignore it for no reason.
This assumption is baffling when, in the exact same thread, you see people whining about C++ for allowing memory-related bugs to exist.
You failed to read what I wrote. I referred to why clients would choose to not initialize early to avoid scenarios such as Linux over committing, not that Linux had a bug.
Ideally, what you want is what Rust and many modern languages do: programs which don't explain what they wanted don't compile, so, when you forget to initialize that won't compile. A Rust programmer can write "Don't initialize this 1024 byte buffer" and get the same (absence of) code but it's a hell of a mouthful - so they won't do it by mistake.
The next best option, which is what C++26 will ship, is what they called "Erroneous Behaviour". Under EB it's defined as an error not to initialize something you use, but it is also defined what happens, so you can't have awful UB problems; typically it's something like the vendor specifies which bit pattern is written to an "uninitialized" object, and that's the pattern you will observe.
Why not zero? Unfortunately zero is too often a "magic" value in C and C++. It's the Unix root user, it's often an invalid or reserved state for things. So while zero may be faster in some cases, it's usually a bad choice and should be avoided.
I think you're confusing things. You're arguing about static code analysis being able to identify uninitialized variable reads. All major C++ compilers already provide support for flags such as -Wuninitialized.
(Safe) Rust does guarantee to identify uninitialised variable reads, but I believe the point is that you can get the optimisation of not forcing early initialisation in Rust, you just have to be explicit that that's what you want (you use the MaybeUninit type); you're forced to be clear that that's what you meant, not just by forgetting parens.
> If zero initialization were the default and you had to opt-in with [[uninitialized]] for each declaration it’d be a lot safer.
I support that, too. Just seems harder than getting a flag into Clang or GCC.
You don't care because your job is not to ensure that a new release of C++ doesn't break production code. You gaze at your navel and pretend that's the universe everyone is bound to. But there are others using C++, and using it in production software. Some of them care, and your subjective opinions don't have an impact in everyone else's requirements.
> I only have to work with Clang, personally.
Read Clang's manual and check what compiler flags you need to flip to get that behavior. It's already there.
Come on. That's nothing compared to the horrors that lie in manual memory management. Like I've never worked with a C++ based application that doesn't have crashes lurking all around, so bad that even a core dump leaves you clueless as to what's happening. Couple OOP involving hundreds of classes and 50-level-deep calls with hundreds of threads and you're hating your life when trying to find the cause of yet another crash.
https://leanpub.com/cppinitbook
(I don't know whether it's good or not, I just find it fascinating that it exists)
Of course we should provide a mechanism to allow large arrays to remain uninitialized, but this should be an explicit choice, rather than the default behaviour.
However, will it happen? It's arguably the easiest thing C++ could do to make software safer, but there appears to be no interest in the committee to do anything with safety other than talk about it.
The question is what to do about it - balancing the cost of change to code and to engineers who learned it.
> but there appears to be no interest in the committee to do anything with safety other than talk about it.
There is plenty of interest in improving C++ safety. It’s a regular topic of discussion.
Part of that discussion is how it will help actual code bases that exist.
Should the committee do some breaking changes to make HN commenters happier, who don’t even use the language?
https://open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2754r0... provides what appears to be the answer to this question: "No tools will be able to detect existing logical errors since they will become indistinguishable from intentional zero initialization. The declarations int i; and int i = 0; would have precisely the same meaning." ...yes, they would. _That's the point_. The paper has it exactly the wrong way around: currently tools cannot distinguish between logical error and intentional deferred initialization, but having explicit syntax for the latter would make the intention clear 100% of the time. Leaving a landmine in the language just because it gives you more warnings is madness. The warning wouldn't be needed to begin with, if there were no landmine.
I'm not sure what you mean with "who don't even use the language". Are you implying that only people that program professionally in C++ have any stake in reliable software?
Lists of the “good parts” of C++ over C usually include RAII. But if we imagine starting with C and adding C++ features to see when complexity explodes, I think the worst offender is the constructor/destructor.
They require the language to perfectly track the lifetime of every member of every structure. If you resize a vector, every entry must call a constructor. If exceptions are possible, the compiler must insert little cleanup calls into all possible code paths.
Want to make a copy of something? Who is responsible for calling the constructor/destructor? Want to make a struct? What if one member requires construction? How do you handle a union?
The result is micromanaging and turning most operations into O(n) init/cleanup calls.
The modern C approach avoids all of this and allows you to manage pieces of memory - rather than values. Zero initialize or leave uninitialized.
So what do we lose? Well classes own resources. If you have a vector<MyObject> and MyObject has a member vector<Items> then we should be able to cleanup without looking inside each member of each element.
I think we should separate resource allocation from use. Allocators are the things that care about cleanup, move, etc. This should be the exception - rather than the default way to think about structs.
What do you mean? The compiler will do it for you.
> This should be the exception - rather than the default way to think about structs.
the way that RAII in C++ recursively constructs and destroys arbitrary object graphs is extremely powerful. It is something that very few other languages have (Rust, any other?). It should definitely be the default.
> I think we should separate resource allocation from use. Allocators are the things that care about cleanup, move, etc.
I'm not sure what you mean by use. If you mean we should separate allocation from construction, I agree! But then so does C++. They are tied by default, but it is easy to separate them if you need it.
Related: Initialization in C++ is Seriously Bonkers (2019) https://news.ycombinator.com/item?id=18832311
""" Default member initializers define a default value at the point of declaration. If there is a member that cannot be defined in such a way, it suggests that there may be no legal mechanism by which a default constructor can be defined. """
Your example of having a field called `a` that is initialized to 0 is perfectly valid C++ as well but it's not the same as an explicitly declared default constructor.
C does not have member functions, let alone special member functions such as constructors. It's understandable that someone with a C background who never had any experience using a language besides C would struggle with this sort of info.
C++ improved upon C's developer experience by introducing the concept of special member functions. These are functions which the compiler conveniently generates for you when you write a simple class. This covers constructors (copy constructors and move constructors too). This is extremely convenient and eliminates the need for a ton of boilerplate code.
C++ is also smart enough to know when not to write something that might surprise you. Thus, if you add anything to a basic class that would violate the assumptions behind the default implementations of any of these special member functions, C++ simply backs off and doesn't define them.
Now, just because you prevented C++ from automatically defining your constructors, that does not mean you don't want them without having to add your boilerplate code. Thus, C++ allows developers to define these special member functions using default implementations. That's what the default keyword is used for.
Now, to me this sort of complaining just sounds like nitpicking. The whole purpose of special member functions and default implementations is to help developers avoid writing boilerplate code to have basic implementations of member functions you probably need anyway. For basic, predictable cases, C++ steps in and helps you out. If you prevent C++ from stepping in, it won't. Is this hard to understand?
More baffling, you do not have to deal with these scenarios if you just declare and define the special member functions you actually want. This was exactly how this feature was designed to work. Is this too hard to follow or understand?
I think the problem with C++ is that some people who are clearly talking out of ignorance feel the need to fabricate arguments about problems you will experience if you a) don't know what you are doing at all and aren't even interested in learning, b) you want to go way out of your way to nitpick about a tool you don't even use. Here we are, complaining about a keyword. If we go through the comments, most of the people doing the bulk of the whining don't even know what it means or how it's used. They seem to be invested in complaining about things they never learned about. Wild.
It is.
No, it is bonkers; stick to your consistent point, please.
These two should have exactly the same effect:
The only difference between them should be analogous to the difference between an inline and a non-inline function. For instance, it might be that the latter is slower than the former, because the compiler doesn't know from the class declaration that the default constructor is actually not user-defined but defaulted. How it would work is that a non-inline definition is emitted, which dutifully performs the initialization, and that definition is actually called.
That's what non-bonkers might look like, in any case.
I.e. both examples are rewritten by the compiler into
where __default_init is a fictitious placeholder for the implementation's code-generation strategy for doing that default initialization. It would behave the same way, other than being inlined in the one case and not in the other. Another way that it could be non-bonkers is if default were simply not allowed outside of the class declaration.
Something that has no hope of working right and is easily detectable by syntax alone should be diagnosed. If default only works right when it is present at class declaration time, then ban it elsewhere.
I think we can do significantly better than C++ as a systems language. We just haven’t landed on a language that really nails the design.
All well and good if it is something you do not have to modify/maintain on a regular basis. But, if you do, then the ROI on replacing it might be high, depending on how much pain it is to keep maintaining it.
We have an old web app written in asp.net web forms. It mostly works. But we have to maintain it and add functionality to it. And that is where the pain is. We've been doing it for a few years but the amount of pain it is to work on it is quite high. So we are slowly replacing it. One page at a time.
It’s shameful that there’s no good successor to C++ outside of C# and Java (and those really aren’t successors). Carbon was the closest we came and Google seems to have preemptively dropped it.
Look, no one is more excited than me for this, but this is reaching Star Citizen levels of delays.
A wonderful exploration of an underexplored topic--I've pre-ordered the hard copy and have been following along with the e-book in the interim.
https://youtu.be/7DTlWPgX6zs?si=-jNEIYQf1_uUioD-
But I'm going to go out on a limb here and guess you don't do that. You actually do allow the C++ compiler to make assumptions that are not explicitly in your code, like reorder instructions, hoist invariants, eliminate redundant loads and stores, vectorize loops, inline functions, etc...
All of these things I listed are based on the compiler not doing strictly what you specified but rather reinterpreting the source code in service of speed... but when it comes to the compiler reinterpreting the source code in service of safety.... oh no... that's not allowed, those are training wheels that real programmers don't want...
Here's the deal... if you want uninitialized variables, then explicitly have a way to declare a variable to be uninitialized, like:
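C++26 actually lands close to this with the [[indeterminate]] attribute (a sketch; older compilers will ignore the unknown attribute, possibly with a warning):

```cpp
// value-initialization: always zero, the safe default the
// parent comment argues for
int zeroed() {
    int x{};
    return x;
}

// explicit opt-in to garbage: C++26 spells this with an
// attribute rather than a keyword
void scratch_space() {
    int x [[indeterminate]];  // "I want this uninitialized"
    x = 7;                    // must write before any read
    (void)x;
}
```

The point either way: the dangerous choice becomes something you can grep for, instead of a missing pair of braces.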
This way, for the very, very rare cases where it makes a performance difference, you can explicitly specify that you want this behavior... and for the overwhelming majority of cases where it makes no performance impact, we get safe, well-specified behavior.

The whole idea of optimizations is producing code that's equivalent to the naive version you wrote. There is no inconsistency here.
If they added an explicit uninitialized value representation to the language, I bet it would look something like this:
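Something like the following, perhaps, with the hypothetical marker shown in a comment so the sketch still compiles (this is a guess at the intent, not real syntax):

```cpp
#include <cstddef>

// hypothetical: `= uninitialized` would be an explicit opt-in
std::size_t find_index(const int* arr, std::size_t len, int target) {
    std::size_t idx /* = uninitialized */;
    for (std::size_t i = 0; i < len; ++i) {
        if (arr[i] == target) { idx = i; break; }
    }
    return idx;  // indeterminate (UB to use) if target is absent
}
```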
And if you used it with a default value of 0, you're going to end up operating on the 0th item in the array. That's probably a bug and it may even be a crasher if the array has length 0 and you end up corrupting something important, but the odds of it being disastrous are much lower.
Would it really be that unreasonable to have initialisation be opt-out instead of opt-in? You'd still have just as much control, but it would be harder to shoot yourself in the foot by mistake. Instead it would be slightly more easy to get programs that can be optimised.
I'm more annoyed that C++ has some way to default-zero-init but it's so confusing that you can accidentally do it wrong. There should be only one very clear way to do this, like you have to put "= 0" if you want an int member to init to 0. If you're still concerned about safety, enable warnings for uninitialized members.
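For what it's worth, default member initializers already give that "one clear way" within a class, at the cost of having to remember to write them (illustrative names):

```cpp
struct Safe {
    int count = 0;     // NSDMI: 0 however Safe is constructed
    double ratio = 0.0;
};

struct Risky {
    int count;         // `Risky r;` leaves count indeterminate
    double ratio;
};
```

The complaint stands, though: nothing forces you to write the `= 0`, and warnings for the Risky case are opt-in.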
The almost is unfortunate.
A programmer wants the compiler to accept code that looks like a stupid mistake when he knows it's not.
But he also wants to have the compiler make sure he isn't making stupid mistakes by accident.
How can it do both? They're at odds.
By doing what’s right.
https://en.wikipedia.org/wiki/Principle_of_least_astonishmen...
Many of these can't be blamed on C holdover. For example Vector.at(i) versus Vector[i] – most people default to the latter and don't think twice about the safety implications. The irony is that most of the time when people use std::vector, performance is irrelevant and they'd be much better off with a safe default.
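The difference in a nutshell (a sketch, not the commenter's code):

```cpp
#include <vector>
#include <stdexcept>

// bounds-checked: out-of-range access throws std::out_of_range
int checked(const std::vector<int>& v, std::size_t i) {
    return v.at(i);
}

// unchecked: out-of-range access is undefined behavior
int unchecked(const std::vector<int>& v, std::size_t i) {
    return v[i];
}
```

Both compile without complaint, and the shorter spelling is the unsafe one, which is the irony being pointed out.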
Alas, we made our bed and now we have to lie in it.
So, do you?
C++ is fun!
[0]https://en.cppreference.com/w/cpp/language/default_initializ...
> "There's a great language somewhere deep inside of C++"
or something to that effect.
And that's not just said out of unfamiliarity. I'm a professional C++ developer, and I often find I'm more familiar with C++'s more arcane semantics than many of my professional C++ developer co-workers.
Most programmers aren't that good and you're mostly running other people's code. Bad defaults that lead to exploitable security bugs is...bad defaults. If you want something to be uninitialized because you know it then you should be forced to scream it at the compiler.
idk, seems like years of academic effort and research wasted if we do it the way C++ does it
> The results show that, in the cases we evaluated, the performance gains from exploiting UB are minimal. Furthermore, in the cases where performance regresses, it can often be recovered by either small to moderate changes to the compiler or by using link-time optimizations.
If you want indiscriminate initialization, a compiler flag is the way, not forcing it in the source code.
It's a systems language. Systems are not sane. They are dominated by nuance. In any case the language gives you a choice in what you pay for. It's nice to be able to allocate something like a copy or network buffer without having to pay for initialization that I don't need.
“The systems programmer has seen the terrors of the world and understood the intrinsic horror of existence.”
https://www.usenix.org/system/files/1311_05-08_mickens.pdf
you don't know how much C++ code is being written for 100-200 MHz CPUs every day
https://github.com/search?q=esp8266+language%3AC%2B%2B&type=...
I have a codebase that is right now C++23 and soon I hope C++26 targeting from Teensy 3.2 (72 MHz) to ESP32 (240 MHz). Let me tell you, I'm fighting for microseconds every time I work with this.
The people who care about clock ticks should be the ones inconvenienced, not the ordinary joes maintaining a FOSS package that is ultimately struck by a 0-day. It still takes a swiss-cheese lineup to get there, for sure, but one of the holes in the cheese is C++'s default behavior, trying to optimize like it's 1994.
I mean that's pretty much the main reason for using c++ isn't it? Video games, real-time media processing, CPU ai inference, network middleware, embedded, desktop apps where you don't want startup time to take more than a few milliseconds...
In another era we would have just called this optimal. https://x.com/ID_AA_Carmack/status/1922100771392520710
C++ sucks, it's too hard to use, the compiler should generate stores all over the place to preemptively initialize everything!
Software is too bloated, if we optimized more we could use old hardware!
Usually what happens is the language requires you to initialize the variable before it's read for the first time, but this doesn't have to be at the point of declaration. Like in Java you can declare a variable, do other stuff, and then initialize it later... so long as you initialize it before reading from it.
Note that in C++, reading from a variable before writing to it is undefined behavior, so it's not particularly clear what benefit you're getting from this.
The compiler cannot always tell if a variable will be written to before it is accessed. If you have a 100 KB network buffer and you call int read = opaque_read(buffer); the compiler cannot tell how much, if anything, was written to the buffer or how read relates to its size, so it would be forced to initialize every byte to zero. A programmer can read the API docs, see that only the first read bytes are valid, and use the buffer without ever touching anything uninitialized. Now add in that you can pass mutable pointers and references to nearly anything in C++, and the compiler has a much harder time telling whether it has to initialize arguments passed to a function or whether the function does the initialization for it.
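A sketch of that scenario, with opaque_read as a stand-in stub for the real I/O call:

```cpp
#include <cstddef>
#include <cstring>

// stand-in for an opaque I/O routine: fills some prefix of buf,
// returns how many bytes are now valid
std::size_t opaque_read(char* buf, std::size_t cap) {
    const char payload[] = "hello";
    std::size_t n = sizeof(payload) - 1;
    if (n > cap) n = cap;
    std::memcpy(buf, payload, n);
    return n;
}

std::size_t handle_packet() {
    char buffer[100 * 1024];  // deliberately not zeroed
    std::size_t n = opaque_read(buffer, sizeof buffer);
    // only buffer[0..n) is ever read; mandatory zeroing would
    // touch all 100 KB for nothing
    return n;
}
```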
They are finally fixing that in C++26, where it's no longer undefined behavior but "erroneous behavior": the variable gets some concrete value, the read may be diagnosed, and compilers aren't allowed to break your code over it anymore.
You gain the benefit that the compiler can assume the code path in question is impossible to reach, even if there's an obvious way to reach it. To my understanding, this can theoretically back-propagate all the way to `main()` and make the entire program a no-op.
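A minimal illustration of that reasoning:

```cpp
int pick(bool have_value) {
    int x;                   // uninitialized
    if (have_value) x = 42;
    return x;                // UB when have_value is false...
}
// ...so the optimizer may assume have_value is always true and
// fold the whole function to `return 42;`, deleting the branch
```

Calling pick(false) is undefined behavior, which is precisely the license the compiler uses to reason backwards from the read.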