Cybernetic natural selection should take care of this over time, but the rate of random mutations in software systems is much higher than in biological systems. Would be interested in modeling the equilibrium dynamics of this
That would happen in a free market, but software is intentionally not a free market thanks to copyright/patent laws. In software, lock-in effects dominate. People will continue using bad software because it's necessary to interoperate with bad software other people are using. There's a coordination problem where everybody would be better off if they collectively switched to better software, but any individual is worse off if they're the first to switch.
Oh, I agree that's the correct answer. I just don't see the article actually ending up with that answer. I see it waffling. Basically, the article ends up saying that, well, we told you about all this dodgy stuff, but what he's doing is working.
The end state of genAI could just as well be a few billionaires running their enterprises and everybody else being unemployed or working in a factory. Robots are not there yet (far from it), and someone needs to build and maintain all of this, as well as produce food for everyone. High unemployment could drive salaries down and make lots of things unavailable to the common people while making humans cheaper than automation for boring manual work.
That's an extreme scenario, but today's politicians are not very keen on redistributing wealth or preventing excessive accumulation of economic power from exceeding the power of the state itself. I see nothing preventing that scenario from happening.
> High unemployment could drive salaries down and make lots of things unavailable to the common people while making humans cheaper than automation for boring manual work.
‘I wanted a machine to do the dishes for me so I could concentrate on my art, and what I got was a machine to do the art so now I’m the one doing the dishes’
But you already have a machine that can do the dishes. Like doing the laundry, people forget the machines they already have doing 90% of the work. Soon enough artists will forget that computers can do 90% of their work too.
The best thing the C++ WG could do is to spend an entire release cycle working on modules and packaging.
It's nice to have new features, but what is really killing C++ is Cargo. I don't think a new generation of developers is going to be inspired to learn a language where you can't simply `cargo add` whatever you need and instead have to go through hell to use a dependency.
To me, the most important feature of Cargo isn't even the dependency management, but that I never need to tell it which files to compile or where to find them. The fact that it knows to look for lib.rs or main.rs in src and then recursively find all my other modules, without me needing to specify targets or anything like that, is a killer feature on its own IMO.

Over the past couple of years I've tried to clone and build a number of dotnet packages for various things, but for an ecosystem that's supposedly cross-platform, almost none of them seem to just work by default when I run `dotnet build`; they instead require at least some fixes in the various project files. I don't think I've ever had an issue with a Rust project, and it's hard not to feel like a big part of that is because there's not really much configuration to be done. The list of dependencies is just about the only thing in there that affects the default build; any configuration beyond that and the basic metadata (the name, the repo link, the license, etc.) almost always ends up being specifically for alternate builds (like extra options for release builds, alternate features that can be compiled in, etc.).
> The fact that it knows to look for lib.rs or main.rs in src and then recursively find all my other modules without me needing to specify targets or anything like that is a killer feature on its own IMO.
In the interest of pedantry, locating source files relative to the crate root is a language-level Rust feature, not something specific to Cargo. You can pass any single Rust source file directly to rustc (bypassing Cargo altogether) and it will treat it as a crate root and locate additional files as needed based on the normal lookup rules.
Interesting, I didn't realize this! I know that a "crate" is specifically the unit of compilation for rustc, but I assumed there was some magic in cargo that glued the modules together into a single AST rather than it being in rustc itself.
That being said, I'd argue that the fact that this happens so transparently that people don't really need to know this to use Cargo correctly is somewhat the point I was making. Compared to something like cmake, the amount of effort to use it is at least an order of magnitude lower.
The module system in Rust is incredibly confusing for a beginner.
A standard module in Rust is basically a Rust source file named after the module (<module_name>.rs), or a directory <module_name> with a mod.rs file inside (<module_name>/mod.rs). However, that alone doesn't make it a module: the root source file (lib.rs or main.rs) must declare that <module_name>.rs is actually a module via mod <module_name>; in its preamble.
Thus it works "backwards" compared to a C-style #include. mod doesn't include a module's text into the current file; it attaches the module to the crate's module tree, which means you will never need a second mod <module_name>; declaration again.
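To make the declaration side concrete, here's a minimal sketch (the `utils` module and `double` function are hypothetical names for illustration). On disk the module body would live in src/utils.rs and the crate root would contain only the `mod utils;` line; it's written inline here so the example is self-contained:

```rust
// In a real project this body would be the contents of src/utils.rs,
// pulled in by a `mod utils;` declaration in src/main.rs.
mod utils {
    pub fn double(x: i32) -> i32 {
        x * 2
    }
}

fn main() {
    // Items are reached through the module path, exactly as with a
    // file-based module -- the declaration site doesn't change the usage.
    println!("{}", utils::double(21)); // prints 42
}
```

Note that nothing "includes" utils into main.rs textually; the declaration just tells the compiler the module exists and where to find it.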
I wouldn't exactly say that the way C/C++ headers work is particularly intuitive for beginners either; having spent several years as a TA in a course in college where students were writing C for the first time (after having used mostly Java beforehand), I've seen plenty of times where students have trouble figuring out how to properly split things between headers and "regular" source files, have trouble figuring out how to properly specify which sources to link together in their makefiles, and have to learn how to either avoid accidentally including the same thing twice transitively or use one of the various workarounds that mitigates it.
You might argue that all of these are solvable problems, but I'd argue that learning how to properly declare a module in Rust is overall a lot simpler than learning how to deal with all of the analogous problems in C/C++. For people who have never used C/C++ or Rust before, you could just as easily say that they work backwards in comparison to Rust, and in a vacuum, I think the way Rust does it would be far more intuitive to someone who had familiarity with neither and was presented with both at the same time.
As a thought experiment: take a group of people who know neither Rust nor C/C++, split them in half, teach one half Rust first and C second, and do the reverse for the other half. How many people in each group would you expect to consider the configuration of Rust builds more confusing than the configuration of C builds? I'd be willing to bet that far more people in the C-first group would find Rust builds more intuitive than people in the other group would find C builds more intuitive, and that this would be strong evidence that Rust's build system is overall much easier to understand.
> I don't think I've ever had an issue with a Rust project, and it's hard not to feel like a big part of that is because there's not really much configuration to be done.
For most crates, yes. But you might be surprised how many crates have a build.rs that is doing more complex stuff under the hood (generating code, setting environment variables, calling a C compiler, make or some other build system, etc). It just also almost always works flawlessly (and the script itself has a standardised name), so you don't notice most of the time.
True, but if anything, a build.rs is a lot easier for me to read and understand (or even modify) if needed because I already know Rust. With something like cmake, the build configuration is an entirely separate language the one I'm actually working in, and I haven't seen a project that doesn't have at least some amount of custom configuration in it. Starting up a cargo project literally doesn't require putting anything in the Cargo.toml that doesn't exist after you run `cargo new`.
Oh sure, build.rs is (typically) a great experience. My favourite example is Skia which is notoriously difficult to build, but relatively easy to build with the Rust bindings. My point was just that this isn't always because there's nothing complex going on, but because it still works well even though there sometimes are complex things going on!
There's a subtle point here that I hadn't fully considered before: even three languages with somewhat different ideas about what the language itself should be like are mostly in agreement about how builds should work (or more specifically, how they shouldn't work). There's somewhat of a consensus in terms of what people expect from a language nowadays in terms of tooling, and as the parent comment I originally responded to was saying, I have to wonder whether any amount of changes in the C++ language itself would be enough to matter without also reaching parity in that regard.
But you are specifying source files, although indirectly, aren't you? That's what all those `mod blah` with a corresponding `blah.rs` file present in the correct location are.
Yes and no. You're right that the mod declarations are necessary, but I'd argue that this is actually more direct (it's happening in the code, not in some external build file), and that you still aren't actually choosing where the file is located as much as stating that it's present, since you don't have the ability to look for `mod foo` in an arbitrary location, but only the places that the tooling already expects to find it.
I suffered with Java through Ant, Maven and Gradle (the oldest is the best). After reading about GNU Autotools, I wondered why the C/C++ folks still suffer. Right at that time Meson appeared and I skipped the suffering:
* No XML
* Simple to read and understand
* Simple to manage dependencies
* Simple to use options
Meson merges the crappy state of C/C++ tooling with something like Cargo in the worst way possible: by forcing you to handle the complexity of both. Nothing about Meson is simple, unless you're using it in Rust, in which case you're better off with Cargo.
In C++ you don't get lockfiles, you don't get automatic dependency install, you don't get local dependencies, there's no package registry, no version support, no dependency-wide feature flags (this is an incoherent mess in Meson), no notion of workspaces, etc.
Compared to Cargo, Meson isn't even in the same galaxy. And even compared to CMake, Meson is yet another incompatible incremental "improvement" that offers basically nothing other than cute syntax (which in an era when AI writes all of your build system anyway, doesn't even matter). I'd much rather just pick CMake and move on.
Build system generators (like Meson, autotools, CMake or any other one) can't solve programming language module and packaging problems, even in principle. So, it's not clear what your argument is here.
> I’m still surprised how people ignore Meson. Please test it :)
I did just that a few years ago and found it rather inconvenient and inflexible, so I went back to ignoring it. But YMMV I suppose.
Meson is indeed nice, but has very poor support for GPU compilation compared to CMake. I've had a lot of success adopting the practices described in this talk: https://www.youtube.com/watch?v=K5Kg8TOTKjU. I thought I knew a lot of CMake, but file sets definitely make things a lot simpler.
Agreed, arcane CMake configs and/or bash build scripts are genuinely off-putting. Also, the C++ "equivalents" of Cargo, which AFAIK are Conan and vcpkg, are not the default and required much more configuration compared to Cargo. At least that was my experience a few years ago.
It's fundamentally different; Rust entirely rejects the notion of a stable ABI, and simply builds everything from source.
C and C++ are usually stuck in the antiquated thinking that you should build a module, package it into some libraries, install/export the library binaries and associated assets, then import those in other projects. That makes everything slow, inefficient, and wildly dangerous.
There are of course good ways of building C++, but those are the exception rather than the standard.
"Stable ABI" is a joke in C++ because you can't keep ABI and change the implementation of a templated function, which blocks improvements to the standard library.
In C, ABI = API because the declaration of a function contains the name and arguments, which is all the info needed to use it. You can swap out the definition without affecting callers.
That's why Rust allows a stable C-style ABI; the definition of a function declared in C doesn't have to be in C!
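This is why Rust can expose a C ABI today: the signature is the whole contract. A minimal sketch (`add` is a hypothetical example function, not from any real library):

```rust
// Exposing a stable C-style ABI from Rust. A C caller only needs the
// declaration `int32_t add(int32_t, int32_t);` -- the Rust definition
// below can change freely (be optimized, rewritten, etc.) without
// recompiling callers, as long as the signature stays the same.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // Callable from Rust too, of course.
    println!("{}", add(2, 3)); // prints 5
}
```

Compiled as a cdylib, the exported symbol `add` behaves like any C function: callers bind to the declaration, never the definition.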
But in a C++-style templated function, the caller needs access to the definition to do template substitution. If you change the definition, you need to recompile calling code i.e. ABI breakage.
If you don't recompile calling code and link with other libraries that are using the new definition, you'll violate the one-definition rule (ODR).
This is bad because duplicate template functions are pruned at link-time for size reasons. So it's a mystery as to what definition you'll get. Your code will break in mysterious ways.
This means the C++ committee can never change the implementation of a standardized templated class or function. The only time they did was a minor optimization to std::string in 2011 and it was such a catastrophe they never did it again.
That is why Rust will not support stable ABIs for any of its features relying on generic types. It is impossible to keep the ABI stable and optimize an implementation.
It's not true that Rust rejects "the notion of a stable ABI". Rust rejects the C++ solution of freeze everything and hope because it's a disaster, it's less stable than some customers hoped and yet it's frozen in practice so it disappoints others. Rust says an ABI should be a promise by a developer, the way its existing C ABI is, that you can explicitly make or not make.
Rust is interested in having a properly thought out ABI that's nicer than the C ABI which it supports today. It'd be nice to have say, ABI for slices for example. But "freeze everything and hope" isn't that, it means every user of your language into the unforeseeable future has to pay for every mistake made by the language designers, and that's already a sizeable price for C++ to pay, "ABI: Now or never" spells some of that out and we don't want to join them.
> It'd be nice to have say, ABI for slices for example.
The de-facto ABI for slices involves passing/storing pointer and length separately and rebuilding the slice locally. It's hard to do better than that other than by somehow standardizing a "slice" binary representation across C and C-like languages. And then you'll still have to deal with existing legacy code that doesn't agree with that strict representation.
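A sketch of that de-facto convention, assuming a hypothetical `sum` function exported across an FFI boundary:

```rust
// The de-facto slice ABI: pass pointer and length as two separate
// C-compatible arguments, then rebuild the slice locally on the callee side.
#[no_mangle]
pub unsafe extern "C" fn sum(ptr: *const u32, len: usize) -> u64 {
    // Caller promises `ptr` points to `len` valid, initialized u32s.
    let slice = std::slice::from_raw_parts(ptr, len);
    slice.iter().map(|&x| x as u64).sum()
}

fn main() {
    let data = [1u32, 2, 3, 4];
    // A C caller would do the same thing: split the slice into its parts.
    let total = unsafe { sum(data.as_ptr(), data.len()) };
    println!("{}", total); // prints 10
}
```

A standardized slice ABI would essentially bless one binary layout for that (pointer, length) pair so it could be passed as a single value.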
Rust is just a bit less than 11 years old; C++ was 13 years old when it screwed up the std::string ABI, so I think Rust has a few years yet to do less badly.
Obviously it's easier to provide a stable ABI for, say, &'static [T] (a reference which lives forever to an immutable slice of T) or Option<NonZeroU32> (either a positive 32-bit unsigned integer, or nothing) than for String (amortized growable UTF-8 text) or File (an open file somewhere on the filesystem, whatever that means), and it will never be practical to provide some sort of "stable ABI" for arbitrary things like IntoIterator -- but that's exactly why the C++ choice was a bad idea. In practice, of course, the internal guts of things in C++ are not frozen; that would be a nightmare for maintenance teams. But in theory there should be no observable effect from such changes, and that discrepancy leads to endless bugs where a user found some obscure way to depend on what you'd hidden inside some implementation detail. The letter of the ISO document says your change is fine, but the practice of C++ development says it is a breaking change -- and the resulting engineering overhead at C++ vendors is made even worse by all the UB in real C++ software.
This is the real reason libc++ still shipped Quicksort as its unstable sort when Biden was President, many years after this was in theory prohibited by the ISO standard.† Fixing the sort breaks people's code, and they'd rather it was technically faulty and practically slower than have their crap code stop working.
† Tony's Quicksort algorithm on its own is worse than O(n log n) for some inputs, you should use an introspective comparison sort aka introsort here, those existed almost 30 years ago but C++ only began to require them in 2011.
> C and C++ are usually stuck in that antiquated thinking that you should build a module, package it into some libraries, install/export the library binaries and associated assets, then import those in other projects. That makes everything slow, inefficient, and widely dangerous.
It seems to me the "convenient" options are the dangerous ones.
The traditional method is for third party code to have a stable API. Newer versions add functions or fix bugs but existing functions continue to work as before. API mistakes get deprecated and alternatives offered but newly-deprecated functions remain available for 10+ years. With the result that you can link all applications against any sufficiently recent version of the library, e.g. the latest stable release, which can then be installed via the system package manager and have a manageable maintenance burden because only one version needs to be maintained.
Language package managers have a tendency to facilitate breaking changes. You "don't have to worry" about removing functions without deprecating them because anyone can just pull in the older version of the code. Except the older version is no longer maintained.
Then you're using a version of the code from a few years ago because you didn't need any of the newer features and it hadn't had any problems, until it picks up a CVE. Suddenly you have vulnerable code running in production but fixing it isn't just a matter of "apt upgrade" because no one else is going to patch the version only you were using, and the current version has several breaking changes so you can't switch to it until you integrate them into your code.
This is all wishful thinking disconnected from practicalities.
First you confuse API and ABI.
Second there is no practical difference between first and third-party for any sufficiently complex project.
Third you cannot have multiple versions of the same thing in the same program without very careful isolation and engineering. It's a bad idea and a recipe for ODR violations.
In any non-trivial project there will be complex dependency webs across different files and subprojects, and humans are notoriously bad at packaging pieces of code into sensible modules, libraries or packages with well-defined and maintained boundaries. Maintaining ABI compatibility, deprecating things while introducing replacements, etc., is a massive engineering effort and simply makes people much less likely to change the way things are done, even when they are broken or not ideal. That's an effort you'll make for a kernel (and only on specific boundaries) but not for the average program.
I'm not confusing API with ABI. If you don't have a stable ABI then you essentially forfeit the traditional method of having every program on the system use the same copy (and therefore version) of that library, which in turn encourages them to each use a different version and facilitates API instability by making the bad thing easier.
> Second there is no practical difference between first and third-party for any sufficiently complex project.
Even when you have a large project, making use of curl or sqlite or openssl does not imply that you would like to start maintaining a private fork.
There are also many projects that are not large enough to absorb the maintenance burden of all of their external dependencies.
> Third you cannot have multiple versions of the same thing in the same program without very careful isolation and engineering.
Which is all the more reason to encourage every program on the system to use the same copy by maintaining a stable ABI. What do you do after you've encouraged everyone to include their own copy of their dependencies and therefore not care if there are many other incompatible versions, and then two of your dependencies each require a different version of a third?
> In any non-trivial project there will be complex dependency webs across different files and subprojects, and humans are notoriously bad at packaging pieces of code into sensible modules, libraries or packages, with well-defined and maintained boundaries.
This feels like arguing that people are bad at writing documentation, so we should reduce their incentive to write it, instead of coming up with ways to make doing the good thing easier.
Build everything from source within a single unified workspace, cache whatever artifacts were already built with content-addressable storage so that you don't need to build them again.
You should also avoid libraries, as they reduce granularity and needlessly complexify the logic.
I'd also argue you shouldn't have any kind of declaration of dependencies and simply deduce them transparently based on what the code includes, with some logic to map header to implementation files.
>Build everything from source within a single unified workspace, cache whatever artifacts were already built with content-addressable storage so that you don't need to build them again.
Which tool do you use for content-addressable storage in your builds?
>You should also avoid libraries, as they reduce granularity and needlessly complexify the logic.
This isn't always feasible though.
What's the best practice when one cannot avoid a library?
You can use S3 or equivalent; a normal filesystem (networked or not) also works well.
You hash all the inputs that go into building foo.cpp, and then that gives you /objs/<hash>.o. If it exists, you use it; if not, you build it first. Then if any other .cpp file ever includes foo.hpp (directly or indirectly), you mark that it needs to link /objs/<hash>.o.
You expand the link requirements transitively, and you have a build system. 200 lines of code. Your code is self-describing and you never need to write any build logic again, and your build system is reliable, strictly builds only what it needs while sharing artifacts across the team, and never leads to ODR.
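The core of that scheme can be sketched in a few lines. This is a toy model, not a real build system: `cache_key` and the in-memory object store are hypothetical stand-ins (a real implementation would use a cryptographic hash over actual file contents and a shared artifact store like S3):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Hash everything that influences the compilation of one translation unit:
// the source text, the compiler flags, and every (transitively) included
// header. The result names the object file, e.g. /objs/<hash>.o.
fn cache_key(source: &str, flags: &str, included_headers: &[&str]) -> u64 {
    let mut h = DefaultHasher::new();
    source.hash(&mut h);
    flags.hash(&mut h);
    for hdr in included_headers {
        hdr.hash(&mut h);
    }
    h.finish()
}

fn main() {
    // Simulated content-addressable store: key -> built artifact.
    let mut objs: HashMap<u64, String> = HashMap::new();

    let key = cache_key("int foo() { return 1; }", "-O2", &["foo.hpp"]);
    let artifact = objs.entry(key).or_insert_with(|| {
        // Cache miss: "compile" and store the result once.
        String::from("foo.o")
    });
    println!("{}", artifact);

    // Identical inputs produce the identical key: a cache hit, no rebuild.
    assert_eq!(key, cache_key("int foo() { return 1; }", "-O2", &["foo.hpp"]));
}
```

Any change to the source, the flags, or any included header changes the key, so stale artifacts can never be reused by accident; that's what makes the scheme both correct and shareable across a team.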
The problem is doing this requires a team to support it that is realistically as large as your average product team. I know Bazel is the solution here but as someone who has used C++, modified build systems and maintained CI for teams for years, I have never gotten it to work for anything more than a toy project.
It usually ends up somewhat non-generic, with project-specific decisions hardcoded rather than specified in a config file.
I usually make it so that it's fully integrated with wherever we store artifacts (for CAS), source (to download specific revisions as needed), remote running (which depending on the shop can be local, docker, ssh, kubernetes, ...), GDB, IDEs... All that stuff takes more work for a truly generic solution, and it's generally more valuable to have tight integration for the one workflow you actually use.
Since I also control the build image and toolchain (that I build from source) it also ends up specifically tied to that too.
In practice, I find that regardless of what generic tool you use like cmake or bazel, you end up layering your own build system and workflow scripts on top of those tools anyway. At some point I decided the complexity and overhead of building on top of bazel was more trouble than it was worth, while building it from scratch is actually quite easy and gives you all the control you could possibly need.
You'd be wrong. If the build system has full knowledge of how to build the whole thing, it can do a much better job. Caching the outputs of the build is trivial.
If you import some ready made binaries, you have no way to guarantee they are compatible with the rest of your build or contain the features you need. If anything needs updating and you actually bother to do it for correctness (most would just hope it's compatible) your only option is usually to rebuild the whole thing, even if your usage only needed one file.
"That makes everything slow, inefficient, and widely dangerous."
There's nothing faster and more efficient than building C programs. I'm also not sure what is dangerous about having libraries. C++ is quite different, though.
Of course there is. Raw machine code is the gold standard, and everything else is an attempt to achieve _something_ at the cost of performance, C included, and that's even when considering whole-program optimization and ignoring the overhead introduced by libraries. Other languages with better semantics frequently outperform C (slightly) because the compiler is able to assume more things about the data and instructions being manipulated, generating tighter optimizations.
I was talking about building code, not run-time. But regarding run-time, no other language outperforms C in practice. Your argument about "better semantics" has some grain of truth in it, but it does not apply to any existing language I know of -- at least not to Rust, which in practice is for the most part still slower than C.
Neither "ODR violations" nor IFNDR exist in C. Incompatibility across translation units can cause undefined behavior in C, but this can easily be avoided.
The ODR problem is much more benign in C. Undefined behavior at translation time (~ IFNDR) still exists in C but for C2y we have removed most of it already.
You can't fundamentally solve the issue of what happens if you call a function in another TU that takes a T but the caller and the callee have a different definition of T.
Whether you call that IFNDR or UB doesn't make much of a difference.
C++ mitigates that issue with its mangling (which checks the type name is the same), Rust goes the extra mile and puts a hash of the whole definition of the arguments in the symbol name.
C has the most unsafe solution (no mitigation at all).
C++'s ODR requires the different definitions to consist of the same tokens, and if not, the program is IFNDR. Name mangling catches some of it, but this becomes less relevant today with more things being generated via templates from headers.
In C, it is UB when the types are not compatible, which is more robust. In practice it is also easy to avoid, with the same solution as in C++: a single header which declares the object. But even if not, tooling can check consistency across TUs; it is just not required by the ISO standard (which Rust does not have, so the comparison makes no sense). In practice, with GCC, an LTO build detects inconsistencies.
Different parts of the build seeing inconsistent definitions of the same name is a clear consequence of building things piecemeal rather than as a single project -- which is precisely the problem I described higher up in this thread.
Things being built piecemeal also likely won't be using LTO (even if fat LTO allows this, no static library packages in a distro are built with it).
Not sure what you are trying to say. Inconsistent definitions are a consequence of being able to build things separately, which is a major feature of C and C++, although in C++ it does not work well anymore. The reason not to build things piecemeal is often that LTO is more expensive, but occasionally running it will catch violations. When libraries get split up, the interface is defined via headers and there is little risk of an inconsistency. So I really do not think there is a major problem in C.
In my experience, no one does build systems right; Cargo included.
The standard was initially meant to standardize existing practice. There is no good existing practice. Very large institutions depending heavily on C++ systematically fail to manage the build properly despite large amounts of third party licenses and dedicated build teams.
With AI, how you build and integrate together fragmented code bases is even more important, but someone has yet to design a real industry-wide solution.
Speedy convenience beats absolute correctness anyday. Humans are not immortal and have finite amount of time for life and work. If convenience didn't matter, we would all still be coding in assembly or toggling hardware switches.
C++ builds are extremely slow because they are not correct.
I'm doing a migration of a large codebase from local builds to remote execution and I constantly have bugs with mystery shared library dependencies implicitly pulled from the environment.
This is extremely tricky because if you run an executable without its shared library, you get "file not found" with no explanation. Even AI doesn't understand this error.
It's pretty simple and works reliably as specified.
I can only infer that your lack of familiarity was what made it take so long.
Rebuilding GCC with specs does take forever, and building GCC is in general quite painful, but you could also use patchelf to modify the binary after the fact (which is what a lot of build systems do).
Clang cannot bootstrap in the same way GCC can; you need GCC (or another clang) to build it. You can obviously build it twice to have it be built by itself (bear in mind some of the clang components already do this, because they have to be built by clang).
In general though, a clang install will still depend on libstdc++, libgcc, GCC crtbegin.o and binutils (at least on Linux), which is typically why it will refer to a specific GCC install even after being built.
There are of course ways to use clang without any GCC runtime, but that's more involved and non-standard (unless you're on Mac).
And there is also the libc dependency (and all sysroot aspects in general) and while that is usually considered completely separate from GCC, the filesystem location and how it is found is often tied to how GCC is configured.
I may be in the minority but I like that C++ has multiple package managers, as you can use whichever one best fits your use case, or none at all if your code is simple enough.
It's the same with compilers, there's not one single implementation which is the compiler, and the ecosystem of compilers makes things more interesting.
Multiple package managers is fine, what's needed is a common repository standard (or even any repository functionality at all). Look at how it works in Java land, where if you don't want to use Maven you can use Gradle or Bazel or what have you, or if you hate yourself you can use Ant+Ivy, but all of them share the same concept of what a dependency is and can use the same repositories.
Also, having one standard packaging format and registry doesn't preclude having alternatives for special use cases.
There should be a happy path for the majority of C++ use cases so that I can make a package, publish it and consume other people's packages. Anyone who wants to leave that happy path can do so freely at their own risk.
The important thing is to get one system blessed as The C++ Package Format by the standard to avoid xkcd 927 issues.
In the Linux world, and even on Haiku, there is a standard package dependency format, so dependencies aren't really a problem. Even macOS has Homebrew. Windows is the odd man out.
On the contrary, most Linux distributions use platform-specific global-only packaging formats for C++ libraries, and if anything I think that's holding back the development of a real, C++-native packaging/dependency manager.
That would actually be pretty cool. Though I think there might have been papers written on this a few years ago. Does anyone know of these or have any updates about them?
CPS[1] is where all the effort is currently going for a C++ packaging standard. CMake shipped it in 4.3 and Meson is working on it. The pkgconf maintainer said they have vague plans to support it at some point.
There's no current effort to standardize what a package registry is or how build frontends and backends communicate (a la PEP 517/518), though it's a constant topic of discussion.
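To give a flavor of CPS, a minimal package description might look roughly like this. This is a sketch based on my reading of the draft spec; the library name, version numbers, and paths are all made up:

```json
{
  "name": "mylib",
  "cps_version": "0.13.0",
  "version": "1.2.0",
  "default_components": ["mylib"],
  "components": {
    "mylib": {
      "type": "archive",
      "location": "@prefix@/lib/libmylib.a",
      "includes": ["@prefix@/include"]
    }
  }
}
```

The idea is that any build system can emit or consume this file, without caring what tool built the library.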
100% agree this is something that would have immediate, high value impact.
The fact that building C++ is this opaque process defined in 15 different ways via make, autoconf, automake, cmake, ninja, with 50 other toolchains is something that continues to create a barrier to entry.
I still remember the horrors of trying to compile C++ in 2004 on Windows without anything besides Borland...
Standardizing the build system and toolchain needs to happen. It's a hard problem that needs to be solved.
> Standardizing the build system and toolchain needs to happen. It's a hard problem that needs to be solved.
I agree, and I also think it’s never happening. It requires agreeing on so many things that are subjective and likely change behaviour. C++ couldn’t even manage to get module names to be required to match the file name. That was for a new feature that would have allowed us to figure out exports without actually opening the file…
No, because most major compilers don't support header units, much less standard library header units from C++26.
What'll spur adoption is CMake adopting Clang's two-step compilation model, which improves performance.
At that point every project will migrate overnight for the huge build-time win, since it avoids redundant preprocessing. Right now, the loss of parallelism hurts adoption too much.
The idea is great, the execution is terrible. In JS, modules were instantly popular because they were easy to use, added a lot of benefit, and support in browsers and the ecosystem was fairly good after a couple of years. In C++, support is still bad, 6 years after they were introduced.
The idea is great in the same way the idea of a perpetual motion machine is great: I'd love to have a perpetual motion machine (or C++ modules), but it's just not realistic.
IMO, the modules standard should have aimed to only support headers with no inline code (including no templates). That would be a severe limitation, but at least maybe it might have solved the problem posed by protobuf soup (AFAIK the original motivation for modules) and had a chance of being a real thing.
No idea if modules themselves are failed or not, but if C++ wants to keep fighting for developer mindshare, it must make something resembling modules work and figure out package management.
Yes, you have CPM, vcpkg and Conan, but those are not really standard and there is friction involved in getting them to work.
I emphatically agree. C++ needs a standard build system that doesn’t suck ass. Most people would agree it needs a package manager although I think that is actually debatable.
Neither of those things require modules as currently defined.
That is not even half realistic. Are you going to port all that code out there (autotools, cmake, scons, meson, bazel, waf...) to a "true" build system?
Only the idea is crazy. What Conan does is much more sensible: provide a layer independent of the build system (plus a way to consume packages and, if you want, some predefined "profiles" such as debug, etc.), leave it half-open for extensions, and let existing tools talk through that communication protocol.
That is much more realistic and you have way more chances of ending up with a full ecosystem to consume.
Also, no one needs to port a full build system or move away from perfectly working build systems.
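For instance, consuming dependencies through Conan's layer only takes a small conanfile.txt, regardless of which build system each dependency was originally built with (the package versions here are illustrative):

```ini
; conanfile.txt: declares what to fetch and which build-system
; integration files Conan should generate for the consumer.
[requires]
fmt/10.2.1
zlib/1.3.1

[generators]
CMakeDeps
CMakeToolchain
```

Running `conan install .` then produces CMake config files the consuming project can use, without Conan dictating how the project itself is built.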
Much like contracts--yes, C++ needs something modules-like, but the actual design as standardized is not usable.
Once big companies like Google started pulling out of the committee, they lost their connection to reality and now they're standardizing things that either can't be implemented or no one wants as specced.
I know Microsoft invested a lot into modules development and migrated a few small pieces of Office onto modules, but I'm not sure if they are actually using it extensively, and I'm also not sure if they're actually all that beneficial. Every time I hear about modules, it's stories about a year of migration work for a single-digit build-time improvement.
It has the developer mindshare of game engines, games and VFX industry standards, CUDA, SYCL, ROCm, HIP, Khronos APIs, game console SDKs, HFT, HPC, research labs like CERN, Fermilab,...
Ah, and the two major compiler frameworks that all those C++ wannabe replacements use as their backend.
Because as a percentage of global C++ builds they’re used in probably 0.0001% of builds with no line of sight to that improving.
They have effectively zero use outside of hobby projects. I don’t know that any open source C++ library I have ever interacted with even pretends that modules exist.
I'm not the OP, but I think you miss most of the pain points because yours are 'personal' projects.
There's not a compatible format between different compilers, or even different versions of the same compiler, or even the same versions of the same compiler with different flags.
This seems immediately to create too many permutations of builds for them to be distributable artifacts as we'd use them in other languages. More like a glorified object file cache. So what problem does it even solve?
BMIs are not considered distributable artifacts and were never designed to be. Same as PCHs and clang-modules which preceded them. Redistribution of interface artifacts was not a design goal of C++ modules, same as redistribution of CPython byte code is not a design goal for Python's module system.
Modules solve the problems of text substitution (headers) as interface description. It's why we call the importable module units "interface units". The goals were to fix all the problems with headers (macro leakage, uncontrolled export semantics, Static Initialization Order Fiasco, etc) and improve build performance.
They succeeded at this rather wonderfully as a design. Implementation proved more difficult but we're almost there.
I frankly wish we'd stop developing C++. It's so hard to keep track of all the new unnecessary toys they're adding to it. I thought I knew C++ until I read some recent C++ code. That's how bad it is.
Meanwhile, the C++ build system is an abomination. Header files should be unnecessary.