Proxying from the "hot" domain (with user credentials) to a third party service is always going to be an awful idea. Why not just CNAME Mintlify to dev-docs.discord.com or something?

This is also why an `app.` or even better `tenant.` subdomain is always a good idea; it limits the blast radius of mistakes like this.


I run a product similar to Mintlify.

We've made different product decisions than them. We don't support this, nor do we request access to codebases for Git sync. Both are security issues waiting to happen, no matter how much customers want them.

The reason people want it, though, is for SEO: whether it's true or outdated voodoo, almost everyone believes having their documentation on a subdomain hurts the parent domain. Google says it's not true, SEO experts say it is.

I wish Mintlify the best here – it's stressful to let customers down like this.


What makes you say that Google claims it's not true? Google claims subdomains are two completely different domains and you'll lose all the linking/PageRank benefits, according to their own SEO docs. Some SEO gurus claim it's not so black and white, but no one knows for sure. The data does show that having docs on a subdomain is more harmful to your SEO if they get linked to a lot.

Here's the argument for/against it: https://www.searchenginejournal.com/ranking-factors/subdomai...

I think the answer likely is quite nuanced, for what it's worth.


To my knowledge it's not so much hurting the parent domain as having two separate "worlds": your docs, which are likely to receive higher traffic, will stop contributing any SEO juice to your main website.

Yep - this is the core issue that made the vulnerability so bad. And if you use a subdomain for a third-party service, make sure your main app auth cookies are scoped to host-only. Better yet, use a completely different domain like you would for user-generated content (e.g. discorddocs.com).
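
For anyone who hasn't dug into cookie scoping: host-only just means the Domain attribute is omitted entirely (hypothetical cookie values below):

  Set-Cookie: session=abc123; Path=/; Secure; HttpOnly
      ^ host-only: sent back to discord.com itself, never to subdomains

  Set-Cookie: session=abc123; Domain=discord.com; Path=/; Secure; HttpOnly
      ^ sent to discord.com *and* every subdomain, including a proxied docs host

The counterintuitive part is that setting Domain explicitly, even to your own apex domain, is what widens the scope.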

I think the reason companies do this for doc sites is so they can substitute your real credentials for "YOUR_API_KEY" in code snippets. Seems like a poor tradeoff given the security downside.

This isn't likely to be a good indicator. Essentially, only the network permission and some kind of fingerprint are necessary for the tracking alleged here; the idea is not that TikTok was spying on Grindr on the device, but that a device-fingerprinting firm brokering both TikTok and Grindr data was able to correlate the user.

> You're not going to try and extract a timestamp from a uuid.

What? The first 48 bits of a UUIDv7 are a Unix timestamp (milliseconds since the epoch).
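
Pulling it back out takes a few lines; a minimal sketch, assuming a well-formed, spec-conforming v7 value (the example UUID string below is made up):

  // Extract the unix_ts_ms field (first 48 bits) of a UUIDv7 string.
  #include <cstdint>
  #include <iostream>
  #include <string>

  uint64_t uuidv7_unix_ms(const std::string& uuid) {
      std::string hex;
      for (char c : uuid)
          if (c != '-') hex += c;
      // First 12 hex digits == 48-bit big-endian milliseconds since the Unix epoch.
      return std::stoull(hex.substr(0, 12), nullptr, 16);
  }

  int main() {
      std::cout << uuidv7_unix_ms("01933f3e-7a10-7cc3-98c4-dc0c0c07398f") << " ms\n";
  }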

Whether this is a meaningful problem or a benefit for any particular use of UUIDs requires some thought; in some cases it's not to be taken lightly, and in others it doesn't matter at all.

I see what you’re getting at, that ignoring the timestamp aspect makes them “just better UUIDs,” but this ignores security implications and the temptation to partition by high bits (timestamp).


Nobody forces you to use a real Unix timestamp. BTW the original Unix timestamp is 32 bits (expiring in 2038), and now everyone is switching to 64-bit time_t. What 48 bits?

All you need is a guaranteed non-decreasing 48-bit number. A clock is one way to generate it, but I don't see why a UUIDv7 would become invalid if your clock is biased, runs too fast, too slow, or whatever. I would not count on the first 48 bits being a "real" timestamp.


> Nobody forces you to use a real Unix timestamp.

Besides the UUIDv7 specification, that is? Otherwise you have some arbitrary kind of UUID.

> I would not count on the first 48 bits being a "real" timestamp.

I agree; this is the existential hazard under discussion which comes from encoding something that might or might not be data into an opaque identifier.

I personally don't agree as dogmatically with the grandparent post that extraneous data should _not_ be incorporated into primary key identifiers, but I also disagree that "just use UUIDv7 and treat UUIDs as opaque" is a completely plausible solution either.


That is like the HTML specification -- nobody ever puts up a web page that is not conformant. ;p

The idea behind putting the time as a prefix was B-tree efficiency, but lots of people use client-side generation and you can't trust it; it shouldn't matter anyway, because it's just an ID, not a way of registering time.


I mean, any 32-bit unsigned integer is a valid Unix timestamp at least up until 19 January 2038, and, by extension, any u64 is too, for a far longer time.

The only promise of Unix timestamps is that they never go back; they always increase. This is a property of a sequence of UUIDs, not of any particular instance. At most, one might argue that an "utterly valid" UUIDv7 should not contain a timestamp from the far future. But I don't see why it can't be any time in the past, as long as the timestamp part does not decrease.

The timestamp aspect may be a part of an additional interface agreement: e.g. "we guarantee that this value is UUIDv7 with the timestamp in UTC, no more than a second off". But I assume that most sane engineers won't offer such a guarantee. The useful guarantee is the non-decreasing nature of the prefix, which allows for sorting.
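
A minimal sketch of how a single generator might enforce that non-decreasing prefix even if the wall clock misbehaves (illustrative only, not any particular library's implementation):

  #include <algorithm>
  #include <chrono>
  #include <cstdint>

  // Non-decreasing 48-bit prefix for UUIDv7-style IDs; single-threaded sketch.
  uint64_t next_prefix() {
      static uint64_t last = 0;
      const uint64_t now_ms = std::chrono::duration_cast<std::chrono::milliseconds>(
          std::chrono::system_clock::now().time_since_epoch()).count();
      // Never hand out a smaller prefix than before, even if the clock stepped back;
      // equal prefixes are fine, since the random bits disambiguate.
      last = std::max(last, now_ms) & 0xFFFFFFFFFFFFULL;
      return last;
  }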


If you absolutely need it, use a separate uC / “trigger” chip for PD negotiation.

I think the GP's point is that this requires a 12V-capable USB power supply.

I have converted pretty much everything I have to USB-C, from toothbrushes to old laptops, and am very happy with the results. My solution is to only own high-quality power supplies with good support for PD. Having done this, the question "Why isn't this thing charging?" doesn't really arise.


The common device that this doesn't work well for is the Raspberry Pi 5. For full power mode it needs a 5V/5A power supply, and that is quite unusual.

Specifically it needs a supply that offers 5V/5A as a basic profile outside of PPS (programmable power supply), because the Pi doesn't support PPS negotiations. That is what's so rare, much more than the actual ability to do 5V/5A.

It's more than unusual; it violates the spec. However, you only need that to have full-power USB.

> COM is basically just reference counting and interfaces.

> I remember a few years back hearing hate about COM and I didn't feel like they understood what it was.

Even in "core" COM there's also marshaling, the whole client/server IPC model, and apartments.

And, I think most people encounter COM with one of its friends attached (like in this case, OLE/Automation in the form of IDispatch), which adds an additional layer of complexity on top.

Honestly I think that COM is really nice, though. If they'd come up with some kind of user-friendly naming scheme instead of UUIDs, I don't even think it would get that much hate. It feels to me that 90% of the dislike for COM is the mental overhead of seeing and dealing with UUIDs when getting started.

Once you get past that part, it's really fast to do pretty complex stuff in; compared to the other things people have come up with like dbus or local gRPC and so on, it works really well for coordinating extensibility and lots of independent processes that need to work together.


Even the UUIDs aren't bad, they're a reasonable solution to Zooko's triangle. You can't globally assign names.

Yeah, I've often thought about what I'd do instead and there's no legitimate alternative. It might help developers feel better if they had some kind of "friendly name" functionality (i.e., if registrations in the Registry had a package-identifier-style string alongside), but that also wouldn't have flown when COM was invented and resources overall were much more scarce than they are today.

While they're not "the same", classic COM (or OLE? the whole history is a mess) did actually have ProgIDs, and WinRT introduces proper "classes" and namespaces (having given up global registration for everything but system-provided APIs) with proper "names" (you can even query them at runtime with IInspectable::GetRuntimeClassName).
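
A ProgID lookup is about as close to a "friendly name" as classic COM gets; roughly like this (error handling elided, and "Excel.Application" is just the classic example ProgID):

  #include <windows.h>
  #include <combaseapi.h>
  // link with ole32.lib

  int main() {
      CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);
      CLSID clsid;
      CLSIDFromProgID(L"Excel.Application", &clsid);          // friendly name -> GUID
      IUnknown* obj = nullptr;
      CoCreateInstance(clsid, nullptr, CLSCTX_LOCAL_SERVER,   // then the GUID does the work
                       IID_PPV_ARGS(&obj));
      if (obj) obj->Release();
      CoUninitialize();
  }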

Microsoft tried to do a lot with COM when they first released it, it wasn't just a solution for having a stable cross-language ABI, it was a way to share component libraries across multiple applications on a system, and a whole lot more.

> but that also wouldn't have flown when COM was invented and resources overall were much more scarce than they are today.

And this ultimately is the paradox of COM. There were good ideas, but given Microsoft's (mostly kept) promise of keeping old software working, the bad ones have remained baked in.


This is quite interesting: it's easy to blame the use of an LLM to find the interface, but really this is a matter of needing to understand the COM calling conventions in order to interact with it.

I found the interface and a C++ sample in about two minutes of GitHub searching:

https://github.com/microsoft/SampleNativeCOMAddin/blob/5512e...

https://github.com/microsoft/SampleNativeCOMAddin/blob/5512e...

but I don't actually think this would have helped the Rust implementation; the authors already knew they wanted a BSTR and a BSTR*, they just didn't understand the COM conventions for BSTR ownership.
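
For anyone following along, the convention they tripped over is roughly the standard Automation rule (a hedged sketch; GetName and SomeMethod below are hypothetical, not the article's actual interface):

  #include <windows.h>
  #include <oleauto.h>

  // [out] BSTR*: the callee allocates, the caller frees.
  HRESULT GetName(BSTR* out) {
      *out = SysAllocString(L"hello");
      return *out ? S_OK : E_OUTOFMEMORY;
  }

  void Caller() {
      BSTR name = nullptr;
      if (SUCCEEDED(GetName(&name))) {
          SysFreeString(name);            // caller owns the [out] result
      }
      // [in] BSTR: the caller allocates and frees; the callee only borrows it.
      BSTR arg = SysAllocString(L"world");
      // SomeMethod(arg);                 // hypothetical [in] parameter
      SysFreeString(arg);
  }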


Every time I read an article on someone understanding COM from interfaces and dispatching, I think: reinventing Delphi, badly.

COM is cross-language, though, and cross-process, and even cross-machine although not often used that way these days.

Life is definitely easier if you can restrict everything to being in the same language.


Delphi was designed to be COM-compatible, so the vtable layout was compatible, for example. Its interfaces, via the inbuilt interface keyword, use COM-compatible reference counting. It has inbuilt RTL types for handling a lot of common COM scenarios. It did this back in the 90s and remains extremely useful for COM still today.

Then in the late 2010s, C++Builder (its sister product) dropped ATL in favor of DAX -- Delphi ActiveX, aka COM -- and using COM from C++ uses the same inbuilt support, including keyword suggestions and RTL types. It's not quite as clean since it uses language bridging to do so, but it's still a lot nicer than normal C++ and COM.

Seeing someone do COM from first principles in 2025 is jarring.


You mean, like Microsoft themselves?

.NET COM support was never as nice, with the RCW/CCW layer; they have now redone it for modern .NET Core, but you still need some knowledge of how to use it from C++ to fully master it.

Then there is CsWinRT, which is supposed to be the replacement for the runtime portion of .NET Native, and which to this day has plenty of bugs and is not as easy to use as .NET Native was.

Finally, on the C++ side it has been a wasteland of frameworks; since MFC there have been multiple attempts, and when they finally had something close to C++Builder with C++/CX, an internal team managed to sell their managers on the idea of killing C++/CX and replacing it with C++/WinRT.

Nowadays C++/WinRT is sold as the way to do COM and WinRT, but it is actually in maintenance mode, stuck in C++17; those folks moved on to the windows-rs project mentioned in the article, and the usability story sucks.

Editing IDL files without any kind of code completion or syntax highlighting, tooling that has been non-existent since COM was introduced, manually merging the generated C++ code into the ongoing project.

To complement your last sentence, seeing Microsoft employees push COM from first principles in 2025 is jarring.


OLE at least looked easier to me in Assembler than in C++. Back in the day.


Oh I see. Python, Ruby, and various other high level languages, including of course the MS languages, have pretty seamless integration as well, although not at the level of direct binary compatibility. I imagine they just use wrappers.

Not C++, it has been a battlefield of frameworks, each reboot with its own set of sharp edges.

I feel that way about most of frontend development since I was a teenager playing with Delphi 7.

> it's easy to blame the use of LLM to find the interface, but really this is a matter of needing to understand the COM calling conventions in order to interact with it.

Sure, but I think that this perfectly illustrates why LLMs are not good at programming (and may well never get good): they don't actually understand anything. An LLM is fundamentally incapable of going "this is COM so let me make sure that the function signature matches the calling conventions", it just generates something based on the code it has seen before.

I don't blame the authors for reaching for an LLM given that Microsoft has removed the C++ example code (seriously, what's up with that nonsense?). But it does very nicely highlight why LLMs are such a bad tool.


In defense of the LLM here: learning COM from scratch given its lack of accessible documentation would have forced us to reach for C# for this minor project.

The LLM gave us an initial boost of productivity and (false) confidence that enabled us to get at the problem with Rust. While the LLM's output was flawed, using it did actually cause us to learn a lot about COM by allowing us to even get started. That somewhat flies in the face of a lot of the "tech debt" criticisms levied at LLMs (including by me). Yes, we accumulated a bit of debt while working on the project, but we were in this case able to pay it off before shipping, and it gave us the leverage we needed to approach this problem using pure Rust.


You might actually get that desired behavior through reasoning, or if the model was reinforced for coding workflows involving COM, or at least trained with enough stack diversity for the model to encounter the need to develop this capability.

In the case of LLMs with reasoning, they might pull this off because reasoning is in fact a search in the direction of extra considerations that improve performance on the task. This is measured by the verifier during reasoning training, which the LLM learns to emulate during inference, hence the improved performance.

As for RL coding training, the difference can be slightly blurry since reasoning is also done with RL, but for coding models specifically they also discover additional considerations, or even recipes, through self play against a code execution environment. If that environment includes COM and the training data has COM-related tasks, then the process has a chance to discover the behavior you described and reinforce it during training increasing its likelihood during actual coding.

LLMs are not really just autocomplete engines. Perhaps the first few layers, or base models, can be seen as such, but as you introduce instruct and reinforcement tuning, LLMs build progressively higher levels of conceptual abstraction, from words to sentences to tasks, much like CNNs learn basic geometric features and then compose those into face parts and so on.


IMO the value of COTS software stack compatibility is becoming overstated: academics, small research groups, hobbyists, and some enterprises will rely on commodity software stacks working well out of the box, but large pure/"frontier"-AI inference-and-training companies are already hand optimizing things anyway and a lot of less dedicated enterprise customers are happy to use provided engines (like Bedrock) and operate at only the higher level.

I do think AWS need to improve their software to capture more downmarket traction, but my understanding is that even Trainium2 with virtually no public support was financially successful for Anthropic as well as for scaling AWS Bedrock workloads.

Ease of optimization at the architecture level is what matters at the bleeding edge; a pure-AI organization will have teams of optimization and compiler engineers who will be mining for tricks to optimize the hardware.


VLIW works for workloads where the compiler can somewhat accurately predict what will be resident in cache. It's used everywhere in DSP, was common in GPUs for a while, and is present in lots of niche accelerators. It's a dead end for situations where cache residency is not predictable, like any kind of multitenant general-purpose workload.


https://web.archive.org/web/20111219004314/http://journal.th... (referenced, at least tangentially, in the video) is a piece from the engineering lead which does a great job discussing Why C++. The short summary is "they couldn't find enough people to write Ada, and even if they could, they also couldn't find enough Ada middleware and toolchain."

I actually think Ada would be an easier sell today than it was back then. It seems to me that the software field overall has become more open to a wider variety of languages and concepts, and knowing Ada wouldn't be as widely perceived as career pigeonholing today. Plus, Ada is having a bit of a resurgence with stuff like NVIDIA picking SPARK.


I've always strongly disliked this argument of "not enough X programmers." If the DoD enforces the requirement for Ada, universities, job training centers, and companies will follow. People can learn new languages. And the F35 and America's combat readiness would be in a better place today with Ada instead of C++.


I agree. First of all I don't think Ada is a difficult language to learn. Hire C++ programmers and let them learn Ada.

Secondly, when companies say "we can't hire enough X" what they really mean is "X are too expensive". They probably have some strict salary bands and nobody had the power to change them.

In other words there are plenty of expensive good Ada and C++ programmers, but there are only cheap crap C++ programmers.


Actually these kinds of projects are chronically over budget and the US military is notorious for wasting money.

Using C++ vs wishing an Ada ecosystem into existence may have been one of the few successful cost saving measures.

Keep in mind that these are not normal programmers. They need to have a security clearance and fulfill specific requirements.


They need to meet very strict security clearance requirements and maintain them throughout the life of the project or their tenure. People don't realize this isn't some little embedded app you throw on an ESP32.

You’ll be interviewed, your family, your neighbors, your school teachers, your past bosses, your cousin once removed, your sheriff, your past lovers, and even your old childhood friends. Your life goes under a microscope.


I went through the TS positive vetting process (for signals intelligence, not writing software for fighter jets, but the process is presumably the same).

If I were back on the job market, I’d be demanding a big premium to go through it again. It’s very intrusive, puts significant limitations on where you can go, and adds significant job uncertainty (since your job is now tied to your clearance).


Not to mention embedded software often pays half of what a startup does, and defense software often isn't work-from-home. Forget asking what languages they can hire for. They are relying on the work being interesting to compensate for dramatically less pay and substantially less pleasant working conditions. Factor in that some portion of the workforce has ethical concerns about working in the sector, and you can see they will get three sorts of employees: those who couldn't get a job elsewhere, those who want something cool on their resume, and those who love the domain. And they will lose the middle category right around the time they become productive members of the team, because it was always just a stepping stone.


Yes, but like a certification, that clearance is yours, not the company's. You take it with you. It lasts a good while. There are plenty of government contractors that would love you if you had one: Northrop, Lockheed, Boeing, etc.


An engineering degree and a TS is basically a guaranteed job. They might not be the flashiest FAANG jobs, but it is job security. In this downturn where people talk about being unable to find jobs for years in big cities, I look around my local area and Lockheed, BAE, Booz Allen, etc. all have openings.


My issue is you end up dealing with dopes who don't want to learn, just want to milk the money and the job security, and actively fight you when you try to make things better. Institutionalized.


While getting lunch at an Amazon tech day a couple of years ago, I overheard somebody talking about how easy it was to place somebody with a clearance and AWS certifications. Now, this was Washington, DC, but I doubt it's the only area where that's true.

They always have openings so investors think they're hiring and growing. Many ads are for fictional positions.


And yet my experience looking at the deluge of clearance-required dev jobs from defense startups in the past couple of years is that there is absolutely no premium at all for clearance-required positions.


I was once interviewed by the FBI because a buddy was applying for a security clearance. One thing they asked was, "Have you ever known XXX to drink excessively?", to which I replied, "We were fraternity brothers together, so while we did often drink a lot, it needs to be viewed in context."


I agree - Ada is very similar to Pascal, and much faster to pick up than, say, C++.


C++ is not that hard to pick up. But writing error free C++ code is hard as hell.

As I wrote to someone else:

Why require that companies use a specific programming language instead of requiring that the end product is good?

> And the F35 and America's combat readiness would be in a better place today with Ada instead of C++.

What is the evidence for this? Companies selling Ada products would almost certainly agree, since they have a horse in the race. Ada does not automatically lead to better, more robust, safer or fully correct software.

Your line of argument is dangerous and dishonest, as real life regrettably shows.[0]

[0]: https://en.wikipedia.org/wiki/Ariane_flight_V88

> The failure has become known as one of the most infamous and expensive software bugs in history.[2] The failure resulted in a loss of more than US$370 million.[3]

> The launch failure brought the high risks associated with complex computing systems to the attention of the general public, politicians, and executives, resulting in increased support for research on ensuring the reliability of safety-critical systems. The subsequent automated analysis of the Ariane code (written in Ada) was the first example of large-scale static code analysis by abstract interpretation.[9]


Ada and especially Spark makes it a whole lot easier to produce correct software. That doesn't mean it automatically leads to better software. The programming language is just a small piece of the puzzle. But an important one.


> Ada and especially Spark makes it a whole lot easier to produce correct software.

Relative to what? There are formal verification tools for other languages. I have heard Ada/SPARK is good, but I do not know the veracity of that. And Ada companies promoting Ada have horses in the race.

And Ada didn't prevent the Ada code in Ariane 5 from being a disaster.

> The programming language is just a small piece of the puzzle. But an important one.

100% true, but the parent of the original post that he agreed with said:

> And the F35 and America's combat readiness would be in a better place today with Ada instead of C++.

What is the proof for that, especially considering events like Ariane 5?

And Ada arguably has technical and non-technical drawbacks relative to many other languages.

When I tried Ada some weeks ago for a tiny example, I found it cumbersome in some ways. Is the syntax worse and more verbose than even C++? Maybe that is just a learning thing, though. Even with a mandate, Ada did not catch on.


>What is the proof for that, especially considering events like Ariane 5?

Ariane 5 is a nice anti-Ada catchphrase, but Ada is probably the most used language for war machines in the United States.

Now the argument can be whether or not the US military is superior to X; but the fact that the largest military in the world is filled to the brim with war machines running Ada code is itself a testament to the effectiveness of the language/DoD/grant structure around the language.

Would it be better off in C++? I don't know about that one way or the other, but it's silly to pretend Ada isn't successful.


But Ada had, for a number of years, a mandate requiring its usage [0]. That should have been an extreme competitive advantage. And even then, C++ is still used these days for some US military projects, like the F-35. Though I don't know whether the F-35 is successful or not; if it is not, that could be an argument against C++.

Ada is almost non-existent outside its niche.

The main companies arguing for Ada appear to be the ones selling Ada services, meaning they have a horse in the race.

I barely have any experience at all with Ada. My main impression is that it, like C++, is very old.

[0]: https://www.militaryaerospace.com/communications/article/167...

> The Defense Department`s chief of computers, Emmett Paige Jr., is recommending a rescission of the DOD`s mandate to use the Ada programming language for real-time, mission-critical weapons and information systems.


Poking around, it looks like Ada is actually the minority now. Everything current is either transitioning to C++ or started that way. The really old but still used stuff is often written in weird languages like JOVIAL, or in assembly.


Essentially the story of DoD mandates comes down to everyone getting waivers all the time and the mandate eventually being nuked.


> Ada didn't prevent the Ada code in Ariane 5 from being a disaster

That's a weak argument against the claim that Ada could lead to a better place in terms of software. It's like saying that it's not safer to cross at a crosswalk because you know someone who died while crossing at one.

(But I guess it's fair for you to say that, as the argument should probably be made by the people who say that Ada would be better, and because they made a claim without evidence, you can counterclaim without any evidence :-) )

There is no programming language that can make software work correctly outside of the domain for which it was written, which was the case for Ariane 501. Any language used to write the same software for Ariane 4 might have led to the same exact error. The Ariane 501 failure is a systems engineering problem, not a software problem (even if, in the end, the almost-last link in the chain of events is a software problem).


Well: readability, better type safety, less undefined behaviour, in and out parameters, named parameters, built-in concurrency.

With C++ it's just too easy to make mistakes.


> Relative to what?

Relative to C++.

> There are formal verification tools for other languages.

None that are actually used.

I have no horse in this race and I have never actually written any Ada, but it seems pretty clear to me that it would produce more correct code on average.

Looking at the Ariane 5 error, it looks like they specifically disabled the relevant overflow check: https://www.sarahandrobin.com/ingo/swr/ariane5.html

That's nothing to do with Ada.

Also asking for evidence is a red herring. Where's the evidence that Rust code is more likely to be correct than Perl? There isn't any. It's too difficult to collect that evidence. Yet it's obviously true.

Plenty of things are pretty obviously true but collecting scientific evidence of them is completely infeasible. Are code comments helpful at all? No evidence. Are regexes error-prone and hard to read? No evidence. Are autoformatters helpful? No evidence.


I agree that the "there aren't enough programmers for language X" argument is generally flawed. Acceptable cases would be niches like maintenance of seriously legacy or dying platforms. COBOL anyone?

But not because I think schools and colleges would jump at the opportunity and start training the next batch of students in said language just because some government department or a bunch of large corporations supported and/or mandated it. Mostly because that hasn't actually panned out in reality for as long as I can remember. Trust me, I _wish_ schools and colleges were that proactive or even in touch with industry needs, but... (shrug!)

Like I said, I still think the original argument is flawed, at least in the general case, because any good organization shouldn't be hiring "language X" programmers; they should be hiring good programmers who show the ability to transfer their problem-solving skills across the panoply of languages out there. Investing in getting a _good_ programmer upskilled on a new language is not as expensive as most organizations make it out to be.

Now, if you go and pick some _really obscure_ (read "screwed up") programming language, there's not much out there that can help you either way, so... (shrug!)


> If the DoD enforces the requirement for Ada, Universities, job training centers, and companies will follow

DoD did enforce a requirement for Ada but universities and others did not follow.

The JSF C++ guidelines were created for circumventing the DoD Ada mandate (as discussed in the video).


TL;DR Ada programmers were more expensive


Since when was expense a problem for defense spending?

In the video, the narrator also claims that Ada compilers were expensive and thus students were dissuaded from trying it out. However, in researching this comment I found that the GNAT project has been around since the early 90s. Maybe it wasn't complete enough until much later, and maybe potential students of the time weren't using GNU?

  The GNAT project started in 1992 when the United States Air Force awarded New 
  York University (NYU) a contract to build a free compiler for Ada to help 
  with the Ada 9X standardization process. The 3-million-dollar contract 
  required the use of the GNU GPL for all development, and assigned the 
  copyright to the Free Software Foundation.
https://en.wikipedia.org/wiki/GNAT


Take a look at job ads for major defense contractors in jurisdictions that require salary disclosure. Wherever all that defense money is going, it's not engineering salaries. I'm a non-DoD government contractor and even I scoff at the salary ranges that Boeing/Lockheed/Northrop post, which often feature an upper bound substantially lower than my current salary while the job requires an invasive security clearance (my current job doesn't). And my compensation pales in comparison to what the top tech companies pay.


The DoD could easily have organized Ada hackathons with a lot of prize money to "make Ada cool", if they had chosen to, in order to get the language into the limelight. They could also have funded developing a free, open source toolchain.


Ada would never have been cool.

Ironically, I remember one of the complaints was that it took a long time for the compilers to stabilize. They were such complex beasts with a small user base, so you had smallish companies trying to develop a tremendously complex compiler for a small crowd of government contractors: a perfect recipe for expensive software.

I think maybe they were just a little ahead of their time on getting a good open source compiler. The Rust project shows that it is possible now, but back in the 80s and 90s with only the very early forms of the Internet I don't think the world was ready.


Out of curiosity:

1: If you had to guess, how high is the level of complexity of rustc?

2: How do you think gccrs will fare?

3: Do you like or dislike the Rust specification that originated from Ferrocene?

4: Is it important for a systems language to have more than one full compiler for it?


Given how much memory and CPU time is burned compiling Rust projects I'm guessing it is pretty complex.

I'm not deep enough into the Rust ecosystem to have solid opinions on the rest of that, but I know from the specification alone that it has a lot of work to do every time you execute rustc. I would hope that the strict implementation would reduce the number of edge cases the compiler has to deal with, but the sheer volume of the specification works against efforts to simplify.


> They could also have funded developing a free, open source toolchain.

If the actual purpose of the Ada mandate was cartel-making for companies selling Ada products, that would have been counter-productive to their goals.

Not that compiler vendors making money is a bad thing, compiler development needs to be funded somehow. Funding for language development is also a topic. There was a presentation by the maker of Elm about how programming language development is funded [0].

[0]: https://youtube.com/watch?v=XZ3w_jec1v8


Is the GNAT compiler not sufficiently free and open source? It does not fulfill the comment's call for a "toolchain", however.

Edit: Thanks for that video. It is an interesting synthesis and great context.


GNAT exists because DoD funded a free, open source toolchain.


Since, on paper, the government cares about cost efficiency and you have to consider that in your lobbying materials.

Also, it enables getting cheaper programmers who, where possible, might be isolated from the actual TS material, to develop on the cheap so that the profit margin is bigger.

It gets worse outside of the flight-side JSF software - or so it looks from GAO reports. You don't turn around a culture of shittiness that fast, and I've seen earlier code in the same area (but not for JSF) by L-M... and well, it was among the worst code I've seen, including failing even the basic requirement of running on a specific version of a browser at minimum.


No they won't. DoD is small compared to the rest of the software market. You get better quality and lower cost with COTS than with custom solutions, unless you spend a crap ton. The labor market for software's no different.

Everyone likes to crap on C++ because it's (a) popular and (b) tries to make everyone happy with a ton of different paradigms built-in. But you can program nearly any system with it more scalably than anything else.


In my experience people criticize C++ for its safety problems. Safety is more important in certain areas than in others. I’m not convinced that you get better quality with C++ than with Ada


Go was built because C++ does not scale. Anybody who's ever used a source-based distro knows that if you're installing/building a large C++ codebase, you'd better forget about your PC for the day because you will not be using it. Rust also applies here, but at least multiplatform support is easier, so I don't fault it for slow build times.


Go was created because Rob Pike hates C++, notice Plan 9 and Inferno don't have C++ compilers, even though C++ was born on UNIX at Bell Labs.

As for compilation times, yes that is an issue, they could have switched to Java as other Google departments were doing, with some JNI if needed.

As a sidenote, Kubernetes was started in Java and only switched to Go after some Go folks joined the team and advocated for the rewrite; see the related FOSDEM talk.


A lot of people hate C++, but that doesn't grant you the ability to make a language; very few have the opportunity to create a new language out of the free time provided by said language taking too long to compile.

I do not know why they did not go with Java; I imagine building a Java competitor (Limbo) and then being forced to use it is kind of demeaning. But again, this would all be conjecture.


Go was made because Rob Pike didn't want to do Java.

There were 3 people making the language, it wasn't a one man thing.

> more scalably than anything else

That's quite debatable. C++ is well known to scale poorly.


Yet the largest codebases I know of are either C, Fortran, or C++. Who's doing anything really big (in terms of LOC) in another language?

C/C++ basically demand that codebases be large. And we hear all the time about trouble with software written in these languages; reports of it are almost endless.

I think people who write complex applications in more sane languages end up not having to write millions of lines of code that no one actually understands. The sane languages are more concise and don't require massive hurdles to try and bake in saftey into the codebase. Safety is baked into the language itself.


The exact opposite of what you suggest already happened: Ada was mandated and then the mandate was revoked. It’s generally a bad idea to be the only customer of a specific product, because it increases costs.

> And the F35 and America's combat readiness would be in a better place today with Ada instead of C++

What’s the problem with the F35 and combat readiness? Many EU countries are falling over each-other to buy it.


> Many EU countries are falling over each-other to buy it

They are not buying it for its capabilities though, but to please their US ally/bully which would have retaliated economically otherwise.

See the very recent Swiss case, where the pilots had chosen another aircraft (the French Rafale), only to be disavowed by their politicians later on.


Maybe the EU shouldn’t have transformed themselves into US vassals then.

Nobody respects weakness, not even an ally. Ironically showing a spine and decoupling from the US on some topics would have hurt more short term, but would have been healthier in the long term.


>Maybe the EU shouldn’t have transformed themselves into US vassals then.

I share the same opinion. If you're (on paper) the biggest economic block in the world, but you can be so easily bullied, then you've already failed >20 years ago.

But I don't think it was bullying so much as the other way around. EU countries were just buying favoritism for US military protection, because it was still way cheaper than ripping off the bandaid and building their own domestic military industry of similar power and scale.

Most defense spending uses the same motivation. You're not seeking to buy the best or cheapest hardware; you seek to buy powerful friends.


Much of the existing European F-35 fleet predates Trump's first term. In fact, now quite the opposite is happening: other options from reliable partners are being eyed, even if technically inferior.


The pilots might have reassessed after Pakistan seemed to have shot three of them down from over 200 km range. An intel failure was blamed, but there were likely many factors, some of which may presumably be attributed to the planes.


Pakistan has never downed an F-35.


They were talking about the Rafales. But I think the comment is irrelevant anyway as the scandal happened before that iirc.


I worded it poorly: the Rafales were allegedly shot down. After that happened, perhaps the pilots who wanted them over F-35s might have a different opinion. F-35s might be harder to get a lock on at that distance and might have better situational awareness capabilities.


> What’s the problem with the F35 and combat readiness?

For example, the UK would like to use its own air-to-ground missile (the spear missile) with its own F-35 jets, but it's held back by Lockheed Martin's Block 4 software update delays.


Also by the software being a black box for everyone outside of the USA / Lockheed Martin.


> What’s the problem with the F35 and combat readiness?

Block 4 is very delayed for starters.


The F35 was like 10 years and $50B over budget.


> Many EU countries are falling over each-other to buy it.

It's because we are obliged to want more freedom.


I’ve learned most languages on the job: c#, php, golang, JavaScript, …

I know others who learned Ada on the job.

It’s not too terrible.


Why require that companies use a specific programming language instead of requiring that the end product is good?

> And the F35 and America's combat readiness would be in a better place today with Ada instead of C++.

What is the evidence for this? Companies selling Ada products would almost certainly agree, since they have a horse in the race. Ada does not automatically lead to better, more robust, safer or fully correct software.

Your line of argument is dangerous and dishonest, as real life regrettably shows.[0]

[0]: https://en.wikipedia.org/wiki/Ariane_flight_V88

> The failure has become known as one of the most infamous and expensive software bugs in history.[2] The failure resulted in a loss of more than US$370 million.[3]

> The launch failure brought the high risks associated with complex computing systems to the attention of the general public, politicians, and executives, resulting in increased support for research on ensuring the reliability of safety-critical systems. The subsequent automated analysis of the Ariane code (written in Ada) was the first example of large-scale static code analysis by abstract interpretation.[9]


> Why require that companies use a specific programming language instead of requiring that the end product is good?

I can think of two reasons. First, achieving the same level of correctness could be cheaper using a better language. And second, you have to assume that your testing is not 100% correct and complete either. I think starting from a better baseline can only be helpful.

That said, I have never used formal verification tools for C or C++. Maybe they make up for the deficiencies of the language.


How do you define a better programming language, how do you judge whether one programming language is better than another, and how do you prevent corruption and cartels from taking over?

If Ada was "better" than C++, why did Ada not perform much better than C++, both in regards to safety and correctness (Ariane 5), and commercially regarding its niche and also generally? Lots of companies out there could have gotten a great competitive edge with a "better" programming language. Why did the free market not pick Ada?

You could then argue that C++ had free compilers, but that should have been counter-weighed somewhat by the Ada mandate. Why did businesses not pick up Ada?

Rust is much more popular than Ada, at least outside Ada's niche. Some of that is organic, for instance arguably due to Rust's nice pattern matching, modules and crates. And some of that is inorganic, like how Rust evangelists push Rust through force, threats[0], harassment[1] and organized and paid media spam.

I also tried Ada some time ago, trying to write a tiny example, and it seemed worse than C++ in some regards. Though I only spent a few hours or so on it.

[0]: https://github.com/microsoft/typescript-go/discussions/411#d...

[1]: https://lkml.org/lkml/2025/2/6/1292

> Technical patches and discussions matter. Social media brigading - no thank you.

> Linus

https://archive.md/uLiWX

https://archive.md/rESxe


>How do you define a better programming language

A language that makes avoiding certain important classes of defects easier and more productive.

>how do you judge whether one programming language is better than another

Analytically, i.e. by explaining and proving how these classes of bugs can be avoided.

I don't find empirical studies on this subject particularly useful. There are too many moving parts in software projects. The quality of the team and its working environment probably dominates everything else. And these studies rarely take productivity and cost into consideration.


Yeah I find myself wishing it would take off again.

I’m sure I’m idealizing it, but at least I’m not demonizing it like folks did back in the day.


> It seems to me that the software field overall has become more open to a wider variety of languages and concepts, and knowing Ada wouldn't be perceived as widely as career pidgeonholing today.

Are you sure? I cannot even find Ada in [0].

I tried modifying some Hello World example in Ada some weeks ago, and I cannot say that I liked the syntax. Some features were neat. I had some trouble with figuring out building and organizing files. Like C++, and unlike Rust I think, there are multiple source file types, like how C++ has header files. I also had trouble with some flags, but I was trying to use some experimental features, so I think that part was on me.

[0]: https://redmonk.com/sogrady/2025/06/18/language-rankings-1-2...


Given that there are still 7 vendors selling Ada compilers I always found that argument a bit disingenuous.

https://www.adacore.com/

https://www.ghs.com/products/ada_optimizing_compilers.html

https://www.ptc.com/en/products/developer-tools/apexada

https://www.ddci.com/solutions/products/ddci-developer-suite...

http://www.irvine.com/tech.html

http://www.ocsystems.com/w/index.php/OCS:PowerAda

http://www.rrsoftware.com/html/prodinf/janus95/j-ada95.htm

What is true is that with those vendors, and many others (like UNIX vendors such as Sun that used to have Ada compilers), paying for the Ada compiler was extra, while C and C++ were already there in the UNIX developer SKU (a tradition that Sun started, having various UNIX SKUs).

So schools and many folks found it easier to just buy a C or C++ compiler than an Ada one, with its price tag.

Something that has helped Ada is the great work done by AdaCore, even if a few love hating them. They are the major sponsor of the ISO work, and of spreading Ada knowledge in the open source community.


Another factor for Ada not being more popular is probably: https://en.wikipedia.org/wiki/Ariane_flight_V88

> The failure has become known as one of the most infamous and expensive software bugs in history.[2] The failure resulted in a loss of more than US$370 million.[3]

> The launch failure brought the high risks associated with complex computing systems to the attention of the general public, politicians, and executives, resulting in increased support for research on ensuring the reliability of safety-critical systems. The subsequent automated analysis of the Ariane code (written in Ada) was the first example of large-scale static code analysis by abstract interpretation.[9]


The failure of Ariane was not specific to Ada.

It is just an example that it is possible to write garbage programs in any programming language, regardless if it is Rust or any other supposedly safer programming language.

A program written in C, but compiled with the option to trap on overflow errors would have behaved identically to the Ada program of Ariane.

A program where exceptions are ignored would have continued to run, but most likely the rocket would have crashed anyway a little later due to nonsense program decisions and the cause would have been more difficult to discover.


People love to point that out, missing the number of failures in C-derived languages.


But C-derived languages are also used much more. And it still shows that Ada does not automatically make software correct and robust. It presumably did indeed make Ada less popular than if it had not happened.


People still die in car crashes when wearing seatbelts, ergo seatbelts are useless.


Yet that was not any of my arguments. It, ironically, applies more to the argument you made in your previous post.

A better argument would have been based on statistics. But that might both be difficult to do, and statistics can also be very easy to manipulate and difficult to handle correctly.

I think companies should be free to choose any viable option, and then be held to requirements that the process and end product are good. Mandating Ada or other programming languages doesn't seem like it would have prevented Ariane 5, and probably wouldn't improve safety, security or correctness; it would instead just open the door to limited competition, cartels and a false sense of security. I believe that one should never delegate responsibility to the programming language; rather, programmers, organizations and companies are responsible for which languages they choose and how they use them (for instance using a formally verified subset). On the other hand, having standards and qualifications like ISO 26262 and ASIL-D, like what Ferrocene is trying to do with their products for Rust, is fine, I believe. Even though, specifically, some things about the Ferrocene-derived specification seem very off.


The funny thing is the promise of Ada was "if it compiles it won't crash at runtime" which has a lot of overlap with Rust.


https://web.archive.org/web/20111219004314/http://journal.th...

A large segment in this article (which is great overall) focuses on this decision. The short summary is "hiring Ada developers was hard and middleware and tooling were difficult to acquire."

While I've moved through a lot of parts of the software industry and may just be out of touch, I actually feel that this may be less the case today. I've seen a lot of school programs focus less on specific languages and frameworks and more on fundamental concepts, and with more "esoteric" languages becoming popular in the mainstream, I actually think hiring Ada developers would be a lot easier today (plus, big industry players like NVIDIA are back to using Ada since AdaCore have been so effective at pushing SPARK, which probably helps too).

