One trick I have tried is asking the LLM to output a specification of the thing we are in the middle of building. A commenter above said humans struggle with writing good requirements; LLMs have trouble following good requirements - ALL of them - often forgetting important things while scrambling to address your latest concern.
Getting it to output a spec lets me correct the spec, reload the browser tab to speed things up, or move to a different AI.
Damn … the Crucial P3 Plus and P5 Plus support Opal 2.0 full-disk encryption. This leaves Samsung standing nearly alone in the consumer market, except for some smaller names.
If it matters, many of their issues were apparently closed without comment. Here is an issue asking for this to be resolved; it also lists many of the commits.
> From a technical perspective, as someone who had previously been merging the MRs from that author, there are also a number of technical reasons we slowed down on merging them and started rejecting more of them - most notably, it became increasingly obvious that the MRs were untested and breaking things. Many of the MRs already have technical objections recorded in them, and many have no benefit other than refactoring the code to make it easier for future changes he had planned but now will not contribute, so they would cause code churn and risk for no remaining reason.
May be a repeat here, but the best proof I saw was to inscribe a square with sides of length c inside another square, rotated so that the inner square's corners touch the outer square's edges. The points of intersection divide each edge of the outer square into lengths a and b.
This produces an inner square with sides of length c and four congruent right triangles with legs a and b and hypotenuse c.
Note that the area of the outer square equals the area of the inner square plus the area of the four triangles. Solve this equality.
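To spell out the algebra: the outer square has side a + b, each triangle has area ab/2, so

(a + b)^2 = c^2 + 4(ab/2)
a^2 + 2ab + b^2 = c^2 + 2ab
a^2 + b^2 = c^2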
From a DoS-risk perspective there is no practical difference between an infinite loop and a finite but arbitrarily large loop, and the latter was always possible.
For example, this doesn't work:
#define DOUBLE(x) DOUBLE(x) DOUBLE(x)
DOUBLE(x)
That would only expand once and then stop because of the rule against repeated expansion. But nothing prevents you from unrolling the first few recursive expansions, e.g.:
#define DOUBLE1(x) x x
#define DOUBLE2(x) DOUBLE1(x) DOUBLE1(x)
#define DOUBLE3(x) DOUBLE2(x) DOUBLE2(x)
#define DOUBLE4(x) DOUBLE3(x) DOUBLE3(x)
DOUBLE4(x)
This will generate 2^4 = 16 copies of x. Add 60 more lines to generate 2^64 copies of x. While 2^64 is technically a finite number, for all practical purposes it might as well be infinite.
Without a specific constraint implemented, it certainly can happen, although I'm not sure it's something to be concerned about as a DoS so much as a nuisance when writing code with a bug in it. If you're including malicious code, there are probably much worse things it could do if it actually builds properly instead of just spinning indefinitely.
Rust's macros are recursive intentionally, and the compiler implements a recursion limit that IIRC defaults to 64; when it's hit, the compiler errors out and mentions the attribute you can add to the code if you need it to be higher. This isn't just for macros, though: I've seen it triggered by the compiler attempting to resolve deeply nested generics. So it seems plausible to me that C compilers might already have some sort of internal check for this. At the very least, C++ templates certainly can get pretty deeply nested, and given that the major C compilers are closely related to their C++ counterparts, maybe this is something that exists in the shared part of the compiler logic.
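For what it's worth, C++ compilers do bound template recursion. A minimal sketch (the Depth name is made up for illustration; the default limit is roughly 900 in GCC and 1024 in Clang, adjustable with -ftemplate-depth=):

// Each Depth<N> forces the instantiation of Depth<N-1>.
template <int N>
struct Depth {
    static constexpr long value = Depth<N - 1>::value + 1;
};

template <>
struct Depth<0> {
    static constexpr long value = 0;
};

static_assert(Depth<100>::value == 100, "well under the limit");

// Uncommenting this exceeds the default instantiation depth, and the
// compiler errors out suggesting -ftemplate-depth= to raise the cap:
// static_assert(Depth<5000>::value == 5000, "");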
C++ also has constexpr functions, which can be recursive.
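For example, a minimal sketch of a recursive constexpr function, evaluated entirely at compile time (and also depth-limited: GCC and Clang cap it, adjustable with -fconstexpr-depth=):

// Recursive constexpr: computed by the compiler, no runtime cost.
constexpr unsigned long long factorial(unsigned n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
}

static_assert(factorial(10) == 3628800, "evaluated at compile time");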
All code can have bugs, error out and die.
There are lots of good reasons to run code at compile time, most commonly to generate code, especially tedious and error-prone code. If the language doesn't have good built-in facilities to do that, then people will write separate programs as part of the build, which adds system complexity - and that is, in my experience, worse for C than for most other languages.
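As a concrete illustration of the kind of table people otherwise generate with a separate build step, here is a C++17 sketch, since C itself lacks this (the names make_popcount_table and kPopcount are made up for the example):

#include <array>

// Build a 256-entry bit-count lookup table at compile time,
// rather than generating the table with a separate program in the build.
constexpr std::array<unsigned char, 256> make_popcount_table() {
    std::array<unsigned char, 256> table{};
    for (int i = 0; i < 256; ++i) {
        int bits = 0;
        for (int v = i; v != 0; v >>= 1)
            bits += v & 1;  // count the set bits of i
        table[i] = static_cast<unsigned char>(bits);
    }
    return table;
}

constexpr auto kPopcount = make_popcount_table();
static_assert(kPopcount[0] == 0 && kPopcount[255] == 8);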
If a language can remove that build complexity, and the semantics are clear enough for the average programmer, that's a real win. For example, Nim's macro system originally seemed highly appealing (and easy) to me as a compiler guy, until I saw how other people find even simple examples completely opaque - worse than C macros.
1. Compile-time evaluation of functions - meaning you can write ordinary D code and execute it at compile time, including code that handles strings.
2. A "mixin" statement that takes a string as an argument; the string is compiled as if it were D source code, that code replaces the mixin statement, and compilation proceeds as usual.
No. Other modern languages have strong compile-time execution capabilities, including Zig, Rust and C++. And my understanding is that C is looking to move in that direction, though as with C++, macros will not go away.
Nowadays, do pass 1 (abstract, intro, headings, conclusions), upload the paper to your LLM, then ask questions interactively. LLMs can make step 3 a lot more efficient.
I get the impression that somehow an attacker is able to inject this prompt (maybe in front of the actual coder's prompt) in such a way as to produce actual production code. I'm waiting to hear how this can happen - cross-site attacks on the developer's browser?
"Documentation, tickets, MCP server" in pictures...
With internal documentation and tickets, I think you would have bigger issues. As for external documentation - well, maybe there should be tooling to check that. I'm not an expert on MCP, but vetting applies there too.