> I have no critical knowledge about the codebase that makes me feel like I could fix an edge case without reading the source code line by line (which at that point would probably take longer than 4 hours).
Peter Naur argues that programming is fundamentally an activity of theory building, not just program text production. The code itself is merely the artifact of the real work.
You must not confuse the artifact (the source code) with the mind that produced the artifact. The theory is not contained in the text output of the theory-making process.
The problems of program modification arise from acting on the assumption that programming is just text production; the decay of a program is a result of modifications made by programmers without a proper grasp of the underlying theory. LLMs cannot obtain Naur's Ryleian "theory" because they "ingest the output of work" rather than developing the theory by doing the work.
LLMs may _appear_ to have a theory about a program, but this is an illusion.
To believe that LLMs can write software, one must mistakenly assume that the main activity of the programmer is simply to produce source code, which is (according to Naur) inaccurate.
I agree with this take, but I'm wondering: what are vibe coders doing differently?
Are they mainly using certain frameworks that already have a rigid structure, thus allowing LLMs to not worry about code structure/software architecture?
Are they worry-free, just running with it?
Not asking rhetorically, I seriously want to know.
This is one of the most insightful thoughts I've read about the role of LLMs in software development. So much so, in fact, that its pertinence would remain pristine even after removing all references to LLMs.
There isn't a whole lot of theory building when you're writing a shell script or a Kubernetes manifest... You're just glad to get the damn thing working.
I'd look to Peter Naur's 1985 paper, "Programming as Theory Building," to remind us that writing meaningful commit messages is essential insofar as it is the discipline that confirms and preserves the intellectual "theory" of our work, distinguishing it from mere text production. In Naur's view, programming is fundamentally the activity of building theory, which he defines as the knowledge a person must possess not only to intelligently perform tasks, but also to explain them, to answer queries about them, to argue about them, and so forth. The source code is merely an artifact, and neither machine code nor source code contains the wisdom of knowing how the program works, or more critically, why the program was written the way it was instead of some other way that would have accomplished the same task.
The act of composing a descriptive and concise commit message compels the programmer to transition from the task of "path-making" (writing code) to the intellectual work of explaining the process, which is necessary to grok the change completely.
Reviving the theory of an existing program is a difficult, frustrating, and time-consuming effort. When theory-building is neglected, we lose the intellectual foundation that dictates the program's purpose and design, trapping future maintenance efforts in a costly, confusing cycle of trying to deduce intent from artifact alone.
Right now the API is nonexistent, relying entirely on people using the web interface to make listings, upload photos, and set prices. But if you would find this useful I can happily build it out. Our stack is Elixir and building APIs is very straightforward. Our code is open-source, too!
When you say "algorithmically driven print-on-demand" do you mean that prices would automatically adjust based on inventory? Or like, how do you mean.
Also, when you say "see them show up in a request on sale" — can you clarify? I interpret this to mean you want a webhook triggered when an order comes in.
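To make my interpretation concrete, here's a minimal sketch of the kind of webhook I have in mind: the site POSTs a JSON payload when an order comes in, signed with a shared secret so the receiver can verify it came from us. Everything here is assumption — the event name (`order.created`), the field names, and the signature scheme are placeholders, not anything we currently implement.

```python
import hashlib
import hmac
import json

# Assumed: a secret exchanged out of band when the webhook is registered.
SECRET = b"shared-webhook-secret"

def sign(body: bytes) -> str:
    """Sender side: compute the HMAC-SHA256 signature attached as a header."""
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str) -> bool:
    """Receiver side: constant-time comparison against the header value."""
    return hmac.compare_digest(sign(body), signature)

# Hypothetical payload for a new-order event.
body = json.dumps(
    {"event": "order.created", "listing_id": 42, "price_cents": 1500}
).encode()
sig = sign(body)
```

The receiver would call `verify(request_body, request_headers["X-Signature"])` (header name also assumed) and reject anything that doesn't match, so a tampered payload fails the check.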
It's a simple, timeless, inescapable law of the universe that failures, while potentially damaging, are acceptable risks. The Pareto principle suggests that addressing only the most critical 20% of issues yields a disproportionate 80% of the benefit, while chasing the remaining bugs yields diminishing marginal returns.
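A toy illustration of that 80/20 shape, with invented numbers: if bug impact follows a steep, power-law-ish distribution, fixing the worst handful captures most of the total benefit, and every fix after that buys less.

```python
# Invented impact scores for 100 bugs, following a 1/rank curve.
impacts = sorted((100 / (i + 1) for i in range(100)), reverse=True)

total = sum(impacts)
top_20_share = sum(impacts[:20]) / total

# With this (made-up) distribution, the worst 20 bugs account for the
# large majority of total impact; the long tail contributes little each.
```

The exact split depends entirely on the distribution you assume — the point is only the shape of the curve, not a precise 80%.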
We're seeing bugs in bigger slices because technology is, overall, a bigger pie. Full of bugs. The bigger the pie, the easier it is to eat around them.
Another principle at play might be "induced demand," most notoriously illustrated by widening highways, but might just as well apply to the widening of RAM.
Are we profligate consumers of our rarefied, finite computing substrate? Perhaps, but the Maximum Power Transfer Theorem suggests that anything less than 50% waste heat would slow us down. What's the rush? That's above my pay grade.
I guess what I'm saying is that I don't see any sort of moral, procedural, or ideological decay at fault.
In my circles, QA is still very much a thing, only "shifted left" for tighter integration into CI/CD.
Edit: It's also worth reflecting on "The Mess We're In."[0] Approaches that avoid or mitigate the pitfalls common to writing software must be taught or rediscovered in every generation, or else wallow in the obscure quadrant of unknown-unknowns.
Close. Failure-free is simply impossible. And believing the opposite fails even harder and dies out.
This is not "acceptable", because there is no alternative, there is no choice or refutation (non-acceptance). It is a fact of life. Maybe even more so than gravity and mechanical friction.