I just long for DBs to evolve from "stateful" to "stateless". CQRS at the DB level.
* All inserts into append only tables. ("UserCreatedByEnrollment", "UserDeletedBySupport" instead of INSERT vs UPDATE on a stateful CRUD table)
* Declare views on these tables in the DB that present the data you want to query -- including automatically maintained materialized indices on multiple columns resulting from joins. So your "User" view is an expression involving those event tables (or "UserForApp" and "UserForSupport"), and the DB takes care of maintaining indices on these which are consistent with the insert-only tables.
* Put in archival policies saying to delete / archive events that do not affect the given subset of views. ("Delete everything in UserCreatedByEnrollment that isn't shown through UserForApp or UserForSupport") A sketch of the whole setup follows below.
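To make this concrete, here's a minimal sketch in Postgres SQL run through node-postgres. All table, view, and column names are hypothetical, and the REFRESH at the end is exactly the manual step I wish the DB would own:

```typescript
import { Client } from "pg";

// Hypothetical schema: all writes are INSERTs into append-only event tables,
// and "UserForApp" is just a view over those events.
const client = new Client();
await client.connect();

await client.query(`
  CREATE TABLE user_created_by_enrollment (
    event_id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    user_id  uuid NOT NULL,
    email    text NOT NULL,
    at       timestamptz NOT NULL DEFAULT now()
  );

  CREATE TABLE user_deleted_by_support (
    event_id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    user_id  uuid NOT NULL,
    at       timestamptz NOT NULL DEFAULT now()
  );

  -- "UserForApp": current users = created minus deleted.
  CREATE MATERIALIZED VIEW user_for_app AS
    SELECT c.user_id, c.email
    FROM user_created_by_enrollment c
    WHERE NOT EXISTS (
      SELECT 1 FROM user_deleted_by_support d WHERE d.user_id = c.user_id
    );

  CREATE UNIQUE INDEX ON user_for_app (user_id);
`);

// The manual step the DB could own: the materialized view goes stale
// until you refresh it yourself (via cron, triggers, or app code).
await client.query(`REFRESH MATERIALIZED VIEW CONCURRENTLY user_for_app`);
await client.end();
```

What I'm asking for is the DB keeping that view (and its indices) incrementally consistent with the append-only tables, and driving archival policies off the view definitions, instead of me wiring up the refresh myself.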
I tend to structure my code and DB schemas like this anyway, but the lack of smoother DB support means it's currently only for people who are especially interested in it.
Some bleeding-edge DBs let you do at least some of this efficiently and in a user-friendly way. That is, they will maintain powerful materialized views without you having to write triggers etc. manually. But I long for the day we get more OLTP focus in this area, not just OLAP.
My point is that event sourcing would have been a lot less painful if popular DBs had built-in support for it in the way I describe.
If you go with event sourcing today you end up having to do a lot of things by hand that the DB should have been able to handle automatically; there's an abstraction mismatch.
(I've worked with 3-4 different strategies for doing event sourcing in SQL DBs in my career)
Why use a nailgun instead of a hammer, if the nailgun still requires supervision and handholding?
Example: Say I discover a problem in the SPA design that can be fixed by tuning some CSS.
Without LLM: Dig around the code until I find the right spot. If it's been some months since I was there this can easily cost five minutes.
With LLM: Explain what is wrong. Perhaps the description is vague ("cancel button is too invisible, I need another solution") or specific ("1px more margin here please"). The LLM makes a best-effort fix within 30 seconds. The diff points to just the right location so you can fine-tune it.
I've been programming for 20 years, and I've always underestimated how long things will take (no, not pressured by anyone to give firm estimates, just talking informally when prioritizing work order together).
The other day I gave an estimate to my co-worker and he said "but how long is it really going to take, because you always finish a lot quicker than you say, you say two weeks and then it takes two days".
The LLMs just make me finish things a lot faster, and my gut-feel estimates of how long things will take are not yet taking that into account.
(And before people talk about typing speed: No, that isn't it at all. I've always been the fastest typist and fastest human developer among my close co-workers.)
Yes, I need to review the code and interact with the agent. But it's doing a lot better than a lot of developers I've worked with over the years, and if I don't like the style of the code it takes very few words for the LLM to "get it" and improve it.
Some commenters are comparing the LLM to a junior. In some sense that is right, in that the work relationship may be the same as with a (blazingly fast) junior; but the communication style, the knowledge area, and how few words I can use to describe something feel more like talking to a senior.
(I think it may help that for the last 10 years of my career a big part of my job was reviewing other people's code, delegating tasks, and being the one who knew the code base best and helped others into it. So I'm used to delegating, not just coding. Recently I switched jobs and am now coding alone with AI.)
I obviously don't know that my past two days of work would have taken two weeks in the alternative route, but it's my feeling for this particular work:
I'm implementing a drawing tool on top of maps for fire departments (see demo.syncmap.no -- it's only in Norwegian for now though, plan to launch in English and Show HN it in some months). Typescript, Svelte, Go, Postgres.
This week I have been making the drawing tools more powerful (not deployed publicly yet).
* Gesture recognition to turn wobbly lines into straight lines in some conditions
* Auto-fill closed shapes: vector-graphics graph algorithms to segment the graph and compute fill regions that feel natural in the UI (the default SVG fill regions were not right; it took some trial and error to find something that feels natural enough)
* Splines to make smoother curves: fitting Catmull-Rom splines and converting them to Bezier segments for SVG representation, etc. (see the sketch after this list)
* Constraints when dragging graph nodes around, so that shapes don't intersect when I don't want them to, etc.
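To give a flavor of the spline bullet above: this is not my production code, just a minimal sketch of the standard uniform Catmull-Rom-to-cubic-Bezier conversion (SVG's `C` command takes cubic Beziers, so each Catmull-Rom segment maps onto one; the type and function names are mine):

```typescript
type Pt = { x: number; y: number };

// Convert a uniform Catmull-Rom spline through `pts` into an SVG path of
// cubic Bezier segments. For segment p1->p2 (with neighbors p0, p3) the
// standard control points are p1 + (p2 - p0)/6 and p2 - (p3 - p1)/6.
// Endpoints are handled by clamping to the first/last point.
function catmullRomToSvgPath(pts: Pt[]): string {
  if (pts.length < 2) return "";
  let d = `M ${pts[0].x} ${pts[0].y}`;
  for (let i = 0; i < pts.length - 1; i++) {
    const p0 = pts[Math.max(i - 1, 0)];
    const p1 = pts[i];
    const p2 = pts[i + 1];
    const p3 = pts[Math.min(i + 2, pts.length - 1)];
    const c1 = { x: p1.x + (p2.x - p0.x) / 6, y: p1.y + (p2.y - p0.y) / 6 };
    const c2 = { x: p2.x - (p3.x - p1.x) / 6, y: p2.y - (p3.y - p1.y) / 6 };
    d += ` C ${c1.x} ${c1.y}, ${c2.x} ${c2.y}, ${p2.x} ${p2.y}`;
  }
  return d;
}
```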
I haven't worked all that much with polygon graphics before, so the LLM is very helpful in a) explaining the concepts to me and b) providing robust implementations for whatever I need.
And I've had many dead ends that didn't feel natural in the UI, which I could discard after trying them out in full, without losing a huge investment.
These are all things that are very algorithm- and formula-intensive, where I would have had to do a lot of reading and research to do things right myself. (I could deal with it, but it takes a lot of time to read up on it.)
I review to see that it "looks sensible", not every single addition and division in the spline interpolations, or every step of the graph segmentation algorithms used to compute fill regions. I review function signatures and overall architecture, not the small details (in the frontend, that is -- the backend authorization code is obviously reviewed line by line).
I described a problem on the UI level. The LLM suggested the Ramer-Douglas-Peucker algorithm to solve it, which I had never heard of before. It implemented it. Works perfectly. It is 40 lines of code (of which I only really need to review the function signature, and note the fact that it's a recursive bisection algorithm). I would have spent a very long time trying to figure out what to do here otherwise, and the LLM handed me the solution.
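For the curious, RDP is small enough to sketch from memory -- this is a hedged approximation of the idea, not the actual diff the LLM produced:

```typescript
type Pt = { x: number; y: number };

// Perpendicular distance from p to the (infinite) line through a and b.
function perpDist(p: Pt, a: Pt, b: Pt): number {
  const dx = b.x - a.x, dy = b.y - a.y;
  const len = Math.hypot(dx, dy);
  if (len === 0) return Math.hypot(p.x - a.x, p.y - a.y);
  return Math.abs(dy * (p.x - a.x) - dx * (p.y - a.y)) / len;
}

// Ramer-Douglas-Peucker: keep the endpoints; if the farthest interior point
// is within epsilon of the chord, drop everything between; otherwise bisect
// at that point and recurse on both halves.
function rdp(points: Pt[], epsilon: number): Pt[] {
  if (points.length < 3) return points.slice();
  const first = points[0], last = points[points.length - 1];
  let maxDist = 0, maxIdx = 0;
  for (let i = 1; i < points.length - 1; i++) {
    const d = perpDist(points[i], first, last);
    if (d > maxDist) { maxDist = d; maxIdx = i; }
  }
  if (maxDist <= epsilon) return [first, last];
  const left = rdp(points.slice(0, maxIdx + 1), epsilon);
  const right = rdp(points.slice(maxIdx), epsilon);
  return left.slice(0, -1).concat(right); // drop the duplicated split point
}
```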
Yes, this kind of work will be sped up a lot by AI since you are not familiar with the intricacies of the subject matter. Especially with well-documented but complex formats it can assist (vector graphics are not necessarily intuitive). Additionally, in my experience UI design is quite pattern- and boilerplate-heavy.
The suggesting of algorithms sounds good. I don't know how you got there, but I would ask for several algorithms that fit the bill and narrow it down myself (the first suggestion isn't always optimal).
Thank you for taking the time to shine some light onto what you're doing as I can see how you get that kind of speedup from using AI in this scenario.
As long as it makes an already-senior engineer working alone as quick as they'd be in a team with 3 juniors, it can lead to replacing jobs without ever producing code that doesn't need review.
This is not advice for just anyone to follow. For some people this may be right, but for others it can be dangerous and a disaster. (At least if there's any chance "months" turns into "years".)
If one is on the verge of depression (or similar), then removing routines from your life is in general not going to fix things, but make them worse.
A long vacation or unpaid leave, sure. But quitting work without a concrete plan to return and a definite exit point feels dangerous. If you aren't in the right place mentally, suddenly you're just stuck at home watching Netflix in a downward spiral, instead of doing all those exciting things you planned but somehow never get around to.
I remember seeing a post from someone on HN who started in this place, then quit work for a year. Reading about that journey, it seemed quite obvious that attempting a "reset" just made things worse.
A variant of this advice, that avoids some of the pitfalls, is to take time off to do something structured and specific.
Personally, in between jobs a long time ago, I chose to walk the Henro Trail, an approximately 800-mile Buddhist pilgrimage trail in Shikoku, Japan. To make a long story short, it was the experience of a lifetime.
I haven't, but others have written about the same trip. There's lots of material online these days, I'm not really familiar with it but if you google "Shikoku henro pilgrimage", all the hits will be about the same trip I took.
There is a wonderful book, Japanese Pilgrimage by Oliver Statler. He goes into the history of the pilgrimage and of Kobo Daishi, the monk whose path the trail follows. He also discusses his own personal experience walking the trail.
I am pretty new to frontend development (but have 20 years of backend)
I assumed I would need to use one of these libraries at some point. But, perhaps because I am using Svelte instead of React, whenever I ask the AI to do something, since I don't already use a component lib it just spits out the HTML/CSS/TS to do the job from scratch (or, depending on how you look at it, outputs the mean component from its training data).
I have to point out that it should organize the code and give the component a dedicated Svelte file (sure, I could fix AGENTS.md to do that).
I think with AI the use case for these libraries is much weaker. If there is anything complex you need, AI can build it in seconds, specifically tailored for you, so...
I've been dabbling in backend and frontend stuff for about 25 years now, but for the past 15 years or so I haven't really had to do any webby stuff for work (and that's kind of how I like it).
Recently I've needed to put together a few things as "proof of concept" for things like internal directories and catalogues, and it's one of those "How Hard Can It Possibly Be" situations where we've had folk prevaricating for months with outline drawings and sketches and mockups.
So I knocked together a backend for it in Django, which worked okay, and then styled up the raw template with MinCSS[1], and then to do stuff like "find-as-you-type" and other "magical dynamic page" things I used HTMX[2] which has been discussed here endlessly.
No need for AI sloppiness. Just write some code, look at some examples, stick in some styles, and away you go.
I've used HTMX-like approaches a lot for other apps and I've been pretty frontend-averse, but this time I'm doing something similar to a drawing program with lots of d3 and SVG etc., very much the "real use case" for an SPA. So I feel HTMX doesn't apply to this specific use case.
Why? Russia didn't have a protracted civil war between 2000-ish and now?
Isn't Trump busy replacing US Army leadership with those loyal to him? Why would Army and ICE be on opposite sides?
Seems MAGA just have to continue the present course and apply just enough pressure to the election system to keep "winning" half-credibly and autocracy is there in not too many years.
I mean, they are already past pardoning those who attacked Congress for not accepting the election result.
It is just a gradual process which is well underway, at what point would California and Washington suddenly prop up a militia?
I wish there was a standard protocol for consuming event logs, and that all the client side tooling for processing them didn't care what server was there.
I’d love a world where “consume an event log” is a standard protocol and client-side tooling doesn’t care which broker is behind it.
Feed API is very close to the mental model I’d want: stable offsets, paging, resumability, and explicit semantics over HTTP. Ayder’s current wedge is keeping the surface area minimal and obvious (curl-first), but long-term I’d much rather converge toward a shared model than invent yet another bespoke API.
If you’re open to it, I’d be very curious what parts of Feed API were hardest to standardize in practice and where you felt the tradeoffs landed in real systems.
I don't have that much to offer... we just implemented it for a few different backends sitting on top of SQL. The concept works (obviously, as there is not much there). The main challenge was getting safe export mechanisms out of SQL, i.e. a column in the tables you can safely use as a cursor. The complexity in achieving that was really our only problem.
But because there wasn't any official spec, it became a topic of organizational bikeshedding. That would have been avoided by having more mature client libs and a spec provided externally.
This spec is a bit complex, but it is complexity that is needed to support a wide range of backend/database technologies. Simpler specs are possible by making more assumptions about / hardcoding how the backend/DB works.
It has been a few years since I worked with this, but reading it again now I still like it in this version. (This spec was the 2nd iteration.)
The partition splitting etc. was a nice idea that wasn't actually implemented/needed in the end. I just felt it was important to have it in the protocol at the time.
That makes a lot of sense: the hard part isn't "HTTP paging", it's defining a safe cursor (in SQL that becomes "which column is actually stable/monotonic"), and without an external spec/libs it turns into bikeshedding. In Ayder the cursor is an explicit per-partition log offset, so resumability/paging is inherent, which is why Feed API's mental model resonates a lot. I'd love to see a minimal "event log profile" of that spec someday.
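Neither Feed API's nor Ayder's actual wire format is shown here; as a sketch of the shared mental model (endpoint and field names are made up), a resumable consumer is just a loop that pages by cursor and persists it:

```typescript
// Hypothetical wire format -- not Ayder's or Feed API's real one -- just
// the shared mental model: a partitioned log you page through by cursor.
type Page = {
  events: { offset: number; payload: unknown }[];
  nextCursor: string; // opaque; encodes the per-partition offset
  atEnd: boolean;
};

// Resumable consumer: persist the cursor after each page and you can stop
// and restart anywhere, with no per-client state on the server.
async function consume(
  baseUrl: string,
  partition: number,
  cursor: string,
  handle: (payload: unknown) => Promise<void>,
): Promise<void> {
  while (true) {
    const res = await fetch(
      `${baseUrl}/partitions/${partition}/events?cursor=${encodeURIComponent(cursor)}`,
    );
    const page: Page = await res.json();
    for (const ev of page.events) await handle(ev.payload);
    cursor = page.nextCursor; // persist this somewhere durable in real use
    if (page.atEnd) break;    // or long-poll / sleep and keep tailing
  }
}
```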
Your mind must work differently than mine. I have programmed for 20 years, I have a PhD in astrophysics..
And my "reasoning" is pretty much like a long ChatGPT verbal and sometimes not-so-verbal (visual) conversation with myself.
If my mind really did abstract platonic thinking, I think answers to hard problems would just instantly appear to me, without flaws. But only problems I have solved before and can pattern-match do that.
And if I have to think any new thoughts I feel that process is rather similar to how LLMs work.
It is the same for the history of science, really -- only thoughts that build in small steps on previous thoughts and participate in a conversation actually get thought by humans.
Totally new leaps, which a "platonic thinking machine" should easily make, do not seem to happen.
The architectures of the brain and a modern transformer are very different even though they are related, which is why it may seem easy for some to draw analogies.
Whether an LLM is reasoning or not is a question independent of whether it works by generating text.
By the standard in the parent post, humans certainly do not "reason" either. But that is just choosing a very high bar for "reasoning" that neither humans nor AI meet... what is the point then?
It is a bit like saying: "Humans don't reason, they just let neurons fire off one another, and think the next thought that enters their mind"
Yes, LLMs need to spew out text to move their state forward. As a human I actually sometimes need to do that too: Talk to myself in my head to make progress. And when things get just a tiny bit complicated I need to offload my brain using pen and paper.
Most arguments used to show that LLMs do not "reason" can be used to show that humans do not reason either.
To show that LLMs do not reason you have to point to something other than how they work.
If LLMs were actually able to think/reason, and you acknowledge that they've been trained on as much data as everyone could get their hands on, such that they've been "taught" infinitely more than any ten humans could learn in a lifetime, I would ask: why haven't they solved any novel, unsolved problems?
When coding, they are solving "novel, unsolved problems" within the coding tasks they're given.
So I will assume you mean within maths, science etc? Basically things they can't solve today.
Well 99.9% of humans cannot solve novel, unsolved problems in those fields.
LLMs cannot learn; there is just the initial weight-estimation process. And that process currently does not make them good enough at novel math/theoretical-physics problems.
That does not mean they do not "reason" in the same way that those 99.9% of humans still "reason".
But they definitely do not learn the way humans do.
(Anyway, if LLMs could somehow get a 1000x larger context window and converse with themselves for a full year, it does not seem out of the question that they could come up with novel research?)