It's the commercials that get me. They depict people playing the games constantly: while cleaning the house, going to the bathroom, hanging out with friends. It shows them with their minds elsewhere, as if they're in a fancy casino surrounded by rich, beautiful people wearing suits and glamorous clothes. Meanwhile they're in their dull, ordinary lives, craving the dopamine hit the app gives them.
The commercials are a celebration of addiction, and it's disgusting to those of us who have struggled with addiction and know, like you say, the damage is clear. And they tacitly admit it too at the end of the commercial, where they hurriedly say "struggling with gambling addiction? call this number." As if that absolves anything.
And it's not just the gambling either. A typical commercial break these days consists of: gambling ads where they try to get you addicted, crypto ads where they try to bilk you, political ads where they lie to you, and then there are the omnipresent pharmaceutical ads. Now we've got AI ads on top of it all. Every one of those ad categories should be made illegal, like tobacco advertising.
I'd support fairly broad restrictions on advertising for things like that: gambling, alcohol/intoxicants, prescription medicines. Basically, if it's not available to everyone, it should not be advertised to everyone.
I can't call my new formula translation language FORTRAN because it's been taken, as have many other names. So now to avoid collisions, it's named after my cat.
The LLM prompt space is an N-dimensional space where you can start at any point, and then the LLM carves a path through the space for so many tokens using the instructions you provided, until it stops and asks for another direction. This frames LLM prompt coding as a sort of navigation task.
The problem is difficult because at every decision point, there's an infinite number of things you could say that could lead to better or worse results in the future.
Think of a robot going down the sidewalk. It controls itself autonomously, but it stops at every intersection and asks "where to next, boss?" You can tell it to cross the street, or to drive directly into traffic, or to do any number of other things that could bring it closer to its destination, take it further away, or even obliterate it.
In the concrete world, it's easy to direct this robot, and to direct it such that it avoids bad outcomes, and to see that it's achieving good outcomes -- it's physically getting closer to the destination.
But when prompting in an abstract domain, it's hard to see where the robot is going unless you're an expert in that field. As an expert, you know the right way to go is across the street. As a novice, you might tell the LLM to just drive into traffic, and it will happily oblige.
The other problem is feedback. When you direct the physical robot to drive into traffic, you witness its demise; its fate is catastrophic, and if you didn't realize the danger before, you'd see it then. The robot is also incapacitated, so it can't report falsely about its continued progress.
But in the abstract case, the LLM isn't obliterated; it continues to report on progress that isn't real, and as a non-expert, you can't tell it's been flattened into a pancake. The whole output chain is now completely and thoroughly off the rails, but you can't see the smoldering ruins of your navigation instructions, because it's told you "Exactly, you're absolutely right!"
My mother wouldn't be able to do what you did. She wouldn't even know where to start, despite using LLMs all the time. Half of my CS students wouldn't know where to start either. None of my freshmen would. My grad students can do this, but not all of them.
Your 20 years are assisting you in ways you don't know; you're so experienced you no longer know what it means to be inexperienced. Now, it's true you probably don't need 20 years to do what you did, but you need some experience. It's not that the task you posed to the LLM is trivial for everyone because of the LLM; it's that it's trivial for you because you have 20 years of experience. For people with experience, the LLM makes moderate tasks trivial, hard tasks moderate, and impossible tasks technically doable.
For example, my MS students can vibe-code a UI, but they can't vibe-code a complete bytecode compiler. They can use AI to assist them, but it's not a trivial task at all; they will have to spend a lot of time on it, and if they don't have the background knowledge they will end up mired.
The person at the top of the thread only made a claim about "non-experts".
Your mom wouldn't vibe-code software she wants, not because she's not a software engineer, but because she doesn't engage with software as a user at the level where she cares to do that.
Consider these two vibe-coded examples of waybar apps in r/omarchy where the OP admits he has zero software experience:
That is a direct refutation of OP's claim. LLM enabled a non-expert to build something they couldn't before.
Unless you too think there exists a necessary expertise in coming up with these prompts:
- "I want a menubar app that shows me the current weather"
- "Now make it show weather in my current location"
- "Color the temperatures based on hot vs cold"
- "It's broken please find out why"
Is "menubar" too much expertise for you? I just asked claude "what is that bar at the top of my screen with all the icons" and it told me that it's macOS' menubar.
I didn't make clear I was responding to your question:
"Where do my 20 years of software dev experience fit into this except beyond imparting my aesthetic preferences?"
Anyway, I think you kind of unintentionally proved my point. These two examples are pretty trivial as far as software goes, and the LLM enabled someone with a little technical experience to implement them where before they couldn't have.
They work well because:
a) the full implementation of these apps doesn't even fill up the AI context window, so it's easy to keep the LLM on task.
b) it's a tutorial-style app that people often write as "babby's first UI widget", so there are thousands of examples of exactly this kind of thing online; therefore the LLM has little trouble summoning the correct code in its entirety.
But still, someone with zero technical experience is going to be immediately thwarted by the prompts you provided.
Take the first one "I want a menubar app that shows me the current weather".
ChatGPT response: "Nice — here's a ready-to-run macOS menubar app you can drop into Xcode..."
She's already out of her depth by word 11. You expect your mom to use Xcode? Mine certainly can't. Even I have trouble with Xcode and I use it for work. Almost every single word in that response would need to be explained to her, it might as well be a foreign language.
Now, the LLM could help explain it to her, and that's what's great about them. But by the time she knows enough to actually find the original response actionable, she would have gained... enough knowledge and experience to operate at just the level of writing that particular weather app. And having done that, it's still unreasonable to believe she could then use the LLM to write a bytecode compiler just because people with Ph.D.s in CS can. The LLM doesn't level the playing field; it's still lopsided toward the Ph.D.s and senior devs with 20 years of experience.
Indeed, how do we know they are nude if we can't see any of their parts? I mean, living in SF I've seen people walking around in public like that, wearing the most minimal covering possible.
No, because then what happens when the place they move to starts censoring them as well? Then all the places start censoring them? You're basically arguing for "separate but equal", and we know how that works out. The correct move is to fight for your rights, not to assuage bigotry.
And you are arguing every business must support your agenda, and if not, they are your "enemy"? What an odd take. Again, you are free to use other means of social media to spread your message but no one is obligated to read or support it. And, that does not make them the enemy.
You already said that. It does not answer the question. Moving to another app doesn't solve anything, because we still haven't answered the question of why they should have had to move in the first place! It's the same situation if they move to a new app; nothing has changed.
At this point we have gone in a circle, so I must assume I won't get a genuine answer to the only thing I have asked, despite trying to engage genuinely in conversation. Have a good day.
Welcome to the world of antisocial personality disorders. The rationale goes:
That didn't happen.
And if it did, it wasn't that bad.
And if it was, that's not a big deal.
And if it is, that's not my fault.
And if it was, I didn't mean it.
And if I did, you deserved it.
It's called the "narcissist's prayer", it's what narcissists and sociopaths tell themselves to absolve themselves of accountability. Whatever the situation, they have an excuse as to how it's not their fault. It's like the stages of grief but for people trying to avoid consequences or guilt for their actions.
No, dealing with tables was like trying to build a house out of tempered glass.
With css grid, I can tell each element which area or column+row to occupy.
If I add or remove a random element, the rest of the elements stay in the correct place.
But do that with a table, and you end up trying to glue your house back together shard by shard while trying not to cut yourself or break things further.
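To make the contrast concrete, here's a minimal sketch of a named-area grid (class names illustrative, not from the thread):

```css
/* Each child declares its own area; the container owns the layout. */
.page {
  display: grid;
  grid-template-areas:
    "header header"
    "nav    main"
    "footer footer";
  grid-template-columns: 200px 1fr;
}
.page > header { grid-area: header; }
.page > nav    { grid-area: nav; }
.page > main   { grid-area: main; }
.page > footer { grid-area: footer; }
```

Because each element is pinned to a named area, inserting or deleting a sibling doesn't shuffle the others, whereas in a table every cell's position depends on the cells before it.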
> If I add or remove a random element, the rest of the elements stay in the correct place.
This complaint highlights how absurdly not fit-for-purpose html+css actually is. Okay, you may want to do "responsive" design, but you have the semantic layout fixed, therefore you try to contort a styling engine into pretending to be a layout engine when in reality it is three stylesheets in a trenchcoat.
> Okay, you may want to do "responsive" design, but you have the semantic layout fixed, therefore you try to contort a styling engine into pretending to be a layout engine when in reality it is three stylesheets in a trenchcoat.
I need to write this up properly, but one of my bugbears with responsive design is that it became normalised to push the sidebar down below the content on small screens. And if you didn't have a sidebar, to interweave everything in the content no matter what screen size you were viewing on.
What I want is a way to interleave content and asides on small screens, and pull them out into 1+ other regions on larger screens. Reordering the content on larger screens would be the icing on the cake but for now I'll take just doing it.
Using named grid-template-areas stacks the items you move to the sidebar on top of each other, so you only see one of them.
'Good' old floats get most of the way there, but put the item in the sidebar exactly where it falls. Plus they're a pain to work with overall: https://codepen.io/pbowyer/pen/jEqdJgP
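For anyone who hasn't hit the grid-template-areas limitation described above, a minimal sketch of it (selectors and markup hypothetical: one content block plus two asides):

```css
.layout {
  display: grid;
  grid-template-areas: "main side";
  grid-template-columns: 1fr 20em;
}
.layout > .content { grid-area: main; }
/* Both asides are assigned the same named area, so they occupy the
   same grid cell and render stacked on top of each other --
   only the topmost one is actually visible. */
.layout > aside { grid-area: side; }
```

There's no pure-CSS way to tell the "side" area to flow multiple items vertically; each element mapped to a named area lands in that one cell.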
>This complaint highlights how absurdly not fit-for-purpose html+css actually is. Okay, you may want to do "responsive" design, but you have the semantic layout fixed,
this "not fit for purpose" quality may in fact be historically superseded usage still baked into the technologies, a product of the relatively rapid change of the various platforms that must interact with and use them: the specification version of "technical debt"
that is to say, some subsets of these numerous technologies can be used to construct something fit for the purpose you are describing, but as a general rule, anything constructed as a solution will probably also use other subsets not fit for that particular purpose, though perhaps fit for some other purpose.