Is Waldo really there? Because I've looked everywhere, and at this point I'm a bit concerned that my eyes are going to give out before I'm able to find him. That guy sure is a wily fella.
A core problem with humans, or perhaps it's not even a problem, just something that takes a long time to recognize, is that they complain and hate on something that they continue to spend money on.
Not like food or clothing, but stuff like DLC, streaming services, and LLMs.
At least in my case, I suspect they also haven't kept up with the progress. They ran their experiments in 2023/24, were thoroughly put off, and haven't fired it up since. So the impression they have is frozen in time, from a period when it was indeed much less impressive.
Why do people in your circle not like AI?
I have a similar experience with friends and family not liking AI, but usually it's due to water and energy reasons, not because of an issue with the model reasoning.
If your circle has any artists in it, chances are they'll also have a very negative perception, although influenced heavily by the proliferation of AI-generated art.
At least personally, I've seen basically three buckets of opinions from non-technical people on AI. First, there's a decent-sized group who loathe anything to do with it, due to the issues you've mentioned, the art issue I mentioned, or other specific things that add up to the conviction that it's a net harm to society. Second, a decent-sized group who basically never think about it at all or go out of their way to use anything related to it. And third, a small group who claim to be fully aware of the limitations and consider themselves quite rational, but will then ask ChatGPT about literally anything and trust what it says without doing any additional research. It's the last group I'm personally most concerned about, because I've yet to find any effective way of getting them to recognize the cognitive dissonance (although sometimes I've at least made enough of an impression that they stop trying to make ChatGPT a participant in every single conversation I have with them).
Pretty much hit the nail on the head -- while there are some artists, most are from traditional broadly "intellectual" fields. Examples: writers, journalists, academia (liberal arts), publishing industry...
That's a good point; "art" might be a bit too narrow to accurately describe the type of field where people have fairly concrete concerns about how AI relates to what they produce. I'd be tempted to use the label "creative work", but even that doesn't quite feel like it's something that everyone would understand to include stuff like written journalism, which I think is likely to have pretty similar concerns.
I'm all for existentialism informing our ridiculous chase of productivity. But... learning new things before you kick the bucket can qualify as stopping and smelling the roses.
I agree. What I'm railing against is the notion of feeling you have to learn something, which is what happens near the New Year: this sudden pressure of having a goal and doing something, this arbitrary point for making grandiose decisions.
What I’m suggesting instead is to not tie a date or a goal to it. Let your interests and desires guide your learning process, not the calendar. I’m also advocating for reflection in the choice of what to pursue. Learning manipulation techniques and scams because you’re interested in the ingenuity of ideas or want to better defend yourself and your loved ones may count as smelling the roses, but learning those same techniques to apply them to other people for personal gain does not.
Woah. I didn't know about that. I found it from asciiart.eu. This stuff makes me wish HN supported ANSI. We could have so much fun. (Also PRE or preformatted text would be useful).
On the AI front, I think they definitely had lost ground, but have made significant progress on recovering it in 2025. I went from not using Gemini to mostly using 3 Pro.
Just the fact that they managed to dodge Nvidia and launch a SOTA model with their own TPUs for training/inference is a big deal, and it takes a lot of resources and expertise not all competitors have in-house. I suspect that decision will continue to pay dividends for them.
As long as there is competition in LLMs, Google will now be towards the front of the pack. They have everything they need to be competitive.
First off, I'm sorry. I went down this road with my godmother, $300k still unrecovered, despite lots of information documented.
How did the money actually leave her account once they had access? Was it wired?
Unfortunately the solution for you right now is to focus on rebuilding and acceptance. This is a massive problem, you aren't alone, and it's a reminder that there are shitty people in this world. There needs to be an alert and approval mechanism for outbound wires that older people can be strongly encouraged to set up. Sons and daughters can be notified if there's a massive outbound wire pending and intervene -- scammers are often posing as these people.
I am all in favor of broad cannabis legalization, but there is something to the gateway theory.
Most users of harder drugs indicate past use of marijuana. Additionally, marijuana gives many their first taste of doing business with drug dealers and 'breaks their cherry.' When they decide they want to try something else, they have already gained experience locating dealers and engaging with them. Legalizing cannabis helps here because its users won't engage with dealers to score; they'll go to the store and buy a regulated product.
It's definitely a gateway drug, but only from the perspective that you've forced people to establish black market financial connections. Once you've figured out how to get something illegal it opens a whole new world.
Search engines can afford to throw out stopwords because they're often keyword based. But (frontier) LLMs need the nuance and semantics those words signal; they don't automatically strip them. There are probably special-purpose models that do, or certain stages of a RAG pipeline, but that's the exception.
Yeah, it'll be fewer input tokens if you omit them yourself. It's not guaranteed to keep the response the same, though: you're asking the model to work with less context and more ambiguity at that point. So stripping stopwords from your prompt will save you negligible money and potentially cost a lot in model performance.
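To make the trade-off concrete, here's a minimal sketch of what "stripping stopwords yourself" looks like. It's illustrative only: a naive whitespace split stands in for a real tokenizer, and STOPWORDS is a small hypothetical list, not any model's or library's actual one.

```python
# Sketch: estimate the input-token savings from stripping stopwords.
# Assumptions: whitespace-split word count as a rough proxy for tokens,
# and a hand-picked hypothetical stopword list.
STOPWORDS = {"the", "a", "an", "of", "to", "is", "in", "and", "that"}

def strip_stopwords(prompt: str) -> str:
    """Remove stopwords; note this can change what the model understands."""
    return " ".join(w for w in prompt.split() if w.lower() not in STOPWORDS)

prompt = "Explain the difference between a list and a tuple in Python"
stripped = strip_stopwords(prompt)

print(stripped)  # "Explain difference between list tuple Python"
print(len(prompt.split()), "->", len(stripped.split()))  # 11 -> 6
```

The savings here are about five words out of eleven, but the stripped version is also more ambiguous ("list" and "tuple" lose their article-marked roles), which is exactly the context/performance trade the comment above describes.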
Edit: I absolutely did find Waldo! That was fun.