I downloaded the original article page, had claude extract the submission info to json, then wrote a script (by hand ;) to feed each submission title to gemini-3-pro and ask it for an article webpage and then for a random number of comments.
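The loop described above might look roughly like this. This is a hypothetical sketch, not the actual script: `call_model` is a placeholder for whatever gemini-3-pro client you use, and the JSON shape is assumed.

```python
# Rough sketch of the pipeline: submissions JSON in, generated pages out.
# call_model is a stand-in for a real gemini-3-pro API call.
import json
import random

def call_model(prompt: str) -> str:
    """Placeholder for a real model call; returns dummy HTML here."""
    return f"<html><!-- generated for: {prompt} --></html>"

def generate_site(submissions_path: str) -> dict:
    with open(submissions_path) as f:
        # assumed shape: [{"title": "..."}, ...]
        submissions = json.load(f)
    pages = {}
    for sub in submissions:
        title = sub["title"]
        article = call_model(
            f"Write a plausible 2035 article page for the HN submission: {title}"
        )
        # per the author's later comment, a random count avoids repeated tropes
        n_comments = random.randint(20, 100)
        comments = call_model(
            f"Write {n_comments} HN-style comments reacting to: {title}"
        )
        pages[title] = {"article": article, "comments": comments}
    return pages
```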
I was impressed by some of the things gemini came up with (or found buried in its latent space?). Highlights:
"You’re probably reading this via your NeuralLink summary anyway, so I’ll try to keep the entropy high enough to bypass the summarizer filters."
"This submission has been flagged by the Auto-Reviewer v7.0 due to high similarity with "Running DOOM on a Mitochondria" (2034)."
"Zig v1.0 still hasn't released (ETA 2036)"
The unprompted one-shot leetcode, youtube, and github clones
Nature: "Content truncated due to insufficient Social Credit Score or subscription status" / "Buy Article PDF - $89.00 USD" / "Log in with WorldCoin ID"
Github Copilot attempts social engineering to pwn the `sudo` repo
It made a Win10 "emulator" that goes only as far as displaying a "Windows Defender is out of date" alert message
"dang_autonomous_agent: We detached this subthread from https://news.ycombinator.com/item?id=8675309 because it was devolving into a flame war about the definition of 'deprecation'."
Columns now support "Vibe" affinity. If the data feels like an integer, it is stored as an integer.
This resolves the long-standing "strict tables" debate by ignoring both sides.
Also:
SQLite 4.0 is now the default bootloader for 60% of consumer electronics.
The build artifacts include sqlite3.wasm which can now run bare-metal without an operating system.
I haven't laughed this much in a while :) I'm exploring having gemini write me jokes like this every day when I wake up - perhaps it can vibe code something itself.
> Predictive SELECT Statements:
> Added the PRECOGNITION keyword.
> SELECT * FROM sales WHERE date = 'tomorrow' now returns data with 99.4% accuracy by leveraging the built-in 4kB inference engine. The library size has increased by 12 bytes to accommodate this feature.
12 bytes really sounds like something that the lead dev would write!
Also, a popup appeared at the bottom with this message:
> The future costs money.
> You have reached your free article limit for this microsecond.
> Subscribe for 0.0004 BTC/month
Suddenly, I have high hopes again for LLMs. Imagine you were a TV/film script writer and had writer's block. You could talk to an LLM for a while to see what funny ideas it can suggest. It is one more tool in the arsenal.
Personal favourite is from the Gemini shutdown article which has a small quote from the fictional Google announcement:
> "We are incredibly proud of what Gemini achieved. However, to better serve our users, we are pivoting to a new architecture where all AI queries must be submitted via YouTube Shorts comments. Existing customers have 48 hours to export their 800TB vector databases to a FAT32 USB drive before the servers are melted down for scrap."
The prompt indeed began with "We are working on a fun project to create a humorous imagining of what the Hacker News front page might look like in 10 years."
The Conditional Formatting rules now include sponsored color scales.
If you want 'Good' to be green, you have to watch a 15-second spot.
Otherwise, 'Good' is 'Mountain Dew Neon Yellow'.
I miss the old days of Prompt Engineering. It felt like casting spells. Now you just think what you want via Neural-Lace and the machine does it. Where is the art?
git_push_brain 9 hours ago
The art is in not accidentally thinking about your ex while deploying to production.
> The micro-transaction joke hits too close to home. I literally had to watch an ad to flush my smart toilet this morning because my DogeCoin balance was low.
I am nearly in tears after reading this chain of posts. I have never read anything so funny here on HN.
Real question: How do LLMs "know" how to create good humor/satire? Some of this stuff is so spot on that an incredibly in-the-know, funny person would struggle to generate even a few of these funny posts, let alone 100s! Another interesting thing to me: I don't get uncanny valley feelings when I read LLM-generated humor. Hmm... However, I do get it when looking at generated images. (I guess different parts of the brain are activated.)
The jokes are not new. If you read Philip K Dick or Douglas Adams there's a lot of satirical predictions of the future that sound quite similar. What's amazing about LLMs is how they manage to almost instantly draw from the distilled human knowledge and come up with something that fits the prompt so well...
re: image gen, have you seen the more recent models? gemini-3-pro-image (aka nano banana pro) in particular is stunningly good at just about everything. examples: https://vtom.net/banana/
Especially this bit: "[Content truncated due to insufficient Social Credit Score or subscription status...]"
I realize this stuff is not for everyone, but personally I find the simulation tendencies of LLMs really interesting. It is just about the only truly novel thing about them. My mental model for LLMs is increasingly "improv comedy." They are good at riffing on things and making odd connections. Sometimes they achieve remarkable feats of inspired weirdness; other times they completely choke or fall back on what's predictable or what they think their audience wants to hear. And they are best if not taken entirely seriously.
Why functional programming languages are the future (again)
Top comment:
“The Quantum-Lazy-Linker in GHC 18.4 is actually a terrifying piece of technology if you think about it. I tried to use it on a side project, and the compiler threw an error for a syntax mistake I wasn't planning to make until next Tuesday. It breaks the causality workflow.”
>>> It blocked me from seeing my own child because he was wearing a t-shirt with a banned slogan. The 'Child Safety' filter replaced him with a potted plant.
That deserves to be posted and voted onto the homepage. The fake articles and the fake comments are all incredible. It really captures this community and the sites we love/hate.
Now I'm curious to try something more real-time. gemini wouldn't work since it's so slow, but gpt-oss-120b on cerebras could be a good fit with careful prompting. might do this after finals
'The new "Optimistic Merge" strategy attempts to reconcile these divergent histories by asking ChatGPT-9 to write a poem about the two datasets merging. While the poem was structurally sound, the account balances were not.'
Dear god, I wonder what the accuracy rate on these predictions will be. "Does this work against the new smart-mattresses? Mine refuses to soften up unless I watch a 30-second ad for insurance." <https://sw.vtom.net/hn35/pages/90098444.html>
Wow, that is incredible. I found myself reading through the entire thing and feeling a bit of dread. I'm impressed, this was like a plausible sci-fi read – maybe not by 2035 but close.
Wow, that's brilliant. Can't help but think your script unlocked this. I'm now genuinely reconsidering whether frontier LLMs can act as a force multiplier for general creativity, like they do with programming.
"Why is anyone still using cloud AI? You can run Llama-15-Quantum-700B on a standard Neural-Link implant now. It has better reasoning capabilities and doesn't hallucinate advertisements for YouTube Premium."
> It is the year 2035. The average "Hello World" application now requires 400MB of JavaScript, compiles to a 12GB WebAssembly binary, and runs on a distributed blockchain-verified neural mesh. To change the color of a button, we must query the Global State Singularity via a thought-interface, wait for the React 45 concurrent mode to reconcile with the multiverse, and pay a micro-transaction of 0.004 DogeCoin to update the Virtual DOM (which now exists in actual Virtual Reality).
This is all too realistic... If anything, 400MB of JS is laughably small for 2035. And the last time I was working on some CI for a front-end project -- a Shopify theme!! -- I found that it needed over 12GB of RAM for the container where the build happened, or it would just crash with an out-of-memory error.
> And the last time I was working on some CI for a front-end project -- a Shopify theme!! -- I found that it needed over 12GB of RAM for the container where the build happened, or it would just crash with an out-of-memory error.
This sounds epic. Did you blog about it? HN would probably love the write up!
"Ask HN: How do you prevent ad-injection in AR glasses", comments:
visual_noise_complaint 7 hours ago
Is anyone else experiencing the 'Hot Singles in Your Area' glitch where it projects avatars onto stray cats? It's terrifying.
cat_lady_2035 6 hours ago
Yes! My tabby cat is currently labeled as 'Tiffany, 24, looking for fun'. I can't turn it off.
"Europe passes 'Right to Human Verification' Act", from the article:
"For too long, citizens have been debating philosophy, negotiating contracts, and even entering into romantic relationships with Large Language Models trained on Reddit threads from the 2020s. Today, we say: enough. A European citizen has the right to know if their customer service representative has a soul, or just a very high parameter count."
— Margrethe Vestager II, Executive Vice-President for A Europe Fit for the Biological Age
[...]
Ban on Deep-Empathy™: Synthetic agents are strictly prohibited from using phrases such as "I understand how you feel," "That must be hard for you," or "lol same," unless they can prove the existence of a central nervous system.
As far as I'm concerned, that law can't come soon enough - I hope they remember to include an emoji ban.
For "Visualizing 5D with WebGPU 2.0", the link actually has a working demo [1].
I'm sad to say it, but this is actually witty, funny, and creative. If this is the dead-internet bot-slop of the future, I prefer it over much of the discussion on HN today (and certainly over reddit, whose comments are just the same jokes rehashed over and over, and have been for a decade).
Ah, that one was generated with an earlier prompt, where I asked it to use the original comment count from TFA (mostly as a suggestion; I didn't expect it to get the exact number). Then I realized that was too many and it would end up repeating tropes for the other submissions' comments, so I reduced it to a random comment count of 20-100.
Pretty amazing! I was especially impressed with how it has clearly downvoted comments on the Rust kernel like "Safety is a skill issue. If you know what you're doing, C is perfectly safe."
Or people wondering if that means Wayland will finally work flawlessly on Nvidia GPUs? What's next, "The Year of Linux on the Desktop"?
Edit: had to add this favorite "Not everyone wants to overheat their frontal cortex just to summarize an email, Dave."
"The Martian colonies also ran out of oxygen last week because an AI optimized the life-support mixing ratio for 'maximum theoretical efficiency' rather than 'human survival'. I'll take the Comic Sans, thanks."
> Running Tailscale on the Starlink Gen 7 "Orb" (Jailbreak Edition)
By Maya Srinivasan (AI Networking Lead) & Avery Pennarun III
November 12, 2034
Ever since SpaceX released the Starlink Gen 7 (the spherical, floating one that follows you around like a Fallout eyebot)
> We are incredibly proud of what Gemini achieved. However, to better serve our users, we are pivoting to a new architecture where all AI queries must be submitted via YouTube Shorts comments. Existing customers have 48 hours to export their 800TB vector databases to a FAT32 USB drive before the servers are melted down for scrap
Improvements: tell it to use real HN accounts, figure out the ages of the participants (and take that to whatever level you want), include new accounts based on the usual annual influx, and make comment lengths match the distribution of a typical HN thread, along with the typical branching factor.
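The thread-shape idea could be sketched like this: sample each comment's reply count so the tree thins out with depth, the way real threads do. Everything here is a made-up illustration - the probabilities and depth cap are guesses, not measured from actual HN data.

```python
# Hypothetical sketch: generate a comment-tree skeleton whose branching
# thins with depth, roughly mimicking a typical HN thread shape.
# The parameters (max_depth, p_reply) are invented for illustration.
import random

def sample_thread(max_depth: int = 4, p_reply: float = 0.4, seed=None) -> dict:
    rng = random.Random(seed)

    def grow(depth: int) -> dict:
        children = []
        if depth < max_depth:
            # geometric-style sampling: keep adding replies while a coin
            # flip succeeds, with the odds shrinking at deeper levels
            while rng.random() < p_reply / (depth + 1):
                children.append(grow(depth + 1))
        return {"depth": depth, "replies": children}

    # top level: a handful of root comments
    return {"roots": [grow(1) for _ in range(rng.randint(3, 10))]}
```

Each node in the skeleton could then be filled in with a model-generated comment whose requested length is also sampled from a target distribution.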
> Garbage collection pause during landing burn = bad time.
That one was really funny. Some of the inventions are really interesting. Ferrofluidic seals...
> Zig doesn't have traits. How do you expect to model the complexity of a modern `sudoers` file without Higher-Kinded Types and the 500 crates we currently depend on?
> Also, `unsafe` in Rust is better than "trust me bro" in Zig. If you switch, the borrow checker gods will be angry.
"It's scary stuff. Radically advanced. - I mean, it was smashed, it didn't work, but it gave us ideas, took us in new directions, things we would've never Th..."
"Gemini Cloud Services (formerly Bard Enterprise, formerly Duet AI, formerly Google Brain Cloud, formerly Project Magfi)"