Hacker News | bayesianbot's comments

I agree with the article and I hold 0 crypto right now. But I still think it's amazing that I can hold something limited, something I can exchange for real money, in my head, just based on math. Sure, it's an extremely inefficient database, and pretty much all the real value needs to be linked to real-world banking, but it does have some really unique features, which makes me sad that it (predictably) turned into just scams and speculation.

Edit: and the other feature I like is that I could just attach my code to the raw banking backend. People say everybody just uses exchanges anyway, and that's true, but if you ever wanted to connect to a traditional banking backend, you'd get buried in paperwork. With crypto, you just run or connect to a node.


> just scams and speculation

The "currency" part is actually the only one that is not a scam, as long as you understand what it is and the trade-offs it makes.

If you do actually have a legitimate reason to use it (because conventional payment rails are not available, or you're doing crime, or need pseudonymity), it is a perfectly fine tool.


At the same time, the CO2 increase measured at Mauna Loa for 2024 was over 3.5 ppm/yr, way up from the ~2.5 ppm/yr seen earlier this decade[0].

The 2025 State of the Climate report[1] said (on top of other horrible things):

> A dangerous hothouse Earth trajectory may now be more likely due to accelerated warming, self-reinforcing feedbacks, and tipping points.

I haven't seen hothouse Earth mentioned in mainstream papers for a long time (a decade or more?), as it was previously deemed unlikely.

Also, the German Physical Society and the German Meteorological Society issued a joint statement warning about the possibility of 3 °C of warming by the 2050s[2].

I am actually angry at people for being irresponsible enough to vote for this without caring about others, and it feels like horrible timing for all this stupidity as well.

[0] https://www.carbonbrief.org/met-office-atmospheric-co2-rise-... [1] https://academic.oup.com/bioscience/advance-article/doi/10.1... [2] https://worldcrunch.com/focus/green-or-gone/global-warming-a...


They've got a solution to that too - page 185 https://www.commerce.gov/sites/default/files/2025-06/NOAA-FY...

"In coordination with the requested terminations for Weather Laboratories and Cooperative Institutes (see OAR-10) and Ocean Laboratories and Cooperative Institutes (see OAR-19), NOAA will close...Mauna Loa"


Re: the likelihood of the hothouse Earth scenario: I don't think the clathrate gun hypothesis [0] was ever really off the table. It's the thing that worries me most about the long-term future, both for myself and my children.

[0] https://en.wikipedia.org/wiki/Clathrate_gun_hypothesis


> Also The German Physics Society and the German Meteorological Society issued a joint statement warning about the possibility of 3 °C warming by the 2050s[2]

All the glaciers will have disappeared too by then. Pinky promise.


I feel like Democrats should make it clear that if there's still a fair election and they regain power, they'll go after both the corrupt people in this administration and the entities buying favors. The current state can't be good for society; at the very least there should be a clear downside to being a part of it.


Not OP but

- Terminal search and focus (you can list kitty tabs and windows and get window content from the socket; implementing a BM25-based search is quite easy)

- Feeding the current terminal content to an AI, so I can do things like run `ls` and then write "Rename the files (in some way)", and push the whole thing to an LLM that replaces the command line without me having to write out the full context

I even have a Codex session finder that uses Codex session files to list and select the session I want, then uses the kitty socket to find and focus the window matching the session content.
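A minimal sketch of what the search part can look like, assuming kitty remote control is enabled (`allow_remote_control yes` plus a `listen_on` socket); the socket path, the function names, and the stripped-down BM25 are all my own, not the commenter's actual code:

```python
import json
import math
import subprocess
from collections import Counter

def list_window_texts(socket="unix:@mykitty"):
    """Ask a running kitty instance for its windows and their screen text.

    Assumes remote control is enabled and the socket matches your config.
    """
    ls = json.loads(subprocess.check_output(
        ["kitty", "@", "--to", socket, "ls"]))
    texts = {}
    for os_win in ls:
        for tab in os_win["tabs"]:
            for win in tab["windows"]:
                texts[win["id"]] = subprocess.check_output(
                    ["kitty", "@", "--to", socket, "get-text",
                     "--match", f"id:{win['id']}"], text=True)
    return texts

def bm25_rank(texts, query, k1=1.5, b=0.75):
    """Rank window ids by a minimal BM25 score for the query terms."""
    docs = {wid: t.lower().split() for wid, t in texts.items()}
    n = len(docs)
    avgdl = (sum(len(d) for d in docs.values()) / max(n, 1)) or 1.0
    scores = Counter()
    for term in query.lower().split():
        df = sum(1 for d in docs.values() if term in d)
        if df == 0:
            continue
        idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
        for wid, d in docs.items():
            tf = d.count(term)
            scores[wid] += idf * tf * (k1 + 1) / (
                tf + k1 * (1 - b + b * len(d) / avgdl))
    return [wid for wid, s in scores.most_common() if s > 0]
```

The top-ranked window can then be focused with `kitty @ focus-window --match id:N`.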


I have thought of things to do with kitty remote control but have always been too lazy to actually write the code. Do you have it open source, by any chance, for me to steal?


Very impressive! I'll look into some of these.


Trump also said he's gonna tell Australia's prime minister about the reporter, which is kinda nuts (and hilarious?)

Old track, but it's just hard to imagine what would have happened if Biden had been asked about his corruption and had answered like that. But it's hypothetical anyway, since no previous president would ever be rug-pulling crypto scams or selling watches and bibles.

I just can't believe that weekly, or sometimes daily, I share these wild stories and videos with friends and they keep behaving like any of this is normal. There are so many things that would make me go WTF even without the context of the constant grift it all comes with.


> SIDENOTE: If you want 2 way audio to work in frigate you must use the tapo:// go2rtc configuration for your main stream instead of the usual rtsp://. TP-Link are lazy and only implement 2 way audio on their own proprietary API.

Annoyingly, when this is in use, I can't use ONVIF, which seems like the only way to pan and tilt the camera using open tools. So if I want two-way audio and camera control, I have to stop the process reading the tapo:// stream, start the ONVIF client and rotate, then turn off the ONVIF client and start streaming via tapo:// again.


I've been extremely impressed (and have actually had quite a good time) with GPT-5 and Codex so far. It seems to handle long context well, does a great job researching the code, never leaves things half-done (on long tasks it may leave some steps for later, but it never does 50% of a step and then randomly mocks a function like Gemini used to), and gives me good suggestions when I'm trying to do something I shouldn't. The Codex CLI also seems to be getting constant, meaningful updates.


Agreed. We're hardcore Claude Code users and my CC usage trended down to zero pretty quickly after I started using Codex. The new model updates today are great. Very well done OpenAI team!! CC was an existential threat. You responded and absolutely killed it. Your move Anthropic.


To be fair, Anthropic kind of did this to themselves. I consider it a pretty massive throw on their end, given the fairly tight grasp they had on developer sentiment.

Everyone else slowly caught up and/or surpassed them while they simultaneously had quality-control issues and service degradation plaguing their system, ALL while having the most expensive models relative to their intelligence.


Agreed. I really wish Google would get their act together, because I think they have the potential to be faster and cheaper, with bigger context windows. They're great at hardcore science and engineering, but they absolutely suck at products.


I really do not want Google to win anything. They're a giant monopoly across multiple industries. We need a greater balance of power.

Antitrust enforcement has been letting us down for over two decades. If we don't have an oxygenation event, we'll go an entire generation where we only reward tax-collecting, non-innovation capital. That's unhealthy and unfair.

Our career sector has been institutionalized and rewards the 0.001% even as they rest on their laurels and conspire to suppress wages and innovation. There's a reason why centicorns petered out and why the F500 is tech-heavy. It's because big tech is a dragnet that consumes everything it touches - film studios, grocery stores, and God only knows what else it'll assimilate in the unending search for unregulated, cancerous growth.

FAANG's $500k TC comes at the expense of hundreds of unicorns that would make their ICs even wealthier. That money mostly winds up with institutional investors, where it sits parked instead of flowing into high-stakes risks and cutthroat competition. That's why a16z and YC want to see increased antitrust regulation.

But it's really bad for consumers too. It's why our smartphones are stagnant taxation banana republics with one of two landlords. Nothing new, yet as tightly controlled as an authoritarian state. New ideas can't be tried and can't attain healthy margins.

It's wild that you can own a trademark, but the only way for a consumer to access it is to use a Google browser that defaults to Google search (URLs are scary), where the search results will be gamed by competitors. You can't even own your own brand anymore.

Winning shouldn't be easy. It should be hard. A neverending struggle that rewards consumers.

We need a forest fire to renew the ecosystem.


Google supposedly claimed to have no moat, but they actually have:

- all the users

- all the apps (Google, GMail, YouTube, Docs, Maps...)

- all the books (Google Books)

- all the video (YouTube)

- all the web pages

- custom hardware

It's honestly weird they aren't doing better. Agree that the models are great and the UX is bad all around.


Google has been, for at least a decade, making pretty terrible choices that squander developer and power-user goodwill (see: any thread where they announce a new product and one of the top comments will link to killedbygoogle). When you've burnt bridges with your biggest evangelists, adoption by normies slows, and your products appear to stagnate.

Unfortunately, they've been insulated from the consequences of their bad decisions by the fact the money printer (ads) keeps their company afloat and mollifies shareholders. The moment that dries up, they're in trouble.


We say this (I admit I would say the same as you), and yet their revenue is $400 billion a year.

I don't think they care what we think. They're thriving despite our protests.

But yeah, they shouldn't be shielded from antitrust. They have literally everything.


Hey now, let's not forget it. They also have:

- all the lobbyists

- all the money


Google can do anything but get their act together.


I think this is being downvoted because it doesn't seem to really respond to the thread, and maybe it doesn't, but for anyone who hasn't tried Gemini CLI:

My experience after a month or so of heavy use is exactly this. The AI is rock solid. I'm pretty consistently impressed with its ability to derive insights from the code, when it works. But the client is flaky, the backend is flaky, and the overall experience for me is always "I wish I could just use Claude".

Say 1 in 10 queries craps out (often the client OOMs even though I have 192 GB of RAM). That sounds like a 10% reliability issue, but it actually pushes me into "fuck this, I'll just do it myself" mode, so it knocks out like 50% of the value of the product.

(Still, I wouldn't be surprised if this gets fixed over the next few months; it could easily be very competitive, IMO.)


I have been heavily using the Gemini API via Aider for a few months and it has been absolutely stable. Claude, in comparison, has been much flakier. OpenAI somewhere in between.


It's definitely possible there's a "grass is always greener" effect going on here, to be fair.

None of these tools give the impression of being well-tested software. My guess is that neither OpenAI nor Anthropic actually has the necessary density of expertise to build quality software. Google obviously can build good software _when it really wants to_, but in this space its strategy looks like "build the products the other guys are building, cut whatever corners necessary to do this absolutely as fast as possible".

So even if my initial impressions are more accurate it's quite possible Google wins long term here.


Semi-related, but I have the same experience with the Gemini mobile app on Android. ChatGPT and Claude are both great user experiences, and the best word to describe how the Gemini app feels is flaky.


Just adding my two cents after test driving Gemini Ultra after being a long time ChatGPT Pro subscriber:

Remember the whole “Taken 3 makes Taken 2 look like Taken 1” meme? Well Google’s latest video generating AI makes any video gen AI I’ve seen up until now look like Taken 3* (sigh, I said 1, ruined it) - and they are seriously impressive on their own.

Edit: By “they” I mean the other video-generating AI models, not the other Taken movies. I hope Liam Neeson doesn't read HN, because a delivery like that might not make him laugh.


You're absolutely right!


GPT-5 writes clean, simple code and listens to instructions. I went from tons of Claude API usage to basically none overnight.


Agreed. GPT’s coding is so much cleaner. Claude tends to ramble and generate unnecessary scaffolding. GPT’s code is artful and minimalist.


I would sincerely like to understand what steps convincingly moved you down to zero CC usage. I've seen enough hits and misses with Codex to feel like it tries really hard to be good, and in some ways it is (the out-of-the-box context management seems like a pretty smooth, batteries-included feature). But in some ways that are important to me, it keeps falling on its face, like giving up on what it deems too complex a task (in my case, porting a pretty robust JS deobfuscation tool, which works but is mad slow, over to Rust), and that has prevented me from feeling full of confidence and speculative joy about it so far. It caught and fixed some bugs after a few turns of renewing context, but I was doing that with CC (with better walkthroughs as it did its thing), so it felt underwhelming. As anecdotal as my experience sounds, I still feel like with every "new"-ish thing thrown at us in AI tooling, the hype does not live up to the reality, FOR ME.


But how do you use it?

It's super annoying that it doesn't provide a way to approve edits one by one; instead it either vibe-codes on its own or gives me diffs to copy-paste.

Claude code has a much saner "normal mode".


Wait, this wasn't what I was experiencing. Did something change in gpt-5-codex or was that your normal experience?


I asked you how you use it.

Is it via CLI? Is it via extension to an editor? What is your flow?


This just goes to show how crucial it was for Anthropic and OpenAI to hire first class product leads. You can’t just pay the AI engineers $100M. Models alone don’t generate revenue.


I got the exact opposite lesson. The parent and grandparent comments seem to be talking about dropping one product for another purely on the strength of the model.


the model is the product


My experience with Codex / GPT-5:

- The smartest model I have used. Solves problems better than Opus-4.1.

- It can be lazy. With Claude Code / Opus, once given a problem, it will generally work until completion. Codex often performs only the first few steps and then asks if I want it to continue with the rest. It does this even if I tell it not to stop until completion.

- I have seen severe degradation near max context. For example, I have seen it just repeat the next steps every time I tell it to continue and I have to manually compact.

I'm not sure if the problems are GPT-5's or Codex's. I suspect a better Codex could resolve them.


Claude seems to have gotten worse for me, with both that kind of laziness and a new pattern where it will write the test, write the code, run the test, and then declare that the test is working perfectly but there are problems in the (new) code that need to be fixed.

Very frustrating, and happening more often.


They for sure nerfed it within the last ~3 weeks. There's a measurable difference in quality.


They actually just shipped a bug fix, and it seems to have gotten a lot better in the last week or so.


Context degradation is a real problem with all frontier LLMs. As a rule of thumb I try to never exceed 50% of available context window when working with either Claude Sonnet 4 or GPT-5 since the quality drops really fast from there.
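That 50% rule of thumb can be sketched as a tiny helper; the ~4 characters per token ratio and the 200k window are rough assumptions of mine, not anything the providers guarantee:

```python
def context_usage(transcript: str, context_window: int = 200_000,
                  chars_per_token: float = 4.0) -> float:
    """Estimate the fraction of the context window a transcript occupies.

    Uses the rough ~4 characters per token heuristic; a real tokenizer
    (e.g. tiktoken) will disagree by a fair margin either way.
    """
    est_tokens = len(transcript) / chars_per_token
    return est_tokens / context_window

def should_compact(transcript: str, budget: float = 0.5, **kw) -> bool:
    """Apply the 50%-of-context rule of thumb described above."""
    return context_usage(transcript, **kw) > budget
```

In practice you would feed this the accumulated conversation plus tool output and compact (or start a fresh session) once it trips.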


I've never seen that level of extreme degradation (just making a small random change and repeating the same next steps infinitely) on Claude Code. Maybe Claude Code is more aggressive about auto compaction. I don't think Codex even compacts without /compact.


I think some of it is not auto-compaction but the built-in tooling. For example, Claude Code very frequently reminds the model what it's working on and should be doing, which keeps its tasks in the most recent context, and overall there's some pretty serious thought put into its system prompt and tooling.

But they have suffered quite a lot of degradation and quality issues recently.

To be honest, unless Anthropic does something very impactful soon, I think they're losing the moat they had with developers as more and more jump to Codex and other tools. They kind of massively threw their lead, imo.


Yeah, I think you are right.


Agreed, and judicious use of subagents to prevent pollution of the main thread is another good mitigant.


I cap my context at 50k tokens.


Yes, this is the one thing stopping me from moving to Codex completely. Currently it's kind of annoying that Codex stops often and asks me what to do, and I just reply "continue", even though I already gave it a checklist.

With GPT‑5-Codex they do write: "During testing, we've seen GPT‑5-Codex work independently for more than 7 hours at a time on large, complex tasks, iterating on its implementation, fixing test failures, and ultimately delivering a successful implementation." https://openai.com/index/introducing-upgrades-to-codex/


I definitely agree with all of those points. I just really prefer it completing steps and asking whether we should continue to the next one, rather than doing half a step and telling me it's done. And the context degradation seems quite random: sometimes it hits way earlier, sometimes we go through a crazy amount of tokens and it all works out.


I also noticed the laziness compared to the Sonnet models, but now I feel it's a good feature. Sonnet models, I now realize, are way too eager to hammer out code, with a much higher likelihood of bugs.


Gemini seems to be pretty awful at agentic coding. It always claims to finish the task, and when I look at the result, it has just broken my code.

I'm not sure the fault is that it writes bad code; I guess it's just not good at being agentic. I've seen this with Gemini CLI and other tools.

GLM, Kimi, and Qwen-Code all behave better for me.

Gemini 3 will probably fix this, as Gemini 2.5 Pro is "old" by now.


Gemini CLI is bad; the model itself is really good.


Agreed. I ditched my Claude Code Max for the $200 ChatGPT Pro.

Gemini CLI is too inconsistent; it's good for documentation tasks, but don't let it write code for you.


Gemini's tool calling being so bad is pretty amazing. Hopefully they fix it in the next iteration, because the model itself is very good.


This is a recurring theme with Google. Their models are phenomenal, but the systems around them are so bad that they degrade the whole experience. Veo 3: great model, horrible website, and so on...


Their massive increase in token processing since Veo 3 and Nano Banana were released would say otherwise...

Or we're all just used to eating things we don't like and smiling.


That has been my experience as well with every Gemini model, ugh!


Can someone compare it to Cursor? So far I see people comparing it with Claude Code, but I've had much more success and cost-effectiveness with Cursor than with Claude Code.


Doesn’t compare, because Cursor has a privacy mode. Why would anyone want to pay OpenAI or Anthropic to train their bots on your business codebase? You know where that leads? Unemployment!


It doesn't seem to have any internal tools it can use. For example, web search: it just runs curl in the terminal. Compared to Gemini CLI that's rough, but it does handle pasting much better... Maybe I'm just using both wrong...


It does have web search; it's just not enabled by default. You can enable it with --search or in the config, and then it can absolutely search, for example to find manuals/algorithms.


Thanks!


Use the --search option when you start Codex.


Thanks!


web search too is off by default


Have you used Claude Code? How does it compare?


It's objectively a big improvement over Claude Code. I'm rooting for Anthropic, but they'd better make a big move or this will kill CC.


What are the usage limits like compared to Claude Code? Is it more like 5× or 20×? For twice the price, it would have to be very good.


https://help.openai.com/en/articles/11369540-using-codex-wit...

I have to say I'm not sure what this even means, or what the exact definition of a "message" is in this context.

With Claude Code Max 20x I was constantly hitting limits; with Codex, not once yet.


Same. We're not hitting limits at all with Codex and it's ridiculously good at managing and preserving its context window while getting a metric fuckton of work done. It's kind of unbelievable actually. I don't know re billing. Not my dept.


Are you talking about Codex CLI or their GitHub integration?

GPT-5 is a great model. I tried the Rust Codex CLI, as they seem to be deprecating the JS version, and it is awful. I don't know what possessed them to try to write a TUI in Rust, but it isn't working. The Claude Code UI is hugely superior.


> then just randomly mock a function like Gemini used to

Claude Code does that on longer tasks.

Time to give Codex a try I guess.


I would never have bought one before, but nowadays it could actually be useful. You could have Codex or Claude Code in your pocket and check the work and write a new prompt every ~15 minutes. Tablets are too big (for me) to constantly carry around for this, and phones are annoyingly small for that use.


Usually you'd have some predefined widths, like 33%, 50% and 67%, and use a shortcut to cycle between them. And you can define window rules to start some applications with different width than your default.

edit: as a fellow niri user, I recommend people try it. I think it's one of the easiest tiling WMs to get into; it feels very natural within minutes.
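For anyone curious, the preset widths and per-app rules described above look roughly like this in niri's config.kdl (a sketch from memory; `firefox` and the exact proportions are placeholders, so check the niri wiki for the current syntax):

```kdl
layout {
    // Widths that switch-preset-column-width (Mod+R by default) cycles through.
    preset-column-widths {
        proportion 0.33333
        proportion 0.5
        proportion 0.66667
    }
}

// Window rule: start a particular app at a non-default width.
window-rule {
    match app-id="firefox"
    default-column-width { proportion 0.66667; }
}
```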


Am I the only one who thinks the way plugins are updated in lazy.nvim (and probably others) is a bit insane? It seems to just pull the latest commits. Every time I update, I feel like I'm one rogue commit away from someone stealing my keys. It definitely feels like the riskiest thing I do on my system. Or have I misunderstood something?


Thanks, new fear unlocked for me :')

For me, lazy.nvim doesn't pull the latest commits automatically; I have to <leader>-L and Shift-U it. And I don't do it often, precisely so that if there's an issue with a plugin, it's caught by others and addressed before I update mine.


You are right to be worried about such practices. This is why I avoid these things entirely. It's a bit more hassle but a lot less risk. Once you have a good config you can just roll with that anyhow. But I guess in the same vein, I don't use a lot of plugins.

The number of times people have been owned by rogue plugins via editors is rising each day...


So you mean you review all the plugin code before you add it? And when there's an update you review the changes?


So far I’ve just YOLO'd it. But if I install other software directly from git and the source isn't fully reliable, I'll usually at least check the recent changes, or have Codex look through the source, just like I read through PKGBUILDs when installing from the AUR. It feels crazy that I then update LazyVim and suddenly pull in 150 new commits, some just minutes old, all with free access to my system.
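One way to at least skim what an update dragged in, sketched under the assumption that the plugin directories are plain git checkouts updated via a pull (lazy.nvim's own updater may move HEAD differently, in which case you'd diff against the revision recorded in lazy-lock.json instead); all function names here are mine:

```python
import subprocess
from pathlib import Path

def parse_oneline(log_output: str) -> list[str]:
    """Split `git log --oneline` output into one summary per commit."""
    return [line for line in log_output.splitlines() if line.strip()]

def new_commits(repo: Path, since_ref: str = "ORIG_HEAD") -> list[str]:
    """List one-line summaries of commits that arrived in the last pull.

    Git sets ORIG_HEAD on pull/merge, so right after an update this shows
    what came in; point since_ref at a pinned commit if you track one.
    """
    out = subprocess.check_output(
        ["git", "-C", str(repo), "log", "--oneline", f"{since_ref}..HEAD"],
        text=True)
    return parse_oneline(out)

def audit_lazy_plugins(lazy_root=Path.home() / ".local/share/nvim/lazy"):
    """Print what each lazy.nvim-managed plugin pulled in after an update."""
    for repo in sorted(p for p in lazy_root.iterdir()
                       if (p / ".git").exists()):
        try:
            commits = new_commits(repo)
        except subprocess.CalledProcessError:
            continue  # no ORIG_HEAD yet, i.e. never pulled
        if commits:
            print(f"{repo.name}: {len(commits)} new commit(s)")
            for c in commits:
                print("  ", c)
```

It won't catch a carefully hidden backdoor, but it at least turns "150 commits, some minutes old" into something you can eyeball before trusting.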


If you manually update infrequently, you leave a period for other people to get burned and flag issues before you pull the change, even if you don't look into it yourself.


If your update is the simplest version, a "git pull", then you're incorporating commits that haven't "stewed" long enough for anyone to be burned. You might win the lucky ticket! (Saying this as someone who rarely updates nvim plugins, out of forgetfulness rather than principle, and when they are updated I believe it IS a simple "git pull"...)


With a plugin manager you can also update infrequently


I mostly do, yes. There are exceptions for very mainstream, big plugins, but for the most part I at least skim the new plugin code before committing it to my dotfiles repo. A nice thing about this ecosystem is that, for the most part, things don't change that quickly or often, and big refactors are quite rare.

