I switched to Gemini with my new phone and I literally couldn't tell a difference. It is actually crazy how small the cost of switching is for LLMs. It feels like AI is more like a commodity than a service.
> I switched to Gemini with my new phone and I literally couldn't tell a difference. It is actually crazy how small the cost of switching is for LLMs. It feels like AI is more like a commodity than a service.
It is. It's wild to me that all these VCs pouring money into AI companies don't know what a value-chain is.
Tokens are the bottom of the value-chain; it's where the lowest margins exist because the product at that level is a widely available commodity.
On top of that, on-device models have gotten stronger and stronger as the base models and RL have improved. What was state of the art two years ago now runs on your laptop.
Which dimensions do you see Google lagging on? They seem broadly comparable on the usual leaderboard (https://lmarena.ai/leaderboard) and anecdotally I can't tell the difference in quality.
I personally tend to stick with ChatGPT most of the time, but only because I somehow prefer the "tone" of the thing. If you forced me to move to Gemini tomorrow I wouldn't be particularly upset.
> Which dimensions do you see Google lagging on? They seem broadly comparable on the usual leaderboard (https://lmarena.ai/leaderboard) and anecdotally I can't tell the difference in quality.
Gemini does indeed hold the top spot, but I feel you framed your response well: they are all broadly comparable. The difference on the synthetic benchmark between the top spot and the 20th was something like 57 points on a scale of 0-1500.
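For context, LMArena scores are Elo-style ratings, so the gap can be translated into a rough head-to-head win rate. A quick sketch, assuming the standard Elo expected-score formula (the 57-point figure is from the comment above; the exact rating system LMArena uses may differ in detail):

```python
# Expected win probability under a standard Elo model:
# P(win) = 1 / (1 + 10^(-rating_diff / 400))
def expected_win_rate(rating_diff: float) -> float:
    return 1.0 / (1.0 + 10 ** (-rating_diff / 400.0))

# A 57-point gap between the #1 and the #20 model:
print(round(expected_win_rate(57), 3))  # ≈ 0.581
```

In other words, the top model would be preferred only about 58% of the time against the 20th-ranked one, which is consistent with "broadly comparable."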
" in many dimensions they lag behind GPT-5 class " - such as?
Outside of compute, "the moat" is also data to train on. That's an even wider moat, and Google has all the data: data no one else has or ever will have. If anything, I'd expect them to outclass everyone by a fat margin. I think we're seeing that with video, at least.
Do you want to model the world accurately or not? That person is part of our authentic reality. The most sophisticated AI in the world will always include that person(s).
a bit weird to think about it since google has literally internet.zip in multiple versions over the years, all of email, all of usenet, all of the videos, all of the music, all of the user's search interest, ads, everything..
> a bit weird to think about it since google has literally internet.zip in multiple versions over the years, all of email, all of usenet, all of the videos, all of the music, all of the user's search interest, ads, everything..
Yeah, Google totally has a moat. Them saying that they have no moat doesn't magically make that moat go away.
They also own the entire vertical, which none of their competitors do: everyone else has to buy compute from someone who profits just on compute (Nvidia, for example), while Google owns everything from silicon to end-user.
Given that Apple’s moat is their devices, their particular spin on AI is very much edge-focussed, which isn’t as spectacular as the current wave of cloud-based LLMs. Apple’s cloud stuff is laughably poor.
Depending on how you look at it, I suppose, but I believe Gemini surpasses OpenAI on many levels now: better photo and video models, and the leaderboards for text and embeddings also put Google above OpenAI.
gemini-2.5-pro is ranked number 1 on LMArena (https://lmarena.ai/leaderboard), ahead of gpt-5-high. In Text-to-Video and Image-to-Video, Google also holds the top places; OpenAI is nowhere.
Yes, but they're also slower. As LLMs start to be used for more general purpose things, they are becoming a productivity bottle-neck. If I get a mostly right answer in a few seconds that's much better than a perfect answer in 5 minutes.
Right now the delay on Google's AI coding assistant is high enough for humans to context switch and do something else while waiting. That's especially costly given that one of the main draws of AI code assistants is rapid iteration.
This is not really true. Google has all the compute but in many dimensions they lag behind GPT-5 class (catching up, but it has not been a given).
Amazon itself did try to train a model (so did Meta) and had limited success.