Hacker News | Liwink's comments

It appears they just want to avoid responsibility for potential misuse in these areas.

But at the same time, IIRC, several major AI providers had publicly reported their AI assisting patients in diagnosing rare diseases.


I think it's very cynical to say that this is a misuse. And it's definitely cynical when this categorization of misuse comes from the service provider itself. If OpenAI doesn't want to allow misuse, they can just decommission their service. But they don't want to do that; they just want to take the money and push all the responsibility and burden onto the users, even though they are actively engaging in said "misuse".


I'd recommend one more step - after closing the laptop, bring a notebook and a pen with you.

People often get new ideas or unblocked somehow after stopping the work. If this happens, don't open the laptop again. Write it down.


What percentage of e-commerce will be taken by OpenAI by the end of 2026?

It appears to me that they are already well-positioned to become the next generation of Amazon with their current user base -

* AWS -> OpenAI APIs

* Amazon -> ChatGPT Shopping


It took a while for people to trust putting credit cards into websites. So general adoption will take some time.


There is a tutorial page: https://www.gmcmap.com/tutorial.asp

Oh wait

"You do not have permission to access this document."


Odd, works for me. Maybe problems from the HN hug?

It probably doesn't have the information anyone wants though, as it's the tutorial for activating your GC Electronics device to report to the site.


I cannot imagine what would happen if Tesla had a similar vulnerability and someone took over all of its vehicles. Or maybe someone is already able to do that, and is just waiting.


Why only mention Tesla when the market for EVs is getting quite broad? How about Hyundai, VW, one of the many Stellantis brands, Lynk, BYD, XPeng, Xiaomi or any of the others?


I think Tesla would still be a different beast, given that 1) they are constantly touted for having the best software on the market and 2) they are frequently promoted as being "basically ready for self driving, it's just a regulatory issue".

Companies like VW have had their somewhat embarrassing issues in the not so distant past, but nobody I know likes them (or values their stock) based on their software capabilities.


Owning a Tesla myself, I think the mention is valid since it's the only brand I know of with a decent share of people regularly letting the car drive itself. I am not aware of any other brand being in that same situation


Waymo :P


Just look at the total number of Teslas on the road and you have your answer as to why that would be a lot less bad.


Sure, but it's another brand where people regularly let the cars drive themselves.


Gemini 2.5 Flash is an impressive model for its price. However, I don't understand why Gemini 2.0 Flash is still popular.

From OpenRouter last week:

* xAI: Grok Code Fast 1: 1.15T

* Anthropic: Claude Sonnet 4: 586B

* Google: Gemini 2.5 Flash: 325B

* Sonoma Sky Alpha: 227B

* Google: Gemini 2.0 Flash: 187B

* DeepSeek: DeepSeek V3.1 (free): 180B

* xAI: Grok 4 Fast (free): 158B

* OpenAI: GPT-4.1 Mini: 157B

* DeepSeek: DeepSeek V3 0324: 142B


My one big problem with OpenRouter is that, as far as I can tell, they don't provide any indication of how many companies are using each model.

For all I know there are a couple of enormous whales on there who, should they decide to switch from one model to another, will instantly impact those overall ratings.

I'd love to have a bit more transparency about volume so I can tell if that's what is happening or not.
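To make that skew concrete, here's a toy sketch (all names and numbers invented) showing how one whale can dominate a token-based leaderboard even when far more distinct API keys use the other model:

```python
from collections import defaultdict

# Hypothetical usage log: (api_key, model, tokens). One "whale" key
# dominates model-a's token count; model-b has more distinct users.
events = [
    ("whale", "model-a", 900_000_000),
    ("user1", "model-b", 10_000_000),
    ("user2", "model-b", 12_000_000),
    ("user3", "model-b", 8_000_000),
    ("user4", "model-a", 1_000_000),
]

tokens = defaultdict(int)
users = defaultdict(set)
for key, model, n in events:
    tokens[model] += n
    users[model].add(key)

# Ranked by tokens, model-a looks ~30x more popular; ranked by
# distinct API keys, model-b comes out on top.
by_tokens = max(tokens, key=tokens.get)
by_users = max(users, key=lambda m: len(users[m]))
print(by_tokens, by_users)  # model-a model-b
```

Either metric alone can mislead; publishing both would make whale-driven swings obvious.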


Granted, due to OpenRouter's 5.5% surcharge, any enormous whales have a strong financial incentive to use the provider's API directly.

A "weekly active API Keys" faceted by models/app would be a useful data point to measure real-world popularity though.



Aggregating by tokens causes the problem simonw mentions, in that one power user can skew the chart too much.


Right, that chart shows App usage based on the user-agent header but doesn't tell you if there is a single individual user of an app that skews the results.


I was skewing the Gemini stats with my Aider usage. It's basically the only model I'm using with OpenRouter, until I recently started running qwen3-next locally.

2.5 is probably the best balance for tools like Aider.


I know we have a lot of workloads at my company on older models no one has bothered to upgrade yet


Hell yeah, GPT-3.5 Turbo


There are cheaper models. Could cut the bill in half or more.


davinci-001 xd


Primarily classification or something else?


Price. 2.0 Flash is cheaper than 2.5 Flash but is still a very good model.


API usage of Flash 2.0 is free, at least until you hit a very generous limit. It's not simply a trial period; you don't even need to register any payment details to get an API key. This might be a reason for its popularity. AFAIK only some Mistral offerings have a similar free tier?


Yeah, that's my use case. When you want to test some program/script that has an LLM in the middle, and you just want to make sure everything non-LLM-related is working. It's free! Just try again and again till it "compiles", then switch to 2.5.


wow this would be great for a webapp/site that just needs a basic/performant LLM for some basic tasks.


You might hit some throttling limits. During certain periods of the day, at least in my location, some requests are not served.

It might not be OK for that kind of use case, or might breach the ToS.

But it's still great. Even my premium Perplexity account doesn't give me free API access.


Gemini 2.0 Flash is the best fast non-reasoning model by quite a margin. Lots of things don't require any reasoning.


Maybe the same reason why they kept the name for the 2.5 Flash update.

People are lazy about switching to the latest name.


2.0 Flash is significantly cheaper than 2.5 Flash, and is/was better than 2.5-Flash-Lite before this latest update. It's a great workhorse model for basic text parsing/summary/image understanding etc. Though looks like 2.5-Flash-Lite will make it redundant.


Why is Grok so popular?


Grok Code Fast 1 usage is driven almost entirely by Kilo Code and Cline: https://openrouter.ai/x-ai/grok-code-fast-1/apps

Both apps have offered usage for free for a limited time:

https://blog.kilocode.ai/p/grok-code-fast-get-this-frontier-...

https://cline.bot/blog/grok-code-fast


Yep Kilo (and Cline/Roo more recently) push these free trial of the week models really hard, partially as incentive to register an account with their cloud offering. I began using Cline and Roo before "cloud" features were even a thing and still haven't bothered to register, but I do play with the free Kilo models when I see them since I'm already signed in (they got me with some kind of register and spend $5 to get $X model credits deal) and hey, it's free (I really don't care about my random personal projects being used for training).

If xAI in particular is in the mood to light cash on fire promoting their new model, you'll see it everywhere during the promo period, so not surprised that heavily boosts xAI stats. The mystery codename models of the week are a bit easier to miss.


It's pretty good and fast af. At backend stuff it's roughly gpt5-mini in capability, writes OK code, and works well with agentic extensions like Roo/Kilo. My colleagues said it handles frontend creation so-so, but it's so fast that you can "roll" a couple of tries and choose the one you want.

Also cheap enough to not really matter.


Yeah, the speed and price are why I use it. I find that any LLM is garbage at writing code unless it gets constant high-entropy feedback (e.g. an MCP tool reporting lint errors, a test, etc.) and the quality of the final code depends a lot more on how well the LLM was guided than the quality of the model.

A bad model with good automated tooling and prompts will beat a good model without them, and if your goal is to build good tooling and prompts you need a tighter iteration loop.
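That feedback loop can be sketched in a few lines. This is a hypothetical illustration, not any particular tool's implementation: `generate` stands in for whatever LLM call you use, and Python's own `compile` stands in for a real linter:

```python
def lint_feedback(code: str) -> str:
    """Stand-in for a linter: use Python's compiler to check syntax and
    return a complaint string, or '' when the code is clean."""
    try:
        compile(code, "<candidate>", "exec")
        return ""
    except SyntaxError as e:
        return f"line {e.lineno}: {e.msg}"

def iterate(generate, task: str, max_rounds: int = 5) -> str:
    """Hypothetical loop: call the model, feed lint errors back into the
    prompt, and stop once the candidate passes (or rounds run out)."""
    prompt = task
    code = ""
    for _ in range(max_rounds):
        code = generate(prompt)
        errors = lint_feedback(code)
        if not errors:
            return code
        prompt = f"{task}\nYour last attempt failed lint:\n{errors}\nFix it."
    return code
```

With a fast, cheap model, each extra round of the loop costs little, which is the whole argument for tight iteration over raw model quality.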


This is so far off my experience. Grok 4 Fast is straight trash; it doesn't come close to producing decent code for what I tried. Meanwhile Sonnet is miles better. And even then, Opus, while I guess technically only slightly better, is in practice so much better that I find it hard to use Sonnet at all.


Not Grok 4, the code variant of Grok. I think it's different - I agree with you Grok 4 kind of sucks.


I meant to say the code variant actually, my bad. I found it significantly worse.


I think it has been free in some editor plugins, which is probably a significant factor.

I would rather use a model that is good than a model that is free, but different people have different priorities.


The non-free one has double the usage of the free one. The free one uses your data for training.


I mean, I can kinda roll through a lot of iterations with this model without worrying about any AI limits.

Y'know, with all these latest models the lines are kinda blurry; the definition of "good" is getting foggy.

So it might as well be free as the definition of money is clear as crystal.

I also used it for some time to test something really, really niche, like building a Telegram bot in Cloudflare Workers, and grok-4-fast was kinda decent at that for the most part actually. So that's nice.


They had a lot of free promos with coding apps. It's okay and cheap, so I bet some stuck with it.


I think it's very cheap right now.


I think it is included for free in some coding products.


It came from nowhere to 1T tokens per week, seems… suspect.


it was free


It’s cheaper and faster. What’s not to understand?


You can get it to be unhinged as well. It's awesome.


I remember a company saying its most effective ads were search ads for their own name... like what Midjourney does:

https://postimg.cc/8JwL9WFx


> I remember a company saying its most effective ads were search ads for their own name

I don't have the full context, but this is almost a tautology. Of course you get the highest click-through-rate and highest conversion for searches that are your own name. You usually also get a relatively cheap bid, because most search engines prefer to prioritize relevant results, and you will be very relevant for your own name. But you would have gotten most of those clicks and conversion _for free_ even if you didn't advertise on your name, because the searcher would see your organic result. Advertising on your own name is defensive, not offensive -- you protect customers that are already yours, you don't get new ones.

source: I run marketing for a small business, we advertise on our own name too, and of course it is also the most effective if you calculate it naively.


The next generation vehicle is much bigger https://waymo.com/blog/2024/08/meet-the-6th-generation-waymo...


Is that not the current generation, with four passengers? https://support.google.com/waymo/answer/9059053


That doesn't seem to say anything about the seating capacity?


A more comfortable 5 seat arrangement with easier entry than the Jaguar: https://waymo.com/media-resources/#:~:text=Side%20view%20of%...


I'm usually the Anna in the group, and always appreciate being remembered, even though it's not easy for me to say no.


> But don't forget China still emit 16B compared to US who emit 6B tons of CO2

China has 1.4 billion people, while U.S. has 340 million people.


And China makes all of our stuff. Instead of putting tariffs on solar from China, we should have dropped a trillion dollars on it and put it everywhere.


Before you drop a trillion dollars, you do a cost-benefit analysis and factor in switching costs, the unique geography and population distribution of the U.S., the expected lifespan of solar panels, the battery install capacity necessary to cover nighttime and 100-to-1000-year weather emergencies, the capacity to keep the grid online in the event of a world war, the cost to install HV lines to transport power from solar hubs, etc.

You don't dogmatically order $1 trillion of something and sacrifice a functional, independent, diverse, weather-resilient, geographically distributed energy grid that's served the nation that invented the light bulb for over 125 years, because you read a clickbait headline about China.


> the capacity to keep the grid online in the event of a world war

If I'm learning anything from Russia, it's that fossil fuel plants are hella vulnerable in a war. Solar would be much safer.

Fossil Fuel Plant: Knock out the right machine or building and you knock out the plant. The plant is literally storing explosives. The plant must be resupplied which leaves supply trucks/boats/pipelines vulnerable.

Solar: Distributed over a large area. Made of many independent complete power-generating devices so if you knock out 5% of them, all you've accomplished is reducing power output by 5%. Does not need a constant flow of supplies.


> sacrifice a functional ....

Who said we are sacrificing anything? We only gain: distributed, diverse generating capacity at under $1/watt.

Your comment makes no sense.


Per capita is a better metric but it's worth noting that China is world's factory - it's easy to reduce emissions if you offshore a lot of your production elsewhere...


And China has the US beat per capita in emissions.


Those people aren't making decisions. Measure emissions per government. They have one government.

The factories are responsible for the pollution from goods, and the state is responsible for the factories it controls.


And the US, like most of the West, has outsourced its most carbon-intensive manufacturing.


And looking at historical emissions, the US has contributed 25% of all emissions vs. China's 15%.

