
I think that short film is AI generated. I only watched like 30 seconds of an office scene in the middle but it spontaneously changed from daytime to nighttime with zero explanation.


He says it's not: https://x.com/jasonjoyride/status/1973164183798816773

>> How do you get HD renders? im getting like super low res shit

>It's because this isn't AI

I haven't watched the film, but the premise is something about an orbiting space station. I could easily imagine scenes featuring rapid day/night cycles like the ones astronauts experience on the ISS (which orbits roughly every 90 minutes, so about 16 sunrises and sunsets a day).


Even if most of the code you write is solving repetitive plumbing tasks, today's models have remarkably poor taste in API design. IMO, designing software so that it minimizes side effects and is easy to change and test is more than 1% of software engineering.
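To make that concrete, here's a toy sketch (hypothetical names, not from any real codebase) of the style I mean: keep the core logic a pure function and push the side effect to the edge, so the part that matters is trivial to test and safe to change.

    from dataclasses import dataclass

    @dataclass
    class Order:
        subtotal: float
        is_member: bool

    # Pure pricing rule: no I/O, no globals, easy to unit test.
    def total_with_discount(order: Order) -> float:
        discount = 0.10 if order.is_member else 0.0
        return round(order.subtotal * (1 - discount), 2)

    # The side effect (charging the card) lives at the edge and just
    # consumes the pure result; hand it a fake gateway in tests.
    def charge(order: Order, gateway) -> None:
        gateway.charge(total_with_discount(order))

    assert total_with_discount(Order(subtotal=100.0, is_member=True)) == 90.0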

Lately, most of the code I write has been through LLMs, and I find them an enormous productivity booster overall. But despite the benchmarks they're not quite at expert-human level yet, and they need a LOT of coaxing to produce production-quality code.

As far as things LLMs are bad at, I think it's mainly the long tail. I'm not sure there's one singular thing that >1% of programmers work on that LLMs suck at, but I think there are thousands of different weird sub-specialties that almost no one is working on and very little public code exists for, thus LLMs are not good at them yet.


Yes, LLMs are currently useful and are improving rapidly, so they are likely to become even more useful in the future. I think "inevitable" is a pretty strong word, but barring government intervention or geopolitical turmoil I don't see signs of LLM progress stopping.


Why would they progress significantly beyond where they are now? An LLM is an LLM. More tokens doesn't mean better capabilities; in fact, quite the opposite seems to be the case, which suggests smaller models aimed at specific tasks are the "future" here.


I think that's more reflective of the deteriorating relationship between OpenAI and Microsoft than a true lack of demand for datacenters. If a major model provider (OpenAI, Anthropic, Google, xAI) were to see a dip in available funding or stop focusing on training more powerful models, that would convince me we may be in a bubble about to pop, but there are no signs of that as far as I can see.


I like those vehicles, honestly -- delivery trucks are going to park in the bike lane regardless and these are much smaller and safer to maneuver around. I want to see more of them and hope it leads to more bike lanes being built in NYC.


Neovim is fully backwards compatible, no? I'm not sure what the downside of switching is.


Most LLMs do this due to the proliferation of ChatGPT-generated content in the training data.


I doubt there is a service that bundles API access to multiple providers into one subscription fee and works with vim. But there are a few plugins that provide Cursor-like functionality and let you bring your own API key: Avante and CodeCompanion are the most widely used ones, and Magenta.nvim looks promising.


I think the general perception of AI capabilities is being greatly damaged by Google's AI search summaries. Whatever model they use is so cheap and crappy, yet I can't opt out of it or even train my eyes to skip the box... Claude or Perplexity or whatever can comfortably and concisely answer questions about Auckland holidays without hallucinating, yet Google's search AI thinks you can eat rocks and put glue on pizza, and I see people trot out similar examples all the time to prove that "AI is dumb".


In Manhattan ebike access is excellent -- there are tons of bike lanes and bikeshare stations. They are typically as fast as Ubers for getting around the city since traffic is so bad here, and much cheaper. The main issue is that it's not very safe. Probably this does not generalize to most other US cities.


Unfortunately, Citi Bikes are expensive enough that they can cost more per person than splitting an Uber with one other person.

Anecdote: we paid $15 x 2 to take two Citi Bikes across Brooklyn to avoid a two-leg, L-shaped subway ride. Coming home took a $25 Uber, so $15 a head on the bikes versus $12.50 a head for the shared ride. The bike trip was ~30% faster, but it sucked having to navigate around all the delivery trucks and random private cars parked in the bike lane.

$15 seems like too much for a 25-minute Citi Bike ride, but I'd do it again to save 10 minutes of sitting in traffic in an Uber.

Oh, and the next day we made the same journey via the L-shaped subway ride. It took about 10 minutes longer than the bike trip and included an awkward street-level and overpass transfer between the two subway lines. Much, much cheaper than the Uber or the bikes, though.

My take is there are a variety of crappy options to get around Brooklyn.

