This is temporary. AI models have their own Moore's law. Yes, the mega corps will have the best models, but soon enough what is currently SOTA will be open source and will run on your own local machine if you want.
The mega corps are getting all of us and the investors to fund the R&D.
This just seems like an engineered pipeline of existing GenAI to get a 3D procedurally generated world that doesn't even look SOTA. I'm really sorry to dunk on this for those who worked on it, but this doesn't look like progress to me. The current approach looks like a dead end.
An end-to-end _trained_ model that spits out a textured mesh with the same result would have been an innovation. The fact that they didn't do that suggests they're missing something fundamental for world model training.
The best thing I can say is that maybe they can use this to bootstrap a dataset for a future model.
The people who worked on it did what they could to satisfy the demands of their higher-ups, who are frequently out of touch with the technical landscape.
Being kind to them and understanding the environment they work in won’t improve their lives, but it will expand our understanding of the capability of particular large companies to innovate.
These extra steps can cause him weeks of physical and mental stress, and they cost him money he does not have. The stress can set him back physically for weeks.
Reapplying, waiting on hold for half a day, going down to offices, etc., are not easy for some folks. People fall through the cracks and die.
This is called forced attrition. It's pretty common in the business world when companies don't want to fire people. Make it too difficult to bother, so folks stop bothering. Unfortunately this is a literal lifeline for millions of people, so it's more like make it too difficult to bother, so folks start dying.
It doesn't pass the sniff test. If they "know" 186,000 people are deceased who are receiving benefits, then they can simply stop disbursements to those accounts. It doesn't require any action from those who are alive.
> If someone doesn't reapply for food stamps then they weren't that critical for their survival.
For a good number, it might be that they don't successfully reapply because they're living on a knife's edge, without the slack to jump through yet another hoop.
The experience here in Australia is that raising welfare barriers hurts those who need welfare the most, while the actual fraudsters have the resources to beat the system.
> somehow incapable of doing basic things for something they care about
Even my ADHD often made me incapable of doing basic things for stuff I cared about. I can't imagine the struggle for people with more severe conditions. Same goes for you, apparently.
You go through the process of actually calling, get sent down a 4-5 week rabbit hole, and then people wonder why fewer people make it through a funnel with more holes than a grater.
Remember the whole "waste, fraud, and abuse" stuff at the beginning of the year? Yeah, there's a lot of waste in how inefficient it is to sign up for these government programs.
> Hate this argument so much. You lose people in your sales funnel because they didn't actually care all that much about the product to justify the extra effort.
On more than one occasion I've been the primary decision maker for a technology choice that was going to be worth tens of thousands of dollars or more per year.
For reasons that aren't relevant here, I didn't have a ton of time to do the evaluation... extreme prejudice was exercised against anything that didn't have a 'download now and get started' button.
Even if I wanted to jump on a sales call, I didn't have two and a half days to wait for you to get back to me.
Maybe a sales funnel is the right tool for certain industries but when your primary user is technical, don't make them jump on a phone call. Get out of their way and make sure the documentation is good. If they like what they see and they have questions, they will chase you down. That is when you should do the pitch call...
A valid rationalization but never an excuse. At some point the buck has to stop being passed around. Standing up to all instances of violence is the only way to stop the endless cycles.
I used to be in this camp until I tried and bought an M1 MacBook as my daily driver. I thought I was going to be on a ThinkPad/XPS with Linux until I die. I don't love macOS, but POSIX is mostly good enough for me and the hardware is so good that I'm willing to look past the shortfalls.
Seriously, I would love to switch back to a full-time Linux distro, but I'm more interested in getting work done and having a stable and performant platform. Losing a day of productivity fixing drivers and patching kernels gets old. The M-series laptops have been the perfect balance for me so far.
That’s just not true. Every coworker I know who uses Linux[1] has occasional issues with webcams, mics, Slack notifications, whatever. It’s all fixable, and this kind of inconvenience can be worth it when balanced against the perceived advantages, but saying driver issues are a thing of the past is just a lie.
[1]: I’ve seen these issues on Dell (XPS 13), Thinkpads, and HP laptops
That's funny, because you sent this comment a few hours after I struggled at work with a webcam constantly freezing on Windows/Teams.
The same webcam has always worked flawlessly on Fedora on my other laptops.
Also, Teams was much more reliable during the 5-6 years or so I used it with ungoogled-chromium on Linux than it has been over the last 6 months using the official app on Windows. I have had to kill it an awful number of times after struggling with an unrecognized audio device, freezing video, or even everything freezing except the sound.
I've been using Linux for 25 years, and I think it's been nearly that long since I had kernel issues that required patching the kernel (if ever). Maybe back in the 2.5 days?
The only drivers that I've had memorable issues with over the years are printer drivers, but those have nothing to do with the kernel. And printers are pretty cursed on every platform.
Well, you should tell that to Dell, because I have coworkers with a range of their models who are constantly fighting with webcams, audio, Bluetooth, Wi-Fi, and Nvidia driver updates.
If they're new models, the webcam issue is not Dell specific, but an Intel / ipu6 thing. It should be integrated into most systems by now though, even as an out of tree module. The rest should just work, especially on xps machines. Without specifying the models/issues, it's hard to take it as more than an anecdote.
They have a line they sell with Linux preinstalled. Those always work fine. It takes some work to figure out which old ones on eBay were in that situation.
I'm really not sure why you have to lie to make your point. Just to be clear, you've never tried a modern laptop with Linux, because you certainly don't have to patch kernels or deal with drivers anymore. The only time you have to deal with drivers is if you want to game on Linux, and even then most of that is covered by modern distros.
I'm not really sure what you mean? I've been in fast and crazy startups for years now, always with a ton of work to do, and I never have issues with Linux; the CachyOS and Fedora spins I run just keep on chugging day to day.
(2) Seems like a media narrative rather than truth. I don't think that would be anywhere remotely high on a CEO's priority list unless they were a commercial real estate company.
It's far more likely a mixture of (1) and actual results: in-person/hybrid teams produce better outcomes (even if why that's true hasn't been deeply evaluated, or ultimately falls on management).
It would be interesting to see two versions of a model: a primary model tuned for precision and focused on correctness that works with, or orchestrates, a creative model tuned for generating new (and potentially incorrect) ideas. The primary model is responsible for evaluating and reasoning about the ideas/hallucinations. Feels like a left/right brain architecture (even though that's an antiquated model of human brain hemispheres).
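A minimal sketch of that loop in Python, purely for illustration: sample_ideas and score_idea below are hypothetical stand-ins for calls to two differently tuned models (high temperature for the creative one, low temperature for the precision one), not any real API.

    def sample_ideas(question: str, n: int = 5) -> list[str]:
        """Creative model: sample n candidate ideas; hallucinations are allowed."""
        return [f"candidate {i} for: {question}" for i in range(n)]  # stub

    def score_idea(candidate: str, question: str) -> float:
        """Precision model: judge a candidate for correctness and consistency."""
        return float(len(candidate))  # stub; swap in a low-temperature judge call

    def answer(question: str) -> str:
        # The precision model orchestrates: it filters and ranks the creative
        # model's raw ideas rather than generating them itself.
        candidates = sample_ideas(question)
        return max(candidates, key=lambda c: score_idea(c, question))

    print(answer("How could we speed up this query?"))

The interesting part is only the shape of the loop: one model proposes, the other evaluates and decides.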
I took a quick informal poll of my coworkers, and the majority of us have found workflows where CC is producing 70-99% of the code in PRs on average. We're getting more done faster. Most of these people have anywhere from 5-12 years of professional experience. There are some concerns that maybe more bugs are slipping through (but there's also more code being produced).
We agree most problems can be avoided by:
1. Not getting lazy and auto-accepting edits. Always review changes and make sure you understand everything.
2. Writing clear specification documents before starting complex work items.
3. Breaking down tasks into manageable chunks of scope.
4. Keeping the code architecture clean and digestible (see the sketch after this list). If it's hard for a human to understand (e.g., poor separation of concerns), it will be hard for the LLM too.
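As a toy illustration of point 4 (hypothetical names, not from any actual codebase): keeping I/O, validation, and business logic in separate functions gives both humans and the LLM small, self-contained pieces to reason about.

    import json

    def load_orders(path: str) -> list[dict]:
        """I/O only: read raw order records from a JSON file."""
        with open(path) as f:
            return json.load(f)

    def valid_orders(orders: list[dict]) -> list[dict]:
        """Validation only: keep records that have the required fields."""
        return [o for o in orders if "id" in o and "amount" in o]

    def total_revenue(orders: list[dict]) -> float:
        """Business logic only: a pure function, easy to test and easy to prompt about."""
        return sum(o["amount"] for o in orders)

Collapse those three into one function that opens files, checks fields, and sums totals, and both the reviewer and the model have to hold all of it in their head at once.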
But yeah I would never waste my time making that video. Having too much fun turning ideas into products to care about proving a point.
> Having too much fun turning ideas into products to care about proving a point.
This is a strange response to me. Perhaps you and others aren’t aware that there’s a subculture of folks who livestream coding in general? Nothing to do with proving a point.
My interest in finding such examples is exactly due to the posting of comments like yours - strong claims of AI success - that don’t reflect my experience. I want to see videos that show what I’m doing wrong, and why that gives very different results.
I don’t have an agenda or point to prove, I just want to understand. That is the hacker way!
2, 3, and 4 are all things human coders need to be efficient too :)
I'm kinda hoping that this LLM craze will force people to be better at it. Having documentation that's up to date and easily accessible is good for everyone.
It's like how we're (over here) better at marking lines on the road, because the EU-mandated lane-keeping assist needs the road markings to be there or it won't work.