I was targeted by this exact attack several months ago. It sounded incredibly real, and the emails looked legit, right down to the domains; Google even has a process for this exact scenario. The only thing that tipped me off was that they sent a login request to my phone. Nothing about the request itself seemed off; it even originated from a Mountain View IP. But the fact that they'd sent a login request at all prompted me to press the voice on why they needed one instead of some other form of verification. The disembodied voice soon became agitated and eventually told me to expect to lose access to my Google account soon, since I hadn't complied with their request.
It was only after I checked Twitter that I saw Garry Tan's callout of the exact same scam. After experiencing it myself, I wouldn't fault anyone who fell for it. The only other tip-off was that the voice was pretty monotone and unemotional, but that only appears obvious in hindsight, not in the moment where you're slightly panicking that someone might be trying to claim access to your account.
> It’s like the rise of Unity in the 2010s: the engine democratized making games, but we didn’t see a proportional explosion of good game, just more attempts.
But we did? We've come a long way from the limited XBLA catalog. It didn't happen overnight, but doubtless we wouldn't have the volume of games we have today without Unity, Godot, Gamemaker, Renpy, RPG Maker...
> we didn’t see a proportional explosion of good game, just more attempts.
I'm not sure the two of you are disagreeing. We definitely saw an explosion of indie games. In 2010, there were fewer than 10 indie games released on Steam per month. By 2022, there were ~500/mo, and today there are ~750/mo (I suspect the ~250/mo jump around 2022 can be attributed to LLMs).
What's hard to say is whether this increase significantly raised the number of good games. Partly because "good" is highly subjective, but also because I think something else happens. I've been playing games for the better part of 40 years, and in that time the number of must-play games each year has stayed largely unchanged, despite the industry being orders of magnitude larger than it was 40 years ago. That's also tricky to judge, because two things happen every year: our standards get higher, and our preferences get more refined.
You also still have the same amount of time you had 40 years ago. There are definitely more games available, and I would argue the proportion of high quality games has also increased massively, but since you're still limited by the number of games you can play in any given year, you'll never feel that increase.
Why would the proportion of high quality games increase? The number, yes, but I expect not the proportion. Lowering the entry barrier means more people who have spent less time honing their skills can release something lacking in polish, narrative design, fun mechanics, and balance. Among new entrants, those should outnumber the ones already able to make a fun game. Not a value judgement, just an observation.
Think of the negative reputation the Unity engine gained among gamers, even though a lot of excellent games and even performant games (DSP) have been made with it.
More competitors also raises the bar required for novelty, so it's possible that standards are rising in parallel.
We had shovelware games 25+ years ago (and probably 40 years ago, though I suspect the lack of microcomputers limited it). There were bargain-bin selections (literally bins full of CDs) that cost a few bucks and were utterly shite. I suspect the target audience was tech-unaware relatives thinking "little Johnny likes video games, I'll get him one of these...". Most of them were bad takes on popular games of the time.
Unity + Steam just makes this process a bit easier and more streamlined. I think the new thing is that, as well as the dickwads trying to rip people off, there are well-intentioned newbie or indie developers releasing their unpolished attempts. These folks couldn't publish their work in the old days, because pressing CDs cost money; now they can.
Looking back at the shareware days, games like Jazz Jackrabbit were also made by teams of 2-3 devs, so I don't know that Unity would have been necessary. Of course, after 20 years there's a lot more processing power and far fewer memory constraints. But still, I'm not sure such engines fundamentally changed anything.
It'd be quite difficult to deploy the processing power and other resources without an engine.
A 90s PC can't run a complex 3D engine because it lacks the grunt. A 2020s game dev can't write a complex 3D engine themselves because most don't know how to do complex 3D.
As someone reaching 50 years old: we always had such indies, we used to call them bedroom coders, and distribution happened on tapes, floppies on magazine covers, shareware CD-ROMs and DVD-ROMs.
Maybe it only became visible to the console generation around the time of XBLA, and even that was predated by the PS Net Yaroze and PS2 Linux efforts.
Before Unity, we had SDL, Ogre3D, SFML,... but naturally all of those require more coding skills than engines designed with UI workflows in mind.
The site has an API for reading posts [0]. It works (worked?) quite well. For making posts, you'd need to write your own functionality that forwards the CAPTCHA and post timers.
I made this NES emulator with Claude last week [0]. I'd say it was a pretty non-trivial task. It involved throwing a lot of NESDev docs, Disch mapper docs, and test ROM output + assembly source code at the model to figure out.
I'm considering training a custom LoRA on Atari ROMs to see if I could get a working game out of using it. The thinking here is that Atari, NES, SNES, etc. ROMs are a lot smaller in size than a program that runs natively on whatever OS. Fewer lines of code for the LLM to write means less chance of a screw-up. Take a ROM, convert it to assembly, write very detailed captions for it, and train. If this works, it would enable anyone to create games with one prompt that are a lot higher quality than the stuff being made now, and with less complexity. If you made an emulator with the help of an LLM, that means it understands assembly well enough, so I think there might be hope for this idea.
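The ROM-to-assembly step of that pipeline can be sketched with a toy 6502 disassembler. This is purely illustrative (the opcode table and `disassemble` function are my own stand-ins, covering only three real 6502 opcodes); a real pipeline would handle the full instruction set and pair each disassembled routine with a hand-written caption before training.

```javascript
// Toy 6502 disassembler sketch: turns raw ROM bytes into assembly text
// that could then be captioned for training. Only a handful of opcodes
// are implemented here, for illustration.
const OPCODES = {
  0xA9: { mnem: "LDA", mode: "imm", len: 2 },  // LDA #immediate
  0x8D: { mnem: "STA", mode: "abs", len: 3 },  // STA absolute (little-endian operand)
  0x60: { mnem: "RTS", mode: "imp", len: 1 },  // return from subroutine
};

function disassemble(bytes) {
  const lines = [];
  for (let i = 0; i < bytes.length; ) {
    const op = OPCODES[bytes[i]];
    if (!op) { lines.push(`.byte $${bytes[i].toString(16)}`); i++; continue; }
    if (op.mode === "imm") {
      lines.push(`${op.mnem} #$${bytes[i + 1].toString(16)}`);
    } else if (op.mode === "abs") {
      // 6502 stores absolute addresses low byte first
      lines.push(`${op.mnem} $${((bytes[i + 2] << 8) | bytes[i + 1]).toString(16)}`);
    } else {
      lines.push(op.mnem);
    }
    i += op.len;
  }
  return lines;
}
```

For example, the bytes `A9 01 8D 00 20 60` decode to a load, a store to $2000 (the NES PPU control register), and a return.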
Well, the assembly I put into it was written by humans intending it to be well understood by anyone reading it. By contrast, many NES games abuse quirks specific to the NES that don't translate to any system outside the NES. Understanding what that assembly code is doing also requires a complete understanding of those quirks, which LLMs don't seem to have yet (my Mapper 4 implementation still has some bugs because my IRQ handling isn't perfect, and many games rely on precise IRQ timing).
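For a sense of what Mapper 4 (MMC3) IRQ handling involves, here's a hedged sketch of its scanline counter based on the NESDev docs. The class and method names are my own, not from the project; the hard part games are picky about (filtering PPU A12 rising edges so the counter clocks exactly once per rendered scanline) is deliberately left outside this sketch.

```javascript
// Sketch of the MMC3 scanline IRQ counter. clock() is assumed to be
// called once per filtered A12 rising edge (~once per rendered scanline).
class Mmc3Irq {
  constructor() {
    this.latch = 0;            // reload value, set via $C000
    this.counter = 0;
    this.reloadPending = false; // set via $C001
    this.enabled = false;       // $E000 disables, $E001 enables
    this.pending = false;       // true when the IRQ should be asserted
  }
  clock() {
    if (this.counter === 0 || this.reloadPending) {
      this.counter = this.latch;    // reload instead of decrementing
      this.reloadPending = false;
    } else {
      this.counter--;
    }
    if (this.counter === 0 && this.enabled) this.pending = true;
  }
}
```

Getting this counter logic right is the easy half; being off by one PPU dot on when `clock()` fires is enough to break games that split the screen mid-frame.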
How would you characterize the overall structural complexity of the project, and the degree of novelty compared to other NES emulators Claude may have seen during training?
I'd be a bit suspicious of an LLM getting an emulator right when all it has to go on is docs and no ability to test (since the pass criterion is "behaves the same as something you don't have access to")... Did you check the degree to which it may have been copying other NES emulators?
> How would you characterize the overall structural complexity of the project, and the degree of novelty compared to other NES emulators Claude may have seen during training?
Highly complex, fairly novel.
Emulators themselves, for any chipset or system, have a very learnable structure: there are some modules, each having their own registers and ways of moving data between those registers, and perhaps ways to send interrupts between those modules. That's oversimplifying a bit, but if you've built an emulator once, you generally won't be blindsided when it comes to building another one. The bulk of the work lies in dissecting the hardware, which has already been done for the NES, and more open architectures typically have their entire pinouts and processes available online. All that to say - I don't think Claude would have difficulty implementing most emulators - it's good enough at programming and parsing assembly that as long as the underlying microprocessor architecture is known, it can implement it.
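The "modules with registers, data movement, and interrupts" structure described above can be made concrete with a minimal sketch. All names here (`Bus`, `Cpu`, `requestIrq`) are illustrative stand-ins, not code from the actual project:

```javascript
// Minimal emulator skeleton: modules own registers, move data over a
// shared bus, and signal each other via interrupt lines.
class Bus {
  constructor() { this.ram = new Uint8Array(0x0800); }     // 2 KB work RAM
  read(addr) { return this.ram[addr & 0x07FF]; }           // mirror addressing
  write(addr, value) { this.ram[addr & 0x07FF] = value & 0xFF; }
}

class Cpu {
  constructor(bus) {
    this.bus = bus;
    this.a = 0; this.x = 0; this.y = 0;  // module-local registers
    this.pc = 0;
    this.irqPending = false;
  }
  requestIrq() { this.irqPending = true; }  // another module pulls the IRQ line
  step() {
    if (this.irqPending) {
      this.irqPending = false;
      // a real CPU would push state and jump to the IRQ vector here
    }
    const opcode = this.bus.read(this.pc++);
    // decode + execute: move data between registers and the bus
    return opcode;
  }
}
```

The dissection work mentioned above is what fills in the `// decode + execute` part: per-opcode behavior, cycle counts, and the exact moments each module clocks relative to the others.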
As far as other NES emulators go, this project does many things in non-standard ways; for instance, I use per-pixel rendering whereas many emulators use scanline rendering. I use an AudioWorklet with various mixing effects for audio, whereas other emulators use something much simpler or don't even bother fully implementing the APU. I can comfortably say there's no NES emulator out there written the way this one is written.
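The per-pixel vs scanline distinction comes down to how often PPU state is sampled. A rough sketch of the two loops, with `renderDot` and `renderLine` as hypothetical stand-ins for real PPU evaluation:

```javascript
const WIDTH = 256, HEIGHT = 240;  // NES output resolution

// Per-pixel: PPU state is evaluated at every dot, so mid-scanline
// register writes (scroll splits, bank switches) land on the exact pixel.
function perPixelFrame(renderDot, framebuffer) {
  for (let y = 0; y < HEIGHT; y++)
    for (let x = 0; x < WIDTH; x++)
      framebuffer[y * WIDTH + x] = renderDot(x, y);
}

// Scanline: one evaluation per line; cheaper, but any state change made
// partway through a line only shows up from the next line onward.
function scanlineFrame(renderLine, framebuffer) {
  for (let y = 0; y < HEIGHT; y++)
    framebuffer.set(renderLine(y), y * WIDTH);
}
```

The per-pixel approach costs more per frame but is what lets raster tricks that change state mid-scanline come out correctly.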
> I'd be a bit suspicious of an LLM getting an emulator right when all it has to go on is docs and no ability to test (since the pass criterion is "behaves the same as something you don't have access to")... Did you check the degree to which it may have been copying other NES emulators?
Purely JavaScript-based NES emulators are few in number, and those that implement all aspects of the system are even fewer, so I can comfortably say it doesn't copy any of the ones I've seen. I would be surprised if it did, since I came up with most of the abstractions myself and guided Claude heavily. While Claude can't get docs on its own, I can. I put all the relevant documentation in the context window myself, along with the test ROM output and source code. I'm still commanding the LLM myself; it's not like I told Claude to build an emulator and left it alone for 3 days.
Even with your own expert guidance, it does seem impressive that Claude was able to complete a project like this without getting bogged down in the complexity.
I'm really hoping for this as well! I'm a big believer that neural rendering pipelines will overtake the traditional push-tris-to-GPUs approach we've essentially been using since Descent.