florilegiumson's comments

I like the idea of reimagining the whole stack so as to make AI more productive, but why stop at languages (as x86 asm is still a language)? Why not the operating system? Why not the hardware layer? Why not LLM-optimized Verilog, or an AI-tuned HDL?

Probably because then it wouldn't be software anymore: you would have to carry out a physical process (fabricating an integrated circuit) in order to use the functionality you created. It can certainly be done, but it strays too far from the idea the author expressed.

But I don't see a reason why the LLM shouldn't be writing binary CPU instructions directly, or programming an FPGA directly. Why have the assembly language, compiler, and linker in between? There's really no need.

We humans write some instructions in English. The LLM generates a working executable for us to use repeatedly in the future.

I also think it wouldn't be so hard to train such a model. We have plenty of executables, together with their source code in some other language, available to us. We can annotate the original source code with a model that understands that language, get its descriptions in English, and train another model to use those descriptions to understand the executable directly. With enough such samples, we could write executables by prompting.


If AI is really likely to cause a mass extinction event, then non-proliferation becomes critical, as it was with nuclear weapons. Otherwise, what does it really mean for AI to "replace people," beyond people needing to retool, or socially awkward people having to learn to talk to others better? AI surely will change a lot, but I don't understand the steps needed to get to the highly existential threat that has become a cliché in every "Learn CLAUDE/MCP" ad I see. A period of serious unemployment, sure, but this article is talking about population collapse, as if we are all only being kept alive and fed to increase shareholder value for people several orders of magnitude more intelligent than us, and with more opposable thumbs. Do people think 1.2B people are going to die because of AI? What is the economy but people?

I don't think people will die; AI will just do the jobs. The people will probably still be there giving instructions.

Capitalism gives, capitalism takes. Regulation will be critical so it doesn’t take too much, but tech is moving so fast even technologists, enthusiasts and domain researchers don’t know what to expect.

Thank you: these are excellent.


Really cool project. I love the animations that go with the songs.

I’d go through all of the chord progressions and make sure they actually match what is being played. There are quite a few errors. Happens to everyone.

Also, you and everyone else should remember that while the band is mostly playing power chords and omitting the thirds, what Cobain sings is part of the chord as it's heard. This means that, for example, while a lot of the songs do sound major, "Smells Like Teen Spirit" is probably in F minor.

I find determining the key in popular music to be tricky. Most progressions consist of something like 4 chords, and there isn't the teleology you see in something like Tin Pan Alley or Chopin to give the sense of where one is meant to arrive. Even the Axis of Awesome progression can be heard as major or minor depending on how you end the song.
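The ambiguity is easy to see on paper: the Axis of Awesome progression (I–V–vi–IV) uses the same four chords whether you hear C major or A minor as home. A small sketch (the dictionaries and names here are my own illustration, not from any music library):

```python
# The same four chords, renumbered against two candidate tonal centers.
progression = ["C", "G", "Am", "F"]  # I-V-vi-IV if C major is home

# Roman numerals relative to C major:
major_numerals = {"C": "I", "G": "V", "Am": "vi", "F": "IV"}
# Roman numerals relative to A minor (the relative minor):
minor_numerals = {"C": "III", "G": "VII", "Am": "i", "F": "VI"}

print([major_numerals[c] for c in progression])  # ['I', 'V', 'vi', 'IV']
print([minor_numerals[c] for c in progression])  # ['III', 'VII', 'i', 'VI']
```

Nothing in the chords themselves picks a reading; only the cadence (how you end) tips the ear one way or the other.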


Bill Bailey converts songs to minor so they all sound like funeral dirges, like "Happy Birthday" and "God Save the Queen".


Really cool to see GPUs applied to sound synthesis. Didn’t realize that all one needed to do to keep up with the audio thread was to batch computations at the size of the audio thread. I’m fascinated by the idea of doing the same kind of thing for continua in the manner of Stefan Bilbao: https://www.amazon.com/Numerical-Sound-Synthesis-Difference-...

Although I wonder if mathematically it’s the same thing …
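The batching idea is simple to sketch even on the CPU: instead of computing samples one at a time, you compute one audio-callback buffer's worth per call as a single vectorized operation. A minimal NumPy sketch, assuming a 512-frame callback buffer (`synth_block` and the constants are my own names, not from the project):

```python
import numpy as np

SAMPLE_RATE = 48_000
BLOCK_SIZE = 512  # match the audio callback's buffer size


def synth_block(start_frame: int, freq: float = 440.0) -> np.ndarray:
    """Compute one callback-sized block of a sine wave as a single batched op."""
    t = (start_frame + np.arange(BLOCK_SIZE)) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq * t).astype(np.float32)


# Each block would be handed to the audio API in its callback; on a GPU,
# batching at this granularity amortizes the per-kernel launch overhead.
block = synth_block(0)
```

The same structure carries over to finite-difference schemes: each block is one batch of time steps over the whole spatial grid.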


L-systems were proposed for music even earlier. Here's a link to an article from 1986: https://quod.lib.umich.edu/cgi/p/pod/dod-idx/score-generatio...

It definitely is not a glorified PRNG. The idea is that you can create patterns that have both variety and repetition within them. I don't like the results, generally, but they are not random.
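To make the "variety and repetition" point concrete, here is a minimal deterministic L-system: a rewriting rule applied repeatedly, with the resulting symbols mapped to pitches. The rules and the symbol-to-pitch mapping are illustrative choices of mine, not taken from the linked article:

```python
# Fibonacci L-system: A -> AB, B -> A. Deterministic rewriting, no randomness.
RULES = {"A": "AB", "B": "A"}


def rewrite(s: str, steps: int) -> str:
    """Apply the rewriting rules to every symbol, `steps` times."""
    for _ in range(steps):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s


PITCHES = {"A": "C4", "B": "G4"}  # arbitrary mapping for illustration

melody = [PITCHES[ch] for ch in rewrite("A", 4)]
print(melody)  # ['C4', 'G4', 'C4', 'C4', 'G4', 'C4', 'G4', 'C4']
```

The string grows with self-similar structure (each generation contains the previous ones as substrings), which is exactly the repetition-with-variation quality that makes these systems interesting for music, and exactly what a PRNG does not give you.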


Prusinkiewicz indeed did important work on L-systems and music in the eighties, but the use of generative grammars and rewriting systems in music dates back at least to the sixties. The results of such approaches are not random but "pseudo-random," as written. The term "fractal noise" is also used in this context.


The author is right that there is nothing new about making music with AI. However, earlier uses of AI were for symbol manipulation, whereas currently AI has the potential to be a new kind of sound synthesis method. I’ve heard demos where sounds come from these interstitial regions of latent space and so it sounds like I’m listening to two things at once. I wonder if quantum computers will have the ability to do something similarly freaky.

It’s really cool to use quantum computers to compose music, but I’d love to see them used for things other than control of “frequency modulation (FM), additive synthesis, and granular synthesis.”

