$150k is about 2-3 manufacturing floor salaries for one year. I am quite certain many companies would prefer to buy a compliant robot slave that will never ask for a raise, take sick leave, or get demoralized, rather than pay 2-3 full-time employees the value of their labor, only to have them leave for a better job 6 months later and have to train new ones all over again.
When the dot-com bubble crashed it was similar - compute use didn't shrink, far from it; it's just that capex crashed to the ground and took 90% of the Nasdaq's value with it for a few years.
Stop comparing to dotcom. There’s a big difference between the internet then and the internet now. Ignoring the astounding nature of the AI tech (dotcom isn’t even in the same ballpark), you also have to consider these two things:
1) We didn’t make the world digital native in that era
2) We didn’t connect the whole world to the internet
They won’t have any choice soon enough - all that wishy-washy "they are not our enemy" bullshit goes out the window with the first missile/shell/drone flying over.
If you remember that Putin is a spy by training, and a damn good one at that, you must consider that spies really don’t want to change things when those things are advantageous to them. Right now he knows very well which levers to pull to make things happen the way he wants. He won’t change that.
Anything. Everything. In domains where the search space is small enough to physically enumerate and store or evaluate every option, search is commonly understood as a process solved by simple algorithms. In domains where the search space is too large to physically realize or index, search becomes "intelligence."
E.g. winning at Chess or Go (traditional AI domains) is searching through the space of possible game states to find a most-likely-to-win path.
E.g. an LLM chat application is searching through possible responses to find one which best correlates with the expected answer to the prompt.
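To make the game-tree version concrete, here's a toy sketch in Python (my own illustration, not anything from the comments above; real chess/Go engines do the same thing over vastly larger spaces with heavy pruning and heuristics): exhaustive negamax search of a tiny take-away game.

```python
# Toy illustration of "playing a game = searching the game tree".
# The game: a pile of N stones, each player removes 1-3 stones,
# whoever takes the last stone wins. Small enough to search exhaustively.

def negamax(stones):
    """Return +1 if the player to move wins with perfect play, else -1."""
    if stones == 0:
        return -1                      # opponent took the last stone: we already lost
    best = -1
    for take in (1, 2, 3):
        if take <= stones:
            best = max(best, -negamax(stones - take))  # opponent's best is our worst
    return best

def best_move(stones):
    """Pick the move whose resulting position is worst for the opponent."""
    moves = [t for t in (1, 2, 3) if t <= stones]
    return max(moves, key=lambda t: -negamax(stones - t))

if __name__ == "__main__":
    for n in range(1, 10):
        print(n, "stones -> take", best_move(n), "outcome for mover:", negamax(n))
```

The same "find a most-likely-to-win path" structure applies to chess and Go; the difference is purely that the state space is too large to enumerate, which is where heuristics (or learned evaluation functions) come in.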
With Grover's algorithm, quantum computers let you find an answer in an unstructured search space with O(sqrt(N)) operations instead of O(N). That's potentially applicable to many AI domains.
But if you're so narrow minded as to only consider connectionist / neural network algorithms as "AI", then you may be interested to know that quantum linear algebra is a thing too: https://en.wikipedia.org/wiki/HHL_algorithm
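To put rough numbers on the Grover claim above (my own back-of-the-envelope; the standard result is about (pi/4)*sqrt(N) oracle queries for a single marked item in an unstructured list of N items):

```python
import math

def grover_queries(n):
    """Approximate oracle queries Grover's algorithm needs to find a
    single marked item in an unstructured list of n items."""
    return math.floor(math.pi / 4 * math.sqrt(n))

def classical_expected_queries(n):
    """Expected checks for classical brute force (look until found)."""
    return n / 2

for bits in (20, 40, 80, 152):
    n = 2 ** bits
    print(f"N = 2^{bits}: classical ~2^{math.log2(classical_expected_queries(n)):.1f}, "
          f"Grover ~2^{math.log2(grover_queries(n)):.1f} queries")
```

The quadratic speedup is real, but as the replies below point out, halving the exponent of an astronomically large number often still leaves an astronomically large number.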
Grover's algorithm is useful for very few things in practice, because for most problems we have a better technique than checking sqrt(N) of all possible solutions, at least heuristically.
There is, at present, no quantum algorithm which looks like it would beat the state of the art on Chess, Go, or NP-complete problems in general.
There are about 2^152 possible legal chess states. You cannot build a classical computer large enough to compute that many states. Cryptography is generally considered secure when it involves a search space of only 2^100 states.
But you could build a computer to search through sqrt(2^152) = 2^76 states. I mean it'd be big--that's on the order of total global storage capacity. But not "bigger than the universe" big.
Doing 2^76 iterations is huge. That's a trillion operations a second for two and a half thousand years if I've not slipped up and missed a power of ten.
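Quick sanity check on that figure (just Python arithmetic, assuming the same 10^12 operations per second):

```python
# Sanity check: how long 2^76 iterations take at a trillion ops/sec.
iterations = 2 ** 76
ops_per_second = 1e12
seconds_per_year = 365.25 * 24 * 3600

years = iterations / ops_per_second / seconds_per_year
print(f"2^76 iterations at 1e12 ops/s ~= {years:,.0f} years")
# ~2,400 years, so "two and a half thousand years" is the right ballpark.
```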
Maybe 100 years from now we can do 2^60 quantum ops/sec and grind through those 2^76 Grover iterations in under a day, whereas a classical computer at the same 2^60 ops/sec would still need ~2^152 operations and take far longer than the lifetime of the universe to complete.
Google's SHA-1 collision took 2^63.1 hash operations to find. Given that a single hash operation takes more than 1000 cycles (~2^10), that's over 2^73 cycles of work, which is less than three doublings away from 2^76.
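Spelling out that arithmetic (my reading of the comment: roughly 2^10 cycles per hash on top of 2^63.1 hashes):

```python
import math

sha1_collision_hashes = 2 ** 63.1   # reported cost of the SHAttered collision
cycles_per_hash = 1000              # ">1000 cycles" per hash, as stated above
total_cycles = sha1_collision_hashes * cycles_per_hash

doublings_to_2_76 = 76 - math.log2(total_cycles)
print(f"~2^{math.log2(total_cycles):.1f} cycles, "
      f"{doublings_to_2_76:.1f} doublings short of 2^76")
# ~2^73.1 cycles, about 2.9 doublings short of 2^76.
```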
Cryptographers worry about big numbers. 2^80 is not considered secure.
It's early so I'm thinking out loud here but I don't think the algorithm scales like this, does it?
We're talking about something that can search a list of size N in sqrt(N) iterations. Splitting the problem in two doesn't halve the compute required for each half. If you had to search 100 items on one machine it would take ~10 iterations, but split over two machines it'd take ~7 on each, or ~14 in total.
If an algorithm has a complexity class of O(sqrt(N)), by definition it means that it does better run once on all 100 elements than by splitting the list into two halves of 50 and running it on each.
This is not at all a surprising property. The same thing happens with binary search: it has complexity O(log(N)), which means that running it on a list of size 1024 will take about 10 operations, but running it in parallel on two lists of size 512 will take 2 * 9 operations = 18.
This is actually easy to intuit when it comes to search problems: the element you're looking for is either in the first half of the list or in the second half, it can't be in both. So, if you are searching for it in parallel in both halves, you'll have to do extra work that just wasn't necessary (unless your algorithm is to look at every element in order, in which case it's the same).
In the case of binary search, with the very first comparison, you can already tell in which half of the list your element is: searching the other half is pointless. In the case of Grover's algorithm, the mechanism is much more complex, but the basic point is similar: Grover's algorithm has a way to just not look at certain elements of the list, so splitting the list in half creates more work overall.
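Toy Python putting numbers on the splitting argument (my own illustration, using roughly log2(N) comparisons for binary search and sqrt(N) iterations for Grover-style search):

```python
import math

def binary_search_steps(n):
    """Roughly log2(n) comparisons to binary-search a sorted list of n items."""
    return math.log2(n)

def sqrt_search_steps(n):
    """Iterations proportional to sqrt(n), as for Grover-style search above."""
    return math.sqrt(n)

# Binary search: one list of 1024 vs. two lists of 512 searched separately.
print(binary_search_steps(1024))            # 10.0 comparisons
print(2 * binary_search_steps(512))         # 2 * 9 = 18.0 comparisons in total

# sqrt(N) search: one list of 100 vs. two lists of 50 searched separately.
print(sqrt_search_steps(100))               # 10.0 iterations
print(round(2 * sqrt_search_steps(50), 1))  # ~14.1 iterations in total
```

In both cases the sublinear scaling means the algorithm benefits from seeing the whole list at once; splitting it only adds work.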
That only helps for a relatively small range of N. Chess happens to sort of fit into this space. Go is way out; even sqrt(N) is still in the "galaxy-sized computer" range. So again, there are few problems for which Grover's algorithm really takes us from practically uncomputable to computable.
Even for chess, 2^76 operations is still waaaaay more time than anyone will ever wait for a computation to finish, even if we assumed quantum computers could reach the OPS of today's best classical computers.
No-one would solve chess by checking every possible legal chess state -- besides, checking "all the states" wouldn't solve chess; you need sequences of moves, and that pushes you up to an even bigger number. Then again, you can massively prune that, since many moves are forced, or you can check whether you are in a provable end-game situation.
Training an AI model is essentially searching for parameters that make a function really accurate at making predictions. In the case of LLMs, they predict text.
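A minimal sketch of that framing (my own toy example, nothing like real LLM training): search for a single parameter that makes predictions accurate, by following the error gradient.

```python
# Toy version of "training = searching for parameters that predict well":
# fit y = w * x to data generated with w = 3, by gradient descent on squared error.

data = [(x, 3.0 * x) for x in range(1, 6)]   # (input, target) pairs

w = 0.0                                       # initial guess for the parameter
learning_rate = 0.01

for step in range(200):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad                 # move w in the direction that reduces error

print(f"learned w ~= {w:.4f}")                # converges toward 3.0
```

Gradient descent is a guided search through parameter space rather than a brute-force one, but the "find the point that minimizes error" structure is the same.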
I'm still waiting for a computer that can make my morning coffee. Until it's there I don't really believe in this whole "computer" or "internet" thing, it's all a giant scam that has no real-world benefit.