The parking spaces in question aren’t free; the city sold the long-term rights to operate the parking facilities to the private sector in a bid to balance one year’s budget.
Overprovisioning is much less aggressive than this in practice. A read-oriented SSD with 15.36 TB of storage typically has 16.384 TiB of flash. The same hardware can be used to implement a 12.8 TB mixed-use SSD (3 DWPD or more).
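A back-of-the-envelope check of those figures (using the usual vendor convention that flash is sized in binary TiB while user capacity is decimal TB, which is assumed here):

```python
# Overprovisioning arithmetic behind the SSD figures above.
TIB = 2**40   # binary terabyte (how raw NAND is counted)
TB = 10**12   # decimal terabyte (how user capacity is marketed)

flash_bytes = 16.384 * TIB   # raw flash on the drive
read_user = 15.36 * TB       # read-oriented SKU
mixed_user = 12.8 * TB       # mixed-use SKU (3 DWPD or more)

def op_ratio(flash, user):
    """Spare flash as a fraction of user-visible capacity."""
    return (flash - user) / user

print(f"read-oriented OP: {op_ratio(flash_bytes, read_user):.1%}")  # ~17.3%
print(f"mixed-use OP:     {op_ratio(flash_bytes, mixed_user):.1%}")  # ~40.7%
```

So the same 16.384 TiB of NAND gives roughly 17% spare area in the read-oriented configuration and roughly 41% in the mixed-use one, which is where the extra endurance comes from.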
But, if model development stalls, and everyone else is stalled as well, then what happens to turn the current wildly-unprofitable industry into something that "it makes sense to keep spending billions" on?
I suspect if model development stalls we may start to see more incremental releases of models, perhaps with specific fixes or improvements, updates to a newer knowledge cutoff date, etc. So less fanfare, but still some progress. Worth spending billions on? Probably not, but the next best avenue would be to continue developing deeper and deeper LLM integrations to stay relevant and in the news.
The new OpenAI browser integration would be an example: mostly the same model, but with a whole new channel of potential customers and lock-in.
Because they’re not that wildly unprofitable. Yes, obviously the companies spend a ton of money on training, but several have said that each model is independently “profitable” - the income from selling access to the model has overcome the costs of training it. It’s just that revenues haven’t overcome the cost of training the next one, which gets bigger every time.
> the income from selling access to the model has overcome the costs of training it.
Citation needed. This is completely untrue AFAIK. They've claimed that inference is profitable, but not that they are making a profit when training costs are included.
I understand what GP meant, but extraction of values from a sparse matrix is an essential operation in multiplying two sparse matrices. Sparse matmul in turn is an absolutely fundamental operation in everything from weather forecasting to logistics planning to electric grid control to training LLMs. Radix sort, on the other hand, is very nice but (as far as I know) not used nearly as widely. Matrix multiplication is just super fundamental to the modern world.
I would love to be enlightened about some real-world applications of radix sort I may have missed though, since it's a cool algorithm. Hence my question above.
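For reference, the algorithm in question, as a minimal LSD (least-significant-digit-first) sketch that assumes non-negative integers and sorts one byte per pass:

```python
def radix_sort(nums, key_bytes=4):
    """LSD radix sort for non-negative integers up to key_bytes wide.
    Each pass is a stable bucket pass on one byte, so the whole sort
    runs in O(key_bytes * n) with no element comparisons at all."""
    for shift in range(0, 8 * key_bytes, 8):
        buckets = [[] for _ in range(256)]
        for x in nums:
            buckets[(x >> shift) & 0xFF].append(x)
        nums = [x for b in buckets for x in b]
    return nums

print(radix_sort([170, 45, 75, 90, 2, 802, 24, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```

The stability of each byte pass is what makes later (more significant) passes preserve the ordering established by earlier ones.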
Not always, or rather not exclusively. For example, some types of distillation benefit from sparsifying the dense-ish matrices the original model was made of [1]. There's also a lot of benefit to be had from sparsity in finetuning [2]. LLMs were merely one of the examples though, don't focus too much on them. The point was that sparse matmul makes up the bulk of scientific computations and a huge amount of industrial computations too. It's probably second only to the FFT in importance, so it would be wild if radix sort managed to eclipse it somehow.
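To illustrate why value extraction is central to sparse matmul: a minimal Gustavson-style row-by-row multiply, sketched with dict-of-dicts rather than real CSR arrays for readability (the representation and names here are illustrative, not any particular library's API):

```python
def sparse_matmul(a, b):
    """Multiply sparse matrices stored as {row: {col: value}}.
    Gustavson's row-by-row scheme: for each nonzero a[i][k], look up
    (extract) the matching row k of B and accumulate into row i of C.
    Index-based extraction of values is the inner-loop operation."""
    c = {}
    for i, row_a in a.items():
        acc = {}
        for k, a_ik in row_a.items():             # nonzeros of row i of A
            for j, b_kj in b.get(k, {}).items():  # extract row k of B
                acc[j] = acc.get(j, 0) + a_ik * b_kj
        if acc:
            c[i] = acc
    return c

A = {0: {0: 2.0, 2: 1.0}, 1: {1: 3.0}}
B = {0: {1: 4.0}, 1: {1: 1.0}, 2: {0: 5.0}}
print(sparse_matmul(A, B))  # {0: {1: 8.0, 0: 5.0}, 1: {1: 3.0}}
```

Real implementations (e.g. CSR-based ones) do the same accumulation with flat index arrays instead of hash maps, but the structure of the computation is the same: the whole algorithm is indexed extraction plus accumulate.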
Where did you get the pricing for vast.ai here? Looking at their pricing page, I don't see any 8xH200 options for less than $21.65 an hour (and most are more than that).
The last few generations of GPU architectures have been increasingly optimized for massive throughput of low-precision integer arithmetic operations, though, which are not useful for any of those other applications.
This is the common argument from proponents of compiler autovectorization. An example like what you have is very simple, so modern compilers would turn it into SIMD code without a problem.
In practice, though, the cases that compilers can successfully autovectorize are very limited relative to the total problem space that SIMD is solving. Plus, if I rely on that, it leaves me vulnerable to regressions in the compiler vectorizer.
Ultimately for me, I would rather write the implementation myself and know what is being generated versus trying to write high-level code in just the right way to make the compiler generate what I want.
It doesn’t sound like they are referring to newborns needing to be physically present to get an SSN. Instead, it seems to refer to persons who are registering to start receiving their Social Security benefits (or existing recipients who want to change their direct deposit information). Also, there is an existing supported method for identifying yourself electronically that is mentioned in the article. In that sense, the headline seems a bit misleading.
> It doesn’t sound like they are referring to newborns needing to be physically present to get a SSN.
Correct. They already tried to take care of that two weeks ago.
Since 1988, a program called Enumeration at Birth (EAB) has allowed parents to request a Social Security number for their newborn as part of their state's birth registration process (typically done at the hospital when the baby is born). The state then automatically sends the information to the Social Security Administration (SSA) for processing.
Last week the acting SSA director ordered EAB cancelled in Maine, so parents would have to go to an SSA office to register their newborn [1][2][3]. The DOGE website showed that EAB was cancelled in Arizona, Maryland, Michigan, New Mexico, and Rhode Island (but did not list Maine).
There was enough objection to this that they then reversed the cancellation in Maine. The acting director said:
> I recently directed Social Security employees to end two contracts which affected the good people of the state of Maine. The two contracts are Enumeration at Birth (EAB), which helps new parents quickly request a Social Security number and card for their newborn before leaving the hospital, and Electronic Death Registry (EDR) which shares recorded deaths with Social Security. In retrospect, I realize that ending these contracts created an undue burden on the people of Maine, which was not the intent. For that, I apologize and have directed that both contracts be immediately reinstated
He also said that EAB and EDR remain in place for every state.
As far as I know, no explanation was given for why Maine specifically, or for why those five other states appeared on the DOGE site.