The point I was trying to make was that the time you take to perform the calculation may change whether there is an "experience" on behalf of the calculation. Without specifying the basis of subjectivity, you can't rule anything out as far as what matters and what doesn't. Maybe the speed or locality with which the calculations happen matters. Like the water drops: given the same amount of time, eventually all the water will evaporate in either case, leading to the same end state, but the intermediate states are very different.
You should take your complaints to OpenAI, who constantly write as if LLMs think in the exact same sense humans do; here's a random example:
> Large language models (LLMs) can be dishonest when reporting on their actions and beliefs -- for example, they may overstate their confidence in factual claims or cover up evidence of covert actions
They have a product to sell based on the idea that AGI is right around the corner. You can't trust Sam Altman any further than you can throw him.
Still, the sales pitch has worked to unlock huge liquidity for him, so there's that.
Still, making predictions is a big part of what brains do, though not the only thing. Someone wise said that LLM intelligence is a new kind of intelligence, the way animal intelligence is different from ours yet still intelligence; it needs to be characterized to understand the differences.
> Someone wise said that LLM intelligence is a new kind of intelligence
So long as you accept the slide rule as a "new kind of intelligence" everything will probably work out fine; it's the Altmanian insistence that only the LLM is of the new kind that is silly.
> As a side note, the program is amazingly performant. For small numbers the results are instantaneous, and for large numbers close to the 2^32 limit the result is still returned in around 10 seconds.
The current "carefully designed orbits" has a starlink sat doing a collision avoidance manuever every 1.8 minutes on average according to their filing for December 1 to May 31 of this year.
It also notes that the collision odds at which SpaceX triggers such maneuvers are 333 times more conservative than the industry standard. Were that not the case (and they were just using the standard criterion), one might naively assume that they would only be doing a maneuver every ten hours or so. But collision probabilities are not linear; they follow a power-law distribution, so in actuality they would only be doing such maneuvers every few days.
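A rough back-of-the-envelope sketch of that scaling, in Python. The 1.8-minute interval and the 333x threshold factor come from the filing cited above; the power-law tail exponent is purely an illustrative assumption, picked to show why the nonlinear estimate lands at days rather than hours:

```python
# Rough scaling sketch: how often would maneuvers occur if SpaceX used
# the industry-standard collision-probability threshold instead of one
# 333x more conservative?

interval_min = 1.8        # observed: one avoidance maneuver per 1.8 minutes
threshold_factor = 333    # SpaceX's threshold is 333x stricter than standard

# Naive assumption: maneuver count scales linearly with the threshold.
naive_interval_h = interval_min * threshold_factor / 60
print(f"naive linear estimate: one maneuver every {naive_interval_h:.1f} hours")
# -> ~10 hours, matching the "every ten hours or so" figure above

# If conjunction probabilities have a power-law (Pareto) tail,
# P(p > x) ~ x**(-alpha), then raising the trigger threshold by a factor k
# cuts the event rate by k**alpha, not by k.
alpha = 1.4               # ASSUMED tail exponent, for illustration only
powerlaw_interval_d = interval_min * threshold_factor**alpha / 60 / 24
print(f"power-law estimate (alpha={alpha}): one maneuver every "
      f"{powerlaw_interval_d:.1f} days")
# -> roughly 4 days, consistent with "every few days"
```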
It is disingenuous to the point of dishonesty to use SpaceX's abundance of caution (or possibly braggadocious operational flex) as evidence that the risk is greater than it actually is.
It's not just Starlink up there: at minimum the NRO will be sad and unable to track nuclear weapons and such, the US military will be down a satellite comms system, and there are probably some people who use Starlink for something important.
Yeah it would suck to lose Starlink for a few years. I wouldn't mourn spy telescopes. But most other satellites like weather satellites or ballistic missile detectors or GPS are in higher orbits and wouldn't be affected at all.
My point is that even the unlikely worst case scenario would be limited in time and extent. It couldn't possibly block us from reaching space or last for decades, as some people fear.
They do verify their analytical calculation using an N-body simulation; that's section 4.4:
> We verify our analytic model against direct N-body conjunction simulations. Written in Python, the simulation code SatEvol propagates orbits using Keplerian orbital elements, and includes nodal and apsidal precession due to Earth’s J2 gravitational moment. [...] The N-body simulation code used in this paper is open source and can be found at https://github.com/norabolig/conjunctionSim.
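For a sense of what "nodal and apsidal precession due to Earth's J2 gravitational moment" means in a Keplerian propagator like the one quoted, here is a minimal sketch using the standard first-order J2 secular rates. This is not code from SatEvol itself, and the Starlink-like orbit parameters are only an illustration:

```python
import math

# Standard Earth constants
MU = 398600.4418      # km^3/s^2, Earth's gravitational parameter
R_E = 6378.137        # km, Earth's equatorial radius
J2 = 1.08263e-3       # Earth's oblateness coefficient

def j2_secular_rates(a_km, e, inc_deg):
    """First-order secular drift of the ascending node (RAAN) and the
    argument of perigee caused by Earth's J2 oblateness, in deg/day."""
    i = math.radians(inc_deg)
    n = math.sqrt(MU / a_km**3)            # mean motion, rad/s
    p = a_km * (1 - e**2)                  # semi-latus rectum, km
    k = 1.5 * J2 * (R_E / p)**2 * n        # common J2 factor, rad/s
    raan_rate = -k * math.cos(i)                      # nodal precession
    argp_rate = 0.5 * k * (5 * math.cos(i)**2 - 1)    # apsidal precession
    to_deg_per_day = math.degrees(1) * 86400
    return raan_rate * to_deg_per_day, argp_rate * to_deg_per_day

# Illustrative Starlink-like shell: ~550 km circular orbit at 53 degrees
raan, argp = j2_secular_rates(a_km=R_E + 550, e=0.0, inc_deg=53)
print(f"nodal precession:   {raan:+.2f} deg/day")   # about -4.5 deg/day
print(f"apsidal precession: {argp:+.2f} deg/day")   # about +3.0 deg/day
```

A full propagator like the one described adds these secular rates to the Keplerian elements at each step, which is what makes conjunction geometry drift over weeks and months instead of repeating exactly.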
You are too modest! You should start with your poem denouncing those pesky spam filters that hinder the honest viagra pill salesmen!
Then you could regret your inaction when Google downweighted zit-popping videos, and maybe by then you would have reached the point where it becomes reasonable to regret losing Facebook, the genocide facilitator.
There is a qualitative distinction between 'I filter for myself what I don't want to see' and 'The State decides what everyone is allowed to see.'
Not too sure about those zit-popping videos. But in my time, we had rotten.com, so I might be immunized to that kind of stuff. Personally, I find an honest zit-popping video no worse than yet another AI voice going on and on about some non-topic, clearly written by AI as well. I don't seek out either, but the zit-popping at least is over after 10 seconds.
But that's Google curating content. State censorship is something else entirely. Once justified "for the children" or "for security", it never stops at the first target. It grows, layer by layer. We’ve watched that pattern repeat for centuries across every medium humans have ever invented.
Facebook, the genocide facilitator? If we are honest, so was the printing press. Let's ban letters; they have facilitated genocide.
The printing press spread enlightenment, propaganda, revolutions, and atrocities. The State tried to control that too. It failed every time. It will fail with the net, for young people and for old ones.
Repression never works long-term; it always creates pressure that eventually breaks the system that produced it. Historically, societies tend to get worse before they correct themselves, because authoritarian overreach generates exactly the instability it claims to prevent.
Jefferson’s warning about the recurring need to renew freedom wasn’t a call for violence - it was an observation about the cyclical nature of power, repression, and reform. Every attempt to restrict communication has eventually collapsed under its own contradictions, and the internet will be no exception.