
It doesn't seem so at all from what I've tried.


What have you tried?

On webvm.io:

    $ time python3 -c "max(range(10**7))"
    real    0m1.558s

On https://copy.sh/v86/?profile=archlinux:

    # time python3 -c "max(range(10**7))"
    real    0m5.283s


> Mechanical computers are computers that operate using mechanical components rather than electronic ones.

For anyone who's excited about mechanical computers, perhaps it is worth remembering that an electron is about 1800 times lighter than a nucleon. Therefore, it's probably fair to say that mechanical computers will always consume more energy than electronic ones, because they fundamentally need to move atoms around to operate.


Taking this to its logical extreme, photonic computing should be significantly more efficient than electronic computing. Eventually.

Is that the end-game? Is there anything that would theoretically get closer to the Landauer limit than photonic computing? It’s way out of my element but I suppose this is a good venue to ask the question.

https://en.m.wikipedia.org/wiki/Landauer%27s_principle
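
For scale, the Landauer bound works out to k_B * T * ln 2 per erased bit, a minuscule amount of energy at room temperature. A quick back-of-the-envelope in Python (plain constants, nothing exotic assumed):

    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
    T = 300.0           # room temperature, K

    e_bit = k_B * T * math.log(2)  # minimum energy to erase one bit
    print(f"Landauer limit at {T:.0f} K: {e_bit:.3e} J per bit")
    # ~2.9e-21 J, several orders of magnitude below what today's
    # CMOS logic dissipates per switching event.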


The big problem in photonic computing is actually making an optical transistor, i.e. a switch where the presence of photons coming from one source controls whether photons coming from another source pass. This is harder than with electrical transistors because photons are bosons and don't interact with each other, so even theoretically this is hard to imagine.

Papers that claim some progress pop up every once in a while but I haven't seen anything promising yet.


Yes, general photonic computing is mostly “theoretical” at the moment. Still, discussion of theory is important. I wish I could add more to your comment, but I’m so far out of my depth that it would be simply misleading (blind leading the blind). I believe there’s theory saying it’s possible to create efficient photon<->matter interfaces which could achieve transistor-like behavior, but there’s too much I don’t understand to be able to evaluate whether there are inherent limitations which kill the practical application of the proposed theoretical mechanisms.

I think companies have come up with some practical applications of limited photonic “computing” at interface edges, but I’ve heard that until we no longer need to convert photonics to electronics, it won’t surpass electronics for general computing.


> so even theoretically this is hard to imagine.

Possibly a stretch, but transistors are basically current amplifiers, so their optical equivalent should be... lasers. Indeed, lasers are optical amplifiers. Whether or not they can be turned into logic gates the way transistors can, I don't know.


Maybe not more efficient, but perhaps more resilient to electromagnetic storms, less prone to overheating (maybe), etc. Maybe it's about fitting constrained scenarios.


> but maybe more resilient to electromagnetic storms

If you mean solar flares, that's generally an issue with long transmission lines, as opposed to very small circuits.


It seems like they may be prone to overheating in some fashion. All that electricity and motion has to cause some kind of thermal load. Or am I way off base?


Yes. Friction is usually the limiting factor in mechanical systems. It causes a lot of heat, noise, stress, and wear on all interacting parts. It requires all sorts of messy approaches to mitigate, such as lubricants and bearings. Electricity is basically magic by comparison.


Wouldn't that all depend on how much energy is used for computing, and how much for fetching and storing the bits involved? If the requirements involve slow computation with extremely long-term storage, perhaps mechanical computing can theoretically have an advantage.

Then again, Chuck Moore's GA144 [0] shows there's still plenty of room when it comes to optimizing electron-based computing for those kinds of extreme scenarios as well.

[0] https://www.youtube.com/watch?v=0PclgBd6_Zs


Sure, but how many electrons do we typically move around as a single signal?


Few, if any; instead, it's typically the propagation of an electromagnetic wave that transmits a signal: https://en.wikipedia.org/wiki/Speed_of_electricity


This Veritasium video is relevant, complete with Reddit discussion:

https://www.reddit.com/r/engineering/comments/qxrsrp/the_big...

What if you made a really big circuit consisting of a battery, switch, lightbulb, and a wire that goes out 300,000 km on either side, making a circuit that should take 1 s at the speed of light to travel through. How long after closing the switch will it take for the light to go on?


“Few”, yes. But definitely some. I don’t think you can have propagation of an EM wave through a conduit without at least pushing one electron into the conduit and removing one electron from the other side.


Yes, but it's subtle; see https://en.wikipedia.org/wiki/Drift_current and https://en.wikipedia.org/wiki/Drift_velocity and https://en.wikipedia.org/wiki/Electron_mobility for more details.

I was pretty surprised about this since I had mistakenly believed that electrons had a velocity near the speed of light, which I think is only true in particle accelerators.
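
A back-of-the-envelope drift-velocity calculation shows just how slow the electrons themselves are. Using standard textbook values for a 1 A current in a 1 mm^2 copper wire (all numbers below are typical figures, not measurements):

    # Drift velocity v = I / (n * A * q) for a copper wire.
    I = 1.0        # current, A
    A = 1.0e-6     # cross-section, m^2 (1 mm^2)
    n = 8.5e28     # conduction electrons per m^3 in copper
    q = 1.602e-19  # elementary charge, C

    v = I / (n * A * q)
    print(f"drift velocity: {v:.1e} m/s")            # ~7e-5 m/s
    print(f"hours to drift 1 m: {1 / v / 3600:.1f}")  # ~3.8 hours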


Indeed - I thought most college Physics 2 courses teach that electrons actually move quite slowly through conductors. It’s the “wave” which propagates near the speed of light, not the particles.


My mistake was being a biologist, and skipping or sleeping my way through the EE part of physics :) and then saying the wrong thing in front of some very smart people


Lord, I make that mistake on HN nearly every month. At least you didn’t have a “Putnam award” moment:

https://news.ycombinator.com/item?id=35079


I've had something similar: when I was deciding what grad school to go to, I was explaining how RNA enzymes work to some professor at CU Boulder, who turned out to be Tom Cech (who won the Nobel for discovering RNA enzymes); he had to correct a lot of the details I messed up. I ended up going to UCSF and fortunately didn't try to explain prions to Stanley Prusiner.

In short, nearly everything I have learned is from saying dumb things in front of very smart people who instantly understood my misunderstanding and knew exactly how to explain it so I understood. That includes Sanjay Ghemawat and Jeff Dean telling me "your idea isn't so good, it's n-squared, here's a linear solution"


https://youtu.be/2Vrhk5OjBP8?si=gDXZKYeFkVoAs_LG

AlphaPhoenix did an amazing experiment to measure the speed of electricity, FWIW. His other videos are incredible as well and explain EM physics in an absolutely outstanding way.


What is the electromagnetic wave made of? What's the substrate it is composed of and moving through?


The wave is electromagnetic energy passing through a waveguide (typically copper), mediated by electrons. See https://en.wikipedia.org/wiki/Waveguide


They took 20 bitcoins out of me, but I never bothered to file the paperwork. Oh, well.


I'm quite baffled by the fact that LLMs can generate a dataset used to train other LLMs. One would think that such a feedback loop would produce utter nonsense, but apparently not. This seems to work.


Humans have bootstrapped by training the next generation. Why not LLMs?


I think the perception is that humans can discover new information to question and improve what they learned, while LLMs cannot.


Human language drifts for the same reason LLM language would, but is continually reset to a sensible state by interaction with the real world.


If the correct labels in the original training set outweigh the incorrect ones, then it is possible to reduce the number of errors by relabeling using the trained model. If you can also identify labels that are likely to be incorrect, and then have humans focus on relabeling those, you have a way to efficiently improve the data.
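
A minimal sketch of that relabeling loop (scikit-learn's LogisticRegression standing in for whatever classifier you actually use; the function name and threshold are my own illustrative choices): train on the noisy labels, then surface the examples where the model confidently disagrees with the given label for human review.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def flag_suspect_labels(X, y_noisy, threshold=0.9):
        """Train on noisy labels; flag examples where the model
        confidently disagrees with the label it was given."""
        model = LogisticRegression(max_iter=1000).fit(X, y_noisy)
        proba = model.predict_proba(X)
        pred = proba.argmax(axis=1)
        confident = proba.max(axis=1) >= threshold
        return np.flatnonzero((pred != y_noisy) & confident)

    # Toy data: two Gaussian clusters with 10 labels flipped on purpose.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    flipped = rng.choice(200, 10, replace=False)
    y[flipped] ^= 1
    print(sorted(flag_suspect_labels(X, y)))  # mostly recovers `flipped`

In practice you'd use out-of-fold predictions rather than predictions on the training data itself, so the model can't simply memorize the noise.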


Yes, and there's even a name for it and an associated area of research.

https://en.wikipedia.org/wiki/Model_collapse


I feel the same way about synthetic data. Seems intuitively wrong that you can get new insights / unlock new abilities from generated data that you could not get from the original data.


The new information comes from our choice in how to generate that data. We're not just blindly making synthetic data; we come up with clever ways to generate synthetic data that is hopefully high quality and can improve our models (and if it doesn't, we don't use it).


> Nor is it music without a musician.

The case of music is quite fascinating. YouTuber Rick Beato is following it very closely. He seems to think that quite soon people will listen to AI-generated music even knowing it is AI-generated. They will listen to it because they'll enjoy it, and they will not care who or what made it. I personally think he is right. Music can be enjoyed in a way no other art can; in that sense it might become kind of a drug.

Now some people think that will never happen because AI music will always be bad, but recently he posted a video in which he tells us that his kids can immediately tell an AI-generated song from a normal one. He is baffled because he himself can't do that. But he also tells us that one of his kids thinks he probably won't be able to tell anymore in six months or so. In other words, a young person with such a good ear is confident that AI-generated music will soon be, if not good, at least indistinguishable from human-made music. I think that's indicative of where things are going.

https://www.youtube.com/watch?v=zbo6SdyWGns


> Now some people think that will never happen because AI music will always be bad

I think they are mistaken, and that this is the wrong way to think about this.

If you're against AI "doing art", it mustn't be because "it's not good". Because art is very subjective (so much so that it's almost impossible to define what it is), somewhere someone will like AI art. And the procedures and models will get better, too. I can envision a (very near) future where blockbuster movies and hit pop songs are entirely written by AI... it wouldn't be too different from the present, anyway.

No, if you're upset about AI art, it must not be about technical quality, but about human connection. Art is a human activity, by humans, for humans. Even if AI "gets better", I don't want us to be cut out of the loop.

Art is not something to optimize and automate.

(All of what I've just said is debatable, of course. Like art!)


If I heard an AI ballad about heartbreak or a punk anthem about injustice, my response would be "what the fuck do you know about it?"

Some music can be passively listened to and is built with consumption in mind (e.g., background music in games or restaurants).

Other music has a message and (as pretentious as this sounds) is about communication and connection between the artist and listener.


"I think in the future, instead of typing up our proofs, we would explain them to some GPT. And the GPT will try to formalize it in Lean as you go along. If everything checks out, the GPT will [essentially] say, “Here’s your paper in LaTeX; here’s your Lean proof. If you like, I can press this button and submit it to a journal for you.” It could be a wonderful assistant in the future."

That'd be nice, but eventually what will happen is that the human will submit a mathematical conjecture, and the computer will internally translate it into something like Lean and try to find either a proof or a disproof. It will then translate the result into natural language and present it to the user.

Unless mathematics is fundamentally more complicated than chess or go, I don't see why that could not happen.
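
For a sense of what the Lean side of that pipeline looks like, a minimal sketch (Lean 4; `Nat.Prime` is mathlib's primality predicate, and the Goldbach statement below is just that, a statement with no proof attached):

    import Mathlib

    -- A theorem Lean accepts outright: `n + 0` reduces to `n` by definition.
    theorem add_zero' (n : Nat) : n + 0 = n := rfl

    -- What a conjecture looks like once formalized. Finding a term of this
    -- type is exactly the open "proof search" problem.
    def goldbach : Prop :=
      ∀ n : Nat, 4 ≤ n → n % 2 = 0 →
        ∃ p q : Nat, Nat.Prime p ∧ Nat.Prime q ∧ p + q = n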


It is fundamentally more complicated. Possible moves can be evaluated against one another in games. We have no idea how to make progress on many math conjectures, e.g. the Goldbach or Riemann conjectures. An AI would need to find connections with different fields of mathematics that no human has found before, and this is far beyond what chess or Go AIs are doing.


Not a mathematician, but I can imagine a few different things which make proofs much more difficult if we simply tried to map chess algorithms to theorem proving. In chess, each board position is a node in a game tree and the legal moves represent edges to other nodes in the game tree. We could represent a proof as a path through a tree of legal transformations applied to some initial state as well.

But the first problem is, the number of legal transformations is actually infinite. (Maybe I am wrong about this.) So it immediately becomes impossible to search the full tree of possibilities.

Ok, so maybe a breadth-first approach won't work. Maybe we can use something like Monte Carlo tree search with move (i.e. math operation) ordering. But unlike chess/go, we can't just use rollouts because the "game" never ends. You can always keep tacking on more operations.

Maybe with a constrained set of transformations and a really good move ordering function it would be possible. Maybe Lean is already doing this.
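
To make the analogy concrete, here is a toy version of that search (purely illustrative; the rewrite rules and the length-based heuristic are arbitrary stand-ins, not how Lean or any real prover works): best-first search over string rewrites, with a depth cap playing the role of the missing "end of game".

    import heapq

    # Toy "theorem proving": reach `goal` from `start` via rewrite rules.
    RULES = [("a", "bb"), ("bb", "c"), ("c", "ac"), ("ac", "d")]

    def neighbors(s):
        # All strings reachable by one rewrite, at any position.
        for lhs, rhs in RULES:
            i = s.find(lhs)
            while i != -1:
                yield s[:i] + rhs + s[i + len(lhs):]
                i = s.find(lhs, i + 1)

    def prove(start, goal, max_depth=20, max_expansions=10_000):
        # Priority = crude "move ordering" heuristic + depth so far.
        frontier = [(abs(len(start) - len(goal)), 0, start, [start])]
        seen = {start}
        while frontier and max_expansions > 0:
            max_expansions -= 1
            _, depth, s, path = heapq.heappop(frontier)
            if s == goal:
                return path  # the "proof": a chain of rewrites
            if depth == max_depth:
                continue
            for t in neighbors(s):
                if t not in seen:
                    seen.add(t)
                    h = abs(len(t) - len(goal)) + depth + 1
                    heapq.heappush(frontier, (h, depth + 1, t, path + [t]))
        return None  # budget exhausted: no proof found

    print(prove("aa", "dd"))  # e.g. aa -> bba -> ca -> aca -> da -> ... -> dd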


> But the first problem is, the number of legal transformations is actually infinite.

I am fairly certain the number of legal transformations in mathematics is not infinite. There is a finite number of axioms, and all proven statements are derived from axioms through a finite number of steps.


Technically speaking, one of the foundational axioms of ZFC set theory is actually an axiom schema, i.e. an infinite collection of axioms all grouped together. I have no idea how Lean or Isabelle treat them.
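
For concreteness, the axiom schema of separation is the classic example: it stands for one axiom per choice of first-order formula φ, hence infinitely many axioms in total. In LaTeX notation:

    % One instance of the separation schema, for a fixed formula \varphi:
    \forall x \, \exists y \, \forall z \,
      \bigl( z \in y \leftrightarrow ( z \in x \wedge \varphi(z) ) \bigr)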


Whether it needs to be a schema of an infinite number of axioms depends on how big the sets can be. In higher order logic the quantifiers can range over more types of objects.


Lean (and Coq, and Agda, etc.) do not use ZFC; they use variants of MLTT/CiC. Even Isabelle does not use ZFC.


Isabelle is generic and supports many object logics, listed in [1]. Isabelle/HOL is the most popular, but Isabelle/ZF is also shipped in the distribution bundle for people who prefer set theory (like myself).

[1] https://isabelle.in.tum.de/doc/logics.pdf


Chess and Go are some of the simplest board games there are. Board gamers would put them in the very low “weight” rankings from a rules complexity perspective (compared to ”modern” board games)!

Mathematics is infinitely (ha) more complex. Work your way up to understanding (at least partially) a proof of Gödel’s incompleteness theorem and then you will agree! Apologies if you have done that already.

To some extent, mathematics is a bit like a drunkard looking for a coin at night under a streetlight, because that’s where the light is… there’s a whole lot more out there, though (quote from a prof in undergrad).


> Unless mathematics is fundamentally more complicated than chess or go, I don't see why that could not happen.

My usual comparison is Sokoban: there are still lots of levels that humans can beat but that all Sokoban AIs cannot, including the AIs that came out of DeepMind and Google. The problem is that the training set is so much smaller, and the success heuristics so much more exacting, that we can't get the same benefits from scale as we do with chess. Math is even harder than that.

(I wonder if there's something innate about "planning problems" that makes them hard for AI to do.)


> Unless mathematics is fundamentally more complicated than chess or go

"Fundamentally" carries the weight of the whole solar system here. Everyone knows mathematics is more conceptually and computationally complicated than chess.

But of course every person has different opinions on what makes two things "fundamentally" different, so this is a tautological statement.


Growing Neural Cellular Automata https://news.ycombinator.com/item?id=22300376, February 2020


Added above. Thanks!


> improved soil health, improved ecosystem health, better water retention, less erosion, more carbon sequestered in the soil.

Regarding carbon sequestration, I think it is worth pointing out that Freeman Dyson, in one of his lectures, mentioned no-till farming as one land management method that could be used to absorb the carbon emitted into the atmosphere by human activities.

"The point of this calculation is the very favorable rate of exchange between carbon in the atmosphere and carbon in the soil. To stop the carbon in the atmosphere from increasing, we only need to grow the biomass in the soil by a hundredth of an inch per year. Good topsoil contains about ten percent biomass, [Schlesinger, 1977], so a hundredth of an inch of biomass growth means about a tenth of an inch of topsoil. Changes in farming practices such as no-till farming, avoiding the use of the plow, cause biomass to grow at least as fast as this. If we plant crops without plowing the soil, more of the biomass goes into roots which stay in the soil, and less returns to the atmosphere. If we use genetic engineering to put more biomass into roots, we can probably achieve much more rapid growth of topsoil. I conclude from this calculation that the problem of carbon dioxide in the atmosphere is a problem of land management, not a problem of meteorology. No computer model of atmosphere and ocean can hope to predict the way we shall manage our land."

https://www.edge.org/conversation/freeman_dyson-heretical-th...
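
Dyson's "very favorable rate of exchange" is easy to sanity-check. A rough estimate (the land area, biomass density, and carbon fraction below are my own assumed round numbers, not Dyson's figures):

    # Back-of-the-envelope: carbon captured if all land grew biomass
    # at a hundredth of an inch per year, per Dyson's quote.
    land_area = 1.5e14          # m^2, Earth's total land area (rough)
    growth = 0.01 * 0.0254      # m/yr, a hundredth of an inch
    density = 1.0e3             # kg/m^3, biomass ~ density of water
    carbon_fraction = 0.5       # kg C per kg biomass, typical figure

    carbon = land_area * growth * density * carbon_fraction  # kg C / yr
    print(f"{carbon / 1e12:.0f} GtC per year")  # ~19 GtC
    # Fossil-fuel emissions are on the order of 10 GtC per year, so even
    # a fraction of the land managed this way would move the needle.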


Engineering-wise, I can't see why it wouldn't be possible to genetically engineer some kind of plant or algae that sucks up tons and tons of carbon, which you could then sequester manually by compressing it into, I don't know, artificial peat or something.

I have no idea why massive factory growing operations for produce aren't everywhere. Every city should have one by now, growing produce locally and shaving the cost of transport and waste down to almost nothing.


There’s really no need. Ever heard of an algal bloom? The same happens with jellyfish. In freshwater, plants like duckweed do similar things.

Given the right nutrients, population explosions happen. Environmentalists usually treat these as bad things, but they could certainly be good ways to sink carbon. They could be triggered by fertilizing some of the more barren sections of ocean, selected to minimize ecological effect. Quite a lot of the biomass simply falls to the ocean floor and gets buried; it could also be harvested and sequestered another way, or used as biomass fuel.

On freshwater lakes you could grow and harvest duckweed.


Investigate the biochar process as an alternative. You can also buy biochar for your houseplant needs [0]. Creating that much artificial peat in a small area would be a massive fire hazard, but it could probably be managed.

"Vertical" or indoor farming is one of those Silicon Valley tropes. VCs lost billions on it during the last decade, and I don't know of a single success story that is still going / profitable. I'm sure after enough time passes people will forget and try again.

[0]: https://rosysoil.com/


For what it's worth, that sounds a lot like what Max Tegmark classifies as the "level 1" multiverse.

https://space.mit.edu/home/tegmark/crazy.html


Friendly reminder that "Paris syndrome" is a thing:

https://en.wikipedia.org/wiki/Paris_syndrome

