Hacker News | empiricus's comments

Well, Python for AI is just syntactic sugar for calling PyTorch CUDA code on the GPU.
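
A minimal sketch of what that sugar looks like in practice (assuming a CUDA-capable GPU and a stock PyTorch install; the matrix sizes are arbitrary). The Python lines only describe the computation; the actual work is done by precompiled CUDA kernels that PyTorch dispatches to:

    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"

    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)

    c = a @ b  # launches a cuBLAS/CUDA matmul kernel asynchronously on the GPU
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the kernel to finish before printing
    print(c.shape, c.device)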

All nice and beautiful, but I don't understand how this will work in winter in the temperate areas. Do you maintain parallel natural gas installations and ramp them up in the winter? Does this double the cost?

Not having to burn gas is cheaper than burning gas. There will be a decade or two of transition, with rarely used gas turbines cramming their yearly run into a short amount of time. Eventually other tech will take over, or the gas infrastructure will pare down and be cost-optimized for its new role of rare usage.

Europe, and Germany and the UK in particular, are really poorly suited to take advantage of this new cheap technology. If these countries don't figure out alternatives, the countries with better and cheaper energy resources will take over energy intensive industries.

This is not a problem for solar and storage to solve, it's a problem that countries with poor resources need to solve if they want to compete in global industry.


Wind power. Mix in emergency reserves running on open-cycle gas turbines, if deemed necessary, preferably running on carbon-neutral fuel. Optimize for the lowest possible CAPEX.

That is contingent on us not wasting money and opportunity cost that could have a larger impact decarbonizing agriculture, construction, aviation, maritime shipping, etc.


The next hot thing (pun intended) is geothermal. The tech to drill deep enough opens up the possibility of extracting geothermal energy in most of the world. The tech exists and is deployed. Scaling is not yet proven but is very plausible. Geothermal runs 24/7 and can be clean base load power.

It’s not just drilling deep enough, it’s whether they can keep the wells open and flowing long enough to make the whole thing economic.

Some deep geothermal projects have failed because the wells wouldn't stay open. Maybe this generation of companies has solved this problem; let's wait and see.


From a global perspective, people living in temperate areas are actually the exception, not the rule (if a disproportionately economically successful exception).

The likely implication of this is that, long term, unless wind power starts going back down the cost curve, or you're fortunate enough to have lots of hydro power, Northern Europe, Canada, northern China and so on are going to have much more expensive energy than more equatorial places.


This probably depends a lot on how close you are to the equator. Here in Germany, the output of solar in winter is negligible, and if there is no wind, which can happen for several consecutive weeks, we need a backup. No utility company will build a fossil power plant that will be used only a few weeks per year, so our government will have to step in to make sure this happens.

On top of this you have very high costs for an increasingly complex grid, which needs to be built and then maintained. Prices will never again be as low as in the fossil/nuclear era.


Here are some numbers: in January 2025, the output of solar was ~1500 GWh; it peaked in June at 10500 GWh. So the lowest monthly output this year was about 15% of the maximum.

https://www.energy-charts.info/charts/energy/chart.htm?l=en&...

https://www.energy-charts.info/charts/energy/chart.htm?l=en&...

Looking at wind, the ratio between min and max per week is about 1:5 (~1200 vs ~6000 GWh). Just as there is always some solar power generation, there is never no wind, though looking at those charts there were 4 consecutive weeks in the late summer of 2023 when production was low, between 700 and 1000 GWh.
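
If you want to sanity-check those ratios, here's a trivial Python back-of-the-envelope (the GWh values are just the approximate figures quoted above, read off the energy-charts graphs):

    # Approximate figures quoted above, read off energy-charts.info
    solar_monthly_gwh = {"jan_2025": 1500, "jun_2025": 10500}
    wind_weekly_gwh = {"worst_week": 1200, "best_week": 6000}

    solar_ratio = solar_monthly_gwh["jan_2025"] / solar_monthly_gwh["jun_2025"]
    wind_ratio = wind_weekly_gwh["worst_week"] / wind_weekly_gwh["best_week"]

    print(f"solar monthly min/max: {solar_ratio:.0%}")  # ~14-15%, i.e. roughly 1:7
    print(f"wind weekly min/max:   {wind_ratio:.0%}")   # 20%, i.e. 1:5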


How do you interpret these numbers? If your point is that we can simply overprovision photovoltaic arrays by a factor of 6.67, then that would make solar the most expensive power generation method by far.

And it only gets worse the more households transition to heat pumps, because the consumption in winter is so lopsided. For example, I heat my home with a heat pump, and I have 10 kWp of solar arrays on my roof. In the last week of July, we consumed 84 kWh and generated 230 kWh (273%). In the last week of November, we consumed 341 kWh and generated 40 kWh (11%). This means we'd need roughly 10 times as much PV area to match demand (10 roofs?), and huge batteries, because most of that consumption is in the evening, at night, and in the morning.
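
Spelled out as a quick Python sketch (using the household numbers above; the rounding differs slightly from the percentages I quoted):

    weeks = {
        "late July":     {"consumed_kwh": 84,  "generated_kwh": 230},
        "late November": {"consumed_kwh": 341, "generated_kwh": 40},
    }

    for name, w in weeks.items():
        coverage = w["generated_kwh"] / w["consumed_kwh"]
        print(f"{name}: {coverage:.0%} of demand covered")  # ~274% vs ~12%

    # Overprovisioning factor needed to cover the November week with PV alone,
    # ignoring storage losses and the day/night mismatch mentioned above.
    nov = weeks["late November"]
    factor = nov["consumed_kwh"] / nov["generated_kwh"]
    print(f"November would need ~{factor:.0f}x the PV area")  # ~9x, i.e. roughly 10 roofs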

Of course, utility-scale and residential solar behave a bit differently, and it becomes more complicated if wind is factored in. But it shows that you can't just overprovision PV a little to fix the main problem of solar power: that it is most abundant in summer, and most in demand in winter.


My point was really only that neither is solar what I'd consider negligible in winter, nor are there really weeks with no wind. Other than that, my interpretation is pretty much the same as yours.

Above, I looked at the weekly min/max ratio. Of course the daily ratios are much higher, 1:60 for solar, and about 1:30 for wind. But wind and solar do have a useful anti-correlation: the ratio is "only" about 1:15 for combined solar+wind. Still high, but a huge improvement on both wind and solar individually.

https://www.energy-charts.info/charts/energy/chart.htm?l=en&...

In reality, the ratio is even higher since we routinely have to drop solar and turn off wind turbines when there is more production than demand (and I don't think that generation is reflected in the graph).

I.e., the max is already a representation more of grid and demand than of production, and it'd make more sense to use the min:mean ratio, comparing what we expect PV+wind to produce on average with what they give on the worst day. That gets us a different, more favorable ratio: 195 TWh produced in 2025 so far, let's call it 550 GWh/day, giving a ratio of about 1:6.
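
The min:mean arithmetic, spelled out (the 195 TWh year-to-date figure is from above; the worst-day output is only implied by the ~1:6 ratio, not read off the chart):

    produced_ytd_twh = 195   # PV + wind in Germany, 2025 year to date
    days_elapsed = 355       # roughly, at the time of writing

    mean_per_day_gwh = produced_ytd_twh * 1000 / days_elapsed
    print(f"mean daily PV+wind output: ~{mean_per_day_gwh:.0f} GWh")  # ~550 GWh/day

    # With a min:mean ratio of about 1:6, the worst day would be around
    # 550 / 6 ~ 90 GWh; only the ratio is stated above, not the raw minimum.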


Thank you for actually running the numbers. I think the data is quite convincing that overprovisioning won't be the solution to the seasonal storage problem, or at least not the major factor in it.

Personally, I have high hopes for flow batteries. Increasing storage capacity is so easy with them, liquids can easily be stored for a long time, and it would even make long-distance transport by ship feasible. If only we can find a cheap, suitable electrolyte.


This is just a slightly more sophisticated version of the "solar doesn't work at night" trope.

The implication of bringing it up is that these silly hippies haven't even thought of this basic fact, so how can we trust them with our energy system.

Meanwhile, actual energy experts have been aware of the concept of winter for at least a few years now.

If you want to critique their plans for dealing with it, you'd need to do more than point out the existence of winter as a gotcha.


I don't see you countering my argument, only attempting to ridicule it ("slightly more sophisticated", "trope", "these silly hippies", "been aware of the concept of winter", "existence of winter as a gotcha"). That sucks, man :-(

> If you want to critique their plans for dealing with it […]

There are many ideas for seasonal storage of PV-generated electricity, but so far there is no concrete plan that's both scalable to TWh levels and economically feasible. Here on HN, there's always someone who'll post the knee-jerk response of "just build more panels", without doing the simple and very obvious calculation that 5x to 10x overprovisioning would turn solar from one of the cheapest into by far the most expensive power generation method out there [1].

[1] Except for paying people to crank a generator by hand, although that might at least help with obesity rates.


> 5x to 10x overprovisioning would turn solar from one of the cheapest into by far the most expensive power generation method out there.

This is trivially false if the cost of solar generation (and battery storage) further drops by 5x to 10x.

Additionally that implies the overprovisioned power is worthless in the summer, which does not have to be the case. It might make certain processes viable due to very low cost of energy during those months. Not trivial as those industries would have to leave the equipment using the power unused during winter months, but the economics could still work for certain cases.

Some of the cases might even specifically be those that store energy for use in winter (although then we're not looking at the 'pure' overprovisioning solution anymore).


> This is trivially false if the cost of solar generation (and battery storage) further drops by 5x to 10x.

That's a huge "if". The cost of PV panels has come down by a factor of 10 in the last 13 years or so, that's true. I doubt another 10x decrease is possible, because at some point you run into material costs.

But the real issue is that price of the panels themselves is already only about 35% of the total installation cost of utility-scale PV. This means that even if the panels were free, it would only reduce the cost by a factor of 1.5.
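
The factor of 1.5 follows directly from that 35% share; as a one-liner (assuming the 35% figure quoted above):

    panel_share = 0.35                 # panels as a share of total utility-scale PV cost
    remaining_cost = 1 - panel_share   # what you'd still pay if panels were free
    print(f"max cost reduction from free panels: {1 / remaining_cost:.2f}x")  # ~1.54x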


> That's a huge "if". The cost of PV panels has come down by a factor of 10 in the last 13 years or so, that's true. I doubt another 10x decrease is possible, because at some point you run into material costs.

A factor of 5 is certainly within the realms of physics, given the numbers I've seen floating around. Note that prices are changing rapidly and any old price may not be current: around these parts, they're already so cheap they're worth it as fencing material even if you don't bother to power anything with them.

> But the real issue is that price of the panels themselves is already only about 35% of the total installation cost of utility-scale PV. This means that even if the panels were free, it would only reduce the cost by a factor of 1.5.

This should have changed your opinion, because it shows how the material costs are not the most important factor: we can get up to a 3x cost reduction by improving automation of construction of utility-scale PV plants.

I think I've seen some HN startups with this as their pitch; I've definitely seen some IRL with this pitch.


> But the real issue is that price of the panels themselves is already only about 35% of the total installation cost of utility-scale PV. This means that even if the panels were free, it would only reduce the cost by a factor of 1.5.

1. Do the other costs scale with the number of panels? Because if the sites are 5 times the scale of the current ones I would imagine there are considerable scale based cost efficiencies, both within projects and across projects (through standardization and commoditization).

2. Vertically mounted bifacial PV already greatly smoothes the power production curve throughout the day, improving profitability. Lower cost panels make the downside of requiring more panels in such a setup almost non-existent. Additionally, they reduce maintenance/cleaning costs by being mounted vertically.

3. Battery/energy storage (which further improve profitability) costs are dropping and can drop further.

Also, please address the matter of using the overprovisioned power in summer. Possible projects are underground thermal storage ("Pit Thermal Energy Storage", only works in places where heating is required in winter), desalination, producing ammonia for fertilizer, and producing jet fuel.


> 1. Do the other costs scale with the number of panels?

Mostly yes. Once you're at utility scale, installation and maintenance should scale 1:1 with the number of panels. Inverters and balancing systems should also scale 1:1, although you might be able to save a bit here if you're willing to "waste" power during peak insolation.

But think about it this way: If it was possible to reduce non-panel costs by a factor of 5 simply by building 5x larger solar plants, the operating companies would already be doing this. With non-panel costs around 65%, this would result in 65% * (1 - 1/5) = 52% savings and give them a huge advantage over the competition.
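
The savings estimate, spelled out (the 65% non-panel share follows from the 35% panel share quoted earlier; the 5x reduction of non-panel costs is the hypothetical being discussed):

    non_panel_share = 0.65   # 1 - 35% panel share
    scale_factor = 5         # hypothetical reduction of non-panel costs through scale

    savings = non_panel_share * (1 - 1 / scale_factor)
    print(f"total cost savings: {savings:.0%}")  # 52%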

> 2. Vertically mounted bifacial PV […] 3. Battery […] costs are dropping

I agree that intra-day fluctuations will be solved by cheaper panels and cheaper batteries, especially once sodium-ion battery costs fall significantly. But I'm specifically talking about seasonal storage here.

> Also, please address the matter of using the overprovisioned power in summer.

I'm quite pessimistic about that. Chemical plants tend to be extremely capital-intensive and quickly become non-profitable if they're effectively idle during half of the year. Underground thermal storage would require huge infrastructure investments into distribution, since most places don't already have district heating.

Sorry, very busy today so I can't go into all details, but I still wanted to give you an answer.


What amounts to a „concrete plan“? Right now we're still in the state where building more generation is the best use of our money, with batteries for load shifting by a few hours ramping up. So it's entirely expected that there is no infrastructure for seasonal storage yet. However, the maths for storing energy as hydrogen and heat looks quite favorable, and the necessary technology already exists.

"Concrete plan" means a technology which satisfies all of these requirements:

1) demonstrated ability in a utility-scale plant

2) already economically viable, or projected to be economically viable within 2 years by actual process engineers with experience in scaling up chemical/electrical plants to industrial size

Yes, that's hard to meet. But the thing is, we've seemingly heard of hundreds of revolutionary storage methods over the last decade, and so far nothing has come to fruition. That's because they were promised by researchers making breakthroughs in the lab, and forecasting orders of magnitude of cost reductions. They're doing great experimental work, but they lack the knowledge and experience to judge what it takes to go from lab result to utility-scale application.


> 2) already economically viable, or projected to be economically viable within 2 years by actual process engineers with experience in scaling up chemical/electrical plants to industrial size

Why 2 years?

Even though I'm expecting the current approximately exponential growth of both PV and wind to continue until they supply at least 50% of global electrical demand between them, I expect that to happen in the early 2030s, not by the end of 2027.

(I expect global battery capacity to be between a day and a week at that point, still not "seasonal" for sure).


> Why 2 years?

Significantly longer than that and you go from prediction to speculation, and it is unwise to base a country's energy policy on speculation.


Electrolysis hydrogen is only a little bit more expensive than hydrogen derived from methane, and electrolyzers with dozens of megawatts of capacity are available. That seems pretty solid to me at this point in the energy transition.

Hydrogen generation isn't the problem, storing it over several months is. Economical, safe, and reliable storage of hydrogen is very much an unsolved engineering challenge. If it weren't, hydrogen storage plants would shoot out of the ground left and right: even here in Germany, we have such an abundance of solar electricity during the summer months that wind generators have to be turned off, and the spot price of electricity still falls to negative values(!) around noon, almost every day.

Why stop at hydrogen for storage and transport? There's ammonia, methanol, and other derivatives that are easier to store and transport.

E.g. https://www.methanex.com/our-products/about-methanol/marine-...


Yes, those are easier to store, but more expensive and less efficient to generate.

The question is the same as for hydrogen: if it's easy, cheap, and safe to generate, store, and convert back into electricity, why isn't it already being done on a large, commercial scale? The answer is invariably that it's either not easy to scale, too expensive (in terms of upfront costs, maintenance costs, or inefficiencies), or too unsafe, at least today.


With rapidly dropping PV prices it just gets cheaper. This is only a relatively recent thing; the projects that exist to expand production are barely complete yet... capital plant takes time to build.

Fortescue only piloted the world's first ammonia dual-fuel vessel late last year; give them time to bed that in and advance.


You can store it in salt caverns

If that's so easy, cheap, and safe, why aren't there companies doing it on a large scale already? We're talking about billions of Euros of market volume.

Right now it's cheaper to make hydrogen from methane, and methane is easier to store and process, so no large-scale storage of hydrogen is in demand. Nevertheless, storage in salt caverns is a proven process that is in use right now; e.g. Linde does it.

And this also leaves out all the heating power still consumed directly from fossil fuels. The gap is much larger.

This doesn't have to be done by switching consumption; using less is possible: Passivhaus is from Germany, after all. However, you can't do that and keep all your historical protections on buildings and layers upon layers of red tape on renovations.


> it peaked in June at 10500 GWh

And 8280 GWh the previous June for those wondering roughly how much of this was due to more solar panels being deployed.


For reference, Germany has ~101 GW of solar capacity installed as of this comment (and is deploying ~2 GW/month). 59% of Germany's electricity in 2024 came from renewable sources, up from 56% in 2023. I am curious to see how 2025 turns out, and to then predict 2026 from planned renewables and battery storage projects.

Possible things are to overprovision solar, or to set it up further south with a high-voltage DC cable. We almost had a Morocco-UK power setup, but the current government said no to it.

One of the few problems with nuclear is summertime water use. Combining solar with nuclear would be the best option in my opinion.

Nuclear plants, like most large thermal plants, are almost always located near large bodies of water and return that water downstream so it doesn't really matter?

It matters if people don't want to see the rivers full of dead fish, so last year there were already shutdowns because of heatwaves.

https://www.euronews.com/2025/07/02/france-and-switzerland-s...


It does when you care about the environmental impact of your cooling (and also consider the fact that droughts are an increasingly severe problem).

It matters when the level of that body of water drops by a lot in summer and the water temperature rises at the same time. Add environmental laws (cooking the fish is discouraged), and your nuke plant needs to go into safety shutdown pretty reliably every summer.

Historically, the biggest impediment to nuclear power has been incompetent construction management and project management. Incompetent is a strong word for it, but nuclear power plants are the largest capital equipment purchases on the planet. Even modern so-called modular designs can't save poor project management and learn-as-you-go engineering.

That's so mundane and should be easy to fix, right? That's why I bring up scale. Nobody has experience running projects that big. Some things are just too big to manage.


Well, for some reason horse numbers and horse usage dropped sharply at a moment in time. Probably there was some horse pandemic I forgot about.

I agree that the current models are far from perfect. But I am curious how you see the future. Do you really think/feel they will stop here?

I mean, I'm just some guy, but in my mind:

- They are not making progress, currently. The elephant-in-the-room problem of hallucinations is exactly the same as it was 3 years ago or, as I said above, worse

- It's clearly possible to solve this, since we humans exist and our brains don't have this problem

There are then two possible paths: either the hallucinations are fundamental to the current architecture of LLMs, and there's some other aspect of the human brain's configuration that they've yet to replicate, or the hallucinations will go away with better and more training.

The latter seems to be the bet everyone is making; that's why there are all these data centers being built, right? So the question is whether larger training will solve the problem, and whether there's enough training data, silica, and electricity on Earth to perform that "scale" of training.

There are 86B neurons in the human brain. Each one is a stand-alone living organism, like a biological microcontroller. It has constantly mutating state and memory: short-term through RNA and protein presence or lack thereof, long-term through chromatin formation, enabling and disabling its own DNA over time, and in theory also permanently through DNA rewriting via TEs. Each one has a vast array of input modes: direct electrical stimulation, chemical signalling through a wide array of signaling molecules, and electrical field effects from adjacent cells.

Meanwhile, GPT-4 has 1.1T floats. No billions of interacting microcontrollers, just static floating points describing a network topology.

The complexity of the neural networks that run our minds is spectacularly higher than the simulated neural networks we're training on silicon.

That's my personal bet. I think the 86B interconnected stateful microcontrollers are so much more capable than the 1T static floating points, and the 1T static floating points are already nearly impossibly expensive to run. So I'm bearish, but of course, I don't actually know. We will see. For now all I can conclude is that the frontier model developers lie incessantly in every press release, just like their LLMs.


The complexity of actual biological neural networks became clear to me when I learned about the different types of neurons.

https://en.wikipedia.org/wiki/Neural_oscillation

There are clock neurons, ADC neurons that transform the analog intensity of a signal into counts of digital spikes, neurons that integrate signals over time, neurons that synchronize together, etc. Transformer models have none of this.


Thanks, that's a reasonable argument. Some critique: based on this argument, it is very surprising that LLMs work so well, or at all. The fact that even small LLMs do something suggests that the human substrate is quite inefficient for thinking. Compared to LLMs, it seems to me that 1. some humans are more aware of what they know; 2. humans have very tight feedback loops to regulate and correct. So I imagine we do not need much more scaling, just slightly better AI architectures. I guess we will see how it goes.

Well, the secret is not how you crawl the web, but how you decide what to show to the users.

It's not like the LOC either publishes their official procedure for what gets to appear on the foremost shelves.

I think the genome might be mostly just the "config file". So the cell already contains most of the information and mechanisms needed for the organism. The genome is config flags and some more detailed settings that turn things on and off in the cell, at specific times in the life of the organism. From this point of view, the discussion about how many pairs/bytes of information are in the genome is misleading. Similar analogy: I can write a hello world program, which displays hello world on the screen. But the screen is 4k, the windows background is also visible, so the hardware and OS are 6-8 orders of magnitude more complex than the puny program, and the output is then much more complex than the puny program.


By the time stellarator designs become economical (tens of years in the most optimistic case), you could cover the whole of Germany in PV panels, or even grow an entire new generation of forest. So far stellarators look like interesting vaporware. I mean, they are irrelevant to any current energy discussion.


I "like" when ppl talk about UBI and say "but ppl on UBI are not happy and lack purpose". Compare with being poor.


It's even more annoying when you consider that most proposals that gain any type of traction can't even ever approach the "I have everything now so I'm not going to work because I'm so lazy"-type abundance that the fear-mongers try to sell you. If we just were able to use some of all this wealth to create an absolute baseline of "enough money to not starve, have any kind of roof over your head and not be trapped in your current situation", I wonder what society could've looked like.


This looks nice, but somebody should pay the difference, and maybe it should be those who oppose the normal-looking supports.


Well, the cochlea is working within the realm of biological and physical possibilities. Basically it is a triangle through which waves propagate, with sensors along the edge. Something something, this is similar to a filter bank of Gabor filters that respond to rising frequency along the triangle edge. Ergo you can say Fourier, but it only means sensors responding to different frequencies because of their location.


Yeah, but not only the frequency is important: the waveform is very relevant. For example, if your waveform is a triangle, listeners will tell you that it is very noisy compared to a simple sine. If you use sines as the basis of your vector space, triangle waves really do look like a noisy mix. My question is whether the basic elements are really sines, or whether the basic eigen-waves of the cochlea are other waveforms (e.g. slightly wider or narrower than a sine, ...). If the physics in the ear isn't linear, maybe the sine isn't the purest waveform for a listener.

Most people in physics only know sines, and maybe sometimes rectangles, as a basis for transformations, but mathematically you could use a lot of other things, maybe very similar to sines, but different.


But if you apply a frequency-dependent phase shift to the triangle wave, nobody will be able to tell the difference unless the frequency is very low.
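
If you want to try this by ear, here's a small numpy/scipy sketch (file names and parameters are arbitrary): it builds a triangle wave from its odd sine harmonics, once with the textbook phases and once with the phases scrambled, so both files have exactly the same magnitude spectrum and differ only in phase:

    import numpy as np
    from scipy.io import wavfile

    fs, f0, dur = 44100, 220.0, 2.0            # sample rate, fundamental (Hz), seconds
    t = np.arange(int(fs * dur)) / fs
    rng = np.random.default_rng(0)

    def triangle(t, f0, n_harmonics=30, scramble_phase=False):
        """Triangle wave built from its odd sine harmonics (1/k^2 roll-off)."""
        x = np.zeros_like(t)
        for i in range(n_harmonics):
            k = 2 * i + 1                                   # odd harmonics only
            amp = (-1) ** i / k**2
            phase = rng.uniform(0, 2 * np.pi) if scramble_phase else 0.0
            x += amp * np.sin(2 * np.pi * k * f0 * t + phase)
        return x / np.max(np.abs(x))                        # normalize to [-1, 1]

    original = triangle(t, f0)
    scrambled = triangle(t, f0, scramble_phase=True)        # same magnitudes, shifted phases

    wavfile.write("triangle_original.wav", fs, (original * 32767).astype(np.int16))
    wavfile.write("triangle_scrambled.wav", fs, (scrambled * 32767).astype(np.int16))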


