
I went down a rabbit hole a few weeks back to discover this. I was so upset with so many wrong explanations, including Neil deGrasse Tyson saying we're moving into the bulges.

It's very magical, though, discovering the beauty of it.


Very much in line with Cory Doctorow's thoughts on enshittification: https://youtu.be/Eiu6FxigqrI?si=aT3vzWVSxV2pi_4U


He published a whole paper providing a systematic analysis of a wide range of models; there's an entire section on that. So it's not specific to PINNs.


The use of the term «AI» is, yet again, annoying in its vagueness.

I'm assuming that they do not refer to the general use of machines to solve differential equations (whether exactly or approximately), which is centuries old (Babbage's engine).

But then how restricted are these «Physics-Informed Neural Networks»? Are there other methods using neural networks to solve differential equations?
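
For what it's worth, the thing that makes a network «physics-informed» is that the differential equation itself appears in the training loss, as a residual evaluated at sampled points. A minimal sketch of that idea (assuming PyTorch and a toy ODE u'(x) = -u(x) with u(0) = 1; the setup is illustrative, not taken from the article):

    # Minimal PINN sketch: the network approximates u(x); the loss penalizes
    # the ODE residual u'(x) + u(x) at collocation points plus the initial condition.
    import torch

    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(5000):
        x = torch.rand(64, 1, requires_grad=True)          # collocation points in [0, 1]
        u = net(x)
        du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
        residual = du + u                                   # u'(x) + u(x) should be 0
        loss = (residual ** 2).mean() + (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # After training, net(x) should approximate exp(-x) on [0, 1].

Other neural-network approaches to differential equations do exist (e.g. neural ODEs and operator-learning methods); the residual-in-the-loss construction is the PINN-specific part.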


"Generalized adversarial networks, or GANs, are a conceptual advance that allow reinforcement learning problems to be solved automatically." -

"Generalized" :D Also the description is nonsense. This has nothing to do with reinforcement learning. Makes me wonder about the rest.


Now, if a press release from a top university is so wrong on something that is easily checkable, how accurate are other forms of news?


Think of "press release wrongness" with a probability distribution. Some press releases are really good, some are really bad. A sensible prior would be somewhere in the middle. If you start to see a lot of bad press releases, then you can update your posterior towards "I can't trust any of these."


Is that a good prior? I'd expect, due to Dunning–Kruger, that the willingness to produce an article on a topic follows a pretty intense bimodal distribution.


The paper has it right, at least.

