Whoa, thank you all for the nice comments, I didn't expect to make such a buzz here, nor today. I'm glad to see the reactions, even the negative ones; they seem aligned with what I expected. Yes, notebooks are very useful for a faster coding cycle, but they easily become unwieldy (I'd love to see better multiline editing and better autocompletion in Jupyter).
It seems I already posted my article 2 months ago, but I renamed the GitHub repo since then, which may explain why someone else (jedwhite) was able to submit my article again: https://news.ycombinator.com/item?id=18339703
I didn't submit it twice to HN myself. Well, it's nice to see that in a parallel world my post made the front page of HN! :-)
But could this mean that my HN account is shadow banned or something? It's strange that none of my own submissions to HN have gotten much attention for months. Or maybe it's just randomness... Well, thanks!
I see that the original poster (OP) used many-to-one LSTMs instead of many-to-many LSTMs. I could tell at first from the charts, and then confirmed it from the method named "predict_point_by_point", with the comment "Predict each timestep given the last sequence of true data, in effect only predicting 1 step ahead each time", in his code here: https://github.com/jaungiers/LSTM-Neural-Network-for-Time-Se...
Well, I'm glad to see that work similar to mine can get this much traction on HN; I would have loved to get this much attention when I published my own post. Anyway, I would suggest that OP take a look at seq2seq, as it objectively performs better (and without the "laggy drift" visual effect seen in OP's figure named "S&P500 multi-sequence prediction").
In other words, a many-to-one architecture used for multi-step prediction creates a feedback loop: it has to feed its own predictions back in as inputs, so it accumulates its own error. A seq2seq model doesn't build on its own accumulated error; it has a decoder with weights separate from the encoder's, and both can be deep (stacked).
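To make the feedback loop concrete, here is a minimal sketch (not OP's actual code; the model, window length and function names are hypothetical) of what a many-to-one model has to do to forecast several steps ahead:

    import numpy as np

    def predict_multi_step(model, seed_window, n_steps):
        # Roughly what a many-to-one model must do for multi-step forecasts:
        # each prediction is appended to the input window, so the next
        # prediction is conditioned on the errors of the previous ones.
        window = list(seed_window)
        predictions = []
        for _ in range(n_steps):
            x = np.array(window[-len(seed_window):]).reshape(1, len(seed_window), 1)
            y_hat = float(model.predict(x, verbose=0)[0, 0])
            predictions.append(y_hat)
            window.append(y_hat)  # the model's own output becomes its next input
        return predictions

That accumulated error is exactly what shows up as the "laggy drift" in the multi-sequence figure.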
The aim of this post is to explain why sequence-to-sequence models appear to perform better than many-to-one RNNs on signal prediction problems. It also describes an implementation of a sequence-to-sequence model using the Keras API.
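For anyone who wants the general shape before reading the post, here is a minimal Keras (tf.keras) sketch of an encoder-decoder seq2seq model for a 1-D signal; the layer sizes and window lengths below are placeholders, not the exact architecture from the article:

    from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense
    from tensorflow.keras.models import Model

    input_len, output_len, n_features = 50, 10, 1  # hypothetical window sizes

    # Encoder: reads the whole input window and compresses it into a fixed-size state.
    inputs = Input(shape=(input_len, n_features))
    encoded = LSTM(64)(inputs)

    # Decoder: separate weights from the encoder; it unrolls the full output
    # sequence in one shot, so its predictions are never fed back in as inputs.
    repeated = RepeatVector(output_len)(encoded)
    decoded = LSTM(64, return_sequences=True)(repeated)
    outputs = TimeDistributed(Dense(n_features))(decoded)

    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")

Stacking more LSTM layers on either side gives the deep (stacked) variant mentioned above.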
So many ads and things prompting for attention on this page! I had to press "x" on 3 things before I could read it. And once I'd read to the end, it referred to a "video above" that wouldn't open in Chrome for mobile. Wired, ease up on the crap!
You're not missing much in this 304-word blurb. Aside from the name-dropping, there's an introductory snippet that points you to a video.
> So how do they work? You may have heard that the normal rules of reality don’t always apply in the world of quantum mechanics. A phenomenon known as a quantum superposition allows things to kinda, sorta, be in two places at once, for example. In a quantum computer, that means bits of data can be more than just 1 or 0, as they are in a conventional computer; they can also be something like both at the same time.
> When data is encoded into effects like those, some normal limitations on conventional computers fall away. That allows a quantum computer to be much faster on certain tricky problems. Want a full PhD, or third-grade, explanation? Watch the video above.