I agree, the non-stop one-sided accusations of entitlement didn't seem very productive to me. I wonder if some day someone will write a post about things maintainers aren't entitled to. I can think of several things.
This is in no way exclusive to software. It's people in general. Dealing with people is extremely difficult. At least software developers aren't likely to get sued over these complaints.
Yes, the word "entitlement" is mostly used in response to perfectly acceptable behaviors, and it's almost always a passive-aggressive escalation.
I may not be entitled to be listened to when I speak, but it's still reasonable for me to speak with the expectation that I will be listened to. If I do speak, it's not an aggressive claim that I'm entitled to speak and you must listen.
It was interesting to read how you built a spiking neural network in PyTorch, but it seems your neurons' states are coupled continuously in time, whereas in the brain it would be the opposite, i.e. the spike timing carries the information, not the state values.
> backprop engenders STDP
This is backwards, I think, but it's definitely an interesting association to make.
(tl;dr: fit a sine and a cosine function the way you'd fit a linear regression. Think of it as solving for a free frequency and a free phase rather than for a free weight and bias.)
1. Convert the hours to an angle in degrees or in radians (a simple linear transformation).
2. Take the cos and sin of the angle to get the x and y position in a plane, respectively.
3. Introduce a time axis so that the points don't draw a circle but rather a helix (like DNA).
4. So we now have a ton of 3D data points: (time, x, y). Create an ML model that fits a sine and a cosine to those points as closely as possible. The model has only two free parameters to optimize: a shared phase offset and a shared frequency. The sine is fit against (time, y) and the cosine against (time, x).
5. Initialize the model with a random phase offset and a frequency ideally already close to the one you expect. Don't initialize with too high a frequency, to avoid fitting noise near the Nyquist frequency.
6. Optimize! (With least squares.) I'd guess you might converge to a local minimum and need to retry from different random starting frequencies if the fit fails.
7. The answer to your problem is the now-optimized frequency parameter. It no longer has to sit between two bins of your FFT.
Note: this link contains images illustrating the transformations I'm trying to explain.
Disclaimer: I haven't actually done this yet; it's just off the top of my head. If I got something wrong, please comment, most likely about a wrong convergence toward the Nyquist frequency or something like that.
In the end, this way, you won't have discrete FFT bins. You approach the problem orthogonally to that: you solve for the one best "bin" (frequency) directly.
In other words: treat the contents of the exponent of "e" as free parameters, solving for one such frequency and phase offset instead of many bins.
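A minimal sketch of steps 1-7, with the same off-the-top-of-my-head disclaimer as above. The data here is synthetic, and the initial guess of 0.2 stands in for whatever rough frequency you already have:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic stand-in data: clock readings `hours` observed at times `t`.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 20, 500))
true_freq, true_phase = 0.21, 0.8
hours = ((true_freq * t + true_phase / (2 * np.pi)) % 1.0) * 12

# Steps 1-2: hours -> angle -> (x, y) on the unit circle.
angle = 2 * np.pi * hours / 12
x, y = np.cos(angle), np.sin(angle)

# Step 4: one shared frequency and one shared phase offset; the cosine
# is fit against (t, x) and the sine against (t, y).
def residuals(params):
    freq, phase = params
    return np.concatenate([
        np.cos(2 * np.pi * freq * t + phase) - x,
        np.sin(2 * np.pi * freq * t + phase) - y,
    ])

# Steps 5-6: initialize near the frequency you expect, then least squares.
fit = least_squares(residuals, x0=[0.2, 0.0])

# Step 7: the fitted frequency and phase, not snapped to any FFT bin.
print(fit.x)
```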
I'd apply the window to randomly sampled mini-batches of consecutive points instead of optimizing the model on isolated randomly sampled points or on the whole dataset at once. I'd guess that using a Hann-Poisson window would make the "gradient" valley easier to "ski down" with gradient descent, which is a greedy algorithm: the spectral leakage the window causes should make the gradient landscape more monotonically decreasing toward the global minimum from every point.
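To make that concrete, here's a hedged sketch: the window formula is the standard Hann-Poisson definition, and the batching scheme is just one way to do it (the returned weights would multiply the residuals of the previous snippet before summing the loss):

```python
import numpy as np

def hann_poisson(n, alpha=2.0):
    # Standard Hann-Poisson window: a Hann window multiplied by a
    # two-sided decaying exponential; its spectrum has no side lobes.
    k = np.arange(n)
    hann = 0.5 * (1.0 - np.cos(2 * np.pi * k / (n - 1)))
    poisson = np.exp(-alpha * np.abs(n - 1 - 2 * k) / (n - 1))
    return hann * poisson

def windowed_batch(n_points, batch_size=64, rng=np.random.default_rng()):
    # Pick a random run of consecutive samples and the per-sample
    # weights that the window assigns to their residuals.
    start = rng.integers(0, n_points - batch_size)
    idx = np.arange(start, start + batch_size)
    return idx, hann_poisson(batch_size)
```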
Never tried macOS, but matplotlib works fine under Windows and Linux. Maybe you could save plots to image files on disk and prevent them from being shown? I once ran code that used matplotlib on a server, and I needed something like `matplotlib.use('Agg')` to keep it from crashing due to the lack of graphical output.
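Something like this, assuming you just want the files on disk (note that `matplotlib.use` has to be called before importing pyplot):

```python
import matplotlib
matplotlib.use('Agg')            # headless backend: no display required
import matplotlib.pyplot as plt

plt.plot([0, 1, 2], [0, 1, 4])
plt.savefig('plot.png')          # write to disk instead of opening a window
plt.close()
```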
Personally, I love having notebook cells so I can code without re-running everything, especially in deep learning, where training a model takes a long time. Jupyter is very good for creating and debugging code that:
A) needs a trained model in memory to work, when you want to skip the saving/loading part, or
B) saves and then loads a model.
If the "mutable state with global variables" drives you crazy, you may want to avoid reusing the same variable names from one cell to another, and reset the notebook more often. Also, avoid side effects (such as writing to or reading from disk) and try to have what are called pure functions (e.g., avoid singletons and service locators; pass references instead). If your code is clean and doesn't have too many side effects, you should be able to work in notebooks without headaches.
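A toy illustration of what I mean (all names made up): the first version depends on hidden notebook state, the second behaves the same whenever the cell runs.

```python
results = []                          # hidden notebook state

def run_experiment_impure(x):
    results.append(x * 2)             # side effect: mutates a global,
                                      # so cell execution order matters

def run_experiment_pure(x, results):
    return results + [x * 2]          # pure: same inputs, same output

out = run_experiment_pure(3, [])      # pass references explicitly
```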
Also, you can use your favorite editor for the code that lives outside notebooks (over time, more and more of the code will live there). You might often work in the editor and at other times in the notebook, depending on the nature of the work. As the project advances, notebooks become less and less important; they mainly serve to kickstart projects.
Thanks. Yes, when I'm being organized I manage to get the notebook to `reload` a module that I'm working on in my editor, which is probably the most important step toward reducing the code in the notebook.
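For anyone reading along, the usual incantation for that (`mymodule` is a stand-in for your own module):

```python
import importlib
import mymodule                 # hypothetical: the module open in your editor

importlib.reload(mymodule)      # re-execute it to pick up your edits

# Or let IPython reload edited modules automatically before every cell:
# %load_ext autoreload
# %autoreload 2
```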