Hacker News | joshlk's comments

What's a good reference to learn about Wavelet Matrices?


Start with Wavelet trees (much more intuitive): https://www.alexbowe.com/wavelet-trees/

The matrix version is just an implementation detail to store the tree in a less tree-like shape so you don't need as many pointers.
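For intuition, here's a toy Python sketch of a wavelet tree (my own illustration, not from the linked article): each node stores a bitmap saying which half of the alphabet each symbol falls in, and a rank query walks down the tree. A real implementation would use succinct bitmaps with O(1) rank instead of Python lists.

    # Toy wavelet tree; rank(c, i) counts occurrences of c in s[:i].
    # Illustrative only: real versions use succinct bitmaps with O(1) rank.
    class WaveletTree:
        def __init__(self, s, alphabet=None):
            self.alphabet = sorted(set(s)) if alphabet is None else alphabet
            if len(self.alphabet) <= 1:
                self.bits = None  # leaf: a single symbol
                return
            mid = len(self.alphabet) // 2
            left_set = set(self.alphabet[:mid])
            self.bits = [0 if c in left_set else 1 for c in s]
            self.left = WaveletTree([c for c in s if c in left_set],
                                    self.alphabet[:mid])
            self.right = WaveletTree([c for c in s if c not in left_set],
                                     self.alphabet[mid:])

        def rank(self, c, i):
            if self.bits is None:
                return i
            bit = 0 if c in self.alphabet[:len(self.alphabet) // 2] else 1
            j = sum(1 for b in self.bits[:i] if b == bit)  # rank within this node
            return (self.left if bit == 0 else self.right).rank(c, j)

    wt = WaveletTree("abracadabra")
    print(wt.rank("a", 8))  # 4 occurrences of 'a' in "abracada"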


Asking a silly question… what piece of kernel code do you find the most awe-inspiring or impressive?


Maybe fs/select.c or the polling machinery.


Agree. The VFS is a delight to read. It's a good intro to the kernel pattern of using function pointers to provide a generic API which other functionality can plug into, simply by implementing the appropriate functions. In this case you'll see all the filesystem drivers implement the VFS operations.
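To show the shape of that pattern in a few lines (a toy Python analogy of my own, not the actual kernel API): generic code only knows a table of operations, and each "filesystem" plugs in by supplying its own implementations.

    # Toy analogy of the VFS ops-table pattern (names are made up).
    class FileOperations:
        def __init__(self, read, write):
            self.read = read
            self.write = write

    def vfs_read(ops, nbytes):
        # generic layer: dispatch through whatever table was registered
        return ops.read(nbytes)

    ramfs_ops = FileOperations(read=lambda n: b"\x00" * n,
                               write=lambda data: len(data))
    print(vfs_read(ramfs_ops, 4))  # b'\x00\x00\x00\x00'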


The top claim is that it's "Incredibly Fast" but I can't find any performance benchmarks. Can anyone find a link?


It seems to just be re-exposing existing Lua runtimes, which makes the naming very unfortunate IMO. The underlying runtime Luau, for example, details its performance here: https://luau.org/performance . LuaJIT is already popular and has plenty of benchmarks online.


I assume the relevant performance benchmarking would be of the HTTP server APIs it exposes to Lua, not Lua itself.


R-trees are a good data structure to use in this case, enabling you to query a collection of intervals for overlaps with another interval in O(log n) time.

Wikipedia: https://en.wikipedia.org/wiki/R-tree
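A quick sketch using the Python rtree package (libspatialindex bindings; my example, so treat the details as assumptions rather than gospel): 1-D intervals can be stored as flat rectangles and queried with intersection().

    # Query a set of 1-D intervals for overlap using an R-tree.
    # Intervals are stored as degenerate rectangles: (start, 0, end, 0).
    from rtree import index

    idx = index.Index()
    intervals = [(0, 5), (3, 9), (12, 15)]
    for i, (start, end) in enumerate(intervals):
        idx.insert(i, (start, 0, end, 0))

    # Which stored intervals overlap [4, 13]?
    print(list(idx.intersection((4, 0, 13, 0))))  # ids 0, 1, 2 (order may vary)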


Yes


Rust can be used in an embedded environment and also offers asynchronous execution built into the language.


What you're describing is social Darwinism, an argument used by fascists.


Thanks! Will read up about it.

Edit: After reading up on social Darwinism on Wikipedia, Britannica and History.com, I guess not. Social Darwinism seems either not to be well defined or to assume that the fitness functions natural evolution worked out must be taken nearly as they are. I am not saying the latter.

I am saying there need to be more clearly defined fitness functions (call them performance criteria, KPIs, etc.), defined for modern times, which then need to be more consistently followed. This isn't much different from using a gradient descent algorithm, where weights are changed in accordance with their impact on the chosen loss function.
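As a concrete version of the analogy (just a toy numeric example I'm adding, nothing to do with the social question itself): gradient descent nudges each weight in proportion to its measured impact on the loss.

    # Minimal gradient descent on a 1-D quadratic loss (w - 3)^2.
    def grad(w):
        return 2.0 * (w - 3.0)  # derivative of the loss

    w, lr = 0.0, 0.1
    for _ in range(100):
        w -= lr * grad(w)  # update proportional to impact on the loss
    print(round(w, 3))  # ~3.0, the optimum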


Could you give an example of a real-life scenario and what this would look like?


Right question to ask, and I do not have good answers currently. But here are some thoughts:

First, a clarification of what I am not challenging about the status quo. In nature, some organisms are higher up in the food chain and freely kill others. We do NOT define 'fit' in a way where those who are better at killing other humans are favored for survival. We have already set this right by creating laws that punish homicide. This bends the optimum from favoring physical strength to favoring people who make good overall social contributions, which can be intellectual as well.

The value society provides to an individual should (generally) be based on the value they provide to society. This is already largely the case. However, I challenge inheritances, where someone may be born with a lot more than others without having made those contributions to society. There are debates online on this alone, and I cannot claim that the social choice should be exactly this way or that way.

In a democracy, people (except children, etc.) are given an equal right to vote. I do not find this optimal. People who understand social dynamics and the policies and promises of the various parties well (which does not include me) should have more influence on which party gets elected. I do not know how this could be implemented. Perhaps a quiz along with the vote?

I know these are not good or realistic examples. I'll need to think more. However, I do often feel that people who do good and think of society struggle more, while those who put themselves first at the cost of society often end up higher.


> [numpy] stores everything in rows

This isn't true. Pandas uses NumPy to store columns of data. There are quite a few technical errors in the article.
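You can check this directly. NumPy arrays default to row-major (C) order but support column-major (Fortran) order too, and each pandas column is backed by its own NumPy array:

    import numpy as np
    import pandas as pd

    a = np.arange(6).reshape(2, 3)   # C (row-major) order by default
    f = np.asfortranarray(a)         # same values, column-major layout
    print(a.flags['C_CONTIGUOUS'], f.flags['F_CONTIGUOUS'])  # True True

    df = pd.DataFrame({'x': [1, 2], 'y': [3.0, 4.0]})
    print(type(df['x'].to_numpy()))  # <class 'numpy.ndarray'>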


What do you do in the summer when the homes don’t want the heat?


That doesn't have to be a problem in practice.

The entire issue is that the earth surrounding the tubes is acting as a giant buffer. Enough heat has been dumped into it over the years that it has permanently warmed up. Draw heat from it during the winter to warm up homes, and it'll be able to absorb more heat from the tunnel air during the summer.


And because it's permanently warmed up, the long-term consequence is that the line becomes a health hazard and has to be closed for increasingly long periods.

When the wet-bulb temperature exceeds body temperature, people start getting heat stroke, which leads to fainting and potentially death - a bad look for a public transport system.

The likely remedy is to install gigantic refrigeration units in the ventilation shafts and pump in cold air. This will be hugely expensive to build and run.

But the alternative is a tube line that can't be used. So there may not be much choice.


It won't be zero, so spreading it across enough people might already solve it. If that still leads to insufficient demand during the hottest weeks, idk, it's energy, surely there's something useful you can do? Store it for the next week. Pre-heat water for the nearest steam engine (gas power plants are steam engines running on methane, so they'd have to heat the water by fewer degrees; the problem will be finding a steam engine close to the heat source). Supply it to an industrial process that needs temperatures above ambient (egg incubation for vaccine production? Idk). Or generate electricity from the temperature differential between this system and the Thames water via the Seebeck effect.

I've surely got too naïve a view of economics, but if the goal were not to waste resources, there would be things you could do before dumping it into the hot summer air.


People still take hot showers and use hot water.


When using low-precision formats like float8, you usually have to upcast the activations to BF16 before normalising, so the normalisation layers use proportionally more compute as you go to lower precision. Replacing these layers would reduce the compute cost significantly.
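Roughly this pattern, sketched in PyTorch (my illustration of the upcast-before-normalise step, not any particular library's code; float8 dtypes need a recent PyTorch):

    import torch

    def rmsnorm_from_fp8(x_fp8, weight, eps=1e-6):
        # normalisation needs higher precision: upcast activations to bf16
        x = x_fp8.to(torch.bfloat16)
        rms = torch.sqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
        return (x / rms) * weight  # the matmuls can stay in float8; this step can't

    x = torch.randn(2, 8).to(torch.float8_e4m3fn)  # requires torch >= 2.1
    w = torch.ones(8, dtype=torch.bfloat16)
    print(rmsnorm_from_fp8(x, w).dtype)  # torch.bfloat16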

