
They build channels on top of these "promises" and "futures", which puts them squarely in the communicating-sequential-processes category. Also, you can look at a promise-future pair as a single-element channel; again, that's CSP.
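
The "promise-future pair as a single-element channel" view can be sketched in a few lines. This is my illustration, not from the comment above; the `Future` class and its method names are invented, using a capacity-1 queue as the channel:

```python
import queue
import threading

# A promise/future pair modeled as a single-element channel:
# the producer "fulfills the promise" by putting exactly one value,
# the consumer "awaits the future" by taking it.
class Future:
    def __init__(self):
        self._chan = queue.Queue(maxsize=1)  # capacity-1 channel

    def fulfill(self, value):
        self._chan.put(value)  # blocks if already fulfilled

    def await_(self):
        return self._chan.get()  # blocks until the producer sends

f = Future()
threading.Thread(target=lambda: f.fulfill(42)).start()
print(f.await_())  # 42
```

The blocking `get` is exactly a CSP-style receive on a channel that happens to carry only one value.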

BTW, Erlang does not implement CSP fully. Its interprocess communication is TCP-based in the general case and is therefore faulty.


It is not TCP-based. Erlang processes have mailboxes, but they don't have promises: you send a message and either wait for a response with a timeout or do something else. TCP is only used between nodes (VM instances), and even then you can use any communication channel (UDP, Unix sockets, TLS, a serial port, some other process doing funny things).
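
The mailbox pattern described above (fire-and-forget send, receive with timeout) can be sketched in Python; `send` and `receive` are names I made up for this illustration, mimicking Erlang's `!` and `receive ... after`:

```python
import queue

# Each "process" owns a mailbox; senders never block waiting for a reply.
mailbox = queue.Queue()

def send(mb, msg):
    mb.put(msg)  # fire-and-forget, like Erlang's `!`

def receive(mb, timeout):
    # Like Erlang's `receive ... after Timeout -> ...`:
    # wait for a message, or give up and do something else.
    try:
        return mb.get(timeout=timeout)
    except queue.Empty:
        return None  # timed out; the caller decides what to do next

send(mailbox, ("ping", "from_pid_1"))
print(receive(mailbox, timeout=0.1))  # ('ping', 'from_pid_1')
print(receive(mailbox, timeout=0.1))  # None (timed out)
```

Note there is no reply object at all: if you want an answer, the other process must explicitly send a message back to your mailbox.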

> Its interprocess communication is TCP-based in the general case and is therefore faulty.

What? It's faulty because of TCP? No, in Erlang it is assumed that communication can be faulty for a lot of reasons, so you have to program to deal with that and the standard library gives you tools to deal with this.


There is no such thing as "Communicating Sequential Processes with faulty channels and processes." I tried to find something like that, fruitlessly.

This means that Erlang does not implement CSP, it implements something else.

Again, the general case of communication between Erlang processes includes communication between processes on different machines.


> BTW, Erlang does not implement CSP fully.

Specific evidence?

> Its interprocess communication is TCP-based in the general case

No, it is not. That is only true between machines.

> and because of this is faulty.

LOL, no. Why are you rolling with "speaking a whole lot of BS based on ignorance" today?

On the other hand, I now understand that one impediment to Elixir adoption is apparently "people repeating a lot of bullshit misinformation about it"


  >> Its interprocess communication is TCP-based in the general case
  > No, it is not. That is only true between machines.
It is true for communication between two VMs on the same machine, isn't it?

The general case includes processes on the same VM, processes on different VMs, and also different VMs on different machines.

  > Why are you rolling with "speaking a whole lot of BS based on ignorance" today?
TCP is unreliable: https://networkengineering.stackexchange.com/questions/55581...

That was acknowledged by Erlang's developers before 2012. I remember that the ICFP 2012 presentation about Cloud Haskell mentioned that "Erlang 2.0" apparently acknowledged TCP's unreliability and tried to work around it.


Here, from page 31 onward: https://wiki.haskell.org/wikiupload/4/46/Hiw2012-duncan-cout...

Erlang circa 2012 was even less reliable than the TCP on which its interprocess communication was based.

Namely, TCP guarantees that the receiver sees some prefix of the message stream m1, m2, m3, ... But Erlang circa 2012 allowed m1, m3, ... to be received, silently dropping m2.

That may not be the case today, but it was the case about ten years ago.


  > Human brains aren't magic, special or different.
DNA inside neurons uses superconductive quantum computations [1].

[1] https://www.nature.com/articles/s41598-024-62539-5

As a result, all living cells with DNA emit coherent (as in lasers) light [2]. There is a theory that this light also facilitates intercellular communication.

[2] https://www.sciencealert.com/we-emit-a-visible-light-that-va...

Chemical structures in dendrites, not even whole neurons, are capable of computing XOR [3], which requires a multilayer artificial neural network with at least 9 parameters. Some neurons in the brain have hundreds of thousands of dendrites, so we are talking about millions of parameters in a single neuron's dendritic machinery alone.

[3] https://www.science.org/doi/10.1126/science.aax6239
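
The 9-parameter figure can be made concrete: XOR is not linearly separable, so a single layer cannot compute it, but a 2-2-1 network with exactly 9 parameters (4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias) can. A minimal sketch with hand-picked weights (my own illustration, not the paper's model):

```python
# Step activation; weights chosen by hand, not trained.
step = lambda z: 1 if z > 0 else 0

def xor_net(x1, x2):
    h1 = step(1*x1 + 1*x2 - 0.5)    # OR-like hidden unit
    h2 = step(1*x1 + 1*x2 - 1.5)    # AND-like hidden unit
    return step(1*h1 - 1*h2 - 0.5)  # "OR and not AND" == XOR

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```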

So, while human brains aren't magic, special or different, they are extremely complex.

Imagine building a computer out of 85 billion superconducting quantum computers, optically and electrically connected, each capable of performing the computations of a non-negligibly complex artificial neural network.


All three appear to be technically correct, but are (normally) only incidental to the operation of neurons as neurons. We know this because we can test which aspects of neurons actually lead to practical real-world effects. Neurophysiology is not a particularly obscure or occult field, so there are many, many papers and textbooks on the topic. (And there's a large subset you can test on yourself, besides, though I wouldn't recommend patch-clamping!)

  > We know this because we can test what aspects of neurons actually lead to practical real world effects.
Electric current is also a quantum phenomenon, but it too is heavily averaged out in most circumstances that lead to practical real-world effects.

What is wonderful here is that the contemporary electronics wizardry that allows us to have machines that mimic some aspects of thinking is also very concerned with quantum-level electromagnetic effects at the transistor level.


On reread, if your actual argument is that SNN are surprisingly sophisticated and powerful, and we might be underestimating how complex the brain's circuits really are, then maybe we're in violent agreement.

They are extremely complex, but is that complexity required for building a thinking machine? We don't understand bird physiology well enough to build a bird from scratch, but an airplane flies just the same.

The complexity of contemporary computers and of computing-related infrastructure (consider ASML and the electric grid) is orders of magnitude higher than what was needed for the first computers. The difference? We now have something that mimics some aspects of (human) thinking.

How complex does everything computing-related have to become to mimic (human) thinking a little more closely?


Are we not just getting lost in semantics when we say "fly"? An airplane does not at all perform the same behavior as a bird. Do we say that boats or submarines "swim"?

Planes and boats disrupt the environments they move through and air and sea freight are massive contributors to pollution.


You seem to have really gone off the rails midway through that post...

These displays use rotating mechanisms.

This one does not: https://www.youtube.com/watch?v=wrfBjRp61iY

The volumetric display in the video above uses a static projector whose pixels light up etchings inside a solid glass block.


Thank you for sharing - it's a brilliant piece of tech. I posted this earlier, but it didn't get any upvotes:

https://news.ycombinator.com/item?id=46137203


The same person built both of these.

I feel like I saw this on Hackaday; at least I remember hearing the podcast about projecting all the rays at all intersections. It was green, though, so maybe I'm thinking of something else.

Oh wow, yeah, I've seen a lot of this channel's work from before the Lego display - the CV fiber-optic bundle display.


Whatever the outcome, when someone sets up an optical table, I'm sold.

Speaking of tables, you probably already know about Tilt-Five? If not, they made a very neat social AR system focused on tabletop gaming.

https://www.tiltfive.com/


  > ...when he's not working is when he's depressed.
The cure for that has been known since the dawn of time: walking.

Holmes, being an exceptionally observant man, would definitely have observed that walks raise the mood, allow (most often silly) ideas to come, and, last but not least, improve one's powers of observation, attention to detail and speed of thought.

Arthur Conan Doyle took extensive walks back then, but his hero was written not to. That is not right.


As I recall, Holmes did in fact do a lot of walking. He vacillated between periods of inactivity (cocaine, violin, shooting a V in the wall with a revolver) and intense activity (taking up disguises and doing various physical things, including walking all across London and elsewhere).

Just because your logical mind says something is good to do, and you know you should do it, doesn't mean you will always obey your rider; the inertia of the elephant takes over.

So you need a trigger to snap out of it, for Holmes it was a new case.


> and intense activity

AFAIR those had a specific purpose (chasing a perp, tracking down evidence, etc.). Most of his thinking he did sitting in a chair and smoking his pipe for hours on end (sometimes the whole night).


No, they are not. He plays the violin and shoots a gun inside his house for fun.


Holmes is basically a border collie?


We all are.


5 days ago: https://news.ycombinator.com/item?id=45926371

Sparse models have the same quality of results but fewer coefficients to process - in the case described in the link above, sixteen (16) times fewer.

This means that these models need 8 times less data to store, can be 16 or more times faster, and can use 16+ times less energy.
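
The speedup claim follows from the arithmetic of sparse matrix-vector products: you only touch stored (nonzero) entries. A toy sketch with an invented matrix, storing rows as `(column, value)` pairs, CSR-style:

```python
# A sparse matvec does work proportional to the number of nonzeros,
# not to rows*cols. Numbers below are made up for illustration.
def sparse_matvec(rows, x):
    # rows: per-row list of (column_index, value) for nonzero entries
    return [sum(v * x[j] for j, v in row) for row in rows]

dense  = [[0, 2, 0, 0],
          [0, 0, 0, 3]]
sparse = [[(1, 2)], [(3, 3)]]   # same matrix, 2 stored values instead of 8
x = [1, 1, 1, 1]

print(sparse_matvec(sparse, x))  # [2, 3] - 2 multiplies instead of 8
```

The storage saving is smaller than the nonzero ratio because each stored value also needs an index, which is roughly where "16x fewer coefficients but only 8x less data" comes from.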

TPUs are not all that good with sparse matrices. They can be used to train dense versions, but inference efficiency with sparse matrices may not be all that great.


TPUs do include dedicated hardware, SparseCores, for sparse operations.

https://docs.cloud.google.com/tpu/docs/system-architecture-t...

https://openxla.org/xla/sparsecore


SparseCores appear to be block-sparse as opposed to element-sparse. They use 8- and 16-wide vectors to compute.

Here's another inference-efficient architecture where TPUs are useless: https://arxiv.org/pdf/2210.08277

There is no matrix-vector multiplication. Parameters are estimated using Gumbel-Softmax. TPUs are of no use here.

Inference is done bit-wise, and the most efficient inference is obtained after applying boolean logic simplification algorithms (ABC or mockturtle).
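
"Bit-wise inference" here means that once the model has collapsed into a boolean circuit, a whole batch of examples can be evaluated with word-level AND/OR/XOR, one example per bit. The circuit below is my own invented example, not the paper's network:

```python
# Evaluate a tiny boolean circuit on several examples at once:
# each operand packs one example per bit, so one & or ^ processes
# the whole batch.
def circuit(a, b, c):
    t = (a & b) | (~a & c)  # a 2:1 multiplexer as a stand-in gate
    return t ^ c

# four examples packed into the low 4 bits of each operand
a, b, c = 0b1100, 0b1010, 0b0110
print(bin(circuit(a, b, c) & 0b1111))
```

Simplifying the circuit first (fewer gates) translates directly into fewer word-level operations per batch, which is where tools like ABC pay off.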

In my (not so) humble opinion, TPUs are an example of premature optimization.


They are on their 7th generation now, so presumably the architecture is being updated as needs require.


One can do a direct translation from the Rust AST/IR to C. Many functional languages do that, and C++ started as a compiler that emitted C.


What is a language feature in other languages is a library in Haskell.


Arguably an effect monad is an EDSL that has algebraic effects :)

But the things these languages are experimenting with are low-level implementation details that wouldn't be amenable to embedding. There's no escaping the Haskell GC.


Atom [1] is an EDSL that escaped the Haskell GC. Note that Atom takes ideas from Bluespec, which compiles to hardware circuits, where a GC is not available.

  [1] https://hackage.haskell.org/package/atom
One can make a Haskell EDSL with effects and everything and have it output C (or some compiler's IR) code.

These languages you mentioned repeat Rust's mistake.

Rust's type system includes rules that remove definitions from the scope/environment. This is inherent and obligatory for uniqueness/linear type systems.

At the time Rust was conceived, Haskell had the HList library [2] and the extended state monad from Beyond Monads [3]. Combining the two would have embedded into Haskell most, if not all, of Rust as it then was, allowing research into how to combine borrow logic with algebraic effects. But Rust's developers preferred to go the OCaml-implementation (syntax-first) way and not to pursue the complexity issues of semantics.

  [2] https://hackage.haskell.org/package/HList
  [3] http://blog.sigfpe.com/2009/02/beyond-monads.html


Transformers perform a (soft, continuous) beam search internally, the width of the beam being no bigger than the number of k-v pairs in the attention mechanism.

In my experience, equipping a Markov chain with beam search greatly improves its predictive power, even if the Markov chain is a heavily pruned ARPA 3-gram model.

What is more, Markov chains are not restricted to immediate prefixes; you can use skip-grams as well. How to use them and how to mix them into a single list of probabilities is shown in the paper on Sparse Non-negative Matrix Language Modeling [1].

[1] https://aclanthology.org/Q16-1024/
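
Beam search over a Markov chain is simple enough to sketch; the bigram table and its probabilities below are invented for illustration:

```python
import heapq
from math import log

# Toy bigram model: P(next word | previous word). Numbers are made up.
bigram = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "end": 0.4},
    "dog": {"sat": 0.2, "end": 0.8},
    "sat": {"end": 1.0},
}

def beam_search(start, width=2, steps=3):
    # Each hypothesis: (log-probability, word sequence). Keep the `width` best.
    beam = [(0.0, (start,))]
    for _ in range(steps):
        cands = []
        for lp, seq in beam:
            nexts = bigram.get(seq[-1], {})
            if not nexts:  # finished hypothesis: carry it forward unchanged
                cands.append((lp, seq))
                continue
            for w, p in nexts.items():
                cands.append((lp + log(p), seq + (w,)))
        beam = heapq.nlargest(width, cands)  # prune to the beam width
    return max(beam)[1]

print(beam_search("the"))  # ('the', 'cat', 'sat', 'end')
```

Greedy decoding (width=1) would commit to "cat" immediately; a wider beam lets a locally weaker continuation win later, which is what boosts the n-gram model's predictive power.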

I think I should look into that link of yours later. Having skimmed it, I should say it... smells interesting in places. For one example, decision-tree learning is performed with a greedy algorithm which, I believe, does not use oblique splits, whereas transformers inherently learn oblique splits.


  > A markov chain model will literally have a matrix entry for every possible combination of inputs.
The less frequent prefixes are usually pruned away, and there is a penalty score added when backing off to a shorter prefix. In the end, all words are included in the model's prediction, and a typical n-gram SRILM model is able to generate "the pig with dragon head", also with a small probability.

Even if you think of a Markov chain's information as a tensor (not a matrix), the computation of probabilities is not a single lookup but a series of folds.


This looks no harder than training a custom Kaldi (circa 2017) phoneme model on brain waves and using the rest of Kaldi's functionality for everything else except text-to-speech. There was WaveNet for TTS at that time, with sound quality good enough for (and improvable by) radio transmission.

Thanks for the link!


The point is: such tech is used right now to neutralize individuals. Imagine hearing the word "bread", inescapably, a couple hundred times a day, coming from an unknown source right into your head. For months and (!) years, right at the moment you are trying to conceptualize a slightly harder thought than usual. Everywhere you go, 24/7. While there's no help from anywhere (the police haven't answered me for 2 years and counting), as the general public brushes it off as schizophrenia (it's not - the voices completely stopped when a lightning storm took out the electricity) and the Church paints it as the second coming of Christ (or the antichrist when more suitable).


My mostly uneducated guess at what's going on: a radio wave gets sent, the human body slightly modulates it, and the same signal gets received back and used to reconstruct (an approximation of?) an EEG from the noise delta. Neural models are the secret sauce that makes such signal processing possible.

