
Is there any source available for this?


Rich Hickey calls these protocols. In Haskell you get them via typeclasses, and in Rust via traits.


Many believe that F# is better for functional code and C# is better for imperative code. Hence the pattern of an F# immutable core and a C# imperative shell.

But I think this is a myth. F# the language is better than C# at most things and certainly better overall. If you are going to use F#, you may as well go all in and get all of the benefits.

For web services, Giraffe or Suave combinators are much easier to reason about than ASP.NET MVC patterns.


I'm not convinced Tor could scale if a significant percentage of internet traffic used it.


Those are real issues though.


Is concurrency useful for ML?


If your data loading pipeline grows even slightly complex, then yes, you absolutely need concurrency in order to deliver your samples to the GPU fast enough.

The current workarounds to make this happen in Python are quite ugly IMHO. PyTorch, for example, spawns multiple Python processes and then pushes data between them through shared memory, which incurs considerable overhead; TensorFlow instead requires you to stick to its tensor DSL so that loading can run inside its graph engine. If native concurrency were a thing, data loading would be much more straightforward to implement without such hacks.
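For concreteness, here is roughly what that looks like on the PyTorch side (the ToyDataset is a made-up stand-in for a real dataset); num_workers > 0 is what turns on the worker processes and shared-memory hand-off described above:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """Stand-in for a real dataset; __getitem__ is where any slow
    decoding/augmentation would happen."""
    def __len__(self):
        return 10_000

    def __getitem__(self, idx):
        return torch.randn(3, 224, 224), idx % 10

if __name__ == "__main__":
    # num_workers > 0 makes the DataLoader start worker processes; each
    # worker runs __getitem__ and the resulting tensors are handed back
    # to the main process via shared memory (the overhead mentioned above).
    loader = DataLoader(ToyDataset(), batch_size=32, num_workers=4,
                        pin_memory=True)
    for images, labels in loader:
        pass  # forward/backward pass would go here
```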


Yes, it can be.

1. Loading data

2. Running algorithms that benefit from shared memory (see the sketch below)

3. Serving the model (if it's not being output to some portable format)

There are also general benefits of using one language across a project. Because Python is weak on these things, we end up using multiple languages.
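On point 2, a rough sketch of what shared-memory parallelism takes in stock Python today (multiprocessing.shared_memory; the array and the in-place doubling are just placeholders for a real algorithm), which also shows how much ceremony is involved:

```python
from multiprocessing import Process, shared_memory
import numpy as np

def worker(shm_name, shape, dtype, start, stop):
    # Attach to the existing shared block; no copy of the array is made.
    shm = shared_memory.SharedMemory(name=shm_name)
    arr = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    arr[start:stop] *= 2.0  # placeholder for real work on a slice
    shm.close()

if __name__ == "__main__":
    data = np.arange(1_000_000, dtype=np.float64)
    shm = shared_memory.SharedMemory(create=True, size=data.nbytes)
    shared = np.ndarray(data.shape, dtype=data.dtype, buffer=shm.buf)
    shared[:] = data  # single copy into the shared block

    mid = len(data) // 2
    procs = [Process(target=worker,
                     args=(shm.name, data.shape, data.dtype, s, e))
             for s, e in ((0, mid), (mid, len(data)))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    print(shared[:5])  # [0. 2. 4. 6. 8.]
    shm.close()
    shm.unlink()
```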


You end up having to do a lot of things in an ML training run, some of which you can do in parallel either because they aren't needed right away (e.g. saving metadata) or because you'd otherwise be resource-limited (e.g. loading data and formatting batches for training).
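A minimal sketch of the "not important now" case, with a hypothetical save_metadata standing in for whatever bookkeeping a run does: hand it off to a background executor and keep training.

```python
from concurrent.futures import ThreadPoolExecutor
import json
import time

def save_metadata(step, metrics, path="run_metadata.json"):
    # Hypothetical bookkeeping: write out whatever the run wants to record.
    with open(path, "w") as f:
        json.dump({"step": step, "metrics": metrics}, f)

executor = ThreadPoolExecutor(max_workers=1)

for step in range(1000):
    metrics = {"loss": 1.0 / (step + 1)}  # stand-in for a real training step
    time.sleep(0.01)
    # Fire and forget: the (slow) write overlaps with the next training
    # steps instead of stalling the loop.
    executor.submit(save_metadata, step, metrics)

executor.shutdown(wait=True)  # flush any pending writes at the end of the run
```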


And for this you cannot use Python's multiprocessing because ... ? Sure, moving data between processes is slow because of pickling [0]. However, I'm using parallel processing for the things you suggested, and for these it works great.

If I really had the use case and needed threads, I'd much rather use C++ bindings in a Python package than rebuild the whole thing. Guess it depends on the scale we are talking about.

[0] https://pythonspeed.com/articles/faster-multiprocessing-pick...
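For example, something along these lines (with a hypothetical preprocess function) covers a lot of data-preparation work, and the pickling cost from [0] is only paid on the arguments and return values:

```python
from multiprocessing import Pool

def preprocess(sample_id):
    # Hypothetical per-sample work; CPU-bound busywork stands in for
    # real decoding/augmentation.
    return sum(i * i for i in range(sample_id % 1000))

if __name__ == "__main__":
    sample_ids = list(range(10_000))
    with Pool(processes=8) as pool:
        # Each argument is pickled to a worker and each result pickled
        # back; that round trip is the overhead discussed in [0].
        features = pool.map(preprocess, sample_ids, chunksize=64)
    print(len(features))
```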


It’s in handling all the things that can go wrong when communicating and coordinating across processes or across machines, and in troubleshooting bottlenecks on running systems, that Elixir (and Erlang) excels.


Concurrency generally makes things run faster. If you test your ML methods, your tests will complete faster when those methods can take advantage of concurrency. Some people consider that useful.


No: parallelism is useful; concurrency without parallelism is not.

Go and Elixir provide some parallelism, but the primary focus of both languages is concurrency.


It's not. Until you need to deploy it.


Yes, but generally they are not the kind of make-or-break issues that, e.g., Julia's correctness problems are.


Correctness problems? Could you expand on that? Not doubting, just curious. Thnx



Anyone else read the headline and go straight through to the comments?

I think that HN, which has added deliberate friction elsewhere on the site, should consider hiding the comments link until you have clicked through to the article.


> Second, this is still fine. Don't make changes outside of the IAC control. And if you do make them, retro-fix the IAC files until there is no diff with the actual state.

This doesn't work in practice. Some aspects of the business want to tweak things and it should be reasonably guaranteed that the automated side never touches it.

Terraform state gives this assurance, because Terraform won't destroy resources that aren't tracked in its state.


> Some aspects of the business want to tweak things and it should be reasonably guaranteed that the automated side never touches it.

What would a legitimate case for this be?

It seems to me that any change must either be done via IAC -- and tracked in source control, PR'd, tested in non-prod, etc. -- or it points to a missing feature.

If there's a legitimate case for modifying something not in IAC, it should be supported -- this is what I mean by "missing feature". The app and/or IAC should have code for that feature.

Modifying IAC-deployed settings is akin to someone hacking the binary of an executable from a software vendor while still expecting the vendor to support that modified executable. Not gonna happen.


Exactly. If you want portability across clouds, Terraform ain't it. The only way I know of to achieve that right now (for any reasonably complex architecture) is:

- Minimal amount of Terraform to deploy Kubernetes (which is different for each cloud provider)

- Helm (or similar) for deploying to Kubernetes

But then you have Kubernetes to deal with.


Adding a "real" programming language makes certain things easier, such as abstraction, but IMO it is too powerful for the task at hand. Do we really want an infrastructure description to be able to execute arbitrary code?


Well, it depends on the language. Some are quite good at restricting programs so that it is not possible to execute arbitrary code.

Take a look at https://propellor.branchable.com to see how Haskell might be used.

Idris might be a good candidate as well.

https://dhall-lang.org is quite interesting for these purposes as well (although it is not general purpose)


Yes, that’s exactly what we want. Things like Terraform also permit this via provisioners, and CloudFormation permits it via execution of Lambda functions. Almost any non-trivial infrastructure requires it.


With Terraform you can statically analyze the infrastructure definition with some guarantees of determinism, etc. Arbitrary execution is allowed, as you say, but only in well-contained places, such as local-exec provisioners.

How can this work if, say, TypeScript is used as the definition language?


What degree of determinism do you actually have if provisioners can execute Turing-complete code?


It's a bit like how Rust limits dangerous operations to unsafe blocks


I would love LLMs to write documentation for me, even though I don't trust them with the code.

