6keZbCECT2uB's comments | Hacker News

I've been meaning to look at Iceoryx as a way to wrap this.

PyTorch multiprocessing queues work this way, but it is hard for the sender to ensure the data is already in shared memory, so there is often a copy anyway. It is also common for buffers not to be reused, which can become the bottleneck, though in principle throughput is limited only by the rate of sending fds.
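A minimal sketch of doing the shared-memory step deliberately, assuming torch is installed (share_memory_, Queue, and Process are real torch.multiprocessing APIs; the producer loop and sizes are made up):

    import torch
    import torch.multiprocessing as mp

    def producer(queue, buf):
        # Reuse one shared buffer instead of allocating per message; real
        # code must synchronize reuse with the consumer to avoid races.
        for i in range(3):
            buf.fill_(float(i))
            queue.put(buf)  # ships a handle to the shm segment, not the bytes

    if __name__ == "__main__":
        queue = mp.Queue()
        buf = torch.empty(4)
        buf.share_memory_()  # move the storage into shared memory up front
        p = mp.Process(target=producer, args=(queue, buf))
        p.start()
        for _ in range(3):
            print(queue.get())  # maps the same segment; no copy of the data
        p.join()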


Btw, with the next release iceoryx2 will have Python bindings. They are already on main, and we will make them available via pip. This should make it easier to use with PyTorch.


Partly in jest, you can often find a Perl / bash available where you can't find a Python, Ruby, or Cargo.


Not sure why that's in jest. Perl is pretty much everywhere and could do the job just fine. There's lots of former (and current) Perl hackers still around.


I agree. In my opinion, if you can keep the experience of Bazel limited to build targets, there is a low barrier to entry even if it is tedious. Major issues show up with Bazel once you start having to write rules or toolchains, or if your WORKSPACE file talks to the Internet.

I think you can fix these issues by using a package manager around Bazel. Conda is my preferred choice because it is in the top tier for adoption and cross-platform support, and it supports more locked-down use cases like going through mirrors, not having root, and not controlling file paths. What Bazel gets from this is a generic solution for package management with better version solving for build rules, source dependencies, and binary dependencies. By sourcing binary deps from conda-forge, you get a midpoint between deep investment into Bazel and binaries with unknown provenance, which allows you to incrementally move to source as appropriate.

Additional notes: some requirements limit utility to the point of amounting to only partial support of a platform. If you require root on Linux or WSL on Windows, have frequent compilation breakage on Darwin, or neglect Windows file paths, your cross-platform support is partial in my book.

Use of Java for Bazel and Python for conda might be regrettable, but not bad enough to warrant moving down the list of adoption, and in my experience there is vastly more Bazel out there than Buck or other competitors. Similarly, you want to see some adoption from Haskell, Rust, Julia, Golang, Python, C++, etc.

JavaScript is thorny. You really don't want to have to deal with multiple versions of the same library with compiled languages, but you have to with JavaScript. I haven't seen too much demand for JavaScript bindings to C++ wrappers around a Rust core that uses C core libraries, but I do see that for Python bindings.


> You really don't want to have to deal with multiple versions of the same library with compiled languages, but you have to with JavaScript.

Rust handles this fine by unifying up to semver compatibility -- diamond dependency hell is an artifact of the lack of namespacing in many older languages.


Conda unifies by using a SAT solver to find versions of software which are mutually compatible, regardless of whether they agree on the meaning of semver. So both approaches require unifying versions. Linking against C gets pretty broken without this.
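A toy illustration of that unification, with hypothetical packages and constraints (conda uses a real SAT solver; brute force here is just to show what a solution is, namely one version per package satisfying every constraint):

    from itertools import product

    # Hypothetical available versions and compatibility constraints.
    versions = {"libfoo": [1, 2], "app": [3, 4], "plugin": [7, 8]}
    constraints = [
        lambda pick: pick["app"] == 4 or pick["libfoo"] == 1,  # app 3 needs libfoo 1
        lambda pick: pick["plugin"] == 8 or pick["app"] == 3,  # plugin 7 needs app 3
    ]

    names = list(versions)
    for combo in product(*versions.values()):
        pick = dict(zip(names, combo))
        if all(c(pick) for c in constraints):
            print("solution:", pick)  # one version per package, all constraints hold
            break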

The issue I was referring to is that in JavaScript, you can write code which uses multiple versions of the same library which are mutually incompatible. Since they're mutually incompatible, no SAT solve or unifier is going to help you. You must permit multiple versions of the same library in the same environment. So far, my approach of ignoring some JavaScript libraries has worked for my backend development. :)


Rust does permit multiple incompatible versions of the same library in the same environment. The types/objects from one version are distinct from the types/objects of the other, and it's a type error to try to mix them.

But you can use two versions of the same library in your project; I've done it by giving one of them a different name.


Most of my time coding is spent on none of: elegant solutions, complex problems, or precise specifications.

In my experience, LLMs are useful primarily as rubber ducks on complex problems and rarely useful as code generation for such.

Instead, between the interesting work, I spend most of my time on rote work that keeps me from getting to the essential complexity, and that rote work is where LLM code gen does better. How do I generate a heat map in Python with a different color scheme? How do I parse some logs to understand our locking behavior? What flags do I pass to tshark to get my desired output?
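Taking the first of those as a concrete example, the whole task is a couple of lines once you know cmap is the knob (made-up data; any matplotlib colormap name works):

    import numpy as np
    import matplotlib.pyplot as plt

    data = np.random.rand(10, 12)        # placeholder data
    fig, ax = plt.subplots()
    im = ax.imshow(data, cmap="magma")   # swap "magma" for any colormap name
    fig.colorbar(im, ax=ax)              # legend for the value-to-color mapping
    plt.show()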

So, I spend less time coding the above and more time coding how we should redo our data layout for more reuse.


> Most of my time coding is spent on none of: elegant solutions, complex problems, or precise specifications.

I find that deeply sad, and it's probably one of the reasons why I'm partially disillusioned with programming as a profession. A lot of it is just throwing stuff at the wall and seeing what sticks. LLMs will probably accelerate that process.


How does this work with things which are expressible only in C? For example, Pascal strings with flexible array members?

I guess since you said header, you keep everything opaque and create a header for it, which gets translated to Zig.


In all seriousness, this seems to carry the risk of never doing anything deep or hard. In particular, I've been programming long enough that I can be casual about many programming languages until I hit something which is actually new, such as in Rust or Prolog.

Promiscuous doesn't have to mean having a low tolerance for difficulty, but everything else you wrote seems to support that. So, are you saying that enduring difficulty is unnecessary, or did you mean something different?


Not the person you responded to, but I actually take a similar approach as them so maybe I can explain.

Firstly, difficulty and fun are not always directly correlated. Something can be difficult but fun, or difficult and unfun.

Following on my first point, different activities or goals usually have aspects that are more or less fun. It’s better to start with the easy and fun stuff. You don’t have to swallow the ocean.

Now this last part really is dependent on the type of person you are. For me, once I’ve gotten into a subject, I just become more curious about it. The aspects of the subject that were unfun at first are now interesting. I’m more invested and I’m more curious. And if I’m more curious, it’s more fun.

To sum it up, it’s not that one can avoid enduring difficulty. It’s more about harnessing your own strengths and curiosity.


Fowler's implementation is written in Java, which has a different memory model from C++. For another example of the Java memory model versus a different language's, see Jon Gjengset's port of ConcurrentHashMap to Rust.


How does the tool ingest task sentiment? As a developer, I would never put in writing that I'm less than enthusiastic about any task.


This is something we thought about a lot as well; right now it's just an emoji rating. Once set, the picker changes to a check mark and is not visible to others. The data is stored in the DB but not visible to others in the system.


The choice between replacing and making it an overlay is up to your editor. I think it would be pretty easy to handle either choice as a plugin in your editor given the returned JSON.

I was surprised it wasn't combined with clangd.


Spending 8 months when the business value at the end was mostly improved velocity doesn't sound likely to be a good tradeoff, especially if this is done as a big bang effort which either succeeds holistically or fails. You might have better success in the future by finding ways to integrate maintenance improvements incrementally.


> finding ways to integrate maintenance improvements incrementally

To be fair, it's possible bureaucratic process got in the way. If their commit / deployment process didn't allow any changes to hit the production branch until the work was "finished", then there wasn't really an opportunity for them to increment.

That seems likely considering "the new manager, who turfed all those fixes, including the new functionality" suggests other organizational problems. If one month is too long for the new manager, the new manager's goal seems to be to "do things" rather than to "solve problems".


This kind of refactoring also greatly improves robustness and stability, which is well worth it if the company values those qualities in its product.


This illustrates the problem with this 'business value at all costs' perspective. Improved velocity is not the only or even primary takeaway from consolidating 3 mostly identical codebases into 1. There are so many benefits that I struggle to think of a decent justification for leaving triplicated codebases in play, based on what GP has described.

