> unless that crawl is unreasonably expensive or takes it down for others
This _is_ the problem Anubis is intended to solve -- forges like Codeberg or Forgejo, where many routes perform expensive Git operations (e.g. git blame), and scrapers do not respect the robots.txt asking them not to hit those routes.
Are there any implementations of this lambdas proposal on top of any production-quality compilers? Or are there some updates on how this is doing in committee?
Am I missing where the GitHub link is for this, or did the author not release sources? It'd be fun to reproduce this on a different machine, and play around with other architectures and optimizers that weren't mentioned in the article...
> What’s more, the changes to the information on which this “baked in” build logic is based is not tracked very precisely.
kbuild handles this on top of Make by having each target depend on a dummy file that gets updated when e.g. the CFLAGS change. It also treats Make a lot more like Ninja (e.g. avoiding putting the entire build graph into every Make process) -- I'd be interested to see how it compares.
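The flag-stamp idea can be sketched in C (my own reconstruction of the general mechanism, not kbuild's actual code -- kbuild implements it with per-target `.cmd` files and Make rules): each target depends on a stamp file that is rewritten only when the recorded flags differ from the current ones, so only a genuine flag change invalidates the target.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Rewrite `stamp` iff its contents differ from `flags`.
 * Returns 1 when the stamp was bumped (dependents must rebuild),
 * 0 when the flags are unchanged (stamp mtime left alone). */
int refresh_stamp(const char *stamp, const char *flags)
{
    char old[1024] = {0};
    FILE *f = fopen(stamp, "r");

    if (f) {
        fread(old, 1, sizeof(old) - 1, f);
        fclose(f);
        if (strcmp(old, flags) == 0)
            return 0;                 /* same flags: nothing to do */
    }

    f = fopen(stamp, "w");            /* new or changed flags */
    if (!f) {
        perror(stamp);
        exit(1);
    }
    fputs(flags, f);
    fclose(f);
    return 1;
}
```

A build rule then lists the stamp as a prerequisite of each object file; because the stamp's mtime only moves on a real change, a no-op rebuild stays cheap.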
That's a very anthropomorphizing description; a more mechanical one might be:
The attention mechanism that transformers use to find information in the context is, in its simplest form, O(n^2); for each token position, the model considers whether relevant information has been produced at the position of every other token.
To preserve performance when really long contexts are used, current-generation LLMs use various ways to consider fewer positions in the context; for example, they might only consider the 4096 "most likely" places to matter (de-emphasizing large numbers of "subtle hints" that something isn't correct), or they might have some way of combining multiple tokens worth of information into a single value (losing some fine detail).
> For starters, Linux now supports Intel Advanced Performance Extensions (APX). [...] This improvement means you'll see increased performance from next-generation Intel CPUs, such as the Lunar Lake processors and the Granite Rapids Xeon processors.
This isn't actually right, is it? APX hasn't been released, to my knowledge.
Formal reasoning about functional programs in terms of their (denotational) semantics is normal in a functional programming class (e.g. it's about a quarter of our mandatory-for-undergrads FP course).
Formal reasoning about imperative programs in terms of their (e.g. axiomatic) semantics is only in a grad-level course, and the programs you can reason about are really, really limited compared to the functional ones. (The last time I TA'd the FP class, one of the homeworks involved proving a simple compiler correct. When I took the grad course, I think the most complicated program we proved correct was selection sort.)
I think "reasoning about programs" is more computer-science than "writing programs," and choosing an imperative language signals that you're not emphasizing the former.
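The compiler-correctness exercise mentioned above fits in a page; here is a sketch in Lean 4 (my own toy language and names, not the course's material): a tiny expression language, a stack machine, and a proof by structural induction that compilation preserves the denotation.

```lean
inductive Expr where
  | const : Nat → Expr
  | plus  : Expr → Expr → Expr

-- Denotational semantics: an expression means a number.
def eval : Expr → Nat
  | .const n  => n
  | .plus a b => eval a + eval b

inductive Instr where
  | push : Nat → Instr
  | add  : Instr

-- Stack machine: `add` replaces the top two values with their sum.
def exec : List Instr → List Nat → List Nat
  | [],            s           => s
  | .push n :: is, s           => exec is (n :: s)
  | .add :: is,    a :: b :: s => exec is ((b + a) :: s)
  | .add :: is,    s           => exec is s  -- stuck on underflow

def compile : Expr → List Instr
  | .const n  => [.push n]
  | .plus a b => compile a ++ compile b ++ [.add]

-- Key lemma, generalized over the remaining code and the stack.
theorem exec_compile (e : Expr) (is : List Instr) (s : List Nat) :
    exec (compile e ++ is) s = exec is (eval e :: s) := by
  induction e generalizing is s with
  | const n => rfl
  | plus a b iha ihb =>
    simp only [compile, eval, List.append_assoc]
    rw [iha, ihb]; rfl

-- Correctness: running the compiled code on an empty stack
-- leaves exactly the expression's value.
theorem compile_correct (e : Expr) : exec (compile e) [] = [eval e] := by
  rw [← List.append_nil (compile e)]
  exact exec_compile e [] []
```

The whole trick is generalizing the induction hypothesis over the trailing instructions and the stack; the corresponding proof for even a small imperative language needs a program logic before you can state it.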
C's simplicity is oversold once you take into account that, in practice, the language isn't the second edition of the K&R book adapted for ISO -- it's a plethora of extensions, several standard revisions since 1989, and zero care for security.
Dealing with security issues is one of my duties when I have a DevOps role in project delivery, and I'd rather spend that time elsewhere.
The difference being that all C++ compilers have an option to enable bounds checking on operator[]().
Additionally, even before C++26, the way the standard is written, its legalese doesn't forbid an implementation from doing bounds checking.
Now can you please enlighten us on how to do the same with C arrays and strings, with bounds checking enabled, and on why WG14 completely ignored the problem for 50 years -- including Dennis Ritchie's fat-pointer proposal -- until government and industry pressure arrived? Even now, I am quite curious whether C2y will bring any actual improvement.
You can use the bounds sanitizer with at least clang and GCC to get bounds checking for [] on arrays in C, and this has worked for a long time. Also, the legalese in C never forbade bounds checking, and there have been various bounds-checking compilers in the past. If you use your own libraries, you can easily do whatever you want anyway.
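For concreteness, here is the kind of access `-fsanitize=bounds` instruments (the flag is real in both clang and GCC; the array and function are my own example). The check only fires where the compiler can see the array's declared bound, so the access goes through the array directly rather than a decayed pointer.

```c
/* Build with:  cc -fsanitize=bounds demo.c
 * Accesses through `table` are then checked against its declared
 * bound at runtime. Note the check needs a visible bound: a decayed
 * `int *` parameter carries no size and would not be instrumented. */
static const int table[8] = {1, 2, 3, 4, 5, 6, 7, 8};

int get(int i)
{
    return table[i];   /* in range: fine; get(8) would be reported */
}
```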
They even downvote people who suggest C++ :-). Doing this in C is such a colossal waste of time and energy, not to mention the bugs it'll introduce. Sigh!
Eventually, once Rust finally catches up with the C++ ecosystem, including being used in industry standards like the Khronos APIs, CUDA, console devkits, and HPC and HFT standards.
Until then, the choice is pretty much between C and C++, and the latter provides a much saner and safer alternative than a language that keeps pretending to be a portable macro assembler.
Binding just fine isn't the same as taking part in the conversations around industry standards, or being shipped in vendor SDKs.
It requires manual work from people willing to put in the effort, a lesser development experience reading documentation written for other languages, and no out-of-the-box IDE plugins or graphical-debugger support -- CUDA being one such example.
Trolling about the choice of implementation language from a throwaway account is worth downvotes, yes. Doing a given task in a given language, simply for the sake of having it done in that language, is a legitimate endeavour, and having someone document (from personal experience) why it's difficult in that language is real content worth discussion. Choosing a better language is very much not a goal here.
> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.