
> unless that crawl is unreasonably expensive or takes it down for others

This _is_ the problem Anubis is intended to solve -- forges like Codeberg or Forgejo, where many routes perform expensive Git operations (e.g. git blame), and scrapers do not respect the robots.txt asking them not to hit those routes.


Alfred Hitchcock's movies aren't missing from Netflix because Netflix couldn't afford to pay for their production.


Are there any implementations of this lambdas proposal on top of any production-quality compilers? Or are there some updates on how this is doing in committee?


It's straight from C++, which has had them since 2011.


I recently wrote a paper: https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3654.pdf which also includes a short summary of the issues that arise when trying to put C++'s lambdas into C.


Am I missing where the GitHub link is for this, or did the author not release sources? It'd be fun to reproduce this on a different machine, and play around with other architectures and optimizers that weren't mentioned in the article...


> What’s more, the changes to the information on which this “baked in” build logic is based is not tracked very precisely.

kbuild handles this on top of Make by having each target depend on a dummy file that gets updated when e.g. the CFLAGS change. It also treats Make a lot more like Ninja (e.g. avoiding putting the entire build graph into every Make process) -- I'd be interested to see how it compares.
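
If anyone's curious, the pattern looks roughly like this (my own sketch of the idea, not actual kbuild code; kbuild's real implementation is more involved):

    # Sketch of the kbuild-style trick: every object depends on a stamp
    # file that is rewritten only when the recorded CFLAGS differ from
    # the current ones, so changing flags forces a rebuild without
    # dirtying anything on ordinary runs. (Recipe lines need tabs.)
    .cflags: FORCE
        @echo '$(CFLAGS)' | cmp -s - $@ || echo '$(CFLAGS)' > $@

    %.o: %.c .cflags
        $(CC) $(CFLAGS) -c -o $@ $<

    FORCE: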


For one thing, the model is trained on a language modelling task, not a question-answering task?


That's a rather anthropomorphizing description; a more mechanical one might be:

The attention mechanism that transformers use to find information in the context is, in its simplest form, O(n^2); for each token position, the model considers whether relevant information has been produced at the position of every other token.
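
To make the quadratic cost concrete, here's a rough sketch in C (mine, nothing like a real implementation; the row-major q/k layout is an assumption) of the dense score computation:

    /* Rough sketch of why dense attention is O(n^2): each of the n
     * query positions is scored against all n key positions.
     * q and k are n x d matrices, row-major. */
    #include <math.h>
    #include <stddef.h>

    void attention_scores(const float *q, const float *k,
                          float *scores, size_t n, size_t d) {
        for (size_t i = 0; i < n; i++) {        /* each position...       */
            for (size_t j = 0; j < n; j++) {    /* ...vs. every other one */
                float dot = 0.0f;
                for (size_t m = 0; m < d; m++)
                    dot += q[i * d + m] * k[j * d + m];
                scores[i * n + j] = dot / sqrtf((float)d);
            }
        }
    }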

To preserve performance when really long contexts are used, current-generation LLMs use various ways to consider fewer positions in the context; for example, they might only consider the 4096 "most likely" places to matter (de-emphasizing large numbers of "subtle hints" that something isn't correct), or they might have some way of combining multiple tokens worth of information into a single value (losing some fine detail).


> For starters, Linux now supports Intel Advanced Performance Extensions (APX). [...] This improvement means you'll see increased performance from next-generation Intel CPUs, such as the Lunar Lake processors and the Granite Rapids Xeon processors.

This isn't actually right, is it? APX hasn't been released, to my knowledge.


Formal reasoning about functional programs in terms of their (denotational) semantics is normal in a functional programming class (e.g. it's about a quarter of our mandatory-for-undergrads FP course).
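
As a toy illustration of the flavour (my own example, not actual course material), transcribed into Lean:

    -- Prove a fact about a pure function by structural induction;
    -- this is the kind of reasoning such a course drills.
    theorem length_append (xs ys : List Nat) :
        (xs ++ ys).length = xs.length + ys.length := by
      induction xs with
      | nil => simp                 -- [] ++ ys = ys, and [].length = 0
      | cons x xs ih =>             -- `ih` is the induction hypothesis
        simp only [List.cons_append, List.length_cons, ih]
        omega                       -- closes the leftover arithmetic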

Formal reasoning about imperative programs in terms of their (e.g. axiomatic) semantics is only in a grad-level course, and the programs you can reason about are really, really limited compared to the functional ones. (The last time I TA'd the FP class, one of the homeworks involved proving a simple compiler correct. When I took the grad course, I think the most complicated program we proved correct was selection sort.)

I think "reasoning about programs" is more computer-science than "writing programs," and choosing an imperative language signals that you're not emphasizing the former.


How would you recommend doing that sort of "subtyping"? _Generic and macros?


Yup. It's a lot saner in C++, but people who refuse to use C++ for political reasons can do it the ugly way using C11 or GNU C.
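
For the record, the ugly way usually means struct embedding for the subtype relation plus a _Generic dispatch macro, something like this sketch (all names hypothetical):

    #include <stdio.h>

    struct base    { int id; };
    /* "Subtyping" via embedding: base is the first member, so a
     * derived object can be safely viewed as a base through &d.base. */
    struct derived { struct base base; double extra; };

    static void describe_base(struct base *b)       { printf("base %d\n", b->id); }
    static void describe_derived(struct derived *d) { printf("derived %d %g\n", d->base.id, d->extra); }

    /* _Generic dispatches on the static type of the argument. */
    #define describe(x) _Generic((x),              \
            struct base *:    describe_base,       \
            struct derived *: describe_derived)(x)

    int main(void) {
        struct base    b = { .id = 1 };
        struct derived d = { .base = { .id = 2 }, .extra = 3.5 };
        describe(&b);
        describe(&d);
        describe_base(&d.base);   /* explicit "upcast" */
        return 0;
    }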


"political reasons"?

I switched from C++ to C because C++ is too complex and dealing with this complexity was stealing my time. I would not call this a "political reason".


C's simplicity is oversold once you take into account that, in practice, the language isn't the second edition of the K&R book adapted for ISO; it's a plethora of extensions, several standard revisions since 1989, and zero care for security.

Dealing with security issues is one of my duties when I take a DevOps role in project delivery, and I would rather spend that time elsewhere.


After wasting many hours of my life debugging C++ code, before switching to C, I disagree. The simplicity and explicitness of C is underappreciated.


If only it came along with a security mindset; unfortunately, 50 years haven't been enough to change the language's culture.


As if C++ had a security mindset:

    #include <iostream>
    #include <vector>

    void foo(std::vector<int> &x) { x[4] = 1; }  // out-of-bounds write

    int main() {
        std::vector<int> x{ 0, 1, 3 };  // only 3 elements
        foo(x);
        std::cout << x[4];  // undefined behaviour, no diagnostic by default
    }


The difference is that all C++ compilers have an option to enable bounds checking on operator[]().

Additionally, even before C++26, the way the standard's legalese is written doesn't forbid an implementation from doing bounds checking.

Now can you please enlighten us on how to do the same with C arrays and strings, with bounds checking enabled, and on why WG14 completely ignored the problem for 50 years (including Dennis Ritchie's proposal for fat pointers) until government and industry pressure arrived? Even now, I am quite curious whether C2y will really bring any actual improvement.


You can use the bounds sanitizer, at least with Clang and GCC, to get bounds checking for [] on arrays in C, and this has been working for a long time. Also, the legalese in C never forbade bounds checking, and there were various bounds-checking compilers in the past. If you use your own libraries, you can easily do whatever you want anyway.
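
For example (a minimal sketch; -fsanitize=bounds is part of UBSan in both compilers):

    /* Compile with:  cc -fsanitize=bounds demo.c && ./a.out
     * The out-of-bounds write is then reported at runtime instead of
     * silently corrupting memory. */
    int main(void) {
        int x[3] = { 0, 1, 3 };
        x[4] = 1;    /* UBSan: index 4 out of bounds for 'int [3]' */
        return x[4];
    }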


An answer that authors of cyber security laws are certainly going to be happy with.


I repeated your argument.


They even downvote people who suggest C++ :-). Doing this in C is such a colossal waste of time and energy, not to mention the bugs it'll introduce. Sigh!


Following that argument, C++ is also a colossal waste of time and energy, and a source of bugs, when compared with Rust :D.


Eventually, when Rust finally catches up with the C++ ecosystem, including being used in industry standards like the Khronos APIs, CUDA, console devkits, and HPC and HFT standards.

Until then, the choice is pretty much between C and C++, and the latter provides a much saner and safer alternative than a language that keeps pretending to be a portable macro assembler.


You can bind to C and C++ libraries just fine. There are plenty of Rust bindings for Vulkan and CUDA, and the canonical implementation of WebGPU is in Rust.


Binding just fine isn't the same as taking part in the conversation around industry standards and being shipped in vendor SDKs.

It requires manual work from people willing to put in the effort, means a lesser development experience reading documentation written for other languages, and offers no out-of-the-box IDE plugins or graphical debugger support; CUDA is one such example.


Trolling about the choice of implementation language from a throwaway account is worth downvotes, yes. Doing a given task in a given language, simply for the sake of having it done in that language, is a legitimate endeavour, and having someone document (from personal experience) why it's difficult in that language is real content worth discussion. Choosing a better language is very much not a goal here.

> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

