remexre's comments

Encryption and reliable transport.


You really don't want reliable transport as a feature of the tunnel unless you are _intimately_ familiar with what all of the tunneled traffic is already doing for reliable transport.

The net result of two reliable transports that are unaware of each other is awful: when a packet drops, both layers retransmit on their own timers, and the interacting timeouts compound (the classic "TCP-over-TCP meltdown").


I probably should have clarified that question.

What does QUIC get you that TCP over Wireguard over UDP does not?


> These so-called dynamic types are merely the equivalent of tags in a discriminated union/variant type.

That's far more true in a language like JavaScript or Scheme than in an "everything is an object" language like Python; the only reason you would need a variant type for PyObject is to avoid the circular data structures the actual implementation uses.

If you allow the circular data structures, your dynamic types instead are "merely" a relatively complicated codata type, but it's far less obvious that this is actually what anyone considers to be "merely."
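A minimal sketch of the contrast, in TypeScript (all names here are hypothetical, chosen for illustration): Scheme/JS-style values really do fit a flat discriminated union, while a Python-style object refers to its class, which is itself an object, making the type circular.

```typescript
// JS/Scheme-style dynamic values: a plain discriminated union works.
type Value =
  | { tag: "number"; value: number }
  | { tag: "string"; value: string }
  | { tag: "pair"; car: Value; cdr: Value };

function typeName(v: Value): string {
  return v.tag; // the "dynamic type" is just the union's tag
}

// Python-style objects: every object points at its class, which is itself
// an object, so the structure is circular (codata-like), not a flat union.
interface ObjectLike {
  cls: ObjectLike; // analogous to CPython's ob_type
  attrs: Map<string, ObjectLike>;
}

console.log(typeName({ tag: "number", value: 3 }));
```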


Funny question since you bring up JTAG and RISC-V -- do you have a cheapish RISC-V device you'd recommend that actually exposes its JTAG? The Milk-V Duo S, Milk-V Jupiter, and Pine64 Oz64 all seem not to expose one; IIRC, the Jupiter even wires TDO as an input (on the other side of a logic level shifter)...


That doesn't seem off-topic at all to me!

I don't know what to recommend there. I have no relevant experience: all my RISC-V hardware leaves the privileged ISA unimplemented, and that's the part RISC-V simplifies so much. The unprivileged ISA is okay, but it's nothing to write home about unless you want to implement a CPU rather than an OS.


To pick on graphics, since I'm more familiar with that domain: the problem isn't that this tutorial is about software rasterization, it's that the tutorial is a raytracer that doesn't do shading, textures, shadows, or any geometry but spheres, and it spends most of its word count on implementing trig functions over fixed-point numbers instead of just using math.h functions on IEEE floats.


Well put! This succinctly sums up the crux of my argument in my other comments.


great counterpoint :)


Does that result in working NSS?


I normally statically link as much as possible and avoid NSS, but you can make that work as well; just include it along with glibc.


> Maybe RISC-V?

RISC-V is specified as a RISC (and permits very space- and power-efficient low-end implementations in the classic RISC style), but it was designed with macro-op fusion in mind, which gets you closer to a CISC-style decoder and execution units.

It's a nice place to be tooling-wise, but it seems too early to say definitively which extensions will need to be added to reach 12900K/9950X/M4-tier performance per core.

In either case, though, many of the tricks that make modern CPUs fast are ISA-independent; things like branch prediction or [0] don't depend on the ISA, and can "work around" one side or the other needing more instructions for certain tasks.

[0]: https://tavianator.com/2025/shlx.html


The difference between parse and validate is

    function parse(x: Foo): Bar { ... }

    const y = parse(x);
and

    function validate(x: Foo): void { ... }

    validate(x);
    const y = x as Bar;
Zod has a parser API, not a validator API.
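To make the two shapes above concrete, here's a runnable sketch (Foo and Bar are placeholder types, and the checks are illustrative, not any particular library's implementation): the parser's return type carries the evidence, while the validator forces the caller to re-assert it with a cast.

```typescript
type Foo = unknown;
interface Bar { n: number }

// Parser style: success is witnessed by the returned Bar.
function parse(x: Foo): Bar {
  if (typeof x === "object" && x !== null && typeof (x as any).n === "number") {
    return x as Bar;
  }
  throw new Error("not a Bar");
}

// Validator style: returns nothing, so the type system learns nothing.
function validate(x: Foo): void {
  if (!(typeof x === "object" && x !== null && typeof (x as any).n === "number")) {
    throw new Error("not a Bar");
  }
}

const input: Foo = { n: 42 };
const y1: Bar = parse(input); // typed Bar by construction

validate(input);
const y2 = input as Bar; // the cast is our word, not the checker's
console.log(y1.n, y2.n);
```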


They're dependent loads, so probably not, except for the last level of the page tables (and that's just "prefetching": doing 4/8/etc. walks in parallel, not one walk in less time).
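The dependency chain is easy to see in a toy walk (a hypothetical 3-level, 9-bits-per-level radix table; the layout is made up for illustration): each level's load address comes out of the previous level's load, so the loads within one walk can't overlap, though independent walks can run side by side.

```typescript
type PhysAddr = number;

// Fake "memory": entry at (table base * 512 + index) holds the next-level
// table base, or the final frame at the last level.
const memory = new Map<number, PhysAddr>();

const LEVELS = 3;
const BITS_PER_LEVEL = 9; // 512 entries per table

function load(tableBase: PhysAddr, index: number): PhysAddr {
  const entry = memory.get(tableBase * 512 + index);
  if (entry === undefined) throw new Error("page fault");
  return entry;
}

function walk(root: PhysAddr, virt: number): PhysAddr {
  let base = root;
  for (let level = LEVELS - 1; level >= 0; level--) {
    const index = (virt >>> (level * BITS_PER_LEVEL)) & 511;
    base = load(base, index); // address depends on the previous load
  }
  return base;
}

// Build one translation: indices 5 -> 7 -> 9 lead to frame 0x1234.
memory.set(1 * 512 + 5, 2);
memory.set(2 * 512 + 7, 3);
memory.set(3 * 512 + 9, 0x1234);
console.log(walk(1, (5 << 18) | (7 << 9) | 9).toString(16));
```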


The scraper breaking every time a new version of Anubis is deployed, until new anti-Anubis features are implemented, is the point; if the scrapers were well-engineered by a team that cared about the individual sites they're scraping, they probably wouldn't be so pathological towards forges.

The human-labor cost of working around Anubis is unlikely to be paid unless it affects enough data to be worth dedicating time to, and the data they're trying to scrape can typically be obtained "respectfully" in those cases -- instead of hitting the git blame route on every file of every commit of every repo, just clone the repos and run it locally, etc.


Sure, but if that's the case, you don't need the POW, which is what bugs people about this design. I'm not objecting to the idea of anti-bot content protection on websites.


https://danluu.com/branch-prediction/ is a good illustrated overview of a few algorithms.

