
Go Proverb:

A little copying is better than a big dependency.


100% agree. This actually makes AI-aided development a big improvement (as long as you’re careful). You can have an LLM write you a little function, or extract the correct one from a big library, and inline it into your module.

He’s talking about zero-knowledge proofs - it’s a neat use of graph coloring where you send an encrypted proof that a graph can be colored with three colors such that no two neighbors share a color. The verifier issues a challenge asking the prover to show that two adjacent nodes don’t have the same color, and the prover provides a key to decrypt just those two nodes. This process is repeated a number of times (with a freshly re-colored graph each round) until the verifier approaches certainty that the prover can always show that adjacent nodes have different colors.

This coloring problem is NP-complete, and the statement the prover is proving is encoded in the graph structure. At the end of the day, the only thing the verifier learns is that the prover can produce the three-colored graph: one bit that corresponds to the thing the verifier wants to know (e.g. does the prover have a token that can show they are over 18).
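A toy sketch of one round of that commit/challenge/reveal loop, purely for illustration: a salted hash stands in for real encryption, the graph and coloring are made up, and nothing here is cryptographically sound.

  import { createHash, randomBytes } from "node:crypto";

  type Color = 0 | 1 | 2;
  type Edge = [number, number];

  // Commit to a node's color with a salted hash (stand-in for real encryption).
  function commit(color: Color, salt: Buffer): string {
    return createHash("sha256").update(Buffer.from([color])).update(salt).digest("hex");
  }

  // Each round the prover randomly permutes the three colors and commits to every node.
  function proverCommit(coloring: Color[]) {
    const perm = ([0, 1, 2] as Color[]).sort(() => Math.random() - 0.5);
    const salts = coloring.map(() => randomBytes(16));
    const permuted = coloring.map((c) => perm[c]);
    const commitments = permuted.map((c, i) => commit(c, salts[i]));
    return { permuted, salts, commitments };
  }

  // One round: the verifier picks a random edge, the prover reveals only those two
  // nodes, and the verifier checks the openings match the commitments and differ.
  function oneRound(coloring: Color[], edges: Edge[]): boolean {
    const { permuted, salts, commitments } = proverCommit(coloring);
    const [a, b] = edges[Math.floor(Math.random() * edges.length)];
    return (
      commit(permuted[a], salts[a]) === commitments[a] &&
      commit(permuted[b], salts[b]) === commitments[b] &&
      permuted[a] !== permuted[b]
    );
  }

  // A small graph with a valid 3-coloring. Repeating rounds drives the chance of a
  // cheating prover passing every check toward zero.
  const coloring: Color[] = [0, 2, 1, 2];
  const edges: Edge[] = [[0, 1], [1, 2], [2, 3], [3, 0], [0, 2]];
  console.log([...Array(20)].every(() => oneRound(coloring, edges))); // true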


For simple yes/no questions ("Is over 18?", "Is US resident?"), you should look back to David Chaum's blind signatures and the work that came out of that back in the 90s. The math is super simple to understand, and there are a ton of even easier metaphors with envelopes and carbon paper that you can use to explain it to your grandmother. Once you get someone to grok blind signatures, it is easy to lead them to zero-knowledge proofs.
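For the curious, here is a bare-bones sketch of a Chaum-style RSA blind signature. The numbers are toy values and a real scheme needs proper hashing/padding and large keys, but the algebra is the whole trick: the signer signs a blinded message without ever seeing it, and the unblinded result is still a valid signature.

  // Toy Chaum-style RSA blind signature. Numbers are far too small for real use,
  // and a real scheme also needs proper message hashing/padding.
  function modPow(base: bigint, exp: bigint, mod: bigint): bigint {
    let result = 1n;
    base %= mod;
    while (exp > 0n) {
      if ((exp & 1n) === 1n) result = (result * base) % mod;
      base = (base * base) % mod;
      exp >>= 1n;
    }
    return result;
  }

  // Modular inverse via the extended Euclidean algorithm.
  function modInv(a: bigint, mod: bigint): bigint {
    let [oldR, r] = [a % mod, mod];
    let [oldS, s] = [1n, 0n];
    while (r !== 0n) {
      const q = oldR / r;
      [oldR, r] = [r, oldR - q * r];
      [oldS, s] = [s, oldS - q * s];
    }
    return ((oldS % mod) + mod) % mod;
  }

  // Signer's RSA key (toy values): public (n, e), private d.
  const p = 61n, q = 53n, n = p * q;                  // n = 3233
  const e = 17n;
  const d = modInv(e, (p - 1n) * (q - 1n));

  // 1. The user blinds the message m with a random factor r: m' = m * r^e mod n.
  const m = 42n;                                      // e.g. an encoded "over 18" token
  const r = 7n;                                       // random and coprime to n
  const blinded = (m * modPow(r, e, n)) % n;

  // 2. The signer signs the blinded message without ever seeing m: s' = (m')^d mod n.
  const blindSig = modPow(blinded, d, n);

  // 3. The user unblinds: s = s' * r^-1 mod n is a valid signature on m itself.
  const sig = (blindSig * modInv(r, n)) % n;

  // 4. Anyone can verify against the public key: s^e mod n == m.
  console.log(modPow(sig, e, n) === m);               // true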


Not really. There are ways to prove ownership of one of several hundred million tokens. If you give out this many tokens, the odds that some will be stolen or sold must be fairly close to 1.


Agreed. But obtaining such a token/proof would still be an additional barrier kids would have to actively bypass, so while I don't think that's the best implementation I don't think it's correct to say there's no value there.

My bigger concern would be who gets to issue these tokens. If it's limited to a particular government, then that doesn't work very well on a global internet. And making the internet not global (blocking adults from accessing foreign websites that don't adhere to your scheme) is kinda authoritarian IMO.

If we're going to do age verification and blocking of adult sites, it needs to be local to the user's device (and thus under the control of parents, not governments).

E.g. instead of mandating sites verify users, we mandate that internet-capable devices sold to kids have certain content restrictions, the same way we mandate you can't sell alcohol to kids. To make this more effective than existing content filtering, implement some kind of legally enforced content-labeling standard websites have to follow in order to be whitelisted on these devices. This way the rights, freedoms, and privacy of adults using adult devices are unaffected.
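To make that concrete, a device-side filter could be as simple as the sketch below. The "Content-Rating" header name and the label values are invented here just to show where such a check would live; any real labeling standard would define its own mechanism.

  // Hypothetical device-side filter. The "Content-Rating" header name and the
  // label values are invented for illustration; a real standard would define its own.
  const BLOCKED_LABELS = new Set(["adult", "gambling"]);

  async function allowedOnKidsDevice(url: string): Promise<boolean> {
    const res = await fetch(url, { method: "HEAD" });
    const label = res.headers.get("Content-Rating");
    // Unlabeled sites default to blocked on a kid's device; labeled sites are
    // allowed unless the label is on the device's local blocklist.
    if (label === null) return false;
    return !BLOCKED_LABELS.has(label);
  }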


Aren't these all solved problems that we've worked out decades ago with certificates?

Certificates prove that a website/server (and sometimes the client) are who they say they are.

We force the website to renew their certificate from an issuer every year so that stolen tokens/certificates are less of a problem.

The issuer can protect or hide the identity of the certificate owner, and doesn't get any information about which clients accessed a server.


The real problem is just managing identities for millions of people. Some of those people will voluntarily use their credentials on behalf of someone under 18. Some will sell their identities. There is no technical solution to that.


ChatGPT would be happy to explain "rate-limited anonymous credentials" to you. Just because you can't think of something doesn't mean brilliant mathematicians can't.


It would be much more valuable if you explained rate-limited anonymous credentials or provided an article (even Wikipedia). ChatGPT is non-deterministic, and telling someone to use it feels a bit cold for this website.


This has no bearing on my comment


It surely is different. If you set the temp to 0 and do the test with slightly different wording, there is no guarantee at all the scores would be consistent.

And an LLM can be consistent even with a high temp: it could give the same PR the same grade while choosing different words to explain it.

The tokens are still sampled from the distribution, so whichever grade has the highest probability stays the most likely one to be chosen regardless of the temp set.
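A quick sketch of why: temperature just rescales the logits before the softmax, so it flattens or sharpens the distribution but never reorders it. The logits below are made up purely for illustration.

  // Temperature rescales logits before the softmax; it changes how peaked the
  // distribution is, but never which token has the highest probability.
  function softmaxWithTemperature(logits: number[], temperature: number): number[] {
    const scaled = logits.map((l) => l / temperature);
    const max = Math.max(...scaled);                   // subtract max for numerical stability
    const exps = scaled.map((s) => Math.exp(s - max));
    const sum = exps.reduce((a, b) => a + b, 0);
    return exps.map((x) => x / sum);
  }

  // Made-up logits for the grade tokens.
  const gradeLogits = { A: 2.1, B: 3.4, C: 1.0, D: 0.2, F: -1.0 };
  const logits = Object.values(gradeLogits);

  for (const t of [0.2, 1.0, 2.0]) {
    const probs = softmaxWithTemperature(logits, t);
    const topIdx = probs.indexOf(Math.max(...probs));
    console.log(`temp=${t}: top grade ${Object.keys(gradeLogits)[topIdx]}, p=${probs[topIdx].toFixed(2)}`);
  }
  // "B" is the most likely grade at every temperature; only its probability changes.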


I think you're restating (in a longer and more accurate way) what I understood the original criticism to be: that this grading test isn't testing what it's supposed to, partly because a grade is too few tokens.

The model could "assess" the code qualitatively the same and still give slightly different letter grades.


This isn't true in practice because you won't be able to control where allocations are made in the dependencies you use, including inside the Go standard library itself. You could rewrite/fork that code, but then you lose access to the Go ecosystem.

The big miss of the OP is that it ignores the Go region proposal, which is using lessons learned from this project to solve the issue in a more tractable way. So while arenas won't be shipped as they were originally envisioned, that isn't to say no progress is being made.


I had to fork Go’s CSV to make it reuse buffers and avoid defensive copies. But I’m not sure an arena API is a panacea here - even if I can supply an arena, the library needs certain guarantees about how the memory it returns is aliased/used by the caller. Maybe it would still defensively copy into the arena, maybe not. So I don’t see how taking an arena as a parameter lets a function reason about how safely it can use the arena.


My nephew was reading at age two… he is obviously a very special kid, but no one really pushed him to do that. Apparently this would kind of freak people out in public.

I’m not sure if reading before age one is biologically possible, but I have a surprising data point in my life, so who knows.


My daughter turns two today, and she points out about half of capital letters when we’re reading a book. “That’s A”, etc.


And productized in days!


I'm going to ignore the actual names used here - you can use any name you want. I think this pattern is vulnerable to introducing security bugs. I'm imagining process being some kind of sanitization or validation. Then you have this thing called result, and some of the time it might be "safe" (already processed) and sometimes not. Sooner or later someone will process it more than once, or not at all, with real consequences.

So yeah, definitely it is much better to name the first one in a way that makes it more clear it hasn't been processed yet.
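One way to make that distinction hard to miss is to encode it in the type as well as the name. A small sketch (sanitize, rawComment, and the brand are made-up names; a clear naming convention alone already gets you most of the way):

  // A branded type so "sanitized" values can't be mixed up with raw input.
  type SanitizedInput = string & { readonly __brand: "sanitized" };

  // Hypothetical sanitization step; sanitize/rawComment are made-up names.
  function sanitize(raw: string): SanitizedInput {
    return raw.replace(/</g, "&lt;").replace(/>/g, "&gt;") as SanitizedInput;
  }

  // Downstream code can only accept values that have been through sanitize().
  function render(comment: SanitizedInput): string {
    return `<p>${comment}</p>`;
  }

  const rawComment = "<script>alert(1)</script>";
  render(sanitize(rawComment));   // ok
  // render(rawComment);          // compile error: raw input can't slip through by accident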


There is no objectively correct way to do the merge, but there are ways that are obviously wrong.


Async/await themselves are not that much magic really; it's a bit of syntactic sugar over promise chains. Of course, understanding promises is its own bag.

ChatGPT explanation: https://chatgpt.com/share/68c30421-be3c-8011-8431-8f3385a654...
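For instance, these two functions behave the same way; the async version is just the promise chain written flat. fetchUser is a made-up helper, purely for illustration.

  // Hypothetical helper used by both versions below.
  declare function fetchUser(id: string): Promise<{ name: string }>;

  // Written with an explicit promise chain...
  function greetWithThen(id: string): Promise<string> {
    return fetchUser(id)
      .then((user) => `Hello, ${user.name}`)
      .catch(() => "Hello, stranger");
  }

  // ...and the same logic with async/await: each `await` marks where the chain
  // above had a `.then`, and try/catch stands in for `.catch`.
  async function greetWithAwait(id: string): Promise<string> {
    try {
      const user = await fetchUser(id);
      return `Hello, ${user.name}`;
    } catch {
      return "Hello, stranger";
    }
  }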


During my interviews, maybe I should ask them to read and understand this:

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

prior to any dev they plan to do in JS/TS.

PS: 10 bucks that none of them would stay.


That reminds me of my Unix guru of the 90s: "man pages ARE easy to read".

[spoiler: "when you are already an expert in the tool detailed in it"]

