>As standardized in [2 - 4], the Privacy Pass protocol is able to accommodate many “architectures.” Our deployment model follows the original architecture presented by Davidson et al. [1], called “Shared Origin, Attester, Issuer” in § 4 of [2].
From [2] RFC 9576 § 3.3 "Privacy Goals and Threat Model":
>Clients explicitly trust Attesters to perform attestation correctly and in a way that does not violate their privacy. In particular, this means that Attesters that may be privy to private information about Clients are trusted to not disclose this information to non-colluding parties. Colluding parties are assumed to have access to the same information; see Section 4 for more about different deployment models and non-collusion assumptions. However, Clients assume that Issuers and Origins are malicious.
And from [2] RFC 9576 § 4.1 "Shared Origin, Attester, Issuer":
>As a result, attestation mechanisms that can uniquely identify a Client, e.g., requiring that Clients authenticate with some type of application-layer account, are not appropriate, as they could lead to unlinkability violations.
Womp womp :(
This is not genuinely private in any meaningful sense of the term. Kagi plays all three roles, and even relies on the very thing § 4.1 says is not appropriate: an attestation mechanism that can uniquely identify a client. They use the client's session token: "In the case of Kagi’s users, this can be done by presenting their Kagi session cookie to the server."
Frankly, that blog post is disingenuous at best, and malicious at worst.
=================================
I want to be wrong here. Where am I wrong? What am I missing?
From [2] RFC 9576 § 4.1 "Shared Origin, Attester, Issuer", right before the sentence you quoted:
> In this model, the Attester, Issuer, and Origin share the attestation, issuance, and redemption contexts.
I haven't read the RFC in detail, but I believe this is where the nuance is: when you enable the Privacy Pass setting in the extension/browser, the redemption context is changed relative to the attestation context by removing the session cookie, leaving only the information a logged-out browser would send. What remains is your IP address and browser fingerprint, which can be countered by using Tor.
This would definitely seem like a big concern if you were just looking at the RFC, but the key here is that Kagi's system has a different set of security/privacy/functional requirements and therefore the issues mentioned in the RFC do not necessarily apply.
In the RFC's architecture, the request flow is like so:
1. CLIENT sends anonymous request to ORIGIN
2. ORIGIN sends token challenge to CLIENT
3. CLIENT uses its identity to request token from ISSUER/ATTESTER
4. ISSUER/ATTESTER issues token to CLIENT
5. CLIENT sends token to ORIGIN
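What makes step 4 anonymous is blind issuance: the ISSUER signs a value it cannot see. A toy RSA blind-signature sketch of that step (textbook parameters, illustration only; the actual Privacy Pass token types are the blind RSA and VOPRF schemes of RFC 9578):

```python
# Toy RSA blind signature: why the Issuer in step 4 never sees the token
# it signs. Tiny textbook primes, for illustration only -- never use
# parameters like these in practice.
import math
import secrets

# ISSUER's keypair
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))        # private exponent

# CLIENT: choose a token value and blind it with a random factor r
m = 42                                   # token (normally a hash of a nonce)
r = secrets.randbelow(n - 2) + 2
while math.gcd(r, n) != 1:               # r must be invertible mod n
    r = secrets.randbelow(n - 2) + 2
blinded = (m * pow(r, e, n)) % n         # the ISSUER sees only this value

# ISSUER: signs blindly, learning nothing about m
blind_sig = pow(blinded, d, n)

# CLIENT: unblinds to recover a valid signature on m
sig = (blind_sig * pow(r, -1, n)) % n

# ORIGIN: verification at redemption; the ISSUER cannot match this (m, sig)
# pair back to the blinded value it signed, because r never left the client
assert pow(sig, e, n) == m
```

Because r never leaves the client, the issuance transcript and the redemption transcript are cryptographically unlinkable; that is the property the rest of this thread argues about.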
You can see how the ISSUER/ATTESTER can identify the client as the source of the "anonymous request" to the ORIGIN: because the ISSUER, ATTESTER, and ORIGIN are the same entity, it can use a timing attack to correlate the request to the ORIGIN (1.) with the request to the ISSUER/ATTESTER (3.).
However, you can also see that if a lot of time passes between steps (1.) and (3.), such an attack becomes infeasible. Reading past your quote from RFC 9576 § 4.1, it states:
> Origin-Client, Issuer-Client, and Attester-Origin unlinkability requires that issuance and redemption events be separated over time, such as through the use of tokens that correspond to token challenges with an empty redemption context (see Section 3.4), or that they be separated over space, such as through the use of an anonymizing service when connecting to the Origin.
In Kagi's architecture, the "time separation" requirement is met by making the client generate a large batch of tokens up front, which are then slowly redeemed over a period of 2 months. The "space separation" requirement is also satisfied with the introduction of the Tor service.
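The time-separation idea can be sketched numerically. Batch size and window below are illustrative assumptions, not Kagi's published parameters (beyond the "2 months" mentioned above):

```python
# Sketch: tokens are issued in one up-front batch, then redeemed gradually,
# so a redemption's timestamp says almost nothing about which issuance
# event (and hence which account) it belongs to.
import random
from datetime import datetime, timedelta

BATCH_SIZE = 500                          # hypothetical batch size
WINDOW = timedelta(days=60)               # "a period of 2 months"

issued_at = datetime(2024, 1, 1, 12, 0)   # a single issuance event
redemptions = sorted(
    issued_at + timedelta(seconds=random.uniform(0, WINDOW.total_seconds()))
    for _ in range(BATCH_SIZE)
)

# A timing correlator looks for redemptions shortly after an issuance.
# With batching, only a tiny fraction of redemptions land near the
# issuance event; the rest are smeared across the whole window, so the
# per-redemption signal is roughly 1 hour out of 1440.
near = [t for t in redemptions if t - issued_at < timedelta(hours=1)]
assert len(near) < BATCH_SIZE * 0.05
```

The same redemption stream without batching would put every redemption seconds after its own issuance request, which is exactly the correlation described two paragraphs up.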
There is some more discussion in RFC 9576 § 7.1. "Token Caching" and RFC 9577 § 5.5. "Timing Correlation Attacks".
One question you may have is: Why wasn't this solution used in the RFC?
This can be understood if you look at the mentions of "cross-ORIGIN" in the RFC. This RFC was written by Cloudflare, who envisioned its use across the whole Internet. Different ORIGINs would trust different ISSUERs; tokens from one ORIGIN<->ISSUER network might not work in another. This made it infeasible for clients to mass-generate tokens in advance, as a client would need to generate tokens across many different ISSUERs.
Of course, adoption was weak and there ended up being only one ISSUER (Cloudflare), so they adopted the same architecture as Kagi, where clients batch-generate tokens in advance (with a batch size of only 30 tokens, though).
RFC 9576 § 7.1. also mentions a "token hoarding" attack, which Cloudflare felt particularly threatened by. Cloudflare's Privacy Pass system worked in concert with CAPTCHAs. Users could trade a completed CAPTCHA for a small batch of tokens, allowing a single CAPTCHA completion to be split into multiple redemptions across a longer time period.
However, rudimentary "hoarding"-like attacks were already in use against CAPTCHAs through "traffic exchanges". Opening up another avenue for hoarding through Privacy Pass would have only exacerbated the problem.
>3. CLIENT uses its identity to request token from ISSUER/ATTESTER
The ISSUER and ATTESTER are different roles. As previously quoted, "Clients explicitly trust Attesters to perform attestation correctly and in a way that does not violate their privacy." The RFC is explicit that, when all of the roles are held by the same entity, the attestation should not rely on unique identifiers. But that's exactly what a session cookie is.
>You can see how the ISSUER/ATTESTER can identify the client as the source of the "anonymous request" to the ORIGIN because the ISSUER, ATTESTER and ORIGIN are the same entity, and therefore it can use a timing attack to correlate the request to the ORIGIN (1.) with the request to the ISSUER/ATTESTER (3.).
No timing or spacing attack is needed here. If I have to provide Kagi with a valid session cookie in order to get the tokens, then they already have a unique identifier for me. There is no guarantee that Kagi is not keeping a 1-to-1 mapping of session cookies to ISSUER keypairs, or that Kagi could not, if compelled, establish distinct ISSUER keypairs for specific session cookies.
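The keypair-partitioning concern can be made concrete. A minimal sketch with hypothetical names, using HMAC as a stand-in for the real token signature scheme:

```python
# Sketch of the concern: if a malicious Issuer keeps a 1-to-1 map of
# session cookies to keypairs, then at redemption, the key that verifies
# the token identifies the client. HMAC stands in for the real signature
# scheme; all names here are illustrative.
import hashlib
import hmac

def sign(key: bytes, token: bytes) -> bytes:
    return hmac.new(key, token, hashlib.sha256).digest()

# Malicious Issuer: a distinct signing key per session cookie
issuer_keys = {"cookie_alice": b"key-A", "cookie_bob": b"key-B"}

# Issuance: Alice's "anonymous" token is signed with *her* key
token = b"anonymous-looking-token"
sig = sign(issuer_keys["cookie_alice"], token)

# Redemption: the Origin (the same entity) tries every key; whichever
# key verifies reveals who the anonymous redeemer is. No timing
# correlation required.
def deanonymize(token: bytes, sig: bytes) -> str:
    for cookie, key in issuer_keys.items():
        if hmac.compare_digest(sign(key, token), sig):
            return cookie
    return "unknown"

assert deanonymize(token, sig) == "cookie_alice"
```

The usual countermeasure is for clients to verify that one and the same issuer public key is served to everyone, which is touched on further down the thread.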
> The RFC is explicit that, when all of the roles are held by the same entity, the attestation should not rely on unique identifiers. But that's exactly what a session cookie is.
Very true, but again, the RFC describes a completely different threat model with much stronger guarantees. The Kagi threat model:
- Does not provide Issuer-Client unlinkability
- Does not provide Attester-Origin unlinkability
In particular, the model does not assume a malicious Issuer and requires the Client have some level of trust in the Issuer. The Client trusts the Issuer with their private billing information but does not trust the Issuer with their search activity.
The RFC explicitly guarantees the Issuer cannot obtain any of the Client's private information.
That said, I will point out that this Issuer-Client unlinkability issue can be solved by introducing a 3rd-party service, or once Kagi starts accepting Monero payments.
> There is no guarantee that Kagi is not keeping a 1-to-1 mapping of session cookies to ISSUER keypairs, or that Kagi could not, if compelled, establish distinct ISSUER keypairs for specific session cookies.
Also completely valid, but also not something Kagi claims to guarantee. They believe the extension should be responsible for guarding against issuer keypair partitioning. I don't think it's implemented currently, but it shouldn't be too hard, especially since they currently use only one keypair.
>Very true, but again, the RFC describes a completely different threat model with much stronger guarantees. The Kagi threat model:
>
>- Does not provide Issuer-Client unlinkability
>
>- Does not provide Attester-Origin unlinkability
If the Issuer, Attester, and Origin are all a single party (Kagi), then it follows from that threat model that Kagi does not provide Kagi-Client unlinkability, no?
Further, this is not what Kagi has advertised in the blog post:
>What guarantees does Privacy Pass offer?
>
>As used by Kagi, Privacy Pass tokens offer various security properties (§ 3.3, of [2]).
Kagi are explicitly stating that they provide the guarantees of § 3.3. They even use more plain language:
>Generation-redemption unlinkability: Kagi cannot link the tokens presented during token redemption (i.e. during search) with any specific token generation phase. *This means that Kagi will not be able to tell who it is serving search results to*, only that it is someone who presented a valid Privacy Pass token.
>
>Redemption-redemption unlinkability: Kagi cannot link the tokens presented during two different token redemptions. This means that *Kagi will not be able to tell from tokens alone whether two searches are being performed by the same user*.
As it stands, Kagi cannot meaningfully guarantee those things, because the starting point is the client providing a unique identifier to Kagi.
>That said, I will point out that this Issuer-Client unlinkability issue can be solved by introducing a 3rd-party service or when Kagi starts accepting Monero payments.
Sure, but at that point, there is no need for any of the Privacy Pass infrastructure in the first place.
>Also completely valid, but also not something Kagi claims to guarantee.
I disagree. Their marketing here is "we can't link your searches to your identity, because cryptography."
>They believe the extension should be responsible for guarding attainer issuance partitioning. I don't think it's implemented currently but it shouldn't be too hard, especially since they currently use only 1 keypair.
If Kagi is going to insist on being the attester and on requiring uniquely identifiable information as the basis for issuing tokens, then yes, the only way to even try to confirm that they're not acting maliciously is to keep track not only of distinct keypairs, but also of public and private metadata blocks within the tokens, and to share all of that data (in a trustworthy manner, of course) with other confirmed Kagi users. And if a user doesn't understand all of the nuances that would entail, or all of the nuances just discussed here, and instead just trusts the Kagi-written client implicitly? Then it's all just privacy theater.
> If the Client, Attester, and Origin are all a single party (Kagi), then it follows from that threat model that Kagi does not provide Kagi-Client unlinkability, no?
Kagi does not provide Kagi-Client unlinkability as the Client's payment information allows Kagi to trivially determine the identity of the Client. Kagi does provide Search-Client unlinkability (what the RFC calls Origin-Client unlinkability). More formally: If we assume Kagi cannot derive any identifying information from the privacy token (which I understand you dispute), then given any two incoming search requests, Kagi would not be able to determine whether those two requests came from the same Client or two different Clients.
> Kagi are explicitly stating that they provide the guarantees of § 3.3. They even use more plain language:
Not 100% sure I'm understanding you correctly, but if you are claiming that Kagi promises all the unlinkability properties in § 3.3, I would say that's unfair, since they explicitly deny this in the FAQ at the bottom of the post.
I think they are citing that section as they reference several definitions from it in the text that follows.
> > Generation-redemption unlinkability: Kagi cannot link the tokens presented during token redemption (i.e. during search) with any specific token generation phase. *This means that Kagi will not be able to tell who it is serving search results to*, only that it is someone who presented a valid Privacy Pass token.
> >
> > Redemption-redemption unlinkability: Kagi cannot link the tokens presented during two different token redemptions. This means that *Kagi will not be able to tell from tokens alone whether two searches are being performed by the same user*.
>
> As it stands, Kagi cannot meaningfully guarantee those things, because the starting point is the client providing a unique identifier to Kagi.
These specific unlinkability properties are satisfied given that the earlier assumption about the token not providing identifiable information is true.
> Sure, but at that point, there is no need for any of the Privacy Pass infrastructure in the first place.
Kagi Privacy Pass in combination with a 3rd party can achieve a level of privacy that cannot be matched by architectures that don't involve Privacy Pass or some other exotic cryptography.
I claim that a 3rd-party service + Kagi Privacy Pass meets all unlinkability properties in the RFC (except Attester-Origin for obvious reasons). Additionally, it guarantees confidentiality of the search request and response from malicious middleboxes, given the assumption about the token is true and that the user has access to a trusted proxy.
> I disagree. Their marketing here is "we can't link your searches to your identity, because cryptography."
Disagreement acknowledged. And yes, that quote is a fairly accurate summary of the marketing!
> If Kagi is going to insist on being the attester and on requiring uniquely identifiable information as the basis for issuing tokens, then yes, the only way to even try to confirm that they're not acting maliciously is to keep track not only of distinct keypairs, but also of public and private metadata blocks within the tokens, and to share all of that data (in a trustworthy manner, of course) with other confirmed Kagi users. And if a user doesn't understand all of the nuances that would entail, or all of the nuances just discussed here, and instead just trusts the Kagi-written client implicitly? Then it's all just privacy theater.
Yeah, I'm glad you are willing to say it at least; a lot of stuff these days is security theatre, and people just kinda stick their heads in the sand, I guess? I'm still hoping that people will realize that SSL has long been in need of a successor, and frankly BGP needs a complete rework too. It's also surprising to me that people are still willing to use Linux distros, although realistically modern computing as a whole is rotten at its core. At least PGP is still alive, but it has its problems too...
I was able to follow their guide to scrape the resultadosconvzla.com website, and ended up with ~22,000 JPGs of receipts. A random sampling of them shows that, for the most part, they contain no actual inked signatures and/or fingerprints that would be present on the receipts signed by the poll workers. Some of the receipts do have signatures and/or fingerprints, but not most of them. Most of them look like this:
I.e., it looks like they asked a voting machine to print out a receipt, and it did. Then, they scanned the receipt in and put it online. The important part though, where individual poll workers scattered across hundreds of stations all over the country all sign their receipts in ink, for comparison against the computerized signatures gathered beforehand, does not appear to have happened for most of the receipts that the opposition has in possession.
I'm frustrated that the Maduro government has released highly improbable numbers. And I'm frustrated that (it certainly appears) the opposition doesn't have nearly as much validated data as they claim to have. My gut tells me that the CNE got hacked, that the results are thus untrustworthy, and that they'll need to re-run the election, preferably by pen and paper. But the Maduro administration didn't want to face up to that fact and so made up numbers instead -__-
As is explained in detail here: https://x.com/i/broadcasts/1YpKklRpzAyGj
The signatures on the Actas are digital, not ink. The testigos sign on the voting machine's screen. The machine will print out the receipt once the witnesses agree to the electronic count against their tallies of the individual paper votes. After printing, the machine goes online to transmit the electronic results, which can always be audited by the physical results.
What's more likely: that the opposition forged tens of thousands of receipts in less than a day, or that a dictator reported fake results to remain in power? Receipts, mind you, copies of which are given to each witness from the top-three political parties; at any point they could have been called into question, but not a single counterexample has been shown.
Please don't drink their "North Macedonia" hack kool-aid.
>The signatures on the Actas are digital, not ink.
Yes, each acta has a digital signature, gathered ahead of time. It is there to compare against the inked signatures signed by the members of the mesa, after confirmation that the sampled ballots converge toward the computer's results. The ballots are the source of truth here, not what the computer receipt says. And the link between the ballots and the receipt are the inked signatures (or fingerprints) of the members of the mesa.
>What's more likely, that the opposition forged tens of thousands of receipts in less than a day, or a dictator reported fake results to remain in power?
The opposition need not have been the one to hack the machines. A third party could have done that. And again, the opposition haven't released "forged" receipts, merely receipts that have not actually been certified. How they have obtained those receipts is an open question at this point.
>Receipts, mind you, copies of which are given to each witness from the top-three political parties, at any point now could have been called into question but not a single counter example has been shown.
90% of their receipts lack any inked certification from the presidents, secretaries, members, witnesses, or operators of the mesas on the ground. That should be garnering an enormous amount of skepticism from a crowd that is normally adamant about not trusting computers during elections.
I come back a few weeks or months later, and find that HN is still silently killing NSA stories. This was near the top of the front page, until I refreshed and it disappeared, ranked 25 among the new stories despite 13 points and 2 comments.
Stay classy, mods. Signing off now :D
Edit: Currently ranked #72, probably well on its way to the fourth or fifth page. At least on https://lobste.rs, I might be able to see a modlog explanation if it gets removed there too.
>An hour at the helpdesk can help you discover great product ideas, feedback and suggestions - a gold mine when you're chasing product-market fit.
An hour with your front line support --or bothering to read through or even solicit their thoughts-- can do just as much for you, multiplied by however many support members you have. If anyone knows the flaws of your product, it's the guys and gals on the front lines that have to make excuses for it every day.
>A support rep can only go so far. Support agents often don't have the visibility in an organization to go back and fix bigger process problems. Only you can.
Speaking as someone who has done support before, and will likely continue to do so in the future in one way or another: you've got this all backwards. If anyone knows how screwed up a process is in your organization, or how broken your product is, it's the poor saps like us that are tasked with carrying those processes out and supporting those crappy products. Support agents don't lack "visibility." They lack authority and autonomy to handle issues on their own without fear of reprisal for not using the proper openers and closers and not keeping all calls under 12 minutes so they hit that magic 5 calls and 10 chats an hour marker. For all the talk of "horizontal" and "flat" organizations, most support shops have a very clearly defined hierarchy and strict control over lateral movement that blows up the very "gold mine" you're chasing after.
>When employees see their CEO on Support, they realize it's absolutely essential for them to go above and beyond call of duty to make sure their customers are more than just satisfied.
If you want "above and beyond," be prepared to compensate for it: more-than-COL raises, PTO, TOIL, year-end bonuses, above-average salaries/wages. You're the CEO. You'll go above and beyond because at the end of the day your compensation is tied directly to how well the business does financially. Front line support? We get paid the same amount no matter how easy or rough the day was, no matter how "above and beyond" we went. If anything, going "above and beyond" just means "this call will take me an hour," which means "my metrics are totally fucked for the rest of the day and possibly the week." And that could mean losing your job. Or it could just be justification for denying a raise or promotion.
===================================
Overall, I don't think you'll really get the experience you're looking for as a CEO. Unless you insist that your support manager treat you like any other front-line support tech, with all of the same metrics, and expectations, and "rough" customers, and "in-house" problems, and hours, and compensation, and fear of reprisal, you're going to miss things by simple virtue of the fact that what you're experiencing simply isn't what actually occurs on a day-to-day basis.
Exactly. It's easy to tout this CEO-on-support idea as new-age profundity, but it has no basis except in a dinky startup where the CEO is most likely doing a lot of other tasks because the funds are tight or employees few. In a reasonable company, not a little hipster startup, the CEO is not going to waste time on tech support calls when they can survey their tech support workers and get an overview.
Look tech world, it's a sorry thing but a startup is just a 'small business.'
Also part of the reason I grabbed the eject handles on enterprise devops consulting. Although I did manage to fight for, and win, open-sourcing changes to net-ldap so that it worked with A-D.