Gmail is an Electronic Communication Service as defined in 18 U.S.C. § 2510, meaning its contents are protected under the Stored Communications Act (18 U.S.C. Chapter 121 §§ 2701–2713).
Communications with an AI system don't involve another human, so they aren't protected by ECPA or the SCA and get less protection. This is controversial, and some have called for ECPA/SCA to be extended to cover AI services; that would mean a warrant, not just a subpoena, would be needed to get your OpenAI history.
In a way it's like someone talking to themselves in the bathroom mirror. If anything it's a higher privacy expectation than regular email: you expect no human to see it at all.
Um, Windows 11 still hasn’t moved all the necessary utilities and administrative panels over to the windowing toolkit Microsoft introduced in 2012, and macOS 26(??) is… hideous.
IANAL and this is not legal advice, but you're probably fine reverse engineering a mobile app and intercepting your own network traffic. He was doing OK until he started enumerating IDs in their database, at which point he started venturing into the territory that got weev 3.5 years.
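To make the distinction concrete, here's a sketch of the difference in Python (the endpoint and ID scheme are made up for illustration; the real app in question differs):

    import requests

    # Hypothetical API, purely for illustration.
    BASE = "https://api.example.com/v1/accounts"

    # Inspecting your own traffic: replaying the one ID your own client uses.
    my_id = 1041
    print(requests.get(f"{BASE}/{my_id}").json())

    # "Enumerating IDs": walking sequential identifiers to pull records
    # that were never yours. This is the step that moves from your own
    # data to other people's -- the weev/AT&T fact pattern.
    for other_id in range(1, 10_000):
        resp = requests.get(f"{BASE}/{other_id}")
        if resp.ok:
            pass  # every hit here is someone else's record

The first request is your own data moving through your own device; the loop is a deliberate attempt to retrieve records you were never issued, which is where the legal exposure starts.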
I mean, he ventured in that direction, but until he discloses PII and leaks evidence of his intent that's the extent of the similarity: directional. People on message boards drastically underrate the importance of intent evidence in criminal cases; they all want there to be some hard-and-fast rule like "if you can see it in the URL, and you don't use a single-quote character to break SQL with it, it's fair game", which is not at all how it works.
His blog post seems to make it clear that his intent was to gain access to data in a computer system he did not have permission to access. Why would "disclose PII" be relevant?
CFAA cases turn on the "why" as much as the "how", and "because I wanted to find and disclose security vulnerabilities for the good of the public" is a disfavored "why". Read the sentencing filings in the case you're talking about to see more about the implication of disclosure.
Agreed. I've been doing this for 25+ years and personally know a dozen people who have been threatened and several who have been sued or faced potential prosecution for legitimate security research. I've experienced both situations!
That doesn't make it right, and the treatment of the researcher here was completely inappropriate, but telling young researchers to just go full disclosure without being careful about documentation, legal advice and staying within the various legal lines is itself irresponsible.
They aren't talking about general-purpose datacenters, but satellite uplink stations. The new constellations of low-Earth-orbit (LEO) internet satellites (like Starlink) can network with each other, but traffic eventually needs to downlink to a big terrestrial dish where it meets a fiber backbone. Its position in the southern hemisphere, in the middle of the Atlantic, plus its political stability (still part of keeping the sun from setting on the British Empire), would make this an interesting place for downlink stations.
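To put rough numbers on why an isolated mid-ocean site matters: a dish can only talk to satellites above its local horizon, and at LEO altitudes that footprint is small. A back-of-the-envelope sketch (the 550 km altitude and 25° minimum elevation angle are my assumptions, roughly Starlink-like):

    import math

    R = 6371.0                 # mean Earth radius, km
    h = 550.0                  # assumed LEO altitude, km (Starlink-like)
    elev = math.radians(25.0)  # assumed minimum usable elevation angle

    # Triangle: Earth center / ground dish / satellite.
    # Law of sines gives the nadir angle at the satellite...
    eta = math.asin(R * math.cos(elev) / (R + h))
    # ...and the triangle's angle sum gives the Earth-central angle.
    lam = math.pi / 2 - elev - eta

    footprint_km = R * lam                               # ground radius a dish can serve
    slant_km = (R + h) * math.sin(lam) / math.cos(elev)  # dish-to-satellite range

    print(f"coverage radius per dish: {footprint_km:.0f} km")  # ~940 km
    print(f"slant range at min elevation: {slant_km:.0f} km")  # ~1120 km

With those numbers a gateway dish only reaches satellites within roughly 940 km of overhead, so traffic over the empty South Atlantic either has to laser-hop a long way to land or come down nearby. That makes mid-ocean territory genuinely scarce real estate for downlink sites.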
Not a ton of jobs, but some CapEx for construction and probably a couple dozen people year-round.
We use Microsoft's PhotoDNA scanning service on all images we intake for research, which has access to hash banks collected by NCMEC, a government-sponsored clearinghouse on child exploitation, and the Tech Coalition, the private group coordinating child safety work between major platforms.
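For context, the integration on our side is just a hook on the intake path; everything interesting (the hash banks) lives with Microsoft. A minimal sketch of the shape of it — the endpoint, header, and response field below are placeholders I've made up, not the real PhotoDNA Cloud Service API, which requires an approved application to access:

    import base64
    import requests

    # Placeholder endpoint/credential -- the real PhotoDNA Cloud Service
    # details come with an approved application; these names are made up.
    MATCH_URL = "https://example.invalid/photodna/v1.0/Match"
    API_KEY = "REDACTED"

    def scan_intake_image(path: str) -> bool:
        """Hash-match one intake image against the NCMEC / Tech Coalition
        banks; True means it matched a known hash and must be escalated."""
        with open(path, "rb") as f:
            body = {"DataRepresentation": "Inline",
                    "Value": base64.b64encode(f.read()).decode("ascii")}
        resp = requests.post(MATCH_URL, json=body,
                             headers={"Ocp-Apim-Subscription-Key": API_KEY})
        resp.raise_for_status()
        return bool(resp.json().get("IsMatch"))

The matching itself, and the hash banks, stay on Microsoft's side; we only ever see the match result, never the hashes.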