From the dialogues in the pictures it doesn’t sound like they are using anyone’s emails for training. The messaging indicates it’s more like using them as context and/or generating embeddings for RAG. Unless there’s something else I’m not aware of.
I know that Google does a lot of bad stuff but we don’t need to make up stuff they just aren’t doing
This doomsday messaging and alarmism only serves to degrade the whole cause.
edit: and before someone says that they also don’t want that: then let’s criticize it for what it is (opting users into a feature without consent). We don’t need to make stuff up, it really doesn’t help.
>When smart features are on, your data may be used to improve these features. Across Google and Workspace, we’ve long shared robust privacy commitments that outline how we protect user data and prioritize privacy. Generative AI doesn’t change these commitments — it actually reaffirms their importance. Learn how Gemini in Gmail, Chat, Docs, Drive, Sheets, Slides, Meet & Vids protects your data.
That's what comes up when I click "Learn more" while toggling the smart features on/off.
It may not do it now, but I really don't like the implications. Especially a tone of "it's not actually bad, it's good!"
"Your data stays in Workspace. We do not use your Workspace data to train or improve the underlying generative AI and large language models that power Gemini, Search, and other systems outside of Workspace without permission."
But then if the terms include a vague permission and/or license to use the data for improving the results, the text is factually correct while obscuring the fact that they do solicit your permission and, with that permission, use the data.
Discovering new settings that I was opted in to without being asked does not scream good faith.
Separately, their help docs are gibberish. They must use this phrase 20 times: "content is not used for training generative AI models outside of your domain without your permission." Without telling you if that checkbox is that permission; where that permission is set; or indeed, even if that permission is set. From reading their documentation, I cannot tell if that checkbox in gmail allows using my data outside my organization or not.
Sorry, but that "doomsday" "alarmism" is exactly what is needed and warranted. This practice of sneakily opting users into things they don't want, instead of a very clear, full-on pop-up saying "We now use your data and private emails for AI training", is exactly the problem.
> I know that Google does a lot of bad stuff but we don’t need to make up stuff they just aren’t doing
No no. a) they ARE doing a lot of bad stuff and b) that shit ain't made up and they ARE exactly doing that. Or do you also think that Github is NOT using private repos to train Copilot? Do you honestly and truly believe that?
If you do truly believe that I got a bunch of bridges to sell to you.
They disable so many features when you turn off "Smart features", i.e.:
Grammar, Spelling, Autocorrect, Smart Compose/Reply (those templated suggested replies), Nudges, Package tracking, Desktop notifications.
Google really wants to punish you for doing that.
Turning off the inbox categories feature was particularly annoying. A feature they had for a good decade before deciding they weren't just happy with collecting my data.
It is unbelievably manipulative that they tied this organizational feature with this new LLM-training scam they have running now.
Learning to not rely on inbox categories does make it easier for me to finally leave Gmail for a real email provider though, so maybe this will all work out in the end.
Yep I went through this sad journey with my gmail this week. Got tired of seeing "coming soon" packages cluttering up my inbox, so I looked into how to turn them off. It turned off the categories. Reminds me of the dark pattern used by many apps, where if you turn off notifications to avoid ads/spam, you also lose useful notifications.
I never enabled that and I verified it's not enabled. However I still have the "Happening Soon" section in my inbox that has status of some packages.
My guess is that Smart Features will (along with everything else it does) scan your emails to populate "Happening Soon" with package status, and if you then enable "Turn on package tracking", it will also periodically poll the shippers for those packages to keep the status up to date.
My complaint still stands. I want to entirely remove "Happening Soon" without disabling categories. It's not even the "Google reading your emails" creepiness. I just don't want my UI to be cluttered.
I see. The "Happening Soon" section of the inbox is populated with stuff other than order tracking, such as your airline tickets. So I can see why you'd have to disable the whole shooting match to get rid of it. But I agree that it would be nice for some people if there was an inbox layout setting to just get rid of it altogether.
These switches don't control whether your emails are used to train models. They control whether you get to use machine inference features on your own emails.
I'm Australian, and I've used Australian idioms, spelling, contractions, et al in emails to regular contacts for 20+ years via gmail (Yet Another Early Gmail Invite User).
Despite having selected UK English (there's no 'Strayla option) gmail via the web still insists on suggesting I morph into a cookie cutter middle north American Engrish typer.
They're claiming that these options allow Google to use your data to train its AI, but that's not what it says at all. Where are they getting that idea from?
"training on our data" has turned into a catchphrase like "taking our guns" or "banning our books" - dumb propaganda for anti-AI crowd to enrage people. Whether personalized AI-based experience is useful can be debated but everything has to be twisted into culture wars, thats just how media is nowadays
Privacy is not a culture war issue. Not wanting massive amounts of personal data hoovered up to train an AI for Google is reasonable. Arguing against this kind of invasion of privacy is not "dumb propaganda".
It's like with Gemini and Google smart devices. You need to opt in to data training to use the Gemini apps, which means you otherwise won't be able to access basic features like asking Gemini to turn off your smart light bulbs. Essentially, Google is preventing you from using any smart features unless you allow training on your own data. Even to access basic features like chat history, you need to enable Gemini activity, which essentially allows Google to train on all of your conversations. This applies even to paid tiers.
One could use encryption, but most receiving users haven't the faintest clue how to use it. If most people cared, then I wouldn't give a flying fuck about what Google does, because they couldn't read my mail even if they wanted to.
You can't really use encryption for a couple of reasons:
1. Encrypted email only protects the email body. All the metadata - who is emailing whom, when, from what servers - as well as the subject line, are in the clear regardless (see the sketch below this list). The metadata is more valuable than the data in many cases.
2. Unless users download all their Gmail messages, leaving none on the server (essentially, like an old POP account), then the decrypted message bodies will be in their Gmail account for Google to read.
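To make point 1 concrete, here's a minimal Python sketch (addresses, subject, and the "ciphertext" are all placeholders, not real PGP output) of what an encrypted message looks like to the provider that stores it:

    # Rough sketch: even a PGP-encrypted message leaves its headers in the clear.
    # Addresses and the ciphertext below are placeholders, not real PGP output.
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.org"    # visible to every relay, including Gmail
    msg["To"] = "bob@gmail.com"          # visible
    msg["Subject"] = "Q3 salary review"  # visible; PGP does not cover the subject
    msg.set_content(
        "-----BEGIN PGP MESSAGE-----\n"
        "hQEMA...placeholder ciphertext...\n"
        "-----END PGP MESSAGE-----\n"
    )

    # Only the part between the PGP markers is opaque; everything above it is
    # metadata the receiving provider can read, store, and analyze.
    print(msg.as_string())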
Hey, if anyone wants to use AI to draft replies but wants to make sure their data isn't used for training, I've built a Chrome extension that does exactly that! You can plug in your API key, and it supports all SOTA models.
The article is useful overall, but the following line is such bad journalism that I want to call it out, since we can't afford to have people dumbed-down.
> The reason behind this is Google’s push to power new Gmail features with its Gemini AI, helping you write emails faster and manage your inbox more efficiently.
A journalist is stating a corporate PR explanation for a sketchy action as fact. And it's not even a very believable reason, at least not completely, and the journalist certainly isn't verifying it. And it's important.
Honestly, get a small VM at openbsd.amsterdam or another good hoster and just do it yourself. OpenBSD has outstanding documentation, and standing up smtpd and Dovecot there is super simple, especially for a single user rather than a whole group or family (where LDAP or Kerberos would be warranted). Just take initiative and own your data. Does it cost a little bit of upfront time? Sure, but maybe just a weekend. Does it require a little maintenance here and there? Sure. But it's still cheaper than paying for Fastmail, or waiting for Fastmail to also exploit your data because the shareholders said so... Just own your data.
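For a sense of scale, a single-user OpenSMTPD setup is roughly the sketch below. Everything in it (domain, cert paths, the pki label) is a placeholder, and you should check smtpd.conf(5) and the official examples before copying anything:

    # /etc/mail/smtpd.conf - rough single-user sketch, placeholders throughout
    pki "mail.example.org" cert "/etc/ssl/mail.example.org.fullchain.pem"
    pki "mail.example.org" key  "/etc/ssl/private/mail.example.org.key"

    table aliases file:/etc/mail/aliases

    # accept mail for your domain on port 25, and submissions from your own
    # authenticated clients on port 587
    listen on egress tls pki "mail.example.org"
    listen on egress port submission tls-require pki "mail.example.org" auth

    action "local_mail" maildir alias <aliases>
    action "outbound" relay

    match from any for domain "example.org" action "local_mail"
    match from any auth for any action "outbound"

Dovecot then just serves ~/Maildir over IMAP; that part is roughly stock config.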
I like Fastmail with my own domain for personal email, but the reality is nothing is a complete replacement for a Google account, given how tied in it is with auth and the whole Google ecosystem. I still have to use Google for work.
Proton is another one people often suggest. Hey.com sometimes too. No experience with those myself.
There are other options (such as the big guys, iCloud mail or Outlook.com), but aside from self-hosting (which I don't want to spend time maintaining just for my personal mail), I personally haven't seen much outside of those ones that are recommended often.
Presumably you need an email hosting service to put behind your domain, correct? Or do you self-host an email server? If you are completely self-hosting, how do you deal with being marked as spam by large spam filter organizations for being a low-trust sender?
In daily practice you don't, unless you host in a DC with lots of spam. Use small but trusted hosters.
But as long as you get SSL/DKIM/SPF and the other stuff right (and it's not THAT difficult), most hosters will let you through. Unless it's German Telecom: for some reason t-online.de decided to only allow emails from hosters they whitelisted, and there's a whole approval process which even requires registering with them with an email that is NOT from your domain, and even a fax. But honestly, fuck anyone using that domain.
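For anyone wondering what "getting DKIM/SPF right" looks like in practice: it's mostly three DNS TXT records plus matching reverse DNS. A rough sketch, where the domain, IP, selector, and key are placeholders and the DKIM public key comes from whatever signer you run:

    ; rough zone-file sketch, placeholders throughout
    example.org.                  IN TXT "v=spf1 mx ip4:203.0.113.25 -all"
    mail._domainkey.example.org.  IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkq...publickey..."
    _dmarc.example.org.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"
    ; plus a PTR record for 203.0.113.25 matching your mail server's hostname,
    ; usually set through your hoster's panel rather than your own zone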
Reading comprehension and media literacy are at all-time lows, below what cognitive scientists formerly believed was a hard floor of "absolute zero comprehension".
It's just for "improving", not training! Why would anyone assume the worst? They're the good guys.. more like a cuddly teddy bear than a monopolistic megacorp! Surely they'd explicitly tell us exactly what they were doing before they do it. Stop fear-mongering, people! *anxiously checks the price of my GOOG stock*
actually, I do want them to read my auto subscribed newsletter slop for random services that I register to. I do want them to use that messy, unorganized, faulty or maybe even useless data to train their AI models.