I personally use ChatGPT for search more than I do Google these days. More often than not, it gives me more exact results for what I'm looking for, and it produces links I can visit to get more information. I think this is where their competitive advantage lies, if they can figure out how to monetize it.
We don’t need anecdotes. We have data. Google has been announcing quarter after quarter of record revenues and profits and hasn’t seen any decrease in search traffic. Apple has also hinted that it hasn’t seen any decrease in revenue from the Google Search deal.
AI answers may be good enough, but there is a long history of companies that couldn’t monetize traffic via ads. The canonical example is Yahoo: it was one of the highest-traffic sites for 20 years and couldn’t monetize it.
2nd issue: defaults matter. Google is the default search engine for Android devices, iOS devices and Macs, whether users are on Safari or Chrome. It’s hard to get people to switch.
3rd issue: any money that OpenAI makes off search ads, I’m sure Microsoft is going to want their cut. ChatGPT uses Bing.
4th issue: OpenAI’s costs are a lot higher than Google’s, and they probably won’t be able to command a premium on ads. Google has its own search engine, its own servers, its own “GPUs” [sic].
5th: see #4. It costs OpenAI a lot more per ChatGPT request to serve a result than it costs Google. LLM search has a higher marginal cost.
I personally know people that used ChatGPT a lot but have recently moved to using Gemini.
There are a couple of things going on, but put simply: when there is no real lock-in, humans enjoy variety. Until one firm creates a superior product with lock-in, only those who are generating cash flow will survive.
I'm genuinely curious: why do this instead of a Google search, which also has an AI Overview / answer at the top that's basically the same as putting your query into a chatbot, but ALSO has all the links from a regular Google search, so you can quickly corroborate the info using sources outside the original AI result (including sources that disagree with the AI answer)?
The regular Google search AI doesn’t do thinky thinky mode. For most buying decisions these days I ask ChatGPT to go off and search and think for a while given certain constraints, while taking particular note of Reddit and YouTube comments, and come back with some recommendations. I’ve been delighted with the results.
I wouldn’t be surprised if ChatGPT was Pareto optimal for buying decisions… but I suspect there are a whole pile of Pareto optimal ways to make buying decisions, including “buy one of the Wirecutter picks” or “buy whatever Costco is selling”.
Even in the case where you have a good shortlist of items, the ability to then ask follow up questions in a conversational format is very useful for me. Anyway, just explaining why one might use ChatGPT for this rather than the Google search box, obviously your mileage is welcome to vary.
Even within compounding there's an interesting phenomenon.
Start with $100 and compound it at 10% per year for 30 years and you end up with about $1,745. Improve that return by just 1%, to 11%, and after the same 30 years you have about $2,289.
That small 1% edge produces roughly 31% more money in the end. Compounding is extremely powerful, and even marginally better money management leads to vastly better outcomes over time.
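For anyone who wants to check the arithmetic, here's a quick sketch (assuming plain annual compounding with no further contributions; the helper name is just illustrative):

    // Future value with annual compounding: principal * (1 + rate)^years
    const futureValue = (principal: number, rate: number, years: number): number =>
        principal * Math.pow(1 + rate, years);

    const at10 = futureValue(100, 0.10, 30); // ~1,745
    const at11 = futureValue(100, 0.11, 30); // ~2,289

    // The 1% edge compounds into roughly 31% more money after 30 years.
    console.log(at10.toFixed(2), at11.toFixed(2), ((at11 / at10 - 1) * 100).toFixed(1) + "%");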
I agree, but does the happiness report actually measure all of that with its single question:
> Please imagine a ladder with steps numbered from zero at the bottom to ten at the top. Suppose we say that the top of the ladder represents the best possible life for you and the bottom of the ladder represents the worst possible life for you. If the top step is 10 and the bottom step is 0, on which step of the ladder do you feel you personally stand at the present time?
Yes? "The best possible life" covers pretty much exactly these socioeconomic factors for most people. Is there any of these factors that you think is not covered by this question?
I've tried running teams with and without estimates and I've noticed that when work isn't estimated it tends to drag on forever. But when you put a number on it, even if it's a rough or inaccurate guess, it gets done much faster.
Not the GP, but I'm currently reading a web novel with a card game where the author didn't include alt text in the card images. I contacted them about it and they started adding it, but in the meantime AI was a big help. Same with all kinds of other images on the internet when they're significant to understanding the surrounding text. It's also a better search experience when Google, DDG, and the like make finding answers difficult. I might use smart glasses for better outdoor orientation, though a good solution might take some time. Phone camera plus AI is also situationally useful.
The question to ask is: what does a sighted person learn from looking at the image? The answer is the alt text. E.g. if the image is a floppy disk, maybe you communicate that this is the save button. If it shows a cat sleeping on the windowsill, the alt text is, yep: "my cat looking cute while sleeping on the windowsill".
I really like how you framed this: the alt text is the takeaway or the learning that needs to happen, not a recitation of the image. Where I've often had issues is more with things like business charts and illustrations, less with cute cat photos.
The logic stays the same, though the answer is longer and not always easy. Just saying "business chart" is totally useless. You can make a choice about what to focus on and say "a chart of the stock over the last five years, showing steady improvement and a clear 17 percent increase in 2022" (if there is a simple point you are trying to make), or you can provide an HTML table with the datapoints if there is data the user needs to explore on their own.
But the table exists outside the alt text, right? I don't know a mechanism to say "this HTML table represents the contents of this image" in a way that screen readers and other accessibility technologies take advantage of.
The figure tag can hold both the image and a caption tag that links them. As far as I remember, content can also be marked as screen-reader-only if you don't want the table to be visible to the rest of the users.
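Roughly this kind of structure, as a sketch (aria-describedby is one common way to tie the data table to the image, and the sr-only class stands in for whatever visually-hidden utility your CSS has; the names here are illustrative, not a standard):

    // Build a <figure> whose image has short alt text, a visible caption,
    // and a visually hidden data table linked via aria-describedby.
    function buildAccessibleChart(imgSrc: string, alt: string, caption: string): HTMLElement {
        const figure = document.createElement("figure");

        const img = document.createElement("img");
        img.src = imgSrc;
        img.alt = alt; // the short takeaway, e.g. "stock up 17 percent in 2022"
        img.setAttribute("aria-describedby", "chart-data"); // screen readers announce the table as the image's description

        const figcaption = document.createElement("figcaption");
        figcaption.textContent = caption;

        const table = document.createElement("table");
        table.id = "chart-data";
        table.className = "sr-only"; // assumed visually-hidden class, so sighted users only see the chart
        // ...append rows with the actual datapoints here...

        figure.append(img, figcaption, table);
        return figure;
    }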
Additionally, I've recently been a participant in accessibility studies where charts, diagrams and the like were structured to be easier to explore with a screen reader. Those needed JS to work and some of them looked custom, but they are also an alternative way to layer the data.
Charts are one I've wondered about: do I need to try to describe the trend of the data, or provide several conclusions that a person seeing the chart might draw?
Just saying "It's a chart" doesn't feel like it'd be useful to someone who can't see the chart. But if the other text on the page talks about the chart, then maybe identifying it as the chart is enough?
It depends on the context. What do you want to say? How much of it is said in the text? Can the content of the image be inferred from the text part? Even in the best scenario though, giving a summary of the image in the alt text / caption could be immensely useful and include the reader in your thought process.
What are you trying to point out with your graph in general? Write that basically. Usually graphs are added for some purpose, and assuming it's not purposefully misleading, verbalizing the purpose usually works well.
I might be an unusual case, but when I present graphs/charts it's not usually because I'm trying to point something out. It's usually a "here's some data, what conclusions do you draw from this?" and hopefully a discussion will follow. Example from recently: "Here is a recent survey of adults in the US and their religious identification, church attendance levels, self-reported "spirituality" level, etc. What do you think is happening?"
Would love to hear a good example of alt text for something like that where the data isn't necessarily clear and I also don't want to do any interpreting of the data lest I influence the person's opinion.
Yeah, I think I misunderstood the context. I understood/assumed it to be for an article/post you're writing, where you have some overall point you want to make. But based on what you wrote now, it seems to be more about how to caption an image you're sending to a blind person in a conversation/discussion of some sort.
I guess at that point it'd be easier for them if you just share the data itself, rather than anything generated from the data, especially if there is nothing you want to point out.
An image is the wrong way to convey something like that to a blind person. As I wrote in one of my other comments, give the data in a table format or a custom widget that can be explored.
I'm gonna flip this around... have you tried pasting the image (and the relevant paragraph of text) and asking ChatGPT (or another LLM) to generate the alt text for the image and see what it produces?
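If you wanted to script that instead of pasting into the chat UI, a minimal sketch with the openai npm package could look like this (the model name and the prompt are my assumptions, not a recommendation):

    import OpenAI from "openai";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    // Ask a vision-capable model to draft alt text, passing the surrounding
    // paragraph as context so the description matches the page's intent.
    async function draftAltText(imageUrl: string, surroundingText: string): Promise<string> {
        const response = await client.chat.completions.create({
            model: "gpt-4o", // assumed; swap in whichever vision-capable model you use
            messages: [
                {
                    role: "user",
                    content: [
                        {
                            type: "text",
                            text: "Write concise alt text for this image. Context from the page:\n" + surroundingText,
                        },
                        { type: "image_url", image_url: { url: imageUrl } },
                    ],
                },
            ],
        });
        return response.choices[0].message.content ?? "";
    }

You'd still want to skim the result before shipping it, same as with the chat UI.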
> I'm gonna flip this around... have you tried pasting the image (and the relevant paragraph of text) and asking ChatGPT (or another LLM) to generate the alt text for the image and see what it produces?
There's a great app by an indie developer that uses ML to identify objects in images. Totally scriptable via JavaScript, shell script and AppleScript. macOS only.
Important to add for blind people: "... assuming they've never seen anything and visual metaphors won't work"
The number of times I've seen captions that wouldn't make sense for people who have never been able to see is staggering. I don't think most people realize how visual our typical language usage is.
Fair enough. Anyway, I wasn't trying to say what actually changed GP's life; I was just expressing my opinion on what video models could potentially bring as an improvement for a blind person.
It’s presumptuous of you to assume I was offended.
Accusing someone of “virtue signaling” is itself virtue signaling, just for a different in-group to use as a thought-terminating cliché. It has been for decades. “Performative bullshit” is a great way to put it, just not in the way you intended.
If the OP had a substantive point to make, they would have made it instead of using a vague ad hominem that’s so 2008 it could be the opening track on a Best of Glenn Beck album (that’s roughly when I remember “virtue signaling” becoming a cliché).
The two cents are not literally monetary - your opinion is literally the two cents. You're contributing your understanding to the shared pot of understanding and that's represented by putting money into the pot, showing you have skin in the game. It's contributing to a larger body of knowledge by putting your small piece in - the phrases you suggest don't have that context behind them and in my opinion are worse for it. The beauty of the phrase is because the two cents are your opinion, everyone has enough, because everyone can have an opinion.
The lens through which you're analyzing the phrase is coloring how you see it negatively, and the one I'm using is doing the opposite. There is no need to change the phrase, just how it's viewed, I think.
People put too much weight on words. The first lesson I learned on the internet is that words are harmless. They might be deeply painful for some, but because people like myself put no weight behind them, we don't even have a concept of being mindful about such things, since it never crosses our minds, and it's really difficult to see it any other way even if we try, since it just seems like a bad joke.
And when I say 'it never crosses our minds' I really mean it: there's zero thought between thinking of a message and having it show up in a text box.
A really great example is slurs: a lot of people have to do a double take, but zero extra neurons fire when I read them. I guess early internet culture is to blame, since all kinds of language went completely uncensored and it was very common to run into very hostile people/content.
> The metaphor of assigning a literal monetary value to one's opinion reinforces the idea that contributions are transactional and that their "worth" is measured through an economic lens. That framing can be exclusionary, especially for people who have been historically marginalized by economic systems. It subtly normalizes a worldview where only those with enough "currency" - social, financial, or otherwise - deserve to be heard.
No. It’s acknowledging that perhaps one’s opinion may not be as useful as somebody else’s in that moment. Which is often true!
Your first and third paragraphs are true, but they don’t apply to every bloody phrase.
Guessing that being able to hear a description of what the camera is seeing (basically a special case of video) in any circumstance is indeed life-changing if you're blind...? Take a picture through the window and ask what's the commotion? A door outside that's normally open is closed - take a picture, tell me if there's a sign on it? Etc.
Image descriptions. TalkBack on Android has it built in and uses Gemini. VoiceOver still uses some older, less accurate, and far less descriptive ML model, but we can share images to Seeing AI or Be My Eyes and such and get a description.
Video descriptions, through PiccyBot, have made it much easier to watch more visual videos, or videos where things happen that don't make sense without the visuals. Of course, it'd be much better if YouTube incorporated AI audio description the same way they do captions, but that may happen in a good 2 years or so. I'm not holding my breath. It's hard to get more than the bare minimum of accessibility out of Google as a whole.
Looking up information like restaurant menus. Yes it can make things up, but worst-case, the waiter says they don't have that.