It's surprising to me how hard companies are pushing AI when it's in such a poor usability state.
I was trying to sign my stepdad up for SiriusXM (he wanted it), so I called their phone number. The first interaction with the company is a message saying you're speaking to an AI and asking what you're trying to do. So I said something like "I'd like to sign up for a new account but have a question about the promotional price." It said it couldn't understand the request, and I had to repeat things a few times until it gave up and sent me to a human, who resolved the question quickly. But it took minutes to reach that human.
It's wild to me that companies are putting AI at the top of their sales funnel.
The modern AI phone support systems I’ve encountered aren’t able to do anything or go off script, so it sounds better but it’s still a lousy experience.
I'd bet there's some calculation that people who try to sign up for a plan over the phone end up using the phone more down the line, which would mean more costly operator time. So the math works out where the overall savings of making enough people give up before reaching a human outweighs the cost of potentially lost new subscriptions by phone call. Or, they just didn't study that. Or, the decision-makers don't contact customer support for themselves and so don't know how infuriatingly unhelpful AI ones are.
> I'd bet there's some calculation that people who try to sign up for a plan over the phone end up using the phone more down the line, which would mean more costly operator time. So the math works out where the overall savings of making enough people give up before reaching a human outweighs the cost of potentially lost new subscriptions by phone call.
That's an example of a weird heuristic I frequently see applied to corporations: assume some awful decision is the result of some scarily hyper-competent design process, and construct a speculative explanation along those lines.
But most of us have worked in corporations, and know how stupid and incompetent they can be.
> Or, they just didn't study that. Or, the decision-makers don't contact customer support for themselves and so don't know how infuriatingly unhelpful AI ones are.
The real issue with these tools is taste. Most business people/clients have poor taste; they need creatives or engineers to rein them in and then produce the great work they need, a skill gained through years of experience and taste refinement.
The AI tools can produce the work, and the quality can be good, but taste is lost as the professionals are removed from the process.
There’s a quote I can’t remember the source of… “anyone can have an idea but not everyone can execute on it.”
AI gives the illusion that you can realize your ideas and compete with actual professionals.
The ad is hilariously bad, but McDonald's has done many terrible ads over the years where "creatives" were involved, e.g. the infamous random red couch ad.
I can sympathize but the comment that “This commercial single-handedly ruined my Christmas spirit” is insane to me. Who cares so much about advertisements lol.
Are you seriously comparing a half-minute ad with something that's five times longer at two and a half minutes? That's way too expensive for TV and way too long anyway. I think it's longer than the average TikTok video.
It looked like the preview to an upcoming horror movie. Flash through a bunch of scenes where the world is suddenly bizarre and everyone is acting strange.
> However, we notice – based on the social comments and international media coverage - that for many guests this period is 'the most wonderful time of the year'.
Cringe. I suspect the same people who needed social comments and international media coverage to figure out that Christmas might actually be a nice time for some people are the ones who decided that video was appropriate in content and aesthetics. Also, that quote reads a bit like a machine desperately trying to understand humans.
The fact that we are talking about it here (and offline; I had that discussion with a colleague) means that they are getting what they want: attention.
I always wonder about the truth in "No advertising is bad advertising." I think you can have bad advertising that alienates customers, but this doesn't seem to cross that line. We're all talking about McDonald's now, after all.
AI is deeply unpopular with a large and very vocal fraction of the population. It's reflexively just "slop" to them. (And, on Twitter, I keep seeing people praise content, learn it was AI-generated, and immediately pivot to outrage.) As such, it's reputationally risky for brands to use AI-generated resources in any public-facing project, and this situation is unlikely to change any time soon. Marketing managers need to realize this.
It’s easy to be against it now because so much content that people recognise as AI is also just bad. If professionals can start to use it to produce content that is actually good, I think opinions will shift.
There are a lot of AI videos that you can very easily tell are AI, even if they are done well. For example, I just saw a Higgsfield video of a kangaroo fighting in the UFC. You can tell it is AI, mainly because it would be an insane amount of work to create any other way. But I think it is getting close to good enough that a lot of people, even knowing it is AI, wouldn't care. Everyone other than the most ardent anti-AI people is going to be fine with this once people are creating interesting and engaging media with AI.
I think we will look back at AI "slop" as a temporary point in time where people were creating bad content, and people were defending it as good even when it was not. Instead, as you say, AI video will fall into the background as a tool creators use, just like cameras or CGI. But in my opinion it won't be that people can't tell that AI was used at all. Rather, it will be that they won't care if there is still a creative vision behind it.
At least, that is what I hope compared to the outcome where there are no creators and people just watch Sora videos tailored to them all day.
Okay, you guys are funny, because "it's not x, it's y" (or "it's not just x, it's also y") is probably the most characteristic post-2023 LLM writing quirk.
These days, though, it's not as common as it used to be. Kimi K2, in particular, is a weirdly good and stylistically flexible writer.
Yup. I'm not sure if the person I replied to was going for that, but as soon as I see anything like it, I hate to say it, my mind instantly jumps to AI, along with its grandiosity. I guess it might already be able to write like a normal person by default and I just haven't noticed. I hadn't heard of Kimi K2.