The only thing that seems hopeful is that people are finally talking about it at mass scale.
I promise you, as an anarchist agitator, that this is unbelievably new, even just in the last couple of years, and precisely what usually happens prior to actual direct action.
My fellow anarchists hate the fact that Donald Trump did more for anarchist-socialist praxis than every other socialist writer in history.
But I’m sure somebody will blow this off as “it’s only three examples and is not really representative”
But if it is representative…
“then it’s not as bad as other automation waves”
or if it is as bad as other automation waves…
“well there’s nothing you can do about it”
Anecdotally I was in an Uber yesterday on the way to a major Metropolitan airport and we passed a Waymo. I asked the Uber driver how they felt about Waymo and Uber collaborating and if he felt like it was a threat to his job.
His answer was basically “yes it is but there’s nothing anybody can do about it you can’t stop technology it’s just part of life.”
If that’s how people who are being replaced feel about it, while still continuing to do the things necessary to train the systems, then there will assuredly be no human future (at least not one that isn’t either subsistence or fully machine integrated), because the people being replaced don’t feel like they have the capacity to stand up to it.
The world changes and jobs cease to exist. Historically there hasn't been a great deal of support for those who lose their jobs to change.
While there are issues that are AI-specific, I don't feel as if this is one of them. This happens for many reasons, of which AI is just one. In turn, I think this means that the way to address the problem of job loss should not be AI-specific.
If it turns out that AI does not create more jobs than are lost, that will be a new thing. I think that can happen, but on a longer timeframe.
When most jobs can be done by AI, we will need a societal change to deal with that. That will be a world where people need a livelihood, not necessarily a job. I have read pieces nearly a hundred years old saying this, and there are almost certainly much earlier writings that identify that this needs to be addressed.
There will undoubtedly be a few individuals who seek to accumulate wealth and power and aim to just not employ humans. I don't think that can happen on a systemic scale, because it would be too unstable.
Two of the things that support wealth inequality are that 1) people do not want to risk what they currently have, and 2) they are too busy surviving to do anything about it.
A world where people lose their jobs and have no support results in a populace with nothing to lose and time to act. That state would not last long.
We change the world. It's not happening to you; you're doing it. You're doing it right now with your parent comment - you're not an observer on the sideline, you're in the thick of it, doing it, your every action - my every action - has consequences. Who will we be in our communities and societies?
> I have read pieces nearly a hundred years old saying this
You can read pieces 100 years old talking about famine, polio, Communist and fascist dictatorships, the subordination of women, etc. We changed the world, not by crying about inevitability but with vision, confidence, and getting to work. We'd better because we are completely responsible for the results.
Also, inevitability is a common argument of people doing bad things. 'I am inevitable.' 'Human nature is ...' (nature being inevitable). How f-ing lazy and utterly irresponsible. Could you imagine telling your boss that? Your family? I hope you don't tell yourself that.
You’re shouting into the wind, friend. My post even told you that would be the response: “there’s nothing we can do.”
Humans are reactive and antisocial so the idea of a “common good” would require two things humans can’t do: Create sustainable commons and act as though we are all equal
Any position that assumes it’s possible is not even aspirational, it’s naive.
There will always be value in doing work that other people don't want to do themselves or that requires expertise and skill that isn't conveyed all that well through books or pictures. The economy used to be full of stable masters for horses and carriages, and manual typists, and street lamp lighters, and television repairmen, and any number of jobs that don't exist anymore.
I was on the academic board of engineering mechanics for Purdue almost a decade ago.
Purdue, not necessarily uniquely but specific to its charter, does a really good job of focusing on workforce development in its engineering programs. They are very highly focused on staffing and training and less so on the science and research part, though that exists as well.
This tracks with what I would expect and is in line with what I think best practice should be.
And if my grandmother had wheels she would be a bike
Categories and ontologies are real things in the world. If you create one thing and call it something else, that doesn’t mean the definition of “something else” should change.
By your definition it is impossible to create a state based on coherent specifications because most states don’t align to the specification.
We know for a fact that’s wrong via functional programming, state machines, and formal verification
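To make that concrete, here is a minimal sketch of the state machine point (the door example and all its names are made up purely for illustration): when the only way to change state is through the transitions the specification allows, every reachable state aligns with the specification by construction.

    // Toy example: the states and transitions come straight from a small specification,
    // so no sequence of calls can ever produce a state outside it.
    enum DoorState { CLOSED, OPEN, LOCKED }

    final class Door {
        private DoorState state = DoorState.CLOSED;

        DoorState state() { return state; }

        void open()   { require(state == DoorState.CLOSED, "only a closed, unlocked door can be opened"); state = DoorState.OPEN; }
        void close()  { require(state == DoorState.OPEN,   "only an open door can be closed");            state = DoorState.CLOSED; }
        void lock()   { require(state == DoorState.CLOSED, "only a closed door can be locked");           state = DoorState.LOCKED; }
        void unlock() { require(state == DoorState.LOCKED, "only a locked door can be unlocked");         state = DoorState.CLOSED; }

        // Reject any transition the specification does not name.
        private static void require(boolean allowed, String rule) {
            if (!allowed) throw new IllegalStateException(rule);
        }
    }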
> Microservices is a service-oriented software architecture in which server-side applications are constructed by combining many single-purpose, low-footprint network services.
Gonna stop you right there.
Microservices have nothing to do with the hosting or operating architecture.
Per Martin Fowler, who formalized the term, microservices are:
“In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery”
You can have an entirely local application built on the “microservice architectural style.”
Saying they are “often HTTP and API” is beside the point.
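As a minimal sketch of that “entirely local” point (my own toy example, not Twilio’s or Fowler’s code; the service names and ports are invented): two single-purpose services, each running as its own process on the same machine, composed over a lightweight localhost mechanism.

    // GreetingService.java - one single-purpose, low-footprint service (toy example)
    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    public class GreetingService {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", 8081), 0);
            server.createContext("/greet", exchange -> {
                byte[] body = "hello from greeting-service".getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
            });
            server.start(); // keeps running in its own process; independently deployable
        }
    }

    // FrontendService.java - a second local process composing the first over localhost HTTP
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class FrontendService {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create("http://127.0.0.1:8081/greet")).build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("frontend saw: " + response.body());
        }
    }

Nothing about that structure requires any particular hosting: the style is about small, independently deployable services built around business capabilities, not about where they run.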
The problem Twilio actually describes is that they messed up service granularity and distributed systems engineering processes.
Twilio's experience was not a failure of the microservice architectural style. This was a failure to correctly define service boundaries based on business capabilities.
Their struggles with serialization, network hops, and complex queueing were symptoms of building a distributed monolith, which they finally made explicit with this move. So they accidentally built a system with the overhead of distribution but the tight coupling of a single application. Now they are making their architectural foundations fit what they built, likely because they planned it poorly.
The true lesson is that correctly applying microservices requires insanely hard domain modeling and iteration and meticulous attention to the "Distributed Systems Premium."
Just because he says something does not mean Fowler “formalized the term”. Martin wrote about every topic under the sun, and he loved renaming and/or redefining things to fit his world view, and incidentally drive people not just to his blog but also to his consultancy, Thoughtworks.
PS The “single application” line shows how dated Fowler’s views were then and certainly are today.
I've been developing under that understanding since before Fowler-said-so. His take is simply a description of a phenomenon predating the moniker of microservices. SOA with things like CORBA, WSDL, UDDI, Java services in app servers etc. was a take on service oriented architectures that had many problems.
Anyone who has ever developed in a Java codebase with "Service" and then "ServiceImpl"s everywhere can see the lineage of that model. Services were supposed to be the API, and the implementation provided in a separate process container. Microservices signalled a time when SOA without Java as a prerequisite had been successful in large tech companies. They had reached the point of needing even more granular breakout and a reduction of reliance on Java. HTTP interfaces were an enabler of that. 2010s-era microservices people never understood the basics, and many don't even know what they're criticizing.
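For anyone who never lived through it, a hedged sketch of the layering being described (names like OrderService/OrderServiceImpl are illustrative, not from any particular codebase): the interface was the published service contract, and the implementation was meant to live in a separately deployed process container.

    // OrderService.java - the "Service" interface: the published API contract
    public interface OrderService {
        String findOrder(String orderId);
    }

    // OrderServiceImpl.java - the "ServiceImpl": the implementation, historically deployed
    // into a separate process container (an app server), reached via CORBA, RMI, or SOAP/WSDL
    public class OrderServiceImpl implements OrderService {
        @Override
        public String findOrder(String orderId) {
            // In the app-server era this would typically hit a shared database
            // or delegate to yet another remote service.
            return "order:" + orderId;
        }
    }

Split that pairing across process boundaries with an HTTP interface instead of Java-specific plumbing and you have, more or less, the lineage described above.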
That is an interesting metric but I think it is not that important.
I would be careful with (AI-generated) code that no one on the team understands well. If that kind of code is put into production, it might become a source of dragging technical debt that no one is able to properly address.
In my opinion, putting AI-generated code into production is okay, as long as it has been reviewed and there is a human who can understand it well, debug it, and fix it if needed.
Or, alternatively, if it is throwaway code that does not need to be understood well and no one cares about its quality or maintainability because it will not need to be maintained in the first place.
Toothbrush UX is the same today as it was when we were hunter-gatherers: use an abrasive tool to ablate plaque from the teeth and gums without removing enamel.
As somebody who's tried using a miswak [0] teeth-cleaning twig out of curiosity, I can say with confidence it's not the same experience as using a modern toothbrush. It's capable of cleaning your teeth effectively, but it's slower and more difficult than a modern toothbrush. The angle of the bristles makes a huge difference. When the bristles face forward like with a teeth-cleaning twig your lips get in the way a lot more. Sideways bristles are easier to use.
That’s just not what user experience means, two products having the same start and end state doesn’t mean the user experience is the same. Imagine two tools, one a CLI and one a GUI, which both let you do the same thing. Would you say that they by definition have the same user experience?
If you drew both brushing processes as a UML diagram the variance would be trivial
Now compare that variance to the variance in machine and computing UX options,
and you’ll see clearly that one (toothbrushing) is less than one standard deviation different in steps and components for the median use case, while the other (computing) has nearly infinite variance (no stable stdev) between median use case steps and components.
The fact that the latter state space manifold is available but the action space is constrained inside a local minimum is an indictment of the capacity for action space traversal by humans.
This is reflected again in what is a point action space (physically ablate plaque with an abrasive) within the possible state space of teeth cleaning, for example: chemical-only/non-ablative, replace teeth entirely every month, remove teeth and eat paste, etc…
So yes I collapsed that complexity into calling it “UX” which classically can be described via UML
I would almost define "experience" as that which can't be described by UML.
Ask any person to go and find a stick and use it to brush their teeth, and then ask if that "experience" was the same as using their toothbrush. Invoking UML is absurd.
You know, some of us old-timers still remember a time before people just totally abandoned the concept of having functional definitions and ISO standards and things like that.
Funny how we haven’t done anything on the scale of the Hoover Dam, Three Gorges, the ISS, etc. since those got thrown away.
User Experience also means something specific in information theory, and UML is designed to model that explicitly:
Notably, the terms "UX" and "experience" are not present in that document. UI and UX are different things. UX is a newer concept that is more based on observing users and their emotional reactions to using the product.
UML and functional definitions and ISO standards are still important, it's just not UX.
Good luck never observing users using your product. Not everything is a space shuttle, recall that we are talking about toothbrushes here.
The computer form factor hasn’t changed since the mainframe: look into a screen for where to give input, select visual icons via a pointer, type text via keyboard into a text entry box, hit an action button, receive the result, repeat
it’s just all gotten miniaturized
Humans have outright rejected all other possible computer form factors presented to them to date including:
Purely NLP with no screen
head worn augmented reality
contact lenses
head worn virtual reality
implanted touch sensors
etc…
Every other possible form factor gets shit on, on this website and in every other technology newspaper.
This is despite almost a century of attempts at doing all those and making zero progress in sustained consumer penetration.
Had people liked those form factors, they would’ve invested in them early on, such that they would have developed the same way laptops and iPads and iPhones and desktops have evolved.
However, nobody was even interested at any type of scale in the early days of AR, for example.
I have a litany of augmented and virtual reality devices scattered around my home and work that are incredibly compelling technology - but are totally seen as straight up dogshit from the consumer perspective.
Like everything, it’s not a machine problem, it’s a people-and-society problem.
> Purely NLP with no screen
Cumbersome and slow with horrible failure recovery. Great if it works, huge pain in the ass if it doesn't. Useless for any visual task.
> head worn augmented reality
Completely useless if what you're doing doesn't involve "augmenting reality" (editing a text document), which probably describes most tasks that the average person is using a computer for.
> contact lenses
Effectively impossible to use for some portion of the population.
> head worn virtual reality
Completely isolates you from your surroundings (most people don't like that) and difficult to use for people who wear glasses. Nevermind that currently they're heavy, expensive, and not particularly portable.
> implanted sensors
That's going to be a very hard sell for the vast majority of people. Also pretty useless for what most people want to do with computers.
The reason these different form factors haven't caught on is because they're pretty shit right now and not even useful to most people.
The standard desktop environment isn't perfect, but it's good and versatile enough for what most people need to do with a computer.
And most computers were entirely shit in the 1950s
yet here we are today
You must’ve missed the point: people invested in desktop computers when they were shitty vacuum tubes that blew up.
That still hasn’t happened for any other user experience or interface.
> it's good and versatile enough for what most people need to do with a computer
Exactly correct! Like I said, it’s a limitation of human society; the capabilities and expectations of regular people are so low and diffuse that there is not enough collective intelligence to manage a complex interface that would measurably improve your abilities.
Said another way, it’s the same as if a baby could never “graduate” from Duplo blocks to Lego because Lego blocks are too complicated.
Since mainframes, you say. Well, sonny, when I first learned programming on a mainframe, we had punch cards and fan-fold printouts. Nothing beats that, eh?
That sentence smells like AI writing, so who knows what the author actually thinks. (As usual, the other major "tell" is the superfluous section headers of the form "The [awkward noun phrase]"...) I mention this because it affects how trustworthy I find the article, combined with other aspects of this situation; and because it is very easy to ask an AI to generate this kind of post.
I'm more curious how/why the author ended up with a $500 gift card. That's a large amount, and the author never shares how this was obtained, which seems like a key missing detail. Did the author buy the gift card for himself (why?) or did someone give him a very large gift (why not mention that?)
The author lives in Australia. You get points from the supermarket for purchasing certain gift cards during promotions; it's around 10% of the card value.
Gift cards are associated with money laundering and many online scams. I would guess any usage of them (especially in larger denominations) would attract increased attention and additional risk. That's nonsensical of course (why does Apple sell them if it is also suspicious of them?), but I would guess that if he had paid with a credit card there would have been no issue.
If you receive them as a gift, use them only in a situation unconnected with your cloud ID, such as to pay for new hardware at an Apple store.
> I'm more curious how/why the author ended up with a $500 gift card. That's a large amount, and the author never shares how this was obtained, which seems like a key missing detail. Did the author buy the gift card for himself (why?) or did someone give him a very large gift (why not mention that?)
The author mentions a big store (and describes it as similar to Walmart for US-based readers).
I would assume this was an accepted form of "return a product without a receipt" or "we want to accept your complaint about this product we sold going crazy one day after its warranty expired, but we cannot give you cash back", etc.
I don't understand. Gift cards typically cannot be returned, at least in the US. And the author said the gift card was redeemed "to pay for my 6TB iCloud+ storage plan", which also cannot be returned I'd imagine.
But gift cards aren't supposed to work that way, right? If it wasn't "legal" or "okay" to have a 500-dollar card, they shouldn't be sold. They are available, therefore they should be perfectly usable.
I don't want to speculate more, but one of the use cases for them is for people that choose to not use cards online (or even don't have credit cards at all) to be able to buy digital goods with cash.
Either way, if we're questioning buying/using the gift card, we're blaming the victim
I'm not blaming anyone; I just find it surprising that this detail wasn't mentioned or explained. Its omission makes the article less trustworthy to me.
People are fast to pull out pitchforks in response to outrage-bait posts like this, but (generally speaking) a nontrivial percentage of such posts are intentionally omitting details which can help explain the other side's actions.
Also I genuinely wasn't familiar with this specific use-case for gift cards. At least in the US, you can buy general-purpose prepaid debit cards for this type of thing instead, or use various services which generate virtual cards e.g. privacy.com. To me that seems infinitely more normal than buying a large-value "gift card" for yourself, but I'm admittedly not familiar with the options in other countries.
1. Prepaid Visa or Mastercard cards come with an extra fee (like 5-6 dollars per card, if I recall correctly?)
2. I didn't see prepaid cards in stores outside the US, so they are probably not that popular elsewhere.
Sometimes you also want to shift your spending, like if you spend 500 USD this month at this store, you'll get some good % cashback. So you end up buying a gift card that you know you'll definitely use next month.
privacy.com, even if it were available in a given country, just means you hand the transactions tied to your identity to some other company. Cash (and thus gift cards, where they don't accept cash) is the most private way.
AI used em-dashes initially in that type of sentence structure, but more recently moved to a mix of semicolons and commas, at least from what I've been seeing.
I never claimed the author doesn't exist.
$500 is objectively a large amount for a gift card. Off-the-shelf gift cards with predetermined amounts are almost always substantially less than this.
LLMs were trained on books like the ones written by the author, which is why AI writing "smells" like professional writing. The reason that AI is notorious for using em dashes, for example, is that professional authors use em dashes, whereas amateur writers tend not to use em dashes.
It's becoming absurd that we're now accusing professional writers of being AI.
I didn't mention em dashes anywhere in my comment!
If this isn't AI writing, why say "The “New Account” Trap" and then further sub-headers "The Legal Catch", "The Technical Trap", "The Developer Risk"... I have done a lot of copyreading in my life, and humans simply didn't write this way prior to recent years.
The relevance is that it affects whether or not the article's claims are trustworthy, when combined with some other details here. It is very easy to ask AI to generate a grievance post, for whatever motivation. This is why I mentioned it in combination with the question of how/why exactly the gift card was obtained.
There's the further detail of multiple commenters here saying their various contacts at Apple all cannot solve this particular case, which seems odd.
Now that said, given the OP is a published author, it's more likely he is trustworthy on that basis, but personally I still get a "something doesn't add up here" vibe from all this. Entirely likely I'm wrong though, who knows.
I don’t think you even know what you’re arguing about anymore. You claimed that what the author wrote wasn’t what the author thinks. As evidence you provided weak arguments about other parts of it being AI written and made an appeal to your own authority. It doesn’t matter if AI wrote that line, he wrote it, a ghost writer wrote it or a billion monkeys wrote it. He published it as his own work and you can act as if he thinks it even if you don’t otherwise trust him or the article.
Ah, I see the confusion, you're still focusing entirely on this one "this isn't just x; it's y" line. I was mostly talking about the piece as a whole, for pretty much everything other than the first sentence of my first comment above. Sincere apologies if I didn't state that clearly.
> humans simply didn’t write this way prior to recent years.
Aren’t LLMs evidence that humans did write this way? They’re literally trained to copy humans on vast swaths of human written content. What evidence do you have to back up your claim?
Decades of reading experience of blog posts and newspaper articles. They simply never contained this many section headers or bolded phrases after bullet points, and especially not of the "The [awkward noun phrase]" format heavily favored by LLMs.
So what would explain why AI writes a certain way, when there is no mechanism for it, and when the way AI works is to favor what humans do? LLM training includes many more writing samples than you’ve ever seen. Maybe you have a biased sample, or maybe you’re misremembering? The article’s style is called an outline, we were taught in school to write the way the author did.
Why did LLMs add tons of emoji to everything for a while, and then dial back on it more recently?
The problem is they were trained on everything, yet the common style for a blog post previously differed from the common style of a technical book, which differed from the common style of a throwaway Reddit post, etc.
There's a weird baseline assumption of AI outputting "good" or "professional" style, but this simply isn't the case. Good writing doesn't repeat the same basic phrasing for every section header, and insert tons of unnecessary headers in the first place.
Yes, training data is a plausible answer to your own question there, as well as mine above. And that explanation does not support your claims that AI is writing differently than humans, it only suggests training sets vary.
Repeating your thesis three times in slightly different words was taught in school. Using outline style and headings to make your points clear was taught in school. People have been writing like this for a long time.
If your argument depends on your subjective idea of “good writing”, that may explain why you think AI & blog styles are changing; they are changing. That still doesn’t suggest that LLMs veer from what they see.
All that aside, as other people have mentioned already, whether someone is using AI is irrelevant, and believing you can detect it and accusing people of using AI is quickly becoming a lazy trope, often incorrect to boot.
LLMs learned from human writing. They might amplify the frequency of some particular affectations, but they didn't come up with those affectations themselves. They write like that because some people write like that.
Those are different levels of abstraction. LLMs can say false things, but the overall structure and style is, at this point, generally correct (if repetitive/boring at times). Same with image gen. They can get the general structure and vibe pretty well, but inspecting the individual "facts" like number of fingers may reveal problems.
That seems like a straw man. Image generation matches style quite well. LLM hallucination conjures untrue statements while still matching the training data style and word choices.
My point was that generative AI may output certain things at a vastly different rate than it appears in the training data. It's a different phenomenon than hallucination. In the case of this example: the training data has fingers, just not at the same exact frequency as the output.
> AI may output certain things at a vastly different rate than it appears in the training data
That’s a subjective statement, but generally speaking, not true. If it were, LLMs would produce unintelligible text & images. The way neural networks function is fundamentally to produce data that is statistically similar to the training data. Context, prompts, and training data are what drive the style. Whatever trends you believe you’re seeing in AI can be explained by context, prompts, and training data, and isn’t an inherent part of AI.
Extra fingers are known as hallucination, so if it’s a different phenomenon, then nobody knows what you’re talking about, and you are saying your analogy to fingers doesn’t work. In the case of images, the tokens are pixels, while in the case of LLMs, the tokens are approximately syllables. Finger hallucinations are lack of larger structural understanding, but they statistically mimic the inputs and are not examples of frequency differences.
Heuristics are nice but must be reviewed when confronted with actual counterexamples.
If this is a published author known to write books before LLMs, why automatically decide "humans don't write like this"? He's human and he does write like this!
Most of those section headers and bolded bullet-point summary phrases should simply be removed. That's why I described them as superfluous.
In cases where it makes sense to divide an article into sections, the phrasing should be varied so that they aren't mostly of the same format ("The Blahbity Blah", in the case of what AI commonly spews out).
This is fairly basic writing advice!
To be clear, I'm not accusing his books as being written like this or using AI. I'm simply responding to the writing style of this article. For me, it reduces the trustworthiness of the claims in the article, especially combined with the key missing detail of why/how exactly such a large gift card was being purchased.
> To be clear, I'm not accusing his books as being written like this or using AI. I'm simply responding to the writing style of this article.
It's unlikely that the article had the benefit of professional, external editing, unlike the books. Moreover, it's likely that this article was written in a relatively short amount of time, so maybe give the author a break that it's not formatted the way you would prefer if you were copyediting? I think you're just nitpicking here. It's a blog post, not a book.
It's a difference of opinion and that's fine. But I'll just say, notice how those 3 previous articles you linked don't contain "The Blahbity Blah" style headers throughout, while this article has nine occurrences of them.
> notice how those 3 previous articles you linked don't contain "The Blahbity Blah" style headers throughout, while this article has nine occurrences of them.
The post https://hey.paris/posts/cba/ has five bold "And..." headers, which is even worse than "The..." headers.
Would AI do that? The more plausible explanation is that the writer just has a somewhat annoying blogging style, or lack of style.
To me those "And..." headers read as intentional repetition to drive home a point. That isn't bad writing in my opinion. Notice each header varies the syntax/phrasing there. They aren't like "And [adjective] [noun]".
We're clearly not going to agree here, but I just ask that as you read various articles over the next few weeks, please pay attention to headers especially of the form "The ___ Trap", "The ___ Problem", "The ___ Solution".
> I just ask that as you read various articles over the next few weeks, please pay attention to headers especially of the form "The ___ Trap", "The ___ Problem", "The ___ Solution".
No, I'm going to try very hard to forget that I ever engaged in this discussion. I think your evidence is minimal at best, your argument self-contradictory at worst. The issue is not even whether you and I agree but whether it's justifiable to make a public accusation of AI authorship. Unless there's an open-and-shut case—which is definitely not the case here—it's best to err on the side of not making such accusations, and I think this approach is recommended by the HN guidelines.
I would also note that your empirical claim is inaccurate. A number of the headers are just "The [noun]". In fact, there's a correspondence between the headers and subheaders, where the subheaders follow the pattern of the main header:
> The Situation • The Trigger • The Consequence • The Damage
> The "New Account" Trap • The Legal Catch • The Technical Trap • The Developer Risk
This correspondence could be considered evidence of intention, a human mind behind the words, perhaps even a clever mind.
By the way, the liberal use of headers and subheaders may feel superfluous to you, but it's reminiscent of textbook writing, which is the author's specialty.
My original comment wasn't just about AI, so please don't make it out like a throwaway "AI bad" argument.
As for the section headers, my general point was that AI output includes an excessive number of these, and they are often generally of the form "The [noun phrase]". Many times there's an adjective in there, but not always. If you think this is good writing then you're welcome to your opinion, but most writing instructors feel otherwise.
Textbooks don't contain section headers every few paragraphs.
> please don't make it out like a throwaway "AI bad" argument.
The issue isn't whether AI is good or bad or neither or both. The issue is whether the author used AI or not. And you were actually the one who suggested that the author's alleged use of AI made the article less trustworthy. The only reason you mentioned it was to malign the author; you would never say, for example, "The author obviously used a spellchecker, which affects how trustworthy I find the article."
> If you think this is good writing then you're welcome to your opinion
I didn't say it's good writing. To the contrary, I said, "the writer just has a somewhat annoying blogging style, or lack of style."
The debate was never about the author's style but rather about the author's identity, i.e., human or machine.
> Textbooks don't contain section headers every few paragraphs.
Of course they do. I just pulled some off my shelves to look.
I said it affects how trustworthy I find the article, when considered in combination with other aspects of this situation that don't add up to me.
After going through my technical bookshelf I can't find a single example that follows this header/bullet style. And meanwhile I have seen countless posts that are known to be AI-assisted which do.
Apparently we exist in different realities, and are never going to agree on this, so there is no point in discussing further.
My writing from 5+ years ago was accused of being AI generated by laymen because I used Markdown, emojis and dared to use headers for different sections in my articles.
It's kind of weird realizing you write like generic ChatGPT. I've felt the need to put human errors, less markup, etc into stuff I write now.
Did you even read the article? "The only recent activity on my account was a recent attempt to redeem a $500 Apple Gift Card to pay for my 6TB iCloud+ storage plan" A 6TB plan is $29.99 monthly. It's not far-fetched to assume he purchased a $500 gift card so he could keep the subscription without worrying about it!
"The card was purchased from a major brick-and-mortar retailer (Australians, think Woolworths scale; Americans, think Walmart scale)" There's not much of a reason to assume someone else unaffiliated with the author bought this card, he mentions talking to the vendor and getting a replacement which means he has the receipt
Yes, I read the article and it simply does not directly address who purchased the card.
It certainly implies the author bought the card for himself, yes; but that seems rather unusual to me, especially in such a high amount.
Why would you purchase a $500 gift card for yourself to "keep a subscription without worrying about it" as opposed to just paying the small monthly amount? Honest question, I literally don't understand that motivation at all. In my mind a gift card is more problematic than a normal credit card in this scenario since it eventually runs out.
Second question: why did you create an HN account just to write this comment?
> Why would you purchase a $500 gift card for yourself to "keep a subscription without worrying about it" as opposed to just paying the small monthly amount? Honest question, I literally don't understand that motivation at all. In my mind a gift card is more problematic than a normal credit card in this scenario since it eventually runs out.
Aside from the promotional bonuses that other users have mentioned, if you have an Apple Family Sharing group you can only use a single credit card, tied to the main account, for any payments to Apple, but individual accounts will draw down from their Apple Account balance before using that credit card - so gift cards let individuals pay for their own Apple things (subscriptions or otherwise).
I wonder if you can prepay using a card? But otherwise, to answer your potential question, I understand OP, as I like to prepay things like my phone operator. I put 500 USD there and come back one year later. This way it frees up my limit of 10 virtual cards, and most of all, I can keep their limits as close as possible to the minimum. If you have a mix of services on the same card, it is much more difficult and more risky. If you have 100 USD + 50 USD + 25 USD + 75 USD + 60 USD in monthly spend, then you have 310 USD at risk, when your risk could be way lower.
Did you read the comment you're responding to? Where in the article does it explain why an adult is buying a $500 gift card to pay their apple subscription instead of just paying for it directly?
“Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that". ” --https://news.ycombinator.com/newsguidelines.html