This sets a bad precedent. I'm a software contractor from the EU, but I don't usually work for EU clients, and I don't want to see the trade wars spill over into contracting/dev work - but I fully expect it given current trends.
What value proposition, exactly? If you're comparing it to laptops of similar build quality, you're looking at the price of two devices versus one with HW upgrades. And you can't even compare it to a premium device.
And worst of all, you can only upgrade to what they have available - you can't get a Strix Halo inside that thing, and that's the only scenario that would make sense for me: enthusiast-level hardware support.
A large % of their revenue comes from the App Store/services, and they have every incentive to lock you into the ecosystem, sell you digital shit, and take a cut of everything.
I saw an ad for Apple's gaming service in my iPhone system settings recently!
That's not to say that Google isn't worse, but let's not pretend Apple is some saint here or that their incentives are perfectly aligned with users'. Hardware growth has peaked; they will be forced to milk you on services to keep revenue growing.
Personally I'm looking forward to the Steam Deck. If SteamOS gets annoying, it's a PC built for Linux - there's going to be something else available.
True. The best option currently is to buy an Nvidia Shield TV, unlock the bootloader and install a custom Android ROM. The hardware is great, and with a custom ROM you get more freedom than Apple TV will ever give you.
The comment about the ad wasn't about the ad itself. It was an Apple ad for an Apple service, so they didn't make any money on it at all. The remark was about the service Apple was pushing, and just how intrusively they were pushing it.
But the comment OP was replying to was about their ad services and what incentive the company has to operate in good faith rather than risk impacting the sales that make up the majority of their business.
Correct, and they didn't sell your data to do it. I'm okay with that. If I trust Apple with basically my whole life stored on their phone and in their cloud, and with processing payments for me, and filtering my email, and spoofing my MAC address on networks (and, and, and), it seems foolish to be worried about them knowing what TV shows I like to watch at night too. At least to me. It's gonna be a sad day when Tim leaves and user privacy isn't a company focus anymore.
Services are 25% of revenue and the only segment that is growing / can grow - that means all focus is going to be on expanding that revenue = enshittification.
Hardware is now purely a way to get you onto the App Store - which is why iOS is so locked down and the iPad has a MacBook-level processor running a toy OS.
If you stop looking at the marketing speak and look at it from a shareholder's perspective, all the user-hostile moves Apple doublespeaks into "security" and "UX" actually make a lot more sense.
Hardware is still 3x the revenue of services, and though it has a lower margin it is the bulk of the company's profit. Apple was 3% of the PC market in 2010 and is 10% today, while Android is 75% of the global cellphone market - there's plenty of room for growth in hardware... if you stop looking at the marketing speak, whatever that means.
I don't see how this really changes the underlying problem: the device spies on you and then they sell that information to the highest bidder. I'm not reaching for a financial report to fix that.
Apple doesn't sell information; they sell access to eyeballs. Quite a big difference. The whole point of the original comment was that ad revenue isn't worth Apple hurting the other parts of their business built around privacy. Pointing out that Apple shows ads for its own services within its own OS isn't a counterexample.
Apple absolutely does allow wholesale data harvesting by turning a blind eye to apps that straight up embed spyware SDKs.
This isn't some hypothetical or abstract scenario; it's a real-life, multi-billion-dollar-a-year industry that Apple allows on their devices.
You can argue that this is not the same thing as the native ad platform they run, and I'd agree, but it's also a distinction without a meaningful difference.
All you've done is move the goal posts, and it's not even ads related. I'm not entirely certain what you're arguing, other than having some feelings about Apple.
Like another comment mentioned, I'm ready to go back to torrenting. I'm currently paying for 4 streaming subscriptions (if you count YouTube Premium), with super-segmented and annoying search UX, and Apple won't even let me pay for their service in my EU country (Croatia). And the DRM story is ridiculous. I'll just set up an Arr stack and have a better experience than I can pay for - for free.
A Jellyfin + Arr stack would take a couple of hours to set up and cost $10/month for a seedbox in Europe, but it's not as convenient as downloading an app and logging in.
If it was just one app or even two I would agree, but there are:
- Netflix
- HBO max
- Sky Showtime
- Amazon Prime
- Apple TV+
- Disney+
This is just the stuff I watched this year.
Add in all the region locks, plus services not having rights to local dubs even when those dubs exist (mostly an issue for children's content, but still relevant - Disney+ is unusable for me because of this).
Netflix used to have a catalog worth keeping the subscription for; nowadays I maybe watch something once a quarter and keep it on for kids' stuff.
Streaming is not convenient anymore, it's a shitshow.
I think a Jellyfin/Arr/seedbox setup is going to be the solution this year.
>so preoccupied with whether or not they could, they didn't stop to think if they should
This describes more than half of .NET community packages and patterns. So much stuff driven by chasing the "oh, that's clever" high - forgetting that clever code is miserable to support and maintain in prod. That's true even when it's your own code, but when it's third-party libs it's just asking for weekend debugging sessions and all-nighters two months past the initial delivery date. At some point you just get too old for that shit.
That's just a familiarity thing. I've worked on projects doing full web FE, mobile and BE.
It's hard to generalize, but modern frontend is very good at isolating you from complex state-machine states, and you're dealing with a single user / limited concurrency. It's usually easy to find all the references/use cases for something.
Most modern backend is building consistent distributed state machines: you need to cover all the edge cases, deal with concurrency, handle different clients/contracts, etc. I would say getting BE right (beyond simple CRUD) is going to be hard for an LLM simply because the context is usually wider and harder to compress/isolate.
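To make "deal with concurrency" concrete, here's a toy sketch (the in-memory store, names and numbers are made up for illustration) of the classic lost-update edge case that backend code has to handle and frontend code mostly never sees:

```python
# Two clients read the same record, both write back a change, and without a
# guard the second write silently clobbers the first ("lost update").
# Optimistic concurrency (a version check) turns that into an explicit conflict.
from dataclasses import dataclass


@dataclass
class Record:
    balance: int
    version: int = 0


class ConflictError(Exception):
    pass


STORE = {"acct-1": Record(balance=100)}  # stand-in for a real database


def read(key: str) -> tuple[int, int]:
    rec = STORE[key]
    return rec.balance, rec.version


def write(key: str, new_balance: int, expected_version: int) -> None:
    rec = STORE[key]
    if rec.version != expected_version:  # someone else wrote in between
        raise ConflictError(f"version is {rec.version}, expected {expected_version}")
    STORE[key] = Record(balance=new_balance, version=expected_version + 1)


# Two "clients" read the same state...
bal_a, ver_a = read("acct-1")
bal_b, ver_b = read("acct-1")

write("acct-1", bal_a - 30, ver_a)      # client A succeeds
try:
    write("acct-1", bal_b - 50, ver_b)  # client B must retry, not clobber A's write
except ConflictError as exc:
    print("client B got a conflict:", exc)
```

Multiply that by retries, timeouts, and multiple services holding copies of the same state, and you get the kind of wide context that's hard to compress.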
>Most modern backend is building consistent distributed state machines: you need to cover all the edge cases, deal with concurrency, handle different clients/contracts, etc. I would say getting BE right (beyond simple CRUD) is going to be hard for an LLM simply because the context is usually wider and harder to compress/isolate.
Seeing the kind of complexity that agents (not standalone LLMs) are able to navigate, I'm starting to believe it's just a matter of time before they can do all kinds of programming, including state-of-the-art backend work - even writing a database on their own. The good thing about backend is that it's easily testable, and if there's documentation a developer can read and comprehend, an LLM/agent will be able to do the same - and that's not very far from today.
Can they double the memory channels without switching sockets? If not, I feel like the PC is going to fall even further behind Apple chips. Having RAM on package sucks for repairability, but 500 GB/s of main RAM bandwidth is insane.
They stumbled in the right direction with Strix Halo, but I have a feeling they won't recognize the win or follow up on it.
The "insane" RAM bandwidth makes sense on Apple M chips and Strix Halo because it's actually "crap" VRAM bandwidth for the GPU. What makes those chips nice is the quantity of memory the GPU has (even though it's slow), not that the CPU has tons of RAM bandwidth.
When you go to the desktop, it becomes harder to justify beefed-up memory controllers just for the CPU versus putting that cost toward some other part of the CPU that has more impact on performance.
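Some back-of-the-envelope math makes the trade-off concrete (the transfer rates below are ballpark figures for typical configurations, not exact SKUs):

```python
# Theoretical peak memory bandwidth = (bus width in bytes) * (transfer rate in MT/s).
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: int) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and data rate."""
    return bus_width_bits / 8 * transfer_rate_mts / 1000


configs = {
    "Desktop dual-channel DDR5-6000 (128-bit)": (128, 6000),
    "Strix Halo, LPDDR5X-8000 (256-bit)": (256, 8000),
    "Apple M4 Max, LPDDR5X-8533 (512-bit)": (512, 8533),
}

for name, (width, rate) in configs.items():
    print(f"{name}: ~{peak_bandwidth_gbs(width, rate):.0f} GB/s")
# Roughly ~96, ~256 and ~546 GB/s respectively - the headline numbers come from
# the wide bus the integrated GPU needs, not from anything the CPU cores can use.
```

Desktop sockets sit at the narrow end of that table, which is exactly why widening the bus only gets justified once there's an iGPU to feed.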
Yeah, the only use of the large bandwidth in Apple Silicon is for the GPU.
I'm always amazed by the fanboys who keep hyping this trope.
Even when feeding all cores, the max bandwidth used by the CPU is less than 200 GB/s; in fact it's quite comparable to Intel/AMD CPUs and even less than their high-end ones (x86 still rules on the multi-core front in any case).
I actually see this as a weakness of Apple Silicon, because it doesn't scale that well. It's basically the problem with their Ultra chip: it doesn't allow doubling the compute and doesn't allow faster RAM bandwidth; you only get higher RAM capacity in exchange for slower GPU compute.
They just scaled up their mobile architecture, and it has its limits.
No, but they can skip the socket, much like many of the mini-PCs/SFF builds that put laptop chips in small desktops. Strix Halo already doubled the memory channels, and the next gen is supposedly going to widen the memory bus from 256 bits to 384.
The socket I/O locks in the number of memory channels. Some pins could be repurposed, but that's pretty much a new socket anyway.
They could in theory do on-package DRAM as a faster first-level memory, but I doubt we'll see that anytime soon on desktop, and it probably wouldn't fit under the heat spreader.
No, trying stuff out is the valuable process. How I search for information has changed (dramatically) over the 20 years I've been programming. My intuition about how programs work is still relevant - you'll still see graybeards saying "there's a paper from the 70s about that" for every "new" fad in programming, and they're usually right.
So if AI gets you iterating faster and testing your assumptions/hypotheses, I would say that's a net win. If you're just begging it to solve the problem for you with different wording, then yeah, you're reducing yourself to a shitty LLM proxy.
I constantly see top models (Opus 4.5, Gemini 3) have a stroke mid-task - they will solve the problem correctly in one place, or have a correct solution that just needs to be reapplied in context, and then completely miss the mark somewhere else. "Lack of intelligence" is very much a limiting factor. Gemini especially will get into random reasoning loops - reading its thinking traces, it gets unhinged pretty fast.
Not to mention it's super easy to gaslight these models: just assert something wrong with a vaguely plausible explanation and you'll get no pushback or reasoning validation.
So I know you qualified your post with "for your use case", but personally I would very much like more intelligence from LLMs.
I don't think the analogy holds up at all. A doctor usually has a very small time window to deal with your problem and then switches to the next patient.
If I'm working on your project I'm usually dedicated to it 8 hours a day for months.
I do agree this is not new; I've had clients with some development experience come up with off-the-cuff suggestions that just waste everyone's time and are really disrespectful (like, how bad at my job do you think I am if you think I didn't try the obvious approach you came up with while listening to the problem?). But AI is going to make this much worse.
I'd take the other side on most of these. The Nvidia one is too vague (some could argue it's already seeing "heavy competition" from Google and other players in the space), but to make it more concrete: I doubt they will fall below 50% market share.