> How would you set up a child's first Linux computer?
Preloaded with tons of stuff my kid might find cool (that depends on his or her interests, which nobody knows better than I do), and with internet access completely disabled if the kid will be using it without my supervision.
The EdgeRouter series got a major 3.0 update in August 2025, just a few months ago. It took them a while, but it was released after a series of release candidates.
Not familiar with demoscene needs, but might the A2DVI card be of help? It seems to support all the standard video modes, at least. Alternatively, there's also the VidHD.
der8auer and Buildzoid on YouTube made nice, informative videos on the subject, and no, it is not "simply a user error". So glad I went with 7900 XTX - should be all set for a couple of years.
Summary of the Buildzoid video courtesy of redditors in r/hardware:
> TL;DW: The 3090 had 3 shunt resistors set up in a way which distributed the power load evenly among the 6 power-bearing conductors. That's why there were no reports of melted 3090s. The 4090/5090 modified the engineering for whatever reason, perhaps to save on manufacturing costs, and the shunt resistors no longer distribute the power load. Therefore, it's possible for 1 conductor to bear way more power than the rest, and that's how it melts.
> The only reason the problem was considered "fixed" (not really, it wasn't) on the 4090 is that apparently, in order to skew the load enough to generate enough heat to melt the connector, you'd need the plug to not be properly seated. However, at 600W, as seen in der8auer's video, all it takes is one or two cables making slightly better contact than the rest to take up most of the load and, as measured by him, reach 23A.
- Most 12VHPWR connectors are rated for 9.5-10A per pin. 600W / 12V / 6 pin pairs = 8.33A per pin. The spec requires a 10% safety factor, i.e. 9.17A.
- 12VHPWR connectors are compatible with 18 AWG or at best 16 AWG cables. For 90°C-rated single-core copper wire I've seen maximum allowed amperages of at most 14A for 18 AWG and 18A for 16 AWG, and less in most sources. Near the connectors those wires are bundled so closely that they can't be treated as single free-air conductors for heat-dissipation purposes.
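For concreteness, here's that math as a quick Python sketch, plus a toy current-divider model of the skew der8auer measured. The contact resistances are illustrative guesses, not measurements:

```python
# Per-pin current on a 12VHPWR connector: the nominal even split, and a toy
# current-divider showing how small contact-resistance differences skew it.
# The contact resistances below are illustrative guesses, not measured values.
POWER_W, VOLTS, PINS = 600.0, 12.0, 6

total_current = POWER_W / VOLTS                                     # 50 A
print(f"Even split: {total_current / PINS:.2f} A/pin")              # 8.33 A
print(f"With 10% margin: {total_current / PINS * 1.1:.2f} A/pin")   # 9.17 A

# Parallel paths share current in inverse proportion to resistance, so one
# slightly better contact takes a disproportionate share of the 50 A.
contact_mohm = [6.0, 6.0, 6.0, 6.0, 6.0, 1.5]   # five "ok" pins, one very good one
conductance = [1 / r for r in contact_mohm]
per_pin = [total_current * g / sum(conductance) for g in conductance]
print(["%.1f A" % i for i in per_pin])          # the low-resistance pin ends up near ~22 A
```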
Honestly, at 50A of current we should be using connectors that screw firmly into place and use a single wiping contact or a single solid-conductor pin. Multi-pin connectors will always inherently have issues with imbalanced power delivery. With extremely slim engineering margins this is basically asking for disaster. I stand by what I've said elsewhere: if I were an insurance company, I'd issue a notice that fires caused by this connector will not be covered by any issued policy, as it does not satisfy reasonable engineering margins.
edit: replaced power with current... we're talking amps not watts
I was wondering if this was a thing - in RC quads and the like we use these massive bullet connectors (XT30/60/90 and similar) which often have lower resistances than the wires themselves.
Yeah, they need soldering at the wire/connector interface, but presumably there are similarly designed connectors with crimp terminals, or it's just something the manufacturers will have to deal with.
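For a sense of scale, here's a rough I²R comparison at the 50A discussed above, using ballpark figures (roughly 0.5 mΩ for an XT60-class bullet contact and ~5 mΩ per metre for 12 AWG copper; both are assumptions, not measurements):

```python
# Ballpark I^2*R dissipation at 50 A. Resistance figures are assumed
# typical-datasheet / wire-gauge ballparks, not measurements.
def watts(current_a, resistance_ohm):
    return current_a ** 2 * resistance_ohm

I = 50.0
r_contact = 0.0005            # ~0.5 mOhm per XT60-class bullet contact (assumed)
r_12awg_per_m = 0.0052        # ~5.2 mOhm per metre of 12 AWG copper (assumed)

print(f"One bullet contact:   {watts(I, r_contact):.1f} W")             # ~1.3 W
print(f"0.5 m of 12 AWG lead: {watts(I, r_12awg_per_m * 0.5):.1f} W")   # ~6.5 W
```

Which lines up with the observation that the connector often isn't the dominant resistance in the path.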
Enjoying my 7900XTX as well. I really don't understand why Nvidia had to pivot to this obscure power connector. It's not like this is a mobile device where a compact connector actually matters - you plug the card in once and forget about it.
> So glad I went with 7900 XTX - should be all set for a couple of years.
Really depends on the use case. For gaming, normal office work, smaller AI/ML, or video work, yeah, it's fine. But if you want the RTX 5090 for the VRAM, then the 24GB of the 7900 XTX won't be enough.
Honestly, the smart play in that case is to buy two 3090s and connect them with NVLink. Or... and hear me out: at this point you could probably just invest your workstation build budget and use the dividends to pay for RunPod instances when you actually want to spin something up.
I'm sure there are some use cases for 32GB of VRAM, but most of the cutting-edge models that people are using day to day on local hardware fit in 12 or even 8GB of VRAM. It's been a while since I've seen anything bigger than 24GB but smaller than 70GB.
> most of the cutting-edge models that people are using day to day on local hardware fit in 12 or even 8GB of VRAM.
I'm not sure what your idea of "day to day" use cases is, but models that fit in 12GB of VRAM tend to be good for autocomplete and not much more. I can't even get those models to choose the right tool at the right time, let alone be moderately useful. Qwen2.5-32B seems to be the lower boundary of a useful local model; it'll at least use tools correctly. But for coding that's "novel" (to me), basically anything below O1 is more counterproductive than productive.
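As a rough rule of thumb - napkin math assuming a ~4-bit GGUF-style quant and ignoring KV cache and runtime overhead, so treat these as floors, not exact fits:

```python
# Napkin math for the VRAM needed just for an LLM's weights at a given
# quantization. Ignores KV cache and runtime overhead, so real usage is higher.
def weight_vram_gib(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1024**3

for name, params in [("7B", 7), ("14B", 14), ("32B", 32), ("70B", 70)]:
    q4 = weight_vram_gib(params, 4.5)    # ~Q4_K_M-style quant (assumed)
    fp16 = weight_vram_gib(params, 16)
    print(f"{name}: ~{q4:.1f} GiB at 4-bit, ~{fp16:.1f} GiB at FP16")
```

Which is roughly why a 32B model is about the practical ceiling for a 24GB card, and why 70B-class models need around 40GB even at 4-bit.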
Yeah, I expect my next card will be AMD. I'm happy with my 3080 for now, but the cards have nearly doubled in price in two generations and I'm not going to support that. I can't abide the prices or the insane power draw. I'm OK with not having DLSS.
It'll probably be fine for years, longer if you can stand looking at AI-generated, upscaled frames. The uplift in GPU power is so expensive that we might as well be back in the reign of the 1080. The only thing that'll move the needle will be a new console generation.
My 1080 has been running with the same configuration for years. The only thing I consider a downside is the lack of power for exploring AI locally, and for me AI isn't worth buying a $1234 video card.
It's true, but to be fair, by the hardware the 5080 is really more of a 70-series card by previous gens' standards. I was just thinking of the insane top end of the 90 series.
The numbers don't really mean anything and never have. The 5080 is faster than the 5070, which is faster than the 5060. That's all the number means. The performance gap between tiers isn't, and has never been, constant.
Everyone who cares about generational improvements has indeed compared performance between tiers. No, the gaps have never been constant, but the "best" generations clearly had better-segmented tiers compared to prior ones.
Adding "fake" frames to say the 5070 has the performance of the 4090 like Jensen, tells you even Nvidia do this comparison.
> Adding "fake" frames to say the 5070 has the performance of the 4090 like Jensen, tells you even Nvidia do this comparison.
You're missing the point. The only thing that makes a 5080 a 5080 is that that's what NVIDIA named it. Of course comparisons are going to be made, but it's meaningless to expect the numbers to correlate to some specific performance gain over the lower tier or the previous generation.
The lineup is different every year. The metric that matters is performance per dollar, not the name of the product.
The 80 tier isn't great value this generation. That doesn't make it really a 5070.
Probably reference cards, yeah? I think the common advice is to not buy the reference cards. They rarely cool well enough. I made that mistake with the RX 5700 XT and will never make it again.
Those had an issue with their heatpipe design which affected cooling performance depending on their orientation. I made sure to buy an AIB model that didn't suffer the same issue, just in case I want to put the card somewhere funky like a server rack.
It's too bad AMD will stop even aiming for that market. But also, I bought a Sapphire 7900 XTX knowing it'd be in my machine for at least half a decade.
People are acting like this is some long-term position. There's no evidence of that. AMD didn't give up on the high end permanently after the RX 480 / 580 generation.
What AMD does need right now is:
* Don't cannibalize advanced packaging (which a big RDNA4 part would have required) from high-margin, high-growth AI chips
* Focus on software features like upscaling tech (which is a big multiplier and allows midrange GPUs to punch far above their weight) and compute drivers (which they badly need to improve to have a real shot at taking AI chip marketshare)
* Focus on a few SKUs and execute as well as possible to build mindshare and a reputation for quality
"Big" consumer GPUs are increasingly pointless. The better upscaling gets, the less raw power you need, and 4K gaming is already passable (1440p gaming very good) on the current gen with no obvious market for going beyond that. Both Intel and Nvidia are independently suffering from this masturbatory obsession with "moar power" causing downstream issues. I'm glad AMD didn't go down that road personally.
If "midrange" RDNA4 is around the same strength as "high-end" RDNA3, but $300 cheaper and with much better ray tracing and an upscaling solution at least on par with DLSS 3, then that's a solid card that should sell well. Especially given how dumb the RTX 5080 looks from a value perspective.