
I don't think SSDs allowing rapid swapping is as big a deal as SSDs being really fast at serving files. On a typical system, pre-SSD, you wanted gobs of RAM to make it fast - not only for your actual application use, but also for the page cache. You wanted that glacial spinning rust to be touched once for any page you'd be using frequently because the access times were so awful.

Now, with SSDs, it's a lot cheaper and faster to read disk, and especially with NVMe, you don't have to read things sequentially. You just "throw the spaghetti at the wall" with regards to all the blocks you want, and it services them. So you don't need nearly as much page cache to have "teh snappy" in your responsiveness.

We've also added compressed RAM to all major OSes (Windows has it, macOS has it, and Linux at least normally ships with zswap built as a module, though not enabled by default). So that further improves RAM efficiency - part of the reason I can use 64-bit 4GB ARM boxes is that zswap does a very good job of keeping swap off the disk.
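The intuition behind compressed RAM is easy to demo. A hedged sketch: zswap itself uses in-kernel compressors like lzo or zstd, so zlib here is just a stand-in to show that typical page-sized data compresses well.

```python
import zlib

# Illustration of why compressed swap (zswap/zram) stretches RAM: much
# application memory is quite compressible. zswap actually uses kernel
# compressors (lzo, zstd, ...); zlib is only a stand-in for the idea.
page = (b"user-interface state, mostly repetitive structures " * 100)[:4096]
compressed = zlib.compress(page)
ratio = len(page) / len(compressed)
print(f"4 KiB page -> {len(compressed)} bytes (ratio ~{ratio:.1f}x)")
```

Pages that compress like this never need to touch the disk at all, which is the whole trick.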

We're using RAM more efficiently than we used to be, and that's helped keep "usable amounts" somewhat stable.

Don't worry, though. Electron apps have heard the complaint and are coming for all the RAM you have! It's shocking just how much less RAM something like ncspot (curses/terminal client for Spotify) uses than the official app...



> especially with NVMe, you don't have to read things sequentially

NVMe is actually not any better than SATA SSDs at random/low-queue-depth IO. The latency-per-request is about the same for the flash memory itself and that's really the dominant factor in purely random requests.

Of course pretty soon NVMe will be used for DirectStorage, so it'll be preferable in terms of CPU load/game smoothness, but just in terms of raw random access, SSDs really haven't improved in over a decade at this point. Which is what was so attractive about Optane/3D XPoint... it was the first improvement in disk latency in a really long time, and that makes a huge difference in tons of workloads, especially consumer workloads. The 280/480GB Optane SSDs were great.

But yeah you're right that paging and compression and other tricks have let us get more out of the same amount of RAM. Browsers just need to keep one window and a couple tabs open, and they'll page out if they see you launch a game, etc, so as long as one single application doesn't need more than 16GB it's fine.

Also, games are really the most intensive single thing that anyone will do. Browsers are a bunch of granular tabs that can be paged out a piece at a time, whereas you can't really do that with a game. And games are limited by what's being done with consoles... consoles have stayed around the 16GB mark for total system RAM for a long time now too. So the "single largest task" hasn't increased much, and we're much better at doing paging for the granular stuff.


Latency may be similar but:

1. Pretty sure I/O queue depth is as high as OSes can make it, so low depth only happens on a mostly idle system.

2. Throughput of NVMe is roughly 10x higher than SATA's. So in terms of “time to read the whole file” or “time to complete all I/O requests”, it is also meaningfully better from that perspective.
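The throughput point is easy to put in numbers. A back-of-the-envelope sketch, with illustrative figures (roughly SATA's ~550 MB/s interface ceiling vs a fast PCIe 4.0 NVMe drive, not any specific product):

```python
# "Time to read the whole file" at sequential throughput.
# 550 MB/s ~ SATA ceiling; 7000 MB/s ~ fast PCIe 4.0 NVMe (illustrative).
file_mb = 2048  # a 2 GB asset

for name, mb_per_s in [("SATA", 550), ("NVMe", 7000)]:
    print(f"{name}: {file_mb / mb_per_s:.2f} s")  # SATA: 3.72 s, NVMe: 0.29 s
```

So even with identical per-request latency, draining a large queue of requests finishes an order of magnitude sooner on NVMe.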


> NVMe is actually not any better than SATA SSDs at random/low-queue-depth IO.

The fastest NVMe SSD[0] on UserBenchmark appears to be a fair bit faster at 4k random read compared to the fastest SATA SSD[1].

75 MB/s vs 41.9 MB/s avg random 4k read.

1,419 MB/s vs 431 MB/s avg deep-queue 4k read.

Edit: This comment has been edited; originally I was comparing a flash SATA SSD vs an Optane NVMe drive, which wasn't a fair comparison.

[0]: https://ssd.userbenchmark.com/SpeedTest/1311638/Samsung-SSD-...

[1]: https://ssd.userbenchmark.com/SpeedTest/1463967/Samsung-SSD-...


That might be with a higher queue depth, though, like 4K random QD=4 or something. I don't see "QD=1" or similar anywhere there, and that's a fairly high result if it was really QD=1.

It's true that NVMe does better with a higher queue depth, but consumer workloads tend to be QD=1 (you don't start the next access until this one has finished), and that's the pathological case due to the inherent latency of flash access. Flash is pretty bad at those scenarios whether SATA or NVMe.

https://images.anandtech.com/graphs/graph11953/burst-rr.png

https://www.anandtech.com/show/11953/the-intel-optane-ssd-90...

So eh, I suppose it's true that NVMe is at least a little better in random 4K QD=1, a 960 Pro is 59.8 MB/s vs 38.8 for the 850 Pro (although note that's only a 256GB drive, which often don't have all their flash lanes populated and a 1TB or 2TB might be faster). But it's not really night-and-day better, they're still both quite slow. In contrast Optane can push 420-510 MB/s in pure 4K Random QD=1.
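The QD=1 access pattern being discussed can be sketched in a few lines. This is only an illustration of the pattern, not a real benchmark: proper tools bypass the page cache with O_DIRECT (which is platform-specific), so this sketch will mostly hit cache and flatter the drive enormously.

```python
import os
import random
import tempfile
import time

# QD=1 random-read sketch: each 4k read waits for the previous one to
# finish, so average latency per request, not bus bandwidth, sets the
# MB/s you observe. Mostly served from page cache here, so the absolute
# number is meaningless; only the serialized access pattern is the point.
BLOCK, BLOCKS = 4096, 1024

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * BLOCKS))
    path = f.name

with open(path, "rb") as f:
    start = time.perf_counter()
    for _ in range(BLOCKS):
        f.seek(random.randrange(BLOCKS) * BLOCK)  # random block offset
        data = f.read(BLOCK)                      # next read waits on this one
    elapsed = time.perf_counter() - start

os.unlink(path)
print(f"~{BLOCK * BLOCKS / elapsed / 1e6:.0f} MB/s at QD=1 (cached)")
```

With every request serialized like this, throughput is just block size divided by per-request latency, which is why flash drives all land in the same low tens of MB/s at true QD=1.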


Also people forget that the jump from 8-bit to 16-bit doubled address size, and 16 to 32 did it again, and 32 to 64, again. But each time, the percentage of "active memory" that was used by addresses dropped.

And I feel the operating systems have gotten better at paging out large portions of these stupid electron apps, but that may just be wishful thinking.
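The claim about address overhead shrinking follows from simple arithmetic: a pointer's size grows linearly with address width, but the memory it can address grows exponentially. A quick sketch:

```python
# A pointer costs bits/8 bytes, but can address 2**bits bytes, so each
# doubling of the width makes a pointer a smaller *fraction* of the
# addressable space.
for bits in (16, 32, 64):
    pointer_bytes = bits // 8
    addressable = 2 ** bits
    print(f"{bits}-bit: {pointer_bytes}-byte pointer, "
          f"1/{addressable // pointer_bytes} of addressable memory")
```

A 16-bit pointer is 1/32768 of its 64 KB space; a 64-bit pointer is a vanishingly small 1/2^61 of its space.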


Memory addresses were never 8 bits. Some early hobbyist machines might have had only 256 bytes of RAM present, but the address space was always larger.


Yeah, the 8-bit machines I used had a 16-bit address space. For example, from my vague/limited Z80 memories, most of the 8-bit registers were paired - so if you wanted a 16-bit address, you used the pair. Too lazy to look it up, but with the Z80 I seem to remember about 7 8-bit registers, and that allowed 3 pairs that could handle a 16-bit value.
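The pairing trick is just concatenating two bytes. A sketch of how an 8-bit CPU forms a 16-bit address from a register pair (e.g. the Z80's H and L combining into HL, high byte shifted up):

```python
def pair(high: int, low: int) -> int:
    """Combine two 8-bit registers into one 16-bit value (e.g. Z80 HL)."""
    return (high << 8) | low

assert pair(0x12, 0x34) == 0x1234
assert pair(0xFF, 0xFF) == 0xFFFF  # top of the 64 KB address space
```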


Even the Intel 4004, widely regarded as the first commercial microprocessor, had a 12-bit address space.


This got me thinking, and I went digging even further into historic mainframes. These rarely used eight-bit bytes, so calculating memory size on them is a little funny. But all had more than 256 bytes.

Whirlwind I (1951): 2048 16-bit words, so 4k bytes. This was the first digital computer to use core memory (and the first to operate on more than one bit at a time).

EDVAC (designed in 1944): 1024 44-bit words, so about 5.6k.

ENIAC (designed in 1943): No memory at all, at least not like we think of it.

So there you go. All but the earliest digital computer used an address space greater than eight bits wide. I'm sure there are some microcontrollers and similar that have only eight-bit address spaces, but general-purpose machines seem to have started at 12 bits and gone up from there.


The ENIAC was upgraded to be a stored-program computer after a while, and eventually had 100 words of core memory.


I actually have 100GB of RAM in my desktop machine! It's great, but my usage is pretty niche. I use it as drive space to hold large ML datasets for super fast access.

I think for most use cases ssd is fast enough though.
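Using RAM as drive space for datasets is straightforward on Linux, where /dev/shm is a tmpfs mount, so anything written there lives in RAM. A hedged sketch (the path and fallback are assumptions; a dedicated tmpfs mount sized for the dataset is also common):

```python
import os
import tempfile

# Stage a dataset shard in RAM-backed storage: /dev/shm is tmpfs on
# Linux, so reads come from memory; elsewhere fall back to the normal
# temp dir so the sketch still runs.
ram_dir = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
path = os.path.join(ram_dir, "dataset_shard.bin")

with open(path, "wb") as f:
    f.write(os.urandom(1 << 20))  # 1 MiB stand-in for a dataset shard

print(f"staged {os.path.getsize(path)} bytes in {ram_dir}")
os.remove(path)
```

Any framework that reads files by path then gets RAM-speed access without knowing anything special is going on.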


From the song:

> It does all my work without me even askin'

It sounds like Weird Al had the same use case!



