Hacker News

If I remember correctly the 386 didn't have branch prediction, so as a thought experiment: how would a 386 built at today's design sizes (~9nm) fare against the other chips?


It would lose by a country mile: a 386 handles about one instruction every three or four clocks, while a modern desktop core can retire as many as four or five ops PER clock.

It's not just the lack of branch prediction, but also the primitive pipeline, no register renaming, and of course it's integer-only (the FPU was a separate 387 chip).

A Pentium Pro with modern design size would at least be on the same playing field as today's cores. Slower by far, but recognisably doing the same job - you could see traces of the P6 design in modern Intel CPUs until quite recently, in the same way as the Super Hornet has traces of predecessors going back to the 1950s F-5. The CPUs in most battery chargers and earbuds would run rings around a 386.


A 386 was a beast against a 286, a 16-bit CPU. It was the minimum to run Linux with 4MB of RAM, but a 486 with an FPU destroyed it, and not just in FP performance.

Bear in mind that with a 386 you can barely decode an MP2 file, while with a 486 DX you can play most MP3 files at least in mono, and maybe run Quake at the lowest settings if you own a 100 MHz one. A 166 MHz Pentium can at least multitask a little while playing your favourite songs.

Also, under Linux, a 386 would manage relatively well with just terminal and SVGAlib tools (now framebuffer) and 8MB of RAM. With a 486 and 16MB of RAM, you can run X at sane speeds, even FVWM in wireframe mode to avoid window repaints when moving/resizing them.

Next, TLS/SSL. With a 486 DX you can use dropbear/bearssl and even Dillo happily, with just a light lag upon handshaking, good enough for TLS 1.2. On a 486, a 30-35? year old CPU: IRC over TLS, SSH with RSA and similar methods, web browsing/Gemini under Dillo with TLS. Doable; I did it under a VM and it worked, even email and NNTP over TLS with a LibreSSL fork built against BearSSL.
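The handshake is the expensive part on CPUs that slow: a TLS 1.2 handshake is dominated by one RSA operation per side. As a rough, hedged illustration of that cost, using openssl's built-in benchmark as a stand-in for BearSSL (assumes openssl is installed and supports the -seconds option):

```shell
# Measure raw RSA-2048 throughput; a TLS 1.2 handshake costs roughly
# one private-key (sign) op on the server and one public-key (verify)
# op on the client. Divide these numbers by a few hundred to imagine
# a 486.
openssl speed -seconds 1 rsa2048
```

On a modern core this reports thousands of verifies per second; the "light lag upon handshaking" above is that same operation taking a second or two on a 486.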

With a 386, in order to keep your sanity, you can have plain HTTP, IRC and Gopher and plain email/Usenet. No MP3 audio, whereas with a 486 you could at least read news over Gopher (even today) while multitasking, if you forced yourself into a terminal environment (not as hard as it sounds).

If you emulate some old i440FX-based PC under Qemu, switching between the 386 and 486 with the -cpu flag gives the user clear results. Just set one up with the Cirrus VGA and 16MB and you'll understand upon firing up X.
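For reference, a minimal sketch of such an invocation (the disk image name is a placeholder, and the available -cpu model names vary by QEMU version; check `qemu-system-i386 -cpu help`):

```shell
# Emulate an i440FX PC with Cirrus VGA and 16MB of RAM; swap the
# -cpu model (e.g. 486 vs pentium) to compare generations.
qemu-system-i386 -M pc -cpu 486 -m 16 -vga cirrus -hda disk.img
```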

This is a great old distro for testing how well 386s and 486s behaved:

https://delicate-linux.net/


Yep, we had a few later-generation 486s in college. They would run Windows NT4 with full GUI - not especially well, but they'd run it. And they'd do SSL stuff adequately for the time.

ISTR the cheap "Pentium clones" at the time - Cyrix, early AMDs before the K5/K6 and Athlon - were basically souped-up 486 designs.

(As an aside - it's very noticeable how much innovation happened between a single generation of CPU architectures at that time, compared to today. Even if some of them were buggy or had performance regressions. 5x86 to K5 was a complete redesign, and the same again between K6 and K7).


I ran X and emacs and gcc on a 386DX with 5MB of RAM circa 1993, and while not pleasant it was workable. The upgrade to 16MB (that cost me £600!) made a big difference.


Ten years before that I saved up for ages and spent £25 on 16KB of RAM. I could have bought a house for the cost of 16MB. It's amazing how quickly it changed.


Both the RAM (for the better) and the house (for the worse).


ZX81 rampack, right?


Nearly, it was actually for a BBC Micro.


We can't be friends!


You could run Linux in 2MB of RAM with kernels before 1994 AFAIK, with the a.out binary format instead of ELF.

Nowadays I think it's still doable in theory, but the Linux kernel has some kind of hard-coded 4MB minimum (something to do with memory paging sizes).


Yep, but badly. Read the 4MB Laptop Howto. Nowadays if I had a Pentium/K5 laptop I'd just fit a 64 MB SIMM in it and keep everything TTY/framebuffer with NetBSD and most of the unneeded daemons disabled. For a 486: Delicate Linux, plus a custom build queue for bearssl with libressl on top (there's a fork out there), plus BearSSL-linked lynx, mutt, slrn, mpg123, libtls and hurl.
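For the NetBSD side, a hedged /etc/rc.conf sketch of "most daemons disabled" (variable names assume a stock install; trim to taste):

```shell
# /etc/rc.conf fragment for a TTY/framebuffer-only low-RAM box:
# keep remote access and networking, drop everything else.
sshd=YES
dhcpcd=YES
ntpd=NO
postfix=NO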


All you need is enough memory for 'swapon', hehe. Glorious days of swap space on the floppy, haha.
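A hedged sketch of what that looked like (assumes the first floppy drive is /dev/fd0; needs root):

```shell
# Format a 1.44MB floppy as swap and enable it: ~1.4MB of
# agonizingly slow, but real, swap space.
mkswap /dev/fd0
swapon /dev/fd0
# Confirm it took effect:
cat /proc/swaps
```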


Why is ELF so much slower and/or more memory hungry than a.out on Linux?


Relocation information, primarily.

ELF supports loading a shared library to some arbitrary memory address and fixing up references to symbols in that library accordingly, including dynamically after load time with dlopen(3).

a.out did not support this. The executable format doesn't have relocation entries, which means every address in the binary was fixed at link time. Shared libraries were supported by maintaining a table of statically-assigned, non-overlapping address spaces, and at link time resolving external references to those fixed addresses.

Loading is faster and simpler when all you do is copy sections into memory then jump to the start address.


I did some multitasking recently on my iDX4-100 + 64MB FPM. I used NT4 with SP2 because the full SP6 was much slower. I could have a browser open, PuTTY, and some tracker music playing no problem. :)


Having to manually decompress .MP3 -> .WAV in the early days of online music piracy, just so you could play it, at the expense of most of your HDD space disappearing.
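The same trick still works with a tool that survives today, mpg123's WAV writer (filenames are placeholders). The disk cost is easy to see: a 4-minute 128 kbps MP3 is under 4MB, while the decoded 44.1kHz 16-bit stereo WAV is around 40MB, roughly ten times larger.

```shell
# Decode the MP3 to WAV once, up front, so a slow CPU only has to
# stream raw samples at playback time.
mpg123 -w song.wav song.mp3
```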


If you had a CD burner you didn't keep the WAV files wasting disk space...


In those days though the blank CDs were $20+ each and took two hours to burn and had a high failure rate. I paid my way through college burning discs full of "warez".


Well, in the MP3 days (from ~1999) CD prices plummeted and quality got much better.

And yet disk sizes weren't that big (and tons of people still had less than 10GB).


>A 386 was a beast against a 286

The 386, both SX and DX, runs 16-bit code at roughly the same clock-for-clock speed as the 286. The 286 topped out at 25MHz, Intel's 386 at 33MHz. Now add the fact that early Intel chips had broken 32-bit support and it's not so beastly after all :)

In one of the Computer History Museum videos, someone from Intel mentioned they managed to cost-reduce the 386SX so hard it cost Intel $5 out the door; the rest of the initial 1988 price of $219 was a pure money printer. Only in 1992 did Intel finally calm down, the i386SX-25 going from $184 in Q1 1990 to $59 in Q4 1992 after losing the AMD Am386 lawsuit, and only to screw with AMD by relegating its Am386DX-40, a $231 flagship in Q2 1991, to the title of $51 bottom feeder by Q1 1993.


Presumably it would be much smaller. A similar but different thought experiment: fill a 14th-gen-sized die with 386s running in parallel.
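Back-of-the-envelope: the original 386 used about 275,000 transistors, and a modern desktop die is very roughly on the order of 20 billion (the exact figure varies widely by product, so treat this as an assumption). That budget buys tens of thousands of 386 cores' worth of logic:

```shell
# transistor budget of one modern die divided by one 386's budget
echo $(( 20000000000 / 275000 ))   # prints 72727
```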


If you continue that thought experiment, you'd very quickly run into the issue that the way the 386 interfaces with memory is hopelessly primitive and not a good match for running thousands of cores in parallel.

A large reason why out of order speculative execution is needed for performance is to deal with the memory latencies that appear in such a system.


That gets you close to Larrabee/Xeon Phi, although that was Pentium-based, with amd64 and a vector engine added; later products were Atom-derived.


Modern CPUs are more or less built around the memory hierarchy, so it would be really hard to compare the two: a 386 on a modern process might be able to run at the same clock speed or even faster, but with only a few kB of memory available. As soon as you connect a large memory it will spend most of its time idling (and then of course there is the problem of power dissipation density).


While there were also cheap motherboards with an 80386SX and no cache memory, most motherboards for the 80386DX had a write-through cache, typically either 32 kB or 64 kB.

By the time of 80486, motherboard cache sizes had increased to the range of 128 to 256 kB, while 80486 also had an internal cache of 8 kB (much later increased to 16 kB in 80486DX4, at a time when Pentium already existed).

So except for the lower-end motherboards, a memory hierarchy already existed in 80386-based computers, because DRAM was already not fast enough.



