The First Digital Camera You Could Buy Was A Total Hack

Digital cameras are very close to becoming standard equipment on cars, and I can't think of anyone who doesn't have at least one of them snuck into a phone, tablet, or laptop, not even counting actual digital cameras. The first digital camera you could buy, incredibly, didn't use some experimental image sensor. It used a quirk of computer memory chips.
That first camera was called the Cromemco Cyclops, and it came out in 1975. Compare that to one of the earliest mass-market digital cameras, the Apple QuickTake 100, which was available in 1994. So how did Cromemco manage this so early?
The Cyclops wasn't a small, portable camera; it was a pair of cards designed to plug into an S-100 bus computer, the prevailing standard for the infant personal computer industry at the time. The most common personal computer then available was the Altair (this even predates the Apple I). What makes such an early digital camera even more incredible is that most Altair hobbyists didn't even have screens on which to display any hypothetical digital images.
Digital cameras were being developed in 1975, but those used CCDs, the image sensors that served as the retinas of digital cameras for decades afterward. They were very expensive things, and many years away from getting into the Cheeto-stained hands of computer hobbyists in the mid-70s. So what kind of unholy black magic was Cromemco using to pull this off?
No black magic (for this, at least), just lots of cleverness. And a magazine article. See, Cromemco was formed to actually sell this camera, which was first described in a 1975 Popular Electronics article by Terry Walker, Harry Garland, and Roger Melen. The key innovation that made the camera possible was the realization that a common 1,024-bit MOS memory chip could be modified to act as an image sensor array.
The memory chip had a 32x32 grid of 1-bit memory cells, and those memory cells were photosensitive: they would lose their contents (like, say, a 1, the only content other than 0) when exposed to light. To keep this from happening, memory ICs had an opaque cover over the chip itself. Some memory chips, called EPROMs, had a transparent quartz window over the chip instead, to allow for erasing the contents with bright UV light.
The realization here was that if the opaque cover over the chip could be replaced with clear glass, and a lens added to focus an image on the 32x32 array, the memory chip could become an image sensor!
Even better, the intensity of the light affected how quickly the memory contents were lost. So, armed with this knowledge, the Cyclops worked by writing 1s into all 1,024 cells of the memory chip; then, while the chip was exposed to light, the computer would read it 15 times in quick succession. Each pass acted as a sort of greyscale level: if a cell flipped to 0 on the first pass, the light hitting it was brightest, and hence white. If it took all 15 passes to flip to 0, the light was dimmest, and that pixel read as black, with 13 levels of grey in between.
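To make that readout scheme concrete, here's a minimal sketch in Python. The 32x32 grid, the 15 passes, and the white-to-black mapping follow the description above; the decay model and all the names (capture, scene, and so on) are my own assumptions, purely for illustration.

```python
# Minimal simulation of the Cyclops' decay-based readout (illustrative only).
WIDTH, HEIGHT = 32, 32   # the chip is a 32x32 grid of 1-bit cells
PASSES = 15              # the computer reads the chip 15 times

def capture(scene):
    """Return a 32x32 image of greyscale levels 0..14.

    `scene` holds light intensities from 0.0 (dark) to 1.0 (bright).
    Level 0 means the cell flipped to 0 on the first pass (white);
    level 14 means it took all 15 passes (black).
    """
    # Step 1: write 1s into all 1,024 cells.
    cells = [[1] * WIDTH for _ in range(HEIGHT)]

    # Assumed decay model: brighter light flips the stored 1 on an earlier pass.
    decay_pass = [[PASSES - int(scene[y][x] * (PASSES - 1))
                   for x in range(WIDTH)] for y in range(HEIGHT)]

    image = [[PASSES - 1] * WIDTH for _ in range(HEIGHT)]

    # Step 2: read the chip 15 times in quick succession.
    for p in range(1, PASSES + 1):
        for y in range(HEIGHT):
            for x in range(WIDTH):
                if cells[y][x] == 1 and p >= decay_pass[y][x]:
                    cells[y][x] = 0       # the light has erased this bit
                    image[y][x] = p - 1   # earlier pass = brighter pixel
    return image

# A test scene that fades from bright on the left to dark on the right.
scene = [[1.0 - x / (WIDTH - 1) for x in range(WIDTH)] for _ in range(HEIGHT)]
img = capture(scene)
print(img[0][0], img[0][WIDTH - 1])   # prints: 0 14
```

Run on that left-to-right gradient, the bright left edge comes out as level 0 (white) and the dark right edge as level 14 (black), matching the scale described above.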
Now, 32x32 pixels isn't exactly Retina Display quality. In fact, it's the exact size of an individual icon on the first Macs. But, for something that uses a component in a way it was never intended to be used, that's pretty good.
Cromemco gave instructions for displaying the images on an oscilloscope, but soon realized that having an actual monitor that could show images would be nice. That led them to create the Dazzler, one of the first graphics cards, which eventually grew into the system that provided most local news weather displays throughout the 1980s.
Neat, right?
(Sources: Wikipedia, SWTPC)