HP 9825A Programmable Calculator
The "Blind Programmable Calculator" project, around 1976, sponsored by the Sensory Aids Foundation and NASA
was an attempt to radically reduce the cost of access to technology to the blind.
At the time (and still true today), braille printers were very costly, yet they were the
primary means of access. The idea was to use realtime adaptive speech synthesis
to allow applications to "speak" visual interactions, and to do so without altering
the computing environment in any way. In other words, a gadget that could snap onto
a computer (in this case an HP 9825A programmable calculator) and intuitively talk the screen/keyboard interface.
The point was to decouple the sensory aid from the base technology, so that with the
rapid change of technology it wouldn't immediately go obsolete, tied to a specific
kind of computing environment. Also, since the scheme could serve other applications
that interact with computers (say, a pilot in rough weather who can't even
read the screen), it would gain volume manufacturing and come down in price through
economies of scale. Clearly a very ambitious project, done on a shoestring.
Contributions of equipment, borrowing, and ingenuity assembled a DEC LSI-11 as the
"microcontroller", a VOTRAX VS-6 speech synthesizer, and an HP 9825 Programmable
Calculator (think of it as the packaged personal computer of its day), with extensive
help from both DEC (boards, advice) and HP (the internal architecture of the product).
At the time, I had no idea how unusual this was, since DEC and HP were both
aggressive competitors - it must have been very hard for them to do this.
The basic scheme was this - special logic allowed the controller to grab information
off of the personal computer's bus (I used a bus extender cable taken from a chassis
extension product, and wired it to logic that attached to a DRV-11W parallel
interface card). By snooping the bus, a passive interface to the display and keyboard
logic was obtained without involving the personal computer's software at all.
The controller ran a coroutine-based kernel, built out as needed, that parsed in
real time, translated text to speech, and worked the synthesizer through a serial interface.
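In outline (and only in outline - this is a reconstruction, with illustrative names
and a canned string standing in for real bus traffic), the kernel was a round-robin
loop handing characters between tasks through small ring buffers:

    /* Sketch of a coroutine-style kernel: snoop -> parse -> speak,
     * each a task that yields when its buffer is empty or full.
     * All names and the canned input are illustrative assumptions. */
    #include <stdio.h>
    #include <ctype.h>

    #define RBSIZE 64
    struct ring { char buf[RBSIZE]; int head, tail; };

    static int rb_put(struct ring *r, char c) {
        int next = (r->head + 1) % RBSIZE;
        if (next == r->tail) return 0;          /* full: caller yields */
        r->buf[r->head] = c; r->head = next; return 1;
    }
    static int rb_get(struct ring *r, char *c) {
        if (r->tail == r->head) return 0;       /* empty: caller yields */
        *c = r->buf[r->tail]; r->tail = (r->tail + 1) % RBSIZE; return 1;
    }

    static struct ring snooped, phonemes;
    static const char *bus = "PRT \"HI\"\n";    /* stands in for snooped bus data */

    static void snoop_task(void) {              /* grab characters off the "bus" */
        if (*bus && rb_put(&snooped, *bus)) bus++;
    }
    static void parse_task(void) {              /* the real parser did far more */
        char c;
        if (rb_get(&snooped, &c)) rb_put(&phonemes, (char)tolower((unsigned char)c));
    }
    static void speak_task(void) {              /* would feed the synthesizer's serial port */
        char c;
        if (rb_get(&phonemes, &c)) putchar(c);
    }

    int main(void) {
        void (*task[])(void) = { snoop_task, parse_task, speak_task };
        for (int tick = 0; tick < 1000; tick++) /* the round-robin "kernel" */
            task[tick % 3]();
        return 0;
    }

Because every task gives up control when its buffer is empty or full, the loop
never blocks - about the cheapest way to get real-time behavior out of a small machine.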
Programs were written in C and assembler on UNIX; the remote host serial line was then switched
from the terminal to the LSI-11, where the remote host would download a bootstrap over the line,
and then the program. The controller would then be switched over to the terminal, and
the program could be run. Just assembling the parts, building a 20 Amp linear power supply
(a switching supply was neither donated nor affordable!), and getting the software framework
to run took many months. Debugging the electronics and interface issues took months more.
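The boot path itself was simple in outline: bootstrap first, then the program.
Rendered in C for readability (the real bootstrap was a handful of PDP-11 words,
and the length-prefixed framing here is my assumption, not the original protocol),
the receiving end amounted to something like:

    /* Toy rendering of the download receiver: read a 16-bit length,
     * then that many bytes into memory.  On the LSI-11 the bootstrap
     * would then jump to the load address; here we just report. */
    #include <stdio.h>

    static int serial_get(FILE *line) { return fgetc(line); }  /* stands in for the UART */

    int main(void) {
        FILE *line = stdin;                     /* the switched-over host serial line */
        unsigned char image[4096];
        int len = serial_get(line);             /* low byte first... */
        len |= serial_get(line) << 8;           /* ...then high byte */
        for (int i = 0; i < len && i < (int)sizeof image; i++)
            image[i] = (unsigned char)serial_get(line);
        printf("loaded %d bytes\n", len);
        return 0;
    }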
interesting part was the dynamic parser, which could figure out just from context clues
what application was running and adapt to speak, spell, translate, or rephrase display/keyboard
interactions as needed. This part was done in a rushed week.
The deep idea here was to model interaction on a "just the facts, ma'am!" basis.
What did the blind user need to know when the personal computer reacted to input or other stimulus?
The different modes of interaction summarized operation cogently. An example - when using the
HLL interpreter, typing "p", "r", "t" for the "prt" keyword would be spelled out until
recognized as a term, then pronounced as "print". Similarly, other syntactical and semantic
expressions would be verbally assembled, with the parsing rules triggering appropriate "subvocal" routines.
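That "spell until recognized" behavior is easy to show in miniature. A toy sketch
(the keyword table and the speak() stub are illustrative, not the original code):

    /* Spell keystrokes letter by letter; once the letters complete a
     * known HPL keyword, pronounce the word and clear the buffer. */
    #include <stdio.h>
    #include <string.h>

    static const struct { const char *typed, *spoken; } keyword[] = {
        { "prt", "print" },
        { "ent", "enter" },
        { "gto", "go to" },
    };

    static char pending[16];

    static void speak(const char *s) { printf("[%s] ", s); }  /* stub synthesizer */

    static void key(char c) {
        size_t n = strlen(pending);
        if (n + 1 >= sizeof pending) n = 0;          /* overflow: just restart */
        pending[n] = c; pending[n + 1] = '\0';
        speak((char[]){ c, '\0' });                  /* spell the letter */
        for (size_t i = 0; i < sizeof keyword / sizeof keyword[0]; i++)
            if (strcmp(pending, keyword[i].typed) == 0) {
                speak(keyword[i].spoken);            /* recognized: say the word */
                pending[0] = '\0';
            }
    }

    int main(void) {
        key('p'); key('r'); key('t');   /* -> [p] [r] [t] [print] */
        putchar('\n');
        return 0;
    }

Each keystroke is spelled as it lands; the moment the buffer completes a known
keyword, the whole word is pronounced and the buffer cleared.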
C programs, then unusual, were cross-compiled on the NASA ASRS PDP-11 UNIX system located
in Life Sciences, and downloaded over the SYSTRAN/ADTRAN RS-232 serial network
strung around Ames in those pre-networking days. Discovered UNIX and the ARPAnet, and got an
account at MIT (wfj@mit-ai, a nice PDP-10 running ITS). Involvement with the brilliant
researchers and hackers of the day pushed this project further than it could otherwise have reached.
When researching text to speech, I attempted to obtain a copy of a program done by
Doug McIlroy at Bell Labs Murray Hill. When I talked with him on the phone, he said
"Sure, send a check for $20,000". Shocked, I asked why: "It comes with a PDP-11 operating
system, which lets you compile and run it". It turned out I never used the program, but
I did end up using UNIX and C extensively.
Got to know many people on the early ARPAnet, joined popular mailing lists like the ones on
wine and science fiction, and had close encounters with EMACS and Richard Stallman. Nothing
like getting a message across the screen like "hold on, going to patch in a new UUO, may go
away", and realizing that someone was modifying a part of the supervisor program in real time,
testing it with an embedded debugger, and returning to normal operation in 10-15 minutes. The
shell was called HACTRN, the assembler was MIDAS, and almost everything had something to
do with LISP in one form or another.
Vint Cerf remarked that the synthesizers of this era, like the Votrax, sounded like "drunken Swedes".
Often they were mocked, and some thought they would never amount to much.
Of course, children's toys have now had excellent speech synthesis for years. Chuck Jackson
was responsible for involving me with the Votrax; he was experimenting with its use
in General Aviation, for which I wrote a speech compiler/editor as a preliminary tool toward
a text-to-speech mechanism. Tom Carrell, later at IU, helped with the psychoacoustic modeling
to improve intelligibility with syntactical cues.
Part of the humor was that Votrax was a product of the "Federal Screw Works" Corporation - occasionally
boxes would arrive with felt pen adornments like "This is Ames!" or worse ...
The blind sensory aid paper was written, given at the conference, and may be found in
the proceedings of Jim Warren's 1977 West Coast Computer Faire, under
the title "The Design of a Voice Output Adapter for Computer". The synopsis reads:
"The design of a Voice Output Adapter for visually handicapped computer programmers
is discussed. This device, based on a DEC LSI-11 microcomputer and a VOTRAX VS-6 synthesizer,
will generate speech from ordinary typed text. Phonetic translation is accomplished by
a set of rules, instead of a dictionary. Although this device uses a VOTRAX synthesizer,
and experimentation has been limited to English, the device will act independently
of language or synthesizer type. All software will reside in the main memory;
no peripheral memory will be used. This results in a compact device."
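For a sense of what "a set of rules, instead of a dictionary" means in practice,
here is a toy letter-to-sound pass. The handful of rules and the ARPAbet-style codes
are stand-ins (rule sets of this kind ran to hundreds of entries, and the actual
device emitted VOTRAX phoneme codes):

    /* First matching rule wins; '.' in a rule's right context matches
     * any one letter.  A real rule table handles context far more carefully. */
    #include <stdio.h>
    #include <string.h>

    struct rule { const char *match, *right, *phonemes; };

    static const struct rule rules[] = {
        { "a", ".e", "EY" },   /* long a before consonant + e, as in "made" */
        { "a", "",   "AE" },   /* otherwise short a */
        { "e", "",   ""   },   /* silent e (far too crude for real text) */
        { "d", "",   "D"  },
        { "m", "",   "M"  },
    };

    static int rmatch(const char *s, const char *pat) {
        for (; *pat; pat++, s++)
            if (*s == '\0' || (*pat != '.' && *pat != *s)) return 0;
        return 1;
    }

    int main(void) {
        const char *w = "made";
        while (*w) {
            size_t i, n = sizeof rules / sizeof rules[0];
            for (i = 0; i < n; i++) {
                size_t m = strlen(rules[i].match);
                if (strncmp(w, rules[i].match, m) == 0 &&
                    rmatch(w + m, rules[i].right)) {
                    if (*rules[i].phonemes) printf("%s ", rules[i].phonemes);
                    w += m;
                    break;
                }
            }
            if (i == n) w++;            /* no rule for this letter: skip it */
        }
        putchar('\n');                  /* prints: M EY D */
        return 0;
    }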