## What Can a Computer Do?

Have you ever asked yourself that question? Some people write code for years and it never occurs to them. If you're a computer scientist, you're probably more apt to ask, "What is the set of all classes of problems solvable by a computer?" and "What is the set of all classes of problems unsolvable by a computer?" I'm not going to answer those. Instead, I'm going to talk about the fundamental capabilities of all computers.

This will be a little depressing, because the list is so small. You'd think that with the sheer volume of software sold, this list would be bigger, but I'm sorry, it's not.

A Computer Can:
1. Read a finite-ranged value from memory
2. Write a finite-ranged value to memory
3. Simulate arithmetic on finite-ranged values
4. Based on the results of 1, 2, and 3 (or nothing at all), change the current flow of execution
5. Activate, deactivate, or control peripheral devices

I'm pretty sure that's it.

Now, there's a lot to be said for having variations on those five items, but given the right set of tools, that's really all you need.

Now, let's step back in time and consider one of the first computers: The Manchester Mark I.  The hardware consisted of the following (note - this is the model that was presented to me, and represents an early version of the machine):

- 3 registers
- Memory built from Williams Tubes

The registers were:

- A program counter (PC) for locating the next instruction
- An accumulator for performing math
- A status register (more on this later)

The instruction set was:
1. LNEG address - move the negation of the contents of address into the accumulator (capabilities 1, 3)
2. STORE address - store the contents of the accumulator into address (capability 2)
3. SUB address - subtract the contents of address from the accumulator (capability 3)
4. SKNEG - skip the next instruction if the result of the last operation was negative (capability 4)
5. BREL value - branch by adding value to the PC (capability 4)
6. NOP - do nothing (capability 4)
7. HALT - stop (capability 4)

That's it (the final instruction set was much more extensive), but not only is it enough, you don't strictly need BREL, NOP, or HALT. The numbers in parentheses correspond to my classification of capabilities above.

The status register had two bits in it: the negative bit and the halt bit. If the result of an LNEG or a SUB was negative, the negative bit was set (and cleared otherwise). SKNEG just looked at this bit and, if it was set, advanced the PC past the next instruction.
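The semantics above are small enough to capture as an interpreter. Here's a minimal sketch in Python; the tuple encoding, function names, and step limit are my own additions for illustration, not anything from the original machine.

```python
def run(program, memory, max_steps=10_000):
    """Interpret a program for the seven-instruction machine sketched above.

    program: list of (opcode, operand) tuples; memory: list of integers.
    Returns (accumulator, memory) when HALT executes.
    """
    acc, pc, neg = 0, 0, False
    for _ in range(max_steps):
        op, arg = program[pc]
        pc += 1
        if op == "LNEG":            # acc = -mem[arg], sets the negative bit
            acc = -memory[arg]
            neg = acc < 0
        elif op == "STORE":         # mem[arg] = acc
            memory[arg] = acc
        elif op == "SUB":           # acc -= mem[arg], sets the negative bit
            acc -= memory[arg]
            neg = acc < 0
        elif op == "SKNEG":         # skip the next instruction if negative
            if neg:
                pc += 1
        elif op == "BREL":          # relative branch: add arg to the PC
            pc += arg
        elif op == "NOP":           # do nothing
            pass
        elif op == "HALT":
            return acc, memory
    raise RuntimeError("step limit exceeded")

# Usage: addition built from negation and subtraction, a + b == a - (-b).
# Memory layout: [a, b, scratch, scratch].
prog = [
    ("LNEG", 1),   # acc = -b
    ("STORE", 2),  # mem[2] = -b
    ("LNEG", 0),   # acc = -a
    ("STORE", 3),  # mem[3] = -a
    ("LNEG", 3),   # acc = a
    ("SUB", 2),    # acc = a - (-b) = a + b
    ("HALT", None),
]
acc, _ = run(prog, [5, 7, 0, 0])   # acc ends up as 12
```

Note that the only way to load a value un-negated is to negate it twice through a scratch cell, which is why even simple addition takes six instructions here.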

Now, after being presented with this instruction set and architecture, my class was offered a prize of \$1US to write the shortest correct implementation of factorial.  All data and code counted in the total length, but we were given three "free" data constants (0, 1, and -1).

Problem 1 is that factorial requires multiplication, so you have to write it.  Problem 2 is that multiplication requires addition, so you have to write that too.
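That chain of reductions can be sketched in Python, with `neg` and `sub` standing in for what LNEG and SUB give you and everything else layered on top. The helper names are mine; on the real machine the counting loops themselves would also have to be built from SUB, SKNEG, and a backward branch.

```python
def neg(x):           # what LNEG provides
    return -x

def sub(a, b):        # what SUB provides
    return a - b

def add(a, b):        # a + b == a - (-b)
    return sub(a, neg(b))

def mul(a, b):        # repeated addition (assumes b >= 0)
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

def factorial(n):     # repeated multiplication (assumes n >= 0)
    result = 1
    for i in range(2, n + 1):
        result = mul(result, i)
    return result
```

So `factorial(5)` bottoms out as nothing but negations and subtractions, which is exactly why the contest program was painful to shrink.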

Since this problem is non-trivial for this computer, I wrote an optimizing assembler for the instruction set (a half day of work) and then did the assignment.  I also noted that 0 and 1 are valid instruction codes which I needed, so two of my "free" constants were also code, meaning my program was that much smaller.  I lost the contest by 1 instruction (Damn you Erik Erikson!), but Erik and I combined our efforts to shave off another two instructions from his effort.

Given the above instructions, I can simulate the instruction set of any processor today.  It might be painful or slow or both, but it still can be done.  That means that, strictly speaking, your brand new Pentium processor is no more powerful than this little gem.

So why is it important to know this stuff?  If you are in the process of designing a language, it's helpful to know how your language features will map onto a "real" computer.  This is nice because sometimes subtle changes in a language grammar represent big wins or losses in the actual implementation.  This has been one of the big wins for C as a medium/high-level language - it translates nicely to what processors typically do.

Published Tuesday, February 28, 2006 9:11 AM by Steve Hawley