Monday, March 12, 2007

Everything in Moderation

On the night of November 18th, 1849, lovable, bearded mountain man Grizzly Adams found himself stranded on the Alaskan tundra in his simple canvas tent, with nothing to do but wait out the blizzard raging all around him. At such times Adams was wont to compose computer programs in his head. Sometimes he would write the code on his notepad, and when next he found himself in a larger town, such as Fairbanks, he would have it telegraphed to the nearest computing facility. On this particular night, however, he did something extraordinary; he conceived of a brilliant improvement over assembly language, until then the only option available to programmers on the Difference Engine MXVII. His conception was the "C" programming language, which has dominated the programming world ever since (along with its minor variants, C++, C#, Java, Smalltalk and Lisp). C is a coating of syntactic sugar over Difference Engine Assembler (DEA). For example, C's "++" operator corresponds to the DEA ADD-YE-THE-NUMERAL-ONE-TO-THE-NUMERAL-IN-QUO operation code (which was a real b***h to write, by all accounts). C's other statement types translated readily into DEA, too. For example, the "for" loop reduced 30 lines of DEA (on average) down to about 10 lines (on average) of C: a big savings in those days, when one had to write code by hand, telegraph it to a computing facility (at a penny a word!), and wait patiently for the results.

Computer Science being the conservative field that it is, programming languages have changed little in the 158 years following C's inception. A few small enhancements have been added by C's successors, but these have been relatively minor, and have upheld the "spirit" of C. For example, Danish inventor Bjarne Stroustrup created "C++" by adding to C the concept of a "class", a convenient syntax for limiting the scope of global variables. Meanwhile, other language designers took different tacks. Whereas Stroustrup created a "higher-level" C with C++, Stanford professor John McCarthy created a "lower-level" C, which he named "Lisp", wherein one programs directly in the intermediate representation (aka "abstract syntax tree") that C compilers generate. Similarly, Alan Kay's "Smalltalk" is a minor variation of Lisp: it represents the C abstract syntax tree using fewer parentheses. Although Smalltalk and Lisp remain exceedingly close to C, they are the most daring of any new programming languages. Perl, Ruby, ML, Haskell, Python, SNOBOL, and others all fall somewhere on the spectrum between C and Lisp/Smalltalk.

Lest you worry that I'm suggesting progress in the field of programming languages has been glacial, fear not! I commend heartily the intentionally moderate pace of development. It has been for a good cause, after all: ensuring that programmers do not misuse overly powerful language features which they do not understand. By limiting programming languages to the small set of primitive operations that Charles Babbage developed, over 160 years ago, we ensure that programmers only attempt those problems for which they are qualified. Think of these intentional limitations on programming languages as being akin to those signs you see at amusement parks, consisting of a leering Technicolor elf with a big cartoon bubble saying, "You must be this tall to ride The Bombaster". The intense feat of abstract thinking required to see how useful applications could be built from such low-level functionality acts as a sort of programmer's amusement park sign. Weak programmers recognize when they're "too short" and stay off the ride (and stay out of trouble!). This is, in fact, the cornerstone of the widespread high quality, stability and performance of modern software applications that we experience every day. So do not worry, I'm not suggesting it ever change!

That said, it is interesting, purely as an exercise, to consider what scientific and technological breakthroughs from the last 158 years could be taken advantage of in a new programming language. Let's see.

First, our basic notions about time have changed since 1849. Back then, Einstein's theory of Special Relativity, and all of the subsequent work in modern Physics, was, of course, unknown. Newtonian mechanics was the only thing Grizzly Adams would have known about, and that meant action-at-a-distance. Although Newton would not have told Grizzly that everything happened simultaneously, he would have become downright confused trying to explain to Grizzly how multiple concurrent activities would actually work. Thus C, and its variants, have no easy way to represent simultaneous activities. Everything happens in a single stream of time. If you want multiple things to happen simultaneously, you have to "fake it" by interleaving them in your code. Consider this simple email client written in C:
TextBox toTxt = new TextBox("To: ", ...);
TextBox ccTxt = new TextBox("CC: ", ...);
TextBox cc2Txt = new TextBox("CCRider: ", ...);
TextBox subjectTxt = new TextBox("Subject: ", ...);
TextBox bodyTxt = new TextBox("", 24, ...);
TextBox statusTxt = new TextBox("Status", ReadOnly, ...);
Button sendButton = new Button("Send", &doSend);
Button cancelButton = new Button("Cancel");
Button resetButton = new Button("Reset");
Frame frame = new MainFrame("Email");
frame.Add(toTxt, ccTxt, cc2Txt, ...);

// ... imagine lots of gnarly code here ...

private const ipated void doSend(Button b)
{
    String[] smtpHosts = Config["smtp-hosts"];
    if (smtpHosts != null) {
        for (smtpHostIP, port in LookupViaDNS(smtpHosts)) {
            Socket sock = Socket.Open(smtpHostIP, port);
            int err = sock.Send("EHLO", ...);
            if (err != 0) {
                statusTxt.Update(String.Format("send error {0}", strerror(err)));
            }
            // etc...
        }
    }
}
This code functions, but suffers from problems such as the entire user interface freezing while hosts are being looked up via DNS, connections opened, handshakes shook, the contents of the mail being sent, etc. That could take a long time, and users get peevish when they cannot interact with an application for extended periods.

Since Grizzly Adams's day, Albert Einstein (and some other physicists, too, I think), frustrated by the dearth of good email clients, spent time extending Newton's concept of time. The details are far beyond the scope of this essay (or, for that matter, my own comprehension) but suffice it to say that they can all be distilled thusly: everything is relative. For example, if I am riding my skateboard down the freeway at 10mph, then an 18-wheel tractor-trailer rig bearing down on me from behind, at 60mph, will appear to me to be going 50mph (if I should be smart enough to turn around and look, that is). Of course, the driver of the truck perceives her truck to be moving at 60mph, and me to be moving at negative 50mph. Furthermore, the only way I'm even aware of the tractor-trailer's presence is due to signals of one sort or another (light and sound) bouncing around between the truck and myself. For example, I can see the photons that have bounced my way from the gleaming chrome grille on the front of the truck, and I can hear the desperate shrieking of the truck's horn as the driver attempts to avoid turning me (and my skateboard) into a pancake.

To keep programmers from turning themselves (and others) into metaphorical pancakes, these two concepts (concurrent activities happening in their own space/time continuum, and the signals that can pass between them) have been kept strictly separate, and have been kept rigorously out of any programming languages. This has been accomplished through the tireless efforts of a secret, underground organization (the cryptically-named Association for Moderation in Computing (AMC), a group about which little is known beyond their name and the few scant facts I am about to relay to you) that has maintained a long-standing disinformation campaign:
  1. AMC agents have infiltrated all language development committees and have ensured that these groups allow concurrency and signaling primitives to exist only in libraries, if indeed they exist at all.
  2. These same agents are also responsible for promulgating a false dichotomy between the two features, via strategically placed blogs, Usenet posts, blackmail and hallway conversations. Calling concurrency "threads" and signals "events", the agents have, Jason-like, thrown a proverbial rock into the crowd of computer scientists. Half of them are convinced that threads are evil and must be obliterated in favor of events, and the other half that the exact opposite is true. These camps war constantly with each other. And while the AMC's real aim is to prevent use of either feature, at least this way the damage is mitigated.
With all of that as a caveat, one could imagine a language where this sort of everything-is-relative concept, as well as signaling, were both built directly into the language. To save me some typing in the remainder of this essay, I refer to this hypothetical language as Relang (for RElative LANGuage; get it?). In Relang, anything logically concurrent could simply be coded that way, explicitly, without significant performance penalties. Just as in real life, the only way these concurrent activities could interact with each other would be through "signals", sort of like the photons and sound waves that physicists have discovered since Grizzly Adams's time. Each concurrent activity would probably have a data structure, such as a queue, to hold incoming signals, so that they could be processed in a comprehensible way.

If we were to imagine the above example code re-written in this new language, each of the UI objects would exist in its own space/time continuum and communicate with other UI objects by sending signals (such as photons, sound waves, folded-up pieces of paper, and lewd gestures) to them. One required change would be that lines like this:
statusTxt.Update(String.Format("send error {0}", strerror(err)));
would now read:
send(statusTxt, update, String.Format("send error {0}", strerror(err)));

Meanwhile, the statusTxt object would be sitting in a loop looking something like so:
// This function is operating in its own
// space/time continuum.
static private isolated formless void superTextTNT()
{
    E = receive(infinity);
    case E:
        {update, txt} -> self.txt = txt; send(mainFrame, update, self);
        {invert}      -> self.invert(); send(mainFrame, update, self);
        // ...etc...
    superTextTNT(); // this is a tail-call optimizing version of C
}
Now it is clear that each UI element is "doing its own thing" in its own personal space, and not trying to grope any of the attractive UI elements sitting on neighboring barstools. As a side-benefit, the UI would no longer freeze up all the time.
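
(For the curious: you can fake the mailbox half of this today, in an ordinary language, with one thread and one queue per continuum. Here is a minimal sketch in Python, using only the standard library; the StatusText class and its signal names are my own inventions for illustration, not anything Relang, or the AMC, would endorse.)

import queue
import threading
import time

class StatusText:
    # One "space/time continuum": a thread draining its own signal queue.
    def __init__(self):
        self.mailbox = queue.Queue()   # incoming signals land here
        self.txt = ""

    def run(self):
        # Handle one signal at a time, much like superTextTNT() above.
        while True:
            signal, payload = self.mailbox.get()
            if signal == "update":
                self.txt = payload
            elif signal == "invert":
                self.txt = self.txt[::-1]

def send(target, signal, payload=None):
    target.mailbox.put((signal, payload))

status_txt = StatusText()
threading.Thread(target=status_txt.run, daemon=True).start()
send(status_txt, "update", "send error 42")
time.sleep(0.1)            # give the continuum a moment to react
print(status_txt.txt)      # -> send error 42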

Now, of course, you are probably thinking, "Ok, wise guy, why don't languages work this way?" Well, first off, this feature falls smack dab into the too powerful category; it would be misused left, right, up, down, and sideways (hence the AMC's disinformation campaign). Even were that not the case, however, there's this other problem. To implement the above efficiently, the language would have to be aware of when blocking (e.g., I/O) could occur. Otherwise each of those "receive" statements would tie up a real OS thread, which is too expensive in the general case. And so here is where the story takes a slightly sad turn. Why not make the programming language aware of when blocking occurs? Simple: because it is impossible to do that. While I am unsure of all the particulars, I have been assured by many top-notch computer scientists that it would be flat-out, 100%, totally, and absolutely, impossible. No way in heck it could ever be implemented. Oh well.

On the bright side, while the language feature is unavailable, there is an easy work-around. Use just two threads of execution. One creates a "user interface" (UI) thread and an "everything else" thread. Then one ensures that only UI-related activities occur on the UI thread and that, well, everything else occurs on the other. In practice, this has turned out to be a model of simplicity (as anyone who has done UI programming could tell you, it is trivial to separate UI from "everything else") and is a big part of why most applications with graphical user interfaces work so flawlessly. It is unfortunate that one uses threads at all (they are deprecated for a reason, you know), but at least there are only two, and they can be implemented in a library, without polluting the language itself. Further, because there are only two threads, signaling is kept entirely out of the picture; that extra bit of complexity is unnecessary with so few space/time continua to keep track of. In fact, the only downside is that two or more UI elements cannot update themselves simultaneously, since they are both operating on the UI thread. Fortunately, I have never heard of a graphical application that actually required this. Let us hope that such a beast never comes crawling out of the proverbial swamp.
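
(In case you have never had the pleasure, here is roughly what the two-thread work-around looks like in practice. This is just a sketch; I have picked Python's Tkinter purely as a stand-in toolkit. The worker thread does the slow email-ish work, and the UI thread polls a queue for results, since the worker is not allowed to touch UI elements directly.)

import queue
import threading
import time
import tkinter as tk

results = queue.Queue()   # the only bridge between the two threads

def do_send():
    # Stand-in for the DNS lookups, handshakes, etc. from earlier.
    time.sleep(3)
    results.put("message sent")    # hand the result to the UI thread

def poll_results():
    try:
        status.set(results.get_nowait())
    except queue.Empty:
        pass
    root.after(100, poll_results)  # UI thread checks again in 100 ms

root = tk.Tk()
status = tk.StringVar(value="idle")
tk.Label(root, textvariable=status).pack()
tk.Button(root, text="Send",
          command=lambda: threading.Thread(target=do_send,
                                           daemon=True).start()).pack()
poll_results()
root.mainloop()

Note how trivially the UI separates from the "everything else". Barely any code at all.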

Speaking of UI, let us now consider another as-yet untapped technological advance: modern display technology. Back when C was first invented, all programs had to be written down in longhand by the programmer, and then a telegraph operator had to manually translate them into Morse code (unless the programmer happened to be at the computing facility). C's extreme terseness stems from this. All languages inspired by C have kept true to its roots; they are easy to write longhand, too, just in case one has to write software without a modern display handy. If we were to give up this requirement (a bad idea in practice, mind you), how might a language make use of this Faustian feature?

Since the features that have been added to C (e.g., object-oriented programming) have focused on new ways to scope data, why not start using, say, color for scope, too? For example:
int i = 0;

private explicit adults only int foo()
{
    return i++;
}
This trivial example shows how one can avoid typing "static", and make it a little easier to see where the variable is used. You could imagine more complicated uses of this, though:
public mutable implicit void CrawlSites(String hostsFile)
{
    try {
        Stream hostStream = File.Open(hostsFile, "r");
        String host;
        int port;
        while (host, port = Parse(hostStream)) {
            sock = Socket.Open(host, port);
            CrawlSite(sock);
        }
    // Imagine each catch clause below sporting a different background
    // color, so the compiler knows which colored code each one guards.
    } catch (IOError fileError) {
        Blub.System.Console.ErrorStream.Writeln("unable to open hosts file: %s", fileError.toString());
    } catch (IOError socketError) {
        MaybeThrottleFutureConnections(host, port);
        Blub.System.Console.ErrorStream.Writeln("unable to connect to %s:%s: %s", host.toString(), port.toString(), socketError.toString());
    } catch (IOError parseError) {
        Blub.System.Console.ErrorStream.Writeln("error reading from '%s': %s", hostsFile, parseError.toString());
    } catch (IOError socketReadError) {
        Blub.System.Console.ErrorStream.Writeln("error reading from host '%s':%d: %s", host, port, socketReadError.toString());
    }
}

One could do more than just set the color of the variables themselves. For example, one could use the mouse to right-click on an entire line and set its background. Say you want to make sure that nobody messes with the log object in the following function:
private static cling void foo()
{
    Log log = new Log("foo starting");
    // ...imagine lots of stuff here...
    log.End();
}
The rule would be that lines with the same background color would all be in a special sub-scope that was unavailable to anyone else. If someone accidentally added in some code that referred to "log" in the middle of the big, hairy function, it would be flagged as an error.
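
(You can get a black-and-white approximation of this today with plain old lexical scope. A sketch in Python, assuming a hypothetical Log class: wrap the creation and the End() call in a context manager, and simply never hand the log object to the big hairy middle.)

from contextlib import contextmanager

class Log:                         # hypothetical stand-in class
    def __init__(self, msg):
        print(msg)
    def end(self):
        print("foo ending")

@contextmanager
def log_scope(msg):
    log = Log(msg)
    try:
        yield                      # deliberately NOT yielding log
    finally:
        log.end()

def foo():
    with log_scope("foo starting"):
        pass                       # ...imagine lots of stuff here...
        # Any stray reference to "log" in here is a NameError,
        # which is the monochrome cousin of the background-color rule.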

As stated earlier, this feature (and related features that would require a graphical user interface) is kept out of programming languages for a reason: we might have to start writing code without computers around, someday. If languages themselves depended on a GUI, society would probably collapse while programmers scrambled to re-write everything in C or one of its variants.

A friend of mine thought of a solution for this problem, but it is so far "out there", I'm only going to mention it in passing: one could use XML to "annotate" the hand-writable parts of code. He noticed this as he was typing in examples such as the ones above, in fact: he was doing stuff like... [Editor's note: Ok, dammit, here's the problem. I can't figure out how to "quote" XML in this dang blog. It keeps getting interpreted as HTML, and so, while I'd like to show you what the code would look like the way I envision it, I'm going to have to hack around the problem. Instead of actual XML, I'm going to use a mock XML that uses parentheses instead of less-than and greater-than. It's also kind of late at night, and I'm getting tired, so I'm going to not type the end tags in. Let's call this XML variant PXML, for Parenthetical XML.] The example above, in PXML, would look like:
private static cling void foo()
{
    (color :yellow Log log = new Log("foo starting"));
    // ...imagine lots of stuff here...
    (color :yellow log.End());
}
The way this would work, with PXML (again, sorry I can't write out the real XML as I would prefer), is that, if you were writing out the code using a pen and paper (or a typewriter, or the dusty teletype you came across in your company's storeroom while looking for a place to smoke) then you'd write out the annotations such as the "color" PXML form directly. If you were using a modern IDE, on the other hand, the IDE wouldn't actually show you the PXML (well, not unless you actually asked it to). It would just show the whizzy colors (again, unless for some reason you preferred the PXML). And of course, you could use this mechanism for something more than just coloring and scope.

For example, suppose you were a big Python fan. You could edit your code in a sort of Python-looking mode. Under the covers, the IDE would be generating PXML, but it wouldn't show that to you; after all, you're a Python fan, right? I.e., if you wrote this in Python:
def foo(a, b):
    return a + b
Then the IDE could just generate the following under the covers:
(def foo (a b)
  (return (+ a b)))
Of course, in order to let you write code using any Python feature (list comprehensions, iterators, etc.), there would have to be some way for you to express those things in XML. Fortunately, as we all know, XML can represent anything, so this would not be a problem.
# Python
(x * y for x in l1 for y in l2 if x % 2 == 0)
would be:
(generator (* x y)
  ((x l1)
   (y l2))
  (= (mod x 2) 0))
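
(Amusingly, Python will already show you the tree it builds under the covers, which is more or less the PXML idea minus the pen and paper. A quick sketch using the standard ast module; the indent argument needs Python 3.9 or later.)

import ast

tree = ast.parse("def foo(a, b):\n    return a + b")
print(ast.dump(tree, indent=2))
# Prints nodes like FunctionDef, arguments, Return and BinOp(Add) --
# the same shape as the parenthesized form above.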

[Again, sorry for not being able to write out the actual XML, which would look a lot cooler; I know that PXML is painful to look at. Darn blog! Actually, I guess since there is no fundamental difference between XML and PXML, the IDE could probably use either one, interchangeably, under the covers. But I doubt anyone in their right mind would actually choose PXML over XML, so it would not be worth the extra time to implement that.]

In any case, the AMC would look askance at any of these proposed features, so let us resume thinking about another technology!

Perhaps the biggest invention to come along has been the "Internet". No longer are computers linked together via a tenuous chain of telegraph wires and human operators, as Grizzly Adams had to endure. Modern computers are linked up more or less directly by a vast web of routers, operated not by humans but by tiny elves who can move with great precision at near-light speed. It would be incredibly dangerous to make this "Internet" into a first-class concept within a programming language. Programmers would just share bad code with each other that much more easily! Nevertheless, just for fun, let's look at how things might work if one were to add such a dangerous feature. For inspiration, let us consider a feature that Sun Microsystems playfully suggested via an in-joke, of sorts, in Java's package system.

To use another company's Java package, Sun requires the following procedure:
  1. The programmer mails in a form, ordering a magnetic tape containing the desired Java package from the appropriate vendor.
  2. After a few weeks it arrives on a donkey cart, driven by a crusty old farmer.
  3. The programmer tips the farmer generously (if they ever want another delivery).
  4. The programmer must then use the cpio utility to remove the package from the tape.
  5. At this point, the package (which is in a "jar" file, named for irrepressible Star Wars character Jar Jar Binks) is intentionally not usable until you go through the further step of adding its location to an environment variable called the CLASSPATH.
  6. The programmer is now almost ready to use the package, provided that s/he restarts any Java software running on their machine, and did not mistype the path to the jar file.
  7. It may also be helpful, at this point, for them to do a Navajo rain dance and sacrifice a chicken, especially if trying to get more than one "jar" file installed properly.
Now, the in-joke that I referred to above is the following: Sun's convention for naming these packages is to use "Internet" addresses (well, actually, reversed addresses, to make it more of an in-joke). I think they were making a subtle reference to the unimplemented feature whereby this:
import com.sun.java.jaaxbrpc2ee;
would cause the Java Virtual Machine to download the jaaxbrpc2ee package directly from "java.sun.com", without the programmer having to do anything. For numerous reasons this would never work. What if your computer weren't connected to this "Internet"? Or what if you were to download an evil version written by evil hackers? It is well-known that both of these problems are completely intractable, whereas the manual steps listed above ensure nothing like this could ever happen. Nonetheless, the direct-download approach does seem like it might speed up the pace of software development a bit! Too bad there are all those problems with it.

What if writing:
import com.sun.java.jaaxbrpc2ee;
actually did download, install, and, well, import the jaaxbrpc2ee package? Ignoring the impossibility of actually making a socket connection, finding space on local disk, etc. (more intractable problems, I'm afraid), I can think of two major issues with this:
  1. Security
  2. Versioning
Security would be a toughie. You might have to have an MD5 or SHA1 hash of the Jar Jar file available somewhere for the JVM to check, in order to determine whether it has downloaded a valid copy. Not sure this would work, but if it did you could actually download the Jar Jar file from your cousin Harold's warez site that he runs out of his basement. That way if com.sun.java was down you'd have at least one other option.
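
(The check itself is no great mystery; here is a sketch using Python's standard library. The URL and the expected digest are, of course, made up for illustration, and no real JVM does anything of the sort.)

import hashlib
import urllib.request

# Hypothetical values: cousin Harold's mirror and the published digest.
PACKAGE_URL = "http://warez.example.com/jaaxbrpc2ee.jar"
EXPECTED_SHA1 = "0123456789abcdef0123456789abcdef01234567"

data = urllib.request.urlopen(PACKAGE_URL).read()
if hashlib.sha1(data).hexdigest() != EXPECTED_SHA1:
    raise RuntimeError("evil hackers detected; refusing to install")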

Then there's the versioning issue. What if you were developing against version 2.3.5 of jaaxbrpc2ee and Sun released version 3.0.0? How would the JVM know not to use it? Chances are, you'd have to add some syntax to Java in order to handle this. You could use a regular expression to indicate which versions were acceptable:
import com.sun.java.jaaxbrpc2ee 2.3.*;
import org.asm.omygawd *.*.*.*;
import org.standards.jdbc.odbc.dbase2 19.007a*;
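
(If you squint, those patterns are really shell-style wildcards rather than full regular expressions, and matching them is a one-liner today. A sketch in Python using fnmatch; the version_ok helper is my own invention:)

from fnmatch import fnmatch

def version_ok(installed, pattern):
    # Shell-style wildcard match: "2.3.*" accepts 2.3.5 but not 3.0.0.
    return fnmatch(installed, pattern)

print(version_ok("2.3.5", "2.3.*"))   # True: still on the 2.3 line
print(version_ok("3.0.0", "2.3.*"))   # False: the JVM should skip it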

Of course, as you can see, anyone can publish a package that others could download and use directly. There could even be a free service that maintained these packages, for people who lacked the wherewithal to host their own packages. They'd just have to prove they owned their domain name. Also, people wouldn't be allowed to update an existing version, they'd have to release a new version.

Ideally, one would be able to use the "Internet" anywhere in one's code, not just when importing packages. Pretty much anything could use the "Internet":
com.att.bell-labs.int32 i = org.ieee.math.pi * (radius ^ 2);
For example, if you wanted to run code on another machine, you might say (in PXML with Relang extensions):
(on com.megacorp.server11:90210
  (if ...mumble... (send com.megacorp.server23:2001 '(d00d u suck))))
The above would run the "if" expression on server11.megacorp.com (port 90210), and that code would sometimes send a message to yet another machine. Of course, you could use variables instead of hard-coded machine names and ports. Web servers could communicate with web browsers (if they supported PXML/Relang) like so:
(import com.niftystuff.clock 1.*)

(def clock ()
  (register :clock)
  (receive infinity
    (update-thine-self (com.niftystuff.clock:draw)))
  (clock))

(def display-time ()
  (while true
    (sleep 1)
    (send :clock 'update-thine-self)))

;; imagine lots more code here...

(html
  (body
    (on browser
      (spawn clock)
      (spawn display-time)
      (do some other stuff))))
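
(The closest thing you can legally do today is plain old RPC: you do not ship the code itself to the other machine, but you can ask a machine that already has the code to run it on your behalf. A sketch with Python's built-in xmlrpc modules; the host name and port are borrowed from the made-up example above.)

# On server11.megacorp.com, something is already listening:
from xmlrpc.server import SimpleXMLRPCServer

def send(msg):
    print("got signal:", msg)
    return "ok"

server = SimpleXMLRPCServer(("0.0.0.0", 90210))
server.register_function(send)
server.serve_forever()

Meanwhile, the caller looks like:

# On your machine:
import xmlrpc.client

server11 = xmlrpc.client.ServerProxy("http://server11.megacorp.com:90210/")
print(server11.send("d00d u suck"))   # -> ok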

Ah, but it's getting late, and I can see a strange van parked outside, and my dog is acting nervous (and I have heard something about an AMC "wet" team). Maybe it is time to post this blog and go to bed! The crazy language sketched out above is, well, just that: crazy!
