A Nickel's Worth

Tuesday, April 28, 2009

Six Audacious Goals for Your System

With apologies to the authors of the futurist programming notes, here is a set of audacious goals for your system (by which I mean your collection of web sites, webservices, databases, and so on):
  1. Make your entire system runnable (for debugging/development) on a single machine, via a single command,
  2. ...without manual configuration,
  3. ...without an Internet connection.
  4. Demonstrate that your system works flawlessly even when you permanently and unexpectedly unplug the power cable from any one machine (even your most precious database).
  5. Make one of your N-tier architectures M-tier, where M < N.
  6. Optimize every user-visible action so that it is perceived to complete in less than one second.

Wednesday, May 21, 2008

Ulrich Drepper's Memory Paper and CL

I recently came across Ulrich Drepper's excellent paper, What Every Programmer Should Know About Memory. There is a lot of fascinating stuff in there about an important class of things you sometimes need to do to achieve excellent performance. While the paper concentrates on C, I wondered whether some of the same effects could be observed in a high-level language like CL (for the record, I think CL is both high- and low-level, but whatever...). I did a quick experiment which suggests that at least one of Ulrich's optimizations works in CL. [Note: the following is not an attempt to produce mathematically correct results, as Ulrich did; I just wanted to see if the order in which memory was accessed seemed to matter.]:
(declaim (optimize (speed 3) (safety 0)))

(defun matrix-multiply-fast? ()
  (let ((m1 (make-array '(1000 1000) :element-type 'fixnum))
        (m2 (make-array '(1000 1000) :element-type 'fixnum))
        (m3 (make-array '(1000 1000) :element-type 'fixnum)))
    (loop repeat 100
          do (loop for i below 1000
                   do (loop for j below 1000
                            do (setf (aref m3 i j)
                                     (* (aref m1 i j) (aref m2 i j))))))))

(defun matrix-multiply-slow? ()
  (let ((m1 (make-array '(1000 1000) :element-type 'fixnum))
        (m2 (make-array '(1000 1000) :element-type 'fixnum))
        (m3 (make-array '(1000 1000) :element-type 'fixnum)))
    (loop repeat 100
          do (loop for i below 1000
                   do (loop for j below 1000
                            do (setf (aref m3 i j)
                                     (* (aref m1 i j) (aref m2 j i))))))))
Note the only difference between the two functions is the order in which the elements of the 2nd array are accessed. After compiling the above file I did the following from the CL prompt:
CL-USER> (time (matrix-multiply-fast?))
Evaluation took:
2.776 seconds of real time
2.508157 seconds of user run time
0.048003 seconds of system run time
[Run times include 0.028 seconds GC run time.]
0 calls to %EVAL
0 page faults and
12,000,160 bytes consed.

CL-USER> (time (matrix-multiply-slow?))
Evaluation took:
3.926 seconds of real time
3.632227 seconds of user run time
0.016001 seconds of system run time
0 calls to %EVAL
0 page faults and
12,008,280 bytes consed.
Assuming the above is all correct, it sort-of duplicates Ulrich's result where accessing an array in a column-major order is slower (see figure 6.1 for an illustration). Now, I have to admit that, for lack of time, I didn't read Ulrich's paper in great depth and kind of lazily jumped to section six. I'd be curious to know if this sort of optimization is already well-known in the CL world (and/or for other "high-level" programming languages). Some casual googling turned up an excellent paper about optimizing CL via type annotations, but I didn't see anyone directly addressing Ulrich's points. I'd be curious if any readers of this blog know more.
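For what it's worth, the standard pins down why the second version loses: CL multidimensional arrays are stored in row-major order, so consecutive values of the last subscript are adjacent in memory. You can confirm this at the REPL with ARRAY-ROW-MAJOR-INDEX:

```lisp
;; CL arrays are row-major: the last index varies fastest in memory.
(let ((a (make-array '(1000 1000))))
  (list (array-row-major-index a 0 0)    ; => 0
        (array-row-major-index a 0 1)    ; => 1    (adjacent: cache-friendly)
        (array-row-major-index a 1 0)))  ; => 1000 (a full row away)
```

So iterating with the first index in the inner loop strides 1000 fixnums per access, which is what defeats the cache.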

Sunday, March 23, 2008

Optimizing CL

This blog is about mechanically optimizing CL code. We will not discuss optimizing algorithms; instead we'll transform the code in rote ways to improve performance. To illustrate these techniques, we'll re-implement Perl's core Text::Soundex function in CL. Let's start with the Perl code, straight from the 5.8.8 distribution.
$soundex_nocode = undef;

sub soundex
{
    local (@s, $f, $fc, $_) = @_;

    push @s, '' unless @s;      # handle no args as a single empty string

    foreach (@s)
    {
        $_ = uc $_;
        tr/A-Z//cd;

        if ($_ eq '')
        {
            $_ = $soundex_nocode;
        }
        else
        {
            ($f) = /^(.)/;
            tr/AEHIOUWYBFPVCGJKQSXZDTLMNR/00000000111122222222334556/;
            ($fc) = /^(.)/;
            s/^$fc+//;
            tr///cs;
            tr/0//d;
            $_ = $f . $_ . '000';
            s/^(.{4}).*/$1/;
        }
    }

    wantarray ? @s : shift @s;
}
We do not implement the auxiliary behavior of the Perl version, namely we do not optionally accept a list of strings, nor do we support overriding the NIL return case. Both would be easy to implement but would be tangential to this blog entry. We port the rest of the functionality as faithfully as possible, though, so that the Perl can serve as a useful performance benchmark.

Perl's soundex uses regular expressions and string substitution operators heavily, some having analogues in CL and some not. For example, CL lacks Perl's tr/// operator, so we implement a crude version:
(defparameter *ascii-table*
  (let ((table (make-array '(256) :element-type 'character)))
    (loop
       for i below 256
       do (setf (aref table i) (code-char i)))
    table))

(defun tr (string from-table to-table)
  "Crude version of Perl's tr/// operator."
  (let ((table (copy-seq *ascii-table*)))
    (loop
       for from-char across from-table
       and to-char across to-table
       do (setf (aref table (char-code from-char)) to-char))
    (map 'string
         #'(lambda (c) (aref table (char-code c)))
         string)))
Our TR supports only the limited case needed by SOUNDEX (i.e., mapping one set of characters to another). The Perl version can do more, such as removing letters that don't appear in the first set, and removing duplicates. In fact, the Perl soundex relies on that ability to remove duplicates, so we implement a function to do that as well.
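To illustrate, here is our TR applied to a sample string using the soundex mapping tables (H, E, and O map to 0; L maps to 4):

```lisp
(tr "HELLO"
    "AEHIOUWYBFPVCGJKQSXZDTLMNR"
    "00000000111122222222334556")
;; => "00440"
```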
(defun uniq! (seq)
  (cond
    ((> (length seq) 1)
     (do* ((cur 0)
           (cur-elt (elt seq cur) (elt seq cur))
           (next 1 (1+ next)))
          ((>= next (length seq)) (subseq seq 0 (1+ cur)))
       (let ((next-char (elt seq next)))
         (unless (eql cur-elt next-char)
           (incf cur)
           (setf (elt seq cur) next-char)))))
    (t seq)))
UNIQ! coalesces adjacent duplicate items into one item. CL's built-in DELETE-DUPLICATES doesn't work because it coalesces all duplicates, not just adjacent ones.
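The difference is easy to see at the REPL (using COPY-SEQ because both functions may modify their argument):

```lisp
(uniq! (copy-seq "1122321"))             ; => "12321" (adjacent dups only)
(delete-duplicates (copy-seq "1122321")) ; => "321"   (all dups)
```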

These two utility functions make porting the rest of the soundex easy. The following shows the CL equivalents of each important line of soundex:
Perl                    What It Does                                    CL Equivalent
--------------------    --------------------------------------------    ------------------------------------
$_ = uc $_;             Uppercase the string                            (string-upcase ...)
tr/A-Z//cd;             Remove any non-upper-alpha characters           (remove-if-not 'alpha-char-p string)
($f) = /^(.)/;          Get the first character of the string           (char s 0)
tr/AE.../00.../;        Map letters to digit values                     (tr s "AE..." "00...")
s/^$fc+//;              Remove leading copies of the character in $fc   (string-left-trim (vector fc) s2)
tr///cs;                Remove adjacent duplicates                      (uniq! ...)
tr/0//d;                Delete any '0' characters                       (delete #\0 ...)
$_ = $f . $_ . '000';   Concatenate, plus ensure length of at least 4   (concatenate 'string ...)
s/^(.{4}).*/$1/;        Strip off all but the first 4 characters        (subseq ... 0 4)

Here is the actual code:
(defun soundex (string)
  (let ((s (string-upcase (remove-if-not 'alpha-char-p string))))
    (when (plusp (length s))
      (let ((f (char s 0)))
        (let* ((s2 (tr s "AEHIOUWYBFPVCGJKQSXZDTLMNR" "00000000111122222222334556"))
               (fc (char s2 0)))
          (setf s2 (delete #\0
                           (uniq! (string-left-trim (vector fc) s2))))
          (subseq (concatenate 'string (vector f) s2 "000") 0 4))))))

Now let's see if it works. Using SLIME, type C-c C-k to compile the file. Then try SOUNDEX at the CL prompt:
CL-USER> (soundex "supercalifrag")
"S162"
If you try the Perl version, you'll find it returns the same thing (I tried other test cases, as well, but they are not relevant to this blog).

At this point you might be feeling rather proud of yourself; after all, you ported that Perl code pretty quickly, right? And I bet it even performs better already; after all, Perl is interpreted and CL is compiled! Let's verify that assumption using the TIME macro built into CL:
CL-USER> (time (dotimes (i 100000) (soundex "supercalifrag")))
Evaluation took:
2.644 seconds of real time
2.640165 seconds of user run time
0.012001 seconds of system run time
[Run times include 0.076 seconds GC run time.]
0 calls to %EVAL
0 page faults and
207,987,480 bytes consed.
Now let's compare that to the Perl code's performance:
$ time ./soundex-bench.pl 100000
real 0m1.069s
user 0m1.064s
sys 0m0.008s
D'oh! The Perl code kicked our ass! It's more than twice as fast as our CL! How could this have happened to us? Maybe we forgot to turn on some optimizations? We add:
(declaim (optimize (speed 3) (safety 0)))
to the top of our file and recompile. Now let's see the results:
CL-USER> (time (dotimes (i 100000) (soundex "supercalifrag")))
Evaluation took:
2.061 seconds of real time
2.040128 seconds of user run time
0.020001 seconds of system run time
[Run times include 0.1 seconds GC run time.]
0 calls to %EVAL
0 page faults and
183,988,752 bytes consed.
Ok, that's a little better, but come on, it is still approximately twice as slow as the Perl! And look at how much memory is allocated ("consed") in the course of doing only 100,000 calls: that's approximately 175 megabytes (admittedly not all at once, but still, that's just plain embarrassing!)

Now before diving into optimization, let us review a good approach to optimizing CL.
  1. Measure first.
  2. Avoid guessing!
  3. Fix your algorithm(s) first (not shown in this blog entry).
  4. Fix memory consumption next.
  5. Then go after CPU consumption, primarily by adding type information.
Remember, this blog is not about algorithmic optimizations; we will pretend (for the sake of illustration only!) that you've already ruled out the need, in order to focus on mechanical optimization.

Following step one from the above strategy, start by profiling. Define a function, MANY-SOUNDEX that performs our TIME loop from above. Also define a function PROFILE-SOUNDEX that employs SBCL's sb-profile package to profile the functions involved in implementing SOUNDEX, including some built-ins that it calls. To profile a function, pass its name to SB-PROFILE:PROFILE. After exercising the code, call SB-PROFILE:REPORT, which prints timing information. Call SB-PROFILE:RESET between runs unless you want the results to accumulate (we don't, in this case).
(defun many-soundex ()
  (time
   (dotimes (i 100000)
     (soundex "supercalifrag"))))

(defun profile-soundex ()
  ;; Don't accumulate results between runs.
  (sb-profile:reset)
  ;; Calling this every time through in case any of the user-defined
  ;; functions was recompiled.
  (sb-profile:profile soundex soundex-tr uniq!
                      concatenate make-string subseq
                      string-left-trim delete-if-not
                      tr delete string-upcase nsubseq)
  (many-soundex)
  (sb-profile:report))
If you notice some unfamiliar functions there, don't worry, they're going to be defined later; for the purposes of this blog I just want to define this function once, even though in real life I modified it many times.

Let's see what it produces:
CL-USER> (profile-soundex)
3.713 seconds of real time
3.160197 seconds of user run time
0.548034 seconds of system run time
[Run times include 0.08 seconds GC run time.]
0 calls to %EVAL
0 page faults and
200,022,056 bytes consed.
seconds | consed | calls | sec/call | name
-----------------------------------------------------------
1.394 | 140,115,928 | 100,000 | 0.000014 | TR
0.618 | 26,230,456 | 100,000 | 0.000006 | CONCATENATE
0.210 | 661,840 | 100,000 | 0.000002 | REMOVE-IF-NOT
0.170 | 3,864,248 | 100,000 | 0.000002 | DELETE
0.164 | 9,865,784 | 100,000 | 0.000002 | SOUNDEX
0.131 | 0 | 100,000 | 0.000001 | UNIQ!
0.130 | 9,274,680 | 100,000 | 0.000001 | STRING-UPCASE
0.058 | 10,012,888 | 100,003 | 0.000001 | SUBSEQ
0.000 | 0 | 7 | 0.000000 | STRING-LEFT-TRIM
-----------------------------------------------------------
2.878 | 200,025,824 | 800,010 | | Total
This shows that we probably should optimize TR for memory consumption first. What's wrong with TR? Every call to TR copies *ASCII-TABLE* into the TABLE local variable, which will later be used to map each character to a (potentially) different one. The inefficiency is that TABLE only varies based on the 2nd and 3rd arguments (FROM-TABLE and TO-TABLE). Since SOUNDEX always passes in the same thing for those two arguments every time, it is wasteful to continually recreate the same TABLE. To fix it use a closure that "closes over" a single instance of TABLE:
(defun make-tr-fn (from-table to-table)
  (let ((table (copy-seq *ascii-table*)))
    (loop
       for from-char across from-table
       and to-char across to-table
       do (setf (aref table (char-code from-char)) to-char))
    (lambda (string)
      (declare ((simple-array character) string))
      (map-into string
                #'(lambda (c) (aref table (char-code c)))
                string))))

(defparameter *soundex-tr-fn* (make-tr-fn "AEHIOUWYBFPVCGJKQSXZDTLMNR" "00000000111122222222334556"))

(defun soundex-tr (string)
  (funcall *soundex-tr-fn* string))
This is an example of the sort of rote transformations you often need to do in order to speed up performance.
  1. Separate out the part of the function that varies dynamically into a LAMBDA.
  2. Return the LAMBDA instead of the original value so that it can be reused over and over again.
This is something you often do in Java, too, where the pattern goes:
  1. Create a class.
  2. Have the constructor do the stuff that only needs to happen once (equivalent to the part outside of the LAMBDA.)
  3. Create a method that does the dynamic part.
The main difference between the CL and the Java is that the Java would be more verbose.

You may notice another difference between MAKE-TR-FN and TR; it's now calling MAP-INTO instead of MAP, which avoids making a copy of the result, thus reducing the memory consumption further. Now this particular optimization does change the semantics of the function to modify its argument. In this case it's ok because we've already made a copy of the string passed to SOUNDEX, but do not use this technique without thinking through the consequences. Also, the DECLARE statement fixes a minor compiler warning caused by the change. DECLARE is discussed later.
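The MAP versus MAP-INTO distinction is easy to demonstrate:

```lisp
;; MAP allocates a fresh result sequence on every call:
(map 'string #'char-upcase "abc")  ; => a brand-new string "ABC"

;; MAP-INTO writes into an existing sequence and returns it,
;; allocating nothing:
(let ((s (copy-seq "abc")))
  (eq s (map-into s #'char-upcase s)))  ; => T, same object
```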

When originally doing this work, I re-ran PROFILE-SOUNDEX after making each change, to see if it helped. That is not shown here in order to save space, but obviously you'd want to do the same thing. Let's now move on to the next biggest offender in the list, CONCATENATE. It's called towards the end of SOUNDEX in order to ensure that the return value is at least 4 characters (right-padded with '0' characters). This was a direct port of the Perl code (or at least as direct as you can get without using CL-PPCRE), and it's inefficient. Since the return value is always a 4-character string, we need only allocate a fixed-size string (using MAKE-STRING), prefilled with '0', and then copy in the (up to) 4 characters we need. We no longer need SUBSEQ which allocates a copy of its result (note: most of the memory consumed by SUBSEQ comes from it being called by UNIQ!).

In fact, since SUBSEQ is next on our list of memory hogs, let's find a way to fix it. When UNIQ! calls it, it's operating on a so-called "garbage" sequence (S2 from SOUNDEX); i.e., an intermediate result that is not used outside of the bowels of SOUNDEX. Since we are free to modify it, it is a shame to use SUBSEQ on it within UNIQ!. CL has a loosely-followed convention that "destructive" (non-copying) versions of functions begin with the letter N. There is no built-in NSUBSEQ, but a quick Google search finds one that works for our purposes:
;; From http://darcs.informatimago.com/lisp/common-lisp/utility.lisp
(defun nsubseq (sequence start &optional (end nil))
  "
RETURN: When the SEQUENCE is a vector, the SEQUENCE itself, or a displaced
        array to the SEQUENCE.
        When the SEQUENCE is a list, it may destroy the list and reuse the
        cons cells to make the subsequence.
"
  (if (vectorp sequence)
      (if (and (zerop start) (or (null end) (= end (length sequence))))
          sequence
          (make-array (- (if end
                             (min end (length sequence))
                             (length sequence))
                         start)
                      :element-type (array-element-type sequence)
                      :displaced-to sequence
                      :displaced-index-offset start))
      (let ((result (nthcdr start sequence)))
        (when end
          (setf (cdr (nthcdr (- end start -1) sequence)) nil))
        result)))
We'll call this from UNIQ! and that should take care of that 10MB of allocations. The last bit of memory we're going to take care of is that allocated by STRING-UPCASE and REMOVE-IF-NOT. Now we have to be careful here because we want exactly one copy of the argument to SOUNDEX; it would be rude to transform the caller's argument unexpectedly. So we are actually relying on either STRING-UPCASE or REMOVE-IF-NOT making a copy of STRING. As such, we have to pick one to optimize. Since REMOVE-IF-NOT has a destructive equivalent, DELETE-IF-NOT, we will use that instead. Unfortunately, it's not as simple as just dropping in DELETE-IF-NOT. If you read the CLHS entry you'll discover that STRING-UPCASE is allowed to return the same string it was passed (i.e., not a copy) if it doesn't need to change it (e.g., all the letters are already uppercase). We account for this by checking whether the return value is EQ (the same object) to the original string and, if so, calling REMOVE-IF-NOT instead of DELETE-IF-NOT. You can see this in the new code listing:
(defparameter *ascii-table*
  (let ((table (make-array '(256) :element-type 'character)))
    (loop
       for i below 256
       do (setf (aref table i) (code-char i)))
    table))

(defun make-tr-fn (from-table to-table)
  (let ((table (copy-seq *ascii-table*)))
    (loop
       for from-char across from-table
       and to-char across to-table
       do (setf (aref table (char-code from-char)) to-char))
    (lambda (string)
      (declare ((simple-array character) string))
      (map-into string
                #'(lambda (c) (aref table (char-code c)))
                string))))

(defparameter *soundex-tr-fn*
  (make-tr-fn "AEHIOUWYBFPVCGJKQSXZDTLMNR" "00000000111122222222334556"))

(defun soundex-tr (string)
  (funcall *soundex-tr-fn* string))

(defun uniq! (seq)
  (cond
    ((> (length seq) 1)
     (do* ((cur 0)
           (cur-elt (elt seq cur) (elt seq cur))
           (next 1 (1+ next)))
          ((>= next (length seq)) (nsubseq seq 0 (1+ cur)))
       (let ((next-char (elt seq next)))
         (unless (eql cur-elt next-char)
           (incf cur)
           (setf (elt seq cur) next-char)))))
    (t seq)))

(defun soundex (string)
  (let ((s (let ((maybe-a-copy (string-upcase string)))
             ;; STRING-UPCASE can return the original string if no
             ;; changes were needed.
             (if (eq maybe-a-copy string)
                 ;; REMOVE-IF-NOT makes a copy
                 (remove-if-not 'alpha-char-p maybe-a-copy)
                 ;; DELETE-IF-NOT doesn't
                 (delete-if-not 'alpha-char-p maybe-a-copy)))))
    (when (plusp (length s))
      (let ((f (char s 0)))
        (let* ((s2 (soundex-tr s))
               (fc (char s2 0))
               (result (make-string 4 :initial-element #\0)))
          (setf s2 (delete #\0 (uniq! (string-left-trim (vector fc) s2))))
          (setf (char result 0) f)
          (loop
             ;; Copy at most 3 digits; RESULT's index 0 already holds F,
             ;; so copying 4 would write past the end of the string.
             for i from 0 below (min (length s2) 3)
             do (setf (char result (1+ i)) (char s2 i)))
          result)))))
As mentioned above, in the course of actually doing this work I ran PROFILE-SOUNDEX many times, but I don't show the intermediate results here in order to save space. Now that we've completed a major chunk of work, however, let's see how we're doing:
CL-USER> (many-soundex)
Evaluation took:
0.464 seconds of real time
0.460029 seconds of user run time
0.0 seconds of system run time
[Run times include 0.008 seconds GC run time.]
0 calls to %EVAL
0 page faults and
22,402,176 bytes consed.
We run MANY-SOUNDEX here in a fresh instance of SBCL to remove any injected profiling code or other stuff that might affect the results. It also has speed optimizations turned on. As you can see, the benefit of simply removing memory allocations is significant. The new code is a whopping 4.4 times faster! It's also now more than twice as fast as the Perl code.

We could continue to whittle away at the memory allocation; as you can see, we're still at around 21MB of memory allocated. However, since we have to copy the argument to SOUNDEX no matter what, 21MB is only about twice the minimum (if you look at the profiling results, we can't do better than the memory allocated by STRING-UPCASE). To keep this blog entry to a reasonable length we shall deem this acceptable.

On to CPU optimization. SBCL automatically detects certain performance problems when you up SPEED to 3. Fix these first, as it makes little sense to manually search out problems when the compiler has already found some. Using SLIME makes it easy:
  1. Put
    (declaim (optimize (speed 3) (debug 0) (safety 0)))
    at the top of the file.
  2. Type C-c C-k to compile the file.
  3. Hit M-n and M-p to get SLIME to highlight the next (or previous) compiler warning.
I won't go through all of the compiler warnings I got when I did this, but instead will highlight some of them.
(defun soundex-tr (string)
(funcall *soundex-tr-fn* string))
; note: unable to
; optimize away possible call to FDEFINITION at runtime
; due to type uncertainty:
; The first argument is a (OR FUNCTION SYMBOL), not a FUNCTION.
This example seems a little strange at first if, say, you're used to primitive type systems such as Java's. CL types can be defined in sophisticated ways that deserve a blog entry of their own. In this particular case, the compiler has inferred the type of *SOUNDEX-TR-FN* to be (OR FUNCTION SYMBOL), meaning it can't be sure the variable will always hold an actual function object rather than, say, a symbol naming one (figuring that out would require "global" analysis proving that *SOUNDEX-TR-FN* is never modified by any function anywhere, which is still, apparently, beyond the scope of most compilers). We can fix the warning with a THE expression. THE is one of CL's ways of adding manual static typing (which you may be familiar with from more primitive languages such as Java) into your code. Although CL implementations such as SBCL perform type inference, the compiler occasionally needs your help:
(funcall (the function *soundex-tr-fn*) ...)
Since we're sure that *SOUNDEX-TR-FN* is effectively constant (it's all private code under our own control, after all), it is safe to add in this THE.
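As a generic illustration of THE (ADD-EM is a made-up example, not part of the soundex code):

```lisp
;; THE asserts the type of a single expression.  With (safety 0) the
;; compiler trusts the assertion without emitting a runtime check,
;; which lets it open-code fixnum arithmetic here.
(defun add-em (x y)
  (+ (the fixnum x) (the fixnum y)))
```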

Another interesting set of warnings comes from NSUBSEQ:
(defun nsubseq (sequence start &optional (end nil))
  (if (vectorp sequence)
      (if (and (zerop start) (or (null end) (= end (length sequence))))
          sequence
          (make-array (- (if end
                             (min end (length sequence))
                             (length sequence))
                         start)
                      :element-type (array-element-type sequence)
                      :displaced-to sequence
                      :displaced-index-offset start))
      (let ((result (nthcdr start sequence)))
        (when end
          (setf (cdr (nthcdr (- end start -1) sequence)) nil))
        result)))
; in: DEFUN NSUBSEQ
; (ZEROP START)
; ==>
; (= START 0)
;
; note: unable to
; open-code FLOAT to RATIONAL comparison
; due to type uncertainty:
; The first argument is a NUMBER, not a FLOAT.
;
; note: unable to
; optimize
; due to type uncertainty:
; The first argument is a NUMBER, not a (COMPLEX SINGLE-FLOAT).
;
; note: unable to
; optimize
; due to type uncertainty:
; The first argument is a NUMBER, not a (COMPLEX DOUBLE-FLOAT).
...blah blah blah...
Those warnings go on for a while. What they all boil down to, though, is that the compiler doesn't know the type of START or END. Now here is a key point. Should you find yourself in this sort of situation, always remember what the great computer scientist Igor Stravinsky once said, "Lesser artists borrow, great artists steal." In other words, don't try to figure out the type declarations; steal them from SBCL. Using SLIME makes this easy. Just type M-. and then type SUBSEQ, because obviously it should take the same parameters. This will take you to the source code (you should always have the SBCL source code handy) for SUBSEQ:
(defun subseq (sequence start &optional end)
  #!+sb-doc
  "Return a copy of a subsequence of SEQUENCE starting with element number
   START and continuing to the end of SEQUENCE or the optional END."
  (seq-dispatch sequence
    (list-subseq* sequence start end)
    (vector-subseq* sequence start end)
    (sb!sequence:subseq sequence start end)))
Since we only care about the vector case here, move the cursor to VECTOR-SUBSEQ* and hit M-. again, to see how it declares its arguments:
(defun vector-subseq* (sequence start end)
  (declare (type vector sequence))
  (declare (type index start)
           (type (or null index) end))
  ;; blah blah blah
Ah! So what is the type INDEX? Use M-. once more and you'll discover this:
(def!type index () `(integer 0 (,sb!xc:array-dimension-limit)))
This makes sense. The type obviously has to be an integer, cannot be negative, and cannot be more than the maximum array length allowed by your CL implementation. This particular type definition is internal to SBCL and cannot be used directly, but we can create our own. The improved NSUBSEQ looks like:
(deftype index () `(integer 0 ,array-dimension-limit))

;; From http://darcs.informatimago.com/lisp/common-lisp/utility.lisp
(defun nsubseq (sequence start &optional (end nil))
  (if (vectorp sequence)
      (locally
          (declare (index start)
                   ((or null index) end))
        (if (and (zerop start) (or (null end) (= end (length sequence))))
            sequence
            (make-array (- (if end
                               (min end (length sequence))
                               (length sequence))
                           start)
                        :element-type (array-element-type sequence)
                        :displaced-to sequence
                        :displaced-index-offset start)))
      (let ((result (nthcdr start sequence)))
        (when end
          (setf (cdr (nthcdr (- end start -1) sequence)) nil))
        result)))
Note we use LOCALLY only when the argument is actually a vector; we can't make the same assumptions about other types of sequences. Plus we don't care about the list case (in fact a compiler warning for that case remains, but since we're not using lists in this example we'll skip the fix for it).

Let's look at optimizing UNIQ!. When we compile it, we get a number of warnings such as:
(defun uniq! (seq)
  (cond
    ((> (length seq) 1)
     (do* ((cur 0)
           (cur-elt (elt seq cur) (elt seq cur))
           (next 1 (1+ next)))
          ((>= next (length seq)) (nsubseq seq 0 (1+ cur)))
       (let ((next-char (elt seq next)))
         (unless (eql cur-elt next-char)
           (incf cur)
           (setf (elt seq cur) next-char)))))
    (t seq)))
; in: DEFUN UNIQ!
; (LENGTH SEQ)
;
; note: unable to
; optimize
; due to type uncertainty:
; The first argument is a SEQUENCE, not a (SIMPLE-ARRAY * (*)).
;
; note: unable to
; optimize
; due to type uncertainty:
; The first argument is a SEQUENCE, not a VECTOR.

; (ELT SEQ CUR)
;
; note: unable to
; optimize
; due to type uncertainty:
; The first argument is a SEQUENCE, not a (SIMPLE-ARRAY * (*)).
...blah blah blah...
Here the compiler is telling us that it could make some optimizations if it knew for sure that SEQ were a SIMPLE-ARRAY. What the heck is a simple array? Clicking the link (or using C-c C-d h in SLIME) shows:
The type of an array that is not displaced to another array, has no fill pointer, and is not
expressly adjustable is a subtype of type simple-array. The concept of a simple array exists to
allow the implementation to use a specialized representation and to allow the user to declare that
certain values will always be simple arrays.
In case you're wondering, "displaced" means an array slice (as in Perl or Python), and the other two concepts refer to various ways an array can be (or appear to be) extensible. Is it safe for UNIQ! to assume it is receiving a simple array? Yes; its input comes from STRING-LEFT-TRIM. Although STRING-LEFT-TRIM is allowed to return its input when there are no changes to be made, we know that it will always make a change because we always remove at least the first character of S2, and thus its return value will always be simple. So let's add a function declaration for UNIQ!:
(declaim (ftype (function (simple-array) string) uniq!))
This tells the compiler that UNIQ!'s single argument is a SIMPLE-ARRAY and that it returns a STRING. The reason we say it returns a STRING (instead of a "simple" sequence type) is that UNIQ! uses NSUBSEQ, which does return an array slice. Unfortunately, we still get compiler warnings after making this change (albeit fewer):
             (cur-elt (elt seq cur) (elt seq cur))
; note: unable to
; optimize
; due to type uncertainty:
; The first argument is a (SIMPLE-ARRAY * (*)), not a SIMPLE-STRING.
...blah blah blah...
This is telling us that it could generate better code if it knew the sequence was a string. ELT is a generic accessor that works on any sequence type; presumably if the compiler knows the sequence type is a string it can take advantage of the fact that each element is the same size. Anyway, this is easy to fix. SOUNDEX only deals in strings, so we can safely change the declaration from SIMPLE-ARRAY to SIMPLE-STRING. Sure enough, this makes the compiler warnings disappear.

The remaining changes were generally of the same nature as the above, so they are not covered here in detail, but you can see them in the following complete code listing, which contains all the CPU-related changes:
(declaim (optimize (speed 3) (debug 0) (safety 0)))

(defparameter *ascii-table*
  (let ((table (make-array '(256) :element-type 'character)))
    (loop
       for i below 256
       do (setf (aref table i) (code-char i)))
    table))

(defun make-tr-fn (from-table to-table)
  (declare (simple-string from-table to-table)
           (simple-array *ascii-table*))
  (let ((table (the (simple-array character) (copy-seq *ascii-table*))))
    (loop
       for from-char across from-table
       and to-char across to-table
       do (setf (aref table (char-code from-char)) to-char))
    (lambda (string)
      (declare ((simple-array character) string))
      (map-into string
                #'(lambda (c) (aref table (char-code c)))
                string))))

(defparameter *soundex-tr-fn*
  (make-tr-fn "AEHIOUWYBFPVCGJKQSXZDTLMNR" "00000000111122222222334556"))

(declaim (ftype (function (simple-string) simple-string) soundex-tr))
(defun soundex-tr (string)
  (funcall (the function *soundex-tr-fn*) string))

(deftype index () `(integer 0 ,array-dimension-limit))

;; From http://darcs.informatimago.com/lisp/common-lisp/utility.lisp
(defun nsubseq (sequence start &optional (end nil))
  "
RETURN: When the SEQUENCE is a vector, the SEQUENCE itself, or a displaced
        array to the SEQUENCE.
        When the SEQUENCE is a list, it may destroy the list and reuse the
        cons cells to make the subsequence.
"
  (if (vectorp sequence)
      (locally
          (declare (index start)
                   ((or null index) end))
        (if (and (zerop start) (or (null end) (= end (length sequence))))
            sequence
            (make-array (- (if end
                               (min end (length sequence))
                               (length sequence))
                           start)
                        :element-type (array-element-type sequence)
                        :displaced-to sequence
                        :displaced-index-offset start)))
      (let ((result (nthcdr start sequence)))
        (when end
          (setf (cdr (nthcdr (- end start -1) sequence)) nil))
        result)))

(declaim (ftype (function (simple-string) string) uniq!))
(defun uniq! (seq)
  (let ((seq-len (length seq)))
    (cond
      ((> seq-len 1)
       (do* ((cur 0)
             (cur-elt (elt seq cur) (elt seq cur))
             (next 1 (1+ next)))
            ((>= next seq-len) (nsubseq seq 0 (1+ cur)))
         (let ((next-char (elt seq next)))
           (unless (eql cur-elt next-char)
             (incf cur)
             (setf (elt seq cur) next-char)))))
      (t seq))))

(defun soundex (string)
  (let ((s (the simple-string
                (let ((maybe-a-copy (string-upcase string)))
                  ;; STRING-UPCASE can return the original string if no
                  ;; changes were needed.
                  (if (eq maybe-a-copy string)
                      ;; REMOVE-IF-NOT makes a copy
                      (remove-if-not 'alpha-char-p maybe-a-copy)
                      ;; DELETE-IF-NOT doesn't
                      (delete-if-not 'alpha-char-p maybe-a-copy))))))
    (when (plusp (length s))
      (let ((f (char s 0)))
        (let* ((s2 (soundex-tr s))
               (fc (char s2 0))
               (result (make-string 4 :initial-element #\0)))
          (setf s2 (the string (delete #\0 (uniq! (string-left-trim (vector fc) s2)))))
          (setf (char result 0) f)
          ;; Copy at most 3 digits; RESULT's index 0 already holds F, so
          ;; copying 4 would write past the end of the string.
          (let ((end (min (length s2) 3)))
            (loop
               for i from 0 below end
               do (setf (char result (1+ i)) (char s2 i))))
          result)))))


Here's the effect of the CPU optimizations:
CL-USER> (many-soundex)
Evaluation took:
0.357 seconds of real time
0.360022 seconds of user run time
0.0 seconds of system run time
[Run times include 0.008 seconds GC run time.]
0 calls to %EVAL
0 page faults and
22,406,384 bytes consed.
Although the results vary from run to run, 0.357 seconds is pretty typical on my hardware. This is approximately a 23% reduction in time compared to before. Not nearly as big a gain as the memory work, which is why one should optimize memory first. Still, 23% is nothing to sneeze at for a relatively small number of changes, most of which were pointed out to us by the compiler! If you're interested in learning more about optimizing CL, I recommend checking out SBCL's SB-SPROF package, which is more sophisticated than the SB-PROFILE package used here.
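As a parting sketch, here is one way to invoke SB-SPROF, the statistical profiler, against the MANY-SOUNDEX benchmark loop defined earlier (see the SBCL manual for the full set of options):

```lisp
;; Statistical profiling with SB-SPROF: samples the running program
;; rather than instrumenting individual functions, so it adds far
;; less overhead than SB-PROFILE.
(require :sb-sprof)

(sb-sprof:with-profiling (:report :flat  ; print a flat report when done
                          :reset t)      ; discard samples from prior runs
  (many-soundex))
```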

Tuesday, February 12, 2008

Scripting in CL

CL has been overlooked as a scripting language, even though CL code typically runs faster than equivalent code in traditional scripting languages. CL's interactivity and macro system make it ideal for quick-and-dirty hacking.

For one thing, its syntax is so simple that almost anything can be typed in at the CL command-line:
CL-USER> 23
23
That's a valid CL "program". It returns 23. So say you've forgotten how part of the language works; you can resolve the question by typing some test code at the command-line without breaking your concentration. For example, say you forget how division works in CL; you may have forgotten whether CL has support for rationals. Don't look it up, just try it out immediately:
CL-USER> (/ 23 45)
23/45
Sure enough, it doesn't return a float! This interactivity isn't just useful for exploring (or remembering) basic language features. It's also how you script in CL.

For example, here is how to write a CL version of the popular Unix utility grep, which you might then use as the basis for a log analyzer or some other script. Start by working interactively on a small part of the problem, such as how to see if a string contains a regexp. Of course our goal is to pick lines out of a file, but let's ignore the file part for now; we have to walk before we run. First define some sample data:
CL-USER> (defparameter *lines* "hi there
my name is bob
I live in Kalamazoo")
Now google for how to use regular expressions in CL. All roads lead to CL-PPCRE. To install it, do this:
CL-USER> (require :asdf-install) ; if you haven't already
CL-USER> (asdf-install:install :cl-ppcre) ; if you don't have it already
To use it, do this:
CL-USER> (require :cl-ppcre)
CL-USER> (use-package :cl-ppcre) ; to save typing cl-ppcre: a lot
Next use SLIME's autocomplete to figure out what functions are available:
CL-USER> (cl-ppcre:<TAB>
Flags: boundp fboundp generic-function class macro special-operator package

Completion: Flags: Score:
---------------------------------- ------- --------
cl-ppcre: b------ 98.44
cl-ppcre-test: b------ 90.11
cl-ppcre:*allow-named-registers* b------ 0.98
cl-ppcre:*allow-quoting* b------ 0.98
cl-ppcre:*regex-char-code-limit* b------ 0.98
cl-ppcre:*use-bmh-matchers* b------ 0.98
cl-ppcre:all-matches -f----- 0.98
cl-ppcre:all-matches-as-strings -f----- 0.98
cl-ppcre:create-scanner -fg---- 0.98
cl-ppcre:define-parse-tree-synonym -f--m-- 0.98
cl-ppcre:do-matches -f--m-- 0.98
cl-ppcre:do-matches-as-strings -f--m-- 0.98
cl-ppcre:do-register-groups -f--m-- 0.98
cl-ppcre:do-scans -f--m-- 0.98
cl-ppcre:parse-tree-synonym -f----- 0.98
cl-ppcre:ppcre-error ---c--- 0.98
...etc...
Although this technique may seem haphazard, in practice one finds what one wants in about a minute or two (at most), typically without losing one's concentration. Most of the functions can be trivially filtered out simply based on their names (e.g., PARSE-TREE-SYNONYM is obviously not what I'm looking for). Of the rest, you can either quickly check their documentation (C-c C-d d if you're using SLIME) or else, heck, just try them out immediately. In this particular case I settled on SCAN.
CL-USER> (scan "Kalamazoo" *lines*)
34
43
#()
#()
It doesn't matter what those return values mean (probably the offset of the match is in there somewhere); the key point is that SCAN would have returned NIL if there hadn't been a match. Or so we hope. We had better verify that (quickly; don't look at the documentation, just try it!):
CL-USER> (scan "snord" *lines*)
NIL

Ok, now how do we read in a line from a file? Well, first we have to open the file:
CL-USER> (open "/etc/motd")
#<SB-SYS:FD-STREAM for "file /etc/motd" {A78C2E9}>
That looks promising. A little more research on Google shows that it's best to use WITH-OPEN-FILE because that will automatically close the file (even if an exception is thrown).
CL-USER> (with-open-file (stream "/etc/motd"))
NIL
How do you read a line from a file? Turns out there's a function READ-LINE:
CL-USER> (with-open-file (stream "/etc/motd") (read-line stream nil nil))
"Linux D830 2.6.22-14-generic #1 SMP Fri Feb 1 04:59:50 UTC 2008 i686"
Ok, so we can read lines from a file, now what? Well, let's play around with the LOOP macro to print out all the lines of the file (don't worry about the regexp matching just yet):
CL-USER> (defparameter *motd* (open "/etc/motd"))
*MOTD*
CL-USER> (loop for line = (read-line *motd* nil nil)
               while line
               do (format t "~A~%" line))

Linux D830 2.6.22-14-generic #1 SMP Fri Feb 1 04:59:50 UTC 2008 i686

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
Ok, that worked. Now let's scan for all the lines that contain the word "Ubuntu".
CL-USER> (defparameter *motd* (open "/etc/motd"))
*MOTD*
CL-USER> (loop for line = (read-line *motd* nil nil)
               while line
               do (when (scan "Ubuntu" line)
                    (format t "~A~%" line)))

The programs included with the Ubuntu system are free software;
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
Ok, that looks good. Now let's write a function that combines everything we just did:
(defun grep (regexp filename)
  (with-open-file (stream filename)
    (loop for line = (read-line stream nil nil)
          while line
          do (when (scan regexp line)
               (format t "~A~%" line)))))
CL-USER> (grep "Ubuntu" "/etc/motd")
The programs included with the Ubuntu system are free software;
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
At this point you've seen what a typical CL scripting session is like, working from the bottom up.

Now the above was a pretty simplistic example, and it used (apart from CL-PPCRE) good old fashioned CL, which hasn't changed since the mid-1980s. Since then, scripting languages like Perl have been invented by smart people like Larry Wall, with many built-in features that are missing from stock CL. Why suffer without them? It would seem that you have to choose between a general-purpose, compiled language like CL and purpose-built, interpreted Perl. Fortunately, this is not the case! I've found that each piece of Perl functionality I've wanted has taken on the order of 45 minutes to an hour to implement using CL's powerful DEFMACRO facility.

Let's take, for example, Perl's magic <> operator. In Perl, you can write:
#!/usr/bin/env perl
# file: "foo.pl"

for $line (<>) {
    print $line;
}
This slurps up all the filenames on the command-line and lets you iterate over each line. E.g.,
% foo.pl /etc/motd /etc/resolv.conf
would print out those files, sort of like cat. Wouldn't it be nice to have a similar feature in CL? Let's start by taking our GREP function and adding the ability to customize the action taken on each line:
(defun do-lines (fn filename)
  (with-open-file (stream filename)
    (loop for line = (read-line stream nil nil)
          while line
          do (funcall fn line))))
DO-LINES is almost exactly the same as GREP, except that it just calls a function on each line of the file rather than assuming grep-like behavior. To verify that we can implement GREP in terms of DO-LINES, we try the following at the command-line:
CL-USER> (do-lines (lambda (line)
                     (when (scan "Ubuntu" line) (format t "~A~%" line)))
                   "/etc/motd")

The programs included with the Ubuntu system are free software;
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
Isn't that great? Baaaarf. Not really, eh? Kind of verbose compared to the Perl code, and it doesn't even handle multiple files like the Perl code does!

What we'd really like to be able to do is something CL-esque yet approximately equal to the Perl code in brevity. This sounds like a good use for a CL macro. Let's start by just fantasizing about what we'd like:
(with-lines-from-files (line "/etc/motd" "/etc/resolv.conf")
  (format t "~A~%" line)) ; just a simple example, could put anything here
Well, before going further, we need to enhance DO-LINES to take more than one file:
(defun do-all-lines (fn &rest filenames)
  (dolist (filename-or-list filenames)
    ;; For convenience we also allow you to pass in lists of filenames.
    (dolist (filename (typecase filename-or-list
                        (cons filename-or-list)
                        (t (list filename-or-list))))
      (with-open-file (stream filename)
        (loop for line = (read-line stream nil nil)
              while line
              do (funcall fn line))))))
Now we just need to write a macro so that you don't have to write all that crazy LAMBDA stuff:
(defmacro with-lines-from-files ((var &rest filenames) &body body)
  `(do-all-lines (lambda (,var) ,@body) ,@filenames))
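We can sanity-check the expansion at the REPL; MACROEXPAND-1 shows exactly the LAMBDA boilerplate the macro writes for us:
CL-USER> (macroexpand-1 '(with-lines-from-files (line "/etc/motd")
                           (format t "~A~%" line)))
(DO-ALL-LINES (LAMBDA (LINE) (FORMAT T "~A~%" LINE)) "/etc/motd")
T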
Armed with this macro, we can re-write GREP with way fewer lines (and the ability to grep across multiple files):
(defun grep (regexp &rest filenames)
  (with-lines-from-files (line filenames)
    (when (scan regexp line)
      (format t "~A~%" line))))
CL-USER> (grep "Ubuntu|name" "/etc/motd" "/etc/resolv.conf")
The programs included with the Ubuntu system are free software;
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
nameserver 68.87.69.146
nameserver 68.87.85.98
If it weren't for macros, we'd be stuck implementing grep with the clunky use of LAMBDA:
(defun clunky-grep (regexp &rest filenames)
  (do-all-lines (lambda (line)
                  (when (scan regexp line)
                    (format t "~A~%" line)))
                filenames))
The LAMBDA-free version of GREP is much easier to read; it almost reads like English: "with lines from the files, when lines scan the regexp, format the line". Whereas the LAMBDA version reads "with lines, LAMBDA (huh!!!?), when lines scan the regexp, format the line, across the filenames". No contest!

Similarly, other Perl operations can be easily implemented with macros:
;; The following macro is mostly like Perl's foreach, in the sense
;; that you can pass in as many references to sequences or "scalars"
;; as you want and it will iterate over them and allow you to modify
;; them. Unlike the Perl code, it sets the variable IT to each
;; element rather than $_. Also, you have to just pass in the hash
;; table directly, not a flattened list of hash keys.
(defmacro perl-foreach ((&rest refs) &body body)
  (let ((gensyms (loop repeat (length refs) collect (gensym))))
    (list*
     'let
     (mapcar #'list gensyms refs)
     (loop
       for ref in refs
       and indirect-ref in gensyms
       collect
       `(typecase ,indirect-ref
          (hash-table
           (maphash #'(lambda (key value)
                        (declare (ignore value))
                        (symbol-macrolet ((it (gethash key ,indirect-ref)))
                          ,@body))
                    ,indirect-ref))
          ((and (or vector list) (not string))
           (map-into ,indirect-ref
                     #'(lambda (it)
                         ,@body
                         it)
                     ,indirect-ref))
          (t
           (symbol-macrolet ((it ,ref))
             ,@body)))))))

;; trim whitespace in the scalar, the list, the array, and all the
;; values in the hash
(perl-foreach (scalar my-list my-array my-hash)
  (setf it (regex-replace "^\\s+" it ""))
  (setf it (regex-replace "\\s+$" it "")))
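Here's a quick REPL check of the macro on a "scalar" and a list (using the standard STRING-TRIM instead of regexes to keep the example self-contained; the variable names are made up):
CL-USER> (let ((scalar "  hello  ")
               (my-list (list "  a " " b  ")))
           (perl-foreach (scalar my-list)
             (setf it (string-trim " " it)))
           (values scalar my-list))
"hello"
("a" "b")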
Or:
(defmacro perl-splice (sequence-place &optional (offset 0) length replacement-sequence)
  (let* ((seq (gensym "SEQUENCE-PLACE-"))
         (off-arg (gensym "OFFSET-ARG-"))
         (off (gensym "OFFSET-"))
         (len (gensym "LENGTH-"))
         (end (gensym "END-"))
         (rep (gensym "REPLACEMENT-SEQUENCE-"))
         (left-part (list `(subseq ,seq 0 ,off)))
         (right-part (when length
                       (list `(subseq ,seq ,end)))))
    `(let* ((,seq ,sequence-place)
            (,off-arg ,offset)
            (,off (if (minusp ,off-arg)
                      (+ (length ,seq) ,off-arg)
                      ,off-arg))
            (,len ,length)
            (,end (when ,len
                    (if (minusp ,len)
                        (+ (length ,seq) ,len)
                        (+ ,off ,len))))
            (,rep ,replacement-sequence))
       (prog1 (subseq ,seq ,off ,end)
         (when (or ,rep (not (eql ,off ,end)))
           (setf ,sequence-place (concatenate (typecase ,seq
                                                (cons 'list)
                                                (t 'vector))
                                              ,@left-part
                                              ,rep
                                              ,@right-part)))))))

;; Now the syntax is almost exactly the same.
(setf front (perl-splice my-array 0 n))
(setf end (perl-splice my-array 0 (- n)))
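And a toy REPL example showing both the returned subsequence and the destructive update of the place:
CL-USER> (let ((my-list (list 'a 'b 'c 'd 'e)))
           (values (perl-splice my-list 1 2) my-list))
(B C)
(A D E)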

Tuesday, November 6, 2007

Effective .emacs

Tip #0: Use Emacs 22
Emacs 22 is super stable. About half of my .emacs file (before I cleaned it up) was loading stuff that's now part of Emacs 22 and has autoloads.

Tip #1: Never quit Emacs
Okay, this has nothing to do with your .emacs file, but I have to put it in here. Just because your .emacs file should load quickly doesn't imply that you should quit and restart all the time. Figure it out!

Tip #2: (require 'cl)
I put this at the top of my .emacs. It's a no-brainer. It adds in a ton of compatibility with CL, so that you can just use the CL functions you know and love (well, most of them, anyway), without a second thought.

Tip #3: Never LOAD, never REQUIRE
Your .emacs file shouldn't contain any calls to LOAD or REQUIRE (which are slow and often cause errors on startup). The only possible exceptions are loading files that contain nothing but autoloads (or similar stuff). How do you avoid loads and requires? First try removing each call to LOAD or REQUIRE to see if it's needed at all. Often (e.g., if you follow Tip #0) Emacs already has autoloads in place for the library (e.g., "cc-mode"). For other libraries, where that's not true, put your own autoloads in your .emacs file. For example, rather than load SLIME in my .emacs just so I can bind the F1 key to SLIME-SELECTOR, I have:
(autoload 'slime-selector "slime" nil t) ; the final T marks it as an interactive command
The only call to LOAD in my .emacs file is for "erlang-start", but if you look inside that file you can see it contains only autoloads (and morally equivalent stuff). I also load the custom file, but that's different; see Tip #7. I don't have a single call to REQUIRE (beyond that mandated by Tip #2).

Tip #4: Understand and use EVAL-AFTER-LOAD
Another reason why you might have strewn needless REQUIRE and LOAD calls throughout your .emacs file is that you need to call a function from a specific library. For example, let's say you want to set your default SQL database type to MySQL:
(sql-set-product 'mysql)
If you put this in your .emacs, you'll get an error because the SQL library isn't loaded so SQL-SET-PRODUCT isn't yet defined. But before you add a LOAD or REQUIRE, stop! Instead do:
(eval-after-load "sql"
  '(progn
     (sql-set-product 'mysql)
     ;; any other config specific to sql
     ))
As the name suggests, this will defer calling that code until the SQL module is actually loaded. This saves startup time and prevents errors!

Tip #5: Time your .emacs
You really ought to know how much time it's taking to load your .emacs file. Use the following in your .emacs:
(require 'cl) ; a rare necessary use of REQUIRE
(defvar *emacs-load-start* (current-time))

;; rest of your .emacs goes here

(message "My .emacs loaded in %ds"
         (destructuring-bind (hi lo ms) (current-time)
           (- (+ hi lo)
              (+ (first *emacs-load-start*) (second *emacs-load-start*)))))
After Emacs finishes initializing, you can switch to the *Messages* buffer and see how much of that time was taken by loading your .emacs. Mine now contributes less than one second!

Tip #6: Set background colors
Don't just settle for the default colors! Set them to what you really want. In my case I want a reverse-video effect:
(set-background-color "black")
(set-face-background 'default "black")
(set-face-background 'region "black")
(set-face-foreground 'default "white")
(set-face-foreground 'region "gray60")
(set-foreground-color "white")
(set-cursor-color "red")

Tip #7: Separate custom file
It's annoying to have your .emacs file modified by Emacs' "custom" library, especially if you check in your .emacs to a source code control system such as Subversion (which you should do) and synchronize it on multiple machines. Keep those customizations in a separate file:
(setq custom-file "~/.emacs-custom.el")
(load custom-file 'noerror)

Sunday, September 2, 2007

The Icarus Ultimatum

It is unlikely that every programmer is familiar with Icarus, but I bet that almost all programmers have something in common with him. Programmers are saturated with advice not to do things, similar to the advice Icarus' dad gave him about aviation. Don't use threads unless you really know what you're doing (and then don't use them anyway.) Don't use new language features (they're too dangerous.) Use the "right tool for the right job" (i.e., not the one you like.) Don't "prematurely" optimize (note: it's always premature.) Don't use macros, nobody will be able to read your code. Don't use multiple inheritance. You ain't gonna need it (you dork!) Don't, don't, don't; no, No, NO! For the rest of this essay I will refer to such advice as "Dead" advice, in honor of Icarus' father, Daedalus (and also to honor my own lack of spelling ability.)

Let's remember one thing about Icarus. Yes, he died a fiery, horrible death. But at the same time, he flew close to the freakin' Sun! How cool is that? Likewise, the most exciting developments in software have been crazy stupid. I remember how quixotic the notion that you could index and search the entire Internet (or even a significant part of it) once seemed. Only by thinking big does anything cool get done, it doesn't get done by sitting around pondering all the things you shouldn't be doing.

Dead advice exists for many reasons, of course. Programming is so open-ended that it is easy to create unmaintainable, unreliable, slow software. Programs start out as a blank slate; one can do anything. Without discipline, anything turns into unmitigated crap. Thus experienced programmers observe mistakes (including their own) and, sometimes, craft memorable axioms as a result. Unfortunately, these well-intentioned rules-of-thumb often turn into blindly-followed commandments by the time they penetrate the consciousness of millions of programmers.
For example, the original advice about premature optimization has become twisted over the years, such that programmers are afraid ever to optimize lest their effort be labeled "premature". Similarly, Microsoft's infamous detour into Hungarian notation started with a misunderstood paper by future astronaut Charles Simonyi. But this sort of telephone game isn't the only source of Dead advice.
"It's not guns that kill people, It's these little hard things!"
--- the Flash from "The Trickster Returns"
Sometimes Dead advice emerges not from a misinterpretation of the original advice but from a failure to perceive a problem's underlying cause. The treatment of threads in computer science illustrates this phenomenon well. Typical computer science programs teach little, if anything, about threads; threads are an "implementation detail" with minor theoretical import (compared, say, to data structures, Big O, recursion, etc.). What one does learn about them, though, makes them sound cool. Heck, let's face it, they are pretty darn cool. They're the expression of multitasking; a piece of magic. Junior programmers are disappointed when, on their first job, their mentors seem ready to pour boiling sulphuric acid on their keyboard if they so much as mention the possibility of using a thread. "My advice is not to use threads unless you're extremely experienced, a thread wizard, and even then don't use them." That's what they'll hear.

I'm going to step out on a limb here, insert earplugs to drown out the chorus of "boos" that I can already hear even though nobody's read this post yet, and state: there's nothing wrong with using threads. This is a big reversal for me; I spread the "threads are evil" gospel for many years. Yet I have confidence in this assertion for a number of reasons, foremost among them being that the admonitions against threads haven't helped much. How many applications have you used where the user interface freezes up every time the application does something that takes longer than a second or two? How many times have you found Cancel buttons (you know, those ones you're trying to hit when you realize you're accidentally deleting all of your un-backed-up files?) that take minutes to have any effect?

If anti-thread prohibitions haven't worked, what would? I've been casually researching this issue for a long time. At first I decided, like many, that threads are indeed evil, and that event-driven programming was the only way to go (programmers love nothing more than a dichotomy.) When other programmers would point out to me that event-driven programming is, at times, a giant pain in the rear, I'd silently mark them as wimps. Finally, I read a series of papers, including this one, that convinced me there was, at least, more to the story. Even more recently, someone clued me in to the work of Joe Armstrong, one of the only people outside of academia to have stepped back from the problem of threads and tackled, instead, concurrency, which was the real issue all along. By far the coolest thing Joe did was realize that you can in fact fly close to the Sun and spawn as many darn "threads" as the problem you're solving truly warrants. You can use events, too; it turns out there was no dichotomy between events and threads either.

I found this revelatory after having wasted a lot of time, over the years, writing things like HTTP parsers in frameworks that either force you to use only events, or frameworks that let you spawn threads easily (though not necessarily efficiently) but have no standard event-passing mechanism. It's not just that I didn't think of it myself, it's that almost everyone I talked to about this sort of issue was either in the "threads are evil" camp or the "events are confusing" camp. I wasted a lot of time following bad advice.

Experiences with Dead advice such as "threads are evil" have led me to question other Dead advice, such as:
  • Premature optimization is the root of all evil.

  • Never use multiple inheritance.

  • YAGNI.

  • Never use macros.
Each of these has a kernel of truth to it, but is also too easily misunderstood and has not had the overall effect on programming that was originally intended. If I have some time I'll try to give more examples of each of the above. In the meantime, I do have to mention one piece of (arguably) Dead advice that I haven't found any fault with yet:

Saturday, August 25, 2007

Macrophobia

As a fledgling CL programmer, I've found macros to be one of the language's most enduring charms. Yet they are also one of its most criticized (or at least feared) features. Much like threads have become the bogeyman of the programming world, so macros would be too, if more than a handful of programmers actually knew about them. Among this handful, though, it would be nice to dispel the myth that macros are too powerful.

I mean, obviously, they are, in some sense, too powerful. The common complaint about them is that junior programmers (especially) can create what appear to be language-level constructs that confuse the heck out of everyone else, impairing the code's "readability". As with most broadly-accepted maxims (in this case, broadly accepted by the 15 people in the world who actually know about the issue in the first place), it is well-intended, intuitively reasonable, and incorrect, for the simple reason that if, at your company, junior programmers are running amok with no mentors to teach them, the code will be unreadable whether or not it uses macros.

Frankly, I'd like to dispel the whole myth that there's great import to be placed on the "readability" of a programming language. I see blog after blog, each about as scientifically sound as my 10th grade Chemistry report (in case you're wondering, I'm not a chemist), about whether Perl is more readable than C, or vice-versa, or whether Python is more readable than CL, etc. This tiresome, endless debate stirs the fanboy in all of us, but is missing not just the forest for the trees, but the solar system for the planet.

I've read a lot of code in my day, I'll have you know. What matters most to readability is, first, comments, followed by identifier names, followed by there being as little code as possible, followed by program structure. Ok, I can hear the chorus of "boos" in the background at the preceding statement. All I can say is, most programmers have no idea how to write good comments (even though, as they say, the "force" is with them; they just have no idea, and lack a Yoda-like figure to explain it to them). When comments are good, there is nothing quite like it. The code could be written in TECO for all I care. Maybe blog, will I, about it, some day.

So macros cannot really get in the way of readability. In fact, they have the potential to greatly enhance it, used judiciously (and as pointed out earlier, if your company is not run judiciously, you have larger issues to work out first). How many of you out there have worked somewhere that had some kind of coding guidelines? I'd warrant that a fair number of you have. I certainly have. I'm talking about stuff like Ellemtel's Programming in C++, Rules and Recommendations.

Give those rules a once-over, if you haven't already. It's pretty complicated stuff. Use this feature this way, use that feature that other way, don't do this, do do that, etc, etc. Yawn. I've worked at companies that successfully enforced such rules (at least 80% of the time), and some that failed utterly. In both cases, it was wickedly hard to enforce these rules (successfully or not). All programmers had to read the coding guidelines document, understand it, remember it, and periodically re-read it when it was updated. Senior programmers had to enforce the guidelines during code reviews (which was sometimes easier than other times, depending on the stubbornness of the reviewee).

At more sophisticated shops, one could imagine, at least, that some of the rules would be enforced by build scripts. A poor solution if ever there was one. For one thing, build scripts tend to be written in a different language (e.g., Perl), and usually resort to crude regexp-matching in order to catch offending code. Only a handful of programmers at your company are brave (or foolish) enough to grok the build scripts. These poor souls find themselves, before too long, at the bottom of a pit of despair from which they never return (save by leaving for another team or company). Eventually they label themselves with self-deprecating monikers such as "build bitch".

Macros afford the possibility, at least, that some of these conventions be enforced within the language itself, rather than by a bunch of hacky Perl scripts (or no automated process at all). Take, for instance, the use of CL's DEFCLASS. Now DEFCLASS has roughly one grillion options for how to do this, that, or the other thing. It's also remarkably lax about, well, almost everything. If you want a member variable ("slot") called FOO but wish for the accessor to it to be called BAR, you can do that.

If you wanted to prevent such wanton creativity on the part of your less-trustworthy programmers, you could do so by writing a macro wrapping DEFCLASS which might both limit its complexity and enforce constraints such as the regular naming of accessors and the like. You could prevent the use of multiple inheritance if you found it too frightening (I'm not suggesting you go out and do this, just pointing out that you could). These rules would be enforced using code written in the same language as everything else, making them easier to write (in that they can harness the reflective power of the language itself, i.e., no hacky regexps) and making it easier to find volunteers to maintain and develop them.
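To make this concrete, here is a minimal sketch of such a wrapper (the name and conventions are invented for illustration): it allows at most single inheritance and derives each slot's :INITARG and :READER mechanically, so nobody can name an accessor BAR for a slot FOO:
(defmacro define-simple-class (name (&optional superclass) &rest slot-names)
  `(defclass ,name ,(when superclass (list superclass))
     ,(loop for slot in slot-names
            collect `(,slot
                      :initarg ,(intern (symbol-name slot) :keyword)
                      :reader ,(intern (format nil "~A-~A" name slot))))))

;; (define-simple-class point () x y) expands to a DEFCLASS whose slots
;; X and Y get readers POINT-X and POINT-Y and initargs :X and :Y.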

I could go on and on about this, but it's getting late so I'll just wrap it up there. One last thing, though: I urge the readers of this blog (all three of you) to re-evaluate the various maxims you may have assimilated over the years. Are threads really that bad if there is a problem facing you that is genuinely concurrent? Are you really optimizing prematurely? Is there really such a thing as a "scripting language"? Should operating systems and programming languages keep each other at arm's length? The list goes on.

Friday, August 24, 2007

Proactive Optimization

In the programming world, one of the most widely repeated maxims is, "Premature optimization is the root of all evil." Having found this to be one of the most well-intended yet misunderstood pieces of advice in the history of the Universe, I was all set to write a blog post about why. Unfortunately for me, a quick trip to Wikipedia led me to this article: The Fallacy of Premature Optimization, by Randall Hyde, which said everything I intended to say, and more, and did so more eloquently than I could ever have hoped to (although I might have put in some better jokes).

So the only thing I really have to add to that article is the following crude attempt at marketing: let's put a little spin on that phrase and turn it around! Whenever you hear "premature optimization is the root of all evil", retort (politely) with "but proactive optimization is the root of all goodness."

Sunday, June 17, 2007

CL-ROGUE

Over the last six months I have been porting the 1981 version of Rogue to CL. My intention was to learn the basics of CL; it's a straight port of the game. I wish I had thought of porting something sooner (especially Rogue, which I once ported to PC/GEOS), but for some reason this didn't occur to me until recently. It is a great way to learn a language if you don't have much time. I heartily recommend it (over, say, trying to think of a "pet project" to code up in the new language, which is cool, but requires a substantial investment). With porting, the project is already written and working, and you can pick it up and drop it frequently (if you have only a few hours a week, for example) without losing too much context (the original authors did all the hard work!) In the case of C and CL, porting was a good way to learn all the side-effect-y features of CL that are often glossed over in CL books (which focus more on functional programming).

Having spent most of my career doing C/C++/Unix (and a bit of C#/Windows), it is interesting to see first-hand the alternative road that CL would have taken us down. One cannot compare C and CL directly, because in CL the operating system and language are indivisible. That is probably its biggest strength. By contrast, although C is the "first class" language of Unix, it was not treated especially royally. For example, you cannot use C directly in the command shell. When I was first learning Unix, it was confusing that I couldn't call C functions from the so-called "C Shell".

At first, CL's REPL seems alien. Once you realize it is the same thing as the command shell, all makes sense. For example, I had a little difficulty understanding what ASDF was until I realized it's equivalent to make or ant. Many CL functions and packages correspond to Unix utilities that you'd normally have to run from the command-line. Likewise, the lack of an Eclipse- or Visual Studio-like debugger at first seemed to be a major weakness, until it dawned on me that you're always in the debugger in CL.

Porting C to CL was mostly mechanical and surprisingly easy. What was most intriguing was that some relatively obscure-sounding features of CL proved extremely useful (unfortunately I didn't discover this until late in the game). In particular, SYMBOL-MACROLET and DEFINE-SYMBOL-MACRO helped immensely (or at least they did once I figured out how to use them!)

The port itself is here.

Monday, March 12, 2007

Everything in Moderation

On the night of November 18th, 1849, lovable, bearded mountain man Grizzly Adams found himself stranded on the Alaskan tundra in his simple canvas tent, with nothing to do but wait out the blizzard raging all around him. At such times Adams was wont to compose computer programs in his head. Sometimes he would write the code on his notepad, and when next he found himself in a larger town, such as Fairbanks, he would have it telegraphed to the nearest computing facility. On this particular night, however, he did something extraordinary; he conceived of a brilliant improvement over assembly language, until then the only option available to programmers on the Difference Engine MXVII. His conception was the "C" programming language, which has dominated the programming world ever since (along with its minor variants, C++, C#, Java, Smalltalk and Lisp). C is a coating of syntactic sugar over Difference Engine Assembler (DEA). For example, C's "++" operator corresponds to the DEA ADD-YE-THE-NUMERAL-ONE-TO-THE-NUMERAL-IN-QUO operation code (which was a real b***h to write, by all accounts). C's other statement types translated readily into DEA, too. For example, the "for" loop reduced 30 lines of DEA (on average) down to about 10 lines (on average) of C: a big savings in those days, when one had to write code by hand, telegraph it to a computing facility (at a penny a word!), and wait patiently for the results.

Computer Science being the conservative field that it is, programming languages have changed little in the 158 years following C's inception. A few small enhancements have been added by C's successors, but these have been relatively minor, and have upheld the "spirit" of C. For example, Danish inventor Bjarne Stroustrup created "C++" by adding to C the concept of a "class", a convenient syntax for limiting the scope of global variables. Meanwhile, other language designers took different tacks. Whereas Stroustrup created a "higher-level" C with C++, Stanford professor John McCarthy created a "lower-level" C, which he named "Lisp", wherein one programs directly in the intermediate representation (aka "abstract syntax tree") that C compilers generate. Similarly, Alan Kay's "Smalltalk" is a minor variation of Lisp: it represents the C abstract syntax tree using fewer parentheses. Although Smalltalk and Lisp remain exceedingly close to C, they are the most daring of any new programming languages. Perl, Ruby, ML, Haskell, Python, SNOBOL, and others all fall somewhere on the spectrum between C and Lisp/Smalltalk.

Lest you worry that I'm suggesting progress in the field of programming languages has been glacial, fear not! I commend heartily the intentionally moderate pace of development. It has been for a good cause, after all: ensuring that programmers do not misuse overly-powerful language features which they do not understand. By limiting programming languages to the small set of primitive operations that Charles Babbage developed, over 160 years ago, we ensure that programmers only attempt those problems for which they are qualified. Think of these intentional limitations on programming languages as being akin to those signs you see at amusement parks, consisting of a leering Technicolor elf with a big cartoon bubble saying, "You must be this tall to ride The Bombaster". The intense feat of abstract thinking required to see how useful applications could be built from such low-level functionality acts as a sort of programmer's amusement park sign. Weak programmers recognize when they're "too short" and stay off the ride (and stay out of trouble!) This is, in fact, the cornerstone of the widespread high quality, stability and performance of modern software applications that we experience every day. So do not worry, I'm not suggesting it ever change!

That said, it is interesting, purely as an exercise, to consider what scientific and technological breakthroughs from the last 158 years could be taken advantage of in a new programming language. Let's see.

First, our basic notions about time have changed since 1849. Back then, Einstein's theory of Special Relativity, and all of the subsequent work in modern Physics, was, of course, unknown. Newtonian mechanics was the only thing Grizzly Adams would have known about, and that meant action-at-a-distance. Although Newton would not have told Grizzly that everything happened simultaneously, he would have become downright confused trying to explain to Grizzly how multiple concurrent activities would actually work. Thus C, and its variants, have no easy way to represent simultaneous activities. Everything happens in a single stream of time. If you want multiple things to happen simultaneously, you have to "fake it" by interleaving them in your code. For example, take this simple email client written in C:
TextBox toTxt = new TextBox("To: ", ...);
TextBox ccTxt = new TextBox("CC: ", ...);
TextBox cc2Txt = new TextBox("CCRider: ", ...);
TextBox subjectTxt = new TextBox("Subject: ", ...);
TextBox bodyTxt = new TextBox("", 24, ...);
TextBox statusTxt = new TextBox("Status", ReadOnly, ...);
Button sendButton = new Button("Send", &doSend);
Button cancelButton = new Button("Cancel");
Button resetButton = new Button("Reset");
Frame frame = new MainFrame("Email");
frame.Add(toTxt, ccTxt, cc2Txt, ...);

// ... imagine lots of gnarly code here ...

private const ipated void doSend(Button b)
{
    String[] smtpHosts = Config["smtp-hosts"];
    if (smtpHosts != null) {
        for (smtpHostIP, port in LookupViaDNS(smtpHosts)) {
            Socket sock = Socket.Open(smtpHostIP, port);
            int err = sock.Send("EHLO", ...);
            if (err != 0) {
                statusTxt.Update(String.Format("send error {0}", strerror(err)));
            }
            // etc...
        }
    }
}
This code functions, but suffers from problems such as the entire user interface freezing while hosts are being looked up via DNS, connections opened, handshakes shook, the contents of the mail being sent, etc. That could take a long time, and users get peevish when they cannot interact with an application for extended periods.

Since Grizzly Adams's day, Albert Einstein (and some other physicists, too, I think), frustrated by the dearth of good email clients, spent time extending Newton's concept of time. The details are far beyond the scope of this essay (or, for that matter, my own comprehension) but suffice it to say that they can all be distilled thusly: everything is relative. For example, if I am riding my skateboard down the freeway at 10mph, then an 18-wheel tractor-trailer rig bearing down on me from behind, at 60mph, will appear to me to be going 50mph (if I should be smart enough to turn around and look, that is). Of course, the driver of the truck perceives her truck to be moving at 60mph, and me to be moving at negative 50mph. Furthermore, the only way I'm even aware of the tractor-trailer's presence is due to signals of one sort or another (light and sound) bouncing around between the truck and myself. For example, I can see the photons that have bounced my way from the gleaming chrome grille on the front of the truck, and I can hear the desperate shrieking of the truck's horn as the driver attempts to avoid turning me (and my skateboard) into a pancake.

To keep programmers from turning themselves (and others) into metaphorical pancakes, these two concepts (concurrent activities happening in their own space/time continuum, and the signals that can pass between them) have been kept strictly separate, and have been kept rigorously out of any programming languages. This has been accomplished through the tireless efforts of a secret, underground organization (the cryptically-named Association for Moderation in Computing (AMC), a group about which little is known beyond their name and the few scant facts I am about to relay to you) that has maintained a long-standing disinformation campaign:
  1. AMC agents have infiltrated all language development committees and have ensured that these groups allow concurrency and signaling primitives to exist only in libraries, if indeed they exist at all.
  2. These same agents are also responsible for promulgating a false dichotomy between the two features, via strategically placed blogs, Usenet posts, blackmail and hallway conversations. Calling concurrency "threads" and signals "events", the agents have, Jason-like, thrown a proverbial rock into the crowd of computer scientists. Half of them are convinced that threads are evil and must be obliterated in favor of events, and the other half that the exact opposite is true. These camps war constantly with each other. And while the AMC's real aim is to prevent use of either feature, at least this way the damage is mitigated.
With all of that as a caveat, one could imagine a language where this sort of everything-is-relative concept, as well as signaling, were both built directly into the language. To save me some typing in the remainder of this essay, I refer to this hypothetical language as Relang (for RElative LANGuage; get it?). In Relang, anything logically concurrent could simply be coded that way, explicitly, without significant performance penalties. Just as in real life, the only way these concurrent activities could interact with each other would be through "signals", sort of like the photons and sound waves that physicists have discovered since Grizzly Adams's time. Each concurrent activity would probably have a data structure, such as a queue, to hold incoming signals, so that they could be processed in a comprehensible way.
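For the morbidly curious: the mailbox idea is easy enough to fake in a present-day language. Here is a rough sketch in Python (the Actor class and all of its names are my own invention, purely for illustration) of a concurrent activity that does nothing but drain its queue of incoming signals:

```python
import queue
import threading

class Actor:
    """A concurrent activity with its own mailbox of incoming signals."""
    def __init__(self):
        self.mailbox = queue.Queue()   # signals land here, in arrival order
        self.log = []                  # what the activity has processed
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def _run(self):
        # Process one signal at a time; no other code touches our state.
        while True:
            signal = self.mailbox.get()
            if signal == "stop":
                break
            self.log.append(signal)

    def send(self, signal):
        """The only way to interact with this activity from outside."""
        self.mailbox.put(signal)

    def join(self):
        self.send("stop")
        self._thread.join()

status = Actor()
status.send(("update", "send error 42"))
status.send(("invert",))
status.join()
print(status.log)  # → [('update', 'send error 42'), ('invert',)]
```

Nothing here requires language support, which is precisely why the AMC tolerates it: it stays safely quarantined in a library.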

If we were to imagine the above example code re-written in this new language, each of the UI objects would exist in its own space/time continuum and communicate with other UI objects by sending signals (such as photons, sound waves, folded-up pieces of paper, and lewd gestures) to them. One required change would be that lines like this:
statusTxt.Update(String.Format("send error {0}", strerror(err)));
would now read:
send(statusTxt, update, String.Format("send error {0}", strerror(err)));

Meanwhile, the statusTxt object would be sitting in a loop looking something like so:
// This function is operating in its own
// space/time continuum.
static private isolated formless void superTextTNT()
{
    E = receive(infinity);
    case E:
        {update, txt} -> self.txt = txt; send(mainFrame, update, self);
        {invert} -> self.invert(); send(mainFrame, update, self);
        ...etc...
    superTextTNT(); // this is a tail-call optimizing version of C
}
Now it is clear that each UI element is "doing its own thing" in its own personal space, and not trying to grope any of the attractive UI elements sitting on neighboring barstools. As a side-benefit, the UI would no longer freeze up all the time.

Now, of course, you are probably thinking, "Ok, wise guy, why don't languages work this way?" Well, first off, this feature falls smack dab into the too-powerful category; it would be misused left, right, up, down, and sideways (hence the AMC's disinformation campaign). Even were that not the case, however, there's this other problem. To implement the above efficiently, the language would have to be aware of when blocking (e.g., I/O) could occur. Otherwise each of those "receive" statements would tie up a real OS thread, which is too expensive in the general case. And so here is where the story takes a slightly sad turn. Why not make the programming language aware of when blocking occurs? Simple: because it is impossible to do that. While I am unsure of all the particulars, I have been assured by many top-notch computer scientists that that would be flat-out, 100%, totally, and absolutely, impossible. No way in heck it could ever be implemented. Oh well.

On the bright side, while the language feature is unavailable, there is an easy work-around. Use just two threads of execution. One creates a "user interface" (UI) thread and an "everything else" thread. Then one ensures that only UI-related activities occur on the UI thread and that, well, everything else occurs on the other. In practice, this has turned out to be a model of simplicity (as anyone who has done UI programming could tell you, it is trivial to separate UI from "everything else") and is a big part of why most applications with graphical user interfaces work so flawlessly. It is unfortunate that one uses threads at all (they are deprecated for a reason, you know), but at least there are only two, and they can be implemented in a library, without polluting the language itself. Further, because there are only two threads, signaling is kept entirely out of the picture; that extra bit of complexity is unnecessary with so few space/time continua to keep track of. In fact, the only downside is that two or more UI elements cannot update themselves simultaneously, since they are all operating on the UI thread. Fortunately, I have never heard of a graphical application that actually required this. Let us hope that such a beast never comes crawling out of the proverbial swamp.
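If you want to see the two-thread work-around in the flesh, it might look something like this Python sketch (the names and the pretend "work" are made up for illustration); note that the queue is the only channel between the two space/time continua:

```python
import queue
import threading
import time

ui_queue = queue.Queue()   # the UI thread drains this
rendered = []              # stand-in for the screen

def worker():
    # "Everything else": slow work happens off the UI thread...
    time.sleep(0.05)                   # pretend DNS lookup / SMTP chatter
    ui_queue.put("status: mail sent")  # ...and only posts results back

def ui_loop():
    # The UI thread only paints; it never blocks on I/O.
    while True:
        msg = ui_queue.get()
        if msg is None:    # sentinel: time to quit
            break
        rendered.append(msg)

t_worker = threading.Thread(target=worker)
t_ui = threading.Thread(target=ui_loop)
t_worker.start()
t_ui.start()
t_worker.join()
ui_queue.put(None)
t_ui.join()
print(rendered)  # → ['status: mail sent']
```

A model of simplicity indeed, as long as nobody ever calls a blocking function from the UI thread by mistake.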

Speaking of UI, let us now consider another as-yet untapped technological advance: modern display technology. Back when C was first invented, all programs had to be written down in long hand by the programmer, and then a telegraph operator had to manually translate it into Morse code (unless the programmer happened to be at the computing facility). C's extreme terseness stems from this. All languages inspired by C have kept true to its roots; they are easy to write longhand, too, just in case one has to write software without a modern display handy. If we were to give up this requirement (a bad idea in practice, mind you), how might a language make use of this Faustian feature?

Since the features that have been added to C (e.g., object-oriented programming) have focused on new ways to scope data, why not start using, say, color for scope, too? For example:
int i = 0;
private explicit adults only int foo()
{
    return i++;
}
This trivial example shows how one can avoid typing "static", and make it a little easier to see where the variable is used. You could imagine more complicated uses of this, though:
public mutable implicit void CrawlSites(String hostsFile)
{
    try {
        Stream hostStream = File.Open(hostsFile, "r");
        String host;
        int port;
        while (host, port = Parse(hostStream)) {
            sock = Socket.Open(host, port);
            CrawlSite(sock);
        }
    } catch (IOError fileError) {
        Blub.System.Console.ErrorStream.Writeln("unable to open hosts file: %s", fileError.toString());
    } catch (IOError socketError) {
        MaybeThrottleFutureConnections(host, port);
        Blub.System.Console.ErrorStream.Writeln("unable to connect to %s:%s: %s", host.toString(), port.toString(), socketError.toString());
    } catch (IOError parseError) {
        Blub.System.Console.ErrorStream.Writeln("error reading from '%s': %s", hostsFile, parseError.toString());
    } catch (IOError socketReadError) {
        Blub.System.Console.ErrorStream.Writeln("error reading from host '%s':%d: %s", host, port, socketReadError.toString());
    }
}

One could do more than just set the color of the variables themselves. For example, one could use the mouse to right-click on an entire line and set its background. Say you want to make sure that nobody messes with the log object in the following function:
private static cling void foo()
{
    Log log = new Log("foo starting");
    // ...imagine lots of stuff here...
    log.End();
}
The rule would be that lines with the same background color would all be in a special sub-scope that was unavailable to anyone else. If someone accidentally added in some code that referred to "log" in the middle of the big, hairy function, it would be flagged as an error.

As stated earlier, this feature (and related features that would require a graphical user interface) is kept out of programming languages for a reason: we might have to start writing code without computers around, someday. If languages themselves depended on a GUI, society would probably collapse while programmers scrambled to re-write everything in C or one of its variants.

A friend of mine thought of a solution for this problem, but it is so far "out there", I'm only going to mention it in passing: one could use XML to "annotate" the hand-writable parts of code. He noticed this as he was typing in examples such as the ones above, in fact: he was doing stuff like... [Editor's note: Ok, dammit, here's the problem. I can't figure out how to "quote" XML in this dang blog. It keeps getting interpreted as HTML, and so, while I'd like to show you what the code would look like the way I envision it, I'm going to have to hack around the problem. Instead of actual XML, I'm going to use a mock XML that uses parentheses instead of less-than and greater-than. It's also kind of late at night, and I'm getting tired, so I'm not going to type the end tags in. Let's call this XML variant PXML, for Parenthetical XML.] The example above, in PXML, would look like:
private static cling void foo()
{
    (color :yellow Log log = new Log("foo starting"));
    // ...imagine lots of stuff here...
    (color :yellow log.End());
}
The way this would work, with PXML (again, sorry I can't write out the real XML as I would prefer), is that, if you were writing out the code using a pen and paper (or a typewriter, or the dusty teletype you came across in your company's storeroom while looking for a place to smoke) then you'd write out the annotations such as the "color" PXML form directly. If you were using a modern IDE, on the other hand, the IDE wouldn't actually show you the PXML (well, not unless you actually asked it to). It would just show the whizzy colors (again, unless for some reason you preferred the PXML). And of course, you could use this mechanism for something more than just coloring and scope.
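An IDE could strip such annotations mechanically, no language changes required. Here is a rough Python sketch (the function strip_colors, and the one-annotation-per-line restriction, are simplifications of my own) that turns PXML-colored code back into plain code plus a line-to-color map:

```python
import re

# Matches a whole-line PXML annotation of the form (color :NAME BODY);
COLOR_FORM = re.compile(r"^(\s*)\(color\s+:(\w+)\s+(.*)\)(;?)\s*$")

def strip_colors(source):
    """Return (plain_code, {line_number: color}) for PXML-annotated code."""
    plain, colors = [], {}
    for lineno, line in enumerate(source.splitlines(), start=1):
        m = COLOR_FORM.match(line)
        if m:
            indent, color, body, semi = m.groups()
            plain.append(indent + body + semi)   # keep the code, drop the form
            colors[lineno] = color
        else:
            plain.append(line)
    return "\n".join(plain), colors

pxml = '''private static cling void foo()
{
    (color :yellow Log log = new Log("foo starting"));
    // ...imagine lots of stuff here...
    (color :yellow log.End());
}'''

code, colors = strip_colors(pxml)
print(colors)  # → {3: 'yellow', 5: 'yellow'}
```

The color map is exactly what the IDE would need to paint the backgrounds, and what the scope-checker would need to flag interlopers who touch "log" from an uncolored line.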

For example, suppose you were a big Python fan. You could edit your code in a sort of Python-looking mode. Under the covers, the IDE would be generating PXML, but it wouldn't show that to you; after all, you're a Python fan, right? I.e., if you wrote this in Python:
def foo(a, b):
    return a + b
Then the IDE could just generate the following under the covers:
(def foo (a b)
  (return (+ a b)))
Of course, in order to let you write code using any Python feature (list comprehensions, iterators, etc.), there would have to be some way for you to express those things in XML. Fortunately, as we all know, XML can represent anything, so this would not be a problem.
# Python
(x * y for x in l1 for y in l2 if x % 2 == 0)
would be:
(generator (* x y)
  ((x l1)
   (y l2))
  (= (mod x 2) 0))

[Again, sorry for not being able to write out the actual XML, which would look a lot cooler; I know that PXML is painful to look at. Darn blog! Actually, I guess since there is no fundamental difference between XML and PXML, the IDE could probably use either one, interchangeably, under the covers. But I doubt anyone in their right mind would actually choose PXML over XML, so it would not be worth the extra time to implement that.]
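This Python-to-s-expression translation is less far-fetched than it sounds: Python exposes its own abstract syntax tree, and a few lines suffice for the toy "def foo" example above. A sketch (handling only def, return, and +; everything else is deliberately left out):

```python
import ast

def to_sexp(node):
    """Render a tiny subset of Python's AST as a PXML-ish s-expression."""
    if isinstance(node, ast.FunctionDef):
        args = " ".join(a.arg for a in node.args.args)
        body = " ".join(to_sexp(stmt) for stmt in node.body)
        return f"(def {node.name} ({args}) {body})"
    if isinstance(node, ast.Return):
        return f"(return {to_sexp(node.value)})"
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        return f"(+ {to_sexp(node.left)} {to_sexp(node.right)})"
    if isinstance(node, ast.Name):
        return node.id
    raise NotImplementedError(type(node).__name__)

tree = ast.parse("def foo(a, b):\n    return a + b")
sexp = to_sexp(tree.body[0])
print(sexp)  # → (def foo (a b) (return (+ a b)))
```

Supporting generator expressions and the rest of the menagerie is, as they say, merely a matter of typing.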

In any case, the AMC would look askance at any of these proposed features, so let us resume thinking about another technology!

Perhaps the biggest invention to come along has been the "Internet". No longer are computers linked together via a tenuous chain of telegraph wires and human operators, as Grizzly Adams had to endure. Modern computers are linked up more or less directly by a vast web of routers, operated not by humans but by tiny elves who can move with great precision at near-light speed. It would be incredibly dangerous to make this "Internet" into a first-class concept within a programming language. Programmers would just share bad code with each other that much more easily! Nevertheless, just for fun, let's look at how things might work if one were to add such a dangerous feature. For inspiration, let us consider a feature that Sun Microsystems playfully suggested via an in-joke, of sorts, in Java's package system.

To use another company's Java package, Sun requires the following procedure:
  1. The programmer mails in a form, ordering a magnetic tape containing the desired Java package from the appropriate vendor.
  2. After a few weeks it arrives on a donkey cart, driven by a crusty old farmer.
  3. The programmer tips the farmer generously (if they ever want another delivery).
  4. The programmer must then use the cpio utility to remove the package from the tape.
  5. At this point, the package (which is in a "jar" file, named for irrepressible Star Wars character Jar Jar Binks) is intentionally not usable until you go through the further step of adding its location to an environment variable called the CLASSPATH.
  6. The programmer is now almost ready to use the package, provided that they restart any Java software running on their machine and have not mistyped the path to the jar file.
  7. It may also be helpful, at this point, for them to do a Navajo rain dance and sacrifice a chicken, especially if trying to get more than one "jar" file installed properly.
Now, the in-joke that I referred to above is the following: Sun's convention for naming these packages is to use "Internet" addresses (well, actually, reversed addresses, to make it more of an in-joke). I think they were making a subtle reference to the unimplemented feature whereby this:
import com.sun.java.jaaxbrpc2ee;
would cause the Java Virtual Machine to download the jaaxbrpc2ee package directly from "java.sun.com", without the programmer having to do anything. For numerous reasons this would never work. What if your computer weren't connected to this "Internet"? Or what if you were to download an evil version written by evil hackers? It is well-known that both of these problems are completely intractable, whereas the manual steps listed above ensure nothing like this could ever happen. Nonetheless, the direct downloading way does seem like it might speed up the pace of software development a bit! Too bad there are all those problems with it.

What if writing:
import com.sun.java.jaaxbrpc2ee;
actually did download, install, and, well, import the jaaxbrpc2ee package? Ignoring the impossibility of actually making a socket connection, finding space on local disk, etc. (more intractable problems, I'm afraid), I can think of two major issues with this:
  1. Security
  2. Versioning
Security would be a toughie. You might have to have an MD5 or SHA1 hash of the Jar Jar file available somewhere for the JVM to check, in order to determine whether it has downloaded a valid copy. Not sure this would work, but if it did you could actually download the Jar Jar file from your cousin Harold's warez site that he runs out of his basement. That way if com.sun.java was down you'd have at least one other option.
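That hash check is easy to sketch with today's tools. In Python (the package bytes, and the "published" digest derived from them, are of course made up for illustration):

```python
import hashlib

# Pretend these bytes are the real jaaxbrpc2ee package, and that the
# vendor published this SHA1 digest alongside it.
package_bytes = b"pretend this is jaaxbrpc2ee.jar"
published_sha1 = hashlib.sha1(package_bytes).hexdigest()

def verify(downloaded, expected_sha1):
    """Accept a downloaded copy only if it hashes to the published digest."""
    return hashlib.sha1(downloaded).hexdigest() == expected_sha1

harold_ok = verify(package_bytes, published_sha1)           # Harold's mirror, intact
hacked_ok = verify(b"evil hacked version", published_sha1)  # tampered copy
print(harold_ok, hacked_ok)  # → True False
```

So long as the digest itself comes from somewhere you trust, it doesn't matter one bit whose basement the bytes came from.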

Then there's the versioning issue. What if you were developing against version 2.3.5 of jaaxbrpc2ee and Sun released version 3.0.0? How would the JVM know not to use it? Chances are, you'd have to add some syntax to Java in order to handle this. You could use a regular expression to indicate which versions were acceptable:
import com.sun.java.jaaxbrpc2ee 2.3.*;
import org.asm.omygawd *.*.*.*;
import org.standards.jdbc.odbc.dbase2 19.007a*;

Of course, as you can see, anyone can publish a package that others could download and use directly. There could even be a free service that maintained these packages, for people who lacked the wherewithal to host their own packages. They'd just have to prove they owned their domain name. Also, people wouldn't be allowed to update an existing version, they'd have to release a new version.
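Matching versions against patterns like 2.3.* is mechanical, too. A Python sketch (treating '*' as standing for exactly one version component, which is just my reading of the hypothetical syntax):

```python
import re

def version_ok(pattern, version):
    """True if a version satisfies an import pattern like '2.3.*'."""
    # '.' is literal; '*' stands for exactly one version component.
    regex = re.escape(pattern).replace(r"\*", r"[^.]+")
    return re.fullmatch(regex, version) is not None

print(version_ok("2.3.*", "2.3.5"))      # → True
print(version_ok("2.3.*", "3.0.0"))     # → False
print(version_ok("*.*.*.*", "1.2.3.4"))  # → True
```

Whether the JVM should then pick the newest acceptable version or the oldest is, naturally, a question that could fuel a decade of flame wars.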

Ideally, you would be able to use the "Internet" anywhere in your code, not just when importing packages. Pretty much anything could use the "Internet":
com.att.bell-labs.int32 i = org.ieee.math.pi * (radius ^ 2);
For example, if you wanted to run code on another machine, you might say (in PXML with Relang extensions):
(on com.megacorp.server11:90210
  (if ...mumble... (send com.megacorp.server23:2001 '(d00d u suck))))
The above would run the "if" expression on server11.megacorp.com (port 90210), and that code would sometimes send a message to yet another machine. Of course, you could use variables instead of hard-coded machine names and ports. Web servers could communicate with web browsers (if they supported PXML/Relang) like so:
(import com.niftystuff.clock 1.*)

(def clock ()
  (register :clock)
  (receive infinity
    (update-thine-self (com.niftystuff.clock:draw)))
  (clock))

(def display-time ()
  (while true
    (sleep 1)
    (send :clock 'update-thine-self)))
;; imagine lots more code here...
(html
  (body
    (on browser
      (spawn clock)
      (spawn display-time)
      (do some other stuff))))
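Underneath, a (send host:port message) form is just a socket write. A Python sketch, substituting a listener on localhost for server23.megacorp.com (all the names here are mine, not Relang's):

```python
import socket
import threading

received = []

def listener(server_sock):
    # One concurrent activity waiting for a signal from the network.
    conn, _ = server_sock.accept()
    received.append(conn.recv(1024).decode())
    conn.close()

def send(host, port, message):
    """A rough stand-in for Relang's (send host:port message)."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(message.encode())

# In lieu of server23.megacorp.com, a listener on localhost:
server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0 = pick any free port
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=listener, args=(server,))
t.start()

send("127.0.0.1", port, "d00d u suck")
t.join()
server.close()
print(received)  # → ['d00d u suck']
```

The hard parts a real Relang would have to add (naming, versioning, security, and not blocking an OS thread per receive) are precisely the ones waved away above.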

Ah, but it's getting late, and I can see a strange van parked outside, and my dog is acting nervous (and I have heard something about an AMC "wet" team). Maybe it is time to post this blog and go to bed! The crazy language sketched out above is, well, just that: crazy!