
ASCII Cellular Automata: CAd nauseam

There’s a slightly pathetic anticlimax when a cellular automaton bound for infinity runs into the edge of a computer screen and halts. This unfortunate behavior can be diverted by the most trivial of interface elements: the scroll bar.

So, I created an HTML / JavaScript implementation of Wolfram’s one-dimensional binary cellular automata: CAd nauseam. The name is intended to reflect the sense of conceptual exhaustion around this particular set of 256 rules, which has been researched, poked, and prodded within an inch of its life.

Give it a try: http://frontiernerds.com/cad-nauseam

As the CA is rendered, the page simply grows to accommodate as much output as you have patience for. It’s easy to pause, scroll back up, and reminisce about past lines. (If you’re into that sort of thing…)

In addition to the browser’s native scrollability, I added a few knobs and dials that let you manipulate the rules in real time.

In this context, ASCII doesn’t offer many endearing qualities beyond a certain nostalgic cheekiness, but I suppose one could argue that the output is easy to cut / paste and it allows the simulation to run relatively quickly in the browser. (At least compared to rendering the pixels with a table or using the canvas object.)

The code is based heavily on Dan Shiffman’s Processing example of the same CA. Just view the source to get a sense of how it works — although most of my contributions to the code are just interface-related cruft.

There are two ways to set the active rule. You can click each four-square icon to toggle that particular rule on or off. (The top three squares represent the seed condition, the single bottom square indicates whether that condition will turn the next pixel on or off.) Alternately, you can type the rule number you’d like to see directly into the text box at the top of the palette. Hit enter, or click outside the box to activate the rule.

As you change the rules, the URL will change to represent a direct link to the current rule set. For example, you can visit http://frontiernerds.com/cad-nauseam/#30 to see the output of rule 30.
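
Under the hood, the rule number itself is the whole lookup table: bit n of the rule gives the next state of a cell whose three-cell neighborhood spells out n in binary. Here’s a minimal Python sketch of that decoding (not the page’s actual JavaScript; it wraps at the edges for simplicity):

```python
# Decode a Wolfram rule number: bit n of `rule` is the next state for
# the neighborhood whose bits spell out n as (left, center, right).
def step(cells, rule):
    size = len(cells)
    next_row = []
    for i in range(size):
        # wrap around at the edges for simplicity
        neighborhood = (cells[(i - 1) % size] << 2) | (cells[i] << 1) | cells[(i + 1) % size]
        next_row.append((rule >> neighborhood) & 1)
    return next_row

# Render a few generations of rule 30 in ASCII, starting from a single X.
row = [0] * 31
row[15] = 1
for _ in range(8):
    print(''.join('X' if cell else '.' for cell in row))
    row = step(row, 30)
```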

The rest of the interface should be self-explanatory. “Reseed” will clear the latest line and replace it with a new line with a single X in the center. “Go / Stop” freezes the simulation so you can scroll through the history more easily. “Rule Info” takes you to the Wolfram|Alpha page describing the current rule.

Runs best in Safari; the experience is much slower and stickier in Firefox, IE, and Chrome.

April 7 2010 at 12 AM

Free Body Diagram

I sketched a free-body diagram of a hot glue gun’s stick-advance mechanism in an attempt to formalize its action. (I was also surprised to see hot glue used to secure wires inside the hot glue gun… a textbook chicken / egg paradox.)

April 1 2010 at 2 PM


Charles Babbage’s Brain, Photo: James Wheare


Can a machine accumulate enough information about your patterns of communication to create an effective digital doppelgänger? Could we use the data left behind on Google’s servers and our own hard disks to effectively replace ourselves with an artificial intelligence born and bred of our online conversations and quirks? What might it be like to have a conversation with a past representation of ourselves? And what would a hypothetical exchange between two digitally-reconstructed individuals look like?

Michael Edgcumbe and I approached these questions with Caprica, our rough attempt to commit to code some of the ideas of digital reincarnation put forth in the (reportedly mediocre) eponymous television series.

Both Michael and I have managed to retain a good portion of our instant messenger chat logs. My archives represent just over a half-million lines of conversation logged from about 2001 to 2004. Michael’s are a bit more recent, and weigh in at 34,000 lines. So data is in relative abundance.

The goal was to build an autonomous chat bot that would draw from the content of our logs to construct an infinite stream of back-and-forth conversation between our younger selves. Ideally, these conversations should be reasonably cogent and reflect whatever personality / themes we left behind in our logs.


Our initial approach to an algorithm was simple — the entire chat log can be considered a kind of question / answer training set. There’s a bit of latent intelligence built right into the log, since it literally documents how you responded to a wide range of queries. By finding which line in the log is the closest match to a given query, we should be able to walk forward a few lines and retrieve a reasonable response. This turns the problem into one of sentence similarity and avoids the issue of extracting and classifying meaning from the logs.
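
A toy version of that lookup, with simple word overlap standing in for the real similarity ranking (the names here are mine, not the project’s):

```python
# Treat the log as its own training set: find the line most similar to
# the query, then answer with the line that follows it.
def respond(query, log):
    query_words = set(query.lower().split())

    def overlap(line):
        return len(query_words & set(line.lower().split()))

    best = max(range(len(log) - 1), key=lambda i: overlap(log[i]))
    return log[best + 1]

log = ["how is the weather", "pretty gray out", "want to get lunch"]
print(respond("what's the weather like", log))  # prints "pretty gray out"
```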

There are some peculiarities about instant messenger conversations which needed to be considered:

  • Typos are rampant
  • Netspeak doesn’t play well with NLTK dictionaries and algorithms trained on more formal corpora
  • A new line of conversation often acts as a comma; single line responses and serial responses from one person are common

With these points in mind, we tried a number of techniques for ranking similarity between a query string and lines of logged conversation. First, we wanted to increase the opportunities for a match between the query and the log, so we used lemmatization / synonym lookup to expand the query.

For example, for the query “how about the weather”, each word is expanded into a list of synonymous terms:

['about', 'astir', 'approximately', 'close_to', 'just_about', 'some', 'roughly', 'more_or_less', 'around', 'or_so', 'almost', 'most', 'nearly', 'near', 'nigh', 'virtually', 'well-nigh'],
['weather', 'weather_condition', 'conditions', 'atmospheric_condition', 'endure', 'brave', 'brave_out', 'upwind']]

From there, the chat log is searched for lines containing these synonyms — each word match improves the score of a particular line, which means it’s more likely to wind up as the best match to the query.
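
In skeleton form, the scoring pass looks something like this (a simplified stand-in for the actual search code):

```python
# Each synonym list from the expanded query that matches a word in a
# line adds a point; the highest-scoring line wins.
def best_match(expanded_query, log_lines):
    best_line, best_score = None, 0
    for line in log_lines:
        line_words = set(line.lower().split())
        score = sum(1 for synonyms in expanded_query
                    if line_words & set(synonyms))
        if score > best_score:
            best_line, best_score = line, score
    return best_line

expanded = [["how"], ["weather", "conditions", "atmospheric_condition"]]
print(best_match(expanded, ["nice conditions today", "see you later"]))
```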

Other methods attempted include turning the logs into bigrams, to give a bit more weight to pairs of words used in context — this proved too slow to run in real time; we would need to set up a cache or database of bigrams for each log to use this approach in the future. (It’s currently scrapped from the working implementation.)

We also attempted to ignore line breaks in the logs and instead treat each stream of replies from one individual as a single chunk. This left us with unnaturally long-winded responses, slower searches (since the queries were much longer) and less of a quality improvement than we expected. (Also scrapped from the working implementation.)

Finally, our algorithm handles some basic housekeeping: A response gets flagged after it’s used, so that conversations won’t repeat themselves. Response scores are also normalized based on length, so that longer lines (with more potential word matches) don’t dominate the conversation. It also manages the eternal conversational bounce between each log: After a response is generated, that response becomes the query to the other log… ad infinitum until every single line is used.
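
The bounce itself reduces to a small loop; a sketch (function names are mine, not the working code’s):

```python
# Alternate between two logs: each response is flagged as used, then
# becomes the query against the other person's log, until one runs dry.
def converse(log_a, log_b, opener, pick_response):
    logs = [log_a, log_b]
    used = [set(), set()]
    query, transcript, turn = opener, [], 0
    while True:
        side = turn % 2
        response = pick_response(query, logs[side], used[side])
        if response is None:  # this log is exhausted
            break
        used[side].add(response)
        transcript.append(response)
        query = response  # the response becomes the next query
        turn += 1
    return transcript

# demo picker: first unused line (the real picker is the similarity search)
def first_unused(query, log, used):
    return next((line for line in log if line not in used), None)

transcript = converse(["a1", "a2"], ["b1"], "hi", first_unused)
```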

The source code is available on GitHub. The caprica3-presented.py file represents the most recent working implementation.


Here’s an excerpt of a hypothetical conversation between my adolescent self and Michael:

Edgwired: what are we lying about?
obrigado: the royal you
Edgwired: we had to transfer them as files rather than as music
obrigado: hah
Edgwired: heh
obrigado: wtf?
Edgwired: music is portable
obrigado: J.R. Rodale
Edgwired: plus
obrigado: additionaly
Edgwired: cool to hang out
obrigado: all this time coco
Edgwired: this is what i’m leaning towards
obrigado: i have assumed
Edgwired: LOL
obrigado: haha
Edgwired: what monitor?
obrigado: right
Edgwired: that one is pretty good
obrigado: that the version of remind me
Edgwired: fuck it
obrigado: actually it is
Edgwired: serious

The full text is also available.

Even with our crude implementation, the generated conversations are at least moderately interesting. Humans are quite good at finding patterns and extrapolating meaning where there is actually very little of either, and I think this helps mask the mediocrity of the algorithm.

Future Improvements

We have a number of ideas for improvements that didn’t make it into the first cut.

We considered stemming the logs to increase the number of matches. However, the search code we’re using at the moment allows for partial word matches, so I’m not sure how much we would gain from this step.

Another major issue is that the log data requires a massive amount of clean-up before it’s ready for use. Ideally, we would have a program that would automatically aggregate a user’s chat (or email, or twitter, etc.) data without them needing to dig up their logs from the depths of the file system and run a bunch of finicky clean-up routines to get the data ready for use. Michael and I spent a huge amount of time dealing with character encoding issues and generally restructuring the log data so that it was consistent for both of us. Writing a reliable, hands-off parser would be a lot of work, but it would be consistent with the goals of the project: to provide access to an interactive, digital representation of oneself.

Python starts to show its slowness when you’re handling many thousands of lines of strings… for efficiency’s sake, the logs would benefit from migration to a database system.
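
Even plain SQLite would be a start; a hypothetical sketch of the idea (the table layout here is mine):

```python
import sqlite3

# Load log lines into an in-memory SQLite table so that substring
# searches run in C rather than in a Python loop over the whole log.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, line TEXT)")
conn.executemany("INSERT INTO log (line) VALUES (?)",
                 [("how is the weather",), ("pretty gray out",)])

rows = conn.execute("SELECT id, line FROM log WHERE line LIKE ?",
                    ("%weather%",)).fetchall()
print(rows)  # [(1, 'how is the weather')]
```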

And most importantly, the sentence similarity approach is deeply naïve. There’s a lot more to the reconstruction process than finding word matches, and to improve the results we will really need a way to extract and tag actual data from the logs. We will need some way to identify major themes and then weave them together into more convincing conversation.

March 22 2010 at 10 AM

You Mean



Google’s automatic search completion gives an instant zeitgeist from just a few words of input. Here’s an example of it at work:

A universal auto-complete function would be a useless and wonderful thing to have, and right now I think Google’s search completion is as close as we can get. I’m interested in what would happen if a piece of text was forced to conform to Google’s platonic search query, essentially handing over final editorial authority to their algorithm — which in itself is just a representation of grooves worn into the search engine by millions of people searching for exactly the same thing.

Google sometimes imposes their assistance by placing a link at the top of search results suggesting “did you mean something?” This officious interjection is often creepily right — why yes, I did mean something.

Hence my proposed poetic form: You Mean. This form takes Google’s help a step further by forcing a given string through the suggestion algorithm and reconstructing output consisting entirely of suggestions.

For example, the paragraph above becomes the following:

Henceforth myspace proposed health care bill poetic forms you mean the world to me this form is submitted in connection with an application for takeshi kaneshiro google scholar help a reporter out step further or farther byu forcing amaryllis longest palindrome in a given string through the looking glass suggestion algorithms andkon reconstructing history output devices consisting essentially of entirely pets office depot suggestions lyrics.


First, I needed programmatic access to Google’s suggestions. Google itself was helpful enough to point me to this gentle hack of their toolbar code — a URL that you can hit for an XML list of suggestions for a given query. Handy.

Next, there was the issue of how to atomize input text. This proved a bit trickier, since judgments would have to be made as to how much of a line should be fed through the algorithm at a time. Initially, I tried sending words in individually. This was helpful in creating repetitive structures in the output, but I thought it was losing too much of the source text’s content.

So I implemented a recursive algorithm that takes the full text of a line, and then tests to see if there are suggestions for it. If there are suggestions, it declares success. If not, it pops a word off the end of the sentence, and tries to find a suggestion for the new, shorter line. It continues to pop words until it finds a suggestion, and then returns to the rest of the sentence and goes through the same process of shortening until a suggestion is found. Eventually, a whole line is digested this way. It unfairly weights the beginning of the line (since it’s tested first), but it seemed like a reasonable compromise between performance (the HTTP queries take some time) and content retention.

With some extra print statements, processing looks like this — showing the recursive approach to suggested-sentence generation:

You say: showing the recursive approach
trying: showing the recursive approach
no suggestions
trying: showing the recursive
no suggestions
trying: showing the
suggestion: shown thesaurus
trying: recursive approach
no suggestions
trying: recursive
suggestion: recursive formula
trying: approach
suggestion: approach plates
You mean: shown thesaurus recursive formula approach plates

Occasionally, Google gets stumped on a single word and runs out of suggestions. (“Pluckest”, for example.) In these cases, the algorithm relents and lets the original word through. It’s conceivable that an entire body of text could elude suggestions in this way, if the words were far enough from the online vernacular.


An interesting behavior emerges in canonical texts. Partial lines will be automatically completed with the original text, which gives the text a tendency to repeat itself.

For example, here’s Frost:

whose woods these are i think i know his house is in the village though
his house is in the village though thought for the day
he will not see me stopping here to watch his woods fill up with snow
to watch his woods fill up with snow snowshoe mountain
my little horse must think it queer to stop without a farmhouse near
to stop without a farmhouse near near death experiences
between the woods and frozen lake the darkest evening of the year
the darkest evening of the year by dean koontz
he gives his harness bells a shake to ask if there is some mistake
to ask in spanish if there is something lyrics mistake quotes
the only other sound’s the sweep sounds the same spelled differently sweepstakes
of easy wind and downy flake flake lyrics
the woods are lovely dark and deep poem
but i have promises to keep and miles to go before i sleep
and miles to go before i sleep meaning
and miles to go before i sleep meaning

Source Code

The code is designed to work in two possible configurations. You can either pass it text via standard input, which it will suggestify and spit back out. Or, you can run it with the argument “interactive”, which will bring up a prompt for you to experiment quickly with different suggested text transformations.

import sys
import urllib
import string
from xml.dom import minidom

# set to 1 for more output
debug = 0

def strip_punctuation(s):
    return s.translate(string.maketrans("", ""), string.punctuation)

# store suggestions in a dictionary for basic caching... then when parsing
# the text, fetch the suggestion from google only if we need to
suggestion_cache = dict()

# returns a list of google suggestions
def fetch_suggestions(query):
    if query in suggestion_cache:
        return suggestion_cache[query]

    # here's the suggestion "API":
    # google.com/complete/search?output=toolbar&q=microsoft
    query_string = urllib.urlencode({"output": "toolbar", "q": query})

    # returns some xml
    suggestion_request = urllib.urlopen("http://www.google.com/complete/search?" + query_string)

    suggestions = list()

    # handle the odd xml glitch from google
    try:
        suggestion_xml = minidom.parse(suggestion_request)
        # extract the suggestions and throw them in a list
        for suggestion in suggestion_xml.getElementsByTagName("suggestion"):
            suggestions.append(suggestion.attributes["data"].value)
        suggestion_cache[query] = suggestions
    except:
        pass

    suggestion_request.close()

    return suggestions

# glues together a list of words into a sentence based on start and end indexes
def partial_sentence(word_list, start, end):
    if len(word_list) >= end:
        sentence = str()
        for i in range(start, end):
            sentence = sentence + word_list[i] + " "
        return sentence.strip()
    else:
        return "partial sentence length error"

# takes a line and recursively returns google's suggestion
def suggestify_line(line):
    output_text = ""
    words = line.lower().strip().split(" ")

    if len(words) > 1:
        end_index = len(words)
        start_index = 0
        remaining_words = len(words)

        # try to suggest based on as much of the original line as possible, then
        # walk left to try for matches on increasingly atomic fragments
        while remaining_words > 0:
            query = partial_sentence(words, start_index, end_index)
            suggestions = fetch_suggestions(query)

            if debug: print "trying: " + query

            if suggestions:
                if debug: print "suggestion: " + suggestions[0]
                output_text += suggestions[0] + " "
                remaining_words = len(words) - end_index
                start_index = end_index
                end_index = len(words)
            else:
                if debug: print "no suggestions"

                # if we're down to a single word, relent and use the original
                if (end_index - start_index) == 1:
                    if debug: print "no suggestions, using: " + words[start_index]
                    output_text += words[start_index] + " "
                    remaining_words = len(words) - end_index
                    start_index = end_index
                    end_index = len(words)
                else:
                    # otherwise, try a shorter query
                    end_index -= 1

    # handle single word lines
    elif len(words) == 1:
        if debug: print "trying: " + words[0]
        suggestions = fetch_suggestions(words[0])
        if suggestions:
            if debug: print "suggestion: " + suggestions[0]
            output_text += suggestions[0] + " "
        else:
            # defeat, you get to use the word you wanted
            if debug: print "no suggestions, using: " + words[0]
            output_text += words[0] + " "

    return output_text.strip()

# are we in interactive mode?
if len(sys.argv) <= 1:
    # grab text from standard input and suggestify it line by line
    source_text = sys.stdin.readlines()

    output_text = ""
    for line in source_text:
        output_text += suggestify_line(strip_punctuation(line))
        output_text += "\n"

    print output_text

elif sys.argv[1] == "interactive":
    while 1:
        resp = raw_input("You say: ")
        if resp == "exit":
            break
        print "You mean: " + suggestify_line(strip_punctuation(resp)) + "\n"

March 12 2010 at 1 PM

Phys Pix

The (slightly anemic) start of an homage to the most beloved software of my youth, the original KidPix.

Based heavily on the PBox2D examples, this sketch lets you draw objects with physical properties onto a canvas.


March 2 2010 at 12 PM

Mechanisms Midterm Concept: TagBot

Some of the most interesting mechanical solutions seem to have emerged from the transition period between manual and automated means of production — the process of adapting tasks long performed by hand to operate under machine power for the first time. The initial iterations of the adaptation process usually result in endearingly anthropomorphic machines, since the process of abstracting human motions out of the process isn’t yet complete. (Examples include electric typewriters, sewing machines, automated assembly lines, etc.)

I’m interested in working through this process of converting hand power to mechanical power myself. A Dymo tapewriter represents an unlikely but possibly satisfying platform to turn into an automatic, electronic device.

I’m also interested in unintended and unknown physical consequences for actions taken online. The stream of new tag data on sites like Flickr could provide interesting source text, and would force the idea of a tag into the physical world – e.g. here’s a machine that involuntarily spits out sticky pieces of tape with letters on them that could, conceivably, tag real-world objects.

Thus, the TagBot. A mechanized, automatic Dymo tapewriter which scrapes new tag data from Flickr in real time, and generates labels accordingly.

A slightly more ambitious variant could be built with mobility in mind — you could position it somewhere in the city, and it would spit out tags from photographs taken in the vicinity.

Mechanically, several factors need to be accounted for:

  • Rotation and indexing of the character wheel — a stepper motor would probably suffice.
  • The space — a light squeeze on the handle advances the tape without printing a letter. A strong motor or solenoid could manage this.
  • Character printing — a harder squeeze on the handle.
  • Cut — a hard squeeze on another lever.
  • Pull — a motor to pull the finished tag out of the printer.
  • Tape reloading — Dymo tape rolls are pretty short; some kind of automated reloading system would be great, but probably beyond the scope / time available for the midterm.

Code and control will require the following:

  • An Arduino to coordinate the motions of each mechanical element.
  • An interface with the Flickr API to fetch the latest tags. (Either serially from a laptop, or directly over a WiShield.)
  • Code to reduce the character set to those present on the character wheel.
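
The last bullet is the simplest to sketch; something like this, assuming a wheel limited to capitals, digits, and a few marks (the real wheel’s character set may differ):

```python
import string

# Squash an arbitrary Flickr tag down to the characters assumed to be
# available on the Dymo character wheel (hypothetical set).
WHEEL = set(string.ascii_uppercase + string.digits + "-&.")

def to_wheel(tag):
    return ''.join(c for c in tag.upper() if c in WHEEL)

print(to_wheel("café au lait!"))  # prints "CAFAULAIT"
```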
February 25 2010 at 5 PM

Line Weight

NOC Midterm Concept: A physics-based drawing tool.

February 23 2010 at 12 PM

POS Shuffler

Homework #2: The digital cut-up. Write a program that reads in and creatively re-arranges the content of one or more source texts. What is the unit of your cut-up technique? (the word, the line, the character? something else?) How does your procedure relate (if at all) to your choice of source text? Feel free to build on your assignment from last week.

I wanted to build a cut-up machine that was as grammatically and syntactically non-invasive as possible, while still significantly munging the source text. So I decided to shuffle the text in a way that treated parts of speech as sacred — the words could fall where they may, but if the first word was a noun in the source text, it had better be replaced with a noun in the output. If the second word was a verb, nothing but a verb from elsewhere in the text should take its place, and so on.

Programmatically deriving a word’s part of speech turns out to be a major issue, so I leaned on NLTK to take care of this. It actually does a pretty decent job. From there it’s just a matter of storing lists of the words in a dictionary keyed to each part of speech, shuffling them, and then reconstituting the text.

I ran Obama’s most recent state of the union address through my algorithm. It’s formal enough and carefully constructed enough so as not to pose a significant challenge to NLTK’s Brown Corpus trained part of speech tagger. Also, I was struck by how much the output resembles the famous Bushisms, albeit with a lingering Obama-esque tinge.

Here’s some output:

Bloody Biden, succeed. Run Constitution, convictions from America, were moments, But great Allies: tested. union. declares during on time to look, the prosperity must time to union. history of the struggle at our For Sunday one leaders, our people have fulfilled all Bull Union chose done so of hesitations despite progress And America and we are turned again on the war that call and midst; in marchers as inevitable anything and fellow state. they’s tempting to move very of the guests and president at our victory distinguished civil, that President were back done to Tuesday and when the Beach was tested back about Our Omaha but these Americans great beaten that duty. Congress, nation crashed forward so of These Madame the future was of certain. tranquility. And first years periods was landed because doubt. Again, the depression was assume and Congress Speaker was rights on destined the market of our moments and the courage in our Black and at this our fears and divisions, our disagreements and our members, Vice prevailed that we have to answer always of one strife And 220 times.

They, It have When and much, we shall give information’s strength.

Sample output from the full text of Obama’s 2010 state of the union is also available here. The original text is also available for comparison.

The source code follows. Pretty simple; NLTK does most of the heavy lifting.

import sys
import nltk
import random
import re

# Grab a file from standard input, dump it in a string.
source_text = sys.stdin.read()

# Use NLTK to make some guesses about each word's part of speech.
token_text = nltk.word_tokenize(source_text)
pos_text = nltk.pos_tag(token_text)

# Set up a dictionary where each key is a POS holding a list
# of each word of that type from the text.
pos_table = dict()

for tagged_word in pos_text:
  # Create the list, if it doesn't exist already.
  if tagged_word[1] not in pos_table:
    pos_table[tagged_word[1]] = list()

  pos_table[tagged_word[1]].append(tagged_word[0])

# Scramble the word lists.
for pos_key in pos_table:
  random.shuffle(pos_table[pos_key])

# Rebuild the text.
output = str()

for tagged_word in pos_text:
  # Take the last word from the scrambled list.
  word = pos_table[tagged_word[1]].pop()

  # Leave out the space if it's punctuation.
  if not re.match("[.,;:'!?]", word):
    output += " "

  # Accumulate the words.
  output += word

# Remove leading white space.
output = output.strip()

print output

February 19 2010 at 1 PM

Vinyl as Visualization

Vinyl under an electron microscope.

A vinyl record, magnified. From Chris Supranowitz’s OPT 307 Final Project.

One of Arthur C. Clarke’s laws of prediction states that “any sufficiently advanced technology is indistinguishable from magic.” There’s something bootstrappy about one sufficiently advanced technology (SEM) laying bare the magic from a formerly advanced technology (Vinyl). In this case, to see the waveform etched in the vinyl is to understand how the medium works in a more-than-conceptual way. No magic required.

Yet the magnifier doesn’t shed the same revelatory light on a compact disc. There’s another layer of abstraction — and it’s arguably beyond visualization. (Still, it’s an unusual treat to see the atoms behind those ethereal bits… given our tendency to segregate the two.)

Via Noise for Airports.

February 18 2010 at 3 AM

Weight of Your Words

Assignment: Use a physics library.

Physics libraries like Box2D tend to use extremely rational language in extremely literal ways (mass, friction, gravity, etc.) — I wanted to build on this language by overloading its meaning and taking it in an absurdist direction. Electrons, in the quantities pushed around by our machines, certainly don’t carry much physical weight… how, then, can we weigh a string of characters?

Google seems to have this sorted out… just about any conceivable string of text can be quantified, weighed, and perhaps valued by the number of results it dredges up.

So I whipped up an app that literally uses Google’s search result count to determine how game elements behave — with the intention to pressure a player into testing their own judgment of the worth of a word against Google’s. It looks like this:

How does our understanding of how much a word weighs depart from Google’s absolutism? How much weight can you balance?

The game mechanics are pretty basic… Box2D manages the interactions between the words and the tilting ledge below. The ledge is attached to a joint in the middle of the screen, and if words are distributed unevenly it will tilt and send terms sliding into the abyss.

The cloud floating above has a text input box which sends search queries off to Google. A bit of code scrapes the resulting HTML to figure out how many results exist for the query. This runs in its own thread so as not to block the rendering / physics work. After the query count comes back from Google, the term you entered drops from the cloud onto the ledge. (You can influence where it will land by positioning the cloud with the mouse beforehand.) The more results a term has, the higher its density — this means that major search terms will help you load extra weight on the ledge, but their extra mass also means they’re more likely to tilt the ledge past the point of recovery. This, I hope, forces the player to estimate the weight of a term before they drop it from the cloud.
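
The count-to-density mapping is the only numeric trick involved; here’s a sketch with made-up constants (the actual scaling in the game may differ), using a log scale since result counts span many orders of magnitude:

```python
import math

# Map a Google result count to a Box2D-style density: popular terms
# weigh more, on a log scale so billion-hit terms stay manageable.
def density_for(result_count):
    return 1.0 + math.log10(max(result_count, 1))
```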

Here are a few more screenshots from development and testing:

Weight of Your Words help screen
Weight of Your Words tilting overboard

The game doesn’t work very well as an Applet (it uses the net library, which requires code signing), so it’s probably easiest to download the zip below if you’d like to give it a try.

The source is a bit long to embed here, so I’ve attached it below.

February 16 2010 at 8 AM