
Big Screens Ideas

I have a couple of ideas for big screens that attempt to visualize and force into perspective large volumes of real-time data. For a little context, here’s an archive of past big screens projects.

Private Radio

Private Radio concept still

Anyone carrying a cell phone has a radio signature… whether they like it or not, they are emitting and receiving radio waves as various gadgets talk to the web.

I’d like to fill the IAC with a network of antennas to pick up chatter from GSM / CDMA / WIFI wavelengths and map the audience’s radio presence to a visualization on the screen.

Ideally the antennas would have some sense of the location of different levels of signal strength throughout the room, which could in turn create regions of high and low radio concentration. If someone receives or places a call, presumably they would create a noticeable spike of activity on the map.

WiFi packet sniffers also give access to huge volumes of real-time data, although the vast majority is just the machine-machine chatter necessary to keep networks alive.

The scale of the screen would allow a 1:1 real-time heat map of radio activity in the space, possibly with node-style connections drawn between maxima. This map would be overlaid with data collected at different wavelengths streaming across the screen horizontally.
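The maxima-finding step behind those node-style connections could be sketched roughly like this (the grid size and signal values below are invented placeholders, not real readings):

```python
# Hypothetical sketch: bin antenna readings into a coarse grid of the room
# and pick out local peaks, which could anchor the node-style connections.
# The signal values here are made up for illustration.

def local_maxima(grid):
    """Return (row, col) cells strictly greater than their 4-neighbors."""
    rows, cols = len(grid), len(grid[0])
    peaks = []
    for r in range(rows):
        for c in range(cols):
            neighbors = [
                grid[nr][nc]
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if 0 <= nr < rows and 0 <= nc < cols
            ]
            if all(grid[r][c] > n for n in neighbors):
                peaks.append((r, c))
    return peaks

signal_strength = [
    [0.1, 0.2, 0.1],
    [0.2, 0.9, 0.2],
    [0.1, 0.2, 0.7],
]
print(local_maxima(signal_strength))  # [(1, 1), (2, 2)]
```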

I’m not completely sure of the technical feasibility of this project, and the hardware involved might be expensive (at best) or logistically / technically untenable (at worst) — I plan to speak with Eric Rosenthal and Rob Faludi for a reality check.

Real Time Web Clock

Real Time Web Clock concept still

Our daily use of the web consists of a call / response model that makes the web seem relatively stable and even a bit static. However, new content is added at such a remarkable rate that it might be more useful to think of the web as a real-time stream.

To put this into context: 100 years of video was uploaded to YouTube today. 7,309 edits were made to Wikipedia in the last hour. 4,459 photos were uploaded to Flickr in the last minute. Around 600 tweets were posted in the last second. For every second that passes on the clock, 4.5 hours are spent on Facebook.

I’d like to make a linear, timeline style clock that runs for exactly three minutes, starting with a blank screen and rapidly filling with real-time web content of various types.

The clock would probably be arranged by duration and depth. The front layer would be tenths of a second, the next individual seconds, and the back layer minutes. The clock wouldn’t “tick” but would scroll smoothly in real time. The layers would combine to create a parallax effect and build up a wall of content and noise over the course of three minutes.
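A quick sketch of the parallax arithmetic, with an arbitrary pixels-per-tick scale standing in for real design values: each layer scrolls in proportion to its own time scale, so the tenths layer streams past while the minutes layer crawls.

```python
# Hypothetical parallax sketch; the periods are from the layer scheme
# (tenths / seconds / minutes) but px_per_tick is an arbitrary stand-in.

LAYER_PERIODS = {"tenths": 0.1, "seconds": 1.0, "minutes": 60.0}

def layer_offsets(elapsed_seconds, px_per_tick=40.0):
    """Horizontal offset for each layer: px_per_tick pixels per tick of
    that layer's own period, so shallower layers move faster."""
    return {name: px_per_tick * elapsed_seconds / period
            for name, period in LAYER_PERIODS.items()}

# Three seconds in: the seconds layer has moved 120 px, the minutes layer 2 px.
print(layer_offsets(3.0))
```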

And for good measure, here’s one more idea that’s more of a vague pipe dream than an actual plan:

Live Coding
Has this ever been done before at the IAC? Is 3 minutes enough time to do anything? Presumably you could run a Python interpreter on top of Processing or something of the sort and distribute fresh strings of code to each Mac Pro using a socket server. Crashes and restarting would be problematic, and the Big Screens audience might not be nerdy enough to enjoy a process instead of a product.


Patrick: Using a prop to stage the radio scanning. An airport-security-style wand or kiosk?

Niel: Finding the wavelength of various web 2.0 services… interleave and audio.

September 24 2010 at 11 AM

Driving Force Paper Proposal

Synthetic biology stands to have a major influence on the course of technology over the next 5 – 15 years. Specifically, continuing decreases in the cost of DNA synthesis will allow for more experimentation with life’s building blocks by an increasingly diverse group of scientists and amateurs. The core uncertainty surrounding synthetic biology is not “if” or “when”, but rather how this newfound control over the stuff of life will factor into the future. The answer holds implications for a wide swath of fields from energy policy to artificial intelligence to bioterrorism.

The field’s most recent milestone was the creation of a self-replicating bacterial cell from a completely synthetic genome. This proves the basic viability of synthetic biology’s promise. A few other factors will work to compound the field’s influence: the creation of abstractions above the protein / DNA level will allow biological processes and characteristics to be treated as basic functional units in the design of new life. This abstraction process is already underway at The BioBricks Foundation and similar initiatives.

Research will consist primarily of a review of scientific literature on the topic — both technical material and bioethics-related commentary will be of interest. Statistical analysis of historical costs for the technical procedures associated with synthetic biology — perhaps most importantly, DNA synthesis — should reveal trends and allow for projections regarding critical cost milestones. Finally, interviews with researchers and amateurs who are working on the forefront of the field will round out my understanding of the role synthetic biology will play in shaping our future.

September 24 2010 at 5 AM

Foamcore Mouse

Original Apple Desktop Bus mouse
Finished foam core mouse

To get acquainted with three-dimensional prototyping in foam core, I created a model of the first mouse I ever used, the Apple Desktop Bus mouse. The mouse was first released in 1986 alongside the Apple IIGS.

I don’t have the original mouse on hand, so I used a combination of memory and photographs to reconstruct the approximate dimensions and proportions. (It might have been more interesting to have worked completely from memory, since I haven’t used one of these vintage mice in at least 18 years.)

I drew up the plans in Adobe Illustrator, printed them to scale, and then used the scale print to guide the cutting process for the model mouse.

Foam core plans
The final model

Original mouse photo by Pinot & Dita

September 22 2010 at 3 PM

Foam Phone

The finished foam phone

To get acquainted with prototyping with 2” blue insulating foam, I decided to build a large-scale model of a classic phone-booth telephone handset.

The process was relatively simple. Each step is documented below.

First, I cut two pieces of 2” thick foam down to the approximate size of the handset, and then joined the pieces using transfer tape.

Joining the pieces

Next, I sketched the basic outline of a two-dimensional version of the phone, and did a rough cut on the band saw.

Cutting plan, including relief cuts
First two dimensions of cuts

With a basic two-dimensional version of the phone in hand, I sketched out the third dimension and made the corresponding cuts on the band saw.

Planned cuts on the next plane
Finished cuts in three dimensions

And finally, the ear and microphone cups were sketched and cut. I removed a wedge of foam from each disk on the belt sander to make sure they would mate to the handset at a slight angle. A drill press took care of the holes in each disk.

Preparing the ear cups
Ear cups ready for attachment

I used another round of transfer tape to attach the disks to the handset. About 20 minutes of sanding and finishing work yielded the finished phone:

The final foam phone

I learned a few things about the material that will guide any future use:

  • Higher speed tools do cleaner, more consistent work — the belt sander and band saw avoid tearing / chunking the foam the way hand tools do.

  • Extra-wide transfer tape is worth the up-front expense for larger projects.

  • The foam seems to have a grain. Sanding in certain directions minimizes chunking. I haven’t figured out how to identify the grain.

  • Relief cuts make shorter work of tight curves.

September 22 2010 at 12 AM

Geo Bot Postmortem

My work on the graph bot ended up veering a bit from my initial plans — rather than constrain several automatons via lengths of string, I worked instead towards a group of drawing machines that would chart their course through a room by excreting yarn in their wake. The intention was to capture the criss-cross of attention on the web and to visualize larger patterns in the geographic distribution of its activity.

Although I eventually became less and less convinced of the conceptual merits of the project (for which I have no one to blame but myself), it was nevertheless a useful exercise in combining techniques from a number of disciplines.

A picture of the device’s guts is, I suppose, an appropriate place to start, since I spent an inordinate amount of time on this aspect of the project, chasing down minor details rather than reconsidering a more elegant approach to the entire concept.

The underside of the Geo Bot.

Here’s how the project’s requirements break down:

  • A mobile robot platform, associated circuit building and firmware development, a rudimentary navigation system, wireless communication and power.
  • A yarn storage and excretion mechanism that can reliably dole out yarn at a range of speeds.
  • Centralized control software and associated connections to live data sources on the web.

More to come on the process and discoveries made along the way.

May 7 2010 at 8 PM

Human vs. Computational Strategies for Face Recognition

Face recognition is one of the Mechanical Turk’s canonical fortes — reliably identifying faces from a range of perspectives is something we do without a second thought, but it proves to be excruciatingly tricky for computers. Why are our brains so good at this? How, exactly, do we do it? How do computational strategies differ from biological ones? Where do they overlap?

Behold: Chapter 15 of the Handbook of Face Recognition explores these questions in some detail, describing theories of how the human brain identifies and understands faces. A few highlights from the chapter follow:

First, a few semantic nuances:
Recognition: Have I seen this face before?
Identification: Whose face is it?
Stimulus factors: Facial features
Photometric factors: Amount of light, viewing angle

The Thatcher Illusion: Processing is biased towards typical views

Thatcher Effect


Beyond the basic physical categorizations — race, gender, age — we also associate emotional / personality characteristics with the appearance of a face. The use of these snap judgments was found to improve identification rates over those achieved with physical characteristics alone.

Prototype Theory of Face Recognition

Unusual faces were found to be more easily identified than common ones. The ability to recognize atypical faces implies a prototypical face against which others are compared. Therefore recognition may involve positioning a particular face relative to the average, prototypical face. The greater the distance, the higher the accuracy. (The PCA / eigenface model implements this idea.)

This also has implications for the other-race effect, which describes the difficulty humans have with identifying individuals of races to which they are not regularly exposed. However, the PCA approach to face recognition actually does well with minority faces, since they exist outside the cluster of most faces and therefore have fewer neighbors and lower odds of misidentification.


The prototype theory suggests that amplification of facial features should improve recognition and identification even further.

Here’s an example, the original face is at left, and a caricature based on amplifying the face’s distance from the average is at right:

Face and caricature

This also opens the possibility of an anti-caricature, or anti-face, which involves moving in the opposite direction, back past the average, and amplifying the result.

The original face is at left, the anti-face is at right:

Face and Anti-face
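The face-space arithmetic behind caricatures and anti-faces is simple to illustrate with toy vectors (the feature values below are invented for the example, not taken from the chapter):

```python
# A face as a short vector of measurements (values invented for illustration).
# The prototype is the mean face; a caricature extrapolates away from it,
# while an anti-face moves past it in the opposite direction.

def mean_face(faces):
    return [sum(vals) / len(vals) for vals in zip(*faces)]

def morph(face, prototype, k):
    """k=1 returns the face itself, k>1 a caricature, k=-1 the anti-face."""
    return [p + k * (f - p) for f, p in zip(face, prototype)]

faces = [[0.5, 0.5], [0.25, 0.75], [0.75, 0.25]]  # e.g. eye spacing, nose length
proto = mean_face(faces)         # [0.5, 0.5]
face = [0.75, 0.25]
print(morph(face, proto, 2.0))   # caricature: [1.0, 0.0]
print(morph(face, proto, -1.0))  # anti-face:  [0.25, 0.75]
```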

Interestingly, caricaturization also seems to age the subject, supporting the notion that age brings distinction:

Caricature aging


Prosopagnosia is a condition affecting some stroke / brain injury victims which destroys the ability to identify faces while leaving other visual recognition tasks intact. This suggests that face identification and recognition are concentrated in one area of the brain, pointing to a modular approach to processing.

(Images: Handbook of Face Recognition)

April 15 2010 at 12 PM

Geo Graph Bot Platform

I’ve created a quick hardware sketch of the Geo Graph Bot:

Current Revision

The bot receives commands over the air to steer, turn, etc. The wheels are too small, and the 9V battery is too weak for the steppers, so it’s not quite as fast / maneuverable as I expect the final version to be. Still, it works.

Here’s what it looks like in motion (it’s receiving commands wirelessly from a laptop):

Pending Modifications

Much of this version was limited by the supplies I had on hand. Several elements will change once the rest of the parts come in:

  • It still needs the compass modules. (And accompanying auto-steering code.)
  • Larger wheels (from 2” diameter to 4” or 5”) should increase speed and improve traction.
  • The whole thing will be powered by a 12v 2000mAh NiMH rechargeable battery. (Instead of a pair of 9Vs.)
  • There will be a mechanism for the excretion of yarn to graph the bot’s path.
  • Also planning on some kind of aesthetically satisfying enclosure once I have the final dimensions.
  • I will use my own stepper drivers instead of the Adafruit motor shield.

I’m reducing the scope slightly from the originally planned three bots to just two. The parts turned out to be more expensive than I anticipated, so my initial goal is to prepare two bots, and then if time / finances allow, create a third. Part of the idea is to create a platform that can scale to more bots later.

Steppers vs. DC Motors

I agonized a bit about whether to use stepper motors or DC motors to drive the bot’s wheels.

A plain DC motor seems to have some advantages in terms of control (you aren’t dealing with a digital signal), and since steering will be accomplished via a feedback loop from the compass data, their lack of precision probably would not be a big issue.

However, I already had steppers on hand, so I ended up using them instead. Steppers have a few advantages of their own. For one, there’s no need for gearing — in this case, the motor drives the wheels directly. Second, I have finer control over how far the bot travels and how it steers (assuming traction is good), so the platform itself will be more flexible for future (unknown) applications.

The big issue with steppers is that the Arduino code that drives them is all written in a blocking way… that is, you can’t run any other code while the motors are running. This was a problem, since I needed each bot to perform a number of tasks in the background while it’s driving around: it needs to receive data from the control laptop, monitor the compass heading, reel out yarn, etc.

For now, I’m using some work-around code that uses a timer to call the stepping commands only when necessary, leaving time for other functions. This might not hold up once the main loop starts to get weighed down with other stuff, so I might end up writing an interrupt-driven version of the stepper library.
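The pattern itself is language-agnostic, so here’s the gist sketched in Python rather than the actual Arduino C (the coil-pulsing is stubbed out to a counter):

```python
import time

# Sketch of the timer work-around: the main loop never blocks on the motor.
# update() steps only when the per-step interval has elapsed, leaving the
# rest of each pass free for radio, compass, and yarn duties.

class NonBlockingStepper:
    def __init__(self, step_interval_s):
        self.step_interval = step_interval_s
        self.last_step = time.monotonic()
        self.steps_taken = 0

    def update(self):
        """Call on every pass through the main loop; steps only when due."""
        now = time.monotonic()
        if now - self.last_step >= self.step_interval:
            self.last_step = now
            self.steps_taken += 1  # a real driver would pulse the coils here

stepper = NonBlockingStepper(step_interval_s=0.005)
start = time.monotonic()
while time.monotonic() - start < 0.1:  # stand-in for the Arduino loop()
    stepper.update()
    # ...receive commands, check the compass, reel out yarn...
print(stepper.steps_taken)  # up to ~20 steps in a tenth of a second
```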

April 12 2010 at 5 PM

Haiku Laureate

And now for something completely banal…


Haiku Laureate generates haiku about a particular geographic location.

For example, the address “Washington D.C.” yields the following haiku:

the white house jonas
of washington president
and obama tree

Much of the work we’ve created in Electronic Text has resulted in output that’s interesting but very obviously of robotic origin. English language haiku has a very simple set of rules, and its formal practice favors ambiguous and unlikely word combinations. These conventions / constraints give haiku a particularly shallow uncanny valley; low-hanging fruit for algorithmic mimicry.

Haiku Laureate takes a street address, a city name, etc. (anything you could drop into Google maps), and then asks Flickr to find images near that location. It skims through the titles of those images, building a list of words associated with the location. Finally, it spits them back out using the familiar three-line 5-7-5 syllable scheme (and a few other basic rules).

The (intended) result is a haiku specifically for and about the location used to seed the algorithm: the code is supposed to become an on-demand, all-occasion, minimally talented poet laureate to the world.


The script breaks down into three major parts: Geocoding, title collection, and finally haiku generation.


Geocoding:

Geocoding takes a street address and returns latitude and longitude coordinates. Google makes this easy: their Maps API exposes a geocoder that returns XML, and it works disturbingly well. (e.g. a query as vague as “DC” returns a viable lat / lon.)

This step leaves us with something like this:

721 Broadway, New York NY is at lat: 40.7292910 lon: -73.9936710

Title Collection:

Flickr provides a real glut of geocoded data through their API, and much of it is textual — tags, comments, descriptions, titles, notes, camera metadata, etc. I initially intended to use tag data for this project, but it turned out that harvesting words from photo titles was more interesting and resulted in more natural haiku. The script passes the lat / lon coordinates from Google to Flickr’s photo search function, specifying an initial search radius of 1 mile around that point. It reads through a bunch of photo data, storing all the title words it finds along the way, and counting the number of times each word turned up.

If we can’t get enough unique words within a mile of the original search location, the algorithm tries again with a progressively larger search radius until we have enough words to work with. Asking for around 100 - 200 unique words works well. (However, for rural locations, the search radius sometimes has to grow significantly before enough words are found.)
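Here’s roughly how that harvesting loop fits together, with the Flickr call replaced by a hypothetical stub (the real script needs an API key and hits Flickr’s photo search with the lat / lon and radius):

```python
from collections import Counter

# Sketch of the harvesting loop. fetch_titles is a made-up stand-in for
# the Flickr photo-search call, which needs an API key in the real script.

def fetch_titles(lat, lon, radius_miles):
    """Stand-in: would return photo titles found near (lat, lon)."""
    return ["Washington Square Park", "The Arch", "NYU at night",
            "washington square fountain", "Greenwich Village stroll"]

def harvest_words(lat, lon, min_unique=100, max_radius=32):
    radius = 1
    counts = Counter()
    while len(counts) < min_unique and radius <= max_radius:
        for title in fetch_titles(lat, lon, radius):
            counts.update(word.lower() for word in title.split())
        radius *= 2  # widen the search until we have enough unique words
    return counts

words = harvest_words(40.7292910, -73.9936710, min_unique=10)
print(words.most_common(3))
```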

The result of this step is a dictionary of title words, sorted by frequency. For example, here’s the first few lines of the list for ITP’s address:

{"the": 23, "of": 16, "and": 14, "washington": 12, "village": 11, "square": 10, "park": 10, "nyu": 9, "a": 9, "new": 8, "in": 8, "greenwich": 8, "street": 6, "webster": 6, "philosophy": 6, "hall": 6, "york": 6, [...] }

Haiku Generation:

This list of words is passed to the haiku generator, which assembles the words into three-line 5-7-5 syllable poems.

Programmatic syllable counting is a real problem — the dictionary-based lookup approach doesn’t work particularly well in this context due to the prevalence of bizarre words and misspellings on the web. I ended up using a function from the nltk_contrib library which uses phoneme-based tricks to give a best guess syllable count for non-dictionary words. It works reasonably well, but isn’t perfect.
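For reference, a common naive heuristic (not the nltk_contrib function the script actually uses) just counts vowel groups, with a tweak for the silent final “e”:

```python
import re

# Naive syllable heuristic for comparison: count runs of vowels, then
# knock one off for a silent final 'e'. Not the phoneme-based
# nltk_contrib function the script actually uses.

def count_syllables(word):
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and count > 1:
        count -= 1  # silent final 'e', as in "square"
    return max(count, 1)

for w in ("washington", "square", "haiku", "greenwich"):
    print(w, count_syllables(w))  # 3, 1, 2, 2
```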

Words are then picked from the top of the list to assemble each line, using care to produce a line of the specified syllable count. This technique alone created mediocre output — it wasn’t uncommon to get lines ending with “the” or a line with a string of uninspired conjunctions. So I isolated these problematic words into a boring_words list — consisting mostly of prepositions and conjunctions — which was used to enforce a few basic rules: first, each line is allowed to contain only one word from the boring word list; second, a line may not end in a boring word. This improved readability dramatically. Here’s the output for ITP’s address:

the washington square
of village park nyu new street
and greenwich webster
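The two boring-word rules can be sketched like this (the syllable counts are supplied by hand here; the real script computes them, and a fuller version would refill a line after dropping a trailing boring word):

```python
# Sketch of the line assembler with both constraints: at most one "boring"
# word per line, and no line may end on one. Word list and syllable counts
# are hand-supplied samples from the ITP-address example.

BORING = {"the", "of", "and", "in", "a", "from", "with", "at"}

def build_line(words, syllables, target):
    """Greedily pick frequency-ordered words to hit the syllable target."""
    line, total, boring_used = [], 0, False
    for word in words:
        s = syllables[word]
        if total + s > target:
            continue
        if word in BORING:
            if boring_used:          # rule 1: one boring word per line, max
                continue
            boring_used = True
        line.append(word)
        total += s
        if total == target:
            break
    if line and line[-1] in BORING:  # rule 2: never end on a boring word
        line.pop()
    return line

words = ["the", "washington", "square", "of", "village", "park", "nyu"]
syllables = {"the": 1, "washington": 3, "square": 1, "of": 1,
             "village": 2, "park": 1, "nyu": 2}
print(" ".join(build_line(words, syllables, 5)))  # the washington square
```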

More Sample Output

A few more works by the Haiku Laureate:

Chicago, IL
chicago lucy
trip birthday with balloons fun
gift unwraps her night

the gettysburg view
monument and from devils
of den sign jess square

Dubai Museum Bur
in Hotel The Ramada
with Dancing Room Tour

tokyo shinjuku
metropolitan the night
from government view

Canton, KS
jul thu self me day
and any first baptist cloud
the canton more up

Las Vegas, NV
and eiffel tower
in flamingo from view glass
at caesars palace

eve revolution
trails fabulous heralds blue
emptiness elton

monorail hide new
above bird never jasmine
path boy cleopatra

I’ve also attached a list of 150 haiku about New York generated by the haiku laureate.

Note that the Haiku Laureate isn’t limited to major cities… just about any first-world address will work. Differences in output can be seen at distances of just a few blocks in densely populated areas.

Source Code

The code is intended for use on the command line. You’ll need your own API keys for Google Maps and Flickr.

The script takes one or two arguments. The first is the address (in quotes), and the second is the number of haiku you would like to receive about the particular location.

For example: $ python geo_haiku.py "central park, ny" 5

will return five three-line haiku about Central Park.

The source is too long to embed here, but it’s available for download.

April 8 2010 at 8 PM