
Rough Thesis Production Schedule

A bit of optimism…

thesis february, thesis march, thesis april, thesis may


February 9 2011 at 9 AM

Loose bedside inventory

This is a strange, experimental essay composed over winter break.

One white cap, striated, counter-clockwise, childproof. One inch, three tenths, twenty hundredths and seven thousandths in diameter, seven tenths, eighty hundredths and one thousandth in height. Twenty-four extra-strength Excedrin, caplets. Seven hundred ninety-four thousandths in length. Three hundred eighty five thousandths in width. Two hundred forty-one thousandths in height. Engraved with an “E”, does not stand for ecstasy. Super-dome like in cross section. One white label, three thousandths in thickness affixed to soviet-green translucent bottle. Promises for headaches, colds, arthritis, muscle aches, sinusitis, toothache, premenstrual and menstrual cramps delineated with bullets. You can call them at 800 468 7746. Possibly made in New Jersey.

Clear plastic blister on printed cardboard backing, nineteen thousandths thick. Contents formerly four, now two. Cylindrical, five-hundred fifty-six thousandths in diameter. Length measures one thousand nine hundred and ninety-one thousandths, and one point five volts. The positive nip, two hundred and twelve thousandths in diameter. Forty-four thousandths in height. Do not connect improperly. Made in U.S.A. Ne pas installer de manière inappropriée. Fabriqué aux É.-U.

Sixteen thousandths of cardboard, folded over. One thousand five hundred twenty-one thousandths in width. One thousand eight hundred ninety-two thousandths in height (closed). Tapered, one hundred twenty three thousandths at one end, two hundred sixty nine at the other. Profile like an arrow loop. Contents formerly twenty, now sixteen. Tip contains red phosphorus, potassium chlorate, sulfur and starch, a neutralizer, siliceous filler, diatomite and glue. Certain family members consider this a delicacy. Made in New Haven.

February 9 2011 at 9 AM

Godspeed, Comp Cameras

Computational Cameras and I have parted ways. I’m sure I’ll end up doing my share of pixel munging as I start work on Thesis II.

February 9 2011 at 8 AM

Street View Automatic

Why is it always daytime in Google Street View?

The disagreement between Street View’s 100:0 ratio of light to dark and my window’s less optimistic 50:50 ratio has been particularly jarring lately. What a tax on our brittle circadian rhythms!

I have created a bookmarklet to solve the simpler (street view) half of this disparity. Now, you can push a button to instantly cast any Street View scene into a weak approximation of darkness. The degree of night is based on what time it actually is in the corner of the world you’re viewing, combined with information on when the sun will rise or set.
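
The mapping itself is simple: figure out where the local time falls relative to sunrise and sunset, then fade a dark overlay in or out accordingly. Here is a rough sketch of that calculation in Processing. The one-hour twilight window, the linear fade, and the example times are my own assumptions, not the bookmarklet's exact values.

// Minimal sketch of the day/night mapping, assuming we already know the
// local time and today's sunrise / sunset (e.g. from the Earthtools API).
// The one-hour twilight window and linear fade are assumptions for illustration.

float darknessFor(float localHour, float sunriseHour, float sunsetHour) {
  float twilight = 1.0;  // hours over which to fade in / out

  if (localHour < sunriseHour - twilight || localHour > sunsetHour + twilight) {
    return 1.0;  // full night
  }
  if (localHour < sunriseHour) {
    return map(localHour, sunriseHour - twilight, sunriseHour, 1.0, 0.0);  // dawn
  }
  if (localHour > sunsetHour) {
    return map(localHour, sunsetHour, sunsetHour + twilight, 0.0, 1.0);  // dusk
  }
  return 0.0;  // full day
}

void setup() {
  size(320, 240);
}

void draw() {
  background(200);
  // Pretend 19.5 is the viewed location's local time, with sunrise at 7 and sunset at 19.
  float darkness = darknessFor(19.5, 7.0, 19.0);
  fill(0, darkness * 220);  // cap short of pure black, a weak approximation of night
  rect(0, 0, width, height);
}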

An open Street View window, left unattended, will now cycle from day, to night, and back again, indefinitely. No longer will you leave the house under the false promise of daylight at your destination.

The algorithm is operating on the four sample scenes above. If you’d like to give it a try, drag and drop the link below into your bookmarks bar (for quick access) or right click and add it to your bookmarks (for less obtrusive access).


Street View Automatic (drag this link to your bookmarks bar)


Next, navigate to Google Maps, and get into a Street View as you would otherwise. Once the view has loaded, give the new Street View Automatic link in your bookmarks bar a click to show the scene in its true (and current) light. Of course, if it’s actually daytime, you won’t see much change at all. The code also won’t work on embedded maps or portable devices.

My thanks to Jonathan Stott of Earthtools for making his excellent lat / lon to local time and sunrise / sunset API services available free of charge to the public.

January 30 2011 at 9 PM

Ten Face-Related Ideas and One Implementation

This post is in progress!

Film Faceprints (implemented and shown below) Run face detection on film frames, grabbing full-size frames in which non-face areas are masked off. Average these frames together to generate a single-frame representation of presence and characters over the duration of the film. This leaves you with a kind of thumbprint of the film and its characters. The results are kind of anticlimactic; there are only vague shadows of faces. A failed experiment, but it brings to mind some more interesting ways to approach the content. (Animating the accumulation of the average, scaling all of the faces to the same dimensions before averaging — or maybe ditching the averaging idea and trying a grid arrangement that would reduce a film’s narrative to a series of faces.)


Titicut Follies is at top. Bottom left is an excerpt from a film about Abraham Lincoln, bottom right is Yellow Submarine.

Perhaps more interesting are the algorithm’s leftovers. As it runs, the latest faces are dumped into a buffer and drawn to the screen. A couple of averages in progress are shown below:

Titicut Follies in progress, Yellow Submarine in progress

Here is the rather messy source code.
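
The core of the approach boils down to accumulating only the pixels inside detected face rectangles and dividing by the frame count at the end. Here is a stripped-down sketch of that step. It assumes the gab.opencv library for face detection and a placeholder frame-0.jpg, frame-1.jpg naming scheme for pre-extracted frames; neither is necessarily what the messy code above uses.

// Sketch of face-masked frame averaging. Library choice, frame count,
// and file naming are placeholders, not the original implementation.
import gab.opencv.*;
import java.awt.Rectangle;

OpenCV opencv;
float[] sumR, sumG, sumB;  // running sums of face pixels
int frameTotal = 500;      // placeholder frame count

void setup() {
  size(640, 480);
  opencv = new OpenCV(this, width, height);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  sumR = new float[width * height];
  sumG = new float[width * height];
  sumB = new float[width * height];

  for (int i = 0; i < frameTotal; i++) {
    PImage frame = loadImage("frame-" + i + ".jpg");  // placeholder naming scheme
    frame.resize(width, height);
    opencv.loadImage(frame);
    Rectangle[] faces = opencv.detect();

    // Accumulate only the pixels inside detected face rectangles;
    // everything else stays masked off (i.e. contributes black).
    frame.loadPixels();
    for (Rectangle face : faces) {
      for (int y = face.y; y < face.y + face.height; y++) {
        for (int x = face.x; x < face.x + face.width; x++) {
          int index = y * width + x;
          color c = frame.pixels[index];
          sumR[index] += red(c);
          sumG[index] += green(c);
          sumB[index] += blue(c);
        }
      }
    }
  }

  // Draw the average of all the masked frames.
  loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    pixels[i] = color(sumR[i] / frameTotal, sumG[i] / frameTotal, sumB[i] / frameTotal);
  }
  updatePixels();
}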

Quantify Contact Run face detection on the contents of your computer screen. Log how often faces are encountered in web browsing / photo editing / whatever. In this way the relative loneliness of extended sessions in front of a machine could be quantified.

Curb Paranoia Implement a face-detecting and obfuscating filter at a very low level (somewhere in the camera driver, probably). Pseudo-privacy protection.

Quantifibate Run face detection on your laptop’s camera all the time. Since computers tend to be left on, “uptime” doesn’t say much about the hours per day sunk into these machines. Face detection could give more accurate statistics about presence / attention.

Tenso Automate the face swapping / tenso meme.

Almost Face Go through large sets of face-tagged images (an iPhoto library, for example) and hand-pick all of the false positives to build a collection of almost-faces.

Street View Process Google Street View panoramas for faces. The hit rate might be a bit low since Google blurs faces, but it would be interesting to build a map of geolocated faces.

A few more to come…



January 27 2011 at 11 AM

Your World of Text

I spent twenty minutes trying to remember the name of this brilliant, unmoderated, real-time, infinitely-large canvas of collaborative and anti-collaborative text. It’s Your World of Text by Andrew Badr. The window above is live… anything you type is published instantly. If you run out of room, you can scroll to a fresh plot of page à la Google Maps.

Even more brilliant, Andrew released the source a while back. It’s interesting to see that it’s built on Django, and that clients keep in sync by polling the server rather than having data pushed from the server via Comet or a hidden socket.

January 26 2011 at 9 PM

Upload to Flickr from Processing

About The PImage Uploader
I’ve attached a quick Processing sketch that uploads PImages from a camera directly to Flickr each time you click the mouse.

The actual upload process is pretty simple — it just involves posting a bunch of bytes over HTTP to a specific URL. The hard part is getting Flickr to believe that you are who you say you are so that it will accept the images you upload.

That’s where this code is meant to help. In order to upload images to a Flickr account, your app will need write permission. In order to get write permission, you’ll need to go through the authentication process.

Basically, the first time your app wants to upload, it will open up a URL on the Flickr website prompting you to log in and “allow” the app to do what it wants to do. You may be familiar with this procedure if you’ve had to authenticate third-party apps that tie into Flickr (such as iPhoto or a desktop Flickr uploader). In the case of the attached code, Processing opens the authentication link for you, and then gives you 15 seconds to approve the app on Flickr’s website before continuing on its way.

After this, it stores the authentication data in a text file (called token.txt) local to the Processing sketch, so that you won’t have to go through the online authentication process each time you run the app. I’ve encapsulated this process into a single function called authenticate() to make things as simple as possible. If the token is lost or becomes corrupted, the app will automatically try to fetch a new one the next time it runs. (Note that you should not distribute any sketches with your own generated token file!)

The code makes use of a Flickr library for Java called flickrj. Since flickrj is a generic Java library and isn’t designed specifically for Processing, its use is not quite as intuitive as you’re accustomed to. For one, the steps to use the library with your sketch are a bit different. Instead of putting files in your ~/Documents/Processing/libraries folder, you’ll need to download the .jar file from the flickrj website and drag and drop it onto your sketch window. This creates a folder called “code” inside your sketch folder with a copy of the .jar file inside for your sketch to reference as needed.

If you prefer, you can create the folder and copy the .jar file manually. You’ll end up with the same setup as if you dragged and dropped the file. Also note that you’ll never see anything appear in the “import” menu list since flickrj wasn’t built with Processing in mind. The flickrj jar is included in the zipped uploader code below to make your life easier.
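
In either case, the sketch folder should end up looking something like this (the .jar filename below is just an example; use whatever the file is actually called):

flickr_uploader/
    flickr_uploader.pde
    code/
        flickrapi-1.2.jar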


The API / Library Conundrum
The amount of code and the number of steps involved in getting the necessary authorization are kind of ridiculous. It’s easy to imagine a range of places to improve upon the library.

Flickrj is a pretty direct mirror of the official Flickr API, and that’s how most API libraries are designed. It seems to be aimed at experienced Java programmers working on large-scale projects rather than the quick and dirty sketches typical of Processing work. It’s tough to find exactly the right balance between a library that makes sense relative to the official API and one that adds new features or code and leverages the paradigms of a particular programming language or framework.

For example, a Processing-specific library might incorporate a threaded image downloader that could return arrays of PImages from a given query. It could also wrap up the authorizations into a few lines of code as outlined in this post. These Processing-esque abstractions on top of Flickr’s own API abstractions add a lot of code and maintenance liabilities to our hypothetical library — but they would certainly open things up for beginner coders.
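
As a rough sketch of the downloader half of that idea: Processing's built-in requestImage() already loads images on a background thread, so the hypothetical library could just wrap it around a list of URLs returned by a Flickr query. The function name and URLs below are placeholders, not part of any existing library.

// Sketch of a hypothetical Processing-friendly helper: kick off threaded
// downloads for a batch of image URLs and return the PImages immediately.
PImage[] requestImages(String[] urls) {
  PImage[] images = new PImage[urls.length];
  for (int i = 0; i < urls.length; i++) {
    // requestImage() returns right away and fills in the pixels on a
    // background thread; width stays 0 until the download finishes.
    images[i] = requestImage(urls[i]);
  }
  return images;
}

PImage[] results;

void setup() {
  size(400, 400);
  // In the hypothetical library these URLs would come from a Flickr search.
  String[] urls = {
    "http://example.com/photo1.jpg",
    "http://example.com/photo2.jpg"
  };
  results = requestImages(urls);
}

void draw() {
  background(0);
  for (int i = 0; i < results.length; i++) {
    if (results[i].width > 0) {  // only draw images that have finished loading
      image(results[i], i * 200, 0, 200, 200);
    }
  }
}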

My Processing to-do list is pretty long, but I’ll add a new Flickr library filed under “maybe someday”.


The Code
The core of the sketch is shown below, but note that it will be easiest to download flickr_uploader.zip for testing, since it includes the flickrj library. The code looks a bit lengthy and convoluted, but it mostly consists of helper functions that take care of the authentication process and image compression to keep the upload process as simple as possible. The helper functions should be reusable without modification, so all you really need to worry about is creating the Flickr object, calling the authentication function, and then uploading to your heart’s desire.

// Simple sketch to demonstrate uploading directly from a Processing sketch to Flickr.
// Uses a camera as a data source, uploads a frame every time you click the mouse.

import processing.video.*;
import javax.imageio.*;
import java.awt.image.*;
import com.aetrion.flickr.*;
import com.aetrion.flickr.auth.*;
import com.aetrion.flickr.uploader.*;

// Fill in your own apiKey and secretKey values.
String apiKey = "********************************";
String secretKey = "****************";

Flickr flickr;
Uploader uploader;
Auth auth;
String frob = "";
String token = "";

Capture cam;

void setup() {
  size(320, 240);

  // Set up the camera.
  cam = new Capture(this, 320, 240);

  // Set up Flickr.
  flickr = new Flickr(apiKey, secretKey, (new Flickr(apiKey)).getTransport());

  // Authentication is the hard part.
  // If you're authenticating for the first time, this will open up
  // a web browser with Flickr's authentication web page and ask you to
  // give the app permission. You'll have 15 seconds to do this before the Processing app
  // gives up waiting for you.

  // After the initial authentication, your info will be saved locally in a text file,
  // so you shouldn't have to go through the authentication song and dance more than once.
  authenticate();

  // Create an uploader.
  uploader = flickr.getUploader();
}

void draw() {
  if (cam.available()) {
    cam.read();
    image(cam, 0, 0);
    text("Click to upload to Flickr", 10, height - 13);
  }
}

void mousePressed() {
  // Upload the current camera frame.
  println("Uploading");

  // First compress it as a JPEG.
  byte[] compressedImage = compressImage(cam);

  // Set some metadata.
  UploadMetaData uploadMetaData = new UploadMetaData();
  uploadMetaData.setTitle("Frame " + frameCount + " Uploaded from Processing");
  uploadMetaData.setDescription("To find out how, go to http://frontiernerds.com/upload-to-flickr-from-processing");
  uploadMetaData.setPublicFlag(true);

  // Finally, upload.
  try {
    uploader.upload(compressedImage, uploadMetaData);
  }
  catch (Exception e) {
    println("Upload failed");
  }

  println("Finished uploading");
}

// Attempts to authenticate. Note this approach is bad form,
// it uses side effects, etc.
void authenticate() {
  // Do we already have a token?
  if (fileExists("token.txt")) {
    token = loadToken();
    println("Using saved token " + token);
    authenticateWithToken(token);
  }
  else {
    println("No saved token. Opening browser for authentication");
    getAuthentication();
  }
}

// FLICKR AUTHENTICATION HELPER FUNCTIONS

// Attempts to authenticate with a given token.
void authenticateWithToken(String _token) {
  AuthInterface authInterface = flickr.getAuthInterface();

  // Make sure the token is legit.
  try {
    authInterface.checkToken(_token);
  }
  catch (Exception e) {
    println("Token is bad, getting a new one");
    getAuthentication();
    return;
  }

  auth = new Auth();

  RequestContext requestContext = RequestContext.getRequestContext();
  requestContext.setSharedSecret(secretKey);
  requestContext.setAuth(auth);

  auth.setToken(_token);
  auth.setPermission(Permission.WRITE);
  flickr.setAuth(auth);
  println("Authentication success");
}


// Goes online to get user authentication from Flickr.
void getAuthentication() {
  AuthInterface authInterface = flickr.getAuthInterface();

  try {
    frob = authInterface.getFrob();
  }
  catch (Exception e) {
    e.printStackTrace();
  }

  try {
    URL authURL = authInterface.buildAuthenticationUrl(Permission.WRITE, frob);

    // Open the authentication URL in a browser.
    open(authURL.toExternalForm());
  }
  catch (Exception e) {
    e.printStackTrace();
  }

  println("You have 15 seconds to approve the app!");
  int startedWaiting = millis();
  int waitDuration = 15 * 1000; // wait 15 seconds
  while ((millis() - startedWaiting) < waitDuration) {
    // just wait
  }
  println("Done waiting");

  try {
    auth = authInterface.getToken(frob);
    println("Authentication success");
    // This token can be used until the user revokes it.
    token = auth.getToken();
    // Save it for future use.
    saveToken(token);
  }
  catch (Exception e) {
    e.printStackTrace();
  }

  // Complete authentication.
  authenticateWithToken(token);
}

// Writes the token to a file so we don't have
// to re-authenticate every time we run the app.
void saveToken(String _token) {
  String[] toWrite = { _token };
  saveStrings("token.txt", toWrite);
}

boolean fileExists(String filename) {
  File file = new File(sketchPath(filename));
  return file.exists();
}

// Load the token string from a file.
String loadToken() {
  String[] toRead = loadStrings("token.txt");
  return toRead[0];
}

// IMAGE COMPRESSION HELPER FUNCTION

// Takes a PImage and compresses it into a JPEG byte stream.
// Adapted from Dan Shiffman's UDP Sender code.
byte[] compressImage(PImage img) {
  // We need a BufferedImage to do the JPEG encoding.
  BufferedImage bimg = new BufferedImage(img.width, img.height, BufferedImage.TYPE_INT_RGB);

  img.loadPixels();
  bimg.setRGB(0, 0, img.width, img.height, img.pixels, 0, img.width);

  // These output streams let us get at the compressed image as a byte array.
  ByteArrayOutputStream baStream = new ByteArrayOutputStream();
  BufferedOutputStream bos = new BufferedOutputStream(baStream);

  // Turn the BufferedImage into a JPEG and put it in the BufferedOutputStream.
  // Requires try/catch.
  try {
    ImageIO.write(bimg, "jpg", bos);
    bos.flush();  // make sure everything makes it out of the buffer
  }
  catch (IOException e) {
    e.printStackTrace();
  }

  // Get the byte array, which we'll hand off to the Flickr uploader.
  return baStream.toByteArray();
}

December 17 2010 at 3 PM

Spring Thesis Plans

THE POST-APOCALYPTIC PIRATE INTERNET
For background on the basic idea of the post-apocalyptic pirate internet, please read an earlier post on the subject.

Problem: The centrally-distributed internet is fragile and politically fickle
The web’s current implementation is built from millions of geographically dispersed clients communicating with a handful of extremely high-density data centers. Despite the many ⇔ many ideals of the web, the infrastructure looks more like many ⇒ one ⇒ many. This topology leaves points of significant vulnerability in the network: backbone fiber, ISP central offices, data centers, and so on all represent potential choke points. The destruction of physical infrastructure, or the installation of firewalls to screen and censor data at one of these points, could snuff out access to the web. That would be a shame, since the web is arguably the most significant aggregation of knowledge and culture humanity has ever assembled.

How could this knowledge be protected, and how could the current freedom of expression and exchange enjoyed on the centralized web reemerge under a distributed model that is technically immune to data loss and censorship?

Solution: Distributed, mesh-networked backups of the entire web
I propose a distributed backup system for the web to ensure the survival of data and continuation of the platform’s ideals in the face of a political or infrastructural apocalypse.

The basic unit of the post-apocalyptic pirate internet is the “backup node”. These are relatively small, suitcase-sized computers with lots of storage space. Servers, basically. They’re designed for use by consumers of average technical aptitude. Backup nodes would sit in the corner of a room and sip data from the internet to build a backup of some portion of the web. If and when the centralized web infrastructure falls apart, the backup nodes would be poised to respond by automatically transforming from data aggregators to data distributors. Requests for web data in the absence of centralized infrastructure (post-apocalypse) would instead be fulfilled by the backup nodes — at least to the extent that backups are available.

The technical infrastructure of the post-apocalyptic pirate internet has two basic components. The first is physical: local storage nodes — hard disks, flash memory, etc. — on which fragments of the web will be backed up, paired with a supporting computer and interface (most likely a browser). The second is ethereal: wireless communication, which will enable the formation of a mesh network between physically proximate nodes. This would give apocalypse survivors access to more than just the data stored on their local node. In this sense, a new internet would take shape as the backup nodes enmeshed — an internet not vulnerable to centralized oversight or obstruction.
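
To make the aggregator-to-distributor switch concrete, here is a minimal sketch of how a node might answer a request for a page, written as a Processing-style sketch to match the code elsewhere on this blog. Every function and name below is a placeholder; nothing like this exists yet.

// Illustrative only: how a backup node might serve a request for a URL,
// preferring the live web and falling back to its local archive.
HashMap<String, byte[]> localArchive = new HashMap<String, byte[]>();

byte[] fetchPage(String url) {
  if (centralizedWebIsReachable()) {
    byte[] page = fetchFromLiveWeb(url);
    localArchive.put(url, page);   // pre-apocalypse: quietly aggregate
    return page;
  }
  if (localArchive.containsKey(url)) {
    return localArchive.get(url);  // post-apocalypse: serve the backup
  }
  return askMeshNeighbors(url);    // otherwise, try nearby nodes over the mesh
}

// The three calls below are stand-ins for real network code.
boolean centralizedWebIsReachable() { return false; }
byte[] fetchFromLiveWeb(String url) { return new byte[0]; }
byte[] askMeshNeighbors(String url) { return new byte[0]; }

void setup() {
  println(fetchPage("http://example.com/").length + " bytes served from the node");
}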

Execution: Research demand and feasibility, then build a backup node
First I’ll have to figure out how / why, exactly, such a system could / should be built. How would the content of the backups be curated? By some distributed democratic means? By the usage patterns of the backup node’s owner? There’s a judgment to be made in deciding between saving the data people actually interact with on a daily basis (say, Twitter), and the data that actually carries forward knowledge essential to civilization (OpenCourseWare comes to mind).

What role will the backup nodes play before the apocalypse? Will they be seemingly dormant black boxes going about their work without human intervention, or will they become distribution points for content censored from the centralized web (WikiLeaks would be the example of the day)?

Marina has encouraged me to focus on the conceptual justifications for the system instead of technical implementation. However I’m personally interested in creating at least one actual node to demonstrate the concept. I understand the futility of the gesture, since the pirate internet would require thousands of backup nodes to be built, sold, and operated if it was going to actually protect (and eventually distribute) an appreciable amount of data. A single node is not particularly useful. Nevertheless, I’d like to end the semester with more than an exhaustive string of justifications / marketing material for something that doesn’t actually exist.

December 8 2010 at 3 PM