Extree is a small, desktop-sized Christmas tree that can be programmed. It’s intended as a third-channel feedback device (e.g., make it blink when you get an email), as well as an educational tool. Its circuit is as simple as it gets: two components, an Arduino Pro Mini and an Adafruit NeoPixel Ring, connected with three wires (VCC, GND, DATA). All structural parts can be produced on a laser cutter, and it connects to the computer using an FTDI cable.

I’ve written a hackish Arduino sketch and a Mono-based client for it. All of that, including the drawings as Autodesk Inventor source and DXF, can be found on GitHub.

Pebble, your light shines (too often)

Pebble is this great smartwatch that, among other things, sports an accelerometer and a backlight. The stock firmware supports a feature called “motion backlight” which combines the two: flicking one’s wrist, resulting in high acceleration, activates the backlight. During my first week of using the Pebble, I noticed that the backlight would quite often turn on when I didn’t intend it to, and that it would sometimes not turn on when I wanted it to. In short, the “wrist flick” gesture detection seemed to not perform so well.
So the question I want to answer is: how well does the current motion backlight implementation perform?
Spoiler alert: It does not perform very well. Over the course of two days, only 10 out of 1331 backlight activations were correct.


To answer this question, I wanted some data. So I wrote a watchface that would record every backlight activation and allow me to self-report correct and missed activations; thanks to the new Pebble SDK, that is now possible. First, I wrote a Pebble watchface to see which accelerometer tap events would activate the backlight (at some point the documentation said that Pebble uses these taps to activate the backlight); it turns out that every tap event turns on the backlight. Next, I tried to log every tap and self-annotation event to my phone using the new datalogging API. Unfortunately, that crashed the Android Pebble app, so it was back to good old App Messages. On the Android side, an app receives the data sent from the Pebble and stores it in a SQLite database.

Analysis. Once I had collected the data, I wrote an R script that removes one backlight activation for each self-reported correct activation (true positive, tp). All remaining backlight activations are then false positives, fp. The script plots the data over time and computes precision (tp / (tp + fp)) and recall (tp / (tp + fn)).
Limitations. This procedure has two main disadvantages. First, we don’t get the true negatives, tn: cases where the “wrist flick gesture classifier” correctly dismisses a movement as non-gestural. If we were interested in those, we’d have to think about time segmentation to count discrete tn events in a continuous time stream. Second, we assume that every tap event results in a backlight activation. Empirically this seems to hold for all activations more than three seconds apart. Unless someone from Pebble confirms the details of how the motion backlight feature works, though, there is no knowing.
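The bookkeeping behind these metrics is simple. The real analysis is an R script, but a Python sketch of the formulas looks like this:

```python
# A minimal sketch of the analysis step: given the counted true positives,
# false positives and false negatives, compute precision and recall.

def precision_recall(tp, fp, fn):
    """precision = tp/(tp+fp); recall = tp/(tp+fn)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# The counts observed in this experiment:
p, r = precision_recall(tp=10, fp=1321, fn=1)
print(f"precision = {p:.2%}, recall = {r:.1%}")
# → precision = 0.75%, recall = 90.9%
```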


I ran the “study” for about two days, during which I went about my normal daily activities. These include getting changed, roughly 30 minutes of cycling, time at the gym, and office work. During the recorded time, I deliberately used the “motion backlight” feature tp = 10 times, and it failed to activate when I wanted it to once (fn = 1). The backlight was turned on incorrectly fp = 1321 times. Precision (the probability of an activation being correct) was 0.7%, and recall (the probability of a “wrist flick gesture” being correctly detected) was 90%.
Plotting the backlight activations over time shows a few clear clusters that seem to occur during times of increased activity (cycling, cardio, squash), marked with a black line. During sleep or desk work, false activations rarely happen.

What does that mean?

Technical issue. The high number of false positives (fp = 1321) is probably due to the simple gesture detection mechanism, which not only needs to perform well, but also be energy-efficient. While no documentation of how the “tap” mechanism works is available, I’d reckon it’s implemented by thresholding the magnitude of the acceleration vector; something that is easy to implement in hardware. A more refined process or some post-tap logic for detecting the “wrist flick” gesture might bring the number of false activations down.
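A magnitude-threshold detector of the kind I suspect could look like the sketch below; the threshold value and units are pure assumptions, not Pebble’s actual parameters:

```python
import math

# Hypothetical "tap" detector: fire whenever the acceleration magnitude
# exceeds a fixed threshold. Threshold and units (milli-g) are assumptions.
TAP_THRESHOLD_MG = 1500

def is_tap(sample):
    """sample: (x, y, z) acceleration in milli-g; resting gravity is ~1000."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z) > TAP_THRESHOLD_MG

# Both a deliberate wrist flick and a pothole while cycling produce a spike
# that crosses such a threshold, which would explain the false positives.
print(is_tap((200, -300, 2100)))  # spike → True
print(is_tap((10, -20, 1000)))    # at rest (~1 g) → False
```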
Motion backlight usage. Seems like I’m not using the motion backlight feature that much. At least that’s what looking at the graph and number of deliberate backlight activations (tp = 10) over the course of two days would indicate. After this experiment I’ve turned motion backlight off, hoping to extend my battery life a bit.
Increased activity. Pebble’s motion backlight feature wastes energy when doing sports or at times of increased activity. In fact, I first noticed the false activations while cycling; looking at the graph confirms that. To some extent, this was to be expected. The tarmac around here is rough (to say the least), so false positives are inevitable while cycling. But also when changing clothes or during simple activities (such as soldering, or using a laser cutter), the Pebble tends to unintentionally light up.
This is a problem. I don’t know exactly how much energy is consumed per backlight activation, but I can speculate. Given that the Pebble has three backlight LEDs, assuming 5 mA per LED and 3 seconds per activation, each activation consumes 12.5 µAh. Given the 1321 false activations, that amounts to a total of 16.5 mAh: more than 12% of the battery capacity (130 mAh) over the course of two days.
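Spelling the estimate out (the LED current and on-time per activation are assumptions):

```python
# Back-of-the-envelope energy estimate for the false backlight activations.
leds = 3           # number of backlight LEDs
ma_per_led = 5     # assumed current per LED, mA
seconds = 3        # assumed backlight-on time per activation
battery_mah = 130  # Pebble battery capacity

per_activation_mah = leds * ma_per_led * seconds / 3600.0  # 0.0125 mAh = 12.5 µAh
total_mah = 1321 * per_activation_mah                      # ~16.5 mAh
print(f"{total_mah:.1f} mAh, {total_mah / battery_mah:.1%} of capacity")
# → 16.5 mAh, 12.7% of capacity
```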

How could this be fixed?

Off the top of my head, I can think of three solutions: a workaround, improved recognition, and using the user’s context.
The workaround. Turn off the motion backlight feature. As I didn’t use it much to begin with, I turned it off altogether. Of course, this is more a workaround than a proper solution; the motion backlight feature comes in handy at times and certainly is a neat trick.
Improved recognition. The way the “wrist flick” gesture detection is implemented right now is likely a combination of technical necessity and low-hanging fruit. The accelerometer hardware built into the Pebble can trigger the “tap” events and wake up the CPU; this way, the expensive CPU doesn’t have to be woken up so often. However, by using the accelerometer values before or after the tap event (e.g., by utilizing the accelerometer’s capability to store values), recognition performance could be improved.
User context. Using the user’s context, we could enable or disable the motion backlight feature. For example, as we now know (or at least have an indicator) that the motion backlight is often falsely triggered during sports activities, we could disable the feature during such times. If a user starts the RunKeeper app, for instance, that app could automatically disable motion backlight. For this, we’d need an API to disable the motion backlight feature.

Want to repeat this experiment?

I’ve uploaded all the source code, binaries, R scripts and my results on GitHub: https://github.com/32leaves/PebbleLightLogger – it’s all under an MIT license. Should anyone repeat this experiment, I’d be very interested to hear about it and see the results. Feel free to leave a comment below.

Personal fabrication with aluminium

Personal fabrication with laser cutters and 3D printers can mainly produce plastic (PLA, ABS, acrylic), wood (MDF, plywood) or fabric parts; metal parts, however, remain out of reach. While there are laser cutters (and water jets, plasma cutters) that can produce metal parts, they’re in a price range that’s typically too high for a FabLab, hackerspace or individual. The RepRap community has experimented with 3D printing liquid metal [1] and molten solder [2], but encountered issues such as the surface tension of the material.
Indirect processes, in which the artefact produced by the personal fabrication machine is not the end product but only an intermediary step, have become common. Resin parts have been cast using 3D-printed molds [3]; even carbon fibre cars [4] have been produced that way. A widely explored method of producing metal parts is a process called “lost wax [5]/PLA casting [6]”. This process involves producing a wax/PLA positive, surrounding it in plaster and pouring in hot metal, which replaces the wax/PLA positive. To do this, however, one needs a furnace to melt the metal; something that, again, is typically not found in FabLabs and hackerspaces.

A new process

The general idea of this process is

  1. to apply an etch resistant mask using a 3D printer or laser cutter,
  2. etch the part using saline sulphate solution [7],
  3. and remove the etch resistant mask.

This process is certainly not new in itself (it was inspired by the faceplate of the beautiful Audio Infuser 4700 [8]), but the steps along the way are. I’m primarily interested in the etch-resistant mask and how it can be produced in a personal fabrication setting. Several methods come to mind: the well-known toner transfer method, applying a layer of filament as a mask, drawing a mask using a permanent marker, or spray-painting the mask onto the aluminium using a laser-cut stencil. I’ve tried the last two methods. To test the process, I tried to produce a piece of jewellery: a Voronoi-like pattern.

Creating the pattern

To create the pattern, I reused some code of mine that generates Voronoi patterns [9]. I’ve added something like a perimeter editor to design the boundary of the necklace (yellow border), as well as a G-code export to draw the pattern and an OpenSCAD export to produce the 3D preview.
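The basic idea behind such a generator can be sketched as a nearest-seed rasterisation; my actual code (linked above) differs in detail and adds the perimeter handling and the exports:

```python
import random

# Sketch of a Voronoi pattern: assign every grid cell to its nearest seed
# point. The grid size and seed count here are arbitrary illustration values.
random.seed(42)
W, H, N_SEEDS = 12, 6, 4
seeds = [(random.uniform(0, W), random.uniform(0, H)) for _ in range(N_SEEDS)]

def cell_owner(x, y):
    """Index of the seed closest to (x, y)."""
    return min(range(len(seeds)),
               key=lambda i: (seeds[i][0] - x) ** 2 + (seeds[i][1] - y) ** 2)

# Render the cells as digits; the borders between regions are where the
# pattern's ribs would be drawn.
for row in range(H):
    print("".join(str(cell_owner(col + 0.5, row + 0.5)) for col in range(W)))
```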

1st shot: permanent marker

My first attempt was to have my MendelMax draw the mask using a permanent marker (STAEDTLER permanent Lumocolor, red, medium); it turns out this marker is not resistant to the saline etchant. The saline etch solution consisted of 200 ml water, 15 g sodium chloride (salt) and 25 g copper sulphate, which I stirred for about two minutes. After initially submerging the aluminium piece, I scraped off the red residue about every two minutes. Half an hour later, very little of the aluminium had been etched away, but quite a bit of the permanent marker had; I stopped the experiment.

Within the 30 minutes of etching, the material thickness was reduced by 0.06 mm, from 1.01 mm to 0.95 mm. It would seem that the etchant was not very powerful, yet strong enough to have an effect on the marker. Removing the red residue seemed to increase the reaction speed, if the formation of bubbles is any indication.

2nd shot: spray-paint stencil

As the permanent marker did not withstand the etching process, I tried a different masking material: acrylic paint. To apply the mask, I cut a stencil from 2 mm artboard; the pattern was generated using the OpenSCAD script exported from my Voronoi designer. Before painting on the mask, I cleaned the aluminium surface using acetone. Then I carefully fixed the pattern in place using brown parcel tape.

The saline sulphate solution consisted of 1 litre of water, 140 g sodium chloride and 70 g copper sulphate, again stirred for about 2 minutes. To not lose the small piece of aluminium in the bigger tank, I fixed a strip of brown parcel tape to the back of the aluminium sheet, which I then attached to the etchant container. I left the aluminium piece in the acid bath for about 2 hours, checking up on it every 15–30 minutes and again removing the red residue.

While the acrylic paint withstood the saline solution without any problems, its precise application proved difficult. Acrylic paint seems to have a high surface tension, preventing it from producing sharp edges around the stencil and thus making the whole process more imprecise. At some point, undercutting became a problem: the more material the etchant took away, the more bare aluminium there was – also underneath the mask. Thin traces were etched away like this.
This attempt was a success nonetheless. The saline solution was more potent this time, removing 0.6 mm of material thickness in a bit under two hours. After the etching, I did some cleanup using a Dremel and removed the mask in an acetone bath.

Going from here

Applying the mask was a very tedious and labour-intensive process. Toner transfer or depositing PLA directly onto the heated aluminium may be better ways to do it. Next steps could be to try those methods and see if the toner/PLA can withstand the etchant for a prolonged period of time.
Undercutting remains a problem that will be tricky to solve; using even thinner sheets may be a way to go. Alternative etching solutions/baths could also be explored.

Results and lessons learned

Overall I’m quite happy with the outcome, although it’s not exactly what I was going for. Once the masking problem is solved, undercut is the next problem to deal with. This time, undercutting prevented a longer etching time, so the holes of the Voronoi pattern are not fully developed. If I were to repeat the process, I’d make the traces thicker and use thinner material to begin with. While this method works fine for small items produced from thin sheet material, it will not work for non-flat or precise objects.


SenseLamp: an ultra-low-cost, WiFi-enabled sensor platform

The SenseLamp is an ultra-low-cost (less than 30 GBP/45 USD), WiFi-enabled sensor platform that is easy to build, easy to deploy and fully open source. A SenseLamp is a lamp shade that can be remotely controlled and gathers temperature, humidity, light-level and motion data. It runs Linux and can do some on-board processing before passing the data on to, e.g., busfarhn or COSM.

The story

One can write a story from two angles: a historical account or coming from the motivation. Since my motivation is something like “it sounded like a fun thing to do”, I’ll give the historical account.
The purpose of all this is to put a temperature, humidity, light-level and motion sensor in every room. And while we’re at it, throw in a relay to control a light-bulb. This combination allows some nice home-automation to be implemented. Even further: it provides a test-bed for activity recognition and helps to implement the idea of a quantified self. But in the end, it really just sounded like a fun thing to do.

Mark 0

SenseLamp v0.0

After I had finished a first version of busfarhn and implemented the central heating control, I wanted to take things to the next level and started working on the SenseLamp. It didn’t take too long to whip up a first prototype: SenseLamp v0.0. It consisted of a TP-Link WR703N, a TI Launchpad with an MSP430G2553, a relay module, an HC-SR501 PIR sensor and a DHT11 temperature/humidity sensor. All neatly wired up with jumper cables and attached to a lunchbox which I went to town on with a Dremel. It wasn’t pretty, but it got the job done – and it ran for 2 months!

I learned three things from this first prototype:
  1. The Linux CDC ACM driver is buggy: it seems to be a well-known problem that the Launchpad’s serial interface doesn’t work so well in Linux. The reason for this is a bug in the CDC ACM kernel module (see here, here and here). Despite the patches provided, I was unable to fix the issue on the WR703N, resulting in a bit of instability. A cronjob that reboots the router every 6 hours served as a usable workaround, but certainly does not fix the issue.
  2. HTTP only goes so far: using a simple HTTP-based interface implemented via CGI worked beautifully for the central heating control, as that interface was unidirectional. The SenseLamp, however, needs to be able to send sensor data as well, so a different protocol/interface is needed.
  3. It will have to look better than this: while the Dremeled lunchbox worked fine as a first prototype, I wouldn’t want plastic lumps hanging throughout my flat. So if I were to deploy this on a bigger scale (count > 1), it would have to look better than this.

Mark I

SenseLamp v0.1

The next iteration was designed to do away with the flaws of the prototype. To get rid of all those jumper cables and get things neat and tidy, I designed a PCB and sent it off to SeeedStudio Fusion. As much as I favour local production and business, SeeedStudio was simply 10 times cheaper than anything I found in the UK – with similar lead time, even. It only took about 2.5 weeks until I got the PCBs in the mail, and that with the cheapest delivery option.
The goal of the SenseLamp PCB is to serve as a “shield” for the WR703N router. The shield is powered by the router and talks directly to the router’s AR7240 CPU via the serial port exposed through two test pads (yep, fiddly soldering included). To make the power supply work, the board sports a 3.3V LDO. All the sensor protocols required (the single-wire protocol of the DHT11/DHT22), GPIO handling for the PIR sensor and ADC handling for the LDR are taken care of by the MSP430G2553, which also sits on the PCB. A neat package, all in all.
At the same time, I started playing around with different frame designs. The lamp shade side of things would have to support the boards, the mains-to-5V power supply (which handily comes with the router), some space for the light bulb and a spot to place the PIR and temperature sensor away from the light bulb. As I happen to be interested in personal fabrication (sigh), I laser-cut a triangular frame that unfortunately is not really portable across different machines and can certainly not be ordered from services like Ponoko. But then, any other form of shade/lamp holder would do.
I ended up deploying that version in the kitchen and ran it there for a while. It successfully did away with the communication issues that plagued version 0, and it looked quite a bit better. But it still wasn’t quite there yet:
  • forgot the PIR sensor: the lesson learned here is to never design a PCB in a hurry. For some reason, I forgot to add the appropriate pin headers for the PIR sensor on the PCB. So I had to compensate for that while wiring up the whole thing – it worked, but it was messy. Because of this, the PIR sensor cannot simply sit on the board; it has to be placed somewhere on the frame. Yet another thing to consider.
  • assembly was still too difficult: which is in part due to the PIR screw-up. Until this version, I had assumed I could simply tie it all together using some good old zip ties. But as it turns out, the holes in the relay board, as well as the router board, are close to M2, and that’s too small for standard 50 mm zip ties. So I’d have to put in screws. Combine that with my tendency to pack everything too tightly (on the frame) and you end up with something that’s very hard to put together.
  • DHT11 sensors are too inaccurate: according to their datasheet, DHT11 sensors have a tolerance of ±2 °C and 1% humidity. That’s quite a lot when one wants to monitor temperatures in the range of 20 ±3 degrees. So I needed better sensors: the pin-compatible DHT22 to the rescue.

Mark II

SenseLamp v0.2

That’s the version I ended up building four times, putting one up in every room. It uses a new revision of the SenseLamp PCB (taking the PIR sensor into account), has a much cleaner cable structure and sports DHT22 sensors. Let’s see how it’s made in the next section.

How it’s made


Besides the sensors, there are two main hardware components involved: the WR703N router and the SenseLamp board. As the figure shows, the router merely passes through the 5V USB power and provides WiFi – effectively turning a Linux machine into a WiFi shield. This is still half the price of buying a dedicated WiFi shield. And as power consumption is not really of concern here, this works just fine.

Power supply

The router as well as the SenseLamp board are powered from a single 5V power supply, which in turn is connected to the power line that would normally light up the lamp. As a result, existing light switches effectively turn the whole device on and off, not just the light. I’ve taped something over the light switches to prevent accidentally turning off a SenseLamp.
To connect the 5V power supply (which comes with the WR703N), I soldered cables to the plugs of the power supply and wired them in parallel to the light bulb.
Sensorboard of SenseLamp v0.2

Sensor board

This little board does most of the heavy lifting: it provides the 3.3V power supply for the MSP430, connects the relay to the MCU and provides a home for all the sensors. There is a programming port as well, which allows in-circuit programming/testing of the firmware using a TI Launchpad as a FET. At some point one may ask: why an MSP430 and not something Arduino-ish? I have two reasons: MSP430s require a lower external part count if one wants stable timings (just a resistor, no quartz), and a TI Launchpad is a lot cheaper than an Arduino or an AVR ISP. In short, the MSP430G2553 simply delivers more “bang for the buck” than an AVR, let alone an Arduino.


We’re not using much of the WR703N except its WiFi, serial port and the 5V coming from its USB port. The only tricky thing here is soldering wires to the two test pads, as described elsewhere.


The software side of things consists of two parts: the firmware on the MSP430 and some userland program running on the WR703N.
The MSP430 firmware has to deal with all the sensors and the serial communication with the “WiFi shield”. Thanks to that awesome project called Energia, writing code for the MSP430 is as convenient as it is for an Arduino (but at a fraction of the cost). The firmware can be found in the GitHub repository.

OpenWRT and busfarhn

Having a sensor-laden lamp shade hanging around is boring unless one does something with those sensors. Earlier this year, I wrote busfarhn, a message bus at the core of my home automation effort. This message bus also sports a GUI module implemented using MetroUI CSS and socket.io.
I’ve written a simple TCP-based client that bridges between busfarhn and the SenseLamp firmware. It relays incoming commands and translates the SenseLamp output into busfarhn messages. All of that code can be found in the GitHub repository of busfarhn.
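A hypothetical, stripped-down version of such a bridge might look like this; the host, port and both message formats are made up for illustration, the real protocol lives in the busfarhn repository:

```python
import json
import socket

# Toy bridge: read newline-separated "key=value" sensor readings from the
# SenseLamp's TCP port and emit busfarhn-style JSON messages on stdout.

def to_bus_message(line):
    """Translate a raw 'key=value' reading into a bus message dict."""
    key, _, value = line.strip().partition("=")
    return {"type": "senselamp." + key, "payload": {"value": float(value)}}

def bridge(host="senselamp.local", port=5000):
    # Connect to the lamp and translate each reading as it arrives.
    with socket.create_connection((host, port)) as conn:
        for raw in conn.makefile():
            print(json.dumps(to_bus_message(raw)))
```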


First things first: all SenseLamp material is open source. If not stated otherwise, it is published under the MIT license and can be found on GitHub. Should you be in the process of building one yourself and have questions – I’d be happy to help.
Building a SenseLamp is a straightforward process, involving three main activities:

  1. Gathering all parts and components
  2. Assembling the hardware
  3. Flashing OpenWRT and firmware + installing some userland driver program.

Gathering all parts and components

The part count of a SenseLamp is lower than one might expect. Besides the WR703N router, the sensors and the microcontroller, only commonplace parts such as resistors and caps are required. The SenseLamp PCB is single-sided, uses exclusively through-hole components and avoids overly fine traces, so it can be produced at home if necessary. A rudimentary bill of materials can be found here.
I bought most of the components from eBay, RS Components and Farnell. The board was manufactured by SeeedStudio Fusion. It took roughly a month to source all parts – including the custom PCB.

Assembling the hardware

Hardware assembly happens in four stages:

  1. First, all components have to be soldered to the SenseLamp PCB, some wires to the WR703N router (described above), and some sufficiently thick wires to the 5V power supply that comes with the router (it helps to drill a hole in the plugs). Then, a “sandwich” is made by stacking the SenseLamp board on top of the WR703N using M2 screws (I drilled slightly bigger holes in the WR703N) and some spacers, which I 3D printed. Once that tower was assembled, I connected the SenseLamp PCB to a 5V power supply, an FTDI 223 3V3 serial cable and a TI Launchpad to test the SenseLamp board in isolation and program the firmware.
  2. The frame is assembled (depending on the kind of frame) and the 5V power supply is added. Lay out the cables in this stage, and add the SenseLamp board sandwich as well.
  3. At last, everything has to be wired up. I used luster terminals to wire the 5V supply in parallel to the lamp and ran one lead through the relay, so that the lamp can be controlled from the SenseLamp.
  4. At this point, it is a good idea to power it all up and see if everything’s working fine. I used a leftover cable, which I opened on one end, feeding the leads into the top luster terminal to simulate it hanging from the ceiling.

Closing remarks

All in all, I built four of these puppies and installed them in my living room, hallway, kitchen and bedroom. For now, I’m simply using the light and motion sensors to automatically turn on the light when it’s dark. The temperature/humidity data is simply logged away, and I glance at it every now and then out of interest. In the future, I plan to use them for some light activity recognition, inferring my sleep times and similar activities from the data.
Another thing I might do is design different frames, as I’m not entirely happy with the current form. The WR703N, as well as the relay module, both have LEDs which are constantly on and slightly annoying – despite their usefulness.

My final remark should be that this was a fun build that took about twice as long as originally anticipated. I spent about two months (spare time) going through the prototypes, implementing the firmware and busfarhn integration. Would I do it again? Anytime.

Central heating control on the cheap

The gas heating system in my flat is pretty rudimentary when it comes to features. Not only is there the absence of a proper billing system (yep, pre-paid gas causes me to wake up in a cold bedroom once or twice a month), but it is also lacking in terms of heating control. Besides controlling how much water flows through each radiator, there are two ways to control the heating centrally: a timer with 30-minute resolution and a dial without a scale that sets something like the “heating level”. Now, the timer controls when the boiler is active at all, and that includes warm water. So what I would typically do is set the timer to always-on and set the room temperature by adjusting the heating dial once a day. With that method, I’d still heat during the day when I’m not at home, which is highly suboptimal.
I’m also currently in the process of building a simple home automation system using busfarhn running on a Raspberry Pi. So it seemed like a natural fit to have some form of automated heating control, integrated into the home automation system. The solution is more or less straightforward:
heating control schema
The TP-Link MR3020 WiFi router gets signals via HTTP and relays them to a serial port, where they are received by an MSP430, which in turn bit-bangs a signal for a servo motor controlling the central heating dial.

The WiFi part

For the whole thing to integrate with busfarhn, it has to somehow become part of the network – read: WiFi. As power consumption is not one of the primary concerns here (the whole setup is going to be close to a power outlet), I opted for the cheap WiFi solution. Embedded WiFi is still pretty expensive and complex, although the Electric Imp eases the complexity side, and hopefully the TI CC3000 modules will cut costs in the future. But we’re not there yet.
Another way to get WiFi for an embedded project is to appropriate existing hardware. A prime example of such appropriation is the TP-Link MR3020 WiFi router (or its smaller brother, the TP-Link WR703N, for that matter). This inexpensive piece of hardware can run OpenWRT, sports a USB port and is itself powered by a USB power supply.
Getting OpenWRT to run on this little white box is really easy, and described plenty elsewhere. Once the OS is set up, the only thing missing is the HTTP-to-serial relay part.
First, we need to be able to control serial ports. Unfortunately, OpenWRT does not ship with stty support in its BusyBox build. Johan von Konnow has figured that out as well and provides a rebuilt BusyBox. After gunzipping it, I installed it with

mv busybox /bin/busybox.stty && ln -s /bin/busybox.stty /bin/stty

OpenWRT comes with uhttpd by default. uhttpd is a nifty little webserver that supports CGI. So place a script much like the following one in /www/cgi-bin, and you can send commands to the serial port via http://<ip-of-the-router>/cgi-bin/servo?position=[0-9]:

#!/bin/sh

# Serial port the MSP430 is attached to (device path is an assumption; adjust to your setup)
SP=/dev/ttyACM0

CMD=`echo "$QUERY_STRING" | grep -oE "(^|[?&])position=[0-9]+" | cut -f 2 -d "=" | head -n1`

echo "Content-type: application/json"
echo ""

if [ -z "$CMD" ]; then
    echo "{ 'status': 'error', 'msg': 'missing position' }"
elif [ ! -c "$SP" ]; then
    echo "{ 'status': 'error', 'msg': 'missing servo controller' }"
else
    # configure the port once: 9600 baud, raw mode
    [ "$(stty -F $SP -a | grep speed | cut -d ' ' -f 2)" != "9600" ] && stty -F $SP raw speed 9600 -crtscts cs8 -parenb -cstopb

    echo $CMD > $SP
    echo "{ 'status' : 'success', 'msg' : 'done' }"
fi
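Once the script is in place, any HTTP client can drive the servo. A hypothetical Python example (the router address is a placeholder, and the 0–9 range mirrors the script’s position parameter):

```python
from urllib.request import urlopen

# Driving the CGI endpoint from Python; ROUTER_IP is a placeholder, not the
# actual address of my router.
ROUTER_IP = "192.168.1.1"

def servo_url(position):
    """Build the CGI URL for a servo position between 0 and 9."""
    if not 0 <= position <= 9:
        raise ValueError("position must be between 0 and 9")
    return f"http://{ROUTER_IP}/cgi-bin/servo?position={position}"

def set_heating_level(position):
    """Ask the router to move the servo; returns the JSON status string."""
    with urlopen(servo_url(position)) as resp:
        return resp.read().decode()
```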

The business end

Sending commands alone will not turn that knob. To make that happen, one needs a servo motor. A typical RC servo works very well here, as its power requirements tend to be < 250 mA, so it can be powered straight from the USB bus. Connecting the servo to the actual knob was a bit harder than I anticipated. For one, I'm not allowed to permanently modify the boiler (it's not my property). And then there is that strange dial shape (see video below). So for the actuation part, two things had to be done: writing a bit of code for the MSP430 to control the servo, and building a rig to hold everything in place and connect the servo to the dial it's supposed to turn.
Programming an MSP430 Launchpad is really straightforward thanks to an awesome project called Energia (if basic Wiring-style programming suffices and one doesn’t mind the overhead). The code for the MSP430 – which would also run on an Arduino – can be found in the source archive.
Mounting the whole system to the boiler was one of the bigger challenges of this project. My first attempt with duct tape held for about 3 hours before it hit the floor. As it turns out, stick-on velcro bought from the local haberdashery does the job quite well. It allows me to easily remove and reposition the contraption from/on the boiler without leaving any permanent damage. And should the day come when I move out of this place, I hope acetone will make removing the velcro easy.
Besides mounting the whole thing to the boiler, some form of adapter had to be created to fit the servo to the dial. It took me two attempts to get the strange curvature of the dial right, but thanks to my trusty Printrbot, that was done in less than two hours.
Again, the outlines and models can be found in the source archive below.

Let’s see it

So far, I’m pretty happy with the outcome. The machine sticks to the boiler and hasn’t come down yet, and the WiFi interface and servo work reliably. In case you want to build a system like this yourself, you might find the files uploaded here helpful.
Download the sources here.

Milling and drilling on a MendelMax

A few days ago, I mounted a Dremel on my MendelMax using this thing. Such a setup allows for a few nice things: milling wood or drilling printed circuit boards – once you’ve got the software down.
As the scripts are a bit hidden in this post, check them out on GitHub.

Milling wood

Once the Dremel is on the machine, milling wood is pretty much a matter of converting a 2D drawing (e.g. stored as DXF) to Marlin-compatible G-code. The weapon of choice here seems to be a program called dxf2gcode. What that program outputs, however, is not directly suited for feeding into the MendelMax:

  • comments: comments are surrounded by parentheses, whereas for this use they should be single-line comments starting with a semicolon.
  • feed rate: the feed rate is set using a bare F command (e.g. F400), whereas for the RepRap it should be part of a G1 expression (more like G1 F400)
  • unsupported commands: as the generated GCode is designed for real CNC milling machines, several machine-specific commands are issued, such as starting the coolant flow or getting the spindle up to speed. As such commands could confuse the RepRap (not checked, just assumed), we should filter them out or replace them with more suitable counterparts.
  • different movement commands: dxf2gcode produces G0 for initial positioning, whereas moving a RepRap works with G1 commands.
  • different use of whitespace: typically, GCode seems to be written so that there is no whitespace between the code character and the numeric data. dxf2gcode, however, produces whitespace where it typically isn’t. That whitespace makes the output look nice, but again might not work so well with a RepRap.

All those steps are implemented in a little script on GitHub.
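For illustration, those transformations might be sketched in Python roughly like this – a hypothetical minimal version, not the actual script; the exact set of CNC codes to filter is an assumption:

```python
import re

def reprapify(line):
    """Rewrite one line of dxf2gcode output for Marlin/RepRap.
    The transformations mirror the list above; the command set
    is an assumption, not an exhaustive mapping."""
    line = line.strip()
    # 1. parenthesised comments become semicolon comments
    line = re.sub(r"\((.*?)\)", r"; \1", line)
    # 2. a bare feed-rate command becomes a G1 expression
    if re.fullmatch(r"F\s*\d+(\.\d+)?", line):
        line = "G1 " + line
    # 3. drop CNC-specific codes (spindle M3-M5, coolant M7-M9)
    if re.match(r"M0?[3-9]\b", line):
        return None
    # 4. initial positioning: G0 -> G1
    line = re.sub(r"^G0\b", "G1", line)
    # 5. remove whitespace between code letter and number ("X 10" -> "X10")
    line = re.sub(r"([A-Z])\s+(-?\d)", r"\1\2", line)
    return line
```

Running each line of the dxf2gcode output through such a filter (dropping the `None` results) yields something a RepRap should be able to digest.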

Drilling PCBs

Drilling PCBs seems easy at first glance. It turns out, however, that aligning the layer mask with the drilled holes is a delicate issue. So far I’ve achieved the best results using the following steps (in that order):

  1. Export an Excellon drill list from EAGLE using the CAM Processor’s excellon.cam script
  2. Convert the exported Excellon file (probably ending with .drd) to GCode using this script from GitHub.
  3. Drill the holes using the generated GCode
  4. Print the mask, cut it out and align it with the drilled holes
  5. Transfer the mask (toner transfer, UV exposure) and etch the PCB
Converting the Excellon drill file is a key part of this process. The script linked above does just that, including mil-to-millimeter conversion and starting point selection. Personally, I tend to equate a certain point on the board with the starting point of the printer. See the help output of the script below for a list of what it can do.

Usage: gcodeutils/excellon2gcode.rb [options] drillfile.drd outfile.gcode
    -s, --start STARTPOINT           The start point of the printer, format [XX]x[YY] [mm]
    -f, --first FIRSTPOINT           First drill point to go for and equate to zero, format [XX]x[YY] [mm]
    -m, --mil2mm                     Translate units from MIL to MM
    -i, --invert-x                   Inverts the x axis
    -t, --travel-height HEIGHT       The travel height in mm
    -d, --drill-height HEIGHT        The drill height in mm
    -r, --rotate ANGLE               Rotates the holes by ANGLE degrees
        --preamble FILE              Prepend the preamble from FILE
    -p, --postamble FILE             Append the postamble from FILE
    -v, --verbose                    Produce verbose output
    -g, --gnuplot                    Plot the drill holes on the console
    -h, --help                       Display this screen
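The heart of such a conversion can be sketched as follows – a hypothetical, simplified Python version that assumes plain decimal coordinate lines; the real script additionally handles format headers, rotation, axis inversion and start point selection:

```python
import re

MIL_TO_MM = 0.0254  # 1 mil = 0.0254 mm

def excellon_to_gcode(drill_lines, travel_z=5.0, drill_z=-2.0, mil2mm=False):
    """Emit drilling moves for each Excellon coordinate line.
    Assumes plain decimal 'X..Y..' lines; real Excellon files also
    carry format and tool headers, which are simply skipped here."""
    scale = MIL_TO_MM if mil2mm else 1.0
    gcode = ["G90 ; absolute positioning",
             f"G1 Z{travel_z} F300 ; lift to travel height"]
    for line in drill_lines:
        m = re.match(r"X(-?[\d.]+)Y(-?[\d.]+)", line.strip())
        if not m:
            continue  # header, tool select or EOF marker
        x = float(m.group(1)) * scale
        y = float(m.group(2)) * scale
        gcode += [f"G1 X{x:.3f} Y{y:.3f} ; move over hole",
                  f"G1 Z{drill_z} ; plunge",
                  f"G1 Z{travel_z} ; retract"]
    return gcode
```

The travel/plunge/retract cycle per hole is the essential pattern; everything else is bookkeeping around units and coordinate origins.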
On a side note, try not to move the PCB while the drill is still in the board – it will snap. Obviously -.-


These days I’m building a simple home automation system for turning lights and appliances on and off. Not only do I want to be able to switch appliances myself, I also want the system to do it for me. Hence, some context inference is in order. The centerpiece of this effort is a bus system used to transmit sensor information, inferred high-level context and actuator commands. As it happens, that bus system was a good opportunity to learn Node.JS.
The outcome of this effort is a few lines of JavaScript which I’ve come to call busfahrn (colloquial German for taking a bus ride). It’s basically a wrapper around EventEmitter, but with a ton of different IO support. Its main features are
  • A lot of IO modules to pass along messages. Out of the box support exists for HTTP(S), Redis, serial ports and the console.
  • A notion of message/state inference using Redis and simple rules formulated in JavaScript
  • Clean and simple code, easy to extend and modify
  • Written entirely in Node.JS
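Conceptually, the core of such a bus is small; a rough Python analogue of the idea (illustrative names only, not busfahrn’s actual JavaScript API) might look like:

```python
class Bus:
    """Minimal Python analogue of busfahrn's core idea: an event bus
    with rule-based message inference. Names are illustrative only."""

    def __init__(self):
        self.handlers = {}  # topic -> list of callables
        self.rules = []     # (predicate, producer) pairs

    def on(self, topic, handler):
        """Subscribe a handler to a topic."""
        self.handlers.setdefault(topic, []).append(handler)

    def infer(self, predicate, producer):
        """Whenever predicate(topic, msg) holds, publish producer(msg),
        which returns a (topic, msg) pair for the derived message."""
        self.rules.append((predicate, producer))

    def publish(self, topic, msg):
        """Deliver msg to subscribers, then run inference rules."""
        for handler in self.handlers.get(topic, []):
            handler(msg)
        for predicate, producer in self.rules:
            if predicate(topic, msg):
                self.publish(*producer(msg))
```

A rule could, for example, turn a raw motion sensor message into a “switch the hall light on” actuator command; the IO modules then carry such messages over HTTP, serial or Redis.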
If you want to give it a spin or read more, please check it out on GitHub.

From bent wire to 3D printed cookie cutters


With the advent of 3D printers in private homes, producing custom kitchen utensils such as cookie cutters becomes feasible. However, using existing interfaces – such as Autodesk Inventor, SolidWorks or SketchUp – to design such a customized kitchen artifact is out of reach for most users. In this work we present a system that takes a bent wire, or a thick line drawing, as shape input and creates a producible cookie cutter model. We use computer vision and implement the idea of using household items both for shape input and as fiducial markers.


The creation and consumption of Christmas cookies is an essential activity during the holidays. For baking cookies one needs, besides the edible side of things, a cookie cutter – or potentially a few of them. They come in many shapes and sizes; however, stars, hearts and similar motifs dominate the commercially available selection.
Creating custom cookie shapes is typically accomplished using a knife instead of fixed-shape cookie cutters. The results of this creative endeavour depend highly on the cutting skills of the people involved and are seldom reproducible. So if one wants high-quality, reproducible, custom cookie shapes, creating a custom cookie cutter seems inevitable.
One of the things that immediately comes to mind is 3D printing the cookie cutter. And indeed, quite a few people have done so already [1]. Also, the idea of building a cookie cutter creation tool is not new [2]. However, such tools suffer from the fact that judging the physical dimensions of a drawn shape remains difficult, or that their expressiveness is rather limited.
This work makes three main points: it presents an easy system to design and build cookie cutters, it demonstrates the idea of using household items for shape input and dwells on the idea of maintaining a close relationship to the real world.
The process described here goes as follows (see figure 1):
  1. design the cookie shape by bending a wire or drawing a thick black line
  2. place/draw the outline on an A4 sheet of paper
  3. take a photo of the outline and extract and smooth a polygon
  4. convert the outline into a printable cookie cutter

Figure 1: the design process demonstrated with a user-designed shape: (a) designing the cookie cutter shape by bending a wire, placing it on an A4 sheet of paper and taking a photo of it (b) filtering the photo by binarizing it, extracting the paper sheet corners, and using a canny edge detector (c) constructing a 3D model using OpenSCAD and extracted polygon (d) printing the cookie cutter on a 3D printer

Shape definition

Many ways of defining 2D shapes are described in the literature, from sketch-based interfaces [3] to more traditional 2D CAD systems [4]. All of those approaches suffer from their disconnection from the world they are designing for. It is hard to judge the dimensions and appeal of those virtual artifacts before they become physical reality.
Our approach in this work lets the user define the shape using a simple, tangible shape input controller: a piece of wire. Imagine there was something like a "cookie cutter band" that one could bend into the desired shape. Once bent, the shape would be fixed and additional struts would be introduced to strengthen the cutter. That’s exactly what this system does. The user bends a piece of wire into the desired shape and the system constructs a cookie cutter, which is then printed on a 3D printer.

Shape extraction

The wire shape has to fulfill a set of constraints, so that it makes sense as a cookie cutter:

  • planarity: the wire has to be bent flat, much like the cutter itself is going to be. This constraint is also imposed by the application domain, as cookie dough is generally flat.
  • not self-intersecting: a self-intersecting cookie cutter would produce cookies which do not hold together as one piece – hence, we require the shape to be a simple polygon.

After bending the shape, the user places the piece of wire on an A4 sheet of paper and takes a photo of that assembly. The photo is then fed into the system, which extracts the shape using computer vision. Since the outcome is going to be produced for the real world, the polygon has to be translated into real-world units. We map the image to real-world coordinates by detecting the corners of the A4 paper. This leads to the following processing pipeline (implemented in C++ using OpenCV [5]):

  1. threshold filter to binarize the image
  2. find paper corners and compute the homography
  3. warp perspective of the input image based on the homography
  4. canny filter the warped image
  5. erode the image to connect spurious lines
  6. find contours in eroded image
  7. select contour with largest area that is not the whole image
  8. if no contour is found, exit
  9. find center of the polygon using the enclosing circle
  10. approximate the outline using Douglas-Peucker to smooth the polygon
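The Douglas-Peucker smoothing in the last step can be sketched in pure Python (OpenCV ships it as approxPolyDP; this standalone version is just for illustration):

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance of point p from the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def douglas_peucker(points, epsilon):
    """Drop points closer than epsilon to the chord of their segment."""
    if len(points) < 3:
        return list(points)
    a, b = points[0], points[-1]
    dists = [point_line_distance(p, a, b) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > epsilon:
        # keep the farthest point and recurse on both halves
        left = douglas_peucker(points[:i + 1], epsilon)
        right = douglas_peucker(points[i:], epsilon)
        return left[:-1] + right
    return [a, b]
```

The epsilon parameter controls how aggressively the extracted outline is smoothed – and thereby how faithful the cutter is to the bent wire.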

Model creation and printing

The polygon coming from the shape extraction stage is then scaled into a smaller and a bigger version, which are assembled to form the cutter using CSG operations, implemented in OpenSCAD [6]. Scaling a concave polygon to build an outline is not as straightforward as scaling a convex one. While a convex polygon can be adequately scaled by multiplying each vertex with a scalar value, scaling a concave polygon that way is unsuitable for creating the model (see figure 2, b). To properly scale a concave polygon P = \mathbf{p}_0, \dots, \mathbf{p}_n for our needs, we translate each \mathbf{p}_i by the normal of its outgoing edge

    \[\lambda\frac{\mathbf{n}}{\vert \mathbf{n} \vert}, \quad \mathbf{n}=R_{90^{\circ}}\,(\mathbf{p}_{i+1} - \mathbf{p}_i)\]

where \(R_{90^{\circ}}\) rotates the edge vector by 90° to obtain the edge normal – with results as depicted in figure 2, c.
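This per-vertex translation might be sketched in Python as follows – a hypothetical helper assuming the polygon vertices are given in order; for a counter-clockwise polygon the rotated edge vector points inward, so the sign of lambda decides whether the outline shrinks or grows:

```python
import math

def offset_polygon(points, lam):
    """Translate each vertex p_i along the unit normal of its
    outgoing edge (p_i, p_{i+1}); lam is the offset distance."""
    out = []
    n = len(points)
    for i in range(n):
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
        ex, ey = x1 - x0, y1 - y0
        length = math.hypot(ex, ey)
        # rotating the edge vector by 90 degrees yields its normal
        nx, ny = -ey / length, ex / length
        out.append((x0 + lam * nx, y0 + lam * ny))
    return out
```

Running this once with a positive and once with a negative offset yields the smaller and bigger polygon whose CSG difference forms the cutter wall.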


Figure 2: Scaling a concave polygon to create a thin outline: (a) the polygon to be scaled (b) the naively scaled version, drawn as dotted line (c) the properly scaled polygon

The correctly scaled polygons are then extruded into 3D space using OpenSCAD’s 2D subsystem. Additional struts are added to stiffen the cookie cutter and ease its handling. The strut size is determined by computing the bounding box of the polygon.


We implemented this system using simple computer vision algorithms. While this implementation is sufficient in many cases, it is not particularly robust. The paper sheet corner detection does not always work and the whole process depends on proper parameter choice – parameters which are image dependent. A more sophisticated processing pipeline or some form of automated parameter choice could be explored.
Also, the line extraction depends on the characteristics of the shape being extracted. While the erosion stage somewhat mitigates this effect, glossy wires or “sketchy” drawings, for example, are still unsuited for this system.

Our system does not enforce the constraints imposed by the domain (i.e. a simple, non-self-intersecting polygon). Users can input a violating shape and the system will try to create a cookie cutter from it, regardless of whether such a cutter makes sense or not. The system could inform the user when a shape is not meaningful and suggest corrections.
The concave polygon scaling algorithm we used in this work results in very thin shells around sharp corners. Some 3D printers, or their slicing software, cannot reproduce such corners – or even strips close to those corners. As a result, the cutter can have gaps in its perimeter, yielding an unclean cutting result. A more constant perimeter thickness could be achieved by sampling the polygon at a finer rate (better normal computation) or by employing post-model-generation algorithms, such as the one described by Stava et al. [7]
Future work could investigate the suitability of certain media as shape input controllers. A question that comes to mind is whether it is easier to bend a wire into shape than to draw the shape with a thick pen. An investigation of this question should take the tangibility and affordance of the wire (as compared to a drawing on a sheet of paper) into account.
Also, the importance of real-time feedback could be explored. Is it enough to get feedback on producibility and constraint enforcement at discrete points in time, e.g. when we feed an image into the system? Or should the system provide continuous feedback? And what is the relationship between constraint system complexity and the need for real-time feedback?


In this work, we described the idea of using household items for modeling, specifically investigating wire as a shape input sensor and A4 paper as a fiducial marker. We implemented a system for designing cookie cutters as a case study, targeting 3D printing as the production technique. Our system is easy to use, as it is situated in the real world.


The source code can be found here [8], including a few test pictures. I’ve only tested it on Linux; however, it should run on OS X as well. To compile and run it:

tar xvfz contours.tar.gz && cd contours
mkdir build && cd build
cmake .. && make
./contours ../test4.jpg && openscad main.scad


Cookie 3D printing

Today I finally got around to trying to print cookie dough using RichRap’s paste extruder on my new MendelMax (a post about that awesome machine is bound to follow).
At first glance, printing 3D structures with cookie dough seems to work just fine. However, I encountered several issues:

  • losing shape while baking: at this point, this is probably the most severe issue. No “object” I printed survived baking intact. All of them melted down to an unrecognizable blob – most likely because of the high butter content of the dough. It seems that it is the butter that provides most of the structural integrity, and said structure is obviously gone in the oven.
  • getting the prints to stick to the print bed was problematic as well. My first attempt was with baking paper – not the brightest idea in hindsight, as this stuff is designed to be grease-proof. The second idea was to use aluminium foil (following RichRap), which worked slightly better. Getting the Z height right and having a perfectly level bed – which I don’t have yet – seems to be key here.
  • the size of the syringe severely limits the size of objects that can be printed, considering the nozzle diameter of 2 mm. This is not a problem specific to printing cookie dough; however, it’s not that much of an issue with other pastes, as they may permit the use of a needle.
Printing flat objects works reasonably well. Starting with RichRap’s “Masa Slic3r config”, I played with different infill patterns to achieve the look I wanted. A rather moderate infill of 10% did the trick, as the dough merges into one coherent piece in the oven.
Cookie dough seems to be an unsuitable printing material, as it cannot maintain any shape when exposed to heat. There are more traditional recipes that involve sculpting the cake or biscuit; they might be worth investigating.

In summary, it was a great first print for this new machine and made me want to look into alternative print materials – next up is ceramic clay. And some day I might even print PLA on my new machine.

STL slicing for 3D printing

Some 3D printing methods, like additive layer manufacturing, require the model to be sliced into discrete layers, which are then printed one after another. These days I’m playing around with 3D printing, thus I needed to perform some slicing myself. Unfortunately, I didn’t like the available slicing tools that much, so I decided to give it a shot and write my own.
Some time ago I wrote a utility that visualizes the flight of a quadrocopter. To make things easy, I used the Visualization Toolkit (VTK). Remembering that, I hit Google and found an example that pretty much did what I wanted. The model is loaded using vtkSTLReader, and vtkStripper is employed to merge the polyline strips into connected components.
Unfortunately, vtkStripper still has a bug (open since 2004!) which rendered it unusable for my endeavor. It causes some slices to look quite wrong (thus they’d be printed wrong), as it combines some polylines in an unsuitable manner. The slice pictured below has that white/inverted triangle which is not supposed to be there.

After patching vtkStripper.cxx with the patch attached to the bug, everything was fine. (Well, pretty much – I’ve still experienced the problem once, but hey, what’s perfect in this world ;-))
So the whole slicing process is:
  1. Slice the STL model using vtkCutter and store the polylines in VTP files. By decoupling the cutting process from rendering the images, we gain flexibility, since we do not have to redo the cutting each time we want to use a different rendering algorithm. This first step also computes the bounds of the model (using a bounding box) and stores them in a file.
  2. Convert the polylines to SVG. We use SVG since it provides multiple benefits over directly rasterizing the polylines. First of all, we retain control over the units (during the whole process you want to make sure you don’t mess up the units, or otherwise your printed object may end up twice as large as anticipated, or similar problems may occur).
  3. Use ImageMagick to rasterize the SVG graphics. That’s actually pretty cool, because in this step we can easily ensure that we’re using the correct resolution for our printer. So if we used an inkjet printer to apply the binder during the 3D printing process, we could simply use the resolution of that printer.
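The unit-preserving SVG step can be sketched in Python – a hypothetical minimal writer (not the actual tool) that pins the document size in millimeters so rasterization cannot silently change the scale:

```python
def polyline_to_svg(points_mm, width_mm, height_mm):
    """Render one slice polyline as a filled SVG path with explicit
    mm units, so downstream rasterization keeps the physical scale."""
    d = "M " + " L ".join(f"{x:.3f},{y:.3f}" for x, y in points_mm) + " Z"
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{width_mm}mm" height="{height_mm}mm" '
        f'viewBox="0 0 {width_mm} {height_mm}">'
        f'<path d="{d}" fill="black"/></svg>'
    )
```

Because width and height carry explicit mm units while the viewBox is in the same numbers, one user unit equals one millimeter, and the rasterizer’s DPI setting alone determines the pixel resolution.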

So now that we have all the slices, we can (of course) print them, or make a little movie out of them, which is exactly what I did:

The model comes from Thingiverse and the music is from SoundCloud. You might notice the full model in the upper right corner; that’s just visual sugar for the video and not part of the sliced images.

To make the whole process a little easier, I wrapped a Makefile around it and wrote a little Ruby script that builds the environment for the Makefile to work. That Ruby script, as well as the source, can be found in the ZIP file after the break. You’ll need VTK to build the tools and ImageMagick to run the whole thing.
Download me here.