Personal fabrication with aluminium

Personal fabrication with laser cutters and 3D printers can mainly produce plastic (PLA, ABS, acrylic), wood (MDF, plywood) or fabric parts; metal parts, however, remain out of reach. While there are laser cutters (and water jets, plasma cutters) that can produce metal parts, they’re in a price range that’s typically too high for a FabLab, Hackerspace or single person. The RepRap community has experimented with 3D printing liquid metal [1] and molten solder [2] but encountered issues like the surface tension of the material.
Indirect processes, in which the artefact produced by the personal fabrication machine is not the end product but only an intermediary step, have become common. Resin parts have been cast using 3D printed molds [3]; even carbon fibre cars [4] have been produced that way. A widely explored method of producing metal parts is a process called “lost wax [5]/PLA casting [6]”. This process involves producing a wax/PLA positive, surrounding it in plaster and pouring in molten metal, which replaces the wax/PLA positive. To do this, however, one needs a furnace to melt the metal; something that, again, is typically not found in FabLabs and Hackerspaces.

A new process

The general idea of this process is

  1. to apply an etch resistant mask using a 3D printer or laser cutter,
  2. etch the part using saline sulphate solution [7],
  3. and remove the etch resistant mask.

This process is certainly not new in itself (it was inspired by the faceplate of the beautiful Audio Infuser 4700 [8]), but the steps along the way are. I’m primarily interested in the etch resistant mask, and how it can be produced in a personal fabrication setting. Several methods come to mind: the well known toner transfer method, applying a layer of filament as a mask, drawing the mask using a permanent marker, or spray-painting the mask onto the aluminium through a laser-cut stencil. I’ve tried the last two methods. To test the process I tried to produce a piece of jewellery: a Voronoi-like pattern.

Creating the pattern

To create the pattern, I reused some code of mine that generates Voronoi patterns [9]. I added something like a perimeter editor to design the boundary of the necklace (yellow border), as well as a G-code export to draw the pattern and an OpenSCAD export to produce the 3D preview.
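The G-code export is essentially pen plotting: travel to the start of each edge with the marker lifted, lower it, and trace the edge. Below is a minimal Ruby sketch of that idea (not the actual exporter; it assumes the pattern is already available as a list of straight line segments in millimetres):

# segments: array of [[x1, y1], [x2, y2]] line segments in mm
def segments_to_gcode(segments, draw_z: 0.0, travel_z: 2.0, feed: 1200)
  gcode = ["G21 ; units are mm", "G90 ; absolute positioning"]
  segments.each do |(x1, y1), (x2, y2)|
    gcode << "G1 Z#{travel_z} F#{feed}"    # lift the marker
    gcode << "G1 X#{x1} Y#{y1} F#{feed}"   # travel to the start of the edge
    gcode << "G1 Z#{draw_z} F#{feed}"      # lower the marker onto the sheet
    gcode << "G1 X#{x2} Y#{y2} F#{feed}"   # draw the edge
  end
  gcode << "G1 Z#{travel_z} F#{feed}"      # lift the marker at the end
  gcode.join("\n")
end

puts segments_to_gcode([[[0, 0], [10, 5]], [[10, 5], [12, 20]]])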

1st shot: permanent marker

My first attempt was to have my MendelMax draw the mask using a permanent marker (STAEDTLER permanent Lumocolor, red, medium); it turns out this marker is not resistant to the saline etchant. The saline etch solution consisted of 200ml water, 15g sodium chloride (salt) and 25g copper sulphate, which I stirred for about two minutes. After initially submerging the aluminium piece, I scraped off the red residue about every two minutes. Half an hour later, very little of the aluminium had been etched away, but even more of the permanent marker had; I stopped the experiment.

Within the 30 minutes of etching, the material thickness was reduced by 0.06mm, from 1.01mm to 0.95mm. It would seem that the etchant was not very powerful, yet strong enough to have an effect on the marker. Removing the red residue seemed to increase the reaction speed, if the formation of bubbles is any indication.

2nd shot: spray-paint stencil

As the permanent marker did not withstand the etching process, I tried a different masking material: acrylic paint. To apply the mask, I cut a stencil from 2mm art board; the pattern was generated using the OpenSCAD script exported from my Voronoi designer. Before painting on the mask, I cleaned the aluminium surface using acetone. Then I carefully fixed the stencil in place using brown packing tape.

The saline sulphate solution consisted of 1 liter of water, 140g sodium chloride and 70g copper sulphate, again stirred for about 2 minutes. So as not to lose the small piece of aluminium in the bigger tank, I fixed a strip of brown packing tape to the back of the aluminium sheet, which I then attached to the etchant container. I left the aluminium piece in the etching bath for about 2 hours, checking up on it every 15-30 minutes, again removing the red residue.

While the acrylic paint withstood the saline solution without any problems, its precise application proved difficult. It seems that acrylic paint has a high surface tension, preventing it from producing sharp edges around the stencil, thus making the whole process more imprecise. At some point undercutting became a problem: the more material the etchant took away, the more bare aluminium there was – also underneath the mask. Thin traces were etched away like this.
This attempt was a success nonetheless. The saline solution was more potent this time, removing 0.6mm of material thickness in a bit under two hours. After the etching, I did some cleanup using a Dremel and removed the mask in an acetone bath.

Going from here

Applying the mask was a very tedious and labour intensive process. Toner transfer or depositing PLA directly onto the heated aluminium may be better ways to do it. Next steps could be to try those methods and see if the toner/PLA can withstand the etchant for a prolonged period of time.
Undercutting remains a problem that will be tricky to solve; using even thinner sheets may be a way to go. Alternative etching solutions/baths could also be explored.

Results and lessons learned

Overall I’m quite happy with the outcome, although it’s not exactly what I was going for. Once the masking problem is solved, undercut is the next problem to deal with. This time, undercutting prevented a longer etching time, so the holes of the Voronoi pattern are not fully developed. If I were to repeat the process, I’d make the traces thicker and use thinner material to begin with. While this method works fine for small items produced out of thin sheet material, it will not work for non-flat or high-precision objects.


SenseLamp: an ultra-low cost, WiFi enabled sensor platform

The SenseLamp is an ultra-low cost (less than 30GBP/45US$), WiFi enabled sensor platform that is easy to build, easy to deploy and fully open source. A SenseLamp is a lamp shade that can be remotely controlled and gathers temperature, humidity, light-level and motion data. It runs Linux and can do some on-board processing before passing the data on to e.g. busfahrn or COSM.

The story

One can tell this story from two angles: as a historical account, or starting from the motivation. Since my motivation is something like “it sounded like a fun thing to do”, I’ll give the historical account.
The purpose of all this is to put a temperature, humidity, light-level and motion sensor in every room. And while we’re at it, throw in a relay to control a light-bulb. This combination allows some nice home-automation to be implemented. Even further: it provides a test-bed for activity recognition and helps to implement the idea of a quantified self. But in the end, it really just sounded like a fun thing to do.

Mark 0

SenseLamp v0.0

After I had finished a first version of busfahrn and implemented the central heating control, I wanted to take things to the next level and started working on the SenseLamp. It didn’t take too long to whip up a first prototype: SenseLamp v0.0. It consisted of a TP-Link WR703N, a TI Launchpad with an MSP430G2553, a relay module, an HC-SR501 PIR sensor and a DHT11 temperature/humidity sensor. All neatly wired up with jumper cables and attached to a lunchbox that I went to town on with a Dremel. It wasn’t pretty, but got the job done – and it was running for 2 months!

I learned three things from this first prototype:
  1. The Linux CDC ACM drivers are buggy: it seems to be a well known problem that the Launchpad’s serial interface doesn’t work so well in Linux. The reason for this is a bug in the CDC ACM kernel module (see here, here and here). Despite the patches provided, I was unable to fix the issue on the WR703N, resulting in a bit of instability. A cronjob to reboot the router every 6 hours served as a usable workaround, but certainly does not fix the issue.
  2. HTTP only goes so far: using a simple HTTP based interface implemented via CGI worked beautifully for the central heating control, as that interface was unidirectional. The SenseLamp, however, needs to be able to send sensor data as well, so a different protocol/interface is needed.
  3. It will have to look better than this: while the dremeled lunchbox worked fine as a first prototype, I wouldn’t want plastic lumps hanging throughout my flat. So if I were to deploy this on a bigger scale (count > 1), it would have to look better than this.

Mark I

SenseLamp v0.1

The next iteration was designed to do away with the flaws of the prototype. To get rid of all those jumper cables and get things neat and tidy, I designed a PCB and sent it off to SeeedStudio Fusion. As much as I favor local production and business – SeeedStudio was simply 10 times cheaper than anything I found in the UK, even with a similar lead time. It only took about 2.5 weeks until I got the PCB in the mail, with the cheapest delivery option that is.
The goal for the SenseLamp PCB is to serve as a “shield” for the WR703N router. The shield is powered by the router and talks directly to the router’s AR7240 CPU via the serial port exposed on two test-pads (yep, fiddly soldering included). To make the power supply work, the board sports a 3.3V LDO. All required sensor protocols (the single wire protocol of the DHT11/DHT22), GPIO for the PIR sensor and ADC for the LDR are handled by the MSP430G2553, which is also on the PCB. A neat package all in all.
At the same time I started playing around with different frame designs. The lamp shade side of things would have to support the boards, the mains-to-5V power supply (which handily comes with the router), some space for the light bulb and a spot to place the PIR and temperature sensor away from the light-bulb. As I happen to be interested in personal fabrication (sigh) I laser-cut a triangular frame that unfortunately is not really portable across different machines and can certainly not be ordered from services like Ponoko. But then, any other form of shade/lamp holder would do.
I ended up deploying that version in the kitchen and ran it there for a while. It successfully did away with the communication issues that plagued version 0. Also it looked quite a bit better. But it still wasn’t quite there yet:
  • forgot the PIR sensor: the lesson learned here is to never design a PCB in a hurry. For some reason I forgot to add the appropriate pin headers for the PIR sensor on the PCB. So I had to compensate for that while wiring up the whole thing – it worked, but it was messy. And like this the PIR sensor cannot simply sit on the board; it has to be placed somewhere on the frame. Yet another thing to consider.
  • assembly was still too difficult: which is in part due to the PIR screw-up. Until this version I assumed I could simply tie it all together using some good old zipties. But as it turns out, the holes in the relay board as well as the router board are close to M2, and that’s too small for those standard 50mm zipties. So I’d have to put in screws. Combine that with my tendency to pack everything too tight (on the frame) and you end up with something that’s very hard to put together.
  • DHT11 sensors are too inaccurate: according to their datasheet, DHT11 sensors have a tolerance of +- 2 Celsius and 1% humidity. That’s quite a lot when one wants to monitor temperatures in the range of 20 +- 3 degrees. So I needed better sensors: the pin-compatible DHT22 to the rescue.

Mark II

SenseLamp v0.2

That’s the version that I ended up building four times and putting one up in every room. It uses a new revision of the SenseLamp PCB (taking the PIR sensor into account), has a way cleaner cable structure and sports DHT22 sensors. Let’s see how it’s made in the next section.

How it’s made


Besides the sensors, there are two main hardware components involved: the WR703N router and the SenseLamp board. As the figure shows, the router merely passes through the 5V USB power and provides WiFi – effectively turning a Linux machine into a WiFi shield. This is still half the price of a dedicated WiFi shield. And as power consumption is not really a concern here, this works just fine.

Power supply

The router as well as the SenseLamp board are powered from a single 5V power supply, which in turn is fed from the mains line that would normally light up the lamp. As a result, existing light-switches effectively turn the whole device on and off, not just the light. I’ve taped something over the light switches to prevent accidentally turning off a SenseLamp.
To connect the 5V power supply (which comes with the WR703N), I soldered cables to the plugs of the power supply and wired them in parallel with the light bulb.
Sensorboard of SenseLamp v0.2

Sensor board

This little board does most of the heavy lifting: it provides the 3.3V power supply for the MSP430, connects the relay to the MCU and provides a home for all the sensors. There is a programming port as well, which allows in-circuit programming/testing of the firmware using a TI Launchpad as the FET. At some point one may ask: why an MSP430 and not something Arduino-ish? I have two reasons: MSP430s require a lower external part count if one wants stable timings (just a resistor and no quartz), and a TI Launchpad is a lot cheaper than an Arduino or an AVR ISP. In short, the MSP430G2553 simply delivers more "bang for the buck" than an AVR, let alone an Arduino.


We’re not using much of the WR703N except for its WiFi, serial port and 5V coming from its USB port. The only thing that’s tricky here is soldering wires to the two testpads, as described elsewhere.


The software side of things consists of two parts: the firmware on the MSP430 and some userland program running on the WR703N.
The MSP430 firmware has to deal with all the sensors and serial communication to the "WiFi shield". Thanks to that awesome project called Energia, writing code for the MSP430 is as convenient as it is with an Arduino (but at a fraction of the cost). The firmware can be found in the GitHub repository.

OpenWRT and busfahrn

Having a sensor-laden lamp shade hanging around is boring, unless one does something with those sensors. Earlier this year, I wrote busfahrn, a message bus at the core of my home automation effort. This message bus also sports a GUI module implemented using MetroUI CSS and socket.io.
I’ve written a simple TCP based client that bridges between busfahrn and the SenseLamp firmware. It relays incoming commands and translates the SenseLamp output into busfahrn messages. All of that code can be found in the GitHub repository of busfahrn.


First things first: all SenseLamp material is open source. If not stated otherwise, it is published under the MIT license and can be found on GitHub. Should you be in the process of building one yourself and have questions – I’d be happy to help.
Building a SenseLamp is a straightforward process, involving three main activities:

  1. Gathering all parts and components
  2. Assembling the hardware
  3. Flashing OpenWRT and firmware + installing some userland driver program.

Gathering all parts and components

The part count of a SenseLamp is lower than one might expect. Besides the WR703N router, the sensors and the microcontroller, only commonplace parts such as resistors and caps are required. The SenseLamp PCB is single-sided, uses exclusively through-hole components and no overly fine traces, so it can be produced at home if necessary. A rudimentary bill of materials can be found here.
I bought most of the components from eBay, RS Components and Farnell. The board was manufactured by SeeedStudio Fusion. It took roughly a month to source all parts – including the custom PCB.

Assembling the hardware

Hardware assembly takes place in three stages, followed by a quick check:

  1. First, all components have to be soldered to the SenseLamp PCB, some wires to the WR703N router (described above), and some sufficiently thick wires to the 5V power supply that comes with the router (it helps to drill a hole in the plugs). Then, a "sandwich" is made by stacking the SenseLamp board on top of the WR703N using M2 screws (I drilled slightly bigger holes in the WR703N) and some spacers, which I 3D printed. Once that tower was assembled, I connected the SenseLamp PCB to a 5V power supply, an FTDI 223 3V3 serial cable and a TI Launchpad to test the SenseLamp board in isolation and program the firmware.
  2. The frame is assembled (this depends on the kind of frame) and the 5V power supply is added. Try to lay out the cables in this stage as well, and add the SenseLamp board sandwich.
  3. At last, everything has to be wired up. I used luster terminals to wire the 5V supply in parallel with the lamp and ran one lead through the relay, so that the lamp can be controlled from the SenseLamp.
  4. At this point it is a good idea to power it all up and see if everything’s working fine. I used a leftover cable, which I opened on one end and fed the leads into the top luster terminal, to simulate it hanging from the ceiling.

Closing remarks

All in all I built four of these puppies and installed them in my living room, hallway, kitchen and bedroom. For now I’m simply using the light and motion sensors to automatically turn on the light when it’s dark. The temperature/humidity data is simply logged away and I glance at it every now and then out of interest. In the future I plan to use them for some light activity recognition and infer my sleep times and similar activities from the data.
Another thing I might do is design different frames, as I’m not entirely happy with the current form. The WR703N as well as the relay module have LEDs which are constantly on and slightly annoying – despite their usefulness.

My final remark should be that this was a fun build that took about twice as long as originally anticipated. I spent about two months (of spare time) going through the prototypes, implementing the firmware and the busfahrn integration. Would I do it again? Anytime.

Central heating control on the cheap

The gas heating system in my flat is pretty rudimentary when it comes to features. Not only is there the absence of a proper billing system (yep, pre-paid gas causes me to wake up in a cold bedroom once or twice a month), but also in terms of heating control. Besides controlling how much water flows through each radiator, there are two ways to control the heating centrally: a timer with 30min. resolution and a dial without a scale that sets something like the “heating level”. Now, the timer controls when the boiler is active at all, and that includes warm water. So what I would typically do is set the timer to always on and set the room temperature by adjusting the heating dial once a day. With that method I’d still heat during the day when I’m not at home, which is highly suboptimal.
I’m also currently in the process of building a simple home automation system using busfahrn running on a Raspberry Pi. So it seemed like a natural fit to have some form of automated heating control, integrated into the home automation system. The solution is more or less straightforward (see the heating control schema):
The TP-Link MR3020 WiFi router receives commands via HTTP and relays them to a serial port, where they are picked up by an MSP430, which in turn bit-bangs the control signal for a servo motor that turns the central heating dial.

The WiFi part

For the whole thing to integrate with busfahrn, it has to somehow become part of the network – read: WiFi. As power consumption is not one of the primary concerns here (the whole setup is going to be close to a power outlet), I opted for the cheap WiFi solution. Embedded WiFi is still pretty expensive and complex, although the Electric Imp eases the complexity side and hopefully the TI CC3000 modules will cut costs in the future. But we’re not there yet.
Another way to get WiFi for an embedded project is to appropriate existing hardware. A prime example of such appropriation is the TP-Link MR3020 WiFi router (or its smaller brother, the TP-Link WR703N, for that matter). This inexpensive piece of hardware can run OpenWRT, sports a USB port and is itself powered by a USB power supply.
Getting OpenWRT to run on this little white box is really easy, and described plenty of times elsewhere. Once the OS is set up, the only thing missing is the HTTP-to-serial relay part.
First, we need to be able to control serial ports. Unfortunately, OpenWRT does not ship with stty support in its busybox build. Johan von Konnow has figured that out as well and provides a rebuilt busybox. After ungzipping it, I installed it with

mv busybox /bin/busybox.stty && ln -s /bin/busybox.stty /bin/stty

OpenWRT comes with uhttpd by default. uhttpd is a nifty little webserver that supports CGI. So place a script much like the following one in /www/cgi-bin, and you can send commands to the serial port via http://<ip-of-the-router>/cgi-bin/servo?position=[0-9] (adjust the serial device path at the top of the script to your setup).

#!/bin/sh -ax

# serial device the MSP430 shows up as; adjust to your setup
SP=/dev/ttyACM0

CMD=`echo "$QUERY_STRING" | grep -oE "(^|[?&])position=[0-9]+" | cut -f 2 -d "=" | head -n1`

echo "Content-type: application/json"
echo ""

if [ -z "$CMD" ]; then
    echo "{ 'status': 'error', 'msg': 'missing position' }"
elif [ ! -c "$SP" ]; then
    echo "{ 'status': 'error', 'msg': 'missing servo controller' }"
else
    # configure the port for raw 9600 8N1, but only if it is not set up already
    [ "$(stty -F $SP -a | grep speed | cut -d ' ' -f 2)" != "9600" ] && stty -F $SP raw speed 9600 -crtscts cs8 -parenb -cstopb

    echo $CMD > $SP
    echo "{ 'status' : 'success', 'msg' : 'done' }"
fi

The business end

Sending commands alone will not turn that knob. To make that happen, one needs a servo motor. A typical RC servo works very well here, as its power requirements tend to be < 250mA, so it can be powered straight from the USB bus. Connecting the servo to the actual knob was a bit harder than I anticipated. For one, I'm not allowed to permanently modify the boiler (it's not my property). And then there is that strange dial shape (see video below). So for the actuation part, two things had to be done: writing a bit of code for the MSP430 to control the servo, and building a rig to hold everything in place and connect the servo with the dial it's supposed to turn.
Programming an MSP430 Launchpad is really straightforward thanks to an awesome project called Energia (if basic Wiring-style programming suffices and one doesn’t mind the overhead). The code for the MSP430 – which would also run on an Arduino – can be found in the source archive.
Mounting the whole system to the boiler was one of the bigger challenges of this project. My first attempt with duct tape held for about 3 hours before it hit the floor. As it turns out, stick-on velcro bought from the local haberdashery does the job quite well. It allows me to easily remove and reposition the contraption from/on the boiler without leaving any permanent damage. And should the day come when I move out of this place, I hope that acetone will make removing the velcro easy.
Besides mounting the whole thing to the boiler, some form of adapter to fit the servo to the dial had to be created. It took me two attempts to get the strange curvature of the dial right, but thanks to my trusty Printrbot that was done in less than two hours.
Again, the outlines and models can be found in the source archive below.

Let’s see it

So far I’m pretty happy with the outcome. The machine sticks to the boiler and hasn’t come down yet. WiFi interface and servo work reliably. In case you want to build your own system like this, you might find the files uploaded here helpful.
Download the sources here.

Milling and drilling on a MendelMax

A few days ago, I mounted a Dremel on my MendelMax using this thing. Such a setup allows for a few nice things: milling wood or drilling printed circuit boards – once you’ve got the software down.
As the scripts are a bit hidden in this post, check them out on GitHub.

Milling wood

Once the Dremel is on the machine, milling wood is pretty much a matter of converting a 2D drawing (e.g. stored as DXF) to Marlin-compatible G-code. The weapon of choice here seems to be a program called dxf2gcode. What that program outputs, however, is not directly suited for feeding into the MendelMax:

  • comments: comments are surrounded by parentheses, whereas for this use they need to be single-line comments starting with a semicolon.
  • feed rate: the feed rate is set using a bare F command (e.g. F400), whereas for the RepRap it should be part of a G1 expression (more like G1 F400).
  • unsupported commands: as the generated G-code is designed for real CNC milling machines, several machine-specific commands are issued, such as starting the coolant flow or getting the spindle up to speed. As such commands could confuse the RepRap (not checked, just assumed), we should filter them out or replace them with more suitable counterparts.
  • different movement commands: dxf2gcode produces G0 for initial positioning, whereas moving a RepRap works with G1 commands.
  • different use of whitespace: typically G-code seems to be written so that there is no whitespace between the code letter and its numeric data. dxf2gcode, however, produces whitespace where it typically isn’t. That whitespace makes the output look nice, but again might not work so well with a RepRap.

All those steps are implemented in a little script on GitHub.
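For illustration, the cleanup described above boils down to a handful of substitutions. Here is a simplified Ruby sketch (not the actual script; the exact set of commands to drop may differ):

#!/usr/bin/env ruby
# Rough sketch: make dxf2gcode output palatable for Marlin.
# Reads G-code on stdin (or from a file argument), writes the cleaned-up version to stdout.

ARGF.each_line do |line|
  line = line.strip
  line = line.gsub(/\(([^)]*)\)/, '; \1')           # (comment) -> ; comment
  next if line =~ /^(M0?[3-9]|T\d)/                  # drop spindle/coolant/tool commands
  line = "G1 #{line}" if line =~ /^F[\d.]/           # bare feed rate -> G1 F...
  line = line.sub(/^G0\b/, 'G1')                     # G0 positioning -> plain G1 moves
  line = line.gsub(/([A-Z])\s+(?=[-\d.])/, '\1')     # "X 10.0" -> "X10.0"
  puts line unless line.empty?
end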

Drilling PCBs

Drilling PCBs seems easy at first glance. It turns out, however, that aligning the layer mask with the drilled holes is a delicate issue. So far I’ve achieved the best results using the following steps (in that order):

  1. Export an Excellon drill file from EAGLE using the CAM Processor’s excellon.cam script
  2. Convert the exported Excellon file (probably ending with .drd) to GCode using this script from GitHub.
  3. Drill the holes using the generated GCode
  4. Print the mask, cut it out and align it with the drilled holes
  5. Transfer the mask (toner transfer, UV exposure) and etch the PCB
Converting the Excellon drill file is a key part in this process. The script linked above does just that, including mil to millimeter conversion, and starting point selection. Personally I tend to identify a certain point on the board with the starting point of the printer. See the help output from the script below for a list of what it can do.

Usage: gcodeutils/excellon2gcode.rb [options] drillfile.drd outfile.gcode
    -s, --start STARTPOINT           The start point of the printer, format [XX]x[YY] [mm]
    -f, --first FIRSTPOINT           First drill point to go for and equate to zero, format [XX]x[YY] [mm]
    -m, --mil2mm                     Translate units from MIL to MM
    -i, --invert-x                   Inverts the x axis
    -t, --travel-height HEIGHT       The travel height in mm
    -d, --drill-height HEIGHT        The drill height in mm
    -r, --rotate ANGLE               Rotates the holes by ANGLE degrees
        --preamble FILE              Prepend the preamble from FILE
    -p, --postamble FILE             Append the postamble from FILE
    -v, --verbose                    Produce verbose output
    -g, --gnuplot                    Plot the drill holes on the console
    -h, --help                       Display this screen
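For illustration, the core of that conversion is not much more than the following stripped-down Ruby sketch (not the actual script; it assumes plain decimal X/Y coordinates in the drill file):

#!/usr/bin/env ruby
# Rough sketch: turn Excellon drill coordinates into pecking moves for the MendelMax.

MIL2MM      = 0.0254   # 1 mil = 0.0254 mm
TRAVEL_Z    = 3.0      # travel height in mm
DRILL_Z     = -2.0     # drill depth in mm
FEED_TRAVEL = 1500
FEED_DRILL  = 60

puts "G21"  # millimetres
puts "G90"  # absolute coordinates
puts "G1 Z#{TRAVEL_Z} F#{FEED_TRAVEL}"

ARGF.each_line do |line|
  next unless line =~ /^X([-\d.]+)Y([-\d.]+)/
  x = ($1.to_f * MIL2MM).round(3)
  y = ($2.to_f * MIL2MM).round(3)
  puts "G1 X#{x} Y#{y} F#{FEED_TRAVEL}"   # move over the hole
  puts "G1 Z#{DRILL_Z} F#{FEED_DRILL}"    # plunge
  puts "G1 Z#{TRAVEL_Z} F#{FEED_TRAVEL}"  # retract
end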
On a side note, try not to move the PCB while the drill is still in the board – it will snap. Obviously -.-


busfahrn

These days I’m building a simple home automation system – turning lights and appliances on/off. Not only do I want to be able to switch appliances on and off, I also want the system to do it for me. Hence, some context inference is in order. The centerpiece of this effort is a bus system that is used to transmit sensor information, inferred high-level context and actuator commands. As it so happens, that bus system was a good opportunity to learn Node.JS.
The outcome of this effort is a few lines of JavaScript which I’ve come to call busfahrn (colloquial German for taking a bus ride). It’s basically a wrapper around EventEmitter, but with a ton of different IO support. Its main features are
  • A lot of IO modules to pass along messages. Out of the box support exists for HTTP(S), Redis, serial ports and the console.
  • A notion of message/state inference using redis and simple rules formulated in JavaScript
  • Clean and simple code, easy to extend and modify
  • Written entirely in Node.JS
If you want to give it a spin or read more, please check it out on GitHub.

STL slicing for 3D printing

Some 3D printing methods, like additive layer manufacturing, require the model to be sliced into discrete layers, which are then printed one after another. These days I’m playing around with 3D printing, so I needed to perform some slicing myself. Unfortunately I didn’t like the existing tools for such slicing that much, so I decided to give it a shot and write my own.
Some time ago I wrote a utility that visualizes the flight of a quadrocopter. To make things easy I used the Visualization Toolkit. Remembering that, I hit Google and found an example that pretty much did what I wanted. The model is loaded using vtkSTLReader, and vtkStripper is employed to merge the polyline strips into connected components.
Unfortunately vtkStripper still has a bug (since 2004!) which rendered it unusable for my endeavor. It causes some images to look quite wrong (thus they’d be printed wrong), as it combines some polylines in an unsuitable manner. The slice pictured below has that white/inverted triangle which is not supposed to be there.

After patching vtkStripper.cxx with the patch attached to the bug, everything was fine. (Well, pretty much – I’ve still experienced the problem one time, but hey, what’s perfect in this world ;-))
So the whole slicing process is:
  1. Slice the STL model using vtkCutter and store the polylines in vtp files. By decoupling the cutting process from rendering the images, we gain flexibility, since we do not have to redo the cutting each time we want to use a different rendering algorithm. The first step also computes the bounds of the model (using a bounding box) and stores them in a file.
  2. Convert the polylines to SVG (see the sketch after this list). We use SVG since it provides multiple benefits over directly rasterizing the polyline. First of all we retain control over the units (during the whole process you want to make sure you don’t mess up the units, or your printed object may end up twice as large as anticipated, or similar problems may occur).
  3. Use ImageMagick to rasterize the SVG graphics. That is actually pretty cool, because in this step we can easily ensure that we’re using the correct resolution for our printer. So if we used an inkjet printer to apply the binder during the 3D printing process, we could simply use the resolution of that printer.
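As an aside, the polyline-to-SVG step (2) is essentially just string generation. A minimal Ruby sketch, assuming one closed slice contour is available as an array of [x, y] points in millimetres (the real tool works on the VTK polydata directly):

# points: array of [x, y] coordinates in mm describing one closed contour
def polyline_to_svg(points, width_mm, height_mm)
  path = points.map.with_index do |(x, y), i|
    "#{i.zero? ? 'M' : 'L'} #{x} #{y}"
  end.join(' ') + ' Z'

  svg  = %(<svg xmlns="http://www.w3.org/2000/svg" width="#{width_mm}mm" height="#{height_mm}mm" viewBox="0 0 #{width_mm} #{height_mm}">\n)
  svg += %(  <path d="#{path}" fill="black" stroke="none"/>\n)
  svg + "</svg>\n"
end

puts polyline_to_svg([[10, 10], [40, 10], [25, 35]], 50, 50)

Because width/height are given in mm and the viewBox spans the same numbers, one SVG user unit equals one millimetre, which is exactly the unit control mentioned above; ImageMagick can then rasterize the file at whatever density the printer needs.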

So now that we have all slices we can (of course) print them, or we could make a little movie out of them, which is exactly what I did:

The model comes from Thingiverse and the music is from SoundCloud. You might notice the full model in the upper right corner; that’s just visual sugar for the video and not part of the sliced images.

To make the whole process a little easier, I wrapped a Makefile around it and wrote a little Ruby script that builds the environment for the Makefile to work. That Ruby script, as well as the source, can be found in the ZIP file after the break. You’ll need VTK to build the tools and ImageMagick to run the whole thing.
Download me here.

Soap Bubble Bot

During my studies at the HFU, I have to participate in at least two semester projects. This time we built a new USB interface for some pretty old hardware. That old hardware is a robot arm built in the 80′s (there is “Made in West Germany” written on it). I’ve blogged about it before.
Today we got to present our work. But how do you show something as abstract as interface hardware/software? We came up with a demo which we consider pretty neat: we taught the robot arm to make soap bubbles. Well, we used a bunch of Ruby scripts to grab the events generated by a gamepad (HID device) and interpret them, so that one can control the arm. We also taped something (I’d call it a soap bubble device) to the front of the robot. Add some soap fluid and we could’ve made soap bubbles manually. But that’s too easy. So we added a sequence of positions (one of the firmware’s features) and used that to do the work. Video’s after the break.

LogicAnalyzer – extensible logic debugging

In the past few months it has become quiet on this blog. That’s not because I wasn’t involved in any activity, but rather because I was too involved in one (building a quadrocopter). During that particular activity we started using a logic analyzer. There is a great open source project building such a device (originally here). Unfortunately it was out of stock for quite some time, so we set out to build one ourselves. In that process I started writing some client software for it, since there was a lack of extensible, open source logic analyzer client software. In the meantime, the Open Workbench Logic Sniffer came back in stock, so we ordered two of them. By the time they arrived, the LogicAnalyzer was pretty much usable.
While designing the LogicAnalyzer, I had three major goals in mind:

  1. Extensibility: integrating new devices should be as easy as possible. Generally speaking, integrating any kind of data source should be easy.
  2. Clean UI: some logic analyzer client software that is around has a pretty cluttered UI. The LogicAnalyzer UI should be clean and intuitive.
  3. Architecture: in order to achieve the required extensibility and to enhance maintainability, a good and clean architecture is required.

All goals have been met (although one might discuss how well goal three has been reached – that’s in the eye of the beholder).

LogicAnalyzer is built using the Eclipse Rich Client Platform. This powerful platform provides mechanisms to extend applications built using it. So we’re using the extension point mechanism of Eclipse to provide the required extensibility. Special focus has been put on a clean API and unobtrusive interfaces. By using Eclipse RCP, we also gain multi-platform support: LogicAnalyzer runs on Mac OS X, Linux and Windows.

The clean UI is achieved by only showing what’s currently required. When you open LogicAnalyzer, all you’ll see is the button to get you started. The tools integrated into the program may require a UI, which is only shown when they’re selected. We wanted to maximize the amount of pixels available to show what’s really important: the data we’re visualizing.

An important (but simple) rule during the design phase has been a clear separation of concerns. Thus we’ve clearly separated the UI from the business logic. Most components are connected using Eclipse mechanisms (e.g. extension points). A lot has been designed against interfaces (where appropriate). Those measures resulted in a clean structure and good maintainability.
If you want to know more (such as a list of features) or want to give it a try, I recommend that you have a look at LogicAnalyzer’s homepage. Some numbers for those interested: the initial commit was roughly 10k lines of code written in two weeks. Still, so far we’ve experienced only a few (known but unfixed) bugs.

Making the world a safer place: self-verifying SMC

We all use code generators, no matter if we love or hate them (or do both). But for most compilers you simply have to trust that they work correctly and do what they’re supposed to do. They typically do not provide any sort of proof of their correctness. Well, the SMC is different in that regard. It can generate verification code for its C backend.
The idea of self-verifying compilers is not completely new. Several methods have been proposed for this task, proof-carrying code and the credible compiler to name a few. What I’m doing here is a mixture of both (with a bit more of the credible compiler – although the SMC is no optimizing compiler, so there is only a single transformation involved). Basically, once we have generated C code out of the FSM description, we have two (possibly different) FSMs: the one we described and the one the SMC generated. The intuitive notion of those two FSMs being equal is that their languages (denoted by \mathcal{L}(F)) are equal. But how do we show that?
For our purpose it’s enough to show that the two transition functions are equal, as we only care about the generated FSM behaving exactly as the described one does. That still leaves us with the problem that we have to unobtrusively check the generated FSM.
Let us define p=(t_1, \dots, t_n), t_i = ((s, \sigma), (s', A_{in}, A_{out}))\in\delta to be a sequence of transitions (called a path or trace). We’ll call a path loop free if \forall t, t' \in p: t \neq t' \rightarrow s_t \neq s_{t'} holds – meaning that there are no two transitions within a path having the same start state.
Finally we accept the two FSMs to be equal iff for every loop-free path in the original FSM there exists an equivalent path in the generated machine. Consequently, we accept the SMC to work correctly if the two FSMs (the original one and the generated one) are equal.
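Enumerating those loop-free paths is essentially a depth-first search that never re-uses a start state. A minimal Ruby sketch of the idea, assuming the FSM is given as a plain adjacency structure (the actual SMC of course works on its AST, and the states/inputs here merely mimic the example trace further below):

# transitions: Hash mapping a state to an array of [input, target_state] pairs
def loop_free_paths(transitions, state, used_starts = [], path = [], paths = [])
  used = used_starts + [state]
  (transitions[state] || []).each do |input, target|
    extended = path + [[state, input, target]]
    paths << extended
    # only recurse if the target has not yet served as a start state on this path
    loop_free_paths(transitions, target, used, extended, paths) unless used.include?(target)
  end
  paths
end

fsm = {
  "_INIT" => [["CMD_MODE_RT", "RT"]],
  "RT"    => [["!command", "_ERR"], ["command", "RT"]],
}
loop_free_paths(fsm, "_INIT").each do |trace|
  puts trace.map { |s, i, t| "#{s} -#{i}-> #{t}" }.join("   ")
end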
Generating the traces of the original FSM is a rather trivial task; we already have a good structure to work on (the SMC AST). That’s not the case with the generated FSM. First of all, generating the trace must be unobtrusive: we must be sure that our instrumentation does not alter the behavior of the FSM. We can accept that a simple print statement, placed appropriately, generally does not alter the behavior. So we simply place print statements (with a unique ID) at each transition and action. Then we generate code that causes the FSM to transit along the previously determined paths.
If everything goes right, executing that code causes a sequence of IDs to be printed. If the SMC works correctly, that sequence should be identical to the one produced by the traces of the original FSM.

To use this verification mechanism, run the SMC with the cverify backend and execute the generated _checker.sh script. That script either ends with “NO ERRORS FOUND” (which should always be the case) or with a list of traces which the generated FSM did not perform. Additionally, a checker.log file is generated which contains entries like this:

[ ] AST_Transition(_INIT,RT,List(AST_ConstEqCondition(CMD__MODE__RT)),AST_Actions(Some(List(clr)),None)) AST_Transition(RT,_ERR,List(AST_NegatedCondition(AST_PredicateCondition(command))),AST_Actions(None,None))
[!] A98601 T-93118478 T-1152986909 ;
[?] A98601 T-93118478 T-1152986909 ;

where the line marked with the exclamation mark denotes the trace of the original FSM, as opposed to the question mark, which marks the trace of the generated FSM.

Ah, and there is yet another new feature: the SMC can now generate a NuSMV model. Of course that’s all in the repository.

State machine compiler

As mentioned in one of my previous posts, next semester I’m going to participate in a project where we’re supposed to build a new interface to existing robot hardware. To describe the protocol spoken between the computer and that hardware interface, I wanted to use some sort of DFA. I also wanted to avoid having to implement that DFA in the different programming languages involved in the project, especially since such implementations tend to diverge over time.
So I sat down and wrote something I call the state machine compiler. It allows the specification of a so-called predicate deterministic finite automaton. Such a PDFA can be compiled to C or Java code – or to whichever language one writes a backend for. I won’t go into detail here, as I’ve written a manual which explains everything in detail.
You can grab the source code from subversion (username: guest, password: guest). As usual it’s published under CC-BY-SA. Be aware that this is considered work in progress, so the generated code may still be far from being optimal.