32leaves.net

Soap Bubble Bot

During my studies at the HFU I have to participate in at least two semester projects. This time we built a new USB interface for some pretty old hardware: a robot arm built in the '80s (it has “Made in West Germany” written on it). I’ve blogged about it before.
Today we got to present our work. But how do you show something as abstract as interface hardware/software? We came up with a demo which we consider pretty neat: we taught the robot arm to make soap bubbles. Well, we used a bunch of Ruby scripts to grab the events generated by a gamepad (an HID device) and interpreted them, so that one can control the arm. We also taped something I’d call a soap bubble device to the front of the robot. Add some soap fluid and we could have made soap bubbles manually. But that’s too easy. So we added a sequence of positions (one of the firmware’s features) and used that to do the work. Video’s after the break.
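The gamepad part of the setup boils down to decoding Linux input events. The sketch below is a hypothetical reconstruction, not our actual scripts: the event struct layout (24 bytes on a 64-bit system), the device path, and the printed descriptions are all my assumptions.

```ruby
# Hypothetical sketch of reading gamepad (HID) events on Linux.
# Assumes the 64-bit struct input_event layout: timeval (2 x 8 bytes),
# type (u16), code (u16), value (s32) = 24 bytes total.
EVENT_SIZE = 24
EV_KEY, EV_ABS = 0x01, 0x03   # button events and axis events

# Decode one raw input_event struct into [type, code, value].
def parse_event(raw)
  _sec, _usec, type, code, value = raw.unpack("qqSSl")
  [type, code, value]
end

def describe(type, code, value)
  case type
  when EV_KEY then "button #{code} #{value == 1 ? 'pressed' : 'released'}"
  when EV_ABS then "axis #{code} at #{value}"
  end
end

# The real script would loop over File.open("/dev/input/event0", "rb"),
# reading EVENT_SIZE bytes at a time; here we decode a hand-built sample.
sample = [0, 0, EV_KEY, 0x130, 1].pack("qqSSl")
type, code, value = parse_event(sample)
puts describe(type, code, value)   # button 304 pressed
```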

State machine compiler

As mentioned in one of my previous posts, next semester I’m going to participate in a project where we’re supposed to build a new interface to existing robotics hardware. To describe the protocol spoken between the computer and that hardware interface, I wanted to use some sort of DFA. I also wanted to avoid having to implement that DFA in each of the different programming languages involved in the project, especially since such implementations tend to diverge over time.
So I sat down and wrote something I call the state machine compiler. It allows the specification of a so-called predicate deterministic finite automaton. Such a PDFA can be compiled to C or Java code – or to whichever language one writes a backend for. I won’t go into detail here, as I’ve written a manual which explains everything in depth.
You can grab the source code from Subversion (username: guest, password: guest). As usual it’s published under CC-BY-SA. Be aware that this is work in progress, so the generated code may still be far from optimal.
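To give an idea of what a predicate DFA is: transitions are guarded by predicates over the input symbol instead of fixed symbols. The Ruby sketch below is purely illustrative – the class, the transition layout, and the toy protocol are my invention (the real compiler generates C or Java from a specification file):

```ruby
# Illustrative sketch (not the compiler's actual output) of a predicate DFA:
# a transition fires when its predicate holds for the current input symbol.
class PDFA
  Transition = Struct.new(:from, :predicate, :to)

  def initialize(start, accepting, transitions)
    @start, @accepting, @transitions = start, accepting, transitions
  end

  # Run the automaton over a sequence of input symbols.
  def accepts?(input)
    state = @start
    input.each do |symbol|
      t = @transitions.find { |tr| tr.from == state && tr.predicate.call(symbol) }
      return false if t.nil?   # no transition applies: reject
      state = t.to
    end
    @accepting.include?(state)
  end
end

# Toy protocol: a start byte (0x02), any number of payload bytes, an end byte (0x03).
machine = PDFA.new(:idle, [:done], [
  PDFA::Transition.new(:idle,    ->(b) { b == 0x02 }, :payload),
  PDFA::Transition.new(:payload, ->(b) { b == 0x03 }, :done),
  PDFA::Transition.new(:payload, ->(b) { b != 0x03 }, :payload)
])
puts machine.accepts?([0x02, 0x41, 0x42, 0x03])   # true
```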

Generating syntax diagrams from Scala parser combinators

Scala parser combinators are a powerful concept which allows you to write parsers with just the standard Scala distribution. They’re implemented as an internal DSL, which makes them feel natural in the language. As mentioned before, I’m currently writing some sort of state machine compiler. Since I plan to write some documentation for that software (which is actually kind of rare :-P), I need some way to describe its syntax. Besides EBNF, syntax diagrams are a neat way to do that. The external DSL which describes the state machines is written directly using the parser combinators, so I had to figure out a way to generate syntax diagrams as well as EBNF out of them.
My first approach was to pattern match on the resulting Parser structure. But unfortunately the combinators themselves are mostly modeled as functions, and thus do not appear in the resulting object graph. So I decided to write a Scala compiler plugin and walk the AST. Basically all this software does is transform trees: from the AST to an intermediate expression structure, and from there to a graphing tree. This approach suffers from some limitations, and I made some assumptions as well:

  • The name of the class which contains the production rules must equal the filename without “.scala”
  • Production rules must not be named: “stringWrapper”, “literal”, “regex”, “Predef”, “r” or “accept”
  • def production rules must not have parameters, otherwise they’re not considered to be production rules
  • production rules defined via val will appear twice (this is most likely a bug)

All in all this implementation works in my case, but might not work for others. It’s more of a quick hack than carefully engineered software, but it proves the concept and gets the job done.

With the ability to draw syntax diagrams and having the expression tree structure at hand (which can be seen as an AST for EBNF), writing an EBNF parser (which is semi-compatible with ISO/IEC 14977:1996(E)) and feeding the diagram generator with the result was a snap.
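To illustrate what such an expression tree looks like, here is a minimal Ruby mirror of the idea (the actual implementation is Scala; the short class names are my stand-ins for the OrExpression/SeqExpression/OptionExpression/ZeroToManyExpression structure used in the code below):

```ruby
# A tiny sketch of an EBNF expression tree rendered back to EBNF text.
# The class names and rendering are illustrative, not the real Scala code.
Lit     = Struct.new(:text)  { def to_ebnf; "'#{text}'"; end }
RuleRef = Struct.new(:name)  { def to_ebnf; name; end }
Seq     = Struct.new(:parts) { def to_ebnf; "( " + parts.map(&:to_ebnf).join(" ") + " )"; end }
Alt     = Struct.new(:parts) { def to_ebnf; "( " + parts.map(&:to_ebnf).join(" | ") + " )"; end }
Opt     = Struct.new(:inner) { def to_ebnf; "[ #{inner.to_ebnf} ]"; end }
Many    = Struct.new(:inner) { def to_ebnf; "{ #{inner.to_ebnf} }"; end }

# Mirrors optional_sequence from the parser below:
rule = Seq.new([Lit.new("["), RuleRef.new("definitions_list"), Lit.new("]")])
puts "optional_sequence = #{rule.to_ebnf};"
# optional_sequence = ( '[' definitions_list ']' );
```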
So let’s consider the following example (which is actually the EBNF parser I wrote):

def start: Parser[List[ProductionRule]] = syntax_rule*
def syntax_rule = meta_identifier ~ '=' ~ definitions_list ~ ';' ^^ {
  case name ~ _ ~ expr ~ _ => ProductionRule(name, expr)
}
def meta_identifier = """[a-zA-Z0-9]+"""r
def definitions_list:Parser[Expression] = (single_definition ~ (('|' ~ single_definition)*)) ^^ {
  case t ~ o => OrExpression(t :: o.map(_ match { case _ ~ x => x }))
}
def single_definition = (primary ~ (((","?) ~ primary)*)) ^^ {
  case t ~ o => SeqExpression(t :: o.map(_ match { case _ ~ x => x }))
}
def primary: Parser[Expression] = optional_sequence | repeated_sequence | grouped_sequence | terminal_string | meta_identifier ^^ {
  case s => RuleRefExpression(s)
}
def optional_sequence = "[" ~ definitions_list ~ "]" ^^ {
  case _ ~ os ~ _ => OptionExpression(os)
}
def repeated_sequence = "{" ~ definitions_list ~ "}" ^^ {
  case _ ~ rs ~ _ => ZeroToManyExpression(rs)
}
def grouped_sequence = "(" ~ definitions_list ~ ")" ^^ {
  case _ ~ gs ~ _ => gs
}
def terminal_string = (""""[^"\\\r\n]*(?:\\.[^"\\\r\n]*)*" """r) ^^ {
  case s => LiteralExpression(s)
}

The corresponding EBNF expression generated by this software is:

start = { syntax_rule };
syntax_rule = ( '[a-zA-Z0-9]+' '=' ( single_definition { ( '|' single_definition ) } ) ';' );
meta_identifier = '[a-zA-Z0-9]+';
definitions_list = ( ( primary { ( [ ',' ] primary ) } ) { ( '|' single_definition ) } );
single_definition = ( ( optional_sequence | repeated_sequence | grouped_sequence | terminal_string | meta_identifier ) { ( [ ',' ] primary ) } );
primary = ( ( '[' definitions_list ']' ) | ( '{' definitions_list '}' ) | ( '(' definitions_list ')' ) | '"[^"\\\r\n]*(?:\\.[^"\\\r\n]*)*" ' | '[a-zA-Z0-9]+' );
optional_sequence = ( '[' ( single_definition { ( '|' single_definition ) } ) ']' );
repeated_sequence = ( '{' ( single_definition { ( '|' single_definition ) } ) '}' );
grouped_sequence = ( '(' ( single_definition { ( '|' single_definition ) } ) ')' );
terminal_string = '"[^"\\\r\n]*(?:\\.[^"\\\r\n]*)*" ';

And of course the syntax diagrams we wanted:

So far the syntax diagram generator understands two options:

  • -P:virtual:rule01,...,ruleN: declares rule01,…,ruleN to be virtual rules which causes them to be resolved wherever they’re referenced. Be aware that recursive virtual rules are only resolved to one level
  • -P:depth:N: resolve rule references until a depth of N (recursive rules are only resolved for one level)
If you’re interested in the code you can grab it from this SVN repository (username: guest, password: guest). You’ll need Scala as well as Batik to run it.
Creative Commons License
SyntaxDiagramGenerator is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License

Visualizing Sorting Algorithms – my shot

These days I have to write some sort of reference card for basic sorting algorithms. Of course I wanted some sort of visualization of those algorithms on that card. The coolest sorting algorithm visualization I’ve seen so far was created by Aldo Cortesi. He used Cairo to draw his diagrams (according to his post, learning Cairo was the whole point of the exercise).
Personally I still found those diagrams pretty hard to read if you really wanted to follow the algorithm. And of course I wanted to do something with Cairo, too. So I decided to rewrite the whole thing in C. My version shows distinct sections for each loop iteration (as most of those algorithms can be followed by thinking in that iterative, loop-oriented way). In the diagrams below, each number denotes the loop nesting level, while B stands for begin and E for end.
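The iteration markers work roughly like this; the snippet is my own Ruby reconstruction of the idea (the actual visualization is written in C with Cairo), tracing a bubble sort and emitting a begin/end marker per loop iteration together with the array state to be drawn:

```ruby
# Sketch of the marker scheme: record "<nesting level>B" at the begin and
# "<nesting level>E" at the end of every loop iteration, plus a snapshot
# of the array, which the real program would render as a diagram section.
def bubble_sort_trace(a)
  trace = []
  (0...a.length - 1).each do |i|
    trace << ["1B", a.dup]                             # outer iteration begins
    (0...a.length - 1 - i).each do |j|
      trace << ["2B", a.dup]                           # inner iteration begins
      a[j], a[j + 1] = a[j + 1], a[j] if a[j] > a[j + 1]
      trace << ["2E", a.dup]                           # inner iteration ends
    end
    trace << ["1E", a.dup]                             # outer iteration ends
  end
  trace
end

bubble_sort_trace([3, 1, 2]).each { |marker, state| puts "#{marker} #{state.inspect}" }
```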
The source can be downloaded here and is under CC-BY-SA.

Unix Friends and User Group Campus Camp 2010

UnFUCK 2010 has just wrapped up. UnFUCK is a three-day event organized by students of Hochschule Furtwangen University (including myself) which features a lot of talks on quite a variety of topics. This year – the second time UnFUCK took place – we had talks ranging from the power of prime numbers to fiddling with the ARM Cortex-M3. The timeline gives some more information about what was going on.
All in all it was a weekend definitely well spent; I had the opportunity to meet a lot of inspiring people and to hear about new and cool ideas. Videos of the conference should be available at the UnFUCK wiki within the next few days (unfortunately everything is in German).
On Sunday I had the opportunity to give my Informatik Rockstars talk again (that’s no typo, it’s German :-P). It basically presents people who have made extraordinary contributions to computer science (with a slight bias towards theoretical computer science) as a trading card game. As I’ve been asked a couple of times to publish those cards, I’m doing so now (unfortunately they’re also in German). They are in no particular order, far from complete, and should be considered work in progress. Download link and license are at the end of this post.
You can also download the whole set here.
Creative Commons License
Informatik Rockstars Trading Cards by Christian Weichel is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License

Spectral graph drawing

Whenever I attend a lecture about some topic, I tend to gather some more information about that particular field than the lecture itself provides. This time it’s graph theory. While I was reading about graph drawing algorithms, I stumbled upon something called spectral graph theory [2]. To get an idea of what this is all about, I started looking through the presentation on spectral graph theory [2] and reading a few [15] papers [8] on the topic.
Note: Please forgive my formal imprecision. If anyone feels “offended” by this, they might want to read the papers instead.
The basic idea behind all this theory is to compute the eigenvectors [5] of the Laplacian matrix [6] of a graph and to sort the eigenvalues in ascending order (so the eigenvectors are sorted as well). If \(v_0,\dots,v_n\) are the eigenvectors and \(\lambda_0,\dots,\lambda_n\) the corresponding eigenvalues of the Laplacian matrix of the graph \(G = (\{k_0,\dots,k_n\}, E)\), sorted such that \(\lambda_m \leq \lambda_{m+1}\) for \(0<m<n\), then the position vector for vertex \(k_i\) is \(\begin{bmatrix}v_1(i) \\ v_2(i)\end{bmatrix}\), where \(v_k(i)\) denotes the \(i\)-th component of the eigenvector \(v_k\).

Due to reasons explained in the papers [15] [8], this method of graph drawing tends to produce planar [9] drawings. Thus large graphs (especially mesh graphs) tend to look pretty good when drawn this way.
Having understood the theory (at least most of it), I wanted to try things out. So I got some test data [10] and wrote a small Ruby script which converted the PARTY format [11] (or at least a subset of it) into a text-based matrix format, so that Octave [12] was able to read the adjacency matrices [13] of the graphs. Let \(D\) denote the degree matrix [14] of the graph and \(A\) the adjacency matrix; then the Laplacian matrix is \(L = D - A\). Now we can calculate the eigenvectors of \(L\). The Octave code below loads a graph’s adjacency matrix in text format and calculates the (first three) eigenvectors of the Laplacian matrix (basically taken from [15]):

load graph.txt;             % reads the adjacency matrix into the variable 'graph'
A = sparse(graph);
L = diag(sum(A)) - A;       % Laplacian L = D - A
[v,d] = eigs(L, 3, 'sm');   % the three eigenpairs of smallest magnitude

Then I plotted the graph using gnuplot (to be more precise: using gplot, Octave’s built-in graph plotting functionality). The result was already pretty pleasing:

[Figure: the “crack” test graph, plotted with gplot]
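For small graphs the same pipeline can also be sketched without Octave, e.g. with Ruby’s stdlib Matrix class. This is my own illustration, not part of the original toolchain (and for the large test graphs a sparse eigensolver like eigs remains necessary):

```ruby
require 'matrix'

# Spectral layout of a 4-vertex cycle graph: build L = D - A, sort the
# eigenpairs by ascending eigenvalue, and use the 2nd and 3rd smallest
# eigenvectors as x/y coordinates.
a = Matrix[[0, 1, 0, 1],
           [1, 0, 1, 0],
           [0, 1, 0, 1],
           [1, 0, 1, 0]]
d = Matrix.diagonal(*a.row_vectors.map { |r| r.sum })  # degree matrix
l = d - a                                              # Laplacian L = D - A

eig = l.eigen
pairs = eig.eigenvalues.zip(eig.eigenvectors).sort_by { |val, _| val }
x, y = pairs[1][1], pairs[2][1]                        # 2nd and 3rd smallest
positions = (0...4).map { |i| [x[i], y[i]] }
positions.each_with_index { |(px, py), i| puts "k#{i}: (#{px.round(3)}, #{py.round(3)})" }
```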

But I wanted things to look better, so I wrote a MEX file [16] which plots graphs using OpenCV [17].

#include "mex.h"
#include <opencv/cv.h>
#include <opencv/highgui.h>

#define SCALE  40000
#define IN_ADJ prhs[0]
#define IN_XY  prhs[1]

#define min(x,y) ((x) < (y) ? (x) : (y))
#define max(x,y) ((x) > (y) ? (x) : (y))
#define XY(i)    (int)((xyOffset + xy[i]) * SCALE)

void mexFunction (int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) {
   
    if(nrhs < 2) mexErrMsgTxt("expects two parameters: adjacency matrix (sparse, n times n), xy vector (n times 2)");
    if(! mxIsSparse(IN_ADJ)) mexErrMsgTxt("Adjacency matrix must be sparse");
    if(mxGetM(IN_ADJ) != mxGetN(IN_ADJ)) mexErrMsgTxt("Adjacency matrix must be square");
    int n = mxGetN(IN_ADJ);
    int nz = mxGetNzmax(IN_ADJ);
   
    if(mxGetN(IN_XY) != 2 || mxGetM(IN_XY) != n) mexErrMsgTxt("XY vector must be (n times 2)");

    double *xy = mxGetPr(IN_XY);
    double *adj = mxGetPr(IN_ADJ);
   
    // get dimensions
    int i, j;
    double xyOffset = 0, maxX = 0, maxY = 0;
    for(i = 0; i < n; i++) {
        if(xy[i] < xyOffset) xyOffset = xy[i];
        if(xy[i] > maxX) maxX = xy[i];
    }
    for(i = n; i < 2*n; i++) {
        if(xy[i] < xyOffset) xyOffset = xy[i];
        if(xy[i] > maxY) maxY = xy[i];
    }
    xyOffset = -xyOffset;
    maxX += xyOffset;
    maxY += xyOffset;
   
    mexPrintf("img size: %i x %i, xyOffset: %f, maxX: %f, maxY: %f\n", (int)(maxX * SCALE), (int)(maxY * SCALE), xyOffset, maxX, maxY);
    IplImage *img = cvCreateImage(cvSize((int)(maxX * SCALE), (int)(maxY * SCALE)), IPL_DEPTH_8U, 3);

    int *ir = mxGetIr(IN_ADJ);
    int *jc = mxGetJc(IN_ADJ);
   
    int total = 0;
    CvPoint p1, p2;
   
    for (i=0; i<n; i++)  {
        p1.x = XY(i);
        p1.y = XY(n + i);
       
        int starting_row_index = jc[i];
        int stopping_row_index = jc[i+1];
        if (starting_row_index == stopping_row_index) continue;
        else {
            for (j = starting_row_index; j < stopping_row_index; j++)  {
                p2.x = XY(ir[j]);
                p2.y = XY(n + ir[j]);
               
                cvLine(img, p1, p2, cvScalar(255,128,255,0), 1, CV_AA, 0);
               
            //  mexPrintf("\t(%d,%d) = %g || (%i %i) -- (%i %i)\n", ir[j]+1, i+1, adj[total++], p1.x, p1.y, p2.x, p2.y);
            }
        }
    }
   
    if(!cvSaveImage("/tmp/gr.png", img)) mexErrMsgTxt("Unable to write /tmp/gr.png");
    cvNamedWindow("mainWin", CV_WINDOW_AUTOSIZE);
    cvShowImage("mainWin", img );
    cvWaitKey(20);
   
}

That code was used to produce the following graphs (except for the blue one – my WordPress simply can’t exclude images from a gallery):

With something looking that cool in my hands, I simply had to create posters out of it. So I post-processed two graph images and ordered prints (the two black and white images). This stuff is fun, and extremely interesting. I’ll definitely investigate this topic further.

References

Playing around with Markov algorithms

This is just something quick I’ve been wanting to do for about a week now: an interpreter for Markov algorithms. So I wrote two of them: one in Ruby to work out how to do it, and one in JavaScript to have it on the web. Neither is particularly beautifully written, but they get the job done. Click this neat little link to get to the online version, or have a look at the Ruby code below (as you can see, the production system and input word are hard-coded). In both versions the production system removes the trailing zeros from the input word.

E = 0x00                    # stands for the empty word
P = [                       # production system: the first applicable rule wins
    { "a1" => "1a" },
    { "a0" => "0a" },
    { "0a" => "a" },
    { "a" => [E] },         # wrapping the RHS in an array marks a terminating rule
    { E => "a" }
]
I = "0101101001110110000"


word = I
while true do
    # find the first rule whose left-hand side occurs in the current word
    ruleToRun = nil
    P.each {|rule|
        rule = rule.to_a[0]
        score = word.index(rule[0])
        if (!score.nil? || rule[0] == E) && ruleToRun.nil?
            ruleToRun = rule
        end
    }

    break if ruleToRun.nil?
    shouldStopAfterThisRule = ruleToRun[1].is_a? Array
    ruleToRun[1] = ruleToRun[1][0] if shouldStopAfterThisRule
    oldword = word
    if ruleToRun[0] == E
        word = ruleToRun[1] + word          # empty LHS: prepend the RHS
    elsif ruleToRun[1] == E
        word = word.sub(ruleToRun[0], "")   # empty RHS: delete the first match
    else
        word = word.sub(ruleToRun[0], ruleToRun[1])
    end
    puts "#{ruleToRun[0]} -> #{(shouldStopAfterThisRule ? "." : "")}#{ruleToRun[1].to_s}: #{oldword} => #{word}"
    break if shouldStopAfterThisRule
end
puts word

Practicing binary addition

Today we were talking about binary addition and the two’s complement. As our professor really sets a high value on being able to do binary addition and to calculate the two’s complement, I decided to write a small program to aid in practicing both. Click here to get started.

Once you’ve filled in all the missing fields (or hit the “help me” link), hit return to see whether you were right.
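The two operations being practiced can be sketched in a few lines of Ruby – a sketch of the underlying math, not the actual program’s code:

```ruby
# Two's complement: invert all bits (one's complement), then add one,
# keeping only the lowest n bits.
def twos_complement(bits)
  inverted = bits.tr("01", "10")
  (inverted.to_i(2) + 1).to_s(2).rjust(bits.length, "0").slice(-bits.length, bits.length)
end

# Binary addition of two n-bit numbers; the carry out of the top bit is discarded.
def binary_add(a, b, width)
  ((a.to_i(2) + b.to_i(2)) % (1 << width)).to_s(2).rjust(width, "0")
end

puts twos_complement("0101")          # 1011  (-5 in 4-bit two's complement)
puts binary_add("0101", "1011", 4)    # 0000  (5 + (-5) overflows to zero)
```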
