32leaves.net

Milling and drilling on a MendelMax

A few days ago, I mounted a Dremel on my MendelMax using this thing. Such a setup allows for a few nice things: milling wood or drilling printed circuit boards – once you've got the software down.
As the scripts are a bit hidden in this post, check them out on GitHub.

Milling wood


Once the Dremel is on the machine, milling wood is pretty much a matter of converting a 2D drawing (e.g. stored as DXF) to Marlin-compatible GCode. The weapon of choice here seems to be a program called dxf2gcode. What that program outputs, however, is not directly suited for feeding into the MendelMax:

  • comments: dxf2gcode surrounds comments with parentheses, whereas Marlin expects single-line comments starting with a semicolon.
  • feed rate: dxf2gcode sets the feed rate using a standalone F command (e.g. F400), whereas the RepRap expects it as part of a G1 command (more like G1 F400)
  • unsupported commands: as the generated GCode is designed for real CNC milling machines, it issues several machine-specific commands, such as starting the coolant flow or getting the spindle up to speed. As such commands could confuse the RepRap (not checked, just assumed), we should filter them out or replace them with better-suited counterparts.
  • different movement commands: dxf2gcode produces G0 for initial positioning, whereas moving a RepRap works with G1 commands.
  • different use of whitespace: GCode is typically written without whitespace between the code letter and its numeric value. dxf2gcode, however, inserts whitespace where there usually is none. That whitespace makes the output look nice, but again might not work so well with a RepRap.

All those steps are implemented in a little script on GitHub.
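For illustration, a minimal sketch of those transformations is shown below – written in C++ for the sake of this post, while the actual script lives in the GitHub repository; the regular expressions and the list of filtered M-codes (spindle and coolant commands) are assumptions. It reads GCode on stdin and writes a cleaned-up version to stdout.

// gcode_filter.cpp – an illustrative sketch, not the actual GitHub script.
// Reads dxf2gcode output on stdin and writes RepRap-friendly GCode on stdout.
#include <iostream>
#include <regex>
#include <string>

int main() {
    // spindle (M3/M5) and coolant (M7/M8/M9) commands the RepRap does not need
    std::regex unsupported(R"(^\s*M0?[35789]\b.*)");
    std::regex comment(R"(\(([^)]*)\))");                 // "( ... )" comments
    std::regex rapid(R"(^(\s*)G0?0(?!\d))");              // G0/G00 rapid moves
    std::regex bareFeed(R"(^\s*F\s*(\d+(\.\d+)?)\s*$)");  // standalone "F400"
    std::regex codeSpace(R"(([A-Z])\s+(-?\d))");          // "X 12.5" -> "X12.5"

    std::string line;
    while (std::getline(std::cin, line)) {
        if (std::regex_match(line, unsupported)) continue;   // drop the command
        line = std::regex_replace(line, comment, "; $1");    // semicolon comments
        line = std::regex_replace(line, rapid, "$1G1");      // G0 -> G1
        line = std::regex_replace(line, bareFeed, "G1 F$1"); // feed rate via G1
        line = std::regex_replace(line, codeSpace, "$1$2");  // strip whitespace
        std::cout << line << "\n";
    }
    return 0;
}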

Drilling PCBs

Drilling PCBs seems easy at first glance. It turns out, however, that aligning the layer mask with the drilled holes is a delicate issue. So far I've achieved the best results using the following steps (in that order):

  1. Export an Excellon drill file from EAGLE using the CAM Processor excellon.cam script
  2. Convert the exported Excellon file (probably ending with .drd) to GCode using this script from GitHub.
  3. Drill the holes using the generated GCode
  4. Print the mask, cut it out and align it with the drilled holes
  5. Transfer the mask (toner transfer, UV exposure) and etch the PCB
Converting the Excellon drill file is a key part of this process. The script linked above does just that, including mil-to-millimeter conversion and starting point selection. Personally, I tend to identify a certain point on the board with the starting point of the printer. See the help output from the script below for a list of what it can do.

Usage: gcodeutils/excellon2gcode.rb [options] drillfile.drd outfile.gcode
    -s, --start STARTPOINT           The start point of the printer, format [XX]x[YY] [mm]
    -f, --first FIRSTPOINT           First drill point to go for and equate to zero, format [XX]x[YY] [mm]
    -m, --mil2mm                     Translate units from MIL to MM
    -i, --invert-xX                  Inverts the x axis
    -t, --travel-height HEIGHT       The travel height in mm
    -d, --drill-height HEIGHT        The drill height in mm
    -r, --rotate ANGLE               Rotates the holes by ANGLE degrees
        --preamble FILE              Prepend the preamble from FILE
    -p, --postamble FILE             Append the postamble from FILE
    -v, --verbose                    Produce verbose output
    -g, --gnuplot                    Plot the drill holes on the console
    -h, --help                       Display this screen
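For illustration, the core of such a conversion could look like the sketch below – written here in C++, although the actual tool is the Ruby script whose help output is shown above. It assumes drill coordinates with explicit decimal points given in mil (e.g. X450.0Y1200.0); real Excellon files often use implicit decimals, tool changes and header commands that a proper converter has to handle, and the travel/drill heights are made-up defaults.

// excellon_sketch.cpp – an illustrative sketch, not the Ruby script above.
// Assumes drill coordinates in mil with explicit decimal points, e.g. "X450.0Y1200.0";
// headers, tool changes and implicit-decimal dialects are ignored here.
#include <fstream>
#include <iostream>
#include <regex>
#include <string>

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: excellon_sketch drillfile.drd\n"; return 1; }
    std::ifstream in(argv[1]);

    const double mil2mm = 0.0254;  // 1 mil = 0.0254 mm
    const double travel = 5.0;     // travel height in mm (assumption)
    const double depth  = -2.0;    // drill depth in mm (assumption)
    std::regex hole(R"(X(-?\d+\.?\d*)Y(-?\d+\.?\d*))");

    std::cout << "G21 ; units are mm\nG90 ; absolute positioning\n";
    std::string line;
    std::smatch m;
    while (std::getline(in, line)) {
        if (!std::regex_search(line, m, hole)) continue;   // skip non-coordinate lines
        double x = std::stod(m[1]) * mil2mm;
        double y = std::stod(m[2]) * mil2mm;
        std::cout << "G1 Z" << travel << "\n"               // lift the drill
                  << "G1 X" << x << " Y" << y << "\n"       // move over the hole
                  << "G1 Z" << depth << "\n";               // plunge
    }
    std::cout << "G1 Z" << travel << "\n";                  // lift at the end
    return 0;
}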
On a side note, try not to move the PCB while the drill is still in the board – it will snap. Obviously -.-

From bent wire to 3D printed cookie cutters

Abstract

With the advent of 3D printers in private homes, producing custom kitchen utensils such as cookie cutters becomes feasible. However, using existing interfaces – such as Autodesk Inventor, SolidWorks or SketchUp – to design such a customized kitchen artifact is out of reach for most users. In this work we present a system that takes a bent wire, or thick line drawing, as shape input and creates a producible cookie cutter model. We use computer vision and implement the idea of using household items for shape input as well as fiducial markers.

Introduction

The creation and consumption of Christmas cookies is an essential activity during the holidays. For baking cookies one needs, besides the edible side of things, a cookie cutter – or potentially a few of them. They come in many shapes and sizes; however, stars, hearts and similar motifs dominate the commercially available selection.
Creating custom cookie shapes is typically accomplished using a knife instead of the fixed-shape cookie cutters. The results of this creative endeavour depend highly on the cutting skills of the people involved and are seldom reproducible. So if one wants high-quality, reproducible, custom cookie shapes, creating a custom cookie cutter seems inevitable.
One of the things that immediately comes to mind is 3D printing the cookie cutter. And indeed, quite a few people have done so already [1]. Also, the idea of building something like a cookie cutter creation tool is not new [2]. However, this tool (and others like it) suffers from the fact that judging the physical dimensions of a drawn shape remains difficult, or that its expressiveness is rather limited.
This work makes three main points: it presents an easy system to design and build cookie cutters, it demonstrates the idea of using household items for shape input and dwells on the idea of maintaining a close relationship to the real world.
The process described here goes as follows (see figure 1):
  1. design cookie shape by bending a wire or drawing a thick black line
  2. place/draw the outline on an A4 sheet of paper
  3. take a photo of the outline, then extract and smooth a polygon
  4. convert the outline into a printable cookie cutter

Figure 1: the design process demonstrated with a user-designed shape: (a) designing the cookie cutter shape by bending a wire, placing it on an A4 sheet of paper and taking a photo of it (b) filtering the photo by binarizing it, extracting the paper sheet corners, and using a canny edge detector (c) constructing a 3D model using OpenSCAD and extracted polygon (d) printing the cookie cutter on a 3D printer

Shape definition

Many ways of defining 2D shapes are described in the literature, from sketch-based interfaces [3] to more traditional 2D CAD systems [4]. All of those approaches suffer from their disconnection from the world they are designing for. It is hard to judge the dimensions and appeal of those virtual artifacts before they become physical reality.
Our approach in this work lets the user define the shape using a simple, tangible shape input controller: a piece of wire. Imagine there was something like a "cookie cutter band" that one could bend into the desired shape. Once bent, the shape would be fixed and additional struts would be introduced to strengthen the cutter. That’s exactly what this system does. The user bends a piece of wire into the desired shape and the system constructs a cookie cutter, which is then printed on a 3D printer.

Shape extraction

The wire shape has to fulfill a set of constraints, so that it makes sense as a cookie cutter:

  • planarity: the wire has to be bent flat, much like the cutter itself is going to be. This constraint is also imposed by the application domain, as cookie dough is generally flat.
  • not self-intersecting: a self-intersecting cookie cutter would produce cookies which do not hold together as one piece – hence, we require the shape to be a simple polygon.

After bending the shape, the user places the piece of wire on an A4 sheet of paper and takes a photo of that assembly. The photo is then fed into the system, which extracts the shape using computer vision. Since the outcome is going to be produced for the real world, the polygon has to be translated into real-world units. We map the image to real-world coordinates by detecting the corners of the A4 paper. This leads to the following processing pipeline (implemented in C++ using OpenCV [5]):

  1. threshold filter to binarize the image
  2. find paper corners and compute the homography
  3. warp perspective of the input image based on the homography
  4. canny filter the warped image
  5. erode the image to connect spurious lines
  6. find contours in eroded image
  7. select contour with largest area that is not the whole image
  8. if no contour is found, exit
  9. find center of the polygon using the enclosing circle
  10. approximate outline using Douglas-Peucker to smooth the polygon
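For illustration, a condensed sketch of such a pipeline using the OpenCV C++ API is shown below. It is not the actual implementation: the threshold, Canny and approximation parameters are placeholders, the paper corners are simply taken from the largest four-cornered contour without fixing their order, and a dilation of the white-on-black edge image plays the role of the erosion step above.

// contours_sketch.cpp – a condensed, illustrative sketch of the pipeline above,
// not the actual implementation. Thresholds and epsilon values are placeholders.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: contours_sketch photo.jpg\n"; return 1; }
    cv::Mat img = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
    if (img.empty()) { std::cerr << "could not read image\n"; return 1; }

    // 1. binarize the image
    cv::Mat bin;
    cv::threshold(img, bin, 128, 255, cv::THRESH_BINARY);

    // 2. find the paper sheet: largest contour that approximates to four corners
    //    (proper corner ordering is glossed over here)
    cv::Mat binCopy = bin.clone();
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(binCopy, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    std::vector<cv::Point2f> paper;
    double best = 0;
    for (size_t i = 0; i < contours.size(); ++i) {
        std::vector<cv::Point> approx;
        cv::approxPolyDP(contours[i], approx, 0.02 * cv::arcLength(contours[i], true), true);
        double area = std::fabs(cv::contourArea(approx));
        if (approx.size() == 4 && area > best) {
            best = area;
            paper.assign(approx.begin(), approx.end());
        }
    }
    if (paper.size() != 4) { std::cerr << "no paper sheet found\n"; return 1; }

    // 3. homography mapping the sheet onto 210 x 297 mm (A4), here 1 px = 1 mm
    std::vector<cv::Point2f> a4;
    a4.push_back(cv::Point2f(0, 0));     a4.push_back(cv::Point2f(210, 0));
    a4.push_back(cv::Point2f(210, 297)); a4.push_back(cv::Point2f(0, 297));
    cv::Mat H = cv::getPerspectiveTransform(paper, a4);
    cv::Mat warped;
    cv::warpPerspective(img, warped, H, cv::Size(210, 297));

    // 4.-6. edge detection and closing small gaps between edge fragments
    cv::Mat edges;
    cv::Canny(warped, edges, 50, 150);
    cv::dilate(edges, edges, cv::Mat(), cv::Point(-1, -1), 2);
    cv::findContours(edges, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    // 7.-8. pick the largest contour that is not (almost) the whole image
    int bestIdx = -1; best = 0;
    for (size_t i = 0; i < contours.size(); ++i) {
        double area = std::fabs(cv::contourArea(contours[i]));
        if (area > best && area < 0.9 * warped.total()) { best = area; bestIdx = (int)i; }
    }
    if (bestIdx < 0) { std::cerr << "no shape found\n"; return 1; }

    // 9.-10. center via the enclosing circle, smoothing via Douglas-Peucker
    cv::Point2f center; float radius;
    cv::minEnclosingCircle(contours[bestIdx], center, radius);
    std::vector<cv::Point> outline;
    cv::approxPolyDP(contours[bestIdx], outline, 2.0, true);

    // emit the polygon (in mm, relative to its center) for the OpenSCAD stage
    for (size_t i = 0; i < outline.size(); ++i)
        std::cout << "[" << outline[i].x - center.x << "," << outline[i].y - center.y << "],\n";
    return 0;
}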

Model creation and printing

The polygon coming from the shape extraction stage is then scaled into a smaller and a bigger version, which are assembled to form the cutter using CSG operations, implemented in OpenSCAD [6]. Scaling a concave polygon to build an outline is not as straightforward as scaling a convex one. While a convex polygon can be adequately scaled by multiplying each vertex by a scalar value, scaling a concave polygon that way is unsuitable for creating the model (see figure 2, b). To properly scale a concave polygon P = \mathbf{p}_0, \dots, \mathbf{p}_n for our needs, we translate each \mathbf{p}_i by the normal of its outgoing edge, i.e. by

    \[\lambda\,\frac{\mathbf{n}_i}{\lVert \mathbf{n}_i \rVert}, \qquad \mathbf{n}_i \perp (\mathbf{p}_{i+1} - \mathbf{p}_i)\]

with results as depicted in figure 2,c.
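For illustration, a minimal sketch of this offsetting step is shown below. It shifts every vertex along the normal of its outgoing edge, assumes the polygon is given in counter-clockwise order, and uses a made-up "L"-shaped test polygon with λ = 1.

// offset_sketch.cpp – a sketch of the per-vertex offsetting described above: each
// vertex is shifted by lambda along the normal of its outgoing edge. Assumes the
// polygon is given in counter-clockwise order; the "L" shape is just a test case.
#include <cmath>
#include <iostream>
#include <vector>

struct Vec2 { double x, y; };

std::vector<Vec2> offsetPolygon(const std::vector<Vec2>& poly, double lambda) {
    std::vector<Vec2> result;
    const size_t n = poly.size();
    for (size_t i = 0; i < n; ++i) {
        const Vec2& a = poly[i];
        const Vec2& b = poly[(i + 1) % n];
        // edge direction and its outward normal (for counter-clockwise polygons)
        double ex = b.x - a.x, ey = b.y - a.y;
        double nx = ey, ny = -ex;
        double len = std::sqrt(nx * nx + ny * ny);
        if (len == 0) { result.push_back(a); continue; }   // skip degenerate edges
        Vec2 shifted = { a.x + lambda * nx / len, a.y + lambda * ny / len };
        result.push_back(shifted);
    }
    return result;
}

int main() {
    // a small concave test polygon (an "L" shape), offset outward by 1 unit
    Vec2 pts[] = { {0,0},{4,0},{4,2},{2,2},{2,4},{0,4} };
    std::vector<Vec2> L(pts, pts + 6);
    std::vector<Vec2> outer = offsetPolygon(L, 1.0);
    for (size_t i = 0; i < outer.size(); ++i)
        std::cout << outer[i].x << " " << outer[i].y << "\n";
    return 0;
}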


Figure 2: Scaling a concave polygon to create a thin outline: (a) the polygon to be scaled (b) the naively scaled version, drawn as dotted line (c) the properly scaled polygon

The correctly scaled polygons are then extruded into 3D space using OpenSCAD's 2D subsystem. Additional struts are added to stiffen the cookie cutter and ease its handling. Strut size is determined by computing the bounding box of the polygon.
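One way the two polygons might be handed to OpenSCAD is to emit them into a .scad file and extrude their difference, as in the sketch below (illustrative only: the 15 mm wall height and the square stand-in polygons are assumptions, and the struts are omitted).

// scad_sketch.cpp – illustrative only: emit the inner and outer polygon into an
// OpenSCAD file and extrude their difference into a cutter wall. The 15 mm wall
// height is an assumption and the struts mentioned above are omitted.
#include <fstream>
#include <vector>

struct Vec2 { double x, y; };

static void writePolygon(std::ofstream& out, const std::vector<Vec2>& poly) {
    out << "  polygon(points=[";
    for (size_t i = 0; i < poly.size(); ++i)
        out << (i ? "," : "") << "[" << poly[i].x << "," << poly[i].y << "]";
    out << "]);\n";
}

int main() {
    // squares standing in for the offset (outer) and extracted (inner) polygons
    Vec2 o[]  = { {-1,-1},{51,-1},{51,51},{-1,51} };
    Vec2 in[] = { {0,0},{50,0},{50,50},{0,50} };
    std::vector<Vec2> outer(o, o + 4), inner(in, in + 4);

    std::ofstream out("main.scad");
    out << "linear_extrude(height=15) difference() {\n";   // wall = outer minus inner
    writePolygon(out, outer);
    writePolygon(out, inner);
    out << "}\n";
    return 0;
}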

Discussion

We implemented this system using simple computer vision algorithms. While this implementation is sufficient in many cases, it is not particularly robust. The paper sheet corner detection does not always work and the whole process depends on proper parameter choice – parameters which are image dependent. A more sophisticated processing pipeline or some form of automated parameter choice could be explored.
Also, the line extraction depends on the characteristics of the shape being extracted. While the erosion stage somewhat mitigates this effect, e.g. glossy wires or "sketchy" line drawings are still unsuited for this system.

Our system does not enforce the constraints imposed by the domain (i.e. a simple, not self-intersecting polygon). Users can input such a shape and the system will try to create a cookie cutter from it, regardless of whether such a cutter makes sense or not. The system could inform the user when a shape is not meaningful and suggest corrections.
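As a sketch of such a check, a brute-force self-intersection test (ignoring degenerate, collinear cases) could flag offending shapes before any model is generated:

// simple_check.cpp – sketch of a brute-force self-intersection test that could
// warn the user about shapes violating the "simple polygon" constraint.
#include <iostream>
#include <vector>

struct Vec2 { double x, y; };

static double cross(const Vec2& o, const Vec2& a, const Vec2& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// true if segments (a,b) and (c,d) properly intersect
static bool segmentsIntersect(const Vec2& a, const Vec2& b, const Vec2& c, const Vec2& d) {
    double d1 = cross(a, b, c), d2 = cross(a, b, d);
    double d3 = cross(c, d, a), d4 = cross(c, d, b);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}

bool isSimplePolygon(const std::vector<Vec2>& poly) {
    const size_t n = poly.size();
    for (size_t i = 0; i < n; ++i)
        for (size_t j = i + 1; j < n; ++j) {
            if ((j + 1) % n == i || (i + 1) % n == j) continue;  // skip adjacent edges
            if (segmentsIntersect(poly[i], poly[(i + 1) % n],
                                  poly[j], poly[(j + 1) % n]))
                return false;
        }
    return true;
}

int main() {
    Vec2 pts[] = { {0,0},{2,2},{2,0},{0,2} };        // a self-intersecting "bow tie"
    std::vector<Vec2> bowtie(pts, pts + 4);
    std::cout << (isSimplePolygon(bowtie) ? "simple" : "self-intersecting") << "\n";
    return 0;
}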
The concave polygon scaling algorithm we used in this work results in very thin shells around sharp corners. Some 3D printers, or their slicing software, cannot reproduce such corners – or even strips close to those corners. As a result, the cutter can have gaps in its perimeter, yielding an unclean cutting result. A more constant perimeter thickness could be achieved by sampling the polygon at a finer rate (better normal computation) or by employing post model generation algorithms, such as the one described by Stava et al. [7]
Future work could investigate the suitability of certain media as shape input controllers. A question that comes to mind is whether it is easier to bend a wire into shape than to draw the shape with a thick pen. An investigation of this question should take the tangibility and affordance of the wire (as compared to a drawing on a sheet of paper) into account.
Also, the importance of real-time feedback could be explored. Is it enough to get feedback on producibility and constraint enforcement at discrete points in time, e.g. when an image is fed into the system? Or should the system provide continuous feedback? And what is the relationship between the complexity of the constraint system and the need for real-time feedback?

Conclusion

In this work, we described the idea of using household items for modeling, specifically investigating wire as a shape input sensor and A4 paper as a fiducial marker. We implemented a system for designing cookie cutters as a case study, targeting 3D printing as the production technique. Our system is easy to use, as it is situated in the real world.

Download

The source code can be found here [8], including a few test pictures. I've only tested it on Linux; however, it should run on OS X as well. To compile and run it:

tar xvfz contours.tar.gz && cd contours
mkdir build && cd build
cmake .. && make
./contours ../test4.jpg && openscad main.scad

References

STL slicing for 3D printing

Some 3D printing methods, like additive layer manufacturing, require the model to be sliced into discrete layers, which are then printed one after another. These days I'm playing around with 3D printing, so I needed to perform some slicing myself. Unfortunately, I didn't like the existing tools for such slicing that much, so I decided to give it a shot and write my own.
Some time ago I wrote a utility that visualizes the flight of a quadrocopter. To make things easy I used the Visualization Toolkit (VTK). Remembering that, I hit Google and found an example that pretty much did what I wanted. The model is loaded using vtkSTLReader, and vtkStripper is employed to merge the polylines into connected components.
Unfortunately vtkStripper still has a bug (open since 2004!) which rendered it unusable for my endeavor. It causes some slices to look quite wrong (and thus they'd be printed wrong), as it combines some polylines in an unsuitable manner. The slice pictured below has a white/inverted triangle which is not supposed to be there.

After patching vtkStripper.cxx with the patch attached to the bug, everything was fine. (Well, pretty much – I've still experienced the problem once, but hey, what's perfect in this world ;-))
So the whole slicing process is:
  1. Slice the STL model using vtkCutter and store the polylines in .vtp files (see the sketch after this list). By decoupling the cutting process from rendering the images, we gain flexibility, since we do not have to redo the cutting each time we want to use a different rendering algorithm. This step also computes the bounds of the model (using a bounding box) and stores them in a file.
  2. Convert the polylines to SVG. We use SVG since it provides multiple benefits over directly rasterizing the polylines. First of all, we retain control over the units (throughout the whole process you want to make sure you don't mess up the units, otherwise your printed object may end up twice as large as anticipated, or suffer similar problems).
  3. Use ImageMagick to rasterize the SVG graphics. That's actually pretty neat, because in this step we can easily ensure that we're using the correct resolution for our printer. So if we used an inkjet printer to apply the binder during the 3D printing process, we could simply use that printer's resolution.
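For the curious, the heart of the first step could look roughly like the sketch below. The 0.3 mm layer height and the output file names are made up for illustration, and the bounds file is not written here – the actual sources are in the download at the end of this post.

// slice_sketch.cpp – a minimal sketch of step 1: cut the STL model into horizontal
// slices with vtkCutter and write one .vtp polyline file per layer. The 0.3 mm
// layer height and the output file names are assumptions for illustration.
#include <vtkSmartPointer.h>
#include <vtkSTLReader.h>
#include <vtkPlane.h>
#include <vtkCutter.h>
#include <vtkStripper.h>
#include <vtkXMLPolyDataWriter.h>
#include <iostream>
#include <sstream>

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: slice_sketch model.stl\n"; return 1; }

    vtkSmartPointer<vtkSTLReader> reader = vtkSmartPointer<vtkSTLReader>::New();
    reader->SetFileName(argv[1]);
    reader->Update();

    double bounds[6];
    reader->GetOutput()->GetBounds(bounds);   // xmin, xmax, ymin, ymax, zmin, zmax

    const double layerHeight = 0.3;           // mm, assumption
    int layer = 0;
    for (double z = bounds[4] + layerHeight / 2; z < bounds[5]; z += layerHeight, ++layer) {
        vtkSmartPointer<vtkPlane> plane = vtkSmartPointer<vtkPlane>::New();
        plane->SetOrigin(0.0, 0.0, z);        // cut through the middle of the layer
        plane->SetNormal(0.0, 0.0, 1.0);

        vtkSmartPointer<vtkCutter> cutter = vtkSmartPointer<vtkCutter>::New();
        cutter->SetCutFunction(plane);
        cutter->SetInputConnection(reader->GetOutputPort());

        // merge the raw cut segments into connected polylines
        vtkSmartPointer<vtkStripper> stripper = vtkSmartPointer<vtkStripper>::New();
        stripper->SetInputConnection(cutter->GetOutputPort());

        std::ostringstream name;
        name << "slice_" << layer << ".vtp";
        vtkSmartPointer<vtkXMLPolyDataWriter> writer =
            vtkSmartPointer<vtkXMLPolyDataWriter>::New();
        writer->SetFileName(name.str().c_str());
        writer->SetInputConnection(stripper->GetOutputPort());
        writer->Write();
    }
    return 0;
}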

So now that we have all slices we can (of course) print them, or we could make a little movie out of them, which is exactly what I did:

The model comes from Thingiverse and the music is from SoundCloud. You might notice the full model in the upper right corner; that's just visual sugar for the video and not part of the sliced images.

To make the whole process a little easier, I wrapped a Makefile around it and wrote a little Ruby script that builds the environment for the Makefile to work. That Ruby script, as well as the source, can be found in the ZIP file after the break. You'll need VTK to build the tools and ImageMagick to run the whole thing.
Download me here.