Erik's XY-Plotter

created 03/05/2007

last update 21/05/2007

1. INTRODUCTION

A. Motivation

This year we will participate in the "Cirque des Sciences", which is the 2007 edition of the Science Festival. Our theme is ROBOT-ART: we want to demonstrate that robots can be creative, and some cool robots will prove this. One of them has evolved to its final stage. It is a new, astute xy-plotter system that will:

  1. Snap a portrait-photo with a web-cam
  2. Process the image and convert it into a drawing
  3. Draw the picture on paper using a pen

B. DEVICE

We started the project in 2001 and again in 2004, but for several reasons we could not finish it until now. The device combines the following parts:

  1. LEGO camera

  2. RCX

  3. TTT-robot, i.e. 3 translational degrees of freedom: the (x,y) plane for drawing, the z-axis for pen movements

 

C. PROGRAMMING

The evident choice for this kind of project is a mixture of standard ROBOLAB and ULTIMATE ROBOLAB.

  1. ROBOLAB has all the required image capturing and processing facilities needed to snap and convert a photo into a pen-drawing.
  2. ULTIMATE ROBOLAB provides the necessary low-level functions needed for processing the tons of data coming from the PC. In fact, with the standard firmware there is no good way to transmit thousands of point coordinates to the RCX within a reasonable time.

2. JOURNAL 

(last update: 12/05/2007 see below)

A. ROBOT   (development stage: 04/05/2007)

This development took almost 4 months at a rate of 2 hours per week and went through many intermediate stages with tests of suitability, stability and accuracy. Erik worked patiently and with a sense of perfectionism to obtain the desired result, although the device is still under construction.

B. IMAGE PROCESSING (development stage: 21/05/2007)

It is essential to convert the photo into a line drawing. This is done with a ROBOLAB program that will later be integrated into the complete robot-control program. First the gray plane is extracted, then the picture is shrunk to the desired size and the contrast is enhanced. The resulting image is converted into a binary picture, which means that any pixel value within a certain range is set to logical 1; all other pixel values are set to logical 0. Finally the picture passes a particle filter, followed by an edge-detection filter. It is then inverted and output for the next program stage.
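The actual processing is done with ROBOLAB's imaging VIs; the following Python sketch (using Pillow, NumPy and SciPy) only illustrates the same chain of operations. The output size, the threshold range and the minimum particle size are assumptions, not the values used in the real program.

    # Rough equivalent of the image-processing chain described above.
    import numpy as np
    from PIL import Image
    from scipy import ndimage

    def photo_to_line_drawing(path, size=(160, 120), lo=60, hi=180, min_particle=20):
        gray = Image.open(path).convert("L")            # extract the gray plane
        gray = gray.resize(size)                        # shrink to the desired size
        arr = np.asarray(gray, dtype=float)
        arr = (arr - arr.min()) / (np.ptp(arr) + 1e-9) * 255   # stretch the contrast

        binary = (arr >= lo) & (arr <= hi)              # pixels inside the range -> 1

        # particle filter: drop connected blobs smaller than min_particle pixels
        labels, n = ndimage.label(binary)
        sizes = ndimage.sum(binary, labels, range(1, n + 1))
        keep = np.isin(labels, np.nonzero(sizes >= min_particle)[0] + 1)

        # edge detection: keep only the border pixels of each blob
        edges = keep & ~ndimage.binary_erosion(keep)

        return ~edges                                   # inverted picture for the next stage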

21/05/07

By improving the filtering we get much better results, as can be seen below:


C. VECTORIZING

04.05.2007

At the end of the previous stage the picture has a typical bitmap format. In fact, the data now has the form of a 2D array: the row index corresponds to the abscissa x, the column index to the ordinate y, and any element P[i,j] of the array has the value 0 or 1. However, since the robot is going to draw the picture like a human, curved line by curved line, it must know the exact order of the points that lie on a particular line. Therefore the controlling program must convert the image into a series of point chains. We operate the conversion by starting at P[0,0] and incrementing j. Each visited point is marked "visited", which in practice means that it receives a value other than 0 or 1. If the program encounters a black point, it stops the scan, adds the point to a new chain (or vector, since the resulting chains are added to a matrix of vectors) and checks whether one of the 8 neighbouring fields is also black. If so, that point is appended to the chain and the program checks this point's neighbours, and so on, until there is no black neighbour left. In that case the program resumes the scan from the place where it stopped, until the end of the picture.
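For illustration, the scan could look roughly like the following Python sketch (the real implementation is a ROBOLAB program; using the marker value 2 for "visited" is an assumption).

    # Chain extraction: scan the bitmap, follow black 8-neighbours, mark visits.
    NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1),
                  ( 0, -1),          ( 0, 1),
                  ( 1, -1), ( 1, 0), ( 1, 1)]

    def extract_chains(img):
        """img: 2D list of 0/1 values; modified in place (visited points -> 2)."""
        rows, cols = len(img), len(img[0])
        chains = []
        for i in range(rows):                       # scan the picture row by row
            for j in range(cols):
                if img[i][j] != 1:
                    continue
                chain = [(i, j)]                    # start a new chain (vector)
                img[i][j] = 2
                y, x = i, j
                while True:                         # walk along black neighbours
                    for dy, dx in NEIGHBOURS:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and img[ny][nx] == 1:
                            img[ny][nx] = 2
                            chain.append((ny, nx))
                            y, x = ny, nx
                            break
                    else:                           # no black neighbour left
                        break
                chains.append(chain)                # then resume the scan
        return chains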

This procedure collects chains of picture points. The data size is already reduced, because there are many more white points than black ones. However, we can compress the data further if we only store the starting point absolutely and save the relative position of each following point with respect to its predecessor. Our point-chain definition tells us that two successive points are neighbours, which means that each of them may only occupy one of 8 possible locations. Thus the relative positions may be saved in octal representation, which only needs 3 bits per point. The robot can reconstruct the line if it starts at the initial location and walks through the chain point by point, always changing the relative position of the robot head.
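A minimal sketch of this relative (octal) encoding and of the reconstruction walk; the numbering of the 8 directions is an assumption, the real code table may differ.

    # 3-bit relative encoding of a point chain and its reconstruction.
    DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1),
                  ( 0, -1),          ( 0, 1),
                  ( 1, -1), ( 1, 0), ( 1, 1)]       # octal codes 0..7

    def encode_chain(chain):
        """Return (starting point, list of 3-bit direction codes)."""
        codes = []
        for (y0, x0), (y1, x1) in zip(chain, chain[1:]):
            codes.append(DIRECTIONS.index((y1 - y0, x1 - x0)))
        return chain[0], codes

    def decode_chain(start, codes):
        """Walk through the chain point by point, starting at the initial location."""
        points = [start]
        y, x = start
        for c in codes:
            dy, dx = DIRECTIONS[c]
            y, x = y + dy, x + dx
            points.append((y, x))
        return points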

D. ASTUTE XY-CONTROL

04.05.2007

From the definition of the point vector or chain we can also deduce that the robot may reach each neighbour within the same time. The proof of this assertion is simple: if we consider the discrete time tick Dt = 1, the velocity vector of one step is v = (dx, dy) with dx, dy in {-1, 0, +1}, and its module |v| = √(dx² + dy²) may only take the values 1 or √2. Example: if the next point is situated to the left or right, or at the top or the bottom of the current point, the velocity module is 1 and only one of the two controlling motors is active. However, if the next point is situated on the diagonal, the resulting velocity module is √2 and both motors are active. The only condition that must be fulfilled is that the motors turn at the same power and gear ratio. On the next picture we can see how both motors rotate individually. The important observation is that the motors are only ever in one of three possible states: FWD, REV or OFF.
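As a toy sketch, one relative step translates into the motor states for one tick as shown below; which motor drives which axis and which sense counts as FWD are assumptions.

    # One chain step (dx, dy in {-1, 0, 1}) -> states of the two drawing motors.
    def motor_states(dx, dy):
        def state(d):
            return "FWD" if d > 0 else "REV" if d < 0 else "OFF"
        return state(dx), state(dy)     # (x-motor, y-motor)

    print(motor_states(1, -1))          # diagonal step: both motors on, module sqrt(2)
    print(motor_states(0, 1))           # axis-parallel step: one motor on, module 1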

We can conclude that if both motors are controlled in open loop by two synchronized modulated signals, the resulting combined movement will reproduce the initial point chain. If we implement this on the RCX and run Erik's plotter, we obtain the desired result, as can be seen in the following picture. (Note that for this test the pen was fixed with a rubber belt only.)

 

It is now essential to find a method to determine the optimal value of Dt. In any case, it must correspond to the desired picture size and correlate with the initial point coordinates. But the choice of the picture resolution must also match the gearing and the motor characteristics. (For instance, with the LEGO motor we can't switch between the fwd and rev states too quickly.) What can be concluded is that in the optimal case the errors only grow linearly and the picture may only be distorted by homothety.

12.05.2007

We could dramatically improve the result by using an 8-bit timer interrupt for the motor waveform generation. With Ultimate Robolab we can easily implement interrupt service routines on the RCX, as can be seen on the next picture. Note that we use the motor direct function, which doesn't need the asynchronous update through the main 16-bit timer interrupt that clocks the RCX. Also note that we are using timer1 here, which normally produces the IR carrier, because we have at our disposal a special RCX edition with a hardwired RS232 port (a very limited edition). With a standard RCX we would have to use timer0, which usually controls the sound.

With a special LabVIEW VI we generated a single vector composed of 2700 points that we sent to the RCX. Note that in order to fulfill the neighbourhood condition, we must add points to the graph. If the step is 1 for the x-values, this does not mean that the step for the y-values is also 1: a sampled function may jump by more than one unit in y between two successive x-values, i.e. there are y-values that are not the image of any x-value. Thus the y-values must be interpolated. On the next picture the left graph is the one that fulfills the neighbourhood criterion.
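For illustration, the interpolation can be sketched like this: intermediate points are inserted between two successive samples until every pair of consecutive points are 8-neighbours (the sine parameters are only an example, not those of the actual test vector).

    # Make a sampled curve fulfill the neighbourhood condition.
    import math

    def fill_neighbourhood(points):
        """points: list of (x, y) integer samples."""
        out = [points[0]]
        for (x1, y1) in points[1:]:
            x0, y0 = out[-1]
            while (x0, y0) != (x1, y1):
                x0 += (x1 > x0) - (x1 < x0)     # step x by at most 1
                y0 += (y1 > y0) - (y1 < y0)     # step y by at most 1
                out.append((x0, y0))
        return out

    samples = [(x, round(40 * math.sin(x / 10))) for x in range(300)]
    curve = fill_neighbourhood(samples)         # every step now has module 1 or sqrt(2)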

The timer interrupt is triggered every 5 ms and the resulting curve may be seen on the next picture:

Obviously we have to deal with a serious error propagation. The cause may be that the LEGO motors don't behave identically in the fwd and rev modes. The error per point is only 2.5 cm / 2700 ≈ 10 micrometers! Thus we will not have much trouble, since picture vectors tend to have no more than 500 points (at 640x480 resolution), which gives a maximal error of 5 mm. The average vector has fewer than 100 points, so the maximal error there is 1 mm. So, if we try to reproduce the first vector (192 points) of another scan of the picture above, we obtain the following result:
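A quick check of this error budget (the 2.5 cm drift over 2700 points is the figure observed in the test above):

    drift_cm, points = 2.5, 2700
    err_per_point_um = drift_cm * 1e4 / points      # ~9.3 micrometers per point
    for n in (500, 100, 192):                       # max, average, first test vector
        print(n, "points ->", round(n * err_per_point_um / 1000, 2), "mm max error")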

It will be important to move the carriage back to [0,0] after each vector, in order to recalibrate the system; otherwise the error grows too large. Here is the first drawing, with a 20 ms interrupt time. (Note that very small vectors have been pruned. Also note that the movement to the initial position is not done by interrupt and has not been adjusted correctly, as can be seen from the fact that the absolute positions of the vectors do not correspond to the initial picture. Sometimes the pen rose before the end of the vector; this is a bug.)

Remember: this is open loop, so we have no sensorial control over the motor position, and still we get this beautiful picture! Cool !!!

19/05/07

We could improve the result by adding rotation sensors... and we discovered a bad bug: we only compressed 12 instead of 13 points per message. This explains the constant difference in the final point of the sine wave (constant at different delta_t). Here is the result with the new controlling and filtering program:

And another portrait:

E. ASTUTE PC-RCX COMMUNICATION

The PC is going to control the plotter robot. Therefore we need a driver program on the RCX. It consists of a state machine that reacts to PC commands:

The PC must poll a certain variable that contains the state of the RCX in order to know which command to send. We use the SET (0x05) opcode, which has 5 data bytes, to send the picture vectors (chains) to the RCX. The data is compressed so that 13 relative positions may be sent at a time (13 x 3 bits = 39 bits), drastically reducing the transmission time. We therefore must use a different handler for opcode 0x05, as shown on the next picture:
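A sketch of this packing, assuming a little-endian bit order (the real opcode handler on the RCX may pack the bits differently):

    # 13 relative positions of 3 bits each (39 bits) -> the 5 data bytes of a SET message.
    def pack_positions(codes):
        assert len(codes) <= 13
        bits = 0
        for i, c in enumerate(codes):
            bits |= (c & 0x7) << (3 * i)
        return bytes((bits >> (8 * k)) & 0xFF for k in range(5))

    def unpack_positions(data, n=13):
        bits = int.from_bytes(data, "little")
        return [(bits >> (3 * i)) & 0x7 for i in range(n)]

    msg = pack_positions([1, 7, 0, 4, 4, 2, 5, 6, 3, 1, 0, 7, 2])
    assert unpack_positions(msg) == [1, 7, 0, 4, 4, 2, 5, 6, 3, 1, 0, 7, 2]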

With ULTIMATE ROBOLAB it is possible to decrease the main processor interrupt time. We are going to use 0.5 ms instead of 1 ms. This provides better communication and a shorter switching time for the motor states. If we test the transmission with a simple program, 1000 relative positions are transmitted within 24 seconds. So we can later program the robot to load the next vector while the previous one is being drawn, and thus realize a real spooler.

19/05/07

The ROBOLAB communication looks like the following. Note that the normal communication node has been replaced with a more complex one that allows better error control and waiting for certain replies.


19/05/07

F. RCX-FIRMWARE

We now use rotation sensors and touch sensors. They are always connected in pairs and, depending on the program state, we toggle between the sensor configurations. It is essential to initialize the head at the [0,0,0] position by means of the touch sensors. Then the configurations are switched to the rotation sensors, which are zeroed.

 

WILL BE CONTINUED, SO STAY TUNED !

