Exploring Drawing Techniques with a Robotic Arm

Tim Chinenov
7 min read · Aug 4, 2018


The participation of robots in the social sphere of human life is currently limited to driverless cars and has not reached more mundane tasks. The average office or workplace is devoid of robots, with clerical staff committed to organizing the workspace, cleaning, delivering packages, and so on. With the exception of automated vacuum cleaners, American homes are also devoid of robotic technology. Increasing the domestication of robotic technology would certainly enhance the general public's interest in the field of robotics.

To explore robotic participation in more ordinary tasks, our team considered how a robotic arm could benefit us, as engineers, on a regular day. Engineers are not known for their exemplary drawing and artistic abilities. Many have shied away from writing utensils since the dawn of the keyboard and touch screen. However, it is still convenient to be able to produce quick sketches that can be passed to project managers, bosses, and other superiors who may not be able to interpret our chicken scratches. To find a replacement for the engineering hand, the Dobot Arm V1.0 was used.

The Dobot Arm V1.0, Duck not included

The Dobot Arm V1.0 is a relatively small three degree-of-freedom (DOF) robotic arm produced by the Chinese company Shenzhen Yuejiang Technology Co., Ltd. The arm is designed particularly with education and small projects in mind.

In terms of hardware functionality, the arm is rather limited. The end effector (represented by the duck in the picture) can be substituted with a grasping claw, a suction pump, or a writing utensil, which is perfect for the scope of this project. The joints of the arm have relatively limited ranges of motion. The orientation of the end effector is fixed relative to the first joint, and the second and third joints are mechanically coupled to each other.

In order to more directly translate robotic kinematic concepts to the Dobot, MATLAB 2017 was used. MATLAB is especially intuitive for working with matrices. A MATLAB library known as Robot Raconteur was used to communicate with the Dobot and send it joint angles. Additionally, MATLAB comes with a plethora of useful image processing tools, which the nature of this project made necessary.

Kinematics

Kinematic Model for Dobot Arm

When programming a robot, it is necessary to represent it with a mathematical model. Without a model to program against, the robot is just a hunk of metal with some wasted motors.

The picture to the right represents such a model of the robot. This representation of the Dobot gives structure to the robot and allows us to program the exact position the end effector needs to move to. The forward kinematic equations of the robot are shown below. These equations allow the end effector position to be found if the joint angles are known. Unfortunately, the joint angles are rarely known in advance; they are usually what needs to be found. Assuming the Cartesian x, y, and z coordinates are known, the forward kinematics can be used to derive the inverse kinematics. For the sake of brevity, that derivation is not shown here.
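
The exact equations depend on the Dobot's link lengths and joint conventions, but the idea can be sketched in MATLAB for a generic three-DOF articulated arm (base rotation plus shoulder and elbow joints in a vertical plane). The link lengths and angle conventions below are illustrative assumptions, not the Dobot's actual parameters.

```matlab
% Forward kinematics sketch for a generic 3-DOF articulated arm.
% q = [q1 q2 q3]: base yaw, shoulder, and elbow angles in radians.
% L1, L2 are illustrative link lengths, not the Dobot's real dimensions.
function p = forwardKinematics(q, L1, L2)
    r = L1*cos(q(2)) + L2*cos(q(2) + q(3));   % horizontal reach
    z = L1*sin(q(2)) + L2*sin(q(2) + q(3));   % height above the base
    p = [r*cos(q(1)); r*sin(q(1)); z];        % rotate the reach about the base
end
```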

Drawing

The initial drawing technique that was explored was based on intuition. When humans draw, they put their hand down and draw between points. We all learned this method from a childhood game known as connect the dots. The image-processing analogue of connect-the-dots is corner detection. With corner detection, an array of points was found in an image; the points were then ordered and used to draw.
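
As a rough sketch of that approach in MATLAB, using the built-in Harris corner detector and a naive greedy ordering (the file name, corner count, and ordering scheme here are assumptions for illustration, not the exact routine used):

```matlab
% Corner-detection sketch: find candidate points in an image and order
% them with a greedy nearest-neighbor pass so the pen travels a short path.
img  = imread('sketch.png');          % assumed input image
gray = rgb2gray(img);
pts  = corner(gray, 50);              % up to 50 Harris corners, [x y] rows

ordered = pts(1,:);                   % start from the first detected point
pts(1,:) = [];
while ~isempty(pts)
    d = sqrt(sum((pts - ordered(end,:)).^2, 2));  % distance to last point
    [~, k] = min(d);
    ordered(end+1,:) = pts(k,:);      %#ok<AGROW>
    pts(k,:) = [];
end
% 'ordered' is then mapped to paper coordinates and sent to the robot.
```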

While this technique worked fairly well with simple shapes, it quickly ran into problems with more complicated ones (e.g., a 2D representation of a cube). Connecting the dots was the fastest method for drawing. Unlike the other drawing methods, corner detection never required lifting the pen off of the surface of the paper.

Of course, most engineers sketch items that are slightly more complex than basic geometric shapes, so corner detection was not an optimal solution. Moving away from corner detection, it dawned on us that populating an entire image with points is redundant. When drawing straight-edged pictures, an artist only cares where a line starts and ends.

An alternative method for detecting an image used Hough (pronounced "Huff") transforms to detect lines. This method required a rather elaborate image-manipulation pipeline, which is summarized in the image below. A standard black and white image was taken (A). The image was then complemented for better contrast (B). To avoid repeated lines due to line width, the skeleton of the image was taken, reducing each line to a single pixel in width (C). Line detection was iterated across the skeleton image until every line was found (C-E). Unfortunately, even after this processing, errors were still found (F): some lines were too short or overlapped with other lines. An optimization algorithm was written to condense lines with similar slopes in similar regions (G). Finally, the Dobot was given a collection of start and end points for each line and drew the shape (H).

Image Processing Steps
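
A condensed sketch of that pipeline using MATLAB's Image Processing Toolbox is shown below. The file name and thresholds are assumptions, and the line-merging step (G) is only hinted at with a minimum-length filter rather than the full slope-and-region merge described above.

```matlab
% Hough line-detection sketch (MATLAB Image Processing Toolbox).
img  = imread('shape.png');                % assumed input image
bw   = imbinarize(rgb2gray(img));          % (A) black-and-white image
bw   = imcomplement(bw);                   % (B) complement for contrast
skel = bwmorph(bw, 'skel', Inf);           % (C) one-pixel-wide skeleton

[H, theta, rho] = hough(skel);             % (C-E) Hough transform
peaks = houghpeaks(H, 30);                 % strongest line candidates
lines = houghlines(skel, theta, rho, peaks, 'FillGap', 5, 'MinLength', 10);

% (F-G) crude cleanup: drop segments that are too short; the real
% algorithm also merged lines with similar slopes in similar regions.
segs = [];
for k = 1:numel(lines)
    p1 = lines(k).point1;
    p2 = lines(k).point2;
    if norm(p2 - p1) > 15
        segs(end+1,:) = [p1 p2];           %#ok<AGROW> start/end point pair
    end
end
% (H) each row of 'segs' is a start/end pair handed to the robot to draw.
```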

Line detection proved to be one of the most successful methods of drawing images. Unlike corner detection, it produced smooth lines, and it allowed the robot to draw much more complex shapes involving multiple lines meeting at corners, overlapping lines, and close parallel lines. Furthermore, the line detection method stored fewer points: the robot needed to know just two points per line, whereas corner detection involved multiple points for each line.

A clear limitation remains with line detection: give the Dobot a circle and it would give you a quizzical response. We couldn't demonstrate the drawing capabilities of a robot without considering curved lines.

A method based on Probabilistic Road Mapping (PRM) was developed to draw circles. The method was similar to corner detection in that it searched for points within a contrasted line. However, the algorithm fit splines between points wherever a straight connection would leave the region of the circle, which created a more curve-like path. The algorithm was very rudimentary; circles were recognizable but crude, as can be seen in the image below. The PRM-based method also worked with many of the shapes that could be drawn with the aforementioned approaches.
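
The spline idea can be sketched in MATLAB as follows; here 'pts' is assumed to be an ordered list of points sampled along the circle's outline by the point-detection step, and the sample counts are arbitrary.

```matlab
% Spline sketch: draw a smooth curve through ordered boundary points.
% 'pts' is an assumed n-by-2 list of [x y] points along the outline.
t  = 1:size(pts, 1);                   % parameterize the points by index
tq = linspace(1, size(pts, 1), 200);   % dense samples along the path
xq = spline(t, pts(:,1).', tq);        % cubic spline through x-coordinates
yq = spline(t, pts(:,2).', tq);        % cubic spline through y-coordinates
path = [xq(:) yq(:)];                  % smooth pen path sent to the robot
```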

Finally, the team wanted to be able to draw gradients. This is rather difficult to do, considering that a calligraphy pen had been used to draw up to this point.

Gradients and shadows were represented through the density of dots. An image was converted into grayscale and then divided into sections. For each section, an average grayscale value was found and mapped to a dot density. Using this method, gradient images were developed from input images; an example is shown below. This method was slow. Slow is an understatement. The image below took over ten minutes to produce, which is quite lengthy compared to the other drawing methods, which often took no longer than thirty seconds.
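
A minimal sketch of that mapping in MATLAB might look like the following; the block size, maximum dot count, and file name are assumptions for illustration.

```matlab
% Gradient sketch: map block-averaged gray levels to dot densities.
img = im2double(rgb2gray(imread('photo.png')));   % assumed input image
blockSize = 16;                                   % pixels per section
maxDots   = 10;                                   % dots in the darkest block

% Average gray level of each section (0 = black, 1 = white)
levels = blockproc(img, [blockSize blockSize], @(b) mean2(b.data));

dots = [];
for i = 1:size(levels, 1)
    for j = 1:size(levels, 2)
        n = round((1 - levels(i,j)) * maxDots);   % darker -> more dots
        xy = rand(n, 2) * blockSize + [(j-1) (i-1)] * blockSize;
        dots = [dots; xy];                        %#ok<AGROW>
    end
end
% 'dots' is scaled to the paper and tapped out by the pen, point by point.
```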

Vision

In order for a complex technology to be accepted into society, it needs to be considerably simplified or, better yet, made autonomous. The final part of the project worked toward an autonomous scanning and printing system. A basic Logitech webcam was used and was represented using the standard pinhole camera model.

The webcam was responsible for finding a drawn image. The camera searched for a specific symbol (an x,y-axis frame) and then cropped the captured image to exclude everything outside of the region the artist drew in. The webcam then searched for a region to draw in, defined as the area between four plus signs. The regions of highest correlation were simplified to four points, shown in the image below. The image designated to be drawn was then scaled to fit the drawing region.
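
The marker search can be sketched with normalized cross-correlation in MATLAB; the template and frame file names are assumptions, and a real implementation would repeat this for each of the four plus signs.

```matlab
% Fiducial-detection sketch: locate a plus-sign marker in a camera frame
% with normalized cross-correlation, giving one corner of the drawing region.
frame    = im2double(rgb2gray(imread('frame.png')));      % assumed webcam frame
template = im2double(rgb2gray(imread('plus_sign.png')));  % assumed marker image

c = normxcorr2(template, frame);           % correlation surface
[~, idx] = max(c(:));                      % location of the best match
[peakY, peakX] = ind2sub(size(c), idx);
markerX = peakX - size(template, 2) + 1;   % top-left corner of the match
markerY = peakY - size(template, 1) + 1;
% Repeating this for all four plus signs gives the corners of the drawing
% region; the target image is then scaled to fit inside those corners.
```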

Results

Judging from the outcome of many of the drawings, printers may yet retain their annoying occupations in the office and home for a while longer. Besides the obviously greater cost of current robotic arms, robots are also slower than printers: while a robot draws with a single pen, a printer colors in multiple points on the paper at once. Time aside, it was demonstrated that a robotic arm can perform many of the tasks a printer can. Through several drawing methods, including corner detection, line detection, PRM, and gradients, the Dobot displayed a robot's ability to replicate the basics of graphic artistry. All of this was implemented using rather fundamental elements of robotics, such as rigid-body kinematics and the pinhole camera model. A basic demonstration of an autonomous scanning system was developed as a proof of concept, with plenty of room for optimization and improvement.
