Monday, May 13, 2013

HCI presentation

Hello, today I have brought my toy car with me. As you know, I made a gesture-controlled car this semester. First, let's watch a short video that introduces my car in a fun way.

The video shows how I made the car and how it works. My original idea was to control a car with Arduino and Max/MSP using color tracking, so I searched for information about how Max and Arduino can communicate, and I found Maxuino. Maxuino offered an easy way to control the Arduino board: I could simply load the standard Firmata sketch onto the Arduino and talk to it from Max. However, it caused some problems. First, perhaps because of Maxuino's complexity, the communication was unstable; sometimes it worked well, and sometimes it stopped working even though I hadn't changed anything. Another problem was that I tried several methods of making the link wireless with a Bluetooth module, but I failed every time. So I finally changed my mind and decided to use a different way of communicating between Arduino and Max, which is what I am using now.

Here are the Max patch, the Processing code, and the Arduino code. The Max patch is basically divided into four parts. The first is jit.pwindow, which takes a Jitter matrix and displays its numerical values as a visual image in a window. Next is jit.qt.grab, which digitizes video from any QuickTime-compatible video digitizer and decompresses the signal into a Jitter matrix; it also offers a grab-to-disk mode. The jit.findbounds object scans a matrix for values in the range [min, max] and sends out the minimum and maximum points that contain values in that range. Finally, the mxj object sends the data to the Processing program.
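To make the jit.findbounds step concrete, here is a plain-Java sketch of the same idea: scan a single-channel pixel matrix for values inside a [min, max] range and report the bounding box. This is my own simplified analogue of the Jitter object, not its actual implementation.

```java
// Simplified, single-channel analogue of a jit.findbounds-style scan:
// find the bounding box of all pixels whose value lies in [min, max].
public class FindBounds {
    // Returns {minX, minY, maxX, maxY}, or null if no pixel matches.
    public static int[] scan(int[][] pixels, int min, int max) {
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
        int maxX = -1, maxY = -1;
        for (int y = 0; y < pixels.length; y++) {
            for (int x = 0; x < pixels[y].length; x++) {
                int v = pixels[y][x];
                if (v >= min && v <= max) {
                    if (x < minX) minX = x;
                    if (y < minY) minY = y;
                    if (x > maxX) maxX = x;
                    if (y > maxY) maxY = y;
                }
            }
        }
        return (maxX < 0) ? null : new int[] { minX, minY, maxX, maxY };
    }
}
```

In the real patch, this scan happens per video frame, and a bright red or green object shows up as a compact cluster of matching pixels.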

This is a screenshot of my Processing code. The first thing I would like to explain is how I get the data from Max. I imported two libraries: processing.serial, which opens a serial port from Processing, and maxlink, which allows Processing to receive data from Max.

So now, in Processing, I can get the position data of my movements in front of the camera.
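Since jit.findbounds reports a minimum and a maximum point, one natural way to turn that into a single position (an assumption about this step, not a detail shown in the post) is to average the two corners into a center point and normalize it to the frame size:

```java
// Hypothetical helper: collapse the color tracker's bounding box
// into one normalized center point in [0, 1] on each axis.
public class TrackedPoint {
    public static double[] center(int minX, int minY, int maxX, int maxY,
                                  int frameW, int frameH) {
        double cx = (minX + maxX) / 2.0 / frameW;   // 0 = left edge, 1 = right edge
        double cy = (minY + maxY) / 2.0 / frameH;   // 0 = top edge, 1 = bottom edge
        return new double[] { cx, cy };
    }
}
```

Normalized coordinates keep the rest of the pipeline independent of the camera resolution.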

The next step is to send the data to the Arduino board. I use the serial connection to pass the data from Processing to the Arduino.
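One simple way to structure the bytes on such a serial link (a sketch of a common scheme, not necessarily the exact protocol this project uses) is a fixed-length frame with a header byte, so the two sides can stay in sync:

```java
// Hypothetical 3-byte serial frame: header, speed (0-255), direction (0-255).
// A fixed header lets the receiver recognize frame boundaries in the stream.
public class CarFrame {
    public static final int HEADER = 0xFF;

    public static byte[] encode(int speed, int direction) {
        // Clamp each value into a single unsigned byte.
        speed = Math.max(0, Math.min(255, speed));
        direction = Math.max(0, Math.min(255, direction));
        return new byte[] { (byte) HEADER, (byte) speed, (byte) direction };
    }

    // Returns {speed, direction} if the frame is valid, null otherwise.
    public static int[] decode(byte[] frame) {
        if (frame.length != 3 || (frame[0] & 0xFF) != HEADER) return null;
        return new int[] { frame[1] & 0xFF, frame[2] & 0xFF };
    }
}
```

On the Arduino side, the matching loop would wait for the header byte and then read the next two bytes as speed and direction.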

So far, I have already completed the basic design of my car.

Together with the simple UI design, let me talk about how to control the toy car. For now I use two simple colored objects, one red and one green, to simulate basic gesture control: the red one controls the car's speed, and the green one controls its direction. At first I used horizontal movement for both controls, but I realized that, based on everyday experience, people prefer a vertical movement for controlling speed.

Here is a general timeline for developing this kind of car. Most of the stages are already done. Take testing, for example: we need to test the components we want to use before we start assembling the project, and only then decide what to use and what not to use. In my project, I spent the most time on assembling and programming. First, it was difficult to build a car like this because I had trouble finding materials, and the front wheels, which steer the car, were also hard to build; I tried many approaches before arriving at this design. As for the back wheels, I first used a gearbox to change the direction and speed of the DC motor, but its weight meant it couldn't drive the car. So I made another change and used the DC motor to drive the wheels directly. Even then it didn't run well, because a digital output pin can only supply 5 volts, so I added extra batteries here to drive it properly. In programming, the biggest difficulties came from sending data between Processing and Arduino; it often threw errors and almost drove me crazy. Finally, with Aditha's help, I fixed all the errors. It's still a prototype, and I still have a lot of things to do.

Here are the materials and their costs.

That’s it, thanks.
