27 Aug 2012

Wall-E Robot (object/face recognition, sound recognition, AI)




[ 27/08/2012 ] Test 1 - Colour Tracking




[ 05/09/2012 ] Test 2 - Servos and motor driver






[ 08/09/2012 ] Test 3 - Controlling Wall-E from PC




[ 11/09/2012 ] Test 4 - Speech Recognition





=============================================

Flow chart

          commands
PC -------->-------->--------> Robot (Arduino)
PC <--------<--------<-------- Camera (wireless)
               video


PC -----> Bluetooth ------> Air -----> Bluetooth -----> Arduino
     Serial
  communication




=============================================
update 05/08/2012

I will need an easy way to create a user interface for the control program; there are a couple of options: SDL and Qt.

SDL is a convenient library that handles things like creating a window, loading an image file, playing sound and music, loading a TTF font, TCP/IP networking, and using a gamepad, keyboard, and mouse, so you can concentrate on creating your own world with these handy tools. It gives you a relatively easy way to make a multimedia Windows program.


I also need to learn how to use serial ports.
There is a tutorial on Win32 serial port programming.


=============================================
update 08/08/2012

I think I have most of the project planned, and have come up with quite a few functionalities.

For object tracking and recognition, I will write the code myself with OpenCV in C++. The program will run on a PC; images are transmitted from Wall-E using the wireless webcam, and after processing, the corresponding commands are sent back to Wall-E via Bluetooth.

I had been looking very hard for a programming solution for speech recognition, hoping someone had already written an API of some sort. Then I accidentally bumped into a YouTube video showing a much simpler way of doing this - the EasyVR Arduino shield! So I might use that instead of writing the code myself!


=============================================
update 10/08/2012

Wall-E arrived! :)
I should have started this week, but just before I was going to take the video of the latest version of my hexapod robot, one of the servos broke!! I guess I just have to wait...



It's a great toy for 8-year-olds. It's only got one motor, which means it can only turn left or go forward. It moves its hands as well, but that's pretty much it. Here is a video showing roughly the same one:

http://www.youtube.com/watch?v=VEoh8Iws-kk



=============================================
update 12/08/2012

Still waiting for the servo gear to arrive. I was so bored that I started working on the robot hardware.

I took it apart and was amazed by how well it works considering how few components it has.







It was quite dirty since it's second hand. I had to wash every piece of it with soapy water!
I will leave the assembly for another day.

=============================================
update 14/08/2012

Finally found the time to look at the robot pieces and get started assembling it.

I recycled the motors and motor driver from my previous robot (Wally, the object-tracking robot).

















It was quite challenging to modify the robot to fit the servos, but I managed in the end ^.^.
I will start coding another day!



=============================================
update 17/08/2012



Colour tracking code:

    // assumed declarations from the surrounding class (not shown in this snippet):
    // cv::VideoCapture capwebcam;
    // cv::Mat matOriginal, matProcessed;
    // std::vector<cv::Vec3f> vecCircles;
    // std::vector<cv::Vec3f>::iterator itrCircles;

    capwebcam.read(matOriginal);
    if(matOriginal.empty()) return;

    // keep only strongly red pixels (BGR bounds), then smooth the mask
    cv::inRange(matOriginal, cv::Scalar(0,0,175), cv::Scalar(100,100,255), matProcessed);
    cv::GaussianBlur(matProcessed, matProcessed, cv::Size(9,9), 1.5);

    // look for circles in the mask (radius between 10 and 400 pixels)
    cv::HoughCircles(matProcessed, vecCircles, CV_HOUGH_GRADIENT, 2, matProcessed.rows/4, 100, 50, 10, 400);

    for(itrCircles = vecCircles.begin(); itrCircles != vecCircles.end(); itrCircles++){
        ui->txtXYRadius->appendPlainText(QString("ball position x =") +
                                         QString::number((*itrCircles)[0]).rightJustified(4, ' ') +
                                         QString(", y =") +
                                         QString::number((*itrCircles)[1]).rightJustified(4, ' ') +
                                         QString(", radius =") +
                                         QString::number((*itrCircles)[2], 'f', 3).rightJustified(7, ' '));

        // mark the centre and outline of each detected circle
        cv::circle(matOriginal, cv::Point((int)(*itrCircles)[0], (int)(*itrCircles)[1]), 3, cv::Scalar(0,255,0), CV_FILLED);
        cv::circle(matOriginal, cv::Point((int)(*itrCircles)[0], (int)(*itrCircles)[1]), (int)(*itrCircles)[2], cv::Scalar(0,0,255), 3);
    }

    // convert the OpenCV image to a QImage (OpenCV stores BGR, Qt expects RGB)
    cv::cvtColor(matOriginal, matOriginal, CV_BGR2RGB);

    QImage qimgOriginal((uchar*)matOriginal.data, matOriginal.cols, matOriginal.rows, matOriginal.step, QImage::Format_RGB888);
    QImage qimgProcessed((uchar*)matProcessed.data, matProcessed.cols, matProcessed.rows, matProcessed.step, QImage::Format_Indexed8);

    // update the labels on the form
    ui->lblOriginal->setPixmap(QPixmap::fromImage(qimgOriginal));
    ui->lblProcessed->setPixmap(QPixmap::fromImage(qimgProcessed));



=============================================
update 24/08/2012



The servo gear finally arrived! (Actually a whole servo did; I guess they must have sent me the wrong thing ^.^)

Anyway, I immediately fixed the hexapod robot and started making the video. Hopefully I can finally begin working on the code for Wall-E.


=============================================
update 27/08/2012

As a starting point, I wrote a Qt program to detect a colour (red) and send commands via the serial port to the Arduino, to turn Wall-E's head to follow the object. I will extend the objects that can be tracked to faces, certain objects, light sources, etc.

I struggled so much at the beginning, because every time I connected to the Arduino via the serial port, it froze the video. I later realised it was a threading issue: when the program is waiting for data from the serial port (or reading, or writing? I am not sure), it blocks the thread, so I decided to modify both the serial port class and the video class to run in their own threads.

Some people suggest it's not a very good idea to use threads without a formal education on the subject. And I did find it confusing to get started, because some say we shouldn't subclass QThread and should instead move an object into a thread. But since the official documentation subclasses QThread, I followed the documentation.
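To illustrate the idea in plain C++ (using std::thread rather than Qt's classes, with made-up names and a simulated serial read, just as a sketch): a background thread blocks on the read and stores the latest message under a mutex, while the video loop polls it without ever blocking.

```cpp
#include <atomic>
#include <chrono>
#include <mutex>
#include <string>
#include <thread>

// Hypothetical sketch: a worker thread blocks on the (simulated) serial
// port, while the main/video loop calls poll() and never blocks.
class SerialListener {
public:
    void start() {
        worker = std::thread([this] {
            while (running) {
                std::string line = readLineBlocking(); // stand-in for the real blocking read
                std::lock_guard<std::mutex> lock(mtx);
                latest = line;
            }
        });
    }
    void stop() {
        running = false;
        worker.join();
    }
    std::string poll() { // called from the video loop; returns immediately
        std::lock_guard<std::mutex> lock(mtx);
        return latest;
    }
private:
    std::string readLineBlocking() { // simulated: pretend the Arduino sent "DONE"
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        return "DONE";
    }
    std::thread worker;
    std::mutex mtx;
    std::string latest;
    std::atomic<bool> running{true};
};
```

This is the same pattern whether the worker is a QThread subclass or an object moved into a thread; the point is that only the worker ever blocks.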

I am still very new to Qt and OpenCV, since I only started learning them a few days ago, and I was already attempting multithreading; I now realise how crazy that was!

Out of frustration, I spent the whole weekend and my bank holiday just debugging the code. I dropped my diet routine, my exercise, and my movies! But I won in the end. Although it is still not as good as I would like (tracking is quite slow and inaccurate, and the head shakes a lot), at least it works ^.^

I will look around for a better algorithm; in the meantime I might add a few more features to the program, like adjusting the video properties and better threading code...

See you soon..




=============================================
update 28/08/2012

The whole reason I spent so much time coding the colour tracking was that I needed to write a program that does multithreading, because I need to listen to the serial port for input from the Arduino while processing video.

I need to confirm the Arduino has completed the previous command before I send another one out. But still, it's not fast enough.

I saw someone who has done a similar project, but he doesn't listen for a signal from the Arduino; he just sends a command from the computer for every frame he processes, and the result is actually better than mine!

I am thinking that, with enough delay between frames, this could work. I could also say goodbye to the confusing multithreaded programming!

I should also stop sending commands when nothing is detected.

I should calculate the middle point of the detected object, so it will work regardless of the size of the detected object.
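The per-frame logic could be sketched like this (a minimal plain-C++ sketch with made-up names and thresholds; the dead zone is there so the head doesn't shake around the midpoint):

```cpp
#include <cmath>
#include <optional>
#include <string>

// Hypothetical detection result from the vision code.
struct Detection { float x, y, radius; };

// Decide a head command for one frame: send nothing when no object is
// detected, otherwise compare the detected centre against the frame
// centre. Using the centre point works regardless of the object's size.
std::string headCommand(const std::optional<Detection>& d,
                        int frameWidth, int deadZone = 20) {
    if (!d) return "";                        // nothing detected: no command
    float offset = d->x - frameWidth / 2.0f;  // signed distance from centre
    if (std::fabs(offset) <= deadZone) return "STOP";
    return offset < 0 ? "LEFT" : "RIGHT";
}
```

The actual command strings and dead-zone size are guesses; the real program would tune them against the servo speed.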

I might try it out tomorrow.



=============================================
update 29/08/2012





So I tried sending commands without a feedback signal from the Arduino, and it works great! I modified the code based on my initial Qt program, using a single thread, and it wasn't lagging at all! So now I know what was slowing the program down: it must have been the 'serial port data listener'. Either I was using it wrong, or by nature it blocks other processing; I should avoid using it. But in the future, when I add the command recognition functionality, I will need to somehow send data back to the computer to run certain applications. For example, if I want Wall-E to track faces, I would say 'Wall-E, follow faces', and the Arduino would send the command to the computer and open the 'track faces' application.






=============================================
update 01/09/2012

Assembling the eye.














=============================================

update 02/09/2012


Improving the tracking program on the computer:

1. Options for the capture device.
2. Options for each object-tracking mode.

This is very useful when it comes to camera devices: http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images_and_video.html?highlight=cv_cap_prop_contrast

Re-writing the colour tracking algorithm; here is how it works in the new version:

1. Pre-process the image using cv::inRange(), with the necessary colour bounds to isolate the desired colour. It might be a good idea to transform to a colour space like HSV or YCbCr for more stable colour bounds, because chrominance and luminance are better separated there. You can use cvtColor() for this. I imagine we will only need to worry about the hue and saturation channels, as the value channel doesn't contain any colour information (i.e., the value range would be left untouched at [0, 255]).



link to cv::cvtColor()


But how do I determine the min/max colour boundaries?

2. Smooth the image (pre-process) with GaussianBlur to get rid of some of the jaggedness. I used a bigger dilation kernel than erosion kernel (5x5 vs. 3x3) to get rid of some noisy pixels. The smoothing might help with this too, and tweaking the thresholds could make the erosion unnecessary.

3. If multiple objects are detected, compare their sizes and choose the biggest one.
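As a rough illustration of step 1, this is what the per-pixel threshold does, written out in plain C++ (cv::inRange applies this test to every pixel at once). The bounds here are illustrative guesses for red in HSV, where OpenCV's hue range is [0, 179] and red wraps around 0, not tuned values:

```cpp
#include <cstdint>

// Keep a pixel if its hue, saturation and value fall inside the chosen
// bounds. Red sits at both ends of OpenCV's [0, 179] hue scale, so the
// hue test wraps around zero. All bounds below are made-up examples.
bool isRed(uint8_t h, uint8_t s, uint8_t v) {
    bool hueOk = (h <= 10) || (h >= 170); // red wraps around hue = 0
    bool satOk = s >= 100;                // ignore washed-out colours
    bool valOk = v >= 50;                 // ignore very dark pixels
    return hueOk && satOk && valOk;
}
```

This also hints at an answer to the boundary question above: sample some pixels of the target object under your lighting and pick hue/saturation bounds that cover them, leaving value loose.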



=============================================
update 05/09/2012

Finally I have some time to sit down and continue my project! I finished the inside layout and tidied up all the cabling tonight. I also tested the servos and the motor driver; everything seems to be working fine!

But I just have to say how much I hate soldering right now!! I literally spent two hours trying to solder a switch to some long cables. On the first try, I found the cables were making poor contact, so I put hot glue on to hold them together; that didn't help. So I tried very hard to take the glue off, replaced all the cables with new ones, and soldered again.

This time was better. I then installed it on the robot and hot-glued it on, only to find that the switch itself isn't working properly; I have to occasionally push the plastic bit. I guess I might have damaged the switch when I was taking the hot glue off...

There is a reason I love programming so much! When I was at uni doing projects, I always left all the soldering and cabling work to my lab partner, and I would take care of all the coding and maths. I just don't have the hands for these things, I guess... :(










=============================================
update 08/09/2012

We can now control Wall-E from PC.

We can also use it as a spy robot :)




=============================================
update 11/09/2012

Added speech recognition, so I can now control Wall-E with my voice.


I am using the EasyVR Arduino shield. The shield is great, easy to use, and actually works quite well. But it's intolerant of background noise, even just a little.

At first, I trained the robot with commands, and it all worked fine. But when I switched the robot on, it started ignoring my commands and became unresponsive. I later realised it was the noise from the servos and motor interfering with the input from the mic.

So I re-trained the robot with the motor and servos on, to simulate the noisy environment. And indeed, it works much better, although still not ideal.

To conclude, I don't recommend using EasyVR on a robot that sits right next to servos and motors, or that operates in a noisy environment.

For future work, I will migrate the speech recognition to the computer side and send commands via Bluetooth. (Actually that's even better, because then I won't need to send signals back from Wall-E to the computer any more, so communication will be strictly one-way; see the last few updates about the Bluetooth listener issue.)
