Thursday 25 December 2014

Modelling the response of a differential drive robot in Simulink and controlling using finite state machines (Stateflow)



Recently, I have been thinking about a model-based approach to controlling the behavior of a robot in a simulated environment, one that can also be deployed immediately to a hardware prototype of the model.

After going through a couple of software packages, I found that MATLAB Simulink and Stateflow would be the best option for implementing my idea. I modelled my robot in Autodesk Inventor and brought the model into MATLAB using the Inventor–SimMechanics interface.


Once I had my robot model in MATLAB, I designed the kinematic equations for the model in Simulink so as to control its behavior in the simulated environment. The robot's behavior is governed by logic created in Stateflow: the robot has Forward, Reverse, Left and Right states, and the state changes based on the velocity inputs given to the wheels.
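The kinematics and state logic live in Simulink and Stateflow in my model, but the underlying equations are simple enough to sketch in plain Python. This is only an illustration: the wheel separation value and the extra Idle state below are my own assumptions, not part of the Stateflow chart.

```python
import math

def diff_drive_step(x, y, theta, v_l, v_r, L=0.2, dt=0.01):
    """One Euler-integration step of the differential drive (unicycle)
    kinematics. v_l, v_r are wheel linear velocities; L is the wheel
    separation (an assumed value here)."""
    v = (v_r + v_l) / 2.0      # forward velocity of the chassis
    omega = (v_r - v_l) / L    # angular velocity about the center
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

def robot_state(v_l, v_r):
    """Mimic the Stateflow logic: pick a state from the wheel velocities."""
    if v_l == v_r:
        return "Forward" if v_l > 0 else ("Reverse" if v_l < 0 else "Idle")
    return "Left" if v_r > v_l else "Right"
```

With equal positive wheel velocities the robot moves straight ahead in the Forward state; a faster right wheel turns it Left, and vice versa.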




Inspired by this MATLAB webinar: http://in.mathworks.com/videos/mobile-robot-simulation-for-collision-avoidance-with-simulink-90193.html

Tuesday 29 July 2014

An Interactive book with Augmented Reality


Augmented Reality (AR) is a technology that is about to transform the world as we see it today. From business to the advanced sciences, its impact can already be foreseen. Interacting with the physical world will be an amazing experience with the right combination of technologies like Google Glass and AR.

Here is an example of such interaction that I have developed. The following video shows a book powered by AR. As students, we have all struggled to understand a concept given in a book and wasted time searching the internet for simpler explanations. Now, with augmented reality and smartphones or Google Glass, it is possible to make learning more interesting. In the video, you can see that once the camera sees an image or circuit in the book, it automatically overlays the corresponding video tutorial. This makes learning interactive and efficient.


And finally, it is image recognition that triggers the augmented reality, as explained in this TED video.

The software I used to develop this application includes:

1. Eclipse IDE
2. Android SDK
3. JDK 6
4. Metaio SDK 5.3

There are a lot of augmented reality SDKs, such as the Vuforia SDK, Layar SDK and Metaio SDK, whose libraries can be used along with the Android SDK to develop AR applications. Even OpenCV4Android (along with the Android SDK) can be used, given proper recognition algorithms, image/video overlaying and proper initialization of the internal Android sensors.





Wednesday 4 June 2014

Image Processing in Android : A kick start to Augmented Reality

A few months back, I never thought that I would be developing Android applications. I sincerely thank Dr. Pinnamaneni Bhanu Prasad for introducing me to this interesting domain and guiding me through the step-by-step development of augmented reality applications.

For those who have experience in image processing on desktop interfaces, it is very exciting to see your algorithm running on your own Android phone (I experienced this with my very first app, and surely you will too!). Moreover, you can take it with you anywhere and demonstrate your algorithm to friends and family on any Android phone, which you cannot do with the desktop implementation you used to develop, no matter how complex the algorithm is.

So, I have decided to write down some basic steps that can guide a newbie (new to Android, but with good programming skills) through developing their own Android application for image processing. If you would like to develop an app, all you need is INTEREST and PATIENCE, and all it gives in return is FUN (if you are a curious techie like me). I have provided some links that should be very useful for anyone starting out; these links helped me more than other tutorials on the internet (just my opinion; you may find links that suit you better).

The very first thing you need is an IDE with all the Android libraries and tools. It is good to start with the Android Developer portal, where you can find tutorials on Android and on setting up an IDE (Eclipse, in my case) for Android programming. From my experience, however, manually adding the Android tools and OpenCV to an IDE is very complicated, and you may hit a lot of compile errors even with minor directory changes. So, I strongly recommend using the NVIDIA Tegra Android Development Pack, which installs all the necessary libraries along with OpenCV, making your work simpler. Install the Tegra Android pack and go through the basic "Build Your First App" tutorial in the Android Developer portal. This will give you a clear idea of the various files and folders in an Android project and what each one does.

Once you are clear on the basics of Android development, the different files used in it, and accessing the various sensors and hardware components, you can start by looking at the example code; after analyzing it, you will be able to develop your own.

This is a good link with a clear explanation of all the files required to build a basic Android app. After going through that tutorial, you can start with image processing on Android. To do this, click on the File menu, choose Import, and then choose Existing Android Application. Now navigate to the OpenCV-Android sample programs on your computer (click here to see how).

One must always keep in mind that the image processing methods are the same on both Android and Windows. All you need to take care of are the input and output image formats. The Android camera's format is RGBA, which means the image you acquire will be in RGBA format, and you need to return an image of the same format after processing.

So,
1. Acquire the RGBA image
2. Convert it into the required format (RGB/YCrCb/grayscale, etc.)
3. Do all your image processing analysis as you would in OpenCV
4. Convert the output image back into RGBA
5. Return the output image for display
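To make the pipeline concrete, here is a minimal pure-Python sketch of the five steps, with a frame represented as a flat list of (R, G, B, A) tuples. The luma weights and the threshold of 60 are just example choices for step 3; in the real app these steps are OpenCV calls on a `Mat`.

```python
def rgba_to_gray(pixels):
    """Steps 1-2: convert a list of RGBA pixels to grayscale
    intensities using the standard luma weights."""
    return [int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b, a) in pixels]

def threshold(gray, t=60):
    """Step 3 (example processing): binary threshold, intensities
    above t become 255, the rest become 0."""
    return [255 if v > t else 0 for v in gray]

def gray_to_rgba(gray):
    """Steps 4-5: expand the processed single channel back to RGBA
    so the display pipeline can render it."""
    return [(v, v, v, 255) for v in gray]

frame = [(200, 10, 10, 255), (10, 10, 10, 255)]  # a tiny fake camera frame
out = gray_to_rgba(threshold(rgba_to_gray(frame)))
```

The important point is symmetry: whatever format you convert into for processing, you must convert back to RGBA before returning the frame.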

These are the basic steps that one should always keep in mind while programming. Here is a video of my first Android application, in which I implemented the following:

1. Thresholding (60/255)
2. Skin Detection and contour drawing over detected skin
3. Canny edge detection
4. Circle detection

I used a simple switch case to toggle between these four processes when a menu item in the app is selected, which means all the processing and logic are the same for both Windows and Android. All you need to do is tell yourself that it is easy to work on, spend some time understanding the basics, and eventually you will succeed.
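The dispatch itself can be sketched in a few lines of Python. This is only an analogy for the Java switch-case in the app: the threshold mode is implemented for real, while the other modes (skin detection, Canny, circles) are replaced by a labeled stand-in, since in the actual app they are OpenCV calls.

```python
def threshold_60(gray):
    """Mode 1: the 60/255 binary threshold from the video."""
    return [255 if v > 60 else 0 for v in gray]

def passthrough(gray):
    """Stand-in for the other three modes, which call into
    OpenCV in the real app."""
    return gray

# menu index -> processing routine, mirroring the Java switch-case
MODES = {1: threshold_60, 2: passthrough, 3: passthrough, 4: passthrough}

def process_frame(selected_mode, gray_frame):
    """Run whichever processing mode the user picked from the menu."""
    return MODES[selected_mode](gray_frame)
```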



Once you know how it is done, you can proceed to more advanced techniques and do high-level image processing on your own Android phone. With further interest and patience, you can go through some tutorials on augmented reality and start developing your own AR apps (as I am doing right now). Have FUN, and feel free to contact me for further assistance.

Object Recognition in MATLAB

Recently, I was working on a conveyor belt setup in which a camera was used to recognize the objects on the belt and an actuator ejected the unwanted ones. As I always like to simulate an algorithm before the actual implementation, I used MATLAB for image recognition; based on the object classification made by MATLAB, the actuator is triggered (to eject unwanted objects).

For simulation purposes, I established serial communication between MATLAB and Proteus ISIS (where I can have a microcontroller, LEDs, actuators, etc.). I used a microcontroller of my choice and enabled its serial receive pin so that it can receive the data MATLAB transmits. MATLAB recognizes a known object by matching its SURF features and serially transmits a unique character/integer for each object. The microcontroller is coded such that if a specific character/integer is found in the receive buffer, the actuator is triggered.
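The microcontroller side of this protocol is essentially a table lookup on each received byte. Here is a hedged Python sketch of that logic; the codes 'A' and 'B' and the output names are hypothetical (the real firmware is written in Keil C and runs in the Proteus simulation).

```python
# Hypothetical mapping from the character MATLAB transmits to the
# output (actuator or LED) that should fire for that object class.
OBJECT_CODES = {'A': 'eject_actuator', 'B': 'accept_led'}

def on_serial_receive(buffer):
    """Scan the receive buffer and return the outputs to drive, in
    the order the recognition codes arrived (mirrors the firmware's
    handling of the serial receive interrupt). Unknown bytes are
    ignored."""
    return [OBJECT_CODES[ch] for ch in buffer if ch in OBJECT_CODES]
```

Keeping the protocol to one byte per object makes the firmware trivial and avoids any framing or parsing on the microcontroller side.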

Shown below is a simulated video of the conveyor belt setup. The video was taken using a 2 MP cellphone camera. LEDs were interfaced with the microcontroller in Proteus ISIS to check the image recognition algorithm developed in MATLAB (every object has its own LED). The microcontroller code was written in Keil, and the .hex file was loaded into the simulated microcontroller in Proteus ISIS.



The detection algorithm was a bit slow in the video shown. However, with the Bluecougar-X-X120aG GigE camera, an improvement in detection speed was observed. In addition, I have also tested a similar object recognition algorithm in OpenCV. In my experience working with both MATLAB and OpenCV, detection speed is remarkably higher in OpenCV than in MATLAB.

  

Wednesday 5 February 2014

Image matching in MATLAB


Image matching means comparing two images for identical features. Its practical uses are often biometric applications such as fingerprint identification, iris matching, etc. Two images match when their features coincide, and these features could be edges, corners, blobs, color, shape and so on. By identifying these specific features in one image and comparing them with the features of the other, we can determine whether the images match. Shown below is a demonstration of image matching applied to fingerprint recognition in MATLAB.




In this program, the number of edges in image1 is compared with the number of edges in image2. If the images match, the GUI returns a 'Match' dialogue box.

Algorithm:

1. Load the images into MATLAB
2. Convert them to grayscale
3. Apply the edge detector of your choice after removing noise, if any
4. Keep the two edge images for comparison (say img1 and img2)
5. Now traverse every pixel of both images using a looping statement,
6. and check the pixel values at the corresponding positions
7. If the comparison percentage is greater than a certain threshold (say 90%, as you wish), then the fingerprints match; otherwise, they do not
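Steps 5 to 7 amount to counting agreeing pixels. Here is a minimal Python sketch of that comparison, assuming the edge images have been flattened into equal-length lists of 0/255 values (the actual demo does this with MATLAB loops):

```python
def match_percentage(img1, img2):
    """Compare two binary edge images pixel by pixel (steps 5-6) and
    return the fraction of positions that agree, as a percentage."""
    assert len(img1) == len(img2), "images must be the same size"
    same = sum(1 for a, b in zip(img1, img2) if a == b)
    return 100.0 * same / len(img1)

def is_match(img1, img2, threshold=90.0):
    """Step 7: declare a match if agreement exceeds the chosen threshold."""
    return match_percentage(img1, img2) >= threshold
```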



Note: This is not exactly how conventional biometric scanners work, but this method can be used for rough matching purposes.