Thursday 17 September 2015

Augmented Reality Library for MATLAB


I have developed a simple Augmented Reality (AR) library in MATLAB. With this library, anyone can add AR functionality in MATLAB easily, even without prior knowledge of AR and without writing any code.

The only requirements from the user are:

  • MATLAB R2014a or later
  • A target / trigger image
  • An overlay image / video
  • An image acquisition device
A video demonstration of the library can be seen below,




The library file can be downloaded from this link.



Implementation:

  • Download the file and change the MATLAB path to the downloaded file's folder
  • Call the function "ARinvoke" with appropriate arguments to render AR
  • Syntax (type in MATLAB command line):
    • ARinvoke('TargetImagePath', 1/2, 'OverlayObjectPath', 'ImageAcquisitionDeviceName', Device number);
  • Here, the second argument (1 or 2) selects the overlay type:
      • 1 overlays an image on the target image
      • 2 overlays a video on the target image

  • Example:
      • ARinvoke('C:\targetimage.jpg',1,'C:\overlayimage.jpg','winvideo',1);  % second argument 1 -> overlay an image
      • ARinvoke('C:\targetimage.jpg',2,'C:\video.avi','winvideo',1);  % second argument 2 -> overlay a video

  • The device name ('winvideo') is the webcam adaptor name on Windows. You can use any camera of your choice; use "imaqhwinfo" to list the adaptors and cameras available on your machine.
  • The device number is typically 1 for a laptop's built-in camera and 2 for an external USB webcam.
Currently, only 2D object rendering is supported. I am about to start on 3D object rendering and will update the library once it is finished.

For now, only one image can be tracked. If anyone is interested in tracking more than one image, feel free to contact me.
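For the curious: the usual idea behind this kind of 2D image-target overlay is feature matching followed by a homography warp. The sketch below shows that general idea in OpenCV C++ (OpenCV 3.x API assumed); it is only an illustration of the technique, not the MATLAB library's actual code, and the image paths are placeholders.

// General sketch of 2D image-target overlay: detect the target in each camera
// frame with ORB features, estimate a homography, and warp the overlay image
// onto the target's location. Illustration only, not the library's internals.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat target  = cv::imread("targetimage.jpg", cv::IMREAD_GRAYSCALE);  // placeholder paths
    cv::Mat overlay = cv::imread("overlayimage.jpg");
    if (target.empty() || overlay.empty()) return 1;

    cv::Ptr<cv::ORB> orb = cv::ORB::create(1000);
    std::vector<cv::KeyPoint> kpT;
    cv::Mat desT;
    orb->detectAndCompute(target, cv::noArray(), kpT, desT);

    cv::BFMatcher matcher(cv::NORM_HAMMING, true);          // cross-checked matches
    cv::VideoCapture cam(0);
    cv::Mat frame, gray;
    while (cam.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::KeyPoint> kpF;
        cv::Mat desF;
        orb->detectAndCompute(gray, cv::noArray(), kpF, desF);

        std::vector<cv::DMatch> matches;
        if (!desF.empty()) matcher.match(desT, desF, matches);

        if (matches.size() >= 15) {                          // target is probably in view
            std::vector<cv::Point2f> src, dst;
            for (size_t i = 0; i < matches.size(); ++i) {
                src.push_back(kpT[matches[i].queryIdx].pt);
                dst.push_back(kpF[matches[i].trainIdx].pt);
            }
            cv::Mat H = cv::findHomography(src, dst, cv::RANSAC);
            if (!H.empty()) {
                // Scale overlay pixels to target-image coordinates, then map them
                // into the frame with the homography and paste the result in.
                cv::Mat S = (cv::Mat_<double>(3, 3) <<
                             (double)target.cols / overlay.cols, 0, 0,
                             0, (double)target.rows / overlay.rows, 0,
                             0, 0, 1);
                cv::Mat warped, mask;
                cv::warpPerspective(overlay, warped, H * S, frame.size());
                cv::warpPerspective(cv::Mat(overlay.size(), CV_8U, cv::Scalar(255)),
                                    mask, H * S, frame.size());
                warped.copyTo(frame, mask);
            }
        }
        cv::imshow("AR overlay", frame);
        if (cv::waitKey(1) == 27) break;                     // Esc to quit
    }
    return 0;
}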

Note:
The rendered video overlay may play slowly depending on the processing speed of your computer and MATLAB. I have increased the overlay video's frame rate to 100 fps to compensate.

Try the library and have fun with your own custom images and videos.

Monday 17 August 2015

Gesture Controlled Robotic ARM using Microsoft Kinect and MATLAB


The Kinect sensor is one of the most amazing products from Microsoft, and it let me implement my gesture-controlled tasks very easily. I have built a 3-DOF robotic arm that is controlled by human hand gestures. I did this by obtaining my wrist coordinates from the skeletal data acquired from the Kinect sensor in MATLAB, by calling Kinect SDK functions from MATLAB. Once my arm coordinates are tracked, the coordinate system of the robotic arm is calibrated to the X and Y axes of the wrist coordinates. The control signals from MATLAB are sent to the servo motors of the robotic arm through the Arduino-MATLAB interface.
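Only as an illustration, the calibration/mapping step, going from tracked wrist coordinates to servo angles, can be sketched like this in C++ (the actual implementation was done in MATLAB, and the coordinate ranges below are assumed example values, not the real calibration):

// Rough sketch of the calibration/mapping step only: converting a tracked wrist
// position (from the Kinect skeleton stream) into servo angles for the arm.
// All numeric ranges here are assumed example values.
#include <algorithm>
#include <cstdio>

// Linearly map v from [inLo, inHi] to [outLo, outHi], clamped to the output range.
double mapRange(double v, double inLo, double inHi, double outLo, double outHi) {
    double t = (v - inLo) / (inHi - inLo);
    t = std::min(std::max(t, 0.0), 1.0);
    return outLo + t * (outHi - outLo);
}

int main() {
    // Example wrist coordinates in metres, in the sensor's frame (assumed values).
    double wristX = 0.12, wristY = -0.05;

    // Map the hand's workspace in front of the sensor to 0-180 degree servo commands.
    double baseAngle     = mapRange(wristX, -0.30, 0.30, 0.0, 180.0);   // pan servo
    double shoulderAngle = mapRange(wristY, -0.25, 0.25, 0.0, 180.0);   // lift servo

    // In the real setup these angles are written to the servos over the
    // Arduino-MATLAB interface; here we just print them.
    std::printf("base = %.1f deg, shoulder = %.1f deg\n", baseAngle, shoulderAngle);
    return 0;
}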


The demonstration of the gesture controlled robotic arm using Kinect can be seen below,






Saturday 4 April 2015

Vision Guided Robot - Video Processing with BeagleBone Black and OpenCV C++

This is what kept me really busy for two months! As I was new to the BeagleBone Black and to the Linux operating system, I struggled a bit during the early stages of the project, but the excellent tutorials of Dr. Derek Molloy helped me a lot with the BeagleBone Black. (This project was done as part of the Texas Instruments Innovation Challenge.)

I was aiming to develop a simple robot that can bring a requested object to the user. For this purpose, the robot was fitted with a robotic arm to grasp the object and a camera to sense the objects in its surroundings.

A detailed description of the various parts of the robot and an operational demo can be seen in the following video,




If you are curious about the programming side of the robot, you can find the BeagleBone Black code in my GitHub repository. I have tested the code with Ubuntu 14.04 LTS and with the latest version of Debian on the BeagleBone Black (BBB), and it worked fine. Object recognition with OpenCV was faster on Ubuntu than on Debian (I have no idea why recognition is slower on Debian; still trying to figure it out!).


For those who would like to try this project, here are a few tips:

1. Boot your BBB with any OS of your choice following the instructions from here (I've installed Ubuntu).

2. Install OpenCV on the BBB (some OS images come with OpenCV pre-installed) using the following command:

                  sudo apt-get install libopencv-dev

3. Download my code from here to the BBB.

4. I have used simple template matching to identify objects, so choose your object of interest and specify the path to its template image in the program (a rough sketch of the idea is given after these tips).

5. Connect the webcam to the BBB. Make sure your webcam works with the BBB first. I was initially testing with a webcam that worked fine with the computer but not with the BBB, so I bought a new Logitech C70 webcam and it worked fine.

6. That's it! Compile the code and run it. I have written the code so that GPIO pins 14-18 of the BBB's P8 header respond to the object based on its coordinates, and the processed video output is logged to a media file in .avi format. If you are developing over PuTTY from a Windows machine, you can use WinSCP to fetch the logged file. Also, if you are running an LXDE session on Debian, you can use OpenCV's imshow() function to watch the object recognition in real time.
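To give an idea of what the recognition step looks like, here is a rough, self-contained OpenCV C++ sketch of plain template matching plus a left/center/right decision (the template file name, the 0.7 score threshold and the thirds-based decision are assumptions; the actual robot code on GitHub does more):

// Rough sketch of the recognition step: template matching on each camera frame,
// then a left/center/right decision from the match position. In the robot code,
// that decision is what drives the P8 GPIO pins.
// Build example: g++ match.cpp -o match `pkg-config --cflags --libs opencv`
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cam(0);                               // USB webcam
    cv::Mat templ = cv::imread("object_template.jpg");     // assumed template path
    if (templ.empty()) return 1;

    cv::Mat frame, result;
    while (cam.read(frame)) {
        // Normalised cross-correlation: higher score = better match.
        cv::matchTemplate(frame, templ, result, cv::TM_CCOEFF_NORMED);

        double minVal, maxVal;
        cv::Point minLoc, maxLoc;
        cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

        if (maxVal > 0.7) {                                 // assumed detection threshold
            int cx = maxLoc.x + templ.cols / 2;             // object center (x) in the frame
            if (cx < frame.cols / 3)
                std::cout << "object LEFT   -> steer left"  << std::endl;
            else if (cx > 2 * frame.cols / 3)
                std::cout << "object RIGHT  -> steer right" << std::endl;
            else
                std::cout << "object CENTER -> move ahead"  << std::endl;
            cv::rectangle(frame, maxLoc,
                          maxLoc + cv::Point(templ.cols, templ.rows),
                          cv::Scalar(0, 255, 0), 2);
        }
        cv::imshow("match", frame);                         // needs a display (e.g. an LXDE session)
        if (cv::waitKey(30) == 27) break;                   // Esc to quit
    }
    return 0;
}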
       





Thursday 8 January 2015

Face and Eye detection, Cornea (eye center) tracking using OpenCV



I have been thinking of working on eye-gaze detection to estimate where a person is looking. As a first step, I need to detect the eyes and then the cornea/pupil. Since I plan to take the project to Android, I have done the coding in OpenCV, so that I can reuse the same functions on Android.

Algorithm:

First, I detect the face using a Haar cascade and extract the face ROI. Then I segment the left and right eye regions with rough proportional calculations, and within each segmented region of interest I use image gradients to locate and track the cornea/pupil.
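As a rough illustration of the first two steps, here is a minimal OpenCV C++ sketch of Haar-cascade face detection with a proportional left/right eye split (the eye-region fractions are assumed values, and the gradient-based pupil localization is only indicated by a comment):

// Sketch: Haar-cascade face detection, then rough proportional segmentation of
// left/right eye regions. The gradient-based pupil localization would operate
// on each eye ROI (not shown here).
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::CascadeClassifier face;
    face.load("haarcascade_frontalface_alt.xml");           // cascade file shipped with OpenCV

    cv::VideoCapture cam(0);
    cv::Mat frame, gray;
    while (cam.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::equalizeHist(gray, gray);

        std::vector<cv::Rect> faces;
        face.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(80, 80));

        for (size_t i = 0; i < faces.size(); ++i) {
            const cv::Rect& f = faces[i];
            // Rough proportions (assumed): eyes lie in the upper half of the face box.
            int ew = f.width * 35 / 100, eh = f.height / 4;
            cv::Rect leftEye (f.x + f.width * 13 / 100, f.y + f.height / 4, ew, eh);
            cv::Rect rightEye(f.x + f.width * 52 / 100, f.y + f.height / 4, ew, eh);

            cv::rectangle(frame, f,        cv::Scalar(255, 0, 0), 2);
            cv::rectangle(frame, leftEye,  cv::Scalar(0, 255, 0), 2);
            cv::rectangle(frame, rightEye, cv::Scalar(0, 255, 0), 2);

            // Gradient-based pupil localization would run on gray(leftEye) and gray(rightEye).
        }
        cv::imshow("eyes", frame);
        if (cv::waitKey(10) == 27) break;                    // Esc to quit
    }
    return 0;
}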

A demonstration of the above algorithm can be seen in the video. The work is still in progress, and I will share more about the techniques used later.


Thursday 25 December 2014

Modelling the response of a differential drive robot in Simulink and controlling it using finite state machines (Stateflow)



Recently, I have been thinking of working on a model-based approach to control the behavior of a robot in a simulated environment, one that can also be deployed immediately to a hardware prototype of the model.

After going through a couple of software options, I found that MATLAB Simulink and Stateflow would be the best choice for implementing my idea. I modelled my robot in Autodesk Inventor and brought the model into MATLAB using the Inventor-SimMechanics interface.


Once I had my robot model in MATLAB, I implemented the kinematic equations for the model in Simulink so as to control its behavior in the simulated environment. The robot's behavior is governed by the logic created in Stateflow: the robot has Forward, Reverse, Left and Right states, and the state changes based on the velocity inputs given to the wheels.
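For reference, the kinematics behind such a model are the standard differential-drive equations. Here is a small C++ sketch of those equations together with one plausible reading of the velocity-based state logic (wheel radius, track width, thresholds and example speeds are assumed values, not the ones in my Simulink model):

// Textbook differential-drive kinematics, plus simple velocity-based state
// selection (Forward / Reverse / Left / Right). All numbers are assumed
// example values.
#include <cmath>
#include <cstdio>

struct Pose { double x, y, theta; };   // position [m] and heading [rad]

// One Euler integration step. wl, wr: wheel angular velocities [rad/s],
// r: wheel radius [m], L: distance between the wheels [m], dt: time step [s].
Pose step(Pose p, double wl, double wr, double r, double L, double dt) {
    double v = r * (wr + wl) / 2.0;        // forward (linear) velocity
    double w = r * (wr - wl) / L;          // yaw rate
    p.x     += v * std::cos(p.theta) * dt;
    p.y     += v * std::sin(p.theta) * dt;
    p.theta += w * dt;
    return p;
}

// One plausible reading of the Stateflow logic: the faster wheel decides the
// turn direction, and the sign of the mean wheel speed decides forward/reverse.
const char* state(double wl, double wr) {
    if (wr > wl) return "Left";
    if (wl > wr) return "Right";
    return (wl + wr >= 0.0) ? "Forward" : "Reverse";
}

int main() {
    Pose p = {0, 0, 0};
    double wl = 2.0, wr = 3.0;             // example wheel speeds [rad/s]
    for (int i = 0; i < 100; ++i)
        p = step(p, wl, wr, 0.03, 0.15, 0.01);   // r = 3 cm, L = 15 cm, dt = 10 ms
    std::printf("state=%s  x=%.3f y=%.3f theta=%.2f rad\n",
                state(wl, wr), p.x, p.y, p.theta);
    return 0;
}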




Inspired by MATLAB webinar - http://in.mathworks.com/videos/mobile-robot-simulation-for-collision-avoidance-with-simulink-90193.html

Tuesday 29 July 2014

An Interactive book with Augmented Reality


Augmented Reality: a technology that is about to transform the world we see today. From business to the advanced sciences, its impact is easy to foresee. Interacting with the physical world will be an amazing experience with the right combination of technologies like Google Glass and Augmented Reality (AR).

Here is an example of such an interaction that I have developed. The following video shows a book powered with AR. As students, we have all struggled to understand a concept in a book and wasted time searching the internet for simpler explanations. Now, with Augmented Reality on smartphones or Google Glass, it is possible to make learning more interesting. In the video you can see that, once the camera sees the image/circuit in the book, it automatically overlays the corresponding video tutorial. This makes learning interactive and efficient.


And finally, it is image recognition that triggers the Augmented Reality, as explained in this TED video.

The software that I used to develop this application includes:

1. Eclipse IDE
2. Android SDK
3. JDK 6
4. Metaio SDK 5.3

There are a lot of Augmented Reality SDKs, such as the Vuforia SDK, Layar SDK and Metaio SDK, whose libraries can be used along with the Android SDK to develop AR applications. Even OpenCV4Android (along with the Android SDK) can be used to develop such applications, given proper recognition algorithms, image/video overlaying and proper initialization of the internal Android sensors.





Wednesday 4 June 2014

Image Processing in Android : A kick start to Augmented Reality

A few months back, I never thought that I would be developing Android applications. I sincerely thank Dr. Pinnamaneni Bhanu Prasad for introducing me to this interesting domain and guiding me through the step-by-step development of Augmented Reality applications.

For those who have experience with image processing on a computer, it is very exciting to see your algorithm running on your own Android phone (I experienced this with my very first app, and surely you will too!). Moreover, you can take it anywhere and demonstrate your algorithm to friends and family on any Android phone (which you cannot do with the desktop-only algorithms you developed previously, no matter how complex they are!).

So, I have decided to write down some basic steps that can guide a newbie (new to Android but with good programming skills) in developing their own Android application for image processing. If you would like to develop an app, all you need is INTEREST and PATIENCE, and all it gives in return is FUN (if you are a curious techie like me). I have provided some links that are very useful for getting started (these links helped me more than other tutorials on the internet... just my opinion! You may well find other links that suit you better).

The very first thing you need is an IDE with all the Android libraries and tools. It is good to start with the Android Developer portal, where you can find tutorials about Android and about setting up an IDE (Eclipse, in my case) for Android programming. But in my experience, manually setting up the Android tools and OpenCV in an IDE is very complicated, and you may run into lots of compile errors even with minor directory changes. So I strongly recommend using the NVIDIA Tegra Android Development Pack, which installs all the necessary libraries along with OpenCV, making your work simpler. Install the Tegra Android pack and go through the basic "Build Your First App" tutorial on the Android Developer portal. This will give you a clear idea of the various files/folders in an Android project and what each one does.

Once you are clear about the basics of Android development, the different files used in a project and how to access the various sensors and hardware components, you can start by looking at the example code; after analyzing the examples you will be able to develop your own.

This is a good link with a clear explanation of all the files required to build a basic Android app. After going through that tutorial, you can start with image processing on Android. To do this, click on the File menu, choose Import, and then choose Existing Android Application. Now navigate to the OpenCV-Android sample programs on your computer (click here to see how).

One must always keep in mind that the image processing methods are the same on both the Android and Windows platforms. All you need to take care of is the format of the input and output images. The Android camera's format is RGBA, which means the image you acquire will be in RGBA format, and you need to return an image in the same format after your processing.

So,
1. Acquire RGBA image
2. Convert it into the required format (RGB / YCrCb / grayscale, etc.)
3. Do all your image processing analysis as you would do in OpenCV
4. Convert the output image into RGBA
5. Return the output image to display

These are the basic steps one should always keep in mind while programming. Here is a video of my first Android application, in which I have implemented the following:

1. Thresholding (60/255)
2. Skin Detection and contour drawing over detected skin
3. Canny edge detection
4. Circle detection

I have used a simple switch-case to toggle between these four processing modes when a menu item in the Android app is selected. In other words, all the processing and logic are the same for both Windows and Android. All you need to do is tell yourself that 'it is easy to work on', spend some time understanding the basics, and eventually you will succeed.
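To make the RGBA-in/RGBA-out idea concrete, here is a rough sketch of such a per-frame function with a mode switch, written as plain desktop OpenCV C++ (on Android the same OpenCV calls sit inside onCameraFrame() via the Java bindings; only two of the four modes are shown, and the Canny thresholds are assumed values):

// Sketch of the per-frame pipeline: take an RGBA frame, process it according to
// the selected mode, and return an RGBA frame for display. On Android this body
// would live in onCameraFrame(); here it is plain desktop OpenCV C++.
#include <opencv2/opencv.hpp>

enum Mode { THRESHOLD = 0, CANNY = 1 };              // two of the four modes, for brevity

cv::Mat processFrame(const cv::Mat& rgba, Mode mode) {
    cv::Mat gray, out;
    cv::cvtColor(rgba, gray, cv::COLOR_RGBA2GRAY);   // step 2: convert to a working format

    switch (mode) {                                  // step 3: the actual processing
    case THRESHOLD:
        cv::threshold(gray, out, 60, 255, cv::THRESH_BINARY);   // 60/255 as in the demo
        break;
    case CANNY:
        cv::Canny(gray, out, 80, 160);               // assumed Canny thresholds
        break;
    }

    cv::Mat result;
    cv::cvtColor(out, result, cv::COLOR_GRAY2RGBA);  // step 4: convert back to RGBA
    return result;                                   // step 5: return for display
}

int main() {
    // Desktop test: read a BGR image, fake the Android RGBA input, process, show.
    cv::Mat bgr = cv::imread("test.jpg"), rgba;
    cv::cvtColor(bgr, rgba, cv::COLOR_BGR2RGBA);     // step 1: acquire an RGBA image
    cv::imshow("output", processFrame(rgba, THRESHOLD));
    cv::waitKey(0);
    return 0;
}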



Once you know how it is done, you can proceed to more advanced techniques and do high-level image processing on your own Android phone. With further interest and patience, you can go through some tutorials on Augmented Reality and start developing your own Augmented Reality apps (as I am doing right now). Have FUN, and feel free to contact me for further assistance.