David Azar

New York

Product Engineer


Air Brush

My curiosity was sparked after spending the first few class sessions exposed to this new technology.


Computer vision adds really interesting possibilities to many of my projects. Letting any computer "see" gives us the chance to enter the physical space in an unparalleled way.


openFrameworks is the tool of choice for this experiment.



Creative use of computer vision


Many ideas came to mind when I started to think about a creative use for computer vision. I'm interested in creating tangible objects that fit in our hands and can somehow augment our lives. At first I was thinking of building something with a Raspberry Pi or a mobile device, but I decided it'd be best for this first experiment to spend some time understanding how openFrameworks works on a desktop before moving to a mobile platform.


Another area I'm interested in is HCI: how can computer vision assist or augment that field?


I wanted to create some sort of experiment that allowed me to work in this space.

There were a few interesting ideas that came out during a brainstorming session:


  • Controlling a spaceship inside a video game with the movement of your hand

  • A game for kids to teach them how to organize shapes

  • A sunflower detector

Eventually I decided to go with another idea: a painting tool that works by tracking your fingers in space, or simply "Air Brush".



Process


The concept was clear: a tool that lets people draw in space with the movement of their hands. Pinching your index finger and thumb triggers the paint tool, and some degree of color options should be provided.


Tracking fingers on a hand is no simple task. To make the tracking simpler, I created two finger accessories.



Fusion 360 model. Green goes on the index finger, black on the thumb



The green targets on the accessories are what the camera tracks. When you pinch your fingers, the targets align and create a larger one. These were modeled around my own hands, so they won't fit everyone.



After the accessories were 3D printed and ready to use, I started the development process. First, I recreated the in-class examples for tracking a red card, but adjusted my blurring and dilation parameters to better suit this application, trying different values until I achieved consistent contour recognition. Following that, I worked to place a circular marker in the middle of the bounding rectangle of each contour.
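The marker placement boils down to finding the center of each bounding rectangle. A minimal sketch (Rect and Point here are hypothetical stand-ins for ofRectangle and ofPoint; in the app the marker itself is drawn with openFrameworks' circle-drawing call):

```cpp
// Stand-in types mirroring ofRectangle / ofPoint.
struct Rect  { float x, y, w, h; };
struct Point { float x, y; };

// Center of a contour's bounding rectangle, where the circular marker goes.
Point boundingCenter(const Rect& r) {
    return { r.x + r.w * 0.5f, r.y + r.h * 0.5f };
}
```

In openFrameworks proper, ofRectangle's own getCenter() gives you the same point directly.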




Pinching gesture


At first I was using tracking to determine when the fingers come together. This led to inconsistent results, so I decided to try another way.


I added a filter for the contours that the ContourFinder returned: since the targets are somewhat square, anything with an aspect ratio close to 1 is considered a finger target. After that, I spent a lot of time trying to find a way of detecting the change in state from two small, separate targets to a single larger rectangular one.


The solution I went with (which I honestly don't think is the most elegant one) was to apply another filter to the returned contours. If there was exactly one bounding rectangle with an aspect ratio of 2 (the same width as an individual finger target but twice the height), then that rectangle is considered the "fingers closed" target. Instead of tracking the fingers and how they merge together, the algorithm looks for the closed-finger shape in the frame regardless of what came before it.
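The two aspect-ratio filters can be sketched like this. This is a simplified stand-alone version, not the actual app code: Rect stands in for the bounding rectangles ofxCv's ContourFinder returns, and the 0.25 tolerance is an assumption.

```cpp
#include <cmath>
#include <vector>

// Stand-in for a contour's bounding rectangle.
struct Rect { float x, y, w, h; };

// A contour counts as a single finger target when its bounding box is
// roughly square (aspect ratio near 1).
bool isFingerTarget(const Rect& r, float tol = 0.25f) {
    return std::fabs(r.h / r.w - 1.0f) <= tol;
}

// The merged "fingers closed" target keeps the finger target's width but
// doubles its height (aspect ratio near 2).
bool isPinchTarget(const Rect& r, float tol = 0.25f) {
    return std::fabs(r.h / r.w - 2.0f) <= tol;
}

// Scan one frame's contours: a pinch is reported only when exactly one
// rectangle matches the closed-finger shape, with no dependence on what
// the previous frames contained.
bool detectPinch(const std::vector<Rect>& rects) {
    int matches = 0;
    for (const auto& r : rects)
        if (isPinchTarget(r)) ++matches;
    return matches == 1;
}
```

Because detection is per-frame, a brief tracking glitch only drops a point or two from the stroke instead of breaking a stateful open/closed model.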


There is a lot of work that can be done here to improve performance.




Drawing


With the pinching gesture in place, drawing on screen was the next logical feature. To achieve this, I tracked the centroid of the closed-finger target and saved its position in a custom class called daDrawPoint.


daDrawPoint contains an x, a y, and an ofColor. ofApp keeps a dynamic array of daDrawPoints, and for every frame that the fingers are pinched, a daDrawPoint with its x, y, and a fill color is pushed onto it.
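The per-frame recording can be sketched as follows. Color is a stand-in for ofColor, and DrawState is a hypothetical container for what ofApp holds in the real project:

```cpp
#include <vector>

// Stand-in for ofColor (RGB only).
struct Color { unsigned char r, g, b; };

// Mirrors the daDrawPoint described above: a position plus a fill color.
struct daDrawPoint {
    float x, y;
    Color color;
};

// Stand-in for the state ofApp keeps: every frame the fingers are
// pinched, the closed-target centroid is appended with the active color.
struct DrawState {
    std::vector<daDrawPoint> points;
    Color currentColor {0, 0, 0};

    void update(bool pinched, float cx, float cy) {
        if (pinched) points.push_back({cx, cy, currentColor});
    }
};
```

Storing the color per point (rather than per stroke) means the draw loop can simply set the fill from each point as it renders the whole array.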


The fill color is chosen by hovering your fingers over one of the four color targets. These color targets come from a custom class called daColorOption, which inherits from ofRectangle.


By inheriting from ofRectangle I can leverage its inside() function, which is key for changing colors.
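A minimal sketch of this inheritance trick, with a stand-in Rectangle base providing the inside() hit test (in the real project this comes from ofRectangle; pickColor is a hypothetical helper, not the app's actual code):

```cpp
#include <vector>

// Stand-in for ofColor (RGB only).
struct Color { unsigned char r, g, b; };

// Minimal ofRectangle-like base with the inside() hit test.
struct Rectangle {
    float x, y, width, height;
    bool inside(float px, float py) const {
        return px >= x && px <= x + width &&
               py >= y && py <= y + height;
    }
};

// daColorOption inherits the geometry and adds the color it represents,
// so checking a hover is just option.inside(cx, cy).
struct daColorOption : Rectangle {
    Color color;
};

// Return the hovered swatch's color, or keep the current one.
Color pickColor(const std::vector<daColorOption>& options,
                float cx, float cy, Color current) {
    for (const auto& o : options)
        if (o.inside(cx, cy)) return o.color;
    return current;
}
```

The payoff of the inheritance is that each swatch carries both its screen region and its color in one object, so hover detection needs no extra bookkeeping.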


A small rectangle on the lower right corner of the screen tells you the color you are currently drawing with.



Final result





The experiment works. There are performance improvements pending, but you can draw on screen using only your fingers and four colors!






Conclusions


  • Lighting is everything. Testing the app at different times of the day yielded very different results, so complete control over the lighting of your scene is imperative.

  • openFrameworks has a lot of redundant classes due to its nature. It was tricky determining which version of a Point2f to use.

  • I'd love to be able to mirror the image on the y axis. I wasn't able to flip both the image and the contours at the same time, so the drawing experience is a bit confusing since the x axis is mirrored: moving your hand to the right paints to your left.

  • Computer vision is tricky, but it's so interesting. I'm really looking forward to integrating openFrameworks with other tools such as Arduino and Unity.



You can see the repo for this code here.



Thanks for reading!