Creative Cube
1 week
April, 2013
We were challenged to create a new form of interaction using the Intel Perceptual Camera. We used the camera + Arduino to create a cube that, together, allow for a new way of expressing creativity. Each face of the cube has a different image on it. Once the cube is set down, the image on the top face is displayed on the screen via a projector, and the user can draw on that screen using their hands. They can turn the cube to a different side and draw again, as well as return to their previous drawings.
More Details
Intel reached out to our design program to challenge us to design new methods of interaction with their Intel Perceptual Computing Camera.
We worked within the realm of perceptual computing to address the creative process. We introduced the modality of a cube to control different canvases. On a particular canvas, the user can use one hand to draw and sketch, and the other hand to erase. These canvases allow the user to draw on a projected artboard to begin the creative process in the form of storytelling. Each canvas saves the drawing, and the user can return to it with a flip of the cube.
Team members : Adam Williams + Stephen Hicks + Emma Fagregren + Deepak Bhagchandani
Hackathon and first concept - Finding our opportunity space

From the very beginning our team was interested in applying the perceptual computing technology to creative spaces such as drawing and painting. We felt that exploring more “embodied” ways of drawing would be interesting. In the hackathon on October 21 we prototyped a simple gesture-based drawing tool. It consisted of a white canvas and a black marker, whose size you could change based on your distance to the camera. During the hackathon we realized that it would be hard to use the cameras for more precise drawing, as they are not yet reliable enough for working with small details. This insight was confirmed in conversation with Robert Cooksey, who told us that a more “impressionistic” or abstract form of drawing would currently be better suited to the cameras. Keeping this in mind, we thought that drawing would still be a worthwhile space to explore, but that we would have to accept less precision for now; future generations of the camera would no doubt be more precise.

Why can’t I just…. Wouldn’t it be cool if…

This exercise had us brainstorming other possible ways of making use of the cameras. As mentioned above, we received critique for being too focused on problem solving. In all honesty, the answers generated did not end up impacting our final design concept much. However, the exercise made us fully realize the possibilities associated with the perceptual computing challenge, and later in the process we could go back and compare our final concept with the answers generated in this exercise. This comparison helped us judge our design and make sure that the concept was not focused on problem solving.

Looking at inspiring videos and art/design

It wasn’t until after looking at exemplars and videos of art and interactive installations that the idea of making an interactive installation started to take form. See “Exemplars” for a list of the exemplars that inspired us. Using exemplars proved to be a useful method that helped us take our idea generation one step further. We started talking about how drawing in combination with the gesture camera could be used in different spaces.

Concept 1: Interactive Scrapbook
Inspiration: Light Tracer, Poetic Computation

People can hold up different materials, which are scanned/imaged by a camera, and a digital representation of their materiality is then made on a larger screen or projection surface. Holding up different tools (e.g. scissors) allows the person to interact with the digital material in different ways: holding up scissors allows the digital material to be cut or sectioned into smaller pieces, and hands/fingers can be used to position the pieces into different compositions. Photos can be imported, and holding up a pen allows for free-hand drawing. This could be used by crafters who want to digitally explore their materials and work with them in a non-destructive manner. It takes the tactual experience of working with physical tools and materials and brings it into a digital space, using their analogue functionality as a metaphor for a digital interaction.

Concept 2: Digital Tattoos
Inspiration: Lightrail

Use the human body as a surface that can be painted upon and morphed in an ephemeral and non-permanent manner. An overhead projector could pick up hand gestures over the body. When a person draws a finger on their skin with the camera engaged, that shape is projected on that part of the body, and proceeds to move with the person as long as the projector is engaged. Wiping or rubbing your hand across skin could remove the digital tattoo. This idea explores the increasingly popular idea of body modification within the domain of HCI, but in an ephemeral and intimate manner. The person can come to understand their own body through an artistic mindset, and the tactual experience of changing their body and appearance by interacting with oneself.

Concept 3: Poetic Bodystorming
Inspiration: Poetic Computation, Light Tracer

This concept explores the notion of brainstorming with a perceptual-driven approach. We wanted to explore how gesture, bodystorming, and artifacts could be used to create poetry or other creative compositions as a form of brainstorming. Inspirational pieces from Pinterest or snips of existing poetry could be imported into the composition. Using existing writing implements and gestures, they could be annotated and remixed through drawing or painting. The camera could keep track of bodystorming movements, and project them as an animated shadow on the canvas. Watching the video, the artist could make notes on the entire bodystorming session, adding how they might be feeling or thinking for each part. This explores the notion of “inspiration”, focusing on the embodiment of ideas and inspiration in the form of movement and perception, which can then be saved, watched, and reflected upon.

Concept 4: Crowdsourced Paint
Inspiration: Video Painting

What happens when perception becomes crowdsourced? This example explores how multiple people could come together and collaborate on one specific composition over a period of time. As people travel by a wall, they make marks from the movement of their bodies. They stop and explore, adding strokes to the composition before continuing on their way. This concept adds a more “temporal” notion of perception and places a greater emphasis on a crowd-sourced creative movement. It could be installed in any highly-trafficked public place, and be used to develop a more embodied (and creative) understanding of crowdsourcing.

Concept 5: The Clothes Make The…
Inspiration: Scattered Pixel

This idea considers perceptual and wearable computing. As people walk by a storefront, a camera recognizes them and “dresses” them with an article of clothing they might like in the store. It forces the person to stop and look at themselves, and as they move, the “clothing” moves with them. They can also “try on” these clothes with other strangers, creating a new (public) environment in which these clothes can be tried on and which facilitates communication (turning to a stranger and asking, “how do you think this looks?”). This concept takes the artifact of clothing and the experience of trying it on, and allows for a new context in which this experience can happen (i.e. a public space).

Concept 6: A Story a Day
Inspiration: Video Painting, Light Tracer, Lightrail, Exquisite Forest, Scattered Pixel


Storyboarding

After discussing which ideas we felt passionate about, we storyboarded a combination of our favorites into three storyboards: Poetic Brainstorming, Interactive Scrapbooking, and Collaborative Storytelling. Putting our ideas to paper let us explore them further, and in conversation with other teams we started to get a sense of what would be realistic to implement. We felt the constraints of time as well as the technical limitations of the camera; this ultimately caused us to move away from the idea of object recognition.

Final Concept
Our final design concept emerged in our last meeting before we parted for Thanksgiving break. Many ideas were discussed in the meeting, but as a team we did not feel excited about them. We played around with ideas of how we could move away from a computer-screen-based interface and use projectors to project drawings on a wall. A breakthrough came when we stopped thinking of our concept as a tool per se, and more as something to have fun with. Then we started thinking about how we could use an Arduino to incorporate physical objects in our concept, and from there our final concept began to emerge.

I started looking at the different Processing wrapper functions for the Intel Perceptual Computing SDK that I could use to allow people to draw and erase using interactions with their two hands. Initially I just played with all the different interactions that were possible with the camera, but later on I focused on the interactions that would afford the goals we wanted to achieve. I read the sample code provided in different projects and played around to learn more about how Processing actually works with the Intel Perceptual Computing SDK. I used the Java Map data structure to store the different screens. The most difficult part was getting the frames to render correctly, as Processing requires that each frame be redrawn completely from scratch every time. This required me to save the previous frame in memory and redraw it along with the new content for the new frame. It was challenging but fun at the same time.
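The canvas-per-face bookkeeping described above can be sketched in plain Java (rather than a full Processing sketch; the integer face IDs and the point-list stroke representation are assumptions for illustration):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Each cube face maps to its own saved canvas (here, a list of stroke points).
// In the actual Processing sketch, the saved content is redrawn from scratch
// on every draw() call, since Processing clears the frame each time.
class CanvasStore {
    private final Map<Integer, List<float[]>> canvases = new HashMap<>();
    private int activeFace = 1;

    // Called when the cube reports a new top face.
    void switchFace(int face) {
        activeFace = face;
    }

    // Record a drawing point on the active face's canvas.
    void addPoint(float x, float y) {
        canvases.computeIfAbsent(activeFace, k -> new ArrayList<>())
                .add(new float[] { x, y });
    }

    // The saved strokes to redraw each frame for the current face.
    List<float[]> activeStrokes() {
        return canvases.getOrDefault(activeFace, new ArrayList<>());
    }
}
```

Switching faces and back again returns the previously saved strokes, which mirrors how a flip of the cube restores an earlier drawing.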


Stephen was largely responsible for engineering the mechanics behind the Cube interface device. Early on, we thought of the Cube as being wireless to allow a greater dimension of movement, but also to reinforce the feeling that it was a tool that could be easily manipulated and controlled by the user. This required some thought as to exactly how the Arduino could communicate wirelessly with a PC and function as an interface device. Because the Cube was, at its core, a glorified keyboard, we began thinking about how particular sensors might be used to register a “button press”. As a side was turned “up”, only that side's sensor would activate and send a keyboard command to the screen, which would bring up the corresponding artboard in the Processing script. A Bluetooth module was soldered and attached to the Arduino to allow wireless connectivity from the tilt sensors to the PC. There was also some “creative” coding involved to ensure that only one tilt sensor could activate at a time, and that it sent only one command to the PC (instead of sending multiple, as when you press and hold down a keyboard key). Essentially, whenever a tilt sensor was triggered, it immediately turned ‘off’ and turned all the other sensors ‘on’, where the ‘off’ state was defined by a 1 and the ‘on’ state by a 0. A huge thank-you goes out to Bryan Hicks and Nathan Potts for their help in getting the code functioning, and to Vamsi and CJ for allowing us to use the Arduino shield from their previous project.
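The "only one sensor fires, and only once" behavior described above can be sketched as follows (in plain Java for readability; the real version ran as Arduino firmware, and the six-face indexing is an assumption):

```java
// Sketch of the tilt-sensor arming logic: a triggered sensor turns 'off'
// (disarmed, so repeated triggers while that face stays up are ignored)
// and all the other sensors are re-armed, so exactly one keyboard command
// is sent per flip of the cube.
class TiltCube {
    private static final int FACES = 6;
    private final boolean[] armed = new boolean[FACES];

    TiltCube() {
        for (int i = 0; i < FACES; i++) armed[i] = true;  // all 'on' at start
    }

    // Returns the face index to send as a keyboard command,
    // or -1 if this trigger should be suppressed.
    int onSensorTriggered(int face) {
        if (!armed[face]) return -1;          // already 'off': ignore repeat
        armed[face] = false;                  // turn this sensor 'off'
        for (int i = 0; i < FACES; i++) {
            if (i != face) armed[i] = true;   // re-arm the other five
        }
        return face;
    }
}
```

Repeated triggers from the same face produce no further commands until a different face fires, which re-arms the first one.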

Building the Box

Once we were set on the direction of drawing on different canvases, a cube quickly emerged as the preferred form to display multiple scenes. The cube needed to be large enough to contain the Arduino and give a semblance of the different scenarios that corresponded to each side. We decided on 10 inches to allow for a bit of a playful size while not being too large to handle. The next step was to investigate materials. Wood allows for much more precision but risks weighing too much, while taking a large toll on resources to construct and replicate. For prototyping purposes, foamboard stands out as easy to work with and light enough to handle for extended periods of time. The foamboard was cut down to six 10”x10” pieces: two pieces were left whole, two had lap joints cut on opposite sides with the X-Acto foamboard cutter, and the last two had lap joints cut on all four sides. These joints allowed the edges to connect in a more seamless manner. The sides were then all hot-glued together except the one with the four lap joints, which allowed access to the interior to set up the Arduino components; this side stayed fixed by friction. Next, the Arduino was put in place by banding it to a piece of foamboard and gluing that to the interior of the cube. Each tilt sensor was then affixed to a corresponding side. This array allowed one side to be triggered at a time as the cube was turned. Lastly, the scenes were printed onto 100 lb coated paper and glued to the outside, giving visual feedback about which side would call up which scene.


Emma was responsible for creating the graphics that would make up the canvas backgrounds. The graphics were created in Adobe Photoshop using a digital drawing tablet. Since they were to be both printed out and glued to the cube, as well as projected onto a wall, the resolution had to be high. An important consideration was that, since the gesture camera is not 100% reliable, the backgrounds had to allow for less-than-pixel-perfect drawings on top of them. Since we chose to restrict the pen to one preset color per background, the backgrounds also had to be designed to make that particular color clearly visible. We chose natural scenery as a theme that would support these requirements.

After creating the different parts of the prototype we met to put it all together. This was done in the following steps:

  • The first step was to build the cube around the Arduino and Bluetooth prototype. We mapped each “direction sensor” of the Arduino to its corresponding side of the cube and glued it to the inside of the foam core. The cube was built with a finished “lip” of paper to make the seams look more attractive.
  • We then synchronized the cube wirelessly with the complete Processing code and made sure it worked, tilting it from side to side to ensure that the keyboard command was being sent from the Arduino to the perceptual prototype. Testing helped us fine-tune the prototype and add a few lines of delay code to “debounce” the tilt sensors and keep them from firing too fast.
  • After fine-tuning the code, we imported the finished graphics into Processing to replace the placeholders we had set up. We also had our graphics printed out and glued onto the sides of the cube.
  • When all this was done we worked on refining the details. We chose the music for each background to create an ambient feeling during the creative process, and decided on the pen colors to be used with each background.
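The tilt-sensor debouncing mentioned in the steps above can be sketched as a simple time-window filter (in plain Java; the 200 ms window is an assumption, and the real prototype used a delay in the Arduino loop instead):

```java
// Triggers arriving within a short window of the last accepted one are
// dropped, which keeps a rattling tilt sensor from firing many commands
// for a single flip of the cube.
class Debouncer {
    private final long windowMs;
    private long lastAccepted;
    private boolean primed = false;  // true once we have accepted a trigger

    Debouncer(long windowMs) {
        this.windowMs = windowMs;
    }

    // 'now' is a millisecond timestamp, e.g. from Arduino's millis().
    boolean accept(long now) {
        if (primed && now - lastAccepted < windowMs) {
            return false;  // too soon after the last trigger: a bounce
        }
        primed = true;
        lastAccepted = now;
        return true;
    }
}
```

A trigger at t=0 ms is accepted, bounces at t=100 and t=150 ms are dropped, and the next accepted trigger can arrive at t=200 ms or later.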

Target Audience
We envision the following target audience and use case for our prototype:

We envision our prototype to be set up as a museum or public installation to correlate with our thinking in terms of “problem setting” and not “problem solving”. In terms of our target audience, we imagine those interested in augmenting their creative process (designers, writers, artists, and architects) would find enjoyment and engagement from using this design as a way to brainstorm and create. We expect these members of the “creative class” to see this primarily as a device with a low barrier to entry that can be influential in the early brainstorming process, but not directly replace the tried-and-true tools they use on an individual basis. Our hope is that our target audience will see the same potential for problem-solving impacts that we see in our design - that the movement of a physical device to represent artboards could correspond to a “desktop”, and that being able to quickly sketch, transition to another artboard, and back again becomes a new means by which creators interact with their process. We look forward to discovery and unanticipated uses pointing us to new design directions as well.

Stephen Hicks: for writing parts of this text + video