Event Name: Siggraph Asia 2011

Date: 12-15 December, 2011

Place: Hong Kong Convention and Exhibition Centre


SIGGRAPH Asia 2011 is the 4th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia. It is the place to be for an extensive range of best-practice-based education.

This year, the theme of the "Emerging Tech" session is "PLAY". Play is one of the simple pleasures in life. From birth to death, there is nothing as enjoyable as spending a few moments in play. Stress melts away, laughter flows, and magic memories are made. Whether with a hoop and a stick or a multi-million-dollar boat, people have always used technology to enable themselves to play in new and exciting ways. In the modern world, with all of its complexities, we need play more than ever before. At SIGGRAPH Asia 2011, the Emerging Technologies program features content that shows new forms of play. These technologies may include, but are not limited to: virtual reality, augmented reality, haptic devices, audio, 3D graphics, desktop games, mobile games, robots, sensor systems, pervasive computing, wearable devices, etc.

I am pleased to have had the FoodGenie project accepted for an Emerging Technologies demo at this conference, and I also had the chance to give a 20-minute presentation at the exhibition talk stage. This is a really big conference, including various courses, talks, an art gallery, and exhibitions. Although I did not have enough time to attend most of the sessions, I gained valuable experience from the conference.

The conference site opened on the afternoon of December 11. Since it was a Sunday afternoon, the site was nearly empty, and it took me a long time to find the right location. The Wi-Fi connection was very unstable. The next day, I asked for a private Wi-Fi access code, tested the connection to the Singapore site together with Xavier, and finally set up the demo successfully.

During the three-day exhibition, the demo had a lot of visitors, from local high school students to researchers, as well as people from industry. They were very interested in this food communication. Since this demo was a little different from the other works, which were more about graphics, augmented reality, or 3D imaging, most visitors were quite surprised by this form of communication. Although it was a pity that we could not show the food printer system on site, some visitors actually preferred the remote communication. They got very excited when they saw the remote food printer moving in real time and printing out the actual pattern! Although this application seemed a little strange to ordinary users at first, they came to like the idea once they understood the whole interaction concept and the scenarios.


There were also quite a few other interesting works; unfortunately, I did not have time to look into every one of them, so I have posted some of them here.


In an enclosed arena, a dozen small color-coded autonomous robots coexist and communicate with participants. Through sensors and programmed behaviors, the robots sense and respond to the presence of people within the arena. Participants interact with the robots and can attempt to control them.

KUSUGURI: Visual Tactile Integration for Tickling


The authors focus on the palm, a body part well suited to physical contact, and propose a method built around a visual-tactile display that appears as part of the palm. Users observe a moving fingertip that looks as if someone is tickling their palm, while the device provides a simple, slight vibration as they are being tickled. As a result, users perceive a tickling sensation through the sensory integration of visual cues (the movement of the fingertip) and tactile cues (the simple vibration).

360-Degree Fog Projection Interactive Display

This display uses multiple projectors, each showing a different image from a different viewpoint. Although all images are projected onto a single cylindrical fog screen, the fog exhibits Mie scattering, which strongly forward-scatters the projected light. As the observer walks around the fog screen, the fog acts like a parallax barrier, showing only one image to the observer at a time. In this way, the 3D shape of the object can be recognized through motion parallax.
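The geometry behind this can be sketched with a toy model. Because forward scattering dominates, the observer mainly sees light from the projector on the far side of the fog cylinder, roughly aligned with the observer's own viewing direction. The function below (a hypothetical illustration, not the exhibit's actual software) picks which of N evenly spaced projectors is visible from a given observer angle:

```python
def visible_projector(observer_angle_deg, num_projectors):
    """Index of the projector whose image the observer sees.

    Toy model: projectors are evenly spaced around the cylindrical fog
    screen, and forward (Mie) scattering means the observer mostly sees
    the projector on the opposite side of the screen.
    """
    spacing = 360.0 / num_projectors
    # The visible projector sits roughly opposite the observer.
    opposite = (observer_angle_deg + 180.0) % 360.0
    return round(opposite / spacing) % num_projectors

# Walking around an 8-projector setup, the visible image switches
# roughly every 45 degrees, which is what produces the motion parallax.
print(visible_projector(0, 8))   # -> 4 (the projector at 180 degrees)
print(visible_projector(50, 8))  # -> 5
```

As the observer's angle sweeps through 360 degrees, the selected index cycles through all projectors, so each step to a neighboring image gives a slightly rotated view of the object.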

