Here are some insights from the event:
WowWee talked about and demonstrated AR games built around fixed, pre-defined physical objects (model airplanes or toy figurines) that, when paired with the game running on a smartphone (Android and iOS), present a new 3D world on the phone screen. For example, a model airplane in the view of the smartphone camera would show up as a virtual airplane on the screen, letting you play a dogfight against other virtual airplanes, but the background, instead of being a virtual world, would be the live view from the camera. So you could visualize flying the airplane around the walls and ceiling of your room. Quite an interesting demo.
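The core trick here, compositing rendered content over the live camera feed, is easy to mock up. Below is a minimal sketch in Python with OpenCV that flies a sprite along a canned path over the webcam image; `plane.png` is a hypothetical RGBA sprite, and a real demo would drive the sprite from the recognized toy's pose rather than a fixed path.

```python
import cv2
import numpy as np

# Toy stand-in for the WowWee-style demo: the live camera feed is the backdrop
# and a "virtual airplane" sprite is alpha-blended on top of it.
# plane.png is a placeholder; any small image with an alpha channel will do.
sprite = cv2.imread("plane.png", cv2.IMREAD_UNCHANGED)  # BGRA

def overlay(frame, sprite, x, y):
    """Alpha-blend the sprite onto the frame at (x, y), clipping at the edges."""
    h, w = sprite.shape[:2]
    roi = frame[y:y + h, x:x + w]
    sub = sprite[:roi.shape[0], :roi.shape[1]]
    alpha = sub[:, :, 3:4] / 255.0
    roi[:] = (alpha * sub[:, :, :3] + (1 - alpha) * roi).astype(np.uint8)

cap = cv2.VideoCapture(0)
t = 0.0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Fly the sprite in a circle; a real demo would derive this from tracking.
    x = int(200 + 150 * np.cos(t))
    y = int(200 + 100 * np.sin(t))
    overlay(frame, sprite, x, y)
    t += 0.05
    cv2.imshow("AR dogfight (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```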
Qualcomm, through their internal startup ventures, is pushing the technology behind AR and working with companies that want to use it to build smartphone apps (again Android and iOS) that boost user engagement by weaving AR into the app. They provide an AR SDK called Vuforia that can potentially exploit the AR engine in their Snapdragon chipsets, which power a large number of smartphones these days. They mentioned three areas where interest is picking up: gaming, advertising, and instructional apps. The first was similar to the WowWee demo. The second was quite interesting. Brands (for example, Heinz ketchup) can build an app so that, when you bring the ketchup bottle into the view of the smartphone camera, you see recipes overlaid on top of the bottle. Another example they gave was apparel manufacturers: scanning a watch ad and then placing your hand in front of the camera caused the watch to be overlaid on top of the hand on the smartphone display. This was pretty cool, and I think it has interesting applications where you can "try out" how things might look on you without ever stepping outside your home. The third application, instructional apps, was also quite novel. If, for example, you point the camera at a TV set, its interactive user manual is overlaid on top of the camera view on the smartphone display, letting you look up specific aspects of the TV's controls and setup without opening a printed or online user manual.
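The image-target idea behind these demos, recognizing a known flat image (a ketchup label, a watch ad) and pinning content to it, can be approximated with standard computer-vision building blocks. The sketch below is not the Vuforia API; it uses OpenCV ORB features plus a RANSAC homography to warp a placeholder "recipe card" onto the detected target in the camera frame. `label.jpg` and `recipe.png` are hypothetical file names.

```python
import cv2
import numpy as np

target = cv2.imread("label.jpg", cv2.IMREAD_GRAYSCALE)  # reference image of the target
overlay_img = cv2.imread("recipe.png")                   # content to project onto it

orb = cv2.ORB_create(1000)
kp_t, des_t = orb.detectAndCompute(target, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp_f, des_f = orb.detectAndCompute(gray, None)
    if des_f is not None:
        matches = matcher.match(des_t, des_f)
        if len(matches) > 30:  # crude confidence threshold
            src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            if H is not None:
                # Scale the overlay to the target's size, then map it into the frame.
                h_o, w_o = overlay_img.shape[:2]
                h_t, w_t = target.shape[:2]
                scale = cv2.getPerspectiveTransform(
                    np.float32([[0, 0], [w_o, 0], [w_o, h_o], [0, h_o]]),
                    np.float32([[0, 0], [w_t, 0], [w_t, h_t], [0, h_t]]))
                warped = cv2.warpPerspective(overlay_img, H @ scale,
                                             (frame.shape[1], frame.shape[0]))
                frame = np.where(warped.sum(axis=2, keepdims=True) > 0, warped, frame)
    cv2.imshow("image-target overlay (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```

A production SDK adds a lot on top of this, of course: frame-to-frame tracking so the overlay doesn't jitter, full 6-degree-of-freedom pose estimation for 3D content, and (in Vuforia's case) hardware acceleration on the chipset.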
I think this field is in its infancy but has huge potential going forward. One of the key questions discussed was whether holding the smartphone in one hand is cumbersome and clunky, and whether we might move to heads-up displays like those in the Terminator. The panel seemed to indicate that users have always expected a heads-up display, but form factor, weight, connectivity, etc. will ultimately decide whether users adopt it. If the form factor is somewhat similar to that of Google Glass, and it performs like some of the demos and videos, then it might see better adoption. There was also mention of some companies working on contact lenses with this technology built straight into the lens. Spooky? Remember the scene from Minority Report with Tom Cruise walking through the mall and the various stores targeting ads directly to his retina?
I like the idea of using AR to code in front of a huge virtual screen, somewhat similar to that scene in Minority Report with Colin Farrell/Tom Cruise. I use dual monitors at work, but I still feel I need more screen space! Ideally, I would like a wall of virtual screen space with windows that I could pan, tilt, and zoom, using hand gestures to move windows around, a keyboard to type on the active window, and my finger as a 3D mouse. The various windows could be the different browser, debugger, emulator, etc. sessions, letting me switch contexts easily without burning precious screen real estate. It would be interesting to see if a proof-of-concept like this could be hacked together using a Kinect and a projector.
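As a very crude starting point for such a hack, the sketch below assumes the libfreenect Python bindings (the `freenect` module) and simply treats the point nearest to the Kinect as the hand, using it to drag a placeholder "window" around a projector-sized canvas. Real gesture recognition, window management, and the keyboard/3D-mouse pieces are all left out; the window sizes and canvas resolution are assumptions.

```python
import freenect  # Python wrapper for libfreenect (OpenKinect)
import cv2
import numpy as np

CANVAS_W, CANVAS_H = 1920, 1080                          # assumed projector resolution
windows = [[100, 100, 500, 350], [700, 200, 500, 350]]   # x, y, w, h of fake windows
active = 0                                               # index of the window being dragged

while True:
    depth, _ = freenect.sync_get_depth()         # 480x640 array of raw 11-bit depths
    valid = np.where(depth > 0, depth, 2047)     # treat 0 (no reading) as "far away"
    y, x = np.unravel_index(np.argmin(valid), valid.shape)  # nearest point ~ the hand

    # Map Kinect image coordinates onto the projected canvas and recenter the window.
    cx = int(x / 640 * CANVAS_W)
    cy = int(y / 480 * CANVAS_H)
    windows[active][0] = cx - windows[active][2] // 2
    windows[active][1] = cy - windows[active][3] // 2

    canvas = np.zeros((CANVAS_H, CANVAS_W, 3), np.uint8)
    for i, (wx, wy, ww, wh) in enumerate(windows):
        color = (0, 255, 0) if i == active else (128, 128, 128)
        cv2.rectangle(canvas, (wx, wy), (wx + ww, wy + wh), color, 2)
    cv2.imshow("virtual wall (sketch)", canvas)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()
```

Nearest-point tracking is obviously a stand-in for proper hand/skeleton tracking (OpenNI/NITE or the Kinect SDK would do much better), but it is enough to get a feel for moving windows around a wall with your hand.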