Electronic Entertainment Expo 2014

This past week, the Electronic Entertainment Expo (commonly referred to as E3) kicked off. This annual trade fair, presented by the Entertainment Software Association (ESA), is used by video game publishers, accessory manufacturers, and members of the computing industry to present their upcoming games and game-related merchandise. The festivities wrapped up this Friday, and were the source of some controversy and much speculation.

For starters, the annual show opened amidst concerns that the dent caused by Massively Multiplayer Online Role-Playing Games (MMORPGs) and online gaming communities would start to show. And this did seem to be the case. The annual Los Angeles show normally sets expectations for the rest of the year in video games – and that certainly happened – but E3 2014 was mainly about clearing the runway for next year.

Nowhere was this clearer than with Nintendo, which was the source of quite a bit of buzz when the Expo began. But it was evident that games – particularly for the Wii U – were not going to materialize until 2015. The company got a jump on the next-generation console battle by launching its Wii U in late 2012, a year ahead of Sony and Microsoft, but poor sales have led big game developers to largely abandon the console.

And while the company did announce a number of new games – including an open-world Legend of Zelda; Mario Maker, a new Mario game that lets players create custom levels; and Splatoon, in which teams of players shoot coloured ink at each other – none are scheduled for release until next year. That dearth of blockbusters for the rest of 2014 is mirrored at Microsoft and Sony, which are also light on heavyweight first-party titles for the rest of this year.

Both companies have big guns in the works, such as Halo 5: Guardians and Uncharted 4: A Thief’s End, but those are also scheduled for release in 2015. With the brisk sales of the Xbox One and PlayStation 4 consoles, however, both have the luxury of taking their time with big games. Nintendo is not so fortunate: its early jump with the Wii U has left a gap in its lineup that it apparently isn’t filling.

The comparatively under-powered Wii U will only look less capable than its rivals as time passes, meaning Nintendo can’t afford to wait much longer to get compelling titles to market, especially as financial losses mount. Even long-time Nintendo supporters such as Ubisoft aren’t sure what to make of the Wii U’s future. The other big question heading into E3 was whether Microsoft could regain its mojo.

The software giant bungled the Xbox One launch last year and alienated many gamers, mainly by focusing on TV and entertainment content instead of gaming and by tying several unpopular policies to the console, including restrictions on used games. The company eventually relented, but the Xbox One still came bundled with the voice- and motion-sensing Kinect peripheral and a price tag $100 higher than Sony’s rival PlayStation 4.

The result is that while the Xbox One has sold faster than the Xbox 360 did – five million units so far – it has still moved two million fewer units than the PS4. Change began in March, when Microsoft executive Phil Spencer, known as a champion of games, took over the Xbox operation. He wasted no time in stressing that the console is mainly about gaming, and made the Kinect optional – lowering the Xbox One’s price to match the PS4.

That was certainly the focus for Microsoft at E3. TV features weren’t even mentioned during the company’s one-and-a-half-hour press conference on Monday, with Microsoft instead talking up more than 20 upcoming games. As Mike Nichols, corporate vice-president of Xbox and studios marketing, said in an interview:

We didn’t even talk about all the platform improvements to improve the all-out gaming experience that we’ve made or will be making. We wanted to shine a light on the games.

Another big topic at the show was virtual reality, as this year’s E3 featured demonstrations of the Oculus Rift VR headset and Sony’s Project Morpheus. The former has attracted a great deal of attention in recent years, with many commentators crediting it with effectively restoring interest in VR gaming. Though popular for a brief period in the mid-’90s, VR was quickly abandoned as bulky equipment and unintuitive controls caused interest to wane.

But the Rift, whose maker Oculus VR was recently bought by Facebook for $2 billion, was undeniably the hottest thing on the show floor, and the demo booth, where people got to try it on and take it for a run, was booked solid throughout the expo. Sony also wowed attendees with demos of Project Morpheus. And while the PlayStation maker’s effort isn’t as far along in development as the Oculus Rift, it does work, and it adds legitimacy to the VR field.

And as already noted, the expo also had its share of controversy. For starters, Ubisoft stuck its proverbial foot in its mouth when a developer from its Montreal studio admitted that plans for a female protagonist in the upcoming Assassin’s Creed: Unity had been scrapped because it would supposedly have been “too much work”. This led to a serious drubbing from internet commentators, who called the company sexist for its remarks.

Legendary Japanese creator Hideo Kojima also had to defend the torture scenes in his upcoming Metal Gear Solid V: The Phantom Pain, starring Canadian actor Kiefer Sutherland (man loves torture!), which upset some viewers. Kojima said he felt the graphic scenes were necessary to explain the main character’s motivations, and that games will never be taken seriously as culture if they can’t deal with sensitive subjects.

And among the usual crop of violent shoot-‘em-up titles, previews of Electronic Arts’ upcoming Battlefield: Hardline hint that the game is likely to stir up its share of controversy when it’s released this fall. The game puts players in the shoes of cops and robbers as they blow each other away in the virtual streets of Los Angeles. Military shooters are one thing, but killing police will undoubtedly ruffle some feathers in the real world.

If one were to draw any conclusions from this year’s E3, it would be that times are both changing and staying the same. From console gaming garnering less and less of the gaming market, to the second coming of virtual reality, there is a shift in technology under way that may or may not be good for the current captains of industry. At the same time, the battle to maintain a large share of the market continues, with Sony, Microsoft, and Nintendo at the forefront.

But in the end, arguably the most buzz was focused on the trailers for the much-anticipated game releases. These included Batman: Arkham Knight, Call of Duty: Advanced Warfare, Far Cry 4, Sid Meier’s Civilization: Beyond Earth, the aforementioned Metal Gear Solid V: The Phantom Pain, and Assassin’s Creed: Unity. Be sure to check these out below:

Assassin’s Creed: Unity


Batman: Arkham Knight


Call of Duty: Advanced Warfare


Halo 5: Guardians


Sources:
cbc.ca, ca.ign.com, e3expo.com, gamespot.com

Digital Eyewear Through the Ages

Given the sensation created by the recent release of Google Glass – a timely invention that calls to mind everything from ’80s cyberpunk to speculations about our cybernetic, transhuman future – a lot of attention has been focused lately on personalities like Steve Mann and Mark Spitzer, and on the history of wearable computers.

For decades now, visionaries and futurists have been working towards a day when all personal computers are portable and blend seamlessly into our daily lives. And with countless imitators coming forward to develop their own variants and hate crimes being committed against users, it seems like portable/integrated machinery is destined to become an issue no one will be able to ignore.

And so I thought it was high time for a little retrospective: a look back at the history of eyewear computers and digital devices, to see how far they have come. From humble beginnings with bulky backpacks and large, head-mounted displays to the current age of small fixtures that can be worn as easily as glasses, things certainly have changed. And the future is likely to get even more fascinating, weird, and a little bit scary!

Sword of Damocles (1968):
Developed by Ivan Sutherland and his student Bob Sproull at the University of Utah in 1968, the Sword of Damocles was the world’s first head-mounted display. It consisted of a headband with a pair of small cathode-ray tubes attached to the end of a large instrumented mechanical arm, through which head position and orientation were determined.

Hand positions were sensed via a hand-held grip suspended at the end of three fishing lines whose lengths were determined by the number of rotations sensed on each of the reels. Though crude by modern standards, this breakthrough technology would become the basis for all future innovation in the field of mobile computing, virtual reality, and digital eyewear applications.
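
The hand-tracking geometry here is worth a closer look: three known anchor points plus three measured line lengths pin down a position in space by simple trilateration. Below is a minimal Python sketch of that idea (the anchor coordinates, line lengths, and function are my own illustration, not taken from Sutherland’s actual system):

```python
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Recover a 3D point from its distances to three known anchor points.

    Classic closed-form trilateration: build an orthonormal frame from the
    anchors, solve in that frame, then map back to world coordinates.
    """
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)
    i = np.dot(ex, p3 - p1)
    ey = p3 - p1 - i * ex
    ey /= np.linalg.norm(ey)
    ez = np.cross(ex, ey)                       # normal to the anchor plane
    d = np.linalg.norm(p2 - p1)
    j = np.dot(ey, p3 - p1)

    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
    z = np.sqrt(max(r1**2 - x**2 - y**2, 0.0))  # clamp tiny negatives from noise
    # Two mirror-image solutions exist; take the one on the -ez side,
    # which for these ceiling-mounted anchors is the point hanging below them.
    return p1 + x * ex + y * ey - z * ez

# Reels at three ceiling points (metres); line lengths come from reel rotations.
anchors = [(0.0, 0.0, 2.5), (1.0, 0.0, 2.5), (0.0, 1.0, 2.5)]
lengths = (1.9, 2.1, 2.1)
print(trilaterate(*anchors, *lengths))          # -> roughly [0.1, 0.1, 0.6]
```

Noisy, real-world versions of this same range-based math still underpin positioning systems today.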

WearComp Models (1980-84):
Built by Steve Mann (inventor of the EyeTap and considered to be the father of wearable computers) in 1980, the WearComp1 cobbled together many devices to create visual experiences. It included an antenna for communicating wirelessly and sharing video. In 1981, Mann designed and built a backpack-mounted wearable multimedia computer with text, graphics, and multimedia capability, including video.

By 1984 – the same year Apple’s Macintosh first shipped and William Gibson’s science fiction novel Neuromancer was published – he had released the WearComp4 model. This version employed clothing-based signal processing, a personal imaging system with a left-eye display, and separate antennas for simultaneous voice, video, and data communication.

Private Eye (1989):
In 1989, Reflection Technology marketed the Private Eye head-mounted display, which scanned a vertical array of LEDs across the visual field using a vibrating mirror. The monochrome screen measured 1.25 inches on the diagonal, but the image appeared equivalent to a 15-inch display viewed from 18 inches away.

EyeTap Digital Eye (1998):
Steve Mann is considered the father of digital eyewear and of what he calls “mediated” reality. He is a professor in the department of electrical and computer engineering at the University of Toronto and an IEEE senior member, and also serves as chief scientist for the augmented reality startup Meta. The first version of the EyeTap was produced in the 1970s and was incredibly bulky by modern standards.

By 1998, he had developed the version that is commonly seen today, mounted over one ear and in front of one side of the face. It is worn in front of the eye, recording what is immediately in front of the viewer and superimposing digital imagery on the view. It uses a beam splitter to send the same scene to both the eye and a camera, and is tethered to a computer worn on the body in a small pack.

MicroOptical TASK-9 (2000):
Founded in 1995 by Mark Spitzer, who is now a director at the Google X lab, the company produced several patented designs, which were bought up by Google after MicroOptical closed in 2010. One such design was the TASK-9, a wearable computer that could be attached to a pair of glasses. For years, MicroOptical’s line of viewers remained among the lightest head-up displays on the market.

Vuzix (1997-2013):
Founded in 1997, Vuzix created the first video eyewear to support stereoscopic 3D for the PlayStation 3 and Xbox 360. Vuzix then went on to create the first commercially produced pass-through augmented reality headset, the Wrap 920AR, which has two VGA video displays and two cameras that work together to give the user a view of the world that blends real-world input and computer-generated data.

Other products of note include the Wrap 1200VR, a virtual reality headset with numerous applications – everything from gaming and recreation to medical research – and the M100 Smart Glasses, a hands-free display for smartphones. Since the 2011 Consumer Electronics Show, the company has announced and released several heads-up AR displays that attach to glasses.


MyVu (2008-2012):
Founded in 1995, also by Mark Spitzer, MyVu developed several different types of wearable video display glasses before closing in 2012. The most famous was the Myvu Personal Media Viewer, a set of display glasses released in 2008. These became instantly popular with the wearable computer community because they provided a cost-effective and relatively easy path to a DIY, small, single-eye, head-mounted display.

In 2010, the company followed up with the release of the Viscom digital eyewear, a device developed in collaboration with Spitzer’s other company, MicroOptical. This smaller head-mounted display comes with earphones and is worn over one eye like a pair of glasses, similar to the EyeTap.


Meta Prototype (2013):
Developed by Meta, a Silicon Valley startup funded with the help of a Kickstarter campaign and supported by Steve Mann, this wearable computing eyewear utilizes the latest in VR and projection technology. Unlike other display glasses, Meta’s eyewear enters 3D space and uses your hands to interact with the virtual world, combining the benefits of the Oculus Rift with those offered by “Sixth Sense” technology.

The Meta system includes stereoscopic 3D glasses and a 3D camera to track hand movements, similar to the portrayals of gestural control in movies like “Iron Man” and “Avatar.” In addition to display modules embedded in the lenses, the glasses include a portable projector mounted on top. This way, the user is able to both project and interact with computer simulations.

Google Glass (2013):
Developed by Google X as part of its Project Glass, the Google Glass device is a wearable computer with an optical head-mounted display (OHMD) that incorporates all the major advances made in the field of wearable computing over the past forty years. These include a smartphone-like hands-free format, a wireless internet connection, voice commands, and a full-colour augmented-reality display.

Development began in 2011, and the first prototypes were previewed to the public at the Google I/O annual conference in San Francisco in June of 2012. Though the device does not currently come with fixed lenses, Google has announced its intention to partner with sunglass retailers to equip it with regular and prescription lenses. There is also talk of developing contact lenses that come with embedded display devices.

Summary:
Well, that’s the history of digital eyewear in a nutshell. And as you can see, since the late ’60s the field has progressed by leaps and bounds. What was once a speculative and visionary pursuit has blossomed into a fully-fledged commercial field, with many different devices being produced for public consumption.

At this rate, who knows what the future holds? In all likelihood, the quest to make computers more portable and ergonomic will keep pace with the development of more sophisticated electronics and computer chips, miniaturization, biotechnology, nanofabrication and brain-computer interfacing.

The result will no doubt be tiny CPUs that can be implanted in the human body and integrated into our brains via neural chips and tiny electrodes. By then, we probably won’t even need voice commands, because neuroscience will have developed a means of communicating directly with our devices via brainwaves. The age of cybernetics will have officially dawned!

Like I said… fascinating, weird, and a little bit scary!


News From Space: Walk on Mars with VR

Virtual reality, once the stuff of a cyberpunk wet dream, has grown somewhat stagnant in recent years. Large, bulky headsets, heavy cables, and low-definition, two-dimensional graphics just didn’t seem to capture the essence of the concept. Thanks to the Oculus Rift, however, the technology has been getting a new lease on life.

Though it is still in the development phase, the makers of the Oculus Rift have mounted some impressive demos. And though it remains somewhat limited – using it with a mouse is counter-intuitive, and using it with a keyboard prevents you from using your body to scan virtual environments – the potential is certainly there, and the only question at this point is how to expand on it and give users the ability to do more.

One group determined to explore its uses is NASA, which used it in combination with an Omni treadmill to simulate walking on Mars. Already, the combination of these two technologies has allowed gamers to do some pretty impressive things – moving through and interacting with immersive environments (mainly by shooting and blowing things up) – which is exactly what VR is meant to allow.

NASA’s Jet Propulsion Laboratory, however, went a step beyond this by combining the Omni with a stereoscopic 360-degree panorama of Mars to create a walking-on-Mars simulator. The JPL team was able to give depth to the panorama so that users could walk around the Martian landscape. This is perhaps the closest normal folks will ever get to walking around on a “real” alien planet.
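
JPL hasn’t published the demo’s internals, but the core trick – rendering a view into an equirectangular panorama from the headset’s yaw and pitch – is easy to sketch. Here’s a purely illustrative Python version using NumPy (the function name, image sizes, and the random stand-in panorama are assumptions of mine):

```python
import numpy as np

def view_from_panorama(pano, yaw_deg, pitch_deg, fov_deg=90.0, out_w=640, out_h=400):
    """Render a pinhole-camera view from an equirectangular panorama.

    pano covers 360 degrees horizontally and 180 vertically; yaw/pitch would
    come from the headset's orientation tracker. Nearest-neighbour sampling
    keeps the sketch short.
    """
    H, W = pano.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)   # focal length in pixels

    # One ray per output pixel, camera frame: x right, y down, z forward.
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                         np.arange(out_h) - out_h / 2.0)
    d = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)

    # Rotate rays into the world frame: pitch about x, then yaw about y.
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch),  np.cos(pitch)]])
    Ry = np.array([[ np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    d = d @ (Ry @ Rx).T

    # Direction -> longitude/latitude -> panorama pixel.
    lon = np.arctan2(d[..., 0], d[..., 2])                # -pi .. pi
    lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))        # -pi/2 .. pi/2
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return pano[v, u]

# Random 2:1 image standing in for a Mars panorama mosaic.
pano = np.random.randint(0, 255, (1024, 2048, 3), dtype=np.uint8)
frame = view_from_panorama(pano, yaw_deg=30, pitch_deg=-10)
print(frame.shape)   # (400, 640, 3)
```

The treadmill then simply translates the viewpoint (or, with a stereo pair and depth, shifts between views) as the user walks.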

Along with the Martian terrain, JPL created a demo wherein the user could wander around the International Space Station. The JPL team also found that, for all its sophistication, imagery beamed back to Earth is no substitute for being immersed in an environment. Using a rig similar to the Rift and Omni could help researchers better orient themselves to alien terrain, and thereby better plan missions and experiments.

Looking to the long run, this kind of technology could be a means of enabling “telexploration” (or immersive space exploration) – a process whereby astronauts could explore alien environments by connecting to a rover’s or satellite’s camera feed and controlling its movements. In a way similar to teleconferencing, people would be able to conduct true research on an alien environment while feeling like they were actually there.

Already, scientists at the Mars Science Laboratory have been doing just that with Curiosity and Opportunity, but the potential to bring this immersive experience to others is something many NASA and other space scientists want to see in the near future. What’s more, it is a cheap alternative to actually sending manned missions to other planets and star systems.

By simply beaming images back and allowing users to remotely control the robotic platform that is sending them, the best of both worlds can be had at a fraction of the cost. What’s more, it will allow people other than astronauts to witness and feel involved in the process of exploration – something that social media and live broadcasts from space are already allowing.

It seems the age of open and democratic space travel is on its way, my friends. And as usual, there’s a video clip of the Oculus Rift and the Omni treadmill bringing a walk on Mars to life. Check it out:


Sources:
extremetech.com, engadget.com

The Future is Here: The VR Cave!

It’s called the CAVE2, a next-generation virtual reality platform that is currently the most advanced visualization environment on Earth. Whereas other VR platforms are either in 2D or limited in terms of interactive capability, the CAVE2 is about the closest thing there is to a real-life holodeck. This is accomplished through a series of panoramic, floor-to-ceiling LCD displays and an optical tracking interface that is capable of rendering remarkably realistic 3D environments.

Developed by the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago, CAVE2 is a direct follow-up to the VR platform the university created back in 1992. Like the original, the name stands for “Cave Automatic Virtual Environment”, but whereas its predecessor was set in a cube-shaped room, the new environment is set within a cylindrical, 320-degree immersive space. In addition, the screens, sound, and resolution have all been vastly upgraded.

For example, the 7.5 by 2.5 meter space (24 feet x 8 feet) is covered floor-to-ceiling with 72 stereoscopic LCD screens, which together output a 37-megapixel 3D image (roughly 7,360 x 4,912 pixels, or about double that pixel count in 2D). This allows for a pixel density that is on par with the human eye’s angular resolution at 20/20 vision. Headgear is needed to get the full 3D effect, and the entire apparatus is controlled by a hand-held wand.
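
Those figures are easy to sanity-check with a little arithmetic. Assuming – and these layout numbers are my own inference, not EVL’s published spec – 18 columns by 4 rows of standard 1366 x 768 panels wrapped around a 320-degree arc, the totals line up with the 37-megapixel claim:

```python
# Back-of-the-envelope check of CAVE2's resolution claims.
# Assumed layout (my guess, consistent with the 72-panel figure):
# 18 columns x 4 rows of 1366 x 768 LCD panels over a 320-degree arc.
cols, rows = 18, 4
panel_w, panel_h = 1366, 768
arc_degrees = 320

h_pixels = cols * panel_w                      # 24,588 pixels around the arc
total_px = h_pixels * rows * panel_h           # 2D pixel count
px_per_arcmin = h_pixels / (arc_degrees * 60)  # 20/20 vision resolves ~1 arcminute

print(f"{total_px / 1e6:.1f} MP in 2D, {total_px / 2e6:.1f} MP in 3D")
print(f"{px_per_arcmin:.2f} horizontal pixels per arcminute")
# -> 75.5 MP in 2D, 37.8 MP in 3D (matching the ~37-megapixel figure),
#    and ~1.28 px/arcmin, i.e. roughly at the limit of 20/20 acuity.
```

(In passive stereo, each eye sees half the rows, which is why the 3D figure is half the 2D one.)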

Yes, in addition to the holodeck, some other science fiction parallels come to mind. For example, there’s the glove-controlled holographic interface from Minority Report, the high-tech nursery in Ray Bradbury’s short story The Veldt, and the parlor walls he envisioned in Fahrenheit 451. And apparently this is no accident, since Jason Leigh, the lab’s director and head of the project, is a major sci-fi geek!

But of course, all this technology was designed with some real-life, practical applications in mind. These range from the exploration of outer space to the exploration of inner space, particularly the human body. As Ali Alaraj, a neurosurgeon who has used the CAVE2, put it:

“You can walk between the blood vessels. You can look at the arteries from below. You can look at the arteries from the side. …That was science fiction for me. It’s fantastic to come to work. Every day is like getting to live a science fiction dream. To do science in this kind of environment is absolutely amazing.”

All of this bodes well for NASA’s plans for space exploration involving space probes, holographics, and avatars. It would also be incredibly awesome as far as individual hospitals are concerned: henceforth, they could perform diagnostics using nanoprobes that detail a patient’s body, inch for inch, from the inside out.

And of course, the EVL has provided a cool video of the CAVE2 platform in action. Check it out:

Sources: io9.com, evl.uic.edu

The Future Is Here: The EyeTap

There has been some rather interesting and revolutionary technology released lately, and a good deal of it involves the human eye. First there was Google Glass, then there were VR contact lenses, and now the new EyeTap! This technology, which is consistent with the whole “Sixth Sense” computing trend, uses the human eye as an actual display and camera… after a fashion.

Used in conjunction with a portable computer, the EyeTap combines the latest in display technology and augmented reality to allow for computer-mediated interaction with one’s environment. The device takes in images of the surrounding area and, with the assistance of the computer, augments, diminishes, or otherwise alters the user’s visual perception of what they see.

In addition, plans for the EyeTap include computer-generated displays so the user can also interface with the computer and do work while they’re AFK (Away From Keyboard, according to The Big Bang Theory). The figure below depicts the basic structure of the device and how it works.

Ambient light is taken in by the device just as it would be by a normal eye, but is first reflected by the Diverter. The reflected rays are collected by a sensor (typically a CCD camera) while the computer processes the data. At this point, the Aremac display device (“camera” spelt backwards) redisplays the image as rays of light. These rays reflect off the Diverter again, becoming collinear with the rays of light from the scene. What the viewer perceives is referred to as “virtual light”, which can either be altered or show the same image as before.
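
In software terms, that loop boils down to: capture a frame, mediate it, redisplay it. Here’s a purely conceptual Python sketch of such a mediated-reality loop – not Mann’s actual software – with OpenCV, a webcam, and a desktop window standing in for the diverter/CCD assembly and the aremac:

```python
import cv2

def mediate(frame):
    """Alter the scene before it reaches the eye: diminish reality by
    dimming everything, then augment it with a text overlay."""
    dimmed = cv2.convertScaleAbs(frame, alpha=0.6, beta=0)       # diminish
    cv2.putText(dimmed, "EyeTap: mediated view", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)   # augment
    return dimmed

cap = cv2.VideoCapture(0)                  # webcam stands in for the CCD sensor
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("aremac", mediate(frame))   # window stands in for the aremac
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```

The real device does this optically and at eye-resolution, of course; the point is only that anything in the frame can be rewritten before the viewer sees it.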

While the technology is still very much under development, it represents a major step forward in terms of personal computing, augmented reality, and virtual interfacing. And if this sort of technology can ever be permanently implanted in the human eye, it will also be a major leap for cybernetics.

Once again, Gibson must be getting royalties! His fourth novel, Virtual Light – the first of the Bridge Trilogy – featured a type of display glasses that relied on this very technology to project images into the user’s visual field. Damn, that man always seems to be on top of things!

And just for fun, here’s a clip from the recent Futurama episode featuring the new eyePhone. Hilarious, if I do say so myself!