The Future of Electronics: Touch Taiwan 2013!

Every year, companies from around the world dedicated to creating touch surfaces, displays, and personal digital devices convene in Taipei, Taiwan, for the International Touch Panel and Optical Film Exhibition – otherwise known as Touch Taiwan. Running from August 28th to 30th, this year’s show treated visitors to over 1,000 exhibition booths showcasing the latest from developers of touch panels, OLEDs, flexible displays, and optical films.

One such company is AUO, a display maker based in Taiwan that is working on flexible, ultra-thin technology. Much like the AMOLED (Active-Matrix Organic Light-Emitting Diode) display Nokia showcased at CES in Las Vegas last year, the AUO exhibit featured a series of screens that could be bent but would still display a crystal-clear image at 512 pixels per inch.

This is in keeping with the apparent “pixel race” that is on, with developers trying to outdo each other in sheer pixel density. 512 ppi seems to be the current high, though that can be expected to change soon! And though the AUO displays seen here are not yet available on a specific device, it is clear that future devices will look something like this:

AUO Ultra-Thin Display Tech:


Another big hit at the show was display glasses. Clearly, the consumer electronics industry is now in a race to create the next generation of Google Glass, looking for ways to improve on the existing technology by making it smaller and cheaper, and the images sharper. That was the rationale behind CPT’s booth, where a series of display glasses were shown that relied on a “smartbox” display rather than display lenses.

As you can see, the smartbox resides in the upper right corner of the glasses, where a person can consult it whenever they are out and about. Simply look to your upper right to get a desktop image or browse, and look away to see the rest of the world. The goal here is clearly utilitarian, with CPT hoping to create something that could beam images into your eye without fear of distraction.

What’s impressive here is that CPT was able to use AMOLED technology to create detailed, multi-colored images at 200 ppi in a smartbox display only half an inch across. The technology is ready to ship, so expect to see a wider range of display glasses at your electronics store soon!

CPT AMOLED Smart Glass:


Aside from AMOLED technology, there are the equally important developments being made in Micro-Light-Emitting Diode (or MLED) technology, which offers the same benefits as LEDs in a much smaller package that draws significantly less power. The company leading the charge here is ITRI, a research division of the Taiwanese government that also creates consumer electronics.

So far, the display is monochromatic, as you can see from the video below. However, ITRI expects to have a full-color version ready towards the end of 2013. Have a gander:

ITRI MicroLED Display:


And then there was Corning Glass, which once again made big waves with the display of its “Gorilla Glass”, a next-generation type of display glass developed with Microsoft. As their promotional video from last year demonstrated (“A Day Made of Glass”), the company hopes that this new type of display surface will one day be integrated into all walks of life because of its sheer versatility.

And aside from the usual benefits being offered – a thin surface that is sensitive to touch commands and offers high-definition imagery – Gorilla Glass (as its name suggests) is also highly resistant to damage. Whereas other makers are focusing on small devices that withstand damage by being flexible, Corning and Microsoft are thinking big and resilient. Check out the video:

Gorilla Glass Demo:


If it were not already clear from all the new devices making it to the street in recent years, these exhibitions certainly confirm that the future is getting increasingly digitized, personalized, ergonomic, and invasive! And the devices powering this future, allowing us to network and access untold amounts of information at any moment in our day, are looking more and more like something out of a William Gibson or Charles Stross novel!

If I weren’t such a sci-fi geek, I might be worried!

Sources: mobilegeeks.com, displaytawain.com, chaochao.com.tw

The Future is Here: The AR Bike Helmet

AR displays are becoming all the rage, thanks in no small part to Google Glass and other display glasses. And given the demand for and appeal of the technology, it seemed like only a matter of time before AR displays began providing real-time navigation for vehicles. Visor-mounted heads-up displays have been available for decades, but fully-integrated displays have yet to be produced.

LiveMap is one such concept: a helmet that superimposes information and directions onto a bike-helmet visor. Based in Moscow, this startup seeks to combine a head-mounted display, built-in navigation, and Siri-like voice recognition. The helmet will have a translucent color display projected onto the visor in the center of the field of vision, and a custom user interface – English-only at launch – based on Android.

This augmented reality helmet display includes a light sensor for adjusting image brightness to external light conditions, as well as an accelerometer, gyroscope, and digital compass for tracking head movements. Naturally, the company anticipated that concerns about rider safety would come up, hence the numerous safety features they have included.

For one, the digital helmet is cleverly programmed to display maps only when the rider’s speed is close to zero, to avoid distracting them at high speeds. And for the sake of hands-free control, it comes equipped with a series of voice commands for navigation and referencing points of interest. No texting and driving with this thing!
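The speed-gating behavior described above is simple enough to sketch in code. This is purely an illustration of the idea, not LiveMap’s actual firmware – the 5 km/h threshold and the names here are my own assumptions:

```python
# Hypothetical sketch of speed-gated HUD logic. The threshold value and
# overlay names are assumptions for illustration, not LiveMap's design.

SPEED_THRESHOLD_KMH = 5.0  # assumed cutoff for "close to zero"

def hud_element(speed_kmh: float) -> str:
    """Choose which overlay to render based on the rider's current speed."""
    if speed_kmh <= SPEED_THRESHOLD_KMH:
        return "full-map"    # rider is (nearly) stopped: safe to show the map
    return "turn-arrow"      # at speed: only a minimal directional cue

print(hud_element(0.0))   # full-map
print(hud_element(60.0))  # turn-arrow
```

The point of the design is that the richer (and more distracting) the overlay, the stricter the condition under which it is shown.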

So far, the company has built some prototype hardware and software for the helmet with the help of grants from the Russian government, and is also seeking venture capital. However, they have found little within their home country, and have been forced to crowdfund via an Indiegogo campaign. As CEO Andrew Artishchev wrote on LiveMap’s Indiegogo page:

Russian venture funds are not disposed to invest into hardware startups. They prefer to back up clones of successful services like Groupon, Airnb, Zappos, Yelp, Booking, etc. They are not interested in producing hardware either.

All told, they are seeking to raise $150,000 to make press molds for the helmet capsule. At present, they have raised $5,989 with 31 days remaining. Naturally, perks have been offered, ranging from thank-yous and a poster (for donations of $1 to $25) to a test drive in a major city (Berlin, Paris, Rome, Moscow, Barcelona) for $100, and a grand prize of a helmet itself for a donation of $1,500.

And of course, the company has announced some “stretch goals”, just in case people want to help them overshoot their target of $150,000. For $300,000, they will add Bluetooth with a headset profile to the helmet, and for $500,000, they will integrate a built-in high-resolution 13-megapixel photo and video camera. Good to have goals.

Personally, I’d help sponsor this, except for the fact that I don’t have a motorbike and wouldn’t know how to use it if I did. But a long drive down the Autobahn or along the Amber Route would be totally boss! Speaking of which, check out the company’s promotional video:

Sources: news.cnet.com, indiegogo.com

The Future is Here: The Cybernetic “Third Eye”

Achromatopsia is a rare form of color blindness that affects one in thirty-five thousand people. One such individual is Neil Harbisson, who was born with a genetic mutation that robbed him of the ability to see the world in anything other than black and white. But since 2004, he has been able to “hear” color, thanks to a body modification that has provided him with a cybernetic third eye.

This device is known as the “eyeborg”, and given that it constitutes a cybernetic enhancement, some have taken to calling Harbisson a genuine cyborg. For others, he’s an example of a posthuman era where cybernetic enhancements will be the norm. In either case, the way the eyeborg works was described as follows in a Nautilus article entitled “Encounters with the Posthuman”:

It transposes color into a continuous electronic beep, exploiting the fact that both light and sound are made up of waves of various frequencies. Red, at the bottom of the visual spectrum and with the lowest frequency, sounds the lowest, and violet, at the top, sounds highest. A chip at the back of Harbisson’s head performs the necessary computations, and a pressure-pad allows color-related sound to be conducted to Harbisson’s inner ear through the vibration of his skull, leaving his outer ears free for normal noise. Harbisson, who has perfect pitch, has learned to link these notes back to the colors that produced them.
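The transposition the article describes can be illustrated with a toy sketch. To be clear, this is not Harbisson’s actual sonochromatic scale (the source doesn’t detail it); it simply remaps the visible-light band linearly onto an assumed one-octave audible band, so that red – the lowest light frequency – sounds lowest and violet sounds highest:

```python
# Toy illustration of transposing light frequency into an audible tone.
# The output band (220-440 Hz) and the linear mapping are assumptions,
# not Harbisson's real scale; only the red-lowest/violet-highest ordering
# comes from the article quoted above.

VISIBLE_LOW_THZ, VISIBLE_HIGH_THZ = 430.0, 770.0   # red .. violet, approximate
AUDIO_LOW_HZ, AUDIO_HIGH_HZ = 220.0, 440.0         # assumed one-octave output band

def light_to_tone(light_thz: float) -> float:
    """Map a visible-light frequency (THz) to an audio frequency (Hz)."""
    t = (light_thz - VISIBLE_LOW_THZ) / (VISIBLE_HIGH_THZ - VISIBLE_LOW_THZ)
    t = min(max(t, 0.0), 1.0)  # clamp out-of-band input to the edges
    return AUDIO_LOW_HZ + t * (AUDIO_HIGH_HZ - AUDIO_LOW_HZ)

print(light_to_tone(430.0))  # red    -> 220.0 (lowest tone)
print(light_to_tone(770.0))  # violet -> 440.0 (highest tone)
```

A real device extending into infrared and ultraviolet, as Harbisson’s does, would of course widen the input band rather than clamp it.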

Harbisson’s brain doesn’t convert those sounds back into visual information, so he still doesn’t know exactly what the color blue looks like. But he knows what it sounds like. As he explained during a TED Talk, he used to dress based on appearances; now, he dresses in a way that sounds good. For example, the pink blazer, blue shirt, and yellow pants he was wearing for the talk formed a C major chord.

This may sound like an abstract replacement for actual color perception, but in many ways, the eyeborg surpasses human chromatic perception. For example, the device is capable of distinguishing 360 different hues, and Harbisson can also hear ultraviolet and infrared. So basically, you don’t need a UV index when you have a cybernetic third eye. All you need to do is take a look outside and instantly know whether you need sunblock or not.

These and other extensions of human abilities are what led Harbisson to found the Cyborg Foundation, a society that is working to create cybernetic devices that compensate for and augment human senses. These include the “fingerborg”, which replaces a finger with a camera; a “speedborg”, which conveys how fast an object is moving through earlobe vibrations; and – according to a promotional film – a “cybernetic nose” that allows people to perceive smells through electromagnetic signals.

In addition to helping people become cyborgs, the foundation claims to fight for cyborg rights. While this might sound like something out of science fiction, the recent backlash against wearers of Google Glass and the assault on Steve Mann are indications that such a society is increasingly necessary. In addition, Harbisson wants to find ways to fix devices like his eyeborg permanently to his skull, and recharge it with his blood.

For more information on the eyeborg and Project Cyborg, check out Harbisson’s website. Neil Harbisson’s Project Cyborg promotional video is also available on Vimeo. And be sure to watch the video of Neil Harbisson’s TED Talk:


Sources:
fastcoexist.com, nautil.us, eyeborgproject.com

The Future of the Classroom

As an educator, I find that technological innovation is a subject that comes up quite often. Not only are teachers expected to keep up with trends so they can adapt them into their teaching strategies and classrooms, and prepare children to use them; they are also forced to contend with how these trends are changing the very nature of education itself. If there was one thing we were told repeatedly in Teacher’s College, it was that times are changing, and we must change along with them.

And as history has repeatedly taught us, technological integration changes not only the way we do things, but the way we perceive things. As we become more and more dependent on digital devices, electronics, and wireless communications to give us instant access to a staggering amount of information, we have to be concerned with how this will affect and even erode traditional means of information transmission. After all, how can readings and lecture series be expected to keep kids’ attention when they are accustomed to lightning-fast videos, flash media, and games?


And let’s not forget the seminal infographic “Envisioning the Future of Educational Technology” by Envisioning Technology. One of many think tanks dedicated to predicting tech trends, it foresees that, in time, education will no longer require the classroom, or perhaps even teachers, because modern communications have made the locale and the lecturer virtually obsolete.

Pointing to such trends as Massive Open Online Courses, several forecasters foresee a grand transformation in the not-too-distant future where all learning happens online and in virtual environments. These would be based around “microlearning”: moments where people access the desired information through any number of means (e.g., a Google search) and educate themselves without the need for instruction or direction.

The technical term for this future trend is “socialstructured learning”: an aggregation of microlearning experiences drawn from a rich ecology of content and driven not by grades but by social and intrinsic rewards. This trend may very well be the future, but the foundations of this kind of education lie far in the past. Leading philosophers of education – from Socrates to Plutarch, Rousseau to Dewey – talked about many of these ideals centuries ago. The only difference is that today, we have a host of tools to make their vision a reality.

One such tool comes in the form of augmented reality displays, which are becoming more and more common thanks to devices like Google Glass, the EyeTap, or the Yelp Monocle. Simply point at a location, and you are able to obtain the information you want about various “points of interest”. Imagine, then, if you could do the same thing but instead receive historic, artistic, demographic, environmental, architectural, and other kinds of information embedded in the real world.

This is the reasoning behind projects like HyperCities, a project from USC and UCLA that layers historical information onto actual city terrain. As you walk around with your cell phone, you can point at a site and see what it looked like a century ago, who lived there, and what the environment was like. The Smithsonian also has a free app called Leafsnap, which allows people to identify specific species of trees and plants simply by snapping photos of their leaves.

In many respects, this reminds me of the impact these sorts of developments are having on politics and industry as well. Consider how quickly blogging and open-source information have been supplanting traditional media – like print news, TV, and news radio. Not only are these traditional sources unable to supply up-to-the-minute information compared to Twitter, Facebook, and live video streams, they are subject to censorship and regulations the others are not.

In terms of industry, platforms like Kickstarter and Indiegogo – through crowdsourcing, crowdfunding, and internet-based marketing – are making it possible to sponsor and fund research and development initiatives that would not have been possible a few years ago. Because of this, the traditional gatekeepers, a.k.a. corporate sponsors, are no longer required to dictate the pace and advancement of commercial development.

In short, we are entering a world that is becoming far more open, democratic, and chaotic. Many people fear that someone new will step into this environment to act as “Big Brother”, or that the pace of change and the nature of the developments will somehow give certain monolithic entities complete control over our lives. Personally, I think this is an outmoded fear, and that the real threat comes from the chaos that such open control and sourcing could lead to.

Is humanity ready for democratic anarchy – a.k.a. demarchy (a subject I am semi-obsessed with)? Do we even have the means to behave ourselves in such a free social arrangement? Opinion varies, and history is not the best indicator. Not only is it loaded with examples of bad behavior, but previous generations didn’t exactly have the same means we currently do. So basically, we’re flying blind… Spooky!

Sources: fastcoexist.com, envisioningtech.com

New Video Shows Google Glasses in Action

Google has released a new teaser video designed to expand Google Glass’ potential consumer base from the tech-savvy to what it refers to as “bold, creative individuals”. While the first video of their futuristic AR specs followed a New Yorker conducting mundane tasks through the city, this new clip hosts a dizzying array of activities designed to show just how versatile the product can be.

This includes people skydiving, horseback riding, catwalking at a fashion show, and performing ballet. Quite the mixed bag! All the while, we are shown what it would look like to do these activities while wearing a set of Google glasses. The purpose here is not only to show their functionality, but to give people a taste of what an augmented world looks like.

And based on product information, videos, and still pictures from the Google Glass homepage, it also appears that these new AR glasses will take advantage of the latest in flexible technology. Much like the new breeds of smartphones and PDAs that will be making the rounds later this year, these glasses are bendable and flexible, and therefore much more survivable than conventional glasses, which probably cost just as much!

Apparently, this is all in keeping with CEO and co-founder Larry Page’s vision of a world where Google products make their users smarter. In a 2004 interview, Page shared that vision with people, saying: “Imagine your brain is being augmented by Google.” These futurist sentiments may be a step closer now, thanks to a device that can provide on-the-spot information about whatever situation or environment we find ourselves in.

One thing is for sure, though: with the help of some AR specs, the middle man is effectively cut out. No longer are we required to aim our smartphones, perform image searches, or type things into a search engine (like Google!). Now we can just point, look, and wait for the glasses to identify what we are looking at and provide the requisite information.

Check out the video below: