When it comes to the future, it is clear that the concept of the “Internet of Things” holds sway. This idea – which states that all objects will someday be identifiable thanks to virtual representations on the internet – is at the center of a great deal of the innovation that drives our modern economy. Be it wearables, wireless, augmented reality, voice or image recognition, technologies that help us combine the real with the virtual are on the rise.
And so it’s really no surprise that innovators are looking to take augmented reality to the next level. The fruit of some of this labor is Blippar, a market-leading image-recognition and augmented reality platform. Lately, they have been working on a proof of concept for Google Glass showing that 3-D searches are doable. This sort of technology is already available in the form of apps for smartphones, but what is lacking is a central database that could turn any device into a visual search engine.
As Ambarish Mitra, the head of Blippar, stated, AR is already gaining traction among consumers thanks to some of the world’s biggest industrial players recognizing the shift to visually mediated lifestyles. Examples include IKEA’s interactive catalog, Heinz’s AR recipe booklet, and Amazon’s recent integration of the Flow AR technology into its primary shopping app. As this trend continues, we will need a Wikipedia-like database for 3-D objects that will be available to us anytime, anywhere.
Social networks and platforms like Instagram, Pinterest, Snapchat and Facebook have all driven a cultural shift in the way people exchange information. This takes the form of text updates, instant messaging, and uploaded images. But as the saying goes, “a picture is worth a thousand words”. In short, information absorbed through visual learning has a marked advantage over that which is absorbed through reading and text.
In fact, a recent NYU study found that people retain close to 80 percent of the information they consume through images versus just 10 percent of what they read. If we were able to regularly consume rich content from the real world through our devices, we could learn, retain, and express ideas and information more effectively. Naturally, there will always be situations where text-based search is the most practical tool, but many searches arise from real-world experiences that are easier to show than to describe.
Right now, text is the only option available, and oftentimes, people are unable to best describe what they are looking for. But an image-recognition technology that could turn any smartphone, tablet or wearable device into a scanner that could identify any 3-D object would vastly simplify things. Information could be absorbed in a more efficient way, using an object’s features and pulling up information from a rapidly learning engine.
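At its core, the kind of visual search engine described here comes down to extracting a feature vector from a photo and finding its nearest neighbor in a database of known objects. The sketch below is purely illustrative – the object names and feature vectors are made up, and real systems derive features from an image-recognition model rather than hand-coded numbers:

```python
import math

# Hypothetical database: each known object is represented by a feature
# vector (in practice produced by an image-recognition model).
DATABASE = {
    "ketchup bottle": [0.9, 0.1, 0.3],
    "soccer ball":    [0.2, 0.8, 0.5],
    "catalog page":   [0.4, 0.4, 0.9],
}

def cosine_similarity(a, b):
    """Similarity between two feature vectors, ranging from -1 to 1."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def visual_search(query_vector):
    """Return the database object whose features best match the query."""
    return max(DATABASE, key=lambda name: cosine_similarity(query_vector, DATABASE[name]))

# A photo of a ketchup bottle would yield a vector close to the stored one.
print(visual_search([0.85, 0.15, 0.25]))  # -> ketchup bottle
```

Scaling this lookup from three objects to every 3-D object in the world – the “Wikipedia-like database” mentioned above – is precisely the hard part.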
For better or for worse, wearable designs of consumer electronics have come to reflect a new understanding in the past few years. Basically, they have come to be extensions of our senses, much as Marshall McLuhan wrote in his 1964 book Understanding Media: The Extensions of Man. Google Glass is representative of this revolutionary change, a step in the direction of users interacting with the environment around them through technology.
Leading tech companies are already investing time and money into the development of their own AR products, and countless patents and research allocations are being made with every passing year. Facebook’s acquisition of virtual reality company Oculus VR is the most recent example, but even Samsung received a patent earlier this year for a camera-based augmented reality keyboard that is projected onto the fingers of the user.
Augmented reality has already proven itself to be a lucrative industry – with 60 million users and around half a billion dollars in global revenues in 2013 alone. It’s expected to exceed $1 billion annually by 2015, and combined with a Google Glass-type device, AR could eventually allow individuals to build vast libraries of data that will be the foundation for finding any 3-D object in the physical world.
In other words, the Internet of Things will become one step closer, with an evolving database of visual information at the base of it that is becoming ever larger and (in all likelihood) smarter. Oh dear, I sense another Skynet reference coming on! And in the meantime, enjoy this video that showcases Blippar’s vision of what this future of image overlay and recognition will look like:
Source: wired.com, dashboardinsight.com, blippar.com

The acquisition makes sense given that Silevo’s technology has the potential to reduce the cost of installing solar panels, SolarCity’s main business. But the decision to build a huge factory in the U.S. seems daring – especially given the recent failures of other U.S.-based solar manufacturers in the face of competition from Asia. Ultimately, however, SolarCity may have little choice, since it needs to find ways to reduce costs to keep growing.
Silevo isn’t the only company to produce high-efficiency solar cells. A version made by Panasonic is just as efficient, and SunPower makes ones that are significantly more so. But Silevo claims that its panels could be made as cheaply as conventional ones if production were scaled up from its current 32 megawatts to the factory Musk has planned, which is expected to produce 1,000 megawatts or more.
In a hilarious appearance on “Last Week Tonight” – John Oliver’s HBO show – guest Stephen Hawking spoke about some rather interesting concepts. Among these were the concepts of “imaginary time” and, more interestingly, artificial intelligence. And much to the surprise of Oliver, and perhaps more than a few viewers, Hawking was not too keen on the idea of the latter. In fact, his predictions were just a tad dire.
At worst, this could lead to the machines concluding that humanity is no longer necessary. At best, it would lead to an earthly utopia where machines address all our worries. But in all likelihood, it will lead to a future where the pace of technological change will be impossible to predict. As history has repeatedly shown, technological change brings with it all kinds of social and political upheaval. If it becomes a runaway effect, humanity will find it impossible to keep up.
Brainwaves can be used to control an impressive number of things these days: prosthetics, computers, quadcopters, and even cars. But recent research released by the Technische Universität München (TUM) in Germany indicates that they might also be used to fly an aircraft. Using a simple EEG cap that read the wearer’s brainwaves, a team of researchers demonstrated that thoughts alone could navigate a plane.
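The basic loop in such a system can be pictured as: sample the EEG signal, extract a feature per channel, and map the result to a flight command. The sketch below is an illustration with synthetic numbers, not TUM’s actual algorithm – the channel names, threshold, and RMS feature are all assumptions made for the example:

```python
import math

def band_amplitude(samples):
    """Root-mean-square amplitude of one EEG channel's samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def to_command(left_channel, right_channel, threshold=0.5):
    """Map relative channel activity to a crude steering command."""
    left = band_amplitude(left_channel)
    right = band_amplitude(right_channel)
    if abs(left - right) < threshold:
        return "hold course"
    return "bank left" if left > right else "bank right"

# Stronger activity on the right channel steers the plane right.
print(to_command([0.1, 0.2, 0.1], [1.5, 1.4, 1.6]))  # -> bank right
```

Real systems classify far subtler signal patterns than a simple amplitude difference, which is why clean signal processing – translating noisy brainwaves into precise control inputs – was the crux of the TUM work.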
It’s official: all of Tesla’s electric car technology is now available for anyone to use. Yes, after hinting that he might be willing to do so last weekend, Musk announced this week that his company’s patents are now open source. In a blog post on the Tesla website, Musk explained his reasoning. Initially, Musk wrote, Tesla created patents because of a concern that large car companies would copy the company’s electric vehicle technology and squash the smaller start-up.
But that turned out to be an unnecessary worry, as carmakers have by and large decided to downplay the viability and relevance of EV technology while continuing to focus on gasoline-powered vehicles. At this point, he thinks that opening things up to other developers will speed up electric car development. And after all, there’s something to be said about competition driving innovation.
And the move should come as no surprise. As the Hyperloop demonstrated, Musk is not above making grandiose gestures and allowing others to run with ideas he knows will be profitable. And as Musk himself pointed out in a webcast made after the announcement, his sister company SpaceX – which deals with the development of reusable space transports – has virtually no patents.
As it stands, auto emissions account for a large and growing share of greenhouse gas emissions. For decades now, the technology has been in development and the principles have all been known. However, whether it has been due to denial, intransigence, complacency, or all of the above, no major moves have been made to transition the auto industry towards cars that do not run on fossil fuels.
This past week, the Electronic Entertainment Expo (commonly referred to as E3) kicked off. This annual trade fair, which is presented by the Entertainment Software Association (ESA), is used by video game publishers, accessory manufacturers, and members of the computing industry to present their upcoming games and game-related merchandise. The festivities wrapped up this Friday, and the event was the source of some controversy and much speculation.
Microsoft and Sony each have big guns in the works – Halo 5: Guardians and Uncharted 4: A Thief’s End, respectively – but these are scheduled for release in 2015. However, with the brisk sales of the Xbox One and PlayStation 4 consoles, both companies have the luxury of taking their time with big games. Nintendo is not so fortunate, since the jump they made with the Wii U leaves them with a big gap that they apparently aren’t filling.
The software giant bumbled the Xbox One launch last year and alienated many gamers, mainly by focusing on TV and entertainment content instead of gaming and tying several unpopular policies to the console, which included restrictions on used games. The company eventually relented, but the Xbox One still came bundled with the voice- and motion-sensing Kinect peripheral and a price tag that was $100 higher than Sony’s rival PlayStation 4.
That was certainly the focus for Microsoft at E3. TV features weren’t even mentioned during the company’s one-and-a-half-hour press conference on Monday, with Microsoft instead talking up more than 20 upcoming games. As Mike Nichols, corporate vice-president of Xbox and studios marketing, said in an interview:
But the Oculus Rift, the new virtual reality headset whose maker was recently bought by Facebook for $2 billion, was undeniably the hottest thing on the show floor. And the demo booth, where people got to try it on and take it for a run, was booked solid throughout the expo. Sony also wowed attendees with demos of its own VR headset, Project Morpheus. And while the PlayStation maker’s effort isn’t as far along in development as the Oculus Rift, it does work and adds legitimacy to the VR field.
Legendary Japanese creator Hideo Kojima also had to defend the torture scenes in his upcoming Metal Gear Solid V: The Phantom Pain, starring Canadian actor Kiefer Sutherland (man loves torture!), which upset some viewers. Kojima said he felt the graphic scenes were necessary to explain the main character’s motivations, and that games will never be taken seriously as culture if they can’t deal with sensitive subjects.
If one were to draw any conclusions from this year’s E3, it would undoubtedly be that times are both changing and staying the same. From console gaming garnering less and less of the gaming market, to the second coming of virtual reality, there is a shift in technology underway which may or may not be good for the current captains of industry. At the same time, the struggle to maintain a large share of the market continues, with Sony, Microsoft and Nintendo at the forefront.
This past Thursday, the 2014 FIFA World Cup got underway. And all over the world, fans were glued to their television sets to watch the opening kickoff and the opening match between Croatia and Brazil. Unfortunately, astronauts Reid Wiseman, Steve Swanson, and Alexander Gerst – all of whom are serious “futbol” fans – were all stuck on board the ISS several hundred kilometers away.
And of course, Wiseman, Swanson and Gerst were sure to wish the teams and fans well in the competition before getting on with their own match. Not only is the resulting video a fun thing to watch, it is also a fine representation of the age we live in, where social media and high-speed communications allow everyone – even astronauts – to instantly communicate with the world.
The 2014 FIFA World Cup made history when it opened in São Paulo this week and a 29-year-old paraplegic man named Juliano Pinto kicked a soccer ball with the aid of a robotic exoskeleton. It was the first time a mind-controlled prosthetic was used in a sporting event, and it represented the culmination of months’ worth of planning and years’ worth of technical development.
The result of many years of development, the mind-controlled exoskeleton represents a breakthrough in restoring ambulatory ability to those who have suffered a loss of motion due to injury. First tested on monkeys, the exoskeleton relies on a series of wireless electrodes attached to the head that collect brainwaves, which then signal the suit’s metal braces to move. The braces are also stabilized by gyroscopes and powered by a battery carried by the kicker in a backpack.
The amber colors are due to the scattering of longer wavelengths of light by dust and pollution in our atmosphere. As astronomer Raminder Singh Samra of the H.R. MacMillan Space Centre in Vancouver said:
Scientists are not entirely sure what accounts for this optical illusion of a larger moon near the horizon, but they suspect it has something to do with the human mind trying to make sense of the moon’s proximity to more familiar objects like mountains, trees and houses in the foreground.
This may be why the moon appeared unusually large to some keen-eyed sky-watchers. As Samra explained:


The study was published online late last month in Lab on a Chip. The study’s senior author, Ali Khademhosseini – PhD, biomedical engineer, and director of the BWH Biomaterials Innovation Research Center – explained the challenge and their goal as follows:
They were also able to successfully embed these functional and perfusable microchannels inside a wide range of commonly used hydrogels, such as methacrylated gelatin or polyethylene glycol-based hydrogels. In the former case, the cell-laden gelatin was used to show how their fabricated vascular networks functioned to improve mass transport, cellular viability and cellular differentiation. Moreover, successful formation of endothelial monolayers within the fabricated channels was achieved.