First announced in 2012, the Qualcomm Tricorder XPRIZE has sought to bring together the best and brightest minds in the field to make science fiction science fact. In short, they sought to create a handheld device that would mimic some of the key functions of the iconic Star Trek tricorder, giving consumers access to reliable, easy-to-use diagnostic equipment any time, anywhere, with near-instantaneous results.
And now, the list of potential candidates has been whittled down to ten finalists. And while they might not quite live up to the fictitious original, the devices being developed are highly innovative and could represent a significant technological advance in the diagnostic domain. Qualcomm is offering a US$10 million prize purse in the hope of stimulating the research and development of precision diagnostic equipment.
In order to qualify for the prize, a successful scanner must comply with an ambitious set of parameters. First, the device must be able to reliably capture an individual’s heart rate, respiratory rate, blood pressure, and oxygen saturation in an easy-to-use and completely non-invasive fashion. It must also diagnose 13 core diseases – including pneumonia, tuberculosis and diabetes – along with three additional health conditions to be chosen by each team.
Each device varies widely in terms of appearance and composition, but that’s hardly surprising. The only limitation placed on the teams in terms of construction is that the entire apparatus must have a mass of less than 2.3 kg (5 lb). Given the wide range of tests the tricorder needs to carry out in order to capture the necessary health metrics, it is highly unlikely that any of the scanners will take the form of a single device.
The shortlisted entries include Scanadu (pictured above), a company that is currently developing an entire portfolio of handheld medical devices. Its circular sensor is programmed to measure blood pressure, temperature, ECG, oximetry, heart rate, and the breathing rate of a patient or subject – all from a simple, ten-second scan. Then there’s Aezon, an American team composed of student engineers from Johns Hopkins University, Maryland.
The Aezon device is made up of a wearable Vitals Monitoring Unit – designed to capture oxygen saturation, blood pressure, respiration rate and ECG metrics – and the Lab Box, a small portable device that uses microfluidic chip technology to diagnose diseases ranging from streptococcal pharyngitis to urinary tract infections by analyzing biological samples.
The other finalists include CloudDX, a Canadian company from Mississauga, Ontario; Danvantri, from Chennai, India; DMI, from Cambridge, Massachusetts; the Dynamical Biomarkers Group, from Zhongli City, Taiwan; Final Frontier Medical Devices, from Paoli, Pennsylvania; MESI Simplifying Diagnostics, from Ljubljana, Slovenia; SCANurse, from London, England; and the Zensor, from Belfast, Northern Ireland.
In all cases, the entrants are compact, lightweight and efficient devices that push the information obtained through their multiple sensors to a smartphone or tablet interface. This appears to be done with a proprietary smartphone app via the cloud, where the data can also be analyzed by a web application. Users will also be able to access their test results, look up information regarding possible symptoms and draw on big data to form a possible diagnosis.
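None of the teams have published their software, but the sensor-to-cloud flow described above is straightforward to picture. Here is a minimal Python sketch of the idea – the endpoint URL, field names, and device ID are all invented for illustration:

```python
import json
import urllib.request

# Hypothetical vitals payload of the kind a tricorder companion app
# might push to the cloud for later analysis by a web application.
reading = {
    "device_id": "tricorder-0001",  # made-up identifier
    "heart_rate_bpm": 72,
    "respiratory_rate_bpm": 14,
    "blood_pressure_mmhg": {"systolic": 118, "diastolic": 76},
    "oxygen_saturation_pct": 98,
}

# Build a JSON POST against a placeholder endpoint. The URL is invented;
# a real device would talk to its vendor's cloud service instead.
req = urllib.request.Request(
    "https://example.com/api/v1/vitals",
    data=json.dumps(reading).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("server responded:", resp.status)
except OSError as err:
    # The placeholder endpoint will reject or time out - expected here.
    print("no real endpoint behind the placeholder URL:", err)
```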
The next and final round of tests for the teams will take place next year, between November and December. The scanners will be put through a diagnostic competition involving 15-30 patients while judges evaluate each device’s consumer user experience. The final test will also assess the scanners’ adequacy in high-frequency data logging. The overall winners will be announced in early 2016 and awarded the lucrative $10 million prize purse to develop their product and bring it to market.
If such a device proves simple enough to allow for self-diagnosis by the general public, it could play a key part in alleviating the pressure on overburdened healthcare systems by cutting down on unnecessary hospital visits. It would also be a boon for personalized medicine, making regular hospital visits quicker, easier, and much less expensive. And let’s not forget, it’s science fiction and Trekkie-nerd gold!
Be sure to check out the video below that outlines the aims and potential benefits of the Qualcomm Tricorder XPRIZE challenge. And for more information on the finalists, and to see their promotional videos, check out the Qualcomm website here.
The wearable computing revolution that has been taking place in recent years has drawn in developers and tech giants from all over the world. Its roots run deep, dating back to the late 1960s with the Sword of Damocles concept and to the early 1980s work of Steve Mann. But in recent years, thanks to the development of Google Glass, the case for wearable tech has moved beyond hobbyists and enthusiasts and into the mainstream.
And with display glasses now accounted for, the latest boom in development appears to be centered on smartwatches and similar devices. These range from fitness trackers with just a few features to wrist-mounted versions of smartphones that boast the same constellation of functions and apps (email, phone, text, Skype, etc.). And as always, the big-name companies are coming forward with their own concepts and designs.
First, there’s the much-anticipated Apple iWatch, which is still in the rumor stage. The company has been working on this project since late 2012, but has begun accelerating the process as it tries to expand its family of mobile devices to the wrist. Apple has already started trademarking the name in a number of countries in preparation for a late-2014 launch, perhaps in October, with the device entering mass production in July.
And though it’s not yet clear what the device will look like, several mockups and proposals have been leaked. And recent reports from sources like Reuters and The Wall Street Journal have pointed towards multiple screen sizes and price points, suggesting an array of different band and face options in various materials to position it as a fashion accessory. It is also expected to include a durable sapphire crystal display, produced in collaboration with Apple partner GT Advanced.
While the iWatch will perform some tasks independently using the new iOS 8 platform, it will be dependent on a compatible iOS device for functions like receiving messages, voice calls, and notifications. It is also expected to feature wireless charging capabilities, advanced mapping abilities, and possibly near-field communication (NFC) integration. But as an added bonus, as indicated by Apple’s recent filing for patents associated with its “Health” app, it will include biometric and health sensors.
Along with serving as a companion device to the iPhone and iPad, the iWatch will be able to measure a number of different health-related metrics. Consistent with the features of a fitness band, these will include things like steps taken, calories burned, sleep quality, heart rate, and more. The iWatch is said to include 10 different sensors to track health and fitness, providing an overall picture of health and making the health-tracking experience more accessible to the general public.
Apple has reportedly designed iOS 8 with the iWatch in mind, and the two are said to be heavily reliant on one another. The iWatch will likely take advantage of the “Health” app introduced with iOS 8, which may display all of the health-related information gathered by the watch. Currently, Apple is gearing up to begin mass production on the iWatch, and has been testing the device’s fitness capabilities with professional athletes such as Kobe Bryant, who will likely go on to promote the iWatch following its release.
Not to be outdone, Google launched its own smartwatch platform – known as Android Wear – at this year’s I/O conference. Android Wear is the company’s software platform for linking smartwatches from companies including LG, Samsung and Motorola to Android phones and tablets. A preview of Wear was introduced this spring, but the I/O conference provided more details on how it will work and made it clear that the company is investing heavily in the notion that wearables are the future.
Android Wear takes much of the functionality of Google Now – an intelligent personal assistant – and uses the smartwatch as a home for receiving notifications and context-based information. When traveling, for instance, Android Wear will push relevant flight, weather and other information directly to the watch, where the user can tap and swipe their way through it and use embedded prompts and voice control to take further actions, like dictating a note with reminders to pack rain gear.
Google had already revealed most of what Wear will be able to do in its preview, but its big on-stage debut at I/O was largely about getting app developers to buy into the platform and to design with a peripheral wearable interface in mind. Apps can be designed to harness different Android Wear “intents.” For example, the Lyft app takes advantage of the “call me a car” intent and can be set as the default means of hailing a ride when you tell your smartwatch to find you a car.
Google officials also claimed at I/O that the same interface behind Android Wear will power their new Android Auto and Android TV, two other integrated services that allow users to interface with their car and television via a mobile device. So don’t be surprised if you see someone unlocking or starting their car by talking into their watch in the near future. The first Android Wear watches – the Samsung Gear Live and the LG G Watch – are available to pre-order, and the round-faced Motorola Moto 360 is expected to come out later this summer.
All of these steps in integration and wearable technology are signs of an emergent trend, one where just about everything from personal devices to automobiles and even homes are smart and networked together – thus giving rise to a world where everything is remotely accessible. This concept, otherwise known as the “Internet of Things”, is expected to become the norm in the next 20 years, and will include other technologies like display contacts and mediated (aka. augmented) reality.
And be sure to check out this concept video of the Apple iWatch:
When it comes to the future, it is clear that the concept of the “Internet of Things” holds sway. This idea – that all objects will someday be identifiable thanks to virtual representations on the internet – is at the center of a great deal of the innovation that drives our modern economy. Be it wearables, wireless, augmented reality, voice or image recognition, the technologies that help us combine the real with the virtual are on the rise.
And so it’s really no surprise that innovators are looking to take augmented reality to the next level. One fruit of this labor is Blippar, a market-leading image-recognition and augmented reality platform. Lately, the company has been working on a proof of concept for Google Glass showing that 3-D searches are doable. This sort of technology is already available in the form of apps for smartphones, but what’s lacking is a central database that could turn any device into a visual search engine.
As Ambarish Mitra, the head of Blippar, has stated, AR is already gaining traction among consumers thanks to some of the world’s biggest industrial players recognizing the shift to visually mediated lifestyles. Examples include IKEA’s interactive catalog, Heinz’s AR recipe booklet and Amazon’s recent integration of the Flow AR technology into its primary shopping app. As this trend continues, we will need a Wikipedia-like database for 3-D objects that is available to us anytime, anywhere.
Social networks and platforms like Instagram, Pinterest, Snapchat and Facebook have all driven a cultural shift in the way people exchange information. This takes the form of text updates, instant messaging, and uploaded images. But as the saying goes, “a picture is worth a thousand words”. In short, information absorbed through visual learning has a marked advantage over that which is absorbed through reading and text.
In fact, a recent NYU study found that people retain close to 80 percent of the information they consume through images, versus just 10 percent of what they read. If people were able to regularly consume rich content from the real world through their devices, we could learn, retain, and express ideas and information more effectively. Naturally, there will always be situations where text-based search is the most practical tool, but most searches arise from real-world experiences.
Right now, text is the only option available, and oftentimes people are unable to describe well what they are looking for. But an image-recognition technology that could turn any smartphone, tablet or wearable device into a scanner capable of identifying any 3-D object would vastly simplify things. Information could be absorbed far more efficiently, using an object’s features to pull up information from a rapidly learning engine.
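To make that concrete, here is a toy Python sketch of the matching step at the heart of any visual search engine: an image is reduced to a feature vector, which is compared against a database of known objects. The object names and descriptors below are invented, and real systems (Blippar’s included) use learned features and indexes at vastly larger scale:

```python
import numpy as np

# Toy database mapping object names to (invented) feature vectors.
database = {
    "ketchup bottle": np.array([0.9, 0.1, 0.3]),
    "bookshelf":      np.array([0.2, 0.8, 0.5]),
    "armchair":       np.array([0.4, 0.7, 0.9]),
}

def cosine_similarity(a, b):
    # Similarity between two feature vectors, ignoring their magnitudes.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query_features):
    # Return the known object whose descriptor best matches the query.
    return max(database, key=lambda name: cosine_similarity(query_features, database[name]))

# A query descriptor extracted (hypothetically) from a camera frame.
print(identify(np.array([0.85, 0.15, 0.35])))  # -> "ketchup bottle"
```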
For better or for worse, the designs of wearable consumer electronics have come to reflect a new understanding in the past few years. Basically, they have come to be extensions of our senses, much as Marshall McLuhan wrote in his 1964 book Understanding Media: The Extensions of Man. Google Glass is representative of this revolutionary change, a step in the direction of users interacting with the environment around them through technology.
Leading tech companies are already investing time and money into the development of their own AR products, and countless patents and research allocations are being made with every passing year. Facebook’s acquisition of the virtual reality company Oculus VR is the most recent example, but even Samsung received a patent earlier this year for a camera-based augmented reality keyboard that is projected onto the fingers of the user.
Augmented reality has already proven itself to be a lucrative industry – with 60 million users and around half a billion dollars in global revenues in 2013 alone. It’s expected to exceed $1 billion annually by 2015, and combined with a Google Glass-type device, AR could eventually allow individuals to build vast libraries of data that will be the foundation for finding any 3-D object in the physical world.
In other words, the Internet of Things will become one step closer, with an evolving database of visual information at the base of it that is becoming ever larger and (in all likelihood) smarter. Oh dear, I sense another Skynet reference coming on! And in the meantime, enjoy this video that showcases Blippar’s vision of what this future of image overlay and recognition will look like:
The Consumer Electronics Show has been in full swing for two days now, and already the top spots for the most impressive technologies of the year have been selected. Granted, opinion is divided, and there are many top contenders, but between displays, gaming, smartphones, and personal devices, there’s been no shortage of technologies to choose from.
And having sifted through some news stories from the front lines, I have decided to compile a list of what I think the most impressive gadgets, displays and devices of this year’s show were. And as usual, they range from the innovative and creative, to the cool and futuristic, with some quirky and fun things holding up the middle. And here they are, in alphabetical order:
As an astronomy enthusiast, and someone who enjoys hearing about new and innovative technologies, I found Celestron’s Cosmos 90GT WiFi Telescope to be quite the story. Hoping to make astronomy more accessible to the masses, Celestron has built the first telescope that can be controlled by an app over WiFi. Once paired, the system guides stargazers through the cosmos as directions flow from the app to the motorized scope base.
In terms of computing, Lenovo chose to breathe some new life into the oft-declared dying industry of desktop PCs this year with the unveiling of its Horizon 2. Its 27-inch touchscreen can go fully horizontal, becoming both a gaming and media table. The large touch display has a novel pairing technique that lets you drop multiple smartphones directly onto the screen, as well as group, share, and edit photos from them.
Next up is the latest set of display glasses to take the world by storm, courtesy of the Epson Smart Glass project. Ever since Google Glass was unveiled in 2012, other electronics and IT companies have been racing to produce a similar product, one that can make heads-up display tech, WiFi connectivity, internet browsing, and augmented reality portable and wearable.
Epson was already moving in that direction back in 2011 when it released its BT-100 augmented reality glasses. And now, with the Moverio BT-200, the company has clearly stepped up its game. In addition to being 60 percent lighter than the previous generation, the system consists of two parts – a pair of glasses and a control unit.
The glasses feature a tiny LCD-based projection lens system and optical light guide, which project digital content onto a transparent virtual display (960 x 540 resolution), along with a camera for video and stills capture or AR marker detection. With the incorporation of third-party software, and taking advantage of the internal gyroscope and compass, a user can even create 360-degree panoramic environments.
At the other end, the handheld controller runs on Android 4.0, has a textured touchpad control surface, built-in Wi-Fi connectivity for video content streaming, and up to six hours of battery life.
The BT-200 smart glasses are currently being demonstrated at Epson’s CES booth, where visitors can experience a table-top virtual fighting game with AR characters, a medical imaging system that allows wearers to see through a person’s skin, and an AR assistance app to help perform unfamiliar tasks.
This year’s CES also featured a ridiculous number of curved screens. Samsung seemed particularly proud of its garish, curved LCD TVs, and even booked headliners like Mark Cuban and Michael Bay to promote them. In the latter case, this didn’t go so well. However, one curved-screen device actually seemed appropriate – the LG G Flex 6-inch smartphone.
When it comes to massive curved screens, only one person can benefit from the sweet spot of the display – that focal point in the center where they feel enveloped. But in the case of the LG G Flex, the subtle bend in the screen allows for less light intrusion from the sides, and it distorts your own reflection just enough to obscure any distracting glare. Granted, it’s not exactly the flexible tech I was hoping to see, but it’s something!
In the world of gaming, two contributions made a rather big splash this year. The first was PlayStation Now, a game-streaming service just unveiled by Sony that lets gamers instantly play titles on a PS3, PS4, or PS Vita without downloading them, always in the most updated version. Plus, it gives users the ability to rent titles they’re interested in, rather than buying the full copy.
Then there was the Maingear Spark, a gaming desktop designed to run Valve’s gaming-centric SteamOS (and Windows) that measures just five inches square and weighs less than a pound. This is a big boon for gamers, who usually have to deal with gaming desktops that are bulky, heavy, and don’t fit well on an entertainment stand next to other gaming devices, an HD box, and anything else you might have there.
Next up is a device that brings consumers into the world of iris identification, which is becoming all the rage. It’s known as the Myris Eyelock, a simple, straightforward gadget that takes a quick video of your eyeball, has you log in to your various accounts, and then automatically signs you in, without you ever having to type in your password.
So basically, you can utilize this new biometric ID system by carrying your iris scan on your person wherever you go. Then, rather than going through the process of remembering multiple (and no doubt complicated) passwords – a real burden as identity theft becomes increasingly problematic – you can log in with a marker that leaves no doubt as to your identity. And at less than $300, it’s an affordable option, too.
And what would an electronics show be without a little drone technology? The Parrot MiniDrone was this year’s crowd-pleaser: a palm-sized, camera-equipped, remotely piloted quad-rotor. This model has the added feature of two six-inch wheels, which afford it the ability to zip across floors, climb walls, and even move across ceilings! A truly versatile personal drone.
Another very interesting display this year was the Scanadu Scout, the world’s first real-life tricorder. First unveiled back in May of 2013, the Scout represents the culmination of years of work by Scanadu, a startup based at the NASA Ames Research Center, to produce the world’s first non-invasive medical scanner. And this year, the company chose to showcase it at CES and let people test it out on themselves and each other.
All told, the Scanadu Scout can measure a person’s vital signs – including their heart rate, blood pressure, and temperature – without ever touching them. All that’s needed is to place the scanner above your skin, wait a moment, and voila! Instant vitals. The sensor will begin a pilot program with 10,000 users this spring, the first key step toward FDA approval.
And of course, no CES would be complete without a toy robot or two. This year, it was the WowWee MiP (Mobile Inverted Pendulum) that put on a big show. Basically, it is an eight-inch bot that balances itself on dual wheels (like a Segway), can be controlled by hand gestures or a Bluetooth-connected phone, or can roll around autonomously.
Its sensitivity to commands and its ability to balance while zooming across the floor are super impressive. While on display, several were shown carrying trays around (sometimes with another MiP on the tray). And, a real crowd-pleaser, the MiP can even dance. Always got to throw in something for the retro ’80s crowd, the people who grew up with the SICO robot, Jinx, and other friendly automatons!
But perhaps most impressive of all, at least in my humble opinion, was the display of the prototype iOptik AR contact lens. While most of the attention paid to high-tech eyewear of late has focused on wearables like Google Glass, other developers have been steadily working towards display devices that are small enough to wear over your pupil.
Developed by the Washington-based company Innovega with support from DARPA, the iOptik is a heads-up display built into a set of contact lenses. And this year, the first fully functioning prototypes are being showcased at CES. Acting as a micro-display, the glasses project a picture onto the contact lens, which works as a filter to separate the real-world from the digital environment and then interlaces them into a single image.
Embedded in the contact lenses are micro-components that enable the user to focus on near-eye images. Light projected by the display (built into a set of glasses) passes through the center of the pupil and then works with the eye’s regular optics to focus the display on the retina, while light from the real-life environment reaches the retina via an outer filter.
This creates two separate images on the retina, which are then superimposed to create one integrated image – augmented reality. It also offers an alternative to traditional near-eye displays, which create the illusion of an object in the distance so as not to hinder regular vision. At present, the iOptik still requires clearance from the FDA before it becomes commercially available, which may come in late 2014 or early 2015.
Well, it’s certainly been an interesting year, once again, in the world of electronics, robotics, personal devices, and wearable technology. And it manages to capture the pace of change that is increasingly coming to characterize our lives. According to the tech site Mashable, this year’s show was characterized by televisions with 4K resolution, wearables, biometrics, the internet of personalized and data-driven things, and of course, 3-D printing and imaging.
And as always, there were plenty of videos showcasing tons of interesting concepts and devices that were featured this year. Here are a few that I managed to find and thought were worthy of passing on:
It sounds like something out of science fiction: using the electromagnetic signals that already surround us to power our devices. But given the concerns surrounding e-waste and toxic materials, anything that could make an impact by eliminating batteries is a welcome idea. And if you live in an urban environment, chances are you’re already cloaked in TV and radio waves that are invisible to the naked eye.
And that’s precisely what researchers at the University of Washington have managed to do. Nine months ago, Joshua Smith (an associate professor of electrical engineering) and Shyam Gollakota (an assistant professor of computer science and engineering) started investigating how one might harvest energy from TV signals to communicate, and eventually designed two card-like devices that can swap data without using batteries.
Running on what the researchers have coined “ambient backscatter,” the device works by capturing existing energy and reflecting it, like a transistor. Currently, our communications and computing devices require a lot of power to function, even when running on batteries. But as Gollakota explains, all the objects around us are already reflecting energy that could be harnessed:
Every object around you is reflecting signals. Imagine you have a desk that is wooden, and it’s reflecting signals, but if you actually make [the desk] iron, it’s going to reflect a much larger amount of energy. We’re trying to replicate that on an analog device.
The new technique is still in its infancy, but it shows great promise. The device transfers data at a rate of one kilobit per second and can only transmit at distances under 2.5 feet. Still, it has exciting implications, they say, for the “Internet of Things.” The immediate uses for this technology – in everything from smartphones to tablets and MP3 players – are certainly impressive.
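The principle is easy to demonstrate in simulation. The Python sketch below models a tag sending bits by either reflecting (1) or absorbing (0) an ambient carrier wave, with the receiver decoding bits from changes in average signal energy. The numbers are arbitrary; this illustrates the modulation idea, not the team’s actual hardware:

```python
import numpy as np

np.random.seed(0)                       # reproducible noise

# Bits the battery-free tag wants to send.
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
samples_per_bit = 100

# The ambient carrier (e.g. a TV broadcast) that is always present.
t = np.arange(len(bits) * samples_per_bit)
carrier = np.sin(2 * np.pi * 0.2 * t)

# Tag: reflect the carrier strongly for a 1, attenuate it for a 0.
reflect = np.repeat(bits, samples_per_bit)
received = carrier * (0.5 + 0.5 * reflect) + 0.05 * np.random.randn(t.size)

# Receiver: average energy per bit window, then threshold at the midpoint.
energy = (received ** 2).reshape(len(bits), samples_per_bit).mean(axis=1)
decoded = (energy > energy.mean()).astype(int)

print("sent:   ", bits)
print("decoded:", decoded)              # should match the sent bits
```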
But on their website, the team provides some added examples of applications they foresee taking advantage of this technology. Basically, they envision an age when backscatter devices can be embedded in just about anything, from car keys and appliances to structural materials and buildings, allowing people to find these objects if they get lost, or to be alerted when there’s some kind of irregularity.
As Smith claimed on the team’s website:
I think the Internet of things looks like many objects that kind of have an identity and state–they can talk to each other. Ultimately, I think people want to view this information… That’s part of the vision. There will be information about objects in the physical world that we can access.
The energy harvester they used for the paper – which they presented at the Association for Computing Machinery’s Special Interest Group on Data Communication (SIGCOMM) in Hong Kong – requires 100 microwatts to turn on, but the team says it has a design that can run on as little as 15 microwatts. Meanwhile, the technique is already capable of communicating location, identity, and sensor data, and its range is sure to increase as efficiency improves.
The University of Washington presentation took home “best paper” in Hong Kong, and researchers say they’re excited to start exploring commercial applications. “We’ve had emails from different places–sewer systems, people who have been constrained by the fact that you need to recharge things,” Gollakota says. “Our goal for next six months is to increase the data rate it can achieve.”
Combined with Apple’s development of wireless recharging, this latest piece of technology could be ushering in an age of wireless and remotely powered devices. Everything from smartphones, tablets, implants, and even household appliances could all be running on the radio waves that are already permeating our world. All that ambient radiation we secretly worry is increasing our risks of cancer would finally be put to good use!
And in the meantime, enjoy this video of the University of Washington’s backscatter device in action:
It was only a matter of time, I guess. With all the improvements being made in biometrics and biotechnology – giving patients and doctors the means to monitor their vitals, blood pressure, glucose levels and the like with tiny devices – and all the talk of how this looked like something out of science fiction, we really should have known it wouldn’t be long before someone took it upon themselves to build a device right out of Star Trek.
It’s known as the Scanadu Scout, a non-invasive medical device that is capable of measuring your vitals simply by being held up to your temple for a mere 10 seconds. The people responsible for its creation are a startup named Scanadu, a group of researchers and medtech enthusiasts based at the NASA Ames Research Center. For the past two years, they have been seeking to create the world’s first handheld medical scanner, and with the production of the Scout, they have their prototype!
All told, the device is able to track pulse transit time (to measure blood pressure), temperature, ECG, oximetry, heart rate, and the breathing rate of a patient or subject. A 10-second scan of a person’s temple yields data with a 99% accuracy rate, which can then be transmitted automatically via Bluetooth to the user’s smartphone, tablet or mobile device.
The device has since been upgraded from its original version and now runs on a 32-bit processor (up from the original 8-bit). And interestingly enough, the Scout now runs on Micrium, the operating system that NASA uses for Mars sample analysis on the Curiosity rover. The upgrade became necessary when Scanadu co-founder Walter De Brouwer decided to add an extra feature: the ability to remotely trigger new algorithms and plug in new sensors (like a spectrometer).
One would think that working with NASA is affecting his thinking. But as De Brouwer points out, the more information the machine is capable of collecting, the better it will be at monitoring your health:
If we find new algorithms to find relationships between several readings, we can use more of the sensors than we would first activate. If you know a couple of the variables, you could statistically predict that something is going to happen. The more data we have, the more we can also predict, because we’re using data mining at the same time as statistics.
One of the Scout’s cornerstone algorithms, for example, allows it to read blood pressure without the inflating cuff that we’ve all come to know and find so uncomfortable. In the future, Scanadu could discover an algorithm that connects age, weight, blood pressure, and heart rate with some other variable, and then be able to make recommendations.
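Scanadu hasn’t published that algorithm, but the cuffless approach rests on a well-documented physiological relationship: pulse transit time (PTT) falls as blood pressure rises. Here is a minimal Python sketch of the calibrate-then-predict idea, using entirely made-up calibration numbers:

```python
import numpy as np

# Hypothetical calibration pairs: pulse transit time (ms) measured by the
# device vs. systolic pressure (mmHg) from a reference cuff.
ptt_ms = np.array([210.0, 225.0, 240.0, 255.0, 270.0])
systolic_mmhg = np.array([138.0, 130.0, 122.0, 115.0, 108.0])

# Fit a simple linear model: systolic = a * ptt + b. Note the negative
# slope - longer transit times correspond to lower pressures.
a, b = np.polyfit(ptt_ms, systolic_mmhg, deg=1)

def estimate_systolic(ptt):
    """Cuffless systolic estimate (mmHg) from a measured PTT (ms)."""
    return a * ptt + b

print(round(estimate_systolic(232.0), 1))  # estimate for a new reading
```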
Everyone who pre-orders a Scout has their data sent to a cloud service, where Scanadu will collect it in a big file for the FDA. Anyone who opts in will also gain access to the data of other users who have elected to share their vitals. De Brouwer explains that this is part of the product’s early mission to test the parameters of information sharing and cloud-based medical computing:
It’s going to be a consumer product in the future, but right now we are positioning it as a research tool so that it can be used to finalize the design and collect data to eventually gain regulatory approval. In the end, you have to prove how people are going to use the device, how many times a day, and how they are going to react to the information.
In the future, De Brouwer imagines this kind of shared information could be used for population scanning, kind of like Google Flu Trends does, except with data being provided directly by individuals. The focus will also be much more local, with people using the Scout’s stats to see whether their child, who suddenly has flu symptoms, is alone or whether other kids at their school are also sick. Pandemics and outbreaks of fatal diseases could be tracked in the same way and people forewarned.
Naturally, this raises some additional questions. With it now possible to share and communicate medical information so easily between devices, from people to their doctors, and stored within databases of varying accessibility, there is the ongoing issue of privacy. If medical information can be actively shared in real-time or with the touch of a button, how hard will it be for third parties to gain access to it?
The upsides are clear: a society where health information is easily accessible is likely to avoid outbreaks of infectious disease and be able to contain pandemics with greater ease. But on the flip side, hackers are likely to find ways to access and abuse this information, since it will be stored in networked places where determined people can get at it. And naturally, there are plenty of people who will feel squeamish or downright terrified about the FDA having access to up-to-the-moment medical info on them.
It’s the age of cloud computing, wireless communications, and information sharing my friends. And much as people feel guarded about their personal information now, this is likely to take on extra dimensions when their personal medical info is added to the mix. Not a simple or comfortable subject.
But while I’ve still got you here, no doubt contemplating the future of medicine, take a look at this video of the Scanadu Scout in action:
It’s one of the cornerstones of the coming technological revolution: machinery that can assemble, upgrade, and/or fix itself without the need for regular maintenance. Such devices would forever put an end to the hassles of repairing computers, replacing components, or having to buy new machines when something vital broke down. And thanks to researchers at Caltech, we now have a microchip that accomplishes one of these feats: namely, fixing itself.
The chip is the work of Ali Hajimiri and a group of Caltech researchers who have managed to create an integrated circuit that, after taking severe damage, can reconfigure itself in such a way that it remains functional. This is made possible thanks to a secondary processor that jumps into action when parts of the chip fail or become compromised. The chip is also able to tweak itself on the fly, and can be programmed to prioritize either energy savings or performance speed.
In addition, the chip contains 100,000 transistors, as well as various sensors that give it the ability to monitor its overall health. The microchip is comparable to both a power amplifier and a microprocessor – the kinds of circuits that process signal transmissions, such as those found in mobile phones, and carry out complex functions. This combined nature is what gives it its self-monitoring ability and ensures that it can keep working where other chips would simply stop.
To test the self-healing, self-monitoring attributes of their design, Hajimiri and his team blasted the chip with a laser, effectively destroying half its transistors. It only took the microchip a handful of milliseconds to deal with the loss and move on, which is an impressive feat by any standard. On top of that, the team found that a chip that wasn’t blasted by lasers was able to increase its efficiency by reducing its power consumption by half.
Granted, the chip can only fix itself if the secondary processor and at least some of the parts remain intact, but the abilities to self-monitor and tweak itself are still of monumental importance. Not only can the chip monitor itself in order to provide the best possible performance, it can also ensure that it will continue to provide a proper output of data if some of the parts do break down.
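As a rough mental model of what that secondary processor does – hedged, since the Caltech chip implements this in hardware rather than software – consider this Python sketch of a watchdog that polls health sensors and routes work around failed units:

```python
# Conceptual model only: a watchdog ("secondary processor") polls on-chip
# sensors and reconfigures the active set of units when some of them fail.
class SelfHealingChip:
    def __init__(self, num_units=8):
        self.healthy = [True] * num_units      # per-unit health flags
        self.active = list(range(num_units))   # units currently doing work

    def damage(self, indices):
        # Simulate damage, e.g. transistors destroyed by a laser blast.
        for i in indices:
            self.healthy[i] = False

    def sensor_readout(self):
        # The on-chip sensors report which units are faulty.
        return {i for i, ok in enumerate(self.healthy) if not ok}

    def reconfigure(self):
        # Drop failed units so the chip keeps working with what remains.
        failed = self.sensor_readout()
        self.active = [i for i in self.active if i not in failed]
        return self.active

chip = SelfHealingChip()
chip.damage([1, 4, 5])
print(chip.reconfigure())  # -> [0, 2, 3, 6, 7]: degraded but functional
```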
Looking ahead, Hajimiri has indicated that the technology behind this self-healing circuit can be applied to any other kind of circuit. This is especially good news for people with portable computers, laptops and other devices who have watched them break down because of a hard bump. Not only would this save consumers a significant amount of money on repairs, replacement, and data recovery, it is pointing the way towards a future where embedded repair systems are the norm.
And who knows? Someday, when nanomachines and self-assembling structures are the norm, we can look forward to devices that can be totally smashed, crushed and shattered, but will still manage to come back together and keep working. Hmm, all this talk of secondary circuits and self-repairing robots. I can’t help but get the feeling we’ve seen this somewhere before…
With recent advances being made in flexible electronics, researchers are finding more and more ways to adapt medical devices to the human body. These include smart tattoos, stretchable patches for organs, and even implants. But what of band-aids? Aren’t they about due for an upgrade? Well, as it happens, a team of chemical engineers at Northeastern University is working towards just that.
Led by associate professor Ed Goluch, the team is developing a “smart bandage” that will not only dress wounds, but can monitor infections and alert patients to their existence. Based around an electrochemical sensor capable of detecting Pseudomonas aeruginosa – a common bacterium that can kill if untreated – this bandage could very well prove to be the next big step in first aid.
According to Goluch, the idea came to him while he was studying how individual bacterial cells behave, when he and his colleagues began talking about building other types of sensors:
I was designing sensors to be able to track individual cells, measure how they produce different toxins and compounds at the single-cell level and see how they change from one cell to another and what makes one cell more resistant to an antibiotic.
Naturally, additional research is still needed before smart band-aids of this kind will be able to detect other forms of infection. But Goluch and his colleagues are quite confident, claiming that they are already adapting their device to detect the specific molecules emitted by Staphylococcus – the bacteria responsible for staph infections.
So far, Goluch and his team have tested the system with bacteria cultures and sensors. The next step, which he hopes to begin fairly soon, will involve human and animal testing. The professor isn’t sure exactly how much the sensor would cost when commercialized, but he believes “it’s simple enough that you’d be able to integrate it in a large volume fairly cheap.”
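The alert logic itself can be simple once the sensor chemistry works: P. aeruginosa secretes pyocyanin, a redox-active molecule, which is what makes an electrochemical readout possible. The Python sketch below shows the kind of threshold check a bandage’s firmware might run, with entirely invented numbers:

```python
# Hypothetical calibration values - real thresholds would come from
# clinical testing of the actual sensor.
BASELINE_NA = 12.0    # clean-wound sensor current, in nanoamps
ALERT_FACTOR = 3.0    # flag infection when the signal exceeds 3x baseline

def check_wound(current_na):
    """Return an alert string based on the electrochemical reading."""
    if current_na > ALERT_FACTOR * BASELINE_NA:
        return "ALERT: possible P. aeruginosa infection detected"
    return "No infection signature detected"

print(check_wound(10.5))   # normal reading
print(check_wound(48.2))   # elevated reading triggers the alert
```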
At this rate, I can foresee a future where all first-aid devices are small patches that are capable of gathering data on your wounds, checking your vitals, and communicating all this information directly to your PDA or tablet, your doctor, or possibly your stretchable brain implant. I tell ya, it’s coming, so keep your apps up to date!
Twenty-five years ago, Los Angeles magazine envisioned what the world would look like in the current decade. And unlike Blade Runner, they avoided the cool but standard science fiction allegories – like massive billboards, flying cars and sentient robots – and went straight for the things that seemed entirely possible by contemporary standards.
The cover story of the magazine’s April 3, 1988 edition showed a futuristic downtown L.A. crisscrossed with electrically charged, multi-tiered freeways permeated by self-driving cars. The article itself then imagined a day in the life of the fictional Morrow family of the L.A. suburb Granada Hills, as “profiled” by the magazine in 2013 by science fiction writer Nicole Yorkin.
Ironically, the magazine did not envision that it would one day go out of business, or that print media would one day be lurching towards extinction. Nevertheless, the fictional article and the world it detailed make for interesting reading. Little wonder, then, that earlier this month the LA Times, along with an engineering class at USC, revisited the archives to assess what the article predicted correctly versus incorrectly.
Together, professor Jerry Lockenour and his class made a list of the hits and misses, and what they found paints a very interesting picture of how we predict the future and how its realization so often differs from what we expect. Of the major predictions to be found in the LA of 2013, as well as in the lives of the Morrow family (get it?), here is what they got right:
Smart-Houses: In the article, the Morrows are said to begin every morning when their “Smart House” automatically turns on. This consists of all the appliances activating and preparing their breakfast, and no doubt turning on all the environmental controls and opening the shades to get the temperature and ambient lighting just right.
While this isn’t the norm for the American family yet, the past few years have proved a turning point for home devices hooking up to the Internet, becoming more programmable and serving our daily needs. And plans are well under way to find a means of networking them all together so they function as one “smart” unit.
Self-Driving Cars: The writers of the article predicted that by 2013, cars would come standard with computers that control most of the settings, along with GPS systems for navigation. They also predicted self-driving cars, which Google and Chevy are busy working on. In addition to using clean, alternative energy sources, these cars are expected to be able to self-drive, much in the same way a pilot puts a plane on auto-pilot. Drivers will also be able to summon the cars to their location, connect wirelessly to the internet, and download apps and updates to keep their software current.
But of course, they got a few things wrong as well. Here they are, the blots on their predictive record:
Home-printed newspapers: The article also predicted that each morning the Morrows would begin their day with a freshly printed newspaper, as rendered by their laser-jet printer. These would be tailor-made, automatically selecting the latest news feeds of most interest to them. What this failed to anticipate was the rise of e-media and the decline of print, though hardly anyone could fault them for that. While news has certainly gotten more personal, tablets, e-readers and smartphones are now the way the majority of people read their selected news.
Robot servants and pets: In what must have seemed like a realistic prediction, but which now comes across as a sci-fi cliche, the Morrows’ home was also supposed to come equipped with a robotic servant that had a southern accent. The family’s son was also greeted every morning by a robot dog that would come to play with him. While we are certainly not there yet, the concept of anthropomorphic robot assistants is becoming more real every day. Consider, for example, the Kenshiro robot (pictured at right), the 3D-printed android, or the proposed Roboy, the Swiss-made robotic child. With all of these in the works, a robotic servant or pet doesn’t seem so far-fetched, does it?
Summary:
Between these four major predictions and the extent to which they came true, we can see that the future is not such an easy thing to predict. In addition to always being in motion, and subject to acceleration, slowing and sudden changes, its size and shape can be very difficult to pin down. No one can say for sure what will be realized and when, or if any of the things we currently take for granted will even be here tomorrow.
For instance, during the 1960s and ’70s, it was common practice for futurists and scientists to anticipate that the space race, which had culminated with humans setting foot on the moon in 1969, would continue into the future, and that humanity would be seeing manned outposts on the moon and commercial space flight by 1999. No one at the time could foresee that a more restrictive budget environment, plus numerous disasters and a thawing of the Cold War, would slow things down in that respect.
In addition, most predictions made before the 1980s completely failed to predict the massive revolution caused by miniaturization and the explosion in digital technology. Many futurist outlooks at the time predicted the rise of AI, but took it for granted that computers would still be the size of a desk and require entire rooms dedicated to their processors. The idea of a computer that could fit on top of a desk, let alone on your lap or in the palm of your hand, must have seemed far-fetched.
What’s more, few could predict the rise of the internet before the late 1980s, or what the realization of “cyberspace” would even look like. While writers like William Gibson not only predicted it but coined the term, they seemed to think that interfacing with it would be a matter of cool neon graphics and avatars, not the clean, page-and-site sort of interface it came to be.
And even Gibson failed to predict the rise of such things as email, online shopping, social media and the million other ways the internet is tailored to suit the average person and their daily needs. When it comes right down to it, it is not a dangerous domain permeated by freelance hacker “jockeys” and mega-corporations with their hostile counter-intrusion viruses (aka. Black ICE). Nor is it the social utopia promoting open dialogue and learning that men like Bill Gates and Al Gore predicted it would be in the 1990s. If anything, it is a libertarian economic and social forum that is more democratic and anarchistic than anyone could have predicted.
But of course, that’s just one of many predictions whose realization altered how we see things to come. As a whole, the future has come to be known for being full of shocks and surprises, as well as some familiar faces. In short, the future is an open sea, and there’s no telling which way the winds will blow, or which ships will make it to port ahead of the others. All we can do is wait and see, and hopefully trust in our ability to make good decisions along the way. And of course, the occasional retrospective, and congratulating ourselves on the things we managed to get right, doesn’t hurt either!
Imagine threads that could turn the wearer into a walking power source. That’s the concept behind a new type of fiber-optic solar cell developed by John Badding of Penn State University. Announced back in December of 2012, this development could very well lead to the creation of full-body solar cells that you wear, providing an ample amount of renewable electricity that you could carry with you everywhere you go.
Similar in appearance to most fiber-optic cables made from flexible glass fibers, these new solar cells are thinner than the average human hair and could conceivably be woven into clothing. Whereas a conventional solar cell exists only in two dimensions and can only absorb energy when facing the sun, these silicon-infused fibers have a 3D cross-section capable of absorbing light from any direction.
Already, John Badding and his research team have received interest from the United States military about creating clothing that can act as a wearable power source for soldiers while they’re in the field. In addition, as with peel-and-stick solar panels, we can expect commercial applications such as satchels of the kind used to house laptops. Forget the power cable – now you can charge your battery pack just by setting it in the sun.
And given the upsurge in wearable tattoos and implantable medical devices, these fibers could also prove useful in clothing, ensuring a steady supply of power for such devices to draw from. Hell, I can picture “solar shirts” with a special recharging pocket where you can place your MP3 player, smartphone, tablet, or any other electronic device once the battery runs down.
Naturally, all of this is still in the research and development stage. John Badding and his team have yet to aggregate the single strands into a piece of woven material, so it remains to be seen whether the fibers can withstand the stresses faced by regular clothing without breaking down. Nevertheless, the material is a significant advancement for solar energy, with the new cells presenting many possibilities for remote energy use and accessibility.
And I for one am still excited about the emergence of fabric that generates electricity. Not only is it a surefire and sophisticated way of reducing our carbon footprint, it’s science fiction gold!