The Future of Medicine: The Era of Artificial Hearts

Between artificial knees, total hip replacements, cataract surgery, hearing aids, dentures, and cochlear implants, we are a society that is fast becoming transhuman. Basically, this means we are dedicated to improving human health through the substitution and augmentation of our body parts. Lately, bioprinting has begun offering solutions for replacement organs; but so far, a perfectly healthy replacement heart has remained elusive.

Heart disease is the number one killer in North America, rivaled only by stroke, claiming nearly 600,000 lives every year in the US and 70,000 in Canada. But radical new medical technology may soon change that. Over 1,000 artificial heart transplant surgeries have been carried out in humans over the last 35 years, along with more than 11,000 additional heart surgeries in which valve pumps were installed.

And earlier this month, a major step was taken when the French company Carmat implanted a permanent artificial heart in a patient. This was the second time in history that the company had performed a total artificial heart implant, the first being back in December, when they implanted the device in a 76-year-old man and no additional donor heart was sought. This was a major development for two reasons.

First, robotic organs have so far been limited to acting as a temporary bridge, buying patients precious time until a suitable biological heart becomes available. Second, transplanted biological hearts, while often successful, are very difficult to come by due to a shortage of suitable organs. Over 100,000 people around the world are waiting for a heart at any given time, and there simply are not enough healthy hearts available for the thousands who need them.

This shortage has prompted numerous medical companies to begin looking into the development of artificial hearts, since the creation of a successful and permanent robotic heart could generate billions of dollars and help revolutionize medicine and health care. Far from being a stopgap or temporary measure, these new hearts would be designed to last many years, maybe someday extending patients’ lives indefinitely.

Carmat – led by co-founder and heart transplant specialist Dr. Alain Carpentier – spent 25 years developing the heart. The device weighs about three times as much as an average human heart, is made of soft “biomaterials,” and runs on a five-year lithium battery. The key difference between Carmat’s heart and past efforts is that Carmat’s is self-regulating, actively seeking to mimic the real human heart via an array of sophisticated sensors.

Unfortunately, the patient who received the first Carmat heart died prematurely, only a few months after its installation. Early indications pointed to a short circuit in the device, but Carmat is still investigating the details of the death. On September 5th, however, another patient in France received the Carmat heart, and according to French Health Minister Marisol Touraine the “intervention confirms that heart transplant procedures are entering a new era.”

More than just pumping blood, future artificial hearts are expected to bring numerous other advantages with them. Futurists and developers predict they will have computer chips and Wi-Fi capability built into them, and people may be able to control their hearts with smartphones, tuning down the pumping capacity when they want to sleep, or tuning it up when they want to run marathons.

The benefits of this are certainly apparent. With people able to tailor their own heart rates, they could control their stress reaction (thus eliminating the need for Xanax and beta blockers) and increase the rate of blood flow to ensure maximum physical performance. Future artificial hearts may also replace the need for some doctor visits and physicals, since the heart will be able to monitor health and vitals and relay that information to a database or device.
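
To make the idea concrete, here is a minimal, purely hypothetical sketch (in Python) of what a phone-controlled pacing interface might look like, with hard safety bounds and a vitals log that could later be synced to a clinician’s database. The names, limits, and behaviour are invented for illustration and have nothing to do with Carmat’s actual software.

```python
# A purely illustrative sketch: a pacing controller that clamps requests from a
# phone app to a safe range and logs vitals for later relay to a health record.
# All names and limits below are hypothetical.

from dataclasses import dataclass, field
from typing import List

SAFE_MIN_BPM = 50    # hypothetical lower safety bound
SAFE_MAX_BPM = 160   # hypothetical upper safety bound

@dataclass
class VitalsSample:
    bpm: int
    flow_l_per_min: float

@dataclass
class HeartController:
    target_bpm: int = 70
    history: List[VitalsSample] = field(default_factory=list)

    def request_rate(self, bpm: int) -> int:
        """Accept a rate request from the phone app, clamped to safe limits."""
        self.target_bpm = max(SAFE_MIN_BPM, min(SAFE_MAX_BPM, bpm))
        return self.target_bpm

    def record_vitals(self, flow_l_per_min: float) -> None:
        """Store a sample the companion app could sync to a remote database."""
        self.history.append(VitalsSample(self.target_bpm, flow_l_per_min))

controller = HeartController()
print(controller.request_rate(40))    # clamped up to 50 for sleep
print(controller.request_rate(120))   # accepted for a workout
controller.record_vitals(5.2)         # litres per minute of simulated flow
```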

In fact, much of the wearable medical tech that is in vogue right now will likely become obsolete once the artificial heart arrives in its perfected form. Naturally, health experts would find this problematic, since our hearts respond to our surroundings for a reason, and overriding those responses could very well have unintended consequences. People tampering with their own heart rate could certainly do so irresponsibly, and end up causing damage to other parts of their body.

One major downside of artificial hearts is their vulnerability to hacking via their Wi-Fi connectivity. If organized criminals, an authoritarian government, or malicious hackers were dedicated enough, they could cause targeted heart failure. Viruses could also be sent into the heart’s software, or the password to the app controlling your heart could be stolen and misused.

Naturally, there are also some critics who worry that, beyond the efficacy of the device itself, an artificial heart is too large a step towards becoming a cyborg. The same concern applies to all artificial replacements, such as limbs and biomedical implants, technology which is already available. Whenever a new device or technique is revealed, the specter of “cyborgs” is raised with uncomfortable implications.

However, the benefit of an artificial heart is that it will be hidden inside the body, and it will soon be better than the real thing. And given that it could mean the difference between life and death, there are likely to be millions of people who will want one and will even be willing to line up for one electively once they become available. The biggest dilemma with the heart will probably be affordability.

Currently, the Carmat heart costs about $200,000. However, this is to be expected when a new technology is still in its early development phase. In a few years’ time, when the technology becomes more widely available, the price will likely drop to the point where the device becomes much more affordable. And in time, it will be joined by other biotechnological replacements that, while artificial, are an undeniable improvement on the real thing.

The era of Transhumanism looms!

Sources: motherboard.vice.com, carmatsa.com, cdc.gov, heartandstroke.com

Finalists Selected for Qualcomm Tricorder XPrize

First announced in 2012, the Qualcomm Tricorder XPRIZE has sought to bring together the best and brightest minds in the field to make science fiction science fact. In short, the goal is to create a handheld device that would mimic some of the key functions of the iconic Star Trek tricorder, allowing consumers access to reliable, easy-to-use diagnostic equipment any time, anywhere, with near-instantaneous results.

And now, the list of potential candidates has been whittled down to ten finalists. And while they might not be able to live up to the fictitious original, the devices being developed are quite innovative and could represent a significant technological advancement in the diagnostic domain. Qualcomm is offering a US$10 million prize purse in the hope of stimulating the research and development of precision diagnostic equipment.

In order to qualify for the prize, the successful scanner must comply with an ambitious set of parameters. First, the device must be able to reliably capture an individual’s heart rate, respiratory rate, blood pressure, and oxygen saturation in an easy-to-use and completely non-invasive fashion. It must also diagnose 13 core diseases – including pneumonia, tuberculosis and diabetes – along with three additional health conditions to be chosen by each team.

Each device varies widely in terms of appearance and composition, but that’s hardly surprising. The only limitation placed on the teams in terms of construction is that the entire apparatus must have a mass of less than 2.3 kg (5 lb). Due to the wide range of tests the tricorder needs to carry out in order to capture the necessary health metrics, it is highly unlikely that any of the scanners will take the form of a single device.
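
For illustration, the headline entry requirements can be expressed as a simple checklist. The sketch below assumes a made-up dictionary describing each device; the actual XPRIZE judging criteria are of course far more detailed.

```python
# A rough checklist of the competition's headline constraints. The field names
# and the example entry are invented for illustration only.

REQUIRED_VITALS = {"heart_rate", "respiratory_rate", "blood_pressure", "oxygen_saturation"}
CORE_DISEASES_REQUIRED = 13
ELECTIVE_CONDITIONS_REQUIRED = 3
MAX_MASS_KG = 2.3

def qualifies(entry: dict) -> bool:
    """Return True if a device description meets the headline requirements."""
    return (
        entry["mass_kg"] <= MAX_MASS_KG
        and REQUIRED_VITALS.issubset(entry["vitals"])
        and len(entry["core_diseases"]) >= CORE_DISEASES_REQUIRED
        and len(entry["elective_conditions"]) >= ELECTIVE_CONDITIONS_REQUIRED
    )

example_entry = {
    "mass_kg": 1.8,
    "vitals": {"heart_rate", "respiratory_rate", "blood_pressure", "oxygen_saturation"},
    "core_diseases": ["pneumonia", "tuberculosis", "diabetes"] + [f"disease_{i}" for i in range(10)],
    "elective_conditions": ["sleep apnea", "shingles", "strep throat"],
}
print(qualifies(example_entry))  # True
```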

The shortlisted entries include Scanadu, a company which is currently developing an entire portfolio of handheld medical devices. Its circular sensor is programmed to measure blood pressure, temperature, ECG, oximetry, heart rate, and the breathing rate of a patient or subject – all from a simple, ten-second scan. Then there’s Aezon, a US-based team composed of student engineers from Johns Hopkins University, Maryland.

The Aezon device is made up of a wearable Vitals Monitoring Unit – designed to capture oxygen saturation, blood pressure, respiration rate and ECG metrics – and The Lab Box, a small portable device that makes use of microfluidic chip technology in order to diagnose diseases ranging from streptococcal pharyngitis to a urinary tract infection by analyzing biological samples.

The other finalists include CloudDX, a Canadian company from Mississauga, Ontario; Danvantri, from Chennai, India; DMI from Cambridge, Massachusetts; the Dynamical Biomarkers Group from Zhongli City, Taiwan; Final Frontier Medical Devices from Paoli, PA; MESI Simplifying Diagnostics from Ljubljana, Slovenia; SCANurse from London, England; and the Zensor from Belfast, Northern Ireland.

In all cases, the entrants are compact, lightweight and efficient devices that push the information obtained through their multiple sensors to a smartphone or tablet interface. This appears to be done with a proprietary smartphone app via the cloud, where it can also be analyzed by a web application. Users will also be able to access their test results, discover information regarding possible symptoms and use big data to form a possible diagnosis.
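
As a rough sketch of that data flow, here is how a scanner might package one reading as a JSON POST to a cloud service. The endpoint, field names, and payload shape are all hypothetical; none of the finalists’ actual APIs are described here.

```python
# A minimal sketch of pushing one scan result to a (hypothetical) cloud endpoint
# where a web application could later analyze it.

import json
from urllib import request

CLOUD_ENDPOINT = "https://example.com/api/readings"   # placeholder, not a real service

def build_request(device_id: str, vitals: dict) -> request.Request:
    """Package one reading as a JSON POST for the cloud service."""
    payload = json.dumps({"device": device_id, "vitals": vitals}).encode("utf-8")
    return request.Request(
        CLOUD_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example reading a tricorder-style scanner might capture in a ten-second scan:
req = build_request("scanner-001", {"heart_rate": 72, "spo2": 98, "resp_rate": 14})
print(req.full_url, req.data)
# request.urlopen(req) would transmit it, given a real endpoint to receive the data.
```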

 


The next and final round of tests for the teams will take place next year, between November and December. The scanners will be put through a diagnostic competition involving 15-30 patients whilst judges evaluate the consumer user experience. The final test will also assess the scanners’ adequacy in high-frequency data logging. The overall winners will be announced in early 2016 and awarded the lucrative $10 million prize to develop their product and bring it to market.

If such a device could be simple enough to allow for self-diagnosis by the general public, it could play a key part in alleviating the pressure on overburdened healthcare systems by cutting down on unnecessary hospital visits. It will also be a boon for personalized medicine, making regular hospital visits quicker, easier, and much less expensive. And let’s not forget, it’s science fiction and Trekky-nerd gold!

Be sure to check out the video below that outlines the aims and potential benefits of the Qualcomm Tricorder XPRIZE challenge. And for more information on the finalists, and to see their promotional videos, check out the Qualcomm website here.


Sources: gizmag.com, tricorder.xprize.org

The Future is Here: Glucose-Monitoring Contact Lenses

Earlier this year, Google announced that it was developing a contact lens that would be capable of monitoring blood glucose levels. By monitoring a person’s glucose levels through their tears, and sending that information to a smartphone, the device promised to do away with tests that require regular blood samples and pinpricks. And now, a partnership between Google and Novartis has been announced that will help see this project through to completion.

Alcon, the eye care division of Novartis – a Swiss multinational pharmaceutical company – recently joined Google’s project to commercialize “smart contact lens” technology. The project, which came out of the Google X blue-sky innovation arm of the company, aimed to utilize a “tiny wireless chip and miniaturized glucose sensor that are embedded between two layers of soft contact lens material,” in order to detect glucose levels present in tears.

At the time of the initial announcement in January, Google said its prototypes were able to take one glucose reading per second and that it was investigating ways for the device to act as an early warning system for the wearer should glucose levels become abnormal. All that was needed was a partner with the infrastructure and experience in the medical industry to see the prototypes put into production.
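
The “one reading per second plus early warning” behaviour can be sketched as a simple monitoring loop. The thresholds and the simulated sensor below are invented; a real device would use calibrated tear-glucose readings and clinically validated limits.

```python
# A toy monitoring loop illustrating the idea of per-second readings with alerts.
# Thresholds and the simulated sensor are invented for illustration only.

import random
import time

LOW_MG_DL = 70      # hypothetical hypoglycemia warning level
HIGH_MG_DL = 180    # hypothetical hyperglycemia warning level

def read_sensor() -> float:
    """Stand-in for the lens's embedded glucose sensor."""
    return random.gauss(110, 40)

def monitor(seconds: int = 5) -> None:
    for _ in range(seconds):
        level = read_sensor()
        if level < LOW_MG_DL:
            print(f"ALERT: low glucose ({level:.0f} mg/dL)")
        elif level > HIGH_MG_DL:
            print(f"ALERT: high glucose ({level:.0f} mg/dL)")
        else:
            print(f"glucose ok ({level:.0f} mg/dL)")
        time.sleep(1)   # one reading per second, matching the stated prototype rate

monitor()
```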

Under the terms of the new agreement, Google will license the technology to Alcon “for all ocular medical uses” and the two companies will collaborate to develop the lens and bring it to market. Novartis says that it sees Google’s advances in the miniaturization of electronics as complementary to its own expertise in pharmaceuticals and medical devices. No doubt, the company also sees this as an opportunity to get in on the new trend of digitized, personalized medicine.

As Novartis said in a recent press release:

The agreement marries Google’s expertise in miniaturized electronics, low power chip design and microfabrication with Alcon’s expertise in physiology and visual performance of the eye, clinical development and evaluation, as well as commercialization of contact and intraocular lenses.

The transaction remains subject to anti-trust approvals, but assuming it goes through, Alcon hopes it will help to accelerate its product innovation. And with that, diabetics can look forward to yet another innovative device that simplifies the blood monitoring process and offers better early warning detection that can help reduce the risk of heart disease, stroke, kidney failure, foot ulcers, loss of vision, and coma.

Sources: gizmag.com, novartis.com

The Future of Devices: The Wearable Tech Boom

The wearable computing revolution that has been taking place in recent years has drawn in developers and tech giants from all over the world. Its roots are deep, dating back to the late 60’s with the Sword of Damocles concept and to the early 80’s with the work of Steve Mann. But in recent years, thanks to the development of Google Glass, the case for wearable tech has moved beyond hobbyists and enthusiasts and into the mainstream.

And with display glasses now accounted for, the latest boom in development appears to be centered on smart watches and similar devices. These range from fitness trackers with just a few features to wrist-mounted versions of smartphones that boast the same constellation of functions and apps (email, phone, text, Skyping, etc.). And as always, the big-name industries are coming forward with their own concepts and designs.

First, there’s the much-anticipated Apple iWatch, which is still in the rumor stage. The company has been working on this project since late 2012, but has begun accelerating the process as it tries to expand its family of mobile devices to the wrist. Apple has already started work on trademarking the name in a number of countries in preparation for a late 2014 launch, perhaps in October, with the device entering mass production in July.

And though it’s not yet clear what the device will look like, several mockups and proposals have been leaked. And recent reports from sources like Reuters and The Wall Street Journal have pointed towards multiple screen sizes and price points, suggesting an array of different band and face options in various materials to position it as a fashion accessory. It is also expected to include a durable sapphire crystal display, produced in collaboration with Apple partner GT Advanced.

While the iWatch will perform some tasks independently using the new iOS 8 platform, it will be dependent on a compatible iOS device for functions like receiving messages, voice calls, and notifications. It is also expected to feature wireless charging capabilities, advanced mapping abilities, and possibly near-field communication (NFC) integration. But an added bonus, as indicated by Apple’s recent filing for patents associated with their “Health” app, is the inclusion of biometric and health sensors.

Along with serving as a companion device to the iPhone and iPad, the iWatch will be able to measure multiple different health-related metrics. Consistent with the features of a fitness band, these will include things like a pedometer, calories burned, sleep quality, heart rate, and more. The iWatch is said to include 10 different sensors to track health and fitness, providing an overall picture of health and making the health-tracking experience more accessible to the general public.
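
A loose sketch of how such readings might be folded into a daily summary with simple wellness flags is shown below; the fields and thresholds are illustrative guesses, not Apple’s actual data model.

```python
# An illustrative daily summary built from several sensor streams. Field names
# and thresholds are invented; they do not reflect Apple's Health data model.

from dataclasses import dataclass
from typing import List

@dataclass
class DailySummary:
    steps: int
    calories_burned: float
    sleep_hours: float
    resting_heart_rate: int

    def flags(self) -> List[str]:
        """Very rough wellness flags a companion app might surface."""
        notes = []
        if self.steps < 5000:
            notes.append("low activity")
        if self.sleep_hours < 6:
            notes.append("short sleep")
        if self.resting_heart_rate > 100:
            notes.append("elevated resting heart rate")
        return notes

today = DailySummary(steps=8200, calories_burned=2300.0, sleep_hours=5.5, resting_heart_rate=64)
print(today.flags())   # ['short sleep']
```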

Apple has reportedly designed iOS 8 with the iWatch in mind, and the two are said to be heavily reliant on one another. The iWatch will likely take advantage of the “Health” app introduced with iOS 8, which may display all of the health-related information gathered by the watch. Currently, Apple is gearing up to begin mass production on the iWatch, and has been testing the device’s fitness capabilities with professional athletes such as Kobe Bryant, who will likely go on to promote the iWatch following its release.

Not to be outdone, Google launched its own smartwatch platform – known as Android Wear – at this year’s I/O conference. Android Wear is the company’s software platform for linking smartwatches from companies including LG, Samsung and Motorola to Android phones and tablets. A preview of Wear was introduced this spring, but the I/O conference provided more details on how it will work and made it clear that the company is investing heavily in the notion that wearables are the future.

Android Wear takes much of the functionality of Google Now – an intelligent personal assistant – and uses the smartwatch as a home for receiving notifications and context-based information. For the sake of travel, Android Wear will push relevant flight, weather and other information directly to the watch, where the user can tap and swipe their way through it and use embedded prompts and voice control to take further actions, like dictating a note with reminders to pack rain gear.

For the most part, Google had already revealed most of what Wear will be able to do in its preview, but its big on-stage debut at I/O was largely about getting app developers to buy into the platform and keep designing with a peripheral wearable interface in mind. Apps can be designed to harness different Android Wear “intents.” For example, the Lyft app takes advantage of the “call me a car” intent and can be set to be the default means of hailing a ride when you tell your smartwatch to find you a car.
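
The intent idea can be illustrated with a small dispatcher sketch, written here in Python rather than actual Android code: apps register as handlers for a spoken request and the watch routes it to the user’s chosen default. The intent and app names are only examples.

```python
# A conceptual sketch of intent-style dispatch (not the real Android Wear API).

HANDLERS = {}    # intent name -> list of registered apps
DEFAULTS = {}    # intent name -> the user's preferred app

def register(intent: str, app: str) -> None:
    HANDLERS.setdefault(intent, []).append(app)

def set_default(intent: str, app: str) -> None:
    DEFAULTS[intent] = app

def dispatch(intent: str, **details) -> str:
    """Route a spoken request to the default handler, or the first registered one."""
    app = DEFAULTS.get(intent) or (HANDLERS.get(intent) or ["<no handler>"])[0]
    return f"routing '{intent}' {details} to {app}"

register("call_me_a_car", "Lyft")
register("call_me_a_car", "SomeOtherRideApp")
set_default("call_me_a_car", "Lyft")
print(dispatch("call_me_a_car", pickup="current location"))
```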

Google officials also claimed at I/O that the same interface behind Android Wear will also power their new Android Auto and Android TV, two other integrated services that allow users to interface with their car and television via a mobile device. So don’t be surprised if you see someone unlocking or starting their car by talking into their watch in the near future. The first Android Wear watches – the Samsung Gear Live and the LG G Watch – are available to pre-order, and the round-faced Motorola Moto 360 is expected to come out later this summer.

All of these steps in integration and wearable technology are signs of an emergent trend, one where just about everything from personal devices to automobiles and even homes are smart and networked together – thus giving rise to a world where everything is remotely accessible. This concept, otherwise known as the “Internet of Things”, is expected to become the norm in the next 20 years, and will include other technologies like display contacts and mediated (aka. augmented) reality.

And be sure to check out this concept video of the Apple iWatch:


Sources: cnet.com, (2), macrumors.com, engadget.com, gizmag.com

The Internet of Things: AR and Real World Search

When it comes to the future, it is clear that the concept of the “Internet of Things” holds sway. This idea – which states that all objects will someday be identifiable thanks to virtual representations on the internet – is at the center of a great deal of innovation that drives our modern economy. Be it wearables, wireless, augmented reality, voice or image recognition, the technologies that help us combine the real with the virtual are on the rise.

And so it’s really no surprise that innovators are looking to take augmented reality to the next level. The fruit of some of this labor is Blippar, a market-leading image-recognition and augmented reality platform. Lately, they have been working on a proof of concept for Google Glass showing that 3-D searches are doable. This sort of technology is already available in the form of apps for smartphones, but what is lacking is a central database that could turn any device into a visual search engine.

As Ambarish Mitra, the head of Blippar, stated, AR is already gaining traction among consumers thanks to some of the world’s biggest industrial players recognizing the shift to visually mediated lifestyles. Examples include IKEA’s interactive catalog, Heinz’s AR recipe booklet and Amazon’s recent integration of the Flow AR technology into its primary shopping app. As this trend continues, we will need a Wikipedia-like database for 3-D objects that will be available to us anytime, anywhere.

Social networks and platforms like Instagram, Pinterest, Snapchat and Facebook have all driven a cultural shift in the way people exchange information. This takes the form of text updates, instant messaging, and uploaded images. But as the saying goes, “a picture is worth a thousand words”. In short, information absorbed through visual learning has a marked advantage over that which is absorbed through reading and text.

In fact, a recent NYU study found that people retain close to 80 percent of information they consume through images versus just 10 percent of what they read. If people are able to regularly consume rich content from the real world through our devices, we could learn, retain, and express our ideas and information more effectively. Naturally, there will always be situations where text-based search is the most practical tool, but most searches arise from real-world experiences.

Right now, text is the only option available, and oftentimes, people are unable to best describe what they are looking for. But an image-recognition technology that could turn any smartphone, tablet or wearable device into a scanner that could identify any 3-D object would vastly simplify things. Information could be absorbed in a more efficient way, using an object’s features and pulling up information from a rapidly learning engine.
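
At its core, such a system pairs a feature extractor with a nearest-neighbour lookup over a database of known objects. The toy sketch below uses made-up three-dimensional feature vectors and cosine similarity; real systems rely on learned image descriptors and far larger indexes.

```python
# A highly simplified visual-search lookup over a tiny, invented object database.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical database mapping object names to precomputed feature vectors.
DATABASE = {
    "IKEA chair": [0.9, 0.1, 0.3],
    "ketchup bottle": [0.2, 0.8, 0.5],
    "paperback book": [0.4, 0.4, 0.9],
}

def visual_search(query_vector):
    """Return the best-matching object and its similarity score."""
    return max(((name, cosine(query_vector, vec)) for name, vec in DATABASE.items()),
               key=lambda pair: pair[1])

print(visual_search([0.85, 0.15, 0.35]))   # expected to match "IKEA chair"
```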

For better or for worse, wearable designs of consumer electronics have come to reflect a new understanding in the past few years. Basically, they have come to be extensions of our senses, much as Marshall McLuhan wrote in his 1964 book Understanding Media: The Extensions of Man. Google Glass is representative of this revolutionary change, a step in the direction of users interacting with the environment around them through technology.

Leading tech companies are already investing time and money into the development of their own AR products, and countless patents and research allocations are being made with every passing year. Facebook’s acquisition of the virtual reality company Oculus VR is the most recent example, but even Samsung received a patent earlier this year for a camera-based augmented reality keyboard that is projected onto the fingers of the user.

Augmented reality has already proven itself to be a multi-million dollar industry – with 60 million users and around half a billion dollars in global revenues in 2013 alone. It’s expected to exceed $1 billion annually by 2015, and combined with a Google Glass-type device, this AR could eventually allow individuals to build vast libraries of data that will be the foundation for finding any 3-D object in the physical world.

In other words, the Internet of Things will become one step closer, with an evolving database of visual information at the base of it that is becoming ever larger and (in all likelihood) smarter. Oh dear, I sense another Skynet reference coming on! And in the meantime, enjoy this video that showcases Blippar’s vision of what this future of image overlay and recognition will look like:


Sources: wired.com, dashboardinsight.com, blippar.com

The Future of Medicine: 3D Printing and Bionic Organs!

There’s just no shortage of breakthroughs in the field of biomedicine these days. Whether it’s 3D bioprinting, bionics, nanotechnology or mind-controlled prosthetics, every passing week seems to bring more in the way of amazing developments. And given the rate of progress, it’s likely going to be just a few years before mortality itself is considered a treatable condition.

Consider the most recent breakthrough in 3D printing technology, which comes to us from the J.B. Speed School of Engineering at the University of Louisville, where researchers used a printed model of a child’s heart to help a team of doctors prepare for open heart surgery. Thanks to these printer-assisted measures, the doctors were able to save the life of a 14-year-old child.

Philip Dydysnki, Chief of Radiology at Kosair Children’s Hospital, decided to approach the school when he and his medical team were looking at ways of treating Roland Lian Cung Bawi, a boy born with four heart defects. Using images taken from a CT scan, researchers from the school’s Rapid Prototyping Center were able to create and print a 3D model of Roland’s heart that was 1.5 times its actual size.

Built in three pieces using a flexible filament, the printing reportedly took around 20 hours and cost US$600. Cardiothoracic surgeon Erle Austin III then used the model to devise a surgical plan, ultimately resulting in the repairing of the heart’s defects in just one operation. As Austin said, “I found the model to be a game changer in planning to do surgery on a complex congenital heart defect.”
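
The 1.5x enlargement is, in essence, a uniform scaling of the mesh segmented from the CT scan. Below is a minimal sketch of that step using plain vertex lists; the actual pipeline relied on dedicated segmentation and modelling software.

```python
# Uniformly scaling a mesh's vertices, the geometric core of printing at 1.5x size.

SCALE = 1.5

def scale_mesh(vertices, factor=SCALE):
    """Uniformly scale every (x, y, z) vertex about the origin."""
    return [(x * factor, y * factor, z * factor) for x, y, z in vertices]

# A toy "mesh": one triangle from a segmented surface, coordinates in millimetres.
triangle = [(10.0, 0.0, 0.0), (0.0, 12.0, 0.0), (0.0, 0.0, 8.0)]
print(scale_mesh(triangle))
```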

Roland has since been released from hospital and is said to be in good health. In the future, this type of rapid prototyping could become a mainstay for medical training and practice surgery, giving surgeons the options of testing out their strategies beforehand. And be sure to check out this video of the procedure from the University of Louisville:


And in another story, improvements made in the field of bionics are making a big difference for people suffering from diabetes. For people living with type 1 diabetes, the constant need to extract blood and monitor it can be quite the hassle. Hence why medical researchers are looking for new and non-invasive ways to monitor and adjust sugar levels.

Solutions range from laser blood-monitors to glucose-sensitive nanodust, but the field of bionics also offers solutions. Consider the bionic pancreas that was recently trialled among 30 adults, and has also been approved by the US Food and Drug Administration (FDA) for three transitional outpatient studies over the next 18 months.

The device comprises a sensor inserted under the skin that relays hormone level data to a monitoring device, which in turn sends the information wirelessly to an app on the user’s smartphone. Based on the data, which is provided every five minutes, the app calculates required dosages of insulin or glucagon and communicates the information to two hormone infusion pumps worn by the patient.
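
The closed loop described here can be caricatured in a few lines: every five minutes, read a glucose estimate and decide whether to request insulin or glucagon from the pumps. The dosing rules below are invented purely for illustration and bear no relation to the device’s real algorithm.

```python
# A toy dosing decision for one five-minute cycle. Thresholds and doses are invented.

def decide_dose(glucose_mg_dl: float) -> tuple:
    """Return (hormone, units) to request from the infusion pumps this cycle."""
    if glucose_mg_dl > 180:
        return ("insulin", 0.5)
    if glucose_mg_dl < 70:
        return ("glucagon", 0.3)
    return ("none", 0.0)

# Simulated readings arriving every five minutes:
for reading in [95, 190, 240, 150, 62]:
    hormone, units = decide_dose(reading)
    print(f"{reading} mg/dL -> {hormone} {units} U")
```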

The bionic pancreas has been developed by Dr. Edward Damiano, associate professor of biomedical engineering at Boston University, and Dr. Steven Russell, assistant professor at Harvard Medical School. To date, it has been trialled with diabetic pigs and in three hospital-based feasibility studies amongst adults and adolescents over 24-48 hour periods.

The upcoming studies will allow the device to be tested by participants in real-world scenarios with decreasing amounts of supervision. The first will test the device’s performance for five continuous days involving twenty adults with type 1 diabetes. The results will then be compared to a corresponding five-day period during which the participants will be at home under their own care and without the device.

A second study will be carried out using 16 boys and 16 girls with type 1 diabetes, testing the device’s performance for six days against a further six days of the participants’ usual care routine. The third and final study will be carried out amongst 50 to 60 further participants with type 1 diabetes who are also medical professionals.

Should the transitional trials be successful, a more developed version of the bionic pancreas, based on results and feedback from the previous trials, will be put through trials in 2015. If all goes well, Prof. Damiano hopes that the bionic pancreas will gain FDA approval and be rolled out by 2017, when his son, who has type 1 diabetes, is expected to start higher education.

With this latest development, we are seeing how smart technology and non-invasive methods are merging to assist people living with chronic health issues. In addition to “smart tattoos” and embedded monitors, it is leading to an age where our health is increasingly in our own hands, and preventative medicine takes precedence over corrective.

Sources: gizmag.com, (2)

The Future is Here: The Copenhagen Wheel

Fans of the cable show Weeds ought to instantly recognize this invention. It was featured as a product invented by one of the characters while living (predictably) in Copenhagen. In addition, it was the subject of news stories, articles, design awards, and a whole lot of public interest. People wanted to get their hands on it, and for obvious reasons.

It’s known as the Copenhagen Wheel, a device invented at the MIT SENSEable City Lab back in 2009 to electrify the bicycle. Since that time, engineers at MIT have been working to refine it in preparation for the day when it would be commercially available. And that time has come, as a new company called Superpedestrian announced that it has raised $2.1 million in venture capital to make the device available to the public.

Superpedestrian founder Assaf Biderman – who is also the SENSEable City Lab’s associate director and, along with lab director Carlo Ratti, one of the creators of the wheel – had this to say:

The project touched an exposed nerve somehow. Aside from news coverage and design awards, people were wanting it. Over 14,000 people emailed saying ‘I want to buy it, sell it, make it for you.’

Three years after inventing it, Biderman finally decided that it was time to spin off a company to make it happen. MIT filed all the relevant patents, and Superpedestrian acquired exclusive licenses to the Copenhagen Wheel technology. And by late November, they plan to launch the wheel to the public for the very first time.

And though many of the details are being carefully guarded in preparation for the release, some are already known. For example, the wheel can be fitted to almost any bike, is controlled by sensors in the pedals, and has a power assist feature that doesn’t require any work on the part of the rider. And according to Biderman, its range “will cover the average suburban commute, about 15 miles to and from work and back home.”

On top of that, a regenerative braking system stores energy for later use in a lithium battery. The wheel also comes with an app that allows users to control special features from their smartphone. These include being able to lock and unlock the bike, select motor assistance, and get real-time data about road conditions. An open-source platform called The Superpedestrian SDK also exists to allow developers to make their own apps.
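
What an app built on such an SDK might look like can only be guessed at, but the sketch below captures the advertised features: locking and unlocking the wheel, choosing an assistance level, and reading back road data. The class and method names are entirely hypothetical and not part of the real Superpedestrian SDK.

```python
# A speculative sketch of a smart-wheel client; all names and values are invented.

class SmartWheel:
    def __init__(self, wheel_id: str):
        self.wheel_id = wheel_id
        self.locked = True
        self.assist_level = 0   # 0 = off, 3 = maximum assistance

    def unlock(self) -> None:
        self.locked = False

    def set_assist(self, level: int) -> None:
        self.assist_level = max(0, min(3, level))

    def road_conditions(self) -> dict:
        # A real wheel would report data gathered by its onboard sensors.
        return {"surface": "smooth", "grade_percent": 2.5}

wheel = SmartWheel("my-wheel")
wheel.unlock()
wheel.set_assist(2)
print(wheel.road_conditions())
```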

Interestingly enough, the Copenhagen Wheel also has a rival, whose appearance on the market seems nothing short of conspiratorial. Its competitor is the FlyKly Smart Wheel, a device which has raised over $150,000 on Kickstarter so far. It is extremely similar to the Copenhagen Wheel in most respects, from its electrical assistance to the fact that it can be integrated via smartphone.

According to Biderman, the appearance of the Smart Wheel is just a coincidence, though it is similar to their product. And his company really doesn’t have to worry about competition, since the Copenhagen Wheel has years of brand recognition and the MIT name behind it. In terms of the target audience, Biderman says that they are looking at targeting city dwellers as well as cyclists:

If you’re an urbanite, you can use it to move all around, and go as far as the edges of most cities with this quite easily. You overcome topographical challenges like hills. The point is to attract more people to cycling.

Though no indication has been given as to how much an individual unit will cost, it is expected to have a price point that’s competitive with today’s e-bikes.

The FlyKly Smart Wheel, by comparison, can be pre-ordered for $550 apiece. In total, that campaign has raised $301,867 (their original goal was $100,000) since opening on Oct. 16th. As a result, they have been able to reach their first “stretch goal” of producing a 20″ wheel. If they can reach $500,000 before the campaign closes on Nov. 25th, they will be able to deliver on their other goals: a motor brake and a glow-in-the-dark casing.

For some time, designers and engineers have been trying to find ways to make alternative transportation both effective and attractive. Between these designs and a slew of others that will undoubtedly follow, it looks like e-bicycling may be set to fill that void. Combined with electric cars, self-driving cars, hydrogen cars, robotaxis, podcars, and high speed trains, we could be looking at the revolution in transit that we’ve been waiting for.

Sources: fastcoexist.com, (2), kickstarter.com

The Future of Education: Facial Recognition in the Classroom

For some time now, classroom cameras have been used to see what teachers do in the course of their lessons, and evaluate their overall effectiveness as educators. But thanks to recent advances in facial recognition software, a system has been devised that will assess teacher effectiveness by turning the cameras around and aiming them at the class.

It’s what’s known as EngageSense, and was developed by SensorStar Labs in Queens, New York. It begins by filming students’ faces, then applying an algorithm to assess their level of interest. And while it might sound a bit Big Brother-y, the goal is actually quite progressive. Traditional logic has it that by filming the teacher, you will know what they are doing right and wrong.

This system reverses that thinking: it measures the students’ reactions, tracking their level of interest over time to see what works for them and what doesn’t. As SensorStar Labs co-founder Sean Montgomery put it:

This idea of adding the cameras and being able to use that information to assist teachers to improve their lessons is already underway. Where this is trying to add a little value on top of that is to make it less work for the teachers.

Montgomery also emphasized that the technology is still in the research and development phase. In its current form, it uses webcams to shoot students’ faces and computer vision algorithms to analyze their gaze – measuring eye movement, the direction they are facing, and facial expressions. That, coupled with audio, can be transformed into a rough, automated metric of student engagement throughout the day.
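
One way to picture the output is as a weighted blend of gaze and audio features per time interval. The sketch below is a back-of-the-envelope guess at such a metric; SensorStar Labs has not published its actual features or weights.

```python
# A guessed-at engagement score combining gaze and audio cues; weights are invented.

def engagement_score(facing_front: float, eye_movement: float, noise_level: float) -> float:
    """
    facing_front:  fraction of the interval the student faced the front (0..1)
    eye_movement:  normalized saccade rate (0..1, higher = more scanning around)
    noise_level:   normalized classroom audio level (0..1)
    """
    score = 0.6 * facing_front + 0.2 * (1.0 - eye_movement) + 0.2 * (1.0 - noise_level)
    return round(score, 2)

# One student's metrics over three consecutive intervals of a lesson:
for interval in [(0.9, 0.2, 0.3), (0.5, 0.6, 0.7), (0.8, 0.3, 0.2)]:
    print(engagement_score(*interval))
```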

After a lesson, a teacher could boot up EngageSense and see, with a glance at the dashboard, when students were paying rapt attention, and at what points they became confused or distracted. Beyond that, the concept is still being refined as SensorStar Labs looks both for funding and for schools to give EngageSense a real-world trial.

The ultimate goal here is to tailor lessons so that the learning styles of all students can be addressed. And given the importance of classroom accommodation and the amount of time dedicated to ensuring individual student success, a tool like this may prove very useful. Rather than relying on logs and spreadsheets, EngageSense employs standard computer hardware that simplifies the evaluation process over the course of days, weeks, months, and even years.

At the present time, the biggest obstacle would definitely be privacy concerns. While the software is designed for gauging student interest right now, it would not be difficult at all to imagine the same technology applied to police interrogations, security footage, or public surveillance.

One way to assuage these concerns in the classroom, according to Montgomery, is to make the entire process voluntary. Much in the same way that smartphone apps ask permission to access your GPS or other personal data, parental consent would be needed before a child could be recorded or their data accessed and analyzed.

Sources: fastcoexist.com, labs.sensorstar.com

Digital Eyewear Through the Ages

Given the sensation created by the recent release of Google Glass – a timely invention that calls to mind everything from 80’s cyberpunk to speculations about our cybernetic, transhuman future – a lot of attention has been focused lately on personalities like Steve Mann, Mark Spitzer, and the history of wearable computers.

For decades now, visionaries and futurists have been working towards a day when all personal computers are portable and blend seamlessly into our daily lives. And with countless imitators coming forward to develop their own variants and hate crimes being committed against users, it seems like portable/integrated machinery is destined to become an issue no one will be able to ignore.

And so I thought it was high time for a little retrospective, a look back at the history of eyewear computers and digital devices to see how far they have come. From the humble beginnings with bulky backpacks and large, head-mounted displays, to the current age of small fixtures that can be worn as easily as glasses, things certainly have changed. And the future is likely to get even more fascinating, weird, and a little bit scary!

Sword of Damocles (1968):
Developed by Ivan Sutherland and his student Bob Sproull at the University of Utah in 1968, the Sword of Damocles was the world’s first head-mounted display system. It consisted of a headband with a pair of small cathode-ray tubes attached to the end of a large instrumented mechanical arm, through which head position and orientation were determined.

Hand positions were sensed via a hand-held grip suspended at the end of three fishing lines whose lengths were determined by the number of rotations sensed on each of the reels. Though crude by modern standards, this breakthrough technology would become the basis for all future innovation in the field of mobile computing, virtual reality, and digital eyewear applications.
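
Recovering the grip position from the three measured line lengths is a classic trilateration problem. The sketch below assumes the reels sit at known anchor points and that the grip hangs below them; it is a modern reconstruction for illustration, not Sutherland’s original code.

```python
# Trilateration: find the grip position from three anchor points and line lengths.

import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def add(a, b): return [a[i] + b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def scale(a, s): return [a[i] * s for i in range(3)]
def cross(a, b): return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
def norm(a): return math.sqrt(dot(a, a))
def unit(a): return scale(a, 1.0 / norm(a))

def hand_position(p1, p2, p3, r1, r2, r3):
    """Intersect three spheres (reel positions, line lengths); take the lower solution."""
    ex = unit(sub(p2, p1))
    i = dot(ex, sub(p3, p1))
    ey = unit(sub(sub(p3, p1), scale(ex, i)))
    ez = cross(ex, ey)
    d = norm(sub(p2, p1))
    j = dot(ey, sub(p3, p1))
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
    z = -math.sqrt(max(0.0, r1**2 - x**2 - y**2))   # the grip hangs below the reels
    return add(p1, add(scale(ex, x), add(scale(ey, y), scale(ez, z))))

# Reels at three known points (metres); lengths inferred from counted reel rotations.
print(hand_position([0, 0, 0], [1, 0, 0], [0, 1, 0], 0.7071, 0.9487, 0.8367))
# -> approximately [0.3, 0.4, -0.5]
```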

WearComp Models (1980-84):
Built by Steve Mann (inventor of the EyeTap and considered to be the father of wearable computers) in 1980, the WearComp1 cobbled together many devices to create visual experiences. It included an antenna to communicate wirelessly and share video. In 1981, he designed and built a backpack-mounted wearable multimedia computer with text, graphics, and multimedia capability, as well as video capability.

By 1984 – the same year that Apple’s Macintosh first shipped and William Gibson’s science fiction novel Neuromancer was published – he released the WearComp4 model. This latest version employed clothing-based signal processing, a personal imaging system with left eye display, and separate antennas for simultaneous voice, video, and data communication.

Private Eye (1989):
In 1989, Reflection Technology marketed the Private Eye head-mounted display, which scanned a vertical array of LEDs across the visual field using a vibrating mirror. The monochrome screen was 1.25 inches on the diagonal, but images appeared equivalent to a 15-inch display viewed from 18 inches away.

EyeTap Digital Eye (1998):
Steve Mann is considered the father of digital eyewear and what he calls “mediated” reality. He is a professor in the department of electrical and computer engineering at the University of Toronto and an IEEE senior member, and also serves as chief scientist for the augmented reality startup, Meta. The first version of the EyeTap was produced in the 1970’s and was incredibly bulky by modern standards.

By 1998, he had developed the version that is commonly seen today, mounted over one ear and in front of one side of the face. This version is worn in front of the eye, recording what is immediately in front of the viewer and superimposing the view as digital imagery. It uses a beam splitter to send the same scene to both the eye and a camera, and is tethered to a computer worn on the body in a small pack.

MicroOptical TASK-9 (2000):
Founded in 1995 by Mark Spitzer, who is now a director at the Google X lab, the company produced several patented designs, which were bought up by Google after the company closed in 2010. One such design was the TASK-9, a wearable computer that is attachable to a set of glasses. Years later, MicroOptical’s line of viewers remain the lightest head-up displays available on the market.

Vuzix (1997-2013):
Founded in 1997, Vuzix created the first video eyewear to support stereoscopic 3D for the PlayStation 3 and Xbox 360. Since then, Vuzix went on to create the first commercially produced pass-through augmented reality headset, the Wrap 920AR. The Wrap 920AR has two VGA video displays and two cameras that work together to provide the user a view of the world which blends real world inputs and computer generated data.

Other products of note include the Wrap 1200VR, a virtual reality headset that has numerous applications – everything from gaming and recreation to medical research – and the Smart Glasses M100, a hands-free display for smartphones. And since the Consumer Electronics Show of 2011, they have announced and released several heads-up AR displays that are attachable to glasses.


MyVu (2008-2012):
Founded in 1995, also by Mark Spitzer, MyVu developed several different types of wearable video display glasses before closing in 2012. The most famous was their Myvu Personal Media Viewer, a set of display glasses that was released in 2008. These became instantly popular with the wearable computer community because they provided a cost-effective and relatively easy path to a DIY, small, single-eye, head-mounted display.

In 2010, the company followed up with the release of the Viscom digital eyewear, a device that was developed in collaboration with Spitzer’s other company, MicroOptical. This smaller, head-mounted display device comes with earphones and is worn over one eye like a pair of glasses, similar to the EyeTap.


Meta Prototype (2013):
Developed by Meta, a Silicon Valley startup that is being funded with the help of a Kickstarter campaign and supported by Steve Mann, this wearable computing eyewear utilizes the latest in VR and projection technology. Unlike other display glasses, Meta’s eyewear enters 3D space and uses your hands to interact with the virtual world, combining the benefits of the Oculus Rift with those being offered by “Sixth Sense” technology.

The Meta system includes stereoscopic 3D glasses and a 3D camera to track hand movements, similar to the portrayals of gestural control in movies like “Iron Man” and “Avatar.” In addition to display modules embedded in the lenses, the glasses include a portable projector mounted on top. This way, the user is able to both project and interact with computer simulations.

Google Glass (2013):
Developed by Google X as part of their Project Glass, the Google Glass device is a wearable computer with an optical head-mounted display (OHMD) that incorporates all the major advances made in the field of wearable computing for the past forty years. These include a smartphone-like hands-free format, wireless internet connection, voice commands and a full-color augmented-reality display.

Development began in 2011 and the first prototypes were previewed to the public at the Google I/O annual conference in San Francisco in June of 2012. Though they currently do not come with fixed lenses, Google has announced its intention to partner with sunglass retailers to equip them with regular and prescription lenses. There is also talk of developing contact lenses that come with embedded display devices.

Summary:
Well, that’s the history of digital eyewear in a nutshell. And as you can see, since the late 60’s, the field has progressed by leaps and bounds. What was once a speculative and visionary pursuit has now blossomed to become a fully-fledged commercial field, with many different devices being produced for public consumption.

At this rate, who knows what the future holds? In all likelihood, the quest to make computers more portable and ergonomic will keep pace with the development of more sophisticated electronics and computer chips, miniaturization, biotechnology, nanofabrication and brain-computer interfacing.

The result will no doubt be tiny CPUs that can be implanted in the human body and integrated into our brains via neural chips and tiny electrodes. In all likelihood, we won’t even need voice commands at that point, because neuroscience will have developed a means to communicate directly to our devices via brainwaves. The age of cybernetics will have officially dawned!

Like I said… fascinating, weird, and a little bit scary!


IFA 2013!

There is certainly no shortage of electronics shows happening this year! It seems that I just finished getting through all the highlights from Touch Taiwan, which happened back in August. And then September comes around and I start hearing all about IFA 2013. For those unfamiliar with this consumer electronics exhibition, IFA stands for Internationale Funkausstellung Berlin, which loosely translated means the Berlin Radio Show.

As you can tell from the name, this annual exhibit has some deep roots. Beginning in 1924, the show was intended to give electronics producers the chance to present their latest products and developments to the general public, as well as showcasing the latest in technology. From radios and cathode-ray display boxes (i.e. television) to personal computers and PDAs, the show has come a long way, and this year’s show promised to be a doozy as well.

Of all those who presented this year, Sony seems to have made the biggest impact. In fact, they very nearly stole the show with their presentation of their new smartphones, cameras and tablets. But it was their new Xperia Z1 smartphone that really garnered attention, given all the fanfare that preceded it. Check out the video by TechRadar:


However, their new Vaio Tap 11 tablet also got quite a bit of fanfare. In addition to a Haswell chip (Core i3, i5 or i7), a six-hour battery, full Windows connectivity, a camera, a stand, 128GB to 512GB of solid-state storage, and a wireless keyboard, the tablet has what is known as Near Field Communication (NFC), which comes standard on smartphones these days.

This technology allows the tablet to communicate with other devices and enable data transfer simply by touching them together or bringing them into close proximity. The wireless keyboard is also attachable to the device via a battery port which allows for constant charging, and the entire thing comes in a very thin package. Check out the video by Engadget:


Then there was the Samsung Galaxy Gear smartwatch, an exhibit which was equally anticipated and proved to be quite entertaining. Initially, the company had announced that their new smartwatch would incorporate flexible technology, which proved not to be the case. Instead, they chose to release a watch that was comparable to Apple’s own rumored smartwatch design.

But as you can see, the end result is still pretty impressive. In addition to telling time, it also has many smartphone-like options, like being able to take pictures, record and play videos, and link to your other devices via Bluetooth. And of course, you can also phone, text, instant message and download all kinds of apps. Check out the hands-on video below:


Toshiba also made a big splash with their exhibit featuring an expanded line of tablets, notebooks and hybrids, as well as Ultra High-Definition TVs. Of note was their M9 design, a next-generation concept that merges the latest in display and networking technology – i.e. the ability to connect to the internet or your laptop, allowing you to stream video, display pictures, and play games on a big ass display!

Check out the video, and my apologies for the fact that this and the next one are in German. There were no English translations:


And then there was their Cloud TV presentation, a form of “smart TV” that merges the best of a laptop with that of a television. Basically, this means that a person can watch video-on-demand, use social utilities, network, and save their files via cloud memory storage, all from their couch using a handheld remote. It’s like watching TV, but with all the perks of a laptop computer – one that also has a very big screen!


And then there was the HP Envy Recline, an all-in-one PC that has a hinge that allows the massive touchscreen to pivot over the edge of a desk and into the user’s lap. Clearly, ergonomics and adaptability were what inspired this idea, and many could not tell if it was a brilliant idea or the most enabling invention since the La-Z-Boy recliner. Still, you have to admit, it looks pretty cool:


Lenovo and Acer also attracted show-goers with their new lineup of smartphones, tablets, and notebooks. And countless more came to show off the latest in their wares and pimp out their own versions of the latest and greatest developments. The show ran from September 6th to 11th, and countless videos, articles and testimonials are still making their way to the fore.

For many of the products, release dates are still pending. But all those who attended managed to come away with the understanding that when it comes to computing, networking, gaming, mobile communications, and just plain lazing, the technology is moving by leaps and bounds. Soon enough, we are likely to have flexible technology available in all smart devices, and not just in the displays.

Nanofabricated materials are also likely to create cases that are capable of morphing and changing shape, going from a smartwatch, to a smartphone, to a smart tablet. For more on that, check out this video from Epic Technology, which showcases the most anticipated gadgets for 2014. These include transparent devices, robots, OLED curved TVs, next generation smartphones, the PS4, the Oculus Rift, and of course, Google Glass.

I think you’ll agree, next year’s gadgets are even more impressive than this year’s gadgets. Man, the future is moving fast!


Sources: b2b.ifa-berlin.com, technologyguide.com, telegraph.co.uk, techradar.com