Ending Parkinson's: Wearables and Cloud Storage

Behind Alzheimer's, Parkinson's disease is the second-most widespread neurodegenerative brain disorder in the world, affecting one out of every 100 people over the age of 60. Since it was first described in 1817 by Dr. James Parkinson, treatment and diagnosis have barely changed. Surgery, medications, and management techniques can help relieve symptoms, but as of yet, there is no cure.

In addition, the causes are not fully understood and appear to vary from individual to individual. Measuring the disease is also a slow process, one that doesn't generate nearly enough data for researchers to make significant progress. Luckily, Intel recently teamed up with the Michael J. Fox Foundation and proposed using wearable devices, coupled with cloud computing, to speed up the data collection process.

Due to the number of variables involved in Parkinson's symptoms – speed of movement, frequency and strength of tremors, effects on sleep, and so on – the symptoms are difficult and tedious to track. Often, data is accrued through patient diaries, which is a slow process. Intel's plan, which will involve the deployment of smartwatches, can not only increase the rate of data collection, but also capture a far greater volume and frequency of variables than a personal diary ever could.

It is hoped that the devices will be able to record 300 observations per second, creating a massive amount of data per patient. The use of wearables means that the data can even be reported and monitored by researchers and doctors in real time. Later this year, the MJFF is even planning on launching a mobile app that adds medication intake monitoring and allows patients to record how they feel, making personal diaries easier to create and share.
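
Just how massive? A quick back-of-the-envelope calculation gives a sense of the scale. The 300-observations-per-second figure comes from the announcement; the 16-byte record size below is purely an assumption for illustration:

```python
# Back-of-the-envelope math: what does 300 observations per second
# add up to over a day of continuous monitoring?

OBS_PER_SECOND = 300           # figure cited for the smartwatch platform
SECONDS_PER_DAY = 24 * 60 * 60
BYTES_PER_OBS = 16             # assumed payload size (timestamp + value)

obs_per_day = OBS_PER_SECOND * SECONDS_PER_DAY
print(f"Observations per patient per day: {obs_per_day:,}")  # 25,920,000

daily_bytes = obs_per_day * BYTES_PER_OBS
print(f"Raw data per patient per day: ~{daily_bytes / 1e9:.1f} GB")  # ~0.4 GB
```

Multiply that by thousands of patients and it is easy to see why a cloud back-end, rather than paper diaries, is the only practical home for the data.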

In order to collect and manage the data, it will be uploaded to a cloud-based data platform that can detect changes in the data in real time. This allows researchers to track changes in patient symptoms and to draw from a large pool of shared data to better spot common patterns and symptoms. In the end, it's not quite a cure, but it should help speed up the process of finding one.
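
As a rough illustration of what "detecting changes in real time" might look like under the hood, here is a toy sliding-window detector that flags readings deviating sharply from the recent baseline. The window size and threshold are arbitrary choices for the sketch, not details of Intel's actual platform:

```python
from collections import deque

def detect_changes(stream, window=300, threshold=3.0):
    """Flag readings that deviate sharply from the recent baseline.

    A toy stand-in for the kind of real-time change detection the
    cloud platform is described as performing; the window and
    threshold values are illustrative, not from the real system.
    """
    recent = deque(maxlen=window)
    for t, value in stream:
        if len(recent) == recent.maxlen:
            mean = sum(recent) / len(recent)
            var = sum((x - mean) ** 2 for x in recent) / len(recent)
            std = var ** 0.5
            if std > 0 and abs(value - mean) > threshold * std:
                yield t, value, mean   # a change worth a researcher's look
        recent.append(value)

# Example: steady tremor-amplitude readings with a sudden jump at t=500
readings = [(t, 1.0 + 0.01 * (t % 7)) for t in range(500)]
readings += [(t, 2.5) for t in range(500, 510)]
for event in detect_changes(readings):
    print("change detected:", event)
```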

Wearable technology, cloud computing and wireless data monitoring are the hallmarks of personalized medicine, which appears to be the way of the future. And while the idea of keeping medical information in centralized databases may make some nervous (as it raises certain privacy issues), keeping the data anonymous and focused on symptoms should lead to the speedy development of treatments and even cures.

And be sure to check out this video from the intelnewsroom, explaining the collaboration in detail:

Source: extremetech.com

 

The Future of Devices: The Wearable Tech Boom

The wearable computing revolution that has been taking place in recent years has drawn in developers and tech giants from all over the world. Its roots are deep, dating back to the late 1960s and early 1980s with the Sword of Damocles concept and the work of Steve Mann. But in recent years, thanks to the development of Google Glass, the case for wearable tech has moved beyond hobbyists and enthusiasts and into the mainstream.

And with display glasses now accounted for, the latest boom in development appears to be centered on smartwatches and similar devices. These range from fitness trackers with just a few features to wrist-mounted versions of smartphones that boast the same constellation of functions and apps (email, phone, text, Skype, etc.). And as always, the big-name companies are coming forward with their own concepts and designs.

First, there's the much-anticipated Apple iWatch, which is still in the rumor stage. The company has been working on the project since late 2012, but has accelerated the process as it tries to expand its family of mobile devices to the wrist. Apple has already started trademarking the name in a number of countries in preparation for a late 2014 launch, perhaps in October, with the device entering mass production in July.

And though it's not yet clear what the device will look like, several mockups and proposals have been leaked. Recent reports from sources like Reuters and The Wall Street Journal have pointed towards multiple screen sizes and price points, suggesting an array of different band and face options in various materials to position it as a fashion accessory. It is also expected to include a durable sapphire crystal display, produced in collaboration with Apple partner GT Advanced.

While the iWatch will perform some tasks independently using the new iOS 8 platform, it will be dependent on a compatible iOS device for functions like receiving messages, voice calls, and notifications. It is also expected to feature wireless charging capabilities, advanced mapping abilities, and possibly near-field communication (NFC) integration. An added bonus, as indicated by Apple's recent filing for patents associated with its "Health" app, is the inclusion of biometric and health sensors.

Along with serving as a companion device to the iPhone and iPad, the iWatch will be able to measure multiple health-related metrics. Consistent with the features of a fitness band, these will include things like steps taken, calories burned, sleep quality, heart rate, and more. The iWatch is said to include 10 different sensors to track health and fitness, providing an overall picture of health and making the health-tracking experience more accessible to the general public.

Apple has reportedly designed iOS 8 with the iWatch in mind, and the two are said to be heavily reliant on one another. The iWatch will likely take advantage of the "Health" app introduced with iOS 8, which may display all of the health-related information gathered by the watch. Currently, Apple is gearing up to begin mass production of the iWatch, and has been testing the device's fitness capabilities with professional athletes such as Kobe Bryant, who will likely go on to promote it following its release.

Not to be outdone, Google launched its own smartwatch platform – known as Android Wear – at this year's I/O conference. Android Wear is the company's software platform for linking smartwatches from companies including LG, Samsung and Motorola to Android phones and tablets. A preview of Wear was introduced this spring, but the I/O conference provided more details on how it will work and made it clear that the company is investing heavily in the notion that wearables are the future.

Android Wear takes much of the functionality of Google Now – an intelligent personal assistant – and uses the smartwatch as a home for receiving notifications and context-based information. In the case of travel, for example, Android Wear will push relevant flight, weather and other information directly to the watch, where the user can tap and swipe their way through it and use embedded prompts and voice control to take further actions, like dictating a note with reminders to pack rain gear.

For the most part, Google had already revealed what Wear will be able to do in its preview, but its big on-stage debut at I/O was largely about getting app developers to buy into the platform and to design with a peripheral wearable interface in mind. Apps can be designed to harness different Android Wear "intents." For example, the Lyft app takes advantage of the "call me a car" intent and can be set as the default means of hailing a ride when you tell your smartwatch to find you a car.
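
That intent mechanism is simple enough to sketch in miniature. To be clear, this is not the actual Android Wear API, just a toy illustration of the dispatch pattern: apps register as handlers for a named intent, and the system routes a recognized voice command to the user's chosen default:

```python
# A toy sketch of intent-style dispatch -- not the real Android Wear
# API, just the pattern it is built on: apps register as handlers for
# a named intent, and the system routes a recognized voice command to
# the user's chosen default handler.

handlers = {}   # intent name -> list of registered apps

def register(intent, app):
    handlers.setdefault(intent, []).append(app)

def dispatch(intent, default=None, **params):
    apps = handlers.get(intent, [])
    app = default if default in apps else (apps[0] if apps else None)
    if app is None:
        return f"No app handles '{intent}'"
    return f"{app} handling '{intent}' with {params}"

register("call_me_a_car", "Lyft")   # hypothetical registration call
register("take_a_note", "Keep")

# "OK Google, call me a car" resolves to the default ride-hailing app:
print(dispatch("call_me_a_car", default="Lyft", pickup="current location"))
```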

Google officials also claimed at I/O that the same interface behind Android Wear will be behind their new Android Auto and Android TV, two other integrated services that allow users to interface with their car and television via a mobile device. So don't be surprised if you see someone unlocking or starting their car by talking into their watch in the near future. The first Android Wear watches – the Samsung Gear Live and the LG G Watch – are available to pre-order, and the round-faced Motorola Moto 360 is expected to come out later this summer.

All of these steps in integration and wearable technology are signs of an emergent trend, one where just about everything, from personal devices to automobiles and even homes, is smart and networked together – thus giving rise to a world where everything is remotely accessible. This concept, otherwise known as the "Internet of Things", is expected to become the norm in the next 20 years, and will include other technologies like display contacts and mediated (aka augmented) reality.

And be sure to check out this concept video of the Apple iWatch:


Sources:
cnet.com (2), macrumors.com, engadget.com, gizmag.com

The First Government-Recognized Cyborg

Those who follow tech news are probably familiar with the name Neil Harbisson. A futurist who was born with a condition known as achromatopsia – which means he sees everything in shades of gray – he spent much of his life looking to augment himself so that he could see what other people see. And roughly ten years ago, he succeeded by creating a device known as the "eyeborg".

Also known as a cybernetic "third eye", this device – which is permanently integrated into his person – allows Harbisson to "hear" colors by translating visual information into specific sounds. After years of use, he is able to discern different colors based on their sounds with ease. But what's especially interesting about this device is that it makes Harbisson a bona fide cyborg.

What's more, Neil Harbisson is now the first person on the planet to have a passport photo that shows his cyborg nature. After a long battle with UK authorities, his passport now features a photo of him, eyeborg and all. And now, he is looking to help other cyborgs like himself gain more rights, mainly because of the difficulties such people have been facing in recent years.

Consider the case of Steve Mann, the man recognized as the "father of wearable computers". Since the 1970s, he has been working towards the creation of fully-portable, ergonomic computers that people can carry with them wherever they go. The result was the EyeTap, a wearable computer he invented in 1998 and then had grafted to his head.

In July of 2012, he was ejected from a McDonald's in Paris after several staff members tried to forcibly remove the wearable device. Then in April of 2013, a bar in Seattle banned patrons from using Google Glass, declaring that "ass-kickings will be encouraged for violators." Other businesses across the world have followed, fearing that people wearing these devices may be taking photos or video and posting them to the internet.

Essentially, Harbisson believes that recent technological advances mean there will be a rapid growth in the number of people with cybernetic implants in the near future, implants that will either assist them or give them enhanced abilities. As he put it in a recent interview:

Our instincts and our bodies will change. When you incorporate technology into the body, the body will need to change to accommodate; it modifies and adapts to new inputs. How we adapt to this change will be very interesting.

Other human cyborgs include Stelarc, a performance artist who has had an ear implanted on his forearm; Kevin Warwick, the "world's first human cyborg", who has an RFID chip embedded beneath his skin, allowing him to control devices such as lights, doors and heaters; and "DIY cyborg" Tim Cannon, who has a self-administered body-monitoring device in his arm.

And though they are still in the minority, the number of people living with integrated electronic or bionic devices is growing. In order to ensure that the transition he foresees is accomplished as painlessly as possible, Harbisson created the Cyborg Foundation in 2010. According to its website, the organization's mission is to:

help humans become cyborgs, to promote the use of cybernetics as part of the human body and to defend cyborg rights [whilst] encouraging people to create their own sensory extensions.

And as mind-controlled prosthetics, implants, and other devices meant to augment a person's senses, faculties, and ambulatory ability are introduced, we can expect people to begin actively integrating them into their bodies. Beyond correcting for injuries or disabilities, the increasing availability of such technology is also likely to draw people looking to enhance their natural abilities.

In short, the future is likely to be a place in which cyborgs are a common feature of our society. The size and shape of that society is difficult to predict, but given that its existence is all but certain, we as individuals need to be able to address it. Not only is it an issue of tolerance; there's also the need for informed decision-making when it comes to whether or not individuals should make cybernetic enhancements a part of their lives.

Basically, there are some tough issues that need to be considered as we make our way into the future. And having a forum where they can be discussed in a civilized fashion may be the only recourse to a world permeated by prejudice and intolerance on the one hand, and runaway augmentation on the other.

In the meantime, it might not be too soon to look into introducing some regulations, just to make sure we don't have any yahoos turning themselves into killer cyborgs in the near future! PS: Bonus points for anyone who can identify which movie the photo above is taken from…

Sources: IO9.com, dezeen.com, eyeborg.wix.com

The Future of Computing

Look what you started, Nicolla 😉 After talking at length about the history of computing a few days ago, I got to thinking about the one aspect of the whole issue that I happened to leave out. Namely, the future of computing, with all the cool developments that we are likely to see in the next few decades or centuries.

Much of that came up in the course of my research, but unfortunately, after thirteen or so examples from the history of computing, I was far too tired and burnt out to get into the future of it as well. And so I carry on today, with a brief (I promise!) list of developments that we are likely to see before the century is out… give or take. Here they are:

Chemical Computer:
Here we have a rather novel idea for the future of hardware. Otherwise known as a reaction-diffusion or “gooware” computer, this concept calls for the creation of a semi-solid chemical “soup” where data is represented by varying concentrations of chemicals and computations are performed by naturally occurring chemical reactions.

The concept is based on the Belousov-Zhabotinsky reaction, a chemical experiment which demonstrated that wave phenomena can indeed take place in chemical reactions, seemingly contradicting the theory of thermodynamics, which holds that entropy will only increase in a closed system. In fact, the BZ experiments showed that such cyclic effects can take place without breaking the laws of nature.

Amongst theoretical models, it remains a top contender for future use for the simple reason that it is far less limiting than current microprocessors. Whereas the latter only allow the flow of data in one direction at a time, a chemical computer theoretically allows for the movement of data in all directions and all dimensions, both with and against each other.
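
To give a feel for how computation-by-waves might work, here is a toy excitable-medium simulation in the style of the Greenberg-Hastings cellular automaton, a classic crude stand-in for BZ-like chemical waves. It only illustrates wave propagation through a medium, not an actual chemical computer design:

```python
import numpy as np

# Each cell is resting, excited, or refractory. A resting cell fires
# when a neighbor is excited; a fired cell must rest before it can
# fire again. This produces expanding chemical-style waves.
RESTING, EXCITED, REFRACTORY = 0, 1, 2

def step(grid):
    new = grid.copy()
    excited = (grid == EXCITED)
    # Any excited cell in the von Neumann neighborhood?
    neighbors = (np.roll(excited, 1, 0) | np.roll(excited, -1, 0) |
                 np.roll(excited, 1, 1) | np.roll(excited, -1, 1))
    new[(grid == RESTING) & neighbors] = EXCITED   # wave front advances
    new[grid == EXCITED] = REFRACTORY              # just-fired cells recover
    new[grid == REFRACTORY] = RESTING              # recovered cells reset
    return new

grid = np.zeros((11, 11), dtype=int)
grid[5, 5] = EXCITED                 # a single "drop" of excitation
for _ in range(4):
    grid = step(grid)
print(grid)   # an expanding ring of 1s: a chemical-style wave
```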

For obvious reasons, the concept is still very much in the experimental stage and no working models have been proposed at this time.

DNA Computing:
Yet another example of an unconventional computer design, this one uses biochemistry and molecular biology, rather than silicon-based hardware, to conduct computations. The concept was originally proposed by Leonard Adleman of the University of Southern California in 1994, when he demonstrated how DNA could be used to conduct multiple calculations at once.
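
That demonstration encoded a small directed Hamiltonian path problem in DNA: strands representing every possible route through a set of nodes were mixed in a test tube, and chemical steps filtered out the invalid ones in parallel. Below is a conventional, serial sketch of that generate-and-filter strategy; the little graph is made up for illustration, not Adleman's actual instance:

```python
from itertools import permutations

# Every candidate route plays the role of a DNA strand; the filters
# play the role of the chemical steps that discard invalid strands.
edges = {("A", "B"), ("B", "C"), ("C", "D"), ("B", "D"), ("A", "C")}
nodes = ["A", "B", "C", "D"]

def hamiltonian_paths(nodes, edges, start, end):
    for route in permutations(nodes):          # every candidate "strand"
        if route[0] != start or route[-1] != end:
            continue                           # wrong endpoints: discard
        if all((a, b) in edges for a, b in zip(route, route[1:])):
            yield route                        # survives every filter

print(list(hamiltonian_paths(nodes, edges, "A", "D")))
# [('A', 'B', 'C', 'D')]
```

The code checks each route one at a time; the whole appeal of DNA computing is that the chemistry checks them all at once.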

Much like chemical computing, the potential here is to be able to build a machine that is not restricted as conventional machines are. In addition to being able to compute in multiple dimensions and directions, the DNA basis of the machine means it could be merged with other organic technology, possibly even a fully-organic AI (a la the 12 Cylon models).

While progress in this area remains modest thus far, Turing-complete models have been constructed, the most notable of which is the model created by the Weizmann Institute of Science in Rehovot, Israel in 2002. There, researchers unveiled a programmable molecular computing machine composed of enzymes and DNA molecules instead of silicon microchips, one that would theoretically be capable of diagnosing cancer in a cell and releasing anti-cancer drugs.

Nanocomputers:
In keeping with the tradition of making computers smaller and smaller, scientists have proposed that the next generation of computers should measure only a few nanometers in size. That's 1×10⁻⁹ meters for those who are mathematically inclined. As part of the growing field of nanotechnology, the application is still largely theoretical and dependent on further advancements. Nevertheless, the process is a highly feasible one with many potential benefits.

Here, as with many of these other concepts, the plan is simple. By further miniaturizing the components, a computer could be shrunk to the size of a chip and implanted anywhere on the human body (i.e. "wetware" or silicate implants). This would ensure maximum portability, and, coupled with a wireless interface device (see Google Glass or VR contact lenses), it could be accessed at any time and in any place.

Optical Computers:
Compared to the previous examples, this proposed computer is quite straightforward, even if it is radically advanced. While today's computers rely on the movement of electrons in and out of transistors to perform logic, an optical computer relies on the movement of photons.

The immediate advantage of this is clear: given that photons are much faster than electrons, computers equipped with optical components would be able to process information at significantly greater speeds. In addition, researchers contend that this can be done with less energy, making optical computing a potential green technology.

Currently, creating optical computers is just a matter of replacing electronic components with optical ones, which requires optical transistors, themselves composed of non-linear crystals. Such materials exist and experiments are already underway. However, there remains controversy as to whether the proposed benefits will pay off, or be comparable to other technologies (such as semiconductors). Only time will tell…

Quantum Computers:
Next, and perhaps most revolutionary of all, is the concept of quantum computing – a device that relies on quantum mechanical phenomena to perform operations. Unlike digital computers, which require data to be encoded into binary digits (aka bits), quantum computation utilizes quantum properties to represent data and perform calculations.

The field of quantum computing was first introduced by Richard Feynman in 1982 and represents the latest advancements in field theory. Much like chemical and DNA-based computer designs, the theoretical quantum computer would also have the ability to conduct multiple computations at the same time, mainly because it would have the ability to be in more than one state simultaneously.
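
That "more than one state at once" idea can be illustrated with a few lines of classical simulation. A single qubit starts in |0⟩, and a Hadamard gate puts it into an equal superposition of |0⟩ and |1⟩. Of course, this only simulates the math on an ordinary machine; a real quantum computer would hold the state physically:

```python
import numpy as np

# A qubit's state is a 2-component complex vector; gates are unitary
# matrices. Applying a Hadamard gate to |0> yields an equal
# superposition of |0> and |1>.
ket0 = np.array([1.0, 0.0])                   # |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = H @ ket0
print(state)                                  # [0.707..., 0.707...]
print("P(0) =", abs(state[0])**2, " P(1) =", abs(state[1])**2)  # 0.5 each
```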

The concept remains highly theoretical, but a number of experiments have been conducted in which quantum computational operations were executed on a very small number of qubits (quantum bits). Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis.

Wearable Computers:
Last, and most feasible, is the wearable computer, which has already been developed for commercial use. Essentially, these are a class of miniature electronic devices that are worn on the bearer's person, either under or on top of clothing. A popular version of this concept is the wrist-mounted option, where the computer is worn like a watch.

The purposes and advantages of this type of computer are obvious, especially for applications that require more complex computational support than hard-coded logic can provide. Another advantage is the constant interaction between user and computer, as it is integrated into all the other functions of the user's daily life. In many ways, it acts as a prosthesis, being an extension of the user's mind and body.

Pretty cool, huh? And to think that these and possibly other concepts could be feasible within our own lifetimes. Given the current rate of progress in all things high-tech, we could be looking at fully-integrated computer implants, biological computers and AIs with biomechanical brains. Wouldn't that be both amazing and potentially frightening!