Ending Parkinson's: Wearables and Cloud Storage

Behind Alzheimer’s, Parkinson’s disease is the second-most widespread neurodegenerative brain disorder in the world, affecting one out of every 100 people over the age of 60. Since it was first described in 1817 by Dr. James Parkinson, treatment and diagnosis have barely changed. Surgery, medications, and management techniques can help relieve symptoms, but as of yet, there is no cure.

In addition, the causes are not fully understood and appear to vary from one individual to the next. Measuring the disease’s progression is also a slow process that doesn’t generate nearly enough data for researchers to make significant progress. Luckily, Intel recently teamed up with the Michael J. Fox Foundation and has proposed using wearable devices, coupled with cloud computing, to speed up the data collection process.

Due to the number of variables involved in Parkinson’s symptoms – speed of movement, frequency and strength of tremors, effects on sleep, and so on – the symptoms are difficult and tedious to track. Often, data is accrued through patient diaries, which is a slow process. Intel’s plan, which will involve the deployment of smartwatches, can not only increase the rate of data collection, but also capture far more variables, at much higher frequency, than a personal diary ever could.

It is hoped that the devices will be able to record 300 observations per second, creating a massive amount of data per patient. The use of wearables also means that the data can be reported to and monitored by researchers and doctors in real time. Later this year, the MJFF is even planning to launch a mobile app that adds medication intake monitoring and lets patients record how they feel, making personal diaries easier to create and share.
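To put that figure in perspective, here’s a quick back-of-the-envelope calculation of how much data a single patient could generate. The 300 observations per second comes from the article; the 20 bytes per observation is my own assumption, since the actual payload size hasn’t been published:

```python
# Rough estimate of per-patient data volume from a Parkinson's wearable.
OBS_PER_SECOND = 300
BYTES_PER_OBS = 20               # assumed record size -- not a published figure
SECONDS_PER_DAY = 24 * 60 * 60

obs_per_day = OBS_PER_SECOND * SECONDS_PER_DAY
mb_per_day = obs_per_day * BYTES_PER_OBS / 1_000_000

print(f"Observations per patient per day: {obs_per_day:,}")   # 25,920,000
print(f"Raw data per patient per day: ~{mb_per_day:.0f} MB")  # ~518 MB
```

Even under conservative assumptions, that’s hundreds of megabytes per patient per day – a very different scale from a handwritten diary.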

In order to collect and manage all of this data, it will be uploaded to a cloud-based storage and analytics platform capable of detecting changes in the data in real time. This allows researchers to track changes in patient symptoms and draw from a large pool of data to better spot common patterns and symptoms. In the end, it’s not quite a cure, but it should help speed up the process of finding one.
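Just to illustrate what “noticing changes in real time” might look like under the hood, here’s a minimal sketch of a streaming change detector. To be clear, this is my own toy example, not Intel’s actual platform – the window size, threshold, and function names are all assumptions:

```python
from collections import deque

def make_change_detector(window: int = 300, threshold: float = 3.0):
    """Flag readings that sit far outside the recent baseline.

    Purely illustrative values -- nothing here reflects the real platform."""
    history = deque(maxlen=window)

    def detect(reading: float) -> bool:
        flagged = False
        if len(history) == window:                  # enough baseline collected
            mean = sum(history) / window
            std = (sum((x - mean) ** 2 for x in history) / window) ** 0.5
            flagged = abs(reading - mean) > threshold * max(std, 1e-9)
        history.append(reading)
        return flagged

    return detect

# Feed in a stream of (made-up) tremor-intensity readings.
detector = make_change_detector()
for value in [1.0] * 400 + [5.0]:
    if detector(value):
        print("Change detected:", value)    # fires on the 5.0 reading
```

The real system would obviously be far more sophisticated, but the basic idea – compare each incoming reading against a rolling baseline and flag outliers as they arrive – is the same.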

Wearable technology, cloud computing and wireless data monitoring are the hallmarks of personalized medicine, which appears to be the way of the future. And while the idea of keeping medical information in centralized databases may make some nervous (as it raises certain privacy issues), keeping the data anonymous and focused on symptoms should lead to the speedy development of treatments and even cures.

And be sure to check out this video from the intelnewsroom, explaining the collaboration in detail:

Source: extremetech.com

 

Making Tech Accessible: Helping Amputees in War-Torn Sudan

The new year is flying by quickly, and many relevant stories involving life-changing tech developments are flying by even faster. And in my busyness and haste to deal with my own writing, I’ve sadly let a lot of stories slip through my fingers. Lucky for me, there’s no statute of limitations when it comes to blogging. Even if you cover something late, it’s not like someone’s going to fire you!

That said, here is one news item I’m rather ashamed of not having gotten to sooner. It’s no secret that 3D printing is offering new possibilities for amputees and prosthetic devices, in part because the technology offers greater accessibility and lower costs to those who need them. And one area in serious need is the developing, war-torn nation of Sudan.

And thanks to Mick Ebeling, co-founder and CEO of Not Impossible Labs, 3D-printed prosthetics are now being offered to victims of the ongoing war. After learning of a 14-year-old boy named Daniel who lost both arms in a government air raid, Ebeling traveled to the Nuba Mountains to meet him in person. Having already worked on a similar project in South Africa, he decided to bring 3D-printed prosthetics to the area.

Ebeling was so moved by Daniel’s plight that he turned to a world-class team of thinkers and doers – including the inventor of the Robohand, an MIT neuroscientist, a 3D printing company in California, and funding from Intel and Precipart – to see how they could help Daniel and kids like him. Fittingly, he decided to name the effort “Project Daniel”.

And now, just a year later, Not Impossible Labs has its own little lab at a hospital in the region, where it is able to print prosthetic arms for $100 a pop and in less than six hours. Meanwhile, Daniel not only got his left-arm prosthetic in November, but he is currently employed at the hospital, helping to print prosthetics for other children who have suffered the same fate as him.

Ebeling says the printed arm isn’t as sophisticated as others out there, but it did allow Daniel to feed himself for the first time in two years. And while Daniel won’t be able to lift heavy objects or control his fingers with great precision, the prosthetic is affordable and produced locally, so it also serves as an economically viable stand-in until the tech for 3D-printed prosthetics improves and comes down in cost.

Not Impossible Labs, which has already fitted others with arms, says it hopes to extend its campaign to thousands of amputees like Daniel. It’s even made the design open source in the hopes that others around the world will be able to replicate the project, setting up similar labs to provide low-cost prosthetics to those in need. After all, there are plenty of war-torn regions in the developing world today, and no shortage of victims.

In the coming years, it would be incredibly encouraging to see similar labs set up in developing nations in order to address the needs of local amputees. In addition to war, landmines, terrorism, and even lack of proper medical facilities give rise to the need for cheap, accessible prosthetics. All that’s really needed is an internet connection, a 3D printer, and some ABS plastic for raw material.

None of this is beyond the budgets of most governments or NGOs, so such partnerships are not only possible but entirely feasible. For the sake of kids like Daniel, it’s something we should make happen! And in the meantime, check out the video below, courtesy of Not Impossible Labs, which showcases the printing technology used by Project Daniel and the inspiring story behind it.

And be sure to check out their website for more details and information on how you can help!



Sources: news.cnet.com, notimpossiblelabs.com

The Future is Here: Paper-Thin Computers

Score one for Canadian researchers and ingenuity! Oh, and Intel and Plastic Logic helped out a little bit 😉 It’s known as the PaperTab, a revolutionary concept that builds on the paper-thin smartphone and recent advances in flexible AMOLED displays. The design made the rounds at this year’s Consumer Electronics Show in Las Vegas, and it turned quite a few heads!

As already noted, the PaperTab incorporates the latest in flexible display technology to create a 10.7-inch e-ink touchscreen, powered by a Core i5 processor. Users control it by bending and flexing it, touching the screen, and tapping one tab against another. But instead of using it like a normal tablet, the idea is that you have lots of PaperTabs, with each one representing a different app – email, a document, a browser, and so on.

This might sound like a bit of a downgrade, but the coolest thing about this new computing paradigm is that each PaperTab is aware of other PaperTabs in its proximity. You might push two PaperTabs together to extend an app onto both screens, or attach a file to an email by simply tapping one PaperTab on another. In this way, a PaperTab functions like an ordinary paper document, but with the added benefit of being electronic and transferable.
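Since no API has been published, here’s a purely hypothetical sketch of how that tab-to-tab interaction might be modeled in software. The class and method names are entirely my own invention, just to illustrate the paradigm of adjacency-aware devices handing documents to one another:

```python
from dataclasses import dataclass, field

@dataclass
class PaperTab:
    """A toy stand-in for one PaperTab; not based on any published API."""
    app: str
    documents: list = field(default_factory=list)
    neighbours: set = field(default_factory=set)

    def bring_near(self, other: "PaperTab") -> None:
        """Register two tabs as adjacent, e.g. pushed together on a desk."""
        self.neighbours.add(other.app)
        other.neighbours.add(self.app)

    def tap(self, other: "PaperTab", document: str) -> None:
        """Tapping one tab against another hands a document across."""
        if other.app not in self.neighbours:
            raise ValueError("tabs must be adjacent before they can interact")
        other.documents.append(document)

editor = PaperTab(app="word processor", documents=["report.txt"])
email = PaperTab(app="email")
editor.bring_near(email)
editor.tap(email, "report.txt")      # attach the draft to an email
print(email.documents)               # ['report.txt']
```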

As it stands, the concept is merely a tech demo put on by researchers from Queen’s University and corporate reps from Intel and Plastic Logic. No other information is currently available from any of these sources, but more will likely trickle out now that CES 2013 has wrapped up and they don’t need to keep us guessing anymore. And if I were a betting man, I’d say they’ll be available in packs of five or ten, for roughly the same price as an iPad 7, since they’ll probably be coming out at about the same time.

Check out the video below of the PaperTab on display at CES 2013 and the demo the team provided:

Of Mechanical Minds

A few weeks back, a friend of mine, Nicola Higgins, directed me to an article about Google’s new neural net. Not only did she provide me with a damn interesting read, she also challenged me to write an article about the different types of robot brains. Well, Nicola, as Barney Stinson would say: “Challenge accepted!” And I’ve got to say, it was a fun topic to get into.

After much research and plugging away at the lovely thing known as the internet (which, by the way, was anticipated by Vannevar Bush with his proposed Memex system decades earlier), I managed to compile a list of the most historically relevant examples of mechanical minds, culminating in the development of Google’s neural net. Here we go…

Earliest Examples:
Even in ancient times, the concept of automata and arithmetic machinery can be found in certain cultures. In the Near East, the Arab world, and as far east as China, historians have found examples of primitive machinery designed to perform one task or another. And though few specimens survive, there are examples of machines that could perform complex mathematical calculations…

Antikythera mechanism:
Invented in ancient Greece, and recovered in 1901 from a shipwreck near the island that gives it its name, the Antikythera mechanism is the world’s oldest known analog calculator, built to calculate the positions of the heavens for ancient astronomers. However, it was not until a century after its recovery that its true complexity and significance would be fully understood. Built in the 1st century BCE, it would not be until the 14th century CE that machines of comparable complexity were built again.

Although it is widely theorized that this “clock of the heavens” must have had several predecessors during the Hellenistic Period, it remains the oldest surviving analog computer in existence. After collecting all the surviving pieces, scientists were able to reconstruct the design (pictured at right), which essentially amounted to a large box of interconnecting gears.

Pascaline:
Otherwise known as the Arithmetic Machine and the Pascal Calculator, this device was invented by French mathematician Blaise Pascal in 1642 and is the first known example of a mechanized mathematical calculator. Apparently, Pascal invented this device to help his father reorganize the tax revenues of the French province of Haute-Normandie, and went on to create 50 prototypes before he was satisfied.

Of those 50, nine survive and are currently on display in various European museums. In addition to giving his father a helping hand, its introduction launched the development of mechanical calculators across Europe and, eventually, the world. Its invention is also directly linked to the development of the microprocessor roughly three centuries later, which in turn led to the development of PCs and embedded systems.

The Industrial Revolution:
With the rise of machine production, computational technology would see a number of developments. Key to all of this was the emergence of automation and the rationalization of society. Between the 18th and late 19th centuries, as every aspect of western society came to be organized and regimented around the idea of regular production, machines needed to be developed that could handle the task of crunching numbers and storing the results.

Jacquard Loom:
Invented in 1801 by Joseph Marie Jacquard, a French weaver and merchant, the loom that bears his name was the first programmable machine in history, relying on punch cards to input instructions and turn out textiles of various patterns. Though it was based on earlier inventions by Basile Bouchon (1725), Jean Baptiste Falcon (1728) and Jacques Vaucanson (1740), it remains the most well-known example of a programmable loom and the earliest machine controlled through punch cards.

Though the Loom did not perform computations, the design was nevertheless an important step in the development of computer hardware. Charles Babbage would borrow many of its features in designing his Analytical Engine (see next example), and the use of punch cards would remain a staple of the computing industry well into the 20th century, until the development of the microprocessor.

Analytical Engine:
Often confused with his earlier “Difference Engine”, this concept was proposed by English mathematician Charles Babbage. Beginning in 1822, Babbage contemplated designs for a machine that could automate the creation of error-free mathematical tables – a task that had frustrated the teams of mathematicians attempting it by hand – and that work eventually led him to the far more ambitious, general-purpose Analytical Engine.

Though he was never able to complete construction of a finished product, due to apparent difficulties with the chief engineer and funding shortages, his proposed engine incorporated an arithmetical unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first Turing-complete design for a general-purpose computer. His various trial models (like that featured at left) are currently on display in the Science Museum in London, England.

The Birth of Modern Computing:
The early 20th century saw several new developments, many of which would play a key role in the evolution of modern computers. Foremost was the use of electricity for industrial applications: from this point forward, computers would be powered by alternating and/or direct current, and some would even use it to store information. At the same time, older ideas would remain in use but become refined, most notably the use of punch cards and tape to read instructions and store results.

Tabulating Machine:
The next development in computation came roughly 70 years later, when Herman Hollerith, an American statistician, developed a “tabulator” to help process information from the 1890 US Census. In addition to being the first electromechanical device designed to assist in summarizing information (and later, accounting), it went on to spawn the entire data processing industry.

Six years after the 1890 Census, Hollerith formed his own company, the Tabulating Machine Company, to build machines that could tabulate information from punch cards. In 1924, after several mergers and consolidations, Hollerith’s company was renamed International Business Machines (IBM), which would go on to build the first “supercomputer” for Columbia University in 1931.

Atanasoff–Berry Computer:
Next, we have the ABC, the first electronic digital computing device in the world. Conceived in 1937, the ABC shared several characteristics with its predecessors, not least of which was the fact that it was electrically powered and relied on punch cards to store data. Unlike its predecessors, however, it was the first machine to compute with digital symbols and the first computer to use vacuum tube technology.

These additions allowed the ABC to achieve computational speeds previously thought impossible for a mechanical computer. However, the machine was limited in that it could only solve systems of linear equations, and its punch card storage system was deemed unreliable. Work on the machine also stopped when its inventor, John Vincent Atanasoff, was called away to assist with cryptographic assignments during World War II. Nevertheless, the machine remains an important milestone in the development of modern computers.
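For those wondering what “solving systems of linear equations” actually involves, here’s a tiny modern illustration using ordinary elimination. The ABC itself was designed to handle systems of up to 29 equations, and did it with vacuum tubes and drum memory rather than code; this is just a sketch of the underlying math:

```python
def solve_2x2(a11, a12, b1, a21, a22, b2):
    """Solve  a11*x + a12*y = b1  and  a21*x + a22*y = b2  by elimination."""
    factor = a21 / a11
    a22e = a22 - factor * a12        # eliminate x from the second equation
    b2e = b2 - factor * b1
    y = b2e / a22e                   # back-substitute
    x = (b1 - a12 * y) / a11
    return x, y

# 2x + y = 5  and  x + 3y = 10  ->  x = 1, y = 3
print(solve_2x2(2, 1, 5, 1, 3, 10))
```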

Colossus:
There’s something to be said about war being the engine of innovation, and the Colossus – the machine used to break German codes in the Second World War – is certainly no exception to this rule. Due to the secrecy surrounding it, it would not have much influence on later computing, and its story would not be rediscovered until the 1990s. Still, it represents an important step in the development of computing, as it relied on vacuum tube technology and punched tape to perform its calculations, and proved most adept at solving complex mathematical computations.

Originally conceived by Max Newman, the British mathematician who was chiefly responsible for breaking German codes at Bletchley Park during the war, the machine was proposed as a means of combatting the German Lorenz machine, which the Nazis used to encode their high-level wireless transmissions. The first model was built in 1943, and ten variants of the machine were produced for the Allies before war’s end, proving instrumental in bringing down the Nazi war machine.

Harvard Mark I:
Also known as the “IBM Automatic Sequence Controlled Calculator (ASCC)”, the Mark I was an electro-mechanical computer that was devised by Howard H. Aiken, built by IBM, and officially presented to Harvard University in 1944. Due to its success at performing long, complex calculations, it inspired several successors, most of which were used by the US Navy and Air Force for the purpose of running computations.

According to IBM’s own archives, the Mark I was the first computer that could execute long computations automatically. Built within a steel frame 51 feet (16 m) long and eight feet high, and using 500 miles (800 km) of wire with three million connections, it was the industry’s largest electromechanical calculator and the largest computer of its day.

Manchester SSEM:
Nicknamed “Baby”, the Manchester Small-Scale Experimental Machine (SSEM) was developed in 1948 and was the world’s first computer to incorporate stored-program architecture. Whereas previous computers relied on punched tape or cards to store calculations and results, “Baby” was able to do this electronically.

Although its abilities were still modest – a 32-bit word length, a memory of just 32 words, and hardware capable only of subtraction and negation, with everything else done in software – it was still revolutionary for its time. The SSEM also drew on the theoretical work of Alan Turing – another British cryptographer, whose concept of the “Turing machine” and work on algorithms would form the basis of modern computer science.
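To illustrate what “subtraction and negation only” means in practice, here’s a small modern sketch of the trick an SSEM program had to rely on: synthesizing addition out of subtraction, since a + b is just a − (0 − b):

```python
# The SSEM's hardware could only negate and subtract; anything else had to be
# built in software. The same trick, written as modern Python:

def ssem_add(a: int, b: int) -> int:
    """Add two numbers using only subtraction, as an SSEM program had to."""
    return a - (0 - b)

print(ssem_add(7, 5))    # 12
print(ssem_add(7, -5))   # 2
```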

The Nuclear Age to the Digital Age:
With the end of World War II and the birth of the Nuclear Age, technology once again took several explosive leaps forward. This could be seen in the realm of computer technology as well, where wartime developments and commercial applications grew by leaps and bounds. In addition to processor speeds and stored memory multiplying exponentially every few years, the overall size of computers got smaller and smaller. This, some theorized, would eventually lead to computers that were perfectly portable and smart enough to pass the “Turing test”. Imagine!

IBM 7090:
The 7090 model, released in 1959, is often referred to as a second-generation computer because, unlike its predecessors, which were either electromechanical or relied on vacuum tubes, this machine used transistors to carry out its computations. It also improved on earlier models by using a 36-bit word length and storing up to 32K (32,768) words – a modest increase in word length over the SSEM, but roughly a thousand-fold increase in storage capacity.

And of course, these improvements were mirrored in the fact that the 7090 series was also significantly smaller than previous versions, being about the size of a desk rather than an entire room. The machines were also cheaper and proved quite popular with NASA, Caltech and MIT.
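For the curious, here’s the quick arithmetic behind that storage comparison, using the figures quoted above (32 words of 32 bits for the SSEM versus 32,768 words of 36 bits for the 7090):

```python
# Total bits of store, computed from the word counts and word lengths above.
ssem_bits = 32 * 32
ibm7090_bits = 32_768 * 36

print(f"SSEM store:     {ssem_bits:,} bits")       # 1,024 bits
print(f"IBM 7090 store: {ibm7090_bits:,} bits")    # 1,179,648 bits
print(f"Ratio: ~{ibm7090_bits / ssem_bits:.0f}x")  # ~1152x
```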

PDP-8:
In keeping with the trend towards miniaturization, 1965 saw the development of the first commercially successful minicomputer by the Digital Equipment Corporation (DEC). Though large by modern standards (about the size of a minibar), the PDP-8, also known as the “Straight-8”, was a major improvement over previous models, and therefore a commercial success.

In addition, later models also incorporated advanced concepts like real-time operating systems and preemptive multitasking. Unfortunately, early models still relied on paper tape to process information; it was not until later that the computer was upgraded to take advantage of programming languages such as FORTRAN, BASIC, and DIBOL.

Intel 4004:
Founded in California in 1968, the Intel Corporation quickly moved to the forefront of computer hardware development with the creation of the 4004, the world’s first commercially available microprocessor, in 1971. Continuing the trend towards smaller computers, the development of this single-chip processor paved the way for personal computers, desktops, and laptops.

Incorporating the then-new silicon gate technology, Intel was able to create a processor that allowed for a higher number of transistors, and therefore faster processing speeds, than ever before. On top of all that, they were able to pack it into a much smaller frame, ensuring that computers built with the new chip would be smaller, cheaper and more ergonomic. Thereafter, Intel would become a leading designer of integrated circuits and processors, supplanting even giants like IBM.

Apple I:
The ’60s and ’70s seemed to be a time for the birthing of future giants. Less than a decade after the first microprocessor was created, another upstart came along with an equally significant development. Named Apple and started by three men in 1976 – Steve Jobs, Steve Wozniak, and Ronald Wayne – the company’s first marketed product was a “personal computer” (PC) that Wozniak built himself.

One of the most distinctive features of the Apple I was that it had a built-in keyboard. Competing models of the day, such as the Altair 8800, required a hardware extension to allow connection to a computer terminal or a teletypewriter. The company quickly took off, introducing an upgraded version (the Apple II) just a year later. As a result, original Apple I machines remain a scarce commodity and a very valuable collector’s item.

The Future:
The last two decades of the 20th century also saw far more than their fair share of developments. From the CPU and the PC came desktop computers, laptop computers, PDAs, tablet PCs, and networked computers. This last creation, aka the Internet, was the greatest leap by far, allowing computers from all over the world to be networked together and share information. And with the exponential increase in information sharing that has occurred as a result, many believe it’s only a matter of time before wearable computers, fully portable computers, and artificial intelligences become possible. Ah, which brings me to the last entry on this list…

The Google Neural Network:
From mechanical dials to vacuum tubes, from CPUs to PCs and laptops, computers have come a hell of a long way since the days of ancient Greece. Hell, even within the last century, the growth in this one area of technology has been explosive, leading some to conclude that it was only a matter of time before we created a machine capable of thinking all on its own.

Well, my friends, that day appears to have dawned. Nicola and I have already blogged about this development, so I shan’t waste time going over it again. Suffice it to say, this new program, which so far has been able to pick out pictures of cats from random, unlabeled images, contains the neural capacity to achieve roughly 1/1000th of what the human brain is capable of. That sounds small, but given the exponential growth in computing, it won’t be long before that gap is narrowed substantially.

Who knows what else the future will hold?  Optical computers that use not electrons but photons to move information about? Quantum computers, capable of connecting machines not only across space, but also time? Biocomputers that can be encoded directly into our bodies through our mitochondrial DNA? Oh, the possibilities…

Creating machines in the likeness of the human mind. Oh Brave New World that hath such machinery in it. Cool… yet scary!