Nanotech News: Smart Sponges, Nanoparticles and Neural Dust!

Nanotechnology has long been the dream of researchers, scientists and futurists alike, and for obvious reasons. If machinery were small enough to be microscopic, or so small that it could only be measured on the atomic scale, just about anything would be possible. That includes constructing buildings and products from the atomic level up, which would revolutionize manufacturing as we know it.

In addition, microscopic computers, smart cells and materials, and electronics so infinitesimally small that they could be merged with living tissues would all be within our grasp. And it seems that at least once a month, universities, research labs, and even independent skunkworks are unveiling new and exciting steps that are bringing us ever closer to this goal.

Close-up of a smart sponge

One such breakthrough comes from the University of North Carolina at Chapel Hill, where biomedical scientists and engineers have joined forces to create the “smart sponge”. Microscopic spheres just 250 micrometers across (and potentially as small as 0.1 micrometers), these new sponges are similar to nanoparticles in that they are intended to be the next generation of delivery vehicles for medication.

Each sponge is mainly composed of a polymer called chitosan, which does not occur naturally but can be produced easily from the chitin in crustacean shells. The long polysaccharide chains of chitosan form a matrix in which tiny porous nanocapsules are embedded, and the whole assembly can be designed to respond to the presence of an external compound, be it an enzyme, blood sugar, or a chemical trigger.

So far, the researchers have tested the smart sponges with insulin, so the nanocapsules in this case contained glucose oxidase. As the level of glucose in a diabetic patient’s blood increases, it triggers the nanocapsules in the smart sponge to begin releasing hydrogen ions, which impart a positive charge to the chitosan strands. This in turn causes them to spread apart and begin to slowly release insulin into the blood.

The process is also self-limiting: as glucose levels in the blood come down after the release of insulin, the nanocapsules deactivate and the positive charge dissipates. Without all those hydrogen ions in the way, the chitosan can come back together to keep the remaining insulin inside. The chitosan is eventually degraded and absorbed by the body, so there are no long-term health effects.
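
To make the feedback loop concrete, here is a minimal Python sketch of the activate/release/deactivate cycle described above. The thresholds and rate constants are invented for illustration and are not taken from the UNC paper.

```python
# Toy model of the glucose-responsive "smart sponge" feedback loop.
# All constants are illustrative, not values from the UNC research.

GLUCOSE_THRESHOLD = 7.0   # mmol/L above which the nanocapsules activate (assumed)
RELEASE_RATE = 0.05       # fraction of remaining insulin released per step (assumed)
CLEARANCE_RATE = 0.4      # drop in glucose per unit of insulin released (assumed)

def simulate(glucose, insulin_reservoir, steps=20):
    """Step through the activate -> release -> deactivate cycle."""
    for step in range(steps):
        if glucose > GLUCOSE_THRESHOLD:
            # High glucose: glucose oxidase generates hydrogen ions, the
            # positively charged chitosan strands spread, insulin leaks out.
            dose = insulin_reservoir * RELEASE_RATE
        else:
            # Low glucose: the charge dissipates, the chitosan closes back
            # up, and release stops.
            dose = 0.0
        insulin_reservoir -= dose
        glucose = max(4.0, glucose - dose * CLEARANCE_RATE)
        print(f"step {step:2d}: glucose={glucose:5.2f}  dose={dose:5.3f}  "
              f"reservoir={insulin_reservoir:5.2f}")

simulate(glucose=12.0, insulin_reservoir=10.0)
```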

One of the chief benefits of this kind of system, much like with nanoparticles, is that it delivers medication when it’s needed, where it’s needed, and in amounts that are appropriate to the patient’s needs. So far, the team has had success treating diabetes in rats, but plans to move on to human trials and to branch out into treating other types of disease.

Cancer is a prime candidate, and the University team believes it can be treated without an activation system of any kind. Tumors are naturally highly acidic environments, which means a lot of free hydrogen ions. And since that’s what the diabetic smart sponge produces as a trigger anyway, it can be filled with small amounts of chemotherapy drugs that would automatically be released in areas with cancer cells.

Another exciting breakthrough comes from the University of California at Berkeley, where medical researchers are working towards tiny, implantable sensors. As all medical researchers know, the key to understanding and treating neurological problems is to gather real-time and in-depth information on the subject’s brain. Unfortunately, things like MRIs and positron emission tomography (PET) aren’t exactly portable and are expensive to run.

Implantable devices are fast becoming a solution to this problem, offering real-time data that comes directly from the source and can be accessed wirelessly at any time. So far, this has taken the form of temporary medical tattoos or tiny sensors intended to be implanted in the bloodstream. However, what the researchers at UC Berkeley are proposing is something much more radical.

In a recent research paper, they proposed a design for a new kind of implantable sensor: an intelligent dust that can infiltrate the brain, record data, and communicate with the outside world. The preliminary design was undertaken by Berkeley’s Dongjin Seo and colleagues, who described a network of tiny sensors, each package no more than 100 micrometers in diameter. Hence the term they used: “neural dust”.

The smart particles would all contain a very small CMOS sensor capable of measuring electrical activity in nearby neurons. The researchers also envision a system where each particle is powered by a piezoelectric material rather than tiny batteries. The particles would communicate data to an external device via ultrasound waves, and the entire package would also be coated in a polymer, thus making it bio-neutral.

But of course, the dust would need to be complemented by some other implantable devices. These would likely include a larger subdural transceiver that would send the ultrasound waves to the dust and pick up the return signal. The internal transceiver would also be wirelessly connected to an external device on the scalp that contains data processing hardware, a long range transmitter, storage, and a battery.
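
To make the division of labor in this three-part system easier to picture, here is a rough conceptual sketch in Python. The class names and numbers are hypothetical; the real design is an ultrasonic backscatter system implemented in hardware, not software.

```python
# Conceptual sketch of the neural dust signal chain: motes sense local
# activity, a subdural transceiver interrogates them with ultrasound, and
# an external scalp unit logs the results. Entirely illustrative.
import random

class DustMote:
    """Stand-in for a piezo-powered, polymer-coated CMOS sensor mote."""
    def __init__(self, mote_id):
        self.mote_id = mote_id

    def backscatter(self):
        # Report the local extracellular voltage (microvolts, simulated).
        return {"id": self.mote_id, "uV": random.gauss(0.0, 20.0)}

class SubduralTransceiver:
    """Pings the dust with ultrasound and collects each mote's reflection."""
    def __init__(self, motes):
        self.motes = motes

    def sweep(self):
        return [mote.backscatter() for mote in self.motes]

class ScalpUnit:
    """External unit: power, storage, processing and long-range transmission."""
    def __init__(self):
        self.log = []

    def record(self, readings):
        self.log.append(readings)

motes = [DustMote(i) for i in range(4)]
transceiver = SubduralTransceiver(motes)
scalp = ScalpUnit()
scalp.record(transceiver.sweep())
print(scalp.log[-1])
```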

The benefits of this kind of system are again obvious. In addition to acting like an MRI running in your brain all the time, it would allow for real-time monitoring of neurological activity for the purposes of research and medical monitoring. The researchers also see this technology as a way to enable brain-machine interfaces, something which would go far beyond current methods. Who knows? It might even enable a form of machine-based telepathy in time.

Sounds like science fiction, and it still is. Many issues need to be worked out before something of this nature would be possible or commercially available. For one, more powerful antennae would need to be designed on the microscopic scale in order for the smart dust particles to be able to send and receive ultrasound waves.

Increasing the efficiency of transceivers and piezoelectric materials will also be a necessity to provide the dust with power, otherwise they could cause a build-up of excess heat in the user’s neurons, with dire effects! But most importantly of all, researchers need to find a safe and effective way to deliver the tiny sensors to the brain.

And last, but certainly not least, nanotechnology might be offering improvements in the field of prosthetics as well. In recent years, scientists have made enormous breakthroughs in the field of robotic and bionic limbs, restoring ambulatory mobility to accident victims, the disabled, and combat veterans. But even more impressive are the current efforts to restore sensation as well.

One method, which is being explored by the Technion-Israel Institute of Technology in Israel, involves combining gold nanoparticles with a substrate made of polyethylene terephthalate (PET), the plastic used in soft drink bottles. From these two materials, they were able to make an ultra-sensitive film capable of transmitting electrical signals to the user, simulating the sensation of touch.

Basically, the gold-polyester nanomaterial experiences changes in conductivity as it is bent, providing an extremely sensitive measure of physical force. Tests conducted on the material showed that it was able to sense pressures ranging from tens of milligrams to tens of grams, which is ten times more sensitive than any sensors being built today.

Even better, the film maintained its sensory resolution after many “bending cycles”, meaning it showed consistent results and would stand up to long-term use. And unlike many useful materials that can only really be used under laboratory conditions, this film operates at very low voltages, meaning that it could be manufactured cheaply and actually be useful in real-world situations.
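
Since the article only quotes the sensing range, the simplest way to picture how such a film would be read out is a calibration curve from relative resistance change to applied load. The sketch below assumes a log-linear response purely for illustration; it is not the transfer function reported by the Technion group.

```python
# Hypothetical readout for a bendable gold-nanoparticle pressure film.
# The log-linear calibration and all constants are assumptions.
R_BASELINE = 1.0e4     # unloaded film resistance in ohms (assumed)
SENSITIVITY = 0.02     # fractional resistance change per decade of load (assumed)
MIN_LOAD_G = 0.01      # roughly "tens of milligrams", the quoted lower end
MAX_LOAD_G = 50.0      # roughly "tens of grams", the quoted upper end

def load_from_resistance(r_measured):
    """Invert the assumed calibration: measured resistance -> load in grams."""
    delta = (r_measured - R_BASELINE) / R_BASELINE
    decades = delta / SENSITIVITY
    load = MIN_LOAD_G * (10.0 ** decades)
    return min(max(load, MIN_LOAD_G), MAX_LOAD_G)

for r in (1.000e4, 1.020e4, 1.040e4, 1.060e4):
    print(f"R = {r:.0f} ohm  ->  ~{load_from_resistance(r):.2f} g")
```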

In their research paper, lead researcher Hossam Haick described the sensors as “flowers, where the center of the flower is the gold or metal nanoparticle and the petals are the monolayer of organic ligands that generally protect it.” The paper also states that in addition to providing pressure information (touch), the sensors in their prototype were also able to sense temperature and humidity.

But of course, a great deal of calibration of the technology is still needed, so that each user’s brain is able to interpret the electronic signals being received from the artificial skin correctly. But this is standard procedure with next-generation prosthetic devices, ones which rely on two-way electronic signals to provide control signals and feedback.

And these are just some examples of how nanotechnology is seeking to improve and enhance our world. When it comes to sensation and mobility, it offers solutions not only to remedy health problems or limitations, but also to enhance natural abilities. But the long-term possibilities go beyond this by many orders of magnitude.

As a cornerstone to the post-singularity world being envisioned by futurists, nanotech offers solutions to everything from health and manufacturing to space exploration and clinical immortality. And as part of an ongoing trend in miniaturization, it presents the possibility of building devices and products that are even tinier and more sophisticated than we can currently imagine.

It’s always interesting how science works by scale, isn’t it? In addition to dreaming large – looking to build structures that are bigger, taller, and more elaborate – we are also looking inward, hoping to grab matter at its most basic level. In this way, we will not only be able to plant our feet anywhere in the universe, but manipulate it on the tiniest of levels.

As always, the future is a paradox, filling people with both awe and fear at the same time.

Sources: extremetech.com, (2), (3)

The Future of Firearms: The Inteliscope!

Given the many, many uses that smartphones have these days, and the many technologies being adapted to work with them, I guess it was only a matter of time before someone found a way to militarize them. And that’s exactly what inventor Jason Giddings and his new company, Inteliscope, LLC, decided to do when they combined guns with smart devices to launch the Inteliscope Tactical Rifle Adapter.

Along with an iOS app and a mount that can be affixed to tactical rails, the adapter allows gun owners to mount their iPhone or iPod Touch to a firearm and use it as a sight with a heads-up display showing real-time data on their surroundings. The app also works in portrait mode, so the adapter can be affixed to the side of a firearm if needed.

Some might ask how an iPhone could be expected to improve upon a standard scope, but that’s where things get particularly interesting. By offering a range of visual enhancements and features, the user is essentially able to convert their smartphone into an integrated ballistic computer system, but at a fraction of the cost of a military variant.

Added features include a 5x digital zoom, an adjustable mount that lets users peek around corners, a choice of different cross hairs, data on local prevailing winds, a GPS locator, a compass, ballistics info, and a shot timer. The attached device can even act as a mounted flashlight or strobe, but probably the most useful feature is the ability to record and play back video of each shot.

Naturally, there are some drawbacks to the Inteliscope. For example, the iPhone/iPod Touch’s camera optics only offer support for short range targets, and using calibers larger than .223 or 5.56 mm could damage your smart device. The developers have also advised potential customers to make sure hunting with electronic-enhanced devices is legal in their region.

Still, it does provide a fairly cost-effective means of giving any gun that Future Warrior look, and for the relatively cheap price of $69.99. Inteliscope is currently accepting pre-orders through its website, with adapters available for the iPhone 4, iPhone 4S, iPhone 5 and iPod Touch, and plans to begin shipping in June.

And of course, there’s a video of the system in action:


Source: gizmag.com

Towards a Greener Future: The Desalination Chip

When it comes to providing for the future, clean, drinkable water is one challenge researchers are seriously looking into. Not only is overpopulation seriously depleting the world’s supply of fresh water, but climate change threatens to make a bad situation even worse. As sea levels rise and flooding threatens population centers, water tables are also drying up and being ruined by toxic chemicals and runoff.

One idea is to take sea water, which is in growing supply thanks to the melting polar ice caps, and make it drinkable. However, desalination, in its traditional form, is an expensive and difficult process. Typical large-scale desalination involves forcing salt water through membranes that are costly, can become fouled, and require powerful pumps to circulate the water.

However, scientists from the University of Texas at Austin and Germany’s University of Marburg are taking another approach. Working with a process known as “electrochemically mediated seawater desalination”, they have developed a prototype plastic “water chip” that contains a microchannel which branches in two, separating salt from water chemically without the need for membranes.

The process begins with seawater being run into the microchannel, where a 3-volt electrical potential is applied. This causes an electrode embedded at the branching point of the channel to neutralize some of the chloride ions in the water, which in turn increases the local electrical field. That area of increased field, called an ion depletion zone, diverts the salt down one branch of the channel while allowing the water to continue down the other.

In its present form, the system can run on so little energy that a store-bought battery is all that’s required as a power source. Developed on a larger scale, such chips could be employed in future offshore developments – such as Lillypad cities or planned coastal arcologies like NOAH, BOA, or Shimizu Mega-City – where they would be responsible for periodically turning water that was piped in from the sea into something drinkable and usable for crops.

Two challenges still need to be overcome, however. First of all, the chip currently removes only 25 percent of the salt from the water. 99 percent must be removed in order for seawater to be considered drinkable. Second, the system must be scaled up in order to be practical. It presently produces about 40 nanoliters of desalted water per minute.
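
Those two figures make the scale-up challenge easy to quantify. Here is a back-of-envelope calculation, using only the throughput quoted above, of how many parallel microchannels it would take to supply one person's drinking water; the daily requirement is my own assumption.

```python
# Back-of-envelope scale-up estimate for the electrochemical "water chip",
# based only on the 40 nL/min throughput quoted above.
NANOLITERS_PER_MIN = 40          # output of a single prototype channel
LITERS_PER_PERSON_PER_DAY = 2.0  # rough drinking-water need (assumption)

liters_per_channel_per_day = NANOLITERS_PER_MIN * 1e-9 * 60 * 24
channels_needed = LITERS_PER_PERSON_PER_DAY / liters_per_channel_per_day

print(f"One channel produces {liters_per_channel_per_day * 1000:.3f} mL per day")
print(f"Channels needed for one person: ~{channels_needed:,.0f}")
```

That works out to roughly 35,000 channels per person, which is why massive parallelization, along with the jump from 25 to 99 percent salt removal, is the real engineering hurdle.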

That being said, the scientists are confident that with further research, they can rectify both issues. And with the involvement of Okeanos Technologies – a major desalination research firm – and the pressing need to come up with affordable solutions, it shouldn’t be too long until a fully-scaled, 99 percent efficient model is developed.

Source: gizmag.com

Google CEO Wants Land Set Aside for Experimentation

Back in May, Google co-founder and CEO Larry Page hosted a rare Q&A session with the attendees of the Google I/O keynote speech. During this time, he gave some rather unfiltered and unabashed answers to some serious questions, one of which concerned how he and others could reduce negativity and focus on changing the world.

Page responded by saying that “the pace of change is increasing” and that “we haven’t adapted systems to deal with that.” He was also sure to point out that “not all change is good” and said that we need to build “mechanisms to allow experimentation.” Towards that end, he claimed that an area of the world should be set aside for unregulated scientific experimentation. His exact words were:

There are many exciting things you could do that are illegal or not allowed by regulation. And that’s good, we don’t want to change the world. But maybe we can set aside a part of the world… some safe places where we can try things and not have to deploy to the entire world.

So basically he’s looking for a large chunk of real estate to conduct beta tests in. What could possibly go wrong?

One rather creative suggestion comes from Roy Klabin of PolicyMic, who suggests that an aging and dilapidated Detroit might be just the locale Page and his associates are looking for. This past week, the city declared bankruptcy and began offering to sell city assets and eradicate retirement funds to meet its $18 billion debt obligations.

What’s more, he suggests that SpaceX founder Elon Musk, who’s always after innovation, should team up with Google. Between the two giants, there’s more than enough investment capital to pull Detroit out of debt and work to rehabilitate the city’s economy. Hell, with a little work, the city could be transformed back into the industrial hub it once was.

And due to a mass exodus of industry and working people from the city, there is no shortage of space. Already the city is considering converting segments of former urban sprawl into farming and agricultural land. But looking farther afield, Klabin sees no reason why these spaces couldn’t be made available for advanced construction projects involving arcologies and other sustainable-living structures.

Not a bad idea, really. With cities like Boston, New York, Las Vegas, New Orleans, Moscow, Chengdu, Tokyo and Masdar City all proposing or even working towards the creation of arcologies, there’s no reason why the former Industrial Heartland – now known as the “Rust Belt” – shouldn’t be getting in on the action.

Naturally, there are some who would express fear over the idea, not to mention Page’s blunt choice of words. But Page did stress the need for positive change, not aimless experimentation. And future generations will need housing and food, and to be able to provide these things in a way that doesn’t burden their environment the way urban sprawl does. Might as well get a jump on things!

And thanks to what some are calling the “New Industrial Revolution” – a trend that embraces nanofabrication, self-assembling DNA structures, cybernetics, and 3D printing – opportunities exist to rebuild our global economy in a way that is cleaner, more efficient and more sustainable. Anyone with space to offer and an open mind can get in on the ground floor. The only question is, what are they willing to give up?

There’s also a precedent here for what is being proposed. The famous American architect and designer Jacque Fresco has been advocating something similar for decades. Believing that society needs to reshape the way it lives, works, and produces, he created the Venus Project – a series of designs for a future living space that would incorporate new technologies, smarter materials and building methods, and alternative forms of energy.

And then there’s the kind of work being proposed by designer Mitchell Joachim and Terreform ONE (Open Network Ecology). And amongst their many proposed design concepts is one where cities use vertical towers filled with energy-creating algae (pictured below) to generate power. But even more ambitious is their plan to “urbaneer” Brooklyn’s Navy Yard by turning natural ecological tissues into viable buildings.

This concept also calls to mind Arcosanti, the brainchild of architect Paolo Soleri, who coined the concept of arcology. His proposed future city began construction back in 1970 in central Arizona, but remains incomplete. Designed to incorporate such things as 3D architecture, vertical farming, and clean, renewable energy, this unfinished city still stands as the blueprint for Soleri’s vision of a future where architecture and ecology could be combined.

What’s more, this kind of innovation and development will come in mighty handy when it comes time to build colonies on the Moon and Mars. Already, numerous Earth cities and settlements are being considered as possible blueprints for extra-Terran settlement – places like Las Vegas, Dubai, Arviat, Black Rock City, and the pueblos of pre-Columbian New Mexico.

Black Rock City – home to “Burning Man” – shown in a Martian crater

These are all prime examples of cities built to withstand dry, inhospitable environments. As such, sustainability and resource management play a major role in each of their designs. But given the pace at which technology is advancing and the opportunities it presents for high-tech living that is also environmentally friendly, some test models will need to be made.

And building them would also provide an opportunity to test out some of the latest proposed construction methods, ones that do away with brutally inefficient building processes and replace them with things like drones, constructive bacteria, additive manufacturing, and advanced computer modelling. At some point, a large-scale project to see how these methods work together will be in order.

Let’s just hope Page’s idea for a beta-testing settlement doesn’t turn into a modern-day Laputa!

And be sure to check out this video from the Venus Project, where Jacque Fresco explains his inspirations and ideas for a future settlement:


Sources:
1. Elon Musk and Google Should Purchase and Transform a Bankrupt Detroit (http://www.policymic.com/)
2. Larry Page wants to ‘set aside a part of the world’ for unregulated experimentation (theverge.com)
3. Six Earth Cities That Will Provide Blueprints for Martian Settlements (io9.com)
4. The Venus Project (thevenusproject.org)
5. Arcosanti Website (arcosanti.org)
6. Terreform ONE website (terreform.org)

The Future of Medicine: “Hacking” Neurological Disorders

Officially, it’s known as “neurohacking” – a method of biohacking that seeks to manipulate or interfere with the structure and/or function of neurons and the central nervous system to improve or repair the human brain. In recent years, scientists and researchers have been looking at how Deep Brain Stimulation (DBS) could be used for just such a purpose. And the results are encouraging, indicating that the technology could be used to correct for neurological disorders.

The key in this research has to do with the subthalamic nucleus (STN) – a component of the basal ganglia control system that is interconnected to the motor areas of the brain. Researchers initially hit upon the STN as a site for stimulation when studying monkeys with artificially induced movement disorders. When adding electrical stimulation to this center, the result was a complete elimination of debilitating tremors and involuntary movements.

DIY biohacker Anthony Johnson – aka “Cyber AJ” – also recently released a dramatic video where he showed the effects of DBS on himself. As a Parkinson’s sufferer, Johnson was able to demonstrate how the application of a mild electrical stimulus from his Medtronic DBS implant to the STN region of his brain completely eliminated the tremors he has had to deal with ever since he was diagnosed.


But in spite of these positive returns, tests on humans have been slow-going and somewhat inconclusive. Basically, scientists have been unable to conclude why stimulating the STN would eliminate tremors, as the function of this region of the brain is still somewhat of a mystery. What’s more, they also determined that putting electrodes in any number of surrounding brain nuclei, or passing fiber tracts, seems to have similar beneficial effects.

In truth, when dealing with people who suffer from neurological disorders, any form of stimulation is likely to have a positive effect. Whether it is Parkinson’s, Alzheimer’s, Tourette’s, autism, Asperger’s, or neurological damage, electrical stimulation is likely to produce moments of lucidity, greater recall, and more focused attention. Good news for some, but until such time as we know how and in what ways the treatment needs to happen, lasting treatment will be difficult.

Luckily, research conducted by the Movement Disorders Group at Oxford University, led by Peter Brown, has provided some degree of progress in this field. Since DBS was first discovered, they have been busily recording activity through what is essentially a brain-computer interface (BCI) in the hopes of amassing meaningful data from the brain as it undergoes stimulation moment-by-moment.

For starters, it is known that the symptoms of Parkinson’s and other such disorders fluctuate continuously, and any form of smart control needs to be fast to be effective. Hence, DBS modules need to be responsive, not simply left on all the time. In addition to electrodes that can provide helpful stimulus, there also need to be sensors that can detect when the brain is behaving erratically.

Here too, it was the Oxford group that came up with a solution. Rather than simply implanting more junk into the brain – expensive and potentially dangerous – Brown and his colleagues realized that the stimulation electrodes themselves can be used to take readings from the local areas of the brain and send signals to the DBS device to respond.

By combining BCI with DBS – a lot of acronyms, I know! – the Oxford group and those like them have come away with many ideas for improvements, and are working towards an age where the one-size-fits-all DBS system will be replaced with a new series of personalized implants.
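
The closed-loop idea itself (sense with the same electrode, and only stimulate while the signal looks pathological) can be sketched as a simple threshold controller. This is a cartoon of the concept, not the Oxford group's actual algorithm; the signal model and threshold are invented.

```python
# Cartoon of closed-loop ("adaptive") DBS: stimulation is switched on only
# while the sensed local activity exceeds a threshold. Everything is invented.
import random

THRESHOLD = 0.5  # arbitrary units marking "pathological" activity (assumed)

def sensed_activity(t):
    """Stand-in for a band-power estimate read from the stimulation electrode."""
    burst = 0.8 if 20 <= t < 60 else 0.1   # a slow pathological burst
    return burst + random.uniform(-0.05, 0.05)

def run_controller(duration=100):
    for t in range(duration):
        level = sensed_activity(t)
        stim_on = level > THRESHOLD        # the entire control law
        if t % 10 == 0:
            state = "STIM ON" if stim_on else "stim off"
            print(f"t={t:3d}  activity={level:4.2f}  {state}")

run_controller()
```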

In the meantime, a number of recreational possibilities also exist that do not involve electrodes in the brain. The tDCS headband is one example, a headset that provides transcranial direct current stimulation to the brain without the need for neurosurgery or any kind of brain implant. In addition to restoring neuroplasticity – the ability of the brain to be flexible and enable learning and growth – it has also been demonstrated to promote deeper sleep and greater awareness in users.

But it is in the field of personalized medical implants, the kind that can correct for neurological disorders, that the real potential exists. In the long run, such neurological prostheses would not only lead to the elimination of everything from mental illness to learning disabilities, they would also be the first step towards true and lasting brain enhancement.

It is a staple of both science fiction and futurism that merging the human brain with artificial components and processors is central to the dream of transhumanism. By making our brains smarter, faster, and correcting for any troubling hiccups that might otherwise slow us down, we would effectively be playing with an entirely new deck. And what we would be capable of inventing and producing would be beyond anything we currently have at our disposal.

Sources: Extremetech.com, (2)

News from Space: The Orion MPCV gets a Manned Mission

It’s known as the Orion Multi-Purpose Crew Vehicle (MPCV), and it represents NASA’s plans for a next-generation exploration craft. The plan calls for the Orion to be launched aboard the next-generation Space Launch System, a larger, souped-up version of the Saturn V rockets that took the Apollo crews into space and men like Neil Armstrong to the Moon.

The first flight, called Exploration Mission 1 (EM-1), will be targeted to send an unpiloted Orion spacecraft to a point more than 70,000 km (40,000 miles) beyond the Moon. This mission will serve as a forerunner to NASA’s new Asteroid Redirect Initiative – a mission to capture an asteroid and tow it closer to Earth – which was recently approved by the Obama Administration.

But in a recent decision to upgrade the future prospects of the Orion, the EM-1 flight will now serve as an elaborate harbinger to NASA’s likewise enhanced EM-2 mission. This flight would involve sending a crew of astronauts for up-close investigation of the small Near Earth Asteroid that would be relocated to the Moon’s vicinity. Until recently, NASA’s plan had been to launch the first crewed Orion atop the second SLS rocket to a high orbit around the Moon on the EM-2 mission.

However, the enhanced EM-1 flight would involve launching an unmanned Orion, fully integrated with the SLS, to an orbit near the Moon where an asteroid could be moved as early as 2021. This upgrade would also allow for a far more rigorous test of all the flight systems for both the Orion and SLS before risking a flight with humans aboard.

It would also be much more technically challenging, as a slew of additional thruster firings would be conducted to test the engines’ ability to change orbital parameters, and the Orion would also be outfitted with sensors to collect a wide variety of measurements to evaluate its operation in the harsh space environment. And lastly, the mission’s duration would be extended from the original 10 days to a full 25.

Brandi Dean, NASA Johnson Space Center spokeswoman, explained the mission package in a recent interview with Universe Today:

The EM-1 mission will include approximately nine days outbound, three to six days in deep retrograde orbit and nine days back. EM-1 will have a complement of both operational flight instrumentation and development flight instrumentation. This instrumentation suite gives us the ability to measure many attributes of system functionality and performance, including thermal, stress, displacement, acceleration, pressure and radiation.

The EM-1 flight has many years of planning and development ahead and further revisions prior to the 2017 liftoff are likely. “Final flight test objectives and the exact set of instrumentation required to meet those objectives is currently under development,” explained Dean.

The SLS launcher will be the most powerful and capable rocket ever built by humans – exceeding the liftoff thrust of even the Saturn V, the very rocket that sent the Apollo astronauts into space and carried Neil Armstrong, Buzz Aldrin and Michael Collins to the Moon. Since NASA is in a hurry to reprise its role as a leader in space, both the Orion and the SLS are under active and accelerating development by NASA and its industrial partners.

As already stated by NASA spokespeople, the 1st Orion capsule is slated to blast off on the unpiloted EFT-1 test flight in September 2014 atop a Delta IV Heavy rocket. This mission will be what is known as a “two orbit” test flight that will take the unmanned Multi-Purpose Crew Vehicle to an altitude of 5800 km (3,600 miles) above the Earth’s surface.

After the 2021 missions to the Moon, NASA will be looking farther abroad, seeking to mount manned missions to Mars, and maybe beyond…

And in the meantime, enjoy this video of NASA testing out the parachutes on the Orion space vehicle. The event was captured live on Google+ on July 24th from the U.S. Army’s Yuma Proving Ground in Arizona, and the following is the highlight of the event – the Orion being dropped from a plane!:

Judgement Day Update: A.I. Equivalent to Four Year Old Mind

Ever since computers were first invented, scientists and futurists have dreamed of the day when computers might be capable of autonomous reasoning and able to surpass human beings. In the past few decades, it has become apparent that simply throwing more processing power at the problem of true artificial intelligence isn’t enough. The human brain remains several orders of magnitude more complex than the typical AI, but researchers are getting closer.

One such effort is ConceptNet 4, a semantic network being developed by MIT. This AI system contains a large store of information that is used to teach the system about various concepts. But more importantly, it is designed to process the relationship between things. Much like the Google Neural Net, it is designed to learn and grow to the point that it will be able to reason autonomously.

Recently, researchers at the University of Illinois at Chicago decided to put ConceptNet through an IQ test. To do this, they used the Wechsler Preschool and Primary Scale of Intelligence Test, which is one of the common assessments used on small children. ConceptNet passed the test, scoring on par with a four-year-old in overall IQ. However, the team points out that it would be worrisome to find a real child with lopsided scores like those received by the AI.

The system performed above average on parts of the test that have to do with vocabulary and recognizing the similarities between two items. However, the computer did significantly worse on the comprehension questions, which test a little one’s ability to understand practical concepts based on learned information. In short, the computer showed relational reasoning, but was lacking in common sense.

This is the missing piece of the puzzle for ConceptNet and systems like it. An artificial intelligence like this one might have access to a lot of data, but it can’t draw on that data to make rational judgements. ConceptNet might know that water freezes at 32 degrees, but it doesn’t know how to get from that fact to the idea that ice is cold. This is basically common sense: humans (even children) have it and computers don’t.
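
The gap is easy to illustrate with a toy knowledge base. The snippet below is not ConceptNet's real data or API, just a few hand-written triples showing how direct lookups succeed while the "obvious" chained inference simply isn't there.

```python
# Toy semantic network illustrating the common-sense gap described above.
# This is NOT ConceptNet's actual data or API.
facts = {
    ("water", "FreezesAt"): "32 degrees F",
    ("water", "FreezesInto"): "ice",
    ("winter", "HasProperty"): "cold",
}

def lookup(subject, relation):
    return facts.get((subject, relation), "unknown")

# Direct factual lookups work fine:
print(lookup("water", "FreezesAt"))    # -> 32 degrees F
print(lookup("water", "FreezesInto"))  # -> ice

# But nothing connects the stored facts to the conclusion a child would
# draw immediately: that ice itself is cold.
print(lookup("ice", "HasProperty"))    # -> unknown
```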

There’s no easy way to fabricate implicit information and common sense into an AI system and so far, no known machine has shown the ability. Even IBM’s Watson trivia computer isn’t capable of showing basic common sense, and though multiple solutions have been proposed – from neuromorphic chips to biomimetic circuitry – nothing is bearing fruit just yet.

But of course, the MIT research team is already hard at work on ConceptNet 5, a more sophisticated semantic network that is open source and available on GitHub. But for the time being, it’s clear that machines will remain restricted to processing information and incapable of making basic decisions. Good thing too! The sooner they can think for themselves, the sooner they can decide we’re in their way!

Source: extremetech.com

The Future of Medicine: Smartphone Medicine!

It’s no secret that the exponential growth in smartphone use has been paralleled by a similar growth in what they can do. Every day, new and interesting apps are developed which give people the ability to access new kinds of information, interface with other devices, and even perform a range of scans on themselves. It is the latter two aspects of development that are especially exciting, as they are opening the door to medical applications.

Yes, in addition to temporary tattoos and tiny medimachines that can be monitored from your smartphone or other mobile computing device, there is also a range of apps that allow you to test your eyesight and even conduct ultrasounds on yourself. But perhaps most impressive is the new Smartphone Spectrometer, an iPhone program which will allow users to diagnose their own illnesses.

Consisting of an iPhone cradle, phone and app, this spectrometer costs just $200 and has the same level of diagnostic accuracy as a $50,000 machine, according to Brian Cunningham, a professor at the University of Illinois, who developed it with his students. Using the phone’s camera and a series of optical components in the cradle, the machine detects the light spectrum passing through a liquid sample.

This liquid can be urine, blood, or any of the body’s natural fluids that exhibit traces of harmful infection. By comparing the sample’s spectrum to the spectra of target molecules, such as toxins or bacteria, it’s possible to work out how much is in the sample. In short, a quickie diagnosis for the cost of a fancy new phone.
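
In principle the quantification step is a curve-fitting problem: find the amount of the target molecule whose reference spectrum best explains the measurement. The sketch below is a generic, single-component least-squares version of that idea with made-up spectra; it is not the algorithm used in Cunningham's cradle.

```python
# Generic illustration of spectrum-based quantification: scale a known
# reference spectrum to best match the measured one. Spectra are made up.
import numpy as np

wavelengths = np.linspace(400, 700, 50)               # nm
reference = np.exp(-((wavelengths - 550) / 30) ** 2)  # unit-concentration target

# Simulated measurement: the target at an unknown concentration plus noise.
true_concentration = 0.37
measured = true_concentration * reference + np.random.normal(0, 0.01, 50)

# Least-squares estimate of the concentration (single-component fit).
estimate = float(np.dot(reference, measured) / np.dot(reference, reference))
print(f"estimated concentration: {estimate:.3f} (true value {true_concentration})")
```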

Granted, there are limitations at this point. For one, the device is nowhere near as efficient as its industrial counterpart. Whereas the automated $50,000 version can process up to 100 samples at a time, the iPhone spectrometer can only handle one. But by the time Cunningham and his team commercialize the design, they hope to have increased that throughput by a few orders of magnitude.

On the plus side, the device is far more portable than any other known spectrometer. Whereas a lab is fixed in place and has to process thousands of samples at any given time, leading to waiting lists, this device can be used just about anywhere. In addition, there’s no loss of accuracy. As Cunningham explained:

We were using the same kits you can use to detect cancer markers, HIV infections, or certain toxins, putting the liquid into our cartridge and measuring it on the phone. We have compared the measurements from full pieces of equipment, and we get the same outcome.

Cunningham is currently filing a patent application and looking for investment. He also has a grant from the National Science Foundation to develop an Android version. And while he doesn’t think smartphone-based devices will replace standard spectrometry machines with long track records and FDA approval, he does believe they could enable more testing.

This is especially true in countries where government-regulated testing is harder to come by, or where medical facilities are under-supplied or waiting lists are prohibitively long. With diseases like cancer and HIV, early detection can be the difference between life and death, which is a major advantage, according to Cunningham:

In the future, it’ll be possible for someone to monitor themselves without having to go to a hospital. For example, that might be monitoring their cardiac disease or cancer treatment. They could do a simple test at home every day, and all that information could be monitored by their physician without them having to go in.

But of course, the new iPhone spectrometer is not alone. Many other variations are coming out, such as the PublicLaboratory Mobile Spectrometer, or Android’s own version of the Spectral Workbench. And of course, this all calls to mind the work of Jack Andraka, the 16-year-old who invented a low-cost litmus test for pancreatic cancer and won the 2012 Intel International Science and Engineering Fair (ISEF). That’s him in the middle of the picture below:

It’s the age of mobile medicine, my friends. Thanks to miniaturization, nanofabrication, wireless technology, mobile devices, and an almost daily rate of improvement in medical technology, we are entering into an age where early detection and cost-saving devices are making medicine more affordable and accessible.

In addition, all this progress is likely to add up to many lives being saved, especially in developing regions or low-income communities. It’s always encouraging when technological advances have the effect of narrowing the gap between the haves and the have nots, rather than widening it.

And of course, there’s a video of the smartphone spectrometer at work, courtesy of Cunningham’s research team and the University of Illinois:


Source: fast.coexist.com

The Future is Here: The Telescopic Contact Lens

When it comes to enhancement technology, DARPA has its hands in many programs designed to augment a soldier’s senses. Their latest invention, the telescopic contact lens, is just one of many, but it may be the most impressive to date. Not only is it capable of giving soldiers the ability to spot and focus in on faraway objects, it may also have numerous civilian applications as well.

The lens is the result of collaboration between researchers from the University of California San Diego, Ecole Polytechnique Federale de Lausanne in Switzerland, and the Pacific Science & Engineering Group, with the financial assistance of DARPA. Led by Joseph Ford of UCSD and Eric Tremblay of EPFL, the development of the lens was announced in a recent article entitled “Switchable telescopic contact lens” that appeared in the Optics Express journal.

In addition to being just over a millimeter thick, the lens can be switched between normal and telescopic vision thanks to two distinct regions. The center of the lens allows light to pass straight through, providing normal vision. The outside edge, however, uses a series of tiny mirrors to act as a telescope, magnifying your sight by close to a factor of three.

Above all, the main breakthrough here is that this telescopic contact lens is just 1.17mm thick, allowing it to be comfortably worn. Other attempts at granting telescopic vision have included a 4.4mm-thick contact lens (too thick for real-world use), telescopic spectacles (cumbersome and ugly), and most recently a telescopic lens implanted into the eye itself. The latter is the best option currently available, but it requires surgery and the image quality isn’t excellent.

To accomplish this feat of micro-engineering, the researchers had to be rather creative. The light that will be magnified enters the edge of the contact lens, is bounced around four times inside the lens using patterned aluminum mirrors, and then beamed to the edge of the retina at the back of your eyeball. Or as the research team put it in their article:

The magnified optical path incorporates a telescopic arrangement of positive and negative annular concentric reflectors to achieve 2.8x magnification on the eye, while light passing through a central clear aperture provides unmagnified vision.

To switch between normal and telescopic vision, the central, unmagnified region of the contact lens has a polarizing filter in front of it — which works in tandem with a pair of 3D TV spectacles. By switching the polarizing state of the spectacles – a pair of active, liquid crystal Samsung 3D specs in this case – the user can choose between normal and magnified vision.
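
In idealized terms the switching trick is just polarization selection, which can be pictured with Malus's law: the fixed polarizer over the central aperture passes or blocks light depending on how the liquid-crystal glasses are oriented. The numbers below are a simplification for illustration; real LC shutters and the lens's actual filters behave less cleanly.

```python
# Idealized picture of the normal/telescopic switch: Malus's law for the
# polarized central aperture; the magnified annular path is left unpolarized.
import math

def central_transmission(glasses_angle_deg):
    """Fraction of light passed by the central (unmagnified) aperture."""
    theta = math.radians(glasses_angle_deg)
    return math.cos(theta) ** 2  # Malus's law for ideal polarizers

for angle in (0, 45, 90):
    t = central_transmission(angle)
    mode = "normal vision dominates" if t > 0.5 else "telescopic path dominates"
    print(f"glasses at {angle:2d} deg: central transmission {t:.2f} -> {mode}")
```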

Though the project is being funded by DARPA for military use, the research team also indicated that the real long-term benefits of a device like this one come in the form of civilian and commercial applications. For those people suffering from age-related macular degeneration (AMD) – a leading cause of blindness for older adults – this lens could be used to correct for vision loss.

As always, enhancement technology is a two-edged sword. Devices and systems that are created to address disabilities and limitations have the added benefit of augmenting people who are otherwise healthy and ambulatory. The reverse is also true, with specialized machines that can make a person stronger, faster, and more aware providing amputees and physically challenged people the ability to overcome these imposed limitations.

However, before anyone starts thinking that all they need to do is slip on a pair of these to get superhero-like vision, there are certain limitations. As already stated, the lens doesn’t work on its own, but needs to be paired with a modified set of 3D television glasses. Simply placing it on the eye and expecting magnified vision is not yet an option.

Also, though the device has been tested using computer modeling and by attaching a prototype lens to an optomechanical model eye, it has not been tested on human eyes just yet. As always, there is still a lot of work to do in refining the technology and improving the image quality, but it’s clear at this early juncture that the work holds a lot of promise.

It’s the age of bionic enhancements, people, and we find ourselves at the forefront of it. As time goes on, we can expect such devices to become a regular feature of our society.

Sources: news.cnet.com, extremetech.com

News from Space: The Canadarm2!

Astronaut Stephen K. Robinson during STS-114

For decades, the Canadian Space Agency has been building the Shuttle Remote Manipulator System (SRMS), better known as the Canadarm. Since its first flight aboard the shuttle Columbia on STS-2 in 1981, this model of robotic arm came standard on all NASA shuttles and was used as their main grasper. However, due to the progress made in the field of robotics over the past thirty years and the need for equipment to evolve to meet new challenges, the Canadarm was retired in 2011.

Luckily, the CSA is busy at work producing its successor, the Mobile Servicing System, aka Canadarm2. The latest versions are in testing right now, and their main purpose, once deployed, will be to save satellites. Currently, an earlier version of this arm serves as the main grasper aboard the ISS, where it is used to move payloads around and guide objects to the docking port.

However, the newest models – dubbed the Next Generation Canadarm (NGC) – are somewhat different and come in two parts. First, there is a 15 meter arm that has six degrees of freedom, extreme flexibility, and handles grappling and heavy lifting. The second is a 2.58 meter arm that attaches to the larger arm, is similarly free and flexible, and handles more intricate repair and replacement work.

This new model improves upon the old in several respects. In addition to being more intricate, more mobile, and able to handle a wider array of tasks, it is also considerably lighter than its predecessor. When not in use, it is also capable of telescoping down to 5 meters of cubic space, which is a huge advantage for transporting it aboard a spacecraft. All of this is expected to come in handy once the lucrative business of protecting our many satellites gets underway.

It’s no secret that there is an abundance of space junk clogging Earth orbit. This moving debris is a serious danger to both manned and unmanned missions and is only expected to get worse. Because of this, the ability to repair and retool satellites to keep them in operation longer is of prime importance to space agencies.

Naturally, every piece of equipment needs to undergo rigorous testing before it’s deployed into space, and the Canadarm2 is no exception; it is currently being put through countless simulations. This battery of tests allows operators to guide dummy satellites together for docking using the arms in both fully manual and semi-autonomous mode.

There’s no indication yet of when they will be ready for service, but it seems like a safe bet that any manned missions to Mars will likely feature a Canadarm2 or two. And as you can see, Chris Hadfield – another major Canadian contribution to space – is on hand to help out. Maybe he and the new arm can perform a duet together, provided it can handle a guitar!

And be sure to check out this video of the NGC Canadarm2 in action, courtesy of the Canadian Space Agency:


Source: Wired.com