When it comes to providing for the future, clean, drinkable water is one challenge researchers are looking into seriously. Not only is overpopulation depleting the world’s supply of fresh water, but climate change threatens to make a bad situation even worse. As sea levels rise and flooding threatens population centers, water tables are also drying up or being contaminated by toxic chemicals and runoff.
One idea is to take sea water – which is in growing supply, thanks to the melting polar ice caps – and make it drinkable. However, desalination in its traditional form is an expensive and difficult process. Typical large-scale desalination involves forcing salt water through membranes that are costly, prone to fouling, and require powerful pumps to circulate the water.
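To see why those pumps need to be so powerful, consider the osmotic pressure a reverse-osmosis system has to overcome. Here’s a quick back-of-the-envelope estimate using the van ’t Hoff relation, with typical textbook values for seawater rather than figures from the article:

```python
# Why reverse-osmosis pumps must be powerful: the applied pressure has
# to exceed seawater's osmotic pressure, estimated here with the
# van 't Hoff relation (pi = i * M * R * T). The molarity and
# temperature below are typical textbook values, used for illustration.

I_VANT_HOFF = 2          # NaCl dissociates into two ions
MOLARITY = 0.6           # mol/L, approximate NaCl content of seawater
R_ATM = 0.08206          # gas constant in L*atm/(mol*K)
T_KELVIN = 298           # roughly room temperature

osmotic_pressure_atm = I_VANT_HOFF * MOLARITY * R_ATM * T_KELVIN
print(f"~{osmotic_pressure_atm:.0f} atm")   # roughly 29 atm
```

Around 29 atmospheres just to break even – and practical systems must apply considerably more to get useful flow rates, which is where the cost comes from.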
However, scientists from the University of Texas at Austin and Germany’s University of Marburg are taking another approach. Working with a process known as “electrochemically mediated seawater desalination”, they have developed a prototype plastic “water chip” that contains a microchannel which branches in two, separating salt from water chemically without the need for membranes.
The process begins with seawater being run into the microchannel, where a 3-volt electrical potential is applied. This causes an electrode embedded at the branching point of the channel to neutralize some of the chloride ions in the water, which in turn increases the local electric field at that point. That area of increased field, called an ion depletion zone, diverts the salt down one branch of the channel while allowing the water to continue down the other.
In its present form, the system runs on so little energy that a store-bought battery is all that’s required as a power source. Developed on a larger scale, such chips could be employed in future offshore developments – such as Lillypad cities or planned coastal arcologies like NOAH, BOA, or Shimizu Mega-City – where they would be responsible for turning water piped in from the sea into something drinkable and usable for crops.
Two challenges still need to be overcome, however. First, the chip currently removes only 25 percent of the salt from the water, whereas 99 percent must be removed for seawater to be considered drinkable. Second, the system must be scaled up to be practical: it presently produces only about 40 nanoliters of desalted water per minute.
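To put that output in perspective, here’s a rough scale-up calculation. The 40-nanoliter figure comes from the researchers; the two-liters-a-day drinking requirement and everything else is illustrative:

```python
# Rough scale-up estimate for the "water chip" prototype.
# Figure from the article: one chip yields ~40 nanoliters of desalted
# water per minute. The per-person requirement is illustrative.

CHIP_OUTPUT_NL_PER_MIN = 40          # nanoliters per minute
NL_PER_LITER = 1e9                   # nanoliters in one liter

# Liters produced by a single chip in one day
liters_per_chip_per_day = CHIP_OUTPUT_NL_PER_MIN * 60 * 24 / NL_PER_LITER

# Chips needed to supply one person's ~2 liters of drinking water a day
chips_per_person = 2 / liters_per_chip_per_day

print(f"One chip: {liters_per_chip_per_day:.2e} L/day")
print(f"Chips per person: {chips_per_person:,.0f}")
```

At current output, it would take roughly 35,000 chips running around the clock to keep a single person in drinking water – which is why the scale-up problem matters as much as the 99-percent target.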
That being said, the scientists are confident that with further research, they can rectify both issues. And with the involvement of Okeanos Technologies – a major desalination research firm – and the pressing need to come up with affordable solutions, it shouldn’t be too long until a fully-scaled, 99 percent efficient model is developed.
Back in May, Google co-founder and CEO Larry Page hosted a rare Q&A session with the attendees of the Google I/O keynote speech. During this time, he gave some rather unfiltered and unabashed answers to some serious questions, one of which concerned how he and others in the industry could reduce negativity and focus on changing the world.
Page responded by saying that “the pace of change is increasing” and that “we haven’t adapted systems to deal with that.” He was also sure to point out that “not all change is good” and said that we need to build “mechanisms to allow experimentation.” Towards that end, he claimed that an area of the world should be set aside for unregulated scientific experimentation. His exact words were:
There are many exciting things you could do that are illegal or not allowed by regulation. And that’s good, we don’t want to change the world. But maybe we can set aside a part of the world… some safe places where we can try things and not have to deploy to the entire world.
So basically he’s looking for a large chunk of real estate in which to conduct beta tests. What could possibly go wrong?
One rather creative suggestion comes from Roy Klabin of PolicyMic, who suggests that an aging and dilapidated Detroit might be just the locale Page and his associates are looking for. This past week, the city declared bankruptcy and began offering to sell city assets and gut retirement funds to meet its $18 billion in debt obligations.
What’s more, he suggests that SpaceX founder Elon Musk, who’s always after innovation, should team up with Google. Between the two giants, there’s more than enough investment capital to pull Detroit out of debt and work to rehabilitate the city’s economy. Hell, with a little work, the city could be transformed back into the industrial hub it once was.
And due to a mass exodus of industry and working people from the city, there is no shortage of space. Already the city is considering converting segments of former urban sprawl into farming and agricultural land. But looking farther afield, Klabin sees no reason why these spaces couldn’t be made available for advanced construction projects involving arcologies and other sustainable-living structures.
Not a bad idea, really. With cities like Boston, New York, Las Vegas, New Orleans, Moscow, Chengdu, Tokyo and Masdar City all proposing or even working towards the creation of arcologies, there’s no reason why the former Industrial Heartland – now known as the “Rust Belt” – shouldn’t be getting in on the action.
Naturally, there are some who would express fear over the idea, not to mention Page’s blunt choice of words. But Page did stress the need for positive change, not aimless experimentation. And future generations will need housing and food, provided in a way that doesn’t burden the environment the way urban sprawl does. Might as well get a jump on things!
And thanks to what some are calling the “New Industrial Revolution” – a trend that embraces nanofabrication, self-assembling DNA structures, cybernetics, and 3D printing – opportunities exist to rebuild our global economy in a way that is cleaner, more efficient and more sustainable. Anyone with space to offer and an open mind can get in on the ground floor. The only question is, what are they willing to give up?
There’s also a precedent here for what is being proposed. The famous American architect and designer Jacque Fresco has been advocating something similar for decades. Believing that society needs to reshape the way it lives, works, and produces, he created the Venus Project – a series of designs for a future living space that would incorporate new technologies, smarter materials and building methods, and alternative forms of energy.
And then there’s the kind of work being proposed by designer Mitchell Joachim and Terreform ONE (Open Network Ecology). And amongst their many proposed design concepts is one where cities use vertical towers filled with energy-creating algae (pictured below) to generate power. But even more ambitious is their plan to “urbaneer” Brooklyn’s Navy Yard by turning natural ecological tissues into viable buildings.
This concept also calls to mind Arcosanti, the brainchild of architect Paolo Soleri, who invented the concept of arcology. His proposed future city began construction back in 1970 in central Arizona, but remains incomplete. Designed to incorporate such things as 3D architecture, vertical farming, and clean, renewable energy, this unfinished city still stands as the blueprint for Soleri’s vision of a future where architecture and ecology are combined.
What’s more, this kind of innovation and development will come in mighty handy when it comes time to build colonies on the Moon and Mars. Already, numerous Earth cities and settlements are being considered as possible blueprints for extra-Terran settlement – places like Las Vegas, Dubai, Arviat, Black Rock City and the Pueblos of pre-Columbian New Mexico.
Black Rock City – home to “Burning Man” – shown in a Martian crater
These are all prime examples of cities built to withstand dry, inhospitable environments. As such, sustainability and resource management play a major role in each of their designs. But given the pace at which technology is advancing and the opportunities it presents for high-tech living that is also environmentally friendly, some test models will need to be made.
And building them would also provide an opportunity to test out some of the latest proposed construction methods, ones that do away with today’s brutally inefficient building processes and replace them with things like drones, constructive bacteria, additive manufacturing, and advanced computer modelling. At some point, a large-scale project to see how these methods work together will be in order.
Let’s just hope Page’s idea for a beta-testing settlement doesn’t turn into a modern-day Laputa!
And be sure to check out this video from the Venus Project, where Jacque Fresco explains his inspirations and ideas for a future settlement:
Sources:
1. Elon Musk and Google Should Purchase and Transform a Bankrupt Detroit (http://www.policymic.com/)
2. Larry Page wants to ‘set aside a part of the world’ for unregulated experimentation (theverge.com)
3. Six Earth Cities That Will Provide Blueprints for Martian Settlements (io9.com)
4. The Venus Project (thevenusproject.org)
5. Arcosanti Website (arcosanti.org)
6. Terreform ONE website (terreform.org)
Officially, it’s known as “neurohacking” – a method of biohacking that seeks to manipulate or interfere with the structure and/or function of neurons and the central nervous system to improve or repair the human brain. In recent years, scientists and researchers have been looking at how Deep Brain Stimulation (DBS) could be used for just such a purpose. And the results are encouraging, indicating that the technology could be used to correct for neurological disorders.
The key in this research has to do with the subthalamic nucleus (STN) – a component of the basal ganglia control system that is interconnected to the motor areas of the brain. Researchers initially hit upon the STN as a site for stimulation when studying monkeys with artificially induced movement disorders. When electrical stimulation was applied to this center, the result was a complete elimination of debilitating tremors and involuntary movements.
DIY biohacker Anthony Johnson – aka. “Cyber AJ” – also recently released a dramatic video where he showed the effects of DBS on himself. As a Parkinson’s sufferer, Johnson was able to demonstrate how the application of a mild electrical stimulus from his Medtronic DBS to the STN region of his brain completely eliminated the tremors he has had to deal with ever since he was diagnosed.
But in spite of these positive returns, tests on humans have been slow-going and somewhat inconclusive. Basically, scientists have been unable to conclude why stimulating the STN would eliminate tremors, as the function of this region of the brain is still somewhat of a mystery. What’s more, they also determined that putting electrodes in any number of surrounding brain nuclei, or passing fiber tracts, seems to have similar beneficial effects.
In truth, when dealing with people who suffer from neurological disorders, any form of stimulation is likely to have a positive effect. Whether it is Parkinson’s, Alzheimer’s, Tourette’s, autism, Asperger’s, or neurological damage, electrical stimulation is likely to produce moments of lucidity, greater recall, and more focused attention. Good news for some, but until we know how and in what ways the treatment needs to be applied, lasting treatment will be difficult.
Luckily, research conducted by the Movement Disorders Group at Oxford University, led by Peter Brown, has provided some degree of progress in this field. Since DBS was first discovered, they have been busily recording activity through what is essentially a brain-computer interface (BCI) in the hopes of amassing meaningful data from the brain as it undergoes stimulation moment-by-moment.
For starters, it is known that the symptoms of Parkinson’s and other such disorders fluctuate continuously, and any form of smart control needs to be fast to be effective. DBS modules therefore need to be responsive, not simply left on all the time. In addition to electrodes that can provide helpful stimulus, there also need to be sensors that can detect when the brain is behaving erratically.
Here too, it was the Oxford group that came up with a solution. Rather than simply implanting more junk into the brain – expensive and potentially dangerous – Brown and his colleagues realized that the stimulation electrodes themselves can be used to take readings from the local areas of the brain and send signals to the DBS device to respond.
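The closed-loop idea can be sketched as a simple feedback rule: use the electrode to read local brain activity, and stimulate only when pathological oscillations cross a threshold. The toy signal measure, threshold, and function names below are invented for illustration – this is not the Oxford group’s actual algorithm:

```python
# Illustrative sketch of closed-loop ("adaptive") DBS: the stimulation
# electrode doubles as a sensor, and stimulation fires only when
# pathological activity crosses a threshold. The signal measure,
# threshold, and control rule are all invented for illustration.

def beta_band_power(samples):
    """Crude stand-in for estimating pathological oscillation power
    from a window of local field potential samples (here: variance)."""
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

def adaptive_dbs(windows, threshold=1.0):
    """Return one stimulate/stay-off decision per reading window."""
    return [beta_band_power(window) > threshold for window in windows]

# A quiet window vs. a high-variance "tremor" window
windows = [[0.1, 0.2, 0.1, 0.2],      # low power  -> no stimulation
           [3.0, -3.0, 3.0, -3.0]]    # high power -> stimulate
print(adaptive_dbs(windows))          # -> [False, True]
```

The point of the design is the one the Oxford group hit on: because the decision is driven by what the electrode itself reads, no extra hardware needs to be implanted.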
By combining BCI with DBS – lot of acronyms, I know! – the Oxford group and those like them have come away with many ideas for improvements, and are working towards an age where a one-size-fits-all DBS system will be replaced with a new series of personalized implants.
In the meantime, a number of recreational possibilities also exist that do not involve electrodes in the brain. The tDCS headband is one example, a headset that provides transcranial direct current stimulation to the brain without the need for neurosurgery or any kind of brain implant. In addition to restoring neuroplasticity – the ability of the brain to be flexible and enable learning and growth – it has also been demonstrated to promote deeper sleep and greater awareness in users.
But it is in the field of personalized medical implants, the kind that can correct for neurological disorders, that the real potential exists. In the long run, such neurological prostheses could not only lead to the elimination of everything from mental illness to learning disabilities, they would also be the first step towards true and lasting brain enhancement.
It is a staple of both science fiction and futurism that merging the human brain with artificial components and processors is central to the dream of transhumanism. By making our brains smarter, faster, and correcting for any troubling hiccups that might otherwise slow us down, we would effectively be playing with an entirely new deck. And what we would be capable of inventing and producing would be beyond anything we currently have at our disposal.
It’s known as the Orion Multi-Purpose Crew Vehicle (MPCV), and it represents NASA’s plans for a next-generation exploration craft. This plan calls for the Orion to be launched aboard the next-generation Space Launch System, a larger, souped-up version of the Saturn V rockets that took the Apollo crews into space and men like Neil Armstrong to the Moon.
The first flight, called Exploration Mission 1 (EM-1), will be targeted to send an unpiloted Orion spacecraft to a point more than 70,000 km (43,500 miles) beyond the Moon. This mission will serve as a forerunner to NASA’s new Asteroid Redirect Initiative – a mission to capture an asteroid and tow it closer to Earth – which was recently approved by the Obama Administration.
But in a recent decision to upgrade the future prospects of the Orion, the EM-1 flight will now serve as an elaborate precursor to NASA’s likewise enhanced EM-2 mission. That flight would send a crew of astronauts for an up-close investigation of the small near-Earth asteroid that would be relocated to the Moon’s vicinity. Until recently, NASA’s plan had been to launch the first crewed Orion atop the second SLS rocket into a high orbit around the Moon on the EM-2 mission.
However, the enhanced EM-1 flight would involve launching an unmanned Orion, fully integrated with the SLS, to an orbit near the Moon to which an asteroid could be moved as early as 2021. This upgrade would also allow for a far more rigorous test of all the flight systems for both the Orion and SLS before risking a flight with humans aboard.
It would also be much more technically challenging: a slew of additional thruster firings would be conducted to test the engines’ ability to change orbital parameters, and the Orion would be outfitted with sensors to collect a wide variety of measurements to evaluate its operation in the harsh space environment. And lastly, the mission’s duration would be extended from the original 10 days to a full 25.
Brandi Dean, NASA Johnson Space Center spokeswoman, explained the mission package in a recent interview with Universe Today:
The EM-1 mission will include approximately nine days outbound, three to six days in deep retrograde orbit and nine days back. EM-1 will have a complement of both operational flight instrumentation and development flight instrumentation. This instrumentation suite gives us the ability to measure many attributes of system functionality and performance, including thermal, stress, displacement, acceleration, pressure and radiation.
The EM-1 flight has many years of planning and development ahead and further revisions prior to the 2017 liftoff are likely. “Final flight test objectives and the exact set of instrumentation required to meet those objectives is currently under development,” explained Dean.
The SLS launcher will be the most powerful and capable rocket ever built by humans – exceeding the liftoff thrust of even the Saturn V, the very rocket that sent the Apollo astronauts into space and carried Neil Armstrong, Buzz Aldrin and Michael Collins to the Moon. Since NASA is in a hurry to reprise its role as a leader in space, both the Orion and the SLS are under active and accelerating development by NASA and its industrial partners.
As already stated by NASA spokespeople, the first Orion capsule is slated to blast off on the unpiloted EFT-1 test flight in September 2014 atop a Delta IV Heavy rocket. This mission will be what is known as a “two-orbit” test flight that will take the unmanned Multi-Purpose Crew Vehicle to an altitude of 5,800 km (3,600 miles) above the Earth’s surface.
After the 2021 missions to the Moon, NASA will be looking farther abroad, seeking to mount manned missions to Mars, and maybe beyond…
And in the meantime, enjoy this video of NASA testing out the parachutes on the Orion space vehicle. The event was captured live on Google+ on July 24th from the U.S. Army’s Yuma Proving Ground in Arizona, and the following is the highlight of the event – the Orion being dropped from a plane!:
Ever since astronomers first looked up at Mars, they have discerned features that few could accurately identify. For many years, speculation about irrigation, canals, and a Martian civilization abounded, firing people’s imaginations and fiction. It was not until more recently, with the deployment of the Viking probes, that Mars’ surface features came to be seen for what they are.
Thanks to several more probes, and the tireless work of rovers such as Opportunity and Curiosity, scientists have been able to amass evidence and get a firsthand look at the surface. Nevertheless, they are still hard-pressed to explain everything they’ve seen. And while much evidence exists that rivers and lakes once dotted the landscape, other geological features exist which don’t fit that model.
However, a recent report from Brown University has presented evidence that snowfall may be one answer. It has long been known that ice exists at the polar caps, but actual snowfall is a very specific meteorological feature, one that has serious implications for early Martian conditions. This is just another indication that Mars once hosted an environment very much like Earth’s.
And this is not the first time that snow on Mars has been suggested. In 2008, NASA announced having detected snow falling from Martian clouds, but it was entirely vaporized before reaching the ground. The Brown researchers claim that snowfall in the past, and buildup on the surface leading to melting and runoff, could have created many of the tributary networks observed near tall mountain-ranges.
To back this claim up, the team used a computer simulation from the Laboratoire de Météorologie Dynamique called the Mars global circulation model (GCM). This model compiles evidence about the early composition of the red planet’s atmosphere to predict global circulation patterns. And since other models predict that Mars was quite cold, the program indicated the highest probability of snowfall over the densest valley systems.
Lead researcher Kat Scanlon also drew on her background in orographic studies (the study of mountains and their effects on weather) in Hawaii to arrive at this hypothesis. This includes how tall mountains lead to divergent weather patterns on either side, with warm, wet conditions on one and cold, dry ones on the other. NASA’s Curiosity rover was also instrumental, thanks to recent information that might explain why Mars no longer displays this kind of behavior.
In short, Curiosity determined that the planet is losing its atmosphere. It has taken detailed assays of the current atmosphere, which is almost entirely carbon dioxide and about 0.6% the pressure of Earth’s at sea-level. More notably, it has used its ability to laser-blast solid samples and analyze the resulting vapor to determine that Mars has an unusually high ratio of heavy to light isotopes — most importantly of deuterium to hydrogen.
The main explanation for this is atmospheric loss, since light isotopes escape slightly more quickly than heavy ones. Over billions of years, this leads to skewed isotope ratios that point to a loss of atmosphere. One major theory that might explain this loss says that, billions of years ago, Mars collided with an object about the size of Pluto. An impact from this body would have caused a huge expulsion of atmosphere, followed by a slow, continued loss from then on.
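The logic connecting isotope ratios to atmospheric loss can be illustrated with the standard Rayleigh fractionation relation, R/R0 = f^(α−1), where f is the fraction of the original reservoir still remaining and α < 1 reflects the fact that light hydrogen escapes more readily than deuterium. The α and f values below are made up for illustration, not Curiosity’s measurements:

```python
# Illustrative Rayleigh-fractionation estimate of how escape-to-space
# enriches a remaining atmosphere in heavy isotopes:
#   R / R0 = f ** (alpha - 1)
# where f is the fraction of the reservoir remaining and alpha (< 1)
# is the heavy-to-light ratio of escape efficiencies. The alpha and f
# values below are illustrative, not measured Mars values.

def enrichment_factor(f_remaining, alpha):
    """D/H ratio of the remaining gas relative to the initial ratio."""
    return f_remaining ** (alpha - 1)

# If 90% of the hydrogen reservoir has escaped (f = 0.1) with
# alpha = 0.5, the surviving gas is enriched in deuterium ~3.2x:
print(f"{enrichment_factor(0.1, 0.5):.2f}")
```

The key qualitative takeaway survives any choice of constants: the more atmosphere that has been lost, the heavier the leftover gas looks, which is exactly the signature Curiosity’s laser assays are picking up.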
All of this plays into the larger question of life on Mars. Is there, or was there, ever life? Most likely, there was, as all the elements – water, atmosphere, clay minerals – appear to have been there at one time. And while scientists might still stumble upon a Lake Vostok-like reserve of microbial life under the surface, it seems most likely that Mars’ most fertile days are behind it.
However, that doesn’t mean that it can’t once again host life-sustaining conditions. And with some tweaking of the ecological engineering – aka. terraforming – variety, it could once again.
Ever since computers were first invented, scientists and futurists have dreamed of the day when computers might be capable of autonomous reasoning and be able to surpass human beings. In the past few decades, it has become apparent that simply throwing more processing power at the problem of true artificial intelligence isn’t enough. The human brain remains several orders of magnitude more complex than the typical AI, but researchers are getting closer.
One such effort is ConceptNet 4, a semantic network being developed by MIT. This AI system contains a large store of information that is used to teach the system about various concepts. But more importantly, it is designed to process the relationship between things. Much like the Google Neural Net, it is designed to learn and grow to the point that it will be able to reason autonomously.
Recently, researchers at the University of Illinois at Chicago decided to put the ConceptNet through an IQ test. To do this, they used the Wechsler Preschool and Primary Scale of Intelligence Test, which is one of the common assessments used on small children. ConceptNet passed the test, scoring on par with a four-year-old in overall IQ. However, the team points out it would be worrisome to find a real child with lopsided scores like those received by the AI.
The system performed above average on parts of the test that have to do with vocabulary and recognizing the similarities between two items. However, the computer did significantly worse on the comprehension questions, which test a little one’s ability to understand practical concepts based on learned information. In short, the computer showed relational reasoning, but was lacking in common sense.
This is the missing piece of the puzzle for ConceptNet and those like it. An artificial intelligence like this one might have access to a lot of data, but it can’t draw on it to make rational judgements. ConceptNet might know that water freezes at 32 degrees, but it doesn’t know how to get from that concept to the idea that ice is cold. This is basically common sense — humans (even children) have it and computers don’t.
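That gap between stored facts and common sense can be illustrated with a toy semantic network: stored relations are easy to retrieve, but a conclusion that depends on an unstated bridging rule (“below freezing means cold”) simply isn’t there. The triples and query helper here are invented for illustration – this is not ConceptNet’s actual API:

```python
# Toy semantic network illustrating the "common sense" gap:
# stored relations are retrievable, but conclusions that require an
# unstated bridging rule are not. The triples and query helper are
# invented for illustration -- this is not ConceptNet's actual API.

FACTS = {
    ("water", "FreezesAtF"): "32",
    ("ice", "MadeOf"): "water",
    # Note what's missing: nothing links "freezing" to "cold".
}

def query(subject, relation):
    """Look up a stored relation; None if the network doesn't know."""
    return FACTS.get((subject, relation))

print(query("water", "FreezesAtF"))   # -> '32'  (stored fact)
print(query("ice", "Temperature"))    # -> None  (needs common sense)
```

A child bridges that gap effortlessly; a system like this one returns nothing unless someone explicitly adds the missing link, which is precisely the comprehension deficit the IQ test exposed.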
There’s no easy way to fabricate implicit information and common sense into an AI system and so far, no known machine has shown the ability. Even IBM’s Watson trivia computer isn’t capable of showing basic common sense, and though multiple solutions have been proposed – from neuromorphic chips to biomimetic circuitry – nothing is bearing fruit just yet.
But of course, the MIT research team is already hard at work on ConceptNet 5, a more sophisticated semantic network that is open source and available on GitHub. But for the time being, it’s clear that a machine will be restricted to processing information and incapable of making basic decisions. Good thing too! The sooner they can think for themselves, the sooner they can decide we’re in their way!
Ever since the Space Shuttle program was shut down in 2011, NASA has been forced to look to the private sector to restore its ability to put human beings into orbit from American soil. This consists of providing the seed money companies need to develop a new breed of “space taxis”. One such program is the Dream Chaser, a reusable shuttle that will fly astronauts into low Earth orbit (LEO) and to the International Space Station (ISS).
Much like a standard Space Shuttle, the Dream Chaser is designed to launch atop a United Launch Alliance Atlas V rocket and land on a shuttle landing facility. And after lengthy periods of research and development, the Dream Chaser is now moving forward with a series of ground tests at NASA’s Dryden Flight Research Center in California that will soon lead to dramatic aerial flight tests throughout 2013.
This consisted of putting the shuttle together and then conducting a series of what are known as “pathfinding tow tests” on Dryden’s concrete runway. The purpose here is to validate the performance of the vehicle’s nose skid, brakes, tires and other systems, to prove that it can safely land an astronaut crew after surviving the searing re-entry from Earth orbit. For the initial ground tests, the ship was pulled by a tow truck at 16 to 32 km/h (10 to 20 mph).
Later this month, the next leg of testing will consist of towing it at speeds of 64 to 95 km/h (40 to 60 mph). The next phases will take place later this year in the form of airborne captive-carry tests, where an Erickson Skycrane helicopter will fly the fuselage around to see how it holds up. Approach and Landing Tests (ALT) will follow to check the aerodynamic handling, and will consist of atmospheric drop tests in autonomous free-flight mode.
In an interview with Universe Today, Marc Sirangelo – Sierra Nevada Corp. vice president and SNC Space Systems chairman – spoke on record about the shuttle and where it is in terms of development:
It’s not outfitted for orbital flight. It is outfitted for atmospheric flight tests. The best analogy is it’s very similar to what NASA did in the shuttle program with the Enterprise, creating a vehicle that would allow it to do significant flights whose design then would filter into the final vehicle for orbital flight.
In short, the Dream Chaser has a long way to go, but the program shows great promise. And as already noted, they are not the only ones benefiting from this public-private agreement that seeks to develop commercial vehicles for the sake of kick starting space travel.
Other companies include Boeing and SpaceX, which were also awarded contracts under NASA’s Commercial Crew Integrated Capability initiative, or CCiCap. All three have their own commercial vehicles under development – the Boeing CST-100 and SpaceX’s Dragon among them – which are similarly designed to carry a crew of up to seven astronauts to the ISS and remain docked with it for up to six months.
But of course, everything depends on NASA’s approved budget, which seems headed for steep cuts in excess of a billion dollars if a Republican-dominated US House has its way. This is the third contract in NASA’s Phase 1 CCiCap contracts, whose combined value is about $1.1 billion and which run through March 2014. Phase 2 contract awards will eventually lead to actual flight units after a down-selection to one or more of the companies. The first orbital flight test of the Dream Chaser is not expected before 2016, and could be further delayed if NASA’s commercial crew budget is again slashed by Congress – as it has been in the past few years.
But as William Gerstenmaier – NASA’s associate administrator for human exploration and operations in Washington – indicated in a statement, the larger goal here is one of repatriation. As it stands, US astronauts are totally dependent on Russia’s Soyuz capsule for rides to the ISS, which costs upwards of $70 million a trip. NASA hopes to change that by rekindling the “good old days” of space travel:
NASA centers around the country paved the way for 50 years of American human spaceflight, and they’re actively working with our partners to test innovative commercial space systems that will continue to ensure American leadership in exploration and discovery.
And I for one wish NASA luck. Lord knows thirty years of post-Cold War budget cutbacks haven’t been easy on them. And hitching rides into space atop Cold War-era rockets is not the best way of getting your astronauts into orbit either!
In the meantime, check out this concept video of the Dream Chaser in action, courtesy of the Sierra Nevada Corporation:
It’s no secret that the exponential growth in smartphone use has been paralleled by a similar growth in what smartphones can do. Every day, new and interesting apps are developed which give people the ability to access new kinds of information, interface with other devices, and even perform a range of scans on themselves. It is these latter two aspects of development that are especially exciting, as they are opening the door to medical applications.
Yes, in addition to temporary tattoos and tiny medimachines that can be monitored from your smartphone or other mobile computing device, there is also a range of apps that allow you to test your eyesight and even conduct ultrasounds on yourself. But perhaps most impressive is the new Smartphone Spectrometer, an iPhone program which will allow users to diagnose their own illnesses.
Consisting of an iPhone cradle, phone and app, this spectrometer costs just $200 and has the same level of diagnostic accuracy as a $50,000 machine, according to Brian Cunningham, a professor at the University of Illinois, who developed it with his students. Using the phone’s camera and a series of optical components in the cradle, the machine detects the light spectrum passing through a liquid sample.
This liquid can be urine, blood, or any other bodily fluid that exhibits traces of harmful infection. By comparing the sample’s spectrum to the spectra of target molecules, such as toxins or bacteria, it’s possible to work out how much of each is in the sample. In short, a quickie diagnosis for the cost of a fancy new phone.
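The comparison step can be sketched as a least-squares match of a measured absorbance spectrum against a reference spectrum for the target molecule, relying on the Beer-Lambert idea that absorbance scales linearly with concentration. All the spectra and numbers here are invented for illustration – this is not Cunningham’s actual algorithm:

```python
# Illustrative sketch of the spectral-matching step: given a reference
# absorbance spectrum for a target molecule at unit concentration
# (Beer-Lambert: absorbance scales linearly with concentration),
# estimate how much of it is in a measured sample with a
# one-parameter least-squares fit. All values are invented.

def estimate_concentration(measured, reference):
    """Least-squares scale factor c minimizing ||measured - c*reference||."""
    num = sum(m * r for m, r in zip(measured, reference))
    den = sum(r * r for r in reference)
    return num / den

# Reference spectrum at unit concentration, across a few wavelengths
reference = [0.10, 0.40, 0.80, 0.40, 0.10]
# A noise-free sample containing 2.5x the unit concentration
measured = [2.5 * r for r in reference]

print(f"{estimate_concentration(measured, reference):.2f}")  # -> 2.50
```

Real samples are noisy and contain mixtures, so production systems fit several reference spectra at once, but the principle is the same: the spectrum is a fingerprint, and the fit tells you how much of each fingerprint is present.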
Granted, there are limitations at this point. For one, the device is nowhere near as efficient as its industrial counterpart: whereas the automated $50,000 version can process up to 100 samples at a time, the iPhone spectrometer can only do one. But by the time Cunningham and his team commercialize the design, they hope to have increased that efficiency by a few orders of magnitude.
On the plus side, the device is far more portable than any other known spectrometer. Whereas a lab is fixed in place and has to process thousands of samples at any given time, leading to waiting lists, this device can be used just about anywhere. In addition, there’s no loss of accuracy. As Cunningham explained:
We were using the same kits you can use to detect cancer markers, HIV infections, or certain toxins, putting the liquid into our cartridge and measuring it on the phone. We have compared the measurements from full pieces of equipment, and we get the same outcome.
Cunningham is currently filing a patent application and looking for investment. He also has a grant from the National Science Foundation to develop an Android version. And while he doesn’t think smartphone-based devices will replace standard spectrometry machines with long track records and FDA approval, he does believe they could enable more testing.
This is especially true in countries where government-regulated testing is harder to come by, or where medical facilities are under-supplied or waiting lists are prohibitively long. With diseases like cancer and HIV, early detection can be the difference between life and death, which is a major advantage, according to Cunningham:
In the future, it’ll be possible for someone to monitor themselves without having to go to a hospital. For example, that might be monitoring their cardiac disease or cancer treatment. They could do a simple test at home every day, and all that information could be monitored by their physician without them having to go in.
But of course, the new iPhone spectrometer is not alone. Many other variations are coming out, such as the Public Laboratory Mobile Spectrometer, or Android’s own version of the Spectral Workbench. And of course, this all calls to mind the miniature spectrometer built by Jack Andraka, the 16-year-old who invented a low-cost litmus test for pancreatic cancer and won the 2012 Intel International Science and Engineering Fair (ISEF). That’s him in the middle of the picture below:
It’s the age of mobile medicine, my friends. Thanks to miniaturization, nanofabrication, wireless technology, mobile devices, and an almost daily rate of improvement in medical technology, we are entering into an age where early detection and cost-saving devices are making medicine more affordable and accessible.
In addition, all this progress is likely to add up to many lives being saved, especially in developing regions and low-income communities. It’s always encouraging when technological advances have the effect of narrowing the gap between the haves and the have-nots, rather than widening it.
And of course, there’s a video of the smartphone spectrometer at work, courtesy of Cunningham’s research team and the University of Illinois:
It might sound like the plot of a Ray Bradbury novel, where parents and children crowd into the family rocket and make a day trip to the Lunar Park. But new legislation is being proposed that would turn the Apollo 11 landing site into a national park. It would go by the name of the Lunar Landing Sites National Historical Park, and given the rate at which commercial spaceflight is advancing, it’s not surprising to see this idea being put forward.
The bill – which was introduced by Reps. Donna Edwards of Maryland and Eddie Bernice Johnson of Texas – is known as HR 2617, or “The Apollo Lunar Landing Legacy Act”. This bill, if ratified, would put the National Park Service in charge of the moon park, which would consist of all the artifacts left on the moon from the Apollo missions.
The bill also specifies that the Apollo 11 landing site should be submitted to the United Nations Educational, Scientific, and Cultural Organization for designation as a World Heritage Site. The bill refers to the Apollo lunar program as one of the greatest achievements in American history and recommends:
…establishing the Historical Park under this Act will expand and enhance the protection and preservation of the Apollo lunar landing sites and provide for greater recognition and public understanding of this singular achievement in American history.
Naturally, the bill does not specify when ground would be broken on this new park, nor can it be expected to. At this juncture, there’s no way of knowing when commercial trips to the Moon will be possible, though many hope to make it so by 2030. Still, in an age when federal and private space companies are pushing the envelope on what is possible, it’s good to plan ahead.
And let’s not forget that with Moon bases being contemplated and designs being proposed, it will be good to have certain recreational activities available for future Lunar settlers. Sooner or later, people are likely to go stir crazy living in 3D printed bases made out of lunar dust. And sightseeing is likely to be a popular option on a newly colonized world.
In the meantime, I think some ideas on what people will be able to do in this park might be in order. I’m sure the National Park Service would be open to suggestions. Everything from buggy rides to concession stands offering typical astronaut treats, like freeze-dried ice cream and Tang, to albums of Chris Hadfield’s latest hits!
When it comes to the history of computing, cryptography, and mathematics, few people have earned more renown and respect than Alan Turing. In addition to helping the Allied forces of World War II break the Enigma code, a feat which was the difference between victory and defeat in Europe, he also played an important role in the development of computers with his “Turing Machine” and devised the Turing Test – a basic intelligence requirement for future AIs.
Despite these accomplishments, Alan Turing became the target of government persecution when it was revealed in 1952 that he was gay. At the time, homosexuality was illegal in the United Kingdom, and Alan Turing was charged with “gross indecency” and given the choice between prison and chemical castration. He chose the latter, and after two years of enduring the effects of the drug, he ate an apple laced with cyanide and died.
Officially ruled as a suicide, though some suggested that foul play may have been involved, Turing died at the tender age of 41. Despite his lifelong accomplishments and the fact that he helped to save Britain from a Nazi invasion, he was destroyed by his own government for the simple crime of being gay.
But in a recent landmark decision, the British government indicated that it would support a backbench bill that would posthumously clear his name of all charges. This is not the first time the subject of Turing’s sentencing has been visited by the British Parliament. Though for years it resisted offering an official pardon, Prime Minister Gordon Brown did offer an apology for the “appalling” treatment Turing received.
However, it was not until now that the government sought to wipe the slate clean and begin to redress the issue, starting with the ruling that ruined the man’s life. The decision came on Friday, when Lord Ahmad of Wimbledon, a government whip, told peers that the government would table the third reading of the Alan Turing bill at the end of October if no amendments are made.
Every year since 1966, the Turing Award – the computing world’s highest honor and equivalent of the Nobel Prize – has been given by the Association for Computing Machinery for technical or theoretical contributions to the computing community. In addition, on 23 June 1998 – what would have been Turing’s 86th birthday – an English Heritage blue plaque was unveiled at his birthplace and childhood home in Warrington Crescent, London.
In addition, in 1994, a stretch of the A6010 road – the Manchester city intermediate ring road – was named “Alan Turing Way”, and a bridge connected to the road was named “Alan Turing Bridge”. A statue of Turing was also unveiled in Manchester in 2001 in Sackville Park, between the University of Manchester building on Whitworth Street and the Canal Street gay village.
This memorial statue depicts the “father of Computer Science” sitting on a bench at a central position in the park holding an apple. The cast bronze bench carries in relief the text ‘Alan Mathison Turing 1912–1954’, and the motto ‘Founder of Computer Science’ as it would appear if encoded by an Enigma machine: ‘IEKYF ROMSI ADXUO KVKZC GUBJ’.
But perhaps the greatest and most creative tribute to Turing comes in the form of the statue of him that adorns Bletchley Park, the site of the UK’s main decryption department during World War II. The 1.5-ton, life-size statue of Turing was unveiled on June 19th, 2007. Built from approximately half a million pieces of Welsh slate, it was sculpted by Stephen Kettle and commissioned by the late American billionaire Sidney Frank.
Last year, Turing was even commemorated with a Google doodle in honor of what would have been his 100th birthday. In a fitting tribute to Turing’s code-breaking work, this doodle was designed to spell out the name Google in binary. Unlike previous tributes produced by Google, this one was remarkably complicated. Those who attempted to figure it out apparently had to consult the online source Mashable just to realize what its purpose was.
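For the curious, spelling a word “in binary” just means mapping each character to its numeric code and writing that number in base 2. Purely as an illustration (the doodle’s actual encoding scheme may well have differed), here is what that looks like for “Google” under plain 8-bit ASCII:

```python
# Illustrative only: spell out "Google" as 8-bit ASCII binary codes.
word = "Google"

# ord() gives each character's code point; format(..., "08b") renders it
# as an 8-digit binary string.
bits = [format(ord(ch), "08b") for ch in word]

print(" ".join(bits))
# 'G' is ASCII 71, which in binary is 01000111.
```

Decoding a message like the doodle’s is just this process in reverse: split the bit stream into fixed-width groups and map each group back to a character.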
For many, this news is seen as a development that has been too long in coming. Much like Canada’s own admission to wrongdoing in the case of Residential Schools, or the Church’s persecution of Galileo, it seems that some institutions are very slow to acknowledge that mistakes were made and injustices committed. No doubt, anyone in a position of power and authority is afraid to admit to wrongdoing for fear that it will open the floodgates.
But as with all things having to do with history and criminal acts, people cannot be expected to move forward until accounts are settled. And for those who would say “get over it already!”, or similar statements which would place responsibility for moving forward on the victims, I would say “just admit you were wrong already!”
Rest in peace, Alan Turing. And may the homophobes who continue to refuse to admit they’re wrong find the wisdom and self-respect to learn and grow from their mistakes. Orson Scott Card, I’m looking in your direction!