The Future is Here: AirMule’s Autonomous Demo Flight

Vertical Take-Off and Landing (VTOL) craft have been a subject of military development for some time. In addition to being able to deploy from landing strips that are too damaged or too small for conventional aircraft, they can navigate terrain and land where other craft cannot. Add to that the ability to hover and fly close to the ground, and you have a craft that can also provide support while avoiding IEDs and landmines.

One concept that incorporates all of these features is the AirMule, a compact, unmanned, single-engine vehicle being developed by Tactical Robotics in Israel. In January of 2013, the company unveiled the prototype, which it said was created to support military personnel, evacuate the wounded, and conduct remote reconnaissance missions.

Now, less than a year later, the company’s prototype aircraft has demonstrated its ability to fly autonomously, bringing it one step closer to carrying out a full mission demo. During the test, which took place in December, the craft autonomously performed a vertical take-off, flew to the end of a runway, then turned around on the spot and flew back to its starting point.

All the while, it maintained altitude using two laser altimeters, and maintained its position via a combination of GPS, an inertial navigation system, and optical reference to markers on the ground. These autonomous systems can also be overridden in favor of remote control, in case a mission gets particularly hairy and requires a human operator.
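
Tactical Robotics has not published how the AirMule blends these sensors, but the general idea – fusing a fast-drifting inertial estimate with a slower, drift-free reference such as GPS or an optical marker fix – can be sketched with a simple complementary filter. The weights and values below are illustrative assumptions, not the vehicle’s actual control logic.

```python
def complementary_filter(ins_estimate: float, reference_fix: float, alpha: float = 0.98) -> float:
    """Blend a fast but drifting INS estimate with a slower, noisier but
    drift-free reference (e.g. a GPS or optical-marker position fix).
    alpha close to 1.0 trusts the INS in the short term while letting the
    reference slowly pull accumulated drift back out. Illustrative only."""
    return alpha * ins_estimate + (1.0 - alpha) * reference_fix

# Toy run: the INS believes the craft is 105.0 m down the runway, while the
# marker fix says 100.0 m; each update nudges the estimate back toward the fix.
position = 105.0
for _ in range(50):
    position = complementary_filter(position, 100.0)
print(round(position, 2))  # drifts back toward 100.0
```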

In its current form, the AirMule possesses many advantages over other VTOL craft, such as helicopters. For starters, it weighs only 770 kg (1,700 lb) – as opposed to the Bell UH-1’s empty weight of 2,365 kg (5,215 lb) – can carry a payload of up to 640 kg (1,400 lb), has a top speed of 180 km/h (112 mph), and can reach a maximum altitude of 12,000 ft (3,658 m).

In short, it has a better mass-to-carrying-capacity ratio than a helicopter, comparable performance, and can land and take off within an area of 40 square meters (430.5 sq ft), which is significantly smaller than what a manned helicopter requires for a safe landing. The internal rotor blades are reportedly also much quieter than those of a helicopter, giving the matte-black AirMule some added stealth.
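
Since the comparison above turns on a handful of unit conversions and a simple payload ratio, here is a quick sanity check of the figures quoted in this article (a Python sketch; only the conversion constants are added):

```python
# Figures quoted above, with standard unit conversions to double-check them.
KG_TO_LB = 2.20462
KMH_TO_MPH = 0.621371
FT_TO_M = 0.3048
SQM_TO_SQFT = 10.7639

empty_weight_kg, payload_kg = 770, 640
top_speed_kmh, ceiling_ft, footprint_sqm = 180, 12_000, 40

print(f"Empty weight: {empty_weight_kg * KG_TO_LB:,.0f} lb")    # ~1,698 lb
print(f"Payload:      {payload_kg * KG_TO_LB:,.0f} lb")         # ~1,411 lb
print(f"Top speed:    {top_speed_kmh * KMH_TO_MPH:.0f} mph")    # ~112 mph
print(f"Ceiling:      {ceiling_ft * FT_TO_M:,.0f} m")           # ~3,658 m
print(f"Footprint:    {footprint_sqm * SQM_TO_SQFT:.1f} sq ft") # ~430.6 sq ft

# The "mass to carrying capacity" figure of merit: payload per kg of empty weight.
print(f"Payload fraction: {payload_kg / empty_weight_kg:.2f}")  # ~0.83
```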

Plans now call for “full mission demonstrations” next year, utilizing a second prototype that is currently under construction. And once complete, this vehicle and others like it can be expected to be deployed to many areas of the world, assisting Coalition and other forces in dirty, dangerous environments where landmines, IEDs and other man-made and natural hazards are common.

Alongside machines like the Alpha Dog, LS3 and Wildcat – robots built by Boston Dynamics (recently acquired by Google) to offer transport and support to infantry in difficult terrain – efforts to “unman the front lines” through the use of autonomous drones and remote-controlled robots continue. Clearly, the future battlefield is a place where robots will be offering a rather big hand!

 

And be sure to check out this video of the AirMule demonstration, showing the vehicle take off, hover, fly around, and then come in for a landing:


Sources: gizmag.com, tactical-robotics.com

The Future of Computing: Brain-Like Computers

It’s no secret that computer scientists and engineers are looking to the human brain as a means of achieving the next great leap in computer evolution. Already, machines are being developed that rely on “electronic blood”, can continue working despite being damaged, and can recognize images and speech. And soon, a computer chip that is capable of learning from its mistakes will also be available.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system – specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014, the result of collaborative efforts involving I.B.M. and Qualcomm, as well as a Stanford research team. This “neuromorphic processor” can not only automate tasks that once required painstaking programming, but can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. This could have enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in their elementary stages and rely heavily on human programming.

For example, computer vision systems only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation. But last year, Google researchers were able to get a machine-learning algorithm, known as a “Google Neural Network”, to perform an identification task (involving cats) without supervision.

And this past June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately. Then in November, researchers at Stanford University came up with a new algorithm that could give computers the power to more reliably interpret language. It’s known as the Neural Analysis of Sentiment (NaSent).

A similar concept, known as Deep Learning, is also looking to endow software with a measure of common sense. Google is using this technique with its voice recognition technology to aid in performing searches. In addition, the social media giant Facebook is looking to use deep learning to help improve Graph Search, an engine that allows users to search activity on their network.

Until now, the design of computers has been dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of binary code (0s and 1s). The information is stored separately in what is known as memory, either in the processor itself, in adjacent storage chips, or in higher-capacity magnetic disk drives.

By contrast, the new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

These processors are not “programmed” in the conventional sense. Instead, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows into the chip, causing them to change their values and to “spike.” This, in turn, strengthens some connections and weakens others, reacting much the same way the human brain does.
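
Neither I.B.M. nor Qualcomm has published its chip design in this article, but the “weighted connections that spike” idea can be illustrated with a toy leaky integrate-and-fire neuron plus a crude Hebbian-style weight update. Every constant below is an illustrative assumption, not a description of any real neuromorphic chip.

```python
import random

class ToyNeuron:
    """A toy leaky integrate-and-fire neuron with a crude Hebbian weight update."""

    def __init__(self, n_inputs, threshold=1.0, leak=0.9, learning_rate=0.05):
        self.weights = [random.uniform(0.1, 0.5) for _ in range(n_inputs)]
        self.potential = 0.0
        self.threshold, self.leak, self.lr = threshold, leak, learning_rate

    def step(self, inputs):
        # Leak a little stored charge, then integrate the weighted inputs.
        self.potential = self.leak * self.potential + sum(
            w * x for w, x in zip(self.weights, inputs))
        if self.potential < self.threshold:
            return False
        # Spike: reset, then strengthen connections that were active when the
        # spike occurred and slightly weaken the rest (Hebbian-style update).
        self.potential = 0.0
        for i, x in enumerate(inputs):
            self.weights[i] += self.lr * (x - 0.5)
        return True

neuron = ToyNeuron(n_inputs=3)
pattern = [1.0, 0.0, 1.0]
spikes = sum(neuron.step(pattern) for _ in range(100))
print(f"spikes: {spikes}, weights: {[round(w, 2) for w in neuron.weights]}")
```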

In the words of Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort:

Instead of bringing data to computation as we do today, we can now bring computation to data. Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.

One great advantage of the new approach is its ability to tolerate glitches, whereas traditional computers cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever-changing, allowing the system to continuously adapt and work around failures to complete tasks. Another benefit is energy efficiency, an inspiration also drawn from the human brain.

The new computers, which are still based on silicon chips, will not replace today’s computers, but augment them, at least for the foreseeable future. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and in the centralized computers that run computing clouds.

However, the new approach is still limited, thanks to the fact that scientists still do not fully understand how the human brain functions. As Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, put it:

We have no clue. I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.

Luckily, there are efforts underway designed to remedy this, with the specific intention of directing that knowledge towards the creation of better computers and AIs. One such effort is the National Science Foundation-financed Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology in partnership with Harvard and Cornell.

Another is the California Institute for Telecommunications and Information Technology (aka. Calit2) – a center dedicated to innovation in nanotechnology, life sciences, information technology, and telecommunications. As Larry Smarr, an astrophysicist and the director of the institute, put it:

We’re moving from engineering computing systems to something that has many of the characteristics of biological computing.

And last, but certainly not least, is the Human Brain Project, an international group of 200 scientists from 80 different research institutions, based in Lausanne, Switzerland. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This initiative, which has been compared to the Large Hadron Collider, will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into an overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

When future generations look back on this decade, no doubt they will refer to it as the birth of the neuromorphic computing revolution. Or maybe just the Neuromorphic Revolution for short, but that sort of depends on the outcome. With so many technological revolutions well underway, it is difficult to imagine how the future will look back and characterize this time.

Perhaps, as Charles Stross suggests, it will simply be known as “the teens”, that time in pre-Singularity history when it was all starting to come together, but had yet to explode and violently change everything we know. I for one am looking forward to being around to witness it all!

Sources: nytimes.com, technologyreview.com, calit2.net, humanbrainproject.eu

The Future is Here: 3-D Printed Eye Cells

In the past few years, medical researchers have been able to replicate real, living tissue samples using 3-D printing technology – ranging from replacement ears and printed cartilage to miniature kidneys and even liver cells. Well now, thanks to a team of researchers from the University of Cambridge, eye cells have been added to that list.

Using a standard ink-jet printer to form layers of cells, the research team managed to print two types of central nervous system cells from the retinas of adult rats – ganglion cells (which transmit information from the eye to the brain), and glial cells (which provide protection and support for neurons). The resulting cells were able to grow normally and remain healthy in culture.

Ink-jet printing has been used to deposit cells before, but this is the first time cells from an adult animal’s central nervous system have been printed. The team published its findings in IOP Publishing’s open-access journal Biofabrication and plans to extend the study to other cells of the retina, including light-sensitive photoreceptors.

In the report, Keith Martin and Barbara Lorber – the co-authors of the paper who work at the John van Geest Centre for Brain Repair at the University of Cambridge – explained the experiment in detail:

Our study has shown, for the first time, that cells derived from the mature central nervous system, the eye, can be printed using a piezoelectric inkjet printer. Although our results are preliminary and much more work is still required, the aim is to develop this technology for use in retinal repair in the future.

This is especially good news for people with impaired visual acuity, or those who fear losing their sight, as it could lead to new therapies for blindness caused by retinal disorders such as macular degeneration. Naturally, more tests are needed before human trials can begin. But the research is quite reassuring in showing that eye cells can not only be produced synthetically, but will remain healthy after they are printed.

Clara Eaglen, a spokesperson for the Royal National Institute of Blind People (RNIB), had this to say about the breakthrough:

The key to this research, once the technology has moved on, will be how much useful vision is restored. Even a small bit of sight can make a real difference, for some people it could be the difference between leaving the house on their own or not. It could help boost people’s confidence and in turn their independence.

Combined with bionic eyes that are now approved for distribution in the US, and stem cell treatments that have restored sight in mice, this could be the beginning of the end of blindness. And with all the strides being made in bioprinting and biofabrication, it could also be another step on the long road to replacement organs and print-on-demand body parts.

Sources: news.cnet.com, singularityhub.com, cam.ac.uk, bbc.co.uk

The Future is Weird: Cyborg Sperm!

Finding ways to merge the biological and the technological, thus creating the best of both worlds, is one of the hallmarks of our new age. Already, we have seen how bionic appendages that connect and calibrate to people’s nerve signals can restore mobility and sensation to injured patients. And EEG devices that can read and interpret brainwaves are allowing man-machine interfacing like never before.

But cyborg sperm? That is something that might require an explanation. You see, sperm cells have an awesome swimming ability. And wanting to take advantage of this, Oliver Schmidt and a team of researchers at the Institute for Integrative Nanosciences in Dresden, Germany, combined individual sperm cells with tiny magnetic metal tubes to create the first sperm-based biobots.

This means we now have a way to control a cell’s direction inside the body, a breakthrough that could lead to efficient microscopic robots – ones which are not entirely mechanical. To make the “biohybrid micro-robot,” Schmidt and his colleagues captured and trapped bull sperm inside magnetic microtubes, leaving the tails outside.

To create the spermbots, the team made microtubes 50 microns long and 5 to 8 microns in diameter from iron and titanium nanoparticles. They added the tubes to a fluid containing thawed bull sperm. Because one end of each tube was slightly narrower than the other, sperm that swam into the wider end became trapped, headfirst, with their flagella still free.

With mobility taken care of, the team moved on to the matter of how to control and direct the microtubes. For this, they chose to rely on external magnetic fields, which orient the metal tubes much the way the Earth’s magnetic field aligns a compass needle. This enabled the team to control the direction in which the sperm swam, adjusting their speed through the application of heat.

According to the researchers, the option of using sperm as the basis for a biohybrid micro-robot is attractive because they are harmless to the human body, they provide their own power, and they can swim through viscous liquids – such as blood and other bodily fluids. As the researchers said in their paper:

The combination of a biological power source and a microdevice is a compelling approach to the development of new microrobotic devices with fascinating future application.

Granted, the idea of cybernetic sperm swimming through our systems might not seem too appealing. But think of the benefits for fertility treatments and intrauterine health. In the future, tiny biohybrid robots like these could be used to shepherd individual sperm to eggs, making for more effective artificial insemination. They could also deliver targeted doses of drugs to uterine tissue that is either infected or cancerous.

And if nothing else, it helps to demonstrate the leaps and bounds being made in the fields of biotechnology and nanotechnology of late. At their current rate of development, we could be seeing advanced medimachines and DNA-based nanobots becoming a part of regular medical procedures in just a few years’ time.

And while we’re waiting, check out this video of the “cyborg sperm” in action, courtesy of New Scientist:


Sources: IO9.com, newscientist.com

The Future of Currency: Bitcoin Hitting the Streets

For those familiar with digital currencies, the name Bitcoin ought to ring a bell. Developed back in 2009, this “cryptocurrency” – i.e. it uses cryptography to control the creation and transfer of money – was created as a form of online payment for products and services. Since that time, it has become the subject of scrutiny, legislative bans and volatile pricing, and has been hailed as a harbinger of the coming age of “distributed currency”.

Unlike precious metals or more traditional forms of currency, which hold value because they are backed by a country or are used to manufacture goods, Bitcoin is only buoyed by market demand. There are only 12.3 million virtual Bitcoins in circulation and those “coins” are traded through a Peer-to-Peer computer network, much as people used to share music files.

What’s especially interesting is the fact that the creator of this new form of currency remains unknown. It is assumed that it originated with a programmer from Japan, since its first mention came in a 2008 paper published under the pseudonym “Satoshi Nakamoto”. It became operational roughly a year later with the release of the first open-source Bitcoin client and the issuance of the first bitcoins.

And in an interesting and personally-relevant development, it now seems that a Bitcoin ATM is coming to my old hometown of Ottawa. In this respect, the nation’s capital joins other major cities around the globe in dispensing the cryptocurrency, in spite of the fact that it is still not recognized by any national banking institutions or financial regulating bodies.

What’s more, the cryptocurrency has seen its price go through repeated highs and lows over the past few years, being subject to both bubbles and crashes as countries like India and China restricted its use. But with these machines hitting the streets, a trend which began back in November with the distribution of Robocoin ATMs, there is speculation that the digital currency might just be here to stay.

Part of the appeal of cryptocurrencies is that they allow for anonymity, which is why Bitcoin has been linked to a number of illegal activities, such as on the shuttered drug marketplace Silk Road. And because its value is not backed by any tangible measure or authority, speculators are able to ratchet up demand and push the price higher.

But Bitcoin is also starting to be accepted as a mainstream form of payment by U.S.-centric sites like OkCupid and WordPress. And back in October of 2013, China’s web giant Baidu announced that it would start accepting Bitcoin payments for a firewall security service it sells. And though the Chinese government put the brakes on Bitcoin exchanges by December, the number of mainstream institutions opening their coffers to it is growing.

These include Richard Branson’s private space tourism company Virgin Galactic, the Sacramento Kings, the e-commerce giant Paypal, and Overstock.com, a major online retailer. And popular use is also growing, as evidenced by the visualization below which shows downloads of bitcoin client software since 2008, broken down by different operating systems.

What the graphic shows is quite indicative. All over the world, particularly in developed countries and areas of economic growth – the Eastern US, Europe, Brazil, Argentina, Russia, Sub-Saharan Africa, India, China, Australia and Southeast Asia – the Bitcoin software is being downloaded and used to oversee online exchanges in goods and services.

And ultimately, those who believe in the service and choose to invest in it are doing so based on the promise that it will someday streamline monetary transactions, free the world from the financial manipulation of big government and big banks, break down the financial walls between nations, and remake the worldwide economy. In short, it will break down centralized economies and allow a “distributed economy” to take its place.

Admittedly, the service is still flawed in a number of respects. For example, people who chose to collect bitcoins in the past were dissuaded from spending them since their value kept going up. The problem is, if economic incentives encourage people to hoard their bitcoins rather than spend them, the currency will never fulfill its role as the future of money.

Another problem is the one arising from the currency’s “deflationary nature”. Because the system was designed to allow the creation of only a finite number of bitcoins, there will come a point where, as demand rises, the value of the currency will only go up (making the price of goods and services fall, hence the term deflation). And that could lead to hoarding on an even larger scale.
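
That finite supply follows directly from the protocol’s issuance schedule: the block reward started at 50 bitcoins and halves every 210,000 blocks, so the total ever created is a geometric series that converges to roughly 21 million coins. A quick sketch of the arithmetic:

```python
# Bitcoin issuance: 50 BTC per block at launch, halved every 210,000 blocks.
reward = 50.0
blocks_per_halving = 210_000
total = 0.0
while reward >= 1e-8:          # 1 satoshi (1e-8 BTC) is the smallest unit
    total += reward * blocks_per_halving
    reward /= 2
print(f"{total:,.0f} BTC")     # ~21,000,000 -- the hard cap behind the deflationary design
```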

But according to many economists who have closely followed the progress of the digital money, Bitcoin’s recent ups and downs are to be expected from a currency so young, and one that is just now attracting major attention from the mainstream population. The bottom could fall out of the market, but the currency could just as easily stabilize and reach a point where its value is consistent enough that people no longer hoard the stuff.

So at this point, it’s difficult to say what the future holds for the new miracle money known as Bitcoin. But when it comes to cryptocurrencies in general, time seems to be on their side. Ever since the Internet Revolution took off, the possibility of a new, decentralized world order – one in which research, development, politics and business are open and inclusive in ways like never before – has been emerging.

Sources: ottawacitizen.com, wired.com, fastcoexist.com, bbc.co.uk, uxblog.idvsolutions.com

Looking Forward: Science Stories to Watch for in 2014

The year of 2013 was a rather big one in terms of technological developments, be they in the field of biomedicine, space exploration, computing, particle physics, or robotics technology. Now that the New Year is in full swing, there are plenty of predictions as to what the next twelve months will bring. As they say, nothing ever occurs in a vacuum, and each new step in the long chain known as “progress” is built upon those that came before.

And with so many innovations and breakthroughs behind us, it will be exciting to see what lies ahead of us for the year of 2014. The following is a list containing many such predictions, listed in alphabetical order:

Beginning of Human Trials for Cancer Drug:
A big story that went largely unreported in 2013 came out of the Stanford School of Medicine, where researchers announced a promising strategy in developing a vaccine to combat cancer. Such a goal has been dreamed about for years, using the immune system’s killer T-cells to attack cancerous cells. The only roadblock to this strategy has been that cancer cells use a molecule known as CD47 to send a signal that fools T-cells, making them think that the cancer cells are benign.

However, researchers at Stanford have demonstrated that the introduction of an “Anti-CD47 antibody” can intercept this signal, allowing T-cells and macrophages to identify and kill cancer cells. Stanford researchers plan to start human trials of this potential new cancer therapy in 2014, with the hope that it will be commercially available within a few years’ time. A great hope with this new macrophage therapy is that it will, in a sense, create a personalized vaccination against a patient’s particular form of cancer.

Combined with HIV vaccinations that have been shown not only to block the acquisition of the virus, but even to kill it, 2014 may prove to be the year that the ongoing war against two of the deadliest diseases in the world finally begins to be won.

Close Call for Mars:
A comet discovered back in 2013 created a brief stir when researchers noted that the comet in question – C/2013 A1 Siding Spring – would make a very close passage of the planet Mars on October 19th, 2014. Some even suspected it might impact the surface, creating all kinds of havoc for the world’s small fleet of orbiting satellites and ground-based rovers.

Though refinements from subsequent observations have effectively ruled that out, the comet will still pass by Mars at a close 41,300 kilometers, just outside the orbit of its outer moon, Deimos. Ground-based observers will get to watch the comet close in on Mars through October, as will the orbiters and rovers on and above the Martian surface.

Deployment of the First Solid-State Laser:
The US Navy has been working diligently to create the next generation of weapons and deploy them to the front lines. In addition to sub-hunting robots and autonomous aerial drones, they have also been working towards the creation of some serious ship-based firepower. This has included electromagnetic artillery (aka. rail guns); and just as impressively, laser guns!

Sometime in 2014, the US Navy expects the USS Ponce, with its single solid-state laser weapon, to be deployed to the Persian Gulf as part of an “at-sea demonstration”. Although they have been tight-lipped about the capabilities of this particular directed-energy weapon, they have indicated that its intended purpose is as a countermeasure against threats – including aerial drones and fast-moving small boats.

Discovery of Dark Matter:
For years, scientists have suspected that they are closing in on the discovery of Dark Matter. Since it was proposed in the 1930s, finding this strange mass – which, alongside “Dark Energy”, makes up the bulk of the universe – has been a top priority for astrophysicists. And 2014 may just be the year that the Large Underground Xenon experiment (LUX), located near the town of Lead in South Dakota, finally detects it.

Located deep underground to prevent interference from cosmic rays, the LUX experiment looks for Weakly Interacting Massive Particles (WIMPs) as they interact with 370 kilograms of super-cooled liquid xenon. LUX is due to start another 300-day run in 2014, and the experiment will add another piece to the puzzle posed by dark matter to modern cosmology. If all goes well, conclusive proof of the existence of this invisible, mysterious mass may finally be found!

ESA’s Rosetta Makes First Comet Landing:
This year, after over a decade of planning, the European Space Agency’s Rosetta robotic spacecraft will rendezvous with Comet 67P/Churyumov-Gerasimenko. This will begin on January 20th, when the ESA will hail Rosetta and “awaken” its systems from their slumber. By August, the two will meet, in what promises to be the cosmic encounter of the year. After examining the comet in detail, Rosetta will then dispatch its Philae lander, equipped with harpoons and ice screws, to make the first-ever landing on a comet.

First Flight of the Falcon Heavy:
2014 will be a busy year for SpaceX, which is expected to conduct more satellite deployments for customers and resupply missions to the International Space Station in the coming year. They’ll also be moving ahead with tests of their crew-rated version of the Dragon capsule in 2014. But one of the most interesting missions to watch for is the demo flight of the Falcon Heavy, which is slated to launch out of Vandenberg Air Force Base by the end of 2014.

This historic flight will mark the beginning of a new era of commercial space exploration and private space travel. It will also bring the dream of affordable space missions held by Elon Musk (founder and CEO of SpaceX, CEO of Tesla Motors, and co-founder of PayPal) one step closer to fruition. As for what this will make possible, well… the list is endless.

Everything from Space Elevators and O’Neill space habitats to asteroid mining, missions to the Moon, Mars and beyond. And 2014 may prove to be the year that it all begins in earnest!

First Flight of the Orion:
In September of this coming year, NASA is planning to make the first launch of its new Orion Multi-Purpose Crew Vehicle. This will be a momentous event, since it constitutes the first step in restoring NASA’s capability to launch crews into space. Ever since the retirement of the Space Shuttle in 2011, NASA has been dependent on other space agencies (most notably the Russian Federal Space Agency) to launch its personnel, satellites and supplies into space.

The test flight, known as Exploration Flight Test 1 (EFT-1), will be a short uncrewed flight that tests the capsule during reentry after two orbits. In the long run, this test will help determine whether the first lunar orbital mission using an Orion MPCV can occur by the end of the decade. For as we all know, NASA has some BIG PLANS for the Moon, most of which revolve around creating a settlement there.

Gaia Begins Mapping the Milky Way:
Launched from the Kourou Space Center in French Guiana on December 19th of last year, the European Space Agency’s Gaia space observatory will begin its historic astrometry mission this year. Relying on an advanced array of instruments to conduct spectrophotometric measurements, Gaia will provide detailed physical properties of each star observed, characterizing their luminosity, effective temperature, gravity and elemental composition.

This will effectively create the most accurate map yet constructed of our Milky Way Galaxy, but it is also anticipated that many exciting new discoveries will come as spin-offs from this mission, including new exoplanets, asteroids and comets, and much more. Soon, the mysteries of deep space won’t seem quite so mysterious anymore. But don’t expect it to get any less tantalizing!

International Climate Summit in New York:
While it still remains a hotly contested partisan issue, the scientific consensus is clear: Climate Change is real and is getting worse. In addition to environmental organizations and agencies, non-partisan entities, from insurance companies to the U.S. Navy, are busy preparing for rising sea levels and other changes. In September 2014, the United Nations will hold another Climate Summit to discuss what can be done.

This time around, delegates from hundreds of nations will converge on the UN Headquarters in New York City. This comes one year before the UN is looking to conclude its Framework Convention on Climate Change, and the New York summit will likely herald more calls to action. Though it will be worth watching and will generate plenty of news stories, expect many of the biggest climate offenders worldwide to ignore those calls for action.

MAVEN and MOM reach Mars:
2014 will be a red-letter year for those studying the Red Planet, mainly because it will be during this year that two new missions are slated to begin. These include the Indian Space Agency’s Mars Orbiter Mission (MOM, aka. Mangalyaan-1) and NASA’s Mars Atmosphere and Volatile EvolutioN (MAVEN) mission, which are due to arrive just two days apart – on September 24th and 22nd respectively.

Both orbiters will be tasked with studying Mars’ atmosphere, determining what atmospheric conditions looked like billions of years ago, and discovering what happened to turn the atmosphere into the thin, depleted layer it is today. Combined with the Curiosity and Opportunity rovers, ESA’s Mars Express, NASA’s Odyssey spacecraft and the Mars Reconnaissance Orbiter, they will help to unlock the secrets of the Red Planet.

Unmanned Aircraft Testing:
A lot of the action for the year ahead is in the area of unmanned aircraft, building on the accomplishments in recent years on the drone front. For instance, the US Navy is expected to continue running trials with the X-47B, the unmanned technology demonstrator aircraft that is expected to become the template for autonomous aerial vehicles down the road.

Throughout 2013, the Navy conducted several tests with the X-47B as part of its ongoing UCLASS (Unmanned Carrier Launched Airborne Surveillance and Strike) aircraft program. Specifically, they demonstrated that the X-47B is capable of making carrier-based take-offs and landings. By mid-2014, it is expected that they will have made more key advances, even though the program is likely to take another decade before it is fully realized.

Virgin Galactic Takes Off:
And last, but not least, 2014 is the year that space tourism is expected to take off (no pun intended!). After many years of research, development and testing, Virgin Galactic’s SpaceShipTwo may finally make its inaugural flights, flying out of the Mojave Spaceport and bringing tourists on an exciting (and expensive) ride into the upper atmosphere.

In late 2013, SpaceShipTwo passed a key milestone test flight when its rocket engine was fired for an extended period of time, achieving speeds and altitudes in excess of anything it had reached before. Having already conducted several successful glide and feathered-wing test flights, Virgin Galactic is confident that the craft has what it takes to ferry passengers on suborbital flights and bring them home safely.

On its inaugural flights, SpaceShipTwo will carry two pilots and six passengers, with seats going for $250,000 a pop. If all goes well, 2014 will be remembered as the year that suborbital space tourism officially began!

Yes, 2014 promises to be an exciting year. And I look forward to chronicling and documenting it as much as possible from this humble little blog. I hope you will all join me on the journey!

Sources: universetoday.com, (2), med.stanford.edu, news.cnet.com, listosaur.com, sci.esa.int

By 2014: According to Asimov and Clarke

Amongst the sci-fi greats of old, there were few authors, scientists and futurists more influential than Isaac Asimov and Arthur C. Clarke. And as individuals who constantly had one eye to the world of their day, and one eye to the future, they had plenty to say about what the world would look like by the 21st century. And interestingly enough, 2014 just happens to be the year when much of what they predicted was meant to come true.

For example, 50 years ago, Asimov wrote an article for the New York Times that listed his predictions for what the world would be like in 2014. The article was titled “Visit to the World’s Fair of 2014”, and contained many accurate, and some not-so-accurate, guesses as to how people would be living today and what kinds of technology would be available to us.

Here are some of the accurate predictions:

1. “By 2014, electroluminescent panels will be in common use.”
In short, electroluminescent displays are thin, bright panels that are used in retail displays, signs, lighting and flat panel TVs. What’s more, personal devices are incorporating this technology, in the form of OLED and AMOLED displays, which are both paper-thin and flexible, giving rise to handheld devices you can bend and flex without fear of damaging them.

2. “Gadgetry will continue to relieve mankind of tedious jobs.”
Oh yes indeed! In the last thirty years, we’ve seen voicemail replace personal assistants, secretaries and message boards. We’ve seen fax machines replace couriers. We’ve seen personal devices and PDAs that are able to handle more and more in the way of tasks, making it unnecessary for people to consult written sources or perform their own shorthand calculations. It’s a hallmark of our age that personal technology is doing more and more of the legwork, supposedly freeing us to do more with our time.

3. “Communications will become sight-sound and you will see as well as hear the person you telephone.”
This was a popular prediction in Asimov’s time, usually taking the form of a videophone or conversations that happened through display panels. And the rise of social media and telepresence has certainly delivered on that. Services like Skype, Google Hangouts, FaceTime and more have made video chatting very common, and a viable alternative to a phone line you need to pay for.

4. “The screen can be used not only to see the people you call but also for studying documents and photographs and reading passages from books.”
Multitasking is one of the hallmarks of modern computers, handheld devices, and tablets, and has been the norm for operating systems for some time. By calling up new windows and tabs, or opening multiple apps simultaneously and simply switching between them, users are able to start multiple projects, conduct work, view video, take pictures, play games, and generally behave like a kid with ADHD on crack if they so choose.

5. “Robots will neither be common nor very good in 2014, but they will be in existence.”
If you define “robot” as a computer that looks and acts like a human, then this guess is definitely true. While we do not have robot servants or robot friends per se, we do have Roombas, robots capable of performing menial tasks, and even ones capable of imitating animal and human movements and participating in hazardous-duty exercises (Google the DARPA Robotics Challenge to see what I mean).

Alas, he was off on several other fronts. For example, kitchens do not yet prepare “automeals” – meaning they do not prepare entire meals for us at the click of a button. What’s more, the vast majority of our education systems are not geared towards the creation and maintenance of robotics. All surfaces have not yet been converted into display screens, though we could do so if we wanted to. And the world population is actually higher than he predicted (6,500,000,000 was his estimate).

As for what he got wrong, well… our appliances are not powered by radioactive isotopes, and thereby able to be entirely wireless (though wireless recharging is becoming a reality). Only a fraction of students are currently proficient in computer language, contrary to his expectation that all would be. And last, society is not a place of “enforced leisure”, where work is considered a privilege and not a burden. Too bad too!

And when it comes to the future, there are few authors whose predictions are more trusted than Arthur C. Clarke. In addition to being a prolific science fiction writer, he wrote nearly three dozen nonfiction books and countless articles about the future of space travel, undersea exploration and daily life in the 21st century.

And in a recently released clip from a 1974 ABC News program filmed in Australia, Clarke is shown talking to a reporter next to a massive bank of computers. With his son in tow, the reporter asks Clarke to talk about what computers will be like when his son is an adult. In response, Clarke offers some eerily prophetic, if not quite spot-on, predictions:

The big difference when he grows up, in fact it won’t even wait until the year 2001, is that he will have, in his own house, not a computer as big as this, but at least a console through which he can talk to his friendly local computer and get all the information he needs for his everyday life, like his bank statements, his theater reservations, all the information you need in the course of living in a complex modern society. This will be in a compact form in his own house.

In short, Clarke predicted not only the rise of the personal computer, but also online banking, shopping and a slew of internet services. Clarke was then asked about the possible danger of becoming a “computer-dependent” society, and while he acknowledged that in the future humanity would rely on computers “in some ways,” he argued that computers would also open up the world:

It’ll make it possible for us to live really anywhere we like. Any businessman, any executive, could live almost anywhere on Earth and still do his business through his device like this. And this is a wonderful thing.

Clarke certainly had a point about computers giving us the ability to communicate from almost anywhere on the globe, also known as telecommunication, telecommuting and telepresence. But as to whether or not our dependence on this level of technology is a good or bad thing, the jury is still out on that one. The point is, his predictions proved to be highly accurate, forty years in advance.

Granted, Clarke’s predictions were not summoned out of thin air. Ever since computers were used in World War II as a means of cracking Germany’s cyphers, miniaturization has been the trend in computing. By the 1970s, computers were still immense and clunky, but punch cards and vacuum tubes had already given way to transistors, which were getting smaller all the time.

And in 1969, the first operational packet-switching network – which would later adopt the Transmission Control Protocol and Internet Protocol (TCP/IP) – was established. Known as the Advanced Research Projects Agency Network (ARPANET), this U.S. Department of Defense network was set up to connect the DOD’s various research projects at universities and laboratories all across the US, and was the precursor to the modern internet.

Being a man so on top of things technologically, Clarke accurately predicted that these two trends would continue into the foreseeable future, giving rise to computers small enough to fit on our desks (rather than taking up an entire room) and networked with other computers all around the world via a TCP/IP network that enabled real-time data sharing and communications.

And in the meantime, be sure to check out the Clarke interview below:


Sources: huffingtonpost.com, blastr.com

Year-End Tech News: Stanene and Nanoparticle Ink

The year of 2013 was also a boon for the high-tech industry, especially where electronics and additive manufacturing were concerned. In fact, several key developments took place last year that may help scientists and researchers to move beyond Moore’s Law, as well as ring in a new era of manufacturing and production.

In terms of computing, developers have long feared that Moore’s Law – which states that the number of transistors on integrated circuits doubles approximately every two years – could be reaching a bottleneck. While the law (really it’s more of an observation) has certainly held true for the past forty years, it has been understood for some time that the use of silicon and copper wiring would eventually impose limits.
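
To put “doubling approximately every two years” in perspective, the arithmetic is simple exponential growth; the starting transistor count below (roughly that of an early-1970s microprocessor) is the only added figure:

```python
# Moore's Law as simple exponential growth: N(t) = N0 * 2**(years / 2)
def transistors(n0, years, doubling_period=2.0):
    return n0 * 2 ** (years / doubling_period)

# Forty years of doubling every two years is a 2**20 (~million-fold) increase.
print(f"{transistors(1, 40):,.0f}x growth")         # 1,048,576x
# Starting from ~2,300 transistors (an early-1970s chip), that lands right in
# the billion-transistor range mentioned below.
print(f"{transistors(2300, 40):,.0f} transistors")  # ~2.4 billion
```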

Basically, one can only miniaturize circuits made from these materials so much before electrical resistance and heat become prohibitive and the components become too fragile to be effective. Because of this, researchers have been looking for replacement materials to substitute for the silicon that makes up the 1 billion transistors, and the one hundred or so kilometers of copper wire, that currently make up an integrated circuit.

Various materials have been proposed, such as graphene, carbyne, and even carbon nanotubes. But now, a group of researchers from Stanford University and the SLAC National Accelerator Laboratory in California is proposing another material. It’s known as stanene: a theorized material fabricated from a single layer of tin atoms that could be extremely efficient, even at high temperatures.

Whereas graphene is stupendously conductive, the researchers at Stanford and SLAC predict that stanene should be a topological insulator. Topological insulators, due to their arrangement of electrons and nuclei, are insulators in their interior but conductive along their edges and/or surfaces. Being only a single atom in thickness along its edges, stanene could conduct electricity with 100% efficiency.

The Stanford and SLAC researchers also say that stanene would not only have 100%-efficiency edges at room temperature, but with a bit of fluorine, would also have 100% efficiency at temperatures of up to 100 degrees Celsius (212 Fahrenheit). This is very important if stanene is ever to be used in computer chips, which have operational temps of between 40 and 90 C (104 and 194 F).

Though the claim of perfect efficiency seems outlandish to some, others admit that near-perfect efficiency is possible. And while no stanene has been fabricated yet, it is unlikely that it would be hard to fashion some on a small scale, as the technology currently exists. However, it will likely be a very, very long time until stanene is used in the production of computer chips.

In the realm of additive manufacturing (aka. 3-D printing), several major developments were also made during the year of 2013. This one came from Harvard University, where a materials scientist named Jennifer Lewis – using current technology – has developed new “inks” that can be used to print batteries and other electronic components.

3-D printing is already at work in the field of consumer electronics with casings and some smaller components being made on industrial 3D printers. However, the need for traditionally produced circuit boards and batteries limits the usefulness of 3D printing. If the work being done by Lewis proves fruitful, it could make fabrication of a finished product considerably faster and easier.

The Harvard team is calling the material “ink,” but in fact, it’s a suspension of nanoparticles in a dense liquid medium. In the case of the battery-printing ink, the team starts with a vial of deionized water and ethylene glycol and adds nanoparticles of lithium titanium oxide. The mixture is homogenized, then centrifuged to separate out any larger particles, and the battery ink is formed.

This process is possible because of the unique properties of the nanoparticle suspension. It is mostly solid as it sits in the printer ready to be applied, then begins to flow like liquid when pressure is increased. Once it leaves the custom printer nozzle, it returns to a solid state. From this, Lewis’ team was able to lay down multiple layers of this ink with extreme precision at 100-nanometer accuracy.
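
The Harvard paper’s actual rheology data isn’t reproduced here, but the behavior described – essentially solid at rest, flowing once the nozzle applies enough stress – is the hallmark of a yield-stress, shear-thinning fluid. A hedged sketch using the standard Herschel-Bulkley model, with parameter values made up purely for illustration:

```python
def apparent_viscosity(shear_rate, yield_stress=100.0, consistency=10.0, n=0.5):
    """Herschel-Bulkley fluid: stress = yield_stress + consistency * shear_rate**n,
    with n < 1 for shear-thinning. Apparent viscosity is stress / shear_rate.
    Parameter values are illustrative, not measured properties of the ink."""
    stress = yield_stress + consistency * shear_rate ** n
    return stress / shear_rate

# At rest the ink behaves like a stiff paste; under the high shear inside the
# nozzle its apparent viscosity drops by orders of magnitude.
for rate in (0.01, 1.0, 100.0):   # shear rate in 1/s
    print(f"shear rate {rate:>6} 1/s -> {apparent_viscosity(rate):>9,.1f} Pa*s")
```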

The tiny batteries being printed are about 1 mm square, and could pack even higher energy density than conventional cells thanks to their intricate construction. This approach is much more realistic than other metal-printing technologies because it happens at room temperature, with no need for microwaves, lasers or high temperatures at all.

More importantly, it works with existing industrial 3D printers that were built to work with plastics. Because of this, battery production can be done cheaply using printers that cost on the order of a few hundred dollars, and not industrial-sized ones that can cost upwards of $1 million.

Smaller computers, and smaller, more efficient batteries. It seems that miniaturization, which some feared would be plateauing this decade, is safe for the foreseeable future! So I guess we can keep counting on our electronics getting smaller, harder to use, and easier to lose for the next few years. Yay for us!

Sources: extremetech.com, (2)

Year-End Health News: From Cancer Prevention to Anti-Aging

The year of 2013 ended with a bang for the field of health technology. And in my haste to cover as many stories as I could before the year ended, there were some rather interesting news developments which I unfortunately overlooked. But with the New Year just beginning, there is still plenty of time to look back and acknowledge these developments, which will no doubt lead to more in 2014.

The first comes from the UK, where the ongoing fight against cancer has entered a new phase. For years, researchers have been developing various breathalyzer devices to help detect cancer in its early phases. And now, a team from the University of Huddersfield plans to introduce one such cancer-detecting breathalyser (known as the RTube) into pharmacies.

According to Dr Rachel Airley, the lead researcher of the Huddersfield team, the molecules present in a patient’s breath – which include genes, proteins, fragments of cells, secretions and chemicals produced by the metabolism of diseased tissue – form a kind of chemical and biological signature. Using breath-testing devices like the RTube, Dr Airley developed a project to define a lung cancer “biomarker signature” that is detectable in breath.

According to Dr Airley:

When you get certain chemicals in someone’s breath, that can be a sign that there is early malignancy. We are looking to be able to distinguish between patients with early lung cancer and patients who have maybe got bronchitis, emphysema or non-malignant smoking related disease… or who have maybe just got a cough.

The goal of the project is to validate the signature in a large number of patients to ensure it can reliably distinguish between lung cancer and non-cancerous lung disease. Dr Airley explained that this will require tracking the progress of patients for up to five years to see if the disease develops and can be linked back to a signature picked up in the patient’s breath at the beginning of the project.

So far, the project has secured £105,000 (US$170,000) in funding from the SG Court Pharmacy Group, with the University of Huddersfield providing matching funding. SG Court also operates the chain of pharmacies in the South East of England where the initial trials of the breathalyzer technology will be carried out.

The researchers predict that people visiting their local pharmacy for medication or advice to help them quit smoking will be invited to take a quick test, with the goal of catching the disease before patients start to experience symptoms. Once symptoms present themselves, the disease is usually at an advanced stage and it is often too late for effective treatment.

Dr Airley stresses that the trial is meant to test the feasibility of the pharmacy environment for such a test, and to ensure that the test samples obtained in this setting are of high enough quality to pick up the signature:

There are 12,000 community pharmacies in Britain and there is a big move for them to get involved in primary diagnostics, because people visit their pharmacies not just when they are ill but when they are well. A pharmacy is a lot less scary than a doctor’s surgery.

Dr Airley also says her team is about to start collecting breath samples from healthy volunteers and patients with known disease as a reference point, and hopes to start the pharmacy trials within two years. If all goes well, she says it will be at least five years before the test is widely available.

The next comes from Germany, where researchers have created a test that may help doctors predict one of the most severe side effects of antidepressants: treatment-emergent suicidal ideation (TESI). The condition is estimated to affect between four and 14 percent of patients, who typically present symptoms of TESI in the first weeks of treatment or following dosage adjustments.

So far doctors haven’t been able to find the indicators that could predict which patients are more likely to develop TESI, and finding the right medication and testing for side-effects is often a matter of simple trial and error. But a new test based on research carried out by the Max Planck Institute of Psychiatry in Munich, Germany, could change all that.

The researchers carried out genome-wide association studies on 397 patients, aged 18 to 75, who were hospitalized for depression, but were not experiencing suicidal thoughts at the time they began treatment. During the study, a reported 8.1 percent of patients developed TESI, and 59 percent of those developed it within the first two weeks of treatment.

To arrive at a list of reliable predictors, the team genotyped the whole group and then compared patients who developed TESI with those who didn’t. Ultimately, they found a subset of 79 genetic variants associated with the risk group. They then conducted an independent analysis of a larger sample group of in-patients suffering from depression and found that 90 percent of the patients were shown to have these markers.
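
The study’s actual scoring model isn’t reproduced here, but a marker panel like the 79 variants described is typically turned into a simple additive risk score: count (or weight) the risk alleles a patient carries and compare the total against a cutoff. A purely hypothetical sketch – the variant IDs, weights and threshold below are invented for illustration:

```python
# Hypothetical additive genetic risk score; these are NOT the study's markers.
RISK_WEIGHTS = {"rs0000001": 0.8, "rs0000002": 0.5, "rs0000003": 1.1}
THRESHOLD = 2.0

def risk_score(genotype):
    """genotype maps variant ID -> number of risk alleles carried (0, 1 or 2)."""
    return sum(weight * genotype.get(variant, 0)
               for variant, weight in RISK_WEIGHTS.items())

patient = {"rs0000001": 2, "rs0000002": 0, "rs0000003": 1}
score = risk_score(patient)
print(f"score = {score:.1f} -> "
      f"{'flag for closer monitoring' if score >= THRESHOLD else 'typical risk'}")
```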

In short, this research has found that the most dangerous side-effect of antidepressant use has a genetic basis, and can therefore be predicted ahead of time. In addition, the research shed new light on the ages of those affected by TESI. Prior to the discovery that all age groups in the study were at risk, the assumption had been that under-25s were more at risk, which led the FDA to begin issuing warnings in 2005.

According to some experts, this warning has had the effect of reducing the prescription of antidepressants for treating depression. In other words, patients who needed treatment were unable to get it, out of fear that it might make things worse. This situation could now be reversed, since doctors can avail themselves of a new assessment tool based on the research.

The laboratory-developed test, featuring a DNA microarray (chip), is being launched immediately by US company Sundance Diagnostics, ahead of submission to the FDA for market clearance. As Sundance CEO Kim Bechthold said in a recent interview:

A DNA microarray is a small solid support, usually a membrane or glass slide, on which sequences of DNA are fixed in an orderly arrangement. It is used for rapid surveys of the presence of many genes simultaneously, as the sequences contained on a single microarray can number in the thousands.

Ultimately, according to Bechthold, the aim here is to assist physicians in significantly reducing the risk of suicide in antidepressant use, and also to provide patients and families with valuable personal information to use with their doctors in weighing the risks and benefits of the medications.

Wow! From detecting cancer to preventing suicides, the New Year is looking bright indeed! Stay tuned for good news from the field of future medicine!

Sources: gizmag.com, hud.ac.uk, (2), mpg.de

The First Government-Recognized Cyborg

Those who follow tech news are probably familiar with the name Neil Harbisson. As a futurist, and someone who was born with a condition known as achromatopsia – which means he sees everything in shades of gray – he spent much of his life looking to augment himself so that he could see what other people see. And roughly ten years ago, he succeeded by creating a device known as the “eyeborg”.

Also known as a cybernetic “third eye”, this device – which is permanently integrated into his person – allows Harbisson to “hear” colors by translating visual information into specific sounds. After years of use, he is able to discern different colors based on their sounds with ease. But what’s especially interesting about this device is that it makes Harbisson a bona fide cyborg.
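
Harbisson’s exact mapping is not described in this article, but the core idea – transposing a color’s position on the color wheel onto an audible frequency – can be sketched in a few lines. The octave chosen below is an arbitrary assumption, not the eyeborg’s actual scale.

```python
import colorsys

def hue_to_tone(r, g, b, low_hz=220.0, high_hz=440.0):
    """Map a color's hue (0-1 around the color wheel) onto one octave of audible
    frequency. Exponential spacing, so equal hue steps sound like equal musical
    intervals. The octave chosen here is an arbitrary illustration."""
    hue, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return low_hz * (high_hz / low_hz) ** hue

for name, rgb in {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}.items():
    print(f"{name}: {hue_to_tone(*rgb):.1f} Hz")
```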

What’s more, Neil Harbisson is now the first person on the planet to have a passport photo that shows his cyborg nature. After a long battle with UK authorities, his passport now features a photo of him, eyeborg and all. And now, he is looking to help other cyborgs like himself gain more rights, mainly because of the difficulties such people have been facing in recent years.

Consider the case of Steve Mann, the man recognized as the “father of wearable computers”. Since the 1970s, he has been working towards the creation of fully-portable, ergonomic computers that people can carry with them wherever they go. The result of this was the EyeTap, a wearable computer he invented in 1998 and then had grafted to his head.

Then, in July of 2012, he was ejected from a McDonald’s in Paris after several staff members tried to forcibly remove the wearable device. And in April of 2013, a bar in Seattle banned patrons from using Google Glass, declaring that “ass-kickings will be encouraged for violators.” Other businesses across the world have followed, fearing that people wearing these devices may be taking photos or video and posting them to the internet.

Essentially, Harbisson believes that recent technological advances mean there will be a rapid growth in the number of people with cybernetic implants in the near future – implants that will either assist them or give them enhanced abilities. As he put it in a recent interview:

Our instincts and our bodies will change. When you incorporate technology into the body, the body will need to change to accommodate; it modifies and adapts to new inputs. How we adapt to this change will be very interesting.

Other human cyborgs include Stelarc, a performance artist who has had an ear surgically implanted on his forearm; Kevin Warwick, the “world’s first human cyborg”, who has an RFID chip embedded beneath his skin, allowing him to control devices such as lights, doors and heaters; and “DIY cyborg” Tim Cannon, who has a self-administered body-monitoring device in his arm.

And though they are still in the minority, the number of people who live with integrated electronic or bionic devices is growing. In order to ensure that the transition Harbisson foresees is accomplished as painlessly as possible, he created the Cyborg Foundation in 2010. According to their website, the organization’s mission statement is to:

help humans become cyborgs, to promote the use of cybernetics as part of the human body and to defend cyborg rights [whilst] encouraging people to create their own sensory extensions.

And as mind-controlled prosthetics, implants, and other devices meant to augment a person’s senses, faculties, and ambulatory ability are introduced, we can expect people to begin to actively integrate them into their bodies. Beyond correcting for injuries or disabilities, the increasing availability of such technology is also likely to draw people looking to enhance their natural abilities.

In short, the future is likely to be a place in which cyborgs are a common feature of our society. The size and shape of that society is difficult to predict, but given that its existence is all but certain, we as individuals need to be able to address it. Not only is it an issue of tolerance, there’s also the need for informed decision-making when it comes to whether or not individuals want to make cybernetic enhancements a part of their lives.

Basically, there are some tough issues that need to be considered as we make our way into the future. And having a forum where they can be discussed in a civilized fashion may be the only alternative to a world permeated by prejudice and intolerance on the one hand, and runaway augmentation on the other.

In the meantime, it might not be too soon to look into introducing some regulations, just to make sure we don’t have any yahoos turning themselves into killer cyborgs in the near future! *PS: Bonus points for anyone who can identify which movie the photo above is taken from…

Sources: IO9.com, dezeen.com, eyeborg.wix.com