News from Space: Latest Tests and New Players

In the new age of space travel and exploration, commercial space companies are not only boasting immense growth and innovation, but are also reaching out to fill niche markets. In addition to launchers that can send orbiters and payloads into space, there are new breeds of commercial satellites, new engines, and a slew of other concepts that promise to make the industry more competitive and cost-effective.

A case in point is the small satellite launch company Firefly Space Systems, which recently unveiled its planned Alpha launcher. Aimed at the small satellite launch market, it’s designed to send satellites into low-Earth orbit (LEO) and Sun-synchronous orbits for broadband communication using an unconventional aerospike engine. It is also the first orbital launcher to use methane as its fuel.

The Firefly Alpha is a design specialized for launching light satellites into low-Earth orbit at low cost. Designed to carry payloads of up to 400 kg (880 lb), the Alpha features carbon composite construction and uses the same basic design for both of its two stages to keep costs down and simplify assembly. Methane was chosen because it’s cheap, plentiful, clean-burning and (unlike more conventional fuels) self-pressurizing, so it doesn’t require a second pressurization system.

But the really interesting thing about the two-stage rocket assembly is that the base of the first stage is ringed with rocket burners rather than the usual cluster of rocket engines. That’s because, while the second stage uses conventional rocket engines, the first stage uses a more exotic plug-cluster aerospike engine that puts out some 400.3 kN (about 40,800 kgf or 90,000 lbf) of thrust.
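For anyone checking those parenthetical figures, the conversion involves nothing engine-specific, just the standard force-unit constants (1 kgf = 9.80665 N, 1 lbf = 4.44822 N):

$$
400.3\ \text{kN} = 400{,}300\ \text{N}, \qquad
\frac{400{,}300\ \text{N}}{9.80665\ \text{N/kgf}} \approx 40{,}800\ \text{kgf}, \qquad
\frac{400{,}300\ \text{N}}{4.44822\ \text{N/lbf}} \approx 90{,}000\ \text{lbf}.
$$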

Aerospike engines have been under development since the 1960s, but they’ve never flown on an operational launch vehicle. The idea behind them is that rockets with conventional bell-shaped nozzles are extremely efficient, but only at a particular altitude. Since rockets are generally used to make things go up, this means that an engine that works best at sea level will become less and less efficient as it rises.

The plug aerospike is basically a bell-shaped rocket nozzle that’s been cut in half, then stretched to form a ring, with the half-nozzle forming the profile of a plug. This means that the open side of the rocket engine is replaced by the air around it. As the rocket fires, the air pressure keeps the hot gases confined on that side, and as the craft rises, the change in air pressure alters the shape of the “nozzle,” keeping the engine working efficiently.
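To make that altitude argument concrete, here is a minimal sketch in Python using the standard ideal-nozzle thrust-coefficient relations. The chamber pressure, gas properties, and example exit pressures below are illustrative assumptions rather than Firefly Alpha figures; the point is only that a fixed bell is “matched” to one ambient pressure, while an altitude-compensating design like an aerospike stays close to the matched value all the way up.

```python
# A minimal sketch (illustrative numbers, not Firefly engine data) of why a
# fixed bell nozzle is only ideally expanded at one altitude, while an
# altitude-compensating nozzle such as a plug aerospike holds its efficiency.
from math import sqrt

GAMMA = 1.2        # assumed ratio of specific heats for the exhaust gas
P_CHAMBER = 7.0e6  # assumed chamber pressure, Pa (70 bar)

def area_ratio(p_exit, p_c=P_CHAMBER, g=GAMMA):
    """Expansion ratio A_exit/A_throat implied by a chosen exit pressure."""
    pr = p_exit / p_c
    inverse = (((g + 1) / 2) ** (1 / (g - 1)) * pr ** (1 / g) *
               sqrt((g + 1) / (g - 1) * (1 - pr ** ((g - 1) / g))))
    return 1.0 / inverse

def thrust_coefficient(p_exit, p_ambient, p_c=P_CHAMBER, g=GAMMA):
    """Ideal thrust coefficient C_F; actual thrust = C_F * p_c * A_throat."""
    pr = p_exit / p_c
    momentum = sqrt(2 * g ** 2 / (g - 1) *
                    (2 / (g + 1)) ** ((g + 1) / (g - 1)) *
                    (1 - pr ** ((g - 1) / g)))
    return momentum + area_ratio(p_exit, p_c, g) * (p_exit - p_ambient) / p_c

SEA_LEVEL_BELL = 101_325.0  # exit pressure of a bell sized for sea level, Pa
VACUUM_BELL = 5_000.0       # exit pressure of a large vacuum-optimized bell, Pa

for label, p_a in [("sea level", 101_325.0), ("10 km", 26_500.0), ("vacuum", 0.0)]:
    # An ideal compensating (aerospike-like) nozzle stays matched (p_exit = p_a)
    # until it runs out of expansion area, so cap it at the big bell's exit pressure.
    compensating = thrust_coefficient(max(p_a, VACUUM_BELL), p_a)
    print(f"{label:>9}: sea-level bell C_F = {thrust_coefficient(SEA_LEVEL_BELL, p_a):.2f}, "
          f"vacuum bell C_F = {thrust_coefficient(VACUUM_BELL, p_a):.2f}, "
          f"compensating C_F = {compensating:.2f}")
```

Run as written, the sea-level bell gives up performance in vacuum, the big vacuum bell is badly overexpanded at sea level, and the compensating case matches or beats both at every altitude, which is exactly the property Firefly is after.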

The result of this arrangement is a lighter rocket engine that works well across a range of altitudes. Because the second stage operates in a near vacuum, it uses conventional rocket nozzles. As Firefly CEO Thomas Markusic put it:

What used to cost hundreds of millions of dollars is rapidly becoming available in the single digit millions. We are offering small satellite customers the launch they need for a fraction of that, around US$8 or 9 million – the lowest cost in the world. It’s far cheaper than the alternatives, without the headaches of a multi manifest launch.

Meanwhile, SpaceX has been making headlines with its latest rounds of launches and tests. About a week ago, the company successfully launched six ORBCOMM advanced telecommunications satellites into orbit to upgrade the speed and capacity of ORBCOMM’s existing data relay network. The launch from Cape Canaveral Air Force Station in Florida had been delayed or scrubbed several times since the original launch date in May due to various problems.

However, the launch went off without a hitch on Monday, July 14th, and ORBCOMM reports that all six satellites have been successfully deployed in orbit. SpaceX also used this launch opportunity to test the reusability of the Falcon 9’s first stage and its landing system by splashing down in the ocean. However, the booster did not survive the splashdown.

SpaceX CEO Elon Musk tweeted about the event:

Rocket booster reentry, landing burn & leg deploy were good, but lost hull integrity right after splashdown (aka kaboom)… Detailed review of rocket telemetry needed to tell if due to initial splashdown or subsequent tip over and body slam.

SpaceX wanted to test the “flyback” ability of the rocket, slowing its descent with thrusters and deploying the landing legs so that in future launches the first stage can be re-used. These tests have the booster “landing” in the ocean. The previous test of the landing system was successful, but choppy seas destroyed the stage and prevented recovery. Today’s “kaboom” makes recovery of even pieces of this booster unlikely.

This is certainly not good news for a company whose proposal for a reusable rocket system promises to cut costs dramatically and make a whole range of things possible. However, the company is extremely close to making this a full-fledged reality. The take-off, descent, and landing have all been done successfully; but at present, recovery still remains elusive.

But such is the nature of space flight. What begins with conception, planning, research and development inevitably ends with trial and error. And much like with the Mercury and Apollo programs, those involved have to keep on trying until they get it right. Speaking of which, today marks the 45th anniversary of Apollo 11 reaching the Moon. You can keep track of the updates that recreate the mission in “real-time” over at @ReliveApollo11.

As of the writing of this article, the Lunar Module is beginning its descent to the Moon’s surface. Stay tuned for the historic moonwalk!


Sources: universetoday.com, gizmag.com

The Future of Computing: Brain-Like Computers

It’s no secret that computer scientists and engineers are looking to the human brain as a means of achieving the next great leap in computer evolution. Already, machines are being developed that rely on machine blood, can continue working despite being damaged, and recognize images and speech. And soon, a computer chip that is capable of learning from its mistakes will also be available.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system – specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014, and is the result of a collaborative effort between I.B.M. and Qualcomm, as well as a Stanford research team. This “neuromorphic processor” can not only automate tasks that once required painstaking programming, but can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

For example, computer vision systems only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation. But last year, Google researchers were able to get a machine-learning algorithm, known as a “Google Neural Network”, to perform an identification task (involving cats) without supervision.

And this past June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately. This past November, researchers at Stanford University came up with a new algorithm that could give computers the power to more reliably interpret language, known as the Neural Analysis of Sentiment (NaSent).

A similar concept known as Deep Learning is also looking to endow software with a measure of common sense. Google is using this technique with its voice recognition technology to aid in performing searches. In addition, the social media giant Facebook is looking to use deep learning to help improve Graph Search, an engine that allows users to search activity on their network.

Until now, the design of computers has been dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of binary code (0s and 1s). The information is stored separately in what is known as memory, either in the processor itself, in adjacent storage chips or in higher-capacity magnetic disk drives.

By contrast, the new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

These processors are not “programmed” in the conventional sense. Instead, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows into the chip, changing their values and causing the neuron-like elements to “spike.” This, in turn, strengthens some connections and weakens others, reacting much the same way the human brain does.
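As a rough illustration of what “weighting by correlation” and “spiking” mean in practice, here is a toy software model. It is not IBM’s chip architecture or any real neuromorphic API, just a leaky integrate-and-fire neuron paired with a crude Hebbian (“fire together, wire together”) weight update, with made-up parameters.

```python
# A toy sketch (not any vendor's neuromorphic hardware or API) of the two
# ingredients described above: a neuron-like unit that accumulates input and
# "spikes" past a threshold, and connection weights that are nudged up or down
# by that activity instead of being explicitly programmed.
import random

class SpikingNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0      # accumulated "membrane" potential
        self.threshold = threshold
        self.leak = leak          # fraction of potential retained each step

    def step(self, weighted_input):
        """Integrate input, leak a little, and fire if the threshold is crossed."""
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return 1              # spike
        return 0

def hebbian_update(weights, inputs, fired, rate=0.05):
    """Strengthen connections whose inputs were active when the neuron fired,
    and slightly weaken the rest (a crude use-it-or-lose-it rule)."""
    for i, active in enumerate(inputs):
        if fired and active:
            weights[i] += rate        # correlated activity -> stronger link
        else:
            weights[i] -= rate * 0.1  # otherwise decay toward zero
        weights[i] = max(0.0, min(1.0, weights[i]))

# Three input lines; the first two tend to be active together.
weights = [0.3, 0.3, 0.3]
neuron = SpikingNeuron()

for _ in range(200):
    a = 1 if random.random() < 0.8 else 0
    inputs = [a, a, 1 - a]                        # inputs 0 and 1 are correlated
    drive = sum(w * x for w, x in zip(weights, inputs))
    fired = neuron.step(drive)
    hebbian_update(weights, inputs, fired)

print("learned weights:", [round(w, 2) for w in weights])
```

After a few hundred steps, the two inputs that tend to arrive together should end up with strong weights while the uncorrelated one decays, which is the sense in which such a system is shaped by the data it sees rather than explicitly programmed.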

In the words of Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort:

Instead of bringing data to computation as we do today, we can now bring computation to data. Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.

One great advantage of the new approach is its ability to tolerate glitches, whereas traditional computers cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever-changing, allowing the system to continuously adapt and work around failures to complete tasks. A further benefit is energy efficiency, another inspiration drawn from the human brain.

The new computers, which are still based on silicon chips, will not replace today’s computers, but augment them, at least for the foreseeable future. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and in the centralized computers that run computing clouds.

However, the new approach is still limited, since scientists still do not fully understand how the human brain functions. As Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, put it:

We have no clue. I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.

Luckily, there are efforts underway designed to remedy this, with the specific intention of directing that knowledge toward the creation of better computers and AIs. One such effort is the National Science Foundation-financed Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology, with Harvard and Cornell.

Another is the California Institute for Telecommunications and Information Technology (aka Calit2), a center dedicated to innovation in nanotechnology, life sciences, information technology, and telecommunications. As Larry Smarr, an astrophysicist and the director of the Institute, put it:

We’re moving from engineering computing systems to something that has many of the characteristics of biological computing.

And last, but certainly not least, is the Human Brain Project, an international group of 200 scientists from 80 different research institutions, based in Lausanne, Switzerland. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This initiative, which has been compared to the Large Hadron Collider, will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into an overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

When future generations look back on this decade, no doubt they will refer to it as the birth of the neuromorphic computing revolution. Or maybe just the Neuromorphic Revolution for short, but that sort of depends on the outcome. With so many technological revolutions well underway, it is difficult to imagine how the future will look back and characterize this time.

Perhaps, as Charles Stross suggests, it will simply be known as “the teens”, that time in pre-Singularity history when it was all starting to come together, but had yet to explode and violently change everything we know. I for one am looking forward to being around to witness it all!

Sources: nytimes.com, technologyreview.com, calit2.net, humanbrainproject.eu

Exploring the Universe with Robotic Avatars and Holodecks

Space exploration is fraught with all kinds of hazards. In addition to the danger of dying from decompression, mechanical failures, micro-meteoroids or just crashing into a big ball of rock, there are also the lesser-known problems created by low gravity, time dilation, and prolonged isolation. Given all that, wouldn’t it just be easier to send probes out to do the legwork, and use virtual technology to experience it back home?

That’s the idea being presented by Dr. Jeff Norris, one of the scientists who works at NASA’s Jet Propulsion Laboratory in Pasadena, California. In a presentation at PAX Prime last year – entitled “NASA’s Got Game” – he spoke of the agency’s plans for telexploration – the process of exploring the universe using robotic avatars and holodecks, rather than sending manned flights into deep space.

In the course of making this presentation, Norris noted several key advantages to this kind of exploration. In addition to being safer and cheaper, it’s also more readily available. Whereas deep-space exploration involving spaceships with FTL engines – like the Alcubierre Drive currently being investigated – may eventually become possible, robot space probes and advanced telecommunications technology are available right now.

At the same time, telexploration is also more democratic. Whereas conventional space travel involves a select few highly-trained, eminently qualified people witnessing the wonders of the universe, robotic avatars and holographic representations bring the experience home, where millions of people can experience the awe and wonder for themselves. And when you think about it, it’s something we’re already doing, thanks to the current generation of space probes, satellites and – of course! – the Curiosity Rover.

Basically, rather than waiting for the warp drive, Norris believes another Star Trek technology – the holodeck – will be the more immediate future of space exploration, one we won’t have to wait for. Yes, there are more than a few Star Trek motifs going on in this presentation, and a little Avatar too, but that’s to be expected. And as we all know, life can imitate art, and the truth is always stranger than fiction!

Check out the video of the presentation below:


And remember…


Should We Be Afraid? A List for 2013

In a recent study, the John J. Reilly Center at the University of Notre Dame published a list of possible threats that could emerge in the new year. The study, called “Emerging Ethical Dilemmas and Policy Issues in Science and Technology”, sought to address the likely threats people might face as a result of recent developments and changes, particularly in the fields of medical research, autonomous machines, 3D printing, climate change and human enhancement.

The list contained eleven items, presented in random order so that people could assess which they thought most important and vote accordingly. And of course, each one was detailed and sourced so as to ensure people understood the nature of the issue and where the information was obtained. They included:

1. Personalized Medicine:
Within the last ten years, the creation of fast, low-cost genetic sequencing has given the public direct access to genome sequencing and analysis, with little or no guidance from physicians or genetic counselors on how to process the information. Genetic testing may result in prevention and early detection of diseases and conditions, but may also create a new set of moral, legal, ethical, and policy issues surrounding the use of these tests. These include equal access, privacy, terms of use, accuracy, and the possibility of an age of eugenics.

2. Hacking medical devices:
Though no reported incidents have taken place (yet), there is concern that wireless medical devices could prove vulnerable to hacking. The US Government Accountability Office recently released a report warning of this, while Barnaby Jack – a hacker and director of embedded device security at IOActive Inc. – demonstrated the vulnerability of a pacemaker by breaching the security of the wireless device from his laptop and reprogramming it to deliver an 830-volt shock. Because many devices are programmed to allow doctors easy access in case reprogramming is necessary in an emergency, their design is not geared toward security.

3. Driverless zipcars:
In three states – Nevada, Florida, and California – it is now legal for Google to operate its driverless cars. A human in the vehicle is still required, but not at the controls. Google also plans to marry this idea to the zipcar: fleets of automobiles shared by a group of users on an as-needed basis, with shared costs. These fully automated zipcars will change not only the way people travel but also the entire urban/suburban landscape. And once the idea gets going, ethical questions surrounding access, oversight, legality and safety are likely to emerge.

4. 3-D Printing:
3D printing has astounded many scientists and researchers thanks to the sheer number of possibilities it has created for manufacturing. At the same time, there is concern that some uses might be unethical, illegal, and just plain dangerous. Take, for example, the recent efforts of groups such as Defense Distributed, which intends to use 3D printers to create “Wiki-weapons”, or the possibility that DNA assembly and bioprinting could yield infectious or dangerous agents.

5. Adaptation to Climate Change:
The effects of climate change are likely to be felt differently by different peoples around the world. Geography plays a role in susceptibility, but a nation’s respective level of development is also intrinsic to how its citizens are likely to adapt. What’s more, we need to address how we intend to manage and manipulate wild species and nature in order to preserve biodiversity. This warrants an ethical discussion, not to mention concrete suggestions for how we will address it when the time comes.

6. Counterfeit Pharmaceuticals:
In developing nations, where life-saving drugs are most needed, low-quality and counterfeit pharmaceuticals are extremely common. Detecting such drugs requires the use of expensive equipment that is often unavailable, and the expanding trade in pharmaceuticals is creating a need for legal measures to keep foreign markets from being flooded with cheap or ineffective knock-offs.

7. Autonomous Systems:
War machines and other robotic systems are evolving to the point that they can do away with human controllers or oversight. In the coming decades, machines that can perform surgery, carry out airstrikes, defuse bombs and even conduct research and development are likely to be created, giving rise to a myriad of ethical, safety and existential issues. Debate needs to be fostered on how this will affect us and what steps should be taken to ensure that the outcome is foreseeable and controllable.

8. Human-animal hybrids:
Is interspecies research the next frontier in understanding humanity and curing disease, or a slippery slope, rife with ethical dilemmas, toward creating new species? So far, scientists have kept experimentation with human-animal hybrids at the cellular level and have received support for their research goals. But to some, even modest experiments involving animal embryos and human stem cells are an ethical violation. An examination of the long-term goals and potential consequences is arguably needed.

9. Wireless technology:
Mobile devices, PDAs and wireless connectivity are having a profound effect in developed nations, with the rate of data usage doubling on an annual basis. As a result, telecommunications and government agencies are under intense pressure to regulate the radio frequency spectrum. The very way government and society does business, communicates, and conducts its most critical missions is changing rapidly. As such, a policy conversation is needed about how to make the most effective use of the precious radio spectrum, and to close the digital access divide for underdeveloped populations.

10. Data collection/privacy:
With all the data being transmitted on a daily basis, the issue of privacy is a major and growing concern. Considering the amount of personal information a person gives up simply to participate in a social network, establish an email account, or install software on their computer, it is no surprise that hacking and identity theft are also major concerns. And now that data storage, microprocessors and cloud computing have become inexpensive and widespread, a discussion needs to be had about what kinds of information gathering are acceptable and how readily a person should be willing to surrender details about their life.

11. Human enhancements:
A tremendous amount of progress has been made in recent decades when it comes to prosthetic, neurological, pharmaceutical and therapeutic devices and methods. Naturally, there is warranted concern that progress in these fields will reach past addressing disabilities and restorative measures and venture into the realm of pure enhancement. With the line between biological and artificial being blurred, many are concerned that we may be entering an era where the two are indistinguishable, and where cybernetic, biotechnological and other enhancements lead to a new form of competition in which people must alter their bodies in order to keep their jobs or avoid being left behind.

Feel scared yet? Well, you shouldn’t. The issue here is about remaining informed about possible threats, likely scenarios, and how we as people can address and deal with them now and later. If there’s one thing we should always keep in mind, it is that the future is always in the process of formation. What we do at any given time shapes it, and together we are always deciding what kind of world we want to live in. Things only change because all of us, either through action or inaction, allow them to. And if we want things to go a certain way, we need to be prepared to learn all we can about the causes, consequences, and likely outcomes of every scenario.

To view the whole report, follow the link below. And to vote on which issue you think is the most important, click here.

Source: reilly.nd.edu