The Future is Here: Google Robot Cars Hit Milestone

It's no secret that, amongst its many kooky and futuristic projects, self-driving cars are something Google hopes to make real within the next few years. Late last month, Google's fleet of autonomous automobiles reached an important milestone. After many years of testing on the roads of California and Nevada, they logged well over one million kilometers (700,000 miles) of accident-free driving. To celebrate, Google has released a new video that demonstrates some impressive software improvements made over the last two years.

Most notably, the video demonstrates how its self-driving cars can now track hundreds of objects simultaneously – including pedestrians, a cyclist signalling a turn, a stop sign held by a crossing guard, and traffic cones. This is certainly exciting news for Google and enthusiasts of automated technology, as it demonstrates the ability of the vehicles to obey the rules of the road and react to situations that are likely to emerge and require decisions to be made.

In the video, we see Google's cars reacting to railroad crossings, large stationary objects, roadwork signs and cones, and cyclists. In the case of the cyclist, not only can the cars discern whether the rider intends to move left or right, they even watch for cyclists coming up from behind when making a right turn. And while the demo certainly makes the whole process seem easy and fluid, there is actually a considerable amount of work going on behind the scenes.

For starters, there is around $150,000 worth of equipment in each car performing real-time LIDAR scanning and 360-degree computer vision – a complex and computationally intensive task. The software powering the whole process is also the result of years of development. Basically, every driving situation that can possibly occur has to be anticipated and then painstakingly programmed into the software. This is an important qualifier when it comes to these "autonomous vehicles": they are not capable of independent judgement, only of following pre-programmed instructions.
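
As a toy illustration of what those pre-programmed instructions look like in practice, consider a hypothetical rule table mapping detected situations to canned responses. This is purely a sketch of the general idea, with made-up situation names, and is not based on Google's actual software:

```python
# Hypothetical, simplified sketch of rule-based driving logic: each
# anticipated situation maps to a hand-written response. Not Google's code.
RULES = {
    "pedestrian_in_crosswalk": "stop_and_wait",
    "cyclist_signalling_left": "yield_and_hold_back",
    "crossing_guard_stop_sign": "stop_until_cleared",
    "traffic_cones_ahead": "change_lane_if_clear",
    "railroad_crossing_active": "stop_before_barrier",
}

def react(detected_situation: str) -> str:
    # Anything the programmers did not anticipate falls through
    # to a conservative default.
    return RULES.get(detected_situation, "slow_down_and_alert_safety_driver")

print(react("cyclist_signalling_left"))  # yield_and_hold_back
print(react("moose_on_highway"))         # slow_down_and_alert_safety_driver
```

The point is that the car only handles what its programmers anticipated; anything outside the table has to fall back to a conservative default.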

While a lot has been said about the expensive LIDAR hardware, the most impressive aspect of these innovations is the computer vision. While LIDAR provides a very good idea of the lay of the land and the position of large objects (like parked cars), it doesn't help with spotting speed limits or "construction ahead" signs, or with telling whether what's ahead is a cyclist or a railroad crossing barrier. And Google has certainly demonstrated plenty of adeptness in computer vision in the past, what with the latest versions of Street View and the Google Glass project.

Naturally, Google says that it has lots of issues to overcome before its cars are ready to move out from their home town of Mountain View, California and begin driving people around. For instance, the road maps need to be finely tuned and expanded, and Google is likely to sell map packages in the future in much the same way that apps are sold for smartphones. In the short term, the adoption of technologies like adaptive cruise control (ACC) and lane-keep assist (LKA) will bring lots of almost-self-driving cars to the road over the next few years.

In the meantime, be sure to check out the video of the driverless car in action:


Source:
extremetech.com

The Future is Here: “Terminator-style” Liquid Metal Treatment

For ideal physical rehab, it might be necessary to go a little "cyborg". That's the reasoning behind a new method of repairing damaged nerves developed by Chinese biomedical researchers. Borrowing a page from Terminator 2, their new treatment calls for the use of liquid metal to transmit nerve signals across the gap created in severed nerves. The work, they say, raises the prospect of new treatment methods for nerve damage and injuries.

Granted, it's not quite on par with the liquid-metal cyborgs from the future, but it is a futuristic way of improving on current methods of nerve rehab that could prevent long-term disabilities. When peripheral nerves are severed, the loss of function leads to atrophy of the affected muscles, a dramatic change in quality of life and, in many cases, a shorter life expectancy. Despite decades of research, nobody has yet come up with an effective way to reconnect them.

Various techniques exist to sew the ends back together or to graft nerves into the gap created between the severed ends. The success of these techniques depends on the ability of the nerve ends to grow back and knit together. But given that nerves grow at a rate of about one millimeter per day, it can take a significant amount of time (sometimes years) to reconnect. During this time, the muscles can degrade beyond repair, leading to long-term disability.

As a result, neurosurgeons have long hoped for a way to keep muscles active while the nerves regrow. One possibility is to electrically connect the severed ends so that signals from the brain can still get through; but until now, an effective means of making this happen has remained elusive. For some time, biomedical engineers have been eyeing the liquid metal alloy gallium-indium-selenium as a possible solution – a material that is liquid at body temperature and thought to be entirely benign.

But now, a biomedical research team led by Jing Liu of Tsinghua University in Beijing claims to have reconnected severed nerves using liquid metal for the first time. The researchers say the metal's electrical properties could help preserve the function of nerves while they regenerate. Using sciatic nerves, still connected to the calf muscle, taken from bullfrogs, they carried out a series of experiments to show that the technique is viable.

Using these bullfrog nerves, they applied a pulse to one end and measured the signal that reached the calf muscle, which contracted with each pulse. They then cut the sciatic nerve and placed each of the severed ends in a capillary filled either with liquid metal or with Ringer's solution – a mixture of several salts designed to mimic the properties of body fluids. Finally, they re-applied the pulses and measured how they propagated across the gap.

The results are striking. Jing's team reports that the pulses that passed through the Ringer's solution tended to degrade severely; by contrast, the pulses passed easily through the liquid metal. As they put it in their research report:

The measured electroneurographic signal from the transected bullfrog’s sciatic nerve reconnected by the liquid metal after the electrical stimulation was close to that from the intact sciatic nerve.

What's more, since liquid metal shows up clearly in x-rays, it can easily be located and removed from the body with a microsyringe once it is no longer needed. All of this has allowed Jing and colleagues to speculate about future treatments. Their goal is to make special conduits for reconnecting severed nerves that contain liquid metal to preserve electrical conduction (and therefore muscle function), but that also contain growth factors to promote nerve regeneration.

Naturally, there are still many challenges and open questions that must be addressed before this can become a viable treatment option. For example, how much muscle function can be preserved? Could the liquid metal somehow interfere with or prevent regeneration? And how safe is liquid metal inside the body – especially if it leaks? These are questions that Jing and others hope to answer in the near future, starting with animal models and possibly later with humans.

Sources: technologyreview.com, arxiv.org, cnet.com, spectrum.ieee.org

The Future of Solar: The Space-Based Solar Farm

The nation of Japan has long been regarded as being at the forefront of emerging technology. And when it comes to solar energy, they are nothing if not far-sighted and innovative. Whereas most nations are looking at building ground-based solar farms in the next few years, the Japanese are looking at the construction of vast lunar and space-based solar projects that would take shape over the course of the next few decades.

The latest proposal comes from the Japan Aerospace Exploration Agency (JAXA), which recently unveiled a series of pilot projects that, if successful, should culminate in a 1-gigawatt space-based solar power generator within just 25 years. The design relies on two massive orbital mirrors, articulated to dynamically bounce sunlight onto a solar-panel-studded satellite; the harvested energy would then be beamed wirelessly to Earth using microwaves, collected Earth-side by rectifying antennas at sea, and passed on to land.

JAXA has long been the world's biggest booster of space-based solar power technology, making significant investments in research and rallying international support for early test projects. And in this respect, it is joined by private industry, such as the Shimizu Corporation – a Japanese construction firm that recently proposed building a massive array of solar cells on the Moon (aka the "Luna Ring") that could beam up to 13,000 terawatts of power to Earth around the clock, far more than humanity currently consumes.

Considering that Japan has over 120 million residents packed onto islands with a combined area roughly the size of Montana, this far-sighted tendency should not come as a surprise. Even before the Fukushima disaster took place, Japan knew it needed to look to alternative sources of electricity if it was going to meet future demand. And considering the possibilities offered by space-based solar power, it should also come as no surprise that Japan – which has very few natural resources – would look skyward for the answer.

Beyond Japan, solar power is considered the front-runner among alternative energy sources, at least until fusion power comes of age. Until such time as a fusion reaction can be triggered that produces substantially more energy than is required to initiate it, solar will remain the only green technology that could even theoretically provide for our global power demands. And in this respect, going into space is seen as the only way of circumventing the problems associated with ground-based solar.

Despite solar power being in incredible abundance – the Earth's deserts absorb more energy in a day than the human race uses in an entire year – the issues of harnessing that power and getting it to where it is needed remain stumbling blocks. Setting up vast arrays in the Earth's deserts would certainly deal with the former, but transmitting the power to the urban centers of the world (which are far removed from those deserts) would be both expensive and impractical.
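
That abundance claim survives a rough back-of-the-envelope check. The inputs below (total desert area, average desert insolation, annual world energy use) are approximate figures of my own, chosen for illustration rather than taken from the article:

```python
# Rough sanity check of "deserts absorb more energy in a day than
# humanity uses in a year". All inputs are approximate assumptions.
desert_area_m2 = 33e6 * 1e6              # ~33 million km^2 of desert, in m^2
insolation_kwh_per_m2_day = 6.0          # typical desert solar energy per day
world_energy_use_twh_per_year = 160_000  # world primary energy use, in TWh

desert_energy_twh_per_day = desert_area_m2 * insolation_kwh_per_m2_day / 1e9
print(f"Desert sunlight per day:   ~{desert_energy_twh_per_day:,.0f} TWh")
print(f"Human energy use per year: ~{world_energy_use_twh_per_year:,} TWh")
# With these assumptions, one day of desert sunlight (~198,000 TWh)
# does indeed exceed a year of human energy consumption (~160,000 TWh).
```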

Luckily, putting arrays into orbit solves both of these issues. Above the Earth's atmosphere, they would avoid most forms of wear, the ground-based day/night cycle, and all occluding weather formations. And assuming the panels are able to reorient themselves so that they are perpetually aimed at the Sun (or have mirrors to reflect the light onto them), the more optimistic estimates say that a well-designed space array could bring in more than 40 times the energy of a conventional ground-based one.

The only remaining issue lies in beaming all that energy back to Earth. Though space-based arrays can easily collect more power above the atmosphere than below it, that fact becomes meaningless if the gain is immediately lost to inefficiency during transmission. For some time, lasers were assumed to be the best option, but more recent studies point to microwaves as the more viable one. While lasers can be effectively aimed, they quickly lose focus when traveling through the atmosphere.

However, this and other plans involving space-based solar arrays (and a space elevator, for that matter) assume that certain advances will be made over the next 20 years or so – ranging from lightweight materials to increased solar-cell efficiency. By far the biggest challenge, though – or the one that looks to be giving the least ground to researchers – is power transmission. And with an estimated final mass of 10,000 tonnes, a 1-gigawatt space solar array will also demand significant improvements in things like the cost per kilogram of launching payloads into orbit.

It currently costs around $20,000 to place a kilogram (2.2 lbs) into geostationary orbit (GSO), and about half that for low-Earth orbit (LEO). Luckily, a number of recent developments have been encouraging, such as SpaceX's most recent tests of their Falcon 9R reusable rocket system or NASA's proposed Reusable Launch Vehicle (RLV). These and similar proposals should bring the cost of sending materials into orbit down significantly – Elon Musk hopes to bring it down to $1,100 per kilogram.
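
To put those figures in perspective, here is the launch-cost arithmetic for the 10,000-tonne array mentioned above, comparing today's roughly $20,000-per-kilogram price to GSO with the $1,100-per-kilogram target – a rough sketch using only the numbers quoted in this article:

```python
# Launch-cost arithmetic using the figures quoted above.
array_mass_kg = 10_000 * 1_000   # 10,000 tonnes expressed in kilograms
cost_today_per_kg = 20_000       # ~$20,000/kg to geostationary orbit today
cost_target_per_kg = 1_100       # the price per kg Musk hopes to reach

print(f"At today's prices:   ${array_mass_kg * cost_today_per_kg / 1e9:,.0f} billion")
print(f"At the target price: ${array_mass_kg * cost_target_per_kg / 1e9:,.0f} billion")
# Roughly $200 billion versus $11 billion just to lift the hardware,
# before any construction, assembly, or transmission costs.
```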

So while much still needs to happen to make space-based solar power (SBSP) and other major undertakings a reality, the trends are encouraging, and few of the estimates for research timelines seem all that pie-in-the-sky anymore.

Sources: extremetech.com, (2)

Cyberwars: The Heartbleed Bug and Web Security

A little over two years ago, a tiny piece of code containing a bug was introduced to the internet. The bug became known as Heartbleed, and in the two years it took for the world to recognize its existence, it has caused quite a few headaches. In addition to allowing cybercriminals to steal passwords and usernames from Yahoo, it has also allowed people to steal from online bank accounts, infiltrate government institutions (such as the Canada Revenue Agency), and generally undermine confidence in the internet.

What's more, in an age of cyberwarfare and domestic surveillance, its appearance has given conspiracy theorists a field day. And since it was first disclosed, a month ago to the day, some rather interesting theories have surfaced as to how the NSA and China may have been exploiting it to spy on people. But more on that later. First, some explanation of what Heartbleed is, where it came from, and how people can protect themselves from it seems in order.

First off, Heartbleed is not a virus or a type of malware in the traditional sense, though it can be exploited by malware and cybercriminals to achieve similar results. Basically, it is a security bug – a programming error – in popular versions of OpenSSL, the software library that encrypts and protects the privacy of your password, banking information and any other sensitive data you provide in the course of checking your email or doing a little online banking.

Though it was only made public a month ago, the origins of the bug go back just over two years – to New Year's Eve 2011, to be exact. It was then that Stephen Henson, one of the collaborators on the OpenSSL Project, received the code from Robin Seggelmann – a respected academic and an expert in internet protocols. Henson reviewed the code – an update to the OpenSSL internet security library – and by the time he and his colleagues were ringing in the New Year, he had added it to a software repository used by sites across the web.

What's interesting about the bug, which is named for the "heartbeat" part of the code it affects – a simple keep-alive exchange in which one computer sends a short message and the other echoes it back – is how it works. It allows anyone to read the memory of systems running the affected code, which at the time of disclosure powered roughly two-thirds of the internet's web servers. That way, cybercriminals can get the keys they need to decode and read the encrypted data they want.
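
In essence, a heartbeat request says "here is a payload of N bytes – echo it back," and the vulnerable code trusted that claimed length without checking it against the payload actually received. The following toy Python simulation (an illustration of the logic, not the actual OpenSSL C code) shows the resulting over-read:

```python
# Toy simulation of the Heartbleed over-read (not the real OpenSSL code).
# Server memory: a 4-byte heartbeat payload sits next to unrelated secrets.
server_memory = bytearray(b"ping" + b"user=admin;password=hunter2;sessionkey=ab12cd34")

def heartbeat_vulnerable(claimed_length: int) -> bytes:
    # BUG: trusts the length field from the request instead of the
    # actual payload size, so it can read past the payload.
    return bytes(server_memory[0:claimed_length])

def heartbeat_fixed(claimed_length: int, actual_payload_length: int = 4) -> bytes:
    # FIX: reject requests whose claimed length exceeds the real payload.
    if claimed_length > actual_payload_length:
        raise ValueError("heartbeat request rejected: length mismatch")
    return bytes(server_memory[0:claimed_length])

print(heartbeat_vulnerable(4))   # b'ping' -- the honest case
print(heartbeat_vulnerable(50))  # also leaks the adjacent "secret" bytes
```

In the real library, the echo was performed with a memory copy whose length came straight from the attacker's request, allowing up to 64 kilobytes of adjacent server memory to be leaked per heartbeat.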

The bug was independently discovered by Codenomicon – a Finnish web security firm – and Google Security researcher Neel Mehta. The official name for the vulnerability is CVE-2014-0160. Since its discovery was disclosed on April 7th, 2014, it has been estimated that some 17 percent (around half a million) of the Internet's secure web servers certified by trusted authorities were left vulnerable.

Several institutions have also come forward in that time to declare that they were subject to attack. For instance, the Canada Revenue Agency reported that its systems were accessed through an exploit of the bug during a six-hour period on April 8th, resulting in the theft of Social Insurance Numbers belonging to 900 taxpayers. When the attack was discovered, the agency shut down its web site and extended the taxpayer filing deadline from April 30 to May 5.

The agency also said it would provide anyone affected with credit protection services at no cost, and it appears that a suspect was apprehended. This was announced on April 16th, when the RCMP said they had charged an engineering student with "unauthorized use of a computer" and "mischief in relation to data" in connection with the theft. In another incident, the UK parenting site Mumsnet had several user accounts hijacked, and its CEO was impersonated.

Another consequence of the bug is the impetus it has given to conspiracy theorists who believe it may be part of a government-sanctioned ploy. Given recent revelations about the NSA's extensive efforts to eavesdrop on internet activity and engage in cyberwarfare, this is hardly a surprise. Nor would it be the first such suspicion, as anyone who recalls the case of NIST SP 800-90's Dual EC DRBG – a pseudorandom number generator used extensively in cryptography – allegedly acting as a "backdoor" for the NSA can attest.

In that case, as in this latest bout of speculation, it is believed that the vulnerability in the encryption itself may have been intentionally created to allow spy agencies to steal the private keys that vulnerable web sites use to encrypt your traffic to them. And cracking SSL to decrypt internet traffic has long been on the NSA's wish list. Last September, the Guardian reported that the NSA and Britain's GCHQ had "successfully cracked" much of the online encryption we rely on to secure email and other sensitive transactions and data.

Edward-Snowden-660x367According to documents the paper obtained from Snowden, GCHQ had specifically been working to develop ways into the encrypted traffic of Google, Yahoo, Facebook, and Hotmail to decrypt traffic in near-real time; and in 2010, there was documentation that suggested that they might have succeeded. Although this was two years before the Heartbleed vulnerability existed, it does serve to highlight the agency’s efforts to get at encrypted traffic.

For some time now, security experts have speculated about whether the NSA cracked SSL communications; and if so, how the agency might have accomplished the feat. But now, the existence of Heartbleed raises the possibility that in some cases, the NSA might not have needed to crack SSL at all. Instead, it’s possible the agency simply used the vulnerability to obtain the private keys of web-based companies to decrypt their traffic.

Though security vulnerabilities come and go, this one is deemed catastrophic because it sits at the core of SSL, the encryption protocol so many trust to protect their data. And beyond abuse by government agencies, the bug is also worrisome because it could be used by hackers to steal usernames and passwords for sensitive services like banking, ecommerce, and email. In short, it empowers individual troublemakers everywhere by leaving the locks on our information open to anyone who knows how to exploit the flaw.

Matt Blaze, a cryptographer and computer security professor at the University of Pennsylvania, claims that "It really is the worst and most widespread vulnerability in SSL that has come out." The Electronic Frontier Foundation, Ars Technica, and Bruce Schneier all deemed the Heartbleed bug "catastrophic", and Forbes cybersecurity columnist Joseph Steinberg even went as far as to say that:

Some might argue that [Heartbleed] is the worst vulnerability found (at least in terms of its potential impact) since commercial traffic began to flow on the Internet.

Regardless, Heartbleed does point to a much larger problem with the design of the internet. Some of its most important pieces are controlled by just a handful of people, many of whom aren't paid well — or aren't paid at all. In short, Heartbleed has shown that more oversight is needed to protect the internet's underlying infrastructure. And the sad truth is that open source software — which underpins vast swathes of the net — has a serious sustainability problem.

Another problem is money, in that important projects just aren't getting enough of it. Whereas well-known projects such as Linux, Mozilla, and the Apache web server enjoy hundreds of millions of dollars in annual funding, projects like the OpenSSL Software Foundation – which must raise the money that funds the project's software development – have never brought in more than $1 million in a year. To top it all off, there are issues with the open source ecosystem itself.

Typically, projects start when developers need to fix a particular problem; and when they open source their solution, it's instantly available to everyone. If the problem they address is common, the software can become wildly popular overnight. As a result, some projects never get the full attention from developers that they deserve. Steve Marquess, one of the OpenSSL foundation's partners, believes that part of the problem is that whereas people can see and touch their web browsers and Linux, a cryptographic library is invisible to most of them.

In the end, the only real solution is to inform the public. Since internet security affects us all, and the processes by which we secure our information are entrusted to too few hands, the immediate remedy is to widen the scope of inquiry and involvement. It also wouldn't hurt to commit additional resources to monitoring and securing the web, thereby ensuring that spy agencies and private individuals are neither exercising too much control over it nor able to do clandestine things with it.

In the meantime, the researchers from Codenomicon have set up a website – heartbleed.com – with more detailed information on the bug and what you can do to protect yourself.

Sources: cbc.ca, wired.com, (2), heartbleed.com

News from Space: Planet Hunting Flower-Shaped Starshade

With over 1,800 extra-solar planets discovered in the past 30 years, the search for life beyond our Solar System has begun anew. Astronomers believe that every star in the galaxy has a planet, and that one-fifth of these might harbor life. The greatest challenge, though, is being able to spot these "Earth-like" exoplanets. Because they emit very little light compared to their parent stars – usually less than one-millionth as much – direct imaging is extremely rare and difficult.

As such, astronomers rely predominantly on what is known as transit detection – spotting planets as they cross in front of their star's disc. This too presents difficulties, because the transit method requires that part of the planet's orbit intersect a line of sight between the host star and Earth. The probability that a randomly oriented exoplanet orbit will allow the planet to be observed in front of its star is therefore rather small.
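
Just how small can be quantified with the standard geometric estimate: for a roughly circular orbit, the chance that a randomly oriented orbit produces a transit as seen from Earth is approximately the ratio of the stellar radius to the orbital distance. Using illustrative Sun-and-Earth values (my own numbers, not figures from the article):

$$P_{\text{transit}} \approx \frac{R_\star}{a} \approx \frac{7\times10^{5}\ \text{km}}{1.5\times10^{8}\ \text{km}} \approx 0.005$$

In other words, a true Earth analogue around a Sun-like star has only about one chance in 200 of being oriented so that we ever see it transit.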

Luckily, engineers and astronomers at NASA and other federal space agencies are considering the possibility of evening these odds with new technology and equipment. One such effort comes from Princeton's High Contrast Imaging Laboratory, where Jeremy Kasdin and his team are working on a revolutionary space-based instrument known as a "starshade" – a flower-petal-shaped occulter that would fly some 50,000 kilometers from its companion telescope, blocking starlight so that the telescope can photograph planets directly.

Essentially, the starshade blocks light from distant stars that ordinarily outshine their dim planets, making a clear view impossible. When paired with a space telescope, the starshade adds a new and powerful instrument to NASA’s cosmic detection toolkit. The flower-shaped petals are part of what makes the starshade so effective. The starshade is also unique in that, unlike most space-based instruments, it’s one part of a two-spacecraft observation system.

As Dr. Stuart Shaklan, the NASA Jet Propulsion Laboratory's lead engineer on the starshade project, explained:

The shape of the petals, when seen from far away, creates a softer edge that causes less bending of light waves. Less light bending means that the starshade shadow is very dark, so the telescope can take images of the planets without being overwhelmed by starlight… We can use a pre-existing space telescope to take the pictures. The starshade has thrusters that will allow it to move around in order to block the light from different stars.

This process presents a number of engineering challenges that Shaklan and his team are working hard to unravel, from positioning the starshade precisely in space, to ensuring that it can be deployed accurately. To address these, his research group will create a smaller scale starshade at Princeton to verify that the design blocks the light as predicted by the computer simulations. Concurrently, the JPL team will test the deployment of a near-full scale starshade system in the lab to measure its accuracy.

Despite these challenges, the starshade approach could offer planet-hunters many advantages, thanks in no small part to its simplicity. Light from the star never reaches the telescope because it's blocked by the starshade, which allows the telescope system to be simpler. Another advantage of the starshade approach is that it can be used with a multi-purpose space telescope designed to make observations that could be useful to astronomers working in fields other than exoplanets.

As part of NASA's New Worlds Mission concept, the starshade engineers are optimistic that refining their technology could be the key to major exoplanet discoveries in the near future. And given that over 800 planets have been detected so far in 2014 – almost half of the roughly 1,800 detected in total – anything that can assist the detection process at this point is likely to lead to an explosion in planetary discoveries.

And with one-fifth of these planets being possible candidates for life… well, you don't have to do the math to know that the outcome will be mighty exciting! In the meantime, enjoy this video from TED Talks, in which Professor Jeremy Kasdin speaks about the starshade project:


Sources: ted.com, planetquest.jpl.nasa.gov, princeton.edu

The Future is Fusion: Surpassing the “Break-Even” Point

For decades, scientists have dreamed of the day when nuclear fusion – and the clean, nearly limitless energy it promises – could be made a reality. In recent years, many positive strides have been taken in that direction, to the point where research labs are now closing in on "break-even": producing as much energy from a fusion reaction as it takes to trigger that reaction in the first place.

And now, the world's best-performing fusion reactor – located in Oxfordshire, England – will become the first fusion power experiment to attempt to surpass that point. The experiment, known as the Joint European Torus (JET), has held the world record for fusion reactor efficiency since 1997. If JET can reach the break-even point, there's a very good chance that the massive International Thermonuclear Experimental Reactor (ITER) currently being built in France will be able to finally achieve the dream of self-sustaining fusion.

Originally built in 1983, the JET project was conceived by the European Community (a precursor to the EU) as a means of making fusion power a reality. It was unveiled the following year at a former Royal Navy airfield near Culham in Oxfordshire, with Queen Elizabeth II herself in attendance, and experiments on triggering fusion reactions began. By 1997, 16 megawatts of fusion power were being produced from an input power of 24 megawatts, for a fusion energy gain factor of around 0.7.
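
That gain factor is simply the ratio of fusion power produced to heating power supplied, conventionally written as Q, with Q = 1 marking break-even and Q > 1 meaning net energy gain:

$$Q = \frac{P_{\text{fusion}}}{P_{\text{input}}} = \frac{16\ \text{MW}}{24\ \text{MW}} \approx 0.67$$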

Since that time, no one else has come close. The National Ignition Facility in California – the only other large-gain fusion experiment on the planet – recently claimed to have passed the break-even point with its laser-driven process. However, those claims are qualified by the fact that their 500-terawatt process (that's 500 trillion watts!) is highly inefficient compared to what is being used in Europe.

Currently, there are two competing approaches to the artificial creation of nuclear fusion. Whereas the NIF uses "inertial confinement" – lasers that create enough heat and pressure to trigger nuclear fusion – the JET project uses a process known as "magnetic confinement", in which deuterium and tritium fuel are fused within a doughnut-shaped device (a tokamak) and the thermal energy released is used to provide power.

Of the two, magnetic confinement is usually considered the better prospect for the limitless production of clean energy, and it is the process the 500-megawatt ITER fusion reactor will use once it's up and running. And while JET itself is a fairly low-power experiment (38 megawatts), it's still very exciting because it's essentially a small-scale prototype of the larger ITER. For instance, JET has been upgraded in the past few years with features that are part of the ITER design.

These include a wall of solid beryllium that can withstand being bombarded by ultra-high-energy neutrons and temperatures in excess of 200 million degrees. This is a key part of achieving a sustained fusion reaction, which requires a wall that can bounce the hot neutrons created by the fusion of deuterium and tritium back into the reaction rather than letting them escape. With this new wall in place, the scientists at JET are preparing to pump up the reaction and pray that more energy is created.

Here’s hoping they are successful! As it stands, there are still many who feel that fusion is a pipe-dream, and not just because previous experiments that claimed success turned out to be hoaxes. With so much riding on humanity’s ability to find a clean, alternative energy source, the prospects of a breakthrough do seem like the stuff of dreams. I sincerely hope those dreams become a reality within my own lifetime…

Sources: extremetech.com, (2)

News from SpaceX: Falcon 9 Reusable Rocket Test

For over two years now, Elon Musk and his private space company SpaceX have been working towards the creation of a reusable rocket system. Known as the Falcon 9 Reusable Development Vehicle (F9R Dev) – the successor to the "Grasshopper" test rocket – this system may prove to be the greatest development in space travel since the invention of the multistage rocket. After multiple tests that reached greater and greater altitudes, the latest attempt at a takeoff and soft landing took place this past month.

Timed to coincide with SpaceX's launch to the International Space Station (which took place on Friday, April 18th), the landing was apparently a success. Several days after the launch, Elon Musk tweeted that the "[d]ata upload from tracking plane shows landing in Atlantic was good!" That update came on April 22nd; as of yet, there is no definitive word on whether the first stage landed correctly, or whether it was still in one piece by the time the recovery boats got to it.

Presumably SpaceX will provide another update in due course. In the meantime, they took the opportunity to release a rather awesome video of what the Falcon 9 Reusable should look like when successfully performing a vertical takeoff and vertical landing (VTVL). The video has accumulated an astonishing 3,598,143 views in the last two weeks, which is indicative of the level of interest this project and its implications have garnered over the past few years.

Meanwhile, the resupply mission went off without a hitch. Officially designated CRS-3, this mission was all the more significant because its Falcon 9 launch vehicle featured the same retractable landing legs and soft-landing capability as the Grasshopper test rocket. In this case, however, it was the first time a Falcon 9 first stage was tested in a real-world scenario, returning to Earth after boosting its payload toward Low Earth Orbit (LEO).

Though the Dragon capsule was successfully grappled by the ISS, the jury is still out on whether the soft landing was a success. To minimize any risk, the first stage of the Falcon 9 attempted to "soft land" in the Atlantic. Unfortunately, according to Elon Musk, due to "13- to 20-foot waves… It's unlikely that the rocket was able to splash down successfully." Using telemetry data gathered from a SpaceX spotter plane, it appears that everything else went to plan, though.

Because of the rough seas, the retrieval boats couldn't make it to the landing site, and the rocket is thus unlikely to be recovered. In the meantime, SpaceX will spend the following days and weeks analyzing more detailed data from the launch, and then update the Falcon 9 design and launch protocol accordingly. It is clear at this point, however, that these latest tests are not being considered a failure, or a reason to cease their efforts.

As Musk himself explained in a series of public statements and interviews after the launch:

I would consider it a success in the sense that we were able to control the boost stage to a zero roll rate, which is previously what has destroyed the stage — an uncontrolled roll… I think we’re really starting to connect the dots of what’s needed [to bring the rocket back to the launch site]. I think that we’ve got a decent chance of bringing a stage back this year, which would be wonderful.

Considering the benefits of cheap, reusable rockets, and all the things they would make possible – space-based solar power, a Moon settlement, missions to Mars, even a space elevator – there's simply no way that a single unsuccessful rocket recovery will deter them. In the meantime, be sure to check out this video of what a successful Falcon 9 VTVL test looks like. Hopefully, we'll be seeing a real-world example of this happening soon:


Source:
extremetech.com

News From Space: Cosmic Inflation and Dark Matter

Hello again! In another attempt to cover events that piled up while I was away, here are some stories from March and early April of this year which may prove to be among the greatest scientific finds of the year – indeed, among the greatest in recent history – as they may help to answer the most fundamental questions of all: what is the universe made of, and how did it come to exist?

First up, in a development that can only be described as cosmic in nature (pun intended), back in March astrophysicists at the Harvard-Smithsonian Center for Astrophysics announced the first-ever detection of the imprint of primordial gravitational waves. The discovery, hailed as the first direct evidence of cosmic inflation in the instant after the Big Bang, is comparable in significance to CERN's confirmation of the Higgs boson in 2012. And there is already talk of a Nobel Prize for the Harvard crew because of their discovery.

The Big Bang theory, which states that the entire universe sprang into existence from a single point some 13.8 billion years ago, has been the scientific consensus for decades. But until now, scientists have had little beyond theory and indirect observation to back up the inflationary part of the story. As the name suggests, gravitational waves are ripples in spacetime that have been propagating through the universe ever since the Big Bang took place.

Originally predicted as part of Einstein's General Theory of Relativity in 1916, these waves are believed to have been generated a trillionth of a trillionth of a trillionth of a second after the Big Bang, and to have been propagating ever since – for roughly 13.8 billion years. The theory also predicts that detecting these primordial waves would be proof of the rapid inflation that took place in the universe's first instants, as well as of the expansion that has continued ever since.

Between 2010 and 2012, the research team used BICEP2 – a radio telescope situated at the Amundsen–Scott South Pole Station – to observe the Cosmic Microwave Background (CMB). They were looking for hints of B-mode polarization, a twist in the CMB that could only have been caused by the ripples of primordial gravitational waves. Following a great deal of data analysis, the team announced that they had found that B-mode polarization.

The work will now be scrutinized by the rest of the scientific community, of course, but the general feeling seems to be that it will stand up. In terms of scientific significance, the confirmation would be the first direct evidence that the universe started out as nothing, erupted into existence 13.8 billion years ago, and has continued to expand ever since. It would confirm that cosmic inflation really happened, and that the entire structure of the universe was seeded at the very beginning by the tiniest of fluctuations.

And that's not the only discovery of cosmic significance made in recent months. In this case, the news comes from NASA's Fermi Gamma-ray Space Telescope, which has been analyzing high-energy gamma rays emanating from the galaxy's center since 2008. After poring over the results, an independent group of scientists claimed to have found an unexplained source of emissions that they say is "consistent with some forms of dark matter."

These scientists found that, after removing all known sources of gamma rays, they were left with gamma-ray emissions that they cannot yet explain. And while they were cautious to note that more observations will be needed to characterize these emissions, this is the first time that potential evidence has been found that this mysterious, invisible mass – thought to account for roughly 26.8% of the universe – actually exists.

To be fair, scientists aren't even sure what dark matter is made of. In fact, its very existence is inferred from its gravitational effects on visible matter and the gravitational lensing of background radiation. Originally, it was hypothesized to account for the discrepancies observed between the mass of galaxies, clusters, and the universe as a whole calculated through dynamical and general-relativistic means, and the mass of the visible "luminous" matter.

The most widely accepted explanation for these phenomena is that dark matter exists and that it is most probably composed of Weakly Interacting Massive Particles (WIMPs) that interact only through gravity and the weak force. If this is true, then dark matter could produce gamma rays in ranges that Fermi can detect. The galaxy's center is also an interesting place to look, since that is where scientists believe dark matter would lurk – the thus-far-invisible substance forming the base of normal structures like galaxies.

The galactic center teems with gamma-ray sources, from interacting binary systems and isolated pulsars to supernova remnants and particles colliding with interstellar gas. It's also where astronomers expect to find the galaxy's highest density of dark matter, which only affects normal matter and radiation through its gravity. Large amounts of dark matter attract normal matter, forming a foundation upon which visible structures, like galaxies, are built.

Dan Hooper, an astrophysicist at Fermilab and lead author of the study, had this to say on the subject:

The new maps allow us to analyze the excess and test whether more conventional explanations, such as the presence of undiscovered pulsars or cosmic-ray collisions on gas clouds, can account for it. The signal we find cannot be explained by currently proposed alternatives and is in close agreement with the predictions of very simple dark matter models.

Hooper and his colleagues suggest that if WIMPs were annihilating one another, this would be "a remarkable fit" for a dark matter signal. They again caution, though, that there could be other explanations for the phenomenon. Writing in a paper submitted to the journal Physical Review D, the researchers say that these features are difficult to reconcile with other explanations proposed so far, although they note that plausible alternatives not requiring dark matter may yet materialize.

And while a great deal more work is required before Dark Matter can be safely said to exist, much of that work can be done right here on Earth using CERN's own equipment. Tracy Slatyer, a theoretical physicist at the Massachusetts Institute of Technology and co-author of the report, explains:

Dark matter in this mass range can be probed by direct detection and by the Large Hadron Collider (LHC), so if this is dark matter, we're already learning about its interactions from the lack of detection so far. This is a very exciting signal, and while the case is not yet closed, in the future we might well look back and say this was where we saw dark matter annihilation for the first time.

Still, they caution that it will take multiple sightings – in other astronomical objects, at the LHC, or in direct-detection experiments being conducted around the world – to validate their dark matter interpretation. Even so, this is the first time that scientists have had anything, even tentative, on which to base the existence of dark matter. Much as with the Big Bang theory until very recently, it has remained a process of elimination – ruling out explanations that do not work rather than proving one that does.

So for those hoping that 2014 will be the year the existence of dark matter is finally proven – much as 2012 was the year the Higgs boson was discovered and 2013 the year the amplituhedron was found – there are plenty of reasons to hope. And in the meantime, check out this video of a gamma-ray map of the galactic center, courtesy of NASA's Goddard Space Flight Center.


Sources:
extremetech.com, IO9.com, nasa.gov, cfa.harvard.edu, news.nationalgeographic.com

Climate Crisis: Terraforming the Desert

Now that I'm back from my European adventure, I finally have the time to catch up on some news stories that were breaking earlier in the month. And in between posting about said adventure, I thought I might read up and post on them, since they are all quite interesting to behold. Take, for example, this revolutionary idea that calls for the creation of a rolling city with one purpose in mind: to replant the deserts of the world.

Desertification is currently one of the greatest threats facing humanity. Every year, more than 75,000 square kilometers (29,000 square miles) of arable land turns to desert. As deserts spread – a process that is accelerating thanks to climate change and practices like clear-cutting – the UN estimates that more than 1 billion people will be directly affected. Many of them, living in places like Northern Africa and rural China, are already struggling with poverty, so the loss of farmland would be especially hard to handle.

As a result, scientists and designers are looking for creative solutions to the problem. One such concept is the Green Machine – a huge, self-powered platform that would act as a mobile oasis, rolling on treads originally designed to move NASA rockets. Designed by Malka Architecture and Yachar Bouhaya Architecture for the Venice Biennale, this mobile city would roam the drylands and plant seeds in an effort to hold back the desert.

The platform would be mounted on sixteen caterpillar treads, while giant balloons floating above it capture water from condensation. As the first treads roll over the soil, the machine uses a little water from the balloons to soften the ground, while the last set of treads injects seeds, some fertilizer, and more water. The entire platform would run on renewable power, using a combination of solar towers, wind turbines, and a generator that exploits temperature differences in the desert to create electricity.

The machine could theoretically capture enough energy to support an entire small city onboard, complete with housing, schools, businesses, parks, and more farmland to grow produce for the local area. This city would house and support the many researchers, agronomists, workers and their families needed to oversee the effort. Much as on offshore oil rigs, these individuals could be flown in for work rotations lasting up to six weeks at a time before rotating out.

The designers were inspired by Allan Savory, who has proposed a much lower-tech version of the same process that relied on cattle to naturally till and fertilize the soil. For the architects, building on this idea seemed like a natural extension of their work. If the machine went into action at desert borders, the designers say it could help formerly barren soil produce 20 million tons of crops each year, and could even help slow climate change by capturing carbon in soil.

Over time, biodiversity could also gradually return to the area. The architects are currently working on developing the project on the Moroccan side of the Sahara Desert. As Stephane Malka, founder of Malka Architecture, put it, it's all about using the neglected parts of the world to plan for humanity's future:

For a long time, my studio has developed work around neglected spaces of the city. Deserts are the biggest neglected space on Earth, as they represent more than 40% of the terrestrial surface. Building the Green Machine units would be able to re-green half of the desert borders and the meadows of the world, while feeding all of humanity

As to the sheer size of their massive, treaded city, the designers stressed that it was merely an extension of the challenge it is seeking to address. Apparently, if you want to halt a worldwide problem, you need a big-ass, honking machine!

Sources: fastcoexist.com, dvice.com, designboom.com

The Future of Medicine: Replacement Ears and “Mini Hearts”

Biomedicine is doing some amazing things these days, so much so that I can hardly keep up with the rate of developments. Just last month, two amazing advances were made, offering new solutions for replacing human tissue and treating chronic conditions. The first has to do with a new method of growing human ears from a patient's own stem cells, while the second involves growing "mini hearts" from a patient's own cells to aid circulation.

The first comes from London's Great Ormond Street Hospital, where researchers are working on a process to grow human ears using stem cells taken from a patient's own fat tissue. Building upon recent strides made in the field of bioprinting, this process could revolutionize reconstructive surgery as we know it. It also seeks to bring change to an area of medicine which, despite being essential for accident victims, has been sadly lacking in development.

Currently, the procedure to repair damaged or non-existent cartilage in the ear involves an operation that is usually carried out when the patient is a child. For the sake of this procedure, cartilage is extracted from the patient's ribs and painstakingly crafted into the form of an ear before being grafted back onto the individual. Whilst this method of reconstruction achieves good results, it also comes with its share of unpleasant side effects.

Basically, the patient is left with a permanent defect around the area from which the cells were harvested, as the cartilage between the ribs does not regenerate. In this new technique, the cartilage cells are engineered from mesenchymal stem cells extracted from the child's abdominal adipose (fat) tissue. The benefit of this new system is that, unlike the cartilage in the ribs, the adipose tissue regenerates, thereby leaving no long-term defect in the host.

There is also the potential to begin reconstructive treatment with stem cells derived from adipose tissue earlier than previously possible, as it takes time for the ribs to grow enough cartilage to undergo the procedure. As Dr. Patrizia Ferretti, a researcher working on the project, said in a recent interview:

One of the main benefits in using the patient’s own stem cells is that there is no need for immune suppression which would not be desirable for a sick child, and would reduce the number of severe procedures a child needs to undergo.

To create the form of the ear, a porous polymer nano-scaffold is seeded with the stem cells. The cells are then chemically induced to become chondrocytes (aka cartilage cells) while growing into the pores of the scaffold to take on the shape of the ear. According to Dr. Ferretti, cellularized scaffolds – themselves a recent medical breakthrough – integrate much better than fully synthetic implants, which are more prone to extrusion and infection.

Dr. Ferretti continued:

While we are developing this approach with children with ear defects in mind, it could ultimately be utilized in other types of reconstructive surgery both in children and adults.

Basically, this new and potentially more advantageous technique would replace the current set of procedures used to treat cartilage defects in children, such as microtia – a condition which prevents the ear from forming correctly. At the same time, the reconstructive technology also has the potential to be invaluable in improving the quality of life of those who have been involved in a disfiguring accident, or even those injured in the line of service.

Next up, there is the "mini heart" created by Dr. Narine Sarvazyan of George Washington University in Washington, D.C. Designed to be wrapped around individual veins, these cuffs of rhythmically contracting heart tissue are a proposed solution to the problem of chronic venous insufficiency – a condition in which the valves in leg veins fail, preventing oxygen-poor blood from being pumped back to the heart.

Much like the process for creating replacement ears, the mini hearts are grown by coaxing a patient's own adult stem cells into becoming cardiac cells. When one of these cuffs is placed around a vein, its contractions aid the unidirectional flow of blood, and it also helps keep the vein from becoming distended. Additionally, because it is grown from the patient's own cells, there is little chance of rejection. So far, the cuffs have been grown and tested in the lab; next, Sarvazyan hopes to conduct animal trials.

As Sarvazyan explained, the applications go far beyond treating venous insufficiency. There are also long-range possibilities for organ replacement:

We are suggesting, for the first time, to use stem cells to create, rather than just repair damaged organs. We can make a new heart outside of one’s own heart, and by placing it in the lower extremities, significantly improve venous blood flow.

One of the greatest advantages of the coming age of biomedicine is the ability to replace human limbs, organs and tissue with organic substitutes. And the ability to grow these from the patient's own tissue is a major plus, in that it cuts down on the development process and ensures a minimal risk of rejection. On top of all that, the ability to create replacement organs would also significantly cut down on the cost of replacement tissue, as well as the long waiting periods associated with donor lists.

Imagine that, if you will: a future where a patient suffering from liver, kidney, circulatory, or heart problems is able to simply visit their local hospital or clinic, donate a small sample of tissue, and receive a healthy, fully compatible replacement after an intervening period of days or maybe even hours. The words "healthy living" will take on new meaning!

Sources: gizmag.com, (2), nanomedjournal.com