After 15 months of observing deep space, scientists with the European Space Agency's Planck mission have generated a massive heat map of the entire universe. This "heat map", as it's called, captures the oldest light in the universe and uses that data to extrapolate the universe's age, the amount of matter it holds, and the rate of its expansion. And as usual, what they've found was simultaneously reassuring and startling.
When we look at the universe through a thermal imaging system, what we see is a mottled light show caused by cosmic background radiation. This radiation is essentially the afterglow of the Universe’s birth, and is generally seen to be smooth and uniform. This new map, however, provides a glimpse of the tiny temperature fluctuations that were imprinted on the sky when the Universe was just 370,000 years old.
Since it takes light so long to travel from one end of the universe to the other, scientists can tell – using red shift and other methods – how old the light is, and hence get a glimpse at what the universe looked like when the light was first emitted. For example, if a galaxy several billion light years away appears to be dwarfish and misshapen by our standards, it’s an indication that this is what galaxies looked like several billion years ago, when they were in the process of formation.
Hence, like archaeologists sifting through sand to find fossil records of the past, scientists believe this map reveals a sort of fossil imprint left by the state of the universe just 10 nano-nano-nano-nano seconds (roughly 10⁻³⁵ seconds) after the Big Bang. The splotches in the Planck map represent the seeds from which the stars and galaxies formed. As is heat-map tradition, the reds and oranges signify warmer temperatures, while the light and dark blues signify cooler ones.
The cooler spots came about because they mark where matter was once concentrated before, with the help of gravity, it collapsed to form galaxies and stars. Using the map, astronomers discovered that there is more matter clogging up the universe than previously thought, at around 31.7% of the total, and less dark energy floating around, at around 68.3%. This shift in the matter-to-energy ratio also indicates that the universe is expanding more slowly than previously thought, which requires an update to its estimated age.
All told, the universe is now believed to be a healthy 13.82 billion years old. That wrinkles my brain! Also of interest is the fact that this would appear to confirm the Big Bang theory. Though it is widely considered scientific canon, there are those who dispute this creation model of the universe and argue for alternatives, such as the "Steady State Theory" (otherwise known as the "Theory of Continuous Creation").
In this scenario, the majority of matter in the universe was not created in a single event, but gradually by many smaller ones. What's more, the universe will not inevitably contract back in on itself in a "Big Crunch", but will instead continue to expand until all the stars have either died out or become black holes. As Krzysztof Gorski, a member of the Planck team at JPL, put it:
This is a treasury of scientific data. We are very excited with the results. We find an early universe that is considerably less rigged and more random than other, more complex models. We think they’ll be facing a dead-end.
Martin White, a Planck project scientist with the University of California, Berkeley and the Lawrence Berkeley National Laboratory, explained further. According to White, the map shows how matter scattered throughout the universe with its associated gravity subtly bends and absorbs light, “making it wiggle to and fro.” As he went on to say:
The Planck map shows the impact of all matter back to the edge of the Universe. It’s not just a pretty picture. Our theories on how matter forms and how the Universe formed match spectacularly to this new data.
The Planck space probe, which launched in 2009 from the Guiana Space Center in French Guiana, is a European Space Agency mission with significant contributions from NASA. The two-ton spacecraft gathers the ancient glow of the Universe's beginning from a vantage point more than a million and a half kilometers from Earth. This is not the first map produced by Planck; in 2010, it created an all-sky radiation map from which scientists, using supercomputers, removed all interfering foreground light to get a clear view of the deep cosmic background.
However, this is the first time any satellite has been able to picture the background radiation of the universe at such high resolution. The temperature variations captured by Planck's instruments amount to less than one hundred-millionth of a degree, requiring extraordinarily sensitive equipment to detect the contrast. So whereas cosmic radiation appeared uniform, or only slightly varied, in the past, scientists are now able to see even the slightest changes, which is crucial to their work.
So, in summary, we have learned that the universe is a little older than previously expected, and that it most certainly was created in a single, chaotic event known as the Big Bang. Far from dispelling the greater mysteries, confirming these theories is really just the tip of the iceberg. There's still the grandiose mystery of how all the fundamental forces – gravity, the nuclear forces and electromagnetism – work together.
Ah, and let's not forget the question of what transpires beneath the veil of an event horizon (aka a black hole), and whether or not there is such a thing as a gateway through space and time. Finally, there's the age-old question of whether intelligent life – or life of any kind – exists somewhere out there. But given the sheer number of stars, planets and possibilities the universe provides, it almost surely does!
And I suppose there’s also that persistent nagging question we all wonder when we look up at the stars. Will we ever be able to get out there and take a closer look? I for one like to think so, and that it’s just a matter of time!
Back in January, National Geographic Magazine celebrated its 125th anniversary. In honor of this occasion, they released a special issue which commemorated the past 125 years of human exploration and looked ahead at what the future might hold. As I sat in the doctor’s office, waiting on a prescription for antibiotics to combat my awful cold, I found myself terribly inspired by the article.
So naturally, once I got home, I looked up the article and its source material and got to work. The issue of exploration, especially the future thereof, is not something I can ever pass up! So for the next few minutes (or hours, depending on how much you like to nurse a read), I present you with some possible scenarios about the coming age of deep space exploration.
Suffice it to say, National Geographic's appraisal of the future of space travel was informative and hit on all the right subjects for me. When one considers the sheer distances involved, not to mention the amount of time, energy, and resources it would take to get people there, reaching into the next great frontier poses a great many questions and challenges.
Already, NASA, Earth's other space agencies and even private companies have several ideas in the works for returning to the Moon, going to Mars, and reaching the Asteroid Belt. These include the SLS (Space Launch System), a new heavy-lift rocket in the same class as the Saturn V that took the Apollo astronauts to the Moon. Years from now, it may even be taking crews to Mars, a mission slated for the 2030s.
And when it comes to settling the Moon and Mars, and turning the Asteroid Belt into our primary source of mineral extraction and manufacturing, these same agencies and a number of private corporations are all invested in getting it done. SpaceX is busy testing its reusable rocket testbed, known as the Grasshopper, in the hopes of making spaceflight more affordable. And NASA and the ESA are perfecting a process known as "sintering" to turn Moon regolith into bases and asteroids into manufactured goods.
Meanwhile, Virgin Galactic, Reaction Engines and Golden Spike are planning to make commercial trips into space and to the Moon possible within a few years' time. And with companies like Deep Space Industries and Google-backed Planetary Resources prospecting asteroids and planning expeditions, it's only a matter of time before everything from Earth to the Jovian moons is being explored and claimed for human use.
Space Colony by Stephan Martiniere
But when it comes to deep-space exploration, the stuff that would take us to the outer reaches of the Solar System and beyond, that’s where things get tricky and pretty speculative. Ideas have been on the table for some time, since the last great Space Race forced scientists to consider the long-term and come up with proposed ways of closing the gap between Earth and the stars. But to this day, they remain a scholarly footnote, conceptual and not yet realizable.
But as we embark on a renewed era of space exploration, where the stuff of science fiction is quickly becoming the stuff of science fact, these old ideas are being dusted off, paired with newer concepts, and seriously considered. They might not be feasible at the moment, but who knows what tomorrow holds? From propulsion, to housing, to cost and time expenditures, the human race is once again taking a serious look at extra-Solar exploration.
And here are some of the top contenders for the “Final Frontier”:
Nuclear Propulsion: The concept of using nuclear bombs (no joke) to propel a spacecraft was first proposed in 1946 by Stanislaw Ulam, a Polish-American mathematician who participated in the Manhattan Project. Preliminary calculations were made by F. Reines and Ulam in 1947, and the actual project – known as Project Orion – was initiated in 1958, led by Ted Taylor at General Atomics and physicist Freeman Dyson of the Institute for Advanced Study in Princeton.
In short, the Orion design involves a large spacecraft carrying a large supply of thermonuclear warheads, achieving propulsion by releasing a bomb behind it and then riding the detonation wave with the help of a rear-mounted pad called a "pusher". After each blast, the explosive force is absorbed by the pusher pad, which translates the thrust into forward momentum.
Though hardly elegant by modern standards, the proposed design offered a way of delivering the explosive (literally!) force necessary to propel a rocket over extreme distances, and solved the issue of how to utilize that force without containing it within the rocket itself. However, the drawbacks of this design are numerous and noticeable.
For starters, the ship itself is staggering in size, weighing in anywhere from 2,000 to 8,000,000 tonnes, and the propulsion design releases a dangerous amount of radiation – and not just for the crew! If we are to rely on ships that use nuclear bombs to achieve thrust, we had better find a course that takes them away from any inhabited or habitable areas. What's more, the cost of producing a behemoth of this size (even the modest 2,000-tonne version) would be equally staggering.
Antimatter Engine: Most science fiction authors who write about deep space exploration (at least those who want to be taken seriously) rely on anti-matter to power ships in their stories. This is no accident, since antimatter is the most potent fuel known to humanity right now. While tons of chemical fuel would be needed to propel a human mission to Mars, just tens of milligrams of antimatter, if properly harnessed, would be able to supply the requisite energy.
Fission and fusion reactions convert just a fraction of one percent of their mass into energy. But combining matter with antimatter, its mirror twin, yields a reaction of 100 percent efficiency. For years, physicists at the CERN laboratory in Geneva have been creating tiny quantities of antimatter by smashing subatomic particles together at near-light speeds. Given time and considerable investment, it is entirely possible this could be turned into a form of advanced propulsion.
In an antimatter rocket, a dose of antihydrogen would be mixed with an equal amount of hydrogen in a combustion chamber. The mutual annihilation of a half pound of each, for instance, would unleash more energy than a 10-megaton hydrogen bomb, along with a shower of subatomic particles called pions and muons. These particles, confined within a magnetic nozzle similar to the type necessary for a fission rocket, would fly out the back at one-third the speed of light.
However, there are natural drawbacks to this design as well. While a top speed of one-third the speed of light is very impressive, there's the question of how much fuel would be needed. For example, while it would be nice to reach Alpha Centauri – a mere 4.5 light years away – in 13.5 years instead of the 130 or so it would take using a nuclear rocket, the amount of antimatter needed would be immense.
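As a sanity check on those figures, here's a quick back-of-the-envelope sketch. It is pure arithmetic on the numbers quoted above; the ~3.5% of light speed attributed to the nuclear rocket is simply inferred from the 130-year figure, not taken from any design study:

```python
# Back-of-the-envelope check of the antimatter figures quoted above.
C = 299_792_458            # speed of light, m/s
LB_TO_KG = 0.4536          # kilograms per pound
MEGATON_TNT = 4.184e15     # joules per megaton of TNT

# A half pound of antihydrogen annihilating with a half pound of hydrogen
# converts roughly one pound of mass entirely into energy (E = m * c^2).
energy = (1.0 * LB_TO_KG) * C**2
print(f"Annihilation energy: {energy:.2e} J (~{energy / MEGATON_TNT:.0f} megatons of TNT)")

# Travel time to Alpha Centauri (~4.5 light years, as quoted above)
# at one-third of light speed, versus the pace the nuclear rocket implies.
distance_ly = 4.5
print(f"At 1/3 c: {distance_ly / (1 / 3):.1f} years")
print(f"At ~3.5% of c (roughly what the 130-year figure implies): {distance_ly / 0.035:.0f} years")
```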
No means exist to produce antimatter in such quantities right now, and the cost of building the kind of rocket required would be equally immense. Considerable refinements, and a sharp drop in construction costs, would therefore be needed before anything of its kind could be deployed.
Laser Sail: Thinking beyond rockets and engines, there are concepts that would allow a spaceship to travel into deep space without any fuel at all. In 1948, Robert Forward put a twist on the ancient technique of sailing – capturing wind in a fabric sail – to propose a new form of space travel. Much as our world is permeated by wind currents, space is filled with radiation – largely photons streaming from stars – that can push a cosmic sail in much the same way.
This was followed up in the 1970s, when Forward proposed beam-powered propulsion schemes using either lasers or masers (microwave lasers) to push giant sails to a significant fraction of the speed of light. When photons in the laser beam strike the sail, they transfer their momentum and push the sail onward. The spaceship would steadily build up speed while the laser that propels it stays put in our solar system.
Much the same process would be used to slow the sail down as it neared its destination. This would be done by having the outer portion of the sail detach, which would then refocus and reflect the lasers back onto a smaller, inner sail. This would provide braking thrust to slow the ship down as it reached the target star system, eventually bringing it to a slow enough speed that it could achieve orbit around one of its planets.
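To get a feel for why the laser has to be such a monster, here is a minimal photon-pressure sketch. The gigawatt beam and one-tonne sail are made-up illustrative numbers, not parameters from Forward's actual designs:

```python
# Rough photon-pressure sketch for a laser-pushed sail. The beam power and
# sail mass below are illustrative assumptions, not figures from Forward's work.
C = 299_792_458          # speed of light, m/s

beam_power = 1e9         # assumed: 1 gigawatt of laser power on the sail
sail_mass = 1_000.0      # assumed: 1-tonne sail plus payload, in kg

# A perfectly reflecting sail receives twice the photon momentum flux: F = 2P / c
force = 2 * beam_power / C
accel = force / sail_mass
print(f"Thrust: {force:.2f} N, acceleration: {accel:.2e} m/s^2")

# Years of continuous illumination needed to coast up to 10% of light speed
years = (0.1 * C / accel) / (3600 * 24 * 365.25)
print(f"Time to reach 0.1 c at this rate: ~{years:.0f} years")
```

Even under these generous assumptions, a full gigawatt pushing a one-tonne craft takes well over a century to reach a tenth of light speed, which is why the serious proposals scale the beam power up enormously.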
Once more, there are challenges, foremost of which is cost. While the solar sail itself, which could be built around a central, crew-carrying vessel, would be fuel free, there’s the little matter of the lasers needed to propel it. Not only would these need to operate for years continuously at gigawatt strength, the cost of building such a monster would be astronomical, no pun intended!
A solution proposed by Forward was to power the lasers with a series of enormous solar panel arrays on or near the planet Mercury. However, this just replaces one financial burden with another, as the mirror or Fresnel lens needed to keep the lasers focused on the sail over such distances would have to be planet-sized. What's more, a giant braking sail would have to be mounted on the ship as well, and the deceleration beam would have to be focused on it very precisely.
So while solar sails present a highly feasible means of sending people to Mars or around the Inner Solar System, they are not the best concept for interstellar travel. While the design saves costs by reaching high speeds without fuel, those savings are more than offset by the power demands and the apparatus needed to keep it moving.
Generation/Cryo-Ship: Here we have a concept which has been explored extensively in fiction. Known variously as an Interstellar Ark, an O'Neill Cylinder, a Bernal Sphere, or a Stanford Torus, the basic philosophy is to create a ship that would be a self-contained world, travelling the cosmos at a slow pace and keeping its crew housed, fed and sustained until they finally reached their destination. One of the main reasons this concept appears so often in science fiction literature is that many of the writers who made use of it were themselves scientists.
The first known written examples include Robert H. Goddard's "The Last Migration" (1918), which describes an "interstellar ark" containing cryogenically frozen people that sets out for another star system after the death of the Sun. Konstantin E. Tsiolkovsky later wrote of a "Noah's Ark" in his 1928 essay "The Future of Earth and Mankind". Here, the crews were kept in wakeful conditions until they reached their destination thousands of years later.
By the latter half of the 20th century, with works like Robert A. Heinlein's Orphans of the Sky, Arthur C. Clarke's Rendezvous with Rama and Ursula K. Le Guin's Paradises Lost, the concept began to be explored as a distant possibility for interstellar travel. And in 1964, Dr. Robert Enzmann proposed a concept for an interstellar spacecraft, known as the Enzmann Starship, that included detailed notes on how it would be constructed.
Powered by deuterium engines similar to those called for in the Orion spacecraft, Enzmann's ship would measure some 600 meters (2,000 feet) long and would support an initial crew of 200 people, with room for expansion. An entirely serious proposal, complete with a detailed assessment of how it would be constructed, the Enzmann concept began appearing in a number of science fiction and science fact magazines by the 1970s.
Although this sort of ship frees its makers from the burden of devising a sufficiently fast or fuel-efficient engine, it comes with its own share of problems. First and foremost, there's the cost of building such a behemoth. Slow boat or not, the financial and resource burden of building a mobile, self-contained world exceeds most countries' annual GDP. Only through sheer desperation and global cooperation could anyone conceive of building such a thing.
Second, there’s the issue of the crew’s needs, which would require self-sustaining systems to ensure food, water, energy, and sanitation over a very long haul. This would almost certainly require that the crew remain aware of all its technical needs and continue to maintain it, generation after generation. And given that the people aboard the ship would be stuck in a comparatively confined space for so long, there’s the extreme likelihood of breakdown and degenerating conditions aboard.
Third, there’s the fact that the radiation environment of deep space is very different from that on the Earth’s surface or in low earth orbit. The presence of high-energy cosmic rays would pose all kinds of health risks to a crew traveling through deep space, so the effects and preventative measures would be difficult to anticipate. And last, there’s the possibility that while the slow boat is taking centuries to get through space, another, better means of space travel will be invented.
Faster-Than-Light (FTL) Travel: Last, we have the most popular concept to come out of science fiction, but one which has received very little support from the scientific community. Whether it was the warp drive, the hyperdrive, the jump drive, or the subspace drive, science fiction has long exploited the holes in our knowledge of the universe and its physical laws to speculate that one day we might bridge the vast distances between star systems.
However, there are numerous science-based challenges to this notion that would make an FTL enthusiast want to give up before they even get started. For one, there's Einstein's theory of Special Relativity, which establishes the speed of light (c) as the uppermost speed at which anything can travel. For massless particles like photons, which do not experience time, travelling at the speed of light is a given. But for ordinary matter, which has mass and is affected by time, reaching the speed of light is a physical impossibility.
For one, the amount of energy needed to accelerate an object with mass to such speeds is unfathomable, and the effects of time dilation – time slowing down aboard the ship as it approaches the speed of light – would be profound. What's more, actually reaching the speed of light would, in this view, cause our ordinary matter (i.e. our ships and bodies) to fly apart and become pure energy. In essence, we'd die!
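To put a rough number on "unfathomable", here is a minimal sketch using the standard relativistic kinetic-energy formula. The 1,000-tonne ship is an assumed example, not a figure from anything above; for comparison, humanity's total annual energy consumption is on the order of a few times 10²⁰ joules:

```python
import math

# Relativistic kinetic energy for a massive ship at various fractions of c.
# The 1,000-tonne ship mass is an assumed example, not a figure from the article.
C = 299_792_458          # m/s
SHIP_MASS = 1.0e6        # kg (1,000 tonnes, assumed)

for beta in (0.5, 0.9, 0.99):
    gamma = 1.0 / math.sqrt(1.0 - beta**2)          # Lorentz factor
    kinetic_energy = (gamma - 1.0) * SHIP_MASS * C**2
    print(f"v = {beta:.2f} c: gamma = {gamma:.2f}, "
          f"kinetic energy ~ {kinetic_energy:.1e} J, "
          f"shipboard clocks run {gamma:.2f}x slower")
```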
Naturally, there have been those who have tried to use General Relativity, which allows for the existence of wormholes, to postulate that it would be possible to move instantaneously from one point in the universe to another. These theories of "folding space", or "jumping" through space-time, suffer from their own problems. Not only are they purely speculative, but they raise all kinds of questions about temporal mechanics and causality. If these wormholes are portals, why just portals in space and not in time?
And then there's the concept of a quantum singularity, which is often featured in talk of FTL. The belief here is that an artificial singularity could be generated, opening a corridor in space-time that could then be traversed. The main problem is that such an idea is likely suicide. A quantum singularity – essentially a black hole – is a point in space where gravity becomes so extreme that the known laws of physics break down; hence the term singularity.
Also, black holes are created by gravity so strong that it tears a hole in space-time, and the resulting hole draws everything around it, including light itself, into its maw. It is therefore impossible to know what resides on the other side of one, and astronomers routinely observe black holes (most notably Sagittarius A* at the center of our galaxy) swallowing matter and belching out X-rays – evidence of its destruction. How anyone could think these were a means of safe space travel is beyond me! But then again, they are a plot device, not a serious idea…
But before you go thinking that I'm dismissing FTL in its entirety, there is one possibility that has the scientific community buzzing and even looking into it. It's known as the Alcubierre Drive, a concept proposed by physicist Miguel Alcubierre in his 1994 paper "The Warp Drive: Hyper-Fast Travel Within General Relativity."
The equations and theory behind his concept postulate that, since space-time can be contracted and expanded, space behind a starship could be made to expand rapidly while space in front of it contracts, pushing the craft forward. Passengers would perceive it as movement despite the complete lack of local acceleration, and vast distances (i.e. light years) could be covered in a matter of days and weeks instead of decades. What's more, this "warp drive" would allow for FTL travel while remaining consistent with Einstein's theory of relativity.
In October 2011, physicist Harold White attempted to rework the equations while in Florida helping to kick off NASA and DARPA's joint 100 Year Starship project. While putting together his presentation on warp drives, he began toying with Alcubierre's field equations and came to the conclusion that something truly workable was there. In October of 2012, he announced that he and his NASA team would be working towards its realization.
But while White himself claims it's feasible, and has the support of NASA behind him, the mechanics behind it all are still theoretical, and White admits that the energy required to pull off this kind of "warping" of space-time is beyond our means at the current time. Clearly, more time and development are needed before anything of this nature can be realized. Fingers crossed the field equations hold, because that would mean it is at least theoretically possible!
Summary: In case it hasn't been made manifestly obvious by now, there's no simple solution. In fact, just about every possibility currently under scrutiny suffers from the same problem: the means just don't exist yet to make them happen. But even if we can't yet reach for the stars, that shouldn't deter us from reaching for objects that are significantly closer to home. In the many decades it will take us to reach the Moon, Mars, the Asteroid Belt, and Jupiter's moons, we are likely to revisit this problem many times over.
And I'm sure that in the course of creating off-world colonies, reducing the burden on planet Earth, developing solar power and other alternative fuels, and basically working towards this thing known as the Technological Singularity, we're likely to find that we are capable of far more than we ever thought. After all, what are money, resources, or energy requirements when you can harness quantum energy, mine asteroids, and turn AIs and augmented minds loose on the problem of solving field equations?
Yeah, take it from me, the odds are pretty much even that we will be making it to the stars in the not-too-distant future, one way or another. As far as probabilities go, there’s virtually no chance that we will be confined to this rock forever. Either we will branch out to colonize new planets and new star systems, or go extinct before we ever get the chance. I for one find that encouraging… and deeply disturbing!
In July of 2012, scientists working for the CERN Laboratory in Geneva, Switzerland announced that they believed they had found the elusive “God Particle” – aka. the Higgs Boson. In addition to ending a decades-long search, the discovery also solved one of the greatest riddles of the universe, confirming the Standard Model of particle physics and shedding light on how the universe itself came to be.
But of course, this discovery needed to be confirmed before the scientific community could accept its existence as fact. The announcement made in July indicated that what the CERN scientists had found appeared to be the Higgs Boson, in that it fit the characteristics of the hypothetical subatomic particle. But as of last Thursday, they claimed that they are now quite certain that this is what they observed.
Joe Incandela, a physicist who heads one of the two main teams at CERN (each made up of over 3,000 researchers), said: "To me it is clear that we are dealing with a Higgs boson, though we still have a long way to go to know what kind of Higgs boson it is." In essence, he and his colleagues believe there may be several types of Higgs to be found, each of which behaves a little differently.
This was no small challenge, as the Higgs makes an appearance only about once in every trillion collisions. Originally theorized in 1964 by British physicist Peter Higgs to explain why matter has mass, it had long been suspected that the Higgs stood alone, explaining how the six "flavors" of quarks, six types of leptons, and twelve gauge bosons interact. Now, it may be the case that there are several, each of which behaves differently and is responsible for different functions.
And of course, there are several larger mysteries that remain to be solved, which the discovery of the Higgs is expected to shed light on. These include why gravity is so weak, what the dark matter is that is believed to make up a large part of the total mass in the universe, and just how all the major forces of the universe work together to define this thing we know as reality.
These include gravity, weak and strong nuclear forces, and electromagnetism. The Theory of Relativity explains how gravity works, while Quantum Theory explains the other three. What has been missing for some time is a “Grand Unifying Theory”, something which could explain how these two theories could co-exist and account for all the basic forces of the universe.
If we can do that, we will have accomplished what Stephen Hawking has dreamed of for some time, and in effect be one step closer to what he described as: “understanding the mind of God”.
Despite how far solar cells have come in recent years, issues like production and installation costs have remained an ongoing obstacle to their full scale adoption. But as they say, obstacles are meant to be overcome, and can often produce very interesting solutions. For example, peel and stick solar panels that can be manufactured by a 3D printer are one option. Another is the recent creation of a solar cell as thin as a strand of hair. And as it happens, a third has just been unveiled.
This latest one comes to us from the University of Oslo, where researchers have devised a way to produce silicon solar cells that are twenty times thinner than commercial cells. Typically, solar cells are fashioned from 200-micrometer-thick (0.2 mm) wafers of silicon which, given their average rate of power generation, work out to about five grams of silicon per watt of solar power. Combined with all the silicon wasted in production, this makes for a very inefficient use of material.
One way around this is to reduce the thickness of solar wafers, but this presents its own problems. As the wafer gets thinner, more light passes straight through the silicon, dramatically reducing the amount of electricity produced by the photovoltaic effect. Blue light, which has a short wavelength, can be absorbed by a very thin solar cell; but red light, which has longer wavelengths, can only be captured by thicker wafers.
Enter the breakthrough from the Oslo researchers: a technique involving microbeads – tiny plastic spheres that create an almost perfect periodic pattern on the silicon. These beads force the sunlight to "move sideways" through the wafer, ensuring a more uniform and powerful rate of absorption. Another trick is to dot the backs of each cell with asymmetric micro-indentations, which trap even more solar energy.
Using these techniques, silicon wafers can be created that measure a mere 10 micrometers in thickness but can do the job of a 200 micrometer cell. By using 95% less silicon, the cost of production drops considerably, which will reduce the cost of solar power installations and – more importantly – increase profits. With current production methods and costs, the profit margin associated with solar power is pretty negligible.
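A quick sketch of where that "95% less silicon" figure comes from, assuming (as the article's numbers imply) that silicon mass scales linearly with wafer thickness and that the thinner cell delivers the same wattage:

```python
# Where the "95% less silicon" figure comes from, taking the article's numbers
# at face value: ~5 grams of silicon per watt for a standard 200-micron wafer.
SI_PER_WATT_CONVENTIONAL = 5.0   # grams per watt at 200 micrometers
CONVENTIONAL_UM = 200.0
THIN_UM = 10.0

# Silicon mass scales roughly linearly with wafer thickness.
si_per_watt_thin = SI_PER_WATT_CONVENTIONAL * (THIN_UM / CONVENTIONAL_UM)
savings = 1.0 - si_per_watt_thin / SI_PER_WATT_CONVENTIONAL

print(f"Thin cell: ~{si_per_watt_thin:.2f} g of silicon per watt")
print(f"Silicon saved: ~{savings:.0%}")   # ~95%, matching the claim above
```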
That profit margin is especially important when it comes to commercial adoption. If we expect industries to adopt solar power for their energy needs, it has to be worth their while. At the moment, the Oslo researchers are in talks with industrial partners to investigate whether these methods can be scaled up to industrial production. Given the nature of their work, they seem quite confident that their technology could come to market within five to seven years.
Stay tuned for more installments in the PBTS series!
Remember that scene in Prometheus when David, the ship's AI, was studying ancient languages in the hopes of being able to speak to the Engineers? The logic was that since the Engineers were believed to have visited Earth many millennia ago to tamper with human evolution, they were also responsible for our earliest known languages. In David's case, this meant reconstructing the ancient tongue known as Proto-Indo-European.
Given that my wife is a linguistics major, and that I love all things ancient and historical, I found the concept pretty intriguing – even if it was a little Ancient Astronauts-y. To think that we could trace words and meanings back through endless iterations to determine what the earliest language recognized by linguists sounded like! Given how many tongues it has "parented", it would be cool to meet the common ancestor.
And now there is a piece of software that can do just that. Thanks to a group of linguists and computer scientists in the US and Canada, this program has shown the ability to analyze enormous groups of languages to reconstruct the earliest human languages, long before there was writing. By using this program and others like it, linguists may one day know how people sounded when they talked 20,000 years ago.
Alexandre Bouchard-Côté, a University of British Columbia statistician, began working on the program when he was a graduate student at UC Berkeley. By using algorithms to compare sounds and cognates across hundreds of different modern languages, he found he could predict which language groups were most related to each other. Basically, a sound that remained the same across distantly-related languages most likely existed early in our linguistic evolutionary tree.
Modern linguists speculate that the earliest languages that led to today's tongues include Proto-Indo-European, Proto-Afroasiatic and Proto-Austronesian. These are the ancestral language families that gave rise to languages like Celtic, Germanic, Italic and Slavic; Arabic, Hebrew, Cushitic and Somali; and Samoan, Tahitian, and Maori. Though by no means the only language family trees (they do not account for Sub-Saharan Africa or the pre-Columbian Americas, for example), they do encompass the majority of languages spoken today.
For their purposes, Bouchard-Côté and his colleagues focused on Proto-Austronesian, the family which led to today's Polynesian languages as well as languages in Southeast Asia and parts of continental Asia. Using the software they developed, they analyzed over 600 Austronesian languages to reconstruct their ancient ancestral forms, and published their findings in the December issue of Proceedings of the National Academy of Sciences.
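To give a flavor of the underlying intuition – sounds that stay stable across daughter languages are good candidates for the ancestral form – here is a toy sketch. It is emphatically not Bouchard-Côté's actual model (which is a full probabilistic reconstruction of sound change), and the word lists are made up for illustration:

```python
from collections import Counter

# Toy illustration only -- NOT the team's actual algorithm. The idea: for each
# cognate set, vote on the most common initial sound across daughter languages
# and treat it as the likeliest ancestral sound. Word lists are invented.
cognate_sets = {
    "water": ["wai", "vai", "wair"],
    "fire":  ["afi", "ahi", "api"],
    "stone": ["fatu", "batu", "fatu"],
}

for meaning, words in cognate_sets.items():
    votes = Counter(word[0] for word in words)        # compare initial sounds
    sound, count = votes.most_common(1)[0]
    print(f"{meaning}: likely ancestral initial sound '{sound}' "
          f"({count} of {len(words)} daughter languages agree)")
```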
In their paper, Bouchard-Côté and his researchers said this of their new program:
“The analysis of the properties of hundreds of ancient languages performed by this system goes far beyond the capabilities of any previous automated system and would require significant amounts of manual effort by linguists.”
Ultimately, this program could allow linguists to hear languages that haven’t been spoken in millennia, reconstructing a lost world where those languages spread across the world, evolving as they went. In addition, it could be used for linguistic futurism, anticipating how languages may evolve over time and surmising what people will speak and sound like hundreds or even thousands of years from now.
Personally, I think the ability to look back and know what our ancestors sounded like is the real prize, but I'd be a poor sci-fi nerd if I didn't at least fantasize about what our language patterns will sound like down the road. Lord knows it's been speculated about plenty of times, with examples ranging from Galach (the Slavic-English hybrid from Dune), to the Chinese-English mix used in Firefly, to Cityspeak from Blade Runner.
Hey, remember this little gem? Bonus points to anyone who can translate it for me (without consulting Google Translate!):
Monsieur, azonnal kövessen engem, bitte! Lófaszt! Nehogy már! Te vagy a Blade, Blade Runner! Captain Bryant toka. Meni-o mae-yo.
Since it was first clinically observed in 1981, HIV/AIDS has been responsible for an estimated 25 million deaths worldwide. As of 2010, an estimated 34 million people were living with HIV, most of them in the developing world. In spite of antiretroviral medicines that make HIV manageable, countless people still die as a result of improper treatment or a lack of access.
As such, it's little wonder that medical researchers have been working for decades to find a cure. If it were possible to inoculate against HIV, the disease would all but disappear within a few generations. And if it were also possible to cure those already infected, and worldwide access were assured, HIV and AIDS could very well be eliminated within a decade or less.
Not too long ago, researchers at Caltech experimented with HIV antibodies that could very well lead to a vaccine in the near future. But even more exciting was the announcement earlier this month from the Washington University School of Medicine in St. Louis, where a research team demonstrated that nanoparticles carrying a toxin found in bee venom were capable of killing HIV. With this latest breakthrough, it seems that the days of one of the greatest plagues in history may truly be numbered.
The key to this discovery, made by Samuel A. Wickline and his team at Washington University, is what are known as cytolytic melittin peptides. Melittin is found in bee venom, and has the fortuitous trait of being able to degrade the protective envelope that surrounds HIV. Delivered in large enough concentrations, the researchers observed, the virus was simply unable to withstand the assault.
Moreover, these melittin-loaded nanoparticles left the surrounding cells unharmed, which was no accident. The nanoparticles Wickline and his team developed were endowed with a kind of filter that prevents healthy cells, which are far larger, from coming into contact with the toxin. But HIV, being a tiny virus, is small enough to slip right through these filters, exposing it to the melittin.
Currently, all known forms of HIV treatment involve preventing the virus from replicating to the point where the infection progresses to AIDS. By contrast, this new process attacks the virus itself, focusing on killing it rather than merely limiting its ability to reproduce. Adding to the general sense of excitement is speculation that the same concept could be used to combat other viruses, including hepatitis B and C.
Suggestions are already circulating that, as a topical gel, melittin-loaded nanoparticles could be combined with spermicidal cream to create a contraceptive that also protects against STDs. Not only would this make for truly safe sex; combined with melittin treatments for the infected and preventative vaccinations, it would open up another front in the "war on HIV".
My thanks to Rami for bringing this article to my attention. Since he pointed it out, it's been making quite a few waves in the medical community and the general public! Stories like these give me hope for the future…
Welcome back to another installment in PBTS! Today's news item is a rather interesting one, and it comes to us from the University of Delaware, where researcher Erik Koepf has come up with an interesting twist on solar power. In most cases, scientists use cells that absorb photons and turn them into a flow of electrons. But in Koepf's case, sunlight is used in a different way; namely, as a means of creating alternative fuels.
Basically, the concept for Koepf's new solar-powered reactor revolves around getting directly at hydrogen, the clean-burning ingredient bound up in conventional fuels like coal and other hydrocarbons. While those are decent enough energy sources, they do not burn clean, owing to the impurities they carry and the by-products they create. If we could get at the essential hydrogen on its own, we would have a clean-burning and efficient energy supply without the hassle of pollution.
And that's where the solar reactor comes in. As the name suggests, the reactor relies on the Sun's energy, which it uses to split water molecules and get at their hydrogen atoms. This is done by exposing zinc oxide powder on a ceramic surface to massive amounts of focused sunlight. From there, a thermochemical reaction splits water apart into oxygen and hydrogen.
Though it may sound complicated, the sheer beauty of this concept lies in the fact that it uses the Sun's abundant energy to do the heavy lifting of splitting molecules apart. No particle accelerators, no nuclear fusion or fission; and best of all, no pollution! Since the process creates no emissions or greenhouse gases, this is perhaps one of the most environmentally friendly energy concepts to date.
But of course, the project has some additional requirements which fall under the heading "additional parts sold separately". For one, the reactor needs to get seriously hot – between 1,750° and 1,950° Celsius (3,182° to 3,542° Fahrenheit) – before it can get to the work of splitting water molecules. For this, a focusing mirror roughly 13 square meters in area, flawlessly flat and 98% reflective, is needed.
No such mirror existed when Koepf and Michael Giuliano (his research associate) got started, so they had to develop their own. In addition, that mirror needs to focus the solar energy it collects onto a tiny six-centimeter circle that has to be precisely aimed. If the light strays even a millimeter or two to one side, the entire reactor could be damaged. In essence, the system is simple and ingenious, but also temperamental and very fragile.
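For a sense of how demanding that optical setup is, here's a quick sketch of the concentration factor implied by the figures above – simple geometry on the quoted numbers, nothing more:

```python
import math

# Implied optical concentration: a ~13 square-meter mirror at 98% reflectivity
# focusing onto a six-centimeter circle, using only the figures quoted above.
mirror_area_m2 = 13.0
reflectivity = 0.98
spot_diameter_m = 0.06

spot_area_m2 = math.pi * (spot_diameter_m / 2) ** 2
concentration = mirror_area_m2 * reflectivity / spot_area_m2

print(f"Focal spot area: {spot_area_m2 * 1e4:.1f} cm^2")
print(f"Concentration factor: ~{concentration:,.0f}x")

# Sanity check on the quoted temperature window (Celsius to Fahrenheit).
for celsius in (1750, 1950):
    print(f"{celsius} C = {celsius * 9 / 5 + 32:.0f} F")
```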
What's more, just how efficient it is remains to be seen. While the first tests succeeded in creating small amounts of hydrogen, the real test will take place next month when the duo present their reactor in Zurich, Switzerland, where it will run at full power for the very first time. Naturally, expectations are high, but it is too soon to tell whether this represents the future of energy or another failed attempt at viable alternative power.
Curiosity has just finished analyzing the samples collected from its first drilling operation at the John Klein rock formation in Yellowknife Bay. And what it found confirms what scientists have suspected about the Red Planet for some time. Contained within the grey dust collected from the rock's interior, the rover discovered some of the key chemical ingredients necessary for life to have thrived on early Mars billions of years ago.
After running the two aspirin-sized samples through its two analytical chemistry labs (SAM and CheMin), the Mars Science Laboratory was able to identify the presence of carbon, hydrogen, oxygen, nitrogen, sulfur and phosphorus in the sample – all of which are essential constituents for life as we know it based on organic molecules.
What’s more, according to David Blake – the principal investigator for the CheMin instrument – a large portion of the sample was made up of clay minerals, which in itself is telling. The combined presence of these basic elements and abundant phyllosilicate clay minerals indicate that the area was once home to a fresh water environment, one where Martian microbes could once have thrived in the distant past.
By confirming this, the Curiosity rover has officially met one of its most important research goals – showing that all the elements necessary for life to flourish were once present on Mars. And when you consider that the Curiosity team was not expecting to find evidence of phyllosilicate minerals in Gale Crater, the find was a particular delight. Based on spectral observations conducted from orbit, phyllosilicates were only expected in the lower reaches of Mount Sharp, which is Curiosity's ultimate destination.
So what’s next for Curiosity? According to John Grotzinger, the Principal Investigator for the Mars Science Laboratory, Curiosity will remain in the Yellowknife Bay area for several additional weeks or months to fully characterize the area. The rover will also conduct at least one more drilling campaign to try and replicate the results, check for organic molecules and search for new discoveries.
Sounds like the title of a funky children’s story, doesn’t it? But in fact, it’s actually part of NASA’s plan for building a Lunar base that could one day support inhabitants and make humanity a truly interplanetary species. My thanks to Raven Lunatick for once again beating me to the punch! While I don’t consider myself the jealous type, knowing that my friends and colleagues are in the know before I am on stuff like this always gets me!
In any case, people may recall that back in January of 2013, the European Space Agency announced that it could be possible to build a Lunar Base using 3D printing technology and moon dust. Teaming up with the architecture firm Foster + Partners, they were able to demonstrate that one could fashion entire structures cheaply and quite easily using only regolith, inflatable frames, and 3D printing technology.
And now, it seems that NASA is on board with the idea and is coming up with its own plans for a lunar base. Much like the ESA's planned habitat, NASA's would be located at the Shackleton Crater near the Moon's south pole, where sunlight (and thus solar energy) is nearly constant on the crater's rim thanks to the Moon's slight axial tilt. What's more, NASA's plan would also rely on the combination of lunar dust and 3D printing for construction.
However, the two plans differ in some key respects. For one, NASA's plan – which goes by the name SinterHab – is far more ambitious. A joint research project between space architects Tomas Rousek, Katarina Eriksson and Ondrej Doule and scientists from NASA's Jet Propulsion Laboratory (JPL), SinterHab is so named because it involves sintering lunar dust: heating it with microwaves to the point where the particles fuse into a solid, ceramic-like block.
This would mean that bonding agents would not have to be flown to the Moon, as the ESA's plan calls for. What's more, the NASA base would be constructed by a series of giant spider robots designed by JPL Robotics. The prototype of this mechanical spider is known as the Athlete rover, which, despite being a half-size variant of the real thing, has already been successfully tested on Earth.
Each one of these robots is human-controlled, has six 8.2m legs with wheels at the end, and comes with a detachable habitable capsule mounted at the top. Each limb has a different function, depending on what the controller is looking to do. For example, it has tools for digging and scooping up soil samples, manipulators for poking around in the soil, and will have a microwave 3D printer mounted on one of the legs for the sake of building the base. It also has 48 3D cameras that stream video to its operator or a remote controlling station.
The immediate advantages of NASA's plan are pretty clear. Sintering is quite cheap, in terms of power as well as materials, and current estimates suggest an Athlete rover should be able to construct a habitation "bubble" in only two weeks. Another benefit of the process is that astronauts could use it on the surface surrounding their base, binding the dust and stopping it from clogging their equipment. Moon dust is extremely abrasive, made up of tiny, jagged particles rather than finely eroded spheres.
Since it was first proposed in 2010 at the International Astronautical Congress, the SinterHab concept has been continually refined and updated. In the end, a base built to its specifications will look like a rocky mass of bubbles connected together, with cladding added later. The equilibrium and symmetry of this design not only make it easy to group the bubbles together, but also help guarantee the structural integrity and longevity of the structures.
As engineers have known for quite some time, there's just something about domes and bubble-like structures that were made to last. Ever been to St. Peter's Basilica in Rome, or the Blue Mosque in Istanbul? Ever looked at a centuries-old building topped with an onion dome and felt awed by its beauty? Well, there's a reason they're still standing! Knowing that we can expect similar beauty and engineering brilliance down the road gives me comfort.
In the meantime, have a gander at the gallery for the proposed SinterHab base, and be sure to check out this video of the Athlete rover in action:
Researchers continue to work steadily to make the dream of abundant solar energy a reality. And in recent years, a number of ideas and projects have begun to bear fruit. Earlier this year, there was the announcement of a new kind of "peel and stick" solar panel, which was quite impressive. Little did I know, this was just the tip of the iceberg.
Since that time, I have come across four very interesting stories that talk about the future of solar power, and I feel the need to share them all! But, not wanting to fill your page with a massive post, I’ve decided to break them down and do a week long segment dedicated to emerging solar technology and its wicked-cool applications. So welcome to the first installment of Powered By The Sun!
The first story comes to us by way of SpaceX, Deep Space Industries, and other commercial space agencies that are looking to make space-based solar power (SBSP) a reality. For those not familiar with the concept, this involves placing a solar farm in orbit that would then harvest energy from the sun and then beam the resulting electricity back to Earth using microwave- or laser-based wireless power transmission.
Originally described by Isaac Asimov in his short story “Reason”, the concept of an actual space-based solar array was first adopted by NASA in 1974. Since that time, they have been investigating the concept alongside the US Department of Energy as a solution to the problem of meeting Earth’s energy demands, and the cost of establishing a reliable network of arrays here on Earth.
Constructing large arrays on the surface is a prohibitively expensive and inefficient way of gathering power, due largely to weather patterns, seasons, and the day-night cycle which would interfere with reliable solar collection. What’s more, the sunniest parts of the world are quite far from the major centers of demand – i.e. Western Europe, North America, India and East Asia – and at the present time, transmitting energy over that long a distance is virtually impossible.
NASA “Suntower” concept
By comparison, an orbiting SBSP installation would have numerous advantages. Operating outside the Earth's atmosphere, it would receive about 30% more energy from the Sun, could operate for almost 24 hours a day, and, if placed directly above the equator, would not be affected by the seasons either. But the biggest benefit of all would be the ability to beam the power directly to wherever it is needed.
But of course, cost remains an issue, which is the only reason why NASA hasn’t undertaken to do this already. Over the years, many concepts have been considered over at NASA and other space agencies. But due to the high cost of putting anything in orbit, moving up all the materials required to build a large scale installation was simply not cost effective.
However, that is all set to change. Companies like SpaceX, which has already carried out commercial spaceflights (such as the first commercial resupply of the ISS in May of 2012, pictured above), are working on ways to lower the cost of putting materials and supplies into orbit. Currently, it costs about $20,000 to place a kilogram (2.2 lbs) into geostationary orbit (GSO), and about half that for low-Earth orbit (LEO). But SpaceX's CEO, Elon Musk, has said that he wants to bring the price down to $500 per pound, at which point things become much more feasible.
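To see why that price target matters so much, here's a quick sketch of the launch-cost arithmetic using the figures above. The 1,000-tonne array at the end is a purely hypothetical mass chosen for illustration, not a number from any actual proposal:

```python
# Launch-cost arithmetic using the figures quoted above.
KG_PER_LB = 0.4536

gso_per_kg_today = 20_000.0                 # ~$20,000 per kg to GSO
leo_per_kg_today = gso_per_kg_today / 2     # roughly half that for LEO
musk_target_per_kg = 500.0 / KG_PER_LB      # $500 per pound, converted to per kg

print(f"Musk's target works out to ~${musk_target_per_kg:,.0f} per kg")
print(f"~{gso_per_kg_today / musk_target_per_kg:.0f}x cheaper than GSO today, "
      f"~{leo_per_kg_today / musk_target_per_kg:.0f}x cheaper than LEO today")

# Hypothetical example: the launch bill for a 1,000-tonne orbital array
# (the mass is an assumption for illustration, not from any actual proposal).
array_mass_kg = 1_000_000
print(f"At today's GSO rate: ${array_mass_kg * gso_per_kg_today / 1e9:.0f} billion just to launch")
print(f"At the target rate:  ${array_mass_kg * musk_target_per_kg / 1e9:.1f} billion")
```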
And when that happens, there will be no shortage of clients looking to put an SBSP array into orbit. In the wake of the Fukushima accident, the Japanese government announced plans to launch a two-kilometer-wide, 1-gigawatt SBSP plant into space. The Russian space agency already has a working 100-kilowatt SBSP prototype, but has not yet announced a launch date. And China, the Earth's fastest-growing consumer of electricity, plans to put a 100 kW SBSP array into low-Earth orbit by 2025.
Most notable, however, is John Mankins, the CTO of Deep Space Industries and a 25-year NASA veteran, who has produced an updated report on the viability of SBSP. His conclusion, in short, is that it should be possible to build a small-scale pilot solar farm, dubbed SPS-ALPHA, for $5 billion, and a large-scale, multi-kilometer-wide power plant for $20 billion. NASA's funding for SPS-ALPHA dried up last year, but presumably Mankins' work continues at Deep Space Industries.
Cost and the long-term hazards of keeping an array in space remain concerns, but considering its long-term importance and the shot in the arm space exploration has received in recent years – i.e. the Curiosity rover, the proposed L2 Moon outpost, manned missions to Mars by 2030 – we could be looking at the full-scale construction of orbital power plants sometime early in the next decade.
And it won’t be a moment too soon! Considering Earth’s growing population, its escalating impact on the surface, the limits of many proposed alternative fuels, and the fact that we are nowhere near to resolving the problem of Climate Change, space-based solar power may be just what the doctor ordered!
Thanks for reading and stay tuned for the next installment in the Powered By The Sun series!