According to a paper published on July 10 in The Astrophysical Journal, a radio burst has been detected that may have originated outside our galaxy. These split-second radio bursts have been heard before, but always with the same telescope – the Parkes Observatory in Australia. Because only that one observatory was detecting the signals, there was debate about whether they were coming from inside our galaxy, or even from Earth itself.
This time, however, the radio signals were detected by a different telescope – the Arecibo Observatory in Puerto Rico – strongly suggesting that the bursts really do come from outside the galaxy. This is also the first time one of these bursts has been found in the northern hemisphere of the sky. Exactly what may be causing such radio bursts remains a major new enigma for astrophysicists.
Victoria Kaspi, an astrophysics researcher at McGill University who participated in the research, explained:
Our result is important because it eliminates any doubt that these radio bursts are truly of cosmic origin. The radio waves show every sign of having come from far outside our galaxy – a really exciting prospect.
Fast radio bursts are flurries of radio waves that last a few thousandths of a second; according to the Max Planck Institute for Radio Astronomy, only about seven of them occur across the entire sky in any given minute, on average. Their cause is unknown, and the possibilities range from black holes, to neutron stars merging, to the magnetic field of a pulsar (a type of neutron star) flaring up.
The pulse was detected on Nov. 2, 2012, at the Arecibo Observatory – a National Science Foundation-sponsored facility that is home to the world’s largest and most sensitive radio telescope. While fast radio bursts last just a few thousandths of a second and have rarely been detected, the international team of scientists reporting the Arecibo finding estimates that these bursts occur roughly 10,000 times a day over the whole sky.
This astonishingly large number is inferred by calculating how much sky was observed, and for how long, in order to make the few detections that have so far been reported. Laura Spitler, a postdoctoral researcher at the Max Planck Institute for Radio Astronomy in Bonn, Germany and the lead author of the new paper, was also the first person to note the event. As she explained:
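As a toy illustration of that inference (the survey numbers below are made-up placeholders, not the paper's actual figures), the all-sky rate scales as the number of detections divided by the fraction of sky covered and the time spent observing it:

```python
# Back-of-envelope sketch of how an all-sky burst rate is inferred.
# The inputs are illustrative placeholders, not the survey's real values.

def all_sky_rate_per_day(n_detected, sky_fraction, days_observed):
    """Extrapolate a handful of detections to a whole-sky daily rate."""
    return n_detected / (sky_fraction * days_observed)

# One burst found while covering ~0.002% of the sky for ~5 effective days
rate = all_sky_rate_per_day(1, 2e-5, 5)
print(f"~{rate:,.0f} bursts per day over the whole sky")
```

The point of the exercise is simply that a tiny field of view observed for a short time turns even a single detection into an enormous implied all-sky rate.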
The brightness and duration of this event, and the inferred rate at which these bursts occur, are all consistent with the properties of the bursts previously detected by the Parkes telescope in Australia.
The bursts appear to be coming from beyond the Milky Way, based on measurement of an effect known as plasma dispersion. Pulses that travel through the cosmos are distinguished from man-made ones by the effect of interstellar electrons, which cause radio waves to travel more slowly at lower radio frequencies. The burst detected by the Arecibo telescope has three times the maximum dispersion measure that would be expected from a local source.
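For the curious, the delay that defines the dispersion measure follows a standard relation from pulsar astronomy: lower frequencies arrive later by roughly Δt ≈ 4.149 ms × DM × (ν_lo⁻² − ν_hi⁻²), with DM in pc cm⁻³ and frequencies in GHz. A quick sketch (the DM value here is illustrative, not the burst's reported figure):

```python
# Sketch: frequency-dependent dispersion delay for a radio burst,
# using the standard pulsar dispersion relation. The DM below is
# an illustrative value, not the one reported for the Arecibo burst.

def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Arrival-time delay (ms) of the low-frequency edge of the band
    relative to the high-frequency edge, for DM in pc/cm^3."""
    k_dm = 4.149  # dispersion constant, ms GHz^2 cm^3 / pc
    return k_dm * dm * (f_lo_ghz**-2 - f_hi_ghz**-2)

# Example: DM of 500 pc/cm^3 across a 1.2-1.5 GHz observing band
print(f"{dispersion_delay_ms(500, 1.2, 1.5):.1f} ms")
```

A measured delay much larger than any plausible Galactic electron column is exactly the signature that points to an extragalactic origin.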
Efforts are now under way to detect radio bursts using radio telescopes that can observe broad swaths of the sky to help identify them. Telescopes under construction in Australia and South Africa as well as the CHIME telescope in Canada have the potential to detect fast radio bursts. Astronomers say these and other new facilities could pave the way for many more discoveries and a better understanding of this mysterious cosmic phenomenon.
For those hoping this was a possible resolution to the Fermi Paradox – i.e. that the radio bursts might have been extra-terrestrial in origin – this news is a little disappointing. But in truth, it’s yet another example of the deeper mysteries of the universe at work. Much like our ongoing research into the world of elementary particles, every answer gives rise to new questions.
When it comes to observational astronomy, scientists and cosmologists have been facing a sort of crisis of late. With so many instruments aimed at the heavens, recording what little information makes it all the way to Earth, simply observing distant stars has been yielding diminishing returns. To keep moving forward, we must observe the most unusual and, in many cases, violent cosmic events, which offer truly novel data.
This presents a bit of a challenge, since the space industry can’t possibly set up enough telescopes to look at every part of the night sky all at once. With so vast a volume to survey, it would seem a lost cause to try to capture unexpected, short-lived events. And yet, one such event, one that is truly cosmic in nature (no pun intended!), was captured just recently.
It took place back in late November, when an “armada of instruments” from all over the world saw a massive gamma-ray burst, designated GRB 130427A. This burst was more powerful than many researchers believed was theoretically possible, and is now thought to mark the collapse of a giant star and the birth of a black hole.
The event has been described as a “Rosetta stone moment” by astronomers for a number of reasons. In addition to being a truly rare and awesome sight, this burst has also sent out information that astronomers will be studying for many years to come. And while it’s too soon to draw any real conclusions, there is already widespread excitement about the sheer newness of it.
And yet, GRB 130427A lasted only about 80 seconds at observable intensities, so the fact that it was observed at all – let alone documented so thoroughly – was truly surprising! This was all thanks to Los Alamos National Laboratory in New Mexico, where six robotic cameras – collectively referred to as RAPTOR, or RAPid Telescopes for Optical Response – were able to respond in time to catch the event as it unfolded.
The RAPTOR telescopes are networked together and all obey a central computer “brain”. Between their dedicated computing hardware and robotic swivel-mounts, they can turn to view any point in the sky in less than three seconds. As the world’s fastest “optical response” devices, RAPTOR’s telescopes are designed to make sure we don’t miss astronomical events when they happen, because in astronomy there are no second chances.
The RAPTOR telescopes ensure events aren’t missed by performing extremely diffuse, wide-angle sweeps of the sky, picking up hints about where and when a major event is taking place. When one of the telescopes sees a hint of something promising, it and the others quickly reorient and zoom in to capture it in full detail. And with all six telescopes capturing the same event, the wealth of information gleaned is quite impressive.
The telescopes have different specializations as well. For example, RAPTOR-T views every event through four aligned lenses with four different color filters. By looking at differences in the color distribution of a sample, RAPTOR-T can provide information about the distance to an event (by measuring redshift and blueshift) or about elements of its environment.
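As a rough sketch of the underlying idea (this is illustrative, not RAPTOR-T's actual pipeline), a shift in the wavelength of a known spectral feature directly yields a redshift, and hence a handle on distance:

```python
# Illustrative redshift calculation: compare the observed wavelength of a
# known spectral line to its rest (laboratory) wavelength.

H_ALPHA_REST_NM = 656.28  # rest wavelength of the hydrogen-alpha line, nm

def redshift(observed_nm, rest_nm=H_ALPHA_REST_NM):
    """z > 0 means redshifted (receding); z < 0 means blueshifted."""
    return observed_nm / rest_nm - 1

# A hydrogen-alpha line observed at ~721.9 nm instead of 656.28 nm
print(f"z = {redshift(721.9):.3f}")
```

In practice a broadband color instrument like RAPTOR-T sees this shift as a change in the relative brightness across its four filters rather than as a resolved spectral line, but the principle is the same.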
This gamma-ray burst is thought to be the brightest in decades, perhaps in a century. Had astronomers missed it, it’s unlikely anyone would have gotten another chance to capture one like it. Luckily, the event was also seen by a number of other gamma-ray detectors and X-ray telescopes, including NASA’s Fermi, NuSTAR, and Swift satellites, all of which managed to see some portion of the event as it unfolded.
However, most telescopes joined in to view the event’s so-called afterglow, an incredibly violent phase in which the newly born black hole ejected debris across a wide radius. For several hours this region glowed, and astronomers watched as it faded. The intensity of high-energy gamma rays in the afterglow faded in tandem with its conventional light emissions.
This is one of the first useful bits of information provided by this event – the link between gamma rays and optical phenomena. But this is just one way that it could be astronomy’s latest Rosetta Stone observation. In the next few months, we can all look forward to a slew of exciting updates as astronomers sort through the implications of having witnessed the birth of an unprecedented singularity.
And in the meantime, check out this video of the gamma-ray burst, as observed by the RAPTOR All-Sky Monitor:
Studying the known universe is always interesting, mainly because you never know what you’re going to find. And just when you think you’ve got something figured out – like a moon in orbit around one of the Solar System’s more distant planets – you learn that it can still find ways to surprise you. Interestingly enough, a few surprises have occurred back to back in recent weeks that are making scientists rethink their assumptions about these moons.
The first came from Io, Jupiter’s innermost moon and the most volcanically active body in the Solar System. All told, the surface has over 400 volcanic regions, roughly 100 mountains – some of which are taller than Mount Everest – and extensive lava flows and floodplains of liquid rock that pass between them. All of this has led to the formation of Io’s atmosphere, which is basically a thin layer of toxic fumes.
Given its distance from Earth, it has been difficult to get a good reading on what the atmosphere is made up of. However, scientists believe that it is primarily composed of sulfur dioxide (SO2), with smaller concentrations of sulfur monoxide (SO), sodium chloride (NaCl), and atomic sulfur and oxygen. Various models predict other molecules as well, though these have not yet been observed.
However, a team of astronomers from institutions across the US, France, and Sweden recently set out to better constrain Io’s atmosphere. Back in September they detected the second-most abundant isotope of sulfur (34-S) and tentatively detected potassium chloride (KCl). Expected, but undetected, were molecules like silicon monoxide (SiO), disulfur monoxide (S2O), and other isotopes of sulfur.
But more impressive was the team’s tentative detection of potassium chloride (KCl), which is believed to be part of the plasma torus that Io projects around Jupiter. For some time now, astronomers have postulated that Io’s volcanic eruptions produce this ring of plasma, which includes molecular potassium. By detecting it, the international team effectively found the “missing link” between Io and this feature of Jupiter.
Another find was the team’s detection of sulfur 34-S, an isotope which had never been observed there before. Sulfur 32-S had been detected previously, but the ratio of 34-S to 32-S was twice what scientists believed possible in the Solar System. A fraction this high has been reported only once before, in a distant quasar – an early galaxy consisting of an intensely luminous core powered by a huge black hole.
These observations were made using the Atacama Pathfinder Experiment (APEX) antenna – a radio telescope located in northern Chile. This dish is a prototype antenna for the Atacama Large Millimeter Array (ALMA). And while Io is certainly an extreme example, it will likely help terrestrial scientists characterize volcanism in general – providing a better understanding of it here on Earth as well as outside the Solar System.
The second big discovery was announced just yesterday, and comes from NASA’s Cassini space probe. While investigating Saturn’s largest moon, Titan, Cassini made the first off-world detection of the molecule known as propylene. This simple organic compound is a byproduct of oil refining and fossil fuel extraction, and is one of the most important starting molecules in the production of plastics.
The molecules were detected as Cassini used its infrared spectrometer to stare into the hydrocarbon haze that is Titan’s atmosphere. The discovery wasn’t too surprising, as Titan is full of many different types of hydrocarbons, including methane and propane. But propylene had until now eluded scientists; indeed, this is the first time the molecule has been spotted anywhere outside of Earth.
These findings highlight the alien chemistry of Saturn’s giant moon. Titan has moisture and an atmosphere, much like our own world, except that its rains are made of hydrocarbons and its seas are composed of ethane. Scientists have long wanted to explore this world with a boat-like rover, but given the current budget environment, that’s a distant prospect. Still, sales of propylene on Earth are estimated at $90 billion annually.
While no one is going to be mounting a collection mission to Titan anytime soon, it does offer some possibilities for future missions. These include colonization, where atmospheric propylene could be harvested to produce plastics for settlements. And when it comes to terraforming, knowing the exact chemical makeup of the atmosphere will go a long way towards finding a way to make it breathable and warm.
And in the meantime, be sure to enjoy this video about Cassini’s latest discovery. With the government shutdown in effect, NASA’s resources remain offline. So we should consider ourselves lucky that the news broke before today and hope like hell they get things up and running again soon!
For decades, the Big Bang Theory has remained the accepted theory of how the universe came to be, beating out challengers like the Steady State Theory. However, many unresolved issues remain with this theory, the most notable of which is the question of what could have existed prior to the big bang. Because of this, scientists have been looking for ways to refine the theory.
Luckily, a group of theoretical physicists from the Perimeter Institute (PI) for Theoretical Physics in Waterloo, Ontario have announced a new interpretation on how the universe came to be. Essentially, they postulate that the birth of the universe could have happened after a four-dimensional star collapsed into a black hole and began ejecting debris.
This represents a big revision of the current theory, which holds that the universe grew from an infinitely dense point, or singularity. But what was there before that remains unknown – one of a few limitations of the Big Bang model. In addition, it’s hard to explain why it would have produced a universe with an almost uniform temperature, because the age of our universe (about 13.8 billion years) does not allow enough time for it to reach temperature equilibrium.
Most cosmologists say the universe must have been expanding faster than the speed of light for this to happen. But according to Niayesh Afshordi, an astrophysicist with PI who co-authored the study, even that theory has problems:
For all physicists know, dragons could have come flying out of the singularity. The Big Bang was so chaotic, it’s not clear there would have been even a small homogenous patch for inflation to start working on.
The model Afshordi and his colleagues are proposing is basically a three-dimensional universe floating as a membrane (or brane) in a “bulk universe” that has four dimensions. If this “bulk universe” has four-dimensional stars, those stars could go through the same life cycles as the three-dimensional ones we are familiar with: the most massive would explode as supernovae, shed their outer layers, and have their innermost parts collapse into black holes.
The 4-D black hole would then have an “event horizon”, the boundary between the inside and the outside of a black hole. In a 3-D universe, an event horizon appears as a two-dimensional surface; but in a 4-D universe, the event horizon would be a 3-D object called a hypersphere. And when this 4-D star blows apart, the leftover material would create a 3-D brane surrounding a 3-D event horizon, and then expand.
To simplify it a little, they postulate that the expansion of the universe was triggered by the motion of the universe through a higher-dimensional reality. While it may sound complicated, the theory does explain why the universe continues to expand – and why that expansion is accelerating. Whereas previous theories have credited a mysterious invisible force known as “dark energy”, this new model claims the acceleration is the result of the 3-D brane’s growth.
However, there is one limitation to this theory, which has to do with the nearly uniform temperature of the universe. While the model does explain how this could be, the ESA’s Planck telescope recently mapped the universe and discovered small temperature variations in the cosmic microwave background (CMB). These patches are believed to be leftovers of the universe’s beginnings, and a further indication that the Big Bang model holds true.
The PI team’s own CMB readings differ from this highly accurate survey by about four percent, so now they too are going back to the drawing board to refine their theory. How ironic! Still, the PI team feels the model has worth: while the Planck observations show that inflation happened, they do not show why it happened.
Needless to say, we are nowhere near resolving how the universe came to be, at least not in a way that resolves all the theoretical issues. But that’s the thing about the Big Bang – it’s the scientific equivalent of a Hydra. No matter how many times people attempt to discredit it, it always comes back to reassert its dominance!
After 15 months of observing deep space, scientists with the European Space Agency’s Planck mission have generated a massive heat map of the entire universe. The “heat map”, as it’s called, looks at the oldest light in the universe and uses the data to extrapolate the universe’s age, the amount of matter it holds, and the rate of its expansion. And as usual, what they found was simultaneously reassuring and startling.
When we look at the universe through a thermal imaging system, what we see is a mottled light show caused by cosmic background radiation. This radiation is essentially the afterglow of the Universe’s birth, and is generally seen to be smooth and uniform. This new map, however, provides a glimpse of the tiny temperature fluctuations that were imprinted on the sky when the Universe was just 370,000 years old.
Since it takes light so long to travel from one end of the universe to the other, scientists can tell – using red shift and other methods – how old the light is, and hence get a glimpse at what the universe looked like when the light was first emitted. For example, if a galaxy several billion light years away appears to be dwarfish and misshapen by our standards, it’s an indication that this is what galaxies looked like several billion years ago, when they were in the process of formation.
Hence, like archaeologists sifting through sand for fossil records of the past, scientists believe this map reveals a sort of fossil imprint left by the state of the universe just 10 nano-nano-nano-nano seconds – roughly 10⁻³⁵ seconds – after the Big Bang. The splotches in the Planck map represent the seeds from which the stars and galaxies formed. As is heat-map tradition, reds and oranges signify warmer temperatures, while light and dark blues signify cooler ones.
The cooler spots are where matter was once concentrated and, with the help of gravity, collapsed to form galaxies and stars. Using the map, astronomers determined that there is more matter in the universe than previously thought – around 31.7% – and less dark energy – around 68.3%. This shift in the matter-to-energy ratio also indicates that the universe is expanding more slowly than previously thought, which requires an update to its estimated age.
All told, the universe is now believed to be a healthy 13.82 billion years old. That wrinkles my brain! Also of interest is the fact that this would appear to confirm the Big Bang Theory. Though widely considered scientific canon, there are those who dispute this creation model of the universe and argue for more complex alternatives, such as the “Steady State Theory” (otherwise known as the “Theory of Continuous Creation”).
In this scenario, the majority of matter in the universe was not created in a single event, but gradually through several smaller ones. What’s more, the universe will not inevitably contract back in on itself in a “Big Crunch”, but will instead continue to expand until all the stars have either died out or become black holes. As Krzysztof Gorski, a member of the Planck team at JPL, put it:
This is a treasury of scientific data. We are very excited with the results. We find an early universe that is considerably less rigged and more random than other, more complex models. We think they’ll be facing a dead-end.
Martin White, a Planck project scientist with the University of California, Berkeley and the Lawrence Berkeley National Laboratory, explained further. According to White, the map shows how matter scattered throughout the universe with its associated gravity subtly bends and absorbs light, “making it wiggle to and fro.” As he went on to say:
The Planck map shows the impact of all matter back to the edge of the Universe. It’s not just a pretty picture. Our theories on how matter forms and how the Universe formed match spectacularly to this new data.
The Planck space probe, which launched in 2009 from the Guiana Space Center in French Guiana, is a European Space Agency mission with significant contributions from NASA. The two-ton spacecraft gathers the ancient glow of the Universe’s beginning from a vantage point more than a million and a half kilometers from Earth. This is not the first map produced by Planck; in 2010, it created an all-sky radiation map from which scientists, using supercomputers, removed all interfering foreground light to get a clear view of the deep background of the stars.
However, this is the first time any satellite has pictured the background radiation of the universe at such high resolution. The variation in light captured by Planck’s instruments was less than one hundred-millionth of a degree, requiring the most sensitive of equipment to detect the contrast. So whereas the cosmic background radiation once appeared uniform, or showed only slight variations, scientists can now see even the slightest changes – which is essential to their work.
So in summary, we have learned that the universe is a little older than previously expected, and that it most certainly was created in a single, chaotic event known as the Big Bang. Far from dispelling the greater mysteries, confirming these theories is really just the tip of the iceberg. There’s still the grandiose mystery of how all the fundamental laws such as gravity, nuclear forces and electromagnetism work together.
Ah, and let’s not forget the question of what transpires beneath the veil of an event horizon (i.e., inside a black hole), and whether or not there is such a thing as a gateway through space and time. Finally, there’s the age-old question of whether or not intelligent life exists somewhere out there, or life of any kind. But given the countless stars, planets and possibilities that the universe provides, it almost surely does!
And I suppose there’s also that persistent nagging question we all wonder when we look up at the stars. Will we ever be able to get out there and take a closer look? I for one like to think so, and that it’s just a matter of time!
Back in January, National Geographic Magazine celebrated its 125th anniversary. In honor of this occasion, they released a special issue which commemorated the past 125 years of human exploration and looked ahead at what the future might hold. As I sat in the doctor’s office, waiting on a prescription for antibiotics to combat my awful cold, I found myself terribly inspired by the article.
So naturally, once I got home, I looked up the article and its source material and got to work. The issue of exploration, especially the future thereof, is not something I can ever pass up! So for the next few minutes (or hours, depending on how much you like to nurse a read), I present you with some possible scenarios about the coming age of deep space exploration.
Suffice it to say, National Geographic’s appraisal of the future of space travel was informative and hit on all the right subjects for me. When one considers the sheer distances involved, not to mention the amount of time, energy, and resources it would take to allow people to get there, reaching into the next great frontier poses a great many questions and challenges.
Already, NASA, Earth’s other space agencies, and even private companies have several ideas in the works for returning to the Moon, going to Mars, and reaching the Asteroid Belt. These include the SLS (Space Launch System), a heavy-lift successor to the Saturn V rocket that took the Apollo astronauts to the Moon. Years from now, it may even be taking crews to Mars, with missions slated for the 2030s.
And when it comes to settling the Moon and Mars, and turning the Asteroid Belt into our primary source of mineral extraction and manufacturing, these same agencies and a number of private corporations are all invested in getting it done. SpaceX is busy testing its reusable launch rocket, known as the Grasshopper, in the hopes of making space flight more affordable. And NASA and the ESA are perfecting a process known as “sintering” to turn Moon regolith into bases and asteroids into manufactured goods.
Meanwhile, Virgin Galactic, Reaction Engines, and Golden Spike are planning to make commercial trips into space and to the Moon possible within a few years’ time. And with companies like Deep Space Industries and Google-backed Planetary Resources prospecting asteroids and planning expeditions, it’s only a matter of time before everything from Earth to the Jovian system is being explored and claimed for human use.
But when it comes to deep-space exploration, the stuff that would take us to the outer reaches of the Solar System and beyond, that’s where things get tricky and pretty speculative. Ideas have been on the table for some time, since the last great Space Race forced scientists to consider the long-term and come up with proposed ways of closing the gap between Earth and the stars. But to this day, they remain a scholarly footnote, conceptual and not yet realizable.
But as we embark on a renewed era of space exploration, where the stuff of science fiction is quickly becoming the stuff of science fact, these old ideas are being dusted off, paired with newer concepts, and seriously considered. While they might not be feasible at the moment, who knows what tomorrow holds? From the issues of propulsion, to housing, to cost and time expenditures, the human race is once again taking a serious look at extra-Solar exploration.
And here are some of the top contenders for the “Final Frontier”:
Nuclear Propulsion: The concept of using nuclear bombs (no joke) to propel a spacecraft was first proposed in 1946 by Stanislaw Ulam, a Polish-American mathematician who participated in the Manhattan Project. Preliminary calculations were then made by F. Reines and Ulam in 1947, and the actual project – known as Project Orion – was initiated in 1958, led by Ted Taylor at General Atomics and physicist Freeman Dyson of the Institute for Advanced Study in Princeton.
In short, the Orion design involves a large spacecraft carrying a large supply of thermonuclear warheads, achieving propulsion by releasing a bomb behind it and then riding the detonation wave with the help of a rear-mounted plate called a “pusher”. After each blast, the explosive force is absorbed by this pusher plate, which translates the thrust into forward momentum.
Though hardly elegant by modern standards, the proposed design offered a way of delivering the explosive (literally!) force necessary to propel a rocket over extreme distances, and solved the issue of how to utilize that force without containing it within the rocket itself. However, the drawbacks of this design are numerous and noticeable.
For starters, the ship itself is rather staggering in size, weighing in anywhere from 2,000 to 8,000,000 tonnes, and the propulsion design releases a dangerous amount of radiation – and not just for the crew! If we are to rely on ships that use nuclear bombs for thrust, we had better find courses that take them away from any inhabited or habitable areas. What’s more, the cost of producing a behemoth of this size (even the modest 2,000-tonne version) is equally staggering.
Antimatter Engine: Most science fiction authors who write about deep space exploration (at least those who want to be taken seriously) rely on anti-matter to power ships in their stories. This is no accident, since antimatter is the most potent fuel known to humanity right now. While tons of chemical fuel would be needed to propel a human mission to Mars, just tens of milligrams of antimatter, if properly harnessed, would be able to supply the requisite energy.
Fission and fusion reactions convert just a fraction of one percent of their mass into energy. But by combining matter with antimatter, its mirror twin, a reaction of 100 percent efficiency is achieved. For years, physicists at the CERN Laboratory in Geneva have been creating tiny quantities of antimatter by smashing subatomic particles together at near-light speeds. Given time and considerable investment, it is entirely possible this could be turned into a form of advanced propulsion.
In an antimatter rocket, a dose of antihydrogen would be mixed with an equal amount of hydrogen in a combustion chamber. The mutual annihilation of a half pound of each, for instance, would unleash more energy than a 10-megaton hydrogen bomb, along with a shower of subatomic particles called pions and muons. These particles, confined within a magnetic nozzle similar to the type necessary for a fission rocket, would fly out the back at one-third the speed of light.
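That 10-megaton comparison can be sanity-checked with E = mc² (back-of-envelope numbers of my own, not figures from the article's sources):

```python
# Sanity check: total annihilation energy of half a pound each of
# hydrogen and antihydrogen, expressed in megatons of TNT.

C = 2.998e8              # speed of light, m/s
HALF_POUND_KG = 0.2268   # ~0.5 lb in kilograms
MEGATON_J = 4.184e15     # energy of 1 megaton of TNT, in joules

mass_annihilated = 2 * HALF_POUND_KG   # matter + antimatter both convert
energy_j = mass_annihilated * C**2     # E = mc^2
print(f"~{energy_j / MEGATON_J:.1f} megatons of TNT")
```

The result lands right around the 10-megaton figure quoted above, since in annihilation the entire rest mass of both the matter and the antimatter converts to energy.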
However, there are natural drawbacks to this design as well. While a top speed of one-third the speed of light is very impressive, there’s the question of how much fuel would be needed. For example, while it would be nice to reach Alpha Centauri – a mere 4.5 light-years away – in 13.5 years instead of the 130 it would take using a nuclear rocket, the amount of antimatter needed would be immense.
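The 13.5-versus-130-year comparison is simple distance-over-speed arithmetic, ignoring the time spent accelerating and decelerating (the nuclear-rocket cruise speed below is my own assumption, chosen to roughly reproduce the article's figure):

```python
# Idealized travel times to Alpha Centauri at a constant cruise speed.
# Acceleration and deceleration phases are ignored for simplicity.

DISTANCE_LY = 4.5  # distance to Alpha Centauri used in the text

def travel_years(speed_fraction_of_c):
    """Years to cover DISTANCE_LY at a constant fraction of light speed."""
    return DISTANCE_LY / speed_fraction_of_c

print(f"antimatter rocket (c/3):   {travel_years(1/3):.1f} years")
print(f"nuclear rocket (~3.5% c): {travel_years(0.035):.0f} years")
```

A handy property of using light-years and fractions of c: the distance in light-years divided by the speed fraction gives the travel time in years directly, with no unit conversions.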
No means exist to produce antimatter in such quantities right now, and the cost of building the kind of rocket required would be equally immense. Considerable refinements would therefore be needed and a sharp drop in the cost associated with building such a vessel before any of its kind could be deployed.
Laser Sail: Thinking beyond rockets and engines, there are some concepts that would allow a spaceship to travel into deep space without any fuel at all. In 1962, Robert Forward put a twist on the ancient technique of sailing – capturing wind in a fabric sail – to propose a new form of space travel. Much as our world is permeated by wind currents, space is filled with radiation – largely photons and energy streaming from stars – that can push a cosmic sail in the same way.
This was followed up in the 1970s, when Forward proposed beam-powered propulsion schemes using either lasers or masers (microwave lasers) to push giant sails to a significant fraction of the speed of light. When photons in the laser beam strike the sail, they transfer their momentum and push the sail onward. The spaceship steadily builds up speed while the laser that propels it stays put in our solar system.
Much the same process would be used to slow the sail down as it neared its destination. This would be done by having the outer portion of the sail detach, which would then refocus and reflect the lasers back onto a smaller, inner sail. This would provide braking thrust to slow the ship down as it reached the target star system, eventually bringing it to a slow enough speed that it could achieve orbit around one of its planets.
Once more, there are challenges, foremost of which is cost. While the sail itself, which could be built around a central, crew-carrying vessel, would be fuel-free, there's the little matter of the lasers needed to propel it. Not only would these need to operate continuously for years at gigawatt strength, the cost of building such a monster would be astronomical, no pun intended!
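To get a feel for why the lasers must run for years at gigawatt strength, here's a rough photon-pressure sketch. It assumes a perfectly reflective sail (force F = 2P/c) and a purely illustrative 1-gigawatt beam pushing a hypothetical 1-tonne craft, ignoring beam divergence:

```python
# Photon pressure on a perfectly reflective laser sail: F = 2P / c,
# since each reflected photon transfers twice its momentum (p = E / c).
C = 299_792_458.0  # speed of light, m/s

def sail_force_newtons(laser_power_watts: float) -> float:
    return 2.0 * laser_power_watts / C

power = 1e9      # illustrative 1-gigawatt laser
mass = 1000.0    # hypothetical 1-tonne sail + payload, in kg

force = sail_force_newtons(power)          # ~6.7 N of thrust
accel = force / mass                       # m/s^2, ignoring beam divergence
years_to_tenth_c = (0.1 * C / accel) / (365.25 * 24 * 3600)

print(f"Thrust: {force:.2f} N, acceleration: {accel:.2e} m/s^2")
print(f"Time to reach 0.1c at constant thrust: ~{years_to_tenth_c:.0f} years")
```

A full gigawatt of continuous laser power buys only a few newtons of thrust – hence the appeal of huge, gossamer-light sails and the staggering power budgets these schemes demand.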
A solution proposed by Forward was to use a series of enormous solar panel arrays on or near the planet Mercury to power the lasers. However, this just replaces one financial burden with another, as the mirror or Fresnel lens needed to keep the lasers focused on the distant sail would have to be planet-sized in scope. What's more, a giant braking sail would have to be mounted on the ship as well, and it would have to focus the deceleration beam very precisely.
So while laser sails present a highly feasible means of sending people to Mars or around the Inner Solar System, they are not the best concept for interstellar space travel. While the sail achieves certain cost savings by reaching high speeds without fuel, these are more than offset by the power demands of the apparatus needed to keep it moving.
Generation/Cryo-Ship: Here we have a concept which has been explored extensively in fiction. Known as an Interstellar Ark, an O'Neill Cylinder, a Bernal Sphere, or a Stanford Torus, the basic philosophy is to create a ship that would be a self-contained world, one that would travel the cosmos at a slow pace and keep the crew housed, fed, and sustained until it finally reached its destination. And one of the main reasons that this concept appears so much in science fiction literature is that many of the writers who made use of it were themselves scientists.
The first known written examples include Robert H. Goddard's "The Last Migration" (1918), in which he describes an "interstellar ark" containing cryogenically frozen people that sets out for another star system after the death of the Sun. Konstantin E. Tsiolkovsky later wrote of a "Noah's Ark" in his 1928 essay "The Future of Earth and Mankind." Here, the crews were kept in wakeful conditions until they reached their destination thousands of years later.
By the latter half of the 20th century, with works like Robert A. Heinlein's Orphans of the Sky, Arthur C. Clarke's Rendezvous with Rama, and Ursula K. Le Guin's Paradises Lost, the concept began to be explored as a distant possibility for interstellar space travel. And in 1964, Dr. Robert Enzmann proposed a concept for an interstellar spacecraft known as the Enzmann Starship that included detailed notes on how it would be constructed.
Enzmann's concept called for a ship powered by deuterium engines similar to those proposed for the Orion Spacecraft. It would measure some 600 meters (2,000 feet) long and support an initial crew of 200 people, with room for expansion. An entirely serious proposal, complete with a detailed assessment of how it would be constructed, the Enzmann concept began appearing in a number of science fiction and fact magazines by the 1970s.
Despite the fact that this sort of ship frees its makers from the burden of coming up with a sufficiently fast or fuel-efficient engine design, it comes with its own share of problems. First and foremost, there's the cost of building such a behemoth. Slow boat or no, the financial and resource burden of building a mobile world in space is beyond most countries' annual GDP. Only through sheer desperation and global cooperation could anyone conceive of building such a thing.
Second, there’s the issue of the crew’s needs, which would require self-sustaining systems to ensure food, water, energy, and sanitation over a very long haul. This would almost certainly require that the crew remain aware of all its technical needs and continue to maintain it, generation after generation. And given that the people aboard the ship would be stuck in a comparatively confined space for so long, there’s the extreme likelihood of breakdown and degenerating conditions aboard.
Third, there’s the fact that the radiation environment of deep space is very different from that on the Earth’s surface or in low earth orbit. The presence of high-energy cosmic rays would pose all kinds of health risks to a crew traveling through deep space, so the effects and preventative measures would be difficult to anticipate. And last, there’s the possibility that while the slow boat is taking centuries to get through space, another, better means of space travel will be invented.
Faster-Than-Light (FTL) Travel: Last, we have the most popular concept to come out of science fiction, but which has received very little support from the scientific community. Whether it was the warp drive, the hyperdrive, the jump drive, or the subspace drive, science fiction has sought to exploit the holes in our knowledge of the universe and its physical laws in order to speculate that one day, it might be possible to bridge the vast distances between star systems.
However, there are numerous science-based challenges to this notion that make an FTL enthusiast want to give up before they even get started. For one, there's Einstein's Theory of Special Relativity, which establishes the speed of light (c) as the uppermost speed at which anything can travel. For massless particles like photons, which do not experience time, traveling at the speed of light is a given. But for stable matter, which has mass and is affected by time, reaching the speed of light is a physical impossibility.
For one, the amount of energy needed to accelerate an object to such speeds is unfathomable, and the effects of time dilation – time slowing down for an object as it approaches the speed of light – would be profound. What's more, achieving the speed of light would most likely cause our stable matter (i.e. our ships and bodies) to fly apart and become pure energy. In essence, we'd die!
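To put some rough numbers on the energy problem, here's a back-of-the-envelope sketch of the Lorentz factor and relativistic kinetic energy for a hypothetical 1-tonne probe (the mass is purely illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v_fraction: float) -> float:
    """Gamma = 1 / sqrt(1 - v^2/c^2), for v given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_fraction ** 2)

def kinetic_energy_joules(mass_kg: float, v_fraction: float) -> float:
    """Relativistic kinetic energy: KE = (gamma - 1) * m * c^2."""
    return (lorentz_factor(v_fraction) - 1.0) * mass_kg * C ** 2

for v in (0.5, 0.9, 0.99, 0.999):
    gamma = lorentz_factor(v)
    ke = kinetic_energy_joules(1000.0, v)  # a hypothetical 1-tonne probe
    print(f"v = {v:.3f}c  gamma = {gamma:7.2f}  KE = {ke:.2e} J")
```

At 0.99c, our one-tonne probe carries on the order of 5 × 10²⁰ joules of kinetic energy – roughly comparable to humanity's entire annual energy consumption – and the figure diverges toward infinity as v approaches c.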
Naturally, there have been those who have tried to use the basis of General Relativity, which allows for the existence of wormholes, to postulate that it would be possible to instantaneously move from one point in the universe to another. These theories for "folding space", or "jumping" through space-time, suffer from the same problem. Not only are they purely speculative, but they raise all kinds of questions about temporal mechanics and causality. If these wormholes are portals, why just portals in space and not time?
And then there's the concept of a quantum singularity, which is often featured in talk of FTL. The belief here is that an artificial singularity could be generated, thus opening a corridor in space-time which could then be traversed. The main problem here is that such an idea is likely suicide. A quantum singularity, a.k.a. a black hole, is a point in space where the known laws of nature break down – hence the term singularity.
Also, they are created by a gravitational force so strong that it tears a hole in space-time, and the resulting hole absorbs all things, including light itself, into its maw. It is therefore impossible to know what resides on the other side of one, and astronomers routinely observe black holes (most notably Sagittarius A* at the center of our galaxy) swallowing matter and belching out X-rays – evidence of its destruction. How anyone could think these were a means of safe space travel is beyond me! But then again, they are a plot device, not a serious idea…
But before you go thinking that I'm dismissing FTL in its entirety, there is one possibility which has the scientific community buzzing and even looking into it. It's known as the Alcubierre Drive, a concept which was proposed by physicist Miguel Alcubierre in his 1994 paper "The Warp Drive: Hyper-Fast Travel Within General Relativity."
The equations and theory behind his concept postulate that since space-time can be contracted and expanded, empty space behind a starship could be made to expand rapidly, pushing the craft in a forward direction. Passengers would perceive it as movement despite the complete lack of acceleration, and vast distances (i.e. light-years) could be crossed in a matter of days or weeks instead of decades. What's more, this "warp drive" would allow for FTL while at the same time remaining consistent with Einstein's theory of Relativity.
In October 2011, physicist Harold White attempted to rework the equations while in Florida where he was helping to kick off NASA and DARPA’s joint 100 Year Starship project. While putting together his presentation on warp, he began toying with Alcubierre’s field equations and came to the conclusion that something truly workable was there. In October of 2012, he announced that he and his NASA team would be working towards its realization.
But while White himself claims it's feasible, and has the support of NASA behind him, the mechanics behind it all are still theoretical, and White himself admits that the energy required to pull off this kind of "warping" of space-time is beyond our means at the current time. Clearly, more time and development are needed before anything of this nature can be realized. Fingers crossed the field equations hold, because that would mean it is at least theoretically possible!
Summary: In case it hasn't been made manifestly obvious by now, there's no simple solution. In fact, just about all possibilities currently under scrutiny suffer from the exact same problem: the means just don't exist yet to make them happen. But even if we can't reach for the stars, that shouldn't deter us from reaching for objects that are much closer to hand. In the many decades it will take us to settle the Moon, Mars, the Asteroid Belt, and Jupiter's moons, we are likely to revisit this problem many times over.
And I'm sure that in the course of creating off-world colonies, reducing the burden on planet Earth, developing solar power and other alternative fuels, and basically working towards this thing known as the Technological Singularity, we're likely to find that we are capable of far more than we ever thought possible. After all, what do money, resources, or energy requirements matter when you can harness quantum energy, mine asteroids, and turn AIs and augmented minds loose on the problem of solving field equations?
Yeah, take it from me, the odds are pretty much even that we will be making it to the stars in the not-too-distant future, one way or another. As far as probabilities go, there’s virtually no chance that we will be confined to this rock forever. Either we will branch out to colonize new planets and new star systems, or go extinct before we ever get the chance. I for one find that encouraging… and deeply disturbing!
If you were to get into a discussion with a true Star Wars fan, it would only be a matter of time before the subject of the Kessel run came up. Long considered one of the biggest enigmas to come out of the franchise, Han’s boast in A New Hope about his ship’s capabilities – with the Kessel Run as a reference – still has some people scratching their noggins and scrambling for explanations today.
To refresh people’s memory, this is how the boast went down in the course of Han’s introduction to Luke and Obi-Wan at the Mos Eisley Cantina:
Han: “Fast ship? You’ve never heard of the Millennium Falcon?” Obi-Wan: “Should I have?” Han: “It’s the ship that made the Kessel Run in less than twelve parsecs!”
See what I mean? A parsec is a unit of distance, not time, so from an astronomical perspective, it made no sense. How could Han have used it to explain how quickly his ship could travel? Well, as it happens, there are some possible and even oddball explanations that have been drafted as the franchise has expanded over the years.
Another important point to make here is about the Kessel Run itself. As a smuggler, Han was deeply involved in running "glimmerstim spice" during his pre-Rebel days (a clear rip-off of Dune, but whatever). This took him to and from Kessel, a remote planet located in the Outer Rim that is surrounded by a black hole cluster known as the Maw. Being an unnavigable mess, the Maw provided a measure of protection for smugglers running the Imperial blockade that guarded the space lanes near the planet.
All of this comes up in the Jedi Academy Trilogy, a series of novels written by Kevin J. Anderson that are part of the expanded Star Wars universe, and is the first case of the Run being detailed. From these and other sources, we are told that the Run is an 18-parsec route that led away from Kessel, around the Maw, and into the far more navigable area of space known as The Pit. Here, smugglers had to contend with asteroids, but any smuggler worth his salt could find his way through without too much difficulty, and didn't have to worry about Imperial patrols from this point onward.
To cut down on the distance traveled, pilots could skirt the edges of the black holes – a dangerous maneuver, since it risks getting pulled in by their gravitational forces. If a ship were fast enough, it could cut closer than most, shaving more distance off the route while still being able to break free at the end and complete the run.
Hence we have the first possible explanation for Han's ambiguous statement: his boast was not about the time taken to complete the Run, but about the fact that the Millennium Falcon was so fast that he was able to cut a full third of the distance off and still make it out. The Falcon would have to be a pretty sweet ship to do that! And it would also fit in with all his other boasts, about how the ship could "make 0.5 past light speed" and was the "fastest ship in the fleet".
However, there are other explanations as well. For starters, this expanded universe explanation does not jibe with what Lucas himself said, what was presented in the novelization of the original movie, and of course what astronomers and megafans have to say. In the first instance, Lucas claimed in the commentary of the Star Wars: Episode IV A New Hope DVD that the "parsecs" are due to the Millennium Falcon's advanced navigational computer rather than its engines, so the navicomputer would calculate much faster routes than other ships could.
In the A New Hope novelization, Han says "standard time units" in the course of his conversation with Luke and Ben, rather than "parsecs." And in the revised fourth draft of A New Hope from 1976, Han's "Kessel Run" line is described as a bit of hapless misinformation that Obi-Wan doesn't believe for a second. In short, Han got it wrong and didn't even realize it.
And then there is the far more farfetched and mind-bending explanation made by Kyle Hill in a recent Wired article. Here, he argues that the true intent of Han's statement was that he was, in fact, a time traveler. By combining some basic laws of physics – namely, that the speed of light (c) is unbreakable and 0.999…c is as fast as anything can go – with the details of Han's boast, a clearer picture of how this works emerges.
First, because the shortened Kessel Run spans 12 parsecs (roughly 39 light-years), a ship traveling at nearly light speed would take a little more than 39 years to cover it. Factoring in time dilation, anyone watching the Kessel Run would see Solo speeding along for almost 40 years, but Solo himself would experience only a little more than half a day. So basically, in the time it takes Han to complete just one Kessel Run, the rest of the galaxy carries on for 40 years – effectively pushing the date of Han's birth 40 years into the past.
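Hill's math can be sketched quickly. Using the standard conversion of 3.2616 light-years per parsec, and taking the half-day figure as a given, we can solve for the Lorentz factor (and hence the speed) the Falcon would need:

```python
import math

LY_PER_PARSEC = 3.2616
run_ly = 12 * LY_PER_PARSEC      # shortened Kessel Run: ~39.1 light-years
coord_time_years = run_ly        # at ~c, one light-year takes ~one year

# Gamma needed for Solo to experience only half a day of proper time
# over the whole run: gamma = coordinate time / proper time.
proper_time_years = 0.5 / 365.25
gamma = coord_time_years / proper_time_years
beta = math.sqrt(1.0 - 1.0 / gamma ** 2)  # required speed as a fraction of c

print(f"Run length: {run_ly:.1f} light-years")
print(f"Required Lorentz factor: {gamma:,.0f}")
print(f"Required speed: {beta:.12f} c")
```

The required gamma works out to nearly 30,000 – a speed within about a billionth of c – which is exactly why the galaxy ages four decades while Han barely finishes lunch.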
Confused yet? Well, the idea is that Han would have been born long before the events of A New Hope, and even The Phantom Menace, took place. After completing his run, no doubt trying to avoid Republic authorities or some such equivalent, he came upon a universe that had gone through the wringer with a Sith coup d'état, Imperial oppression, and a looming Civil War. What could he do but stick to smuggling and hope to make a living?
REALLY doesn’t make sense in terms of the storyline, does it? Ah, but what can you do? People like to find quirky explanations for things that don’t make sense. It can be fun! But of course, there’s a final and much, much simpler explanation that I haven’t even mentioned yet, and it’s one that’s far more believable given the so-called evidence.
Put simply, Lucas made a mistake. The parsecs line was a misfire, an oversight, and/or a brain fart on his part. Nothing more, and all these explanations are just attempts to make something fit that doesn't. It makes perfect sense when you think about it: since A New Hope was the first Star Wars movie, that meant Lucas was directing it all by himself. The assistance he sorely needed in terms of directing, writing, editing, etc. didn't come until the movie was almost complete and he was looking bankruptcy and a nervous breakdown in the eye.
And remember, this is the same movie where a Stormtrooper walked headfirst into a door aboard the Death Star, Mark Hamill yells "Carrie" at Carrie Fisher mid-scene, the crew and camera can be seen in numerous widescreen shots, and just about every technical problem that could go wrong did go wrong, some of which even made it into the final cut. As far as bloopers, outtakes and errors are concerned, the first Star Wars movie was a mess!
See? So really, is it hard to imagine that a simple oversight like this could have made it on screen without anyone catching it? Hell no! And frankly, I think fandom would be a lot happier if Lucas had remembered these early days of his career and not decided to make the prequels all by himself. Sure, there were plenty of people to catch these kinds of simple errors the second time around, but his many flaws as a movie maker found other ways to shine through – i.e. Jar Jar, lazy directing, too many special effects, wooden dialogue, a confused storyline, continuity errors and plot holes galore!
Ah, but that’s another topic entirely. Point is, Star Wars had simple beginnings and plenty of mistakes were made along the way. One can’t expect something so grand and significant in terms of popular culture to be consistent or error free. And Lucas was never really good at producing a seamless product. In the end, it was a fun ride until the new ones came out, and even then he was still making money hand over fist.
And with Disney at the helm now, chances are we're in for a real treat with some high budgets and high production values. And I'm sure there will be plenty of things for the meganerds and uberfans to poke fun at and make compilation videos of. And I of course will be writing about all of it 😉