Non-invasive medicine is currently one of the fastest-growing industries in the world. Thanks to ongoing developments in the fields of nanofabrication, wireless communications, embedded electronics and microsensors, new means of monitoring our health are being created all the time that are both painless and hassle-free.
Consider diabetes, an epidemic that currently affects 8% of the population in the US and is growing worldwide. In October of 2013, some 347 million cases were identified by the World Health Organization, which also projects that diabetes will become the 7th leading cause of death by 2030. To make matters worse, the condition requires constant blood-monitoring, which is difficult in developing nations and a pain where the means exist.
Hence why medical researchers and companies are looking to create simpler, non-invasive means. Google is one such company, which back in January announced that they are working on a “smart” contact lens that can measure the amount of glucose in tears. By merging a mini glucose sensor and a small wireless chip into a set of regular soft contact lenses, they are looking to take all the pin-pricks out of blood monitoring.
In a recent post on Google’s official blog, project collaborators Brian Otis and Babak Parviz described the technology:
We’re testing prototypes that can generate a reading once per second. We’re also investigating the potential for this to serve as an early warning for the wearer, so we’re exploring integrating tiny LED lights that could light up to indicate that glucose levels have crossed above or below certain thresholds.
And Google is hardly alone in this respect. Due to growing concern and the advancements being made, others are also looking at alternatives to the finger prick, including glucose measures from breath and saliva. A company called Freedom Meditech, for example, is working on a small device that can measure glucose levels with an eye scan.
Their invention is known as the I-SugarX, a handheld device that scans the aqueous humor of the eye, and it yielded accurate results in clinical studies in less than four minutes. John F. Burd, Ph.D., Chief Science Officer of Freedom Meditech, described the process and its benefits in the following way:
The eye can be thought of as an optical window into the body for the painless measurement of glucose in the ocular fluid as opposed to the blood, and is well suited for our proprietary optical polarimetric based measurements. Based on the results of this, and other studies, we plan to begin human clinical studies as we continue our product development.
Between these and other developments, a major trend towards “smart monitoring” is developing and likely to make life easier and cut down on the associated costs of medicine. A smart contact lens or saliva monitor would make it significantly easier to watch out for uncontrolled blood sugar levels, which ultimately lead to serious health complications.
But of course, new techniques for blood-monitoring go far beyond addressing chronic conditions like diabetes. Diagnosing and controlling the spread of debilitating, potentially fatal diseases is another major area of focus. As with diabetes, doing regular bloodwork can be difficult, especially when working in developing areas of the world where proper facilities can be hard to find.
But thanks to researchers at Rice University in Houston, Texas, a new test that requires no blood draws is in the works. Relying on laser pulse technology to create a vapor nanobubble in a malaria-infected cell, this test is able to quickly and non-invasively diagnose the disease. While it does not bring medical science closer to curing this increasingly drug-resistant disease, it could dramatically improve early diagnosis and outcomes.
The scanner was invented by Dmitri Lapotko, a physicist, astronomer, biochemist, and cellular biologist who studied laser weapons in Belarus before moving to Houston. There, he and his colleagues began work on a device that used the same kind of laser and acoustic sensing technology employed on sub-hunting destroyers, only on a far smaller scale and for medical purposes.
Dubbed “vapor nanobubble technology,” the device combines a laser scanner and a fiber-optic probe that detect malaria by heating up hemozoin – the iron crystal byproduct of hemoglobin that is found in malaria-infected cells, but not in normal blood cells. Because the hemozoin crystals absorb the energy from the laser pulse, they heat up enough to create transient vapor nanobubbles that pop.
This, in turn, produces a ten-millionth-of-a-second acoustic signature that is then picked up by the device’s fiber-optic acoustic sensor and indicates the presence of the malaria parasite in the blood cells scanned. And because the vapor bubbles are only generated by hemozoin, which is only present in infected cells, the approach is virtually fool-proof.
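In principle, the detection step boils down to a threshold test: only hemozoin-bearing cells produce the acoustic spike, so any trace containing a spike flags an infected cell. Here is a toy sketch of that logic in Python (the function, data format and threshold are hypothetical illustrations – the team’s actual signal processing is not described here):

```python
def classify_cells(acoustic_traces, threshold):
    """Flag cells whose acoustic trace contains a spike above threshold.

    Only hemozoin-bearing (infected) cells produce the transient
    vapor-nanobubble signature, so a spike implies infection.
    Trace values and threshold units are arbitrary/hypothetical.
    """
    return [max(trace, default=0.0) > threshold for trace in acoustic_traces]

# Toy traces: background noise vs. one trace with a nanobubble "pop".
traces = [
    [0.01, 0.02, 0.01],  # healthy cell: noise only
    [0.02, 0.95, 0.03],  # infected cell: sharp spike
]
print(classify_cells(traces, threshold=0.5))  # [False, True]
```

Because the signature only ever appears in infected cells, even a crude threshold like this would be robust against false positives – which is exactly the property the Rice team reports.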
In a recent issue of Proceedings of the National Academy of Sciences, Lapotko and his research team reported that the device detected malaria in a preclinical trial on mice in which only one red blood cell in a million was infected, with zero false positives. In a related school news release, the study’s co-author David Sullivan – a malaria clinician at Johns Hopkins University – had this to say about the new method:
The vapor nanobubble technology for malaria detection is distinct from all previous diagnostic approaches. The vapor nanobubble transdermal detection method adds a new dimension to malaria diagnostics, and it has the potential to support rapid, high-throughput and highly sensitive diagnosis and screening by nonmedical personnel under field conditions.
At present, malaria is one of the world’s deadliest diseases, infecting hundreds of millions of people a year and claiming the lives of more than 600,000. To make matters worse, most of the victims are children. All of this combines to make malaria one of the most devastating illnesses affecting the developing world, comparable only to HIV/AIDS.
With a blood test that can detect the parasite, requires nothing more than a mobile device to make the determination quickly, and needs only a portable car battery for power, medical services could penetrate the once-thought impenetrable barriers imposed by geography and development. And this in turn would be a major step towards bringing some of the world’s most infectious diseases to heel.
Ultimately, the aim of non-invasive technology is to remove testing and diagnostic procedures from the laboratory and make them portable, cheaper, and more user-friendly. In so doing, they also ensure that early detection, which is often the difference between life and death, is far easier to achieve. It also helps to narrow the gap in access between rich and poor, not to mention developed and developing nations.
The science behind cold fusion has been a source of constant controversy for decades. Not only has this pursuit turned up its share of phony claims, the fact that it also promises to yield clean, abundant energy on the cheap has led to no shortage of romantic endorsements and vocal detractors. But if it could be made to work, there is no doubt that our energy problems would be solved, and in a way that is not harmful to our environment.
Last February, NASA made waves by announcing that they were working towards cold fusion through low-energy nuclear reaction (LENR) technology. Then in September, the National Ignition Facility (NIF) in California announced a major milestone when they managed to produce a controlled reaction that provided more energy than was required to start it.
But all of that seemed to pale in comparison to Andrea Rossi’s announcement that he had managed to create a fusion power plant reportedly capable of generating a full megawatt of power. Known as the E-Cat 1MW Plant (short for Energy-Catalyser), Rossi announced its creation back in November, and indicated that he and his company were taking pre-orders and would start deliveries by 2014.
Today, the big news is that a large US investment company has acquired the rights to the cold fusion LENR technology. That investment company is Cherokee Investment Partners, and they appear to be interested in deploying the cold fusion tech commercially in both China and the US to meet both countries’ existing and projected energy needs.
Relying on the same process as other LENR technology, the E-Cat purportedly generates cold fusion by taking nickel and hydrogen and fusing them into copper – a process claimed to have 10,000 times the energy density of gasoline, and 1,000 times the power density. Rossi says he’s found a special catalyst that makes the process work, but many scientists remain unconvinced.
Regardless of whether or not it can deliver, it now seems that Rossi’s previous allusions to an American partner were true after all. Much like everything surrounding Rossi, he chose to be nebulous about the identity of the company that was supporting him. With this latest deal, however, it appears that Cherokee and its CEO Thomas Darden – a man with a history of investing in clean energy – are believers in the design.
In addition to preparing the patents through a Limited Liability Company – known as Industrial Heat – there are also reports that Darden recently visited China to showcase the E-Cat to Chinese officials and businesspeople. China is reportedly looking at using the E-Cat to significantly reduce its carbon footprint and meet the energy needs of its growing cities in a way that won’t generate more air pollution.
Needless to say, this deal has bolstered Rossi’s and the E-Cat’s credibility, but the technology remains unproven. Rossi says that he has a team of international scientists who are planning to do another round of tests on the E-Cat, slated to end in March, with a peer-reviewed report to follow sometime after that. Fingers crossed, those rounds of tests will provide conclusive proof.
Then, we can all get to work dreaming about a bright, clean future, and the thousands of applications such plants will have!
One of the greatest threats to our planetary ecosystem is the prospect of bees going extinct, a phenomenon that is often filed under the heading of Colony Collapse Disorder (CCD). Because of their role in pollination, bees are an integral part of the environment, and their disappearance would have dire consequences for food crops and ecosystems the world over.
Because of this, environmentalists and entomologists are looking for ways to address the disappearance of bees. One solution, as put forward by a team of Australian scientists working in Tasmania, is to outfit bees with tiny microchip trackers to monitor their movements. By turning them into an army of mobile data-collectors, the team hopes to determine why the local bees are abandoning their hives.
For the past five months, this team has been capturing bees, refrigerating them, shaving them, and gluing tiny sensors – each weighing about 1/4000th of a paperclip – to their backs. So far, the team has captured, tagged and released hundreds of bees, but it plans to outfit a total of 5,000 with these chips for the sake of their research.
Dr. Paulo de Souza, the lead scientist on the project, explained the capture and tagging process as follows:
The bees are very sensitive to temperature. We take the bees to the lab in a cage, we put them in a fridge with temps around 5 degrees Celsius, and in five minutes, all the bees fall asleep, because their metabolism goes down. We rub a bit of glue on them, and then attach the sensor. We carry them back, and in five minutes the bees wake up again.
By monitoring their behavior, the scientists are trying to prevent Colony Collapse Disorder, the mysterious phenomenon in which worker bees suddenly abandon their hives. As it stands, no one is entirely sure what causes CCD, but biological diversity, diet, management of the hives, radiation, and pesticide use are all possible influences on the bees’ behavior.
Colony Collapse Disorder remains a mystery that affects not only bees, but entire industries. If bees don’t pollinate fruit crops well enough, production decreases, prices rise, and local ecosystems can collapse. Tasmania, whose huge agricultural tracts account for 65% of all Australian crop exports, could be devastated. Hence why de Souza and his colleagues are using it as a testing ground for their research.
In addition to monitoring the bees’ movements and checking in with them via RFID readers installed near hives and feeding stations, they’ve also created an experiment that exposes some bees to environmental contaminants (like pesticides) while other hives remain pesticide-free. By examining the effect on the bees’ movements, they’ll be able to determine which factors cause bee disorientation and abnormal behavior.
As de Souza explains it, the tagging and tracking process works a lot like a swipe card:
When you go to your office, you swipe a card to gain access. We assign different numbers to the devices on the bees, so we have 5,000 of these micro-sensors with one specific number. We follow not only the swarm, but each of the individuals to see what they’re doing.
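Following the swipe-card analogy, each RFID read is just a (bee ID, timestamp, event) record, and questions like “how long was each bee away from the hive?” become simple log queries. A minimal sketch in Python (the record format and field names are hypothetical – the team’s actual software is not described here):

```python
from collections import defaultdict

def time_outside_hive(read_log):
    """Total time each tagged bee spent away from the hive.

    read_log is a list of (bee_id, timestamp, event) tuples, where
    event is "exit" or "entry" as recorded by a reader at the hive
    entrance. Assumes alternating exit/entry events per bee; the
    format is a hypothetical illustration.
    """
    away = defaultdict(float)
    last_exit = {}
    for bee_id, t, event in sorted(read_log, key=lambda r: r[1]):
        if event == "exit":
            last_exit[bee_id] = t
        elif event == "entry" and bee_id in last_exit:
            away[bee_id] += t - last_exit.pop(bee_id)
    return dict(away)

# Two tagged bees making foraging trips (timestamps in minutes):
log = [
    (4021, 10.0, "exit"), (4021, 40.0, "entry"),
    (4022, 12.0, "exit"), (4022, 70.0, "entry"),
    (4021, 50.0, "exit"), (4021, 65.0, "entry"),
]
print(time_outside_hive(log))  # {4021: 45.0, 4022: 58.0}
```

A bee that “exits” and never “enters” again would simply stop appearing in the log – which, aggregated over thousands of individuals, is precisely the hive-abandonment signal the researchers are looking for.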
The scientists will also be able to examine bee data through several generations within the hive. When the contaminated pollen turns to nectar, other bees within the hive feed on it, and pass contamination on to their offspring. To de Souza’s knowledge, this is the first time scientists have attempted to measure hive contamination on this scale.
Right now, their main goal is to understand CCD before it reaches Australia’s shores and affects its agricultural operations. But the research is expected to have far-reaching implications, helping to address a major ecological concern that affects the entire world. And in the long run, de Souza and his team are looking to refine the process and take it even further.
This includes adding more features to the chips and applying them to other species of crucial and threatened insects. Key to this, says de Souza, is miniaturization:
As the chips go down in size, we’ll also be able to use this in other insects. Fruit flies, for example, are another insect incredibly important for biosecurity in Australia.
An interesting concept, isn’t it? Big data meets entomology meets ecology, and all for the sake of preserving a crucial part of the food industry and an integral part of our environment. Because ultimately, it’s not just about preventing colonies from collapsing, but the Earth’s ecosystems as well.
Scientists and astronomers have learned a great deal about the universe in recent years, thanks to craft like the Kepler space probe and the recently launched Gaia space observatory. As these and other instruments look out into the universe and uncover stars and exoplanets, they not only let us expand our knowledge of the universe, but give us a chance to reflect upon the meaning of this thing we call “habitability”.
Basically, our notions of what constitutes a habitable environment are shaped by our own planet. Since Earth is the life-sustaining environment from which we originated, we tend to think that conditions on another life-giving planet would have to be similar. However, scientists René Heller and John Armstrong contend that there might be an even more suitable planet in this galaxy, in the neighboring system of Alpha Centauri B.
For those unfamiliar, Alpha Centauri is a triple star system some 4.3 light years away, making it the closest star system to Earth. The nice thing about placing a hypothetical “superhabitable” planet in this system is that it makes it a lot easier to indulge in a bit of a thought experiment, and will make such a planet that much easier to observe and examine.
According to the arguments put forward by Heller, of the Department of Physics and Astronomy, McMaster University, Hamilton; and Armstrong, of the Department of Physics, Weber State University in Ogden, this planet may be even more suitable for supporting life than our own. It all comes down to meeting the particulars, and maybe even exceeding them.
For example, a habitable planet needs the right kind of sun – one that has existed and remained stable for a long time. If the sun in question is too large, it will have a very short life; if it’s too small, it will last a long time, but a planet will have to orbit very close to stay warm, and that can cause all sorts of problems – such as a tidally locked planet with one side constantly facing the sun.
Our own sun is a G2-type star that has been alive and stable for roughly 4.6 billion years. K-type dwarfs, which are smaller than the Sun, have lifespans longer than the current age of the universe. Alpha Centauri B is specifically a K1V-type star that fits the bill, with an estimated age of between 4.85 and 8.9 billion years, and is already known to have an Earth-like planet called Alpha Centauri B b.
As to the superhabitable planet, assuming it exists, it will be located somewhere between 0.5 and 1.4 astronomical units (46 – 130 million mi, 75 – 209 million km) from Alpha Centauri B. All things being equal, it will have a circular orbit 1.85 AU (276 million km / 172 million miles) away, which would place it in the middle of the star’s habitable zone.
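Habitable-zone estimates like these rest on the inverse-square law: the flux a planet receives scales as the star’s luminosity divided by the square of its orbital distance. A quick illustration in Python, assuming a luminosity of roughly 0.5 times the Sun’s for Alpha Centauri B (an approximate, commonly cited figure, not taken from the paper):

```python
def stellar_flux(luminosity_lsun, distance_au):
    """Flux at the planet, relative to what Earth receives from the Sun.

    Inverse-square law: S = L / d^2, with L in solar luminosities
    and d in astronomical units.
    """
    return luminosity_lsun / distance_au ** 2

# Approximate luminosity of Alpha Centauri B in solar units
# (an assumed round figure for illustration).
L_ACEN_B = 0.5

for d in (0.5, 0.7, 1.4):
    print(f"{d:.1f} AU: {stellar_flux(L_ACEN_B, d):.2f} x Earth insolation")
```

Under this assumption, a planet at about 0.7 AU would receive roughly Earth-like insolation, which is why habitable-zone distances around K-type stars come out closer in than for the Sun.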
Also, for a planet to sustain life it has to be geologically active, meaning it has to have a rotating molten core to generate a magnetic field to ward off cosmic radiation and protect the atmosphere from being stripped away by solar winds. A slightly more massive planet with more gravity means more tectonic activity, so a better magnetic field and a more stable climate.
However, the most striking difference between the superhabitable world and Earth would be that the former would lack our continents and deep oceans – both of which can be hostile to life. Instead, Heller and Armstrong see a world with less water than ours, which would help to avoid both a runaway greenhouse effect and a snowball planet that an overabundance of water can trigger.
Our superhabitable planet might not even be in the habitable zone. It could be a moon of some giant planet further out. Jupiter’s moon Io is a volcanic hellhole due to tidal heating, but a larger moon – what Heller and Armstrong call a “Super Europa” – in the right orbit around a gas giant could be tidally heated enough to support life even if it’s technically outside the star’s habitable zone.
According to Heller and Armstrong, this world would look significantly different from our own. It would be an older world, larger and more rugged, providing more places for life to exist. What water it had would be evenly scattered across the surface in the form of lakes and small, shallow seas. And it would be slightly more massive, which would mean more gravity.
In this way, the shallow waters would hold much larger populations of more diverse life than is found on Earth, while temperatures would be more moderate. It would also be a warmer world than Earth, which makes for more diversity and potentially more oxygen – something the higher gravity would help with by allowing the planet to better retain its atmosphere.
Another point made by Heller and Armstrong is that there may be more than one habitable planet in the Alpha Centauri B system. Cosmic bombardment early in the history of the Solar System is how the Earth got much of its water and minerals. If life had already emerged on one planet in the early history of the Alpha Centauri B system, a similar bombardment might have spread it to other worlds.
But of course, this is all theoretical. Such a planet may or may not exist, and may or may not have triggered the emergence of life on other worlds within the system. But what is exciting about it is just how plausible its existence may prove to be, and how easy it will be to verify once we can get some space probes between here and there.
Just imagine the sheer awesomeness of being able to see it, the images of a super-sized Earth-moon beamed back across light years, letting us know that there is indeed life on worlds besides our own. Now imagine being able to study that life and learning that our conceptions of this too have been limited. What a time that will be! I hope we all live to see it…
Since the Wright Brothers developed the world’s first airplane, scientists and aerospace engineers have understood how important flaps and wing design are to ensuring that a plane can achieve lift and land safely. During and after World War II, additional lessons were learned, as the sweep of a wing was found to be central to a plane achieving higher service ceilings and airspeeds.
Since that time, many notable improvements have been made, but some strictures have remained the same. For example, conventional wings suffer from the problem of being fixed in a single position, which makes some aspects of performance possible but other things extremely difficult. In addition, flaps have remained virtually unchanged over the years, relying on hinged joints that are limited and vulnerable.
In both cases, the answer may lie in flexible and seamless materials, leading to wings that can change shape as needed. Such technology could not only enable better performance, but remove the need for hinges and gears. Towards this end, Michigan-based FlexSys has developed a way to optimize wing aerodynamics with FlexFoil, a seamless variable geometry airfoil system.
In development since 2001, FlexFoil is made from what is described only as “aerospace materials,” and is seamlessly integrated into the trailing edge of the wing. Based on a technology known as “distributed compliance,” the morphing structure integrates actuators and sensors that, according to FlexSys, result in “large deformations in shape morphing with very small strains.”
According to a 2006 paper co-written by mechanical engineer Dr. Sridhar Kota (the FlexFoil’s inventor), the foils are:
optimized to resist deflection under significant external aerodynamic loading and are just as stiff and strong as a conventional flap.
What this translates to in real terms is a tolerance of over 4,500 kg (10,000 lbs) in air loads and the ability to distribute pressure more evenly throughout the wing, resulting in less strain in any one area. It is also said to reduce wind noise by up to 40 percent on landing, and to lessen the build-up of both ice and debris. But the biggest benefit comes in terms of fuel economy.
When retrofitted onto a wing, FlexFoil can reduce fuel consumption by a claimed 4 to 8 percent, with that number climbing to 12 percent for wings built around the system. What’s more, the technology could be applied to anything that moves relative to a fluid medium, including helicopter rotor blades, wind turbine blades, boat rudders, and pump impellers.
FlexFoil was officially introduced to the public this week at the AIAA (American Institute of Aeronautics and Astronautics) SciTech exposition in Washington, DC. Plans call for flight tests to be performed this July at NASA’s Dryden Flight Research Center, where the flaps of a Gulfstream business jet will be replaced with the foils.
Check out this video of the airwing design and what it does here:
To be fair, this is not the only case of flexible, morphing aircraft in development right now. In fact, NASA has been looking to create a morphing aircraft concept ever since 2001. So far, this has included collaborating with Boeing and the U.S. Air Force to create the Active Aeroelastic Wing (AAW) which was fitted to the F/A-18 Hornet, a multirole combat jet in use with the USAF.
But looking long-term, NASA hopes to create a design for a morphing airplane (pictured above). Known as the 21st Century Aerospace Vehicle, and sometimes nicknamed the Morphing Airplane, the concept includes a variety of smart technologies that could enable inflight configuration changes for optimum flight characteristics, and is an example of biomimetic technology.
In this case, the biological design being mimicked is that of a bird. Through the use of smart materials that are flexible and can change their shape on command, the 21st Century Aerospace Vehicle is able to shape its wings by extending the tips out and slightly upward to give it optimal lift capability. In this configuration, the inspiration for the aircraft’s wings is most clear (pictured above).
But once airborne, the aircraft needs a wing capable of producing less wind resistance while still maintaining lift. This is why, upon reaching service ceilings in excess of 3,000 meters (10,000 feet), the wings contract inward and sweep back to minimize drag and increase airspeed.
Though this program has yet to bear fruit, it is an exciting proposal, and provides a glimpse of the future.
Be sure to check out NASA’s video of the CAV too, and keep your eyes on the skies. Chances are, jets that utilize smart, morphing surfaces are going to be there soon!
Gauging what life will be like down the road based on the emerging trends of today is something that scientists and speculative minds have been doing since the beginning of time. But given the rapid pace of change in the last century – and the way that it continues to accelerate – predicting future trends has become something of a virtual necessity today.
And the possibilities that are expected for the next generation are both awe-inspiring and cause for concern. On the one hand, several keen innovations are expected to become the norm in terms of transportation, education, health care and consumer trends. On the other, the growing problems of overpopulation, urbanization and Climate Change are likely to force some serious changes.
Having read through quite a bit of material lately that comes from design firms, laboratories, and grant funds that seek to award innovation, I decided to do a post that would take a look at how life is expected to change in the coming decades, based on what we are seeing at work today. So here we go, enjoy the ride, and remember to tip the driver!
Housing: When it comes to designing the cities of the future – where roughly 5 billion of the world’s 8.25 billion people are going to live – meeting the basic needs of all these folks is complicated by the need to meet them in a sustainable way. Luckily, people all across the world are coming together to propose solutions to this problem, ranging from the small and crafty to the big and audacious.
Consider that buildings of the future could be coated with Smart Paint, a form of pigment that allows people to change the color of their domicile simply by pushing a button. Utilizing nano-particles that rearrange themselves to absorb a different part of the spectrum, the paint is able to reflect whatever wavelength of visible light the user desires, becoming that color and removing the need for new coats of paint.
And consider that the apartments and houses of this era could be lit by units that convert waste light energy from their bulbs back into functional ambient light. This is the idea behind the Trap Light, a lamp that comes with photoluminescent pigments embedded directly into its glass body. Through this process, 30 minutes of light from an incandescent or LED bulb provides a few hours of ambient lighting.
And in this kind of city, the use of space and resources will have to become very efficient, mainly out of necessity. In terms of low-rent housing, designs like Warsaw’s Keret House are likely to be popular – a narrow, 14-square-meter home that still manages to fit a bathroom, kitchen and bedroom. Being so narrow, city planners are able to squeeze these into the gaps between older buildings, their walls and floors snapping together like Lego.
When it comes to other, larger domiciles (like houses and apartment blocks), construction is likely to become a much more speedy and efficient process – relying on the tools of Computer-Assisted Design (CAD) and digital fabrication (aka. the D-process). Basically, the entire fabrication process is plotted in advance on computer, and then the pieces are tailor made in the factory and snapped together on site.
And let’s not forget anti-gravity 3-D printing as a means of urban assembly, as proposed by architecture students from the Joris Laarman Lab in Amsterdam. Using quick-hardening materials dispensed by robot-driven printers, entire apartment blocks – from electronic components to entire sections of wall – could be assembled within a few days’ time. Speedier, safer and more efficient than traditional construction.
Within these buildings, water is recycled and treated, with grey water used to fertilize crops that are grown in-house. Using all available spaces – dedicated green spaces, vertical agriculture, and “victory gardens” on balconies – residents are able to grow their own fruits and vegetables. And household 3-D food printers will dispense tailor-made treats, from protein-rich snacks and carb crackers to chocolate and cakes.
And of course, with advances in smart home technology, you can expect that your appliances, thermostat, and display devices will all be predictive and able to anticipate your needs for the day. What’s more, they will all be networked and connected to you via a smartphone or some other such device, which by 2030, is likely to take the form of a smartwatch, smartring or smartbracelet.
Speaking of which…
Smart Devices and Appliances: When it comes to living in the coming decades, the devices we use to manage our everyday lives and needs will have evolved somewhat. 3-D printing is likely to be an intrinsic part of this, manufacturing everything from food to consumer products. And when it comes to scanning things for the sake of printing them, generating goods on demand, handheld scanners are likely to become all the rage.
That’s where devices like the Mo.Mo. (pictured above) will come into play. According to Futurist Forum, this molecular scanning device scans objects around your house, tells you what materials they’re made from, and whether they can be re-created with a 3-D printer. Personal, household printers are also likely to be the norm, with subscriptions to open-source software sites leading to on-demand household manufacturing.
And, as already mentioned, everything in the home and workplace is likely to be connected to your person through a smart device or embedded chips. Consistent with the concept of the “Internet of Things”, all devices are likely to be able to communicate with you and let you know where they are in real time. To put that in perspective, imagine Siri speaking to you in the form of your car keys, telling you they are under the couch.
Telepresence, teleconferencing and touchscreens made out of every surface are also likely to have a profound effect. When a person wakes in the morning, the mirror on the wall will display the date, time, temperature, and any messages and emails received during the night. In the shower, the wall could display comforting images while music plays. This video from Corning Glass illustrates this quite well:
And the current range of tablets, phablets and smartphones are likely to be giving way to flexible, transparent, and ultralight/ultrathin handhelds and wearables that use projection and holographic technology. These will allow a person to type, watch video, or just interface with cyberspace using augmented reality instead of physical objects (like a mouse or keyboard).
And devices which can convert, changing from a smartphone to a tablet to a smartwatch (and maybe even glasses) are another predicted convenience. Relying on nanofabrication technology, Active-Matrix Organic Light-Emitting Diode (AMOLED) technology, and touch-sensitive surfaces, these devices are sure to corner the market of electronics. A good example is Nokia’s Morph concept, shown here:
Energy Needs: In the cities of the near future, how we generate electricity for all our household appliances, devices and possibly robots will be a pressing concern. And in keeping with the goal of sustainability, those needs are likely to be met by solar, wind, piezoelectric, geothermal and tidal power wherever possible. By 2030, buildings are even expected to have arrays built into them to ensure that they can meet their own energy needs independently.
This could look a lot like the Strawscraper (pictured above), where thousands of fronds harness wind currents to generate electricity all day long; or fields filled with Windstalks – standing carbon-fiber-reinforced poles that generate electricity simply by swaying with the wind. Buildings could also incorporate wind tunnels and turbines (as envisioned for the Pertamina Energy Tower in Jakarta) to do the same job.
In addition, solar panels mounted on the exterior would convert daylight into energy. Where geothermal sources are accessible, superheated subterranean steam could be turned into power through underground pipes connected to turbines. And for buildings located near the sea, turbines placed in the harbor could do the same job by capturing the energy of the tides.
Furthermore, piezoelectric devices could be used to turn everyday activity into electricity. Take the Pavegen as an example: a material composed of recycled tires and piezoelectric generators that turns footsteps into energy. By equipping every hallway, stairwell and touch surface with such tensile material and generators, just about everything residents do in a building could become a source of added power.
On top of that, piezoelectric systems could be embedded in roads and on- and off-ramps, turning automobile traffic into electrical power. In developed countries, this is likely to take the form of advanced materials that create electrical charges when compressed. But for developing nations, a simpler system of air cushions and motors could also be effective, as demonstrated by Macías Hernández’ proposed system for Mexico City.
And this would seem like a good segue into the issue of…
Mass Transit: According to UN surveys, roughly 60% of the world’s population will live in cities by the year 2030. Hopefully, the 5.1 billion of us negotiating tight urban spaces by then will have figured out a better way to get around. With so many people packed into dense urban environments, it is simply not practical for all these individuals to rely on smog-emitting automobiles.
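The 5.1-billion figure can be sanity-checked against the UN’s projected world population for 2030 – roughly 8.5 billion, a figure assumed here for illustration rather than stated in the article:

```python
# Rough check: 60% of the projected 2030 world population.
# The 8.5-billion projection is an assumption for illustration.
PROJECTED_POPULATION_2030 = 8.5e9
URBAN_SHARE = 0.60

urban_population = PROJECTED_POPULATION_2030 * URBAN_SHARE
print(f"{urban_population / 1e9:.1f} billion")  # 5.1 billion
```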
For the most part, this can be tackled with mass transit that is fast and efficient – the very hallmarks of maglev trains. And while most current designs are already speedy and produce a smaller carbon footprint than armies of cars, next-generation designs like the Hyperloop, The Northeast Maglev (TNEM), and the Nagoya-Tokyo maglev connector are even more impressive.
Dubbed by Elon Musk a “fifth mode” of transportation, these systems would rely on linear electric motors, solar panels, and air cushions to achieve speeds of up to 1,290 kilometers per hour (800 mph). In short, they would be able to transport people from Los Angeles to San Francisco in 30 minutes, from New York to Washington D.C. in 60 minutes, and from Nagoya to Tokyo in just 41.
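As a quick sanity check, the two quoted top-speed figures are the same number in different units, as a one-off conversion sketch shows:

```python
# Unit-conversion check: 800 mph expressed in km/h.
MPH_TO_KMH = 1.609344  # kilometers per statute mile

def mph_to_kmh(mph: float) -> float:
    return mph * MPH_TO_KMH

print(round(mph_to_kmh(800)))  # 1287, consistent with the ~1290 km/h figure
```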
When it comes to highways, future designs are likely to focus on keeping electric cars charged over long distances. Consider the example from Sweden, where Volvo is working to create an electric highway with embedded power lines that charge cars as they drive. And on top of that, highways in the future are likely to be “smart”.
For example, the Netherlands-based Studio Roosegaarde has created a concept which relies on motion sensors to detect oncoming vehicles and light the way for them, then shuts down to reduce energy consumption. Lane markings will use glow-in-the-dark paint to minimize the need for lighting, and another temperature-sensitive paint will be used to show ice warnings when the surface is unusually cold.
In addition, the road markings are expected to have longer-term applications, such as being integrated into a robot vehicle’s intelligent monitoring systems. As automated systems and internal computers become more common, smart highways and smart cars are likely to become integrated through their shared systems, taking people from A to B with only minimal assistance from the driver.
And then there’s the concept being used for the future of the Pearl River Delta. This 39,380 square-km (15,200 square-mile) area in southeastern China encompasses a network of rapidly booming cities like Shenzhen, which is one of the most densely populated areas in the world. It’s also one of the most polluted, thanks to the urban growth bringing with it tons of commuters, cars, and vehicle exhaust.
That’s why NODE Architecture & Urbanism – a Chinese design firm – has come up with a city plan for 2030 that puts transportation below ground, freeing up the whole city above for more housing and public space. Yes, in addition to mass transit – like subways – even major highways would be relegated underground, with noxious fumes piped and tunneled elsewhere, leaving the cityscape far less polluted and its air safer to breathe.
Personal cars will not be gone, however. Which brings us to…
Personal Transit: In the future, the majority of transport is likely to still consist of automobiles, albeit ones that overwhelmingly rely on electric, hydrogen, biofuel or hybrid engines to get around. And keeping these vehicles fueled is going to be one of the more interesting aspects of future cities. For instance, electric cars will need to stay charged when in use in the city, and charge stations are not always available.
That’s where a company like HEVO Power comes into play, with its concept of parking chargers that offer top-ups for electric cars. Having teamed up with NYU Polytechnic Institute to study the possibility of charging parked vehicles on the street, the company has devised a manhole cover-like device that can be installed in a parking space, hooked up to the city grid, and used to recharge batteries while commuters do their shopping.
And when looking at individual vehicles, one cannot overstate the role played by robot cars. Already, proposals are being made by companies like Google and Chevrolet for autonomous vehicles that people will be able to summon using their smartphones. In addition, these vehicles will use GPS navigation to automatically make their way to a destination and store locations in memory for future use.
And then there’s the role that will be played by robotaxis and podcars, a concept which is already being put to work in Masdar Eco City in the United Arab Emirates, San Diego and (coming soon) the UK town of Milton Keynes. In the case of Masdar, the 2GetThere company has built a series of rails that can accommodate 25,000 people a month and are consistent with the city’s plans to create clean, self-sustaining options for transit.
In the case of San Diego, this consists of a network known as the Personal Rapid Transit System – a series of on-call, point-to-point transit cars that move along main lines and intermediate stations to find the quickest route to a destination. In Britain, similar plans are being considered for the town of Milton Keynes – a system of 21 on-call podcars similar to what is currently employed at Heathrow Airport.
But of course, not all future transportation needs will be solved by maglev trains or armies of podcars. Some existing technologies – such as the bicycle – work pretty well, and just need to be augmented. Lightlane is a perfect example: a set of lasers and LED lights mounted under the seat that bikers use to project their own personal bike lane as they ride.
And let’s not forget the Copenhagen Wheel, a device invented by the MIT SENSEable City Lab back in 2009 to electrify the bicycle. Much like other powered-bicycle devices being unveiled today, this electric wheel has a power-assist feature to aid the rider and a regenerative braking system that stores energy; it is controlled by sensors in the pedals, and its smart features can be managed via a smartphone app.
On top of all that, some research suggests that separating modes of transportation – bike lanes, car lanes, bus lanes, etc. – actually does more harm than good to the people using them. In Europe, the traffic concept known as “shared spaces” strips paths of traffic markings and lights, allowing walkers and drivers to negotiate their routes on their own.
Shared spaces create more consideration and consciousness for other people using them, which is why the Boston architecture firm Höweler + Yoon designed the “Tripanel” as part of their larger vision for the Boston-Washington corridor (aka. “Boswash”). The Tripanel features a surface that switches among grass, asphalt, and photovoltaic cells, offering a route for pedestrians, bikers, and electric cars.
Education: When it comes to schooling ourselves and our children, the near future is likely to see some serious changes, leading to a virtual reinventing of educational models. For some time now, educators have been predicting how the plurality of perspectives and the rise of a globalized mentality would cause the traditional mode of learning (i.e. centralized schools, transmission learning) to break down.
And according to other speculative thinkers, such as Salim Ismail – the director of Singularity University – education will cease being centralized at all and become an “on-demand service”. In this model, people will simply “pull down a module of learning”, and schooldays and classrooms will be replaced by self-directed lessons and “microlearning moments”.
In this new learning environment, teleconferencing, telepresence, and internet resources are likely to be the main driving force. And while the size and shape of future classrooms is difficult to predict, it is likely that classroom sizes will be smaller by 2030, with just a handful of students using portable devices and display glasses to access information while under the guidance of a teacher.
At the same time, classrooms are likely to be springing up everywhere, in the forms of learning annexes in apartment buildings, or home-school environments. Already, this is an option for distance education, where students and teachers are connected through the internet. With the addition of more sophisticated technology, and VR environments, students will be able to enter “virtual classrooms” and connect across vast distances.
According to Eze Vidra, the head of Google Entrepreneurs Europe: “School kids will learn from short bite-sized modules, and gamification practices will be incorporated in schools to incentivize children to progress on their own.” In short, education will become a self-directed or (in the case of virtual environments) disembodied experience that is less standardized, more fun, and more suited to individual needs.
Health: Many experts believe that medicine in the future is likely to shift away from addressing illness to prevention. Using thin, flexible, skin-mounted, embedded, and handheld sensors, people will be able to monitor their health on a daily basis, receiving up-to-date information on their blood pressure, cholesterol, kidney and liver values, and the likelihood that they might contract diseases in their lifetime.
All of these devices are likely to be bundled in one way or another, connected via smartphone or some other such device to a person’s home computer or account. Or, as Ariel Schwartz of Co.Exist anticipates, they could come in the form of a “Bathroom GP”, where a series of devices like the Dr. Loo and Dr. Sink measure everything from kidney function to glucose levels during a routine trip.
Basically, these smart toilets and sinks screen for illnesses by examining your spittle, feces, urine and other bodily fluids, and then send that data to a microchip embedded inside you or on a wristband. This info is analyzed and compared to your DNA patterns and medical records to make sure everything is within the normal range. The chip also measures vital signs, and Dr Mirror displays all the results.
Hospitals will still exist to deal with serious cases, such as injuries or the sudden onset of illness, but we can expect them to be augmented thanks to the incorporation of new biotech, nanotech and bionic advances. With the development of bionic replacement limbs and mind-controlled prosthetics proceeding apace, every hospital in the future is likely to have a cybernetics or bioenhancement ward.
What’s more, the advent of bioprinting – where 3-D printers turn out replacement organic parts on demand – is also likely to seriously alter the field of medical science. If people are suffering from a failing heart, liver or kidney, or have ruined their knees or other joints, they can simply put in an order at the bioprinting lab and have replacement parts printed.
And as a final, encouraging point, diseases like cancer and HIV are likely to be entirely curable. With several vaccines in development that show the ability not only to block HIV but even to kill it, this one-time epidemic is likely to be a thing of the past by 2030. And with a cure for cancer expected in the coming years, people in 2030 are likely to view it the same way people view polio or tetanus today. In short: dangerous, but curable!
Buying/Selling: When it comes to living in 2030, several trends are expected to shape people’s economic behavior. These include slow economic growth, collaborative consumption, 3-D printing, rising costs, resource scarcity, an aging population, and powerful emerging economies. Some of these trends are region-specific, but all of them will affect the behavior of future generations, mainly because the world of the future will be even more integrated.
As already noted, 3-D printers and scanners in the home are likely to have a profound effect on the consumer economy, mainly by giving rise to an on-demand manufacturing ethos. This, combined with online shopping, is likely to spell doom for the department store, a process that is already well underway in most developed nations (thanks to one-stop shopping).
However, the emergence of the digital economy is also creating far more in the way of opportunities for micro-entrepreneurship and what is often referred to as the “sharing economy”. This represents a convergence between online reviews, online advertising of goods and services, and direct peer-to-peer buying and selling that circumvents major distributors.
This trend is not only reaching back in time to reestablish a bartering economy; it is also creating a “trust metric”, whereby companies, brand names, and even individuals are measured by their reputation, which in turn is based on their digital presence and what it says about them. Between a “sharing economy” and a “trust economy”, the economy of the future appears highly decentralized.
Further to this is the development of cryptocurrencies, a digital medium of exchange that relies solely on consumer demand to establish its value – not gold standards, speculators or centralized banks. The first such currency was Bitcoin, which emerged in 2009, but which has since been joined by numerous others like Litecoin, Namecoin, Peercoin, Ripple, Worldcoin, Dogecoin, and Primecoin.
In this especially, the world of 2030 is appearing to be a very fluid place, where wealth depends on spending habits and user faith alone, rather than the power of governments, financial organizations, or centralized bureaucracies. And with this movement into “democratic anarchy” underway, one can expect the social dynamics of nations and the world to change dramatically.
Space Travel!: This last section is of such significance that it simply must end with an exclamation mark. And this is simply because by 2030, many missions and projects that will pave the way towards a renewed space age will be happening… or not. It all comes down to whether or not the funding is made available, public interest remains high, and the design and engineering concepts involved hold true.
However, other things are likely to become the norm, such as space tourism. Thanks to companies like World View and visionaries like Richard Branson (the pioneer of space tourism with Virgin Galactic), trips to the upper atmosphere are likely to become a semi-regular occurrence, paving the way not only for off-world space tourism, but aerospace transit across the globe as well.
Private space exploration will also be in full swing, thanks to companies like SpaceX and visionaries like Elon Musk. This year, SpaceX is preparing for the first launch of its Falcon Heavy rocket, a move which will bring affordable space flight that much closer. And by 2030, affordability will be the hallmark of private ventures into space, which will likely include asteroid mining and maybe even the construction of space habitats.
2030 is also the year that NASA plans to send astronauts to Mars, using the Orion Multi-Purpose Crew Vehicle and the Space Launch System (SLS). Once there, the crew will conduct surface studies and build upon the vast legacy of the Spirit, Opportunity and Curiosity rovers to determine what Mars once looked like. This will surely be a media event, the likes of which has not been seen since the Apollo Moon landings.
Speaking of media events, by 2030 NASA may not even be the first space agency or organization to set foot on Mars. Not if Mars One, a nonprofit organization based in the Netherlands, gets its way and manages to land a group of colonists there by 2023. And they are hardly alone, as Elon Musk has already expressed an interest in establishing a colony of 80,000 people on the Red Planet sometime in the future.
And Inspiration Mars, another non-profit, headed by space adventurist Dennis Tito, will have already sent an astronaut couple on a round trip to Mars (again, if all goes as planned). The mission, currently slated for 2018 when the planets align favorably, will by then be a distant memory, but will serve as an example to all the private space ventures that followed.
In addition to Mars, one-way trips are likely to be underway to other celestial bodies as well. For instance, Objective Europa – a non-profit made up of scientists, conceptual artists, and social-media experts – plans to send a group of volunteers to the Jovian moon of Europa. And while 2030 seems a bit soon for such a mission, it is likely that (if it hasn’t been scrapped) the program will be in its advanced stages by then.
NASA and other space agencies are also likely to be eyeing Europa at this time, and perhaps even sending ships there to investigate the possibility of life beneath its icy surface. Relying on recent revelations that the moon’s ice sheet is thinnest at the equator, a lander or space penetrator may well find its way through the ice and determine once and for all whether the warm waters below are home to native life forms.
By 2030, NASA’s MAVEN and India’s MOM satellites will also have studied the Martian atmosphere, no doubt providing a much fuller picture of its disappearance. At the same time, NASA will have already towed an asteroid to within the Moon’s orbit to study it, and begun constructing an outpost at the L2 Lagrange Point on the far side of the Moon, should all go as planned.
And last, but certainly not least, by 2030, astronauts from NASA, the ESA, and possibly China are likely to be well on their way towards the creation of a permanent outpost on the Moon. Using a combination of 3-D printing, robots, and sintering technology, future waves of astronauts and settlers will have permanent domes made directly out of regolith with which to conduct research on the Lunar surface.
All of these adventures will help pave the way to a future where space tourism to other planets, habitation on the Moon and Mars, and ventures to the asteroid belt (which will solve humanity’s resource problem indefinitely), will all be the order of the day.
Summary: To break it all down succinctly, the world of 2030 is likely to be rather different than the one we are living in right now. At the same time though, virtually all the developments that characterize it – growing populations, bigger cities, Climate Change, alternative fuels and energy, 3-D printing, cryptocurrencies, and digital devices and communications – are already apparent now.
Still, as these trends and technologies continue to expand and are distributed to more areas of the world – not to mention more people, as they come down in price – humanity is likely to start taking them for granted. The opportunities they open, and the dependency they create, will have a very deterministic effect on how people live and how the next generation will be shaped.
All in all, 2030 will be a very interesting time, because it is then that so many developments – the greatest being Climate Change and the accelerating pace of technological change – will be on the verge of reaching a tipping point. By 2050, both of these factors are likely to come to a head, vying for control of our future and taking humanity in entirely different directions.
Basically, as the natural environment reels from the effects of rising temperatures and an estimated CO2 concentration of 600 ppm in the upper atmosphere, the world will come to be characterized by famine, scarcity, shortages, and high mortality. At the same time, the accelerating pace of technology promises to lead to a new age where abundance, post-scarcity and post-mortality are the norm.
So in the end, 2030 will be a sort of curtain raiser for the halfway point of the 21st century, during which time, humanity’s fate will have become largely evident. I’m sure I’m not alone in hoping things turn out okay, because our children are surely expecting to have children of their own, and I know they would like to leave behind a world the latter could also live in!
If 2013 will go down in history as the year the Higgs Boson was discovered, then 2014 may very well be known as the year dark matter was first detected. Much like the Higgs Boson, our understanding of the universe rests upon the definitive existence of this mysterious entity, which alongside “dark energy” is believed to make up the vast majority of the cosmos.
Before 2014 rolled around, the Large Underground Xenon experiment (LUX) – located near the town of Lead in South Dakota – was seen as the best candidate for finding it. However, since that time, attention has also been directed towards the DarkSide-50 experiment located deep underground in the Gran Sasso mountain, the highest peak in the Apennines of central Italy.
This project is an international collaboration between Italian, French, Polish, Ukrainian, Russian, and Chinese institutions, as well as 17 American universities, which aims to pin down dark matter particles. The project team spent last summer assembling their detector, a grocery bag-sized device that contains liquid argon, cooled to a temperature of -186° C (-302.8° F), where it is in a liquid state.
According to the researchers, the active, Teflon-coated part of the detector holds 50 kg (110 lb) of argon, which provides the 50 in the experiment’s name. Rows of photodetectors line the top and bottom of the device, while copper coils collect the stripped electrons to help determine the location of collisions between dark matter and visible matter.
The research team, as well as many other scientists, believe that a particle known as a WIMP (weakly interacting massive particle) is the prime candidate for dark matter. WIMPs interact only weakly with their surroundings, so catching one in the act is a rare event. The researchers believe these particles can be detected when one collides with the nucleus of an atom, such as argon.
By cramming the chamber of their detector with argon atoms, the team increases their chance of seeing a collision. The recoil from these collisions can be seen in a short-lived trail of light, which can then be detected using the chamber’s photodetectors. To ensure that background events are not interfering, the facility is located deep underground to minimize background radiation.
To aid in filtering out background events even further, the detector sits within a steel sphere that is suspended on stilts and filled with 26,500 liters (7,000 gallons) of a scintillator fluid. This sphere in turn sits inside a three-story-high cylindrical tank filled with 946,350 liters (250,000 gallons) of ultrapure water. These different chambers help the researchers differentiate WIMP particles from neutrons and cosmic-ray muons.
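For the curious, the metric/imperial pairs quoted for the shielding volumes can be double-checked with a quick conversion sketch (using the US liquid gallon of 3.785411784 liters):

```python
# Liter-to-US-gallon conversions for the quoted shielding volumes.
LITERS_PER_US_GALLON = 3.785411784

def liters_to_gallons(liters: float) -> float:
    return liters / LITERS_PER_US_GALLON

print(round(liters_to_gallons(26_500)))   # ~7001, i.e. the quoted 7,000 gallons
print(round(liters_to_gallons(946_350)))  # ~249999, i.e. the quoted 250,000 gallons
```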
Since autumn of 2013, the DarkSide-50 project has been active and busy collecting data. And it is one of about three dozen detectors in the world that is currently on the hunt for dark matter, which leads many physicists to believe that elusive dark matter particles will be discovered in the next decade. When that happens, scientists will finally be able to account for 31.7% of the universe’s mass, as opposed to the paltry 4.9% that is visible to us now.
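Those percentages line up with the Planck mission’s breakdown of the universe’s mass-energy content – roughly 26.8% dark matter, 4.9% ordinary matter, and 68.3% dark energy:

```python
# Mass-energy budget of the universe (Planck 2013 figures, in percent).
DARK_MATTER = 26.8
ORDINARY_MATTER = 4.9
DARK_ENERGY = 68.3

# Detecting dark matter would let us account for all matter combined:
print(round(DARK_MATTER + ORDINARY_MATTER, 1))  # 31.7, the figure quoted above

# The three components sum to the full budget:
assert abs(DARK_MATTER + ORDINARY_MATTER + DARK_ENERGY - 100.0) < 1e-9
```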
Now if we could only account for all the “dark energy” out there – which is believed to make up the other 68.3% of the universe’s mass – then we’d really be in business! And while we’re waiting, feel free to check out this documentary video about the DarkSide-50 Experiment and the hunt for dark matter, courtesy of Princeton University:
For over a century, the debate about what dinosaurs truly looked like has raged. In that time, owing to a poverty of hard evidence beyond fossilized bones, paleontologists have produced some rather wild theories. Whereas some have stuck to the notion that dinosaurs were scaly, others have suggested everything from flat skin to fur to feathers. And now, it seems that a clearer picture may have emerged.
After surveying all the world’s known fossils of dinosaur skin, a pair of paleontologists says the vast majority of non-avian dinosaurs were scaly-skinned, much like modern reptiles. While the case remains strong that certain theropods – the group that gave rise to modern birds – had feathers, it now seems that these were the exception and not the rule, as some previously thought.
Up until now, opinion remained divided because of the feather-like skin impressions found around the fossilized remains of certain theropods, the dinosaur group that contained the likes of Tyrannosaurus and Velociraptor. By contrast, the ornithischian lineage – i.e. Triceratops, Stegosaurus, Ankylosaurus, etc. – and the huge, long-necked sauropods were considered to be scaly.
However, beginning in 2002, a few ornithischians were discovered with filament-like structures in their skin. This led to speculation that feather-like structures were an ancestral trait for all dinosaur groups. Keen to know more, palaeontologists Paul Barrett of the Natural History Museum in London and David Evans of the Royal Ontario Museum in Toronto created a database of all known impressions of dinosaur skin tissues.
After compiling the data, they then proceeded to identify those that had feathers or feather-like structures, and considered relationships in the dinosaurian family tree. The results, which were revealed back in October at the annual meeting of the Society of Vertebrate Palaeontology, indicate that although some ornithischians had quills or filaments in their skin, the overwhelming majority had scales.
In addition, the survey results suggest that dinosaur feathers, bristles, or fuzz did not arise early enough in the family tree to spread to many non-avian dinosaurs. According to Richard Butler, a paleontologist from the University of Birmingham in the U.K. who was not associated with the study, the results are a “valuable reality check” about the appearance of early dinosaurs.
Even so, during an interview with Nature News, Butler was quick to point out that the findings are not set in stone:
We don’t have primitive dinosaurs from the late Triassic and early Jurassic periods preserved in the right conditions for us to find skin or feather impressions. This picture could quickly change if we start finding early dinosaurs with feathers on them.
As a result, paleontologists cannot be sure precisely when or how dino-feathers evolved. If they arose further back in the dinosaur family tree, then more dinosaurs are likely to have had them. And with new discoveries being made all the time, things may once again tip back in favor of the majority of dinosaurs being feathered, furry or fuzzy.
Here we have two more stories from last year that I can’t move on from without posting about. And considering just how relevant they are to the field of biomedicine, there was no way I could let them go unheeded. Not only are developments such as these likely to save lives, they are also part of a much-anticipated era where mortality will be a nuisance rather than an inevitability.
The first story comes to us from the University of New South Wales (UNSW) in Australia and the Harvard Medical School, where a joint effort achieved a major step towards the dream of clinical immortality. Experimenting on mice, the researchers managed to reverse the effects of aging using an approach that restores communication between a cell’s mitochondria and nucleus.
Mitochondria are the power supply for a cell, generating the energy required for key biological functions. When communication breaks down between mitochondria and the cell’s control center (the nucleus), the effects of aging accelerate. Led by David Sinclair, a professor from UNSW Medicine at Harvard Medical School, the team found that by restoring this molecular communication, aging could not only be slowed, but reversed.
Responsible for this breakdown is a decline of the chemical Nicotinamide Adenine Dinucleotide (or NAD). By increasing amounts of a compound used by the cell to produce NAD, Professor Sinclair found that he and his team could quickly repair mitochondrial function. Key indicators of aging, such as insulin resistance, inflammation and muscle wasting, showed extensive improvement.
In fact, the researchers found that the tissue of two-year-old mice given the NAD-producing compound for just one week resembled that of six-month-old mice. They said that this is comparable to a 60-year-old human converting to a 20-year-old in these specific areas. As Dr Nigel Turner, an ARC Future Fellow from UNSW’s Department of Pharmacology and co-author of the team’s research paper, said:
It was shocking how quickly it happened. If the compound is administered early enough in the aging process, in just a week, the muscles of the older mice were indistinguishable from the younger animals.
The technique has implications for treating cancer, type 2 diabetes, muscle wasting, inflammatory and mitochondrial diseases as well as anti-aging. Sinclair and his team are now looking at the longer-term outcomes of the NAD-producing compound in mice and how it affects them as a whole. And with the researchers hoping to begin human clinical trials in 2014, some major medical breakthroughs could be just around the corner.
In another interesting medical story, back in mid-December a 75-year-old man in Paris became the recipient of the world’s first Carmat bioprosthetic artificial heart. Now technically, artificial hearts have been in use since the 1980s. But what sets this particular heart apart, according to its inventor – cardiac surgeon Alain Carpentier – is that the Carmat is the first artificial heart to be self-regulating.
In this case, self-regulating refers to the Carmat’s ability to speed or slow its flow rate based on the patient’s physiological needs. For example, if they’re performing a vigorous physical activity, the heart will respond by beating faster. This is made possible via “multiple miniature embedded sensors” and proprietary algorithms running on its integrated microprocessor. Power comes from an external lithium-ion battery pack worn by the patient, and a fuel cell is in the works.
Most other artificial hearts beat at a constant, unchanging rate, which means that patients either have to avoid too much activity or risk becoming exhausted quickly. In the course of its human trials, the Carmat will be judged on its ability to keep patients with heart failure alive for a month, but the final version is being designed to operate for five years.
The current lone recipient is reported to be recuperating in intensive care at Paris’ Georges Pompidou European Hospital, where he is awake and carrying on conversations. “We are delighted with this first implant, although it is premature to draw conclusions given that a single implant has been performed and that we are in the early postoperative phase,” says Carmat CEO Marcello Conviti.
According to a Reuters report, although the Carmat is similar in size to a natural adult human heart, it is somewhat larger and almost three times as heavy – weighing in at approximately 900 grams (2 lb). It should therefore fit inside 86 percent of men, but only 20 percent of women. That said, the company has stated that a smaller model could be made in time.
In the meantime, it’s still a matter of making sure the self-regulating bioprosthetic actually works and prolongs the life of patients who are in the final stages of heart failure. Assuming the trials go well, the Carmat is expected to be available within the European Union by early 2015, priced at between 140,000 and 180,000 euros, which works out to $190,000 – $250,000 US.
See what I mean? From anti-aging to artificial organs, the war on death proceeds apace. Some will naturally wonder if that’s a war meant to be fought, or an inevitability worth mitigating. Good questions, and ones which we can expect to address at length as the 21st century progresses…
If time travelers were real, à la Doctor Who or Doc Emmett Brown, how would they go about sharing their gift with the world? According to astrophysicist Robert Nemiroff and physics graduate student Teresa Wilson at Michigan Technological University, they would tweet about it. And so, the two began what has proven to be one of the most interesting searches on today’s social media.
To break it down succinctly, the pair began to search the backlogs of Facebook and Twitter for any indications of time travelers posting about the future. This they did by entering search terms for two major events – the appearance of Comet ISON in September of 2012 and the election of Pope Francis in March 2013 – to see if there was any mention of them before they happened.
Their theory, as presented in a paper published last month on Cornell University’s Library website, was that if there were any postings containing “Comet ISON,” “#cometison,” “Pope Francis” or “#popefrancis” from before those dates, they may very well be from a time traveler. Unfortunately, their searches on Facebook turned up results which, in their own words, “were clearly not comprehensive.”
Granted, a time-traveler would be quick to delete any status updates that appeared prescient, using Facebook’s new Graph Search privacy features. However, the time-traveler-hunting duo had no better luck on Twitter, where a majority of people keep their tweets public. But of course, they went on to say in their paper that just because they didn’t see any time travelers doesn’t mean they don’t exist.
As they argue it, it might not be possible for time travelers to leave any evidence of their journey behind.
…it may be physically impossible for us to find such information as that would violate some yet-unknown law of physics… time travelers may not want to be found, and may be good at covering their tracks.
Another thing to consider is that time travelers might actively try to erase any mention of their existence. For example, in the first season of the Doctor Who reboot, the Doctor used a special virus to delete any digital trace of himself before leaving the present age. It’s academic stuff, really, and the pair perhaps shouldn’t have expected such careless errors to show up on the internet.
And as Donald Rumsfeld, another man who went searching for something and came up empty, said: “the absence of evidence is not the evidence of absence”. Yes, I know, the comparison really doesn’t help, does it? And right now, I’m sure you might be wondering if all those tax dollars that fund research grants might be better used elsewhere.
And let’s face it, it’s something many of us would wonder, and possibly check for ourselves, given half a chance…