Timeline of the Future…

I love to study this thing we call “the future”, and I began to do so as a hobby the day I decided to become a sci-fi writer. And if there’s anything I’ve learned, it’s that the future is an intangible thing, a slippery beast that is constantly receding before us, one we try to catch by the tail at any given moment. And when we predict it, we are saying more about the time in which we are living than about anything that has yet to occur.

As William Gibson famously said: “…science fiction was always about the period in which it was written.” At every juncture in our history, what we perceive as being the future changes based on what’s going on at the time. And always, people love to bring up what has been predicted in the past and either fault or reward the authors for “getting it right” or missing the mark.

This would probably leave many people wondering what the point of it all is. Why not just wait and let the future tend to itself? Because it’s fun, that’s why! And as a science fiction writer, it’s an indispensable exercise. Hell, I’d argue it’s absolutely essential to society as a whole. As a friend of mine once said, “science fiction is more of a vehicle than a genre.” The point is to make observations about society, life, history, and the rest.

And sometimes, just sometimes, predictive writers get it right. And lately, I’ve been inspired by sources like Future Timeline to revisit the kinds of predictions I began making when I started writing, and to revise them. Not only have times changed and forced me to revise my own predictions, but my research into what makes humanity tick and what we’re up to has come a long way.

So here’s my own prediction tree, looking at the next few centuries and what’s likely to happen…

21st Century:

2013-2050:

  • Ongoing recession in the world economy; the United States ceases to be the greatest economic power
  • China, India, Russia and Brazil boast the highest rates of growth despite continued poverty
  • Oil prices spike due to the onset of peak oil and the costs of extracting tar sands
  • Solar power, wind, tidal power growing in use, slowly replacing fossil fuel and coal
  • First arcologies finished in China, Japan, Russia, India and the United States


  • Humanity begins colonizing the Moon and mounts manned mission to Mars
  • Settlements constructed using native soil and 3D printing/sintering technology
  • NASA tows asteroid to near Earth and begins studies, leading to plans for asteroid mining
  • Population grows to 9 billion, with over 6 billion living in major cities across all five continents
  • Climate Change leading to extensive drought and famine, as well as coastal storms, flooding and fires
  • Cybernetics, nanotech and biotech leading to the elimination of disabilities
  • 3D Construction and Computer-Assisted Design create inexpensive housing in developing world


  • First exploratory mission to Europa mounted, discovers proof of basic life forms under the surface ice
  • Rome ordains first openly homosexual priests, an extremely controversial move that splits the church
  • First semi-sentient AIs capable of passing the Turing test are produced and put into service
  • Thin, transparent, flexible medical patches lead to an age of “digital medicine”
  • Religious orders form in opposition to “augmentation”, “transhumanism” and androids
  • First true quantum computers roll off the assembly line


  • Creation of the worldwide quantum internet underway
  • Quantum cryptography leads to increased security; spamming and hacking begin to drop
  • Flexible, transparent smartphones, PDAs and tablets become the norm
  • Fully immersive VR environments now available for recreational, commercial and educational use
  • Carbon dioxide in the upper atmosphere passes 600 ppm, efforts to curb emissions are redoubled
  • ISS is retired, replaced by multiple space stations servicing space shuttles and commercial firms
  • World’s first orbital colony created with a population of 400 people

2050-2100:

  • Global economy enters “Second Renaissance” as AI, nanomachinery, quantum computing, and clean energy lead to explosion in construction and development
  • Commercial space travel becomes a major growth industry with regular trips to the Moon
  • Implant technology removes the need for digital devices, technology now embeddable
  • Medical implants leading to elimination of neurological disorders and injuries
  • Synthetic food becoming the rage, 3D printers offering balanced nutrition with sustainability


  • Canada, Russia, Argentina, and Brazil become leading exporters of foodstuffs, fresh water and natural gas
  • Colonies on the Moon and Mars expand, new settlement missions plotted to Ganymede, Europa, Oberon and Titan
  • Quantum internet expanding into space with quantum satellites, allowing off-world connectivity to worldwide web
  • Self-sufficient buildings with water recycling, carbon capture and clean energy become the norm in all major cities
  • Second and third generation “Martians” and “Loonies” are born, giving rise to colonial identity


  • Asteroid Belt becomes greatest source of minerals, robotic foundries use sintering to create manufactured products
  • Europe experiences record number of cold winters due to disruption of the Gulf Stream
  • Missions mounted to extra-Solar systems using telexploration probes and space penetrators
  • Average life expectancy now exceeds 100, healthy children expected to live to 120 years of age
  • NASA, ESA, CNSA, RFSA, and ISRO begin mounting missions to exoplanets using robot ships and antimatter engines
  • Private missions to exoplanets with cryogenically frozen volunteers and crowdfunded spaceships


  • Severe refugee crises take place in South America, Southern Europe and South-East Asia
  • Militarized borders and sea lanes trigger multiple humanitarian crises
  • India and Pakistan go to war over Indus River as food shortages mount
  • China clamps down on separatists in the western provinces of Xinjiang and Tibet to protect the sources of the Yangtze and Yellow Rivers
  • Biotechnology begins to grow, firms using bacteria to assemble structural materials


  • Fully sentient AIs created and integrated into all aspects of life
  • Traditionalist communities form, people seeking to disconnect from modern world and eschew enhancement
  • Digital constructs become available, making neurological downloads possible
  • Nanotech research leading to machinery and materials assembled at the atomic level
  • Traditional classrooms giving way to “virtual classrooms”, on-demand education by AI instructors
  • Medical science, augmentation, pharmaceuticals and uploads lead to the first generation of human “Immortals”


  • Orbital colonies give way to the Orbital Nexus, with hundreds of habitats being established
  • Global population surpasses 12 billion despite widespread famine and displacement
  • Solar, wind, tidal, and fusion power replace oil and coal as the dominant power source worldwide
  • Census data shows half of world residents now have implants or augmentation of some kind
  • Research into the Alcubierre Drive begins to bear experimental results

22nd Century:

2100-2150:

  • Climate Change and global population begin to level off
  • First “Neural Collective” created, volunteers upload their thought patterns into matrix with others
  • Transhumanism becomes an established religion, espousing the concept of transcendence
  • Widespread use of implants and augmentation leads to creation of new underclass called “organics”
  • Solar power industry in the Middle East and North Africa leading to growth in local economies
  • Biotech leads to growth of “glucose economy”, South American and Sub-Saharan economies leading in manufacture of biomaterials
  • Population in Solar Colonies and Orbital Nexus reaches 100,000 and continues to grow


  • Off-world industry continues to grow as Asteroid Belt and colonies provide the majority of Earth’s mineral needs
  • Famine now widespread on all five continents, internalized food production in urban spaces continues
  • UN gives way to UNE, United Nations of Earth, which has near-universal representation
  • First test of Alcubierre FTL Drive successful, missions to neighboring systems planned
  • Tensions rise in the Solar Colonies as pressure mounts to produce more agricultural goods
  • Extinction rate of wild animals begins to drop off, efforts at ecological restoration continue
  • First attempts at creating a world religion are mounted, met with limited success


  • Governments in most developed countries transitioning to “democratic anarchy”
  • Political process and involvement becoming digitized as representation becomes obsolete
  • “Super-sentience” emerges as people merge their neural patterns with each other or AIs
  • Law reformed to recognize neural constructs and AIs as individuals, entitled to legal rights
  • Biotech research merges with AI and nanotech to create first organic buildings with integrated intelligence

2150-2200:

  • Majority of the world’s population live in arcologies and self-sufficient environments
  • Census reveals over three quarters of world lives with implants or augmentation of some kind
  • Population of Orbital Nexus, off-world settlements surpasses 1 million
  • First traditionalist mission goes into space, seeking a world insulated from rapid change and development
  • Labor tensions and off-world riots lead to creation of Solar policing force with mandate to “keep the peace”


  • First missions to extra-Solar planets arrive; robots begin surveying the surfaces of Gliese 581 g, Gliese 667C c, HD 85512 b, HD 40307 g, Gliese 163 c, Tau Ceti e, and Tau Ceti f
  • Deep space missions planned and executed with Alcubierre Drive to distant worlds
  • 1st Wave using relativistic engines and 2nd Wave using Alcubierre Drives meet up and begin colonizing exoplanets
  • Neighboring star systems within 25 light years begin to be explored
  • Terraforming begins on Mars, Venus and Europa using programmed strains of bacteria, nanobots, robots and satellites
  • Space Elevator and Slingatron built on the Moon, used to transport people to space and send goods to the surface


  • Earth’s ecology begins to recover
  • Natural species are reintroduced through cloning and habitat recovery
  • Last reported famine on record, food production begins to move beyond urban farms
  • Colonies within 50 light years are established on Gliese 163 c, Gliese 581 g, Gliese 667C c, HD 85512 b, HD 40307 g, Tau Ceti e, Tau Ceti f
  • Off-world population reaches 5 million and continues to grow
  • Tensions between Earth and the Solar Colonies continue, leading to demands for an interplanetary governing body
  • Living, breathing cities become the norm on all settled worlds, entire communities built of integrated organic materials, run by AIs and maintained by programmed DNA and machinery


23rd Century and Beyond:

Who the hell knows?

*Note: Predictions and dates are subject to revision based on ongoing developments and the author’s imagination. Not to be taken literally, and definitely open to input and suggestions.

New Movie Trailer: Her

The first trailer for the upcoming movie Her has arrived, a movie about a writer who finds himself falling in love with his household AI. Written and directed by Spike Jonze (Being John Malkovich, Adaptation, Where the Wild Things Are, and somehow, Jackass), the movie stars a mustachioed Joaquin Phoenix as a lonely writer and Scarlett Johansson as the voice of the AI Samantha.

Before you go thinking this is a totally creepy and weird concept, the movie trailer actually does the idea justice – examining the nature of relationships and what it means to be in love. And of course, it’s not all one way. Whereas Phoenix’s character Theodore is a recent divorcee who believes he’s found a true love in Samantha, she is a new life form that learns what love is through her relationship with him.

Her will open in a limited release this November, and I’m actually interested in seeing it. It would be nice to see a movie where the AIs aren’t defined by their creepy red eyes or constantly trying to kill us!

Judgement Day Update: The Human Brain Project

Biomimetics is one of the fastest-growing areas of technology today, seeking to develop machinery capable of imitating biology. The purpose of this, in addition to creating machinery that can be merged with our physiology, is to arrive at a computing architecture that is as complex and sophisticated as the human brain.

While this might sound the slightest bit anthropocentric, it is important to remember that despite their processing power, supercomputers like the D-Wave Two and IBM’s Blue Gene/Q Sequoia, or AI systems like MIT’s ConceptNet 4, have all shown themselves to be lacking when it comes to common sense and abstract reasoning. Simply pouring raw computing power into the mix does not make for autonomous intelligence.

As a result of this, new steps are being taken to create a computer that can mimic the very organ that gives humanity these abilities – the human brain. In what is surely the most ambitious step towards this goal to date, an international group of researchers recently announced the formation of the Human Brain Project. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This will involve mapping out the vast network known as the human brain – a network of over a hundred billion neurons whose connections are the source of emotions, abstract thought, and this thing we know as consciousness. And to do so, the researchers will be using a progressively scaled-up, multilayered simulation running on a supercomputer.

Concordant with this bold plan, the team itself is made up of over 200 scientists from 80 different research institutions around the world. Based in Lausanne, Switzerland, this initiative is being put forth by the European Commission, and has even been compared to the Large Hadron Collider in terms of scope and ambition. In fact, some have taken to calling it the “CERN for the brain.”

According to scientists working on the project, the HBP will attempt to reconstruct the human brain piece by piece and gradually bring these cognitive components into the overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

According to a statement released by the HBP, Swedish Nobel Laureate Torsten Wiesel had this to say about the project:

The support of the HBP is a critical step taken by the EC to make possible major advances in our understanding of how the brain works. HBP will be a driving force to develop new and still more powerful computers to handle the massive accumulation of new information about the brain, while the neuroscientists are ready to use these new tools in their laboratories. This cooperation should lead to new concepts and a deeper understanding of the brain, the most complex and intricate creation on earth.

Other distinguished individuals quoted in the release include President Shimon Peres of Israel; Paul G. Allen, founder of the Allen Institute for Brain Science; Patrick Aebischer, President of EPFL in Switzerland; Harald Kainz, Rector of Graz University of Technology in Graz, Austria; as well as a slew of other politicians and academics.

Combine this with the work of other research institutions producing computer chips and processors modelled on the human brain, and with our growing understanding of the human connectome, and I think it is safe to say that by the time the HBP wraps up, we are likely to see processors capable of demonstrating intelligence not just in terms of processing speed and memory, but in terms of basic reasoning as well.

At that point, we really ought to consider instituting Asimov’s Three Laws of Robotics! Otherwise, things could get apocalyptic on our asses! 😉


Sources: io9.com, humanbrainproject.eu, documents.epfl.ch

Cool Video: “Kara”, by Quantic Dream

I just came across this very interesting video over at Future Timeline, where the subject in question was how, by the 22nd century, androids will be indistinguishable from humans. To illustrate the point, the writers used a video produced by Quantic Dream, a motion capture and animation studio that produces 3D sequences for video games as well as its own video shorts and proprietary technologies.

The video below is entitled “Kara”, a short that was developed for the PS3 and presented during the 2012 Game Developers Conference in San Francisco. A stunning visual feat and the winner of the Best Experimental Film award at the International LA Shorts Film Fest 2012, Kara tells the story of an AX 400 third-generation android getting assembled and initiated.

Naturally, things go wrong during the process when a “bug” is encountered. I shan’t say more seeing as how I don’t want to spoil the movie, but trust me when I say it’s quite poignant and manages to capture the issue of emerging intelligence quite effectively. As the good folks at Future Timeline used this video to illustrate, the 22nd century is likely to see a new type of civil rights movement, one which has nothing to do with “human rights”.

Enjoy!

Judgement Day Update: A.I. Equivalent to Four Year Old Mind

Ever since computers were first invented, scientists and futurists have dreamed of the day when machines might be capable of autonomous reasoning and able to surpass human beings. In the past few decades, it has become apparent that simply throwing more processing power at the problem of true artificial intelligence isn’t enough. The human brain remains several orders of magnitude more complex than the typical AI, but researchers are getting closer.

One such effort is ConceptNet 4, a semantic network being developed by MIT. This AI system contains a large store of information that is used to teach the system about various concepts. But more importantly, it is designed to process the relationships between things. Much like the Google Neural Net, it is designed to learn and grow to the point that it will be able to reason autonomously.

Recently, researchers at the University of Illinois at Chicago decided to put ConceptNet through an IQ test. To do this, they used the Wechsler Preschool and Primary Scale of Intelligence test, one of the common assessments used on small children. ConceptNet passed the test, scoring on par with a four-year-old in overall IQ. However, the team points out that it would be worrisome to find a real child with lopsided scores like those received by the AI.

The system performed above average on parts of the test that have to do with vocabulary and recognizing the similarities between two items. However, the computer did significantly worse on the comprehension questions, which test a little one’s ability to understand practical concepts based on learned information. In short, the computer showed relational reasoning, but was lacking in common sense.

This is the missing piece of the puzzle for ConceptNet and those like it. An artificial intelligence like this one might have access to a lot of data, but it can’t draw on it to make rational judgements. ConceptNet might know that water freezes at 32 degrees Fahrenheit, but it doesn’t know how to get from that concept to the idea that ice is cold. This is basic common sense – humans (even children) have it and computers don’t.
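
The gap can be caricatured with a toy semantic network: stored facts are easy to retrieve, but no conclusion can be drawn unless someone explicitly entered it. (This is a loose illustration only, not ConceptNet’s actual data model or API; the relation names are invented.)

```python
# Toy semantic network: explicit facts only, no inference or common sense.
facts = {
    ("water", "freezes_at"): "32 degrees Fahrenheit",
    ("ice", "is_frozen"): "water",
}

def lookup(subject, relation):
    """Retrieve a stored fact; returns None when no such fact was entered."""
    return facts.get((subject, relation))

# Retrieval of stored facts works fine...
print(lookup("water", "freezes_at"))  # 32 degrees Fahrenheit

# ...but the common-sense conclusion was never stored, so it can't be drawn.
print(lookup("ice", "feels"))         # None
```

Chaining the two stored facts into “ice is cold” would require an inference step that nothing in the table provides, which is exactly the shortfall the IQ test exposed.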

There’s no easy way to fabricate implicit information and common sense into an AI system and so far, no known machine has shown the ability. Even IBM’s Watson trivia computer isn’t capable of showing basic common sense, and though multiple solutions have been proposed – from neuromorphic chips to biomimetic circuitry – nothing is bearing fruit just yet.

But of course, the MIT research team is already hard at work on ConceptNet 5, a more sophisticated semantic network that is open source and available on GitHub. But for the time being, it’s clear that machines will remain restricted to processing information, incapable of making basic decisions. Good thing too! The sooner they can think for themselves, the sooner they can decide we’re in their way!

Source: extremetech.com

Big News in Quantum Computing!

For many years, scientists have looked at the field of quantum machinery as the next big wave in computing. Whereas conventional computing stores information in definite states (bits carried by electrons), quantum computing relies on quantum bits that can exist in superpositions of states and become entangled with one another. This would make computers exponentially faster at certain problems and more efficient, and could lead to an explosion in machine intelligence. And while the technology has yet to be fully realized, every day brings us one step closer…

One important step happened earlier this month with the installation of a D-Wave Two at the Quantum Artificial Intelligence Lab (QAIL) at the Ames Research Center in Silicon Valley, which NASA has announced it will use to pursue precisely this kind of research. Not surprisingly, the ARC is only the second lab in the world to have a quantum computer. The only other lab to possess the 512-qubit, cryogenically cooled machine is the defense contractor Lockheed Martin, which has been a D-Wave customer since 2011.

D-Wave’s new 512-qubit Vesuvius chip

And while there are still some who question the categorization of the D-Wave Two as a true quantum computer, most critics have acquiesced since many of its components function in accordance with the basic principle. And NASA, Google, and the people at the Universities Space Research Association (USRA) even ran some tests to confirm that the quantum computer offered a speed boost over conventional supercomputers – and it passed.

The new lab, which will be situated at NASA’s Advanced Supercomputing Facility at the Ames Research Center, will be operated by NASA, Google, and the USRA. NASA and Google will each get 40% of the system’s computing time, with the remaining 20% being divvied up by the USRA to researchers at various American universities. NASA and Google will primarily use the quantum computer to advance a branch of artificial intelligence called machine learning, which is tasked with developing algorithms that optimize themselves with experience.

As for what specific machine learning tasks NASA and Google actually have in mind, we can only guess. But it’s a fair bet that NASA will be interested in optimizing flight paths to other planets, or devising a safer/better/faster landing procedure for the next Mars rover. As for Google, the smart money says they will be using their time to develop complex AI algorithms for their self-driving cars, as well as optimizing their search engines and Google+.

But in the end, it’s the long-range possibilities that offer the most excitement here. With NASA and Google now firmly in command of a quantum processor, some of the best and brightest minds in the world will now be working to advance the fields of artificial intelligence, space flight, and high-tech. It will be quite exciting to see what they produce…

Another important step took place back in March, when researchers at Yale University announced that they had developed a new way to change the quantum state of photons, the elementary particles researchers hope to use for quantum memory. This is good news because reliable quantum memory is one of the hurdles that has kept true quantum computing – the kind that utilizes qubits for all of its processes – out of reach in recent years.

To break it down, today’s computers are restricted in that they store information as bits – where each bit holds either a “1” or a “0.” But a quantum computer is built around qubits (quantum bits) that can store a 1, a 0 or any combination of both at the same time. And while the qubits would make up the equivalent of a processor in a quantum computer, some sort of quantum Random Access Memory (RAM) is also needed.
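
The bit-versus-qubit distinction can be sketched numerically: a classical bit is one of two values, while a qubit is a normalized pair of complex amplitudes whose squared magnitudes give the measurement probabilities. A minimal sketch, not tied to any real quantum hardware or library:

```python
import math

# A classical bit holds exactly one of two values.
classical_bit = 0

# A qubit is a normalized pair of amplitudes (alpha, beta) for the
# state alpha|0> + beta|1>. Here, an equal superposition of 0 and 1.
alpha = beta = 1 / math.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
p_zero = abs(alpha) ** 2
p_one = abs(beta) ** 2

print(round(p_zero, 3), round(p_one, 3))  # 0.5 0.5
print(round(p_zero + p_one, 3))           # 1.0 (normalization)
```

Until the qubit is measured, both outcomes coexist; measurement collapses it to a definite 0 or 1 with the probabilities computed above.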

Gerhard Kirchmair, one of the Yale researchers, explained in a recent interview with Nature magazine that photons are a good choice for this because they can retain a quantum state for a long time over a long distance. But you’ll want to change the quantum information stored in the photons from time to time. What the Yale team has developed is essentially a way to temporarily make the memory photons “writeable,” and then switch them back into a more stable state.

To do this, Kirchmair and his associates took advantage of what’s known as a “Kerr medium” – a material that refracts light differently depending on the intensity of the light passing through it. This sets it apart from normal materials, which refract light (and any other form of electromagnetic field) the same way regardless of how much they are exposed to.
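
For an optical Kerr medium, this intensity dependence is commonly written as n(I) = n0 + n2·I, where n0 is the ordinary refractive index and n2 the Kerr coefficient. A rough numerical sketch, with coefficients chosen purely for illustration rather than taken from any real material:

```python
def kerr_index(n0, n2, intensity):
    """Intensity-dependent refractive index of a Kerr medium: n = n0 + n2 * I."""
    return n0 + n2 * intensity

# Illustrative (not measured) values: base index 1.5, small Kerr coefficient.
n0, n2 = 1.5, 1e-6
low = kerr_index(n0, n2, 1e3)   # weak field: the index barely shifts
high = kerr_index(n0, n2, 1e6)  # strong field: the index shifts substantially

print(round(low, 4), round(high, 4))  # 1.501 2.5
```

The point is simply that the material’s optical response is a function of the field applied to it, which is what lets an external microwave field toggle how the stored photons behave.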

Thus, by exposing photons to a microwave field in a Kerr medium, they were able to manipulate the quantum states of photons, making them the perfect means for quantum memory storage. At the same time, they knew that storing these memory photons in a Kerr medium would prove unstable, so they added a vacuum-filled aluminum resonator to act as a coupler. When the resonator is decoupled, the photons are stable. When the resonator is coupled, the photons are “writeable”, allowing a user to input information and store it effectively.

This is not the first or only instance of researchers finding ways to toy with the state of photons, but it is currently the most stable and effective. And coupled with other efforts, such as the development of photonic transistors and other such components, or new ways to create photons seemingly out of thin air, we could be just a few years away from the first full and bona fide quantum processor!

Sources: Extremetech.com, Wired.com, Nature.com

Judgement Day Update: Robots for Kids

Robots are penetrating every aspect of life, from serving coffee and delivering books to cleaning up messes and fighting crime. In fact, the International Federation of Robotics reported that worldwide sales of robots topped $8.5 billion in 2011, with an estimated 166,028 robots sold. And with all the advances being made in AI and musculoskeletal robotics, it’s only likely to get worse.

Little wonder, then, why efforts are being made to ensure that robots can effectively integrate into society. On the one hand, there’s the RoboEarth Cloud Engine, known as Rapyuta, which will make information sharing possible between machines to help them make sense of the world. On the other, there are items like this little gem. It’s called the Romo, and its purpose is to teach your kids about robotics in a friendly, smiling way.

Scared yet? Well, don’t be just yet. While some might think this little dude is putting a happy face on the coming robocalypse, the creators have stated that the real purpose behind it is to inspire a new, younger generation of engineers and programmers who can help solve some of the world’s technical problems in areas like health care and disaster relief.

Created by Las Vegas-based startup Romotive, this little machine uses the computing power of an iOS device as its brain. Basically, this means you can remotely control the bot with your smartphone. Simply plug the phone into the robot’s body and activate the app, and you get its blue, smiling face. Designed for use by kids, its programming comes down to a simple series of if-then dependencies.
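
That if-then style of programming can be pictured as a simple rule table. The event names and reactions below are invented for illustration and have nothing to do with Romotive’s actual app or API:

```python
# Toy sketch of if-then behavior rules, in the spirit of Romo's
# child-friendly programming model. All names here are made up.
rules = {
    "sees_face": "smile",
    "hears_loud_noise": "look_surprised",
    "battery_low": "frown",
}

def react(event):
    """Return the robot's reaction to an event, or a default idle face."""
    return rules.get(event, "idle")

print(react("sees_face"))      # smile
print(react("unknown_event"))  # idle
```

Each rule is an independent “if this happens, then do that” pairing, which is exactly the kind of logic a child can assemble without writing real code.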

In short, Romo can be programmed to recognize faces and respond to visual or auditory cues. The most common reaction is a smile, but the Romo can also look surprised and doe-eyed. And with regular app and software updates, the Romo is predicted to get smarter and more sophisticated with time.

To realize their goal of creating a child-friendly robot, the company launched a campaign on Kickstarter back in October of 2011 with a goal of raising the $32,000 they would need. In less than two years, they have received a total of 1,152 donations totaling some $114,796. Available in stores at $149 a pop (smartphone not included), the makers hope that Romo will become the first truly personal robot.

Still, never too soon to start your Judgement Day planning. Stock up on EMPs and ammo, it’s going to be a rough Robopocalypse! And be sure to check out the company website by clicking here.

Source: fastcoexist.com, kickstarter.com

The Future is Here: The Neuromimetic Processor

It’s known as mimetic technology: machinery that mimics the function and behavior of organic life. For some time, scientists have been using this philosophy to further develop computing, a process which many believe to be paradoxical. In gleaning inspiration from the organic world to design better computers, scientists are basically creating the machinery that could lead to better organics.

But when it comes to neuromorphic processors – computers that mimic the function of the human brain – scientists have lagged behind sequential computing. For instance, IBM announced this past November that its Blue Gene/Q Sequoia supercomputer could clock 16 quadrillion calculations per second, and could crudely simulate more than 530 billion neurons – roughly five times that of a human brain. However, doing this required 8 megawatts of power, enough to power 1,600 homes.

However, Kwabena Boahen, a bioengineering professor at Stanford University, recently developed a new computing platform that he calls the “Neurogrid”. Each Neurogrid board, running at only 5 watts, can simulate the detailed neuronal activity of one million neurons – and it can now do it in real time. Given the ratio of processing power to electricity consumed, this means his new chip is roughly 100,000 times more efficient than other supercomputers.

What’s more, it’s likely to mean the wide-scale adoption of processors that mimic human neuronal behavior over traditional computer chips. Whereas sequential computing relies on simulated ion channels to create software-generated “neurons”, the neuromorphic approach involves the flow of ions through channels in a way that emulates the flow of electrons through transistors. Basically, the difference is between software that mimics the behavior and hardware that embodies it.

What’s more, it’s likely to be a major stepping stone towards the creation of AI and MMI – that’s Artificial Intelligence and Man-Machine Interface for those who don’t speak geek. With computer chips imitating human brains and achieving a measure of intelligence that can be measured in terms of neurons and connections, the likelihood that they will be able to merge with a person’s brain, and thus augment their intelligence, becomes that much greater.

Source: Extremetech.com

2013, As Imagined By 1988

Twenty-five years ago, Los Angeles magazine envisioned what the world would look like in the current decade. And unlike Blade Runner, they avoided the cool but standard science fiction allegories – like massive billboards, flying cars and sentient robots – and went straight for the things that seemed entirely possible by contemporary standards.

The cover story of the magazine's April 3, 1988 edition showed a futuristic downtown L.A. crisscrossed with electrically charged, multi-tiered freeways permeated by self-driving cars. The article itself then imagined a day in the life of the fictional Morrow family of the L.A. suburb of Granada Hills, as "profiled" in 2013 by science fiction writer Nicole Yorkin.

Ironically, the magazine did not envision that it would one day go out of business, or that print media would one day be lurching towards extinction. Nevertheless, the fictional article and the world it detailed made for interesting reading. Little wonder, then, that earlier this month the LA Times, along with an engineering class at USC, revisited the archives to assess what it predicted correctly versus incorrectly.

Together, professor Jerry Lockenour and his class made a list of the hits and misses, and what they found paints a very interesting picture of how we predict the future and how its realization so often differs from what we expect. Of the major predictions to be found in the L.A. of 2013, as well as in the lives of the Morrow family (get it?), here is what they got right:

Smart-Houses:
In the article, the Morrows are said to begin every morning when their "Smart House" automatically turns on. This consists of all the appliances activating and preparing them breakfast, and no doubt turning on all the environmental controls and opening the shades to get the temperature and ambient lighting just right.

While this isn’t the norm for the American family yet, the past few years have proved a turning point for home devices hooking up with the Internet, to become more programmable and serve our daily needs. And plans are well under way to find a means of networking them all together so they function as one “smart” unit.

Self-Driving Cars:
The writers of the article predicted that by 2013, cars would come standard with computers that control most of the settings, along with GPS systems for navigation. They also predicted self-driving cars, which Google and Chevy are busy working on. In addition to using clean, alternative energy sources, these cars are expected to be able to self-drive, much in the same way a pilot puts their plane on auto-pilot. Drivers will also be able to summon the cars to their location, connect wirelessly to the internet, and download apps and updates to keep their software current.

But of course, they got a few things wrong as well. Here they are, the blots on their predictive record:

Home-printed newspapers:
The article also predicts that each morning the Morrows would begin their day with a freshly printed newspaper, as rendered by their laser-jet printer. These would be tailor-made, automatically selecting the latest news feeds that would be of most interest to them. What it failed to anticipate was the rise of e-media and the decline of printed media, though hardly anyone would fault the writers for this. While news has certainly gotten more personal, tablets, e-readers and smartphones are now the way the majority of people read their selected news.

Robot servants and pets:
In what must have seemed like a realistic prediction, but which now comes across as a sci-fi cliche, the Morrows' home was also supposed to come equipped with a robotic servant that had a southern accent. The family's son was also greeted every morning by a robot dog that would come to play with him. While we are certainly not there yet, the concept of anthropomorphic robot assistants is becoming more real every day. Consider, for example, the Kenshiro robot (pictured at right), the 3D-printed android, or the proposed Roboy, the Swiss-made robotic child. With all of these in the works, a robotic servant or pet doesn't seem so far-fetched, does it?

Summary:
Looking at these four major predictions and which of them came true, we can see that the future is not such an easy thing to predict. In addition to always being in motion, and subject to acceleration, slowing and sudden changes, the size and shape of it can be very difficult to pin down. No one can say for sure what will be realized and when, or if any of the things we currently take for granted will even be here tomorrow.

Alpha Moon Base at http://www.smallartworks.ca

For instance, during the 1960's and 70's, it was common practice for futurists and scientists to anticipate that the space race, which had culminated with humans setting foot on the moon in 1969, would continue into the future, and that humanity would see manned outposts on the moon and commercial space flight by 1999. No one at the time could foresee that a more restrictive budget environment, plus numerous disasters and a thawing of the Cold War, would slow things down in that respect.

In addition, most predictions made before the 1980's completely failed to foresee the massive revolution caused by miniaturization and the explosion in digital technology. Many futurist outlooks at the time predicted the rise of AI, but took it for granted that computers would still be the size of a desk and require entire rooms dedicated to their processors. The idea of a computer that could fit on top of a desk, let alone on your lap or in the palm of your hand, must have seemed far-fetched.

What's more, few could predict the rise of the internet before the late 1980's, or what the realization of "cyberspace" would even look like. Writers like William Gibson not only predicted it but coined the term, yet he and others seemed to think that interfacing with it would be a matter of cool neon graphics and avatars, not the clean page-and-site sort of interface it came to be.

And even he failed to predict the rise of such things as email, online shopping, social media and the million other ways the internet is tailored to suit the average person and their daily needs. When it comes right down to it, it is not a dangerous domain permeated by freelance hacker "jockeys" and mega-corporations with their hostile counter-intrusion viruses (aka Black ICE). Nor is it the social utopia promoting open dialogue and learning that men like Bill Gates and Al Gore predicted it would be in the 1990's. If anything, it is a libertarian economic and social forum that is more democratic and anarchistic than anyone could have ever predicted.

But of course, that's just one of many predictions that came about and altered how we see things to come. As a whole, the future has come to be known for being full of shocks and surprises, as well as some familiar faces. In short, the future is an open sea, and there's no telling which way the winds will blow, or which ships will make it to port ahead of others. All we can do is wait and see, and hopefully trust in our abilities to make good decisions along the way. And of course, the occasional retrospective, and congratulating ourselves for the things we managed to get right, doesn't hurt either!

Sources: factcoexist.com, LATimes.com

The Singularity: The End of Sci-Fi?

The coming Singularity… the threshold where we will essentially surpass all our current restrictions and embark on an uncertain future. For many, it's something to be feared, while for others, it's something regularly fantasized about. On the one hand, it could mean a future where things like shortages, scarcity, disease, hunger and even death are obsolete. But on the other, it could also mean the end of humanity as we know it.

As a friend of mine recently said, in reference to some of the recent technological breakthroughs: "Cell phones, prosthetics, artificial tissue… you sci-fi writers are going to run out of things to write about soon." I had to admit he had a point. If and when we reach an age where all the scientific breakthroughs that were once the province of speculative writing exist, what will be left to speculate about?

To break it down, simply because I love to do so whenever possible, the concept borrows from physics, where the heart of a black hole is described as a "singularity". It is at this point that all known physical laws, including time and space themselves, break down, and all matter and energy coalesce into some kind of quantum soup. Nothing beyond the veil that surrounds it (also known as the Event Horizon) can be seen, for no means exist to detect anything.

The same principle holds true in this case, or at least that's the theory. Originally coined by mathematician John von Neumann in the mid-1950's, the term served as a description for technological acceleration producing an eventually unpredictable outcome for society. In describing it, he spoke of the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

The term was then popularized by science fiction writer Vernor Vinge (A Fire Upon the Deep, A Deepness in the Sky, Rainbows End), who argued that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. In more recent times, the same theme has been picked up by futurist Ray Kurzweil, who points to the accelerating rate of change throughout history, with special emphasis on the latter half of the 20th century.

In what Kurzweil described as the “Law of Accelerating Returns”, every major technological breakthrough was preceded by a period of exponential growth. In his writings, he claimed that whenever technology approaches a barrier, new technologies come along to surmount it. He also predicted paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history”.
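At its core, the argument is about the mathematics of exponentials: anything growing at a fixed percentage rate doubles on a fixed schedule, which is why the most recent doublings dwarf everything that came before. A tiny sketch (the 41% growth rate is an illustrative stand-in for Moore's-law-style transistor growth, not a figure taken from Kurzweil):

```python
import math

def doubling_time(rate):
    """Number of periods for a quantity growing at `rate` per period to double."""
    return math.log(2) / math.log(1 + rate)

# At ~41% annual growth (roughly Moore's-law territory),
# capacity doubles about every two years...
print(round(doubling_time(0.41), 2))

# ...so over 20 years that is roughly 10 doublings,
# i.e. close to a thousand-fold increase.
print(round((1 + 0.41) ** 20))
```

Run the second line out another 20 years and the numbers stop feeling intuitive at all – which is precisely the "rupture in the fabric of human history" Kurzweil is talking about.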

Looking into the deep past, one can see indications of what Kurzweil and others mean. Beginning in the Paleolithic Era, some 70,000 years ago, humanity began to spread out from a small pocket in Africa and adopt the conventions we now associate with modern Homo sapiens – including language, music, tools, myths and rituals.

By the time of the "Paleolithic Revolution" – circa 50,000 – 40,000 years ago – we had spread to all corners of the Old World and left evidence of continuous habitation in the form of tools, cave paintings and burials. In addition, all other existing forms of hominids – such as Homo neanderthalensis and the Denisovans – became extinct around the same time, leading many anthropologists to wonder if the presence of Homo sapiens wasn't the deciding factor in their disappearance.

And then came another revolution, this one known as the "Neolithic", which occurred roughly 12,000 years ago. By this time, humanity had hunted countless species to extinction, had spread to the New World, and began turning to agriculture to maintain its population levels. Thanks to the cultivation of grains and the domestication of animals, civilization emerged in three parts of the world – the Fertile Crescent, China and the Andes – independently and almost simultaneously.

All of this gave rise to more of the things we take for granted in our modern world, namely written language, metalworking, philosophy, astronomy, fine art, architecture, science, mining, slavery, conquest and warfare. Empires that spanned entire continents rose, epics were written, and inventions and ideas were forged that have stood the test of time. Henceforth, humanity would continue to grow, albeit with some minor setbacks along the way.

And then, by the 1500s, something truly immense happened. The hemispheres collided as Europeans, first in small numbers but then en masse, began to cross the ocean and make it home to tell others what they had found. What followed was an unprecedented period of expansion, conquest, genocide and slavery. But out of that, a global age was also born, with empires and trade networks spanning the entire planet.

Hold onto your hats, because this is where things really start to pick up. Thanks to the collision of hemispheres, the influx of corn, tomatoes, avocados, beans, potatoes, gold, silver, chocolate and vanilla fueled a period of unprecedented growth in Europe, helping to spur the Renaissance, the Scientific Revolution, and the Enlightenment. And of course, these revolutions in thought and culture were followed by political revolutions shortly thereafter.

By the 1700's, another revolution began, this one involving industry and the creation of a capitalist economy. Much like the revolutions that preceded it, it was to have a profound and permanent effect on human history. Coal and steam technology gave rise to modern transportation, cities grew, international travel became as extensive as international trade, and every aspect of society became "rationalized".

By the 20th century, the future really began to take shape, and many were frightened by what they saw. Humanity, once a tiny speck of organic matter in Africa, now covered the entire Earth and numbered over one and a half billion. And as the century rolled on, the unprecedented growth continued to accelerate. Within 100 years, humanity went from coal and diesel fuel to electrical power and nuclear reactors. We went from crossing the sea in steam ships to going to the moon in rockets.

And then, by the end of the 20th century, humanity once again experienced a revolution in the form of digital technology. By the time the "Information Revolution" had arrived, humanity had reached 6 billion people, was building hand-held devices faster than computers that once occupied entire rooms, and was exchanging more information in a single day than most peoples once did in an entire century.

And now, we've reached an age where all the things we once fantasized about – colonizing the Solar System and beyond, telepathy, implants, nanomachines, quantum computing, cybernetics, artificial intelligence, and bionics – seem to be becoming more real every day. As such, futurists' predictions, like how humans will one day merge their intelligence with machines or live forever in bionic bodies, don't seem so far-fetched. If anything, they seem kind of scary!

There's no telling where it will go, and it seems like even the near future has become completely unpredictable. The Singularity looms! So really, if the future has become so opaque that accurate predictions are pretty much impossible to make, why bother? What's more, will predictions come true even as writers are writing about them? Won't that remove all incentive to write about it?

And really, if the future is to become so unbelievably weird and/or awesome that fact will take the place of fiction, will fantasy become effectively obsolete? Perhaps. So again, why bother? Well, I can think of one reason. Because it's fun! And because, as long as I can, I will continue to do it! I can't predict what course the future will take, but knowing that it's uncertain and impending makes it extremely cool to think about. And since I'm never happy keeping my thoughts to myself, I shall try to write about it!

So here's to the future! It's always there, like the horizon. No one can tell what it will bring, but we do know that it will always be there. So let's embrace it and enter into it together! We knew what we were in for the moment we first woke up and embraced this thing known as humanity.

And for a lovely and detailed breakdown of the Singularity, as well as when and how it will come in the future, go to futuretimeline.net. And be prepared for a little light reading 😉