Reciprocity – The Deets

Hey again, all. I find myself with some spare time for the first time in a while, so I thought I might take a moment to share an idea I've been working on in a bit more detail. In my last post, I talked about the bare bones of a story I am working on known as Reciprocity, the successor to the story known as Apocrypha. But as it turns out, there are a lot of details to that story idea that I still want to share and get people's opinions on.

You might say this is a story that I am particularly serious about. Should it work out, it would be my break from both space-opera sci-fi and zombie fiction. A foray into the world of hard-hitting social commentary and speculative science fiction.

The Story:
So the year is 2030. The world is reeling from the effects of widespread drought, wildfires, coastal storms, flooding, and population displacement. At the same time, a revolution is taking place in terms of computing, robotics, biomachinery, and artificial intelligence. As a result, the world’s population finds itself being pulled in two different directions – between a future of scarcity and the promise of plenty.

Space exploration continues as private aerospace firms and space agencies all race to put boots on Mars, build a settlement on the Moon, and lay claim to the resources of the Solar System. India, China, the US, the EU, Russia, Argentina, Brazil, and Iran are all taking part now – using robotic probes and rovers to telexplore the System and prospect asteroids. Humanity's future as an interplanetary species seems all but guaranteed at this point.

Meanwhile, a new global balance of power is shaping up. While the US and the EU struggle with food and fuel shortages, Russia remains firmly in the grips of quasi-fascist interests, having spurned the idea of globalization and amicable relations with NATO and the EU in favor of its Collective Security Treaty, which in recent years has expanded to include Iran, Afghanistan and Pakistan.

Meanwhile, China is going through a period of transition. After the fall of Communism in 2023, the Chinese state is lurching between the forces of reform and ultra-nationalism, and no one is sure which side it will fall on. The economy has largely recovered, but the divide between rich and poor is all too apparent. And given the sense of listless frustration and angst, there is fear that a skilled politician could exploit it all too well.

It’s an era of uncertainty, high hopes and renewed Cold War.

The MacGuffin:
The central item of the story is a cybervirus known as Baoying, a quantum-decryption algorithm that was designed by Unit 61398 in the early 2020s to take down America's quantum networks in the event of open war. When the Party fell from power, the Unit was dissolved and the virus itself was destroyed. However, rumors persisted that one or more copies still exist…

Notable Characters:
For this ensemble to work, it had to represent a good cross-section of the world that will be, with all its national, social and economic boundaries represented. And so I came up with the following people, individuals who find themselves on different sides of what’s right, and are all their own mix of good, bad, and ambiguous.

William Harding: A privileged high school senior with a bit of a drug problem who lives in Port Coquitlam, just outside of the Pacific Northwest megalopolis of Cascadia. Like many people his age, he carries all his personal computing in the form of implants. However, a kidnapping and a close brush with death suddenly expand his worldview. Being at the mercy of others and deprived of his hardware, he realizes that his lifestyle has shielded him from the real world.

Amy Dixon: A young refugee who has moved to Cascadia from the American South. Her socioeconomic status places her and her family at the fringes of society, and she is determined to change their fortunes by plying her talents and being the first in her family to get a comprehensive education.

Fernie Dixon: Amy's brother, a twenty-something man who lives away from her and claims to be a software developer. In reality, he is a member of the local Aryan Brotherhood, one of many gangs that run rampant in the outlying districts of the city. Not a true believer like his "brothers", he seeks money and power so he can give his sister the opportunities he knows she deserves.

Shen Zhou: A former Lieutenant in the People's Liberation Army and member of Unit 61398 during the Cyberwars of the late teens. After the fall of Communism, he did not ingratiate himself with the new government and was accused of spying for foreign interests. As a result, he left the country to pursue his own agenda, which places him in the crosshairs of both the new regime and western governments.

Arthur Banks: A major industrialist and part-owner of Harding Enterprises, a high-tech multinational that specializes in quantum computing and the development of artificial intelligence. For years, Banks and his associates have been working on a project known as QuaSI – a Quantum-based Sentient Intelligence that would revolutionize the world and usher in the Technological Singularity.

Rhianna Sanchez: Commander of Joint Task Force 2, an elite unit attached to the National Security Agency's Cyberwarfare Division. For years, she and her task force have been charged with locating terror cells that are engaged in private cyberwarfare with the US and its allies. And Shen Zhou, a suspected terrorist with many troubling connections, gets on their radar after a mysterious kidnapping and a high-profile cyberintrusion coincide.

And that about covers the particulars. Naturally, there are a lot of other details, but I haven't got all day and neither do you fine folks 😉 In any case, the idea is in the queue and it's getting updated regularly. But I don't plan to have it finished until I've polished off Oscar Mike, Arrivals, and a bunch of other projects first!

The Fate of Humanity

Welcome to the world of tomorroooooow! Or more precisely, to the many possible scenarios that humanity could face as it steps into the future. Perhaps it's been all this talk of late about the future of humanity, and how space exploration and colonization may be the only way to ensure our survival. Or it could be I'm just recalling what a friend of mine – Chris A. Jackson – wrote with his "Flash in the Pan" piece – a short that subsequently inspired me to write the novel Source.

Either way, I've been thinking about the likely future scenarios and thought I should include them alongside the Timeline of the Future. After all, one cannot predict the course of the future so much as predict possible outcomes and paths, and trust that the one they believe in most will come true. So, borrowing from the same format Chris used, here are a few potential fates, listed from worst to best – or least to most advanced.

1. Humanrien:
Due to the runaway effects of Climate Change during the 21st/22nd centuries, the Earth is now a desolate shadow of its once-great self. Humanity is non-existent, as are many other species of mammals, avians, reptiles, and insects. And it is predicted that the process will continue into the foreseeable future, until such time as the atmosphere becomes a poisoned, sulfuric vapor and the ground nothing more than windswept ashes and molten metal.

One thing is clear though: the Earth will never recover, and humanity’s failure to seed other planets with life and maintain a sustainable existence on Earth has led to its extinction. The universe shrugs and carries on…

2. Post-Apocalyptic:
Whether it is due to nuclear war, a bio-engineered plague, or some kind of "nanocaust", civilization as we know it has come to an end. All major cities lie in ruin and are populated only by marauders and street gangs, the more peaceful-minded people having fled to the countryside long ago. In scattered locations along major rivers, coastlines, or within small pockets of land, tiny communities have formed and eke out an existence from the surrounding countryside.

At this point, it is unclear if humanity will recover or remain at the level of a pre-industrial civilization forever. One thing seems clear: humanity will not go extinct just yet. With so many pockets spread across the entire planet, no single fate could claim all of them anytime soon. At least, one can hope that it won't.

3. Dog Days:
The world continues to endure recession as resource shortages, high food prices, and diminishing space for real estate continue to plague the global economy. Fuel prices remain high, and opposition to new drilling and to oil and natural gas extraction is being blamed. Add to that the crushing burdens of displacement and flooding that are costing governments billions of dollars a year, and you have life as we know it.

The smart money appears to be in offshore real estate, where Lillypad cities and Arcologies are being built along the coastlines of the world. Already, habitats have been built in Boston, New York, New Orleans, Tokyo, Shanghai, Hong Kong and the south of France, and more are expected in the coming years. These are seen as the most promising solution to the constant flooding and damage being caused by rising tides and increased coastal storms.

In these largely self-contained cities, those who can afford space intend to wait out the worst. It is expected that by the mid-point of the 22nd century, virtually all major ocean-front cities will be abandoned and those that sit on major waterways will be protected by huge levees. Farmland will also be virtually non-existent except within the Polar Belts, which means the people living in the most populous regions of the world will either have to migrate or die.

No one knows how the world's 9 billion will endure in that time, but for the roughly 100 million living at sea, it's not a pressing concern.

4. Technological Plateau:
Computers have reached a threshold of speed and processing power. Despite the discovery of graphene, the use of optical components, and the development of quantum computing/internet principles, it now seems that machines are as smart as they will ever be. That is to say, they are only slightly more intelligent than humans, and still can't seem to beat the Turing Test with any consistency.

It seems the long-awaited explosion in learning and intelligence predicted by von Neumann, Kurzweil and Vinge has fallen flat. That being said, life is getting better. With all these advances turned towards finding solutions to humanity's problems, alternative energy, medicine, cybernetics and space exploration are still growing apace; just not as fast or as awesomely as people in the previous century had hoped.

Missions to Mars have been mounted, but a colony on that world is still a long way off. A settlement on the Moon has been built, but mainly to monitor the research and solar energy concerns that exist there. And the problems of global food shortages and CO2 emissions are steadily declining. It seems that the words "sane planning, sensible tomorrow" have come to characterize humanity's existence. Which is good… not great, but good.

Humanity’s greatest expectations may have yielded some disappointment, but everyone agrees that things could have been a hell of a lot worse!

5. The Green Revolution:
The global population has reached 10 billion. But the good news is, it's been that way for several decades. Thanks to smart housing, hydroponics and urban farms, hunger and malnutrition have been eliminated. The needs of the Earth's people are also being met by a combination of wind, solar, tidal, geothermal and fusion power. And though space is not exactly at a premium, there is little want for housing anymore.

Additive manufacturing, biomanufacturing and nanomanufacturing have all led to an explosion in how public spaces are built and administered. Though it has led to the elimination of human construction and skilled labor, the process is much safer, cleaner, and more efficient, and it has ensured that anything built within the past half-century is harmonious with the surrounding environment.

This explosion in geological engineering is due in part to settlement efforts on Mars and the terraforming of Venus. Building a liveable environment on one and transforming the acidic atmosphere of the other have helped humanity to test key technologies and processes used to end global warming and rehabilitate the seas and soil here on Earth. Over 100,000 people now call themselves "Martian", and an additional 10,000 Venusians are expected before long.

Colonization is an especially attractive prospect for those who feel that Earth is too crowded, too conservative, and lacking in personal space…

6. Intrepid Explorers:
Humanity has successfully colonized Mars and Venus, and is busy settling the many moons of the outer Solar System. Current population statistics indicate that over 50 billion people now live on a dozen worlds, and many are feeling the itch for adventure. With deep-space exploration now practical, thanks to the development of the Alcubierre Warp Drive, many missions have been mounted to explore and colonize neighboring star systems.

These include Earth's immediate neighbor, Alpha Centauri, but also the viable star systems of Tau Ceti, Kapteyn, Gliese 581, Kepler 62, HD 85512, and many more. With so many Earth-like, potentially habitable planets in the nearby universe and now within our reach, nothing seems to stand between us and the dream of an interstellar human race. Missions to find extra-terrestrial intelligence are even being plotted.

This is one prospect humanity both anticipates and fears. While it is clear that no sentient life exists within the local group of star systems, our exploration of the cosmos has just begun. And if our ongoing scientific surveys have proven anything, it is that the conditions for life exist within many star systems and on many worlds. No telling when we might find one that has produced life of comparable complexity to our own, but time will tell.

One can only imagine what they will look like. One can only imagine if they are more or less advanced than us. And most importantly, one can only hope that they will be friendly…

7. Post-Humanity:
Cybernetics, biotechnology, and nanotechnology have led to an era of enhancement where virtually every human being has evolved beyond their biological limitations. Advanced medicine, digital sentience and cryonics have prolonged life indefinitely, and when someone is facing death, they can preserve their neural patterns or their brain for all time by simply uploading the former or placing the latter into stasis.

Both of these options have made deep-space exploration a reality. Preserved human beings launch themselves towards exoplanets, while the neural uploads of explorers spend decades or even centuries traveling between solar systems aboard tiny spaceships. Space penetrators are fired in all directions to telexplore the most distant worlds, with the information being beamed back to Earth via quantum communications.

It is an age of posts – post-scarcity, post-mortality, and post-humanism. Despite the existence of two billion organics who have minimal enhancement, there appears to be no stopping the trend. And with the breakneck pace at which life moves around them, it is expected that the unenhanced – "organics" as they are often known – will migrate outward to Europa, Ganymede, Titan, Oberon, and the many space habitats that dot the outer Solar System.

Presumably, they will mount their own space exploration in the coming decades to find new homes abroad in interstellar space, where their kind can expect not to be swept aside by the unstoppable tide of progress.

8. Star Children:
Earth is no more. The Sun is now a mottled shadow of its old self. Surrounded by many layers of computronium, our parent star has gone from being the source of all light and energy in our solar system to the energy source that powers the giant Dyson Swarm at its center. Within this giant Matrioshka Brain, trillions of human minds live out an existence as quantum-state neural patterns, living indefinitely in simulated realities.

Within the outer Solar System and beyond lie billions more, enhanced transhumans and posthumans who have opted for an "Earthly" existence amongst the planets and stars. However, life seems somewhat limited out in those parts, very rustic compared to the infinite bandwidth and computational power of the inner Solar System. And with this strange dichotomy upon them, the human race suspects that it might have solved the Fermi Paradox.

If other sentient life can be expected to have followed a similar pattern of technological development as the human race, then surely they too have evolved to the point where the majority of their species lives in Dyson Swarms around their parent Sun. Venturing beyond holds little appeal, as it means moving away from the source of bandwidth and becoming isolated. Hopefully, enough of them are adventurous enough to meet humanity partway…

_____

Which will come true? Who's to say? Whether it's apocalyptic destruction or runaway technological evolution, cataclysmic change is expected and could very well threaten our existence. Personally, I'm hoping for something in the scenario 5 and/or 6 range. It would be nice to know that both humanity and the world it originated from will survive the coming centuries!

The Future is Here: Flexible, Paper Thin Ultra-HD Screens

The explosion in computing and personal devices in recent years has led to a world where we are constantly surrounded by displays. Whether they belong to personal computers, laptops, smartphones, LCDs, PDAs, or MP3 players, there is no shortage of screens to consult. In turn, this proliferation has led computer scientists and engineers to address a number of imperfections these displays have.

For instance, some of these displays don't work in direct sunlight or are subject to glare. Others are horridly energy-inefficient and will drain their battery life very quickly. Some lack high definition and rich color, and can't display true black. Just about all of them are rigid, and all can be broken given a solid enough impact. Luckily, a new age of flexible, ultra-HD screens is on the way, one that promises to resolve all of this.

The first examples of this concept were rolled out at the 2011 Consumer Electronics Show, where Samsung unveiled its revolutionary new AMOLED display on a number of devices. This was followed up in September of 2012 when Nokia unveiled its Kinetic Device at the World Nokia Conference in London. Both devices showcased displays that could bend and flex, and were followed by concept videos produced by electronic giants Sony, 3M and Microsoft.

Since that time, numerous strides have been taken to improve on the technology before it hits the open market. In research published earlier this month in Nature, scientists describe what may be the first steps toward creating a new type of ultrathin, superfast, low-power, high-resolution, flexible color screen. If successful, these displays could combine some of the best features of current display technologies.

The new displays work with familiar materials, including the metal alloy already used to store data on some CDs and DVDs. The key property of these materials is that they can exist in two states – when triggered by heat, light, or electricity, they switch from one state to the other. Scientists call them phase-change materials (PCMs); and as Alex Kolobov, a researcher at Japan's Nanoelectronics Research Institute who was not involved in the new work, explains:

It is really fascinating that phase-change materials, now widely used in optical and nonvolatile electronic memory devices, found a potentially new application in display technology.

A PCM display would work similarly to the electronic paper used in products like Amazon's Kindle reader. Both are made by sandwiching a material that has two states, one lighter and one darker, in between layers of transparent conductors. In e-paper, the inner material is a viscous black oil filled with tiny white titanium balls. To make a pixel black or white, a current is run through a tiny area of the glass to either pull the reflective balls to the front or cause them to recede.

In a PCM display, the inner material is a substance made of silicon's heavier cousins: germanium, antimony, and tellurium. The two states of this material (known as GST) are actually two different phases of matter: one an ordered crystal and the other a disordered glass. To switch between them, a current pulse is used to melt a tiny column of the material, which is then either cooled gently to form the crystal or quenched rapidly to form the glass.

This cycle can be done remarkably quickly, more than 1 million times per second. That speed could be a big advantage in consumer products. While scrolling on a Kindle can be terribly slow because the screen only refreshes once per second, the refresh rate on a PCM display would be fast enough to play movies, stream videos, and perform all the tasks people routinely do with their devices.
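To make that switching logic a little more concrete, here is a minimal Python sketch of the behaviour described above. The names, thresholds, and timings are purely illustrative assumptions on my part, not figures from the actual research:

```python
from dataclasses import dataclass

# Illustrative constants -- assumed values, not from the actual study.
MELT_PULSE_MV = 500      # a pulse strong enough to melt a tiny GST column
SLOW_COOL_US = 10.0      # gentle cooling lets atoms order into a crystal
FAST_COOL_US = 0.1       # a rapid quench freezes the disordered (glassy) state

@dataclass
class PCMPixel:
    """One toy phase-change pixel: 'crystal' reflects differently than 'glass'."""
    phase: str = "crystal"

    def apply_pulse(self, pulse_mv: float, cool_time_us: float) -> None:
        # A weak pulse never melts the material, so nothing changes.
        if pulse_mv < MELT_PULSE_MV:
            return
        # Cooling speed, not the pulse itself, decides the final phase.
        self.phase = "crystal" if cool_time_us >= SLOW_COOL_US else "glass"

pixel = PCMPixel()
pixel.apply_pulse(600, FAST_COOL_US)   # quench -> disordered glass
print(pixel.phase)                     # glass
pixel.apply_pulse(600, SLOW_COOL_US)   # gentle cooling -> ordered crystal
print(pixel.phase)                     # crystal
```

The point is simply that the same pulse can produce either phase depending on how quickly the material is allowed to cool, and that the flip can be repeated fast enough to drive video.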

To make the new displays, the research team – led by Harish Bhaskaran, a nanoscale manufacturing expert from Oxford University – used a 35-year-old machine developed by the semiconductor industry. They laid down three layers, each just a few nanometers thick: conducting glass, GST, and another layer of conducting glass. Then they used current from the tip of an atomic force microscope to draw pictures on the surface.

These images included everything from a Japanese print of a tidal wave to fleas and antique cars – each one smaller than the width of a human hair. With this sort of flexible, ultra-high resolution screen, a PCM display could be made into everything from a bendable laptop and personal device to a programmable contact lens — like Apple’s Retina Display, except that it would actually fit on your retina.

Turning this technology into products will require years of labor and hundreds of millions of dollars. Nevertheless, Bhaskaran and his colleagues are optimistic. The electronics industry has lots of experience with all the components, so there are plenty of well-known tricks to try to improve this first draft. And they are hardly alone in their efforts to bring flexible displays to market.

For instance, LG unveiled their new line of flexible OLED TVs at CES earlier this year. Now, they are taking things a step further with the unveiling of two new 18-inch OLED panels, the first of which is a transparent display, while the second can be rolled up. Although both fall short of the 77-inch flexible TV on show at CES, the company says the new panels prove that it has the technology to bring rollable TVs with screens in excess of 50 inches to market in the future.

Unlike their 77-inch flexible TV, which has a fairly limited range of changeable curvature, LG Display's latest flexible OLED panel can be rolled up into a cylinder with a radius of 3 cm (1.18 in) without the function of the 1,200 x 810 pixel display being affected. This is made possible through the use of a high molecular substance-based polyimide film to create the backplane, rather than conventional plastic.

The transparent OLED panel, on the other hand, was created using LG Display’s transparent pixel design technology. With transmittance of 30 percent, the company says the panel is superior to existing transparent LCD panels that generally achieve around 10 to 15 percent transmittance. LG Display claims to have also reduced the haze of the panel, caused by circuit devices and film components, to just 2 percent.

As In-Byung Kang, Senior Vice President and Head of the R&D Center at LG Display, explained:

LG Display pioneered the OLED TV market and is now leading the next-generation applied OLED technology. We are confident that by 2017, we will successfully develop an Ultra HD flexible and transparent OLED panel of more than 60 inches, which will have transmittance of more than 40 percent and a curvature radius of 100R, thereby leading the future display market.

Granted, it will still be a few years and several hundred million dollars before such displays become the norm for computers and other devices. However, the progress being made is quite impressive, and with all the electronics megagiants committed to making it happen, an age where computing and communications are truly portable and much more survivable is likely just around the corner.

Sources: wired.com, gizmag.com, extremetech.com

Encoding Equality: Girl Geek Academy

When it comes to the gaming industry, there appears to be something of a glass ceiling. According to a developer satisfaction survey released last month by the International Game Developers Association, only 22 percent of people working in the gaming industry are women. And while this represents a twofold increase from five years ago (11.5 percent), it's proportionally low considering that women make up some 48 percent of the gaming community.

This disparity is pretty common across software, app development, and tech startups (even though startups led by women produce 12 percent higher returns). The logical next step would be to encourage more women to enter these fields. This is where Girl Geek Academy comes in: an initiative aimed at teaching women the skills they need to start their own ventures – everything from coding classes to mentoring programs from successful start-ups.

And there's definitely demand for it, according to co-founder, programmer and senior digital strategist Tammy Butow:

We have seen over the years that female-focused groups have helped increase the number of women attending technology events and learning technology skills. Over the last few years I have run Girl Geek Dinners Melbourne – in January 2013 we had 350 members – and we then ran a series of tech workshops to teach skills such as HTML, CSS and JS…

Girl Geek Dinners Melbourne now has over 1000 members. [Fellow co-founder] April [Staines] and I also ran Australia’s first all-female hackathon She Hacks in Melbourne. She Hacks sold out in one week, a few weeks later we also ran Australia’s first Startup Weekend Women event and that sold out too.

After running these workshops and discovering just how many women were interested in learning these skills, Butow and her associates decided to widen their scope. This they did by opening up a series of classes and programs for women of all ages (above the age of 18) and skill levels with a target of achieving a total of one million women building apps and learning to create startups by the year 2025.

As Butow explained, it's all about taking the next step in the development of the internet as we know it:

The internet we know now was primarily built by men. We are interested in finding out what women would like to create. At the Startup Weekend Women event we recently ran, there were several teams that created apps focusing on flexible work opportunities for women. This was a very clear theme for the weekend. We had several women in attendance who were expecting children or had small children; they are interested in using technology to solve the problems they are experiencing.

Partnered with Google, Aduro and 99Designs, the Academy offers a number of classes – either as face-to-face workshops, or via Google Hangouts and Aduro. The two-hour classes cover everything from learning different programming languages, such as JavaScript and Ruby, to the basics of founding a startup, such as public speaking and managing your finances.

More experienced women are encouraged to teach classes, and the Academy already boasts a variety of events, ranging from hackathons and makerfests to code getaways and study tours. The team is already organising the very first study tour, hoping to take Australian women to visit global startup hotspots such as Silicon Valley and Tel Aviv. And though women are the focus, men are welcome too, as long as they attend with a girl geek and are willing to lend a helping hand.

The first class took place on July 15th in Richmond, Victoria. For the price of AU$35, people got a healthy dinner and a seminar that focused on the very first issue relating to development: how to pitch an idea. For an additional AU$10, people were able to get tickets for the Google Hangout. For those interested in getting in on events held in the next 12 months, they can look them up on the Girl Geek Academy website.

Personally, I think this is a great initiative with a noble purpose. Despite great strides being made by women in all walks of professional life, certain industries remain tougher than others to crack. By creating an organization and atmosphere that fosters support, guidance and welcomes contribution, the gaming industry is likely to see a lot more women on the supply side in coming years.

Perhaps then we can look forward to more positive representations of women in games, yes?

Sources: cnet.com, girlgeekacademy.com

Computex 2014

Earlier this month, Computex 2014 wrapped up in Taipei. And while this trade show may not have all the glitz and glamor of its counterpart in Vegas (aka. the Consumer Electronics Show), it is still an important launch pad for new IT products slated for release during the second half of the year. Compared to other venues, the Taiwanese event is more formal, more business-oriented, and geared toward people who love to tinker with their PCs.

For instance, it’s an accessible platform for many Asian vendors who may not have the budget to head to Vegas. And in addition to being cheaper to set up booths and show off their products, it gives people a chance to look at devices that wouldn’t often be seen in the western parts of the world. The timing of the show is also perfect for some manufacturers. Held in June, the show provides a fantastic window into the second half of the year.

For example, big-name brands like Asus typically use the event to launch a wide range of products. This year, these included such items as the super-slim Asus Book Chi and the multi-mode Book V, which, like their other products, demonstrate that the company has a flair for innovation that easily rivals the big western and Korean names. In addition, Intel, a long-time stalwart at Computex, premiered its fanless reference-design tablet that runs on the Llama Mountain chipset.

And much like CES, there were plenty of cool gadgets to be seen. These included a GPS tracker that can be attached to a dog collar to track a pet's movements; the Fujitsu laptop, a hardy new breed of gadget that showcases Japanese designers' aim to make gear that is both waterproof and dustproof; the Rosewill Chic-C powerbank, which consists of 1,000 mAh battery packs that attach together to give additional power and even charge gadgets; and the Altek Cubic compact camera that fits in the palm of the hand.

And then there was the Asus wireless storage, a gadget that looks like an air freshener but is actually a wireless storage device that can be paired with a smartphone using near-field communication (NFC) technology – essentially transferring info simply by bringing a device into close proximity with it. And as always, there were plenty of cameras, display headsets, mobile devices, and wearables. This last category was particularly prevalent, in the form of look-alike big-name wearables.

By and large, the devices displayed this year were variations on a similar theme: wrist-mounted fitness trackers, smartwatches, and head-mounted smartglasses. The SiMEye smartglass display, for example, was clearly inspired by Google Glass, and even bears a strong resemblance to it. Though the show was admittedly short on innovation over imitation, it did showcase a major trend in the computing and tech industry.

In his keynote speech, Microsoft's Nick Parker talked about the age of ubiquitous computing, and the "devices we carry on us, as opposed to with us." What this means is that we may very well be entering a PC-less age, where computing is embedded in devices of ever-diminishing size. Eventually, it could even be miniaturized to the point where it is stitched into our clothing and accessed through contacts, never mind glasses or headsets!

Sources: cnet.com, (2), (3), computextaipei.com

Frontiers of Neuroscience: Neurohacking and Neuromorphics

It is one of the hallmarks of our rapidly accelerating times: looking at the state of technology, how it is increasingly being merged with our biology, and contemplating the ultimate leap of merging mind and machinery. The concept has been popular for many decades now, and with experimental procedures showing promise, neuroscience being used to inspire the next great leap in computing, and the advance of biomedicine and bionics, it seems like just a matter of time before people can "hack" their neurology too.

Take Kevin Tracey, a researcher working for the Feinstein Institute for Medical Research in Manhasset, N.Y., as an example. Back in 1998, he began conducting experiments to show that an interface existed between the immune and nervous systems. Building on ten years' worth of research, he was able to show how inflammation – which is associated with rheumatoid arthritis and Crohn's disease – can be fought by administering electrical stimuli, in the right doses, to the vagus nerve cluster.

In so doing, he demonstrated that the nervous system was like a computer terminal through which you could deliver commands to stop a problem, like acute inflammation, before it starts, or repair a body after it gets sick. His work also seemed to indicate that electricity delivered to the vagus nerve in just the right intensity and at precise intervals could reproduce a drug's therapeutic reaction, but with greater effectiveness, minimal health risks, and at a fraction of the cost of "biologic" pharmaceuticals.

Paul Frenette, a stem-cell researcher at the Albert Einstein College of Medicine in the Bronx, is another example. After discovering the link between the nervous system and prostate tumors, he and his colleagues created SetPoint – a startup dedicated to finding ways to manipulate neural input to delay the growth of tumors. These and other efforts are part of the growing field of bioelectronics, where researchers are creating implants that can communicate directly with the nervous system in order to try to fight everything from cancer to the common cold.

Impressive as this may seem, bioelectronics are just part of the growing discussion about neurohacking. In addition to the leaps and bounds being made in the field of brain-to-computer interfacing (and brain-to-brain interfacing), which would allow people to control machinery and share thoughts across vast distances, there is also a field of neurosurgery that is seeking to use the miracle material of graphene to solve some of the most challenging issues in their field.

Given graphene's rather amazing properties, this should not come as much of a surprise. In addition to being incredibly thin, lightweight, and light-sensitive (it's able to absorb light in both the UV and IR range), graphene also has a very high surface area (2,630 square meters per gram), which leads to remarkable conductivity. It also has the ability to bind or bioconjugate with various modifier molecules, and hence transform its behavior.

Already, it is being considered as a possible alternative to copper wires to break the energy-efficiency barrier in computing, and it may even prove useful in quantum computing. But it is in the field of neurosurgery, where researchers are looking to develop materials that can bridge and even stimulate nerves, that it may prove most promising. In a story featured in the latest issue of Neurosurgery, the authors suggest that graphene may be ideal as an electroactive scaffold when configured as a three-dimensional porous structure.

That might be a preferable solution when compared with other currently vogue ideas, like using liquid metal alloys as bridges. Thanks to Samsung's recent research into using graphene in their portable devices, it has also been shown to make an ideal E-field stimulator. And recent experiments on mice in Korea showed that a flexible, transparent graphene skin could be used as an electrical field stimulator to treat cerebral hypoperfusion by stimulating blood flow through the brain.

And what look at the frontiers of neuroscience would be complete without mentioning neuromorphic engineering? Whereas neurohacking and neurosurgery are looking for ways to merge technology with the human brain to combat disease and improve its health, NE is looking to the human brain to create computational technology with improved functionality. The result thus far has been a wide range of neuromorphic chips and components, such as memristors and neuristors.

However, as a whole, the field has yet to define for itself a clear path forward. That may be about to change thanks to Jennifer Hasler and a team of researchers at Georgia Tech, who recently published a roadmap to the future of neuromorphic engineering, with the end goal of creating the equivalent of human-brain processing. This consisted of Hasler sorting through the many different approaches for the ultimate embodiment of neurons in silico and coming up with the technology that she thinks is the way forward.

Her answer is not digital simulation, but rather the lesser-known technology of FPAAs (Field-Programmable Analog Arrays). FPAAs are similar to digital FPGAs (Field-Programmable Gate Arrays), but also include reconfigurable analog elements. They have been around on the sidelines for a few years, but they have been used primarily as so-called "analog glue logic" in system integration. In short, they would handle a variety of analog functions that don't fit on a traditional integrated circuit.

Hasler outlines an approach where desktop neuromorphic systems will use System on a Chip (SoC) approaches to emulate billions of low-power neuron-like elements that compute using learning synapses. Each synapse has an adjustable strength associated with it and is modeled using just a single transistor. Her own design for an FPAA board houses hundreds of thousands of programmable parameters which enable systems-level computing on a scale that dwarfs other FPAA designs.
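For illustration only – this is emphatically not Hasler's actual design, and the names and numbers below are my own assumptions – here is a rough Python sketch of the core idea: each synapse is a single adjustable strength that can be reprogrammed in the field, and each neuron-like element simply accumulates its weighted inputs.

```python
import random

class AnalogSynapseArray:
    """Toy stand-in for an FPAA-style synapse array.

    In the hardware described above each synapse is a single transistor with a
    programmable strength; here each one is just a float we can reprogram.
    """

    def __init__(self, n_inputs: int, n_neurons: int) -> None:
        # Programmable parameters: one adjustable strength per connection.
        self.weights = [[random.uniform(0.0, 1.0) for _ in range(n_inputs)]
                        for _ in range(n_neurons)]

    def program(self, neuron: int, synapse: int, strength: float) -> None:
        """Reconfigure a single synapse, as an FPAA lets you do in the field."""
        self.weights[neuron][synapse] = strength

    def compute(self, inputs: list[float]) -> list[float]:
        """Each neuron-like element sums its weighted (analog) inputs."""
        return [sum(w * x for w, x in zip(row, inputs)) for row in self.weights]

array = AnalogSynapseArray(n_inputs=4, n_neurons=2)
array.program(neuron=0, synapse=2, strength=0.9)
print(array.compute([0.2, 0.5, 1.0, 0.1]))
```

The difference, of course, is that the real thing does this in analog silicon with hundreds of thousands of such parameters, rather than in software.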

At the moment, she predicts that human-brain-equivalent systems will require a reduction in power usage to the point where they are consuming just one-eighth of what the digital supercomputers currently used to simulate neuromorphic systems require. Her own design can account for a four-fold reduction in power usage, but the rest is going to have to come from somewhere else – possibly through the use of better materials (i.e. graphene or one of its derivatives).

Hasler also forecasts that using soon-to-be-available 10 nm processes, a desktop system with human-like processing power that consumes just 50 watts of electricity may eventually be a reality. These will likely take the form of chips with millions of neuron-like skeletons connected by billions of synapses firing to push each other over the edge, and who's to say what they will be capable of accomplishing or what other breakthroughs they will make possible?

In the end, neuromorphic chips and technology are merely one half of the equation. In the grand scheme of things, the aim of all of this research is not only to produce technology that can ensure better biology, but also technology inspired by biology to create better machinery. The end result, according to some, is a world in which biology and technology increasingly resemble each other, to the point that there is barely a distinction to be made and they can be merged.

Charles Darwin would roll over in his grave!

Sources: nytimes.com, extremetech.com, (2), journal.frontiersin.org, pubs.acs.org

The Future of Computing: Brain-Like Computers

It's no secret that computer scientists and engineers are looking to the human brain as a means of achieving the next great leap in computer evolution. Already, machines are being developed that rely on machine blood, can continue working despite being damaged, and recognize images and speech. And soon, a computer chip that is capable of learning from its mistakes will also be available.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system – specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014, and was the result of a collaborative effort between I.B.M. and Qualcomm, as well as a Stanford research team. This "neuromorphic processor" can not only automate tasks that once required painstaking programming, but can also sidestep and even tolerate errors, potentially making the term "computer crash" obsolete.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

For example, computer vision systems only "recognize" objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation. But last year, Google researchers were able to get a machine-learning algorithm, known as a "Google Neural Network", to perform an identification task (involving cats) without supervision.

And this past June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately. And this past November, researchers at Stanford University came up with a new algorithm that could give computers the power to more reliably interpret language. It's known as the Neural Analysis of Sentiment (NaSent).
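Just to give a flavour of the task – and to be clear, NaSent itself uses a neural network trained on labelled sentences, not anything this crude – a naive, hand-written "recipe" for scoring sentiment might look something like this toy Python sketch, with made-up word lists of my own:

```python
# A deliberately naive sentiment scorer, for illustration only.
# It just counts words from tiny hand-made lexicons; a learned model like
# NaSent instead infers how words combine from labelled training data.
POSITIVE = {"great", "good", "love", "excellent", "fun"}
NEGATIVE = {"bad", "terrible", "hate", "boring", "awful"}

def naive_sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(naive_sentiment("I love this great phone"))       # positive
print(naive_sentiment("The battery life is terrible"))  # negative
```

The gap between that kind of fixed recipe and a system that learns its own rules from data is exactly the gap the new approaches are trying to close.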

A similar concept known as Deep Learning is also looking to endow software with a measure of common sense. Google is using this technique with their voice recognition technology to aid in performing searches. In addition, the social media giant Facebook is looking to use deep learning to help them improve Graph Search, an engine that allows users to search activity on their network.

Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of binary code (0s and 1s). The information is stored separately in what is known as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.

By contrast, the new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

These processors are not "programmed" in the conventional sense. Instead, the connections between the circuits are "weighted" according to correlations in data that the processor has already "learned." Those weights are then altered as data flows into the chip, causing them to change their values and to "spike." This, in turn, strengthens some connections and weakens others, reacting much the same way the human brain does.
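Here is a rough Python sketch of that spike-and-strengthen behaviour – a toy leaky integrate-and-fire neuron with a crude Hebbian-style weight update. It is intended only as an illustration of the principle, not a model of any particular chip, and all the constants are assumptions of mine:

```python
class SpikingNeuron:
    """Toy leaky integrate-and-fire neuron with crude Hebbian learning."""

    def __init__(self, n_inputs: int, threshold: float = 1.0) -> None:
        self.weights = [0.5] * n_inputs   # connection strengths ("synapses")
        self.potential = 0.0
        self.threshold = threshold

    def step(self, spikes_in: list[int]) -> bool:
        # Accumulate weighted input spikes, with a leak back toward rest.
        self.potential = 0.9 * self.potential + sum(
            w * s for w, s in zip(self.weights, spikes_in))
        fired = self.potential >= self.threshold
        if fired:
            self.potential = 0.0  # reset after a spike
            # Strengthen synapses that helped cause the spike, weaken the rest.
            self.weights = [min(1.0, w + 0.05) if s else max(0.0, w - 0.01)
                            for w, s in zip(self.weights, spikes_in)]
        return fired

neuron = SpikingNeuron(n_inputs=3)
pattern = [1, 0, 1]
for t in range(10):
    if neuron.step(pattern):
        print(f"spike at step {t}, weights now {neuron.weights}")
```

Run it and you can watch the active connections grow stronger with every spike while the idle one decays – the same "weighting" idea described above, just in a few lines of software instead of silicon.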

In the words of Dharmendra Modha, an I.B.M. computer scientist who leads the company's cognitive computing research effort:

Instead of bringing data to computation as we do today, we can now bring computation to data. Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.

One great advantage of the new approach is its ability to tolerate glitches, whereas traditional computers cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever-changing, allowing the system to continuously adapt and work around failures to complete tasks. Another benefit is energy efficiency, another inspiration drawn from the human brain.

The new computers, which are still based on silicon chips, will not replace today's computers, but augment them; at least for the foreseeable future. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and the centralized computers that run computing clouds.

However, the new approach is still limited, thanks to the fact that scientists still do not fully understand how the human brain functions. As Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, put it:

We have no clue. I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.

Luckily, there are efforts underway designed to remedy this, with the specific intention of directing that knowledge towards the creation of better computers and AIs. One such effort is the National Science Foundation-financed Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology in partnership with Harvard and Cornell.

Another is the California Institute for Telecommunications and Information Technology (aka. Calit2) – a center dedicated to innovation in nanotechnology, life sciences, information technology, and telecommunications. As Larry Smarr, an astrophysicist and director of the Institute, put it:

We’re moving from engineering computing systems to something that has many of the characteristics of biological computing.

And last, but certainly not least, is the Human Brain Project, an international group of 200 scientists from 80 different research institutions and based in Lausanne, Switzerland. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This initiative, which has been compared to the Large Hadron Collider, will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into an overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

When future generations look back on this decade, no doubt they will refer to it as the birth of the neuromorphic computing revolution. Or maybe just the Neuromorphic Revolution for short, but that sort of depends on the outcome. With so many technological revolutions well underway, it is difficult to imagine how the future will look back and characterize this time.

Perhaps, as Charles Stross suggests, it will simply be known as "the teens", that time in pre-Singularity history where it was all starting to come together, but had yet to explode and violently change everything we know. I for one am looking forward to being around to witness it all!

Sources: nytimes.com, technologyreview.com, calit2.net, humanbrainproject.eu

Top Stories from CES 2014

The Consumer Electronics Show has been in full swing for two days now, and already the top spots for most impressive technology of the year have been selected. Granted, opinion is divided, and there are many top contenders, but between displays, gaming, smartphones, and personal devices, there's been no shortage of technologies to choose from.

And having sifted through some news stories from the front lines, I have decided to compile a list of what I think the most impressive gadgets, displays and devices of this year’s show were. And as usual, they range from the innovative and creative, to the cool and futuristic, with some quirky and fun things holding up the middle. And here they are, in alphabetical order:

As an astronomy enthusiast, and someone who enjoys hearing about new and innovative technologies, Celestron's Cosmos 90GT WiFi Telescope was quite the story. Hoping to make astronomy more accessible to the masses, this new telescope is the first that can be controlled by an app over WiFi. Once paired, the system guides stargazers through the cosmos as directions flow from the app to the motorized scope base.

In terms of computing, Lenovo chose to breathe some new life into the oft-declared dying industry of desktop PCs this year, thanks to the unveiling of their Horizon 2. Its 27-inch touchscreen can go fully horizontal, becoming both a gaming and media table. The large touch display has a novel pairing technique that lets you drop multiple smartphones directly onto the screen, as well as group, share, and edit photos from them.

Next up is the latest set of display glasses to take the world by storm, courtesy of the Epson Smart Glass project. Ever since Google Glass was unveiled in 2012, other electronics and IT companies have been racing to produce a similar product, one that can make heads-up display tech, WiFi connectivity, internet browsing, and augmented reality portable and wearable.

Epson was already moving in that direction back in 2011 when they released their BT100 augmented reality glasses. And now, with their Moverio BT200, they’ve clearly stepped up their game. In addition to being 60 percent lighter than the previous generation, the system has two parts – consisting of a pair of glasses and a control unit.

The glasses feature a tiny LCD-based projection lens system and optical light guide which project digital content onto a transparent virtual display (960 x 540 resolution), and they have a camera for video and stills capture, or AR marker detection. With the incorporation of third-party software, and taking advantage of the internal gyroscope and compass, a user can even create 360 degree panoramic environments.

At the other end, the handheld controller runs on Android 4.0, has a textured touchpad control surface, built-in Wi-Fi connectivity for video content streaming, and up to six hours of battery life.


The BT-200 smart glasses are currently being demonstrated at Epson's CES booth, where visitors can experience a table-top virtual fighting game with AR characters, a medical imaging system that allows wearers to see through a person's skin, and an AR assistance app to help perform unfamiliar tasks.

This year's CES also featured a ridiculous number of curved screens. Samsung seemed particularly proud of its garish, curved LCD TVs, and even booked headliners like Mark Cuban and Michael Bay to promote them. In the latter case, this didn't go so well. However, one curved screen device actually seemed appropriate – the LG G Flex 6-inch smartphone.

When it comes to massive curved screens, only one person can benefit from the sweet spot of the display – that focal point in the center where they feel enveloped. But in the case of the LG G Flex, the subtle bend in the screen allows for less light intrusion from the sides, and it distorts your own reflection just enough to obscure any distracting glare. Granted, it's not exactly the flexible tech I was hoping to see, but it's something!

In the world of gaming, two contributions made a rather big splash this year. These included the Playstation Now, a game streaming service just unveiled by Sony that lets gamers instantly play their games from a PS3, PS4, or PS Vita without downloading and always in the most updated version. Plus, it gives users the ability to rent titles they’re interested in, rather than buying the full copy.

Then there was the Maingear Spark, a gaming desktop designed to run Valve's gaming-centric SteamOS (and Windows) that measures just five inches square and weighs less than a pound. This is a big boon for gamers who usually have to deal with gaming desktops that are bulky, heavy, and don't fit well on an entertainment stand next to other gaming devices, an HD box, and anything else you might have there.

Next up, there is a device that helps consumers navigate the complex world of iris identification that is becoming all the rage. It’s known as the Myris Eyelock, a simple, straightforward gadget that takes a quick video of your eyeball, has you log in to your various accounts, and then automatically signs you in, without you ever having to type in your password.

So basically, you can utilize this new biometric ID system by having your retinal scan on your person wherever you go. And then, rather than go through the process of remembering multiple (and no doubt complicated) passwords – identity theft being an increasingly serious problem – you can upload a marker that leaves no doubt as to your identity. And at less than $300, it's an affordable option, too.

And what would an electronics show be without showcasing a little drone technology? And the Parrot MiniDrone was this year’s crowd pleaser: a palm-sized, camera-equipped, remotely-piloted quad-rotor. However, this model has the added feature of two six-inch wheels, which affords it the ability to zip across floors, climb walls, and even move across ceilings! A truly versatile personal drone.

 

Another very interesting display this year was the Scanadu Scout, the world's first real-life tricorder. First unveiled back in May of 2013, the Scout represents the culmination of years of work by the NASA Ames Research Center to produce the world's first, non-invasive medical scanner. And this year, they chose to showcase it at CES and let people test it out on themselves and each other.

All told, the Scanadu Scout can measure a person's vital signs – including their heart rate, blood pressure, and temperature – without ever touching them. All that's needed is to place the scanner above your skin, wait a moment, and voila! Instant vitals. The sensor will begin a pilot program with 10,000 users this spring, the first key step toward FDA approval.

And of course, no CES would be complete without a toy robot or two. This year, it was the WowWee MiP (Mobile Inverted Pendulum) that put on a big show. Basically, it is an eight-inch bot that balances itself on dual wheels (like a Segway), can be controlled by hand gestures or a Bluetooth-connected phone, or can simply roll around autonomously.

Its sensitivity to commands and its ability to balance while zooming across the floor are super impressive. While on display, many were shown carrying a tray around (sometimes with another MiP on the tray). And, a real crowd pleaser, the MiP can even dance. Always got to throw in something for the retro 80's crowd, the people who grew up with the SICO robot, Jinx, and other friendly automatons!

But perhaps most impressive of all, at least in my humble opinion, was the display of the prototype for the iOptik AR Contact Lens. While most of the attention on high-tech eyewear has focused on wearables like Google Glass of late, other developers have been steadily working towards display devices that are small enough to wear over your pupil.

Developed by the Washington-based company Innovega with support from DARPA, the iOptik is a heads-up display built into a set of contact lenses. And this year, the first fully-functioning prototypes are being showcased at CES. Acting as a micro-display, the accompanying glasses project a picture onto the contact lens, which works as a filter to separate the real-world and digital environments and then interlaces them into a single image.

Embedded in the contact lenses are micro-components that enable the user to focus on near-eye images. Light projected by the display (built into a set of glasses) passes through the center of the pupil and then works with the eye's regular optics to focus the display on the retina, while light from the real-life environment reaches the retina via an outer filter.

This creates two separate images on the retina, which are then superimposed to create one integrated image, or augmented reality. It also offers an alternative to traditional near-eye displays, which create the illusion of an object in the distance so as not to hinder regular vision. At present, the device still requires clearance from the FDA before it becomes commercially available, which may come in late 2014 or early 2015.


Well, it's certainly been an interesting year, once again, in the world of electronics, robotics, personal devices, and wearable technology. And it manages to capture the pace of change that is increasingly coming to characterize our lives. According to the tech site Mashable, this year's show was characterized by televisions with 4K resolution, wearables, biometrics, the internet of personalized and data-driven things, and of course, 3-D printing and imaging.

And as always, there were plenty of videos showcasing tons of interesting concepts and devices that were featured this year. Here are a few that I managed to find and thought were worthy of passing on:

Internet of Things Highlights:


Motion Tech Highlights:


Wearable Tech Highlights:


Sources: popsci.com, (2), cesweb, mashable, (2), gizmag, (2), news.cnet

By 2014: According to Asimov and Clarke

Amongst the sci-fi greats of old, there were few authors, scientists and futurists more influential than Isaac Asimov and Arthur C. Clarke. And as individuals who constantly had one eye on the world of their day and one eye on the future, they had plenty to say about what the world would look like by the 21st century. Interestingly enough, 2014 just happens to be the year when much of what they predicted was meant to come true.

For example, 50 years ago, Asimov wrote an article for the New York Times that listed his predictions for what the world would be like in 2014. The article was titled “Visit to the World’s Fair of 2014”, and contained many accurate, and some not-so-accurate, guesses as to how people would be living today and what kinds of technology would be available to us.

Here are some of the accurate predictions:

1. “By 2014, electroluminescent panels will be in common use.”
In short, electroluminescent displays are thin, bright panels that are used in retail displays, signs, lighting and flat panel TVs. What’s more, personal devices are incorporating this technology, in the form of OLED and AMOLED displays, which are both paper-thin and flexible, giving rise to handheld devices you can bend and flex without fear of damaging them.

2. “Gadgetry will continue to relieve mankind of tedious jobs.”
Oh yes indeed! In the last thirty years, we’ve seen voicemail replace personal assistants, secretaries and message boards. We’ve seen fax machines replace couriers. We’ve seen personal devices and PDAs that are able to handle more and more tasks, making it unnecessary for people to consult written sources or perform their own longhand calculations. It’s a hallmark of our age that personal technology is doing more and more of the legwork, supposedly freeing us to do more with our time.

3. “Communications will become sight-sound and you will see as well as hear the person you telephone.”
This was a popular prediction in Asimov’s time, usually taking the form of a videophone or conversations that happened through display panels. And the rise of social media and telepresence has certainly delivered on that. Services like Skype, Google Hangouts, FaceTime and more have made video chatting very common, and a viable alternative to a phone line you need to pay for.

4. “The screen can be used not only to see the people you call but also for studying documents and photographs and reading passages from books.”
Multitasking is one of the hallmarks of modern computers, handheld devices, and tablets, and has been the norm for operating systems for some time. By simply calling up new windows or tabs, or opening multiple apps simultaneously and switching between them, users are able to start multiple projects, conduct work while viewing video, take pictures, play games, and generally behave like a kid with ADHD on crack if they so choose.

5. “Robots will neither be common nor very good in 2014, but they will be in existence.”
If you define “robot” as a machine that looks and acts like a human, then this guess is definitely true. While we do not have robot servants or robot friends per se, we do have Roombas, robots capable of performing menial tasks, and even ones capable of imitating animal and even human movements and participating in hazardous-duty exercises (Google the DARPA Robotics Challenge to see what I mean).

Alas, he was off on several other fronts. For example, kitchens do not yet prepare “automeals” – that is, entire meals made for us at the click of a button. What’s more, the vast majority of our education systems are not geared towards the creation and maintenance of robotics. Not all surfaces have been converted into display screens, though we could do that if we wanted to. And the world population is actually higher than he predicted (6,500,000,000 was his estimate).

As for what else he got wrong, well… our appliances are not powered by radioactive isotopes and thereby able to be entirely wireless (though wireless recharging is becoming a reality). Only a fraction of students are currently proficient in computer languages, contrary to his expectation that all would be. And last, society is not a place of “enforced leisure”, where work is considered a privilege and not a burden. Too bad too!

And when it comes to the future, there are few authors whose predictions are more trusted than Arthur C. Clarke. In addition to being a prolific science fiction writer, he wrote nearly three dozen nonfiction books and countless articles about the future of space travel, undersea exploration and daily life in the 21st century.

And in a recently released clip from a 1974 ABC News program filmed in Australia, Clarke is shown talking to a reporter next to a massive bank of computers. With his son in tow, the reporter asks Clarke to talk about what computers will be like when his son is an adult. In response, Clarke offers some eerily prophetic, if not quite spot-on, predictions:

The big difference when he grows up, in fact it won’t even wait until the year 2001, is that he will have, in his own house, not a computer as big as this, but at least a console through which he can talk to his friendly local computer and get all the information he needs for his everyday life, like his bank statements, his theater reservations, all the information you need in the course of living in a complex modern society. This will be in a compact form in his own house.

In short, Clarke predicted not only the rise of the personal computer, but also online banking, shopping and a slew of internet services. Clarke was then asked about the possible danger of becoming a “computer-dependent” society, and while he acknowledged that in the future humanity would rely on computers “in some ways,” he argued that computers would also open up the world:

It’ll make it possible for us to live really anywhere we like. Any businessman, any executive, could live almost anywhere on Earth and still do his business through his device like this. And this is a wonderful thing.

Clarke certainly had a point about computers giving us the ability to communicate and do business from almost anywhere on the globe, what we now call telecommuting and telepresence. But as to whether or not our dependence on this level of technology is a good or bad thing, the jury is still out on that one. The point is, his predictions proved to be highly accurate, forty years in advance.

Granted, Clarke’s predictions were not summoned out of thin air. Ever since computers were used in World War II as a means of cracking Germany’s ciphers, miniaturization has been the trend in computing. By the 1970s, computers were still immense and clunky, but punch cards and vacuum tubes had already given way to transistors, which were getting smaller all the time.

And in 1969, the Advanced Research Projects Agency Network (ARPANET) went online. This U.S. Department of Defense network was set up to connect the DOD’s various research projects at universities and laboratories all across the US; it was the first operational packet-switched network, later became the first to adopt the Transmission Control Protocol and Internet Protocol (TCP/IP), and was the precursor to the modern internet.

As a man who was so on top of things technologically, Clarke accurately predicted that these two trends would continue into the foreseeable future, giving rise to computers small enough to fit on our desks (rather than taking up an entire room) and networked with other computers all around the world via TCP/IP, enabling real-time data sharing and communications.
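For a sense of what that prediction amounts to in practice, here is a minimal sketch of two programs exchanging data over TCP/IP using Python's standard socket library. The host address and port are placeholders chosen purely for illustration.

```python
# A tiny TCP/IP echo exchange; run serve_once() in one process or thread,
# then call send("hello") from another. HOST and PORT are placeholders.
import socket

HOST, PORT = "127.0.0.1", 50007

def serve_once():
    """Accept a single connection and echo back whatever arrives."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            data = conn.recv(1024)          # receive up to 1 KiB
            conn.sendall(b"echo: " + data)  # send it straight back

def send(message):
    """Connect to the server, send a message, and return its reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(message.encode())
        return cli.recv(1024).decode()
```

The same handful of calls, scaled up enormously, is what underlies the banking, reservations and "everyday information" services Clarke describes above.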

And in the meantime, be sure to check out the Clarke interview below:


Sources: huffingtonpost.com, blastr.com

The Future is Bright: Positive Trends to Look For in 2014

With all of the world’s current problems (poverty, underdevelopment, terrorism, civil war, and environmental degradation), it’s easy to overlook how things are getting better around the world. Not only do we no longer live in a world where superpowers aim nuclear missiles at each other and two-thirds of the human race lives beneath totalitarian regimes; in terms of health, mortality, and income, life is getting better too.

So, in honor of the New Year and all our hopes for a better world, here’s a gander at how life is improving and is likely to continue…

1. Poverty is decreasing:
The share of the world’s population whose income or consumption is below the poverty line – subsisting on less than $1.25 a day – is steadily dropping. In fact, the overall economic growth of the past 50 years has been proportionately greater than that experienced in the previous 500. Much of this is due not only to the growth taking place in China and India, but also in Brazil, Russia, and Sub-Saharan Africa. Indeed, while developed nations complain about debt crises and ongoing recession, the world’s poorest areas continue to grow.

2. Health is improving:
The overall caloric consumption of people around the world is increasing, meaning that world hunger is on the wane. Infant mortality, a major issue arising from poverty and underdevelopment, and closely related to overpopulation, is also dropping. And while rates of cancer continue to rise, cancer mortality rates continue to decrease. And perhaps biggest of all, the world will be entering 2014 with several promising vaccine candidates and even potential cures for HIV in development (of which I’ve made many posts).

3. Education is on the rise:
More children worldwide (especially girls) have educational opportunities, with enrollment increasing in both primary and secondary schools. Literacy is also on the rise, with the global rate reaching as high as 84% by 2012. Global rates of literacy have more than doubled since 1970, and the connections between literacy, economic development, and life expectancy are all well established.

4. The Internet and computing are getting faster:
Ever since the internet revolution began, connection speeds and bandwidth have been increasing significantly year after year. In fact, the global average connection speed for the first quarter of 2012 hit 2.6 Mbps, a 25 percent year-over-year gain and a 14 percent gain over the fourth quarter of 2011. And by the second quarter of 2013, the global average peak connection speed reached 18.9 Mbps, which represented a 17 percent gain over 2012.

And while computing appears to be reaching a bottleneck, overall processing speed has increased by a factor of 260,000 in the past forty years, and storage capacity by a factor of 10,000 in the last twenty. And in terms of breaking the current limitations imposed by chip size and materials, developments in graphene, carbon nanotubes, and biochips are promising solutions.
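As a quick back-of-the-envelope check on what a 260,000-fold speed-up over forty years implies (my own rough calculation, not a figure from the sources):

```latex
\[
  r = 260{,}000^{1/40} = e^{\ln(260{,}000)/40} \approx e^{0.312} \approx 1.37
\]
```

In other words, roughly 37 percent faster every year on average, which works out to a doubling in speed about every 2.2 years.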

5. Unintended pregnancies are down:
While the rate still remains high in the developing regions of the world, the global rate of unintended pregnancies has fallen dramatically in recent years. In fact, of the roughly 208 million pregnancies surveyed across 80 nations in 2008, 41 percent were unintended. However, between 1995 and 2008 the unintended pregnancy rate dropped by 29 percent in the developed regions surveyed and by 20 percent in developing regions.

The consequences of unintended pregnancies for women and their families are well established, and any drop presents opportunities for greater health, safety, and freedom for women. What’s more, a drop in the rate of unwanted pregnancies is a surefire sign of socioeconomic development and increasing opportunities for women and girls worldwide.

6. Population growth is slowing:
On this blog of mine, I’m always ranting about how overpopulation is bad and going to get worse in the near future. But in truth, that is only part of the story. The upside is that while the numbers keep going up, the rate of increase is going down. While global population is expected to rise to 9.3 billion by 2050 and 10.1 billion by 2100, this represents a serious slowing of growth.

If one were to compare these growth projections to what happened in the 20th century, where population rose from roughly 1.6 billion to just over 6 billion, they would see that the rate of growth has more than halved. What’s more, rates of population growth are expected to begin falling in Asia (one of the biggest contributors to world population growth in the 20th century) by 2060, in Europe by 2055, and in the Caribbean by 2065.
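For anyone who wants to see the arithmetic, here is a rough comparison of the implied average annual growth rates, using the figures above plus a 2014 world population of roughly 7.2 billion (my own working assumption for the calculation):

```latex
\[
  \underbrace{\left(\tfrac{6.0}{1.6}\right)^{1/100} \approx 1.013}_{\text{20th century}}
  \qquad
  \underbrace{\left(\tfrac{10.1}{7.2}\right)^{1/86} \approx 1.004}_{\text{2014--2100}}
\]
```

That is roughly 1.3 percent growth per year during the 20th century versus about 0.4 percent per year going forward, less than half the earlier pace.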

In fact, the only region where rapid population growth is still expected is Africa, where the current population of over 1 billion is expected to reach 4 billion by the end of the 21st century. And given the current rate of economic growth, this could represent a positive development for the continent, which could see itself becoming the next powerhouse economy by the 2050s.

7. Clean energy is getting cheaper:
While the prices of fossil fuels are going up around the world, forcing companies to turn to dirtier means of oil and natural gas extraction, the price of solar energy has been dropping exponentially. In fact, the cost per watt of this renewable source of energy has dropped from a high of $80 in 1977 to $0.74 this past year. This represents a roughly 108-fold decrease in the space of 36 years.
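To spell out that arithmetic (my own quick check, using the figures just cited):

```latex
\[
  \frac{\$80}{\$0.74} \approx 108,
  \qquad
  108^{1/36} = e^{\ln(108)/36} \approx e^{0.13} \approx 1.14
\]
```

So the cost per watt has indeed fallen by a factor of roughly 108, which works out to prices dropping about 12 percent per year on average for 36 years straight.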

And while solar currently comprises only a quarter of a percent of the planet’s electricity supply, its total share grew by 86% last year. In addition, wind farms already provide 2% of the world’s electricity, and their capacity is doubling every three years. If these rates of increase hold, solar, wind and other renewables are likely to displace a significant share of coal, oil and gas in the near future.

Summary:
In short, things are looking up, even if they do have a long way to go. And a lot of what is expected to make the world a better place is likely to happen this year. Who knows which diseases we will find cures for? Who knows what inspirational leaders will come forward? And who knows what new and exciting inventions will be created, ones which offer creative and innovative solutions to our current problems?

Who knows? All I can say is that I am eager to find out!

Additional Reading: unstats.un.org, humanprogress.org, mdgs.un.org