Reciprocity – The Deets

Hey again, all. I find myself with some spare time for the first time in a while, so I thought I might take a moment to share an idea I've been working on in a bit more detail. In my last post, I talked about the bare bones of a story I'm working on known as Reciprocity, the successor to the story known as Apocrypha. But as it turns out, there are a lot of details to that story idea that I still want to share and get people's opinions on.

You might say this is a story that I am particularly serious about. Should it work out, it would be my break from both space-opera sci-fi and zombie fiction. A foray into the world of hard-hitting social commentary and speculative science fiction.

The Story:
So the year is 2030. The world is reeling from the effects of widespread drought, wildfires, coastal storms, flooding, and population displacement. At the same time, a revolution is taking place in terms of computing, robotics, biomachinery, and artificial intelligence. As a result, the world’s population finds itself being pulled in two different directions – between a future of scarcity and the promise of plenty.

Space exploration continues as private aerospace companies and space agencies all race to put boots on Mars, build a settlement on the Moon, and lay claim to the resources of the Solar System. India, China, the US, the EU, Russia, Argentina, Brazil, and Iran are all taking part now – using robotic probes and rovers to telexplore the System and prospect asteroids. Humanity's future as an interplanetary species seems all but guaranteed at this point.

Meanwhile, a new global balance of power is shaping up. While the US and the EU struggle with food and fuel shortages, Russia remains firmly in the grips of quasi-fascist interests, having spurned the idea of globalization and amicable relations with NATO and the EU in favor of its Collective Security Treaty, which in recent years has expanded to include Iran, Afghanistan and Pakistan.

China, meanwhile, is going through a period of transition. After the fall of Communism in 2023, the Chinese state is lurching between the forces of reform and ultra-nationalism, and no one is sure which side it will fall on. The economy has largely recovered, but the divide between rich and poor is all too apparent. And given the sense of listless frustration and angst, there is fear that a skilled politician could exploit it all too well.

It’s an era of uncertainty, high hopes, and a renewed Cold War.

The MacGuffin:
The central item of the story is a cybervirus known as Baoying, a quantum-decryption algorithm that was designed by Unit 61398 in the early 2020s to take down America’s quantum networks in the event of open war. When the Party fell from power, the Unit was dissolved and the virus itself was destroyed. However, rumors persist that one or more copies still exist…

Notable Characters:
For this ensemble to work, it had to offer a good cross-section of the world that will be, with all its national, social and economic boundaries represented. And so I came up with the following people – individuals who find themselves on different sides of what’s right, and who are each their own mix of good, bad, and ambiguous.

William Harding: A privileged high school senior with a bit of a drug problem who lives in Port Coquitlam, just outside the Pacific Northwest megalopolis of Cascadia. Like many people his age, he carries all his personal computing in the form of implants. However, a kidnapping and a close brush with death suddenly expand his worldview. Being at the mercy of others and deprived of his hardware, he realizes that his lifestyle has shielded him from the real world.

Amy Dixon: A young refugee who has moved to Cascadia from the American South. Her socioeconomic status places her and her family at the fringes of society, and she is determined to change their fortunes by plying her talents and being the first in her family to get a comprehensive education.

Fernie Dixon: Amy’s brother, a twenty-something man who lives away from her and claims to be a software developer. In reality, he is a member of the local Aryan Brotherhood, one of many gangs that run rampant in the outlying districts of the city. Not a true believer like his “brothers”, he seeks money and power so he can give his sister the opportunities he knows she deserves.

Shen Zhou: A former Lieutenant in the People’s Liberation Army and member of Unit 61398 during the Cyberwars of the late teens. After the fall of Communism, he did not ingratiate himself with the new government and was accused of spying for foreign interests. As a result, he left the country to pursue his own agenda, which places him in the crosshairs of both the new regime and western governments.

Arthur Banks: A major industrialist and part-owner of Harding Enterprises, a high-tech multinational that specializes in quantum computing and the development of artificial intelligence. For years, Banks and his associates have been working on a project known as QuaSI – a Quantum-based Sentient Intelligence that would revolutionize the world and usher in the Technological Singularity.

Rhianna Sanchez: Commander of Joint Task Force 2, an elite unit attached to the National Security Agency’s Cyberwarfare Division. For years, she and her task force have been charged with locating terror cells engaged in private cyberwarfare with the US and its allies. And Shen Zhou, a suspected terrorist with many troubling connections, gets on their radar when a mysterious kidnapping and a high-profile cyberintrusion coincide.

And that about covers the particulars. Naturally, there are a lot of other details, but I haven’t got all day and neither do you fine folks 😉 In any case, the idea is in the queue and it’s getting updated regularly. But I don’t plan to have it finished until I’ve polished off Oscar Mike, Arrivals, and a bunch of other projects first!

The Fate of Humanity

Welcome to the world of tomorroooooow! Or more precisely, to the many possible scenarios that humanity could face as it steps into the future. Perhaps it’s been all this talk of late about the future of humanity, and how space exploration and colonization may be the only way to ensure our survival. Or it could be I’m just recalling what a friend of mine – Chris A. Jackson – wrote with his “Flash in the Pan” piece, a short that subsequently inspired me to write the novel Source.

Either way, I’ve been thinking about likely future scenarios and thought I should include them alongside the Timeline of the Future. After all, one cannot predict the course of the future so much as predict possible outcomes and paths, and trust that the one they believe in most will come true. So, borrowing the format Chris used, here are a few potential fates, listed from worst to best – or least to most advanced.

1. Humanrien:
Due to the runaway effects of Climate Change during the 21st/22nd centuries, the Earth is now a desolate shadow of its once-great self. Humanity is non-existent, as are many other species of mammals, avians, reptiles, and insects. And it is predicted that the process will continue into the foreseeable future, until such time as the atmosphere becomes a poisoned, sulfuric vapor and the ground nothing more than windswept ashes and molten metal.

One thing is clear though: the Earth will never recover, and humanity’s failure to seed other planets with life and maintain a sustainable existence on Earth has led to its extinction. The universe shrugs and carries on…

2. Post-Apocalyptic:
Whether it is due to nuclear war, a bio-engineered plague, or some kind of “nanocaust”, civilization as we know it has come to an end. All major cities lie in ruin and are populated only by marauders and street gangs, the more peaceful-minded people having fled to the countryside long ago. In scattered locations along major rivers, coastlines, or within small pockets of land, tiny communities have formed and eke out an existence from the surrounding countryside.

At this point, it is unclear if humanity will recover or remain at the level of a pre-industrial civilization forever. One thing seems clear: humanity will not go extinct just yet. With so many pockets spread across the entire planet, no single fate could claim all of them anytime soon. At least, one can hope that it won’t.

3. Dog Days:
The world continues to endure recession as resource shortages, high food prices, and diminishing space for real estate plague the global economy. Fuel prices remain high, and opposition to new drilling and to oil and natural gas extraction is being blamed. Add to that the crushing burdens of displacement and flooding, which are costing governments billions of dollars a year, and you have life as we know it.

The smart money appears to be in offshore real estate, where Lillypad cities and arcologies are being built along the coastlines of the world. Already, habitats have been built in Boston, New York, New Orleans, Tokyo, Shanghai, Hong Kong and the south of France, and more are expected in the coming years. These are the most promising solution to the constant flooding and damage being caused by rising tides and increased coastal storms.

In these largely self-contained cities, those who can afford space intend to wait out the worst. It is expected that by the mid-point of the 22nd century, virtually all major ocean-front cities will be abandoned and those that sit on major waterways will be protected by huge levees. Farmland will also be virtually non-existent except within the Polar Belts, which means the people living in the most populous regions of the world will either have to migrate or die.

No one knows how the world’s 9 billion will endure in that time, but for the roughly 100 million living at sea, it’s not a pressing concern.

4. Technological Plateau:
Computers have reached a threshold of speed and processing power. Despite the discovery of graphene, the use of optical components, and the development of quantum computing/internet principles, it now seems that machines are as smart as they will ever be. That is to say, they are only slightly more intelligent than humans, and still can’t seem to beat the Turing Test with any consistency.

The long-awaited explosion in learning and intelligence predicted by Von Neumann, Kurzweil and Vinge seems to have fallen flat. That being said, life is getting better. With all these advances turned towards finding solutions to humanity’s problems, alternative energy, medicine, cybernetics and space exploration are still growing apace – just not as fast or as awesomely as people in the previous century had hoped.

Missions to Mars have been mounted, but a colony on that world is still a long way off. A settlement on the Moon has been built, but mainly to support the research and solar energy concerns that exist there. And global food shortages and CO2 emissions are steadily declining. It seems that the words “sane planning, sensible tomorrow” have come to characterize humanity’s existence. Which is good… not great, but good.

Humanity’s greatest expectations may have yielded some disappointment, but everyone agrees that things could have been a hell of a lot worse!

5. The Green Revolution:
The global population has reached 10 billion. But the good news is, it’s been that way for several decades. Thanks to smart housing, hydroponics and urban farms, hunger and malnutrition have been eliminated. The needs of the Earth’s people are also being met by a combination of wind, solar, tidal, geothermal and fusion power. And though space is very much at a premium, there is little want for housing anymore.

Additive manufacturing, biomanufacturing and nanomanufacturing have all led to an explosion in how public spaces are built and administered. Though it has led to the elimination of human construction and skilled labor, the process is much safer, cleaner, and more efficient, and has ensured that anything built within the past half-century is harmonious with the surrounding environment.

This explosion in geological engineering is due in part to settlement efforts on Mars and the terraforming of Venus. Building a liveable environment on one and transforming the acidic atmosphere of the other have helped humanity test key technologies and processes used to end global warming and rehabilitate the seas and soil here on Earth. Over 100,000 people now call themselves “Martian”, and an additional 10,000 Venusians are expected before long.

Colonization is an especially attractive prospect for those who feel that Earth is too crowded, too conservative, and lacking in personal space…

6. Intrepid Explorers:
Humanity has successfully colonized Mars and Venus, and is busy settling the many moons of the outer Solar System. Current population statistics indicate that over 50 billion people now live on a dozen worlds, and many are feeling the itch for adventure. With deep-space exploration now practical, thanks to the development of the Alcubierre Warp Drive, many missions have been mounted to explore and colonize neighboring star systems.

These include Earth’s immediate neighbor, Alpha Centauri, but also the viable star systems of Tau Ceti, Kapteyn, Gliese 581, Kepler 62, HD 85512, and many more. With so many Earth-like, potentially habitable planets in the near-universe now within our reach, nothing seems to stand between us and the dream of an interstellar human race. Missions to find extra-terrestrial intelligence are even being plotted.

This is one prospect humanity both anticipates and fears. While it is clear that no sentient life exists within the local group of star systems, our exploration of the cosmos has just begun. And if our ongoing scientific surveys have proven anything, it is that the conditions for life exist within many star systems and on many worlds. No telling when we might find one that has produced life of comparable complexity to our own, but time will tell.

One can only imagine what they will look like. One can only imagine if they are more or less advanced than us. And most importantly, one can only hope that they will be friendly…

7. Post-Humanity:
Cybernetics, biotechnology, and nanotechnology have led to an era of enhancement in which virtually every human being has evolved beyond their biological limitations. Advanced medicine, digital sentience and cryonics have prolonged life indefinitely, and when someone is facing death, they can preserve their neural patterns or their brain for all time by simply uploading the former or placing the latter into stasis.

Both of these options have made deep-space exploration a reality. Preserved human beings launch themselves towards exoplanets, while the neural uploads of explorers spend decades or even centuries traveling between solar systems aboard tiny spaceships. Space penetrators are fired in all directions to telexplore the most distant worlds, with the information being beamed back to Earth via quantum communications.

It is an age of posts – post-scarcity, post-mortality, and post-humanism. Despite the existence of two billion humans who have minimal enhancement, there appears to be no stopping the trend. And with the breakneck pace at which life moves around them, it is expected that the unenhanced – “organics” as they are often known – will migrate outward to Europa, Ganymede, Titan, Oberon, and the many space habitats that dot the outer Solar System.

Presumably, they will mount their own space exploration in the coming decades to find new homes abroad in interstellar space, where their kind can expect not to be swept aside by the unstoppable tide of progress.

8. Star Children:
Earth is no more. The Sun is now a mottled shadow of its old self. Surrounded by many layers of computronium, our parent star has gone from being the source of all light and energy in our solar system to the energy source that powers the giant Dyson Swarm at its center. Within this giant Matrioshka Brain, trillions of human minds live out an existence as quantum-state neural patterns, living indefinitely in simulated realities.

Within the outer Solar System and beyond lie billions more – enhanced transhumans and posthumans who have opted for an “Earthly” existence amongst the planets and stars. However, life seems somewhat limited out in those parts, very rustic compared to the infinite bandwidth and computational power of the inner Solar System. And with this strange dichotomy upon them, the human race suspects that it might have solved the Fermi Paradox.

If other sentient life can be expected to have followed a similar pattern of technological development as the human race, then surely they too have evolved to the point where the majority of their species lives in Dyson Swarms around their parent Sun. Venturing beyond holds little appeal, as it means moving away from the source of bandwidth and becoming isolated. Hopefully, enough of them are adventurous enough to meet humanity partway…

_____

Which will come true? Who’s to say? Whether it’s apocalyptic destruction or runaway technological evolution, cataclysmic change is expected and could very well threaten our existence. Personally, I’m hoping for something in the scenario 5 and/or 6 range. It would be nice to know that both humanity and the world it originated from will survive the coming centuries!

The Future is Here: Flexible, Paper Thin Ultra-HD Screens

The explosion in computing and personal devices in recent years has led to a world where we are constantly surrounded by displays. Whether they belong to personal computers, laptops, smartphones, LCD TVs, PDAs, or MP3 players, there is no shortage of screens for us to consult. In turn, this proliferation has led computer scientists and engineers to address a number of imperfections in these displays.

For instance, some of these displays don’t work in direct sunlight or are subject to glare. Others are horridly energy-inefficient and will drain their battery life very quickly. Some lack high definition and rich color, or can’t display true black. Just about all of them are rigid, and all can be broken given a solid enough impact. Luckily, a new age of flexible, ultra-HD screens is on the way that promises to resolve all of this.

The first examples of this concept were rolled out at the 2011 Consumer Electronics Show, where Samsung unveiled its revolutionary new AMOLED display on a number of devices. This was followed up in September of 2012 when Nokia unveiled its Kinetic Device at the World Nokia Conference in London. Both devices showcased displays that could bend and flex, and were followed by concept videos produced by electronics giants Sony, 3M and Microsoft.

Since that time, numerous strides have been taken to improve on the technology before it hits the open market. In research published earlier this month in Nature, scientists describe what may be the first steps toward creating a new type of ultrathin, superfast, low-power, high-resolution, flexible color screen. If successful, these displays could combine some of the best features of current display technologies.

The new displays work with familiar materials, including the metal alloy already used to store data on some CDs and DVDs. The key property of these materials is that they can exist in two states, and when energized by heat, light, or electricity, they switch from one state to the other. Scientists call them phase-change materials (PCMs); and as Alex Kolobov, a researcher at Japan’s Nanoelectronics Research Institute who was not involved in the new work, explains:

It is really fascinating that phase-change materials, now widely used in optical and nonvolatile electronic memory devices, found a potentially new application in display technology.

A PCM display would work similarly to the electronic paper used in products like Amazon’s Kindle reader. Both are made by sandwiching a material that has two states, one lighter and one darker, between layers of transparent conductors. In e-paper, the inner material is a viscous black oil filled with tiny white titanium balls. To make a pixel black or white, a current is run through a tiny area of the glass to either pull the reflective balls to the front or cause them to recede.

In a PCM display, the inner material is a substance made of silicon’s heavier cousins: germanium, antimony, and tellurium. The two states of this material (known as GST) are actually two different phases of matter: one an ordered crystal and the other a disordered glass. To switch between them, current pulses are used to melt a tiny column of the material, which is then either cooled gently to form the crystal or quenched rapidly to form the glass.

This cycle can be done remarkably quickly, more than 1 million times per second. That speed could be a big advantage in consumer products. While scrolling on a Kindle can be terribly slow because the screen only refreshes once per second, the refresh rate on a PCM display would be fast enough to play movies, stream videos, and perform all the tasks people routinely do with their devices.
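As a toy illustration of the two-state switching described above (the class, names, and the light/dark mapping are my own assumptions for the sketch, not details from the published research):

```python
from dataclasses import dataclass

@dataclass
class GSTPixel:
    """Minimal model of a phase-change pixel: melt, then set phase by cooling rate."""
    phase: str = "crystal"  # "crystal" (ordered) or "glass" (disordered)

    def pulse(self, cooling: str) -> None:
        # A current pulse melts the cell; gentle cooling recrystallizes it,
        # while rapid quenching freezes it into the glassy phase.
        if cooling not in ("slow", "fast"):
            raise ValueError("cooling must be 'slow' or 'fast'")
        self.phase = "crystal" if cooling == "slow" else "glass"

    @property
    def appearance(self) -> str:
        # The two phases reflect light differently; here we simply assume
        # crystal -> light and glass -> dark for illustration.
        return "light" if self.phase == "crystal" else "dark"

pixel = GSTPixel()
pixel.pulse("fast")     # rapid quench -> glassy phase
print(pixel.appearance) # prints "dark"
pixel.pulse("slow")     # gentle cooling -> crystalline phase
print(pixel.appearance) # prints "light"
```

The real device is of course analog physics, not a state machine, but the sketch captures why a single current-pulse scheme can drive both transitions: only the cooling profile differs.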

To make the new displays, the research team – led by Harish Bhaskaran, a nanoscale manufacturing expert at Oxford University – used a 35-year-old machine developed by the semiconductor industry. They laid down three layers, each a few nanometers thick: conducting glass, GST, and another layer of conducting glass. They then used current from the tip of an atomic force microscope to draw pictures on the surface.

These images included everything from a Japanese print of a tidal wave to fleas and antique cars – each one smaller than the width of a human hair. With this sort of flexible, ultra-high resolution screen, a PCM display could be made into everything from a bendable laptop and personal device to a programmable contact lens — like Apple’s Retina Display, except that it would actually fit on your retina.

Turning this technology into products will require years of labor and hundreds of millions of dollars. Nevertheless, Bhaskaran and his colleagues are optimistic. The electronics industry has lots of experience with all the components, so there are plenty of well-known tricks to try to improve this first draft. And they are hardly alone in their efforts to bring flexible displays to market.

For instance, LG unveiled their new line of flexible OLED TVs at CES earlier this year. Now, they are taking things a step further with the unveiling of two new 18-inch OLED panels, the first of which is a transparent display, while the second can be rolled up. Although both fall short of the 77-inch flexible TV on show at CES, the company says the new panels prove that it has the technology to bring rollable TVs with screens in excess of 50 inches to market in the future.

Unlike their 77-inch flexible TV, which has a fairly limited range of changeable curvature, LG Display’s latest flexible OLED panel can be rolled up into a cylinder with a radius of 3 cm (1.18 in) without the function of the 1,200 x 810 pixel display being affected. This is made possible through the use of a high molecular substance-based polyimide film for the backplane, rather than conventional plastic.

The transparent OLED panel, on the other hand, was created using LG Display’s transparent pixel design technology. With transmittance of 30 percent, the company says the panel is superior to existing transparent LCD panels that generally achieve around 10 to 15 percent transmittance. LG Display claims to have also reduced the haze of the panel, caused by circuit devices and film components, to just 2 percent.

As In-Byung Kang, Senior Vice President and Head of the R&D Center at LG Display, explained:

LG Display pioneered the OLED TV market and is now leading the next-generation applied OLED technology. We are confident that by 2017, we will successfully develop an Ultra HD flexible and transparent OLED panel of more than 60 inches, which will have transmittance of more than 40 percent and a curvature radius of 100R, thereby leading the future display market.

Granted, it will still be a few years and several hundred million dollars before such displays become the norm for computers and other devices. However, the progress being made is quite impressive, and with all the electronics giants committed to making it happen, an age where computing and communications are truly portable and much more survivable is likely just around the corner.

Sources: wired.com, gizmag.com, extremetech.com

Encoding Equality: Girl Geek Academy

When it comes to the gaming industry, there appears to be something of a glass ceiling. According to a developer satisfaction survey released last month by the International Game Developers Association, only 22 percent of people working in the gaming industry are women. And while this represents a nearly twofold increase from five years ago (11.5 percent), it is proportionally low considering that women make up some 48 percent of the gaming community.

This disparity is pretty common across software, app development, and tech startups (even though startups led by women produce 12 percent higher returns). The logical next step would be to encourage more women to enter these fields. This is where Girl Geek Academy comes in – an initiative aimed at teaching women the skills they need to start their own ventures, everything from coding classes to mentoring programs from successful start-ups.

And there’s definitely demand for it, according to co-founder, programmer and senior digital strategist Tammy Butow:

We have seen over the years that female-focused groups have helped increase the number of women attending technology events and learning technology skills. Over the last few years I have run Girl Geek Dinners Melbourne – in January 2013 we had 350 members – and we then ran a series of tech workshops to teach skills such as HTML, CSS and JS…

Girl Geek Dinners Melbourne now has over 1000 members. [Fellow co-founder] April [Staines] and I also ran Australia’s first all-female hackathon She Hacks in Melbourne. She Hacks sold out in one week, a few weeks later we also ran Australia’s first Startup Weekend Women event and that sold out too.

After running these workshops and discovering just how many women were interested in learning these skills, Butow and her associates decided to widen their scope. This they did by opening up a series of classes and programs for women of all ages (over 18) and skill levels, with a target of one million women building apps and creating startups by the year 2025.

As Butow explained, it’s all about taking the next step in the development of the internet as we know it:

The internet we know now was primarily built by men. We are interested in finding out what women would like to create. At the Startup Weekend Women event we recently ran, there were several teams that created apps focusing on flexible work opportunities for women. This was a very clear theme for the weekend. We had several women in attendance who were expecting children or had small children; they are interested in using technology to solve the problems they are experiencing.

Partnered with Google, Aduro and 99designs, the Academy offers a number of classes – either as face-to-face workshops or via Google Hangouts and Aduro. The two-hour classes range from programming languages such as JavaScript and Ruby to the basics of founding a startup, including public speaking and managing your finances.

More experienced women are encouraged to teach classes, and the Academy already boasts a variety of events, ranging from hackathons and makerfests to code getaways and study tours. The team is already organising the very first study tour, hoping to take Australian women to visit global startup hotspots such as Silicon Valley and Tel Aviv. And though women are the focus, men are welcome too, as long as they attend with a girl geek and are willing to lend a helping hand.

The first class took place on July 15th in Richmond, Victoria. For the price of AU$35, attendees got a healthy dinner and a seminar focused on the very first issue of development: how to pitch an idea. For an additional AU$10, people could get tickets for the Google Hangout. Those interested in events held over the next 12 months can look them up on the Girl Geek Academy website.

Personally, I think this is a great initiative with a noble purpose. Despite great strides being made by women in all walks of professional life, certain industries remain tougher than others to crack. By creating an organization and atmosphere that fosters support, guidance and welcomes contribution, the gaming industry is likely to see a lot more women on the supply side in coming years.

Perhaps then we can look forward to more positive representations of women in games, yes?

Sources: cnet.com, girlgeekacademy.com

Computex 2014

Earlier this month, Computex 2014 wrapped up in Taipei. And while this trade show may not have all the glitz and glamor of its counterpart in Vegas (aka. the Consumer Electronics Show), it is still an important launch pad for new IT products slated for release during the second half of the year. Compared to other venues, the Taiwanese event is more formal, more business-oriented, and geared toward people who love to tinker with their PCs.

For instance, it’s an accessible platform for many Asian vendors who may not have the budget to head to Vegas. And in addition to being cheaper to set up booths and show off their products, it gives people a chance to look at devices that wouldn’t often be seen in the western parts of the world. The timing of the show is also perfect for some manufacturers. Held in June, the show provides a fantastic window into the second half of the year.

For example, big-name brands like Asus typically use the event to launch a wide range of products. This year, that included such items as the super-slim Asus Book Chi and the multi-mode Book V, which, like their other products, demonstrate that the company has a flair for innovation that easily rivals the big western and Korean names. In addition, Intel – a long-time stalwart at Computex – premiered its fanless reference-design tablet that runs on the Llama Mountain chipset.

And much like CES, there were plenty of cool gadgets to be seen. These included a GPS tracker that can be attached to a dog collar to track a pet’s movements; a hardy new breed of Fujitsu laptop that showcases Japanese designers’ aim to make gear that is both waterproof and dustproof; the Rosewill Chic-C powerbank, which consists of 1,000mAh battery packs that attach together to provide additional power and even charge gadgets; and the Altek Cubic compact camera, which fits in the palm of the hand.

And then there was the Asus wireless storage device, a gadget that looks like an air freshener but is actually wireless storage that can be paired with a smartphone using near-field communication (NFC) technology – transferring data simply by bringing a device into close proximity with it. And as always, there were plenty of cameras, display headsets, mobile devices, and wearables. This last category was especially prominent, largely in the form of look-alikes of big-name wearables.

By and large, the devices displayed this year were variations on a theme: wrist-mounted fitness trackers, smartwatches, and head-mounted smartglasses. The SiMEye smartglass display, for example, was clearly inspired by Google Glass and even bears a strong resemblance to it. Though the show admittedly favored imitation over innovation, it did showcase a major trend in the computing and tech industry.

In his keynote speech, Microsoft’s Nick Parker talked about the age of ubiquitous computing, and the “devices we carry on us, as opposed to with us.” What this means is that we may very well be entering a PC-less age, where computing is embedded in devices of ever-diminishing size. Eventually, it could even be miniaturized to the point where it is stitched into our clothing or accessed through contact lenses, never mind glasses or headsets!

Sources: cnet.com, (2), (3), computextaipei.com

Frontiers of Neuroscience: Neurohacking and Neuromorphics

It is one of the hallmarks of our rapidly accelerating times: looking at the state of technology, how it is increasingly being merged with our biology, and contemplating the ultimate leap of merging mind and machine. The concept has been popular for decades now, and with experimental procedures showing promise, neuroscience inspiring the next great leap in computing, and biomedicine and bionics steadily advancing, it seems like just a matter of time before people can “hack” their neurology too.

Take Kevin Tracey, a researcher working for the Feinstein Institute for Medical Research in Manhasset, N.Y., as an example. Back in 1998, he began conducting experiments to show that an interface existed between the immune and nervous systems. Building on ten years’ worth of research, he was able to show how inflammation – which is associated with rheumatoid arthritis and Crohn’s disease – can be fought by administering electrical stimuli, in the right doses, to the vagus nerve cluster.

In so doing, he demonstrated that the nervous system works like a computer terminal through which you can deliver commands to stop a problem, like acute inflammation, before it starts, or to repair the body after it gets sick. His work also indicated that electricity delivered to the vagus nerve at just the right intensity and at precise intervals could reproduce a drug’s therapeutic effect, but with greater effectiveness, minimal health risks, and at a fraction of the cost of “biologic” pharmaceuticals.

Paul Frenette, a stem-cell researcher at the Albert Einstein College of Medicine in the Bronx, is another example. After discovering the link between the nervous system and prostate tumors, he and his colleagues created SetPoint, a startup dedicated to finding ways to manipulate neural input to delay the growth of tumors. These and other efforts are part of the growing field of bioelectronics, where researchers are creating implants that can communicate directly with the nervous system in order to try to fight everything from cancer to the common cold.

Impressive as this may seem, bioelectronics are just one part of the growing discussion about neurohacking. In addition to the leaps and bounds being made in brain-to-computer interfacing (and brain-to-brain interfacing), which would allow people to control machinery and share thoughts across vast distances, there is also a field of neurosurgery that is seeking to use the miracle material graphene to solve some of its most challenging problems.

Given graphene’s rather amazing properties, this should not come as much of a surprise. In addition to being incredibly thin, lightweight, and light-sensitive (it is able to absorb light in both the UV and IR ranges), graphene also has a very high surface area (2,630 square meters per gram) and remarkable conductivity. It also has the ability to bind or bioconjugate with various modifier molecules, and hence transform its behavior.

Already, graphene is being considered as a possible alternative to copper wiring to break the energy-efficiency barrier in computing, and it may even prove useful in quantum computing. But in the field of neurosurgery, researchers are looking to develop materials that can bridge and even stimulate nerves. In a story featured in the latest issue of Neurosurgery, the authors suggest that graphene may be ideal as an electroactive scaffold when configured as a three-dimensional porous structure.

That might be preferable to other ideas currently in vogue, such as using liquid metal alloys as nerve bridges. Thanks to Samsung’s recent research into using graphene in its portable devices, the material has also been shown to make an ideal E-field stimulator. And recent experiments on mice in Korea showed that a flexible, transparent graphene skin could be used as an electric-field stimulator to treat cerebral hypoperfusion by stimulating blood flow through the brain.

And what look at the frontiers of neuroscience would be complete without mentioning neuromorphic engineering? Whereas neurohacking and neurosurgery are looking for ways to merge technology with the human brain to combat disease and improve health, neuromorphic engineering (NE) looks to the human brain to create computational technology with improved functionality. The result thus far has been a wide range of neuromorphic chips and components, such as memristors and neuristors.

However, as a whole, the field has yet to define for itself a clear path forward. That may be about to change thanks to Jennifer Hasler and a team of researchers at Georgia Tech, who recently published a roadmap to the future of neuromorphic engineering, with the end goal of creating the human-brain equivalent of processing power. To produce it, Hasler sorted through the many different approaches to embodying neurons in silico and identified the technology she thinks is the way forward.

Her answer is not digital simulation, but rather the lesser-known technology of FPAAs (Field-Programmable Analog Arrays). FPAAs are similar to digital FPGAs (Field-Programmable Gate Arrays), but also include reconfigurable analog elements. They have been around on the sidelines for a few years, used primarily as so-called “analog glue logic” in system integration; in short, they handle a variety of analog functions that don’t fit on a traditional integrated circuit.

Hasler outlines an approach where desktop neuromorphic systems will use System on a Chip (SoC) approaches to emulate billions of low-power neuron-like elements that compute using learning synapses. Each synapse has an adjustable strength associated with it and is modeled using just a single transistor. Her own design for an FPAA board houses hundreds of thousands of programmable parameters which enable systems-level computing on a scale that dwarfs other FPAA designs.
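To make the “one adjustable parameter per synapse” idea concrete, here is a minimal sketch in Python. Everything here – the class name, the threshold, the leak factor – is my own illustrative assumption, not Hasler’s actual FPAA design; a real implementation computes in analog hardware rather than software.

```python
# Illustrative sketch: each synapse is a single adjustable weight, and a
# leaky integrate-and-fire neuron sums the weighted input spikes, firing
# when its membrane potential crosses a threshold.

class LIFNeuron:
    def __init__(self, n_inputs, threshold=1.0, leak=0.9):
        self.weights = [0.5] * n_inputs    # one parameter per synapse
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak                   # fraction of potential kept per step

    def step(self, spikes):
        """spikes: list of 0/1 input events, one per synapse."""
        self.potential *= self.leak                    # passive decay
        self.potential += sum(w * s for w, s in zip(self.weights, spikes))
        if self.potential >= self.threshold:
            self.potential = 0.0                       # reset after firing
            return 1
        return 0

neuron = LIFNeuron(n_inputs=3)
outputs = [neuron.step([1, 0, 1]) for _ in range(5)]   # two active synapses
```

With two active synapses at weight 0.5 each, the potential reaches the threshold every step, so the neuron fires continuously; lower the weights and it fires only after several steps of accumulation, which is exactly the behavior the adjustable synapse strength controls.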

At the moment, she predicts that human-brain-equivalent systems will require a reduction in power usage to the point where they consume just one-eighth of what the digital supercomputers currently used to simulate neuromorphic systems require. Her own design accounts for a four-fold reduction in power usage, but the rest will have to come from somewhere else, possibly through the use of better materials (e.g. graphene or one of its derivatives).
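Spelling out the arithmetic in those figures: if the target power budget is one-eighth of today’s digital simulation, and the FPAA design itself delivers a four-fold reduction, then the factor still to be found in materials and elsewhere is two-fold:

```latex
P_{\text{target}} = \frac{P_{\text{digital}}}{8}, \qquad
\underbrace{8\times}_{\text{required}} \;=\; \underbrace{4\times}_{\text{FPAA design}} \times \underbrace{2\times}_{\text{materials, etc.}}
```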

Hasler also forecasts that using soon-to-be-available 10 nm processes, a desktop system with human-like processing power that consumes just 50 watts of electricity may eventually be a reality. These will likely take the form of chips with millions of neuron-like elements connected by billions of synapses firing to push each other over the edge, and who’s to say what they will be capable of accomplishing, or what other breakthroughs they will make possible?

In the end, neuromorphic chips and technology are merely one half of the equation. In the grand scheme of things, the aim of all this research is not only to produce technology that can ensure better biology, but also technology inspired by biology to create better machinery. The end result, according to some, is a world in which biology and technology increasingly resemble each other, to the point where there is barely a distinction to be made and the two can be merged.

Charles Darwin would roll over in his grave!

Sources: nytimes.com, extremetech.com, (2), journal.frontiersin.org, pubs.acs.org

The Future of Computing: Brain-Like Computers

It’s no secret that computer scientists and engineers are looking to the human brain as a means of achieving the next great leap in computer evolution. Already, machines are being developed that rely on machine blood, can continue working despite being damaged, and can recognize images and speech. And soon, a computer chip that is capable of learning from its mistakes will also be available.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system – specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014, the result of a collaborative effort between I.B.M. and Qualcomm, as well as a Stanford research team. This “neuromorphic processor” can not only automate tasks that once required painstaking programming, but can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

In coming years, the approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That can hold enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in elementary stages and rely heavily on human programming.

For example, computer vision systems only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation. But last year, Google researchers were able to get a machine-learning algorithm, known as a “Google Neural Network”, to perform an identification task (involving cats) without supervision.

And this past June, the company said it had used those neural network techniques to develop a new search service that helps customers find specific photos more accurately. This past November, researchers at Stanford University came up with a new algorithm that could give computers the power to interpret language more reliably. It’s known as the Neural Analysis of Sentiment (NaSent).

A similar concept, known as Deep Learning, is also looking to endow software with a measure of common sense. Google is using this technique with its voice recognition technology to aid in performing searches. In addition, the social media giant Facebook is looking to use deep learning to help improve Graph Search, an engine that allows users to search activity on their network.

Until now, the design of computers was dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of binary code (0s and 1s). The information is stored separately in what is known as memory, either in the processor itself, in adjacent storage chips or in higher capacity magnetic disk drives.

By contrast, the new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

These processors are not “programmed” in the conventional sense. Instead, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are altered as data flows into the chip, causing the elements to change their values and “spike.” This, in turn, strengthens some connections and weakens others, reacting much the same way the human brain does.
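That weight-and-spike feedback loop can be sketched in a few lines of Python. This is a toy Hebbian-style model under my own assumptions about the threshold, learning rate, and decay; it is not the logic of any actual chip, but it shows how incoming data alone, with no explicit programming, strengthens some connections and weakens others.

```python
# Toy Hebbian-style update: connections that helped produce an output spike
# are strengthened, while connections that stayed silent decay slightly.

def hebbian_step(weights, inputs, threshold=1.0, lr=0.1, decay=0.01):
    """Return (output_spike, new_weights) for one pass of 0/1 input spikes."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    spike = 1 if activation >= threshold else 0
    new_weights = []
    for w, x in zip(weights, inputs):
        if spike and x:                # pre- and post-synaptic firing coincide
            w += lr                    # -> strengthen the connection
        else:
            w -= decay                 # -> let unused connections fade
        new_weights.append(max(0.0, min(1.0, w)))  # keep weights bounded
    return spike, new_weights

weights = [0.6, 0.6, 0.1]
for _ in range(3):                     # three waves of the same input pattern
    spike, weights = hebbian_step(weights, [1, 1, 0])
```

After three passes, the two active synapses have climbed toward 0.9 while the idle one has faded toward 0.07: the “program” is simply whatever correlations the data carried.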

In the words of Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort:

Instead of bringing data to computation as we do today, we can now bring computation to data. Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.

One great advantage of the new approach is its ability to tolerate glitches, whereas traditional computers cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever-changing, allowing the system to continuously adapt and work around failures to complete tasks. Another benefit is energy efficiency, a further inspiration drawn from the human brain.

The new computers, which are still based on silicon chips, will not replace today’s computers, but augment them; at least for the foreseeable future. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and the centralized computers that run computing clouds.

However, the new approach is still limited, thanks to the fact that scientists still do not fully understand how the human brain functions. As Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, put it:

We have no clue. I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.

Luckily, there are efforts underway designed to remedy this, with the specific intention of directing that knowledge towards the creation of better computers and AIs. One such effort is the National Science Foundation-financed Center for Brains, Minds and Machines, a new research center based at the Massachusetts Institute of Technology in collaboration with Harvard and Cornell.

Another is the California Institute for Telecommunications and Information Technology (aka. Calit2), a center dedicated to innovation in nanotechnology, life sciences, information technology, and telecommunications. As Larry Smarr, an astrophysicist and the Institute’s director, put it:

We’re moving from engineering computing systems to something that has many of the characteristics of biological computing.

And last, but certainly not least, is the Human Brain Project, an international group of 200 scientists from 80 different research institutions, based in Lausanne, Switzerland. Having secured the $1.6 billion needed to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This initiative, which has been compared to the Large Hadron Collider, will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into an overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

When future generations look back on this decade, no doubt they will refer to it as the birth of the neuromorphic computing revolution. Or maybe just the Neuromorphic Revolution for short, but that sort of depends on the outcome. With so many technological revolutions well underway, it is difficult to imagine how the future will look back on and characterize this time.

Perhaps, as Charles Stross suggests, it will simply be known as “the teens”: that time in pre-Singularity history when it was all starting to come together, but had yet to explode and violently change everything we know. I for one am looking forward to being around to witness it all!

Sources: nytimes.com, technologyreview.com, calit2.net, humanbrainproject.eu