The Future of Computing: Brain-Like Computers

It’s no secret that computer scientists and engineers are looking to the human brain as a means of achieving the next great leap in computer evolution. Already, machines are being developed that rely on “machine blood,” can continue working despite being damaged, and can recognize images and speech. And soon, a computer chip that is capable of learning from its mistakes will also be available.

The new computing approach, already in use by some large technology companies, is based on the biological nervous system – specifically on how neurons react to stimuli and connect with other neurons to interpret information. It allows computers to absorb new information while carrying out a task, and adjust what they do based on the changing signals.

The first commercial version of the new kind of computer chip is scheduled to be released in 2014 and is the result of a collaborative effort between I.B.M. and Qualcomm, as well as a Stanford research team. This “neuromorphic processor” can not only automate tasks that once required painstaking programming, but can also sidestep and even tolerate errors, potentially making the term “computer crash” obsolete.

In coming years, this approach will make possible a new generation of artificial intelligence systems that will perform some functions that humans do with ease: see, speak, listen, navigate, manipulate and control. That could have enormous consequences for tasks like facial and speech recognition, navigation and planning, which are still in their elementary stages and rely heavily on human programming.

For example, computer vision systems only “recognize” objects that can be identified by the statistics-oriented algorithms programmed into them. An algorithm is like a recipe, a set of step-by-step instructions to perform a calculation. But last year, Google researchers were able to get a machine-learning algorithm, known as a “Google Neural Network”, to perform an identification task (involving cats) without supervision.

And this past June, the company said it had used those neural network techniques to develop a new search service to help customers find specific photos more accurately. Then this past November, researchers at Stanford University came up with a new algorithm that could give computers the power to more reliably interpret language. It’s known as the Neural Analysis of Sentiment (NaSent).

A similar concept, known as Deep Learning, also looks to endow software with a measure of common sense. Google is using this technique with its voice recognition technology to aid in performing searches. In addition, the social media giant Facebook is looking to use deep learning to help improve Graph Search, an engine that allows users to search activity on their network.

Until now, the design of computers has been dictated by ideas originated by the mathematician John von Neumann about 65 years ago. Microprocessors perform operations at lightning speed, following instructions programmed using long strings of binary code (0s and 1s). The information is stored separately in what is known as memory, either in the processor itself, in adjacent storage chips or in higher-capacity magnetic disk drives.
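Purely as an illustration of that model, here is a toy Python sketch of a von Neumann-style machine: a processor stepping through instructions held in memory, with the data it operates on stored in that same separate memory. The “instruction set” and addresses are invented for this example and correspond to no real processor.

```python
# Toy von Neumann machine: instructions and data share one memory,
# and a processor fetches and executes them one at a time.

memory = [
    ("LOAD", 10),   # load the value at address 10 into the accumulator
    ("ADD", 11),    # add the value at address 11
    ("STORE", 12),  # write the result back to address 12
    ("HALT", None),
    None, None, None, None, None, None,
    4, 5, None,     # addresses 10-12: plain data, living alongside the code
]

def run(memory):
    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        op, addr = memory[pc]           # fetch the next instruction from memory
        pc += 1
        if op == "LOAD":
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            memory[addr] = acc
        elif op == "HALT":
            return memory

print(run(memory)[12])   # -> 9
```

The point of the sketch is simply the separation the paragraph describes: computation happens in one place, and everything it needs must be shuttled in and out of a distinct memory.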

By contrast, the new processors consist of electronic components that can be connected by wires that mimic biological synapses. Because they are based on large groups of neuron-like elements, they are known as neuromorphic processors, a term credited to the California Institute of Technology physicist Carver Mead, who pioneered the concept in the late 1980s.

These processors are not “programmed” in the conventional sense. Instead, the connections between the circuits are “weighted” according to correlations in data that the processor has already “learned.” Those weights are then altered as data flows into the chip, causing them to change their values and to “spike.” This, in turn, strengthens some connections and weakens others, reacting in much the same way the human brain does.
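To give a rough sense of what “weighting” and “spiking” mean here, below is a loose Python sketch of a single neuron-like element with a simple threshold-and-strengthen rule: weighted inputs accumulate, the element “spikes” when a threshold is crossed, and the connections that contributed to the spike are strengthened while the rest are weakened. The threshold, leak and learning rate are made-up values; this is an assumption-laden toy, not a description of how IBM’s or anyone else’s chip actually works.

```python
import random

# One neuron-like element: weighted inputs, a leaky potential, a spike
# threshold, and a Hebbian-style update applied whenever it spikes.

weights = [random.uniform(0.0, 0.5) for _ in range(4)]
potential = 0.0
THRESHOLD, LEAK, LEARN_RATE = 1.0, 0.9, 0.05

def step(inputs):
    """Feed one vector of 0/1 input spikes through the element."""
    global potential, weights
    potential = potential * LEAK + sum(w * x for w, x in zip(weights, inputs))
    if potential >= THRESHOLD:                       # the element "spikes"
        potential = 0.0
        # strengthen connections that were active, weaken the rest
        weights = [min(w + LEARN_RATE, 1.0) if x else max(w - LEARN_RATE, 0.0)
                   for w, x in zip(weights, inputs)]
        return 1
    return 0

# Repeatedly presenting the same pattern gradually reinforces its connections,
# without any explicit program telling the element what that pattern is.
for _ in range(20):
    step([1, 1, 0, 0])
print([round(w, 2) for w in weights])
```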

In the words of Dharmendra Modha, an I.B.M. computer scientist who leads the company’s cognitive computing research effort:

Instead of bringing data to computation as we do today, we can now bring computation to data. Sensors become the computer, and it opens up a new way to use computer chips that can be everywhere.

One great advantage of the new approach is its ability to tolerate glitches, whereas traditional computers cannot work around the failure of even a single transistor. With the biological designs, the algorithms are ever-changing, allowing the system to continuously adapt and work around failures to complete tasks. Another benefit is energy efficiency, an inspiration also drawn from the human brain.

The new computers, which are still based on silicon chips, will not replace today’s computers but augment them, at least for the foreseeable future. Many computer designers see them as coprocessors, meaning they can work in tandem with other circuits that can be embedded in smartphones and in the centralized computers that run computing clouds.

However, the new approach is still limited, thanks to the fact that scientists still do not fully understand how the human brain functions. As Kwabena Boahen, a computer scientist who leads Stanford’s Brains in Silicon research program, put it:

We have no clue. I’m an engineer, and I build things. There are these highfalutin theories, but give me one that will let me build something.

Luckily, there are efforts underway designed to remedy this, with the specific intention of directing that knowledge toward the creation of better computers and AIs. One such effort is the Center for Brains, Minds and Machines, a new research center financed by the National Science Foundation and based at the Massachusetts Institute of Technology, with Harvard and Cornell.

Another is the California Institute for Telecommunications and Information Technology (aka Calit2) – a center dedicated to innovation in nanotechnology, life sciences, information technology, and telecommunications. As Larry Smarr, an astrophysicist and director of the Institute, put it:

We’re moving from engineering computing systems to something that has many of the characteristics of biological computing.

And last, but certainly not least, is the Human Brain Project, an international group of 200 scientists from 80 different research institutions, based in Lausanne, Switzerland. Having secured the $1.6 billion they need to fund their efforts, these researchers will spend the next ten years conducting research that cuts across multiple disciplines.

This initiative, which has been compared to the Large Hadron Collider, will attempt to reconstruct the human brain piece-by-piece and gradually bring these cognitive components into an overarching supercomputer. The expected result of this research will be new platforms for “neuromorphic computing” and “neurorobotics,” allowing for the creation of computing and robotic architectures that mimic the functions of the human brain.

When future generations look back on this decade, no doubt they will refer to it as the birth of the neuromorphic computing revolution. Or maybe just the Neuromorphic Revolution for short, but that sort of depends on the outcome. With so many technological revolutions well underway, it is difficult to imagine how the future will look back and characterize this time.

Perhaps, as Charles Stross suggests, it will simply be known as “the teens”, that time in pre-Singularity history when it was all starting to come together but had yet to explode and violently change everything we know. I for one am looking forward to being around to witness it all!

Sources: nytimes.com, technologyreview.com, calit2.net, humanbrainproject.eu

Cyberwars: The Biggest Cyber Attack in History?

It’s been declared: the largest cyber attack in the history of the internet is happening right now. But you can forget about the US and China – this one is going on between two private organizations. In short, the fight comes down to Cyberbunker – an internet host operating out of a decommissioned NATO bunker just outside of Kloetinge in the Netherlands – and a non-profit anti-spam organization named Spamhaus.

But first, a little background information is required for those of us not well-versed in the comings and goings of cyberwarfare (I include myself in this mix). Cyberbunker, as its name suggests, is an internet service provider and data haven that hosts websites and data stores for various companies. Founded in 1998, it began with the mission of hosting companies and protecting their data-assets from intrusion and attack.

Spamhaus, on the other hand, is a non-profit that tracks internet addresses that are sources of email spam and adds them to a blacklist. Companies that use this blacklist – which include pretty much every email provider and most internet service providers on the planet – automatically block those addresses. Hence, to be blacklisted by this organization is to have your bottom line seriously affected.
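For the curious, here is a minimal sketch of how a mail server typically consults a DNS-based blacklist such as Spamhaus’s: the sender’s IP address is reversed, appended to the blacklist’s zone, and looked up in DNS. Any answer means the address is listed. The zone name and test address below are the commonly documented ones, but treat this as an illustration rather than production code.

```python
import socket

def is_blacklisted(ip, dnsbl="zen.spamhaus.org"):
    """Check a sending IP against a DNS-based blacklist (DNSBL).

    The octets are reversed, the blacklist zone is appended, and a DNS
    lookup is performed; a successful answer means the IP is listed.
    """
    query = ".".join(reversed(ip.split("."))) + "." + dnsbl
    try:
        socket.gethostbyname(query)
        return True          # listed: a mail server would typically reject the message
    except socket.gaierror:
        return False         # no DNS record: not listed

print(is_blacklisted("127.0.0.2"))   # the standard DNSBL test address, always listed
```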

The conflict between these two belligerents began in 2011, when Spamhaus began targeting Cyberbunker through one of its clients – an internet service provider named A2B. At the time, Spamhaus was trying to convince said provider that Cyberbunker was a haven for spam email, which led A2B to drop them as a client. Shortly thereafter, Cyberbunker moved on to a new internet service provider, leaving Spamhaus free to blacklist them directly.

When they did, Cyberbunker responded in a way that seemed to suggest they wanted to live up to the reputation Spamhaus was bestowing on them. This involved massive retaliation: a distributed denial-of-service attack of some 300 billion bits (300 gigabits) of data per second, designed to clog Spamhaus’s connection to the internet and shut down their infrastructure.

This might sound like a tiff between two internet companies and nothing more. But in truth, the attack was so big that it began affecting service for regular people like you and me, who happen to rely on some of the internet connections the attack was commandeering. In short, millions were affected by this “largest attack in internet history” as their internet slowed down and even cut out. Some even went as far as to say that it “almost broke the internet”.

But for many others, this attack went unnoticed. In fact, according to an article by Gizmodo, most people were relatively unaffected. While some companies, like Netflix, reported sluggish streaming, they did not go down; mega net-enterprises such as Amazon reported nothing unusual; and organizations that monitor the health of the web “showed zero evidence of this Dutch conflict spilling over into our online backyards”.

In short, the attack was a major one: it had a profound impact on the sites it was directed at, and the collateral damage was noticeable. But beyond that, nothing catastrophic happened, and this tiff remains a war between an organization known for spamming and one known for targeting it – one that shows no signs of slowing down or stopping anytime soon.

According to Patrick Gilmore, chief architect at the internet hosting service Akamai who was interviewed by the New York Times, the bottom line for CyberBunker is that “they think they should be allowed to spam.” CyberBunker is explicit on its homepage that it will host anything but child pornography and “anything related to terrorism.”

So while this latest incident did not cause an “Infopocalypse”, it does raise some interesting questions. For one, how hard is it to wage full-scale cyberwarfare in this day and age? Apparently, it is rather easy to create massive networks of “zombie PCs” and use them to carry out attacks like this one – not to mention cheap, since the hardware and software involved is hardly sophisticated.

And as it stands, numerous groups, including military hackers, are engaged in a back-and-forth with government and industrial giants that involves stealing information and spying on their activities. If things were to escalate, would it not be very easy for hackers or national cyberwarfare rings – especially ones operating out of China, Israel, Iran, Russia or the US – to try to shut down their enemies’ infrastructure by launching terabytes of useless data at them?

Oh, I shudder to think! An entire nation brought to its knees by ads for Russian brides, discount watches and cheap Viagra! But for the moment, it seems this latest apocalyptic prediction has proven to be just as flaccid as the others. Oh well, another day, another dollar…

Sources: qz.com, guardian.co.uk, gizmodo.com

Future Timeline

This one has been sitting in my box of ideas for quite some time: a website that produces videos dedicated to predicting future trends. A while back, I came across it while searching on the subject of the Technological Singularity, and was pretty intrigued by what I saw. Not only is this website dedicated to predicting major technological developments in the near future – the ones that would culminate in the Singularity – it even considers humanity’s prospects as a species in the far, far future. After taking a look around I thought to myself: “truly, this is the stuff of speculative science-fiction.”

To get a breakdown of what the makers of this site predict, check out the videos posted below, as compiled by HayenMill on YouTube. A self-professed amateur historian and futurist, HayenMill took the liberty of combining the Future Timeline predictions, year by year, covering the three decades that will take us from the beginning of 2010 to 2040, by which time all the current trends of the world will reach fever pitch. These include the problems of overpopulation, climate change, the shift of economic power from the US to Asia, and the growth of information, medical, and biological technology, as well as the development of AI and commercial spaceflight.

Check them out, and for a more detailed breakdown of future events, go to futuretimeline.net. Trust me when I say that the group’s predictions range far and wide, but they are also highly detailed, at least when pertaining to this century! You can take me at my word when I say that I will be doing my best to incorporate as many of these ideas as possible into my own writing!


The Future Of Education

Hi all, and welcome to the third and final installment in the “Envisioning Technology” series. Today, it’s the “Future of Education Technology” that’s up for all to see. Much like their speculative work on Future Tech and the Future of Medicine, they present us here with an infographic that shows the interrelated fields of educational technology and how growth in one will inevitably lead to change in others.

On the one hand, we see a gradual transition from the Classroom (i.e. traditional educational environment) to the Studio environment, where a peer and group dynamic becomes the focus, rather than classic teacher-student transmission. In the final environment, learning becomes Virtual, divorced from any specific geographical context – i.e. it happens wherever you are, not just in a classroom or academic institution.

Also, by incorporating various education and education-related technologies, six stages are discerned within this process. As usual, the entire process is traced from the present day to 2040, with many of the necessary technologies already in existence or in the process of development.

As a teacher, I was rather fascinated to see this, as it illustrates much of what was being espoused when I was still in teacher’s college. Back then, the concept of the post-modernist classroom was all the rage, even though there were many who insisted that this movement had passed.

Intrinsic to the concept was the deconstruction of the traditional learning paradigm and even the classroom environment. Openness was the new rule and individuation the new philosophy: build on a student’s existing knowledge and experience rather than simply handing them the curriculum and evaluating their assimilation of it.

Naturally, many of us felt the same way about all the concepts and ideas being thrown at us: they seemed highly idiosyncratic and theoretical. Missing from just about all the articles, studies and lectures we heard on the subject was any mention of how this was to be done. Lectures on applied technology and new methods, on the other hand, seemed much more effective. Whereas the theory seemed to be commenting on trends that were happening, or still needed to happen, these lectures seemed to be showing us how.

Kind of makes you think… and in a way, I’m reminded of what men like George Orwell said. In 1984 (Goldstein’s Manifesto, to be specific), he claimed that the advent of modern industry and education had removed the basis of class distinction and elitism. By the 20th century, when totalitarian philosophies emerged, humanity was closer to the state of true equality that Marx predicted than ever before. Granted, that road has been fraught with bumps and attempts at subversion, but the general trend seems pretty clear.

Perhaps we’re seeing something of the same thing here with the emergence of IT and what people like Foucault, Derrida and Habermas predicted. The breakdown of singular standards, the opening of discourse, the plurality of perspective and opinions. Perhaps they weren’t just speaking off the cuff or stuck in an esoteric bubble. Maybe they were just picking up on trends which were yet to come to true fruition.

Makes me think, at any rate. But then again, that’s the point isn’t it?

The Future…

A recent article from The Futurist concerning trends in the coming decade got me thinking… If we can expect major shifts in the technological and economic landscape, but at the same time be experiencing worries about climate change and resource shortages, what will the future look like? Two competing forces are warring for possession of our future; which one will win?

To hear Singularitarians and Futurists tell it, in the not-too-distant future we will be capable of downloading our consciousness and merging our brains with machine technology. At about the same time, we’re likely to perfect nanobots that will be capable of altering matter at the atomic level. We will be living in a post-mortal, post-scarcity future where just about anything is possible and we will be able to colonize the Solar System and beyond.

But to hear environmentalists and crisis planners tell it, we will be looking at a worldwide shortage of basic commodities and food due to climate change. The world’s breadbaskets, like the American Midwest, Canada’s Prairies, and the Russian Steppe, will suffer from repeated droughts, putting a strain on food production and food prices. Places that are already hard pressed to feed their growing populations, like China and India, will be even harder pressed. Many countries in the mid-latitudes that are already suffering from instability due to lack of irrigation and hunger – Pakistan, North Africa, the Middle East, Saharan Africa – will become even more unstable.

Polar ice regions will continue to melt, wreaking havoc with the Gulf Stream and forcing Europe to experience freezing winters and its own crop failures. And to top it off, tropical regions will suffer from increased tropical storm activity and flooding. This will create a massive refugee crisis, where up to 25% of the world’s population will try to shift north and south to occupy the cooler climes and more developed parts of the world. And this, of course, will lead to all kinds of political upheaval and incidents as armed forces are called out to keep them away.

Makes you wonder…

To hear the future characterized in such dystopian and utopian terms is nothing new. But at this juncture, both of these visions seem closer to coming true than ever before. With the unprecedented growth in computing, information technology, and biology, we could very well be making DNA-based computers and AIs in a few decades. But the climate crisis is already happening, with record heat, terrible wildfires, tropical storms and food shortages already gripping the world. Two tidal waves are rising and heading on a collision course, both threatening to sweep humanity up in their wake. Which will win out, or will one arrive first and render the other moot?

Hard to say. In the meantime, check out the article – it proves to be an interesting read!

The Futurist – Seven Themes For the Coming Decade

Of William Gibson (The Bigend Trilogy)

Not only is he a famous author, he’s also a fellow BCite and the man who literally wrote the book on cyberpunk. Beyond that, his books have been renowned for capturing the zeitgeist of our times, an age characterized by revolutions in information technology and mass media. And I can honestly say that I’ve tried to emulate him in recent years. His Neuromancer was required reading seeing as how I wanted to get into hard sci-fi and he’s a major name. And his latest works also gave me a push in the direction of modern day fiction, dealing with the cutting edge rather than the future.

But… gotta be honest here, these books have been a bit of a disappointment for me. Pattern Recognition, Spook Country, and Zero History are all mainstream bestsellers that did an awful lot to capture the spirit of our age once more, but they all shared elements which I thought were kind of… well, weak. For example, consider the plot set-ups to all three books. All things in this trilogy by Gibson revolve around the enigmatic (and absurdly named) Hubertus Bigend. He’s an advertising magnate who’s always looking for the angles, the hidden agendas, the thing that’s beyond cutting edge, just five minutes away from becoming real. And to investigate these things, he hires freelance contractors, strange people with strange gifts. And that’s what sets the plots in motion every time.

In Pattern Recognition, he recruits Cayce Pollard (pronounced Cay-See), a freelance “coolhunter” who uses her odd intuition to evaluate logos and brand names for companies. Her father was lost in 9/11 (something that Gibson had to include because it occurred while he was writing it) and this haunts her. In addition to her weird skills (hypersensitivity to iconography), she follows footage on the internet produced by some cinematic genius. Bigend wants the creator found because… he’s curious, he wants their talent, or something like that. So Cayce sets out to find them, relying on Bigend’s network, his dime, and her own personal contacts. Her journey takes her from New York, to London, to Tokyo, and finally to Moscow, all the while pursued by a rival and some shadowy agents whose agenda is not quite clear. In the end, she finds the genius in Moscow: a brain-damaged girl whose sister takes care of her and puts out the footage as a way of expressing herself. The dark agents pursuing her turn out to be their protectors, who stalked her only because they weren’t sure of her, and Bigend’s slightly richer for having uncovered the truth… I guess. Cool idea, weak climax, weird overall point.

Then there was Spook Country. The name alone was telling, alluding to its focus on the paranoia and intrigue of post-9/11 America. In this one, Bigend is back, employing yet another freelance contractor named Hollis Henry (why they couldn’t just bring Cayce back is beyond me, but whatever). This one has no weird intuition; she’s just a former teen singer who’s gone on to become a writer about the industry. He hires her ostensibly to research locative art – a cutting-edge technology now referred to as “augmented reality” – for some new magazine. In the course of this, she discovers that her real mission is to find out who the artist is working for. You see, he’s been using the GPS technology that powers locative art to track a crate that’s been moving around the world for years, passing that info on to some shadowy figure.

So once again, Bigend is curious… When Hollis looks into it, she finds out that the crate is filled with millions that were embezzled from Iraq’s reconstruction fund and the old man tracking it is a former intelligence operative who has a score to settle. He and his crack team are also being tracked by a current intelligence man who uses an addict named Milgrim to track the old man’s operatives by translating their Russian texts (rendered in a language called Volapuk). By the end, the old man and his crew follow the crate to Vancouver and fill it full of hollow point bullets containing radioactive dust. The money is now useless, Hollis is given an exclusive first hand look at it, and returns to Bigend to report on it. Once again, he’s richer for knowing, but has gained nothing else… And all that spy stuff and paranoia? Didn’t really amount to much. Sure there was spy work going on but it was pretty damn subtle, the marginal stuff that goes on at the fringes of the war on terror, not anything central to it. Not what I would expect at all from a book that was trying to make a point about post-9/11 America, in all its paranoid, angry, confused glory.

In the finale, Zero History, which apparently takes its name from the character Milgrim – a man who has no record of his existence for the last ten years (hence, zero history) – things are a bit clearer in terms of Bigend’s motivation. However, the overall story was a bit weak. With a name like Zero History, and the fact that it’s the third installment in the series, I was expecting a big send-off, something that went over and above what the first two did. It didn’t seem too much to expect; after all, the first book was a fitting commentary on cyberspace and the sort of tribalism it’s engendered. The second book upped the ante with a look at espionage and paranoia in post-9/11 America. So who wouldn’t expect that this one would deal with something incrementally bigger and more important? Alas… not so much. But I digress!

In the final installment of the trilogy, Bigend hires Hollis again, paired up with Milgrim, to investigate the origin of an elusive fashion line known as Gabriel Hounds. The reason he wants to do this is that he wants to break into the military-fashion crossover market. Not as silly as it sounds; according to the book, this has been a huge market trend since the Vietnam War and has received new life thanks to the war in Iraq. The culture of war provides life to the fashion industry – swaths of men buy outfits to look like soldiers, and fashion designers get accustomed to making army gear and end up contracting to the military itself. In the course of their investigation, they learn that one sample they are looking at is the elusive brand named Hounds. These denim products are sold using direct marketing: the dealers show up at prearranged drop points, sell off their merchandise, and then disappear. However, the other sample they come across is being produced by an arms dealer who has a big racket involving former contacts in the military and consulting worlds, and he now sees Bigend as competition. Since he’s a former military man and is into some shady stuff, things begin to get dangerous!

However, the climax is once again the same, with a build-up and then a letdown. Sure, the bad guys get beat, but no one dies and no one even gets hurt beyond a simple tasering. Some arrests are made, people hook up, and the world keeps on spinning! There’s also the point of how Bigend’s company appears to be coming apart towards the climax, but in another abortive twist, nothing happens. Bigend is simply declared “too big to fail” by the end, and his machinations about being able to see a few minutes into the future appear to have come true thanks to the work of his people. The idea of limited prescience is cool as a concept, but as with the other books, it feels like something taped over the plot itself to give it some credibility. Bigend’s main motivation was his curiosity, a contrivance to get the story moving; everything else just feels like justification. Somehow, Bigend has to benefit from all his maneuvering, and developing some kind of system whereby he can predict trends sounds like a good answer. No explanation is forthcoming as to how this works; it’s just thrown in at the end. Too bad, too, since as a premise it’s pretty cool and even kind of works with the title. Zero History: there is no future, just a constantly evolving present. He who can see just a few minutes ahead and glimpse it in formation will have unimaginable power!

As a third-act twist, Gibson does throw one curveball. Turns out the elusive Hounds designer, whom Hollis finds, is Cayce Pollard herself! Cool, but again, not much comes of it. Hollis says she won’t reveal her; Pollard says she’s not worried, she knows how to deal with Bigend so she’ll be okay when he finds her; and the thread dies! The story then shifts over to the military man and the threat he poses, and no word is given to the Hounds for the rest of the story. Odd, seeing as how that was central to the plot, but this kind of truncation is common by this installment in the story, so I wasn’t surprised. In the plus column, the story does provide some interesting thoughts on resistance to commodification and how the culture of the armed forces trickles down to the street. But seriously, all the fashion stuff gets really suffocating! After a certain point in my reading of it, I couldn’t help but notice the constant mention of clothing, apparel, jackets, etc. Intrinsic to the themes of the novel, yes, but I mean, c’mon! Felt like I was reading Sex and the City fan fiction after a while! Then there were the rather odd attempts to give Bigend character traits beyond his wealth and eccentricities. Aside from an odd fashion sense, he has a lust for the Full English breakfast that is mentioned too often in the story and serves no real purpose that I can see.

Second, there’s the usual Bigend motivation factor. His interest in the marketability of military apparel is one thing, but why would he pay through the nose to get Milgrim clean in this book? Apparently, Bigend likes him because he “notices things” while at the same time being good at going completely unnoticed. For these reasons, he’s decided to pay for rehab in a Swiss clinic and put him on his payroll. Really? All that money just to hire someone whose only marketable skill is being inconspicuous and observant? Seems more like he just wanted to bring the character back and came up with a small contrivance to fill the point. And of course, there’s Bigend himself. Unlike the previous books, where he was just a shadowy figure in the background, by this book he’s grown to the point where he’s kind of like a Bond villain. Gibson even goes as far as to say it by the ending: how his purchase of a Russian low-flying craft, the way he had it decked out, and how he has all the staff dressed like odd caricatures, is Bondian. Doesn’t make it any less weird. Oh, and the fact that he has acquired half of Iceland through a series of business deals and is flying all his staff there on that Russian craft at the end? Bondian!

Overall, what stands out about these books is their similarity to his earlier works, particularly Neuromancer. In this and other works, the story revolves around contractors who are picked up by mysterious men who work behind the scenes or have hidden agendas. But whereas Neuromancer and the other titles belonging to the “Sprawl” and “Bridge” trilogies have corporate magnates or mass media forces with clear (and often morally ambiguous) intentions, this time around the agenda of the shadowy person (i.e. Bigend) seems pretty benign and… well, pointless. I mean, why, for example, is Bigend so obsessed with uncovering all of these mysteries? What’s his motivation? Where’s the profit incentive, the threat to his bottom line? Surely a filthy-rich advertising magnate would have better things to do than spend all kinds of time and money on pet projects that have no purpose other than satisfying his curiosity. In some cases, marginal attention is given to how these things could be of use to him, but curiosity is always the main driving force. Again and again, Bigend’s actions are justified by saying that this is just the kind of guy he is: an eccentric, controlling man who wants to know what’s going on around every corner and just happens to be rich enough to make that happen.

To be fair, I get it. I mean, how else are you going to set up plots like this, which delve into the mysteries of the everyday world, not sci-fi worlds where anything’s possible because it’s total fiction and the limits of your imagination are the only constraints you have to deal with? But I would expect that a story would build to a climax, not truncate itself or end up being a letdown for the heroes, not to mention the audience. But then again, Gibson’s work is in the details; the story comes through more in the subtext than in the goings-on of the text itself. And I still love Gibson’s work and owe a rather large debt to him for the inspiration and example he’s provided over the years. So I won’t be avoiding his books in the future; in fact, I’m anxious to see what he’ll do next. Whatever else can be said about this man, he’s good at what he does and manages to always have a keen eye for the things that are just beyond the fringes of the now, the things that are likely to be the cutting-edge stuff of tomorrow. One has to wonder how much influence he himself exercises in this regard… Oh well, something for his next book maybe!