Year-End Tech News: Stanene and Nanoparticle Ink

The year 2013 was also a boon for the high-tech industry, especially where electronics and additive manufacturing were concerned. In fact, several key developments took place last year that may help scientists and researchers move beyond Moore’s Law, as well as ring in a new era of manufacturing and production.

In terms of computing, developers have long feared that Moore’s Law – which states that the number of transistors on integrated circuits doubles approximately every two years – could be reaching a bottleneck. While the law (really it’s more of an observation) has certainly held true for the past forty years, it has been understood for some time that the use of silicon and copper wiring would eventually impose limits.
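
To put that doubling rule in concrete terms, here is a minimal sketch (my own illustration, not taken from any particular source) of what the observation predicts:

```python
def projected_transistors(start_count, years_elapsed, doubling_period=2.0):
    """Project a transistor count under the 'doubles every two years' rule."""
    return start_count * 2 ** (years_elapsed / doubling_period)

# Starting from a chip with one billion transistors (the figure cited below),
# the rule projects roughly 32 billion transistors a decade later.
print(projected_transistors(1e9, 10))  # ~3.2e10
```

The concern, of course, is that the physics of silicon and copper will break this curve long before the arithmetic does.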

Basically, one can only miniaturize circuits made from these materials so much before resistance becomes a problem and the components grow too fragile to be effective. Because of this, researchers have been looking for replacement materials for the silicon that makes up the roughly one billion transistors, and the hundred or so kilometers of copper wiring, that currently make up an integrated circuit.

Various materials have been proposed, such as graphene, carbyne, and even carbon nanotubes. But now, a group of researchers from Stanford University and the SLAC National Accelerator Laboratory in California is proposing another material. It’s known as stanene, a theoretical material made from a single layer of tin atoms that should conduct electricity with extreme efficiency, even at high temperatures.

Whereas graphene is stupendously conductive, the researchers at Stanford and SLAC claim that stanene should be a topological insulator. Topological insulators, due to the arrangement of their electrons, are insulating in their interior but conductive along their edges and/or surface. Being only a single atom thick, stanene should be able to conduct electricity along its edges with 100% efficiency.

The Stanford and SLAC researchers also say that stanene would not only have 100%-efficiency edges at room temperature, but, with the addition of a bit of fluorine, would retain that efficiency at temperatures of up to 100 degrees Celsius (212 Fahrenheit). This is very important if stanene is ever to be used in computer chips, which have operating temperatures of between 40 and 90 C (104 and 194 F).

Though the claim of perfect efficiency strikes some as outlandish, others admit that near-perfect efficiency is possible. And while no stanene has been fabricated yet, producing a small amount should not be especially difficult, since the necessary technology already exists. However, it will likely be a very, very long time until stanene is used in the production of computer chips.

In the realm of additive manufacturing (aka. 3-D printing), several major developments were made during 2013. This one came from Harvard University, where a materials scientist named Jennifer Lewis – using current technology – has developed new “inks” that can be used to print batteries and other electronic components.

3-D printing is already at work in the field of consumer electronics, with casings and some smaller components being made on industrial 3-D printers. However, the need for traditionally produced circuit boards and batteries limits the usefulness of 3-D printing. If the work being done by Lewis proves fruitful, it could make fabrication of a finished product considerably faster and easier.

The Harvard team is calling the material “ink,” but in fact, it’s a suspension of nanoparticles in a dense liquid medium. In the case of the battery printing ink, the team starts with a vial of deionized water and ethylene glycol and adds nanoparticles of lithium titanium oxide. The mixture is homogenized, then centrifuged to separate out any larger particles, and the battery ink is formed.

This process is possible because of the unique properties of the nanoparticle suspension. It sits in the printer as a mostly solid mass, ready to be applied, then begins to flow like a liquid when pressure is increased. Once it leaves the custom printer nozzle, it returns to a solid state. Using this behavior, Lewis’ team was able to lay down multiple layers of the ink with 100-nanometer accuracy.

The tiny batteries being printed are about 1 mm square, and could pack an even higher energy density than conventional cells thanks to their intricate construction. This approach is much more realistic than other metal-printing technologies because it happens at room temperature, with no need for microwaves, lasers, or high temperatures at all.

More importantly, it works with existing industrial 3D printers that were built to work with plastics. Because of this, battery production can be done cheaply using printers that cost on the order of a few hundred dollars, and not industrial-sized ones that can cost upwards of $1 million.

Smaller computers, and smaller, more efficient batteries. It seems that miniaturization, which some feared would be plateauing this decade, is safe for the foreseeable future! So I guess we can keep counting on our electronics getting smaller, harder to use, and easier to lose for the next few years. Yay for us!

Sources: extremetech.com, (2)

Judgement Day Update: Bionic Computing!

IBM has always been at the forefront of cutting-edge technology. Whether it was the development of computers that could guide ICBMs and rockets into space during the Cold War, or its contributions to networking and the early Internet in the 90s, the company has managed to stay on the vanguard by constantly looking ahead. So it comes as no surprise that they had plenty to say last month on the subject of the next big leap.

During a media tour of their Zurich lab in late October, IBM presented some of the company’s latest concepts. According to the company, the key to creating supermachines that are 10,000 times faster and more efficient is to build bionic computers cooled and powered by electronic blood. The end result of this plan is what is known as “Big Blue”, a proposed biocomputer that they anticipate will take 10 years to make.

Intrinsic to the design is the merger of computing and biological forms, specifically the human brain. In terms of computing, IBM is relying on the human brain as its template. Through this, the company hopes to enable processing power that’s densely packed into 3D volumes rather than spread out across flat 2D circuit boards with slow communication links.

On the biological side of things, IBM is supplying computing equipment to the Human Brain Project (HBP) – a $1.3 billion European effort that uses computers to simulate the actual workings of an entire brain. Beginning with mice and working their way up to human beings, the project’s simulations examine the inner workings of the mind all the way down to the biochemical level of the neuron.

It’s all part of what IBM calls “the cognitive systems era”, a future where computers aren’t just programmed, but also perceive what’s going on, make judgments, communicate with natural language, and learn from experience. As the description would suggest, it is closely related to artificial intelligence, and may very well prove to be the curtain raiser of the AI era.

One of the key challenges behind this work is matching the brain’s power consumption. The ability to process the subtleties of human language helped IBM’s Watson supercomputer win at “Jeopardy.” That was a high-profile step on the road to cognitive computing, but from a practical perspective, it also showed how much farther computing has to go. Whereas Watson uses 85 kilowatts of power, the human brain uses only 20 watts.
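
A quick division of the two figures quoted above shows just how wide that gap is (the numbers come from the article; the comparison itself is only a rough one):

```python
watson_power_watts = 85_000  # Watson's cited power draw: 85 kilowatts
brain_power_watts = 20       # the human brain's cited power draw: 20 watts

# Watson draws on the order of four thousand times more power than the brain
print(watson_power_watts / brain_power_watts)  # 4250.0
```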

Already, a shift has been occurring in computing, which is evident in the way engineers and technicians are now measuring computer progress. For the past few decades, the method of choice for gauging performance was operations per second, or the rate at which a machine could perform mathematical calculations.

But as computers began to require prohibitive amounts of power to perform various functions and generated far too much waste heat, a new measurement was called for. The one that emerged was expressed in operations per joule of energy consumed. In short, progress came to be measured in terms of a computer’s energy efficiency.

But now, IBM is contemplating another method for measuring progress, known as “operations per liter”. Under this new paradigm, the success of a computer will be judged by how much data-processing can be squeezed into a given volume of space. This is where the brain really serves as a source of inspiration, being the most efficient computer in terms of performance per cubic centimeter.
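
As a rough sketch of how these two yardsticks differ, the snippet below computes both for a made-up machine; every number in it is purely illustrative and not taken from IBM:

```python
# Hypothetical machine, illustrative numbers only
total_operations = 1e15     # operations completed during a benchmark run
energy_joules = 5e4         # energy consumed over that run (joules)
runtime_seconds = 100.0     # how long the run took
volume_liters = 40.0        # physical volume of the machine

ops_per_joule = total_operations / energy_joules                      # energy efficiency
ops_per_liter = (total_operations / runtime_seconds) / volume_liters  # processing density

print(f"{ops_per_joule:.3g} ops/J")            # 2e+10 ops per joule
print(f"{ops_per_liter:.3g} ops/s per liter")  # 2.5e+11 ops per second per liter
```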

As it stands, today’s computers consist of transistors and circuits laid out on flat boards that ensure plenty of contact with air that cools the chips. But as Bruno Michel – a biophysics professor and researcher in advanced thermal packaging for IBM Research – explains, this is a terribly inefficient use of space:

In a computer, processors occupy one-millionth of the volume. In a brain, it’s 40 percent. Our brain is a volumetric, dense, object.

In short, communication links between processing elements can’t keep up with data-transfer demands, and they consume too much power as well. The proposed solution is to stack and link chips into dense 3D configurations, a process which is impossible today because stacking even two chips means crippling overheating problems. That’s where the “liquid blood” comes in, at least as far as cooling is concerned.

This process is demonstrated with the company’s prototype system, Aquasar. By running a network of liquid cooling channels through the chips themselves, funneling fluid into ever-smaller tubes, the chips can be stacked together in large configurations without overheating. The liquid passes not next to each chip but through it, drawing away heat in the thousandth of a second it takes to make the trip.

In addition, IBM is developing a system called a redox flow battery that uses liquid to distribute power instead of wires. Two types of electrolyte fluid, each carrying oppositely charged ions, circulate through the system to distribute power, much in the same way that the human body provides oxygen, nutrients and cooling to the brain through the blood.

The electrolytes travel through ever-smaller tubes that are about 100 microns wide at their smallest – the width of a human hair – before handing off their power to conventional electrical wires. Flow batteries can produce between 0.5 and 3 volts, and that in turn means IBM can use the technology today to supply 1 watt of power for every square centimeter of a computer’s circuit board.

Already, the IBM Blue Gene supercomputer has been used for brain research by the Blue Brain Project at the Ecole Polytechnique Federale de Lausanne (EPFL) in Lausanne, Switzerland. Working with the HBP, their next step will be to augment a Blue Gene/Q with additional flash memory at the Swiss National Supercomputing Center.

After that, they will begin simulating the inner workings of the mouse brain, which consists of 70 million neurons. By the time they move on to human brain simulations, they plan to be using an “exascale” machine – one that performs 1 exaflops, or a quintillion floating-point operations per second. This will take place at the Juelich Supercomputing Center in northern Germany.

This is no easy challenge, mainly because the brain is so complex. In addition to 100 billion neurons and 100 trillion synapses, there are 55 different varieties of neuron, and 3,000 ways they can interconnect. That complexity is multiplied by differences that appear with 600 different diseases, genetic variation from one person to the next, and changes that go along with a person’s age and sex.
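
Dividing the figures cited in this post gives a sense of the jump from the planned mouse simulation to a human one (a back-of-the-envelope comparison only; it ignores synapse behavior, neuron varieties, and everything else that makes the problem hard):

```python
mouse_neurons = 70e6     # mouse brain neuron count, as cited above
human_neurons = 100e9    # human brain neuron count, as cited above
human_synapses = 100e12  # human brain synapse count, as cited above

print(human_neurons / mouse_neurons)   # ~1429x more neurons than the mouse model
print(human_synapses / human_neurons)  # ~1000 synapses per neuron, on average
```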

As Henry Markram, the co-director of EPFL who has worked on the Blue Brain Project for years, explains:

If you can’t experimentally map the brain, you have to predict it — the numbers of neurons, the types, where the proteins are located, how they’ll interact. We have to develop an entirely new science where we predict most of the stuff that cannot be measured.

With the Human Brain Project, researchers will use supercomputers to reproduce how brains form in a virtual vat. Then, they will see how these simulated brains respond to input signals from simulated senses and a simulated nervous system. If it works, actual brain behavior should emerge from the fundamental framework inside the computer, and where it doesn’t work, scientists will know where their knowledge falls short.

The end result of all this will also be computers that are “neuromorphic” – capable of imitating human brains, thereby ushering in an age when machines will be able to truly think, reason, and make autonomous decisions. No more supercomputers that are tall on knowledge but short on understanding. The age of artificial intelligence will be upon us. And I think we all know what will follow, don’t we?

Yep, that’s what! And may God help us all!

Sources: news.cnet.com, extremetech.com

The Future is Here: Carbon Nanotube Computers

Silicon Valley is undergoing a major shift, one which may require it to rethink its name. This is thanks in no small part to the efforts of a team based at Stanford that is seeking to create the first basic computer built around carbon nanotubes rather than silicon chips. In addition to changing how computers are built, this is likely to improve their efficiency and performance.

What’s more, this change may deal a serious blow to the law of computing known as Moore’s Law. For decades now, the exponential acceleration of technology – which has taken us from room-size computers run by punched paper cards to handheld devices with far more computing power – has depended on the ability to place more and more transistors onto an individual chip.

The result of this ongoing trend in miniaturization has been devices that are becoming smaller, more powerful, and cheaper. The law used to describe this – though “basic rule” would be a more apt description – states that the number of transistors on a chip has been doubling every 18 months or so since the dawn of the information age. This is what is known as “Moore’s Law.”

However, this trend could be coming to an end, mainly because it’s becoming increasingly difficult, expensive and inefficient to keep jamming more tiny transistors onto a chip. In addition, there are the inevitable physical limitations involved, as miniaturization can only go on for so long before it becomes unfeasible.

Carbon nanotubes, which are long chains of carbon atoms thousands of times thinner than a human hair, have the potential to be more energy-efficient than silicon components and to outperform them. Using a technique that involved “burning” off imperfect nanotubes and using an algorithm to work around the misaligned ones remaining in the matrix, the team built a very basic computer with 178 transistors that can do tasks like counting and number sorting.

In a recent release from the university, Stanford professor Subhasish Mitra said:

People have been talking about a new era of carbon nanotube electronics moving beyond silicon. But there have been few demonstrations of complete digital systems using this exciting technology. Here is the proof.

Naturally, this computer is more of a proof of concept than a working prototype. There are still a number of problems with the idea, such as the fact that nanotubes don’t always grow in straight lines and cannot always “switch off” like a regular transistor. The Stanford team’s computer is also limited in power because of the facilities they had to work with, which lacked access to industrial fabrication tools.

All told, their computer is only about as powerful as an Intel 4004, the first single-chip silicon microprocessor, released in 1971. But given time, we can expect more sophisticated designs to emerge, especially if design teams have access to top-of-the-line facilities to build prototypes.

And this research team is hardly alone in this regard. Last year, computing giant IBM managed to create its own transistors using carbon nanotubes and found that they outperformed transistors made of silicon. What’s more, these transistors measured less than ten nanometers across and were able to operate at very low voltage.

A research team from Northwestern University in Evanston, Illinois, has managed something similar. In their case, this consisted of a logic gate – the fundamental circuit that all integrated circuits are based on – built from carbon nanotube transistors that operate in a CMOS-like architecture. And much like IBM’s and the Stanford team’s transistors, it functioned at very low power levels.

What this demonstrated is that carbon nanotube transistors and other computer components are not only feasible, but are able to outperform transistors many times their size while using a fraction of the power. Hence, it is probably only a matter of time before a fully-functional computer is built – using carbon nanotube components – that will supersede silicon systems and throw Moore’s Law out the window.

Sources: news.cnet.com, (2), fastcolabs.com

The Future of Computing

Look what you started, Nicola 😉 After talking, at length, about the history of computing a few days ago, I got to thinking about the one aspect of the whole issue that I happened to leave out. Namely, the future of computing, with all the cool developments that we are likely to see in the next few decades or centuries.

Much of that came up in the course of my research, but unfortunately, after thirteen or so examples from the history of computing, I was far too tired and burnt out to get into the future of it as well. And so, I carry on today, with a brief (I promise!) list of developments that we are likely to see before the century is out… give or take. Here they are:

Chemical Computer:
Here we have a rather novel idea for the future of hardware. Otherwise known as a reaction-diffusion or “gooware” computer, this concept calls for the creation of a semi-solid chemical “soup” where data is represented by varying concentrations of chemicals and computations are performed by naturally occurring chemical reactions.

The concept is based on the Belousov-Zhabotinsky reaction, a chemical experiment which demonstrated that wave phenomena can indeed take place in chemical reactions – something that seems, at first glance, to contradict thermodynamics, which states that entropy in a closed system can only increase. In fact, the BZ experiments showed that such cyclic effects can take place without breaking the laws of nature.

Amongst theoretical models, it remains a top contender for future use for the simple reason that it is far less limiting than current microprocessors. Whereas the latter only allow data to flow in one direction at a time, a chemical computer theoretically allows data to move in all directions and dimensions, both away from and against each other.

For obvious reasons, the concept is still very much in the experimental stage and no working models have been proposed at this time.

DNA Computing:
Yet another example of an unconventional computer design, one which uses biochemistry and molecular biology, rather than silicon-based hardware, to conduct computations. The approach was first proposed and demonstrated by Leonard Adleman of the University of Southern California in 1994, who showed how DNA could be used to conduct many calculations at once.
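
Adleman’s 1994 demonstration used DNA strands to solve a small directed Hamiltonian path problem (on the order of seven cities), letting billions of molecules test candidate routes in parallel. The sketch below solves the same kind of problem by conventional brute force, just to show what was being computed; the graph is made up purely for illustration:

```python
from itertools import permutations

# A made-up directed graph on seven nodes (0..6); edges chosen for illustration.
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6),
         (0, 3), (1, 4), (2, 5), (3, 6)}

def hamiltonian_path(n, edges, start, end):
    """Brute force: try every ordering of the n nodes and return the first one
    that starts and ends correctly and only follows edges that exist."""
    for order in permutations(range(n)):
        if order[0] != start or order[-1] != end:
            continue
        if all((a, b) in edges for a, b in zip(order, order[1:])):
            return order
    return None

print(hamiltonian_path(7, edges, start=0, end=6))  # e.g. (0, 1, 2, 3, 4, 5, 6)
```

Where this sketch tries orderings one at a time, the appeal of DNA computing is that an enormous number of candidate paths can be encoded and filtered chemically all at once.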

Much like chemical computing, the potential here is to be able to build a machine that is not restricted as conventional machines are. In addition to being able to compute in multiple dimensions and directions, the DNA basis of the machine means it could be merged with other organic technology, possibly even a fully-organic AI (a la the 12 Cylon models).

While progress in this area remains modest thus far, working models have been constructed, the most notable being the one created by the Weizmann Institute of Science in Rehovot, Israel, in 2002. There, researchers unveiled a programmable molecular computing machine composed of enzymes and DNA molecules instead of silicon microchips, one that would theoretically be capable of diagnosing cancer in a cell and releasing anti-cancer drugs.

Nanocomputers:
In keeping with the tradition of making computers smaller and smaller, scientists have proposed that the next generation of computers should measure only a few nanometers in size – a nanometer being 1×10⁻⁹ meters, for those who are mathematically inclined. As part of the growing field of nanotechnology, the application is still largely theoretical and dependent on further advancements. Nevertheless, the process is a highly feasible one with many potential benefits.

Here, as with many of these other concepts, the plan is simple. By further miniaturizing the components, a computer could be shrunk to the size of a chip and implanted anywhere on the human body (e.g. “wetware” or silicate implants). This would ensure maximum portability, and, coupled with a wireless interface device (see Google Glass or VR contact lenses), it could be accessed at any time, in any place.

Optical Computers:
Compared to the previous two examples, this proposed computer is quite straightforward, even if it is radically advanced. While today’s computers rely on the movement of electrons in and out of transistors to do logic, an optical computer relies on the movement of photons.

The immediate advantage of this is clear; given that photons are much faster than electrons, computers equipped with optical components would be able to process information at significantly greater speeds. In addition, researchers contend that this can be done with less energy, making optical computing a potential green technology.

Currently, creating optical computers is largely a matter of replacing electronic components with optical equivalents, which requires an optical transistor – typically built from non-linear optical crystals. Such materials exist and experiments are already underway. However, there remains controversy as to whether the proposed benefits will pay off, or be comparable to other technologies (such as semiconductors). Only time will tell…

Quantum Computers:
Next, and perhaps most revolutionary of all, is the concept of quantum computing – a device which would rely on quantum mechanical phenomena to perform operations. Unlike digital computers, which require data to be encoded into binary digits (aka. bits), quantum computation uses quantum properties to represent data and perform calculations.

The field of quantum computing was first proposed by Richard Feynman in 1982. Much like chemical and DNA-based computer designs, a theoretical quantum computer has the ability to conduct multiple computations at the same time, mainly because its elements can exist in more than one state simultaneously.
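
To make “more than one state simultaneously” concrete, the standard textbook description of a qubit (general quantum-computing background, not tied to any particular machine) looks like this:

```latex
% A qubit is a superposition of the two basis states, and n qubits together
% span a state space of dimension 2^n -- which is why a quantum machine can,
% in a sense, work on many classical bit-patterns at once.
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1,
  \qquad \dim(\mathcal{H}_n) = 2^n .
\]
```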

The concept remains highly theoretical, but a number of experiments have been conducted in which quantum computational operations were executed on a very small number of qubits (quantum bits). Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research, aiming to develop quantum computers for both civilian and national security purposes, such as cryptanalysis.

Wearable Computers:
Last, and most feasible, is the wearable computer, which has already been developed for commercial use. Essentially, these are a class of miniature electronic devices that are worn on the bearer’s person, either under or on top of clothing. A popular version of this concept is the wrist-mounted option, where the computer is worn like a watch.

The purposes and advantages of this type of computer are obvious, especially for applications that require more complex computational support than hardware-coded logic can provide. Another advantage is the constant interaction between user and computer, as the device is woven into all the other functions of the user’s daily life. In many ways, it acts as a prosthesis, an extension of the user’s mind and body.

Pretty cool, huh? And to think that these and possibly other concepts could be feasible within our own lifetimes. Given the current rate of progress in all things high-tech, we could be looking at fully integrated computer implants, biological computers and AIs with biomechanical brains. Wouldn’t that be both amazing and potentially frightening?

Of Mechanical Minds

A few weeks back, a friend of mine, Nicola Higgins, directed me to an article about Google’s new neural net. Not only did she provide me with a damn interesting read, she also challenged me to write an article about the different types of robot brains. Well, Nicola, as Barney Stinson would say, “Challenge accepted!” And I’ve got to say, it was a fun topic to get into.

After much research and plugging away at the lovely thing known as the internet (which was anticipated by Vannevar Bush with his proposed “memex” system nearly 70 years ago, btw), I managed to compile a list of the most historically relevant examples of mechanical minds, culminating in the development of Google’s Neural Net. Here we go…

Earliest Examples:
Even in ancient times, the concept of automata and arithmetic machinery can be found in certain cultures. In the Near East, the Arab world, and as far east as China, historians have found examples of primitive machinery designed to perform one task or another. And though few specimens survive, there are even examples of machines that could perform complex mathematical calculations…

Antikythera mechanism:
Invented in ancient Greece, and recovered in 1901 from a shipwreck off the island that gives it its name, the Antikythera mechanism is the world’s oldest known analog calculator, built to calculate the positions of the heavens for ancient astronomers. However, it was not until a century after its recovery that its true complexity and significance would be fully understood. Having been built in the 1st century BCE, it would not be until the 14th century CE that machines of comparable complexity would be built again.

Although it is widely theorized that this “clock of the heavens” must have had several predecessors during the Hellenistic period, it remains the oldest surviving analog computer in existence. After collecting all the surviving pieces, scientists were able to reconstruct the design, which essentially amounted to a large box of interconnecting gears.

Pascaline:
Otherwise known as the Arithmetic Machine and Pascal’s Calculator, this device was invented by French mathematician Blaise Pascal in 1642 and is the first known example of a mechanized mathematical calculator. Apparently, Pascal invented this device to help his father reorganize the tax revenues of the French province of Haute-Normandie, and went on to create 50 prototypes before he was satisfied.

Of those 50, nine survive and are currently on display in various European museums. In addition to giving his father a helping hand, its introduction launched the development of mechanical calculators all over Europe and then the world. Its invention is also directly linked to the development of the microprocessor roughly three centuries later, which in turn led to the development of PCs and embedded systems.

The Industrial Revolution:
With the rise of machine production, computational technology would see a number of developments. Key to all of this was the emergence of the concept of automation and the rationalization of society. Between the 18th and late 19th centuries, as every aspect of western society came to be organized and regimented based on the idea of regular production, machines needed to be developed that could handle this task of crunching numbers and storing the results.

Jacquard Loom:
Invented by Joseph Marie Jacquard, a French weaver and merchant, in 1801, the loom that bears his name was the first programmable machine in history, relying on punch cards to input instructions and turn out textiles of various patterns. Though it was based on earlier inventions by Basile Bouchon (1725), Jean Baptiste Falcon (1728) and Jacques Vaucanson (1740), it remains the most well-known example of a programmable loom and the earliest machine that was controlled through punch cards.

Though the loom did not perform computations, its design was nevertheless an important step in the development of computer hardware. Charles Babbage would use many of its features to design his Analytical Engine (see next example), and the use of punch cards would remain a staple of the computing industry well into the 20th century, until the development of the microprocessor.

Analytical Engine:
Often confused with Babbage’s earlier “Difference Engine”, the Analytical Engine was proposed by English mathematician Charles Babbage. Beginning in 1822, Babbage contemplated designs for a machine that would be capable of automating the process of creating error-free tables – a project that arose out of the difficulties encountered by teams of mathematicians attempting to do it by hand – and the more ambitious Analytical Engine grew out of that work.

Though he was never able to complete construction of a finished product, due to apparent difficulties with his chief engineer and funding shortages, his proposed engine incorporated an arithmetical unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first Turing-complete design for a general-purpose computer. His various trial models are currently on display in the Science Museum in London, England.

The Birth of Modern Computing:
The early 20th century saw the rise of several new developments, many of which would play a key role in the development of modern computers. The use of electricity for industrial applications was foremost, with all computers from this point forward being powered by alternating and/or direct current, and even using it to store information. At the same time, older ideas would remain in use but become refined, most notably the use of punch cards and tape to read instructions and store results.

Tabulating Machine:
The next development in computation came roughly 70 years later, when Herman Hollerith, an American statistician, developed a “tabulator” to help him process information from the 1890 US Census. In addition to being the first electromechanical computational device designed to assist in summarizing information (and later, accounting), it also went on to spawn the entire data-processing industry.

Six years after the 1890 Census, Hollerith formed his own company, known as the Tabulating Machine Company, which was responsible for creating machines that could tabulate information based on punch cards. In 1924, after several mergers and consolidations, Hollerith’s company was renamed International Business Machines (IBM), which would go on to build the first “supercomputer” for Columbia University in 1931.

Atanasoff–Berry Computer:
Next, we have the ABC, the first electronic digital computing device in the world. Conceived in 1937, the ABC shared several characteristics with its predecessors, not least the fact that it was electrically powered and relied on punch cards to store data. Unlike its predecessors, however, it was the first machine to use digital symbols to compute, and the first computer to use vacuum tube technology.

These additions allowed the ABC to achieve computational speeds that were previously thought impossible for a mechanical computer. However, the machine was limited in that it could only solve systems of linear equations, and its punch card system of storage was deemed unreliable. Work on the machine also stopped when its inventor, John Vincent Atanasoff, was called away to assist with World War II defense assignments. Nevertheless, the machine remains an important milestone in the development of modern computers.

Colossus:
There’s something to be said about war being the engine of innovation, and the Colossus – the machine used to break German codes in the Second World War – is certainly no exception to the rule. Due to the secrecy surrounding it, it would not have much of an influence on computing, and its details would not become widely known until decades after the war. Still, it represents a step in the development of computing, as it relied on vacuum tube technology and punched tape to perform calculations, and proved most adept at solving complex mathematical computations.

Originally conceived by Max Newman, the British mathematician who was chiefly responsible for breaking German codes at Bletchley Park during the war, the machine was a proposed means of combatting the German Lorenz machine, which the Nazis used to encode their high-level wireless transmissions. With the first model built in 1943, ten variants of the machine were built for the Allies before war’s end, and they were instrumental in bringing down the Nazi war machine.

Harvard Mark I:
Also known as the “IBM Automatic Sequence Controlled Calculator (ASCC)”, the Mark I was an electro-mechanical computer that was devised by Howard H. Aiken, built by IBM, and officially presented to Harvard University in 1944. Due to its success at performing long, complex calculations, it inspired several successors, most of which were used by the US Navy and Air Force for the purpose of running computations.

According to IBM’s own archives, the Mark I was the first computer that could execute long computations automatically. Built within a steel frame 51 feet (16 m) long and eight feet high, and using 500 miles (800 km) of wire with three million connections, it was the industry’s largest electromechanical calculator and the largest computer of its day.

Manchester SSEM:
Nicknamed “Baby”, the Manchester Small-Scale Experimental Machine (SSEM) was developed in 1948 and was the world’s first computer to incorporate stored-program architecture. Whereas previous computers relied on punched tape or cards to store calculations and results, “Baby” was able to do this electronically.

Although its abilities were still modest – with a 32-bit word length, a memory of 32 words, and the capability of performing only subtraction and negation without additional software – it was still revolutionary for its time. In addition, the SSEM is closely associated with the work of Alan Turing – another British cryptographer, whose theories on the “Turing machine” and the development of the algorithm would form the basis of modern computer science – who joined the Manchester team shortly after the machine’s debut.

The Nuclear Age to the Digital Age:
With the end of World War II and the birth of the Nuclear Age, technology once again took several explosive leaps forward. This could be seen in the realm of computer technology as well, where wartime developments and commercial applications grew by leaps and bounds. In addition to processor speeds and stored memory multiplying exponentially every few years, the overall size of computers got smaller and smaller. This, some theorized, would lead to the development of computers that were perfectly portable and smart enough to pass the “Turing Test”. Imagine!

IBM 7090:
The 7090, which was released in 1959, is often referred to as a second-generation computer because, unlike its predecessors – which were either electromechanical or used vacuum tubes – this machine relied on transistors to conduct its computations. In addition, it was an improvement on earlier models in that it used a 36-bit word length and could store up to 32K (32,768) words: a modest increase in word size over the SSEM, but roughly a thousand-fold increase in storage capacity.
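
Comparing the raw storage figures quoted in this post for the SSEM and the 7090 makes the jump plain (a simple back-of-the-envelope check, nothing more):

```python
ssem_words, ssem_word_bits = 32, 32            # SSEM: 32 words of 32 bits
ibm7090_words, ibm7090_word_bits = 32_768, 36  # 7090: 32K words of 36 bits

print(ibm7090_words // ssem_words)          # 1024x more words of storage
print(ssem_words * ssem_word_bits)          # 1,024 bits in total on the SSEM
print(ibm7090_words * ibm7090_word_bits)    # 1,179,648 bits on the 7090
```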

And of course, these improvements were mirrored in the fact that the 7090 series was also significantly smaller than previous versions, being about the size of a desk rather than an entire room. The machines were also cheaper, and were quite popular with NASA, Caltech and MIT.

PDP-8:
In keeping with the trend towards miniaturization, 1965 saw the development of the first commercial minicomputer by the Digital Equipment Corporation (DEC). Though large by modern standards (about the size of a minibar), the PDP-8, also known as the “Straight-8”, was a major improvement over previous models, and became a commercial success.

In addition, later models also incorporated advanced concepts like real-time operating systems and preemptive multitasking. Unfortunately, early models still relied on paper tape in order to process information. It was not until later that the computer was upgraded to take advantage of programming languages such as FORTRAN, BASIC, and DIBOL.

Intel 4004:
Founded in California in 1968, the Intel Corporation quickly moved to the forefront of computational hardware development with the creation of the 4004, the world’s first single-chip microprocessor, in 1971. Continuing the trend towards smaller computers, the development of this internal processor paved the way for personal computers, desktops, and laptops.

Incorporating the then-new silicon gate technology, Intel was able to create a processor that allowed for a higher number of transistors, and therefore a faster processing speed, than ever possible before. On top of all that, they were able to pack it into a much smaller frame, which ensured that computers built with the new CPU would be smaller, cheaper and more ergonomic. Thereafter, Intel would be a leading designer of integrated circuits and processors, supplanting even giants like IBM.

Apple I:
The 60’s and 70’s seemed to be a time for the birthing of future giants. Less than a decade after the first microprocessor was created, another upstart came along with an equally significant development. Apple, started by three men in 1976 – Steve Jobs, Steve Wozniak, and Ronald Wayne – marketed as its first product a “personal computer” (PC) that Wozniak had built himself.

One of the most distinctive features of the Apple I was its built-in terminal circuitry, which meant a buyer needed only to add a keyboard and an inexpensive television set to have a working system. Competing models of the day, such as the Altair 8800, required a hardware extension to allow connection to a computer terminal or a teletypewriter machine. The company quickly took off and began introducing an upgraded version (the Apple II) just a year later. As a result, Apple I’s remain a scarce commodity and a very valuable collector’s item.

The Future:
The last two decades of the 20th century also saw far more than their fair share of developments. From the CPU and the PC came desktop computers, laptop computers, PDAs, tablet PCs, and networked computers. This last creation, aka. the Internet, was the greatest leap by far, allowing computers from all over the world to be networked together and share information. And with the exponential increase in information sharing that occurred as a result, many believe that it’s only a matter of time before wearable computers, fully portable computers, and artificial intelligences are possible. Ah, which brings me to the last entry in this list…

The Google Neural Network:
From mechanical dials to vacuum tubes, from CPUs to PCs and laptops, computers have come a hell of a long way since the days of Ancient Greece. Hell, even within the last century, the growth in this one area of technology has been explosive, leading some to conclude that it was just a matter of time before we created a machine that was capable of thinking all on its own.

Well, my friends, that day appears to have dawned. Nicola and I have already blogged about this development, so I shan’t waste time going over it again. Suffice it to say, this new program, which has so far taught itself to identify pictures of cats, contains the necessary neural capacity to achieve roughly 1/1000th of what the human brain is capable of. Sounds small, but given the exponential growth in computing, it won’t be long before that gap is narrowed substantially.

Who knows what else the future will hold?  Optical computers that use not electrons but photons to move information about? Quantum computers, capable of connecting machines not only across space, but also time? Biocomputers that can be encoded directly into our bodies through our mitochondrial DNA? Oh, the possibilities…

Creating machines in the likeness of the human mind. Oh Brave New World that hath such machinery in it. Cool… yet scary!