Cyberwars: “Bigger than Heartbleed”

Just months after the Heartbleed bug made waves across the internet, a new security flaw has emerged which threatens to compromise everything from major servers to connected cameras. Known as the Bash or Shellshock bug, it is a quarter-century-old vulnerability that could put everything from major internet companies and small-scale web hosts to wi-fi connected devices at risk.

This flaw allows malicious code execution within the bash shell – the command-line environment found on Linux systems and in Mac's Terminal application – letting attackers take over an operating system and access confidential information. According to the open-source software company Red Hat, bash shells run in the background of many programs, and the bug is triggered when extra commands are appended to a function definition stored in an environment variable, which bash then mistakenly executes.
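
To make the mechanism concrete, here is a minimal sketch of the widely circulated local test for the flaw, wrapped in Python. It stores a bash function definition with trailing commands in an environment variable and then launches a new bash; the path to bash and the exact output handling are assumptions that may need adjusting on a given system.

    import subprocess

    # Classic local Shellshock check: export a function-style environment
    # variable with trailing commands, then start a new bash. A vulnerable
    # bash executes the trailing "echo vulnerable" while importing the
    # variable; a patched bash ignores it.
    env = {
        "PATH": "/usr/bin:/bin",
        "x": "() { :; }; echo vulnerable",
    }
    result = subprocess.run(
        ["/bin/bash", "-c", "echo this is a test"],  # path to bash may differ
        env=env, capture_output=True, text=True,
    )
    if "vulnerable" in result.stdout:
        print("This bash appears vulnerable to Shellshock -- patch it.")
    else:
        print("No sign of the original flaw (later variants may still apply).")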

Because the bug interacts with a large percentage of software currently in use, and does so in unexpected ways, Robert Graham – an internet security expert – claims that the Bash bug is bigger than Heartbleed. As he explained it:

We’ll never be able to catalogue all the software out there that is vulnerable to the Bash bug. While the known systems (like your Web server) are patched, unknown systems remain unpatched. We see that with the Heartbleed bug: six months later, hundreds of thousands of systems remain vulnerable.

According to a report filed by Ars Technica, the vulnerability could affect Unix and Linux devices, as well as hardware running Mac OS X – particularly Mac OS X Mavericks (version 10.9.4). Graham warned that the Bash bug is also particularly dangerous for connected devices, because their software is often built using Bash scripts, which are less likely to be patched and more likely to expose the vulnerability to the outside world.

And since the bug has existed for some two and a half decades, a great number of older devices will be vulnerable and in need of patching. By contrast, the Heartbleed bug was introduced into OpenSSL a little more than two years ago, allowing random bits of memory to be retrieved from affected servers. And according to security researcher Bruce Schneier, roughly half a million websites could be vulnerable.

For the time being, the administrative solution is to apply patches to your operating system. Tod Beardsley, an engineering manager at security firm Rapid7, claims that even though the vulnerability's complexity is low, the level of danger it poses is severe. In addition, the wide range of devices affected by the bug makes it essential that system administrators apply patches immediately.

As Beardsley explained during an interview with CNET:

This vulnerability is potentially a very big deal. It's rated a 10 for severity, meaning it has maximum impact, and 'low' for complexity of exploitation — meaning it's pretty easy for attackers to use it… The affected software, Bash, is widely used so attackers can use this vulnerability to remotely execute [code on] a huge variety of devices and Web servers. Using this vulnerability, attackers can potentially take over the operating system, access confidential information, make changes etc. Anybody with systems using bash needs to deploy the patch immediately.

After conducting a scan of the internet to test for the vulnerability, Graham reported that the bug "can easily worm past firewalls and infect lots of systems", which he says would be "'game over' for large networks". Like Beardsley, Graham said the problem needs immediate attention.

In the meantime, Graham advised people to do the following:

Scan your network for things like Telnet, FTP, and old versions of Apache (masscan is extremely useful for this). Anything that responds is probably an old device needing a Bash patch. And, since most of them can’t be patched, you are likely screwed.
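
For a small home network, even a crude probe can turn up the kind of legacy gear Graham is talking about. The toy Python sketch below stands in for that first step – masscan remains the proper tool for anything larger – and simply checks a placeholder address range for the old services he lists. Anything that answers is, as he says, probably an aging device, and quite possibly one that can't be patched.

    import socket

    # A toy stand-in for the kind of sweep Graham describes; masscan is the
    # right tool for anything beyond a small home network. The address range
    # and ports below are placeholders -- adjust them to your own LAN.
    PORTS = {21: "FTP", 23: "Telnet", 80: "HTTP"}
    HOSTS = [f"192.168.1.{i}" for i in range(1, 21)]

    for host in HOSTS:
        for port, name in PORTS.items():
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.settimeout(0.5)
            try:
                if sock.connect_ex((host, port)) == 0:
                    print(f"{host}:{port} ({name}) is open -- check whether it can be patched")
            finally:
                sock.close()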

How lovely! But then again, these sorts of exploitable vulnerabilities are likely to keep popping up until we rethink how the internet is run. As the Heartbleed bug demonstrated, the problem at the heart (no pun intended) of it all is that vast swaths of the internet run on open-source software created by only a handful of people, who are paid very little (and sometimes nothing at all) for performing this crucial job.

In addition, there is a terrible lack of oversight and protection when it comes to the internet’s infrastructure. Rather than problems being addressed in an open-source manner after they emerge, there needs to be a responsible body of committed and qualified individuals who have the ability to predict problems in advance, propose possible solutions, and come up with a set of minimum standards and regulations.

Ensuring that it is an international body would also be advisable. For as the Snowden leaks demonstrated, so much of the internet is controlled by the United States. And as always, people need to maintain a degree of vigilance, and seek out information – which is being updated on a regular basis – on how they might address any possible vulnerabilities in their own software.

I can remember reading not long ago that the growing number of cyber-attacks would soon cause people to suffer from "alert fatigue". Well, those words are ringing in my ears, as it seems that a growing awareness of our internet's flaws is likely to lead to "bug fatigue" as well. Hopefully, it will also spur people to action and lead to some significant reforms in how the internet is structured and administered.

Source: cnet.com, arstechnica.com, blog.erratasec.com, securityblog.redhat.com

Computex 2014

Earlier this month, Computex 2014 wrapped up in Taipei. And while this trade show may not have all the glitz and glamor of its counterpart in Vegas (aka. the Consumer Electronics Show), it is still an important launch pad for new IT products slated for release during the second half of the year. Compared to other venues, the Taiwanese event is more formal, more business-oriented, and geared toward people who love to tinker with their PCs.

For instance, it’s an accessible platform for many Asian vendors who may not have the budget to head to Vegas. And in addition to being cheaper to set up booths and show off their products, it gives people a chance to look at devices that wouldn’t often be seen in the western parts of the world. The timing of the show is also perfect for some manufacturers. Held in June, the show provides a fantastic window into the second half of the year.

For example, big-name brands like Asus typically use the event to launch a wide range of products. This year, these included the super-slim Asus Book Chi and the multi-mode Book V, which, like the company's other products, demonstrate a flair for innovation that easily rivals the big western and Korean names. In addition, Intel – a longtime stalwart at Computex – premiered its fanless reference-design tablet running on the Llama Mountain chipset.

And much like CES, there were plenty of cool gadgets to be seen. These included a GPS tracker that can be attached to a dog collar to follow a pet's movements; a hardy new breed of Fujitsu laptop that showcases Japanese designers' aim to make gear that is both waterproof and dustproof; the Rosewill Chic-C power bank, which consists of 1,000mAh battery packs that snap together to provide additional power and even charge gadgets; and the Altek Cubic compact camera, which fits in the palm of the hand.

And then there was the Asus wireless storage, a gadget that looks like an air freshener but is actually a wireless storage device that can be paired with a smartphone using near-field communication (NFC) technology – transferring data simply by bringing the two devices into close proximity. And as always, there were plenty of cameras, display headsets, mobile devices, and wearables. Wearables were particularly ubiquitous, often in the form of big-name look-alikes.

By and large, the devices displayed this year were variations on a similar theme: wrist-mounted fitness trackers, smartwatches, and head-mounted smartglasses. The SiMEye smartglass display, for example, was clearly inspired by Google Glass, and even bears a strong resemblance to it. Though the show admittedly favored imitation over innovation, it did showcase a major trend in the computing and tech industry.

In his keynote speech, Microsoft's Nick Parker talked about the age of ubiquitous computing, and the "devices we carry on us, as opposed to with us." What this means is that we may very well be entering a PC-less age, where computing is embedded in ever-smaller devices. Eventually, it could even be miniaturized to the point where it is stitched into our clothing or accessed through contact lenses, never mind glasses or headsets!

Sources: cnet.com, computextaipei.com

The Future is Here: 3-D Printed Brain Scanner

When it comes to cutting-edge technology in recent years, two areas of development have been taking the world by storm. On the one hand, there's 3-D printing (aka. additive manufacturing), which is revolutionizing the way we fabricate things. On the other, there are brain-computer interfaces (BCIs), which are giving people the power to control machines with their minds and even transfer their thoughts.

And now, two inventors – Conor Russomanno and Joel Murphy – are looking to marry the two worlds in order to create the first open-source brain scanner that people can print off at home. Thanks to funding from DARPA, the two men printed off their first prototype headset this past week. It's known as the OpenBCI, and it's likely to make brain scanning a hell of a lot more affordable in the near future.

It includes a mini-computer that plugs into sensors on a black, skull-grabbing piece of plastic called the "Spider Claw 3000," which can be created with a 3-D printer. Assembled, it operates as a low-cost electroencephalography (EEG) brainwave scanner that connects to a PC – unlike the high-grade EEG machines used by laboratories and researchers, which cost thousands of dollars.

But over the past few years, cheaper models have been made by companies like Emotiv, which have in turn allowed a new generation of DIY brain hackers to conduct brainwave experiments. Since that time, everything from games and computer interfaces to personal tracking tools and self-directed mind enhancement has become available to regular people.

But Russomanno and Murphy felt the community needed a completely open-source platform if it was truly going to take off – hence the OpenBCI. The hardware to build the headset can be ordered from the company, while the software to run it is available through GitHub, a popular code-sharing site. Once both are procured, people will have the ability to print off, program, and adjust their own personal brain-scanning device.
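
For a sense of what "programming your own brain-scanning device" might look like at its most basic, here is a hypothetical Python sketch that reads raw samples from an Arduino-compatible board over a serial link using pyserial. The port name, baud rate and packet format are assumptions made for illustration, not OpenBCI's documented protocol.

    import serial  # pip install pyserial

    # A hypothetical sketch of reading raw samples from an Arduino-compatible
    # EEG board over a serial link. The port name, baud rate, and the idea of
    # newline-delimited readings are all assumptions for illustration; a real
    # build would use the OpenBCI software published on GitHub.
    PORT = "/dev/ttyUSB0"   # e.g. "COM3" on Windows
    BAUD = 115200

    with serial.Serial(PORT, BAUD, timeout=1) as board:
        for _ in range(250):                       # grab a short burst of data
            line = board.readline().decode(errors="ignore").strip()
            if line:
                print(line)                        # e.g. comma-separated channel values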

According to Russomanno, the greatest asset of the headset (aside from the price) is the freedom it gives to brain hackers to put their EEG probes anywhere they like:

You don’t want to limit yourself to looking to just a few places on the scalp. You can target up to 64 locations on the scalp with a maximum of 16 electrodes at a time.

As it stands, Russomanno and Murphy have built the prototype headset, but still need to raise money to build the mini-computer that it plugs into. To accomplish this, the two inventors launched a Kickstarter project to fund the development of the Arduino-compatible hardware. Last week, they reached their goal of $100,000, and expect to ship their first systems in March.

The current design of the hardware, which looks more like a hexagon-shaped circuit board than a computer, is its third incarnation. In addition to being smaller and Arduino-compatible, the third version is also programmable via Bluetooth and has a port for an SD card. When the hardware starts shipping, Russomanno expects it to kick off a new round of experimentation:

We’ve got about 300 people that have already donated to receive the board. If you’re willing to spend $300 for a piece of technology, you’re definitely going to build something with it.

One of the hallmarks of technological revolutions is the ability to make the technology scalable and more affordable. In this way, its benefits (aka. returns) are able to multiply and expand. And with the help of open-source devices like these, which people can create on 3-D printers (themselves dropping in price), the returns on mind-controlled devices are likely to grow exponentially in the coming years.

In short, the age of mind-controlled machinery may be just around the corner. Good to know they will be obeying us and not the other way around!


Sources:
wired.com, kickstarter.com

IFA 2013!

There is certainly no shortage of electronics shows happening this year! It seems that I just finished getting through all the highlights from Touch Taiwan, which happened back in August. And then September comes around and I start hearing all about IFA 2013. For those unfamiliar with this consumer electronics exhibition, IFA stands for Internationale Funkausstellung Berlin, which loosely translated means the Berlin Radio Show.

As you can tell from the name, this annual exhibit has some deep roots. Beginning in 1924, the show was intended to give electronics producers the chance to present their latest products and developments to the general public, as well as to showcase the latest in technology. From radios and cathode-ray display boxes (i.e. televisions) to personal computers and PDAs, the show has come a long way, and this year's edition promised to be a doozy as well.

Of all those who presented this year, Sony seems to have made the biggest impact. In fact, they very nearly stole the show with the presentation of their new smartphones, cameras and tablets. But it was their new Xperia Z1 smartphone that really garnered attention, given all the fanfare that preceded it. Check out the video by TechRadar:


However, their new Vaio Tap 11 tablet also got quite a bit of fanfare. In addition to a Haswell chip (Core i3, i5 or i7), a six-hour battery, full Windows connectivity, a camera, a stand, 128GB to 512GB of solid-state storage, and a wireless keyboard, the tablet has Near Field Communication (NFC), which now comes standard on many smartphones.

This technology allows the tablet to communicate with other devices and transfer data simply by touching them together or bringing them into close proximity. The wireless keyboard also attaches to the device via a battery port that allows for constant charging, and the entire thing comes in a very thin package. Check out the video by Engadget:


Then there was the Samsung Galaxy Gear smartwatch, an exhibit which was equally anticipated and proved to be quite entertaining. Initially, the company had announced that their new smartwatch would incorporate flexible technology, which proved not to be the case. Instead, they chose to release a watch comparable to Apple's own rumored smartwatch design.

But as you can see, the end result is still pretty impressive. In addition to telling time, it also has many smartphone-like options, like being able to take pictures, record and play videos, and link to your other devices via Bluetooth. And of course, you can also phone, text, instant message and download all kinds of apps. Check out the hands-on video below:


Toshiba also made a big splash with their exhibit featuring an expanded line of tablets, notebooks and hybrids, as well as Ultra High-Definition TVs. Of note was their M9 design, a next-generation concept that merges the latest in display and networking technology – i.e. the ability to connect to the internet or your laptop, allowing you to stream video, display pictures, and play games on a big-ass display!

Check out the video, and my apologies for the fact that this and the next one are in German. There were no English translations:


And then there was their Cloud TV presentation, a form of "smart TV" that merges the best of a laptop with that of a television. Basically, this means a person can watch video-on-demand, use social utilities, network, and save their files to cloud storage, all from their couch using a handheld remote. It's like watching TV, but with all the perks of a laptop computer – one that also has a very big screen!


And then there was the HP Envy Recline, an all-in-one PC with a hinge that allows the massive touchscreen to pivot over the edge of a desk and into the user's lap. Clearly, ergonomics and adaptability were what inspired this idea, and many could not tell if it was a brilliant invention or the most enabling one since the La-Z-Boy recliner. Still, you have to admit, it looks pretty cool:


Lenovo and Acer also attracted show-goers with their new lineups of smartphones, tablets, and notebooks. And countless more vendors came to show off their wares and pitch their own versions of the latest and greatest developments. The show ran from September 6th to 11th, and videos, articles and testimonials are still making their way to the fore.

For many of the products, release dates are still pending. But all those who attended managed to come away with the understanding that when it comes to computing, networking, gaming, mobile communications, and just plain lazing, the technology is moving by leaps and bounds. Soon enough, we are likely to have flexible technology available in all smart devices, and not just in the displays.

Nanofabricated materials are also likely to yield cases capable of morphing and changing shape, going from a smartwatch to a smartphone to a smart tablet. For more on that, check out this video from Epic Technology, which showcases the most anticipated gadgets for 2014. These include transparent devices, robots, OLED curved TVs, next-generation smartphones, the PS4, the Oculus Rift, and of course, Google Glass.

I think you’ll agree, next year’s gadgets are even more impressive than this year’s gadgets. Man, the future is moving fast!


Sources:
b2b.ifa-berlin.com, technologyguide.com, telegraph.co.uk, techradar.com

News from Mars: Another (Planned) Mission!

When it comes to generational milestones, those of us born since the late 70s often feel like we're lagging behind previous generations. Unlike the "Greatest Generation" or the "Baby Boomers", we weren't around to witness two World Wars, the Great Depression, the Cuban Missile Crisis, the death of JFK, Neil Armstrong's walk on the Moon, or the FLQ Crisis. For us, the highlights were things like the development of the PC, the birth of the internet, Kurt Cobain, and of course, 9/11.

But looking ahead, those of us belonging to Generation X, Generation Y, and the Millennials might just be around to witness the greatest event in human history to date – a manned mission to Mars! And while NASA is busy planning a mission for the 2030s, a number of private groups are looking to make a mission happen sooner. One such group is a team of UK scientists at Imperial College London who are working to mount a three-person mission to Mars.

The planned mission consists of two spacecraft. The first is a Martian lander, equipped with a heat shield, that will carry the crew into Earth orbit at launch. The second is a habitat vehicle, the craft the crew would live in during the voyage. The habitat vehicle would consist of three floors, measuring around 30 feet (10 m) tall and 13 feet (4 m) in diameter.

The astronauts would be situated in the lander during takeoff, and would move to the habitat once the dual-craft reaches Earth orbit. With the astronauts safely in the habitat, a rocket would send the dual-craft off on its journey to Mars – a trip of about nine months, somewhat less than the roughly 300 days most projections say it would take.

Once in space, the dual-craft would split apart but remain connected by a 60 meter (200 foot) tether. Thrusters on both vehicles would then spin them around a central point, creating artificial gravity in the habitat similar to Earth's. Not only would this help the astronauts feel at home for the better part of a lonely year, it would also reduce the bone and muscle atrophy associated with weightlessness.
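
For a rough sense of the numbers, the required spin rate follows from the centripetal acceleration relation a = ω²r. The short Python sketch below assumes the two craft have similar masses, so that each rides about 30 metres from the centre of the 60-metre tether – an assumption on my part, since the proposal only specifies the tether length.

    import math

    # Back-of-the-envelope spin rate for ~1 g of artificial gravity on the
    # tethered pair. Assumes the two craft are of similar mass, so each sits
    # about 30 m from the centre of the 60 m tether (an assumption; the
    # proposal only specifies the tether length).
    g = 9.81        # target acceleration, m/s^2
    radius = 30.0   # distance from the spin axis, m

    omega = math.sqrt(g / radius)         # from a = omega^2 * r
    rpm = omega * 60 / (2 * math.pi)

    print(f"{omega:.2f} rad/s, or about {rpm:.1f} revolutions per minute")
    # roughly 0.57 rad/s -- a gentle spin of ~5.5 rpm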

The craft would be well-stocked with medicine to ensure that the crew remains in good health during the nine-month transit. Superconducting magnets, as well as water flowing through the shell of the craft, would be employed to help reduce both cosmic and solar radiation. And once the dual-craft reaches Mars, it would reel the tether back in, the crew would move back into the lander, detach from the habitat, and descend to the Martian surface.

This mission would also involve sending a habitat and a return vehicle to Mars before the astronauts arrived, so the crew would have shelter upon landing as well as a way to get home. The crew would spend anywhere from two months to two years on Mars, depending on the goals of the mission and the distance between Mars and Earth. On the way back, the mission would dock with the ISS, and the crew would take a craft back to Earth from there.

What's especially interesting about this proposed mission is that each stage of it has been proven to work in an individual capacity. What's more, the concept of using water as a form of radiation shielding is far more attractive than Inspiration Mars' proposal, which calls for using the astronauts' own fecal matter!

Unfortunately, no real timetable or price tag has been proposed for this mission yet. However, considering that every individual step has been proven to work on its own, the overall journey could work. In the meantime, all we post-Baby Boomers can do is wait and hope we live to see it! I for one am getting sick of hearing Boomers talk about where they were when Apollo 11 happened while having nothing comparable to say!

And be sure to enjoy this video of the Imperial College London team discussing the possibilities of a Mars mission in our lifetime:


Sources:
bbc.co.uk, extremetech.com

AR Glasses Restore Sight to the Blind

As I'm sure most readers are aware, blindness comes in many forms. It's not simply a matter of the afflicted not being able to see; there are many degrees of blindness, and in most cases depth perception is limited. But as it turns out, researchers at the University of Yamanashi in Japan have found a way to improve depth perception for the visually challenged using simple augmented reality glasses.

The process involved a pair of Wrap 920 ARs, an off-the-shelf brand of glasses that allows the wearer to interface with a PC, watch video or surf the internet, all while staying mobile and carrying out daily chores. The team recorded the scene as seen from the angle of each of the wearer's eyes, processed the images with a quad-core Windows 7 machine, and then merged them as they would appear to the healthy eye.

Essentially, the glasses perform the task of rendering a scene as it would be seen through "binocular vision" – i.e. in 3D. By taking two images, merging them together, and defining what is near and what is far by their relative resolution, they were able to free the wearer's brain from having to do it for them. This in turn allowed subjects to interact more freely and effectively with the test environment: a dinner table with chopsticks and food in small bowls, arguably a tricky meal to navigate!
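
For readers curious about the underlying idea, the sketch below shows generic stereo block matching with OpenCV, a textbook way of recovering depth from two offset views. It is not the Yamanashi team's actual pipeline, and the image file names here are placeholders.

    import cv2

    # A generic sketch of stereo depth estimation -- the broad technique behind
    # recovering "what is near and what is far" from two eye views. This is
    # plain OpenCV block matching, not the Yamanashi team's actual pipeline,
    # and the image file names are placeholders.
    left = cv2.imread("left_eye.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right_eye.png", cv2.IMREAD_GRAYSCALE)

    # Nearer objects shift more between the two views, so larger disparity
    # values correspond to closer objects.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right)

    # Normalise for display: bright = close, dark = far.
    vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
    cv2.imwrite("depth_map.png", vis)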

Naturally, the technology is still in its infancy. For one, the processed imagery has a fairly low resolution and frame rate, and the glasses must be connected to a laptop. Newer hardware will provide better resolution, higher frame rates, and a larger viewport. In addition, mobile computing with smartphones and tablets ought to provide a greater degree of portability, to the point where all the required technology sits in the glasses themselves.

Looking ahead, it is possible that a form of AR glasses could be specially programmed to deliver this kind of vision correction. The glasses would then act as a prosthesis, giving people with visual impairment an increased level of visual acuity and bringing them one step closer to vision recovery. And since this is also a development that blurs the lines between humans and computers even further, it's arguably another step closer to transhumanism!

Source: extremetech.com

Of Mechanical Minds

A few weeks back, a friend of mine, Nicola Higgins, directed me to an article about Google's new neural net. Not only did she provide me with a damn interesting read, she also challenged me to write an article about the different types of robot brains. Well, Nicola, as Barney Stinson would say: "Challenge accepted!" And I've got to say, it was a fun topic to get into.

After much research and plugging away at the lovely thing known as the internet (which, by the way, Vannevar Bush anticipated with his proposed Memex system nearly 70 years ago), I managed to compile a list of the most historically relevant examples of mechanical minds, culminating in the development of Google's neural net. Here we go…

Earliest Examples:
Even in ancient times, the concept of automata and arithmetic machinery can be found in certain cultures. In the Near East, the Arab world, and as far east as China, historians have found examples of primitive machinery designed to perform one task or another. And though few specimens survive, there are even examples of machines that could perform complex mathematical calculations…

Antikythera mechanism:
Invented in ancient Greece and recovered in 1901 from a shipwreck off the island that gives it its name, the Antikythera mechanism is the world's oldest known analog calculator, built to calculate the positions of the heavens for ancient astronomers. However, it was not until a century after its recovery that its true complexity and significance were fully understood. Built in the 1st century BCE, it would not be matched in complexity until the 14th century CE.

Although it is widely theorized that this "clock of the heavens" must have had several predecessors during the Hellenistic period, it remains the oldest surviving analog computer in existence. After collecting all the surviving pieces, scientists were able to reconstruct the design, which essentially amounted to a large box of interconnecting gears.

Pascaline:
Otherwise known as the Arithmetic Machine and Pascal's Calculator, this device was invented by French mathematician Blaise Pascal in 1642 and is the first known example of a mechanized mathematical calculator. Pascal invented the device to help his father reorganize the tax revenues of the French province of Haute-Normandie, and went on to create 50 prototypes before he was satisfied.

Of those 50, nine survive and are currently on display in various European museums. In addition to giving his father a helping hand, its introduction launched the development of mechanical calculators across Europe and then the world. Its invention is also directly linked to the development of the microprocessor roughly three centuries later, which in turn led to the development of PCs and embedded systems.

The Industrial Revolution:
With the rise of machine production, computational technology would see a number of developments. Key to all of this was the emergence of the concept of automation and the rationalization of society. Between the 18th and late 19th centuries, as every aspect of western society came to be organized and regimented based on the idea of regular production, machines needed to be developed that could handle this task of crunching numbers and storing the results.

Jacquard Loom:
Invented in 1801 by Joseph Marie Jacquard, a French weaver and merchant, the loom that bears his name is the first programmable machine in history, relying on punch cards to input instructions and turn out textiles of various patterns. Though it was based on earlier inventions by Basile Bouchon (1725), Jean Baptiste Falcon (1728) and Jacques Vaucanson (1740), it remains the best-known example of a programmable loom and the earliest machine controlled through punch cards.

Though the loom did not perform computations, its design was nevertheless an important step in the development of computer hardware. Charles Babbage would borrow many of its features for his Analytical Engine (see next example), and punch cards would remain a staple of the computing industry well into the 20th century, until the development of the microprocessor.

Analytical Engine:
Not to be confused with his earlier "Difference Engine", this concept was proposed by English mathematician Charles Babbage. Beginning in 1822, Babbage contemplated designs for machines that could automate the production of error-free mathematical tables, a project born of the difficulties encountered by teams of mathematicians attempting to do the work by hand; the more ambitious Analytical Engine grew out of that effort in the 1830s.

Though he was never able to complete construction of a finished product, due to apparent difficulties with his chief engineer and funding shortages, his proposed engine incorporated an arithmetical unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first Turing-complete design for a general-purpose computer. His various trial models are currently on display in the Science Museum in London, England.

The Birth of Modern Computing:
The early 20th century saw several new developments, many of which would play a key role in the evolution of modern computers. The use of electricity for industrial applications was foremost among them, with all computers from this point forward being powered by alternating and/or direct current, and some even using it to store information. At the same time, older ideas would remain in use but become more refined, most notably the use of punch cards and tape to read instructions and store results.

Tabulating Machine:
The next development in computation came roughly 70 years later, when Herman Hollerith, an American statistician, developed a "tabulator" to help process information from the 1890 US Census. In addition to being the first electromechanical computational device designed to assist in summarizing information (and later, accounting), it went on to spawn the entire data-processing industry.

Six years after the 1890 Census, Hollerith formed his own company, the Tabulating Machine Company, which was responsible for creating machines that could tabulate information based on punch cards. In 1924, after several mergers and consolidations, Hollerith's company was renamed International Business Machines (IBM), which would go on to build the first "supercomputer" for Columbia University in 1931.

Atanasoff–Berry Computer:
Next, we have the ABC, the first electronic digital computing device in the world. Conceived in 1937, the ABC shared several characteristics with its predecessors, not the least of which was that it was electrically powered and relied on punch cards to store data. Unlike its predecessors, however, it was the first machine to use digital symbols to compute and the first computer to use vacuum tube technology.

These additions allowed the ABC to achieve computational speeds previously thought impossible for a mechanical computer. However, the machine was limited in that it could only solve systems of linear equations, and its punch card system of storage was deemed unreliable. Work on the machine also stopped when its inventor, John Vincent Atanasoff, was called away to assist with World War II cryptographic assignments. Nevertheless, the machine remains an important milestone in the development of modern computers.

Colossus:
There's something to be said about war being the engine of innovation, and the Colossus is certainly no exception: it was the machine used to break German codes in the Second World War. Due to the secrecy surrounding it, it would not have much influence on computing, and its details would not be rediscovered until the 1990s. Still, it represents a step in the development of computing, as it relied on vacuum tube technology and punched tape to perform calculations, and proved most adept at solving complex mathematical computations.

Originally conceived by Max Newman, the British mathematician chiefly responsible for breaking German codes at Bletchley Park during the war, the machine was proposed as a means of combatting the German Lorenz machine, which the Nazis used to encode their high-level wireless transmissions. The first model was built in 1943, and ten variants of the machine were built for the Allies before war's end, proving instrumental in bringing down the Nazi war machine.

Harvard Mark I:
Also known as the “IBM Automatic Sequence Controlled Calculator (ASCC)”, the Mark I was an electro-mechanical computer that was devised by Howard H. Aiken, built by IBM, and officially presented to Harvard University in 1944. Due to its success at performing long, complex calculations, it inspired several successors, most of which were used by the US Navy and Air Force for the purpose of running computations.

According to IBM’s own archives, the Mark I was the first computer that could execute long computations automatically. Built within a steel frame 51 feet (16 m) long and eight feet high, and using 500 miles (800 km) of wire with three million connections, it was the industry’s largest electromechanical calculator and the largest computer of its day.

Manchester SSEM:
Nicknamed "Baby", the Manchester Small-Scale Experimental Machine (SSEM) was developed in 1948 and was the world's first computer to incorporate a stored-program architecture. Whereas previous computers relied on punched tape or cards to store calculations and results, "Baby" was able to do this electronically.

Although its abilities were still modest – with a 32-bit word length, a memory of 32 words, and only capable of performing subtraction and negation without additional software – it was still revolutionary for its time. In addition, the SSEM drew on the theoretical work of Alan Turing – another British cryptographer, whose concept of the "Turing machine" and formalization of the algorithm would form the basis of modern computer science.

The Nuclear Age to the Digital Age:
With the end of World War II and the birth of the Nuclear Age, technology once again took several explosive leaps forward. This could be seen in the realm of computer technology as well, where wartime developments and commercial applications grew by leaps and bounds. In addition to processor speeds and stored memory multiplying exponentially every few years, the overall size of computers got smaller and smaller. This, some theorized, would eventually lead to computers that were perfectly portable and smart enough to pass the "Turing Test". Imagine!

IBM 7090:
The 7090 model, which was released in 1959, is often referred to as a second-generation computer because, unlike its predecessors, which were either electromechanical or used vacuum tubes, this machine relied on transistors to conduct its computations. In addition, it improved on earlier models by using a 36-bit word length and storing up to 32K (32,768) words – a modest increase in processing over the SSEM, but a roughly thousand-fold increase in storage capacity.

And of course, these improvements were mirrored in the fact that the 7090 series was also significantly smaller than previous versions, being far more compact than the room-filling vacuum tube machines that preceded it. It was also cheaper and proved quite popular with NASA, Caltech and MIT.

PDP-8:
In keeping with the trend towards miniaturization, 1965 saw the development of the first commercial minicomputer by the Digital Equipment Corporation (DEC). Though large by modern standards (about the size of a minibar), the PDP-8, also known as the "Straight-8", was a major improvement over previous models, and therefore a commercial success.

In addition, later models incorporated advanced concepts like real-time operating systems and preemptive multitasking. Early models, however, still relied on paper tape to process information; it was not until later that the computer was upgraded to take advantage of programming languages such as FORTRAN, BASIC, and DIBOL.

Intel 4004:
Founded in California in 1968, the Intel Corporation quickly moved to the forefront of computational hardware development with the creation of the 4004 – the world's first commercially available single-chip microprocessor – in 1971. Continuing the trend towards smaller computers, the development of this internal processor paved the way for personal computers, desktops, and laptops.

Incorporating the then-new silicon gate technology, Intel was able to create a processor that allowed for a higher number of transistors, and therefore a faster processing speed, than ever before. On top of that, they were able to pack it into a much smaller frame, ensuring that computers built with the new CPU would be smaller, cheaper and more ergonomic. Thereafter, Intel would become a leading designer of integrated circuits and processors, supplanting even giants like IBM.

Apple I:
The '60s and '70s seemed to be a time for the birthing of future giants. Less than a decade after the first CPU was created, another upstart came along with an equally significant development. Apple, started in 1976 by three men – Steve Jobs, Steve Wozniak, and Ronald Wayne – marketed as its first product a "personal computer" (PC) that Wozniak had built himself.

One of the most distinctive features of the Apple I was the fact that it had a built-in keyboard. Competing models of the day, such as the Altair 8800, required a hardware extension to allow connection to a computer terminal or a teletypewriter machine. The company quickly took off and introduced an upgraded version (the Apple II) just a year later. As a result, the Apple I remains a scarce commodity and a very valuable collector's item.

The Future:
The last two decades of the 20th century saw far more than their fair share of developments. From the CPU and the PC came desktop computers, laptop computers, PDAs, tablet PCs, and networked computers. This last creation, aka. the Internet, was the greatest leap by far, allowing computers from all over the world to be networked together and share information. And with the exponential increase in information sharing that occurred as a result, many believe that it's only a matter of time before wearable computers, fully portable computers, and artificial intelligences become possible. Ah, which brings me to the last entry in this list…

The Google Neural Network:
From mechanical dials to vacuum tubes, from CPUs to PCs and laptops, computers have come a hell of a long way since the days of ancient Greece. Hell, even within the last century, the growth in this one area of technology has been explosive, leading some to conclude that it is just a matter of time before we create a machine capable of thinking all on its own.

Well, my friends, that day appears to have dawned. Nicola and I have already blogged about this development, so I shan't waste time going over it again. Suffice it to say, this new program – which has so far taught itself to identify pictures of cats in unlabeled images – contains the neural capacity to achieve roughly 1/1000th of what the human brain is capable of. That sounds small, but given the exponential growth in computing, it won't be long before that gap is narrowed substantially.

Who knows what else the future will hold?  Optical computers that use not electrons but photons to move information about? Quantum computers, capable of connecting machines not only across space, but also time? Biocomputers that can be encoded directly into our bodies through our mitochondrial DNA? Oh, the possibilities…

Creating machines in the likeness of the human mind. Oh Brave New World that hath such machinery in it. Cool… yet scary!