IFA 2013!

There is certainly no shortage of electronics shows happening this year! It seems I had just finished getting through all the highlights from Touch Taiwan, which happened back in August, when September rolled around and I started hearing all about IFA 2013. For those unfamiliar with this consumer electronics exhibition, IFA stands for Internationale Funkausstellung Berlin, which loosely translated means the Berlin Radio Show.

As you can tell from the name, this annual exhibit has some deep roots. Beginning in 1924, the show was intended to give electronics producers the chance to present their latest products and developments to the general public, as well as to showcase the latest in technology. From radios and cathode-ray display boxes (i.e. televisions) to personal computers and PDAs, the show has come a long way, and this year's installment promised to be a doozy as well.

Of all those who presented this year, Sony seems to have made the biggest impact. In fact, they very nearly stole the show with their new smartphones, cameras and tablets. But it was their new Xperia Z1 smartphone that really garnered attention, given all the fanfare that preceded it. Check out the video by TechRadar:


However, their new Vaio Tap 11 tablet also got quite a bit of fanfare. In addition to a Haswell chip (Core i3, i5 or i7), a six-hour battery, a full version of Windows, a camera, a stand, 128GB to 512GB of solid-state storage, and a wireless keyboard, the tablet has what is known as Near Field Communication (NFC), which now comes standard on most smartphones.

This technology allows the tablet to communicate with other devices and transfer data simply by touching them together or bringing them into close proximity. The wireless keyboard also attaches to the device via a charging port that keeps it constantly topped up, and the whole thing comes in a very thin package. Check out the video by Engadget:


Then there was the Samsung Galaxy Gear smartwatch, an exhibit which was equally anticipated and proved to be quite entertaining. Initially, the company had announced that their new smartwatch would incorporate flexible technology, which proved not to be the case. Instead, they chose to release a watch comparable to Apple's own rumored smartwatch design.

But as you can see, the end result is still pretty impressive. In addition to telling time, it also has many smartphone-like options, like being able to take pictures, record and play videos, and link to your other devices via Bluetooth. And of course, you can also phone, text, instant message and download all kinds of apps. Check out the hands-on video below:


Toshiba also made a big splash with their exhibit featuring an expanded line of tablets, notebooks and hybrids, as well as Ultra High-Definition TVs. Of note was their M9 design, a next-generation concept that merges the latest in display and networking technology – i.e. the ability to connect to the internet or your laptop, allowing you to stream video, display pictures, and play games on a big ass display!

Check out the video, and my apologies for the fact that this and the next one are in German. There were no English translations:


And then there was their Cloud TV presentation, a form of "smart TV" that merges the best of a laptop with that of a television. Basically, this means that a person can watch video-on-demand, use social utilities, network, and save their files via cloud storage, all from their couch using a handheld remote. It's like watching TV, but with all the perks of a laptop computer – one that also has a very big screen!


And then there was the HP Envy Recline, an all-in-one PC with a hinge that allows the massive touchscreen to pivot over the edge of a desk and into the user's lap. Clearly, ergonomics and adaptability were what inspired this idea, and many could not tell if it was a brilliant idea or the most enabling invention since the La-Z-Boy recliner. Still, you have to admit, it looks pretty cool:


Lenovo and Acer also attracted showgoers with their new lineups of smartphones, tablets, and notebooks. And countless more came to show off their latest wares and pimp out their own versions of the latest and greatest developments. The show ran from September 6th to 11th, and countless videos, articles and testimonials are still making their way to the fore.

For many of the products, release dates are still pending. But all those who attended managed to come away with the understanding that when it comes to computing, networking, gaming, mobile communications, and just plain lazing, the technology is moving by leaps and bounds. Soon enough, we are likely to have flexible technology available in all smart devices, and not just in the displays.

Nanofabricated materials are also likely to yield cases capable of morphing and changing shape, going from a smartwatch to a smartphone to a smart tablet. For more on that, check out this video from Epic Technology, which showcases the most anticipated gadgets for 2014. These include transparent devices, robots, curved OLED TVs, next-generation smartphones, the PS4, the Oculus Rift, and of course, Google Glass.

I think you'll agree, next year's gadgets promise to be even more impressive than this year's. Man, the future is moving fast!


Sources:
b2b.ifa-berlin.com, technologyguide.com, telegraph.co.uk, techradar.com

The Future of Firearms: The Inteliscope!

Given the many, many uses that smartphones have these days, and the many technologies being adapted to work with them, I guess it was only a matter of time before someone found a way to militarize them. And that's exactly what inventor Jason Giddings and his new company, Inteliscope, LLC, decided to do when they combined guns with smart devices to launch the Inteliscope Tactical Rifle Adapter.

Along with an iOS app and a mount that can be affixed to tactical rails, the adapter allows gun owners to mount their iPhone or iPod Touch to a firearm and use it as a sight with a heads-up display showing real-time data on their surroundings. The app also works in portrait mode, so the adapter can be affixed to the side of a firearm if needed.

Some might ask how an iPhone could be expected to improve upon a standard scope, but that's where things get particularly interesting. By offering a range of visual enhancements and features, the app essentially converts the smartphone into an integrated ballistic computer system, at a fraction of the cost of a military variant.

Added features include a 5x digital zoom, an adjustable mount that lets users peek around corners, a choice of different cross hairs, data on local prevailing winds, a GPS locator, a compass, ballistics info, and a shot timer. The attached device can even act as a mounted flashlight or strobe, but probably the most useful feature is the ability to record and play back video of each shot.
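Since the adapter's big selling point is turning the phone into a poor man's ballistic computer, here is a rough sense of the kind of arithmetic such an app has to do. This is purely an illustrative sketch – not Inteliscope's actual software – using drag-free physics and an assumed muzzle velocity for a .223-class round:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def bullet_drop(range_m: float, muzzle_velocity_ms: float) -> float:
    """Very rough drop estimate over a given range, ignoring air drag.

    Time of flight t = range / velocity; drop = 0.5 * g * t^2.
    """
    t = range_m / muzzle_velocity_ms
    return 0.5 * G * t ** 2

def holdover_mils(range_m: float, muzzle_velocity_ms: float) -> float:
    """Convert the drop into an aiming correction in milliradians."""
    drop = bullet_drop(range_m, muzzle_velocity_ms)
    return math.atan2(drop, range_m) * 1000.0  # roughly drop/range in mrad

# Example: a .223-class round (~990 m/s assumed) at 200 m
print(round(bullet_drop(200, 990), 3), "m drop")       # about 0.2 m
print(round(holdover_mils(200, 990), 2), "mrad hold")  # about 1 mrad
```

A real ballistic solver also folds in drag, wind, and angle of fire, which is exactly the data the app pulls from its sensors and its wind/ballistics tables.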

Naturally, there are some drawbacks to the Inteliscope. For example, the iPhone/iPod Touch's camera optics only support short-range targets, and using calibers larger than .223 or 5.56 mm could damage your smart device. The developers have also advised potential customers to make sure hunting with electronically enhanced devices is legal in their region.

Still, it does provide a fairly cost-effective means of giving any gun that Future Warrior look, all for the relatively cheap price of $69.99. Inteliscope is currently accepting pre-orders through its website, with adapters available for the iPhone 4, iPhone 4S, iPhone 5 and iPod Touch, and plans to begin shipping in June.

And of course, there’s a video of the system in action:


Source:
gizmag.com

The Future is Here: Self-Healing Computer Chips

It's one of the cornerstones of the coming technological revolution: machinery that can assemble, upgrade, and/or fix itself without the need for regular maintenance. Such devices would forever put an end to the hassles of repairing computers, replacing components, or having to buy new machines when something vital breaks down. And thanks to researchers at Caltech, we now have a microchip that accomplishes one of these feats: namely, fixing itself.

The chip is the work of Ali Hajimiri and a group of Caltech researchers who have managed to create an integrated circuit that, after taking severe damage, can reconfigure itself in such a way that it remains functional. This is made possible thanks to a secondary processor that jumps into action when parts of the chip fail or become compromised. The chip is also able to tweak itself on the fly, and can be programmed to prioritize either energy savings or performance speed.

In addition, the chip contains 100,000 transistors, as well as various sensors that give it the ability to monitor its overall health. The microchip combines a power amplifier – the kind of circuit that processes signal transmissions, such as those found in mobile phones – with a microprocessor capable of carrying out complex functions. This combined nature is what gives it its self-monitoring ability and ensures that it can keep working where other chips would simply stop.
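Caltech has not published code here, so treat the following as a toy software analogy only: it illustrates the general idea of health sensors plus a supervisory (secondary) processor routing work around failed blocks, not the actual circuit-level design.

```python
# Toy analogy of the self-healing idea: sensors report the health of each
# functional block, and a small supervisory loop re-routes work away from
# blocks that fail. Illustrative only; not Caltech's actual design.

from dataclasses import dataclass

@dataclass
class Block:
    name: str
    healthy: bool = True
    power_draw: float = 1.0  # arbitrary units

class SelfHealingChip:
    def __init__(self, blocks, mode="performance"):
        self.blocks = blocks
        self.mode = mode  # "performance" or "power_saving"

    def sensor_sweep(self):
        """Return the blocks the health sensors still report as working."""
        return [b for b in self.blocks if b.healthy]

    def reconfigure(self):
        """The secondary processor's job: keep running on whatever survives."""
        alive = self.sensor_sweep()
        if not alive:
            raise RuntimeError("no functional blocks left")
        if self.mode == "power_saving":
            # favour the lowest-power block that still works
            alive.sort(key=lambda b: b.power_draw)
        return alive[0]

chip = SelfHealingChip([Block("amp_a"), Block("amp_b", power_draw=0.5)])
chip.blocks[0].healthy = False   # the "laser blast"
print(chip.reconfigure().name)   # falls back to amp_b and keeps working
```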

To test the self-healing, self-monitoring attributes of their design, Hajimiri and his team blasted the chip with a laser, effectively destroying half its transistors. It only took the microchip a handful of milliseconds to deal with the loss and move on, which is an impressive feat by any standard. On top of that, the team found that a chip that wasn’t blasted by lasers was able to increase its efficiency by reducing its power consumption by half.

Granted, the chip can only fix itself if the secondary processor and at least some of the parts remain intact, but the ability to self-monitor and tweak itself is still of monumental importance. Not only can the chip monitor itself in order to provide the best possible performance, it can also ensure that it continues to provide proper output if some of its parts do break down.

Looking ahead, Hajimiri has indicated that the technology behind this self-healing circuit can be applied to any other kind of circuit. This is especially good news for people who have watched their laptops and other portable devices break down because of a hard bump. Not only would this save consumers a significant amount of money on repairs, replacements, and data recovery, it also points the way towards a future where embedded repair systems are the norm.

And who knows? Someday, when nanomachines and self-assembling structures are the norm, we can look forward to devices that can be totally smashed, crushed and shattered, but will still manage to come back together and keep working. Hmm, all this talk of secondary circuits and self-repairing robots. I can’t help but get the feeling we’ve seen this somewhere before…


Sources: Extremetech.com, inhabitat.com

The Future is Here: The Smart Bandage!

With recent advances being made in flexible electronics, researchers are finding more and more ways to adapt medical devices to the human body. These include smart tattoos, stretchable patches for organs, and even implants. But what of band-aids? Aren't they about due for an upgrade? Well, as it happens, a team of chemical engineers at Northeastern University is working towards just that.

Led by associate professor Ed Goluch, the team is working towards the development of a "smart bandage" that will not only dress wounds, but can also monitor infections and alert patients to their existence. Based around an electrochemical sensor capable of detecting Pseudomonas aeruginosa – a common bacterium that can kill if left untreated – this bandage could very well prove to be the next big step in first aid.

According to Goluch, the idea came to him while he was studying how different bacterial cells behave individually, and he and his colleagues began speaking about building other types of sensors:

I was designing sensors to be able to track individual cells, measure how they produce different toxins and compounds at the single-cell level and see how they change from one cell to another and what makes one cell more resistant to an antibiotic.

Naturally, additional research is still needed before smart band-aids of this kind will be able to detect other forms of infection. But Goluch and his colleagues are quite confident, claiming that they are already adapting their device to detect the specific molecules emitted by Staphylococcus – the bacteria responsible for staph infections.

So far, Goluch and his team have tested the system with bacterial cultures and sensors. The next step, which he hopes to begin fairly soon, will involve animal and human testing. The professor isn't sure exactly how much the sensor would cost when commercialized, but he believes "it's simple enough that you'd be able to integrate it in a large volume fairly cheap."

At this rate, I can foresee a future where all first-aid devices are small patches that are capable of gathering data on your wounds, checking your vitals, and communicating all this information directly to your PDA or tablet, your doctor, or possibly your stretchable brain implant. I tell ya, it’s coming, so keep your apps up to date!

Source: factcoexist.com

 

Nokia Morph Concept Phone

This story is a bit of an expansion on a previous post, and one which I've put off since I spent so much time talking about phones a few weeks ago. The concept is a little dated at this point, but it's still in the works and just as revolutionary. And trust me, it's quite cool to read about!

It seems that there is no shortage of new and radical ideas when it comes to the field of personal communications these days! And when it comes to personal phones, the sky's the limit. In keeping with the trend towards smaller, thinner, more ergonomic and flexible smartphones and PDAs, Nokia has another concept which is making waves.

It's known as the Morph, a new concept that showcases some revolutionary leaps being made in numerous fields. Thanks to ongoing collaboration between the Nokia Research Center (NRC) and the Cambridge Nanoscience Centre in the UK, this device incorporates numerous advances being made in thin displays, flexible housings and nanotechnological processes. Once feasible, this phone will literally be assembled at the microscopic level, leading to a phone made of "smart matter".

In addition to the revolutionary nanoscale manufacturing process, the phone will present a number of radical new possibilities for users and device manufacturers everywhere. They include:

  • Newly-enabled flexible and transparent materials that blend more seamlessly with the way we live
  • Devices that are self-cleaning and self-preserving
  • Transparent electronics that offer an entirely new aesthetic dimension
  • Built-in solar absorption that charges the device, and batteries that are smaller, longer lasting and faster to charge
  • Integrated sensors that allow people to learn more about the environment, empowering them to make better choices

In addition to the advances above, the integrated electronics shown in the Morph concept could cost less and pack more functionality into a much smaller space, even as interfaces are simplified and usability is enhanced. What's more, the development and combination of these technologies will have far-reaching benefits for communication and personal computing, revolutionizing how people do both in their daily lives.

And of course, Nokia was sure to create an animated video displaying the Morph concept in action. Take a gander:

Source: press.nokia.com, youtube.com

The Future is Here: Paper Thin, Flexible Batteries

As Yogi Berra would say, "It's like deja vu, all over again." Designed to be paper thin, flexible, and printable, this latest advancement combines several technological breakthroughs into one package. But instead of being a display device, a PDA, a smartphone, or some high-tech component, this latest piece of future tech is a simple battery. And in a world where technology is becoming increasingly smart, thin and ergonomic, it just may be the way of the future for electrical devices.

Well, simple might be a bit of a stretch. Developed by Imprint Energy, the key piece of technology here is a polymer electrolyte that allows the zinc-based battery to be recharged. Typical batteries use liquid electrolytes, which tend to form "fingers" that bridge across the interior of the battery and make recharging impossible. In this case, however, the flexible and customizable zinc anode, electrolyte, and metal-oxide cathode of the battery are printed in the form of electrochemical inks.

This in turn leads to the creation of a battery that is not only flexible and printable, but also rechargeable, safer, cheaper, and more powerful than anything currently on the market. The printing process is similar to old-fashioned silk-screening, where material is deposited in a pattern by squeezing it through a mesh over a template. While this screen printing is different from what we tend to think of nowadays as 3D printing, it is in keeping with the broader concept of printed, micro-level manufacturing, which could lead to the creation of all kinds of consumer products.

And like all technological advancements, this one occurred not in a vacuum but amidst a backdrop of cool and interesting breakthroughs. For example, numerous tech companies and start-ups are using screen printing to fabricate electronic components that will address the need for cheap and disposable electronics in the next few years.

Norway-based Thin Film Electronics is one such group, having created a prototype all-printed device that includes temperature sensors, memory, and logic, and uses Imprint Energy's new battery. In addition, smart tattoos are being created to monitor patients' vitals, blood pressure, pulse rate, and blood glucose levels. Printable "smart stickers" for time-sensitive food or medicines are being contemplated as well – patches that would be able to store details of a product's temperature, chemical exposure, freshness, and history of shock and vibration during handling.

All of this, coupled with ultra-thin devices, could lead to a future where all devices and electronics are the size of a business card, as thin as a sheet of construction paper, and can be worn on a person's body. Hey, there's a reason they call it "smart technology" 😉

Source: Extremetech.com

 

The Future is Here: Flexible Displays!

It’s like something out of a Neal Stephenson novel, or possibly movies like Minority Report or Red Planet. A display which you can not only morph and twist, but which is barely thicker than a piece of paper. Yes, some pretty impressive developments have been making the rounds in the world of displays of late, most of which are coming to an electronics store near you!

Many of these products were displayed last year at the 2011 Consumer Electronics Show in Las Vegas, where Samsung unveiled its revolutionary new AMOLED display on a number of items. AMOLED, which stands for active-matrix organic light-emitting diode, is a display technology in which organic compounds form the electroluminescent material while an active matrix of thin-film transistors controls the individual pixels.

The result is a display that can be twisted and shaped without fear of breaking it or ruining the picture quality. At CES, many of the displays came on hand-held devices, all of which were almost paper-thin and could be bent, hammered, and still maintain their picture. Check out the video below to see a few such items on display, some of which have since become commercially available, at least in some discerning sectors of the market.


But what is really exciting about this news is that it is not confined to any one company. During 2011, virtually every technology firm with a hand in portable devices, laptops and tablets had its own ideas for new-age flexible displays that utilize AMOLED technology. Nokia has its own concept, the "Kinetic Device", which it demonstrated at the Nokia World Conference in London this past September. This flexible phone is controlled not by touching the screen, but by bending and twisting the body of the device itself. Check out this video of a demo of the Kinetic running Windows Phone OS.


Megagiants Sony, 3M and Microsoft are also on board, producing videos of products under development that utilize holographic technology, bendable displays, and all kinds of neat and futuristic concepts to produce the next great leap in gaming, personal computing, and communications. After viewing the majority of them, it seems clear that the future envisioned here will involve ultra-light, transparent devices that are extremely portable and merged with the items we wear on our person in the course of everyday life.

We can also expect things like windows and panes of glass to carry displays and interfaces, allowing people to get directions and access public databases just about anywhere. Consider the following video as an example of what's in store. Not to be left behind in the speculative department, Samsung produced this video of what they felt the future of tablets would look like:


You know the old saying, the truth is stranger than fiction? Well in this case, it seems the truth is catching up to the fiction. It’s nice when that happens, even if it comes a little bit later than expected. Now if someone would just invent a damn flying car already, we’d be in business!

Source: Huffington Post Tech

Of Mechanical Minds

A few weeks back, a friend of mine, Nicola Higgins, directed me to an article about Google's new neural net. Not only did she provide me with a damn interesting read, she also challenged me to write an article about the different types of robot brains. Well, Nicola, as Barney Stinson would say: "Challenge accepted!" And I've got to say, it was a fun topic to get into.

After much research and plugging away at the lovely thing known as the internet (which was anticipated by Vannevar Bush with his proposed memory-index system, aka the Memex, nearly 70 years ago, btw), I managed to compile a list of the most historically relevant examples of mechanical minds, culminating in the development of Google's Neural Net. Here we go…

Earliest Examples:
Even in ancient times, the concept of automata and arithmetic machinery can be found in certain cultures. In the Near East, the Arab world, and as far east as China, historians have found examples of primitive machinery designed to perform one task or another. And though few specimens survive, there are even examples of machines that could perform complex mathematical calculations…

Antikythera mechanism:
Invented in ancient Greece and recovered in 1901 from a shipwreck off the island that gives it its name, the Antikythera mechanism is the world's oldest known analog calculator, built to compute the positions of the heavens for ancient astronomers. However, it was not until a century after its recovery that its true complexity and significance were fully understood. It was built in the 1st century BCE, and machines of comparable complexity would not be built again until the 14th century CE.

Although it is widely theorized that this “clock of the heavens” must have had several predecessors during the Hellenistic Period, it remains the oldest surviving analog computer in existence. After collecting all the surviving pieces, scientists were able to reconstruct the design (pictured at right), which essentially amounted to a large box of interconnecting gears.

Pascaline:
Otherwise known as the Arithmetic Machine or Pascal's Calculator, this device was invented by French mathematician Blaise Pascal in 1642 and is the first known example of a mechanized mathematical calculator. Pascal invented the device to help his father reorganize the tax revenues of the French province of Haute-Normandie, and went on to create 50 prototypes before he was satisfied.

Of those 50, nine survive and are currently on display in various European museums. In addition to giving his father a helping hand, its introduction launched the development of mechanical calculators across Europe and then the world. Its invention is also directly linked to the development of the microprocessor roughly three centuries later, which in turn is what led to the development of PCs and embedded systems.
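For the curious, the principle at the heart of the Pascaline – digit wheels that roll over and carry a one into the next column – is easy to mimic in a few lines of code. This is just an illustration of the carrying idea, not a model of the actual gearwork:

```python
def pascaline_add(digits, amount, base=10):
    """Add `amount` to a little-endian list of digit wheels, carrying as we go.

    Each wheel holds 0-9; when it rolls past 9 it resets and nudges the
    next wheel up by one, much as the Pascaline's carry mechanism did.
    """
    wheels = digits[:]
    carry = amount
    for i in range(len(wheels)):
        total = wheels[i] + carry
        wheels[i] = total % base
        carry = total // base
        if carry == 0:
            break
    return wheels

# Six wheels showing 000358, then add 467 -> 000825
print(pascaline_add([8, 5, 3, 0, 0, 0], 467))  # [5, 2, 8, 0, 0, 0]
```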

The Industrial Revolution:
With the rise of machine production, computational technology would see a number of developments. Key to all of this was the emergence of the concept of automation and the rationalization of society. Between the 18th and late 19th centuries, as every aspect of western society came to be organized and regimented around the idea of regular production, machines were needed that could handle the task of crunching numbers and storing the results.

Jacquard Loom:
Invented by Joseph Marie Jacquard, a French weaver and merchant, in 1801, the loom that bears his name was the first programmable machine in history, relying on punch cards to take instructions and turn out textiles of various patterns. Though it was based on earlier inventions by Basile Bouchon (1725), Jean Baptiste Falcon (1728) and Jacques Vaucanson (1740), it remains the best-known example of a programmable loom and the earliest machine controlled by punch cards.

Though the loom did not perform computations, its design was nevertheless an important step in the development of computer hardware. Charles Babbage would use many of its features in designing his Analytical Engine (see next example), and punch cards would remain a staple of the computing industry well into the 20th century, until the development of the microprocessor.

Analytical Engine:
A successor to his earlier "Difference Engine", this concept was proposed by English mathematician Charles Babbage. Beginning in 1822, Babbage had been designing machines to automate the production of error-free mathematical tables – a task that had long frustrated the teams of mathematicians attempting it by hand – and the Analytical Engine grew out of that work.

Though he was never able to complete construction of a finished product, due to apparent difficulties with the chief engineer and funding shortages, his proposed engine incorporated an arithmetical unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first Turing-complete design for a general-purpose computer. His various trial models (like that featured at left) are currently on display in the Science Museum in London, England.

The Birth of Modern Computing:
The early 20th century saw the rise of several new developments, many of which would play a key role in the evolution of modern computers. The use of electricity for industrial applications was foremost, with all computers from this point forward being powered by alternating and/or direct current, and some even using it to store information. At the same time, older ideas would remain in use but become refined, most notably the use of punch cards and tape to read instructions and store results.

Tabulating Machine:
The next development in computation came roughly 70 years later, when Herman Hollerith, an American statistician, developed a "tabulator" to help him process information from the 1890 US Census. In addition to being the first electromechanical computational device designed to assist in summarizing information (and later, accounting), it also went on to spawn the entire data processing industry.

Six years after the 1890 Census, Hollerith formed his own firm, the Tabulating Machine Company, which was responsible for creating machines that could tabulate information based on punch cards. In 1924, after several mergers and consolidations, Hollerith's company was renamed International Business Machines (IBM), which would go on to build the first "supercomputer" for Columbia University in 1931.

Atanasoff–Berry Computer:
Next, we have the ABC, the first electronic digital computing device in the world. Conceived in 1937, the ABC shared several characteristics with its predecessors, not least the fact that it was electrically powered and relied on punch cards to store data. Unlike its predecessors, however, it was the first machine to compute using digital symbols and the first computer to use vacuum tube technology.

These additions allowed the ABC to achieve computational speeds previously thought impossible for a mechanical computer. However, the machine was limited in that it could only solve systems of linear equations, and its punch card system of storage was deemed unreliable. Work on the machine also stopped when its inventor, John Vincent Atanasoff, was called away to assist with World War II defense assignments. Nevertheless, the machine remains an important milestone in the development of modern computers.
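To make "systems of linear equations" concrete, here is what that kind of problem looks like when solved with textbook Gaussian elimination. This is only an illustration of the task, not the ABC's actual procedure, which eliminated variables between pairs of equations and used punched cards for intermediate storage:

```python
def solve_linear_system(A, b):
    """Solve A x = b by naive Gaussian elimination with partial pivoting.

    Purely illustrative of the class of problem the ABC was built for.
    """
    n = len(A)
    # build the augmented matrix [A | b]
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # pick the row with the largest pivot to reduce rounding error
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # back-substitution
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# 2x + y = 5 and x + 3y = 10  ->  x = 1, y = 3
print(solve_linear_system([[2, 1], [1, 3]], [5, 10]))  # [1.0, 3.0]
```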

Colossus:
There's something to be said about war being the engine of innovation. The Colossus – the machine used to break German codes in the Second World War – is certainly no stranger to this rule. Due to the secrecy surrounding it, it would have little influence on subsequent computing and would not be rediscovered until the 1990s. Still, it represents a real step in the development of computing, as it relied on vacuum tube technology and punched tape to perform calculations, and proved most adept at solving complex mathematical computations.

Originally conceived by Max Newman, the British mathematician who was chiefly responsible for breaking German codes at Bletchley Park during the war, the machine was proposed as a means of combating the German Lorenz machine, which the Nazis used to encode their wireless transmissions. With the first model built in 1943, ten variants of the machine were built for the Allies before war's end, and they were instrumental in bringing down the Nazi war machine.

Harvard Mark I:
Also known as the “IBM Automatic Sequence Controlled Calculator (ASCC)”, the Mark I was an electro-mechanical computer that was devised by Howard H. Aiken, built by IBM, and officially presented to Harvard University in 1944. Due to its success at performing long, complex calculations, it inspired several successors, most of which were used by the US Navy and Air Force for the purpose of running computations.

According to IBM’s own archives, the Mark I was the first computer that could execute long computations automatically. Built within a steel frame 51 feet (16 m) long and eight feet high, and using 500 miles (800 km) of wire with three million connections, it was the industry’s largest electromechanical calculator and the largest computer of its day.

Manchester SSEM:
Nicknamed "Baby", the Manchester Small-Scale Experimental Machine (SSEM) was developed in 1948 and was the world's first computer to incorporate stored-program architecture. Whereas previous computers relied on punched tape or cards to store calculations and results, "Baby" was able to do this electronically.

Although its abilities were still modest – a 32-bit word length, a memory of just 32 words, and hardware capable only of subtraction and negation, with everything else done in software – it was still revolutionary for its time. In addition, the SSEM was closely tied to the work of Alan Turing – another British cryptographer, whose theories on the "Turing Machine" and development of the algorithm would form the basis of modern computing.
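That "subtraction and negation only" limitation is worth pausing on: every other operation had to be synthesized in software from those two instructions. Here is a deliberately tiny sketch of the trick (illustrative only – real SSEM programs were written in raw binary, not Python):

```python
# The SSEM's hardware could only negate and subtract. Everything else was
# built up in software; e.g. addition falls out of a negation plus a subtraction.

def neg(a):           # hardware-style negation
    return -a

def sub(a, b):        # hardware-style subtraction
    return a - b

def add(a, b):
    # a + b == a - (-b): one negation plus one subtraction
    return sub(a, neg(b))

def mul(a, b):
    # repeated addition (itself built from sub/neg); naive and slow,
    # which is roughly how early stored-program machines handled it
    result = 0
    for _ in range(abs(b)):
        result = add(result, a)
    return neg(result) if b < 0 else result

print(add(19, 23))   # 42
print(mul(6, 7))     # 42
```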

The Nuclear Age to the Digital Age:
With the end of World War II and the dawn of the Nuclear Age, technology once again took several explosive leaps forward. This could be seen in the realm of computer technology as well, where wartime developments and commercial applications grew by leaps and bounds. In addition to processor speeds and stored memory multiplying exponentially every few years, the overall size of computers got smaller and smaller. This, some theorized, would eventually lead to computers that were perfectly portable and smart enough to pass the "Turing Test". Imagine!

IBM 7090:
The 7090 model, released in 1959, is generally considered a second-generation computer because, unlike its predecessors – which were either electromechanical or used vacuum tubes – it relied on transistors to conduct its computations. In addition, it improved on earlier models by using a 36-bit word length and storing up to 32K (32,768) words: a modest increase in word size over the SSEM, but roughly a thousand-fold increase in storage capacity.

And of course, these improvements were mirrored in the fact that the 7090 series was also significantly smaller than previous machines, being about the size of a desk rather than an entire room. The machines were also cheaper, and proved quite popular with NASA, Caltech and MIT.

PDP-8:
In keeping with the trend towards miniaturization, 1965 saw the development of the first commercial minicomputer by the Digital Equipment Corporation (DEC). Though large by modern standards (about the size of a minibar), the PDP-8, also known as the "Straight-8", was a major improvement over previous models, and a commercial success as a result.

In addition, later models incorporated advanced concepts like real-time operating systems and preemptive multitasking. Unfortunately, early models still relied on paper tape to process information. It was not until later that the computer was upgraded to take advantage of programming languages such as FORTRAN, BASIC, and DIBOL.

Intel 4004:
Founded in California in 1968, the Intel Corporation quickly moved to the forefront of computational hardware development with the creation of the 4004 – the world's first commercially available single-chip microprocessor – in 1971. Continuing the trend towards smaller computers, the development of this processor paved the way for personal computers, desktops, and laptops.

Incorporating the then-new silicon gate technology, Intel was able to create a processor that allowed for a higher number of transistors, and therefore a faster processing speed, than ever before. On top of all that, they were able to pack it into a much smaller frame, which ensured that computers built with the new CPU would be smaller, cheaper and more ergonomic. Thereafter, Intel would be a leading designer of integrated circuits and processors, eventually supplanting even giants like IBM.

Apple I:
The 60s and 70s seemed to be a time for the birthing of future giants. Less than a decade after the first CPU was created, another upstart came along with an equally significant development. Founded in 1976 by three men – Steve Jobs, Steve Wozniak, and Ronald Wayne – Apple's first marketed product was a "personal computer" (PC) which Wozniak built himself.

One of the most distinctive features of the Apple I was the fact that it had a built-in keyboard. Competing models of the day, such as the Altair 8800, required a hardware extension to allow connection to a computer terminal or a teletypewriter machine. The company quickly took off and began introducing an upgraded version (the Apple II) just a year later. As a result, Apple I’s remain a scarce commodity and very valuable collector’s item.

The Future:
The last two decades of the 20th century saw far more than their fair share of developments. From the CPU and the PC came desktop computers, laptop computers, PDAs, tablet PCs, and networked computers. This last creation – aka the Internet – was the greatest leap by far, allowing computers all over the world to be networked together and share information. And with the exponential increase in information sharing that resulted, many believe it is only a matter of time before wearable computers, fully portable computers, and artificial intelligences become possible. Ah, which brings me to the last entry in this list…

The Google Neural Network:
From mechanical dials to vacuum tubes, from CPUs to PCs and laptops, computers have come a hell of a long way since the days of ancient Greece. Hell, even within the last century, the growth in this one area of technology has been explosive, leading some to conclude that it was just a matter of time before we created a machine capable of thinking all on its own.

Well, my friends, that day appears to have dawned. Nicola and I have already blogged about this development, so I shan't waste time going over it again. Suffice it to say, this new program – which has so far taught itself to identify pictures of cats from unlabeled images – contains the neural capacity to achieve roughly 1/1000th of what the human brain is capable of. That sounds small, but given the exponential growth in computing, it won't be long before that gap is narrowed substantially.
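For a sense of what "neural" means at the smallest scale, here is a deliberately tiny sketch of a single artificial neuron. It is nothing like the scale or training of Google's network, which chains together enormous numbers of such units, but the core arithmetic is the same:

```python
# A single artificial neuron: weight the inputs, sum them, and fire if the
# total crosses a threshold. Google's net wires together millions of these
# (with far more sophisticated training); this is just the basic arithmetic.

def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# A neuron hand-wired to behave like a logical AND gate
and_weights, and_bias = [1.0, 1.0], -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], and_weights, and_bias))
```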

Who knows what else the future will hold?  Optical computers that use not electrons but photons to move information about? Quantum computers, capable of connecting machines not only across space, but also time? Biocomputers that can be encoded directly into our bodies through our mitochondrial DNA? Oh, the possibilities…

Creating machines in the likeness of the human mind. Oh Brave New World that hath such machinery in it. Cool… yet scary!