The World of “A Song of Ice and Fire”

After reading four of the five books in the ongoing Song of Ice and Fire series, I’ve come to realize something. I really like the world George R.R. Martin has created! In fact, you might say I haven’t found myself becoming so engrossed with a fictional universe since Dune or Lord of the Rings. In those fictional universes, as with this one, one gets an incredible sense of depth, detail and characterization.

And in honor of this realization, or perhaps because I couldn’t keep track of the names, places and events alluded to in the texts, I began doing some serious research. For one, I found several lovely maps (like the one above) that speculate as to the complete geography of Martin’s world – the continents of Westeros, Essos, and Sothoryos.

And when I say complete geography, I mean just that, not the snippets that are given in the book that leave out the all important sections of Qarth, Slaver’s Bay, and the Free Cities. While these places are described in relation to the rest of the world, keeping track of them can be tricky, especially if you’re a visual learner like myself! And seeing as how much of the story involves a great deal of travel, it helps to know where characters were going, how far, and which direction they were headed.

Even before I began reading the books, I could tell that Westeros was very much inspired by the British Isles, with its tough and grizzled Northerners resembling the Scots, Picts, and Celts of old, while the Southerners were more akin to the aristocratic Normans. “The Wall” was also a clear allegory for Hadrian’s Wall, with the people on the other side being portrayed much as the Romans would have viewed the “Northern Tribes” that threatened their domain.

King’s Landing also seemed very much inspired by London, with its pomp, opulence, and extensive moral decay. Yes, just like London of the Middle Ages, it was a fine patchwork of royal finery, castles, fortifications, religious ceremony, brothels and public executions! And it even lies upon a large river, the Blackwater, which seems every bit like the Thames.

Essos also seemed very much inspired by Asia of ancient lore. Here we had the Dothraki Sea where the Dothraki horsemen roamed free and pillaged in all directions, exacting tribute and taking slaves. Can you say Mongols and/or Huns? In addition, their capital – Vaes Dothrak – seemed in every respect to be an adaptation of Karakorum, Genghis Khan’s one-time capital that was little more than a collection of temporary houses and tents. And Magister Illyrio, as if his name wasn’t enough, seemed to be every bit a Mediterranean at heart, living in a lavish seaside estate and growing fat off the trade in cheese, olives and wine.

Upon cracking the books, I found that the metaphors only went deeper. In fact, they were so thick, you could cut them with a knife! In terms of Westerosi geography and character, the different regions of the continent called to mind all kinds of archetypes and real-world examples. The Reach sounds very much like Cornwall: fertile, populous, and in the south relative to the capital. Casterly Rock and the domain of the Lannisters, though it resides in the west away from the capital, seems every bit like Kent, the wealthiest region of old where the most lucrative trade and shipping came in. And their colors, gold and red, are nothing if not symbolic of the House of Lancaster, from which Henry V and Henry VIII were descended.

And last, but certainly not least, there were the all-important cities of Qarth, Meereen, Astapor, and Yunkai: all eastern cities that inspire images of ancient Babylon, Cairo, Istanbul, Jerusalem and Antioch. With their stepped pyramids, ancient history, flamboyant sense of fashion, and lucrative slave trade, they all sounded like perfect examples of the ancient and “decadent” eastern civilizations that were described by Plato, Aristotle, and medieval scholars. The conquest of Westeros by the First Men, the Children of the Forest, the Andal and Valyrian conquests; these too call to mind real history and how waves of conquerors and settlers from the east came to populate the Old World and the New, with genocide and assimilation following in their wake and giving rise to the world that we know today.

Fans of Tolkien will no doubt be reminded of the map of Middle Earth, and for good reason. Martin’s knack for writing about space and place, and how it plays a central role in the character of its inhabitants, is comparable to Tolkien’s. And what’s more, the places have a very strong allegorical relationship to real places in real history.

In Tolkien’s world, the Shire of the Hobbits seemed very much the metaphor for pre-industrial rural England. The inhabitants are these small, quirky people who are proud of their ways, lavish in their customs, and don’t care much for the affairs of the outside world. However, when challenged, they are capable of great things and can move heaven and earth.

In that respect, Gondor to the south could be seen as London in the early 20th century – the seat of a once proud empire that is now in decline. Given its aesthetics and location relative to the dark, hostile forces coming from the East and South, it’s also comparable to Athens and Rome of Antiquity.

And it was no mistake that the battle to decide the fate of Middle Earth happened here. In many ways it resembles the Barbarian Invasions of the late Roman Empire, the Persian Wars of Classical Greece, the Mongol Invasions, or the Byzantine Empire’s war with the Turks in the High Middle Ages. In all cases, classical powers and the home of Western civilization were being threatened by eastern empires that were strange and exotic to them.

And let’s not forget Arrakis (aka. Dune) by Frank Herbert. Here too, we have a case where space and place are determining factors on their residents. And whereas several planets are described and even mapped out in the series, none were as detailed or as central as Arrakis itself. From its Deep Desert to its Shield Walls, from Arrakeen to Sietch Tabr; the planet was a highly detailed place, and the divide between Imperials and Fremen was played out in the ways both sides lived.

Whereas the Fremen were hardy folk who lived in the deep desert, took nothing for granted, and were sustained by prophecies and long-term goals, the Imperials were a lavish people, pompous and arrogant, used to doing things in accordance with the Great Convention. But far from being preachy or one-sided, Herbert showed the balance in this equation when it became clear that whereas the Imperials were governed by convention and thereby complacent, the Fremen were extremely dangerous and capable of terrible brutality when unleashed.

But as I said, other planets are also detailed, and the influence their environments have on their people is made clear. Caladan, the ancestral home of the Atreides, was covered in oceans and fertile continents, with a mild climate that many considered a paradise. As a result, according to Paul, the Atreides grew soft, and it was for this reason that they fell prey to the Emperor’s betrayal and the machinations of their Harkonnen enemies.

And speaking of the Harkonnens, the world of Giedi Prime is described on a few occasions in the series as being an industrial wasteland, a world plundered for its resources and its people reduced to a status of punitive serfdom. What better metaphor is there for a people guided by sick pleasures, exploitation, and exceptional greed? Whereas the Atreides grew soft from their pleasures, the Harkonnens grew fat, and were therefore easily slaughtered by Paul and his Fremen once their rebellion was underway.

And of course, there is Salusa Secundus, a radioactive wasteland where the Emperor’s elite Sardaukar armies are trained. On this prison planet, life is hard and bleak, and those who survive do so by being ruthless, cunning and without remorse. As a result, they are perfect recruits for the Emperor’s dreaded army, which keeps the peace through sheer force of terror.

*                       *                        *

There’s something to be said for imaginative people creating dense, richly detailed worlds, isn’t there? Not only is it engrossing and entertaining; sooner or later, you find yourself looking back at all you’ve surveyed, doing a little added research to get a greater sense of all that’s there, and realizing just how freaking expansive the world really is. And of course, you begin to see the inspiration at the heart of it all.

Yes, this is definitely the third time I’ve experienced this feeling in relation to a series. I count myself as lucky, and really hope to do the same someday. I thought I had with the whole Legacies concept, but I’m still tinkering with that one and I consider my research into what makes for a great sci-fi universe to be incomplete. Soon enough though, I shall make my greatest and final attempt, and there will be no prisoners on that day! A universe shall be born of my pen, or not… Either way, I plan to blab endlessly about it 😉

Of Mechanical Minds

A few weeks back, a friend of mine, Nicola Higgins, directed me to an article about Google’s new neural net. Not only did she provide me with a damn interesting read, she also challenged me to write an article about the different types of robot brains. Well, Nicola, as Barney Stinson would say: “Challenge Accepted!” And I’ve got to say, it was a fun topic to get into.

After much research and plugging away at the lovely thing known as the internet (which was predicted by Vannevar Bush with his proposed Memex system back in 1945, btw), I managed to compile a list of the most historically relevant examples of mechanical minds, culminating in the development of Google’s Neural Net. Here we go…

Earliest Examples:
Even in ancient times, the concept of automata and arithmetic machinery can be found in certain cultures. In the Near East, the Arab World, and as far East as China, historians have found examples of primitive machinery that was designed to perform one task or another. And even though few specimens survive, there are even examples of machines that could perform complex mathematical calculations…

Antikythera mechanism:
Invented in ancient Greece, and recovered in 1901 from a shipwreck off the island that gives it its name, the Antikythera mechanism is the world’s oldest known analog calculator, built to compute the positions of the heavens for ancient astronomers. However, it was not until a century after its recovery that its true complexity and significance were fully understood. Built in the 1st century BCE, it would not be until the 14th century CE that machines of comparable complexity would be built again.

Although it is widely theorized that this “clock of the heavens” must have had several predecessors during the Hellenistic Period, it remains the oldest surviving analog computer in existence. After collecting all the surviving pieces, scientists were able to reconstruct the design (pictured at right), which essentially amounted to a large box of interconnecting gears.
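For the curious, the core idea of an analog gear computer is simple enough to sketch: each pair of meshed gears scales a rotation by the ratio of their tooth counts, and a train of such pairs multiplies those ratios together. Here’s a little Python sketch of that principle (the tooth counts below are made up for illustration, not the mechanism’s actual gearing):

```python
from fractions import Fraction

def output_turns(input_turns, gear_pairs):
    """Rotations propagated through a train of meshed gear pairs.

    Each pair is (teeth_on_driver, teeth_on_driven); every mesh scales
    the rotation by driver/driven (direction reversals are ignored here).
    """
    ratio = Fraction(1)
    for driver, driven in gear_pairs:
        ratio *= Fraction(driver, driven)
    return input_turns * ratio

# One turn of an input crank through two hypothetical gear pairs:
result = output_turns(Fraction(1), [(64, 38), (48, 24)])  # -> 64/19 turns
```

The real mechanism chained dozens of gears this way, turning one crank rotation into the coordinated motions of its solar, lunar and calendar dials.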

Pascaline:
Otherwise known as the Arithmetic Machine or Pascal’s Calculator, this device was invented by French mathematician Blaise Pascal in 1642 and is the first known example of a mechanized mathematical calculator. Apparently, Pascal invented the device to help his father reorganize the tax revenues of the French province of Haute-Normandie, and went on to create 50 prototypes before he was satisfied.

Of those 50, nine survive and are currently on display in various European museums. In addition to giving his father a helping hand, its introduction launched the development of mechanical calculators all over Europe and then the world. Its invention also sits at the start of a chain of development that would lead, roughly three centuries later, to the microprocessor, which in turn gave us PCs and embedded systems.

The Industrial Revolution:
With the rise of machine production, computational technology would see a number of developments. Key to all of this was the emergence of the concept of automation and the rationalization of society. Between the 18th and late 19th centuries, as every aspect of western society came to be organized and regimented based on the idea of regular production, machines needed to be developed that could handle this task of crunching numbers and storing the results.

Jacquard Loom:
Invented by Joseph Marie Jacquard, a French weaver and merchant, in 1801, the Loom that bears his name is the first programmable machine in history, relying on punch cards to input orders and turn out textiles of various patterns. Though it was based on earlier inventions by Basile Bouchon (1725), Jean Baptiste Falcon (1728) and Jacques Vaucanson (1740), it remains the most well-known example of a programmable loom and the earliest machine that was controlled through punch cards.

Though the Loom did not perform computations, the design was nevertheless an important step in the development of computer hardware. Charles Babbage would use many of its features to design his Analytical Engine (see next example), and the use of punch cards would remain a staple in the computing industry well into the 20th century, until the development of the microprocessor.
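The principle at work is easy to sketch in modern terms: each card is a row of holes, each hole decides whether a given warp thread is lifted, and cycling through a chain of cards repeats the pattern row after row. A toy Python sketch (the encoding here is hypothetical, just to show the idea):

```python
# Each card is a row of hole positions: "O" = hole = hook passes = thread lifted.
# Chaining cards and cycling through them "programs" a repeating weave.
CARDS = [
    "O.O.O.O.",
    ".O.O.O.O",
]

def lifted_threads(card):
    """Indices of the warp threads raised by one card."""
    return [i for i, c in enumerate(card) if c == "O"]

def weave(cards, rows):
    """The fabric pattern produced by cycling through the card chain."""
    return [cards[i % len(cards)] for i in range(rows)]
```

Swap in a different chain of cards and the same loom turns out a different pattern; that separation of machine from instructions is exactly what made it “programmable.”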

Analytical Engine:
Though often confused with Babbage’s earlier “Difference Engine”, the Analytical Engine was a separate and far more ambitious concept proposed by English mathematician Charles Babbage. Beginning in 1822, Babbage designed the Difference Engine, a machine that would be capable of automating the process of creating error-free tables, which arose out of difficulties encountered by teams of mathematicians who were attempting to do it by hand. The Analytical Engine, conceived in the 1830s, grew out of that work.

Though he was never able to complete construction of a finished product, due to apparent difficulties with the chief engineer and funding shortages, his proposed engine incorporated an arithmetical unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first Turing-complete design for a general-purpose computer. His various trial models (like that featured at left) are currently on display in the Science Museum in London, England.
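The mathematical trick Babbage set out to mechanize back in 1822, the method of finite differences, is worth a quick sketch: once a polynomial’s starting value and its successive differences are set into the machine’s columns, every further table entry can be produced by addition alone, with no multiplication required. A short Python sketch of the idea:

```python
def difference_table(initial, n):
    """Produce n table entries of a polynomial using additions only.

    `initial` holds the value and successive finite differences at the
    starting point, one per "column", the way the Difference Engine's
    gear columns were set before cranking.
    """
    regs = list(initial)          # regs[0] is the current table value
    out = []
    for _ in range(n):
        out.append(regs[0])
        # add each difference column into the column above it
        for i in range(len(regs) - 1):
            regs[i] += regs[i + 1]
    return out

# f(x) = x^2 at x = 0, 1, 2, ...: value 0, 1st difference 1, 2nd difference 2
squares = difference_table([0, 1, 2], 6)   # -> [0, 1, 4, 9, 16, 25]
```

Each crank of the engine performed one round of those additions, so a polynomial table of any length fell out of pure repetition, exactly the kind of drudgery the human “computers” of the day got wrong.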

The Birth of Modern Computing:
The early 20th century saw the rise of several new developments, many of which would play a key role in the development of modern computers. The use of electricity for industrial applications was foremost, with all computers from this point forward being powered by Alternating and/or Direct Current and even using it to store information. At the same time, older ideas would remain in use but become refined, most notably the use of punch cards and tape to read instructions and store results.

Tabulating Machine:
The next development in computation came roughly 70 years later when Herman Hollerith, an American statistician, developed a “tabulator” to help him process information from the 1890 US Census. In addition to being the first electromechanical computational device designed to assist in summarizing information (and later, accounting), it also went on to spawn the entire data processing industry.

Six years after the 1890 Census, Hollerith formed his own company, known as the Tabulating Machine Company, which was responsible for creating machines that could tabulate info based on punch cards. In 1924, after several mergers and consolidations, Hollerith’s company was renamed International Business Machines (IBM), which would go on to build the first “supercomputer” for Columbia University in 1931.
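The tabulator’s job is easy to picture in modern terms: each punched card was one census record, and electric counters advanced once for every card that closed a given circuit. A toy Python equivalent (the card fields below are hypothetical, just to show the tallying idea):

```python
from collections import Counter

# Hypothetical card layout: each census card reduced to a (state, occupation)
# pair. The tabulator's clock-like counters simply advanced once per card
# that closed a given circuit; a Counter does the same job in software.
cards = [
    ("NY", "farmer"), ("NY", "clerk"), ("OH", "farmer"), ("NY", "farmer"),
]

by_state = Counter(state for state, _ in cards)        # cards per state
by_occupation = Counter(job for _, job in cards)       # cards per occupation
```

Run a stack of cards through and the totals simply accumulate, which is why the 1890 Census was finished years faster than the hand-tallied 1880 count.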

Atanasoff–Berry Computer:
Next, we have the ABC, the first electronic digital computing device in the world. Conceived in 1937, the ABC shared several characteristics with its predecessors, not least the fact that it was electrically powered and relied on punch cards to store data. However, unlike its predecessors, it was the first machine to use digital symbols to compute and the first computer to use vacuum tube technology.

These additions allowed the ABC to achieve computational speeds that were previously thought impossible for a mechanical computer. However, the machine was limited in that it could only solve systems of linear equations, and its punch card system of storage was deemed unreliable. Work on the machine also stopped when its inventor, John Vincent Atanasoff, was called away to assist in World War II cryptographic assignments. Nevertheless, the machine remains an important milestone in the development of modern computers.

Colossus:
There’s something to be said about war being the engine of innovation, and the Colossus, the machine used to break German codes in the Second World War, is certainly no exception to this rule. Due to the secrecy surrounding it, it would not have much of an influence on computing and would not be rediscovered until the 1990s. Still, it represents a key step in the development of computing, as it relied on vacuum tube technology and punch tape in order to perform calculations, and proved most adept at solving complex mathematical computations.

Originally conceived by Max Newman, the British mathematician who was chiefly responsible for breaking German codes at Bletchley Park during the war, the machine was a proposed means of combatting the German Lorenz machine, which the Nazis used to encipher their high-level wireless transmissions. With the first model built in 1943, ten machines were built for the Allies before war’s end, and they were instrumental in bringing down the Nazi war machine.

Harvard Mark I:
Also known as the “IBM Automatic Sequence Controlled Calculator (ASCC)”, the Mark I was an electro-mechanical computer that was devised by Howard H. Aiken, built by IBM, and officially presented to Harvard University in 1944. Due to its success at performing long, complex calculations, it inspired several successors, most of which were used by the US Navy and Air Force for the purpose of running computations.

According to IBM’s own archives, the Mark I was the first computer that could execute long computations automatically. Built within a steel frame 51 feet (16 m) long and eight feet high, and using 500 miles (800 km) of wire with three million connections, it was the industry’s largest electromechanical calculator and the largest computer of its day.

Manchester SSEM:
Nicknamed “Baby”, the Manchester Small-Scale Experimental Machine (SSEM) was developed in 1948 and was the world’s first computer to incorporate stored-program architecture. Whereas previous computers relied on punch tape or cards to store calculations and results, “Baby” was able to do this electronically.

Although its abilities were still modest – with a 32-bit word length, a memory of 32 words, and only capable of performing subtraction and negation without additional software – it was still revolutionary for its time. In addition, the SSEM also had the distinction of being closely tied to the work of Alan Turing – another British cryptographer, whose theories on the “Turing Machine” and development of the algorithm would form the basis of modern computer technology.
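That tiny instruction set is less crippling than it sounds: subtraction and negation are enough to build addition in software, since a + b is just a − (−b). A quick Python sketch of the trick:

```python
# The only arithmetic the hardware offered: negate and subtract.
def neg(x):
    return -x

def sub(a, b):
    return a - b

# Addition synthesized from the two primitives: a + b = a - (-b)
def add(a, b):
    return sub(a, neg(b))
```

Everything richer (multiplication, division) can then be stacked on top of this synthesized addition, which is the sense in which “additional software” made up for the missing circuitry.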

The Nuclear Age to the Digital Age:
With the end of World War II and the birth of the Nuclear Age, technology once again took several explosive leaps forward. This could be seen in the realm of computer technology as well, where wartime developments and commercial applications grew by leaps and bounds. In addition to processor speeds and stored memory multiplying exponentially every few years, the overall size of computers got smaller and smaller. This, some theorized, would lead to the development of computers that were perfectly portable and smart enough to pass the “Turing Test”. Imagine!

IBM 7090:
The 7090 model, which was released in 1959, is often referred to as a second-generation computer because, unlike its predecessors, which were either electromechanical or used vacuum tubes, this machine relied on transistors to conduct its computations. In addition, it was an improvement on earlier models in that it used a 36-bit word length and could store up to 32K (32,768) words – a modest increase in word length over the SSEM, but a thousand-fold increase in terms of storage capacity.

And of course, these improvements were mirrored in the fact the 7090 series were also significantly smaller than previous versions, being about the size of a desk rather than an entire room. They were also cheaper and were quite popular with NASA, Caltech and MIT.

PDP-8:
In keeping with the trend towards miniaturization, 1965 saw the development of the first commercial minicomputer by the Digital Equipment Corporation (DEC). Though large by modern standards (about the size of a minibar), the PDP-8, also known as the “Straight-8”, was a major improvement over previous models, and therefore a commercial success.

In addition, later models also incorporated advanced concepts like the Real-Time Operating System and preemptive multitasking. Unfortunately, early models still relied on paper tape in order to process information. It was not until later that the computer was upgraded to take advantage of programming languages such as FORTRAN, BASIC, and DIBOL.

Intel 4004:
Founded in California in 1968, the Intel Corporation quickly moved to the forefront of computational hardware development with the creation of the 4004 in 1971 – the world’s first commercially available microprocessor, a complete central processing unit on a single chip. Continuing the trend towards smaller computers, the development of this processor paved the way for personal computers, desktops, and laptops.

Incorporating the then-new silicon gate technology, Intel was able to create a processor that allowed for a higher number of transistors, and therefore a faster processing speed than was ever possible before. On top of all that, they were able to pack it into a much smaller frame, which ensured that computers built with the new CPU would be smaller, cheaper and more ergonomic. Thereafter, Intel would be a leading designer of integrated circuits and processors, supplanting even giants like IBM.

Apple I:
The ’60s and ’70s seemed to be a time for the birthing of future giants. Less than a decade after the first microprocessor was created, another upstart came along with an equally significant development. Named Apple and started by three men in 1976 – Steve Jobs, Steve Wozniak, and Ronald Wayne – the company’s first marketed product was a “personal computer” (PC) which Wozniak designed and built himself.

One of the most distinctive features of the Apple I was its built-in video terminal circuitry: all a user needed to add was a keyboard and an ordinary television set. Competing models of the day, such as the Altair 8800, required extra hardware to allow connection to a computer terminal or a teletypewriter machine. The company quickly took off and began introducing an upgraded version (the Apple II) just a year later. As a result, Apple I’s remain a scarce commodity and very valuable collector’s items.

The Future:
The last two decades of the 20th century also saw more than their fair share of developments. From the CPU and the PC came desktop computers, laptop computers, PDAs, tablet PCs, and networked computers. This last creation, aka. the Internet, was the greatest leap by far, allowing computers from all over the world to be networked together and share information. And with the exponential increase in information sharing that occurred as a result, many believe that it’s only a matter of time before wearable computers, fully portable computers, and artificial intelligences are possible. Ah, which brings me to the last entry in this list…

The Google Neural Network:
From mechanical dials to vacuum tubes, from CPUs to PCs and laptops, computers have come a hell of a long way since the days of Ancient Greece. Hell, even within the last century, the growth in this one area of technology has been explosive, leading some to conclude that it was just a matter of time before we created a machine that was capable of thinking all on its own.

Well, my friends, that day appears to have dawned. Nicola and I have both already blogged about this development, so I shan’t waste time going over it again. Suffice it to say, this new program, which thus far has taught itself to identify pictures of cats from unlabeled images, contains the necessary neural capacity to achieve 1/1000th of what the human brain is capable of. Sounds small, but given the exponential growth in computing, it won’t be long before that gap is narrowed substantially.
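To give a sense of what’s being scaled up: the building block of any neural net is a single artificial neuron that weighs its inputs and adjusts those weights based on its errors. Google’s network is unsupervised and unimaginably larger, but the ancestral idea is the classic perceptron, which can be sketched in a few lines of Python (here learning the humble AND function – my own toy example, not Google’s method):

```python
def step(x):
    """Threshold activation: the neuron fires (1) if its weighted sum is non-negative."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=1):
    """Classic perceptron rule: nudge each weight in proportion to the error."""
    w, b = [0, 0], 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            error = target - step(w[0] * x1 + w[1] * x2 + b)
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Teach the neuron the AND function from labelled examples
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND]
```

One neuron, three numbers to learn. Now picture millions of these wired together, learning from ten million YouTube stills, and the cat-spotting result sounds a little less like magic.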

Who knows what else the future will hold?  Optical computers that use not electrons but photons to move information about? Quantum computers, capable of connecting machines not only across space, but also time? Biocomputers that can be encoded directly into our bodies through our mitochondrial DNA? Oh, the possibilities…

Creating machines in the likeness of the human mind. Oh Brave New World that hath such machinery in it. Cool… yet scary!