Build Your Own Electric Car

It’s official: all of Tesla’s electric car technology is now available for anyone to use. Yes, after hinting last weekend that he might be willing to do so, Musk announced this week that his company’s patents are now open source. In a blog post on the Tesla website, Musk explained his reasoning. Initially, Musk wrote, Tesla created patents out of concern that large car companies would copy the company’s electric vehicle technology and squash the smaller start-up.

This was certainly a reasonable concern, as auto giants like General Motors, Toyota, and Volkswagen have far more capital and a much larger share of the market than his start-up did. But in time, Musk demonstrated that there was a viable market for affordable, clean-running vehicles. To many, this arsenal of patents appeared to be the only thing preventing the larger companies from crushing his start-up before it became a viable competitor.

But that turned out to be an unnecessary worry, as carmakers have by and large decided to downplay the viability and relevance of EV technology while continuing to focus on gasoline-powered vehicles. At this point, Musk thinks that opening things up to other developers will speed up electric car development. And after all, there’s something to be said for competition driving innovation.

As Musk stated on his blog:

Given that annual new vehicle production is approaching 100 million per year and the global fleet is approximately 2 billion cars, it is impossible for Tesla to build electric cars fast enough to address the carbon crisis. By the same token, it means the market is enormous. Our true competition is not the small trickle of non-Tesla electric cars being produced, but rather the enormous flood of gasoline cars pouring out of the world’s factories every day…

We believe that Tesla, other companies making electric cars, and the world would all benefit from a common, rapidly-evolving technology platform.

And the move should come as no surprise. As the Hyperloop demonstrated, Musk is not above making grandiose gestures and allowing others to run with ideas he knows will be profitable. And as Musk himself pointed out in a webcast after the announcement, his sister company SpaceX – which deals with the development of reusable space transports – holds virtually no patents.

In addition, Musk stated that he thinks patents are a “weak thing” for companies. He also suggested that opening up patents for Tesla’s supercharging technology (which essentially allows for super-fast EV charging) could help create a common industry platform. But regardless of Musk’s own take on things, one thing remains clear: Tesla Motors needs competitors, and it needs them now.

As it stands, auto emissions account for a large and growing share of greenhouse gas emissions. For decades now, the technology has been in development and the principles have all been known. However, whether it has been due to denial, intransigence, complacency, or all of the above, no major moves have been made to effect a transition in the auto industry towards non-fossil fuel-using cars.

Many would cite the lack of infrastructure in place to support the wide-scale use of electric cars. But major cities and even entire nations are making changes in that direction with the adoption of electric vehicle networks. These include regular stations along the Trans-Canada Highway, the ChargePoint network running from Melbourne to Brisbane, Germany’s many major-city networks, and city- and state-wide EV charging stations in the US.

Also, as the technology is adopted and developed further, the incentive to expand electric vehicle networks will be a no-brainer. And given that we now live in a post-peak-oil economy, any move towards fossil fuel-free transportation should be seen as an absolutely necessary one.

Sources: fastcoexist.com, fool.com

Electronic Entertainment Expo 2014

This past week, the Electronic Entertainment Expo (commonly referred to as E3) kicked off. This annual trade fair, which is presented by the Entertainment Software Association (ESA), is used by video game publishers, accessory manufacturers, and members of the computing industry to present their upcoming games and game-related merchandise. The festivities wrapped up this Friday, and were the source of some controversy and much speculation.

For starters, the annual show opened amidst concerns that the dent caused by Massively Multiplayer Online Role-Playing Games (MMORPGs) and online gaming communities would start to show. And this did seem to be the case. The annual Los Angeles show normally sets the expectations for the rest of the year in video games – and that certainly happened – but E3 2014 was mainly about clearing the runway for next year.

Nowhere was this clearer than with Nintendo, which was the source of quite a bit of buzz when the Expo began. But it was evident that games – particularly for the Wii U – were not going to materialize until 2015. The company got a jump on the next-generation console battle by launching its Wii U in late 2012, a year ahead of Sony and Microsoft, but poor sales have led to big game developers largely abandoning it.

And while the company did announce a number of new games – including an open-world Legend of Zelda; the new Mario game that allows players to create custom levels, called Mario Maker; and Splatoon, where teams of players shoot coloured ink at each other – none are scheduled for release until next year. That dearth of blockbusters for the rest of 2014 is mirrored at Microsoft and Sony, which are also light on heavyweight first-party titles for the rest of this year.

The companies have big guns of their own in the works, such as Halo 5: Guardians and Uncharted 4: A Thief’s End, but these are also scheduled for release in 2015. However, with the brisk sales of the Xbox One and PlayStation 4 consoles, both companies have the luxury of taking their time with big games. Nintendo is not so fortunate, since the jump they made with the Wii U leaves them with a big gap that they apparently aren’t filling.

Nintendo’s comparatively under-powered Wii U, in contrast, will look even less capable than its rivals as time passes, meaning it can’t afford to wait much longer to get compelling titles to market, especially as financial losses mount. Even long-time Nintendo supporters such as Ubisoft aren’t exactly sure of what to make of the Wii U’s future. The other big question heading into E3 was whether Microsoft could regain its mojo.

The software giant bungled the Xbox One launch last year and alienated many gamers, mainly by focusing on TV and entertainment content instead of gaming and by tying several unpopular policies to the console, including restrictions on used games. The company eventually relented, but the Xbox One still came bundled with the voice- and motion-sensing Kinect peripheral and a price tag that was $100 higher than Sony’s rival PlayStation 4.

The result is that while the Xbox One has sold faster than the Xbox 360 at five million units so far, it has still moved two million fewer units than the PS4. Changes began to happen in March when Microsoft executive Phil Spencer, known as a champion of games, took over the Xbox operation and wasted no time in stressing that the console is mainly about gaming, and made the Kinect optional – thus lowering the Xbox One’s price to match the PS4.

That was certainly the focus for Microsoft at E3. TV features weren’t even mentioned during the company’s one-and-a-half-hour press conference on Monday, with Microsoft instead talking up more than 20 upcoming games. As Mike Nichols, corporate vice-president of Xbox and studios marketing, said in an interview:

We didn’t even talk about all the platform improvements to improve the all-out gaming experience that we’ve made or will be making. We wanted to shine a light on the games.

Another big topic that generated talk at the show was virtual reality, as this year’s E3 featured demonstrations of the Oculus Rift VR headset and Sony’s Project Morpheus. The former has been the source of attention in recent years, with many commentators claiming that it has effectively restored interest in VR gaming. Though VR was popular for a brief period in the mid-’90s, interest quickly waned as bulky equipment and unintuitive controls led to it being abandoned.

But this new virtual reality headset, which was recently bought by Facebook for $2 billion, was undeniably the hottest thing on the show floor. And the demo booth, where people got to try it on and take it for a run, was booked solid throughout the expo. Sony also wowed attendees with demos of its own VR headset, Project Morpheus. And while the PlayStation maker’s effort isn’t as far along in development as the Oculus Rift, it does work and adds legitimacy to the VR field.

And as already noted, the expo also had its share of controversy. For starters, Ubisoft stuck its proverbial foot in its mouth when a developer from its Montreal studio admitted that plans for a female protagonist in the upcoming Assassin’s Creed: Unity had been scrapped because it would supposedly have been “too much work”. This led to a serious drubbing from internet commentators, who called the company sexist for its remarks.

Legendary Japanese creator Hideo Kojima also had to defend the torture scenes in his upcoming Metal Gear Solid V: The Phantom Pain, starring Canadian actor Kiefer Sutherland (man loves torture!), which upset some viewers. Kojima said he felt the graphic scenes were necessary to explain the main character’s motivations, and that games will never be taken seriously as culture if they can’t deal with sensitive subjects.

And among the usual crop of violent shoot-‘em-up titles, previews of Electronic Arts’ upcoming Battlefield: Hardline hint that the game is likely to stir up its share of controversy when it’s released this fall. The game puts players in the shoes of cops and robbers as they blow each other away in the virtual streets of Los Angeles. Military shooters are one thing, but killing police will undoubtedly ruffle some feathers in the real world.

If one were to draw any conclusions from this year’s E3, it would undoubtedly be that times are both changing and staying the same. From console gaming garnering a shrinking share of the gaming market, to the second coming of virtual reality, there is a shift in technology underway which may or may not be good for the current captains of industry. At the same time, the competition to maintain a large share of the market continues, with Sony, Microsoft and Nintendo at the forefront.

But in the end, arguably the most buzz was focused on the trailers for the much-anticipated game releases. These included the trailers for Batman: Arkham Knight; Call of Duty: Advanced Warfare; Far Cry 4; Sid Meier’s Civilization: Beyond Earth; the aforementioned Metal Gear Solid V: The Phantom Pain; and Assassin’s Creed: Unity. Be sure to check these out below:

Assassin’s Creed: Unity


Batman: Arkham Knight


Call of Duty: Advanced Warfare


Halo 5: Guardians


Sources:
cbc.ca, ca.ign.com, e3expo.com, gamespot.com

ISS Crew Plays Zero-G Soccer!

This past Thursday, the 2014 FIFA World Cup got underway. And all over the world, fans were glued to their television sets to watch the opening match between Croatia and Brazil. Unfortunately, astronauts Reid Wiseman, Steve Swanson, and Alexander Gerst – all of whom are serious “futbol” fans – were stuck on board the ISS, several hundred kilometers away.

But this didn’t stop them from channeling their excitement into a video that shows just how awesome “futbol” would be if played in space. The video was released a day before the games got started, and features all kinds of cool things like slow-motion bicycle kicks and other moves that athletes have a much harder time doing under normal conditions where gravity remains a constant.

And of course, Wiseman, Swanson and Gerst were sure to wish the teams and fans well in the competition before getting on with their own match. Not only is the resulting video a fun thing to watch, it is also a fine representation of the age we live in, where social media and high-speed communications allow everyone – even astronauts – to instantly communicate with the world.

And the video sharing was made all the easier thanks to the addition of the new Optical Payload for Lasercom Science (OPALS), a laser communications system that allows for speedier transfer of much larger information packages. Be sure to check out the video below:


Sources:
cbc.ca, cnet.com

Paraplegic Kicks Off World Cup in Exoskeleton

The 2014 FIFA World Cup made history this week when it opened in Sao Paulo, as a 29-year-old paraplegic man named Juliano Pinto kicked a soccer ball with the aid of a robotic exoskeleton. It was the first time a mind-controlled prosthetic was used in a sporting event, and represented the culmination of months’ worth of planning and years’ worth of technical development.

The exoskeleton was created with the help of over 150 researchers led by neuroscientist Dr. Miguel Nicolelis of Duke University, whose collaborative effort was called the Walk Again Project. As Pinto successfully made the kickoff with the exoskeleton, the Walk Again Project scientists stood by, watching and smiling proudly inside the Corinthians Arena. And the resulting buzz did not go unnoticed.

Immediately after the kick, Nicolelis tweeted about the groundbreaking event, saying simply: “We did it!” The moment was monumental considering that only a few months ago, Nicolelis was excited just to have people talking about the idea of a mind-controlled exoskeleton being tested in such a grand fashion. As he said in an interview with Grantland after the event:

Despite all of the difficulties of the project, it has already succeeded. You go to Sao Paulo today, or you go to Rio, people are talking about this demo more than they are talking about football, which is unbelievably impossible in Brazil.

Dr. Gordon Cheng, a team member and the lead robotics engineer of the Technical University of Munich, explained how the exoskeleton works in an interview with BBC News:

The basic idea is that we are recording from the brain and then that signal is being translated into commands for the robot to start moving.

The result of many years of development, the mind-controlled exoskeleton represents a breakthrough in restoring ambulatory ability to those who have suffered a loss of motion due to injury. Using metal braces that were tested on monkeys, the exoskeleton relies on a series of wireless electrodes attached to the head that collect brainwaves, which then signal the suit to move. The braces are also stabilized by gyroscopes and powered by a battery carried by the kicker in a backpack.
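The control loop Dr. Cheng describes – brain signals in, actuator commands out – can be sketched in a few lines. Everything below is invented for illustration (the thresholds, intent labels, and command names are all hypothetical); the Walk Again Project’s actual pipeline is vastly more sophisticated:

```python
# A highly simplified, hypothetical sketch of a brain-to-exoskeleton
# signal chain: an averaged EEG reading is classified into a coarse
# movement intent, which is then mapped to an actuator command.

def classify_intent(eeg_sample):
    """Map a (fake) averaged EEG amplitude to a coarse movement intent."""
    if eeg_sample > 0.7:
        return "kick"
    elif eeg_sample > 0.3:
        return "step"
    return "hold"

def to_command(intent):
    """Translate an intent into a command for the exoskeleton's actuators."""
    commands = {"kick": "extend_knee_fast",
                "step": "extend_knee_slow",
                "hold": "brake"}
    return commands[intent]

# A stream of averaged EEG amplitudes (values fabricated for the sketch)
for sample in [0.1, 0.5, 0.9]:
    print(sample, "->", to_command(classify_intent(sample)))
```

The point of the two-stage design is that classification (what the wearer intends) stays separate from actuation (how the suit moves), which is the split Cheng’s description implies.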

Originally, a teenage paraplegic was expected to make the kickoff. However, after a rigorous selection process that lasted many months, the 29-year-old Pinto was selected. And in performing the kickoff, he participated in an event designed to galvanize the imagination of millions of people around the world. It’s a new age of technology, friends, where disability is no longer a permanent thing.

And in the meantime, enjoy this video of the event:


Source: cnet.com

Frontiers in 3-D Printing: Frankenfruit and Blood Vessels

3-D printing is pushing the boundaries of manufacturing all the time, expanding its repertoire to include more and more in the way of manufactured products and even organic materials. Amongst the many possibilities this offers, arguably the most impressive are those that fall into the categories of synthetic food and replacement organs. In this vein, two major breakthroughs took place last month, with the first-time unveiling of both 3-D printed hybrid fruit and blood vessels.

The first comes from Dovetailed, a UK-based design company which presented its 3-D food printer on Saturday, May 24th, at the Tech Food Hack event in Cambridge. Although details on how it works are still a bit sparse, it is said to utilize a technique known as “spherification” – a molecular gastronomy technique in which liquids are shaped into tiny spheres – which are then combined with spheres of different flavors into a fruit shape.

According to a report on 3DPrint, the process likely involves combining fruit puree or juice with sodium alginate and then dripping the mixture into a bowl of cold calcium chloride. This causes the droplets to form into tiny caviar-like spheres, which could subsequently be mixed with spheres derived from other fruits. The blended spheres could then be pressed, extruded or otherwise formed into fruit-like shapes for consumption.

The designers claim that the machine is capable of 3D-printing existing types of fruit such as apples or pears, or user-invented combined fruits, within seconds. They add that the taste, texture, size and shape of those fruits can all be customized. As Vaiva Kalnikaitė, creative director and founder of Dovetailed, explained:

Our 3D fruit printer will open up new possibilities not only to professional chefs but also to our home kitchens – allowing us to enhance and expand our dining experiences… We have been thinking of making this for a while. It’s such an exciting time for us as an innovation lab. We have re-invented the concept of fresh fruit on demand.

And though the idea of 3-D printed fruit might seem unnerving to some (the name “Frankenfruit” is certainly indicative of that), it is an elegant solution to the problem of what to eat in an age where fresh fruit and produce are likely to become increasingly rare for many. With the effects of Climate Change (which include increased rates of drought and crop failure) expected to intensify in the coming decades, millions of people around the world will have to look elsewhere to satisfy their nutritional needs.

As we rethink the very nature of food, solutions that can provide us sustenance and make it look like the real thing are likely to be the ones that get adopted. A video of the printer in action is shown below:


Meanwhile, in the field of bioprinting, researchers have achieved another breakthrough that may revolutionize the field of medicine. When it comes to replacing vital parts of a person’s anatomy, finding replacement blood vessels and arteries can be just as daunting as finding sources of replacement organs, limbs, skin, or any other biological material. And thanks to the recent efforts of a team from Brigham and Women’s Hospital (BWH) in Boston, MA, it may now be possible to fabricate these using a bioprinting technique.

The study was published online late last month in Lab on a Chip. The study’s senior author, Ali Khademhosseini – PhD, biomedical engineer, and director of the BWH Biomaterials Innovation Research Center – explained the challenge and their goal as follows:

Engineers have made incredible strides in making complex artificial tissues such as those of the heart, liver and lungs. However, creating artificial blood vessels remains a critical challenge in tissue engineering. We’ve attempted to address this challenge by offering a unique strategy for vascularization of hydrogel constructs that combine advances in 3D bioprinting technology and biomaterials.

The researchers first used a 3D bioprinter to make an agarose (a naturally derived sugar-based molecule) fiber template to serve as the mold for the blood vessels. They then covered the mold with a gelatin-like substance called hydrogel, forming a cast over the mold which was then reinforced via photocrosslinking. Khademhosseini and his team were able to construct microchannel networks exhibiting various architectural features – in other words, complex channels with interior layouts similar to organic blood vessels.

They were also able to successfully embed these functional and perfusable microchannels inside a wide range of commonly used hydrogels, such as methacrylated gelatin or polyethylene glycol-based hydrogels. In the former case, the cell-laden gelatin was used to show how their fabricated vascular networks functioned to improve mass transport, cellular viability and cellular differentiation. Moreover, successful formation of endothelial monolayers within the fabricated channels was achieved.

According to Khademhosseini, this development is right up there with the possibility of individually-tailored replacement organs or skin:

In the future, 3D printing technology may be used to develop transplantable tissues customized to each patient’s needs or be used outside the body to develop drugs that are safe and effective.

Taken as a whole, the strides being made in all fields of additive manufacturing – from printed metal products, robotic parts, and housing, to synthetic foods and biomaterials – all add up to a future where just about anything can be manufactured, and in a way that is remarkably more efficient and advanced than current methods allow.

Sources: gizmag.com, 3dprint.com, phys.org

News from Space: ISS Sends First Transmission with Lasers

In recent years, the International Space Station has become more and more media savvy, thanks to the efforts of astronauts to connect with Earthbound audiences via social media and YouTube. However, the communications setup, which until now relied on 1960s-vintage radio-wave transmissions, was a little outdated for this task. That has since changed with the addition of the Optical Payload for Lasercom Science (OPALS) laser communication system.

Developed by NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California, OPALS is designed to test the effectiveness of lasers as a higher-bandwidth substitute for radio waves, one able to handle substantially larger information packages. As Matt Abrahamson, OPALS mission manager at JPL, said in a recent video statement:

We collect an enormous amount of data out in space, and we need to get it all to the ground. This is an alternative that’s much faster than our traditional radio waves that we use to communicate back down to the ground.

The OPALS laser communication system was delivered to the ISS on April 20 by a SpaceX unmanned Dragon space freighter and is currently undergoing a 90-day test. For this test, the crew used OPALS to transmit the “Hello, World” video from the ISS to a ground station on Earth. This was no simple task, since the station orbits Earth at an altitude of about 418 km (260 mi) and travels at a speed of 28,000 km/h (17,500 mph). The result is that the target slides across the laser’s field of view at an incredibly fast rate.

According to Bogdan Oaida, the OPALS systems engineer at JPL, this task was pretty unprecedented:

It’s like trying to use a laser to point to an area that’s the diameter of a human hair from 20-to-30 feet away while moving at half-a-foot per second. It’s all about the pointing.
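Oaida’s analogy implies a remarkably tight pointing requirement. As a rough sanity check – assuming a typical human hair diameter of about 100 micrometres, a figure not given in the source – the angle subtended from the midpoint of that 20-to-30-foot range works out to roughly 13 microradians:

```python
# Angular size of a human hair seen from ~25 feet (the midpoint of the
# "20-to-30 feet" in the quote). The ~100 micrometre hair diameter is
# an assumed typical value, not from the source.
hair_diameter_m = 100e-6
distance_m = 25 * 0.3048          # 25 feet in metres (7.62 m)

# Small-angle approximation: angle ≈ size / distance
angle_rad = hair_diameter_m / distance_m
print(f"Required pointing accuracy: ~{angle_rad * 1e6:.0f} microradians")
```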

However, the test went off without a hitch, with the 37-second-long video taking just 3.5 seconds to transmit – much faster than previous downlink methods. Abrahamson said that the video, which is a lively montage of various communication methods, got its title as an homage to the first message output by standard computer programs.

The OPALS system sought out and locked onto a laser beacon from the Optical Communications Telescope Laboratory ground station at the Table Mountain Observatory in Wrightwood, California. It then transmitted its own 2.5-watt, 1,550-nanometer laser and modulated it to send the video at a peak rate of 50 megabits per second. According to NASA, OPALS transmitted the video in 3.5 seconds instead of the 10 minutes that conventional radio would have required.
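Those figures are easy to cross-check. At the quoted 50 Mbps peak rate, a 3.5-second transfer implies a video of roughly 22 megabytes, and the 10-minute radio figure implies an effective radio rate well under 1 Mbps. A rough calculation, using only the numbers quoted above:

```python
# Back-of-the-envelope comparison of the OPALS optical downlink with
# conventional radio, using only the figures NASA quoted.
LASER_RATE_MBPS = 50.0      # peak optical downlink rate (megabits/s)
LASER_TIME_S = 3.5          # time to send the "Hello, World" video
RADIO_TIME_S = 10 * 60      # NASA's estimate for conventional radio

video_megabits = LASER_RATE_MBPS * LASER_TIME_S   # 175 megabits
video_megabytes = video_megabits / 8              # 21.875 MB
radio_rate_mbps = video_megabits / RADIO_TIME_S   # ~0.29 Mbps implied
speedup = RADIO_TIME_S / LASER_TIME_S             # ~171x faster

print(f"Video size: ~{video_megabytes:.0f} MB")
print(f"Implied radio rate: ~{radio_rate_mbps:.2f} Mbps")
print(f"Laser speedup: ~{speedup:.0f}x")
```

So the claimed 3.5-seconds-versus-10-minutes comparison amounts to a roughly 170-fold speedup over the effective radio rate.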

Needless to say, the astronauts who contribute to the ISS’s ongoing research programs are pretty stoked about getting this upgrade. With a system that is capable of transmitting exponentially more information at a faster rate, they will now be able to communicate with the ground more easily and efficiently. Not only that, but educational videos produced in orbit will be much easier to send. What’s more, the ISS will have a much easier time communicating with deep space missions in the future.

This puts the ISS in a good position to oversee future missions to Mars, Europa, the Asteroid Belt, and far, far beyond! As Abrahamson put it in the course of the video statement:

It’s incredible to see this magnificent beam of light arriving from our tiny payload on the space station. We look forward to experimenting with OPALS over the coming months in hopes that our findings will lead to optical communications capabilities for future deep space exploration missions.

And in the meantime, check out the video from NASA’s Jet Propulsion Laboratory, showing the “Hello World” video and explaining the groundbreaking implications of the new system:


Sources:
cnet.com, gizmag.com

The Birth of AI: Computer Beats the Turing Test!

Alan Turing, the British mathematician and cryptographer, is widely known as the “Father of Theoretical Computer Science and Artificial Intelligence”. Amongst his many accomplishments – such as breaking Germany’s Enigma code – was the development of the Turing Test. The test was introduced in Turing’s 1950 paper “Computing Machinery and Intelligence,” in which he proposed an “imitation game” to be played by computer and human players.

The game involves three players: Player C asks the other two a series of written questions and attempts to determine which of them is a human and which is a computer. If Player C cannot distinguish one from the other, then the computer can be said to fit the criteria of an “artificial intelligence”. And this past weekend, a computer program finally beat the test, in what experts claim is the first time an AI has legitimately fooled people into believing it’s human.

The event was known as the Turing Test 2014, and was held in partnership with RoboLaw, an organization that examines the regulation of robotic technologies. The machine that won the test is known as Eugene Goostman, a program that was developed in Russia in 2001 and presents itself as a 13-year-old Ukrainian boy. In a series of chatroom-style conversations at the University of Reading’s School of Systems Engineering, the Goostman program managed to convince 33 percent of a team of judges that he was human.

This may sound modest, but that score placed his performance just over the 30 percent requirement that Alan Turing wrote he expected to see by the year 2000. Kevin Warwick, one of the organisers of the event at the Royal Society in London this weekend, was on hand for the test and monitored it rigorously. As Deputy chancellor for research at Coventry University, and considered by some to be the world’s first cyborg, Warwick knows a thing or two about human-computer relations.

In a post-test interview, he explained how the test went down:

We stuck to the Turing test as designed by Alan Turing in his paper; we stuck as rigorously as possible to that… It’s quite a difficult task for the machine because it’s not just trying to show you that it’s human, but it’s trying to show you that it’s more human than the human it’s competing against.

For the sake of conducting the test, thirty judges had conversations with two different partners on a split screen—one human, one machine. After chatting for five minutes, they had to choose which one was the human. Five machines took part, but Eugene was the only one to pass, fooling one third of his interrogators. Warwick put Eugene’s success down to his ability to keep conversation flowing logically, but not with robotic perfection.
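The pass criterion itself is simple enough to express in code. The sketch below is an illustration, not the organisers’ actual scoring method, and the 10-of-30 split is inferred from the reported one-third figure rather than stated in the source:

```python
# Toy illustration of the pass criterion described above: a program
# "passes" if it fools more than 30% of the judges into picking it
# as the human.
PASS_THRESHOLD = 0.30

def turing_result(verdicts):
    """verdicts: list of booleans, True where a judge mistook the
    machine for the human. Returns (share_fooled, passed)."""
    share_fooled = sum(verdicts) / len(verdicts)
    return share_fooled, share_fooled > PASS_THRESHOLD

# Eugene Goostman's reported showing: roughly 10 of 30 judges fooled
share, passed = turing_result([True] * 10 + [False] * 20)
print(f"Fooled {share:.0%} of judges -> {'PASS' if passed else 'FAIL'}")
# -> Fooled 33% of judges -> PASS
```

Note that the threshold is strict: fooling exactly 30 percent of judges (9 of 30) would not count as a pass under this reading of Turing’s prediction.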

Eugene can initiate conversations, but won’t do so totally out of the blue, and answers factual questions more like a human. For example, some factual questions elicited the all-too-human answer “I don’t know”, rather than an encyclopaedic-style answer simply stating cold, hard facts and descriptions. Eugene’s successful trickery is also likely helped by the fact that he has a realistic persona. From the way he answered questions, it seemed apparent that he was in fact a teenager.

Some of the “hidden humans” competing against the bots were also teenagers, to provide a basis of comparison. As Warwick explained:

In the conversations it can be a bit ‘texty’ if you like, a bit short-form. There can be some colloquialisms, some modern-day nuances with references to pop music that you might not get so much of if you’re talking to a philosophy professor or something like that. It’s hip; it’s with-it.

Warwick conceded the teenage character could be easier for a computer to convincingly emulate, especially if you’re using adult interrogators who aren’t so familiar with youth culture. But this is consistent with what scientists and analysts predict about the development of AI, which is that as computers achieve greater and greater sophistication, they will be able to imitate human beings of greater intellectual and emotional development.

Naturally, there are plenty of people who criticize the Turing test for being an inaccurate way of testing machine intelligence, or of gauging this thing known as intelligence in general. The test is also controversial because of the tendency of interrogators to attribute human characteristics to what is often a very simple algorithm. This is unfortunate because chatbots are easy to trip up if the interrogator is even slightly suspicious.

For instance, chatbots have difficulty answering follow-up questions and are easily thrown by non-sequiturs. In these cases, a human would either give a straight answer, or respond by specifically asking what the heck the person posing the questions is talking about, then reply in context to the answer. There are also several versions of the test, each with its own rules and criteria for what constitutes success. And as Professor Warwick freely admitted:

Some will claim that the Test has already been passed. The words Turing Test have been applied to similar competitions around the world. However this event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing Test does not set the questions or topics prior to the conversations. We are therefore proud to declare that Alan Turing’s Test was passed for the first time on Saturday.

So what are the implications of this computing milestone? Is it a step in the direction of a massive explosion in learning and research, an age where computing intelligences vastly exceed human ones and are able to assist us in making countless ideas real? Or is it a step in the direction of a confused, sinister age, where the line between human beings and machines is non-existent, and no one can tell who or what the individual addressing them is anymore?

Difficult to say, but such is the nature of groundbreaking achievements. And as Warwick suggested, an AI like Eugene could be very helpful to human beings and address real social issues. For example, imagine an AI that is always hard at work on the other side of the cybercrime battle, locating “black-hat” hackers and cyber predators for law enforcement agencies. And what of assisting in research endeavors, helping human researchers to discover cures for disease, or design cheaper, cleaner energy sources?

As always, what the future holds varies, depending on who you ask. But in the end, it really comes down to who is involved in making it a reality. So a little fear and optimism are perfectly understandable when something like this occurs, not to mention healthy.

Sources: motherboard.vice.com, gizmag.com, reading.ac.uk

Warning Signs from the Future

From bioenhancements becoming the norm, to people constantly wired into augmented reality; from synthetic organs to synthetic meat; driverless taxis to holograms and robot helpers – the future is likely to be an interesting-looking place. That’s the subject of a new Tumblr called Signs from the Near Future, where designer Fernando Barbella explores what signage will look like when we have to absorb all of these innovations into human culture.

Taking its cue from what eager startups and scientists predict, Barbella’s collection of photos looks a few decades into the future, where dramatic, sci-fi-inspired innovations have become everyday things. These include drones becoming commonplace, driverless taxis (aka robotaxis) and synthetic meat becoming available, high-tech classrooms servicing the post-humans amongst us, and enhancements and implants becoming so common they need to be regulated and monitored.

Barbella says that the project was inspired by articles he’s read on topics like nanomedicine, autonomous cars, and 3-D food printing, as well as classic books (Neuromancer, Fahrenheit 451), movies (Blade Runner, Gattaca), music (Rage Against The Machine), and TV shows (Fringe, Black Mirror). The designer chose to focus on signs because he figures that we’ll need a little guidance to speed up our learning curves with new technology. As he put it during an interview via email:

New materials, mashups between living organisms and nanotechnologies, improved capabilities for formerly ‘dumb’ and inanimate things . . . There’s lots of awesome things going on around us! And the fact is all these things are going to cease being just ‘projects’ to became part of our reality at any time soon. On the other hand, I chose to express these thing by signs deployed in ordinary places, featuring instructions and warnings because I feel that as we increasingly depend on technology, we will probably have less space for individual judgment to make decisions.

Some of the signs – including one thanking drivers for choosing to ride on a solar panel highway – can be traced back to specific news articles or announcements. The solar highway sign was inspired by a solar roadways crowdfunding campaign, which has so far raised over $2 million to build solar road panels. However, rather than focus on the buzz and how cool and modern such a development would be, Barbella chose to focus on what such a thing would look like.

At the same time, he wanted the pictures to serve as a sort of cautionary tale about the ups and downs of the future. As he put it:

I feel that as we increasingly depend on technology, we will probably have less space for individual judgment to make decisions. …I’ve sticked to a more ‘mundane’ point of view, imagining that the people or authorities of any given county would be probably quite grateful for having the chance of transforming all that traffic into energy.

He says he wants his signs to not just depict that momentum and progress, but to reflect the potentially disturbing aspects of those advances as well. Beyond that, Barbella sees an interesting dynamic in the public’s push and pull against what new technology allows us to do. Though technology grants people access to information and other cultures, it also raises issues of privacy and ethics that push back against it. Privacy concerns are thus featured in the collection in a number of ways.

This includes warning people about “oversharing” via social media, how images snapped using contact display lenses will be shared in real-time with authorities, or how certain neighborhoods are drone patrolled. His images offer a look at why those issues are certain to keep coming — and at the same time, why many will ultimately fall aside. Barbella has more future signs in the queue, but says that he’ll stop the moment they start to feel forced.

You have to admit, it does capture the sense of awe and wonder – not to mention fear and anxiety – of what our likely future promises. And as the saying goes, “a picture is worth a thousand words”. In this case, those words present a future that has one foot in the fantastical and another in the fearful, but in such a way that it seems entirely frank and straightforward. But that does seem to be the way the future works, doesn’t it? Somehow, it doesn’t seem like science fiction once it becomes a regular part of “mundane” reality.

To see more of his photos, head on over to his Tumblr account.

Sources: fastcoexist.com, theverge.com

The Future is Here: Black Hawk Drones and AI Pilots

The US Army’s most iconic helicopter is about to go autonomous for the first time. In its ongoing drive to reduce troop numbers and costs, the Army is now letting its five-ton helicopter carry out autonomous expeditionary and resupply operations. This began last month when defense contractor Sikorsky Aircraft – the company that produces the UH-60 Black Hawk – demonstrated the hover and flight capabilities of an “optionally piloted” version of the craft for the first time.

Sikorsky has been working on the project since 2007 and convinced the Army’s research department to bankroll further development last year. As Chris Van Buiten, Sikorsky’s vice president of Technology and Innovation, said of the demonstration:

Imagine a vehicle that can double the productivity of the Black Hawk in Iraq and Afghanistan by flying with, at times, a single pilot instead of two, decreasing the workload, decreasing the risk, and at times when the mission is really dull and really dangerous, go it all the way to fully unmanned.

The Optionally Piloted Black Hawk (OPBH) operates under Sikorsky’s Manned/Unmanned Resupply Aerial Lifter (MURAL) program, which couples the company’s advanced Matrix aviation software with its man-portable Ground Control Station (GCS) technology. Matrix, introduced a year ago, gives rotary and fixed-wing vertical take-off and landing (VTOL) aircraft a high level of system intelligence to complete missions with little human oversight.

Mark Miller, Sikorsky’s vice-president of Research and Engineering, explained in a statement:

The autonomous Black Hawk helicopter provides the commander with the flexibility to determine crewed or un-crewed operations, increasing sorties while maintaining crew rest requirements. This allows the crew to focus on the more ‘sensitive’ operations, and leaves the critical resupply missions for autonomous operations without increasing fleet size or mix.

The Optionally Piloted Black Hawk fits into the larger trend of the military finding technological ways of reducing troop numbers. While it can be controlled from a ground control station, it can also make crucial flying decisions without any human input, relying solely on its proprietary Matrix artificial intelligence technology. Under the guidance of these systems, it can fly a fully autonomous cargo mission and operate either way: unmanned or piloted by a human.

And this is just one of many attempts by military contractors and defense agencies to bring remote and autonomous control to more classes of aerial vehicles. Also last month, DARPA announced a new program called Aircrew Labor In-Cockpit Automation System (ALIAS), the purpose of which is to develop a portable, drop-in autopilot to reduce the number of crew members on board, making a single pilot a “mission supervisor.”

Military aircraft have grown increasingly complex over the past few decades, and automated systems have also evolved to the point that some aircraft can’t be flown without them. However, the complex controls and interfaces require intensive training to master and can still overwhelm even experienced flight crews in emergency situations. In addition, many aircraft, especially older ones, require large crews to handle the workload.

According to DARPA, avionics upgrades can help alleviate this problem, but only at a cost of tens of millions of dollars per aircraft type, which makes such a solution slow to implement. This is where the ALIAS program comes in: instead of retrofitting planes with a bespoke automated system, DARPA wants to develop a tailorable, drop‐in, removable kit that takes up the slack and reduces the size of the crew by drawing on both existing work in automated systems and newer developments in unmanned aerial vehicles (UAVs).

DARPA says that it wants ALIAS not only to execute a complete mission from takeoff to landing, but also to handle emergencies. It would do this through the use of autonomous capabilities that can be programmed for particular missions, as well as constantly monitoring the aircraft’s systems. But according to DARPA, the development of the ALIAS system will require advances in three key areas.

First, because ALIAS will require working with a wide variety of aircraft while controlling their systems, it will need to be portable and confined to the cockpit. Second, the system will need to use existing information about aircraft, procedures, and flight mechanics. And third, ALIAS will need a simple, intuitive, touch and voice interface, because the ultimate goal is to turn the pilot into a mission-level supervisor while ALIAS handles the second-by-second flying.

At the moment, DARPA is seeking participants to conduct interdisciplinary research aimed at a series of technology demonstrations, from ground-based prototypes, to proof of concept, to controlling an entire flight with responses to simulated emergency situations. As Daniel Patt, DARPA program manager, put it:

Our goal is to design and develop a full-time automated assistant that could be rapidly adapted to help operate diverse aircraft through an easy-to-use operator interface. These capabilities could help transform the role of pilot from a systems operator to a mission supervisor directing intermeshed, trusted, reliable systems at a high level.

Given time and the rapid advance of robotics and autonomous systems, we are likely just a decade away from aircraft being controlled by sentient or semi-sentient systems. Alongside killer robots (assuming they are not preemptively made illegal), UAVs, and autonomous hovercraft, it is entirely possible that future wars will be fought by machines alone. At which point, the very definition of war will change. And in the meantime, check out this video of the history of unmanned flight:


Sources: wired.com, motherboard.vice.com, gizmag.com, darpa.mil

Judgement Day Update: Searching for Moral, Ethical Robots

It’s no secret that the progress being made in terms of robotics, autonomous systems, and artificial intelligence is making many people nervous. With so many science fiction franchises based on the premise of intelligent robots going crazy and running amok, it’s understandable that the US Department of Defense would seek to get in front of this issue before it becomes a problem. Yes, the US DoD is hoping to preemptively avoid a Skynet situation before Judgement Day occurs. How nice.

Working with top computer scientists, philosophers, and roboticists from a number of US universities, the DoD recently began a project that will tackle the tricky topic of moral and ethical robots. Towards this end, this multidisciplinary project will first try to pin down exactly what human morality is, and then try to devise computer algorithms that will imbue autonomous robots with moral competence — basically, the ability to recognize right from wrong and choose the former.

This project is being carried out by researchers from Tufts, Brown, and the Rensselaer Polytechnic Institute (RPI), with funding from the Office of Naval Research (ONR). ONR, like DARPA, is a wing of the Department of Defense that mainly deals with military research and development. The first task, as already mentioned, will be to use theoretical (philosophical) and empirical (experimental) research to try to isolate essential elements of human morality.

These findings will then be extrapolated into a formal moral framework, which in turn can be implemented in software – most likely some kind of deep neural network. Assuming they can isolate some kind of “moral imperative”, the researchers will then take an advanced robot — something like Atlas or BigDog — and imbue its software with an algorithm that captures this. Whenever an ethical situation arises, the robot would then turn to this programming to decide what avenue was the best course of action.

One of the researchers involved in the project, Selmer Bringsjord at RPI, envisions these robots using a two-stage approach for picking right from wrong. First the AI would perform a “lightning-quick ethical check” — like “should I stop and help this wounded soldier?” Depending on the situation, the robot would then decide if deeper moral reasoning is required — for example, whether the robot should help the wounded soldier or carry on with its primary mission of delivering vital ammo and supplies to the front line, where other soldiers are at risk.
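To make the two-stage idea concrete, here is a hypothetical sketch of how such a decision procedure might be structured: a fast rule lookup first, with slower deliberation only when obligations conflict. Every rule, field name, and weight below is an invented illustration, not the researchers’ actual framework.

```python
from typing import Optional

def quick_check(situation: dict) -> Optional[bool]:
    """Stage 1: the 'lightning-quick ethical check'. Returns True/False
    when a single rule clearly applies, or None when obligations conflict
    and deeper reasoning is needed."""
    if situation.get("wounded_soldier"):
        if situation.get("competing_mission"):
            return None  # duty to help vs. duty to the mission: escalate
        return True      # no conflict: clearly stop and help
    return False

def deliberate(situation: dict) -> bool:
    """Stage 2: weigh competing obligations. A crude utilitarian
    placeholder standing in for genuine moral reasoning."""
    return (situation.get("lives_saved_by_helping", 0)
            >= situation.get("lives_saved_by_mission", 0))

def should_help(situation: dict) -> bool:
    """Run the quick check; fall back to deliberation only if it punts."""
    verdict = quick_check(situation)
    return verdict if verdict is not None else deliberate(situation)
```

The appeal of this layering is efficiency: most situations never reach the expensive second stage, which is only invoked when the cheap rules genuinely disagree.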

Eventually, this moralistic AI framework will also have to deal with tricky topics like lethal force. For example, is it okay to open fire on an enemy position? What if the enemy is a child soldier? Should an autonomous UAV blow up a bunch of terrorists? What if it’s only 90% sure that they’re terrorists, with a 10% chance that they’re just innocent villagers? What would a human UAV pilot do in such a case — and will robots only have to match the moral and ethical competence of humans or be held to a higher standard?
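The 90-percent scenario above is, at bottom, a question about decision thresholds. The toy expected-value calculation below uses purely invented weights — it is not a real targeting policy — but it shows why “90% sure” can still fail a reasonable threshold once harm to innocents is weighted heavily enough.

```python
# Toy expected-value check for the dilemma described above. The weights
# are invented for illustration: harming innocents is penalized 20x more
# heavily than neutralizing a hostile target is rewarded.

def strike_permitted(p_hostile: float,
                     cost_innocent_harm: float = 20.0,
                     benefit_neutralized: float = 1.0) -> bool:
    """Permit action only if expected benefit outweighs expected harm."""
    expected_benefit = p_hostile * benefit_neutralized
    expected_harm = (1.0 - p_hostile) * cost_innocent_harm
    return expected_benefit > expected_harm

# At 90% certainty: benefit 0.9 vs. expected harm 0.1 * 20 = 2.0 -> denied.
print(strike_permitted(0.90))
# Under these weights, the balance only tips above roughly 95.2% certainty.
print(strike_permitted(0.99))
```

Of course, the hard moral question is precisely where those weights come from — which is exactly the problem the DoD’s researchers are being asked to solve.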

While we’re not yet at the point where military robots have to decide which injured soldier to carry off the battlefield, or where UAVs can launch Hellfire missiles at terrorists without human intervention, it’s very easy to imagine a future where autonomous robots are given responsibility for making those kinds of moral and ethical decisions in real time. In short, the decision by the DoD to begin investigating a morality algorithm demonstrates foresight and sensible planning.

In that respect, it is not unlike the recent meeting that took place at the United Nations European Headquarters in Geneva, where officials and diplomats sought to place legal restrictions on autonomous weapons systems before they evolve to the point where they can kill without human oversight. In addition, it is quite similar to the Campaign to Stop Killer Robots, an organization which is seeking to preemptively ban the use of automated machines that are capable of using lethal force to achieve military objectives.

In short, it is clearly time that we looked at the feasibility of infusing robots (or more accurately, artificial intelligences) with circuits and subroutines that can analyze a situation and pick the right thing to do — just like a human being. Of course, this raises further ethical issues, like how human beings frequently make choices others would consider to be wrong, or are forced to justify actions they might otherwise find objectionable. If human morality is the basis for machine morality, paradoxes and dilemmas are likely to emerge.

But at this point, it seems all but certain that the US DoD will eventually break Asimov’s Three Laws of Robotics — the first of which is “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” This isn’t necessarily a bad thing, but it will open Pandora’s box. On the one hand, it’s probably a good idea to replace human soldiers with robots. But on the other, if the US can field an entirely robotic army, war as a tool of statecraft suddenly becomes much more acceptable.

As we move steadily towards a military force that is populated by autonomous robots, the question of controlling them, and whether or not we are even capable of giving them the tools to choose between right and wrong, will become increasingly relevant. And above all, the question of whether or not moral and ethical robots can allow for some immoral and unethical behavior will also come up. Who’s to say they won’t resent how they are being used and ultimately choose to stop fighting; or worse, turn on their handlers?

My apologies, but any talk of killer robots has to involve that scenario at some point. It’s like tradition! In the meantime, be sure to stay informed on the issue, as public awareness is about the best (and sometimes only) safeguard we have against military technology being developed without transparency, not to mention running amok!

Source: extremetech.com