Michael Bay has earned his fair share of notoriety for taking popular ’80s franchises and completely ruining them. With the crass remakes of nostalgic classics like Friday the 13th, A Nightmare on Elm Street and The Texas Chainsaw Massacre produced by his company Platinum Dunes, and a soon-to-be-reviled remake of Teenage Mutant Ninja Turtles on the way, he might just best George Lucas for the title of who did the most damage to ’80s childhoods.
But it is arguably his work on the Transformers franchise that has earned him the most scorn. From its beginning as a semi-decent movie that still had all the Bay staples (racist caricatures, sexist portrayals, stupid dialogue, action porn, eye-candy visuals), the series quickly degenerated into one that produced equal parts convulsive laughter and vomiting over just how bad it was. And with a fourth movie on the way, it’s clear he has no intention of stopping.
Luckily (as is often proving to be the case these days), fans of the franchise have stepped up to fill the void left by Bay’s hackish, opportunistic attempts to recreate a childhood classic. Entitled “Attack On Giant”, this mini-film was shot entirely in stop motion using Transformers toys, features sound effects from the original series, and focuses on a fight scene between two original-series toys: Battle Tanker and Giant.
Sure, the visuals may not be as intensely colored as in Bay’s movies, and the stop motion might be a little clunkier than seamless CGI, but the quality and the heart are there in spades. And you’ve got to admit, this was a very fine effort for a fan-made film. It is just one of several stop-motion fan films made by Harris Loureiro, a Malaysian amateur filmmaker who has created five Transformers fan-films to date.
So if you like this video, be sure to check out some of his other videos:
I’ve been spending entirely too much time over at YouTube lately. Have you seen those comment sections? If there were ever a reason for misanthropy, that would be it! But one can always find plenty of nuggets of awesomeness while navigating through that sea of virulence, and Honest Trailers is often the source. Below is the Game of Thrones video they released a few months back to coincide with the release of Season Four.
!Warning! As it says in the intro, this video contains spoiler material for anyone who hasn’t viewed the first three seasons of GOT. But at this point, I’ve got to assume that’s nobody, right? Or at least nobody who would care about this video. Anyway, enjoy the video, and pay close attention to the inside joke at the end. (R+L=J… I can’t believe I got that reference. I am SUCH a nerd!)
As a nerd and historian, I am obliged to share these latest Epic Rap Battles of History videos. First up, there’s Sir Isaac Newton vs. Bill Nye. In addition to being funny, I think it set a record for most featured performers. These included “Weird Al” Yankovic in the role of Sir Isaac Newton and hip-hop artist Chali 2na in the role of Neil deGrasse Tyson, who jumps in at the fourth verse to give Bill Nye some much-needed assistance.
Second, there’s the battle between revolutionaries George Washington vs. William Wallace. While this one didn’t feature anyone famous, it was some of Nice Peter and EpicLLOYD’s best work. And like all of their best videos, it is both educational and classically hilarious! Enjoy!
It’s like something out of Huxley’s Brave New World: a blanket that monitors your brain activity, and takes on a corresponding color to show just how relaxed you are. Yes, it might sound like a bizarre social experiment, but in fact, it is part of a British Airways study to measure the effects of night-time travel between Heathrow and New York, a trip that takes flyers across multiple time zones.
Anyone who has ever done this knows that the jet lag can be a real pain in the ass. And for frequent flyers, jet lag has a surprisingly powerful impact on their internal clocks and circadian rhythms. Part of the problem arises from the fact that travelers are inside a metal and plastic cylinder that’s about as far from natural as possible, which poses difficulties for psychologists and others tasked with improving passenger conditions.
Using the happiness blanket, British Airways is trying to tweak those conditions to make air travel more relaxing and better suited to adjusting to a new time zone. The blanket works by using a neurosensor-studded headband to measure brain waves and determine the user’s level of relaxation, while fiber optics woven into the material display this state through color patterns: red indicates minimum relaxation, and blue indicates maximum relaxation.
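To illustrate the idea, here is a minimal sketch in Python of how a relaxation score might be mapped to the blanket’s red-to-blue display. This is purely a hypothetical illustration; the scoring scale, function name, and color details are my own assumptions, not British Airways’ actual firmware:

    # Hypothetical sketch: map a normalized relaxation score (0.0 = tense,
    # 1.0 = fully relaxed) to an RGB color for the fiber-optic weave.
    def relaxation_to_rgb(score: float) -> tuple:
        """Interpolate linearly from red (tense) to blue (relaxed)."""
        score = max(0.0, min(1.0, score))  # clamp to the valid range
        return (round(255 * (1.0 - score)), 0, round(255 * score))

    print(relaxation_to_rgb(0.1))  # mostly red: the passenger is tense
    print(relaxation_to_rgb(0.9))  # mostly blue: the passenger is relaxed

In reality the headband reports several bands of brain activity, so the single “score” here would itself have to be derived from a weighted mix of those signals.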
Naturally, there’s also a marketing angle at work here. In truth, there’s no need for the blankets to have a readout mechanism, but it is a nice way of illustrating to the public what’s going on. Using data gleaned from volunteer fliers, British Airways hopes to learn how to adjust the various factors of the cabin experience, including lighting, mealtimes, menus, seating positions, the types of films shown, and general cabin routine.
According to British Airways, the key to these adjustments is to provide passengers with the best sleep possible on long flights, which is one reason why the airline has introduced lie-flat seating for business class and above. Better relaxation gives the brain as few distractions as possible while traveling across time zones, so it has a chance to adjust.
As Frank van der Post, British Airways’ managing director, brands and customer experience, said about the experiment:
Using technology like the British Airways ‘happiness blanket’ is another way for us to investigate how our customers’ relaxation and sleep is affected by everything on board, from the amount of light in the cabin, when they eat, to what in-flight entertainment they watch and their position in the seat.
I can smell an industry emerging: high-tech happiness monitoring. And with the growth in neurosensors and EEG headsets, it was really just a matter of time before someone got proactive and decided to mass-produce them. I imagine other companies will begin following suit, perhaps to monitor their employees’ happiness, or to gauge customer response to commercials. It all sounds so deliciously quasi-fascist!
And be sure to check out the company’s promotional video:
This coming fall, Brad Pitt will be starring in another World War II movie, though one that is somewhat different from Inglourious Basterds. Set in April of 1945, Fury takes place in Germany during the final month of the war, as the crew of a single Sherman tank attempts a final mission behind enemy lines. And as you can see from the stark and gritty trailer, the film features a real Sherman and a real Tiger tank, the latter of which was borrowed from the Tank Museum in Bovington, England.
The movie also stars Shia LaBeouf, Logan Lerman, and The Walking Dead‘s Jon Bernthal (aka. Shane), and is slated for release in November 2014.
We’ve all thought about it… the day when a super-intelligent computer becomes self-aware and unleashes a nuclear holocaust, followed shortly thereafter by the rise of the machines (cue theme from Terminator). But as it turns out, when the robot army does come to exterminate humanity, at least two humans might be safe – Google co-founders Larry Page and Sergey Brin, to be precise.
Basically, they’ve uploaded a killer-robots.txt file to their servers that instructs T-800 and T-1000 Terminators to spare the company’s co-founders (or “disallow” their deaths). Such was the subject of a totally tongue-in-cheek presentation at this year’s Google I/O at the Moscone Center in San Francisco, which coincided with the 20th anniversary of the robots.txt file.
This tool, which was created in 1994, instructs search engines and other automated bots to avoid crawling certain pages or directories of a website. The industry has done a remarkable job staying true to the simple text file in the two decades since; Google, Bing, and Yahoo still obey its directives. The changes they uploaded read like this, just in case you’re planning on adding your name to the “disallow” list:
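For reference – and in case you want to model your own additions on it – the file posted at google.com/killer-robots.txt read as follows (reproduced here as widely reported at the time):

    User-Agent: T-1000
    User-Agent: T-800
    Disallow: /+LarryPage
    Disallow: /+SergeyBrin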
While that tool didn’t exactly take the rise of the machines into account, its appearance on Google’s website as an Easter egg did add some levity for a company that is already being accused of facilitating the creation of killer robots. And while calling Google’s proposed line of robots “killer” does seem both premature and extreme, that did not stop a protester from interrupting the I/O 2014 keynote address.
Basically, as Google’s senior VP of technical infrastructure Urs Hölzle spoke about their cloud platform, the unidentified man stood up and began screaming “You all work for a totalitarian company that builds machines that kill people!” As you can see from the video below, Hölzle did his best to take the interruption in stride and continued with the presentation. The protester was later escorted out by security.
This wasn’t the first time that Google has been the source of controversy over the prospect of building “killer robots”. Ever since Google acquired Boston Dynamics and seven other robotics companies in the space of six months (between June and December of 2013), there has been some fear that the company has a killer machine in the works that it will attempt to sell to the armed forces.
Naturally, this is all part of a general sense of anxiety that surrounds developments being made across multiple fields. Whereas some concerns have crystallized into dedicated and intelligent calls for banning autonomous killer machines in advance – aka. the Campaign to Stop Killer Robots – others have resulted in the kinds of irrational outbursts observed at this year’s I/O.
Needless to say, if Google does begin developing killer robots, or just starts militarizing its line of Boston Dynamics acquisitions, we can expect just about everyone who can access (or hack their way into) the robots.txt file to add their names. And it might not be too soon to update the list to include the T-X, Replicants, and any other killer robots we can think of!
And be sure to check out the video of the “killer robot” protester speaking out at the 2014 I/O:
Ian Burkhart, a 23-year-old quadriplegic from Dublin, Ohio, was injured in 2010 in a diving accident, breaking his neck on a sandbar and paralyzing his body from the neck down. He was left with some use of his arms, but lost the use of his legs, hands, and fingers. Thanks to a new device known as the Neurobridge, however – a device that allows the brain’s signals to bypass the injured spinal cord – Burkhart has now moved his right hand and fingers for the first time since the accident.
This device, which was developed jointly by the Ohio State University Wexner Medical Center and the non-profit company Battelle, consists of a pea-sized chip containing an array of 96 electrodes, which allows researchers to read detailed signals and neural activity emanating from the patient’s brain. The chip was implanted two months ago, when neurosurgeon Dr. Ali Rezai of Ohio State University performed the surgery that placed it into the motor cortex of Burkhart’s brain.
Battelle has been working on neurosensing technology for almost a decade. As Chad Bouton, the leader of the Neurobridge project at Battelle, explains:
We were having such success in decoding brain activity, we thought, ‘Let’s see if we could remap the signals, go around something like a spinal cord injury and then translate the signals into something that the muscles could understand and help someone paralyzed regain control of their limb’.
During the test, which occurred in June, the implanted chip read and interpreted the electrical activity in Burkhart’s brain and sent it to a computer. The computer then recoded the signal and sent it to a high-definition electrode stimulation sleeve Burkhart wore on his right arm, a process that took less than a tenth of a second and allowed Burkhart to move his paralyzed fingers. Basically, Burkhart is able to move his hand by simply thinking about moving his hand, and the machine does the rest.
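To make the pipeline concrete, here is a minimal sketch in Python of the read-decode-stimulate loop described above. This is purely illustrative; the function names are my own stand-ins, and Battelle’s actual decoder is a trained model rather than a simple threshold:

    import numpy as np

    NUM_ELECTRODES = 96  # size of the implanted electrode array

    def read_motor_cortex() -> np.ndarray:
        # Stand-in for sampling the chip implanted in the motor cortex.
        return np.random.randn(NUM_ELECTRODES)

    def decode_intent(signals: np.ndarray) -> str:
        # Stand-in for the decoder that maps neural activity
        # patterns to an intended hand movement.
        return "open_hand" if signals.mean() > 0 else "rest"

    def stimulate_sleeve(movement: str) -> None:
        # Stand-in for driving the electrode stimulation sleeve on the arm.
        print(f"sleeve pattern -> {movement}")

    # In reality this loop runs continuously, each pass reportedly
    # completing in under a tenth of a second.
    for _ in range(10):
        stimulate_sleeve(decode_intent(read_motor_cortex()))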
A team led by Chad Bouton at Battelle spent nearly a decade developing the algorithms, software, and sleeve. Then, just two years ago, Dr. Ali Rezai and Dr. Jerry Mysiw were brought on board to design the clinical trials. Burkhart became involved with the study after his doctor mentioned it to him and he learned he was an ideal candidate. He had the exact level of injury the researchers were looking for, was young and otherwise healthy, and lived close to the Ohio State University Wexner Medical Center, where the research is being conducted.
Even so, Burkhart had to think hard before agreeing to the surgery. He knew it wouldn’t magically give him movement again; he would have to undergo rigorous training to regain even basic hand function, and his experience would mainly serve to move future technological advances along. However, he was excited to be taking part in cutting-edge research that would ultimately help people like him who have suffered spinal injuries and paralysis.
Post-surgery, Burkhart still had a lot of thinking to do – this time, in order to move his hand. As he explained:
It’s definitely great for me to be as young as I am when I was injured because the advancements in science and technology are growing rapidly and they’re only going to continue to increase… Mainly, it was just the fact that I would have to have brain surgery for something that wasn’t needed… Anyone able bodied doesn’t think about moving their hand, it just happens. I had to do lots of training and coaching.
The hand can make innumerable complex movements with the wrist, the fingers, and the fist. In order for Battelle’s software to read Ian’s mind, it has to look for subtle changes in the signals coming from Ian’s brain. As Bouton explains it, the process is like walking into a crowded room with hundreds of people trying to talk to each other, and you’re trying to isolate one particular conversation in a language that you don’t understand.
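In signal-processing terms, that crowded-room problem resembles template matching: correlating many channels of noisy data against a pattern you know, so that the one channel carrying it stands out. A toy illustration in Python (my own sketch, not the project’s actual algorithm):

    import numpy as np

    rng = np.random.default_rng(0)
    template = np.sin(np.linspace(0, 4 * np.pi, 100))  # the "conversation" we know
    channels = rng.normal(0.0, 1.0, (50, 100))         # 50 channels of chatter
    channels[7] += template                            # one channel carries the signal

    # Correlate each channel against the template; the best match stands out.
    scores = channels @ template
    print("channel carrying the signal:", int(np.argmax(scores)))  # expected: 7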
At this point, Burkhart can perform a handful of movement patterns, including moving his hand up and down, opening and closing it, rotating it, and drumming on a table with his fingers. All of this can only be done while he’s in the hospital, hooked up to the researchers’ equipment. But the ultimate goal is to create a device and a software package that he can take with him, giving him the ability to bypass his injury and have full use of his hand during everyday activities.
This isn’t the only research looking into bringing movement back to the paralyzed. In the past, paralyzed patients have been given brain-computer interfaces, but they have only been able to control artificial limbs – i.e. Zac Vawter’s mind-controlled leg or the BrainGate device that allows stroke victims to eat and drink using a mind-controlled robotic arm. Participants in an epidural stimulator implant study have also been able to regain some movement in their limbs, but this technology works best on patients with incomplete spinal cord injuries.
Burkhart is confident that he can regain even more movement in his hand, and the researchers are approved to try the technology out on four more patients. Ultimately, the system will only be workable commercially with a wireless neural implant or an EEG headset – like the Emotiv EPOC, Emotiv Insight, or NeuroSky headsets. The technology is also being considered for stroke rehabilitation, another area where EEG and mind-control technology are being explored as a means to recovery.
From restoring ambulatory ability through mind-controlled limbs and neurosensing devices to rehabilitating stroke victims with mind-reading software, the future is fast shaping up to be a place where no injuries are permanent and physical disabilities and neurological impairments are a thing of the past. I think I can safely speak for everyone when I say that watching these technologies emerge makes it an exciting time to be alive!
And be sure to check out this video from the OSU Wexner Medical Center that shows Ian Burkhart and the Battelle team testing the Neurobridge:
The wearable computing revolution that has been taking place in recent years has drawn in developers and tech giants from all over the world. Its roots are deep, dating back to the late ’60s and early ’80s with the Sword of Damocles concept and the work of Steve Mann. But in recent years, thanks to the development of Google Glass, the case for wearable tech has moved beyond hobbyists and enthusiasts and into the mainstream.
And with display glasses now accounted for, the latest boom in development appears to be centered on smartwatches and similar devices. These range from fitness trackers with just a few features to wrist-mounted versions of smartphones that boast the same constellations of functions and apps (email, phone, text, Skype, etc.). And as always, the big-name companies are coming forward with their own concepts and designs.
First, there’s the much-anticipated Apple iWatch, which is still in the rumor stage. The company has been working on this project since late 2012, but has begun accelerating the process as it tries to expand its family of mobile devices to the wrist. Apple has already started work on trademarking the name in a number of countries in preparation for a late-2014 launch, perhaps in October, with the device entering mass production in July.
And though it’s not yet clear what the device will look like, several mockups and proposals have been leaked. And recent reports from sources like Reuters and The Wall Street Journal have pointed towards multiple screen sizes and price points, suggesting an array of different band and face options in various materials to position it as a fashion accessory. It is also expected to include a durable sapphire crystal display, produced in collaboration with Apple partner GT Advanced.
While the iWatch will perform some tasks independently using the new iOS 8 platform, it will be dependent on a compatible iOS device for functions like receiving messages, voice calls, and notifications. It is also expected to feature wireless charging capabilities, advanced mapping abilities, and possibly near-field communication (NFC) integration. An added bonus, as indicated by Apple’s recent filing for patents associated with their “Health” app, is the inclusion of biometric and health sensors.
Along with serving as a companion device to the iPhone and iPad, the iWatch will be able to measure multiple health-related metrics. Consistent with the features of a fitness band, these will include things like steps taken, calories burned, sleep quality, heart rate, and more. The iWatch is said to include 10 different sensors to track health and fitness, providing an overall picture of health and making the health-tracking experience more accessible to the general public.
Apple has reportedly designed iOS 8 with the iWatch in mind, and the two are said to be heavily reliant on one another. The iWatch will likely take advantage of the “Health” app introduced with iOS 8, which may display all of the health-related information gathered by the watch. Currently, Apple is gearing up to begin mass production on the iWatch, and has been testing the device’s fitness capabilities with professional athletes such as Kobe Bryant, who will likely go on to promote the iWatch following its release.
Not to be outdone, Google launched its own smartwatch platform – known as Android Wear – at this year’s I/O conference. Android Wear is the company’s software platform for linking smartwatches from companies including LG, Samsung, and Motorola to Android phones and tablets. A preview of Wear was introduced this spring, but the I/O conference provided more details on how it will work and made it clear that the company is investing heavily in the notion that wearables are the future.
Android Wear takes much of the functionality of Google Now – an intelligent personal assistant – and uses the smartwatch as a home for receiving notifications and context-based information. For the sake of travel, Android Wear will push relevant flight, weather and other information directly to the watch, where the user can tap and swipe their way through it and use embedded prompts and voice control to take further actions, like dictating a note with reminders to pack rain gear.
For the most part, Google had already revealed most of what Wear will be able to do in its preview, but its big on-stage debut at I/O was largely about getting app developers to buy into the platform and design with a peripheral wearable interface in mind. Apps can be designed to harness different Android Wear “intents”. For example, the Lyft app takes advantage of the “call me a car” intent and can be set to be the default means of hailing a ride when you tell your smartwatch to find you a car.
Google officials also claimed at I/O that the same interface behind Android Wear will power their new Android Auto and Android TV, two other integrated services that allow users to interface with their car and television via a mobile device. So don’t be surprised if you see someone unlocking or starting their car by talking into their watch in the near future. The first Android Wear watches – the Samsung Gear Live and the LG G Watch – are available to pre-order, and the round-faced Motorola Moto 360 is expected to come out later this summer.
All of these steps in integration and wearable technology are signs of an emergent trend, one where just about everything from personal devices to automobiles and even homes are smart and networked together – thus giving rise to a world where everything is remotely accessible. This concept, otherwise known as the “Internet of Things”, is expected to become the norm in the next 20 years, and will include other technologies like display contacts and mediated (aka. augmented) reality.
And be sure to check out this concept video of the Apple iWatch:
This past week, Japanese scientists unveiled what they claim is the world’s first news-reading android. The adolescent-looking “Kodomoroid” – an amalgamation of the Japanese word “kodomo” (child) and “android” – and “Otonaroid” (“otona” meaning adult) introduced themselves at an exhibit entitled Android: What is a Human?, which is being presented at Tokyo’s National Museum of Emerging Science and Innovation (Miraikan).
The androids were flanked by robotics professor Hiroshi Ishiguro and Miraikan director Mamoru Mori as Kodomoroid delivered news of an earthquake and an FBI raid to amazed reporters in Tokyo. She even poked fun at her creator, telling Ishiguro: “You’re starting to look like a robot!” Otonaroid then fluffed her lines when asked to introduce herself, excusing herself by saying, “I’m a little bit nervous.”
Both androids will be working at Miraikan and interacting with visitors as part of Ishiguro’s studies into human reactions to the machines. Ishiguro is well known for his work with “geminoids”, robots that bear a frightening resemblance to their creators. As part of his lecture process, Ishiguro takes his geminoid with him when he travels and even lets it deliver his lectures for him. During an interview with AFP, he explained the reasoning behind this latest exhibit:
This will give us important feedback as we explore the question of what is human. We want robots to become increasingly clever. We will have more and more robots in our lives in the future.
Granted, the unveiling did have its share of bugs. For her part, Otonaroid looked as if she could use some rewiring before beginning her new role as the museum’s science communicator, with her lips out of sync and her neck movements symptomatic of a bad night’s sleep. But Ishiguro insisted both would prove invaluable to his continued research, as museum visitors get to have conversations with the ‘droids and operate them as extensions of their own bodies.
And this is just one of many forays into a world where the line between robots and humans is becoming blurred. After a successful debut earlier this month, a chatty humanoid called Pepper is set to go on sale as a household companion in Japan starting next year. Designed by SoftBank using technology acquired from French robotics company Aldebaran, each robot will cost around $2,000 – about the same as a laptop.
Pepper can communicate through emotion, speech, or body language, and it’s equipped with both mics and proximity sensors. It will be possible to install apps and upgrade the unit’s functionality, the plan being to make Pepper far smarter than when you first bought it. It already understands 4,500 Japanese words, but perhaps more impressively, Pepper can apparently read its owner’s tone of voice to gauge their disposition.
Aldebaran CEO Bruno Maisonnier claims that robots that can recognize human emotion will change the way we live and communicate. And this is certainly a big step towards getting robots into our daily lives, at least if you live in Japan (the only place Pepper will be available for the time being). He also believes this is the start of a “robotic revolution” where robotic household companions that can understand and interact with their human owners will become the norm.
Hmm, a world where robots are increasingly indistinguishable from humans, can do human jobs, and are capable of understanding and mimicking our emotions. Oh, and they live in our houses too? Yeah, I’m just going to ignore the warning bells going off in my head now! And in the meantime, be sure to check out these videos of Kodomoroid, Otonaroid, and Pepper being unveiled for the first time:
Scientists have been staring at the surface of Mars for decades through high-powered telescopes. Only recently, and with the help of robotic missions, has anyone been able to look deeper. And with the success of the Spirit, Opportunity, and Curiosity rovers, NASA is preparing to dig deeper still. The space agency just got official approval to begin construction of the InSight lander, which will be launched in spring 2016. Once there, it’s going to explore the subsurface of Mars to see what’s down there.
Officially, the lander is known as Interior Exploration using Seismic Investigations, Geodesy and Heat Transport, and back in May, NASA passed the crucial mission final design review. The next step is to line up manufacturers and equipment partners to build the probe and get it to Mars on time. As with many deep-space launches, the timing is incredibly important: if not launched during the right window, when Earth and Mars are favorably aligned, the trip to Mars would be far too long.
Unlike the Curiosity rover, which landed on the Red Planet by way of a fascinating rocket-powered sky crane, the InSight will be a stationary probe more akin to the Phoenix lander. That probe was deployed to search the surface for signs of microbial life on Mars by collecting and analyzing soil samples. InSight, however, will not rely on a tiny shovel like Phoenix (pictured above) – it will have a fully articulating robotic arm equipped with burrowing instruments.
Also unlike its rover predecessors, once InSight sets down near the Martian equator, it will stay there for its entire two year mission – and possibly longer if it can hack it. That’s a much longer official mission duration than the Phoenix lander was designed for, meaning it’s going to need to endure some harsh conditions. This, in conjunction with InSight’s solar power system, made the equatorial region a preferable landing zone.
For the sake of its mission, the InSight lander will use a sensitive subsurface instrument called the Seismic Experiment for Interior Structure (SEIS). This device will track ground motion transmitted through the interior of the planet by so-called “marsquakes” and distant meteor impacts. A separate heat-flow analysis package will measure the heat radiating from the planet’s interior. From all of this, scientists hope to shed some light on Mars’ early history and formation.
For instance, Earth’s larger size has kept its core hot and spinning for billions of years, which provides us with a protective magnetic field. By contrast, Mars cooled very quickly, so NASA scientists believe more data on the formation and early life of rocky planets will be preserved in its interior. The lander will also connect to NASA’s Deep Space Network antennas on Earth to precisely track the position of Mars over time. A slight wobble could indicate that the Red Planet still has a small molten core.
If all goes to plan, InSight should arrive on Mars just six months after its launch in Spring 2016. Hopefully it will not only teach us about Mars’ past, but our own as well.
After the daring new type of landing that was performed with the Curiosity rover, NASA went back to the drawing board to come up with something even better. Their solution: the “Low-Density Supersonic Decelerator”, a saucer-shaped vehicle consisting of an inflating buffer that goes around the ship’s heat shield. It is hoped that this will help future spacecraft put on the brakes as they enter Mars’ atmosphere so they can make a soft, controlled landing.
Back in January and again in April, NASA’s Jet Propulsion Laboratory tested the LDSD using a rocket sled. Earlier this month, the next phase was to take place, in the form of a high-altitude balloon that would take it to an altitude of over 36,600 meters (120,000 feet). Once there, the device was to be dropped from the balloon and rocket-boosted until it reached a velocity of four times the speed of sound. Then the LDSD would inflate, and the teams on the ground would assess how it behaved.
Unfortunately, the test did not take place, as NASA lost its reserved time at the range in Hawaii where it was slated to go down. As Mark Adler, the Low Density Supersonic Decelerator (LDSD) project manager, explained:
There were six total opportunities to test the vehicle, and the delay of all six opportunities was caused by weather. We needed the mid-level winds between 15,000 and 60,000 feet [4,600 meters to 18,300 meters] to take the balloon away from the island. While there were a few days that were very close, none of the days had the proper wind conditions.
In short, bad weather foiled any potential opportunity to conduct the test before their time ran out. And while officials don’t know when they will get another chance to book time at the U.S. Navy’s Pacific Missile Range in Kauai, Hawaii, they’re hoping to start the testing near the end of June. NASA emphasized that the bad weather was quite unexpected, as the team had spent two years looking at wind conditions worldwide and determined Kauai was the best spot for testing their concept over the ocean.
If the technology works, NASA says it will be useful for landing heavier spacecraft on the Red Planet. This is one of the challenges the agency must surmount if it launches human missions to the planet, which would require more equipment and living supplies than any of the rover or lander missions mounted so far. And if everything checks out, the testing goes as scheduled and the funding is available, NASA plans to use an LDSD on a spacecraft as early as 2018.
And in the meantime, check out this concept video of the LDSD, courtesy of NASA’s Jet Propulsion Laboratory: