The Future is Here: Google’s New Self-Driving Car

Google has just unveiled its very first, built-from-scratch-in-Detroit, self-driving electric robot car. The culmination of years' worth of research and development, the Google vehicle is undoubtedly cuter in appearance than other EVs, like the Tesla Model S or Toyota Prius. In fact, it looks more like a Little Tikes plastic car, right down to the smiley face on the front end. This is no doubt the result of clever marketing and an attempt to reduce apprehension about the safety and long-term effects of autonomous vehicles.

The battery-powered electric vehicle has a stop-go button, but no steering wheel or pedals. It also comes with some seriously expensive hardware – radar, lidar, and 360-degree cameras – mounted on a tripod on the roof. The tripod ensures good sightlines around the vehicle; at the moment, Google hasn't found a way to integrate the sensors seamlessly into the car's chassis. That is the long-term plan, but for now, the robotic tripod remains.

As the concept art above shows, the eventual goal appears to be to build the computer vision and ranging hardware into a slightly less obtrusive rooftop beacon. In terms of production, Google's short-term plan is to build around 200 of these cars over the next year, with road testing probably restricted to California for the next year or two. These first prototypes are mostly made of plastic, with battery-electric propulsion limited to a top speed of 25 mph (40 km/h).

Instead of an engine or "frunk," there's a foam bulkhead at the front of the car to protect the passengers. The interior holds just a couple of seats and some great big windows so passengers can enjoy the view while they ride in automated comfort. In a blog post, Google stated that its goal is "improving road safety and transforming mobility for millions of people." Driverless cars could certainly revolutionize travel for people who can't currently drive.

Improving road safety is a little more ambiguous, though. It's generally agreed that if all cars on the road were autonomous, there could be massive gains in safety and efficiency, both in fuel usage and in the number of cars the roads could carry. In the lead-up to that scenario, though, there are all sorts of questions about how to safely mix manual, semi-autonomous, and fully self-driving vehicles on the same roadways.

Plus, there are the inevitable questions of practicality and exigent circumstances. For starters, having no controls in the car but a stop-go button may sound simple and clever, but it creates problems. What's a driver to do when they need to move the car just a few feet? What happens in a tight parking situation, where the car has to be inched back and forth? Will Google's software allow for temporary double parking, or off-road driving for a concert or party?

Can you choose which parking spot the car will use, leaving the better or closer spots for someone with special needs (e.g. the elderly or physically disabled)? How will these cars handle right of way with pedestrians and other drivers? And is it even sensible to promote a system that will eventually make it easier to put more cars onto the road? Mass transit is widely considered the best option for a cleaner, less congested future. Could driverless cars become a reason not to develop ideas like the Hyperloop and other high-speed maglev trains?

All good questions, and ones that will no doubt have to be addressed as time goes on and production ramps up. In the meantime, there is no shortage of people interested in the concept and hoping to see where it will go, and plenty willing to take a test drive in the new robotic car. You can check out the results in the video below. Until then, try not to be too creeped out if you see a car with a robotic tripod on top and a very disengaged passenger in the front seat!


Sources:
extremetech.com, scientificamerican.com

Big News in Quantum Computing!

For many years, scientists have looked at the field of quantum machinery as the next big wave in computing. Whereas conventional computing stores and processes information as definite bits carried by particles (electrons), quantum computing exploits superposition and entanglement: quantum bits can occupy many states at once, letting certain computations effectively explore an exponentially large space of possibilities. That capability would make computers dramatically faster and more efficient for certain classes of problems, and could lead to an explosion in machine intelligence. And while the technology has yet to be fully realized, every day brings us one step closer…

One important step happened earlier this month with the installation of the D-Wave Two at the Quantum Artificial Intelligence Lab (QAIL) at NASA's Ames Research Center in Silicon Valley, where NASA has announced it will pursue exactly this kind of research. Not surprisingly, the ARC is only the second lab in the world to have a quantum computer. The only other lab to possess the 512-qubit, cryogenically cooled machine is the defense contractor Lockheed Martin, which bought its first D-Wave system in 2011 and has since upgraded to a D-Wave Two.

D-Wave’s new 512-qubit Vesuvius chip

And while there are still some who question the categorization of the D-Wave Two as a true quantum computer, most critics have acquiesced, since many of its components function in accordance with basic quantum principles. NASA, Google, and the Universities Space Research Association (USRA) even ran tests to confirm that the quantum computer offered a speed boost over conventional supercomputers – and it passed.

The new lab, which will be situated at NASA’s Advanced Supercomputing Facility at the Ames Research Center, will be operated by NASA, Google, and the USRA. NASA and Google will each get 40% of the system’s computing time, with the remaining 20% being divvied up by the USRA to researchers at various American universities. NASA and Google will primarily use the quantum computer to advance a branch of artificial intelligence called machine learning, which is tasked with developing algorithms that optimize themselves with experience.

As for what specific machine learning tasks NASA and Google actually have in mind, we can only guess. But it's a fair bet that NASA will be interested in optimizing flight paths to other planets, or devising a safer, better, faster landing procedure for the next Mars rover. As for Google, the smart money says it will use its time to develop complex AI algorithms for its self-driving cars, as well as to optimize its search engine and Google+.

But in the end, it's the long-range possibilities that offer the most excitement here. With NASA and Google now firmly in command of a quantum processor, some of the best and brightest minds in the world will be working to advance the fields of artificial intelligence, space flight, and high technology. It will be quite exciting to see what they produce…

Another important step took place back in March, when researchers at Yale University announced that they had developed a new way to change the quantum state of photons, the elementary particles researchers hope to use for quantum memory. This is good news, because true quantum computing – the kind that utilizes qubits for all of its processes – has continually eluded scientists and researchers in recent years, and reliable quantum memory is one of the missing pieces.

To break it down, today's computers are restricted in that they store information as bits, where each bit holds either a "1" or a "0." But a quantum computer is built around qubits (quantum bits) that can store a 1, a 0, or any combination of both at the same time. And while the qubits would make up the equivalent of a processor in a quantum computer, some sort of quantum random access memory (RAM) is also needed.
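To make the bit/qubit distinction concrete, here is a minimal sketch in plain Python (the names `qubit` and `measure_probabilities` are illustrative, not from any quantum library): a qubit is described by two amplitudes whose squared magnitudes give the odds of reading a 0 or a 1 when it is measured.

```python
import math

def qubit(alpha, beta):
    """Normalize two amplitudes into a valid qubit state (alpha, beta)."""
    norm = math.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
    return (alpha / norm, beta / norm)

def measure_probabilities(state):
    """Probability of observing 0 or 1 when the qubit is measured."""
    alpha, beta = state
    return abs(alpha) ** 2, abs(beta) ** 2

# A classical bit is definitely one value:
zero = qubit(1, 0)   # always reads 0

# A qubit can hold both values at once -- an equal superposition:
plus = qubit(1, 1)   # reads 0 or 1 with 50/50 odds

# Describing n qubits classically takes 2**n amplitudes, which is why
# a 512-qubit register is so hard to simulate on ordinary hardware.
amplitudes_needed = 2 ** 512
```

Note that this sketch only models measurement odds; in a real device, measuring the qubit collapses the superposition to a single definite bit.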

Gerhard Kirchmair, one of the Yale researchers, explained in a recent interview with Nature that photons are a good choice for this role because they can retain a quantum state for a long time and over a long distance. But you'll want to change the quantum information stored in the photons from time to time. What the Yale team has developed is essentially a way to temporarily make the photons used for memory "writeable," and then switch them back into a more stable state.

To do this, Kirchmair and his associates took advantage of what's known as a "Kerr medium" – a material whose refractive index changes with the intensity of the light passing through it. This is different from ordinary materials, which refract light (and any other form of electromagnetic field) the same way regardless of how intense it is.
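The intensity dependence that defines a Kerr medium is conventionally written as follows (a standard optics relation, not taken from the article itself):

```latex
n(I) = n_0 + n_2 I
```

Here $n_0$ is the material's ordinary refractive index, $I$ is the light intensity, and $n_2$ is the Kerr coefficient; in an ordinary material $n_2 = 0$, so the refraction never depends on how much light is shined on it.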

Thus, by exposing photons to a microwave field in a Kerr medium, the team was able to manipulate the photons' quantum states, making them a promising means of quantum memory storage. At the same time, they knew that storing these memory photons in a Kerr medium would prove unstable, so they added a vacuum-filled aluminum resonator to act as a coupler. When the resonator is decoupled, the photons are stable; when it is coupled, the photons are "writeable," allowing a user to input information and store it effectively.

This is not the first or only instance of researchers finding ways to toy with the state of photons, but it is currently the most stable and effective. And coupled with other efforts, such as the development of photonic transistors and other such components, or new ways to create photons seemingly out of thin air, we could be just a few years away from the first full and bona fide quantum processor!

Sources: extremetech.com, wired.com, nature.com