The Future is Here: Google Robot Cars Hit Milestone

It’s no secret that amongst its many kooky and futuristic projects, self-driving cars are something Google hopes to make real within the next few years. Late last month, Google’s fleet of autonomous automobiles reached an important milestone. After many years of testing out on the roads of California and Nevada, they logged well over one million kilometers (700,000 miles) of accident-free driving. To celebrate, Google has released a new video that demonstrates some impressive software improvements made over the last two years.

Most notably, the video demonstrates how its self-driving cars can now track hundreds of objects simultaneously – including pedestrians, a cyclist signaling a turn, a stop sign held by a crossing guard, and traffic cones. This is certainly exciting news for Google and enthusiasts of automated technology, as it demonstrates the vehicles’ ability to obey the rules of the road and react to situations that are likely to emerge and require decisions to be made.

In the video, we see Google’s car reacting to railroad crossings, large stationary objects, roadwork signs and cones, and cyclists. In the case of the cyclists, not only are the cars able to discern whether a cyclist wants to move left or right, they even watch out for cyclists coming from behind when making a right turn. And while the demo certainly makes the whole process seem easy and fluid, there is actually a considerable amount of work going on behind the scenes.

For starters, there is around $150,000 worth of equipment in each car performing real-time LIDAR and 360-degree computer vision, a complex and computing-intensive task. The software powering the whole process is also the result of years of development. Basically, every single driving situation that can possibly occur has to be anticipated and then painstakingly programmed into the software. This is an important qualifier when it comes to these “autonomous vehicles”: they are not capable of independent judgement, only of following pre-programmed instructions.
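To make that idea of “pre-programmed instructions” concrete, here is a minimal sketch in Python of what a rule-based decision step might look like. Everything in it (the object types, distance thresholds, and action names) is a hypothetical illustration of the general approach, not Google’s actual software:

```python
# A purely hypothetical rule-based decision step -- all class names,
# thresholds, and actions are illustrative, not Google's actual code.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str            # e.g. "cyclist", "stop_sign", "traffic_cone"
    distance_m: float    # distance from the car, in meters
    signaling_left: bool = False   # a cyclist indicating a turn

def decide_action(objects):
    """Walk the detected objects from nearest to farthest and return
    the first driving action whose hand-written rule fires."""
    for obj in sorted(objects, key=lambda o: o.distance_m):
        if obj.kind == "stop_sign" and obj.distance_m < 30:
            return "brake_to_stop"
        if obj.kind == "cyclist" and obj.signaling_left:
            return "yield_and_hold_back"
        if obj.kind == "traffic_cone":
            return "shift_within_lane"
    return "maintain_speed"

# Example: a cyclist 15 m ahead signals a left turn.
print(decide_action([DetectedObject("cyclist", 15.0, signaling_left=True)]))
# -> yield_and_hold_back
```

The real software, of course, has to weigh hundreds of such objects and rules at once, every fraction of a second, which is part of why it takes years to develop.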

While a lot has been said about the expensive LIDAR hardware, the most impressive aspect of the innovations is the computer vision. While LIDAR provides a very good idea of the lay of the land and the position of large objects (like parked cars), it doesn’t help with spotting speed limits or “construction ahead” signs, or with determining whether what’s ahead is a cyclist or a railroad crossing barrier. And Google has certainly demonstrated plenty of adeptness at computer vision in the past, with the latest versions of Street View and its Google Glass project.

Naturally, Google says that it has lots of issues to overcome before its cars are ready to move out from their home town of Mountain View, California, and begin driving people around. For instance, the road maps need to be finely tuned and expanded, and Google is likely to sell map packages in the future in the same way that apps are sold for smartphones. Meanwhile, the adoption of technologies like adaptive cruise control (ACC) and lane keep assist (LKA) will bring lots of almost-self-driving cars to the road over the next few years.

In the meantime, be sure to check out the video of the driverless car in action:


Source:
extremetech.com

The Future is Here: inFORM Tangible Media Interface

The future of computing is tactile. That’s the reasoning behind the inFORM, a revolutionary new interface produced by the MIT Media Lab’s Tangible Media Group. Unveiled earlier this month, the inFORM is basically a surface that changes shape in three dimensions, allowing users to not only interact with digital content, but even make simulated physical contact with other people.

Created by Daniel Leithinger and Sean Follmer and overseen by Professor Hiroshi Ishii, the technology behind the inFORM is actually quite simple. Basically, it functions like a fancy Pinscreen, one of those executive desk toys that lets you create a rough 3-D model of an object by simply pressing it into a bed of flattened pins.

However, with the inFORM, each of those “pins” is connected to a motor controlled by a nearby laptop. This not only moves the pins to render digital content physically, but also registers real-life objects interacting with its surface, thanks to the sensors of a hacked Microsoft Kinect. In short, you can touch hands with someone via Skype, or feel a stretch of terrain through Google Maps.
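As a rough mental model of the render loop (the grid size, depth range, and function below are my own assumptions for illustration, not the Tangible Media Group’s actual code), the core step amounts to converting each region of a Kinect depth frame into a target height for one motorized pin:

```python
# Conceptual sketch of the inFORM's render loop -- pin-grid resolution,
# actuator travel, and depth range are assumed values for illustration.
PIN_ROWS, PIN_COLS = 30, 30        # hypothetical pin-grid resolution
MAX_PIN_HEIGHT_MM = 100.0          # hypothetical actuator travel
NEAR_MM, FAR_MM = 500.0, 1500.0    # assumed usable Kinect depth range

def depth_to_pin_heights(depth_frame):
    """Downsample a Kinect depth frame (a 2-D list of depths in mm)
    to one target height per pin: nearer surfaces raise pins higher."""
    rows, cols = len(depth_frame), len(depth_frame[0])
    heights = [[0.0] * PIN_COLS for _ in range(PIN_ROWS)]
    for r in range(PIN_ROWS):
        for c in range(PIN_COLS):
            # Sample the depth pixel nearest this pin's grid position.
            d = depth_frame[r * rows // PIN_ROWS][c * cols // PIN_COLS]
            # Clamp to the usable range, then invert: near -> tall pin.
            d = min(max(d, NEAR_MM), FAR_MM)
            heights[r][c] = MAX_PIN_HEIGHT_MM * (FAR_MM - d) / (FAR_MM - NEAR_MM)
    return heights
```

In the actual system, those heights would presumably then be streamed to the motor controllers, while pressure against the pins is sensed as user input, closing the loop in both directions.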

Another possible application comes in the form of video conferencing, where remote participants can be displayed physically, allowing for a strong sense of presence and the ability to interact physically at a distance. However, the Tangible Media Group sees the inFORM as merely a step along the long road towards what they refer to as “Tangible Bits”, or a Tangible User Interface (TUI).

This concept is what the group sees as the physical embodiment of digital information and computation. This constitutes a move away from the current paradigm of “Painted Bits”, or Graphical User Interfaces (GUI), which is based on intangible pixels that do not engage users fully. As TMG states on their website:

Humans have evolved a heightened ability to sense and manipulate the physical world, yet the GUI based on intangible pixels takes little advantage of this capacity. The TUI builds upon our dexterity by embodying digital information in physical space. TUIs expand the affordances of physical objects, surfaces, and spaces so they can support direct engagement with the digital world.

It also represents a step towards what TMG refers to as “Radical Atoms”. One of the main constraints with TUIs, according to Professor Ishii and his associates, is their limited ability to change the form or properties of physical objects in real time. This constraint can make the physical state of TUIs inconsistent with the underlying digital models.

Radical Atoms, a vision which the group unveiled last year, looks to the far future, where materials can change form and appearance dynamically, becoming as reconfigurable as pixels on a screen. By bidirectionally coupling this material with an underlying digital model, dynamic changes in digital states would be reflected in tangible matter in real time, and vice versa.

This futuristic paradigm is something that could be referred to as a “Material User Interface” (MUI). In all likelihood, it would involve polymers or biomaterials embedded with nanoscopic wires that are able to change shape with the application of tiny amounts of current. Or, more boldly, materials composed of utility fogs or swarms of coordinated nanorobots that can alter their shape at will.

Certainly an ambitious concept, but as the inFORM demonstrates, it’s something that is getting closer, and at an ever-increasing rate. And you have to admit, though the full-scale model does look a little bit like a loom, it does make for a pretty impressive show. In the meantime, be sure to enjoy this video of the inFORM in action.


Source:
tangible.media.mit.edu

100,000 Stars: An Interactive Exploration of the Milky Way

With interactive maps becoming all the rage, I had a feeling it was only a matter of time before someone premiered an interactive browser experience that would let you explore the cosmos. And now there is one, and it goes by the name 100,000 Stars. Personally, I would have preferred Google Galaxy, like I suggested before, but forget it! You can’t teach these big time web developers anything 😉

In any case, 100,000 Stars is an experiment for the Chrome web browser, but it will also work with Firefox, Safari, or just about any other WebGL-capable browser you might have. Open it up, and you can see where our Solar System is in relation to the Orion Arm of the Milky Way Galaxy. Then zoom in to see the local star groups that are closest to us, our Sun, and the planets and asteroids that make up our Solar System.

Also, I should note that the site provides a guided tour for the newly initiated. I recommend you use that first, then try tinkering with the settings a little before mucking about to get a look at our little corner of the universe. The site can be a bit clunky at times, but keep in mind that there’s plenty of graphical info being streamed at any given time. But if your machine and/or internet connection is faster than mine (a distinct possibility), you might have no trouble at all.

Simply click here and start exploring!

Source:
thisiscolossal.com