A Cleaner Future: Contaminant-Detecting Water Sensor

Jack Andraka is at it again! For those who follow this blog (or subscribe to Forbes, or watch TED Talks), this young man probably needs no introduction. But if not, you might not know that Andraka is the young man who – at 15 years of age – invented an inexpensive litmus test for detecting pancreatic cancer. This invention won him first prize at the 2012 Intel International Science and Engineering Fair (ISEF), and he followed it up less than a year later with a handheld device that could detect cancer and even explosives.

And now, Andraka is back with yet another invention: a biosensor that can quickly and cheaply detect water contaminants. His microfluidic biosensor, developed with fellow student Chloe Diggs, recently took the $50,000 first prize among high school entrants in the Siemens We Can Change the World Challenge. The pair developed their credit card-sized biosensor after learning about water pollution in a high school environmental science class.

As Andraka explained:

We had to figure out how to produce microfluidic [structures] in a classroom setting. We had to come up with new procedures, and we custom-made our own equipment.

According to Andraka, the device can detect six environmental contaminants: mercury, lead, cadmium, copper, glyphosate, and atrazine. It costs a dollar to make and takes 20 minutes to run, making it 200,000 times cheaper and 25 times more efficient than comparable sensors. At this point, making scaled-down versions of expensive, life-saving sensors has become second nature to Andraka. And in each case, he has managed to do it in a way that is extremely cost-effective.
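Reading those multipliers literally, the figures imply what a comparable conventional sensor would cost and how long a test would take. A quick back-of-envelope sketch (the article doesn't name the comparison sensor, so these are implied values only):

```python
# Figures quoted in the article
biosensor_cost_usd = 1.0     # cost to make one biosensor
biosensor_time_min = 20.0    # time per test

# "200,000 times cheaper" and "25 times more efficient" imply:
comparable_cost_usd = biosensor_cost_usd * 200_000   # ~$200,000 per sensor
comparable_time_min = biosensor_time_min * 25        # ~500 min (~8.3 hours)

print(f"Implied comparable sensor: ~${comparable_cost_usd:,.0f}, "
      f"~{comparable_time_min / 60:.1f} hours per test")
```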

For example, Andraka's litmus-test cancer detector was proven to be 168 times faster than current tests, 90% accurate, and 400 times more sensitive. In addition, his paper test costs 26,000 times less than conventional methods – which include CT scans, MRIs, ultrasounds, and cholangiopancreatography. These tests not only involve highly expensive equipment, they are usually administered only after serious symptoms have manifested themselves.

In much the same vein, Andraka's handheld cancer/explosive detector was built from simple, off-the-shelf consumer products. Using a cell phone case, a laser pointer, and an iPhone camera, he was able to craft a device that does the same job as a Raman spectrometer, but at a fraction of the size and cost. Whereas a conventional spectrometer is the size of a room and costs around $100,000, his handheld device is the size of a cell phone and is made of $15 worth of components.

As part of the project, Diggs and Andraka also developed an inexpensive water filter made out of plastic bottles. Next, they hope to do large-scale testing of their sensor in Maryland, where they live. They also want to develop a cell-phone-based sensor reader that lets users quickly evaluate water quality and post the test results online. Basically, it's all part of what is fast becoming the digitization of health and medicine, where the sensors are portable and the information can be uploaded and shared.

This isn't the only project that Andraka has been working on of late. Along with two other Intel Science Fair finalists – who came together with him to form Team Gen Z – he's working on a handheld medical scanner that will be entered in the Tricorder XPrize. This challenge offers $10 million to any laboratory or private inventor that can develop a device capable of diagnosing 15 diseases in 30 patients over a three-day period, while still being small enough to carry.

For more information on this project and Team Gen Z, check out their website here. And be sure to watch their promotional video for the XPrize competition:


Source:
fastcoexist.com

The Internet of Things: AR and Real World Search

When it comes to the future, it is clear that the concept of the "Internet of Things" holds sway. This idea – which states that all objects will someday be identifiable thanks to a virtual representation on the internet – is at the center of a great deal of the innovation that drives our modern economy. Be it wearables, wireless, augmented reality, voice or image recognition, the technologies that help us combine the real with the virtual are on the rise.

And so it's really no surprise that innovators are looking to take augmented reality to the next level. The fruit of some of this labor is Blippar, a market-leading image-recognition and augmented reality platform. Lately, the company has been working on a proof of concept for Google Glass showing that 3-D searches are doable. This sort of technology is already available in the form of apps for smartphones, but what is lacking is a central database that could turn any device into a visual search engine.

As Ambarish Mitra, the head of Blippar, stated, AR is already gaining traction among consumers thanks to some of the world's biggest industrial players recognizing the shift to visually mediated lifestyles. Examples include IKEA's interactive catalog, Heinz's AR recipe booklet, and Amazon's recent integration of the Flow AR technology into its primary shopping app. As this trend continues, we will need a Wikipedia-like database for 3-D objects that will be available to us anytime, anywhere.

Social networks and platforms like Instagram, Pinterest, Snapchat and Facebook have all driven a cultural shift in the way people exchange information. This takes the form of text updates, instant messaging, and uploaded images. But as the saying goes, “a picture is worth a thousand words”. In short, information absorbed through visual learning has a marked advantage over that which is absorbed through reading and text.

In fact, a recent NYU study found that people retain close to 80 percent of the information they consume through images, versus just 10 percent of what they read. If we were able to regularly consume rich content from the real world through our devices, we could learn, retain, and express ideas and information more effectively. Naturally, there will always be situations where text-based search is the most practical tool, but many searches arise from real-world experiences.

Right now, text is the only option available, and oftentimes people are unable to adequately describe what they are looking for. But an image-recognition technology that could turn any smartphone, tablet or wearable device into a scanner capable of identifying any 3-D object would vastly simplify things. Information could be absorbed in a more efficient way, using an object's features to pull up information from a rapidly learning engine.

For better or for worse, the designs of wearable consumer electronics have come to reflect a new understanding in the past few years. Basically, they have become extensions of our senses, much as Marshall McLuhan wrote in his 1964 book Understanding Media: The Extensions of Man. Google Glass is representative of this revolutionary change, a step in the direction of users interacting with the environment around them through technology.

Leading tech companies are already investing time and money into the development of their own AR products, and countless patents and research allocations are being made with every passing year. Facebook's acquisition of virtual reality company Oculus is the most recent example, but even Samsung received a patent earlier this year for a camera-based augmented reality keyboard that is projected onto the fingers of the user.

Augmented reality has already proven itself to be a major industry – with 60 million users and around half a billion dollars in global revenues in 2013 alone. It's expected to exceed $1 billion annually by 2015, and combined with a Google Glass-type device, AR could eventually allow individuals to build vast libraries of data that will be the foundation for finding any 3-D object in the physical world.

In other words, the Internet of Things will come one step closer, with an ever larger and (in all likelihood) smarter database of visual information at its base. Oh dear, I sense another Skynet reference coming on! In the meantime, enjoy this video that showcases Blippar's vision of what this future of image overlay and recognition will look like:


Source: wired.com, dashboardinsight.com, blippar.com

Stephen Hawking: AI Could Be a “Real Danger”

In a hilarious appearance on "Last Week Tonight" – John Oliver's HBO show – guest Stephen Hawking spoke about some rather interesting concepts. Among these were the concepts of "imaginary time" and, more interestingly, artificial intelligence. And much to the surprise of Oliver, and perhaps more than a few viewers, Hawking was not too keen on the idea of the latter. In fact, his predictions were just a tad bit dire.

Of course, this is not the first time Oliver has had a scientific authority on his show, as demonstrated by a recent episode that dealt with Climate Change and featured guest speaker Bill Nye "The Science Guy". When asked about the concept of imaginary time, Hawking explained it as follows:

Imaginary time is like another direction in space. It’s the one bit of my work science fiction writers haven’t used.

In sum, imaginary time has something to do with time that runs in a different direction to the time that guides the universe and ravages us on a daily basis. And according to Hawking, the reason sci-fi writers haven't built stories around imaginary time is simple: "They don't understand it". As for artificial intelligence, Hawking replied without any sugar-coating:

Artificial intelligence could be a real danger in the not too distant future. [For your average robot could simply] design improvements to itself and outsmart us all.

Oliver, channeling his inner 9-year-old, asked: "But why should I not be excited about fighting a robot?" Hawking offered a very scientific response: "You would lose." And in that respect, he was absolutely right. One of the greatest concerns with AI, for better or for worse, is that a superior intelligence, left to its own devices, would find ways to produce better and better machines without human oversight or intervention.

At worst, this could lead to the machines concluding that humanity is no longer necessary. At best, it would lead to an earthly utopia where machines address all our worries. But in all likelihood, it will lead to a future where the pace of technological change is impossible to predict. As history has repeatedly shown, technological change brings with it all kinds of social and political upheaval. If it becomes a runaway effect, humanity will find it impossible to keep up.

Keeping things light, Oliver began to worry that Hawking wasn't talking to him at all – that this could instead be a computer spouting wisdom. To which Hawking replied: "You're an idiot." Oliver also wondered whether, given that there may be many parallel universes, there might be one where he is smarter than Hawking. "Yes," replied the physicist. "And also a universe where you're funny."

Well at least robots won’t have the jump on us when it comes to being irreverent. At least… not right away! Check out the video of the interview below:


Source: cnet.com

Electronic Entertainment Expo 2014

This past week, the Electronic Entertainment Expo (commonly referred to as E3) kicked off. This annual trade fair, which is presented by the Entertainment Software Association (ESA), is used by video game publishers, accessory manufacturers, and members of the computing industry to present their upcoming games and game-related merchandise. The festivities wrapped up on Friday, and were the source of some controversy and much speculation.

For starters, the annual show opened amidst concerns that the dent caused by Massively Multiplayer Online Role-Playing Games (MMORPGs) and online gaming communities would start to show. And this did seem to be the case. The annual Los Angeles show normally sets the expectations for the rest of the year in video games – and that certainly did happen – but E3 2014 was mainly about clearing the runway for next year.

Nowhere was this more clear than with Nintendo, which was the source of quite a bit of buzz when the Expo began. But it was evident that games – particularly for the Wii U – were not going to materialize until 2015. The company got a jump on the next-generation console battle by launching its Wii U in late 2012, a year ahead of Sony and Microsoft, but poor sales have led big game developers to largely abandon it.

And while the company did announce a number of new games – including an open-world Legend of Zelda; a new Mario game called Mario Maker, which allows players to create custom levels; and Splatoon, in which teams of players shoot coloured ink at each other – none are scheduled for release until next year. That dearth of blockbusters for the rest of 2014 is mirrored at Microsoft and Sony, which are also light on heavyweight first-party titles for the rest of this year.

The companies have some big guns in the works, such as Halo 5: Guardians and Uncharted 4: A Thief's End, but these too are scheduled for release in 2015. However, with the brisk sales of the Xbox One and PlayStation 4 consoles, both companies have the luxury of taking their time with big games. Nintendo is not so fortunate, since the head start it took with the Wii U leaves it with a big gap that it does not appear to be filling.

Nintendo's comparatively under-powered Wii U will look even less capable than its rivals as time passes, meaning the company can't afford to wait much longer to get compelling titles to market, especially as financial losses mount. Even long-time Nintendo supporters such as Ubisoft aren't exactly sure what to make of the Wii U's future. The other big question heading into E3 was whether Microsoft could regain its mojo.

The software giant bumbled the Xbox One launch last year and alienated many gamers, mainly by focusing on TV and entertainment content instead of gaming and by tying several unpopular policies – including restrictions on used games – to the console. The company eventually relented, but the Xbox One still came bundled with the voice- and motion-sensing Kinect peripheral and a price tag that was $100 higher than Sony's rival PlayStation 4.

The result is that while the Xbox One has sold faster than the Xbox 360 did – five million units so far – it has still moved two million fewer units than the PS4. Changes began in March, when Microsoft executive Phil Spencer, known as a champion of games, took over the Xbox operation. He wasted no time in stressing that the console is mainly about gaming, and made the Kinect optional – thus lowering the Xbox One's price to match the PS4's.

That was certainly the focus for Microsoft at E3. TV features weren't even mentioned during the company's one-and-a-half-hour press conference on Monday, with Microsoft instead talking up more than 20 upcoming games. As Mike Nichols, corporate vice-president of Xbox and studios marketing, said in an interview:

We didn’t even talk about all the platform improvements to improve the all-out gaming experience that we’ve made or will be making. We wanted to shine a light on the games.

Another big topic that generated talk at the show was virtual reality, as this year's E3 featured demonstrations of the Oculus Rift VR headset and Sony's Project Morpheus. The former has been the source of much attention in recent years, with many commentators claiming that it has effectively restored interest in VR gaming. Though popular for a brief period in the mid-'90s, VR quickly waned as bulky equipment and unintuitive controls led to it being abandoned.

But the Oculus Rift, whose maker was recently bought by Facebook for $2 billion, was undeniably the hottest thing on the show floor. The demo booth, where people got to try the headset on and take it for a run, was booked solid throughout the expo. Sony also wowed attendees with demos of its own VR headset, Project Morpheus. And while the PlayStation maker's effort isn't as far along in development as the Oculus Rift, it does work, and it adds legitimacy to the VR field.

And as already noted, the expo also had its share of controversy. For starters, Ubisoft stuck its proverbial foot in its mouth when a developer from its Montreal studio admitted that plans for a female protagonist in the upcoming Assassin's Creed: Unity had been scrapped because it would supposedly have been "too much work". This led to a serious drubbing from internet commentators, who called the company sexist for its remarks.

Legendary Japanese creator Hideo Kojima also had to defend the torture scenes in his upcoming Metal Gear Solid V: The Phantom Pain – starring Canadian actor Kiefer Sutherland (the man loves torture!) – which upset some viewers. Kojima said he felt the graphic scenes were necessary to explain the main character's motivations, and that games will never be taken seriously as culture if they can't deal with sensitive subjects.

And among the usual crop of violent shoot-'em-up titles, previews of Electronic Arts' upcoming Battlefield: Hardline hint that the game is likely to stir up its share of controversy when it's released this fall. The game puts players in the shoes of cops and robbers as they blow each other away in the virtual streets of Los Angeles. Military shooters are one thing, but killing police will undoubtedly ruffle some feathers in the real world.

If one were to draw any conclusion from this year's E3, it would undoubtedly be that times are both changing and staying the same. From console gaming garnering less and less of the gaming market, to the second coming of virtual reality, there is a shift in technology under way which may or may not be good for the current captains of industry. At the same time, the competition to maintain a large share of the market continues, with Sony, Microsoft and Nintendo at the forefront.

But in the end, arguably the most buzz was focused on the trailers for the most-anticipated game releases. These included the trailers for Batman: Arkham Knight; Call of Duty: Advanced Warfare; Far Cry 4; Sid Meier's Civilization: Beyond Earth; and the aforementioned Metal Gear Solid V: The Phantom Pain and Assassin's Creed: Unity. Be sure to check these out below:

Assassin's Creed: Unity


Batman: Arkham Knight


Call of Duty: Advanced Warfare


Halo 5: Guardians


Sources:
cbc.ca, ca.ign.com, e3expo.com, gamespot.com

Paraplegic Kicks Off World Cup in Exoskeleton

The 2014 FIFA World Cup made history when it opened in São Paulo this week: a 29-year-old paraplegic man named Juliano Pinto kicked a soccer ball with the aid of a robotic exoskeleton. It was the first time a mind-controlled prosthetic was used in a sporting event, and it represented the culmination of months' worth of planning and years' worth of technical development.

The exoskeleton was created with the help of over 150 researchers led by neuroscientist Dr. Miguel Nicolelis of Duke University, whose collaborative effort was called the Walk Again Project. As Pinto successfully made the kick-off with the exoskeleton, the Walk Again Project scientists stood by inside the Corinthians Arena, watching and smiling proudly. And the resulting buzz did not go unnoticed.

Immediately after the kick, Nicolelis tweeted about the groundbreaking event, saying simply: "We did it!" The moment was monumental considering that only a few months ago, Nicolelis was excited just to have people talking about the idea of a mind-controlled exoskeleton being tested in such a grand fashion. As he said in an interview with Grantland after the event:

Despite all of the difficulties of the project, it has already succeeded. You go to Sao Paulo today, or you go to Rio, people are talking about this demo more than they are talking about football, which is unbelievably impossible in Brazil.

Dr. Gordon Cheng, a team member and the lead robotics engineer of the Technical University of Munich, explained how the exoskeleton works in an interview with BBC News:

The basic idea is that we are recording from the brain and then that signal is being translated into commands for the robot to start moving.

The result of many years of development, the mind-controlled exoskeleton represents a breakthrough in restoring ambulatory ability to those who have lost motion due to injury. Using metal braces that were first tested on monkeys, the exoskeleton relies on a series of wireless electrodes attached to the head that collect brainwaves, which then signal the suit to move. The braces are stabilized by gyroscopes and powered by a battery carried by the kicker in a backpack.

Originally, a teenage paraplegic was expected to make the kick-off. However, after a rigorous selection process that lasted many months, the 29-year-old Pinto was selected. And in performing the kick-off, he participated in an event designed to galvanize the imaginations of millions of people around the world. It's a new age of technology, friends, where disability is no longer a permanent thing.

And in the meantime, enjoy this video of the event:


Source: cnet.com

Frontiers in 3-D Printing: Frankenfruit and Blood Vessels

3-D printing is pushing the boundaries of manufacturing all the time, expanding its repertoire to include more and more manufactured products and even organic materials. Amongst the many possibilities this offers, arguably the most impressive are those that fall into the categories of synthetic food and replacement organs. In this vein, two major breakthroughs took place last month, with the first-time unveiling of both 3-D printed hybrid fruit and 3-D printed blood vessels.

The first comes from Dovetailed, a UK-based design company which presented its 3-D food printer on Saturday, May 24th, at the Tech Food Hack event in Cambridge. Although details on how it works are still a bit sparse, it is said to utilize a technique known as "spherification" – a molecular gastronomy technique in which liquids are shaped into tiny spheres – which are then combined with spheres of different flavors into a fruit shape.

According to a report on 3DPrint, the process likely involves combining fruit puree or juice with sodium alginate and then dripping the mixture into a bowl of cold calcium chloride. This causes the droplets to form into tiny caviar-like spheres, which can subsequently be mixed with spheres derived from other fruits. The blended spheres can then be pressed, extruded or otherwise formed into fruit-like shapes for consumption.

The designers claim that the machine is capable of 3D-printing existing types of fruit such as apples or pears, or user-invented combined fruits, within seconds. They add that the taste, texture, size and shape of those fruits can all be customized. As Vaiva Kalnikaitė, creative director and founder of Dovetailed, explained:

Our 3D fruit printer will open up new possibilities not only to professional chefs but also to our home kitchens – allowing us to enhance and expand our dining experiences… We have been thinking of making this for a while. It's such an exciting time for us as an innovation lab. We have re-invented the concept of fresh fruit on demand.

And though the idea of 3-D printed fruit might seem unnerving to some (the nickname "Frankenfruit" is certainly indicative of that), it is an elegant solution to the question of what to do in an age where fresh fruit and produce are likely to become increasingly rare for many. With the effects of Climate Change (which include increased rates of drought and crop failure) expected to intensify in the coming decades, millions of people around the world will have to look elsewhere to satisfy their nutritional needs.

As we rethink the very nature of food, solutions that can provide us sustenance and make it look like the real thing are likely to be the ones that get adopted. A video of the printer in action is shown below:


Meanwhile, in the field of bioprinting, researchers have made another breakthrough that may revolutionize the field of medicine. When it comes to replacing vital parts of a person's anatomy, finding replacement blood vessels and arteries can be just as daunting as finding sources of replacement organs, limbs, skin, or any other biological material. And thanks to the recent efforts of a team from Brigham and Women's Hospital (BWH) in Boston, MA, it may now be possible to fabricate these using a bioprinting technique.

The study was published online late last month in Lab on a Chip. The study's senior author, Ali Khademhosseini – PhD, biomedical engineer, and director of the BWH Biomaterials Innovation Research Center – explained the challenge and their goal as follows:

Engineers have made incredible strides in making complex artificial tissues such as those of the heart, liver and lungs. However, creating artificial blood vessels remains a critical challenge in tissue engineering. We’ve attempted to address this challenge by offering a unique strategy for vascularization of hydrogel constructs that combine advances in 3D bioprinting technology and biomaterials.

The researchers first used a 3D bioprinter to make a template of agarose (a naturally derived sugar-based molecule) fibers to serve as the mold for the blood vessels. They then covered the mold with a gelatin-like substance called hydrogel, forming a cast over the mold which was then reinforced via photocrosslinking. Khademhosseini and his team were able to construct microchannel networks exhibiting various architectural features – in other words, complex channels with interior layouts similar to those of organic blood vessels.

They were also able to successfully embed these functional and perfusable microchannels inside a wide range of commonly used hydrogels, such as methacrylated gelatin or polyethylene glycol-based hydrogels. In the former case, the cell-laden gelatin was used to show how the fabricated vascular networks functioned to improve mass transport, cellular viability and cellular differentiation. Moreover, endothelial monolayers were successfully formed within the fabricated channels.

According to Khademhosseini, this development is right up there with the possibility of individually-tailored replacement organs or skin:

In the future, 3D printing technology may be used to develop transplantable tissues customized to each patient’s needs or be used outside the body to develop drugs that are safe and effective.

Taken as a whole, the strides being made in all fields of additive manufacturing – from printed metal products, robotic parts, and housing, to synthetic foods and biomaterials – all add up to a future where just about anything can be manufactured, and in a way that is remarkably more efficient and advanced than current methods allow.

Sources: gizmag.com, 3dprint.com, phys.org

News from Space: ISS Sends First Transmission with Lasers

In recent years, the International Space Station has become more and more media savvy, thanks to the efforts of astronauts to connect with Earthbound audiences via social media and YouTube. However, the communications setup – which until now relied on 1960s-vintage radio-wave transmissions – was a little outdated for this task. That has since changed with the addition of the Optical Payload for Lasercom Science (OPALS) laser communication system.

Developed by NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California, OPALS is designed to test the effectiveness of lasers as a higher-bandwidth substitute for radio waves, one that can deal with substantially larger information packages. As Matt Abrahamson, OPALS mission manager at JPL, said in a recent video statement:

We collect an enormous amount of data out in space, and we need to get it all to the ground. This is an alternative that’s much faster than our traditional radio waves that we use to communicate back down to the ground.

The OPALS laser communication system was delivered to the ISS on April 20 by a SpaceX unmanned Dragon space freighter and is currently undergoing a 90-day test. For this test, the crew used OPALS to transmit the "Hello, World" video from the ISS to a ground station on Earth. This was no simple task, since the station orbits Earth at an altitude of about 418 km (260 mi) and travels at a speed of 28,000 km/h (17,500 mph). The result is that the target slides across the laser's field of view at an incredibly fast rate.

According to Bogdan Oaida, the OPALS systems engineer at JPL, this task was pretty unprecedented:

It’s like trying to use a laser to point to an area that’s the diameter of a human hair from 20-to-30 feet away while moving at half-a-foot per second. It’s all about the pointing.
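Oaida's analogy lines up well with the orbital figures given above. As a rough check (assuming an overhead pass, where the angular rate seen from the ground is simply orbital speed divided by altitude):

```python
# ISS figures from the article
altitude_km = 418.0
speed_kms = 28_000.0 / 3600.0             # 28,000 km/h ≈ 7.78 km/s

# Angular rate as seen from a ground station directly beneath the ISS
iss_rate_rad_s = speed_kms / altitude_km  # ≈ 0.0186 rad/s

# Oaida's analogy: a spot moving half a foot per second, seen from ~25 feet
analogy_rate_rad_s = 0.5 / 25.0           # = 0.02 rad/s

print(f"ISS overhead pass: {iss_rate_rad_s:.4f} rad/s")
print(f"Hair analogy:      {analogy_rate_rad_s:.4f} rad/s")
```

The two rates agree to within about 10 percent, so the "human hair from 20-to-30 feet away" comparison is more than just colorful language.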

However, the test went off without a hitch, with the 37-second video taking just 3.5 seconds to transmit – much faster than previous downlink methods. Abrahamson said that the video, a lively montage of various communication methods, got its title as an homage to the first message output by standard computer programs.

The OPALS system sought out and locked onto a laser beacon from the Optical Communications Telescope Laboratory ground station at the Table Mountain Observatory in Wrightwood, California. It then transmitted its own 2.5-watt, 1,550-nanometer laser, modulated to send the video at a peak rate of 50 megabits per second. According to NASA, OPALS transmitted the video in 3.5 seconds instead of the 10 minutes that conventional radio would have required.
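Taking NASA's quoted figures at face value, the implied size of the video and the speed-up over radio work out as follows (a rough estimate, since it assumes the whole transfer ran at the 50 Mbps peak rate):

```python
# Figures quoted for the OPALS test
peak_rate_mbps = 50.0        # peak optical downlink rate
laser_time_s = 3.5           # laser transmission time
radio_time_s = 10 * 60.0     # NASA's estimate for conventional radio

video_megabits = peak_rate_mbps * laser_time_s      # 175 Mb
video_megabytes = video_megabits / 8                # ~22 MB
implied_radio_mbps = video_megabits / radio_time_s  # ~0.29 Mbps
speedup = radio_time_s / laser_time_s               # ~171x

print(f"Video size: ~{video_megabytes:.0f} MB")
print(f"Implied radio rate: ~{implied_radio_mbps:.2f} Mbps")
print(f"Speed-up: ~{speedup:.0f}x")
```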

Needless to say, the astronauts who contribute to the ISS's ongoing research programs are pretty stoked about this upgrade. With a system capable of transmitting far more information at a much faster rate, they will now be able to communicate with the ground more easily and efficiently. Not only that, but educational videos produced in orbit will be much easier to send. What's more, the ISS will have a much easier time communicating with deep space missions in the future.

This puts the ISS in a good position to oversee future missions to Mars, Europa, the Asteroid Belt, and far, far beyond! As Abrahamson put it in a video statement:

It’s incredible to see this magnificent beam of light arriving from our tiny payload on the space station. We look forward to experimenting with OPALS over the coming months in hopes that our findings will lead to optical communications capabilities for future deep space exploration missions.

And in the meantime, check out the video from NASA’s Jet Propulsion Laboratory, showing the “Hello World” video and explaining the groundbreaking implications of the new system:


Sources:
cnet.com, gizmag.com

The Future is Here: Black Hawk Drones and AI pilots

The US Army’s most iconic helicopter is about to go autonomous for the first time. In its ongoing drive to reduce troop numbers and costs, the Army is now letting the five-ton helicopter carry out autonomous expeditionary and resupply operations. This began last month when Sikorsky Aircraft, the defense contractor that produces the UH-60 Black Hawk, demonstrated hover and flight capability in an “optionally piloted” version of the craft for the first time.

Sikorsky has been working on the project since 2007 and convinced the Army’s research department to bankroll further development last year. As Chris Van Buiten, Sikorsky’s vice president of Technology and Innovation, said of the demonstration:

Imagine a vehicle that can double the productivity of the Black Hawk in Iraq and Afghanistan by flying with, at times, a single pilot instead of two, decreasing the workload, decreasing the risk, and at times when the mission is really dull and really dangerous, go it all the way to fully unmanned.

The Optionally Piloted Black Hawk (OPBH) operates under Sikorsky’s Manned/Unmanned Resupply Aerial Lifter (MURAL) program, which couples the company’s advanced Matrix aviation software with its man-portable Ground Control Station (GCS) technology. Matrix, introduced a year ago, gives rotary and fixed-wing vertical take-off and landing (VTOL) aircraft a high level of system intelligence to complete missions with little human oversight.

Mark Miller, Sikorsky’s vice-president of Research and Engineering, explained in a statement:

The autonomous Black Hawk helicopter provides the commander with the flexibility to determine crewed or un-crewed operations, increasing sorties while maintaining crew rest requirements. This allows the crew to focus on the more ‘sensitive’ operations, and leaves the critical resupply missions for autonomous operations without increasing fleet size or mix.

The Optionally Piloted Black Hawk fits into the larger trend of the military finding technological ways of reducing troop numbers. While it can be controlled from a ground control station, it can also make crucial flying decisions without any human input, relying solely on its ‘Matrix’ proprietary artificial intelligence technology. Under the guidance of these systems, it can fly a fully autonomous cargo mission and can operate both ways: unmanned or piloted by a human.

And this is just one of many attempts by military contractors and defense agencies to bring remote and autonomous control to more classes of aerial vehicles. Last month, DARPA announced a new program called Aircrew Labor In-Cockpit Automation System (ALIAS), the purpose of which is to develop a portable, drop-in autopilot that reduces the number of crew members on board, making a single pilot a “mission supervisor.”

Military aircraft have grown increasingly complex over the past few decades, and automated systems have also evolved to the point that some aircraft can’t be flown without them. However, the complex controls and interfaces require intensive training to master and can still overwhelm even experienced flight crews in emergency situations. In addition, many aircraft, especially older ones, require large crews to handle the workload.

According to DARPA, avionics upgrades can help alleviate this problem, but only at a cost of tens of millions of dollars per aircraft type, which makes such a solution slow to implement. This is where the ALIAS program comes in: instead of retrofitting planes with a bespoke automated system, DARPA wants to develop a tailorable, drop‐in, removable kit that takes up the slack and reduces the size of the crew by drawing on both existing work in automated systems and newer developments in unmanned aerial vehicles (UAVs).

DARPA says that it wants ALIAS to not only be capable of executing a complete mission from takeoff to landing, but also handle emergencies. It would do this through the use of autonomous capabilities that can be programmed for particular missions, as well as constantly monitoring the aircraft’s systems. But according to DARPA, the development of the ALIAS system will require advances in three key areas.

First, because ALIAS will require working with a wide variety of aircraft while controlling their systems, it will need to be portable and confined to the cockpit. Second, the system will need to use existing information about aircraft, procedures, and flight mechanics. And third, ALIAS will need a simple, intuitive, touch and voice interface because the ultimate goal is to turn the pilot into a mission-level supervisor while ALIAS handles the second-to-second flying.

At the moment, DARPA is seeking participants to conduct interdisciplinary research aimed at a series of technology demonstrations from ground-based prototypes, to proof of concept, to controlling an entire flight with responses to simulated emergency situations. As Daniel Patt, DARPA program manager, put it:

Our goal is to design and develop a full-time automated assistant that could be rapidly adapted to help operate diverse aircraft through an easy-to-use operator interface. These capabilities could help transform the role of pilot from a systems operator to a mission supervisor directing intermeshed, trusted, reliable systems at a high level.

Given time and the rapid advance of robotics and autonomous systems, we are likely just a decade away from aircraft being controlled by sentient or semi-sentient systems. Alongside killer robots (assuming they are not preemptively made illegal), UAVs, and autonomous hovercraft, it is entirely possible that future wars will be fought by machines. At that point, the very definition of war will change. And in the meantime, check out this video of the history of unmanned flight:


Sources:
wired.com, motherboard.vice.com, gizmag.com, darpa.mil

Skyrim – Game of Thrones Theme!

It was bound to happen sooner or later, what with Season Four of GOT coming to an end and the current popular obsession with mash-ups. In this video, Vimeo user Brady Wold mashed up the fantasy game The Elder Scrolls V: Skyrim with the intro theme from Game of Thrones to create something very watchable and fun. Using locations within the realm of Tamriel, the animation sweeps across the lands of Skyrim and watches cities like Whiterun, Riften, and others assemble themselves from the ground up.

Ever since its release in 2011, this RPG has been renowned for featuring elements that are quite similar to the HBO series and the fictional A Song of Ice and Fire universe on which it is based. This includes sword and sorcery, medieval history and clothing, dragons, epic fantasy, and a common sense of aesthetics. And if that’s not enough for you, there are a ton of GOT mods that can be added to the game to bring in content and items from the series.

For instance, I myself experimented by adding weapons like Ice (Eddard Stark’s huge-ass sword), Longclaw (Jon Snow’s bastard sword) and Needle (Arya Stark’s pigsticker) into the game with the “GOT Weapons Pack”. You can also download an “Arya Stark Follower” mod that has a version of this young character follow you around and assist you, and there are numerous others that allow you to integrate livery and standards from the GOT universe into the game.

And there’s even a mod that makes it so that whenever you fire up Skyrim, instead of seeing the opening Bethesda logo, this video animation plays. New ones emerge every week, including ones from the LOTR franchise and other fantasy universes. It kind of makes you wonder why the studios even bother making games anymore! Couldn’t an army of modders simply build MMORPGs online from now on, cutting out the video game makers altogether?

I should keep my voice down, though – I don’t want to encourage said folks. Some of the mods they have created already border on bad taste. Lord only knows what kind of stuff they’d allow for if they had total freedom! In the meantime, enjoy the video:


Source:
wired.com

Climate Crisis: Bigger Storm Waves and Glacier Collapse

Climate change is a multifaceted issue, since no single consequence takes precedence over the others. However, one undeniable consequence is the effect rising sea levels will have, thanks to rising temperatures and melting polar ice caps. Unfortunately, a new paper from Eric Rignot at NASA’s Jet Propulsion Laboratory claims that some glaciers in West Antarctica “have passed the point of no return”.

A section of glaciers along West Antarctica’s coastline on the Amundsen Sea was previously predicted to be solid enough to last thousands of years. However, the JPL report finds that the ice will continue to slip into the water and melt much faster than expected. These massive glaciers are releasing tremendous amounts of ice each year, nearly as much as the entire Greenland Ice Sheet sheds. When they are gone, global sea levels will have risen by about 1.2 meters (4 feet).
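A 1.2-meter rise is a staggering amount of ice. As a rough illustration (the ocean surface area and ice density below are my assumptions, not figures from the report, and this ignores details like which ice is already floating):

```python
# Rough scale of the claim: how much ice does a 1.2 m global sea-level
# rise imply? Ocean area and ice density are assumptions for illustration.

ocean_area_m2 = 3.6e14     # global ocean surface area, ~3.6e8 km^2 (assumption)
rise_m = 1.2               # sea-level rise quoted in the article

water_volume_m3 = ocean_area_m2 * rise_m
ice_volume_m3 = water_volume_m3 * (1000 / 917)   # ice is less dense than water

print(f"meltwater volume: ~{water_volume_m3 / 1e9:,.0f} km^3")
print(f"ice volume: ~{ice_volume_m3 / 1e9:,.0f} km^3")
```

That works out to several hundred thousand cubic kilometers of ice, which gives a sense of why the loss of a single glacial sector is such big news.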

Rignot and his team came to this conclusion after analyzing three critical factors of glacier stability: slope of the terrain, flow rate, and the amount of the glacier floating in the water. Flow rate was the topic of a paper Rignot’s team published previously in Geophysical Research Letters where they determined the flow rate of these Antarctic glaciers has increased over the last few decades. The current paper discusses the slope and how much of the glacier is actually floating on seawater.

The conclusions he and his team came to were quite dire. As he summarized them in a recent press conference:

The collapse of this sector of West Antarctica appears to be unstoppable. The fact that the retreat is happening simultaneously over a large sector suggests it was triggered by a common cause, such as an increase in the amount of ocean heat beneath the floating sections of the glaciers. At this point, the end of this sector appears to be inevitable.

Another recent study, which appeared last month in the journal Nature, addressed another major problem threatening the polar ice caps. This study, compiled by researchers from the National Institute of Water and Atmospheric Research and The University of Newcastle, found that ocean waves whipped up by storms hundreds or even thousands of miles away from Earth’s poles could play a bigger role than previously thought in breaking up polar sea ice and contributing to its melt.

According to the study, these waves penetrate further into the fields of sea ice around Antarctica than current models suggest, and bigger waves might become more common near the ice edges at both poles as climate change alters wind patterns. Incorporating this information into models could help scientists better predict the patterns of retreat and expansion seen in the sea ice in both Antarctica and the Arctic, patterns that are at least partly related to the effects of climate change, the researchers say.

Sea ice, as its name suggests, is frozen ocean water, and therefore differs from icebergs, glaciers and their floating tongues called ice shelves – all of which originate on land. Sea ice grows in the winter months, and wanes as summer’s warmth causes it to melt. The amount of ice present can influence the movement of ocean currents — on average, about 9.7 million square miles of the ocean is covered with sea ice, according to the U.S. National Snow and Ice Data Center (NSIDC).

Researchers in Australia and New Zealand wanted to see how the action of big waves — defined as those with a height of at least 3 meters (about 10 feet) — might play a role in influencing the patterns of retreat and expansion, and if they could help improve the reliability of sea ice models. Prior to this study, no one had measured the propagation of large waves through sea ice, because the ice lies in some of the most remote regions on the planet, and icebreaker ships must be used to plow through it.

To conduct their research, Alison Kohout – of New Zealand’s National Institute of Water and Atmospheric Research and the lead author on the study – went on a two-month ocean voyage with her colleagues to drop five buoys onto the sea ice that could measure the waves as they passed. It is thought that the ice behaves elastically as the waves pass through, bending with the wave peaks and troughs, weakening, and eventually breaking.

What the team found was that the big waves weren’t losing energy as quickly as smaller waves, allowing them to penetrate much deeper into the ice field and break up the ice there. That exposes more of the ice to the ocean, potentially causing more rapid melting and pushing back the edge of the sea ice. The researchers also compared observed positions of the sea ice edge with modeled wave heights in the Southern Ocean from 1997 to 2009 and found a good match between the waves and the patterns of retreat and expansion.
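The basic idea – that wave energy decays with distance into the ice, and that big waves decay more slowly – can be sketched with a toy model. To be clear, this is purely illustrative: the exponential form, the attenuation coefficients, and the sharp 3-meter cutoff are my simplifications, not the model from the Nature paper.

```python
# Illustrative toy model only (NOT the study's actual model): wave height
# decays exponentially with distance into the ice field, and waves at or
# above the study's 3 m "big wave" threshold are given a smaller
# attenuation coefficient, echoing the finding that large waves lose
# energy more slowly. All coefficients are made-up for demonstration.

import math

def wave_height_in_ice(h0_m, distance_km, alpha_small=0.05, alpha_big=0.01):
    """Wave height (m) after travelling `distance_km` into the ice field.

    h0_m: open-water wave height; waves >= 3 m use the smaller
    attenuation coefficient (per-km), so they penetrate further.
    """
    alpha = alpha_big if h0_m >= 3.0 else alpha_small
    return h0_m * math.exp(-alpha * distance_km)

# A 2 m wave vs a 4 m wave, 100 km into the pack:
print(wave_height_in_ice(2.0, 100))   # small wave: almost fully damped
print(wave_height_in_ice(4.0, 100))   # big wave: still over a meter tall
```

Under these made-up coefficients, the small wave is negligible 100 km in, while the big wave is still large enough to flex and crack the ice, which is the qualitative picture the study describes.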

Essentially, more big waves matched increased rates of sea ice retreat, and vice versa. And while they believe this might help researchers understand the regional variability around Antarctica, Kohout and other researchers agree that more work needs to be done to fully understand how waves might be influencing sea ice. Kohout and her colleagues are planning another expedition in a couple of years, and it is hoped that subsequent studies will help identify the relationship with larger ice floes, as well as in the Arctic.

One thing remains clear, though: as we move into the second and third decades of the 21st century, a much clearer picture of how anthropogenic climate change is affecting our environment and creating feedback mechanisms is likely to emerge. One can only hope that this will be the result of in-depth research, and not of the worst coming to pass! It is also clear that it is at the poles of the planet, where virtually no human beings live, that the clearest signs of human agency are at work.

And be sure to check out this video from NASA’s Jet Propulsion Laboratory that illustrates the decline of glaciers in Western Antarctica:


Sources:
iflscience.com, scientificamerican.com