The Future is Here: Lab-Grown Burger Gets a Taste Test

Yesterday, the world’s first lab-grown hamburger was cooked, served, and eaten. And according to an article from The Week, it passed the taste test. The tasting took place in London, where Mark Post, the man who had grown the patty in his lab at Maastricht University in the Netherlands, allowed two independent tasters to sample one of his hamburger patties.

The samplers were food writer and journalist Josh Schonwald and Austrian food trends researcher Hanni Rützler. After biting into a piece of the cooked meat in front of reporters, Schonwald claimed that “It had a familiar mouthfeel. [The difference] is the absence of fat.” Naturally, both tasters were careful not to comment on whether the burger was “good” or not, as any such judgements might seem premature and could hurt its chances for sales at this point.

This lab-grown patty took two years and $325,000 to produce. And as sources revealed, the money came from Google co-founder and TED speaker Sergey Brin. Worth an estimated $20 billion, Brin has a history of investing in kooky projects – everything from driverless cars to trips to the moon. And as he told The Guardian, he was moved to invest in the technology for animal welfare reasons and believes it has “the capability to transform how we view the world”.

The hamburger was grown in Post’s lab using bovine skeletal muscle stem cells that were collected from a piece of fresh beef. The cells were grown by “feeding” them calf serum and commercially available growth medium to initiate multiplication and prompt them to develop into muscle cells over time. Once they differentiated into muscle cells, they were given simple nutrient sources and exercised in a bioreactor, helping the muscle to “bulk up.”

The resulting five-ounce burger, cooked by chef Richard McGeown for Schonwald and Rützler, was made using 20,000 strips of cultured meat – about 40 billion cow cells – and took about three months to produce. As Post joked, this is significantly less time than it takes to raise a cow. And while the arrival of in-vitro meat has been predicted and heralded for decades, now that it’s finally here, people are not sure how to respond.

On the one hand, it offers a range of possibilities for producing sustainable, cheap meat that could help meet global needs using only a laboratory. On the other, there’s no telling how long it will be before consumers will be comfortable eating something grown in a petri dish from stem cells. Between the absence of fat and the stigma that is sure to remain in place for some time, getting people to buy “lab-grown” might be difficult.

But then again, the same issues apply to 3D printed food and other forms of synthesized food. Since these technologies were designed and developed as a means of meeting world hunger and future population growth, with sustainability and nutritional balance in mind, some degree of hesitation and resistance is to be expected. However, attitudes are likely to shift as time goes on and increased demand forces people to rethink the concept of “what’s for dinner”.

And while you’re thinking the issue over, be sure to check out this video of Mark Post speaking about his lab-grown burger at TEDx Haarlem:


Sources:
scientificamerican.com, theweek.co.uk, theguardian.com, blog.ted.com

The Future is Here: Augmented Reality Storybooks

Disney has always been at the forefront of technological innovation whenever and wherever their animation is concerned. Augmented reality has been a part of their operations for quite some time, usually in the form of displays put on at Epcot Center or their Haunted Mansion. But now, they are bringing their efforts in AR to the kind of standard storybook that you would read to your children before bedtime.

Thanks to innovations provided by the Nintendo DS, the PSP, tablets and smartphones, books have come alive and become interactive in ways that were simply not possible ten or twenty years ago. However, one cannot deny that ebooks simply do not have the same kind of old-world charm and magic that paperbacks do. Call it nostalgic appeal or tradition, but reading to a child from a bound tome just seems somehow more meaningful to most people.

And that’s where Disney’s HideOut project comes into play: a mobile projector is used to create an augmented reality storybook. How it works is simple enough, and in a way, involves merging the best of electronic and paper media. Within the book, certain parts are printed using special infrared-absorbing ink, so that sentences and images can be tracked.

The mobile projector, in turn, uses a built-in camera to sense the ink, then projects digital images onto the page’s surface that are animated to interact with the markers. In this way, it knows to show certain images when parts of the book call for them to be displayed, and can turn normal pictures into 3D animated segments.
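As a rough sketch of how such a marker-driven loop might work (the marker names, coordinates, and the simple scale-and-offset mapping below are all hypothetical; a real system like HideOut would use a calibrated camera-to-projector homography), the core cycle is: detect markers in the camera frame, map them into projector space, and draw the overlay assigned to each marker:

```python
# Hypothetical sketch of a marker-driven projection loop.
# A real system would calibrate a full camera-to-projector homography;
# here a simple scale-and-offset mapping stands in for illustration.

# Overlay artwork assigned to each infrared marker ID (made-up names).
OVERLAYS = {
    "dragon_marker": "animated_dragon",
    "castle_marker": "glowing_castle",
}

def camera_to_projector(x, y, scale=2.0, dx=10, dy=20):
    """Map a camera-frame coordinate into projector coordinates."""
    return (x * scale + dx, y * scale + dy)

def plan_frame(detected_markers):
    """Given markers detected in the camera frame, decide what to
    project and where. Each detection is (marker_id, x, y)."""
    draw_calls = []
    for marker_id, x, y in detected_markers:
        if marker_id in OVERLAYS:  # only markers we have art for
            px, py = camera_to_projector(x, y)
            draw_calls.append((OVERLAYS[marker_id], px, py))
    return draw_calls
```

Here, `plan_frame([("dragon_marker", 5, 5)])` returns a single draw call placing the “animated_dragon” overlay at projector coordinates (20.0, 30.0); unrecognized markers are simply ignored.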

And storybooks aren’t the only application being investigated by Disney. In addition, they have been experimenting with game concepts, where a user moves a mobile projector around a board, causing a character to avoid enemies. In another scenario, a character projected onto a surface interacts with tangible objects placed around it. This would not only be entertaining to a child, but could be educational as well.

The applications also extend to the world of work, as the demo below shows. In this case, HideOut projects a file system onto the top of a desk, allowing the user to choose folders by aiming the projector, not unlike how a person selects channels or options using a Wii remote by aiming it at a sensor bar. And the technology could even be used on smartphones and mobile devices, allowing people to interact with their phone, FaceTime, or Skype on larger surfaces.

And of course, Disney is not the only company developing this kind of AR interactive technology, nor are they the first. Products like ColAR, an app that brings your coloring book images to life, and Eye of Judgment, an early PS3 game that read physical CCG cards and animated the characters on-screen, are already on the market. And while there does not appear to be a release date for Disney’s HideOut device just yet, it’s likely to be making the rounds within a few years tops.

For anyone familiar with the world of Augmented Reality and computing, this is likely to call to mind what Pranav Mistry demonstrated with his Sixth Sense technology, something which is being adopted by numerous developers for mobile computing. Since he first unveiled his concept back in 2009, the technology has been improving and the potential for commercial applications has been keeping pace.

In just a few years’ time, every storybook is likely to come equipped with its own projector. And I wouldn’t be surprised if it quickly becomes the norm to see people out on the streets interacting with images and worlds that only they can see. And those of us who are old enough will think back to a time when only crazy people did this!

In the meantime, check out this demo of Disney’s HideOut device in action:


Source: extremetech.com

The Future is Here: “Spiber” Silk

For years, scientists and researchers have been looking for a way to reproduce the strength of spider silk in the form of a synthetic material. As an organic material, spider silk is tougher than Kevlar, as strong as steel, lighter than carbon fiber, and can be stretched 40 percent beyond its original length without breaking. Any material that can boast the same characteristics and be mass produced would be worth its weight in gold!

Recently, a Japanese startup named Spiber announced that it has found a way to produce the silk synthetically. Over the next two years, they intend to step up mass production and create everything from surgical materials and auto parts to bulletproof vests. And thanks to recent developments in nanoelectronics, its usages could also include soluble electronic implants, artificial blood vessels and ligaments, and even antibacterial sutures.

Spider silk’s amazing properties are due to a protein named fibroin. In nature, proteins act as natural catalysts for most chemical reactions inside a cell and help bind cells together into tissues. Naturally, the process for creating the complex sequence of amino acids that makes up fibroin is very hard to reproduce inside a lab. Hence why scientists have been turning to genetic engineering in recent years to make it happen.

In Spiber’s case, this consisted of decoding the gene responsible for the production of fibroin in spiders and then bioengineering bacteria with recombinant DNA to produce the protein, which they then spin into their artificial silk. Using their new process, they claim to be able to engineer a new type of silk in as little as 10 days, and have already created 250 prototypes with characteristics to suit specific applications.

They begin this process by tweaking the amino acid sequences and gene arrangements using computer models to create artificial proteins that seek to maximize strength, flexibility and thermal stability in the final product. Then, they synthesize a fibroin-producing gene modified to produce that specific molecule.

Microbe cultures are then modified with the fibroin gene to produce the candidate molecule, which is turned into a fine powder and then spun. These bacteria feed on sugar, salt and other micronutrients and can reproduce in just 20 minutes. In fact, a single gram of the protein produces about 5.6 miles (9 km) of artificial silk.
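Incidentally, that yield figure maps neatly onto standard textile units: a denier is defined as the mass in grams of 9,000 meters of fiber, so one gram producing about 9 km of silk implies a fiber of roughly one denier – extremely fine, in the same ballpark as natural silk. A quick sanity check:

```python
# Sanity check of Spiber's stated yield: 1 g of protein -> ~9 km of silk.
# Denier = grams per 9,000 m of fiber; tex = grams per 1,000 m.
grams = 1.0
meters = 9_000.0

denier = grams * 9_000.0 / meters   # grams per 9 km of this fiber
tex = grams * 1_000.0 / meters      # the SI-style equivalent

print(denier)          # 1.0 denier
print(round(tex, 3))   # 0.111 tex
```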

As part of the patent process, Spiber has named the artificial protein derived from fibroin QMONOS, from kumonosu, the Japanese word for spider web. The substance can be turned into fiber, film, gel, sponge, powder, and nanofiber form, giving it the ability to suit a number of different applications – everything from clothing and manufacturing to nanomedicine.

Spiber says it is building a trial manufacturing research plant, aiming to produce 100 kg (220 lb) of QMONOS fiber per month by November. The pilot plant will be ready by 2015, by which time the company aims to produce 10 metric tons (22,000 lb) of silk per year.
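Those two figures imply a substantial scale-up: 100 kg per month works out to 1.2 metric tons per year, so hitting 10 tons per year means producing at roughly eight times the pilot rate:

```python
# Implied scale-up from Spiber's pilot rate to the stated 2015 goal.
pilot_kg_per_month = 100
pilot_tons_per_year = pilot_kg_per_month * 12 / 1_000   # 1.2 t/yr
goal_tons_per_year = 10

scale_up = goal_tons_per_year / pilot_tons_per_year
print(round(scale_up, 1))   # roughly an 8.3x increase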

At the recent TEDx talk in Tokyo, company founder Kazuhide Sekiyama unveiled Spiber’s new process by showcasing a dress made of their synthetic silk. Its shiny blue sheen was quite dazzling and looked admittedly futuristic. Still, company spokesperson Shinya Murata admitted that it was made strictly for show and nobody tried it on.

Murata also suggested that their specialized silk could be valuable in moving toward a post-fossil-fuel future:

We use no petroleum in the production process of Qmonos. But, we know that we need to think about the use of petroleum to produce nutrient source for bacteria, electric power, etc…

Overall, Sekiyama lauded the material’s strength and flexibility before the TEDx audience, and claimed it could revolutionize everything from wind turbines to medical devices. All that’s needed is some more time to further manipulate the amino acid sequence to create an even lighter, stronger product. Given the expanding uses for silk and its broad applicability, I’d say he’s correct in that belief.

In the meantime, check out the video from the TEDx talk:


Sources:
gizmag.com, fastcoexist.com

Nanotech News: Smart Sponges, Nanoparticles and Neural Dust!

Nanotechnology has long been the dream of researchers, scientists and futurists alike, and for obvious reasons. If machinery were small enough so as to be microscopic, or so small that it could only be measured on the atomic level, just about anything would be possible. This includes constructing buildings and products from the atomic level up, which would revolutionize manufacturing as we know it.

In addition, microscopic computers, smart cells and materials, and electronics so infinitesimally small that they could be merged with living tissues would all be within our grasp. And it seems that at least once a month, universities, research labs, and even independent skunkworks are unveiling new and exciting steps that are bringing us ever closer to this goal.

Close-up of a smart sponge

One such breakthrough comes from the University of North Carolina at Chapel Hill, where biomedical scientists and engineers have joined forces to create the “smart sponge”. A microscopic spherical object – just 250 micrometers across, though it could be made as small as 0.1 micrometers – these new sponges are similar to nanoparticles in that they are intended to be the next generation of delivery vehicles for medication.

Each sponge is mainly composed of a polymer called chitosan, something which is not naturally occurring, but can be produced easily from the chitin in crustacean shells. The long polysaccharide chains of chitosan form a matrix in which tiny porous nanocapsules are embedded, and which can be designed to respond to the presence of some external compound – be it an enzyme, blood sugar, or a chemical trigger.

So far, the researchers have tested the smart sponges with insulin, so the nanocapsules in this case contained glucose oxidase. As the level of glucose in a diabetic patient’s blood increases, it triggers the nanocapsules in the smart sponge to begin releasing hydrogen ions, which impart a positive charge to the chitosan strands. This in turn causes them to spread apart and begin to slowly release insulin into the blood.

The process is also self-limiting: as glucose levels in the blood come down after the release of insulin, the nanocapsules deactivate and the positive charge dissipates. Without all those hydrogen ions in the way, the chitosan can come back together to keep the remaining insulin inside. The chitosan is eventually degraded and absorbed by the body, so there are no long-term health effects.
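In control terms, what the researchers describe is a threshold-gated negative feedback loop. A toy simulation (all constants below are illustrative, not physiological data) shows the self-limiting behavior: release switches on above a glucose threshold and shuts off on its own as levels fall back below it:

```python
# Toy model of the smart sponge's self-limiting insulin release.
# All constants are illustrative, not physiological data.
THRESHOLD = 140.0     # glucose level above which release is triggered
RELEASE_RATE = 5.0    # insulin units released per time step while open
GLUCOSE_DROP = 8.0    # glucose reduction per time step of release

def simulate(glucose, steps=20):
    """Return (final_glucose, total_insulin_released) after `steps`."""
    released = 0.0
    for _ in range(steps):
        if glucose > THRESHOLD:
            # High glucose -> hydrogen ions -> chitosan matrix opens.
            released += RELEASE_RATE
            glucose -= GLUCOSE_DROP
        # At or below the threshold, the matrix closes and release stops.
    return glucose, released
```

Starting at a glucose level of 180, the model releases insulin for five steps and then stops on its own once glucose settles at the threshold; starting below the threshold, nothing is released at all.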

One of the chief benefits of this kind of system, much like with nanoparticles, is that it delivers medication when it’s needed, where it’s needed, and in amounts that are appropriate to the patient’s needs. So far, the team has had success treating diabetes in rats, but plans to expand the treatment to humans, and to branch out to treat other types of disease.

Cancer is a prime candidate, and the University team believes it can be treated without an activation system of any kind. Tumors are naturally highly acidic environments, which means a lot of free hydrogen ions. And since that’s what the diabetic smart sponge produces as a trigger anyway, it can be filled with small amounts of chemotherapy drugs that would automatically be released in areas with cancer cells.

Another exciting breakthrough comes from the University of California at Berkeley, where medical researchers are working towards tiny, implantable sensors. As all medical researchers know, the key to understanding and treating neurological problems is to gather real-time and in-depth information on the subject’s brain. Unfortunately, things like MRIs and positron emission tomography (PET) aren’t exactly portable and are expensive to run.

Implantable devices are fast becoming a solution to this problem, offering real-time data that comes directly from the source and can be accessed wirelessly at any time. So far, this has taken the form of temporary medical tattoos or tiny sensors intended to be implanted in the bloodstream. However, the researchers at UC Berkeley are proposing something much more radical.

In a recent research paper, they proposed a design for a new kind of implantable sensor – an intelligent dust that can infiltrate the brain, record data, and communicate with the outside world. The preliminary design was undertaken by Berkeley’s Dongjin Seo and colleagues, who described a network of tiny sensors – each package no more than 100 micrometers in diameter. Hence the term they used: “neural dust”.

The smart particles would all contain a very small CMOS sensor capable of measuring electrical activity in nearby neurons. The researchers also envision a system where each particle is powered by a piezoelectric material rather than tiny batteries. The particles would communicate data to an external device via ultrasound waves, and the entire package would also be coated in a polymer, thus making it bio-neutral.

But of course, the dust would need to be complemented by some other implantable devices. These would likely include a larger subdural transceiver that would send the ultrasound waves to the dust and pick up the return signal. The internal transceiver would also be wirelessly connected to an external device on the scalp that contains data processing hardware, a long-range transmitter, storage, and a battery.

The benefits of this kind of system are again obvious. In addition to acting like an MRI running in your brain all the time, it would allow for real-time monitoring of neurological activity for the purposes of research and medical monitoring. The researchers also see this technology as a way to enable brain-machine interfaces, something which would go far beyond current methods. Who knows? It might even enable a form of machine-based telepathy in time.

Sounds like science fiction, and it still is. Many issues need to be worked out before something of this nature would be possible or commercially available. For one, more powerful antennae would need to be designed on the microscopic scale in order for the smart dust particles to be able to send and receive ultrasound waves.

Increasing the efficiency of transceivers and piezoelectric materials will also be a necessity to provide the dust with power, otherwise they could cause a build-up of excess heat in the user’s neurons, with dire effects! But most importantly of all, researchers need to find a safe and effective way to deliver the tiny sensors to the brain.

And last, but certainly not least, nanotechnology might be offering improvements in the field of prosthetics as well. In recent years, scientists have made enormous breakthroughs in the field of robotic and bionic limbs, restoring ambulatory mobility to accident victims, the disabled, and combat veterans. But even more impressive are the current efforts to restore sensation as well.

One method, which is being explored by the Technion-Israel Institute of Technology, involves incorporating gold nanoparticles and a substrate made of polyethylene terephthalate (PET) – the plastic used in bottles of soft drinks. Between these two materials, they were able to make an ultra-sensitive film that would be capable of transmitting electrical signals to the user, simulating the sensation of touch.

Basically, the gold-polyester nanomaterial experiences changes in conductivity as it is bent, providing an extremely sensitive measure of physical force. Tests conducted on the material showed that it was able to sense pressures ranging from tens of milligrams to tens of grams, which is ten times more sensitive than any sensors being built today.

Even better, the film maintained its sensory resolution after many “bending cycles”, meaning it showed consistent results and would give users a long term of use. Unlike many useful materials that can only really be used under laboratory conditions, this film can operate at very low voltages, meaning that it could be manufactured cheaply and actually be useful in real-world situations.

In their research paper, lead researcher Hossam Haick described the sensors as “flowers, where the center of the flower is the gold or metal nanoparticle and the petals are the monolayer of organic ligands that generally protect it.” The paper also states that in addition to providing pressure information (touch), the sensors in their prototype were also able to sense temperature and humidity.

But of course, a great deal of calibration of the technology is still needed, so that each user’s brain is able to interpret the electronic signals being received from the artificial skin correctly. However, this is standard procedure with next-generation prosthetic devices, which rely on two-way electronic signals to provide control signals and feedback.

And these are just some examples of how nanotechnology is seeking to improve and enhance our world. When it comes to sensation and mobility, it offers solutions that not only remedy health problems or limitations, but also enhance natural abilities. But the long-term possibilities go beyond this by many orders of magnitude.

As a cornerstone to the post-singularity world being envisioned by futurists, nanotech offers solutions to everything from health and manufacturing to space exploration and clinical immortality. And as part of an ongoing trend in miniaturization, it presents the possibility of building devices and products that are even tinier and more sophisticated than we can currently imagine.

It’s always interesting how science works by scale, isn’t it? In addition to dreaming large – looking to build structures that are bigger, taller, and more elaborate – we are also looking inward, hoping to grab matter at its most basic level. In this way, we will not only be able to plant our feet anywhere in the universe, but manipulate it on the tiniest of levels.

As always, the future is a paradox, filling people with both awe and fear at the same time.

Sources: extremetech.com

The Future is Here: Bionic Eye Approved by FDA!

After more than 20 years in the making, the Argus II bionic eye was finally approved this past February by the Food and Drug Administration for commercial sale in the US. For people suffering from the rare genetic condition known as retinitis pigmentosa – an inherited, degenerative eye disease that causes severe vision impairment and often blindness – this is certainly good news indeed.

Developed by Second Sight, the Argus II is what is known as a “Retinal Prosthesis System” (RPS) that corrects the main effect of retinitis pigmentosa, which is the diminished ability to distinguish light from dark. While it doesn’t actually restore vision to people who suffer from this condition, it can improve their perceptions of light and dark, and thus identify the movement or location of objects.

The Argus II works by using a series of electrodes implanted onto the retina that are wirelessly connected to a video camera mounted on the eyeglasses. The eye-electrodes use electrical impulses transmitted from the camera to stimulate the part of the retina that allows for image perception. By circumventing the parts of the eye affected by the disease, the bionic device is a prosthetic in every sense of the word.

According to Suber S. Huang, director of the University Hospital Eye Institute’s Center for Retina and Macular Disease, the breakthrough treatment is:

 [R]emarkable. The system offers a profound benefit for people who are blind from RP and who currently have no therapy available to them. Argus II allows patients to reclaim their independence and improve their lives.

Argus II boasts 20-plus years of research, three clinical trials, and more than $200 million in private and public investment behind it. Still, the system has been categorized by the FDA as a humanitarian use device, meaning there is a “reasonable assurance” that the device is safe and its “probable benefit outweighs the risk of illness or injury.”

Good news for people with vision impairment, and a big step in the direction of restoring sight. And of course, a possible step on the road to human enhancement and augmentation. As always, every development that is made in the direction of correcting human impairment offers the future possibility of augmenting otherwise unimpaired human beings.

As such, it might not be long before there are devices that can give the average human the ability to see in the invisible spectrum, such as IR and ultra-violet frequencies. Perhaps also something that can detect x-rays, gamma ray radiation, and other harmful particles. Given that the very definition of cyborg is “a being with both organic and cybernetic parts”, the integration of this device means the birth of the cybernetic age.

And be sure to check out this promotional video by Second Sight showing how the device works:

Source: news.cnet.com

The 3D Printing Revolution

From the way people have been going on about 3D printing in the past few months, you’d think it was some kind of fad or something! But of course, there’s a reason for that. Far from being a simple prescriptive technology that requires us all to update our software or buy the latest version in order to “stay current”, 3D printing is ushering in a revolution that will literally change the world.

From design models and manufactured products, the range of possibilities now extends to printed food and even artificial organs. The potential for growth is undeniable, and the pace at which progress is happening is astounding. And on one of my usual jaunts through the tech journals and video-sharing websites, I found a few more examples of the latest applications.

First up is this story from Mashable, a social media news source, that discusses NYU student Marko Manriquez’s new invention: the BurritoBot. Essentially a 3D food printer that uses tortillas, salsa, guacamole and other quintessential ingredients, the machine was built by Manriquez for his master’s thesis using open-source hardware – including the ORD bot, a 3D printing mechanical platform (pictured above).

The result is a food printer that can tailor-make burritos and other Mexican delights, giving users the ability to specify which ingredients they want, and in which proportion, all through an app on their smartphone. No demos are available online as of yet, but Mashable provides a pretty good breakdown of how it works, as well as Manriquez’s inspiration and intent behind its creation:


Next up, there’s Cornell University’s food printer that allows users to create desserts. In this CNN video, Chef David Arnold at the French Culinary Institute shows off the printer by creating a chocolate cake, layer by layer, from dough and icing. A grad student from Cornell’s Computational Synthesis Lab was on hand to explain that their design is also open-source, with the blueprints and technical design made available online so anyone can build their own.

As Chef Arnold explained, his kitchen has been using the printer to work with ingredients ranging from cookie dough to icing to masa – the corn meal that tortillas are made from. It also allows for a degree of accuracy that many may not possess, while still offering plenty of opportunities to be creative. “The only real limitation now is that the product has to be able to go through a syringe,” he said. “Other than that, the sky’s the limit.”


But even more exciting for some are the opportunities that are now being explored using metals. Using metal powder and an electron beam to form manufactured components, this type of “additive manufacturing” is capable of turning out parts that are amazingly complex, far more so than anything created through the machining process.

In this next video, the crew from CNNMoney travels to the Oak Ridge National Laboratory in Tennessee to speak to the Automation, Manufacturing and Robotics Group. This government-funded lab specializes in making parts that are basically “structures within structures”, the kind of things used in advanced prosthetic limbs, machinery, and robots. As they claim, this sort of manufacturing is made possible thanks to the new generation of 3D ABS and metal printers.

What’s more, this new process is far more efficient. Compared to old-fashioned forms of machining, it consumes less energy and generates far less waste in terms of materials used. And the range of applications is extensive, embracing fields as divergent as robotics and construction to biomedical and aerospace. At present, the only real prohibition is the cost of the equipment itself, but that is expected to come down as 3D printers and additive manufacturing achieve more market penetration.


But of course, all of this pales in comparison to the prospect of 3D printed buildings. As Behrokh Khoshnevis – a professor of Industrial & Systems Engineering at USC – explains in this last video from TEDxTalks, conventional construction methods are not only inefficient, labor intensive and dangerous, they may very well be hampering development efforts in the poorer parts of the world.

As anyone with a rudimentary knowledge of poverty and underdevelopment knows, slums and shanty-towns suffer disproportionately from the problems of crime, disease, illiteracy, and infant mortality. Unfortunately, government efforts to create housing in regions where these types of communities are common are restrained by budgets and resource shortages. With one billion people living in shanties and slum-like shelters, a new means of creating shelter needs to be found for the 21st century.

The solution, according to Khoshnevis, lies in Contour Crafting and Automated Construction – a process which can create a custom house in just 20 hours! As a proponent of Computer-Assisted Design and Computer-Assisted Manufacturing (CAD/CAM), he sees automated construction as a cost-effective and less labor- and resource-intensive means of creating homes for these and other people who are likely to live in unsafe, unsanitary conditions.

The technology is already in place, so any claims that it is of a “theoretical nature” are moot. What’s more, such processes are already being designed to construct settlements on the moon, incorporating robotics and 3D printing with advanced computer-assisted simulations. As such, Khoshnevis is hardly alone in advocating similar usages here on planet Earth.

The benefits, as he outlines them, are dignity, safety, and far more sanitary conditions for the inhabitants, as well as the social benefits of breaking the pathological cycle of underdevelopment. Be sure to check out his video below. It’s a bit long, but very enlightening!


Once in a while, it’s good to take stock of the future and see that it’s not all creepy robots and questionable inventions. Much of the time, technological progress really does promise to make life better, and not just “more convenient”. It’s also especially good to see how it can be made to improve the lives of all people, rather than perpetuating the gap between the haves and the have-nots.

Until next time, keep your heads high and your eyes to the horizon!


Judgement Day Update: Geminoid Robotic Clones

We all know it’s coming: the day when machines will be indistinguishable from human beings. And with a robot that is capable of imitating human body language and facial expressions, it seems we are that much closer to realizing it. It’s known as the Geminoid HI-2, a robotic clone of its maker, famed Japanese roboticist Hiroshi Ishiguro.

Ishiguro unveiled his latest creation at this year’s Global Future 2045 conference, an annual get-together for all sorts of cybernetics enthusiasts, life extension researchers, and singularity proponents. As one of the world’s top experts on human-mimicking robots, Ishiguro wants his creations to be as close to human as possible.

Alas, this has been difficult, since human beings tend to fidget and experience involuntary tics and movements. But that’s precisely what his latest bot excels at. Though it still requires a remote controller, the Ishiguro clone has all his idiosyncrasies hard-wired into his frame, and can even give you dirty looks.

This is not the first robot Ishiguro has built, as his female androids Repliee Q1Expo and Geminoid F will attest. But above all, Ishiguro loves to make robotic versions of himself, since one of his chief aims with robotics is to make human proxies. As he said during his talk, “Thanks to my android, when I have two meetings I can be in two places simultaneously.” I honestly think he was only half-joking!

During the presentation, Ishiguro's robotic clone was on stage with him, where it fidgeted realistically as he pontificated and joked with the audience. The Geminoid was controlled from off-stage by an unseen technician, who had it fidget, yawn, and make annoyed facial expressions. At the end of the talk, the clone suddenly jumped to life and told a joke that startled the crowd.

In Ishiguro's eyes, robotic clones can outperform humans at basic human behaviors thanks to modern engineering. And though they are not yet at the point where the term "android" truly applies, he believes it is only a matter of time before they can rival and surpass the real thing. Roboticists and futurists speak of the "uncanny valley" – that strange, off-putting feeling people get when robots come close to resembling humans, but not quite close enough. If said valley were a physical place, I think we can all agree that Ishiguro would be its damn mayor!

And judging by these latest creations, the time when robots are indistinguishable from humans may be coming sooner than we think. As you can see from the photos, there is very little difference in appearance between his robots and their human counterparts, and those who viewed them live have attested to how surprisingly life-like they are. Once they are able to control themselves, with an artificial neural net that can rival a human one in complexity, we can expect them to mimic many of our other idiosyncrasies as well.

As usual, there are those who will respond to this news with anticipation and those who respond with trepidation. Where do you fall? Maybe these videos from the conference of Ishiguro’s inventions in action will help you make up your mind:

Ishiguro Clone:


Geminoid F:

Sources: fastcoexist.com, geminoid.jp

The Future is Here: The AR Bike Helmet

AR displays are becoming all the rage, thanks in no small part to Google Glass and other display glasses. And given the demand for and appeal of the technology, it seemed like only a matter of time before AR displays began providing real-time navigation for vehicles. Visor-mounted heads-up displays have been available for decades, but fully-integrated displays have yet to be produced.

LiveMap is one such concept: a helmet that superimposes information and directions onto a bike-helmet visor. Based in Moscow, the startup seeks to combine a head-mounted display, built-in navigation, and Siri-like voice recognition. The helmet will have a translucent color display projected onto the visor in the center of the field of vision, and a custom user interface (English-only at launch) based on Android.

This augmented reality helmet display includes a light sensor for adjusting image brightness according to external light conditions, as well as an accelerometer, gyroscope, and digital compass for tracking head movements. Naturally, the company anticipated that concerns about rider safety would come up, hence the numerous safety features they've included.

For one, the digital helmet is cleverly programmed to display maps only when the rider's speed is close to zero, to avoid distracting them at high speeds. And for the sake of hands-free control, it comes equipped with a series of voice commands for navigation and referencing points of interest. No texting and driving with this thing!
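That speed-gating idea is simple enough to sketch in a few lines. The following is purely a hypothetical illustration of the logic described above – the function names, layer names, and the 5 km/h cutoff are my assumptions, not anything from LiveMap's actual firmware:

```python
# Hypothetical sketch of a speed-gated HUD: the full map is only drawn
# when the rider has (nearly) stopped. Threshold and layer names are
# illustrative assumptions, not LiveMap specifications.

SPEED_THRESHOLD_KMH = 5.0  # assumed cutoff for "close to zero"

def visible_layers(speed_kmh):
    """Return which HUD layers to render at the current speed."""
    layers = ["turn_arrows"]           # minimal cues stay on at any speed
    if speed_kmh < SPEED_THRESHOLD_KMH:
        layers.append("full_map")      # full map only when nearly stopped
    return layers
```

At a standstill the rider gets the whole map; at highway speed, only the bare navigation cues remain.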

So far, the company has built some prototype hardware and software for the helmet with the help of grants from the Russian government, and is also seeking venture capital. However, they have found little within their home country, and have been forced to crowdfund via an Indiegogo campaign. As CEO Andrew Artishchev wrote on LiveMap's Indiegogo page:

Russian venture funds are not disposed to invest into hardware startups. They prefer to back up clones of successful services like Groupon, Airnb, Zappos, Yelp, Booking, etc. They are not interested in producing hardware either.

All told, they are seeking to raise $150,000 to make press molds for the helmet capsule. At present, they have raised $5,989 with 31 days remaining. Naturally, prizes have been offered, ranging from thank-yous and a poster (for donations of $1 to $25) to a test drive in a major city (Berlin, Paris, Rome, Moscow, Barcelona) for $100, and a grand prize of the helmet itself for a donation of $1,500.

And of course, the company has announced some "stretch goals," just in case people want to help them overshoot their target of $150,000. For $300,000, they will add Bluetooth with a headset profile to the helmet, and for $500,000, they will build in a high-resolution 13-megapixel photo and video camera. Good to have goals.

Personally, I'd help sponsor this, except for the fact that I don't have a motorbike and wouldn't know how to use one if I did. But a long ride down the autobahn or the Amber Route would be totally boss! Speaking of which, check out the company's promotional video:

Sources: news.cnet.com, indiegogo.com

The Future is Here: Smart Skin!

When it comes to modern research and development, biomimetics appears to be the order of the day. By imitating the function of biological organisms, researchers seek to improve the function of machinery to the point that it can be integrated into human bodies. Already, researchers have unveiled devices that can do the job of organs, or bionic limbs that use the wearer's nerve signals or thoughts to initiate motion.

But what of machinery that can actually send signals back to the user, registering pressure and stimulation? That's what researchers from Georgia Tech have been working on of late, and it has inspired them to create a device that can do the job of the largest human organ of them all – our skin. Back in April, they announced that they had successfully created a brand of "smart skin" that is sensitive enough to rival the real thing.

In essence, the skin is a transparent, flexible array of 8,000 touch-sensitive transistors (aka "taxels") that emit electricity when agitated. Each of these comprises a bundle of some 1,500 zinc oxide nanowires, which connect to electrodes via a thin layer of gold, enabling the array to pick up on changes in pressure as low as 10 kilopascals – comparable to what human skin can detect.

Mimicking the sense of touch electronically has long been the dream of researchers, and has typically been accomplished by measuring changes in resistance. But the team at Georgia Tech experimented with a different approach: measuring the tiny polarization changes that occur when piezoelectric materials such as zinc oxide are placed under mechanical stress. In these transistors, then, piezoelectric charges control the flow of current through the nanowires.
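To get a feel for the numbers involved, here is a toy model of a single taxel. The basic piezoelectric relation is that the charge generated scales with the applied force, Q = d33 × F. To be clear, the coefficient and taxel area below are ballpark assumptions of mine for illustration, not figures from the Georgia Tech paper:

```python
# Toy model of one piezoelectric taxel: charge generated under load,
# Q = d33 * F. The d33 value and taxel area are illustrative
# assumptions, not Georgia Tech's measured device parameters.

D33_ZNO = 12e-12    # C/N, rough ballpark for zinc oxide
TAXEL_AREA = 1e-8   # m^2, assumed ~0.1 mm x 0.1 mm sensing area

def taxel_charge(pressure_pa):
    """Charge (coulombs) a taxel generates under a given pressure."""
    force = pressure_pa * TAXEL_AREA  # F = P * A
    return D33_ZNO * force            # Q = d33 * F

# At the reported 10 kPa detection threshold:
q_threshold = taxel_charge(10e3)      # on the order of femtocoulombs
```

Even at the skin-like 10 kPa threshold, the charge per taxel is minuscule – which is why the gated-transistor design, where that tiny charge modulates a much larger current through the nanowires, is the clever part.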

In a recent news release, lead author Zhong Lin Wang of Georgia Tech's School of Materials Science and Engineering said:

Any mechanical motion, such as the movement of arms or the fingers of a robot, could be translated to control signals. This could make artificial skin smarter and more like the human skin. It would allow the skin to feel activity on the surface.

This, when integrated to prosthetics or even robots, will allow the user to experience the sensation of touch when using their bionic limbs. But the range of possibilities extends beyond that. As Wang explained:

This is a fundamentally new technology that allows us to control electronic devices directly using mechanical agitation. This could be used in a broad range of areas, including robotics, MEMS, human-computer interfaces, and other areas that involve mechanical deformation.

This is not the first time that bionic limbs have come equipped with electrodes to enable sensation. In fact, the robotic hand designed by Silvestro Micera of the Ecole Polytechnique Federale de Lausanne in Switzerland seeks to do the same thing. Using electrodes that connect from the fingertips, palm and index finger to the wearer's arm nerves, the device registers pressure and tension in order to help them better interact with their environment.

Building on these two efforts, it is easy to get a glimpse of what future prosthetic devices will look like. In all likelihood, they will be skin-colored and covered with a soft “dermal” layer that is studded with thousands of sensors. This way, the wearer will be able to register sensations – everything from pressure to changes in temperature and perhaps even injury – from every corner of their hand.

As usual, the technology may have military uses, since the Defense Advanced Research Projects Agency (DARPA) is involved. For that matter, the U.S. Air Force, the U.S. Department of Energy, the National Science Foundation, and the Knowledge Innovation Program of the Chinese Academy of Sciences are all funding it as well. So don't be too surprised if bots wearing a convincing suit of artificial skin start popping up in your neighborhood!

Source: news.cnet.com

The Future is Here: Power Shorts!

Big public events are often used to showcase new technology: the Consumer Electronics Show in Las Vegas, the Bett Show in London, and now the Glastonbury outdoor festival in England, where the mobile phone company Vodafone chose to showcase a new line: the Power Shorts, an item of clothing that turns motion and even body heat into electricity.

The shorts were naturally a big hit, and quite appropriate for the venue, since they use motion (like dancing) to boost the battery life of your mobile devices. Created with help from scientists at the University of Southampton, the shorts incorporate a Power Pocket that contains foam-like ferroelectret materials with pockets of permanently charged surfaces. When the material gets squashed or deformed through movement, that kinetic energy gets converted into electricity.

But for those who are looking for a way to charge their gear without exertion, Vodafone is also working on a Recharge Sleeping Bag. This bag apparently harvests body heat via the "Seebeck effect," a process that produces a voltage from the temperature differences across a thermoelectric module.

These modules are printed on the fabric of the sleeping bag, which supposedly can transform an 8-hour snooze into 11 hours of smartphone battery life. As Stephen Beeby, a professor of electronic systems at the University of Southampton who worked on the innovations, explained:

One side of that is cold and the other is hot, and when you get a flow of heat through it you can create a voltage and a current. Voltage and current together equals electrical power.

And this is not the first time that Vodafone has chosen to unveil something new and innovative at a musical event. Those who attended the Isle of Wight Festival last year may remember the Vodafone Booster Brolley, a prototype parasol that keeps your phone charged while it keeps you dry.

These are by no means the only examples of kinetic energy devices these days. For example, a piezoelectric rubber material produced by Princeton and Caltech a few years back is already being considered for shoes and other wearables as a means of recharging personal electronics.

And remember Pavegen, the rubber panels that turned runners' steps at the finish line of the Paris Marathon into actual electricity? This technology is already being adapted to provide electricity for a grammar school in Kent, England, utilizing the thousands of steps students take every day to keep the lights on.

Such concepts are likely to be powering just about all our devices in the not-too-distant future, at least in part. And beyond personal electronics, piezoelectric generators are also sure to be turning up in buildings and public spaces in the near future. In addition to stairways, hallways, and sidewalks, any surface in the city that moves or is touched on a regular basis could be converted to provide power.

Very clean, and very renewable. People still do a great deal of getting around by foot these days, and if we can convert that motion into energy, so much the better!

Source: news.cnet.com, blog.vodafone.co.uk