Hey folks! It’s a new day and a new week. And during one of my many trips over to Amazon to see how my books were doing, I noticed that I picked up some additional reviews. As expected, they were a bit of a mixed bag, which seems to reflect the fact that the new edition is getting out there and earning its keep. On the other hand, some reviewers aren’t done with the 1st edition, and once again opinion is divided when it comes to how much they care about editing.
See for yourself. I’ve arranged the three latest in order from best to worst:
(5.0 out of 5 stars) fantastic read: This was an absolutely fantastic read. Highly recommend for any fan of the zombie genre. There are some terrible spelling and grammatical errors, at least with the 1st edition, but nothing that detracted from the story nor made hard to read in my opinion. Cannot wait to read more from this author!
-echOs
(4.0 out of 5 stars) Great Story and Characters: Surprisingly good story of combat in a military responding to a frightening zombie infested world. Strong, consistent characterizations, great story lines, believable situations, and good use of humor. Delightfully without massive amounts of information regarding weapons and ammunition. An author worthy of continuing support. A 4 1/2 star book and a 5 star author. Kimohair
-Irish Kathleen
(2.0 out of 5 stars)Poorly written, not proof read and his editor simply didn’t do her job!!!!! I thought this was the first book in the series no preface lended to my confusion – therefore the story starts off unfocused with no clear beginning! This is followed by a character development that is non-existent backed up with so many misspelt words that lends itself to drive the poor reader insane.
Sorry to say but each page has so many errors it makes it hard to follow the storyline, which is actually not bad. But makes reading this extremely frustrating.
In my not so humble opinion the writer did not do his homework in regards to military chain of command,terms and or squad tactics which was again frustrating.
Moreover for me his insistence on using abbreviated terms such as “mage” for major and his reliance on buzzwords such as “whiskey delta” is annoying to the point of nails on a chalkboard! This shows a lack of respect for the armed forces rank and an overall disregard to the readers intelligence.
I would suggest this writer spend some time with a seasoned older mentor while collaborating. Plus make sure you have a proof reader who actually reads your proofs – so you release something that is somewhat grammatically correct and free of spelling errors.
-Putty Tat “Tat”
Okay, so that was one stellar review, in spite of what I can only assume is the 1st edition’s share of editing mistakes; one good review without any mention of editing issues – assuming they read the 2nd edition – and the worst review I have received to date! In fact, this last one was originally one star out of five, but ol’ Putty there saw fit to upgrade it two stars after having a change of heart (said with only mild irony!)
So basically, it seems that things are looking up for this little work of indie fiction. Fingers crossed the sequel will be well received, consistently so!
Ongoing developments in 3D printing have allowed for some amazing breakthroughs in recent years. From its humble beginnings, manufacturing everything from 3D models and drugs to jewelry, the technology is rapidly expanding into the realm of the biological. This began with efforts to create printed cartilage and skin, but quickly expanded into using stem cells to create specific types of living tissues. And as it happens, some of those efforts are bearing some serious fruit!
One such example comes to us from California, where the San Diego-based firm Organovo announced that they were able to create samples of liver cells using 3D printing technology. The firm presented their findings at the Experimental Biology conference in Boston this past April. In a press release, the company said the following:
We have demonstrated the power of bioprinting to create functional human tissue that replicates human biology better than what has come before.
The company’s researchers used a gel and “bioink” to build three types of liver cells and arranged them into the same kind of three-dimensional cell architecture found in a human liver. Although not fully functional, the 3D cells were able to produce some of the same proteins as an actual liver does and interacted with each other and with compounds introduced into the tissue as they would in the body.
This latest breakthrough places Organovo, indeed all biomedical research firms, that much closer to the dream of being able to synthesize human organs and other complex organic tissues. And they are hardly alone in narrowing the gap, as doctors at the University of Michigan made a similar advancement last year when they used a 3D printer to build a synthetic trachea for a child with a birth defect that had collapsed her airway.
As scientists become more familiar with the technology and the process of building shaped, organic cells that are capable of doing the same job as their natural counterparts, we are likely to see more and more examples of synthetic organic tissue. In addition, it’s likely to be just a few more years before fully-functional synthetic organs are available for purchase. This will be a boon both for those looking for a transplant and for a medical system that is currently plagued by shortages and waiting lists.
And be sure to check out this CBC video of Keith Murphy, CEO of Organovo, explaining the process of bioprinting:
On Friday, Washington DC found itself embroiled in controversy as revelations were made about the extent to which US authorities have been spying on Americans in the last six years. This news came on the heels of the announcement that the federal government had been secretly cataloging all of Verizon’s phone records. No sooner had the dust settled on that revelation than it became known that the scope of the Obama administration’s surveillance programs was far greater than anyone had imagined.
According to updated information on the matter, it is now known that the National Security Agency (NSA) and the FBI have been tapping directly into the central servers of nine leading U.S. Internet companies, extracting audio and video chats, photographs, e-mails, documents, and connection logs that would enable their analysts to track foreign targets.
This information was revealed thanks to a secret document that was leaked to the Washington Post, which shows for the first time that under the Obama administration, the communication records of millions of US citizens are being collected indiscriminately and in bulk – regardless of whether they are suspected of any wrongdoing. Equally distressing are the names being named: U.S. service providers such as Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube, and Apple.
The document further indicates that all of this has been taking place since 2007, when news disclosures, lawsuits and the Foreign Intelligence Surveillance Court forced then-president George W. Bush to look for new authority to justify his program of warrantless domestic surveillance. Its continuance and expansion under Obama has created a great deal of understandable concern, and not only because of promises made that “illegal wiretapping” would not take place under his watch.
The joint FBI-NSA program responsible for mining all the data is known as PRISM, and it may very well be the first of its kind. While the NSA and FBI have a long history of monitoring suspects via phone records and computer activity, and are both accustomed to corporate partnerships that help them divert data traffic or sidestep barriers, such a vast program has never before been possible. In the current information age, there is an immense wealth of information out there, and where better to access all of it than in Silicon Valley?
Not long after the news broke in Washington, London’s Guardian reported that GCHQ, Britain’s equivalent of the NSA, also has been secretly gathering intelligence from the same internet companies through an operation set up by the NSA. According to the same leaked information, PRISM appears to allow the GCHQ to circumvent the formal legal process required in Britain to seek personal material such as emails, photos and videos from an internet company based outside of the country.
But perhaps worst of all is the fact that this process is entirely above board, at least for the companies involved. Back in 2007, Congress passed the Protect America Act, and then in 2008 followed it up with the FISA Amendments Act, both of which immunized private companies that cooperated voluntarily with U.S. intelligence collection against prosecution. And late last year, when critics in Congress sought changes in the FISA Amendments Act, the only lawmakers who knew about PRISM were bound by oaths of office to hold their tongues.
As anticipated, a bipartisan amalgam of Senators came out to defend the program shortly after the initial reports of phone record monitoring were announced. In a rare display of solidarity that cut across party lines, Democrats and Republicans from both the Senate and House came forward to say that the program was justified, only spied on terrorists, and that law-abiding citizens need not worry.
National Security Agency – aerial view
Once again, the argument “if you’ve done nothing wrong, you’ve got nothing to fear” finds itself employed by people who do not want to voice criticisms about a government spying program. Echoes of the Bush administration and the McCarthy era all over again. Needless to say, all of this has many people worried, not least those who have spent the past decade opposing government intrusion and defending the protection of privacy.
Ever since it became possible to “mine data” from numerous online digital sources, there has been fear that corporations or governments might try to ascertain the habits and comings and goings of regular people in order to effectively monitor them. For some time now, this sort of monitoring has been somewhat benign, taking the form of anticipating people’s spending habits and serving targeted advertising. But there has always been the fear that something more sinister and totalitarian might emerge.
And with the “War on Terror”, the Patriot Act, domestic warrantless wiretapping, the legitimization of torture, and a slew of other crimes the Bush administration was implicated in, people all over the world have become convinced that “Big Brother” government is just around the corner, if indeed it is not already here.
The fact that such processes have continued and even expanded under Obama, a man who originally pledged not to engage in such behavior, has made a bad situation worse. In many ways, it demonstrates that fears that he too would succumb to internal pressure were justified. Much as he was won over by the Pentagon and CIA to continue the war in Afghanistan and UAV programs, it seems that the constellation of FBI and NSA specialists advising him on domestic surveillance has managed to sway him here as well.
One can only hope that this revelation causes the federal government and the Obama administration to reconsider their stances. After all, these are the same people who were convinced to stand down on the use of UAVs in overseas operations and to take measures that would ensure transparency in the future. We can also hope that the NSA and FBI will once again be required to rely on the court system and demonstrate “just cause” before initiating any domestic surveillance.
Otherwise, we might all need to consider getting our hands on some stealth wear and personal cameras, to shield ourselves and create an environment of “sousveillance” so we can spy on everything the government does. Might not hurt to start monitoring the comings and goings of every telecommunications and Silicon Valley CEO while we’re at it! For as the saying goes, “who watches the watchers?” I’ll give you a hint: we do!
Also, be sure to check out the gallery of artist Adam Harvey, the man who pioneered “stealth wear” as a protest against the use of drones and domestic surveillance. To learn more about sousveillance, the concept of a society monitored by common people, check out Steve Mann’s (inventor of the EyeTap) blog.
As an educator, I find that technological innovation is a subject that comes up quite often. Not only are teachers expected to keep up with trends so they can adapt them into their teaching strategies and classrooms, and prepare children to use them; they are also forced to contend with how these trends are changing the very nature of education itself. If there was one thing we were told repeatedly in Teacher’s College, it was that times are changing, and we must change along with them.
And as history has repeatedly taught us, technological integration changes not only the way we do things, but the way we perceive things. As we come to be more and more dependent on digital devices, electronics and wireless communications for instant access to a staggering amount of information, we have to be concerned with how this will affect and even erode traditional means of information transmission. After all, how can readings and lectures be expected to keep kids’ attention when they are accustomed to lightning-fast videos, flash media, and games?
And let’s not forget this seminal infographic, “Envisioning the future of educational technology”, by Envisioning Technology. One of many think tanks dedicated to predicting tech trends, they are among a chorus of voices predicting that, in time, education will no longer require the classroom and perhaps even teachers, because modern communications have made the locale and the leader virtually obsolete.
Pointing to such trends as Massive Open Online Courses, several forecasters foresee a grand transformation in the not too distant future where all learning happens online and in virtual environments. These would be based around “microlearning”: moments where people access the desired information through any number of means (e.g. a Google search) and educate themselves without the need for instruction or direction.
The technical term for this future trend is Socialstructured Learning: an aggregation of microlearning experiences drawn from a rich ecology of content and driven not by grades but by social and intrinsic rewards. This trend may very well be the future, but the foundations of this kind of education lie far in the past. Leading philosophers of education – from Socrates to Plutarch, Rousseau to Dewey – talked about many of these ideals centuries ago. The only difference is that today, we have a host of tools to make their vision reality.
One such tool comes in the form of augmented reality displays, which are becoming more and more common thanks to devices like Google Glass, the EyeTap or the Yelp Monocle. Simply point at a location, and you are able to obtain information you want about various “points of interest”. Imagine then if you could do the same thing, but instead receive historic, artistic, demographic, environmental, architectural, and other kinds of information embedded in the real world?
This is the reasoning behind projects like HyperCities, a project from USC and UCLA that layers historical information on actual city terrain. As you walk around with your cell phone, you can point to a site and see what it looked like a century ago, who lived there, and what the environment was like. The Smithsonian also has a free app called Leafsnap, which allows people to identify specific species of trees by simply snapping photos of their leaves.
In many respects, it reminds me of the impact these sorts of developments are having on politics and industry as well. Consider how quickly blogging and open source information has been supplanting traditional media – like print news, tv and news radio. Not only are these traditional sources unable to supply up-to-the-minute information compared to Twitter, Facebook, and live video streams, they are subject to censorship and regulations the others are not.
In terms of industry, programs like Kickstarter and Indiegogo – crowdsourcing, crowdfunding, and internet-based marketing – are making it possible to sponsor and fund research and development initiatives that would not have been possible a few years ago. Because of this, the traditional gatekeepers, a.k.a. corporate sponsors, are no longer required to dictate the pace and advancement of commercial development.
In short, we are entering a world that is becoming far more open, democratic, and chaotic. Many people fear that someone new will step into this environment to act as “Big Brother”, or that the pace of change and the nature of these developments will somehow give certain monolithic entities complete control over our lives. Personally, I think this is an outmoded fear, and that the real threat comes from the chaos that such open control and sourcing could lead to.
Is humanity ready for democratic anarchy – a.k.a. demarchy (a subject I am semi-obsessed with)? Do we even have the means to behave ourselves in such a free social arrangement? Opinion varies, and history is not the best indicator. Not only is it loaded with examples of bad behavior, but previous generations didn’t exactly have the same means we currently do. So basically, we’re flying blind… Spooky!
We are now just one week away from Man of Steel‘s official theatrical release. And in honor of this final lap, the studio has chosen to release one last, beautifully produced trailer. And unlike the other trailers, which focused on Clark’s upbringing, his quest to find his true identity, or showcased how the people of Earth would react to his presence, this one is action and nothing but!
It starts with a fair bit of footage from Krypton, where we see more of the apocalyptic event that required that Clark be sent away, and then moves to Earth, where the same forces that took his homeworld threaten to consume his adopted world. And it’s abundantly clear from this and the last trailer that in this relaunch, unlike the originals, Zod is at the center of it all.
Krypton’s destruction was no mere accident, and the evil Kryptonians were not mere exiles. Interesting angle…
Enjoy the show, and let’s all be ready for when it comes out. This is one relaunch I actually intend to see!
Ending terminal illness is one of the great goals of 21st century medicine, with advances being made all the time. In recent years, efforts have been particularly focused on finding treatments and cures for the two greatest plagues of the past 100 years – HIV and cancer. But whereas HIV is one of the most infectious diseases ever observed, cancer is by far the greater killer. In 2008 alone, approximately 12.7 million cancers were diagnosed (excluding non-invasive cancers) and 7.6 million people died of cancer worldwide.
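To put those numbers in perspective, here’s a quick (and admittedly crude) calculation – note that this compares deaths in a single year to diagnoses in that same year, which is not the same thing as a survival rate:

```python
diagnosed_2008 = 12.7e6  # invasive cancers diagnosed worldwide, 2008
deaths_2008 = 7.6e6      # cancer deaths worldwide, 2008

# Roughly six deaths for every ten new diagnoses in the same year:
print(f"{deaths_2008 / diagnosed_2008:.0%}")  # -> 60%
```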
Little wonder, then, that so much time and energy is dedicated to ending it; and in recent years, a number of these initiatives have begun to bear fruit. One such initiative comes from the Mayo Clinic, where researchers claim they have developed a new type of software that can help classify cancerous lung nodules noninvasively, thus saving lives and health care costs.
It’s called Computer-aided Nodule Assessment and Risk Yield, or Canary, and a pilot study of the software recently appeared in the April issue of the Journal of Thoracic Oncology. According to the article, Canary uses data from high-resolution CT images of a common type of cancerous nodule in the lung and then matches them, pixel for pixel, to one of nine unique radiological exemplars. In this way, the software is able to make detailed comparisons and then determine whether or not the scans indicate the presence of cancer.
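That pixel-for-pixel matching amounts to nearest-exemplar classification. Here’s a toy sketch of the idea in Python – the exemplar values, labels and distance metric below are invented for illustration, and are not Canary’s actual algorithm or data:

```python
def classify_nodule(patch, exemplars):
    """Return the label of the exemplar with the smallest summed
    per-pixel absolute difference from the patch (toy 1-NN match)."""
    def dist(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(exemplars, key=lambda label: dist(patch, exemplars[label]))

# Toy 4-pixel "images" standing in for the nine radiological exemplars:
exemplars = {
    "indolent": [0.1, 0.2, 0.1, 0.2],
    "aggressive": [0.9, 0.8, 0.9, 0.8],
}
print(classify_nodule([0.85, 0.9, 0.7, 0.95], exemplars))  # -> aggressive
```

The real system presumably works on full-resolution CT data with far more sophisticated matching, but the core idea – compare a scan against known exemplars and take the closest – is the same.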
In the pilot study, Canary was able to classify lesions as either aggressive or indolent with high sensitivity, as compared to microscopic analyses of the lesions after they were surgically removed and examined by lung pathologists. More importantly, it was able to do so without the need for internal surgery to allow a doctor to make a visual examination. This not only ensures that a patient could receive an early (and accurate) diagnosis from a simple CT scan, but also saves a great deal of money by making surgery unnecessary.
As they say, early detection is key. But where preventative medicine fails, effective treatments need to be available. And that’s where a new invention, inspired by Velcro, comes into play. Created by researchers at UCLA, the process is essentially a refined method of capturing and analyzing rogue cancer cells using a Velcro-like technology that works on the nanoscale. It’s called NanoVelcro, and it can detect, isolate, and analyze single cancer cells from a patient’s blood.
Researchers have long recognized that circulating tumor cells play an important role in spreading cancer to other parts of the body. When the cells can be analyzed and identified early, they can offer clues to how the disease may progress in an individual patient, and how to best tailor a personalized cancer treatment. The UCLA team developed the NanoVelcro chip (see above) to do just that, trap individual cancer cells for analysis so that early, non-invasive diagnosis can take place.
The treatment begins with a patient’s blood being pumped in through the NanoVelcro Chip, where tiny hairs protruding from the cancer cells stick to the nanofiber structures on the device’s surface. Then, the scientists selectively cut out the cancer cells using laser microdissection and subject the isolated and purified cancer cells to single cell sequencing. This last step reveals mutations in the genetic material of the cells and may help doctors personalize therapies to the patient’s unique form of cancer.
The UCLA researchers say this technology may function as a liquid biopsy. Instead of removing tissue samples through a needle inserted into a solid tumor, the cancer cells can be analyzed directly from the blood stream, making analysis quicker and easier. They claim this is especially important in cancers like prostate, where biopsies are extremely difficult because the disease often spreads to bone, where the availability of the tissue is low. In addition, the technology lets doctors look at free-floating cancer cells earlier than they’d have access to a biopsy site.
Already, the chip is being tested in prostate cancer, according to research published in the journal Advanced Materials in late March. The process is also being tested by Swiss researchers to remove impurities like mercury and other heavy metals from water, using nanomaterials that cling to them. So in addition to assisting in the war on cancer, this new technology showcases the possibilities of nanotechnology and the progress being made in that field.
The Cassini Space Probe is at it again, providing the people of Earth with rare glimpses of Saturn and its moons. And with this latest picturesque capture, revealed by NASA, the ESA and ASI back in April, we got to see the moon of Enceladus as it sprayed icy vapor off into space. For some time, scientists have known about the large collection of geysers located at the moon’s south pole. But thanks to Cassini, this was the first time that it was caught (beautifully) on film.
Since Cassini first discovered them in 2005, scientists have been trying to learn more about how these plumes of water behave, what they are made of and – most importantly – where they are coming from. The working theory is that Enceladus has a liquid subsurface ocean, and pressure from the rock and ice layers above, combined with heat from within, forces the water up through surface cracks near the moon’s south pole.
When this water reaches the surface it instantly freezes, sending plumes of water vapor, icy particles, and organic compounds hundreds of kilometers out into space. Cassini has flown through the spray several times now, and instruments have detected that aside from water and organic material, there is salt in the icy particles.
Tests run on samples that were captured indicate that the salinity is the same as that of Earth’s oceans. These findings, combined with the presence of organic compounds, indicate that Enceladus may be one of the best candidates in the Solar System for finding life.
Much like on Europa, any life would be contained within the moon’s outer crust. But as we all know, life comes in many, many forms. Not all of it needs to be surface-dwelling in nature, and an atmosphere need not exist either. Granted, these are essential for life to thrive, but not necessarily to exist.
What’s more, this could come in handy if manned missions to Enceladus ever do take place. Water is key to making hydrogen fuel, and could come in mighty handy if people ever set down and feel the need to terraform the place. Of course, they might want to make sure they aren’t depriving subterranean organisms of their livelihood first. Don’t want another Avatar situation on our hands!
When it comes to high-tech flight, hypersonic is the undisputed way of the future. Not only is it the next logical step in the long chain from the Wright Brothers to supersonic flight (which humanity achieved in 1947), it is sort of a prerequisite in order for commercial space travel to take place. And on May 1st, the US Air Force tested its latest concept vehicle for going hypersonic, known as the X-51A Waverider.
The test took place at Edwards Air Force Base in California, when a B-52H Stratofortress carried the scramjet to a height of 15,000 meters (50,000 feet) and then released it. A solid rocket booster then kicked in and brought the X-51A to a speed of Mach 4.8 in just 26 seconds. The solid rocket booster then separated and the X-51A’s air-breathing supersonic combustion ramjet – or scramjet – engine pushed it up the rest of the way to Mach 5.1 and up to an altitude of 18,300 meters (60,000 feet).
Four minutes later, its fuel supply was spent and the scramjet nosed down, finally crashing (as planned) into the Pacific Ocean. The previous air speed record for manned flight is just under Mach 3, making this a rather large leap forward. In addition, in just over six minutes, the scramjet traveled over 425 kilometers (264 miles), making it the longest air-breathing hypersonic flight ever.
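For a sense of scale, here’s a quick back-of-the-envelope check of those figures in Python. Note that the speed of sound at altitude and the exact flight time are rough assumptions on my part, not numbers from the flight report:

```python
# Rough sanity check on the reported X-51A figures.
SOUND_AT_18KM = 295.0            # m/s at ~18 km altitude (approximate)

top_speed = 5.1 * SOUND_AT_18KM  # Mach 5.1 -> roughly 1500 m/s
flight_time = 6 * 60 + 10        # "just over six minutes", assumed ~370 s
avg_speed = 425_000 / flight_time  # 425 km covered over the whole flight

print(f"top speed ~{top_speed:.0f} m/s ({top_speed * 3.6:.0f} km/h)")
print(f"avg speed ~{avg_speed:.0f} m/s (Mach {avg_speed / SOUND_AT_18KM:.1f})")
```

The average works out to around Mach 3.9 – well below the Mach 5.1 peak, as you’d expect given the boost phase and the final unpowered descent.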
In addition to being record-breaking, the flight also tested out an important concept which may soon get more of us here on Earth into orbit. Considering the cost of sending a single rocket into space, concepts for a reusable spacecraft that could break the Earth’s gravitational pull, fly itself into high-earth orbit, and then land again have been under review for some time. All that was missing was an engine that could accomplish the kinds of speeds needed without relying on criminally fuel-inefficient rockets.
Needless to say, this is a difficult task, since maintaining airspeed above Mach 2 is a serious challenge. This is due to the fact that at these speeds, it’s very difficult for jet engines to continue to take in air. What makes the X-51A special is the fact that it has no moving parts. Whereas scramjets of the past used hydrogen fuel injected into a combustion chamber and mixed with incoming air, the X-51A uses a hydrocarbon fuel as a sort of pilot light, effectively “lighting a match in a hurricane.”
This apparently makes more sense logistically, and therefore could allow the technology to be applied on a broader scale. As it stands, this test involved the last of four X-51As to be constructed, the previous tests having taken place between 2004 and 2012. No plans exist for the construction of future X-51A vehicles, perhaps because the program cost a staggering $300 million. Nevertheless, Air Force officials indicated that the Waverider has left a valuable legacy.
And I certainly think so! Not only has the Waverider established a new air speed record and set a hypersonic distance record, it has also taken an important step as far as the next generation of space flight is concerned. In time, and perhaps in conjunction with rocket boosters, we could be seeing commercial spacecraft capable of breaking the atmosphere very soon.
Think of it: aerospace flights making deliveries to the ISS, and perhaps even beyond… Also, check out the video of the X-51A below making its historic, record-breaking flight:
For decades, solar power has been dogged by two undeniable problems that have prevented it from replacing fossil fuels as our primary means of energy. The first has to do with the cost of producing and installing solar cells, which until recently remained punitively high. The second has to do with efficiency, in that conventional photovoltaic cells remained inefficient by most cost-per-watt analyses. But thanks to a series of developments, solar power has been beating the odds on both fronts and coming down in price.
However, to most people, it was unclear exactly how far it had come down in price. And thanks to a story recently published in The Economist, which comes complete with a helpful infographic, we are now able to see firsthand the progress that’s been made. To call it astounding would be an understatement; and for the keen observer, a certain pattern is certainly discernible.
It’s known as the “Swanson Effect” (or Swanson’s Law), a theory that suggests that the cost of the photovoltaic cells needed to generate solar power falls by 20% with each doubling of global manufacturing capacity. Named after Richard Swanson, founder of the American solar-cell manufacturer SunPower, this law is basically an imitation of Moore’s Law, which states that every 18 months or so, the size of transistors (and their cost) halves.
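Swanson’s Law is easy to express in code. Here’s a quick sketch – the $1.00/W starting price is just an illustrative placeholder, not a real market figure:

```python
def swanson_cost(cost_per_watt, doublings, learning_rate=0.20):
    """Cost per watt after `doublings` of global manufacturing
    capacity, assuming a fixed 20% price drop per doubling."""
    return cost_per_watt * (1.0 - learning_rate) ** doublings

# Starting from an illustrative $1.00/W, three doublings of capacity
# (8x the panels produced) roughly halve the price:
print(swanson_cost(1.00, 3))  # -> 0.512
```

Because the drop is tied to cumulative production rather than time, the more panels the world builds, the faster the price falls – a self-reinforcing loop.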
What this means, in effect, is that in solar-rich areas of the world, solar power can now compete with gas and coal without the need for clean energy subsidies. As it stands, solar energy still accounts for only a quarter of a percent of the planet’s electricity needs. But when you consider that this represents an 86% increase over last year, and that prices will continue to drop, you begin to see a very real trend in the making.
What this really means is that within a few decades’ time, alternative energy won’t be so alternative anymore. Alongside the growth made in wind power, tidal harnesses, piezoelectric bacteria and kinetic energy generators, fossil fuels, natural gas and coal will soon be the “alternatives” to cheap, abundant and renewable energy. Combined with advances being made in carbon capture and electric/hydrogen fuel cell technology, perhaps it will all arrive in time to stave off environmental collapse!
Check out the infographic below and let the good news of the “Swanson Effect” inspire you!:
Robots have come a long way in recent years, haven’t they? From their humble beginnings, servicing human beings with menial tasks and replacing humans on the assembly line, they now appear poised to take over other, more complex tasks as well. Between private companies and DARPA-developed concepts, it seems like just a matter of time before a fully-functioning machine is capable of performing all our work for us.
One such task-mastering robot was featured at Milan Design Week this year, an event where fashion takes center stage. It’s known as the Makr Shakr, a set of robotic arms capable of mixing drinks, slicing fruit, and making millions of different recipes. The result of a collaborative effort between the MIT SENSEable City Lab and Carlo Ratti Associati, an Italian architecture firm, this robot is apparently able to match wits with any human bartender.
While at the Milan Design Week, the three robotic arms put on quite the show, demonstrating their abilities to a crowd of wowed spectators. According to the website, this technology is not just a bar aid, but part of a larger movement in robotics:
Makr Shakr aims to show the ‘Third Industrial Revolution’ paradigm through the simple process design-make-enjoy, and in just the time needed to prepare a new cocktail.
In a press release, the company described the process. It begins with the user downloading an app to their smartphone, where they create their order and peruse the recipes that other users have come up with. They then communicate the order to the Makr Shakr and “[the] cocktail is then crafted by three robotic arms, whose movements reproduce every action of a barman–from the shaking of a Martini to the muddling of a Mojito, and even the thin slicing of a lemon garnish.”
Inspired by the ballet dancer Roberto Bolle, whose “movements were filmed and used as input for the programming of the Makr Shakr robots”, the arms appear most graceful when they do their work. In addition, the design system monitors exactly how much booze each patron is consuming, which, in theory, could let the robot-bartenders know when it’s time to cut off anyone who has thrown back a few too many.
Check out the video of the Makr Shakr in action:
Another major breakthrough comes, yet again, from DARPA. For years now, they have been working with numerous companies and design and research firms in order to create truly ambulatory and dexterous robot limbs. In some cases, as with the Legged Squad Support System (LS3), this involves creating a machine that can carry supplies and keep up with troops. In others, it involves creating robotic hands and limbs to help wounded veterans recover and lead normal lives again.
And you may recall earlier this year when DARPA unveiled a cheap design for a robotic hand that was able to use tools and perform complex tasks (like changing a tire). More recently, it showcased a design for a three-fingered robot, designed in conjunction with the firm iRobot – the makers of the robotic 3D printer – and with support from Harvard and Yale, that is capable of unlocking and opening doors. Kind of scary really…
The arm is the latest to come out of the Autonomous Robotic Manipulation (ARM) program, an effort to create robots that are no longer expensive, cumbersome, and dependent on human operators. Using a Kinect sensor to zero in on an object’s location before moving in to grab it, the arm is capable of picking up thin objects lying flat, like a laminated card or a key. In addition, the hand’s three-finger configuration is versatile and strong, and therefore capable of handling objects of varying size and complexity.
When put to the test (as shown in the video below), the hand was able to pick up a metal key, insert it into a lock, and open a door without any assistance. Naturally, a human operator is still required at this stage, but the use of a Kinect sensor to identify objects shows a degree of autonomous capability, and the software behind its programming is still in the early development phase.
And while the hand isn’t exactly cheap by everyday standards, the production cost has been dramatically reduced. Hands fabricated in batches of 1,000 or more can be produced for $3,000 per unit, which is substantially less than the current cost of $50,000 per unit for similar technology. And as usual, DARPA has its eye on future development, creating hands that would be used in hazardous situations – such as diffusing IEDs on the battlefield – as well as civilian and post-combat applications (i.e. prosthetics).
And of course, there’s a video for the ARM in action as well. Check it out, and then decide for yourself if you need to be scared yet: