The Internet of Things: AR and Real World Search

When it comes to the future, it is clear that the concept of the “Internet of Things” holds sway. This idea – which states that all objects will someday be identifiable thanks to virtual representations on the internet – is at the center of a great deal of the innovation that drives our modern economy. Be it wearables, wireless, augmented reality, voice or image recognition, technologies that help us combine the real with the virtual are on the rise.

And so it’s really no surprise that innovators are looking to take augmented reality to the next level. The fruit of some of this labor is Blippar, a market-leading image-recognition and augmented reality platform. Lately, they have been working on a proof of concept for Google Glass showing that 3-D searches are doable. This sort of technology is already available in the form of apps for smartphones, but what is still lacking is a central database that could turn any device into a visual search engine.

As Ambarish Mitra, the head of Blippar, stated, AR is already gaining traction among consumers thanks to some of the world’s biggest industrial players recognizing the shift to visually mediated lifestyles. Examples include IKEA’s interactive catalog, Heinz’s AR recipe booklet, and Amazon’s recent integration of the Flow AR technology into its primary shopping app. As this trend continues, we will need a Wikipedia-like database for 3-D objects that will be available to us anytime, anywhere.

Social networks and platforms like Instagram, Pinterest, Snapchat and Facebook have all driven a cultural shift in the way people exchange information. This takes the form of text updates, instant messaging, and uploaded images. But as the saying goes, “a picture is worth a thousand words”. In short, information absorbed through visual learning has a marked advantage over that which is absorbed through reading and text.

In fact, a recent NYU study found that people retain close to 80 percent of the information they consume through images, versus just 10 percent of what they read. If we were able to regularly consume rich content from the real world through our devices, we could learn, retain, and express ideas and information more effectively. Naturally, there will always be situations where text-based search is the most practical tool, but most searches arise from real-world experiences.

Right now, text is the only option available, and oftentimes people are unable to adequately describe what they are looking for. But an image-recognition technology that could turn any smartphone, tablet or wearable device into a scanner able to identify any 3-D object would vastly simplify things. Information could be absorbed in a more efficient way, using an object’s features and pulling up information from a rapidly learning engine.

For better or for worse, the designs of wearable consumer electronics have come to reflect a new understanding in the past few years. Basically, they have become extensions of our senses, much as Marshall McLuhan wrote in his 1964 book Understanding Media: The Extensions of Man. Google Glass is representative of this revolutionary change, a step in the direction of users interacting with the environment around them through technology.

Leading tech companies are already investing time and money into the development of their own AR products, and countless patents and research allocations are being made with every passing year. Facebook’s acquisition of virtual reality company Oculus VR is the most recent example, but even Samsung received a patent earlier this year for a camera-based augmented reality keyboard that is projected onto the fingers of the user.

Augmented reality has already proven itself to be a multimillion-dollar industry – with 60 million users and around half a billion dollars in global revenues in 2013 alone. It’s expected to exceed $1 billion annually by 2015, and combined with a Google Glass-type device, AR could eventually allow individuals to build vast libraries of data that will be the foundation for finding any 3-D object in the physical world.

In other words, the Internet of Things will come one step closer, with an evolving database of visual information at the base of it that is becoming ever larger and (in all likelihood) smarter. Oh dear, I sense another Skynet reference coming on! In the meantime, enjoy this video that showcases Blippar’s vision of what this future of image overlay and recognition will look like:


Source: wired.com, dashboardinsight.com, blippar.com

The Future is Here: Zombie Fitness App!

Fleeing a horde of flesh-eating zombies? There’s an app for that. Seriously though, it seems that some cheeky IT developer recently created a fitness app for Google Glass that motivates runners by letting them know if the pace they are setting would be enough to flee from a pursuing zombie. But of course, that’s just one option that comes with this Glass application – known as Race Yourself – which first previewed this past January.

Mainly, the app seeks to take advantage of Google Glass display technology, which allows runners to see their progress in real-time without having to check their watch or device, or rely on a series of chimes. Using Glass’ heads-up display, it allows users to keep track of time, distance, and calories by simply taking a quick glance at the screen. And it comes complete with some games, including running from zombies or fleeing a giant boulder (a la Raiders of the Lost Ark).

While it is still in development, early reviews state that the app would be of use both to casual runners and those training for a big race. In addition to keeping track of your time and distance, runners are able to see how many calories they’ve burned and the pace they are setting. In the end, these useful stats, which can be consulted at a glance, are the real point of the app. The games (which require far more concentration) are just a fun bonus.

In addition to the zombie chase and the fleeing of the boulder, the games include running against an Olympic athlete (100 meters in 9 seconds), racing against a train to save a woman lying on the tracks, or racing against your own speed during the last 50 meters of your run – which is where the name Race Yourself comes from. Runners using the app can expect two hours of battery life, which is more than enough for a good workout.

Richard Goodrum, COO of Race Yourself, says that the app will be launching later this year, at about the same time that Glass opens up to the general public. It will be joined by apps like Strava Cycling, which offers similar stats to cyclists. And while you’re waiting, be sure to check out this video of the app in action:


Source: fastcoexist.com

The Future of Medicine: 3D Printing and Bionic Organs!

There’s just no shortage of breakthroughs in the field of biomedicine these days. Whether it’s 3D bioprinting, bionics, nanotechnology or mind-controlled prosthetics, every passing week seems to bring more in the way of amazing developments. And given the rate of progress, it’s likely going to be just a few years before mortality itself will be considered a treatable condition.

Consider the most recent breakthrough in 3D printing technology, which comes to us from the J.B. Speed School of Engineering at the University of Louisville, where researchers used a printed model of a child’s heart to help a team of doctors prepare for open heart surgery. Thanks to these printer-assisted measures, the doctors were able to save the life of a 14-year-old child.

Philip Dydynski, Chief of Radiology at Kosair Children’s Hospital, decided to approach the school when he and his medical team were looking at ways of treating Roland Lian Cung Bawi, a boy born with four heart defects. Using images taken from a CT scan, researchers from the school’s Rapid Prototyping Center were able to create and print a 3D model of Roland’s heart that was 1.5 times its actual size.

Built in three pieces using a flexible filament, the printing reportedly took around 20 hours and cost US$600. Cardiothoracic surgeon Erle Austin III then used the model to devise a surgical plan, ultimately resulting in the repairing of the heart’s defects in just one operation. As Austin said, “I found the model to be a game changer in planning to do surgery on a complex congenital heart defect.”

Roland has since been released from hospital and is said to be in good health. In the future, this type of rapid prototyping could become a mainstay for medical training and practice surgery, giving surgeons the option of testing out their strategies beforehand. And be sure to check out this video of the procedure from the University of Louisville:


And in another story, improvements made in the field of bionics are making a big difference for people suffering from diabetes. For people living with type 1 diabetes, the constant need to extract blood and monitor it can be quite the hassle. Hence why medical researchers are looking for new and non-invasive ways to monitor and adjust sugar levels.

Solutions range from laser blood-monitors to glucose-sensitive nanodust, but the field of bionics also offers solutions. Consider the bionic pancreas that was recently trialled among 30 adults, and has also been approved by the US Food and Drug Administration (FDA) for three transitional outpatient studies over the next 18 months.

The device comprises a sensor inserted under the skin that relays hormone level data to a monitoring device, which in turn sends the information wirelessly to an app on the user’s smartphone. Based on the data, which is provided every five minutes, the app calculates required dosages of insulin or glucagon and communicates the information to two hormone infusion pumps worn by the patient.
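The sense-compute-dose loop described above can be sketched in a few lines of Python. To be clear, this is a hypothetical illustration of the five-minute cycle, not the device’s actual algorithm; the glucose thresholds and dosing factors are invented for the sketch and are not medical values.

```python
# Hypothetical sketch of the bionic pancreas control cycle described above.
# All thresholds and dosing factors are illustrative only, NOT medical values.

TARGET_MG_DL = 100  # illustrative target blood glucose level

def compute_dose(glucose_mg_dl):
    """Return (hormone, units) for one five-minute sensor cycle."""
    if glucose_mg_dl > TARGET_MG_DL + 40:
        # too high: deliver insulin proportional to the excess
        return ("insulin", round((glucose_mg_dl - TARGET_MG_DL) * 0.01, 2))
    if glucose_mg_dl < TARGET_MG_DL - 30:
        # too low: deliver glucagon to raise glucose
        return ("glucagon", round((TARGET_MG_DL - glucose_mg_dl) * 0.02, 2))
    return (None, 0.0)  # in range: no pump action this cycle

readings = [180, 95, 60]  # simulated five-minute sensor readings
decisions = [compute_dose(r) for r in readings]
print(decisions)  # → [('insulin', 0.8), (None, 0.0), ('glucagon', 0.8)]
```

The interesting design point is that the real device is dual-hormone: unlike an insulin-only pump, it can push glucose in both directions, which is why the app drives two infusion pumps.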

The bionic pancreas was developed by Dr. Edward Damiano, associate professor of biomedical engineering at Boston University, and Dr. Steven Russell, assistant professor at Harvard Medical School. To date, it has been trialled with diabetic pigs and in three hospital-based feasibility studies amongst adults and adolescents over 24-48 hour periods.

The upcoming studies will allow the device to be tested by participants in real-world scenarios with decreasing amounts of supervision. The first will test the device’s performance over five continuous days involving twenty adults with type 1 diabetes. The results will then be compared to a corresponding five-day period during which the participants will be at home under their own care and without the device.

A second study will be carried out using 16 boys and 16 girls with type 1 diabetes, testing the device’s performance for six days against a further six days of the participants’ usual care routine. The third and final study will be carried out amongst 50 to 60 further participants with type 1 diabetes who are also medical professionals.

Should the transitional trials be successful, a more developed version of the bionic pancreas, based on results and feedback from the previous trials, will be put through trials in 2015. If all goes well, Prof. Damiano hopes that the bionic pancreas will gain FDA approval and be rolled out by 2017, when his son, who has type 1 diabetes, is expected to start higher education.

With this latest development, we are seeing how smart technology and non-invasive methods are merging to assist people living with chronic health issues. In addition to “smart tattoos” and embedded monitors, it is leading to an age where our health is increasingly in our own hands, and preventative medicine takes precedence over corrective.

Sources: gizmag.com, (2)

The Future of Smart Living: Smart Homes

At this year’s Consumer Electronics Show, one of the tech trends to watch was the concept of the Smart Home. Yes, in addition to 4K televisions, curved OLEDs, smart car technology and wearables, a new breed of in-home technology that extends far beyond the living room made some serious waves. And after numerous displays and presentations, it seems that future homes will involve connectivity and seamless automation.

To be fair, some smart home devices – such as connected light bulbs and thinking thermostats – have made their way into homes already. But by the end of 2014, a dizzying array of home devices is expected to appear, communicating across the Internet and your home network from every room in the house. It’s like the Internet of Things meets modern living, creating solutions that are right at your fingertips (via your smartphone).

But in many ways, the companies on the vanguard of this movement are still working on drawing the map, and several questions still loom. For example, how will your connected refrigerator and your connected light bulbs talk to each other? Should the interface for the connected home always be the cell phone, or some other wirelessly connected device?

Such was the topic of debate at this year’s CES Smart Home Panel. The panel featured GE Home & Business Solutions Manager John Ouseph; Nest co-founder and VP of Engineering Matt Rogers; Revolv co-founder and Head of Marketing Mike Soucie; Philips’ Head of Technology, Connected Lighting George Yianni; Belkin Director of Product Management Ohad Zeira, and CNET Executive Editor Rich Brown.

Specific technologies showcased this year that combined connectivity and smart living included the Samsung Lumen Smart Home Control Panel. This device is basically a way to control all the devices in your home, including the lighting, climate control, and sound and entertainment systems. It also networks with all your wireless devices (especially if they’re made by Samsung!) to run your home even when you’re not inside it.

Ultimately, Samsung hopes to release a souped-up version of this technology that can be integrated with any device in the home. Basically, it would be connected to everything from the washer and dryer to the refrigerator and even household robots, letting you know when the dishes are done, the clothes need to be flipped, the best-before dates are about to expire, and the last time your house was vacuumed.


As already noted, intrinsic to the Smart Home concept is the idea of integration with smartphones and other devices. Hence, Samsung was sure to develop a Smart Home app that allows people to connect to all their smart devices via WiFi, even when out of the home. For example, people who forget to turn off the lights and the appliances can do so even from the road or the office.

These features can be activated by voice, and several systems can be controlled at once through specific commands (e.g. “going to bed” turns the lights off and the temperature down). Cameras also monitor the home and give the user the ability to survey other rooms in the house, keeping a remote eye on things while away or in another room. And users can even answer the phone when in another room.
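A scene-based command system like the “going to bed” example above boils down to a mapping from recognized phrases to lists of device actions. Here is a minimal sketch; the scene names, device names, and actions are assumptions made up for illustration, not Samsung’s actual command set.

```python
# Minimal sketch of voice "scenes" mapped to device actions.
# Scene and device names are invented for illustration.

SCENES = {
    "going to bed": [("lights", "off"), ("thermostat", "lower")],
    "leaving home": [("lights", "off"), ("appliances", "off"), ("camera", "arm")],
}

def run_scene(name):
    """Return the list of device commands a recognized scene would dispatch."""
    return SCENES.get(name.lower(), [])  # unknown phrases do nothing

print(run_scene("Going to bed"))  # → [('lights', 'off'), ('thermostat', 'lower')]
```

In a real system each tuple would become a network call to the device in question; the table-driven shape is what lets one spoken phrase fan out to several systems at once.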

Check out the video of the Smart Home demonstration below:


Other companies made presentations as well. For instance, LG previewed their own software that would allow people to connect and communicate with their home. It’s known as HomeChat, an app based on Natural Language Processing (NLP) that lets users send texts to their compatible LG appliances. It works on Android, BlackBerry, iOS, Nokia Asha, and Windows Phone devices as well as OS X and Windows computers.

This represents a big improvement over last year’s Smart ThinQ, a set of similar applications that debuted at CES 2013. According to many tech reviewers, the biggest problem with those particular apps was the fact that each one was developed for a specific appliance. Not so with HomeChat, which allows for wireless control over every integrated device in the home.

Then there was the Aura, a re-imagined alarm clock that monitors your sleep patterns to promote rest and well-being. Unlike previous sleep monitoring devices, which monitor sleep but do not intervene to improve it, the Aura is fitted with a mattress sensor that monitors your movements in the night, as well as a series of multi-colored LED lights that “hack” your circadian rhythms.

In the morning, its light glows blue like daytime light, signaling you to wake up when it’s optimal, based upon your stirrings. At night, the LED glows orange and red like a sunset and turns itself off when you fall asleep. The designers hope that this mix of cool and warm light can fill in where the seasons fall short, and coax your body into restful homeostasis.
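At its simplest, this wake/sleep lighting behavior is a mapping from time of day to light color. The toy sketch below uses fixed clock hours and colors assumed purely for illustration; the real Aura adapts to your movements rather than a fixed schedule.

```python
# Toy sketch of circadian-style lighting: hour of day -> LED color.
# Hours and colors are assumptions for illustration only.

def led_color(hour):
    """Pick a wake/sleep light color for an hour on a 24-hour clock."""
    if 6 <= hour < 10:
        return "blue"    # morning: daylight-like, cue to wake
    if 20 <= hour < 23:
        return "orange"  # evening: sunset-like, cue to wind down
    return "off"         # otherwise the lamp stays dark

print([led_color(h) for h in (7, 21, 2)])  # → ['blue', 'orange', 'off']
```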

Meanwhile, the Aura will send your nightly sleep report to the cloud via Wi-Fi, and you can check in on your own rest via the accompanying smartphone app. The entire body is also touch-sensitive, and its core LEDs – which are generally bright and piercing – are cleverly projected into an open-air orb, diffusing the light while evoking the shape of the sun. And to deactivate the alarm, people need only trigger the sensor by getting out of bed.

Then there was Mother, a robotic wellness monitor produced by French inventor Rafi Haladjian. This small, Russian-doll-shaped device is basically an internet base station with four sensor packs that track 15 different parts of your life. Each sensor pack is small enough to fit in your pocket to track your steps, affix to your door to act as a security alarm, or stick to your coffee maker to track how much you’re drinking and when you need more beans.

And though the name may sound silly or tongue-in-cheek, it is central to Haladjian’s vision of what the “Internet of things” holds for us. More and more, smart and sensor-laden devices are manifesting as wellness accessories, ranging from fitness bands to wireless BP and heart rate monitors. But the problem is, all of these devices require their own app to operate. And the proliferation of devices is leading to a whole lot of digital clutter.

As Haladjian said in a recent interview with Co.Design:

Lots of things that were manageable when the number of smart devices was scarce, become unbearable when you push the limit past 10. You won’t be willing to change 50 batteries every couple of weeks. You won’t be willing to push the sync button every day. And you can’t bear to have 50 devices sending you notifications when something happens to them!

And last, but not least, there was the Keecker – a robotic video projector that may just be the future of video entertainment. Not only is this robot able to wheel around the house like a Roomba, it can also sync with smartphones and display anything on your smart devices – from email, to photos, to videos. And it has a battery charge that lasts a week, so no cords are needed.

Designed by Pierre Lebeau, a former product manager at Google, the robot is programmed to follow its human owner from room to room like a little butler (via the smartphone app). Its purpose is to create an immersive media environment by freeing the screen from its fixed spot and projecting media wherever there is enough surface space.


In this respect, it’s not unlike the Omnitouch or other projection smartscreens, which utilize projectors and motion capture technology to allow people to turn any surface into a screen. The design even includes features found in other smart home devices – like the Nest smoke detector or the Spotter – which allow for the measuring of a home’s CO2 levels and temperature, or alerting users to unusual activity when they aren’t home.

Lebeau and his company will soon be launching a Kickstarter campaign in order to finance bringing the technology to the open market. And though it has yet to launch, the cost of the robot is expected to be between $4,000 and $5,000.

Sources: cnet.com, (2), (3), (4), fastcodesign, (2), (3), (4)

The Future is Here: The Copenhagen Wheel

Fans of the cable show Weeds ought to instantly recognize this invention. It was featured as a product invented by one of the characters while living (predictably) in Copenhagen. In addition, it was the subject of news stories, articles, design awards, and a whole lot of public interest. People wanted to get their hands on it, and for obvious reasons.

It’s known as the Copenhagen Wheel, a device invented by the MIT SENSEable City Lab back in 2009 to electrify the bicycle. Since that time, engineers at MIT have been working to refine it in preparation for the day when it would be commercially available. And that time has come, as a new company called Superpedestrian announced that it has raised $2.1 million in venture capital to make the device available to the public.

Superpedestrian founder Assaf Biderman – who is also the SENSEable City Lab’s associate director and, together with lab director Carlo Ratti, one of the creators of the wheel – had this to say:

The project touched an exposed nerve somehow. Aside from news coverage and design awards, people were wanting it. Over 14,000 people emailed saying ‘I want to buy it, sell it, make it for you.’

Three years after inventing it, Biderman finally decided that it was time to spin off a company to make it happen. MIT filed all the relevant patents, and Superpedestrian acquired exclusive licenses to the Copenhagen Wheel technology. And by late November, they plan to launch the wheel to the public for the very first time.

And though many of the details are being carefully guarded in preparation for the release, some are already known. For example, the wheel can be fitted to almost any bike, is controlled by sensors in the pedals, and has a power-assist feature that doesn’t require any work on the part of the rider. And according to Biderman, its range “will cover the average suburban commute, about 15 miles to and from work and back home.”

On top of that, a regenerative braking system stores energy for later use in a lithium battery. The wheel also comes with an app that allows users to control special features from their smartphone. These include being able to lock and unlock the bike, select motor assistance, and get real-time data about road conditions. An open-source platform called the Superpedestrian SDK also exists to allow developers to build their own apps.

Interestingly enough, the Copenhagen Wheel also has a rival, whose appearance on the market seems nothing short of conspiratorial. Its competitor is the FlyKly Smart Wheel, a device which has raised over $150,000 on Kickstarter so far. It is extremely similar to the Copenhagen Wheel in most respects, from its electrical assistance to the fact that it can be integrated via smartphone.

According to Biderman, the appearance of the Smart Wheel is just a coincidence, though it is similar to their product. And his company really doesn’t have to worry about competition, since the Copenhagen Wheel has years of brand recognition and the MIT name behind it. In terms of the target audience, Biderman says that they are looking at targeting city dwellers as well as cyclists:

If you’re an urbanite, you can use it to move all around, and go as far as the edges of most cities with this quite easily. You overcome topographical challenges like hills. The point is to attract more people to cycling.

Though no indication has been given how much an individual unit will cost, it is expected to have a price point that’s competitive with today’s e-bikes.

The FlyKly Smart Wheel, by comparison, can be pre-ordered for $550 apiece. In total, that campaign has raised $301,867 (the original goal was $100,000) since opening on Oct. 16th. As a result, they have been able to reach their first “stretch goal” of producing a 20″ wheel. If they can reach $500,000 before the campaign closes on Nov. 25th, they will be able to deliver on their other goals: a motor brake and a glow-in-the-dark casing.

For some time, designers and engineers have been trying to find ways to make alternative transportation both effective and attractive. Between these designs and a slew of others that will undoubtedly follow, it looks like e-bicycling may be set to fill that void. Combined with electric cars, self-driving cars, hydrogen cars, robotaxis, podcars, and high speed trains, we could be looking at the revolution in transit that we’ve been waiting for.

Sources: fastcoexist.com, (2), kickstarter.com

The Future of Education: Facial Recognition in the Classroom

For some time now, classroom cameras have been used to see what teachers do in the course of their lessons, and to evaluate their overall effectiveness as educators. But thanks to recent advances in facial recognition software, a system has been devised that will assess teacher effectiveness by turning the cameras around and aiming them at the class.

It’s known as EngageSense, and it was developed by SensorStar Labs in Queens, New York. It begins by filming students’ faces, then applying an algorithm to assess their level of interest. And while it might sound a bit Big Brother-y, the goal is actually quite progressive. Traditional logic has it that by filming the teacher, you will know what they are doing right and wrong.

This system reverses that thinking, measuring reactions to see how the students feel and react, and tracking their level of interest over time to see what works for them and what doesn’t. As SensorStar Labs co-founder Sean Montgomery put it:

This idea of adding the cameras and being able to use that information to assist teachers to improve their lessons is already underway. Where this is trying to add a little value on top of that is to make it less work for the teachers.

Montgomery also emphasized that the technology is still in the research and development phase. In its current form, it uses webcams to shoot students’ faces and computer vision algorithms to analyze their gaze – measuring eye movement, the direction they are facing, and facial expressions. That, coupled with audio, can be transformed into a rough, automated metric of student engagement throughout the day.
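One can imagine how per-frame signals like these might be rolled up into a single engagement number over a lesson. The sketch below is purely illustrative; the feature names and weights are assumptions, not SensorStar’s actual algorithm.

```python
# Illustrative roll-up of per-frame gaze/expression signals into an
# engagement score. Feature names and weights are invented assumptions.

def engagement_score(frame):
    """Score one analyzed video frame between 0.0 and 1.0."""
    score = 0.0
    if frame.get("facing_front"):
        score += 0.5  # head oriented toward the front of the room
    if frame.get("eyes_on_target"):
        score += 0.3  # gaze on the teacher/board
    if frame.get("expression") == "attentive":
        score += 0.2  # facial expression classified as attentive
    return score

frames = [
    {"facing_front": True, "eyes_on_target": True, "expression": "attentive"},
    {"facing_front": True, "eyes_on_target": False, "expression": "neutral"},
    {"facing_front": False, "eyes_on_target": False, "expression": "neutral"},
]
# average engagement across the sampled frames of a lesson segment
avg = sum(engagement_score(f) for f in frames) / len(frames)
print(round(avg, 2))  # → 0.5
```

Plotting such averages against the lesson timeline is what would let a teacher see, at a glance, where attention dropped off.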

After a lesson, a teacher could boot up EngageSense and see, with a glance at the dashboard, when students were paying rapt attention, and at what points they became confused or distracted. Beyond that, the concept is still being refined as SensorStar Labs looks both for funding and for schools to give EngageSense a real-world trial.

The ultimate goal here is to tailor lessons so that the learning styles of all students can be addressed. And given the importance of classroom accommodation and the amount of time dedicated to ensuring individual student success, a tool like this may prove very useful. Rather than relying on logs and spreadsheets, EngageSense employs standard computer hardware that simplifies the evaluation process over the course of days, weeks, months, and even years.

At the present time, the biggest obstacle would definitely be privacy concerns. While the software is designed to gauge student engagement right now, it would not be difficult at all to imagine the same technology applied to police interrogations, security footage, or public surveillance.

One way to assuage these concerns in the classroom, according to Montgomery, is to make the entire process voluntary. Much in the same way that smartphone apps ask permission to access your GPS or other personal data, parental consent would be needed before a child could be recorded or their data accessed and analyzed.

Sources: fastcoexist.com, labs.sensorstar.com

Nukemap 3D: Bringing Nuclear War to your Home!

Ever wonder what it would look like if a thermonuclear device hit your hometown? Yeah, me neither! But let’s pretend for a moment that this is something you’ve actually considered… sicko! There’s an online browser-based program for that! It’s called Nukemap 3D, and it uses a Google Earth plug-in to produce a set of graphics that show the effects of a nuclear weapon on your city of choice.

All you have to do is pick your target, select your favorite thermonuclear device, and you can see an animated mushroom cloud rising over ground zero. The creator was Dr. Alex Wellerstein, an Associate Historian at the Center for History of Physics at the American Institute of Physics in College Park, Maryland, who specializes in the history of nuclear weapons and nuclear secrecy.

Interestingly enough, Wellerstein’s inspiration for developing Nukemap 3D came from his experience of trying to teach the history of nuclear weapons to undergraduates. As people who had completely missed the Cold War, these students naturally didn’t think much about the prospect of nuclear war, and had little to no cultural association with it.

Events like Hiroshima and the Cuban Missile Crisis were essentially ancient history to them. For him and his wife, who teaches high school, it was always a challenge to get students to relate to these issues from the past and to see how they relate to the present. Specifically, he wanted his students to address the larger issue of how one controls a dangerous technology that others find desirable.

And given how inundated young people are today with technology, he believed an online tool that allowed students to visualize the effects of a nuclear attack was just the thing. The concept originally grew out of his own research to determine the size of the Hiroshima bomb versus the first hydrogen bomb versus a modern nuclear weapon.

After producing a web page with the relevant info in 2012, he began receiving millions of hits and felt the need to expand on it. One of the things he felt was missing was info on additional effects of nuclear blasts, such as radioactive debris that comes down as fallout, contamination that can extend for hundreds of kilometers in all directions, and how this can spread with prevailing winds.
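One physical fact underlying such bomb-to-bomb comparisons is that blast-damage radii grow roughly with the cube root of the weapon’s yield, so a thousandfold jump in explosive power enlarges the damage radius only about tenfold. A quick sketch of the scaling (the baseline radius below is illustrative, not taken from Wellerstein’s data):

```python
# Cube-root scaling of blast-damage radius with weapon yield: for a fixed
# overpressure, radius scales as (yield)^(1/3). Baseline is illustrative.

def scaled_radius(base_radius_km, base_yield_kt, new_yield_kt):
    """Rescale a blast-damage radius from one yield to another."""
    return base_radius_km * (new_yield_kt / base_yield_kt) ** (1 / 3)

# e.g. a 1.6 km damage radius at 15 kilotons (roughly Hiroshima-scale),
# rescaled to a 15-megaton device: ~10x the radius, not 1000x
print(round(scaled_radius(1.6, 15, 15_000), 1))  # → 16.0
```

This is also why visual tools matter here: the non-linear scaling defeats most people’s intuition in both directions.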

In addition to offering a pedagogical tool that can help students appreciate what life was like during the Cold War, Wellerstein also hopes his site could help combat misinformation about modern nukes. All too often, people assume that small devices – like those being developed by North Korea – could only cause small-scale damage, unaware of the collateral damage and long-term effects.

Another use of the program is in helping to combat ideas of “instant apocalypse” and other misconceptions about nuclear war. As we move farther and farther away from an age in which nuclear holocaust was a distinct possibility, people find themselves turning to movies and pop culture for their information on what nuclear war looks like. In these scenarios, the end result is always apocalyptic, and by and large, this is not the case.

In a war involving a nuclear exchange, civilization does not simply come to an end and mutants do not begin roaming the Earth. In reality, it would mean mass destruction within a certain area and tens of thousands of deaths. This would be followed by mass evacuations of the surrounding areas, the creation of field hospitals and refugee camps, and an ongoing state of emergency.

In short, a nuclear exchange would not mean the instantaneous end of civilization as we know it. Instead, it would lead to an extended period of panic, emergency measures, the presence of NGOs, humanitarian aid workers, and lots and lots of people in uniform. And the effects would be felt long after the radiation cleared and the ruins were rebuilt, and the memory would be slow to fade.

Hiroshima, after the blast

Basically, Wellerstein created Nukemap 3D in the hope of finding a middle ground between understatement and exaggeration, seeking to combat the effects of misinformation on both fronts. In a nuclear war, no one is left unaffected; but at the same time, civilization doesn’t just come to an abrupt end. As anyone who survived the horrors of Hiroshima and Nagasaki can attest, life does go on after a nuclear attack.


The effects are felt for a very long time, and the scars run very deep. And as those who actually witnessed what a nuclear blast looks like (or lived in fear of one) grow old and pass on, people need to be educated on what it entails. And a graphic representation, one that utilizes the world’s most popular form of media, is perhaps the most effective way of doing that.

In the meantime, be sure to check out Nukemap 3D and see exactly what your hometown would look like if it were hit by a nuclear device. It’s quite… eye-opening!

Source: gizmag.com