The Large Hadron Collider: We’ve Definitely Found the Higgs Boson

In July 2012, the CERN laboratory in Geneva, Switzerland made history when it discovered an elementary particle that behaved in a way consistent with the proposed Higgs boson – otherwise known as the "God Particle". Now, some two years later, the people working with the Large Hadron Collider have confirmed that what they observed was definitely the Higgs boson, the one predicted by the Standard Model of particle physics.

In the new study, published in Nature Physics, the CERN researchers indicate that the particle observed in 2012 does indeed decay into fermions – as predicted by the Standard Model of particle physics. It sits in the mass-energy region of 125 GeV, has no spin, and can decay into a variety of lighter particles. This means we can say with some certainty that the Higgs boson is the particle that gives other particles their mass – which is also predicted by the Standard Model.
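For context – these branching fractions are the standard textbook predictions for a 125 GeV Higgs, not figures from the study itself – the fermionic channels actually dominate how the Higgs decays:

```latex
% Approximate SM branching ratios for m_H = 125 GeV (textbook values):
H \to b\bar{b} \approx 58\%, \qquad H \to \tau^+\tau^- \approx 6\%
% versus the bosonic channels that drove the 2012 discovery:
H \to WW^* \approx 21\%, \qquad H \to ZZ^* \approx 3\%, \qquad H \to \gamma\gamma \approx 0.2\%
```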

This model, which is explained through quantum field theory – itself an amalgam of quantum mechanics and Einstein's special theory of relativity – claims that deep mathematical symmetries rule the interactions among all elementary particles. Until now, the decay modes observed at CERN had been a Higgs particle giving rise to two high-energy photons, or a Higgs decaying into two Z bosons or two W bosons.

But with the observation of these fermionic decays, the researchers are now sure they have found the last holdout to full confirmation that the particle is the one the Standard Model predicts. As Marcus Klute of the CMS Collaboration said in a statement:

Our findings confirm the presence of the Standard Model Boson. Establishing a property of the Standard Model is big news itself.

It certainly is big news for scientists, who can now say with far greater certainty that our current conception of how particles interact and behave is more than theoretical. But on the flip side, it also means we're no closer to pushing beyond the Standard Model and into the realm of the unknown. One of the big shortfalls of the Standard Model is that it doesn't account for gravity, dark energy, dark matter, and some other quirks that are essential to our understanding of the universe.

At present, one of the most popular theories for how these phenomena interact with the known forces of our universe – i.e. electromagnetism and the strong and weak nuclear forces – is supersymmetry. This theory postulates that every Standard Model particle also has a superpartner that is incredibly heavy – thus accounting for the 23% of the universe that is apparently made up of dark matter. It is hoped that when the LHC turns back on in 2015 (pending upgrades), it will be able to discover these partners.

If that doesn't work, supersymmetry will probably have to wait for the LHC's planned successor. Known as the "Very Large Hadron Collider" (VLHC), this particle accelerator would measure some 96 km (60 miles) in length – roughly four times as long as its predecessor. And with its proposed ability to smash protons together with a collision energy of 100 teraelectronvolts – roughly seven times the LHC's design energy of 14 TeV – it will hopefully have the power needed to answer the questions the discovery of the Higgs boson has raised.

These will hopefully include whether or not supersymmetry holds up and how gravity interacts with the other three fundamental forces of the universe – a discovery that would finally reconcile the seemingly irreconcilable theories of general relativity and quantum mechanics. At which point (and speaking entirely in metaphors) we will have gone from discovering the "God Particle" to potentially understanding the mind of God Himself.

I don't think I'm being melodramatic!

Sources: extremetech.com, blogs.discovermagazine.com

News from Space: Insight Lander and the LDSD

Scientists have been staring at the surface of Mars for decades through high-powered telescopes. Only recently, and with the help of robotic missions, has anyone been able to look closer. And with the success of the Spirit, Opportunity and Curiosity rovers, NASA is preparing to go deeper still. The space agency just got official approval to begin construction of the InSight lander, which will be launched in spring 2016. Once there, it's going to explore the subsurface of Mars to see what's down there.

Officially, the lander is known as the Interior Exploration using Seismic Investigations, Geodesy and Heat Transport (hence InSight), and back in May, NASA passed the crucial mission final design review. The next step is to line up manufacturers and equipment partners to build the probe and get it to Mars on time. As with many deep-space launches, the timing is incredibly important – if not launched when Earth and Mars are properly aligned, the trip would be far too long.
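To see why, here's a quick back-of-the-envelope calculation (using standard orbital periods, not figures from the article) showing that favorable Earth-Mars alignments only come around every 26 months or so:

```python
# Back-of-the-envelope launch-window arithmetic (a sketch, not mission
# planning). Favorable Earth-Mars alignments recur once per synodic
# period: 1/T_syn = 1/T_earth - 1/T_mars.

T_EARTH = 365.25  # days, Earth's orbital period
T_MARS = 686.98   # days, Mars' orbital period

t_syn = 1 / (1 / T_EARTH - 1 / T_MARS)
print(f"Synodic period: {t_syn:.0f} days (~{t_syn / 30.44:.0f} months)")
# -> Synodic period: 780 days (~26 months)
```

Miss the window, and the next one is more than two years away – which is why passing the design review on schedule matters so much.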

Unlike the Curiosity rover, which landed on the Red Planet by way of a fascinating rocket-powered sky crane, InSight will be a stationary probe more akin to the Phoenix lander. That probe was deployed to search for signs of microbial life on Mars by collecting and analyzing soil samples. InSight, however, will not rely on a tiny shovel like Phoenix did – it will have a fully articulating robotic arm equipped with burrowing instruments.

Also unlike its rover predecessors, once InSight sets down near the Martian equator, it will stay there for its entire two-year mission – and possibly longer if it can hack it. That's a much longer official mission duration than the Phoenix lander was designed for, meaning it's going to need to endure some harsh conditions. This, in conjunction with InSight's solar power system, made the equatorial region a preferable landing zone.

For the sake of its mission, the InSight lander will use a sensitive subsurface instrument called the Seismic Experiment for Interior Structure (SEIS). This device will track ground motion transmitted through the interior of the planet by so-called "marsquakes" and distant meteor impacts. A separate heat-flow analysis package will measure the heat radiating from the planet's interior. From all of this, scientists hope to shed some light on Mars' early history and formation.

For instance, Earth's larger size has kept its core hot and spinning for billions of years, which provides us with a protective magnetic field. By contrast, Mars cooled very quickly, so NASA scientists believe more data on the formation and early life of rocky planets will be preserved there. The lander will also connect to NASA's Deep Space Network antennas on Earth to precisely track the position of Mars over time. A slight wobble could indicate that the Red Planet still has a small molten core.

If all goes to plan, InSight should arrive on Mars just six months after its launch in spring 2016. Hopefully it will teach us not only about Mars' past, but about our own as well.

After the daring new type of landing that was performed with the Curiosity rover, NASA went back to the drawing board to come up with something even better. Their solution: the "Low-Density Supersonic Decelerator", a saucer-shaped vehicle consisting of an inflatable buffer that goes around the ship's heat shield. It is hoped that this will help future spacecraft put on the brakes as they enter Mars' atmosphere so they can make a soft, controlled landing.

Back in January and again in April, NASA's Jet Propulsion Laboratory tested the LDSD using a rocket sled. Earlier this month, the next phase was to take place, in the form of a high-altitude balloon that would carry it to an altitude of over 36,600 meters (120,000 feet). Once there, the device was to be dropped from the balloon and accelerated until it reached a velocity of four times the speed of sound. Then the LDSD would inflate, and the teams on the ground would assess how it behaved.

Unfortunately, the test did not take place, as NASA lost its reserved time at the range in Hawaii where it was slated to go down. As Mark Adler, the LDSD project manager, explained:

There were six total opportunities to test the vehicle, and the delay of all six opportunities was caused by weather. We needed the mid-level winds between 15,000 and 60,000 feet [4,500 meters to 18,230 meters] to take the balloon away from the island. While there were a few days that were very close, none of the days had the proper wind conditions.

In short, bad weather foiled any potential opportunity to conduct the test before their time ran out. And while officials don’t know when they will get another chance to book time at the U.S. Navy’s Pacific Missile Range in Kauai, Hawaii, they’re hoping to start the testing near the end of June. NASA emphasized that the bad weather was quite unexpected, as the team had spent two years looking at wind conditions worldwide and determined Kauai was the best spot for testing their concept over the ocean.

If the technology works, NASA says it will be useful for landing heavier spacecraft on the Red Planet. This is one of the challenges the agency must surmount if it launches human missions to the planet, which would require more equipment and living supplies than any of the rover or lander missions mounted so far. And if everything checks out, the testing goes as scheduled and the funding is available, NASA plans to use an LDSD on a spacecraft as early as 2018.

And in the meantime, check out this concept video of the LDSD, courtesy of NASA’s Jet Propulsion Laboratory:


Sources: universetoday.com, (2), extremetech.com

NASA’s Proposed Warp-Drive Visualized

It's no secret that NASA has been taking a serious look at Faster-Than-Light (FTL) technology in recent years. It began back in 2012, when Dr. Harold White, a team leader from NASA's Engineering Directorate, announced that he and his team had begun work on the development of a warp drive. His proposed design, an ingenious re-imagining of the Alcubierre Drive, may eventually result in an engine that can transport a spacecraft to the nearest star in a matter of weeks – and all without violating Einstein's theory of relativity.

In the spirit of this proposed endeavor, White chose to collaborate with an artist to visualize what such a ship might look like. Said artist, Mark Rademaker, recently unveiled the fruit of this collaboration in the form of a series of concept images. At the heart of them is a sleek ship nestled at the center of two enormous rings that create the warp bubble. Known as the IXS Enterprise, the ship has one foot in the world of science fiction, but the other in the realm of hard science.

The idea for the warp drive comes from the work published by Miguel Alcubierre in 1994. His version of a warp drive is based on the observation that, though light can only travel at a maximum speed of 300,000 km/s (186,000 miles per second, aka c), spacetime itself has no theoretical speed limit. Indeed, many physicists believe that during the first seconds of the Big Bang, the universe expanded at some 30 billion times the speed of light.

The Alcubierre warp drive works by recreating this ancient expansion in the form of a localized bubble around a spaceship. Alcubierre reasoned that if he could form a torus of negative energy density around a spacecraft and push it in the right direction, this would compress space in front of it and expand space behind it. As a result, the ship could travel at many times the speed of light while the ship itself sits in zero gravity – hence sparing the crew from the effects of acceleration.
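For the mathematically inclined, the geometry Alcubierre proposed can be written down compactly. The following is the standard form of his 1994 metric, reproduced from the physics literature rather than from this article:

```latex
% The Alcubierre (1994) metric in its standard form: spacetime is flat
% far from the bubble, compressed ahead of it and expanded behind it.
ds^2 = -c^2\,dt^2 + \bigl[dx - v_s\,f(r_s)\,dt\bigr]^2 + dy^2 + dz^2
% Here v_s = dx_s/dt is the bubble's velocity, r_s is the distance from
% the bubble's center, and the shaping function f(r_s) falls smoothly
% from 1 at the ship to 0 far away -- so the ship rides in a locally
% flat region while spacetime itself does the "moving".
```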

Unfortunately, the original math indicated that a torus the size of Jupiter would be needed, and you'd have to turn Jupiter itself into pure energy to power it. Worse, negative energy density violates a number of physical limits itself, and creating it requires forms of matter so exotic that their existence is largely hypothetical. In short, an idea proposed to circumvent the laws of physics fell prey to their limitations.

However, Dr. Harold "Sonny" White of NASA's Johnson Space Center reevaluated Alcubierre's equations and made adjustments that reduced both the required size of the torus and the amount of energy needed. In the case of the former, White discovered that making the torus thicker, while reducing the space available for the ship, allowed its size to be greatly decreased – from the size of Jupiter down to a width of 10 m (33 ft), roughly the size of the Voyager 1 probe.

In the case of the latter, oscillating the bubble around the craft would reduce the stiffness of spacetime, making it easier to distort. This would reduce the amount of energy required by several orders of magnitude for a ship traveling at ten times the speed of light. According to White, with such a setup, a ship could reach Alpha Centauri in a little over five months. A crew traveling on a ship that could accelerate to just shy of the speed of light would be able to make the same trip in about four and a half years.
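Those travel times are easy to sanity-check. Taking Alpha Centauri to be about 4.37 light-years away (a standard figure, assumed here rather than quoted from White), the arithmetic runs as follows:

```python
# Quick sanity check of the quoted travel times (illustrative only).
DIST_LY = 4.37  # distance to Alpha Centauri in light-years (assumed)

# At an effective ten times the speed of light (warp):
warp_months = DIST_LY / 10 * 12
print(f"Warp trip: ~{warp_months:.1f} months")        # ~5.2 months

# Just shy of light speed (Earth-frame travel time; relativistic
# time dilation aboard the ship is ignored in this rough figure):
sublight_years = DIST_LY / 0.999
print(f"Sublight trip: ~{sublight_years:.1f} years")  # ~4.4 years
```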

Rademaker's renderings reflect White's new calculations. The toruses are thicker and, unlike the famous warp nacelles on Star Trek's Enterprise, their design serves the actual function of hurling the craft between the stars. Also, the craft, which is divided into command and service modules, fits properly inside the warp bubble. There are some artistic additions, such as a bit of streamlining, but no one said an interstellar spaceship couldn't be functional and pretty, right?

For the time being, White's ideas can only be tested on special interferometers of the most exacting precision. Worse, the warp's dependence on negative energy density remains a major barrier to realization. While it can, under special circumstances, exist at a quantum level, in the classical physical world this ship must travel through, it cannot exist except as a property of some form of matter so exotic that it can barely be said to be capable of existing in our universe.

Though no one can say with any certainty when such a system might be technically feasible, it doesn’t hurt to look ahead and dream of what may one day be possible. And in the meantime, you can check out Rademaker’s entire gallery by going to his Flickr account here. And be sure to check out the video of Dr. White explaining his warp-drive concept at SpaceVision 2013:


Sources: gizmag.com, IO9.com, cnet.com, flickr.com

The Future is Here: Google’s New Self-Driving Car

Google has just unveiled its very first, built-from-scratch-in-Detroit, self-driving electric robot car. The culmination of years' worth of research and development, the Google vehicle is undoubtedly cuter in appearance than other EVs – like the Tesla Model S or Toyota Prius. In fact, it looks more like a Little Tikes plastic car, right down to the smiley face on the front end. This is no doubt the result of clever marketing and an attempt to reduce apprehension about the safety and long-term effects of autonomous vehicles.

The battery-powered electric vehicle has a stop-go button, but no steering wheel or pedals. It also comes with some seriously expensive hardware – radar, lidar, and 360-degree cameras – mounted in a tripod on the roof. This is to ensure good sightlines around the vehicle; at the moment, Google hasn't found a way to integrate the sensors seamlessly into the car's chassis. That is the long-term plan, but for now, the robotic tripod remains.

As the concept art above shows, the eventual goal appears to be to build the computer vision and ranging hardware into a slightly less obtrusive rooftop beacon. In terms of production, Google's short-term plan is to build around 200 of these cars over the next year, with road testing probably restricted to California for the next year or two. These first prototypes are mostly made of plastic, with battery-electric propulsion limited to a top speed of 25 mph (40 km/h).

Instead of an engine or "frunk," there's a foam bulkhead at the front of the car to protect the passengers. There are just a couple of seats in the interior, and some great big windows so passengers can enjoy the view while they ride in automated comfort. In a blog post on its website, Google stated that its goal is "improving road safety and transforming mobility for millions of people." Driverless cars could definitely revolutionize travel for people who can't currently drive.

Improving road safety is a little more ambiguous, though. It's generally agreed that if all cars on the road were autonomous, there could be massive gains in safety and efficiency – both in terms of fuel usage and being able to squeeze more cars onto the roads. In the lead-up to that scenario, though, there are all sorts of questions about how to effectively integrate a mix of manual, semi-autonomous, and fully self-driving vehicles on the same roadways.

Plus, there are the inevitable questions of practicality and exigent circumstances. For starters, having no controls in the car but a stop-go button may sound simple and creative, but it creates problems. What's a driver to do when they need to move the car just a few feet? What happens in a tight parking situation, where the car has to be inched along to negotiate it? Will Google's software allow for temporary double parking, or off-road driving for a concert or party?

Can you choose which parking spot the car will use, so as to leave the better or closer spots for someone with special needs (i.e. the elderly or physically disabled)? How will these cars handle "right of way" when it comes to pedestrians and other drivers? And is it even sensible to promote a system that will eventually make it easier to put more cars on the road? Mass transit is considered the best option for a cleaner, less cluttered future. Could this be a reason not to develop such ideas as the Hyperloop and other high-speed maglev trains?

All good questions, and ones which will no doubt have to be addressed as time goes on and production ramps up. In the meantime, there is no shortage of people interested in the concept and hoping to see where it will go. There are also plenty of people willing to take a test drive in the new robotic car; you can check out the results in the video below. And try not to be too creeped out if you see a car with a robotic tripod on top and a very disengaged passenger in the front seat!


Sources: extremetech.com, scientificamerican.com

Computex 2014

Earlier this month, Computex 2014 wrapped up in Taipei. And while this trade show may not have all the glitz and glamor of its counterpart in Vegas (aka. the Consumer Electronics Show), it is still an important launch pad for new IT products slated for release during the second half of the year. Compared to other venues, the Taiwanese event is more formal, more business-oriented, and geared toward people who love to tinker with their PCs.

For instance, it's an accessible platform for many Asian vendors who may not have the budget to head to Vegas. In addition to being cheaper for setting up booths and showing off products, it gives people a chance to look at devices that wouldn't often be seen in the western parts of the world. The timing of the show is also perfect for some manufacturers: held in June, it provides a fantastic window into the second half of the year.

For example, big-name brands like Asus typically use the event to launch a wide range of products. This year, that included such items as the super-slim Asus Book Chi and the multi-mode Book V, which, like the company's other products, demonstrate a flair for innovation that easily rivals the big western and Korean names. In addition, Intel, a long-time stalwart at Computex, premiered its fanless reference-design tablet that runs on the Llama Mountain chipset.

And much like CES, there were plenty of cool gadgets to be seen. These included a GPS tracker that can be attached to a dog collar to track a pet's movements; a hardy new Fujitsu laptop that showcases Japanese designers' aim to make gear that is both waterproof and dustproof; the Rosewill Chic-C powerbank, which consists of 1,000 mAh battery packs that attach together to give additional power and even charge gadgets; and the Altek Cubic compact camera that fits in the palm of the hand.

And then there was the Asus wireless storage device, a gadget that looks like an air freshener but is actually wireless storage that can be paired with a smartphone using near-field communication (NFC) – essentially transferring info simply by bringing a device into close proximity with it. And as always, there were plenty of cameras, display headsets, mobile devices, and wearables. This last category was particularly ubiquitous, in the form of look-alike big-name wearables.

By and large, the devices displayed this year were variations on a similar theme: wrist-mounted fitness trackers, smartwatches, and head-mounted smartglasses. The SiMEye smartglass display, for example, was every bit inspired by Google Glass, and even bears a strong resemblance to it. Though the show was admittedly short on innovation and long on imitation, it did showcase a major trend in the computing and tech industry.

In his keynote speech, Microsoft's Nick Parker talked about the age of ubiquitous computing, and the "devices we carry on us, as opposed to with us." What this means is that we may very well be entering a PC-less age, where computing is embedded in devices of ever-diminishing size. Eventually, it could even be miniaturized to the point where it is stitched into our clothing and accessed through contact lenses, never mind glasses or headsets!

Sources: cnet.com, (2), (3), computextaipei.com

The Internet of Things: AR and Real World Search

When it comes to the future, it is clear that the concept of the "Internet of Things" holds sway. This idea – which states that all objects will someday be identifiable thanks to virtual representations on the internet – is at the center of a great deal of the innovation that drives our modern economy. Be it wearables, wireless, augmented reality, or voice and image recognition, technologies that help us combine the real with the virtual are on the grow.

And so it's really no surprise that innovators are looking to take augmented reality to the next level. The fruit of some of this labor is Blippar, a market-leading image-recognition and augmented-reality platform. Lately, the company has been working on a proof of concept for Google Glass showing that 3-D searches are doable. This sort of technology is already available in the form of smartphone apps, but what's lacking is a central database that could turn any device into a visual search engine.

As Ambarish Mitra, the head of Blippar, stated, AR is already gaining traction among consumers thanks to some of the world's biggest industrial players recognizing the shift to visually mediated lifestyles. Examples include IKEA's interactive catalog, Heinz's AR recipe booklet, and Amazon's recent integration of the Flow AR technology into its primary shopping app. As this trend continues, we will need a Wikipedia-like database for 3-D objects that is available to us anytime, anywhere.

Social networks and platforms like Instagram, Pinterest, Snapchat and Facebook have all driven a cultural shift in the way people exchange information. This takes the form of text updates, instant messaging, and uploaded images. But as the saying goes, "a picture is worth a thousand words". In short, information absorbed visually has a marked advantage over that which is absorbed through reading text.

In fact, a recent NYU study found that people retain close to 80 percent of the information they consume through images, versus just 10 percent of what they read. If people are able to regularly consume rich content from the real world through their devices, we could learn, retain, and express ideas and information more effectively. Naturally, there will always be situations where text-based search is the most practical tool, but many searches arise from real-world experiences.

Right now, text is the only option available, and oftentimes people are unable to properly describe what they are looking for. But an image-recognition technology that could turn any smartphone, tablet or wearable device into a scanner capable of identifying any 3-D object would vastly simplify things. Information could be absorbed in a more efficient way, using an object's features to pull up information from a rapidly learning engine.
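As a toy illustration of what the lookup half of such an engine might involve – the recognition model is the genuinely hard part, and every name, function, and threshold below is invented for the sketch – visual search ultimately reduces to mapping image-derived fingerprints to stored metadata:

```python
# A toy visual-search lookup. Real systems use learned feature embeddings;
# here a crude "average hash" over an 8x8 grayscale thumbnail stands in.
# Every name and threshold below is hypothetical.

def average_hash(pixels):
    """Turn an 8x8 grid of grayscale values into a 64-bit fingerprint."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Count the differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# A stand-in for the "Wikipedia-like database for 3-D objects":
# fingerprint -> object metadata.
database = {}

def index_object(pixels, metadata):
    database[average_hash(pixels)] = metadata

def visual_search(pixels, max_distance=10):
    """Return the closest indexed object, if any is similar enough."""
    query = average_hash(pixels)
    best = min(database, key=lambda h: hamming(h, query), default=None)
    if best is not None and hamming(best, query) <= max_distance:
        return database[best]
    return None
```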

For better or for worse, wearable designs in consumer electronics have come to reflect a new understanding in the past few years. Basically, they have come to be extensions of our senses, much as Marshall McLuhan wrote in his 1964 book Understanding Media: The Extensions of Man. Google Glass is representative of this revolutionary change – a step in the direction of users interacting with the environment around them through technology.

Leading tech companies are already investing time and money into the development of their own AR products, and countless patents and research allocations are being made with every passing year. Facebook's acquisition of the virtual reality company Oculus is the most recent example, but even Samsung received a patent earlier this year for a camera-based augmented-reality keyboard that is projected onto the fingers of the user.

Augmented reality has already proven itself to be a multi-million-dollar industry – with 60 million users and around half a billion dollars in global revenues in 2013 alone. It's expected to exceed $1 billion annually by 2015, and combined with a Google Glass-type device, AR could eventually allow individuals to build vast libraries of data that will be the foundation for finding any 3-D object in the physical world.

In other words, the Internet of Things will come one step closer, with an evolving database of visual information at its base – one that keeps growing larger and (in all likelihood) smarter. Oh dear, I sense another Skynet reference coming on! In the meantime, enjoy this video that showcases Blippar's vision of what this future of image overlay and recognition will look like:


Sources: wired.com, dashboardinsight.com, blippar.com

Powered by the Sun: SolarCity and Silevo

Elon Musk is at it again, this time with clean, renewable energy. Just yesterday, he announced that SolarCity (the solar installation company that he chairs) plans to acquire a startup called Silevo. This producer of high-efficiency panels was acquired for $200 million (plus up to $150 million more if the company meets certain goals), and Musk now plans to build a huge factory to produce its panels as part of a strategy to make solar power "way cheaper" than power from fossil fuels.

SolarCity is one of the country's largest and fastest-growing solar installers, largely as a result of its innovative business model. Conceived by Musk as another cost-reducing gesture, the company allows homeowners and businesses to avoid any up-front cost. If its plans pan out, it will also become a major manufacturer of solar panels, with by far the largest factory in the U.S.

The acquisition makes sense, given that Silevo's technology has the potential to reduce the cost of installing solar panels – SolarCity's main business. But the decision to build a huge factory in the U.S. seems daring, especially given the recent failures of other U.S.-based solar manufacturers in the face of competition from Asia. Ultimately, however, SolarCity may have little choice, since it needs to find ways to reduce costs to keep growing.

Silevo produces solar panels that are roughly 15 to 20 percent more efficient than conventional ones thanks to the use of thin films of silicon – which increase efficiency by helping electrons flow more freely out of the material – and copper rather than silver electrodes to save costs. Higher efficiency can yield big savings on installation costs, which often exceed the cost of the panels themselves, because fewer panels are needed to generate a given amount of power.
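A rough worked example shows why efficiency translates into installation savings (the wattages and system size below are illustrative round numbers, not Silevo's specs):

```python
import math

# Why higher efficiency cuts installation costs: fewer panels per system.
# All wattages here are hypothetical round numbers, not Silevo specs.
system_watts = 8000             # target home system: 8 kW
conventional_panel_w = 250      # watts per conventional panel
efficient_panel_w = 250 * 1.18  # same-size panel, ~18% more efficient

n_conventional = math.ceil(system_watts / conventional_panel_w)
n_efficient = math.ceil(system_watts / efficient_panel_w)

print(f"Conventional panels needed: {n_conventional}")  # 32
print(f"Higher-efficiency panels:   {n_efficient}")     # 28
# Fewer panels means less racking, wiring, and labor -- the "soft"
# costs that often exceed the price of the panels themselves.
```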

Silevo isn't the only company producing high-efficiency solar cells. A version made by Panasonic is just as efficient, and SunPower makes ones that are significantly more so. But Silevo claims that its panels could be made as cheaply as conventional ones if production could be scaled up from the current 32 megawatts to the factory Musk has planned, which is expected to produce 1,000 megawatts or more.

The factory plan mirrors an idea Musk introduced at one of his other companies, Tesla Motors, which is building a huge "gigafactory" that he says will reduce the cost of batteries for electric cars. The proposed plant would have more lithium-ion battery capacity than all current factories combined. Combined with Musk's release of Tesla's patents, which he hopes will speed development, it is clear Musk has both eyes on making clean technology cheaper.

Not sure, but I think it’s fair to say Musk just became my hero! Not only is he all about the development of grand ideas, he is clearly willing to sacrifice profit and a monopolistic grasp on technologies in order to see them come to fruition.

Source: technologyreview.com

Stephen Hawking: AI Could Be a “Real Danger”

In a hilarious appearance on "Last Week Tonight" – John Oliver's HBO show – guest Stephen Hawking spoke about some rather interesting concepts. Among these were "imaginary time" and, more interestingly, artificial intelligence. And much to the surprise of Oliver, and perhaps more than a few viewers, Hawking was not too keen on the idea of the latter. In fact, his predictions were just a tad dire.

Of course, this is not the first time Oliver has had a scientific authority on his show, as demonstrated by his recent episode that dealt with climate change and featured guest speaker Bill Nye "The Science Guy". When asked about the concept of imaginary time, Hawking explained it as follows:

Imaginary time is like another direction in space. It’s the one bit of my work science fiction writers haven’t used.

In sum, imaginary time has something to do with time that runs in a different direction to the time that guides the universe and ravages us on a daily basis. And according to Hawking, the reason sci-fi writers haven't built stories around imaginary time is simple: "They don't understand it". As for artificial intelligence, Hawking replied without any sugar-coating:

Artificial intelligence could be a real danger in the not too distant future. [For your average robot could simply] design improvements to itself and outsmart us all.

Oliver, channeling his inner 9-year-old, asked: "But why should I not be excited about fighting a robot?" Hawking offered a very scientific response: "You would lose." And in that respect, he was absolutely right. One of the greatest concerns with AI, for better or worse, is that a superior intelligence, left to its own devices, would find ways to produce better and better machines without human oversight or intervention.

At worst, this could lead to the machines concluding that humanity is no longer necessary. At best, it could lead to an earthly utopia where machines address all our worries. But in all likelihood, it will lead to a future where the pace of technological change is impossible to predict. As history has repeatedly shown, technological change brings with it all kinds of social and political upheaval. If it becomes a runaway effect, humanity will find it impossible to keep up.

Keeping things light, Oliver began to worry that Hawking wasn't talking to him at all, and that this could instead be a computer spouting wisdom. To which Hawking replied: "You're an idiot." Oliver also wondered whether, given that there may be many parallel universes, there might be one where he is smarter than Hawking. "Yes," replied the physicist. "And also a universe where you're funny."

Well at least robots won’t have the jump on us when it comes to being irreverent. At least… not right away! Check out the video of the interview below:


Source: cnet.com

The Future is Here: Mind-Controlled Airplanes

Brainwaves can be used to control an impressive number of things these days: prosthetics, computers, quadcopters, and even cars. But recent research released by the Technische Universität München (TUM) in Germany indicates that they might also be used to fly an aircraft. Using a simple EEG cap that read their brainwaves, a team of researchers demonstrated that thoughts alone could navigate a plane.

For the sake of their experiment, the research team hooked seven people up to caps containing dozens of electroencephalography (EEG) electrodes. They then sat them down in a flight simulator and told them to steer the plane using their thoughts alone. The cap read the electrical signals from their brains, and an algorithm translated those signals into computer commands.
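The article doesn't detail TUM's algorithm, but the general shape of such a pipeline is well established in brain-computer interface work: filter the raw EEG, extract a feature such as band power, and map the classified feature onto a control command. Here is a minimal sketch of that idea (the thresholds, channels, and command names are all invented for illustration):

```python
import numpy as np

# A minimal EEG-to-command pipeline sketch. This is NOT TUM's algorithm
# (the article doesn't describe it); rates, bands, and thresholds are
# illustrative only.

SAMPLE_RATE = 256  # Hz, a common EEG sampling rate

def band_power(signal, low_hz, high_hz):
    """Mean spectral power of `signal` in a frequency band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return spectrum[mask].mean()

def classify_command(left_channel, right_channel, threshold=1.2):
    """Map mu-band (8-12 Hz) power on two motor-cortex channels to a
    roll command. Imagining a hand movement suppresses mu power over
    the opposite hemisphere (event-related desynchronization)."""
    left_mu = band_power(left_channel, 8, 12)
    right_mu = band_power(right_channel, 8, 12)
    if right_mu > left_mu * threshold:
        # mu suppressed on the LEFT hemisphere -> imagined RIGHT hand
        return "ROLL_RIGHT"
    elif left_mu > right_mu * threshold:
        # mu suppressed on the RIGHT hemisphere -> imagined LEFT hand
        return "ROLL_LEFT"
    return "HOLD_COURSE"

# Usage with one second of (here, random stand-in) data per channel:
rng = np.random.default_rng(0)
left = rng.standard_normal(SAMPLE_RATE)
right = rng.standard_normal(SAMPLE_RATE)
print(classify_command(left, right))
```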

According to the researchers, the accuracy with which the test subjects stayed on course was what was truly impressive – not to mention the fact that the study participants weren't all pilots and had varying levels of flight experience, with one having none at all. And yet, all seven participants performed well enough to satisfy some of the criteria for getting a pilot's license. Several of the subjects also managed to land their planes under poor visibility.

The research was part of an EU-funded program called "Brainflight". As Tim Fricke, an aerospace engineer who heads the project at TUM, explained:

A long-term vision of the project is to make flying accessible to more people. With brain control, flying, in itself, could become easier. This would reduce the work load of pilots and thereby increase safety. In addition, pilots would have more freedom of movement to manage other manual tasks in the cockpit.

With this successful test under their belts, the TUM scientists are focusing in particular on the question of how planes can provide feedback to their "mind pilots". Ordinarily, pilots feel resistance in the controls and must exert significant force when pushing their aircraft to its limits, and they rely on this feedback to gauge the state of their flight. That feedback is missing with mind control, and the gap must be addressed before any such system can be adapted to a real plane.

In many ways, I am reminded of the recent breakthroughs in mind-controlled prosthetics. After succeeding in creating prosthetic devices that could convert nerve impulses into controls, the next step became creating devices that could stimulate nerves in order to provide sensory feedback. Following this same developmental path, mind-controlled flight could become viable within a few years' time.

Mind-controlled machinery, sensory feedback… what does this sound like to you?

Sources: cnet.com, sciencedaily.com