First Ever Organism with “Alien” DNA

Normal DNA, characterized by the double helix and the four bases that bond it together – known as T, G, A, and C – is at the heart of all living organisms. While permutations and differences exist between species, this basic structure has remained unchanged for billions of years. That is, until now. This past May, researchers announced that they had created the first ever organism with synthetic DNA that includes two new bases – X and Y. Mary Shelley and H.G. Wells must be turning over in their graves, as scientists are now officially playing God!

This landmark study, 15 years in the making, was carried out by scientists at the Scripps Research Institute and published in Nature under the title “A semi-synthetic organism with an expanded genetic alphabet”. In normal DNA, the four bases combine in predictable ways: A always bonds with T, and C always bonds with G, creating a fairly simple “language” of base pairs — ATCGAAATGCC, and so on. String a few hundred (or a few thousand) of these base pairs together in a long strand of DNA and you have a gene, which tells the organism how to produce a certain protein.

If you know the sequence of letters down one strand of the helix, you always know what the other letter is. This “complementarity” is the fundamental reason why a DNA helix can be split down the middle and then have the other half perfectly recreated. In this new study, the Scripps scientists found a method of inserting a new base pair into the DNA of an E. coli bacterium. These two new bases are represented by the letters X and Y, but the actual chemicals are described as “d5SICS” and “dNaM.”
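To make that complementarity rule concrete, here is a minimal Python sketch that reconstructs one strand of a helix from the other. Treating X and Y as each other's partners is an assumption made purely for illustration, standing in for the d5SICS and dNaM chemicals described in the study.

```python
# Complementarity: each base on one strand determines its partner on the other.
# Standard Watson-Crick pairs, plus a hypothetical X<->Y pairing standing in
# for the synthetic d5SICS/dNaM bases (an assumption for illustration only).
COMPLEMENT = {
    "A": "T", "T": "A",
    "C": "G", "G": "C",
    "X": "Y", "Y": "X",
}

def complementary_strand(strand: str) -> str:
    """Return the strand that would pair with `strand`, base by base."""
    return "".join(COMPLEMENT[base] for base in strand.upper())

if __name__ == "__main__":
    known = "ATCGAAATGCCXY"
    print(known)                        # ATCGAAATGCCXY
    print(complementary_strand(known))  # TAGCTTTACGGYX
```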

A previous in vitro (test tube) study had shown that these two chemicals were compatible with the enzymes that split and copy DNA. For the purposes of this study, the scientists began by genetically engineering an E. coli bacterium to allow the new chemicals (d5SICS and dNaM) through its cell membrane. Then they inserted into the bacterium a DNA plasmid (a small loop of DNA) that contained a single XY base pair.

As long as the new chemicals were available, the bacterium continued to reproduce normally, copying and passing on the new DNA (alien plasmid and all), and carried on flawlessly for almost a week. For now, the XY base pair does nothing; it just sits there in the DNA, waiting to be copied. Even in this form, it could be used as biological data storage, a new form of biocomputing that could see hundreds of terabytes of data stored in a single gram of synthetic, alien DNA.
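For a rough sense of why DNA makes such a dense storage medium, here is a back-of-envelope estimate in Python. The figures are generic ballpark values (about two bits of information per natural base pair and roughly 650 daltons of mass per pair), not numbers taken from the Scripps study, and the result is a theoretical upper bound rather than anything a real encoding scheme achieves.

```python
# Back-of-envelope estimate of DNA's theoretical storage density.
# Assumptions (generic ballpark values, not from the study):
#   - ~2 bits of information per base pair (4 natural bases)
#   - ~650 g/mol of mass per base pair
AVOGADRO = 6.022e23            # base pairs per mole
GRAMS_PER_MOLE_PER_BP = 650.0  # approximate molecular weight of one pair
BITS_PER_BP = 2

base_pairs_per_gram = AVOGADRO / GRAMS_PER_MOLE_PER_BP
terabytes_per_gram = base_pairs_per_gram * BITS_PER_BP / 8 / 1e12

print(f"{base_pairs_per_gram:.2e} base pairs per gram")
print(f"~{terabytes_per_gram:.2e} TB per gram (theoretical upper bound)")
# Practical encoding schemes fall far short of this, but even a tiny fraction
# of it comfortably covers 'hundreds of terabytes' in a single gram.
```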

Floyd Romesberg, who led the research, has much grander plans:

If you read a book that was written with four letters, you’re not going to be able to tell many interesting stories. If you’re given more letters, you can invent new words, you can find new ways to use those words and you can probably tell more interesting stories.

Now his target is to find a way of getting the alien DNA to actually do something, such as producing amino acids (and thus proteins) that aren’t found in nature. If Romesberg and his colleagues can crack that nut, then it will suddenly become possible to engineer cells that produce proteins that target cancer cells, or special amino acids that help with fluorescent microscopy, or new drugs/gene therapies that do weird and wonderful things.

Ultimately it may even be possible to create a wholly synthetic organism with DNA that contains dozens (or hundreds) of different base pairs, capable of producing an almost infinitely complex library of amino acids and proteins. At that point, we’d basically be rewriting some four billion years of evolution. The organisms and creatures that would arise would be unrecognizable, capable of just about anything a researcher (or mad scientist) could dream up.

In the future, this breakthrough could allow for the creation of highly customized organisms – bacteria, animals, humans – that behave in weird and wonderful ways that mundane four-base DNA would never allow. At the same time, it raises ethical dilemmas and fears that may be well founded. But such is the nature of breakthroughs: the potential for harm and the potential for good always seem equally great when a breakthrough is first conceived.

Source: extremetech.com

Ending HIV: New Vaccine Holds Promise for a Cure

Scientists and researchers have been making great strides in the fight against HIV/AIDS in recent years. In addition to developing vaccines that have shown great promise, there have even been some treatments shown to eliminate the virus altogether. And it seems that with this latest development, which was published in Nature earlier this month, there might be a treatment that can double as a cure.

Developed at the Vaccine and Gene Therapy Institute at the Oregon Health and Science University (OHSU), this new vaccine proved successful in about fifty percent of the subjects tested, and may be able to cure patients who are currently on anti-retroviral drugs. If successful, this could mean that a preventative vaccine and a cure could come in the same package, thus eliminating HIV altogether.

Currently, anti-retroviral drugs and HIV vaccines typically aim at improving the patient’s immune response over the long term. However, they are limited in that they can never completely clear the virus from the body. In fact, aside from a very few exceptional cases, researchers have long believed that HIV/AIDS could only be contained, not completely cured.

The OHSU team, led by Dr. Louis Picker, has been working on its own vaccine for the past 10 years. In that time, their research has shown that an immune response can in fact go beyond containment and systematically wipe the virus out of the body. As with most early vaccine candidates, the study revolves around SIV (simian immunodeficiency virus) – a more aggressive virus than HIV that can replicate up to 100 times faster and, unchecked, can cause AIDS in only two years.

Picker and his research team created the vaccine by working with cytomegalovirus (CMV), another virus that is itself persistent but generally doesn’t cause disease. In their initial tests, the vaccine was found to generate an immune response very similar to that generated by CMV: T-cells that can seek out and destroy target cells were created and remained in the system, consistently targeting SIV-infected cells until the virus was cleared from the body.

For the sake of their trials, simian subjects that had been infected with SIV were used. When treated with the team’s vaccine, half of the subjects initially showed signs of infection, but those signs gradually receded before disappearing completely. This sets it apart from other vaccines, which also generate an immune response, but one that fades over time.

According to Dr. Picker, it is the permanence of the T-cells that allows the immune response to remain consistent and slowly eradicate the virus, eventually eliminating it from the system completely. Says Dr. Picker of their trials and the possibilities for the vaccine:

The virus got in, it infected some cells, moved about in various parts of the body, but it was subsequently cleared, so that by two or three years later the monkeys looked like normal monkeys. There’s no evidence, even with the most sensitive tests, of the SIV virus still being there... We might be able to use this vaccine either to prevent infection or, potentially, even to apply it to individuals who are already infected and on anti-retroviral therapy. It may help to clear their infections so ultimately they can go off the drugs.

Currently, Picker and his team are trying to understand why some of the vaccinated animals did not respond positively, in the hopes of further increasing the efficacy of the vaccine. Once these trials are complete, it could be just a hop, skip and a jump to getting FDA approval and making the vaccine/cure available on the open market.

Imagine, if you will, a world where HIV/AIDS is on the decline, and analysts begin predicting how long it will take before it is eradicated entirely. At this rate, such a world may be just a few years away. For those working in the field of medicine, and those of us who are around to witness it all, it’s an exciting time to be alive!

And be sure to enjoy this video from OHSU where Dr. Picker speaks about the vaccine and the efforts to end HIV:


Sources: gizmag.com, nature.com

Big News in Quantum Computing!

For many years, scientists have looked at quantum machinery as the next big wave in computing. Whereas conventional computing encodes information as bits carried by streams of particles (electrons), quantum computing relies on manipulating the quantum states of those particles, which can hold multiple values at once. Harnessed properly, this would make computers exponentially faster and more efficient at certain problems, and could lead to an explosion in machine intelligence. And while the technology has yet to be fully realized, every day brings us one step closer…

One important step happened earlier this month with the installation of the D-Wave Two at the new Quantum Artificial Intelligence Lab (QAIL) at the Ames Research Center in Silicon Valley, where NASA has announced that this is precisely the kind of research it intends to pursue. The ARC is only the second lab in the world to have such a quantum computer. The only other owner of the 512-qubit, cryogenically cooled machine is the defense contractor Lockheed Martin, which bought its first D-Wave system in 2011 and has since upgraded to a D-Wave Two.

D-Wave’s new 512-qubit Vesuvius chip

And while there are still some who question whether the D-Wave Two counts as a true quantum computer, most critics have acquiesced, since many of its components function in accordance with the basic principles. NASA, Google, and the people at the Universities Space Research Association (USRA) even ran some tests to confirm that the quantum computer offered a speed boost over conventional supercomputers — and it passed.

The new lab, which will be situated at NASA’s Advanced Supercomputing Facility at the Ames Research Center, will be operated by NASA, Google, and the USRA. NASA and Google will each get 40% of the system’s computing time, with the remaining 20% being divvied up by the USRA to researchers at various American universities. NASA and Google will primarily use the quantum computer to advance a branch of artificial intelligence called machine learning, which is tasked with developing algorithms that optimize themselves with experience.
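For readers wondering what kind of problem a D-Wave actually works on: it is an annealing-style processor, and the tasks handed to it are typically phrased as finding the lowest-energy assignment of a set of binary variables (a so-called QUBO, or quadratic unconstrained binary optimization, problem). The toy Python sketch below brute-forces such a problem with made-up numbers; it is only meant to show the shape of the formulation, and a real D-Wave is programmed through its own tools rather than anything shown here.

```python
from itertools import product

# A toy QUBO problem: minimize x^T Q x over binary vectors x.
# The Q matrix below is arbitrary example data, not a real workload.
Q = [
    [-1.0,  2.0,  0.0],
    [ 0.0, -1.0,  2.0],
    [ 0.0,  0.0, -1.0],
]

def energy(x, Q):
    """Evaluate the QUBO objective x^T Q x for a binary assignment x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Exhaustive search is fine for 3 variables; annealers (quantum or classical)
# target the regime where this brute force becomes hopeless.
best = min(product([0, 1], repeat=len(Q)), key=lambda x: energy(x, Q))
print("best assignment:", best, "energy:", energy(best, Q))
```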

As for what specific machine learning tasks NASA and Google actually have in mind, we can only guess. But it’s a fair bet that NASA will be interested in optimizing flight paths to other planets, or devising a safer/better/faster landing procedure for the next Mars rover. As for Google, the smart money says they will be using their time to develop complex AI algorithms for their self-driving cars, as well as optimizing their search engine and Google+.

But in the end, it’s the long-range possibilities that offer the most excitement here. With NASA and Google now firmly in command of a quantum processor, some of the best and brightest minds in the world will be working to advance the fields of artificial intelligence, space flight, and high-tech. It will be quite exciting to see what they produce…

Another important step took place back in March, when researchers at Yale University announced that they had developed a new way to change the quantum state of photons, the elementary particles researchers hope to use for quantum memory. This is good news, because true quantum computing – the kind that uses qubits for all of its processes – has continually eluded scientists and researchers in recent years, and reliable control of photons is one of the missing pieces.

To break it down, today’s computers are restricted in that they store information as bits, where each bit holds either a “1” or a “0.” But a quantum computer is built around qubits (quantum bits), which can store a 1, a 0, or any combination of both at the same time. And while the qubits would make up the equivalent of a processor in a quantum computer, some sort of quantum Random Access Memory (RAM) is also needed.
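To make the bit-versus-qubit distinction concrete, here is a small, self-contained Python toy that models a single qubit as a pair of amplitudes and simulates measuring it. This is the standard textbook picture, not anything specific to the D-Wave hardware or the Yale experiment.

```python
import random
from math import sqrt

# A classical bit is exactly 0 or 1. A qubit's state is a pair of amplitudes
# (a, b) with |a|^2 + |b|^2 = 1; measurement yields 0 with probability |a|^2
# and 1 with probability |b|^2, after which the superposition is gone.
def measure(amplitudes):
    a, _b = amplitudes
    return 0 if random.random() < abs(a) ** 2 else 1

# An equal superposition of 0 and 1 (the state a Hadamard gate produces).
plus_state = (1 / sqrt(2), 1 / sqrt(2))

samples = [measure(plus_state) for _ in range(10_000)]
print("fraction of 1s:", sum(samples) / len(samples))  # close to 0.5
```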

Gerhard Kirchmair, one of the Yale researchers, explained in a recent interview with Nature that photons are a good choice for this because they can retain a quantum state for a long time and over a long distance. But you’ll want to change the quantum information stored in the photons from time to time. What the Yale team has developed is essentially a way to temporarily make the photons used for memory “writeable,” and then switch them back into a more stable state.

To do this, Kirchmair and his associates took advantage of what’s known as a “Kerr medium”, a material that refracts light differently depending on how much light is shone on it. This is unlike ordinary materials, which refract light (and any other electromagnetic field) the same way regardless of how much they are exposed to.
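For reference, the optical Kerr effect is usually summarized as a refractive index that depends on the light's own intensity; the expression below is the standard textbook form, not something specific to the Yale setup.

```latex
% Optical Kerr effect: the refractive index n depends on the intensity I of
% the light passing through the medium. n_0 is the ordinary (linear) index
% and n_2 is the Kerr coefficient; in an ordinary material n_2 is ~ 0.
n(I) = n_0 + n_2 I
```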

Thus, by exposing photons to a microwave field in a Kerr medium, they were able to manipulate the quantum states of the photons, making them a practical means of quantum memory storage. At the same time, they knew that storing these memory photons in a Kerr medium alone would prove unstable, so they added a vacuum-filled aluminum resonator to act as a coupler. When the resonator is decoupled, the photons are stable. When it is coupled, the photons are “writeable,” allowing a user to input information and store it effectively.

This is not the first or only instance of researchers finding ways to toy with the state of photons, but it is currently the most stable and effective. And coupled with other efforts, such as the development of photonic transistors and other such components, or new ways to create photons seemingly out of thin air, we could be just a few years away from the first full and bona fide quantum processor!

Sources: extremetech.com, wired.com, nature.com

Italian Court Convicts Scientists for Failing to Save Lives

In a move that calls to mind the Inquisition, the Scopes Monkey Trial and other cases where science was put on trial by fearful minds, an Italian court made international news in 2012 by convicting six seismologists of manslaughter. The verdict was handed down back in October in relation to the deadly earthquake that struck the Abruzzo region in 2009. The decision has sent ripples through the scientific community and inspired a fair deal of rancor the world over.

The 6.3-magnitude quake that struck on April 6, 2009 caused the deaths of 309 people, injured about 1,500 others, and laid waste to most of the buildings in the medieval town of L’Aquila. In the aftermath, six seismologists were put on trial for not giving the public “sufficient warning” about the quake, even though members of their profession the world over insisted that, given the current state of technology, there was no way to accurately predict it.

That didn’t fly with the Italian court, which handed down sentences of six years apiece to the scientists after a 13-month trial. On the same day, four top Italian disaster experts quit their jobs, saying the ruling would make it impossible for them to perform their duties. And of course, that feeling was echoed far and wide, especially here in Canada, where numerous officials lined up to denounce the verdict and express grief over its likely implications.

In an interview with Nature magazine at the outset of the trial last September, Italian prosecutor Fabio Picuti acknowledged that prediction was not (no pun intended) an exact science, replying “I’m not crazy. I know they can’t predict earthquakes.” Meanwhile judge Giuseppe Romano Gargarella, who oversaw the case, said that the defendants “gave inexact, incomplete and contradictory information” about whether a series of small tremors in the six months prior to the 2009 disaster were significant enough to issue a quake warning.

So in reality, the case was not about a failure to predict the quake, but was instead a matter of “risk communication”. As David Ropeik, a journalist for Scientific American’s online blog, pointed out, that task fell to Bernardo De Bernardinis, a government official who was not a seismologist and who tried to assuage public concern by glibly suggesting people “relax with a glass of wine”. He and other members of the Great Risks Commission and the National Institute of Geophysics and Volcanology were also tried in the same case. All of these men, according to Ropeik, did a very poor job of communicating the risk to the public.

But even with this distinction made between failure to predict and failure to communicate, the verdict still has many people worried. One such person is Gail Atkinson, the Canada Research Chair in Earthquake Hazards and Ground Motions, who remarked: “It’s a travesty… what it will result in is seismologists and other scientists being afraid to say anything at all.” Another is John Clague, a professor in the department of earth sciences at Simon Fraser University and a member of the Royal Society of Canada. “I just think scientists are going to be reluctant to deal with the problem,” he said of seismology and earthquakes, “particularly government scientists. Academics like myself, we’re going to be very guarded about the words we use.”

In short, if there’s a question of liability, one can expect scientists to be far more careful about what they say, which is going to wreak havoc, since science depends upon the accurate transmission of information. For many, this calls to mind instances in Italy’s past where scientists were forced to hold their tongues and conceal their research and findings for fear of a backlash.

Three prominent examples come to mind. The first is Leonardo da Vinci, whose extensive work in biology, anatomy, flight, and physics was documented in mirror writing to conceal it from prying eyes. Another is Galileo Galilei, whose seminal work supporting the heliocentric model was hindered by the Vatican’s fear that it contradicted church doctrine. And the third is the Illuminati, the legendary organization of Renaissance-era scientists said to have been purged for their interest in the natural sciences and mysticism.

So the question remains: are scientists and government panels to be held accountable for failing to predict, or accurately convey, potential disasters? Moreover, is this a case of scientists being persecuted, or just liability gone mad?

Source: CBC.ca