News from Space: Biggest Lunar Explosion Ever Seen!

Back in September of 2013, something truly amazing happened on the surface of the Moon. Granted, small objects strike Earth’s only satellite all the time, hence its cratered surface. But this time around, Earth-based instruments observed an impact caused by an object the size of a small car, roughly ten times bigger than anything previously recorded.

The burst occurred on Sept. 11, 2013, at about 20:07 GMT in an area of the moon known as Mare Nubium, producing a flash that would have been visible from Earth. It was caused by a meteoroid believed to have measured between 0.6 and 1.4 meters wide and weighed some 400 kg (880 pounds), and it generated a crater with a diameter of about 40 meters.

Judging from the explosion and the crater it left behind, scientists estimate that the rock hit Mare Nubium at a speed of 61,000 kph (38,000 mph), generating an explosion equivalent to roughly 15 tons of TNT. This beats the previous record, set in March 2013 when a 40 kg meteoroid measuring 0.3 or 0.4 meters wide struck the moon at about 90,000 km/h (56,000 mph) and caused an explosion equivalent to 5 tons of TNT.
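As a sanity check, those TNT figures follow almost directly from basic kinetic energy. Here is a rough back-of-envelope calculation using the quoted masses and speeds; the published equivalents also fold in modelling of how much energy goes into light and heat, so exact agreement isn’t expected:

```python
# Rough kinetic-energy estimate for the two lunar impacts described above.
TNT_JOULES = 4.184e9  # energy released by one ton of TNT, in joules

def impact_energy_tons_tnt(mass_kg: float, speed_kmh: float) -> float:
    speed_ms = speed_kmh / 3.6                 # convert km/h to m/s
    energy_j = 0.5 * mass_kg * speed_ms ** 2   # kinetic energy, E = (1/2) m v^2
    return energy_j / TNT_JOULES

print(round(impact_energy_tons_tnt(400, 61_000), 1))  # ~13.7 tons of TNT
print(round(impact_energy_tons_tnt(40, 90_000), 1))   # ~3.0 tons of TNT
```

Both numbers land in the same ballpark as the reported 15-ton and 5-ton equivalents, which is about as close as a back-of-envelope estimate gets.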

These findings appeared in the February issue of Monthly Notices of the Royal Astronomical Society (MNRAS), in a paper entitled “A large lunar impact blast on 2013 September 11”. According to the paper’s authors – Jose M. Madiedo, from the University of Huelva, and Jose L. Ortiz, from the Institute of Astrophysics of Andalusia – it was the longest and brightest impact flash ever observed, as the “afterglow” remained visible for 8 seconds.

In a subsequent press release, Madiedo and Ortiz said that:

Our telescopes will continue observing the Moon as our meteor cameras monitor the Earth’s atmosphere. In this way we expect to identify clusters of rocks that could give rise to common impact events on both planetary bodies. We also want to find out where the impacting bodies come from.

Knowing how often such collisions happen on the moon could be important for future lunar explorers, one reason why NASA has set up a specific program – Lunar Impacts, working out of the Marshall Space Flight Center – to study them. This campaign started in 2005 and has already proved that lunar impacts happen about 10 times more frequently than scientists previously expected.

Because the moon is our next-door neighbor, and a place where human beings may someday live in large numbers, knowing the frequency and severity of meteoric impacts is certainly important. These latest findings also suggest that the Earth might get hit more often than we previously thought by objects of a similar size. And given the damage associated with such impacts, knowing all we can is certainly prudent.

In the meantime, check out this outreach video provided by J.M. Madiedo (co-author of the MNRAS paper) that discusses this record-breaking lunar impact:


Source:
universetoday.com, wired.com, nasa.gov

Drone Wars: Bigger, Badder, and Deadlier

In their quest to “unman the front lines” and maintain drone superiority over other states, the US armed forces have been working on a series of designs that will one day replace their air fleet of Raptors and Predators. The fact that potential rivals like Iran and China are actively imitating aspects of these designs is an added incentive, forcing military planners to think bigger and bolder.

Consider the MQ-4C Triton Unmanned Aerial System (UAS), a jet-powered drone that is the size of a Boeing 757 passenger jet. Developed by Northrop Grumman and measuring some 40 meters (130 feet) from wingtip to wingtip, this “super drone” is intended to replace the US Navy’s fleet of RQ-4 Global Hawks, a series of unmanned aerial vehicles that have been in service since the late ’90s.

Thanks to a sensor suite that supplies a 360-degree view over a radius of more than 3,700 km (2,300 miles), the Triton can provide high-altitude, real-time intelligence, surveillance and reconnaissance (ISR) at heights and distances in excess of any of its competitors. In addition, the drone possesses unique de-icing and lightning protection capabilities, allowing it to plunge through the clouds to get a closer view of surface ships.

And although the Triton has a high degree of autonomy, operators on the ground are still relied upon to obtain high-resolution imagery, use radar for target detection, and provide information-sharing capabilities to other military units. Thus far, the Triton has completed flights of up to 9.4 hours at altitudes of 15,250 meters (50,000 feet) at the company’s manufacturing facility in Palmdale, California.

Mike Mackey, Northrop Grumman’s Triton UAS program director, had the following to say in a statement:

During surveillance missions using Triton, Navy operators may spot a target of interest and order the aircraft to a lower altitude to make positive identification. The wing’s strength allows the aircraft to safely descend, sometimes through weather patterns, to complete this maneuver.

Under an initial contract of $1.16 billion in 2008, the Navy has ordered 68 of the MQ-4C Triton drones with expected delivery in 2017. Check out the video of the Triton during its most recent test flight below:


But of course, this jetliner-sized craft is just one of many enhancements the US armed forces are planning on making to their drone army. Another is a jet-powered, long-range attack drone that is a planned replacement for the aging MQ-1 Predator system. Known as the Avenger (alternately the Predator C), this next-generation unmanned aerial vehicle has a range of close to 3,000 km (1,800 miles).

Designed by General Atomics, the Avenger was built with Afghanistan in mind; or rather, the planned US withdrawal by the end of 2014. Given that the ongoing CIA anti-terrorism operations in neighboring Pakistan are expected to continue, and that airstrips in Afghanistan will no longer be available, the drones they use will need to have significant range.


The Avenger prototype made its first test flight in 2009, and after a new round of tests completed last month, it is now operationally ready. Based on the company’s better-known MQ-9 Reaper drone, the Avenger is designed to perform high-speed, long-endurance surveillance or strike missions, flying at up to 800 km/h (500 mph) at a maximum altitude of 15,250 meters (50,000 feet) for as long as 18 hours.

Compared to the earlier prototype, the Avenger’s fuselage has been lengthened by four feet to accommodate larger payloads and more fuel, allowing for extended missions. It can carry up to 1,588 kilograms (3,500 pounds) internally, and its wings are capable of carrying weapons as large as a 2,000-pound Joint Direct Attack Munition (JDAM) or a full complement of Hellfire missiles.

Switching from propeller-driven drones to jets will allow the CIA to continue its Pakistan strikes from a more distant base if the U.S. is forced to withdraw entirely from neighboring Afghanistan. And according to a recent Los Angeles Times report, the Obama administration is actively making contingency plans to maintain surveillance and attacks in northwest Pakistan as part of its security agreement with Afghanistan.

The opportunity technology offers to close the gap between the need to act quickly and the necessity of operating from a greater distance isn’t lost on the US military, or on the company behind the Avenger. Frank Pace, president of the Aircraft Systems Group at General Atomics, said in a recent statement:

Avenger provides the right capabilities for the right cost at the right time and is operationally ready today. This aircraft offers unique advantages in terms of performance, cost, timescale, and adaptability that are unmatched by any other UAS in its class.

What’s more, one can tell simply by looking at the streamlined fuselage and softer contours that stealth is part of the package. By reducing the drone’s radar cross-section (RCS) and applying radar-absorbing materials, next-generation drone fleets will also be mimicking fifth-generation fighter craft. Perhaps we can expect aerial duels between remotely-controlled fighters to follow not long after…

And of course, there’s General Atomics’ Avenger concept video to enjoy:


Sources:
wired.com, (2)

News from Space: First Detailed Map of Ganymede

Last week, researchers released the first-ever geological map of Ganymede, Jupiter’s largest moon and the largest planetary satellite in the Solar System. Led by Geoffrey Collins of Wheaton College, these scientists produced a global geologic map that combines the best images obtained by NASA’s Voyager 1 and 2 spacecraft (1979) and the Galileo orbiter (1995 to 2003).

The information from these probes was pieced together as a mosaic image, giving us our first complete picture of the geological features of this world. That image has now been published by the U.S. Geological Survey as a global planar map. This 2D rendering of the moon’s surface illustrates Ganymede’s varied geologic character and is the first global geologic map of the icy outer-planet moon.

And it’s about time too! As Robert Pappalardo of NASA’s Jet Propulsion Laboratory in Pasadena, California, put it:

This map illustrates the incredible variety of geological features on Ganymede and helps to make order from the apparent chaos of its complex surface. This map is helping planetary scientists to decipher the evolution of this icy world and will aid in upcoming spacecraft observations.

Since its discovery in January 1610 by Galileo Galilei, Ganymede has been the focus of repeated observation; first by Earth-based telescopes, and later by flybys and orbiting spacecraft. These studies depict a complex, icy world whose surface is characterized by the striking contrast between its dark, very old, highly cratered regions and its lighter, somewhat younger regions marked with an extensive array of grooves and ridges.

The map isn’t just aesthetically pleasing; it also informs our understanding of Ganymede’s geological history. Researchers have identified three geological periods – one involving heavy impact cratering, followed by tectonic upheaval, and then a decline in geological activity. The more detailed images let them study the ridges and grooves, and have revealed that cryovolcanism is rare on Ganymede.

Baerbel Lucchitta, scientist emeritus at the U.S. Geological Survey in Flagstaff, Ariz., who has been involved with geologic mapping of Ganymede since 1980, had this to say:

The highly detailed, colorful map confirmed a number of outstanding scientific hypotheses regarding Ganymede’s geologic history, and also disproved others. For example, the more detailed Galileo images showed that cryovolcanism, or the creation of volcanoes that erupt water and ice, is very rare on Ganymede.

According to the Jet Propulsion Laboratory, Ganymede is an especially valuable body to study because it is an icy moon with a richly varied geology and a surface area that is more than half as large as all the land area on Earth. The map will also enable researchers to compare the geologic characters of other icy satellites, since most features found on them have a similar counterpart somewhere on Ganymede.

Laszlo Kestay, the director of the United States Geological Survey (USGS) Astrogeology Science Center, explained the implications of this in a statement:

After Mars, the interiors of icy satellites of Jupiter are considered the best candidates for habitable environments for life in our solar system. This geologic map will be the basis for many decisions by NASA and partners regarding future U.S. missions under consideration to explore these worlds.

The project was funded by NASA through its Outer Planets Research and Planetary Geology and Geophysics Programs, and the images can all be downloaded by going to the Jet Propulsion Laboratory’s website at the California Institute of Technology (Caltech). And be sure to check out the animated version of the Ganymede planetary map below:


Sources:
IO9.com, (2), jpl.nasa.gov, space.com

Papa Zulu – Ready and Available, Finally!

Well, after about a week of tinkering, complaining, and demanding that Amazon, Kindle and Createspace get their act together, Papa Zulu is now available in all formats, and all in one place! This was a bit of a bugger last week, when I was finally finished with the tedious editing and submission process, only to find that it wasn’t even showing up in the right places.

To recap, Papa Zulu was made available through Amazon.com as of last Monday, but it did not appear with the rest of my books. So for the untrained consumer (i.e. anyone who doesn’t know me already), the book’s relation to Whiskey Delta would have been unclear. In addition, the Kindle edition and the paperback didn’t even appear together, with one only available at Amazon.com and the other at Amazon.ca.

But after a few days, that was all resolved. As of now, all formats of Papa Zulu can be ordered from one place (Amazon.com) and I’ve made sure the links to it have been updated to reflect that. And on top of that, it now appears alongside all my other titles on my Amazon author page. So now it will be easy to find, and people who said they wanted a sequel will actually be able to find it.

Yay for small victories and the work that makes them happen! Woe for the speed bumps and delays that make the extra work necessary! And feel free to check out the book’s listing and my author page, now that they are in working order:

Papa Zulu:
http://www.amazon.com/Papa-Zulu-Matthew-S-Williams

Amazon Author Page:
http://www.amazon.com/Matthew-S-Williams

The Future is Here: VR Body-Swapping

One of the most interesting and speculative things to come out of William Gibson’s cyberpunk series, the Sprawl Trilogy, was the concept of Simstim. Short for “simulated stimulation”, this technology involved stimulating the nervous system of one person so that they could experience another’s consciousness. As is so often the case, science fiction proves to be the basis for science fact.

This latest case of science imitating sci-fi comes from Barcelona, where a group of interdisciplinary students have created a revolutionary system that uses virtual reality and neuroscience to let people see, hear, and even feel what it’s like to be in another person’s body. The focus, though, is on letting men and women undergo a sort of high-tech “gender swapping”, letting each experience what it’s like to be in the other’s shoes.

Be Another Lab is made up of Philippe Bertrand, Daniel Gonzalez Franco, Christian Cherene, and Arthur Pointea – a collection of interdisciplinary artists whose fields range from programming and electronic engineering to interactive system design and neuro-rehabilitation. Together, the goal of Be Another Lab is to explore the concept of empathy through technology, science, and art.

In most neuroscience experiments that examine issues of empathy and bias, participants “trade places” with others using digital avatars. If a study wants to explore empathy for the handicapped, for example, scientists might sit subjects down in front of a computer and make them play a video game in which they are confined to a wheelchair, then ask them a series of questions about how the experience made them feel.

However, Be Another Lab takes a different, more visceral approach to exploring empathy. Instead of using digital avatars, the group uses performers to copy the movements of a subject. For example, racial bias is studied by having a subject’s actions mirrored by a performer of color; for something like gender bias, a man and a woman each take a turn living inside the body of the other.

Bertrand and company have taken this approach to the next level by leveraging a pair of Oculus Rift virtual reality headsets in a project they call the Machine To Be Another. Two participants stand in front of one another, put on their headsets, and effectively see out of one another’s eyes. When they look at each other, they see themselves. When they speak, they hear the other person’s voice in their ears.

But things don’t end there! Working together, the two participants are encouraged to sync their movements, touching objects in the room, looking at things, and exploring their ‘own’ bodies simultaneously. Bertrand explains the experience as follows:

The brain integrates different senses to create your experience of the world. In turn, the information from each of these senses influences how the other senses are processed. We use these techniques from neuroscience to actually affect the psychophysical sensation of being in your body.

In other words, by feeding each participant video and sound from their partner’s headset while they move and touch things at the same time, the Machine To Be Another can actually convince people that they are in someone else’s body, as long as the two partners remain in sync.

It’s a radical idea that Be Another Lab is only beginning to explore. Right now, their experiments have mostly focused on gender swapping, but the team hopes to expand and tackle issues surrounding gender identity and sexual orientation. The group is currently looking to partner with various organizations, experts and activists to help them further perfect their techniques.

It’s a unique idea, giving people the ability to not only walk a mile in another’s shoes, but to know what that actually feels like physically. I can foresee this sort of technology becoming a part of sensitivity training in the future, and even as education for sex offenders and hate criminals. Currently, such training focuses on getting offenders to empathize with their victims.

What better way to do that than making them see exactly what it’s like to be them? And in the meantime, enjoy this video of the Machine To Be Another in action:


Source:
fastcodesign.com

Powered by Wind: World’s Tiniest Windmills

Wind turbines are at the heart of one of the fastest-growing industries, thanks to their ability to provide clean, renewable energy. And while most designs are trending towards larger and larger sizes and power yields, some researchers are looking in the opposite direction. By equipping everyday objects with tiny windmills, we just might find our way towards a future where batteries are unnecessary.

Professor J.C. Chiao and his postdoc Dr. Smitha Rao of the University of Texas at Arlington are two individuals making this idea a reality. Their new MEMS-based nickel alloy windmill is so small that 10 could be mounted on a single grain of rice. Aimed at very-small-scale energy harvesting applications, these windmills could recharge batteries for smartphones and directly power ultra-low-power electronic devices.

These micro-windmills – technically, horizontal axis wind turbines – have a three-bladed rotor that is 1.8 mm in diameter and 100 microns thick, mounted on a tower about 2 mm tall. Despite their tiny size, the micro-windmills can endure strong winds, owing to a smart aerodynamic design and to being constructed of a tough nickel alloy rather than the silicon typical of most microelectromechanical systems (MEMS).

According to Dr. Rao, the problem with most MEMS designs is that they are too fragile, owing to the brittle nature of silicon and silicon oxide. Nickel alloy, by contrast, is very durable, and the clever design and size of the windmill mean that several thousand of them can be applied to a single 200 mm (8 inch) silicon wafer, which in turn makes for very low per-unit costs.
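For a rough sense of that per-unit economy, consider how many devices fit on one wafer. The footprint below is an assumption (a ~2 mm square per windmill, based on the rotor and tower dimensions given above), but it shows why “several thousand per wafer” is plausible:

```python
import math

# Assumed footprint: ~2 mm x 2 mm per micro-windmill (1.8 mm rotor, ~2 mm tower)
device_footprint_mm2 = 2 * 2
# Usable area of a standard 200 mm (8 inch) silicon wafer
wafer_area_mm2 = math.pi * (200 / 2) ** 2

print(int(wafer_area_mm2 // device_footprint_mm2))  # ~7,850 devices per wafer
```

Even before accounting for edge losses and dicing lanes, one wafer yields thousands of devices, so the fabrication cost per windmill is tiny.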

The windmills were crafted using origami-inspired techniques that allow two-dimensional shapes to be electroplated on a flat plane, then self-assembled into 3D moving mechanical structures. Rao and Chiao created the windmill for a Taiwanese company called WinMEMS, which developed the fabrication technique. As Rao states, they were interested in her work in micro-robotics:

It’s very gratifying to first be noticed by an international company and second to work on something like this where you can see immediately how it might be used. However, I think we’ve only scratched the surface on how these micro-windmills might be used.

Chiao claims that the windmills could perhaps be crafted into panels of thousands, which could then be attached to the sides of buildings to harvest wind energy for lighting, security, or wireless communication. So in addition to wind tunnels, large turbines, and piezoelectric fronds, literally every surface on a building could be turned into a micro-generator.

Powered by the wind indeed! And in the meantime, check out this video from WinMEMS, showcasing one of the micro-windmills in action:


Source: news.cnet.com, gizmag.com

The Future is Here: VR Taste Buds and Google Nose

One of the most intriguing and fastest-growing aspects of digital media is the possibilities it offers for augmenting reality. Currently, that means overlaying images or text on top of the real world through the use of display glasses or projectors. But in time, the range of possibilities might expand far beyond the visual range, incorporating the senses of taste and smell.

That’s where devices like the Digital Taste Interface come into play. Developed by Nimesha Ranasinghe, an electrical engineer and the lead researcher of a team at the National University of Singapore, this new technology seeks to combine the worlds of virtual reality and gustation. As Ranasinghe explained in a recent interview with fastcompany.com:

Gustation is one of the fundamental and essential senses, [yet] it is almost unheard of in Internet communication, mainly due to the absence of digital controllability over the sense of taste. To simulate the sensation of taste digitally, we explored a new methodology which delivers and controls primary taste sensations electronically on the human tongue.

The method involves two main modules, the first being a control system which formulates different properties of stimuli – basically, levels of current, frequency, and temperature. These combine to provide thermal changes and electrical stimulation that simulate taste sensations, which are in turn delivered by the second module: the tongue interface, which consists of two thin metal electrodes.

According to Ranasinghe, subjects in clinical trials reported a range of taste experiences, from sour, salty and bitter sensations to minty, spicy, and sweet ones. To communicate between the control system and the sensors, the team created a new message format. Known as TasteXML (TXML), it specifies the format of specific taste messages.
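The actual TXML schema isn’t reproduced here, so the element and attribute names below are purely hypothetical; the sketch is only meant to illustrate how a taste message might bundle the stimulus parameters described above (current, frequency, temperature) into a single XML payload:

```python
import xml.etree.ElementTree as ET

# Hypothetical TXML-style message: these tag and attribute names are invented
# for illustration, not taken from Ranasinghe's actual specification.
msg = ET.Element("taste", sensation="sour")
ET.SubElement(msg, "stimulus", current_uA="80", frequency_Hz="800")
ET.SubElement(msg, "thermal", temperature_C="35")

# Serialize to a string that could be sent to the tongue-interface controller
print(ET.tostring(msg, encoding="unicode"))
```

The appeal of an XML envelope like this is that any networked device could emit a taste message, leaving the control module to translate it into electrode currents and thermal changes.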

While the team is currently in negotiations to make the technology commercially available, there are a few pressing updates in the works for the Digital Taste Interface. The first is a more appealing way to wear the tongue sensors, which currently have to be attached while the mouth is open. To that end, the team wants an interface that can be held in the mouth, called the “digital lollipop” because it looks like the candy.

Besides making the system more aesthetically pleasing and appetizing, this will also allow for a deeper understanding of how electrical stimulation affects taste sensors on different parts of the tongue. The team also wants to incorporate smell and texture, to further extend the range of sensations and create a truly immersive virtual experience.

Ultimately, the Digital Taste Interface has many potential benefits and applications, ranging from medical advances to diet regimens and video games. As Ranasinghe explains:

We are exploring different domains such as entertainment (taste changing drink-ware and accessories) and medical (for patients who lost the sense of taste or have a diminished sense of taste). However, our main focus is to introduce the sensation of taste as a digitally controllable media, especially to facilitate virtual and augmented reality domains.

So in the coming years, do not be surprised if virtual simulations come augmented with a full-range of sensory experiences. In addition to being able to interact with simulated environments (i.e. blowing shit up), you may also be able to smell the air, taste the food, and feel like you’re really and truly there. I imagine they won’t even call it virtual reality anymore. More like “alternate reality”!

And of course, there’s a video:


Sources:
fastcompany.com

The Future of Building: Superefficient Nanomaterials

Today, we are on the verge of a fabrication revolution. Thanks to developments in nanofabrication and miniaturization, where materials can be fashioned down to the cellular (or even atomic) level, the option of making bigger and stronger structures that happen to weigh less is becoming a reality. This is the goal of materials scientist Julia Greer and her research lab at Caltech.

As an example, Greer offers the Great Pyramid of Giza and the Eiffel Tower. The former is nearly 140 meters tall and weighs 10 megatons, while the latter is over twice that height but, at five and a half kilotons, well under a thousandth of the mass. It all comes down to the “elements of architecture”, which allowed the Eiffel Tower to be stronger and more lightweight while using far less material.
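Taking the tonnage figures quoted above at face value, the ratio is easy to check:

```python
pyramid_tons = 10_000_000  # Great Pyramid of Giza, ~10 megatons (figure quoted above)
eiffel_tons = 5_500        # Eiffel Tower, ~5.5 kilotons

print(eiffel_tons / pyramid_tons)  # 0.00055: well under a thousandth of the mass
```

Real-world mass estimates for both structures vary, but the architectural point stands either way: the skeletal tower achieves greater height with orders of magnitude less material.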

Whereas the pyramid is solid stone, the Eiffel Tower is skeletal, and vastly more efficient as a result. Greer and her colleagues are trying to make the same sort of leap on the nano scale, engineering hollow materials that are fantastically lightweight while remaining every bit as stiff and strong. Carbon nanotubes are one such example, but the range of possibilities is immense and due to explode in the near future.

The applications of this “Hierarchical Design” are myriad, and its impact could be profound. For one, these ultralight wonders offer a chance to drastically reduce our reliance on fossil fuels by allowing us to make familiar goods with less raw material. They could also expand our idea of what’s possible in materials science, opening doors to designs that are inconceivable today.

It’s all here in this video, where Greer explains Hierarchical Design and the possibilities it offers:


Source: wired.com

The Future is Here: Laser 3D Printing

3D printing has really come into its own in recent years, with the range of applications constantly increasing. However, not all 3D printers or printing methods are the same; they range from ones that deposit layers of melted plastic to ones that print layers of metal dust and then fuse them with microwave radiation. This variety also means that some printers are faster, more accurate, and more expensive than others.

Take the Pegasus Touch as an example. Built by the Las Vegas-based company Full Spectrum Laser (FSL), this desktop 3D printer uses lasers to create objects faster and in finer detail than most other printers in its price range. Available for as little as US$2,000 via a Kickstarter campaign, its performance is claimed to be comparable to machines costing 50 times more.


Instead of building up an object by melting plastic filament and depositing the liquid like ink from a nozzle, the Pegasus Touch uses what’s called laser-based stereolithography (SLA). This consists of using a 500 kHz ultraviolet laser moving at 3,000 mm/sec to solidify curable photopolymer resin. As the object rises out of a vat of resin, the laser focuses on the surface, building up layer after layer with high precision.

To be fair, the technology has been around for many years. What is different with the Pegasus Touch is that FSL has shrunk the printer down and made it more economical. Normally, SLA machines are huge and cost on the order of hundreds of thousands of dollars. The Pegasus Touch, on the other hand, measures just 28 x 36 x 57 cm (11 x 14 x 22.5 inches) and costs only a few thousand dollars.

pegasus-touch-4This affordability is due in part to the wide availability of Bluray players has made UC laser diodes much more affordable. In addition, FSL is already adept at making laser cutting and engraving machines, which has allowed the company to base the Pegasus Touch on modelling software and electronics already developed for these machines. This allows the device to operate at tolerances equivalent to a $100,000 machine.

The device also has an on-board 1GHz Linux computer with 512 MB memory that can do much of the 3D processing computation itself, making a connected PC all but unnecessary. There’s also an internet-connected 4.3-in color touchscreen, which allows the user to access open-source models that are printer-ready, plus the machine comes with multi-touch-capable desktop software.

It also has a relatively large build area of approximately 18 x 18 x 23 cm (7 x 7 x 9 inches), one of the largest in the consumer 3D printer market. The company also says that the Pegasus Touch is 10 times faster than a fused deposition modelling (FDM) printer and up to six times faster than other SLA printers, offers finer control, and produces a better, more detailed finish.

The Pegasus Touch’s Kickstarter campaign wrapped up earlier this month and raised a total of $819,535, putting them well above their original goal of $100,000. For those who pledged $2000 or more, the printer was made available for pre-order. When and if it goes on sale, the asking price will be $3,499. Given time, I imagine the technology will improve to use metal and other materials instead of resin.

And of course, there’s a promotional video, showcasing the device at work:


Sources: gizmag.com, kickstarter.com, fsl3d.com

Papa Zulu’s First Sale!

This past weekend, Papa Zulu went live on Amazon.com in paperback and ebook formats! I wanted to deliver the news the moment it happened, but as KDP and Createspace can take their time making books available to the public, I felt the need to hold off a little. However, that ended today when the book made its first sale. Yes, somebody out there is now the owner of an ecopy of Papa Zulu!

Granted, there are still a few kinks in the publication process. Right now, the ebook is only available on Amazon.ca, the store’s Canadian subsidiary, while the paperback is only available on Amazon.com. And neither is appearing on my Amazon author page. I can only assume my publishing services need to get their act together and expand the book’s availability!

But in any case, I’ve gone ahead and posted the link for the ebook in the right hand column. If you liked the first one, be sure to check out the sequel. I’ve posted the respective links below to make it easier. And if you didn’t read the first one, didn’t like it, or just aren’t interested, then do what you like. I ain’t the boss of you!

Until next time, keep hammering those keys 🙂

Amazon.ca (ebook): amazon.ca

Amazon.com (paperback): amazon.com

Createspace store: createspace.com