Category Archives: Science

Distributed computing comes to Android with BOINC

Our understanding of the world around us has grown by leaps and bounds since the invention of the computer. The simulation of complex systems in particular involves crunching a ton of numbers, a task computers excel at. Unfortunately, the very best number crunchers happen to be extremely expensive, both to buy and to maintain. Through a system known as distributed computing, large, complex tasks can be completed without the hassle of managing a supercomputer.

Image credit: NASA

In distributed computing, a central server offloads small tasks to the computers connected to its network. Each computer completes its task and sends the results back to the server. By utilizing the spare CPU cycles of tens of thousands of volunteer computers, a project like Folding@Home can complete vital research without needing to buy pricey supercomputers. Distributed computing networks exist for a vast array of scientific pursuits, including disease research, the factorization of large integers, and even the search for extraterrestrial life.
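
If you’re curious what that loop looks like in practice, here’s a minimal sketch in Python. It’s a toy illustration rather than BOINC’s actual protocol: a stand-in server hands out work units, and the volunteer client crunches each one and reports the result back.

```python
# Toy sketch of the volunteer-computing loop described above.
# A real platform like BOINC distributes signed work units over the network;
# here the "server" and the volunteer client live in one script so it runs as-is.
import hashlib


class ToyServer:
    """Stand-in for a project server that hands out work units and collects results."""

    def __init__(self, tasks):
        self.pending = list(tasks)
        self.results = {}

    def get_work_unit(self):
        return self.pending.pop() if self.pending else None

    def report_result(self, task, result):
        self.results[task] = result


def crunch(task: str) -> str:
    """Stand-in for the expensive science: hash the work unit many times."""
    data = task.encode()
    for _ in range(100_000):
        data = hashlib.sha256(data).digest()
    return data.hex()[:16]


server = ToyServer(tasks=[f"work-unit-{i}" for i in range(5)])

# Each volunteer machine runs a loop like this with its spare CPU cycles:
# fetch a work unit, crunch it locally, and send the result back.
while (unit := server.get_work_unit()) is not None:
    server.report_result(unit, crunch(unit))

print(server.results)
```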

Major distributed computing platforms have been available for desktop computers for more than a decade, and a Folding@Home app can even be installed on the PS3, but until now the mobile market has remained largely untouched. The Berkeley Open Infrastructure for Network Computing (BOINC) has changed that with the recent release of its Android app.


Upon first opening the BOINC app, you’ll be prompted to select a distributed computing project to contribute to. A brief overview of each project’s goals can be found on BOINC’s website or by selecting a project in the app. After selecting a project, you’ll need to create an account to track your computing progress. Once you’ve created an account, BOINC is ready to do its work.

You’re probably thinking that an app of this nature would quickly drain your phone’s battery, and you’d be right if the BOINC app ran continuously. Thankfully, it isn’t configured to run continuously. By default, it only runs when your phone is connected to power, and even then only when the battery is charged to at least 90%. These settings (and others) can be fine-tuned in the preferences menu. I highly recommend changing the max used storage space option to something much lower, as the default setting is absurdly high.
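
For the curious, that default policy boils down to a very simple rule. Here it is sketched in plain Python; the real app gets these values from Android’s battery status and its own preference screen, so treat the function and numbers below as nothing more than an illustration of the defaults described above.

```python
# The default run policy described above, boiled down to plain Python.
# The real app reads these values from Android's battery status and its own
# preference screen; this function and its defaults are only an illustration.
def should_compute(plugged_in: bool, battery_pct: int,
                   min_battery_pct: int = 90) -> bool:
    """Only crunch numbers while charging and sufficiently charged."""
    return plugged_in and battery_pct >= min_battery_pct


print(should_compute(plugged_in=True, battery_pct=95))    # True
print(should_compute(plugged_in=True, battery_pct=60))    # False
print(should_compute(plugged_in=False, battery_pct=100))  # False
```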

The computing power of a current generation smartphone might not compare to that of even a meager desktop computer, but combined with thousands of other phones that power becomes much more substantial. Every little bit helps.

BOINC is available for Android and can be found on the Play Store. Clients for Windows, Mac, and Linux can be found on the BOINC website.

How technology companies are improving voice recognition software

While voice recognition software has certainly improved over the past two decades, it hasn’t exactly been the blockbuster tech that Ray Kurzweil predicted. My first experiments with the technology were playing around with Microsoft’s Speech API (circa Windows 95) and early versions of Dragon NaturallySpeaking. Both were interesting as “toys” but didn’t work well enough for me to put them to practical use.

Seriously, everything I tell a computer turns into something like this.

Since then, I’ve tried out new voice recognition software every few years and always came away thinking “Well, it’s better. But it’s still not very good.” For fans of the technology, it’s been a slow journey of disappointment. The engineering problems in building practical voice systems turned out to be much harder than anyone thought they’d be.

Human languages don’t follow the strict rules and grammar of programming languages, and computer scientists have struggled to build software that can match the intention behind someone’s speech to a query or action that software can accurately process. Building code that understands “What is the best way to make bacon?” (answer: in the oven, just saying) in the countless ways a human might phrase the question has been challenging.

Collect ALL the data

One of the tactics software engineers use to figure out how to handle varied types of input is to build a database of potential input (in this case, human speech) and look for common threads and patterns. It’s a bit of a brute-force approach, but it helps engineers understand what types of input they need to build code for, and when inputs turn out to be similar, it reduces the overall amount of code needed.

If you can code your software to know that questions like “What’s it going to be like outside tomorrow?” and “What’s the weather supposed to do?” are both questions about the weather forecast, processing human speech becomes a little easier. Obviously you wouldn’t want to (or even be able to) build code for every possible input, but this approach does give you a good base to build from.
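
As a toy illustration of that idea (nothing like the statistical models the big players actually use), even a crude keyword matcher can map several phrasings onto the same “weather” intent:

```python
# Toy intent matcher: maps varied phrasings onto a few known intents by keyword
# overlap. Real voice systems use statistical models, not a hand-built table like this.
INTENT_KEYWORDS = {
    "weather_forecast": {"weather", "outside", "forecast", "rain", "tomorrow"},
    "cooking_question": {"cook", "make", "recipe", "bacon", "oven"},
}


def classify(utterance: str) -> str:
    words = set(utterance.lower().replace("?", "").replace("'s", " is").split())
    scores = {intent: len(words & keywords)
              for intent, keywords in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"


print(classify("What's it going to be like outside tomorrow?"))  # weather_forecast
print(classify("What's the weather supposed to do?"))            # weather_forecast
print(classify("What is the best way to make bacon?"))           # cooking_question
```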

In the past, capturing the volume of data needed to do a thorough analysis of speech wasn’t really an option for voice recognition researchers, cost being a major factor. It was simply too expensive to capture, store, and analyze a large enough sampling of voice data to push research forward.

Leveraging scale

Over the last few years, falling costs for storage and computing power, paired with a much larger population of internet users, have made this type of data collection a lot cheaper and easier. In the case of Google (and to some extent Apple), features like Voice Search probably weren’t initially intended to be products in and of themselves, but rather capture points for the company to collect and analyze voice data so that it could improve future products.

Analysis of a massive database of voice data paired with what are likely some very smart algorithms helped Google build their latest update to Voice Search. For the more scientifically minded, Google’s research site has a lot of interesting information on this analysis work. And as you can see from the video, the results are impressive.

Welcome to the future

For people who have been following voice recognition, the recent uptick in progress is very exciting. Now that the field has gained some momentum, development will likely advance at a rapid pace. The “teaching” component of these systems will improve, enabling them to decipher natural language without human help and more products will include voice interfaces. It’s been a long time coming, but it’s finally starting to feel like the future.

What will technology be like when Generation Y gets old?

When I think about the future, I wonder what technology will be like 40 years from now. Members of today’s older generation love to boast to the younger crowd about how they got through life without computers, tablets, or smartphones. However, when I’m older, what am I going to say to the younger generation? “When I was your age, we didn’t have mind-controlled submarines!”?

It’s certainly interesting to think about, and while we can’t accurately predict what new technologies will be invented during the next 30 or 40 years (since anything can happen), it’s still fun to predict at least how technology might evolve over the next few decades.

In my opinion, technology has evolved and progressed so quickly over the past 10 years alone that I feel like it’s going to reach a plateau soon. We’ll still have the traditional computers, tablets, and smartphones, but they’ll simply be thinner, lighter, and much faster. That is, until a completely new revolution comes along, like when personal computers came to fruition or when the automobile was invented.

Then again, I have no idea what “completely new revolution” will come since it hasn’t even been invented yet. I mean, before automobiles and planes were invented, nobody had any idea that we’d be able to travel to another part of the world in less than a day. That’s how I feel about the future of technology when I’m 65 — what crazy new things will be invented at that point that I never would have dreamed of?

The only reason I say that technology might be reaching a plateau soon is that Moore’s Law simply cannot last forever, even though it has held for almost half a century so far. It’s expected to hold only until about 2020, give or take a few years.

If you’re not familiar with Moore’s Law, it’s the observation that the number of transistors that can fit onto an integrated circuit doubles roughly every two years. It’s named after Intel co-founder Gordon Moore, though the term itself was coined by computer scientist and former Caltech professor Carver Mead.
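
To put that doubling into concrete numbers, here’s a quick back-of-the-envelope calculation. It’s illustrative only, not a prediction about any real chip:

```python
# Back-of-the-envelope Moore's Law arithmetic: a doubling roughly every two years.
def projected_transistors(start_count: float, years: float,
                          doubling_period_years: float = 2.0) -> float:
    return start_count * 2 ** (years / doubling_period_years)


# Example: a chip with roughly 1 billion transistors today, projected forward.
today = 1e9
for years in (2, 6, 10, 20):
    print(f"+{years:2d} years: ~{projected_transistors(today, years):.1e} transistors")
```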

One of my biggest questions is, when Moore’s Law eventually collapses, how will technology evolve? Will there be another “law” that replaces Moore’s Law? Or will technology simply just evolve at a slower pace than before?

Image Credit: Sean McEntee

Spice Up Your Android Wallpaper With Astronomy Picture of the Day

Once you’ve grown tired of Android’s multitude of live wallpapers, or if (God forbid) you’re still rocking whatever default wallpaper your phone came with, it might be time for a change. Add some science to your day and keep your wallpaper fresh with Astronomy Picture of the Day.

Astronomy Picture of the Day scrapes the NASA webpage of the same name and makes it easy to set the current day’s picture as your wallpaper.
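
If you’re wondering what that scraping might involve, the sketch below pulls today’s image URL out of the APOD page with a simple regular expression. It assumes the page still embeds its picture via a plain img tag, and it isn’t necessarily how the Android app does it.

```python
# Rough sketch of scraping today's Astronomy Picture of the Day image URL.
# Assumes the APOD page still embeds its picture via a plain <img src="..."> tag;
# the Android app's own implementation may well differ.
import re
import urllib.request

APOD_URL = "https://apod.nasa.gov/apod/astropix.html"


def fetch_apod_image_url():
    with urllib.request.urlopen(APOD_URL) as response:
        html = response.read().decode("utf-8", errors="replace")
    match = re.search(r'<img\s+src="([^"]+)"', html, re.IGNORECASE)
    if match is None:
        return None
    # Image paths on the page are relative, e.g. "image/2301/something.jpg".
    return "https://apod.nasa.gov/apod/" + match.group(1)


if __name__ == "__main__":
    print(fetch_apod_image_url())
```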

As you would expect from an application used to set your wallpaper, Astronomy Picture of the Day is simple to use. From the main page you can see the current day’s picture, or scroll through a list of previous days’ pictures. Once you find the picture that catches your fancy, just click on it, hit the Menu key, and click Auto Set Wallpaper (note that there are a few other options in this menu as well).

On the picture’s page you can also click the little ‘i’ button in the upper right hand corner to learn more about that picture.

Astronomy Picture of the Day’s coolest feature is its ability to auto-update your wallpaper every day. From the main page hit Menu, then Preferences. Here you can set the time of day that the application should update, as well as set a few other options.

Astronomy Picture of the Day can be downloaded from the Android Market here or by scanning the QR code below. If you really enjoy the application consider donating to the developer, which also removes advertisements from the main application.

Progress Bars use Optical Illusions to Appear Faster

I’d rather not think about how many hours I’ve spent staring at progress bars in my life. They’re a lot faster than they used to be thanks to better processors and internet connections, but we still have to wait patiently while transferring large files, downloading video games on Steam, or streaming videos online.

Have you ever wondered why modern progress bars have become so much more animated than they were in the “old days”? Part of it might be that user interfaces are more aesthetically pleasing than they used to be, but the biggest reason is that they want to deceive you.

That’s right, your harmless little progress bar is lying to you.

Studies have shown that the animations used in progress bars can make them seem up to 11% faster, as shown in the New Scientist video below. Tricks like ripples and pulses of light are a simple way to make you feel like things are happening quicker than they really are.
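
If you want to see the effect for yourself, here’s a tiny terminal toy: a fixed-duration “task” drawn as a progress bar with a bright pulse sweeping across the filled portion. It’s a crude imitation of the ripple trick, not anyone’s production code.

```python
# A crude imitation of the "pulse" trick: a fixed-duration task drawn as a
# terminal progress bar with a bright marker sweeping across the filled portion.
import sys
import time


def draw_bar(progress: float, width: int = 40, pulse_pos: int = 0) -> None:
    filled = int(progress * width)
    bar = ["#" if i < filled else "-" for i in range(width)]
    if filled > 0:
        # The sweeping "pulse" that makes the bar feel busier than it really is.
        bar[pulse_pos % filled] = "@"
    sys.stdout.write(f"\r[{''.join(bar)}] {progress:4.0%}")
    sys.stdout.flush()


steps = 100
for step in range(steps + 1):
    draw_bar(step / steps, pulse_pos=step * 3)
    time.sleep(0.05)
print()
```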

The next time you’re waiting for your computer to transfer a large file to your USB flash drive, I expect you to point an accusing finger and yell “LIES!”.

Image courtesy: D’Arcy Norman

One Small Step for Android, One Giant Leap for Synthetic Robotic Organisms

You might remember that we gave away some pretty sweet limited edition Android collectibles a few months ago.  As it turns out, these Androids had a few buddies that recently took a trip to the upper stratosphere (about 100,000 feet above the Earth’s surface) to take part in a research experiment by Google.

Image Courtesy Google

A team at Google built seven payloads equipped with Nexus S Android smartphones (with accompanying Android toys commanding each payload) and sent them into the Earth’s atmosphere with weather balloons.  The goal was to capture data with the Nexus S’s internal sensors, including GPS, accelerometer, magnetometer, and gyroscope – not to mention that they got some amazing shots of our big blue planet.

Image Courtesy Google

While I was in college, several of my engineering classmates sent a similar type of balloon into our atmosphere to measure ozone gas profiles in the stratosphere, so Google’s project immediately piqued my interest.  The Nexus S phones sent up used applications like Google Maps (with the new offline data mode available in version 5.0), Google Sky Map, Google Latitude, and some custom apps that sampled data from the phone’s sensors.

Data captured from the Nexus S, rendered in Google Earth.

With the data they collected, the Google team determined speeds and altitudes of the jet stream, and also found that the Nexus S’s internal GPS functioned up to a max altitude of around 60,000 feet (a secondary high-altitude GPS was included in the payload for more robust data collection).  If you’re feeling science-y, check out the rest of their findings in the What We Found section of their blog post.
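
As a rough illustration of the kind of post-processing involved (and not Google’s actual analysis), the ground speed between two GPS fixes can be estimated from the great-circle distance divided by the time elapsed:

```python
# Illustration only (not Google's actual analysis): estimating ground speed from
# two GPS fixes using the great-circle distance divided by the time between them.
import math


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


# Two made-up GPS fixes taken 60 seconds apart (illustrative values only).
fix_a = (37.40, -122.08)
fix_b = (37.42, -122.05)
elapsed_s = 60.0

speed_kmh = haversine_km(*fix_a, *fix_b) / (elapsed_s / 3600.0)
print(f"estimated ground speed: {speed_kmh:.0f} km/h")
```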

Lastly, check out the beautiful video they shot from one of the payloads below.  Is this breathtaking footage or what?

You can also check out a brief summary of the project in the following video.

I love it when big companies use their tremendous resources to do interesting science experiments like atmosphere and space exploration. Research projects like this are conducted at universities around the world all the time, but when Google runs one, we all get to follow along.

On a side note: If anybody at Google needs help sending Android phones into space, please call/text/tweet/email/fax/telegraph me and I’ll be there in a heartbeat.

HP and Hynix are Creating the Next Generation of Memory

HP is currently working with Hynix Semiconductor to develop the next generation of computer memory.  This new non-volatile memory, dubbed Resistive Random Access Memory, or ReRAM for short, will be built on memristor technology.  The memristor had been considered purely theoretical since 1971, but that changed in 2006 when HP Labs was able to develop the technology.

Memristor technology has the potential to enable some great things in the PC world.  It relies on a material that changes resistance when a voltage is applied to it.  Not only is memristor technology expected to work as memory, but the companies also believe it has the potential to perform logic functions.  That would allow storage devices to perform work that would normally take place in a central processing unit (CPU), a combination that opens the door to significant speed improvements in PCs.
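
To get a feel for how a memristor “remembers”, here’s a toy simulation loosely based on the linear ion-drift model HP Labs described: an internal state variable tracks how much charge has flowed through the device, and the effective resistance slides between a low and a high value accordingly. The parameter values are made up purely for illustration.

```python
# Toy memristor simulation, loosely based on the linear ion-drift model HP Labs
# described: the device "remembers" how much charge has flowed through it, and its
# resistance slides between a low and a high value accordingly.
# All parameter values below are made up purely for illustration.
R_ON, R_OFF = 100.0, 16_000.0   # low/high resistance, ohms (illustrative)
MOBILITY = 1_000.0              # state change per coulomb of charge (illustrative)


def simulate(voltages, dt=1e-3, w=0.5):
    """Apply a sequence of voltage steps; return the resistance seen at each step."""
    history = []
    for v in voltages:
        resistance = R_ON * w + R_OFF * (1.0 - w)
        current = v / resistance
        # Linear drift: the internal state moves in proportion to the charge that flows.
        w = min(1.0, max(0.0, w + MOBILITY * current * dt))
        history.append(resistance)
    return history


# Positive pulses drive the resistance down; negative pulses drive it back up.
trace = simulate([1.0] * 2000 + [-1.0] * 2000)
print(f"start: {trace[0]:.0f} ohm, after +V pulses: {trace[1999]:.0f} ohm, "
      f"after -V pulses: {trace[-1]:.0f} ohm")
```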

HP and Hynix would eventually like to make ReRAM a standard for all types of memory, including long-term storage media such as hard drives.  While the long-term goals for this memristor-based ReRAM are ambitious, the near-term goals are more modest: HP and Hynix are currently working to position ReRAM as a replacement for flash memory.  Memristor technology could allow the production of memory chips that run ten times faster and use a tenth of the power of current flash memory chips.  The companies also say that ReRAM will endure more rewrites than is possible with flash memory.

The companies believe they can bring memristor-based ReRAM to market by 2013 – in about half the research and development time it would take if they were working independently.  This is due in part to the combined strengths of the two reputable companies, HP being one of the largest PC manufacturers and Hynix being the second largest memory chip maker in the world.

I have to say that I’m quite excited about this new memory technology.  Finding better and faster ways to store our data is becoming more important to all PC users.  If this new partnership succeeds, the possibilities are exciting.  We could see faster RAM, higher capacity drives, improvements in portable devices that rely on compact flash memory, and generally faster computers.  I’ll be keeping a close eye on HP Labs as they continue to develop this technology.

[via HP Labs]
Image Credit: Pete

Perseids Meteor Shower Peaks Soon, Find the Best Time to Watch Where You Live

Depending on where you live, the Perseids meteor shower will be reaching its peak sometime during the night of August 12th and the morning of August 13th.  Since many factors can impact the visibility of astronomical events like meteor showers, NASA provides a great Java applet called the Fluxtimator that can show you the best time to go stargazing.

To use the Fluxtimator, select the meteor shower you would like to observe (tonight’s being “#7 Perseids”), select your location, and then select your viewing conditions.  Specifying your viewing conditions will help fine tune the ideal time to view the event.

Once you’ve entered the information, you’ll see a chart displaying the optimal times to view the shower.  In my part of the world, best viewing conditions for the Perseids will occur between 1am and 5am on August 13th.

The Fluxtimator takes into account moonlight (which is supposed to be minimal this year, providing a better view of the shower), but you should stay away from city lights and street lamps for the best possible view.

The Perseids should be visible around the Cassiopeia constellation, and you can check out Sky and Telescope’s great illustration of where to look.  Android users can use the free app Google Sky Map to easily locate the correct area to watch.

Happy stargazing!

Image credit: Tambako