Android Auto – Not Ready Yet

Pioneer AVH-4100NEX head unit

I was pretty excited to get my Pioneer AVH-4100NEX head unit, which supports Android Auto.  I installed it yesterday, and much to my disappointment, Android Auto doesn’t work.  I banged my head against the wall for an hour or so before giving up.  Today I called Crutchfield tech support (Crutchfield is awesome, by the way), and they informed me that Google hasn’t yet released the required apps.  Apparently Google planned to hold back the release by a week, and that was on the 13th (of March, 2015).  So, who knows, maybe I’ll actually be able to play with my shiny new toy on Friday.  I’m not holding my breath, though, software schedules being what they are.

So, if you find yourself wondering why your brand new head unit doesn’t provide the one feature you bought the thing for, hopefully this clears things up for you.  I know there were no search results when I went looking.  🙂


Poor Man’s Linear Distance Sensor

I’d like to use a linear distance sensor in an upcoming project, but the commercially available sensors seem a bit overpriced (probably because they are targeted at precision machining applications).  Given the parts I have on hand, it seemed worthwhile to try to make one out of just an LED, a CdS sensor, a drinking straw, electrical tape and a microcontroller:

linear distance sensor prototype
Pay no attention to the circuit on the left side of the breadboard; it’s unrelated.

A drinking straw is wrapped in electrical tape (to keep the ambient light out), and then a CdS sensor is inserted into one end:


The LED is inserted into the other end of the straw.  I had to use a 3mm LED, since a 5mm wouldn’t fit.  Then, as the LED is moved inside the straw, the reading on the CdS sensor changes.  A simple Arduino sketch shows a very approximate granularity of 48 units per inch (as reported by the ADC).

#define CDS_PIN 0

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(analogRead(CDS_PIN));  // raw ADC reading from the CdS divider
  delay(100);
}

I followed this guide on the Arduino site for hooking up the CdS sensor.
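Given the rough figure of 48 ADC units per inch, converting a reading into an approximate distance is simple arithmetic.  Here is a minimal sketch (plain Java for illustration; the baseline value and class name are made up, and a real CdS cell’s response won’t be perfectly linear):

```java
public class DistanceEstimate {
    // measured, very approximate scale factor from the experiment above
    static final double UNITS_PER_INCH = 48.0;

    // baseline is the ADC reading with the LED at the zero position;
    // distance is proportional to the change from that reading
    static double inchesFromAdc(int baseline, int reading) {
        return (reading - baseline) / UNITS_PER_INCH;
    }
}
```

In practice a lookup table or curve fit would handle the nonlinearity of the CdS cell better than a single linear scale factor.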

In the end, I’m not sure whether this implementation will be practical for my use case.  I’ll have to wait for more parts to arrive to determine whether the mechanics will work out.  It was a fun little experiment, though!  You can find more photos of the circuit here.

Red Bull Creation Contest Easter Eggs

There’s a command prompt on the contest site, which got me wondering what sorts of commands it would accept.  So, as any curious hacker would do, I started poking at the sources.  That eventually led me to some .swf files, which I decompiled, and I ended up finding these gems (in addition to the commands listed in the HELP menu) for you to enjoy:


The Problem With Android’s ACTION_USER_PRESENT Intent

First, the background.  I bought the Nexus S when it first came out.  I had come to rely on the notification LED on my previous phone, and this phone’s lack of one was quite annoying.  So, I set out to fix my problem by writing NotificationPlus (source here), an Android app that provides recurring notifications via a ringtone and/or the vibrator.

There are a couple of things such an app has to be able to detect.  First, it has to detect incoming events, such as SMS, missed calls, voicemail, email, etc.  I’ll leave the problems with the Android API in that space for another post.  In this post, we’ll focus on the second thing the app has to accomplish, and that is to tell whether or not the user is actively using the phone.  Sounds simple, right?  In my first take, I just checked to see if the screen was turned on.  Surely, if the screen is unblanked, that means someone is looking at the phone, right?  Wrong!  Let’s explore the variety of ways the screen becomes unblanked, shall we?

  1. the user pushed the power button, either intentionally or not (think butt dialing)
  2. an application decided to unblank the screen, such as:
    1. the phone app, which unblanks the screen for an incoming call
    2. messaging apps, such as GoSMS, which unblank the screen when a message arrives
    3. any other app may do this

As you can see, you can’t infer from the ACTION_SCREEN_ON intent that the user turned on the screen.  Thus, it cannot be relied upon for determining when to disable the repeating notification.  I am sure that many of the Android Market comments of the form, “did not work, uninstalled,” boil down to this problem.

My next crack at fixing the problem was to utilize the very promising ACTION_USER_PRESENT intent.  Surely this would be the ticket!  Nope.  Not even close.  ACTION_USER_PRESENT is broadcast only if a lock screen is enabled.  So, if the lock screen preference is set to none, this intent is never broadcast.  That sounds minor, as I doubt many people run this way.

However, there is another, related problem.  How do you determine when the user is no longer present?  You would think that this sort of intent would have a complement, right?  Like ACTION_USER_ABSENT (;-)) or maybe just ACTION_SCREEN_LOCKED.  There is no such intent.  Why does it matter?  Well, when a lock screen, such as pattern lock or PIN, is configured, the user has the option to delay locking after the screen has blanked.  So, let’s say you’re reading a web page, and the screen blanks before you finish.  You hit the power button, and the phone turns back on without requiring the unlock code (and hence without firing the ACTION_USER_PRESENT intent).  If you were relying on a screen blank to tell when the user is no longer present, then you are screwed.  Now, this would be OK, so long as there were a way to query the system preferences for the lock delay setting, but there isn’t.
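For what it’s worth, the kind of heuristic this forces on an app can be sketched as a tiny presence tracker: assume the user is present for some window of time after ACTION_USER_PRESENT fires, where the window length is just a guess at the lock delay.  This is illustrative plain Java, not the actual NotificationPlus code; the class and method names are made up:

```java
// Sketch of the guesswork described above.  Since the framework exposes
// neither a "user gone" intent nor the configured lock delay, the window
// here is a user-supplied guess.
public class PresenceTracker {
    private final long assumedPresenceMillis; // guess at the lock delay
    private long lastPresentAt = Long.MIN_VALUE;

    public PresenceTracker(long assumedPresenceMillis) {
        this.assumedPresenceMillis = assumedPresenceMillis;
    }

    // call when ACTION_USER_PRESENT fires
    public void onUserPresent(long nowMillis) {
        lastPresentAt = nowMillis;
    }

    // true while we are still inside the assumed-presence window
    public boolean isUserProbablyPresent(long nowMillis) {
        return lastPresentAt != Long.MIN_VALUE
                && nowMillis - lastPresentAt <= assumedPresenceMillis;
    }
}
```

The window is exactly the corner case that breaks: if the real lock delay is longer than the guess, the tracker reports the user absent while they are still reading the screen.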

So, that’s the end of my ranting.  This basically means that, without asking the user a bunch of questions about their configuration, there is no way to have a one-size-fits-all solution to this problem.  I can write a ton of heuristics, but they are bound to fail for some corner case or another.  All of this could be avoided if the Android OS just provided a recurring notification option in the settings.  Or, you know, they could fix the API.

Using the Android Speech Recognition APIs

In my most recent project, I put together a voice-controlled iRobot Create using the Android ADK and my Nexus S.  The Android speech recognition API takes care of listening for speech, determining when the speech input has ended, and sending the resulting recording off to “the cloud” for processing.  In the end, what you get back is a list of possible matches (this isn’t an exact science, after all).

There are two ways to incorporate speech recognition into an application.  In the first approach, your application fires an ACTION_RECOGNIZE_SPEECH intent via startActivityForResult, and the results are obtained by defining an onActivityResult method in your class.  As you can see in the Voice Recognition API demo, it is very simple to write an application using this interface!  The problem I had with this approach is that it offers too little control over speech recognition error handling.  Also, I really wanted the speech recognition to be running all of the time.  So, in the end I decided on the second approach: using the SpeechRecognizer class directly in my code.  This actually didn’t make the code all that much more complicated.  As an added bonus, your application is not paused and resumed in order to get the results from the speech recognition activity.

With the mechanics out of the way, the next thing I did was to create a list of voice commands.  The list of speech recognition matches was compared against the command list.  If there was a match, I added the entire list of matches to a hash table, storing the actual command as the value.  Thus, any time a close match came up, it would be found in the hash table, with the entry being the (hopefully) intended command.
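Here is a sketch of that matching scheme in plain Java.  The command set, the substring-based “close match” test, and the class name are illustrative assumptions, not the app’s actual code:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class CommandMatcher {
    private static final Set<String> COMMANDS =
            new HashSet<>(Arrays.asList("forward", "back", "left", "right", "stop"));
    // every recognizer variant we have seen, mapped to its canonical command
    private static final Map<String, String> nearMisses = new HashMap<>();

    public static String resolve(List<String> recognizerResults) {
        // fast path: a previously cached near-miss resolves immediately
        for (String guess : recognizerResults) {
            String cached = nearMisses.get(guess);
            if (cached != null) return cached;
        }
        // slow path: look for a close match (substring test, for illustration)
        for (String guess : recognizerResults) {
            for (String command : COMMANDS) {
                if (guess.contains(command)) {
                    // remember every variant so later lookups are O(1)
                    for (String variant : recognizerResults) {
                        nearMisses.put(variant, command);
                    }
                    return command;
                }
            }
        }
        return null; // no command recognized
    }
}
```

The cache only helps because the recognizer tends to return the same misheard variants repeatedly for the same speaker.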

Now we have the name of a voice command.  We could write another if/else statement to perform the appropriate function call for each of the commands, or we could do something a little fancier.  Using reflection, I turned the command name into a method call.  So, to implement the command “forward,” you simply have to add a method called forward to the class!
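A minimal sketch of that reflection trick in plain Java (hypothetical class and command names; the real app dispatches on its own handler class):

```java
import java.lang.reflect.Method;

public class VoiceCommands {
    private final StringBuilder log = new StringBuilder();

    // adding a command is just adding a public zero-argument method
    public void forward() { log.append("driving forward;"); }
    public void stop()    { log.append("stopping;"); }

    // look the command name up as a method and invoke it
    public boolean dispatch(String command) {
        try {
            Method m = getClass().getMethod(command);
            m.invoke(this);
            return true;
        } catch (ReflectiveOperationException e) {
            return false; // no such command method
        }
    }

    public String log() { return log.toString(); }
}
```

One trade-off of this design: a typo in a command name is only caught at runtime, since the compiler never sees the string-to-method link.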

Now, it isn’t quite that slick.  I still keep an if/else statement in order to get a match on the speech recognition results, and to store close matches in the hash table.  I’ll have to experiment with removing that code to see how it fares.

To browse through the code yourself, check out the project’s source repository.  It’s GPLv3 licensed, so cut-n-paste away into your open source projects!

Voice Controlled iRobot Create

I recently created an instructable on hooking together the Android ADK, an iRobot Create, and (of course) an Android cell phone.  The result is a voice-controlled robot, which you can find here.  I also just uploaded the code for this project to Google’s code repository.  You can browse the sources here, or clone a copy using the following command:

hg clone adk-moto

In future posts, I’ll walk through some of the code, explaining how the voice recognition is done, and why I structured things the way I did.

Stay tuned!

Turtlebot Power/Sensor Board Bill Of Materials (BOM)

Turtlebot Power/Sensor Board

I attempted to put together an order today for the parts needed to populate the Willow Garage Turtlebot power and sensor board.  They have a link to digi-key that’s supposed to have all of the necessary part numbers, but that plain didn’t work for me.  Also, they were missing some information for several of the parts (such as physical dimensions on surface mount capacitors and a resistor).  So, where there was wiggle room, I just ordered a couple of parts, in the hopes that one fits!

One interesting thing (to me, anyway) was that there was a resistor with a value of “0R”.  I’m guessing that is a 0 Ohm resistor, a beast I had not yet come across.  But, sure enough, they do exist.  Apparently there are at least a couple of uses for such a thing.  One is simply to jumper over a trace.  Another is as a placeholder when there are multiple configuration options available and you only want to do one board run.  From the documentation provided for the Turtlebot, it doesn’t appear to be the former.  The datasheet for the gyroscope suggests that you could use a resistor here:

"A single external resistor between SUMJ and RATEOUT can be used to
lower the scale factor"

So, I guess the resistor could be a placeholder in case you wanted to adjust the scale factor.  Maybe it’s an oversight that they left it in the final layout.  Who knows?

For those interested, here is a link to the BOM from digi-key that I put together.  The grand total was a whopping $11.24.  Note that a heatsink is still required (which I ordered separately, only because I forgot to look on digi-key first).  Also, you need the gyro breakout board from Sparkfun.

I’ll post an updated status when the PCB actually arrives.  The lead time from BatchPCB is 3-4 weeks, though, so don’t hold your breath.