A few months ago, I came across a Canadian company called Thalmic Labs who are making a product called the Myo armband. Myo, explained by Thalmic, is a “motion control and gesture control armband bracelet [that] uses arm muscle activity and EMG signals to control digital devices over Bluetooth”. I was particularly intrigued by their vision video. Given my recent fascination with HCI – particularly input methods – I knew this was something I wanted to play with.
Their developer kit shipped with a few built-in controls for common applications (such as PowerPoint and iTunes), as well as an SDK and a few tools to manage the device. The SDK consists of a libmyo library exposed through a plain C API, although most applications use a language binding to talk to the Myo armband. A language binding is just glue code for using the base API – in our case, the C API – from a particular programming language. The developer community has already built a lot of these bindings.
In addition to the C API, the SDK provides a Lua scripting interface to the armband. To explore the functionality of this armband, I used Lua to play a little Halo…
There are five callbacks that are provided for these scripts:
onPoseEdge(pose, edge)
onActiveChange(isActive)
onForegroundWindowChange(app, title)
onPeriodic()
activeAppName()
For what I’m doing, I’m only concerned about onPoseEdge; my implementations for the rest either don’t do anything or just spit out debug statements, but the callback definitions need to be present in every script.
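A minimal sketch of those stubs might look like this – the debug messages and the returned app name are just placeholders I made up:

function onActiveChange(isActive)
    myo.debug("active changed: " .. tostring(isActive))
end

function onForegroundWindowChange(app, title)
    myo.debug("foreground window: " .. tostring(title))
    return true -- claim any foreground window; a real script would match on app/title
end

function onPeriodic()
    -- nothing to do each period
end

function activeAppName()
    return "Halo" -- placeholder name for the app this script controls
end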
onPoseEdge is called every time a gesture is made, on both the starting and ending edge. This is particularly helpful: you can do something for the entire duration of a gesture and stop when you get the falling edge. You can see I use this in my script below for firing the weapon.
function onPoseEdge(pose, edge)
    if pose == "fist" then
        -- Hold fire for as long as the fist is held
        if edge == "on" then
            shoot()
        else
            endshoot()
        end
    elseif edge == "on" then
        -- The remaining gestures should only trigger once, on the rising edge
        if pose == "fingersSpread" then
            grenade()
        elseif pose == "waveOut" then
            reload()
        elseif pose == "waveIn" then
            switchWeapon()
        else
            myo.debug("unused gesture")
        end
    end
end
First of all, let's cover the gestures that cause onPoseEdge to get called. They are:
“fist”
“fingersSpread”
“waveIn”
“waveOut”
“thumbToPinky”
There are two additional gestures that you don't directly control: “rest” and “unknown”. The “rest” gesture is reported when you aren't making a particular gesture, such as after one of the first five gestures ends. “unknown” only shows up in special cases where Myo can't recognize the gesture, such as when Myo is disconnected.
Each one of these gestures causes onPoseEdge to be called twice: once on the rising edge (“on”) and once on the falling edge (“off”). So, for example, when you make a fist we get an onPoseEdge(“rest”, “off”) followed by an onPoseEdge(“fist”, “on”), and then two more calls when you release the fist: onPoseEdge(“fist”, “off”) and onPoseEdge(“rest”, “on”). As long as Myo is connected, some gesture is always considered “on” – most often “rest”. If you want to read more, take a peek at the script reference page.
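If you want to watch the sequence yourself, a throwaway script that just logs each transition makes the pattern obvious:

function onPoseEdge(pose, edge)
    -- prints e.g. "pose: rest edge: off" followed by "pose: fist edge: on"
    myo.debug("pose: " .. pose .. " edge: " .. edge)
end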
Back to the snippet: each helper function simply maps to a keyboard key or mouse button. The script API gives you nice control over how those are used – a one-shot “press” or “click”, or a separate “down” and “up” to simulate holding a key or button. In practice, I'm happy with how this works – it reliably executes the command and makes for a very fun way to play the game.
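For illustration, here's what those helpers might look like – a sketch using the script API's myo.mouse() and myo.keyboard() calls, with placeholder key bindings (use whatever your Halo profile actually maps):

function shoot()
    myo.mouse("left", "down") -- hold the trigger down...
end

function endshoot()
    myo.mouse("left", "up") -- ...until the fist is released
end

function grenade()
    myo.keyboard("f", "press") -- placeholder grenade binding
end

function reload()
    myo.keyboard("r", "press") -- placeholder reload binding
end

function switchWeapon()
    myo.keyboard("tab", "press") -- placeholder weapon-switch binding
end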
Visual control, which in the case of Halo is how you aim, is achieved by using the built-in myo.controlMouse(enabled), which uses the change in roll, pitch, and yaw of the armband to move the view. While it works as advertised, I find that aiming is too sensitive to inspire confidence. I'm sure that will improve as I become more comfortable with the new interaction model, but I do plan to explore other methods that I could use here.
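One related trick – a sketch, assuming a match on the window title is reliable enough – is to only enable mouse control while Halo actually has the foreground window:

function onForegroundWindowChange(app, title)
    -- Only claim the armband (and drive the mouse) while Halo is in front
    local wantActive = string.find(title, "Halo") ~= nil
    myo.controlMouse(wantActive)
    return wantActive
end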
You can grab my latest iteration here; I’ll be working on tweaking the aiming method to hopefully build a more accurate experience. In theory, I might make it auto-center and use onPeriodic() to detect when arm movement happens, and adjust the aim accordingly.
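As a very rough sketch of that idea, onPeriodic() could diff the orientation readings each period – moveAim() here is a hypothetical helper standing in for whatever mouse-movement call ends up working best:

local lastYaw, lastPitch

function onPeriodic()
    local yaw, pitch = myo.getYaw(), myo.getPitch()
    if lastYaw then
        -- Translate the orientation delta into aim movement; the scale
        -- factor is a guess that would need real tuning
        local SENSITIVITY = 400
        moveAim((yaw - lastYaw) * SENSITIVITY, (lastPitch - pitch) * SENSITIVITY)
    end
    lastYaw, lastPitch = yaw, pitch
end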
Even if the developer unit isn't quite as capable as their vision video, the possibilities of modern interaction methods like Myo are very exciting – and getting better every day. I'm looking forward to diving into their C API to see what else we can have fun with…