Amazon Echo

If you haven’t heard of Echo, it’s a new device best described as Siri in your living room, except that people actually use it. And it does just that: it listens to your voice and handles simple queries.

Amazon Echo, casually listening to your entire life.

It’s supposed to have a seven-microphone array (which your phone doesn’t), making its speech recognition more accurate; it’s always on and listening (making the interaction more frictionless); and it has a great speaker. And perhaps being in the comfort of your own home - where talking to machines is somewhat less awkward - is another big reason behind its popularity.

Infographic: what the Amazon Echo is actually used for (Statista).

Despite not having a great experience with it, I couldn’t resist the fun of developing for it. For now, I’m hoping that my unusual accent is the reason we don’t get along that well, and that the next generations will improve.

Alexa skills

Another advantage over Siri (although that’s changing) is that it has its own app store. Apps here are called “skills”. But you don’t need machine learning expertise to develop most skills; Amazon does the heavy lifting for you.

The way it works is:

  1. the user speaks to an Alexa device (like Amazon Echo)
  2. Amazon does the speech recognition, intent classification and entity extraction, and calls our service with the processed speech (e.g. the user says “what’s the weather in London”, and the service gets `{'intent': 'KnowWeather', 'entities': {'city': 'London'}}`)
  3. the skill receives the nicely parsed data, does its logic (e.g. queries a weather API for London) and sends its response back to Amazon (as the reply to Amazon’s HTTP POST request)
  4. Echo converts the skill’s text response to speech.
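The round trip above can be sketched as a plain request handler. This is a minimal sketch, not the real skill from the cookie cutter: `handle_request` and `build_response` are hypothetical names, and the request/response shapes follow the general pattern of Alexa’s JSON envelope (intent name plus slot values in, `outputSpeech` out) rather than reproducing it exactly:

```python
def build_response(text, end_session=True):
    """Wrap plain text in an Alexa-style response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }


def handle_request(event):
    """Dispatch on the intent name Amazon extracted from the user's speech."""
    request = event["request"]
    if request["type"] == "IntentRequest":
        intent = request["intent"]
        if intent["name"] == "KnowWeather":
            city = intent["slots"]["City"]["value"]
            # A real skill would query a weather API for this city here.
            return build_response(f"Looking up the weather in {city}.")
    return build_response("Sorry, I didn't get that.")
```

The nice part is that all the hard work (speech to intent and slots) has already happened by the time this code runs; the skill itself is ordinary web-service logic.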

Want to code an app for it in Python? Read on.

After going through a number of issues getting an Alexa skill up and running properly in Python via Kubernetes, I thought I’d abstract the core setup into a cookiecutter template. To get started:


And continue with these instructions. It should take you from nothing to a working app on a real-life Echo device in ~30 minutes.

The included skill is barebones, if that: for a small number of hardcoded ingredients, it provides an example replacement if you ask for it. “Alexa, ask cook bot what does replace lemon?”
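The logic behind that utterance can be as simple as a dictionary lookup. A minimal sketch, where the substitution table and the `replacement_for` helper are invented for illustration (the cookie cutter ships its own version):

```python
# Hypothetical hardcoded substitution table for the demo skill.
SUBSTITUTIONS = {
    "lemon": "a splash of vinegar",
    "butter": "olive oil",
    "buttermilk": "milk with a teaspoon of lemon juice",
}


def replacement_for(ingredient):
    """Return a spoken substitution suggestion for a known ingredient."""
    substitute = SUBSTITUTIONS.get(ingredient.lower())
    if substitute is None:
        return f"Sorry, I don't know a replacement for {ingredient}."
    return f"You can replace {ingredient} with {substitute}."
```

The returned string goes straight into the skill’s text response, which Echo then speaks back to the user.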


What is not included, as of January 2017, is:

  • proper logging (for exceptions and performance monitoring): hopefully someone will later do a Helm chart for Sentry and others to easily deploy these
  • autoscaling, for both pods and nodes
  • some dialogue patterns, along with more API examples (with tests)
  • CI/CD
  • test environments
  • integrated analytics
  • local Kubernetes tests (currently it tests the app itself, but everything up from gunicorn is untested)
