Mac Tool Tip #1: Mailplane

I’ve been using Gmail for a long time and have been using the web UI. Keyboard shortcuts enabled of course. Then I started realising that it drowned visually among the other tabs in Chrome. So I pinned it to the first tab, which was ok but not awesome.

When I switched to Mac after a lifetime in PC land I found out about Fluid app (on the Ruby Rogues podcast I think), which lets you wrap any web app as its own Mac app, with a dock icon and all. That was a step up, but I had challenges there too. I have multiple Gmail accounts and had a hard time finding a nice solution for that.

Along came a Kickstarter project for Kiwi for Gmail. I immediately liked it and backed it. After beta-testing for a while I got it and I was happy. Worked like a charm, hosted multiple accounts in isolation; in short, all I wanted. Then it started crashing, having problems with preview windows and stuff. It was in this state for a long time. I had some interaction with the support and got a ”we know we have stability issues with your version” answer.

I’m not very patient. So reading Omar Shahine’s blog I stumbled upon a reference to Mailplane. Tried it out and now it’s my Gmail weapon of choice :). The preview functionality for images, PDFs and such things is not as good as Kiwi’s. Neither is the UI for downloading attachments. But it doesn’t crash or hang, which wins out in my book.

I’ll probably switch again within a couple of months, but for the time being Mailplane is my Gmail client of choice.


Update firmware on Particle Photon using the CLI

Previously, updating the firmware on a Particle Photon was a bit tricky. You could use the Firmware Updater app on Mac; however, I tried a couple of times and never got it to work. Another way was to use the particle-cli, but it was a little messy: you had to download the firmware, consisting of two files, and then run particle commands with those files in the right order.

However, I just upgraded the CLI tools (> npm install -g particle-cli) and discovered some new (at least to me) utility functions, namely particle update.

I connected a Particle Photon via USB and ran it.

(Screenshot: particle update running in the terminal.)
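The command itself is simply:

> particle update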

Really simple and worked like a charm!

Water level notifier – home IoT, sending text messages using a Particle Photon

(This is also posted at Medium as a little A/B-test :)).

So, I’ve been itching to do a little IoT project at home. The last thing I did was a little “information radiator” telling us how long before the bus leaves (here). Now I wanted to do something else fun and useful.

So in the basement of our house we have a geothermal heating pump. When the water gets warm and expands inside the pump, the pump has to dispose of some of it. We don’t have a floor drain in that part of the cellar, so it ejects the water into a jar. When the jar is full I empty it and put it back.

In order for me not to have to check whether it’s full at regular intervals, I put together the following solution: when the water level reaches a certain threshold I get a text message telling me that it’s time.

Parts used

Parts needed

1 Particle Photon microcontroller – an Arduino-ish microcontroller with WiFi on the chip and some neat cloud services around it that you can use if you want to. I think they made my solution a bit easier, so I used their cloud. I had one lying around at home.

1 Water sensor for Arduino – a cheap, simple thing that returns a different voltage depending on the water level.

1 Particle account – for setting up webhooks

Particle CLI tool – I use it to register webhooks and get data from the device. You can probably get by without it, but I’ll use it in this article.

Modular cable with 4 conductors – you can use any cable; I had this lying around from earlier projects.

Soldering gear – you can probably make it work without soldering, using clamps or lab jumper cables instead.

Parts not needed but used

Experiment board – You can use a breadboard or just solder wires to pins and other wires.

PCB connector – again, you could just solder it on to pins.

Shrink tubes – I used them to secure some of my amateurish soldering.

Zip tie – just to hold some stuff together

Building the hardware

I actually proved the concept out using an Arduino Uno, a breadboard and some jumper cables. But when I set off to make the “real thing” I started by soldering the cable’s conductors to the water sensor. Keep track of the colors! I used red for power, black for ground and green for signal (the signal wire delivers a different voltage depending on the water level). Then I put on some shrink tubing to secure the soldering.


I decided to put a PCB connector on a prototyping circuit board so that I can easily change the length or type of wire used for the water level sensor.


So to clarify the layout:

(Fritzing breadboard diagram of the water level alarm layout.)

This is how my first, crude prototype looked.


I made it look a little bit more polished.




So there are a few parts to this solution. Since Particle has cloud services working with their devices, and an SDK available, I opted to use that instead of doing HTTP calls directly from the Particle Photon to Twilio, to get it done faster. Here’s an overview of the solution.


Setting up Twilio

First of all you need to set up a Twilio account at When you’re logged in you need to deposit some money to be able to use Twilio. Then you need to acquire a phone number enabled for SMS.


You need your Twilio number, account SID and auth token.


Creating the Webhook

Now we’re going to put the Twilio info to use when we create a webhook in our Particle account. A Particle webhook is a cloud service that acts as a bridge between your Particle and the rest of the world. The Particle SDK provides nice utility abstractions for these, so all you have to do in your code (as we’ll see later on) is to call Particle.publish("webhook-name", "message", 60, PRIVATE); which is kinda neat.
The webhook definition file is pretty straightforward.

Create a file named twilio-webhook.json like this.
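Something along these lines should do it (the event name is whatever the firmware publishes, and the URL path, numbers and credentials are placeholders for your own Twilio details):

{
  "event": "twilio-webhook",
  "url": "",
  "requestType": "POST",
  "auth": {
    "username": "YOUR_ACCOUNT_SID",
    "password": "YOUR_AUTH_TOKEN"
  },
  "form": {
    "From": "YOUR_TWILIO_NUMBER",
    "To": "YOUR_MOBILE_NUMBER",
    "Body": "{{SPARK_EVENT_VALUE}}"
  },
  "mydevices": true
}

Then register it with the CLI:

> particle webhook create twilio-webhook.json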

Now we have everything set up and just need some code to get it running :). All the code is available on GitHub.

Code on the device

First off I declare some variables and a method.
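Judging by the description below, the declarations would look something along these lines (the pin, delay and threshold values are assumptions):

// Sketch of the declarations -- values are illustrative, tune to your setup
const int DELAY_IN_SECONDS = 60;         // how often to check the water level
const int waterLevelPin = A0;            // analog pin the sensor's signal wire is connected to
const int WATER_LEVEL_THRESHOLD = 2000;  // alarm level, tune to your sensor

int waterLevel = 0;                      // latest reading, exposed to the Particle cloud
bool smsSent = false;                    // so only one text is sent per full jar

void sendSms(String message);            // publishes the event that fires the webhook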

DELAY_IN_SECONDS is how often to check the water level, sendSms is the method that calls the webhook, waterLevelPin defines which analog port I connected the water level sensor to, and then there are some values for the water level and for keeping track of the text message status. Next up, the setup:
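Again a sketch, reusing the names above (the cloud-exposed names are assumptions):

void setup() {
  // Expose the latest reading and a manual SMS trigger to the Particle cloud,
  // so they can be inspected and invoked through the particle CLI while testing
  Particle.variable("waterLevel", waterLevel);
  Particle.function("sendSms", cloudSendSms);

  // The sensor's signal wire is read as an analog input
  pinMode(waterLevelPin, INPUT);
}

// Particle.function handlers must have the signature int f(String)
int cloudSendSms(String command) {
  sendSms(command);
  return 0;
}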

Using Particle.variable and Particle.function to expose a variable and a method to the Particle cloud service, for testing purposes. Setting the mode of the water level pin to input using pinMode(..).
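Here’s a sketch of that chunk; the published event name has to match the one in twilio-webhook.json, and the threshold logic follows the description below:

void loop() {
  getWaterLevel();
  checkWaterLevel();
  delay(DELAY_IN_SECONDS * 1000);
}

void getWaterLevel() {
  // Store the reading in the global so it can be monitored via the particle CLI
  waterLevel = analogRead(waterLevelPin);
}

void checkWaterLevel() {
  if (waterLevel > WATER_LEVEL_THRESHOLD && !smsSent) {
    sendSms("Time to empty the water jar!");
    smsSent = true;
  } else if (waterLevel <= WATER_LEVEL_THRESHOLD) {
    smsSent = false;  // the jar has been emptied, arm the alarm again
  }
}

void sendSms(String message) {
  // Fires the webhook registered in the Particle account
  Particle.publish("twilio-webhook", message, 60, PRIVATE);
}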

A larger chunk of code this time. getWaterLevel() gets the value and puts it in a global variable, just so I can monitor it through the particle CLI. checkWaterLevel() does the actual checking and sends an SMS if the current value is higher than the threshold and no message has been sent.

So no rocket science but a fun little project.
Then I ended up tweaking it a little adding a few LEDs and stuff just for fun.

Docker Compose Config

Diving into docker compose files (docker-compose.yml) there are a lot of keywords used. Some are obvious, others not. Here’s a little cheat sheet, not total coverage by any means, but hopefully a few nuggets to get started.

docker-compose file


build

Points to your Dockerfile. If it’s named Dockerfile and resides in the same directory as your docker-compose.yml you can specify it with a dot (.), otherwise you give it a path.
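For example, pointing at the current directory:

build: .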

You can only use build OR image (see below), not both.


image

Names an image, local or remote (if it’s not local docker will try to pull it down). Could for example be redis, ubuntu, mongodb or something else.
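For example, pinning a specific version:

image: mongo:3.0.4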


links

This is used to set up relationships (links) between docker containers in your compose environment.
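For example, letting a web container reach its backing services by name:

links:
  - redis
  - mongo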

It’s also possible to set up aliases using links, on the form servicename:alias:

links:
  - redis:cache


ports

Pretty self-explanatory: what ports should be mapped out to the outside world. Entries follow a host:container pattern, so if you have a web server on port 80 and want to expose it on port 80 you do 80:80. If you only give it 80 it will give you a random port for the outside world to use that maps into port 80 on that container.
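Here’s an example ports section:

ports:
  - "80:80"
  - "8080"
  - "8000-8030:3000-3030"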

So this example maps port 80 to port 80, port 8080 to a random outside port, and the interval 8000-8030 to 3000-3030.


volumes

Is used to mount paths as volumes. You can mount a path from the host machine, or just create a volume inside the container.
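For example:

volumes:
  - /var/lib/data     # a volume inside the container
  - ./src:/app/src    # a host path mounted into the container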


volumes_from

Is used to mount volumes from another container or service. An example could be a web server mounting volumes from a file server.
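For example (where fileserver is another service in the same compose file):

volumes_from:
  - fileserver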


container_name

Specifies a custom name for the container.
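For example:

container_name: web_app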


environment

Adds environment variables to the container, which in node.js for example can be accessed using process.env.
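For example, this sets NODE_ENV, which the app can then read as process.env.NODE_ENV:

environment:
  - NODE_ENV=production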

Next up I’ll look at the Dockerfile.

Starting with docker – tools and what they do

Last week I was thrown into docker land head first, and this post is just a few notes from getting into it and figuring out the different concepts and tools.


First of all it needs to be installed. I’m running on OS X and favor homebrew for that:

> brew install docker
> brew install docker-machine
> brew install docker-compose

There’s also a downloadable installer available, and it’s installable on Windows.

Now to the tools and what they do.


Command: docker-machine

Creates and manages the docker host(s).

Earlier known as boot2docker, and the image it boots is still named boot2docker.iso. There are different drivers; below I’ll use VirtualBox, which is very common on Mac OS X.

So on OS X creating a virtualbox docker host can look like this:

$ docker-machine create --driver virtualbox docker-test
Running pre-create checks...
Creating machine...
(docker-test) Copying /Users/nippe/.docker/machine/cache/boot2docker.iso to /Users/nippe/.docker/machine/machines/docker-test/boot2docker.iso...
(docker-test) Creating VirtualBox VM...
(docker-test) Creating SSH key...
(docker-test) Starting VM...
Waiting for machine to be running, this may take a few minutes...
Machine is running, waiting for SSH to be available...
Detecting operating system of created instance...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect Docker to this machine, run: docker-machine env docker-test

Now we can see the machine exists:

> $ docker-machine ls
NAME          ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER   ERRORS
default       -        virtualbox   Running   tcp://           v1.9.1
docker-test   -        virtualbox   Running   tcp://           v1.9.1

And we can check its status

> $ docker-machine status docker-test                                                                                            

A common problem is that the docker client can’t find the docker machine from your host (Mac OS X for example). You then need to export the docker-machine config into your shell. First get it:

> $ docker-machine env docker-test                                                                                            
export DOCKER_HOST="tcp://"
export DOCKER_CERT_PATH="/Users/nippe/.docker/machine/machines/docker-test"
export DOCKER_MACHINE_NAME="docker-test"
 # Run this command to configure your shell:
 # eval $(docker-machine env docker-test)

Personally I’ve put an alias in my rc-file for running:
> eval $(docker-machine env docker-test)
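Something like this in the rc-file (the alias name is just my pick):

alias denv='eval $(docker-machine env docker-test)'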


Docker Engine

“A lightweight runtime and robust tooling that builds and runs your Docker containers.”

So what that means, as I understand it, is that on a docker host (aka docker-machine) you can manage the separate docker images using the docker command.

Command: docker

The docker command can be used to start, stop and manage individual docker containers. It can also run, pull and push images from/to the docker registry.

If you get the error: “Cannot connect to the Docker daemon. Is the docker daemon running on this host?”

You can run (see more info above):
> eval $(docker-machine env docker-test)

To run an image:
> docker run hello-world

Remove docker images (don’t do this if you’re not sure):
> docker rmi $(docker images -q)


Docker Registry

A repository for docker image files. The most common one is Docker Hub. For example, this pulls the grafana image from Docker Hub and runs it:

> docker run -i -p 3000:3000 grafana/grafana


Command: docker-compose

So docker, or the docker engine, enables us to run images in containers, which is great. The only thing is, our applications usually consist of more than one box. Here’s where docker compose comes in: it lets us specify entire environments.

Let’s take an example: we want to spin up a solution with 3 boxes (a web server with node.js, a redis server and a mongo database). We might create a docker-compose.yml file looking something like this:

node:
  build: .
  container_name: web_app
  ports:
    - "3000"
  links:
    - redis
    - mongo

redis:
  image: redis
  ports:
    - "6379:6379"

mongo:
  image: mongo:3.0.4

In this case I have a Dockerfile in the same folder as docker-compose.yml that defines my node environment.

Then we build it and start it:

> docker-compose build
> docker-compose up

And it spins up an entire environment with 3 containers. To check we can open another terminal (don’t forget the eval-trick mentioned earlier to set up docker environment variables, they are per terminal session) and run docker ps:

> $ docker ps
CONTAINER ID        IMAGE                   COMMAND
7847a9012f4c        dockercomposelab_node   "npm start"
a9182a623fc6        redis                   "/ redis"
783370a485c1        mongo:3.0.4             "/ mongo"


Docker Swarm

Docker Swarm is still a bit above my head, but if I understand it somewhat correctly it’s a tool for managing multiple docker hosts as a single unit / virtual docker host.

Hope this helps!

Summary of goals 2015

This is a post that probably is not of interest to anyone else but me. It’s just a way to keep myself accountable and thus hopefully achieving more of my goals.

I had a few goals for 2015, let’s see how I did:

Weight 85 kg

Actually I reevaluated this goal when preparing for the world championship in underwater rugby. I realized, after losing quite a lot of weight, that I need to weigh more than 85 kg. So I’ll mark this as a success.

Read 15 books

Success. See my Goodreads profile.

One blogpost per month

Fail. Wrote 11, according to the report from Jetpack.

Update my LinkedIn profile


Look into starting my own business

Success. Started my own company :).

Finish renovating my daughter’s room

Fail. Had to put this on hold a little when we got a summer house.

Personal finance, make budget


Speak at meet up

Fail. I reached out to nodejs sthlm but nothing panned out.

Short post on env: node\r: No such file or directory problem

Trying to use the iothub-explorer tool for node (installed via the npm package for Azure IoT Hub) I ran into some problems. As soon as I touched it I got an error:

env: node\r: No such file or directory
-zsh: list: command not found

I figured out it was due to the different line endings on Unix and Windows. So what I did was open up /usr/local/lib/node_modules/iothub-explorer in my editor (Atom) and convert the line endings with the Atom package named line-ending-converter.
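If you don’t use Atom, a one-liner can do the same conversion (the file path here is illustrative; point it at whichever file the error comes from):

> perl -pi -e 's/\r$//' /usr/local/lib/node_modules/iothub-explorer/iothub-explorer.js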

And it works! It should of course be fixed at the source, and supposedly has been according to an issue on the project. However I didn’t know how to install that version, so in the meantime I fixed it with the trick above.

Small Lesson on Using Request in node.js with Form-data

I was playing around with the 46elks API for sending SMS (it’s a Swedish Twilio). Trying to send data using request I got 404 Not Found all the time, using something like this:

var requestOptions = {
  uri: urljoin(apiBaseUrl, apiBasePath, 'SMS'),
  method: 'POST',
  auth: {
    user: 'username',
    pass: 'pwd'
  },
  formData: {
    from: fromNumber,
    to: toNumber,
    message: message
  }
};

Notice the formData. It implies that the content type will be multipart/form-data, which did not work at all with the 46elks API. All I got was 404 Not Found, all the time.

After trial and error in Postman (a Chrome plugin for building HTTP requests) I found the fix: use form instead. The content type inferred then is application/x-www-form-urlencoded. When I changed it (after 4 hours of banging my head against the wall) to:

var request = require('request');
var urljoin = require('url-join');

var requestOptions = {
  uri: urljoin(apiBaseUrl, apiBasePath, 'SMS'),
  method: 'POST',
  auth: {
    user: 'username',
    pass: 'pwd'
  },
  form: {
    from: fromNumber,
    to: toNumber,
    message: message
  }
};

request(requestOptions, function (err, response, body) {
  // handle the response
});

It worked like a charm! 🙂

I googled like crazy before figuring this out; hopefully you’ll stumble upon this post if you have the same problem.

A Nerd’s Way of Keeping Track of When the Next Bus Leaves

This post is just me nerding out with an open API, a Raspberry Pi, a blink(1) and some node.js code.


Outside my house, about 50 meters away, there is a bus stop. In my struggle to not use the car so much we are trying to take the bus or bike to the kids’ school more. So in the morning I open the app for the commuting timetables and check when an appropriate bus leaves. Then I keep an eye on the clock to time that bus with the kids.

I wanted to make this more effortless.

The project

When there are 10 minutes left the blink(1) lights up green, then it changes to yellow, then to red, and does some flashing for the last minute.
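The gist of the color logic looks something like this (a sketch; the thresholds and function name are illustrative, the real code is in the repo):

var Blink1 = require('node-blink1');
var blink1 = new Blink1();

// Map minutes left until the bus leaves to a blink(1) color
function showMinutesLeft(minutes) {
  if (minutes > 10) {
    blink1.setRGB(0, 0, 0);              // nothing to show yet
  } else if (minutes > 6) {
    blink1.fadeToRGB(500, 0, 255, 0);    // green: get ready
  } else if (minutes > 3) {
    blink1.fadeToRGB(500, 255, 180, 0);  // yellow: shoes on
  } else {
    blink1.fadeToRGB(500, 255, 0, 0);    // red: last minute, hurry!
  }
}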

To do this I used:
* API keys for SL’s realtime API (through
* Raspberry Pi 2
* blink(1)
* USB WiFi adapter for the Raspberry Pi
* Some node.js code (

Getting the Pi in shape

Install the Raspberry Pi OS. I used Raspbian Jessie and followed the instructions.

Then I connected my Pi to a monitor and configured the wifi using the graphical interface (if it boots to prompt, write startx) and made sure the SSH server was up and running by default.

Made sure everything was up to date: sudo apt-get update && sudo apt-get upgrade

Node.js – I tried to get nvm up and running but did not succeed, so I did a sudo apt-get install nodejs.

Get the code

Got the code from my repo (developed on a Mac): git clone, then ran npm install. (The node-blink1 npm package depends on node-hid, which has different install instructions for different node versions, so just be aware and do what’s right for your situation. Read more in the node-blink1 repo.)

Run it

So a simple node busStatus.js does the trick. However, that process dies when the SSH connection goes down. The correct thing would probably be to set it up as a proper daemon process, and I was about to when I stumbled upon screen, a nice little tool that keeps a virtual terminal session going even if the client is not connected.

sudo apt-get update
sudo apt-get install screen

Start it:

> screen
> cd when-does-the-bus-leave
> node busStatus.js 

Then leave the session by hitting ctrl-a followed by d.

When connecting to the raspberry pi again screen -r reconnects you to the screen-session. Nice little utility!

I’ll probably do updates of code and docs on GitHub:


Document databases are not schemaless

This has become a little pet peeve of mine and it’s a bit of a rant. So be warned and exit now :).

I’m getting tired of people who are walking around saying that it’s so nice with document databases because you don’t need a schema. ”You can just insert whatever…”

My issue with this is that I feel whoever says that has never maintained a solution like that.

The schema is always there; if it’s not in the database it’s in the code. I read somewhere that this can be called schema-on-read (as opposed to schema-on-write), which I think sums it up nicely.

Say I have a document database and I’m storing the full address in one field. Then we decide to split it into street address, zip code and city. I can start inserting new documents in that format right away, but when I read the old posts I need to put some logic in place to handle both formats (a schema, that is).

if (entry.full_address && entry.full_address.length > 0) {
    street = getStreetFromFullAddress(entry.full_address);
    zip_code = getZipFromFullAddress(entry.full_address);
    city = getCityFromFullAddress(entry.full_address);
} else {
    street = entry.street;
    zip_code =;
    city =;
}

That said, I think document databases are awesome at certain things and less good at others. Use the right tool for the job, man, and for the right reasons!