Posted in Information Technology, Software Architecture



Every software project includes a set of architecture decisions defining boundaries and constraints for further design and implementation.

It’s important to document those decisions somehow, or else a development team might not know which decisions were made and under which assumptions.

Or they know the decision but lack its context and consequences, and therefore decisions are either blindly accepted or blindly changed.

In the following short tutorial I will show how to structure architecture decisions in so-called Architecture Decision Records and how to manage them with a simple tool named ADR-Tools.


Architecture Decision Records (ADR)

The Agile Manifesto values “working software over comprehensive documentation”, but this does not mean that there should be no documentation at all.

Especially in agile software projects, where the architecture might adapt to new knowledge, market situations or technologies, it is important to know which decisions were made, why they were made and under which assumptions.

The main problem with documentation is that it needs to be close to the project and it needs to be short and concise, or else it won’t be read or won’t be updated.

What are significant decisions that we should document in this scope?

According to my favorite author, Michael T. Nygard (Release It!), significant decisions are those that “affect the structure, non-functional characteristics, dependencies, interfaces, or construction techniques”.

Each record therefore describes a set of forces and a single decision in response to them; forces may appear in multiple Architecture Decision Records (ADRs). We store the records in our project directory as doc/arch/adr-NNN.md and number them sequentially and monotonically.

When reversing a decision, we keep the old ADR file but mark it as superseded, because it still might be relevant to know that there was such a decision.

The ADR file contains these typical elements:

  • Title: ADR files have names consisting of short noun phrases. Example: ADR 1: Use Hystrix to stabilize integration points.
  • Date: The ADR’s creation date
  • Status: The decision’s status, e.g. “accepted”, “proposed” (if stakeholders have not approved yet), “deprecated” or “superseded”
  • Context: A description of the forces at play (organizational, political, cultural, technological, social, project-local…)
  • Decision: The response to these forces, written in active voice and full sentences.
  • Consequences: The resulting context after applying the decision, with all consequences: positive as well as neutral or negative ones.
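
Put together, a filled-in ADR might look like this (the content is invented for illustration):

```markdown
# 2. Use Hystrix to stabilize integration points

Date: 2018-05-26

## Status

Accepted

## Context

Calls to external services can hang or fail and exhaust our request threads,
cascading failures into our own system.

## Decision

We will wrap all outbound integration points in Hystrix commands with
timeouts and circuit breakers.

## Consequences

Integration failures are isolated and fail fast. Developers must configure
sensible timeouts and monitor the circuit-breaker metrics.
```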

For more detailed information, please feel free to read Michael T. Nygard’s Documenting Architecture Decisions.

Of course, other formats and tools for ADRs exist as well.

Installing ADR-Tools

We will be using ADR-Tools to manage our ADRs so we need to install it.

On a Mac, we may simply use the following command (using Homebrew):

brew install adr-tools

Another choice, e.g. for Linux, is to download the latest release from GitHub, untar the archive and add the src path to your PATH.

ADR-Tools is a collection of bash scripts.

Initializing a Project

The first step is to initialize ADRs for our project (in this example it’s a typical Maven project).

We use adr init to initialize ADRs for our project within the specified directory:

$ adr init doc/architecture/decisions

Afterwards, our project directory structure looks similar to this one:

├── doc
│   └── architecture
│       └── decisions
│           └── 0001-record-architecture-decisions.md
├── pom.xml
└── src
    ├── main
    │   ├── java
    │   └── resources
    └── test
        └── java

As we can see, a first decision already has been added to the project: It’s the decision to record our architecture decisions.

When opening the ADR file we’re able to read the following Markdown file:

# 1. Record architecture decisions
Date: 2018-05-26
## Status
## Context
We need to record the architectural decisions made on this project.
## Decision
We will use Architecture Decision Records, as described by Michael Nygard in this article:
## Consequences
See Michael Nygard's article, linked above. For a lightweight ADR toolset, see Nat Pryce's _adr-tools_ at

Creating new Architecture Decision Records

We’re ready now to create our first ADR. The command adr new allows us to create a new ADR with the given title.

$ adr new Use Hystrix to stabilize integration points

This is what the generated Markdown file looks like:

# 2. Use Hystrix to stabilize integration points
Date: 2018-05-26
## Status
## Context
Context here...
## Decision
Decision here...
## Consequences
Consequences here...

We’re adding another ADR:

$ adr new Use Thrift for data serialization between system a and system b

Listing Architecture Decision Records

We may list existing ADRs using adr list like this:

$ adr list

Superseding a Decision

Now we will supersede a decision. If, for example, a new ADR supersedes our decision #3, we type the following command:

$ adr new -s 3 Use Avro for data serialization between system a and system b

This produces the following new ADR file with a reference to the superseded ADR.

# 4. Use Avro for data serialization between system a and system b
Date: 2018-05-26
## Status
Supersedes [3. Use Thrift for data serialization between system a and system b](
## Context
Context here...
## Decision
Decision here...
## Consequences
Consequences here...

Of course, our old decision is changed too, including a reference to the new ADR and setting its status to “Superseded”.

# 3. Use Thrift for data serialization between system a and system b
Date: 2018-05-26
## Status
Superseded by [4. Use Avro for data serialization between system a and system b](
## Context
Context here...
## Decision
Decision here...
## Consequences
Consequences here...

Generating Graphs

To gain an overview of our ADRs’ relations, we may generate a graph in the DOT format using adr generate graph like this:

$ adr generate graph
digraph {
  node [shape=plaintext];
  _1 [label="1. Record architecture decisions"; URL="0001-record-architecture-decisions.html"]
  _2 [label="2. Use Hystrix to stabilize integration points"; URL="0002-use-hystrix-to-stabilize-integration-points.html"]
  _1 -> _2 [style="dotted"];
  _3 [label="3. Use Thrift for data serialization between system a and system b"; URL="0003-use-thrift-for-data-serialization-between-system-a-and-system-b.html"]
  _2 -> _3 [style="dotted"];
  _3 -> _4 [label="Superceded by"]
  _4 [label="4. Use Avro for data serialization between system a and system b"; URL="0004-use-avro-for-data-serialization-between-system-a-and-system-b.html"]
  _3 -> _4 [style="dotted"];
  _4 -> _3 [label="Supercedes"]
}

We may write this output to a file e.g. adr generate graph > generated/

From this DOT file we may create an image e.g. using GraphViz like this:

dot -Tpng generated/ -ogenerated/graph.png

Another choice is to use an online tool like webgraphviz, so you don’t need to install another tool.

This is what the generated graph looks like as png-image:

ADR generated graph of architecture decisions

Generating Table of Contents

To include the ADRs in other documents we may generate a table of contents for our ADRs using adr generate toc like this:

$ adr generate toc

This returns the following Markdown code:

# Architecture Decision Records
* [1. Record architecture decisions](
* [2. Use Hystrix to stabilize integration points](
* [3. Use Thrift for data serialization between system a and system b](
* [4. Use Avro for data serialization between system a and system b](

Linking Architecture Decision Records

When ADRs are related somehow, we want to document this, and adr link eases this for us.

Let’s use an example where ADR #4 amends ADR #2 so that we could link both with the following command:

$ adr link 4 Amends 2 "Amended by"

Now our graph looks like this:

$ adr generate graph
digraph {
  node [shape=plaintext];
  _1 [label="1. Record architecture decisions"; URL="0001-record-architecture-decisions.html"]
  _2 [label="2. Use Hystrix to stabilize integration points"; URL="0002-use-hystrix-to-stabilize-integration-points.html"]
  _1 -> _2 [style="dotted"];
  _2 -> _4 [label="Amended by"]
  _3 [label="3. Use Thrift for data serialization between system a and system b"; URL="0003-use-thrift-for-data-serialization-between-system-a-and-system-b.html"]
  _2 -> _3 [style="dotted"];
  _3 -> _4 [label="Superceded by"]
  _4 [label="4. Use Avro for data serialization between system a and system b"; URL="0004-use-avro-for-data-serialization-between-system-a-and-system-b.html"]
  _3 -> _4 [style="dotted"];
  _4 -> _3 [label="Supercedes"]
  _4 -> _2 [label="Amends"]
}

Or as an image:

Updated Architecture Decision Graph

In addition, both ADR files were updated:

ADR #2 now looks like this one:

# 2. Use Hystrix to stabilize integration points
Date: 2018-05-26
## Status
Amended by [4. Use Avro for data serialization between system a and system b](
## Context
Context here...
## Decision
Decision here...
## Consequences
Consequences here...

ADR #4 has been changed to this:

# 4. Use Avro for data serialization between system a and system b
Date: 2018-05-26
## Status
Supercedes [3. Use Thrift for data serialization between system a and system b](
Amends [2. Use Hystrix to stabilize integration points](
## Context
Context here...
## Decision
Decision here...
## Consequences
Consequences here...



Posted in javascript

No selenium server jar at the specific location

End-to-end testing with Protractor is very important for developers doing BDD or TDD with Angular.

When installing Protractor, a commonly encountered error is “Warning: there’s no selenium server jar at the specified location”.

Though there are several solutions online, some of which have you specify the selenium jar folder in your Protractor configuration file, I disagree with that approach: if you are in a team with other developers contributing to the project, such a machine-specific path may not work for them.

The fix that worked for me: instead of running

webdriver-manager update

as specified on the Protractor site, run this command instead:

./node_modules/protractor/bin/webdriver-manager update

This updates the webdriver that your app is actually using. And voilà, the error should be gone, and you should see a selenium folder in the protractor folder in your app.
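
To make this convenient for the whole team, one option (my suggestion, not from the original post) is to wrap the command in an npm script in package.json, since npm automatically puts the project-local node_modules/.bin on the PATH for scripts:

```json
{
  "scripts": {
    "webdriver-update": "webdriver-manager update"
  }
}
```

Every contributor can then run npm run webdriver-update without worrying about paths.
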
You can drop a comment if you have any other questions.
Posted in Devops, Information Technology

A comparison of tools that help developers build and deploy their apps on Kubernetes


  • Draft
    – deploy code to k8s cluster (automates build-push-deploy)
    – deploy code in draft-pack supported languages without writing dockerfile or k8s manifests
    – needs draft cli, helm cli, tiller on cluster, local docker, docker registry
  • Gitkube
    – deploy code to k8s cluster (automates build-push-deploy)
    – git push to deploy, no dependencies on your local machine
    – needs dockerfile, k8s manifests in the git repo, gitkube on cluster
  • Helm
    – deploy and manage charts (collection of k8s objects defining an application) on a k8s cluster
    – ready made charts for many common applications, like mysql, mediawiki etc.
    – needs helm cli, tiller on cluster, chart definition locally or from a repo
  • Ksonnet
    – define k8s manifests in jsonnet, deploy them to k8s cluster
    – reusable components for common patterns and stacks, like deployment+service, redis
    – needs jsonnet knowledge, ksonnet cli
  • Metaparticle
    – deploy your code in metaparticle supported languages to k8s (automates build-push-deploy)
    – define containerizing and deploying to k8s in the language itself, in an idiomatic way, without writing dockerfile or k8s yaml
    – needs metaparticle library for language, local docker
  • Skaffold
    – deploy code to k8s cluster (automates build-push-deploy)
    – watches source code and triggers build-push-deploy when change happens, configurable pipeline
    – needs skaffold cli, dockerfile, k8s manifests, skaffold manifest in folder, local docker, docker registry

Want to know more? Read ahead.

Kubernetes is super popular nowadays and people are looking for more ways and workflows to deploy applications to a Kubernetes cluster. kubectl itself has become like a low-level tool, with people looking for even easier workflows. Draft, Gitkube, Helm, Ksonnet, Metaparticle and Skaffold are some of the tools around that help developers build and deploy their apps on Kubernetes.

Draft, Gitkube and Skaffold ease the developer effort, when you are building an application, to get it running on a Kubernetes cluster as quickly as possible. Helm and Ksonnet help the deployment process once your app is built and ready to ship, by defining applications, handling rollout of new versions, handling different clusters etc. Metaparticle is an odd one out here, since it combines everything into your code — yaml, dockerfile, all in the code itself.

So, what should you use for your use case?

Let’s discuss.



Simple app development & deployment — on to any Kubernetes cluster.

As the name suggests, Draft makes developing apps that run on Kubernetes clusters easier. The official statement says that Draft is a tool for developing applications that run on Kubernetes, not for deploying them. Helm is the recommended way of deploying applications as per Draft documentation.

The goal is to get the current code on the developer’s machine to a Kubernetes cluster, while the developer is still hacking on it, before it is committed to version control. Once the developer is satisfied by the changes made and deployed using Draft, the code is committed to version control.

Draft is not supposed to be used for production deployments as it is purely intended to be used for a very quick development workflow when writing applications for Kubernetes. But it integrates very well with Helm, as it internally uses Helm to deploy the changes.


Draft: architecture diagram

As we can see from the diagram, the draft CLI is a key component. It can detect the language used from the source code and then use an appropriate pack from a repo. A pack is a combination of a Dockerfile and a Helm chart which together define the environment for an application. Packs can be defined and distributed in repos; users can define their own packs and repos, as these are just files in the local system or a git repo.

Any directory with source code can be deployed if there is a pack for that stack. Once the directory is set up using draft create (this adds a Dockerfile, a Helm chart and draft.toml), draft up can build the docker image, push it to a registry and roll out the app using the Helm chart (provided Helm is installed). Every time a change is made, executing the command again will result in a new build being deployed.

There is a draft connect command which can port forward connections to your local system as well as stream logs from the container. It can also integrate with nginx-ingress to provide domain names to each app it deploys.

From zero to k8s

Here are the steps required to get a python app working on a k8s cluster using Draft. (See docs for a more detailed guide)


  • k8s cluster (hence kubectl)
  • helm CLI
  • draft CLI
  • docker
  • docker repository to store images
$ helm init

Use case

  • Developing apps that run on Kubernetes
  • Used in “inner loop”, before code is committed onto version control
  • Pre-CI: Once development is complete using draft, CI/CD takes over
  • Not to be used in production deployment

More details here.


Build and deploy Docker images to Kubernetes using git push

Gitkube is a tool that takes care of building and deploying your Docker images on Kubernetes, using git push. Unlike Draft, Gitkube has no CLI and runs exclusively on the cluster.

Any source code repo with a dockerfile can be deployed using gitkube. Once gitkube is installed and exposed on the cluster, developer can create a remote custom resource which gives a git remote url. The developer can then push to the given url and docker build-kubectl rollout will happen on the cluster. The actual application manifests can be created using any tool (kubectl, helm etc.)

The focus is on plug-and-play installation and usage of existing well known tools (git and kubectl). No assumptions are made about the repo to be deployed. The docker build context and dockerfile path, along with the deployments to be updated, are configurable. Authentication to the git remote is based on SSH public keys. Whenever any change in code is made, committing and pushing it using git will trigger a build and rollout.


Gitkube: architecture diagram

There are 3 components on the cluster: a Remote CRD which defines what should happen when a push is made to a remote url, gitkubed which builds docker images and updates deployments, and a gitkube-controller which watches the CRD to configure gitkubed.

Once these objects are created on the cluster, a developer can create their own applications using kubectl. The next step is to create a remote object which tells gitkube what has to happen when a git push is made to a particular remote. Gitkube writes the remote url back to the status field of the remote object.

From zero to k8s


  • k8s cluster (kubectl)
  • git
  • gitkube installed on the cluster (kubectl create)

Here are the steps required to get your application on Kubernetes, including installation of gitkube:

$ git clone

$ cd gitkube-example

$ kubectl create -f k8s.yaml

$ cat ~/.ssh/ | awk '$0="  - "$0' >> "remote.yaml"

$ kubectl create -f remote.yaml

$ kubectl get remote example -o json | jq -r '.status.remoteUrl'

$ git remote add example [remoteUrl]

$ git push example master

## edit code
## commit and push

Use case

  • Easy deployments using git, without docker builds
  • Developing apps on Kubernetes
  • While development, WIP branch can be pushed multiple times to see immediate results

More details here.


The package manager for Kubernetes

As the tag suggests, Helm is a tool to manage applications on Kubernetes, in the form of Charts. Helm takes care of creating the Kubernetes manifests and versioning them so that rollbacks can be performed across all kinds of objects, not just deployments. A chart can contain a deployment, service, configmap etc. Charts are also templated so that variables can be changed easily, and they can be used to define complex applications with dependencies.

Helm is primarily intended as a tool to deploy manifests and manage them in a production environment. In contrast to Draft or Gitkube, Helm is not for developing applications, but to deploy them. There are a wide variety of pre-built charts ready to be used with Helm.


Helm: architecture diagram

Let’s look at Charts first. As we mentioned earlier, a chart is a bundle of information necessary to create an instance of a Kubernetes application. It can have deployments, services, configmaps, secrets, ingress etc., all defined as yaml files, which in turn are templates. Developers can also define certain charts as dependencies of other charts, or nest charts inside one another. Charts can be published to or collated together in a Chart repo.
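
For orientation, a minimal chart directory follows the standard Helm layout, roughly like this:

```text
mychart/
├── Chart.yaml        # chart metadata: name, version, description
├── values.yaml       # default values injected into the templates
└── templates/        # templated Kubernetes manifests
    ├── deployment.yaml
    └── service.yaml
```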

Helm has two major components, the Helm CLI and Tiller Server. The cli helps in managing charts and repos and it interacts with the Tiller server to deploy and manage these charts.

Tiller is a component running on the cluster, talking to the k8s API server to create and manage the actual objects. It also renders the chart to build a release. When the developer runs helm install <chart-name>, the CLI contacts Tiller with the name of the chart, and Tiller will get the chart, compile the templates and deploy it on the cluster.

Helm does not handle your source code. You need to use some sort of CI/CD system to build your image and then use Helm to deploy the correct image.

From zero to k8s


  • k8s cluster
  • helm CLI

Here is an example of deploying a WordPress blog onto a k8s cluster using Helm:

$ helm init
$ helm repo update
$ helm install stable/wordpress
## make new version
$ helm upgrade [release-name] [chart-name]

Use case

  • Packaging: Complex applications (many k8s objects) can be packaged together
  • Reusable chart repo
  • Easy deployments to multiple environments
  • Nesting of charts — dependencies
  • Templates — changing parameters is easy
  • Distribution and reusability
  • Last mile deployment: Continuous delivery
  • Deploy an image that is already built
  • Upgrades and rollbacks of multiple k8s objects together — lifecycle management

More details here.


A CLI-supported framework for extensible Kubernetes configurations

Ksonnet is an alternate way of defining application configuration for Kubernetes. It uses Jsonnet, a JSON templating language, instead of the default yaml files to define k8s manifests. The ksonnet CLI renders the final yaml files and then applies them on the cluster.

It is intended to be used for defining reusable components and incrementally using them to build an application.


Ksonnet: overview

The basic building blocks are called parts which can be mixed and matched to create prototypes. A prototype along with parameters becomes a component and components can be grouped together as an application. An application can be deployed to multiple environments.

The basic workflow is to create an application directory using ks init, auto-generate a manifest (or write your own) for a component using ks generate, deploy this application on a cluster/environment using ks apply <env>. You can manage different environments using ks env command.

In short, Ksonnet helps you define and manage applications as collection of components using Jsonnet and then deploy them on different Kubernetes clusters.

Like Helm, Ksonnet does not handle source code, it is a tool for defining applications for Kubernetes, using Jsonnet.

From zero to k8s


  • k8s cluster
  • ksonnet CLI

Here’s a guestbook example:

$ ks init
$ ks generate deployed-service guestbook-ui \
    --image \
    --type ClusterIP
$ ks apply default
## make changes
$ ks apply default

Use case

  • Flexibility in writing configuration using Jsonnet
  • Packaging: Complex configurations can be built as mixing and matching components
  • Reusable component and prototype library: avoid duplication
  • Easy deployments to multiple environments
  • Last-mile deployment: CD step

More details here.


Cloud native standard library for Containers and Kubernetes

Positioning itself as the standard library for cloud native applications, Metaparticle helps developers to easily adopt proven patterns for distributed system development through primitives via programming language interfaces.

It provides idiomatic language interfaces which help you build systems that can containerize and deploy your application to Kubernetes, develop replicated load balanced services and a lot more. You never define a Dockerfile or a Kubernetes manifest. Everything is handled through idioms native to the programming language that you use.

For example, for a Python web application, you add a decorator called containerize (imported from the metaparticle package) to your main function. When you execute the python code, the docker image is built and deployed on to your kubernetes cluster as per parameters mentioned in the decorator. The default kubectl context is used to connect to cluster. So, switching environments means changing the current context.

Similar primitives are available for NodeJS, Java and .NET, and support for more languages is a work in progress.


The metaparticle library for the corresponding language has the required primitives and bindings for building the code as a docker image, pushing it to a registry, creating k8s yaml files and deploying it onto a cluster.

The Metaparticle Package contains these language idiomatic bindings for building containers. Metaparticle Sync is a library within Metaparticle for synchronization across multiple containers running on different machines.

JavaScript/NodeJS, Python, Java and .NET are supported at the time of writing.

From zero to k8s


  • k8s cluster
  • metaparticle library for the supported language
  • docker
  • docker repository to store images

A python example (only the relevant portion; the decorator shape follows the Metaparticle docs, and the registry name is a placeholder) for building a docker image from the code and deploying it to a k8s cluster:

from metaparticle import containerize

@containerize(
    'docker.io/your-registry',
    options={
        'ports': [8080],
        'replicas': 4,
        'runner': 'metaparticle',
        'name': 'my-image',
        'publish': True
    })
def main():
    Handler = MyHandler
    httpd = SocketServer.TCPServer(("", port), Handler)
    httpd.serve_forever()

Use case

  • Develop applications without worrying about k8s yaml or dockerfile
  • Developers no longer need to master multiple tools and file formats to harness the power of containers and Kubernetes.
  • Quickly develop replicated, load-balanced services
  • Handle synchronization like locking and master election between distributed replicas
  • Easily develop cloud-native patterns like sharded-systems

More details here.


Easy and Repeatable Kubernetes Development

Skaffold handles the workflow of building, pushing and deploying an application to Kubernetes. Like Gitkube, any directory with a dockerfile can be deployed to a k8s cluster with Skaffold.

Skaffold will build the docker image locally, push it to a registry and rollout the deployment using the skaffold CLI tool. It also watches the directory so that rebuilds and redeploys happen whenever the code inside the directory changes. It also streams logs from containers.

The build, push and deploy pipelines are configurable using a yaml file, so that developers can mix and match the tools they like for these steps: e.g. docker build vs. Google Container Builder, kubectl vs. helm for rollout, etc.


Skaffold: overview

The Skaffold CLI does all the work here. It looks at a file called skaffold.yaml which defines what has to be done. A typical example is to build the docker image using the dockerfile in the directory where skaffold dev is run, tag it with its sha256, push the image, set that image in the k8s manifests pointed to in the yaml file, and apply those manifests on the cluster. This runs continuously in a loop, triggering for every change in the directory. Logs from the deployed container are streamed to the same watch window.
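
For illustration, a minimal skaffold.yaml looks roughly like this (the image name and manifest path are placeholders, and the schema varies between Skaffold versions):

```yaml
apiVersion: skaffold/v1alpha2
kind: Config
build:
  artifacts:
  - imageName: gcr.io/your-project/getting-started
deploy:
  kubectl:
    manifests:
    - k8s-pod.yaml
```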

Skaffold is very similar to Draft and Gitkube, but more flexible, as it can manage different build-push-deploy pipelines, like the one shown above.

From zero to k8s


  • k8s cluster
  • skaffold CLI
  • docker
  • docker repository to store images

Here are the steps to deploy an example Go application that prints hello-world:

$ git clone
$ cd examples/getting-started
## edit skaffold.yaml to add docker repo
$ skaffold dev
## open new terminal: edit code

Use case

  • Easy deployments
  • Iterative builds — Continuous build-deploy loop
  • Developing apps on Kubernetes
  • Define build-push-deploy pipelines in CI/CD flow

More details here.

Let me know in the comments if I missed something out or got something wrong. I have not mentioned tools like Ksync and Telepresence, as I plan to write another post about them soon. But, if there are other tools that fit into the category of those mentioned here, please drop a note in the comments.

You can find the discussion on Hacker News here.

We’ve been trying to solve similar problems around deployment with the Hasura platform and our current approach is to:

  1. Bootstrap a monorepo of microservices that contains Dockerfiles and Kubernetes specs
  2. git push to build and deploy all the microservices in one shot

This method helps you get started without knowing about Dockerfiles and Kubernetes (using boilerplates) but keeps them around in case you need access to them. It also has a very simple (git push based) deployment method.

Posted in Information Technology

Container Orchestration


With the ever-growing containerization technology, there is also a growing requirement for container orchestration. Many technology vendors are developing solutions. This page lists some well known technology vendors and their container solutions.

Container Scheduler Solutions

  • Azure ACS (supports DC/OS, Swarm, Kubernetes)
  • CoreOS Fleet
  • Cloud Foundry Diego
  • Docker Swarm
  • Google Container Engine
  • Kubernetes
  • Mesosphere Marathon

See also Kubernetes

Container Source To Image Solutions / CaaS

Integrated Products:

  • Cloud Foundry
  • Openshift
  • Heroku

Kubernetes Ecosystem Cluster Deployment Solutions:

  • Draft
  • Gitkube
  • Helm
  • Ksonnet
  • Metaparticle
  • Skaffold

See also Openshift

Infrastructure as Code

Amazon ECS



See also Helm Kubernetes

  • kompose (Docker Compose for Kubernetes and Openshift)
  • kube-applier (single repo watcher, resource files only, no templates)
  • Helm (recipe repo manager, upstream chart repo)
  • Helmsman (Helm based cluster manager)
  • Armada (Helm based central configuration, including lifecycle hooks)
  • Landscaper (Helm charts based resource definition)
  • Terraform official kubernetes provider (only a few resource types, missing Deployments/Routes/…)
  • Terraform 3rd party kubernetes providers… (FIXME: resource provider)
  • Exekube (projects mapped using terraform+helm)
  • WeaveCloud/flux (SaaS)
  • Mesosphere Maestro (declarative universal operator)


Multi-PaaS IaC Tools

At least supporting Azure, AWS and GCE

  • Terraform
  • Ubuntu Juju (also Openstack, Rackspace, vSphere)
Posted in Devops, Information Technology

Bash Regex Cheat Sheet

Following are a few useful bash regex examples.

Regexp Matching

Use conditions with doubled [[ ]] and the =~ operator. Ensure not to quote the regular expression. Only POSIX ERE is supported (no PCRE). If the regexp has whitespace, put it in a variable first.

if [[ $string =~ ^[0-9]+$ ]]; then
    echo "Is a number"
fi
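
A quick self-contained check of the pattern, with made-up sample inputs:

```shell
# wrap the test in a helper so it is easy to reuse
is_number() { [[ $1 =~ ^[0-9]+$ ]]; }

is_number "42"  && echo "42 is a number"
is_number "4a2" || echo "4a2 is not a number"
```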

Regexp Match Extraction

Variant #1: You can do this with grouping in bash, which works fine with the supported ERE syntax. Note how you need to put the regexp into a variable, because you must not quote it in the if condition!

REGEXP="2013:06:23 ([0-9]+):([0-9]+)"
if [[ $string =~ $REGEXP ]]; then
    echo "Hour ${BASH_REMATCH[1]} Minute ${BASH_REMATCH[2]}"
fi
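
With a made-up sample string, the extraction becomes a self-contained snippet:

```shell
string="2013:06:23 14:45"
REGEXP="2013:06:23 ([0-9]+):([0-9]+)"
if [[ $string =~ $REGEXP ]]; then
    # BASH_REMATCH holds the capture groups after a successful =~ match
    result="Hour ${BASH_REMATCH[1]} Minute ${BASH_REMATCH[2]}"
    echo "$result"   # Hour 14 Minute 45
fi
```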

Variant #2: Actually, using “expr” can be much simpler, especially when only one value is to be extracted:

hour=$(expr match "$string" '2013:06:23 \([0-9]\+\)')

Validate IPs

If you need to validate an IP try the following function

function validate_ip {
        local net=$1
        [[ $net =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/[0-9]{1,2}$ ]] || return 1
        [[ ${net#*/} -le 32 ]] || return 1
        local ip=${net%/*}
        local -a oc=(${ip//\./ })
        [[ ${oc[0]} -le 255 && ${oc[1]} -le 255 && ${oc[2]} -le 255 && ${oc[3]} -le 255 ]] || return 1
        return 0
}
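
To sanity-check the function, here it is again verbatim in a self-contained snippet, together with a few example inputs:

```shell
function validate_ip {
        local net=$1
        [[ $net =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/[0-9]{1,2}$ ]] || return 1
        [[ ${net#*/} -le 32 ]] || return 1
        local ip=${net%/*}
        local -a oc=(${ip//\./ })
        [[ ${oc[0]} -le 255 && ${oc[1]} -le 255 && ${oc[2]} -le 255 && ${oc[3]} -le 255 ]] || return 1
        return 0
}

validate_ip "192.168.1.0/24" && echo "valid"
validate_ip "300.168.1.0/24" || echo "invalid: octet > 255"
validate_ip "10.0.0.0/40"    || echo "invalid: prefix > 32"
```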


Posted in Devops, Information Technology

Shell-Scripting Cheat Sheet

Date Handling

Convert Date To Unix Timestamp

date -d "$date" +%s

Note that this only works for American-style dates. European dates like “25.06.2014” are not supported. The simple solution is to convert them first to “2014-06-25”, for example with:

sed 's/\([0-9]*\)\.\([0-9]*\)\.\([0-9]*\)/\3-\2-\1/'
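
Putting both steps together (GNU date; -u added here so the result does not depend on the local timezone):

```shell
d="25.06.2014"
# rewrite DD.MM.YYYY into YYYY-MM-DD
iso=$(echo "$d" | sed 's/\([0-9]*\)\.\([0-9]*\)\.\([0-9]*\)/\3-\2-\1/')
echo "$iso"   # 2014-06-25
ts=$(date -u -d "$iso" +%s)
echo "$ts"    # 1403654400
```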

Convert From Unix Timestamp

date -d "1970-01-01 1234567890 sec GMT"

Calculate Last Day of Month

cal $(date "+%m %Y") | grep -v ^$ | tail -1 | sed 's/^.* \([0-9]*\)$/\1/'
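
An alternative that avoids parsing cal output is date arithmetic (GNU date): jump to the first of the next month and go back one day:

```shell
# last day of the current month, e.g. 28, 29, 30 or 31
last_day=$(date -d "$(date +%Y-%m-01) +1 month -1 day" +%d)
echo "$last_day"
```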

Lock Files

Using “flock”:

flock /tmp/myapp.lock <some command>
flock -w 10 /tmp/myapp.lock <some command>

Using “lockfile-*” commands:

lockfile-create /tmp/myapp.lock
lockfile-touch  /tmp/myapp.lock
lockfile-remove /tmp/myapp.lock

Parameter Handling


getopt is a standalone command, supporting GNU-style long parameters and parameters mixed with options. It can be used like this:

PARAMS=`getopt -o a::bc: --long arga::,argb,argc: -n '' -- "$@"`
eval set -- "$PARAMS"

while true ; do
    case "$1" in
        -a|--arga)
            case "$2" in
                "") ARG_A='some default value' ; shift 2 ;;
                *) ARG_A=$2 ; shift 2 ;;
            esac ;;
        -b|--argb) ARG_B=1 ; shift ;;
        -c|--argc)
            case "$2" in
                "") shift 2 ;;
                *) ARG_C=$2 ; shift 2 ;;
            esac ;;
        --) shift ; break ;;
        *) echo "Unknown option!" ; exit 1 ;;
    esac
done


getopts is a shell builtin and can be used like this:

while getopts ":ap:" opt; do
  case $opt in
    a)
      echo "Option -a is set"
      ;;
    p)
      echo "Parameter -p is given with value '$OPTARG'"
      ;;
    \?)
      echo "Unknown option: -$OPTARG"
      ;;
  esac
done

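Wrapped in a function, the loop above can be tried out directly (the option values here are made up):

```shell
parse() {
    local OPTIND opt    # reset OPTIND so the function can be called repeatedly
    while getopts ":ap:" opt; do
        case $opt in
            a)  echo "Option -a is set" ;;
            p)  echo "Parameter -p is given with value '$OPTARG'" ;;
            \?) echo "Unknown option: -$OPTARG" ;;
        esac
    done
}

parse -a -p 42 -z
# Option -a is set
# Parameter -p is given with value '42'
# Unknown option: -z
```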
shflags – portable getopts

If you ever need to port between different Unix derivatives, use shflags, a Google library providing standard parameter handling. Example:

source shflags

DEFINE_string 'value' '0' 'an example value to pass with default value "0"' 'v'

FLAGS "$@" || exit $?
eval set -- "${FLAGS_ARGV}"

echo "${FLAGS_value}!"

Posted in Information Technology

NFS Cheat Sheet

NFS Shares

Update Exports

After editing /etc/exports run

exportfs -a
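
For reference, a minimal /etc/exports line could look like this (path, network, and options are only an example):

```
/export/home 192.168.1.0/24(rw,sync,no_subtree_check)
```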

List Exports

# showmount -e
Export list for myserver:

Show Clients

On the NFS server run ‘showmount’ to see mounting clients

# showmount 
Hosts on myserver:

List Protocols/Services

To list local services run:

# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  48555  status
    100024    1   tcp  49225  status
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    2   tcp   2049
    100227    3   tcp   2049
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    2   udp   2049
    100227    3   udp   2049
    100021    1   udp  51841  nlockmgr
    100021    3   udp  51841  nlockmgr
    100021    4   udp  51841  nlockmgr
    100021    1   tcp  37319  nlockmgr
    100021    3   tcp  37319  nlockmgr
    100021    4   tcp  37319  nlockmgr
    100005    1   udp  57376  mountd
    100005    1   tcp  37565  mountd
    100005    2   udp  36255  mountd
    100005    2   tcp  36682  mountd
    100005    3   udp  54897  mountd
    100005    3   tcp  51122  mountd

Above output is from an NFS server. You can also run it for remote servers by passing an IP. NFS clients usually just run status and portmapper:

# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  44152  status
    100024    1   tcp  53182  status


Mounting NFSv4 Shares

The difference in mounting is that you need to provide “nfs4” and transport and port options like this:

mount -t nfs4 -o proto=tcp,port=2049 server:/export/home /mnt

Ensure Running Id Mapper

When using NFSv4 shares ensure the id mapper is running on all clients. On Debian you need to explicitly start it:

service idmapd start

Mapping Users

You might want to set useful NFSv4 default mappings and some explicit mappings for unknown users:

# cat /etc/idmapd.conf
[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup

[Static]
someuser@otherserver = localuser


Tuning NFS Clients

When optimizing for performance try the following client mount option changes:

  • Use “hard” instead of “soft”
  • Add “intr” to allow for dead server and killable client programs
  • Increase “mtu” to maximum
  • Increase “rsize” and “wsize” to maximum supported by clients and server
  • Remove “sync”

After changing and remounting check for effective options using “nfsstat -m” which will give you a list like this:

$ nfsstat -m
/data from
 Flags: rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=,local_lock=none,addr=

When synchronous shares are important try the “noac” mount option.

Tuning NFS Server

For the exported filesystem mount options:

  • Use “noatime”
  • Use “async” if you can (risk of data corruption)
  • Use “no_subtree_check”

Other than that:

  • Use CFQ I/O scheduler
  • Increase /sys/block/sda/queue/max_sectors_kb
  • Check /sys/block/sda/queue/read_ahead_kb
  • Increase number of nfsd threads

Getting NFS Statistics

Use “nfsstat” for detailed NFS statistics! The options are “-c” for client and “-s” for server statistics. On the server caching statistics are most interesting,

# nfsstat -o rc
Server reply cache:
hits       misses     nocache
0          63619      885550  

on the client probably errors and retries. Also note that you can get live per-interval results when running with “--sleep=<interval>”. For example

# nfsstat -o fh --sleep=2
Posted in Information Technology

Postgres Cheat Sheet


Using Regular Expressions

You can edit columns using regular expressions with regexp_replace():

UPDATE table SET field=regexp_replace(field, 'match pattern', 'replace string', 'g');

JSON Output

You can get Postgres to output JSON like this:

SELECT row_to_json(<record or table alias>) FROM ...

Analyze Queries

EXPLAIN ANALYZE <sql statement>;

Inspect an Installation

List Postgres Clusters

Under Debian use the pg_wrapper command:

pg_lsclusters

List Postgres Settings

SHOW ALL;

List Databases and Sizes

SELECT pg_database.datname, pg_size_pretty(pg_database_size(pg_database.datname)) AS size FROM pg_database;

Show Running Queries in Postgres

SELECT * FROM pg_stat_activity;

Show Blocking Locks

SELECT bl.pid AS blocked_pid, a.usename AS blocked_user, kl.pid AS blocking_pid, ka.usename AS blocking_user FROM pg_catalog.pg_locks bl JOIN pg_catalog.pg_stat_activity a ON bl.pid = a.procpid JOIN pg_catalog.pg_locks kl JOIN pg_catalog.pg_stat_activity ka ON kl.pid = ka.procpid ON bl.transactionid = kl.transactionid AND bl.pid != kl.pid WHERE NOT bl.granted;

Show Table Usage

If you want to know accesses or I/O per table or index you can use the pg_stat_*_tables and pg_statio_*_tables relations. For example:

SELECT * FROM pg_statio_user_tables;

to show the I/O caused by your relations. Or for the number of accesses and scan types and tuples fetched:

SELECT * FROM pg_stat_user_tables;


Changing Live Settings

ALTER SYSTEM SET <setting> TO <value>;
SELECT pg_reload_conf();

Kill Postgres Query

First find the query and its PID:

SELECT procpid, current_query FROM pg_stat_activity;

And then kill the PID on the Unix shell. Or use

SELECT pg_terminate_backend('12345');

Kill all Connections to a DB

The following was suggested here. Replace “TARGET_DB” with the name of the database whose connections should be killed.

SELECT pg_terminate_backend(pg_stat_activity.procpid) FROM pg_stat_activity WHERE pg_stat_activity.datname = 'TARGET_DB';

Checking Replication

Compared to MySQL, checking for replication delay is rather hard. It is usually best to script this or use ready-made monitoring tools (e.g. a Nagios Postgres check). Still, it can be done manually by running this command on the master:

SELECT pg_current_xlog_location();

and those two commands on the slave:

SELECT pg_last_xlog_receive_location();
SELECT pg_last_xlog_replay_location();

The first query gives you the most recent xlog position on the master, while the other two queries give you the most recently received xlog and the replay position in this xlog on the slave. A Nagios check plugin could look like this:


#!/bin/bash

# Checks master and slave xlog difference...
# Pass slave IP/host via $1

PSQL="psql -A -t "

# Get master status
master=$(echo "SELECT pg_current_xlog_location();" | $PSQL)

# Get slave receive location
slave=$(echo "select pg_last_xlog_replay_location();" | $PSQL -h$1)

master=$(echo "$master" | sed "s/\/.*//")
slave=$(echo "$slave" | sed "s/\/.*//")

master_dec=$(echo "ibase=16; $master" | bc)
slave_dec=$(echo "ibase=16; $slave" | bc)
diff=$(expr $master_dec - $slave_dec)

if [ "$diff" == "" ]; then
    echo "Failed to retrieve replication info!"
    exit 3
fi

# Choose some good thresholds here...
status=0
if [ $diff -gt 3 ]; then
    status=1    # WARNING
fi
if [ $diff -gt 5 ]; then
    status=2    # CRITICAL
fi

echo "Master at $master, Slave at $slave, difference: $diff"
exit $status

Postgres Backup Mode

To be able to copy Postgres files e.g. to a slave or a backup you need to put the server into backup mode.

SELECT pg_start_backup('label', true);
SELECT pg_stop_backup();

Read more: Postgres – Set Backup Mode

Pooling / Failover / LB

There are two connection pooling solutions for Postgres, both providing read traffic load balancing and HA for read-only slaves:

  • pgpool-II
  • PgBouncer
Additionally there is repmgr which manages and monitors replication and has automatic slave promotion on master failure.

Log Analysis

Further Reading



The must-have reading for Postgres is surely this book:

Posted in Devops, Information Technology

Redis Cheat Sheet

When you encounter a Redis instance and you quickly want to learn about the setup, you just need a few simple commands to peek into it. Of course it doesn’t hurt to look at the official full command documentation, but below is a listing just for sysadmins.

Accessing Redis


First thing to know is that you can use “telnet” (usually on Redis default port 6379)

telnet localhost 6379

or the Redis CLI client

redis-cli

to connect to Redis. The advantage of redis-cli is that you have a help interface and command line history.

CLI Queries

Here is a short list of some basic data extraction commands:

| Type                                 | Syntax and Explanation               |
| Tracing                              | Watch current live commands. Use     |
|                                      | this with care on production. Cancel |
|                                      | with Ctrl-C.                         |
|                                      |     monitor                          |
| Slow Queries                         |     slowlog get 25      # print top  |
|                                      | 25 slow queries                      |
|                                      |     slowlog len                      |
|                                      |     slowlog reset                    |
| Search Keys                          |     keys pattern        # Find key m |
|                                      | atching exactly                      |
|                                      |     keys pattern*       # Find keys  |
|                                      | matching in front                    |
|                                      |     keys *pattern       # Find keys  |
|                                      | matching in back                     |
|                                      |     keys *pattern*      # Find keys  |
|                                      | matching somewhere                   |
|                                      |                                      |
|                                      | On production servers use "KEYS"     |
|                                      | with care as it causes a full scan   |
|                                      | of all keys!                         |
| Generic                              |     del <key>                        |
|                                      |     dump <key>       # Serialize key |
|                                      |     exists <key>                     |
|                                      |     expire <key> <seconds>           |
| Scalars                              |     get <key>                        |
|                                      |     set <key> <value>                |
|                                      |     setnx <key> <value>   # Set key  |
|                                      | value only if key does not exist     |
|                                      |                                      |
|                                      | Batch commands:                      |
|                                      |     mget <key> <key> ...             |
|                                      |     mset <key> <value> <key> <value> |
|                                      |  ...                                 |
|                                      |                                      |
|                                      | Counter commands:                    |
|                                      |     incr <key>                       |
|                                      |     decr <key>                       |
| Lists                                |     lrange <key> <start> <stop>      |
|                                      |     lrange mylist 0 -1      # Get al |
|                                      | l of a list                          |
|                                      |     lindex mylist 5         # Get by |
|                                      |  index                               |
|                                      |     llen mylist         # Get length |
|                                      |                                      |
|                                      |     lpush mylist "value"             |
|                                      |     lpush mylist 5                   |
|                                      |     rpush mylist "value"             |
|                                      |                                      |
|                                      |     lpushx mylist 6         # Only p |
|                                      | ush if mylist exists                 |
|                                      |     rpushx mylist 0                  |
|                                      |                                      |
|                                      |     lpop mylist                      |
|                                      |     rpop mylist                      |
|                                      |                                      |
|                                      |     lrem mylist 1 "value"       # Re |
|                                      | move 'value' count times             |
|                                      |     lset mylist 2 6         # mylist |
|                                      | [2] = 6                              |
|                                      |     ltrim <key> <start> <stop>       |
| Hashes                               |     hexists myhash field1       # Ch |
|                                      | eck if hash key exists               |
|                                      |                                      |
|                                      |     hget myhash field1               |
|                                      |     hdel myhash field2               |
|                                      |     hset myhash field1 "value"       |
|                                      |     hsetnx myhash field1 "value"     |
|                                      |                                      |
|                                      |     hgetall myhash                   |
|                                      |     hkeys myhash                     |
|                                      |     hlen myhash                      |
|                                      |                                      |
|                                      | Batch commands:                      |
|                                      |     hmget <key> <key> ...            |
|                                      |     hmset <key> <value> <key> <value |
|                                      | > ...                                |
|                                      |                                      |
|                                      | Counter commands                     |
|                                      |     hincrby myhash field1 1          |
|                                      |     hincrby myhash field1 5          |
|                                      |     hincrby myhash field1 -1         |
|                                      |                                      |
|                                      |     hincrbyfloat myhash field2 1.123 |
|                                      | 445                                  |
| Sets                                 | FIXME                                |
| Sorted Sets                          | FIXME                                |

CLI Scripting

For scripting just pass commands to “redis-cli”. For example:

$ redis-cli INFO | grep connected

Server Statistics

The statistics command is “INFO”, which will give you output like the following.

$ redis-cli INFO
connected_clients:2         # <---- connection count
used_memory_human:6.29M         # <---- memory usage
role:master             # <---- master/slave in replication setup

Changing Runtime Configuration

The command

CONFIG GET *

gives you a list of all active configuration variables you can change. The output might look like this:

redis> CONFIG GET *
 1) "dir"
 2) "/var/lib/redis"
 3) "dbfilename"
 4) "dump.rdb"
 5) "requirepass"
 6) (nil)
 7) "masterauth"
 8) (nil)
 9) "maxmemory"
10) "0"
11) "maxmemory-policy"
12) "volatile-lru"
13) "maxmemory-samples"
14) "3"
15) "timeout"
16) "300"
17) "appendonly"
18) "no"
19) "no-appendfsync-on-rewrite"
20) "no"
21) "appendfsync"
22) "everysec"              # <---- how often fsync() is called
23) "save"
24) "900 1 300 10 60 10000"     # <---- how often Redis dumps in background
25) "slave-serve-stale-data"
26) "yes"
27) "hash-max-zipmap-entries"
28) "512"
29) "hash-max-zipmap-value"
30) "64"
31) "list-max-ziplist-entries"
32) "512"
33) "list-max-ziplist-value"
34) "64"
35) "set-max-intset-entries"
36) "512"
37) "slowlog-log-slower-than"
38) "10000"
39) "slowlog-max-len"
40) "64"

Note that keys and values are alternating and you can change each key by issuing a “CONFIG SET” command like:

CONFIG SET timeout 900

Such a change will be effective instantly. When changing values consider also updating the redis configuration file.


Multiple Databases

Redis has a concept of separated namespaces called “databases”. You can select the database number you want to use with “SELECT”. By default the database with index 0 is used. So issuing

redis> SELECT 1

switches to the second database. Note how the prompt changed and now has a “[1]” to indicate the database selection. To find out how many databases there are you might want to run redis-cli from the shell:

$ redis-cli INFO | grep ^db

Dropping Databases

To drop the currently selected database run

FLUSHDB

to drop all databases at once run

FLUSHALL


Checking for Replication

To see if the instance is a replication slave or master issue

redis> INFO

and watch for the “role” line which shows either “master” or “slave”. Starting with version 2.8 the “INFO” command also gives you per-slave replication status.


Setting up Replication

If you quickly need to set up replication just issue

SLAVEOF <IP> <port>

on a machine that you want to become a slave of the given IP. It will immediately get values from the master. Note that this instance will still be writable. If you want it to be read-only, change the redis config file (only available in recent versions, e.g. not on Debian). To revert the slave setting run

SLAVEOF NO ONE

Performance Testing


Install the Redis tools and run the provided benchmarking tool

redis-benchmark -h <host> [-p <port>]

If you are migrating from/to memcached protocol check out how to run the same benchmark for any key value store with memcached protocol.

Debugging Latency

First measure system latency on your Redis server with

redis-cli --intrinsic-latency 100

and then sample from your Redis clients with

redis-cli --latency -h <host> -p <port>

If you have problems with high latency check if transparent huge pages are disabled. Disable it with

echo never > /sys/kernel/mm/transparent_hugepage/enabled

Dump Database Backup

As Redis allows RDB database dumps in background, you can issue a dump at any time. Just run:

BGSAVE

When running this command Redis will fork and the new process will dump into the “dbfilename” configured in the Redis configuration without the original process being blocked. Of course the fork itself might cause an interruption. Use “LASTSAVE” to check when the dump file was last updated. For a simple backup solution just backup the dump file. If you need a synchronous save run “SAVE” instead of “BGSAVE”.

Listing Connections

Starting with version 2.4 you can list connections with

CLIENT LIST

and you can terminate connections with

CLIENT KILL <ip>:<port>

Monitoring Traffic

Probably the most useful command (compared to memcached, where you need to trace network traffic) is the “MONITOR” command, which dumps incoming commands in real time.

redis> MONITOR
1371241093.375324 "monitor"
1371241109.735725 "keys" "*"
1371241152.344504 "set" "testkey" "1"
1371241165.169184 "get" "testkey"

additionally use “SLOWLOG” to track the slowest queries in an interval. For example

slowlog reset
# wait for some time
slowlog get 25

and get the 25 slowest commands during this time.

Sharding with proxies

There are two major proxy solutions

  • Twemproxy (aka nutcracker, by Twitter)
  • Codis
Posted in Devops, Information Technology

Java Cheat Sheet

Heapsize calculation

You can print the effective heap size and RAM settings by using -XX:+PrintFlagsFinal. Below is an example for an 8GB host, of which Java by default uses 1/4 (MaxRAMFraction=4), i.e. 2GB:

java -XX:+PrintFlagsFinal $MY_PARAMS -version | grep -Ei "maxheapsize|maxram"
  size_t MaxHeapSize                              = 2061500416                                {product} {ergonomic}
uint64_t MaxRAM                                   = 137438953472                           {pd product} {default}
   uintx MaxRAMFraction                           = 4                                         {product} {default}
  double MaxRAMPercentage                         = 25.000000                                 {product} {default}

Java RAM and containers

When running Java in containers you need to ensure Java sees the real amount of RAM it has available. Before Java 11 it usually sees the total amount of RAM available to the host system, and basing usage on this amount often causes OOM kills.

| Java Version        | Solution                                                          |
| Java <8u131         | Calculate correct memory size and set using -Xms/-Xmx             |
| Java 8u131+, Java 9 | -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap |
| Java 10+            | -XX:+UseContainerSupport -XX:MaxRAMPercentage                     |

JDBC Problems

Oracle JDBC hanging and timing out when run on VMs: this can indicate missing entropy. Workaround (note that this reduces security!): use urandom as RNG by adding the following JVM option:

-Djava.security.egd=file:/dev/./urandom

Check if you are using Oracle JDK (and need a valid license)

pgrep -fl java | grep -q "+UnlockCommercialFeatures"

Default Keystore Location

readlink -e $(dirname $(readlink -e $(which keytool)))/../lib/security/cacerts

JMX Remote JConsole via SSH tunnel

Enable JMX and JConsole:

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=3333
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false

And connect jconsole to remote localhost:3333 e.g. via SSH port forwarding

ssh targethost -L 3334:localhost:3334 -f -N
ssh targethost -L 3333:localhost:3333 -f -N

Setting defaults from environment

When you want to merge user passed settings with some defaults use JAVA_TOOL_OPTIONS. Options from the JVM CLI overrule any options also specified in JAVA_TOOL_OPTIONS.
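
A minimal sketch (the option values here are just illustrative):

```shell
# Any JVM started in this environment picks these options up and logs
# "Picked up JAVA_TOOL_OPTIONS: ..." at startup; options passed explicitly
# on the java command line (e.g. -Xmx2g) take precedence over these defaults.
export JAVA_TOOL_OPTIONS="-Xmx512m -Dfile.encoding=UTF-8"
echo "$JAVA_TOOL_OPTIONS"
```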