Posted in Software Architecture

Kubernetes platform

Hosted Solutions
With Hosted Solutions, the software is completely managed by the provider, and the user pays hosting and management charges. Some of the vendors providing hosted solutions for Kubernetes are:

Google Kubernetes Engine (GKE)

Azure Kubernetes Service (AKS)

Amazon Elastic Container Service for Kubernetes (EKS)

DigitalOcean Kubernetes

OpenShift Dedicated


IBM Cloud Kubernetes Service.

Turnkey Cloud Solutions
Turnkey Cloud Solutions let you install Kubernetes with just a few commands on an underlying IaaS platform. Below are a few of them:

Google Compute Engine (GCE)

Amazon AWS (AWS EC2)

Microsoft Azure (AKS).

Turnkey On-Premise Solutions
The On-Premise Solutions install Kubernetes on secure internal private clouds with just a few commands:

GKE On-Prem by Google Cloud

IBM Cloud Private

OpenShift Container Platform by Red Hat.

Posted in Devops, Information Technology, microservices

Container Orchestrator

With enterprises containerizing their applications and moving them to the cloud, there is a growing demand for container orchestration solutions. While many solutions are available, some are mere re-distributions of well-established container orchestration tools, enriched with extra features and, sometimes, subject to certain limitations in flexibility.

Although not exhaustive, the list below provides a few different container orchestration tools and services available today:

Posted in Information Technology

Encryption/Decryption, Digital Certificates, and Public/Private Keys

A common question in the Information Technology world is: what is the relationship between public/private keys and encryption/decryption? How do we use a public/private key pair to encrypt and decrypt data? How do we use it to create a digital signature?
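The public/private key relationship behind these questions can be illustrated with textbook RSA. The sketch below is a toy with tiny primes for readability, not real cryptography (real keys are 2048+ bits and use padding such as OAEP):

```python
# Toy textbook RSA to show how the two keys relate.
# Classic example parameters; requires Python 3.8+ for pow(e, -1, phi).

p, q = 61, 53            # two secret primes
n = p * q                # modulus, shared by both keys: 3233
phi = (p - 1) * (q - 1)  # 3120

e = 17                   # public exponent; (e, n) is the public key
d = pow(e, -1, phi)      # private exponent: 2753; (d, n) is the private key

message = 65                       # a message encoded as a number < n
ciphertext = pow(message, e, n)    # encrypt with the PUBLIC key -> 2790
plaintext = pow(ciphertext, d, n)  # decrypt with the PRIVATE key
assert plaintext == message        # only the private-key holder can decrypt
```

Anyone can encrypt with the public key, but only the holder of the matching private key can reverse the operation; that asymmetry is the basis of everything a Digital Certificate is used for.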

Real world applications for Digital Certificates

So far we have briefly illustrated the theory behind the Digital Certificate and its role in the deliverance of PKI. The following pages now look at the practicalities of using a Digital Certificate, where to find them on your PC, and what they actually look like.

Using Digital Certificates to deliver the 5 primary security functions

Identification / Authentication:

The CA attests to the identity of the Certificate applicant when it signs the Digital Certificate.


Privacy / Confidentiality:

The Public Key within the Digital Certificate is used to encrypt data to ensure that only the intended recipient can decrypt and read it.


Integrity:

By digitally signing the message or data, the recipient has a means of detecting any tampering with the signed message or data.


Non-Repudiation:

A signed message proves its origin, as only the sender has access to the Private Key used to sign the data.

Access Control:

Access Control may be achieved through use of the Digital Certificate for identification (and hence the replacement of passwords etc). Additionally, as data can be encrypted for specific individuals, we can ensure that only the intended individuals gain access to the information within the encrypted data.
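Signing uses the key pair in the opposite direction from encryption: the private key creates the signature and the public key verifies it. Below is a hedged sketch using the same toy textbook-RSA parameters as above (tiny primes, no padding; never use this for real signatures):

```python
import hashlib

# Toy RSA signature scheme: sign with the PRIVATE key, verify with the PUBLIC key.
p, q = 61, 53
n = p * q
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def sign(message: bytes) -> int:
    # Hash the message, reduce it into the key's range, then apply the private key.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)         # only the private-key holder can compute this

def verify(message: bytes, signature: int) -> bool:
    # Anyone with the public key can recompute the digest and check the signature.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"pay Alice 10 EUR")
assert verify(b"pay Alice 10 EUR", sig)   # origin and integrity confirmed
# A tampered message would fail verification (with overwhelming probability
# at real key sizes), which is what delivers integrity and non-repudiation.
```

The CA's signature on a Digital Certificate works the same way: the CA signs the certificate contents with its private key, and anyone holding the CA's public key can verify that attestation.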

Posted in microservices, Software Architecture

What is a service mesh, really?


Figure 1: Service mesh overview

Figure 1 illustrates the service mesh concept at its most basic level. There are four service clusters (A-D). Each service instance is colocated with a sidecar network proxy. All network traffic (HTTP, REST, gRPC, Redis, etc.) from an individual service instance flows via its local sidecar proxy to the appropriate destination. Thus, the service instance is not aware of the network at large and only knows about its local proxy. In effect, the distributed system network has been abstracted away from the service programmer.

The data plane

In a service mesh, the sidecar proxy performs the following tasks:

  • Service discovery: What are all of the upstream/backend service instances that are available?
  • Health checking: Are the upstream service instances returned by service discovery healthy and ready to accept network traffic? This may include both active (e.g., out-of-band pings to a /healthcheck endpoint) and passive (e.g., using 3 consecutive 5xx as an indication of an unhealthy state) health checking.
  • Routing: Given a REST request for /foo from the local service instance, to which upstream service cluster should the request be sent?
  • Load balancing: Once an upstream service cluster has been selected during routing, to which upstream service instance should the request be sent? With what timeout? With what circuit breaking settings? If the request fails should it be retried?
  • Authentication and authorization: For incoming requests, can the caller be cryptographically attested using mTLS or some other mechanism? If attested, is the caller allowed to invoke the requested endpoint or should an unauthenticated response be returned?
  • Observability: For each request, detailed statistics, logging, and distributed tracing data should be generated so that operators can understand distributed traffic flow and debug problems as they occur.

All of the previous items are the responsibility of the service mesh data plane. In effect, the sidecar proxy is the data plane. Said another way, the data plane is responsible for conditionally translating, forwarding, and observing every network packet that flows to and from a service instance.
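The data plane responsibilities above can be sketched as a toy in-process model. All names here (Sidecar, Instance, route_table, etc.) are illustrative, not any real proxy's API; it shows routing by path prefix, round-robin load balancing over healthy instances, and passive health checking via consecutive 5xx responses:

```python
import itertools

UNHEALTHY_AFTER = 3  # e.g. 3 consecutive 5xx marks an instance unhealthy

class Instance:
    def __init__(self, addr):
        self.addr = addr
        self.consecutive_5xx = 0

    @property
    def healthy(self):
        return self.consecutive_5xx < UNHEALTHY_AFTER

    def observe(self, status_code):
        # Passive health checking: count consecutive server errors.
        if 500 <= status_code < 600:
            self.consecutive_5xx += 1
        else:
            self.consecutive_5xx = 0

class Sidecar:
    def __init__(self, route_table, clusters):
        self.route_table = route_table   # path prefix -> upstream cluster name
        self.clusters = clusters         # cluster name -> [Instance]
        self._rr = {name: itertools.count() for name in clusters}

    def pick_upstream(self, path):
        # Routing: longest-prefix match decides the upstream cluster.
        cluster = next(c for prefix, c in
                       sorted(self.route_table.items(), key=lambda kv: -len(kv[0]))
                       if path.startswith(prefix))
        # Load balancing: round-robin over the healthy instances only.
        candidates = [i for i in self.clusters[cluster] if i.healthy]
        return candidates[next(self._rr[cluster]) % len(candidates)]

sidecar = Sidecar(route_table={"/foo": "service-B", "/": "service-C"},
                  clusters={"service-B": [Instance("10.0.0.1"), Instance("10.0.0.2")],
                            "service-C": [Instance("10.0.1.1")]})

a = sidecar.pick_upstream("/foo/bar")  # routed to service-B
b = sidecar.pick_upstream("/foo/bar")  # round-robin picks the other instance
assert {a.addr, b.addr} == {"10.0.0.1", "10.0.0.2"}
```

A real proxy such as Envoy does all of this per request, plus timeouts, retries, circuit breaking, mTLS, and telemetry, but the shape of the decisions is the same.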

The control plane

The network abstraction that the sidecar proxy data plane provides is magical. However, how does the proxy actually know to route /foo to service B? How is the service discovery data that the proxy queries populated? How are the load balancing, timeout, circuit breaking, etc. settings specified? How are deploys accomplished using blue/green or gradual traffic shifting semantics? Who configures systemwide authentication and authorization settings?

All of the above items are the responsibility of the service mesh control plane. The control plane takes a set of isolated stateless sidecar proxies and turns them into a distributed system.

The reason that I think many technologists find the split concepts of data plane and control plane confusing is that for most people the data plane is familiar while the control plane is foreign. We’ve been around physical network routers and switches for a long time. We understand that packets/requests need to go from point A to point B and that we can use hardware and software to make that happen. The new breed of software proxies are just really fancy versions of tools we have been using for a long time.

Figure 2: Human control plane

However, we have also been using control planes for a long time, though most network operators might not associate that portion of the system with a piece of technology. The reason for this is simple — most control planes in use today are… us.

Figure 2 shows what I call the “human control plane.” In this type of deployment (which is still extremely common), a (likely grumpy) human operator crafts static configurations — potentially with the aid of some scripting tools — and deploys them using some type of bespoke process to all of the proxies. The proxies then consume the configuration and proceed with data plane processing using the updated settings.

Figure 3: Advanced service mesh control plane

Figure 3 shows an “advanced” service mesh control plane. It is composed of the following pieces:

  • The human: There is still a (hopefully less grumpy) human in the loop making high level decisions about the overall system.
  • Control plane UI: The human interacts with some type of UI to control the system. This might be a web portal, a CLI, or some other interface. Through the UI, the operator has access to global system configuration settings such as deploy control (blue/green and/or traffic shifting), authentication and authorization settings, route table specification (e.g., when service A requests /foo what happens), and load balancer settings (e.g., timeouts, retries, circuit breakers, etc.).
  • Workload scheduler: Services are run on an infrastructure via some type of scheduling system (e.g., Kubernetes or Nomad). The scheduler is responsible for bootstrapping a service along with its sidecar proxy.
  • Service discovery: As the scheduler starts and stops service instances it reports liveness state into a service discovery system.
  • Sidecar proxy configuration APIs: The sidecar proxies dynamically fetch state from various system components in an eventually consistent way without operator involvement. The entire system composed of all currently running service instances and sidecar proxies eventually converge. Envoy’s universal data plane API is one such example of how this works in practice.

Ultimately, the goal of a control plane is to set policy that will eventually be enacted by the data plane. More advanced control planes will abstract more of the system from the operator and require less handholding (assuming they are working correctly!).
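The control-plane/data-plane split can be sketched in a few lines: the control plane only stores desired policy, and each proxy pulls it and converges independently. The names below (ControlPlane, Proxy, poll) are illustrative, not any real mesh API:

```python
# Minimal model of an eventually consistent control plane.
# The control plane never touches request traffic; it only versions policy.

class ControlPlane:
    def __init__(self):
        self.version = 0
        self.config = {}        # e.g. routes, timeouts, retry budgets

    def set_policy(self, **settings):
        # Operator intent enters here (via a UI/CLI in a real system).
        self.config.update(settings)
        self.version += 1

class Proxy:
    def __init__(self):
        self.version = -1
        self.config = {}

    def poll(self, cp):
        # Each data plane instance fetches config when it can; staleness
        # between polls is expected and tolerated.
        if self.version != cp.version:
            self.config = dict(cp.config)
            self.version = cp.version

cp = ControlPlane()
proxies = [Proxy() for _ in range(3)]

cp.set_policy(route={"/foo": "service-B"}, timeout_ms=250, retries=2)
proxies[0].poll(cp)                    # only one proxy has polled so far
assert proxies[0].config["retries"] == 2
assert proxies[1].config == {}         # temporarily stale, which is allowed

for p in proxies:                      # once everyone polls, the mesh converges
    p.poll(cp)
assert all(p.version == cp.version for p in proxies)
```

This is the essence of eventual consistency in a mesh: no proxy blocks on the control plane, and the system as a whole converges on the operator's declared policy.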

Data plane vs. control plane summary

  • Service mesh data plane: Touches every packet/request in the system. Responsible for service discovery, health checking, routing, load balancing, authentication/authorization, and observability.
  • Service mesh control plane: Provides policy and configuration for all of the running data planes in the mesh. Does not touch any packets/requests in the system. The control plane turns all of the data planes into a distributed system.

Current project landscape

With the above explanation out of the way, let’s take a look at the current service mesh landscape.

Instead of doing an in-depth analysis of each solution, I’m going to briefly touch on some of the points that I think are causing the majority of the ecosystem confusion right now.

Linkerd was one of the first service mesh data plane proxies on the scene in early 2016 and has done a fantastic job of increasing awareness and excitement around the service mesh design pattern. Envoy followed about 6 months later (though was in production at Lyft since late 2015). Linkerd and Envoy are the two projects that are most commonly mentioned when discussing “service meshes.”

Istio was announced in May 2017. The project goals of Istio look very much like the advanced control plane illustrated in figure 3. The default proxy of Istio is Envoy. Thus, Istio is the control plane and Envoy is the data plane. In a short time, Istio has garnered a lot of excitement, and other data planes have begun integrations as a replacement for Envoy (both Linkerd and NGINX have demonstrated Istio integration). The fact that it’s possible for a single control plane to use different data planes means that the control plane and data plane are not necessarily tightly coupled. An API such as Envoy’s universal data plane API can form a bridge between the two pieces of the system.

Nelson and SmartStack help further illustrate the control plane vs. data plane divide. Nelson uses Envoy as its proxy and builds a robust service mesh control plane around the HashiCorp stack (i.e. Nomad, etc.). SmartStack was perhaps the first of the new wave of service meshes. SmartStack forms a control plane around HAProxy or NGINX, further demonstrating that it’s possible to decouple the service mesh control plane and the data plane.

The service mesh microservice networking space is getting a lot of attention right now (rightly so!) with more projects and vendors entering all the time. Over the next several years, we will see a lot of innovation in both data planes and control planes, and further intermixing of the various components. The ultimate result should be microservice networking that is more transparent and magical to the (hopefully less and less grumpy) operator.

Key takeaways

  • A service mesh is composed of two disparate pieces: the data plane and the control plane. Both are required. Without both the system will not work.
  • Everyone is familiar with the control plane — albeit the control plane might be you!
  • All of the data planes compete with each other on features, performance, configurability, and extensibility.
  • All of the control planes compete with each other on features, configurability, extensibility, and usability.
  • A single control plane may contain the right abstractions and APIs such that multiple data planes can be used.


Posted in Devops, Tools

Tutorial Creating Jenkins Job using DSL

This tutorial will walk you through how to create a single job using a DSL script, and then add a few more.

1. Creating the Seed Job

We use a Free-style Jenkins Job as a place to run the DSL scripts. We call this a “Seed Job”. Since it’s a normal Job you’ll get all the standard benefits of Jenkins: history, logs, emails, etc. We further enhance the Seed Job to show which Jobs got created from the DSL script, in each build and on the Seed Job page.

The first step is to create this Job.

  • From the Jenkins main page, select either the “New Job” or “Create new Jobs” link. A new job creation page will be displayed.


  • Fill in the name field, e.g. “tutorial-job-dsl-1”
  • Select the “Build a free-style software project” radio button.
  • Click the OK button


2. Adding a DSL Script

Now that we have created our empty Seed Job we need to configure it. We’re going to add a build step to execute the Job DSL script. Then we can paste in an example script as follows:

  • On the configure screen, scroll down to the “Build: Add build step” pull down menu


  • From the pull down menu, select “Process Job DSLs”. You should be presented with two radio buttons. The default will be “Use the provided DSL script” and a text input box will be displayed below it.


  • Copy the following DSL Script block into the input box. (Note: The job resulting from this will be called DSL-Tutorial-1-Test. It’ll check a GitHub repo every 15 minutes, then run ‘clean test’ if any changes are found.)
job('DSL-Tutorial-1-Test') {
    scm {
        git('git://github.com/quidryan/aws-sdk-test.git')
    }
    triggers {
        scm('H/15 * * * *')
    }
    steps {
        maven('-e clean test')
    }
}

  • Click the “Save” button. You’ll be shown the overview page for the new Seed job you just created.


3. Run the Seed Job and Generate the new Jobs from the Script

The Seed Job is now all set up and can be run, generating the Job we just scripted.

(Note: As it stands right now, we didn’t setup any build triggers to run the job automatically but we could have, using the standard Jenkins UI in Step 2.)

Let’s just run it ourselves manually.

  • Click the “Build Now” link/button on the tutorial-job-dsl-1 overview page. It should only take a second to run.


  • Look at the build result to see a link to the new Job which has been created by the running of your DSL script in the Seed Job. You should see this in the section called “Generated Jobs”. (If you don’t see it, you probably have Auto-Refresh disabled. Enable it, or just refresh the page and then you’ll see the new job.)
  • Follow this link to your new Job. You can run this new script-generated Job manually or wait the 15 minutes for the scm trigger to kick in.

(Note: if you have a new Jenkins server, you might be missing the Git plugin or a Maven installation which Jenkins knows about. That could cause this job to fail when run. If you need to add these, be sure to re-run the Seed Job to make sure the Scripted Job is configured correctly – it won’t be if you ran without all the necessary plugins installed in Jenkins.)

(Additional Note: if the build still fails with these plugins / config set up, it may be because the new job is using a “default” maven rather than the one you just added.)

4. Adding additional Jobs to the DSL Script

To show some more of the power of the DSL Plugin, let’s create a bunch more Jobs.

  • Go back to the ‘tutorial-job-dsl-1’ Seed Job
  • Click the “Configure” link/button and navigate back down to the “Process Job DSLs” build step.
  • Add the following into the text box, below the script which we added at the beginning.

(Note: The practicality of this block is questionable, but it could be used to shard your tests into different jobs.)

def project = 'quidryan/aws-sdk-test'
def branchApi = new URL("https://api.github.com/repos/${project}/branches")
def branches = new groovy.json.JsonSlurper().parse(branchApi.newReader())
branches.each {
    def branchName = it.name
    def jobName = "${project}-${branchName}".replaceAll('/','-')
    job(jobName) {
        scm {
            git("git://github.com/${project}.git", branchName)
        }
        steps {
            maven("test -Dproject.name=${project}/${branchName}")
        }
    }
}
5. Enjoy the results

That’s it. Now you know how to make Seed Jobs, which can create a multitude of Scripted child Jobs. Take a look at some Real World Examples or jump ahead and read up on the DSL commands in detail for more fun.

Posted in Devops, Software Engineering


Vim is a very efficient text editor. This reference was made for Vim 8.0.
For shortcut notation, see :help key-notation.


Exiting

:qa Close all files
:qa! Close all files, abandon changes
:w Save
:wq / :x Save and close file
:q Close file
:q! Close file, abandon changes
ZZ Save and quit
ZQ Quit without checking changes
Navigating

h j k l Arrow keys
<C-U> / <C-D> Page up/page down


Words

b / w Previous/next word
e / ge Next/previous end of word


Line

0 (zero) Start of line
^ Start of line (after whitespace)
$ End of line


Character

fc Go forward to character c
Fc Go backward to character c


Document

gg First line
G Last line
:n Go to line n
nG Go to line n


Window

zz Center this line
H Move to top of screen
M Move to middle of screen
L Move to bottom of screen

Tab pages

:tabedit [file] Edit file in a new tab
:tabfind [file] Open file if exists in new tab
:tabclose Close current tab
:tabs List all tabs
:tabfirst Go to first tab
:tablast Go to last tab
:tabn Go to next tab
:tabp Go to previous tab


Editing

a Append
i Insert
o Next line
O Previous line
s Delete char and insert
S Delete line and insert
C Delete until end of line and insert
r Replace one character
R Enter Replace mode
u Undo changes
<C-R> Redo changes

Exiting insert mode

Esc / <C-[> Exit insert mode
<C-C> Exit insert mode, and abort current command


Clipboard

x Delete character
dd Delete line (Cut)
yy Yank line (Copy)
p Paste
P Paste before

Visual mode

v Enter visual mode
V Enter visual line mode
<C-V> Enter visual block mode

In visual mode

d / x Delete selection
s Replace selection
y Yank selection (Copy)

See Operators for other things you can do.



Operators

Operators let you operate in a range of text (defined by motion). These are performed in normal mode.

d w (operator + motion)

Operators list

d Delete
y Yank (copy)
c Change (delete then insert)
> Indent right
< Indent left
g~ Swap case
gU Uppercase
gu Lowercase
! Filter through external program

See :help operator


Combine operators with motions to use them.

dd (repeat the letter) Delete current line
dw Delete to next word
db Delete to beginning of word
2dd Delete 2 lines
dip Delete a text object (inside paragraph)
(in visual mode) d Delete selection

See: :help motion.txt

Text objects


Text objects let you operate (with an operator) in or around text blocks (objects).

v i p (operator + [i]nside or [a]round + text object)

Text objects

p Paragraph
w Word
s Sentence
[ ( { < A [], (), or {} block
' " ` A quoted string
b A block [(
B A block in [{
t An XML tag block


vip Select paragraph
vipipipip Select more
yip Yank inner paragraph
yap Yank paragraph (including newline)
dip Delete inner paragraph
cip Change inner paragraph

See Operators for other things you can do.


Diff

gvimdiff file1 file2 [file3] See differences between files, in a GUI



Folds

zo / zO Open
zc / zC Close
za / zA Toggle
zv Open folds for this line
zM Close all
zR Open all
zm Fold more (foldlevel += 1)
zr Fold less (foldlevel -= 1)
zx Update folds

Uppercase ones are recursive (eg, zO is open recursively).

Jumping

[( [{ [< Previous ( or { or <
]) Next
[m Previous method start
[M Previous method end


<C-O> Go back to previous location
<C-I> Go forward
gf Go to file in cursor


Counters

<C-A> Increment number
<C-X> Decrement


z{height}<Cr> Resize pane to {height} lines tall


Tags

:tag Classname Jump to first definition of Classname
<C-]> Jump to definition
g] See all definitions
<C-T> Go back to last tag
<C-O> <C-I> Back/forward
:tselect Classname Find definitions of Classname
:tjump Classname Find definitions of Classname (auto-select 1st)


Case

~ Toggle case (Case => cASE)
gU Uppercase
gu Lowercase
gUU Uppercase current line (also gUgU)
guu Lowercase current line (also gugu)

Do these in visual or normal mode.


Marks

`^ Last position of cursor in insert mode
`. Last change
`` Last jump
ma Mark this cursor position as a
`a Jump to the cursor position a
'a Jump to the beginning of the line with position a


. Repeat last command
]p Paste under the current indentation level

Command line

<C-R><C-W> Insert current word into the command line
<C-R>" Paste from " register
<C-X><C-F> Auto-completion of path in insert mode

Text alignment

:center [width]
:right [width]

See :help formatting


<C-R>=128/2 Shows the result of the division: '64'

Do this in insert mode.

Exiting with an error


:cq Works like :qa, but throws an error. Great for aborting Git commands.

Spell checking

:set spell spelllang=en_us Turn on US English spell checking
]s Move to next misspelled word after the cursor
[s Move to previous misspelled word before the cursor
z= Suggest spellings for the word under/after the cursor
zg Add word to spell list
zw Mark word as bad/misspelled
zu / C-X (Insert Mode) Suggest words for bad word under cursor from spellfile

See :help spell


Posted in Business Analyst

User Story – Guidelines



“User Stories represent customer requirements in a card, leading to conversations and confirmation.” ~ Ron Jeffries


The following are well known templates used in defining user stories and acceptance criteria.

Value Statement: As a (user role), I want to (activity), so that (business value)

Acceptance Criteria: Given (context), when (action performed), then (observable consequences)

General Guidelines

The following are some general guidelines to consider when writing user stories:

  • User Stories have three aspects: Card, Conversation and Confirmation (Ron Jeffries 2001)
  • User Stories should represent functionality that is of value to users or system owners.
  • User Stories should describe a single feature.
  • User Stories should have a note section where conversations are documented about the user story detail.
  • User Stories should have an estimation (cost) in story points which indicates size and complexity.
  • User Stories should be prioritised according to their value to the customer.

Attributes of good User Stories (I.N.V.E.S.T)

Mike Cohn specifies six fundamental attributes of a good user story in his book User Stories Applied. These are:

Independent (I)

  • User Stories should be free of dependencies on other user stories.
  • User Stories should be self-contained.
  • User Stories should be able to be completed and released in any order.
  • User Stories should be combined or split in different ways when dependencies occur.

Negotiable (N)

  • User Stories should not be contractual obligations as they are negotiable.
  • User Stories should be a collaborative negotiation between customers, developers and testers.

Valuable (V)

  • User Stories should be of value to the user or owner of the software.
  • User Stories should not be only of value to developers.
  • User Stories should clearly define the benefit to customers/users to assist in prioritization.
  • User Stories should be written by customers to ensure they are valuable to customers/users.

Estimatable (E)

  • User Stories should be estimated in terms of story points.
  • User Stories should be clearly understood before they are estimated by development teams.
  • User Stories should contain enough detail before they are estimated by development teams.
  • User Stories may not be estimatable when development teams lack domain knowledge.
  • User Stories may not be estimatable when development teams lack technical knowledge.
  • User Stories may not be estimatable when the user story is too big.

Small (S)

  • User Stories should be as small as possible while still providing user value.
  • User Stories should be able to fit into one iteration.
  • User Stories that are too big will be difficult to understand and estimate.

Testable (T)

  • User Stories should be verified by tests to prove they are implemented correctly.
  • User Stories should contain the story acceptance criteria to guide testing.
  • User Stories should be easily unit tested. (Technical Implementation)
  • User Stories should be easily acceptance tested. (Behavioural)
  • User Stories should be tested in an automated manner where possible.

Story Point Planning

  • Fibonacci ( 0, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ?, Pass )
  • Modified Fibonacci ( 0, ½, 1, 2, 3, 5, 8, 13, 20, 40, 100, ?, Pass )
  • T-shirts ( xxs, xs, s, m, l, xl, xxl, ?, Pass )
  • Powers of 2 ( 0, 1, 2, 4, 8, 16, 32, 64, ?, Pass )

Definition of Done (DoD)

There are numerous criteria a team can use to define their Definition of Done. This ensures that teams deliver features that are completed in terms of functionality and quality. Definition of Done is an auditable checklist. The following is a set of possible criteria and activities for a DoD:

  • Unit Tests Passed
  • Acceptance Criteria Met
  • Code Reviewed
  • Functional Tests Passed
  • Non-Functional Requirements Met
  • Product Owner Accepts User Story

User Story Example

The following is an example of a user story.


  • Cohn, Mike. User Stories Applied: For Agile Software Development. 1st ed., Addison-Wesley Professional, 2004.
  • Wake, William C. Extreme Programming Explored. Addison Wesley, 2002.

