Posted in Information Technology

Calendar Phishing

Cybercriminals always seem to find a way to fulfill their malicious intent, and the newest trick in town is phishing via Google Calendar, popularly called calendar phishing.

As per a report by security firm Kaspersky, scammers are sending phishing links to users via Google Calendar invites in Gmail, taking advantage of a default setting in the Google Calendar app.

How Does Calendar Phishing Work?

The report points out that Google Calendar, by default, adds invitations and events to a user's calendar even if the user has not responded to the invite. Users then receive a pop-up notification for all events and invites as the date of the event approaches.

While users these days are likely to dismiss suspicious emails (thanks to increasing security awareness), phishing links delivered via trusted apps such as Google Calendar tend to catch users' attention and help the scammers succeed.

Via these phishing links, cybercriminals can gain access to users' important data, social security numbers, and banking details in order to extract money from them.

How to Stop It?

Fortunately, there is a way to stop calendar phishing, and users can follow a few simple steps to do so:

  • First, open Google Calendar, click the settings gear icon, and head to Event settings.
  • Under Event settings, find the 'Automatically add invitations' option and select 'No, only show invitations to which I've responded.'
  • Following this, ensure that the 'Show declined events' option under the View options section is unticked.

In addition to this, users need to stay alert, avoid entering personal information on sites they find suspicious, and use a reliable security solution to remain safe.

Posted in Information Technology

Getting Started With Spring Cloud Gateway

In this article, we will integrate Spring Cloud Gateway with a microservice-based application built on Spring Cloud. In the process, we will use Spring Cloud Gateway as the gateway provider and Netflix Eureka as the discovery server, with the circuit breaker pattern implemented using Netflix Hystrix.

Let’s quickly get started with the implementation of it.

Discovery Server Implementation

In a microservice architecture, service discovery is one of the key tenets. A discovery server keeps track of service instances as they are created on demand, which lets clients locate them dynamically and helps provide high availability for our microservices.

Below are the pom.xml and application.yml configuration for integrating Netflix Eureka Discovery with Spring Cloud.



In pom.xml, the Spring milestone repository:

<repository>
    <id>spring-milestones</id>
    <name>Spring Milestones</name>
    <url>https://repo.spring.io/milestone</url>
</repository>

And application.yml:

spring:
  application:
    name: discovery-service

eureka:
  client:
    eureka-server-connect-timeout-seconds: 5
    enabled: true
    fetch-registry: false
    register-with-eureka: false

server:
  port: 8761

Project Setup

First, we will generate a sample Spring Boot project from Spring Initializr (start.spring.io) and import it into the workspace. The selected dependencies are Gateway, Hystrix, and Actuator.


We will also add the spring-cloud-starter-netflix-eureka-client dependency to our pom.


Spring Cloud Route Configuration

The route is the basic building block of the gateway. It is defined by an ID, a destination URI, a collection of predicates, and a collection of filters. A route is matched if its aggregate predicate is true.

Spring Cloud Gateway provides many built-in Route Predicate Factories such as Path, Host, Date/Time, Method, Header, etc. We can combine these built-in predicates using and() or or() to define our routes. Once a request reaches the gateway, the first thing the gateway does is match the request against each of the available routes based on the predicates defined, and the request is routed to the first matched route.
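
To illustrate how an aggregate predicate works, here is a small Python sketch of the matching logic (this is not Spring code; the route IDs reuse the two services from this article, and the request shape is made up):

```python
# A route matches when its aggregate predicate -- the AND of all of
# its individual predicates -- evaluates to true for the request.
def path_predicate(prefix):
    return lambda req: req["path"].startswith(prefix)

def method_predicate(method):
    return lambda req: req["method"] == method

def matches(route, request):
    # and(): every predicate in the route must accept the request
    return all(p(request) for p in route["predicates"])

routes = [
    {"id": "first-service",
     "predicates": [path_predicate("/api/v1/first/"), method_predicate("GET")]},
    {"id": "second-service",
     "predicates": [path_predicate("/api/v1/second/")]},
]

def route_request(request):
    # The gateway forwards to the first route whose aggregate predicate is true.
    for route in routes:
        if matches(route, request):
            return route["id"]
    return None

print(route_request({"path": "/api/v1/second/hello", "method": "GET"}))  # second-service
```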

Below is our route configuration. We have 2 different routes defined for our 2 microservices — first-service and second-service.

package com.devglan.gatewayservice;

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BeanConfig {

    @Bean
    public RouteLocator gatewayRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route(r -> r.path("/api/v1/first/**")
                        .filters(f -> f.rewritePath("/api/v1/first/(?<remains>.*)", "/${remains}")
                                .addRequestHeader("X-first-Header", "first-service-header")
                                .hystrix(c -> c.setName("hystrix")
                                        // example fallback endpoint for when the circuit opens
                                        .setFallbackUri("forward:/fallback")))
                        // load-balanced URI resolved via the Eureka service name
                        .uri("lb://first-service"))
                .route(r -> r.path("/api/v1/second/**")
                        .filters(f -> f.rewritePath("/api/v1/second/(?<remains>.*)", "/${remains}")
                                .hystrix(c -> c.setName("hystrix")
                                        .setFallbackUri("forward:/fallback")))
                        .uri("lb://second-service"))
                .build();
    }
}

Spring Cloud Gateway Application Config

hystrix.command.fallbackcmd.execution.isolation.thread.timeoutInMilliseconds: 2000

spring:
  application:
    name: api-gateway

server:
  port: 8088

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka
    register-with-eureka: false
  instance:
    preferIpAddress: true


Microservices Implementation

First Service

These services are very simple implementations with only one controller defined, for demo purposes.

package com.devglan.gatewayservice.controller;

import org.springframework.web.bind.annotation.*;

@RestController
public class FirstController {

    @GetMapping("/test")
    public String test(@RequestHeader("X-first-Header") String headerValue){
        return headerValue;
    }
}

And its application.yml:

spring:
  application:
    name: first-service

server:
  port: 8086

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka

Second Service Implementation

package com.devglan.gatewayservice.controller;

import org.springframework.web.bind.annotation.*;

@RestController
public class SecondController {

    @GetMapping("/test")
    public String test(@RequestHeader("X-second-Header") String headerValue){
        return headerValue;
    }
}

And its application.yml:

spring:
  application:
    name: second-service

server:
  port: 8087

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka

Now, as per the route configuration, requests matching the pattern /api/v1/first/** will be forwarded to first-service, whereas requests matching the pattern /api/v1/second/** will be forwarded to second-service.


This concludes an example of using Spring Cloud Gateway to route requests to multiple services running downstream. Next, we can extend this example to integrate security at the gateway level.


Posted in Information Technology

Comparing and Contrasting Open Source BPM Projects

Open source model (Community vs. Enterprise)

Every company supporting an open source project has its own business model. Open source companies typically offer enterprise products as a way to generate revenue — you have to pay for the enterprise version while the community version is (in general) free. Camunda provides an enterprise version of Camunda, Alfresco provides an enterprise version of Activiti, and RedHat provides an enterprise version of jBPM. I have found that the definition of "enterprise" can be different for each company. It is very important for developers using or sourcing an open source project to understand what a company means by "enterprise open source" before working with it.

Capability set

Let’s start and dig into the capabilities of Activiti, Camunda, and jBPM.

(Screenshots omitted: how jBPM, Activiti, and Camunda each integrate the Drools rules engine; the jBPM, Activiti Explorer, and Camunda environments; the jBPM REST service task and its configuration; the Service Task in Activiti Modeler; and a Camunda service task that requires further coding.)


Community

Who contributes to the source code of an open source BPM project is important. An active community signals that a project is still being improved and enhanced. The number of contributors outside of the supporting company can also help indicate the degree of diversity in thought and ideas put into a project. Open Hub is one site that can be used to look up this type of information. It provides details such as the activity, number of contributors, and commits. These are important factors to take into account. Open Hub's page for Camunda, for example, shows all of this at a glance.


In this article, we’ve just briefly touched on some of the similarities and differences between the open source BPM projects Activiti, Camunda, and jBPM. All three have their benefits and the specific needs of your project will help determine which one is the right choice for you. The good news is, all three are viable open source alternatives for closed source BPM products. And their open source nature means they will continue to change and evolve over time.

Posted in Information Technology

Fix ‘add-apt-repository command not found’ Error on Ubuntu and Debian

This quick tutorial shows you how to quickly fix the “add-apt-repository command not found” error on Debian, Ubuntu and other Debian-based Linux distributions.

One of the many ways to install software on Ubuntu or Debian is to use a PPA (Personal Package Archive).

If you want to add a new PPA repository, you’ll have to use the add-apt-repository command in the following fashion:

sudo add-apt-repository ppa:some/ppa

On Debian, elementary OS, and sometimes on Ubuntu, you'll see an error saying that the add-apt-repository command is missing:

sudo: add-apt-repository: command not found

Let’s see how to fix this annoying error.

Fix add-apt-repository: command not found error

The cause is simple: the package that provides the add-apt-repository command is not installed on your system.

But if you try to use sudo apt-get install add-apt-repository, it won't work, because there is no package of that name.

The add-apt-repository command is provided by the package software-properties-common, and you need to install this package in order to get add-apt-repository.

So, open a terminal and use this command:

sudo apt-get install software-properties-common

The command output will be something like this:

Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 9,912 B of archives.
After this operation, 197 kB of additional disk space will be used.
Get:1 bionic-updates/main amd64 software-properties-common all [9,912 B]
Fetched 9,912 B in 2s (5,685 B/s)                      
Selecting previously unselected package software-properties-common.
(Reading database ... 265950 files and directories currently installed.)
Preparing to unpack .../software-properties-common_0. ...
Unpacking software-properties-common ( ...
Processing triggers for man-db (2.8.3-2) ...
Processing triggers for dbus (1.12.2-1ubuntu1) ...
Setting up software-properties-common ( ...

Once you have installed software-properties-common, you should update the system using this command:

sudo apt-get update

You can now comfortably use add-apt-repository or apt-add-repository commands to add PPA.

Note: If apt cannot locate the software-properties-common package, you should run sudo apt-get update and then try to install it again.

I hope this quick tip helped you in fixing the "add-apt-repository: command not found" error on Ubuntu and other Debian-based Linux distributions.


If you are still facing issues with PPA, let me know in the comment section. Additional suggestions, questions and a quick word of thanks are always welcome.

Posted in Information Technology

Python requests SSL and InsecurePlatformWarning


Every now and then, when using a Python 2.7 release older than 2.7.9 and trying to access SSL resources (especially through the requests toolkit, which seems to trigger the issue frequently, though I've seen it with some combinations of pip inside virtualenv as well), you'll get an error or a warning along these lines:

InsecurePlatformWarning: A true SSLContext object is not
available. This prevents urllib3 from configuring SSL appropriately and 
may cause certain SSL connections to fail. For more information, see  


SNIMissingWarning: An HTTPS request has been made, but the SNI (Server Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see

or, even worse:

error: SSLError: hostname '' doesn't match either of,

This is caused by old SSL libraries in Python < 2.7.9 and Python 3 < 3.4. To fix it, just add these three packages to your current virtualenv:

pip install pyOpenSSL ndg-httpsclient pyasn1

You may need runtime/development packages for python and openssl as well in order for the build to succeed, e.g. python-dev libssl-dev libffi-dev on Ubuntu 14.04.
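
A quick way to check whether your interpreter is affected is a diagnostic sketch like the one below (the attribute names come from the standard ssl module; the absence of a real SSLContext is what triggers InsecurePlatformWarning in urllib3):

```python
import ssl

# Python >= 2.7.9 / 3.4 expose a real SSLContext object.
has_sslcontext = hasattr(ssl, "SSLContext")

# SNIMissingWarning fires when the TLS SNI extension is unavailable.
has_sni = getattr(ssl, "HAS_SNI", False)

if has_sslcontext and has_sni:
    print("TLS support looks fine -- no workaround needed")
else:
    print("Old TLS stack -- install pyOpenSSL ndg-httpsclient pyasn1")
```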

Posted in Software Engineering

Sorting Algorithm


Sorting classification

Internal vs. External Sorting
Sorting algorithms can be classified into two types: internal and external. Internal sorting algorithms require the full data set to fit into main memory, whereas an external sort is used when the full data set does not fit and must reside on external storage during the sorting process.
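
A toy illustration of the external-sort idea in Python: sort chunks that "fit in memory", then k-way merge the sorted runs (the chunk size here is made up, and a real external sort would write each run to disk):

```python
import heapq

def external_sort(data, chunk_size=4):
    # Phase 1: sort each chunk that "fits in memory".
    chunks = [sorted(data[i:i + chunk_size])
              for i in range(0, len(data), chunk_size)]
    # Phase 2: k-way merge of the sorted runs (heapq.merge streams
    # the runs, keeping only one element per run in memory).
    return list(heapq.merge(*chunks))

print(external_sort([9, 4, 7, 1, 8, 2, 6, 3, 5]))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```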

Stable vs. Unstable Sorting
A sorting algorithm is stable when two objects with equal keys appear in the same order in the sorted output as in the unsorted input. Examples of stable sorting algorithms are Insertion Sort, Merge Sort, and Bubble Sort.

An unstable sorting algorithm may reorder two objects with equal keys, so their relative order in the sorted output can differ from the unsorted input. Examples of unstable sorting algorithms are Heap Sort and Quick Sort.
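
Stability is easy to observe in Python, whose built-in sort (Timsort) is stable: records with equal keys keep their input order.

```python
# Each record is (key, payload); we sort on the key only.
records = [(2, "a"), (1, "b"), (2, "c"), (1, "d")]

result = sorted(records, key=lambda r: r[0])

# With a stable sort, (1, "b") stays ahead of (1, "d"),
# and (2, "a") stays ahead of (2, "c").
print(result)  # [(1, 'b'), (1, 'd'), (2, 'a'), (2, 'c')]
```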

Time vs. Space Complexity

Time Complexity is the computational complexity that describes the amount of time it takes to run an algorithm.

Space Complexity is the computational complexity that describes the amount of memory space required by an algorithm.

In-place vs. Out-of-place Algorithm
An in-place algorithm is an algorithm that transforms its input using no auxiliary data structure. The input is usually overwritten by the output as the algorithm executes, with updates made only through replacement or swapping of elements.

An algorithm which is not in-place is sometimes called not-in-place or out-of-place.
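
The distinction shows up directly in Python's two sorting entry points; a short sketch:

```python
data = [3, 1, 2]

# In-place: list.sort() rearranges the existing list and returns None.
same = data
data.sort()
print(data is same, data)   # True [1, 2, 3]

# Out-of-place: sorted() leaves the input untouched and allocates a new list.
original = [3, 1, 2]
copy = sorted(original)
print(original, copy)       # [3, 1, 2] [1, 2, 3]
```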

List of Sort Algorithms

The following algorithms are examples of internal sorting algorithms:

Algorithm        Time (best)    Time (average)  Time (worst)   Space (worst)
Bubble Sort      Ω(n)           Θ(n²)           O(n²)          O(1)
Bucket Sort      Ω(n+k)         Θ(n+k)          O(n²)          O(n)
Counting Sort    Ω(n+k)         Θ(n+k)          O(n+k)         O(k)
Cube Sort        Ω(n)           Θ(n log(n))     O(n log(n))    O(n)
Heapsort         Ω(n log(n))    Θ(n log(n))     O(n log(n))    O(1)
Insertion Sort   Ω(n)           Θ(n²)           O(n²)          O(1)
Merge Sort       Ω(n log(n))    Θ(n log(n))     O(n log(n))    O(n)
Quick Sort       Ω(n log(n))    Θ(n log(n))     O(n²)          O(log(n))
Radix Sort       Ω(nk)          Θ(nk)           O(nk)          O(n+k)
Selection Sort   Ω(n²)          Θ(n²)           O(n²)          O(1)
Shell Sort       Ω(n log(n))    Θ(n log²(n))    O(n²)          O(1)
Timsort          Ω(n)           Θ(n log(n))     O(n log(n))    O(n)
Tree Sort        Ω(n log(n))    Θ(n log(n))     O(n²)          O(n)


Posted in Information Technology, Package Manager

Homebrew cheatsheet


Homebrew is a free and open-source software package management system that simplifies software installation on the macOS operating system. In short, it is the software package manager for macOS.


Basic Commands

brew install git Install a package
brew upgrade git Upgrade a package
brew unlink git Unlink
brew link git Link
brew switch git 2.5.0 Change versions
brew list --versions git See what versions you have

More package commands

brew info git List versions, caveats, etc
brew cleanup git Remove old versions
brew edit git Edit this formula
brew cat git Print this formula
brew home git Open homepage

Global commands

brew update Update brew and cask
brew list List installed
brew outdated What’s due for upgrades?
Posted in Information Technology

Git Cheat Sheet


Git is the go-to version control tool for most software developers because it allows them to efficiently manage their source code and track file changes while working with a large team. In fact, Git has so many uses that memorizing its various commands can be a daunting task, which is why we’ve created this git cheat sheet.


Setup Git:

After Git is installed, whether from apt-get or from source, you need to set your username and email in the gitconfig file. You can access this file at ~/.gitconfig.

Opening it following a fresh Git install would reveal a completely blank page:

sudo vim ~/.gitconfig

You can use the following commands to add the required information. Replace 'User' with your username and the example address with your email.

git config --global user.name "User"
git config --global user.email "user@example.com"

And you are done with setting up. Now let’s get started with Git.


Create a new repository:

Create a new directory, open it and run this command:

git init

This will create a new git repository. Your local repository consists of three “trees” maintained by git.

The first is your Working Directory, which holds the actual files. The second is the Index, which acts as a staging area. Finally, the HEAD points to the last commit you've made.


Checkout your repository (repository you just created or an existing repository on a server) using git clone /path/to/repository.

Add files and commit:

You can propose changes using:

git add <filename>

This will add a new file for the commit. If you want to add every new file, then just do:

git add --all

Your files are now added; check the status using:

git status

As you can see, there are changes but they are not committed yet. To commit these changes, use:

git commit -m "Commit message"

You can also do (preferred):

git commit -a

And then write your commit message. Now the file is committed to the HEAD, but not in your remote repository yet.

Push your changes

Your changes are in the HEAD of your local working copy. If you have not cloned an existing repository and want to connect your repository to a remote server, you need to add it first with:

git remote add origin <serveraddress>

Now you are able to push your changes to the selected remote server. To send those changes to your remote repository, run:

git push -u origin master


Branching:

Branches are used to develop features which are isolated from each other. The master branch is the "default" branch when you create a repository. Use other branches for development and merge them back to the master branch upon completion.

Create a new branch named “mybranch” and switch to it using:

git checkout -b mybranch

You can switch back to master by running:

git checkout master

If you want to delete the branch use:

git branch -d mybranch

A branch is not available to others unless you push it to your remote repository, so go ahead and push it:

git push origin <branchname>

Update and Merge

To update your local repository to the newest commit, run:

git pull

in your working directory to fetch and merge remote changes. To merge another branch into your active branch (e.g. master), use:

git merge <branch>

In both cases, git tries to auto-merge changes. Unfortunately, this is not always possible and conflicts result. You are responsible for resolving those conflicts manually by editing the files shown by git. After editing, you need to mark them as merged with:

git add <filename>

Before merging changes, you can also preview them by using

git diff <sourcebranch> <targetbranch>

Git log:

You can see the repository history using:

git log

To see a log where each commit is one line you can use:

git log --pretty=oneline

Or maybe you want to see an ASCII art tree of all the branches, decorated with the names of tags and branches:

git log --graph --oneline --decorate --all

If you want to see only which files have changed:

git log --name-status

And for any help during the entire process, you can use git --help.




Posted in Information Technology

Rbenv vs. RVM


Choosing a Ruby version management tool often comes down to two players: rbenv and RVM. The latter was widely accepted as the norm, greatly due to its wide toolkit. However, rbenv has become a strong contender with its lightweight approach.

Under the Hood

So, how do these tools get the job done? This is where things get a little scary with RVM. RVM overrides the cd shell command in order to load the current Ruby environment variables. Not only can the override cause unexpected behavior, but it also means that rubies and gemsets are loaded when switching directories.

rbenv does things on the fly by using shims to execute commands.

* A directory of shims (~/.rbenv/shims) is inserted to the front of PATH.
* The directory holds a shim for every Ruby command.
* The operating system searches for a shim that matches the name of the command, which in turn passes it to rbenv, determining the Ruby version to execute.
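
A minimal Python sketch (with hypothetical paths and a pretend filesystem) of how that PATH-ordered shim lookup behaves:

```python
import os

# Hypothetical PATH with the rbenv shims directory prepended.
PATH = ["~/.rbenv/shims", "/usr/local/bin", "/usr/bin"]

# Pretend filesystem: which commands live in which directory.
DIRS = {
    "~/.rbenv/shims": {"ruby", "gem", "bundle"},  # one shim per Ruby command
    "/usr/local/bin": {"brew"},
    "/usr/bin": {"ruby", "git"},                  # system ruby, shadowed
}

def resolve(command):
    # The OS walks PATH front to back and runs the first match,
    # so the shim wins over /usr/bin/ruby.
    for d in PATH:
        if command in DIRS.get(d, set()):
            return os.path.join(d, command)
    return None

print(resolve("ruby"))  # ~/.rbenv/shims/ruby
print(resolve("git"))   # /usr/bin/git
```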

rbenv configuration for an application is dirt simple: drop the desired version into a .ruby-version file.

# .ruby-version
2.3.0

The RBENV_VERSION variable also makes it easy to quickly specify a Ruby version via the command line. It’s first in line when rbenv checks for the current Ruby version.

Delegating the Workload

There are a few features in RVM that make it the heavier tool. RVM comes with its own Ruby installation mechanism:

rvm install ruby-2.3.0

With rbenv, you can either install Ruby yourself (by saving to ~/.rbenv/versions) or make use of ruby-build, a plugin that will install the versions for you. Like rbenv, ruby-build has a homebrew recipe.

brew install ruby-build
rbenv install 2.3.0

RVM gives the ability to separate dependencies by project with gemsets. Gemsets, however, are more of a thing of the past, thanks to the widespread use of Bundler.

With Bundler, one can easily manage the gems for a project.

gem install bundler

# Gemfile in root of application
source ''

gem 'rails'
gem 'rspec'

bundle install

Although most projects use Bundler now, the plugin rbenv-gemsets is the rbenv equivalent of gemsets.

Light is Might

While the versatility of RVM can be useful, when it comes to Ruby version management it can be overkill. Using rbenv allows you to keep things simple and let other tools handle the rest of the process. rbenv's narrow focus on Ruby versioning leads to a more dev-friendly setup and configuration. We have been using rbenv with our apps for a few years now. Partnered with Capistrano, rbenv-capistrano makes Ruby version maintenance for our deployable environments straightforward.

Posted in Information Technology

AWS CLI Cheat Sheet


The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

Leveraging the AWS CLI, we can build infrastructure as code in the organization.



S3

List buckets

aws s3 ls

Bucket location

aws s3api get-bucket-location --bucket <bucket-name>

Logging status

aws s3api get-bucket-logging --bucket <bucket-name>


Auto Scaling

Describe autoscale group details and member instances

aws autoscaling describe-auto-scaling-groups \
 --auto-scaling-group-names <as-group-name>


CloudFormation

Template validation

aws cloudformation validate-template \
 --template-body file://myCFN.template.json

aws cloudformation validate-template \
 --template-url <template-url>
Listing stacks

aws cloudformation list-stacks \
 --stack-status-filter [ CREATE_COMPLETE | UPDATE_COMPLETE | etc.. ]

Viewing stack events and resources

aws cloudformation describe-stack-events --stack-name <stack-name>

aws cloudformation list-stack-resources --stack-name <stack-name>


CloudTrail

Creating a subscription

aws cloudtrail create-subscription \
 --name cloudtrail-logs-ue1 \
 --s3-use-bucket cloudtrail-logs \
 --s3-prefix stage \
 --sns-new-topic cloudtrail-stage-notify-ue1

Describing and retrieving status

aws cloudtrail describe-trails

aws cloudtrail get-trail-status --name cloudtrail-logs-ue1



EC2

Describing an instance

aws ec2 describe-instances --instance-ids <instance-id>

Starting, stopping, rebooting and killing an instance

aws ec2 start-instances --instance-ids <instance-id>

aws ec2 stop-instances --instance-ids <instance-id>

aws ec2 reboot-instances --instance-ids <instance-id>

aws ec2 terminate-instances --instance-ids <instance-id>

Viewing console output

aws ec2 get-console-output --instance-id <instance-id>

Listing images

aws ec2 describe-images --image-ids <ami-id>

Creating an AMI

aws ec2 create-image \
 --instance-id <instance-id> \
 --name myAMI \
 --description 'Test AMI'

Viewing a security group

aws ec2 describe-security-groups --group-names <group-name>

Checking the enhanced networking attribute

aws ec2 describe-instance-attribute \
 --instance-id <instance-id> \
 --attribute sriovNetSupport



VPC

aws ec2 describe-vpcs

aws ec2 describe-subnets --filters Name=vpc-id,Values=<vpc-id>

aws ec2 describe-route-tables --filters Name=vpc-id,Values=<vpc-id>

aws ec2 describe-network-acls --filters Name=vpc-id,Values=<vpc-id>

aws ec2 describe-vpc-peering-connections



ELB

aws elb describe-load-balancers --load-balancer-names <lb-name>

aws elb describe-load-balancer-attributes --load-balancer-name <lb-name>

aws elb describe-load-balancer-policies \
 --policy-names [ <policy-name> | ELBSecurityPolicy-2014-10 ]

Registering and removing instances

aws elb register-instances-with-load-balancer \
 --load-balancer-name <lb-name> \
 --instances <instance-id>

aws elb deregister-instances-from-load-balancer \
 --load-balancer-name <lb-name> \
 --instances <instance-id>

Viewing the health of your ELB instances

aws elb describe-instance-health --load-balancer-name <lb-name>


IAM

Uploading a server certificate

aws iam upload-server-certificate \
 --certificate-body file:// \
 --private-key file:// \
 --certificate-chain file://Verisign_Chain_CA.crt

Listing your certificates

aws iam list-server-certificates

Using the “--query” option

(JMESPath query language for JSON)

Describe all instances in a region, or in a specific VPC

aws ec2 describe-instances \
 --query 'Reservations[*].Instances[*].{Id:InstanceId,Pub:PublicIpAddress,Pri:PrivateIpAddress,State:State.Name}' \
 --output table

aws ec2 describe-instances \
 --filters Name=vpc-id,Values=<vpc-id> \
 --query 'Reservations[*].Instances[*].{Id:InstanceId,Pub:PublicIpAddress,Pri:PrivateIpAddress,State:State.Name}' \
 --output table


|                      DescribeInstances                     |
|     Id     |       Pri       |       Pub        |  State   |
|  i-e44ac30e|   |  |  running |
|  i-68dd7282|   |  |  running |
|  i-60e5f38d|   |  |  running |
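
The --query expression above is JMESPath; in plain Python, the same flattening of Reservations[*].Instances[*] looks roughly like this (the response here is made-up sample data shaped like describe-instances output, reusing two instance IDs from the table above):

```python
# Made-up response shaped like `aws ec2 describe-instances` JSON output.
response = {
    "Reservations": [
        {"Instances": [
            {"InstanceId": "i-e44ac30e",
             "PublicIpAddress": "203.0.113.10",
             "PrivateIpAddress": "10.0.0.5",
             "State": {"Name": "running"}},
        ]},
        {"Instances": [
            {"InstanceId": "i-68dd7282",
             "PublicIpAddress": "203.0.113.11",
             "PrivateIpAddress": "10.0.0.6",
             "State": {"Name": "running"}},
        ]},
    ]
}

# Reservations[*].Instances[*].{Id: ..., Pub: ..., Pri: ..., State: ...}
rows = [
    {"Id": i["InstanceId"],
     "Pub": i["PublicIpAddress"],
     "Pri": i["PrivateIpAddress"],
     "State": i["State"]["Name"]}
    for r in response["Reservations"]
    for i in r["Instances"]
]

for row in rows:
    print(row["Id"], row["Pri"], row["Pub"], row["State"])
```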