Cybercriminals often seem to find a way to fulfill their malicious intent, and the latest trick in town is phishing via Google Calendar, popularly called Calendar Phishing.
As per a report by security firm Kaspersky, scammers are sending phishing links to users via Google Calendar in Gmail, taking advantage of a default setting in the Google Calendar app.
How Does Calendar Phishing Work?
Google Calendar comes with a default setting that automatically adds invitations and events to a user's calendar, even if the user has not responded to the invite. Users then receive a pop-up notification for every event and invite as its date approaches.
While users these days are likely to dismiss suspicious emails (thanks to increasing security awareness), phishing links delivered via trusted apps such as Google Calendar tend to catch users’ attention and help the scammers succeed.
Via these phishing links, cybercriminals can gain access to users’ important data, Social Security numbers, and banking details in order to extract money from them.
How to stop it?
Fortunately, there is still a way to stop Calendar Phishing, and users can follow a few simple steps to do so:
First, open Google Calendar, click the settings gear icon, and head to Event settings.
Under the Event settings, go for the ‘Automatically add invitations’ option and select ‘No, only show invitations to which I’ve responded.’
Following this, users need to ensure that they untick the ‘Show declined events’ option under the View options section.
In addition to this, users need to stay alert, avoid entering any personal information on sites they find fishy, and use a reliable security solution to stay safe.
In this article, we will integrate Spring Cloud Gateway with a microservice-based application using Spring Cloud. In the process, we will use Spring Cloud Gateway as the gateway provider and Netflix Eureka as the discovery server, with the circuit breaker pattern implemented using Netflix Hystrix.
Let’s quickly get started with the implementation.
Discovery Server Implementation
In a microservice architecture, service discovery is one of the key tenets. Service discovery automates the registration and lookup of service instances created on demand and provides high availability for our microservices.
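The discovery server code itself is not reproduced above; a minimal sketch, assuming Netflix Eureka is used via the spring-cloud-starter-netflix-eureka-server dependency (the class name here is illustrative, not from the original), could look like this:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// Minimal Eureka discovery server: microservices register themselves here,
// and the gateway resolves service names through this registry.
@SpringBootApplication
@EnableEurekaServer
public class DiscoveryServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(DiscoveryServerApplication.class, args);
    }
}
```

With this running (by default on port 8761), each microservice and the gateway point their eureka.client.serviceUrl at it in their own configuration.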
The route is the basic building block of the gateway. It is defined by an ID, a destination URI, a collection of predicates and a collection of filters. A route is matched if an aggregate predicate is true.
Spring Cloud Gateway provides many built-in route predicate factories such as Path, Host, Date/Time, Method, Header, etc. We can combine these built-in predicates with and() and or() to define our routes. Once a request reaches the gateway, the first thing the gateway does is match the request against each of the available routes based on the defined predicates, and the request is then routed to the matched route.
Below is our route configuration. We have 2 different routes defined for our 2 microservices — first-service and second-service.
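The configuration itself did not survive extraction; a minimal application.yml sketch, assuming both services are registered with Eureka under the names first-service and second-service (the lb:// URIs resolve through the discovery server), might look like this:

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: first-service
          uri: lb://first-service        # resolved via Eureka service discovery
          predicates:
            - Path=/api/v1/first/**
        - id: second-service
          uri: lb://second-service
          predicates:
            - Path=/api/v1/second/**
```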
Now, as per the route configuration, requests matching the path pattern /api/v1/first/** will be forwarded to first-service, whereas requests matching /api/v1/second/** will be forwarded to second-service.
This concludes an example of using Spring Cloud Gateway to route requests to multiple services running downstream. Next, we can extend this example to integrate security at the gateway level.
According to Black Duck’s 2016 Future of Open Source Survey, 65% of companies are now using and contributing to open source projects. Add to that GitHub’s 38 million projects and 15 million contributors/developers, and you can see that this high level of contribution has led to a constant rate of change in the open source field. This trend is only accelerating as new projects are created every day, while others are forked off existing open source projects. This expanding list of open source projects provides viable alternatives to many closed source products. One category of software that has been influenced by this open source trend is Business Process Management (BPM).
BPM products are geared towards automating manual processes. Many business processes start off using e-mail or spreadsheets and grow to the point where these are no longer manageable. Some examples are when the business process spans multiple departments or when the workflow includes system-related steps in addition to human steps. More advanced use cases include case-management-type activities that require investigation and unstructured, dynamic workflows. One particularly interesting trend on the case management front is that many industry products are starting to offer case management and business process management capabilities as part of the same tooling, rather than requiring separate products. You can see this trend in open source BPM products as well.
A number of open source BPM products can provide an alternative to closed source BPM products. Some of these include Activiti, Camunda, and jBPM. jBPM, founded in 2006, was the first of these products with Activiti forking off jBPM in 2010 and Camunda forking off Activiti in 2012. While similar, there are a number of differences between these popular open source BPM products. Some of these differences include:
· Open source model (Community vs. Enterprise)
· Capability set
Let’s briefly touch on each of these topics through a compare and contrast of the open source BPM projects.
Open source model (Community vs. Enterprise)
Every company supporting an open source project has its own business model. Typically, open source companies offer enterprise open source products as a way to generate revenue — you typically have to pay for the enterprise version while the community version (in general) is free. Camunda provides an enterprise version of Camunda, Alfresco provides an enterprise version of Activiti, and Red Hat provides an enterprise version of jBPM. I have found that the definition of “enterprise” can be different for each company. It is very important for developers using or sourcing an open source project to understand what a company means by “enterprise open source” before working with it.
In some cases, the enterprise version is not open source, but is comprised mostly of open source projects, enabling a lower price point than competing closed source products. One potential downside of this version is you lose the ability to contribute back to the code and are reliant on the supporting company to make the code changes.
In some cases, companies will provide a certain capability set in the community version, then offer additional capabilities in the enterprise version. Camunda and Alfresco both follow this approach. For example, with Activiti, the enterprise version has a full BPMN editor while the community version has a scaled back version.
In some cases, companies provide the same capability set in both the community and enterprise versions; the only difference is that with the enterprise version you pay for support. Depending on the degree of in-house expertise on the open source project, you may or may not want support. Red Hat follows this approach with jBPM. The enterprise version of jBPM is based on a previous community release, which is then run through a series of certification tests. Why? Community versions sometimes contain experimental features that may have bugs, which is why older versions are often leveraged for enterprise releases.
Let’s dig into the capabilities of Activiti, Camunda, and jBPM.
1. Rules Engine
Some BPM products have their own rules engine, while others provide integration / plug-ins for commonly used rules engines. Drools is a common rules engine used in the industry. jBPM integrates Drools directly into its project, whereas Activiti and Camunda take a different approach and provide integration with Drools. All three enable you to use a business rule task as part of the workflow; what differs is the integration behind the business rule task. Since Drools is native to jBPM, you can integrate rules at the ruleflow group level (a grouping of rules) in the model, whereas the others integrate at the rule level. This enables you to select a specific set of rules to execute at a particular point in the workflow using the native features of Drools. The same outcome can be achieved with Camunda and Activiti, but it will require additional coding, whereas jBPM provides the capability out of the box.
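As an illustration of the ruleflow group integration, a Drools rule is tied to a group via the ruleflow-group attribute (the rule, group, and fact names below are hypothetical, not from the original article):

```
rule "ApproveSmallOrder"
    ruleflow-group "order-approval"  // runs only when the workflow activates this group
when
    $order : Order( total < 1000 )
then
    $order.setApproved( true );
end
```

A business rule task in the jBPM process model then references the "order-approval" group, so only the rules in that group fire at that step of the workflow.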
2. Modeling & Execution Environment
BPM products typically provide an authoring/modeling user interface that enables users to build process diagrams. Some will also provide Eclipse plug-ins for developers who would rather work in an IDE. Most BPM products have an execution user interface that enables users to view and process tasks assigned to them. Camunda provides four separate UIs — modeling, tasklist, process administration, and monitoring. This is a different approach than jBPM and Activiti, as they provide a single web app for development and execution, like the jBPM Workbench or the Activiti Modeler. One additional difference with the Camunda modeler UI is that it is currently only available as a desktop application. From an Eclipse plug-in perspective, Camunda no longer provides one, as they decided to decouple from Eclipse to remove the need for updating the plug-in with each new release.
Activiti provides a single web app for development and execution. In the community version it is called Activiti Explorer and is simple and easy to use. The enterprise version provides a full BPMN editor with advanced capabilities such as decision tables and a step-based process designer. Activiti provides an Eclipse plug-in, giving developers the flexibility to work in an IDE if they prefer.
Similar to Activiti, jBPM provides a single web app for development and execution. In the community version it is known as KIE Workbench, while in the enterprise version it is called Business Central. Unlike Activiti, jBPM offers the same capabilities in the community and enterprise versions. Similar to Activiti, jBPM provides an Eclipse plug-in.
3. Form Builder
Form builders provide an easy way for users to view and add/update process related data within the execution user interface. jBPM provides a basic form builder through the KIE Workbench (same UI is used for modeling & execution as mentioned previously) that can be auto-generated from process variables. Camunda does not provide a form editor capability as they prefer that users build their own forms/UI in their preferred language. Activiti provides a form builder, but it is only available in the enterprise edition.
jBPM form builder:
4. REST Support
Given the importance of APIs in today’s software industry, the ability to make REST calls from applications is critical. All three products provide the ability to make REST API calls. jBPM provides an out-of-the-box REST service task, while Camunda and Activiti require additional development (custom Java classes) to implement a REST call. An advantage of an out-of-the-box REST service task is that it decreases development time by being configuration based instead of requiring custom development. While this makes it easier to add REST calls within a workflow, use it carefully. If you have many REST calls to make within a workflow, evaluate performing them in parallel to increase performance.
Below is a view of the jBPM REST service task:
jBPM REST service task configuration:
Below is a view of the Service Task in Activiti Modeler. Once you add in a quick Java class, you can perform a REST call:
Below is the Camunda service task. Similar to Activiti, it requires further coding:
5. Deployment modes
Camunda, jBPM, and Activiti all support embedded and standalone deployment modes. An embedded deployment enables you to run the BPM engine as part of an existing application (within the same JVM). This can provide performance benefits, as data is passed in memory instead of over network calls. A standalone deployment exposes various API functions that can be invoked by a client via REST. This can be beneficial if you want to reuse a single workflow instance across different applications. It can also be a way to integrate an application written in a language different from the BPM product’s API with the workflow engine. Let’s look at examples of how Activiti, jBPM, and Camunda can be invoked via APIs.
The Activiti Engine can be invoked via REST through Spring Boot:
The below class, a Spring service, has two methods: one to start the process and one to get a task list for a given assignee. It is a wrapper around Activiti calls, though real-life scenarios will be more complex.
The below REST endpoint class is annotated with @RestController and delegates to the above service.
Both the @Service and the @RestController will be found by the automatic component scan (@ComponentScan) in the application class. The below curl command can interact with the REST API:
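The classes described above were shown as images in the original article; a minimal sketch under the same assumptions (the process key, endpoint paths, and class names here are illustrative, not from the original) might look like this:

```java
import java.util.List;
import java.util.stream.Collectors;

import org.activiti.engine.RuntimeService;
import org.activiti.engine.TaskService;
import org.activiti.engine.task.Task;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Spring service wrapping the Activiti engine APIs.
@Service
class ProcessService {

    @Autowired
    private RuntimeService runtimeService;

    @Autowired
    private TaskService taskService;

    // Start a new instance of the process deployed under the given key.
    public void startProcess(String processKey) {
        runtimeService.startProcessInstanceByKey(processKey);
    }

    // List the open tasks assigned to the given user.
    public List<Task> getTasks(String assignee) {
        return taskService.createTaskQuery().taskAssignee(assignee).list();
    }
}

// REST endpoint class delegating to the service above.
@RestController
class ProcessController {

    @Autowired
    private ProcessService processService;

    @PostMapping("/process")
    public void startProcess() {
        processService.startProcess("myProcess");
    }

    @GetMapping("/tasks")
    public List<String> getTasks(@RequestParam String assignee) {
        return processService.getTasks(assignee).stream()
                .map(Task::getName)
                .collect(Collectors.toList());
    }
}
```

With the application running, a call such as `curl http://localhost:8080/tasks?assignee=kermit` (the assignee name is hypothetical) would return the task names for that user.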
Below is an example of invoking a jBPM workflow using the jBPM Java API, which communicates over REST.
Camunda does not provide a remote Java API out of the box, but the engine can be accessed through REST calls. Below are two POST REST calls that can start a process instance:
Who contributes to the source code of an open source BPM project is important. An active community signals that a project is still being improved and enhanced. The number of contributors outside of the supporting company can also help indicate the degree of diversity in the thought and ideas put into a project. Open Hub is one site that can be used to look up this type of information. It provides details such as activity, number of contributors, and commits. These are important factors to take into account. Below is an example Open Hub page for Camunda:
In this article, we’ve just briefly touched on some of the similarities and differences between the open source BPM projects Activiti, Camunda, and jBPM. All three have their benefits and the specific needs of your project will help determine which one is the right choice for you. The good news is, all three are viable open source alternatives for closed source BPM products. And their open source nature means they will continue to change and evolve over time.
The analysis in this article was performed on jBPM 6.4, Activiti 5.21, and Camunda 7.5.
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 9,912 B of archives.
After this operation, 197 kB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 software-properties-common all 0.96.24.32.5 [9,912 B]
Fetched 9,912 B in 2s (5,685 B/s)
Selecting previously unselected package software-properties-common.
(Reading database ... 265950 files and directories currently installed.)
Preparing to unpack .../software-properties-common_0.96.24.32.5_all.deb ...
Unpacking software-properties-common (0.96.24.32.5) ...
Processing triggers for man-db (2.8.3-2) ...
Processing triggers for dbus (1.12.2-1ubuntu1) ...
Setting up software-properties-common (0.96.24.32.5) ...
Once you have installed software-properties-common, you should update the system using this command:
sudo apt-get update
You can now comfortably use add-apt-repository or apt-add-repository commands to add PPA.
Note: If you see an error saying software-properties-common command not found, you should run sudo apt-get update and then try to install it again.
I hope this quick tip helped you fix the “add-apt-repository: command not found” error on Ubuntu and other Debian-based Linux distributions.
If you are still facing issues with PPA, let me know in the comment section. Additional suggestions, questions and a quick word of thanks are always welcome.
Every now and then, when using a Python 2.7 release older than 2.7.9 and trying to access SSL resources (especially through the requests toolkit, which seems to trigger the issue frequently, though I’ve seen it on some combinations of pip inside virtualenv as well), you’ll get an error or a warning along these lines:
InsecurePlatformWarning: A true SSLContext object is not
available. This prevents urllib3 from configuring SSL appropriately and
may cause certain SSL connections to fail. For more information, see
SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#snimissingwarning.
or, even worse:
error: SSLError: hostname 'xxx.franzoni.eu' doesn't match either of www.franzoni.eu, franzoni.eu
This is caused by old libraries in Python < 2.7.9 and < 3.4. To fix it, just add these three packages to your current virtualenv:
pip install pyOpenSSL ndg-httpsclient pyasn1
You may need runtime/development packages for python and openssl as well in order for the build to succeed, e.g. python-dev libssl-dev libffi-dev on Ubuntu 14.04.
Internal vs. External Sorting
Sorting algorithms can be classified into two types: internal and external. Internal sorting algorithms require the full data set to fit into main memory, whereas external sorting is used when the full data set does not fit and has to reside on external storage during the sorting process.
Stable vs. Unstable Sorting
A stable sorting algorithm is one in which two objects with equal keys appear in the same order in the sorted output array as they do in the unsorted input array. Examples of stable sorting algorithms are Insertion Sort, Merge Sort, and Bubble Sort.
An unstable sorting algorithm is one in which two objects with equal keys do not necessarily appear in the same order in the sorted output array as they do in the unsorted input array. Examples of unstable sorting algorithms are Heap Sort and Quick Sort.
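The distinction is easy to demonstrate. The sketch below uses Python's built-in sorted(), which is stable (Timsort); the record payloads are made up for illustration:

```python
# Records share keys "a" and "b"; the numeric payloads reveal the
# original relative order of equal-keyed records.
records = [("b", 1), ("a", 2), ("b", 3), ("a", 4)]

# Sort by key only. A stable sort keeps (a, 2) before (a, 4)
# and (b, 1) before (b, 3), exactly as they appeared in the input.
result = sorted(records, key=lambda record: record[0])

print(result)  # [('a', 2), ('a', 4), ('b', 1), ('b', 3)]
```

An unstable algorithm such as Heap Sort is allowed to emit the equal-keyed pairs in either order, which matters whenever the records carry data beyond the sort key.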
Time vs. Space Complexity
Time Complexity is the computational complexity that describes the amount of time it takes to run an algorithm.
Space Complexity is the computational complexity that describes the amount of memory space required by an algorithm.
In-place vs. Out-of-place Algorithm
An in-place algorithm is an algorithm which transforms its input using no auxiliary data structure. The input is usually overwritten by the output as the algorithm executes. An in-place algorithm updates the input sequence only through replacement or swapping of elements.
An algorithm which is not in-place is sometimes called not-in-place or out-of-place.
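A quick Python illustration of the difference: sorted() works out-of-place by building a new list, while list.sort() works in-place by reordering the original list:

```python
data = [3, 1, 2]

# Out-of-place: sorted() allocates and returns a new list; data is untouched.
out_of_place = sorted(data)

# In-place: list.sort() reorders data itself and returns None.
return_value = data.sort()

print(out_of_place)  # [1, 2, 3]
print(data)          # [1, 2, 3]
print(return_value)  # None
```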
List of Sort Algorithms
The following algorithms are examples of internal sorting algorithms:
Git is the go-to version control tool for most software developers because it allows them to efficiently manage their source code and track file changes while working with a large team. In fact, Git has so many uses that memorizing its various commands can be a daunting task, which is why we’ve created this git cheat sheet.
After Git is installed, whether from apt-get or from source, you need to set your username and email in the gitconfig file. You can access this file at ~/.gitconfig.
Opening it following a fresh Git install would reveal a completely blank page:
sudo vim ~/.gitconfig
You can use the following commands to add in the required information. Replace ‘user’ with your username and ‘firstname.lastname@example.org’ with your email.
git config --global user.name "user"
git config --global user.email "firstname.lastname@example.org"
And you are done with setting up. Now let’s get started with Git.
Create a new directory, open it, and run this command:
git init
This will create a new git repository. Your local repository consists of three “trees” maintained by git.
The first one is your Working Directory, which holds the actual files. The second one is the Index, which acts as a staging area, and finally the HEAD, which points to the last commit you’ve made.
Checkout your repository (repository you just created or an existing repository on a server) using git clone /path/to/repository.
Add files and commit:
You can propose changes using:
git add <filename>
This will add a new file for the commit. If you want to add every new file, then just do:
git add --all
Your files are added. Check your status using:
git status
As you can see, there are changes but they are not committed. Now you need to commit these changes, use:
git commit -m "Commit message"
You can also do (preferred):
git commit -a
And then write your commit message. Now the file is committed to the HEAD, but not in your remote repository yet.
Push your changes
Your changes are in the HEAD of your local working copy. If you have not cloned an existing repository and want to connect your repository to a remote server, you need to add it first with:
git remote add origin <serveraddress>
Now you are able to push your changes to the selected remote server. To send those changes to your remote repository, run:
git push -u origin master
Branches are used to develop features which are isolated from each other. The master branch is the “default” branch when you create a repository. Use other branches for development and merge them back to the master branch upon completion.
Create a new branch named “mybranch” and switch to it using:
git checkout -b mybranch
You can switch back to master by running:
git checkout master
If you want to delete the branch use:
git branch -d mybranch
A branch is not available to others unless you push it to your remote repository, so go ahead and push it:
git push origin <branchname>
Update and Merge
To update your local repository to the newest commit, run:
git pull
in your working directory to fetch and merge remote changes. To merge another branch into your active branch (e.g. master), use:
git merge <branch>
In both cases, git tries to auto-merge changes. Unfortunately, this is not always possible and results in conflicts. You are responsible for merging those conflicts manually by editing the files shown by git. After changing, you need to mark them as merged with
git add <filename>
Before merging changes, you can also preview them by using
git diff <sourcebranch> <targetbranch>
You can see the repository history using:
git log
To see a log where each commit is one line you can use:
git log --pretty=oneline
Or maybe you want to see an ASCII art tree of all the branches, decorated with the names of tags and branches:
git log --graph --oneline --decorate --all
If you want to see only which files have changed:
git log --name-status
And for any help during the entire process, you can use git --help
Choosing a Ruby version management tool often comes down to two players: rbenv and RVM. The latter was widely accepted as the norm, greatly due to its wide toolkit. However, rbenv has become a strong contender with its lightweight approach.
Under the Hood
So, how do these tools get the job done? This is where things get a little scary with RVM. RVM overrides the cd shell command in order to load the current Ruby environment variables. Not only can the override cause unexpected behavior, but it also means that rubies and gemsets are loaded when switching directories.
rbenv does things on the fly by using shims to execute commands.
* A directory of shims (~/.rbenv/shims) is inserted to the front of PATH.
* The directory holds a shim for every Ruby command.
* The operating system searches for a shim that matches the name of the command, which in turn passes it to rbenv, determining the Ruby version to execute.
rbenv configuration for an application is dirt simple:
# .ruby-version
2.3.0
The RBENV_VERSION variable also makes it easy to quickly specify a Ruby version via the command line. It’s first in line when rbenv checks for the current Ruby version.
Delegating the Workload
There are a few features in RVM that make it the heavier tool. RVM comes with its own Ruby installation mechanism:
rvm install ruby-2.3.0
With rbenv, you can either install Ruby yourself (by saving to ~/.rbenv/versions) or make use of ruby-build, a plugin that will install the versions for you. Like rbenv, ruby-build has a homebrew recipe.
brew install ruby-build
rbenv install 2.3.0
RVM gives the ability to separate dependencies by project with gemsets. Gemsets, however, are more of a thing of the past, thanks to the widespread use of Bundler.
With Bundler, one can easily manage the gems for a project.
gem install bundler
# Gemfile in root of application
source 'https://rubygems.org'

gem 'rails'
gem 'rspec'
Although most projects use Bundler now, the plugin rbenv-gemsets is the rbenv equivalent of gemsets.
Light is Might
While the versatility of RVM can be useful, when it comes to Ruby version management it can be overkill. Using rbenv allows you to keep things simple and let other tools handle the other aspects of the process. rbenv’s primary focus on Ruby versioning leads to a more dev-friendly setup and configuration. We have been using rbenv with our apps for a few years now. Partnered with Capistrano, rbenv-capistrano makes Ruby version maintenance for our deployable environments straightforward.
The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
Leveraging the AWS CLI, we can build infrastructure as code within the organization.