Posted in Information Technology

Why should I use OpenID?

http://openidexplained.com/

Faster and easier to sign up

[Comic about OpenID: a man at a cash register says, “Want to sign up for a Food-Mart membership?” and the customer says excitedly, “No thanks, I'll just use my OpenID!”]

An OpenID is a way of identifying yourself no matter which web site you visit. It’s like a driver’s license for the entire Internet. But, it’s even more than that because you can (if you want) associate information with your OpenID like your name and your e-mail address, and then you choose how much web sites get to see about you. This means that web sites that take advantage of OpenID won’t bother you for the same information over and over again.

Faster and easier to sign in

OpenID also simplifies signing in. With OpenID you only have to remember one username and one password. That’s because you log into websites with your OpenID, so your OpenID is the only thing you have to make secure. Now, you might already use one username and one password online, but OpenID lets you do this in a secure way. That’s because you only give your password to your OpenID provider, and then your provider tells the websites you’re visiting that you are who you say you are. No website other than your provider ever sees your password, so you don’t have to worry about an insecure website compromising your identity.

[Comic: a man is weighed down with seven heads, each one labeled with a different username of his. A man wearing an OpenID sweatshirt stands tall and excited, and says “hi” to the hydra-man.]

Remembering all those usernames can really weigh you down!

Closer to a unified “web identity”

[The logos of some of the larger companies getting behind OpenID: Microsoft, Aol, Livejournal, Orange, Plaxo, Bloglines, Six Apart, Sun Microsystems, Technorati, and WordPress.] Lots of companies are getting behind OpenID.

Because OpenID identifies you uniquely across the Internet, it is a way for web sites and other people to connect the different accounts you’ve created online into a more cohesive persona. Once you establish yourself as the person who uses a particular OpenID, whenever someone sees your OpenID in use, anywhere on the Internet, they’ll know that it’s you. Similarly, if you happen upon a new web site and see that someone with your friend’s OpenID has made a comment, you can be almost certain that it was actually her and not somebody who, by coincidence, has the same name.

That said, you might be worried that OpenID is going to make all of your activities online transparent. Your OpenID does unify information about you, but it only unifies information that you’ve already made public. And, you get to choose, using OpenID, which information to spread and to whom.

Is OpenID secure?

OpenID is no less (or more) secure than what you use right now. It’s true that if someone gets your OpenID’s username and password, they can usurp your online identity. But, that’s already possible. Most websites offer a service to e-mail you your password (or a new password) if you’ve forgotten it, which means that if someone breaks into your e-mail account, they can do just as much as they can if they get your OpenID’s username and password. They can test websites with which they think you have an account and ask for a forgotten password. Similarly, if someone gains access to your OpenID, they can scour the Internet for places they think you have accounts and log in as you… but nothing else.

Regardless of whether you use OpenID or not, you should be careful about your username and password. When you type your username and password, make sure you’re actually on the website you think you are (i.e., check the address).

Aren’t I entrusting my whole identity to one website?

Yes and no. You can, if you like, have multiple OpenIDs, each of which has some information about you. (In fact, many websites let you associate multiple OpenIDs with the same account.) But, that ruins the simplicity of only having one username and password. That’s why it’s smart to get your OpenID from a website you trust, and one that you expect to stick around. See How do I get an OpenID? for more information on choosing a good OpenID provider.

OpenID…

… is proof of identity

It is a way to prove you are who you say you are

… is not a trust system

It cannot guarantee you aren’t a jerk, or a spammer, or a robot, or…

… is used for signing up and logging in

You use OpenID to log into websites without making completely new accounts.

… is not Big Brother

It doesn’t keep track of what you do on those websites; that is still controlled by the websites.

… is different

It does take some getting used to.

… is not complicated

As you get used to it, it gets easier and easier.

… is secure

You only entrust your password to one website, as opposed to all websites.

… is not the only answer

All of the tips you’ve learned for staying secure online still apply. Make sure to choose an OpenID Provider you trust!

… is a step towards a cohesive Identity

It can help connect your online identity. People can be sure who you are across multiple sites.

… is not the end of privacy

You can choose when you use it and how you use it.

… is taking over the world

There are over 27,000 OpenID-enabled sites, and the number is growing.

… is not an elephant

OpenID is not an elephant.

Posted in Information Technology

Vault and Consul

https://www.vaultproject.io/docs/vs/consul.html

Consul is a system for service discovery, monitoring, and configuration that is distributed and highly available. Consul also supports an ACL system to restrict access to keys and service information.

While Consul can be used to store secret information and gate access using ACLs, it is not designed for that purpose. As such, data is not encrypted in transit nor at rest, it does not have pluggable authentication mechanisms, and there is no per-request auditing mechanism.

Vault is designed from the ground up as a secret management solution. As such, it protects secrets in transit and at rest. It provides multiple authentication and audit logging mechanisms. Dynamic secret generation allows Vault to avoid providing clients with root privileges to underlying systems and makes it possible to do key rolling and revocation.

The strength of Consul is that it is fault tolerant and highly scalable. By using Consul as a backend to Vault, you get the best of both. Consul is used for durable storage of encrypted data at rest and provides coordination so that Vault can be highly available and fault tolerant. Vault provides the higher level policy management, secret leasing, audit logging, and automatic revocation.

 

Posted in Information Technology

How Does SSL Work

Internet Security and Secure Online Transactions

As companies and organizations offer more online services and transactions, internet security becomes both a priority and a necessity, ensuring that sensitive information – such as a credit card number – is only transmitted to legitimate online businesses.

In order to keep customer information private and secure, companies and organizations need to add SSL certificates to their websites to enable secure online transactions.

What are SSL Certificates and Why do I Need Them?

SSL certificates are an essential component of the data encryption process that make internet transactions secure. They are digital passports that provide authentication to protect the confidentiality and integrity of website communication with browsers.

The SSL certificate’s job is to initiate secure sessions with the user’s browser via the secure sockets layer (SSL) protocol. This secure connection cannot be established without the SSL certificate, which digitally connects company information to a cryptographic key.

Any organization that engages in ecommerce must have an SSL certificate on its web server to ensure the safety of customer and company information, as well as the security of financial transactions.

How SSL Certificates Work

  • A browser or server attempts to connect to a website (i.e. a web server) secured with SSL. The browser/server requests that the web server identify itself.
  • The web server sends the browser/server a copy of its SSL certificate.
  • The browser/server checks to see whether or not it trusts the SSL certificate. If so, it sends a message to the web server.
  • The web server sends back a digitally signed acknowledgement to start an SSL encrypted session.
  • Encrypted data is shared between the browser/server and the web server.

[Diagram: How SSL Works]
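
The trust check in step 3 can be sketched with Python’s standard ssl module. This is a minimal illustration, not a full client: the default context below is what a client would use before wrapping a socket to perform the handshake described above.

```python
import ssl

# A minimal sketch of the client-side trust check: a default SSL context
# refuses connections unless the server presents a certificate signed by
# a trusted CA and matching the requested hostname.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # server must present a valid certificate
print(context.check_hostname)                    # certificate name must match the server
# context.wrap_socket(sock, server_hostname="example.com") would then
# perform the handshake and encrypt everything sent over the socket.
```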

There are many benefits to using SSL certificates. Namely, SSL customers can:

  • Utilize HTTPS, which can strengthen Google ranking
  • Create safer experiences for their customers
  • Build customer trust and improve conversions
  • Protect both customer and internal data
  • Encrypt browser-to-server and server-to-server communication
  • Increase the security of their mobile and cloud apps

Types of SSL/TLS Certificates

Extended Validation (EV) and Organization Validated (OV) certificates are widely used by organizations that want to provide their online customers with strong encryption technology and identity assurance. Encryption ensures that customer data like credit card information and passwords cannot be stolen as it is transmitted. Identity assurance gives website visitors the ability to identify that the website they’re on is legitimate. The amount of verification checking behind the various certificate types is reflected in the pricing variations. The increased vetting, particularly for EV and OV certificates, is what makes these high assurance certificates more expensive.

Extended Validation (EV) Certificates

EV certificates are preferred by most online users because they come with the most comprehensive verification checking, which includes domain verification as well as crosschecks that tie the entity to a specific physical location. This type of verification leaves a detailed paper trail providing customers with recourse should fraud take place while transacting on that website. EV certificates are distinguished with a locked padlock, organization name and sometimes the country ID in the web address bar in most major browsers.

Organization Validated (OV) Certificates

For OV certificates, in addition to domain ownership, the organization is validated and the certificate details can be viewed on most major web browsers, giving online users the opportunity to determine if the site they’re on is legitimate.

Domain Validated (DV) Certificates

A website secured with a DV certificate offers only a locked padlock in the address bar; it does not show organization details because they do not exist. These certificates validate domain ownership only, can be acquired anonymously, and do not tie a domain to a person, place or entity. For this reason, many websites using DV certificates are linked to fraudulent activity.


 

Posted in Information Technology

Internet of Things: Your Life and Your Business

When I think about the Internet of Things (IoT), a lot comes to mind. Most certainly, all the connected devices I own. I recently moved to a new house; and the prior owner left the washer and dryer behind. Soon after we moved in, we started having issues with the washer. I found the manual for it online and noticed that it said I could connect the machine to Wi-Fi. Then I could call a phone number, and after pressing a few buttons on the machine, it would “talk” to the computer that answered the phone call to help troubleshoot my specific issue, based on feedback from IoT sensors.

Depending on the information gathered, they could either help me fix the problem over the phone or, if needed, send a technician with the parts required. When I was reading the manual, I realized I hadn’t even noticed the Wi-Fi symbol on the machine. I ended up fixing the issue quickly on my own (a nearly clogged water inlet filter). But it’s great to know that next time I have a problem, I can press a couple of buttons and get an expedited solution. Count me in!

Businesses can leverage IoT in different ways. One is to improve customer experiences. With the washer example I just mentioned, customers don’t have to spend 30 minutes on one, two, or even three or more calls (we’ve all been there) trying to resolve an issue. I also recently had an experience with my internet/cable provider (yes, I’m old school). The cable suddenly went out, and none of the TVs in the house could access programming. I was dreading having to call my provider. But when I did, I was connected to someone almost immediately and within three minutes they explained the problem and reset the system. All they had to do was check the information coming from my device’s IoT sensors. Just like that, everything was back up and running. I was amazed at how easy the process was, and I share this experience any chance I get. They turned a problem into a good experience, and as a result, gained a loyal customer. Of course, I would prefer that my digital life never have any malfunctions. But I get it, we live in a complex world of machines and connections that sometimes malfunction. I feel like this is also IoT playing out in my daily life.

Companies can win financially with IoT, too. Take Navistar for example. They are a 100+ year-old transportation company, and it’s very likely you’ve interacted with their vehicles at one point or another. They recently implemented OnCommand Connection, a connected vehicle platform using IoT devices. Working with dozens of telematics providers and most any model truck, bus or engine, OnCommand Connection lets fleet providers monitor and manage vehicle health remotely. When a problem is detected, the platform proactively notifies drivers and routes them to nearby service centers where the parts they need are in inventory. They use Control-M, BMC’s application workflow orchestration solution, to power this program.

They have increased fleet uptime by 40% and improved driver and mechanic efficiency. Imagine how valuable this could be to a fleet manager at a transportation company: your vehicles have less downtime and are on the road making money while keeping your drivers safe. By adopting IoT into its business, Navistar is still finding ways to evolve with the times while keeping its core products relevant.

IoT is all about data and connectivity. I think in the years to come, as more of the things in our everyday lives will be connected and generating data, we will all continue to see benefits that we can’t even imagine today!

 

Posted in Information Technology

What is OAuth2

If OAuth2 is still a vague concept for you or you simply want to be sure you understand its behaviours, this article should interest you.

What is OAuth2?

OAuth2 is, you guessed it, version 2 of the OAuth protocol (also called a framework).

This protocol allows third-party applications to be granted limited access to an HTTP service, either on behalf of a resource owner or by allowing the third-party application to obtain access on its own behalf. Access is requested by a client, which can be a website or a mobile application, for example.

Version 2 is expected to simplify the previous version of the protocol and to facilitate interoperability between different applications.

Specifications are still being drafted and the protocol is constantly evolving but that does not prevent it from being implemented and acclaimed by several internet giants such as Google or Facebook.

Basic knowledge

Roles

OAuth2 defines 4 roles:

  • Resource Owner: generally yourself.
  • Resource Server: server hosting protected data (for example Google hosting your profile and personal information).
  • Client: application requesting access to a resource server (it can be your PHP website, a Javascript application or a mobile application).
  • Authorization Server: server issuing access tokens to the client. This token will be used by the client to query the resource server. This server can be the same as the resource server (same physical server and same application), and it is often the case.

Tokens

Tokens are random strings generated by the authorization server and are issued when the client requests them.

There are 2 types of token:

  • Access Token: this is the most important token because it allows user data to be accessed by a third-party application. This token is sent by the client as a parameter or as a header in each request to the resource server. It has a limited lifetime, which is defined by the authorization server. It should be kept confidential whenever possible, but we will see that this is not always feasible, especially when the client is a web browser that sends requests to the resource server via Javascript.
  • Refresh Token: this token is issued with the access token but, unlike the latter, it is not sent in each request from the client to the resource server. It is only sent to the authorization server to renew the access token when it has expired. For security reasons, it is not always possible to obtain this token. We will see later in what circumstances.

Access token scope

The scope is a parameter used to limit the rights of the access token. The authorization server defines the list of available scopes. The client must then send the scopes it wants to use for its application in the request to the authorization server. The narrower the scope, the more likely the resource owner is to authorize access.

More information: http://tools.ietf.org/html/rfc6749#section-3.3.
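
As a sketch, a client might build an authorization request that carries its scopes like this (the endpoint, client_id, redirect URI, scope names, and state value are all hypothetical):

```python
from urllib.parse import urlencode

# Hypothetical authorization request: the client asks only for the
# scopes it actually needs, keeping the request as narrow as possible.
params = {
    "response_type": "code",
    "client_id": "my-client-id",
    "redirect_uri": "https://client.example.com/callback",
    "scope": "profile email",   # space-separated list of requested scopes
    "state": "xyzABC123",       # anti-CSRF value, explained further below
}
authorize_url = "https://auth.example.com/authorize?" + urlencode(params)
print(authorize_url)
```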

HTTPS

OAuth2 requires the use of HTTPS for communication between the client and the authorization server because sensitive data passes between the two (tokens and possibly resource owner credentials). You are not actually forced to do so if you implement your own authorization server, but you should know that you are opening a big security hole by skipping it.

Register as a client

Since you want to retrieve data from a resource server using OAuth2, you have to register as a client of the authorization server.

Each provider is free to support this using the method of its choice. The protocol only defines the parameters that must be specified by the client and those to be returned by the authorization server.

Here are the parameters (they may differ depending on the provider):

Client registration

  • Application Name: the application name
  • Redirect URLs: URLs of the client for receiving authorization code and access token
  • Grant Type(s): authorization types that will be used by the client
  • Javascript Origin (optional): the hostname that will be allowed to request the resource server via XMLHttpRequest

Authorization server response

  • Client Id: unique random string
  • Client Secret: secret key that must be kept confidential

More information: RFC 6749 — Client Registration.

Authorization grant types

OAuth2 defines 4 grant types depending on the location and the nature of the client involved in obtaining an access token.

Authorization Code Grant

When should it be used?

It should be used whenever the client is a web server. It allows you to obtain a long-lived access token, since it can be renewed with a refresh token (if the authorization server enables it).

Example:

  • Resource Owner: you
  • Resource Server: a Google server
  • Client: any website
  • Authorization Server: a Google server

Scenario:

  1. A website wants to obtain information about your Google profile.
  2. You are redirected by the client (the website) to the authorization server (Google).
  3. If you authorize access, the authorization server sends an authorization code to the client (the website) in the callback response.
  4. Then, this code is exchanged for an access token between the client and the authorization server.
  5. The website is now able to use this access token to query the resource server (Google again) and retrieve your profile data.

You never see the access token; it will be stored by the website (in session, for example). Google also sends other information with the access token, such as the token lifetime and possibly a refresh token.

This is the ideal scenario and the safest one, because the access token is not exposed on the client side (the web browser in our example).
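
Step 4 of the scenario can be sketched as follows. The endpoint, code, and client credentials below are hypothetical values for illustration; the request itself would be an HTTPS POST from the client’s server:

```python
from urllib.parse import urlencode

# Sketch of the code-for-token exchange: this happens server-to-server,
# so the client_secret never reaches the user's browser.
token_request = {
    "grant_type": "authorization_code",
    "code": "OGI2NmY2NjYxN2Y4YzE3",           # code received in the callback
    "redirect_uri": "https://client.example.com/callback",
    "client_id": "my-client-id",
    "client_secret": "my-client-secret",       # stays on the server
}
body = urlencode(token_request)
# This body would be POSTed to the authorization server's token endpoint;
# the response contains access_token, expires_in, and possibly refresh_token.
print(body)
```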

More information: RFC 6749 — Authorization Code Grant.

Sequence diagram:

Authorization Code Grant Flow

Implicit Grant

When should it be used?

It is typically used when the client is running in a browser using a scripting language such as Javascript. This grant type does not allow the issuance of a refresh token.

Example:

  • Resource Owner: you
  • Resource Server: a Facebook server
  • Client: a website using AngularJS for example
  • Authorization Server: a Facebook server

Scenario:

  1. The client (AngularJS) wants to obtain information about your Facebook profile.
  2. You are redirected by the browser to the authorization server (Facebook).
  3. If you authorize access, the authorization server redirects you to the website with the access token in the URI fragment (not sent to the web server). Example of callback: http://example.com/oauthcallback#access_token=MzJmNDc3M2VjMmQzN.
  4. This access token can now be retrieved and used by the client (AngularJS) to query the resource server (Facebook). Example of query: https://graph.facebook.com/me?access_token=MzJmNDc3M2VjMmQzN.
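
Step 3 can be sketched as follows. Extracting the token from the fragment is normally done in browser Javascript; Python is used here for illustration, reusing the example callback above:

```python
from urllib.parse import urlparse, parse_qs

# The access token travels in the URI fragment (after '#'), which the
# browser never sends to the web server -- only client-side code sees it.
callback = "http://example.com/oauthcallback#access_token=MzJmNDc3M2VjMmQzN"
fragment = urlparse(callback).fragment
token = parse_qs(fragment)["access_token"][0]
print(token)
```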

Maybe you are wondering how the client can call the Facebook API with Javascript without being blocked by the Same Origin Policy? This cross-domain request is possible because Facebook allows it, thanks to a header called Access-Control-Allow-Origin present in the response.

More information about Cross-Origin Resource Sharing (CORS): https://developer.mozilla.org/en-US/docs/HTTP/Access_control_CORS#The_HTTP_response_headers.

Attention! This grant type should only be used if no other grant type is available. It is the least secure, because the access token is exposed (and therefore vulnerable) on the client side.

More information: RFC 6749 — Implicit Grant.

Sequence diagram:

Implicit Grant Flow

Resource Owner Password Credentials Grant

When should it be used?

With this grant type, the user’s credentials (and thus the password) are sent to the client and then to the authorization server. It is therefore imperative that there is absolute trust between these two entities. It is mainly used when the client has been developed by the same authority as the authorization server. For example, we could imagine a website named example.com seeking access to the protected resources of its own subdomain api.example.com. The user would not be surprised to type his login/password on the site example.com, since his account was created on it.

Example:

  • Resource Owner: you having an account on acme.com website of the Acme company
  • Resource Server: Acme company exposing its API at api.acme.com
  • Client: acme.com website from Acme company
  • Authorization Server: an Acme server

Scenario:

  1. Acme company, doing things well, provides a RESTful API for third-party applications.
  2. The company thinks it would be convenient to use its own API to avoid reinventing the wheel.
  3. The company needs an access token to call the methods of its own API.
  4. For this, the company asks you to enter your login credentials via a standard HTML form, as you normally would.
  5. The server-side application (the acme.com website) exchanges your credentials for an access token from the authorization server (if your credentials are valid, of course).
  6. This application can now use the access token to query its own resource server (api.acme.com).

More information: RFC 6749 — Resource Owner Password Credentials Grant.

Sequence diagram:

Resource Owner Password Credentials Grant Flow

Client Credentials Grant

When should it be used?

This grant type is used when the client is itself the resource owner. There is no authorization to obtain from an end-user.

Example:

  • Resource Owner: any website
  • Resource Server: Google Cloud Storage
  • Client: the resource owner
  • Authorization Server: a Google server

Scenario:

  1. A website stores its files of any kind on Google Cloud Storage.
  2. The website must go through the Google API to retrieve or modify files and must authenticate with the authorization server.
  3. Once authenticated, the website obtains an access token that can now be used for querying the resource server (Google Cloud Storage).

Here, the end-user does not have to give any authorization for accessing the resource server.

More information: RFC 6749 — Client Credentials Grant.

Sequence diagram:

Client Credentials Grant Flow

Access token usage

The access token can be sent in several ways to the resource server.

Request parameter (GET or POST)

Example using GET: https://api.example.com/profile?access_token=MzJmNDc3M2VjMmQzN

This is not ideal because the token can be found in the access logs of the web server.

Authorization header

GET /profile HTTP/1.1
Host: api.example.com
Authorization: Bearer MzJmNDc3M2VjMmQzN

It is elegant, but not all resource servers support it.
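
For illustration, here is the same header built with Python’s standard library, reusing the example URL and token above (no request is actually sent):

```python
import urllib.request

# Build (but do not send) a request carrying the access token as a
# Bearer token in the Authorization header.
req = urllib.request.Request(
    "https://api.example.com/profile",
    headers={"Authorization": "Bearer MzJmNDc3M2VjMmQzN"},
)
print(req.get_header("Authorization"))
```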

Security

OAuth2 is sometimes criticized for being insecure, but that is often due to bad implementations of the protocol. There are big mistakes to avoid when using it; here are some examples.

Vulnerability in Authorization Code Grant

There is a vulnerability in this flow that allows an attacker to steal a user’s account under certain conditions. This hole is often encountered, including on many well-known websites (such as Pinterest, SoundCloud, Digg, …) that have not properly implemented the flow.

Example:

  • Your victim has a valid account on a website called A.
  • The A website allows a user to log in or register with Facebook, and has previously registered as a client with Facebook’s OAuth2 authorization server.
  • You click on the Facebook Connect button of website A but do not follow the redirection, thanks to the Firefox NoRedirect addon or by using Burp, for example (the callback looks like this: http://site-internet-a.com/facebook/login?code=OGI2NmY2NjYxN2Y4YzE3).
  • You get the URL (containing the authorization code) to which you would have been redirected (visible in Firebug).
  • Now you have to get your victim to visit this URL, via a hidden iframe on a website or an image in an email, for example.
  • If the victim is logged in to website A, jackpot! You now have access to the victim’s account on website A with your Facebook account. You just have to click on the Facebook Connect button and you will be connected as the victim.

Workaround:

There is a way to prevent this by adding a “state” parameter. This parameter is only recommended, not required, by the specification. If the client sends it when requesting an authorization code, it will be returned unchanged by the authorization server in the response and will be compared by the client before exchanging the authorization code for the access token. The parameter generally corresponds to a unique hash of a random number that is stored in the user session. For example, in PHP: sha1(uniqid(mt_rand(), true)).

In our example, if website A had been using the “state” parameter, it would have noticed in the callback that the hash did not match the one stored in the victim’s session, and would therefore have prevented the theft of the victim’s account.
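
As a sketch, the “state” round-trip might look like this in Python (the session dict stands in for real server-side session storage; the hashing mirrors the PHP example above):

```python
import hashlib
import secrets

# Stand-in for real server-side session storage.
session = {}

def make_state():
    # Unique hash of random bytes, stored in the user's session and
    # sent along with the authorization request.
    state = hashlib.sha1(secrets.token_bytes(16)).hexdigest()
    session["oauth_state"] = state
    return state

def check_state(returned_state):
    # Constant-time comparison of the value echoed back by the
    # authorization server with the one stored in the session.
    return secrets.compare_digest(session.get("oauth_state", ""), returned_state)

state = make_state()
print(check_state(state))     # the legitimate callback passes
print(check_state("forged"))  # an attacker-planted callback is rejected
```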

More information: RFC 6749 — Cross-Site Request Forgery.

Vulnerability in Implicit Grant

This grant type is the least secure of all because it exposes the access token on the client side (in Javascript most of the time). There is a widespread hole that stems from the fact that the client does not know whether the access token was generated for it or not (the Confused Deputy Problem). This allows an attacker to steal a user account.

Example:

  • An attacker aims to steal a victim’s account on a website A. This website allows you to connect via your Facebook account and uses implicit authorization.
  • The attacker creates a website B allowing login via Facebook too.
  • The victim logs in to website B with his Facebook account, thereby implicitly authorizing the generation of an access token.
  • The attacker retrieves the access token via his website B and uses it on website A by substituting it in the URI fragment. If website A is not protected against this attack, the victim’s account is compromised and the attacker now has access to it.

Workaround:

To avoid this, the authorization server must provide in its API a way to retrieve information about an access token. Website A would then be able to compare the client_id of the attacker’s access token with its own client_id. Since the stolen access token was generated for website B, its client_id would differ from website A’s client_id and the connection would be refused.

Google describes this in its API documentation: https://developers.google.com/accounts/docs/OAuth2Login#validatingtoken.

More information in RFC: http://tools.ietf.org/html/rfc6819#section-4.4.2.6

Clickjacking

This technique allows the attacker to trick the victim by hiding the authorization page in a transparent iframe and getting the victim to click a link that is visually placed over the “Allow” button of the authorization page.

Example:

OAuth2 Clickjacking

Workaround:

To avoid this, the authorization server must return a header named X-Frame-Options on the authorization page with the value DENY or SAMEORIGIN. This prevents the authorization page from being displayed in an iframe (DENY), or requires the domain name of the main page to match the domain name specified in the iframe “src” attribute (SAMEORIGIN).

This header is not standard but is supported in the following browsers: IE8+, Firefox3.6.9+, Opera10.5+, Safari4+, Chrome 4.1.249.1042+.

More information: https://developer.mozilla.org/en-US/docs/HTTP/X-Frame-Options.
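
As a sketch, a framework-agnostic WSGI middleware could add the header to every response from the authorization page (all names below are illustrative):

```python
def deny_framing(app):
    # Wrap a WSGI app so every response carries X-Frame-Options: DENY.
    def wrapper(environ, start_response):
        def guarded_start_response(status, headers, exc_info=None):
            headers = list(headers) + [("X-Frame-Options", "DENY")]
            return start_response(status, headers, exc_info)
        return app(environ, guarded_start_response)
    return wrapper

def authorize_page(environ, start_response):
    # Stand-in for the real authorization page with the "Allow" button.
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<button>Allow</button>"]

# Minimal demonstration without a real server:
captured = {}
def fake_start_response(status, headers, exc_info=None):
    captured["status"], captured["headers"] = status, headers

body = deny_framing(authorize_page)({}, fake_start_response)
print(captured["headers"])
```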

Here is the RFC that lists the potential vulnerabilities in the protocol implementations and the countermeasures: http://tools.ietf.org/html/rfc6819.

Posted in Information Technology

CQRS Pattern

https://martinfowler.com/bliki/CQRS.html

CQRS stands for Command Query Responsibility Segregation. It’s a pattern that I first heard described by Greg Young. At its heart is the notion that you can use a different model to update information than the model you use to read information. For some situations, this separation can be valuable, but beware that for most systems CQRS adds risky complexity.

The mainstream approach people use for interacting with an information system is to treat it as a CRUD datastore. By this I mean that we have a mental model of some record structure where we can create new records, read records, update existing records, and delete records when we’re done with them. In the simplest case, our interactions are all about storing and retrieving these records.

As our needs become more sophisticated we steadily move away from that model. We may want to look at the information in a different way to the record store, perhaps collapsing multiple records into one, or forming virtual records by combining information from different places. On the update side we may find validation rules that only allow certain combinations of data to be stored, or may even infer data to be stored that’s different from that we provide.

As this occurs we begin to see multiple representations of information. When users interact with the information they use various presentations of this information, each of which is a different representation. Developers typically build their own conceptual model which they use to manipulate the core elements of the model. If you’re using a Domain Model, then this is usually the conceptual representation of the domain. You typically also make the persistent storage as close to the conceptual model as you can.

This structure of multiple layers of representation can get quite complicated, but when people do this they still resolve it down to a single conceptual representation which acts as a conceptual integration point between all the presentations.

The change that CQRS introduces is to split that conceptual model into separate models for update and display, which it refers to as Command and Query respectively, following the vocabulary of CommandQuerySeparation. The rationale is that for many problems, particularly in more complicated domains, having the same conceptual model for commands and queries leads to a more complex model that does neither well.

By separate models we most commonly mean different object models, probably running in different logical processes, perhaps on separate hardware. A web example would see a user looking at a web page that’s rendered using the query model. If they initiate a change, that change is routed to the separate command model for processing, and the resulting change is communicated to the query model to render the updated state.
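The routing described above can be sketched in a few lines. This is a minimal, illustrative sketch only: the class and method names (`CommandModel`, `QueryModel`, `handle_add_stock`, and so on) are hypothetical, not taken from any framework, and the "communication" between the two sides is a direct method call standing in for whatever mechanism a real system would use.

```python
class QueryModel:
    """Read side: holds a denormalized view, serves page renders."""
    def __init__(self):
        self._view = {}

    def apply(self, sku, level):
        # Called when the command side reports a change.
        self._view[sku] = {"sku": sku, "level": level}

    def stock_page(self, sku):
        return self._view.get(sku)


class CommandModel:
    """Write side: validates and processes changes, then notifies the read side."""
    def __init__(self, query_model):
        self._stock = {}
        self._query_model = query_model

    def handle_add_stock(self, sku, quantity):
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self._stock[sku] = self._stock.get(sku, 0) + quantity
        # Communicate the resulting change to the query model.
        self._query_model.apply(sku, self._stock[sku])


queries = QueryModel()
commands = CommandModel(queries)
commands.handle_add_stock("widget-42", 10)
print(queries.stock_page("widget-42"))  # {'sku': 'widget-42', 'level': 10}
```

Note that reads never touch the command model's state, and writes never go through the query model; each side can now evolve, and scale, on its own.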

There’s room for considerable variation here. The in-memory models may share the same database, in which case the database acts as the communication between the two models. However they may also use separate databases, effectively making the query-side’s database into a real-time ReportingDatabase. In this case there needs to be some communication mechanism between the two models or their databases.

The two models might not be separate object models; it could be that the same objects have different interfaces for their command side and their query side, rather like views in relational databases. But usually when I hear of CQRS, they are clearly separate models.
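The "same objects, different interfaces" variation might look like the following sketch, which uses Python's structural `Protocol` types to give one object a read-only face and a write face. The names (`AccountReader`, `AccountWriter`, `render_statement`) are made up for illustration.

```python
from typing import Protocol


class AccountReader(Protocol):
    """Query-side interface: no mutating methods."""
    def balance(self) -> int: ...


class AccountWriter(Protocol):
    """Command-side interface: mutations only."""
    def deposit(self, amount: int) -> None: ...


class Account:
    """One underlying object that satisfies both interfaces."""
    def __init__(self) -> None:
        self._balance = 0

    def balance(self) -> int:
        return self._balance

    def deposit(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount


def render_statement(reader: AccountReader) -> str:
    # Display code is typed against the reader interface only,
    # so it cannot mutate the account.
    return f"balance: {reader.balance()}"


acct = Account()
acct.deposit(100)
print(render_statement(acct))  # balance: 100
```

As with a relational view, the split here is in what callers are allowed to see, not in where the data lives.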

CQRS naturally fits with some other architectural patterns.

  • As we move away from a single representation that we interact with via CRUD, we can easily move to a task-based UI.
  • CQRS fits well with event-based programming models. It’s common to see CQRS systems split into separate services communicating with Event Collaboration. This allows these services to easily take advantage of Event Sourcing.
  • Having separate models raises questions about how hard it is to keep those models consistent, which raises the likelihood of using eventual consistency.
  • For many domains, much of the logic is needed when you’re updating, so it may make sense to use EagerReadDerivation to simplify your query-side models.
  • If the write model generates events for all updates, you can structure read models as EventPosters, allowing them to be MemoryImages and thus avoiding a lot of database interactions.
  • CQRS is suited to complex domains, the kind that also benefit from Domain-Driven Design.
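The Event Sourcing combination in the list above comes down to this: if every update is recorded as an event, a read model is just a fold over the event log. Here is a hedged toy sketch; the event names and shapes are invented for illustration.

```python
# A hypothetical event log produced by the write side.
events = [
    ("ItemAdded",   {"sku": "a", "qty": 3}),
    ("ItemAdded",   {"sku": "a", "qty": 2}),
    ("ItemRemoved", {"sku": "a", "qty": 1}),
]


def project(events):
    """Rebuild a stock-level read model by replaying the event log."""
    view = {}
    for kind, data in events:
        if kind == "ItemAdded":
            view[data["sku"]] = view.get(data["sku"], 0) + data["qty"]
        elif kind == "ItemRemoved":
            view[data["sku"]] = view.get(data["sku"], 0) - data["qty"]
    return view


print(project(events))  # {'a': 4}
```

Because the read model is derived entirely from the log, it can be kept in memory (a MemoryImage) and rebuilt from scratch at any time, which is what lets the query side avoid most database interactions.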

When to use it

Like any pattern, CQRS is useful in some places, but not in others. Many systems do fit a CRUD mental model, and so should be done in that style. CQRS is a significant mental leap for all concerned, so it shouldn’t be tackled unless the benefit is worth the jump. While I have come across successful uses of CQRS, so far the majority of cases I’ve run into have not been so good, with CQRS seen as a significant force for getting a software system into serious difficulties.

In particular CQRS should only be used on specific portions of a system (a BoundedContext in DDD lingo) and not the system as a whole. In this way of thinking, each Bounded Context needs its own decisions on how it should be modeled.

So far I see benefits in two directions. The first is that a few complex domains may be easier to tackle by using CQRS. I must stress, however, that such suitability for CQRS is very much the minority case. Usually there’s enough overlap between the command and query sides that sharing a model is easier. Using CQRS on a domain that doesn’t match it will add complexity, thus reducing productivity and increasing risk.

The other main benefit is in handling high-performance applications. CQRS allows you to separate the load from reads and writes, allowing you to scale each independently. If your application sees a big disparity between reads and writes this is very handy. Even without that, you can apply different optimization strategies to the two sides. An example of this is using different database access techniques for read and update.

If your domain isn’t suited to CQRS, but you have demanding queries that add complexity or performance problems, remember that you can still use a ReportingDatabase. CQRS uses a separate model for all queries. With a reporting database you still use your main system for most queries, but offload the more demanding ones to the reporting database.

Despite these benefits, you should be very cautious about using CQRS. Many information systems fit well with the notion of an information base that is updated in the same way that it’s read; adding CQRS to such a system can add significant complexity. I’ve certainly seen cases where it’s made a significant drag on productivity, adding an unwarranted amount of risk to the project, even in the hands of a capable team. So while CQRS is a pattern that’s good to have in the toolbox, beware that it is difficult to use well and you can easily chop off important bits if you mishandle it.

Posted in Information Technology

Robotic Process Automation

What is Robotic Process Automation?

Robotic Process Automation is the technology that allows anyone today to configure computer software, or a “robot,” to emulate and integrate the actions of a human interacting within digital systems to execute a business process. RPA robots utilize the user interface to capture data and manipulate applications just like humans do. They interpret, trigger responses, and communicate with other systems in order to perform a vast variety of repetitive tasks. Only substantially better: an RPA software robot never sleeps, makes zero mistakes, and costs a lot less than an employee.

How is RPA different from other enterprise automation tools?

In contrast to other, traditional IT solutions, RPA allows organizations to automate at a fraction of the cost and time previously encountered. RPA is also non-intrusive in nature and leverages the existing infrastructure without causing disruption to underlying systems, which would be difficult and costly to replace. With RPA, cost efficiency and compliance are no longer an operating cost but a byproduct of the automation.

How does Robotic Process Automation work?

RPA robots are capable of mimicking many–if not most–human user actions. They log into applications, move files and folders, copy and paste data, fill in forms, extract structured and semi-structured data from documents, scrape browsers, and more.
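Two of the actions listed above, moving files and extracting structured data, can be illustrated with ordinary scripting. This is not an RPA tool (real RPA products drive the application UI itself); it is only a stand-in, using the standard library and invented file names, to show the kind of steps a robot would be configured to perform.

```python
import csv
import pathlib
import shutil
import tempfile

# Hypothetical working area: an inbox where documents arrive
# and an archive where processed documents are filed.
workdir = pathlib.Path(tempfile.mkdtemp())
inbox = workdir / "inbox"
archive = workdir / "archive"
inbox.mkdir()
archive.mkdir()

# A form-like document (invented contents) lands in the inbox.
src = inbox / "orders.csv"
src.write_text("sku,qty\nwidget,5\ngadget,2\n")

# Step 1: "move files and folders" -- file the document in the archive.
dest = archive / src.name
shutil.move(str(src), str(dest))

# Step 2: "extract structured data" -- pull the rows out of the document.
with dest.open(newline="") as f:
    rows = list(csv.DictReader(f))

print(rows)  # [{'sku': 'widget', 'qty': '5'}, {'sku': 'gadget', 'qty': '2'}]
```

In a real deployment the equivalent steps would be recorded or configured against the target applications' screens, rather than written as code.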