
Disaster Recovery for the Cloud

In recent decades, cloud computing has gained popularity due to benefits ranging from cost optimization and access to high-performance IT infrastructure to security, compliance, and ease of doing business. However, these advantages are only realized as long as the service is available and functioning to the expected reliability standards. To maximize the reliability of IT services delivered from off-site cloud datacenters, vendors and customers of cloud computing follow disaster recovery strategies. These practices are designed to mitigate the risks of operating mission-critical apps and data from cloud datacenters, which are not immune to natural disasters, cyber-attacks, power outages, networking issues, and other technical or business challenges affecting service availability to end users.

Unplanned datacenter downtime costs businesses over $80,000 per hour, according to recent research. While large enterprises may be able to contain the financial damage associated with downtime incidents, small and midsize businesses suffer the most damaging consequences. Research suggests that organizations without an adequate disaster recovery plan often go into liquidation within 18 months of suffering a major downtime incident. This makes disaster recovery planning critical to business success amid growing dependence on cloud-enabled IT services, cybersecurity issues, and power outage concerns.

Disaster Recovery (DR) is a component of security planning that comprises the technologies, practices, and policies used to recover from a disaster that impacts the availability, functionality, or performance of an IT service. The disaster may result from a human, technological, or natural incident. Disaster recovery is a subset of business continuity, which deals with the larger picture of keeping the business running in the first place. While business continuity involves the processes and strategy to ensure a functioning IT service during and after a disaster, disaster recovery involves the measures and mechanisms that help regain application functionality and access to data following a disaster incident. The following is a brief guide to get you started with your disaster recovery planning initiatives:

Planning and Preparation

Disaster recovery planning is unique to every organization and depends on the metrics best suited to evaluating the recovery of an IT service following a disaster. Organizations need to identify the required resilience level for their development, testing, and production environments, and implement disaster recovery plans accordingly. The metrics in consideration could include the Recovery Point Objective (RPO), the maximum acceptable age of the business data that must be recovered after the disaster (in effect, how much data loss is tolerable), and the Recovery Time Objective (RTO), the maximum acceptable time during which the IT service remains unavailable. For example, an RPO of 15 minutes implies that backups or replication must capture changes at least every 15 minutes, while an RTO of one hour means the service must be restored within an hour of the incident. These metrics should be aligned with the organizational goals of business continuity and must evolve over time as the organization scales and faces a different set of challenges in achieving those goals.

For customers of cloud infrastructure services, the requirements on these metrics should be defined in the SLA. High-availability architectures such as hybrid and multi-cloud environments offer improved operational performance in terms of service availability. However, the tradeoff between cost, availability, performance, and other associated parameters should be considered for each investment option.

The following best practices should be employed in developing a disaster recovery program for your organization:

  • Understand how your organization defines a disaster.
  • Define your requirements. Understand your RPO and RTO requirements for different workloads and applications.
  • Re-evaluate disaster recovery on an ongoing basis to account for changing technical and business requirements.
  • Confirm that the organization can execute the disaster recovery plan in real scenarios. Consider employee awareness and training, disaster recovery exercises, and drills.

Know Your Options

Disaster recovery solutions may involve a diverse range of options for different DR goals. A well-designed strategy balances cost, practicality, and IT burden against disaster recovery performance. For instance, if a car risks a puncture while driving, would you rather run expensive run-flat tires; run regular tires and keep a spare wheel with a replacement kit in the car; or run regular tires, carry no spare wheel, and rely on roadside assistance to replace a flat? Each option has its own set of implications and requires a strategic assessment of the disaster recovery goals. Organizations may follow a holistic disaster recovery plan that incorporates different disaster recovery patterns for different use cases as appropriate. For instance, a mission-critical app may require short RTO/RPO targets, while an external marketing database may not impact business operations even if it remains unavailable for a long duration following a disaster.

Testing Your Disaster Recovery Capability

Organizations can develop the most applicable and appropriate disaster recovery program and yet fail to implement its measures in practical, real-world environments. These failures are often caused by limited employee training and by real-world situations that were overlooked during the disaster recovery planning stages. The proof is therefore in testing your disaster recovery program at frequent, regular intervals. These intervals may range from one to four times per year, although some fast-growing organizations may even resort to monthly testing exercises depending on their technical requirements or regulatory concerns.

The testing procedures should extend beyond the technology capabilities and encompass the people and processes. Disaster recovery simulations can help organizations understand how the technology will behave in transferring workloads across geographic locations if the primary datacenter is hit with a power outage. But what about the workforce responsible for executing the policies and procedures designed to streamline the disaster recovery process? This means that the disaster recovery program should also consider the education and training of employees responsible for executing key protocols to recover from a disaster situation.

Finally, it is important to keep up-to-date documentation of disaster recovery performance during exercises as well as real-world disaster incidents. Use this information as a feedback loop to tune your disaster recovery capabilities to your organizational requirements. Disaster recovery for different cloud architecture models may be treated according to the impact on business and the technical requirements. For instance, multi-cloud environments may be less exposed to disaster situations, given appropriate SLAs covering multiple datacenter locations, RPO/RTO targets, and other metrics. Organizations must therefore evaluate which cloud service model optimally fulfils their disaster recovery requirements for the different apps and data sets used to perform daily business operations.


Build Secure Microservices in Your Spring REST API

Users ask us for new features, bug fixes, and changes in domain logic for our applications every day. As any project (especially a monolith) grows, it often becomes difficult to maintain, and the barrier to entry for a new developer joining the project gets higher and higher.

In this tutorial, I’ll walk you through building a secure Spring REST API that addresses some of these pain points using a microservices architecture.

In a typical microservices architecture, you divide your application into several apps that can be more easily maintained and scaled, use different stacks, and support more teams working in parallel. But microservices are not a simple solution to every scaling and maintenance problem.

Microservices also present a number of architectural challenges that must be addressed:

  • How should those services communicate?
  • How should communication failures and availability be handled?
  • How can a user’s requests be traced between services?
  • And, how should you handle user authorization to access a single service?

Let’s dig in and find out how to address these challenges when building a Spring REST API.

Secure Your Spring REST API With OAuth 2.0

In OAuth 2.0, a resource server is a service designed to handle domain-logic requests and does not have any kind of login workflow or complex authentication mechanism: it receives a pre-obtained access token that proves a user has granted permission to access the server, and it delivers the expected response.

In this post, you are going to build a simple Resource Server with Spring Boot and Okta to demonstrate how easy it is. You will implement a simple Resource Server that will receive and validate a JWT.

Add a Resource Server to Your Spring REST API

This example uses Okta to handle the entire authentication process. You can register for a free-forever developer account that will enable you to create as many users and applications as you need.

I have set up some things so we can get started easily. Please clone the following resource repository and check out the startup tag, as follows:

git clone -b startup https://github.com/oktadeveloper/okta-secure-spring-rest-api-example secure-spring-rest-api
cd secure-spring-rest-api

This project has the following structure:

$ tree .
.
├── README.md
├── mvnw
├── mvnw.cmd
├── pom.xml
└── src
    ├── main
    │   ├── java
    │   │   └── net
    │   │       └── dovale
    │   │           └── okta
    │   │               └── secure_rest_api
    │   │                   ├── HelloWorldController.java
    │   │                   ├── SecureRestApiApplication.java
    │   │                   └── SecurityConfig.java
    │   └── resources
    │       └── application.properties
    └── test
        └── java
            └── net
                └── dovale
                    └── okta
                        └── secure_rest_api
                            └── SecureRestApiApplicationTests.java

14 directories, 9 files

I created it using the excellent Spring Initializr, adding the Web and Security dependencies. Spring Initializr provides an easy way to create a new Spring Boot service with some common auto-discovered dependencies. It also adds the Maven Wrapper: you use the command mvnw instead of mvn, and the wrapper detects whether you have the designated Maven version installed; if not, it downloads it and runs the specified command.

The file HelloWorldController is a simple @RestController that outputs “Hello World.”
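If you want a mental model of that class before opening the repository, here is a minimal sketch of what HelloWorldController plausibly contains (the file in the cloned repository is the source of truth):

package net.dovale.okta.secure_rest_api;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Minimal sketch; the real class lives in the cloned repository.
@RestController
public class HelloWorldController {

    // Handles GET / and returns a plain-text greeting.
    @GetMapping("/")
    public String helloWorld() {
        return "Hello World";
    }
}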

In a terminal, you can run the following command and see Spring Boot start:

mvnw spring-boot:run

TIP: If this command doesn’t work for you, try ./mvnw spring-boot:run instead.

Once it finishes loading, you’ll have a REST API ready and set to deliver to you a glorious Hello World message!

> curl http://localhost:8080/
Hello World

TIP: The curl command is not available by default for Windows users. You can download it from the curl project’s website.

Now, you need to properly create a protected Resource Server.

Set Up an OAuth 2.0 Resource Server

In the Okta dashboard, create an application of type Service. This type indicates a resource server that does not have a login page or any way to obtain new tokens.

[Image: Create new Service]

Click Next, type the name of your service, and then click Done. You will be presented with a screen similar to the one below. Copy and paste your Client ID and Client Secret for later. They will be useful when you are configuring your application.

[Image: Service Created]

Now, let’s code something!

Edit the pom.xml file and add dependencies for Spring Security and Okta. They will enable all the Spring AND Okta OAuth 2.0 goodness you need:

<!-- security - begin -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-oauth2</artifactId>
</dependency>
<dependency>
    <groupId>com.okta.spring</groupId>
    <artifactId>okta-spring-boot-starter</artifactId>
    <version>0.6.1</version>
</dependency>
<!-- security - end -->

By simply adding this dependency, your code is going to be like a locked house without a key. No one can access your API until you provide a key to your users. Run the command below again.

mvnw spring-boot:run

Now, try to access the Hello World resource:

> curl http://localhost:8080/
{"timestamp":"2018-11-30T01:35:30.038+0000","status":401,"error":"Unauthorized","message":"Unauthorized","path":"/"}

Add Spring Security to Your REST API

Spring Boot has a lot of classpath magic and is able to discover and automatically configure dependencies. Since you have added Spring Security, it automatically secured your resources. Now, you need to configure Spring Security so you can properly authenticate the requests.

NOTE: If you are struggling, you can check the modifications in Git branch step-1-security-dependencies.

To do that, modify application.properties as follows (replace the placeholders with your Okta domain and the client ID and client secret the Okta dashboard provided for your application):

okta.oauth2.issuer=https://{yourOktaDomain}/oauth2/default
okta.oauth2.clientId={clientId}
okta.oauth2.clientSecret={clientSecret}
okta.oauth2.scopes=openid
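One aside that is not part of the original tutorial: to keep the secret out of source control, you can have Spring Boot resolve it from an environment variable via its standard property placeholders, for example:

okta.oauth2.clientSecret=${OKTA_CLIENT_SECRET}

Then export OKTA_CLIENT_SECRET in the shell where you run mvnw spring-boot:run.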

Spring Boot uses annotations and code for configuring your application so you do not need to edit super boring XML files. This means you can use the Java compiler to validate your configuration!

I usually split configuration into different classes, each with its own purpose. Create the class net.dovale.okta.secure_rest_api.SecurityConfig as follows:

package net.dovale.okta.secure_rest_api;

import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.oauth2.config.annotation.web.configuration.EnableResourceServer;

@EnableWebSecurity
@EnableResourceServer
public class SecurityConfig  {}

Allow me to explain what the annotations here do:

  • @EnableWebSecurity tells Spring we are going to use Spring Security to provide web security mechanisms
  • @EnableResourceServer is a convenient annotation that enables request authentication through OAuth 2.0 tokens. Normally, you would provide a ResourceServerConfigurer bean, but Okta’s Spring Boot starter conveniently provides one for you (a rough sketch of the manual alternative follows below).
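For the curious, here is a rough sketch of what supplying that configuration yourself might look like. You don’t need this with the Okta starter, and the single rule shown is illustrative only:

package net.dovale.okta.secure_rest_api;

import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.oauth2.config.annotation.web.configuration.EnableResourceServer;
import org.springframework.security.oauth2.config.annotation.web.configuration.ResourceServerConfigurerAdapter;

@EnableWebSecurity
@EnableResourceServer
public class SecurityConfig extends ResourceServerConfigurerAdapter {

    @Override
    public void configure(HttpSecurity http) throws Exception {
        // Require a valid token for every request handled by this resource server.
        http.authorizeRequests().anyRequest().authenticated();
    }
}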

That’s it! Now, you have a completely configured and secured Spring REST API without any boilerplate!

Run Spring Boot again and check it with cURL.

mvnw spring-boot:run
# in another shell
curl http://localhost:8080/
{"error":"unauthorized","error_description":"Full authentication is required to access this resource"}

The message changed, but you still don’t have access… why? Because now the server expects an Authorization header with a valid token. In the next step, you’ll create an access token and use it to access your API.

NOTE: Check the Git branch step-2-security-configuration if you have any doubts.

Generate Tokens in Your Spring REST API

So… how do you obtain a token? A resource server has no responsibility for obtaining valid credentials: it only checks whether the token is valid and then proceeds with the method execution.

An easy way to obtain a token is to generate one using the OpenID Connect Debugger.

First, you’ll need to create a new Web application in Okta:

[Image: New web application]

Set the Login redirect URIs field to https://oidcdebugger.com/debug and Grant Type Allowed to Hybrid. Click Done and copy the client ID for the next step.

Now, on the OpenID Connect Debugger website, fill in the form like the picture below (do not forget to fill in the client ID of your recently created Okta web application):

[Image: OpenID connect]

Submit the form to start the authentication process. You’ll see an Okta login form if you are not logged in; otherwise, you’ll see the screen below with your token.

[Image: OpenID connect - getting token]

The token will be valid for one hour so you can do a lot of testing with your API. It’s simple to use the token — just copy it and modify the curl command to use it as follows:

> export TOKEN=${YOUR_TOKEN}
> curl http://localhost:8080 -H "Authorization: Bearer $TOKEN"
Hello World

Add OAuth 2.0 Scopes

OAuth 2.0 scopes are a feature that lets users decide whether an application is authorized to do something restricted. For example, you could have “read” and “write” scopes. If an application needs the write scope, it should ask the user for that specific scope. Okta’s authorization server can handle these automatically.

A resource server can require a different scope for each of its endpoints. Next, you are going to learn how to set different scopes and how to test them.

Add a new annotation to your SecurityConfig class:

@EnableWebSecurity
@EnableResourceServer
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class SecurityConfig {}

The new @EnableGlobalMethodSecurity(prePostEnabled = true) annotation tells Spring to use AOP-like method security, and prePostEnabled = true will enable pre and post annotations. Those annotations will enable us to define security programmatically for each endpoint.

Now, make changes to HelloWorldController.java to create a scope-protected endpoint:

import org.springframework.security.access.prepost.PreAuthorize;
import java.security.Principal;
...
@PreAuthorize("#oauth2.hasScope('profile')")
@GetMapping("/protected/")
public String helloWorldProtected(Principal principal) {
    return "Hello VIP " + principal.getName();
}

Pay attention to @PreAuthorize("#oauth2.hasScope('profile')"). It says: before running this method, verify the request is authorized for the specified scope. The #oauth2 expression comes from Spring’s OAuth2SecurityExpressionMethods class (check the other methods available) and is added to your classpath through the spring-cloud-starter-oauth2 dependency.
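As an illustration of those other expression methods, hasAnyScope accepts several scopes and passes if the token carries at least one of them. The endpoint and method name below are hypothetical, not part of the tutorial’s repository:

// Hypothetical endpoint; accepts a token carrying either the 'profile' or 'email' scope.
@PreAuthorize("#oauth2.hasAnyScope('profile', 'email')")
@GetMapping("/any/")
public String helloWorldAnyScope(Principal principal) {
    return "Hello " + principal.getName();
}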

OK! After a restart, your server will be ready! Make a new request to the endpoint using your current token:

> curl http://localhost:8080/protected/ -H "Authorization: Bearer $TOKEN"
{"error":"access_denied","error_description":"Access is denied"}

Since your token does not have the desired scope, you’ll receive an “access denied” message. To fix this, head back over to the OIDC Debugger and add the profile scope.

[Image: Profile scope]

Try again using the newly obtained token:

> curl http://localhost:8080/protected/ -H "Authorization: Bearer $TOKEN"
Hello VIP raphael@dovale.net

That’s it! If you are in doubt about anything, check the latest repository branch, finished_sample.

TIP: Since profile is a common OAuth 2.0 scope, you don’t need to change anything in your authorization server. Need to create a custom scope? See Simple Token Authentication for Java Apps.

Learn More About Spring and REST APIs

In this tutorial, you learned how to use Spring (Boot) to create a resource server and seamlessly integrate it with OAuth 2.0. Both Spring and REST APIs are huge topics, with lots to discuss and learn.

The source code for this tutorial is available on GitHub.

Other posts on the Okta developer blog can help you further your understanding of both Spring and REST API security.

Like what you learned today? Follow us on Twitter, like us on Facebook, check us out on LinkedIn, and subscribe to our YouTube channel.

Create a Secure Spring REST API was originally published to the Okta developer blog on December 18, 2018 by Raphael do Vale.


COBOL on AWS Lambda

AWS has joined with Blu Age, a vendor that offers tools to modernize applications, to support COBOL on Lambda, the cloud provider’s serverless computing platform. Blu Age’s software provides a COBOL runtime and takes advantage of AWS’ new Lambda Layers feature.

Developers can run COBOL-based functions on AWS’ native Java 8 runtime and use Blu Age’s compiler to build Lambda deployment packages from COBOL source code, Blu Age said. Support for COBOL applications is part of a wave of improvements to AWS Lambda revealed last week at its re:Invent conference.

There are massive amounts of COBOL code still operational today — some 220 billion lines of it, as Reuters reported last year. The venerable language is a linchpin of the financial industry’s transaction processing systems. For example, about 95% of ATM card swipes rely on COBOL code, Reuters said.

Blu Age’s COBOL runtime for Lambda means that users don’t have to manage servers or containers, and AWS handles scaling. Costs accrue for every 100 ms of code execution time, with no fees when code is idle.
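To make that concrete with a rough, hypothetical example (using AWS’s published Lambda rates at the time, about $0.00001667 per GB-second of compute plus $0.20 per million requests): a batch function allocated 512 MB that runs for 300 ms per invocation, one million times a month, consumes 1,000,000 × 0.3 s × 0.5 GB = 150,000 GB-seconds, or roughly $2.50 in compute charges plus $0.20 in request charges; when the function isn’t running, it costs nothing.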

The other advantage of Lambda is that it helps developers decompose COBOL applications into microservices to provide more agility and flexibility, Blu Age said.

Many current COBOL applications are already well-suited for serverless, said Ryan Marsh, a DevOps and serverless trainer and consultant based in Houston.

“Your typical COBOL application that you most run into in the wild is an application that runs from time to time,” Marsh said. “It’s batch-oriented. It takes data from place to place, does things with it and puts it somewhere else or calls other COBOL applications.”


Marsh likened COBOL applications to Rube Goldberg machines; both are composed of a series of things deliberately chained together to perform complex tasks. Serverless applications follow this same model.

Few COBOL apps are monolithic, where all the functionality is in one executable or invocation that continuously runs, he said. That’s why a move to a serverless model makes sense for COBOL apps.

Some IT shops might be hesitant to move COBOL applications from on premises to the cloud, because those apps often run on hardware paid for long ago.

“Moving into something where I’m renting [compute resources] and paying for it again just doesn’t make sense,” Marsh said. However, there are bigger considerations for COBOL shops to mull.

“When I completely remove the ops headaches and I no longer have to think about virtual machines and instances and worry about disk space and things like that and I’m just thinking about my business logic and data, that makes perfect sense,” he said.

Meanwhile, some companies spend vast sums of money to rewrite their COBOL applications in languages such as Java, but that idea is wrongheaded, Marsh said. For one, documentation on these old applications is frequently poor, making a rewrite project much more fraught with pitfalls.

“What if you could skip all that and lift and shift it into the cloud?” Marsh said. “Moving to Lambda, versus a wholesale rewrite to technology and patterns that were cutting-edge 10 years ago, is much more advisable. You can skip two-plus generations of application development patterns. How often do you have that kind of opportunity in enterprise application development?”

 

source:

https://searchaws.techtarget.com/news/252454023/COBOL-applications-can-go-serverless-on-AWS-Lambda