Posted in Information Technology

Cross Region Replication – AWS S3

S3 cross-account replication helps us keep a backup of our data, with versioning enabled. This gives us some breathing room when a disaster-recovery event occurs or data is lost.

In this tutorial we will configure S3 cross-region replication between two accounts.

Prerequisites:

  • One bucket in the source region/account
  • One bucket in the destination region/account
  • Versioning enabled on both buckets

Once the bucket is created in S3, open it in the source account, click the Management tab, choose Replication, and click Add rule.

1) Source

I am currently replicating all the contents of the bucket.

If you want to replicate only the contents of a folder, choose Prefix in this bucket and add the folder name, e.g. test/

Click Next

2) Destination

Click on choose a bucket and select Buckets in another account.

Enter the Destination Account ID and the Destination bucket name

Click Save.

3) Permissions

Choose Create new role (the role will be created for you).

Copy the bucket policy. This policy must be added to the destination bucket's policy in the destination account.

The policy will look like the following. Copy and paste it into your destination bucket:

{
    "Version": "2008-10-17",
    "Id": "S3-Console-Replication-Policy",
    "Statement": [
        {
            "Sid": "S3ReplicationPolicyStmt1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AWSACCOUNTID:root"
            },
            "Action": [
                "s3:GetBucketVersioning",
                "s3:PutBucketVersioning",
                "s3:ReplicateObject",
                "s3:ReplicateDelete"
            ],
            "Resource": [
                "arn:aws:s3:::BucketName",
                "arn:aws:s3:::BucketName/*"
            ]
        }
    ]
}

4) Review and Click on Save to enable the replication.
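Behind the scenes, this wizard attaches a replication configuration to the source bucket. A minimal sketch of what that configuration might look like (the role name, account ID, and bucket name below are placeholders for illustration, not values the console generates for you):

```json
{
    "Role": "arn:aws:iam::SOURCE-ACCOUNT-ID:role/service-role/s3-replication-role",
    "Rules": [
        {
            "ID": "ReplicationRule1",
            "Status": "Enabled",
            "Prefix": "",
            "Destination": {
                "Bucket": "arn:aws:s3:::destination-bucket-name",
                "Account": "DESTINATION-ACCOUNT-ID",
                "AccessControlTranslation": { "Owner": "Destination" }
            }
        }
    ]
}
```

The AccessControlTranslation block is optional; when present, it transfers ownership of the replicated objects to the destination account.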

Changes in the Destination Account:

  1. Click on the destination bucket.
  2. Click the Permissions tab, select Bucket Policy, and paste the policy generated in the source account.
  3. Click Management, choose Replication, click More, and select Receive objects.

Enter the Source account ID and click Done.

Upload files to the source bucket.

Check the files in the destination bucket; they are replicated from the source account to the destination account.

 
Posted in Information Technology

Effective Programmers

Photo by Chris Ried on Unsplash

Software engineers spend a lot of time gaining skills for interviews by practicing leet code problems and perfecting resumes.

Once they finally get that job at a startup, Google, Amazon, or another corporation, they might find the skills they used to get the job don’t match the ones they need in their everyday work.

Our team was inspired by the seven skills of highly effective programmers created by the TechLead. We wanted to provide our own take on the topic.

Here are our seven skills of effective programmers.


1. Learn How to Read Other People’s Code

Everyone but you writes terrible code.

That is why the ability to follow other people’s code is a great skill with multiple benefits.

No matter how messy or poorly thought out a previous engineer’s code is, you still need to be able to wade through it. After all, it’s your job. Even when that engineer was you one year prior.

This skill benefits you in two ways. One, being able to read other people’s code is a great chance to learn what bad design is. While you are looking through other people’s code you learn what works and what doesn’t. More importantly, you learn what type of code is easy for another engineer to follow and what code is hard to follow.

You need to make sure you gripe as much as possible as you are reading over other people’s code. That way, other engineers understand how much of a superior engineer you are.

Make sure you bring up points about the importance of maintainable code and good commenting. This further shows your dominance in the area of programming.

Your code should be so well-designed that it requires no documentation. In fact, you shouldn’t document any of your code if you are a good programmer. This is just a waste of time and you need to spend your time coding and in meetings.

Being able to read other people’s messy code also makes it easy to make updates when needed. This occasionally means updating code you lack experience in. For instance, we once traced a script chain from PowerShell to Python to Perl. We had limited experience in Perl, but we still had enough context to figure out what was going on and make the changes needed.

This comes from having a decent understanding of all the code as well as being able to read the Perl scripts.

Reading other people’s code makes you valuable because you can follow even over-engineered systems that might stump others.


2. A Sense for Bad Projects

There are many skills that take time to learn. One of the skills we believe is worth knowing is understanding what projects are not worth doing and what projects are clearly death marches.

Large companies always have many more projects going than will probably ever be completed or impactful. There are some projects that might not make any business sense (at least not to you), and there are others that are just poorly managed. This is not to say that you should cut off an idea right when you disagree with the project. However, if the stakeholders can’t properly explain what they will be doing with the end result, then perhaps the project is not worth doing.

Also, some projects might be so focused on the technology instead of the solution that it might be clear from the beginning that there won’t be a lot of impact. This skill requires doing a lot of bad projects before you have an idea of what a bad project really is. So don’t spend too much time early on trying to discern each project.

At some point in your career, you will just have a good gut sense.


3. Avoiding Meetings

Whether you are a software engineer or data scientist, meetings are a necessity because you need to be able to get on the same page with your project managers, end-users, and clients. However, there is also a tendency for meetings to suddenly take over your entire schedule. This is why it’s important to learn how to avoid meetings that are unneeded. Maybe a better word to use is manage rather than avoid. The goal here is to make sure you spend your time in meetings that drive decisions and help your team move forward.

The most common method is to simply block out a two-hour slot every day as a standing meeting with yourself. Usually, most people will set up the recurring meeting at a time they find beneficial. They’ll use that as a time to catch up on their development work.

Another way to avoid meetings so you can get work done is to show up before anyone else does. Personally, we like showing up early because in general, the office is quieter. Most people that show up early are like you, just wanting to get work done so no one bugs you.

This is important for individual contributors because our work requires times where we focus and we don’t talk to other people. Yes, there are times you might be problem-solving where you might want to work with other people. But once you get past the blocking issues, you just need to code. It’s about getting into that zone where you are constantly holding a lot of complex ideas in your head about the work you are doing. If you are constantly stopped, it can be hard to pick up where you left off.


4. GitHub

Some CS majors started using GitHub the day they were born. They understand every command and parameter and can run circles around professionals.

Others get their first taste of GitHub at their first job. For them, GitHub is a hellish landscape of confusing commands and processes. They are never 100% sure what they are doing (there’s a reason cheat sheets are popular).

No matter what repository system your company uses, the system is both helpful if you use it correctly and a hindrance if used improperly. It doesn’t take much for a simple push or commit to turn into you spending hours trying to untangle some hodgepodge of multiple branches and forks. In addition, if you constantly forget to pull the most recent version of the repository, you will also be dealing with merge conflicts that are never fun.

If you need to keep a GitHub command cheat sheet, then do it. Whatever makes your life simpler.


5. Writing Simple Maintainable Code

One tendency younger engineers might have is to attempt to implement everything they know into one solution. There is this desire to take your understanding of object-oriented programming, data structures, design patterns, and new technologies and use all of it in every bit of code you write. You create unnecessary complexity because it’s so easy to become overly attached to a solution or design pattern you have used in the past.

There is a balance with complex design concepts and simple code. Design patterns and object-oriented design are supposed to simplify code in the grand scheme of things. However, the more and more a process is abstracted, encapsulated, and black-boxed, the harder it can be to debug.
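As a contrived sketch of that balance (all names here are invented for illustration), both versions below compute the same order total, but the second is far easier to read and debug:

```javascript
// Over-engineered: a strategy factory wrapping what is really a one-line calculation
class TotalStrategyFactory {
  create() {
    return {
      compute: (items) => items.reduce((sum, i) => sum + i.price * i.qty, 0),
    };
  }
}
const overEngineered = new TotalStrategyFactory()
  .create()
  .compute([{ price: 2, qty: 3 }, { price: 5, qty: 1 }]);

// Simple: a plain function whose name says what it does
function orderTotal(items) {
  return items.reduce((sum, i) => sum + i.price * i.qty, 0);
}
const simple = orderTotal([{ price: 2, qty: 3 }, { price: 5, qty: 1 }]);

console.log(overEngineered, simple); // both print 11
```

The factory adds three layers of indirection without abstracting anything that actually varies, which is exactly the kind of complexity that makes debugging harder.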


6. Learn to Say No and Prioritize

This goes for really any role, whether you are a financial analyst or a software engineer. But in particular, tech roles seem to have everyone needing something from them. If you are a data engineer, you will probably get asked to do more than just develop pipelines. Some teams will need data extracts, others will need dashboards, and others will need new pipelines for their data scientists.

Now, prioritizing and saying no might really be two different skills, but they are closely intertwined. Prioritizing means that you only spend time on work that has a high impact for the company. Saying no, by contrast, sometimes just means avoiding work that should be handled by a different team. They often happen in tandem in all roles.

This can be a difficult skill to acquire as it is tempting to take on every request thrown your way. Especially if you are straight out of college. You want to avoid disappointing anyone, and you have always been provided a doable amount of work.

In large companies, there is always an endless amount of work. The key is only taking on what can be done.

There are a lot of skills that aren’t tested for in interviews or even always taught in colleges. Oftentimes, this is more a limitation of the environment rather than a lack of desire to expose students to problems that exist in real development environments.


7. Operational Design Thinking

One skill that is hard to test for in an interview and hard to replicate when you are taking courses in college is thinking through how an end-user might use your software incorrectly. We usually reference this as thinking through operational scenarios.

However, this is just a polite way of saying you’re attempting to dummy-proof your code.

For instance, since much of programming is maintenance, it often means changing code that is highly tangled with other code. Even a simple alteration requires tracing every possible reference of an object, method, and/or API. Otherwise, it can be easy to accidentally break modules you don’t realize are attached. Even if you are just changing a data type in a database.

It also includes thinking through edge cases and thinking through an entire high-level design before going into development.
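As a tiny illustration of thinking through how callers might misuse your code (the function and its rules are invented for this example), validate what you are handed instead of assuming every caller read the documentation:

```javascript
// Hypothetical helper: parse a user-supplied page size, guarding against
// the ways an end user (or another programmer) might misuse it.
function parsePageSize(input, { fallback = 25, max = 100 } = {}) {
  const n = Number(input);
  if (!Number.isFinite(n)) return fallback; // undefined, null, "abc", NaN
  if (n < 1) return fallback;               // zero or negative sizes make no sense
  return Math.min(Math.floor(n), max);      // cap runaway values, drop decimals
}

console.log(parsePageSize("10"));   // 10
console.log(parsePageSize("huge")); // 25 (fallback)
console.log(parsePageSize(10000));  // 100 (capped)
```

Each guard clause corresponds to an operational scenario: bad input, nonsensical input, and input that is technically valid but would hurt the system downstream.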

As for more complex cases where you are developing new modules or microservices, it’s important to take your time and think through the operational scenarios of what you are building. Think about how future users might need to use your new module, how they might use it incorrectly, what parameters might be needed, and if there are different ways a future programmer might need your code.

Simply coding and programming is only part of the problem. It’s easy to create software that works well on your computer. But there are a lot of ways deploying code can go wrong. Once in production, it’s hard to say how code will be used and what other code will be attached to your original code. Five years from now, a future programmer might get frustrated at the limitations of your code.

Posted in Information Technology

What is a Senior Developer

There is a wide range in the skill of developers out there — and seniority often doesn’t determine one’s caliber. So what makes some superior to their peers? What is it that separates them from the pack and sea of mediocrity?

Having five, ten or even fifteen years of ‘work experience’ doesn’t necessarily guarantee that you’re an effective, efficient senior developer — or even deserving of the title. There are certainly poor ones out there who give successful seniors, who are often older in age, a bad rep. Young seniors don’t have it any easier either — but there are certain traits and knowledge that are shared among the echelons of senior developers.

It’s not a trade secret, but rather a recipe of knowledge points and ways of thinking that can be developed. Here are some of the traits that easily help distinguish between a true senior developer and a developer with seniority.


Programming Paradigms

SOLID, Object-Oriented, and Functional Programming are a trio of programming paradigms which make up a good portion of the modes of thinking behind code creation.

What a lot of junior developers miss on their coding journey is that programming is a language — which means that it extends beyond grammatical rules. It’s a communication tool that can be structured in multiple ways and programming paradigms help create a certain stance on the way your code is communicated.

Anyone can write code — just as anyone can write a text message or a short book review on Amazon. But that sort of writing is not at the same level as a Stephen King novel. Programming paradigms act as the guiding force behind a senior developer’s code as much as plot structures do for fiction writers. All languages are made up of formulas and senior developers understand them at an internalized level that many junior and intermediate developers have yet to experience.


Ability to Create

When we first start out in the world of for loops and if-else statements, we tend to find answers in the form of ready-to-copy-and-paste code. How well a developer comprehends that copied code is what differentiates new juniors from low-level intermediate developers.

Seniors, however, take it one step further. They are able to create like mini-gods in their sandboxes without much assistance from the almighty knowledge bank of Google. They know what they’re doing and they understand the implications of their moves. They see the contingencies, or at least anticipate them, and understand the potholes in their code and how to improve it.

If there is a gap in their knowledge, they look further than just the surface. There is a deeper understanding of everything in the toolbox. The world of code looks different to senior developers.


Objective Criticism

Everyone is biased towards what they know. Junior and intermediate developers tend to display their extreme biases based on personal experiences rather than code-related reasoning. Their personal preferences, styles, naming conventions, and methods of thinking become the centerpiece of any suggestion or evaluation they may encounter.

There’s nothing wrong with that, as it’s all part of the process of growing up. True objectivity is not obtained until there is enough of a range of opposing experiences to provide a centering effect on the developer. There is no right way to code, only efficient ways based on situations and scenarios. The senior developer understands this. They accept that their code may not be the best and that there is space for improvement.

Senior developers often become effective code janitors, marking messes made by their peers and labeling the weak parts in the architecture. They are able to step back and see a much bigger picture with future contingencies while making choices based on the least expected negative impact. They are not bound to any one style of coding or paradigm — instead, focusing on the solution rather than the tool.


The Distinction Between Good Software and Working Software

As developers, we make code that runs. Some of us stop there and call it a day. Others take it a bit further and try to clean things up. The best developers are such pros that they edit and rewrite their code as they’re coding — accepting the blips and failures as they go, only to improve it as soon as they can, because they know the difference between good software and working software.

Most bosses focus only on whether the software is working, but the good senior developer knows better. They understand the hidden costs of technical debt and code smells. They understand how to balance the demands of working software with good software — to walk the fine line of on-time project delivery and extension negotiations.

Their breadth of knowledge and understanding of frameworks and languages makes them experts at telling the difference between good and working software — and how to create both — and gives them the ability to come up with creative solutions when the situation demands it.


Ability to Teach

“The mediocre teacher tells. The good teacher explains. The superior teacher demonstrates. The great teacher inspires.” — William Arthur Ward

True senior developers have a certain passion about them that inspires their less experienced peers in their field, helping polish the next generation of diamonds.

Programming itself is a collection of ideas, and seniors have the ability to translate these into something that is succinct and easily digestible. Their ability to communicate and translate code between different interfaces and mediums demonstrates their true understanding of the language they’ve chosen to master.

There is a level of mastery required to become a teacher of anything. While ‘experience’ may come in the form of projects on their resumes and length of time at different companies, teaching is a skill that is only available to those that truly understand their craft.


Final Words

The true senior developer is a multi-faceted creature that sometimes masquerades linguistically as a junior or intermediate developer in areas outside their main toolkit, but has a strong foundation in programming philosophies.

However, the traits above are present in their personality and depth of knowledge. It gives them the advantage of traversing through unknown territories of code faster than the average programmer. They are often big picture thinkers and view code with an enlightened mindset.

They will advocate for clean coding habits and guide their peers towards them without being a biased force of destruction. They are kind about their peers’ mistakes and accept their own with grace — aiming to educate and learn rather than destroy egos.

They can be any age, come from any background, and hold any number of years of ‘experience’. They are true problem solvers and long term thinkers. Do you have what it takes?

Posted in Information Technology

Mobile WebSite, Hybrid, Native app

One question routinely surfaces in today’s modern development landscape—whether to build a mobile Web site versus a native app versus a hybrid app. As a developer, you need to take the time to think through a few considerations before running off to develop software. One such consideration is determining your target audience. To a large degree, this will determine your target platforms.

Your users will use many different devices to access your software. Some will access apps through a corporate network only, while other apps are consumer-focused. Once you’ve determined the audience and platforms, you must figure out what kind of software will serve the needs of those audiences, potentially with platform-specific features.

There are three main types of modern apps: mobile Web apps, native apps and hybrid apps. I’ll review each type, their pros and cons, and how to get started developing them. For the purposes of this article, I’m not considering traditional desktop (native) apps created with Windows Forms or Windows Presentation Foundation (WPF). They’re not considered modern, as they only run on large desktop screens and not across a multitude of devices.

Mobile Web Sites

Mobile Web sites have the broadest audience of the three primary types of applications. Any smartphone can at least display content and let the user interact with a mobile page, although some do so better than others. Along with reach, another benefit is easy deployment. Just update in one location and all users automatically have access to the latest version of the site.

If you already have a Web site and want a companion app or want to expand into the app market, you can start by making your Web site mobile-friendly. This means making a few modifications, but there’s a big payoff for a small effort, especially when compared to building a complete native set of apps. Web sites that target desktop or large monitors are hard to use on small devices. Modifying them so they’re easy to use on mobile devices will directly affect customer satisfaction.

Making mobility a first-class feature in your site also increases reach. It’s easier to use mobile Web sites. There are fewer pop-ups and distractions. Also, mobile design generally leans toward large square or rectangle buttons that are easy to tap.

You can use all your current Web development skills to build a mobilized version of your Web site. That means using HTML, JavaScript, CSS and perhaps a few of your favorite frameworks. The knowledge required to mobilize apps isn’t limited to a certain platform or vendor.

Two big things to note about going mobile are integrating a responsive design and restructuring the content so it works on small hardware. CSS media queries cover the responsive design. Media queries are a way to code CSS to define style rules that target specific device form factors. For example, your site should have media queries for several device form factors, including phones, tablets, phablets, laptops and large screens.

Fortunately, you can build media queries that work for several devices within a category. Restructuring the content entails changing the layout to something a tiny screen can display that’s easy for users to view. This changes the data volume, as well. There are default media queries that come with Twitter Bootstrap, a popular library that contains responsively designed CSS and styles to get you started.

For example, Figure 1 contains CSS that works on a large swath of devices. The code in Figure 1 doesn’t cover every scenario, but it covers most of them. You might have some modifications to make to the code to fit with your needs.

Figure 1 CSS Media Queries for Popular Form Factors
Smartphones (portrait and landscape)
@media only screen and (min-device-width : 320px) and
(max-device-width : 480px) { ... }
Smartphones (landscape)
@media only screen and (min-width : 321px) { ... }
Smartphones (portrait)
@media only screen and (max-width : 320px) { ... }
Tablets, Surfaces, iPads (portrait and landscape)
@media only screen and (min-device-width : 768px) and
(max-device-width : 1024px) { ... }
Tablets, Surfaces, iPads (landscape)
@media only screen and (min-device-width : 768px) and
(max-device-width : 1024px) and (orientation : landscape) { ... }
Tablets, Surfaces, iPads (portrait)
@media only screen and (min-device-width : 768px) and
(max-device-width : 1024px) and (orientation : portrait) { ... }
Desktops, laptops, larger screens
@media only screen and (min-width : 1224px) { ... }
Large screens
@media only screen and (min-width : 1824px) { ... }
High-resolution (Retina) smartphones (landscape)
@media only screen and (min-device-width : 320px) and (max-device-width : 480px) and (orientation : landscape) and (-webkit-min-device-pixel-ratio : 2) { ... }
High-resolution (Retina) smartphones (portrait)
@media only screen and (min-device-width : 320px) and (max-device-width : 480px) and (orientation : portrait) and (-webkit-min-device-pixel-ratio : 2) { ... }

The CSS in Figure 1 not only works in mobile Web apps, but native apps, as well. That means it applies to all three types of apps covered in this article. On the Windows platform, you can use it in Windows Library for JavaScript (WinJS) projects and hybrid apps in C#. For a more in-depth look at responsive app design, see my October 2013 column, “Build a Responsive and Modern UI with CSS for WinJS Apps” (msdn.microsoft.com/magazine/dn451447).

Mobile site UIs and UXes likely won’t match that of the host OS, as the Web and native platforms tend to surface certain design patterns and techniques. Many folks try to cram a Web site that targets desktop monitors into the tiny screens of the smartphone or phablet. This rarely works well. Be sure to consider how users consume information on small devices.

One downside to mobile Web sites is that many features available to native apps simply aren’t available to mobile Web sites. Even some of the native features hybrids enjoy are out of reach for mobile Web sites. This is primarily for security reasons.

Access to the file system and local resources isn’t available in Web sites, whether or not they’re mobile. This will change when browsers widely adopt the File API. For now, Mobile IE, Opera Mini and some iOS Safari versions don’t support it. Code can’t call on the webcam, sensors or other hardware components. At some point, browsers will expose more of the hardware features, but for now it’s mostly off-limits.

To enable offline capabilities, mobile Web sites have to use Web technologies such as Web Storage, IndexedDb and AppCache. Mobile sites can’t take advantage of file system resources, but their sandbox model still allows for some client-based storage. Many existing Web sites don’t support offline capabilities, rendering them useless when they’re disconnected.
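As a sketch of that sandboxed, client-side storage (a hypothetical wrapper, not a specific library API), you can feature-detect Web Storage and fall back to an in-memory store so the same code runs even where localStorage is unavailable or disabled:

```javascript
// Minimal sketch: prefer Web Storage, fall back to an in-memory object.
function createStore() {
  try {
    if (typeof localStorage !== "undefined" && localStorage !== null) {
      return localStorage;
    }
  } catch (e) {
    // Some browsers throw when storage is disabled; fall through to the fallback.
  }
  const data = {};
  return {
    setItem(key, value) { data[key] = String(value); },
    getItem(key) { return key in data ? data[key] : null; },
    removeItem(key) { delete data[key]; },
  };
}

const store = createStore();
store.setItem("draft", "offline note");
console.log(store.getItem("draft")); // "offline note"
```

The fallback mimics the Web Storage getItem/setItem contract (values coerced to strings, null for missing keys), so calling code doesn’t need to know which backing store it got.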

Native Apps

For most platforms you’re targeting, you should be able to retain your skills. If you’re developing on Windows, you can power your app with C#, Visual Basic, or C++, alongside XAML for the UI. You could also write in JavaScript, alongside HTML, CSS and WinJS for the UI. On Android, you write in Java, and on iOS, in Objective-C.

When going the native route, you can leverage the marketing power of the app store. It doesn’t really matter which store. The fact is, they all try to help market your app with free exposure or promos you wouldn’t get otherwise. Of course, the downside of an app store is a potential user has to find and install your app. Even with the boost in marketing the store gives you, there will be users who won’t find your app.

Something that often blocks you from targeting multiple platforms is the need to recreate at least part of the UI for each targeted platform. That means you’ll need a UI for Windows Store and Windows Phone, iOS and Android. It can be quite a challenge to build a flexible UI that works well on dozens of slightly different screens. However, the result is users get that rich, native experience they expect from quality apps. In the end, the ratings in the app store reflect apps that provide an excellent UX.

When going native, you’ll have to map out a cross-platform strategy. Consider which of the platforms you’ll target and the order in which you’ll publish them. From the perspective of the mobile Web versus native versus hybrid apps question, the smoothest path is mobile Web to hybrid to native.

Even though you might publish native apps, you’ll want to keep the mobile Web site well maintained, as mobile accounts for most traffic. Then choose a platform, perhaps one that matches your developer background, and start developing. For more guidance on the various considerations when creating cross-platform apps, see my May 2014 column, “Design a Cross-Platform Modern App Architecture” (msdn.microsoft.com/magazine/dn683800).

Visual Studio contains many project templates for creating native apps for the Windows platform. Under C#, Visual Basic and C++, you’ll find Windows Store and Windows Phone apps. Visual Studio also contains templates for JavaScript apps. You must first determine the language you’ll use, as there are many considerations here, including whether the app will be cross-platform. My September 2013 column, “Understanding Your Language Choices for Developing Modern Apps” (msdn.microsoft.com/magazine/dn385713), can help you decide which language to use. As you might expect, there’s a rich ecosystem of tools around native apps including APIs and controls targeting each platform of interest.

Most native apps within a particular platform have a similar navigation paradigm. For example, the Windows Store platform employs app bars and a strategically placed back button. Taking advantage of built-in navigation schemes lets your app give users a familiar feel and no learning curve. This results in better ratings and more downloads. My April 2014 column, “Navigation Essentials in Windows Store Apps” (msdn.microsoft.com/magazine/dn342878), contains all the facts about Windows Store app navigation.

Hybrid Apps

In that place between mobile Web sites and native apps lie the hybrid apps. Hybrid apps are a way to expose content from existing Web sites in app format. They’re a great way to take your Web content and package it up for publishing in an app store. You can publish hybrid apps in any of the major app stores: Microsoft Windows Store, Google Play, Apple App Store, Amazon Appstore and even BlackBerry World.

The great thing about a hybrid app is it can be a published app or a stopgap measure to fill the store while you’re working on creating a set of native apps. It gives you great headway into the market to publish something and get the marketing process started while you work on completing a native app set, if that’s your goal. If not, a hybrid app can serve as a way to have something formally listed in the app stores for the exposure.

Hybrid apps may enjoy a few more privileges with local resources than mobile Web sites, depending on the host OS rules. That means things such as webcam use or certain sensors might not work everywhere.

The good news if you’re considering hybrid apps is you get to use familiar Web development skills. Hybrids are essentially Web site wrappers. Their foundation is the same old HTML, JavaScript and CSS you already know.

There’s an entire third-party ecosystem around building hybrid apps for the various app stores. As you might expect, there are templates for creating hybrid apps in Visual Studio. Popular vendors such as Xamarin, Telerik, DevExpress and Infragistics all have tools and controls that speed up the hybrid app development process.

Using an iFrame in Visual Studio JavaScript apps, you can create a hybrid app completely from Web languages. You can also build a Hybrid app using the Windows Phone HTML5 project template with C# or Visual Basic .NET. Finally, take any XAML-based app and add a WebView control for the same effect. The WebView control behaves as if it were a browser. This means you control it by calling methods like Navigate, Refresh or Stop, often mapping to an equivalent user-driven action. Here’s a sample of the WebView control and some basic code that navigates to a start page for the app:

In MainPage.xaml

<WebView x:Name="webView"/>

In MainPage.xaml.cs

public MainPage()
{
  this.InitializeComponent();
  Uri targetUri = new Uri("http://rachelappel.com");
  webView.Navigate(targetUri);
}

You can tap into WebView events to perform navigation, page load or other tasks. For example, you can tap into navigation to log the popular links, as in this example:

private void webView_NavigationCompleted(Windows.UI.Xaml.Controls.WebView sender,
  WebViewNavigationCompletedEventArgs args)
{
  logNavigation(args.Uri.ToString());
}

This is exactly the kind of event you’d expect when controlling a Web browser. The WebView makes it much easier to combine existing content on the Web with native app capabilities.

Wrapping Up

Each way of designing and building apps comes with its own set of benefits and drawbacks. The app store concept, for example, is both a pro and a con. The upside is targeted visibility. The downside is you have to develop multiple UIs, although back-end services code is often sharable.

Regardless of whether you’re going to build a native or hybrid app, you should have a mobile version of your Web site. Mobile Web sites offer the largest immediate reach of all the app types, though you don’t get to leverage the store’s marketing efforts, which can boost sales. Hybrid apps help you enter a marketplace earlier while developing native apps. This is a great way to collect download and usage data and determine whether a market is viable. Finally, responsive design and responsive CSS add richness to any of the apps discussed here that support Web technologies.

Posted in Information Technology

Angular JS Introduction

AngularJS is a structural framework for dynamic web apps. With AngularJS, designers can use HTML as the template language and it allows for the extension of HTML’s syntax to convey the application’s components effortlessly. Angular makes much of the code you would otherwise have to write completely redundant.

Although AngularJS is commonly associated with SPAs, you can use Angular to build any kind of app, taking advantage of features like two-way binding, templating, RESTful API handling, modularization, AJAX handling, dependency injection, etc.

How to Start with AngularJS

AngularJS is maintained by Google, as well as a community of individual developers. The detailed, technical aspects of this framework can be found on the AngularJS website, which states that “AngularJS lets you extend HTML vocabulary.”

We have selected some useful resources to understand the main concepts and simplify the AngularJS learning curve:

AngularJS Directives

Using AngularJS, developers can create HTML-like elements and attributes that define the behavior of presentation components. These directives “let you invent new HTML syntax, specific to your application” or website. Some common AngularJS directives include:

  • ng-show and ng-hide – these directives show or hide an element. This is achieved by setting styles in the site’s CSS.
  • ng-class – this allows class attributes to be dynamically loaded.
  • ng-animate – this directive provides support for animation, including JavaScript, CSS3 transitions, and CSS3 keyframe animations.

There are many more directives; you can check most of them out in “The AngularJS Cheat Sheet” or learn how to “Build Custom Directives with AngularJS”.
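For instance, a minimal template combining a few of these directives might look like the following sketch (the property and class names here are illustrative, not taken from any of the tutorials above):

```html
<div ng-app>
  <!-- ng-click toggles a scope property; ng-show and ng-class react to it -->
  <button ng-click="open = !open">Toggle menu</button>
  <nav ng-show="open" ng-class="{ active: open }">
    <a href="#home">Home</a>
  </nav>
</div>
```

With a bare ng-app and no controller, the expressions live on the root scope; a real application would typically define a module and controller.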


Practical Examples

1. Creating a Menu

Navigation menus are a staple of all websites, whether the site is a traditional, multi-page experience or a single-page site. Menus that respond to user input (like a touch or click) and include attractive animation effects are one of the ways that frameworks like AngularJS can be utilized – simply by combining the framework with a little HTML and CSS.


A tutorial example on the website Codementor shows how HTML, CSS, and JavaScript are used in conjunction with AngularJS to create a page with a pair of cool menus, one of which slides onto the page from the left of the site and another which is on the right of the page. While some of the CSS is a little complex, the entire menu comes together in this short tutorial in only minutes, ultimately creating a navigational structure that could easily be expanded upon to create a very attractive and powerful system for a website!

2. Creating a SPA

There are a number of advantages to creating a single page website. Rather than separate pages needing to be fetched and loaded during a visitor’s time on the site, a single page website can provide a much more fluid experience. This is because all the code for the site is retrieved up-front or dynamically loaded as necessary to create an experience that feels more like a desktop application than a traditional, multi-page website.

3. Other Practical Examples

The Future of Web Design?

Will we be seeing more and more website building which is dynamically powered by JavaScript in the future? It is certainly possible. Even with traditional, multi-page sites, having solutions that make development and testing of those sites quicker and easier is always going to be welcome and appealing.

AngularJS already has the ability to handle your project’s wireframes during initial development and testing, as well as other demands like animations and transitions for powerful websites and web applications. With more and more web designers and developers turning to these JavaScript-powered solutions, we can also expect them to become even easier to use as a whole – which is ultimately great news for everyone looking to design and develop rich web experiences.

 

 

Posted in Information Technology

Typescript Introduction

Overview

TypeScript is a superset of JavaScript which primarily provides optional static typing, classes and interfaces. One of the big benefits is to enable IDEs to provide a richer environment for spotting common errors as you type the code.

To get an idea of what I mean, watch Microsoft’s introductory video on the language.

For a large JavaScript project, adopting TypeScript might result in more robust software, while still being deployable where a regular JavaScript application would run.

It is open source, but you only get the clever Intellisense as you type if you use a supported IDE. Initially, this was only Microsoft’s Visual Studio (also noted in blog post from Miguel de Icaza). These days, other IDEs offer TypeScript support too.

Are there other technologies like it?

There’s CoffeeScript, but that really serves a different purpose. IMHO, CoffeeScript provides readability for humans, but TypeScript also provides deep readability for tools through its optional static typing (see this recent blog post for a little more critique). There’s also Dart, but that’s a full-on replacement for JavaScript (though it can produce JavaScript code).

Example

As an example, here’s some TypeScript (you can play with this in the TypeScript Playground)

class Greeter {
    greeting: string;
    constructor (message: string) {
        this.greeting = message;
    }
    greet() {
        return "Hello, " + this.greeting;
    }
}  

And here’s the JavaScript it would produce

var Greeter = (function () {
    function Greeter(message) {
        this.greeting = message;
    }
    Greeter.prototype.greet = function () {
        return "Hello, " + this.greeting;
    };
    return Greeter;
})();

Notice how the TypeScript defines the type of member variables and class method parameters. This is removed when translating to JavaScript, but used by the IDE and compiler to spot errors, like passing a numeric type to the constructor.

It’s also capable of inferring types which aren’t explicitly declared; for example, it would determine that the greet() method returns a string.
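To see inference at work, here is a small sketch (the usage lines and the “TypeScript” argument are illustrative additions to the Greeter example above):

```typescript
class Greeter {
    greeting: string;
    constructor(message: string) {
        this.greeting = message;
    }
    greet() {                               // no annotation needed:
        return "Hello, " + this.greeting;   // return type inferred as string
    }
}

const message = new Greeter("TypeScript").greet(); // inferred as string
const shout = message.toUpperCase();               // safe: the compiler knows it's a string
console.log(shout);
```

Assigning greet()’s result to a number variable, by contrast, would be rejected at compile time.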

Debugging Typescript

Many browsers and IDEs offer direct debugging support through sourcemaps. See this Stack Overflow question for more details: Debugging TypeScript code with Visual Studio

 

Relation to JavaScript

TypeScript is modern JavaScript + types. It’s about catching bugs early and making you a more efficient developer, while at the same time leveraging the JavaScript community.

JavaScript is standardized through the ECMAScript standards. Older browsers do not support all features of newer ECMAScript standards (see this table). TypeScript supports new ECMAScript standards and compiles them to (older) ECMAScript targets of your choosing (current targets are 3, 5 and 6 [a.k.a. 2015]). This means that you can use features of ES2015 and beyond, like modules, lambda functions, classes, the spread operator and destructuring, while remaining backwards compatible with older browsers.
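As a small sketch of this in practice (with illustrative values), the following uses ES2015 syntax that the compiler can emit as plain ES5 if you choose that target:

```typescript
// ES2015 features that TypeScript can compile down to an older target like ES5
const nums = [1, 2, 3];
const doubled = nums.map(n => n * 2);   // arrow (lambda) function
const [first, ...rest] = doubled;       // destructuring with a rest element
const all = [0, ...doubled];            // spread operator
console.log(first, rest, all);
```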

The type support is not part of the ECMAScript standard and likely never will be, given JavaScript’s interpreted rather than compiled nature. The type system of TypeScript is incredibly rich and includes: interfaces, enums, hybrid types, generics, union/intersection types, access modifiers and much more. The official website of TypeScript gives an overview of these features. TypeScript’s type system today is on par with other typed languages and in some cases arguably more powerful.

Relation to other JavaScript targeting languages

TypeScript has a unique philosophy compared to other languages that compile to JavaScript. JavaScript code is valid TypeScript code; TypeScript is a superset of JavaScript. You can almost rename your .js files to .ts files and start using TypeScript (see “JavaScript interoperability” below). TypeScript files are compiled to readable JavaScript, so that migration back is possible and understanding the compiled TypeScript is not hard at all. TypeScript builds on the successes of JavaScript while improving on its weaknesses.

On the one hand, you have future-proof tools that take modern ECMAScript standards and compile them down to older JavaScript versions, Babel being the most popular one. On the other hand, you have languages that may totally differ from JavaScript and target JavaScript, like CoffeeScript, Clojure, Dart, Elm, Haxe, Scala.js, and a whole host more (see this list). These languages, however much better they might be than where JavaScript’s future might ever lead, run a greater risk of not finding enough adoption for their futures to be guaranteed. You might also have more trouble finding experienced developers for some of these languages, though the ones you do find can often be more enthusiastic. Interop with JavaScript can also be a bit more involved, since they are farther removed from what JavaScript actually is.

TypeScript sits in between these two extremes, thus balancing the risk. TypeScript is not a risky choice by any standard. It takes very little effort to get used to if you are familiar with JavaScript, since it is not a completely different language, has excellent JavaScript interoperability support and it has seen a lot of adoption recently.

Optional static typing and type inference

JavaScript is dynamically typed. This means JavaScript does not know what type a variable is until it is actually instantiated at run-time, and by then it may be too late to catch a type error. TypeScript adds type support to JavaScript. Bugs that are caused by false assumptions about a variable’s type can be completely eradicated if you play your cards right (how strictly you type your code, or whether you type it at all, is up to you).

TypeScript makes typing a bit easier and a lot less explicit through type inference. For example: var x = "hello" in TypeScript is the same as var x : string = "hello". The type is simply inferred from its use. Even if you don’t explicitly write the types, they are still there to save you from doing something which would otherwise result in a run-time error.

TypeScript is optionally typed by default. For example function divideByTwo(x) { return x / 2 } is a valid function in TypeScript which can be called with any kind of parameter, even though calling it with a string will obviously result in a runtime error. Just like you are used to in JavaScript. This works, because when no type was explicitly assigned and the type could not be inferred, like in the divideByTwo example, TypeScript will implicitly assign the type any. This means the divideByTwo function’s type signature automatically becomes function divideByTwo(x : any) : any. There is a compiler flag to disallow this behavior: --noImplicitAny. Enabling this flag gives you a greater degree of safety, but also means you will have to do more typing.
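A small sketch makes the difference concrete (the function names are illustrative; the any annotation is written out explicitly here, which is what the compiler assumes implicitly):

```typescript
// With `any`, the mistake compiles and only shows up at run-time
function divideByTwo(x: any): any {
    return x / 2;
}
const ok = divideByTwo(8);          // 4
const oops = divideByTwo("hello");  // compiles, but evaluates to NaN

// With a type annotation, the same mistake becomes a compile-time error
function divideByTwoTyped(x: number): number {
    return x / 2;
}
// divideByTwoTyped("hello"); // error: argument of type 'string' is not
//                            // assignable to parameter of type 'number'
console.log(ok, Number.isNaN(oops));
```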

Types have a cost associated with them. First of all there is a learning curve, and second of all, of course, it will cost you a bit more time to set up a codebase using proper strict typing too. In my experience, these costs are totally worth it on any serious codebase you are sharing with others. A Large Scale Study of Programming Languages and Code Quality in Github suggests that “statically typed languages in general are less defect prone than the dynamic types, and that strong typing is better than weak typing in the same regard”.

It is interesting to note that this very same paper finds that TypeScript is less error prone than JavaScript:

For those with positive coefficients we can expect that the language is associated with, ceteris paribus, a greater number of defect fixes. These languages include C, C++, JavaScript, Objective-C, Php, and Python. The languages Clojure, Haskell, Ruby, Scala, and TypeScript, all have negative coefficients implying that these languages are less likely than the average to result in defect fixing commits.

Enhanced IDE support

The development experience with TypeScript is a great improvement over JavaScript. The IDE is informed in real-time by the TypeScript compiler on its rich type information. This gives a couple of major advantages. For example, with TypeScript you can safely do refactorings like renames across your entire codebase. Through code completion you can get inline help on whatever functions a library might offer. No more need to remember them or look them up in online references. Compilation errors are reported directly in the IDE with a red squiggly line while you are busy coding. All in all this allows for a significant gain in productivity compared to working with JavaScript. One can spend more time coding and less time debugging.

There is a wide range of IDEs that have excellent support for TypeScript, like Visual Studio Code, WebStorm, Atom and Sublime.

Strict null checks

Runtime errors of the form cannot read property 'x' of undefined or undefined is not a function are very commonly caused by bugs in JavaScript code. Out of the box, TypeScript already reduces the probability of these kinds of errors occurring, since one cannot use a variable that is not known to the TypeScript compiler (with the exception of properties of any-typed variables). It is still possible, though, to mistakenly utilize a variable that is set to undefined. However, with the 2.0 version of TypeScript you can eliminate these kinds of errors altogether through the usage of non-nullable types. This works as follows:

With strict null checks enabled (--strictNullChecks compiler flag) the TypeScript compiler will not allow undefined to be assigned to a variable unless you explicitly declare it to be of nullable type. For example, let x : number = undefined will result in a compile error. This fits perfectly with type theory, since undefined is not a number. One can define x to be a sum type of number and undefined to correct this: let x : number | undefined = undefined.

Once a type is known to be nullable (meaning it can also hold the value null or undefined), the TypeScript compiler can determine through control-flow-based type analysis whether your code can safely use a variable. In other words, when you check that a variable is not undefined, for example with an if statement, the compiler infers that the type in that branch of the control flow is no longer nullable and can therefore be used safely. Here is a simple example:

let x: number | undefined;
if (x !== undefined) x += 1; // this line will compile, because x is checked.
x += 1; // this line will fail compilation, because x might be undefined.

During the Build 2016 conference, TypeScript co-designer Anders Hejlsberg gave a detailed explanation and demonstration of this feature: video (from 44:30 to 56:30).

Compilation

To use TypeScript you need a build process to compile to JavaScript code. The build process generally takes only a couple of seconds depending of course on the size of your project. The TypeScript compiler supports incremental compilation (--watch compiler flag), so that all subsequent changes can be compiled at greater speed.

The TypeScript compiler can inline source map information in the generated .js files or create separate .map files. Source map information can be used by debugging utilities like the Chrome DevTools and other IDEs to relate the lines in the JavaScript to the ones that generated them in the TypeScript. This makes it possible for you to set breakpoints and inspect variables at runtime directly in your TypeScript code. Source map support works pretty well, and it was around long before TypeScript, but debugging TypeScript is generally not as smooth as debugging JavaScript directly. Take the this keyword for example. Due to the changed semantics of the this keyword around closures since ES2015, this may actually exist during runtime as a variable called _this (see this answer). This may confuse you during debugging, but generally is not a problem if you know about it or inspect the JavaScript code. It should be noted that Babel suffers from the exact same kind of issue.
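If you compile through a tsconfig.json, a minimal fragment enabling source maps (via the sourceMap compiler option) might look like this sketch:

```json
{
  "compilerOptions": {
    "target": "es5",
    "sourceMap": true
  }
}
```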

There are a few other tricks the TypeScript compiler can do, like generating interception code based on decorators, generating module-loading code for different module systems and parsing JSX. However, you will likely require a build tool besides the TypeScript compiler. For example, if you want to compress your code, you will have to add other tools to your build process to do so.

There are TypeScript compilation plugins available for Webpack, Gulp, Grunt and pretty much any other JavaScript build tool out there. The TypeScript documentation has a section on integrating with build tools covering them all. A linter is also available in case you would like even more build-time checking. There are also a great number of seed projects out there that will get you started with TypeScript in combination with a bunch of other technologies like Angular 2, React, Ember, SystemJS, Webpack, Gulp, etc.

JavaScript interoperability

Since TypeScript is so closely related to JavaScript it has great interoperability capabilities, but some extra work is required to work with JavaScript libraries in TypeScript. TypeScript definitions are needed so that the TypeScript compiler understands that function calls like _.groupBy or angular.copy or $.fadeOut are not in fact illegal statements. The definitions for these functions are placed in .d.ts files.

The simplest form a definition can take is to allow an identifier to be used in any way. For example, when using Lodash, a single-line definition file declare var _ : any will allow you to call any function you want on _, but then of course you are also still able to make mistakes: _.foobar() would be a legal TypeScript call, but is of course an illegal call at run-time. If you want proper type support and code completion, your definition file needs to be more exact (see lodash definitions for an example).
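For something in between any and a full library definition, a hand-written definition file is just a few declarations. Here is a hypothetical sketch (the module and function names are invented for illustration):

```typescript
// mylib.d.ts -- a minimal, hand-written definition file
declare module "mylib" {
    // Only the functions you actually call need to be described
    export function fadeOut(selector: string, durationMs?: number): void;
    export function groupBy<T>(items: T[], key: (item: T) => string): { [k: string]: T[] };
}
```

Calls that don’t match these signatures now fail at compile time instead of at run-time.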

Npm modules that come pre-packaged with their own type definitions are automatically understood by the TypeScript compiler (see documentation). For pretty much any other semi-popular JavaScript library that does not include its own definitions, somebody out there has already made type definitions available through another npm module. These modules are prefixed with “@types/” and come from a GitHub repository called DefinitelyTyped.

There is one caveat: the type definitions must match the version of the library you are using at run-time. If they do not, TypeScript might disallow you from calling a function or dereferencing a variable that exists, or allow you to call a function or dereference a variable that does not exist, simply because the types do not match the run-time at compile-time. So make sure you load the right version of the type definitions for the right version of the library you are using.

To be honest, there is a slight hassle to this and it may be one of the reasons you do not choose TypeScript, but instead go for something like Babel that does not suffer from having to get type definitions at all. On the other hand, if you know what you are doing you can easily overcome any kind of issues caused by incorrect or missing definition files.

Converting from JavaScript to TypeScript

Any .js file can be renamed to .ts and run through the TypeScript compiler to get syntactically the same JavaScript code as output (if it was syntactically correct in the first place). Even when the TypeScript compiler reports compilation errors, it will still produce a .js file. It can even accept .js files as input with the --allowJs flag. This allows you to start with TypeScript right away. Unfortunately, compilation errors are likely to occur in the beginning. One does need to remember that these are not show-stopping errors like you may be used to with other compilers.

The compilation errors one gets in the beginning when converting a JavaScript project to a TypeScript project are unavoidable by TypeScript’s nature. TypeScript checks all code for validity and thus it needs to know about all functions and variables that are used. Thus type definitions need to be in place for all of them, otherwise compilation errors are bound to occur. As mentioned in the chapter above, for pretty much any JavaScript framework there are .d.ts files that can easily be acquired with the installation of DefinitelyTyped packages. It might however be that you’ve used some obscure library for which no TypeScript definitions are available, or that you’ve polyfilled some JavaScript primitives. In that case you must supply type definitions for these bits in order for the compilation errors to disappear. Just create a .d.ts file and include it in the tsconfig.json’s files array, so that it is always considered by the TypeScript compiler. In it, declare those bits that TypeScript does not know about as type any. Once you’ve eliminated all errors, you can gradually introduce typing to those parts according to your needs.

Some work will also be needed to (re)configure your build pipeline for TypeScript. As mentioned in the chapter on compilation, there are plenty of good resources out there, and I encourage you to look for seed projects that use the combination of tools you want to work with.

The biggest hurdle is the learning curve. I encourage you to play around with a small project at first. Look how it works, how it builds, which files it uses, how it is configured, how it functions in your IDE, how it is structured, which tools it uses, etc. Converting a large JavaScript codebase to TypeScript is doable when you know what you are doing (read, for example, this blog post on converting 600k lines to TypeScript in 72 hours). Just make sure you have a good grasp of the language before you make the jump.

Adoption

TypeScript is open source (Apache 2 licensed, see GitHub) and backed by Microsoft. Anders Hejlsberg, the lead architect of C# is spearheading the project. It’s a very active project; the TypeScript team has been releasing a lot of new features in the last few years and a lot of great ones are still planned to come (see the roadmap).

In the 2017 StackOverflow developer survey TypeScript was the most popular JavaScript transpiler (9th place overall) and won third place in the most loved programming language category.

Posted in Information Technology

Java Template Engines Comparison

Let’s dive for a while into the template engine landscape of MVC-based frameworks. In this article, you will learn about the different templating options supported by the Spring Boot framework.

Spring Boot has become very popular because of its configuration possibilities and full support for Spring-based applications. In the age of microservices and cutting up monoliths, its motto of “just run” makes it a great fit for prototyping applications.

I don’t think it’s necessary to go much deeper into the Model View Controller (MVC) design pattern itself, because there are many other articles where it is covered.

The main intent of this article is to review the setups of the different Java template engines for Spring-based applications. How can such a question even arise?

Well, the reason is that the Velocity engine has been deprecated for a while, and a lot of developers around the globe need to find well-fitting alternatives.

Let’s begin and define the set for our test. We will compare Apache Velocity, Apache FreeMarker, Thymeleaf, and Pebble.

I have not included the JSP engine because JSPs are a mature technology and have been around since the early days, which means that many articles have already been written about them. The fact remains that JSPs are really hard to beat in terms of raw speed, but that is not the focus now.

Prepare the MvcConfiguration class that extends WebMvcConfigurerAdapter:

@Configuration
@EnableWebMvc
public class MvcConfiguration extends WebMvcConfigurerAdapter {
...

The mentioned MvcConfiguration class must define a @Bean ViewResolver that negotiates the proper ViewResolver for each request.

@Bean(name = "viewResolver")
public ViewResolver contentNegotiatingViewResolver( ContentNegotiationManager manager) {
        List<ViewResolver> resolvers = ...

Each of the mentioned template engines has, under the folder webapp, its own directory dedicated only to it. Such directories (velocity, freemarker, thymeleaf and pebble) contain only engine-related files.

First, here is the deprecated engine that has been widely used over the last several years.

Apache Velocity 

Apache Velocity Template Engine is used for comparison and also to make testing the other three alternatives (FreeMarker, Thymeleaf, and Pebble) a little bit simpler. Apache Velocity is one of the Jakarta projects. Each Velocity template is processed but not compiled to Java, which supports better code separation.

The following code snippet configures the Spring Boot ViewResolver and enables Velocity usage:

@Bean(name = "velocityViewResolver")
public ViewResolver getVelocityViewResolver() {
   VelocityViewResolver resolver = new VelocityViewResolver();
   resolver.setSuffix(".vm");
   resolver.setCache(true);
   return resolver;
}

Having configured the ViewResolver, we need to add it to the contentNegotiatingViewResolver @Bean, which gives us access to the ContentNegotiationManager.

The ContentNegotiationManager provides look-up methods for file extensions based on MediaType. In our example, it will be used to look up the engine-specific file suffix:

@Bean(name = "viewResolver")
public ViewResolver contentNegotiatingViewResolver( ContentNegotiationManager manager) {
   List<ViewResolver> resolvers =
      Arrays.asList(getVelocityViewResolver(),
      ...

Inside the directory webapp, we create the directory velocity and a simple Velocity template. We call the file test.vm. It contains the following content:

<html lang="en">
<head>
   <title>Test Velocity</title>
</head>
<body>
<h2>This is $test</h2>
</body>
</html>

We are almost done. There is only one more important thing: the Spring Boot application properties, which live in a configuration file called application.properties located inside the project’s resources folder. In the Velocity case, it will contain the loader path setup (you can customize it):

spring.velocity.resourceLoaderPath=/velocity/
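To actually render the template, some controller must populate the $test variable and return the view name. Here is a minimal, hypothetical sketch (the class name and request mapping are illustrative, not part of the original setup):

```java
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class VelocityTestController {

    // Returning "test" resolves to /velocity/test.vm via the
    // resourceLoaderPath and the ".vm" suffix configured above
    @RequestMapping("/velocity-test")
    public String test(Model model) {
        model.addAttribute("test", "Velocity!");
        return "test";
    }
}
```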

Congratulations! The deprecated Velocity template engine is up and running, but this is not all we want to achieve. We continue with the next alternative.

Apache FreeMarker

The first candidate considered as a replacement for Velocity is FreeMarker. FreeMarker comes from the Apache Incubator and is supported by the Apache Software Foundation (ASF). The ASF puts effort into supporting FreeMarker development, which is a very good sign for a long life. One more reason may be that FreeMarker is widely used across the Apache family of projects; a good example is the newly accepted NetBeans project!

Let’s add FreeMarker support to the sample project by configuring its ViewResolver in the following way:

@Bean(name = "freeMarkerViewResolver")
public ViewResolver getFreeMakerViewResolver() {
   FreeMarkerViewResolver resolver = new FreeMarkerViewResolver();
   resolver.setSuffix(".ftl");
   resolver.setCache(true);
   return resolver;
}

We also need to add the FreeMarker ViewResolver properly to the ContentNegotiationManager inside the MvcConfiguration:

@Bean(name = "viewResolver")
public ViewResolver contentNegotiatingViewResolver( ContentNegotiationManager manager) {
   List<ViewResolver> resolvers =
      Arrays.asList(getVelocityViewResolver(),
                    getFreeMakerViewResolver(),
                    ...

Now the sample application is ready for simple FreeMarker templates. Inside the webapp folder we create a new folder called freemarker and add the following two files. First, index.ftl:

<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Test application</title>
</head>
<body>
<h2>This is test application Main sample site</h2>
</body>
</html>

The second file, magic.ftl, will contain simple FreeMarker tags:

<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Magic Spell: ${word}!</title>
</head>
<body>
<h2>MagicHappens by word: ${word}!</h2>
</body>
</html>

Hold on, that’s not enough: in the case of FreeMarker we cannot forget to add the proper configuration inside the application.properties file:

spring.freemarker.templateLoaderPath=/freemarker/
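As with Velocity, a controller has to supply the value for the ${word} placeholder. A hypothetical sketch (the class name, mapping and attribute value are illustrative):

```java
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class MagicController {

    // Returning "magic" resolves to /freemarker/magic.ftl; the model
    // attribute name must match the ${word} placeholder in the template
    @RequestMapping("/magic")
    public String magic(Model model) {
        model.addAttribute("word", "Abracadabra");
        return "magic";
    }
}
```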

Now we have FreeMarker up and running inside our sample project! Well done. We can move to the next one.

Pebble Template Engine

It’s quite a new player on the market. It promises useful inheritance features and easy-to-read syntax, but I will not go into detail here because that would be beyond the scope of this article, which is focused on ViewResolver configuration and getting things up and running in the spirit of Spring Boot’s motto. As the first step, we again need to configure the ViewResolver properly. In the case of Pebble, everything is slightly more complicated because the configuration is very closely tied to the Servlet configuration itself. Let’s see; we go back to the MvcConfiguration class and add:

@Bean(name="pebbleViewResolver")
public ViewResolver getPebbleViewResolver(){
   PebbleViewResolver resolver = new PebbleViewResolver();
   resolver.setPrefix("/pebble/");
   resolver.setSuffix(".html");
   resolver.setPebbleEngine(pebbleEngine());
   return resolver;
}

It was previously mentioned that a template engine may support configuration through the application.properties file; this is currently not the case for Pebble. We need to configure everything manually and define more Pebble-related @Beans:

@Bean
public PebbleEngine pebbleEngine() {
  return new PebbleEngine.Builder()
                .loader(this.templatePebbleLoader())
                .extension(pebbleSpringExtension())
                .build();
}

@Bean
public Loader templatePebbleLoader(){
   return new ServletLoader(servletContext);
}

@Bean
public SpringExtension pebbleSpringExtension() {
  return new SpringExtension();
}

As you can see, the templatePebbleLoader @Bean requires direct access to the ServletContext, which needs to be injected into the configuration class.

@Autowired
private ServletContext servletContext;
...

It also means that Pebble takes over any created servlet and acts as the default choice. This may not be bad, but when you want to use Pebble and, for example, Thymeleaf together, you need to do slightly more Spring hacking.

Now we have prepared the Pebble configuration, so let’s create a new pebble folder under webapp and add a new template file, pebble.html:

<html>
<head>
    <title>{{ pebble }}</title>
</head>
<body>
{{ pebble }}
</body>
</html>

Now we are finished: Pebble is up and running, and we can go directly to the last option.

Thymeleaf Template Engine

Thymeleaf presents itself as the ideal choice for HTML5 JVM web development. That may be true, but it’s beyond the scope of this article; you can test the claim by using the example project on my GitHub account. Thymeleaf has better Spring support than Pebble. This allows us to use the application.properties file for its configuration and add the Thymeleaf setup options there:

spring.thymeleaf.prefix=/thymeleaf/
spring.thymeleaf.suffix=.htm

But the rest is very similar to Pebble:

@Bean(name = "thymeleafViewResolver")
public ViewResolver getThymeleafViewResolver() {
  ThymeleafViewResolver resolver = new ThymeleafViewResolver();
  resolver.setTemplateEngine(getThymeleafTemplateEngine());
  resolver.setCache(true);
  return resolver;
}

Thymeleaf similarly takes control over any new servlet creation, as you can see in the MvcConfiguration @Bean.

@Bean(name ="thymeleafTemplateEngine")
public SpringTemplateEngine getThymeleafTemplateEngine() {
  SpringTemplateEngine templateEngine = new SpringTemplateEngine();
  templateEngine.setTemplateResolver(getThymeleafTemplateResolver());
  return templateEngine;
}

@Bean(name ="thymeleafTemplateResolver")
public ServletContextTemplateResolver getThymeleafTemplateResolver() {
  ServletContextTemplateResolver templateResolver = new ServletContextTemplateResolver();
  templateResolver.setPrefix("/thymeleaf/");
  templateResolver.setSuffix(".htm");
  return templateResolver;
}

Now it’s time to add ViewResolver to the content negotiation configuration:

@Bean(name = "viewResolver")
public ViewResolver contentNegotiatingViewResolver( ContentNegotiationManager manager) {
   List<ViewResolver> resolvers =
      Arrays.asList(getVelocityViewResolver(),
                    getFreeMakerViewResolver(),
//                  getPebbleViewResolver()
                    getThymeleafViewResolver()
                );
      ContentNegotiatingViewResolver resolver = new ContentNegotiatingViewResolver();
      resolver.setViewResolvers(resolvers);
      resolver.setContentNegotiationManager(manager);
      return resolver;
}
...

For the last step, we will again create a new folder under webapp, this time called thymeleaf, and add a thyme.htm file there:

<!DOCTYPE HTML>
<html xmlns:th="http://www.thymeleaf.org">
<head>
    <title>Getting Started: Thymeleaf</title>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body>
<p th:text="'HERE IS, ' + ${thyme} + '!'" />
</body>
</html>

And congratulations, you have successfully configured all four Spring Boot supported template engines.

At the end of the configuration section, it is important to point out that each of the engines has assigned its own @Controller which is responsible for a proper output generation.

Velocity Controller 

@Controller
public class VelocityHelloController {

    @RequestMapping(value = "/velocity")
    public String test(Model model){
        System.out.println("Test");
        model.addAttribute("test", "Here is Velocity");
        return "test";
    }
}

FreeMarker

@Controller
public class FMHelloController {


    @RequestMapping("/")
    public String index(){
        System.out.println("INDEX");
        return "index";
    }

    @RequestMapping("/magic")
    public String magic(Model model, @RequestParam(value = "word", required=false, defaultValue="MagicHappens") String word) {
        System.out.println("MAGIC");
        model.addAttribute("word", word);
        return "magic";
    }
}

Pebble

@Controller
public class PebbleHelloController {

    @RequestMapping(value = "/pebble")
    public String something(Model model){
        System.out.println("Pebble");
        model.addAttribute("pebble", "The Pebble");
        return "pebble";
    }
}

Thymeleaf

@Controller
public class TLHelloController {

    @RequestMapping(value = "/thyme")
    public String something(Model model){
        System.out.println("Thymeleaf");
        model.addAttribute("thyme", "The Thymeleaf");
        return "thyme";
    }
}

Summary

Now is the right time to write a few last words about the general feeling from all the mentioned possibilities. I don’t want to highlight any of the tested choices as the best replacement for the deprecated Velocity template engine, but based on the configuration experience and Spring framework support, I’d choose FreeMarker. By choosing FreeMarker, I won’t be limited in using Velocity or any other option in parallel, but as mentioned before, making the right choice is beyond the scope of this article.

I have created a sample Gradle project that imports all the template engine starters. The setup can be found inside the configuration file build.gradle.

dependencies {
    compile("org.springframework.boot:spring-boot-starter-web:${springBootVersion}")
    compile("org.springframework.boot:spring-boot-starter-freemarker:$springBootVersion")
    compile("org.springframework.boot:spring-boot-starter-velocity:$springBootVersion")
    compile("org.springframework.boot:spring-boot-starter-thymeleaf:$springBootVersion")
    compile("com.mitchellbosecke:pebble-spring-boot-starter:$pebbleVersion")
    testCompile "junit:junit:${junitVersion}"
}

Enjoy the https://github.com/mirage22/spring-boot-freemaker-demo sample project in testing!

Posted in Information Technology

ECMA Script 6

The European Computer Manufacturer’s Association (ECMA) has adopted ECMAScript as a standard for scripting languages. Widely used scripting languages like JavaScript, Jscript and ActionScript are developed based on the ECMAScript standard. ECMAScript standard keeps evolving consistently to accelerate web application development and meet emerging web application development trends.

As its name indicates, ECMAScript 6 or ECMAScript 2015 is the sixth and latest version of the scripting language standard. The syntax rules of ECMAScript 6 make it easier for developers to write complex web applications by taking advantage of new classes, modules, methods, keywords, and data types. At the same time, version 6 of ECMAScript accelerates coding by providing a number of new shortcuts.

Overview of Important Features of ECMAScript 6/ ECMAScript 2015

Classes

While using ECMAScript 6, programmers can use classes built on the prototype-based object-oriented (OO) pattern. Developers can declare new classes in a declarative way using the class keyword, and take advantage of the syntax to reuse code and create objects. The syntax further makes it easier to extend classes and instantiate new objects. The classes provided by ECMAScript 6 also support prototype-based inheritance, instance and static methods, super calls, and constructors.
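
For example, a short sketch (the Animal and Dog names are illustrative):

```javascript
// ES6 class syntax: declarative classes over the prototype-based OO pattern
class Animal {
  constructor(name) {
    this.name = name;
  }
  speak() {                     // instance method
    return `${this.name} makes a sound`;
  }
  static kind() {               // static method, called on the class itself
    return 'animal';
  }
}

// extends + super give prototype-based inheritance a familiar syntax
class Dog extends Animal {
  speak() {
    return `${super.speak()}: ${this.name} barks`;
  }
}

console.log(new Dog('Rex').speak());  // Rex makes a sound: Rex barks
console.log(Dog.kind());              // animal (static methods are inherited too)
```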

Modules

Modules make ECMAScript compete with several modern programming languages. While using the current version of the scripting language standard, programmers can take advantage of new syntax rules and a new module loader mechanism. The programmers can use the syntax rules to write modules which are compatible with latest web technologies and frameworks. At the same time, the module loader mechanism makes it easier for programmers to implement these modules.

Arrays

While using ECMAScript 6, programmers can use a redesigned array object that supports new static class methods as well as new array prototype methods. The current version of the standard further supports typed arrays. Programmers can use typed arrays, as byte-based data structures, to manipulate file formats or implement network protocols.
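
A minimal sketch of a typed array as a byte-level view:

```javascript
// A typed array is a fixed-size, byte-level view over a raw ArrayBuffer
const buffer = new ArrayBuffer(8);       // 8 raw bytes
const ints = new Uint32Array(buffer);    // viewed as two 32-bit unsigned integers
ints[0] = 0x12345678;
const bytes = new Uint8Array(buffer);    // the same memory, byte by byte

console.log(ints.length);   // 2
console.log(bytes.length);  // 8
```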

Symbols

ECMAScript 6 supports a new primitive data type called symbol. Programmers can use symbols alongside conventional primitive data types like number and string, but symbols are particularly useful for creating unique constants and unique identifiers for object properties. Each time a programmer calls the Symbol function, it returns a new, unique value of the symbol data type. Symbol also exposes static properties and methods that interact with certain built-in objects.
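
For example:

```javascript
// Every call to Symbol() yields a unique value; the description is only a label
const id = Symbol('id');
const user = { name: 'Ada', [id]: 42 };   // computed property key using the symbol

console.log(Symbol('id') === id);  // false: same description, different symbol
console.log(user[id]);             // 42
console.log(Object.keys(user));    // [ 'name' ]  (symbol keys stay out of normal enumeration)
```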

Destructuring

In addition to supporting typed arrays, ECMAScript further provides a convenient option – destructuring – for extracting data from arrays and objects. The syntax rule enables programmers to assign a value to multiple variables simultaneously without writing additional code. Also, the developers can use destructuring to change variable names, simulate multiple return values, and assign default values to argument objects.
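
A few of these uses in one short sketch:

```javascript
// Destructuring extracts values from arrays and objects in one assignment
const [first, second = 10] = [1];                          // default for a missing element
const { name: userName, age } = { name: 'Ada', age: 36 };  // rename while extracting

// Destructured parameters with defaults act like named arguments
function area({ width, height = 1 }) {
  return width * height;
}

console.log(first, second);      // 1 10
console.log(userName, age);      // Ada 36
console.log(area({ width: 5 })); // 5
```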

Arrow Functions

This function shorthand is syntactically similar to the feature provided by modern programming languages like Java, C#, and CoffeeScript. Programmers can define arrow functions with the => syntax, without using the function keyword. An arrow function supports both expression and statement bodies. But arrow functions, unlike regular functions, share the same lexical this as their surrounding code. Developers can also use arrow functions inside regular functions; an arrow used inside a function sees the arguments of the enclosing function.
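
Both points in a small sketch:

```javascript
const square = x => x * x;   // expression body, implicit return

const counter = {
  count: 0,
  start() {
    // the arrow keeps the lexical `this` of start(),
    // so `this.count` still refers to the counter object
    [1, 2, 3].forEach(() => { this.count += 1; });
  }
};

counter.start();
console.log(square(4));      // 16
console.log(counter.count);  // 3
```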

Template Literals

The feature is similar to the string interpolation feature provided by Python or Perl. Programmers can take advantage of template literals to simplify string creation and interpolation. Template literals also let developers build strings without conventional concatenation and embed arbitrary values into a template without restriction.
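
For example:

```javascript
// Backtick template literals interpolate values directly, no + concatenation
const who = 'world';
const greeting = `Hello, ${who}!`;
console.log(greeting);            // Hello, world!
console.log(`2 + 3 = ${2 + 3}`);  // 2 + 3 = 5  (any expression can be embedded)
```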

Multi Line Strings

While writing JavaScript code, the programmers can write multi line strings in a number of ways. The template strings feature provided by ECMAScript 6 enables developers to create multi line strings without writing additional code. They can take advantage of template strings to create multi line strings without using escapes or concatenating strings. They even have option to use template literals in multi line strings.
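
A quick illustration:

```javascript
// Template literals preserve literal line breaks: no \n escapes, no concatenation
const haiku = `an old pond
a frog jumps in
splash`;

console.log(haiku.split('\n').length);  // 3
```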

New Keywords

While using ECMAScript 6, programmers can use a number of new keywords to keep the code clean and reusable. For instance, they can use the let keyword to declare block-scoped local variables, limited to the enclosing function, statement, or expression. They now have the option to declare a variable using two keywords: var and let. Likewise, the const keyword makes it easier for programmers to declare immutable bindings or constants. A programmer cannot assign new content to a constant, but can still change the value and properties of an object held by the constant.
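
Both keywords in a short sketch:

```javascript
for (let i = 0; i < 3; i++) {
  // i exists only inside this block
}
// typeof i here is 'undefined': let is block-scoped, unlike var

const config = { retries: 3 };
// config = {};          // would throw: assignment to a constant binding
config.retries = 5;      // allowed: the object held by the constant is mutable
console.log(config.retries);  // 5
```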

New Operators

The new operators provided by ECMAScript 2015 also help programmers write clean and reusable code, performing common tasks without extra boilerplate. For instance, a programmer can use the spread operator (…) to represent a list of expected values, insert the elements of one array into another, or pass arguments to a function from an array.
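
Both spread uses in a short sketch:

```javascript
const base = [2, 3];
const combined = [1, ...base, 4];    // spread one array's elements into another
console.log(combined);               // [ 1, 2, 3, 4 ]
console.log(Math.max(...combined));  // 4 -- spread an array into call arguments
```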

New Methods

ECMAScript 6 introduces a number of new built-in methods. The new methods make it easier for programmers to work with various objects – array, math, string, number, object, date, promise, proxy and reflect. The developers can further take advantage of these methods to manipulate various objects without writing additional code. However, several web browsers are yet to support the new methods provided by ECMAScript 2015.
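
A handful of the new built-ins:

```javascript
console.log('es6'.repeat(2));                        // es6es6
console.log('es6'.startsWith('e'));                  // true
console.log(Number.isInteger(42));                   // true
console.log(Array.of(7));                            // [ 7 ]  (unlike Array(7), which is 7 empty slots)
console.log(Object.assign({}, { a: 1 }, { b: 2 }));  // { a: 1, b: 2 }
```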

On the whole, ECMAScript 6 comes with several new language features. The new language features enable JavaScript programmers to write custom and complex web applications rapidly. But various web browsers have been implementing the current version of ECMAScript gradually. Hence, the ECMAScript features supported by individual web browsers differ. The developers can easily transpile the code written in ECMAScript 6 to ECMAScript 5 through a command line or specific plug-ins. But they can always accelerate web application development by switching from ECMAScript 5 to ECMAScript 6.


Posted in Information Technology

Node vs Django

Node.js (55,432 ★ on GitHub) and Django (37,614 ★ on GitHub) are two powerful tools for building web applications.

Node.js has a “JavaScript everywhere” motto, ensuring JavaScript is used on both the server side and client side of web applications, and Django has a “framework for perfectionists with deadlines” motto, reflecting its aim to help developers build applications quickly.

They are being implemented in a lot of big projects, they have a large user community, and are being upgraded on a regular basis. The quality of both tools leaves developers feeling confused as to which tool to choose for their projects. The article aims to clear the air and help you make a decision.

Node.js


JavaScript is known mainly for its strengths in client-side development, but Node.js is doing the exact opposite by working wonders on the server-side.

Node is an open source JavaScript runtime environment which was written in C, C++, and JavaScript, built on the Google V8 JavaScript engine, and released in 2009. Node.js is based on an event-driven, non-blocking I/O model.

Node can be installed on Windows using the Windows Installer. Installation is simple and can be done just by following the prompts after downloading the installer from the official website.

Successful installation can be confirmed from the Windows command prompt or PowerShell with:

node -v

For Linux (Ubuntu) users, Node.js can be installed from the terminal with:

sudo apt-get update
sudo apt-get install nodejs
sudo apt-get install npm

Successful installation on Linux (Ubuntu) can be confirmed in the terminal with:

nodejs -v

The Node Package Manager (npm) is used to install packages to be used with Node.js.

Pros

  • Availability of great libraries.
  • High performance.
  • Awesome for building APIs.
  • It has an awesome package manager.
  • Huge user community.
  • Handles concurrent requests easily.

Cons

  • Asynchronous programming could be difficult to work with.
  • Not great with CPU intensive apps due to its single thread.
  • Callbacks result in tons of nested callbacks.

Django


Django is a very robust open source Python web framework. It is very high-level, as most of the low-level stuff has been abstracted out. It is known for having a “batteries included” philosophy, therefore it’s ready to be used out-of-the-box.

Quick development projects are possible with Django and it’s beginner friendly for people who have an understanding of Python already.

Django was built and modeled on pragmatic and clean design and comes with all the major components needed in building complex web applications.

Installation is very easy and can be done using Python’s package management tool, known as pip. From the terminal, the command below is all that is needed on both Windows and Linux operating systems, provided pip is installed.

pip install django

To confirm its installation, simply activate the Python shell and import Django. Type in “python” in the terminal like:

python

And get something like:

Python 3.6.6 (default, Sep 12 2018, 18:26:19)
[GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>

Then import Django using:

import django

If there are no errors, then everything worked fine.

Pros

  • Little to no security loopholes.
  • Works fine with relational databases.
  • Easy to learn.
  • Speedy development process.
  • Very scalable.
  • Huge user community.
  • Has great documentation.

Cons

  • Django is monolithic, i.e. a single-tiered software application.
  • Not great for small-scale apps.
  • A full understanding of the framework is needed.

The Comparison

Both Are Open Source

Node.js and Django are both free to use. You will not face any licensing issues when using either for commercial software. They are also open source, so you can contribute to the projects when you find a feature or bug to work on.

Check out the Node.js repository and Django repository.

Learning Curve

Node.js is a JavaScript runtime taken out of the client-side browser environment and Django is a Python framework. To be able to learn either tool, you would need to be comfortable with working with their primary programming language.

To work with Node.js, you need an understanding of asynchronous programming, Node’s native methods, and architecture.

There are lots of tutorials online for Node.js, however, lots of examples are bad and that could make learning much more difficult.

To work with Django, you need to understand its methods and the features that come out of the box, as well as the framework’s MTV (Model-Template-View) architecture.

While there are lots of good tutorials for Django on the web, you’ll find there are a large number of outdated ones teaching the old way of doing things.

While learning either Node.js or Django requires knowledge of its base language, Node introduces some complex concepts that make it a bit more difficult for beginners compared to Django.

Syntax

Node.js is simply JavaScript taken outside of the client-side browser environment, so its syntax is regular JavaScript syntax.

Here is a ‘hello world’ app in Node.js:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World!');
}).listen(8080);

Django is built on Python, therefore it uses Python syntax too. “Hello world!” in Python would simply be:

print("Hello World")

However, since Django is a framework it forces you to use a particular structure that identifies with the MTV pattern, so we would need to write different scripts to produce “Hello World” on the web app.

Here’s a look at the basic views.py file for Hello World:

from django.http import HttpResponse
def hello(request):
    return HttpResponse("Hello world")

And here is the urls.py file:

from django.conf.urls import include, url
from django.contrib import admin
from mysite.views import hello

urlpatterns = [
    url(r'^admin/', include(admin.site.urls)),
    url(r'^hello/$', hello),
]

Scalability and Performance

Both tools have great scalability and performance. However, while Django seems to have the edge with scalability, Node.js has the edge with performance.

Node.js applications can be scaled by using the cluster module to clone different instances of the application’s workload using a load balancer. But due to Node.js working with single threads, it performs poorly in CPU intensive conditions.

Django is highly scalable, as the caching of applications is quite easy and can be done using tools like MemCache. NGINX can also be used to ensure that compressed static assets are served, and it can also be used to handle data migrations successfully even as data becomes more robust.

User Community

Node.js and Django both have large user communities. The primary reasons are that developers can use a server-side flavor of JavaScript for the backend of web applications with Node.js, and can take advantage of Python’s easy-to-use syntax with Django. There are more tutorials online for Node.js than for Django, with more companies adopting Node as their backend web technology.

Uber, Twitter, eBay, Netflix, DuckDuckGo, PayPal, LinkedIn, Trello, Mozilla, and GoDaddy are some big names using Node.js as their backend technology.

Pinterest, Instagram, Eventbrite, Sentry, Zapier, Dropbox, Spotify, and YouTube are also some big names using Django as their backend technology.

Node.js vs. Django

Conclusion

Both tools are great for building web applications; however, there are use cases where each stands out.

Django, for example, is a great choice when you are considering using a relational database, a lot of external libraries, have security as a top priority on your list, and need to build the application quickly. Use Node.js when you have an asynchronous stack from the server, need great performance, intend on building features from scratch, and want an app that does the heavy lifting of client-side processing.

Choose whatever tool best suits your needs, both tools are powerful for web development.


Posted in Information Technology

Okta: Build OAuth 2.0 using Spring

User management is required for most web applications, but building it isn’t always an easy task. Many developers work around the clock to ensure their app is secure by seeking out individual vulnerabilities to patch. Luckily, you can increase your own efficiency by implementing OAuth 2.0 to your web application with Spring Security and Spring Boot. The process gets even easier by integrating with Okta on top of Spring Boot.

In this tutorial, you’ll first build an OAuth 2.0 web application and authentication server using Spring Boot and Spring Security. After that, you’ll use Okta to get rid of your self-hosted authentication server and simplify your Spring Boot application even more.

Let’s get started!

Create an OAuth 2.0 Server

Start by going to the Spring Initializr and creating a new project with the following settings:

  • Change project type from Maven to Gradle.
  • Change the Group to com.okta.spring.
  • Change the Artifact to AuthorizationServerApplication.
  • Add one dependency: Web.

Spring Initializr

Download the project and copy it somewhere that makes sense on your hard drive. In this tutorial, you’re going to create three different projects, so you might want to create a parent directory, something like SpringBootOAuth, somewhere.

You need to add one dependency to the build.gradle file:

implementation 'org.springframework.security.oauth:spring-security-oauth2:2.3.3.RELEASE'

This adds in Spring’s OAuth goodness.

Update the src/main/resources/application.properties to match:

server.port=8081
server.servlet.context-path=/auth
user.oauth.clientId=R2dpxQ3vPrtfgF72
user.oauth.clientSecret=fDw7Mpkk5czHNuSRtmhGmAGL42CaxQB9
user.oauth.redirectUris=http://localhost:8082/login/oauth2/code/
user.oauth.user.username=Andrew
user.oauth.user.password=abcd

This sets the server port, the servlet context path, and some default values for the in-memory, ad hoc generated tokens the server will return to the client, as well as our user’s username and password. In production, you would need a more sophisticated back end for a real authentication server, without hard-coded redirect URIs, usernames, and passwords.

Update the AuthorizationServerApplication class to add @EnableResourceServer:

package com.okta.spring.AuthorizationServerApplication;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.security.oauth2.config.annotation.web.configuration.EnableResourceServer;

@SpringBootApplication
@EnableResourceServer
public class AuthorizationServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(AuthorizationServerApplication.class, args);
    }
}

Create a new class AuthServerConfig in the same package as your application class com.okta.spring.AuthorizationServerApplication under src/main/java (from now on please create Java classes in src/main/java/com/okta/spring/AuthorizationServerApplication). This Spring configuration class enables and configures an OAuth authorization server.

package com.okta.spring.AuthorizationServerApplication;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.security.oauth2.config.annotation.configurers.ClientDetailsServiceConfigurer;
import org.springframework.security.oauth2.config.annotation.web.configuration.AuthorizationServerConfigurerAdapter;
import org.springframework.security.oauth2.config.annotation.web.configuration.EnableAuthorizationServer;
import org.springframework.security.oauth2.config.annotation.web.configurers.AuthorizationServerSecurityConfigurer;

@Configuration
@EnableAuthorizationServer
public class AuthServerConfig extends AuthorizationServerConfigurerAdapter {

    @Value("${user.oauth.clientId}")
    private String ClientID;
    @Value("${user.oauth.clientSecret}")
    private String ClientSecret;
    @Value("${user.oauth.redirectUris}")
    private String RedirectURLs;

   private final PasswordEncoder passwordEncoder;

    public AuthServerConfig(PasswordEncoder passwordEncoder) {
        this.passwordEncoder = passwordEncoder;
    }

    @Override
    public void configure(
        AuthorizationServerSecurityConfigurer oauthServer) throws Exception {
        oauthServer.tokenKeyAccess("permitAll()")
            .checkTokenAccess("isAuthenticated()");
    }

    @Override
    public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
        clients.inMemory()
            .withClient(ClientID)
            .secret(passwordEncoder.encode(ClientSecret))
            .authorizedGrantTypes("authorization_code")
            .scopes("user_info")
            .autoApprove(true)
            .redirectUris(RedirectURLs);
    }
}

The AuthServerConfig class is the class that will create and return our JSON web tokens when the client properly authenticates.

Create a SecurityConfiguration class:

package com.okta.spring.AuthorizationServerApplication;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.annotation.Order;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;

@Configuration
@Order(1)
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Value("${user.oauth.user.username}")
    private String username;
    @Value("${user.oauth.user.password}")
    private String password;

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.requestMatchers()
            .antMatchers("/login", "/oauth/authorize")
            .and()
            .authorizeRequests()
            .anyRequest().authenticated()
            .and()
            .formLogin().permitAll();
    }

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
            .withUser(username)
            .password(passwordEncoder().encode(password))
            .roles("USER");
    }

    @Bean
    public BCryptPasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }
}

The SecurityConfiguration class is the class that actually authenticates requests to your authorization server. Notice near the top where it’s pulling in the username and password from the application.properties file.

Lastly, create a Java class called UserController:

package com.okta.spring.AuthorizationServerApplication;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.security.Principal;

@RestController
public class UserController {

    @GetMapping("/user/me")
    public Principal user(Principal principal) {
        return principal;
    }
}

This file allows the client apps to find out more about the users that authenticate with the server.

That’s your authorization server! Not too bad. Spring Boot makes it pretty easy. Four files and a few properties. In a little bit, you’ll make it even simpler with Okta, but for the moment, move on to creating a client app you can use to test the auth server.

Start the authorization server:

./gradlew bootRun

Wait a bit for it to finish running. The terminal should end with something like this:

...
2019-02-23 19:06:49.122  INFO 54333 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8081 (http) with context path '/auth'
2019-02-23 19:06:49.128  INFO 54333 --- [           main] c.o.s.A.AuthorizationServerApplication   : Started AuthorizationServerApplication in 3.502 seconds (JVM running for 3.945)

NOTE: If you get an error about JAXB (java.lang.ClassNotFoundException: javax.xml.bind.JAXBException), it’s because you’re using Java 11. To fix this, add JAXB to your build.gradle.

implementation 'org.glassfish.jaxb:jaxb-runtime'

Build Your Client App

Back to Spring Initializr. Create a new project with the following settings:

  • Project type should be Gradle (not Maven).
  • Group: com.okta.spring.
  • Artifact: SpringBootOAuthClient.
  • Add three dependencies: Web, Thymeleaf, OAuth2 Client.

Create Client App

Download the project, copy it to its final resting place, and unpack it.

This time you need to add the following dependency to your build.gradle file:

implementation 'org.thymeleaf.extras:thymeleaf-extras-springsecurity5:3.0.4.RELEASE'

Rename the src/main/resources/application.properties to application.yml and update it to match the YAML below:

server:
  port: 8082
  session:
    cookie:
      name: UISESSION
spring:
  thymeleaf:
    cache: false
  security:
    oauth2:
      client:
        registration:
          custom-client:
            client-id: R2dpxQ3vPrtfgF72
            client-secret: fDw7Mpkk5czHNuSRtmhGmAGL42CaxQB9
            client-name: Auth Server
            scope: user_info
            provider: custom-provider
            redirect-uri-template: http://localhost:8082/login/oauth2/code/
            client-authentication-method: basic
            authorization-grant-type: authorization_code
        provider:
          custom-provider:
            token-uri: http://localhost:8081/auth/oauth/token
            authorization-uri: http://localhost:8081/auth/oauth/authorize
            user-info-uri: http://localhost:8081/auth/user/me
            user-name-attribute: name

Notice that here, you’re configuring the clientId and clientSecret, as well as various URIs for your authentication server. These need to match the values in the other project.

Update the SpringBootOAuthClientApplication class to match:

package com.okta.spring.SpringBootOAuthClient;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class SpringBootOAuthClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootOAuthClientApplication.class, args);
    }
}

Create a new Java class called WebController:

package com.okta.spring.SpringBootOAuthClient;

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

import java.security.Principal;

@Controller
public class WebController {

    @RequestMapping("/securedPage")
    public String securedPage(Model model, Principal principal) {
        return "securedPage";
    }

    @RequestMapping("/")
    public String index(Model model, Principal principal) {
        return "index";
    }
}

This is the controller that maps incoming requests to your Thymeleaf template files (which you’ll make in a sec).

Create another Java class named SecurityConfiguration:

package com.okta.spring.SpringBootOAuthClient;

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {
    @Override
    public void configure(HttpSecurity http) throws Exception {
        http.antMatcher("/**").authorizeRequests()
            .antMatchers("/", "/login**").permitAll()
            .anyRequest().authenticated()
            .and()
            .oauth2Login();
    }
}

This class defines the Spring Security configuration for your application: allowing all requests on the home path and requiring authentication for all other routes. It also sets up the Spring Boot OAuth login flow.

The last files you need to add are the two Thymeleaf template files. A full look at Thymeleaf templating is well beyond the scope of this tutorial, but you can take a look at their website for more info.

The templates go in the src/main/resources/templates directory. You’ll notice in the controller above that the route methods simply return strings. When the Thymeleaf dependencies are included in the build, Spring Boot assumes you’re returning the name of a template file from the controllers, and so the app will look in src/main/resources/templates for a file named the returned string plus .html.
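As a rough sketch of that convention (the prefix and suffix shown here are Spring Boot’s documented Thymeleaf defaults, spring.thymeleaf.prefix and spring.thymeleaf.suffix), the view name resolves like this:

```java
// Sketch of Spring Boot's default Thymeleaf view resolution:
//   spring.thymeleaf.prefix defaults to "classpath:/templates/"
//   spring.thymeleaf.suffix defaults to ".html"
public class ViewResolutionSketch {

    // Mimics how a controller's returned view name becomes a template path.
    static String resolve(String viewName) {
        return "classpath:/templates/" + viewName + ".html";
    }

    public static void main(String[] args) {
        System.out.println(resolve("index"));       // classpath:/templates/index.html
        System.out.println(resolve("securedPage")); // classpath:/templates/securedPage.html
    }
}
```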

Create the home template: src/main/resources/templates/index.html:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Home</title>
</head>
<body>
    <h1>Spring Security SSO</h1>
    <a href="securedPage">Login</a>
</body>
</html>

And the secured template: src/main/resources/templates/securedPage.html:

<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<head>
    <meta charset="UTF-8">
    <title>Secured Page</title>
</head>
<body>
    <h1>Secured Page</h1>
    <span th:text="${#authentication.name}"></span>
</body>
</html>

I’ll just point out this one line:

<span th:text="${#authentication.name}"></span>

This is the line that will insert the name of the authenticated user. This line is why you needed the org.thymeleaf.extras:thymeleaf-extras-springsecurity5 dependency in the build.gradle file.

Start the client application:

./gradlew bootRun

Wait a moment for it to finish. The terminal should end with something like this:

...
2019-02-23 19:29:04.448  INFO 54893 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8082 (http) with context path ''
2019-02-23 19:29:04.453  INFO 54893 --- [           main] c.o.s.S.SpringBootOAuthClientApplication : Started SpringBootOAuthClientApplication in 3.911 seconds (JVM running for 4.403)

Test the Resource Server

Navigate in your browser of choice to your client app at http://localhost:8082/.

Click the Login link.

You’ll be directed to the login page:

Sign-In Form

Enter username Andrew and password abcd (from the application.properties file from the authentication server).

Click Sign In and you’ll be taken to the super fancy securedPage.html template that should say “Secured Page” and “Andrew”.

Great! It works. Now you’re gonna make it even simpler.

You can stop both server and client Spring Boot apps.

Create an OpenID Connect Application

Okta is a SaaS (software-as-a-service) authentication and authorization provider. We provide free accounts to developers so they can develop OIDC apps with no fuss. Head over to developer.okta.com and sign up for an account. After you’ve verified your email, log in and perform the following steps:

  • Go to Applications > Add Application.
  • Select application type Web and click Next.
  • Give the app a name. I named mine “Spring Boot OAuth”.
  • Under Login redirect URIs change the value to http://localhost:8080/login/oauth2/code/okta. The rest of the default values will work.
  • Click Done.

Leave the page open or take note of the Client ID and Client Secret. You’ll need them in a moment.

Create a New Spring Boot App

Back to the Spring Initializr one more time. Create a new project with the following settings:

  • Change project type from Maven to Gradle.
  • Change the Group to com.okta.spring.
  • Change the Artifact to OktaOAuthClient.
  • Add three dependencies: Web, Thymeleaf, and Okta.
  • Click Generate Project.

Create Okta OAuth App

Download the project and unpack it somewhere.

In the build.gradle file, add the following dependency:

implementation 'org.thymeleaf.extras:thymeleaf-extras-springsecurity5:3.0.4.RELEASE'

Also while you’re there, notice the dependency com.okta.spring:okta-spring-boot-starter:1.1.0. This is the Okta Spring Boot Starter. It’s a handy project that makes integrating Okta with Spring Boot nice and easy. For more info, take a look at the project’s GitHub.
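For a sense of what the starter saves you from writing, the okta.oauth2.* properties you’ll add below are roughly equivalent to a vanilla Spring Security client registration like the following. This is only a comparison sketch; with the starter in place you don’t need any of it:

```yaml
# Roughly what the Okta starter configures for you behind the scenes
# (standard Spring Security OAuth2 client properties, shown for comparison):
spring:
  security:
    oauth2:
      client:
        registration:
          okta:
            client-id: {yourClientId}
            client-secret: {yourClientSecret}
        provider:
          okta:
            issuer-uri: https://{yourOktaDomain}/oauth2/default
```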

Rename src/main/resources/application.properties to application.yml and add the following:

server:
  port: 8080
okta:
  oauth2:
    issuer: https://{yourOktaDomain}/oauth2/default
    client-id: {yourClientId}
    client-secret: {yourClientSecret}
spring:
  thymeleaf:
    cache: false

Remember when I said you’d need your Client ID and Client Secret? Well, the time has come. Fill them into the file, along with your Okta issuer URL, which will look something like this: dev-123456.okta.com. You can find it under API > Authorization Servers.
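If you’d rather not commit credentials to the file, Spring Boot’s property placeholders can pull them from environment variables instead. This is a common practice, and the variable names here are just examples:

```yaml
okta:
  oauth2:
    issuer: https://{yourOktaDomain}/oauth2/default
    client-id: ${OKTA_CLIENT_ID}
    client-secret: ${OKTA_CLIENT_SECRET}
```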

You also need two similar template files in the src/main/resources/templates directory. The index.html template file is exactly the same, and can be copied over if you like. The securedPage.html template file is slightly different because of the way the authentication information is returned from Okta as compared to the simple authentication server you built earlier.

Create the home template: src/main/resources/templates/index.html:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Home</title>
</head>
<body>
    <h1>Spring Security SSO</h1>
    <a href="securedPage">Login</a>
</body>
</html>

And the secured template: src/main/resources/templates/securedPage.html:

<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<head>
    <meta charset="UTF-8">
    <title>Secured Page</title>
</head>
<body>
    <h1>Secured Page</h1>
    <span th:text="${#authentication.principal.attributes.name}">Joe Coder</span>
</body>
</html>
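The difference between the two secured templates boils down to where the display name lives: with the simple form-login server, the authentication’s own name is the username, while with Okta the principal is an OAuth2User whose claims sit in an attributes map. A plain-Java sketch of that distinction (the attribute values here are illustrative, not real Okta output):

```java
import java.util.Map;

// Sketch of why the two securedPage.html templates differ:
// - simple auth server: ${#authentication.name} is the login username
// - Okta OAuth login:   ${#authentication.principal.attributes.name}
//   reads the "name" claim out of the OAuth2User's attribute map
public class PrincipalSketch {

    // Corresponds to ${#authentication.name}
    static String nameFromSimpleAuth(String principalName) {
        return principalName;
    }

    // Corresponds to ${#authentication.principal.attributes.name}
    static String nameFromOAuth2(Map<String, Object> attributes) {
        return (String) attributes.get("name");
    }

    public static void main(String[] args) {
        System.out.println(nameFromSimpleAuth("Andrew"));               // Andrew
        System.out.println(nameFromOAuth2(Map.of("name", "Joe Coder"))); // Joe Coder
    }
}
```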

Create a Java class named WebController in the com.okta.spring.OktaOAuthClient package:

package com.okta.spring.OktaOAuthClient;

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

import java.security.Principal;

@Controller
public class WebController {

    @RequestMapping("/securedPage")
    public String securedPage(Model model, Principal principal) {
        return "securedPage";
    }

    @RequestMapping("/")
    public String index(Model model, Principal principal) {
        return "index";
    }
}

This class simply creates two routes, one for the home route and one for the secured route. Again, Spring Boot and Thymeleaf are auto-magicking this to the two template files in src/main/resources/templates.

Finally, create another Java class named SecurityConfiguration:

package com.okta.spring.OktaOAuthClient;

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {
    @Override
    public void configure(HttpSecurity http) throws Exception {
        http.antMatcher("/**").authorizeRequests()
            .antMatchers("/").permitAll()
            .anyRequest().authenticated()
            .and()
            .oauth2Login();
    }
}

That’s it! Bam!

Run the Okta-OAuth-powered client:

./gradlew bootRun

You should see a bunch of output that ends with:

...
2019-02-23 20:09:03.465  INFO 55890 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2019-02-23 20:09:03.470  INFO 55890 --- [           main] c.o.s.O.OktaOAuthClientApplication       : Started OktaOAuthClientApplication in 3.285 seconds (JVM running for 3.744)

Navigate to http://localhost:8080.

Click the Login button.

This time, you’ll be directed to the Okta login page. You may need to use an incognito browser or log out of your developer.okta.com dashboard here so that you don’t skip the login page and get directed immediately to the secured endpoint.

Okta Login Form

Log in, and you’ll see the secured page with your name!