Posted in Uncategorized

Getting Started with GraphQL and Spring Boot

https://www.baeldung.com/spring-graphql

1. Introduction

GraphQL is a relatively new concept from Facebook that is billed as an alternative to REST for Web APIs.

This article will give an introduction to setting up a GraphQL server using Spring Boot so that it can be added to existing applications or used in new ones.

2. What is GraphQL?

Traditional REST APIs work with the concept of Resources that the server manages. These resources can be manipulated in some standard ways, following the various HTTP verbs. This works very well as long as our API fits the resource concept, but quickly falls apart when we need to deviate from it.

This also suffers when the client needs data from multiple resources at the same time. For example, requesting a blog post and the comments. Typically this is solved by either having the client make multiple requests or by having the server supply extra data that might not always be required, leading to larger response sizes.

GraphQL offers a solution to both of these problems. It allows the client to specify exactly what data is desired, including fields of nested child resources, and it allows multiple queries in a single request.

It also works in a much more RPC-like manner, using named queries and mutations instead of a standard mandatory set of actions. This puts the control where it belongs: the API developer specifies what is possible, and the API consumer specifies what is desired.

For example, a blog might allow the following query:

query {
    recentPosts(count: 10, offset: 0) {
        id
        title
        category
        author {
            id
            name
            thumbnail
        }
    }
}

This query will:

  • request the ten most recent posts
  • for each post, request the ID, title, and category
  • for each post request the author, returning the ID, name, and thumbnail

In a traditional REST API, this either needs 11 requests – 1 for the posts and 10 for the authors – or needs to include the author details in the post details.

2.1. GraphQL Schemas

The GraphQL server exposes a schema describing the API. This schema is made up of type definitions. Each type has one or more fields, each of which takes zero or more arguments and returns a specific type.

The graph is made up from the way these fields are nested with each other. Note that there is no need for the graph to be acyclic – cycles are perfectly acceptable – but it is directed. That is, the client can get from one field to its children, but it can’t automatically get back to the parent unless the schema defines this explicitly.

An example GraphQL Schema for a blog may contain the following definitions, describing a Post, an Author of the post and a root query to get the most recent posts on the blog.

type Post {
    id: ID!
    title: String!
    text: String!
    category: String
    author: Author!
}
type Author {
    id: ID!
    name: String!
    thumbnail: String
    posts: [Post]!
}
# The Root Query for the application
type Query {
    recentPosts(count: Int, offset: Int): [Post]!
}
# The Root Mutation for the application
type Mutation {
    writePost(title: String!, text: String!, category: String) : Post!
}

The “!” at the end of some names indicates that this is a non-nullable type. Any type that does not have this can be null in the response from the server. The GraphQL service handles these correctly, allowing us to request child fields of nullable types safely.

The GraphQL Service also exposes the schema itself using a standard set of fields, allowing any client to query for the schema definition ahead of time.

This can allow the client to automatically detect when the schema changes, and allows for clients that dynamically adapt to the way the schema works. One incredibly useful example of this is the GraphiQL tool – discussed later – which allows us to interact with any GraphQL API.
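For example, any client can list the types the server exposes with a standard introspection query (the __schema field is part of the GraphQL specification):

```graphql
query {
  __schema {
    queryType { name }
    types {
      name
    }
  }
}
```

Against the schema above, this would return type names such as Post, Author, Query, and Mutation, alongside GraphQL's built-in types.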

3. Introducing GraphQL Spring Boot Starter

The Spring Boot GraphQL Starter offers a fantastic way to get a GraphQL server running in a very short time. Combined with the GraphQL Java Tools library, we need only write the code necessary for our service.

3.1. Setting up the Service

All we need for this to work is the correct dependencies:

<dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-spring-boot-starter</artifactId>
    <version>5.0.2</version>
</dependency>
<dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-java-tools</artifactId>
    <version>5.2.4</version>
</dependency>

Spring Boot will automatically pick these up and set up the appropriate handlers.

By default, this will expose the GraphQL Service on the /graphql endpoint of our application and will accept POST requests containing the GraphQL Payload. This endpoint can be customised in our application.properties file if necessary.
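For instance, with the starter versions used here, the endpoint can be remapped with a property along these lines (the exact property name may differ between starter versions, so check the documentation of the version you use):

```properties
# move the GraphQL endpoint from the default /graphql to /api/graphql
graphql.servlet.mapping=/api/graphql
```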

3.2. Writing the Schema

The GraphQL Tools library works by processing GraphQL Schema files to build the correct structure and then wires special beans to this structure. The Spring Boot GraphQL starter automatically finds these schema files.

These files need to be saved with the extension “.graphqls” and can be present anywhere on the classpath. We can also have as many of these files as desired, so we can split the schema up into modules.

The one requirement is that there must be exactly one root query and at most one root mutation. Unlike the rest of the schema, these root types cannot be split across files. This is a limitation of the GraphQL schema definition itself, and not of the Java implementation.

3.3. Root Query Resolver

The root query needs to have special beans defined in the Spring context to handle the various fields in this root query. Unlike the schema definition, there is no restriction that there only be a single Spring bean for the root query fields.

The only requirements are that the beans implement GraphQLQueryResolver and that every field in the root query from the schema has a method in one of these classes with the same name.

public class Query implements GraphQLQueryResolver {
    private PostDao postDao;
    public List<Post> getRecentPosts(int count, int offset) {
        return postDao.getRecentPosts(count, offset);
    }
}

The name of the method must match one of the following, tried in this order:

  1. <field>
  2. is<field> – only if the field is of type Boolean
  3. get<field>

The method must have parameters that correspond to any parameters in the GraphQL schema, and may optionally take a final parameter of type DataFetchingEnvironment.

The method must also return the correct return type for the type in the GraphQL schema, as we are about to see. Any simple types – String, Int, List, etc. – can be used with the equivalent Java types, and the system maps them automatically.

The class above defines the method getRecentPosts, which will be used to handle any GraphQL queries for the recentPosts field in the schema defined earlier.

3.4. Using Beans to Represent Types

Every complex type in the GraphQL server is represented by a Java bean – whether loaded from the root query or from anywhere else in the structure. The same Java class must always represent the same GraphQL type, but the name of the class does not need to match.

Fields inside the Java bean will directly map onto fields in the GraphQL response based on the name of the field.

public class Post {
    private String id;
    private String title;
    private String category;
    private String authorId;
}

Any fields or methods on the Java bean that do not map on to the GraphQL schema will be ignored, but will not cause problems. This is important for field resolvers to work.

For example, the field authorId here does not correspond to anything in our schema we defined earlier, but it will be available to use for the next step.

3.5. Field Resolvers for Complex Values

Sometimes, the value of a field is non-trivial to load. This might involve database lookups, complex calculations, or anything else. GraphQL Tools has a concept of a field resolver that is used for this purpose. These are Spring beans that can provide values in place of the data bean.

The field resolver is any bean in the Spring Context that has the same name as the data bean, with the suffix Resolver, and that implements the GraphQLResolver interface. Methods on the field resolver bean follow all of the same rules as on the data bean but are also provided the data bean itself as a first parameter.

If a field resolver and the data bean both have methods for the same GraphQL field then the field resolver will take precedence.

public class PostResolver implements GraphQLResolver<Post> {
    private AuthorDao authorDao;
    public Author getAuthor(Post post) {
        return authorDao.getAuthorById(post.getAuthorId());
    }
}

The fact that these field resolvers are loaded from the Spring context is important. This allows them to work with any other Spring managed beans – e.g., DAOs.

Importantly, if the client does not request a field, then the GraphQL server will never do the work to retrieve it. This means that if a client retrieves a Post and does not ask for the Author, the getAuthor() method above will never be executed, and the DAO call will never be made.

3.6. Nullable Values

The GraphQL Schema has the concept that some types are nullable and others are not.

This can be handled in the Java code by directly using null values, but equally, the new Optional type from Java 8 can be used directly here for nullable types, and the system will do the correct thing with the values.

This is very useful, as it means the method definitions in our Java code more obviously mirror the GraphQL schema.
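As a sketch of that mapping, a resolver method for the nullable category field from the schema above could be declared with Optional&lt;String&gt;, assuming the library unwraps the Optional when building the response as described; the class name here is invented for illustration:

```java
import java.util.Optional;

public class PostCategoryResolverSketch {
    // category is nullable in the schema, so the Java signature can say so too:
    // an empty Optional becomes null in the GraphQL response
    public static Optional<String> getCategory(String storedCategory) {
        return Optional.ofNullable(storedCategory);
    }
}
```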

3.7. Mutations

So far, everything that we have done has been about retrieving data from the server. GraphQL also has the ability to update the data stored on the server, by means of mutations.

From the point of view of the code, there is no reason that a query can’t change data on the server. We could easily write query resolvers that accept arguments, save new data and return those changes. However, doing this would cause surprising side effects for API clients, and is considered bad practice.

Instead, mutations should be used to inform the client that an operation will change the stored data.

Mutations are defined in the Java code by using classes that implement GraphQLMutationResolver instead of GraphQLQueryResolver.

Otherwise, all of the same rules apply as for queries. The return value from a Mutation field is then treated exactly the same as from a Query field, allowing nested values to be retrieved as well.

public class Mutation implements GraphQLMutationResolver {
    private PostDao postDao;
    public Post writePost(String title, String text, String category) {
        return postDao.savePost(title, text, category);
    }
}

4. Introducing GraphiQL

GraphQL also has a companion tool called GraphiQL. This is a UI that is able to communicate with any GraphQL server and execute queries and mutations against it. A downloadable version also exists as an Electron app.

It is also possible to include the web-based version of GraphiQL in our application automatically, by adding the GraphiQL Spring Boot Starter dependency:

<dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphiql-spring-boot-starter</artifactId>
    <version>5.0.2</version>
</dependency>

This will only work if we are hosting our GraphQL API on the default endpoint of /graphql though, so the standalone application will be needed if that is not the case.

5. Summary

GraphQL is a very exciting new technology that can potentially revolutionize the way that Web APIs are developed.

The combination of the Spring Boot GraphQL Starter and the GraphQL Java Tools libraries make it incredibly easy to add this technology to any new or existing Spring Boot applications.

Code snippets can be found over on GitHub.


Spring Scope

source: https://www.baeldung.com/spring-bean-scopes

1. Overview

In this quick tutorial, you’ll learn about the different types of bean scopes in the Spring framework.

The scope of a bean defines the life cycle and visibility of that bean in the contexts in which it is used.

The latest version of the Spring framework defines six types of bean scopes:

  • singleton
  • prototype
  • request
  • session
  • application
  • websocket

The last four scopes mentioned, request, session, application, and websocket, are only available in a web-aware application.

2. Singleton Scope

Defining a bean with singleton scope means the container creates a single instance of that bean, and all requests for that bean name will return the same object, which is cached. Any modifications to the object will be reflected in all references to the bean. This scope is the default value if no other scope is specified.

Let’s create a Person entity to exemplify the concept of scopes:

public class Person {
    private String name;
    // standard constructor, getters and setters
}

Afterwards, we define the bean with singleton scope by using the @Scope annotation:

@Bean
@Scope("singleton")
public Person personSingleton() {
    return new Person();
}

We can also use a constant instead of the String value in the following manner:

@Scope(value = ConfigurableBeanFactory.SCOPE_SINGLETON)

Now we proceed to write a test that shows that two objects referring to the same bean will have the same values, even if only one of them changes their state, as they are both referencing the same bean instance:

private static final String NAME = "John Smith";
@Test
public void givenSingletonScope_whenSetName_thenEqualNames() {
    ApplicationContext applicationContext = new ClassPathXmlApplicationContext("scopes.xml");
    Person personSingletonA = (Person) applicationContext.getBean("personSingleton");
    Person personSingletonB = (Person) applicationContext.getBean("personSingleton");
    personSingletonA.setName(NAME);
    Assert.assertEquals(NAME, personSingletonB.getName());
    ((AbstractApplicationContext) applicationContext).close();
}

The scopes.xml file in this example should contain the XML definitions of the beans used:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="personSingleton" class="org.baeldung.scopes.Person" scope="singleton"/>
</beans>

3. Prototype Scope

A bean with prototype scope will return a different instance every time it is requested from the container. It is defined by setting the value prototype to the @Scope annotation in the bean definition:

@Bean
@Scope("prototype")
public Person personPrototype() {
    return new Person();
}

We could also use a constant as we did for the singleton scope:

@Scope(value = ConfigurableBeanFactory.SCOPE_PROTOTYPE)

We will now write a similar test to the one before, showing that two objects requesting the same bean name with the prototype scope will have different states, as they are no longer referring to the same bean instance:

private static final String NAME = "John Smith";
private static final String NAME_OTHER = "Anna Jones";
@Test
public void givenPrototypeScope_whenSetNames_thenDifferentNames() {
    ApplicationContext applicationContext = new ClassPathXmlApplicationContext("scopes.xml");
    Person personPrototypeA = (Person) applicationContext.getBean("personPrototype");
    Person personPrototypeB = (Person) applicationContext.getBean("personPrototype");
    personPrototypeA.setName(NAME);
    personPrototypeB.setName(NAME_OTHER);
    Assert.assertEquals(NAME, personPrototypeA.getName());
    Assert.assertEquals(NAME_OTHER, personPrototypeB.getName());
    ((AbstractApplicationContext) applicationContext).close();
}

The scopes.xml file is similar to the one presented in the previous section, while adding the XML definition for the bean with prototype scope:

<bean id="personPrototype" class="org.baeldung.scopes.Person" scope="prototype"/>

4. Web Aware Scopes

As mentioned, there are four additional scopes that are only available in a web-aware application context. These are less often used in practice.

The request scope creates a bean instance for a single HTTP request, while the session scope creates one for an HTTP session.

The application scope creates the bean instance for the lifecycle of a ServletContext, and the websocket scope creates it for a particular WebSocket session.

Let’s create a class to use for instantiating the beans:

public class HelloMessageGenerator {
    private String message;
    
    // standard getter and setter
}

4.1. Request Scope

We can define the bean with request scope using the @Scope annotation:

@Bean
@Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
public HelloMessageGenerator requestScopedBean() {
    return new HelloMessageGenerator();
}

The proxyMode attribute is necessary because, at the moment of the instantiation of the web application context, there is no active request. Spring will create a proxy to be injected as a dependency, and instantiate the target bean when it is needed in a request.

Next, we can define a controller that has an injected reference to the requestScopedBean. We need to run the same request twice in order to test the web-specific scopes.

If we display the message each time the request is run, we can see that the value is reset to null, even though it is later changed in the method. This is because a different bean instance is returned for each request.

@Controller
public class ScopesController {
    @Resource(name = "requestScopedBean")
    HelloMessageGenerator requestScopedBean;
    @RequestMapping("/scopes/request")
    public String getRequestScopeMessage(final Model model) {
        model.addAttribute("previousMessage", requestScopedBean.getMessage());
        requestScopedBean.setMessage("Good morning!");
        model.addAttribute("currentMessage", requestScopedBean.getMessage());
        return "scopesExample";
    }
}

4.2. Session Scope

We can define the bean with session scope in a similar manner:

@Bean
@Scope(value = WebApplicationContext.SCOPE_SESSION, proxyMode = ScopedProxyMode.TARGET_CLASS)
public HelloMessageGenerator sessionScopedBean() {
    return new HelloMessageGenerator();
}

Next, we define a controller with a reference to the sessionScopedBean. Again, we need to run two requests in order to show that the value of the message field is the same for the session.

In this case, when the request is made for the first time, the message value is null. But once it is changed, that value is retained for subsequent requests, as the same instance of the bean is returned for the entire session.

@Controller
public class ScopesController {
    @Resource(name = "sessionScopedBean")
    HelloMessageGenerator sessionScopedBean;
    @RequestMapping("/scopes/session")
    public String getSessionScopeMessage(final Model model) {
        model.addAttribute("previousMessage", sessionScopedBean.getMessage());
        sessionScopedBean.setMessage("Good afternoon!");
        model.addAttribute("currentMessage", sessionScopedBean.getMessage());
        return "scopesExample";
    }
}

4.3. Application Scope

The application scope creates the bean instance for the lifecycle of a ServletContext.

This is similar to the singleton scope, but there is a very important difference with regard to the scope of the bean.

When beans are application scoped the same instance of the bean is shared across multiple servlet-based applications running in the same ServletContext, while singleton-scoped beans are scoped to a single application context only.

Let’s create the bean with application scope:

@Bean
@Scope(value = WebApplicationContext.SCOPE_APPLICATION, proxyMode = ScopedProxyMode.TARGET_CLASS)
public HelloMessageGenerator applicationScopedBean() {
    return new HelloMessageGenerator();
}

And the controller that references this bean:

@Controller
public class ScopesController {
    @Resource(name = "applicationScopedBean")
    HelloMessageGenerator applicationScopedBean;
    @RequestMapping("/scopes/application")
    public String getApplicationScopeMessage(final Model model) {
        model.addAttribute("previousMessage", applicationScopedBean.getMessage());
        applicationScopedBean.setMessage("Good afternoon!");
        model.addAttribute("currentMessage", applicationScopedBean.getMessage());
        return "scopesExample";
    }
}

In this case, the message value, once set in the applicationScopedBean, will be retained for all subsequent requests, sessions, and even for a different servlet application accessing this bean, provided it is running in the same ServletContext.

4.4. WebSocket Scope

Finally, let’s create the bean with websocket scope:

@Bean
@Scope(scopeName = "websocket", proxyMode = ScopedProxyMode.TARGET_CLASS)
public HelloMessageGenerator websocketScopedBean() {
    return new HelloMessageGenerator();
}

WebSocket-scoped beans, when first accessed, are stored in the WebSocket session attributes. The same instance of the bean is then returned whenever that bean is accessed during the entire WebSocket session.

We can also say that it exhibits singleton behavior but limited to a WebSocket session only.

5. Conclusion

We have demonstrated different bean scopes provided by Spring and what their intended usages are.

The implementation of this tutorial can be found in the GitHub project – this is an Eclipse-based project, so it should be easy to import and run as it is.


Spring bean thread safety guide

source: http://dolszewski.com/spring/spring-bean-thread-safety-guide/

Is Spring controller/service/singleton thread-safe?

It’s a commonly asked question by Spring newcomers and probably a must-have warm-up question on job interviews. As usual in programming, the answer is: it depends. The main factor which determines thread safety of a component is its scope.

Let’s get down to it and see what Spring’s scopes have to offer in multithreaded programming.

Which Spring scope is thread-safe?

In order to answer that question, you first need to understand when Spring creates a new thread.

In a standard servlet-based Spring web application, every HTTP request is handled by its own thread. If the container creates a new bean instance just for that particular request, we can say this bean is thread-safe.

Let’s examine what scopes we have in Spring and focus on when the container creates them.

Is Spring singleton thread safe?

The short answer is: no, it isn’t.

And you probably already know why.

It’s because of the long life cycle of singleton beans. Those beans may be reused over and over again in many HTTP requests coming from different users.

If you don’t use @Lazy, the framework creates a singleton bean at the application startup and makes sure that the same instance is autowired and reused in all other dependent beans. As long as the container lives, the singleton beans live as well.

But the framework doesn’t control how the singleton is used. If two different threads execute a method of the singleton at the same time, you’re not guaranteed that both calls will be synchronized and run in sequence.

In other words, it’s your responsibility to ensure your code runs safely in the multithreaded environment. Spring won’t do that for you.

Request scope to the rescue

If you want to make sure your bean is thread-safe, you should go for the @RequestScope. As the name suggests, Spring binds such a bean to a particular web request. Request beans aren’t shared between multiple threads, hence you don’t have to worry about concurrency.

But hang on a minute.

If request scope beans are so great when it comes to concurrency, maybe we should use that scope for all application beans? Before you set the request scope on all your components, ask yourself the following question.

Do you really need all your beans to be thread-safe?

Usually, you don’t.

Creating a new instance of a bean instead of reusing the existing one is always slower. Stick to singletons unless you have a real use case for request scope beans.

Problematic session scope

Spring associates session beans with a particular user. When a new user visits your application, a new session bean instance is created and reused for all requests from that user.

As you know, some of a user’s requests may be concurrent. Because of that fact, session beans aren’t thread-safe. Their life cycle is longer than that of request scope beans. Multiple requests may call the same session bean concurrently.

Tricky thread safety with prototype beans

I left the prototype scope as the last one to discuss because we can’t clearly say whether it’s always thread-safe or not. A prototype’s thread safety depends on the scope of the bean which contains the prototype.

Spring creates a prototype bean on demand whenever another bean requires its instance.

Imagine you have two beans in your application. One is the singleton and the second is the request scoped component. Both depend on a third bean which is the prototype.

Let’s consider the singleton bean first. Because the singleton isn’t thread-safe, calls to its prototype’s methods may also run concurrently. When several threads share the singleton, the single instance of the prototype that Spring injects into that singleton will also be shared.

It works the same for all other scopes which make a bean reusable in several web requests.

Prototype in Singleton

What about the request scope bean? Spring creates a new instance of such component for each web request. Each request is bound to a separate thread. Therefore, each instance of the request bean gets its own instance of the prototype bean. In that case, you can consider the prototype as thread-safe.

Request bean in Singleton

So are Spring web controllers thread-safe or not?

The answer again is: it depends.

It depends on the scope of such a controller.

If you define a controller as the default singleton bean, it won’t be thread-safe. Changing the default scope to the session won’t make the controller safe either. However, the request scope will make the controller bean safe to work for concurrent web requests.

What about controllers with the prototype scope? You already know its thread safety depends on the scope of the bean which contains the prototype as a dependency. But we never inject controllers to other beans, right? They’re entry points to our application. So how does Spring behave when you define a controller as the prototype bean?

As you probably suppose, when you define a controller as a prototype, the Spring framework will create a new instance for each web request it serves. Unless you inject them into unsafely scoped beans, you can consider prototype scoped controllers thread-safe.

How to make any Spring bean thread-safe?

The best thing you can do to tackle access synchronization is to avoid it.

How?

By making your bean classes stateless.

The bean is stateless if the execution of its methods doesn’t modify its instance fields. Changing local variables inside a method is totally fine, because each call to a method allocates the memory for these variables, unlike instance fields, which are shared between all non-static methods.

The perfect stateless bean has no fields but you won’t see such utility classes very often. Usually, your beans have some fields. But by applying a few simple rules, you can make any bean stateless and thread-safe.

How to make Spring bean stateless?

Start by making all bean fields final to indicate that they shouldn’t be reassigned during the life cycle of the bean.

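The original post’s code sample is missing at this point; a minimal sketch of what such a bean could look like (the class and field names here are invented, not from the original post) is a bean with a single final String field:

```java
public class GreetingService {
    // final: the reference cannot be reassigned after construction
    private final String greetingPrefix;

    public GreetingService(String greetingPrefix) {
        this.greetingPrefix = greetingPrefix;
    }

    // uses only the immutable field and local variables, so calls from
    // multiple threads never interfere with each other
    public String greet(String name) {
        return greetingPrefix + ", " + name + "!";
    }
}
```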

But don’t confuse field modification with reassignment! Making all of a bean’s fields final doesn’t make it stateless. If the values you assigned to the final fields of a bean can be changed at runtime, such a bean is still not thread-safe.

The above example presents a stateless bean because you can’t change the value of the String field. The String class is immutable just like Integer, Boolean, and other primitive wrappers. You can also use primitive types safely in this case. But what about more complex objects like standard Lists, Maps, or your custom data classes?

For the common types like collections, you can go for immutable implementations which you can find in the standard Java library. You can easily create immutable collections with factory methods added in Java 9. If you still use an older version, don’t worry. You can also find conversion methods like unmodifiableList() in the Collections class.
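As a sketch of both approaches (the class and field names are invented for this example):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ImmutableCollectionsDemo {
    // Java 9+ factory method: the returned list rejects all modification
    static final List<String> CATEGORIES = List.of("tech", "travel", "food");

    // Java 8 alternative: copy the source, then expose an unmodifiable view
    static List<String> frozenCopy(List<String> source) {
        return Collections.unmodifiableList(new ArrayList<>(source));
    }

    // returns true if the list cannot be modified, i.e. it is safe to share
    static boolean rejectsModification(List<String> list) {
        try {
            list.add("oops");
            return false;
        } catch (UnsupportedOperationException e) {
            return true;
        }
    }
}
```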

If it comes to custom data types, you have to make sure they are immutable on your own. Creating an immutable class in Java goes beyond the scope of this article. If you need a more detailed guide, read how to design immutable classes in the official Java documentation.

Sounds easy enough. But some of your beans may maintain some state. What then?

Thread-safe variable in stateful Spring bean

Stateless bean sounds like the silver bullet. But what if you already have a stateful bean and you must synchronize access on one of its fields?

In that case, you have the classic Java problem of concurrent access to a class field. The Spring framework won’t solve it for you. You need to select one of the possible solutions:

  • The synchronized keyword and Locks – This option gives you the most control over access synchronization but also requires a deeper understanding of mechanisms used in the concurrent environment.
  • Atomic variables – You can find a small set of thread-safe types in the java.util.concurrent.atomic package of the Java standard library. Types from that package can be safely used as fields in shared stateful beans.
  • Concurrent collections – In addition to atomic variables, Java provides us with a few useful collections which we can use without worrying about the concurrent access problem.
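To illustrate the atomic-variable option, a hypothetical counter bean (the class name is invented here) can replace a plain int field with an AtomicInteger:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VisitCounter {
    // incrementAndGet() is an atomic read-modify-write, so no updates are
    // lost even when many threads call registerVisit() at the same time
    private final AtomicInteger visits = new AtomicInteger(0);

    public int registerVisit() {
        return visits.incrementAndGet();
    }

    public int current() {
        return visits.get();
    }

    // two threads each register 10,000 visits; with a plain int field some
    // increments would typically be lost, with AtomicInteger none are
    public static int concurrentDemo() {
        VisitCounter counter = new VisitCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.registerVisit();
            }
        };
        Thread first = new Thread(task);
        Thread second = new Thread(task);
        first.start();
        second.start();
        try {
            first.join();
            second.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter.current();
    }
}
```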

But beware: no matter which method you choose, access synchronization always has an impact on the performance. Try to avoid it if you have an alternative option.

Implementing thread-safe method in Spring component

Frankly, I’ve never had such a case in any of my commercial projects, but I was asked that question in an interview some time ago, so you may hear it too.

As we already discussed, Spring itself doesn’t solve the problem of the concurrent access. If the scope of your bean isn’t thread-safe but its method contains some critical code that you always want to run safely, use the synchronized keyword on that method.
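A minimal sketch of that approach, with an invented ReportService class holding one critical section:

```java
public class ReportService {
    // mutable state shared by all callers of this (e.g. singleton-scoped) bean
    private int generatedReports = 0;

    // synchronized ensures that only one thread at a time runs this method
    // on a given instance, making the read-modify-write below safe
    public synchronized int generateReport() {
        generatedReports++;
        return generatedReports;
    }

    public synchronized int count() {
        return generatedReports;
    }
}
```

Note that this serializes all calls to the method on that instance, which is exactly the performance cost mentioned above.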

Conclusion

At this point, you should know the position of the Spring framework in a multithreaded environment. You learned how the scope of a component affects its safety and what your options are when you have to provide thread safety on your own.

If you like the post, subscribe to my blog so you won’t miss the next article. Also please leave a comment if you have some interesting stories about tackling concurrency problems in Spring applications. I would love to read them.


What do you mean by “Event-Driven”?

Source:

https://martinfowler.com/articles/201701-event-driven.html

07 February 2017


Towards the end of last year I attended a workshop with my colleagues in ThoughtWorks to discuss the nature of “event-driven” applications. Over the last few years we’ve been building lots of systems that make a lot of use of events, and they’ve been often praised, and often damned. Our North American office organized a summit, and ThoughtWorks senior developers from all over the world showed up to share ideas.

The biggest outcome of the summit was recognizing that when people talk about “events”, they actually mean some quite different things. So we spent a lot of time trying to tease out what some useful patterns might be. This note is a brief summary of the main ones we identified.


Event Notification

This happens when a system sends event messages to notify other systems of a change in its domain. A key element of event notification is that the source system doesn’t really care much about the response. Often it doesn’t expect any answer at all, or if there is a response that the source does care about, it’s indirect. There would be a marked separation between the logic flow that sends the event and any logic flow that responds to some reaction to that event.

Event notification is nice because it implies a low level of coupling, and is pretty simple to set up. It can become problematic, however, if there really is a logical flow that runs over various event notifications. The problem is that it can be hard to see such a flow as it’s not explicit in any program text. Often the only way to figure out this flow is from monitoring a live system. This can make it hard to debug and modify such a flow. The danger is that it’s very easy to make nicely decoupled systems with event notification, without realizing that you’re losing sight of that larger-scale flow, and thus set yourself up for trouble in future years. The pattern is still very useful, but you have to be careful of the trap.

A simple example of this trap is when an event is used as a passive-aggressive command. This happens when the source system expects the recipient to carry out an action, and ought to use a command message to show that intention, but styles the message as an event instead.

An event need not carry much data on it, often just some id information and a link back to the sender that can be queried for more information. The receiver knows something has changed, may get some minimal information on the nature of the change, but then issues a request back to the sender to decide what to do next.
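A minimal sketch of this shape in Java (EventBus and CustomerAddressChanged are illustrative names, not from any particular library): the event carries just an id and a link back to the source, and the publisher fires it without waiting for any response.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

class CustomerAddressChanged {
    final String customerId;   // just enough to identify what changed
    final String detailsUrl;   // link back to the source for more information

    CustomerAddressChanged(String customerId, String detailsUrl) {
        this.customerId = customerId;
        this.detailsUrl = detailsUrl;
    }
}

class EventBus {
    private final List<Consumer<CustomerAddressChanged>> subscribers = new ArrayList<>();

    void subscribe(Consumer<CustomerAddressChanged> s) {
        subscribers.add(s);
    }

    // Fire and forget: the source's logic flow ends here, and any reaction
    // happens in a separate flow on the receiver's side.
    void publish(CustomerAddressChanged event) {
        subscribers.forEach(s -> s.accept(event));
    }
}
```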


Event-Carried State Transfer

This pattern shows up when you want to update clients of a system in such a way that they don’t need to contact the source system in order to do further work. A customer management system might fire off events whenever a customer changes their details (such as an address), with events that contain details of the data that changed. A recipient can then update its own copy of customer data with the changes, so that it never needs to talk to the main customer system in order to do its work in the future.

An obvious down-side of this pattern is that there’s lots of data schlepped around and lots of copies. But that’s less of a problem in an age of abundant storage. What we gain is greater resilience, since the recipient systems can function if the customer system becomes unavailable. We reduce latency, as there’s no remote call required to access customer information. We don’t have to worry about load on the customer system to satisfy queries from all the consumer systems. But it does involve more complexity on the receiver, since it has to sort out maintaining all the state, when it’s usually easier just to call the sender for more information when needed.
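Contrast this with the notification sketch above: here the event carries the changed data itself, so the recipient can maintain its own copy and never call back to the customer system (all names below are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

class CustomerAddressUpdated {
    final String customerId;
    final String newAddress;  // the state travels with the event

    CustomerAddressUpdated(String customerId, String newAddress) {
        this.customerId = customerId;
        this.newAddress = newAddress;
    }
}

class ShippingService {
    // The recipient's own copy of customer data, kept up to date by events.
    private final Map<String, String> addressCopy = new HashMap<>();

    void on(CustomerAddressUpdated e) {
        addressCopy.put(e.customerId, e.newAddress);
    }

    // Works even if the customer system is down: no remote call needed.
    String addressFor(String customerId) {
        return addressCopy.get(customerId);
    }
}
```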


Event-Sourcing

The core idea of event sourcing is that whenever we make a change to the state of a system, we record that state change as an event, and we can confidently rebuild the system state by reprocessing the events at any time in the future. The event store becomes the principal source of truth, and the system state is purely derived from it. For programmers, the best example of this is a version-control system. The log of all the commits is the event store and the working copy of the source tree is the system state.

Event-sourcing introduces a lot of issues, which I won’t go into here, but I do want to highlight some common misconceptions. There’s no need for event processing to be asynchronous: consider the case of updating a local git repository – that’s an entirely synchronous operation, as is updating a centralized version-control system like Subversion. Certainly having all these commits allows you to do all sorts of interesting behaviors – git is the great example – but the core commit is fundamentally a simple action.

Another common mistake is to assume that everyone using an event-sourced system should understand and access the event log to determine useful data. But knowledge of the event log can be limited. I’m writing this in an editor that is ignorant of all the commits in my source tree, it just assumes there is a file on the disk. Much of the processing in an event-sourced system can be based on a useful working copy. Only elements that really need the information in the event log should have to manipulate it. We can have multiple working copies with different schema, if that helps; but usually there should be a clear separation between domain processing and deriving a working copy from the event log.

When working with an event log, it is often useful to build snapshots of the working copy so that you don’t have to process all the events from scratch every time you need a working copy. Indeed there is a duality here, we can look at the event log as either a list of changes, or as a list of states. We can derive one from the other. Version-control systems often mix snapshots and deltas in their event log in order to get the best performance. [1]

Event-sourcing has many interesting benefits, which easily come to mind when thinking of the value of version-control systems. The event log provides a strong audit capability (accounting transactions are an event source for account balances). We can recreate historic states by replaying the event log up to a point. We can explore alternative histories by injecting hypothetical events when replaying. Event sourcing makes it plausible to have non-durable working copies, such as a Memory Image.
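The accounting example can be sketched in a few lines (illustrative code, not a real event-sourcing framework): the list of transactions is the source of truth, and the balance – the working copy – is purely derived by replaying them, including replay up to a point to recreate historic states.

```java
import java.util.ArrayList;
import java.util.List;

class Account {
    // The event log: each entry is a deposit (+) or withdrawal (-).
    private final List<Integer> events = new ArrayList<>();

    void record(int amount) {
        events.add(amount);
    }

    // Current state is derived entirely from the log.
    int balance() {
        return events.stream().mapToInt(Integer::intValue).sum();
    }

    // Replaying only a prefix of the log recreates a historic state.
    int balanceAfter(int firstNEvents) {
        return events.subList(0, firstNEvents).stream()
                     .mapToInt(Integer::intValue).sum();
    }
}
```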

Event sourcing does have its problems. Replaying events becomes problematic when results depend on interactions with outside systems. We have to figure out how to deal with changes in the schema of events over time. Many people find the event processing adds a lot of complexity to an application (although I do wonder if that’s more due to poor separation between components that derive a working copy and components that do the domain processing).


CQRS

Command Query Responsibility Segregation (CQRS) is the notion of having separate data structures for reading and writing information. Strictly CQRS isn’t really about events, since you can use CQRS without any events present in your design. But commonly people do combine CQRS with the earlier patterns here, hence their presence at the summit.

The justification for CQRS is that in complex domains, a single model to handle both reads and writes gets too complicated, and we can simplify by separating the models. This is particularly appealing when you have a difference in access patterns, such as lots of reads and very few writes. But the gain for using CQRS has to be balanced against the additional complexity of having separate models. I find many of my colleagues are deeply wary of using CQRS, finding it often misused.
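A toy sketch of the separation, using this blog's own domain (the classes are invented for illustration and not tied to any framework): commands go through a write model with its own structure, while queries hit a separate denormalized read model that the write side keeps up to date.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class ReadModel {
    // Query side: a simple denormalized projection, cheap to read.
    final Map<String, Integer> commentCounts = new HashMap<>();

    int commentCount(String postId) {
        return commentCounts.getOrDefault(postId, 0);
    }
}

class BlogWriteModel {
    // Command side: the full data, structured for handling writes.
    private final Map<String, List<String>> commentsByPost = new HashMap<>();
    private final ReadModel readModel;

    BlogWriteModel(ReadModel readModel) {
        this.readModel = readModel;
    }

    // Applies the change, then updates the read-side projection.
    void addComment(String postId, String comment) {
        commentsByPost.computeIfAbsent(postId, k -> new ArrayList<>()).add(comment);
        readModel.commentCounts.merge(postId, 1, Integer::sum);
    }
}
```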


What is Docker?

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. By doing so, thanks to the container, the developer can rest assured that the application will run on any other Linux machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.

In a way, Docker is a bit like a virtual machine. But unlike a virtual machine, rather than creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system that they’re running on and only requires applications be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application.

And importantly, Docker is open source. This means that anyone can contribute to Docker and extend it to meet their own needs if they need additional features that aren’t available out of the box.

Who is Docker for?

Docker is a tool that is designed to benefit both developers and system administrators, making it a part of many DevOps (developers + operations) toolchains. For developers, it means that they can focus on writing code without worrying about the system that it will ultimately be running on. It also allows them to get a head start by using one of thousands of programs already designed to run in a Docker container as a part of their application. For operations staff, Docker gives flexibility and potentially reduces the number of systems needed because of its small footprint and lower overhead.

Getting started

Here are some resources that will help you get started using Docker in your workflow. Docker provides a web-based tutorial with a command-line simulator where you can try out basic Docker commands and begin to understand how it works. There is also a beginners’ guide to Docker that introduces some basic commands and container terminology.

Docker and security

Docker brings security to applications running in a shared environment, but containers by themselves are not an alternative to taking proper security measures.

Dan Walsh, a computer security leader best known for his work on SELinux, gives his perspective on the importance of making sure Docker containers are secure. He also provides a detailed breakdown of security features currently within Docker, and how they function.


Watch why Linus Torvalds says Linux is the best option for career building

from Facebook http://ift.tt/1UB0Cd5
via IFTTT


Bootstrap 4 Cheat Sheet

Bootstrap 4, a well written resource:


http://hackerthemes.com/bootstrap-cheatsheet/


Popular programming languages 2016

http://pypl.github.io/PYPL.html

http://www.tiobe.com/tiobe_index?page=index


Feb 2016  Feb 2015  Programming Language   Ratings   Change
 1         2        Java                   21.145%   +5.80%
 2         1        C                      15.594%   -0.89%
 3         3        C++                     6.907%   +0.29%
 4         5        C#                      4.400%   -1.34%
 5         8        Python                  4.180%   +1.30%
 6         7        PHP                     2.770%   -0.40%
 7         9        Visual Basic .NET       2.454%   +0.43%
 8        12        Perl                    2.251%   +0.86%
 9         6        JavaScript              2.201%   -1.31%
10        11        Delphi/Object Pascal    2.163%   +0.59%
11        20        Ruby                    2.053%   +1.18%
12        10        Visual Basic            1.855%   +0.14%
13        26        Assembly language       1.828%   +1.08%
14         4        Objective-C             1.403%   -4.62%
15        30        D                       1.391%   +0.77%
16        27        Swift                   1.375%   +0.65%
17        18        R                       1.192%   +0.23%
18        17        MATLAB                  1.091%   +0.06%
19        13        PL/SQL                  1.062%   -0.20%
20        33        Groovy                  1.012%   +0.51%

Can I add jars to maven 2 build classpath without installing them?

Problems of popular approaches

Most of the answers you’ll find around the internet will suggest that you either install the dependency to your local repository or specify a “system” scope in the pom and distribute the dependency with the source of your project. But both of these solutions are actually flawed.

Why you shouldn’t apply the “Install to Local Repo” approach

When you install a dependency to your local repository, it remains there. Your distribution artifact will do fine as long as it has access to this repository. The problem is that in most cases this repository resides on your local machine, so there is no way to resolve the dependency on any other machine. Making your artifact depend on a specific machine is clearly not a way to handle things, and having the dependency installed locally on every machine working with the project is not any better.

Why you shouldn’t apply the “System Scope” approach

The jars you depend on with the “system” scope approach neither get installed to any repository nor attached to your target packages. That’s why your distribution package won’t have a way to resolve the dependency when used. That, I believe, is the reason the use of system scope was deprecated. In any case, you don’t want to rely on a deprecated feature.

The static in-project repository solution

After putting this in your pom:

<repository>
    <id>repo</id>
    <releases>
        <enabled>true</enabled>
        <checksumPolicy>ignore</checksumPolicy>
    </releases>
    <snapshots>
        <enabled>false</enabled>
    </snapshots>
    <url>file://${project.basedir}/repo</url>
</repository>

for each artifact with a group id of the form x.y.z, Maven will include the following location inside your project dir in its search for artifacts:

repo/
| - x/
|   | - y/
|   |   | - z/
|   |   |   | - ${artifactId}/
|   |   |   |   | - ${version}/
|   |   |   |   |   | - ${artifactId}-${version}.jar

To elaborate more on this you can read this blog post.

Use Maven to install to project repo

Instead of creating this structure by hand, I recommend using a Maven plugin to install your jars as artifacts. So, to install an artifact to an in-project repository under the repo folder, execute:

mvn install:install-file -DlocalRepositoryPath=repo -DcreateChecksum=true -Dpackaging=jar -Dfile=[your-jar] -DgroupId=[...] -DartifactId=[...] -Dversion=[...]

If you choose this approach, you can simplify the repository declaration in the pom to:

<repository>
    <id>repo</id>
    <url>file://${project.basedir}/repo</url>
</repository>

A helper script

Since executing the installation command for each lib is annoying and definitely error prone, I’ve created a utility script which automatically installs all the jars from a lib folder to a project repository, automatically resolving all metadata (groupId, artifactId, etc.) from the file names. The script also prints out the dependencies XML for you to copy-paste into your pom.

Include the dependencies in your target package

Once you’ve created your in-project repository, you’ll have solved the problem of distributing the project’s dependencies with its source. But your project’s target artifact will now depend on non-published jars, so when you install it to a repository it will have unresolvable dependencies.

To solve this problem, I suggest including these dependencies in your target package. You can do this with either the Assembly Plugin or, better, the OneJar Plugin. The official documentation on OneJar is easy to grasp.

(http://stackoverflow.com/questions/364114/can-i-add-jars-to-maven-2-build-classpath-without-installing-them)

Posted in 3d software, Uncategorized

Sketchup – An Easy way to design your house

After comparing a few 3D modelling applications, I decided to use SketchUp to design my apartment.

Pros:

The application is simple for beginners to use, easy to learn, and has a library of thousands of 3D models.

Cons:

It doesn’t support lighting and shadows, but there are 3rd-party plugins that enable this feature (irendering).

I am very satisfied with SketchUp; I am now able to generate 3D renderings similar to those interior designers create.

[Renders of the Woodlands Drive 50 apartment: TV console, Scandi guest room, master bathroom, kitchen, bathroom]