Month: June 2012

Memory Leaks

source:http://www.mhaller.de/archives/140-Memory-leaks-et-alii.html

 
Leaky web application

One of the most common problems in building enterprise web applications is leaks. A leak is the consumption of a resource by a program that is unable to release that resource. Leaks come in various types, such as

  • Memory leaks
  • Thread and ThreadLocal leaks
  • ClassLoader leaks
  • System resource leaks
  • Connection leaks

As you can see in the picture above, leaks are really nasty and can bring down an application server after just a few minutes if it is stressed a little bit. That should never happen.

Memory leaks often occur due to performance optimizations, such as using caches to hold references to expensive objects or to objects retrieved from external sources such as a database: lookup tables, object caches, object pools etc. You can identify such leaks quickly by analyzing a component: if it adds new entries without ever removing entries from that instance, that is a potential source of a memory leak (though not always).
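A minimal sketch of such a grow-only cache (class and method names are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical lookup cache: entries are added but never removed,
// so the map (and everything each entry references) grows forever.
public class LookupCache {
    private static final Map<String, Object> CACHE = new HashMap<String, Object>();

    public static Object lookup(String key) {
        Object value = CACHE.get(key);
        if (value == null) {
            value = expensiveLoad(key);
            CACHE.put(key, value); // added, but no code path ever calls CACHE.remove(...)
        }
        return value;
    }

    private static Object expensiveLoad(String key) {
        return new byte[1024]; // stand-in for a database lookup
    }

    public static int size() {
        return CACHE.size();
    }
}
```

Bounding the cache (e.g. an LRU via LinkedHashMap.removeEldestEntry) or clearing it when the application shuts down avoids the leak.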

Memory Analyzer shows the Timer Thread and its references in the Leak Suspects Analysis

Thread leaks often occur when low-level libraries create new Threads without being aware of their own lifecycle within an application container, such as a Java Enterprise Application Server. Most developers know the unwritten rule that new Threads should not be created within web applications: the EJB spec even forbids it. Servlet Spec 2.5 does not forbid or even mention Thread creation by application developers, and Servlet Spec 3.0 introduces a chapter about Asynchronous Processing and states the following, which clearly allows Thread creation:

If a thread created by the application uses the container-managed objects, such as the request or response object, those objects must be accessed only within the object’s life cycle as defined in sections 3.10 and 5.6. […] Servlet Specification 3.0 PR

But often libraries internally create Threads unknown to the application developer. Threads are used for background jobs or for Timers which should clean up resources after some time. However, since the time when a web application will be reloaded or undeployed is unknown to the library, it is impossible for the library to cancel the Timer or shut down the Thread.
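A small illustration with java.util.Timer (the class name here is made up): once such a cleanup timer is started, only an explicit cancel() stops its background thread, and a cancelled timer refuses further tasks:

```java
import java.util.Timer;
import java.util.TimerTask;

public class TimerLifecycle {
    // Returns true if the timer still accepts tasks after cancel().
    public static boolean acceptsTasksAfterCancel() {
        // A library-style cleanup timer with its own daemon thread.
        Timer cleanupTimer = new Timer("cleanup", true);
        // Without this call the TimerThread (and through its context
        // classloader, the whole webapp) stays reachable after undeploy.
        cleanupTimer.cancel();
        try {
            cleanupTimer.schedule(new TimerTask() {
                @Override
                public void run() {
                    // would clean up resources here
                }
            }, 1000);
            return true;
        } catch (IllegalStateException e) {
            return false; // cancelled timers throw on further scheduling
        }
    }
}
```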

ThreadLocals are a similar problem when their value is not reclaimed and references application classes. Such cases often occur in web applications, as Threads are reused across multiple web application instances and thus the values of ThreadLocals often remain. So, how do you find ThreadLocal leaks?
– Take a heap dump
– Run the Object Query Language query: SELECT * FROM INSTANCEOF java.lang.ThreadLocal
– For each ThreadLocal found:
— List objects with incoming references
— Expand one level to see the class name and field name of the class which holds the ThreadLocal
— Open the class in your favourite IDE, see how the ThreadLocal is used, and clean up
If set(null) or remove() is never called: ThreadLocal leak, which in turn causes a ClassLoader leak.
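The cleanup pattern can be sketched like this (class and method names are invented): the library offers a release() hook which calls remove(), and the application calls it at the end of each request, e.g. in a servlet filter:

```java
public class ThreadLocalHygiene {
    // A per-thread buffer, as a library might keep internally.
    private static final ThreadLocal<byte[]> BUFFER = new ThreadLocal<byte[]>();

    public static byte[] buffer() {
        byte[] b = BUFFER.get();
        if (b == null) {
            b = new byte[8192];
            BUFFER.set(b);
        }
        return b;
    }

    public static boolean isSet() {
        return BUFFER.get() != null;
    }

    // Call this at the end of each request; otherwise the value stays
    // attached to the pooled container thread across deployments.
    public static void release() {
        BUFFER.remove();
    }
}
```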

Incoming references to the Timer Thread as shown in a heap dump within Eclipse Memory Analyzer Tool

The worst kind of Thread leak is coupled with ClassLoader leaking. Threads hold a strong reference to their ContextClassLoader, which is often the WebappClassLoader. In such a cycle it is not possible to unload the classes loaded by the WebappClassLoader, because of the running Thread. Even if the Thread is a daemon thread, the leak remains: in a webapp the JVM is never shut down, and hence daemon threads are not shut down by the JVM automatically.

Path to the Garbage Collector Roots of the WebappClassLoader

ClassLoader leaks are very problematic for redeployment scenarios or dynamic applications. Caches or reflective utilities often hold a reference to the ClassLoader, either the WebappClassLoader or the ThreadContextClassLoader. When those references cannot be reclaimed, the web application cannot be undeployed cleanly. The result is either that the server needs to be restarted, or that undeploying a web application leaves open file handles or a “corrupted” web app folder. For example, Tomcat expands a .war file into a temporary working directory. This working directory is not deleted completely when you try to undeploy such a leaking web application. Developers often just delete the folder manually and redeploy, since it is very hard to find the root cause of the leaks.

The immediate Dominator Tree, grouped by WebappClassLoader, shows all the webapps which cannot be undeployed because there are leaks

If you search for ClassLoader leaks in webapps, you often stumble upon the example where a custom LogLevel is used to force a leak and to show how to find and resolve it using profilers and heap dump analyzers. If you stare at a heap dump of a typical web application, you will see a lot of logging infrastructure objects. Sometimes they really are to blame, but not always. When hunting down memory leaks, it is crucial to look at all objects and all traces. Sometimes objects are only there because the ClassLoader cannot be reclaimed for other reasons. Don’t blame the first (and easy) find; go deeper until you are sure about the cause.

System resource leaks are, for example, open file handles or a temporary folder which fills up all the available disk space. For network sockets, the operating system or network stack will take care of unclosed connections and kill them after a while. File handles, however, have no timeout and will be kept open, and most operating systems limit the number of open file descriptors.

Rule #1 in preventing leaks: Close the resource you have opened when you don’t need it any more!
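For lexically scoped resources, Java 7’s try-with-resources makes Rule #1 almost mechanical (a sketch with a temporary file; the class name is made up):

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class CloseResources {
    // Writes text to a temp file and reads it back; every stream is
    // closed automatically when its try block exits, even on exceptions.
    public static String writeAndReadBack(String text) {
        try {
            File f = File.createTempFile("demo", ".txt");
            f.deleteOnExit();
            try (FileWriter w = new FileWriter(f)) {
                w.write(text);
            }
            try (BufferedReader r = new BufferedReader(new FileReader(f))) {
                return r.readLine();
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Of course this only solves the easy cases; the two problems below are exactly the cases where the close point is not lexically visible.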

Problem #1: When? As the developer of a web app, you often don’t know when to close a resource. When is the page no longer being viewed by the user? When is a scheduler timer no longer used? When is a cache invalidation timer no longer used?

Problem #2: Where? As the developer of a web app, you often don’t know about all the resources used by your application, because they are buried in third-party libraries or in the container itself.

XWS Security DefaultSecurityEnvironmentImpl.java with a Timer which is never cancelled and runs and runs and runs …

Rule #2 in fixing leaks: Have automated stress tests
Set up an environment with an application server, your web app, a stress testing tool and a profiler. My suggestion:

  1. Tomcat as app server
  2. Your web app (use the real thing, no dummy)
  3. JMeter test plan
JMeter TestPlan for redeploying a web application in Tomcat

Rule #3 in fixing leaks: Move as much as possible out of your web app
For example, a JDBC driver should not be inside your webapp. The DriverManager holds references to the ClassLoader and to Connection pools. If you package a JDBC driver within your webapp, you will almost certainly run into a leak. Use JNDI to get Connections from the app server. Don’t package libraries which are already available in the app server, such as logging frameworks; if you can omit them, do it. Tomcat even has a workaround class which tries to deregister JDBC drivers loaded from the web application ClassLoader.
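A hedged sketch of the kind of cleanup Tomcat performs, which a webapp can also do itself in a ServletContextListener’s contextDestroyed() (written here as a plain method so it stays self-contained; the class name is invented):

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;

public class DriverCleanup {
    // Deregisters every JDBC driver that was loaded by the given classloader
    // (typically the WebappClassLoader). Returns how many were removed.
    public static int deregisterDrivers(ClassLoader webappLoader) {
        int count = 0;
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            if (driver.getClass().getClassLoader() == webappLoader) {
                try {
                    DriverManager.deregisterDriver(driver);
                    count++;
                } catch (SQLException e) {
                    // log and continue; one stubborn driver should not abort cleanup
                }
            }
        }
        return count;
    }
}
```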

Rule #4 in fixing leaks: Trial and error with various configurations
Try disabling some of your features, run the tests and compare the results (heap dumps, VisualVM graphs). This is a very fast way of finding out which feature causes a leak.

Rule #5 in fixing leaks: Reevaluate
After you have reached a leak-free state of your webapp, disable all the fixes you have made one by one and perform your tests again. You will find out that some of the workarounds or fixes are meaningless, because they were just a symptom of one of the root causes.

In my current project, I’ve identified the following potential leaks:

  • Proprietary Singleton BeanLocator (delegate to Spring’s WebApplicationContext)
  • Multiple EhCache Caches
  • Dozer Bean Mapper’s JMX Beans for Administration and Statistics, which get registered by default but not unregistered automatically
  • Additional Threads whose ContextClassLoader is the WebappClassLoader
  • XWS Security’s Timer Thread for cleaning up nonces
  • iBatis SqlMaps Mapped Statements Cache
  • iBatis SqlMaps ClassInfo Cache
  • JDBC Drivers loaded from the WebappClassLoader (e.g. H2 as in-memory database for demo purposes)
  • Commons-Pool Eviction Timer, when started from within the WebApplication
  • Java Beans Introspector
  • Spring’s CachedIntrospectionResults
  • AspectJ’s ReflectiveWorld in v1.5.4 (seems to be fixed in v1.6.6)
  • Commons-Logging LogFactory
  • OpenOffice ODF Toolkit’s TempDirDeleter Timer

When are you finished fixing leaks?
When your webapp can be redeployed a large number of times without PermGen usage climbing, and when the Memory Analyzer only finds two suspects after undeploying all your webapps:

  • org.apache.catalina.loader.StandardClassLoader
  • <system class loader>

If there are other ClassLoaders, you might still have a leak. If you end up with the following long-term monitoring graph after 12 hours of continuous redeployments, you haven’t got any ClassLoader leaks:

12 hours stress testing the web application by redeploying (and calling it) continuously

(VisualVM only shows the last few minutes, but you can see from ‘Unloaded classes’ in the lower left that there was some stress testing going on)

Memory Leaks

source:
http://frankkieviet.blogspot.sg/2006/10/classloader-leaks-dreaded-permgen-space.html

Classloader leaks: the dreaded “java.lang.OutOfMemoryError: PermGen space” exception

Did you ever encounter a java.lang.OutOfMemoryError: PermGen space error when you redeployed your application to an application server? Did you curse the application server while restarting it to continue with your work, thinking that this is clearly a bug in the application server? Those application server developers should get their act together, shouldn’t they? Well, perhaps. But perhaps it’s really your fault!
Take a look at the following example of an innocent looking servlet.

package com.stc.test;

import java.io.*;
import java.util.logging.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class MyServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Log at a custom level
        Level customLevel = new Level("OOPS", 555) {};
        Logger.getLogger("test").log(customLevel, "doGet() called");
    }
}

Try to redeploy this little sample a number of times. I bet this will eventually fail with the dreaded java.lang.OutOfMemoryError: PermGen space error. If you’d like to understand what’s happening, read on.

The problem in a nutshell

Application servers such as Glassfish allow you to write an application (.ear, .war, etc) and deploy this application with other applications on this application server. Should you feel the need to make a change to your application, you can simply make the change in your source code, compile the source, and redeploy the application without affecting the other still running applications in the application server: you don’t need to restart the application server. This mechanism works fine on Glassfish and other application servers (e.g. Java CAPS Integration Server).

The way this works is that each application is loaded using its own classloader. Simply put, a classloader is a special class that loads .class files from jar files. When you undeploy the application, the classloader is discarded, and it and all the classes that it loaded should be garbage collected sooner or later.

Somehow, however, something may hold on to the classloader and prevent it from being garbage collected. And that’s what causes the java.lang.OutOfMemoryError: PermGen space exception.

PermGen space

What is PermGen space anyway? The memory in the Virtual Machine is divided into a number of regions. One of these regions is PermGen. It’s an area of memory that is used to (among other things) store class definitions. The size of this memory region is fixed, i.e. it does not change while the VM is running. You can specify the size of this region with the command-line switch -XX:MaxPermSize. The default is 64 MB on the Sun VMs.

If there’s a problem with garbage collecting classes and if you keep loading new classes, the VM will run out of space in that memory region, even if there’s plenty of memory available on the heap. Setting the -Xmx parameter will not help: this parameter only specifies the size of the total heap and does not affect the size of the PermGen region.
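For completeness, the two limits can be set independently on the Sun/Oracle VMs of that era (the values below are arbitrary examples, not recommendations):

```shell
# Heap is capped at 512 MB, PermGen at 256 MB -- the two limits are independent:
# a full PermGen can still trigger OutOfMemoryError with a nearly empty heap.
java -Xmx512m -XX:MaxPermSize=256m -jar myapp.jar
```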

Garbage collecting and classloaders

When you write something silly like

private void x1() {
    for (;;) {
        List c = new ArrayList();
    }
}

you’re continuously allocating objects; yet the program doesn’t run out of memory: the objects that you create are garbage collected, thereby freeing up space so that you can allocate another object. An object can only be garbage collected if it is “unreachable”, which means that there is no way to access the object from anywhere in the program. If nobody can access the object, there’s no point in keeping it, so it gets garbage collected. Let’s take a look at the memory picture of the servlet example. First, let’s simplify the example even further:

package com.stc.test;

import java.io.*;
import java.net.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class Servlet1 extends HttpServlet {
    private static final String STATICNAME = "Simple";

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
    }
}

After loading the above servlet, the following objects are in memory (of course limited to the relevant ones):

In this picture you see the objects loaded by the application classloader in yellow, and the rest in green. You see a simplified container object that holds references to the application classloader that was created just for this application, and to the servlet instance so that the container can invoke the doGet() method on it when a web request comes in. Note that the STATICNAME object is owned by the class object. Other important things to notice:

  1. Like each object, the Servlet1 instance holds a reference to its class (Servlet1.class).
  2. Each class object (e.g. Servlet1.class) holds a reference to the classloader that loaded it.
  3. Each classloader holds references to all the classes that it loaded.

The important consequence of this is that whenever an object outside of AppClassloader holds a reference to an object loaded by AppClassloader, none of the classes can be garbage collected.
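The reference chain in points 1–3 can be observed directly in plain Java (a tiny sketch; on Sun/Oracle VMs, bootstrap classes such as String report a null classloader):

```java
public class LoaderChain {
    public static void main(String[] args) {
        LoaderChain instance = new LoaderChain();
        // 1. Every object references its class ...
        Class<?> clazz = instance.getClass();
        // 2. ... and every class references the classloader that loaded it.
        ClassLoader loader = clazz.getClassLoader();
        System.out.println(loader != null);                        // prints "true"
        System.out.println(String.class.getClassLoader() == null); // prints "true"
    }
}
```

So holding any object alive transitively holds its class and its whole classloader alive.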

To illustrate this, let’s see what happens when the application gets undeployed: the Container object nullifies its references to the Servlet1 instance and to the AppClassloader object.

As you can see, none of the objects are reachable, so they all can be garbage collected. Now let’s see what happens when we use the original example where we use the Level class:

package com.stc.test;

import java.io.*;
import java.net.*;
import java.util.logging.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class LeakServlet extends HttpServlet {
    private static final String STATICNAME = "This leaks!";
    private static final Level CUSTOMLEVEL = new Level("test", 550) {}; // anon class!

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        Logger.getLogger("test").log(CUSTOMLEVEL, "doGet called");
    }
}

Note that the CUSTOMLEVEL‘s class is an anonymous class. That is necessary because the constructor of Level is protected. Let’s take a look at the memory picture of this scenario:

In this picture you see something you may not have expected: the Level class holds a static member to all Level objects that were created. Here’s the constructor of the Level class in the JDK:

protected Level(String name, int value) {
    this.name = name;
    this.value = value;
    synchronized (Level.class) {
        known.add(this);
    }
}

Here known is a static ArrayList in the Level class. Now what happens if the application is undeployed?

Only the LeakServlet object can be garbage collected. Because of the reference to the CUSTOMLEVEL object from outside of AppClassloader, the CUSTOMLEVEL anonymous class object (LeakServlet$1.class) cannot be garbage collected; through that, neither can the AppClassloader, and hence none of the classes that the AppClassloader loaded can be garbage collected.
Conclusion: any reference from outside the application to an object in the application of which the class is loaded by the application’s classloader will cause a classloader leak.
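You can observe this global registration without a heap dump: Level.parse() resolves names against that static list, so a freshly created custom level becomes visible JVM-wide (a small sketch; the level name is made up):

```java
import java.util.logging.Level;

public class LevelRegistryDemo {
    // Anonymous subclass, as in the servlet: the protected constructor
    // adds 'this' to Level's static list of known levels.
    static final Level CUSTOM = new Level("SNEAKY", 550) {};

    // parse() finds the level by name in the global registry --
    // proof that a reference escaped our code into JDK internals.
    public static int lookUpByName() {
        return Level.parse("SNEAKY").intValue();
    }
}
```

In a webapp, that JDK-internal list is exactly the “outside” reference that pins the whole WebappClassLoader.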

More sneaky problems

I don’t blame you if you didn’t see the problem with the Level class: it’s sneaky. Last year we had some undeployment problems in our application server. My team, in particular Edward Chou, spent some time tracking them all down. Next to the problem with Level, here are some other problems Edward and I encountered. For instance, if you happen to use some of the Apache Commons BeanHelper code: there’s a static cache in that code that refers to Method objects, and a Method object holds a reference to the class it points to. That is not a problem if the Apache Commons code is loaded in your application’s classloader. However, you do have a problem if this code is also present in the classpath of the application server, because those classes take precedence. As a result, you now have references to classes in your application from the application server’s classloader… a classloader leak!

I have not yet mentioned the simplest recipe for disaster: a thread started by the application which does not exit after the application is undeployed.

Detection and solution

Classloader leaks are difficult. Detecting such a leak without having to deploy/undeploy a large number of times is difficult, and finding the source of a classloader leak is even trickier, because none of the profilers that we tried at the time followed links through classloaders. Therefore we resorted to writing some custom code to find the leaks from memory dump files. Since that exercise, new tools have come to market in JDK 6. The next blog will outline the easiest approach today for tracking down a classloader leak.

Fork-Join in Java 7

Java 7 introduces a new parallel mechanism for compute-intensive tasks, the fork-join framework. The fork-join framework allows you to distribute a certain task over several workers and then wait for the result.
For Java 6.0 you can download the package (jsr166y) from
For testing, create the Java project “de.vogella.performance.forkjoin”. If you are not using Java 7 you also need to add “jsr166y.jar” to the classpath.
First create a package “algorithm” and then the problem class.

   
package algorithm;

import java.util.Random;

/**
 * This class defines a long list of integers which defines the problem we will
 * later try to solve.
 */
public class Problem {
    private final int[] list = new int[2000000];

    public Problem() {
        Random generator = new Random(19580427);
        for (int i = 0; i < list.length; i++) {
            list[i] = generator.nextInt(500000);
        }
    }

    public int[] getList() {
        return list;
    }
}

Now define the solver class. This class extends RecursiveAction.

Tip

The API defines other top classes, e.g. RecursiveTask and AsyncAction. Check the Javadoc for details.

   
package algorithm;

import java.util.Arrays;

import jsr166y.forkjoin.RecursiveAction;

public class Solver extends RecursiveAction {
    private int[] list;
    public long result;

    public Solver(int[] array) {
        this.list = array;
    }

    public long getResult() {
        return result;
    }

    @Override
    protected void compute() {
        if (list.length == 1) {
            result = list[0];
        } else {
            int midpoint = list.length / 2;
            int[] l1 = Arrays.copyOfRange(list, 0, midpoint);
            int[] l2 = Arrays.copyOfRange(list, midpoint, list.length);
            Solver s1 = new Solver(l1);
            Solver s2 = new Solver(l2);
            forkJoin(s1, s2);
            result = s1.result + s2.result;
        }
    }
}

Now define a small test class to test its efficiency.

   
package testing;

import jsr166y.forkjoin.ForkJoinExecutor;
import jsr166y.forkjoin.ForkJoinPool;
import algorithm.Problem;
import algorithm.Solver;

public class Test {

    public static void main(String[] args) {
        Problem test = new Problem();
        // Check the number of available processors
        int nThreads = Runtime.getRuntime().availableProcessors();
        System.out.println(nThreads);
        Solver mfj = new Solver(test.getList());
        ForkJoinExecutor pool = new ForkJoinPool(nThreads);
        pool.invoke(mfj);
        long result = mfj.getResult();
        System.out.println("Done. Result: " + result);
        long sum = 0;
        // Check if the result was ok
        for (int i = 0; i < test.getList().length; i++) {
            sum += test.getList()[i];
        }
        System.out.println("Done. Result: " + sum);
    }
}
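With the final Java 7 API (java.util.concurrent), the same divide-and-conquer sum can be sketched with RecursiveTask, which returns the result instead of storing it in a field. A sequential-cutoff threshold is added here so the split stops well before single elements; class name and threshold are my own choices, not part of the original tutorial:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000; // below this, sum sequentially
    private final int[] list;
    private final int from;
    private final int to; // exclusive

    public SumTask(int[] list, int from, int to) {
        this.list = list;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) {
                sum += list[i];
            }
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(list, from, mid);
        SumTask right = new SumTask(list, mid, to);
        left.fork();                          // run left half asynchronously
        return right.compute() + left.join(); // compute right half, then join
    }

    public static long sum(int[] list) {
        return new ForkJoinPool().invoke(new SumTask(list, 0, list.length));
    }
}
```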

Nonblocking algorithms

Java 5.0 provides support for additional atomic operations. This allows you to develop non-blocking algorithms, i.e. algorithms which do not require synchronization but are based on low-level atomic hardware primitives such as compare-and-swap (CAS). A compare-and-swap operation checks whether a variable has an expected value and, only if it does, replaces it with a new value.
Non-blocking algorithms are usually much faster than blocking algorithms, as the synchronization of threads happens on a much finer level (hardware).
For example, the following creates a non-blocking counter which always increases. This example is contained in the project “de.vogella.concurrency.nonblocking.counter”.

   
package de.vogella.concurrency.nonblocking.counter;

import java.util.concurrent.atomic.AtomicInteger;

public class Counter {
    private AtomicInteger value = new AtomicInteger();

    public int getValue() {
        return value.get();
    }

    public int increment() {
        return value.incrementAndGet();
    }

    // Alternative implementation of increment which makes the
    // CAS retry loop explicit
    public int incrementLongVersion() {
        int oldValue = value.get();
        while (!value.compareAndSet(oldValue, oldValue + 1)) {
            oldValue = value.get();
        }
        return oldValue + 1;
    }
}

And a test.

   
package de.vogella.concurrency.nonblocking.counter;

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Test {
    private static final int NTHREDS = 10;

    public static void main(String[] args) {
        final Counter counter = new Counter();
        List<Future<Integer>> list = new ArrayList<Future<Integer>>();

        ExecutorService executor = Executors.newFixedThreadPool(NTHREDS);
        for (int i = 0; i < 500; i++) {
            Callable<Integer> worker = new Callable<Integer>() {
                @Override
                public Integer call() throws Exception {
                    int number = counter.increment();
                    System.out.println(number);
                    return number;
                }
            };
            Future<Integer> submit = executor.submit(worker);
            list.add(submit);
        }

        // This will make the executor accept no new tasks
        // and finish all existing tasks in the queue
        executor.shutdown();
        // Wait until all tasks are finished
        while (!executor.isTerminated()) {
        }
        Set<Integer> set = new HashSet<Integer>();
        for (Future<Integer> future : list) {
            try {
                set.add(future.get());
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        }
        if (list.size() != set.size()) {
            throw new RuntimeException("Double-entries!!!");
        }
    }
}

The interesting part is how incrementAndGet() is implemented. It uses a CAS operation.

    
public final int incrementAndGet() {
    for (;;) {
        int current = get();
        int next = current + 1;
        if (compareAndSet(current, next))
            return next;
    }
}

The JDK itself makes more and more use of non-blocking algorithms to increase performance for every developer. Developing correct non-blocking algorithms is not a trivial task. For more information on non-blocking algorithms, e.g. examples of a non-blocking Stack and non-blocking LinkedList, please see http://www.ibm.com/developerworks/java/library/j-jtp04186/index.html
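As a hedged sketch of the non-blocking stack the linked article describes (a Treiber stack; this is my own minimal version, not code from that article): push and pop retry a CAS on the head reference instead of locking:

```java
import java.util.concurrent.atomic.AtomicReference;

public class NonBlockingStack<T> {
    private static class Node<T> {
        final T value;
        Node<T> next;

        Node(T value) {
            this.value = value;
        }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<Node<T>>();

    public void push(T value) {
        Node<T> newHead = new Node<T>(value);
        Node<T> oldHead;
        do {
            oldHead = head.get();
            newHead.next = oldHead;
        } while (!head.compareAndSet(oldHead, newHead)); // retry if another thread won
    }

    public T pop() {
        Node<T> oldHead;
        Node<T> newHead;
        do {
            oldHead = head.get();
            if (oldHead == null) {
                return null; // empty stack
            }
            newHead = oldHead.next;
        } while (!head.compareAndSet(oldHead, newHead));
        return oldHead.value;
    }
}
```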

Futures and Callables

The executor framework presented in the last chapter works with Runnables, but Runnables do not return a result.
If you expect your threads to return a computed result, you can use java.util.concurrent.Callable. Callables allow you to return values after completion.
Callable uses generics to define the type of object which is returned.
If you submit a Callable to an executor, the framework returns a java.util.concurrent.Future. This Future can be used to check the status of the Callable and to retrieve the result from it.
On the executor you can use the method submit() to submit a Callable and get back a Future. To retrieve the result of the Future, use its get() method.

   
package de.vogella.concurrency.callables;

import java.util.concurrent.Callable;

public class MyCallable implements Callable<Long> {
    @Override
    public Long call() throws Exception {
        long sum = 0;
        for (long i = 0; i <= 100; i++) {
            sum += i;
        }
        return sum;
    }
}

   
package de.vogella.concurrency.callables;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableFutures {
    private static final int NTHREDS = 10;

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(NTHREDS);
        List<Future<Long>> list = new ArrayList<Future<Long>>();
        for (int i = 0; i < 20000; i++) {
            Callable<Long> worker = new MyCallable();
            Future<Long> submit = executor.submit(worker);
            list.add(submit);
        }
        long sum = 0;
        System.out.println(list.size());
        // Now retrieve the result
        for (Future<Long> future : list) {
            try {
                sum += future.get();
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        }
        System.out.println(sum);
        executor.shutdown();
    }
}

Threads pools with the Executor Framework

Thread pools manage a pool of worker threads and contain a work queue which holds tasks waiting to get executed.
A thread pool can be described as a collection of Runnables (the work queue) and a collection of running threads. These threads are constantly running and check the work queue for new work. If there is new work to be done, they execute this Runnable. The Executor interface provides the method execute(Runnable r) to add Runnables to the work queue.
The Executor framework provides example implementations of the java.util.concurrent.Executor interface, e.g. Executors.newFixedThreadPool(int n), which will create n worker threads. The ExecutorService adds lifecycle methods to the Executor, which allow you to shut down the Executor and to wait for termination.

Tip

If you want a thread pool with one thread which executes several runnables, you can use Executors.newSingleThreadExecutor().
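A quick sketch (class name invented) showing that a single-thread executor runs submitted Runnables one after another, in submission order:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SingleThreadDemo {
    // Submits 'tasks' Runnables and returns the order in which they ran.
    public static List<Integer> runInOrder(int tasks) {
        final List<Integer> order = new CopyOnWriteArrayList<Integer>();
        ExecutorService executor = Executors.newSingleThreadExecutor();
        for (int i = 0; i < tasks; i++) {
            final int id = i;
            executor.execute(new Runnable() {
                @Override
                public void run() {
                    order.add(id); // only one worker thread, so no interleaving
                }
            });
        }
        executor.shutdown();
        try {
            executor.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return order;
    }
}
```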

Create again the Runnable.

   
package de.vogella.concurrency.threadpools;

/**
 * MyRunnable will count the sum of the numbers from 1 to the parameter
 * countUntil and then write the result to the console.
 * <p>
 * MyRunnable is the task which will be performed.
 *
 * @author Lars Vogel
 */
public class MyRunnable implements Runnable {
    private final long countUntil;

    MyRunnable(long countUntil) {
        this.countUntil = countUntil;
    }

    @Override
    public void run() {
        long sum = 0;
        for (long i = 1; i < countUntil; i++) {
            sum += i;
        }
        System.out.println(sum);
    }
}

Now you run your runnables with the executor framework.

   
package de.vogella.concurrency.threadpools;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Main {
    private static final int NTHREDS = 10;

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(NTHREDS);
        for (int i = 0; i < 500; i++) {
            Runnable worker = new MyRunnable(10000000L + i);
            executor.execute(worker);
        }
        // This will make the executor accept no new tasks
        // and finish all existing tasks in the queue
        executor.shutdown();
        // Wait until all tasks are finished
        while (!executor.isTerminated()) {
        }
        System.out.println("Finished all threads");
    }
}

In case the threads should return some value (result-bearing threads), you can use java.util.concurrent.Callable.

Threads in Java

The base means for concurrency are “java.lang.Threads”. Each thread executes an object of type “java.lang.Runnable”. Runnable is an interface which defines only one method, “run()”. This method is called by the “Thread” and contains the work which should be done. Therefore the “Runnable” is the task to perform, and the Thread is the worker doing this task.
The following demonstrates a task (Runnable) which counts the sum of a given range of numbers. Create the Java project “de.vogella.concurrency.threads” for the example code of this section.

   
package de.vogella.concurrency.threads;

/**
 * MyRunnable will count the sum of the numbers from 1 to the parameter
 * countUntil and then write the result to the console.
 * <p>
 * MyRunnable is the task which will be performed.
 *
 * @author Lars Vogel
 */
public class MyRunnable implements Runnable {
    private final long countUntil;

    MyRunnable(long countUntil) {
        this.countUntil = countUntil;
    }

    @Override
    public void run() {
        long sum = 0;
        for (long i = 1; i < countUntil; i++) {
            sum += i;
        }
        System.out.println(sum);
    }
}

To perform a task (Runnable) you need to define and start a Thread. The following code creates threads, assigns a Runnable to each, schedules the threads to run, and waits until all threads are finished.

   
package de.vogella.concurrency.threads;

import java.util.ArrayList;
import java.util.List;

public class Main {

    public static void main(String[] args) {
        // We will store the threads so that we can check if they are done
        List<Thread> threads = new ArrayList<Thread>();
        // We will create 500 threads
        for (int i = 0; i < 500; i++) {
            Runnable task = new MyRunnable(10000000L + i);
            Thread worker = new Thread(task);
            // We can set the name of the thread
            worker.setName(String.valueOf(i));
            // Start the thread, never call method run() directly
            worker.start();
            // Remember the thread for later usage
            threads.add(worker);
        }
        int running = 0;
        do {
            running = 0;
            for (Thread thread : threads) {
                if (thread.isAlive()) {
                    running++;
                }
            }
            System.out.println("We have " + running + " running threads. ");
        } while (running > 0);
    }
}
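The polling loop above works but burns CPU and spams the console. A hedged alternative (my own sketch, not part of the original tutorial) is Thread.join(), which blocks the calling thread until a worker has finished:

```java
import java.util.ArrayList;
import java.util.List;

public class JoinDemo {
    // Starts 'count' workers, each summing 1..(999 + id), and waits for all
    // of them with join() instead of a busy-wait polling loop.
    public static long[] runAndJoin(int count) {
        final long[] results = new long[count];
        List<Thread> threads = new ArrayList<Thread>();
        for (int i = 0; i < count; i++) {
            final int id = i;
            Thread worker = new Thread(new Runnable() {
                @Override
                public void run() {
                    long sum = 0;
                    for (long j = 1; j < 1000 + id; j++) {
                        sum += j;
                    }
                    results[id] = sum;
                }
            });
            worker.start();
            threads.add(worker);
        }
        // join() blocks until the worker terminates -- no busy-waiting needed.
        for (Thread thread : threads) {
            try {
                thread.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return results;
    }
}
```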

Using Threads directly has the following disadvantages:

  • Creating a new thread causes some performance overhead.
  • Too many threads can lead to reduced performance, as the CPU needs to switch between these threads.
  • You cannot easily control the number of threads, therefore you may run into out-of-memory errors due to too many threads.

The “java.util.concurrent” package offers improved support for concurrency compared to direct Thread usage and helps solve several of these issues.