Container technologies in the area of cloud computing are steadily on the rise. They enable great new cloud (platform) architectures, but they can be challenging even for experienced developers. Those challenges are not always visible at first, partly because cloud platforms like SAP Business Technology Platform do their best to shield users from the complexity that comes with container technologies.
There are, however, troubleshooting situations where a good look behind the scenes can help developers find solutions to various issues.
In this blog series, we show Java developers who are eager to learn about containerized Java applications on SAP BTP Cloud Foundry how to troubleshoot and get the best out of this technology.
- Part 1, this blog post, is all about Java memory management and how it is different in containers.
- Part 2 will go into greater detail about different out-of-memory situations and how to detect and mitigate them.
- Part 3 will explain how heap dumps and thread dumps can be created on demand with tools that we’ve found useful in our day-to-day business.
If you have ever experienced crashes of your Java applications running in containers that seemed to come out of nowhere, you will likely agree that memory management for containerized Java Virtual Machines (JVMs) can be a life saver.
Why should I care?
In a containerized environment, resources are typically shared on the same virtual machine. To make this possible, every container and its contained processes have to stick to certain rules. The Linux out-of-memory killer, for example, ensures that a container never uses more memory than it is allowed to ask for. Its heroic name suits it very well: as soon as your container tries to allocate a single byte more than it is entitled to, it is mercilessly terminated to safeguard the other containers on the same machine. This is referred to as a container out-of-memory situation.
Java Memory Management
To avoid such out-of-memory errors, Java application developers need to think about how to configure the Java Virtual Machine so that it never tries to allocate more memory than the container allows for. This is possible with a broad variety of memory flags you can add to the java command when running your application. The Java Virtual Machine is built in a way that it manages multiple memory areas for different purposes. The most famous memory area is the heap: the space where objects created during the lifetime of your application are stored. The JVM implements the so-called Generational Hypothesis, according to which the heap is divided into smaller pieces, typically referred to as the Young and Old generation memory areas. It's beyond the scope of this blog to go into further detail, but for now just remember that the heap is divided into smaller sections and that the process of Garbage Collection maintains the lifecycle of objects. For the problem discussed in this blog, it is more important to note that the heap is not the only memory area the JVM maintains. Another memory area managed by the JVM is the Metaspace (-XX:MaxMetaspaceSize), introduced in Java 8 as a replacement for PermGen, which holds the metadata of all Java classes. Further memory areas managed by the JVM are the Code Cache (-XX:ReservedCodeCacheSize) and Direct Memory (-XX:MaxDirectMemorySize).
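A quick way to see these memory areas from inside a running JVM is the standard java.lang.management API. The following sketch simply prints the name and maximum size of each memory pool; the exact pool names vary by JVM vendor and version:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MemoryPools {
    public static void main(String[] args) {
        // Each pool corresponds to one of the areas above, e.g. the Metaspace,
        // the Code Cache segments, and the heap generations.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            long max = pool.getUsage().getMax(); // -1 means "undefined/unlimited"
            System.out.printf("%-35s type=%s max=%d bytes%n",
                    pool.getName(), pool.getType(), max);
        }
    }
}
```

Running this inside your container is a cheap way to verify which limits the buildpack's flags actually imposed on each area.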
Threads, Threads Everywhere
In addition to the memory areas mentioned above, there's another configuration you can make when starting a Java process: the stack size (-Xss). It limits the memory of a thread's execution stack. The default value typically lies between 512K and 1M, depending on the JVM distribution and buildpack. At first glance this value looks negligible. But multiplied by 250, a typical estimate for the number of threads used during the runtime of a Java cloud application, the sum of stack sizes becomes a considerable amount of memory your process can eventually consume. Keep in mind that a default Tomcat configuration already allows up to 200 threads for incoming web requests. This doesn't include threads potentially spawned by the business functionality of your Java application, for example via the ForkJoinPool, or by third-party libraries such as Hystrix.
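The back-of-the-envelope math is simple. A sketch, using the 1M stack size and 250-thread estimate mentioned above (both assumptions, not measured values):

```java
public class StackBudget {
    public static void main(String[] args) {
        long stackSizeBytes = 1024L * 1024; // -Xss1M
        int estimatedThreads = 250;         // Tomcat's 200 request threads plus ~50 others
        long totalBytes = stackSizeBytes * estimatedThreads;
        // 250 threads at 1 MB each reserve a quarter of a gigabyte for stacks alone
        System.out.println(totalBytes / (1024 * 1024) + " MB reserved for stacks"); // 250 MB
    }
}
```

In a container limited to, say, 1 GB, that is a quarter of the whole budget before the heap is even considered.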
This explains why using a CachedThreadPool, which has no upper limit on the number of threads, is a bad choice unless you know in advance that its use is bounded by other criteria. We recommend knowing in advance how many threads your application will run in peak situations, so you can forecast the required space for all stacks.
Buildpacks to the Rescue
One of the great things about SAP BTP Cloud Foundry is buildpacks. Buildpacks allow you to translate your source code, whether it's artifact-based (e.g., Java .jar and .war files, binaries) or folder-based (Node.js, Python, Ruby), into a container. The buildpack initiative around Cloud Native Buildpacks and the growing list of integrations show that this concept is not limited to Cloud Foundry (it originally came from Heroku), but can work well with other platforms too.
For Java developers on Cloud Foundry, the Java buildpack and the SAP Java buildpack – SAP’s own implementation – translate your app artifact into a container image you can run. In addition to creating a runnable container, the buildpacks also take care of calculating and setting reasonable default settings, including but not limited to memory flags. The responsible sub-component of the Java buildpacks is the Java Buildpack Memory Calculator.
Buildpacks are a great way to help developers avoid common pitfalls. Obviously, memory prediction is a difficult job. Therefore, the buildpacks accept additional context information from developers. For example, the Java buildpack assumes that the Java application will not use more than 250 threads. If your application is designed to use a higher number of threads, you can (and should) give a hint to the buildpack, allowing the memory calculator to safeguard you from container memory violations caused by a large number of threads and their corresponding stack sizes. You can find an example of such a hint in the Cloud Foundry Java Tips documentation. With the advent of non-blocking frameworks such as Netty, the number of threads might also be significantly lower, which can lead to an overall lower memory footprint when configured correctly.
This documentation explains in more detail how the default memory calculation algorithm works.
In part 2 of this blog series, we will explain out-of-memory situations that may occur in the context of containerized applications and show you how to mitigate them.