Tomcat, Nginx, Docker Containers: App Deployment with Highly Available/Scalable Containers

Sudhakar Gumparthi
10 min read · May 20, 2021

This post walks through a simple deployment model: an application deployed on Tomcat Docker containers, load balanced by Nginx as a reverse proxy with SSL, along with sticky sessions and session propagation.

The operating system we use here is Ubuntu 18.04, with root permissions. All we are trying to do is deploy a simple web application, SampleWebApp.war (or any you have handy), to multiple Tomcat containers, and access the application with a host name, e.g., t1.my-site.com or t2.my-site.com.

We are deploying t1.my-site.com to two different Docker containers and t2.my-site.com to two other containers, and each site is load balanced across its two containers.

We access these two sites over SSL, at https://t1.my-site.com and https://t2.my-site.com, so we also have to discuss how to set up SSL.

We also configure log files for the four containers and verify the logs. Many might wonder how to access application logs from inside containers, so we look into that along with some basics.

Finally, session propagation and sticky sessions, the tricky part of applications: we cannot avoid situations that demand sticky sessions. I would still prefer a stateless-API approach, but as part of this deployment model we will discuss sticky sessions, session propagation, and their setup.

That is pretty much the context. We are not going to discuss Tomcat, clustering, sticky sessions, Docker, containers, or Redis in depth; many contributors have published plenty of information and tips on these topics. We simply use these tools to get a working deployment and see a hello world.

Let us split the story into multiple sections for easy readability and understanding.

  1. Install Docker and Docker Compose
  2. Docker Compose scripts
  3. Nginx Virtual Site Configuration
  4. Deploy and Verify Sample Application
  5. Sticky Session vs Session Propagation — Docker Script
  6. Verify Sticky Sessions

Docker and Docker Compose

Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels. These containers provide high scalability and availability.

Open the Ubuntu terminal or VM terminal. Run a quick check to see whether Docker is already installed.
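A minimal sketch of such a check (it only reports whether the two commands are on the PATH; it does not verify the daemon is running):

```shell
# Report whether docker and docker-compose are available on this machine
for cmd in docker docker-compose; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: installed"
  else
    echo "$cmd: not installed"
  fi
done
```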

On my machine they are already installed, but if they are not installed on your system, you can install them with a few simple commands:

sudo apt-get update

sudo apt-get upgrade

sudo apt-get install docker-ce docker-compose

For detailed steps and understanding, please visit the official Docker site: https://docs.docker.com/engine/install/ubuntu/

Docker Compose scripts

Docker Compose is a tool for defining and running multi-container Docker applications. With it, we use a YAML file to configure the application's services; then, with a single command, we create and start all the services from that configuration file.

Let us define the first container for one site, t1.my-site.com:

tomcat_1:
  image: tomcat:9-jdk8-openjdk
  environment:
    VIRTUAL_HOST: t1.my-site.com
    VIRTUAL_PORT: 8080
  restart: always
  volumes:
    - "./tomcat/webapps:/usr/local/tomcat/webapps"
    - "./tomcat/logs:/usr/local/tomcat/logs"

The second container:

tomcat_3:
  image: tomcat:9-jdk8-openjdk
  environment:
    VIRTUAL_HOST: t1.my-site.com
    VIRTUAL_PORT: 8080
  restart: always
  volumes:
    - "./tomcat/webapps:/usr/local/tomcat/webapps"
    - "./tomcat/logs:/usr/local/tomcat/logs"

The above two configurations represent two containers.

Where is the application to deploy? Inspect the "volumes" section: we configured ./tomcat/webapps/, which is on the host system (or the VM we are working on). The SampleWebApp.war file is placed here, and it gets deployed to the Tomcat server in each of the above two containers.

Where did we say it is a Tomcat server? Inspect the "image" section: it is the Docker image pulled from the Docker registry. You can use any of the available Tomcat images.

How do we get the log files from the containers? Check the "volumes" section and the "./tomcat/logs" mount.

Similarly, we define two container services for t2.my-site.com. I am not repeating the details here, as the final version of the file appears towards the end of this section.

So, we now have two sites and two containers for each site. And the question is: who will load balance?

That is Nginx, as a reverse proxy:

reverse-proxy:
  image: jwilder/nginx-proxy
  restart: always
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - ./nginx/certs:/etc/nginx/certs
    - ./nginx/nginx-proxy.conf:/etc/nginx/conf.d/nginx-proxy.conf:ro
  links:
    - tomcat_1
    - tomcat_2
    - tomcat_3
    - tomcat_4

The above configuration shows that we are using the "jwilder/nginx-proxy" image as the Nginx reverse proxy. The folder "./nginx/certs" on the host system (or VM) contains the certificates for my-site.com (self-signed certificates for demonstration purposes). It contains two files, my-site.com.crt and my-site.com.key.

The file "./nginx/nginx-proxy.conf" is the Nginx configuration for the 80-to-443 redirection and the virtual host setup.

So we now have all the container configurations that go into the "docker-compose.yml" file and the Nginx configuration that goes into the "nginx-proxy.conf" file. The folder structure below gives a clear picture of which file goes where.
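For reference, here is the folder layout reconstructed from the volume mounts used in this post (the original shows it as an image; file names follow the configuration above):

```
tomcat-nginx-docker/
├── docker-compose.yml
├── nginx/
│   ├── certs/
│   │   ├── my-site.com.crt
│   │   └── my-site.com.key
│   └── nginx-proxy.conf
└── tomcat/
    ├── logs/
    └── webapps/
        └── SampleWebApp.war
```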

If you have the below two files, the entire deployment is done; it is that simple.

docker-compose.yml

version: "3"

services:

  tomcat_1:
    image: tomcat:9-jdk8-openjdk
    environment:
      VIRTUAL_HOST: t1.my-site.com
      VIRTUAL_PORT: 8080
    restart: always
    volumes:
      - "./tomcat/webapps:/usr/local/tomcat/webapps"
      - "./tomcat/logs:/usr/local/tomcat/logs"

  tomcat_2:
    image: tomcat:9-jdk8-openjdk
    environment:
      VIRTUAL_HOST: t2.my-site.com
      VIRTUAL_PORT: 8080
    restart: always
    volumes:
      - "./tomcat/webapps:/usr/local/tomcat/webapps"
      - "./tomcat/logs:/usr/local/tomcat/logs"

  tomcat_3:
    image: tomcat:9-jdk8-openjdk
    environment:
      VIRTUAL_HOST: t1.my-site.com
      VIRTUAL_PORT: 8080
    restart: always
    volumes:
      - "./tomcat/webapps:/usr/local/tomcat/webapps"
      - "./tomcat/logs:/usr/local/tomcat/logs"

  tomcat_4:
    image: tomcat:9-jdk8-openjdk
    environment:
      VIRTUAL_HOST: t2.my-site.com
      VIRTUAL_PORT: 8080
    restart: always
    volumes:
      - "./tomcat/webapps:/usr/local/tomcat/webapps"
      - "./tomcat/logs:/usr/local/tomcat/logs"

  reverse-proxy:
    image: jwilder/nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx/certs:/etc/nginx/certs
      - ./nginx/nginx-proxy.conf:/etc/nginx/conf.d/nginx-proxy.conf:ro
    links:
      - tomcat_1
      - tomcat_2
      - tomcat_3
      - tomcat_4

nginx-proxy.conf

server {
    listen 80;
    server_name t1.my-site.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name t1.my-site.com;
    access_log /var/log/nginx/data-access.log combined;

    location / {
        proxy_pass http://t1.my-site.com:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 80;
    server_name t2.my-site.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name t2.my-site.com;
    access_log /var/log/nginx/data-access.log combined;

    location / {
        proxy_pass http://t2.my-site.com:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Nginx Virtual Site Configuration

The above nginx-proxy.conf shows that we defined two virtual host configurations for each site: one listening on port 80 and the other listening on 443. If a request comes in via port 80, it is redirected to 443; the configuration in the section above is self-explanatory.

Deploy and Verify Sample Application

Let us navigate to the folder …/tomcat-nginx-docker; you should have created the folder structure by now. You should also have a sample web app ready and placed under …/tomcat-nginx-docker/tomcat/webapps. As for the SSL cert and key, we have not discussed how to generate them; you can use openssl to generate them.
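For example, a self-signed certificate and key for demonstration can be generated roughly like this (the wildcard common name is an assumption; adjust the subject to your domain):

```shell
# Generate a self-signed certificate and key for *.my-site.com (demo only, valid 365 days).
# Move the two files into ./nginx/certs afterwards.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout my-site.com.key -out my-site.com.crt \
  -subj "/CN=*.my-site.com"
```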

Simply run the command,

docker-compose up

Wait a minute; docker-compose will pull all the needed container images, build the services, and bring them up and running.

Now our infra is ready, and the sample web application is up and running at the two site addresses we mentioned, t1.my-site.com and t2.my-site.com. We have not discussed in this post how to map the site names to the host IP. To keep it simple, open the /etc/hosts file on the host system where we are setting up the deployment environment and add the following:

127.0.1.1 t1.my-site.com

127.0.1.1 t2.my-site.com

Below are various screenshots of the sample application on the two different sites.

The two screenshots above show that different hosts served the user requests and that the session ids vary; that means a different session is created for each request. Let us see the screenshots of the other site.

So, the same happens with t2.my-site.com as well.
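Without the screenshots, the same observation can be made by comparing the JSESSIONID values in the Set-Cookie headers of two responses. A small sketch (the header values below are made-up examples, not output from this deployment):

```shell
# Pull the session id out of a Set-Cookie header
extract_session() {
  printf '%s\n' "$1" | sed -n 's/.*JSESSIONID=\([^;]*\).*/\1/p'
}

# Hypothetical headers from two successive requests to the same site
first=$(extract_session "Set-Cookie: JSESSIONID=1A2B3C; Path=/; HttpOnly")
second=$(extract_session "Set-Cookie: JSESSIONID=9X8Y7Z; Path=/; HttpOnly")

if [ "$first" = "$second" ]; then
  echo "same session across requests"
else
  echo "new session per request"
fi
```

With real responses, the headers would come from something like `curl -sk -D - https://t1.my-site.com/ -o /dev/null`.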

Now it is time to discuss and implement session propagation across containers, in contrast with sticky sessions; the next section is for those who need it!

Session Propagation Setup — Docker Script

Sticky, Sticky, Sticky … !

It is time to loosen up a little from stickiness! It's true!

Coming to the point, it is a bit challenging to maintain sticky sessions, for various reasons: they demand maintenance from both the infrastructure and the application point of view.

So let us get started with balancing the load while keeping sessions intact, within the deployment model discussed. Containers are individual, and the Tomcats deployed in those containers are also independent. Propagating sessions needs some external session management, which has its own pros and cons. Keeping those aside: who maintains the sessions for us now?

The answer is Redisson, a Redis Java client with the features of an in-memory data grid. For more details, see https://github.com/redisson/redisson

The Tomcat Session Manager supports this, so we are going to use it and configure all the Tomcat containers to connect to the Redis component for validating and maintaining sessions. Let us see the folder structure and the components to be added.

The jar files redisson-all-3.15.5.jar and redisson-tomcat-9-3.15.5.jar can be downloaded from the Redisson site.

redisson.conf is to be created with the configuration below.

{
  "singleServerConfig": {
    "address": "redis://redis:6379",
    "timeout": 1800
  },
  "threads": 0,
  "nettyThreads": 0,
  "transportMode": "NIO"
}

The "redis" in the address above is the host name of the container that we are going to set up further down.
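Since a malformed redisson.conf typically only surfaces as a Tomcat startup error, it can help to write the file and sanity-check it as JSON on the host first (a sketch, assuming python3 is available; the path mirrors the compose volume mount used later):

```shell
# Write redisson.conf under the path mounted into the tomcat containers,
# then confirm it parses as valid JSON before starting anything.
mkdir -p tomcat/conf
cat > tomcat/conf/redisson.conf <<'EOF'
{
  "singleServerConfig": {
    "address": "redis://redis:6379",
    "timeout": 1800
  },
  "threads": 0,
  "nettyThreads": 0,
  "transportMode": "NIO"
}
EOF
python3 -m json.tool tomcat/conf/redisson.conf >/dev/null && echo "redisson.conf: valid JSON"
```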

context.xml tells the Tomcat servers to use the Redisson Session Manager. Its contents are below.

<?xml version="1.0" encoding="UTF-8"?>
<Context>
    <!-- Default set of monitored resources -->
    <WatchedResource>WEB-INF/web.xml</WatchedResource>
    <!--
    <Valve className="org.apache.catalina.valves.CometConnectionManagerValve" />
    -->
    <Manager className="org.redisson.tomcat.RedissonSessionManager"
             configPath="${catalina.base}/redisson.conf"
             broadcastSessionEvents="true"
             readMode="REDIS" updateMode="DEFAULT"/>
</Context>

Let us now add “redis” container to the docker-compose.yml and link it to all the containers.

redis:
  image: redis:alpine
  hostname: redis
  ports:
    - 6379:6379

Now, the modified docker-compose.yml, with redis, looks like below.

version: "3"

services:

  redis:
    image: redis:alpine
    hostname: redis
    ports:
      - 6379:6379

  tomcat_1:
    image: tomcat:9-jdk8-openjdk
    environment:
      VIRTUAL_HOST: t1.my-site.com
      VIRTUAL_PORT: 8080
    restart: always
    volumes:
      - "./tomcat/webapps:/usr/local/tomcat/webapps"
      - "./tomcat/logs:/usr/local/tomcat/logs"
      - "./tomcat/conf/context.xml:/usr/local/tomcat/conf/context.xml"
      - "./tomcat/conf/redisson.conf:/usr/local/tomcat/redisson.conf"
      - "./tomcat/lib/redisson-tomcat-9-3.15.5.jar:/usr/local/tomcat/lib/redisson-tomcat-9-3.15.5.jar"
      - "./tomcat/lib/redisson-all-3.15.5.jar:/usr/local/tomcat/lib/redisson-all-3.15.5.jar"
    links:
      - redis

  tomcat_2:
    image: tomcat:9-jdk8-openjdk
    environment:
      VIRTUAL_HOST: t2.my-site.com
      VIRTUAL_PORT: 8080
    restart: always
    volumes:
      - "./tomcat/webapps:/usr/local/tomcat/webapps"
      - "./tomcat/logs:/usr/local/tomcat/logs"
      - "./tomcat/conf/context.xml:/usr/local/tomcat/conf/context.xml"
      - "./tomcat/conf/redisson.conf:/usr/local/tomcat/redisson.conf"
      - "./tomcat/lib/redisson-all-3.15.5.jar:/usr/local/tomcat/lib/redisson-all-3.15.5.jar"
      - "./tomcat/lib/redisson-tomcat-9-3.15.5.jar:/usr/local/tomcat/lib/redisson-tomcat-9-3.15.5.jar"
    links:
      - redis

  tomcat_3:
    image: tomcat:9-jdk8-openjdk
    environment:
      VIRTUAL_HOST: t1.my-site.com
      VIRTUAL_PORT: 8080
    restart: always
    volumes:
      - "./tomcat/webapps:/usr/local/tomcat/webapps"
      - "./tomcat/logs:/usr/local/tomcat/logs"
      - "./tomcat/conf/context.xml:/usr/local/tomcat/conf/context.xml"
      - "./tomcat/conf/redisson.conf:/usr/local/tomcat/redisson.conf"
      - "./tomcat/lib/redisson-tomcat-9-3.15.5.jar:/usr/local/tomcat/lib/redisson-tomcat-9-3.15.5.jar"
      - "./tomcat/lib/redisson-all-3.15.5.jar:/usr/local/tomcat/lib/redisson-all-3.15.5.jar"
    links:
      - redis

  tomcat_4:
    image: tomcat:9-jdk8-openjdk
    environment:
      VIRTUAL_HOST: t2.my-site.com
      VIRTUAL_PORT: 8080
    restart: always
    volumes:
      - "./tomcat/webapps:/usr/local/tomcat/webapps"
      - "./tomcat/logs:/usr/local/tomcat/logs"
      - "./tomcat/conf/context.xml:/usr/local/tomcat/conf/context.xml"
      - "./tomcat/conf/redisson.conf:/usr/local/tomcat/redisson.conf"
      - "./tomcat/lib/redisson-tomcat-9-3.15.5.jar:/usr/local/tomcat/lib/redisson-tomcat-9-3.15.5.jar"
      - "./tomcat/lib/redisson-all-3.15.5.jar:/usr/local/tomcat/lib/redisson-all-3.15.5.jar"
    links:
      - redis

  reverse-proxy:
    image: jwilder/nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx/certs:/etc/nginx/certs
      - ./nginx/nginx-proxy.conf:/etc/nginx/conf.d/nginx-proxy.conf:ro
    links:
      - tomcat_1
      - tomcat_2
      - tomcat_3
      - tomcat_4

Reading the above configuration, the points of interest now are the "links" and "volumes" sections: each Tomcat container gets a link to the redis container and the Redisson configuration, respectively.

Verify Sticky Sessions

Congratulations! The containers now share the load, and sessions are shared and propagated. Let us bring docker-compose up now.

Now, let us stop the host 388e96d2ed68; the session moves to the other container, as the screen below shows.

Now, stopping the container host 359cb54354fa and bringing the other one back, the request goes to the other container, 388e96d2ed68. See the screen below: all the session information is propagated, in contrast to sticky sessions.

Wait for some time and try to access the link again; the session is now refreshed. And, as you might guess, that is because of the "session-timeout" set in the web app.

The Docker scripts provided in this post are basic but contain the necessary pieces: load balancing, scaling, reverse proxy, SSL, sticky sessions, and session propagation. Based on these scripts, many security configurations, application tunings with JVM and ENV parameters, and much more can be explored or added to make the deployment model production ready. But the above serves its purpose.

Some Docker commands used for stopping and starting containers:

docker stop 359cb54354fa

docker start 359cb54354fa

docker system prune -a (removes all stopped containers, unused networks, and all unused images)

docker-compose up

I tried to keep the points straightforward for people who know nothing about Docker and containers; with a little Linux knowledge, you can follow the setup and understand container concepts through the deployment model demonstrated in this post.

Please feel free to share any security improvements or better capabilities for the above Docker scripts to make the deployment model more robust; it would help everyone.
