How to Monitor Your Java App with JMX Exporter & Prometheus
Introduction
If you have a Java application running in Kubernetes, sooner or later you will want to know what’s really going on inside the JVM. Is heap memory close to exhaustion? Is the garbage collector busy? Are we slowly heading toward an OutOfMemoryError? Without this insight, you’re essentially flying blind.
In this guide, we walk through how to expose JVM metrics with JMX Exporter, collect them with Prometheus, and visualize them in Grafana. This setup gives you real visibility into memory usage, GC activity, and overall JVM health.
In simple terms:
JMX Exporter exposes JVM metrics in a format that Prometheus understands. Prometheus collects these metrics at regular intervals. Grafana helps us visualize and analyze them. We’ll go through the necessary setup step by step.
Understanding the Flow
Your Java application runs inside a container. The JMX Exporter agent runs alongside it and exposes JVM metrics on a port (for example, 9400). Prometheus scrapes that metrics endpoint. Grafana queries Prometheus and renders the data into dashboards. When properly configured, you will be able to see heap usage, max memory, GC counts, and other metrics in real time.
The Big Picture
Think of the monitoring flow like this:
- Your Java App is a sealed box with internal gauges (JMX metrics).
- JMX Exporter is the window that exposes those gauges.
- Prometheus periodically collects the readings.
- Grafana visualizes the data into dashboards.
Let’s build this step-by-step.
Step 1: Let’s expose JVM metrics using JMX Exporter
The first step is to specify which JVM metrics we want to expose. For this, we create a config.yml file that maps JMX metric patterns to Prometheus metric names and labels.
# config.yml
lowercaseOutputName: true  # Makes all metric names lowercase (recommended)
rules:
  # Heap / non-heap memory (composite attribute keys are matched as <AttributeName>key)
  - pattern: 'java.lang<type=Memory><HeapMemoryUsage>used'
    name: jvm_memory_used_bytes
    labels:
      area: "heap"
  - pattern: 'java.lang<type=Memory><NonHeapMemoryUsage>used'
    name: jvm_memory_used_bytes
    labels:
      area: "nonheap"
  - pattern: 'java.lang<type=Memory><HeapMemoryUsage>max'
    name: jvm_memory_max_bytes
    labels:
      area: "heap"
  - pattern: 'java.lang<type=Memory><NonHeapMemoryUsage>max'
    name: jvm_memory_max_bytes
    labels:
      area: "nonheap"
  - pattern: 'java.lang<type=Memory><HeapMemoryUsage>committed'
    name: jvm_memory_committed_bytes
    labels:
      area: "heap"
  - pattern: 'java.lang<type=Memory><NonHeapMemoryUsage>committed'
    name: jvm_memory_committed_bytes
    labels:
      area: "nonheap"
  # Garbage collection counters
  - pattern: 'java.lang<type=GarbageCollector, name=([^,]+)><>(CollectionTime|CollectionCount)'
    name: jvm_gc_$2
    labels:
      gc: "$1"
This configuration exposes:
- Heap and non-heap memory usage
- Maximum memory
- Committed memory
- Garbage collection count and time
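With these rules applied, the exporter’s /metrics output will look roughly like this (all values are illustrative, and the gc label depends on which collector your JVM runs):

```
jvm_memory_used_bytes{area="heap"} 2.14560768E8
jvm_memory_used_bytes{area="nonheap"} 6.4E7
jvm_memory_max_bytes{area="heap"} 4.194304E8
jvm_memory_committed_bytes{area="heap"} 3.145728E8
jvm_gc_collectioncount{gc="G1 Young Generation"} 42.0
jvm_gc_collectiontime{gc="G1 Young Generation"} 318.0
```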
Add JMX Exporter to Docker Image
Now we need to add the JMX Exporter agent to our Docker image and run it alongside the application. Here’s a multi-stage Dockerfile:
FROM arm64v8/maven:3.6.3-jdk-8-openj9 AS build-env
USER root
RUN mkdir /opt/app
ADD ./ /opt/app/
WORKDIR /opt/app/
#RUN mvn package -Dmaven.test.skip=true
RUN mvn clean install -Dmaven.test.skip=true
############################################################
FROM arm64v8/openjdk:8-jdk-alpine
RUN apk --update add fontconfig ttf-dejavu
RUN mkdir -p /opt/jars
COPY --from=build-env /opt/app/target/*.jar /opt/jars/latest.jar
COPY ./entrypoint.sh entrypoint.sh
COPY ./jmx_prometheus_javaagent-1.4.0.jar /opt/jmx-exporter/
COPY ./config.yml /opt/jmx-exporter/
ADD ./ca-bundle.crt /etc/ssl/certs/ca-certificates.crt
RUN chmod +x entrypoint.sh
EXPOSE 8080 9400
ENTRYPOINT ["sh","/entrypoint.sh"]
Entrypoint Configuration
entrypoint.sh:
#!/bin/sh
# exec replaces the shell so the JVM receives container signals (SIGTERM) directly
exec java \
  -javaagent:/opt/jmx-exporter/jmx_prometheus_javaagent-1.4.0.jar=9400:/opt/jmx-exporter/config.yml \
  -Dserver.port=8080 \
  -Xmx400m \
  -XX:MaxGCPauseMillis=200 \
  -XX:-UseParallelOldGC \
  -XX:+UseGCOverheadLimit \
  -Dfile.encoding=UTF-8 \
  -Dapp.name=test-service \
  -Dspring.profiles.active=${env} \
  -jar /opt/jars/latest.jar 2>&1
Download the agent from:
https://github.com/prometheus/jmx_exporter/releases
Build and push the updated image.
The crux of this is the -javaagent flag. Its general syntax is -javaagent:<agent-jar>=<port>:<config-file>, so here it instructs the JVM to load the JMX Exporter at startup and expose metrics on port 9400. Once you have built and pushed the image, verify from inside the container:
curl localhost:9400/metrics
If you see a list of metrics, the exporter is working.
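You can dry-run the same check offline against a captured sample payload (metric names and values below are illustrative, matching the config.yml rules from Step 1):

```shell
# In the cluster, replace the printf with: curl -s localhost:9400/metrics
printf 'jvm_memory_used_bytes{area="heap"} 2.14560768E8\njvm_memory_used_bytes{area="nonheap"} 6.4E7\n' \
  | awk '/area="heap"/ { print $2 }'
# prints 2.14560768E8
```

The same awk filter works on the live curl output once the pod is running.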
Step 2: Add Kubernetes Annotations
Now that our app has a window, we need to hang a sign on our Kubernetes Pod that says, “Hey Prometheus, metrics are here! Come look at port 9400.”
We do this by adding annotations to our Pod template in the Deployment YAML.
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "transformation.fullname" . }}
  labels:
    {{- include "transformation.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "transformation.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "transformation.selectorLabels" . | nindent 8 }}
        cluster_name: New-test-cluster-gravition
With the metrics exposed, we have to tell Prometheus where to find them. We do this by setting the pod annotations in values.yaml:
# Default values for testing.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1

image:
  repository: 123456789.dkr.ecr.ap-south-1.amazonaws.com/testing_service:226
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "9400"
  prometheus.io/path: "/metrics"
These annotations tell Prometheus: “This pod exposes metrics. Scrape them from port 9400 at /metrics.”
Step 3: Open the Gate (Kubernetes Service)
Kubernetes networking goes through Services, so we also need to expose the metrics port there. Example Service configuration:
ports:
  - name: http
    nodePort: 30744
    port: 8080
    protocol: TCP
    targetPort: http
  - name: metrics
    nodePort: 30632
    port: 9400
    protocol: TCP
    targetPort: 9400
selector:
  app.kubernetes.io/instance: jvm-new-testing
Apply the changes:
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
Your application is now properly exposing metrics inside the cluster.
Step 4: Configure Prometheus Scraping
Finally, we configure Prometheus to discover pods with the correct annotations. Example prometheus.yml:
scrape_configs:
  - job_name: 'jmx-exporter'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod
      - source_labels: [__meta_kubernetes_pod_container_name]
        action: replace
        target_label: container
    metric_relabel_configs:
      - source_labels: [pod]
        regex: "jvm-new-testing.*"
        action: keep
Verify in Prometheus:
- Go to http://localhost:9090/targets → You should see your Java Pod listed as a target with State UP.
- Search for jvm_memory_used_bytes in the Graph tab
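GC behaviour is easier to read as a rate than as a raw counter. Assuming the GC metrics from the config in Step 1 come out as jvm_gc_collectioncount and jvm_gc_collectiontime (lowercased by lowercaseOutputName), these queries show collections per second and GC time per second:

```
rate(jvm_gc_collectioncount[5m])
rate(jvm_gc_collectiontime[5m])
```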
Visualizing in Grafana
To calculate Heap Usage Percentage:
100 * (jvm_memory_used_bytes{area="heap"} / jvm_memory_max_bytes{area="heap"})
Steps:
- Open Grafana → http://<grafana-ip>:3000
- Click “+” → Dashboard → Add New Panel
- Select Prometheus as a data source
- Paste the above query
Now you have real-time heap usage visualization.
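Under the hood this panel is simple arithmetic: 100 × used / max. A quick offline sanity check with illustrative numbers (a 400 MB max heap with roughly 205 MB used):

```shell
# Illustrative values only: used/max heap bytes as scraped from the exporter.
used=214560768
max=419430400
awk -v u="$used" -v m="$max" 'BEGIN { printf "%.1f%%\n", 100 * u / m }'
# prints 51.2%
```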
Conclusion
With this setup in place, you now have full visibility into your Java application’s JVM metrics. Rather than wondering whether memory usage is safe, you can see:
- Current heap usage
- Maximum configured heap
- GC frequency and duration
This setup can be extended further:
- Add alerts for high heap usage
- Watch thread counts and CPU metrics
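As a sketch of the first extension, a Prometheus alerting rule on heap usage might look like this; the 85% threshold, group name, and labels are illustrative choices, and the expression is the same one used for the Grafana panel:

```yaml
groups:
  - name: jvm-alerts
    rules:
      - alert: HighHeapUsage
        expr: 100 * (jvm_memory_used_bytes{area="heap"} / jvm_memory_max_bytes{area="heap"}) > 85
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "JVM heap usage above 85% on {{ $labels.pod }}"
```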
