{"id":77927,"date":"2026-03-10T11:33:47","date_gmt":"2026-03-10T06:03:47","guid":{"rendered":"https:\/\/www.tothenew.com\/blog\/?p=77927"},"modified":"2026-03-16T09:51:57","modified_gmt":"2026-03-16T04:21:57","slug":"how-to-make-your-java-app-monitoring-with-jmx-exporter-prometheus","status":"publish","type":"post","link":"https:\/\/www.tothenew.com\/blog\/how-to-make-your-java-app-monitoring-with-jmx-exporter-prometheus\/","title":{"rendered":"How to Make Your Java App Monitoring with JMX Exporter &#038; Prometheus"},"content":{"rendered":"<h2>Introduction<\/h2>\n<p>If you have a Java application running in Kubernetes, sooner or later you will want to know what\u2019s really going on inside the JVM. And, is heap memory close to exhaustion? Is the garbage collection process busy? Are we slowly moving towards an OOM error? Without oversight, you\u2019re essentially flying blind.<br \/>\nIn this guide, we go through how you can expose JVM metrics through the use of JMX Exporter, collect JVM metrics using Prometheus and visualize JVM metrics in Grafana. This configuration provides you real visibility into your memory usage, GC activity, and JVM health more broadly.<br \/>\nIn simple terms:<\/p>\n<p>JMX Exporter allows you to show JVM metrics in a format that Prometheus understands. Prometheus collects these metrics at regular intervals. Grafana helps us visualize and analyze them. This will help us go step-by-step and get the necessary setup set up. Understanding the Flow. And in reality:<\/p>\n<p>Your Java application will reside within a container. Next is JMX Exporter agent which runs alongside it and exposes JVM metrics on a port (for example, 9400). Prometheus scrapes that metrics endpoint. Grafana takes Prometheus\u2019 data and performs the data ingestion and updates of Prometheus data into dashboards. 
When properly configured, you will be able to see heap usage, max memory, GC counts and other metrics in real time.<\/p>\n<h2>The Big Picture<\/h2>\n<p>Think of the monitoring flow like this:<\/p>\n<ul>\n<li>Your <strong>Java App<\/strong> is a sealed box with internal gauges (JMX metrics).<\/li>\n<li><strong>JMX Exporter<\/strong> is the window that exposes those gauges.<\/li>\n<li><strong>Prometheus<\/strong> periodically collects the readings.<\/li>\n<li><strong>Grafana<\/strong> visualizes the data into dashboards.<\/li>\n<\/ul>\n<p>Let\u2019s build this step-by-step.<\/p>\n<h3>Step 1: Let\u2019s expose JVM metrics using JMX Exporter<\/h3>\n<p>The first thing is to specify which JVM metrics we want to expose. For this, we set up a <strong>config.yml<\/strong> file. This file contains the rules that map JMX MBean attributes to Prometheus metric names and labels.<\/p>\n<pre><code># config.yml\r\nlowercaseOutputName: true # Makes all metric names lowercase (recommended)\r\nrules:\r\n  - pattern: 'java.lang&lt;type=Memory&gt;&lt;&gt;(HeapMemoryUsage|NonHeapMemoryUsage).used'\r\n    name: jvm_memory_used_bytes\r\n    labels:\r\n      area: \"$1\" \r\n  - pattern: 'java.lang&lt;type=Memory&gt;&lt;&gt;(HeapMemoryUsage|NonHeapMemoryUsage).max'\r\n    name: jvm_memory_max_bytes\r\n    labels:\r\n      area: \"$1\"\r\n  - pattern: 'java.lang&lt;type=Memory&gt;&lt;&gt;(HeapMemoryUsage|NonHeapMemoryUsage).committed'\r\n    name: jvm_memory_committed_bytes\r\n    labels:\r\n      area: \"$1\"\r\n  - pattern: 'java.lang&lt;name=([^&gt;]+)&gt;&lt;type=GarbageCollector&gt;&lt;&gt;(CollectionTime|CollectionCount)'\r\n    name: jvm_gc_$2 # $2 is the second capture group: CollectionTime or CollectionCount\r\n    labels:\r\n      gc: \"$1\" \r\n <\/code><\/pre>\n<p>This configuration exposes:<\/p>\n<ul>\n<li>Heap and non-heap memory usage.<\/li>\n<li>Maximum memory.<\/li>\n<li>Committed memory.<\/li>\n<li>Count and time of garbage collection.<\/li>\n<\/ul>\n<h4>Add JMX Exporter to Docker Image<\/h4>\n<p>Add JMX 
Exporter to the Docker image: we bundle the JMX Exporter agent into our Docker image and run it alongside the application. Here\u2019s a multi-stage Dockerfile:<\/p>\n<pre><code>FROM arm64v8\/maven:3.6.3-jdk-8-openj9 AS build-env\r\nUSER root\r\nRUN mkdir \/opt\/app\r\nADD .\/  \/opt\/app\/\r\nWORKDIR \/opt\/app\/\r\nRUN mvn clean install -Dmaven.test.skip=true\r\n############################################################\r\nFROM arm64v8\/openjdk:8-jdk-alpine\r\nRUN apk --update add fontconfig ttf-dejavu\r\nRUN mkdir -p \/opt\/jars\r\nCOPY --from=build-env  \/opt\/app\/target\/*.jar \/opt\/jars\/latest.jar\r\nCOPY .\/entrypoint.sh entrypoint.sh\r\nCOPY .\/jmx_prometheus_javaagent-1.4.0.jar \/opt\/jmx-exporter\/\r\nCOPY .\/config.yml \/opt\/jmx-exporter\/ \r\nADD .\/ca-bundle.crt \/etc\/ssl\/certs\/ca-certificates.crt\r\nRUN chmod +x entrypoint.sh\r\nEXPOSE 8080 9400\r\nENTRYPOINT [\"sh\",\"\/entrypoint.sh\"]\r\n <\/code><\/pre>\n<h4>Entrypoint Configuration<\/h4>\n<pre><code># entrypoint.sh\r\n# exec replaces the shell so the JVM runs as PID 1 and receives signals\r\nexec java \\\r\n  -javaagent:\/opt\/jmx-exporter\/jmx_prometheus_javaagent-1.4.0.jar=9400:\/opt\/jmx-exporter\/config.yml \\\r\n  -Dserver.port=8080 \\\r\n  -Xmx400m \\\r\n  -XX:MaxGCPauseMillis=200 \\\r\n  -XX:-UseParallelOldGC \\\r\n  -XX:+UseGCOverheadLimit \\\r\n  -Dfile.encoding=UTF-8 \\\r\n  -Dapp.name=test-service \\\r\n  -Dspring.profiles.active=${env} \\\r\n  -jar \/opt\/jars\/latest.jar 2&gt;&amp;1\r\n <\/code><\/pre>\n<p>Download the agent from:<br \/>\nhttps:\/\/github.com\/prometheus\/jmx_exporter\/releases<\/p>\n<p>Build and push the updated image.<\/p>\n<p>The crux of this is the -javaagent flag. It instructs the JVM to load the JMX Exporter agent and expose metrics on port 9400. 
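<\/p>\n<p>As an aside, instead of copying a locally downloaded jar into the build context, the image build itself can fetch the agent. The snippet below is only a sketch; the exact download URL and version tag are assumptions and should be checked against the releases page:<\/p>\n<pre><code># Hypothetical alternative: let Docker fetch the agent at build time\r\n# (verify the URL and tag on the jmx_exporter releases page first)\r\nADD https:\/\/github.com\/prometheus\/jmx_exporter\/releases\/download\/1.4.0\/jmx_prometheus_javaagent-1.4.0.jar \/opt\/jmx-exporter\/jmx_prometheus_javaagent-1.4.0.jar\r\n<\/code><\/pre>\n<p>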
Once you\u2019ve built and pushed the image, verify from inside the container:<\/p>\n<pre><code>curl localhost:9400\/metrics<\/code><\/pre>\n<p>If you see a list of metrics, the exporter is working.<\/p>\n<h3>Step 2: Add Kubernetes Annotations<\/h3>\n<p>Now that our app has a window, we need to hang a sign on our Kubernetes Pod that says, &#8220;Hey Prometheus, metrics are here! Come look at port 9400.&#8221;<br \/>\nWe do this by adding annotations to our Pod template in the Deployment YAML.<br \/>\n<strong>Deployment.yaml<\/strong>:<\/p>\n<pre><code>apiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n name: {{ include \"transformation.fullname\" . }}\r\n labels:\r\n   {{- include \"transformation.labels\" . | nindent 4 }}   \r\nspec:\r\n {{- if not .Values.autoscaling.enabled }}\r\n replicas: {{ .Values.replicaCount }}\r\n {{- end }}\r\n selector:\r\n   matchLabels:\r\n     {{- include \"transformation.selectorLabels\" . | nindent 6 }}\r\n template:\r\n   metadata:\r\n     {{- with .Values.podAnnotations }}\r\n     annotations:\r\n       {{- toYaml . | nindent 8 }}\r\n     {{- end }}\r\n     labels:\r\n       {{- include \"transformation.selectorLabels\" . | nindent 8 }}\r\n       cluster_name: New-test-cluster-gravition\r\n <\/code><\/pre>\n<p>With the metrics exposed, we have to tell Prometheus where to scrape them. We do this through the scrape annotations on the Pod template. 
In <strong>values.yaml<\/strong>:<\/p>\n<pre><code># Default values for testing.\r\n# This is a YAML-formatted file.\r\n# Declare variables to be passed into your templates.\r\nreplicaCount: 1\r\nimage:\r\n repository: 123456789.dkr.ecr.ap-south-1.amazonaws.com\/testing_service:226\r\n pullPolicy: IfNotPresent\r\n # Overrides the image tag whose default is the chart appVersion.\r\nimagePullSecrets: []\r\nnameOverride: \"\"\r\nfullnameOverride: \"\"\r\nserviceAccount:\r\n # Specifies whether a service account should be created\r\n create: false\r\n # Annotations to add to the service account\r\n annotations: {}\r\n # The name of the service account to use.\r\n # If not set and create is true, a name is generated using the fullname template\r\n name: \"\"\r\npodAnnotations:\r\n prometheus.io\/scrape: \"true\"\r\n prometheus.io\/port: \"9400\" \r\n prometheus.io\/path: \"\/metrics\"\r\n <\/code><\/pre>\n<p>These annotations tell Prometheus: \u201cThis pod exposes metrics. Scrape them from port 9400 at \/metrics.\u201d<\/p>\n<h3>Step 3: Open the Gate (Kubernetes Service)<\/h3>\n<p>Kubernetes networking goes through Services, so the metrics port has to be exposed there as well. Example Service configuration:<\/p>\n<pre><code>ports:\r\n  - name: http\r\n    nodePort: 30744\r\n    port: 8080\r\n    protocol: TCP\r\n    targetPort: http\r\n  - name: metrics\r\n    nodePort: 30632\r\n    port: 9400\r\n    protocol: TCP\r\n    targetPort: 9400 # the JMX Exporter port inside the container\r\nselector:\r\n  app.kubernetes.io\/instance: jvm-new-testing\r\n <\/code><\/pre>\n<p>Apply the changes:<\/p>\n<pre><code>kubectl apply -f deployment.yaml \r\nkubectl apply -f service.yaml<\/code><\/pre>\n<p>Your application is now properly exposing metrics inside the cluster.<\/p>\n<h3>Step 4: Configure Prometheus Scraping<\/h3>\n<p>Finally, we configure Prometheus to discover pods with the correct annotations. 
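<\/p>\n<p>As an aside: if your Prometheus server runs outside the cluster and cannot use pod discovery, a static scrape job pointed at the Service\u2019s metrics NodePort is one alternative. This is only a sketch; the node address is a placeholder you must fill in:<\/p>\n<pre><code># Hypothetical static alternative to annotation-based pod discovery\r\nscrape_configs:\r\n  - job_name: 'jmx-exporter-static'\r\n    metrics_path: \/metrics\r\n    static_configs:\r\n      - targets: ['&lt;node-ip&gt;:30632'] # NodePort of the metrics port from the Service above\r\n<\/code><\/pre>\n<p>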
Example Prometheus configuration:<\/p>\n<pre><code>scrape_configs:\r\n    - job_name: 'jmx-exporter'\r\n      kubernetes_sd_configs:\r\n        - role: pod\r\n      relabel_configs:\r\n        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]\r\n          action: keep\r\n          regex: true\r\n        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]\r\n          action: replace\r\n          regex: ([^:]+)(?::\\d+)?;(\\d+)\r\n          replacement: $1:$2\r\n          target_label: __address__\r\n        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]\r\n          action: replace\r\n          target_label: __metrics_path__\r\n          regex: (.+)\r\n        - source_labels: [__meta_kubernetes_namespace]\r\n          action: replace\r\n          target_label: namespace\r\n        - source_labels: [__meta_kubernetes_pod_name]\r\n          action: replace\r\n          target_label: pod\r\n        - source_labels: [__meta_kubernetes_pod_container_name]\r\n          action: replace\r\n          target_label: container\r\n      metric_relabel_configs:\r\n        - source_labels: [pod]\r\n          regex: \"jvm-new-testing.*\"\r\n          action: keep <\/code><\/pre>\n<p>Verify in Prometheus:<\/p>\n<ul>\n<li>Go to <strong>http:\/\/localhost:9090\/targets<\/strong> \u2192 you should see your Java Pod listed as a target with State UP.<\/li>\n<li>Search for <strong>jvm_memory_used_bytes<\/strong> in the Graph tab.<\/li>\n<\/ul>\n<h3>Visualizing in Grafana<\/h3>\n<p>To calculate heap usage percentage:<\/p>\n<pre><code>100 * (jvm_memory_used_bytes{area=\"heap\"} \/ jvm_memory_max_bytes{area=\"heap\"})<\/code><\/pre>\n<p>Steps:<\/p>\n<ol>\n<li>Open Grafana \u2192 http:\/\/&lt;grafana-ip&gt;:3000<\/li>\n<li>Click &#8220;+&#8221; \u2192 Dashboard \u2192 Add New Panel<\/li>\n<li>Select Prometheus as the data source<\/li>\n<li>Paste the above 
query\n<div id=\"attachment_77926\" style=\"width: 652px\" class=\"wp-caption alignnone\"><img aria-describedby=\"caption-attachment-77926\" decoding=\"async\" loading=\"lazy\" class=\" wp-image-77926\" src=\"https:\/\/www.tothenew.com\/blog\/wp-ttn-blog\/uploads\/2026\/02\/Screenshot-2026-02-23-at-5.52.47\u202fPM-300x100.png\" alt=\"visualization\" width=\"642\" height=\"214\" srcset=\"\/blog\/wp-ttn-blog\/uploads\/2026\/02\/Screenshot-2026-02-23-at-5.52.47\u202fPM-300x100.png 300w, \/blog\/wp-ttn-blog\/uploads\/2026\/02\/Screenshot-2026-02-23-at-5.52.47\u202fPM-1024x341.png 1024w, \/blog\/wp-ttn-blog\/uploads\/2026\/02\/Screenshot-2026-02-23-at-5.52.47\u202fPM-768x256.png 768w, \/blog\/wp-ttn-blog\/uploads\/2026\/02\/Screenshot-2026-02-23-at-5.52.47\u202fPM-624x208.png 624w, \/blog\/wp-ttn-blog\/uploads\/2026\/02\/Screenshot-2026-02-23-at-5.52.47\u202fPM.png 1374w\" sizes=\"(max-width: 642px) 100vw, 642px\" \/><p id=\"caption-attachment-77926\" class=\"wp-caption-text\">visualization<\/p><\/div><\/li>\n<\/ol>\n<p>Now you have real-time heap usage visualization.<\/p>\n<h2>Conclusion<\/h2>\n<p>You now have full visibility into your Java application&#8217;s JVM metrics. Rather than wondering whether memory usage is safe, you can see:<\/p>\n<ul>\n<li>Current heap usage<\/li>\n<li>Maximum configured heap<\/li>\n<li>GC frequency and duration<\/li>\n<\/ul>\n<p><strong>This setup can be extended further:<\/strong><\/p>\n<ul>\n<li>Add alerts for high heap usage.<\/li>\n<li>Watch thread counts and CPU metrics.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Introduction If you have a Java application running in Kubernetes, sooner or later you will want to know what\u2019s really going on inside the JVM. Is heap memory close to exhaustion? Is the garbage collection process busy? Are we slowly moving towards an OOM error? Without oversight, you\u2019re essentially flying blind. 
In this guide, [&hellip;]<\/p>\n","protected":false},"author":1529,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"iawp_total_views":6},"categories":[2348],"tags":[1892,3836,1499],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/77927"}],"collection":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/users\/1529"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/comments?post=77927"}],"version-history":[{"count":6,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/77927\/revisions"}],"predecessor-version":[{"id":78501,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/77927\/revisions\/78501"}],"wp:attachment":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/media?parent=77927"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/categories?post=77927"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/tags?post=77927"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}