Monitoring Kafka Metrics with ClickStack
This guide shows you how to monitor Apache Kafka performance metrics with ClickStack by using the OpenTelemetry JMX Metric Gatherer. You'll learn how to:
- Enable JMX on Kafka brokers and configure the JMX Metric Gatherer
- Send Kafka metrics to ClickStack via OTLP
- Use a pre-built dashboard to visualize Kafka performance (broker throughput, consumer lag, partition health, request latency)
A demo dataset with sample metrics is available if you want to test the integration before configuring your production Kafka cluster.
Time required: 10-15 minutes
Integration with existing Kafka
Monitor your existing Kafka deployment by running the OpenTelemetry JMX Metric Gatherer container to collect metrics and send them to ClickStack via OTLP.
If you want to test this integration first without modifying your existing setup, skip to the demo dataset section.
Prerequisites
- ClickStack instance running
- Existing Kafka installation (version 2.0 or newer) with JMX enabled
- Network access between ClickStack and Kafka (JMX port 9999, Kafka port 9092)
- OpenTelemetry JMX Metric Gatherer JAR (download instructions below)
Get ClickStack API key
The JMX Metric Gatherer sends data to ClickStack's OTLP endpoint, which requires authentication.
- Open HyperDX at your ClickStack URL (e.g., http://localhost:8080)
- Create an account or log in if needed
- Navigate to Team Settings → API Keys
- Copy your Ingestion API Key
- Set it as an environment variable:
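A minimal sketch, assuming a POSIX shell (substitute the key you copied from HyperDX for the placeholder value):

```shell
# Replace the placeholder with the Ingestion API Key copied from HyperDX
export CLICKSTACK_API_KEY="your-ingestion-api-key"
```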
Verify Kafka JMX is enabled
Ensure JMX is enabled on your Kafka brokers. For Docker deployments:
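For example, assuming the `confluentinc/cp-kafka` image (other images, such as Bitnami's, use different variable names), the broker service needs environment variables like:

```yaml
environment:
  KAFKA_JMX_PORT: 9999
  # Must resolve to the broker from the JMX gatherer container
  KAFKA_JMX_HOSTNAME: kafka
```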
For non-Docker deployments, set these in your Kafka startup:
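A sketch of the environment variables Kafka's startup scripts (`kafka-server-start.sh`) read before launching the broker; `<broker-hostname>` is a placeholder for your broker's resolvable hostname:

```shell
# JMX_PORT and KAFKA_JMX_OPTS are read by Kafka's run scripts at startup
export JMX_PORT=9999
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.rmi.server.hostname=<broker-hostname>"
```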
Verify JMX is accessible:
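One way to check, from any host that can reach the broker (assumes `nc` is available; substitute your broker hostname):

```shell
# nc exits 0 if the JMX port accepts TCP connections
nc -zv <broker-host> 9999
```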
Deploy JMX Metric Gatherer with Docker Compose
This example shows a complete setup with Kafka, the JMX Metric Gatherer, and ClickStack. Adjust service names and endpoints to match your existing deployment:
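A minimal sketch of such a Compose file. The image names and versions, the service names (`kafka`, `jmx-metrics`, `clickstack`), and the JAR path are assumptions; the `otel.jmx.*` and `otel.exporter.otlp.*` system properties are the JMX Metric Gatherer's standard configuration:

```yaml
services:
  kafka:
    image: confluentinc/cp-kafka:7.6.0
    environment:
      KAFKA_JMX_PORT: 9999
      KAFKA_JMX_HOSTNAME: kafka
      # ...the rest of your existing broker configuration...
    ports:
      - "9092:9092"

  jmx-metrics:
    image: eclipse-temurin:17-jre
    depends_on:
      - kafka
    volumes:
      # Mount the JMX Metric Gatherer JAR downloaded earlier
      - ./opentelemetry-jmx-metrics.jar:/opt/opentelemetry-jmx-metrics.jar:ro
    command: >
      java
      -Dotel.jmx.service.url=service:jmx:rmi:///jndi/rmi://kafka:9999/jmxrmi
      -Dotel.jmx.target.system=kafka
      -Dotel.jmx.interval.milliseconds=10000
      -Dotel.metrics.exporter=otlp
      -Dotel.exporter.otlp.protocol=http/protobuf
      -Dotel.exporter.otlp.endpoint=http://clickstack:4318
      -Dotel.exporter.otlp.headers=authorization=${CLICKSTACK_API_KEY}
      -Dotel.resource.attributes=service.name=kafka,kafka.broker.id=broker-0
      -jar /opt/opentelemetry-jmx-metrics.jar
```

`${CLICKSTACK_API_KEY}` is interpolated by Docker Compose from the host environment variable you exported earlier.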
Key configuration parameters:
- `service:jmx:rmi:///jndi/rmi://kafka:9999/jmxrmi` - JMX connection URL (use your Kafka hostname)
- `otel.jmx.target.system=kafka` - Enables Kafka-specific metrics
- `http://clickstack:4318` - OTLP HTTP endpoint (use your ClickStack hostname)
- `authorization=${CLICKSTACK_API_KEY}` - API key for authentication (required)
- `service.name=kafka,kafka.broker.id=broker-0` - Resource attributes for filtering
- `10000` - Collection interval in milliseconds (10 seconds)
Verify metrics in HyperDX
Log into HyperDX and confirm metrics are flowing:
- Navigate to the Chart Explorer
- Search for `kafka.message.count` or `kafka.partition.count`
- Metrics should appear at 10-second intervals
Key metrics to verify:
- `kafka.message.count` - Total messages processed
- `kafka.partition.count` - Total partitions
- `kafka.partition.under_replicated` - Should be 0 in a healthy cluster
- `kafka.network.io` - Network throughput
- `kafka.request.time.*` - Request latency percentiles
To generate activity and populate more metrics:
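A sketch, assuming a broker container named `kafka` with the Confluent CLI tools on its image (topic name `demo-topic` is illustrative):

```shell
docker exec kafka bash -c '
  unset JMX_PORT
  kafka-topics --bootstrap-server localhost:9092 \
    --create --if-not-exists --topic demo-topic --partitions 3 --replication-factor 1
  # Produce 1000 small messages to move the throughput metrics
  seq 1 1000 | sed "s/^/message-/" | \
    kafka-console-producer --bootstrap-server localhost:9092 --topic demo-topic
'
```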
When running Kafka client commands (`kafka-topics`, `kafka-console-producer`, etc.) from within the Kafka container, prefix them with `unset JMX_PORT &&` to prevent JMX port conflicts.
Demo dataset
For users who want to test the Kafka Metrics integration before configuring their production systems, we provide a pre-generated dataset with realistic Kafka metrics patterns.
Download the sample metrics dataset
Download the pre-generated metrics files (29 hours of Kafka metrics with realistic patterns):
The dataset includes realistic patterns for a single-broker e-commerce Kafka cluster:
- 06:00-08:00: Morning surge - Sharp traffic ramp from overnight baseline
- 10:00-10:15: Flash sale - Dramatic spike to 3.5x normal traffic
- 11:30: Deployment event - 12x consumer lag spike with under-replicated partitions
- 14:00-15:30: Peak shopping - Sustained high traffic at 2.8x baseline
- 17:00-17:30: After-work surge - Secondary traffic peak
- 18:45: Consumer rebalance - 6x lag spike during rebalancing
- 20:00-22:00: Evening drop - Steep decline to overnight levels
Verify metrics in HyperDX
Once loaded, the quickest way to see your metrics is through the pre-built dashboard.
Proceed to the Dashboards and visualization section to import the dashboard and view all Kafka metrics at once.
The demo dataset time range is 2025-11-05 16:00:00 to 2025-11-06 16:00:00. Make sure your time range in HyperDX matches this window.
Dashboards and visualization
To help you get started monitoring Kafka with ClickStack, we provide essential visualizations for Kafka metrics.
Import the pre-built dashboard
- Open HyperDX and navigate to the Dashboards section
- Click Import Dashboard in the upper right corner under the ellipses
- Upload the `kafka-metrics-dashboard.json` file and click Finish Import
View the dashboard
The dashboard is created with all visualizations pre-configured.
For the demo dataset, ensure the time range is set to 2025-11-05 16:00:00 to 2025-11-06 16:00:00.
Troubleshooting
No metrics appearing in HyperDX
Verify API key is set and passed to the container:
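For example, assuming the gatherer container is named `jmx-metrics`:

```shell
# Should print your key; empty output means it was not passed through
docker exec jmx-metrics printenv CLICKSTACK_API_KEY
```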
If missing, set it and restart:
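A sketch, assuming the service is named `jmx-metrics` in your Compose file:

```shell
export CLICKSTACK_API_KEY="your-ingestion-api-key"
docker compose up -d --force-recreate jmx-metrics
```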
Check if metrics are reaching ClickHouse:
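One way to check, assuming a ClickStack container named `clickstack` and ClickStack's default OpenTelemetry table names (counters land in `otel_metrics_sum`; gauges in `otel_metrics_gauge`):

```shell
docker exec clickstack clickhouse-client --query \
  "SELECT MetricName, count() FROM otel_metrics_sum
   WHERE MetricName LIKE 'kafka.%' GROUP BY MetricName"
```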
If no results, check the JMX exporter logs:
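Assuming the container name `jmx-metrics`:

```shell
# Look for OTLP export errors or JMX connection failures
docker logs jmx-metrics --tail 50
```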
Generate Kafka activity to populate metrics:
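For example, assuming a broker container named `kafka` and an existing topic `demo-topic`:

```shell
docker exec kafka bash -c 'unset JMX_PORT && \
  seq 1 100 | kafka-console-producer --bootstrap-server localhost:9092 --topic demo-topic'
```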
Authentication errors
If you see `Authorization failed` or `401 Unauthorized`:
- Verify the API key in HyperDX UI (Settings → API Keys → Ingestion API Key)
- Re-export and restart:
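Assuming the gatherer service is named `jmx-metrics`:

```shell
export CLICKSTACK_API_KEY="correct-ingestion-api-key"
docker compose up -d --force-recreate jmx-metrics
```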
Port conflicts with Kafka client commands
When running Kafka commands from within the Kafka container, you may see:
Prefix commands with `unset JMX_PORT &&`:
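For example, assuming a broker container named `kafka`:

```shell
docker exec kafka bash -c 'unset JMX_PORT && \
  kafka-topics --bootstrap-server localhost:9092 --list'
```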
Network connectivity issues
If the JMX exporter logs show `Connection refused`:
Verify all containers are on the same Docker network:
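One way to check, assuming containers named `kafka`, `jmx-metrics`, and `clickstack`; all three should print the same network name:

```shell
docker inspect -f '{{range $name, $_ := .NetworkSettings.Networks}}{{$name}} {{end}}' \
  kafka jmx-metrics clickstack
```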
Test connectivity:
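A sketch using bash's `/dev/tcp` redirection (avoids depending on `nc` being installed in the image); service names are assumptions from the Compose example:

```shell
docker exec jmx-metrics bash -c \
  'timeout 3 bash -c "</dev/tcp/kafka/9999" && echo "JMX reachable"; \
   timeout 3 bash -c "</dev/tcp/clickstack/4318" && echo "OTLP reachable"'
```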
Going to production
This guide sends metrics directly from the JMX Metric Gatherer to ClickStack's OTLP endpoint, which works well for testing and small deployments.
For production environments, deploy your own OpenTelemetry Collector as an agent to receive metrics from the JMX Exporter and forward them to ClickStack. This provides batching, resilience, and centralized configuration management.
See Ingesting with OpenTelemetry for production deployment patterns and collector configuration examples.