Scrape targets (ServiceMonitor / PodMonitor)¶
Prometheus scrapes targets that the Operator generates from
ServiceMonitor, PodMonitor, Probe, and ScrapeConfig Custom Resources.
Create one of these resources and Prometheus starts scraping it; no
`prometheus.yml` editing required.
This page focuses on the two most common cases: scraping an in-cluster service, and scraping pods directly.
ServiceMonitor: scraping a Kubernetes Service¶
Use this when your application exposes a metrics endpoint via a Service.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: my-app
  labels:
    # Must match prometheus.prometheusSpec.serviceMonitorSelector.
    # See "Selecting custom rules and ServiceMonitors" in the Configuration page.
    app.kubernetes.io/instance: prometheus-stack
    app.kubernetes.io/part-of: prometheus-stack
spec:
  # selector + namespaceSelector pick the Service(s) to scrape
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
  namespaceSelector:
    matchNames: [ "my-app" ]
  endpoints:
    - port: metrics        # name from the Service's ports[*].name
      path: /metrics
      interval: 30s
      scrapeTimeout: 10s
      scheme: http         # or https + tlsConfig
      honorLabels: false   # false: on collision, scraped labels are renamed to exported_*
      relabelings:
        # Drop noisy labels Prometheus auto-attaches
        - action: labeldrop
          regex: pod_template_(hash|generation)
      metricRelabelings:
        # Drop a high-cardinality metric you do not need
        - action: drop
          sourceLabels: [ __name__ ]
          regex: my_app_internal_debug_.*
```
Prerequisites:

- Your app exposes Prometheus-format metrics on an HTTP endpoint
  (`/metrics` by convention).
- The endpoint is reachable through a `Service` (any type works for
  in-cluster scraping; headless services are fine too).
- The `Service` has a named port (e.g. `name: metrics`). The
  `ServiceMonitor` references the port by name, not number, so changing the
  port number in the Service won't break the scrape.
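For reference, here is a hypothetical Service that the `ServiceMonitor` above would select (the names, namespace, and port number are illustrative, not part of the chart):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: my-app
  labels:
    app.kubernetes.io/name: my-app   # matched by the ServiceMonitor's spec.selector
spec:
  selector:
    app.kubernetes.io/name: my-app
  ports:
    - name: metrics        # referenced by endpoints[].port in the ServiceMonitor
      port: 8080
      targetPort: 8080
```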
PodMonitor: scraping pods directly¶
Use this when there is no Service (e.g. operators, sidecars), or when you
want to scrape per-pod IPs.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: my-operator
  namespace: my-operator
  labels:
    app.kubernetes.io/instance: prometheus-stack
    app.kubernetes.io/part-of: prometheus-stack
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: my-operator
  namespaceSelector:
    matchNames: [ "my-operator" ]
  podMetricsEndpoints:
    - port: http-metrics
      path: /metrics
      interval: 30s
```
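A `PodMonitor` matches pod labels and references a named *container* port. A hypothetical Deployment that the example above would select (image and port number are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-operator
  namespace: my-operator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: my-operator
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-operator   # matched by the PodMonitor's spec.selector
    spec:
      containers:
        - name: manager
          image: example.com/my-operator:latest
          ports:
            - name: http-metrics   # referenced by podMetricsEndpoints[].port
              containerPort: 8080
```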
Probe: synthetic black-box checks¶
Use the Probe resource together with a prometheus-blackbox-exporter
deployment to do HTTP/TCP/ICMP probes against external URLs:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: Probe
metadata:
  name: external-endpoints
  namespace: monitoring
  labels:
    app.kubernetes.io/instance: prometheus-stack
spec:
  jobName: blackbox-http
  interval: 60s
  module: http_2xx
  prober:
    url: blackbox-exporter.monitoring.svc:9115
  targets:
    staticConfig:
      static:
        - https://my-app.example.com/health
        - https://api.example.com/status
```
You need to install prometheus-community/prometheus-blackbox-exporter
separately; it is not part of this chart.
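The `module` referenced by the Probe must exist in the blackbox exporter's configuration. Assuming you install the prometheus-blackbox-exporter chart, a values fragment along these lines defines the `http_2xx` module (the exact defaults may differ by chart version):

```yaml
# values.yaml for prometheus-blackbox-exporter (sketch)
config:
  modules:
    http_2xx:
      prober: http
      timeout: 10s
      http:
        valid_http_versions: ["HTTP/1.1", "HTTP/2.0"]
        valid_status_codes: []   # empty defaults to 2xx
```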
ScrapeConfig: arbitrary scrape jobs¶
When ServiceMonitor / PodMonitor don't fit (e.g. file_sd, AWS EC2
service-discovery, custom static configs), use the ScrapeConfig CR (added
in recent Operator versions). The schema mirrors a Prometheus scrape job:
```yaml
apiVersion: monitoring.coreos.com/v1alpha1
kind: ScrapeConfig
metadata:
  name: ec2-targets
  namespace: monitoring
  labels:
    app.kubernetes.io/instance: prometheus-stack
spec:
  ec2SDConfigs:
    - region: eu-west-1
      port: 9100
      filters:
        - name: tag:role
          values: [ "monitored" ]
```
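`ScrapeConfig` also covers plain static targets outside the cluster. A sketch (hostnames are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1alpha1
kind: ScrapeConfig
metadata:
  name: static-targets
  namespace: monitoring
  labels:
    app.kubernetes.io/instance: prometheus-stack
spec:
  staticConfigs:
    - targets:
        - node1.example.com:9100
        - node2.example.com:9100
      labels:
        env: prod
```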
Verifying a scrape target¶
After applying:
- Open the Prometheus UI under Status » Targets; the new target should appear within ~30s.
- If it does not appear at all, the Operator did not include it. Check for a
  selector mismatch:

```shell
kubectl -n monitoring get prometheus -o yaml | grep -A2 serviceMonitorSelector
kubectl -n my-app get servicemonitor my-app -o yaml | grep -A5 labels
```

- If it appears as `DOWN`, click on it; the error message tells you whether it is a network reachability, TLS, auth, or 404 problem.
Common patterns¶
Scrape across all namespaces with any labels¶
Set selectors to empty in chart values so the operator-managed Prometheus picks up everything:
```yaml
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelector: {}
    serviceMonitorNamespaceSelector: {}
    podMonitorSelectorNilUsesHelmValues: false
    podMonitorSelector: {}
    podMonitorNamespaceSelector: {}
```
Scrape HTTPS metrics with the Pod's ServiceAccount token¶
```yaml
endpoints:
  - port: https-metrics
    scheme: https
    path: /metrics
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    tlsConfig:
      insecureSkipVerify: true
```
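`insecureSkipVerify: true` disables certificate validation. If the target presents a certificate signed by the cluster CA (as kubelet-style endpoints typically do), a stricter variant can verify it instead; the `serverName` below is an assumption you would replace with the certificate's actual SAN:

```yaml
endpoints:
  - port: https-metrics
    scheme: https
    path: /metrics
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    tlsConfig:
      caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      serverName: my-app.my-app.svc   # must match the cert's SAN
```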
Drop pods that are not ready¶
```yaml
endpoints:
  - port: metrics
    relabelings:
      - sourceLabels: [ __meta_kubernetes_pod_ready ]
        action: keep
        regex: "true"
```
Reduce cardinality¶
```yaml
metricRelabelings:
  - action: drop
    sourceLabels: [ __name__ ]
    regex: my_app_(go_gc_duration_seconds|go_memstats_.*)
```
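The inverse approach is often safer: keep only an allowlist of metrics and drop everything else. A sketch with hypothetical metric names:

```yaml
metricRelabelings:
  # keep only the metrics you actually dashboard/alert on; drop the rest
  - action: keep
    sourceLabels: [ __name__ ]
    regex: my_app_(http_requests_total|request_duration_seconds.*)
```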
Pitfalls¶
| Symptom | Likely cause |
|---|---|
| Target never appears | Selector mismatch (labels on the `ServiceMonitor` vs. `prometheus.prometheusSpec.serviceMonitorSelector`). |
| Target appears but `DOWN` | Wrong port name, wrong path, or a NetworkPolicy blocking traffic from the namespace where Prometheus runs. |
| Target appears but no metrics in Prometheus | `metricRelabelings` is dropping everything; check the regex. |
| Cardinality explosion after deploy | Your service emits a high-cardinality label (user IDs, request IDs). Drop it via `metricRelabelings`. |