Accessing services¶
This page covers reaching the three primary UIs — Prometheus, Alertmanager, and Grafana — both for ad-hoc operator access and for day-to-day team use.
The exact Service names depend on your `nameOverride`. The OVES copy of the
chart sets `nameOverride: "monitoring"`, so the examples below assume the
chart was installed with `helm install prometheus-stack . -n monitoring` and
the resulting Services are named accordingly. Adjust if your install
differs (`kubectl get svc -n monitoring` shows the truth).
Find the Service names¶
kubectl -n monitoring get svc \
-l app.kubernetes.io/instance=prometheus-stack \
-o custom-columns=NAME:.metadata.name,PORT:.spec.ports[*].port
You should see entries similar to:
| Component | Service (typical) | Port |
|---|---|---|
| Prometheus | `prometheus-stack-kube-prom-prometheus` | 9090 |
| Alertmanager | `prometheus-stack-kube-prom-alertmanager` | 9093 |
| Grafana | `prometheus-stack-grafana` | 80 (proxied to 3000) |
| Operator | `prometheus-stack-kube-prom-operator` | 443 |
Quick: port-forward (operator-only access)¶
# Prometheus
kubectl -n monitoring port-forward svc/prometheus-stack-kube-prom-prometheus 9090
# -> open http://localhost:9090
# Alertmanager
kubectl -n monitoring port-forward svc/prometheus-stack-kube-prom-alertmanager 9093
# -> open http://localhost:9093
# Grafana
kubectl -n monitoring port-forward svc/prometheus-stack-grafana 3000:80
# -> open http://localhost:3000
This is the simplest way to verify a fresh install.
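With the port-forwards above running, you can confirm each component is up from its health endpoint rather than eyeballing the UI. These paths are the standard ones for Prometheus, Alertmanager, and Grafana; a sketch:

```shell
# Each command should return HTTP 200 once the component is healthy.
curl -fsS -o /dev/null -w 'prometheus: %{http_code}\n'   http://localhost:9090/-/healthy
curl -fsS -o /dev/null -w 'alertmanager: %{http_code}\n' http://localhost:9093/-/healthy
curl -fsS -o /dev/null -w 'grafana: %{http_code}\n'      http://localhost:3000/api/health
```

Prometheus and Alertmanager also expose `/-/ready`, which is the better check immediately after startup (healthy but not yet ready means WAL replay or config load is still in progress).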
Grafana login¶
Default credentials when `grafana.adminPassword` is set in values:

- Username: `admin`
- Password: the value of `grafana.adminPassword` (default `prom-operator` if you didn't change it)
If you used `grafana.admin.existingSecret` instead, fetch the password from
the secret:
kubectl -n monitoring get secret <secret-name> \
-o jsonpath='{.data.admin-password}' | base64 -d ; echo
Rotate the default password before exposing Grafana
Anyone with the default prom-operator password gets full Grafana admin
access. Change it before enabling ingress.
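One way to rotate the password in place is `grafana-cli` inside the running pod. A sketch, assuming the deployment name produced by the install on this page (adjust to whatever `kubectl get deploy -n monitoring` shows) and a placeholder password:

```shell
# Reset the admin password inside the Grafana container.
kubectl -n monitoring exec deploy/prometheus-stack-grafana -c grafana -- \
  grafana-cli admin reset-admin-password 'REPLACE-WITH-A-STRONG-PASSWORD'
```

Remember to also update `grafana.adminPassword` (or the referenced secret) in your override values, otherwise the next `helm upgrade` reverts the change.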
Prometheus UI: what to look for¶
Useful tabs once you can reach Prometheus:
- Status » Targets — every scrape target with its current state (UP/DOWN). The fastest way to spot a misconfigured `ServiceMonitor`.
- Status » Service Discovery — what targets the Operator generated from your `ServiceMonitor`s and `PodMonitor`s.
- Status » Configuration — the live `prometheus.yml`. Useful to confirm a config change has been hot-reloaded.
- Alerts — current rule firings.
- Graph — ad-hoc PromQL.
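Everything the Targets and Graph tabs show is also available over Prometheus's HTTP API, which is handier for scripting. A sketch, assuming the 9090 port-forward from above is active and `jq` is installed:

```shell
# Same information as Status » Targets: one line per scrape target.
curl -s http://localhost:9090/api/v1/targets | \
  jq -r '.data.activeTargets[] | "\(.labels.job)\t\(.health)"'

# Same as the Graph tab: run the classic "is everything up?" query.
curl -s 'http://localhost:9090/api/v1/query?query=up' | \
  jq '.data.result[] | {job: .metric.job, up: .value[1]}'
```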
Alertmanager UI: what to look for¶
- Alerts — what is currently firing, with grouping and silences applied.
- Silences — create a silence to mute a known issue. Always set an `expires` time and a comment with a ticket reference.
- Status — verify cluster gossip (in HA setups) and that the loaded config matches what you expect.
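Silences can also be created from the command line with `amtool`, which ships with Alertmanager. A sketch, assuming the 9093 port-forward from above; the alert name and node label are hypothetical examples:

```shell
# Silence a specific alert for 2 hours, with the mandatory ticket reference.
# Matchers are positional label=value pairs.
amtool silence add \
  alertname="KubeNodeNotReady" node="worker-3" \
  --duration 2h \
  --comment "Node drained for maintenance, see TICKET-123" \
  --alertmanager.url=http://localhost:9093

# List active silences to confirm.
amtool silence query --alertmanager.url=http://localhost:9093
```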
Production access via Ingress¶
Enable ingress per service in your override values:
prometheus:
ingress:
enabled: true
ingressClassName: nginx
hosts: [ "prometheus.example.com" ]
tls:
- secretName: prometheus-tls
hosts: [ "prometheus.example.com" ]
alertmanager:
ingress:
enabled: true
ingressClassName: nginx
hosts: [ "alerts.example.com" ]
tls:
- secretName: alertmanager-tls
hosts: [ "alerts.example.com" ]
grafana:
ingress:
enabled: true
ingressClassName: nginx
hosts:
- grafana.example.com
tls:
- secretName: grafana-tls
hosts:
- grafana.example.com
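After applying the values above with `helm upgrade`, it is worth confirming the Ingress objects actually exist and got an address before pointing DNS at them. A sketch, using the example hostnames from the values above:

```shell
# Ingress objects should list your hosts and an ADDRESS once the
# controller has reconciled them.
kubectl -n monitoring get ingress

# From outside the cluster, check TLS and routing end to end.
curl -I https://grafana.example.com/login
```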
Add authentication in front of Prometheus / Alertmanager¶
These two ship with no authentication. Either keep them unexposed and
require port-forwarding, or put an authenticating proxy in front. Two common
patterns with ingress-nginx:
Basic auth¶
htpasswd -c auth alice
kubectl -n monitoring create secret generic prometheus-basic-auth \
--from-file=auth
prometheus:
ingress:
enabled: true
annotations:
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: prometheus-basic-auth
nginx.ingress.kubernetes.io/auth-realm: "Prometheus"
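A quick way to confirm the proxy is actually enforcing auth, using the example hostname and user from above (substitute your own password):

```shell
# Unauthenticated request: expect 401 from ingress-nginx.
curl -s -o /dev/null -w '%{http_code}\n' https://prometheus.example.com/

# Authenticated request against the health endpoint: expect 200.
curl -s -o /dev/null -w '%{http_code}\n' \
  -u alice:REPLACE-WITH-PASSWORD https://prometheus.example.com/-/healthy
```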
OAuth2 / OIDC via oauth2-proxy¶
Run oauth2-proxy (e.g. via the upstream chart) and point ingress
auth-url / auth-signin annotations at it. Out of scope for this guide;
see oauth2-proxy docs.
Cluster-internal access¶
If you have services in the same cluster that need to query Prometheus (for example, custom autoscalers, capacity planners, or dashboards), use the in-cluster DNS name:
http://prometheus-stack-kube-prom-prometheus.monitoring.svc.cluster.local:9090
For Grafana:
http://prometheus-stack-grafana.monitoring.svc.cluster.local:80
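To check the in-cluster DNS name without deploying anything permanent, you can run a throwaway curl pod. A sketch, assuming the Service name from this page and the public `curlimages/curl` image:

```shell
# One-shot pod that queries Prometheus over cluster DNS and is removed
# when the command exits.
kubectl -n monitoring run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -s http://prometheus-stack-kube-prom-prometheus.monitoring.svc.cluster.local:9090/api/v1/status/buildinfo
```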
What about the Operator?¶
You normally do not interact with the operator's Service directly. It exists
mostly for the validating admission webhook (it validates `PrometheusRule`
syntax on `kubectl apply`). If the webhook fails open or you suspect it is
stuck, see Troubleshooting.