Installation¶
This page walks you through installing the prometheus-stack chart from
ovesorg/deployment-charts.
Prerequisites¶
| Requirement | Minimum |
|---|---|
| Kubernetes | 1.19 (chart `kubeVersion: '>=1.19.0-0'`) |
| Helm | 3.x |
| Cluster admin access | Yes — the chart installs cluster-scoped CRDs and ClusterRoles |
| StorageClass | Required if you enable persistence for Prometheus / Alertmanager / Grafana |
| kubectl configured for the target cluster | Yes |
**Do not deploy via ArgoCD UI/CLI**

The in-repo `prometheus-stack/README.md` warns that deploying via the ArgoCD
web UI or CLI fails with a "too many characters" error (the CRDs are large).
Use `helm` directly. For GitOps, render with `helm template` and apply the
generated manifests, or use a deployment tool that handles large CRDs.
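The render-and-apply workflow mentioned above can be sketched as a small helper. This is an illustration, not part of the repo: the chart directory, output path, and release name are assumptions to adjust for your layout.

```shell
# Hypothetical helper: render the chart to plain manifests for GitOps,
# sidestepping the ArgoCD CRD-size limit. Adjust paths to your repo layout.
render_prometheus_stack() {
  local chart_dir="${1:-./prometheus-stack}"      # assumed chart location
  local out_dir="${2:-./rendered/monitoring}"     # assumed output directory
  mkdir -p "$out_dir"
  # --include-crds keeps the (large) CRDs in the rendered output
  helm template prometheus-stack "$chart_dir" \
    --namespace monitoring \
    --include-crds \
    > "$out_dir/prometheus-stack.yaml"
}
```

The rendered file can then be committed and applied with `kubectl apply -f` by whatever tool drives your cluster.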
Recommended install (in-repo defaults)¶
The deployment-charts repo's own README recommends installing the chart with
the values that are checked in:
```shell
# 1. Clone the deployment-charts repo
git clone https://github.com/ovesorg/deployment-charts.git
cd deployment-charts/prometheus-stack

# 2. Create the namespace
kubectl create namespace monitoring

# 3. Resolve subchart dependencies (only needed if charts/ is missing or stale)
helm dependency update .

# 4. Install
helm install prometheus-stack . -n monitoring
```
After a few minutes, verify everything is running:
```shell
kubectl get pods -n monitoring
kubectl get prometheus,alertmanager -n monitoring
kubectl get servicemonitor -n monitoring
```
You should see, at minimum:
- `prometheus-stack-prometheus-operator-*` (Operator)
- `prometheus-prometheus-stack-prometheus-0` (Prometheus StatefulSet pod)
- `alertmanager-prometheus-stack-alertmanager-0` (Alertmanager StatefulSet pod)
- `prometheus-stack-grafana-*` (Grafana)
- `prometheus-stack-kube-state-metrics-*`
- `prometheus-stack-prometheus-node-exporter-*` (one per node)
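Rather than eyeballing `kubectl get pods`, the check can be scripted. A sketch, assuming the release is named `prometheus-stack` (the workload names below are derived from the pod names listed above and may differ if you changed the release name):

```shell
# Hypothetical readiness check for the workloads listed above.
# Names assume the release is called "prometheus-stack".
wait_for_monitoring() {
  local ns="${1:-monitoring}"
  kubectl -n "$ns" rollout status deployment/prometheus-stack-grafana --timeout=5m
  kubectl -n "$ns" rollout status statefulset/prometheus-prometheus-stack-prometheus --timeout=5m
  kubectl -n "$ns" rollout status statefulset/alertmanager-prometheus-stack-alertmanager --timeout=5m
  # Catch anything else (operator, kube-state-metrics, node-exporter pods)
  kubectl -n "$ns" wait --for=condition=Ready pods --all --timeout=5m
}
```

`rollout status` blocks until each workload converges, so the function is usable as a gate in CI or bootstrap scripts.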
Alternative: install from upstream repo¶
If you do not need the OVES customizations checked in, you can install directly from the upstream Helm repo:
```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
kubectl create namespace monitoring
helm install prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --version 66.3.1
```
You will get the upstream defaults, which differ from the in-repo values in
things like `nameOverride: "monitoring"`, ingress hostnames, dashboards, and
alert routing.
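If you want to see exactly what those upstream defaults are before choosing this route, one option (a sketch; the output filename is an assumption) is to dump them to a file for review or diffing:

```shell
# Hypothetical helper: dump the upstream chart's default values for review.
# The version pin matches the install command above.
dump_upstream_values() {
  helm show values prometheus-community/kube-prometheus-stack \
    --version 66.3.1 > upstream-values.yaml   # assumed output filename
}
```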
Custom values file¶
For anything beyond a kick-the-tires install, write your own values file:
```shell
helm install prometheus-stack . \
  -n monitoring \
  -f values.yaml \
  -f values.overrides.yaml
```
A small starting `values.overrides.yaml`:

```yaml
grafana:
  adminPassword: "<rotate-me>"
  ingress:
    enabled: true
    ingressClassName: nginx
    hosts:
      - grafana.example.com
    tls:
      - secretName: grafana-tls
        hosts:
          - grafana.example.com

prometheus:
  prometheusSpec:
    retention: 30d
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: gp3
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 100Gi

alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: gp3
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 10Gi
```
See Configuration for what to put in the override file.
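Before installing with an override file, it can be worth rendering and dry-running the result so typos surface as API errors rather than broken releases. A sketch, assuming the file names from the install command above:

```shell
# Hypothetical pre-install check: render with the overrides and validate
# against the API server without applying anything.
validate_overrides() {
  helm template prometheus-stack . \
    -n monitoring \
    -f values.yaml \
    -f values.overrides.yaml \
    | kubectl apply --dry-run=server -f - > /dev/null
}
```

A non-zero exit means either the templates failed to render or the API server rejected a manifest.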
Verifying the install¶
```shell
# CRDs are present
kubectl get crd | grep monitoring.coreos.com

# Prometheus is up and scraping
kubectl -n monitoring port-forward svc/prometheus-stack-kube-prom-prometheus 9090
# Visit http://localhost:9090/targets and confirm targets are UP.

# Alertmanager is up
kubectl -n monitoring port-forward svc/prometheus-stack-kube-prom-alertmanager 9093

# Grafana is up
kubectl -n monitoring port-forward svc/prometheus-stack-grafana 3000:80
# Visit http://localhost:3000 and log in (default user: admin)
```
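With the port-forwards above running, the same checks can be done without a browser by hitting each component's health endpoint. A sketch (the ports match the forwards above; `/-/healthy` and `/api/health` are the standard Prometheus, Alertmanager, and Grafana health paths):

```shell
# Hypothetical smoke test against the port-forwarded services.
# Requires the three port-forward commands above to be running.
check_monitoring_endpoints() {
  curl -fsS http://localhost:9090/-/healthy    # Prometheus
  curl -fsS http://localhost:9093/-/healthy    # Alertmanager
  curl -fsS http://localhost:3000/api/health   # Grafana
}
```

`curl -f` makes any non-2xx response fail the function, so it doubles as a scriptable health gate.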
Uninstall¶
```shell
helm uninstall prometheus-stack -n monitoring
```
**CRDs are not removed automatically**

Helm intentionally does not delete CRDs on uninstall, because that would also delete the CR instances (and their data) belonging to other releases. Clean them up manually only if you are decommissioning monitoring entirely:
```shell
kubectl delete crd alertmanagerconfigs.monitoring.coreos.com
kubectl delete crd alertmanagers.monitoring.coreos.com
kubectl delete crd podmonitors.monitoring.coreos.com
kubectl delete crd probes.monitoring.coreos.com
kubectl delete crd prometheusagents.monitoring.coreos.com
kubectl delete crd prometheuses.monitoring.coreos.com
kubectl delete crd prometheusrules.monitoring.coreos.com
kubectl delete crd scrapeconfigs.monitoring.coreos.com
kubectl delete crd servicemonitors.monitoring.coreos.com
kubectl delete crd thanosrulers.monitoring.coreos.com
```
PVCs are also kept by default. Delete them manually if you do not need historical data:
```shell
kubectl -n monitoring delete pvc -l app.kubernetes.io/instance=prometheus-stack
```
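For a full decommission, the three steps above (release, CRDs, PVCs) can be bundled into one script. A sketch, not part of the repo; use only when you are certain nothing else depends on the Prometheus Operator CRDs:

```shell
# Hypothetical full-teardown helper: removes the release, the operator CRDs,
# and the retained PVCs. Destructive; intended for complete decommissioning.
teardown_monitoring() {
  local ns="${1:-monitoring}"
  helm uninstall prometheus-stack -n "$ns"
  for crd in alertmanagerconfigs alertmanagers podmonitors probes \
             prometheusagents prometheuses prometheusrules \
             scrapeconfigs servicemonitors thanosrulers; do
    kubectl delete crd "${crd}.monitoring.coreos.com" --ignore-not-found
  done
  kubectl -n "$ns" delete pvc -l app.kubernetes.io/instance=prometheus-stack
}
```

`--ignore-not-found` keeps the loop idempotent if some CRDs were already removed.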