I used to deploy Prometheus the traditional way, without the help of an operator. Finally, it is time to try out the operator, as it has become the de facto standard way of deploying Prometheus.
To get a good understanding, I decided to stay away from Helm this time. Deploying the raw YAML files yourself really helps you see how the different parts fit together. Helm is a great tool, but it hides the implementation details, which is exactly what you don’t want when you are trying to build a deeper understanding. However, if you are thinking about deploying prometheus-operator in a production environment for your company, I would suggest you look at the community-maintained Helm chart kube-prometheus-stack, which installs a complete Prometheus stack.
As always, you can follow along and check out the source code or clone it here.
Install The Operator
First things first, we need a namespace to deploy our Kubernetes objects into. In this tutorial, I’ve chosen to name it o11y, but you’re free to use any name you prefer. Let’s create the namespace.
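One way to do it is with a small manifest, which also lets us attach a label that will come in handy later for the namespace selector (the label key and value here are just an example):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: o11y
  labels:
    monitoring: o11y   # referenced later by the podMonitorNamespaceSelector
```

Apply it with kubectl apply -f namespace.yaml (or whatever file name you prefer).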
Now it is time to deploy the prometheus-operator along with the custom resource definitions (CRDs). The operator will watch the custom resources and ensure that your Prometheus setup reflects the specified values. Use the --server-side flag to avoid warnings about the annotations being too long; the CRDs are so large that they exceed the size limit of the last-applied-configuration annotation that client-side apply would add.
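The exact manifests are in the linked repo, but the step boils down to something like this, using the bundle.yaml published in the prometheus-operator repository (note that the stock bundle targets the default namespace, so adjust it if you want the operator to live in o11y):

```bash
# The bundle contains the CRDs, the RBAC rules and the operator Deployment.
# Server-side apply keeps kubectl from stuffing the huge CRDs into the
# last-applied-configuration annotation, which is what triggers the warnings.
kubectl apply --server-side -f \
  https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/main/bundle.yaml
```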
And finally, let’s deploy Prometheus itself.
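A minimal Prometheus resource could look roughly like the sketch below; the resource name, the service account, and the selector labels are values picked for this example, so adjust them to your setup:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: o11y
spec:
  replicas: 1
  serviceAccountName: prometheus   # a ServiceAccount with RBAC to scrape targets, created separately
  # Which PodMonitors this instance is allowed to pick up; more on these selectors below.
  podMonitorNamespaceSelector:
    matchLabels:
      monitoring: o11y
  podMonitorSelector:
    matchLabels:
      prometheus: o11y
```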
Now, if you list the pods in the o11y namespace, you should see two pods running: one for prometheus-operator and one for Prometheus itself.
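For example (the operator names the Prometheus pod after the resource, so with a Prometheus called prometheus you should see something like prometheus-prometheus-0):

```bash
kubectl get pods -n o11y
```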
There you have it! Prometheus is now deployed in your cluster, all set to scrape your metrics. Before we wrap up, let’s dive into an important part of the Prometheus CRD. In the YAML file, you’ll notice a few selectors:
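They are the ones from the Prometheus spec above; the label values are just the ones used in this example, so use whatever you like as long as they match:

```yaml
podMonitorNamespaceSelector:
  matchLabels:
    monitoring: o11y    # must match a label on the namespace
podMonitorSelector:
  matchLabels:
    prometheus: o11y    # must match a label on the PodMonitor
```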
The labels used in these selectors are arbitrary, but they must match the labels on the resources you want picked up, otherwise the prometheus-operator will not notice the changes and apply them to Prometheus. Check out the labels on the namespace we created earlier and on the PodMonitor below, and you’ll see they match. This feature is incredibly useful when you have multiple Prometheus instances in your cluster and want to specify which instance should scrape your metrics. In a multi-tenant setup, this becomes crucial, especially when each team has its own Prometheus instance and you want to keep metrics separate.
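A PodMonitor that scrapes the Prometheus pods themselves could look something like this; its prometheus: o11y label lines up with the podMonitorSelector, just like the monitoring: o11y label on the namespace lines up with the podMonitorNamespaceSelector. The pod selector label and the port name follow the conventions the operator uses for the pods it manages, so verify them in your cluster with kubectl get pods --show-labels:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: prometheus
  namespace: o11y
  labels:
    prometheus: o11y                       # matched by podMonitorSelector
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: prometheus   # label on the operator-managed Prometheus pods
  podMetricsEndpoints:
    - port: web                            # the named HTTP port of the Prometheus container
```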
Finally, let’s port-forward to Prometheus and explore what we can query.
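The operator creates a governing service called prometheus-operated in front of the Prometheus pods, so port-forwarding through it (or directly to the pod) works:

```bash
kubectl port-forward -n o11y svc/prometheus-operated 9090:9090
```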
If you go to http://localhost:9090/targets you should now see a target called podMonitor/o11y/prometheus/0 (1/1 up). We are now successfully monitoring Prometheus itself. From here you can keep adding more exporters and monitors to get metrics about the applications you care about.
That’s it! Thank you for reading!