Prometheus has a lot of exporters available. Sometimes, however, you may want to get data into Prometheus as quickly as possible. I gave it some thought and came up with a rather brutish approach to delivering metrics which, to my surprise, performs quite well. Or rather, it performs as well as your exporter does, of course.

My thought process/plan of action is described below. While I talk about Bash scripts, you can use any software you'd like I suppose.

Creating a simple Exporter

For the sake of simplicity, let's build a script that reports values from /proc/loadavg. An example of such a script:

#!/bin/bash
# printMetric name description type value
function printMetric {
    echo "# HELP $1 $2"
    echo "# TYPE $1 $3"
    echo "$1 $4"
}

while read -r load1min load5min load15min jobs lastpid; do
    printMetric "loadscript_load1" "1 minute load avg" "gauge" "$load1min"
    printMetric "loadscript_load5" "5 minute load avg" "gauge" "$load5min"
    printMetric "loadscript_load15" "15 minute load avg" "gauge" "$load15min"
    # the fourth field of /proc/loadavg looks like "running/total", e.g. "1/159"
    IFS='/' read -r running background <<< "$jobs"
    printMetric "loadscript_jobs_running" "Running jobs" "gauge" "$running"
    printMetric "loadscript_jobs_background" "Background jobs" "gauge" "$background"
    printMetric "loadscript_pid_last" "Last PID in System" "counter" "$lastpid"
done < /proc/loadavg
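
To try it out, make the script executable and run it (assuming you've saved it as loadscript in your current directory):

chmod +x loadscript
./loadscript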

And the output of the script:

# HELP loadscript_load1 1 minute load avg
# TYPE loadscript_load1 gauge
loadscript_load1 0.07
# HELP loadscript_load5 5 minute load avg
# TYPE loadscript_load5 gauge
loadscript_load5 0.03
# HELP loadscript_load15 15 minute load avg
# TYPE loadscript_load15 gauge
loadscript_load15 0.05
# HELP loadscript_jobs_running Running jobs
# TYPE loadscript_jobs_running gauge
loadscript_jobs_running 1
# HELP loadscript_jobs_background Background jobs
# TYPE loadscript_jobs_background gauge
loadscript_jobs_background 159
# HELP loadscript_pid_last Last PID in System
# TYPE loadscript_pid_last counter
loadscript_pid_last 25325

Tip: When building your exporter, make sure the output follows the Prometheus text exposition format (note that the type in a # TYPE line is lowercase). You can use the promtool utility to check metric syntax.
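
For example, you can pipe the script's output straight into promtool (assuming the script is saved as loadscript in the current directory):

./loadscript | promtool check metrics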

We've built our exporter; now we need something to serve its output over HTTP for Prometheus to scrape.

Using Xinetd as your TCP server

I've had success using Xinetd as the TCP server to send my data. Xinetd is a generic service daemon (a so-called super-server): it listens on configured ports and hands each incoming connection to a program of your choice, which gives you an easy way to send and receive data over TCP.

You can use other TCP listeners, such as Netcat, if you'd like. I decided on Xinetd because of how easy it is to manage its service files, and because it provides basic access control so that only my Prometheus server can reach the metrics.

There is a small catch, though - Xinetd itself doesn't speak HTTP, which is the protocol Prometheus uses to scrape. We'll work around that with a small wrapper script.

An example Xinetd service file could look like this:

service loadscript
{
  type = unlisted
  port = 10111
  socket_type = stream
  wait = no
  user = root
  server = /opt/metrics.d/httpwrapper
  server_args = loadscript
  disable = no
  only_from = 127.0.0.1
  log_type = FILE /dev/null
}
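
The only_from directive restricts who may connect. If your Prometheus server scrapes this host over the network rather than locally, add its address to the list; for example (10.0.0.5 here is just a placeholder for your Prometheus server's IP):

  only_from = 127.0.0.1 10.0.0.5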

More on httpwrapper below.

Building an HTTP wrapper script

My next step was to build a small script that wraps the metrics in an HTTP-compliant response, which Xinetd then passes back to the connecting client.

#!/bin/bash
# small http wrapper for bash scripts via xinetd

ulimit -n 20480
ulimit -l 512

root='/opt/metrics.d/'
file="$1"
# Prometheus also accepts 'text/plain; version=0.0.4', the canonical type for the text format
mime='text/plain'

cd "$root" || exit 1

if [ -f "$root$file" ]; then
  # run the requested exporter and capture its output in a temporary file
  output=$(mktemp /tmp/.httpwrapper.XXXXXX) || exit 1
  "$root$file" > "$output"

  size=$(stat -c "%s" "$output")

  # minimal HTTP/1.1 response header block, terminated by an empty line
  printf 'HTTP/1.1 200 OK\r\nDate: %s\r\nContent-Length: %s\r\nContent-Type: %s\r\nConnection: close\r\n\r\n' \
    "$(date -u '+%a, %d %b %Y %H:%M:%S GMT')" "$size" "$mime"

  cat "$output"

  # give the response a moment to flush before the socket is closed
  sleep 1
  rm -f "$output"
  exit 0
fi

exit 1

The above script runs whatever you pass it as an argument (relative to $root - in my case /opt/metrics.d/ is where I store my exporters), then wraps the output in an HTTP-compliant response for Prometheus to scrape.
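
You can test the wrapper directly from a shell before involving Xinetd; it should print the HTTP headers followed by the metrics:

/opt/metrics.d/httpwrapper loadscript

If you'd rather not run Xinetd at all, another TCP listener can serve the same wrapper. For example, a rough equivalent with socat (not part of my setup; a sketch assuming socat is installed) might be:

socat TCP-LISTEN:10111,reuseaddr,fork EXEC:"/opt/metrics.d/httpwrapper loadscript"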

Putting it all together

  1. Install Xinetd: yum install xinetd, apt install xinetd, etc.
  2. mkdir -p /opt/metrics.d/
  3. Place the load script exporter in /opt/metrics.d/loadscript
  4. Place httpwrapper in /opt/metrics.d/httpwrapper
  5. Install the Xinetd service file into /etc/xinetd.d/loadscript
  6. chmod +x /opt/metrics.d/*
  7. Restart Xinetd: systemctl restart xinetd

You should now be able to query your script via HTTP with curl 127.0.0.1:10111

# HELP loadscript_load1 1 minute load avg
# TYPE loadscript_load1 gauge
loadscript_load1 0.01
# HELP loadscript_load5 5 minute load avg
# TYPE loadscript_load5 gauge
loadscript_load5 0.03
# HELP loadscript_load15 15 minute load avg
# TYPE loadscript_load15 gauge
loadscript_load15 0.05
# HELP loadscript_jobs_running Running jobs
# TYPE loadscript_jobs_running gauge
loadscript_jobs_running 1
# HELP loadscript_jobs_background Background jobs
# TYPE loadscript_jobs_background gauge
loadscript_jobs_background 161
# HELP loadscript_pid_last Last PID in System
# TYPE loadscript_pid_last counter
loadscript_pid_last 25400

Congrats! Your first Bash exporter is now running and can be scraped by Prometheus. Simply add the host and port to your Prometheus configuration, then reload or restart Prometheus, and it should work just fine.
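
For reference, a minimal scrape configuration might look like this (assuming Prometheus runs on the same host as the exporter; otherwise change the target address and adjust only_from accordingly):

scrape_configs:
  - job_name: 'loadscript'
    static_configs:
      - targets: ['127.0.0.1:10111']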

Is this fine for Production?

Yes! One sample exporter written this way that I've deployed on hundreds of servers can be found here: https://github.com/pawadski/monitoring_exporter

More stuff...

As usual, the code involved in this blog post is available on my Gitlab: https://gitlab.com/pawadski/blog/tree/master/exporting-prometheus-metrics-bash-scripts