The Munin project is moving slowly closer to a Munin 3 release. In parallel, the Debian packaging is changing, too.
The new web interface looks much better than the traditional Web 1.0 interface normally associated with munin.
All the Munin perl libraries are placed in “libmunin-*-perl” packages, with the split into separate packages decided mostly by dependencies.
If you don’t want to monitor samba, or SNMP, or MySQL, there should be no need to have those libraries installed. That does mean more binary packages, on the other hand.
Munin now runs as a standalone HTTPD, it no longer graphs from cron, nor does it run as CGI or FastCGI scripts.
The user “munin” grants read-write access, while the group “munin” grants read-only access. The new web interface runs as the “munin-httpd” user, which is a member of the “munin” group.
There is a “munin” service. For now, it runs rrdcached for the munin user and RRD directory.
The perl “munin-node” and the compiled “munin-node-c” should be interchangeable, and be able to run the same plugins.
Munin node, and Munin async node, should be wholly separate from the munin master. It should be possible to use either the perl “munin-node” package or the compiled “munin-node-c” package with the same master.
The munin plugins are placed in separate packages named “munin-plugins-*”. The split is based on monitoring subject or dependencies. They depend on the appropriate “libmunin-plugin-*-perl” packages.
The “munin-plugins-c” package, which is built from the “munin-node-c” source, contains a number of compiled plugins which should use fewer resources than their shell, perl or python equivalents.
Plugins from other sources than “munin” must work similarly to the ones from “munin”. More work on this is needed.
Late December 2015, I set up Jenkins with jenkins-debian-glue to build packages, test with autopkgtest, and update my development apt repository on each commit. That helped with developing and testing the new Munin packages.
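For reference, jenkins-debian-glue drives autopkgtest from the package's own test metadata; a minimal, hypothetical “debian/tests/control” stanza for a smoke test could look like this (the test name and dependency list are assumptions, not the actual munin packaging):

```
Tests: smoke
Depends: munin, munin-node
```

with “debian/tests/smoke” being any script that exits non-zero on failure.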
The packages are not quite ready to upload to experimental, but they are continuously deployed to weed out bugs. They can be found in my packaging apt repo. (The usual non-guarantees apply, handle with care, keep away from small children, etc…)
Munin developers, packagers and users hang out on “#munin” on the OFTC network. Please drop by if you have questions or comments.
What did I do in September 2015?
Working on making the munin master fit inside Mojolicious.
The existing code is not written to make this trivial, but all the pieces are there. Most of the pieces need breaking up into smaller pieces to fit.
New version of puppet-module-puppetlabs-apache which closes:
I like it when a new upstream version closes all bugs left in the BTS for a package.
(Update 2016-01-02: It’s not in NEW anymore)
Lots of work on a new ceph puppet module.
Using btrfs on a networked backup server looked like a good idea, what with the data integrity checksumming and all. Problem was, we experienced massive performance issues.
Reformatting it to ext4 gave a decent increase in write performance, and will hopefully give fewer server crashes per week (from “many” to “none” is the goal). Just before this wipe-and-reinstall, “umount” had been hanging for a few hours, and the admin got a tad annoyed.
This is on Ubuntu 12.04.4 LTS (GNU/Linux 3.12.8-031208-generic x86_64). The “disk” in question is a single 28T device on a nearby disk shelf.
Now, why this performance difference? We have Munin installed, and showing graphs from before and after the change gives us a few clues.
Network throughput increased to the limit. Looks like it is time to move to active/active bonding, instead of active/passive.
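For the record, a sketch of what that switch could look like in “/etc/network/interfaces” on such an Ubuntu box, assuming the ifenslave package and an LACP-capable (802.3ad) switch; the address and interface names are placeholders:

```
auto bond0
iface bond0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
```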
The number of operations went down, while the request size increased massively. This allowed much more data to be written to the device.
I discovered some time ago the marvelous “dev_scripts” directory in the munin source code. As it is very easy to use, I'll write a short tutorial about it.
To use it, one has to install all the packages needed for munin, and to grab a copy of the source code. Easiest is to use either a tarball, or to clone the git repository.
Note that the guidelines on contributing back are specified directly in the git repo.
Now, I just assume you want to contribute back, otherwise you would not care much about said dev environment. That means using the git way of doing it.
First step is to clone the git repository. We will use “$HOME/src/munin” as the development directory.

```shell
mkdir -p $HOME/src
cd $HOME/src
git clone https://github.com/munin-monitoring/munin munin
cd munin
```
Now, we have to compile the source code. I know that it sounds strange as the code is mostly Perl, but there are some templates that need to be filled in with environment specifics, such as the Perl interpreter path, a POSIX-compatible shell, and so on.
To do that, run “dev_scripts/install 1”. Now all of munin (and munin-node) should be compiled and installed in “sandbox”. The “1” at the end is explained below.
There are several different tools in “dev_scripts”:

“install”: this is the one you used already. You have to use it every time you want to recompile & deploy. Adding a “1” argument makes it do a full re-install (wipe & install), so you don't usually want that.
“node”: a tool to start the development node. Note that it listens on port 4948, so you can use it alongside a normal munin-node.
“run”: used to launch all the executable parts of munin, such as “munin-update” or “munin-limits”. It can also be used to launch “munin-cron”. The usage is very simple: just prefix the command to launch with “dev_scripts/run”, and every environment variable and command-line argument will be forwarded to the command.
```shell
# launch munin-cron
dev_scripts/munin-cron

# launch manually some cron parts
dev_scripts/munin-update
dev_scripts/munin-limits
dev_scripts/munin-html
dev_scripts/munin-graph

# debug a plugin
dev_scripts/munin-run --debug cpu config
```
“cgi”: the same as “run”, only for CGI. It sets up all the environment variables that emulate a CGI call. Usage is very easy:
```shell
dev_scripts/cgi munin-cgi-graph /localnet/localhost/cpu-day.png > out.dat
```
“out.dat” will contain the whole HTTP output, with the HTTP headers and the PNG content. Everything that is sent to STDERR won't be captured, so you can use it liberally while debugging.
“query_munin_node”: used to send commands to the node in a very simple way. Node commands are simply passed as arguments to the tool.
```shell
dev_scripts/query_munin_node list
dev_scripts/query_munin_node config cpu
dev_scripts/query_munin_node fetch cpu
```
That's the holy grail. You will have a development version that behaves the same as a real munin install.
First, let's assume you have a working per-user CGI configuration (i.e. “~user/cgi/whatever” works). If not, refer to the documentation of your preferred webserver. Note that nginx will _not_ work, as it does not support CGI.
I wrote a very simple CGI wrapper script. The home directory is hard-coded in the script.
```shell
#! /bin/sh
ROOT=/home/me/src/munin
eval "$(perl -V:version)"
PERL5LIB=$ROOT/sandbox/usr/local/share/perl/$version
#export DBI_TRACE=2=/tmp/dbitrace.log
exec perl -T -I $PERL5LIB $ROOT/sandbox/opt/munin/www/cgi/$CGI_NAME
```
As I wrote earlier, Helmut rewrote some core plugins in C. It was mainly done with efficiency in mind.
As those plugins only parse one /proc file each, there seemed no need to endure the many forks inherent in even trivial shell programming. It also acknowledges the fact that the measuring system should be as light as possible.
Munin plugins are highly driven towards simplicity, so having shell plugins is quite logical. It serves the educational purpose of giving users samples for writing their own, while being quite easy to code and debug for the developers. Since their impact on current systems is very small, there is not much incentive to change.
Nonetheless, now monitored systems are becoming quite small.
Now the embedded C approach for plugins has a new rationale.
Usually, datacenter nodes are at the high end of the spectrum rather than the low end.
Munin's greatest strength is its very KISS architecture. It therefore gets many things right, such as a huge modularity.
Each component (master/node/plugin) has a simple API to communicate with the others.
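To illustrate how small that API is, here is a sketch of parsing a node's “fetch” answer. The response below is made-up sample data; a real one would come from the node's TCP port (4949 by default) as plain “field.value N” lines, terminated by a single “.”:

```shell
# Made-up sample of a munin-node "fetch" response (values are invented)
response='user.value 4516
system.value 1033
.'

# Sum all reported values, stopping at the "." terminator
total=$(printf '%s\n' "$response" | awk '$1 == "." { exit } { sum += $2 } END { print sum }')
echo "$total"    # prints 5549
```

The whole protocol is similarly line-oriented and human-readable, which is why so many third-party tools can speak it with a few lines of code.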
I admit that the master, and even the node, have convoluted code. In fact, some rewrites already exist. And they are a really good thing, as they enable rapid prototyping of things that stock munin (currently) has trouble doing.
The stock munin is a piece of software that many depend upon, so it has to move at a much slower pace than one would want, even me. As much as I really want to add many, many features to it, I still have to take extra care not to break things, even the least-known features.
So I take munin offshoots very seriously, and even offer as much help as I can in order for them to succeed.
In my opinion, competition is only bad in the short term; in the long term it usually adds significant value to the whole ecosystem. That said, there's always a risk of slowly becoming irrelevant, but I think that's the real power of open source's evolutionary paradigm: embrace them, or become obsolete and get replaced. After all, if someone takes the time to author a competitor with real threat potential, it mostly means that there's a real itch to scratch and that many things are to be learnt.
The munin ecosystem is divided into 3 main categories, obviously related to the 3 main components of munin: master, node & plugin.
That's the most obvious part as custom plugins are the real bread and butter of munin.
Stock plugins are mostly written in Perl or POSIX shell, as Perl is munin's own language and POSIX shell is ubiquitous. This is acknowledged by the fact that core munin provides 2 libraries (Perl & shell) to help plugin authoring.
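To make that concrete, here is a minimal, hypothetical plugin in that stock shell style (not an actual stock plugin): called with “config” it declares its graph, otherwise it prints the current values. It is wrapped in a function here only so it can be exercised inline; a real plugin is a standalone executable script with the same case statement at top level.

```shell
#!/bin/sh
# Hypothetical minimal munin plugin, reporting the 1-minute load average.
plugin() {
    case "$1" in
    config)
        # graph declaration, read by the master
        echo 'graph_title Load average (example)'
        echo 'graph_vlabel load'
        echo 'load.label load'
        ;;
    *)
        # the first field of /proc/loadavg is the 1-minute load average
        echo "load.value $(cut -d' ' -f1 /proc/loadavg)"
        ;;
    esac
}

plugin config    # prints the three-line graph declaration
```

The node runs such a script twice per poll cycle at most (once for “config”, once for values), which is why even naive shell plugins are usually cheap enough.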
Some plugins even got rewritten in plain C, as it was shown that shell plugins do have a significant impact on very under-powered nodes, such as embedded routers.
This component is very simple. Yet, it has to run on all the nodes that one wants to monitor. It is currently written in Perl, and while that's not an issue on UNIX-like systems, it can be quite problematic on embedded ones.
The official package comes with a POSIX shell rewrite that has to be run from inetd. It is quite useful for embedded routers like OpenWRT, but still suffers from a hard dependency on a POSIX shell and inetd.
SNMP is another way to monitor nodes. While it works really well, it mostly suffers from the fact that its configuration is quite different from the usual way, so I guess some things will change on that side.
Win32 has long been a very difficult OS to monitor, as it doesn't offer many of the UNIX-esque features. Yet the number of win32 nodes that one wants to monitor is quite high, which makes munin one of the few systems that can easily monitor heterogeneous systems.
Therefore, while you can install the stock munin-node, several projects emerged. We decided to adopt munin-node-win32.
There's also a dedicated node for Android. It makes sense, given that Android is Linux-derived but lacks Perl, and is a mostly-Java platform. This node also has some basic capabilities for pushing data to the master instead of the usual polling.
This is especially interesting given that Android nodes are usually loosely connected, so the node spools values itself and pushes them when it regains connectivity.
Note that this is specifically an aspect that is currently lacking in munin, and I'm planning to address it in the 2.1 series. So thanks to its author for showing a relevant use-case.
That's my latest experiment. It started with a simple question: how difficult would it be to code a fairly portable version of the node?
It turned out that it wasn't that difficult. I'm even asking myself about eventually replacing the win32-specific port with this one, as the code is much simpler. The win32 node has several plugins built in, mostly due to platform specifics. I still have to find a way to work around that, but it's in quite good shape.
This post was originally done to promote it, but while writing it I noticed that the ecosystem deserved a post on its own. So I'll write another one, specific to the C port of munin-node and plugins.
The master is the most complex component. So rewrites of it won't happen as-is. They usually take the form of a bridge between the munin protocol and another graphing system, such as Graphite.
There are also client libraries that are able to directly query munin nodes, in order to reuse the vast ecosystem. The languages are varied, from the obvious Python to Ruby, along with a quite modern node.js one.