It's great to now be a part of the Zenoss team, where I am assisting Zenoss Labs in its efforts to advance the capabilities of the Zenoss platform by developing new ZenPacks.
For my initial ZenPack, Chet offered a couple of options, "OpenVZ" among them. I made a strong request to tackle the OpenVZ ZenPack first, as I'm a big fan of the technology.
I have been using OpenVZ for quite a while, and the Linux distribution I now lead, Funtoo Linux, has good support for it. So I was excited to add some really advanced OpenVZ support to Zenoss. I am pleased to announce that the OpenVZ ZenPack is now available to the public. It is compatible with Zenoss Core 3.2 and the upcoming Zenoss Core 4 release, and is available on GitHub at the following location:
Detailed documentation is provided in README.rst, which you can browse by scrolling down at the link above. (ZenPack developer note: we are moving to reStructuredText as the standard for ZenPack documentation -- more info in a future blog post.)
I should probably introduce OpenVZ to those who may not be familiar with it, and then give you some insights into the full monitoring capabilities that have been integrated into this ZenPack.
What is OpenVZ?
OpenVZ (see http://wiki.openvz.org) is an OS-level server virtualization solution built on Linux. It allows the creation of isolated, secure virtual Linux containers (called "VEs") on a single physical server. Each container has its own local uptime, power state, network interfaces, resource limits and isolated portion of the host's filesystem. OpenVZ is often described as "chroot on steroids."
It is a capable and mature platform for building high-performance Linux-based clouds, and is used extensively by hosting providers and commercial organizations. The project is supported by Parallels, which offers a commercial version called Virtuozzo.
The appeal of this technology is that it can fully isolate Linux-based images from one another without incurring significant "virtualization overhead": the performance degradation we typically accept as the cost of going virtual with a more traditional hypervisor- or paravirtualization-based solution.
It does this by running a single Linux kernel for all containers on the OpenVZ host. The OpenVZ kernel, based on a RHEL5 or RHEL6 kernel, is enhanced to manage each container's memory, processor and networking resources independently and to isolate the containers from one another, while offering near-bare-metal performance.
The OpenVZ ZenPack
Now, let's explore how this ZenPack extends Zenoss' existing monitoring capabilities.
While Zenoss has always had the ability to monitor an OpenVZ host and its containers as standalone devices, by connecting to each directly over SSH or SNMP, it did not understand the relationship between an OpenVZ host and its containers, nor could it "see" containers by simply monitoring the host.
This ZenPack allows you to see the containers as components within an OpenVZ host even if you are not actively monitoring the individual containers. A number of metrics are made available for the containers without requiring monitoring to be configured on the containers themselves. This is a great benefit for hosting providers as well as large enterprise OpenVZ deployments.
With this ZenPack, it is also still possible to monitor containers "the old-fashioned way", as Linux devices over SSH or SNMP, and if you do, Zenoss will now "connect" the OpenVZ host to the container device you are monitoring. For containers running production workloads, this dual-monitoring approach lets you use traditional Zenoss monitoring and alerting functions within the container, tracking both OpenVZ-specific metrics and traditional Linux metrics.
I have also made this ZenPack very powerful in terms of what metrics it can monitor. Anything exported by /proc/user_beancounters or /proc/vz/veinfo can be monitored, and I try to provide normalized "byte"-equivalent metrics for everything I can. Several example graphs are included, one per component. Here are some samples:
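To give a feel for the kind of data involved, here is a minimal sketch (not the ZenPack's actual parsing code) of reading /proc/user_beancounters-style output and normalizing page-based counters into byte equivalents, assuming the typical 4 KiB Linux page size. The sample text and container ID are illustrative.

```python
# Minimal sketch: parse /proc/user_beancounters-style output and
# normalize page-based counters to bytes (assumes 4 KiB pages).
# Illustrative only -- not the ZenPack's actual parser.

PAGE_SIZE = 4096  # bytes; typical page size on x86 Linux

SAMPLE = """Version: 2.5
       uid  resource           held    maxheld    barrier      limit    failcnt
      101:  kmemsize        2102668    2433120   11055923   11377049          0
            privvmpages        3000       4000      65536      69632          0
            physpages          1500       2000          0 2147483647          0
"""

def parse_beancounters(text):
    """Return {ctid: {resource: {'held': ..., 'limit': ..., ...}}}."""
    counters = {}
    ctid = None
    for line in text.splitlines():
        fields = line.split()
        # Skip the version line, the column-header line, and blanks.
        if not fields or fields[0] in ("Version:", "uid"):
            continue
        if fields[0].endswith(":"):  # start of a container section, e.g. "101:"
            ctid = fields[0].rstrip(":")
            counters[ctid] = {}
            fields = fields[1:]
        name, held, maxheld, barrier, limit, failcnt = fields[:6]
        counters[ctid][name] = {
            "held": int(held), "maxheld": int(maxheld),
            "barrier": int(barrier), "limit": int(limit),
            "failcnt": int(failcnt),
        }
    return counters

def pages_to_bytes(pages):
    """Normalize a page-count metric to a byte-equivalent value."""
    return pages * PAGE_SIZE

data = parse_beancounters(SAMPLE)
print(pages_to_bytes(data["101"]["privvmpages"]["held"]))  # 3000 pages -> 12288000 bytes
```

Counters such as kmemsize are already in bytes, while page-based counters like privvmpages need the conversion above to be graphed on a common byte scale.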
In addition, I have implemented a special graph called "OpenVZ Container Memory Utilization" that calculates the percentage of memory used by all containers on your OpenVZ host, using formulas supplied by the OpenVZ developers. This graph can be used for capacity management, to see how much untapped capacity is left in your OpenVZ deployment and how "close to the edge" you are on memory usage:
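As a rough, hypothetical illustration of the idea (the ZenPack uses the formulas supplied by the OpenVZ developers, which this does not reproduce), one could sum each container's held privvmpages and compare the total against the host's physical RAM:

```python
# Hypothetical sketch of a container-memory-utilization calculation.
# This simplified version is NOT the OpenVZ developers' formula; it
# just sums each container's held privvmpages and compares the total
# against the host's physical memory.

PAGE_SIZE = 4096  # bytes

def memory_utilization_pct(held_privvmpages_by_ct, host_ram_bytes):
    """Percentage of host RAM claimed by all containers combined."""
    used_bytes = sum(held_privvmpages_by_ct.values()) * PAGE_SIZE
    return 100.0 * used_bytes / host_ram_bytes

# Example: three containers on a host with 8 GiB of RAM
held = {"101": 300000, "102": 450000, "103": 250000}
print(round(memory_utilization_pct(held, 8 * 1024**3), 1))  # -> 47.7
```

A number well below 100% suggests headroom for more containers, while a value approaching or exceeding 100% signals that the host is overcommitted on memory.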
Enjoy the OpenVZ ZenPack and let me know how it works for you!