I am monitoring 51 devices, each with 8 data points; about 6 of the 8 data points are being graphed. 8x51 = 408 data points total.
I am running 2 of Ryan Matte's ZenPacks, which are great.
My issue is that every 2 weeks my localhost maxes out at 100% memory and wackiness starts, showing up specifically as strange SNMP polling results. A reboot brings it back down to as low as it ever goes: 48% of memory in use. From there it climbs constantly and never gives any memory back until it maxes out again.
This is on a CentOS VM.
Attached is a top, run just now while localhost is using 58% of memory.
Did I just size it wrong for memory? That would be an easy fix...
The initial allocation was 4 GB.
I also installed iptables to shape traffic, and we created a lot of graph reports.
I would like to triple the memory, but I need to identify my issue before doing so.
I also just realized that a top captured while memory is pegged would be more useful than what I provided. I will do that next time.
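Rather than waiting to catch the spike by hand, you could log a memory snapshot on a schedule so the state at the next spike is captured automatically. A minimal sketch (the script name and log path are just examples, adjust to taste):

```shell
#!/bin/sh
# memlog.sh (hypothetical name): append a timestamped memory snapshot.
# Schedule from cron, e.g.:  */5 * * * * /usr/local/bin/memlog.sh
LOG=${LOG:-/tmp/memlog.txt}      # example log path; override via $LOG
{
  date                           # timestamp for this snapshot
  free -m                        # memory totals in MB, incl. buffers/cache
  top -b -n 1 | head -20         # one batch-mode top pass: header + top tasks
  echo "----"
} >> "$LOG"
```

Reviewing that log after the next 100% event will show whether it is one process growing (a leak) or just cache filling up.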
That behavior is normal. When your memory shows almost all used, check what the cached value on the swap line is reading. My server has 24GB of memory; when I start zenoss/mysql it jumps right to 10-11GB used. Over a week or so it climbs to about 22GB, but 10GB of that is sitting in the cached column. This is a good thing: the more you have crammed into cache, the better.
That being said, 4GB is generally considered the bare minimum for running the Zenoss stack. With only ~50 devices and ~400 data points I think you'll be OK with it, though. Remember your OS takes several hundred MB up to a gig, MySQL will bite a chunk out of it as well, and once you add Zenoss and normal Linux caching you will be at 4GB in no time.
Memory is a cache, not a resource. If you see 4GB total, 4GB used, and roughly 0 cached, then you know you're starved for memory. Any unused memory is wasted memory.
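To put numbers on that, you can pull the cached figure straight from /proc/meminfo and subtract buffers and cache from "used" (a quick sketch; field names are the classic layout, and newer kernels also expose a ready-made MemAvailable line):

```shell
# Show the human-readable view, then compute truly-used memory (in kB)
# as MemTotal - MemFree - Buffers - Cached from /proc/meminfo.
free -m
awk '/^MemTotal:/ {t=$2} /^MemFree:/ {f=$2} /^Buffers:/ {b=$2} /^Cached:/ {c=$2}
     END {printf "truly used: %d kB of %d kB total\n", t-f-b-c, t}' /proc/meminfo
```

If "truly used" stays well below total while top shows memory pegged, the box is healthy and the rest is cache; if it tracks the total, that's real memory pressure.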