We run many different Tomcat instances on a Unix box and need to keep the log files from getting too big and filling up our filesystem. We typically have logging turned up high since we need the information for debugging.
Instead of using a cron job, we decided to create a Hudson job that periodically loops through the Tomcat log directories and removes any old log files. Here's the shell command we run:
/usr/bin/find /opt/*/*/logs/* -type f -mtime +2 | grep -v cruise | grep -v /opt/tools/confluence | xargs --no-run-if-empty rm -v
Note: /opt/tools/confluence is a symlink, so we exclude it; it should already be cleaned up via its real directory name. "cruise" is a directory that I don't want touched, so I exclude it with grep -v as well.
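As an aside, the grep -v exclusions can also be expressed with find's own -prune, which skips the excluded directory entirely instead of filtering its results afterward. Here's a minimal sketch of that alternative, run against a throwaway directory so it's safe to try (the app1/cruise layout and file names are made up for the demo):

```shell
# Build a scratch tree mimicking the layout: one normal app and one "cruise" dir.
tmp=$(mktemp -d)
mkdir -p "$tmp/app1/logs" "$tmp/cruise/logs"
touch "$tmp/app1/logs/old.log" "$tmp/cruise/logs/old.log"
# Backdate both files so -mtime +2 matches them (GNU touch syntax).
touch -d '4 days ago' "$tmp/app1/logs/old.log" "$tmp/cruise/logs/old.log"

# -prune stops find from descending into the cruise tree; everything else
# older than 2 days is passed to rm, exactly like the grep -v pipeline.
find "$tmp" -path "$tmp/cruise" -prune -o -type f -mtime +2 -print \
    | xargs --no-run-if-empty rm -v
```

After this runs, app1's old log is gone but the cruise log is untouched. Both approaches work; -prune just keeps the exclusion logic inside the find command itself.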
This is a development server, so we don't need log files that haven't been modified in over 2 days. We also compress files that are over a day old to save space; you can then use zgrep to search through the compressed files. The compression is done with this shell command:
/usr/bin/find . -type f -mtime +1 | grep -v cruise | grep -v /opt/tools/confluence |xargs --no-run-if-empty /bin/gzip -v
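For anyone who hasn't used it, zgrep works on gzipped files just like grep works on plain ones. A quick illustration (the file name and log line are made up):

```shell
# Create and compress a sample log file in a scratch directory.
tmp=$(mktemp -d)
echo "ERROR: connection refused" > "$tmp/catalina.out"
gzip "$tmp/catalina.out"

# zgrep decompresses on the fly; -c counts matching lines.
zgrep -c ERROR "$tmp/catalina.out.gz"
```

So the compressed day-old logs stay searchable without gunzipping them first.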
Note 2: I use "rm -v" and "gzip -v" so that the Hudson console output includes verbose output about what was removed or compressed.
Friday, November 5, 2010