If you already have Nagios set up to monitor your servers, it's a good idea to use it to monitor your Hadoop data nodes as well. You can use NRPE to call your DFS monitoring plugin. I found this blog post about setting up NRPE on CentOS extremely useful. The only trick to get it working was to also set up the RPMforge repositories for yum, so that it finds the NRPE packages (nagios-nrpe and nagios-plugins-nrpe).
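For reference, the install boils down to something like the following on the monitored node (a sketch, not a full walkthrough; it assumes the RPMforge repo is already enabled and uses the package names mentioned above):

```shell
# Assumes RPMforge is already configured as a yum repo
yum install nagios-nrpe nagios-plugins-nrpe

# Start the NRPE daemon and have it come up on boot
service nrpe start
chkconfig nrpe on
```

You will also need to open the NRPE port (5666 by default) between the Nagios server and the data nodes.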
The next step was to add a monitoring script for Hadoop DFS, which I found here. Since I was running Hadoop 0.20.2, I had to make some changes to the script to parse the values out of the dfs report correctly:
get_vals() {
    # Run the dfs report helper as root; requires a matching sudoers entry
    tmp_vals=`sudo ${path_sh}/get-dfsreport.sh`
    if [ -n "$tmp_vals" ]
    then
        dn_avail=`echo -e "$tmp_vals" | grep -m1 "Datanodes available:" | awk '{print $3}'`
        dfs_used=`echo -e "$tmp_vals" | grep -m1 "DFS Used:" | awk '{sub(/\(/,"",$4); print $4}'`
        dfs_used_p=`echo -e "$tmp_vals" | grep -m1 "DFS Used%:" | awk '{print $3}'`
        dfs_total=`echo -e "$tmp_vals" | grep -m1 "Present Capacity:" | awk '{sub(/\(/,"",$4); print $4}'`
    else
        echo "Empty Response from Hadoop"
        exit 3 # UNKNOWN, so Nagios doesn't treat a missing report as OK
    fi
}
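To see what those grep/awk pipelines actually pull out, here is a standalone sketch that runs them against a sample of the `hadoop dfsadmin -report` header as it looks in 0.20.2 (the numbers are made up for illustration):

```shell
# Sample of the summary section of "hadoop dfsadmin -report" (0.20.2-style);
# the values below are invented for demonstration.
report='Configured Capacity: 100000000000 (93.13 GB)
Present Capacity: 80000000000 (74.51 GB)
DFS Remaining: 60000000000 (55.88 GB)
DFS Used: 20000000000 (18.63 GB)
DFS Used%: 25%
-------------------------------------------------
Datanodes available: 4 (4 total, 0 dead)'

# Same extraction logic as get_vals(): grab the human-readable value,
# stripping the leading "(" where the size appears in parentheses.
dn_avail=`echo "$report" | grep -m1 "Datanodes available:" | awk '{print $3}'`
dfs_used=`echo "$report" | grep -m1 "DFS Used:" | awk '{sub(/\(/,"",$4); print $4}'`
dfs_used_p=`echo "$report" | grep -m1 "DFS Used%:" | awk '{print $3}'`
dfs_total=`echo "$report" | grep -m1 "Present Capacity:" | awk '{sub(/\(/,"",$4); print $4}'`

echo "$dn_avail $dfs_total $dfs_used $dfs_used_p"
```

Note that `grep -m1 "DFS Used:"` does not match the `DFS Used%:` line (the `%` breaks the literal match), which is why both patterns can coexist.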
do_output() {
    output="Datanodes up and running: ${dn_avail}, DFS total: ${dfs_total} TB, DFS used: ${dfs_used} TB (${dfs_used_p})"
}

do_perfdata() {
    perfdata="'datanodes_available'=${dn_avail} 'dfs_total'=${dfs_total} 'dfs_used'=${dfs_used}"
}
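These two pieces end up on one line in the standard Nagios plugin format: the human-readable text, a pipe character, then the perfdata. A minimal sketch with hard-coded sample values:

```shell
# Sample values standing in for what get_vals() would extract
dn_avail=4
dfs_total=74.51
dfs_used=18.63
dfs_used_p="25%"

output="Datanodes up and running: ${dn_avail}, DFS total: ${dfs_total} TB, DFS used: ${dfs_used} TB (${dfs_used_p})"
perfdata="'datanodes_available'=${dn_avail} 'dfs_total'=${dfs_total} 'dfs_used'=${dfs_used}"

# Nagios plugin convention: "<text> | <perfdata>", with the exit code
# (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN) carrying the actual state.
plugin_line="${output} | ${perfdata}"
echo "$plugin_line"
```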
Here is how I defined the new check command in /etc/nagios/nrpe.cfg:
command[check_hadoop-dfs]=/usr/lib64/nagios/plugins/check_hadoop-dfs.sh -s /usr/lib64/nagios/plugins
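On the Nagios server side, the check is then wired up through check_nrpe. Something like the following service definition works, assuming a `check_nrpe` command is already defined (the host name and service template here are placeholders for your own setup):

```
define service {
    use                   generic-service
    host_name             namenode01
    service_description   Hadoop DFS
    check_command         check_nrpe!check_hadoop-dfs
}
```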
Even with the command defined, NRPE was not able to get the results from Hadoop, because sudo refuses to run without a TTY by default and the NRPE daemon has none. I ended up commenting out the following line in the sudoers file to get it to work:
#Defaults requiretty
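If you'd rather not disable requiretty globally, sudoers also lets you scope the override to a single user. A sketch, assuming the NRPE daemon runs as the `nrpe` user (the user name depends on your package):

```
# Lift the TTY requirement only for the NRPE daemon's user
Defaults:nrpe !requiretty
```

The nrpe user will additionally need a NOPASSWD rule for whatever command the plugin runs via sudo.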