Red Hat Advanced Server And The Logical Volume Manager Cookbook
TABLE OF CONTENTS

Business Requirements
    Cluster Requirements
    Hardware Requirements
    Software Requirements
Installation Procedures
    Kernel Modifications
    Physical Disk Layout
    LVM Configuration
Cluster Configuration
    Node 1: CMIL012
    Node 2: CMIL013
    Floating IP Addresses
Service Configuration
Appendix A – Cluster Diagram
Appendix B – Cluster Definition
Appendix C – Startup Script Modifications
    Original File: /etc/rc.d/rc.sysinit
    Modified File: /etc/rc.d/rc.sysinit
Appendix D – Cluster Scripts
Appendix E – /etc/cluster.conf
Appendix F – /etc/grub.conf
Appendix G – Rotating The Cluster Logfile
Business Requirements

Cluster Requirements

· The first requirement for this project was that it had to be able to replace a single IBM RS/6000 server with a clustered Linux solution.
· The "services" that were running on the RS/6000 needed to be distributed between both nodes in the cluster.
Hardware Requirements

· Compaq 4100 Storage Array – 10 10K RPM disks @ 72GB each in a RAID 5, and 2 10K RPM disks @ 36GB in a RAID 1+0. These disks were then carved into 30 logical volumes on the Compaq array. The Compaq 4100 supports 32 devices on the SCSI bus, so you have to leave 2 IDs open for the 2 controllers.
· 2 Compaq DL580 servers, each with 4 CPUs @ 700MHz, 2.5GB RAM, 2 single-port network cards, and one Compaq fiber card.
· Compaq fiber hub.
· 4 fiber cables to connect the disks and servers.
· 1 crossover network cable for the dedicated network heartbeat.
Software Requirements

· Red Hat Advanced Server 2.1, patched to the latest level.
· LVM 1.3 RPM.
Installation Procedures

Kernel Modifications

1. Install Red Hat Advanced Server (RHAS) on both of the Compaq servers.
2. Download and install all of the latest patches for RHAS.
3. Recompile the kernel to be LVM aware. I used the following steps:
   a. cd /usr/src/linux-2.4
   b. make mrproper
   c. make clean
   d. cp configs/kernel-2.4.9-i686-smp.config .config
   e. make xconfig
   f. make dep
   g. depmod
   h. make modules
   i. make bzImage
   j. make modules_install
   k. make install
   l. cp vmlinux /boot/vmlinux-2.4.9-e.12custom
   m. cd /boot
   n. mkinitrd /boot/initrd-2.4.9-e.12custom.img 2.4.9-e.12custom
4. Edit the /etc/grub.conf file to contain the stanza for the new kernel (see Appendix F).
5. Install the latest LVM RPM.
6. Edit /etc/rc.d/rc.sysinit and comment out the LVM startup (see Appendix C). You want to let the clustering scripts take care of the LVM disks. NOTE: THIS IS EXTREMELY IMPORTANT. IF YOU DO NOT DO THIS, YOU WILL CORRUPT YOUR DATA!!
7. Reboot the machine with the LVM-aware kernel.
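After the reboot in step 7, confirm that the LVM-aware kernel is actually running before doing any disk work. A minimal check (a sketch; /proc/lvm is the interface exposed by the LVM 1.x kernel driver):

uname -r        # should print 2.4.9-e.12custom
ls /proc/lvm    # present only when the LVM driver is registered with the kernel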
Physical Disk Layout

Device    | Size (GB) | Volume Group
----------|-----------|-------------
/dev/sda  | 0.64      | None (Raw)
/dev/sdb  | 8.00      |
/dev/sdc  | 8.00      |
/dev/sdd  | 8.00      | /dev/vgfti
/dev/sde  | 8.00      | /dev/vgdba
/dev/sdf  | 1.90      |
/dev/sdg  | 40.00     | /dev/vgcdr
/dev/sdh  | 40.00     | /dev/vgcdr
/dev/sdi  | 40.00     | /dev/vgcdr
/dev/sdj  | 40.00     | /dev/vgcdr
/dev/sdk  | 40.00     | /dev/vgcdr
/dev/sdl  | 40.00     | /dev/vgfti
/dev/sdm  | 40.00     | /dev/vgfti
/dev/sdn  | 40.00     | /dev/vgfti
/dev/sdo  | 40.00     | /dev/vgfti
/dev/sdp  | 40.00     | /dev/vgfti
/dev/sdq  | 40.00     |
/dev/sdr  | 20.00     | /dev/vgcdr
/dev/sds  | 20.00     | /dev/vgfti
/dev/sdt  | 20.00     | /dev/vgfti
/dev/sdu  | 20.00     | /dev/vgfti
/dev/sdv  | 20.00     |
/dev/sdw  | 10.00     | /dev/vgdba
/dev/sdx  | 10.00     | /dev/vgdba
/dev/sdy  | 10.00     | /dev/vgdba
/dev/sdz  | 10.00     |
/dev/sdaa | 10.00     |
/dev/sdab | 10.00     |
/dev/sdac | 4.00      |
/dev/sdad | 6.53      |
----------|-----------|-------------
Total Disk Space | 645.07 |
LVM Configuration

1. Create physical volumes on the new disks, except for the disk that is to contain the raw devices for the clustering software (/dev/sda). For this, I did a simple for loop:
   for i in b c d e f g h i j k l m n o p q r s t u v w x y z aa ab ac ad; do pvcreate /dev/sd$i; done
2. vgcreate -s 8M vgcdr /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdr
3. lvcreate -L 10G -n cdr vgcdr
4. lvcreate -L 200G -n cdr01 vgcdr
5. vgcreate -s 8M vgfti /dev/sdd /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sds /dev/sdt /dev/sdu
6. lvcreate -L 10G -n fti vgfti
7. lvcreate -L 150G -n fti01 vgfti
8. lvcreate -L 20G -n vad01 vgfti
9. lvcreate -L 5G -n fulf vgfti
10. lvcreate -L 75G -n work vgfti
11. vgcreate vgdba /dev/sde /dev/sdw /dev/sdx /dev/sdy
12. lvcreate -L 30G -n dba01 vgdba
13. mke2fs -j /dev/vgcdr/cdr
14. mke2fs -j /dev/vgcdr/cdr01
15. mke2fs -j /dev/vgfti/fti
16. mke2fs -j /dev/vgfti/fti01
17. mke2fs -j /dev/vgfti/vad01
18. mke2fs -j /dev/vgfti/fulf
19. mke2fs -j /dev/vgfti/work
20. mke2fs -j /dev/vgdba/dba01
21. Create the mount points:
    for a in cdr cdr01 fti fti01 vad01 fulf work dba01; do mkdir /$a; done
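Before handing the volume groups over to the cluster, it is worth a quick sanity check that everything was created as intended. A minimal sketch using the LVM 1.x tools installed above:

vgdisplay vgcdr vgfti vgdba   # volume group sizes, extent size, free extents
lvscan                        # every logical volume should be listed as ACTIVE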
Things To Note:

· Don't put the mounts into /etc/fstab. Let the cluster script take care of them.
· Don't activate the same volume group on 2 different machines at the same time, or you will corrupt the Volume Group Descriptor Area (VGDA).
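If you ever need to activate a volume group by hand, check the other node first. A minimal sketch, assuming the peer node is cmil013 and remote shell access between the nodes is configured (both assumptions, not part of the original procedure):

# On the peer, confirm the volume group is not active there
# (the exact vgdisplay output format depends on your LVM release)
ssh cmil013 'vgdisplay vgcdr | grep "VG Status"'
# Only then activate it locally
vgscan && vgchange -a y vgcdr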
Cluster Configuration

Node 1: CMIL012

Interface | IP Address   | Netmask       | Broadcast     | Network
eth0      | 10.120.9.131 | 255.255.255.0 | 10.120.9.255  | 10.120.9.0
eth1      | 192.168.0.1  | 255.255.255.0 | 192.168.0.255 | 192.168.0.0
Node 2: CMIL013

Interface | IP Address   | Netmask       | Broadcast     | Network
eth0      | 10.120.9.132 | 255.255.255.0 | 10.120.9.255  | 10.120.9.0
eth1      | 192.168.0.2  | 255.255.255.0 | 192.168.0.255 | 192.168.0.0
Floating IP Addresses

DNS Alias (Package)     | IP Address   | Primary Node
CMILC01 (Cluster Alias) | 10.120.9.130 | CMILC01
TRANS_CDR (cluster_cdr) | 10.120.9.133 | CMIL012
TRANS_FTI (cluster_fti) | 10.120.9.134 | CMIL013
TRANS_DBA (cluster_dba) | 10.120.9.135 | CMIL012
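These names normally come from DNS. If you also want them resolvable while DNS is unreachable, a matching /etc/hosts block (a sketch, not part of the original setup; keep it identical on both nodes) would look like this:

10.120.9.130   cmilc01     # cluster alias
10.120.9.133   trans_cdr   # cluster_cdr service address
10.120.9.134   trans_fti   # cluster_fti service address
10.120.9.135   trans_dba   # cluster_dba service address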
Service Configuration

For this cluster of machines, there are 3 services that can be failed back and forth. Each of the services is named cluster_<service> and has an associated volume group called vg<service>. The following is the output from the "cluadmin -- service show config cluster_<service>" commands.
name: cluster_cdr
preferred node: cmil012
relocate: yes
user script: /usr/local/bin/cluster_cdr
monitor interval: 300
IP address 0: 10.120.9.133
  netmask 0: 255.255.255.0
  broadcast 0: 10.120.9.255

name: cluster_dba
preferred node: cmil012
relocate: yes
user script: /usr/local/bin/cluster_dba
monitor interval: 300
IP address 0: 10.120.9.135
  netmask 0: 255.255.255.0
  broadcast 0: 10.120.9.255

name: cluster_fti
preferred node: cmil013
relocate: yes
user script: /usr/local/bin/cluster_fti
monitor interval: 300
IP address 0: 10.120.9.134
  netmask 0: 255.255.255.0
  broadcast 0: 10.120.9.255
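To see which node currently owns each of these services, use the text-mode clustat command (recommended again in the notes below):

clustat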
· Build all of the services on a single node to begin with. Get the scripts working there before trying to move them to a different node. It will make debugging that much easier.
· Make sure that you have adequate logging in your script so that you can determine what went wrong. If you look at the script below, it does a tremendous amount of logging. The start and stop routines call the cluster logging utility with a loglevel of 6 (informational), and the status routine calls it with a loglevel of 7 (debug). You can control the level of logging in the logfile with the "cluadmin -- cluster loglevel" command.
· If you are going to do any logical volume and/or volume group work, verify that the volume group is only active on one system at a time. If it is active in both places, it can cause data loss in the volume group.
· The Java GUI, while a nicety, consumes too much CPU. I recommend pointing people at the clustat command instead.
· Take weekly backups of the quorum data if you are changing it often. You can use them to rebuild the quorum partitions if they get corrupted for some reason. A simple shell script like the following, run via cron, accomplishes this (a sample cron entry follows the script):
#!/bin/bash
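# Save a dated copy of the cluster configuration (quorum data) to /root.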
DATE=`date +"%Y%m%d"`
CLUADMIN=`type -p cluadmin`
FILENAME="/root/cluster.conf.${DATE}"
${CLUADMIN} -- cluster saveas ${FILENAME}
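To run it weekly from cron, an entry like the following will do (a sketch; it assumes the script above was saved as /root/save_cluster_config.sh and made executable):

# Every Sunday at 02:00, save a copy of the cluster configuration
0 2 * * 0 /root/save_cluster_config.sh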
· Configure WU-FTP on both nodes in the cluster.
· Make sure the customized cluster scripts are in sync on both servers (a sync sketch follows this list).
· Make sure all of the mount points exist on both servers.
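Keeping the scripts and mount points in sync can itself be scripted. A minimal sketch, assuming the peer node is reachable as cmil013 and passwordless ssh/rsync is set up between the nodes (both are assumptions, not part of the original setup):

#!/bin/bash
# Push the customized cluster scripts to the peer node.
PEER="cmil013"
for SCRIPT in cluster_cdr cluster_dba cluster_fti; do
    rsync -av /usr/local/bin/${SCRIPT} ${PEER}:/usr/local/bin/
done
# Recreate the mount points on the peer so a failover can mount the volumes.
ssh ${PEER} 'for a in cdr cdr01 fti fti01 vad01 fulf work dba01; do mkdir -p /$a; done'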
Appendix C – Startup Script Modifications

You do not want the system to automatically try to mount the LVM-controlled disks. This WILL cause corruption in the VGDA, and you will have to reboot the box to recover. To prevent this, /etc/rc.d/rc.sysinit was modified to bypass the LVM startup on each of the nodes in the cluster.
Original File: /etc/rc.d/rc.sysinit

# Remount the root filesystem read-write.
state=`awk '/(^\/dev\/root| \/ )/ { print $4 }' /proc/mounts`
[ "$state" != "rw" ] && \
  action $"Remounting root filesystem in read-write mode: " mount -n -o remount,rw /

# LVM initialization
if [ -e /proc/lvm -a -x /sbin/vgchange -a -f /etc/lvmtab ]; then
    action $"Setting up Logical Volume Management:" /sbin/vgscan && /sbin/vgchange -a y
fi

# Clear mtab
> /etc/mtab
Modified File: /etc/rc.d/rc.sysinit

# Remount the root filesystem read-write.
state=`awk '/(^\/dev\/root| \/ )/ { print $4 }' /proc/mounts`
[ "$state" != "rw" ] && \
  action $"Remounting root filesystem in read-write mode: " mount -n -o remount,rw /

# LVM initialization
#if [ -e /proc/lvm -a -x /sbin/vgchange -a -f /etc/lvmtab ]; then
#    action $"Setting up Logical Volume Management:" /sbin/vgscan && /sbin/vgchange -a y
#fi

# Clear mtab
> /etc/mtab
Appendix D – Cluster Scripts

The script below is used by the cluster package to start, stop, and query one of the services that run under the control of the cluster.
#!/bin/bash
#----------------------------------------------------------------------------#
# cluster_cdr: This script handles the failover of the packages on the      #
# Linux cluster.                                                             #
#----------------------------------------------------------------------------#
# 20030310
# #
#----------------------------------------------------------------------------#
#----------------------------------------------------------------------------#
# Set the log level variables for the clulog command. #
#----------------------------------------------------------------------------#
LOG_EMERG=0 # system is unusable
LOG_ALERT=1 # action must be taken immediately
LOG_CRIT=2 # critical conditions
LOG_ERR=3 # error conditions
LOG_WARNING=4 # warning conditions
LOG_NOTICE=5 # normal but significant condition
LOG_INFO=6 # informational
LOG_DEBUG=7 # debug-level messages
PACKAGE="CLUSTER_CDR" # Name of the package referenced by this script.
PKG_VG="vgcdr" # Volume Group For The Package
PKG_LVS="cdr cdr01" # Logical Volumes In The Volume Group
TXMIT_START="/usr/local/bin/trans_cdr" # Start/Stop script for the application
PERFORM_FSCK=1 # 1=Do an FSCK, 0 = don't do an fsck
#----------------------------------------------------------------------------#
# Find the executables required for this script. #
#----------------------------------------------------------------------------#
CLULOG=`type -p clulog`
VGSCAN=`type -p vgscan`
VGCHANGE=`type -p vgchange`
VGDISPLAY=`type -p vgdisplay`
LVCHANGE=`type -p lvchange`
LVDISPLAY=`type -p lvdisplay`
DFCMD=`type -p df`
MOUNT=`type -p mount`
UMOUNT=`type -p umount`
E2FSCK=`type -p e2fsck`
FSCKOPTS="-y"   # e2fsck treats -p and -y as mutually exclusive; -y answers yes to all prompts
SYNCCMD=`type -p sync`
TCH_CMD=`type -p touch`
RM_CMD=`type -p rm`
#----------------------------------------------------------------------------#
# This routine is to format "standardized" messages to the logfile. #
#----------------------------------------------------------------------------#
log_msg () {
MSG=$1
${CLULOG} -s ${MSG_LEVEL} "${PACKAGE} ==> ${MSG}"
return 0;
}
#----------------------------------------------------------------------------#
# This routine will scan the disks and rebuild the lvm configuration. It #
# will then activate the volume group. Then it will mount the logical #
# volumes under the correct mountpoints. #
#----------------------------------------------------------------------------#
start () {
MSG_LEVEL=6
log_msg "Starting the package"
log_msg "Creating the lock file"
${TCH_CMD} /var/run/${PACKAGE}
RC=$?
log_msg "Return Code: ${RC} From ${TCH_CMD} /var/run/${PACKAGE}"
if [ ${RC} -gt 0 ] ; then return ${RC}; fi
log_msg "Executing: ${VGSCAN}"
${VGSCAN}
RC=$?
log_msg "Return Code: ${RC} From ${VGSCAN}"
if [ ${RC} -gt 0 ] ; then return ${RC}; fi
log_msg "Executing: ${VGCHANGE} -a y ${PKG_VG}"
${VGCHANGE} -a y ${PKG_VG}
RC=$?
log_msg "Return Code: ${RC} From ${VGCHANGE} -a y ${PKG_VG}"
if [ ${RC} -gt 0 ] ; then return ${RC}; fi
for DISKS in `echo ${PKG_LVS}`
do log_msg "Calling: mount_vols $DISKS"
mount_vols $DISKS
RC=$?
log_msg "Return Code: ${RC} From mount_vols $DISKS"
if [ ${RC} -gt 0 ] ; then return ${RC}; fi
done
log_msg "Calling: ${TXMIT_START} start script"
${TXMIT_START} start
RC=$?
log_msg "Return Code: ${RC} From ${TXMIT_START} start"
if [ ${RC} -gt 0 ] ; then return ${RC}; fi
log_msg "Executing: ${RM_CMD} -f /var/run/${PACKAGE}"
${RM_CMD} -f /var/run/${PACKAGE}
RC=$?
log_msg "Return Code: ${RC} From ${RM_CMD} -f /var/run/${PACKAGE}"
if [ ${RC} -gt 0 ] ; then return ${RC}; fi
log_msg "Lock File Has Been Removed"
log_msg "Package Startup Complete"
return 0
}
#----------------------------------------------------------------------------#
# This routine will perform a file system integrity check and then mount the #
# specified logical volume to the designated mount point. #
#----------------------------------------------------------------------------#
mount_vols () {
lvol=$1
if [ "${PERFORM_FSCK}" -eq "1" ] ; then
log_msg "Executing: ${E2FSCK} ${FSCKOPTS} ${lvol}"
${E2FSCK} ${FSCKOPTS} /dev/${PKG_VG}/${lvol}
RC=$?
log_msg "Return Code: ${RC} From ${E2FSCK} ${FSCKOPTS} ${lvol}"
if [ ${RC} -gt 0 ] ; then return ${RC}; fi
fi
log_msg "Executing: ${MOUNT} /dev/${PKG_VG}/${lvol} /${lvol}"
${MOUNT} /dev/${PKG_VG}/${lvol} /${lvol}
RC=$?
log_msg "Return Code: ${RC} From ${MOUNT} /dev/${PKG_VG}/${lvol} /${lvol}"
if [ ${RC} -gt 0 ] ; then return ${RC}; fi
return 0;
}
#----------------------------------------------------------------------------#
# This routine will unmount the specified mount point. #
#----------------------------------------------------------------------------#
do_umount () {
lvol=$1
log_msg "Executing: ${UMOUNT} /${lvol}"
${UMOUNT} /${lvol}
RC=$?
log_msg "Return Code: ${RC} From ${UMOUNT} /${lvol}"
if [ ${RC} -gt 1 ] ; then return ${RC}; fi
log_msg "Executing: ${LVCHANGE} -a n /dev/${PKG_VG}/${lvol}"
${LVCHANGE} -a n /dev/${PKG_VG}/${lvol}
RC=$?
log_msg "Return Code: ${RC} From ${LVCHANGE} -a n /dev/${PKG_VG}/${lvol}"
if [ ${RC} -gt 1 ] ; then return ${RC}; fi
return 0
}
#----------------------------------------------------------------------------#
# This routine will unmount the filesystems contained in the volume #
# group and then mark the volume group inactive so that it can be activated #
# on the new node of the cluster. #
#----------------------------------------------------------------------------#
stop () {
MSG_LEVEL=6
log_msg "Stopping The Package"
log_msg "Calling ${TXMIT_START} stop"
${TXMIT_START} stop
RC=$?
log_msg "Return Code: ${RC} From ${TXMIT_START} stop"
if [ ${RC} -gt 0 ] ; then return ${RC}; fi
log_msg "Executing: ${SYNCCMD} to flush buffer cache"
for i in $(seq 1 3)
do ${SYNCCMD}
RC=$?
done
log_msg "Return Code: ${RC} From ${SYNCCMD}"
if [ ${RC} -gt 0 ] ; then return ${RC}; fi
for DISKS in `echo ${PKG_LVS}`
do log_msg "Calling: do_umount $DISKS"
do_umount $DISKS
RC=$?
log_msg "Return Code: ${RC} From do_umount $DISKS"
if [ ${RC} -gt 0 ] ; then return ${RC}; fi
done
log_msg "Executing: ${VGCHANGE} -a n ${PKG_VG}"
${VGCHANGE} -a n ${PKG_VG}
RC=$?
log_msg "Return Code: ${RC} From ${VGCHANGE} -a n ${PKG_VG}"
if [ ${RC} -gt 0 ] ; then return ${RC}; fi
log_msg "Package Shutdown Complete"
return 0
}
#----------------------------------------------------------------------------#
# A status command is required by the cluster software so that it can check #
# the health of a service. Therefore, we'll just issue a df for the         #
# filesystems and a vgdisplay for the volume group.                         #
#----------------------------------------------------------------------------#
status () {
MSG_LEVEL=7
log_msg "Check Package Status"
if [ -e /var/run/${PACKAGE} ] ; then
log_msg "Package Has Not Completely Started Yet."
return 0
fi
log_msg "Executing: ${VGDISPLAY} -D ${PKG_VG}"
${VGDISPLAY} -D ${PKG_VG}
RC=$?
log_msg "Return Code: ${RC} From ${VGDISPLAY} -D ${PKG_VG}"
if [ ${RC} -gt 0 ] ; then return ${RC}; fi
for DISKS in `echo ${PKG_LVS}`
do log_msg "Executing: ${LVDISPLAY} /dev/${PKG_VG}/$DISKS"
${LVDISPLAY} /dev/${PKG_VG}/$DISKS
RC=$?
log_msg "Return Code: ${RC} From ${LVDISPLAY} /dev/${PKG_VG}/$DISKS"
if [ ${RC} -gt 0 ] ; then return ${RC}; fi
log_msg "Executing: ${DFCMD} -h /$DISKS"
${DFCMD} -h /$DISKS
RC=$?
log_msg "Return Code: ${RC} From ${DFCMD} -h /$DISKS"
if [ ${RC} -gt 0 ] ; then return ${RC}; fi
done
log_msg "Calling: ${TXMIT_START} status"
${TXMIT_START} status
RC=$?
log_msg "Return Code: ${RC} From ${TXMIT_START} status"
if [ ${RC} -gt 0 ] ; then return ${RC}; fi
log_msg "Package Status Successful"
return 0
}
#----------------------------------------------------------------------------#
# This routine will stop and restart the process. #
#----------------------------------------------------------------------------#
restart () {
stop
start
RETVAL=$?
return $RETVAL
}
RETVAL=0
case "$1" in
start)
    start
    RETVAL=$?
    ;;
stop)
    stop
    RETVAL=$?
    ;;
status)
    status
    RETVAL=$?
    ;;
restart)
    restart
    RETVAL=$?
    ;;
*)
    echo $"Usage: $0 { start | stop | status | restart }"
    RETVAL=1
esac
exit $RETVAL
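Since the cluster service manager invokes this script with a single argument, you can exercise a package by hand on the node that currently owns it:

/usr/local/bin/cluster_cdr start
/usr/local/bin/cluster_cdr status
/usr/local/bin/cluster_cdr stop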
Appendix E – /etc/cluster.conf

[cluhbd]
logLevel = 6
[clupowerd]
logLevel = 6
[cluquorumd]
logLevel = 6
[clurmtabd]
logLevel = 6
[cluster]
alias_ip = 10.120.9.130
name = cmilc01
timestamp = 1047472053
[clusvcmgrd]
logLevel = 6
[database]
version = 2.0
[members]
  start member0
    start chan0
      name = cmil012h
      type = net
    end chan0
    id = 0
    name = cmil012
    powerSwitchIPaddr = cmil012
    powerSwitchPortName = unused
    quorumPartitionPrimary = /dev/raw/raw1
    quorumPartitionShadow = /dev/raw/raw2
  end member0
  start member1
    start chan0
      name = cmil013h
      type = net
    end chan0
    id = 1
    name = cmil013
    powerSwitchIPaddr = cmil013
    powerSwitchPortName = unused
    quorumPartitionPrimary = /dev/raw/raw1
    quorumPartitionShadow = /dev/raw/raw2
  end member1
[powercontrollers]
  start powercontroller0
    IPaddr = cmil012
    login = unused
    passwd = unused
    type = null
  end powercontroller0
  start powercontroller1
    IPaddr = cmil013
    login = unused
    passwd = unused
    type = null
  end powercontroller1
[services]
  start service0
    checkInterval = 300
    name = cluster_cdr
    start network0
      broadcast = 10.120.9.255
      ipAddress = 10.120.9.133
      netmask = 255.255.255.0
    end network0
    preferredNode = cmil012
    relocateOnPreferredNodeBoot = yes
    userScript = /usr/local/bin/cluster_cdr
  end service0
  start service1
    checkInterval = 300
    name = cluster_dba
    start network0
      broadcast = 10.120.9.255
      ipAddress = 10.120.9.135
      netmask = 255.255.255.0
    end network0
    preferredNode = cmil012
    relocateOnPreferredNodeBoot = yes
    userScript = /usr/local/bin/cluster_dba
  end service1
  start service2
    checkInterval = 300
    name = cluster_fti
    start network0
      broadcast = 10.120.9.255
      ipAddress = 10.120.9.134
      netmask = 255.255.255.0
    end network0
    preferredNode = cmil013
    relocateOnPreferredNodeBoot = yes
    userScript = /usr/local/bin/cluster_fti
  end service2
Appendix F – /etc/grub.conf

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/ida/c0d0p7
#          initrd /initrd-version.img
#boot=/dev/ida/c0d0
default=0
fallback=1
timeout=10
splashimage=(hd0,0)/grub/splash.xpm.gz
title Red Hat Advanced Server - LVM Kernel (2.4.9-e.12custom)
        root (hd0,0)
        kernel /vmlinuz-2.4.9-e.12custom ro root=/dev/ida/c0d0p7
        initrd /initrd-2.4.9-e.12custom.img
title Red Hat Linux (2.4.9-e.12smp)
        root (hd0,0)
        kernel /vmlinuz-2.4.9-e.12smp ro root=/dev/ida/c0d0p7
        initrd /initrd-2.4.9-e.12smp.img
title Red Hat Linux (2.4.9-e.12debug)
        root (hd0,0)
        kernel /vmlinuz-2.4.9-e.12debug ro root=/dev/ida/c0d0p7
        initrd /initrd-2.4.9-e.12debug.img
title Red Hat Linux (2.4.9-e.12)
        root (hd0,0)
        kernel /vmlinuz-2.4.9-e.12 ro root=/dev/ida/c0d0p7
        initrd /initrd-2.4.9-e.12.img
title Red Hat Linux Advanced Server (2.4.9-e.3smp)
        root (hd0,0)
        kernel /vmlinuz-2.4.9-e.3smp ro root=/dev/ida/c0d0p7
        initrd /initrd-2.4.9-e.3smp.img
title Red Hat Linux Advanced Server-up (2.4.9-e.3)
        root (hd0,0)
        kernel /vmlinuz-2.4.9-e.3 ro root=/dev/ida/c0d0p7
        initrd /initrd-2.4.9-e.3.img
Appendix G – Rotating The Cluster Logfile

If you want the cluster log to be rotated correctly, the best place to do it is with the rest of the syslog files. The file to edit is /etc/logrotate.d/syslog. Here is the syslog file before it was edited.

/var/log/messages /var/log/secure /var/log/maillog \
/var/log/spooler /var/log/boot.log /var/log/cron {
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}
Here is the syslog file after it was edited.

/var/log/messages /var/log/secure /var/log/maillog \
/var/log/spooler /var/log/boot.log /var/log/cron \
/var/log/cluster {
    sharedscripts
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}
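To verify the edit without rotating anything, logrotate's debug mode parses the configuration and reports what it would do:

logrotate -d /etc/logrotate.d/syslog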