Recently, I came across an issue in 11gR2 RAC (11.2.0.4) where the filesystem holding the Grid Infrastructure home (GRID_HOME) was almost entirely consumed by a single file called crfclust.bdb.
crfclust.bdb is the Cluster Health Monitor (CHM) repository file, which stores the cluster and OS statistics collected by the CHM service ora.crf.
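Before touching anything, it is worth confirming where the repository lives and how big it actually is. A minimal check (oclumon can report the repository path; the du path below assumes the default crf/db/<hostname> directory under GRID_HOME, so adjust it to your environment):
[root@node1]# /grid/product/11.2.0/grid_1/bin/oclumon manage -get reppath
[root@node1]# du -sh /grid/product/11.2.0/grid_1/crf/db/node1/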
I resolved it by resizing the CHM repository with the following commands:
1- Check the root filesystem usage, then query the current CHM repository size:
[root@node1]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_emsdb1-LogVol00
                       99G   85G    9G  90% /
[root@node1]# cd /grid/product/11.2.0/grid_1/bin/
[root@node1 bin]# ./oclumon manage -get repsize
CHM Repository Size = 39688312
Done
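A note on units: the "size" that oclumon reports here is really the retention period in seconds; after the resize below, the same query returns exactly the 259200 seconds we request. Converting the current value with plain shell arithmetic shows why crfclust.bdb had grown so large:
[root@node1 bin]# echo $((39688312 / 86400))
459
That is roughly 459 days' worth of retention being kept on disk.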
2- Change the repository retention to the desired number of seconds, between 3600 (1 hour) and 259200 (3 days). A single resize command run from node1 updates the retention on both node1 and node2:
[root@node1 bin]# ./oclumon manage -repos resize 259200
node1 --> retention check successful
node2 --> retention check successful
New retention is 259200 and will use 4524595200 bytes of disk space
CRS-9115-Cluster Health Monitor repository size change completed on all nodes.
Done
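To put the projected footprint in perspective, a quick calculation from the numbers above (bc is assumed to be installed):
[root@node1 bin]# echo "scale=1; 4524595200/1024/1024/1024" | bc
4.2
[root@node1 bin]# echo $((4524595200 / 259200))
17456
So 3 days of retention costs about 4.2 GB here, i.e. roughly 17 KB of repository per second of retention.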
3- Stop and start the ora.crf resource (shown here on node1; repeat the same on node2):
[root@node1 bin]# ./crsctl stop res ora.crf -init
CRS-2673: Attempting to stop 'ora.crf' on 'node1'
CRS-2677: Stop of 'ora.crf' on 'node1' succeeded
[root@node1 bin]# ./crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'node1'
CRS-2676: Start of 'ora.crf' on 'node1' succeeded
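After the restart you can confirm the resource is back online; crsctl stat against the -init resource should show something like:
[root@node1 bin]# ./crsctl stat res ora.crf -init
NAME=ora.crf
TYPE=ora.crf.type
TARGET=ONLINE
STATE=ONLINE on node1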
4- Verify the new repository size on both nodes:
[root@node1 bin]# ./oclumon manage -get repsize
CHM Repository Size = 259200
Done
[root@node2 bin]# ./oclumon manage -get repsize
CHM Repository Size = 259200
Done
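Restarting ora.crf is what lets the repository shrink to the new retention; you can double-check the file itself (again assuming the default crf/db/<hostname> location under GRID_HOME):
[root@node1 bin]# ls -lh /grid/product/11.2.0/grid_1/crf/db/node1/crfclust.bdb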
5- Check the root (/) filesystem usage again; the space has been reclaimed:
[root@node1]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_emsdb1-LogVol00
                       99G   31G   63G  33% /
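As a final note, if the root filesystem is simply too small to host the CHM repository, 11.2 also supports relocating it instead of shrinking it. A sketch, assuming the reploc option is available on your patch level and using a placeholder target directory:
[root@node1 bin]# ./oclumon manage -repos reploc /u01/chm_repos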