Add new node in 11gR2 RAC
Prerequisites:
1. Create users and groups
groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 504 asmadmin
groupadd -g 506 asmdba
groupadd -g 507 asmoper
useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 502 -g oinstall -G dba,asmdba,asmadmin,asmoper oracle
passwd grid
passwd oracle
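The UIDs and GIDs on the new node must match those on the existing cluster nodes. A quick way to confirm, for example:
# id grid
uid=501(grid) gid=501(oinstall) groups=501(oinstall),504(asmadmin),506(asmdba),507(asmoper)
# id oracle
uid=502(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),504(asmadmin),506(asmdba),507(asmoper)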
2. Update the /etc/hosts file so that it is the same on all nodes, similar to the following example:
#eth0 Public
192.168.0.101 node1.clientit.com node1
192.168.0.102 node2.clientit.com node2
192.168.0.108 node3.clientit.com node3
#eth1 Private
192.168.0.103 node1-priv.clientit.com node1-priv
192.168.0.104 node2-priv.clientit.com node2-priv
192.168.0.109 node3-priv.clientit.com node3-priv
#VIP IP
192.168.0.105 node1-vip.clientit.com node1-vip
192.168.0.106 node2-vip.clientit.com node2-vip
192.168.0.110 node3-vip.clientit.com node3-vip
#SCAN IP
192.168.0.107 rac-scan.clientit.com rac-scan
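As an optional sanity check on the new node, confirm that the names resolve through /etc/hosts and that the other nodes are reachable, for example:
[root@node3 ~]# getent hosts rac-scan
192.168.0.107 rac-scan.clientit.com rac-scan
[root@node3 ~]# ping -c 2 node1-priv
[root@node3 ~]# ping -c 2 node2-priv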
3. Configuring Kernel Parameters
i. As the root user, add the following kernel parameter settings to /etc/sysctl.conf. If any of these parameters are already set in /etc/sysctl.conf, keep the higher of the two values.
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6553600
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
ii. Run the following as the root user to allow the new kernel parameters to be put in place:
# /sbin/sysctl -p
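Optionally, read a couple of the values back to confirm they are in effect:
# /sbin/sysctl -n kernel.sem
250 32000 100 128
# /sbin/sysctl -n net.ipv4.ip_local_port_range
9000 65500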
4. Set shell limits for the oracle and grid users
To improve the performance of the software on Linux systems, you must increase the shell limits for the oracle and grid users.
i. Add the following lines to the /etc/security/limits.conf file:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
ii. Add or edit the following line in the /etc/pam.d/login file, if it does not already exist: session required pam_limits.so
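To confirm the limits are applied, open a new login session as each user and check them, for example:
[oracle@node3 ~]$ ulimit -Sn; ulimit -Hn
1024
65536
[oracle@node3 ~]$ ulimit -Su; ulimit -Hu
2047
16384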
5. Create the Oracle Inventory Directory
To create the Oracle Inventory directory, enter the following commands as the root user:
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/oraInventory
chmod -R 775 /u01/app/oraInventory
6. Creating the Oracle Grid Infrastructure Home Directory
To create the Grid Infrastructure home directory, enter the following commands as the root user:
mkdir -p /u01/11.2.0/grid
chown -R grid:oinstall /u01/11.2.0/grid
chmod -R 775 /u01/11.2.0/grid
7. Creating the Oracle Base Directory
To create the Oracle Base directory, enter the following commands as the root user:
mkdir -p /u01/app/oracle
mkdir /u01/app/oracle/cfgtoollogs
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle
8. Creating the Oracle RDBMS Home Directory
To create the Oracle RDBMS Home directory, enter the following commands as the root user:
mkdir -p /u01/app/oracle/product/11.2.0/dbhome_1
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/dbhome_1
chmod -R 775 /u01/app/oracle/product/11.2.0/dbhome_1
chmod -R 755 /u01
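Optionally verify ownership and permissions before moving on:
# ls -ld /u01/app/oraInventory /u01/11.2.0/grid /u01/app/oracle /u01/app/oracle/product/11.2.0/dbhome_1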
9. Prepare the shared storage for Oracle RAC
Client [iscsi initiator] Configuration:
Here we are using Red Hat Enterprise Linux 5 as the client. To use the Openfiler target as a disk, the client must be configured as an iSCSI initiator, so first verify that the iscsi-initiator-utils package is installed.
Log in to the client system as root:
[root@node3 ~]# rpm -qa | grep -i iscsi
iscsi-initiator-utils-6.2.0.871-0.10.el5
[root@node3~]# service iscsid restart
Turning off network shutdown. [ OK ]
Starting iSCSI daemon: [ OK ]
[root@node3 ~]# service iscsi restart
[root@node3 ~]# chkconfig iscsid on
[root@node3 ~]# chkconfig iscsi on
[root@node3 ~]# chkconfig --list | grep iscsi
iscsi 0:off 1:off 2:on 3:on 4:on 5:on 6:off
iscsid 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Search for the iSCSI target (192.168.2.160 is the Openfiler server in this example):
[root@node3 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.2.160
192.168.2.160:3260,1 iqn.2006-01.com.openfiler:test
The discovered target is returned as shown above.
Manually log in to the iSCSI target(s):
[root@node3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:test -p 192.168.2.160 --login
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:test, portal: 192.168.2.160,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:test, portal: 192.168.2.160,3260]: successful
Configure Automatic Login
[root@node3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:test -p 192.168.2.160 --op update -n node.startup -v automatic
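Optionally confirm the active session and the startup setting (the target name and portal are the ones discovered above):
[root@node3 ~]# iscsiadm -m session
tcp: [1] 192.168.2.160:3260,1 iqn.2006-01.com.openfiler:test
[root@node3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:test -p 192.168.2.160 | grep node.startup
node.startup = automatic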
[root@node3~]# fdisk -l
Disk /dev/sdb: 1.0 GB, 1040187392 bytes
32 heads, 62 sectors/track, 1024 cylinders
Units = cylinders of 1984 * 512 = 1015808 bytes
Disk /dev/sdb doesn't contain a valid partition table
#/sbin/partprobe
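After partprobe, check that the same shared LUNs presented to the existing nodes are also visible on the new node (the /dev/sd* names can differ from node to node):
[root@node3 ~]# cat /proc/partitions
[root@node3 ~]# fdisk -l 2>/dev/null | grep '^Disk /dev/sd'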
10. Configuring ASMLib
#/etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmdba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
#/etc/init.d/oracleasm scandisks
#/etc/init.d/oracleasm listdisks
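Because the ASM disks were already labeled from one of the existing nodes, the new node only needs to scan for them; listdisks should then report the same labels that the existing nodes see. For example (the disk names below are placeholders; yours will match what was originally created):
[root@node3 ~]# /etc/init.d/oracleasm listdisks
DATA1
OCRVOTE1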
11. SSH Configuration
1. On Node3
[oracle@node3~]$mkdir .ssh
[oracle@node3~]$chmod 700 .ssh
[oracle@node3~]$cd .ssh
[oracle@node3~]$/usr/bin/ssh-keygen -t rsa
2. On node1
[oracle@node1~]$cd .ssh
[oracle@node1~]$scp authorized_keys node3:/home/oracle/.ssh
3. On node3
[oracle@node3~]$cd .ssh
[oracle@node3~]$cat id_rsa.pub >> authorized_keys
[oracle@node3~]$scp authorized_keys node1:/home/oracle/.ssh
[oracle@node3~]$scp authorized_keys node2:/home/oracle/.ssh
[oracle@node3~]$ssh node1 date
[oracle@node3~]$ssh node2 date
[oracle@node3~]$ssh node3 date
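The same password-less SSH equivalence must also be set up for the 'grid' user across node1, node2 and node3 (repeat steps 1-3 as grid). Optionally, verify user equivalence from an existing node with CVU, for example:
$GI_HOME/bin/cluvfy comp admprv -n node1,node2,node3 -o user_equiv -verbose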
Adding Node Steps:
1. Verify the requirements for cluster node addition using the Cluster Verification Utility (CVU). From an existing cluster node:
$> $GI_HOME/bin/cluvfy stage -post hwos -n <existing and new nodes> -verbose
2. Compare an existing node with the new node(s) to be added:
$> $GI_HOME/bin/cluvfy comp peer -refnode <existing node> -n <new node> -orainv oinstall -osdba dba -verbose
3. Verify the integrity of the cluster and new node by running from an existing cluster node:
$GI_HOME/bin/cluvfy stage -pre nodeadd -n <new node> -fixup -verbose
4. Add the new node by running the following from an existing cluster node:
a. Not using GNS
$GI_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={<new node>}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={<new node VIP>}"
b. Using GNS
$GI_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={<new node>}"
Run the root scripts when prompted.
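For example, with the host names used in this document (new node node3 with VIP node3-vip), the non-GNS form would look like:
$GI_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"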
5. Verify that the new node has been added to the cluster:
$GI_HOME/bin/cluvfy stage -post nodeadd -n <new node> -verbose
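In addition to the CVU check, cluster membership and resource status can be confirmed from any node, for example:
$GI_HOME/bin/olsnodes -n -s
$GI_HOME/bin/crsctl stat res -t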
Phase II - Extending Oracle Database RAC to new cluster node
6. a. Using Clone Process
i. Use ‘tar’ to archive an existing DB home, and extract to the same location on the new node
ii. On the new node run:
perl $ORACLE_HOME/clone/bin/clone.pl '-O"CLUSTER_NODES={<existing node>,<new node>}"' '-O"LOCAL_NODE=<new node>"' ORACLE_BASE=$ORACLE_BASE ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=OraDb11g_home1 '-O-noConfig'
iii. On the existing node where the DB home was cloned, run:
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={<existing node>,<new node>}"
OR
b. Using the addNode.sh process (RECOMMENDED)
i. From an existing node in the cluster as the ‘oracle’ user:
$> $ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={<new node>}"
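Continuing the example for node3:
$ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={node3}"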
7. On the new node, run root.sh when prompted.
8. Set ORACLE_HOME and ensure you are logged in as the 'oracle' user.
Note: Ensure the permissions on the oracle executable are 6751; if not, correct them as the root user:
cd $ORACLE_HOME/bin
chgrp asmadmin oracle
chmod 6751 oracle
ls -l oracle
9. On any existing node, run DBCA ($ORACLE_HOME/bin/dbca) to add the new instance:
$ORACLE_HOME/bin/dbca -silent -addInstance -nodeList <new node> -gdbName <db name> -instanceName <new instance> -sysDBAUserName sys -sysDBAPassword <sys password>
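As an illustration only, for a database named orcl with the new instance orcl3 on node3 (substitute your own database name, instance name and password):
$ORACLE_HOME/bin/dbca -silent -addInstance -nodeList node3 -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword <sys password>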
NOTES: A. Ensure the command is run from an existing node with the same amount of memory or less; otherwise the command will fail due to insufficient memory to support the database memory structures. Also check the log file for actual success, since it can differ from what is displayed on the screen.
B. Any time a patch is applied to the database ORACLE_HOME, ensure the ownership and permissions described above are corrected again after the patch.
10. Verify the administrator privileges on the new node by running on existing node:
$ORACLE_HOME/bin/cluvfy comp admprv -o db_config -d $ORACLE_HOME -n <all nodes list> -verbose
11. For an Admin-Managed Cluster, add the new instance to services, or create additional services. For a Policy-Managed Cluster, verify the instance has been added to an existing server pool.
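For example, for an admin-managed database orcl with an existing service app_svc (both names illustrative), the new instance can be added to the service and the result checked with:
srvctl modify service -d orcl -s app_svc -n -i orcl1,orcl2,orcl3
srvctl status service -d orcl -s app_svc
For a policy-managed database, 'srvctl config srvpool' shows the server pools and their member servers.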
12. Set up OCM in the cloned homes
a. Delete the host subdirectories to remove the previously configured host:
$> rm -rf $ORACLE_HOME/ccr/hosts/*
b. Move (do not copy) core.jar to the pending directory; do this in both the 'grid' and 'oracle' homes:
$> mv $ORACLE_HOME/ccr/inventory/core.jar $ORACLE_HOME/ccr/inventory/pending/core.jar
c. Configure OCM for the cloned home on the new node:
$> $ORACLE_HOME/ccr/bin/configCCR -a