Monday, June 9, 2014

Upgrade 11.2.0.3 cluster to 12.1.0 cluster

Upgrading the 11gR2 grid to the 12c grid

We have a two-node 11.2.0.3 RAC database with 11.2.0.3 Grid Infrastructure as the environment for this practice. The 11.2.0.3 grid can be upgraded directly to 12c as an out-of-place upgrade by creating a new grid home for 12c. The OS we are using in this practice is HP-UX 11.31 on Itanium servers.

Step 1

Check the cluster status before starting the upgrade and make sure the necessary services are running
oragrid @ db01/prod01/oracle/11.2.0/grid/bin >./crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.DATA.dg    ora....up.type 0/5    0/     ONLINE    ONLINE    db01
ora.FLASH.dg   ora....up.type 0/5    0/     ONLINE    ONLINE    db01
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    db01
ora....RD.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    db01
ora....N1.lsnr ora....er.type 0/5    0/0    OFFLINE   OFFLINE
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    db01
ora.cvu        ora.cvu.type   0/5    0/0    ONLINE    ONLINE    db02
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    db01
ora....01.lsnr application    0/5    0/0    ONLINE    ONLINE    db01
ora....01.lsnr application    0/5    0/0    ONLINE    ONLINE    db01
ora.db01.gsd   application    0/5    0/0    OFFLINE   OFFLINE
ora.db01.ons   application    0/3    0/0    ONLINE    ONLINE    db01
ora.db01.vip   ora....t1.type 0/0    0/0    ONLINE    ONLINE    db01
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    db02
ora....02.lsnr application    0/5    0/0    ONLINE    ONLINE    db02
ora....02.lsnr application    0/5    0/0    ONLINE    ONLINE    db02
ora.db02.gsd   application    0/5    0/0    OFFLINE   OFFLINE
ora.db02.ons   application    0/3    0/0    ONLINE    ONLINE    db02
ora.db02.vip   ora....t1.type 0/0    0/0    ONLINE    ONLINE    db02
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    db01
ora.oc4j       ora.oc4j.type  0/1    0/2    OFFLINE   OFFLINE
ora.omprd.db   ora....se.type 0/2    0/1    ONLINE    ONLINE    db01
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    db01
ora.scan1.vip  ora....ip.type 0/0    0/0    OFFLINE   OFFLINE
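Note that crs_stat is deprecated from 11gR2 onwards; the same status information is available in a cleaner layout via:

```shell
# Preferred replacement for crs_stat -t from 11.2 onwards,
# run from the grid home's bin directory
./crsctl stat res -t
```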

Step 2

Download the software binaries below from the Oracle Software Delivery Cloud.

Step 3

Minor number for device files
Check the async device file's minor number on all nodes of the grid; if the values differ between nodes, change them to a suitable value.
The recommended values for the async device on HP-UX are 101 0x4 or 101 0x104.
We can use the /sbin/mknod command to recreate the async device file with the desired values:
move the old file to a different name, then create the new file with the desired values.
Node1
root @ db01/prod01/oracle >mv /dev/async /dev/async_old_db01
root @ db01/prod01/oracle >/sbin/mknod /dev/async c 101 0x104
Node2
root @ db02/prod01/oracle >mv /dev/async /dev/async_old_db02
root @ db02/prod01/oracle >/sbin/mknod /dev/async c 101 0x104
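The change can be verified on each node by listing the device file and checking the major and minor numbers against what was passed to mknod:

```shell
# Verify the recreated async device file on each node (HP-UX);
# the major number should be 101 and the minor 0x104
ls -l /dev/async
```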

Step 4
Running the cluster verify and checking the SCAN
Run the cluster verify (cluvfy) utility with the appropriate parameters to validate the environment before the upgrade, and check the SCAN listener.
Check all the OS prerequisites by using the cluster verify command.
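A typical pre-upgrade check with the 12c cluvfy would look something like this (the source home is this environment's; adjust the destination home to wherever the new 12c grid will be installed):

```shell
# Run the pre-upgrade verification as the grid owner from the
# unzipped 12c grid stage; -fixup generates fixup scripts for
# any failures that can be fixed automatically
./runcluvfy.sh stage -pre crsinst -upgrade -rolling \
    -src_crshome /prod01/oracle/11.2.0/grid \
    -dest_crshome /prod01/oracle/12.1.0/12.1.0/grid \
    -dest_version 12.1.0.1.0 -fixup -verbose
```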
If you configured the SCAN listener in 11g by adding a host name with a single IP, in 12c it is mandatory to have a DNS configuration with three IPs.
Reconfigure the SCAN listener with the three IPs after making the necessary changes in DNS; if the existing SCAN already meets the requirement, we can go ahead.
oraprod @ db01/prod01/oracle/11.2.0/grid/bin >srvctl config scan
SCAN name: ommprod-cluster-scan, Network: 1
Subnet IPv4: 192.168.5.0/255.255.255.0/lan0
Subnet IPv6:
SCAN 0 IPv4 VIP: <ip_address>
SCAN name: ommprod-cluster-scan, Network: 1
Subnet IPv4: 192.168.5.0/255.255.255.0/lan0
Subnet IPv6:
SCAN 1 IPv4 VIP: <ip_address>
SCAN name: ommprod-cluster-scan, Network: 1
Subnet IPv4: 192.168.5.0/255.255.255.0/lan0
Subnet IPv6:
SCAN 2 IPv4 VIP: <ip_address>
oraprod @ db01/prod01/oracle/11.2.0/grid/bin >srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node db01
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node db02
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node db01
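If DNS had to be updated, the new SCAN entry can be verified and the cluster resource refreshed along these lines (the SCAN name is taken from the config output above; run the srvctl commands as the grid owner):

```shell
# Confirm the SCAN name now resolves to three addresses in DNS
nslookup ommprod-cluster-scan

# Refresh the SCAN VIPs from DNS: stop, modify, and restart
srvctl stop scan_listener
srvctl stop scan
srvctl modify scan -n ommprod-cluster-scan
srvctl start scan
srvctl start scan_listener
```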

Step 5

Back up the OCR, unset the environment variables, and shut down the database before upgrading the cluster.

root @ db01.oasiserp.com/prod01/oracle/11.2.0/grid/bin >./ocrconfig -manualbackup
Check the OCR status by using ocrcheck; the list of backup files can be seen with ocrconfig -showbackup.
Shut down the database before starting the upgrade of the grid.
Unset all environment variables pointing to the old grid home location.
Set the new grid home and PATH before starting runInstaller. If OUI has problems with the host name not being found, set the ORACLE_HOSTNAME environment variable.
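A minimal sketch of that cleanup, using the database and host names from this environment:

```shell
# Stop the database before the grid upgrade (as the oracle owner)
srvctl stop database -d omprd

# Unset variables pointing at the old 11.2 grid home
unset ORACLE_HOME ORA_CRS_HOME TNS_ADMIN

# Only needed if OUI cannot resolve the host name
export ORACLE_HOSTNAME=db01
```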




Step 6

Upgrading the 11gR2 grid to 12c

Start the OUI.
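Assuming the 12c grid software has been unzipped to a staging directory (the path below is illustrative), OUI is launched as the grid owner:

```shell
# Start the installer from the unzipped 12c grid media
cd /stage/grid_12101
./runInstaller
```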




Step 7

Skip this page or provide the MOS (My Oracle Support) credentials.

Step 8
Select "Upgrade Oracle Grid Infrastructure" to upgrade the existing 11.2.0.3 grid to 12c.


Step 9
Select the Preferred Language


Step 10

Select the nodes that need to be upgraded. In our environment we have a two-node cluster, and we selected both nodes.
Since this is an upgrade, user equivalence between the two nodes already exists in the environment.



Step 11

Select the necessary groups: OSASM [asmadmin], OSDBA for ASM [asmdba], and OSOPER for ASM [asmoper].



Step 12
Select the Oracle base and the Oracle home for the new grid installation. Since this is an out-of-place installation, specify a new location instead of the old software location.



Step 13
We can provide the root password so that the root configuration scripts run automatically instead of being run manually on each node; we decided to run them manually and skipped this page.
If you want the root scripts to run automatically, fill in this page and follow steps 13A and 13B.


Step 13A

We can provide the root password for running the scripts.

Step 13B
Once the root password is provided and there are many nodes, the nodes can be combined into batches for running the root script. Here our two nodes were combined into a single batch; with a large number of nodes, the batches run in sequence.



Step 14

Check the prerequisites and fix the warnings before starting the process.


Check the async value and set it to the appropriate value (see Step 3). Kernel warnings can be fixed by running the fixup script, by using kctune on HP-UX, or by changing the kernel values manually according to the recommendations.
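On HP-UX the flagged kernel parameters can be inspected and changed with kctune; the parameter name and value below are only an example, use the values the prerequisite check recommends:

```shell
# Show the current setting of a kernel tunable (name is illustrative)
kctune -v ksi_alloc_max

# Set it to the recommended value
kctune ksi_alloc_max=37500
```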

Step 14B

Once all the values are fixed, you will get the response file page.
Save the response file and start the installation.
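The saved response file can also be used to replay the same installation non-interactively later; something along these lines (the path and file name are assumptions):

```shell
# Drive the installer silently from the saved response file
./runInstaller -silent -responseFile /tmp/grid12c_upgrade.rsp
```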




Step 15
Once the GUI installation is completed on both nodes, you will get a pop-up prompting you to run the rootupgrade.sh script from the grid home on all the nodes.


Once rootupgrade.sh has completed on both nodes, the installation is complete and the grid services will be up.
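The state of the stack on all nodes can be confirmed at this point from the new grid home:

```shell
# Verify the CRS stack is up on every node after rootupgrade.sh
/prod01/oracle/12.1.0/12.1.0/grid/bin/crsctl check cluster -all
```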


Check ASM and the ASM disks that are attached:

oragrid @ db01/prod01/oracle/12.1.0/12.1.0/grid >sqlplus
SQL*Plus: Release 12.1.0.1.0 Production on Fri Mar 14 10:04:04 2014
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Enter user-name: sys as sysasm
Enter password:
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> set line 300
SQL> select name, header_status, path from v$asm_disk;

NAME                           HEADER_STATU PATH
------------------------------ ------------ ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
DATA_0000                      MEMBER       /dev/vg_data1/rlv_data1
DATA_0001                      MEMBER       /dev/vg_data2/rlv_data2
DATA_0002                      MEMBER       /dev/vg_data3/rlv_data3
DATA_0003                      MEMBER       /dev/vg_data4/rlv_data4
FLASH_0000                     MEMBER       /dev/vg_flash1/rlv_flash1
FLASH_0001                     MEMBER       /dev/vg_flash2/rlv_flash2
6 rows selected.
oragrid @ db01.oasiserp.com/prod01/oracle/12.1.0/12.1.0/grid >asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N        1024   4096  4194304    819136   167912                0          167912              0             Y  DATA/
MOUNTED  EXTERN  N        1024   4096  4194304    511968   340428                0          340428              0             N  FLASH/
ASMCMD>

Step 16

Confirm the software versions on both nodes. The 11.2.0.3 database is compatible with the 12.1 cluster, so we can start the database and verify it before we start the upgrade of the database to 12.1.

Node 1

root @ db01/prod01/oracle/12.1.0/12.1.0/grid/bin >./olsnodes -s
db01 Active
db02 Active
root @ db01/prod01/oracle/12.1.0/12.1.0/grid/bin >./crsctl query crs softwareversion
Oracle Clusterware version on node [db01] is [12.1.0.1.0]
root @ db01/prod01/oracle/12.1.0/12.1.0/grid/bin >./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.1.0]
root @ db01/prod01/oracle/12.1.0/12.1.0/grid/bin >

Node2
root @ db02/prod01/oracle/12.1.0/12.1.0/grid/bin >./crsctl query crs softwareversion
Oracle Clusterware version on node [db02] is [12.1.0.1.0]
root @ db02/prod01/oracle/12.1.0/12.1.0/grid/bin >./olsnodes -s
db01 Active
db02 Active
root @ db02/prod01/oracle/12.1.0/12.1.0/grid/bin >./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.1.0]
root @ db02/prod01/oracle/12.1.0/12.1.0/grid/bin >

