In the previous post, the OPatch apply succeeded on both nodes. Everything seemed fine and the results looked positive when checked with “opatch lsinventory”.
However, there may still be something wrong, as follows:
1. ASM is still in rolling patch state
[grid@racdb2 ~]$ asmcmd
ASMCMD>
ASMCMD> showclusterstate
In Rolling Patch
ASMCMD>
ASMCMD> exit
[grid@racdb2 ~]$
2. The patch details are different between Node 1 and Node 2
At Node1:
ASMCMD> showpatches
---------------
List of Patches
===============
30489227
30489632
30557433
30655595
ASMCMD>
At Node2:
ASMCMD> showpatches
---------------
List of Patches
===============
29585399
30489227
30489632
30557433
30655595
ASMCMD>
We can see an extra patch, “29585399”, applied on Node 2.
The CRS patch level is also different between the two nodes:
[grid@racdb1 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racdb1 is [2701864972].
[grid@racdb1 ~]$
[grid@racdb1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2701864972] and the complete list of patches [30489227 30489632 30557433 30655595 ] have been applied on the local node. The release patch string is [19.6.0.0.0].
[grid@racdb1 ~]$
[grid@racdb2 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racdb2 is [3439198897].
[grid@racdb2 ~]$
[grid@racdb2 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [3439198897] and the complete list of patches [29585399 30489227 30489632 30557433 30655595 ] have been applied on the local node. The release patch string is [19.6.0.0.0].
[grid@racdb2 ~]$
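A quick way to spot this kind of drift is to diff the `showpatches` output from both nodes. A minimal sketch, with the two lists above simulated inline (the file names are hypothetical; in practice, save each node's `asmcmd showpatches` output to a file first):

```shell
#!/bin/sh
# In practice, collect the patch IDs on each node, e.g.:
#   asmcmd showpatches | grep -E '^[0-9]+$' | sort > nodeN_patches.txt
# Here we simulate the two lists shown in this post:
printf '30489227\n30489632\n30557433\n30655595\n' | sort > node1_patches.txt
printf '29585399\n30489227\n30489632\n30557433\n30655595\n' | sort > node2_patches.txt

# Print patches present on Node 2 but missing on Node 1
comm -13 node1_patches.txt node2_patches.txt   # prints 29585399
```

With the lists from this post, the only line printed is the missing patch, 29585399.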
Solution:
We apply this “missing” patch on Node 1 manually.
[root@racdb1 install]# pwd
/u01/app/19.0.0.0/grid/crs/install
[root@racdb1 install]# ./rootcrs.sh -prepatch
Using configuration parameter file: /u01/app/19.0.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/racdb1/crsconfig/crs_prepatch_racdb1_2020-04-14_04-40-08PM.log
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [724960844].
2020/04/14 16:44:42 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.
2020/04/14 16:46:23 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.
2020/04/14 16:46:25 CLSRSC-347: Successfully unlock /u01/app/19.0.0.0/grid
2020/04/14 16:46:29 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.
[root@racdb1 install]#
Open another session as the grid user:
[grid@racdb1 ~]$ patchgen commit -pi 29585399
19
loading the appropriate library for linux
patchgensh19.so loaded succesfully.
Note: Successfully commited, created .s file with apply and recover patches
mv -f /u01/app/19.0.0.0/grid/lib/libasmclntsh19.so /u01/app/19.0.0.0/grid/lib/libasmclntsh19.so.bak
/usr/bin/as /u01/app/19.0.0.0/grid/rdbms/lib/skgfpmi.s -o /u01/app/19.0.0.0/grid/rdbms/lib/skgfpmi.o
/usr/bin/ar r /u01/app/19.0.0.0/grid/lib/libasmclnt19.a /u01/app/19.0.0.0/grid/rdbms/lib/skgfpmi.o
/usr/bin/ar r /u01/app/19.0.0.0/grid/lib/libasmclntsh19.a /u01/app/19.0.0.0/grid/rdbms/lib/skgfpmi.o
rm -f /u01/app/19.0.0.0/grid/rdbms/lib/skgfpmi.o
rm -f /u01/app/19.0.0.0/grid/rdbms/lib/skgfpmi.s
make /u01/app/19.0.0.0/grid/lib/libasmclntsh19.so -f /u01/app/19.0.0.0/grid/rdbms/lib/ins_rdbms.mk
make[1]: Entering directory `/home/grid'
rm -f /u01/app/19.0.0.0/grid/lib/libasmclntsh19.so
/u01/app/19.0.0.0/grid/bin/linkshlib /u01/app/19.0.0.0/grid/lib/libasmclntsh19.so /u01/app/19.0.0.0/grid/rdbms/lib/ins_rdbms.mk so ld_shlib LIBS
+ PATH=/bin:/usr/bin:/usr/ccs/bin
+ export PATH
+ lib=/u01/app/19.0.0.0/grid/lib/libasmclntsh19.so
+ makefile=/u01/app/19.0.0.0/grid/rdbms/lib/ins_rdbms.mk
+ so_ext=so
+ target=ld_shlib
++ basename /u01/app/19.0.0.0/grid/lib/libasmclntsh19.so .so
+ libname=libasmclntsh19
++ dirname /u01/app/19.0.0.0/grid/lib/libasmclntsh19.so
+ sodir=/u01/app/19.0.0.0/grid/lib
+ ardir=/u01/app/19.0.0.0/grid/lib/
+ '[' var = ld_shlib ']'
+ suffix=LIBS
+ var=
+ '[' '!' -f /u01/app/19.0.0.0/grid/lib/libasmclntsh19.a ']'
+ '[' '' '!=' '' ']'
+ make -f /u01/app/19.0.0.0/grid/rdbms/lib/ins_rdbms.mk ld_shlib _FULL_LIBNAME=/u01/app/19.0.0.0/grid/lib/libasmclntsh19.so _LIBNAME=libasmclntsh19 _LIBDIR=/u01/app/19.0.0.0/grid/lib/ '_LIBNAME_LIBS=$(libasmclntsh19LIBS)' '_LIBNAME_EXTRALIBS=$(libasmclntsh19EXTRALIBS)'
make[2]: Entering directory `/home/grid'
/u01/app/19.0.0.0/grid/bin/orald -o /u01/app/19.0.0.0/grid/lib/libasmclntsh19.so -shared -z noexecstack -Wl,--disable-new-dtags -L/tmp/bootstraplib/ -L/u01/app/19.0.0.0/grid/lib/ -L/u01/app/19.0.0.0/grid/rdbms/lib/ -L/u01/app/19.0.0.0/grid/lib/stubs/ -Wl,--version-script=/u01/app/19.0.0.0/grid/rdbms/admin/libasmclntsh19.def -Wl,--whole-archive /u01/app/19.0.0.0/grid/lib/libasmclntsh19.a -Wl,--no-whole-archive -lirc
make[2]: Leaving directory `/home/grid'
make[1]: Leaving directory `/home/grid'
make libasmclntsh19.so returned code 0
[grid@racdb1 ~]$
Go back to the root session and complete the post-patch task:
[root@racdb1 install]# ./rootcrs.sh -postpatch
Using configuration parameter file: /u01/app/19.0.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/racdb1/crsconfig/crs_postpatch_racdb1_2020-04-14_04-50-01PM.log
2020/04/14 16:50:13 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [3439198897].
SQL Patching tool version 19.6.0.0.0 Production on Tue Apr 14 16:58:29 2020
Copyright (c) 2012, 2019, Oracle. All rights reserved.
Log file for this invocation: /u01/app/grid/cfgtoollogs/sqlpatch/sqlpatch_14526_2020_04_14_16_58_29/sqlpatch_invocation.log
Connecting to database...OK
Gathering database info...done
Note: Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
(Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done
Current state of interim SQL patches:
No interim patches found
Current state of release update SQL patches:
Binary registry:
19.6.0.0.0 Release_Update 191217155004: Installed
PDB CDB$ROOT:
Applied 19.3.0.0.0 Release_Update 190410122720 successfully on 08-APR-20 03.44.51.059484 PM
PDB GIMR_DSCREP_10:
Applied 19.3.0.0.0 Release_Update 190410122720 successfully on 08-APR-20 03.50.52.971583 PM
PDB PDB$SEED:
Applied 19.3.0.0.0 Release_Update 190410122720 successfully on 08-APR-20 03.50.52.971583 PM
Adding patches to installation queue and performing prereq checks...done
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED GIMR_DSCREP_10
No interim patches need to be rolled back
Patch 30557433 (Database Release Update : 19.6.0.0.200114 (30557433)):
Apply from 19.3.0.0.0 Release_Update 190410122720 to 19.6.0.0.0 Release_Update 191217155004
No interim patches need to be applied
Installing patches...
Patch installation complete. Total patches installed: 3
Validating logfiles...done
Patch 30557433 apply (pdb CDB$ROOT): SUCCESS
logfile: /u01/app/grid/cfgtoollogs/sqlpatch/30557433/23305305/30557433_apply__MGMTDB_CDBROOT_2020Apr14_16_59_18.log (no errors)
Patch 30557433 apply (pdb PDB$SEED): SUCCESS
logfile: /u01/app/grid/cfgtoollogs/sqlpatch/30557433/23305305/30557433_apply__MGMTDB_PDBSEED_2020Apr14_17_00_27.log (no errors)
Patch 30557433 apply (pdb GIMR_DSCREP_10): SUCCESS
logfile: /u01/app/grid/cfgtoollogs/sqlpatch/30557433/23305305/30557433_apply__MGMTDB_GIMR_DSCREP_10_2020Apr14_17_00_27.log (no errors)
SQL Patching tool complete on Tue Apr 14 17:01:53 2020
2020/04/14 17:02:53 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2020/04/14 17:02:56 CLSRSC-672: Post-patch steps for patching GI home successfully completed.
[root@racdb1 install]# 2020/04/14 17:03:04 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
[root@racdb1 install]#
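After the post-patch step, it is worth re-checking that both nodes report the same software patch level. A minimal sketch of the comparison (the `crsctl` command and hostnames are from this post; collecting the values over ssh is an assumption, shown only in comments, and the values are simulated here):

```shell
#!/bin/sh
# In practice, collect the level from each node, e.g.:
#   l1=$(ssh racdb1 'crsctl query crs softwarepatch' | grep -o '\[[0-9]*\]')
#   l2=$(ssh racdb2 'crsctl query crs softwarepatch' | grep -o '\[[0-9]*\]')
# Simulated here with the post-fix value from this post:
l1='[3439198897]'
l2='[3439198897]'

if [ "$l1" = "$l2" ]; then
  echo "patch levels match: $l1"
else
  echo "MISMATCH: node1=$l1 node2=$l2"
fi
```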
Done!
I believe this is an Oracle bug in 19c RAC. The best defense is some preparation before patching, especially on a freshly installed 19c RAC.
For a fresh 19c RAC installation, first compare the files in /u01/app/oraInventory/ContentsXML.
On Node 1 (the node where GI and DB were installed):
[grid@racdb1 ~]$ cd /u01/app/oraInventory/ContentsXML/
[grid@racdb1 ContentsXML]$ ls -l
total 16
-rw-rw----. 1 grid oinstall 300 Apr 9 19:09 comps.xml
-rw-rw----. 1 grid oinstall 561 Apr 9 19:07 inventory.xml
-rw-rw----. 1 grid oinstall 292 Apr 9 19:09 libs.xml
-rw-rw----. 1 grid oinstall 174 Apr 9 19:07 oui-patch.xml
[grid@racdb1 ContentsXML]$ cat oui-patch.xml
<?xml version='1.0' encoding='UTF-8'?>
<!-- Copyright (c) 2020 Oracle Corporation. All Rights Reserved.
Do not modify the contents of this file by hand.
-->
<ONEOFF_LIST/>[grid@racdb1 ContentsXML]$
On Node 2:
[grid@racdb2 GI-patch]$ cd /u01/app/oraInventory/ContentsXML/
[grid@racdb2 ContentsXML]$ ls -l
total 12
-rw-rw----. 1 grid oinstall 300 Apr 9 19:09 comps.xml
-rw-rw----. 1 grid oinstall 561 Apr 9 19:09 inventory.xml
-rw-rw----. 1 grid oinstall 292 Apr 9 19:09 libs.xml
[grid@racdb2 ContentsXML]$
See the difference? The oui-patch.xml is MISSING on Node 2!
Copy this file to the other nodes before applying the patch, and confirm its permissions are 660; it will save you time and headaches.
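A local sketch of the permission check (the real copy between nodes is shown only in comments, since it assumes ssh equivalence between the nodes; the demonstration below uses a temporary directory instead of the real inventory path):

```shell
#!/bin/sh
# In practice, on the install node (as grid):
#   scp /u01/app/oraInventory/ContentsXML/oui-patch.xml \
#       racdb2:/u01/app/oraInventory/ContentsXML/
#   ssh racdb2 'chmod 660 /u01/app/oraInventory/ContentsXML/oui-patch.xml'

# Simulated locally: create the file and set mode 660, then verify
inv=$(mktemp -d)
cat > "$inv/oui-patch.xml" <<'EOF'
<?xml version='1.0' encoding='UTF-8'?>
<ONEOFF_LIST/>
EOF
chmod 660 "$inv/oui-patch.xml"
stat -c '%a' "$inv/oui-patch.xml"   # prints 660
```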