
[Oracle RAC Pre-Installation Check]

[grid@rac1 grid]$ pwd
/grid/12.2.0.1/grid
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -r 12.2 -osdba dba -orainv dba -asm -asmgrp dba -asmdev /dev/oracleasm/asm-disk6 -crshome /grid/12.2.0.1/grid
Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...PASSED
Verifying Free Space: rac2:/usr,rac2:/var,rac2:/etc,rac2:/grid/12.2.0.1/grid,rac2:/sbin,rac2:/tmp ...PASSED
Verifying Free Space: rac1:/usr,rac1:/var,rac1:/etc,rac1:/grid/12.2.0.1/grid,rac1:/sbin,rac1:/tmp ...PASSED
Verifying User Existence: grid ...
  Verifying Users With Same UID: 54321 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying Group Existence: dba ...PASSED
Verifying Group Membership: dba(Primary) ...PASSED
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: ip_local_port_range ...PASSED
Verifying OS Kernel Parameter: rmem_default ...PASSED
Verifying OS Kernel Parameter: rmem_max ...PASSED
Verifying OS Kernel Parameter: wmem_default ...PASSED
Verifying OS Kernel Parameter: wmem_max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Port Availability for component "Oracle Notification Service (ONS)" ...PASSED
Verifying Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Node Connectivity ...
  Verifying Hosts File ...PASSED
  Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
  Verifying subnet mask consistency for subnet "192.168.100.0" ...PASSED
  Verifying subnet mask consistency for subnet "192.168.56.0" ...PASSED
  Verifying subnet mask consistency for subnet "192.168.122.0" ...PASSED
  Verifying subnet mask consistency for subnet "10.0.2.0" ...PASSED
  Verifying subnet mask consistency for subnet "192.168.200.0" ...PASSED
Verifying Node Connectivity ...FAILED (PRVG-1172, PRVG-11067, PRVG-11095)
Verifying Multicast check ...PASSED
Verifying Device Checks for ASM ...
  Verifying ASM device sharedness check ...
    Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
    Verifying Shared Storage Accessibility:/dev/oracleasm/asm-disk6 ...PASSED
  Verifying ASM device sharedness check ...PASSED
  Verifying Access Control List check ...PASSED
Verifying Device Checks for ASM ...PASSED
Verifying I/O scheduler ...
  Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying I/O scheduler ...PASSED
Verifying Network Time Protocol (NTP) ...
  Verifying '/etc/chrony.conf' ...PASSED
  Verifying '/var/run/ntpd.pid' ...PASSED
  Verifying '/var/run/chronyd.pid' ...PASSED
Verifying Network Time Protocol (NTP) ...FAILED
Verifying Same core file name pattern ...PASSED
Verifying User Mask ...PASSED
Verifying User Not In Group "root": grid ...PASSED
Verifying Time zone consistency ...PASSED
Verifying resolv.conf Integrity ...
  Verifying (Linux) resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-2064, PRVG-10048)
Verifying resolv.conf Integrity ...FAILED (PRVF-5636, PRVG-2064, PRVG-10048)
Verifying DNS/NIS name service ...PASSED
Verifying Domain Sockets ...PASSED
Verifying /boot mount ...PASSED
Verifying File system mount options for path GI_HOME ...PASSED
Verifying Daemon "avahi-daemon" not configured and running ...PASSED
Verifying Daemon "proxyt" not configured and running ...PASSED
Verifying Grid Infrastructure home path: /grid/12.2.0.1/grid ...
  Verifying '/grid/12.2.0.1/grid' ...PASSED
Verifying Grid Infrastructure home path: /grid/12.2.0.1/grid ...PASSED
Verifying User Equivalence ...PASSED
Verifying File system mount options for path /var ...PASSED
Verifying zeroconf check ...FAILED (PRVE-10077)
Verifying ASM Filter Driver configuration ...PASSED
Pre-check for cluster services setup was unsuccessful on all the nodes.
Failures were encountered during execution of CVU verification request "stage -pre crsinst".
Verifying Node Connectivity ...FAILED
PRVG-1172 : The IP address "192.168.122.1" is on multiple interfaces "virbr0"
on nodes "rac1,rac2"
rac1: PRVG-11067 : TCP connectivity from node "rac1": "192.168.122.1" to node
      "rac1": "192.168.122.1" failed.
      PRVG-11095 : The TCP system call "connect" failed with error "111" while
      executing exectask on node "rac1"
      Connection refused
rac1: PRVG-11067 : TCP connectivity from node "rac1": "192.168.122.1" to node
      "rac2": "192.168.122.1" failed.
      PRVG-11095 : The TCP system call "connect" failed with error "111" while
      executing exectask on node "rac1"
      Connection refused
Verifying Network Time Protocol (NTP) ...FAILED
Verifying resolv.conf Integrity ...FAILED
rac2: PRVF-5636 : The DNS response time for an unreachable node exceeded
      "15000" ms on following nodes: rac2
rac2: PRVG-2064 : There are no configured name servers in the file
      '/etc/resolv.conf' on the nodes "rac2"
rac2: Check for integrity of file "/etc/resolv.conf" failed
rac1: PRVG-10048 : Name "rac1" was not resolved to an address of the specified
      type by name servers "210.220.163.82".
rac1: PRVG-10048 : Name "rac1" was not resolved to an address of the specified
      type by name servers "168.126.63.1".
rac1: Check for integrity of file "/etc/resolv.conf" failed
  Verifying (Linux) resolv.conf Integrity ...FAILED
  rac2: PRVF-5636 : The DNS response time for an unreachable node exceeded
        "15000" ms on following nodes: rac2
  rac2: PRVG-2064 : There are no configured name servers in the file
        '/etc/resolv.conf' on the nodes "rac2"

  rac1: PRVG-10048 : Name "rac1" was not resolved to an address of the
        specified type by name servers "210.220.163.82".
  rac1: PRVG-10048 : Name "rac1" was not resolved to an address of the
        specified type by name servers "168.126.63.1".
Verifying zeroconf check ...FAILED
rac2: PRVE-10077 : NOZEROCONF parameter was not  specified or was not set to
      'yes' in file "/etc/sysconfig/network" on node "rac2.localdomain"
CVU operation performed:      stage -pre crsinst
Date:                         May 23, 2021 12:22:25 PM
CVU home:                     /grid/12.2.0.1/grid/
User:                         grid
[grid@rac1 grid]$
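--Handling the pre-check failures
The failures above are typical of this VirtualBox/libvirt lab setup: the duplicated 192.168.122.1 on virbr0 (PRVG-1172, PRVG-11067), the time-sync check (NTP), inconsistent /etc/resolv.conf contents on the two nodes (PRVF-5636, PRVG-2064, PRVG-10048), and zeroconf still enabled (PRVE-10077). A minimal sketch of one way to clear them before re-running runcluvfy.sh is shown below; disabling the libvirt default network and resolving the cluster names through /etc/hosts instead of DNS are assumptions for this lab, not requirements.

# Run as root on each node.
# PRVE-10077: disable zeroconf routing.
echo "NOZEROCONF=yes" >> /etc/sysconfig/network
# PRVG-1172: remove the libvirt default bridge that assigns 192.168.122.1
# to virbr0 on both nodes, and keep it from returning at boot.
virsh net-destroy default
virsh net-autostart default --disable
# NTP check: either keep chronyd running and synchronized on both nodes,
# or remove the chrony/ntp configuration on both so CTSS runs in active mode.
systemctl enable --now chronyd
chronyc tracking
# resolv.conf: configure the same reachable name servers on both nodes,
# or drop the unreachable entries and rely on /etc/hosts for the cluster names.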

 

[Oracle RAC Install]

-RSP File

--Caution
The oracle.install.asm.SYSASMPassword= and oracle.install.asm.monitorPassword= fields must be filled in.
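For example, with a placeholder value (not the password actually used in this install):

oracle.install.asm.SYSASMPassword=Oracle_12cR2
oracle.install.asm.monitorPassword=Oracle_12cR2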

[grid@rac1 DATASYNCXML]$ more grid2.rsp
###############################################################################
## Copyright(c) Oracle Corporation 1998,2017. All rights reserved.           ##
##                                                                           ##
## Specify values for the variables listed below to customize                ##
## your installation.                                                        ##
##                                                                           ##
## Each variable is associated with a comment. The comment                   ##
## can help to populate the variables with the appropriate                   ##
## values.                                                                   ##
##                                                                           ##
## IMPORTANT NOTE: This file contains plain text passwords and               ##
## should be secured to have read permission only by oracle user             ##
## or db administrator who owns this installation.                           ##
##                                                                           ##
###############################################################################
###############################################################################
##                                                                           ##
## Instructions to fill this response file                                   ##
## To register and configure 'Grid Infrastructure for Cluster'               ##
##  - Fill out sections A,B,C,D,E,F and G                                    ##
##  - Fill out section G if OCR and voting disk should be placed on ASM      ##
##                                                                           ##
## To register and configure 'Grid Infrastructure for Standalone server'     ##
##  - Fill out sections A,B and G                                            ##
##                                                                           ##
## To register software for 'Grid Infrastructure'                            ##
##  - Fill out sections A,B and D                                            ##
##  - Provide the cluster nodes in section D when choosing CRS_SWONLY as     ##
##    installation option in section A                                       ##
##                                                                           ##
## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##
##                                                                           ##
## To add more nodes to the cluster                                          ##
##  - Fill out sections A and D                                              ##
##  - Provide the cluster nodes in section D when choosing CRS_ADDNODE as    ##
##    installation option in section A                                       ##
##                                                                           ##
###############################################################################
#------------------------------------------------------------------------------
# Do not change the following system generated value.
#------------------------------------------------------------------------------
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v12.2.0
###############################################################################
#                                                                             #
#                          SECTION A - BASIC                                  #
#                                                                             #
###############################################################################
#-------------------------------------------------------------------------------
# Specify the location which holds the inventory files.
# This is an optional parameter if installing on
# Windows based Operating System.
#-------------------------------------------------------------------------------
INVENTORY_LOCATION=/grid/oraInventory
#-------------------------------------------------------------------------------
# Specify the installation option.
# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY or HA_SWONLY
#   - CRS_CONFIG  : To register home and configure Grid Infrastructure for cluster
#   - HA_CONFIG   : To register home and configure Grid Infrastructure for stand alone server
#   - UPGRADE     : To register home and upgrade clusterware software of earlier release
#   - CRS_SWONLY  : To register Grid Infrastructure Software home (can be configured for cluster
#                   or stand alone server later)
#   - HA_SWONLY   : To register Grid Infrastructure Software home (can be configured for stand
#                   alone server later. This is only supported on Windows.)
#   - CRS_ADDNODE : To add more nodes to the cluster
#-------------------------------------------------------------------------------
oracle.install.option=CRS_CONFIG
#-------------------------------------------------------------------------------
# Specify the complete path of the Oracle Base.
#-------------------------------------------------------------------------------
ORACLE_BASE=/grid/base
################################################################################
#                                                                              #
#                              SECTION B - GROUPS                              #
#                                                                              #
#   The following three groups need to be assigned for all GI installations.   #
#   OSDBA and OSOPER can be the same or different.  OSASM must be different    #
#   than the other two.                                                        #
#   The value to be specified for OSDBA, OSOPER and OSASM group is only for    #
#   Unix based Operating System.                                               #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges.
#-------------------------------------------------------------------------------
oracle.install.asm.OSDBA=dba
#-------------------------------------------------------------------------------
# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges.
# The value to be specified for OSOPER group is optional.
# Value should not be provided if configuring Client Cluster - i.e. storageOption=CLIENT_ASM_STORAGE.
#-------------------------------------------------------------------------------
oracle.install.asm.OSOPER=dba
#-------------------------------------------------------------------------------
# The OSASM_GROUP is the OS group which is to be granted SYSASM privileges. This
# must be different than the previous two.
#-------------------------------------------------------------------------------
oracle.install.asm.OSASM=dba
################################################################################
#                                                                              #
#                           SECTION C - SCAN                                   #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify a name for SCAN
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.scanName=rac-scan
#-------------------------------------------------------------------------------
# Specify a unused port number for SCAN service
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.scanPort=1621
################################################################################
#                                                                              #
#                           SECTION D - CLUSTER & GNS                         #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE
#-------------------------------------------------------------------------------
# Specify 'true' if you would like to configure the cluster as Extended, else
# specify 'false'
#
# Applicable only for STANDALONE and DOMAIN cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.configureAsExtendedCluster=false
#-------------------------------------------------------------------------------
# Specify the Member Cluster Manifest file
#
# Applicable only for MEMBERDB and MEMBERAPP cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.memberClusterManifestFile=
#-------------------------------------------------------------------------------
# Specify a name for the Cluster you are creating.
#
# The maximum length allowed for clustername is 15 characters. The name can be
# any combination of lower and uppercase alphabets (A - Z), (0 - 9), hyphen(-)
# and underscore(_).
#
# Applicable only for STANDALONE and DOMAIN cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.clusterName=rac
#-------------------------------------------------------------------------------
# Applicable only for STANDALONE, DOMAIN, MEMBERDB cluster configuration.
# Specify 'true' if you would like to configure Grid Naming Service(GNS), else
# specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.configureGNS=false
#-------------------------------------------------------------------------------
# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to configure GNS.
# Specify 'true' if you would like to assign SCAN name VIP and Node VIPs by DHCP
# , else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
#-------------------------------------------------------------------------------
# Applicable only if you choose to configure GNS.
# Specify the type of GNS configuration for cluster
# Allowed values are: CREATE_NEW_GNS and USE_SHARED_GNS
# Only USE_SHARED_GNS value is allowed for MEMBERDB cluster configuration.
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsOption=
#-------------------------------------------------------------------------------
# Applicable only if SHARED_GNS is being configured for cluster
# Specify the path to the GNS client data file
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsClientDataFile=
#-------------------------------------------------------------------------------
# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to
# configure GNS for this cluster oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
# Specify the GNS subdomain and an unused virtual hostname for GNS service
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=
#-------------------------------------------------------------------------------
# Specify the list of sites - only if configuring an Extended Cluster
#-------------------------------------------------------------------------------
oracle.install.crs.config.sites=
#-------------------------------------------------------------------------------
# Specify the list of nodes that have to be configured to be part of the cluster.
#
# The list should a comma-separated list of tuples.  Each tuple should be a
# colon-separated string that contains
# - 1 field if you have chosen CRS_SWONLY as installation option, or
# - 1 field if configuring an Application Cluster, or
# - 3 fields if configuring a Flex Cluster
# - 3 fields if adding more nodes to the configured cluster, or
# - 4 fields if configuring an Extended Cluster
#
# The fields should be ordered as follows:
# 1. The first field should be the public node name.
# 2. The second field should be the virtual host name
#    (Should be specified as AUTO if you have chosen 'auto configure for VIP'
#     i.e. autoConfigureClusterNodeVIP=true)
# 3. The third field indicates the role of node (HUB,LEAF). This has to
#    be provided only if Flex Cluster is being configured.
#    For Extended Cluster only HUB should be specified for all nodes
# 4. The fourth field indicates the site designation for the node. To be specified only if configuring an Extended Cluster.
# The 2nd and 3rd fields are not applicable if you have chosen CRS_SWONLY as installation option
# The 2nd and 3rd fields are not applicable if configuring an Application Cluster
#
# Examples
# For registering GI for a cluster software: oracle.install.crs.config.clusterNodes=node1,node2
# For adding more nodes to the configured cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF
# For configuring Application Cluster: oracle.install.crs.config.clusterNodes=node1,node2
# For configuring Flex Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF
# For configuring Extended Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB:site1,node2:node2-vip:HUB:site2
# You can specify a range of nodes in the tuple using colon separated fields of format
# hostnameprefix:lowerbound-upperbound:hostnamesuffix:vipsuffix:role of node
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.clusterNodes=rac1.localdomain:rac1-vip.localdomain:HUB,rac2.localdomain:rac2-vip.localdomain:HUB
#-------------------------------------------------------------------------------
# The value should be a comma separated strings where each string is as shown below
# InterfaceName:SubnetAddress:InterfaceType
# where InterfaceType can be either "1", "2", "3", "4", or "5"
# InterfaceType stand for the following values
#   - 1 : PUBLIC
#   - 2 : PRIVATE
#   - 3 : DO NOT USE
#   - 4 : ASM
#   - 5 : ASM & PRIVATE
#
# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.networkInterfaceList=enp0s3:10.0.2.0:3,enp0s8:192.168.56.0:1,enp0s9:192.168.100.0:5,enp0s10:192.168.200.0:5,virbr0:192.168.122.0:3
#------------------------------------------------------------------------------
# Create a separate ASM DiskGroup to store GIMR data.
# Specify 'true' if you would like to separate GIMR data with clusterware data,
# else specify 'false'
# Value should be 'true' for DOMAIN cluster configurations
# Value can be true/false for STANDALONE cluster configurations.
#------------------------------------------------------------------------------
oracle.install.asm.configureGIMRDataDG=false
################################################################################
#                                                                              #
#                              SECTION E - STORAGE                             #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting
# Disks files
#   - FLEX_ASM_STORAGE
#   - CLIENT_ASM_STORAGE
#
# Applicable only for MEMBERDB cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.storageOption=
################################################################################
#                                                                              #
#                               SECTION F - IPMI                               #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify 'true' if you would like to configure Intelligent Power Management interface
# (IPMI), else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.useIPMI=false
#-------------------------------------------------------------------------------
# Applicable only if you choose to configure IPMI
# i.e. oracle.install.crs.config.useIPMI=true
# Specify the username and password for using IPMI service
#-------------------------------------------------------------------------------
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
################################################################################
#                                                                              #
#                                SECTION G - ASM                               #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# ASM Storage Type
# Allowed values are : ASM and ASM_ON_NAS
# ASM_ON_NAS applicable only if
# oracle.install.crs.config.ClusterConfiguration=STANDALONE
#-------------------------------------------------------------------------------
oracle.install.asm.storageOption=ASM
#-------------------------------------------------------------------------------
# NAS location to create ASM disk group for storing OCR/VDSK
# Specify the NAS location where you want the ASM disk group to be created
# to be used to store OCR/VDSK files
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
#-------------------------------------------------------------------------------
oracle.install.asmOnNAS.ocrLocation=
#------------------------------------------------------------------------------
# Create a separate ASM DiskGroup on NAS to store GIMR data
# Specify 'true' if you would like to separate GIMR data with clusterware data, else
# specify 'false'
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
#------------------------------------------------------------------------------
oracle.install.asmOnNAS.configureGIMRDataDG=false
#-------------------------------------------------------------------------------
# NAS location to create ASM disk group for storing GIMR data
# Specify the NAS location where you want the ASM disk group to be created
# to be used to store the GIMR database
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
# and oracle.install.asmOnNAS.configureGIMRDataDG=true
#-------------------------------------------------------------------------------
oracle.install.asmOnNAS.gimrLocation=
#-------------------------------------------------------------------------------
# Password for SYS user of Oracle ASM
#-------------------------------------------------------------------------------
oracle.install.asm.SYSASMPassword=
#-------------------------------------------------------------------------------
# The ASM DiskGroup
#
# Example: oracle.install.asm.diskGroup.name=data
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.name=OCRVOTE
#-------------------------------------------------------------------------------
# Redundancy level to be used by ASM.
# It can be one of the following
#   - NORMAL
#   - HIGH
#   - EXTERNAL
#   - FLEX
#   - EXTENDED (required if oracle.install.crs.config.ClusterConfiguration=EXTENDED)
# Example: oracle.install.asm.diskGroup.redundancy=NORMAL
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.redundancy=EXTERNAL
#-------------------------------------------------------------------------------
# Allocation unit size to be used by ASM.
# It can be one of the following values
#   - 1
#   - 2
#   - 4
#   - 8
#   - 16
# Example: oracle.install.asm.diskGroup.AUSize=4
# size unit is MB
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.AUSize=4
#-------------------------------------------------------------------------------
# Failure Groups for the disk group
# If configuring for Extended cluster specify as list of "failure group name:site"
# tuples.
# Else just specify as list of failure group names
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.FailureGroups=
#-------------------------------------------------------------------------------
# List of disks and their failure groups to create a ASM DiskGroup
# (Use this if each of the disks have an associated failure group)
# Failure Groups are not required if oracle.install.asm.diskGroup.redundancy=EXTERNAL
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName
#     For Windows based Operating System:
#     oracle.install.asm.diskGroup.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/oracleasm/asm-disk6,
#-------------------------------------------------------------------------------
# List of disks to create a ASM DiskGroup
# (Use this variable only if failure groups configuration is not required)
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2
#     For Windows based Operating System:
#     oracle.install.asm.diskGroup.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.disks=/dev/oracleasm/asm-disk6
#-------------------------------------------------------------------------------
# List of failure groups to be marked as QUORUM.
# Quorum failure groups contain only voting disk data, no user data is stored
# Example:
#       oracle.install.asm.diskGroup.quorumFailureGroupNames=FGName1,FGName2
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.quorumFailureGroupNames=
#-------------------------------------------------------------------------------
# The disk discovery string to be used to discover the disks used create a ASM DiskGroup
#
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/*
#     For Windows based Operating System:
#     oracle.install.asm.diskGroup.diskDiscoveryString=\\.\ORCLDISK*
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/*
#-------------------------------------------------------------------------------
# Password for ASMSNMP account
# ASMSNMP account is used by Oracle Enterprise Manager to monitor Oracle ASM instances
#-------------------------------------------------------------------------------
oracle.install.asm.monitorPassword=
#-------------------------------------------------------------------------------
# GIMR Storage data ASM DiskGroup
# Applicable only when
# oracle.install.asm.configureGIMRDataDG=true
# Example: oracle.install.asm.GIMRDG.name=MGMT
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.name=
#-------------------------------------------------------------------------------
# Redundancy level to be used by ASM.
# It can be one of the following
#   - NORMAL
#   - HIGH
#   - EXTERNAL
#   - FLEX
#   - EXTENDED (only if oracle.install.crs.config.ClusterConfiguration=EXTENDED)
# Example: oracle.install.asm.gimrDG.redundancy=NORMAL
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.redundancy=
#-------------------------------------------------------------------------------
# Allocation unit size to be used by ASM.
# It can be one of the following values
#   - 1
#   - 2
#   - 4
#   - 8
#   - 16
# Example: oracle.install.asm.gimrDG.AUSize=4
# size unit is MB
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.AUSize=1
#-------------------------------------------------------------------------------
# Failure Groups for the GIMR storage data ASM disk group
# If configuring for Extended cluster specify as list of "failure group name:site"
# tuples.
# Else just specify as list of failure group names
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.FailureGroups=
#-------------------------------------------------------------------------------
# List of disks and their failure groups to create GIMR data ASM DiskGroup
# (Use this if each of the disks have an associated failure group)
# Failure Groups are not required if oracle.install.asm.gimrDG.redundancy=EXTERNAL
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.gimrDG.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName
#     For Windows based Operating System:
#     oracle.install.asm.gimrDG.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.disksWithFailureGroupNames=
#-------------------------------------------------------------------------------
# List of disks to create GIMR data ASM DiskGroup
# (Use this variable only if failure groups configuration is not required)
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.gimrDG.disks=/oracle/asm/disk1,/oracle/asm/disk2
#     For Windows based Operating System:
#     oracle.install.asm.gimrDG.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.disks=
#-------------------------------------------------------------------------------
# List of failure groups to be marked as QUORUM.
# Quorum failure groups contain only voting disk data, no user data is stored
# Example:
#       oracle.install.asm.gimrDG.quorumFailureGroupNames=FGName1,FGName2
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.quorumFailureGroupNames=
#-------------------------------------------------------------------------------
# Configure AFD - ASM Filter Driver
# Applicable only for FLEX_ASM_STORAGE option
# Specify 'true' if you want to configure AFD, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.asm.configureAFD=false
#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false
################################################################################
#                                                                              #
#                             SECTION H - UPGRADE                              #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify whether to ignore down nodes during upgrade operation.
# Value should be 'true' to ignore down nodes otherwise specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.ignoreDownNodes=false
################################################################################
#                                                                              #
#                               MANAGEMENT OPTIONS                             #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify the management option to use for managing Oracle Grid Infrastructure
# Options are:
# 1. CLOUD_CONTROL - If you want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control.
# 2. NONE   -If you do not want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control.
#-------------------------------------------------------------------------------
oracle.install.config.managementOption=NONE
#-------------------------------------------------------------------------------
# Specify the OMS host to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.omsHost=
#-------------------------------------------------------------------------------
# Specify the OMS port to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.omsPort=0
#-------------------------------------------------------------------------------
# Specify the EM Admin user name to use to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.emAdminUser=
#-------------------------------------------------------------------------------
# Specify the EM Admin password to use to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.emAdminPassword=
################################################################################
#                                                                              #
#                      Root script execution configuration                     #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------------------------------
# Specify the root script execution mode.
#
#   - true  : To execute the root script automatically by using the appropriate configuration methods.
#   - false : To execute the root script manually.
#
# If this option is selected, password should be specified on the console.
#-------------------------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.executeRootScript=false
#--------------------------------------------------------------------------------------
# Specify the configuration method to be used for automatic root script execution.
#
# Following are the possible choices:
#   - ROOT
#   - SUDO
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.configMethod=
#--------------------------------------------------------------------------------------
# Specify the absolute path of the sudo program.
#
# Applicable only when SUDO configuration method was chosen.
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.sudoPath=
#--------------------------------------------------------------------------------------
# Specify the name of the user who is in the sudoers list.
#
# Applicable only when SUDO configuration method was chosen.
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.sudoUserName=
#--------------------------------------------------------------------------------------
# Specify the nodes batch map.
#
# This should be a comma separated list of node:batch pairs.
# During upgrade, you can sequence the automatic execution of root scripts
# by pooling the nodes into batches.
# A maximum of three batches can be specified.
# Installer will execute the root scripts on all the nodes in one batch before
# proceeding to next batch.
# Root script execution on the local node must be in Batch 1.
# Only one type of node role can be used for each batch.
# Root script execution should be done first in all HUB nodes and then, when
# existent, in all the LEAF nodes.
#
# Examples:
# 1. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:2,HUBNode3:2,LEAFNode4:3
# 2. oracle.install.crs.config.batchinfo=HUBNode1:1,LEAFNode2:2,LEAFNode3:2,LEAFNode4:2
# 3. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:1,LEAFNode3:2,LEAFNode4:3
#
# Applicable only for UPGRADE install option.
#--------------------------------------------------------------------------------------
oracle.install.crs.config.batchinfo=
################################################################################
#                                                                              #
#                           APPLICATION CLUSTER OPTIONS                        #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify the Virtual hostname to configure virtual access for your Application
# The value to be specified for Virtual hostname is optional.
#-------------------------------------------------------------------------------
oracle.install.crs.app.applicationAddress=
[grid@rac1 DATASYNCXML]$
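As the header of the response file itself warns, it holds plain-text passwords once the SYSASM/ASMSNMP fields are filled in, so restricting it to the installing user is worthwhile (the path below assumes the file stays in grid's DATASYNCXML directory, as in the prompt above):

chmod 600 /home/grid/DATASYNCXML/grid2.rsp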

--Oracle RAC Install

[grid@rac1 grid]$ pwd
/grid/12.2.0.1/grid
[grid@rac1 grid]$ ./gridSetup.sh -silent -responseFile /home/grid/DATASYNCXML/gi.rsp -ignorePrereq
Launching Oracle Grid Infrastructure Setup Wizard...
[WARNING] [INS-30011] The SYS password entered does not conform to the Oracle recommended standards.
   CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
   ACTION: Provide a password that conforms to the Oracle recommended standards.
[WARNING] [INS-30011] The ASMSNMP password entered does not conform to the Oracle recommended standards.
   CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
   ACTION: Provide a password that conforms to the Oracle recommended standards.
[WARNING] [INS-41808] Possible invalid choice for OSASM Group.
   CAUSE: The name of the group you selected for the OSASM group is commonly used to grant other system privileges (For example: asmdba, asmoper, dba, oper).
   ACTION: Oracle recommends that you designate asmadmin as the OSASM group.
[WARNING] [INS-41809] Possible invalid choice for OSDBA Group.
   CAUSE: The group name you selected as the OSDBA for ASM group is commonly used for Oracle Database administrator privileges.
   ACTION: Oracle recommends that you designate asmdba as the OSDBA for ASM group, and that the group should not be the same group as an Oracle Database OSDBA group.
[WARNING] [INS-41810] Possible invalid choice for OSOPER Group.
   CAUSE: The group name you selected as the OSOPER for ASM group is commonly used for Oracle Database administrator privileges.
   ACTION: Oracle recommends that you designate asmoper as the OSOPER for ASM group, and that the group should not be the same group as an Oracle Database OSOPER group.
[WARNING] [INS-41813] OSDBA for ASM, OSOPER for ASM, and OSASM are the same OS group.
   CAUSE: The group you selected for granting the OSDBA for ASM group for database access, and the OSOPER for ASM group for startup and shutdown of Oracle ASM, is the same group as the OSASM group, whose members have SYSASM privileges on Oracle ASM.
   ACTION: Choose different groups as the OSASM, OSDBA for ASM, and OSOPER for ASM groups.
[WARNING] [INS-41875] Oracle ASM Administrator (OSASM) Group specified is same as the users primary group.
   CAUSE: Operating system group dba specified for OSASM Group is same as the users primary group.
   ACTION: It is not recommended to have OSASM group same as primary group of user as it becomes the inventory group. Select any of the group other than the primary group to avoid misconfiguration.
[WARNING] [INS-40109] The specified Oracle Base location is not empty on this server.
   ACTION: Specify an empty location for Oracle Base.
You can find the log of this install session at:
 /tmp/GridSetupActions2021-05-23_12-55-24PM/gridSetupActions2021-05-23_12-55-24PM.log
As a root user, execute the following script(s):
        1. /grid/oraInventory/orainstRoot.sh
        2. /grid/12.2.0.1/grid/root.sh
Execute /grid/oraInventory/orainstRoot.sh on the following nodes:
[rac1, rac2]
Execute /grid/12.2.0.1/grid/root.sh on the following nodes:
[rac1, rac2]
Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes.
Successfully Setup Software.
As install user, execute the following command to complete the configuration.
        /grid/12.2.0.1/grid/gridSetup.sh -executeConfigTools -responseFile /home/grid/DATASYNCXML/gi.rsp [-silent]
Moved the install session logs to:
 /grid/oraInventory/logs/GridSetupActions2021-05-23_12-55-24PM
[grid@rac1 grid]$
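The installer output above asks for the root scripts to be run as root in a fixed order: orainstRoot.sh on both nodes, then root.sh on the local node (rac1) and only afterwards on rac2. A sketch of the sequence; orainstRoot.sh is not shown in the transcript below but is still required:

# as root
/grid/oraInventory/orainstRoot.sh   # on rac1, then on rac2
/grid/12.2.0.1/grid/root.sh         # on rac1 first; on rac2 only after rac1 completes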
---Run in a separate root session
[root@rac1 ~]# /grid/12.2.0.1/grid/root.sh
Check /grid/12.2.0.1/grid/install/root_rac1.localdomain_2021-05-23_13-03-41-698499258.log for the output of root script
[root@rac1 ~]#

[root@rac2 ~]# /grid/12.2.0.1/grid/root.sh
Check /grid/12.2.0.1/grid/install/root_rac2.localdomain_2021-05-23_13-24-02-651981142.log for the output of root script
[root@rac2 ~]#
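Each root.sh writes its output to the log file named above; a quick check that it completed cleanly on each node before continuing (the grep pattern below is only a heuristic):

grep -i "succeeded\|error\|fail" /grid/12.2.0.1/grid/install/root_*.log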
[grid@rac1 ~]$ /grid/12.2.0.1/grid/gridSetup.sh -executeConfigTools -responseFile /home/grid/DATASYNCXML/gi.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...
You can find the logs of this session at:
/grid/oraInventory/logs/GridSetupActions2021-05-23_01-36-23PM
Configuration failed.
[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
[grid@rac1 ~]$
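INS-43080 only reports that one of the configuration assistants failed; the session log directory printed above shows which one. A sketch of how to narrow it down (the individual file names inside the directory are assumptions):

cd /grid/oraInventory/logs/GridSetupActions2021-05-23_01-36-23PM
grep -li "error\|fail" *.log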
---Status after root.sh completes on node 1
[grid@rac1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac1                     STABLE
ora.ASMNET2LSNR_ASM.lsnr
               ONLINE  ONLINE       rac1                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
ora.OCRVOTE.dg
               ONLINE  ONLINE       rac1                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac1                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.MGMTLSNR
      1        OFFLINE OFFLINE                               STABLE
ora.asm
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac1                     STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
--------------------------------------------------------------------------------
[grid@rac1 ~]$
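At this stage only rac1 has joined the cluster; the one-node stack can be sanity-checked with standard clusterware commands (not part of the original session):

crsctl check cluster -all   # CRS, CSS and EVM status per node
olsnodes -n -s              # node names, numbers and active status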
---Status after root.sh completes on node 2
[grid@rac1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ASMNET2LSNR_ASM.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.OCRVOTE.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac1                     STABLE
               OFFLINE OFFLINE      rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.MGMTLSNR
      1        OFFLINE OFFLINE                               STABLE
ora.asm
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac1                     STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
--------------------------------------------------------------------------------
[grid@rac1 ~]$
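With both nodes registered, the shared storage can be confirmed as well: the OCRVOTE disk group should be mounted by the ASM instance on each node and should hold the voting file. The commands below are verification steps assumed here, not shown in the original session (run as grid):

asmcmd lsdg                 # OCRVOTE should show as MOUNTED
crsctl query css votedisk   # voting file located on /dev/oracleasm/asm-disk6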
---Environment after CRS installation
[grid@rac1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ASMNET2LSNR_ASM.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.OCRVOTE.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.chad
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac1                     STABLE
               OFFLINE OFFLINE      rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       rac1                     169.254.36.171 192.1
                                                             68.200.20 192.168.10
                                                             0.20,STABLE
ora.asm
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac1                     STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       rac1                     Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
--------------------------------------------------------------------------------
[grid@rac1 ~]$
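The final state above can be double-checked with the standard post-install verifications; these commands are not part of the original transcript:

srvctl config scan          # SCAN name rac-scan and its VIP
srvctl status nodeapps      # node VIPs, network and ONS on rac1/rac2
/grid/12.2.0.1/grid/bin/cluvfy stage -post crsinst -n rac1,rac2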

[Oracle Database Install]

-RSP File

[oracle@rac1 DATASYNCXML]$ more db.rsp
####################################################################
## Copyright(c) Oracle Corporation 1998,2017. All rights reserved.##
##                                                                ##
## Specify values for the variables listed below to customize     ##
## your installation.                                             ##
##                                                                ##
## Each variable is associated with a comment. The comment        ##
## can help to populate the variables with the appropriate        ##
## values.                                                        ##
##                                                                ##
## IMPORTANT NOTE: This file contains plain text passwords and    ##
## should be secured to have read permission only by oracle user  ##
## or db administrator who owns this installation.                ##
##                                                                ##
####################################################################
#-------------------------------------------------------------------------------
# Do not change the following system generated value.
#-------------------------------------------------------------------------------
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v12.2.0
#-------------------------------------------------------------------------------
# Specify the installation option.
# It can be one of the following:
#   - INSTALL_DB_SWONLY
#   - INSTALL_DB_AND_CONFIG
#   - UPGRADE_DB
#-------------------------------------------------------------------------------
oracle.install.option=INSTALL_DB_SWONLY
#-------------------------------------------------------------------------------
# Specify the Unix group to be set for the inventory directory.
#-------------------------------------------------------------------------------
UNIX_GROUP_NAME=dba
#-------------------------------------------------------------------------------
# Specify the location which holds the inventory files.
# This is an optional parameter if installing on
# Windows based Operating System.
#-------------------------------------------------------------------------------
INVENTORY_LOCATION=/grid/oraInventory
#-------------------------------------------------------------------------------
# Specify the complete path of the Oracle Home.
#-------------------------------------------------------------------------------
ORACLE_HOME=/oracle/product/12.2.0.1/db_1
#-------------------------------------------------------------------------------
# Specify the complete path of the Oracle Base.
#-------------------------------------------------------------------------------
ORACLE_BASE=/oracle/base
#-------------------------------------------------------------------------------
# Specify the installation edition of the component.
#
# The value should contain only one of these choices.
#   - EE     : Enterprise Edition
#   - SE2     : Standard Edition 2
#-------------------------------------------------------------------------------
oracle.install.db.InstallEdition=EE
###############################################################################
#                                                                             #
# PRIVILEGED OPERATING SYSTEM GROUPS                                          #
# ------------------------------------------                                  #
# Provide values for the OS groups to which SYSDBA and SYSOPER privileges     #
# needs to be granted. If the install is being performed as a member of the   #
# group "dba", then that will be used unless specified otherwise below.       #
#                                                                             #
# The value to be specified for OSDBA and OSOPER group is only for UNIX based #
# Operating System.                                                           #
#                                                                             #
###############################################################################
#------------------------------------------------------------------------------
# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges.
#-------------------------------------------------------------------------------
oracle.install.db.OSDBA_GROUP=dba
#------------------------------------------------------------------------------
# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges.
# The value to be specified for OSOPER group is optional.
#------------------------------------------------------------------------------
oracle.install.db.OSOPER_GROUP=dba
#------------------------------------------------------------------------------
# The OSBACKUPDBA_GROUP is the OS group which is to be granted SYSBACKUP privileges.
#------------------------------------------------------------------------------
oracle.install.db.OSBACKUPDBA_GROUP=dba
#------------------------------------------------------------------------------
# The OSDGDBA_GROUP is the OS group which is to be granted SYSDG privileges.
#------------------------------------------------------------------------------
oracle.install.db.OSDGDBA_GROUP=dba
#------------------------------------------------------------------------------
# The OSKMDBA_GROUP is the OS group which is to be granted SYSKM privileges.
#------------------------------------------------------------------------------
oracle.install.db.OSKMDBA_GROUP=dba
#------------------------------------------------------------------------------
# The OSRACDBA_GROUP is the OS group which is to be granted SYSRAC privileges.
#------------------------------------------------------------------------------
oracle.install.db.OSRACDBA_GROUP=dba
###############################################################################
#                                                                             #
#                               Grid Options                                  #
#                                                                             #
###############################################################################
#------------------------------------------------------------------------------
# Specify the type of Real Application Cluster Database
#
#   - ADMIN_MANAGED: Admin-Managed
#   - POLICY_MANAGED: Policy-Managed
#
# If left unspecified, default will be ADMIN_MANAGED
#------------------------------------------------------------------------------
oracle.install.db.rac.configurationType=
#------------------------------------------------------------------------------
# Value is required only if RAC database type is ADMIN_MANAGED
#
# Specify the cluster node names selected during the installation.
# Leaving it blank will result in install on local server only (Single Instance)
#
# Example : oracle.install.db.CLUSTER_NODES=node1,node2
#------------------------------------------------------------------------------
oracle.install.db.CLUSTER_NODES=rac1,rac2
#------------------------------------------------------------------------------
# This variable is used to enable or disable RAC One Node install.
#
#   - true  : Value of RAC One Node service name is used.
#   - false : Value of RAC One Node service name is not used.
#
# If left blank, it will be assumed to be false.
#------------------------------------------------------------------------------
oracle.install.db.isRACOneInstall=false
#------------------------------------------------------------------------------
# Value is required only if oracle.install.db.isRACOneInstall is true.
#
# Specify the name for RAC One Node Service
#------------------------------------------------------------------------------
oracle.install.db.racOneServiceName=
#------------------------------------------------------------------------------
# Value is required only if RAC database type is POLICY_MANAGED
#
# Specify a name for the new Server pool that will be configured
# Example : oracle.install.db.rac.serverpoolName=pool1
#------------------------------------------------------------------------------
oracle.install.db.rac.serverpoolName=
#------------------------------------------------------------------------------
# Value is required only if RAC database type is POLICY_MANAGED
#
# Specify a number as cardinality for the new Server pool that will be configured
# Example : oracle.install.db.rac.serverpoolCardinality=2
#------------------------------------------------------------------------------
oracle.install.db.rac.serverpoolCardinality=0
###############################################################################
#                                                                             #
#                        Database Configuration Options                       #
#                                                                             #
###############################################################################
#-------------------------------------------------------------------------------
# Specify the type of database to create.
# It can be one of the following:
#   - GENERAL_PURPOSE
#   - DATA_WAREHOUSE
# GENERAL_PURPOSE: A starter database designed for general purpose use or transaction-heavy applications.
# DATA_WAREHOUSE : A starter database optimized for data warehousing applications.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.type=GENERAL_PURPOSE
#-------------------------------------------------------------------------------
# Specify the Starter Database Global Database Name.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.globalDBName=
#-------------------------------------------------------------------------------
# Specify the Starter Database SID.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.SID=
#-------------------------------------------------------------------------------
# Specify whether the database should be configured as a Container database.
# The value can be either "true" or "false". If left blank it will be assumed
# to be "false".
#-------------------------------------------------------------------------------
oracle.install.db.ConfigureAsContainerDB=false
#-------------------------------------------------------------------------------
# Specify the  Pluggable Database name for the pluggable database in Container Database.
#-------------------------------------------------------------------------------
oracle.install.db.config.PDBName=
#-------------------------------------------------------------------------------
# Specify the Starter Database character set.
#
#  One of the following
#  AL32UTF8, WE8ISO8859P15, WE8MSWIN1252, EE8ISO8859P2,
#  EE8MSWIN1250, NE8ISO8859P10, NEE8ISO8859P4, BLT8MSWIN1257,
#  BLT8ISO8859P13, CL8ISO8859P5, CL8MSWIN1251, AR8ISO8859P6,
#  AR8MSWIN1256, EL8ISO8859P7, EL8MSWIN1253, IW8ISO8859P8,
#  IW8MSWIN1255, JA16EUC, JA16EUCTILDE, JA16SJIS, JA16SJISTILDE,
#  KO16MSWIN949, ZHS16GBK, TH8TISASCII, ZHT32EUC, ZHT16MSWIN950,
#  ZHT16HKSCS, WE8ISO8859P9, TR8MSWIN1254, VN8MSWIN1258
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.characterSet=
#------------------------------------------------------------------------------
# This variable should be set to true if Automatic Memory Management
# in Database is desired.
# If Automatic Memory Management is not desired, and memory allocation
# is to be done manually, then set it to false.
#------------------------------------------------------------------------------
oracle.install.db.config.starterdb.memoryOption=false
#-------------------------------------------------------------------------------
# Specify the total memory allocation for the database. Value(in MB) should be
# at least 256 MB, and should not exceed the total physical memory available
# on the system.
# Example: oracle.install.db.config.starterdb.memoryLimit=512
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.memoryLimit=
#-------------------------------------------------------------------------------
# This variable controls whether to load Example Schemas onto
# the starter database or not.
# The value can be either "true" or "false". If left blank it will be assumed
# to be "false".
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.installExampleSchemas=false
###############################################################################
#                                                                             #
# Passwords can be supplied for the following four schemas in the             #
# starter database:                                                           #
#   SYS                                                                       #
#   SYSTEM                                                                    #
#   DBSNMP (used by Enterprise Manager)                                       #
#                                                                             #
# Same password can be used for all accounts (not recommended)                #
# or different passwords for each account can be provided (recommended)       #
#                                                                             #
###############################################################################
#------------------------------------------------------------------------------
# This variable holds the password that is to be used for all schemas in the
# starter database.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.password.ALL=
#-------------------------------------------------------------------------------
# Specify the SYS password for the starter database.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.password.SYS=
#-------------------------------------------------------------------------------
# Specify the SYSTEM password for the starter database.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.password.SYSTEM=
#-------------------------------------------------------------------------------
# Specify the DBSNMP password for the starter database.
# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.password.DBSNMP=
#-------------------------------------------------------------------------------
# Specify the PDBADMIN password required for creation of Pluggable Database in the Container Database.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.password.PDBADMIN=
#-------------------------------------------------------------------------------
# Specify the management option to use for managing the database.
# Options are:
# 1. CLOUD_CONTROL - If you want to manage your database with Enterprise Manager Cloud Control along with Database Express.
# 2. DEFAULT   -If you want to manage your database using the default Database Express option.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.managementOption=DEFAULT
#-------------------------------------------------------------------------------
# Specify the OMS host to connect to Cloud Control.
# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.omsHost=
#-------------------------------------------------------------------------------
# Specify the OMS port to connect to Cloud Control.
# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.omsPort=0
#-------------------------------------------------------------------------------
# Specify the EM Admin user name to use to connect to Cloud Control.
# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.emAdminUser=
#-------------------------------------------------------------------------------
# Specify the EM Admin password to use to connect to Cloud Control.
# Applicable only when oracle.install.db.config.starterdb.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.emAdminPassword=
###############################################################################
#                                                                             #
# SPECIFY RECOVERY OPTIONS                                                    #
# ------------------------------------                                        #
# Recovery options for the database can be mentioned using the entries below  #
#                                                                             #
###############################################################################
#------------------------------------------------------------------------------
# This variable is to be set to false if database recovery is not required. Else
# this can be set to true.
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.enableRecovery=false
#-------------------------------------------------------------------------------
# Specify the type of storage to use for the database.
# It can be one of the following:
#   - FILE_SYSTEM_STORAGE
#   - ASM_STORAGE
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.storageType=
#-------------------------------------------------------------------------------
# Specify the database file location which is a directory for datafiles, control
# files, redo logs.
#
# Applicable only when oracle.install.db.config.starterdb.storage=FILE_SYSTEM_STORAGE
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.fileSystemStorage.dataLocation=
#-------------------------------------------------------------------------------
# Specify the recovery location.
#
# Applicable only when oracle.install.db.config.starterdb.storage=FILE_SYSTEM_STORAGE
#-------------------------------------------------------------------------------
oracle.install.db.config.starterdb.fileSystemStorage.recoveryLocation=
#-------------------------------------------------------------------------------
# Specify the existing ASM disk groups to be used for storage.
#
# Applicable only when oracle.install.db.config.starterdb.storageType=ASM_STORAGE
#-------------------------------------------------------------------------------
oracle.install.db.config.asm.diskGroup=
#-------------------------------------------------------------------------------
# Specify the password for ASMSNMP user of the ASM instance.
#
# Applicable only when oracle.install.db.config.starterdb.storage=ASM_STORAGE
#-------------------------------------------------------------------------------
oracle.install.db.config.asm.ASMSNMPPassword=
#------------------------------------------------------------------------------
# Specify the My Oracle Support Account Username.
#
#  Example   : MYORACLESUPPORT_USERNAME=abc@oracle.com
#------------------------------------------------------------------------------
MYORACLESUPPORT_USERNAME=
#------------------------------------------------------------------------------
# Specify the My Oracle Support Account Username password.
#
# Example    : MYORACLESUPPORT_PASSWORD=password
#------------------------------------------------------------------------------
MYORACLESUPPORT_PASSWORD=
#------------------------------------------------------------------------------
# Specify whether to enable the user to set the password for
# My Oracle Support credentials. The value can be either true or false.
# If left blank it will be assumed to be false.
#
# Example    : SECURITY_UPDATES_VIA_MYORACLESUPPORT=true
#------------------------------------------------------------------------------
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false
#------------------------------------------------------------------------------
# Specify whether user doesn't want to configure Security Updates.
# The value for this variable should be true if you don't want to configure
# Security Updates, false otherwise.
#
# The value can be either true or false. If left blank it will be assumed
# to be true.
#
# Example    : DECLINE_SECURITY_UPDATES=false
#------------------------------------------------------------------------------
DECLINE_SECURITY_UPDATES=true
#------------------------------------------------------------------------------
# Specify the Proxy server name. Length should be greater than zero.
#
# Example    : PROXY_HOST=proxy.domain.com
#------------------------------------------------------------------------------
PROXY_HOST=
#------------------------------------------------------------------------------
# Specify the proxy port number. Should be Numeric and at least 2 chars.
#
# Example    : PROXY_PORT=25
#------------------------------------------------------------------------------
PROXY_PORT=
#------------------------------------------------------------------------------
# Specify the proxy user name. Leave PROXY_USER and PROXY_PWD
# blank if your proxy server requires no authentication.
#
# Example    : PROXY_USER=username
#------------------------------------------------------------------------------
PROXY_USER=
#------------------------------------------------------------------------------
# Specify the proxy password. Leave PROXY_USER and PROXY_PWD
# blank if your proxy server requires no authentication.
#
# Example    : PROXY_PWD=password
#------------------------------------------------------------------------------
PROXY_PWD=
#------------------------------------------------------------------------------
# Specify the Oracle Support Hub URL.
#
# Example    : COLLECTOR_SUPPORTHUB_URL=https://orasupporthub.company.com:8080/
#------------------------------------------------------------------------------
COLLECTOR_SUPPORTHUB_URL=
[oracle@rac1 DATASYNCXML]$
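
Because the response file may carry plain-text passwords (as its own header warns), it is worth restricting its permissions before running the installer; a minimal sketch:

[oracle@rac1 DATASYNCXML]$ chmod 600 db.rsp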

-Oracle Database Install

[oracle@rac1 database]$ pwd
/STAGE/database
[oracle@rac1 database]$ ./runInstaller -silent -responsefile /home/oracle/DATASYNCXML/gtdb.rsp -showProgress -waitforcompletion -ignorePrereq
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 500 MB.   Actual 38713 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 16356 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2021-05-23_02-52-56PM. Please wait ...[WARNING] [INS-32018] The selected Oracle home is outside of Oracle base.
   ACTION: Oracle recommends installing Oracle software within the Oracle base directory. Adjust the Oracle home or Oracle base accordingly.
You can find the log of this install session at:
 /grid/oraInventory/logs/installActions2021-05-23_02-52-56PM.log
Prepare in progress.
..................................................   7% Done.
Prepare successful.
Copy files in progress.
..................................................   14% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   36% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..........
Copy files successful.
Link binaries in progress.
....................
Link binaries successful.
Setup files in progress.
....................
Setup files successful.
Setup Inventory in progress.
Setup Inventory successful.
Finish Setup successful.
The installation of Oracle Database 12c was successful.
Please check '/grid/oraInventory/logs/silentInstall2021-05-23_02-52-56PM.log' for more details.
Copy Files to Remote Nodes in progress.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
Copy Files to Remote Nodes successful.
Prepare in progress.
Prepare successful.
..........
Setup in progress.
....................
Setup successful.
The Cluster Node Addition of /oracle/product/12.2.0.1/db_1 was successful.
Please check '/grid/oraInventory/logs/silentInstall2021-05-23_02-52-56PM.log' for more details.
Setup Oracle Base in progress.
Setup Oracle Base successful.
..................................................   97% Done.
As a root user, execute the following script(s):
        1. /oracle/product/12.2.0.1/db_1/root.sh
Execute /oracle/product/12.2.0.1/db_1/root.sh on the following nodes:
[rac1, rac2]
..................................................   100% Done.
Successfully Setup Software.
[oracle@rac1 database]$
--Run in a separate session as the root user
[root@rac1 ~]# /oracle/product/12.2.0.1/db_1/root.sh
Check /oracle/product/12.2.0.1/db_1/install/root_rac1.localdomain_2021-05-23_15-12-08-232120257.log for the output of root script
[root@rac1 ~]# 
[root@rac2 ~]# /oracle/product/12.2.0.1/db_1/root.sh
Check /oracle/product/12.2.0.1/db_1/install/root_rac2.localdomain_2021-05-23_15-12-39-590047557.log for the output of root script
[root@rac2 ~]#
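
With root.sh finished on both nodes, the software-only install can be sanity-checked from the new home; a minimal sketch (output omitted):

[oracle@rac1 ~]$ /oracle/product/12.2.0.1/db_1/bin/sqlplus -V
[oracle@rac1 ~]$ /oracle/product/12.2.0.1/db_1/OPatch/opatch lsinventory -oh /oracle/product/12.2.0.1/db_1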

This post describes how to convert an Oracle Restart (single-node, standalone Grid Infrastructure) configuration into Oracle RAC.

To state the conclusion up front: deconfigure and deinstall the existing Oracle Restart stack, then install and configure Oracle RAC in its place.

[Backing Up the Existing Oracle Restart Environment]

-Back up the existing Oracle ASM environment

---ASM environment backup
[grid@rac1 DATASYNCXML]$ sqlplus "/as sysasm"

SQL*Plus: Release 12.2.0.1.0 Production on Sat May 15 22:59:52 2021

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +DG1/ASM/ASMPARAMETERFILE/regi
                                                 stry.253.1072565469
SQL> create pfile='/home/grid/DATASYNCXML/initASM.ora_20210514' from spfile;

File created.

SQL> !pwd
/home/grid/DATASYNCXML

SQL> !ls
initASM.ora_20210514

SQL> select GROUP_NUMBER, NAME, PATH from v$asm_disk;
GROUP_NUMBER NAME                           PATH
------------ ------------------------------ --------------------------------------------------
           0                                /dev/oracleasm/asm-disk6
           0                                /dev/oracleasm/asm-disk3
           0                                /dev/oracleasm/asm-disk2
           0                                /dev/oracleasm/asm-disk1
           2 DG2_0000                       /dev/oracleasm/asm-disk5
           1 DG1_0000                       /dev/oracleasm/asm-disk4

6 rows selected.


SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
[grid@rac1 DATASYNCXML]$
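
In addition to the pfile, the disk group metadata (attributes, templates, directories) can be captured with asmcmd md_backup, so it could later be re-created with md_restore if needed; a minimal sketch, assuming the same backup directory and a file name of my choosing:

[grid@rac1 DATASYNCXML]$ asmcmd md_backup /home/grid/DATASYNCXML/asm_md_20210514 -G DG1,DG2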

-Back up the existing database environment

[oracle@rac1 DATASYNCXML]$ sqlplus "/as sysdba"
SQL> show parameter spfile;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +DG1/TEST/PARAMETERFILE/spfile
                                                 .266.1072569847
SQL> create pfile='/home/oracle/DATASYNCXML/initTEST.ora_20210515' from spfile;
File created.
SQL> alter database backup controlfile to trace as '/home/oracle/DATASYNCXML/cr_con.sql_20210515';
Database altered.
SQL> !pwd
/home/oracle/DATASYNCXML
SQL> !ls
cr_con.sql_20210515  initTEST.ora_20210515
SQL> exit
[oracle@rac1 DATASYNCXML]$
[oracle@rac1 dbs]$ pwd
/oracle/product/12.2.0.1/db_1/dbs
[oracle@rac1 dbs]$ ls
hc_TEST.dat  init.ora  lkTEST  orapwTEST
[oracle@rac1 dbs]$ cp orapwTEST /home/oracle/DATASYNCXML/
[oracle@rac1 dbs]$ cd $ORACLE_HOME/network/admin
[oracle@rac1 admin]$ pwd
/oracle/product/12.2.0.1/db_1/network/admin
[oracle@rac1 admin]$ ls
samples  shrept.lst  tnsnames.ora
[oracle@rac1 admin]$ cp tnsnames.ora /home/oracle/DATASYNCXML/
[oracle@rac1 admin]$ cd /home/oracle/DATASYNCXML/
[oracle@rac1 DATASYNCXML]$ ls
cr_con.sql_20210515  initTEST.ora_20210515  orapwTEST  tnsnames.ora
[oracle@rac1 DATASYNCXML]$
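
The datafiles themselves remain in the +DG1/+DG2 disk groups, which are not dropped during the deconfig below, but an RMAN backup before the conversion is a sensible extra safeguard. A minimal sketch, assuming the database runs in ARCHIVELOG mode and /home/oracle/DATASYNCXML has enough space:

[oracle@rac1 ~]$ rman target /
RMAN> backup as compressed backupset database format '/home/oracle/DATASYNCXML/db_%U.bkp';
RMAN> backup current controlfile format '/home/oracle/DATASYNCXML/ctl_%U.bkp';
RMAN> exit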

-Check the current GRID environment

[root@rac1 bin]# pwd
/grid/12.2.0.1/grid/bin
[root@rac1 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :        228
         Available space (kbytes) :     409340
         ID                       : 1285054990
         Device/File Name         : /grid/12.2.0.1/grid/cdata/localhost/local.ocr
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded
[root@rac1 bin]#
[root@rac1 bin]# cd /etc
[root@rac1 etc]# ls -al ora*
-rw-r--r--. 1 root   root   32 Oct  1  2020 oracle-release
-rw-r--r--. 1 root   root   48 May 14 22:42 oraInst.loc
-rw-rw-r--. 1 oracle dba   805 May 14 23:10 oratab
oracle:
total 3544
drwxr-x---.   6 root dba     4096 May 14 22:44 .
drwxr-xr-x. 148 root root    8192 May 15 22:49 ..
drwxrwx---.   2 grid dba        6 May 14 22:42 lastgasp
drwxrwx---.   2 root dba        6 May 14 22:42 maps
-rw-r-----.   1 grid dba       76 May 14 22:44 ocr.loc
-rw-r--r--.   1 root root      16 May 14 22:44 ocr.loc.orig
-rw-r-----.   1 root dba       88 May 14 22:44 olr.loc
-rw-r--r--.   1 root root       0 May 14 22:44 olr.loc.orig
drwxrwx---.   5 root dba       44 May 14 22:42 oprocd
drwxr-x---.   3 root dba       18 May 14 22:42 scls_scr
-rws--x---.   1 root dba  3598480 May 14 22:42 setasmgid
[root@rac1 etc]#
[root@rac1 etc]# cd oracle
[root@rac1 oracle]# ls
lastgasp  maps  ocr.loc  ocr.loc.orig  olr.loc  olr.loc.orig  oprocd  scls_scr  setasmgid
[root@rac1 oracle]# more ocr.loc
ocrconfig_loc=/grid/12.2.0.1/grid/cdata/localhost/local.ocr
local_only=TRUE
[root@rac1 oracle]#
[root@rac1 oracle]# more olr.loc
olrconfig_loc=/grid/12.2.0.1/grid/cdata/localhost/rac1.olr
crs_home=/grid/12.2.0.1/grid
[root@rac1 oracle]#
[root@rac1 oracle]# cd ..
[root@rac1 etc]# more oraInst.loc
inventory_loc=/grid/oraInventory
inst_group=dba
[root@rac1 etc]# more oratab
# This file is used by ORACLE utilities.  It is created by root.sh
# and updated by either Database Configuration Assistant while creating
# a database or ASM Configuration Assistant while creating ASM instance.

# A colon, ':', is used as the field terminator.  A new line terminates
# the entry.  Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
#   $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home
# directory of the database respectively.  The third field indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
+ASM:/grid/12.2.0.1/grid:N
TEST:/oracle/product/12.2.0.1/db_1:N
[root@rac1 etc]#
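
Since the deconfig/deinstall steps rewrite these files, keeping root-owned copies of them next to the pfile and password-file backups can help; a minimal sketch (the backup directory name is an assumption):

[root@rac1 ~]# mkdir -p /root/restart_backup_20210515
[root@rac1 ~]# cp -p /etc/oratab /etc/oraInst.loc /root/restart_backup_20210515/
[root@rac1 ~]# cp -rp /etc/oracle /root/restart_backup_20210515/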

[Oracle Restart Deconfig & Deinstall]

-Check the status before starting

[root@rac1 install]# cd /grid/12.2.0.1/grid/crs/install
[root@rac1 install]# pwd
/grid/12.2.0.1/grid/crs/install
[root@rac1 install]# /grid/12.2.0.1/grid/bin/crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DG1.dg
               ONLINE  ONLINE       rac1                     STABLE
ora.DG2.dg
               ONLINE  ONLINE       rac1                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
ora.asm
               ONLINE  ONLINE       rac1                     Started,STABLE
ora.ons
               OFFLINE OFFLINE      rac1                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       rac1                     STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       rac1                     STABLE
ora.test.db
      1        OFFLINE OFFLINE                               Instance Shutdown,ST
                                                             ABLE
--------------------------------------------------------------------------------
[root@rac1 install]#
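
In the listing above the TEST database is already shut down; if it were still running, it could be stopped cleanly through srvctl before deconfiguring. A minimal sketch, assuming the database unique name TEST registered in /etc/oratab:

[oracle@rac1 ~]$ srvctl stop database -db TEST
[grid@rac1 ~]$ srvctl stop listener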

-Oracle Restart Deconfig

Issue #1
#./roothas.pl -deconfig -force -verbose -keepdg ==> roothas.pl for Oracle Restart does not support the -keepdg option

Issue #2
Running roothas.pl with the system Perl fails because of a Perl version/environment mismatch; run it with the Perl bundled in the Grid home instead, as shown below.
[root@rac1 install]# ./roothas.pl -deconfig -force -verbose
Can't locate Env.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 . . ./../../perl/lib) at crsinstall.pm line 286.
BEGIN failed--compilation aborted at crsinstall.pm line 286.
Compilation failed in require at ./roothas.pl line 97.
BEGIN failed--compilation aborted at ./roothas.pl line 97.
[root@rac1 install]#
[root@rac1 install]# /grid/12.2.0.1/grid/perl/bin/perl ./roothas.pl -deconfig -force -verbose
Using configuration parameter file: ./crsconfig_params
The log of current session can be found at:
  /grid/base/crsdata/rac1/crsconfig/hadeconfig.log
2021/05/15 23:17:28 CLSRSC-332: CRS resources for listeners are still configured
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.DG1.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.DG2.dg' on 'rac1'
CRS-2677: Stop of 'ora.DG1.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.DG2.dg' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2021/05/15 23:20:33 CLSRSC-337: Successfully deconfigured Oracle Restart stack
[root@rac1 install]#
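
After the deconfig completes, it can be confirmed that the High Availability Services stack is really down before moving on to deinstall; a minimal sketch (crsctl check has is expected to fail with a CRS-4639-style "could not contact" error at this point):

[root@rac1 ~]# ps -ef | grep ohasd | grep -v grep
[root@rac1 ~]# /grid/12.2.0.1/grid/bin/crsctl check has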

-Oracle Restart Deinstall

[grid@rac1 deinstall]$ id
uid=54321(grid) gid=54322(dba) groups=54322(dba) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[grid@rac1 deinstall]$ pwd
/grid/12.2.0.1/grid/deinstall
[grid@rac1 deinstall]$ ls
bootstrap_files.lst  deinstall         deinstall.pl   jlib        response         utl
bootstrap.pl         deinstall.ouibak  deinstall.xml  readme.txt  sshUserSetup.sh
[grid@rac1 deinstall]$
[grid@rac1 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /grid/oraInventory/logs/
############ ORACLE DECONFIG TOOL START ############
######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##
 Specify a comma-separated list of remote nodes to cleanup :
Checking for existence of the Oracle home location /grid/12.2.0.1/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Standalone Server
Oracle Base selected for deinstall is: /grid/base
Checking for existence of central inventory location /grid/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
## [END] Install check configuration ##
Traces log file: /grid/oraInventory/logs//crsdc_2021-05-15_11-25-26-PM.log
Network Configuration check config START
Network de-configuration trace file location: /grid/oraInventory/logs/netdc_check2021-05-15_11-25-26-PM.log
Specify all Oracle Restart enabled listeners that are to be de-configured. Enter .(dot) to deselect all. [LISTENER]:
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /grid/oraInventory/logs/asmcadc_check2021-05-15_11-25-46-PM.log
ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y
Automatic Storage Management (ASM) instance is detected in this Oracle home /grid/12.2.0.1/grid.
ASM Diagnostic Destination : /grid/base
ASM Diskgroups :
ASM diskstring : <Default>
Diskgroups will not be dropped
 If you want to retain the existing diskgroups or if any of the information detected is incorrect, you can modify by entering 'y'. Do you  want to modify above information (y|n) [n]:
Database Check Configuration START
Database de-configuration trace file location: /grid/oraInventory/logs/databasedc_check2021-05-15_11-26-12-PM.log
Database Check Configuration END
######################### DECONFIG CHECK OPERATION END #########################
####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
Oracle Home selected for deinstall is: /grid/12.2.0.1/grid
Inventory Location where the Oracle home registered is: /grid/oraInventory
Following Oracle Restart enabled listener(s) will be de-configured: LISTENER
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/grid/oraInventory/logs/deinstall_deconfig2021-05-15_11-23-46-PM.out'
Any error messages from this session will be written to: '/grid/oraInventory/logs/deinstall_deconfig2021-05-15_11-23-46-PM.err'
######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /grid/oraInventory/logs/databasedc_clean2021-05-15_11-26-24-PM.log
ASM de-configuration trace file location: /grid/oraInventory/logs/asmcadc_clean2021-05-15_11-26-24-PM.log
ASM Clean Configuration START
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /grid/oraInventory/logs/netdc_clean2021-05-15_11-26-28-PM.log
De-configuring Oracle Restart enabled listener(s): LISTENER
De-configuring listener: LISTENER
    Stopping listener: LISTENER
    Warning: Failed to stop listener. Listener may not be running.
    Deleting listener: LISTENER
    Listener deleted successfully.
Listener de-configured successfully.
De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
######################### DECONFIG CLEAN OPERATION END #########################
####################### DECONFIG CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Following Oracle Restart enabled listener(s) were de-configured successfully: LISTENER
Oracle Restart is stopped and de-configured successfully.
#######################################################################
############# ORACLE DECONFIG TOOL END #############
Using properties file /tmp/deinstall2021-05-15_11-23-31PM/response/deinstall_2021-05-15_11-23-46-PM.rsp
Location of logs /grid/oraInventory/logs/
############ ORACLE DEINSTALL TOOL START ############
####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/grid/oraInventory/logs/deinstall_deconfig2021-05-15_11-23-46-PM.out'
Any error messages from this session will be written to: '/grid/oraInventory/logs/deinstall_deconfig2021-05-15_11-23-46-PM.err'
######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to rac1
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2021-05-15_11-23-31PM/oraInst.loc
Setting oracle.installer.local to false
## [END] Preparing for Deinstall ##
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/grid/12.2.0.1/grid' from the central inventory on the local node : Done
Failed to delete the directory '/grid/12.2.0.1/grid/addnode'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/assistants'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/bin'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/cdata'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/cha'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/clone'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/crs'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/css'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/cv'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/dbjava'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/dbs'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/dc_ocm'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/deinstall'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/demo'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/diagnostics'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/dmu'. Either user has no permission to delete or it is in use.
Failed to delete the file '/grid/12.2.0.1/grid/env.ora'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the directory '/grid/12.2.0.1/grid/evm'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/gpnp'. Either user has no permission to delete or it is in use.
Failed to delete the file '/grid/12.2.0.1/grid/gridSetup.sh'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the directory '/grid/12.2.0.1/grid/has'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/hs'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/install'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/instantclient'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/inventory'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/javavm'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/jdbc'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/jdk'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/jlib'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/ldap'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/lib'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/log'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/md'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/network'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/nls'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/OPatch'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/opmn'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/oracore'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/ord'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/ordim'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/ords'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/oss'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/oui'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/owm'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/perl'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/plsql'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/precomp'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/QOpatch'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/qos'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/racg'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/rdbms'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/relnotes'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/rhp'. Either user has no permission to delete or it is in use.
Failed to delete the file '/grid/12.2.0.1/grid/root.sh'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/12.2.0.1/grid/rootupgrade.sh'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/12.2.0.1/grid/runcluvfy.sh'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the directory '/grid/12.2.0.1/grid/scheduler'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/slax'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/sqlpatch'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/sqlplus'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/srvm'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/suptools'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/tomcat'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/ucp'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/usm'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/utl'. Either user has no permission to delete or it is in use.
Failed to delete the file '/grid/12.2.0.1/grid/welcome.html'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the directory '/grid/12.2.0.1/grid/wlm'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/wwg'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/xag'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/xdk'. Either user has no permission to delete or it is in use.
Failed to delete the file '/grid/12.2.0.1/grid/root.sh.old'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/12.2.0.1/grid/root.sh.ouibak'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/12.2.0.1/grid/rootupgrade.sh.ouibak'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/12.2.0.1/grid/root.sh.old.1'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the directory '/grid/12.2.0.1/grid/cfgtoollogs'. Either user has no permission to delete or it is in use.
Failed to delete the file '/grid/12.2.0.1/grid/oraInst.loc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the directory '/grid/12.2.0.1/grid/auth'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/eons'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/mdns'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/gipc'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/gnsd'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/ohasd'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/ctss'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/crf'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/osysmond'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/ologgerd'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/cdp'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/advmccb'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/acfsccm'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/acfsrm'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/acfsccreg'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/acfs/tunables'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/12.2.0.1/grid/acfs'. The directory is not empty.
Failed to delete the directory '/grid/12.2.0.1/grid'. The directory is not empty.
Delete directory '/grid/12.2.0.1/grid' on the local node : Failed <<<<
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/incident'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/metadata_pv'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/metadata_dgif'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/incpkg'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/sweep'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/cdump'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/stage'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/log/debug'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/log/test'. Either user has no permission to delete or it is in use.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/sqlnet.log'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_11651_140209272951296.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_11651_140209272951296.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_11675_139718512308736.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_11675_139718512308736.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_11768_139688813842944.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_11768_139688813842944.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_12071_140168799908352.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_12071_140168799908352.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_12120_139725718536704.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_12120_139725718536704.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_12238_140577934811648.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_12238_140577934811648.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_12278_139967341953536.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_12278_139967341953536.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_12956_140481210315264.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_12956_140481210315264.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13002_140501554618880.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13002_140501554618880.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13114_139704531800576.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13114_139704531800576.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13148_140281615708672.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13148_140281615708672.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13225_140167616795136.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13225_140167616795136.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13289_140111050756608.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13289_140111050756608.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13345_139993017614848.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13345_139993017614848.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13403_140218814157312.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13403_140218814157312.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13463_140117044482560.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13463_140117044482560.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13691_139664222880256.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13691_139664222880256.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13728_140088226955776.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13728_140088226955776.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13786_140300199977472.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13786_140300199977472.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13846_140599602676224.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13846_140599602676224.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13935_140266809573888.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13935_140266809573888.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13980_140043395428864.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_13980_140043395428864.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_14401_140239951036928.trc'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/trace/ora_14401_140239951036928.trm'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/trace'. The directory is not empty.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/alert/log.xml'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/alert'. The directory is not empty.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/incident'. Either user has no permission to delete or it is in use.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/metadata/ADR_CONTROL.ams'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/metadata/ADR_INVALIDATION.ams'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/metadata/INC_METER_IMPT_DEF.ams'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/metadata/INC_METER_PK_IMPTS.ams'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/metadata'. The directory is not empty.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/metadata_pv'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/metadata_dgif'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/incpkg'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/sweep'. Either user has no permission to delete or it is in use.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/lck/AM_3216668543_3129272988.lck'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/lck/AM_1744845641_3861997533.lck'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/lck/AM_1096102193_3488045378.lck'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the file '/grid/base/diag/clients/user_oracle/host_203307297_107/lck/AM_1096102262_3454819329.lck'. Either the file is in use or there are not enough permissions to delete the file.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/lck'. The directory is not empty.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/cdump'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/stage'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/log/debug'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/log/test'. Either user has no permission to delete or it is in use.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107/log'. The directory is not empty.
Failed to delete the directory '/grid/base/diag/clients/user_oracle/host_203307297_107'. The directory is not empty.
Failed to delete the directory '/grid/base/diag/clients/user_oracle'. The directory is not empty.
Failed to delete the directory '/grid/base/diag/clients'. The directory is not empty.
Failed to delete the directory '/grid/base/diag'. The directory is not empty.
The Oracle Base directory '/grid/base' will not be removed on local node. The directory is not empty.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
## [END] Oracle install clean ##
######################### DEINSTALL CLEAN OPERATION END #########################
####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/grid/12.2.0.1/grid' from the central inventory on the local node.
Failed to delete directory '/grid/12.2.0.1/grid' on the local node.
Oracle Universal Installer cleanup was successful.
Review the permissions and contents of '/grid/base' on nodes(s) 'rac1'.
If there are no Oracle home(s) associated with '/grid/base', manually delete '/grid/base' and its contents.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL TOOL END #############
[grid@rac1 deinstall]$

-Oracle Database Deinstall

[oracle@rac1 deinstall]$ id
uid=54322(oracle) gid=54322(dba) groups=54322(dba) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[oracle@rac1 deinstall]$ pwd
/oracle/product/12.2.0.1/db_1/deinstall
[oracle@rac1 deinstall]$ ls
bootstrap_files.lst  bootstrap.pl  deinstall  deinstall.pl  deinstall.xml  jlib  readme.txt  response  sshUserSetup.sh  utl
[oracle@rac1 deinstall]$
[oracle@rac1 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2021-05-15_11-30-58PM/logs/
############ ORACLE DECONFIG TOOL START ############
######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /oracle/product/12.2.0.1/db_1
Oracle Home type selected for deinstall is: Oracle Single Instance Database
Oracle Base selected for deinstall is: /oracle/base
Checking for existence of central inventory location /grid/oraInventory
## [END] Install check configuration ##
Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2021-05-15_11-30-58PM/logs/netdc_check2021-05-15_11-31-14-PM.log
Network Configuration check config END
Database Check Configuration START
Database de-configuration trace file location: /tmp/deinstall2021-05-15_11-30-58PM/logs/databasedc_check2021-05-15_11-31-14-PM.log
Use comma as separator when specifying list of values as input
Specify the list of database names that are configured in this Oracle home [TEST]:
###### For Database 'TEST' ######
Specify the type of this database (1.Single Instance Database|2.Oracle Restart Enabled Database) [1]:
Specify the diagnostic destination location of the database [/oracle/base/diag/rdbms/test]:
Specify the storage type used by the Database ASM|FS []: ASM
Specify if database Archive Mode is Enabled. y/n [n]:
Database Check Configuration END
Oracle Configuration Manager check START
OCM check log file location : /tmp/deinstall2021-05-15_11-30-58PM/logs//ocm_check3774.log
Oracle Configuration Manager check END
######################### DECONFIG CHECK OPERATION END #########################
####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Home selected for deinstall is: /oracle/product/12.2.0.1/db_1
Inventory Location where the Oracle home registered is: /grid/oraInventory
The following databases were selected for de-configuration : TEST
Database unique name : TEST
Storage used : ASM
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2021-05-15_11-30-58PM/logs/deinstall_deconfig2021-05-15_11-31-13-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2021-05-15_11-30-58PM/logs/deinstall_deconfig2021-05-15_11-31-13-PM.err'
######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /tmp/deinstall2021-05-15_11-30-58PM/logs/databasedc_clean2021-05-15_11-32-26-PM.log
Database Clean Configuration START TEST
This operation may take few minutes.
Database Clean Configuration END TEST
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2021-05-15_11-30-58PM/logs/netdc_clean2021-05-15_11-33-06-PM.log
De-configuring Local Net Service Names configuration file...
Local Net Service Names configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
Oracle Configuration Manager clean START
OCM clean log file location : /tmp/deinstall2021-05-15_11-30-58PM/logs//ocm_clean3774.log
Oracle Configuration Manager clean END
######################### DECONFIG CLEAN OPERATION END #########################
####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Successfully de-configured the following database instances : TEST
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
#######################################################################
############# ORACLE DECONFIG TOOL END #############
Using properties file /tmp/deinstall2021-05-15_11-30-58PM/response/deinstall_2021-05-15_11-31-13-PM.rsp
Location of logs /tmp/deinstall2021-05-15_11-30-58PM/logs/
############ ORACLE DEINSTALL TOOL START ############
####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/tmp/deinstall2021-05-15_11-30-58PM/logs/deinstall_deconfig2021-05-15_11-31-13-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2021-05-15_11-30-58PM/logs/deinstall_deconfig2021-05-15_11-31-13-PM.err'
######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to rac1
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2021-05-15_11-30-58PM/oraInst.loc
Setting oracle.installer.local to false
## [END] Preparing for Deinstall ##
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/oracle/product/12.2.0.1/db_1' from the central inventory on the local node : Done
Delete directory '/oracle/product/12.2.0.1/db_1' on the local node : Done
Delete directory '/grid/oraInventory' on the local node : Failed <<<<
Delete directory '/oracle/base' on the local node : Done
Oracle Universal Installer cleanup completed with errors.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
## [END] Oracle install clean ##
######################### DEINSTALL CLEAN OPERATION END #########################
####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/oracle/product/12.2.0.1/db_1' from the central inventory on the local node.
Successfully deleted directory '/oracle/product/12.2.0.1/db_1' on the local node.
Failed to delete directory '/grid/oraInventory' on the local node.
Successfully deleted directory '/oracle/base' on the local node.
Oracle Universal Installer cleanup completed with errors.
Run 'rm -r /etc/oraInst.loc' as root on node(s) 'rac1' at the end of the session.
Run 'rm -r /opt/ORCLfmap' as root on node(s) 'rac1' at the end of the session.
Run 'rm -r /etc/oratab' as root on node(s) 'rac1' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL TOOL END #############
[oracle@rac1 deinstall]$

-Deleting leftover files and recreating the Oracle HOME / GRID HOME directories

[root@rac1 ~]# rm -r /etc/oraInst.loc
rm: remove regular file ‘/etc/oraInst.loc’? y
[root@rac1 ~]# rm -r /opt/ORCLfmap
rm: descend into directory ‘/opt/ORCLfmap’? y
rm: descend into directory ‘/opt/ORCLfmap/prot1_64’? y
rm: descend into directory ‘/opt/ORCLfmap/prot1_64/bin’? y
rm: remove regular file ‘/opt/ORCLfmap/prot1_64/bin/fmputl’? y
rm: remove regular file ‘/opt/ORCLfmap/prot1_64/bin/fmputlhp’? y
rm: remove directory ‘/opt/ORCLfmap/prot1_64/bin’? y
rm: descend into directory ‘/opt/ORCLfmap/prot1_64/etc’? y
rm: remove regular file ‘/opt/ORCLfmap/prot1_64/etc/filemap.ora’? y
rm: remove directory ‘/opt/ORCLfmap/prot1_64/etc’? y
rm: remove directory ‘/opt/ORCLfmap/prot1_64/log’? y
rm: remove directory ‘/opt/ORCLfmap/prot1_64’? y
rm: remove directory ‘/opt/ORCLfmap’? y
[root@rac1 ~]# rm -r /etc/oratab
rm: remove regular file ‘/etc/oratab’? y
[root@rac1 ~]#
[root@rac1 ~]# cd /grid
[root@rac1 grid]# ls
12.2.0.1  base  oraInventory
[root@rac1 grid]# rm -rf base oraInventory
[root@rac1 grid]# mkdir base oraInventory
[root@rac1 grid]# ls
12.2.0.1  base  oraInventory
[root@rac1 grid]# chown -R grid:dba ./base
[root@rac1 grid]# chown -R grid:dba ./oraInventory/
[root@rac1 grid]# ls -al
total 4
drwxrwxr-x.  5 grid dba    54 May 15 23:36 .
dr-xr-xr-x. 20 root root 4096 May 14 15:56 ..
drwxrwxr-x.  3 grid dba    18 May 14 16:07 12.2.0.1
drwxr-xr-x.  2 grid dba     6 May 15 23:36 base
drwxr-xr-x.  2 grid dba     6 May 15 23:36 oraInventory
[root@rac1 grid]# cd 12.2.0.1/
[root@rac1 12.2.0.1]# ls
grid
[root@rac1 12.2.0.1]# rm -rf grid
[root@rac1 12.2.0.1]# mkdir grid
[root@rac1 12.2.0.1]# chown -R grid:dba ./grid/
[root@rac1 12.2.0.1]# ls -al
total 0
drwxrwxr-x. 3 grid dba 18 May 15 23:37 .
drwxrwxr-x. 5 grid dba 54 May 15 23:36 ..
drwxr-xr-x. 2 grid dba  6 May 15 23:37 grid
[root@rac1 12.2.0.1]#
[root@rac1 12.2.0.1]# cd /oracle
[root@rac1 oracle]# ls
product
[root@rac1 oracle]# mkdir base
[root@rac1 oracle]# chown -R oracle:dba ./base/
[root@rac1 oracle]# ls -al
total 4
drwxrwxr-x.  4 oracle dba    33 May 15 23:39 .
dr-xr-xr-x. 20 root   root 4096 May 14 15:56 ..
drwxr-xr-x.  2 oracle dba     6 May 15 23:39 base
drwxrwxr-x.  3 oracle dba    22 May 14 15:18 product
[root@rac1 oracle]# cd product/12.2.0.1/
[root@rac1 12.2.0.1]# ls
[root@rac1 12.2.0.1]# mkdir db_1
[root@rac1 12.2.0.1]# chown -R oracle:dba ./db_1/
[root@rac1 12.2.0.1]# l s-al
bash: l: command not found...
[root@rac1 12.2.0.1]# ls -al
total 0
drwxrwxr-x. 3 oracle dba 18 May 15 23:39 .
drwxrwxr-x. 3 oracle dba 22 May 14 15:18 ..
drwxr-xr-x. 2 oracle dba  6 May 15 23:39 db_1
[root@rac1 12.2.0.1]#
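
For reference, the cleanup and re-creation above can be collapsed into a single pass. This is only a sketch using the paths and the grid:dba / oracle:dba ownership shown in the transcript; adjust it to your own layout before running it as root.

[root@rac1 ~]# rm -rf /grid/base /grid/oraInventory /grid/12.2.0.1/grid          # remove leftover Grid Infrastructure directories
[root@rac1 ~]# mkdir -p /grid/base /grid/oraInventory /grid/12.2.0.1/grid        # recreate them empty
[root@rac1 ~]# chown -R grid:dba /grid/base /grid/oraInventory /grid/12.2.0.1/grid
[root@rac1 ~]# mkdir -p /oracle/base /oracle/product/12.2.0.1/db_1               # recreate the database base and home directories
[root@rac1 ~]# chown -R oracle:dba /oracle/base /oracle/product/12.2.0.1/db_1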

 


[Symptom]

After setting up SSH manually or automatically, an error occurs at the SSH setup step between the local node and the remote node during the Oracle RAC installation.

If you ignore the error and continue, the remote node does not appear in the final Summary step.

 

[Cause]

Caused by the public IPs not being registered in hosts.allow / hosts.deny.

 

[Resolution]

The installation succeeded after commenting out the contents of hosts.allow and hosts.deny, as sketched below.
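
A minimal sketch of that workaround, assuming the default TCP Wrappers files /etc/hosts.allow and /etc/hosts.deny on both nodes; it keeps a backup and then prefixes every active (non-comment) line with '#':

[root@rac1 ~]# cp -p /etc/hosts.allow /etc/hosts.allow.bak     # back up before editing
[root@rac1 ~]# cp -p /etc/hosts.deny /etc/hosts.deny.bak
[root@rac1 ~]# sed -i 's/^\([^#[:space:]]\)/#\1/' /etc/hosts.allow   # comment out every active entry
[root@rac1 ~]# sed -i 's/^\([^#[:space:]]\)/#\1/' /etc/hosts.deny

Alternatively, since the cause is that the public IPs are not registered, adding the public and VIP addresses of both nodes to /etc/hosts.allow should address the same problem without disabling TCP Wrappers entirely.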

 

[Notes]

Searching Google for "ins-44000 passwordless ssh connectivity is not setup from the local node" turns up a wide variety of cases, so review them and apply the method that fits your own environment.


[Product Lifecycle and Compatibility]

Release Schedule of Current Database Releases (Doc ID 742060.1)

Database Release Schedule (Korean, Doc ID 2460719.1)

Client / Server Interoperability Support Matrix for Different Oracle Versions (Doc ID 207303.1)

Client / Server Interoperability Support Matrix for Different Oracle Versions (Korean, Doc ID 1556542.1)

Starting With Oracle JDBC Drivers - Installation, Certification, and More! (Doc ID 401934.1)

Starting With Oracle JDBC Drivers - Installation, Compatibility, and More (Korean, Doc ID 1684635.1)

 

[Install]

Oracle Database (RDBMS) on Unix AIX,HP-UX,Linux,Mac OS X,Solaris,Tru64 Unix Operating Systems Installation and Configuration Requirements Quick Reference (8.0.5 to 11.2) (Doc ID 169706.1)

Oracle Database (RDBMS) on Unix AIX,HP-UX,Linux,Solaris and MS Windows Operating Systems Installation and Configuration Requirements Quick Reference (12.1/12.2/18c/19c) (Doc ID 1587357.1)

How to Install / Upgrade / Clone 12.2 Grid Infrastructure in Silent Mode Using gridSetup.sh (Doc ID 2327772.1)

Requirements for Installing Oracle Database/Client 19c on OL8 or RHEL8 64-bit (x86-64) (Doc ID 2668780.1)

Requirements for Installing Oracle Database 12.1 on RHEL5 or OL5 64-bit (x86-64) (Doc ID 1529433.1)

Requirements for Installing Oracle 11gR2 RDBMS on RHEL (and OL) 5 on AMD64/EM64T (Doc ID 880989.1)

 

[Patch]

Master Note for Database Proactive Patch Program (Doc ID 888.1)

Oracle Database 19c Important Recommended One-off Patches (Doc ID 555.1)

Assistant: Download Reference for Oracle Database/GI Update, Revision, PSU, SPU(CPU), Bundle Patches, Patchsets and Base Releases (Doc ID 2118136.2)

Example: Manually Apply a 12c GI PSU/Interim or DB Interim Patch in Cluster Environment (Doc ID 1594184.1)

Supplemental Readme - Grid Infrastructure Release Update 12.2.0.1.x / 18c / 19c (Doc ID 2246888.1)

 

[RAC]

How to Restore ASM Based OCR After Complete Loss of the CRS Diskgroup on Linux/Unix Systems (Doc ID 1062983.1)

11gR2 RAC: How to Recover When the ASM Diskgroup Containing the OCR Is Completely Lost (Korean, Doc ID 2139155.1)

How to Modify Public Network Information including VIP in Oracle Clusterware (Doc ID 276434.1)

How to Modify Public Network Information Including VIP in Oracle Clusterware (Korean, Doc ID 1572572.1)

How to Modify Private Network Information in Oracle Clusterware (Doc ID 283684.1)

How to Update the IP Address of the SCAN VIP Resources (ora.scan{n}.vip) (Doc ID 952903.1)

 

[Exadata]

Information Center: Oracle Exadata Database Machine (Doc ID 1306791.2)

Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)

Oracle Exadata Best Practices (Doc ID 757552.1)

Exadata Starter Kit (Doc ID 1244344.1)

Exadata Critical Issues (Doc ID 1270094.1)

Changing IP addresses on Exadata Database Machine (Doc ID 1317159.1)


Symptom

After configuring Oracle 19c RAC on RHEL 8.2, the ACFS-related menu does not appear on the asmca screen.

 

Related Log

While Oracle CRS starts up, the following messages are written to the Oracle CRS alert log.

Alert log file location: $ORACLE_BASE/diag/crs/<hostname>/crs/trace/alert.log

2021-04-12 13:33:36.080 [ORAAGENT(11504)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 11504
2021-04-12 13:36:23.797 [CLSECHO(22119)]ACFS-9213: Configuration file 'symvers-4.18.0-193.el8.x86_64.gz' in the /boot directory does not exist or cannot be read.
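
To check for the same message on your own nodes, the ACFS-9213 line can be pulled straight from the CRS alert log; a sketch, run as the grid user and assuming the alert log path above (ORACLE_BASE here is the Grid Infrastructure base):

[grid@rac1 ~]$ grep ACFS-9213 $ORACLE_BASE/diag/crs/$(hostname -s)/crs/trace/alert.log   # show ACFS-9213 entries, if any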

Cause

Starting with RHEL 8, the symvers.gz file was moved under /usr/lib/modules/<kernel version>/.

access.redhat.com/solutions/5174441

Resolution

Copy the symvers.gz file that matches the running kernel version into /boot (a generic one-liner is sketched after the version-specific examples below).

[RHEL 8.1] 
cp /usr/lib/modules/4.18.0-147.el8.x86_64/symvers.gz /boot/symvers-4.18.0-147.el8.x86_64.gz
[RHEL 8.2] 
cp /usr/lib/modules/4.18.0-193.el8.x86_64/symvers.gz /boot/symvers-4.18.0-193.el8.x86_64.gz
[RHEL 8.3]
RHEL 8.3 has not yet been certified for Oracle ACFS.
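
The same copy can also be written generically against the running kernel; a sketch, run as root after confirming that uname -r reports the kernel the cluster actually boots with:

[root@rac1 ~]# cp /usr/lib/modules/$(uname -r)/symvers.gz /boot/symvers-$(uname -r).gz   # copy symvers.gz for the running kernel into /boot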

Actual Resolution Steps

[root@rac1 ~]# cd /boot
[root@rac1 boot]# ls
config-4.18.0-193.el8.x86_64  grub2                                                    initramfs-4.18.0-193.el8.x86_64.img  System.map-4.18.0-193.el8.x86_64                   vmlinuz-4.18.0-193.el8.x86_64
efi                           initramfs-0-rescue-f39d3f5fb51341ae96c29713dcd0be39.img  loader                               vmlinuz-0-rescue-f39d3f5fb51341ae96c29713dcd0be39
[root@rac1 boot]#  ls -l /boot/symvers-$(uname -r).gz
ls: cannot access '/boot/symvers-4.18.0-193.el8.x86_64.gz': No such file or directory
[root@rac1 boot]# cd  /usr/lib/modules
[root@rac1 modules]# ls
4.18.0-187.el8.x86_64  4.18.0-193.el8.x86_64
[root@rac1 modules]# uname -a
Linux rac1.localdomain 4.18.0-193.el8.x86_64 #1 SMP Fri Mar 27 14:35:58 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
[root@rac1 modules]# cd 4.18.0-193.el8.x86_64
[root@rac1 4.18.0-193.el8.x86_64]# ls
bls.conf  kernel         modules.alias.bin  modules.builtin.bin  modules.devname      modules.networking  modules.symbols      symvers.gz  vdso
build     misc           modules.block      modules.dep          modules.drm          modules.order       modules.symbols.bin  System.map  vmlinuz
config    modules.alias  modules.builtin    modules.dep.bin      modules.modesetting  modules.softdep     source               updates     weak-updates
[root@rac1 4.18.0-193.el8.x86_64]# cp symvers.gz /boot/symvers-4.18.0-193.el8.x86_64.gz
[root@rac1 4.18.0-193.el8.x86_64]# pwd
/usr/lib/modules/4.18.0-193.el8.x86_64
[root@rac1 4.18.0-193.el8.x86_64]# ls -l /boot/symvers-$(uname -r).gz
-rw-r--r--. 1 root root 347581 Apr 12 13:46 /boot/symvers-4.18.0-193.el8.x86_64.gz
[root@rac1 4.18.0-193.el8.x86_64]#
ACFS Support On OS Platforms (Certification Matrix) (Doc ID 1369107.1)
https://access.redhat.com/solutions/5174441