Cloning Oracle RAC to Nodes in a New Cluster

12c-oracle-administration-deployment-e48838-10

This chapter describes how to clone Oracle Real Application Clusters (Oracle RAC) database homes on Linux and UNIX systems to nodes in a new cluster. To extend Oracle RAC to nodes in an existing cluster, see Chapter 9, “Using Cloning to Extend Oracle RAC to Nodes in the Same Cluster”.

This chapter describes a noninteractive cloning technique that you implement with scripts. The cloning techniques described in this chapter are best suited for performing multiple simultaneous cluster installations. Creating the scripts is a manual process and can be error prone. If you only have one cluster to install, then you should use the traditional automated and interactive installation methods, such as Oracle Universal Installer, or the Provisioning Pack feature of Oracle Enterprise Manager.

Note:

Cloning is not a replacement for the Oracle Enterprise Manager cloning that is a part of the Provisioning Pack. During Oracle Enterprise Manager cloning, the provisioning process interactively asks you for details about the Oracle home (such as the location to which you want to deploy the clone, the name of the Oracle Database home, a list of the nodes in the cluster, and so on). The Provisioning Pack feature of Oracle Enterprise Manager Cloud Control provides a framework that makes it easy for you to automate the provisioning of new nodes and clusters. For data centers with many Oracle RAC clusters, the investment in creating a cloning procedure to easily provision new clusters and new nodes to existing clusters is worth the effort.

This chapter includes the following topics:

  • Introduction to Cloning Oracle RAC
  • Preparing to Clone Oracle RAC
  • Deploying Oracle RAC Clone to Nodes in a Cluster
  • Locating and Viewing Log Files Generated During Cloning

Introduction to Cloning Oracle RAC

Cloning is the process of copying an existing Oracle RAC installation to a different location and updating the copied bits to work in the new environment. Any changes made by one-off patches applied to the source Oracle home are also present after the clone operation. The source and the destination path (host to be cloned) need not be the same.

Some situations in which cloning is useful are:

  • Cloning provides a way to prepare an Oracle home once and deploy it to many hosts simultaneously. You can complete the installation silently, as a noninteractive process. You do not need to use a graphical user interface (GUI) console and you can perform cloning from a Secure Shell (SSH) terminal session, if required.
  • Cloning enables you to create an installation (copy of a production, test, or development installation) with all patches applied to it in a single step. Once you have performed the base installation and applied all patch sets and patches on the source system, the clone performs all of these individual steps as a single procedure. This is in contrast to going through the installation process to perform the separate steps to install, configure, and patch the installation on each node in the cluster.
  • Installing Oracle RAC by cloning is a very quick process. For example, cloning an Oracle home to a new cluster of more than two nodes requires a few minutes to install the Oracle base software, plus a few minutes more for each node (approximately the amount of time it takes to run the root.sh script).

The cloned installation behaves the same as the source installation. For example, the cloned Oracle home can be removed using Oracle Universal Installer or patched using OPatch. You can also use the cloned Oracle home as the source for another cloning operation. You can create a cloned copy of a test, development, or production installation by using the command-line cloning scripts. The default cloning procedure is adequate for most usage cases. However, you can also customize various aspects of cloning, for example, to specify custom port assignments, or to preserve custom settings.

The cloning process works by copying all of the files from the source Oracle home to the destination Oracle home. Thus, any files used by the source instance that are located outside the source Oracle home’s directory structure are not copied to the destination location.

The size of the binaries at the source and the destination may differ because these are relinked as part of the clone operation and the operating system patch levels may also differ between these two locations. Additionally, the number of files in the cloned home would increase because several files copied from the source, specifically those being instantiated, are backed up as part of the clone operation.

Preparing to Clone Oracle RAC

In the preparation phase, you create a copy of an Oracle home that you then use to perform the cloning procedure on one or more nodes. You also install Oracle Clusterware.

Step 1   Install Oracle RAC
Use the detailed instructions in the Oracle Real Application Clusters Installation Guide for your platform to install the Oracle RAC software and patches:

  1. Install Oracle RAC and choose the Software only installation option.
  2. Patch the release to the required level (for example, 12.1.0.n).
  3. Apply one-off patches, if necessary.
Step 2   Create a backup of the source home
Create a copy of the Oracle RAC home. Use this file to copy the Oracle RAC home to each node in the cluster (as described in “Deploying Oracle RAC Clone to Nodes in a Cluster”).

When creating the backup (tar) file, the best practice is to include the release number in the name of the file. For example:

# cd /opt/oracle/product/12c/db_1
# tar -zcvf /pathname/db1120.tgz .
Step 3   Install and start Oracle Clusterware
Before you can use cloning to create an Oracle RAC home, you must first install and start Oracle Clusterware on the node or nodes to which you want to copy a cloned Oracle RAC home. In other words, you configure an Oracle RAC home that you cloned from a source cluster onto the nodes in a target cluster in the same order that you installed the Oracle Clusterware and Oracle RAC software components on the original nodes.

See Also:

Oracle Clusterware Administration and Deployment Guide for information about cloning Oracle Clusterware homes to create new clusters, and starting Oracle Clusterware by issuing the crsctl start crs command
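Before starting the deployment, it can help to confirm that Oracle Clusterware is actually running on each target node. A minimal sketch, assuming the crsctl binary from the Grid home may or may not be on PATH:

```shell
# Sketch: verify Oracle Clusterware is up before cloning the database home.
# 'crsctl' ships with the Grid home; its location is an assumption here.
if command -v crsctl >/dev/null 2>&1 && crsctl check crs >/dev/null 2>&1; then
  crs_status=up
else
  crs_status=down
fi
echo "Oracle Clusterware status: $crs_status"
```

If the status is down, start Oracle Clusterware (crsctl start crs) before proceeding.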

Deploying Oracle RAC Clone to Nodes in a Cluster

After you complete the prerequisite tasks described in “Preparing to Clone Oracle RAC”, you can deploy cloned Oracle homes.

Deploying the Oracle RAC database home to a cluster is a multiple-step process.

This section provides step-by-step instructions that describe how to:

  1. Prepare the new cluster nodes
  2. Deploy the Oracle RAC database software
  3. Run the clone.pl script on each node
  4. Run the $ORACLE_HOME/root.sh script on each node
  5. Run DBCA on one node to create the Oracle RAC instances on each node
Step 1   Prepare the new cluster nodes
Perform the Oracle RAC preinstallation steps, including such things as:

  • Specify the kernel parameters.
  • Ensure Oracle Clusterware is active.
  • Ensure that Oracle ASM is active and that at least one Oracle ASM disk group exists and is mounted.

See your platform-specific Oracle RAC installation guide for a complete preinstallation checklist.
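A preflight sketch of one such check follows; kernel.shmmni is used only as an example parameter, and the required values come from the preinstallation checklist in the installation guide:

```shell
# Sketch: spot-check a kernel parameter from the preinstallation checklist.
# shmmni is an illustrative example; compare against the documented minimums.
if [ -r /proc/sys/kernel/shmmni ]; then
  shmmni=$(cat /proc/sys/kernel/shmmni)
else
  shmmni=unavailable    # not a Linux system, or /proc not mounted
fi
echo "kernel.shmmni = $shmmni"
```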

Step 2   Deploy the Oracle RAC database software
To deploy the Oracle RAC software, you must:

  1. Copy the clone of the Oracle home to all nodes. For example:
    [root@node1 root]# mkdir -p /opt/oracle/product/12c/db
    [root@node1 root]# cd /opt/oracle/product/12c/db
    [root@node1 db]# tar -zxvf /path_name/db1120.tgz
    

    When providing the home location and path_name, note that the destination home location can be in the same directory path as, or in a different directory path from, the source home that you used to create the tar file.

  2. If either the oracle user or the oinstall group, or both, differ between the source and destination nodes, then change the ownership of the Oracle Inventory files, as follows:
    [root@node1]# chown -R oracle:oinstall /opt/oracle/product/12c/db
    

    When you run the preceding command on the Oracle RAC home, it clears setuid and setgid information from the Oracle binary.

    Note:

    You can perform this step at the same time you perform Step 3 and Step 4 to run the clone.pl and $ORACLE_HOME/root.sh scripts on each cluster node.
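The copy-and-extract in this step can be scripted across all nodes. A dry-run sketch follows; the node names and paths are examples, and the script only prints the ssh commands it would run (remove the echo to execute them):

```shell
# Dry-run sketch: deploy the archived Oracle home to each cluster node.
# NODES, ARCHIVE, and DEST are illustrative values, not fixed names.
NODES="node1 node2"
ARCHIVE=/path_name/db1120.tgz          # backup created in the preparation phase
DEST=/opt/oracle/product/12c/db
for n in $NODES; do
  echo ssh root@"$n" "mkdir -p $DEST && tar -zxvf $ARCHIVE -C $DEST"
done
```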

Step 3   Run the clone.pl script on each node
To run the clone.pl script, which performs the main Oracle RAC cloning tasks, you must:

  • Supply the environment variables and cloning parameters in the start.sh script, as described in Table 8-2 and Table 8-3. Because the clone.pl script is sensitive to the parameters being passed to it, you must be accurate in your use of brackets, single quotation marks, and double quotation marks.
  • Run the script as oracle or the user that owns the Oracle RAC software.

Table 8-1 lists and describes the clone.pl script parameters.

Table 8-1 clone.pl Script Parameters

Parameter Description
ORACLE_HOME=Oracle_home
The complete path to the Oracle home you want to clone. If you specify an invalid path, then the script exits. This parameter is required.
ORACLE_BASE=ORACLE_BASE
The complete path to the Oracle base you want to clone. If you specify an invalid path, then the script exits. This parameter is required.
ORACLE_HOME_NAME=
Oracle_home_name |
-defaultHomeName
The Oracle home name of the home you want to clone. Optionally, you can specify the -defaultHomeName flag. This parameter is optional.
ORACLE_HOME_USER=Oracle_home_user
The OracleHomeUser for Windows services. This parameter is applicable to Windows only and is optional.
OSDBA_GROUP=group_name
Specify the operating system group you want to use as the OSDBA privileged group. This parameter is optional.
OSOPER_GROUP=group_name
Specify the operating system group you want to use as the OSOPER privileged group. This parameter is optional.
OSASM_GROUP=group_name
Specify the operating system group you want to use as the OSASM privileged group. This parameter is optional.
OSBACKUPDBA_GROUP=group_name
Specify the operating system group you want to use as the OSBACKUPDBA privileged group. This parameter is optional.
OSDGDBA_GROUP=group_name
Specify the operating system group you want to use as the OSDGDBA privileged group. This parameter is optional.
OSKMDBA_GROUP=group_name
Specify the operating system group you want to use as the OSKMDBA privileged group. This parameter is optional.
-debug
Specify this option to run the clone.pl script in debug mode.
-help
Specify this option to obtain help for the clone.pl script.

See Also:

Oracle Real Application Clusters Installation Guide for your platform for more information about the operating system groups listed in the preceding table

Example 8-1 shows an excerpt from the start.sh script that calls the clone.pl script.

Example 8-1 Excerpt From the start.sh Script to Clone Oracle RAC for Linux and UNIX

ORACLE_BASE=/opt/oracle
ORACLE_HOME=/opt/oracle/product/12c/db
cd $ORACLE_HOME/clone
THISNODE=`hostname -s`

E01=ORACLE_HOME=/opt/oracle/product/12c/db
E02=ORACLE_HOME_NAME=OraDBRAC
E03=ORACLE_BASE=/opt/oracle
C01="-O CLUSTER_NODES={node1,node2}"
C02="-O LOCAL_NODE=$THISNODE"

perl $ORACLE_HOME/clone/bin/clone.pl $E01 $E02 $E03 $C01 $C02

Example 8-2 shows an excerpt from the start.bat script that the user must create that calls the clone.pl script.

Example 8-2 Excerpt From the start.bat Script to Clone Oracle RAC for Windows

set ORACLE_home=C:\oracle\product\12c\db1
cd %ORACLE_home%\clone\bin
set THISNODE=%hostname%
set E01=ORACLE_HOME=%ORACLE_home%
set E02=ORACLE_HOME_NAME=OraDBRAC
set E03=ORACLE_BASE=Oracle_Base
set C01="CLUSTER_NODES={node1,node2}"
set C02="-O LOCAL_NODE=%THISNODE%"
perl clone.pl %E01% %E02% %E03% %C01% %C02%

Table 8-2 describes the environment variables E01, E02, and E03 that are shown in Example 8-1.

Table 8-2 Environment Variables Passed to the clone.pl Script

Symbol Variable Description
E01 ORACLE_HOME The location of the Oracle RAC database home. This directory location must exist and must be owned by the Oracle operating system group: oinstall.
E02 ORACLE_HOME_NAME The name of the Oracle home for the Oracle RAC database. This is stored in the Oracle Inventory.
E03 ORACLE_BASE The location of the Oracle Base directory.

Table 8-3 describes the cloning parameters C01 and C02 that are shown in Example 8-1.

Table 8-3 Cloning Parameters Passed to the clone.pl Script

Variable Name Parameter Description
C01 Cluster Nodes CLUSTER_NODES Lists the nodes in the cluster.
C02 Local Node LOCAL_NODE The name of the local node.
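Because clone.pl is sensitive to quoting and brackets, the CLUSTER_NODES parameter can be generated rather than hand-typed. A sketch follows; the make_cluster_nodes helper is illustrative and not part of clone.pl:

```shell
# Sketch: build the -O CLUSTER_NODES argument from a space-separated node
# list, avoiding quoting mistakes. Helper name is hypothetical.
make_cluster_nodes() {
  printf 'CLUSTER_NODES={%s}' "$(echo "$1" | tr ' ' ',')"
}

C01="-O $(make_cluster_nodes "node1 node2")"
echo "$C01"
```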
Step 4   Run the $ORACLE_HOME/root.sh script on each node

Note:

This step applies to Linux and UNIX installations, only.

Run the $ORACLE_HOME/root.sh script as the root operating system user as soon as the clone.pl procedure completes on the node.

[root@node1 root]# /opt/oracle/product/12c/db/root.sh -silent

Note that you can run the script on each node simultaneously:

[root@node2 root]# /opt/oracle/product/12c/db/root.sh -silent

Ensure the script has completed on each node before proceeding to the next step.
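Dispatching root.sh to every node can likewise be scripted. A dry-run sketch that only prints the per-node commands (node names and the home path are examples; the printed commands may be run simultaneously):

```shell
# Dry-run sketch: build the root.sh invocation for each new node.
# NODES and ROOTSH are illustrative values.
NODES="node1 node2"
ROOTSH=/opt/oracle/product/12c/db/root.sh
cmds=$(for n in $NODES; do echo "ssh root@$n $ROOTSH -silent"; done)
echo "$cmds"
```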

Step 5   Run DBCA on one node to create the Oracle RAC instances on each node

Note:

You need only run DBCA on one node in the cluster to create Oracle RAC instances on all nodes.

This step shows how to run DBCA in silent mode and provide response file input to create the Oracle RAC instances.

The following example creates an Oracle RAC database named ERI on each node, creates database instances on each node, registers the instances in OCR, creates the database files in the Oracle ASM disk group called DATA, and creates sample schemas. It also sets the SYS, SYSTEM, SYSMAN and DBSNMP passwords to password, which is the password for each account:

[oracle@node1 oracle]$ export ORACLE_HOME=/opt/oracle/product/12c/db
[oracle@node1 oracle]$ cd $ORACLE_HOME/bin/
[oracle@node1 bin]$ ./dbca -silent -createDatabase -templateName General_Purpose.dbc \
-gdbName ERI -sid ERI \
-sysPassword password -systemPassword password \
-sysmanPassword password -dbsnmpPassword password \
-emConfiguration LOCAL \
-storageType ASM -diskGroupName DATA \
-datafileJarLocation $ORACLE_HOME/assistants/dbca/templates \
-nodelist node1,node2 -characterset WE8ISO8859P1 \
-obfuscatedPasswords false -sampleSchema true

See Also:

Oracle Database 2 Day DBA for information about using DBCA to create and configure a database

Locating and Viewing Log Files Generated During Cloning

The cloning script runs multiple tools, each of which may generate its own log files. After the clone.pl script finishes running, you can view log files to obtain more information about the cloning process.

The following log files that are generated during cloning are the key log files of interest for diagnostic purposes:

  • Central_Inventory/logs/cloneActionstimestamp.log: Contains a detailed log of the actions that occur during the Oracle Universal Installer part of the cloning.
  • Central_Inventory/logs/oraInstalltimestamp.err: Contains information about errors that occur when Oracle Universal Installer is running.
  • Central_Inventory/logs/oraInstalltimestamp.out: Contains other miscellaneous messages generated by Oracle Universal Installer.
  • $ORACLE_HOME/clone/logs/clonetimestamp.log: Contains a detailed log of the actions that occur before cloning and during the cloning operations.
  • $ORACLE_HOME/clone/logs/errortimestamp.log: Contains information about errors that occur before cloning and during cloning operations.
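When several cloning runs have accumulated, a quick way to find the newest log is to sort by modification time. A self-contained sketch follows; a scratch directory with sample file names stands in for $ORACLE_HOME/clone/logs:

```shell
# Sketch: locate the most recent clone log for review.
# The scratch directory and log names below are illustrative only.
logs_dir=$(mktemp -d)
touch "$logs_dir/clone2014-01-01_10-00-00.log"
sleep 1
touch "$logs_dir/clone2014-01-02_10-00-00.log"

# ls -t sorts newest first; head -1 picks the latest log.
latest=$(ls -t "$logs_dir"/clone*.log | head -1)
echo "Most recent clone log: $latest"
```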

Table 8-4 describes how to find the location of the Oracle inventory directory.

Table 8-4 Finding the Location of the Oracle Inventory Directory

Type of System… Location of the Oracle Inventory Directory
All UNIX computers except Linux and IBM AIX /var/opt/oracle/oraInst.loc
IBM AIX and Linux /etc/oraInst.loc
Windows C:\Program Files\Oracle\Inventory
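The Central Inventory path itself is recorded in the inventory_loc entry of oraInst.loc. A sketch that parses it (a sample file is generated here; on a real system, read the file listed in Table 8-4):

```shell
# Sketch: extract the Central Inventory location from oraInst.loc.
# A sample file is created so the snippet is self-contained; the path value
# /u01/app/oraInventory is illustrative.
orainst=$(mktemp)
printf 'inventory_loc=/u01/app/oraInventory\ninst_group=oinstall\n' > "$orainst"

inv_loc=$(sed -n 's/^inventory_loc=//p' "$orainst")
echo "Central Inventory: $inv_loc"
```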

Site Reference:

https://docs.oracle.com/database/121/RACAD/clonerac.htm#RACAD03202

Using Cloning to Extend Oracle RAC to Nodes in the Same Cluster

This chapter provides information about using cloning to extend Oracle Real Application Clusters (Oracle RAC) to nodes in an existing cluster. To add Oracle RAC to nodes in a new cluster, see Chapter 8, “Cloning Oracle RAC to Nodes in a New Cluster”.

This chapter contains the following topics:

  • About Adding Nodes Using Cloning in Oracle RAC Environments
  • Cloning Local Oracle Homes on Linux and UNIX Systems
  • Cloning Shared Oracle Homes on Linux and UNIX Systems
  • Cloning Oracle Homes on Windows Systems

About Adding Nodes Using Cloning in Oracle RAC Environments

The cloning procedures assume that you have successfully installed and configured an Oracle RAC environment to which you want to add nodes and instances. To add nodes to an Oracle RAC environment using cloning, first extend the Oracle Clusterware configuration, then extend the Oracle Database software with Oracle RAC, and then add the listeners and instances by running the Oracle assistants.

The cloning script runs multiple tools, each of which may generate its own log files. After the clone.pl script finishes running, you can view log files to obtain more information about the cloning process. See “Locating and Viewing Log Files Generated During Cloning” for more information.

Cloning Local Oracle Homes on Linux and UNIX Systems

This section explains how to add nodes to existing Oracle RAC environments by cloning a local (non-shared) Oracle home in Linux and UNIX system environments.

Complete the following steps to clone Oracle Database with Oracle RAC software:

  1. Follow the steps in the “Preparing to Clone Oracle RAC” to create a copy of an Oracle home that you then use to perform the cloning procedure on one or more nodes.
  2. Use the tar utility to create an archive of the Oracle home on the existing node and copy it to the new node. If the location of the Oracle home on the source node is $ORACLE_HOME, then you must use this same directory as the destination location on the new node.
  3. On the new node, configure the environment variables ORACLE_HOME and ORACLE_BASE. Then go to the $ORACLE_HOME/clone/bin directory and run the following command, where existing_node is the name of the node that you are cloning, new_node2 and new_node3 are the names of the new nodes, and Oracle_home_name is the name of the Oracle home:
    perl clone.pl -O 'CLUSTER_NODES={existing_node,new_node2,new_node3}'
    -O LOCAL_NODE=new_node2 ORACLE_BASE=$ORACLE_BASE ORACLE_HOME=$ORACLE_HOME
    ORACLE_HOME_NAME=Oracle_home_name -O -noConfig
    
  4. Run the following command to run the configuration assistants to configure Oracle RAC on the new nodes:
    $ORACLE_HOME/cfgtoollogs/configToolFailedCommands
    

    This script contains all commands that failed, were skipped, or were canceled during the installation. You can use this script to run the database configuration assistants outside of Oracle Universal Installer. Note that before you run the script you should check the script to see if any passwords within it need to be updated.

  5. Run the following command on the existing node from the $ORACLE_HOME/oui/bin directory to update the inventory in the Oracle Database home with Oracle RAC, specified by Oracle_home, where existing_node is the name of the original node that you are cloning and new_node2 and new_node3 are the names of the new nodes:
    ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME -O "CLUSTER_NODES={existing_node,new_node2,new_node3}"
    
  6. On each new node, go to the $ORACLE_HOME directory and run the following command:
    ./root.sh
    
  7. From the node that you cloned, run Database Configuration Assistant (DBCA) to add Oracle RAC database instances on the new nodes.

Cloning Shared Oracle Homes on Linux and UNIX Systems

This section explains how to add nodes to existing Oracle RAC environments by cloning a shared Oracle home in Linux and UNIX system environments.

Complete the following steps to clone Oracle Database with Oracle RAC software:

  1. Follow the steps in the “Preparing to Clone Oracle RAC” to create a copy of an Oracle home that you then use to perform the cloning procedure on one or more nodes.
  2. On the new node, configure the environment variables ORACLE_HOME and ORACLE_BASE. Then go to the $ORACLE_HOME/clone/bin directory and run the following command, where existing_node is the name of the node that you are cloning, new_node2 and new_node3 are the names of the new nodes, Oracle_home_name is the name of the Oracle home, and the -cfs option indicates the Oracle home is shared:
    perl clone.pl -O 'CLUSTER_NODES={existing_node,new_node2,new_node3}'
    -O LOCAL_NODE=new_node2 ORACLE_BASE=$ORACLE_BASE ORACLE_HOME=$ORACLE_HOME
     ORACLE_HOME_NAME=Oracle_home_name [-cfs -noConfig]
    

    Notes:

    In the preceding command:

    • Use the -cfs and -noConfig options for a shared Oracle Database home with Oracle RAC.
    • The value for the ORACLE_HOME_NAME parameter must be that of the node you are cloning.
  3. Run the following command on the existing node from the $ORACLE_HOME/oui/bin directory to update the inventory in the Oracle Database home with Oracle RAC, specified by Oracle_home, where existing_node is the name of the original node that you are cloning and new_node2 and new_node3 are the names of the new nodes:
    ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={existing_node,new_node2,new_node3}"
    
  4. On each new node, go to the $ORACLE_HOME directory and run the following command:
    ./root.sh
    
  5. From the node that you cloned, run Database Configuration Assistant (DBCA) to add Oracle RAC database instances to the new nodes.

Cloning Oracle Homes on Windows Systems

This section explains how to add nodes to existing Oracle RAC environments by cloning a shared or local Oracle home in Windows system environments.

Complete the following steps to clone Oracle Database with Oracle RAC software:

  1. If you have a local Oracle home, then use the ZIP utility to create an archive of the Oracle Database home with Oracle RAC on the existing node and copy it to the new node. Otherwise, proceed to the next step. Extract the Oracle Database with Oracle RAC home files from the ZIP file on the new node in the same directory in which the Oracle Database home with Oracle RAC resided on the existing node. For example, assume that the location of the destination Oracle RAC home on the new node is %ORACLE_HOME%.
  2. On the new node, go to the %ORACLE_HOME%\clone\bin directory and run the following command where Oracle_Home is the Oracle Database home, Oracle_Home_Name is the name of the Oracle Database home, Oracle_Base is the Oracle base directory, user_name is the name of the Oracle home user (a non-Administrator user) for the Oracle home being cloned, existing_node is the name of the existing node, and new_node is the name of the new node:
    perl clone.pl ORACLE_HOME=Oracle_Home ORACLE_BASE=Oracle_Base 
    ORACLE_HOME_NAME=Oracle_Home_Name ORACLE_HOME_USER=user_name 
    -O 'CLUSTER_NODES={existing_node,new_node}'
    -O LOCAL_NODE=new_node

    If you have a shared Oracle Database home with Oracle RAC, then append the -cfs option to the command to indicate that the Oracle home is shared, as shown in the following example:

    perl clone.pl ORACLE_HOME=Oracle_Home ORACLE_BASE=Oracle_Base 
    ORACLE_HOME_NAME=Oracle_Home_Name ORACLE_HOME_USER=user_name
    -O 'CLUSTER_NODES={existing_node,new_node}' -O LOCAL_NODE=new_node
    [-cfs -noConfig]
    

    Note:

    • The ORACLE_HOME_USER is required only if you are cloning a secured Oracle home.
    • Use the -cfs and -noConfig options for a shared Oracle Database home with Oracle RAC.
    • The value for the ORACLE_HOME_NAME parameter must be that of the node you are cloning. To obtain the ORACLE_HOME_NAME, look in the registry on the node you are cloning for the ORACLE_HOME_NAME parameter key under HKEY_LOCAL_MACHINE\SOFTWARE\oracle\KEY_OraCRs12c_home1.
  3. On the existing node, from the %ORACLE_HOME%\oui\bin directory run the following command to update the inventory in the Oracle Database home with Oracle RAC, specified by Oracle_home, where existing_node is the name of the existing node, and new_node is the name of the new node:
    setup.exe -updateNodeList ORACLE_HOME=Oracle_home "CLUSTER_NODES={existing_node,new_node}" LOCAL_NODE=existing_node
  4. From the node that you cloned, run DBCA to add Oracle RAC database instances to the new nodes.

Site Reference:

https://docs.oracle.com/database/121/RACAD/cloneracwithoui.htm#RACAD007

Adding and Deleting Oracle RAC from Nodes on Linux and UNIX Systems

This chapter describes how to extend an existing Oracle Real Application Clusters (Oracle RAC) home to other nodes and instances in the cluster, and delete Oracle RAC from nodes and instances in the cluster. This chapter provides instructions for Linux and UNIX systems.

If your goal is to clone an existing Oracle RAC home to create multiple new Oracle RAC installations across the cluster, then use the cloning procedures that are described in Chapter 8, “Cloning Oracle RAC to Nodes in a New Cluster”.

The topics in this chapter include the following:

  • Adding Oracle RAC to Nodes with Oracle Clusterware Installed
  • Deleting Oracle RAC from a Cluster Node

Notes:

  • Ensure that you have a current backup of the Oracle Cluster Registry (OCR) before adding or deleting Oracle RAC; you can list recent OCR backups by running the ocrconfig -showbackup command.
  • The phrase “target node” as used in this chapter refers to the node to which you plan to extend the Oracle RAC environment.

Adding Oracle RAC to Nodes with Oracle Clusterware Installed

Before beginning this procedure, ensure that your existing nodes have the correct path to the Grid_home and that the $ORACLE_HOME environment variable is set to the Oracle RAC home.

See Also:

Oracle Clusterware Administration and Deployment Guide for information about extending the Oracle Clusterware home to new nodes in a cluster

  • If you are using a local (non-shared) Oracle home, then you must extend the Oracle RAC database home that is on an existing node (node1 in this procedure) to a target node (node3 in this procedure). Navigate to the Oracle_home/addnode directory on node1 and run the addnode.sh script.

    If you want to perform a silent installation, run the addnode.sh script using the following syntax:

    $ ./addnode.sh -silent "CLUSTER_NEW_NODES={node3}"
    
  • If you have a shared Oracle home that is shared using Oracle Automatic Storage Management Cluster File System (Oracle ACFS), then do the following to extend the Oracle database home to node3:
    1. Start the Oracle ACFS resource on the new node by running the following command as root from the Grid_home/bin directory:
      # srvctl start filesystem -device volume_device [-node node_name]
      

      Note:

      Make sure the Oracle ACFS resources, including Oracle ACFS registry resource and Oracle ACFS file system resource where the Oracle home is located, are online on the newly added node.

    2. Run the following command as the user that installed Oracle RAC from the Oracle_home/oui/bin directory on the node you are adding to add the Oracle RAC database home:
      $ ./runInstaller -attachHome ORACLE_HOME="ORACLE_HOME" "CLUSTER_NODES={node3}"
      LOCAL_NODE="node3" ORACLE_HOME_NAME="home_name" -cfs
      
    3. Navigate to the Oracle_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle RAC using the following syntax:
      $ ./addnode.sh -noCopy "CLUSTER_NEW_NODES={node3}"
      

      Note:

      Use the -noCopy option because the Oracle home on the destination node is already fully populated with software.

  • If you have a shared Oracle home on a shared file system that is not Oracle ACFS, then you must first create a mount point for the Oracle RAC database home on the target node, mount and attach the Oracle RAC database home, and update the Oracle Inventory, as follows:
    1. Run the srvctl config database -db db_name command on an existing node in the cluster to obtain the mount point information.
    2. Run the following command as root on node3 to create the mount point:
      # mkdir -p mount_point_path
    3. Mount the file system that hosts the Oracle RAC database home.
    4. Run the following command as the user that installed Oracle RAC from the Oracle_home/oui/bin directory on the node you are adding to add the Oracle RAC database home:
      $ ./runInstaller -attachHome ORACLE_HOME="ORACLE_HOME" "CLUSTER_NODES={local_node_name}" LOCAL_NODE="node_name" ORACLE_HOME_NAME="home_name"
      
    5. Update the Oracle Inventory as the user that installed Oracle RAC, as follows:
      $ ./runInstaller -updateNodeList ORACLE_HOME=mount_point_path "CLUSTER_NODES={node_list}"
      

      In the preceding command, node_list refers to a list of all nodes where the Oracle RAC database home is installed, including the node you are adding.

Run the Oracle_home/root.sh script on node3 as root.

Note:

Oracle recommends that you back up the OCR after you complete the node addition process.

You can now add an Oracle RAC database instance to the target node using either of the procedures in the following sections.

Adding Policy-Managed Oracle RAC Database Instances to Target Nodes

You must manually add undo and redo logs, unless you store your policy-managed database on Oracle Automatic Storage Management (Oracle ASM) and Oracle Managed Files is enabled.

If there is space in a server pool to add a node and the database has been started at least once, then Oracle Clusterware adds the Oracle RAC database instance to the newly added node and no further action is necessary.

Note:

The database must have been started at least once before you can add the database instance to the newly added node.

If there is no space in any server pool, then the newly added node moves into the Free server pool. Use the srvctl modify srvpool command to increase the cardinality of a server pool to accommodate the newly added node, after which the node moves out of the Free server pool and into the modified server pool, and Oracle Clusterware adds the Oracle RAC database instance to the node.
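The server pool change described above can be sketched as a dry run; the pool name and cardinality are examples, and the command is only printed, not executed:

```shell
# Dry-run sketch: grow a server pool so the newly added node leaves the Free
# pool and can host an instance. POOL and NEW_MAX are illustrative values.
POOL=mypool
NEW_MAX=3
cmd="srvctl modify srvpool -serverpool $POOL -max $NEW_MAX"
echo "$cmd"
```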

Adding Administrator-Managed Oracle RAC Database Instances to Target Nodes

Note:

The procedures in this section only apply to administrator-managed databases. Policy-managed databases use nodes when the nodes are available in the databases’ server pool.

You can use either Oracle Enterprise Manager or DBCA to add Oracle RAC database instances to the target nodes. To add a database instance to a target node with Oracle Enterprise Manager, see the Oracle Database 2 Day + Real Application Clusters Guide for complete information.

This section describes using DBCA to add Oracle RAC database instances.

These tools guide you through the following tasks:

  • Creating a new database instance on each target node
  • Creating and configuring high availability components
  • Creating the Oracle Net configuration for a non-default listener from the Oracle home
  • Starting the new instance
  • Creating and starting services if you entered services information on the Services Configuration page

After adding the instances to the target nodes, you should perform any necessary service configuration procedures, as described in Chapter 5, “Workload Management with Dynamic Database Services”.

Using DBCA in Interactive Mode to Add Database Instances to Target Nodes

To add a database instance to a target node with DBCA in interactive mode, perform the following steps:

  1. Ensure that your existing nodes have the $ORACLE_HOME environment variable set to the Oracle RAC home.
  2. Start DBCA by entering dbca at the system prompt from the Oracle_home/bin directory. DBCA performs certain CVU checks while running. However, you can also run CVU from the command line to perform various verifications.

    See Also:

    Oracle Clusterware Administration and Deployment Guide for more information about CVU

    DBCA displays the Welcome page for Oracle RAC. Click Help on any DBCA page for additional information.

  3. Select Instance Management, click Next, and DBCA displays the Instance Management page.
  4. Select Add Instance and click Next. DBCA displays the List of Cluster Databases page that shows the databases and their current status, such as ACTIVE or INACTIVE.
  5. From the List of Cluster Databases page, select the active Oracle RAC database to which you want to add an instance. Click Next and DBCA displays the List of Cluster Database Instances page showing the names of the existing instances for the Oracle RAC database that you selected.
  6. Click Next to add a new instance and DBCA displays the Adding an Instance page.
  7. On the Adding an Instance page, enter the instance name in the field at the top of this page if the instance name that DBCA provides does not match your existing instance naming scheme.
  8. Review the information on the Summary dialog and click OK or click Cancel to end the instance addition operation. DBCA displays a progress dialog showing DBCA performing the instance addition operation.
  9. After you terminate your DBCA session, run the following command to verify the administrative privileges on the target node and obtain detailed information about these privileges where nodelist consists of the names of the nodes on which you added database instances:
    cluvfy comp admprv -o db_config -d Oracle_home -n nodelist [-verbose]
    
  10. Perform any necessary service configuration procedures, as described in Chapter 5, “Workload Management with Dynamic Database Services”.

Deleting Oracle RAC from a Cluster Node

To remove Oracle RAC from a cluster node, you must delete the database instance and the Oracle RAC software before removing the node from the cluster.

Note:

If there are no database instances on the node you want to delete, then proceed to “Removing Oracle RAC”.

This section includes the following procedures to delete nodes from clusters in an Oracle RAC environment:

Deleting Instances from Oracle RAC Databases

The procedures for deleting database instances are different for policy-managed and administrator-managed databases. Deleting a policy-managed database instance involves reducing the number of servers in the server pool in which the database instance resides. Deleting an administrator-managed database instance involves using DBCA to delete the database instance.

To delete a policy-managed database, reduce the number of servers in the server pool in which a database instance resides by relocating the server on which the database instance resides to another server pool. This effectively removes the instance without having to remove the Oracle RAC software from the node or the node from the cluster.

For example, you can delete a policy-managed database by running the following commands on any node in the cluster:

$ srvctl stop instance -d db_unique_name -n node_name
$ srvctl relocate server -n node_name -g Free

The first command stops the database instance on a particular node and the second command moves the node out of its current server pool and into the Free server pool.

See Also:

“Removing Oracle RAC” for information about removing the Oracle RAC software from a node

Deleting Instances from Administrator-Managed Databases

Note:

Before deleting an instance from an Oracle RAC database, use SRVCTL to do the following:

  • If you have services configured, then relocate the services
  • Modify the services so that each service can run on one of the remaining instances
  • Ensure that the instance to be removed from an administrator-managed database is neither a preferred nor an available instance of any service

The procedure in this section explains how to use DBCA in interactive mode to delete an instance from an Oracle RAC database.

See Also:

Oracle Database 2 Day + Real Application Clusters Guide for information about how to delete a database instance from a target node with Oracle Enterprise Manager

Using DBCA in Interactive Mode to Delete Instances from Nodes

To delete an instance using DBCA in interactive mode, perform the following steps:

  1. Start DBCA on a node other than the node that hosts the instance that you want to delete. The database and the instance that you plan to delete should be running during this step.
  2. On the DBCA Operations page, select Instance Management and click Next. DBCA displays the Instance Management page.
  3. On the DBCA Instance Management page, select the instance to be deleted, select Delete Instance, and click Next.
  4. On the List of Cluster Databases page, select the Oracle RAC database from which to delete the instance, then proceed as follows:
    1. On the List of Cluster Database Instances page, DBCA displays the instances that are associated with the Oracle RAC database that you selected and the status of each instance. Select the instance that you want to delete.
    2. Click OK on the Confirmation dialog to proceed to delete the instance. DBCA displays a progress dialog showing that DBCA is deleting the instance. During this operation, DBCA removes the instance and the instance's Oracle Net configuration.

      Click No to exit DBCA, or click Yes to perform another operation. If you click Yes, then DBCA displays the Operations page.

  5. Verify that the dropped instance’s redo thread has been removed by using SQL*Plus on an existing node to query the GV$LOG view. If the redo thread is not disabled, then disable the thread. For example:
    SQL> ALTER DATABASE DISABLE THREAD 2;
    
  6. Verify that the instance has been removed from OCR by running the following command, where db_unique_name is the database unique name for your Oracle RAC database:
    srvctl config database -d db_unique_name
  7. If you are deleting more than one node, then repeat these steps to delete the instances from all the nodes that you are going to delete.
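Step 5's redo-thread check can be sketched as a query against the standard GV$ views (the thread number below is illustrative, matching the earlier example):

```sql
-- Run from any surviving instance: list redo log groups per thread.
-- The dropped instance's thread should no longer appear, or should be disabled.
SELECT THREAD#, GROUP#, STATUS
FROM   GV$LOG
ORDER  BY THREAD#, GROUP#;

-- If the dropped instance's thread is still enabled, disable it:
ALTER DATABASE DISABLE THREAD 2;
```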

Removing Oracle RAC

This procedure removes Oracle RAC software from the node you are deleting from the cluster and updates inventories on the remaining nodes.

  1. If there is a listener in the Oracle RAC home on the node you are deleting, then you must disable and stop it before deleting the Oracle RAC software. Run the following commands on any node in the cluster, specifying the name of the listener and the name of the node you are deleting:
    $ srvctl disable listener -l listener_name -n name_of_node_to_delete
    $ srvctl stop listener -l listener_name -n name_of_node_to_delete
  2. Run the following command from $ORACLE_HOME/oui/bin on the node that you are deleting to update the inventory on that node:
    $ ./runInstaller -updateNodeList ORACLE_HOME=Oracle_home_location
    "CLUSTER_NODES={name_of_node_to_delete}" -local
    

    Note:

    If you have a shared Oracle RAC home, then append the -cfs option to the preceding command and provide a complete path to the location of the cluster file system.

  3. Deinstall the Oracle home—only if the Oracle home is not shared—from the node that you are deleting by running the following command from the Oracle_home/deinstall directory:
    deinstall -local
    

    Caution:

    If the Oracle home is shared, then do not run this command because it will remove the shared software. Proceed to the next step, instead.

  4. Run the following command from the $ORACLE_HOME/oui/bin directory on any one of the remaining nodes in the cluster to update the inventories of those nodes, specifying a comma-delimited list of remaining node names and the name of the local node:
    $ ./runInstaller -updateNodeList ORACLE_HOME=Oracle_home_location
    "CLUSTER_NODES={remaining_node_list}" LOCAL_NODE=local_node_name

    Notes:

    • Because, on an Oracle Flex Cluster, not all nodes may have database software installed, remaining_node_list must list only those nodes with installed database software homes.
    • If you have a shared Oracle RAC home, then append the -cfs option to the command example in this step and provide a complete path to the location of the cluster file system.

Deleting Nodes from the Cluster

After you delete the database instance and the Oracle RAC software, you can begin the process of deleting the node from the cluster. You accomplish this by running scripts on the node you want to delete to remove the Oracle Clusterware installation and then you run scripts on the remaining nodes to update the node list.

See Also:

Oracle Clusterware Administration and Deployment Guide for information about deleting nodes from the cluster

Site Reference:

Clone ORACLE HOME to other DB Server

What is cloning?

Cloning is a process of copying an existing installation to a different server or location. Cloning is similar to an Oracle installation, except that Oracle Universal Installer performs the actions in a special mode called "clone mode".

Starting with Oracle 10g, Oracle supports cloning, and users can easily clone existing Oracle installations. The source and destination servers should have the same configuration and packages installed for Oracle cloning to work. The cloning process works by copying all files from the source Oracle home to the destination Oracle home; files that are not part of the source installation are not copied to the destination location.

When it comes to a server upgrade or migration, database administrators must decide whether to clone or install the Oracle binaries. In the following sections, I'll explain why cloning is useful and the processes involved.

When is cloning useful?

  1. If you need to create a new installation with many patches, cloning enables you to create the new installation with all patches applied in one step, eliminating manual patching.
  2. To create an installation that is the same as production, for development/testing purposes
  3. To create an Oracle home once and deploy it to many hosts
  4. To quickly deploy an instance and its applications

Note that a cloned installation behaves the same as the source installation. You can patch the Oracle home using OPatch, and you can remove it using Oracle Universal Installer. However, cloning across platforms is not possible.

Methods available for Cloning

You can use one of the following methods to clone an Oracle installation:

  1. Clone using "perl clone.pl". In this method, you need to install DB Console so that the required Perl files are installed in $ORACLE_HOME/clone/bin.
  2. Clone using "runInstaller". In this method, you also need to install DB Console.
  3. Clone using "runInstaller" in silent mode. In this method, you use the same "runInstaller" in non-interactive mode.

How is Cloning done?

Cloning is a two-step process: in the first step, you copy the Oracle installation from the source to the destination; in the second step, you run Oracle Universal Installer to clone the installation on the destination.

Step 1: Copy source Oracle Installation

Before you make a copy of the existing installation, the databases, listeners, agents, and so on running from the source installation should be shut down.

The source Oracle home contains configuration/trace/log files related to its environment, such as udump, bdump, alert.log, init.ora, listener.ora, tnsnames.ora, and so on. When you clone the Oracle home, the destination will have all of those files. If your destination will host different databases/instances, then you need to exclude those files during the copy.

If your requirement is a server migration, then there is no need to exclude those files.

Below is a list of files that you may need to exclude during the cloning process, in case your destination will be different.

  • Database related files (Data/tmp files, log  files, Control files…etc)
  • SQL*Net files
    • $ORACLE_HOME/network/admin/listener.ora
    • $ORACLE_HOME/network/admin/tnsnames.ora
    • $ORACLE_HOME/network/admin/sqlnet.ora
  • Database related directories
    • $ORACLE_HOME/dbs (init.ora, spfile.ora, orapwd)
    • $ORACLE_HOME/admin (trace, alert, core files, etc.)
    • $ORACLE_BASE/diag (trace, alert, core, incident files, etc.)
    • $ORACLE_HOME/oc4j/j2ee/OC4J_DBConsole__ (Enterprise Manager/DB Console)
    • $ORACLE_HOME/hs/admin/ (heterogeneous services files)

Note that for Database 11g the permissions are more restrictive for the files $ORACLE_HOME/bin/nmo, $ORACLE_HOME/bin/nmb, and $ORACLE_HOME/bin/nmhs. When you attempt to tar or copy them, you will receive a "permission denied" error. For 11g and higher, add these files to the exclusion list; they are re-created by the root.sh script later in the cloning process.

If any other products are installed/configured but not required at destination, then you need to exclude those files as well.

Make a copy using “tar”:

You can create an exclude list as below

$ cat exclude_list.txt
./network/admin/listener.ora
./network/admin/sqlnet.ora
./network/admin/tnsnames.ora
./oc4j/j2ee/OC4J_DBConsole_oelinux.localdomain_ora11gr2
./oc4j/j2ee/OC4J_DBConsole
……
……
…etc

Create a tar file, excluding the files or directories that are not required, using the below command:

$ cd $ORACLE_HOME
$ tar -zcvf ~/Oracle_Home11g_clone.tar.gz . -X ~/exclude_list.txt > ~/Oracle_Home11g_clone.log
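The exclude-and-archive step can be rehearsed end to end on a throwaway directory before touching a real home. A minimal sketch, using mock paths rather than a real Oracle home:

```shell
set -e
# Build a mock "Oracle home" with a mix of binaries and environment-specific files
MOCK_HOME=$(mktemp -d)
mkdir -p "$MOCK_HOME/network/admin" "$MOCK_HOME/bin" "$MOCK_HOME/dbs"
touch "$MOCK_HOME/bin/oracle" \
      "$MOCK_HOME/network/admin/listener.ora" \
      "$MOCK_HOME/network/admin/tnsnames.ora" \
      "$MOCK_HOME/dbs/init.ora"

# Exclude list in the same "./relative/path" form used above
cat > /tmp/exclude_list.txt <<'EOF'
./network/admin/listener.ora
./network/admin/tnsnames.ora
./dbs/init.ora
EOF

cd "$MOCK_HOME"
tar -zcf /tmp/mock_home_clone.tar.gz -X /tmp/exclude_list.txt .

# The archive keeps the binaries but drops the excluded configuration files
tar -ztf /tmp/mock_home_clone.tar.gz
```

Listing the archive should show ./bin/oracle but none of the excluded configuration files, confirming the exclude list is in the right form before you run it against the real $ORACLE_HOME.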

On the destination server, create the directory structure and set environment variables such as ORACLE_HOME, ORACLE_BASE, and so on.

$ mkdir -p /home/oracle/product/11.2.0.4/dbhome_1
$ chown -R oracle:dba /home/oracle

Copy the tar file to the destination server and unpack it:

$ cd $ORACLE_HOME
$ tar -zxvf Oracle_Home11g_clone.tar.gz

Make a copy using "cp -Rp"

If the source and destination of the Oracle installation are on the same server, then "cp -Rp" can be used.

$ cp -Rp $ORACLE_HOME ${ORACLE_HOME}_clone

Step 2: Clone Oracle Installation

On the destination, make sure that the ORACLE_HOME and ORACLE_BASE environment variables are set. The /etc/oraInst.loc (for AIX, Linux) or /var/opt/oracle/oraInst.loc (for Solaris, HP-UX) file should exist on the destination server. This file will not exist if Oracle was never installed on the destination server; in that case, you need to create it manually.

Here is a sample file:

$ cat /etc/oraInst.loc
inventory_loc=/home/oracle/oraInventory
inst_group=dba

Where inventory_loc is the path of the oraInventory location.
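If the pointer file is missing, it can be created by hand. A sketch writing to a temporary path for illustration (on a real Linux host you would create /etc/oraInst.loc as root, and /var/opt/oracle/oraInst.loc on Solaris/HP-UX):

```shell
# Create an oraInst.loc pointer file; /tmp is used here only so the
# sketch runs without root privileges.
ORAINST=/tmp/oraInst.loc
cat > "$ORAINST" <<'EOF'
inventory_loc=/home/oracle/oraInventory
inst_group=dba
EOF
cat "$ORAINST"
```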

If the oraInst.loc file exists in a different location, then you can create a soft link, or edit the $ORACLE_HOME/clone/config/cs.properties file and add the "-invPtrLoc <path>/oraInst.loc" parameter to "clone_command_line".

If you have copied the Oracle inventory, then you will receive a message that the inventory already exists when you run "clone.pl" or "./runInstaller" to clone the Oracle installation. So before you run the cloning commands, make sure to detach the Oracle home.

See the below example to detach the Oracle home:

$ cd  /home/oracle/product/11.2.0/dbhome_1/oui/bin
$./runInstaller -detachHome ORACLE_HOME=/home/oracle/product/10.2.0/db_1
Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at  /home/oracle/oraInventory
‘DetachHome’ was successful.

Note: If this is the first Oracle home on the destination server, then cloning creates the inventory with the new Oracle home. If it is not the first Oracle home, then cloning updates the existing inventory with the new Oracle home.

1. Clone using “perl clone.pl”

$ cd $ORACLE_HOME/clone/bin
$ perl clone.pl ORACLE_HOME="<destination_home>"  ORACLE_HOME_NAME="<unique_home_name>"

Where
ORACLE_HOME: specifies the new Oracle home directory location
ORACLE_HOME_NAME: specifies a unique name for the Oracle Home for the server

See the below example to clone the installation using “perl clone.pl”.

$ perl clone.pl ORACLE_HOME="/home/oracle/product/11.2.0.4/dbhome_1" \
ORACLE_HOME_NAME="Oracle_home11204" ORACLE_BASE="/home/oracle" \
OSDBA_GROUP=dba OSOPER_GROUP=dba

./runInstaller -clone -waitForCompletion "ORACLE_HOME=/home/oracle/product/11.2.0.4/dbhome_1"
"ORACLE_HOME_NAME=Oracle_home11204" "ORACLE_BASE=/home/oracle"
"oracle_install_OSDBA=dba" "oracle_install_OSOPER=dba" -silent -noConfig -nowait
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3999 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-07-13_05
-34-06PM. Please wait ...Oracle Universal Installer, Version 11.2.0.4.0 

Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.

You can find the log of this install session at:
/home/oracle/oraInventory/logs/cloneActions2014-07-13_05-34-06PM.log
.................................................................100% Done.

Installation in progress (Sunday, July 13, 2014 5:34:16 PM PDT)
...........................................................       84% Done.

Install successful

Linking in progress (Sunday, July 13, 2014 5:34:27 PM PDT)
Link successful

Setup in progress (Sunday, July 13, 2014 5:35:00 PM PDT)
Setup successful

End of install phases.(Sunday, July 13, 2014 5:35:22 PM PDT)
WARNING:
The following configuration scripts need to be executed as the "root" user.
/home/oracle/product/11.2.0.4/dbhome_1/root.sh
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts

The cloning of Oracle_home11204 was successful.
Please check '/home/oracle/oraInventory/logs/cloneActions2014-07-13_05-34-06PM
.log' for more details.

2. Clone using “runInstaller”

$ORACLE_HOME/oui/bin/runInstaller -clone  ORACLE_HOME="<destination_home>"
ORACLE_HOME_NAME="<unique_home_name>"

Where
-clone: specifies that the home is a clone of another location
ORACLE_BASE: specifies the base directory location for Oracle products
ORACLE_HOME: specifies the new Oracle home directory location
ORACLE_HOME_NAME: specifies a unique name for the Oracle Home for the server

See the below example to clone the installation using “runInstaller”.

$ORACLE_HOME/oui/bin/runInstaller -clone ORACLE_HOME="/home/oracle/product/11.2.0.4/dbhome_1" \
ORACLE_HOME_NAME="Oracle_home11204" ORACLE_BASE="/home/oracle" OSDBA_GROUP=dba OSOPER_GROUP=dba

   Starting Oracle Universal Installer...
   Checking swap space: must be greater than 500 MB.   Actual 3999 MB    Passed
   Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-07-13
_05-18-27PM. Please wait ...Oracle Universal Installer, Version 11.2.0.4.0 

   Production
   Copyright (C) 1999, 2008, Oracle. All rights reserved.

   You can find a log of this install session at:
   /home/oracle/oraInventory/logs/cloneActions2014-07-13_05-18-27PM.log
   .................................................................. 100% Done.

   Installation in progress (Sunday, July 13, 2014 5:18:38 PM PDT)
   ..................................................                  74% Done.
   Install successful

   Linking in progress (Sunday, July 13, 2014 5:18:47 PM PDT)
   Link successful

   Setup in progress (Sunday, July 13, 2014 5:20:20 PM PDT)
   Setup successful

   End of install phases.(Sunday, July 13, 2014 5:20:22 PM PDT)
   WARNING:
   The following configuration scripts need to be executed as the "root" user.
   #!/bin/sh
   #Root script to run
   /home/oracle/product/10.2.0/db_1/root.sh
   To execute the configuration scripts:
          1. Open a terminal window
          2. Log in as "root"
          3. Run the scripts

   The cloning of home10g was successful.
   Please check '/home/oracle/oraInventory/logs/cloneActions2014-07-13_05
-18-27PM.log' for more details.

3. Clone using “runInstaller in silent” mode

See the below example to clone the installation using “runInstaller in silent” mode.

$ORACLE_HOME/oui/bin/runInstaller -clone -silent ORACLE_HOME="/home/oracle/product/11.2.0.4/dbhome_1" \
ORACLE_HOME_NAME="Oracle_home11204" ORACLE_BASE="/home/oracle" OSDBA_GROUP=dba OSOPER_GROUP=dba

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3999 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-07-13
_05-29-22PM. Please wait ...Oracle Universal Installer, Version 11.2.0.1.0 

Production
Copyright (C) 1999, 2009, Oracle. All rights reserved.

You can find the log of this install session at:
/home/oracle/oraInventory/logs/cloneActions2014-07-13_05-29-22PM.log
.................................................................... 100% Done.

Installation in progress (Sunday, July 13, 2014 5:29:30 PM PDT)
..................................................                    77% Done.
Install successful

Linking in progress (Sunday, July 13, 2014 5:29:39 PM PDT)
Link successful

Setup in progress (Sunday, July 13, 2014 5:30:10 PM PDT)
Setup successful

End of install phases.(Sunday, July 13, 2014 5:30:43 PM PDT)
WARNING:
The following configuration scripts need to be executed as the "root" user.
/home/oracle/product/11.2.0.4/dbhome_1/root.sh
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts

   The cloning of Oracle_home11204 was successful.
   Please check '/home/oracle/oraInventory/logs/cloneActions2014-07-13_05-29-22
PM.log' for more details.

To complete the clone, log in as "root" and run $ORACLE_HOME/root.sh.

# /home/oracle/product/11.2.0.4/dbhome_1/root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=  /home/oracle/product/11.2.0.4/dbhome_1
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

On UNIX and Linux platforms, run "changePerm.sh" for versions 10g and older.

#/home/oracle/product/10.2.0/db_1/install/changePerm.sh -o  /home/oracle/product/10.2.0/db_1
------------------------------------------------------------------------------
Disclaimer: The purpose of this script is to relax permissions on some of the
files in the database Oracle Home so that all clients can access them.
Please note that Oracle Corporation recommends using the most restrictive file
permissions as possible for your given implementation.  Running this script
should be done only after considering all security ramifications.
-------------------------------------------------------------------------------
Do you wish to continue (y/n) [n]: y
Spooling the error log /tmp/changePerm_err.log...
Finished running the script successfully

References:-

  • How To Clone An Existing RDBMS Installation Using OUI (Doc ID 300062.1)
  • 11g Install : Understanding Oracle Base, Oracle Home and Oracle Central/Global Inventory locations (Doc ID 454442.1)
  • FAQs on RDBMS Oracle Home Cloning Using OUI (Doc ID 565009.1)
  • Cloning An Existing Oracle10g Release 2 (10.2.0.x) RDBMS Installation Using OUI (Doc ID 559304.1)
  • Cloning An Existing Oracle11g Release 2 (11.2.0.x) RDBMS Installation Using OUI (Doc ID 1221705.1)

Conclusion:

This article helps the Oracle community understand when cloning is useful, the available methods, and how to clone an Oracle installation.

 

Site References:

http://deegeplanet.com/noteblog/index.php/en/rdbms/18-ohcloning

http://allthingsoracle.com/should-you-install-or-clone-oracle-home/

http://www.vitalsofttech.com/cloning-oracle-12c-home-binaries/

Multi-Process Multi-Threaded Architecture in Oracle 12c

James Huang - Databases Consultant

Overview

In the default Unix/Linux architecture, every Oracle process, including background and foreground processes, runs as a dedicated OS process.

In the 12c Release 1 (12.1.0.x) multi-process multi-threaded architecture, four background processes (PMON, DBW, VKTM, and PSP) still run as dedicated OS processes, while the remaining processes can be configured to run as threads, which largely saves CPU and memory usage.

Configure Multi-Process Multi-Threaded for Background Processes

  • In the default Unix/Linux architecture, every process runs as a dedicated OS process:
$ ps -eLo "pid tid comm args"|grep cdb2
 6163 6163 ora_pmon_cdb2 ora_pmon_cdb2
 6165 6165 ora_psp0_cdb2 ora_psp0_cdb2
 6167 6167 ora_vktm_cdb2 ora_vktm_cdb2
 6171 6171 ora_gen0_cdb2 ora_gen0_cdb2
 6173 6173 ora_mman_cdb2 ora_mman_cdb2
 6177 6177 ora_diag_cdb2 ora_diag_cdb2
 6179 6179 ora_dbrm_cdb2 ora_dbrm_cdb2
 6181 6181 ora_vkrm_cdb2 ora_vkrm_cdb2
 6183 6183 ora_dia0_cdb2 ora_dia0_cdb2
 6185 6185 ora_dbw0_cdb2 ora_dbw0_cdb2
 6187 6187 ora_lgwr_cdb2 ora_lgwr_cdb2
 6189 6189 ora_ckpt_cdb2 ora_ckpt_cdb2
 6191 6191 ora_lg00_cdb2 ora_lg00_cdb2
 6193 6193 ora_smon_cdb2 ora_smon_cdb2
 6195 6195 ora_lg01_cdb2 ora_lg01_cdb2
...
  • Change parameter and shutdown…
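The truncated step above refers to the THREADED_EXECUTION initialization parameter. A hedged sketch of the change (it is a static parameter, so a restart is required):

```sql
-- Enable the multi-threaded model in 12c; takes effect after instance restart.
ALTER SYSTEM SET THREADED_EXECUTION=TRUE SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
```

Note that with THREADED_EXECUTION=TRUE, operating system authentication (for example, connect / as sysdba) no longer works, so the restart must authenticate with a password.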


How to Install Example Schemas in 12c by Using Templates and Creating a New PDB

James Huang - Databases Consultant

In another post, "How to Install Example Schemas in 12c Database?", I showed detailed steps for installing the example schemas manually through the SQL command line.

Here we use another way to install the example schemas: create a PDB containing the example schemas and plug this PDB into an existing CDB.

1) Start up DBCA, select “Manage Pluggable Databases”


2) Select “Create a Pluggable Database”


3) Select the CDB database in which to create the PDB


4) Select "Create pluggable database using PDB file set"


5) Select the example schemas file set under $ORACLE_HOME/assistants/dbca/templates/


6) Specify PDB name and PDB datafiles location

7) Review the template summary


8) Start to create PDB of example schemas


9) PDB creation is complete


Verify PDB creation and connect to example schemas

1) connect to CDB

$ sqlplus sys as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sat Dec 6 22:47:48 2014

Copyright (c) 1982, 2014, Oracle. All rights…


Where is 12.1.0.2 Oracle Clusterware Diagnostic and Alert Logs ?

James Huang - Databases Consultant

We've just upgraded Oracle 12.1.0.1 GI to 12.1.0.2, and found that the clusterware logs have been relocated from $GI_HOME/log/.
According to Oracle support Doc (ID 1915729.1), from Oracle 12.1.0.2 on Oracle clusterware (part of Grid Infrastructure) uses Oracle database fault diagnosability infrastructure to manage diagnostic data and its alert logs. As a result, most diagnostic data resides in the Automatic Diagnostic Repository (ADR).

Please note :

1) More space is required for the GI $ORACLE_BASE file system, because the Clusterware logs are moved into the ADR, which is part of ORACLE_BASE.
2) When opening an SR, TFA should be used instead of diagcollection.pl to collect diagnostics:
$ /u01/app/12.1.0.2/grid/bin/tfactl diagcollect -from "Jan/20/2015 08:00:00" -to "Jan/23/2015 13:00:00"

Collecting data for all nodes
Scanning files from Jan/20/2015 08:00:00 to Jan/23/2015 13:00:00

Repository Location in hx415 : /u01/app/grid/tfa/repository
2015/01/23 14:42:55 EST : Running an inventory clusterwide …
2015/01/23 14:42:55 EST : Collection Name : tfa_Fri_Jan_23_14_42_49_EST_2015.zip
……
……
……


How to Install MySQL on Unix/Linux Using Generic Binaries

James Huang - Databases Consultant

1) Download "V74396-01.zip", which contains "mysql-advanced-5.6.23-linux-glibc2.5-x86_64.tar.gz".

There are two accompanying files, an ASC signature and an MD5 checksum, for "mysql-advanced-5.6.23-linux-glibc2.5-x86_64.tar.gz".

a) Verify the MD5 checksum:

$ cat mysql-advanced-5.6.23-linux-glibc2.5-x86_64.tar.gz.md5
f2ace50e757f1a63736e8dcbf5cfeb19 mysql-advanced-5.6.23-linux-glibc2.5-x86_64.tar.gz
$ md5sum mysql-advanced-5.6.23-linux-glibc2.5-x86_64.tar.gz
f2ace50e757f1a63736e8dcbf5cfeb19 mysql-advanced-5.6.23-linux-glibc2.5-x86_64.tar.gz
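The manual comparison above can be scripted with md5sum -c, which reads the standard "<checksum>  <filename>" format. A sketch using a stand-in file (the names are illustrative, not the real MySQL tarball):

```shell
set -e
cd /tmp
# Stand-in for the downloaded tarball
echo 'pretend this is the MySQL tarball' > pkg.tar.gz
# Stand-in for the published .md5 file, in "<md5>  <filename>" form
md5sum pkg.tar.gz > pkg.tar.gz.md5
# Verify: md5sum -c recomputes the checksum and compares it to the file
md5sum -c pkg.tar.gz.md5
```

A mismatch makes md5sum -c exit non-zero, which is convenient in download scripts.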

b) Signature Checking Using GnuPG

1. Obtain a copy of the public GPG build key (the key is "mysql-build@oss.oracle.com") by copying or downloading it from http://pgp.mit.edu/ into the file mysql_pubkey.asc.

2. Import the build key into your personal public GPG keyring:

[root@racnote1 .gnupg]# gpg --import mysql_pubkey.asc
gpg: keyring `/root/.gnupg/secring.gpg' created
gpg: keyring `/root/.gnupg/pubring.gpg' created
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key 5072E1F5: public key "MySQL Release Engineering <mysql-build@oss.oracle.com>" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: no ultimately trusted keys found
[root@racnote1 .gnupg]# ls -ltr
total 28
-rw-r--r--. 1 root root 5968 Mar 24 12:02 mysql_pubkey.asc
-rw-------. 1 root root    0 Mar 24 13:07 secring.gpg
-rw-------. 1 root root 4434 Mar 24 13:07 pubring.gpg~
-rw-------. 1 root root 4434 Mar 24 13:07…


How to Install MySQL on Linux Using RPM Packages

James Huang - Databases Consultant

1) Download the RPM packages zip file "V74391-01.zip" for MySQL 5.6.23, which contains the files below:


2) Unpack the zip file:

# unzip V74391-01.zip
Archive: V74391-01.zip
 extracting: MySQL-shared-advanced-5.6.23-1.el7.x86_64.rpm 
 extracting: MySQL-test-advanced-5.6.23-1.el7.x86_64.rpm 
 extracting: MySQL-devel-advanced-5.6.23-1.el7.x86_64.rpm 
 extracting: MySQL-shared-compat-advanced-5.6.23-1.el7.x86_64.rpm 
 extracting: MySQL-embedded-advanced-5.6.23-1.el7.x86_64.rpm 
 extracting: MySQL-server-advanced-5.6.23-1.el7.x86_64.rpm 
 extracting: MySQL-client-advanced-5.6.23-1.el7.x86_64.rpm 
 extracting: README.txt

3)To perform a standard minimal installation, install the server and client RPMs:

#rpm -i MySQL-server-advanced-5.6.23-1.el7.x86_64.rpm
#rpm -i MySQL-client-advanced-5.6.23-1.el7.x86_64.rpm

4) Or install the packages using yum. In a directory containing all RPM packages for a MySQL release, "yum install MySQL*rpm" installs them in the correct order:

[root@racnote1 MySQL_5_6_23_RPMS]# yum install MySQL*rpm
Loaded plugins: langpacks
Examining MySQL-client-advanced-5.6.23-1.el7.x86_64.rpm: MySQL-client-advanced-5.6.23-1.el7.x86_64
Marking MySQL-client-advanced-5.6.23-1.el7.x86_64.rpm to be installed
Examining MySQL-devel-advanced-5.6.23-1.el7.x86_64.rpm: MySQL-devel-advanced-5.6.23-1.el7.x86_64
Marking MySQL-devel-advanced-5.6.23-1.el7.x86_64.rpm to be installed
Examining MySQL-embedded-advanced-5.6.23-1.el7.x86_64.rpm: MySQL-embedded-advanced-5.6.23-1.el7.x86_64
Marking MySQL-embedded-advanced-5.6.23-1.el7.x86_64.rpm to be installed
Examining MySQL-server-advanced-5.6.23-1.el7.x86_64.rpm: MySQL-server-advanced-5.6.23-1.el7.x86_64
Marking MySQL-server-advanced-5.6.23-1.el7.x86_64.rpm to be installed
Examining MySQL-shared-advanced-5.6.23-1.el7.x86_64.rpm: MySQL-shared-advanced-5.6.23-1.el7.x86_64
Marking MySQL-shared-advanced-5.6.23-1.el7.x86_64.rpm to be installed
Examining MySQL-shared-compat-advanced-5.6.23-1.el7.x86_64.rpm: MySQL-shared-compat-advanced-5.6.23-1.el7.x86_64
Marking MySQL-shared-compat-advanced-5.6.23-1.el7.x86_64.rpm to be installed
Examining MySQL-test-advanced-5.6.23-1.el7.x86_64.rpm: MySQL-test-advanced-5.6.23-1.el7.x86_64
Marking MySQL-test-advanced-5.6.23-1.el7.x86_64.rpm to be installed
Resolving Dependencies -->…


How to Install rlwrap on Linux

James Huang - Databases Consultant

rlwrap is a utility that allows you to use the up and down arrow keys, as in a DOS environment. For Oracle command-line tools such as sqlplus, rman, and adrci, you can recall a command from history instead of typing the same command again.

1) Download and install it as a package from "http://rpm.pbone.net/" or an alternative web site:

[root@racnote1 Patches]# yum install rlwrap-0.42-1.el7.x86_64.rpm
Loaded plugins: langpacks
Examining rlwrap-0.42-1.el7.x86_64.rpm: rlwrap-0.42-1.el7.x86_64
Marking rlwrap-0.42-1.el7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package rlwrap.x86_64 0:0.42-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

Installing:
 rlwrap x86_64 0.42-1.el7 /rlwrap-0.42-1.el7.x86_64 209 k

Transaction Summary
Install 1 Package

Total size: 209 k
Installed size: 209 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : rlwrap-0.42-1.el7.x86_64 1/1
  Verifying  :…
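Once installed, rlwrap is typically enabled by wrapping the Oracle CLIs in your shell profile. These alias names are a common convention, not something the package creates:

```shell
# Add to ~/.bash_profile so sqlplus/rman/adrci get readline-style
# history and arrow-key recall via rlwrap
alias sqlplus='rlwrap sqlplus'
alias rman='rlwrap rman'
alias adrci='rlwrap adrci'

# Confirm the alias is in place
alias sqlplus
```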


Revoking OEM12c License Packs

OraManageability

Disabling a management pack is just as easy as enabling the pack in the first place.

Go to the bottom of the Setup menu and select Management Packs | Management Pack Access.


To completely disable the pack, select the radio buttons:

  • All Targets (Licensable targets and dependent targets)
  • Pack based Batch Update

Select the management pack from the list and move it to the right-hand panel

Press the Apply button


Verify your completion by checking the Auto Licensing Disabled List and then check the target list


Verify that targets no longer have access to the pack by clicking on

  • Licensable Targets (Licensable targets and dependent targets)
  • Target Based Pack Access

at the top of the same page.

