Channel: 12c – OracleBlog

A New Round of Lifetime Support Starting with 12c


Note that 11gR2 also gets a period of Waived Extended Support, just like 9iR2 and 10gR2; during that period the extra 10% fee is not charged. For details, consult Oracle sales.


Installing 12c RAC on VirtualBox


Overall, installing 12c RAC is largely the same as installing 11g.

Let's start with a simple 12c RAC (no DNS, no Flex Cluster, no policy-managed administration), based on Oracle Linux Release 6 Update 4 for x86_64 (64 Bit) and installed on VirtualBox 4.2.14.

Part 1: VirtualBox (vbox) deployment

1. Click "New", choose type Linux, version Oracle (64 bit), and name the machine ol6-121-rac1.

2. I set the memory to 3000 MB; 4000 MB or more is recommended.

3. Create a virtual hard disk now.

4. Choose VDI.

5. Choose dynamically allocated.

6. Set the disk size to 30 GB.

7. Adapter 1 is host-only; adapter 2 is an internal network.

8. Under Storage > Controller: IDE, attach the Linux ISO image.

9. Click Start and begin the OS installation. Note that swap should ideally be larger than 4 GB; I made mine only 3 GB, so there is a warning later during the Oracle installation, but it does not block anything.

10. During the OS installation, select the following packages:
Base System > Base
Base System > Compatibility libraries
Base System > Hardware monitoring utilities
Base System > Large Systems Performance
Base System > Network file system client
Base System > Performance Tools
Base System > Perl Support
Servers > Server Platform
Servers > System administration tools
Desktops > Desktop
Desktops > Desktop Platform
Desktops > Fonts
Desktops > General Purpose Desktop
Desktops > Graphical Administration Tools
Desktops > Input Methods
Desktops > X Window System
Applications > Internet Browser
Development > Additional Development
Development > Development Tools

11. Add the following to /etc/sysctl.conf:
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
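
To load these kernel parameters immediately without a reboot, run:
# Apply the new settings from /etc/sysctl.conf
/sbin/sysctl -p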

12. Add the following to /etc/security/limits.conf:
oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc    2047
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768

13. Install the required RPM packages:
rpm -Uvh *binutils*
rpm -Uvh *compat-libcap1*
rpm -Uvh *compat-libstdc++-33*
rpm -Uvh *gcc*
rpm -Uvh *gcc-c++*
rpm -Uvh *glibc*
rpm -Uvh *glibc-devel*
rpm -Uvh *ksh*
rpm -Uvh *libgcc*
rpm -Uvh *libstdc++*
rpm -Uvh *libstdc++-devel*
rpm -Uvh *libaio*
rpm -Uvh *libaio-devel*
rpm -Uvh *libXext*
rpm -Uvh *libXtst*
rpm -Uvh *libX11*
rpm -Uvh *libXau*
rpm -Uvh *libxcb*
rpm -Uvh *libXi*
rpm -Uvh *make*
rpm -Uvh *sysstat*
rpm -Uvh *unixODBC*
rpm -Uvh *unixODBC-devel*
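
If the machine can reach a yum repository, an alternative (which I did not use here) is to let yum pull the same packages and their dependencies, roughly:
# Alternative sketch: install the required packages via yum instead of local rpm files
yum install -y binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc glibc-devel \
    ksh libgcc libstdc++ libstdc++-devel libaio libaio-devel libXext libXtst libX11 \
    libXau libxcb libXi make sysstat unixODBC unixODBC-devel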

14. Create the oracle user and its groups:
groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
groupadd -g 54324 backupdba
groupadd -g 54325 dgdba
groupadd -g 54326 kmdba
groupadd -g 54327 asmdba
groupadd -g 54328 asmoper
groupadd -g 54329 asmadmin
useradd -u 54321 -g oinstall -G dba,oper oracle

15. Set the oracle user's password:
passwd oracle

16. Configure /etc/hosts:
127.0.0.1       localhost.localdomain   localhost
# Public
192.168.56.101   ol6-121-rac1.localdomain        ol6-121-rac1
192.168.56.102   ol6-121-rac2.localdomain        ol6-121-rac2
# Private
192.168.1.101   ol6-121-rac1-priv.localdomain   ol6-121-rac1-priv
192.168.1.102   ol6-121-rac2-priv.localdomain   ol6-121-rac2-priv
# Virtual
192.168.56.103   ol6-121-rac1-vip.localdomain    ol6-121-rac1-vip
192.168.56.104   ol6-121-rac2-vip.localdomain    ol6-121-rac2-vip
# SCAN
192.168.56.105   ol6-121-scan.localdomain ol6-121-scan
192.168.56.106   ol6-121-scan.localdomain ol6-121-scan
192.168.56.107   ol6-121-scan.localdomain ol6-121-scan

17. Edit /etc/security/limits.d/90-nproc.conf, changing the line:
*          soft    nproc    1024
to:
* - nproc 16384

18. Adjust SELinux: edit /etc/selinux/config and set:
SELINUX=permissive
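
The setting in /etc/selinux/config only takes effect at the next boot; to switch the running system to permissive right away you can additionally run:
# Put SELinux into permissive mode for the current session as well
setenforce Permissive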

19. Disable the firewall:
# service iptables stop
# chkconfig iptables off

Part 2: Setting up the oracle user's environment

1. Create the directories:
mkdir -p  /u01/app/12.1.0.1/grid
mkdir -p /u01/app/oracle/product/12.1.0.1/db_1
chown -R oracle:oinstall /u01
chmod -R 775 /u01/

2. Set the oracle user's environment variables (for example, appended to /home/oracle/.bash_profile):
# Oracle Settings
export TMP=/tmp
export TMPDIR=$TMP

export ORACLE_HOSTNAME=ol6-121-rac1.localdomain
export ORACLE_UNQNAME=CDBRAC
export ORACLE_BASE=/u01/app/oracle
export GRID_HOME=/u01/app/12.1.0.1/grid
export DB_HOME=$ORACLE_BASE/product/12.1.0.1/db_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=cdbrac1
export ORACLE_TERM=xterm
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
alias grid_env='. /home/oracle/grid_env'
alias db_env='. /home/oracle/db_env'

3. Since we are not creating a separate grid user, we simply create two environment scripts, grid_env and db_env, under the oracle user so that either environment can be loaded without switching users.
Create /home/oracle/grid_env with the following content:
export ORACLE_SID=+ASM1
export ORACLE_HOME=$GRID_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

4. Create /home/oracle/db_env with the following content:
export ORACLE_SID=cdbrac1
export ORACLE_HOME=$DB_HOME
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

5. Test them:
[root@ol6-121-rac1 ~]# su - oracle
[oracle@ol6-121-rac1 ~]$ grid_env
[oracle@ol6-121-rac1 ~]$ echo $ORACLE_HOME
/u01/app/12.1.0.1/grid
[oracle@ol6-121-rac1 ~]$
[oracle@ol6-121-rac1 ~]$ db_env
[oracle@ol6-121-rac1 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0.1/db_1

Part 3: Installing the VirtualBox Guest Additions

1. In the VirtualBox Manager, select the ol6-121-rac1 machine, click Settings, go to Storage > Controller: IDE, click the optical drive icon on the right and choose a virtual disc file: X:\Program Files\Oracle\VirtualBox\VBoxGuestAdditions.iso
 
2. Run:
cd /media/VBOXADDITIONS_4.2.14_86644
sh ./VBoxLinuxAdditions.run
 
3. Once the Guest Additions are installed, VirtualBox shared folders become usable. However, for the oracle user to access the shared folder, it must be added to the vboxsf group:
# usermod -G vboxsf,dba oracle
# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(vboxsf)
 
4. Now you can unzip the installation media you downloaded into your directory:
unzip linuxamd64_12c_grid_1of2.zip
unzip linuxamd64_12c_grid_2of2.zip
unzip linuxamd64_12c_database_1of2.zip
unzip linuxamd64_12c_database_2of2.zip
Note: on Windows, the unzipped contents must be merged into two directories, database and grid.
 
5. In the VirtualBox Manager, select the ol6-121-rac1 machine, click Settings, go to Shared Folders, click the plus sign, and tick "Auto-mount" and "Make Permanent".
 
 
6. Install the cvuqdisk package from the grid directory in the shared folder:
cd /media/sf_12cR1/grid/rpm
rpm -Uvh cvuqdisk-1.0.9-1.rpm

Part 4: Creating the shared ASM disks

1. Shut down the virtual machine ol6-121-rac1.
 
2. Create the shared ASM disks, four in total, 5 GB each:
E:\>cd E:\Oralce_Virtual_Box\ol6-121-rac
E:\Oralce_Virtual_Box\ol6-121-rac>VBoxManage createhd --filename asm1.vdi --size 5120 --format VDI --variant Fixed
E:\Oralce_Virtual_Box\ol6-121-rac>VBoxManage createhd --filename asm2.vdi --size 5120 --format VDI --variant Fixed
E:\Oralce_Virtual_Box\ol6-121-rac>VBoxManage createhd --filename asm3.vdi --size 5120 --format VDI --variant Fixed
E:\Oralce_Virtual_Box\ol6-121-rac>VBoxManage createhd --filename asm4.vdi --size 5120 --format VDI --variant Fixed
 
3. Attach the newly created ASM disks to the virtual machine ol6-121-rac1:
E:\Oralce_Virtual_Box\ol6-121-rac>VBoxManage storageattach ol6-121-rac1 --storagectl "SATA" --port 1 --device 0 --type hdd     --medium asm1.vdi --mtype shareable
E:\Oralce_Virtual_Box\ol6-121-rac>VBoxManage storageattach ol6-121-rac1 --storagectl "SATA" --port 2 --device 0 --type hdd     --medium asm2.vdi --mtype shareable
E:\Oralce_Virtual_Box\ol6-121-rac>VBoxManage storageattach ol6-121-rac1 --storagectl "SATA" --port 3 --device 0 --type hdd     --medium asm3.vdi --mtype shareable
E:\Oralce_Virtual_Box\ol6-121-rac>VBoxManage storageattach ol6-121-rac1 --storagectl "SATA" --port 4 --device 0 --type hdd     --medium asm4.vdi --mtype shareable
 
4. Mark these disks as shareable:
E:\Oralce_Virtual_Box\ol6-121-rac>VBoxManage modifyhd asm1.vdi --type shareable
E:\Oralce_Virtual_Box\ol6-121-rac>VBoxManage modifyhd asm2.vdi --type shareable
E:\Oralce_Virtual_Box\ol6-121-rac>VBoxManage modifyhd asm3.vdi --type shareable
E:\Oralce_Virtual_Box\ol6-121-rac>VBoxManage modifyhd asm4.vdi --type shareable
 
5. Start the virtual machine.

Part 5: Mapping the newly added shared disks on the host with udev

1. Partition the disks; first check that the four new disks are visible:
# cd /dev
# ls sd*
sda  sda1  sda2  sdb  sdc  sdd  sde
 
2. Partition with fdisk:
fdisk /dev/sdb
At the prompts enter n, p, 1, Enter, Enter, w.
Do the same for fdisk /dev/sdc, fdisk /dev/sdd and fdisk /dev/sde. (A scripted alternative is sketched below.)
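
If you prefer not to answer the fdisk prompts by hand, the same single-partition layout can be scripted; a sketch (only run it against the new, empty ASM disks):
# Sketch only: create one primary partition covering each new disk non-interactively
for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
  echo -e "n\np\n1\n\n\nw" | fdisk $d
done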
 
3. After partitioning, check again:
# cd /dev
# ls sd*
sda  sda1  sda2  sdb  sdb1  sdc  sdc1  sdd  sdd1  sde  sde1
 
4. Add the following to /etc/scsi_id.config:
options=-g
 
5. Look up the SCSI IDs:
[root@ol6-121-rac1 dev]# /sbin/scsi_id -g -u -d /dev/sdb
[root@ol6-121-rac1 dev]# /sbin/scsi_id -g -u -d /dev/sdc
[root@ol6-121-rac1 dev]# /sbin/scsi_id -g -u -d /dev/sdd
[root@ol6-121-rac1 dev]# /sbin/scsi_id -g -u -d /dev/sde
Mine look like this:
1ATA_VBOX_HARDDISK_VBd468bcab-b01d8894
1ATA_VBOX_HARDDISK_VBc1b0c3f0-162d709a
1ATA_VBOX_HARDDISK_VB527c91e6-934cf458
1ATA_VBOX_HARDDISK_VB59bb6d05-167b1e5f
 
6. Add the SCSI IDs you just found to /etc/udev/rules.d/99-oracle-asmdevices.rules to bind the devices:
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBd468bcab-b01d8894",  NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBc1b0c3f0-162d709a",  NAME="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB527c91e6-934cf458",  NAME="asm-disk3", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB59bb6d05-167b1e5f",  NAME="asm-disk4", OWNER="oracle", GROUP="dba", MODE="0660"
 
7. Refresh the partition tables:
# /sbin/partprobe /dev/sdb1
# /sbin/partprobe /dev/sdc1
# /sbin/partprobe /dev/sdd1
# /sbin/partprobe /dev/sde1
 
8. Reload the rules and restart udev:
# /sbin/udevadm control --reload-rules
# /sbin/start_udev
 
9. Check that the ASM disks now exist:
# ls -al /dev/asm*
brw-rw---- 1 oracle dba 8, 17 Oct 12 14:39 /dev/asm-disk1
brw-rw---- 1 oracle dba 8, 33 Oct 12 14:38 /dev/asm-disk2
brw-rw---- 1 oracle dba 8, 49 Oct 12 14:39 /dev/asm-disk3
brw-rw---- 1 oracle dba 8, 65 Oct 12 14:39 /dev/asm-disk4
#

Part 6: Cloning host ol6-121-rac1 to ol6-121-rac2

1. Clone the disk:
E:\Oralce_Virtual_Box>cd ol6-121-rac1
E:\Oralce_Virtual_Box\ol6-121-rac1>VBoxManage clonehd ol6-121-rac1.vdi ol6-121-rac2.vdi
E:\Oralce_Virtual_Box\ol6-121-rac1>
 
2. In the VirtualBox Manager, click New, choose the same type, version and memory as ol6-121-rac1, and at the disk step choose "Use an existing virtual hard drive file", selecting ol6-121-rac2.vdi.
 
3. Attach the shared ASM disks to this machine as well:
VBoxManage storageattach ol6-121-rac2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1.vdi --mtype shareable
VBoxManage storageattach ol6-121-rac2 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2.vdi --mtype shareable
VBoxManage storageattach ol6-121-rac2 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3.vdi --mtype shareable
VBoxManage storageattach ol6-121-rac2 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4.vdi --mtype shareable
 
4. After starting the host, fix up the networking:
Clear the persistent network rules under /etc/udev/rules.d (i.e. 70-persistent-net.rules, so the NIC names regenerate; do not delete the 99-oracle-asmdevices.rules file created earlier).
In System > Preferences > Network Connections, remove all the existing wired connections.
Reboot the host.
Run ifconfig to get the MAC addresses of the eth0 and eth1 adapters, then go back into System > Preferences > Network Connections, add wired connections, fill in the MAC addresses you just saw, and give them IPv4 addresses analogous to those of ol6-121-rac1 (a command-line alternative is sketched below).
After restarting the network with "# service network restart" you can connect to the host. Make sure the two hosts are consistent, e.g. eth0 is the public NIC and eth1 the private NIC on both.
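
If you would rather do this from the command line than through the GUI, a rough sketch for node 2's public interface looks like the following (the MAC placeholder must be replaced with the address ifconfig reports; eth1 is handled the same way with the 192.168.1.102 private address from /etc/hosts):
# Sketch only: set node 2's public NIC statically instead of using the GUI
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<EOF
DEVICE=eth0
HWADDR=08:00:27:XX:XX:XX
BOOTPROTO=none
IPADDR=192.168.56.102
NETMASK=255.255.255.0
ONBOOT=yes
EOF
service network restart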
 
5. Check whether the pre-installation requirements are met:
[oracle@ol6-121-rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n ol6-121-rac1,ol6-121-rac2 -verbose
Three failures can be ignored: DNS, /etc/resolv.conf and swap.

Part 7: Installing the clusterware
This is a standard OUI installation; the original post walks through it with screenshots (not included here).


Heh, this is where the famous Flex Cluster makes its appearance.

Part 8: Installing the database, again following the OUI screenshots (not included here).

At this point the 12c RAC database has been installed. Enjoy.

SQL*Plus: Release 12.1.0.1.0 Production on Sat Jul 6 10:38:42 2013
 
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
 
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
 
SQL> select instance_name,status from gv$instance order by 1;
 
INSTANCE_NAME    STATUS
---------------- ------------
cdbrac1          OPEN
cdbrac2          OPEN
 
SQL>

P.S. If you hit the error ORA-00845: MEMORY_TARGET not supported on this system when starting the database, it is most likely because /dev/shm is too small. Change the following line in /etc/fstab:
tmpfs /dev/shm tmpfs defaults 0 0
to:
tmpfs /dev/shm tmpfs defaults,size=3g 0 0
Then reboot the host and restart the database.
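
If you want to avoid the reboot, the same size change can usually be applied on the fly before restarting the database (keep the /etc/fstab change so it survives the next boot):
# Grow /dev/shm immediately; adjust the size to match your MEMORY_TARGET
mount -o remount,size=3g /dev/shm
df -h /dev/shm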

12c Architecture Diagrams

A Brief Look at the 12c In-Memory Option


(1) The In-Memory Option (hereafter IMO) will be released with 12.1.0.2.

(2) The In-Memory Option will not replace TimesTen (hereafter TT), because the two are products at different layers: TT still sits in front of the database tier, tightly coupled with the application and acting as an application cache, whereas IMO lives inside the database tier and can use high-availability architectures such as RAC and Data Guard that TT cannot. For the same reason, IMO will not replace Exalytics either.

(3) IMO introduces, or rather borrows from, columnar databases: for the in-memory extents it stores the minimum and maximum values of every column, doing column storage in a way similar to Exadata's storage index on every column. Oracle calls this the In-Memory Column Store storage index.

(4) Oracle In-Memory Columnar Compression provides compression ratios of roughly 2x to 10x.

(5) Execution plans get a new operation, TABLE ACCESS INMEMORY FULL, analogous to TABLE ACCESS STORAGE FULL on Exadata.

(6) Table joins will make use of Bloom filters and hash joins.

(7) A few initialization parameters turn IMO on; in 12.1.0.2 you can see them with show parameter inmemory. For example, inmemory_query can enable or disable the feature at the system or session level. There are also several new v$im_* views for looking at the objects kept in memory, such as v$IM_SEGMENTS and v$IM_USER_SEGMENTS.
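
As a rough illustration of the previous point, assuming a 12.1.0.2 instance with a non-zero inmemory_size (the table scott.t1 is just a made-up example):
# Sketch only: enable the In-Memory column store on a sample table and check population
sqlplus / as sysdba <<'EOF'
show parameter inmemory
alter table scott.t1 inmemory;
select segment_name, populate_status from v$im_segments;
EOF
Population happens in the background, so v$im_segments may only show the segment after it has first been scanned.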

(8) For OLTP, DML works the same as before. When DML modifies data, the affected entries in the in-memory column store are marked stale and the changed rows are copied into a transaction journal. Note: the columnar data held in memory is always kept current; read consistency merges the column store contents with the transaction journal, and this merge is an online operation.

(9) The In-Memory Option may affect how quickly the database can serve its workload after a crash and restart: the row store is available immediately, but loading the data back into the in-memory column store still takes some time.

(10) The most impressive part of IMO is that it is completely transparent to the application: without changing anything at all, you get a huge efficiency gain.

12c Network Configuration


Starting with 12c, you generally need a TNS alias to log in to a PDB, so here is a record of the three main network configuration files.

listener.ora

# listener.ora Network Configuration File: /u01/ora12c/app/oracle/product/12.1.0/db_1/network/admin/listener.ora
# Generated by Oracle configuration tools.
 
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ora12c)
      (ORACLE_HOME = /u01/ora12c/app/oracle/product/12.1.0/db_1)
      (SID_NAME = ora12c)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = pdb1)
      (ORACLE_HOME = /u01/ora12c/app/oracle/product/12.1.0/db_1)
      (SID_NAME = ora12c)
    )
  )
 
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.56.132)(PORT = 1522))
  )
 
ADR_BASE_LISTENER = /u01/ora12c/app/oracle



tnsnames.ora

# tnsnames.ora Network Configuration File: /u01/ora12c/app/oracle/product/12.1.0/db_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
 
ORA12C =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.56.132)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = ora12c)
    )
  )
 
PDB1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.56.132)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = pdb1)
    )
  )



sqlnet.ora

# sqlnet.ora Network Configuration File: /u01/ora12c/app/oracle/product/12.1.0/db_1/network/admin/sqlnet.ora
# Generated by Oracle configuration tools.
 
NAMES.DIRECTORY_PATH= (TNSNAMES, HOSTNAME)
 
ADR_BASE = /u01/ora12c/app/oracle
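
With these files in place, connecting to the PDB through the listener looks roughly like this (system/manager is just a placeholder for a real username and password):
# Connect to the PDB via the tnsnames.ora alias
sqlplus system/manager@PDB1
# Or bypass tnsnames.ora entirely with EZConnect
sqlplus system/manager@//192.168.56.132:1522/pdb1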

Notes on the 12c Flex Cluster (1)


This article sat in my drafts folder for almost a year; it was simply too long, so long that I lost confidence I would ever finish it. Still, it would be a pity to let it rot, so I am splitting it up and sharing it.

About the Flex Cluster

1. There are the concepts of hub node and leaf node. At present the database can only run on hub nodes; leaf nodes are reportedly intended for more loosely coupled services, such as WebLogic.
2. A cluster can be converted between flex and standard with crsctl set cluster mode flex|standard (a sketch follows this list); CRS must be restarted for the change to take effect.
3. Hub and leaf roles can be converted into each other.
4. GNS with a fixed VIP is required.
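
A minimal sketch of point 2 above, run as root with the Grid home used in this install (treat the exact crsctl syntax as a sketch for 12.1):
# Check the current cluster mode
/u01/app/12.1.0.1/grid/bin/crsctl get cluster mode status
# Convert to flex mode (GNS must already be configured), then restart the stack for it to take effect
/u01/app/12.1.0.1/grid/bin/crsctl set cluster mode flex
/u01/app/12.1.0.1/grid/bin/crsctl stop crs
/u01/app/12.1.0.1/grid/bin/crsctl start crs -wait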

There were earlier reports that the database could be installed on a leaf node; some people even wrote it into books, the reasoning being that, thanks to Flex ASM, a database instance can use an ASM instance running on a different host. In practice, though, in the officially released version the database cannot be installed on a leaf node; if you try, the installer reports an error (screenshot not included here).

1. Installing the Flex Cluster.
Installing a Flex Cluster is actually not difficult, as long as the preparation work is done properly. The catch is that the 12c Flex Cluster requires GNS, so configuring DHCP and DNS in a virtual machine environment is a bit more of a hassle.

I set up one virtual machine as the DHCP server, and used ISC BIND on the physical host (Windows 7) for DNS resolution.

The configuration on the DHCP server VM is as follows:

[root@dhcpserver sbin]# cat /etc/dhcp/dhcpd.conf
ddns-update-style interim;
 
ignore client-updates;

## DHCP for public:
subnet 192.168.56.0 netmask 255.255.255.0
{
default-lease-time 43200;
max-lease-time 86400;
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.56.255;
option routers 192.168.56.1;
option domain-name-servers
192.168.56.3;
option domain-name "grid.localdomain";
pool
{
range 192.168.56.10 192.168.56.29;
}
}



## DHCP for private
subnet 192.168.57.0 netmask 255.255.255.0
{
default-lease-time 43200;
max-lease-time 86400;
option subnet-mask 255.255.0.0;
option broadcast-address 192.168.57.255;
pool
{
range 192.168.57.30 192.168.57.49;
}
}
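
After editing dhcpd.conf, the DHCP daemon has to be restarted and enabled on the dhcpserver VM; assuming it also runs OL6, that is roughly:
# Restart dhcpd and make it start at boot
service dhcpd restart
chkconfig dhcpd on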

The BIND configuration on Windows:

C:\Windows\SysWOW64\dns\etc>cat named.conf

options {
  directory "c:\windows\SysWOW64\dns\etc";
  forwarders {8.8.8.8; 8.8.4.4;};
  allow-transfer { none; };
};

logging{
  channel my_log{
    file "named.log" versions 3 size 2m;
    severity info;
    print-time yes;
    print-severity yes;
    print-category yes;
  };
  category default{
    my_log;
  };
};


######################################
# ADD for oracle RAC SCAN,
# START FROM HERE
######################################
zone "56.168.192.in-addr.arpa" IN {

       type master;

       file "C:\Windows\SysWOW64\dns\etc\56.168.192.in-addr.local";

       allow-update { none; };

};



zone "localdomain" IN {

       type master;

       file "C:\Windows\SysWOW64\dns\etc\localdomain.zone";

       allow-update { none; };

};

zone "grid.localdomain" IN {
type forward;
forward only;
forwarders { 192.168.56.108 ;};
};


######################################
# ADD for oracle RAC SCAN,
# END FROM HERE
######################################
C:\Windows\SysWOW64\dns\etc>
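
Before launching the grid installation it is worth checking that the Windows BIND server actually answers; a quick test from one of the RAC nodes (192.168.56.1 is the Windows host running BIND in my setup, and gns-vip.localdomain is the fixed GNS VIP that appears later in this article):
# The fixed GNS VIP should resolve even before the cluster is installed
nslookup gns-vip.localdomain 192.168.56.1
# Names in the delegated sub-domain grid.localdomain will only resolve once GNS itself is up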

On the cluster nodes, the /etc/hosts file looks like this:

[oracle@ol6-121-rac1 ~]$ cat /etc/hosts
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

127.0.0.1       localhost.localdomain   localhost
# Public
192.168.56.121   ol6-121-rac1.localdomain        ol6-121-rac1
192.168.56.122   ol6-121-rac2.localdomain        ol6-121-rac2
# Private
192.168.57.31   ol6-121-rac1-priv.localdomain   ol6-121-rac1-priv
192.168.57.32   ol6-121-rac2-priv.localdomain   ol6-121-rac2-priv
# Because we use GNS, the VIPs and SCAN VIPs are provided by GNS
# Virtual
#192.168.56.103   ol6-121-rac1-vip.localdomain    ol6-121-rac1-vip
#192.168.56.104   ol6-121-rac2-vip.localdomain    ol6-121-rac2-vip
# SCAN
#192.168.56.105   ol6-121-scan.localdomain ol6-121-scan
#192.168.56.106   ol6-121-scan.localdomain ol6-121-scan
#192.168.56.107   ol6-121-scan.localdomain ol6-121-scan
[oracle@ol6-121-rac1 ~]$

In a moment, once the installation has finished, we will come back and use ifconfig to look at the IP addresses that have been brought up.

The installation itself is quite straightforward, essentially clicking Next all the way through. My environment is a two-node cluster; during installation I made one node a hub node and the other a leaf node. The roles can be converted into each other later.

The installation steps follow the screenshots (not included here):


Note that the GNS sub-domain here is grid.localdomain, the same sub-domain we configured earlier in the DHCP setup.


Choose one hub node and one leaf node.


Note: check here that your subnets match what you planned earlier.


Note: if the ASM disks are not visible here, just change the discovery path.


Note: if the GNS environment was not set up properly beforehand, e.g. there are DHCP or DNS problems, then the root.sh script run at this step is very likely to fail.

[root@ol6-121-rac1 ~]# sh /u01/app/12.1.0.1/grid/root.sh
……
CRS-2672: Attempting to start 'ora.DATA.dg' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.DATA.dg' on 'ol6-121-rac1' succeeded
  2013/07/12 15:37:01 CLSRSC-349: The Oracle Clusterware stack failed to stop <<<<<<<<

Died at /u01/app/12.1.0.1/grid/crs/install/crsinstall.pm line 310. <<<<<<<<<<
The command '/u01/app/12.1.0.1/grid/perl/bin/perl -I/u01/app/12.1.0.1/grid/perl/lib -I/u01/app/12.1.0.1/grid/crs/install /u01/app/12.1.0.1/grid/crs/install/rootcrs.pl ' execution failed
[root@ol6-121-rac1 ~]#

The log shows the errors below. Searching MOS or the web for these messages turned up practically nothing (I don't know about today, but a year ago when I looked the material was essentially blank); only 11g cluster installations showed similar errors. The cause of the failure was the GNS configuration. So it took a fair amount of effort to set up DNS and DHCP and get GNS right before the installation went through smoothly. If you are installing a Flex Cluster too, make absolutely sure the network and GNS are configured correctly.

The error messages in the log:
……
>  CRS-2794: Shutdown of Cluster Ready Services-managed resources on 'ol6-121-rac1' has failed
>  CRS-2675: Stop of 'ora.crsd' on 'ol6-121-rac1' failed
>  CRS-2799: Failed to shut down resource 'ora.crsd' on 'ol6-121-rac1'
>  CRS-2795: Shutdown of Oracle High Availability Services-managed resources on 'ol6-121-rac1' has failed
>  CRS-4687: Shutdown command has completed with errors.
>  CRS-4000: Command Stop failed, or completed with errors.
>End Command output
2013-07-12 15:37:00: The return value of stop of CRS: 1
2013-07-12 15:37:00: Executing cmd: /u01/app/12.1.0.1/grid/bin/crsctl check crs
2013-07-12 15:37:01: Command output:
>  CRS-4638: Oracle High Availability Services is online
>  CRS-4537: Cluster Ready Services is online
>  CRS-4529: Cluster Synchronization Services is online
>  CRS-4533: Event Manager is online
>End Command output
2013-07-12 15:37:01: Executing cmd: /u01/app/12.1.0.1/grid/bin/clsecho -p has -f clsrsc -m 349
2013-07-12 15:37:01: Command output:
>  CLSRSC-349: The Oracle Clusterware stack failed to stop
>End Command output
2013-07-12 15:37:01: Executing cmd: /u01/app/12.1.0.1/grid/bin/clsecho -p has -f clsrsc -m 349
2013-07-12 15:37:01: Command output:
>  CLSRSC-349: The Oracle Clusterware stack failed to stop
>End Command output
2013-07-12 15:37:01: CLSRSC-349: The Oracle Clusterware stack failed to stop
2013-07-12 15:37:01: ###### Begin DIE Stack Trace ######
2013-07-12 15:37:01:     Package         File                 Line Calling   
2013-07-12 15:37:01:     --------------- -------------------- ---- ----------
2013-07-12 15:37:01:  1: main            rootcrs.pl            211 crsutils::dietrap
2013-07-12 15:37:01:  2: crsinstall      crsinstall.pm         310 main::__ANON__
2013-07-12 15:37:01:  3: crsinstall      crsinstall.pm         219 crsinstall::CRSInstall
2013-07-12 15:37:01:  4: main            rootcrs.pl            334 crsinstall::new
2013-07-12 15:37:01: ####### End DIE Stack Trace #######

2013-07-12 15:37:01: ROOTCRS_STACK checkpoint has failed
2013-07-12 15:37:01: Running as user oracle: /u01/app/12.1.0.1/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL
2013-07-12 15:37:01: s_run_as_user2: Running /bin/su oracle -c ' /u01/app/12.1.0.1/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL '
2013-07-12 15:37:01: Removing file /tmp/fileBn895c
2013-07-12 15:37:01: Successfully removed file: /tmp/fileBn895c
2013-07-12 15:37:01: /bin/su successfully executed

2013-07-12 15:37:01: Succeeded in writing the checkpoint:'ROOTCRS_STACK' with status:FAIL
2013-07-12 15:37:01: Running as user oracle: /u01/app/12.1.0.1/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL
2013-07-12 15:37:01: s_run_as_user2: Running /bin/su oracle -c ' /u01/app/12.1.0.1/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL '
2013-07-12 15:37:01: Removing file /tmp/fileYx9kDX
2013-07-12 15:37:01: Successfully removed file: /tmp/fileYx9kDX
2013-07-12 15:37:01: /bin/su successfully executed

2013-07-12 15:37:01: Succeeded in writing the checkpoint:'ROOTCRS_STACK' with status:FAIL

If there are no problems, however, and the run looks like the one below, then congratulations: you can go straight on to the next step and configure ASM.

--Run on node 1:
[root@ol6-121-rac1 12.1.0.1]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@ol6-121-rac1 12.1.0.1]#
[root@ol6-121-rac1 12.1.0.1]#
[root@ol6-121-rac1 12.1.0.1]# /u01/app/12.1.0.1/grid/root.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.1/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.1/grid/crs/install/crsconfig_params
2013/08/20 00:23:03 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
2013/08/20 00:24:00 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
 CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'ol6-121-rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.mdnsd' on 'ol6-121-rac1' succeeded
CRS-2676: Start of 'ora.evmd' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.gpnpd' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'ol6-121-rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'ol6-121-rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'ol6-121-rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.diskmon' on 'ol6-121-rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'ol6-121-rac1' succeeded
 
ASM created and started successfully.

Disk Group DATA created successfully.

CRS-2672: Attempting to start 'ora.storage' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.storage' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.crsd' on 'ol6-121-rac1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 7998f05a43964f77bfe185cd468262c4.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   7998f05a43964f77bfe185cd468262c4 (/dev/asm-disk1) [DATA]
Located 1 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.crsd' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.storage' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.cssd' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.gipcd' on 'ol6-121-rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'ol6-121-rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
 CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'ol6-121-rac1'
CRS-2672: Attempting to start 'ora.evmd' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.mdnsd' on 'ol6-121-rac1' succeeded
CRS-2676: Start of 'ora.evmd' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.gpnpd' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.gipcd' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'ol6-121-rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.diskmon' on 'ol6-121-rac1' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'ol6-121-rac1'
CRS-2676: Start of 'ora.cssd' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'ol6-121-rac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.ctssd' on 'ol6-121-rac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.asm' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.storage' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.crsd' on 'ol6-121-rac1' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: ol6-121-rac1
CRS-6016: Resource auto-start has completed for server ol6-121-rac1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2013/08/20 00:30:20 CLSRSC-343: Successfully started Oracle clusterware stack

 CRS-2672: Attempting to start 'ora.net1.network' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.net1.network' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.gns.vip' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.gns.vip' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.gns' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.gns' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.asm' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.DATA.dg' on 'ol6-121-rac1' succeeded
 CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'ol6-121-rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.ol6-121-rac1.vip' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.gns' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.cvu' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.cvu' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.ol6-121-rac1.vip' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.scan2.vip' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.gns' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.gns.vip' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.gns.vip' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.ons' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.net1.network' on 'ol6-121-rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'ol6-121-rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.storage' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.storage' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.cssd' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.gipcd' on 'ol6-121-rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'ol6-121-rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
 CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'ol6-121-rac1'
CRS-2672: Attempting to start 'ora.evmd' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.mdnsd' on 'ol6-121-rac1' succeeded
CRS-2676: Start of 'ora.evmd' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.gpnpd' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.gipcd' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'ol6-121-rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.diskmon' on 'ol6-121-rac1' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'ol6-121-rac1'
CRS-2676: Start of 'ora.cssd' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'ol6-121-rac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.ctssd' on 'ol6-121-rac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.asm' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.storage' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.crsd' on 'ol6-121-rac1' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: ol6-121-rac1
CRS-2672: Attempting to start 'ora.cvu' on 'ol6-121-rac1'
CRS-2672: Attempting to start 'ora.ons' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.cvu' on 'ol6-121-rac1' succeeded
CRS-2676: Start of 'ora.ons' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'ol6-121-rac1'
CRS-2672: Attempting to start 'ora.scan3.vip' on 'ol6-121-rac1'
CRS-2672: Attempting to start 'ora.scan1.vip' on 'ol6-121-rac1'
CRS-2672: Attempting to start 'ora.ol6-121-rac1.vip' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.scan1.vip' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.scan3.vip' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.scan2.vip' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.ol6-121-rac1.vip' on 'ol6-121-rac1' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'ol6-121-rac1' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'ol6-121-rac1' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'ol6-121-rac1' succeeded
CRS-6016: Resource auto-start has completed for server ol6-121-rac1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
 2013/08/20 00:37:55 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@ol6-121-rac1 12.1.0.1]#   


--Run on node 2:
[root@ol6-121-rac2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@ol6-121-rac2 ~]#       
[root@ol6-121-rac2 ~]#
[root@ol6-121-rac2 ~]#
[root@ol6-121-rac2 ~]# /u01/app/12.1.0.1/grid/root.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.1/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.1/grid/crs/install/crsconfig_params
2013/08/20 00:38:48 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful
2013/08/20 00:39:20 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
 CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'ol6-121-rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'ol6-121-rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
 CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'ol6-121-rac2'
CRS-2672: Attempting to start 'ora.evmd' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.mdnsd' on 'ol6-121-rac2' succeeded
CRS-2676: Start of 'ora.evmd' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.gpnpd' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.gipcd' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'ol6-121-rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.diskmon' on 'ol6-121-rac2' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'ol6-121-rac2'
CRS-2676: Start of 'ora.cssd' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'ol6-121-rac2'
CRS-2672: Attempting to start 'ora.ctssd' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'ol6-121-rac2' succeeded
CRS-2676: Start of 'ora.ctssd' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.storage' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.crsd' on 'ol6-121-rac2' succeeded
CRS-6017: Processing resource auto-start for servers: ol6-121-rac2
CRS-6016: Resource auto-start has completed for server ol6-121-rac2
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2013/08/20 00:43:36 CLSRSC-343: Successfully started Oracle clusterware stack

2013/08/20 00:43:41 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@ol6-121-rac2 ~]#     
Broadcast message from root@ol6-121-rac2


For the ASM configuration no parameters need to be changed; since node 2 is a leaf node, no ASM instance will be started on it.

Once the ASM configuration is finished, the Flex Cluster installation is complete.

OK, the Flex Cluster is installed. Before going on to install the database, let's use ifconfig to look at the current network situation and the related resources.
Node 1 is the hub node; on it we see:

[oracle@ol6-121-rac1 ~]$ ifconfig -a
eth0      Link encap:Ethernet  HWaddr 08:00:27:06:72:4B 
          inet addr:192.168.56.121  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe06:724b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1204 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1211 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:166853 (162.9 KiB)  TX bytes:208611 (203.7 KiB)

eth0:1    Link encap:Ethernet  HWaddr 08:00:27:06:72:4B 
          inet addr:192.168.56.108  Bcast:192.168.56.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth0:2    Link encap:Ethernet  HWaddr 08:00:27:06:72:4B 
          inet addr:192.168.56.12  Bcast:192.168.56.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth0:3    Link encap:Ethernet  HWaddr 08:00:27:06:72:4B 
          inet addr:192.168.56.14  Bcast:192.168.56.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth0:4    Link encap:Ethernet  HWaddr 08:00:27:06:72:4B 
          inet addr:192.168.56.13  Bcast:192.168.56.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth0:5    Link encap:Ethernet  HWaddr 08:00:27:06:72:4B 
          inet addr:192.168.56.11  Bcast:192.168.56.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth1      Link encap:Ethernet  HWaddr 08:00:27:4D:0D:02 
          inet addr:192.168.57.31  Bcast:192.168.57.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe4d:d02/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1147 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1858 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:523883 (511.6 KiB)  TX bytes:1344884 (1.2 MiB)

eth1:1    Link encap:Ethernet  HWaddr 08:00:27:4D:0D:02 
          inet addr:169.254.2.250  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:1871 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1871 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2288895 (2.1 MiB)  TX bytes:2288895 (2.1 MiB)

[oracle@ol6-121-rac1 ~]$ crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora....SM.lsnr ora....er.type ONLINE    ONLINE    ol6-...rac1
ora.DATA.dg    ora....up.type ONLINE    ONLINE    ol6-...rac1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    ol6-...rac1
ora....AF.lsnr ora....er.type ONLINE    ONLINE    ol6-...rac2
ora....N1.lsnr ora....er.type ONLINE    ONLINE    ol6-...rac1
ora....N2.lsnr ora....er.type ONLINE    ONLINE    ol6-...rac1
ora....N3.lsnr ora....er.type ONLINE    ONLINE    ol6-...rac1
ora.asm        ora.asm.type   ONLINE    ONLINE    ol6-...rac1
ora.cvu        ora.cvu.type   ONLINE    ONLINE    ol6-...rac1
ora.gns        ora.gns.type   ONLINE    ONLINE    ol6-...rac1
ora.gns.vip    ora....ip.type ONLINE    ONLINE    ol6-...rac1
ora....network ora....rk.type ONLINE    ONLINE    ol6-...rac1
ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE               
ora....C1.lsnr application    ONLINE    ONLINE    ol6-...rac1
ora....ac1.ons application    ONLINE    ONLINE    ol6-...rac1
ora....ac1.vip ora....t1.type ONLINE    ONLINE    ol6-...rac1
ora.ons        ora.ons.type   ONLINE    ONLINE    ol6-...rac1
ora.proxy_advm ora....vm.type ONLINE    ONLINE    ol6-...rac1
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    ol6-...rac1
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    ol6-...rac1
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    ol6-...rac1
[oracle@ol6-121-rac1 ~]$ ps -ef |grep asm
root      2416     2  0 22:45 ?        00:00:00 [asmWorkerThread]
root      2417     2  0 22:45 ?        00:00:00 [asmWorkerThread]
root      2418     2  0 22:45 ?        00:00:00 [asmWorkerThread]
root      2419     2  0 22:45 ?        00:00:00 [asmWorkerThread]
root      2420     2  0 22:45 ?        00:00:00 [asmWorkerThread]
oracle    2746     1  0 22:46 ?        00:00:00 asm_pmon_+ASM1
oracle    2748     1  0 22:46 ?        00:00:00 asm_psp0_+ASM1
oracle    2750     1  5 22:46 ?        00:00:22 asm_vktm_+ASM1
oracle    2754     1  0 22:46 ?        00:00:00 asm_gen0_+ASM1
oracle    2756     1  0 22:46 ?        00:00:00 asm_mman_+ASM1
oracle    2760     1  0 22:46 ?        00:00:00 asm_diag_+ASM1
oracle    2762     1  0 22:46 ?        00:00:00 asm_ping_+ASM1
oracle    2764     1  0 22:46 ?        00:00:00 asm_dia0_+ASM1
oracle    2766     1  0 22:46 ?        00:00:00 asm_lmon_+ASM1
oracle    2768     1  0 22:46 ?        00:00:00 asm_lmd0_+ASM1
oracle    2770     1  0 22:46 ?        00:00:02 asm_lms0_+ASM1
oracle    2774     1  0 22:46 ?        00:00:00 asm_lmhb_+ASM1
oracle    2776     1  0 22:46 ?        00:00:00 asm_lck1_+ASM1
oracle    2778     1  0 22:46 ?        00:00:00 asm_gcr0_+ASM1
oracle    2780     1  0 22:46 ?        00:00:00 asm_dbw0_+ASM1
oracle    2782     1  0 22:46 ?        00:00:00 asm_lgwr_+ASM1
oracle    2784     1  0 22:46 ?        00:00:00 asm_ckpt_+ASM1
oracle    2786     1  0 22:46 ?        00:00:00 asm_smon_+ASM1
oracle    2788     1  0 22:46 ?        00:00:00 asm_lreg_+ASM1
oracle    2790     1  0 22:46 ?        00:00:00 asm_rbal_+ASM1
oracle    2792     1  0 22:46 ?        00:00:00 asm_gmon_+ASM1
oracle    2794     1  0 22:46 ?        00:00:00 asm_mmon_+ASM1
oracle    2796     1  0 22:46 ?        00:00:00 asm_mmnl_+ASM1
oracle    2798     1  0 22:46 ?        00:00:00 asm_lck0_+ASM1
oracle    2886     1  0 22:47 ?        00:00:00 asm_asmb_+ASM1
oracle    2888     1  0 22:47 ?        00:00:00 oracle+ASM1_asmb_+asm1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle    6606  3162  0 22:54 pts/0    00:00:00 grep asm
[oracle@ol6-121-rac1 ~]$ ps -ef |grep tns
root        10     2  0 22:44 ?        00:00:00 [netns]
oracle    3146     1  0 22:47 ?        00:00:00 /u01/app/12.1.0.1/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
oracle    3156     1  0 22:47 ?        00:00:00 /u01/app/12.1.0.1/grid/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
oracle    3187     1  0 22:47 ?        00:00:00 /u01/app/12.1.0.1/grid/bin/tnslsnr LISTENER_SCAN2 -no_crs_notify -inherit
oracle    3199     1  0 22:47 ?        00:00:00 /u01/app/12.1.0.1/grid/bin/tnslsnr LISTENER_SCAN3 -no_crs_notify -inherit
oracle    3237     1  0 22:47 ?        00:00:00 /u01/app/12.1.0.1/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
oracle    6608  3162  0 22:54 pts/0    00:00:00 grep tns
[oracle@ol6-121-rac1 ~]$
[oracle@ol6-121-rac1 ~]$ ps -ef |grep ohas
root      1162     1  0 22:45 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
root      2061     1  1 22:45 ?        00:00:09 /u01/app/12.1.0.1/grid/bin/ohasd.bin reboot
oracle    6611  3162  0 22:54 pts/0    00:00:00 grep ohas
[oracle@ol6-121-rac1 ~]$

Node 2 is the leaf node; on it we see:

[oracle@ol6-121-rac2 ~]$ ifconfig -a
eth0      Link encap:Ethernet  HWaddr 08:00:27:8E:CE:20 
          inet addr:192.168.56.122  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe8e:ce20/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1049 errors:0 dropped:0 overruns:0 frame:0
          TX packets:854 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:157826 (154.1 KiB)  TX bytes:135961 (132.7 KiB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:3B:C4:0A 
          inet addr:192.168.57.32  Bcast:192.168.57.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe3b:c40a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1886 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1242 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1353953 (1.2 MiB)  TX bytes:543170 (530.4 KiB)

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:408 (408.0 b)  TX bytes:408 (408.0 b)

[oracle@ol6-121-rac2 ~]$ crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora....SM.lsnr ora....er.type ONLINE    ONLINE    ol6-...rac1
ora.DATA.dg    ora....up.type ONLINE    ONLINE    ol6-...rac1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    ol6-...rac1
ora....AF.lsnr ora....er.type ONLINE    ONLINE    ol6-...rac2
ora....N1.lsnr ora....er.type ONLINE    ONLINE    ol6-...rac1
ora....N2.lsnr ora....er.type ONLINE    ONLINE    ol6-...rac1
ora....N3.lsnr ora....er.type ONLINE    ONLINE    ol6-...rac1
ora.asm        ora.asm.type   ONLINE    ONLINE    ol6-...rac1
ora.cvu        ora.cvu.type   ONLINE    ONLINE    ol6-...rac1
ora.gns        ora.gns.type   ONLINE    ONLINE    ol6-...rac1
ora.gns.vip    ora....ip.type ONLINE    ONLINE    ol6-...rac1
ora....network ora....rk.type ONLINE    ONLINE    ol6-...rac1
ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE               
ora....C1.lsnr application    ONLINE    ONLINE    ol6-...rac1
ora....ac1.ons application    ONLINE    ONLINE    ol6-...rac1
ora....ac1.vip ora....t1.type ONLINE    ONLINE    ol6-...rac1
ora.ons        ora.ons.type   ONLINE    ONLINE    ol6-...rac1
ora.proxy_advm ora....vm.type ONLINE    ONLINE    ol6-...rac1
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    ol6-...rac1
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    ol6-...rac1
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    ol6-...rac1
[oracle@ol6-121-rac2 ~]$ ps -ef |grep asm
oracle    2663  2566  0 22:55 pts/0    00:00:00 grep asm
[oracle@ol6-121-rac2 ~]$ ps -ef |grep tns
root        10     2  0 22:46 ?        00:00:00 [netns]
oracle    2641     1  0 22:53 ?        00:00:00 /u01/app/12.1.0.1/grid/bin/tnslsnr LISTENER_LEAF -no_crs_notify -inherit
oracle    2666  2566  0 22:55 pts/0    00:00:00 grep tns
[oracle@ol6-121-rac2 ~]$ ps -ef |grep ohas
root      1124     1  0 22:46 ?        00:00:00 /bin/sh /etc/init.d/init.ohasd run
root      2058     1  2 22:47 ?        00:00:14 /u01/app/12.1.0.1/grid/bin/ohasd.bin reboot
oracle    2668  2566  0 22:55 pts/0    00:00:00 grep ohas
[oracle@ol6-121-rac2 ~]$

Notes on the 12c Flex Cluster (2)


With the cluster installed, let's install the database. As I said in the previous post, installing the database on a leaf node is not allowed (at least for now), so we can only install it on hub nodes.

On a hub node you can install a single-instance database, RAC One Node, or RAC. Here I want a two-node RAC. But at the moment I only have two nodes, one hub and one leaf, and RAC must run on hub nodes, so I need to convert the leaf node into a hub node before installing the two-node RAC database.

Let's first run through the usual checks with some common commands:

1. The VIPs are running normally on the two nodes, with addresses 192.168.56.11 and 192.168.56.19, both assigned by DHCP; earlier we configured the range 192.168.56.10 to 192.168.56.29.
[oracle@ol6-121-rac1 ~]$ srvctl status vip -node ol6-121-rac1
VIP 192.168.56.11 is enabled
VIP 192.168.56.11 is running on node: ol6-121-rac1

[oracle@ol6-121-rac1 ~]$ srvctl status vip -node ol6-121-rac2
VIP 192.168.56.19 is enabled
VIP 192.168.56.19 is running on node: ol6-121-rac2

2. SCAN is running normally; two SCAN IPs are on node 1 and one SCAN IP is on node 2.
[oracle@ol6-121-rac1 ~]$ srvctl status scan -all
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node ol6-121-rac2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node ol6-121-rac1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node ol6-121-rac1

3. DNS resolves the SCAN name to 192.168.56.12, 192.168.56.13 and 192.168.56.14.
[oracle@ol6-121-rac1 ~]$ nslookup ol6-121-cluster-scan.grid.localdomain
Server:         192.168.56.1
Address:        192.168.56.1#53

Non-authoritative answer:
Name:   ol6-121-cluster-scan.grid.localdomain
Address: 192.168.56.13
Name:   ol6-121-cluster-scan.grid.localdomain
Address: 192.168.56.12
Name:   ol6-121-cluster-scan.grid.localdomain
Address: 192.168.56.14

4. GNS is running normally, on node 1.
[oracle@ol6-121-rac1 ~]$ srvctl status gns -v
GNS is running on node ol6-121-rac1.
GNS is enabled on node ol6-121-rac1.

5. The GNS VIP is 192.168.56.108, a fixed address; as I explained in point 4 of the previous Flex Cluster post, the GNS VIP has to be fixed.
[oracle@ol6-121-rac1 ~]$ nslookup gns-vip.localdomain
Server:         192.168.56.1
Address:        192.168.56.1#53

Name:   gns-vip.localdomain
Address: 192.168.56.108

6. A more detailed check of how GNS is running:
[root@ol6-121-rac1 ~]# srvctl config gns -a -l
GNS is enabled.
GNS is listening for DNS server requests on port 53
GNS is using port 5,353 to connect to mDNS
GNS status: OK
Domain served by GNS: grid.localdomain
GNS version: 12.1.0.1.0
Globally unique identifier of the cluster where GNS is running: 3d52171d97fddf8cff384db43f6d1459
Name of the cluster where GNS is running: ol6-121-cluster
Cluster type: server.
GNS log level: 1.
GNS listening addresses: tcp://192.168.56.108:38182.
[root@ol6-121-rac1 ~]#

As for the state of the other resources, such as the full output of crsctl stat res -p, it is too long, so I attached it at the end of that article.

Now let's start the leaf-to-hub conversion. Note that the cluster must be running in flex mode and ASM must also be running in flex mode.

Let's first check whether both are in flex mode:

--Check the cluster:
[oracle@ol6-121-rac1 ~]$ crsctl get cluster mode status
Cluster is running in "flex" mode

--Check ASM:
[oracle@ol6-121-rac1 ~]$ asmcmd
ASMCMD> showclustermode
ASM cluster : Flex mode enabled
ASMCMD>

Now let's look at the role each node is currently configured with; we can see that node 2 is a leaf node:

--Node 1
[oracle@ol6-121-rac1 ~]$ crsctl get node role config
Node 'ol6-121-rac1' configured role is 'hub'

--Node 2
[root@ol6-121-rac2 ~]# crsctl get node role config
Node 'ol6-121-rac2' configured role is 'leaf'

Start the conversion: we only need to convert node 2 into a hub node.

There are really only two steps: first, crsctl set node role hub; second, restart CRS.

Here we go, LEAF -> HUB:

--Set the cluster role of the current node to hub:
[root@ol6-121-rac2 ~]# crsctl set node role hub
CRS-4408: Node 'ol6-121-rac2' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.

--Restart CRS, stopping it first. Note that restarting CRS must be done as root.
[root@ol6-121-rac2 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.crsd' on 'ol6-121-rac2' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.storage' on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'ol6-121-rac2' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'ol6-121-rac2' succeeded
CRS-2677: Stop of 'ora.storage' on 'ol6-121-rac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'ol6-121-rac2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'ol6-121-rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'ol6-121-rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'ol6-121-rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.cssd' on 'ol6-121-rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.gipcd' on 'ol6-121-rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'ol6-121-rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.

--Once the stop has completed, start CRS again:
[root@ol6-121-rac2 ~]# crsctl start crs -wait
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'ol6-121-rac2'
CRS-2672: Attempting to start 'ora.evmd' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.evmd' on 'ol6-121-rac2' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.gpnpd' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.gipcd' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'ol6-121-rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.diskmon' on 'ol6-121-rac2' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'ol6-121-rac2'
CRS-2676: Start of 'ora.cssd' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'ol6-121-rac2'
CRS-2672: Attempting to start 'ora.ctssd' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.ctssd' on 'ol6-121-rac2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.asm' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.storage' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.crsd' on 'ol6-121-rac2' succeeded
CRS-6017: Processing resource auto-start for servers: ol6-121-rac2
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'ol6-121-rac1'
CRS-2673: Attempting to stop 'ora.ol6-121-rac2.vip' on 'ol6-121-rac1'
CRS-2672: Attempting to start 'ora.ons' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'ol6-121-rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.ol6-121-rac2.vip' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.ol6-121-rac2.vip' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.scan1.vip' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.scan1.vip' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.ol6-121-rac2.vip' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'ol6-121-rac2' succeeded
CRS-2676: Start of 'ora.ons' on 'ol6-121-rac2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'ol6-121-rac2' succeeded
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.asm' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.proxy_advm' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.proxy_advm' on 'ol6-121-rac2' succeeded
CRS-6016: Resource auto-start has completed for server ol6-121-rac2
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
[root@ol6-121-rac2 ~]# 

--检查集群role是否已经转换过来:
[root@ol6-121-rac2 ~]# crsctl get node role config
Node 'ol6-121-rac2' configured role is 'hub'
[root@ol6-121-rac2 ~]#

看到node2现在已经是hub node。我们可以装RAC DB了。

安装DB的过程，我会另外再写一篇。这里先讲一下如何把hub node转回leaf node。

其实很简单，步骤也是2步：第一，crsctl set node role leaf；第二，重启crs。

我们开始HUB -> LEAF:

--检查当前各个节点的role:
[oracle@ol6-121-rac1 ~]$ crsctl get node role config
Node 'ol6-121-rac1' configured role is 'hub'
[oracle@ol6-121-rac2 ~]$ crsctl get node role config
Node 'ol6-121-rac2' configured role is 'hub'

--将节点2转成leaf node
[root@ol6-121-rac2 ~]# crsctl set node role leaf
CRS-4408: Node 'ol6-121-rac2' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.

--重启crs,先stop crs
[root@ol6-121-rac2 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'ol6-121-rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'ol6-121-rac2' succeeded
CRS-2673: Attempting to stop 'ora.ol6-121-rac2.vip' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'ol6-121-rac2' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.ol6-121-rac2.vip' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.ol6-121-rac2.vip' on 'ol6-121-rac1'
CRS-2677: Stop of 'ora.scan1.vip' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.scan1.vip' on 'ol6-121-rac1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'ol6-121-rac1'
CRS-2676: Start of 'ora.ol6-121-rac2.vip' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.proxy_advm' on 'ol6-121-rac2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'ol6-121-rac1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'ol6-121-rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.asm' on 'ol6-121-rac2' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'ol6-121-rac2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.ons' on 'ol6-121-rac2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.net1.network' on 'ol6-121-rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'ol6-121-rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'ol6-121-rac2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.storage' on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'ol6-121-rac2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.storage' on 'ol6-121-rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'ol6-121-rac2' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'ol6-121-rac2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'ol6-121-rac2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'ol6-121-rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'ol6-121-rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'ol6-121-rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'ol6-121-rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.cssd' on 'ol6-121-rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'ol6-121-rac2'
CRS-2677: Stop of 'ora.gipcd' on 'ol6-121-rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'ol6-121-rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.

--stop完成后，再start crs
[root@ol6-121-rac2 ~]# crsctl start crs -wait
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'ol6-121-rac2'
CRS-2672: Attempting to start 'ora.evmd' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.mdnsd' on 'ol6-121-rac2' succeeded
CRS-2676: Start of 'ora.evmd' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.gpnpd' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.gipcd' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'ol6-121-rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.diskmon' on 'ol6-121-rac2' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'ol6-121-rac2'
CRS-2676: Start of 'ora.cssd' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'ol6-121-rac2'
CRS-2672: Attempting to start 'ora.ctssd' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'ol6-121-rac2' succeeded
CRS-2676: Start of 'ora.ctssd' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.storage' on 'ol6-121-rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'ol6-121-rac2'
CRS-2676: Start of 'ora.crsd' on 'ol6-121-rac2' succeeded
CRS-6017: Processing resource auto-start for servers: ol6-121-rac2
CRS-6016: Resource auto-start has completed for server ol6-121-rac2
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
[root@ol6-121-rac2 ~]#
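--转换回leaf之后，同样建议再确认一下节点角色（以下为示意命令，预期configured role显示为'leaf'）：
[root@ol6-121-rac2 ~]# crsctl get node role config
[root@ol6-121-rac2 ~]# crsctl get node role status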





附:crsctl stat res -p的输出,所有的资源情况:

[oracle@ol6-121-rac1 ~]$ cat  /tmp/111.txt
NAME=ora.ASMNET1LSNR_ASM.lsnr
TYPE=ora.asm_listener.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=%CRS_HOME%/bin/racgwrap%CRS_SCRIPT_SUFFIX%
ACTION_TIMEOUT=60
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=restore
CHECK_INTERVAL=60
CHECK_TIMEOUT=120
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=listener) PROPERTY(LISTENER_NAME=PARSE(%NAME%, ., 2))
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle ASM Listener resource
ENABLED=1
ENDPOINTS=TCP:1521
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
ORACLE_HOME=%CRS_HOME%
PORT=1521
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=ora.hub.category
START_CONCURRENCY=0
START_DEPENDENCIES=weak(global:ora.gns)
START_TIMEOUT=180
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=
STOP_TIMEOUT=0
SUBNET=192.168.57.0
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1d
USER_WORKLOAD=no
USR_ORA_ENV=
USR_ORA_OPI=false
VERSION=12.1.0.1.0

NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=never
CHECK_INTERVAL=300
CHECK_TIMEOUT=30
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=CRS resource type definition for ASM disk group resource
ENABLED=1
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=ora.hub.category
START_CONCURRENCY=0
START_DEPENDENCIES=pullup:always(ora.asm) hard(ora.asm)
START_TIMEOUT=900
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.asm)
STOP_TIMEOUT=180
TYPE_VERSION=1.2
UPTIME_THRESHOLD=1d
USER_WORKLOAD=no
USR_ORA_ENV=
USR_ORA_OPI=false
USR_ORA_STOP_MODE=
VERSION=12.1.0.1.0

NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=%CRS_HOME%/bin/racgwrap%CRS_SCRIPT_SUFFIX%
ACTION_TIMEOUT=60
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=ora.%CRS_CSS_NODENAME_LOWER_CASE%.LISTENER_%CRS_CSS_NODENAME_UPPER_CASE%.lsnr
AUTO_START=restore
CHECK_INTERVAL=60
CHECK_TIMEOUT=120
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=listener) PROPERTY(LISTENER_NAME=PARSE(%NAME%, ., 2))
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle Listener resource
ENABLED=1
ENDPOINTS=TCP:1521
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
ORACLE_HOME=%CRS_HOME%
PORT=1521
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=ora.hub.category
START_CONCURRENCY=0
START_DEPENDENCIES=hard(type:ora.cluster_vip_net1.type) pullup(type:ora.cluster_vip_net1.type)
START_TIMEOUT=180
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:type:ora.cluster_vip_net1.type)
STOP_TIMEOUT=0
TYPE_VERSION=1.2
UPTIME_THRESHOLD=1d
USER_WORKLOAD=no
USR_ORA_ENV=
USR_ORA_OPI=false
VERSION=12.1.0.1.0

NAME=ora.LISTENER_LEAF.lsnr
TYPE=ora.leaf_listener.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=%CRS_HOME%/bin/racgwrap%CRS_SCRIPT_SUFFIX%
ACTION_TIMEOUT=60
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=never
CHECK_INTERVAL=60
CHECK_TIMEOUT=120
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=listener) PROPERTY(LISTENER_NAME=PARSE(%NAME%, ., 2))
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle LEAF Listener resource
ENABLED=1
ENDPOINTS=TCP:1521
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
ORACLE_HOME=%CRS_HOME%
PORT=1521
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=ora.leaf.category
START_CONCURRENCY=0
START_DEPENDENCIES=
START_TIMEOUT=180
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=
STOP_TIMEOUT=0
SUBNET=
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1d
USER_WORKLOAD=no
USR_ORA_ENV=
USR_ORA_OPI=false
VERSION=12.1.0.1.0

NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
ACL=owner:oracle:rwx,pgrp:oinstall:r-x,other::r--
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
CHECK_TIMEOUT=120
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=scan_listener) PROPERTY(LISTENER_NAME=PARSE(%NAME%, ., 2))
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle SCAN listener resource
ENABLED=1
ENDPOINTS=TCP:1521
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NETNUM=1
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PORT=1521
PROFILE_CHANGE_TEMPLATE=
REGISTRATION_INVITED_NODES=
REGISTRATION_INVITED_SUBNETS=
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.scan1.vip) dispersion:active(type:ora.scan_listener.type) pullup(ora.scan1.vip)
START_TIMEOUT=180
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.scan1.vip)
STOP_TIMEOUT=0
TYPE_VERSION=2.2
UPTIME_THRESHOLD=1d
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
USR_ORA_OPI=false
VERSION=12.1.0.1.0

NAME=ora.LISTENER_SCAN2.lsnr
TYPE=ora.scan_listener.type
ACL=owner:oracle:rwx,pgrp:oinstall:r-x,other::r--
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
CHECK_TIMEOUT=120
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=scan_listener) PROPERTY(LISTENER_NAME=PARSE(%NAME%, ., 2))
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle SCAN listener resource
ENABLED=1
ENDPOINTS=TCP:1521
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NETNUM=1
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PORT=1521
PROFILE_CHANGE_TEMPLATE=
REGISTRATION_INVITED_NODES=
REGISTRATION_INVITED_SUBNETS=
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.scan2.vip) dispersion:active(type:ora.scan_listener.type) pullup(ora.scan2.vip)
START_TIMEOUT=180
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.scan2.vip)
STOP_TIMEOUT=0
TYPE_VERSION=2.2
UPTIME_THRESHOLD=1d
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
USR_ORA_OPI=false
VERSION=12.1.0.1.0

NAME=ora.LISTENER_SCAN3.lsnr
TYPE=ora.scan_listener.type
ACL=owner:oracle:rwx,pgrp:oinstall:r-x,other::r--
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
CHECK_TIMEOUT=120
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=scan_listener) PROPERTY(LISTENER_NAME=PARSE(%NAME%, ., 2))
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle SCAN listener resource
ENABLED=1
ENDPOINTS=TCP:1521
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NETNUM=1
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PORT=1521
PROFILE_CHANGE_TEMPLATE=
REGISTRATION_INVITED_NODES=
REGISTRATION_INVITED_SUBNETS=
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.scan3.vip) dispersion:active(type:ora.scan_listener.type) pullup(ora.scan3.vip)
START_TIMEOUT=180
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.scan3.vip)
STOP_TIMEOUT=0
TYPE_VERSION=2.2
UPTIME_THRESHOLD=1d
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
USR_ORA_OPI=false
VERSION=12.1.0.1.0

NAME=ora.asm
TYPE=ora.asm.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=always
CARDINALITY=3
CHECK_INTERVAL=60
CHECK_TIMEOUT=30
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=CRS resource type for the Cluster ASM instance
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
GEN_USR_ORA_INST_NAME=
GEN_USR_ORA_INST_NAME@SERVERNAME(ol6-121-rac1)=+ASM1
GEN_USR_ORA_INST_NAME@SERVERNAME(ol6-121-rac2)=+ASM2
HOSTING_MEMBERS=
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PRESENCE=flex
PROFILE_CHANGE_TEMPLATE=
PWFILE=+DATA/orapwASM
RELOCATE_BY_DEPENDENCY=0
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=weak(ora.LISTENER.lsnr) pullup(ora.ASMNET1LSNR_ASM.lsnr) hard(ora.ASMNET1LSNR_ASM.lsnr)
START_TIMEOUT=900
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.ASMNET1LSNR_ASM.lsnr)
STOP_TIMEOUT=600
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1h
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
USR_ORA_INST_NAME=+ASM%CRS_CSS_NODENUMBER%
USR_ORA_OPEN_MODE=mount
USR_ORA_OPI=false
USR_ORA_STOP_MODE=immediate
VERSION=12.1.0.1.0

NAME=ora.cdbrac.db
TYPE=ora.database.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=restore
CARDINALITY=%CRS_SERVER_POOL_SIZE%
CHECK_INTERVAL=1
CHECK_TIMEOUT=30
CLEAN_TIMEOUT=60
CLUSTER_DATABASE=true
DATABASE_TYPE=RAC
DB_UNIQUE_NAME=cdbrac
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%) ELEMENT(DATABASE_TYPE= %DATABASE_TYPE%)
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle Database resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=60
FAILURE_THRESHOLD=1
GEN_AUDIT_FILE_DEST=/u01/app/oracle/admin/cdbrac/adump
GEN_START_OPTIONS=
GEN_USR_ORA_INST_NAME=
GEN_USR_ORA_INST_NAME@SERVERNAME(ol6-121-rac1)=cdbrac_1
GEN_USR_ORA_INST_NAME@SERVERNAME(ol6-121-rac2)=cdbrac_2
HOSTING_MEMBERS=
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MANAGEMENT_POLICY=AUTOMATIC
MODIFY_TIMEOUT=60
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
ONLINE_RELOCATION_TIMEOUT=0
ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/db_1
ORACLE_HOME_OLD=
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
PWFILE=+DATA/cdbrac/orapwcdbrac
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=2
ROLE=PRIMARY
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=
SERVER_POOLS=ora.myservpool
SERVER_POOLS_PQ=
SPFILE=+DATA/cdbrac/spfilecdbrac.ora
START_CONCURRENCY=0
START_DEPENDENCIES=weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,global:ora.gns,global:uniform:ora.DATA.dg)
START_TIMEOUT=600
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(global:intermediate:ora.asm,global:shutdown:ora.DATA.dg)
STOP_TIMEOUT=600
TYPE_VERSION=3.3
UPTIME_THRESHOLD=1h
USER_WORKLOAD=yes
USE_STICKINESS=0
USR_ORA_DB_NAME=cdbrac
USR_ORA_DOMAIN=
USR_ORA_ENV=
USR_ORA_FLAGS=
USR_ORA_INST_NAME=
USR_ORA_OPEN_MODE=open
USR_ORA_OPI=false
USR_ORA_STOP_MODE=immediate
VERSION=12.1.0.1.0

NAME=ora.cvu
TYPE=ora.cvu.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
CHECK_RESULTS=PRVG-1101 : SCAN name "ol6-121-cluster-scan.grid.localdomain" failed to resolve,PRVG-1101 : SCAN name "ol6-121-cluster-scan.grid.localdomain" failed to resolve,PRVF-5218 : "ol6-121-rac2-vip.grid.localdomain" did not resolve into any IP address,PRVF-5218 : "ol6-121-rac1-vip.grid.localdomain" did not resolve into any IP address,PRVG-6056 : Insufficient ASM instances found.  Expected 2 but found 1, on nodes "ol6-121-rac1".,PRVF-7530 : Sufficient physical memory is not available on node "ol6-121-rac2" [Required physical memory = 4GB (4194304.0KB)],PRVF-7573 : Sufficient swap size is not available on node "ol6-121-rac2" [Required = 2.8759GB (3015556.0KB) ; Found = 1.9687GB (2064380.0KB)],PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: ol6-121-rac2,ol6-121-rac1,Check for integrity of file "/etc/resolv.conf" failed,,PRVF-4557 : Node application "ora.ol6-121-rac2.vip" is offline on node "ol6-121-rac2",PRVF-5827 : The response time for name lookup for name "ol6-121-rac2-vip.grid.localdomain" exceeded 15 seconds,PRVF-5827 : The response time for name lookup for name "ol6-121-rac1-vip.grid.localdomain" exceeded 15 seconds,PRCQ-1000 : An error occurred while establishing connection to database with user name "DBSNMP" and connect descriptor:,(DESCRIPTION = (LOAD_BALANCE=on)  (ADDRESS = (PROTOCOL = TCP)(HOST = ol6-121-cluster-scan.grid.localdomain)(PORT = 1521)) (CONNECT_DATA =(SERVER = DEDICATED)(SERVICE_NAME = cdbrac))),IO Error: The Network Adapter could not establish the connection,PRVF-7530 : Sufficient physical memory is not available on node "ol6-121-rac1" [Required physical memory = 4GB (4194304.0KB)],PRVF-7573 : Sufficient swap size is not available on node "ol6-121-rac1" [Required = 2.8759GB (3015556.0KB) ; Found = 1.9687GB (2064380.0KB)],PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: ol6-121-rac2,ol6-121-rac1,Check for integrity of file "/etc/resolv.conf" failed,,PRVF-5827 : The response time for name lookup for name "ol6-121-rac2-vip.grid.localdomain" exceeded 15 seconds,PRVF-5827 : The response time for name lookup for name "ol6-121-rac1-vip.grid.localdomain" exceeded 15 seconds,PRCQ-1000 : An error occurred while establishing connection to database with user name "DBSNMP" and connect descriptor:,(DESCRIPTION = (LOAD_BALANCE=on)  (ADDRESS = (PROTOCOL = TCP)(HOST = ol6-121-cluster-scan.grid.localdomain)(PORT = 1521)) (CONNECT_DATA =(SERVER = DEDICATED)(SERVICE_NAME = cdbrac))),IO Error: The Network Adapter could not establish the connection
CHECK_TIMEOUT=600
CLEAN_TIMEOUT=60
CRSHOME_SPACE_ALERT_STATE=OFF
DEFAULT_TEMPLATE=
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle CVU resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NEXT_CHECK_TIME=4315938320
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=5
RUN_INTERVAL=21600
SCRIPT_TIMEOUT=30
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network)
START_TIMEOUT=0
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(ora.net1.network)
STOP_TIMEOUT=0
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1h
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
VERSION=12.1.0.1.0

NAME=ora.gns
TYPE=ora.gns.type
ACL=owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:oracle:r-x
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
CHECK_TIMEOUT=30
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=gns)
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=CRS resource type definition for Grid Naming Service
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
PROPERTIES=
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.gns.vip) pullup(ora.gns.vip)
START_TIMEOUT=600
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.gns.vip)
STOP_TIMEOUT=600
SUBDOMAIN=grid.localdomain
TRACE_LEVEL=0
TYPE_VERSION=2.2
UPTIME_THRESHOLD=1d
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
VERSION=12.1.0.1.0

NAME=ora.gns.vip
TYPE=ora.gns_vip.type
ACL=owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:oracle:r-x
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=1
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=1
CHECK_TIMEOUT=120
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=gns_vip) ELEMENT(HOSTING_MEMBERS=%HOSTING_MEMBERS%)
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=CRS resource type definition for Clusterware GNS VIP
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
GEN_USR_ORA_STATIC_VIP=
GEN_USR_ORA_VIP=
HOSTING_MEMBERS=
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=0
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network)
START_TIMEOUT=600
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.net1.network)
STOP_TIMEOUT=0
TYPE_VERSION=2.2
UPTIME_THRESHOLD=1h
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
USR_ORA_VIP=192.168.56.108
VERSION=12.1.0.1.0

NAME=ora.net1.network
TYPE=ora.network.type
ACL=owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:oracle:r-x
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ADDRESS_TYPE=IPV4
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=restore
CHECK_INTERVAL=1
CHECK_TIMEOUT=0
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle Network resource
ENABLED=1
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=60
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=ora.hub.category
START_CONCURRENCY=0
START_DEPENDENCIES=
START_TIMEOUT=0
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=
STOP_TIMEOUT=0
TYPE_VERSION=3.3
UPTIME_THRESHOLD=1d
USER_WORKLOAD=no
USR_ORA_AUTO=dhcp
USR_ORA_ENV=
USR_ORA_IF=eth0
USR_ORA_NETMASK=255.255.255.0
USR_ORA_SUBNET=192.168.56.0
VERSION=12.1.0.1.0

NAME=ora.oc4j
TYPE=ora.oc4j.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=%CRS_HOME%/bin/oc4jctl%CRS_SCRIPT_SUFFIX%
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/scriptagent
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
CHECK_TIMEOUT=0
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle OC4J resource
ENABLED=0
FAILOVER_DELAY=0
FAILURE_INTERVAL=3600
FAILURE_THRESHOLD=2
HOSTING_MEMBERS=
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PORT=23792
PROFILE_CHANGE_TEMPLATE=
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=1
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=
START_TIMEOUT=300
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=
STOP_TIMEOUT=120
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1d
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
VERSION=12.1.0.1.0

NAME=ora.ol6-121-rac1.vip
TYPE=ora.cluster_vip_net1.type
ACL=owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:oracle:r-x
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=1
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=1
CHECK_TIMEOUT=120
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=vip) ELEMENT(HOSTING_MEMBERS=%HOSTING_MEMBERS%) ELEMENT(USR_ORA_VIP=%USR_ORA_VIP%)
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle VIP resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
GEN_USR_ORA_STATIC_VIP=
GEN_USR_ORA_VIP=192.168.56.11
HOSTING_MEMBERS=ol6-121-rac1
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=favored
PROFILE_CHANGE_TEMPLATE=
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=0
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network) weak(global:ora.gns)
START_TIMEOUT=120
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.net1.network)
STOP_TIMEOUT=0
TYPE_VERSION=2.2
UPTIME_THRESHOLD=1h
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
USR_ORA_VIP=192.168.56.11
VERSION=12.1.0.0.1

NAME=ora.ol6-121-rac2.vip
TYPE=ora.cluster_vip_net1.type
ACL=owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:oracle:r-x
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=1
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=1
CHECK_TIMEOUT=120
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=vip) ELEMENT(HOSTING_MEMBERS=%HOSTING_MEMBERS%) ELEMENT(USR_ORA_VIP=%USR_ORA_VIP%)
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle VIP resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
GEN_USR_ORA_STATIC_VIP=
GEN_USR_ORA_VIP=192.168.56.19
HOSTING_MEMBERS=ol6-121-rac2
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=favored
PROFILE_CHANGE_TEMPLATE=
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=0
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network) weak(global:ora.gns)
START_TIMEOUT=120
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.net1.network)
STOP_TIMEOUT=0
TYPE_VERSION=2.2
UPTIME_THRESHOLD=1h
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
USR_ORA_VIP=192.168.56.19
VERSION=12.1.0.0.1

NAME=ora.ons
TYPE=ora.ons.type
ACL=owner:oracle:rwx,pgrp:oinstall:r-x,other::r--
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=ora.%CRS_CSS_NODENAME%.ons
AUTO_START=always
CHECK_INTERVAL=60
CHECK_TIMEOUT=0
CLEAN_TIMEOUT=60
DEBUG_COMP=
DEBUG_FILE=
DEFAULT_TEMPLATE=
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle ONS resource
EM_PORT=2016
ENABLED=1
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOCAL_PORT=6100
LOGGING_LEVEL=1
LOG_COMP=
LOG_FILE=
MODIFY_TIMEOUT=60
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PROFILE_CHANGE_TEMPLATE=
REMOTE_HOSTS=
REMOTE_PORT=6200
RESTART_ATTEMPTS=3
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=ora.hub.category
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network)
START_TIMEOUT=0
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(ora.net1.network)
STOP_TIMEOUT=0
TYPE_VERSION=2.1
UPTIME_THRESHOLD=1d
USER_WORKLOAD=no
USE_EVM=true
USR_ORA_ENV=
VERSION=12.1.0.1.0

NAME=ora.proxy_advm
TYPE=ora.proxy_advm.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=always
CHECK_INTERVAL=60
CHECK_TIMEOUT=30
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=asm) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%)
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=CRS resource type for the ADVM proxy instance
ENABLED=1
GEN_USR_ORA_INST_NAME=
GEN_USR_ORA_INST_NAME@SERVERNAME(ol6-121-rac1)=+APX1
GEN_USR_ORA_INST_NAME@SERVERNAME(ol6-121-rac2)=+APX2
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PRESENCE=flex
PROFILE_CHANGE_TEMPLATE=
PWFILE=
REGISTERED_TYPE=srvctl
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=ora.hub.category
START_CONCURRENCY=0
START_DEPENDENCIES=hard(uniform:global:ora.asm) pullup:always(global:ora.asm)
START_TIMEOUT=900
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(global:intermediate:ora.asm)
STOP_TIMEOUT=600
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1d
USER_WORKLOAD=no
USR_ORA_ENV=
USR_ORA_INST_NAME=+APX%CRS_CSS_NODENUMBER%
USR_ORA_OPEN_MODE=mount
USR_ORA_OPI=false
USR_ORA_STOP_MODE=immediate
VERSION=12.1.0.1.0

NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
ACL=owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:oracle:r-x
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=1
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=1
CHECK_TIMEOUT=120
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=scan_vip) ELEMENT(HOSTING_MEMBERS=%HOSTING_MEMBERS%)
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle SCAN VIP resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
GEN_USR_ORA_STATIC_VIP=
GEN_USR_ORA_VIP=192.168.56.12
HOSTING_MEMBERS=
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NETNUM=1
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=0
SCAN_NAME=ol6-121-cluster-scan.grid.localdomain
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.net1.network) weak(global:ora.gns) dispersion:active(type:ora.scan_vip.type) pullup(global:ora.net1.network)
START_TIMEOUT=120
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.net1.network)
STOP_TIMEOUT=0
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1h
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
USR_ORA_VIP=192.168.56.12
VERSION=12.1.0.1.0

NAME=ora.scan2.vip
TYPE=ora.scan_vip.type
ACL=owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:oracle:r-x
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=1
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=1
CHECK_TIMEOUT=120
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=scan_vip) ELEMENT(HOSTING_MEMBERS=%HOSTING_MEMBERS%)
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle SCAN VIP resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
GEN_USR_ORA_STATIC_VIP=
GEN_USR_ORA_VIP=192.168.56.13
HOSTING_MEMBERS=
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NETNUM=1
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=0
SCAN_NAME=ol6-121-cluster-scan.grid.localdomain
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.net1.network) weak(global:ora.gns) dispersion:active(type:ora.scan_vip.type) pullup(global:ora.net1.network)
START_TIMEOUT=120
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.net1.network)
STOP_TIMEOUT=0
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1h
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
USR_ORA_VIP=192.168.56.13
VERSION=12.1.0.1.0

NAME=ora.scan3.vip
TYPE=ora.scan_vip.type
ACL=owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:oracle:r-x
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=1
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=1
CHECK_TIMEOUT=120
CLEAN_TIMEOUT=60
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=scan_vip) ELEMENT(HOSTING_MEMBERS=%HOSTING_MEMBERS%)
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle SCAN VIP resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
GEN_USR_ORA_STATIC_VIP=
GEN_USR_ORA_VIP=192.168.56.14
HOSTING_MEMBERS=
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NETNUM=1
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=0
SCAN_NAME=ol6-121-cluster-scan.grid.localdomain
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.net1.network) weak(global:ora.gns) dispersion:active(type:ora.scan_vip.type) pullup(global:ora.net1.network)
START_TIMEOUT=120
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.net1.network)
STOP_TIMEOUT=0
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1h
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
USR_ORA_VIP=192.168.56.14
VERSION=12.1.0.1.0

[oracle@ol6-121-rac1 ~]$

12c flex cluster小记(3)


好了，在安装完flex cluster并将leaf node转换为hub node之后，我们现在开始装2节点的RAC。

先是安装数据库软件，这一步很简单，这里省略不讲。我要讲的是安装完数据库软件之后用dbca建库的情况：建库过程没有报错，但完成后却发现2个节点中只有一个节点有db instance，另一个节点上没有，也就是说db instance只能启动在一个节点上。

我们先来看看安装过程。也是看图说话。


注意我这里选了policy-managed,而非传统的administrator-managed。


注意这里，由于之前选的是policy-managed，所以出现了一个要求指定server pool的选项。可以create new server pool，也可以选择已有的。注意这里默认的cardinality是1。

此后的步骤我就不贴图了，因为都是常规的建库过程，一路next下去就行。

安装一路都没有报错，安装完成启动后发现，db instance只能存在于一个节点上：比如一开始cdbrac_1运行在节点1上，只有等节点1宕机后，cdbrac_2才会在节点2上起来，表现得就像一个RAC One Node。但是奇怪，我安装的明明是RAC，不是RAC One Node；而且如果是One Node，instance name应该是唯一的。

细细回想安装过程,觉得和cardinality有关,一查文档,果然。

A policy-managed database is defined by cardinality, which is the number of database instances you want running during normal operations.

原来在policy-managed方式的cluster中，节点被划分成了若干个server pool。我定义的myservpool中，cardinality为1，也就是说在这个2节点的集群里，这个server pool中允许running的db instance只有1个。注：server pool的概念其实在11g就有了。
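补充一下，server pool本身的min/max等配置，可以用类似下面的命令查看（示意命令，实际输出以环境为准）：

[oracle@ol6-121-rac1 ~]$ srvctl config srvpool -serverpool myservpool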

ok,既然知道了原因,那就改回来吧。

--检查当前server pool的情况，可以看到节点2是在Free server pool中：
[oracle@ol6-121-rac1 ~]$ srvctl status srvpool -detail
Server pool name: Free
Active servers count: 1
Active server names: ol6-121-rac2
NAME=ol6-121-rac2 STATE=ONLINE
Server pool name: Generic
Active servers count: 0
Active server names:
Server pool name: myservpool
Active servers count: 1
Active server names: ol6-121-rac1
NAME=ol6-121-rac1 STATE=ONLINE

--将cardinality修改为2，即max为2
[oracle@ol6-121-rac1 ~]$ srvctl modify srvpool -serverpool myservpool -max 2

--再次检查，发现Free server pool中已经没有服务器，2个节点都划到了myservpool中，当前都是ONLINE状态。注：这里的ONLINE是指server的状态，表示节点在cluster中、服务器没有down。
[oracle@ol6-121-rac1 ~]$ srvctl status srvpool -detail
Server pool name: Free
Active servers count: 0
Active server names:
Server pool name: Generic
Active servers count: 0
Active server names:
Server pool name: myservpool
Active servers count: 2
Active server names: ol6-121-rac1,ol6-121-rac2
NAME=ol6-121-rac1 STATE=ONLINE
NAME=ol6-121-rac2 STATE=ONLINE
[oracle@ol6-121-rac1 ~]$

此时如果用ps还是没发现db进程，可以手工将其启动起来：

--start instance
[oracle@ol6-121-rac1 ~]$ srvctl start instance -db cdbrac -instance cdbrac_2

--检查db instance 情况:
[oracle@ol6-121-rac1 ~]$ srvctl status database -db cdbrac
Instance cdbrac_1 is running on node ol6-121-rac1
Instance cdbrac_2 is running on node ol6-121-rac2

ok，我们现在已经改成2个instance了。那么如果要改回去，怎么改？也很简单，只是要注意已经起来的instance：如果要改小cardinality，可能会报错资源正在被使用，这时需要加force参数来强制关闭。

--尝试改回1，报错资源仍在running
[oracle@ol6-121-rac1 ~]$ srvctl modify srvpool -serverpool myservpool -max 1
PRCS-1011 : Failed to modify server pool myservpool
CRS-2736: The operation requires stopping resource 'ora.cdbrac.db' on server 'ol6-121-rac1'
CRS-2738: Unable to modify server pool 'ora.myservpool' as this will affect running resources, but the force option was not specified
[oracle@ol6-121-rac1 ~]$

--加force参数强制关闭
[oracle@ol6-121-rac1 ~]$ srvctl modify srvpool -serverpool myservpool -max 1 -force -verbose
 
--检查db instance情况
[oracle@ol6-121-rac1 ~]$ srvctl status database -db cdbrac
Instance cdbrac_2 is running on node ol6-121-rac2

--检查server pool情况:
[oracle@ol6-121-rac1 ~]$ srvctl status srvpool -detail
Server pool name: Free
Active servers count: 1
Active server names: ol6-121-rac1
NAME=ol6-121-rac1 STATE=ONLINE
Server pool name: Generic
Active servers count: 0
Active server names:
Server pool name: myservpool
Active servers count: 1
Active server names: ol6-121-rac2
NAME=ol6-121-rac2 STATE=ONLINE
[oracle@ol6-121-rac1 ~]$

好了,关于flex cluster的学习过程,就写到这里。太长的一篇文章拆成了3篇。在安装和测试的过程中,大小问题也经历不少。也认识到了不少新的特性,新的功能。Flex cluster,Flex asm,Serverpool,这些新东西不知道会被多少人使用,拭目以待……


inmemory option的简单介绍和测试


12c的inmemory option已经在6月10日发布，预计会在7月份有正式的产品release，即在12.1.0.2中，你就可以看到这个新特性了。

下面我们来简单看看这个新特性的用法和体会一下其厉害之处。

SQL> show parameter inmem
 
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
inmemory_clause_default              string
inmemory_force                       string      DEFAULT
inmemory_query                       string      ENABLE
inmemory_size                        big integer 500M

上面的几个参数和inmemory option有关。

inmemory_clause_default:
默认为空值，表示需要显式地给某个table指定INMEMORY属性，该table才会in memory；
INMEMORY，表示所有新建的table都in memory；
NO INMEMORY，和空值是一个意思。
 
inmemory_force:
DEFAULT：只有具有INMEMORY属性的table，才会以in memory的方式存储。
OFF：即使配置了inmemory area，也不会有table以in memory的方式存储。
ON：除了显式指定NO INMEMORY属性的table，其他table都会以in memory方式存储。
 
inmemory_query:
ENABLE，允许查询使用inmemory数据；
DISABLE，禁止查询使用inmemory数据。
 
inmemory_size:
设置inmemory option使用的内存大小。注意：该参数不能动态调整，修改后需要重启实例才能生效（示例见下）。
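比如要调整inmemory_size，一般是写入spfile后重启实例生效（以下语句仅为示意）：

SQL> alter system set inmemory_size=500M scope=spfile;
SQL> shutdown immediate
SQL> startup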

常用的检查视图:

SQL> desc v$im_segments
 
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
OWNER                                              VARCHAR2(128)
SEGMENT_NAME                              NOT NULL VARCHAR2(128)
PARTITION_NAME                                     VARCHAR2(128)
SEGMENT_TYPE                                       VARCHAR2(18)
TABLESPACE_NAME                           NOT NULL VARCHAR2(30)
INMEMORY_SIZE                                      NUMBER
BYTES                                              NUMBER
BYTES_NOT_POPULATED                                NUMBER
POPULATE_STATUS                                    VARCHAR2(9)
INMEMORY_PRIORITY                                  VARCHAR2(8)
INMEMORY_DISTRIBUTE                                VARCHAR2(15)
INMEMORY_COMPRESSION                               VARCHAR2(17)
CON_ID                                             NUMBER
 
SQL>
--注意dba_tables/user_tables中已经多了几个和inmemory相关的字段，见下面标记<<<的几个
SQL> desc user_tables
 
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
TABLE_NAME                                NOT NULL VARCHAR2(128)
TABLESPACE_NAME                                    VARCHAR2(30)
CLUSTER_NAME                                       VARCHAR2(128)
IOT_NAME                                           VARCHAR2(128)
STATUS                                             VARCHAR2(8)
PCT_FREE                                           NUMBER
PCT_USED                                           NUMBER
INI_TRANS                                          NUMBER
MAX_TRANS                                          NUMBER
INITIAL_EXTENT                                     NUMBER
NEXT_EXTENT                                        NUMBER
MIN_EXTENTS                                        NUMBER
MAX_EXTENTS                                        NUMBER
PCT_INCREASE                                       NUMBER
FREELISTS                                          NUMBER
FREELIST_GROUPS                                    NUMBER
LOGGING                                            VARCHAR2(3)
BACKED_UP                                          VARCHAR2(1)
NUM_ROWS                                           NUMBER
BLOCKS                                             NUMBER
EMPTY_BLOCKS                                       NUMBER
AVG_SPACE                                          NUMBER
CHAIN_CNT                                          NUMBER
AVG_ROW_LEN                                        NUMBER
AVG_SPACE_FREELIST_BLOCKS                          NUMBER
NUM_FREELIST_BLOCKS                                NUMBER
DEGREE                                             VARCHAR2(40)
INSTANCES                                          VARCHAR2(40)
CACHE                                              VARCHAR2(20)
TABLE_LOCK                                         VARCHAR2(8)
SAMPLE_SIZE                                        NUMBER
LAST_ANALYZED                                      DATE
PARTITIONED                                        VARCHAR2(3)
IOT_TYPE                                           VARCHAR2(12)
TEMPORARY                                          VARCHAR2(1)
SECONDARY                                          VARCHAR2(1)
NESTED                                             VARCHAR2(3)
BUFFER_POOL                                        VARCHAR2(7)
FLASH_CACHE                                        VARCHAR2(7)
CELL_FLASH_CACHE                                   VARCHAR2(7)
ROW_MOVEMENT                                       VARCHAR2(8)
GLOBAL_STATS                                       VARCHAR2(3)
USER_STATS                                         VARCHAR2(3)
DURATION                                           VARCHAR2(15)
SKIP_CORRUPT                                       VARCHAR2(8)
MONITORING                                         VARCHAR2(3)
CLUSTER_OWNER                                      VARCHAR2(128)
DEPENDENCIES                                       VARCHAR2(8)
COMPRESSION                                        VARCHAR2(8)
COMPRESS_FOR                                       VARCHAR2(30)
DROPPED                                            VARCHAR2(3)
READ_ONLY                                          VARCHAR2(3)
SEGMENT_CREATED                                    VARCHAR2(3)
RESULT_CACHE                                       VARCHAR2(7)
CLUSTERING                                         VARCHAR2(3)
ACTIVITY_TRACKING                                  VARCHAR2(23)
DML_TIMESTAMP                                      VARCHAR2(25)
HAS_IDENTITY                                       VARCHAR2(3)
CONTAINER_DATA                                     VARCHAR2(3)
INMEMORY_PRIORITY                                  VARCHAR2(8)   <<<
INMEMORY_DISTRIBUTE                                VARCHAR2(15)  <<<
INMEMORY_COMPRESSION                               VARCHAR2(17)  <<<
 
SQL>
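除了上面两个视图，还可以通过v$inmemory_area看一下inmemory区整体的分配和使用情况（下面的查询仅为示意，列名以实际版本为准）：

SQL> select pool, alloc_bytes, used_bytes, populate_status from v$inmemory_area;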

我们来测试一下:

--登录PDB,建立测试表
SQL> conn test/test@pdb1
Connected.
SQL> create table oracleblog1 as select * from dba_source;
 
Table created.
 
--看看其大小:
SQL> select sum(bytes)/1024/1024 from dba_segments where segment_name='ORACLEBLOG1';
 
SUM(BYTES)/1024/1024
--------------------
                  63
 
--将表用cache的方式缓存起来，以便稍后与inmemory做对比。
SQL> alter table ORACLEBLOG1 cache;
 
Table altered.
 
--检查user_tables中相关inmemory字段的情况，可以看到结果是空，但cache字段已经是Y了。
SQL> SELECT table_name, cache, inmemory_priority,                                             
  2  inmemory_distribute,inmemory_compression FROM user_tables where table_name='ORACLEBLOG1'
  3  /
 
TABLE_NAME                     CACHE                INMEMORY INMEMORY_DISTRI INMEMORY_COMPRESS
------------------------------ -------------------- -------- --------------- -----------------
ORACLEBLOG1                        Y
 
SQL>
 
 
 
--建立另外一个测试表
SQL> create table oracleblog2 as select * from dba_source;
 
Table created.
 
--注意,这里我们将table的属性改成inmemory,但此时还没加载到inmemory中。
SQL> alter table ORACLEBLOG2 inmemory;
 
Table altered.
 
--我们可以通过查看v$im_segments视图，发现目前还没有segment被加载到inmemory中。
SQL> SELECT v.owner, v.segment_name name,
  2  v.populate_status status, v.bytes_not_populated FROM v$im_segments v;
 
no rows selected
 
SQL>
 
--检查一下该table的属性:
SQL> SELECT table_name, cache, inmemory_priority,
  2  inmemory_distribute,inmemory_compression FROM user_tables where table_name='ORACLEBLOG2';
 
TABLE_NAME                     CACHE                INMEMORY INMEMORY_DISTRI INMEMORY_COMPRESS
------------------------------ -------------------- -------- --------------- -----------------
ORACLEBLOG2                        N                NONE     AUTO DISTRIBUTE FOR QUERY
 
SQL>
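--说明：上面查到的INMEMORY_PRIORITY为NONE，表示不会自动populate，要等第一次全表扫描时才会把数据加载进inmemory区。
--如果希望自动populate，可以给表指定优先级，例如下面这条示意语句（priority可取none/low/medium/high/critical）：
--    alter table ORACLEBLOG2 inmemory priority high;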
 
 
--跑一下下面的语句,使其加载到inmemory中:
SELECT /*+ full(t1) noparallel (t1)*/ Count(*) FROM oracleblog1 t1;
SELECT /*+ full(t2) noparallel (t2)*/ Count(*) FROM oracleblog2 t2;
 
--检查inmemory segment中,已经有了oracleblog2表。
--注: BYTES_NOT_POPULATED为0,表示整个表都inmemory了。
SQL> SELECT v.owner, v.segment_name name,
  2  v.populate_status status, v.bytes_not_populated FROM v$im_segments v;
 
OWNER                          NAME                           STATUS    BYTES_NOT_POPULATED
------------------------------ ------------------------------ --------- -------------------
TEST                           ORACLEBLOG2                    COMPLETED                   0
 
--同时,我们还能看一下压缩了多少:
SQL> SELECT v.owner, v.segment_name, v.bytes orig_size, v.inmemory_size in_mem_size,
  2  v.bytes/v.inmemory_size comp_ratio FROM v$im_segments v;
 
OWNER                          SEGMENT_NAME          ORIG_SIZE IN_MEM_SIZE COMP_RATIO
------------------------------ -------------------- ---------- ----------- ----------
TEST                           ORACLEBLOG2            66060288    33685504 1.96108949
 
SQL>
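顺便说一下压缩级别：上面INMEMORY_COMPRESSION列显示的FOR QUERY是默认压缩级别。如果更看重压缩比，可以指定其他MEMCOMPRESS级别（下面语句仅为示意）：

SQL> alter table ORACLEBLOG2 inmemory memcompress for capacity high;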

好,我们正式开始对比测试,每个语句跑3次。
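下面输出中的Elapsed时间、执行计划和统计信息来自SQL*Plus的timing和autotrace，测试前大致做了如下设置（示意）：

SQL> set timing on
SQL> set autotrace on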

(1)用cache的情况:

SQL> select /* use cache */ max(LINE) from oracleblog1;
 
 MAX(LINE)
----------
     11574
 
Elapsed: 00:00:00.05
 
Execution Plan
----------------------------------------------------------
Plan hash value: 4099259911
 
----------------------------------------------------------------------------------
| Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |             |     1 |     4 |  2176   (1)| 00:00:01 |
|   1 |  SORT AGGREGATE    |             |     1 |     4 |            |          |
|   2 |   TABLE ACCESS FULL| ORACLEBLOG1 |   338K|  1322K|  2176   (1)| 00:00:01 |
----------------------------------------------------------------------------------
 
 
Statistics
----------------------------------------------------------
         23  recursive calls
          0  db block gets
       7913  consistent gets
       7882  physical reads
          0  redo size
        545  bytes sent via SQL*Net to client
        544  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          4  sorts (memory)
          0  sorts (disk)
          1  rows processed
 
SQL>
SQL>
SQL>
SQL>
SQL> select /* use cache */ max(LINE) from oracleblog1;
 
 MAX(LINE)
----------
     11574
 
Elapsed: 00:00:00.04
 
Execution Plan
----------------------------------------------------------
Plan hash value: 4099259911
 
----------------------------------------------------------------------------------
| Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |             |     1 |     4 |  2176   (1)| 00:00:01 |
|   1 |  SORT AGGREGATE    |             |     1 |     4 |            |          |
|   2 |   TABLE ACCESS FULL| ORACLEBLOG1 |   338K|  1322K|  2176   (1)| 00:00:01 |
----------------------------------------------------------------------------------
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       7886  consistent gets
       7882  physical reads
          0  redo size
        545  bytes sent via SQL*Net to client
        544  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
 
SQL>
SQL>
SQL>
SQL>
SQL> select /* use cache */ max(LINE) from oracleblog1;
 
 MAX(LINE)
----------
     11574
 
Elapsed: 00:00:00.04
 
Execution Plan
----------------------------------------------------------
Plan hash value: 4099259911
 
----------------------------------------------------------------------------------
| Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |             |     1 |     4 |  2176   (1)| 00:00:01 |
|   1 |  SORT AGGREGATE    |             |     1 |     4 |            |          |
|   2 |   TABLE ACCESS FULL| ORACLEBLOG1 |   338K|  1322K|  2176   (1)| 00:00:01 |
----------------------------------------------------------------------------------
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       7886  consistent gets
       7882  physical reads
          0  redo size
        545  bytes sent via SQL*Net to client
        544  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
 
SQL>

我们很奇怪地发现，3次运行时间都在0.04~0.05秒左右，consistent gets为7886，但始终有physical reads。这是怎么回事？为什么始终有physical reads？

这其实是11g之后的新特性：对大表的串行全表扫描不走buffer cache，直接走direct path read。为了避免该特性影响对比，我们用event将其屏蔽。

SQL> alter session set events '10949 trace name context forever, level 1';
 
Session altered.
 
Elapsed: 00:00:00.00
SQL>
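顺便提一下，测试结束后如果要恢复默认的direct path read行为，可以关闭该event（以下语句仅为示意）：

SQL> alter session set events '10949 trace name context off';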

(1.1)屏蔽direct path read后,再次测试用cache的情况

SQL> select /* use cache */ max(LINE) from oracleblog1;
 
 MAX(LINE)
----------
     11574
 
Elapsed: 00:00:00.10
 
Execution Plan
----------------------------------------------------------
Plan hash value: 4099259911
 
----------------------------------------------------------------------------------
| Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |             |     1 |     4 |  2176   (1)| 00:00:01 |
|   1 |  SORT AGGREGATE    |             |     1 |     4 |            |          |
|   2 |   TABLE ACCESS FULL| ORACLEBLOG1 |   338K|  1322K|  2176   (1)| 00:00:01 |
----------------------------------------------------------------------------------
 
 
Statistics
----------------------------------------------------------
         39  recursive calls
          0  db block gets
       7936  consistent gets
       7883  physical reads
          0  redo size
        545  bytes sent via SQL*Net to client
        544  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
          1  rows processed
 
SQL> select /* use cache */ max(LINE) from oracleblog1;
 
 MAX(LINE)
----------
     11574
 
Elapsed: 00:00:00.03
 
Execution Plan
----------------------------------------------------------
Plan hash value: 4099259911
 
----------------------------------------------------------------------------------
| Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |             |     1 |     4 |  2176   (1)| 00:00:01 |
|   1 |  SORT AGGREGATE    |             |     1 |     4 |            |          |
|   2 |   TABLE ACCESS FULL| ORACLEBLOG1 |   338K|  1322K|  2176   (1)| 00:00:01 |
----------------------------------------------------------------------------------
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       7892  consistent gets
          0  physical reads
          0  redo size
        545  bytes sent via SQL*Net to client
        544  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
 
SQL>
SQL> select /* use cache */ max(LINE) from oracleblog1;
 
 MAX(LINE)
----------
     11574
 
Elapsed: 00:00:00.03
 
Execution Plan
----------------------------------------------------------
Plan hash value: 4099259911
 
----------------------------------------------------------------------------------
| Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |             |     1 |     4 |  2176   (1)| 00:00:01 |
|   1 |  SORT AGGREGATE    |             |     1 |     4 |            |          |
|   2 |   TABLE ACCESS FULL| ORACLEBLOG1 |   338K|  1322K|  2176   (1)| 00:00:01 |
----------------------------------------------------------------------------------
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       7892  consistent gets
          0  physical reads
          0  redo size
        545  bytes sent via SQL*Net to client
        544  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
 
SQL>

We can see the elapsed time is now around 0.03 seconds, with 7892 consistent gets and no physical reads.

Now it is time for the main attraction. Let's take a look.

(2) Test with the inmemory option:
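Before the queries, here is a minimal sketch of how a table such as ORACLEBLOG2 can be placed in the In-Memory column store. The size and the PRIORITY setting are assumptions; the post does not show the DDL that was actually used:

-- reserve space for the column store (requires an instance restart)
ALTER SYSTEM SET inmemory_size = 500M SCOPE=SPFILE;

-- after the restart, mark the table for in-memory population;
-- PRIORITY CRITICAL starts the population immediately instead of on first scan
ALTER TABLE oracleblog2 INMEMORY PRIORITY CRITICAL;

-- check that the segment has been populated
SELECT segment_name, populate_status, inmemory_size, bytes_not_populated
  FROM v$im_segments;

Until POPULATE_STATUS shows COMPLETED, a full scan may still read the not-yet-populated portion from the buffer cache or disk.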

SQL> select /* use inmemory */ max(LINE) from oracleblog2;
 
 MAX(LINE)
----------
     11574
 
Elapsed: 00:00:00.01
 
Execution Plan
----------------------------------------------------------
Plan hash value: 1514125655
 
-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |     1 |     4 |     4 (100)| 00:00:01 |
|   1 |  SORT AGGREGATE             |             |     1 |     4 |            |          |
|   2 |   TABLE ACCESS INMEMORY FULL| ORACLEBLOG2 |   338K|  1322K|     4 (100)| 00:00:01 |
-------------------------------------------------------------------------------------------
 
 
Statistics
----------------------------------------------------------
        208  recursive calls
          0  db block gets
        158  consistent gets
          0  physical reads
          0  redo size
        545  bytes sent via SQL*Net to client
        544  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
         16  sorts (memory)
          0  sorts (disk)
          1  rows processed
 
SQL>                                                       
SQL>
SQL> select /* use inmemory */ max(LINE) from oracleblog2;
 
 MAX(LINE)
----------
     11574
 
Elapsed: 00:00:00.00
 
Execution Plan
----------------------------------------------------------
Plan hash value: 1514125655
 
-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |     1 |     4 |     4 (100)| 00:00:01 |
|   1 |  SORT AGGREGATE             |             |     1 |     4 |            |          |
|   2 |   TABLE ACCESS INMEMORY FULL| ORACLEBLOG2 |   338K|  1322K|     4 (100)| 00:00:01 |
-------------------------------------------------------------------------------------------
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        545  bytes sent via SQL*Net to client
        544  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
 
SQL>
SQL>
SQL> select /* use inmemory */ max(LINE) from oracleblog2;
 
 MAX(LINE)
----------
     11574
 
Elapsed: 00:00:00.01
 
Execution Plan
----------------------------------------------------------
Plan hash value: 1514125655
 
-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |     1 |     4 |     4 (100)| 00:00:01 |
|   1 |  SORT AGGREGATE             |             |     1 |     4 |            |          |
|   2 |   TABLE ACCESS INMEMORY FULL| ORACLEBLOG2 |   338K|  1322K|     4 (100)| 00:00:01 |
-------------------------------------------------------------------------------------------
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        545  bytes sent via SQL*Net to client
        544  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
 
SQL>

The elapsed time is effectively zero, consistent gets drop to just 3, and there are no physical reads at all.

Note the TABLE ACCESS INMEMORY FULL step in the execution plan above: it shows that the inmemory option is being used.

Impressive. I hardly know what else to say...




==== update 2014-06-18=========================

At the request of reader 木匠, here is one more round of testing with dbms_xplan.display_cursor and statistics_level=all. :)

This time the table holds just over 41 million rows. The test goes as follows:
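For reference, a minimal sketch of the pattern used below. The actual output relies on SQL_IDs looked up from the cursor cache; with statistics_level=all the most recent cursor of the session can also be displayed directly (this null-argument form is an assumption, not what was typed in the test):

alter session set statistics_level = all;

select /* use inmemory */ max(line) from orasup1;

-- 'ALLSTATS LAST' reports the actual rows, time and buffer gets
-- of the last execution of the previous statement
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));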

SQL> select cache,INMEMORY_PRIORITY,INMEMORY_DISTRIBUTE,INMEMORY_COMPRESSION,table_name from dba_tables where table_name in ('ORASUP1','ORASUP2');
 
CACHE                INMEMORY_PRIORITY INMEMORY_DISTRIBUTE INMEMORY_COMPRESSION TABLE_NAME
-------------------- ----------------- ------------------- -------------------- --------------------------------------------------------------------------------
    N                NONE              AUTO DISTRIBUTE     FOR QUERY            ORASUP1
    Y                                                                           ORASUP2
    
Executed in 0.624 seconds
 
SQL>
SQL>
SQL>
SQL>
SQL>
SQL>
SQL>
SQL> select /* use cache */ max(line) from orasup1;
 
 MAX(LINE)
----------
     11883
 
Executed in 1.107 seconds
 
SQL> select /* use cache */ max(line) from orasup1;
 
 MAX(LINE)
----------
     11883
 
Executed in 1.311 seconds
 
SQL> select /* use cache */ max(line) from orasup1;
 
 MAX(LINE)
----------
     11883
 
Executed in 1.373 seconds
 
SQL>
SQL>
SQL>
SQL>
SQL> select /* use inmemory */ max(line) from orasup2;
 
 MAX(LINE)
----------
     11883
 
Executed in 8.736 seconds
 
SQL> select /* use inmemory */ max(line) from orasup2;
 
 MAX(LINE)
----------
     11883
 
Executed in 10.062 seconds
 
SQL> select /* use inmemory */ max(line) from orasup2;
 
 MAX(LINE)
----------
     11883
 
Executed in 9.906 seconds
 
SQL>
 
 
SQL> select * from table(dbms_xplan.display_cursor('49ks122w2x163',0,'ADVANCED'));
 
SQL_ID  49ks122w2x163, child number 0
-------------------------------------
select /* use inmemory */ max(line) from orasup1
 
Plan hash value: 1818600857
 
---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |     1 (100)|          |
|   1 |  SORT AGGREGATE             |         |     1 |     4 |            |          |
|   2 |   TABLE ACCESS INMEMORY FULL| ORASUP1 |   324K|  1266K|     1 (100)| 00:00:01 |
---------------------------------------------------------------------------------------
 
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
 
   1 - SEL$1
   2 - SEL$1 / ORASUP1@SEL$1
 
Outline Data
-------------
 
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
      DB_VERSION('12.1.0.2')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "ORASUP1"@"SEL$1")
      END_OUTLINE_DATA
  */
 
Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
   1 - (#keys=0) MAX("LINE")[22]
   2 - (rowset=200) "LINE"[NUMBER,22]
 
SQL>
SQL> select * from table(dbms_xplan.display_cursor('49ks122w2x163',0,'ALLSTATS LAST'));
 
SQL_ID  49ks122w2x163, child number 0
-------------------------------------
select /* use inmemory */ max(line) from orasup1
 
Plan hash value: 1818600857
 
-------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |      1 |        |      1 |00:00:01.25 |      11 |
|   1 |  SORT AGGREGATE             |         |      1 |      1 |      1 |00:00:01.25 |      11 |
|   2 |   TABLE ACCESS INMEMORY FULL| ORASUP1 |      1 |    324K|     41M|00:00:00.51 |      11 |
-------------------------------------------------------------------------------------------------
 
SQL>
SQL>
SQL>
SQL>
SQL>
SQL> select * from table(dbms_xplan.display_cursor('2ahgf7ap509ty',0,'ADVANCED'));
 
SQL_ID  2ahgf7ap509ty, child number 0
-------------------------------------
select /* use cache */ max(line) from orasup2
 
Plan hash value: 2938134919
 
------------------------------------------------------------------------------
| Id  | Operation          | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |         |       |       |   255K(100)|          |
|   1 |  SORT AGGREGATE    |         |     1 |     4 |            |          |
|   2 |   TABLE ACCESS FULL| ORASUP2 |    41M|   158M|   255K  (1)| 00:00:10 |
------------------------------------------------------------------------------
 
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
 
   1 - SEL$1
   2 - SEL$1 / ORASUP2@SEL$1
 
Outline Data
-------------
 
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
      DB_VERSION('12.1.0.2')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "ORASUP2"@"SEL$1")
      END_OUTLINE_DATA
  */
 
Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
   1 - (#keys=0) MAX("LINE")[22]
   2 - (rowset=200) "LINE"[NUMBER,22]
 
SQL>
SQL> select * from table(dbms_xplan.display_cursor('2ahgf7ap509ty',0,'ALLSTATS LAST'));
 
SQL_ID  2ahgf7ap509ty, child number 0
-------------------------------------
select /* use cache */ max(line) from orasup2
 
Plan hash value: 2938134919
 
----------------------------------------------------------------------------------------
| Id  | Operation          | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |         |      1 |        |      1 |00:00:10.43 |     941K|
|   1 |  SORT AGGREGATE    |         |      1 |      1 |      1 |00:00:10.43 |     941K|
|   2 |   TABLE ACCESS FULL| ORASUP2 |      1 |     41M|     41M|00:00:09.34 |     941K|
----------------------------------------------------------------------------------------

You can see the numbers still differ markedly: the Bytes estimate in the plans (1266K vs. 158M) and, more importantly, the actual buffer gets (11 for the in-memory scan vs. 941K for the conventional full scan).

Adding a node to a 12c RAC


Using VirtualBox as an example, we add a node to a 12c RAC. The main steps are (see the command sketch after this list):

1. Check that the new node meets the physical prerequisites
2. Run $GRID_HOME/addnode/addnode.sh to add and configure the Grid Infrastructure software
3. Run $ORACLE_HOME/addnode/addnode.sh to add the database software
4. Add a database instance on the new node to the cluster.
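A minimal sketch of those commands; node names follow this post's environment, but the database and instance names are hypothetical and the exact flags used here are not shown, so treat this as an outline rather than the definitive commands:

# Step 2: extend the Grid home from an existing node (run as the grid software owner).
# This cluster uses GNS, so VIPs are assigned automatically; on a non-GNS cluster
# you would also pass "CLUSTER_NEW_VIRTUAL_HOSTNAMES={12102-rac3-vip}".
$GRID_HOME/addnode/addnode.sh -silent "CLUSTER_NEW_NODES={12102-rac3}"
# then run the root scripts on 12102-rac3 when prompted

# Step 3: extend the database home
$ORACLE_HOME/addnode/addnode.sh -silent "CLUSTER_NEW_NODES={12102-rac3}"

# Step 4: add an instance on the new node (gdbName/instanceName are hypothetical;
# dbca prompts for the SYS password unless -sysDBAPassword is supplied)
dbca -silent -addInstance -nodeList 12102-rac3 -gdbName cdb12c -instanceName cdb12c3 -sysDBAUserName sys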

First shut down both nodes of the existing RAC in VirtualBox, right-click one of them, and clone it.

Once the clone is finished, clean out the leftover grid and database information: the software directories, the inventory, a few directories under /var, oratab, and the GI auto-start services under /etc/init.d and related locations (look for the keywords crs, ohas, tfa).

Wipe the shared disks that came with the clone (note: these are the clone's own copies, NOT the shared disks still used by the original node 1 and node 2!!):
dd if=/dev/zero of=/dev/sdb bs=1024k count=50
dd if=/dev/zero of=/dev/sdb1 bs=1024k count=50
dd if=/dev/zero of=/dev/sdc bs=1024k count=50
dd if=/dev/zero of=/dev/sdc1 bs=1024k count=50
dd if=/dev/zero of=/dev/sdd bs=1024k count=50
dd if=/dev/zero of=/dev/sdd1 bs=1024k count=50
dd if=/dev/zero of=/dev/sde bs=1024k count=50
dd if=/dev/zero of=/dev/sde1 bs=1024k count=50
 
Clean up the software directories and other leftovers:
rm -rf /etc/oracle
rm -rf /var/tmp/.oracle
rm -rf /etc/oraInst.loc
rm -rf /etc/init/oracle-ohasd.conf
rm -rf /etc/oratab
rm -rf /etc/udev/rules.d/99-oracle-asmdevices.rules
rm -rf /etc/init.d/ohasd
 
Remove the boot-time auto-start services:
rm -rf /etc/rc.d/rc1.d/K15ohasd
rm -rf /etc/rc.d/rc0.d/K15ohasd
rm -rf /etc/rc.d/rc6.d/K15ohasd
rm -rf /etc/rc.d/rc4.d/K15ohasd
rm -rf /etc/rc.d/init.d/init.ohasd
rm -rf /etc/rc.d/init.d/ohasd
rm -rf /etc/rc.d/rc2.d/K15ohasd
rm -rf /etc/rc.d/rc5.d/S96ohasd
rm -rf /etc/rc.d/rc3.d/S96ohasd

Check the shared storage:

Z:\Oralce_Virtual_Box\ol6-12102-rac>ls -l
total 41959424
-rwxrwxrwa   1 Administrators  None            5370806272 Aug 10 18:06 12102_rac1-disk2.vdi
-rwxrwxrwa   1 Administrators  None            5370806272 Aug 10 18:06 12102_rac1-disk3.vdi
-rwxrwxrwa   1 Administrators  None            5370806272 Aug 10 18:06 12102_rac1-disk4.vdi
-rwxrwxrwa   1 Administrators  None            5370806272 Aug 10 18:06 12102_rac1-disk5.vdi

Attach the shared storage to the rac3 node:

Z:\Oralce_Virtual_Box\ol6-12102-rac>VBoxManage storageattach 12102-rac3 --storagectl "SATA" --port 1 --device 0 --type hdd --medium 12102_rac1-disk2.vdi --mtype shareable
Z:\Oralce_Virtual_Box\ol6-12102-rac>VBoxManage storageattach 12102-rac3 --storagectl "SATA" --port 2 --device 0 --type hdd --medium 12102_rac1-disk3.vdi --mtype shareable
Z:\Oralce_Virtual_Box\ol6-12102-rac>VBoxManage storageattach 12102-rac3 --storagectl "SATA" --port 3 --device 0 --type hdd --medium 12102_rac1-disk4.vdi --mtype shareable
Z:\Oralce_Virtual_Box\ol6-12102-rac>VBoxManage storageattach 12102-rac3 --storagectl "SATA" --port 4 --device 0 --type hdd --medium 12102_rac1-disk5.vdi --mtype shareable
Z:\Oralce_Virtual_Box\ol6-12102-rac>
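If the disk images had not already been marked shareable when the original RAC was built, they would need to be flagged first; a sketch, assuming VirtualBox 4.x syntax (repeat for the other three images):

Z:\Oralce_Virtual_Box\ol6-12102-rac>VBoxManage modifyhd 12102_rac1-disk2.vdi --type shareable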

Boot the new host and register these shared disks through udev so they are set up automatically at startup:

[root@12102-rac3 ~]# /sbin/scsi_id -g -u -d /dev/sdb                                                                                                                                     
1ATA_VBOX_HARDDISK_VBe3484a98-a77aaec1                                                                                                                                                   
[root@12102-rac3 ~]# /sbin/scsi_id -g -u -d /dev/sdc                                                                                                                                     
1ATA_VBOX_HARDDISK_VB9d897555-1b30e790                                                                                                                                                   
[root@12102-rac3 ~]# /sbin/scsi_id -g -u -d /dev/sdd                                                                                                                                     
1ATA_VBOX_HARDDISK_VB2a6662eb-b04f8b6b                                                                                                                                                   
[root@12102-rac3 ~]# /sbin/scsi_id -g -u -d /dev/sde                                                                                                                                     
1ATA_VBOX_HARDDISK_VBcbeec833-3107d8f3                                                                                                                                                   
[root@12102-rac3 ~]#                                                                                                                                                                     
[root@12102-rac3 ~]#                                                                                                                                                                     
[root@12102-rac3 ~]#                                                                                                                                                                     
[root@12102-rac3 ~]# cd /etc/udev                                                                                                                                                       
[root@12102-rac3 udev]# cd ru*                                                                                                                                                           
[root@12102-rac3 rules.d]# ls                                                                                                                                                           
55-usm.rules                 60-vboxadd.rules             90-alsa.rules              99-fuse.rules                                                                                       
60-fprint-autosuspend.rules  70-persistent-cd.rules       90-hal.rules               99-oracle-asmdevices.rules                                                                         
60-pcmcia.rules              70-persistent-net.rules      97-bluetooth-serial.rules                                                                                                     
60-raw.rules                 70-persistent-net.rules.bak  98-kexec.rules                                                                                                                 
[root@12102-rac3 rules.d]#                                                                                                                                                               
[root@12102-rac3 rules.d]#                                                                                                                                                               
[root@12102-rac3 rules.d]# cat 99-oracle-asmdevices.rules                                                                                                                               
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBe3484a98-a77aaec1", NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB9d897555-1b30e790", NAME="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB2a6662eb-b04f8b6b", NAME="asm-disk3", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBcbeec833-3107d8f3", NAME="asm-disk4", OWNER="oracle", GROUP="dba", MODE="0660"
[root@12102-rac3 rules.d]# /sbin/udevadm control --reload-rules
[root@12102-rac3 rules.d]# /sbin/start_udev
Starting udev: [  OK  ]
[root@12102-rac3 rules.d]#
[root@12102-rac3 rules.d]# ls -al /dev/asm*
brw-rw----. 1 oracle dba 8, 17 Aug 10 17:19 /dev/asm-disk1
brw-rw----. 1 oracle dba 8, 33 Aug 10 17:19 /dev/asm-disk2
brw-rw----. 1 oracle dba 8, 49 Aug 10 17:19 /dev/asm-disk3
brw-rw----. 1 oracle dba 8, 65 Aug 10 17:19 /dev/asm-disk4
[root@12102-rac3 rules.d]#

Check the hosts file and add the third node:

[root@12102-rac3 rules.d]# cat /etc/hosts
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
 
127.0.0.1       localhost.localdomain   localhost
# Public
192.168.56.124   12102-rac1.localdomain       12102-rac1
192.168.56.125   12102-rac2.localdomain       12102-rac2
192.168.56.127   12102-rac3.localdomain       12102-rac3
# Private
192.168.57.34   12102-rac1-priv.localdomain   12102-rac1-priv
192.168.57.35   12102-rac2-priv.localdomain   12102-rac2-priv
192.168.57.37   12102-rac3-priv.localdomain   12102-rac3-priv
#Because use GNS, so vip and scanvip is provide by GNS
# Virtual
#192.168.56.103   12102-rac1-vip.localdomain    12102-rac1-vip
#192.168.56.104   12102-rac2-vip.localdomain    12102-rac2-vip
#192.168.56.109   12102-rac3-vip.localdomain    12102-rac3-vip
# SCAN
#192.168.56.105   12102-scan.localdomain 12102-scan
#192.168.56.106   12102-scan.localdomain 12102-scan
#192.168.56.107   12102-scan.localdomain 12102-scan
[root@12102-rac3 rules.d]#

Also verify SSH user equivalence between the hosts:

ssh 12102-rac1 date
ssh 12102-rac2 date
ssh 12102-rac3 date
 
ssh 12102-rac1-priv date
ssh 12102-rac2-priv date
ssh 12102-rac3-priv date
 
ssh 12102-rac1-vip date
ssh 12102-rac2-vip date
ssh 12102-rac3-vip date
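If the cloned node does not yet have passwordless SSH for oracle, the sshUserSetup.sh script shipped with the grid installation media can distribute the keys across all three nodes; a sketch, with the script location and flags stated as assumptions:

# run as oracle from any node; it prompts for the oracle password on each host
./sshUserSetup.sh -user oracle -hosts "12102-rac1 12102-rac2 12102-rac3" -advanced -noPromptPassphrase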

Add the corresponding entries to the DNS server and DHCP server as well:

My DNS is implemented with dnsmasq; refer to this document for the setup.

The contents of my hosts file are:

[root@dnsserver ~]# cat /etc/hosts
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
 
127.0.0.1       localhost.localdomain   localhost
# Public
192.168.56.124   12102-rac1.localdomain       12102-rac1
192.168.56.125   12102-rac2.localdomain       12102-rac2
# Private
192.168.57.34   12102-rac1-priv.localdomain   12102-rac1-priv
192.168.57.35   12102-rac2-priv.localdomain   12102-rac2-priv
#Because use GNS, so vip and scanvip is provide by GNS
# Virtual
192.168.56.103   12102-rac1-vip.localdomain    12102-rac1-vip
192.168.56.104   12102-rac2-vip.localdomain    12102-rac2-vip
# SCAN
192.168.56.105   12102-scan.localdomain 12102-scan
192.168.56.106   12102-scan.localdomain 12102-scan
192.168.56.107   12102-scan.localdomain 12102-scan
#GNS
192.168.56.108  gns.localdomain
[root@dnsserver ~]#
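dnsmasq serves names out of /etc/hosts by default, so after adding the entries for the new node it only needs a restart; a sketch, assuming the stock OL6 init script:

[root@dnsserver ~]# service dnsmasq restart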

The DHCP needed by GNS is set up following this article.

My DHCP configuration file is:

[root@dnsserver ~]# cat /etc/dhcpd.conf
ddns-update-style interim;
 
ignore client-updates;
 
## DHCP for public:
subnet 192.168.56.0 netmask 255.255.255.0
{
default-lease-time 43200;
max-lease-time 86400;
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.56.255;
option routers 192.168.56.1;
option domain-name-servers
192.168.56.3;
option domain-name "grid.localdomain";
pool
{
range 192.168.56.10 192.168.56.29;
}
}
 
 
 
## DHCP for private
subnet 192.168.57.0 netmask 255.255.255.0
{
default-lease-time 43200;
max-lease-time 86400;
option subnet-mask 255.255.0.0;
option broadcast-address 192.168.57.255;
pool
{
range 192.168.57.30 192.168.57.49;
}
}
[root@dnsserver ~]#
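After editing the configuration, restart the DHCP service so GNS can obtain leases for the new node's VIP; a sketch, assuming the stock OL6 service scripts:

[root@dnsserver ~]# service dhcpd restart
[root@dnsserver ~]# chkconfig dhcpd on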

Run the cluster verify utility to check the hardware and OS on the nodes:

cluvfy stage -post hwos -n 12102-rac1,12102-rac2,12102-rac3 -verbose
 
[oracle@12102-rac1 bin]$ ./cluvfy stage -post hwos -n 12102-rac1,12102-rac2,12102-rac3 -verbose
 
Performing post-checks for hardware and operating system setup
 
Checking node reachability...
 
Check: Node reachability from node "12102-rac1"
  Destination Node                      Reachable?             
  ------------------------------------  ------------------------
  12102-rac1                            yes                     
  12102-rac2                            yes                     
  12102-rac3                            yes                     
Result: Node reachability check passed from node "12102-rac1"
 
 
Checking user equivalence...
 
Check: User equivalence for user "oracle"
  Node Name                             Status                 
  ------------------------------------  ------------------------
  12102-rac3                            passed                 
  12102-rac2                            passed                 
  12102-rac1                            passed                 
Result: User equivalence check passed for user "oracle"
 
Checking node connectivity...
 
Checking hosts config file...
  Node Name                             Status                 
  ------------------------------------  ------------------------
  12102-rac1                            passed                 
  12102-rac3                            passed                 
  12102-rac2                            passed                 
 
Verification of the hosts config file successful
 
 
Interface information for node "12102-rac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.124  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:AD:B5:33 1500 
 eth0   192.168.56.26   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:AD:B5:33 1500 
 eth0   192.168.56.22   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:AD:B5:33 1500 
 eth1   192.168.57.34   192.168.57.0    0.0.0.0         192.168.56.1    08:00:27:35:04:BB 1500 
 eth1   169.254.161.44  169.254.0.0     0.0.0.0         192.168.56.1    08:00:27:35:04:BB 1500 
 
 
Interface information for node "12102-rac3"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.127  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:A6:B7:99 1500 
 eth1   192.168.57.37   192.168.57.0    0.0.0.0         192.168.56.1    08:00:27:2C:DC:8C 1500 
 
 
Interface information for node "12102-rac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.125  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:60:27:F9 1500 
 eth0   192.168.56.108  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:60:27:F9 1500 
 eth0   192.168.56.25   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:60:27:F9 1500 
 eth0   192.168.56.27   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:60:27:F9 1500 
 eth0   192.168.56.28   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:60:27:F9 1500 
 eth1   192.168.57.35   192.168.57.0    0.0.0.0         192.168.56.1    08:00:27:47:D4:A9 1500 
 eth1   169.254.7.3     169.254.0.0     0.0.0.0         192.168.56.1    08:00:27:47:D4:A9 1500 
 
 
Check: Node connectivity using interfaces on subnet "192.168.57.0"
 
Check: Node connectivity of subnet "192.168.57.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac1[192.168.57.34]       12102-rac3[192.168.57.37]       yes             
  12102-rac1[192.168.57.34]       12102-rac2[192.168.57.35]       yes             
  12102-rac3[192.168.57.37]       12102-rac2[192.168.57.35]       yes             
Result: Node connectivity passed for subnet "192.168.57.0" with node(s) 12102-rac1,12102-rac3,12102-rac2
 
 
Check: TCP connectivity of subnet "192.168.57.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac1 : 192.168.57.34      12102-rac1 : 192.168.57.34      passed         
  12102-rac3 : 192.168.57.37      12102-rac1 : 192.168.57.34      passed         
  12102-rac2 : 192.168.57.35      12102-rac1 : 192.168.57.34      passed         
  12102-rac1 : 192.168.57.34      12102-rac3 : 192.168.57.37      passed         
  12102-rac3 : 192.168.57.37      12102-rac3 : 192.168.57.37      passed         
  12102-rac2 : 192.168.57.35      12102-rac3 : 192.168.57.37      passed         
  12102-rac1 : 192.168.57.34      12102-rac2 : 192.168.57.35      passed         
  12102-rac3 : 192.168.57.37      12102-rac2 : 192.168.57.35      passed         
  12102-rac2 : 192.168.57.35      12102-rac2 : 192.168.57.35      passed         
Result: TCP connectivity check passed for subnet "192.168.57.0"
 
 
Check: Node connectivity using interfaces on subnet "192.168.56.0"
 
Check: Node connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac1[192.168.56.22]       12102-rac1[192.168.56.26]       yes             
  12102-rac1[192.168.56.22]       12102-rac2[192.168.56.108]      yes             
  12102-rac1[192.168.56.22]       12102-rac2[192.168.56.125]      yes             
  12102-rac1[192.168.56.22]       12102-rac2[192.168.56.28]       yes             
  12102-rac1[192.168.56.22]       12102-rac1[192.168.56.124]      yes             
  12102-rac1[192.168.56.22]       12102-rac3[192.168.56.127]      yes             
  12102-rac1[192.168.56.22]       12102-rac2[192.168.56.25]       yes             
  12102-rac1[192.168.56.22]       12102-rac2[192.168.56.27]       yes             
  12102-rac1[192.168.56.26]       12102-rac2[192.168.56.108]      yes             
  12102-rac1[192.168.56.26]       12102-rac2[192.168.56.125]      yes             
  12102-rac1[192.168.56.26]       12102-rac2[192.168.56.28]       yes             
  12102-rac1[192.168.56.26]       12102-rac1[192.168.56.124]      yes             
  12102-rac1[192.168.56.26]       12102-rac3[192.168.56.127]      yes             
  12102-rac1[192.168.56.26]       12102-rac2[192.168.56.25]       yes             
  12102-rac1[192.168.56.26]       12102-rac2[192.168.56.27]       yes             
  12102-rac2[192.168.56.108]      12102-rac2[192.168.56.125]      yes             
  12102-rac2[192.168.56.108]      12102-rac2[192.168.56.28]       yes             
  12102-rac2[192.168.56.108]      12102-rac1[192.168.56.124]      yes             
  12102-rac2[192.168.56.108]      12102-rac3[192.168.56.127]      yes             
  12102-rac2[192.168.56.108]      12102-rac2[192.168.56.25]       yes             
  12102-rac2[192.168.56.108]      12102-rac2[192.168.56.27]       yes             
  12102-rac2[192.168.56.125]      12102-rac2[192.168.56.28]       yes             
  12102-rac2[192.168.56.125]      12102-rac1[192.168.56.124]      yes             
  12102-rac2[192.168.56.125]      12102-rac3[192.168.56.127]      yes             
  12102-rac2[192.168.56.125]      12102-rac2[192.168.56.25]       yes             
  12102-rac2[192.168.56.125]      12102-rac2[192.168.56.27]       yes             
  12102-rac2[192.168.56.28]       12102-rac1[192.168.56.124]      yes             
  12102-rac2[192.168.56.28]       12102-rac3[192.168.56.127]      yes             
  12102-rac2[192.168.56.28]       12102-rac2[192.168.56.25]       yes             
  12102-rac2[192.168.56.28]       12102-rac2[192.168.56.27]       yes             
  12102-rac1[192.168.56.124]      12102-rac3[192.168.56.127]      yes             
  12102-rac1[192.168.56.124]      12102-rac2[192.168.56.25]       yes             
  12102-rac1[192.168.56.124]      12102-rac2[192.168.56.27]       yes             
  12102-rac3[192.168.56.127]      12102-rac2[192.168.56.25]       yes             
  12102-rac3[192.168.56.127]      12102-rac2[192.168.56.27]       yes             
  12102-rac2[192.168.56.25]       12102-rac2[192.168.56.27]       yes             
Result: Node connectivity passed for subnet "192.168.56.0" with node(s) 12102-rac1,12102-rac2,12102-rac3
 
 
Check: TCP connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac1 : 192.168.56.22      12102-rac1 : 192.168.56.22      passed         
  12102-rac1 : 192.168.56.26      12102-rac1 : 192.168.56.22      passed         
  12102-rac2 : 192.168.56.108     12102-rac1 : 192.168.56.22      passed         
  12102-rac2 : 192.168.56.125     12102-rac1 : 192.168.56.22      passed         
  12102-rac2 : 192.168.56.28      12102-rac1 : 192.168.56.22      passed         
  12102-rac1 : 192.168.56.124     12102-rac1 : 192.168.56.22      passed         
  12102-rac3 : 192.168.56.127     12102-rac1 : 192.168.56.22      passed         
  12102-rac2 : 192.168.56.25      12102-rac1 : 192.168.56.22      passed         
  12102-rac2 : 192.168.56.27      12102-rac1 : 192.168.56.22      passed         
  12102-rac1 : 192.168.56.22      12102-rac1 : 192.168.56.26      passed         
  12102-rac1 : 192.168.56.26      12102-rac1 : 192.168.56.26      passed         
  12102-rac2 : 192.168.56.108     12102-rac1 : 192.168.56.26      passed         
  12102-rac2 : 192.168.56.125     12102-rac1 : 192.168.56.26      passed         
  12102-rac2 : 192.168.56.28      12102-rac1 : 192.168.56.26      passed         
  12102-rac1 : 192.168.56.124     12102-rac1 : 192.168.56.26      passed         
  12102-rac3 : 192.168.56.127     12102-rac1 : 192.168.56.26      passed         
  12102-rac2 : 192.168.56.25      12102-rac1 : 192.168.56.26      passed         
  12102-rac2 : 192.168.56.27      12102-rac1 : 192.168.56.26      passed         
  12102-rac1 : 192.168.56.22      12102-rac2 : 192.168.56.108     passed         
  12102-rac1 : 192.168.56.26      12102-rac2 : 192.168.56.108     passed         
  12102-rac2 : 192.168.56.108     12102-rac2 : 192.168.56.108     passed         
  12102-rac2 : 192.168.56.125     12102-rac2 : 192.168.56.108     passed         
  12102-rac2 : 192.168.56.28      12102-rac2 : 192.168.56.108     passed         
  12102-rac1 : 192.168.56.124     12102-rac2 : 192.168.56.108     passed         
  12102-rac3 : 192.168.56.127     12102-rac2 : 192.168.56.108     passed         
  12102-rac2 : 192.168.56.25      12102-rac2 : 192.168.56.108     passed         
  12102-rac2 : 192.168.56.27      12102-rac2 : 192.168.56.108     passed         
  12102-rac1 : 192.168.56.22      12102-rac2 : 192.168.56.125     passed         
  12102-rac1 : 192.168.56.26      12102-rac2 : 192.168.56.125     passed         
  12102-rac2 : 192.168.56.108     12102-rac2 : 192.168.56.125     passed         
  12102-rac2 : 192.168.56.125     12102-rac2 : 192.168.56.125     passed         
  12102-rac2 : 192.168.56.28      12102-rac2 : 192.168.56.125     passed         
  12102-rac1 : 192.168.56.124     12102-rac2 : 192.168.56.125     passed         
  12102-rac3 : 192.168.56.127     12102-rac2 : 192.168.56.125     passed         
  12102-rac2 : 192.168.56.25      12102-rac2 : 192.168.56.125     passed         
  12102-rac2 : 192.168.56.27      12102-rac2 : 192.168.56.125     passed         
  12102-rac1 : 192.168.56.22      12102-rac2 : 192.168.56.28      passed         
  12102-rac1 : 192.168.56.26      12102-rac2 : 192.168.56.28      passed         
  12102-rac2 : 192.168.56.108     12102-rac2 : 192.168.56.28      passed         
  12102-rac2 : 192.168.56.125     12102-rac2 : 192.168.56.28      passed         
  12102-rac2 : 192.168.56.28      12102-rac2 : 192.168.56.28      passed         
  12102-rac1 : 192.168.56.124     12102-rac2 : 192.168.56.28      passed         
  12102-rac3 : 192.168.56.127     12102-rac2 : 192.168.56.28      passed         
  12102-rac2 : 192.168.56.25      12102-rac2 : 192.168.56.28      passed         
  12102-rac2 : 192.168.56.27      12102-rac2 : 192.168.56.28      passed         
  12102-rac1 : 192.168.56.22      12102-rac1 : 192.168.56.124     passed         
  12102-rac1 : 192.168.56.26      12102-rac1 : 192.168.56.124     passed         
  12102-rac2 : 192.168.56.108     12102-rac1 : 192.168.56.124     passed         
  12102-rac2 : 192.168.56.125     12102-rac1 : 192.168.56.124     passed         
  12102-rac2 : 192.168.56.28      12102-rac1 : 192.168.56.124     passed         
  12102-rac1 : 192.168.56.124     12102-rac1 : 192.168.56.124     passed         
  12102-rac3 : 192.168.56.127     12102-rac1 : 192.168.56.124     passed         
  12102-rac2 : 192.168.56.25      12102-rac1 : 192.168.56.124     passed         
  12102-rac2 : 192.168.56.27      12102-rac1 : 192.168.56.124     passed         
  12102-rac1 : 192.168.56.22      12102-rac3 : 192.168.56.127     passed         
  12102-rac1 : 192.168.56.26      12102-rac3 : 192.168.56.127     passed         
  12102-rac2 : 192.168.56.108     12102-rac3 : 192.168.56.127     passed         
  12102-rac2 : 192.168.56.125     12102-rac3 : 192.168.56.127     passed         
  12102-rac2 : 192.168.56.28      12102-rac3 : 192.168.56.127     passed         
  12102-rac1 : 192.168.56.124     12102-rac3 : 192.168.56.127     passed         
  12102-rac3 : 192.168.56.127     12102-rac3 : 192.168.56.127     passed         
  12102-rac2 : 192.168.56.25      12102-rac3 : 192.168.56.127     passed         
  12102-rac2 : 192.168.56.27      12102-rac3 : 192.168.56.127     passed         
  12102-rac1 : 192.168.56.22      12102-rac2 : 192.168.56.25      passed         
  12102-rac1 : 192.168.56.26      12102-rac2 : 192.168.56.25      passed         
  12102-rac2 : 192.168.56.108     12102-rac2 : 192.168.56.25      passed         
  12102-rac2 : 192.168.56.125     12102-rac2 : 192.168.56.25      passed         
  12102-rac2 : 192.168.56.28      12102-rac2 : 192.168.56.25      passed         
  12102-rac1 : 192.168.56.124     12102-rac2 : 192.168.56.25      passed         
  12102-rac3 : 192.168.56.127     12102-rac2 : 192.168.56.25      passed         
  12102-rac2 : 192.168.56.25      12102-rac2 : 192.168.56.25      passed         
  12102-rac2 : 192.168.56.27      12102-rac2 : 192.168.56.25      passed         
  12102-rac1 : 192.168.56.22      12102-rac2 : 192.168.56.27      passed         
  12102-rac1 : 192.168.56.26      12102-rac2 : 192.168.56.27      passed         
  12102-rac2 : 192.168.56.108     12102-rac2 : 192.168.56.27      passed         
  12102-rac2 : 192.168.56.125     12102-rac2 : 192.168.56.27      passed         
  12102-rac2 : 192.168.56.28      12102-rac2 : 192.168.56.27      passed         
  12102-rac1 : 192.168.56.124     12102-rac2 : 192.168.56.27      passed         
  12102-rac3 : 192.168.56.127     12102-rac2 : 192.168.56.27      passed         
  12102-rac2 : 192.168.56.25      12102-rac2 : 192.168.56.27      passed         
  12102-rac2 : 192.168.56.27      12102-rac2 : 192.168.56.27      passed         
Result: TCP connectivity check passed for subnet "192.168.56.0"
 
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "192.168.57.0".
Subnet mask consistency check passed.
 
Result: Node connectivity check passed
 
Checking multicast communication...
 
Checking subnet "192.168.57.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.57.0" for multicast communication with multicast group "224.0.0.251" passed.
 
Check of multicast communication passed.
Task ASM Integrity check started...
 
Checking if connectivity exists across cluster nodes on the ASM network
 
Checking node connectivity...
 
Checking hosts config file...
  Node Name                             Status                 
  ------------------------------------  ------------------------
  12102-rac2                            passed                 
  12102-rac1                            passed                 
 
Verification of the hosts config file successful
 
 
Interface information for node "12102-rac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth1   192.168.57.35   192.168.57.0    0.0.0.0         192.168.56.1    08:00:27:47:D4:A9 1500 
 eth1   169.254.7.3     169.254.0.0     0.0.0.0         192.168.56.1    08:00:27:47:D4:A9 1500 
 
 
Interface information for node "12102-rac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth1   192.168.57.34   192.168.57.0    0.0.0.0         192.168.56.1    08:00:27:35:04:BB 1500 
 eth1   169.254.161.44  169.254.0.0     0.0.0.0         192.168.56.1    08:00:27:35:04:BB 1500 
 
 
Check: Node connectivity using interfaces on subnet "192.168.57.0"
 
Check: Node connectivity of subnet "192.168.57.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac2[192.168.57.35]       12102-rac1[192.168.57.34]       yes             
Result: Node connectivity passed for subnet "192.168.57.0" with node(s) 12102-rac2,12102-rac1
 
 
Check: TCP connectivity of subnet "192.168.57.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac2 : 192.168.57.35      12102-rac2 : 192.168.57.35      passed         
  12102-rac1 : 192.168.57.34      12102-rac2 : 192.168.57.35      passed         
  12102-rac2 : 192.168.57.35      12102-rac1 : 192.168.57.34      passed         
  12102-rac1 : 192.168.57.34      12102-rac1 : 192.168.57.34      passed         
Result: TCP connectivity check passed for subnet "192.168.57.0"
 
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.57.0".
Subnet mask consistency check passed.
 
Result: Node connectivity check passed
 
Network connectivity check across cluster nodes on the ASM network passed
 
Task ASM Integrity check passed...
Task ASM Integrity check started...
 
Checking if connectivity exists across cluster nodes on the ASM network
 
Checking node connectivity...
 
Checking hosts config file...
  Node Name                             Status                 
  ------------------------------------  ------------------------
  12102-rac2                            passed                 
  12102-rac1                            passed                 
 
Verification of the hosts config file successful
 
 
Interface information for node "12102-rac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth1   192.168.57.35   192.168.57.0    0.0.0.0         192.168.56.1    08:00:27:47:D4:A9 1500 
 eth1   169.254.7.3     169.254.0.0     0.0.0.0         192.168.56.1    08:00:27:47:D4:A9 1500 
 
 
Interface information for node "12102-rac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth1   192.168.57.34   192.168.57.0    0.0.0.0         192.168.56.1    08:00:27:35:04:BB 1500 
 eth1   169.254.161.44  169.254.0.0     0.0.0.0         192.168.56.1    08:00:27:35:04:BB 1500 
 
 
Check: Node connectivity using interfaces on subnet "192.168.57.0"
 
Check: Node connectivity of subnet "192.168.57.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac1[192.168.57.34]       12102-rac2[192.168.57.35]       yes             
Result: Node connectivity passed for subnet "192.168.57.0" with node(s) 12102-rac1,12102-rac2
 
 
Check: TCP connectivity of subnet "192.168.57.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac1 : 192.168.57.34      12102-rac1 : 192.168.57.34      passed         
  12102-rac2 : 192.168.57.35      12102-rac1 : 192.168.57.34      passed         
  12102-rac1 : 192.168.57.34      12102-rac2 : 192.168.57.35      passed         
  12102-rac2 : 192.168.57.35      12102-rac2 : 192.168.57.35      passed         
Result: TCP connectivity check passed for subnet "192.168.57.0"
 
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.57.0".
Subnet mask consistency check passed.
 
Result: Node connectivity check passed
 
Network connectivity check across cluster nodes on the ASM network passed
 
Task ASM Integrity check passed...
 
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Time zone consistency
Result: Time zone consistency check passed
 
Checking shared storage accessibility...
 
No shared storage found
 
 
Shared storage check failed on nodes "12102-rac1,12102-rac3,12102-rac2"
 
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
 
 
Post-check for hardware and operating system setup was unsuccessful on all the nodes.
[oracle@12102-rac1 bin]$

Compare the new node 12102-rac3 against node 1 to see whether anything fails to match:

[oracle@12102-rac1 bin]$ ./cluvfy comp peer -refnode 12102-rac1 -n 12102-rac3 -verbose
 
Verifying peer compatibility
 
Checking peer compatibility...
 
Compatibility check: Physical memory [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    3.6476GB (3824748.0KB)    3.6476GB (3824748.0KB)    matched   
Physical memory
<null>
 
Compatibility check: Available memory [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    3.4812GB (3650308.0KB)    2.1513GB (2255776.0KB)    matched   
Available memory
<null>
 
Compatibility check: Swap space [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    1.9687GB (2064376.0KB)    1.9687GB (2064376.0KB)    matched   
Swap space
<null>
 
Compatibility check: Free disk space for "/usr" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    17.4385GB (1.8285568E7KB)  3.7285GB (3909632.0KB)    matched   
Free disk space
<null>
 
Compatibility check: Free disk space for "/var" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    17.4385GB (1.8285568E7KB)  3.7285GB (3909632.0KB)    matched   
Free disk space
<null>
 
Compatibility check: Free disk space for "/etc" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    17.4385GB (1.8285568E7KB)  3.7285GB (3909632.0KB)    matched   
Free disk space
<null>
 
Compatibility check: Free disk space for "/u01/app/12.1.0.2/grid" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    17.4385GB (1.8285568E7KB)  3.7285GB (3909632.0KB)    matched   
Free disk space
<null>
 
Compatibility check: Free disk space for "/sbin" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    17.4385GB (1.8285568E7KB)  3.7285GB (3909632.0KB)    matched   
Free disk space
<null>
 
Compatibility check: Free disk space for "/tmp" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    17.4385GB (1.8285568E7KB)  3.7285GB (3909632.0KB)    matched   
Free disk space
<null>
 
Compatibility check: User existence for "oracle" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    oracle(54321)             oracle(54321)             matched   
User existence for "oracle" check passed
 
Compatibility check: Group existence for "oinstall" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    oinstall(54321)           oinstall(54321)           matched   
Group existence for "oinstall" check passed
 
Compatibility check: Group existence for "dba" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    dba(54322)                dba(54322)                matched   
Group existence for "dba" check passed
 
Compatibility check: Group membership for "oracle" in "oinstall (Primary)" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    yes                       yes                       matched   
Group membership for "oracle" in "oinstall (Primary)" check passed
 
Compatibility check: Group membership for "oracle" in "dba" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    yes                       yes                       matched   
Group membership for "oracle" in "dba" check passed
 
Compatibility check: Run level [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    5                         5                         matched   
Run level check passed
 
Compatibility check: System architecture [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    x86_64                    x86_64                    matched   
System architecture check passed
 
Compatibility check: Kernel version [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    2.6.32-358.el6.x86_64     2.6.32-358.el6.x86_64     matched   
Kernel version check passed
 
Compatibility check: Kernel param "semmsl" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    250                       250                       matched   
Kernel param "semmsl" check passed
 
Compatibility check: Kernel param "semmns" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    32000                     32000                     matched   
Kernel param "semmns" check passed
 
Compatibility check: Kernel param "semopm" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    100                       100                       matched   
Kernel param "semopm" check passed
 
Compatibility check: Kernel param "semmni" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    128                       128                       matched   
Kernel param "semmni" check passed
 
Compatibility check: Kernel param "shmmax" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    4398046511104             4398046511104             matched   
Kernel param "shmmax" check passed
 
Compatibility check: Kernel param "shmmni" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    4096                      4096                      matched   
Kernel param "shmmni" check passed
 
Compatibility check: Kernel param "shmall" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    0                         0                         matched   
Kernel param "shmall" check passed
 
Compatibility check: Kernel param "file-max" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    6815744                   6815744                   matched   
Kernel param "file-max" check passed
 
Compatibility check: Kernel param "ip_local_port_range" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    9000 65500                9000 65500                matched   
Kernel param "ip_local_port_range" check passed
 
Compatibility check: Kernel param "rmem_default" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    262144                    262144                    matched   
Kernel param "rmem_default" check passed
 
Compatibility check: Kernel param "rmem_max" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    4194304                   4194304                   matched   
Kernel param "rmem_max" check passed
 
Compatibility check: Kernel param "wmem_default" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    262144                    262144                    matched   
Kernel param "wmem_default" check passed
 
Compatibility check: Kernel param "wmem_max" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    1048576                   1048576                   matched   
Kernel param "wmem_max" check passed
 
Compatibility check: Kernel param "aio-max-nr" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    1048576                   1048576                   matched   
Kernel param "aio-max-nr" check passed
 
Compatibility check: Kernel param "panic_on_oops" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    1                         1                         matched   
Kernel param "panic_on_oops" check passed
 
Compatibility check: Package existence for "binutils" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2-5.36.el6  matched   
Package existence for "binutils" check passed
 
Compatibility check: Package existence for "compat-libcap1" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    compat-libcap1-1.10-1     compat-libcap1-1.10-1     matched   
Package existence for "compat-libcap1" check passed
 
Compatibility check: Package existence for "compat-libstdc++-33 (x86_64)" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    compat-libstdc++-33-3.2.3-69.el6 (x86_64),compat-libstdc++-33-3.2.3-69.el6 (i686)  compat-libstdc++-33-3.2.3-69.el6 (x86_64),compat-libstdc++-33-3.2.3-69.el6 (i686)  matched   
Package existence for "compat-libstdc++-33 (x86_64)" check passed
 
Compatibility check: Package existence for "libgcc (x86_64)" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    libgcc-4.4.7-3.el6 (x86_64),libgcc-4.4.7-3.el6 (i686)  libgcc-4.4.7-3.el6 (x86_64),libgcc-4.4.7-3.el6 (i686)  matched   
Package existence for "libgcc (x86_64)" check passed
 
Compatibility check: Package existence for "libstdc++ (x86_64)" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    libstdc++-4.4.7-3.el6 (x86_64),libstdc++-4.4.7-3.el6 (i686)  libstdc++-4.4.7-3.el6 (x86_64),libstdc++-4.4.7-3.el6 (i686)  matched   
Package existence for "libstdc++ (x86_64)" check passed
 
Compatibility check: Package existence for "libstdc++-devel (x86_64)" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    libstdc++-devel-4.4.7-3.el6 (x86_64),libstdc++-devel-4.4.7-3.el6 (i686)  libstdc++-devel-4.4.7-3.el6 (x86_64),libstdc++-devel-4.4.7-3.el6 (i686)  matched   
Package existence for "libstdc++-devel (x86_64)" check passed
 
Compatibility check: Package existence for "sysstat" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    sysstat-9.0.4-20.el6      sysstat-9.0.4-20.el6      matched   
Package existence for "sysstat" check passed
 
Compatibility check: Package existence for "gcc" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    gcc-4.4.7-3.el6           gcc-4.4.7-3.el6           matched   
Package existence for "gcc" check passed
 
Compatibility check: Package existence for "gcc-c++" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    gcc-c++-4.4.7-3.el6       gcc-c++-4.4.7-3.el6       matched   
Package existence for "gcc-c++" check passed
 
Compatibility check: Package existence for "ksh" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    ksh-20100621-19.el6       ksh-20100621-19.el6       matched   
Package existence for "ksh" check passed
 
Compatibility check: Package existence for "make" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    make-3.81-20.el6          make-3.81-20.el6          matched   
Package existence for "make" check passed
 
Compatibility check: Package existence for "glibc (x86_64)" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    glibc-2.12-1.107.el6 (x86_64),glibc-2.12-1.107.el6 (i686)  glibc-2.12-1.107.el6 (x86_64),glibc-2.12-1.107.el6 (i686)  matched   
Package existence for "glibc (x86_64)" check passed
 
Compatibility check: Package existence for "glibc-devel (x86_64)" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    glibc-devel-2.12-1.107.el6 (x86_64),glibc-devel-2.12-1.107.el6 (i686)  glibc-devel-2.12-1.107.el6 (x86_64),glibc-devel-2.12-1.107.el6 (i686)  matched   
Package existence for "glibc-devel (x86_64)" check passed
 
Compatibility check: Package existence for "libaio (x86_64)" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    libaio-0.3.107-10.el6 (x86_64),libaio-0.3.107-10.el6 (i686)  libaio-0.3.107-10.el6 (x86_64),libaio-0.3.107-10.el6 (i686)  matched   
Package existence for "libaio (x86_64)" check passed
 
Compatibility check: Package existence for "libaio-devel (x86_64)" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    libaio-devel-0.3.107-10.el6 (i686),libaio-devel-0.3.107-10.el6 (x86_64)  libaio-devel-0.3.107-10.el6 (i686),libaio-devel-0.3.107-10.el6 (x86_64)  matched   
Package existence for "libaio-devel (x86_64)" check passed
 
Compatibility check: Package existence for "nfs-utils" [reference node: 12102-rac1]
  Node Name     Status                    Ref. node status          Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    nfs-utils-1.2.3-36.el6    nfs-utils-1.2.3-36.el6    matched   
Package existence for "nfs-utils" check passed
 
Verification of peer compatibility was successful.
[oracle@12102-rac1 bin]$

The Grid Infrastructure Management Repository is required to be at least 500 MB in size. (The unit "Seconds" shown in the output below is not right; it should be MB. See the example after the output for checking and, if needed, resizing the repository.)

[oracle@12102-rac1 ~]$
[oracle@12102-rac1 ~]$ oclumon manage -get repsize
 
CHM Repository Size = 136320 seconds
[oracle@12102-rac1 ~]$
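
As a rough sketch (the exact oclumon syntax can vary by version, so check oclumon manage -h first), you can look up where the repository lives and enlarge it with something like the following; the 2048 MB value here is only an illustration, and resizing may purge the CHM data already collected:

[oracle@12102-rac1 ~]$ oclumon manage -get reppath
[oracle@12102-rac1 ~]$ oclumon manage -repos changerepossize 2048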

Run the pre-check to prepare for adding the node:

[oracle@12102-rac1 bin]$ ./cluvfy stage -pre nodeadd -n 12102-rac3 -verbose
 
Performing pre-checks for node addition
 
Checking node reachability...
 
Check: Node reachability from node "12102-rac1"
  Destination Node                      Reachable?             
  ------------------------------------  ------------------------
  12102-rac3                            yes                     
Result: Node reachability check passed from node "12102-rac1"
 
 
Checking user equivalence...
 
Check: User equivalence for user "oracle"
  Node Name                             Status                 
  ------------------------------------  ------------------------
  12102-rac3                            passed                 
Result: User equivalence check passed for user "oracle"
 
Checking CRS integrity...
The Oracle Clusterware is healthy on node "12102-rac1"
 
CRS integrity check passed
 
Clusterware version consistency passed.
 
Checking shared resources...
 
Checking CRS home location...
Location check passed for: "/u01/app/12.1.0.2/grid"
Result: Shared resources check for node addition passed
 
 
Checking node connectivity...
 
Checking hosts config file...
  Node Name                             Status                 
  ------------------------------------  ------------------------
  12102-rac1                            passed                 
  12102-rac2                            passed                 
  12102-rac3                            passed                 
 
Verification of the hosts config file successful
 
 
Interface information for node "12102-rac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.124  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:AD:B5:33 1500 
 eth0   192.168.56.26   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:AD:B5:33 1500 
 eth0   192.168.56.22   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:AD:B5:33 1500 
 eth1   192.168.57.34   192.168.57.0    0.0.0.0         192.168.56.1    08:00:27:35:04:BB 1500 
 eth1   169.254.161.44  169.254.0.0     0.0.0.0         192.168.56.1    08:00:27:35:04:BB 1500 
 
 
Interface information for node "12102-rac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.125  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:60:27:F9 1500 
 eth0   192.168.56.108  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:60:27:F9 1500 
 eth0   192.168.56.25   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:60:27:F9 1500 
 eth0   192.168.56.27   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:60:27:F9 1500 
 eth0   192.168.56.28   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:60:27:F9 1500 
 eth1   192.168.57.35   192.168.57.0    0.0.0.0         192.168.56.1    08:00:27:47:D4:A9 1500 
 eth1   169.254.7.3     169.254.0.0     0.0.0.0         192.168.56.1    08:00:27:47:D4:A9 1500 
 
 
Interface information for node "12102-rac3"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.127  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:A6:B7:99 1500 
 eth1   192.168.57.37   192.168.57.0    0.0.0.0         192.168.56.1    08:00:27:2C:DC:8C 1500 
 
 
Check: Node connectivity using interfaces on subnet "192.168.57.0"
 
Check: Node connectivity of subnet "192.168.57.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac2[192.168.57.35]       12102-rac3[192.168.57.37]       yes             
  12102-rac2[192.168.57.35]       12102-rac1[192.168.57.34]       yes             
  12102-rac3[192.168.57.37]       12102-rac1[192.168.57.34]       yes             
Result: Node connectivity passed for subnet "192.168.57.0" with node(s) 12102-rac2,12102-rac3,12102-rac1
 
 
Check: TCP connectivity of subnet "192.168.57.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac2 : 192.168.57.35      12102-rac2 : 192.168.57.35      passed         
  12102-rac3 : 192.168.57.37      12102-rac2 : 192.168.57.35      passed         
  12102-rac1 : 192.168.57.34      12102-rac2 : 192.168.57.35      passed         
  12102-rac2 : 192.168.57.35      12102-rac3 : 192.168.57.37      passed         
  12102-rac3 : 192.168.57.37      12102-rac3 : 192.168.57.37      passed         
  12102-rac1 : 192.168.57.34      12102-rac3 : 192.168.57.37      passed         
  12102-rac2 : 192.168.57.35      12102-rac1 : 192.168.57.34      passed         
  12102-rac3 : 192.168.57.37      12102-rac1 : 192.168.57.34      passed         
  12102-rac1 : 192.168.57.34      12102-rac1 : 192.168.57.34      passed         
Result: TCP connectivity check passed for subnet "192.168.57.0"
 
 
Check: Node connectivity using interfaces on subnet "192.168.56.0"
 
Check: Node connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac2[192.168.56.28]       12102-rac3[192.168.56.127]      yes             
  12102-rac2[192.168.56.28]       12102-rac1[192.168.56.124]      yes             
  12102-rac2[192.168.56.28]       12102-rac2[192.168.56.25]       yes             
  12102-rac2[192.168.56.28]       12102-rac2[192.168.56.27]       yes             
  12102-rac2[192.168.56.28]       12102-rac2[192.168.56.108]      yes             
  12102-rac2[192.168.56.28]       12102-rac2[192.168.56.125]      yes             
  12102-rac2[192.168.56.28]       12102-rac1[192.168.56.22]       yes             
  12102-rac2[192.168.56.28]       12102-rac1[192.168.56.26]       yes             
  12102-rac3[192.168.56.127]      12102-rac1[192.168.56.124]      yes             
  12102-rac3[192.168.56.127]      12102-rac2[192.168.56.25]       yes             
  12102-rac3[192.168.56.127]      12102-rac2[192.168.56.27]       yes             
  12102-rac3[192.168.56.127]      12102-rac2[192.168.56.108]      yes             
  12102-rac3[192.168.56.127]      12102-rac2[192.168.56.125]      yes             
  12102-rac3[192.168.56.127]      12102-rac1[192.168.56.22]       yes             
  12102-rac3[192.168.56.127]      12102-rac1[192.168.56.26]       yes             
  12102-rac1[192.168.56.124]      12102-rac2[192.168.56.25]       yes             
  12102-rac1[192.168.56.124]      12102-rac2[192.168.56.27]       yes             
  12102-rac1[192.168.56.124]      12102-rac2[192.168.56.108]      yes             
  12102-rac1[192.168.56.124]      12102-rac2[192.168.56.125]      yes             
  12102-rac1[192.168.56.124]      12102-rac1[192.168.56.22]       yes             
  12102-rac1[192.168.56.124]      12102-rac1[192.168.56.26]       yes             
  12102-rac2[192.168.56.25]       12102-rac2[192.168.56.27]       yes             
  12102-rac2[192.168.56.25]       12102-rac2[192.168.56.108]      yes             
  12102-rac2[192.168.56.25]       12102-rac2[192.168.56.125]      yes             
  12102-rac2[192.168.56.25]       12102-rac1[192.168.56.22]       yes             
  12102-rac2[192.168.56.25]       12102-rac1[192.168.56.26]       yes             
  12102-rac2[192.168.56.27]       12102-rac2[192.168.56.108]      yes             
  12102-rac2[192.168.56.27]       12102-rac2[192.168.56.125]      yes             
  12102-rac2[192.168.56.27]       12102-rac1[192.168.56.22]       yes             
  12102-rac2[192.168.56.27]       12102-rac1[192.168.56.26]       yes             
  12102-rac2[192.168.56.108]      12102-rac2[192.168.56.125]      yes             
  12102-rac2[192.168.56.108]      12102-rac1[192.168.56.22]       yes             
  12102-rac2[192.168.56.108]      12102-rac1[192.168.56.26]       yes             
  12102-rac2[192.168.56.125]      12102-rac1[192.168.56.22]       yes             
  12102-rac2[192.168.56.125]      12102-rac1[192.168.56.26]       yes             
  12102-rac1[192.168.56.22]       12102-rac1[192.168.56.26]       yes             
Result: Node connectivity passed for subnet "192.168.56.0" with node(s) 12102-rac2,12102-rac3,12102-rac1
 
 
Check: TCP connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac2 : 192.168.56.28      12102-rac2 : 192.168.56.28      passed         
  12102-rac3 : 192.168.56.127     12102-rac2 : 192.168.56.28      passed         
  12102-rac1 : 192.168.56.124     12102-rac2 : 192.168.56.28      passed         
  12102-rac2 : 192.168.56.25      12102-rac2 : 192.168.56.28      passed         
  12102-rac2 : 192.168.56.27      12102-rac2 : 192.168.56.28      passed         
  12102-rac2 : 192.168.56.108     12102-rac2 : 192.168.56.28      passed         
  12102-rac2 : 192.168.56.125     12102-rac2 : 192.168.56.28      passed         
  12102-rac1 : 192.168.56.22      12102-rac2 : 192.168.56.28      passed         
  12102-rac1 : 192.168.56.26      12102-rac2 : 192.168.56.28      passed         
  12102-rac2 : 192.168.56.28      12102-rac3 : 192.168.56.127     passed         
  12102-rac3 : 192.168.56.127     12102-rac3 : 192.168.56.127     passed         
  12102-rac1 : 192.168.56.124     12102-rac3 : 192.168.56.127     passed         
  12102-rac2 : 192.168.56.25      12102-rac3 : 192.168.56.127     passed         
  12102-rac2 : 192.168.56.27      12102-rac3 : 192.168.56.127     passed         
  12102-rac2 : 192.168.56.108     12102-rac3 : 192.168.56.127     passed         
  12102-rac2 : 192.168.56.125     12102-rac3 : 192.168.56.127     passed         
  12102-rac1 : 192.168.56.22      12102-rac3 : 192.168.56.127     passed         
  12102-rac1 : 192.168.56.26      12102-rac3 : 192.168.56.127     passed         
  12102-rac2 : 192.168.56.28      12102-rac1 : 192.168.56.124     passed         
  12102-rac3 : 192.168.56.127     12102-rac1 : 192.168.56.124     passed         
  12102-rac1 : 192.168.56.124     12102-rac1 : 192.168.56.124     passed         
  12102-rac2 : 192.168.56.25      12102-rac1 : 192.168.56.124     passed         
  12102-rac2 : 192.168.56.27      12102-rac1 : 192.168.56.124     passed         
  12102-rac2 : 192.168.56.108     12102-rac1 : 192.168.56.124     passed         
  12102-rac2 : 192.168.56.125     12102-rac1 : 192.168.56.124     passed         
  12102-rac1 : 192.168.56.22      12102-rac1 : 192.168.56.124     passed         
  12102-rac1 : 192.168.56.26      12102-rac1 : 192.168.56.124     passed         
  12102-rac2 : 192.168.56.28      12102-rac2 : 192.168.56.25      passed         
  12102-rac3 : 192.168.56.127     12102-rac2 : 192.168.56.25      passed         
  12102-rac1 : 192.168.56.124     12102-rac2 : 192.168.56.25      passed         
  12102-rac2 : 192.168.56.25      12102-rac2 : 192.168.56.25      passed         
  12102-rac2 : 192.168.56.27      12102-rac2 : 192.168.56.25      passed         
  12102-rac2 : 192.168.56.108     12102-rac2 : 192.168.56.25      passed         
  12102-rac2 : 192.168.56.125     12102-rac2 : 192.168.56.25      passed         
  12102-rac1 : 192.168.56.22      12102-rac2 : 192.168.56.25      passed         
  12102-rac1 : 192.168.56.26      12102-rac2 : 192.168.56.25      passed         
  12102-rac2 : 192.168.56.28      12102-rac2 : 192.168.56.27      passed         
  12102-rac3 : 192.168.56.127     12102-rac2 : 192.168.56.27      passed         
  12102-rac1 : 192.168.56.124     12102-rac2 : 192.168.56.27      passed         
  12102-rac2 : 192.168.56.25      12102-rac2 : 192.168.56.27      passed         
  12102-rac2 : 192.168.56.27      12102-rac2 : 192.168.56.27      passed         
  12102-rac2 : 192.168.56.108     12102-rac2 : 192.168.56.27      passed         
  12102-rac2 : 192.168.56.125     12102-rac2 : 192.168.56.27      passed         
  12102-rac1 : 192.168.56.22      12102-rac2 : 192.168.56.27      passed         
  12102-rac1 : 192.168.56.26      12102-rac2 : 192.168.56.27      passed         
  12102-rac2 : 192.168.56.28      12102-rac2 : 192.168.56.108     passed         
  12102-rac3 : 192.168.56.127     12102-rac2 : 192.168.56.108     passed         
  12102-rac1 : 192.168.56.124     12102-rac2 : 192.168.56.108     passed         
  12102-rac2 : 192.168.56.25      12102-rac2 : 192.168.56.108     passed         
  12102-rac2 : 192.168.56.27      12102-rac2 : 192.168.56.108     passed         
  12102-rac2 : 192.168.56.108     12102-rac2 : 192.168.56.108     passed         
  12102-rac2 : 192.168.56.125     12102-rac2 : 192.168.56.108     passed         
  12102-rac1 : 192.168.56.22      12102-rac2 : 192.168.56.108     passed         
  12102-rac1 : 192.168.56.26      12102-rac2 : 192.168.56.108     passed         
  12102-rac2 : 192.168.56.28      12102-rac2 : 192.168.56.125     passed         
  12102-rac3 : 192.168.56.127     12102-rac2 : 192.168.56.125     passed         
  12102-rac1 : 192.168.56.124     12102-rac2 : 192.168.56.125     passed         
  12102-rac2 : 192.168.56.25      12102-rac2 : 192.168.56.125     passed         
  12102-rac2 : 192.168.56.27      12102-rac2 : 192.168.56.125     passed         
  12102-rac2 : 192.168.56.108     12102-rac2 : 192.168.56.125     passed         
  12102-rac2 : 192.168.56.125     12102-rac2 : 192.168.56.125     passed         
  12102-rac1 : 192.168.56.22      12102-rac2 : 192.168.56.125     passed         
  12102-rac1 : 192.168.56.26      12102-rac2 : 192.168.56.125     passed         
  12102-rac2 : 192.168.56.28      12102-rac1 : 192.168.56.22      passed         
  12102-rac3 : 192.168.56.127     12102-rac1 : 192.168.56.22      passed         
  12102-rac1 : 192.168.56.124     12102-rac1 : 192.168.56.22      passed         
  12102-rac2 : 192.168.56.25      12102-rac1 : 192.168.56.22      passed         
  12102-rac2 : 192.168.56.27      12102-rac1 : 192.168.56.22      passed         
  12102-rac2 : 192.168.56.108     12102-rac1 : 192.168.56.22      passed         
  12102-rac2 : 192.168.56.125     12102-rac1 : 192.168.56.22      passed         
  12102-rac1 : 192.168.56.22      12102-rac1 : 192.168.56.22      passed         
  12102-rac1 : 192.168.56.26      12102-rac1 : 192.168.56.22      passed         
  12102-rac2 : 192.168.56.28      12102-rac1 : 192.168.56.26      passed         
  12102-rac3 : 192.168.56.127     12102-rac1 : 192.168.56.26      passed         
  12102-rac1 : 192.168.56.124     12102-rac1 : 192.168.56.26      passed         
  12102-rac2 : 192.168.56.25      12102-rac1 : 192.168.56.26      passed         
  12102-rac2 : 192.168.56.27      12102-rac1 : 192.168.56.26      passed         
  12102-rac2 : 192.168.56.108     12102-rac1 : 192.168.56.26      passed         
  12102-rac2 : 192.168.56.125     12102-rac1 : 192.168.56.26      passed         
  12102-rac1 : 192.168.56.22      12102-rac1 : 192.168.56.26      passed         
  12102-rac1 : 192.168.56.26      12102-rac1 : 192.168.56.26      passed         
Result: TCP connectivity check passed for subnet "192.168.56.0"
 
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "192.168.57.0".
Subnet mask consistency check passed.
 
Result: Node connectivity check passed
 
Checking multicast communication...
 
Checking subnet "192.168.57.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.57.0" for multicast communication with multicast group "224.0.0.251" passed.
 
Check of multicast communication passed.
Task ASM Integrity check started...
 
Checking if connectivity exists across cluster nodes on the ASM network
 
Checking node connectivity...
 
Checking hosts config file...
  Node Name                             Status                 
  ------------------------------------  ------------------------
  12102-rac3                            passed                 
 
Verification of the hosts config file successful
 
 
Interface information for node "12102-rac3"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth1   192.168.57.37   192.168.57.0    0.0.0.0         192.168.56.1    08:00:27:2C:DC:8C 1500 
 
 
Check: Node connectivity using interfaces on subnet "192.168.57.0"
 
Check: Node connectivity of subnet "192.168.57.0"
Result: Node connectivity passed for subnet "192.168.57.0" with node(s) 12102-rac3
 
 
Check: TCP connectivity of subnet "192.168.57.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac3 : 192.168.57.37      12102-rac3 : 192.168.57.37      passed         
Result: TCP connectivity check passed for subnet "192.168.57.0"
 
 
Result: Node connectivity check passed
 
Network connectivity check across cluster nodes on the ASM network passed
 
Task ASM Integrity check passed...
Checking the policy managed database home availability
PRVG-11751 : File "/u01/app/oracle/product/12.1.0.2/db_1/bin/oracle" does not exist on all the nodes.
PRVG-11885 : Oracle Home "/u01/app/oracle/product/12.1.0.2/db_1" for policy managed database "cdbrac" does not exist on nodes "12102-rac3"
Policy managed database home availability check failed
 
Check: Total memory
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    3.6476GB (3824748.0KB)    4GB (4194304.0KB)         failed   
  12102-rac1    3.6476GB (3824748.0KB)    4GB (4194304.0KB)         failed   
Result: Total memory check failed
 
Check: Available memory
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    3.4806GB (3649632.0KB)    50MB (51200.0KB)          passed   
  12102-rac1    2.1512GB (2255680.0KB)    50MB (51200.0KB)          passed   
Result: Available memory check passed
 
Check: Swap space
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    1.9687GB (2064376.0KB)    3.6476GB (3824748.0KB)    failed   
  12102-rac1    1.9687GB (2064376.0KB)    3.6476GB (3824748.0KB)    failed   
Result: Swap space check failed
 
Check: Free disk space for "12102-rac3:/usr,12102-rac3:/var,12102-rac3:/etc,12102-rac3:/u01/app/12.1.0.2/grid,12102-rac3:/sbin,12102-rac3:/tmp"
  Path              Node Name     Mount point   Available     Required      Status     
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              12102-rac3    /             17.4375GB     7.9635GB      passed     
  /var              12102-rac3    /             17.4375GB     7.9635GB      passed     
  /etc              12102-rac3    /             17.4375GB     7.9635GB      passed     
  /u01/app/12.1.0.2/grid  12102-rac3    /             17.4375GB     7.9635GB      passed     
  /sbin             12102-rac3    /             17.4375GB     7.9635GB      passed     
  /tmp              12102-rac3    /             17.4375GB     7.9635GB      passed     
Result: Free disk space check passed for "12102-rac3:/usr,12102-rac3:/var,12102-rac3:/etc,12102-rac3:/u01/app/12.1.0.2/grid,12102-rac3:/sbin,12102-rac3:/tmp"
 
Check: Free disk space for "12102-rac1:/usr,12102-rac1:/var,12102-rac1:/etc,12102-rac1:/u01/app/12.1.0.2/grid,12102-rac1:/sbin,12102-rac1:/tmp"
  Path              Node Name     Mount point   Available     Required      Status     
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              12102-rac1    /             3.7168GB      7.9635GB      failed     
  /var              12102-rac1    /             3.7168GB      7.9635GB      failed     
  /etc              12102-rac1    /             3.7168GB      7.9635GB      failed     
  /u01/app/12.1.0.2/grid  12102-rac1    /             3.7168GB      7.9635GB      failed     
  /sbin             12102-rac1    /             3.7168GB      7.9635GB      failed     
  /tmp              12102-rac1    /             3.7168GB      7.9635GB      failed     
Result: Free disk space check failed for "12102-rac1:/usr,12102-rac1:/var,12102-rac1:/etc,12102-rac1:/u01/app/12.1.0.2/grid,12102-rac1:/sbin,12102-rac1:/tmp"
 
Check: User existence for "oracle"
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  12102-rac3    passed                    exists(54321)           
  12102-rac1    passed                    exists(54321)           
 
Checking for multiple users with UID value 54321
Result: Check for multiple users with UID value 54321 passed
Result: User existence check passed for "oracle"
 
Check: Run level
  Node Name     run level                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    5                         3,5                       passed   
  12102-rac1    5                         3,5                       passed   
Result: Run level check passed
 
Check: Hard limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status         
  ----------------  ------------  ------------  ------------  ----------------
  12102-rac3        hard          65536         65536         passed         
  12102-rac1        hard          65536         65536         passed         
Result: Hard limits check passed for "maximum open file descriptors"
 
Check: Soft limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status         
  ----------------  ------------  ------------  ------------  ----------------
  12102-rac3        soft          1024          1024          passed         
  12102-rac1        soft          1024          1024          passed         
Result: Soft limits check passed for "maximum open file descriptors"
 
Check: Hard limits for "maximum user processes"
  Node Name         Type          Available     Required      Status         
  ----------------  ------------  ------------  ------------  ----------------
  12102-rac3        hard          16384         16384         passed         
  12102-rac1        hard          16384         16384         passed         
Result: Hard limits check passed for "maximum user processes"
 
Check: Soft limits for "maximum user processes"
  Node Name         Type          Available     Required      Status         
  ----------------  ------------  ------------  ------------  ----------------
  12102-rac3        soft          2047          2047          passed         
  12102-rac1        soft          2047          2047          passed         
Result: Soft limits check passed for "maximum user processes"
 
Check: System architecture
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    x86_64                    x86_64                    passed   
  12102-rac1    x86_64                    x86_64                    passed   
Result: System architecture check passed
 
Check: Kernel version
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    2.6.32-358.el6.x86_64     2.6.39                    failed   
  12102-rac1    2.6.32-358.el6.x86_64     2.6.39                    failed   
Result: Kernel version check failed
 
Check: Kernel parameter for "semmsl"
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  12102-rac1        250           250           250           passed         
  12102-rac3        250           250           250           passed         
Result: Kernel parameter check passed for "semmsl"
 
Check: Kernel parameter for "semmns"
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  12102-rac1        32000         32000         32000         passed         
  12102-rac3        32000         32000         32000         passed         
Result: Kernel parameter check passed for "semmns"
 
Check: Kernel parameter for "semopm"
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  12102-rac1        100           100           100           passed         
  12102-rac3        100           100           100           passed         
Result: Kernel parameter check passed for "semopm"
 
Check: Kernel parameter for "semmni"
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  12102-rac1        128           128           128           passed         
  12102-rac3        128           128           128           passed         
Result: Kernel parameter check passed for "semmni"
 
Check: Kernel parameter for "shmmax"
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  12102-rac1        4398046511104  4398046511104  1958270976    passed         
  12102-rac3        4398046511104  4398046511104  1958270976    passed         
Result: Kernel parameter check passed for "shmmax"
 
Check: Kernel parameter for "shmmni"
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  12102-rac1        4096          4096          4096          passed         
  12102-rac3        4096          4096          4096          passed         
Result: Kernel parameter check passed for "shmmni"
 
Check: Kernel parameter for "shmall"
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  12102-rac1        1073741824    1073741824    382474        passed         
  12102-rac3        1073741824    1073741824    382474        passed         
Result: Kernel parameter check passed for "shmall"
 
Check: Kernel parameter for "file-max"
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  12102-rac1        6815744       6815744       6815744       passed         
  12102-rac3        6815744       6815744       6815744       passed         
Result: Kernel parameter check passed for "file-max"
 
Check: Kernel parameter for "ip_local_port_range"
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  12102-rac1        between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed         
  12102-rac3        between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed         
Result: Kernel parameter check passed for "ip_local_port_range"
 
Check: Kernel parameter for "rmem_default"
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  12102-rac1        262144        262144        262144        passed         
  12102-rac3        262144        262144        262144        passed         
Result: Kernel parameter check passed for "rmem_default"
 
Check: Kernel parameter for "rmem_max"
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  12102-rac1        4194304       4194304       4194304       passed         
  12102-rac3        4194304       4194304       4194304       passed         
Result: Kernel parameter check passed for "rmem_max"
 
Check: Kernel parameter for "wmem_default"
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  12102-rac1        262144        262144        262144        passed         
  12102-rac3        262144        262144        262144        passed         
Result: Kernel parameter check passed for "wmem_default"
 
Check: Kernel parameter for "wmem_max"
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  12102-rac1        1048576       1048576       1048576       passed         
  12102-rac3        1048576       1048576       1048576       passed         
Result: Kernel parameter check passed for "wmem_max"
 
Check: Kernel parameter for "aio-max-nr"
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  12102-rac1        1048576       1048576       1048576       passed         
  12102-rac3        1048576       1048576       1048576       passed         
Result: Kernel parameter check passed for "aio-max-nr"
 
Check: Kernel parameter for "panic_on_oops"
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  12102-rac1        1             1             1             passed         
  12102-rac3        1             1             1             passed         
Result: Kernel parameter check passed for "panic_on_oops"
 
Check: Package existence for "binutils"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2      passed   
  12102-rac1    binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2      passed   
Result: Package existence check passed for "binutils"
 
Check: Package existence for "compat-libcap1"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    compat-libcap1-1.10-1     compat-libcap1-1.10       passed   
  12102-rac1    compat-libcap1-1.10-1     compat-libcap1-1.10       passed   
Result: Package existence check passed for "compat-libcap1"
 
Check: Package existence for "compat-libstdc++-33(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed   
  12102-rac1    compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed   
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"
 
Check: Package existence for "libgcc(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.4      passed   
  12102-rac1    libgcc(x86_64)-4.4.7-3.el6  libgcc(x86_64)-4.4.4      passed   
Result: Package existence check passed for "libgcc(x86_64)"
 
Check: Package existence for "libstdc++(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.4   passed   
  12102-rac1    libstdc++(x86_64)-4.4.7-3.el6  libstdc++(x86_64)-4.4.4   passed   
Result: Package existence check passed for "libstdc++(x86_64)"
 
Check: Package existence for "libstdc++-devel(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.4  passed   
  12102-rac1    libstdc++-devel(x86_64)-4.4.7-3.el6  libstdc++-devel(x86_64)-4.4.4  passed   
Result: Package existence check passed for "libstdc++-devel(x86_64)"
 
Check: Package existence for "sysstat"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    sysstat-9.0.4-20.el6      sysstat-9.0.4             passed   
  12102-rac1    sysstat-9.0.4-20.el6      sysstat-9.0.4             passed   
Result: Package existence check passed for "sysstat"
 
Check: Package existence for "gcc"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    gcc-4.4.7-3.el6           gcc-4.4.4                 passed   
  12102-rac1    gcc-4.4.7-3.el6           gcc-4.4.4                 passed   
Result: Package existence check passed for "gcc"
 
Check: Package existence for "gcc-c++"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    gcc-c++-4.4.7-3.el6       gcc-c++-4.4.4             passed   
  12102-rac1    gcc-c++-4.4.7-3.el6       gcc-c++-4.4.4             passed   
Result: Package existence check passed for "gcc-c++"
 
Check: Package existence for "ksh"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    ksh                       ksh                       passed   
  12102-rac1    ksh                       ksh                       passed   
Result: Package existence check passed for "ksh"
 
Check: Package existence for "make"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    make-3.81-20.el6          make-3.81                 passed   
  12102-rac1    make-3.81-20.el6          make-3.81                 passed   
Result: Package existence check passed for "make"
 
Check: Package existence for "glibc(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    glibc(x86_64)-2.12-1.107.el6  glibc(x86_64)-2.12        passed   
  12102-rac1    glibc(x86_64)-2.12-1.107.el6  glibc(x86_64)-2.12        passed   
Result: Package existence check passed for "glibc(x86_64)"
 
Check: Package existence for "glibc-devel(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    glibc-devel(x86_64)-2.12-1.107.el6  glibc-devel(x86_64)-2.12  passed   
  12102-rac1    glibc-devel(x86_64)-2.12-1.107.el6  glibc-devel(x86_64)-2.12  passed   
Result: Package existence check passed for "glibc-devel(x86_64)"
 
Check: Package existence for "libaio(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed   
  12102-rac1    libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed   
Result: Package existence check passed for "libaio(x86_64)"
 
Check: Package existence for "libaio-devel(x86_64)"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed   
  12102-rac1    libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed   
Result: Package existence check passed for "libaio-devel(x86_64)"
 
Check: Package existence for "nfs-utils"
  Node Name     Available                 Required                  Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    nfs-utils-1.2.3-36.el6    nfs-utils-1.2.3-15        passed   
  12102-rac1    nfs-utils-1.2.3-36.el6    nfs-utils-1.2.3-15        passed   
Result: Package existence check passed for "nfs-utils"
 
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
 
Check: Current group ID
Result: Current group ID check passed
 
Starting check for consistency of primary group of root user
  Node Name                             Status                 
  ------------------------------------  ------------------------
  12102-rac3                            passed                 
  12102-rac1                            passed                 
 
Check for consistency of root user's primary group passed
 
Check: Group existence for "dba"
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  12102-rac3    passed                    exists                 
  12102-rac1    passed                    exists                 
Result: Group existence check passed for "dba"
 
Check: Membership of user "oracle" in group "dba"
  Node Name         User Exists   Group Exists  User in Group  Status         
  ----------------  ------------  ------------  ------------  ----------------
  12102-rac3        yes           yes           yes           passed         
Result: Membership check for user "oracle" in group "dba" passed
 
Check: Group existence for "asmoper"
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  12102-rac3    passed                    exists                 
  12102-rac1    passed                    exists                 
Result: Group existence check passed for "asmoper"
 
Check: Membership of user "oracle" in group "asmoper"
  Node Name         User Exists   Group Exists  User in Group  Status         
  ----------------  ------------  ------------  ------------  ----------------
  12102-rac3        yes           yes           no            failed         
Result: Membership check for user "oracle" in group "asmoper" failed
 
Check: Group existence for "oinstall"
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  12102-rac3    passed                    exists                 
  12102-rac1    passed                    exists                 
Result: Group existence check passed for "oinstall"
 
Check: Membership of user "oracle" in group "oinstall"
  Node Name         User Exists   Group Exists  User in Group  Status         
  ----------------  ------------  ------------  ------------  ----------------
  12102-rac3        yes           yes           yes           passed         
Result: Membership check for user "oracle" in group "oinstall" passed
 
Check: Group existence for "oinstall"
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  12102-rac3    passed                    exists                 
Result: Group existence check passed for "oinstall"
 
Check: Membership of user "oracle" in group "oinstall"
  Node Name         User Exists   Group Exists  User in Group  Status         
  ----------------  ------------  ------------  ------------  ----------------
  12102-rac3        yes           yes           yes           passed         
Result: Membership check for user "oracle" in group "oinstall" passed
 
Check: User existence for "root"
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  12102-rac3    passed                    exists(0)               
 
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Result: User existence check passed for "root"
 
Check: User existence for "oracle"
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  12102-rac3    passed                    exists(54321)           
 
Checking for multiple users with UID value 54321
Result: Check for multiple users with UID value 54321 passed
Result: User existence check passed for "oracle"
Check: Time zone consistency
Result: Time zone consistency check passed
 
Starting Clock synchronization checks using Network Time Protocol(NTP)...
 
Checking existence of NTP configuration file "/etc/ntp.conf" across nodes
  Node Name                             File exists?           
  ------------------------------------  ------------------------
  12102-rac3                            no                     
  12102-rac1                            no                     
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running
 
Result: Clock synchronization check using Network Time Protocol(NTP) passed
 
 
Checking to make sure user "oracle" is not in "root" group
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  12102-rac3    passed                    does not exist         
  12102-rac1    passed                    does not exist         
Result: User "oracle" is not part of "root" group. Check passed
Checking integrity of file "/etc/resolv.conf" across nodes
 
Checking the file "/etc/resolv.conf" to make sure only one of 'domain' and 'search' entries is defined
"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
Checking if 'domain' entry in file "/etc/resolv.conf" is consistent across the nodes...
"domain" entry does not exist in any "/etc/resolv.conf" file
Checking if 'search' entry in file "/etc/resolv.conf" is consistent across the nodes...
"search" entry does not exist in any "/etc/resolv.conf" file
Checking DNS response time for an unreachable node
  Node Name                             Status                 
  ------------------------------------  ------------------------
  12102-rac1                            passed                 
  12102-rac3                            passed                 
The DNS response time for an unreachable node is within acceptable limit on all nodes
checking DNS response from all servers in "/etc/resolv.conf"
checking response for name "12102-rac3" from each of the name servers specified in "/etc/resolv.conf"
  Node Name     Source                    Comment                   Status   
  ------------  ------------------------  ------------------------  ----------
checking response for name "12102-rac1" from each of the name servers specified in "/etc/resolv.conf"
  Node Name     Source                    Comment                   Status   
  ------------  ------------------------  ------------------------  ----------
  12102-rac1    192.168.56.126            IPv4                      passed   
 
Check for integrity of file "/etc/resolv.conf" failed
 
 
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
 
 
Checking GNS integrity...
Checking if the GNS subdomain name is valid...
The GNS subdomain name "gns.localdomain" is a valid domain name
Checking if the GNS VIP belongs to same subnet as the public network...
Public network subnets "192.168.56.0" match the GNS VIP "192.168.56.108"
Checking if the GNS VIP is a valid address...
GNS VIP "192.168.56.108" resolves to a valid IP address
Checking the status of GNS VIP...
Checking if FDQN names for domain "gns.localdomain" are reachable
 
GNS resolved IP addresses are reachable
 
GNS resolved IP addresses are reachable
 
GNS resolved IP addresses are reachable
Checking status of GNS resource...
  Node          Running?                  Enabled?               
  ------------  ------------------------  ------------------------
  12102-rac1    no                        yes                     
  12102-rac2    yes                       yes                     
 
GNS resource configuration check passed
Checking status of GNS VIP resource...
  Node          Running?                  Enabled?               
  ------------  ------------------------  ------------------------
  12102-rac1    no                        yes                     
  12102-rac2    yes                       yes                     
 
GNS VIP resource configuration check passed.
 
GNS integrity check passed
 
Checking Flex Cluster node role configuration...
Flex Cluster node role configuration check passed
 
Pre-check for node addition was unsuccessful on all the nodes.
[oracle@12102-rac1 bin]$   
[oracle@12102-rac1 bin]$
[oracle@12102-rac1 bin]$
[oracle@12102-rac1 bin]$
[oracle@12102-rac1 bin]$
[oracle@12102-rac1 bin]$
[oracle@12102-rac1 bin]$

Now start adding the node. The Grid Infrastructure home goes first, so change to the addnode script directory under the grid home:

[oracle@12102-rac1 addnode]$ pwd
/u01/app/12.1.0.2/grid/addnode
[oracle@12102-rac1 addnode]$ ls
addnode_oraparam.ini  addnode_oraparam.ini.sbs  addnode.sh
[oracle@12102-rac1 addnode]$

Note 1: when addnode.sh is run, the log is written to the logs directory under the central inventory, named addNodeActionsxxxx-xx-xx_xx-xx-xxxx.log (see the example below for following it during the run).
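
For example, to follow the installer output while addnode.sh is running, you can tail that log; the wildcard here is only for illustration, and in practice you would use the timestamped file name the installer prints:

[oracle@12102-rac1 addnode]$ tail -f /u01/app/oraInventory/logs/addNodeActions*.log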

Note 2: the grid user needs to be a member of the asmoper group (a quick fix is shown below).
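
For example, as root (in this environment the oracle user is the Grid Infrastructure owner), add the group membership on each affected node and then verify it:

[root@12102-rac3 ~]# usermod -a -G asmoper oracle
[root@12102-rac3 ~]# id oracle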

[oracle@12102-rac1 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={12102-rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={12102-rac3-vip}" "CLUSTER_NEW_NODE_ROLES={hub}"
Starting Oracle Universal Installer...
 
Checking Temp space: must be greater than 120 MB.   Actual 3472 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 2001 MB    Passed
 [WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/addNodeActions2015-08-11_03-00-46PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/addNodeActions2015-08-11_03-00-46PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
 
Prepare Configuration in progress.
 
Prepare Configuration successful.
..................................................   8% Done.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/addNodeActions2015-08-11_03-00-46PM.log
 
Instantiate files in progress.
 
Instantiate files successful.
..................................................   14% Done.
 
Copying files to node in progress.
        
Copying files to node successful.
..................................................   73% Done.
 
Saving cluster inventory in progress.
 ..................................................   80% Done.
 
Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/12.1.0.2/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
 
Setup Oracle Base in progress.
 
Setup Oracle Base successful.
..................................................   88% Done.
 
As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/12.1.0.2/grid/root.sh
 
Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[12102-rac3]
Execute /u01/app/12.1.0.2/grid/root.sh on the following nodes:
[12102-rac3]
 
The scripts can be executed in parallel on all the nodes.
 
..........
Update Inventory in progress.
 ..................................................   100% Done.
 
Update Inventory successful.
Successfully Setup Software.
[oracle@12102-rac1 addnode]$

As prompted, run the root scripts on the new node:

[root@12102-rac3 ~]#
[root@12102-rac3 ~]# id
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[root@12102-rac3 ~]#
[root@12102-rac3 ~]# sh /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
 
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@12102-rac3 ~]#
[root@12102-rac3 ~]# sh /u01/app/12.1.0.2/grid/root.sh
Check /u01/app/12.1.0.2/grid/install/root_12102-rac3_2015-08-11_15-29-35.log for the output of root script
[root@12102-rac3 ~]#
[oracle@12102-rac3 ContentsXML]$ tail -30 /u01/app/12.1.0.2/grid/install/root_12102-rac3_2015-08-11_15-29-35.log
CRS-2676: Start of 'ora.ons' on '12102-rac3' succeeded
CRS-2677: Stop of 'ora.scan2.vip' on '12102-rac3' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on '12102-rac2'
CRS-2676: Start of 'ora.scan2.vip' on '12102-rac2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on '12102-rac2'
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on '12102-rac2' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on '12102-rac2'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on '12102-rac2' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on '12102-rac2'
CRS-2677: Stop of 'ora.scan2.vip' on '12102-rac2' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on '12102-rac3'
CRS-2676: Start of 'ora.scan2.vip' on '12102-rac3' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on '12102-rac3'
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on '12102-rac3' succeeded
CRS-2676: Start of 'ora.asm' on '12102-rac3' succeeded
CRS-2672: Attempting to start 'ora.DG_DATA.dg' on '12102-rac3'
CRS-2676: Start of 'ora.DG_DATA.dg' on '12102-rac3' succeeded
CRS-6016: Resource auto-start has completed for server 12102-rac3
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2015/08/11 15:37:09 CLSRSC-343: Successfully started Oracle Clusterware stack
 
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2015/08/11 15:37:34 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
 
[oracle@12102-rac3 ContentsXML]$

After the root script completes, you can see that all resources except the db resource have been added on the new node:

[oracle@12102-rac3 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
ora.DG_DATA.dg
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
ora.net1.network
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
ora.ons
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.12102-rac1.vip
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.12102-rac2.vip
      1        ONLINE  ONLINE       12102-rac2               STABLE
ora.12102-rac3.vip
      1        ONLINE  ONLINE       12102-rac3               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       12102-rac2               STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       12102-rac3               STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       12102-rac1               169.254.161.44 192.1
                                                             68.57.34,STABLE
ora.asm
      1        ONLINE  ONLINE       12102-rac1               Started,STABLE
      2        ONLINE  ONLINE       12102-rac2               Started,STABLE
      3        ONLINE  ONLINE       12102-rac3               Started,STABLE
ora.cdbrac.db
      1        ONLINE  ONLINE       12102-rac1               Open,STABLE
      2        ONLINE  ONLINE       12102-rac2               Open,STABLE
ora.cvu
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.gns
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.gns.vip
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       12102-rac1               Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       12102-rac2               STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       12102-rac3               STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       12102-rac1               STABLE
--------------------------------------------------------------------------------
[oracle@12102-rac3 ~]$

At this point the grid-side node addition is complete. Next we add the DB software and then the DB instance.
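
Before moving on to the DB side, a quick sanity check that the new node is an active cluster member (a sketch, run with the grid environment set):

olsnodes -n -s -t
crsctl check cluster -all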

===================================================================================================
###################################################################################################
===================================================================================================

Switch to the DB oracle home and run the addnode script. We add in -silent mode, so no graphical interface is needed:

[oracle@12102-rac1 addnode]$ db_env
[oracle@12102-rac1 addnode]$ cd $ORACLE_HOME/addnode
[oracle@12102-rac1 addnode]$ pwd
/u01/app/oracle/product/12.1.0.2/db_1/addnode
[oracle@12102-rac1 addnode]$
[oracle@12102-rac1 addnode]$ ls
addnode_oraparam.ini  addnode_oraparam.ini.sbs  addnode.sh
[oracle@12102-rac1 addnode]$

Note: the oracle user must be in the oper group.
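
As with the grid-side note, a quick check (a sketch):

id oracle                     # the group list should include oper
usermod -a -G oper oracle     # run as root, only if the group is missing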

[oracle@12102-rac1 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={12102-rac3}"
Starting Oracle Universal Installer...
 
Checking Temp space: must be greater than 120 MB.   Actual 3163 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 1952 MB    Passed
 [WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/addNodeActions2015-08-11_04-02-37PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/addNodeActions2015-08-11_04-02-37PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
 
Prepare Configuration in progress.
 
Prepare Configuration successful.
..................................................   8% Done.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/addNodeActions2015-08-11_04-02-37PM.log
 
Instantiate files in progress.
 
Instantiate files successful.
..................................................   14% Done.
 
Copying files to node in progress.
          
Copying files to node successful.
..................................................   73% Done.
 
Saving cluster inventory in progress.
 ..................................................   80% Done.
 
Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/oracle/product/12.1.0.2/db_1 was successful.
Please check '/tmp/silentInstall.log' for more details.
 
Setup Oracle Base in progress.
 
Setup Oracle Base successful.
..................................................   88% Done.
 
As a root user, execute the following script(s):
        1. /u01/app/oracle/product/12.1.0.2/db_1/root.sh
 
Execute /u01/app/oracle/product/12.1.0.2/db_1/root.sh on the following nodes:
[12102-rac3]
 
 
..........
Update Inventory in progress.
 ..................................................   100% Done.
 
Update Inventory successful.
Successfully Setup Software.
[oracle@12102-rac1 addnode]$

Then run the root script:

[root@12102-rac3 ~]# sh /u01/app/oracle/product/12.1.0.2/db_1/root.sh
Check /u01/app/oracle/product/12.1.0.2/db_1/install/root_12102-rac3_2015-08-11_16-35-49.log for the output of root script
[root@12102-rac3 ~]#

At this point the DB software addition is complete. Next we add the db resource to the cluster:

===================================================================================================
###################################################################################################
===================================================================================================

Since my database is policy managed, the instance was added automatically after I ran srvctl modify srvpool -serverpool mysrvpool -max 3.
If the database is not policy managed (i.e. it is admin managed), you need to run: dbca -silent -addInstance -nodeList 12102-rac3 -gdbName CDBRAC -instanceName cdbrac_3 -sysDBAUsername sys -sysDBAPassword oracle
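
To confirm which management mode the database uses before choosing between the two approaches, a quick filter on the srvctl config output (the full output is shown below):

srvctl config database -db cdbrac | grep -i managed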

With my policy-managed setup, modify the server pool in one window:

[oracle@12102-rac1 addnode]$ srvctl config database -db cdbrac
Database unique name: cdbrac
Database name: cdbrac
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DG_DATA/CDBRAC/PARAMETERFILE/spfile.296.883665849
Password file: +DG_DATA/CDBRAC/PASSWORD/pwdcdbrac.276.883650747
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: mysrvpool
Disk Groups: DG_DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: oper
Database instances:
Configured nodes:
Database is policy managed
[oracle@12102-rac1 addnode]$
[oracle@12102-rac1 addnode]$ grid_env
[oracle@12102-rac1 addnode]$ srvctl status srvpool -detail
Server pool name: Free
Active servers count: 1
Active server names: 12102-rac3
NAME=12102-rac3 STATE=ONLINE
Server pool name: Generic
Active servers count: 0
Active server names:
Server pool name: mysrvpool
Active servers count: 2
Active server names: 12102-rac1,12102-rac2
NAME=12102-rac1 STATE=ONLINE
NAME=12102-rac2 STATE=ONLINE
[oracle@12102-rac1 addnode]$
[oracle@12102-rac1 addnode]$
[oracle@12102-rac1 addnode]$ srvctl modify srvpool -serverpool mysrvpool -max 3
[oracle@12102-rac1 addnode]$
[oracle@12102-rac1 addnode]$ srvctl status srvpool -detail
Server pool name: Free
Active servers count: 0
Active server names:
Server pool name: Generic
Active servers count: 0
Active server names:
Server pool name: mysrvpool
Active servers count: 3
Active server names: 12102-rac1,12102-rac2,12102-rac3
NAME=12102-rac1 STATE=ONLINE
NAME=12102-rac2 STATE=ONLINE
NAME=12102-rac3 STATE=ONLINE
[oracle@12102-rac1 addnode]$
[oracle@12102-rac1 addnode]$ srvctl config database -db cdbrac
Database unique name: cdbrac
Database name: cdbrac
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DG_DATA/CDBRAC/PARAMETERFILE/spfile.296.883665849
Password file: +DG_DATA/CDBRAC/PASSWORD/pwdcdbrac.276.883650747
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: mysrvpool
Disk Groups: DG_DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: oper
Database instances:
Configured nodes:
Database is policy managed
[oracle@12102-rac1 addnode]$

In another window, run crsctl stat res -t repeatedly; after the modify you can see the resource get added automatically. Note how ora.cdbrac.db changes across the snapshots below:
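
Instead of re-running the full command by hand, you can also watch just the database resource (a sketch, assuming watch is available and the grid environment is set):

watch -n 5 "crsctl stat res -t | grep -A 4 ora.cdbrac.db"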

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
ora.DG_DATA.dg
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
ora.net1.network
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
ora.ons
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.12102-rac1.vip
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.12102-rac2.vip
      1        ONLINE  ONLINE       12102-rac2               STABLE
ora.12102-rac3.vip
      1        ONLINE  ONLINE       12102-rac3               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       12102-rac2               STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       12102-rac3               STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       12102-rac1               169.254.161.44 192.1
                                                             68.57.34,STABLE
ora.asm
      1        ONLINE  ONLINE       12102-rac1               Started,STABLE
      2        ONLINE  ONLINE       12102-rac2               Started,STABLE
      3        ONLINE  ONLINE       12102-rac3               Started,STABLE
ora.cdbrac.db
      1        ONLINE  ONLINE       12102-rac1               Open,STABLE
      2        ONLINE  ONLINE       12102-rac2               Open,STABLE
ora.cvu
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.gns
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.gns.vip
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       12102-rac1               Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       12102-rac2               STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       12102-rac3               STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       12102-rac1               STABLE
--------------------------------------------------------------------------------
 
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
ora.DG_DATA.dg
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
ora.net1.network
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
ora.ons
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.12102-rac1.vip
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.12102-rac2.vip
      1        ONLINE  ONLINE       12102-rac2               STABLE
ora.12102-rac3.vip
      1        ONLINE  ONLINE       12102-rac3               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       12102-rac2               STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       12102-rac3               STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       12102-rac1               169.254.161.44 192.1
                                                             68.57.34,STABLE
ora.asm
      1        ONLINE  ONLINE       12102-rac1               Started,STABLE
      2        ONLINE  ONLINE       12102-rac2               Started,STABLE
      3        ONLINE  ONLINE       12102-rac3               Started,STABLE
ora.cdbrac.db
      1        ONLINE  ONLINE       12102-rac1               Open,STABLE
      2        ONLINE  ONLINE       12102-rac2               Open,STABLE
      3        ONLINE  OFFLINE      12102-rac3               STARTING
ora.cvu
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.gns
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.gns.vip
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       12102-rac1               Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       12102-rac2               STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       12102-rac3               STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       12102-rac1               STABLE
--------------------------------------------------------------------------------
 
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
ora.DG_DATA.dg
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
ora.net1.network
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
ora.ons
               ONLINE  ONLINE       12102-rac1               STABLE
               ONLINE  ONLINE       12102-rac2               STABLE
               ONLINE  ONLINE       12102-rac3               STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.12102-rac1.vip
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.12102-rac2.vip
      1        ONLINE  ONLINE       12102-rac2               STABLE
ora.12102-rac3.vip
      1        ONLINE  ONLINE       12102-rac3               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       12102-rac2               STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       12102-rac3               STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       12102-rac1               169.254.161.44 192.1
                                                             68.57.34,STABLE
ora.asm
      1        ONLINE  ONLINE       12102-rac1               Started,STABLE
      2        ONLINE  ONLINE       12102-rac2               Started,STABLE
      3        ONLINE  ONLINE       12102-rac3               Started,STABLE
ora.cdbrac.db
      1        ONLINE  ONLINE       12102-rac1               Open,STABLE
      2        ONLINE  ONLINE       12102-rac2               Open,STABLE
      3        ONLINE  ONLINE       12102-rac3               Open,STABLE
ora.cvu
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.gns
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.gns.vip
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       12102-rac1               Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       12102-rac2               STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       12102-rac3               STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       12102-rac1               STABLE
--------------------------------------------------------------------------------

Finally, run a cluster verify check; if it reports no problems, we are done:

[oracle@12102-rac1 addnode]$ cluvfy stage -post nodeadd -n 12102-rac3 -verbose
 
Performing post-checks for node addition
 
Checking node reachability...
 
Check: Node reachability from node "12102-rac1"
  Destination Node                      Reachable?             
  ------------------------------------  ------------------------
  12102-rac3                            yes                     
Result: Node reachability check passed from node "12102-rac1"
 
 
Checking user equivalence...
 
Check: User equivalence for user "oracle"
  Node Name                             Status                 
  ------------------------------------  ------------------------
  12102-rac3                            passed                 
Result: User equivalence check passed for user "oracle"
 
Checking node connectivity...
 
Checking hosts config file...
  Node Name                             Status                 
  ------------------------------------  ------------------------
  12102-rac1                            passed                 
  12102-rac3                            passed                 
  12102-rac2                            passed                 
 
Verification of the hosts config file successful
 
 
Interface information for node "12102-rac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.124  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:AD:B5:33 1500 
 eth0   192.168.56.108  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:AD:B5:33 1500 
 eth0   192.168.56.26   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:AD:B5:33 1500 
 eth0   192.168.56.28   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:AD:B5:33 1500 
 
 
Interface information for node "12102-rac3"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.127  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:A6:B7:99 1500 
 eth0   192.168.56.27   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:A6:B7:99 1500 
 eth0   192.168.56.29   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:A6:B7:99 1500 
 
 
Interface information for node "12102-rac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.125  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:60:27:F9 1500 
 eth0   192.168.56.25   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:60:27:F9 1500 
 eth0   192.168.56.22   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:60:27:F9 1500 
 
 
Check: Node connectivity using interfaces on subnet "192.168.56.0"
 
Check: Node connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac3[192.168.56.127]      12102-rac2[192.168.56.22]       yes             
  12102-rac3[192.168.56.127]      12102-rac1[192.168.56.124]      yes             
  12102-rac3[192.168.56.127]      12102-rac1[192.168.56.26]       yes             
  12102-rac3[192.168.56.127]      12102-rac2[192.168.56.125]      yes             
  12102-rac3[192.168.56.127]      12102-rac3[192.168.56.29]       yes             
  12102-rac3[192.168.56.127]      12102-rac2[192.168.56.25]       yes             
  12102-rac3[192.168.56.127]      12102-rac1[192.168.56.28]       yes             
  12102-rac3[192.168.56.127]      12102-rac1[192.168.56.108]      yes             
  12102-rac3[192.168.56.127]      12102-rac3[192.168.56.27]       yes             
  12102-rac2[192.168.56.22]       12102-rac1[192.168.56.124]      yes             
  12102-rac2[192.168.56.22]       12102-rac1[192.168.56.26]       yes             
  12102-rac2[192.168.56.22]       12102-rac2[192.168.56.125]      yes             
  12102-rac2[192.168.56.22]       12102-rac3[192.168.56.29]       yes             
  12102-rac2[192.168.56.22]       12102-rac2[192.168.56.25]       yes             
  12102-rac2[192.168.56.22]       12102-rac1[192.168.56.28]       yes             
  12102-rac2[192.168.56.22]       12102-rac1[192.168.56.108]      yes             
  12102-rac2[192.168.56.22]       12102-rac3[192.168.56.27]       yes             
  12102-rac1[192.168.56.124]      12102-rac1[192.168.56.26]       yes             
  12102-rac1[192.168.56.124]      12102-rac2[192.168.56.125]      yes             
  12102-rac1[192.168.56.124]      12102-rac3[192.168.56.29]       yes             
  12102-rac1[192.168.56.124]      12102-rac2[192.168.56.25]       yes             
  12102-rac1[192.168.56.124]      12102-rac1[192.168.56.28]       yes             
  12102-rac1[192.168.56.124]      12102-rac1[192.168.56.108]      yes             
  12102-rac1[192.168.56.124]      12102-rac3[192.168.56.27]       yes             
  12102-rac1[192.168.56.26]       12102-rac2[192.168.56.125]      yes             
  12102-rac1[192.168.56.26]       12102-rac3[192.168.56.29]       yes             
  12102-rac1[192.168.56.26]       12102-rac2[192.168.56.25]       yes             
  12102-rac1[192.168.56.26]       12102-rac1[192.168.56.28]       yes             
  12102-rac1[192.168.56.26]       12102-rac1[192.168.56.108]      yes             
  12102-rac1[192.168.56.26]       12102-rac3[192.168.56.27]       yes             
  12102-rac2[192.168.56.125]      12102-rac3[192.168.56.29]       yes             
  12102-rac2[192.168.56.125]      12102-rac2[192.168.56.25]       yes             
  12102-rac2[192.168.56.125]      12102-rac1[192.168.56.28]       yes             
  12102-rac2[192.168.56.125]      12102-rac1[192.168.56.108]      yes             
  12102-rac2[192.168.56.125]      12102-rac3[192.168.56.27]       yes             
  12102-rac3[192.168.56.29]       12102-rac2[192.168.56.25]       yes             
  12102-rac3[192.168.56.29]       12102-rac1[192.168.56.28]       yes             
  12102-rac3[192.168.56.29]       12102-rac1[192.168.56.108]      yes             
  12102-rac3[192.168.56.29]       12102-rac3[192.168.56.27]       yes             
  12102-rac2[192.168.56.25]       12102-rac1[192.168.56.28]       yes             
  12102-rac2[192.168.56.25]       12102-rac1[192.168.56.108]      yes             
  12102-rac2[192.168.56.25]       12102-rac3[192.168.56.27]       yes             
  12102-rac1[192.168.56.28]       12102-rac1[192.168.56.108]      yes             
  12102-rac1[192.168.56.28]       12102-rac3[192.168.56.27]       yes             
  12102-rac1[192.168.56.108]      12102-rac3[192.168.56.27]       yes             
Result: Node connectivity passed for subnet "192.168.56.0" with node(s) 12102-rac3,12102-rac2,12102-rac1
 
 
Check: TCP connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac3 : 192.168.56.127     12102-rac3 : 192.168.56.127     passed         
  12102-rac2 : 192.168.56.22      12102-rac3 : 192.168.56.127     passed         
  12102-rac1 : 192.168.56.124     12102-rac3 : 192.168.56.127     passed         
  12102-rac1 : 192.168.56.26      12102-rac3 : 192.168.56.127     passed         
  12102-rac2 : 192.168.56.125     12102-rac3 : 192.168.56.127     passed         
  12102-rac3 : 192.168.56.29      12102-rac3 : 192.168.56.127     passed         
  12102-rac2 : 192.168.56.25      12102-rac3 : 192.168.56.127     passed         
  12102-rac1 : 192.168.56.28      12102-rac3 : 192.168.56.127     passed         
  12102-rac1 : 192.168.56.108     12102-rac3 : 192.168.56.127     passed         
  12102-rac3 : 192.168.56.27      12102-rac3 : 192.168.56.127     passed         
  12102-rac3 : 192.168.56.127     12102-rac2 : 192.168.56.22      passed         
  12102-rac2 : 192.168.56.22      12102-rac2 : 192.168.56.22      passed         
  12102-rac1 : 192.168.56.124     12102-rac2 : 192.168.56.22      passed         
  12102-rac1 : 192.168.56.26      12102-rac2 : 192.168.56.22      passed         
  12102-rac2 : 192.168.56.125     12102-rac2 : 192.168.56.22      passed         
  12102-rac3 : 192.168.56.29      12102-rac2 : 192.168.56.22      passed         
  12102-rac2 : 192.168.56.25      12102-rac2 : 192.168.56.22      passed         
  12102-rac1 : 192.168.56.28      12102-rac2 : 192.168.56.22      passed         
  12102-rac1 : 192.168.56.108     12102-rac2 : 192.168.56.22      passed         
  12102-rac3 : 192.168.56.27      12102-rac2 : 192.168.56.22      passed         
  12102-rac3 : 192.168.56.127     12102-rac1 : 192.168.56.124     passed         
  12102-rac2 : 192.168.56.22      12102-rac1 : 192.168.56.124     passed         
  12102-rac1 : 192.168.56.124     12102-rac1 : 192.168.56.124     passed         
  12102-rac1 : 192.168.56.26      12102-rac1 : 192.168.56.124     passed         
  12102-rac2 : 192.168.56.125     12102-rac1 : 192.168.56.124     passed         
  12102-rac3 : 192.168.56.29      12102-rac1 : 192.168.56.124     passed         
  12102-rac2 : 192.168.56.25      12102-rac1 : 192.168.56.124     passed         
  12102-rac1 : 192.168.56.28      12102-rac1 : 192.168.56.124     passed         
  12102-rac1 : 192.168.56.108     12102-rac1 : 192.168.56.124     passed         
  12102-rac3 : 192.168.56.27      12102-rac1 : 192.168.56.124     passed         
  12102-rac3 : 192.168.56.127     12102-rac1 : 192.168.56.26      passed         
  12102-rac2 : 192.168.56.22      12102-rac1 : 192.168.56.26      passed         
  12102-rac1 : 192.168.56.124     12102-rac1 : 192.168.56.26      passed         
  12102-rac1 : 192.168.56.26      12102-rac1 : 192.168.56.26      passed         
  12102-rac2 : 192.168.56.125     12102-rac1 : 192.168.56.26      passed         
  12102-rac3 : 192.168.56.29      12102-rac1 : 192.168.56.26      passed         
  12102-rac2 : 192.168.56.25      12102-rac1 : 192.168.56.26      passed         
  12102-rac1 : 192.168.56.28      12102-rac1 : 192.168.56.26      passed         
  12102-rac1 : 192.168.56.108     12102-rac1 : 192.168.56.26      passed         
  12102-rac3 : 192.168.56.27      12102-rac1 : 192.168.56.26      passed         
  12102-rac3 : 192.168.56.127     12102-rac2 : 192.168.56.125     passed         
  12102-rac2 : 192.168.56.22      12102-rac2 : 192.168.56.125     passed         
  12102-rac1 : 192.168.56.124     12102-rac2 : 192.168.56.125     passed         
  12102-rac1 : 192.168.56.26      12102-rac2 : 192.168.56.125     passed         
  12102-rac2 : 192.168.56.125     12102-rac2 : 192.168.56.125     passed         
  12102-rac3 : 192.168.56.29      12102-rac2 : 192.168.56.125     passed         
  12102-rac2 : 192.168.56.25      12102-rac2 : 192.168.56.125     passed         
  12102-rac1 : 192.168.56.28      12102-rac2 : 192.168.56.125     passed         
  12102-rac1 : 192.168.56.108     12102-rac2 : 192.168.56.125     passed         
  12102-rac3 : 192.168.56.27      12102-rac2 : 192.168.56.125     passed         
  12102-rac3 : 192.168.56.127     12102-rac3 : 192.168.56.29      passed         
  12102-rac2 : 192.168.56.22      12102-rac3 : 192.168.56.29      passed         
  12102-rac1 : 192.168.56.124     12102-rac3 : 192.168.56.29      passed         
  12102-rac1 : 192.168.56.26      12102-rac3 : 192.168.56.29      passed         
  12102-rac2 : 192.168.56.125     12102-rac3 : 192.168.56.29      passed         
  12102-rac3 : 192.168.56.29      12102-rac3 : 192.168.56.29      passed         
  12102-rac2 : 192.168.56.25      12102-rac3 : 192.168.56.29      passed         
  12102-rac1 : 192.168.56.28      12102-rac3 : 192.168.56.29      passed         
  12102-rac1 : 192.168.56.108     12102-rac3 : 192.168.56.29      passed         
  12102-rac3 : 192.168.56.27      12102-rac3 : 192.168.56.29      passed         
  12102-rac3 : 192.168.56.127     12102-rac2 : 192.168.56.25      passed         
  12102-rac2 : 192.168.56.22      12102-rac2 : 192.168.56.25      passed         
  12102-rac1 : 192.168.56.124     12102-rac2 : 192.168.56.25      passed         
  12102-rac1 : 192.168.56.26      12102-rac2 : 192.168.56.25      passed         
  12102-rac2 : 192.168.56.125     12102-rac2 : 192.168.56.25      passed         
  12102-rac3 : 192.168.56.29      12102-rac2 : 192.168.56.25      passed         
  12102-rac2 : 192.168.56.25      12102-rac2 : 192.168.56.25      passed         
  12102-rac1 : 192.168.56.28      12102-rac2 : 192.168.56.25      passed         
  12102-rac1 : 192.168.56.108     12102-rac2 : 192.168.56.25      passed         
  12102-rac3 : 192.168.56.27      12102-rac2 : 192.168.56.25      passed         
  12102-rac3 : 192.168.56.127     12102-rac1 : 192.168.56.28      passed         
  12102-rac2 : 192.168.56.22      12102-rac1 : 192.168.56.28      passed         
  12102-rac1 : 192.168.56.124     12102-rac1 : 192.168.56.28      passed         
  12102-rac1 : 192.168.56.26      12102-rac1 : 192.168.56.28      passed         
  12102-rac2 : 192.168.56.125     12102-rac1 : 192.168.56.28      passed         
  12102-rac3 : 192.168.56.29      12102-rac1 : 192.168.56.28      passed         
  12102-rac2 : 192.168.56.25      12102-rac1 : 192.168.56.28      passed         
  12102-rac1 : 192.168.56.28      12102-rac1 : 192.168.56.28      passed         
  12102-rac1 : 192.168.56.108     12102-rac1 : 192.168.56.28      passed         
  12102-rac3 : 192.168.56.27      12102-rac1 : 192.168.56.28      passed         
  12102-rac3 : 192.168.56.127     12102-rac1 : 192.168.56.108     passed         
  12102-rac2 : 192.168.56.22      12102-rac1 : 192.168.56.108     passed         
  12102-rac1 : 192.168.56.124     12102-rac1 : 192.168.56.108     passed         
  12102-rac1 : 192.168.56.26      12102-rac1 : 192.168.56.108     passed         
  12102-rac2 : 192.168.56.125     12102-rac1 : 192.168.56.108     passed         
  12102-rac3 : 192.168.56.29      12102-rac1 : 192.168.56.108     passed         
  12102-rac2 : 192.168.56.25      12102-rac1 : 192.168.56.108     passed         
  12102-rac1 : 192.168.56.28      12102-rac1 : 192.168.56.108     passed         
  12102-rac1 : 192.168.56.108     12102-rac1 : 192.168.56.108     passed         
  12102-rac3 : 192.168.56.27      12102-rac1 : 192.168.56.108     passed         
  12102-rac3 : 192.168.56.127     12102-rac3 : 192.168.56.27      passed         
  12102-rac2 : 192.168.56.22      12102-rac3 : 192.168.56.27      passed         
  12102-rac1 : 192.168.56.124     12102-rac3 : 192.168.56.27      passed         
  12102-rac1 : 192.168.56.26      12102-rac3 : 192.168.56.27      passed         
  12102-rac2 : 192.168.56.125     12102-rac3 : 192.168.56.27      passed         
  12102-rac3 : 192.168.56.29      12102-rac3 : 192.168.56.27      passed         
  12102-rac2 : 192.168.56.25      12102-rac3 : 192.168.56.27      passed         
  12102-rac1 : 192.168.56.28      12102-rac3 : 192.168.56.27      passed         
  12102-rac1 : 192.168.56.108     12102-rac3 : 192.168.56.27      passed         
  12102-rac3 : 192.168.56.27      12102-rac3 : 192.168.56.27      passed         
Result: TCP connectivity check passed for subnet "192.168.56.0"
 
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed.
 
Result: Node connectivity check passed
 
 
Checking cluster integrity...
 
  Node Name                           
  ------------------------------------
  12102-rac1                         
  12102-rac2                         
  12102-rac3                         
 
Cluster integrity check passed
 
 
Checking CRS integrity...
The Oracle Clusterware is healthy on node "12102-rac1"
 
CRS integrity check passed
 
Clusterware version consistency passed.
 
Checking shared resources...
 
Checking CRS home location...
"/u01/app/12.1.0.2/grid" is not shared
Result: Shared resources check for node addition passed
 
 
Checking node connectivity...
 
Checking hosts config file...
  Node Name                             Status                 
  ------------------------------------  ------------------------
  12102-rac1                            passed                 
  12102-rac3                            passed                 
  12102-rac2                            passed                 
 
Verification of the hosts config file successful
 
 
Interface information for node "12102-rac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.124  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:AD:B5:33 1500 
 eth0   192.168.56.108  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:AD:B5:33 1500 
 eth0   192.168.56.26   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:AD:B5:33 1500 
 eth0   192.168.56.28   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:AD:B5:33 1500 
 eth1   192.168.57.34   192.168.57.0    0.0.0.0         192.168.56.1    08:00:27:35:04:BB 1500 
 eth1   169.254.161.44  169.254.0.0     0.0.0.0         192.168.56.1    08:00:27:35:04:BB 1500 
 
 
Interface information for node "12102-rac3"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.127  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:A6:B7:99 1500 
 eth0   192.168.56.27   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:A6:B7:99 1500 
 eth0   192.168.56.29   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:A6:B7:99 1500 
 eth1   192.168.57.37   192.168.57.0    0.0.0.0         192.168.56.1    08:00:27:2C:DC:8C 1500 
 eth1   169.254.3.240   169.254.0.0     0.0.0.0         192.168.56.1    08:00:27:2C:DC:8C 1500 
 
 
Interface information for node "12102-rac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.56.125  192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:60:27:F9 1500 
 eth0   192.168.56.25   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:60:27:F9 1500 
 eth0   192.168.56.22   192.168.56.0    0.0.0.0         192.168.56.1    08:00:27:60:27:F9 1500 
 eth1   192.168.57.35   192.168.57.0    0.0.0.0         192.168.56.1    08:00:27:47:D4:A9 1500 
 eth1   169.254.7.3     169.254.0.0     0.0.0.0         192.168.56.1    08:00:27:47:D4:A9 1500 
 
 
Check: Node connectivity using interfaces on subnet "192.168.56.0"
 
Check: Node connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac2[192.168.56.125]      12102-rac2[192.168.56.25]       yes             
  12102-rac2[192.168.56.125]      12102-rac1[192.168.56.108]      yes             
  12102-rac2[192.168.56.125]      12102-rac1[192.168.56.26]       yes             
  12102-rac2[192.168.56.125]      12102-rac3[192.168.56.29]       yes             
  12102-rac2[192.168.56.125]      12102-rac2[192.168.56.22]       yes             
  12102-rac2[192.168.56.125]      12102-rac1[192.168.56.28]       yes             
  12102-rac2[192.168.56.125]      12102-rac3[192.168.56.27]       yes             
  12102-rac2[192.168.56.125]      12102-rac1[192.168.56.124]      yes             
  12102-rac2[192.168.56.125]      12102-rac3[192.168.56.127]      yes             
  12102-rac2[192.168.56.25]       12102-rac1[192.168.56.108]      yes             
  12102-rac2[192.168.56.25]       12102-rac1[192.168.56.26]       yes             
  12102-rac2[192.168.56.25]       12102-rac3[192.168.56.29]       yes             
  12102-rac2[192.168.56.25]       12102-rac2[192.168.56.22]       yes             
  12102-rac2[192.168.56.25]       12102-rac1[192.168.56.28]       yes             
  12102-rac2[192.168.56.25]       12102-rac3[192.168.56.27]       yes             
  12102-rac2[192.168.56.25]       12102-rac1[192.168.56.124]      yes             
  12102-rac2[192.168.56.25]       12102-rac3[192.168.56.127]      yes             
  12102-rac1[192.168.56.108]      12102-rac1[192.168.56.26]       yes             
  12102-rac1[192.168.56.108]      12102-rac3[192.168.56.29]       yes             
  12102-rac1[192.168.56.108]      12102-rac2[192.168.56.22]       yes             
  12102-rac1[192.168.56.108]      12102-rac1[192.168.56.28]       yes             
  12102-rac1[192.168.56.108]      12102-rac3[192.168.56.27]       yes             
  12102-rac1[192.168.56.108]      12102-rac1[192.168.56.124]      yes             
  12102-rac1[192.168.56.108]      12102-rac3[192.168.56.127]      yes             
  12102-rac1[192.168.56.26]       12102-rac3[192.168.56.29]       yes             
  12102-rac1[192.168.56.26]       12102-rac2[192.168.56.22]       yes             
  12102-rac1[192.168.56.26]       12102-rac1[192.168.56.28]       yes             
  12102-rac1[192.168.56.26]       12102-rac3[192.168.56.27]       yes             
  12102-rac1[192.168.56.26]       12102-rac1[192.168.56.124]      yes             
  12102-rac1[192.168.56.26]       12102-rac3[192.168.56.127]      yes             
  12102-rac3[192.168.56.29]       12102-rac2[192.168.56.22]       yes             
  12102-rac3[192.168.56.29]       12102-rac1[192.168.56.28]       yes             
  12102-rac3[192.168.56.29]       12102-rac3[192.168.56.27]       yes             
  12102-rac3[192.168.56.29]       12102-rac1[192.168.56.124]      yes             
  12102-rac3[192.168.56.29]       12102-rac3[192.168.56.127]      yes             
  12102-rac2[192.168.56.22]       12102-rac1[192.168.56.28]       yes             
  12102-rac2[192.168.56.22]       12102-rac3[192.168.56.27]       yes             
  12102-rac2[192.168.56.22]       12102-rac1[192.168.56.124]      yes             
  12102-rac2[192.168.56.22]       12102-rac3[192.168.56.127]      yes             
  12102-rac1[192.168.56.28]       12102-rac3[192.168.56.27]       yes             
  12102-rac1[192.168.56.28]       12102-rac1[192.168.56.124]      yes             
  12102-rac1[192.168.56.28]       12102-rac3[192.168.56.127]      yes             
  12102-rac3[192.168.56.27]       12102-rac1[192.168.56.124]      yes             
  12102-rac3[192.168.56.27]       12102-rac3[192.168.56.127]      yes             
  12102-rac1[192.168.56.124]      12102-rac3[192.168.56.127]      yes             
Result: Node connectivity passed for subnet "192.168.56.0" with node(s) 12102-rac2,12102-rac1,12102-rac3
 
 
Check: TCP connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac2 : 192.168.56.125     12102-rac2 : 192.168.56.125     passed         
  12102-rac2 : 192.168.56.25      12102-rac2 : 192.168.56.125     passed         
  12102-rac1 : 192.168.56.108     12102-rac2 : 192.168.56.125     passed         
  12102-rac1 : 192.168.56.26      12102-rac2 : 192.168.56.125     passed         
  12102-rac3 : 192.168.56.29      12102-rac2 : 192.168.56.125     passed         
  12102-rac2 : 192.168.56.22      12102-rac2 : 192.168.56.125     passed         
  12102-rac1 : 192.168.56.28      12102-rac2 : 192.168.56.125     passed         
  12102-rac3 : 192.168.56.27      12102-rac2 : 192.168.56.125     passed         
  12102-rac1 : 192.168.56.124     12102-rac2 : 192.168.56.125     passed         
  12102-rac3 : 192.168.56.127     12102-rac2 : 192.168.56.125     passed         
  12102-rac2 : 192.168.56.125     12102-rac2 : 192.168.56.25      passed         
  12102-rac2 : 192.168.56.25      12102-rac2 : 192.168.56.25      passed         
  12102-rac1 : 192.168.56.108     12102-rac2 : 192.168.56.25      passed         
  12102-rac1 : 192.168.56.26      12102-rac2 : 192.168.56.25      passed         
  12102-rac3 : 192.168.56.29      12102-rac2 : 192.168.56.25      passed         
  12102-rac2 : 192.168.56.22      12102-rac2 : 192.168.56.25      passed         
  12102-rac1 : 192.168.56.28      12102-rac2 : 192.168.56.25      passed         
  12102-rac3 : 192.168.56.27      12102-rac2 : 192.168.56.25      passed         
  12102-rac1 : 192.168.56.124     12102-rac2 : 192.168.56.25      passed         
  12102-rac3 : 192.168.56.127     12102-rac2 : 192.168.56.25      passed         
  12102-rac2 : 192.168.56.125     12102-rac1 : 192.168.56.108     passed         
  12102-rac2 : 192.168.56.25      12102-rac1 : 192.168.56.108     passed         
  12102-rac1 : 192.168.56.108     12102-rac1 : 192.168.56.108     passed         
  12102-rac1 : 192.168.56.26      12102-rac1 : 192.168.56.108     passed         
  12102-rac3 : 192.168.56.29      12102-rac1 : 192.168.56.108     passed         
  12102-rac2 : 192.168.56.22      12102-rac1 : 192.168.56.108     passed         
  12102-rac1 : 192.168.56.28      12102-rac1 : 192.168.56.108     passed         
  12102-rac3 : 192.168.56.27      12102-rac1 : 192.168.56.108     passed         
  12102-rac1 : 192.168.56.124     12102-rac1 : 192.168.56.108     passed         
  12102-rac3 : 192.168.56.127     12102-rac1 : 192.168.56.108     passed         
  12102-rac2 : 192.168.56.125     12102-rac1 : 192.168.56.26      passed         
  12102-rac2 : 192.168.56.25      12102-rac1 : 192.168.56.26      passed         
  12102-rac1 : 192.168.56.108     12102-rac1 : 192.168.56.26      passed         
  12102-rac1 : 192.168.56.26      12102-rac1 : 192.168.56.26      passed         
  12102-rac3 : 192.168.56.29      12102-rac1 : 192.168.56.26      passed         
  12102-rac2 : 192.168.56.22      12102-rac1 : 192.168.56.26      passed         
  12102-rac1 : 192.168.56.28      12102-rac1 : 192.168.56.26      passed         
  12102-rac3 : 192.168.56.27      12102-rac1 : 192.168.56.26      passed         
  12102-rac1 : 192.168.56.124     12102-rac1 : 192.168.56.26      passed         
  12102-rac3 : 192.168.56.127     12102-rac1 : 192.168.56.26      passed         
  12102-rac2 : 192.168.56.125     12102-rac3 : 192.168.56.29      passed         
  12102-rac2 : 192.168.56.25      12102-rac3 : 192.168.56.29      passed         
  12102-rac1 : 192.168.56.108     12102-rac3 : 192.168.56.29      passed         
  12102-rac1 : 192.168.56.26      12102-rac3 : 192.168.56.29      passed         
  12102-rac3 : 192.168.56.29      12102-rac3 : 192.168.56.29      passed         
  12102-rac2 : 192.168.56.22      12102-rac3 : 192.168.56.29      passed         
  12102-rac1 : 192.168.56.28      12102-rac3 : 192.168.56.29      passed         
  12102-rac3 : 192.168.56.27      12102-rac3 : 192.168.56.29      passed         
  12102-rac1 : 192.168.56.124     12102-rac3 : 192.168.56.29      passed         
  12102-rac3 : 192.168.56.127     12102-rac3 : 192.168.56.29      passed         
  12102-rac2 : 192.168.56.125     12102-rac2 : 192.168.56.22      passed         
  12102-rac2 : 192.168.56.25      12102-rac2 : 192.168.56.22      passed         
  12102-rac1 : 192.168.56.108     12102-rac2 : 192.168.56.22      passed         
  12102-rac1 : 192.168.56.26      12102-rac2 : 192.168.56.22      passed         
  12102-rac3 : 192.168.56.29      12102-rac2 : 192.168.56.22      passed         
  12102-rac2 : 192.168.56.22      12102-rac2 : 192.168.56.22      passed         
  12102-rac1 : 192.168.56.28      12102-rac2 : 192.168.56.22      passed         
  12102-rac3 : 192.168.56.27      12102-rac2 : 192.168.56.22      passed         
  12102-rac1 : 192.168.56.124     12102-rac2 : 192.168.56.22      passed         
  12102-rac3 : 192.168.56.127     12102-rac2 : 192.168.56.22      passed         
  12102-rac2 : 192.168.56.125     12102-rac1 : 192.168.56.28      passed         
  12102-rac2 : 192.168.56.25      12102-rac1 : 192.168.56.28      passed         
  12102-rac1 : 192.168.56.108     12102-rac1 : 192.168.56.28      passed         
  12102-rac1 : 192.168.56.26      12102-rac1 : 192.168.56.28      passed         
  12102-rac3 : 192.168.56.29      12102-rac1 : 192.168.56.28      passed         
  12102-rac2 : 192.168.56.22      12102-rac1 : 192.168.56.28      passed         
  12102-rac1 : 192.168.56.28      12102-rac1 : 192.168.56.28      passed         
  12102-rac3 : 192.168.56.27      12102-rac1 : 192.168.56.28      passed         
  12102-rac1 : 192.168.56.124     12102-rac1 : 192.168.56.28      passed         
  12102-rac3 : 192.168.56.127     12102-rac1 : 192.168.56.28      passed         
  12102-rac2 : 192.168.56.125     12102-rac3 : 192.168.56.27      passed         
  12102-rac2 : 192.168.56.25      12102-rac3 : 192.168.56.27      passed         
  12102-rac1 : 192.168.56.108     12102-rac3 : 192.168.56.27      passed         
  12102-rac1 : 192.168.56.26      12102-rac3 : 192.168.56.27      passed         
  12102-rac3 : 192.168.56.29      12102-rac3 : 192.168.56.27      passed         
  12102-rac2 : 192.168.56.22      12102-rac3 : 192.168.56.27      passed         
  12102-rac1 : 192.168.56.28      12102-rac3 : 192.168.56.27      passed         
  12102-rac3 : 192.168.56.27      12102-rac3 : 192.168.56.27      passed         
  12102-rac1 : 192.168.56.124     12102-rac3 : 192.168.56.27      passed         
  12102-rac3 : 192.168.56.127     12102-rac3 : 192.168.56.27      passed         
  12102-rac2 : 192.168.56.125     12102-rac1 : 192.168.56.124     passed         
  12102-rac2 : 192.168.56.25      12102-rac1 : 192.168.56.124     passed         
  12102-rac1 : 192.168.56.108     12102-rac1 : 192.168.56.124     passed         
  12102-rac1 : 192.168.56.26      12102-rac1 : 192.168.56.124     passed         
  12102-rac3 : 192.168.56.29      12102-rac1 : 192.168.56.124     passed         
  12102-rac2 : 192.168.56.22      12102-rac1 : 192.168.56.124     passed         
  12102-rac1 : 192.168.56.28      12102-rac1 : 192.168.56.124     passed         
  12102-rac3 : 192.168.56.27      12102-rac1 : 192.168.56.124     passed         
  12102-rac1 : 192.168.56.124     12102-rac1 : 192.168.56.124     passed         
  12102-rac3 : 192.168.56.127     12102-rac1 : 192.168.56.124     passed         
  12102-rac2 : 192.168.56.125     12102-rac3 : 192.168.56.127     passed         
  12102-rac2 : 192.168.56.25      12102-rac3 : 192.168.56.127     passed         
  12102-rac1 : 192.168.56.108     12102-rac3 : 192.168.56.127     passed         
  12102-rac1 : 192.168.56.26      12102-rac3 : 192.168.56.127     passed         
  12102-rac3 : 192.168.56.29      12102-rac3 : 192.168.56.127     passed         
  12102-rac2 : 192.168.56.22      12102-rac3 : 192.168.56.127     passed         
  12102-rac1 : 192.168.56.28      12102-rac3 : 192.168.56.127     passed         
  12102-rac3 : 192.168.56.27      12102-rac3 : 192.168.56.127     passed         
  12102-rac1 : 192.168.56.124     12102-rac3 : 192.168.56.127     passed         
  12102-rac3 : 192.168.56.127     12102-rac3 : 192.168.56.127     passed         
Result: TCP connectivity check passed for subnet "192.168.56.0"
 
 
Check: Node connectivity using interfaces on subnet "192.168.57.0"
 
Check: Node connectivity of subnet "192.168.57.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac2[192.168.57.35]       12102-rac1[192.168.57.34]       yes             
  12102-rac2[192.168.57.35]       12102-rac3[192.168.57.37]       yes             
  12102-rac1[192.168.57.34]       12102-rac3[192.168.57.37]       yes             
Result: Node connectivity passed for subnet "192.168.57.0" with node(s) 12102-rac2,12102-rac1,12102-rac3
 
 
Check: TCP connectivity of subnet "192.168.57.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  12102-rac2 : 192.168.57.35      12102-rac2 : 192.168.57.35      passed         
  12102-rac1 : 192.168.57.34      12102-rac2 : 192.168.57.35      passed         
  12102-rac3 : 192.168.57.37      12102-rac2 : 192.168.57.35      passed         
  12102-rac2 : 192.168.57.35      12102-rac1 : 192.168.57.34      passed         
  12102-rac1 : 192.168.57.34      12102-rac1 : 192.168.57.34      passed         
  12102-rac3 : 192.168.57.37      12102-rac1 : 192.168.57.34      passed         
  12102-rac2 : 192.168.57.35      12102-rac3 : 192.168.57.37      passed         
  12102-rac1 : 192.168.57.34      12102-rac3 : 192.168.57.37      passed         
  12102-rac3 : 192.168.57.37      12102-rac3 : 192.168.57.37      passed         
Result: TCP connectivity check passed for subnet "192.168.57.0"
 
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "192.168.57.0".
Subnet mask consistency check passed.
 
Result: Node connectivity check passed
 
Checking multicast communication...
 
Checking subnet "192.168.57.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.57.0" for multicast communication with multicast group "224.0.0.251" passed.
 
Check of multicast communication passed.
 
Checking node application existence...
 
Checking existence of VIP node application (required)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    yes                       yes                       passed   
  12102-rac2    yes                       yes                       passed   
  12102-rac1    yes                       yes                       passed   
VIP node application check passed
 
Checking existence of NETWORK node application (required)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    yes                       yes                       passed   
  12102-rac2    yes                       yes                       passed   
  12102-rac1    yes                       yes                       passed   
NETWORK node application check passed
 
Checking existence of ONS node application (optional)
  Node Name     Required                  Running?                  Comment   
  ------------  ------------------------  ------------------------  ----------
  12102-rac3    no                        yes                       passed   
  12102-rac2    no                        yes                       passed   
  12102-rac1    no                        yes                       passed   
ONS node application check passed
 
 
Checking Single Client Access Name (SCAN)...
   SCAN Name         Node          Running?      ListenerName  Port          Running?   
  ----------------  ------------  ------------  ------------  ------------  ------------
  flex-cluster-scan.gns.localdomain  12102-rac2    true          LISTENER_SCAN1  1521          true       
  flex-cluster-scan.gns.localdomain  12102-rac3    true          LISTENER_SCAN2  1521          true       
  flex-cluster-scan.gns.localdomain  12102-rac1    true          LISTENER_SCAN3  1521          true       
 
Checking TCP connectivity to SCAN listeners...
  Node          ListenerName              TCP connectivity?       
  ------------  ------------------------  ------------------------
  12102-rac3    LISTENER_SCAN1            yes                     
  12102-rac3    LISTENER_SCAN2            yes                     
  12102-rac3    LISTENER_SCAN3            yes                     
TCP connectivity to SCAN listeners exists on all cluster nodes
 
Checking name resolution setup for "flex-cluster-scan.gns.localdomain"...
 
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
 
  SCAN Name     IP Address                Status                    Comment   
  ------------  ------------------------  ------------------------  ----------
  flex-cluster-scan.gns.localdomain  192.168.56.108            passed                             
 
Checking SCAN IP addresses...
Check of SCAN IP addresses passed
 
Verification of SCAN VIP and listener setup passed
 
Checking to make sure user "oracle" is not in "root" group
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  12102-rac3    passed                    does not exist         
Result: User "oracle" is not part of "root" group. Check passed
 
Checking if Clusterware is installed on all nodes...
Oracle Clusterware is installed on all nodes.
 
Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status                 
  ------------------------------------  ------------------------
  12102-rac3                            passed                 
CTSS resource check passed
 
Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
 
Check CTSS state started...
Check: CTSS state
  Node Name                             State                   
  ------------------------------------  ------------------------
  12102-rac3                            Active                 
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status                 
  ------------  ------------------------  ------------------------
  12102-rac3    0.0                       passed                 
 
Time offset is within the specified limits on the following set of nodes:
"[12102-rac3]"
Result: Check of clock time offsets passed
 
 
Oracle Cluster Time Synchronization Services check passed
 
Post-check for node addition was successful.
[oracle@12102-rac1 addnode]$

另外,还可以用orachk对整个集群再做一次更详细的检查。
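orachk的调用很简单,下面给一个最基本的示意(orachk的解压路径和具体参数请以你下载的版本以及orachk -h的输出为准):

# 以root(或GI owner)运行orachk,对集群做一次整体健康检查(示意)
cd /opt/orachk        # 假设orachk解压在这个目录
./orachk
# 执行完后会在当前目录生成zip报告,解压后用浏览器打开其中的orachk_*.html即可查看各项检查结果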

添加步骤主要参考了官方文档:《Clusterware Administration and Deployment Guide – Adding and Deleting Cluster Nodes》

How to create cow db using acfs snapshot


这篇文章介绍了如何在一套已经安装好12c RAC的虚拟机上建立acfs文件系统,并利用acfs snapshot刷一个COW(Copy-On-Write)库出来做测试库。

Highlight Step:

一、给虚拟机增加asm盘,以便建立acfs文件系统
二、创建acfs文件系统
三、在节点1上,把数据库创建在acfs文件系统上。(12c支持把数据文件、控制文件、日志文件等数据库文件放在acfs上。参考Doc ID 1369107.1中ACFS Advanced Features Platform Availability – Minimum Version)
四、在节点1上运行dml的同时,生成snapshot
五、利用上面生成的snapshot,在节点2上拉起来另外一个数据库。

一、给虚拟机增加asm盘,以便建立acfs文件系统

1.创建共享acfs盘,共3个盘,每个盘3G大小:

VBoxManage createhd --filename asm_acfs_3g_01.vdi --size 3072 --format VDI --variant Fixed
VBoxManage createhd --filename asm_acfs_3g_02.vdi --size 3072 --format VDI --variant Fixed
VBoxManage createhd --filename asm_acfs_3g_03.vdi --size 3072 --format VDI --variant Fixed

2.将创建的asm盘attach到虚拟机ol6-121-rac1上

VBoxManage storageattach ol6-121-rac1 --storagectl "SATA" --port 5 --device 0 --type hdd     --medium asm_acfs_3g_01.vdi --mtype shareable
VBoxManage storageattach ol6-121-rac1 --storagectl "SATA" --port 6 --device 0 --type hdd     --medium asm_acfs_3g_02.vdi --mtype shareable
VBoxManage storageattach ol6-121-rac1 --storagectl "SATA" --port 7 --device 0 --type hdd     --medium asm_acfs_3g_03.vdi --mtype shareable
 
VBoxManage storageattach ol6-121-rac2 --storagectl "SATA" --port 5 --device 0 --type hdd     --medium asm_acfs_3g_01.vdi --mtype shareable
VBoxManage storageattach ol6-121-rac2 --storagectl "SATA" --port 6 --device 0 --type hdd     --medium asm_acfs_3g_02.vdi --mtype shareable
VBoxManage storageattach ol6-121-rac2 --storagectl "SATA" --port 7 --device 0 --type hdd     --medium asm_acfs_3g_03.vdi --mtype shareable

3.将这些共享盘设置为可共享的:

VBoxManage modifyhd asm_acfs_3g_01.vdi --type shareable
VBoxManage modifyhd asm_acfs_3g_02.vdi --type shareable
VBoxManage modifyhd asm_acfs_3g_03.vdi --type shareable

4.进linux系统,为新加的盘进行分区

fdisk /dev/sd<n> --> n -->p -->1-->1-->w
如:
fdisk /dev/sdf --> n -->p -->1-->1-->w
fdisk /dev/sdg --> n -->p -->1-->1-->w
fdisk /dev/sdh --> n -->p -->1-->1-->w
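上面的fdisk交互步骤也可以用重定向的方式脚本化,下面是一个示意(回车取默认起止柱面,效果与上面的交互输入等价;操作前务必确认盘符,避免误分区):

# 为新加的三块盘批量创建单一主分区(n/p/1/默认/默认/w)
for d in /dev/sdf /dev/sdg /dev/sdh
do
  fdisk $d <<EOF
n
p
1


w
EOF
done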

5. 本文用的是udev的方式来使用asm盘,没有使用asmlib。

/sbin/scsi_id -g -u -d /dev/sdf
/sbin/scsi_id -g -u -d /dev/sdg
/sbin/scsi_id -g -u -d /dev/sdh
如:
[root@ol6-121-rac1 dev]# /sbin/scsi_id -g -u -d /dev/sdf
1ATA_VBOX_HARDDISK_VBa36c3c6c-9da6bb20
[root@ol6-121-rac1 dev]# /sbin/scsi_id -g -u -d /dev/sdg
1ATA_VBOX_HARDDISK_VBcb790f45-de2f86fb
[root@ol6-121-rac1 dev]# /sbin/scsi_id -g -u -d /dev/sdh
1ATA_VBOX_HARDDISK_VB4489ed5a-e05a9613
[root@ol6-121-rac1 dev]#

6.获得上面的信息后,在两个节点的/etc/udev/rules.d/99-oracle-asmdevices.rules文件中,添加如下几行:

KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBa36c3c6c-9da6bb20",  NAME="asm_acfs_3g_01", OWNER="oracle", GROUP="dba", MODE="0660
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBcb790f45-de2f86fb",  NAME="asm_acfs_3g_02", OWNER="oracle", GROUP="dba", MODE="0660
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB4489ed5a-e05a9613",  NAME="asm_acfs_3g_03", OWNER="oracle", GROUP="dba", MODE="0660

7.重启两个节点的udev服务,或者直接重启两个节点:

crsctl stop crs
/sbin/udevadm control --reload-rules
/sbin/start_udev
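重启udev之后,建议在两个节点上都确认一下设备名、属主和权限是否与rules里定义的一致:

# 确认udev规则生效:应看到属主oracle、属组dba、权限0660的三个块设备
ls -l /dev/asm_acfs_3g_*
# 预期输出类似:brw-rw---- 1 oracle dba ... /dev/asm_acfs_3g_01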

二、创建acfs文件系统

8. 在两个节点,创建acfs文件系统的mount point

[root@ol6-121-rac1 ~]# mkdir -p /mnt/acfs
[root@ol6-121-rac1 ~]# chown oracle:oinstall /mnt/acfs
 
[root@ol6-121-rac2 ~]# mkdir -p /mnt/acfs
[root@ol6-121-rac2 ~]# chown oracle:oinstall /mnt/acfs

9. 先检查可以用于新建diskgroup的disk

SQL> select path, name, header_status, os_mb from v$asm_disk;
 
PATH                           NAME                           HEADER_STATU      OS_MB
------------------------------ ------------------------------ ------------ ----------
/dev/asm_acfs_3g_03                                           CANDIDATE          3067
/dev/asm_acfs_3g_01                                           CANDIDATE          3067
/dev/asm_acfs_3g_02                                           CANDIDATE          3067
/dev/asm-disk1                 DATA_0000                      MEMBER             5114
/dev/asm-disk2                 DATA_0001                      MEMBER             5114
/dev/asm-disk3                 DATA_0002                      MEMBER             5114
/dev/asm-disk4                 DATA_0003                      MEMBER             5114
 
7 rows selected.

10. 新建diskgroup DG_ACFS

SQL> CREATE DISKGROUP DG_ACFS EXTERNAL REDUNDANCY DISK
  2  '/dev/asm_acfs_3g_01' SIZE 3000M,                 
  3  '/dev/asm_acfs_3g_02' size 3000M,                 
  4  '/dev/asm_acfs_3g_03' size 3000M                 
  5  ATTRIBUTE  'compatible.asm' = '12.1.0.0.0';       
 
Diskgroup created.
 
SQL>

11. 设置compatible为12.1以上

SQL> alter diskgroup DG_ACFS set attribute 'compatible.advm'='12.1.0.0.0';
 
Diskgroup altered.
也可以:
ASMCMD> setattr -G DG_ACFS compatible.advm 12.1.0.0.0

12. 建立volumns

SQL> alter diskgroup DG_ACFS add volume VOL1 size 8000M;
 
Diskgroup altered.
也可以:
ASMCMD> volcreate -G DG_ACFS -s 8000M --column 1 VOL1

13. 检查设备名称:

SQL> select volume_name,volume_device from v$asm_volume
  2  /
 
VOLUME_NAME                    VOLUME_DEVICE
------------------------------ ----------------------------------------
VOL1                           /dev/asm/vol1-28
<<<<<<设备名称,下一步mkfs时会用
 
SQL>
也可以
ASMCMD> volinfo -G DG_ACFS VOL1
Diskgroup Name: DG_ACFS
 
         Volume Name: VOL1
         Volume Device: /dev/asm/vol1-28   <<<<<<设备名称,下一步mkfs时会用
         State: ENABLED
         Size (MB): 8000
         Resize Unit (MB): 32
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage:
         Mountpath:
 
ASMCMD>

14. 建立文件系统:

[root@ol6-121-rac1 ~]# mkfs -t acfs  /dev/asm/vol1-28
mkfs.acfs: version                   = 12.1.0.1.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/vol1-28
mkfs.acfs: volume size               = 8388608000
mkfs.acfs: Format complete.
[root@ol6-121-rac1 ~]#

15. 将acfs文件系统注册到crs:

[root@ol6-121-rac1 ~]# acfsutil registry -a /dev/asm/vol1-28 /mnt/acfs
acfsutil registry: mount point /mnt/acfs successfully added to Oracle Registry
[root@ol6-121-rac1 ~]#
或者:
[root@ol6-121-rac1 ~]# srvctl add filesystem -m /mnt/acfs -d /dev/asm/vol1-28

16. 将acfs文件系统mount到mount point上:

[root@ol6-121-rac1 ~]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_ol6121rac1-lv_root
                      28423176  16096244  10883092  60% /
tmpfs                  2560000   2038752    521248  80% /dev/shm
/dev/sda1               495844     56258    413986  12% /boot
[root@ol6-121-rac1 ~]#
[root@ol6-121-rac1 ~]# mount -t acfs /dev/asm/vol1-28 /mnt/acfs
[root@ol6-121-rac1 ~]#
[root@ol6-121-rac1 ~]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg_ol6121rac1-lv_root
                      28423176  16096324  10883012  60% /
tmpfs                  2560000   2038752    521248  80% /dev/shm
/dev/sda1               495844     56258    413986  12% /boot
/dev/asm/vol1-190      9117696     57360   9060336   1% /mnt/acfs
[root@ol6-121-rac1 ~]#

17. 检查状态:

[root@ol6-121-rac1 ~]# srvctl config filesystem -d /dev/asm/vol1-28
Volume device: /dev/asm/vol1-190
Canonical volume device: /dev/asm/vol1-190
Mountpoint path: /mnt/acfs
User:
Type: ACFS
Mount options:
Description:
Nodes:
Server pools:
Application ID:
ACFS file system is enabled
[root@ol6-121-rac1 ~]#
[root@ol6-121-rac1 ~]# srvctl status filesystem -d /dev/asm/vol1-28
ACFS file system /mnt/acfs is mounted on nodes ol6-121-rac1,ol6-121-rac2
[root@ol6-121-rac1 ~]#

三、在节点1上,把数据库创建在acfs文件系统上。

18. 使用dbca在acfs文件系统上创建一个数据库。注意,storage type要选择file system,并且路径选择上面建立的mount point。

具体步骤略。

19. 检查创建好的db,我们这里测试用的db实例名叫acfsdb:

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       ol6-121-rac1             STABLE
               ONLINE  ONLINE       ol6-121-rac2             STABLE
ora.DATA.dg
               ONLINE  ONLINE       ol6-121-rac1             STABLE
               ONLINE  ONLINE       ol6-121-rac2             STABLE
ora.DG_ACFS.VOL1.advm
               ONLINE  ONLINE       ol6-121-rac1             Volume device /dev/a
                                                             sm/vol1-28 is online
                                                             ,STABLE
               ONLINE  ONLINE       ol6-121-rac2             Volume device /dev/a
                                                             sm/vol1-28 is online
                                                             ,STABLE
ora.DG_ACFS.dg
               ONLINE  ONLINE       ol6-121-rac1             STABLE
               ONLINE  ONLINE       ol6-121-rac2             STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       ol6-121-rac1             STABLE
               ONLINE  ONLINE       ol6-121-rac2             STABLE
ora.dg_acfs.vol1.acfs
               ONLINE  ONLINE       ol6-121-rac1             mounted on /mnt/acfs
                                                             ,STABLE
               ONLINE  ONLINE       ol6-121-rac2             mounted on /mnt/acfs
                                                             ,STABLE
ora.net1.network
               ONLINE  ONLINE       ol6-121-rac1             STABLE
               ONLINE  ONLINE       ol6-121-rac2             STABLE
ora.ons
               ONLINE  ONLINE       ol6-121-rac1             STABLE
               ONLINE  ONLINE       ol6-121-rac2             STABLE
ora.proxy_advm
               ONLINE  ONLINE       ol6-121-rac1             STABLE
               ONLINE  ONLINE       ol6-121-rac2             STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       ol6-121-rac2             STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       ol6-121-rac1             STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       ol6-121-rac1             STABLE
ora.acfsdb.db
      1        ONLINE  ONLINE       ol6-121-rac1             Open,STABLE
ora.asm
      1        ONLINE  ONLINE       ol6-121-rac1             STABLE
      2        ONLINE  ONLINE       ol6-121-rac2             STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cdbrac.db
      1        OFFLINE OFFLINE                               Instance Shutdown,ST
                                                             ABLE
      2        OFFLINE OFFLINE                               Instance Shutdown,ST
                                                             ABLE
ora.cvu
      1        ONLINE  ONLINE       ol6-121-rac1             STABLE
ora.gns
      1        ONLINE  ONLINE       ol6-121-rac1             STABLE
ora.gns.vip
      1        ONLINE  ONLINE       ol6-121-rac1             STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.ol6-121-rac1.vip
      1        ONLINE  ONLINE       ol6-121-rac1             STABLE
ora.ol6-121-rac2.vip
      1        ONLINE  ONLINE       ol6-121-rac2             STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       ol6-121-rac2             STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       ol6-121-rac1             STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       ol6-121-rac1             STABLE
--------------------------------------------------------------------------------
[oracle@ol6-121-rac1 ~]$ srvctl config database -d acfsdb
Database unique name: acfsdb
Database name: acfsdb
Oracle home: /u01/app/oracle/product/12.1.0.1/db_1
Oracle user: oracle
Spfile: /u01/app/oracle/product/12.1.0.1/db_1/dbs/spfileacfsdb.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: acfsdb
Database instance: acfsdb
Disk Groups:
Mount point paths:
Services:
Type: SINGLE
Database is administrator managed
SQL> select file_name from dba_data_files;
 
FILE_NAME
--------------------------------------------------------------------------------
/mnt/acfs/oradata/acfsdb/system01.dbf
/mnt/acfs/oradata/acfsdb/sysaux01.dbf
/mnt/acfs/oradata/acfsdb/undotbs01.dbf
/mnt/acfs/oradata/acfsdb/users01.dbf
 
SQL>
SQL> select name from v$controlfile;
 
NAME
--------------------------------------------------------------------------------
/mnt/acfs/oradata/acfsdb/control01.ctl
/mnt/acfs/oradata/acfsdb/control02.ctl
 
SQL>
SQL>
SQL>
SQL> select member from v$logfile;
 
MEMBER
--------------------------------------------------------------------------------
/mnt/acfs/oradata/acfsdb/redo01.log
/mnt/acfs/oradata/acfsdb/redo02.log
/mnt/acfs/oradata/acfsdb/redo03.log
 
SQL> select file_name from dba_temp_files;
 
FILE_NAME
--------------------------------------------------------------------------------
/mnt/acfs/oradata/acfsdb/temp01.dbf
 
SQL>

四、在节点1上运行dml的同时,生成snapshot

20. 创建snapshot用的命令是acfsutil。我们先来看看,当前是没有snapshot的:

[oracle@ol6-121-rac1 logs]$ acfsutil snap info /mnt/acfs
    number of snapshots:  0
    snapshot space usage: 0
[oracle@ol6-121-rac1 logs]$

21. 我们先来试试创建一个只读(Read-Only, RO)的snapshot:

[oracle@ol6-121-rac1 logs]$ date
Wed Feb 24 22:40:34 CST 2016
[oracle@ol6-121-rac1 logs]$ acfsutil snap create asfsdb_snap01 /mnt/acfs
acfsutil snap create: Snapshot operation is complete.
[oracle@ol6-121-rac1 logs]$ date
Wed Feb 24 22:40:40 CST 2016
[oracle@ol6-121-rac1 logs]$ acfsutil snap info /mnt/acfs
snapshot name:               asfsdb_snap01
RO snapshot or RW snapshot:  RO
<<<<<<<<注意这里,类型是RO,即只读。
parent name:                 /mnt/acfs
snapshot creation time:      Wed Feb 24 22:40:36 2016
 
    
    number of snapshots:  1
    snapshot space usage: 122757120
[oracle@ol6-121-rac1 logs]$

22. 注意,上述snapshot的文件,就建立在你的mount point下的隐藏目录 .ACFS 之下:

[root@ol6-121-rac1 acfsdb]# cd /mnt/acfs/.ACFS/snaps/asfsdb_snap01/oradata/acfsdb
[root@ol6-121-rac1 acfsdb]# ls -al
total 1714424
drwxr-x---. 2 oracle oinstall      8192 Feb 24 21:27 .
drwxr-x---. 3 oracle oinstall      8192 Feb 24 21:25 ..
-rw-r-----. 1 oracle dba       10043392 Feb 24 22:40 control01.ctl
-rw-r-----. 1 oracle dba       10043392 Feb 24 22:40 control02.ctl
-rw-r-----. 1 oracle dba       52429312 Feb 24 22:03 redo01.log
-rw-r-----. 1 oracle dba       52429312 Feb 24 22:40 redo02.log
-rw-r-----. 1 oracle dba       52429312 Feb 24 22:01 redo03.log
-rw-r-----. 1 oracle dba      576724992 Feb 24 22:40 sysaux01.dbf
-rw-r-----. 1 oracle dba      734011392 Feb 24 22:39 system01.dbf
-rw-r-----. 1 oracle dba       20979712 Feb 24 22:40 temp01.dbf
-rw-r-----. 1 oracle dba      241180672 Feb 24 22:39 undotbs01.dbf
-rw-r-----. 1 oracle dba        5251072 Feb 24 22:38 users01.dbf
[root@ol6-121-rac1 acfsdb]#

23. 我们不妨再多建几个snapshot:

[oracle@ol6-121-rac1 ~]$ date
Wed Feb 24 22:52:45 CST 2016
[oracle@ol6-121-rac1 ~]$ acfsutil snap create -w asfsdb_snap02 /mnt/acfs
acfsutil snap create: Snapshot operation is complete.
[oracle@ol6-121-rac1 ~]$ date
Wed Feb 24 22:52:45 CST 2016
[oracle@ol6-121-rac1 ~]$
[oracle@ol6-121-rac1 ~]$ acfsutil snap info /mnt/acfs
snapshot name:               asfsdb_snap01
RO snapshot or RW snapshot:  RO
parent name:                 /mnt/acfs
snapshot creation time:      Wed Feb 24 22:40:36 2016
 
snapshot name:               asfsdb_snap02
RO snapshot or RW snapshot:  RW
parent name:                 /mnt/acfs
snapshot creation time:      Wed Feb 24 22:52:45 2016
 
    number of snapshots:  2
    snapshot space usage: 265420800
[oracle@ol6-121-rac1 ~]$
[oracle@ol6-121-rac1 ~]$ date
Thu Feb 25 15:15:08 CST 2016
[oracle@ol6-121-rac1 ~]$ acfsutil snap create -w asfsdb_snap03 /mnt/acfs
acfsutil snap create: Snapshot operation is complete.
[oracle@ol6-121-rac1 ~]$ date
Thu Feb 25 15:15:11 CST 2016
[oracle@ol6-121-rac1 ~]$
[oracle@ol6-121-rac1 ~]$
[oracle@ol6-121-rac1 ~]$
[oracle@ol6-121-rac1 ~]$
[oracle@ol6-121-rac1 ~]$ date
Thu Feb 25 15:15:33 CST 2016
[oracle@ol6-121-rac1 ~]$ acfsutil snap create -w asfsdb_snap04 /mnt/acfs
acfsutil snap create: Snapshot operation is complete.
[oracle@ol6-121-rac1 ~]$ date
Thu Feb 25 15:15:33 CST 2016
[oracle@ol6-121-rac1 ~]$

24. 可以看到已经建立了4个snapshot。第一个是只读(RO),后面3个是读写(RW)。区别在于用acfsutil创建的时候是否加-w参数:加-w是读写,不加-w默认是只读。

[oracle@ol6-121-rac1 ~]$ acfsutil snap info /mnt/acfs
snapshot name:               asfsdb_snap01
RO snapshot or RW snapshot:  RO
parent name:                 /mnt/acfs
snapshot creation time:      Wed Feb 24 22:40:36 2016
 
snapshot name:               asfsdb_snap02
RO snapshot or RW snapshot:  RW
parent name:                 /mnt/acfs
snapshot creation time:      Wed Feb 24 22:52:45 2016
 
snapshot name:               asfsdb_snap03
RO snapshot or RW snapshot:  RW
parent name:                 /mnt/acfs
snapshot creation time:      Thu Feb 25 15:15:09 2016
 
snapshot name:               asfsdb_snap04
RO snapshot or RW snapshot:  RW
parent name:                 /mnt/acfs
snapshot creation time:      Thu Feb 25 15:15:33 2016
 
    number of snapshots:  4
    snapshot space usage: 2430095360
[oracle@ol6-121-rac1 ~]$
 
[oracle@ol6-121-rac1 ~]$ acfsutil info fs /mnt/acfs
/mnt/acfs
    ACFS Version: 12.1.0.1.0
    flags:        MountPoint,Available
    mount time:   Thu Feb 25 13:37:36 2016
    volumes:      1
    total size:   8388608000
    total free:   3935641600
    primary volume: /dev/asm/vol1-28
        label:                 
        flags:                 Primary,Available,ADVM
        on-disk version:       43.0
        allocation unit:       4096
        major, minor:          251, 14337
        size:                  8388608000
        free:                  3935641600
        ADVM diskgroup         DG_ACFS
        ADVM resize increment: 33554432
        ADVM redundancy:       unprotected
        ADVM stripe columns:   4
        ADVM stripe width:     131072
    number of snapshots:  4            <<<<<< 有4个snapshot
    snapshot space usage: 2430095360   <<<<<< 4个snapshot,空间才使用约2.4G,虽然一套数据文件的大小就有1个多G
    replication status: DISABLED
[oracle@ol6-121-rac1 ~]$

ACFS的snapshot功能很强大,不仅可以建立只读、读写的snapshot,还能在只读和读写之间互相convert,另外还能建立snapshot-of-snapshot,在acfsutil snap create时加-p参数指定父级snapshot即可。
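下面给两条示意命令(本文环境没有逐条执行,具体语法请以acfsutil snap -h的输出为准):

# 把只读的asfsdb_snap01转换成读写(-w);用-r可以再转回只读
acfsutil snap convert -w asfsdb_snap01 /mnt/acfs
# 基于已有的asfsdb_snap04再建一个snapshot-of-snapshot,-p指定父级snapshot
acfsutil snap create -w -p asfsdb_snap04 asfsdb_snap05 /mnt/acfs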

五、利用上面生成的snapshot,在节点2上拉起来另外一个数据库。

25. 先在节点2上创建一个pfile,可以从节点1拷贝过来,不过有些地方需要修改一下:

[oracle@ol6-121-rac2 dbs]$ cat initacfsdb.ora
*.audit_file_dest='/u01/app/oracle/admin/acfsdb/adump'
*.audit_trail='db'
*.compatible='12.1.0.0.0'
*.control_files='/mnt/acfs/.ACFS/snaps/asfsdb_snap01/oradata/acfsdb/control01.ctl','/mnt/acfs/.ACFS/snaps/asfsdb_snap01/oradata/acfsdb/control02.ctl'
<<<<<<<修改这里的路径为snapshot的路径
*.db_block_size=8192
*.db_domain=''
*.db_name='acfsdb'
*.db_unique_name='cowacfs'      <<<<<<<这里必须加上db_unique_name,不然ocssd进程会检测到存在2个一样的instance,报错instance_number busy
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=acfsdbXDB)'
#*.local_listener='LISTENER_ACFSDB'   <<<<<<修改这里
*.memory_target=1160m
*.open_cursors=300
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'
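另外,节点2上还要先把pfile里audit_file_dest指向的adump目录建出来,pfile本身可以先从节点1拷贝过来再按上面的说明修改(示意命令如下,路径请按实际环境调整):

# 节点2上的准备工作(示意)
mkdir -p /u01/app/oracle/admin/acfsdb/adump
scp ol6-121-rac1:$ORACLE_HOME/dbs/initacfsdb.ora $ORACLE_HOME/dbs/initacfsdb.ora
# 然后编辑initacfsdb.ora,修改control_files、db_unique_name、local_listener等参数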

记住,必须要加db_unique_name,不然在一个cluster环境里,即使你已经用srvctl remove掉了database信息,再起同名instance的时候,还是会报错:

SQL> startup nomount
ORA-00304: requested INSTANCE_NUMBER is busy
SQL> exit

必须通过db_unique_name来解决。

26. 此时应该用读写(RW)的snapshot来启动。如果用read only的snapshot,是不能启动到mount的,因为文件只读,在alert log中你会看到如下报错:

Thu Feb 25 13:44:12 2016
alter database mount
Thu Feb 25 13:44:17 2016
Errors in file /u01/app/oracle/diag/rdbms/cowacfs/acfsdb/trace/acfsdb_ora_11237.trc:
ORA-00206: error in writing (block 1, # blocks 1) of control file
ORA-00202: control file: '/mnt/acfs/.ACFS/snaps/asfsdb_snap01/oradata/acfsdb/control01.ctl'
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 3
ORA-221 signalled during: alter database mount...

27. 需要用读写的snapshot来启动,我们这边用第四个snapshot在节点2上启动:
注:此时pfile中的控制文件路径已经改成了asfsdb_snap04下的那个。

[oracle@ol6-121-rac2 dbs]$ sqlplus "/ as sysdba"
 
SQL*Plus: Release 12.1.0.1.0 Production on Thu Feb 25 15:18:41 2016
 
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
 
Connected to an idle instance.
 
SQL> startup nomount
ORACLE instance started.
 
Total System Global Area 1219260416 bytes
Fixed Size                  2287768 bytes
Variable Size             855639912 bytes
Database Buffers          352321536 bytes
Redo Buffers                9011200 bytes
SQL> alter database mount
  2  /
 
Database altered.
 
SQL>

28. 启动到mount后,我们将控制文件中记录的文件路径信息,也改成asfsdb_snap04的路径:

先检查当前路径
SQL> select name from v$datafile
  2  union all
  3  select name from v$tempfile
  4  union all
  5  select member from v$logfile;
 
NAME
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
/mnt/acfs/oradata/acfsdb/system01.dbf
/mnt/acfs/oradata/acfsdb/sysaux01.dbf
/mnt/acfs/oradata/acfsdb/undotbs01.dbf
/mnt/acfs/oradata/acfsdb/users01.dbf
/mnt/acfs/oradata/acfsdb/temp01.dbf
/mnt/acfs/oradata/acfsdb/redo01.log
/mnt/acfs/oradata/acfsdb/redo02.log
/mnt/acfs/oradata/acfsdb/redo03.log
 
8 rows selected.
 
生成修改脚本
select distinct 'alter database rename file '||''''||a.name||''''|| ' to '||''''||substr(c.name,1,instr(c.name,'/',-1))|| substr(a.name,instr(a.name,'/',-1)+1)||''';' from v$controlfile c, v$datafile a
union all
select distinct 'alter database rename file '||''''||b.name||''''|| ' to '||''''||substr(c.name,1,instr(c.name,'/',-1))|| substr(b.name,instr(b.name,'/',-1)+1)||''';' from v$controlfile c, v$tempfile b
union all
select distinct 'alter database rename file '||''''||d.member||''''|| ' to '||''''||substr(c.name,1,instr(c.name,'/',-1))|| substr(d.member,instr(d.member,'/',-1)+1)||''';' from v$controlfile c, v$logfile d;
 
执行上面修改脚本生成的语句:
alter database rename file '/mnt/acfs/oradata/acfsdb/sysaux01.dbf' to '/mnt/acfs/.ACFS/snaps/asfsdb_snap04/oradata/acfsdb/sysaux01.dbf';
alter database rename file '/mnt/acfs/oradata/acfsdb/system01.dbf' to '/mnt/acfs/.ACFS/snaps/asfsdb_snap04/oradata/acfsdb/system01.dbf';
alter database rename file '/mnt/acfs/oradata/acfsdb/undotbs01.dbf' to '/mnt/acfs/.ACFS/snaps/asfsdb_snap04/oradata/acfsdb/undotbs01.dbf';
alter database rename file '/mnt/acfs/oradata/acfsdb/users01.dbf' to '/mnt/acfs/.ACFS/snaps/asfsdb_snap04/oradata/acfsdb/users01.dbf';
alter database rename file '/mnt/acfs/oradata/acfsdb/temp01.dbf' to '/mnt/acfs/.ACFS/snaps/asfsdb_snap04/oradata/acfsdb/temp01.dbf';
alter database rename file '/mnt/acfs/oradata/acfsdb/redo01.log' to '/mnt/acfs/.ACFS/snaps/asfsdb_snap04/oradata/acfsdb/redo01.log';
alter database rename file '/mnt/acfs/oradata/acfsdb/redo02.log' to '/mnt/acfs/.ACFS/snaps/asfsdb_snap04/oradata/acfsdb/redo02.log';
alter database rename file '/mnt/acfs/oradata/acfsdb/redo03.log' to '/mnt/acfs/.ACFS/snaps/asfsdb_snap04/oradata/acfsdb/redo03.log';
 
Database altered.
 
SQL>
Database altered.
 
SQL>
Database altered.
 
SQL>
Database altered.
 
SQL>
Database altered.
 
SQL>
Database altered.
 
SQL>
Database altered.
 
SQL>
Database altered.
 
检查修改后的结果:
SQL> select name from v$datafile
  2  union all
  3  select name from v$tempfile
  4  union all
  5  select member from v$logfile;
 
NAME
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
/mnt/acfs/.ACFS/snaps/asfsdb_snap04/oradata/acfsdb/system01.dbf
/mnt/acfs/.ACFS/snaps/asfsdb_snap04/oradata/acfsdb/sysaux01.dbf
/mnt/acfs/.ACFS/snaps/asfsdb_snap04/oradata/acfsdb/undotbs01.dbf
/mnt/acfs/.ACFS/snaps/asfsdb_snap04/oradata/acfsdb/users01.dbf
/mnt/acfs/.ACFS/snaps/asfsdb_snap04/oradata/acfsdb/temp01.dbf
/mnt/acfs/.ACFS/snaps/asfsdb_snap04/oradata/acfsdb/redo01.log
/mnt/acfs/.ACFS/snaps/asfsdb_snap04/oradata/acfsdb/redo02.log
/mnt/acfs/.ACFS/snaps/asfsdb_snap04/oradata/acfsdb/redo03.log
 
8 rows selected.
 
打开数据库:
SQL> alter database open;
 
Database altered.
 
SQL>
SQL> select max(to_char(mydate,'yyyy-mm-dd hh24:mi:ss')) from t1;
 
MAX(TO_CHAR(MYDATE,
-------------------
2016-02-25 15:15:33
 
SQL>

可以看到,我一边做snapshot,一边做dml insert sysdate,恢复出来的t1表的最后一条记录是15:15:33,正好是该snapshot的创建时间:

snapshot name:               asfsdb_snap04
RO snapshot or RW snapshot:  RW
parent name:                 /mnt/acfs
snapshot creation time:      Thu Feb 25 15:15:33 2016

最后再说一下:虽然我们可以对主库做snapshot来刷COW库,但更好的方法是对dataguard的灾备库做snapshot,再从snapshot拉起COW库。此时同样是copy-on-write,且对主库没有丝毫影响(整体架构即:主库 -> ADG备库 -> ACFS文件系统 + snapshot -> COW测试库)。

12.1.0.2开始废弃使用crsctl对ora resource的修改


在12.1.0.2之后,如果使用crsctl对ora开头的resource进行修改、启动、关闭,会遭遇CRS-4995的错误,要求你改用srvctl命令来进行操作。

[oracle@12102-rac1 ~]$ crsctl stop resource ora.cdbrac.db
CRS-4995:  The command 'Stop  resource' is invalid in crsctl. Use srvctl for this command.
[oracle@12102-rac1 ~]$

参考下面3个文档:
Online Document:Clusterware Administration and Deployment Guide:
Note:
Do not use CRSCTL commands on Oracle entities (such as resources, resource types, and server pools) that have names beginning with ora unless you are directed to do so by My Oracle Support. The Server Control utility (SRVCTL) is the correct utility to use on Oracle entities.

crsctl modify ora.* resource fails with CRS-4995 in 12.1.0.2 and above (Doc ID 1918102.1)

PRKF-1085 : Command 'start' is not supported for object 'network' (Doc ID 1966448.1)

悲催的是,除了自建的资源,我们几乎所有的资源都是ora开头的:

[oracle@12102-rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       12102-rac1               STABLE
ora.DG_DATA.dg
               ONLINE  ONLINE       12102-rac1               STABLE
ora.LISTENER.lsnr
               ONLINE  OFFLINE      12102-rac1               STABLE
ora.net1.network
               ONLINE  ONLINE       12102-rac1               STABLE
ora.ons
               ONLINE  ONLINE       12102-rac1               STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.12102-rac1.vip
      1        ONLINE  OFFLINE                               STABLE
ora.12102-rac2.vip
      1        ONLINE  OFFLINE                               STABLE
ora.12102-rac3.vip
      1        ONLINE  OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  OFFLINE                               STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  OFFLINE                               STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  OFFLINE                               STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       12102-rac1               169.254.161.44 192.1
                                                             68.57.34,STABLE
ora.asm
      1        ONLINE  ONLINE       12102-rac1               STABLE
      2        ONLINE  OFFLINE                               STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.cdbrac.db
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       12102-rac1               Open,STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.gns
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.gns.vip
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.mgmtdb
      1        ONLINE  OFFLINE                               Instance Shutdown,ST
                                                             ABLE
ora.oc4j
      1        ONLINE  ONLINE       12102-rac1               STABLE
ora.scan1.vip
      1        ONLINE  OFFLINE      12102-rac1               STARTING
ora.scan2.vip
      1        ONLINE  OFFLINE                               STABLE
ora.scan3.vip
      1        ONLINE  OFFLINE                               STABLE
--------------------------------------------------------------------------------
[oracle@12102-rac1 ~]$

也就是说,几乎所有的资源,我们都不能再用crsctl来修改了。比如过去这样修改AUTO_START的操作:

[oracle@12102-rac1 ~]$ crsctl stat res ora.cdbrac.db -p |grep AUTO
AUTO_START=restore
MANAGEMENT_POLICY=AUTOMATIC
[oracle@12102-rac1 ~]$
crsctl modify res ora.cdbrac.db -attr "AUTO_START=always"

这些操作都不能再像之前版本那样直接用crsctl做了。
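对于数据库资源,像开机自启动这类常见需求,官方支持的做法是通过srvctl修改management policy,而不是直接改AUTO_START属性(示意如下,具体选项以srvctl modify database -h为准):

# 查看并修改数据库的management policy(AUTOMATIC/MANUAL/NORESTART)
srvctl config database -db cdbrac | grep -i policy
srvctl modify database -db cdbrac -policy AUTOMATIC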

解决方法:加-unsupported参数。(但建议还是按照官方文档使用srvctl操作)

先用eval参数看看模拟执行的效果:
[oracle@12102-rac1 ~]$ crsctl eval stop res ora.cdbrac.db -unsupported
 
Stage Group 1:
--------------------------------------------------------------------------------
Stage Number    Required        Action
--------------------------------------------------------------------------------
 
     1              Y           Resource 'ora.cdbrac.db' (2/1) will be in state
                                [OFFLINE]
 
--------------------------------------------------------------------------------
[oracle@12102-rac1 ~]$ ##然后实际操作:
[oracle@12102-rac1 ~]$ crsctl stop resource ora.cdbrac.db -unsupported
CRS-2673: Attempting to stop 'ora.cdbrac.db' on '12102-rac1'
CRS-2677: Stop of 'ora.cdbrac.db' on '12102-rac1' succeeded
[oracle@12102-rac1 ~]$

顺便多说两句12c中eval的参数:

在12c中,你可以eval模拟敲命令的后果,也可以predict资源失败的后果:
(1)crsctl eval <命令>,大致等价于 srvctl <命令> -eval
(2)crsctl eval fail res <资源>,大致等价于 srvctl predict
 
(1)
[oracle@12102-rac1 ~]$ crsctl eval stop res ora.cdbrac.db -unsupported
 
Stage Group 1:
--------------------------------------------------------------------------------
Stage Number    Required        Action
--------------------------------------------------------------------------------
 
     1              Y           Resource 'ora.cdbrac.db' (2/1) will be in state
                                [OFFLINE]
 
--------------------------------------------------------------------------------
[oracle@12102-rac1 ~]$
[oracle@12102-rac1 ~]$ srvctl stop database -db cdbrac -eval
Database cdbrac will be stopped on node 12102-rac1
 
 
(2)
[oracle@12102-rac1 ~]$ crsctl eval fail res ora.cdbrac.db -unsupported
 
Stage Group 1:
--------------------------------------------------------------------------------
Stage Number    Required        Action
--------------------------------------------------------------------------------
 
     1              Y           Resource 'ora.cdbrac.db' (2/1) will be in state
                                [ONLINE|INTERMEDIATE] on server [12102-rac1]
 
--------------------------------------------------------------------------------
[oracle@12102-rac1 ~]$ srvctl predict database -db cdbrac
Database cdbrac will be stopped on node 12102-rac1
[oracle@12102-rac1 ~]$

Oracle sharding database的一些概念


2016年2月,oracle出了12.2的beta2版本,并且在4月更新了相关文档,如Concepts、Administrator’s Guide、Global Data Services Concepts and Administration Guide等等。这个版本的文档,比之前2015年10月底的文档要好很多,许多概念、演示demo、操作步骤都得到了很好的说明。

这里来谈一下sharding相关的几个概念:

(1)Table family:
有相互关联关系的一组表,如客户表(customers)、订单表(orders)、订单明细表(LineItems)。这些表之间往往有外键约束关系,可以通过如下2种方式建立table family:

(1.1)通过CONSTRAINT [FK_name] FOREIGN KEY (FK_column) REFERENCES [R_table_name]([R_table_column]) ——这种关系可以有级联关系

SQL> CREATE SHARDED TABLE Customers
  2  (
  3  CustId VARCHAR2(60) NOT NULL,
  4  FirstName VARCHAR2(60),
  5  LastName VARCHAR2(60),
  6  Class VARCHAR2(10),
  7  Geo VARCHAR2(8),
  8  CustProfile VARCHAR2(4000),
  9  Passwd RAW(60),
 10  CONSTRAINT pk_customers PRIMARY KEY (CustId),
 11  CONSTRAINT json_customers CHECK (CustProfile IS JSON)
 12  ) TABLESPACE SET TSP_SET_1
 13  PARTITION BY CONSISTENT HASH (CustId) PARTITIONS AUTO;
 
Table created.
 
SQL>
SQL> CREATE SHARDED TABLE Orders
  2  (
  3  OrderId INTEGER NOT NULL,
  4  CustId VARCHAR2(60) NOT NULL,
  5  OrderDate TIMESTAMP NOT NULL,
  6  SumTotal NUMBER(19,4),
  7  Status CHAR(4),
  8  constraint pk_orders primary key (CustId, OrderId),
  9  constraint fk_orders_parent foreign key (CustId)
 10  references Customers on delete cascade
 11  ) partition by reference (fk_orders_parent);
 
Table created.
 
SQL>
SQL> CREATE SEQUENCE Orders_Seq;
 
Sequence created.
 
SQL> CREATE SHARDED TABLE LineItems
  2  (
  3  OrderId INTEGER NOT NULL,
  4  CustId VARCHAR2(60) NOT NULL,
  5  ProductId INTEGER NOT NULL,
  6  Price NUMBER(19,4),
  7  Qty NUMBER,
  8  constraint pk_items primary key (CustId, OrderId, ProductId),
  9  constraint fk_items_parent foreign key (CustId, OrderId)
 10  references Orders on delete cascade
 11  ) partition by reference (fk_items_parent);
 
Table created.
 
SQL>

可以看到上面根表(root table)是customer表,主键是CustId,partition是根据CONSISTENT HASH,对CustId进行分区;
下一级的表是order表,主键是CustId+OrderId,外键是CustId且references Customers表,partition是参考外键;
再下一级表是LineItems表,主键是CustId+OrderId+ProductId,外键是CustId+OrderId,即上一层表的主键,partition是参考外键。

(1.2)用关键字PARENT来显式地说明父子关系。这种方式只有父子一层关系,不能级联。

SQL>  CREATE SHARDED TABLE Customers
 2  ( CustNo NUMBER NOT NULL
 3  , Name VARCHAR2(50)
 4  , Address VARCHAR2(250)
 5  , region VARCHAR2(20)
 6  , class VARCHAR2(3)
 7  , signup DATE
 8  )
 9  PARTITION BY CONSISTENT HASH (CustNo)
 10 TABLESPACE SET ts1
 11 PARTITIONS AUTO
 12 ;
 
SQL> CREATE SHARDED TABLE Orders         
 2  ( OrderNo NUMBER                     
 3  , CustNo NUMBER                     
 4  , OrderDate DATE                     
 5  )                                   
 6  PARENT Customers                     
 7  PARTITION BY CONSISTENT HASH (CustNo)
 8  TABLESPACE SET ts1                   
 9  PARTITIONS AUTO                     
 10 ;
 
SQL> CREATE SHARDED TABLE LineItems
 2  ( LineNo NUMBER
 3  , OrderNo NUMBER
 4  , CustNo NUMBER
 5  , StockNo NUMBER
 6  , Quantity NUMBER
 7  )
 8  PARENT Customers
 9  PARTITION BY CONSISTENT HASH (CustNo)
 10 TABLESPACE SET ts1
 11 PARTITIONS AUTO
 12 ;

注意上面的order表和LineItems表,都是属于同一个父表,即parent customers表。

另外,也注意上面的CustNo字段,在每个表中都是有的。而上面说的第一种的级联关系的table family,可以不在每个表中都存在CustNo字段。

(2)Sharded Table和Duplicated table:
上面创建的表,都是sharded table,即表的各个分区,可以分布在不同的shard node上。各个shard node上的分区是不同的,也就是说整个表的内容被切割成片,分布在不同的机器上。
而duplicated table,是整个表的内容在各个机器上都一样。duplicated table在各个shard node上,是以read only mv(物化视图)的方式呈现:在shardcat中存放master table;在各个shard中是read only mv。duplicated table的同步,就是以物化视图刷新的方式进行的。

(3)chunk:
A chunk contains a single partition from each table of a table family. This guarantees that related data from different sharded tables can be moved together.
chunk的概念和table family密不可分。因为family之间的各个表都是有关系的,我们把某个table family的一组分区称作一个chunk。如
customers表中的1号~100万号客户信息在一个分区中;在order表中,也有1号~100万号的客户的order信息,也在一个分区中;另外LineItems表中的1号~100万号客户的明细信息,也在一个分区中,我们希望这些相关的分区,都是在一个shard node中,避免cross shard join。所以,我们把这些在同一个table family内,相关的分区叫做chunk。在进行re-sharding的时候,是以chunk为单位进行移动。因此可以避免cross shard join。

另外,虽然我们设计了chunk来避免cross shard join,但是在做查询的时候,还是有可能对不属于同一个table family的表做cross shard join,这在设计之初就应该尽量避免。如果确实需要这种跨shard的关联,还不如把相关表做成duplicated table。

注:chunk的数量是在CREATE SHARDCATALOG时指定的,如果不指定,默认值是每个shard 120个chunk。
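比如在gdsctl里创建shard catalog时,可以用-chunks参数显式指定每个shard的chunk数量(下面只是一个示意,连接串、用户以及其他必选参数请按实际环境和gdsctl help create shardcatalog补全):

# 创建shard catalog时显式指定chunk数量与sharding方式(示意)
gdsctl create shardcatalog -database sdb1:1521/shardcat -user mygdsadmin/oracle -sharding system -chunks 12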

(4)chunk move:
chunk move的条件:
(1)re-sharding发生,即当shard的数量发生改变的时候,会发生chunk move。
注:re-sharding之后,各个shard上chunk的数量虽然平均,但编号并不连续。如:
原来是2个shard,1~6号chunk在shard 1,7~12号chunk在shard2。加多一个shard后,1~4号chunk在shard 1,7~10号chunk在shard 2,那么5~6,11~12号chunk在shard 3上。即:总是挪已经存在的shard node上的后面部分chunk。
(2)DBA手工发起:move chunk -chunk 7 -source sh2 -target sh1。将chunk 7从shard node sh2上,挪到shard node sh1上。

chunk move的过程:
在chunk migration的时候,chunk大部分时间是online的,但是期间会有几秒钟的时间chunk中的data处于read-only状态。
chunk migration的过程就是综合利用rman增量备份和TTS的过程:
level 0备份源chunk相关的TS,还原到新shard->开始FAN(等待几秒)->将源chunk相关的TS置于read-only->level 1备份还原->chunk up(更新routing table连新shard)->chunk down(更新routing table断开源shard)->结束FAN(等待几秒)->删除原shard上的老chunk
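对应的gdsctl操作示意如下(chunk编号请以config chunks的实际输出为准):

# 查看当前chunk在各个shard上的分布
gdsctl config chunks
# 手工把7号chunk从sh2挪到sh1(即上文提到的DBA手工发起的chunk move)
gdsctl move chunk -chunk 7 -source sh2 -target sh1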

(5)shardspace:
create tablespace set的时候,指定shardspace。主要是在Composite sharding架构中使用多个shardspace。

ADD SHARDSPACE -SHARDSPACE shspace1, shspace2;
 
ADD SHARD -CONNECT shard1 -SHARDSPACE shspace1;
ADD SHARD -CONNECT shard2 -SHARDSPACE shspace1;
ADD SHARD -CONNECT shard3 -SHARDSPACE shspace1;
ADD SHARD -CONNECT shard4 -SHARDSPACE shspace2;
ADD SHARD -CONNECT shard5 -SHARDSPACE shspace2;
ADD SHARD -CONNECT shard6 -SHARDSPACE shspace2;
ADD SHARD -CONNECT shard7 -SHARDSPACE shspace2;
 
CREATE TABLESPACE SET tbs1 IN SHARDSPACE shspace1;
CREATE TABLESPACE SET tbs2 IN SHARDSPACE shspace2;

(6)sharding 方式:
在gdsctl create shardcatalog时指定,
System-Managed Sharding:partitioning by consistent hash.主要作用是打散数据。

Composite Sharding:create multiple shardgroups for different subsets of data in a table partitioned by consistent hash. 分层,可以用不同的shardspace,不同的tablespace set,使用不同的硬件。高级的customer用更好的硬件资源。

Using Subpartitions with Sharding:all of the subpartitioning methods provided by Oracle Database are also supported for sharding. 更细粒度的分区。

(7)如何部署sharding:
简单来说,就是在gdsctl中运行如下命令。关于详细内容,我会另外再写一篇,讲讲sharding的部署和架构中的注意点。

  1. CREATE SHARDCATALOG
  2. ADD GSM; START GSM (create and start shard directors)
  3. CREATE SHARD (for each shard)
  4. DEPLOY

创建Oracle sharding database


本文继『Oracle sharding database的一些概念』之后,介绍如何搭建一个oracle sharding database的环境,以及搭建过程中可能会遇到的known issue(有很多坑,且在mos上还没有解决方案,都是一个一个自己摸索解决的)。

你在本文中可以看到:
(一)安装介质需求。
(二)HIGH LEVEL安装步骤。
(三)详细安装步骤。
(四)建立应用用户,利用应用用户建立sharded table和duplicated table:
(五)安装过程known issue。
(六)sharded table的一些测试,以及发现其对dml的一些限制。

关于sharding在架构上的一些想法和注意点,我准备下一篇文章再谈。

(一)安装介质:

1. 你需要12.2的database的安装介质(两个zip压缩包)来安装db软件,用于shardcat数据库,和shard node主机上的数据库。
2. 你还需要12.2的gsm安装介质(一个压缩包)来安装GDS框架和gsm服务。这是安装在shardcat主机上的。
3. 你还需要12.2的client安装介质(一个压缩包)来装schagent,这是安装在shard node主机上的。安装schagent是为了在shardcat主机上发起命令,由远程shard node上的agent调起netca和dbca来安装监听和数据库。另外,如果shard node有active dataguard,agent也会自动帮你配好dataguard、broker和FSFO。

(二)HIGH LEVEL安装步骤:

1.Oracle Sharding Prerequisites
2.Setting Up the Oracle Sharding Host Environment Scripts
3.Installing Oracle Database
4.Installing the Shard Director Software
5.Creating the Shard Catalog Database
6.Setting Up the Oracle Sharding Management and Routing Tier
7.Deploying and Managing a System-Managed SDB

(三)详细安装步骤:

1.Oracle Sharding Prerequisites

12.2企业版
non-cdb
使用文件系统而非ASM (12.2 Beta要求,正式发行后,可能会改)
主机hosts文件写上本机和各个shard node的IP解析
机器必须全新,不能残留之前有安装过oracle的信息。

2.Setting Up the Oracle Sharding Host Environment Scripts

目的是:shardcat和gds都安装在同一台主机上,用同一个oracle用户、不同的ORACLE_HOME,所以建立切换环境变量的脚本,方便在database环境和gsm环境之间来回切换。
admin guide上用的是shardcat.sh、shard-director1.sh脚本,但我的做法可能更简单实用,直接定义成alias。(这种方法其实是跟ORACLE BASE学的。老DBA应该都听说过这个网站。)
##修改环境变量,在环境变量中设置2个alias别名
[oracle12c@sdb1 ~]$ cat .bash_profile
# .bash_profile
 
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
 
# User specific environment and startup programs
 
PATH=$PATH:$HOME/bin
 
export PATH
 
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
 
ORACLE_BASE=/u01/ora12c/app/oracle; export ORACLE_BASE
DB_HOME=$ORACLE_BASE/product/12.2.0/db_1; export DB_HOME
GSM_HOME=$ORACLE_BASE/product/12.2.0/gsm; export GSM_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_SID=shardcat; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH:$ORACLE_HOME/OPatch; export PATH
 
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
#LD_ASSUME_KERNEL=2.4.1; export LD_ASSUME_KERNEL
 
if [ $USER = "oracle12c" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
 
alias gsm_env='. /home/oracle12c/gsm_env'
alias db_env='. /home/oracle12c/db_env'
 
##创建2个脚本,gsm_env和db_env
[oracle12c@sdb1 ~]$ cat /home/oracle12c/gsm_env
ORACLE_HOME=$GSM_HOME; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
[oracle12c@sdb1 ~]$
[oracle12c@sdb1 ~]$ cat /home/oracle12c/db_env
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
[oracle12c@sdb1 ~]$
[oracle12c@sdb1 ~]$
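之后在同一个shell会话里切换两套环境就很方便,比如:

# 在db环境和gsm环境之间切换(依赖上面.bash_profile里定义的alias)
db_env;  echo $ORACLE_HOME    # 应输出 /u01/ora12c/app/oracle/product/12.2.0/db_1
gsm_env; echo $ORACLE_HOME    # 应输出 /u01/ora12c/app/oracle/product/12.2.0/gsm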

3.Installing Oracle Database

安装db软件,解开2个压缩包,加载一下上面建好的db_env环境变量,开始跑runInstaller,选择software only,没啥好说的。注意ORACLE_HOME的路径和环境变量中定义的DB的ORACLE_HOME一致。
在shardcat主机和shard node主机,都需要安装好db软件。

4.Installing the Shard Director Software

安装gds框架和gsm服务,解开gsm的压缩包,加载一下上面建好的gsm_env环境变量,开始跑runInstaller。注意要选择不同于DB的ORACLE_HOME,且ORACLE_HOME的路径要和环境变量中定义的gsm的ORACLE_HOME一致。
本文中gds安装在和shardcat同一个主机上。即shardcat和shard Director在同一主机。(其实,如果有需要,也可以不同主机的)

4.b. Install schagent in all shard nodes(admin guide文档没写这步骤,本人免费赠送)

选择client安装包,解压缩后,运行runInstaller,在每个shard node上建立agent

5.Creating the Shard Catalog Database

运行dbca开始建立数据库实例,这个实例是放分片数据的元数据的。我们把这个实例名叫shardcat。
安装好后,再建立listener。以便可以连接这个数据库。

6.Setting Up the Oracle Sharding Management and Routing Tier

登录shardcat主机,登录shardcat数据库:
--建立tablespace set需要使用omf,所以需要指定db_create_file_dest参数。
SQL> alter system set db_create_file_dest='/u01/ora12c/app/oracle/oradata' scope=both;
SQL> alter system set open_links=16 scope=spfile;
SQL> alter system set open_links_per_instance=16 scope=spfile;
SQL> startup force
 
SQL> alter user gsmcatuser account unlock;
SQL> alter user gsmcatuser identified by oracle;                 
SQL> CREATE USER mygdsadmin IDENTIFIED BY oracle;
SQL> GRANT connect, create session, gsmadmin_role to mygdsadmin;
SQL> grant inherit privileges on user SYS to GSMADMIN_INTERNAL; 
 
SQL> alter system set events 'immediate trace name GWM_TRACE level 7';   
SQL> alter system set event='10798 trace name context forever, level 7' scope=spfile; 
 
SQL> execute dbms_xdb.sethttpport(8080);
SQL> commit;
SQL> @?/rdbms/admin/prvtrsch.plb
SQL> exec DBMS_SCHEDULER.SET_AGENT_REGISTRATION_PASS('oracleagent');
登录shard node主机:
[oracle12c@sdb2 ~]$ schagent -start
 
Scheduler agent started using port 1025
[oracle12c@sdb2 ~]$
[oracle12c@sdb2 ~]$
[oracle12c@sdb2 ~]$ schagent -status
Agent running with PID 2084
 
Agent_version:12.2.0.1.2
Running_time:00:00:17
Total_jobs_run:0
Running_jobs:0
Platform:Linux
ORACLE_HOME:/u01/ora12c/app/oracle/product/12.2.0/db_1
ORACLE_BASE:/u01/ora12c/app/oracle
Port:1025
Host:sdb2
 
[oracle12c@sdb2 ~]$
[oracle12c@sdb2 ~]$ echo oracleagent|schagent -registerdatabase sdb1 8080
Agent Registration Password ? 
Oracle Scheduler Agent Registration for 12.2.0.1.2 Agent
Agent Registration Successful!
[oracle12c@sdb2 ~]$
[oracle12c@sdb2 oracle]$ mkdir -p /u01/ora12c/app/oracle/oradata     
[oracle12c@sdb2 oracle]$ mkdir -p /u01/ora12c/app/oracle/fast_recovery_area 
[oracle12c@sdb2 oracle]$
 
各个shard node主机都进行上述操作。

7.Deploying and Managing a System-Managed SDB
我们开始部署,以最简单的System-Managed SDB为例。
另外,admin guide中介绍的是4台主机做shard node,其中每2台互为dataguard主备。我们这边为了节约空间和资源,不搞dataguard了,只建立primary库。因此只要2台主机做shard node。

[oracle12c@sdb1 ~]$ gsm_env
[oracle12c@sdb1 ~]$ gdsctl
GDSCTL: Version 12.2.0.0.0 - Beta on Mon May 09 23:11:05 CST 2016
 
Copyright (c) 2011, 2015, Oracle.  All rights reserved.
 
Welcome to GDSCTL, type "help" for information.
 
Current GSM is set to SHARDDIRECTOR1
GDSCTL>set gsm -gsm sharddirector1 
GDSCTL>
GDSCTL>
GDSCTL>connect mygdsadmin/oracle
 
Catalog connection is established
GDSCTL>GDSCTL>
GDSCTL>
GDSCTL>
GDSCTL>add shardgroup -shardgroup primary_shardgroup -deploy_as primary -region region1
 
GDSCTL>create shard -shardgroup primary_shardgroup -destination sdb2 -credential oracle_cred
DB Unique Name: sh1
GDSCTL>
GDSCTL>add invitednode sdb3
GDSCTL>create shard -shardgroup primary_shardgroup -destination sdb3 -credential oracle_cred
DB Unique Name: sh2
GDSCTL>
GDSCTL>config
 
Regions
------------------------
region1                       
 
GSMs
------------------------
sharddirector1               
 
Sharded Database
------------------------
shardcat                     
 
Databases
------------------------
sh1                           
sh2                           
 
Shard Groups
------------------------
primary_shardgroup           
 
Shard spaces
------------------------
shardspaceora                 
 
Services
------------------------
 
GDSCTL pending requests
------------------------
Command                       Object                        Status                       
-------                       ------                        ------                       
 
Global properties
------------------------
Name: oradbcloud
Master GSM: sharddirector1
DDL sequence #: 0
 
 
GDSCTL>
GDSCTL>
GDSCTL>
GDSCTL>config shardspace
SHARDSPACE                    Chunks                       
----------                    ------                       
shardspaceora                 12                           
 
GDSCTL>config shardgroup
Shard Group         Chunks Region              SHARDSPACE         
-----------         ------ ------              ----------         
primary_shardgroup  12     region1             shardspaceora       
 
GDSCTL>config vncr
Name                          Group ID                     
----                          --------                     
sdb2                                                       
sdb3                                                       
192.168.56.21                                               
 
GDSCTL>config shard
Name                Shard Group         Status    State       Region    Availability
----                -----------         ------    -----       ------    ------------
sh1                 primary_shardgroup  U         none        region1   -           
sh2                 primary_shardgroup  U         none        region1   -           
 
GDSCTL>deploy
GDSCTL>

此时,就开始部署shard了。shard node上的agent会自动调用netca和dbca,创建listener和database,2个shard node的操作是并行进行的。(如果有dataguard,那么是先建立一对主备,再建立另一对主备。)你可以分别在两个shard node上用ps -ef|grep ora_ 看到已经有sh1和sh2的实例了。

等deploy完,我们可以检查一下shard的情况了:

GDSCTL>config shard
Name                Shard Group         Status    State       Region    Availability
----                -----------         ------    -----       ------    ------------
sh1                 primary_shardgroup  Ok        Deployed    region1   ONLINE       
sh2                 primary_shardgroup  Ok        Deployed    region1   ONLINE       
 
GDSCTL>databases
Database: "sh1" Registered: Y State: Ok ONS: N. Role: PRIMARY Instances: 1 Region: region1
   Registered instances:
     shardcat%1
Database: "sh2" Registered: Y State: Ok ONS: N. Role: PRIMARY Instances: 1 Region: region1
   Registered instances:
     shardcat%11
 
GDSCTL>
GDSCTL>config shard -shard sh1
Name: sh1
Shard Group: primary_shardgroup
Status: Ok
State: Deployed
Region: region1
Connection string: sdb2:1521/sh1:dedicated
SCAN address:
ONS remote port: 0
Disk Threshold, ms: 20
CPU Threshold, %: 75
Version: 12.2.0.0
Last Failed DDL:
DDL Error: ---
Failed DDL id:
Availability: ONLINE
 
 
Supported services
------------------------
Name                                                            Preferred Status   
----                                                            --------- ------

建立service:

GDSCTL>add service -service oltp_rw_srvc -role primary
GDSCTL>
GDSCTL>config service
 
 
Name           Network name                  Pool           Started Preferred all
----           ------------                  ----           ------- -------------
oltp_rw_srvc   oltp_rw_srvc.shardcat.oradbcl shardcat       No      Yes           
               oud                                                               
 
GDSCTL>
GDSCTL>start service -service oltp_rw_srvc
GDSCTL>
GDSCTL>status service
Service "oltp_rw_srvc.shardcat.oradbcloud" has 2 instance(s). Affinity: ANYWHERE
   Instance "shardcat%1", name: "sh1", db: "sh1", region: "region1", status: ready.
   Instance "shardcat%11", name: "sh2", db: "sh2", region: "region1", status: ready.
 
GDSCTL>
(2016-05-14更新:这个service是按role创建的,当ADG发生主备切换后,service会自动漂移到切换后的primary库上。)
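如果shard是带ADG备库的部署,通常还会再建一个跑在备库上的只读service,示意如下(本文环境没有备库,命令仅供参考,具体以gdsctl help add service为准):

# 只读查询service,跑在physical standby角色的shard上(示意)
gdsctl add service -service oltp_ro_srvc -role physical_standby
gdsctl start service -service oltp_ro_srvc
gdsctl status service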

(四)建立应用用户,利用应用用户建立sharded table和duplicated table:

[oracle12c@sdb1 ~]$ db_env
[oracle12c@sdb1 ~]$ sqlplus "/ as sysdba"
 
SQL*Plus: Release 12.2.0.0.2 Beta on Mon May 9 23:37:34 2016
 
Copyright (c) 1982, 2015, Oracle.  All rights reserved.
 
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.0.2 - 64bit Beta
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 
SQL> alter session enable shard ddl;
 
Session altered.
 
SQL> create user app_schema identified by oracle;
 
User created.
 
SQL> grant all privileges to app_schema;
 
Grant succeeded.
 
SQL> grant gsmadmin_role to app_schema;
 
Grant succeeded.
 
SQL> grant select_catalog_role to app_schema;
 
Grant succeeded.
 
SQL> grant connect, resource to app_schema;
 
Grant succeeded.
 
SQL> grant dba to app_schema;
 
Grant succeeded.
 
SQL> grant execute on dbms_crypto to app_schema;
 
Grant succeeded.
 
SQL>

利用应用用户登录,创建sharded table和duplicated table

SQL> conn app_schema/oracle
Connected.
SQL>
SQL> alter session enable shard ddl;
 
Session altered.
 
SQL> CREATE TABLESPACE SET TSP_SET_1 using template (datafile size 100m extent
  2  management local segment space management auto );
 
Tablespace created.
 
SQL>
SQL> CREATE TABLESPACE products_tsp datafile size 100m extent management local uniform
  2  size 1m;
 
Tablespace created.
 
SQL>
SQL>-- Create sharded table family
SQL> CREATE SHARDED TABLE Customers
  2  (
  3  CustId VARCHAR2(60) NOT NULL,
  4  FirstName VARCHAR2(60),
  5  LastName VARCHAR2(60),
  6  Class VARCHAR2(10),
  7  Geo VARCHAR2(8),
  8  CustProfile VARCHAR2(4000),
  9  Passwd RAW(60),
 10  CONSTRAINT pk_customers PRIMARY KEY (CustId),
 11  CONSTRAINT json_customers CHECK (CustProfile IS JSON)
 12  ) TABLESPACE SET TSP_SET_1
 13  PARTITION BY CONSISTENT HASH (CustId) PARTITIONS AUTO;
 
Table created.
 
SQL>
SQL> CREATE SHARDED TABLE Orders
  2  (
  3  OrderId INTEGER NOT NULL,
  4  CustId VARCHAR2(60) NOT NULL,
  5  OrderDate TIMESTAMP NOT NULL,
  6  SumTotal NUMBER(19,4),
  7  Status CHAR(4),
  8  constraint pk_orders primary key (CustId, OrderId),
  9  constraint fk_orders_parent foreign key (CustId)
 10  references Customers on delete cascade
 11  ) partition by reference (fk_orders_parent);
 
Table created.
 
SQL> CREATE SEQUENCE Orders_Seq;
 
Sequence created.
 
SQL> CREATE SHARDED TABLE LineItems
  2  (
  3  OrderId INTEGER NOT NULL,
  4  CustId VARCHAR2(60) NOT NULL,
  5  ProductId INTEGER NOT NULL,
  6  Price NUMBER(19,4),
  7  Qty NUMBER,
  8  constraint pk_items primary key (CustId, OrderId, ProductId),
  9  constraint fk_items_parent foreign key (CustId, OrderId)
 10  references Orders on delete cascade
 11  ) partition by reference (fk_items_parent);
 
Table created.
 
SQL>
SQL> -- duplicated table
SQL> CREATE DUPLICATED TABLE Products
  2  (
  3  ProductId INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
  4  Name VARCHAR2(128),
  5  DescrUri VARCHAR2(128),
  6  LastPrice NUMBER(19,4)
  7  ) TABLESPACE products_tsp;
 
Table created.
 
SQL>
Check on shardcat:
SQL> select TABLESPACE_NAME, BYTES/1024/1024 MB from sys.dba_data_files order by
  2  tablespace_name;
 
TABLESPACE_NAME                        MB
------------------------------ ----------
PRODUCTS_TSP                          100
SYSAUX                                690
SYSTEM                                880
TSP_SET_1                             100
UNDOTBS1                              410
USERS                                   5
 
6 rows selected.
 
SQL>
SQL>
SQL> select table_name, partition_name, tablespace_name from dba_tab_partitions
  2  where tablespace_name like 'C%TSP_SET_1' order by tablespace_name;
 
no rows selected
 
SQL> select table_name, partition_name, tablespace_name from dba_tab_partitions where tablespace_name like '%SET%';
SQL> col TABLE_NAME for a20
SQL> col PARTITION_NAME for a20
SQL> col TABLESPACE_NAME for a20
SQL> /
 
TABLE_NAME           PARTITION_NAME       TABLESPACE_NAME
-------------------- -------------------- --------------------
CUSTOMERS            CUSTOMERS_P1         TSP_SET_1
ORDERS               CUSTOMERS_P1         TSP_SET_1
LINEITEMS            CUSTOMERS_P1         TSP_SET_1
 
SQL> select TABLESPACE_NAME, BYTES/1024/1024 MB from sys.dba_data_files;
 
TABLESPACE_NAME              MB
-------------------- ----------
SYSTEM                      880
SYSAUX                      690
UNDOTBS1                    410
USERS                         5
TSP_SET_1                   100
PRODUCTS_TSP                100
 
6 rows selected.
 
SQL> l
  1* select TABLESPACE_NAME, BYTES/1024/1024 MB from sys.dba_data_files
SQL> /
 
TABLESPACE_NAME              MB
-------------------- ----------
SYSTEM                      880
SYSAUX                      690
UNDOTBS1                    410
USERS                         5
TSP_SET_1                   100
PRODUCTS_TSP                100
 
6 rows selected.
 
SQL>   
SQL>
SQL>
SQL>
SQL>
SQL> select a.name Shard, count( b.chunk_number) Number_of_Chunks from
  2  gsmadmin_internal.database a, gsmadmin_internal.chunk_loc b where
  3  a.database_num=b.database_num group by a.name;
 
SHARD                          NUMBER_OF_CHUNKS
------------------------------ ----------------
sh1                                           6
sh2                                           6
 
SQL>
On shard node 1 you can check:
[oracle12c@sdb2 trace]$ export ORACLE_SID=sh1
[oracle12c@sdb2 trace]$ sqlplus "/ as sysdba"
 
SQL*Plus: Release 12.2.0.0.2 Beta on Mon May 9 23:51:44 2016
 
Copyright (c) 1982, 2015, Oracle.  All rights reserved.
 
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.0.2 - 64bit Beta
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 
SQL> set pages 1000
SQL> select TABLESPACE_NAME, BYTES/1024/1024 MB from sys.dba_data_files order by
  2  tablespace_name;
 
TABLESPACE_NAME                        MB
------------------------------ ----------
C001TSP_SET_1                         100
C002TSP_SET_1                         100
C003TSP_SET_1                         100
C004TSP_SET_1                         100
C005TSP_SET_1                         100
C006TSP_SET_1                         100
PRODUCTS_TSP                          100
SYSAUX                                650
SYSTEM                                890
SYS_SHARD_TS                          100
TSP_SET_1                             100
UNDOTBS1                              110
USERS                                   5
 
13 rows selected.
 
SQL>
SQL> col TABLE_NAME for a30   
SQL> col PARTITION_NAME for a30
SQL> col TABLESPACE_NAME for a30
SQL>
SQL> select table_name, partition_name, tablespace_name from dba_tab_partitions
  2  where tablespace_name like 'C%TSP_SET_1' order by tablespace_name;
 
TABLE_NAME                     PARTITION_NAME                 TABLESPACE_NAME
------------------------------ ------------------------------ ------------------------------
LINEITEMS                      CUSTOMERS_P1                   C001TSP_SET_1
CUSTOMERS                      CUSTOMERS_P1                   C001TSP_SET_1
ORDERS                         CUSTOMERS_P1                   C001TSP_SET_1
CUSTOMERS                      CUSTOMERS_P2                   C002TSP_SET_1
ORDERS                         CUSTOMERS_P2                   C002TSP_SET_1
LINEITEMS                      CUSTOMERS_P2                   C002TSP_SET_1
CUSTOMERS                      CUSTOMERS_P3                   C003TSP_SET_1
LINEITEMS                      CUSTOMERS_P3                   C003TSP_SET_1
ORDERS                         CUSTOMERS_P3                   C003TSP_SET_1
LINEITEMS                      CUSTOMERS_P4                   C004TSP_SET_1
CUSTOMERS                      CUSTOMERS_P4                   C004TSP_SET_1
ORDERS                         CUSTOMERS_P4                   C004TSP_SET_1
CUSTOMERS                      CUSTOMERS_P5                   C005TSP_SET_1
ORDERS                         CUSTOMERS_P5                   C005TSP_SET_1
LINEITEMS                      CUSTOMERS_P5                   C005TSP_SET_1
CUSTOMERS                      CUSTOMERS_P6                   C006TSP_SET_1
ORDERS                         CUSTOMERS_P6                   C006TSP_SET_1
LINEITEMS                      CUSTOMERS_P6                   C006TSP_SET_1
 
18 rows selected.
 
###########################################
On shard node 2 you can check:
[oracle12c@sdb3 trace]$ export ORACLE_SID=sh2
[oracle12c@sdb3 trace]$
[oracle12c@sdb3 trace]$
[oracle12c@sdb3 trace]$ sqlplus "/ as sysdba"
 
SQL*Plus: Release 12.2.0.0.2 Beta on Mon May 9 23:52:06 2016
 
Copyright (c) 1982, 2015, Oracle.  All rights reserved.
 
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.0.2 - 64bit Beta
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 
SQL> set pages 1000
SQL> select TABLESPACE_NAME, BYTES/1024/1024 MB from sys.dba_data_files order by
  2  tablespace_name;
 
TABLESPACE_NAME                        MB
------------------------------ ----------
C007TSP_SET_1                         100
C008TSP_SET_1                         100
C009TSP_SET_1                         100
C00ATSP_SET_1                         100
C00BTSP_SET_1                         100
C00CTSP_SET_1                         100
PRODUCTS_TSP                          100
SYSAUX                                650
SYSTEM                                890
SYS_SHARD_TS                          100
TSP_SET_1                             100
UNDOTBS1                              115
USERS                                   5
 
13 rows selected.
 
SQL>
SQL>
SQL> l
  1  select table_name, partition_name, tablespace_name from dba_tab_partitions
  2* where tablespace_name like 'C%TSP_SET_1' order by tablespace_name
SQL> /
 
TABLE_NAME                     PARTITION_NAME                 TABLESPACE_NAME
------------------------------ ------------------------------ ------------------------------
ORDERS                         CUSTOMERS_P7                   C007TSP_SET_1
LINEITEMS                      CUSTOMERS_P7                   C007TSP_SET_1
CUSTOMERS                      CUSTOMERS_P7                   C007TSP_SET_1
ORDERS                         CUSTOMERS_P8                   C008TSP_SET_1
CUSTOMERS                      CUSTOMERS_P8                   C008TSP_SET_1
LINEITEMS                      CUSTOMERS_P8                   C008TSP_SET_1
LINEITEMS                      CUSTOMERS_P9                   C009TSP_SET_1
ORDERS                         CUSTOMERS_P9                   C009TSP_SET_1
CUSTOMERS                      CUSTOMERS_P9                   C009TSP_SET_1
LINEITEMS                      CUSTOMERS_P10                  C00ATSP_SET_1
ORDERS                         CUSTOMERS_P10                  C00ATSP_SET_1
CUSTOMERS                      CUSTOMERS_P10                  C00ATSP_SET_1
ORDERS                         CUSTOMERS_P11                  C00BTSP_SET_1
LINEITEMS                      CUSTOMERS_P11                  C00BTSP_SET_1
CUSTOMERS                      CUSTOMERS_P11                  C00BTSP_SET_1
LINEITEMS                      CUSTOMERS_P12                  C00CTSP_SET_1
CUSTOMERS                      CUSTOMERS_P12                  C00CTSP_SET_1
ORDERS                         CUSTOMERS_P12                  C00CTSP_SET_1
 
18 rows selected.
 
SQL>

(5) Known issues hit during the installation:
Known Issue(1)STANDARD_ERROR="Launching external job failed: Invalid username or password"

Symptom:
GDSCTL>create shard -shardgroup shgrp1 -destination sdb2 -credential oracle_cred
GSM-45029: SQL error
ORA-02610: Remote job failed with error:
EXTERNAL_LOG_ID="job_23872_1",
USERNAME="oracle",
STANDARD_ERROR="Launching external job failed: Invalid username or password"
ORA-06512: at "GSMADMIN_INTERNAL.DBMS_GSM_POOLADMIN", line 6920
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "GSMADMIN_INTERNAL.DBMS_GSM_POOLADMIN", line 4596
ORA-06512: at line 1
 
Solution:
GDSCTL>connect sdb1:1521:shardcat
username:sdb_admin
password:
Catalog connection is established
GDSCTL>
GDSCTL>remove credential -CREDENTIAL oracle_cred
GDSCTL>add credential -credential oracle_cred -osaccount oracle12c -ospassword oracle12c
GDSCTL>

Known Issue(2)ORA-06512: at "GSMADMIN_INTERNAL.DBMS_GSM_POOLADMIN", line 14499

Symptom:
GDSCTL>deploy
GSM-45029: SQL error
ORA-02610: Remote job failed with error:
EXTERNAL_LOG_ID="job_23892_7",
USERNAME="oracle12c"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "GSMADMIN_INTERNAL.DBMS_GSM_POOLADMIN", line 14499
ORA-06512: at line 1
 
GDSCTL>
GDSCTL>config shard
Name                Shard Group         Status    State       Region    Availability
----                -----------         ------    -----       ------    ------------
sh1                 shgrp1              U         Created     region1   -           
sh2                 shgrp2              U         Created     region2   -           
 
 
Solution: remove and recreate the shard:
GDSCTL>remove shard -shardgroup shgrp2 -force
GDSCTL>config shard
Name                Shard Group         Status    State       Region    Availability
----                -----------         ------    -----       ------    ------------
sh1                 shgrp1              U         Created     region1   -           
 
GDSCTL>
GDSCTL>
GDSCTL>
GDSCTL>
GDSCTL>
GDSCTL>create shard -shardgroup shgrp2 -destination sdb3 -credential oracle_cred
DB Unique Name: sh3
GDSCTL>config shard
Name                Shard Group         Status    State       Region    Availability
----                -----------         ------    -----       ------    ------------
sh1                 shgrp1              U         Created     region1   -           
sh3                 shgrp2              U         none        region2   -           
 
GDSCTL>deploy
 GDSCTL>
GDSCTL>
GDSCTL>
GDSCTL>
GDSCTL>config shard
Name                Shard Group         Status    State       Region    Availability
----                -----------         ------    -----       ------    ------------
sh1                 shgrp1              Ok        Replicated  region1   -           
sh3                 shgrp2              Ok        Replicated  region2   -           
 
GDSCTL>

Known Issue(3)NET-40002: GSM endpoint not found in GSM.ora

Symptom:
GDSCTL>databases
GSM-45054: GSM error
NET-40002: GSM endpoint not found in GSM.ora
GDSCTL>status database
GSM-45054: GSM error
NET-40002: GSM endpoint not found in GSM.ora
GDSCTL>
 
Solution: log in to gdsctl specifying the GSM name.
[oracle12c@sdb1 ~]$ gdsctl gsm1
GDSCTL: Version 12.2.0.0.0 - Beta on Sat Dec 12 19:31:12 CST 2015
 
Copyright (c) 2011, 2015, Oracle.  All rights reserved.
 
Welcome to GDSCTL, type "help" for information.
 
GDSCTL>databases
Database: "sh1" Registered: N State: Ok ONS: N. Role: N/A Instances: 0 Region: region1
Database: "sh3" Registered: N State: Ok ONS: N. Role: N/A Instances: 0 Region: region2
 
GDSCTL>exit


Known Issue(4)ORA-02511: SQL query not allowed; the shard DDL is disabled.

Symptom:
-- on shard cat
[oracle12c@sdb1 ~]$ db_env
[oracle12c@sdb1 ~]$
[oracle12c@sdb1 ~]$
[oracle12c@sdb1 ~]$ sqlplus "/ as sysdba"
 
SQL*Plus: Release 12.2.0.0.0 Beta on Mon Feb 15 13:44:26 2016
 
Copyright (c) 1982, 2015, Oracle.  All rights reserved.
 
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.0.1 - 64bit Beta
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 
SQL>
SQL> CREATE TABLESPACE SET ts1 IN SHARDSPACE shardspaceora
  2  using template
  3  (datafile size 10m
  4  extent management local uniform size 256k
  5  segment space management auto
  6  online
  7  )
  8  /
CREATE TABLESPACE SET ts1 IN SHARDSPACE shardspaceora
*
ERROR at line 1:
ORA-02511: SQL query not allowed; the shard DDL is disabled.
 
 
Solution:
SQL> alter session enable shard ddl;
 
Session altered.
 
SQL> CREATE TABLESPACE SET ts1 IN SHARDSPACE shardspaceora
  2  using template
  3  (datafile size 10m
  4  extent management local uniform size 256k
  5  segment space management auto
  6  online
  7  )
  8  /
 
Tablespace created.
 
SQL>

Known Issue(5)Linux Error: 1: Operation not permitted

Symptom: during deploy, when netca is invoked via the agent it fails and the listener cannot be created, reporting Linux Error: 1: Operation not permitted
GDSCTL>deploy
GSM Errors:
CATALOG:ORA-45575: Deployment has terminated due to previous errors.
CATALOG:ORA-02610: Remote job failed with error:
EXTERNAL_LOG_ID="job_22857_8",
USERNAME="oracle12c"
For more details:
  select destination, output from all_scheduler_job_run_details
  where job_name='SHARD_SH1_NETCA'
CATALOG:ORA-02610: Remote job failed with error:
EXTERNAL_LOG_ID="job_22869_8",
USERNAME="oracle12c"
For more details:
  select destination, output from all_scheduler_job_run_details
  where job_name='SHARD_SH3_NETCA'
 
GDSCTL>
 
 
SQL> col OUTPUT for a60
SQL> /
 
DESTINATIO OUTPUT
---------- ------------------------------------------------------------
SDB2
           Parsing command line arguments:
               Parameter "silent" = true
               Parameter "responsefile" = /u01/ora12c/app/oracle/produc
           t/12.2.0/db_1/shard_sh1_netca.rsp
           Done parsing command line arguments.
           Oracle Net Services Configuration:
           Configuring Listener:LISTENER_sh1
           Listener configuration complete.
           Oracle Net Listener Startup:
               Running Listener Control:
                 /u01/ora12c/app/oracle/product/12.2.0/db_1/bin/lsnrctl
            start LISTENER_sh1
               Listener Control complete.
               Listener start failed.
           Profile configuration complete.
           Check the trace file for details: /u01/ora12c/app/oracle/cfg
           toollogs/netca/trace_OraDB12Home1-16050310PM5414.log
           Oracle Net Services configuration failed.  The exit code is
           1
 
 
SQL>           
SQL>
SQL>
SQL>
SQL> select destination, output from all_scheduler_job_run_details
  2    where job_name='SHARD_SH3_NETCA'
  3  /
 
DESTINATIO OUTPUT
---------- ------------------------------------------------------------
SDB4
           Parsing command line arguments:
               Parameter "silent" = true
               Parameter "responsefile" = /u01/ora12c/app/oracle/produc
           t/12.2.0/db_1/shard_sh3_netca.rsp
           Done parsing command line arguments.
           Oracle Net Services Configuration:
           Configuring Listener:LISTENER_sh3
           Listener configuration complete.
           Oracle Net Listener Startup:
               Running Listener Control:
                 /u01/ora12c/app/oracle/product/12.2.0/db_1/bin/lsnrctl
            start LISTENER_sh3
               Listener Control complete.
               Listener start failed.
           Profile configuration complete.
           Check the trace file for details: /u01/ora12c/app/oracle/cfg
           toollogs/netca/trace_OraDB12Home1-1605042PM0921.log
           Oracle Net Services configuration failed.  The exit code is
           1
 
 
SQL>
SQL>
 
 
[oracle12c@sdb2 ~]$ lsnrctl start LISTENER_sh1
 
LSNRCTL for Linux: Version 12.2.0.0.0 - Beta on 03-MAY-2016 23:04:02
 
Copyright (c) 1991, 2015, Oracle.  All rights reserved.
 
Starting /u01/ora12c/app/oracle/product/12.2.0/db_1/bin/tnslsnr: please wait...
 
TNSLSNR for Linux: Version 12.2.0.0.0 - Beta
System parameter file is /u01/ora12c/app/oracle/product/12.2.0/db_1/network/admin/listener.ora
Log messages written to /u01/ora12c/app/oracle/diag/tnslsnr/sdb2/listener_sh1/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=sdb2)(PORT=1521)))
Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
TNS-12555: TNS:permission denied
 TNS-12560: TNS:protocol adapter error
  TNS-00525: Insufficient privilege for operation
   Linux Error: 1: Operation not permitted
 
Listener failed to start. See the error message(s) above...
 
[oracle12c@sdb2 ~]$
 
Solution: delete the files under /var/tmp/.oracle
[root@sdb2 ~]# cd /var/tmp/.oracle
[root@sdb2 .oracle]# ls
s#16010.1  s#1923.2  s#1949.1  s#1989.2   s#2047.1  s#2088.2  s#2212.1  s#2417.2  s#2494.1  s#2641.2  s#3130.1  s#8114.2
s#16010.2  s#1924.1  s#1949.2  s#1991.1   s#2047.2  s#2102.1  s#2212.2  s#2434.1  s#2494.2  s#2667.1  s#3130.2  s#9056.1
s#1886.1   s#1924.2  s#1955.1  s#1991.2   s#2047.3  s#2102.2  s#2274.1  s#2434.2  s#2503.1  s#2667.2  s#3249.1  s#9056.2
s#1886.2   s#1931.1  s#1955.2  s#19963.1  s#2047.4  s#2108.1  s#2274.2  s#2435.1  s#2503.2  s#2708.1  s#3249.2  sEXTPROC1258
s#1902.1   s#1931.2  s#1958.1  s#19963.2  s#2049.1  s#2108.2  s#2307.1  s#2435.2  s#2547.1  s#2708.2  s#3289.1  sEXTPROC1521
s#1902.2   s#1934.1  s#1958.2  s#1999.1   s#2049.2  s#2126.1  s#2307.2  s#2441.1  s#2547.2  s#2771.1  s#3289.2
s#1906.1   s#1934.2  s#1961.1  s#1999.2   s#2052.1  s#2126.2  s#2333.1  s#2441.2  s#2574.1  s#2771.2  s#3491.1
s#1906.2   s#1938.1  s#1961.2  s#2020.1   s#2052.2  s#2128.1  s#2333.2  s#2452.1  s#2574.2  s#2836.1  s#3491.2
s#1909.1   s#1938.2  s#1964.1  s#2020.2   s#2056.1  s#2128.2  s#2339.1  s#2452.2  s#2591.1  s#2836.2  s#3643.1
s#1909.2   s#1939.1  s#1964.2  s#2030.1   s#2056.2  s#2130.1  s#2339.2  s#2471.1  s#2591.2  s#2849.1  s#3643.2
s#1909.3   s#1939.2  s#1966.1  s#2030.2   s#2067.1  s#2130.2  s#2356.1  s#2471.2  s#2591.3  s#2849.2  s#3980.1
s#1909.4   s#1942.1  s#1966.2  s#2034.1   s#2067.2  s#2133.1  s#2356.2  s#2477.1  s#2591.4  s#3018.1  s#3980.2
s#1912.1   s#1942.2  s#1982.1  s#2034.2   s#2083.1  s#2133.2  s#2383.3  s#2477.2  s#2607.1  s#3018.2  s#7211.1
s#1912.2   s#1945.1  s#1982.2  s#2036.1   s#2083.2  s#2190.1  s#2383.4  s#2483.1  s#2607.2  s#3079.1  s#7211.2
s#1923.1   s#1945.2  s#1989.1  s#2036.2   s#2088.1  s#2190.2  s#2417.1  s#2483.2  s#2641.1  s#3079.2  s#8114.1
[root@sdb2 .oracle]# rm -rf *
[root@sdb2 .oracle]#
 
[oracle12c@sdb2 admin]$ lsnrctl start LISTENER_SH1
 
LSNRCTL for Linux: Version 12.2.0.0.0 - Beta on 03-MAY-2016 23:19:18
 
Copyright (c) 1991, 2015, Oracle.  All rights reserved.
 
Starting /u01/ora12c/app/oracle/product/12.2.0/db_1/bin/tnslsnr: please wait...
 
TNSLSNR for Linux: Version 12.2.0.0.0 - Beta
System parameter file is /u01/ora12c/app/oracle/product/12.2.0/db_1/network/admin/listener.ora
Log messages written to /u01/ora12c/app/oracle/diag/tnslsnr/sdb2/listener_sh1/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=sdb2)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
 
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=sdb2)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SH1
Version                   TNSLSNR for Linux: Version 12.2.0.0.0 - Beta
Start Date                03-MAY-2016 23:19:18
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/ora12c/app/oracle/product/12.2.0/db_1/network/admin/listener.ora
Listener Log File         /u01/ora12c/app/oracle/diag/tnslsnr/sdb2/listener_sh1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=sdb2)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
The listener supports no services
The command completed successfully
[oracle12c@sdb2 admin]$ 
[oracle12c@sdb2 admin]$
[oracle12c@sdb2 admin]$ lsnrctl stop LISTENER_SH1
 
LSNRCTL for Linux: Version 12.2.0.0.0 - Beta on 03-MAY-2016 23:21:33
 
Copyright (c) 1991, 2015, Oracle.  All rights reserved.
 
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=sdb2)(PORT=1521)))
The command completed successfully
[oracle12c@sdb2 admin]$

Known Issue(6)Listener "LISTENER_SH1" already exists

Symptom:
GDSCTL>deploy
GSM Errors:
CATALOG:ORA-45575: Deployment has terminated due to previous errors.
CATALOG:ORA-02610: Remote job failed with error:
EXTERNAL_LOG_ID="job_22857_11",
USERNAME="oracle12c"
For more details:
  select destination, output from all_scheduler_job_run_details
  where job_name='SHARD_SH1_NETCA'
CATALOG:ORA-02610: Remote job failed with error:
EXTERNAL_LOG_ID="job_22869_11",
USERNAME="oracle12c"
For more details:
  select destination, output from all_scheduler_job_run_details
  where job_name='SHARD_SH3_NETCA'
 
GDSCTL>
 
 
SDB2       Parsing command line arguments:
               Parameter "silent" = true
               Parameter "responsefile" = /u01/ora12c/app/oracle/produc
           t/12.2.0/db_1/shard_sh1_netca.rsp
           Done parsing command line arguments.
           Oracle Net Services Configuration:
           Listener "LISTENER_SH1" already exists.
           Profile configuration complete.
           Check the trace file for details: /u01/ora12c/app/oracle/cfg
           toollogs/netca/trace_OraDB12Home1-16050311PM2533.log
           Oracle Net Services configuration failed.  The exit code is
           1
 
 
SQL> 
 
Solution: delete the already-existing listener.ora file.


Known Issue(7)ERROR: Insecure database cannot be registered

Symptom:
[oracle12c@sdb2 ~]$ echo oracleagent|schagent -registerdatabase sdb1 8080
Agent Registration Password ? 
ERROR: Insecure database cannot be registered.http://sdb1:8080/remote_scheduler_agent/register_agent
 
[oracle12c@sdb2 ~]$
 
Solution: on shardcat, run DBMS_SCHEDULER.SET_AGENT_REGISTRATION_PASS:
SQL> !hostname
sdb1
 
SQL>
SQL> exec DBMS_SCHEDULER.SET_AGENT_REGISTRATION_PASS('oracleagent'); 
 
PL/SQL procedure successfully completed.

Known Issue(8)BEGIN dbms_gsm_fixed.validateParameters(0); END;

Symptom:
GDSCTL>deploy
        GSM Errors:
CATALOG:ORA-02610: Remote job failed with error:
For more details, check the contents of $ORACLE_BASE/cfgtoollogs/dbca/sh1/customScripts.log on the destination host.
CATALOG:ORA-02610: Remote job failed with error:
For more details, check the contents of $ORACLE_BASE/cfgtoollogs/dbca/sh3/customScripts.log on the destination host.
CATALOG:ORA-45575: Deployment has terminated due to previous errors.
 
GDSCTL>
[oracle12c@sdb2 ~]$ cat  $ORACLE_BASE/cfgtoollogs/dbca/sh1/customScripts.log
BEGIN CUSTOM SCRIPT
DBID=707889309,
BEGIN dbms_gsm_fixed.validateParameters(0); END;
 
                     *
ERROR at line 1:
ORA-06550: line 1, column 22:
PLS-00302: component 'VALIDATEPARAMETERS' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
 
 
Database closed.
Database dismounted.
ORACLE instance shut down.
ORACLE instance started.
Total System Global Area 1627389952 bytes
Fixed Size                  4411288 bytes
Variable Size            1040187496 bytes
Database Buffers          570425344 bytes
Redo Buffers               12365824 bytes
Database mounted.
END CUSTOM SCRIPT
[oracle12c@sdb2 ~]$
 
Solution: the shardcat database and the shard node databases must run the same software version; you cannot run 12.2 beta2 on shardcat while the shard nodes run 12.2 beta1.

(6) Some tests on sharded tables, and the DML restrictions they revealed:
(1) Querying a sharded table while one shard node is down raises an error:

ORA-02519: cannot perform cross-shard operation. Chunk "7" is unavailable
ORA-06512: at "GSMADMIN_INTERNAL.DBMS_GSM_POOLADMIN", line 16487
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "GSMADMIN_INTERNAL.DBMS_GSM_POOLADMIN", line 16464
ORA-06512: at "GSMADMIN_INTERNAL.DBMS_GSM_POOLADMIN", line 16503
ORA-06512: at line 1
 
A duplicated table, however, can still be queried without error.

(2) Sharded tables allow neither INSERT AS SELECT nor PL/SQL functions, which makes loading data a bit awkward:

SQL> insert into customers
  2  select rownum,dbms_random.STRING('U',2),dbms_random.STRING('l',4),dbms_random.STRING('U',10),
  3  dbms_random.STRING('U',8),dbms_random.STRING('A',200),dbms_random.STRING('A',20) from dual  connect by level<=1000000;
 
insert into customers
select rownum,dbms_random.STRING('U',2),dbms_random.STRING('l',4),dbms_random.STRING('U',10),
dbms_random.STRING('U',8),dbms_random.STRING('A',200),dbms_random.STRING('A',20) from dual  connect by level<=1000000
 
ORA-02670: unsupported SQL construct: Insert As Select on Sharded table
 
SQL> 
SQL> begin
  2    for k in 1 .. 10 loop
  3      insert into customers(custid,firstname,lastname,class,geo,custprofile,passwd)
  4      values
  5        (round(dbms_random.value(1, 10), 0),
  6         dbms_random.STRING('U', 2),
  7         dbms_random.STRING('l', 4),
  8         dbms_random.STRING('U', 10),
  9         dbms_random.STRING('U', 8),
 10         dbms_random.STRING('A', 200),
 11         dbms_random.STRING('A', 20));
 12    end loop;
 13    commit;
 14  end;
 15  /
 
begin
  for k in 1 .. 10000 loop
    insert into customers(custid,firstname,lastname,class,geo,custprofile,passwd)
    values
      (round(dbms_random.value(1, 10), 0),
       dbms_random.STRING('U', 2),
       dbms_random.STRING('l', 4),
       dbms_random.STRING('U', 10),
       dbms_random.STRING('U', 8),
       dbms_random.STRING('A', 200),
       dbms_random.STRING('A', 20));
  end loop;
  commit;
end;
 
ORA-02670: unsupported SQL construct: PL/SQL function
ORA-06512: at line 4
 
SQL>
 
In the end I loaded the data in a rather crude way:
SQL> begin
  2    for k in 1 .. 1000 loop
  3      insert into customers(custid,firstname,lastname,class,geo,custprofile,passwd)
  4      values
  5        (k, 'HE', 'Jimmy', 'A', 'CHINA', 'DBA', '123456');
  6    end loop;
  7    commit;
  8  end;
  9  /
 
PL/SQL procedure successfully completed
 
SQL>

Duplicated tables, on the other hand, have neither of these two restrictions:

SQL> insert into products
  2  select rownum,dbms_random.STRING('U',8),dbms_random.STRING('A',64),round(dbms_random.value(1,1000),2) from dual
  3  connect by level<=1000;
 
1000 rows inserted
 
SQL> commit;
 
Commit complete
 
SQL>

(3) Sharded tables do not allow DELETE across more than one shard:

SQL> delete from customers;
 
delete from customers
 
ORA-02671: DML are not allowed on more than one shard
 
SQL>
 
I ended up deleting the data the crude way as well, connecting to each shard node one by one.
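
For reference, that cleanup was simply the same DELETE run locally on each shard (a sketch; repeat it for sh1 and sh2):

[oracle12c@sdb2 ~]$ export ORACLE_SID=sh1
[oracle12c@sdb2 ~]$ sqlplus app_schema/oracle
SQL> delete from customers;
SQL> commit;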

An informal chat about the sharding database architecture


Today let's have an informal chat about the sharding architecture. Before we start, if you have not read my previous two articles, I recommend reading them first so you have a basic understanding of the Oracle sharding database:
『Oracle sharding database的一些概念』『创建Oracle sharding database』

On the architecture itself: the first article is full of terminology, and the second is full of detailed deployment steps. Let's put the fiddly details aside and look from a higher vantage point at what Oracle sharding really is.

Einstein said that if you cannot explain something in simple language, you do not truly understand it. After a few days of experimenting and studying, I will try to describe Oracle sharding in the simplest language I can.

Sharding is a way of scaling data horizontally: as the data volume grows, you add another machine to extend capacity and processing power. Sharding essentially has to solve three problems: 1. routing the data, 2. storing the data shards, 3. storing the metadata that describes the shards.

1. Data routing: the database tells the application which shard currently holds the data it is asking for, and how to get there.
2. Data shards: where the data actually lives; each shard is usually a separate server (with its own storage).
3. Because the data is split across different machines, there has to be a central place that records how the data is sharded, i.e. the shard metadata.

The application asks the router where to go, the router looks up the metadata to find which shard holds the data, and the application finally reaches that shard.

The best-known sharding database is MongoDB. MongoDB's sharding architecture solves the same three problems: router servers (Router) handle routing, shard servers (Shard) store the actual data (possibly with replicas and arbiters), and config servers (Config Server) store the shard metadata.

Correspondingly, in Oracle 12.2 sharding these roles are played by the GDS framework (GSM, the shard directors), the shard nodes, and the shardcat database.

So by now you can see fairly clearly that Oracle 12.2 sharding is essentially these three modules.

Incidentally, MongoDB supports multiple replicas plus arbiters; Oracle ADG likewise supports one primary with multiple standbys, managed by FSFO.

Below are a few of my own opinions on the sharding architecture.

(1) Shardcat is a critically important module. It holds not only the shard metadata but also the master copies of the duplicated tables, and during cross-shard queries it also acts as the coordinator database. I therefore recommend building it as RAC plus ADG to avoid a single point of failure on shardcat.

(2) Shard nodes: the loss of a single shard node makes the whole table unusable, so the shard nodes also need highly available replicas, using ADG or OGG.

(3) Once you do sharding and HA together, it becomes a game of piling up machines and storage. Suppose an environment with 10 shard nodes: one shardcat built as RAC plus ADG needs at least 3 machines; 10 shard nodes, each with its own ADG standby, need at least 20; so that environment already needs at least 23 machines.

(4) A sharding architecture is an extreme test of how well you know the application; partitioning and sharding must be designed together with it. For example, the sharding key must be indexed, and the sharding method can be consistent hash to spread data evenly, or range or list partitioning, or hash-range / hash-list subpartitioning. The sharding and partitioning scheme has to fit the business: some scenarios need related data to stay in one shard to avoid cross-shard joins, while others need even distribution so that hot data does not all land on one shard (for example, with a sequence-generated key and range partitioning, all the hot data would end up on a single shard).

(5) Fact and dimension tables seem to be able to make very good use of sharding: make the dimension tables duplicated tables and the fact tables sharded tables.
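
A minimal sketch of point (5), reusing the DDL patterns shown earlier in this article (the table names here are made up for illustration):

alter session enable shard ddl;

-- dimension table: small and replicated to every shard
CREATE DUPLICATED TABLE dim_region
(RegionId INTEGER PRIMARY KEY,
 Name     VARCHAR2(64)
) TABLESPACE products_tsp;

-- fact table: large, partitioned by reference to the sharded Customers root table
CREATE SHARDED TABLE fact_sales
(SaleId    INTEGER NOT NULL,
 CustId    VARCHAR2(60) NOT NULL,
 ProductId INTEGER,
 Amount    NUMBER(19,4),
 constraint pk_fact_sales primary key (CustId, SaleId),
 constraint fk_fact_sales_cust foreign key (CustId) references Customers on delete cascade
) partition by reference (fk_fact_sales_cust);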


12c Data Pump import fails with KUP-11014


When importing a large table from 10.2.0.5 into 12.1.0.2,
the export parameters were:
[oracle10g@testdb tmp]$ cat expdp.par
userid='/ as sysdba'
DIRECTORY=DUMPDIR
dumpfile=mytable_%U.dmp
tables=schema.mytable
logfile=mytable.log
job_name=mytable
parallel=8
filesize=100M

The import parameters were:
userid='/ as sysdba'
DIRECTORY=DUMPDIR
dumpfile=mytable_%U.dmp
tables=schema.mytable
logfile=mytable.log
job_name=mytable
parallel=8
content=data_only

The import failed with KUP-11014:
ORA-31693: Table data object "SCHEMA"."MYTABLE" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-11011: the following file is not valid for this load operation
KUP-11014: internal metadata in file /home/oracle12c/mytable_02.dmp is not valid
Job "SYS"."MYTABLE" completed with 1 error(s) at Thu May 19 12:55:34 2016 elapsed 0 00:10:03

Importing the same files into 11g raises no error. The cause is Bug 20690515 in 12c (see Doc ID 20690515.8).

A brief explanation of the bug:

1. Trigger conditions:
When importing a dump file set consisting of multiple dump files, and Data Pump loads them with access_method=external_table (in 12c the default access_method is AUTOMATIC, i.e. it chooses automatically between external_table and direct_path; for when each is chosen see Doc ID 552424.1), the external_table path validates the XML metadata of each dump file against that of the first dump file. When the dump file set was exported from 10.x, the XML of the first (reference) dump file is converted to the 11.1 format, so the comparison against the remaining dump files fails.

2. Affected versions:
12.1.0.2

3. Fixed in:
12.2

4. Is there a patch:
Yes. Patch 20690515 exists for 12.1.0.2 on Linux x86-64, AIX, and Solaris SPARC; the Linux version has already been downloaded more than 200 times.

5. Is there a workaround:
Yes: set access_method=direct_path.
If it still fails, additionally set table_exists_action=replace.

In my view, triggering this bug requires two conditions:
1. Multiple dump files.
2. The access method automatically resolves to external table, or ACCESS_METHOD=EXTERNAL_TABLE is forced manually.
(3. Table size may also play a role: in one environment the problem did not reproduce with a table of 300+ MB, but did reproduce after growing the data to 3.6 GB. MOS does not currently list table size as a factor.)

In my VM test environment I took a table of just over 500 MB and exported it both as one big file and as six smaller files:

Create the test table: create table test_dmp as select * from dba_objects;
Run insert into test_dmp select * from test_dmp; several times, until there are roughly 2 million rows and a segment size of 500+ MB.
 
Then export it as one big file and as multiple small files.
[oracle12c@testdb2 dump_12c]$ ls -l
total 889504
-rw-r-----. 1 oracle12c oinstall 455401472 May 19 16:01 bigdump.dmp <== the single big file
-rw-r--r--. 1 oracle12c oinstall       173 May 19 16:56 impdp2.par
-rw-r--r--. 1 oracle12c oinstall       145 May 19 17:19 impdp3.par
-rw-r--r--. 1 oracle12c oinstall       171 May 19 17:08 impdp.par
-rw-r-----. 1 oracle12c oinstall 104857600 May 19 15:52 mydump_01.dmp <== the multiple small files
-rw-r-----. 1 oracle12c oinstall 104857600 May 19 15:52 mydump_02.dmp
-rw-r-----. 1 oracle12c oinstall 104857600 May 19 15:52 mydump_03.dmp
-rw-r-----. 1 oracle12c oinstall 104857600 May 19 15:52 mydump_04.dmp
-rw-r-----. 1 oracle12c oinstall  34873344 May 19 15:52 mydump_05.dmp
-rw-r-----. 1 oracle12c oinstall   1118208 May 19 15:52 mydump_06.dmp
-rw-r-----. 1 oracle12c oinstall       677 May 19 17:20 mydump.log
[oracle12c@testdb2 dump_12c]$

1. Importing the single big file with ACCESS_METHOD=EXTERNAL_TABLE forced: no error:

[oracle12c@testdb2 dump_12c]$ cat impdp2.par
userid='/ as sysdba'
DIRECTORY=MYDIR
dumpfile=bigdump.dmp
tables=test.test_dmp
logfile=mydump.log
job_name=mydump
parallel=2
content=data_only
ACCESS_METHOD=EXTERNAL_TABLE
 
[oracle12c@testdb2 dump_12c]$
[oracle12c@testdb2 dump_12c]$
[oracle12c@testdb2 dump_12c]$
[oracle12c@testdb2 dump_12c]$
[oracle12c@testdb2 dump_12c]$ impdp parfile=impdp2.par
 
Import: Release 12.1.0.2.0 - Production on Thu May 19 16:57:08 2016
 
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
 
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYS"."MYDUMP" successfully loaded/unloaded
Starting "SYS"."MYDUMP":  /******** AS SYSDBA parfile=impdp2.par
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."TEST_DMP"                           434.2 MB 5132800 rows
Job "SYS"."MYDUMP" successfully completed at Thu May 19 16:57:26 2016 elapsed 0 00:00:17
 
[oracle12c@testdb2 dump_12c]$

2. Importing the multiple files with ACCESS_METHOD=EXTERNAL_TABLE forced: the error reproduces:

[oracle12c@testdb2 dump_12c]$ cat impdp.par
userid='/ as sysdba'
DIRECTORY=MYDIR
dumpfile=mydump_%U.dmp
tables=test.test_dmp
logfile=mydump.log
job_name=mydump
parallel=2
content=data_only
ACCESS_METHOD=EXTERNAL_TABLE
 
[oracle12c@testdb2 dump_12c]$
[oracle12c@testdb2 dump_12c]$ impdp parfile=impdp.par
 
Import: Release 12.1.0.2.0 - Production on Thu May 19 16:55:30 2016
 
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
 
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYS"."MYDUMP" successfully loaded/unloaded
Starting "SYS"."MYDUMP":  /******** AS SYSDBA parfile=impdp.par
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "TEST"."TEST_DMP" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-11011: the following file is not valid for this load operation
KUP-11014: internal metadata in file /tmp/dump_12c/mydump_02.dmp is not valid
Job "SYS"."MYDUMP" completed with 1 error(s) at Thu May 19 16:55:34 2016 elapsed 0 00:00:03
 
[oracle12c@testdb2 dump_12c]$

3. Importing the multiple files with ACCESS_METHOD=DIRECT_PATH forced: no error either:

[oracle12c@testdb2 dump_12c]$ cat impdp.par
userid='/ as sysdba'
DIRECTORY=MYDIR
dumpfile=mydump_%U.dmp
tables=test.test_dmp
logfile=mydump.log
job_name=mydump
parallel=2
content=data_only
ACCESS_METHOD=DIRECT_PATH
 
[oracle12c@testdb2 dump_12c]$ impdp parfile=impdp.par
 
Import: Release 12.1.0.2.0 - Production on Thu May 19 17:08:12 2016
 
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
 
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "SYS"."MYDUMP" successfully loaded/unloaded
Starting "SYS"."MYDUMP":  /******** AS SYSDBA parfile=impdp.par
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."TEST_DMP"                           434.2 MB 5132800 rows
Job "SYS"."MYDUMP" successfully completed at Thu May 19 17:08:31 2016 elapsed 0 00:00:17
 
[oracle12c@testdb2 dump_12c]$

Note: the tests above export from 10.2.0.5 and import into 12.1.0.2; importing from 11.2 into 12.1.0.2 does not hit this problem. Since quite a few users are likely to upgrade from 10g to 12c, I recommend applying the patch for this bug.
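
Applying the one-off patch is standard opatch usage; a rough sketch, assuming the patch zip has been unpacked under /tmp (the path is illustrative) and the database and listener running from this home have been stopped:

[oracle12c@testdb2 ~]$ cd /tmp/20690515
[oracle12c@testdb2 20690515]$ $ORACLE_HOME/OPatch/opatch apply
[oracle12c@testdb2 20690515]$ $ORACLE_HOME/OPatch/opatch lsinventory | grep 20690515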

12.2 new features of partitioning


Oracle Database 12.2 adds quite a few partitioning enhancements:

  • Multi-Column List Partitioning
  • Auto-List Partitioning
  • Interval Subpartitioning
  • Online Partition Maintenance Operations
  • Online Table Conversion to a Partitioned Table
  • Filtered Partition Maintenance Operations
  • Read-Only Partitions

  Let's look at a few of them:

    1. Multi-column list partitioning. Note: at most 16 columns are supported.

    Connected to Oracle Database 12c Enterprise Edition Release 12.2.0.0.1
    Connected as test@ORA122_windows_pdb122
     
    SQL>
    CREATE TABLE t_oracleblog (salername varchar(200),region VARCHAR2(50), channel VARCHAR2(50))
    PARTITION BY LIST (region, channel)  --Note keyword: LIST (region, channel), the partition key here has 2 columns
    (
    partition p1 values ('USA','Direct'),
    partition p2 values ('USA','Partners'),
    partition p3 values ('GERMANY','Direct'),
    partition p4 values (('GERMANY','Partners'),('GERMANY','Web')),
    partition p5 values ('CHINA','Direct'),
    partition p6 values (('CHINA','Partners'),('CHINA','Web'),('CHINA','Oversee')),
    partition p7 values ('JAPAN','Direct'),
    partition p8 values (DEFAULT)
    )
    /

    insert into t_oracleblog values('AAA','USA','Direct');
    insert into t_oracleblog values('BBB','CHINA','Direct');
    insert into t_oracleblog values('CCC','CHINA','Web');
    insert into t_oracleblog values('DDD','CHINA','Partners');
    insert into t_oracleblog values('EEE','GERMANY','Direct');
    insert into t_oracleblog values('FFF','GERMANY','Partners');
    insert into t_oracleblog values('GGG','JAPAN','Direct');
    insert into t_oracleblog values('HHH','CHINA','Oversee');
    insert into t_oracleblog values('III','JAPAN','Web');
    insert into t_oracleblog values('JJJ','FRANCE','Direct');
    insert into t_oracleblog values('KKK','CHINA','DIRECT');

    SQL> select * from t_oracleblog partition(p1);
     
    SALERNAME            REGION                                             CHANNEL
    -------------------- -------------------------------------------------- --------------------------------------------------
    AAA                  USA                                                Direct
     
    SQL> select * from t_oracleblog partition(p2);
     
    SALERNAME            REGION                                             CHANNEL
    -------------------- -------------------------------------------------- --------------------------------------------------
     
    SQL> select * from t_oracleblog partition(p3);
     
    SALERNAME            REGION                                             CHANNEL
    -------------------- -------------------------------------------------- --------------------------------------------------
    EEE                  GERMANY                                            Direct
     
    SQL> select * from t_oracleblog partition(p4);
     
    SALERNAME            REGION                                             CHANNEL
    -------------------- -------------------------------------------------- --------------------------------------------------
    FFF                  GERMANY                                            Partners
     
    SQL> select * from t_oracleblog partition(p5);
     
    SALERNAME            REGION                                             CHANNEL
    -------------------- -------------------------------------------------- --------------------------------------------------
    BBB                  CHINA                                              Direct
     
    SQL> select * from t_oracleblog partition(p6);
     
    SALERNAME            REGION                                             CHANNEL
    -------------------- -------------------------------------------------- --------------------------------------------------
    CCC                  CHINA                                              Web
    DDD                  CHINA                                              Partners
    HHH                  CHINA                                              Oversee
     
    SQL> select * from t_oracleblog partition(p7);
     
    SALERNAME            REGION                                             CHANNEL
    -------------------- -------------------------------------------------- --------------------------------------------------
    GGG                  JAPAN                                              Direct
     
    SQL> select * from t_oracleblog partition(p8);
     
    SALERNAME            REGION                                             CHANNEL
    -------------------- -------------------------------------------------- --------------------------------------------------
    III                  JAPAN                                              Web
    JJJ                  FRANCE                                             Direct
    KKK                  CHINA                                              DIRECT
     
    SQL>

    2. Auto-list partitioning

    CREATE TABLE t_car (brand VARCHAR2(50),model VARCHAR2(50), year char(4))
    PARTITION BY LIST (brand) AUTOMATIC --Note keyword: AUTOMATIC
    (
    partition p1 values ('BMW'),
    partition p2 values ('BENZ')
    )
    /
     
     
    SQL> select table_name,partition_name from dba_tab_partitions where table_name='T_CAR';
     
    TABLE_NAME                                                                       PARTITION_NAME
    -------------------------------------------------------------------------------- --------------------------------------------------------------------------------
    T_CAR                                                                            P1
    T_CAR                                                                            P2
     
    SQL>
    SQL>
    SQL>
    SQL>
    SQL> insert into t_car values('BMW','AAA','1984');
     
    1 row inserted
    SQL> insert into t_car values('BMW','BBB','1986');
     
    1 row inserted
    SQL> insert into t_car values('BENZ','CCC','1992');
     
    1 row inserted
    SQL> insert into t_car values('BENZ','DDD','1983');
     
    1 row inserted
     
    SQL>
    SQL> select table_name,partition_name from dba_tab_partitions where table_name='T_CAR';
     
    TABLE_NAME                                                                       PARTITION_NAME
    -------------------------------------------------------------------------------- --------------------------------------------------------------------------------
    T_CAR                                                                            P1
    T_CAR                                                                            P2
     
    SQL>
    SQL>
    SQL>
    SQL> insert into t_car values('JEEP','EEE','1991'); ---insert a row whose partition key value was not defined in any existing partition.
     
    1 row inserted
     
    SQL> select table_name,partition_name from dba_tab_partitions where table_name='T_CAR';
     
    TABLE_NAME                                                                       PARTITION_NAME
    -------------------------------------------------------------------------------- --------------------------------------------------------------------------------
    T_CAR                                                                            P1
    T_CAR                                                                            P2
    T_CAR                                                                            SYS_P1328
     
    SQL>
    SQL>
    SQL>
    SQL> insert into t_car values('BYD','FFF','2015');
     
    1 row inserted
    SQL> insert into t_car values('FORD','FFF','2015');
     
    1 row inserted
     
    SQL>
    SQL>
    SQL> select table_name,partition_name from dba_tab_partitions where table_name='T_CAR'; --you can see new partitions were generated automatically.
     
    TABLE_NAME                                                                       PARTITION_NAME
    -------------------------------------------------------------------------------- --------------------------------------------------------------------------------
    T_CAR                                                                            P1
    T_CAR                                                                            P2
    T_CAR                                                                            SYS_P1328
    T_CAR                                                                            SYS_P1329
    T_CAR                                                                            SYS_P1330
     
    SQL>

    3. Interval subpartitioning
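
    A quick sketch of my own (not a transcript from my test system): in 12.2 the INTERVAL clause can also appear at the subpartition level, so new range subpartitions are created automatically as data arrives.

    CREATE TABLE t_orders_is (order_id NUMBER, region VARCHAR2(50), order_date DATE)
    PARTITION BY LIST (region)
    SUBPARTITION BY RANGE (order_date) INTERVAL (NUMTOYMINTERVAL(1,'MONTH')) --Note keyword: INTERVAL at the subpartition level
    (
    partition p_usa values ('USA')
      (subpartition p_usa_2015 values less than (to_date('2016-01-01','yyyy-mm-dd'))),
    partition p_china values ('CHINA')
      (subpartition p_china_2015 values less than (to_date('2016-01-01','yyyy-mm-dd')))
    )
    /

    insert into t_orders_is values(1,'USA',to_date('2016-03-15','yyyy-mm-dd')); --lands in a system-generated monthly subpartition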

    4. Online DDL to convert a table to a partitioned table

    CREATE TABLE t_oracleblog (salername varchar(200),region VARCHAR2(50), channel VARCHAR2(50));

    ALTER TABLE t_oracleblog MODIFY
    PARTITION BY LIST (region)
    (partition p1 values ('USA'),
    partition p2 values ('GERMANY'),
    partition p3 values ('JAPAN'),
    partition p4 values (DEFAULT))
    ONLINE ---Note keyword: ONLINE
    /

    Note 1: statistics are gathered as part of the operation.

    Note 2: judging from a 10046 trace, temporary objects SYS_JOURNAL_, SYS_RMTAB$$_H, SYS_RMTAB$$_I and their indexes seem to be created and worked on, plus a pile of data dictionary updates. There is no sign of dbms_redefinition-style online redefinition being involved: none of the CREATE SNAPSHOT / materialized view activity or MLOG$_XXX keywords you would see with online redefinition.
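
    (For reference, the 10046 trace mentioned above can be captured around the statement with the standard event syntax:)

    SQL> alter session set events '10046 trace name context forever, level 12';
    SQL> -- run the ALTER TABLE ... MODIFY PARTITION BY ... ONLINE statement here
    SQL> alter session set events '10046 trace name context off';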

    5. Filtered partition maintenance operations
    When doing a MOVE, SPLIT, or MERGE PARTITION, you can filter the rows that are kept:

    SQL> select * from T_ORACLEBLOG partition(p4);
     
    SALERNAME            REGION                                             CHANNEL
    -------------------- -------------------------------------------------- --------------------------------------------------
    BBB                  CHINA                                              Direct
    CCC                  CHINA                                              Web
    DDD                  CHINA                                              Partners
    HHH                  CHINA                                              Oversee
    JJJ                  FRANCE                                             Direct
    KKK                  CHINA                                              DIRECT
     
    6 rows selected
     
    SQL>
    SQL>
    SQL>
    SQL>
    SQL>
    SQL>
    SQL> ALTER TABLE T_ORACLEBLOG MOVE PARTITION p4
      2  TABLESPACE SYSAUX
      3  INCLUDING ROWS WHERE REGION = 'CHINA' --Note keyword: INCLUDING ROWS WHERE
      4  /
     
    Table altered
     
    SQL>
    SQL> select * from T_ORACLEBLOG partition(p4);
     
    SALERNAME            REGION                                             CHANNEL
    -------------------- -------------------------------------------------- --------------------------------------------------
    BBB                  CHINA                                              Direct
    CCC                  CHINA                                              Web
    DDD                  CHINA                                              Partners
    HHH                  CHINA                                              Oversee
    KKK                  CHINA                                              DIRECT
     
    SQL>

    Note 1: be very careful not to get the column in the WHERE clause wrong, or the data is gone. For example, if you mistakenly wrote INCLUDING ROWS WHERE channel = 'CHINA', then after the MOVE partition p4 would be empty, because INCLUDING ROWS specifies the rows to keep, and not a single row has channel = 'CHINA', so the partition is wiped clean.

    6. Read-only partitions

    SQL> CREATE TABLE orders
      2  (
      3  order_id number,
      4  order_date DATE,
      5  customer_name varchar2(200)
      6  ) read only  ----Note keyword: read only, which makes the whole table read only by default
      7  PARTITION BY RANGE(order_date)
      8  (
      9  partition q1_2015 values less than (to_date('2014-10-01','yyyy-mm-dd')),
     10  partition q2_2015 values less than (to_date('2015-01-01','yyyy-mm-dd')),
     11  partition q3_2015 values less than (to_date('2015-04-01','yyyy-mm-dd')),
     12  partition q4_2015 values less than (to_date('2015-07-01','yyyy-mm-dd')) read write ----Note keyword: read write, which makes partition q4_2015 read write
     13  )
     14  /
     
    Table created
     
    SQL>
    SQL>
    SQL> insert into orders values(1,to_date('2015-04-20','yyyy-mm-dd'),'AAA');
     
    1 row inserted
     
    SQL> insert into orders values(1,to_date('2015-06-20','yyyy-mm-dd'),'AAA');
     
    1 row inserted
     
    SQL> insert into orders values(1,to_date('2015-01-20','yyyy-mm-dd'),'AAA'); --inserting into a read-only partition raises an error.
     
    insert into orders values(1,to_date('2015-01-20','yyyy-mm-dd'),'AAA')
     
    ORA-14466: Data in a read-only partition or subpartition cannot be modified.
     
    SQL>
    SQL>
    SQL> select * from orders;
     
      ORDER_ID ORDER_DATE  CUSTOMER_NAME
    ---------- ----------- --------------------------------------------------------------------------------
             1 2015/4/20   AAA
             1 2015/6/20   AAA
     
    SQL>

    Starting PDBs automatically with the CDB


    Before 12.1.0.2, use a startup trigger:

    --create the startup trigger in the CDB
    CREATE TRIGGER open_all_pdbs
       AFTER STARTUP
       ON DATABASE
    BEGIN
       EXECUTE IMMEDIATE 'alter pluggable database all open';
    END open_all_pdbs;
    /

    From 12.1.0.2 onwards, use PDB SAVE STATE.

    The following command lets a PDB come back to its saved state after the CDB restarts; the syntax is:
    ALTER PLUGGABLE DATABASE [all]|[PDB_NAME] SAVE STATE;
    To remove the setting, the syntax is:
    ALTER PLUGGABLE DATABASE [all]|[PDB_NAME]  DISCARD STATE;
     
    For example:
    alter pluggable database pdbrac1 save state;
    alter pluggable database all discard state;
     
     
    Note that SAVE STATE only captures the state at the moment it is issued. If the PDB is in MOUNT when you save the state, then you open all PDBs and later restart the CDB, after the restart the PDBs only come back to the state before the OPEN ALL, i.e. the state at SAVE STATE time.
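
    You can check what has been recorded by querying dba_pdb_saved_states, for example (a sketch; the exact column list may vary slightly by version):

    SQL> select con_name, state from dba_pdb_saved_states;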
     
    SQL> show con_name
     
    CON_NAME
    ------------------------------
    CDB$ROOT
    SQL>
    SQL> startup
    ORACLE instance started.
     
    Total System Global Area 1560281088 bytes
    Fixed Size                  2924784 bytes
    Variable Size            1056968464 bytes
    Database Buffers          486539264 bytes
    Redo Buffers               13848576 bytes
    Database mounted.
    Database opened.
    SQL>  --initially, after startup both PDBs are in MOUNTED state.
    SQL>  select NAME,OPEN_MODE from v$pdbs;
     
    NAME                           OPEN_MODE
    ------------------------------ ----------
    PDB$SEED                       READ ONLY
    PDBRAC1                        MOUNTED
    PDBRAC2                        MOUNTED
     
    SQL> alter pluggable database PDBRAC1 open;
     
    Pluggable database altered.
     
    SQL> alter pluggable database pdbrac1 save state;
     
    Pluggable database altered.
     
    SQL>
    SQL>
    SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL>
    SQL>
    SQL>
    SQL> startup
    ORACLE instance started.
     
    Total System Global Area 1560281088 bytes
    Fixed Size                  2924784 bytes
    Variable Size            1056968464 bytes
    Database Buffers          486539264 bytes
    Redo Buffers               13848576 bytes
    Database mounted.
     Database opened.
    SQL>
    SQL>
    SQL> --after SAVE STATE, PDBRAC1 now opens along with the CDB.
    SQL> select NAME,OPEN_MODE from v$pdbs;
     
    NAME                           OPEN_MODE
    ------------------------------ ----------
    PDB$SEED                       READ ONLY
    PDBRAC1                        READ WRITE
    PDBRAC2                        MOUNTED
     
    SQL>
    SQL> --if you SAVE STATE first and then OPEN ALL, the recorded state is only the one before the OPEN ALL.
     
    SQL> alter pluggable database all save state;
     
    Pluggable database altered.
     
    SQL> alter pluggable database all open;
     
    Pluggable database altered.
     
    SQL>
    SQL>
    SQL>
    SQL>
    SQL> shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL>
    SQL>
    SQL>
    SQL> startup
    ORACLE instance started.
     
    Total System Global Area 1560281088 bytes
    Fixed Size                  2924784 bytes
    Variable Size            1056968464 bytes
    Database Buffers          486539264 bytes
    Redo Buffers               13848576 bytes
    Database mounted.
     Database opened.
    SQL>
    SQL> --so after restarting the CDB, only the state at SAVE STATE time is restored, i.e. the state before the OPEN ALL, with only PDBRAC1 open.
    SQL> select NAME,OPEN_MODE from v$pdbs;
     
    NAME                           OPEN_MODE
    ------------------------------ ----------
    PDB$SEED                       READ ONLY
    PDBRAC1                        READ WRITE
    PDBRAC2                        MOUNTED
     
    SQL>

    Fixing misplaced instances in a 12c Flex Cluster


    In 12c RAC, because it is a Flex Cluster, instances often end up on the wrong nodes, for example instance 3 running on node 2 and instance 2 on node 3, and this persists across restarts.

    You can fix it as follows, moving the misplaced instances back where they belong:

    1. Stop the database:

    srvctl stop database -d cdbrac -stopoption immediate

    2. Check the instance-to-node mapping recorded in CRS:

    [oracle@12102-rac2 ~]$ crsctl stat res ora.cdbrac.db -p |grep SERVERNAME
    GEN_USR_ORA_INST_NAME@SERVERNAME(12102-rac1)=cdbrac_1
    GEN_USR_ORA_INST_NAME@SERVERNAME(12102-rac2)=cdbrac_3
    GEN_USR_ORA_INST_NAME@SERVERNAME(12102-rac3)=cdbrac_2
    [oracle@12102-rac2 ~]$

    3. Modify the mapping; the -unsupported flag must be used. For why the -unsupported flag is needed, see here.

    crsctl modify res ora.cdbrac.db -attr "GEN_USR_ORA_INST_NAME@SERVERNAME(12102-rac2)=cdbrac_2" -unsupported
    crsctl modify res ora.cdbrac.db -attr "GEN_USR_ORA_INST_NAME@SERVERNAME(12102-rac3)=cdbrac_3" -unsupported

    4. Check on every node that the mapping has been corrected:

    [oracle@12102-rac2 ~]$ crsctl stat res ora.cdbrac.db -p |grep SERVERNAME
    GEN_USR_ORA_INST_NAME@SERVERNAME(12102-rac1)=cdbrac_1
    GEN_USR_ORA_INST_NAME@SERVERNAME(12102-rac2)=cdbrac_2
    GEN_USR_ORA_INST_NAME@SERVERNAME(12102-rac3)=cdbrac_3
    [oracle@12102-rac2 ~]$

    5. Restart CRS (including the database).

    (Update 2016-05-17) It turns out that after stopping the database, simply starting each instance with the following commands, specifying the target node, makes the ora.cdbrac.db resource update itself automatically; no manual modification is needed.

    srvctl stop database -d cdbrac -stopoption immediate
    srvctl start instance -d cdbrac -n 12102-rac2 -i cdbrac_2
    srvctl start instance -d cdbrac -n 12102-rac3 -i cdbrac_3

    USING CURRENT LOGFILE is deprecated in 12c Data Guard


    The problem started with a customer's 12c database that needed to run in non real time apply mode, but we found that after executing:
    alter database recover managed standby database cancel;
    alter database recover managed standby database disconnect from session;

    the database was still running in real time apply mode.

    Looking through the alert log revealed the answer:

    Thu Jun 09 12:16:03 2016
    Errors in file /cust/mydb/rdbms/oracle/diag/rdbms/rmydb/mydb/trace/mydb_pr00_24168.trc:
    ORA-16037: user requested cancel of managed recovery operation
    Thu Jun 09 12:16:03 2016
    MRP0: Background Media Recovery process shutdown (mydb)
    Thu Jun 09 12:16:04 2016
    Managed Standby Recovery Canceled (mydb)
    Completed: alter database recover managed standby database cancel
    alter database recover managed standby database disconnect from session
    <==the statement we normally issue
    Thu Jun 09 12:16:13 2016
    Attempt to start background Managed Standby Recovery process (mydb)
    Starting background process MRP0
    Thu Jun 09 12:16:13 2016
    MRP0 started with pid=27, OS id=17971
    Thu Jun 09 12:16:13 2016
    MRP0: Background Managed Standby Recovery process started (mydb)
    Thu Jun 09 12:16:19 2016
    Started logmerger process
    Thu Jun 09 12:16:19 2016
    Managed Standby Recovery starting Real Time Apply  <==real time apply is used; in 11g the same statement (without a USING clause) did not use real time apply, so the behavior has changed in 12c.
    Thu Jun 09 12:17:06 2016
    Only allocated 127 recovery slaves (requested 128)
    Thu Jun 09 12:17:06 2016
    Parallel Media Recovery started with 127 slaves
    Thu Jun 09 12:17:12 2016
    Waiting for all non-current ORLs to be archived...
    Thu Jun 09 12:17:12 2016
     
     
    Wed Apr 27 14:56:52 2016
    MRP0: Background Media Recovery process shutdown (mydb)
    Wed Apr 27 14:56:53 2016
    Managed Standby Recovery Canceled (mydb)
    Completed: alter database recover managed standby database cancel
    alter database recover managed standby database parallel 16 USING ARCHIVED LOGFILE   disconnect <== with USING ARCHIVED LOGFILE
    Wed Apr 27 14:57:29 2016
    Attempt to start background Managed Standby Recovery process (mydb)
    Starting background process MRP0
    Wed Apr 27 14:57:29 2016
    MRP0 started with pid=27, OS id=23908
    Wed Apr 27 14:57:29 2016
    MRP0: Background Managed Standby Recovery process started (mydb)
    Started logmerger process
    Wed Apr 27 14:57:35 2016
    Managed Standby Recovery not using Real Time Apply <==as you can see, real time apply is no longer used!
    Wed Apr 27 14:57:38 2016
    Parallel Media Recovery started with 16 slaves
    Wed Apr 27 14:57:38 2016
    Waiting for all non-current ORLs to be archived...
    Wed Apr 27 14:57:38 2016
    All non-current ORLs have been archived.
    Wed Apr 27 14:57:39 2016
    Media Recovery Waiting for thread 1 sequence 2287 (in transit)
    Completed: alter database recover managed standby database parallel 16 USING ARCHIVED LOGFILE   disconnect

    The online documentation also contains the relevant statement:

    That is, USING CURRENT LOGFILE is deprecated: to enable real time apply you no longer need to add it. (So whether or not we add USING CURRENT LOGFILE, real time apply is used.)
    To run without real time apply, you must now use USING ARCHIVED LOGFILE.

    In summary:
    In 11g, to use real time apply you had to add USING CURRENT LOGFILE.
    In 12c, to run without real time apply you must add USING ARCHIVED LOGFILE; USING CURRENT LOGFILE is deprecated.
    Without a USING clause, 11g defaults to not using real time apply, while 12c defaults to using it.
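
    As a recap, the two forms used above are therefore:

    -- 12c, real time apply (the default; no USING clause needed):
    alter database recover managed standby database disconnect from session;
     
    -- 12c, without real time apply (apply archived logs only):
    alter database recover managed standby database using archived logfile disconnect from session;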
