Step by step: installing a two-node Oracle 10g R2 RAC on RHEL 5.5 x86_64 in VirtualBox 4.1.6

1. Configure the single-instance environment
See: http://blog.csdn.net/t0nsha/article/details/7166582
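For completeness, a minimal sketch of the OS user and groups that the rest of this guide assumes (the referenced post covers the full single-instance preparation; the dba group name here is an assumption):
groupadd oinstall
groupadd dba
useradd -g oinstall -G dba oracle   # primary group oinstall, secondary dba (assumed)
passwd oracle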

2. Configure name resolution (on both nodes)
vim /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.2.101 rac1.localdomain rac1
192.168.2.102 rac2.localdomain rac2
192.168.0.101 rac1-priv.localdomain rac1-priv
192.168.0.102 rac2-priv.localdomain rac2-priv
192.168.2.111 rac1-vip.localdomain rac1-vip
192.168.2.112 rac2-vip.localdomain rac2-vip

3. Create the installation directories
mkdir -p /u01/oracle/crs
mkdir -p /u01/oracle/10gR2
chown -R oracle:oinstall /u01
chmod -R 775 /u01

4. Configure environment variables on rac1 (oracle user)
vi .bash_profile
export ORACLE_SID=RAC1
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/crs
export PATH=$ORACLE_HOME/bin:$PATH

5. Create the shared disks (on the VirtualBox host)
set path=C:\Program Files\Oracle\VirtualBox;%path%
VBoxManage createhd --filename ocr1.vdi --size 256 --format VDI --variant Fixed
VBoxManage createhd --filename ocr2.vdi --size 256 --format VDI --variant Fixed
VBoxManage createhd --filename vot1.vdi --size 256 --format VDI --variant Fixed
VBoxManage createhd --filename vot2.vdi --size 256 --format VDI --variant Fixed
VBoxManage createhd --filename vot3.vdi --size 256 --format VDI --variant Fixed
VBoxManage createhd --filename asm1.vdi --size 5120 --format VDI --variant Fixed
VBoxManage createhd --filename asm2.vdi --size 5120 --format VDI --variant Fixed
VBoxManage createhd --filename asm3.vdi --size 5120 --format VDI --variant Fixed
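As a quick sanity check, the newly created media and their sizes can be listed from the host:
VBoxManage list hdds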

6. Attach the shared disks to rac1
(The controller name "SATA 控制器" below is the Chinese-locale default; use whatever controller name your VM's storage settings show.)
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 1 --device 0 --type hdd --medium ocr1.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 2 --device 0 --type hdd --medium ocr2.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 3 --device 0 --type hdd --medium vot1.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 4 --device 0 --type hdd --medium vot2.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 5 --device 0 --type hdd --medium vot3.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 6 --device 0 --type hdd --medium asm1.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 7 --device 0 --type hdd --medium asm2.vdi --mtype shareable
VBoxManage storageattach rac1 --storagectl "SATA 控制器" --port 8 --device 0 --type hdd --medium asm3.vdi --mtype shareable
http://www.oracledistilled.com/virtualbox/creating-shared-drives-in-oracle-vm-virtualbox/
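The attachments can be double-checked from the host before booting; showvminfo lists every medium attached to each storage controller (Windows cmd shown, matching the host used here):
VBoxManage showvminfo rac1 | findstr /i vdi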

7. Mark the shared disks as shareable
VBoxManage modifyhd ocr1.vdi --type shareable
VBoxManage modifyhd ocr2.vdi --type shareable
VBoxManage modifyhd vot1.vdi --type shareable
VBoxManage modifyhd vot2.vdi --type shareable
VBoxManage modifyhd vot3.vdi --type shareable
VBoxManage modifyhd asm1.vdi --type shareable
VBoxManage modifyhd asm2.vdi --type shareable
VBoxManage modifyhd asm3.vdi --type shareable

8. Clone the second virtual machine
mkdir rac2
VBoxManage clonehd rac1\rac1.vdi rac2\rac2.vdi

Create a new virtual machine (rac2) based on rac2.vdi, then attach the shared disks to it (see the note after these commands about fixing rac2's hostname and IP addresses):

VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 1 --device 0 --type hdd --medium ocr1.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 2 --device 0 --type hdd --medium ocr2.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 3 --device 0 --type hdd --medium vot1.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 4 --device 0 --type hdd --medium vot2.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 5 --device 0 --type hdd --medium vot3.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 6 --device 0 --type hdd --medium asm1.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 7 --device 0 --type hdd --medium asm2.vdi --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA 控制器" --port 8 --device 0 --type hdd --medium asm3.vdi --mtype shareable
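One step not spelled out above: the cloned rac2 still carries rac1's hostname and IP addresses. A rough sketch of the files that typically need fixing on rac2 before continuing (standard RHEL 5 network scripts; the HWADDR lines in the ifcfg files may also need to match the new VM's NICs):
vi /etc/sysconfig/network                      # HOSTNAME=rac2.localdomain
vi /etc/sysconfig/network-scripts/ifcfg-eth0   # IPADDR=192.168.2.102 (public)
vi /etc/sysconfig/network-scripts/ifcfg-eth1   # IPADDR=192.168.0.102 (private)
reboot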

9. Configure environment variables on rac2
vi .bash_profile
export ORACLE_SID=RAC2
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/crs
export PATH=$ORACLE_HOME/bin:$PATH

10. Test name resolution (on both nodes)
ping -c 3 rac1
ping -c 3 rac1-priv
ping -c 3 rac2
ping -c 3 rac2-priv

11. Configure SSH user equivalence (as the oracle user on both nodes)
ssh-keygen -t rsa
cat id_rsa.pub >>authorized_keys
scp authorized_keys rac2:/home/oracle/.ssh/
scp authorized_keys rac1:/home/oracle/.ssh/
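For reference, a slightly fuller sketch of the key exchange as the oracle user (assuming the default ~/.ssh locations and that rac1 drives the exchange):
# on both nodes, as oracle: generate a key pair (accept defaults, empty passphrase)
ssh-keygen -t rsa
# on rac1: gather both public keys into one authorized_keys file
cd ~/.ssh
cat id_rsa.pub > authorized_keys
ssh rac2 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
chmod 600 authorized_keys
# push the combined file back to rac2
scp authorized_keys rac2:/home/oracle/.ssh/
# answer the host-key prompts once for rac1, rac2, rac1-priv and rac2-priv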

Run the following four commands on both rac1 and rac2; if none of them prompts for a password, SSH equivalence is configured correctly:
ssh rac1 date
ssh rac1-priv date
ssh rac2 date
ssh rac2-priv date

12. Map the shared disks to raw devices (on both nodes)
vim /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdb", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdf", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="sdg", RUN+="/bin/raw /dev/raw/raw6 %N"
ACTION=="add", KERNEL=="sdh", RUN+="/bin/raw /dev/raw/raw7 %N"
ACTION=="add", KERNEL=="sdi", RUN+="/bin/raw /dev/raw/raw8 %N"
ACTION=="add", KERNEL=="raw[1-8]", OWNER="oracle", GROUP="oinstall", MODE="0660"

13. Verify the cluster installation prerequisites
cd /clusterware/cluvfy/
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose

The following error is caused by a known bug and can be ignored:
Could not find a suitable set of interfaces for VIPs.
(http://www.eygle.com/archives/2007/12/oracle10g_rac_linux_cluvfy.html)

14. Install Oracle Clusterware
cd /clusterware/
./runInstaller

When /clusterware/runInstaller stops at the prompt below, run rootpre.sh as root on both nodes:
Has 'rootpre.sh' been run by root? [y/n] (n)
# cd /clusterware/rootpre
./rootpre.sh

If the installer complains that the OS version is not supported, edit the release file so it reports a supported version:
vim /etc/redhat-release
redhat-4
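A quick way to do that while keeping the original file around (sketch):
cp /etc/redhat-release /etc/redhat-release.orig
echo "redhat-4" > /etc/redhat-release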

Running /u01/oracle/crs/root.sh failed with:
Failed to upgrade Oracle Cluster Registry configuration
Two things need to be fixed:
1. Apply patch p4679769_10201_Linux-x86-64.zip (it simply replaces clsfmt.bin on each node).
2. The disks used for OCR and voting were never partitioned. Partition sdb through sdi with fdisk (they are shared disks, so partitioning them on rac1 is enough; the partitions then show up on rac2 as well; a scripted sketch follows the forum quote below), then zero out the OCR and voting raw devices with dd:
dd if=/dev/zero of=/dev/raw/raw1 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw2 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw3 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw4 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw5 bs=1M count=256
(As the Oracle forums put it: "Are the RAW devices you are using partitions or full disks? They have to be partitions." https://cn.forums.oracle.com/forums/thread.jspa?threadID=1122862&start=0&tstart=0)
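The partitioning itself is interactive; as a rough, non-authoritative sketch, each shared disk can be given a single primary partition spanning the whole disk like this (run on rac1 only, and double-check the device names before running anything destructive):
for d in sdb sdc sdd sde sdf sdg sdh sdi; do
  echo -e "n\np\n1\n\n\nw" | fdisk /dev/$d
done
partprobe   # re-read the partition tables (run on rac2 too, or reboot it)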

Run root.sh on rac1 again:
[root@rac1 crs]# ./root.sh
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
CSS is inactive on these nodes.
rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

Running root.sh on rac2 then hit the following error:
[root@rac2 crs]# ./root.sh
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/oracle/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
Two scripts need to be modified (per the Oracle release notes linked below):

For the VIPCA utility, alter the $CRS_HOME/bin/vipca script on all nodes to remove LD_ASSUME_KERNEL. After the "if" statement around line 123, add an unset command to ensure LD_ASSUME_KERNEL is not set, as follows:

if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL
With the newly inserted line, root.sh should be able to call VIPCA successfully.

For the SRVCTL utility, alter the $CRS_HOME/bin/srvctl scripts on all nodes by adding a line, unset LD_ASSUME_KERNEL, after line 174 as follows:

LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL

http://docs.oracle.com/cd/B19306_01/relnotes.102/b15666/toc.htm
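Instead of editing the two scripts by hand, the extra unset line can also be appended right after each existing export with sed (a sketch, assuming GNU sed and the CRS home used in this guide; run on both nodes, back-ups get a .bak suffix; placing the unset immediately after the export has the same net effect as the release-notes edit):
sed -i.bak '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' /u01/oracle/crs/bin/vipca
sed -i.bak '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' /u01/oracle/crs/bin/srvctl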

At this point /u01/oracle/crs/root.sh had to be run all over again. First remove the cssfatal flag file on each node (the node name is part of the path):
rm -f /etc/oracle/scls_scr/rac1/oracle/cssfatal
Then zero out the OCR and voting raw devices again:
dd if=/dev/zero of=/dev/raw/raw1 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw2 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw3 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw4 bs=1M count=256
dd if=/dev/zero of=/dev/raw/raw5 bs=1M count=256

When /u01/oracle/crs/root.sh was rerun (rac1 first, then rac2), rac2 ran into a problem again:
[root@rac1 crs]# ./root.sh
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
CSS is inactive on these nodes.
rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac1 crs]# pwd
/u01/oracle/crs
[root@rac1 crs]# ssh rac2
root@rac2's password:
Last login: Wed Jan 4 21:34:06 2012 from rac1.localdomain
[root@rac2 ~]# source /home/oracle/.bash_profile
[root@rac2 ~]# cd $ORACLE_HOME
[root@rac2 crs]# ./root.sh
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]

Running vipca by hand gave the same error:
[root@rac2 bin]# ./vipca
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]

Solution: register the public and cluster_interconnect interfaces with oifcfg, then rerun vipca:
[root@rac2 bin]# ./oifcfg iflist
eth0 192.168.2.0
eth1 192.168.0.0
[root@rac2 bin]# ./oifcfg setif -global eth0/192.168.2.0:public
[root@rac2 bin]# ./oifcfg setif -global eth1/192.168.0.0:cluster_interconnect
[root@rac2 bin]# ./oifcfg getif
eth0 192.168.2.0 global public
eth1 192.168.0.0 global cluster_interconnect
[root@rac2 bin]# ./vipca
(http://blog.chinaunix.net/space.php?uid=261392&do=blog&id=2138877)

With Clusterware installed, crs_stat -t shows the state of the cluster resources:
[root@rac2 bin]# ./crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2

15. Install ASM
Back on rac1, launch the installer for the ASM home:
[oracle@rac1 database]$ ./runInstaller

Specify the ASM installation path:
OraASM10g_home
/u01/oracle/10gR2/asm

After the installation completes, check the ASM status:
[oracle@rac1 database]$ srvctl status asm -n rac1
ASM instance +ASM1 is running on node rac1.
[oracle@rac1 database]$ srvctl status asm -n rac2
ASM instance +ASM2 is running on node rac2.
[oracle@rac1 database]$

[oracle@rac1 database]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
[oracle@rac1 database]$
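The disk groups created during the ASM install can also be checked from SQL*Plus against the local ASM instance (a quick check, using the ASM home and SID from this setup):
export ORACLE_HOME=/u01/oracle/10gR2/asm
export ORACLE_SID=+ASM1
$ORACLE_HOME/bin/sqlplus / as sysdba
SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;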

16. Install the database software
Back on rac1, launch the installer for the database home:
[oracle@rac1 database]$ ./runInstaller

Specify the database installation path:
OraDb10g_home
/u01/oracle/10gR2/db_1


17. Update .bash_profile for the database environment (shown for rac1; use ORACLE_SID=RAC2 on rac2):
vi .bash_profile
export ORACLE_SID=RAC1
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/10gR2/db_1
export PATH=$PATH:$ORACLE_HOME/bin


18. Environment-variable scripts for managing rac1
crs.env
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/crs
export PATH=$ORACLE_HOME/bin:$PATH

asm.env
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/10gR2/asm
export PATH=$ORACLE_HOME/bin:$PATH

db.env
export ORACLE_SID=RAC1
export ORACLE_BASE=/u01/oracle/10gR2
export ORACLE_HOME=/u01/oracle/10gR2/db_1
export PATH=$ORACLE_HOME/bin:$PATH
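A typical way to use these scripts when switching between the CRS, ASM and database environments (assuming they live in the oracle user's home directory; the rac2 copies would use +ASM2 and RAC2):
source ~/crs.env && crs_stat -t          # cluster resource status
source ~/asm.env && sqlplus / as sysdba  # connect to the local ASM instance
source ~/db.env  && sqlplus / as sysdba  # connect to the local database instance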

REF:
Oracle Database 11g Release 2 RAC On Linux Using VirtualBox
http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVirtualBox.php