
Oracle 11g RAC NIC (network interface) change

September 29, 2019

Environment:
Database version: 11.2.0.4.0
Linux version: Red Hat 6.2
Change: eth0 becomes eth2
        eth1 becomes eth3

1. Pre-change checks
1) Check the current interface configuration (run on both nodes)

[grid@rac1 ~]$ oifcfg getif    // read from the OCR
eth0 10.204.101.0 global public
eth1 192.168.101.0 global cluster_interconnect
[grid@rac1 ~]$
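
Before going further, it can also help to compare what the OS itself reports with what the OCR records. A quick sketch (not in the original post; run on both nodes):

oifcfg iflist      # interfaces and subnets as Clusterware sees them at the OS level; should match the getif output above
ip addr show       # full OS view, including MAC addresses (useful later when renaming the NICs)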

2) Check the node application (nodeapps) configuration
[grid@rac1 ~]$ srvctl config nodeapps -a
Network exists: 1/10.204.101.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/10.204.101.47/10.204.101.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/10.204.101.48/10.204.101.0/255.255.255.0/eth0, hosting node rac2
To check a single node:
srvctl config nodeapps -n node1 -a

3) Back up the OCR (run as root)
./ocrconfig -export /tmp/ocr_20160616.dmp
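
In addition to the export, a manual OCR backup can be taken and the existing automatic backups listed. A small sketch (run as root from the Grid Infrastructure bin directory; backup locations differ per environment):

./ocrconfig -showbackup       # list the automatic/manual OCR backups already on disk
./ocrconfig -manualbackup     # take an extra manual backup before the change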

4) Back up the GPnP profile
Run the backup as the grid user on both nodes:

$ cd $ORACLE_HOME/gpnp/`hostname`/profiles/peer/

$ cp -p profile.xml profile.xml.bak

2. Add the new PUBLIC and private interface entries

Keep all nodes up for this step; do not shut the other node down yet (see the errors below).

su - grid

Add the new interface entries (running this on one node is enough):
./oifcfg setif -global eth2/10.204.101.0:public
./oifcfg setif -global eth3/192.168.101.0:cluster_interconnect

// All nodes must be up when running setif; otherwise it fails with:
PRIF-33: Failed to set or delete interface because hosts could not be discovered
CRS-02307: No GPnP services on requested remote hosts.
PRIF-32: Error in checking for profile availability for host dzzdb2
CRS-02306: GPnP service on host "dzzdb2" not found.

At this stage only add the new entries; delete the old ones at the very end, after everything is confirmed working. If you delete them and the configuration turns out to be wrong, CRS will not start and you will no longer be able to fix it with oifcfg.

Warning!
If you delete the original interface entries at this point, CRS on the other node will go down; the node you are operating on is not affected.

Check:
[root@rac1 bin]# ./oifcfg getif
eth0 10.204.101.0 global public
eth1 192.168.101.0 global cluster_interconnect
eth2 10.204.101.0 global public
eth3 192.168.101.0 global cluster_interconnect

3. Stop CRS and configure the new NICs (rac1)

Perform this on one node at a time.

Stop CRS: ./crsctl stop crs
Rename the NICs at the OS level so that eth0 becomes eth2 and eth1 becomes eth3 (a sketch of the OS-level steps follows the listing below), then confirm that Clusterware sees the new names:
[grid@rac1 bin]$ ./oifcfg iflist
eth2 10.204.101.0
eth3 192.168.101.0
[grid@rac1 bin]$
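
The post does not show the OS-level rename itself. A minimal sketch for RHEL 6, assuming the rename is driven by MAC address via udev (the MAC addresses below are placeholders):

# /etc/udev/rules.d/70-persistent-net.rules -- bind each MAC to its new name
# SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:aa:bb:01", NAME="eth2"
# SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:aa:bb:02", NAME="eth3"

# rename the ifcfg files and fix the DEVICE= line inside them
cd /etc/sysconfig/network-scripts
mv ifcfg-eth0 ifcfg-eth2 && sed -i 's/^DEVICE=.*/DEVICE=eth2/' ifcfg-eth2
mv ifcfg-eth1 ifcfg-eth3 && sed -i 's/^DEVICE=.*/DEVICE=eth3/' ifcfg-eth3

# a reboot is the safest way to apply the rename on RHEL 6; afterwards confirm:
ip addr show eth2
ip addr show eth3

If the physical NICs were replaced, any HWADDR= lines in the ifcfg files must also match the new MAC addresses.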

Start CRS: ./crsctl start crs

Note: do not touch the other node yet.

4. Modify the VIP configuration
1> Check
[grid@rac1 ~]$ srvctl config nodeapps -n rac1 -a
-n option has been deprecated.
Network exists: 1/10.204.101.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/10.204.101.47/10.204.101.0/255.255.255.0/eth0, hosting node rac1

2> Stop the services:

srvctl stop instance -d racdb -i racdb1
srvctl stop nodeapps -n rac1 -f

After these two commands, only the ASM (cluster) resources are still running on this node; everything else has been stopped.

Check:
[grid@rac1 ~]$ srvctl status instance -d racdb -i racdb1
Instance racdb1 is not running on node rac1

[grid@rac1 ~]$ srvctl status nodeapps -n rac1
VIP rac1-vip is enabled
VIP rac1-vip is not running
Network is enabled
Network is not running on node: rac1
GSD is disabled
GSD is not running on node: rac1
ONS is enabled
ONS daemon is not running on node: rac1

3> Modify the VIP resource

[grid@rac1 ~]$ srvctl config nodeapps
Network exists: 1/10.204.101.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/10.204.101.47/10.204.101.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/10.204.101.48/10.204.101.0/255.255.255.0/eth0, hosting node rac2

[root@rac1 bin]# ./srvctl modify nodeapps -n rac1 -A rac1-vip/255.255.255.0/eth2
// this fails with an error if eth2 does not exist on the system
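
If the modify errors out here, first confirm that eth2 is actually visible both at the OS level and to Clusterware. A quick sketch (interface names as used in this post):

ip addr show eth2      # eth2 must exist at the OS level and carry the public subnet
oifcfg iflist          # as grid: Clusterware must also report eth2 and eth3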

[root@rac1 bin]# ./srvctl config nodeapps -a
Network exists: 1/10.204.101.0/255.255.255.0/eth2, type static
VIP exists: /rac1-vip/10.204.101.47/10.204.101.0/255.255.255.0/eth2, hosting node rac1
VIP exists: /rac2-vip/10.204.101.48/10.204.101.0/255.255.255.0/eth2, hosting node rac2

Note:
Although only rac1 was specified, the interface recorded for rac2's VIP changed to eth2 as well (the running rac2 VIP itself was not affected).

4> Start the resources and verify

[root@rac1 bin]# ./srvctl start instance -d racdb -i racdb1

[root@rac1 bin]# ./srvctl status nodeapps -n rac1
VIP rac1-vip is enabled
VIP rac1-vip is running on node: rac1
Network is enabled
Network is running on node: rac1
GSD is disabled
GSD is not running on node: rac1
ONS is enabled
ONS daemon is running on node: rac1
[root@rac1 bin]#

rac1 is now done. Next comes rac2, repeating from step 3.

3. Stop CRS and configure the new NICs (repeated on rac2)

Perform this on one node at a time.

Stop CRS: ./crsctl stop crs
Rename the NICs at the OS level the same way (eth0 becomes eth2, eth1 becomes eth3), then confirm:
[grid@rac2 bin]$ ./oifcfg iflist
eth2 10.204.101.0
eth3 192.168.101.0

Start CRS: ./crsctl start crs
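
Once CRS is back up on rac2, a quick health check (not shown in the original post) confirms that all daemons and resources are online:

./crsctl check crs             # all listed services should report "online"
./crsctl status resource -t    # tabular resource status across both nodes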

5. Check that the services are healthy
1> Check the VIP
[root@rac2 bin]# ./srvctl config nodeapps -n rac2 -a
-n option has been deprecated.
Network exists: 1/10.204.101.0/255.255.255.0/eth2, type static
VIP exists: /rac2-vip/10.204.101.48/10.204.101.0/255.255.255.0/eth2, hosting node rac2

Because the interface change in the OCR is global and was already made from rac1, nothing needs to be modified here.

2> Check the resources
[root@rac2 bin]# ./srvctl status nodeapps -n rac2
VIP rac2-vip is enabled
VIP rac2-vip is running on node: rac2
Network is enabled
Network is running on node: rac2
GSD is disabled
GSD is not running on node: rac2
ONS is enabled
ONS daemon is running on node: rac2
[root@rac2 bin]#

3> Check the instance:
[root@rac2 bin]# ./srvctl status instance -d racdb -i racdb2
Instance racdb2 is running on node rac2
[root@rac2 bin]#

6. Cleanup: delete the old interface entries
[root@rac2 bin]# ./oifcfg getif
eth0 10.204.101.0 global public
eth1 192.168.101.0 global cluster_interconnect
eth2 10.204.101.0 global public
eth3 192.168.101.0 global cluster_interconnect

[root@rac2 bin]# ./oifcfg delif -global eth0/10.204.101.0
[root@rac2 bin]# ./oifcfg delif -global eth1/192.168.101.0
[root@rac2 bin]# ./oifcfg getif
eth2 10.204.101.0 global public
eth3 192.168.101.0 global cluster_interconnect
[root@rac2 bin]#

7. Optionally, restart CRS once more to confirm everything comes up cleanly:

Running this on one node is enough:
./crsctl stop crs
./crsctl start crs
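
As a final check (not part of the original write-up), you can confirm from the database side that the instances are really using the new private interface. A sketch using the gv$cluster_interconnects view, run as the oracle user on either node:

sqlplus -s / as sysdba <<'EOF'
set linesize 200
col name format a10
col ip_address format a20
select inst_id, name, ip_address from gv$cluster_interconnects;
EOF
# NAME should now show eth3 (with the 192.168.101.x address) on both instances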

Important:
Never delete the old interface entries from the OCR too early, and remember that the interface configuration in the OCR can only be modified while ASM is up. If a wrong interface change leaves CRS unable to start, the only way out is to fix the GPnP profile with gpnptool.

// as the grid user: view the GPnP configuration (which includes the cluster's interface configuration)
gpnptool get
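
To see at a glance which adapters the GPnP profile currently records, the XML printed by gpnptool can be filtered. A small sketch, assuming the usual 11.2 profile layout with an Adapter attribute (as in the profile.xml backed up earlier):

gpnptool get 2>/dev/null | grep -o 'Adapter="[^"]*"'
# expected after the change: Adapter="eth2" and Adapter="eth3"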
