A Detailed Deployment Guide for OpenStack Kilo with Ceph
Contents
Test Environment
Preface
Architecture Diagram
Architecture Deployment
Installing the Server OS
Common Tasks
Importing the Package Repositories
Installing the iptables Service
Installing the NTP Service
Installing Ceph
Installing the First Monitor Node
Installing the Remaining Monitor Nodes
Deploying the OSDs
Integrating with OpenStack
Installing the OpenStack Controller Node
Configuring Ceph
Installing the MySQL Service
Installing the RabbitMQ Service
Installing the Keystone Service
Installing the Glance Service
Installing the Neutron Service
Installing the Nova Service
Installing the Cinder Service
Installing the Compute Nodes
Configuring Ceph
Installing the Neutron Service
Installing the Nova Service
Installing Horizon


Test Environment
Hardware:
Sugon I610r-GV (x1)
CPU: Intel(R) Xeon(R) CPU E5606 @ 2.13GHz x 1
Memory: 32 GB
Disk: 300 GB 10K SAS x 1
NIC: Intel Corporation 82574L Gigabit Network Connection x 2
NIC: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (dual port) x 1
Sugon I610r-GV (x3)
CPU: Intel(R) Xeon(R) CPU E5606 @ 2.13GHz x 1
Memory: 32 GB
Disks: 300 GB 10K SAS x 1, 160 GB SSD x 3
NIC: Intel Corporation 82574L Gigabit Network Connection x 2
NIC: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (dual port) x 1
Operating system:
CentOS 7.1 x64
OpenStack release:
Kilo (2015.1.0)
Ceph release:
Hammer (0.94.2)

Preface
1.    This guide is fairly basic and intended to get newcomers started; many parameters are left at their defaults.
2.    Ceph is deployed manually in this guide, without ceph-deploy.
3.    Deployment guides for Telemetry, LBaaS, Sahara, Swift, and Trove will follow.
4.    If you do not have enough NICs, the management, VM, and storage traffic can share a single network.
5.    Only one Ceph pool is created in this guide.
6.    Red Hat's RDO packages ship slightly modified conf files for some services; if you want the pristine upstream versions, download the source packages from Launchpad and generate them yourself (this note is for the perfectionists).
7.    In DVR mode, the NIC bound on every L3 node needs an external IP.


Architecture Diagram


Architecture Deployment



Installing the Server OS
1.    Install CentOS 7.1 x64 using the minimal installation option.
2.    Basic preparation such as setting the hostname and disabling SELinux is not covered in this guide; a brief sketch follows below.
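For reference, a minimal sketch of that preparation (the hostname below is just an example; use your own naming):
hostnamectl set-hostname Kilo-con                                 # set a persistent hostname
setenforce 0                                                      # turn SELinux off for the running system
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config      # keep it disabled after reboot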


Common Tasks

Importing the Package Repositories

1.    Import the package repositories
rpm -ivh https://dl.fedoraproject.org/pub ... latest-7.noarch.rpm
rpm -ivh https://repos.fedorapeople.org/r ... e-kilo-1.noarch.rpm
rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'

2.    Create /etc/yum.repos.d/ceph.repo with the following content (an optional repository check is sketched after the file content)
[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-hammer/el7/$basearch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-hammer/el7/noarch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
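Optionally, refresh the yum metadata afterwards and confirm the new repositories show up (verification only):
yum clean all
yum makecache
yum repolist    # the EPEL, RDO Kilo, and Ceph repositories should appear in the list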

Installing the iptables Service

1.    Install iptables
yum install -y iptables-services

2.    Disable firewalld, then start iptables and enable it at boot
systemctl stop firewalld
systemctl disable firewalld
systemctl start iptables
systemctl enable iptables
Installing the NTP Service
1.    Install the NTP service
yum install -y ntp
      
2.    Start the service and enable it at boot
systemctl start ntpd
systemctl enable ntpd

Installing Ceph

Installing the First Monitor Node
1.    Install Ceph
yum install -y ceph

2.    Generate a cluster UUID
uuidgen

3.    Create /etc/ceph/ceph.conf with the following content (a one-shot sketch for filling in the UUID follows)
[global]
fsid = <UUID generated in step 2>
mon initial members = Kilo-com-1,
mon host = 20.0.0.3,
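If you prefer, a minimal sketch that generates the UUID and writes the file in one step (equivalent to doing steps 2 and 3 by hand):
FSID=$(uuidgen)
cat > /etc/ceph/ceph.conf <<EOF
[global]
fsid = $FSID
mon initial members = Kilo-com-1,
mon host = 20.0.0.3,
EOF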

4.    Create a keyring with full monitor capabilities
ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

5.    Create an administrator keyring named client.admin
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'

6.    Import the administrator keyring into the monitor keyring
ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring

7.    Create the initial monitor map
monmaptool --create --add Kilo-com-1 20.0.0.3 --fsid <UUID generated in step 2> /tmp/monmap

8.    Create the monitor data directory
mkdir /var/lib/ceph/mon/ceph-Kilo-com-1

9.    Initialize the monitor data directory
ceph-mon --mkfs -i Kilo-com-1  --monmap /tmp/monmap --keyring /etc/ceph/ceph.mon.keyring

10.    Edit /etc/ceph/ceph.conf and add the following lines
public network = 20.0.0.0/24
cluster network = 30.0.0.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
filestore xattr use omap = true
osd pool default pg num = 256
osd pool default pgp num = 256

11.    Create the deployment-done marker file and the sysvinit startup marker file
touch /var/lib/ceph/mon/ceph-Kilo-com-1/done
touch /var/lib/ceph/mon/ceph-Kilo-com-1/sysvinit

12.    Start the monitor service
/etc/init.d/ceph start mon

13.    Check the cluster status (see the sketch below)

The cluster should report a normal state.
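The standard status commands (the exact output varies by release):
ceph -s            # overall cluster state; with a single monitor up, it should show the mon in quorum
ceph mon stat      # a shorter monitor-only summary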

14.    Add a firewall rule so that other nodes can connect
iptables -I INPUT 2 -s 20.0.0.0/24 -p tcp -m tcp --dport 6789 -j ACCEPT
iptables-save > /etc/sysconfig/iptables

Installing the Remaining Monitor Nodes
1.    Install Ceph
yum install -y ceph

2.    Copy the configuration file and keyrings from the first monitor node (run this on the first monitor)
scp /etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.mon.keyring /etc/ceph/ceph.conf 10.0.0.4:/etc/ceph/

3.    Fetch the monitor map
ceph mon getmap -o /tmp/monmap

4.    Create the monitor data directory
mkdir /var/lib/ceph/mon/ceph-Kilo-com-2

5.    Initialize the monitor data directory
ceph-mon --mkfs -i Kilo-com-2 --monmap /tmp/monmap --keyring /etc/ceph/ceph.mon.keyring

6.    Edit /etc/ceph/ceph.conf and update the following lines
mon initial members = <append this node's hostname, ending with a comma>
mon host = <append this node's address on the VM/public network, ending with a comma>

7.    Create the deployment-done marker file and the sysvinit startup marker file
touch /var/lib/ceph/mon/ceph-Kilo-com-2/done
touch /var/lib/ceph/mon/ceph-Kilo-com-2/sysvinit

8.    Start the Ceph monitor service
/etc/init.d/ceph start mon

9.    Add a firewall rule so that other nodes can connect
iptables -I INPUT 2 -s 20.0.0.0/24 -p tcp -m tcp --dport 6789 -j ACCEPT
iptables-save > /etc/sysconfig/iptables

10.    Add this monitor to the monitor cluster
ceph mon add Kilo-com-2 20.0.0.4:6789

11.    Push the updated /etc/ceph/ceph.conf to the /etc/ceph/ directory of every other node and restart their services; a sketch follows below.
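A minimal sketch, assuming the other nodes are reachable as 20.0.0.3 and 20.0.0.5 (illustrative addresses; substitute your own hosts):
for node in 20.0.0.3 20.0.0.5; do
    scp /etc/ceph/ceph.conf $node:/etc/ceph/ceph.conf    # overwrite the remote copy
    ssh $node /etc/init.d/ceph restart mon               # restart the monitor with the new config
done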


Deploying the OSDs

1.    Allocate OSD IDs. Run the command once for each OSD planned on this node and keep the output for later; with three OSDs per node, I run it three times
ceph osd create

2.    Create the OSD data directories named after the allocated OSD IDs
mkdir /var/lib/ceph/osd/ceph-0
mkdir /var/lib/ceph/osd/ceph-1
mkdir /var/lib/ceph/osd/ceph-2

3.    Format the OSD disks with the XFS filesystem
mkfs.xfs /dev/sdb
mkfs.xfs /dev/sdc
mkfs.xfs /dev/sdd

4.    Edit /etc/fstab and add the following lines
/dev/sdb /var/lib/ceph/osd/ceph-0 xfs defaults 0 0
/dev/sdb /var/lib/ceph/osd/ceph-0 xfs remount,user_xattr 0 0
/dev/sdc /var/lib/ceph/osd/ceph-1 xfs defaults 0 0
/dev/sdc /var/lib/ceph/osd/ceph-1 xfs remount,user_xattr 0 0
/dev/sdd /var/lib/ceph/osd/ceph-2 xfs defaults 0 0
/dev/sdd /var/lib/ceph/osd/ceph-2 xfs remount,user_xattr 0 0

5.    Mount the disks
mount /dev/sdb
mount /dev/sdc
mount /dev/sdd

6.    Initialize the OSD data directories
ceph-osd -i 0 --mkfs --mkjournal --mkkey
ceph-osd -i 1 --mkfs --mkjournal --mkkey
ceph-osd -i 2 --mkfs --mkjournal --mkkey

7.    Register the OSD keyrings
ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-0/keyring
ceph auth add osd.1 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-1/keyring
ceph auth add osd.2 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-2/keyring

8.    Add a host bucket for this node to the CRUSH map
ceph osd crush add-bucket Kilo-com-1 host

9.    Move the host bucket under the default root
ceph osd crush move Kilo-com-1 root=default

10.    Add the OSDs under Kilo-com-1
ceph osd crush add osd.0 1.0 host=Kilo-com-1
ceph osd crush add osd.1 1.0 host=Kilo-com-1
ceph osd crush add osd.2 1.0 host=Kilo-com-1

11.    Create the sysvinit startup marker files
touch /var/lib/ceph/osd/ceph-0/sysvinit
touch /var/lib/ceph/osd/ceph-1/sysvinit
touch /var/lib/ceph/osd/ceph-2/sysvinit

12.    Add firewall rules
iptables -I INPUT 2 -s 20.0.0.0/24 -p tcp -m multiport --dports 6800:6900 -j ACCEPT
iptables -I INPUT 2 -s 30.0.0.0/24 -p tcp -m multiport --dports 6800:6900 -j ACCEPT
iptables-save > /etc/sysconfig/iptables

13.    Start the OSDs on this node
/etc/init.d/ceph start osd

14.    Check the OSD tree

15.    Check the Ceph cluster status (both checks are sketched below)

The health status will be HEALTH_WARN because there are too few PGs; let's add more.
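The usual commands for both checks (output varies by release):
ceph osd tree    # the OSDs should appear under their host bucket and be up/in
ceph -s          # overall health; expect a HEALTH_WARN about too few PGs at this point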

16.    Increase the PG and PGP counts
ceph osd pool set rbd pg_num 256
ceph osd pool set rbd pgp_num 256
Note: if the second command fails with "Error EBUSY: currently creating pgs, wait", the first command has not finished yet; wait a moment and retry.

17.    Check the Ceph status again


Integrating with OpenStack

1.    Create a storage pool
ceph osd pool create storages 256

2.    Create a keyring named client.storages with access to the storages pool (an optional check follows)
ceph auth get-or-create client.storages mon 'allow rx' osd 'allow class-read object_prefix rbd_children, allow rwx pool=storages'
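To confirm the pool and the client exist (verification only):
ceph osd lspools                 # the storages pool should be listed
ceph auth get client.storages    # prints the key and capabilities just created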

Installing the OpenStack Controller Node

Configuring Ceph

1.    Install ceph-common
yum install -y ceph-common

2.    Copy the configuration file and admin keyring to this host (run on a monitor node)
scp /etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.conf 10.0.0.2:/etc/ceph/

3.    Export the client.storages keyring
ceph auth get-or-create client.storages >> /etc/ceph/ceph.client.storages.keyring

4.    Install libvirt
yum install -y libvirt

5.    Edit /etc/libvirt/libvirtd.conf and add or modify the following lines
listen_tls = 0
listen_tcp = 1
listen_addr = "0.0.0.0"
auth_tcp = "none"

6.    Edit /etc/sysconfig/libvirtd and add or modify the following line
LIBVIRTD_ARGS="--listen"

7.    Start the service and enable it at boot
systemctl start libvirtd
systemctl enable libvirtd

8.    Add a firewall rule
iptables -I INPUT 2 -s 10.0.0.0/24 -p tcp -m tcp --dport 16509 -j ACCEPT
iptables-save > /etc/sysconfig/iptables

9.    Generate a UUID for the libvirt secret
uuidgen

10.    Create a file named ceph-storages-secrets.xml with the following content
<secret ephemeral='no' private='no'>
  <uuid>UUID generated in step 9</uuid>
  <usage type='ceph'>
    <name>client.storages secret</name>
  </usage>
</secret>

11.    Define the secret in libvirt and set its value (the secret must be defined from the XML file before its value can be set)
virsh secret-define --file ceph-storages-secrets.xml
virsh secret-set-value --secret <UUID generated in step 9> --base64 $(ceph auth get-key client.storages)
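To confirm libvirt stored the secret (optional):
virsh secret-list                                    # the new UUID should be listed with usage type ceph
virsh secret-get-value <UUID generated in step 9>    # should print the same base64 key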

Installing the MySQL Service
1.    Install the MySQL (MariaDB) server
yum install -y mariadb-server

2.    Edit /etc/my.cnf and add the following under the [mysqld] section
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
skip-name-resolve
skip-host-cache

3.    Start the service and enable it at boot
systemctl start mariadb
systemctl enable mariadb

4.    Initialize MySQL and set the root password to openstack (a non-interactive alternative is sketched below)
mysql_secure_installation  # interactive command; answer the prompts yourself
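If you would rather avoid the prompts, one possible shortcut (an assumption on my part; it only works on a fresh MariaDB install whose root password is still empty) is:
mysqladmin -u root password openstack    # set the root password non-interactively
Note that mysql_secure_installation also removes anonymous users and the test database, which this shortcut does not.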

5.    Add a firewall rule
iptables -I INPUT 2 -s 10.0.0.0/24 -p tcp -m tcp --dport 3306 -j ACCEPT
iptables-save > /etc/sysconfig/iptables

Installing the RabbitMQ Service

1.    Install the RabbitMQ server
yum -y install rabbitmq-server

2.    Start the service and enable it at boot
systemctl start rabbitmq-server
systemctl enable rabbitmq-server

3.    Change the default password of the RabbitMQ guest user to openstack
rabbitmqctl change_password guest openstack

4.    Add a firewall rule
iptables -I INPUT 2 -s 10.0.0.0/24 -p tcp -m tcp --dport 5672 -j ACCEPT
iptables-save > /etc/sysconfig/iptables

Installing the Keystone Service

1. Install Keystone
yum install -y openstack-keystone

2. Generate an admin token
openssl rand -hex 10

3. Create the database and database user
mysql -uroot -popenstack -e 'create database keystone'
mysql -uroot -popenstack -e 'grant all on keystone.* to "keystone"@"%" identified by "keystone"'

4. Edit /etc/keystone/keystone.conf and add or modify the following lines
[DEFAULT]
admin_token = <value generated in step 2>
log_dir = /var/log/keystone
use_stderr = false
[database]
connection = mysql://keystone:keystone@10.0.0.2/keystone
use_db_reconnect = true
[oslo_messaging_rabbit]
rabbit_host = 10.0.0.2
rabbit_password = openstack

5. Initialize the database
su -s /bin/sh -c 'keystone-manage db_sync' keystone

6. Start the service and enable it at boot
systemctl start openstack-keystone
systemctl enable openstack-keystone

7. Add firewall rules
iptables -I INPUT 2 -s 10.0.0.0/24 -p tcp -m tcp --dport 5000 -j ACCEPT
iptables -I INPUT 2 -s 10.0.0.0/24 -p tcp -m tcp --dport 35357 -j ACCEPT
iptables-save > /etc/sysconfig/iptables

8. Create the tenant, user, role, service, and endpoint
export OS_SERVICE_TOKEN=<value generated in step 2>
export OS_SERVICE_ENDPOINT=http://10.0.0.2:35357/v2.0
keystone tenant-create --name admin --description "Admin Tenant"
keystone user-create --name admin --pass admin --email admin@example.com
keystone role-create --name admin
keystone user-role-add --tenant admin --user admin --role admin
keystone role-create --name _member_
keystone user-role-add --tenant admin --user admin --role _member_
keystone tenant-create --name service --description "Service Tenant"
keystone service-create --name keystone --type identity --description "OpenStack Identity Service"
keystone endpoint-create --service-id $(keystone service-list | awk '/ identity / {print $2}') --publicurl http://10.0.0.2:5000/v2.0 --internalurl http://10.0.0.2:5000/v2.0 --adminurl http://10.0.0.2:35357/v2.0 --region regionOne

9. Create /root/admin-openrc with the following lines; it will serve later as the credential file when creating shared images and networks (a quick verification follows)
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://10.0.0.2:35357/v2.0
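To verify the credentials work (unset the service-token variables first, since they override normal authentication):
unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
source /root/admin-openrc
keystone tenant-list    # should list the admin and service tenants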

Installing the Glance Service
1. Install Glance
yum install -y openstack-glance

2. Create the user, role, service, and endpoint
export OS_SERVICE_TOKEN=<value generated in step 2 of the Keystone installation>
export OS_SERVICE_ENDPOINT=http://10.0.0.2:35357/v2.0
keystone user-create --name glance --pass glance
keystone user-role-add --user glance --tenant service --role admin
keystone service-create --name glance --type image --description "OpenStack Image Service"
keystone endpoint-create --service-id $(keystone service-list | awk '/ image / {print $2}') --publicurl http://10.0.0.2:9292 --internalurl http://10.0.0.2:9292 --adminurl http://10.0.0.2:9292 --region regionOne

3. Create the database and database user
mysql -uroot -popenstack -e 'create database glance'
mysql -uroot -popenstack -e 'grant all on glance.* to "glance"@"%" identified by "glance"'

4. Edit /etc/glance/glance-api.conf and add or modify the following lines
[DEFAULT]
use_stderr = false
show_image_direct_url = true
rabbit_host = 10.0.0.2
rabbit_password = openstack
[database]
connection = mysql://glance:glance@10.0.0.2/glance
use_db_reconnect = true
[keystone_authtoken]
identity_uri = http://10.0.0.2:35357
admin_tenant_name = service
admin_user = glance
admin_password = glance
[paste_deploy]
flavor = keystone
[glance_store]
stores = glance.store.rbd.Store,
default_store = rbd
rbd_store_user = storages
rbd_store_pool = storages

5. Edit /etc/glance/glance-registry.conf and add or modify the following lines
[DEFAULT]
use_stderr = false
rabbit_host = 10.0.0.2
rabbit_password = openstack
[database]
connection = mysql://glance:glance@10.0.0.2/glance
use_db_reconnect = true
[keystone_authtoken]
identity_uri = http://10.0.0.2:35357
admin_tenant_name = service
admin_user = glance
admin_password = glance
[paste_deploy]
flavor = keystone

6. Initialize the database
su -s /bin/sh -c 'glance-manage db_sync' glance

7. Start the services and enable them at boot
systemctl start openstack-glance-api
systemctl start openstack-glance-registry
systemctl enable openstack-glance-api
systemctl enable openstack-glance-registry

8. Add firewall rules
iptables -I INPUT 2 -s 10.0.0.0/24 -p tcp -m tcp --dport 9191 -j ACCEPT
iptables -I INPUT 2 -s 10.0.0.0/24 -p tcp -m tcp --dport 9292 -j ACCEPT
iptables-save > /etc/sysconfig/iptables
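As an optional sanity check of the RBD backend, upload a small test image (the CirrOS URL below is just a commonly used example):
source /root/admin-openrc
curl -o /tmp/cirros.img http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
glance image-create --name cirros --disk-format qcow2 --container-format bare --is-public True --file /tmp/cirros.img
rbd -p storages ls --id storages    # the uploaded image should appear as an object in the storages pool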

Installing the Neutron Service
1. Install Neutron
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

2. Create the user, role, service, and endpoint
keystone user-create --name neutron --pass neutron
keystone user-role-add --user neutron --tenant service --role admin
keystone service-create --name neutron --type network --description "OpenStack Network Service"
keystone endpoint-create --service-id $(keystone service-list | awk '/ network / {print $2}') --publicurl http://10.0.0.2:9696 --adminurl http://10.0.0.2:9696 --internalurl http://10.0.0.2:9696 --region regionOne

3. Create the database and database user
mysql -uroot -popenstack -e 'create database neutron'
mysql -uroot -popenstack -e 'grant all on neutron.* to "neutron"@"%" identified by "neutron"'

4. Edit /etc/neutron/neutron.conf and add or modify the following lines
[DEFAULT]
router_distributed = true
use_stderr = false
log_dir = /var/log/neutron
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
host = Kilo-con
allow_overlapping_ips = true
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
nova_url = http://10.0.0.2:8774/v2
nova_region_name = regionOne
nova_admin_username = nova
nova_admin_tenant_name = service
nova_admin_password = nova
nova_admin_auth_url = http://10.0.0.2:35357/v2.0
rabbit_host = 10.0.0.2
rabbit_password = openstack
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
identity_uri = http://10.0.0.2:35357
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
[database]
connection = mysql://neutron:neutron@10.0.0.2/neutron
use_db_reconnect = true
[oslo_messaging_rabbit]
rabbit_host = 10.0.0.2
rabbit_password = openstack

5. Edit /etc/neutron/dhcp_agent.ini and add or modify the following lines
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dnsmasq_config_file = /etc/neutron/neutron-dnsmasq.conf

6. Create /etc/neutron/neutron-dnsmasq.conf with the following lines
dhcp-option-force=26,1450
log-facility=/var/log/neutron/dnsmasq.log

7. Edit /etc/neutron/l3_agent.ini and add or modify the following lines
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = false
enable_metadata_proxy = false
agent_mode = dvr_snat

8. Edit /etc/neutron/plugins/ml2/ml2_conf.ini and add or modify the following lines
[ml2]
type_drivers = flat,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
[ml2_type_vxlan]
vni_ranges = 1000:5000

9. Edit /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini and add or modify the following lines
[ovs]
local_ip = 20.0.0.2
bridge_mappings = external:br-ex
[agent]
tunnel_types = vxlan
vxlan_udp_port = 4789
l2_population = true
arp_responder = true
enable_distributed_routing = true
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

10. Create a symlink to the ML2 plugin configuration file
ln -sv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

11. Edit /etc/sysctl.conf and add or modify the following lines
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0

12. Reload the kernel parameters
sysctl -p

13. Initialize the database
su -s /bin/sh -c "neutron-db-manage upgrade kilo" neutron

14. Start Open vSwitch and enable it at boot
systemctl start openvswitch
systemctl enable openvswitch

15. Create the Open vSwitch bridges
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex

16. Add the external NIC to br-ex (a quick check follows)
ovs-vsctl add-port br-ex eth3
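To confirm the bridge layout (optional):
ovs-vsctl show    # br-int and br-ex should both exist, with eth3 attached to br-ex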

17. Start the Neutron services and enable them at boot
systemctl start neutron-server
systemctl start neutron-dhcp-agent
systemctl start neutron-openvswitch-agent
systemctl start neutron-l3-agent
systemctl start neutron-ovs-cleanup
systemctl start neutron-netns-cleanup
systemctl enable neutron-server
systemctl enable neutron-dhcp-agent
systemctl enable neutron-openvswitch-agent
systemctl enable neutron-l3-agent
systemctl enable neutron-ovs-cleanup
systemctl enable neutron-netns-cleanup

18. Add a firewall rule
iptables -I INPUT 2 -s 10.0.0.0/24 -p tcp -m tcp --dport 9696 -j ACCEPT
iptables-save > /etc/sysconfig/iptables

19. Create the internal network, the external network, and a router, then attach the subnet and gateway to the router (a quick verification is sketched after the commands)
source /root/admin-openrc
neutron net-create --shared --provider:network_type vxlan internal-network
neutron subnet-create internal-network 100.100.100.0/24 --name internal-network-subnet --gateway 100.100.100.1 --allocation-pool start=100.100.100.10,end=100.100.100.200 --enable-dhcp --ip-version 4 --dns-nameserver 202.106.0.20
neutron net-create ext-net --shared --router:external --provider:network_type flat --provider:physical_network external
neutron subnet-create ext-net 200.200.200.0/24 --name ext-network-subnet --gateway 200.200.200.1 --allocation-pool start=200.200.200.10,end=200.200.200.200 --disable-dhcp --ip-version 4 --dns-nameserver 202.106.0.20
neutron router-create router
neutron router-interface-add router internal-network-subnet
neutron router-gateway-set router ext-net
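A few read-only commands to confirm the objects were created (optional):
neutron net-list                   # internal-network and ext-net should both appear
neutron router-port-list router    # lists the ports attached to the router
neutron agent-list                 # the DHCP, L3 and Open vSwitch agents should report as alive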


