OpenStack Queens Detailed Installation and Deployment (Part 5): Nova Controller Node Cluster

1. Create the Nova databases (run on any controller node)

# The Nova service uses four databases, all granted to the single nova user;
# placement is mainly responsible for resource accounting; its most commonly used API calls retrieve candidate resources and claim resources
[root@DT_Node-172_17_7_1 ~]# mysql -uroot -pYTI1MTg4NGZiMGEzZTZmYTEw -hcontroller 
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> CREATE DATABASE nova_placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'ZDJkYTIwODBmMDM2NDBhNTNl';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'ZDJkYTIwODBmMDM2NDBhNTNl';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'ZDJkYTIwODBmMDM2NDBhNTNl';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'ZDJkYTIwODBmMDM2NDBhNTNl';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'ZDJkYTIwODBmMDM2NDBhNTNl';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'ZDJkYTIwODBmMDM2NDBhNTNl';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_placement.* TO 'nova'@'localhost' IDENTIFIED BY 'ZDJkYTIwODBmMDM2NDBhNTNl';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_placement.* TO 'nova'@'%' IDENTIFIED BY 'ZDJkYTIwODBmMDM2NDBhNTNl';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> \q
[root@DT_Node-172_17_7_1 ~]#
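
(Optional check, not part of the official procedure) The grants can be verified by connecting as the nova user; "SHOW DATABASES" should list the four databases created above:
[root@DT_Node-172_17_7_1 ~]# mysql -unova -pZDJkYTIwODBmMDM2NDBhNTNl -hcontroller -e "SHOW DATABASES;"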

2. Create the nova/placement API (run on any controller node)

# Calling Nova-related services requires credentials; simply load the admin environment variable script
[root@DT_Node-172_17_7_1 ~]# . keystone_admin 

1) Create the nova/placement users

# The service project was already created in the Glance chapter;
# the nova/placement users live in the "default" domain
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack user create --domain default --password=ZTQ0NTdjOTI1YzY1Zjg2ZTE2 nova
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack user create --domain default --password=ZTQ0NTdjOTI1YzY1Zjg2ZTE2 placement

2) Grant roles to nova/placement

# Grant the admin role to the nova/placement users
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack role add --project service --user nova admin
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack role add --project service --user placement admin
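
(Optional) To confirm both role assignments took effect, the following check may help; the output should show the admin role for the nova and placement users on the service project:
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack role assignment list --project service --names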

3) Create the nova/placement service entities

# The nova service entity is of type "compute";
# the placement service entity is of type "placement"
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack service create --name nova --description "OpenStack Compute" compute
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack service create --name placement --description "Placement API" placement
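
(Optional) The newly created service entities can be listed with:
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack service list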

4) Create the nova/placement API endpoints

# Note: --region must match the region generated when the admin user was initialized;
# all API addresses use the VIP; if public/internal/admin each use a different VIP, keep them distinct;
# the nova-api service type is compute, the placement-api service type is placement;
# nova public/internal/admin api
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack endpoint create --region RegionTest compute public http://controller:8774/v2.1
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack endpoint create --region RegionTest compute internal http://controller:8774/v2.1
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack endpoint create --region RegionTest compute admin http://controller:8774/v2.1

# placement public api
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack endpoint create --region RegionTest placement public http://controller:8778
# placement internal api
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack endpoint create --region RegionTest placement internal http://controller:8778
# placement admin api
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack endpoint create --region RegionTest placement admin http://controller:8778
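
(Optional) Each service should now expose three endpoints (public/internal/admin):
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack endpoint list --service compute
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack endpoint list --service placement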

3. Install Nova (run on all controller nodes)

# Install the Nova services on all controller nodes; controller1 is shown as the example
[root@DT_Node-172_17_7_1 ~]# yum install openstack-nova-api openstack-nova-conductor \
 openstack-nova-console openstack-nova-novncproxy \
 openstack-nova-scheduler openstack-nova-placement-api -y
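
(Optional) A quick package query confirms the installation:
[root@DT_Node-172_17_7_1 ~]# rpm -qa | grep openstack-nova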

4. Configure nova.conf (run on all controller nodes)

# Note: the "my_ip" parameter must be adjusted per node;
# note the ownership of nova.conf: root:nova
[root@DT_Node-172_17_7_1 ~]# cp /etc/nova/nova.conf{,_original}
[root@DT_Node-172_17_7_1 ~]# egrep -v '^#|^$' /etc/nova/nova.conf
[DEFAULT]
my_ip=172.17.7.1
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
osapi_compute_listen=$my_ip
osapi_compute_listen_port=8774
metadata_listen=$my_ip
metadata_listen_port=8775
transport_url=rabbit://openstack:MWY1NTA5NGYzYmM1MWQ2MTFk@controller1:5672,openstack:MWY1NTA5NGYzYmM1MWQ2MTFk@controller2:5672,openstack:MWY1NTA5NGYzYmM1MWQ2MTFk@controller3:5672
[api]
auth_strategy=keystone
[api_database]
connection=mysql+pymysql://nova:ZDJkYTIwODBmMDM2NDBhNTNl@controller/nova_api
[barbican]
[cache]
backend=oslo_cache.memcache_pool
enabled=True
memcache_servers=controller1:11211,controller2:11211,controller3:11211
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
connection = mysql+pymysql://nova:ZDJkYTIwODBmMDM2NDBhNTNl@controller/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller1:11211,controller2:11211,controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = ZTQ0NTdjOTI1YzY1Zjg2ZTE2
[libvirt]
inject_password=true
vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
[matchmaker_redis]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionTest
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = ZTQ0NTdjOTI1YzY1Zjg2ZTE2
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled=true
server_listen=$my_ip
server_proxyclient_address=$my_ip
novncproxy_base_url=http://$my_ip:6080/vnc_auto.html
novncproxy_host=$my_ip
novncproxy_port=6080
[workarounds]
[wsgi]
[xenserver]
[xvp]
[root@DT_Node-172_17_7_1 ~]# 
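
On the other controllers only "my_ip" needs to change (controller2/controller3 are 172.17.7.2/172.17.7.3 in this deployment, matching the HAProxy backends below). A minimal sketch for controller2, together with the ownership noted above:
# hypothetical example: adjust my_ip after copying nova.conf to controller2
[root@DT_Node-172_17_7_2 ~]# sed -i 's/^my_ip=.*/my_ip=172.17.7.2/' /etc/nova/nova.conf
# keep the file owned root:nova so the nova services can still read it
[root@DT_Node-172_17_7_2 ~]# chown root:nova /etc/nova/nova.conf && chmod 640 /etc/nova/nova.conf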

5. Configure 00-nova-placement-api.conf (run on all controller nodes)

# Note: adjust the listen address per node
[root@DT_Node-172_17_7_1 ~]# cp /etc/httpd/conf.d/00-nova-placement-api.conf /etc/httpd/conf.d/00-nova-placement-api.conf.bak
[root@DT_Node-172_17_7_1 ~]# sed -i "s/Listen\ 8778/Listen\ 172.17.7.1:8778/g" /etc/httpd/conf.d/00-nova-placement-api.conf
[root@DT_Node-172_17_7_1 ~]# sed -i "s/*:8778/172.17.7.1:8778/g" /etc/httpd/conf.d/00-nova-placement-api.conf
[root@DT_Node-172_17_7_1 ~]# echo "

#Placement API
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
" >> /etc/httpd/conf.d/00-nova-placement-api.conf
# Restart httpd so placement-api starts listening on its port
[root@DT_Node-172_17_7_1 ~]# systemctl restart httpd
[root@DT_Node-172_17_7_1 ~]# systemctl status httpd 
[root@DT_Node-172_17_7_1 ~]# ss -tnl | grep 8778
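
(Optional) The placement API can also be probed directly; version discovery on the root URL normally requires no token and should return a small JSON "versions" document:
[root@DT_Node-172_17_7_1 ~]# curl http://172.17.7.1:8778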

6. Sync the Nova databases
1) Sync the Nova databases (run on any controller node)

# Sync the nova-api database
[root@DT_Node-172_17_7_1 ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
# Register the cell0 database
[root@DT_Node-172_17_7_1 ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
# Create the cell1 cell
[root@DT_Node-172_17_7_1 ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
# Sync the nova database;
# "deprecated" messages can be ignored
[root@DT_Node-172_17_7_1 ~]# su -s /bin/sh -c "nova-manage db sync" nova
[root@DT_Node-172_17_7_1 ~]# 

Addendum:

In this version, syncing the tables into the database reports the following warning:

[root@DT_Node-172_17_7_1 ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
[root@DT_Node-172_17_7_1 ~]# 

The workaround is as follows:
bug: https://bugs.launchpad.net/nova/+bug/1746530
patch: https://github.com/openstack/oslo.db/commit/c432d9e93884d6962592f6d19aaec3f8f66ac3a2

[root@DT_Node-172_17_7_1 ~]# sed -i '175s/^/#/' /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py
[root@DT_Node-172_17_7_1 ~]# sed -i "175a \                'db_max_retry_interval', 'backend', 'use_tpool'])" /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py

2) Verify (run on any controller node)

# cell0 and cell1 should be registered correctly
[root@DT_Node-172_17_7_1 ~]# nova-manage cell_v2 list_cells

# Check the database tables

[root@DT_Node-172_17_7_1 ~]# mysql -hcontroller -u nova -pZDJkYTIwODBmMDM2NDBhNTNl nova_api -e "show tables;"
[root@DT_Node-172_17_7_1 ~]# mysql -hcontroller -u nova -pZDJkYTIwODBmMDM2NDBhNTNl nova -e "show tables;" 
[root@DT_Node-172_17_7_1 ~]# mysql -hcontroller -u nova -pZDJkYTIwODBmMDM2NDBhNTNl nova_cell0 -e "show tables;"

7. Start the services (run on all controller nodes)

# Run on all controller nodes; controller1 is shown as the example;
# enable the services at boot
[root@DT_Node-172_17_7_1 ~]# systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service

# Start the services
[root@DT_Node-172_17_7_1 ~]# systemctl restart openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service

# Check the service status
[root@DT_Node-172_17_7_1 ~]# systemctl status openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service

# Check the listening ports

[root@DT_Node-172_17_7_1 ~]# ss -tnl| egrep '8774|8775|8778|6080'

8. HAProxy configuration (run on all controller nodes)

# Add the following to the HAProxy configuration file
 listen nova_compute_api_cluster
  bind 172.17.7.100:8774
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller1 172.17.7.1:8774 check inter 2000 rise 2 fall 5
  server controller2 172.17.7.2:8774 check inter 2000 rise 2 fall 5
  server controller3 172.17.7.3:8774 check inter 2000 rise 2 fall 5

 listen nova_placement_cluster
  bind 172.17.7.100:8778
  balance  source
  option  tcpka
  option  tcplog
  server controller1 172.17.7.1:8778 check inter 2000 rise 2 fall 5
  server controller2 172.17.7.2:8778 check inter 2000 rise 2 fall 5
  server controller3 172.17.7.3:8778 check inter 2000 rise 2 fall 5

 listen nova_metadata_api_cluster
  bind 172.17.7.100:8775
  balance  source
  option  tcpka
  option  tcplog
  server controller1 172.17.7.1:8775 check inter 2000 rise 2 fall 5
  server controller2 172.17.7.2:8775 check inter 2000 rise 2 fall 5
  server controller3 172.17.7.3:8775 check inter 2000 rise 2 fall 5

 listen nova_vncproxy_cluster
  bind 172.17.7.100:6080
  balance  source
  option  tcpka
  option  tcplog
  server controller1 172.17.7.1:6080 check inter 2000 rise 2 fall 5
  server controller2 172.17.7.2:6080 check inter 2000 rise 2 fall 5
  server controller3 172.17.7.3:6080 check inter 2000 rise 2 fall 5
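
(Optional) The configuration syntax can be validated before restarting:
[root@DT_Node-172_17_7_2 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg
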
# Restart HAProxy (run on the node holding the VIP)
[root@DT_Node-172_17_7_2 ~]# systemctl restart haproxy.service 
[root@DT_Node-172_17_7_2 ~]# systemctl status haproxy.service        
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2018-09-10 20:46:48 CST; 8s ago
 Main PID: 40179 (haproxy-systemd)
   CGroup: /system.slice/haproxy.service
           ├─40179 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           ├─40182 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           └─40187 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

9月 10 20:46:48 controller2 systemd[1]: Starting HAProxy Load Balancer...
9月 10 20:46:48 controller2 haproxy-systemd-wrapper[40179]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
9月 10 20:46:48 controller2 haproxy-systemd-wrapper[40179]: [WARNING] 252/204648 (40182) : config : log format ignored for proxy 'keystone_admin_cluster' since it has no log address.
9月 10 20:46:48 controller2 haproxy-systemd-wrapper[40179]: [WARNING] 252/204648 (40182) : config : log format ignored for proxy 'keystone_public_cluster' since it has no log address.
9月 10 20:46:48 controller2 haproxy-systemd-wrapper[40179]: [WARNING] 252/204648 (40182) : config : log format ignored for proxy 'glance_api_cluster' since it has no log address.
9月 10 20:46:48 controller2 haproxy-systemd-wrapper[40179]: [WARNING] 252/204648 (40182) : config : log format ignored for proxy 'glance_registry_cluster' since it has no log address.
9月 10 20:46:48 controller2 haproxy-systemd-wrapper[40179]: [WARNING] 252/204648 (40182) : config : log format ignored for proxy 'nova_compute_api_cluster' since it has no log address.
9月 10 20:46:48 controller2 haproxy-systemd-wrapper[40179]: [WARNING] 252/204648 (40182) : config : log format ignored for proxy 'nova_placement_cluster' since it has no log address.
9月 10 20:46:48 controller2 haproxy-systemd-wrapper[40179]: [WARNING] 252/204648 (40182) : config : log format ignored for proxy 'nova_metadata_api_cluster' since it has no log address.
9月 10 20:46:48 controller2 haproxy-systemd-wrapper[40179]: [WARNING] 252/204648 (40182) : config : log format ignored for proxy 'nova_vncproxy_cluster' since it has no log address.
[root@DT_Node-172_17_7_2 ~]# ss -tnl| egrep '8774|8775|8778|6080'
LISTEN     0      4000   172.17.7.100:6080                     *:*                  
LISTEN     0      100    172.17.7.2:6080                     *:*                  
LISTEN     0      4000   172.17.7.100:8774                     *:*                  
LISTEN     0      128    172.17.7.2:8774                     *:*                  
LISTEN     0      4000   172.17.7.100:8775                     *:*                  
LISTEN     0      128    172.17.7.2:8775                     *:*                  
LISTEN     0      4000   172.17.7.100:8778                     *:*                  
LISTEN     0      511    172.17.7.2:8778                     *:*                  
[root@DT_Node-172_17_7_2 ~]#

9. Verify (run on any controller node)

[root@DT_Node-172_17_7_1 ~]# . keystone_admin

# List the service components and check their status;
# the command "nova service-list" can also be used
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack compute service list

# Show the API endpoints
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack catalog list

# Check that the cells and the placement API are working correctly
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# nova-status upgrade check

10. Add flavors (optional; they can also be added later via the web UI) (run on any controller node)

[root@DT_Node-172_17_7_1 ~]# . keystone_admin 
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack flavor list
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack flavor create --id 1 --vcpus 1 --ram 128 --disk 0 m1.nano  && \
openstack flavor create --id 2 --vcpus 1 --ram 128 --disk 0 m1.micro  && \
openstack flavor create --id 3 --vcpus 1 --ram 512 --disk 1 m1.tiny  && \
openstack flavor create --id 4 --vcpus 1 --ram 2048 --disk 20 m1.small  && \
openstack flavor create --id 5 --vcpus 2 --ram 4096 --disk 40 m1.medium  && \
openstack flavor create --id 6 --vcpus 4 --ram 8192 --disk 80 m1.large  && \
openstack flavor create --id 7 --vcpus 8 --ram 16384 --disk 160 m1.xlarge
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack flavor list
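
(Optional) A single flavor can be inspected, or deleted and recreated if a value was entered incorrectly, for example:
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# openstack flavor show m1.small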

11. Configure pcs resources (run on any controller node)

# Add the openstack-nova-api, openstack-nova-consoleauth, openstack-nova-scheduler, openstack-nova-conductor and openstack-nova-novncproxy resources
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# pcs resource create openstack-nova-api systemd:openstack-nova-api --clone interleave=true
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# pcs resource create openstack-nova-consoleauth systemd:openstack-nova-consoleauth --clone interleave=true
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# pcs resource create openstack-nova-scheduler systemd:openstack-nova-scheduler --clone interleave=true
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# pcs resource create openstack-nova-conductor systemd:openstack-nova-conductor --clone interleave=true
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# pcs resource create openstack-nova-novncproxy systemd:openstack-nova-novncproxy --clone interleave=true

# Based on testing, it is recommended to run the stateless services openstack-nova-api, openstack-nova-consoleauth, openstack-nova-conductor and openstack-nova-novncproxy in active/active mode;
# services such as openstack-nova-scheduler should run in active/passive mode
# List the pcs resources
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# pcs resource
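
(Optional) "pcs status" shows whether every clone is started on all three controllers:
[openstack-admin]-[root@DT_Node-172_17_7_1 ~]# pcs status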
