Private Cloud OpenStack Ocata Installation (Part 4): Install and Configure the Compute Node

Abstract

The official documentation recommends deploying OpenStack on at least two servers, mainly because the instances you create actually consume resources on the compute nodes: the vCPUs and memory of your compute nodes determine the maximum vCPU count and memory of the instances you can create. Presumably for this reason, the official guide recommends separating the controller node from the compute nodes. This deployment runs on VMware virtual machines and is intended as a test environment, so it is planned as one controller node plus two compute nodes. The components covered in this series are Keystone, Glance, Nova, Neutron, Cinder, and the Dashboard, and the deployment is done on three CentOS 7 servers.

Private Cloud OpenStack Ocata Installation (Part 1): Controller deployment: https://www.dwhd.org/20180213_234933.html

Private Cloud OpenStack Ocata Installation (Part 2): Installing and configuring the Glance component: https://www.dwhd.org/20180213_234933.html

Private Cloud OpenStack Ocata Installation (Part 3): Installing and configuring the Nova component: https://www.dwhd.org/20180214_163005.html

I: Cluster Overview

OpenStack Controller
  NIC IPs:   192.168.200.101, 172.18.100.1, 172.28.100.1
  Networks:  192.168.200.0/24 (gateway 192.168.200.2), 172.18.0.0/16 (gateway 172.18.0.1), 172.28.0.0/16 (gateway 172.28.0.1)
  Services:  MySQL, RabbitMQ, Memcached, Keystone, Glance, Nova, Neutron
  Specs:     4 vCPU / 8 GB RAM / 100 GB disk
  OS:        CentOS 7.4.1708

OpenStack Compute
  NIC IPs:   192.168.200.102, 172.18.100.2, 172.28.100.2
  Networks:  192.168.200.0/24 (gateway 192.168.200.2), 172.18.0.0/16 (gateway 172.18.0.1), 172.28.0.0/16 (gateway 172.28.0.1)
  Services:  Nova-Compute, Neutron
  Specs:     4 vCPU / 8 GB RAM / 100 GB disk
  OS:        CentOS 7.4.1708

OpenStack Network
  NIC IPs:   192.168.200.103, 172.18.100.3, 172.28.100.3
  Networks:  192.168.200.0/24 (gateway 192.168.200.2), 172.18.0.0/16 (gateway 172.18.0.1), 172.28.0.0/16 (gateway 172.28.0.1)
  Services:  (none listed)
  Specs:     4 vCPU / 4 GB RAM / 40 GB disk
  OS:        CentOS 7.4.1708

II: Compute Node Base Environment Configuration

1. Configure the network

[root@LB-VM-Node-192_168_200_102 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:6d:40:45 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.102/24 brd 192.168.200.255 scope global eth0
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:6d:40:4f brd ff:ff:ff:ff:ff:ff
    inet 172.18.100.2/16 brd 172.18.255.255 scope global eth1
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:6d:40:59 brd ff:ff:ff:ff:ff:ff
    inet 172.28.100.2/16 brd 172.28.255.255 scope global eth2
       valid_lft forever preferred_lft forever
[root@LB-VM-Node-192_168_200_102 ~]# ip r
default via 192.168.200.2 dev eth0 metric 100
default via 172.18.0.1 dev eth1 metric 1000
default via 172.28.0.1 dev eth2 metric 2000
169.254.0.0/16 dev eth0 scope link metric 1002
169.254.0.0/16 dev eth1 scope link metric 1003
169.254.0.0/16 dev eth2 scope link metric 1004
172.18.0.0/16 dev eth1 proto kernel scope link src 172.18.100.2
172.28.0.0/16 dev eth2 proto kernel scope link src 172.28.100.2
192.168.200.0/24 dev eth0 proto kernel scope link src 192.168.200.102
[root@LB-VM-Node-192_168_200_102 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.200.101    LB-Controller-Node1-192_168_200_101.dwhd.org LB-Controller-Node1-192_168_200_101 Controller controller
172.18.100.1       LB-Controller-Node1-192_168_200_101.dwhd.org LB-Controller-Node1-192_168_200_101 Controller controller
172.28.100.1       LB-Controller-Node1-192_168_200_101.dwhd.org LB-Controller-Node1-192_168_200_101 Controller controller

192.168.200.102    LB-Compute-Nodei1-192_168_200_102.dwhd.org LB-Compute-Nodei1-192_168_200_102 Compute computer
172.18.100.2       LB-Compute-Nodei1-192_168_200_102.dwhd.org LB-Compute-Nodei1-192_168_200_102 Compute computer
172.28.100.2       LB-Compute-Nodei1-192_168_200_102.dwhd.org LB-Compute-Nodei1-192_168_200_102 Compute computer

192.168.200.103    LB-Network-Nodei1-192_168_200_103.dwhd.org LB-Network-Nodei1-192_168_200_103 Network network
172.18.100.3       LB-Network-Nodei1-192_168_200_103.dwhd.org LB-Network-Nodei1-192_168_200_103 Network network
172.28.100.3       LB-Network-Nodei1-192_168_200_103.dwhd.org LB-Network-Nodei1-192_168_200_103 Network network
[root@LB-VM-Node-192_168_200_102 ~]# ping -c1 controller
PING LB-Controller-Node1-192_168_200_101.dwhd.org (192.168.200.101) 56(84) bytes of data.
64 bytes from LB-Controller-Node1-192_168_200_101.dwhd.org (192.168.200.101): icmp_seq=1 ttl=64 time=0.395 ms

--- LB-Controller-Node1-192_168_200_101.dwhd.org ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms
[root@LB-VM-Node-192_168_200_102 ~]# ping -c1 computer
PING LB-Compute-Nodei1-192_168_200_102.dwhd.org (192.168.200.102) 56(84) bytes of data.
64 bytes from LB-Compute-Nodei1-192_168_200_102.dwhd.org (192.168.200.102): icmp_seq=1 ttl=64 time=0.017 ms

--- LB-Compute-Nodei1-192_168_200_102.dwhd.org ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms
[root@LB-VM-Node-192_168_200_102 ~]#


Below are the configuration files for the three NICs.

# /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="eth0"
DEVICE="eth0"
ONBOOT="yes"
#BOOTPROTO="dhcp"
BOOTPROTO="static"
DEFROUTE="yes"
METRIC=100
IPADDR="192.168.200.102"
NETMASK="255.255.255.0"
GATEWAY="192.168.200.2"
DNS1=47.90.33.131
DNS2=8.8.8.8
DNS3=8.8.4.4

# /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="eth1"
DEVICE="eth1"
ONBOOT="yes"
#BOOTPROTO="dhcp"
BOOTPROTO="static"
DEFROUTE="yes"
METRIC=1000
IPADDR="172.18.100.2"
NETMASK="255.255.0.0"
GATEWAY="172.18.0.1"

# /etc/sysconfig/network-scripts/ifcfg-eth2
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="eth2"
DEVICE="eth2"
ONBOOT="yes"
#BOOTPROTO="dhcp"
BOOTPROTO="static"
DEFROUTE="yes"
METRIC=2000
IPADDR="172.28.100.2"
NETMASK="255.255.0.0"
GATEWAY="172.28.0.1"
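
After editing the three ifcfg files, the changes still have to be applied to the running system. Below is a minimal sketch, assuming the classic network service (rather than NetworkManager) manages these interfaces:

systemctl restart network    # re-read the ifcfg-eth* files and re-apply addresses and routes
ip a && ip r                 # confirm the addresses and default routes match the output shown earlier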

2. Time synchronization

[root@LB-VM-Node-192_168_200_102 ~]# { [ -x /usr/sbin/ntpdate ] || yum install ntpdate -y; } && \
{ if ! grep -q ntpdate /var/spool/cron/root; then echo -e "\n*/5 * * * * /usr/sbin/ntpdate ntp.dtops.cc >/dev/null 2>&1" >> /var/spool/cron/root;fi; } && \
{ clear && /usr/sbin/ntpdate ntp.dtops.cc && echo -e "\n=======\n" && cat /var/spool/cron/root; }
14 Feb 22:30:22 ntpdate[2421]: adjust time server 180.150.154.108 offset -0.031160 sec

=======


*/5 * * * * /usr/sbin/ntpdate -u ntp.dtops.cc >/dev/null 2>&1
*/1 * * * * /usr/sbin/ss  -tan|awk 'NR>1{++S[$1]}END{for (a in S) print a,S[a]}' > /tmp/tcp-status.txt
*/1 * * * * /usr/sbin/ss -o state established '( dport = :http or sport = :http )' |grep -v Netid > /tmp/httpNUB.txt
[root@LB-VM-Node-192_168_200_102 ~]#


3. Enable the OpenStack repository

[root@LB-VM-Node-192_168_200_102 ~]# yum install -y centos-release-openstack-ocata

4. Download and install the RDO release RPM to enable the OpenStack repository

[root@LB-VM-Node-192_168_200_102 ~]# yum install -y https://rdoproject.org/repos/rdo-release.rpm

5. Update all packages

[root@LB-VM-Node-192_168_200_102 ~]# yum clean all && yum makecache && yum upgrade -y

6. Install the OpenStack client

[root@LB-VM-Node-192_168_200_102 ~]# yum install -y python-openstackclient

7. Install the openstack-selinux package, which automatically manages SELinux security policies for the OpenStack services

[root@LB-VM-Node-192_168_200_102 ~]# yum install -y openstack-selinux

III: Install and Configure the Compute Node

1. Install the packages

[root@LB-VM-Node-192_168_200_102 ~]# yum install -y openstack-nova-compute

2. Edit the /etc/nova/nova.conf file and complete the following steps.

A. In the [DEFAULT] section, enable only the compute and metadata APIs:

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata

B. In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]
# ...
transport_url = rabbit://openstack:lookback@controller

C. In the [api] and [keystone_authtoken] sections, configure Identity service access:

[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = lookback

D. In the [DEFAULT] section, set the my_ip option to the IP address of the compute node's management interface:

[DEFAULT]
# ...
my_ip = 192.168.200.102

E. In the [DEFAULT] section, enable support for the Networking (neutron) service:

[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

F. In the [vnc] section, enable and configure remote console access:

[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

G. In the [glance] section, configure the location of the Image service API:

[glance]
# ...
api_servers = http://controller:9292

H. In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

I. In the [placement] section, configure the Placement API:

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = lookback
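
If you prefer to script these changes instead of editing the file by hand, the same options can be set with openstack-config from the openstack-utils package. The sketch below covers only a few of the options above as an illustration; it assumes openstack-utils is installed and reuses the passwords from this deployment:

yum install -y openstack-utils
# each call writes one key into the given section of /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:lookback@controller
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.200.102
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp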

Below is my complete configuration for reference:

[root@LB-VM-Node-192_168_200_102 ~]# grep -Ev '^(#|$)' /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:lookback@controller
my_ip = 192.168.200.102
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = lookback
[libvirt]
virt_type = qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = lookback
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[root@LB-VM-Node-192_168_200_102 ~]#
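
Note that the sample above also sets virt_type = qemu in the [libvirt] section. These compute nodes are themselves VMware virtual machines, so hardware virtualization may not be exposed to them. You can check for hardware acceleration support with the command below; if it returns 0, keep virt_type = qemu, otherwise the default kvm can be used:

egrep -c '(vmx|svm)' /proc/cpuinfo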

3. Start the Compute service and set it to start automatically at boot

[root@LB-VM-Node-192_168_200_102 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@LB-VM-Node-192_168_200_102 ~]# systemctl start libvirtd.service openstack-nova-compute.service
[root@LB-VM-Node-192_168_200_102 ~]# systemctl status libvirtd.service openstack-nova-compute.service
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since 三 2018-02-14 23:26:33 CST; 6min ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 21489 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─21489 /usr/sbin/libvirtd

2月 14 23:26:33 LB-VM-Node-192_168_200_102.dwhd.org systemd[1]: Starting Virtualization daemon...
2月 14 23:26:33 LB-VM-Node-192_168_200_102.dwhd.org systemd[1]: Started Virtualization daemon.

● openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2018-02-14 23:32:21 CST; 17s ago
 Main PID: 21506 (nova-compute)
   CGroup: /system.slice/openstack-nova-compute.service
           └─21506 /usr/bin/python2 /usr/bin/nova-compute

2月 14 23:26:33 LB-VM-Node-192_168_200_102.dwhd.org systemd[1]: Starting OpenStack Nova Compute Server...
2月 14 23:32:21 LB-VM-Node-192_168_200_102.dwhd.org systemd[1]: Started OpenStack Nova Compute Server.
[root@LB-VM-Node-192_168_200_102 ~]#


If the nova-compute service fails to start, check /var/log/nova/nova-compute.log. You may see an error like the following:

2018-02-14 23:27:38.258 21506 ERROR oslo.messaging._drivers.impl_rabbit [req-3dd9b02f-5eb1-4c16-90dd-88deb2951513 - - - - -] [bb37531e-ea90-4602-804e-5d4f8151d15a] AMQP server on controller:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 16 seconds. Client port: None: error: [Errno 113] EHOSTUNREACH

This usually means that the firewall on the controller node is blocking access to port 5672. Configure the firewall to open port 5672 on the controller node, then restart the nova-compute service on the compute node.
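
A minimal sketch of that fix, assuming firewalld is the active firewall on the controller node:

# on the controller node: open the AMQP (RabbitMQ) port
firewall-cmd --permanent --add-port=5672/tcp
firewall-cmd --reload
# back on the compute node: restart the compute service so it reconnects
systemctl restart openstack-nova-compute.service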


IV: Add the Compute Node to the Cell Database (run the following commands on the controller node)

1. Source the admin credentials to enable admin-only CLI commands, then check whether the compute host already appears in the database:

[root@LB-Controller-Node1-192_168_200_101 ~]# . admin-openrc
[root@LB-Controller-Node1-192_168_200_101 ~]# openstack hypervisor list

[root@LB-Controller-Node1-192_168_200_101 ~]#

2. Discover the compute nodes:

[root@LB-Controller-Node1-192_168_200_101 ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': d95ae531-ecaf-4f35-9193-36aee48cc779
Found 1 unmapped computes in cell: d95ae531-ecaf-4f35-9193-36aee48cc779
Checking host mapping for compute host 'LB-VM-Node-192_168_200_102.dwhd.org': b7716427-8e2d-446d-94a5-e0428f8b5399
Creating host mapping for compute host 'LB-VM-Node-192_168_200_102.dwhd.org': b7716427-8e2d-446d-94a5-e0428f8b5399
[root@LB-Controller-Node1-192_168_200_101 ~]# openstack hypervisor list
+----+-------------------------------------+-----------------+-----------------+-------+
| ID | Hypervisor Hostname                 | Hypervisor Type | Host IP         | State |
+----+-------------------------------------+-----------------+-----------------+-------+
|  1 | LB-VM-Node-192_168_200_102.dwhd.org | QEMU            | 192.168.200.102 | up    |
+----+-------------------------------------+-----------------+-----------------+-------+
[root@LB-Controller-Node1-192_168_200_101 ~]#


3. Whenever you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register them. Alternatively, you can set an appropriate discovery interval in /etc/nova/nova.conf on the controller:

[scheduler]
discover_hosts_in_cells_interval = 300
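
If you use this periodic-discovery option, it is the nova-scheduler service on the controller that runs the discovery task, so restart it after changing nova.conf (assuming the default service name shipped by the CentOS RDO packages):

systemctl restart openstack-nova-scheduler.service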

V: Verify Operation (continue running these commands on the controller node)

1. Source the admin credentials so that subsequent commands run as the admin user:

[root@LB-Controller-Node1-192_168_200_101 ~]# . admin-openrc

2. List the service components to confirm that each process started and registered successfully:

[root@LB-Controller-Node1-192_168_200_101 ~]# openstack compute service list
+----+------------------+-------------------------------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host                                | Zone     | Status  | State | Updated At                 |
+----+------------------+-------------------------------------+----------+---------+-------+----------------------------+
|  1 | nova-scheduler   | controller                          | internal | enabled | up    | 2018-02-14T15:46:26.000000 |
|  2 | nova-conductor   | controller                          | internal | enabled | up    | 2018-02-14T15:46:25.000000 |
|  3 | nova-consoleauth | controller                          | internal | enabled | up    | 2018-02-14T15:46:27.000000 |
|  7 | nova-compute     | LB-VM-Node-192_168_200_102.dwhd.org | nova     | enabled | up    | 2018-02-14T15:46:21.000000 |
+----+------------------+-------------------------------------+----------+---------+-------+----------------------------+
[root@LB-Controller-Node1-192_168_200_101 ~]#


This output should show three service components enabled on the controller node and one service component enabled on the compute node.

3. List the API endpoints in the Identity service to verify connectivity with the Identity service:

[root@LB-Controller-Node1-192_168_200_101 ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| nova      | compute   | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1    |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1   |
|           |           |                                         |
| glance    | image     | RegionOne                               |
|           |           |   public: http://controller:9292        |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:9292      |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:9292         |
|           |           |                                         |
| placement | placement | RegionOne                               |
|           |           |   public: http://controller:8778        |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8778         |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8778      |
|           |           |                                         |
| keystone  | identity  | RegionOne                               |
|           |           |   public: http://controller:5000/v3/    |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:5000/v3/  |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:35357/v3/    |
|           |           |                                         |
+-----------+-----------+-----------------------------------------+
[root@LB-Controller-Node1-192_168_200_101 ~]#


Ignore any warnings that appear in this output.

4. List the images in the Image service to verify connectivity with the Image service:

[root@LB-Controller-Node1-192_168_200_101 ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 62659fab-2181-46bd-8e71-b096077a0398 | cirros | active |
+--------------------------------------+--------+--------+
[root@LB-Controller-Node1-192_168_200_101 ~]#


5. Check that the cells and the Placement API are working properly:

[root@LB-Controller-Node1-192_168_200_101 ~]# nova-status upgrade check
+---------------------------+
| Upgrade Check Results     |
+---------------------------+
| Check: Cells v2           |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Placement API      |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Resource Providers |
| Result: Success           |
| Details: None             |
+---------------------------+
[root@LB-Controller-Node1-192_168_200_101 ~]#

