一. Integrating Glance with Ceph
1. Configure glance-api.conf
# Add the following to the [DEFAULT] section
show_image_direct_url = True

# Change the [glance_store] section of glance-api.conf to the following
[glance_store]
stores = file,http,swift,rbd
default_store = rbd
rbd_store_pool = glance-images
rbd_store_user = glance-images
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

# Restart the services
[[email protected]_Node-172_17_7_1 ~]# systemctl restart openstack-glance-api.service openstack-glance-registry.service
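The configuration above assumes the controller already has the Ceph client pieces in place: a readable /etc/ceph/ceph.conf and a keyring for the rbd_store_user. A minimal sketch of the client-side ceph.conf addition; the keyring path and file name are assumptions, not shown in the original:

```ini
; /etc/ceph/ceph.conf addition on the controller (assumed keyring path)
[client.glance-images]
keyring = /etc/ceph/ceph.client.glance-images.keyring
```

The keyring itself would typically be created on a monitor node with something like `ceph auth get-or-create client.glance-images mon 'profile rbd' osd 'profile rbd pool=glance-images'`, then copied to the controller where the glance user can read it.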
2. Upload an image
[[email protected]_Node-172_17_7_1 ~]# ls
anaconda-ks.cfg  cirros-0.4.0-x86_64-disk.img  galera-25.3.22-1.rhel7.el7.centos.x86_64.rpm  keystone_admin  keystone_demo
[[email protected]_Node-172_17_7_1 ~]# . keystone_admin
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# openstack image create --disk-format qcow2 --container-format bare --public --file ~/cirros-0.4.0-x86_64-disk.img "cirros-0.4.0-ceph-qcow2"
+------------------+-----------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                           |
+------------------+-----------------------------------------------------------------------------------------------------------------+
| checksum         | 443b7623e27ecf03dc9e01ee93f67afe                                                                                |
| container_format | bare                                                                                                            |
| created_at       | 2018-09-22T10:56:09Z                                                                                            |
| disk_format      | qcow2                                                                                                           |
| file             | /v2/images/6b21cf4d-8b8c-4466-88cf-704ff8a1dd02/file                                                            |
| id               | 6b21cf4d-8b8c-4466-88cf-704ff8a1dd02                                                                            |
| min_disk         | 0                                                                                                               |
| min_ram          | 0                                                                                                               |
| name             | cirros-0.4.0-ceph-qcow2                                                                                         |
| owner            | a1ba271dea9041d2b8d368d442abf14b                                                                                |
| properties       | direct_url='rbd://882ead7b-dc3b-4049-b51a-289dbfb2ebde/glance-images/6b21cf4d-8b8c-4466-88cf-704ff8a1dd02/snap' |
| protected        | False                                                                                                           |
| schema           | /v2/schemas/image                                                                                               |
| size             | 12716032                                                                                                        |
| status           | active                                                                                                          |
| tags             |                                                                                                                 |
| updated_at       | 2018-09-22T10:56:12Z                                                                                            |
| virtual_size     | None                                                                                                            |
| visibility       | public                                                                                                          |
+------------------+-----------------------------------------------------------------------------------------------------------------+
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# rbd ls glance-images
6b21cf4d-8b8c-4466-88cf-704ff8a1dd02
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# rbd -p glance-images info 6b21cf4d-8b8c-4466-88cf-704ff8a1dd02
rbd image '6b21cf4d-8b8c-4466-88cf-704ff8a1dd02':
        size 12 MiB in 2 objects
        order 23 (8 MiB objects)
        id: 5f0a19430598
        block_name_prefix: rbd_data.5f0a19430598
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Sat Sep 22 18:56:11 2018
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]#
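With show_image_direct_url enabled, the direct_url property above follows a fixed pattern: rbd://&lt;cluster fsid&gt;/&lt;pool&gt;/&lt;image id&gt;/snap. A small sketch that assembles the URL from its parts, using the values shown in the output above:

```shell
#!/bin/sh
# Build the RBD direct_url glance exposes for a ceph-backed image:
# rbd://<fsid>/<pool>/<image-id>/snap
fsid="882ead7b-dc3b-4049-b51a-289dbfb2ebde"      # cluster id, from "ceph -s"
pool="glance-images"                             # rbd_store_pool
image_id="6b21cf4d-8b8c-4466-88cf-704ff8a1dd02"  # glance image id
direct_url="rbd://${fsid}/${pool}/${image_id}/snap"
echo "$direct_url"
```

The trailing /snap refers to the protected snapshot glance creates on the image, which is what later allows nova and cinder to clone it copy-on-write instead of downloading it.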
3. Set the pool application type
# After the glance-images pool comes into use, the ceph cluster status becomes HEALTH_WARN
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# ceph -s
  cluster:
    id:     882ead7b-dc3b-4049-b51a-289dbfb2ebde
    health: HEALTH_WARN
            application not enabled on 1 pool(s)

  services:
    mon: 3 daemons, quorum controller1,controller2,controller3
    mgr: controller1_mgr(active), standbys: controller2_mgr, controller3_mgr
    osd: 16 osds: 16 up, 16 in

  data:
    pools:   4 pools, 512 pgs
    objects: 8 objects, 12 MiB
    usage:   16 GiB used, 1.5 TiB / 1.6 TiB avail
    pgs:     512 active+clean

# "ceph health detail" suggests the fix;
# the pool's application type is undefined; it can be set to 'cephfs', 'rbd', 'rgw', etc.
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
    application not enabled on pool 'glance-images'
    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# ceph osd pool application enable glance-images rbd
enabled application 'rbd' on pool 'glance-images'
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# ceph osd lspools
1 cinder-backup
2 cinder-volumes
3 ephemeral-vms
4 glance-images
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# ceph osd pool application enable cinder-volumes rbd
enabled application 'rbd' on pool 'cinder-volumes'
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# ceph osd pool application enable ephemeral-vms rbd
enabled application 'rbd' on pool 'ephemeral-vms'
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# ceph osd pool application enable glance-images rbd
enabled application 'rbd' on pool 'glance-images'
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]#
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# ceph health detail
HEALTH_OK
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# for i in glance-images cinder-volumes ephemeral-vms cinder-backup; do ceph osd pool application get $i && echo; done
{
    "rbd": {}
}

{
    "rbd": {}
}

{
    "rbd": {}
}

{}

[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]#
二. Integrating Cinder with Ceph
1. Configure cinder.conf (on all controller nodes)
# cinder uses a pluggable driver architecture and can serve multiple storage backends at the same time;
# set the corresponding ceph rbd driver in cinder.conf on the nodes where cinder-volume runs;
# there are 3 compute (storage) nodes; compute01 is used as the example;
# only the sections involved in the cinder-ceph integration are listed below
[[email protected]_Node-172_17_7_1 ~]# vim /etc/cinder/cinder.conf

# use ceph as the backend storage
[DEFAULT]
enabled_backends = ceph

# add a new [ceph] section;
# note that the backend/user/pool names (highlighted in red in the original) must match wherever they appear
[ceph]
# ceph rbd driver
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
# with multiple backends configured, "glance_api_version" must be set in the [DEFAULT] section,
# see: https://wiki.openstack.org/wiki/Cinder-multi-backend
# see: https://ceph.com/geen-categorie/ceph-and-cinder-multi-backend/
glance_api_version = 2
rbd_pool = cinder-volumes
rbd_user = cinder-volumes
rbd_secret_uuid = 92f87f28-cf15-4866-b5b6-5217e39d791e
volume_backend_name = ceph

# after changing the configuration, restart the services
[[email protected]_Node-172_17_7_1 ~]# systemctl restart openstack-glance-api.service openstack-glance-registry.service
[[email protected]_Node-172_17_7_1 ~]# systemctl restart openstack-cinder-api.service openstack-cinder-backup.service openstack-cinder-scheduler.service openstack-cinder-volume.service
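The rbd_secret_uuid above refers to a libvirt secret that must exist on the compute nodes, holding the key of the client.cinder-volumes ceph user so that qemu can attach the volumes; its creation is not shown above. A minimal sketch, where the secret.xml file name is an assumption:

```xml
<!-- secret.xml (hypothetical file name); the UUID must match rbd_secret_uuid in cinder.conf -->
<secret ephemeral='no' private='no'>
  <uuid>92f87f28-cf15-4866-b5b6-5217e39d791e</uuid>
  <usage type='ceph'>
    <name>client.cinder-volumes secret</name>
  </usage>
</secret>
```

On each compute node this would be registered with `virsh secret-define --file secret.xml` and then given its value with `virsh secret-set-value --secret 92f87f28-cf15-4866-b5b6-5217e39d791e --base64 $(ceph auth get-key client.cinder-volumes)`.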
2. Verify
# check the cinder service status; once cinder-volume is integrated with ceph, its state is "up";
# alternatively: cinder service-list
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# openstack volume service list
3. Create a volume
1) Set the volume type
# on the controller node, create a matching type for cinder's ceph backend; with multiple backends, types distinguish between them;
# check with "cinder type-list"
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# cinder type-create ceph

# set an extra spec for the ceph type: key "volume_backend_name", value "ceph"
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# cinder type-key ceph set volume_backend_name=ceph
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# cinder extra-specs-list
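The value set with type-key must equal volume_backend_name in the [ceph] section of cinder.conf, otherwise the scheduler finds no valid backend for this type. A sketch of a quick consistency check; the config file here is a generated stand-in, not the real /etc/cinder/cinder.conf:

```shell
#!/bin/sh
# Write a sample [ceph] section (stand-in for the real cinder.conf)
conf=$(mktemp)
cat > "$conf" <<'EOF'
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
EOF

# Extract the backend name the driver will advertise
backend=$(sed -n 's/^volume_backend_name *= *//p' "$conf")

# The value set via "cinder type-key ceph set volume_backend_name=ceph"
extra_spec="ceph"

if [ "$backend" = "$extra_spec" ]; then
    echo "backend name matches: $backend"
else
    echo "mismatch: conf=$backend extra_spec=$extra_spec" >&2
fi
rm -f "$conf"
```

If the two values diverge, "cinder create" with this type fails with "No valid host was found" in the scheduler log.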
2) Create the volume
# create a volume;
# the trailing "1" is the capacity, 1 GiB
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# cinder create --volume-type ceph --name ceph-volume-test-1 1
# check the created volume;
# alternatively: cinder list
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# openstack volume list
# check the cinder-volumes pool in the ceph cluster
[openstack-admin]-[[email protected]_Node-172_17_7_1 ~]# rbd -p cinder-volumes ls
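The names listed by `rbd -p cinder-volumes ls` follow the cinder RBD driver's default naming template, volume-&lt;volume id&gt; (controlled by the rbd_volume_name_template option). A sketch of the mapping, using a hypothetical volume id:

```shell
#!/bin/sh
# Map a cinder volume id to its RBD image name (default template "volume-%s")
volume_id="5a3c1a9e-1111-2222-3333-444455556666"  # hypothetical volume id from "openstack volume list"
rbd_name="volume-${volume_id}"
echo "$rbd_name"
```

So the image created above should appear in the pool as volume- followed by the id shown in `openstack volume list`.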