
Integrating Glance with Ceph


Contents
  • Integrating Glance with Ceph
    • 1. Uploading an image
    • 2. Integrating with Ceph
      • 2.1 Creating the pool
      • 2.2 Creating the user
      • 2.3 Distributing the Ceph files
      • 2.4 Modifying the globals file
      • 2.5 Updating the glance configuration
    • 3. Uploading an image to Ceph

Integrating Glance with Ceph

This builds on the OpenStack environment set up in the previous articles.

By default, Glance stores its images locally. If the Glance node goes down, those images are gone, so we want to store Glance's images on the Ceph cluster instead: even if the Glance node fails, another node can start Glance, connect to Ceph, and the images are still there.

1. Uploading an image

Glance is not yet connected to Ceph, so let's upload an image and see whether it ends up stored locally.

[root@openstack01 ~]# source .venv/kolla/bin/activate
(kolla) [root@openstack01 ~]# source /etc/kolla/
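The second source command loads the OpenStack admin credentials; the exact filename was cut off here. With a default kolla-ansible deployment this is usually the admin-openrc.sh file generated by kolla-ansible post-deploy, so a typical sequence looks like the sketch below (an assumption, adjust to your deployment):

# assumes the default credentials file generated by kolla-ansible
(kolla) [root@openstack01 ~]# kolla-ansible -i multinode post-deploy       # writes /etc/kolla/admin-openrc.sh
(kolla) [root@openstack01 ~]# source /etc/kolla/admin-openrc.sh
(kolla) [root@openstack01 ~]# env | grep ^OS_                              # confirm OS_AUTH_URL, OS_USERNAME, etc. are set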

Download a cirros image

(kolla) [root@openstack01 ~]# wget /0.4.0/cirros-0.4.0-x86_64
(kolla) [root@openstack01 kolla]# openstack image create --disk-format qcow2 --container-format bare --progress  --public --file /root/cirros-0.4.0-x86_64 test_image 

(kolla) [root@openstack01 kolla]# openstack image list
+--------------------------------------+------------+--------+
| ID                                   | Name       | Status |
+--------------------------------------+------------+--------+
| c5d3998d-51a7-4732-9cd9-fb34ff2d3e94 | cirros     | active |
| 1add255d-d797-4c5a-8e74-f902ca3c45b6 | test_image | active |
+--------------------------------------+------------+--------+

With that, an image has been uploaded. Now let's check whether it is stored locally.

(kolla) [root@openstack01 kolla]# openstack image show test_image |grep file |head -1
| file             | /v2/images/1add255d-d797-4c5a-8e74-f902ca3c45b6/file 

The output shows the path /v2/images/1add255d-d797-4c5a-8e74-f902ca3c45b6/file. Let's go straight into the glance container:

(kolla) [root@openstack01 kolla]# docker exec -it glance_api /bin/bash
(glance-api)[glance@openstack01 /]$ cd /var/lib/glance/images/
(glance-api)[glance@openstack01 /var/lib/glance/images]$ ls
1add255d-d797-4c5a-8e74-f902ca3c45b6  c5d3998d-51a7-4732-9cd9-fb34ff2d3e94

There are two files in this directory, each named after one of our image IDs, which confirms that the images are stored locally inside the container.

2. Integrating with Ceph

Now let's work on the Ceph side. On Ceph we first create the pool, then create a user and grant it permissions, and finally update the Glance configuration.

2.1 Creating the pool

[root@ceph ~]# ceph osd pool create images
pool 'images' created
[root@ceph ~]# ceph osd pool application enable images rbd
enabled application 'rbd' on pool 'images'
[root@ceph ~]# rbd pool init -p images
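Before moving on, you can optionally sanity-check the pool with standard Ceph commands (not part of the original steps):

# optional checks: confirm the pool exists and carries the rbd application tag
[root@ceph ~]# ceph osd pool ls detail | grep images
[root@ceph ~]# ceph osd pool application get images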

The pool is created and initialized, and the next step is to create the user

2.2 Creating the user

[root@ceph ~]# ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' -o /etc/ceph/ceph.client.glance.keyring

This exports the glance user's keyring under /etc/ceph.
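If you want to confirm the user and its capabilities, you can read them back (a quick sanity check, assuming the user was created as client.glance to match the ceph_glance_user we set later):

# verify the glance user's key and caps
[root@ceph ~]# ceph auth get client.glance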

2.3 Distributing the Ceph files

First go to the management host and create a glance directory.

The management host is the same machine where openstack was deployed.

[root@openstack01 config]# cd /etc/kolla/config/
[root@openstack01 config]# mkdir glance

Send the ceph.conf and the glance user's keyring to /etc/kolla/config/glance/ on the management host:

[root@ceph ~]# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.glance.keyring 192.168.200.130:/etc/kolla/config/glance/
root@192.168.200.130's password: 
                                                           100%  181   256.9KB/s   00:00    
                                          100%   64    67.6KB/s   00:00 

Watch out for a pitfall: if you open these two files on the management host, remove the indentation or convert it to spaces, otherwise you will get an error later, because the glance upgrade uses Ansible to read the configuration and YAML does not allow tab characters!
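A quick way to spot and fix leftover tabs with standard shell tools (adapt the filenames if yours differ):

# tabs will break the Ansible/YAML parsing mentioned above
[root@openstack01 glance]# grep -nP '\t' ceph.conf ceph.client.glance.keyring         # list any lines containing tabs
[root@openstack01 glance]# sed -i 's/\t/    /g' ceph.conf ceph.client.glance.keyring  # replace tabs with four spaces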

[root@openstack01 glance]# cat ceph.conf
# minimal ceph.conf for 601f8e36-2faa-11ef-9c62-000c294ff693
[global]
fsid = 601f8e36-2faa-11ef-9c62-000c294ff693
mon_host = [v2:192.168.200.100:3300/0,v1:192.168.200.100:6789/0]

[root@openstack01 glance]# cat ceph.client.glance.keyring
[client.glance]
key = AQD+d5JmAtybHBAARluqjWc6/W4xYoWPC4VHXA==

In the end, the two files should look like the output above.

2.4 Modifying the globals file

[root@openstack01 kolla]# vim globals.yml
ceph_glance_user: "glance"
ceph_glance_keyring: "client.{{ ceph_glance_user }}.keyring"
ceph_glance_pool_name: "images"
glance_backend_ceph: "yes"
glance_backend_file: "no"

Note that the keyring value here must not be prefixed with "ceph."; Ansible adds that prefix automatically when it runs, so writing it yourself would cause an error.

With globals.yml changed like this we are done with configuration, so next we run the glance upgrade.

2.5 Updating the glance configuration

[root@openstack01 kolla]# source /root/.venv/kolla/bin/activate
(kolla) [root@openstack01 ~]# kolla-ansible -i multinode -t glance upgrade 
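As a side note, since this is purely a configuration change, kolla-ansible's reconfigure action scoped to glance should achieve the same result; this is an alternative, not what was run above:

# alternative: regenerate and apply only the glance configuration
(kolla) [root@openstack01 ~]# kolla-ansible -i multinode reconfigure --tags glance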

Wait for the Ansible run to finish; once it is done, we will upload another image.
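After the run, the settings from globals.yml are rendered into Glance's backend configuration inside the glance_api container. Below is a quick way to peek at the result, plus a rough sketch of the classic single-store options involved (exact option names and layout depend on the OpenStack release; newer releases use the multi-store enabled_backends syntax instead):

# peek at what kolla rendered (standard glance-api config path inside the container assumed)
(kolla) [root@openstack01 ~]# docker exec glance_api grep -i rbd /etc/glance/glance-api.conf

# with the classic single-store syntax, the relevant options look roughly like:
# [glance_store]
# stores = rbd
# default_store = rbd
# rbd_store_pool = images
# rbd_store_user = glance
# rbd_store_ceph_conf = /etc/ceph/ceph.conf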

3. Uploading an image to Ceph

Now that our Glance configuration has been updated automatically, let's upload another image and see whether it ends up in the Ceph cluster.

(kolla) [root@openstack01 ~]# openstack image create --disk-format qcow2 --container-format bare --public --file ./cirros-0.4.0-x86_64 ceph_test_image
(kolla) [root@openstack01 ~]# openstack image list
+--------------------------------------+-----------------+--------+
| ID                                   | Name            | Status |
+--------------------------------------+-----------------+--------+
| cfe7ca03-896d-4020-90e8-bc45e71068aa | ceph_test_image | active |
| c5d3998d-51a7-4732-9cd9-fb34ff2d3e94 | cirros          | active |
| 1add255d-d797-4c5a-8e74-f902ca3c45b6 | test_image      | active |
+--------------------------------------+-----------------+--------+

We can also go back into the container to check:

(glance-api)[glance@openstack01 /var/lib/glance/images]$ ls
1add255d-d797-4c5a-8e74-f902ca3c45b6  c5d3998d-51a7-4732-9cd9-fb34ff2d3e94

As you can see, it still lists only the original two files, which means only those two are stored locally; the third image went to Ceph. Let's go back to the Ceph cluster and check.

[root@ceph ~]# rbd ls -p images
cfe7ca03-896d-4020-90e8-bc45e71068aa

It returns exactly the ID of our new image, which confirms that the image is indeed stored in the Ceph cluster.
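For a closer look, you can inspect the backing RBD image as well; the Glance RBD driver stores each image as an RBD image with a protected snapshot named "snap", which is later used for copy-on-write clones:

# inspect the RBD image backing the Glance image
[root@ceph ~]# rbd info images/cfe7ca03-896d-4020-90e8-bc45e71068aa
[root@ceph ~]# rbd snap ls images/cfe7ca03-896d-4020-90e8-bc45e71068aa    # should list the protected "snap" snapshot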