Install Ceph

apt install cephadm

Ceph Upgrades

Upgrade to v16.2.6

Start the upgrade

# cephadm shell
Inferring fsid 796b05ba-ce1a-4c4a-af4e-1941fb2e4f76
Inferring config /var/lib/ceph/796b05ba-ce1a-4c4a-af4e-1941fb2e4f76/mon.gchcph001/config
Using recent ceph image quay.io/ceph/ceph@sha256:a2c23b6942f7fbc1e15d8cfacd6655a681fe0e44f288e4a158db22030b8d58e3
root@gchcph001:/# ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.6
Initiating upgrade to quay.io/ceph/ceph:v16.2.6

Verify the upgrade process

# cephadm shell
# ceph -s
  cluster:
    id:     796b05ba-ce1a-4c4a-af4e-1941fb2e4f76
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum gchcph001,gchcph002,gchcph003 (age 40m)
    mgr: gchcph001(active, since 40m), standbys: gchcph002, gchcph003
    osd: 6 osds: 6 up (since 40m), 6 in (since 18M)

  data:
    pools:   17 pools, 545 pgs
    objects: 5.54k objects, 21 GiB
    usage:   68 GiB used, 232 GiB / 300 GiB avail
    pgs:     545 active+clean

  progress:
    Upgrade to quay.io/ceph/ceph:v16.2.6 (0s)
      [............................]

Watch the upgrade progress

# ceph -W cephadm
  cluster:
    id:     796b05ba-ce1a-4c4a-af4e-1941fb2e4f76
    health: HEALTH_WARN
            1 pools have too few placement groups

  services:
    mon: 3 daemons, quorum gchcph001,gchcph002,gchcph003 (age 7m)
    mgr: gchcph001(active, since 7m), standbys: gchcph002, gchcph003
    osd: 6 osds: 6 up (since 60m), 6 in (since 18M)

  data:
    pools:   17 pools, 545 pgs
    objects: 5.54k objects, 21 GiB
    usage:   68 GiB used, 232 GiB / 300 GiB avail
    pgs:     545 active+clean

  progress:
    Upgrade to 16.2.6 (14s)
      [===========.................] (remaining: 20s)
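
The progress bar shown above is also available in machine-readable form: `ceph -s -f json` includes a `progress_events` map whose entries carry a message and a 0..1 completion value. A minimal sketch for extracting it (the payload below is illustrative, not captured from this cluster):

```python
import json

def upgrade_progress(status_json: str) -> list[tuple[str, float]]:
    """Extract (message, fraction-complete) pairs from `ceph -s -f json` output."""
    status = json.loads(status_json)
    events = status.get("progress_events", {})
    return [(ev["message"], ev["progress"]) for ev in events.values()]

# Illustrative payload; real `ceph -s -f json` output contains many more keys.
sample = json.dumps({
    "progress_events": {
        "e7e0c0f2-0000-0000-0000-000000000000": {
            "message": "Upgrade to 16.2.6",
            "progress": 0.39,
        }
    }
})
print(upgrade_progress(sample))  # [('Upgrade to 16.2.6', 0.39)]
```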

Observe the logs

2021-12-22T09:24:18.658441+0000 mgr.gchcph001 [INF] Reconfiguring mon.gchcph003 (monmap changed)...
2021-12-22T09:24:18.665059+0000 mgr.gchcph001 [INF] Reconfiguring daemon mon.gchcph003 on gchcph003
2021-12-22T09:24:19.131576+0000 mgr.gchcph001 [INF] Reconfiguring osd.1 (monmap changed)...
2021-12-22T09:24:19.134358+0000 mgr.gchcph001 [INF] Reconfiguring daemon osd.1 on gchcph003
2021-12-22T09:24:19.534216+0000 mgr.gchcph001 [INF] Upgrade: Setting container_image for all mon
2021-12-22T09:24:19.583832+0000 mgr.gchcph001 [INF] Upgrade: Setting container_image for all crash
2021-12-22T09:24:19.600584+0000 mgr.gchcph001 [INF] Upgrade: osd.7 is safe to restart
2021-12-22T09:24:19.601644+0000 mgr.gchcph001 [INF] Upgrade: osd.2 is also safe to restart
2021-12-22T09:24:19.602718+0000 mgr.gchcph001 [INF] Upgrade: osd.6 is also safe to restart
2021-12-22T09:24:19.603717+0000 mgr.gchcph001 [INF] Upgrade: osd.3 is also safe to restart
2021-12-22T09:24:20.162554+0000 mgr.gchcph001 [INF] Upgrade: Updating osd.7 (1/4)
2021-12-22T09:24:20.180384+0000 mgr.gchcph001 [INF] Deploying daemon osd.7 on gchcph001
2021-12-22T09:24:22.940855+0000 mgr.gchcph001 [INF] Upgrade: Updating osd.2 (2/4)
2021-12-22T09:24:22.958120+0000 mgr.gchcph001 [INF] Deploying daemon osd.2 on gchcph001
2021-12-22T09:24:26.410401+0000 mgr.gchcph001 [INF] Upgrade: Updating osd.6 (3/4)
2021-12-22T09:24:26.431054+0000 mgr.gchcph001 [INF] Deploying daemon osd.6 on gchcph002
2021-12-22T09:24:29.336329+0000 mgr.gchcph001 [INF] Upgrade: Updating osd.3 (4/4)
2021-12-22T09:24:29.358446+0000 mgr.gchcph001 [INF] Deploying daemon osd.3 on gchcph003

Upgrade Ceph Quincy v17.2.x to v17.2.7

# cephadm shell ceph orch upgrade start --ceph-version 17.2.7
Inferring fsid 796b05ba-ce1a-4c4a-af4e-1941fb2e4f76
Inferring config /var/lib/ceph/796b05ba-ce1a-4c4a-af4e-1941fb2e4f76/config/ceph.conf
Using ceph image with id 'cc65afd6173a' and tag 'v17.2.5' created on 2022-10-17 18:41:41 -0500 CDT
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45
Initiating upgrade to quay.io/ceph/ceph:v17.2.7

Verify the upgrade status

# ceph -s
  cluster:
    id:     796b05ba-ce1a-4c4a-af4e-1941fb2e4f76
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum gchcph003,gchcph002 (age 4m)
    mgr: gchcph002.yomcjp(active, since 4m), standbys: gchcph001
    osd: 3 osds: 3 up (since 4m), 3 in (since 21M)

  data:
    pools:   18 pools, 548 pgs
    objects: 5.54k objects, 21 GiB
    usage:   62 GiB used, 28 GiB / 90 GiB avail
    pgs:     548 active+clean

  progress:
    Upgrade to quay.io/ceph/ceph:v17.2.7 (0s)
      [............................]

Check the Ceph version

# ceph version
ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)
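
If the banner needs to be checked from a script, its fixed shape (version, commit hash, codename, stability) is easy to pick apart. A hedged sketch; the regex assumes the banner format shown above:

```python
import re

# Parse a `ceph version` banner into its components.
BANNER = re.compile(
    r"ceph version (?P<version>\S+) \((?P<sha>[0-9a-f]+)\) "
    r"(?P<codename>\w+) \((?P<stability>\w+)\)"
)

def parse_banner(line: str) -> dict:
    m = BANNER.match(line)
    if m is None:
        raise ValueError(f"unrecognized banner: {line!r}")
    return m.groupdict()

info = parse_banner(
    "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)"
)
print(info["version"], info["codename"])  # 17.2.7 quincy
```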

Upgrade Ceph Quincy v17.2.x to Reef v18.2.x

Start the upgrade

root@gchcph001:~# cephadm shell ceph orch upgrade start --ceph-version 18.2.2
Inferring fsid 796b05ba-ce1a-4c4a-af4e-1941fb2e4f76
Inferring config /var/lib/ceph/796b05ba-ce1a-4c4a-af4e-1941fb2e4f76/config/ceph.conf
Using ceph image with id 'ff4519c9e0a2' and tag 'v17.2.7' created on 2024-05-21 11:09:44 -0500 CDT
quay.io/ceph/ceph@sha256:d26c11e20773704382946e34f0d3d2c0b8bb0b7b37d9017faa9dc11a0196c7d9
Initiating upgrade to quay.io/ceph/ceph:v18.2.2

Check the upgrade status

# ceph -s
  cluster:
    id:     796b05ba-ce1a-4c4a-af4e-1941fb2e4f76
    health: HEALTH_WARN
            2 pool(s) do not have an application enabled
 
  services:
    mon: 2 daemons, quorum gchcph003,gchcph002 (age 2m)
    mgr: gchcph002.yomcjp(active, since 103s), standbys: gchcph001
    osd: 3 osds: 3 up (since 115s), 3 in (since 21M)
 
  data:
    pools:   18 pools, 548 pgs
    objects: 5.54k objects, 21 GiB
    usage:   63 GiB used, 27 GiB / 90 GiB avail
    pgs:     548 active+clean
 
  progress:
    Upgrade to 18.2.2 (98s)
      [............................] 

Check the monitor versions

# ceph mon versions
{
    "ceph version 18.2.2 (531c0d11a1c5d39fbfe6aa8a521f023abf3bf3e2) reef (stable)": 2
}

Observe the logs

debug 2024-06-04T06:45:35.284+0000 7f2f17788700  0 [progress INFO root] update: starting ev cd713ec2-ef69-4a05-8255-c0563136886e (Updating mon deployment (-1 -> 1))
debug 2024-06-04T06:45:35.296+0000 7f2f17788700  0 [progress INFO root] complete: finished ev cd713ec2-ef69-4a05-8255-c0563136886e (Updating mon deployment (-1 -> 1))
debug 2024-06-04T06:45:35.300+0000 7f2f17788700  0 [progress INFO root] Completed event cd713ec2-ef69-4a05-8255-c0563136886e (Updating mon deployment (-1 -> 1)) in 0 seconds
debug 2024-06-04T06:45:35.304+0000 7f2f17788700  0 [progress WARNING root] complete: ev 33b05aad-ce3d-4f9a-a67e-82ebb5950abd does not exist
debug 2024-06-04T06:45:35.508+0000 7f2f17788700  0 [cephadm INFO cephadm.upgrade] Upgrade: Setting container_image for all osd
debug 2024-06-04T06:45:35.512+0000 7f2f17788700  0 log_channel(cephadm) log [INF] : Upgrade: Setting container_image for all osd
debug 2024-06-04T06:45:35.760+0000 7f2f17788700  0 [cephadm INFO cephadm.upgrade] Upgrade: Setting require_osd_release to 18 reef
debug 2024-06-04T06:45:35.760+0000 7f2f17788700  0 log_channel(cephadm) log [INF] : Upgrade: Setting require_osd_release to 18 reef
debug 2024-06-04T06:45:35.920+0000 7f2f24fa3700  0 log_channel(cluster) log [DBG] : pgmap v72: 548 pgs: 370 peering, 178 active+clean; 21 GiB data, 62 GiB used, 28 GiB / 90 GiB avail; 308 MiB/s, 1 keys/s, 84 objects/s recovering
debug 2024-06-04T06:45:36.872+0000 7f2f17788700  0 [cephadm INFO cephadm.upgrade] Upgrade: Setting container_image for all mds
debug 2024-06-04T06:45:36.876+0000 7f2f17788700  0 log_channel(cephadm) log [INF] : Upgrade: Setting container_image for all mds
debug 2024-06-04T06:45:36.964+0000 7f2f17788700  0 [cephadm INFO cephadm.upgrade] Upgrade: Setting container_image for all rgw
debug 2024-06-04T06:45:36.964+0000 7f2f17788700  0 log_channel(cephadm) log [INF] : Upgrade: Setting container_image for all rgw
debug 2024-06-04T06:45:37.020+0000 7f2f17788700  0 [cephadm INFO cephadm.upgrade] Upgrade: Setting container_image for all rbd-mirror
debug 2024-06-04T06:45:37.020+0000 7f2f17788700  0 log_channel(cephadm) log [INF] : Upgrade: Setting container_image for all rbd-mirror
debug 2024-06-04T06:45:37.064+0000 7f2f17788700  0 [cephadm INFO cephadm.upgrade] Upgrade: Setting container_image for all ceph-exporter
debug 2024-06-04T06:45:37.068+0000 7f2f17788700  0 log_channel(cephadm) log [INF] : Upgrade: Setting container_image for all ceph-exporter
debug 2024-06-04T06:45:37.136+0000 7f2f17788700  0 [cephadm INFO cephadm.upgrade] Upgrade: Setting container_image for all iscsi
debug 2024-06-04T06:45:37.136+0000 7f2f17788700  0 log_channel(cephadm) log [INF] : Upgrade: Setting container_image for all iscsi
debug 2024-06-04T06:45:37.208+0000 7f2f17788700  0 [cephadm INFO cephadm.upgrade] Upgrade: Setting container_image for all nfs
debug 2024-06-04T06:45:37.212+0000 7f2f17788700  0 log_channel(cephadm) log [INF] : Upgrade: Setting container_image for all nfs
debug 2024-06-04T06:45:37.272+0000 7f2f17788700  0 [cephadm INFO cephadm.upgrade] Upgrade: Setting container_image for all nvmeof
debug 2024-06-04T06:45:37.276+0000 7f2f17788700  0 log_channel(cephadm) log [INF] : Upgrade: Setting container_image for all nvmeof
debug 2024-06-04T06:45:37.580+0000 7f2f17788700  0 [cephadm INFO cephadm.upgrade] Upgrade: Finalizing container_image settings
debug 2024-06-04T06:45:37.580+0000 7f2f17788700  0 log_channel(cephadm) log [INF] : Upgrade: Finalizing container_image settings
debug 2024-06-04T06:45:37.924+0000 7f2f24fa3700  0 log_channel(cluster) log [DBG] : pgmap v74: 548 pgs: 548 active+clean; 21 GiB data, 62 GiB used, 28 GiB / 90 GiB avail; 268 MiB/s, 1 keys/s, 73 objects/s recovering
debug 2024-06-04T06:45:38.284+0000 7f2f17788700  0 [cephadm INFO cephadm.upgrade] Upgrade: Complete!
debug 2024-06-04T06:45:38.284+0000 7f2f17788700  0 log_channel(cephadm) log [INF] : Upgrade: Complete!

Upgrade Ceph Reef v18.2.2 to Reef v18.2.4

Start the upgrade

# cephadm shell ceph orch upgrade start --ceph-version 18.2.4
Inferring fsid 796b05ba-ce1a-4c4a-af4e-1941fb2e4f76
Inferring config /var/lib/ceph/796b05ba-ce1a-4c4a-af4e-1941fb2e4f76/mon.gchcph001/config
Using ceph image with id '3c937764e6f5' and tag 'v18.2.2' created on 2024-05-21 11:16:42 -0500 CDT
quay.io/ceph/ceph@sha256:f8d467dcf49d13b8ea42229d89be642581110175d8ce36e216aefc9b32b0854d
Initiating upgrade to quay.io/ceph/ceph:v18.2.4

Verify the upgrade status

# cephadm shell
Inferring fsid 796b05ba-ce1a-4c4a-af4e-1941fb2e4f76
Inferring config /var/lib/ceph/796b05ba-ce1a-4c4a-af4e-1941fb2e4f76/mon.gchcph001/config
Using ceph image with id '3c937764e6f5' and tag 'v18.2.2' created on 2024-05-21 11:16:42 -0500 CDT
quay.io/ceph/ceph@sha256:f8d467dcf49d13b8ea42229d89be642581110175d8ce36e216aefc9b32b0854d
# ceph -s
  cluster:
    id:     796b05ba-ce1a-4c4a-af4e-1941fb2e4f76
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum gchcph003,gchcph001,gchcph002 (age 11m)
    mgr: gchcph002.yomcjp(active, since 13m), standbys: gchcph001
    osd: 3 osds: 3 up (since 9m), 3 in (since 2y)

  data:
    pools:   18 pools, 548 pgs
    objects: 5.54k objects, 21 GiB
    usage:   62 GiB used, 28 GiB / 90 GiB avail
    pgs:     548 active+clean

Check the monitor versions

# ceph mon versions
{
    "ceph version 18.2.4 (e7ad5345525c7aa95470c26863873b581076945d) reef (stable)": 3
}

Check the OSD versions

# ceph osd versions
{
    "ceph version 18.2.4 (e7ad5345525c7aa95470c26863873b581076945d) reef (stable)": 3
}

Deploy a new monitor

# ceph orch daemon add mon gchcph001:192.168.174.133
Deployed mon.gchcph001 on host 'gchcph001'

Fix the pool application warnings

# ceph health detail
HEALTH_WARN 2 pool(s) do not have an application enabled
[WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled
    application not enabled on pool 'cephfs_data'
    application not enabled on pool 'cephfs_metadata'
    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
# ceph osd pool application enable cephfs_data cephfs
enabled application 'cephfs' on pool 'cephfs_data'
# ceph osd pool application enable cephfs_metadata cephfs
enabled application 'cephfs' on pool 'cephfs_metadata'
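
The same fix can be scripted: `ceph health detail -f json` reports the affected pools under the POOL_APP_NOT_ENABLED check, and the pool names can be pulled out of the detail messages. A sketch, assuming that JSON layout and defaulting every pool to the cephfs application (adjust the application per pool as needed):

```python
import json
import re

def pool_app_fixes(health_json: str, app: str = "cephfs") -> list[str]:
    """Build the enable commands for pools flagged by POOL_APP_NOT_ENABLED."""
    checks = json.loads(health_json).get("checks", {})
    details = checks.get("POOL_APP_NOT_ENABLED", {}).get("detail", [])
    cmds = []
    for entry in details:
        m = re.search(r"application not enabled on pool '([^']+)'", entry["message"])
        if m:
            cmds.append(f"ceph osd pool application enable {m.group(1)} {app}")
    return cmds

# Illustrative payload modelled on the health output above.
sample = json.dumps({
    "checks": {
        "POOL_APP_NOT_ENABLED": {
            "detail": [
                {"message": "application not enabled on pool 'cephfs_data'"},
                {"message": "application not enabled on pool 'cephfs_metadata'"},
            ]
        }
    }
})
for cmd in pool_app_fixes(sample):
    print(cmd)
```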

Upgrade Ceph Reef v18.2.x to Squid v19.2.x

This section explains how to upgrade Ceph from Reef (18.2.x) to Squid (19.2.x) on Debian 12.

See the official upgrade guide for more information.

Change the package repository from reef to squid

sed -i 's/reef/squid/' /etc/apt/sources.list.d/ceph.list

The file /etc/apt/sources.list.d/ceph.list should look like:

deb https://download.ceph.com/debian-squid/ bookworm main

List the container images

root@gchcph002:~# docker images
REPOSITORY          TAG       IMAGE ID       CREATED         SIZE
quay.io/ceph/ceph   <none>    f2efb0401a30   2 weeks ago     1.3GB
quay.io/ceph/ceph   v18       2bc0b0f4375d   6 months ago    1.22GB
quay.io/ceph/ceph   <none>    3c937764e6f5   9 months ago    1.25GB
quay.io/ceph/ceph   <none>    ff4519c9e0a2   9 months ago    1.26GB
quay.io/ceph/ceph   v17       2d4527871605   17 months ago   1.26GB
quay.io/ceph/ceph   <none>    cc65afd6173a   2 years ago     1.36GB
quay.io/ceph/ceph   v17.2.3   0912465dcea5   2 years ago     1.34GB
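
Finding the leftover images can be partly mechanized: the `<none>`-tagged entries in `docker images` output are candidates for removal. A sketch that picks those out; note that the cleanup below also removes some still-tagged old releases by hand, and you should always cross-check every ID against the image the cluster is currently running before deleting anything:

```python
def untagged_image_ids(docker_images_output: str) -> list[str]:
    """Return IMAGE IDs whose TAG column is '<none>' in `docker images` output."""
    ids = []
    for line in docker_images_output.splitlines()[1:]:  # skip the header row
        cols = line.split()
        if len(cols) >= 3 and cols[1] == "<none>":
            ids.append(cols[2])
    return ids

sample = """\
REPOSITORY          TAG       IMAGE ID       CREATED         SIZE
quay.io/ceph/ceph   <none>    f2efb0401a30   2 weeks ago     1.3GB
quay.io/ceph/ceph   v18       2bc0b0f4375d   6 months ago    1.22GB
quay.io/ceph/ceph   <none>    3c937764e6f5   9 months ago    1.25GB
"""
print(untagged_image_ids(sample))  # ['f2efb0401a30', '3c937764e6f5']
```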

Clean up old images

# docker rmi 0912465dcea5 cc65afd6173a 2d4527871605 ff4519c9e0a2 3c937764e6f5
Untagged: quay.io/ceph/ceph:v17.2.3
Untagged: quay.io/ceph/ceph@sha256:43f6e905f3e34abe4adbc9042b9d6f6b625dee8fa8d93c2bae53fa9b61c3df1a
Deleted: sha256:0912465dcea5159f56c7d4ccdf1d15a28abfe1be8a56c94eb86cab129c069726
Deleted: sha256:ae34c6a7be84c611f70402c904078d08f1e91eeca40394801f83909366af25d7
Deleted: sha256:5152144bdffca2f7a561485d5eecf4582165e38498530b36a913b5b18c8b2146
Deleted: sha256:0d1979cb2528a8c8ec9fd195807d270a44039dab327b08cd13fce29d22f89387
Deleted: sha256:774b51d5af0cda1622d34248bfd6224d89593698481113e847a1b5edf5c930ef
Deleted: sha256:5966005eac8d0b52bf676cd20f1ffb3435fe4d8245a3afadcd27b0b9e07c096b
Untagged: quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45
Deleted: sha256:cc65afd6173a093cb160dd85a79881c0e9b51fccdb4315d25021702e7bca1a5a
Deleted: sha256:9f86da69d71c67ae381b67b9e1cac8929a1bab4b8e14b5f1e7d71421dd013ad3
Deleted: sha256:d162e73679d8c73d9cada5eec671ebfd61f9a06037e2456e5976f3df6e396c14
Deleted: sha256:597eb6e681d07e69084bb9116b175ff593a24182d90ffe214e9600f3b0691c28
Deleted: sha256:67c2d7d88a6db9df7552dad20143d2f21339981f8b44948cf6b830503ca0bb11
Deleted: sha256:b38cb92596778e2c18c2bde15f229772fe794af39345dd456c3bf6702cc11eef
Untagged: quay.io/ceph/ceph:v17
Untagged: quay.io/ceph/ceph@sha256:1e442b0018e6dc7445c3afa7c307bc61a06189ebd90580a1bb8b3d0866c0d8ae
Deleted: sha256:2d45278716053f92517e447bc1a7b64945cc4ecbaff4fe57aa0f21632a0b9930
Deleted: sha256:6690900e3fadc0fef7c7c46ff22d1618b81df53faef4fa51d26ccf617bf3d7e9
Deleted: sha256:47cf9d3aac6c218d346565c2393092c326ec06d08f12e81205abaa7b924c1ee8
Untagged: quay.io/ceph/ceph@sha256:d26c11e20773704382946e34f0d3d2c0b8bb0b7b37d9017faa9dc11a0196c7d9
Deleted: sha256:ff4519c9e0a238162d39f92222382d9d75bcad69c95f5f9b0e14890100f0f5cd
Deleted: sha256:0300cf8524765c4d8ba2dc7b68e090cf09f1d5fd2b024a299b32493685cd0ee5
Untagged: quay.io/ceph/ceph@sha256:f8d467dcf49d13b8ea42229d89be642581110175d8ce36e216aefc9b32b0854d
Deleted: sha256:3c937764e6f5de1131b469dc69f0db09f8bd55cf6c983482cde518596d3dd0e5
Deleted: sha256:e75a9997a7b9d9ee2efa661a63d39523666238f74801e707e3f0779f9d68e1c8
Deleted: sha256:603ca7453abb095abab8efd6b96292ec6e0d3dc8c25503ec0ec7dd254a24eea3

Upgrade cephadm

# apt dist-upgrade -u
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  cephadm libgnutls30
2 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 2,468 kB of archives.
After this operation, 746 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://security.debian.org/debian-security bookworm-security/main amd64 libgnutls30 amd64 3.7.9-2+deb12u4 [1,405 kB]
Get:2 https://download.ceph.com/debian-squid bookworm/main amd64 cephadm amd64 19.2.1-1~bpo12+1 [1,062 kB]
Fetched 2,468 kB in 1s (2,313 kB/s)
(Reading database ... 23281 files and directories currently installed.)
Preparing to unpack .../libgnutls30_3.7.9-2+deb12u4_amd64.deb ...
Unpacking libgnutls30:amd64 (3.7.9-2+deb12u4) over (3.7.9-2+deb12u3) ...
Setting up libgnutls30:amd64 (3.7.9-2+deb12u4) ...
(Reading database ... 23281 files and directories currently installed.)
Preparing to unpack .../cephadm_19.2.1-1~bpo12+1_amd64.deb ...
Unpacking cephadm (19.2.1-1~bpo12+1) over (18.2.4-1~bpo12+1) ...
Setting up cephadm (19.2.1-1~bpo12+1) ...
Processing triggers for man-db (2.11.2-2) ...
Processing triggers for libc-bin (2.36-9+deb12u9) ...
Scanning processes...
Scanning linux images...

The running kernel is up to date.

No services need to be restarted.

No containers need to be restarted.

No user sessions are running outdated processes.

Start the upgrade

# cephadm shell
Inferring fsid 796b05ba-ce1a-4c4a-af4e-1941fb2e4f76
Inferring config /var/lib/ceph/796b05ba-ce1a-4c4a-af4e-1941fb2e4f76/mon.gchcph001/config
Using ceph image with id '2bc0b0f4375d' and tag 'v18.2.4' created on 2024-07-23 17:19:35 -0500 CDT
quay.io/ceph/ceph@sha256:6ac7f923aa1d23b43248ce0ddec7e1388855ee3d00813b52c3172b0b23b37906
root@gchcph001:/# ceph orch upgrade start --image quay.io/ceph/ceph:v19.2.1
Initiating upgrade to quay.io/ceph/ceph:v19.2.1
root@gchcph001:/# ceph -s
  cluster:
    id:     796b05ba-ce1a-4c4a-af4e-1941fb2e4f76
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum gchcph003,gchcph001,gchcph002 (age 18m)
    mgr: gchcph002.yomcjp(active, since 18m), standbys: gchcph001
    osd: 3 osds: 3 up (since 18m), 3 in (since 2y)

  data:
    pools:   18 pools, 548 pgs
    objects: 5.54k objects, 21 GiB
    usage:   62 GiB used, 28 GiB / 90 GiB avail
    pgs:     548 active+clean

  progress:
    Upgrade to quay.io/ceph/ceph:v19.2.1 (0s)
      [............................]

Verify the MON versions

# ceph mon versions
{
    "ceph version 19.2.1 (58a7fab8be0a062d730ad7da874972fd3fba59fb) squid (stable)": 3
}

Verify the OSD versions

# ceph osd versions
{
    "ceph version 18.2.4 (e7ad5345525c7aa95470c26863873b581076945d) reef (stable)": 2,
    "ceph version 19.2.1 (58a7fab8be0a062d730ad7da874972fd3fba59fb) squid (stable)": 1
}
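
`ceph versions` and the per-service variants return a JSON object mapping each full version banner to a daemon count: during a rolling upgrade you expect more than one key (as in the OSD output above), afterwards exactly one. A small sketch that flags the mixed state, using shortened illustrative banners:

```python
import json

def is_version_converged(versions_json: str) -> bool:
    """True when all daemons report the same ceph version banner."""
    return len(json.loads(versions_json)) == 1

# Shortened, illustrative banners as dict keys.
mid_upgrade = json.dumps({
    "ceph version 18.2.4 reef (stable)": 2,
    "ceph version 19.2.1 squid (stable)": 1,
})
done = json.dumps({"ceph version 19.2.1 squid (stable)": 3})
print(is_version_converged(mid_upgrade), is_version_converged(done))  # False True
```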

Check whether the upgrade has completed

# ceph orch upgrade status
{
    "target_image": null,
    "in_progress": false,
    "which": "<unknown>",
    "services_complete": [],
    "progress": null,
    "message": "",
    "is_paused": false
}
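
Since `ceph orch upgrade status` emits JSON, completion can also be checked mechanically, e.g. from a polling script. A sketch keyed on the `in_progress` field shown above:

```python
import json

def upgrade_finished(status_json: str) -> bool:
    """True once `ceph orch upgrade status` reports no upgrade in progress."""
    status = json.loads(status_json)
    return not status.get("in_progress", False)

# The JSON printed above once the upgrade completed.
sample = """{
    "target_image": null,
    "in_progress": false,
    "which": "<unknown>",
    "services_complete": [],
    "progress": null,
    "message": "",
    "is_paused": false
}"""
print(upgrade_finished(sample))  # True
```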

The cluster is upgraded but has a warning:

# ceph health
HEALTH_WARN Telemetry requires re-opt-in

This warning requires following the post-upgrade steps from the official release blog.

Enable telemetry (only if you agree to share the data, of course).

# ceph telemetry on --license sharing-1-0
Telemetry is on.

After configuring telemetry, the cluster is healthy again.

# ceph health
HEALTH_OK

List the container images

# docker images
REPOSITORY          TAG       IMAGE ID       CREATED        SIZE
quay.io/ceph/ceph   v19       f2efb0401a30   2 weeks ago    1.3GB
quay.io/ceph/ceph   v19.2.1   f2efb0401a30   2 weeks ago    1.3GB
quay.io/ceph/ceph   v18.2.4   2bc0b0f4375d   6 months ago   1.22GB

Delete the container image that is no longer required

# docker rmi 2bc0b0f4375d
Untagged: quay.io/ceph/ceph:v18.2.4
Untagged: quay.io/ceph/ceph@sha256:6ac7f923aa1d23b43248ce0ddec7e1388855ee3d00813b52c3172b0b23b37906
Deleted: sha256:2bc0b0f4375ddf4270a9a865dfd4e53063acc8e6c3afd7a2546507cafd2ec86a
Deleted: sha256:c4e7561c4789a7f1032bcbc3e09bdf28cbf9b9cd6f3992d929b882025500d2f8
Deleted: sha256:8a2b10aa09810320950712c0f01d4db513fa66747f986683b6f31120f1022ae4