
pools have too many placement groups



The warning is caused by the pg_num of these pools being set too high. There are two ways to fix it: 1) adjust the pg_num of the three pools (a sketch of this follows below), or 2) follow the procedure further down and disable the pg_autoscaler module.
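
For option 1, a minimal sketch (not run on this cluster): the pool names and the recommended pg_num of 32 come from the ceph health detail output below, and shrinking pg_num in place assumes a Nautilus-or-later release that supports PG merging (the presence of the autoscaler suggests this is the case). On such a release pgp_num is expected to follow automatically; if it does not, set it explicitly with ceph osd pool set <pool> pgp_num 32.

# reduce each over-provisioned pool to the recommended 32 PGs
ceph osd pool set volumes pg_num 32
ceph osd pool set images pg_num 32
ceph osd pool set vms pg_num 32
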

[root@controller ~]# ceph -s
  cluster:
    id:     8ad5bacc-b1d6-4954-adb4-8fd0bb9eab35
    health: HEALTH_WARN
            3 pools have too many placement groups

  services:
    mon: 3 daemons, quorum controller,compute01,compute02 (age 22m)
    mgr: compute02(active, since 9m), standbys: compute01, controller
    mds: cephfs:1 {0=compute01=up:active} 2 up:standby
    osd: 9 osds: 9 up (since 56m), 9 in (since 4d)
    rgw: 3 daemons active (compute01.rgw0, compute02.rgw0, controller.rgw0)

  task status:

  data:
    pools:   9 pools, 528 pgs
    objects: 249 objects, 12 MiB
    usage:   159 GiB used, 441 GiB / 600 GiB avail
    pgs:     528 active+clean

[root@controller ~]# ceph health detail
HEALTH_WARN 3 pools have too many placement groups
POOL_TOO_MANY_PGS 3 pools have too many placement groups
    Pool volumes has 128 placement groups, should have 32
    Pool images has 128 placement groups, should have 32
    Pool vms has 128 placement groups, should have 32
[root@controller ~]# ceph osd pool autoscale-status
POOL                  SIZE TARGET SIZE RATE RAW CAPACITY  RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE 
cephfs_metadata      4282               3.0       599.9G 0.0000                               4.0      8            off       
default.rgw.meta        0               3.0       599.9G 0.0000                               1.0     32            warn      
cephfs_data             0               3.0       599.9G 0.0000                               1.0      8         32 off       
default.rgw.control     0               3.0       599.9G 0.0000                               1.0     32            warn      
.rgw.root            1245               3.0       599.9G 0.0000                               1.0     32            warn      
volumes                 0               3.0       599.9G 0.0000                               1.0    128         32 warn      
images              12418k              3.0       599.9G 0.0001                               1.0    128         32 warn      
vms                     0               3.0       599.9G 0.0000                               1.0    128         32 warn      
default.rgw.log         0               3.0       599.9G 0.0000                               1.0     32            warn  
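
If disabling the autoscaler cluster-wide feels too heavy-handed, a finer-grained alternative (a sketch, not run here) is to change the autoscale mode of just the three affected pools; pg_autoscale_mode accepts on, warn, or off, and setting it to off stops the autoscaler from warning about (or resizing) those pools:

# opt the over-provisioned pools out of the autoscaler individually
ceph osd pool set volumes pg_autoscale_mode off
ceph osd pool set images pg_autoscale_mode off
ceph osd pool set vms pg_autoscale_mode off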

Either disable the mgr pg_autoscaler module or adjust the pg_num and pgp_num of the pools; here the module is disabled:

[root@controller ~]# ceph mgr module disable pg_autoscaler
[root@controller ~]# ceph osd pool autoscale-status
Error ENOTSUP: Module 'pg_autoscaler' is not enabled (required by command 'osd pool autoscale-status'): use `ceph mgr module enable pg_autoscaler` to enable it
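
The error above is expected: osd pool autoscale-status is provided by the module that was just disabled. As the message itself points out, the autoscaler can be turned back on later if needed:

ceph mgr module enable pg_autoscaler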

Check the Ceph cluster status again:

[root@controller ~]# ceph health detail
HEALTH_OK
[root@controller ~]#
[root@controller ~]# ceph -s
  cluster:
    id:     8ad5bacc-b1d6-4954-adb4-8fd0bb9eab35
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum controller,compute01,compute02 (age 22m)
    mgr: compute02(active, since 16s), standbys: compute01, controller
    mds: cephfs:1 {0=compute01=up:active} 2 up:standby
    osd: 9 osds: 9 up (since 57m), 9 in (since 4d)
    rgw: 3 daemons active (compute01.rgw0, compute02.rgw0, controller.rgw0)

  task status:

  data:
    pools:   9 pools, 528 pgs
    objects: 249 objects, 12 MiB
    usage:   159 GiB used, 441 GiB / 600 GiB avail
    pgs:     528 active+clean
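
Note that disabling the module only clears the warning; the data section still reports 528 PGs, so volumes, images and vms keep their 128 PGs each. To actually shrink them, use the pg_num adjustment shown near the top of this post. The current value of any pool can be confirmed with a standard query, for example:

ceph osd pool get volumes pg_num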