Handling ceph unfound objects

ceph Vol 45 Issue 1

1. unfound objects blocking cluster, need help!

Hi,

I have a production cluster on which 1 OSD on a failing disk was slowing the whole cluster down. I removed the OSD (osd.87) as usual in such cases, but this time it resulted in 17 unfound objects. I no longer have the files from osd.87. I was able to call "ceph pg PGID mark_unfound_lost delete" on 10 of those objects.

On the remaining 7 objects the command blocks. When I try to do "ceph pg PGID query" on this PG it also blocks. I suspect this is the same reason why mark_unfound blocks.

Other client IO to PGs that have unfound objects is also blocked. When trying to query the OSDs which have the PG with unfound objects, "ceph tell" blocks.

I tried to mark the PG as complete using ceph-objectstore-tool but it did not help as the PG is in fact complete but for some reason blocks.

I tried recreating an empty osd.87 and importing the PG exported from other replica but it did not help.

Can someone help me please? This is really important.

In this thread, a disk failed in the author's cluster (ceph 0.94.5) and some objects were lost. After some handling the cluster status looked normal again, but new requests were still blocked, and the question is how to get the cluster serving IO again. The author posted pg dump, ceph -s, and ceph osd dump output; when something goes wrong and you need help from others, providing this information makes it much easier for them to locate the problem. In the end the author found his own solution. While the problem lasted, traffic was only about 10% of normal, so the impact was considerable.

Reproducing the problem

[root@lab8106 ceph]# rados -p rbd put testremove testremove
[root@lab8106 ceph]# ceph osd map rbd testremove
osdmap e85 pool 'rbd' (0) object 'testremove' -> pg 0.eaf226a7 (0.27) -> up ([1,0], p1) acting

Write an object, find which PG it maps to, then delete the object directly from the OSD backend.
Then stop one of the OSDs; here we stop the primary.
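The "delete the object from the backend" step can be sketched as a small helper, assuming the hammer (0.94.x) filestore directory layout `/var/lib/ceph/osd/ceph-*/current/<pgid>_head/`; the helper name and its parameters are illustrative, not ceph tooling.

```shell
# Sketch: delete the on-disk files backing one object in one PG on every
# OSD under $base, simulating data loss. Assumes the filestore layout.
delete_object_files() {
  base=$1; pgid=$2; obj=$3
  for osd in "$base"/ceph-*; do
    # Object files start with the object name, e.g. testremove__head_...;
    # match by prefix and delete.
    find "$osd/current/${pgid}_head" -name "${obj}*" -delete 2>/dev/null
  done
}

# For the scenario above:
#   delete_object_files /var/lib/ceph/osd 0.27 testremove
```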

systemctl stop ceph-osd@1
ceph osd out 1

Checking the status, the PG is stuck in active+degraded, recovery cannot complete, and the cluster detects that some data is unfound.
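To see the unfound detection, `ceph health detail` and `ceph pg 0.27 list_missing` are the usual commands. Below is a hedged helper that pulls the per-PG unfound count out of the health output; the exact line shape is an assumption based on hammer-era output and may differ between releases.

```shell
# Sketch: extract the unfound count for one PG from `ceph health detail`.
# Assumed line shape:
#   pg 0.27 is active+degraded, acting [0], 1 unfound
unfound_count() {
  awk -v pg="pg $1 " 'index($0, pg) && /unfound/ {
    for (i = 1; i < NF; i++)
      if ($(i + 1) == "unfound") print $i
  }'
}

# On a live cluster:
#   ceph health detail | unfound_count 0.27
# `ceph pg 0.27 list_missing` then lists the unfound objects themselves.
```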

Then issue a get request for the object:

[root@lab8106 ceph]# rados -p rbd get testremove testfile

The rados request on the client side hangs, and the cluster reports requests are blocked.
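The blocked requests can be confirmed from the cluster side as well. A hedged sketch; the health-warning text shape is an assumption based on hammer-era output, and `dump_ops_in_flight` is the admin-socket command for inspecting in-flight ops on an OSD.

```shell
# Sketch: surface the "requests are blocked" warning from health output.
# Assumed warning shape: "100 requests are blocked > 32 sec"
blocked_summary() {
  grep -o '[0-9][0-9]* requests are blocked[^;]*'
}

# On a live cluster:
#   ceph health detail | blocked_summary
#   ceph daemon osd.0 dump_ops_in_flight   # ops stuck on the primary
```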

Let's see how to handle it:

ceph pg 0.27 mark_unfound_lost delete

In the mailing-list author's environment this command could not be executed either; it simply hung. He later discovered there is an execution window: while the OSD hosting the PG of this object is starting up, it can still accept commands, so running the command inside this window resolves the problem.
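The "execution window" trick can be automated: restart the OSD and keep retrying the command until one attempt lands inside the window. This is a sketch; the retry helper and the 10-second per-attempt timeout are my own choices, and it assumes a systemd-managed OSD.

```shell
# Sketch: retry a command until it gets through the OSD's startup window.
retry_until_ok() {
  max=$1; shift
  i=0
  while [ "$i" -lt "$max" ]; do
    if timeout 10 "$@"; then
      return 0    # the command got through the window
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Against the cluster in this article (PG 0.27 served by osd.1):
#   systemctl restart ceph-osd@1 &
#   retry_until_ok 60 ceph pg 0.27 mark_unfound_lost delete
```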

Once it has run, requests go through again:

[root@lab8106 ceph]# rados  -p rbd  get testremove  a
error getting rbd/testremove: (2) No such file or directory

Querying the cluster status at this point shows that the cluster has recovered normally; the loss of a single object no longer leaves the PG stuck waiting for recovery.

You can see the request fails, but it no longer hangs as before; hanging is a worse state than failing.

If you do not want to keep seeing the old slow request warnings, restart the OSD hosting the stuck PG; once everything is genuinely back to normal, the warning state disappears.
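To find which OSD to restart, the acting primary of the PG can be parsed out of `ceph pg map`. A hedged sketch; the plain-text output shape ("osdmap e85 pg 0.27 (0.27) -> up [1,0] acting [1,0]") is an assumption modeled on the `ceph osd map` output shown earlier.

```shell
# Sketch: pull the acting primary (first OSD in the acting set) out of
# `ceph pg map` text output, then restart that OSD.
acting_primary() {
  # assumed input shape: "osdmap e85 pg 0.27 (0.27) -> up [1,0] acting [1,0]"
  sed -n 's/.*acting \[\([0-9][0-9]*\).*/\1/p'
}

# On a live cluster:
#   systemctl restart "ceph-osd@$(ceph pg map 0.27 | acting_primary)"
```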

This is a state that requires manual intervention, and what we simulated here is exactly the object-loss scenario. When can objects be lost? Typically when the underlying disk fails: an object has just been written and recorded as present, and right as the replica is about to be written, the disk dies. That sequence has a fairly high probability of producing unfound objects, which is why a failing disk should be replaced as early as possible.