I was doing some updating and changes on my unRAID setup, and at some point one of my data drives became unable to mount. I ran a check on the drive (xfs_repair with the -vf flags) and this is what came back:

        - scan filesystem freespace and inode maps...
Metadata corruption detected at xfs_agf block 0x105fc7a89/0x200
flfirst 118 in agf 3 too large (max = 118)
agf 118 freelist blocks bad, skipping freelist scan
block (0,103405884-103405884) multiply claimed by cnt space tree, state - 2
agf_freeblks 6400825, counted 6396917 in ag 0
agi unlinked bucket 41 is 567774825 in ag 3 (inode=7010225769)
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
imap claims a free inode 255680755 is in use, would correct imap and clear inode
imap claims a free inode 255650322 is in use, would correct imap and clear inode
imap claims a free inode 255650321 is in use, would correct imap and clear inode
        - check for inodes claiming duplicate blocks...
Inode allocation btrees are too corrupted, skipping phases 6 and 7
No modify flag set, skipping filesystem flush and exiting.
Phase 3: 11/30 18:01:30  11/30 18:04:19  2 minutes, 49 seconds

Before running the check I had thought about just replacing the drive and rebuilding the data from parity, but I am guessing a parity rebuild would just restore the same corruption.

Chances of data loss are pretty low, but not non-existent. How valuable or hard to recreate is the data? Would it be worth trying to clone or restore the drive to another disk before attempting -L?

Most of the data is backed up offsite (CrashPlan), but not all of it: the backup still had a couple more hours to go when this happened, so at this point I am not entirely sure what hasn't made it. This drive holds things like photos and other personal documents for which this is my only copy. If it were any other drive in the array it would suck, but it would be fine.
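For anyone following along, the usual escalation with xfs_repair looks roughly like the sketch below. This is not unRAID-specific advice: DEV is a hypothetical placeholder device name, and each command is prefixed with echo so the script only prints the plan instead of touching a disk; remove the echo to run a step for real.

```shell
# DEV is a hypothetical placeholder -- substitute the unmountable disk's
# real device node (double-check with lsblk before doing anything).
DEV=/dev/sdX1

# Each command is prefixed with echo so this sketch only prints the plan;
# delete the echo to actually run a step.
echo xfs_repair -n "$DEV"     # read-only check: report damage, modify nothing
echo xfs_repair -v "$DEV"     # attempt the actual repair, verbose output
echo xfs_repair -L -v "$DEV"  # last resort: zero the corrupt metadata log
```

The reason -L is a last resort is that zeroing the log discards any metadata updates that were only in the log, which is exactly why cloning the disk first was suggested.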
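On the cloning idea: a plain dd with conv=noerror,sync can copy a suspect disk to a spare before any destructive repair is attempted. The sketch below demonstrates the invocation on small file-backed stand-ins (source.img and clone.img are made-up names) so it is safe to run anywhere; on real hardware you would point if=/of= at device nodes, hypothetical names like /dev/sdX and /dev/sdY, after triple-checking them with lsblk, since dd will happily overwrite the wrong disk.

```shell
# File-backed stand-ins for the disks; on real hardware these would be
# device nodes like /dev/sdX (suspect drive) and /dev/sdY (spare).
SRC=source.img
DST=clone.img

# Fabricate 1 MiB of "disk" contents to clone.
dd if=/dev/urandom of="$SRC" bs=64K count=16 2>/dev/null

# conv=noerror,sync keeps the copy going past read errors, padding
# unreadable blocks with zeros instead of aborting mid-disk.
dd if="$SRC" of="$DST" bs=64K conv=noerror,sync 2>/dev/null

# Matching checksums confirm a faithful copy.
cksum < "$SRC"
cksum < "$DST"
```

For a disk that is genuinely throwing read errors, GNU ddrescue (roughly `ddrescue /dev/sdX /dev/sdY rescue.map`, again with hypothetical device names) is the better tool, since it retries bad regions and keeps a map of what it recovered; dd is just the lowest common denominator that is installed everywhere.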