
Linux: mdadm – Modify a degraded mirror so that it has only one member/device

August 25, 2013

I recently virtualized a physical server that was running a soft-raid1, managed by mdadm. It had /dev/md1 and /dev/md2 for root and swap, respectively. In our environment, depending on the VM, the storage that backs the disk images is either a SAS RAID5 or a SAN…so I didn’t want to waste space by having two virtual disks just to keep this VM dumb and happy. After the P2V, it of course began complaining at the usual interval that it was missing a disk.

I knew it wasn’t uncommon for this particular “hosting-made-easy-distro” to be installed as a single-disk soft-raid1; its automated installer does this by default when there is only one disk, to make it easier to switch to a two-disk mirror later if the user ever desires. So I figured I could reverse the process and things would be okay again. Be sure you have good backups before attempting something like this, especially on systems that have their own custom mdadm magic as part of the distro. In this instance it worked though…
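On top of backups, it’s cheap to capture the array state before touching anything, so you have something to compare against afterwards. A quick sketch (md device names as on this box; adjust for yours):

# Record the current (degraded) state
cat /proc/mdstat
mdadm --detail /dev/md1
mdadm --detail /dev/md2
# Keep a copy of the existing config for reference
cp /etc/mdadm.conf /root/mdadm.conf.bak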

tldr…

I found this (http://board.issociate.de/thread/505938/How-to-remove-non-existant-device.html) and tried it…

(In quoting this, I fixed what I think was a typo. The page actually says “-r faileded” [which gave an error]):

>> Try `mdadm /dev/md0 -r missing`.

Close. “missing” is only meaningful with --re-add.
You really want “-r failed” or “-r detached”

NeilBrown

…both -r failed and -r detached gave no errors, but they also had no effect. I think both are more appropriate when an already-running host loses a device. In that case, `cat /proc/mdstat` (especially if you haven’t rebooted, but maybe even if you have) not only tells you that a device has failed, it tells you which one(s).
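Concretely, the attempts looked like this (shown against /dev/md1; both exited cleanly but changed nothing):

mdadm /dev/md1 -r failed
mdadm /dev/md1 -r detached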

In my situation, `cat /proc/mdstat` showed only one underlying disk, even though it reported the array itself as degraded and expecting two physical disks, and /etc/mdadm.conf likewise listed two expected devices. So it wasn’t upset/degraded because it was missing any particular device; it was just mad that it had a 2 stored somewhere and that isn’t the same number as 1. These commands got it:

mdadm /dev/md1 --grow --force -n1
mdadm /dev/md2 --grow --force -n1
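Afterwards it’s worth a sanity check, and /etc/mdadm.conf still needs to be told about the new device count. The ARRAY lines differ per system, so treat this as a sketch rather than a recipe:

# Confirm each array now wants exactly one member
cat /proc/mdstat
mdadm --detail /dev/md1 | grep 'Raid Devices'
# Print fresh ARRAY lines, then fold them into /etc/mdadm.conf by hand
mdadm --detail --scan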

Finally, the relevant excerpts from `man mdadm`:

MODES
mdadm has several major modes of operation:

Grow   Grow (or shrink) an array, or otherwise reshape it in some  way.
Currently  supported  growth options include changing the active
size of component devices and  changing  the  number  of  active
devices  in RAID levels 1/4/5/6, as well as adding or removing a
write-intent bitmap.

For create, build, or grow:
-n, --raid-devices=
Specify the number of active devices in the array.   This,  plus
the number of spare devices (see below) must equal the number of
component-devices (including “missing” devices) that are  listed
on the command line for --create.  Setting a value of 1 is prob-
ably a mistake and so requires that --force be specified  first.
A  value  of 1 will then be allowed for linear, multipath, raid0
and raid1.  It is never allowed for raid4 or raid5.
This number can only be changed using --grow  for  RAID1,  RAID5
and  RAID6  arrays,  and only on kernels which provide necessary
support.
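And since the whole point of the distro’s single-disk raid1 is easy reversal, going back to a real mirror later is the same dance in the other direction. A sketch, assuming the new disk arrives as /dev/sdb with a matching partition layout (hypothetical names):

# Add the new partition; it joins as a spare at first
mdadm /dev/md1 --add /dev/sdb1
# Grow back to a two-device mirror; the spare activates and a resync begins
mdadm /dev/md1 --grow -n2
# Watch the rebuild
cat /proc/mdstat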
