If you have a dual-boot system that uses Linux software RAID and/or LVM, you'll need support for them in coLinux in order to get your system up and running. This page goes over what's necessary for each.
Note: This is based on my experiences with a native Fedora Core 2 installation and a coLinux snapshot running a 2.6.7 kernel. YMMV. Also, this isn't an easy process; if you aren't fairly familiar with the Linux command line, you may have some trouble.
Kernel Support
For either, you'll need to build support into the kernel. To do this, you can either boot into native Linux or use a regular coLinux filesystem image. Get the coLinux source and the matching Linux kernel source. Patch the kernel with the `patch/linux` file from the coLinux source, then copy the `conf/linux-config` file into the base Linux source directory as `.config`. Now run `make menuconfig` and adjust the settings under "Device Drivers" => "Multi-device support (RAID and LVM)" (the location will be different on 2.4) as desired. Do a `make vmlinux`, then copy the resulting `vmlinux` file over to Windows; I use WinSCP from the Windows end for this. I named the file `vmlinux.custom` on Windows so I could keep the original `vmlinux` in case of problems. Adjust your XML file to point at the new kernel, and you should now have the RAID and/or LVM support you need. If you need more info on compiling a kernel, the Raid solution page has a bit more detail.
RAID Specific
For RAID, you first need to assign a `cobd` device to each RAID member partition in your system that you want to access from coLinux. You can use "diskpart" (a Windows utility) and WinObj to find out which device names to use. Then you need to adjust your Linux configuration; most RAID setups are normally autodetected at boot time, but that doesn't work (yet) in coLinux, so you'll have to set things up to be initialized manually. I use the `mdadm` tool for this; it's probably the easiest to work with. First, you need to know which partitions are in which RAID devices. It will probably be set up something like this:
/dev/md0: /dev/hda2 /dev/hdc2 /dev/hde2
/dev/md1: /dev/hda3 /dev/hdc3 /dev/hde3
That is, the *2 partitions are grouped into one array and the *3 partitions into another. Now, let's say you assign `cobd1` through `cobd6` to those partitions, in that order: `cobd1`-`cobd3` are in `md0` and `cobd4`-`cobd6` are in `md1`.
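Continuing the example, you'd map those six partitions to `cobd1` through `cobd6` in your coLinux XML configuration with `block_device` entries something like these (the `\Device\...` paths here are purely illustrative; diskpart and WinObj will tell you the real names on your system):

```xml
<!-- Illustrative only: actual Harddisk/Partition numbers come from WinObj -->
<block_device index="1" path="\Device\Harddisk0\Partition2" enabled="true" />
<block_device index="2" path="\Device\Harddisk1\Partition2" enabled="true" />
<block_device index="3" path="\Device\Harddisk2\Partition2" enabled="true" />
<block_device index="4" path="\Device\Harddisk0\Partition3" enabled="true" />
<block_device index="5" path="\Device\Harddisk1\Partition3" enabled="true" />
<block_device index="6" path="\Device\Harddisk2\Partition3" enabled="true" />
```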
The Easy Way
I just realized there's a much easier way to do this than the `initrd` method discussed below: simply add `md=` parameters to your `bootparams` block. Based on the above example, the `bootparams` line ends up looking something like this:
<bootparams>root=/dev/md0 ro md=0,/dev/cobd1,/dev/cobd2,/dev/cobd3 md=1,/dev/cobd4,/dev/cobd5,/dev/cobd6</bootparams>
The Hard Way
If your root partition is in a RAID, you'll need to set up an `initrd` to build the arrays before your system boots. You'll want to be `root` in Linux again to set this up. First, make a 4MB filesystem image, like so:
dd if=/dev/zero of=initrd.img bs=1M count=4
mke2fs initrd.img
`mke2fs` will give you this message; just say yes:
initrd.img is not a block special device. Proceed anyway? (y, n)
Now, make a directory and mount your shiny new initrd:
mkdir img
mount -o loop initrd.img img
Now, we need several things on this disk. The strategy I'm going to go over here takes advantage of a tool that Fedora uses for its initrds, namely `nash`; it's a special-purpose shell built just for this. If you don't have it, you'll have to get it or work out your own way of doing things; I can't help much there. You'll also need `mdadm` and the libraries it requires; I'm going to list the ones I needed. Use `ldd /sbin/mdadm` to work out your system's equivalents. Now...
mkdir img/proc
mkdir img/sysroot
mkdir img/bin
cp /sbin/nash img/bin
cp /sbin/mdadm img/bin
mkdir img/lib
cp /lib/tls/libc-2.3.2.so img/lib
cp -a /lib/tls/libc.so.6 img/lib
cp /lib/ld-linux.so.2 img/lib
Now, our initrd needs a few device files.
mkdir img/dev
cp -a /dev/null /dev/zero /dev/console /dev/tty[1-4] /dev/ram /dev/systty /dev/md[0-9] img/dev
for i in `seq 0 31`; do mknod img/dev/cobd$i b 117 $i; done
Now, we need to create an `img/linuxrc` script to actually set things up. It should probably look about like this:
#!/bin/nash
echo Mounting /proc filesystem
mount -t proc /proc /proc
echo Creating block devices
mkdevices /dev
echo Creating root device
mkrootdev /dev/root
echo Setting up RAID devices
/sbin/mdadm -A /dev/md0 /dev/cobd0 /dev/cobd1 /dev/cobd2
/sbin/mdadm -A /dev/md1 /dev/cobd3 /dev/cobd4 /dev/cobd5
echo 0x0100 > /proc/sys/kernel/real-root-dev
echo Mounting root filesystem
mount -o defaults --ro -t ext3 /dev/root /sysroot
pivot_root /sysroot /sysroot/initrd
umount /initrd/proc
If you look closely you'll see how the two `mdadm` lines correspond to the example I gave earlier; you'll need to adjust them to fit your situation. Make sure the script is executable by running `chmod +x img/linuxrc`. Now you have an initrd! Unmount the image and compress it:
umount img
gzip initrd.img
Now copy the `initrd.img.gz` file over to your coLinux directory, and add a line <initrd path="initrd.img.gz" /> to your configuration file. While you're at it, make sure the `image` tag is pointing at your custom `vmlinux` version. If all goes well, you should be up and running.
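Putting it together, the relevant parts of the configuration file end up looking something like this (a sketch using the file names from above):

```xml
<image path="vmlinux.custom" />
<initrd path="initrd.img.gz" />
<bootparams>root=/dev/md0 ro</bootparams>
```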
LVM Specific
To use LVM, first you have to adjust your `/etc/lvm/lvm.conf` file inside the guest system to allow the LVM toolset to recognize `cobd` devices as possible LVM PV containers. To do this, open the file and add the following line somewhere in the `devices` section:
types = [ "cobd", 32 ]
If there is already a `types` line, just add the `cobd` part to the list, like this:
types = [ "cobd", 32, "fd", 16 ]
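For context, here's a sketch of how that line sits inside the `devices` section of `/etc/lvm/lvm.conf` (all other settings omitted; the `32` is the maximum number of partitions per device of that type):

```
devices {
    # Let the LVM tools scan coLinux block devices for PVs
    types = [ "cobd", 32 ]
}
```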
Also, in my experience Windows will not allow you to directly access partitions with the LVM partition type code `0x8E`. This should only affect actual LVM-only partitions; LVM-on-RAID should be okay.
To deal with this issue, I had to change the partition type code before booting Windows. I'm not sure whether you can just change it permanently without Linux objecting. However, if you use GRUB, you can configure it to switch the partition type back and forth depending on the OS you choose to load. Just add a line like the following to the list for your Windows boot option for each LVM partition (it doesn't matter where you put the line):
parttype (hd0,4) 0x83
This changes the `(hd0,4)` partition's type code to `0x83`, the plain-Linux code normally used for `ext2`/`ext3` partitions. Then, add matching lines to (all of) your Linux option list(s), except with `0x8E` instead of `0x83`, to change the partitions back.
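As a sketch, a GRUB menu.lst using this trick might look like the following (the titles, disk/partition numbers, and kernel paths are all illustrative; only the `parttype` lines are the point here):

```
# Illustrative menu.lst fragment; adjust devices and paths for your system
title Windows
    # Make the LVM partition look like a plain Linux partition to Windows
    parttype (hd0,4) 0x83
    rootnoverify (hd0,0)
    chainloader +1

title Fedora Core
    # Switch it back to LVM before booting Linux
    parttype (hd0,4) 0x8E
    root (hd0,0)
    kernel /vmlinuz-2.6.7 ro root=/dev/VolGroup00/LogVol00
    initrd /initrd-2.6.7.img
```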
Once you've taken care of these two steps and set up your coLinux XML file to allow direct access to all the LVM partitions (doesn't matter which indices you use), your LVM setup should "just work" the same way it does in Linux. Note that I don't use LVM-on-boot so I can't comment on whether this breaks in that case and you have to do something more.
MassTranslated on 25 Dec 2004.
Since default installs of Fedora use LVM partitions these days, which is handy for later expansion and such, and since coLinux is terribly useful for accessing Linux partitions while running Windows, the default kernel distributed with coLinux really should have the Device Mapper/RAID options turned on, for greater utility.
Rebuilding a kernel is a bit fussy under coLinux; it seemed not to like kernels built with the version of GCC that came with Fedora Core 3, but wanted the previous version. I ended up using a rather bland Debian install to build my modified kernel for use with FC3.
If anyone would like a copy of the 2.6.10 kernel, with LVM/Raid enabled, you can contact me at dale at gass.ca. If there's sufficient interest, I'll put it on an FTP/web site somewhere.
Accessing Linux partitions from Windows is truly wonderful. (And with Samba, one can make them accessible from Windows itself, through coLinux.) I think coLinux is the best-kept secret in the Linux world, and it will have a bright future. Great work, everyone...
Possible Bugs (not for Native RAID)
To reproduce, create four to seven empty files of about 10 MB each and build them into an array, then:

mount /dev/md0 /mnt/md0
dd if=/dev/zero of=/mnt/md0/empty bs=3M count=11

Tested with RAID levels 0, linear, and 5.
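The recipe above can be sketched like this (a hypothetical run-through; the file names, the count of four, and the root-only steps are illustrative):

```shell
# 1. Create four ~10 MB empty backing files (names are illustrative)
for i in 0 1 2 3; do
  dd if=/dev/zero of=raidtest$i.img bs=1M count=10 2>/dev/null
done

# 2. As root, you would then attach them as loop devices, build a test
#    array on them, and overfill it, e.g.:
#      losetup /dev/loop0 raidtest0.img    (and likewise for loop1-3)
#      mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/loop[0-3]
#      mount /dev/md0 /mnt/md0
#      dd if=/dev/zero of=/mnt/md0/empty bs=3M count=11

ls -l raidtest[0-3].img
```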
MassTranslated on Sun Apr 23 17:35:49 UTC 2006