Just a "quick" couple of questions about a raid1 setup on a clean desktop install of xubuntu 16.04.
A. Background
Disks /dev/sdb and /dev/sdc were part of a RAID + LVM configuration in the immediately previous (12.04) install.
The current raid1 array is being used for data only.
The current system resides on a separate disk /dev/sda and is not part of this raid configuration.
I created the raid array and added the ext4 file system to /dev/md0:
Code:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0
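After creating the array, the initial resync can be watched in /proc/mdstat. As a minimal sketch (working on a sample line here, since /proc/mdstat only exists on a machine with md arrays), the array state is the third field of the md0 line:

```shell
# Hypothetical excerpt of a /proc/mdstat line (sample text for illustration;
# on a real system use: cat /proc/mdstat)
sample='md0 : active raid1 sdc[1] sdb[0]'

# Extract the state field (the word after the colon)
state=$(echo "$sample" | awk '{print $3}')
echo "$state"   # prints: active
```

On a live system you would simply `cat /proc/mdstat` (or `watch cat /proc/mdstat`) until the resync line disappears.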
The corresponding /etc/fstab entry:
Code:
# /dev/md0
UUID=x-x-x-x-x  /home/server/shared  ext4  defaults  0  0
I then appended the array definition to /etc/mdadm/mdadm.conf (piping through tee so the append itself runs with root privileges; a plain `sudo … >>` redirection is performed by the unprivileged shell and fails):
Code:
sudo mdadm --detail --scan /dev/md0 | sudo tee -a /etc/mdadm/mdadm.conf
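For reference, `mdadm --detail --scan` emits a one-line ARRAY definition. A quick sanity check on the appended line can be sketched like this (using a sample line, since the real UUID is redacted above):

```shell
# Sample ARRAY line as mdadm --detail --scan would print it
# (the UUID value is a placeholder, matching the redaction in this post)
line='ARRAY /dev/md0 metadata=1.2 name=server:0 UUID=a:a:a:a'

# The line must start with ARRAY and carry a UUID= field
case "$line" in
  ARRAY\ *UUID=*) echo "looks valid" ;;
  *)              echo "malformed"   ;;
esac
```

On Ubuntu it is also customary to run `sudo update-initramfs -u` after editing mdadm.conf so the array is assembled correctly at boot.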
sudo mdadm --detail /dev/md0 reports a clean, in-sync array:
Code:
/dev/md0:
        Version : 1.2
  Creation Time : Sun Aug 21 17:50:26 2016
     Raid Level : raid1
     Array Size : 976631488 (931.39 GiB 1000.07 GB)
  Used Dev Size : 976631488 (931.39 GiB 1000.07 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Aug 22 09:38:30 2016
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : server:0  (local to host server)
           UUID : a:a:a:a
         Events : 2576

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
sudo mdadm --examine /dev/sdb:
Code:
/dev/sdb:
          Magic : same as sdc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : a:a:a:a
           Name : server:0  (local to host server)
  Creation Time : Sun Aug 21 17:50:26 2016
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
     Array Size : 976631488 (931.39 GiB 1000.07 GB)
  Used Dev Size : 1953262976 (931.39 GiB 1000.07 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=48 sectors
          State : clean
    Device UUID : b:b:b:b

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Aug 22 09:38:30 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : b - correct
         Events : 2576

    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
sudo mdadm --examine /dev/sdc:
Code:
/dev/sdc:
          Magic : same as sdb
        Version : 1.2
    Feature Map : 0x1
     Array UUID : a:a:a:a
           Name : server:0  (local to host server)
  Creation Time : Sun Aug 21 17:50:26 2016
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
     Array Size : 976631488 (931.39 GiB 1000.07 GB)
  Used Dev Size : 1953262976 (931.39 GiB 1000.07 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=48 sectors
          State : clean
    Device UUID : c:c:c:c

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Aug 22 09:38:30 2016
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : c - correct
         Events : 2576

    Device Role : Active device 1
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
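As an aside, the --examine figures above are internally consistent: Used Dev Size is reported in 512-byte sectors, so it should be exactly twice the Array Size (reported in 1 KiB blocks), and Avail Dev Size minus the unused space after the data should equal Used Dev Size. A quick sketch of that arithmetic, using the numbers from the output:

```shell
avail=1953263024     # Avail Dev Size, in 512-byte sectors
after=48             # Unused Space after the data, in sectors
used=1953262976      # Used Dev Size, in sectors
array_kib=976631488  # Array Size, in 1 KiB blocks

[ $((avail - after)) -eq "$used" ] && echo "device sizes agree"
[ $((used / 2)) -eq "$array_kib" ] && echo "array size agrees"
```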
1. Running fsck.ext4 on the member devices /dev/sdb and /dev/sdc produces the identical output below for both (sdb shown). A "bad block" reference appears again, as it did in the --examine output. Are these bad signs? Should this be repairable?
Code:
e2fsck 1.42.13 (17-May-2015)
ext2fs_open2: Bad magic number in super-block
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/sdb

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>
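One detail from the --examine output seems relevant here: with metadata version 1.2 the file system does not start at the beginning of each member disk but at the Data Offset (262144 sectors), so fsck on the raw /dev/sdb is looking for an ext4 superblock 128 MiB before the place where the file system actually begins. A sketch of that arithmetic (assuming 512-byte sectors, as mdadm reports):

```shell
data_offset_sectors=262144   # Data Offset from mdadm --examine
sector_bytes=512             # assumed sector size used by mdadm

# The ext4 file system begins this many bytes into each member disk
offset_bytes=$((data_offset_sectors * sector_bytes))
echo "$((offset_bytes / 1024 / 1024)) MiB"   # prints: 128 MiB
```

The usual advice is therefore to run the check on the assembled array device (e.g. `sudo fsck.ext4 -n /dev/md0`, with the file system unmounted), not on the members.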
Thanks for any guidance!