Configuring ASM shared disks on the OVM Linux platform

Compared with the ESXi platform, configuring shared disks under OVM virtualization works a little differently.

For example, we have several shared disks created by OVM that are visible at the OS level; in the listing below these are the xvdb through xvdi devices.

[root@drnode1 ~]# cat /proc/partitions

major minor  #blocks  name

 202        0  629145600 xvda
 202        1    2097152 xvda1
 202        2  290870574 xvda2
 202       16   20971520 xvdb
 202       32   20971520 xvdc
 202       48   20971520 xvdd
 202       64  104857600 xvde
 202       80  104857600 xvdf
 202       96  104857600 xvdg
 202      112  104857600 xvdh
 202      128  104857600 xvdi
 252        0    5242880 dm-0
 252        1    8388608 dm-1
 252        2    1048576 dm-2
 252        3   10485760 dm-3
 252        4    2097152 dm-4
 252        5    1048576 dm-5
 252        6    2097152 dm-6
 252        7    5242880 dm-7
 252        8  157286400 dm-8

[root@drnode1 ~]#

Rather than using fdisk and scsi_id for each disk (Xen virtual block devices do not expose a scsi_id), we use the parted command to partition these shared disks on one of the nodes.

In this example, we partition the xvdb disk and name the partition asmdisk1:

[root@drnode1 ~]# parted /dev/xvdb

GNU Parted 3.1

Using /dev/xvdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mktable gpt

Warning: The existing disk label on /dev/xvdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? y

(parted) mkpart asmdisk1 0% 100%

(parted) print

Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdb: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name      Flags
 1      1049kB  21.5GB  21.5GB               asmdisk1

(parted) quit

[root@drnode1 ~]#

We can repeat the above steps to partition the remaining shared disks, giving each partition a different name; a scripted sketch follows the partition listing below.

After executing the above commands, the new partition appears as follows:

[root@drnode1 ~]# cat /proc/partitions

major minor  #blocks  name

 202        0  629145600 xvda
 202        1    2097152 xvda1
 202        2  290870574 xvda2
 202       16   20971520 xvdb
 202       17   20969472 xvdb1
 202       32   20971520 xvdc
 202       48   20971520 xvdd
 202       64  104857600 xvde
 202       80  104857600 xvdf
 202       96  104857600 xvdg
 202      112  104857600 xvdh
 202      128  104857600 xvdi
 252        0    5242880 dm-0
 252        1    8388608 dm-1
 252        2    1048576 dm-2
 252        3   10485760 dm-3
 252        4    2097152 dm-4
 252        5    1048576 dm-5
 252        6    2097152 dm-6
 252        7    5242880 dm-7
 252        8  157286400 dm-8
[root@drnode1 ~]#
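
As a sketch only, assuming the remaining shared disks are xvdc through xvdi and we want to label them asmdisk2 through asmdisk8, the same partitioning can be scripted non-interactively with parted:

# Sketch: the disk list and label numbering are assumptions; adjust to your layout.
i=2
for d in c d e f g h i; do
    parted -s /dev/xvd${d} mktable gpt mkpart asmdisk${i} 0% 100%
    i=$((i+1))
done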

We can obtain the unique partition UUID with the following command:

[root@drnode1 ~]# udevadm info --query=property /dev/xvdb1

DEVLINKS=/dev/disk/by-partlabel/asmdisk1 /dev/disk/by-partuuid/55da41e2-7757-4708-b2c6-b6f4cc4343b4

DEVNAME=/dev/xvdb1
DEVPATH=/devices/vbd-51728/block/xvdb/xvdb1
DEVTYPE=partition
ID_PART_ENTRY_DISK=202:16
ID_PART_ENTRY_NAME=asmdisk1
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=gpt
ID_PART_ENTRY_SIZE=41938944
ID_PART_ENTRY_TYPE=ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
ID_PART_ENTRY_UUID=55da41e2-7757-4708-b2c6-b6f4cc4343b4
ID_PART_TABLE_TYPE=gpt
MAJOR=202
MINOR=17
PARTN=1
PARTNAME=asmdisk1
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=55181645
[root@drnode1 ~]#
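
To collect the partition UUIDs of all shared disks in one pass, a quick loop such as the following can be used (a sketch; the xvdb1 through xvdi1 partition names are assumed to match the listing above):

# Print each shared partition together with its ID_PART_ENTRY_UUID.
for p in /dev/xvd[b-i]1; do
    echo -n "${p}  "
    udevadm info --query=property "${p}" | grep '^ID_PART_ENTRY_UUID='
done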

Assume the OS user grid, with group oinstall, will own these shared disks. We then create a udev rules file on every node that sees these shared disks:

[root@drnode1 ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules

The file contains the following line. We can add one line per shared disk, each with its own UUID, and keep the symbolic link name the same as the partition name given during the parted step.

KERNEL=="xvd??", ENV{ID_PART_ENTRY_UUID}=="55da41e2-7757-4708-b2c6-b6f4cc4343b4", SYMLINK+="oracleasm/asmdisk1", OWNER="grid", GROUP="oinstall", MODE="0660"
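
After saving the rules file on each node, the rules can be reloaded and the resulting links verified without a reboot. A minimal sketch:

# Reload the udev rules and re-trigger block device events.
udevadm control --reload-rules
udevadm trigger --type=devices --action=change

# Verify the symbolic links and the ownership of the underlying partitions.
ls -l /dev/oracleasm/
ls -lL /dev/oracleasm/

The ASM disk discovery string can then simply point at /dev/oracleasm/*.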
