Logical Volume Manager

AIX V3 extends traditional UNIX disk management facilities through the logical volume manager (LVM). The logical volume manager provides sophisticated disk management services that allow system administrators to establish and manage their storage environment without significant effort or extensive experience. This chapter outlines the advantages of the logical volume manager and explains how to use its key concepts.

Introducing Key LVM Components

The logical volume manager introduces many terms and concepts that may be new to users of traditional UNIX systems. This section defines some of these terms.

Physical Volume

In the logical volume manager, a Physical Volume, or PV, represents a single disk drive (see Figure - Physical Volumes, Volume Groups and Physical Partitions). Each physical volume has a unique, permanently assigned, system wide identifier called its physical volume ID or PVID. A physical volume will often be referred to by its logical name. By default, this name takes the form hdiskx where x is a number, for example hdisk2. See Managing Physical Volumes for more details on physical volumes.

Volume Group

To provide improved flexibility in dealing with the storage space contained in these physical volumes, the LVM introduces the concept of a volume group or VG. A volume group is a collection of physical volumes which are represented to the user or system administrator as a single pool of disk space. See Figure - Physical Volumes, Volume Groups and Physical Partitions. Each volume group can consist of 1 to 32 physical volumes. Each AIX system can have up to 255 volume groups. System and user file systems are defined within volume groups and not within physical disks. This allows file systems to span multiple physical disks without the knowledge of the user.

Each volume group in the system has a unique name of up to 15 characters and a unique ID number called a Volume Group ID or VGID that is generated by the system.

Each physical volume belonging to the volume group contains a Volume Group Descriptor Area or VGDA, which describes the volume group and its contents. This includes information on the physical volumes, physical partitions, and logical volumes it contains. The VGDA allows a volume group to be self-describing. See Volume Group Descriptor Area (VGDA).

This self-describing quality of volume groups has the advantage of allowing them to be dynamically added to or removed from a system. This is advantageous in configurations including removable disk technology or external disks which will occasionally be moved from one system to another. For example, a volume group containing removable disks or a string of external disks can be removed (exported) from one system and added (imported) to another system, making its file systems and their contents available to the new system without having to explicitly define them.

Volume groups not only ease the definition and administration of filesystem space in AIX, but the ability to move them easily between systems makes possible enhanced levels of application availability. See Managing Volume Groups for more details on volume groups.

rootvg

The AIX V3 installation process creates the first volume group called the rootvg. The rootvg consists of a base set of logical volumes required to start the system (additional logical volumes can be defined by the systems administrator). You can choose which physical volumes will be incorporated into the rootvg at installation time.

Additional physical volumes which are attached to a system can be added to the root volume group at a later date, or to a different volume group defined by the systems administrator.

See Size of rootvg: for more details about rootvg.

Physical Partition

When a volume group is created, it is logically one large pool of disk space. In order to allocate this space to file systems, the large pool is broken down into smaller pieces called Physical Partitions or PPs. See Figure - Physical Volumes, Volume Groups and Physical Partitions. This is analogous to changing a dollar into 100 one cent pieces so that you can spend exactly the right amount. You can better spend your volume group dollar by dividing it into physical partition pennies.

Physical partitions are the smallest units of disk space that the logical volume manager can manage and allocate. All physical volumes in a particular volume group share the same physical partition size. The default size for a physical partition is 4MB. However, the system administrator has the option of setting the physical partition size for any new volume group to a value between 1MB and 256MB. The value must be a power of 2 (for example, 1, 2, 4, 8, and so on). Each physical volume in the system can contain up to 65,535 physical partitions.
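The partition-size rules above can be sketched in portable shell (illustrative only; pp_count is a hypothetical helper, not an AIX command). It checks that a proposed physical partition size is a power of 2 in the permitted range, and computes how many partitions a disk of a given size yields:

```shell
# Hypothetical helper (not an AIX tool): validate a PP size and compute the
# number of physical partitions on a disk. Sizes are in MB.
pp_count() {
    disk_mb=$1; pp_mb=$2
    # The PP size must be a power of 2 between 1MB and 256MB
    case $pp_mb in
        1|2|4|8|16|32|64|128|256) ;;
        *) echo "invalid"; return 0 ;;
    esac
    echo $((disk_mb / pp_mb))
}

pp_count 400 4    # a 400MB disk at the default 4MB PP size -> 100 partitions
pp_count 400 3    # -> invalid (3 is not a power of 2)
```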


Figure: Physical Volumes, Volume Groups and Physical Partitions

Logical Volume

Within each volume group, the system administrator is able to define Logical Volumes or LVs. A logical volume is a collection of physical partitions which is logically viewed as a single piece of contiguous disk storage by its users (see Figure - Logical Volumes). A logical volume can have several uses, the most common being to contain an AIX journaled file system or jfs. However, it can also be used to contain an AIX paging space, a file system log or jfslog, a boot partition, or a system dump area (see Special System Logical Volumes), or simply a raw disk area usable by a database. The concept of a logical volume is similar to that of a disk partition or minidisk seen in traditional UNIX implementations. However, a logical volume offers far more power and flexibility.

Data on logical volumes appears to the user to be contiguous. In reality, however, the logical volume manager can place the physical partitions making up the logical volume anywhere within the volume group. This means they can be spread across two or more physical volumes or in two or three non-contiguous areas of the same physical volume. Users can, at their option, specify upon creation of the logical volume how they wish the logical volume physical partitions to be selected (that is, whether they should be on one physical volume or spread across many, and where on the physical volume the partitions should be placed, if space is available). Users can also select to have the system create and manage mirrored copies of the physical partitions in the logical volume. In this way, mirroring (see Disk Mirroring) of an AIX file system can be accomplished without any application modification.

Logical volumes are extensible. A logical volume can be made larger while the system is running simply by adding more physical partitions to it.


Figure: Logical Volumes

Logical Partition

Each logical volume is made up of a collection of logical partitions. Each of these logical partitions is represented by one or more physical partitions in the volume group. In the absence of mirroring in the logical volume, each logical partition is contained in one physical partition. However, if mirroring (see Disk Mirroring) has been implemented, each logical partition is actually represented by two or three physical partitions, each a mirrored copy of the other. Logical partitions and physical partitions are referred to by number. A logical partition is numbered by its position relative to the start of the logical volume to which it belongs, and a physical partition by its position relative to the beginning of the physical volume on which it resides. Thus, logical partition 1 of a logical volume might be contained in physical partition 23 of physical volume hdisk1; logical partition 2 in physical partition 45, and so on. See Figure - Logical Partitions and Mirroring. Since each logical partition is really one or more occurrences of a physical partition, it also has a default size of 4MB, but is definable between 1MB and 256MB.


Figure: Logical Partitions and Mirroring

Bad Block Relocation Policy

Bad block relocation is the process whereby read-write requests are redirected to a new block when a disk block becomes error prone. One of the features of the LVM is that it hides bad block relocation from the using application (for example, the AIX file system or the virtual memory manager).

Under LVM, the process is transparent in the sense that the application is unaware that requests directed to a physical block are actually being resent to a different block. In some cases, disk hardware subsystems exist that are able to perform this service independently of the LVM software. In this case, the LVM device driver will take advantage of this service to improve performance. However, a single interface is still presented to the higher level application.

Disk Quorum

The LVM requires that a quorum, or majority, of VGDAs and VGSAs be accessible in order for a volume group to remain active, or to be varied on. The idea of the quorum is to ensure that a volume group is kept in a known and recoverable state.

The number of VGDAs contained on a single disk varies according to the number of disks in the volume group:

Single physical volume in a volume group
Two VGDAs on one disk
Two physical volumes in a volume group
Two VGDAs on the first disk, one VGDA on the second disk
Three or more PVs in a volume group
One VGDA on each disk

So, a volume group containing two physical volumes has two VGDAs placed on the first disk and one placed on the second. Figure - Disk Quorum includes an example of this scenario. In this case, the failure of the first disk will cause two of the three VGDAs to be inaccessible, and therefore cause the failure of the entire volume group. If the second of the two disks failed, however, only one of the three VGDAs is inaccessible, and the volume group remains accessible. If you have three or more disks in the volume group, then a quorum will generally remain upon failure of any one of the disks.
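The quorum arithmetic described above can be sketched in portable shell (illustrative only; vgda_layout and survives are hypothetical helpers, not AIX commands):

```shell
# Sketch of the VGDA quorum rules: how VGDAs are distributed, and whether a
# volume group keeps a strict majority of them after losing one disk.
vgda_layout() {
    case $1 in
        1) echo "2" ;;       # one disk: two VGDAs on it
        2) echo "2 1" ;;     # two disks: two on the first, one on the second
        *) i=0; s=""
           while [ "$i" -lt "$1" ]; do s="$s 1"; i=$((i + 1)); done
           echo $s ;;        # three or more disks: one VGDA on each
    esac
}

# Does an N-disk volume group keep a quorum after losing disk number D?
survives() {
    total=0; lost=0; i=1
    for v in $(vgda_layout "$1"); do
        total=$((total + v))
        if [ "$i" -eq "$2" ]; then lost=$v; fi
        i=$((i + 1))
    done
    # a quorum is a strict majority of the VGDAs
    if [ $((2 * (total - lost))) -gt "$total" ]; then echo yes; else echo no; fi
}

survives 2 1    # lose the first of two disks -> no (two of three VGDAs gone)
survives 2 2    # lose the second             -> yes
survives 4 1    # four disks, lose any one    -> yes
```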

Obviously, this has implications when one comes to use disk mirroring in order to ensure high availability. In a two disk mirrored system, if the first disk fails, then you have lost 66% of your VGDAs, and the entire volume group becomes unavailable. This defeats the purpose of mirroring. For this reason, three or more (and generally an odd number) disk units provide a higher degree of availability and are highly recommended where mirroring is desired.


Figure: Disk Quorum

Note: With AIX/6000* Version 3.2.3, there is the ability to turn off disk quorum protection on any volume group. Turning off quorum protection allows a volume group to remain online, even when a quorum or majority of its VGDAs are not online. This would allow the volume group to remain online in the situation described above. This capability provides for a less expensive mirroring solution, but does carry the risk of data loss, as after a disk failure, data is accessible but no longer mirrored.

Disk Mirroring

AIX V3 and the logical volume manager provide a disk mirroring facility. Disk mirroring works by associating two or three physical partitions with each logical partition in a logical volume. When you write data to your logical volume, it is written to all the physical partitions that are associated with the affected logical partition. See Figure - Logical Partitions and Mirroring.

Mirroring is established when a logical volume is created. The mklv command allows you to select one or two additional copies for each logical volume. Mirroring can also be added to an existing logical volume using the mklvcopy command. When you create a logical volume, you associate with it policies that help determine how mirroring is established. You can tell the logical volume manager to ensure that mirrored copies of logical partitions are placed on different physical volumes. These policies are discussed in Making a New Logical Volume.

A key point about mirroring is that it occurs at the level of a logical volume. This means that any user, application, or operating system facility that uses disk at the logical volume level can take advantage of mirroring.

Mirroring data increases its availability. With mirroring in operation, factors such as the disk quorum (see Disk Quorum) can further affect availability.

Advantages of LVM

IBM has implemented the logical volume manager to overcome some of the major weaknesses seen in traditional UNIX storage management. The key advantages are as follows:

Multiple Physical Volume File Systems

Traditional UNIX implementations confined each file system to a single physical volume (fixed disk). AIX V3 allows file systems to span multiple physical volumes. The logical volume manager groups physical volumes into pools called volume groups, from which file systems are defined. This concept of the volume group allows file systems under AIX V3 to span multiple physical volumes (disks), or parts thereof.

Support for Extensible File Systems

Traditional UNIX systems limited file system flexibility by not allowing dynamic expansion of file systems beyond their initial size. The AIX file system implementation, in conjunction with the Logical Volume Manager, allows for the dynamic extension of file system sizes beyond their initial definition. Anyone who has had to reorganize a UNIX system when key file systems have become full will understand immediately the advantage of this capability.

Mirroring for Higher Availability

The logical volume manager has a function whereby critical data can be automatically and transparently replicated. Logical volume mirroring provides for the creation and maintenance of two or three online copies of any AIX file system. Mirroring will allow the system to withstand the failure of a disk containing a mirrored file system. In this case, the system continues using the remaining online copy; see Disk Quorum for additional details.

The majority of the LVM technology that is available in AIX V3 has been included in the Open Software Foundation's OSF/1** product. AT&T**'s USL group has indicated that future releases of UNIX V.4 will include technology that is similar to the logical volume manager. While the logical volume manager technology is an IBM offering today, it is clear that the logical volume manager or similar technology will be adopted by other UNIX systems over time.

Where is LVM Configuration Data Kept?

The data that describes the components of the LVM is not kept in one place. It is important to understand that this descriptive data on volume groups, logical volumes, and physical volumes is kept in a number of places.

Object Data Manager (ODM) Database

The ODM database is where most AIX V3 system configuration data is kept. We will not enter into a detailed discussion of the ODM database here; see Object Data Manager (ODM) for more details about ODM.

ODM contains information about all configured physical volumes, volume groups and logical volumes. This information mirrors the information found in the VGDA. The process of importing a VGDA, for example, involves copying the VGDA data for the imported volume group into the ODM. When a volume group is exported the data held in the ODM about that volume group is removed from the ODM database.

The ODM data also mirrors the information held in the Logical Volume Control Block; see Logical Volume Control Block (LVCB).

Volume Group Descriptor Area (VGDA)

The VGDA, located at the beginning of each physical volume, contains information that describes all the logical volumes and all the physical volumes that belong to the volume group of which that physical volume is a member. The VGDA is updated by almost all the LVM commands and subroutines, including such apparently read-only mechanisms as the vary on process.

The VGDA is a key part of the logical volume manager and the function that volume groups can provide. In effect, the VGDA makes each volume group self describing. An AIX system can read the VGDA on a disk, and from that, the system can determine what physical volumes and logical volumes are part of this volume group. This allows a system to import this information, and therefore import the volume group and all the configuration work that has already been performed on that volume group. See Importing and Exporting Volume Groups for details on the importing and exporting of volume groups.

Each disk contains at least one VGDA. This is important at vary on time. The time stamps in the VGDAs are used to determine which VGDAs correctly reflect the state of the VG. VGDAs can get out of sync when, for example, a volume group of four disks has one disk failure. The VGDA on that disk cannot be updated while it is not operational. Therefore, we need a way to update this VGDA when the disk comes back to life, and this is what the varyon process will do.

The VGDA is allocated when the disk is assigned as a physical volume (with the command mkdev). This actually only reserves a space for the VGDA at the start of the disk. The actual volume group information is placed in the VGDA when the physical volume is assigned to a volume group (using the mkvg or extendvg commands). When a physical volume is removed from the volume group (using the reducevg command) the volume group information is removed from the VGDA.

Volume Group Status Area (VGSA)

The VGSA contains state information about physical partitions and physical volumes. For example, the VGSA knows if one physical volume in a volume group is unavailable. The VGSA is managed by the logical volume device driver in the AIX V3 kernel.

Both the Volume Group Descriptor Area and the Volume Group Status Area have beginning and ending time stamps which are very important. These time stamps enable the LVM to identify the most recent copy of the VGDA and the VGSA at vary on time, that is, when a volume group is initialized; see Varying On and Off Volume Groups. The LVM requires that the time stamps for the chosen VGDA be the same as those for the chosen VGSA.

Logical Volume Control Block (LVCB)

The LVCB is located at the start of every logical volume. It contains information about the logical volume, and takes up a few hundred bytes. Applications writing directly to a raw logical volume must allow for the LVCB, and not write into the first 512-byte block. The command getlvcb -TA lvname displays the information held in the LVCB, as in the following example:


# getlvcb -TA hd2
AIX LVCB
intrapolicy = c
copies = 1
interpolicy = m
lvid = 00011187ca9acd3a.7
lvname = hd2
label = /usr
machine id = 111873000
number lps = 72
relocatable = y
strict = y
type = jfs
upperbound = 32
fs = log=/dev/hd8:mount=automatic:type=bootfs:vol=/usr:free=false
time created = Tue Jul 27 13:38:45 1993
time modified = Tue Jul 27 10:58:14 1993
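Individual attributes can be pulled out of getlvcb-style "key = value" output with standard text tools. The following is a hedged sketch (lvcb_field is a hypothetical helper); the sample reuses part of the hd2 listing above, where on a live system you would pipe in `getlvcb -TA lvname` instead:

```shell
# Hypothetical helper: extract one attribute from getlvcb-style output.
sample="copies = 1
type = jfs
label = /usr"

lvcb_field() {
    printf '%s\n' "$sample" | sed -n "s/^$1 *= *//p"
}

lvcb_field copies    # -> 1
lvcb_field label     # -> /usr
```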


AIX V3 Files

Some LVM configuration data is stored in AIX V3 files. In particular, most of the LVM storage constructs appear also as AIX V3 devices. Each volume group has a device associated with it, for example /dev/rootvg. Each physical volume has a device associated with it, for example /dev/hdisk3. Each logical volume has a device associated with it, for example /dev/hd2. This is the logical volume created at installation time to house the /usr file system. Information is also stored in /etc/filesystems.

How to Allocate Disk Space?

When you install AIX V3, it creates a number of file systems and logical volumes that you must have for the system to work correctly. The information to help plan the amount of space required for these file systems is contained in Disk Space Considerations.

Predefined File Systems and Logical Volumes

The file systems created by the operating system at installation time for AIX V3 are described below. /(root), /usr, /tmp, /home and /var are the main file systems.


Table: AIX V3 Predefined File Systems and Logical Volumes

The AIX V3 installation process creates the first volume group called the rootvg. You can see in the following figure the organization of this volume group.


Figure: Predefined Logical Volumes with AIX V3

Special System Logical Volumes

These special logical volumes are created as part of the installation process.

Boot Logical Volume

The Boot Logical Volume contains a stripped-down version of the operating system, which is required in order to boot the system and enable all other processes. A single boot logical volume is created in the rootvg at installation time.

A volume group may contain multiple boot logical volumes, to a maximum of one per disk. It is useful to have more than one boot logical volume per system, as this provides an alternate way to boot the system when the primary boot disk fails. Boot logical volumes are created using the bosboot command. Besides initializing the new boot logical volume, this command creates a boot record at the start of the disk pointing to the boot logical volumes on the disk.

Once a new boot logical volume has been created, the system must be told to look at it, if necessary, when starting. This is configured into NVRAM using the bootlist command.

The default size of the boot logical volume is 8MB.

System Dump Device

The System Dump Device is a logical volume that captures a system dump in the event of a system failure or a user-generated dump. It is initialized outside of the normal LVM interface when the system is booted, and remains open while the system is in operation. A dump logical volume is created in the rootvg at installation time. Its size is 8MB and its default position is the outer edge. See Listing the Characteristics of a Physical Volume for more information about the position of logical volumes.

Paging Space

A paging space is fixed disk storage for information that is resident in virtual memory, but is not currently being accessed. When the amount of the free real memory in the system is low, programs or data that have not been used recently are moved from real memory to paging space in order to free real memory for other activities. Paging space is allocated when data is first loaded into memory, and not when data is paged out of real memory.

You can use the lsps -a command to display the characteristics of paging spaces. The -a flag specifies that the characteristics of all paging spaces are to be given. The size is in megabytes. As long as %Used is not above 80, you have nothing to worry about.


# lsps -a
Page Space Physical Volume Volume Group Size %Used Active Auto Type
hd61 hdisk1 rootvg 32MB 25 yes yes lv
hd6 hdisk0 rootvg 32MB 47 yes yes lv
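The 80% guideline can be checked mechanically against output in the lsps -a format. The following is a hedged sketch (check_paging is a hypothetical helper); the first two lines reuse the sample output above, while the pg01 line is hypothetical to show a warning being raised:

```shell
# Hypothetical helper: flag any paging space past the 80% threshold, reading
# lines in the lsps -a data format shown above.
check_paging() {
    while read name pv vg size used active auto type; do
        if [ "$used" -gt 80 ]; then
            echo "$name is low on paging space ($used% used)"
        fi
    done
}

warnings=$(printf '%s\n' \
    "hd61 hdisk1 rootvg 32MB 25 yes yes lv" \
    "hd6 hdisk0 rootvg 32MB 47 yes yes lv" \
    "pg01 hdisk2 datavg 64MB 85 yes yes lv" | check_paging)
echo "$warnings"
```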

If paging space goes above 80% used, you will begin to get messages on the console stating that paging space is low.

The size of the default paging space is determined by the boot process, based on the size of the real memory.

It is recommended that you create a paging space on each physical volume. To do this, you can use SMIT with the fastpath smit pgsp.

Journaled File System Log (jfslog)

Planning Volume Groups

The next issue of some importance is to plan which disks will be allocated into which volume groups.

Size of rootvg:

The AIX V3 installation process creates the root volume group. The major choice you are faced with is how many of your physical volumes or disks should you allocate to this volume group. You need to consider the following issues when making this decision:

Recommendations for rootvg

Disk Space Considerations

The first task you must complete when planning an installation is to analyze your requirements for disk space and for file systems. Use the following process to determine the required amount of disk space:

  1. When you install AIX V3, it creates a number of file systems and logical volumes. See How to Allocate Disk Space?. You must determine how much disk space is required for each file system or logical volume. Add reasonable amounts of spare space to each logical volume to allow for unplanned growth and ad hoc requirements. Try to make sure that no file system would be more than 90% full.
  2. You must then look at the applications and data that you will introduce to the system and calculate how much disk space they need. Do not forget to look at your requirements for disk space, both immediately and in the near future. Allow space for growth and a small buffer as mentioned above.
  3. Having determined the disk use needs of the operating system and your applications, you will need to divide the requirements you have for disk space into file systems. The operating system's file systems are predetermined. The major effort here is to determine the file systems required for the data and applications that you will add. The issues to consider here are:

    An example of designing a file system layout is included in Example of a File System and Volume Group Design.

  4. Having aligned disk requirements with file systems you can round up those requirements into multiples of 4MB and then sum them up to give total space requirements. An example of this space requirement calculation is in Example of a File System and Volume Group Design.
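Step 4 above is simple arithmetic. The following portable-shell sketch (round_up_4mb is a hypothetical helper, and the per-file-system requirements are made-up figures) rounds each requirement up to a multiple of the 4MB physical partition size and totals the results:

```shell
# Hypothetical helper: round a requirement (in MB) up to a multiple of 4MB.
round_up_4mb() { echo $(( (($1 + 3) / 4) * 4 )); }

total=0
for need_mb in 34 7 121; do    # made-up per-file-system requirements, in MB
    total=$((total + $(round_up_4mb "$need_mb")))
done
echo "$total MB"               # 36 + 8 + 124 = 168 MB
```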

Example of a File System and Volume Group Design

We will use the system named pippin as an example of designing a file system layout.

The system pippin has three disks :

File Systems Requirement

The system pippin has a range of disk requirements. These are summarized in the following table.


Table: Disk Space Requirements for pippin

This data is obtained from IBM, for the AIX V3-related information, and from the providers of the applications that will be used. If this information is not available, estimates will need to be made.

To design our file systems we will use the standard file systems provided by AIX V3 for the operating system and will create separate file systems for each logically associated set of files. The file systems design and rationale is summarized in the table below:


Table: File Systems Required

LVM Commands and Command Levels

To give access to the functionality provided by the logical volume manager, AIX V3 provides logical volume commands that are organized into three categories:

LVM uses shell scripts for the high level commands that manage logical volumes and volume groups. The high level commands call intermediate level and low level commands. The low level commands in turn call the LVM library subroutines, usually on a one-to-one basis. The intermediate level commands deal with the ODM database and the logical volume control block.

You can find lots of documentation in InfoExplorer about the high level LVM commands, but none on the intermediate and low level commands.

The following sections discuss the intermediate and low level LVM commands.

Intermediate Level LVM Commands

The following is a description of intermediate level logical volume commands which are not documented because IBM reserves the right to modify them at any time. The options listed below are the current options for these commands. These commands are: getlvcb, getlvname, getlvodm, getvgname, lvgenmajor, lvgenminor, lvrelmajor, lvrelminor, putlvcb, putlvodm.

Low Level LVM Commands

The following is a description of low level logical volume commands which are not documented because IBM reserves the right to modify them at any time. The options listed below are the current options for these commands. These commands are: lchangelv, lcreatelv, ldeletelv, lextendlv, lquerylv, lreducelv, lresynclv, lchangepv, ldeletepv, linstallpv, lquerypv, lresyncpv, lcreatevg, lqueryvg, lqueryvgs, lvaryonvg, lvaryoffvg, lresynclp, lmigratepp.

Managing Physical Volumes

This section will review the following physical volume related topics:

Adding Physical Volumes

Adding a new physical volume (disk) to a RISC System/6000 requires that the disk be physically installed and then configured via the operating system. The process of installing disks is described in the Disk Drive Removal/Replacement section of the Installation and Service Guide that is provided with each RISC System/6000.

If we are dealing with an IBM disk unit, and if the disk unit is physically installed, the configuration process can be initiated by using the cfgmgr -v command or by rebooting the system (which also runs cfgmgr). This command makes the device known to the system by adding it to the CuDv class in the ODM. If the disk you added was powered off (for example, an external disk), it will now show as being defined but not available. To make the disk available for use, first power it on, then use SMIT with the fastpath smit disk and select the option Configure a Defined Disk. This uses the mkdev -l command to make the physical volume available.

Changing Physical Volume Characteristics

There is very little configuration data that can be changed for a physical volume. The two things that can be manipulated are:

Setting Allocation Permission for a Physical Volume

The allocation permission for a physical volume determines whether physical partitions contained on this disk, which are not yet allocated to a logical volume, can be allocated for use by logical volumes.

Allocation permission is set using the chpv command with the -a flag. To turn off allocation permission for a disk, enter:

# chpv -a n pvname

This will prevent any further physical partitions being allocated on the physical volume pvname. To turn the allocation permission back on, enter:

# chpv -a y pvname

Setting the Availability of a Physical Volume

The availability of a physical volume defines whether any logical input/output operations can be performed to the specified physical volume. Physical volumes should be made unavailable when they are to be removed from the system or are lost due to failure. To set the state of a physical volume to be unavailable, enter:

# chpv -v r pvname

This will remove all VGDA and VGSA copies from the physical volume, and the physical volume will not take part in future vary on quorum checking. Also information about the specified volume will be removed from the VGDAs of the other physical volumes in that volume group.

If the physical volume is required in order to maintain a volume group open, the attempt to remove the physical volume will fail.

To make a physical volume available enter:

# chpv -v a pvname

Note: The chpv command uses space in the /tmp directory to store information while it is executing. If it fails mysteriously, it could be due to lack of space in the /tmp directory. You should create more space in that directory and try again.

Accessing the chpv Command Using smit chpv

Typing smit chpv will take you to a menu from which you can:

Listing the Physical Volumes on the System

If you enter lspv without any flags, it will list the physical volumes available on the system.

For example:


# lspv
hdisk0 00000061efc8f50f rootvg
hdisk1 000024977ba43a0d rootvg
hdisk2 00008898603471ca rootvg

The fields shown, from left to right, are:

  1. Physical volume name
  2. Physical volume identifier (PVID)
  3. Name of the volume group to which the physical volume has been allocated. If it has not yet been allocated to a Volume Group, this value will appear as None.
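Because the third field is either a volume group name or None, the lspv listing is easy to filter. The following is a hedged sketch using lines in the format shown above (the hdisk3 line is hypothetical, added to illustrate an unallocated disk):

```shell
# Find physical volumes not yet assigned to any volume group, from lspv-format
# lines. The hdisk3 entry is hypothetical sample data.
sample="hdisk0 00000061efc8f50f rootvg
hdisk3 00001234abcd5678 None"

free_pvs=$(printf '%s\n' "$sample" | awk '$3 == "None" { print $1 }')
echo "$free_pvs"    # -> hdisk3
```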

Listing the Characteristics of a Physical Volume

If you enter lspv pvname, detailed information for the physical volume pvname will be displayed.

For example:


# lspv hdisk0
PHYSICAL VOLUME: hdisk0 VOLUME GROUP: rootvg
PV IDENTIFIER: 00000061efc8f50f VG IDENTIFIER 00011187ca9acd3a
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 4 megabyte(s) LOGICAL VOLUMES: 8
TOTAL PPs: 95 (380 megabytes) VG DESCRIPTORS: 1
FREE PPs: 3 (12 megabytes)
USED PPs: 92 (368 megabytes)
FREE DISTRIBUTION: 03..00..00..00..00
USED DISTRIBUTION: 16..19..19..19..19

The left-hand pair of columns holds information about the physical volume itself. The right-hand pair displays information concerning the volume group of which the physical volume is a member; this information is read from the VGDA.

The meaning of the fields is as follows:

PHYSICAL VOLUME
The name of the specified physical volume.
PV IDENTIFIER
The physical volume identifier (unique to the system).
PV STATE
The state of the physical volume. This defines whether the physical volume is available for logical input/output operations or not. It can be changed using the chpv command.
STALE PARTITIONS
The number of stale partitions.
PP SIZE
The size of a physical partition. This is a characteristic of the volume group and is set only at the creation of the volume group as an argument to the mkvg command. The default size is 4MB.
TOTAL PPs
The total number of physical partitions, including both free and used partitions available on the physical volume.
FREE PPs
The number of free partitions available on the physical volume.
USED PPs
The number of used partitions on the physical volume.
FREE DISTRIBUTION
This field summarizes the distribution of free physical partitions across the physical volume, according to the sections of the physical volume on which they reside. See Figure - Physical Partitions Distribution Summary. The sections are, in order:
  1. Outer edge
  2. Outer middle
  3. Centre
  4. Inner middle
  5. Inner edge


    Figure: Physical Partitions Distribution Summary

USED DISTRIBUTION
Same as free distribution, except that it displays the allocation of used physical partitions.
VOLUME GROUP
The name of the volume group to which the physical volume is allocated.
VG IDENTIFIER
The numerical identifier of the volume group to which the physical volume is allocated.
VG STATE
State of the volume group. If the volume group is activated with the varyonvg command, the state is either active/complete (indicating all physical volumes are active) or active/partial (indicating some physical volumes are not active). If the volume group is not activated with the varyonvg command, the state is inactive.
ALLOCATABLE
Whether the system is permitted to allocate new physical partitions on this physical volume. See Setting Allocation Permission for a Physical Volume.
LOGICAL VOLUMES
The number of logical volumes in the volume group.
VG DESCRIPTORS
The number of VGDAs for this volume group which reside on this particular physical volume. See Disk Quorum.
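The megabyte figures in the lspv output are derived directly from the partition counts: free space is simply FREE PPs multiplied by PP SIZE. A quick sketch with the sample numbers:

```shell
# Recompute the free space reported by lspv hdisk0 above:
# FREE PPs (3) times PP SIZE (4 MB) should reproduce the
# parenthesized figure of 12 megabytes.
free_pps=3
pp_size_mb=4
free_mb=$((free_pps * pp_size_mb))
echo "${free_mb} megabytes"
```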

Summarizing the Allocation of Logical Volumes Within a Physical Volume

Entering lspv -l pvname will display the allocation of logical volumes within the physical volume specified by the pvname parameter.

For example:


# lspv -l hdisk0
hdisk0:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
hd5 2 2 02..00..00..00..00 /blv
hd7 2 2 02..00..00..00..00 /mnt
hd3 3 3 03..00..00..00..00 /tmp
oa 5 5 05..00..00..00..00 /oa
hd2 70 70 04..11..17..19..19 /usr
hd6 8 8 00..08..00..00..00 N/A
hd8 1 1 00..00..01..00..00 N/A
hd4 1 1 00..00..01..00..00 /

This shows the allocation of hdisk0 physical partitions to various logical volumes. The meaning of the fields is as follows:

LV NAME
The name of the logical volume.
LPs
The number of logical partitions for that logical volume allocated on that particular physical disk.
PPs
The number of physical partitions for that logical volume allocated on that particular physical disk.
DISTRIBUTION
The distribution of physical partitions by physical volume section.
MOUNT POINT
Where the logical volume is associated with a file system, the mount point for that file system. In the above example, hd6 and hd8 are not associated with file systems, and the mount point is therefore marked as not applicable (hd6 is page space, hd8 is the jfslog).
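As a consistency check, the PPs column of this listing can be summed and compared against the USED PPs figure (92) reported earlier by lspv hdisk0. A small portable sketch over the sample data:

```shell
# Sum the PPs column (third field) of the lspv -l hdisk0 sample
# output; the total should match the USED PPs figure of 92.
lspv_l='hd5 2 2 02..00..00..00..00 /blv
hd7 2 2 02..00..00..00..00 /mnt
hd3 3 3 03..00..00..00..00 /tmp
oa 5 5 05..00..00..00..00 /oa
hd2 70 70 04..11..17..19..19 /usr
hd6 8 8 00..08..00..00..00 N/A
hd8 1 1 00..00..01..00..00 N/A
hd4 1 1 00..00..01..00..00 /'

printf '%s\n' "$lspv_l" | awk '{ sum += $3 } END { print sum }'
```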

Listing Physical Partition Allocation by Disk Region

Entering the command lspv -p pvname returns a table of physical partition allocation in terms of disk region.

For example:


# lspv -p hdisk0
hdisk0:
PP RANGE STATE REGION LV NAME TYPE MOUNT POINT
1-2 used outer edge hd5 boot /blv
3-4 used outer edge hd7 sysdump /mnt
5-7 used outer edge hd3 jfs /tmp
8-12 used outer edge oa jfs /oa
13-15 free outer edge
16-19 used outer edge hd2 jfs /usr
20-27 used outer middle hd6 paging N/A
28-38 used outer middle hd2 jfs /usr
39-39 used center hd8 jfslog N/A
40-40 used center hd4 jfs /
41-57 used center hd2 jfs /usr
58-76 used inner middle hd2 jfs /usr
77-95 used inner edge hd2 jfs /usr

This is useful for checking the distribution of the partitions belonging to logical volumes in terms of the intra-physical volume allocation policy.

Here is what the different columns indicate:

PP RANGE
The range of physical partitions for which the current row of data applies.
STATE
Whether or not the partitions have been allocated. Value can be either used or free.
REGION
Notional region of the disk within which the partitions are located. See Figure - Physical Partitions Distribution Summary.
LV NAME
Name of the logical volume to which the partitions in question have been allocated.
TYPE
Type of file system residing on the logical volume.
MOUNT POINT
Mount point of the file system, if applicable.

Displaying a Map of Physical Partition Allocation

Entering the command lspv -M pvname returns a map showing, for each physical partition on the disk, the logical partition (if any) to which it is allocated.

For example:


# lspv -M hdisk1
hdisk1:1-19
hdisk1:20 hd61:1
hdisk1:21 hd61:2
hdisk1:22 hd61:3
hdisk1:23 hd61:4
hdisk1:24 hd61:5
hdisk1:25 hd61:6
hdisk1:26 hd61:7
hdisk1:27 hd61:8
hdisk1:28 lh:1
hdisk1:29 lh:2
hdisk1:30 lh:3
hdisk1:31 lh:4
hdisk1:32 lh:5
hdisk1:33 lh:6
hdisk1:34 lh:7
hdisk1:35 lh:8
hdisk1:36-39
hdisk1:40 hd1:1
hdisk1:41 hd1:2
hdisk1:42 hd1:3
hdisk1:43 hd1:4
hdisk1:44 hd2:71
hdisk1:45 hd2:72
hdisk1:46 hd4:2
hdisk1:47 hd1:5
hdisk1:48 hd1:6
hdisk1:49 hd1:7
hdisk1:50 hd1:8
hdisk1:51-95

The first column indicates the physical partition (if a group of contiguous partitions is free, a range of partitions is shown) for a particular hard disk. The second column indicates which logical partition of which logical volume is associated with that physical partition.

This is useful for determining the degree of contiguity of data on the system. It can also provide useful information should the system administrator decide to reorganize the system using a physical partition map.
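The degree of allocation can be tallied mechanically from the map: lines with a second field are allocated to a logical partition, while bare ranges are free. A sketch over an excerpt of the map above:

```shell
# Tally allocated and free physical partitions from an excerpt of
# the lspv -M map: two-field lines are allocated; bare "disk:a-b"
# lines are free ranges.
map='hdisk1:1-19
hdisk1:20 hd61:1
hdisk1:21 hd61:2
hdisk1:36-39
hdisk1:40 hd1:1
hdisk1:51-95'

printf '%s\n' "$map" | awk '
NF == 2 { alloc++ }
NF == 1 {
    split($1, p, ":")
    if (split(p[2], r, "-") == 2)
        free += r[2] - r[1] + 1
    else
        free++
}
END { printf "allocated=%d free=%d\n", alloc, free }'
```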

Managing Volume Groups

This section discusses the different operations you can do on volume groups.

Creating a New Volume Group

The mkvg command creates new volume groups. At the time of creation, the most important volume group characteristics to be set are:

name
The volume group name, which must be unique system wide.
physical volumes
The physical volumes that will be part of the volume group.
activate at restart
This determines whether or not the volume group should be activated at each restart of the system. This may be modified at a later date.

An example of this command could be:

# mkvg -y newvg -d 4 hdisk5 hdisk6 hdisk7

This would create the volume group called newvg, consisting of the physical volumes hdisk5, hdisk6, and hdisk7. Since the maximum number of disks in the volume group is set to 4, only one more physical volume can be added later.

The mkvg command can also be accessed via the smit mkvg command. The differences from mkvg entered on the command line are:

Changing Volume Group Characteristics

The only characteristic that can be changed after the volume group is created is whether or not the volume group is to be automatically varied on at system startup. This is accomplished using the chvg command.

To specify that a particular volume group is to be automatically varied on at system startup, enter:

# chvg -a y vgname

If you don't want the volume group to be varied on, enter:

# chvg -a n vgname

Adding Physical Volumes

When a volume group is created, you allocate one or more physical volumes to that volume group. You may increase the number of physical volumes in a volume group at a later date by using the extendvg command. An example of this command's use:

# extendvg newvg hdisk3

In this example, physical volume hdisk3 is added to volume group newvg. If the physical volume belongs to a varied on volume group, the command will exit without adding the physical volume to the volume group. If the physical volume belongs to a volume group (it need not be a volume group known to the current system) but that group is not varied on, the user will be prompted to confirm whether they wish to continue. If the volume is not part of any volume group (no VGDA present), it is added to the volume group without user interaction. The extendvg command destroys the previous contents of any physical volume added to a volume group.

The only flag supported by this command is the -f flag. Using this flag, the command will add the physical volume to the volume group, without prompting the user, even when the physical volume contains a VGDA, as long as that physical volume is not known to any volume group on the system.

Removing Physical Volumes

The logical counterpart to the extendvg command is the reducevg command. This command removes one or more physical volumes from a volume group. If all the physical volumes in a volume group are removed, the volume group itself is also removed.

For example, to remove hdisk3 from the volume group newvg use:

# reducevg newvg hdisk3

To remove a physical volume from a volume group, you must first remove all logical volumes present on the physical volume, and the volume group must be varied on.

This command has two flags: -d and -f

reducevg -d forces the removal of all logical volumes on the physical volume. The user is prompted to confirm that they want the logical volumes removed.


**** Warning **** This flag can be dangerous because it may remove parts of logical volumes that reside on the physical volume in question. This will in all likelihood corrupt the contents of the logical volume and make all data in the logical volume, not just the data on the physical volume in question, unavailable.

The -f flag makes the -d flag even more dangerous by suppressing the confirmation prompt before the logical volumes are deleted.

Importing and Exporting Volume Groups

To make a volume group known to the system, you have to import the volume group. If you want a volume group removed from a system configuration you have to export the volume group. The importvg and exportvg commands are used for adding and removing volume group entries to and from the system ODM database. This is most useful for the removal and addition of portable storage devices (specifically, external disks) from and to the system.

importvg Command

The importvg command adds a new volume group definition to ODM, by using the VGDA data located on a specified physical volume. The following functions may be performed via the importvg command:

exportvg Command

The command exportvg vgname removes the definition of the vgname volume group from the system. After the volume group has been exported it can no longer be accessed. This is because the system no longer knows it exists even though physically it may still be connected.

The exportvg command does not modify any user data in the volume group. It only removes the definition from the ODM device configuration database. The primary use of the exportvg command is to move portable storage devices between systems or remove dysfunctional volume groups from a working system. You may export a volume group even if it has already been removed from your system.

A volume group should not be accessed by a second system until it has been exported from the first. Otherwise, the second system may change the volume group, and when it is re-imported to the original system, that system's database of configuration data will no longer be consistent with the contents of the volume group. It is possible for a second system to vary on a volume group that has not first been exported by the originally controlling system. The exportvg command cleans up the system rather than the physical volume. Only a complete volume group can be exported, not individual physical volumes.

To access this command via smit, enter:

# smit exportvg

Varying On and Off Volume Groups

Once a volume group exists, it can be made available for use via the varyonvg process. This process involves a number of steps:

  1. Each VGDA on each physical volume in a volume group is read.
  2. The header and trailer time stamps within each VGDA are read. These time stamps must match for a VGDA to be valid.
  3. If a majority of VGDAs (that is, a quorum) are valid, then the vary on process proceeds. If they are not, the vary on fails.
  4. The system will take the most recent VGDA (the one with the latest timestamp) and write it over all other VGDAs so they match.
  5. The syncvg command is run to resynchronize any stale partitions present (in the case where mirroring is in use).

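The quorum check in steps 2 and 3 above can be modeled roughly as follows. This is a simplified illustration, not the actual LVM code, and the time stamp values are invented:

```shell
# Simplified model of the vary on quorum check: each line is
# "pv header_ts trailer_ts"; a VGDA is valid only when its two
# time stamps match, and vary on proceeds only if the valid
# VGDAs form a majority.
vgdas='hdisk0 1001 1001
hdisk1 1001 1001
hdisk2 0999 0998'

printf '%s\n' "$vgdas" | awk '
{ total++; if ($2 == $3) valid++ }
END {
    if (valid * 2 > total)
        print "vary on proceeds"
    else
        print "vary on fails"
}'
```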
varyonvg Command

The command varyonvg vgname is used to vary on the volume group called vgname. The command has a range of options that can be used to overcome damage to the volume group structure or give status information. The key options of the varyonvg command are:

varyoffvg Command

The command varyoffvg will deactivate a volume group and its associated logical volumes. This requires that the logical volumes be closed, which in turn requires that any file systems associated with them be unmounted. The varyoffvg command also allows the use of the -s flag to move the volume group from being active to being in maintenance or systems management mode.

Using varyonvg and varyoffvg Via SMIT

The varyonvg and varyoffvg commands can be accessed by using the smit varyonvg and smit varyoffvg fastpaths respectively.

Listing the Volume Groups on the System

If you simply enter lsvg without a parameter, you will be presented with a list of all the defined volume groups, for example:


# lsvg
rootvg

By default, this form of the lsvg command lists all defined volume groups. You may list only the currently active volume groups by using the -o flag.

Listing the Characteristics of a Volume Group

If you enter lsvg with a volume group name as a parameter, you will receive some detailed information about the volume group in question. For example:


# lsvg rootvg
VOLUME GROUP: rootvg VG IDENTIFIER: 00011187ca9acd3a
VG STATE: active PP SIZE: 4 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 274 (1096 megabytes)
MAX LVs: 256 FREE PPs: 71 (284 megabytes)
LVs: 14 USED PPs: 203 (812 megabytes)
OPEN LVs: 12 QUORUM: 2
TOTAL PVs: 3 VG DESCRIPTORS: 3
STALE PVs: 0 STALE PPs 0
ACTIVE PVs: 3 AUTO ON: yes

The meaning of the fields is as follows:

VOLUME GROUP
The name of the specified volume group.
VG STATE
The state of the volume group. Indicates whether the volume group is varied on or off. It can be:
  • active/complete - varied on and all physical volumes active
  • active/partial - varied on but not all physical volumes active
  • off - varied off
VG PERMISSION
Permission can be read/write or read-only.
MAX LVs
Maximum number of logical volumes permitted in the volume group. The default is 256.
LVs
Actual number of logical volumes currently existing within the volume group.
OPEN LVs
The number of LVs currently open. In the above example, the two closed logical volumes are the boot logical volume (hd5) and an inactive paging space (paging00).
TOTAL PVs
The total number of physical volumes currently within the volume group.
ACTIVE PVs
Number of physical volumes currently active.
VG IDENTIFIER
Identifier by which the volume group is known to LVM. It is unique to the system.
PP SIZE
The size of a physical partition.
TOTAL PPs
The total number of physical partitions, including both free and used partitions, available to the volume group.
FREE PPs
The number of free partitions available on the volume group.
USED PPs
The number of used partitions on the volume group.
QUORUM
The number of physical volumes required for a quorum. For a two disk volume group this figure can be misleading, since a majority of VGDAs, rather than of disks, is actually required. See Disk Quorum.
VG DESCRIPTORS
Number of VGDAs existing within the volume group.
STALE PPs
Number of physical partitions marked as stale within the volume group.
AUTO ON
Whether the volume group is to be automatically varied on at system startup.
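The QUORUM figure (2 in the sample output) is a majority of the volume group's VGDAs, which for the 3 VG DESCRIPTORS shown works out as:

```shell
# A quorum is a majority of the VGDAs in the volume group:
# floor(n / 2) + 1. With the 3 VG DESCRIPTORS in the sample
# lsvg output, this gives the QUORUM figure of 2.
vg_descriptors=3
quorum=$((vg_descriptors / 2 + 1))
echo "QUORUM: $quorum"
```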

Listing the Logical Volumes in a Volume Group

If you enter lsvg in the form lsvg -l vgname (the -l flag won't work without the vgname parameter), a list of the logical volumes in the volume group is displayed, with associated characteristics. For example:


# lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd6 paging 8 8 1 open/syncd N/A
hd61 paging 8 8 1 open/syncd N/A
hd5 boot 2 2 1 closed/syncd /blv
hd7 sysdump 2 2 1 open/syncd /mnt
hd8 jfslog 1 1 1 open/syncd N/A
hd4 jfs 2 2 2 open/syncd /
hd2 jfs 72 72 2 open/syncd /usr
hd9var jfs 1 1 1 open/syncd /var
hd3 jfs 3 3 1 open/syncd /tmp
hd1 jfs 8 8 1 open/syncd /home
paging00 paging 8 8 1 closed/syncd N/A
data jfs 75 75 1 open/syncd /lh/data
oa jfs 5 5 1 open/syncd /oa
lh jfs 8 8 1 open/syncd /lh

Here is what the columns of output mean:

LV NAME
The name of the logical volume.
TYPE
The type of the logical volume, which can be:
  • paging
  • boot (boot logical volume)
  • sysdump (system dump device)
  • jfslog (jfslog device)
  • jfs (journaled file system)
LPs
Number of logical partitions allocated to the logical volume.
PPs
Number of physical partitions allocated to the logical volume in question. This number should be the LP count multiplied by the number of copies specified for the logical volume.
PVs
The number of physical volumes across which the physical partitions allocated for this logical volume are spread.
LV STATE
Whether the logical volume is open or closed, and whether it has been synchronized.
MOUNT POINT
The mount point for any associated file system, if applicable.
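As a consistency check, summing the PPs column of lsvg -l rootvg should reproduce the USED PPs figure (203) shown by lsvg rootvg. A sketch over the sample data:

```shell
# Sum the PPs column (fourth field) of the lsvg -l rootvg sample
# output; the total should match the USED PPs figure of 203.
lsvg_l='hd6 paging 8 8 1 open/syncd N/A
hd61 paging 8 8 1 open/syncd N/A
hd5 boot 2 2 1 closed/syncd /blv
hd7 sysdump 2 2 1 open/syncd /mnt
hd8 jfslog 1 1 1 open/syncd N/A
hd4 jfs 2 2 2 open/syncd /
hd2 jfs 72 72 2 open/syncd /usr
hd9var jfs 1 1 1 open/syncd /var
hd3 jfs 3 3 1 open/syncd /tmp
hd1 jfs 8 8 1 open/syncd /home
paging00 paging 8 8 1 closed/syncd N/A
data jfs 75 75 1 open/syncd /lh/data
oa jfs 5 5 1 open/syncd /oa
lh jfs 8 8 1 open/syncd /lh'

printf '%s\n' "$lsvg_l" | awk '{ sum += $4 } END { print sum }'
```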

Summarizing Physical Volume Status within a Volume Group

Finally, the format lsvg -p vgname displays a list of the physical volumes contained in a volume group, together with some status information including physical partition allocation. An example of the output follows:


# lsvg -p rootvg
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk0 active 95 3 03..00..00..00..00
hdisk1 active 95 67 19..03..07..19..19
hdisk2 active 84 1 01..00..00..00..00

Here is a summary of the meaning of the command output:

PV_NAME
The name of the physical volume.
PV STATE
Whether or not this physical volume is active.
TOTAL PPs
The total number of physical partitions on this physical volume.
FREE PPs
The total number of unused physical partitions on this physical volume.
FREE DISTRIBUTION
The location of the free physical partitions on the physical volume. There are five columns, one for each disk region, in the following order: outer edge, outer middle, centre, inner middle, inner edge. See Figure - Physical Partitions Distribution Summary.

This form of the lsvg command is useful for summarizing the concentrations of free space on the system. If the system administrator wished to create a new logical volume, they could ascertain from the free distribution column the intra-physical volume allocation strategy most likely to provide the new logical volume with contiguity of data. If no free space were available in the desired region of disk, the system administrator would have to change the allocation policies of existing logical volumes and reorganize them with respect to those policies using the reorgvg command.
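Likewise, summing the FREE PPs column of this listing reproduces the FREE PPs total (71) reported by lsvg rootvg:

```shell
# Sum the FREE PPs column (fourth field) of the lsvg -p rootvg
# sample output; the total should match the FREE PPs figure of 71.
lsvg_p='hdisk0 active 95 3 03..00..00..00..00
hdisk1 active 95 67 19..03..07..19..19
hdisk2 active 84 1 01..00..00..00..00'

printf '%s\n' "$lsvg_p" | awk '{ free += $4 } END { print free }'
```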

Managing Logical Volumes

We have previously discussed physical volumes, volume groups and physical partitions and how they can be manipulated and managed. All these constructs are in place to support the use of disk space by users, applications, or the operating system. It is logical volumes that are used by users, applications, and the operating system.

The management of logical volumes is therefore the management of the disk space that is available for use. This section will review the following topics:

Making a New Logical Volume

The command:

# mklv -y testlv -c 2 rootvg 10
will create a logical volume called testlv in the rootvg volume group. It will contain 10 logical partitions, and each logical partition consists of two physical partitions.

The volume group to which the logical volume will belong, and the number of logical partitions the logical volume will contain, must be specified.
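Because -c 2 requests two copies of each logical partition, the testlv example consumes twice as many physical partitions as logical partitions:

```shell
# The -c 2 flag of mklv requests two copies of each logical
# partition, so testlv's 10 logical partitions consume
# 10 * 2 = 20 physical partitions.
lps=10
copies=2
pps=$((lps * copies))
echo "testlv uses $pps physical partitions"
```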

Removing a Logical Volume

The command:

# rmlv testlv
will remove the logical volume testlv from the system. This deallocates all physical partitions and logical partitions associated with the logical volume and removes the logical volume information in the VGDA, ODM and the /dev entry. All data contained in the LV is lost. The logical volume space is now available for use by other logical volumes.

Increasing the Size of an Existing Logical Volume

An existing logical volume can be increased in size by using the extendlv or smit extendlv command, or, if the logical volume is used by a jfs file system, the chfs or smit chjfs command. The extendlv command uses many of the flags available to the mklv command.

As mentioned above, a logical volume's size can also be increased by using the chfs command or the smit chjfs menu if the logical volume contains a jfs file system. In this case, you do not have the ability to allocate the new physical partitions specifically by naming the physical volume, using map files, or using the position flag.

Copying a Logical Volume

Existing logical volumes can be copied by means of the cplv or smit cplv command. You can copy a logical volume to a new logical volume or overwrite an existing logical volume, and you can copy into a different volume group.

Migrating and Reorganizing Logical Volumes

Over time it is probable that the logical volumes in your system will no longer be optimally placed. AIX provides two commands to assist with reorganizing logical volumes:

Listing a Logical Volume

All those details that you selected (or didn't select) when the logical volume was built can be displayed using the command lslv lvname, or smit lslv and then selecting the status option. For example:


# lslv hd2
LOGICAL VOLUME: hd2 VOLUME GROUP: rootvg
LV IDENTIFIER: 00011187ca9acd3a.7 PERMISSION: read/write
VG STATE: inactive LV STATE: opened/syncd
TYPE: jfs WRITE VERIFY: off
MAX LPs: 500 PP SIZE: 4 megabyte(s)
COPIES: 1 SCHED POLICY: parallel
LPs: 72 PPs: 72
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: center UPPER BOUND 32
MOUNT POINT: /usr LABEL: /usr
MIRROR WRITE CONSISTENCY: on
EACH LP COPY ON A SEPARATE PV ?: yes

The meanings of these fields are:

LOGICAL VOLUME
The logical volume name.
VG STATE
The current state of the volume group containing the logical volume.
TYPE
The logical volume type as set by the -t flag.
MAX LPs
The maximum number of logical partitions allowable for the logical volume. This is set via the -x flag.
COPIES
The number of copies or mirrors active. Set via the -c flag and changed using the mklvcopy and the rmlvcopy commands.
LPs
The number of logical partitions currently allocated to the logical volume. This can be modified via the extendlv command.
STALE PPs
The number of physical partitions that are marked as stale. This is only relevant in mirrored situations. You can eliminate stale PPs by using the syncvg or the varyonvg command.
INTER-POLICY
Indicates if the inter-logical volume allocation policy is set to minimum or maximum. This policy is set using the -e flag.
INTRA-POLICY
Indicates the preferential location on a physical volume for placement of physical partitions when they are allocated to a logical volume. It is set via the -a flag.
MIRROR WRITE CONSISTENCY
Indicates if mirror write consistency is set on or off. This attribute is set via the -w flag.
EACH LP COPY ON A SEPARATE PV
This reflects whether the allocation policy for mirrored physical partitions is set to strict or not. The -s flag sets this policy.
VOLUME GROUP
Indicates the volume group which this logical volume is part of.
PERMISSION
Indicates if the logical volume is set to either read/write or read only status. Only the chlv with the -p flag can affect this attribute.
LV STATE
The current state of the logical volume.
WRITE VERIFY
Indicates whether the write verify option is set on or off. This policy is set using the -v flag.
PP SIZE
The size of the physical partitions used for the logical volume.
SCHED POLICY
Indicates the scheduling policy in place for writing to logical volume mirrors when mirroring is active. This policy is set via the -d flag. The default policy is parallel.
PPs
The number of physical partitions allocated to the logical volume. This should be a multiple of the LPs figure.
BB POLICY
Indicates the current bad block relocation policy. The default, relocatable, indicates that the LVM should perform bad block relocation; the alternative is to turn bad block relocation off. This attribute is set via the -b flag.
RELOCATABLE
This attribute indicates if this logical volume could be relocated automatically when the reorgvg command is used to reorganize the logical volumes within a volume group. This attribute is set via the -r flag.
UPPER BOUND
The maximum number of physical volumes that this logical volume can be allocated onto. Set via the -u flag.
LABEL
The name associated with the logical volume. For logical volumes associated with a jfs file system this is generally the file system name.

Getting a Summary of Logical Volume Allocation

The command lslv -l lvname will give a summary of the manner in which the logical volume is physically allocated. Using smit lslv and then selecting the physical volume map will provide the same result. An example:


# lslv -l hd2
hd2:/usr
PV COPIES IN BAND DISTRIBUTION
hdisk0 070:000:000 24% 004:011:017:019:019
hdisk1 002:000:000 100% 000:000:002:000:000

This command is useful for getting a quick summary of how the logical volume is allocated across physical volumes and within each physical volume it resides on. The different columns have the following meaning:
PV
This is the physical volume name.
COPIES
This shows you how the physical partitions for mirrored and non-mirrored logical volumes are allocated on the specific physical volume. In the example above we have a non-mirrored logical volume.

If we were dealing with a mirrored logical volume we could expect to see output like:


# lslv -l hd2
lv01:/u/mirror
PV COPIES IN BAND DISTRIBUTION
hdisk2 010:000:000 100% 010:000:000:000:000
hdisk3 000:010:000 100% 005:000:000:000:005

In this example a logical volume called lv01 has been created with two copies. You can see that all of the physical partitions for the first copy are located on hdisk2 and all the physical partitions for the second copy on hdisk3.

IN BAND
This statistic indicates what percentage of physical partitions for a particular logical volume are allocated in the preferred intra-physical volume location.
DISTRIBUTION
This shows how the physical partitions associated with the logical volume in question are allocated across the various regions of the disk (outer edge, outer middle, center, inner middle, inner edge).
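The IN BAND percentage can be recomputed from the sample data: hd2 has an intra-policy of center, and 17 of its 70 partitions on hdisk0 sit in the center region (truncated integer arithmetic, matching the 24% shown):

```shell
# Recompute the IN BAND figure for hd2 on hdisk0 from the sample
# lslv -l output: 17 of 70 partitions are in the preferred
# (center) region, giving 24% with truncated integer arithmetic.
in_region=17
total=70
in_band=$((in_region * 100 / total))
echo "${in_band}%"
```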

Using lslv to Interrogate the VGDA of a Physical Volume

When you use the command lslv in the form lslv -n PVID LVname, you can query the status of a logical volume as recorded in the VGDA on a particular physical volume. This is very useful for tracking down problems when they occur. This variation of the lslv command cannot be accessed via smit.

Scenarios

This section discusses different scenarios about the logical volume management.

Resolving Problems and Recovering From Errors

LVM configuration information is stored in the ODM, as well as in the /dev directory, and in the Volume Group Descriptor Area (VGDA) on each of the disks in the volume group. If the ODM and VGDA do not correlate, this may result in messages being issued stating they are out of sync.

If the volume group in question is not the root volume group, this may be rectified by varying off the volume group, then exporting and re-importing it. Use the following sequence of commands:

# varyoffvg vgname
# exportvg vgname
# importvg -y vgname pvname
# varyonvg vgname

where vgname is the name of the volume group, and pvname is the name of a physical volume (disk) in the volume group.

If the problem is with the root volume group, then the following script should rebuild the information for the root volume group in the ODM (syntax is fixlvodm hdisk# where # is the number of the bootable disk of rootvg and fixlvodm the name of the script):


#!/bin/ksh
# fixlvodm - Export and re-import the root volume group ODM data

# check arguments
case "$1" in
hdisk*) ;;
*)  echo "Usage: fixlvodm PVname"
    exit 1;;
esac

# make sure the disk is in the root volume group
lquerypv -p `getlvodm -p $1` -g `getlvodm -v rootvg` > /dev/null 2>&1
if [ "$?" != "0" ]
then
    echo "PV $1 does not appear to be in rootvg"
    echo "Are you sure you want to continue? (y/n) > \c"
    read answer
    if [ "$answer" != "y" ]
    then
        exit 1
    fi
fi

# delete ODM entries for all logical volumes on the specified disk
lqueryvg -p $1 -L | cut -c22-80 | cut -d" " -f1 | \
while read LVname
do
    echo "Deleting ODM entry for Logical Volume $LVname"
    odmdelete -q "name=$LVname" -o CuAt
    odmdelete -q "name=$LVname" -o CuDv
    odmdelete -q "value3=$LVname" -o CuDvDr
done

# delete VG customized attributes
odmdelete -q "name=rootvg" -o CuAt

# LV and VG customized devices
odmdelete -q "parent=rootvg" -o CuDv
odmdelete -q "name=rootvg" -o CuDv

# LV and VG customized dependencies
odmdelete -q "name=rootvg" -o CuDep
odmdelete -q "dependency=rootvg" -o CuDep

# customized device drivers
odmdelete -q "value1=rootvg" -o CuDvDr
odmdelete -q "value1=10" -o CuDvDr # run this only for rootvg
odmdelete -q "value3=rootvg" -o CuDvDr

# re-import the root volume group (ignore lvaryoffvg errors)
importvg -y rootvg $1

# rebuild the logical volume info from the VGDA
varyonvg rootvg

Migrating Logical Volumes

The migratepv command may be used to migrate, or move, logical volumes from one disk to another within the same volume group. When the movement takes place, LVM attempts to allocate the newly placed partitions according to the policies set out in the logical volume configuration (for example the intra-physical volume allocation policy).

In the example below we will migrate a single logical volume from one disk to another. In this case it is to free space at the center of the disk. The process is as follows:

Migrating Physical Volumes

  1. With the following command, determine which disks are included in the volume group to be migrated (replace vgname with the volume group name).
    # lsvg -p vgname
    rootvg:
    PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
    hdisk0 active 159 0 00..00..00..00..00

  2. If the disk to which you are migrating is new, perform these steps:
  3. With the following command, determine the number of PPs you will need for the migratepv (replace hdiskx with the source disk name).
    # lspv hdiskx | grep "USED PPs"
    USED PPs: 159 (636 megabytes)

    For the above example, you would need 159 FREE PPs in this volume group to successfully complete the migratepv.
  4. Determine the number of FREE PPs on the destination disk(s). Run the following command for each destination disk in the volume group (replace hdiskx with the destination disk name).
    # lspv hdiskx | grep "FREE PPs"
    FREE PPs: 204 (816 megabytes)

  5. Now, add up the FREE PPs from all of the destination disks (determined in step 4). If the sum is larger than the USED PPs from step 3, then you will have enough space to complete the migratepv procedure. If not, then another disk will be needed to get enough FREE PPs.
  6. With the following command, check to see if hd5 is on the source disk (replace hdiskx with the source disk name).
    # lspv -l hdiskx | grep hd5
    hd5 2 2 02..00..00..00..00 /blv

    If you get no output, then hd5 is not on that disk. Skip to step 7. If you get output similar to the sample shown above, perform the following:
  7. Now, migrate the disk. This may take an hour if the source disk is large.
    # migratepv <source_disk> <destination_disk(s)>

    If the source disk on which you run migratepv contains hd5 and you have not first run migratepv with the -l hd5 flag (see step 6), you will get the following errors:


    0516-1011 migratepv: Logical volume hd5 is labeled as
    a boot logical volume.
    0516-812 migratepv: Warning, migratepv did not completely
    succeed; all physical partitions have not been
    moved off the PV.

    If you attempt to run migratepv with a destination disk that is not a part of the volume group, you will get these errors:


    0516-320  getlvodm: Physical volume hdisk1 is not assigned to
    a volume group.
    0516-812 migratepv: Warning, migratepv did not completely
    succeed; all physical partitions have not been
    moved off the PV.

  8. If you want to remove the source disk from the volume group, run the following command:
    # reducevg VG_NAME <source_disk>

  9. If you plan to physically remove the old disk from the system, run the following command:
    # rmdev -l <source_disk> -d

  10. If you migrated your primary dump device (step 5c), run the following command (where /dev/hdx is usually /dev/hd7):
    # sysdumpdev -P -p /dev/hdx
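The arithmetic in steps 3 through 5 can be sketched as a small shell fragment. The lspv lines below are the sample outputs from this section; on a live system you would capture the output of lspv hdiskN instead (the disk names and values here are illustrative):

```shell
# Pull the PP count (third field) out of an lspv summary line such as
# "USED PPs:  159 (636 megabytes)".
pp_count() {
    echo "$1" | awk '{print $3}'
}

USED_LINE="USED PPs:       159 (636 megabytes)"    # from the source disk
FREE_LINE="FREE PPs:       204 (816 megabytes)"    # from a destination disk

USED=`pp_count "$USED_LINE"`
FREE_TOTAL=`pp_count "$FREE_LINE"`    # add further destination disks here

if [ "$FREE_TOTAL" -ge "$USED" ]; then
    echo "enough free PPs for migratepv"
else
    echo "not enough free PPs; another destination disk is needed"
fi
```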

Reorganize Logical Volumes within a Volume Group

The reorgvg command can be used to reorganize the logical volumes within a volume group, reallocating them to better conform to the allocation policies defined for each logical volume. The reorgvg command takes the logical volumes you give it, in the order given, and attempts to place each one in the location specified by its intra-physical volume allocation policy. It does this one logical volume at a time, migrating one physical partition at a time to the selected location.

Allocation policies are central to the reorgvg process: by changing the policies for selected logical volumes you can substantially change the resulting disk partition allocations.
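As a minimal sketch of the command form (the logical volume names lvA and lvB are hypothetical; a full worked example appears in Reorganizing Physical Partition Allocation below), you restrict the reorganization to selected disks by piping their names to reorgvg, and name the logical volumes in priority order:

```shell
# Reorganize lvA (highest priority) and then lvB, considering only hdisk7:
echo hdisk7 | reorgvg vg03 lvA lvB
```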

Mirroring the Root Volume Group

The following procedure assumes a system with three physical disks: hdisk0, hdisk1 and hdisk2. rootvg was installed on hdisk0.

  1. Extend rootvg to cover three physical volumes.
    # extendvg rootvg hdisk1 hdisk2
    

    If you don't have three disks, this procedure is of limited value: with only two disks, mirroring protects you against only half of the possible single-disk failures, because if hdisk0 fails, rootvg loses its disk quorum and the volume group itself will fail.

  2. Make two copies (that is the original, plus one copy) of each logical volume:
    # mklvcopy hd6 2 hdisk1
    # mklvcopy hd8 2 hdisk1
    # mklvcopy hd4 2 hdisk1
    # mklvcopy hd2 2 hdisk2
    # mklvcopy hd3 2 hdisk2
    # mklvcopy hd1 2 hdisk2
    

    The logical volumes are, in order: page space, the jfslog, /, /usr, /tmp, and /home (assuming the system is configured as standard). You might use three copies instead of two, but it is not obligatory; it increases the availability of the data rather than of the system. If you wish to use three copies, replace the 2 in the above commands with 3, and specify both hdisk1 and hdisk2 in each case.

  3. Synchronize copies:
    # syncvg -v rootvg
    
  4. Put the following in your /etc/filesystems file:

    The next few steps set up spare blvs in case hdisk0 is the disk that fails. This provides the system with a means of booting in such circumstances.

    /blv:
       dev      = /dev/hd5x
       vol      = "spare"
       mount    = false
       check    = false
       free     = false
       vfs      = jfs
       log      = /dev/hd8
    
    /blv:
       dev      = /dev/hd5y
       vol      = "spare"
       mount    = false
       check    = false
       free     = false
       vfs      = jfs
       log      = /dev/hd8
    
  5. Make boot logical volumes on each disk:
    # mklv -y hd5x -t boot -a e rootvg 2 hdisk1
    # mklv -y hd5y -t boot -a e rootvg 2 hdisk2
    
  6. Prepare blvs and update the IPL bootlist:
    # bosboot -a -l /dev/hd5x -d /dev/hdisk1
    # bosboot -a -l /dev/hd5y -d /dev/hdisk2
    # bootlist -m normal hdisk0 hdisk1 hdisk2
    
  7. Create secondary system dump device:
    # mklv -y hd7x -t sysdump -a e rootvg 2 hdisk1
    # sysdumpdev -P -s /dev/hd7x
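To verify the result, you can list the logical volumes in rootvg and check that the PPs column shows twice the LPs for each mirrored volume (a suggested check using standard commands; not part of the original sequence):

```shell
# lsvg -l rootvg           # each mirrored LV should show PPs = 2 x LPs
# lslv hd4 | grep COPIES   # should report COPIES: 2 for a mirrored LV
```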
    

Mirroring Other Volume Groups

The general approach to create mirrored volume groups is described below.

  1. Create a volume group:
    # mkvg -y uservg hdisk3 hdisk4 hdisk5
    
  2. Create required logical volumes:
    # mklv -y userlv1 -c2 uservg 40 hdisk3 hdisk4
    # mklv -y userlv2 -c2 uservg 40 hdisk4 hdisk5
    # mklv -y userlv3 -c2 uservg 40 hdisk3 hdisk5
    
  3. Create file systems on existing logical volumes:
    # crfs -v jfs -d userlv1 -m /home/tom
    # crfs -v jfs -d userlv2 -m /home/dick
    # crfs -v jfs -d userlv3 -m /home/harry
    
  4. Mirror the file system log:
    # mklvcopy loglv00 2 hdisk4 hdisk5
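As in the rootvg procedure, the newly created copies should be synchronized (a suggested final step, using the volume group name from the example above):

```shell
# syncvg -v uservg
```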
    

Replacing and Resynchronizing Failed Disks

Warning: This procedure assumes that the fixed disk does not contain any system file systems (/, /usr, /var, /dev/hd5, /dev/hd6, and so on). If you are replacing a disk that contains these file systems, back up your system with smit startup (to make an image backup), then replace the disk and reinstall the machine.

This procedure may only work on disks that are still recognized by the system.

If you are on AIX 3.1.7 or earlier, contact the AIX Support Center to order APAR fix IX20478 before following this procedure.

You should always back up your system before you make any changes to the system. That way you can always restore back to your original status if anything should go wrong.

  1. Log in as root.
  2. Unmount all single-copy file systems on the disk:
    # unmount /directory

    (directory is the mount point of the file system).
  3. Remove all single-copy file systems on the disk.
    # rmfs /Directory

  4. For mirroring only, remove physical-partition copies from the disk.
    # rmlvcopy LVname Copies PVname

    where Copies is the number of copies that will remain and PVname is the disk that you are replacing.
  5. If you have any paging space in this drive:
  6. Remove the disk from the volume group:
    # reducevg -df VGname PVname

  7. Delete the disk from the system configuration:
    # rmdev -l PVname -d

  8. Run the following command, making sure the PVname that you removed is not listed:
    # lspv

  9. Shut down the machine.
  10. The IBM CE should remove and replace the disk drive.
  11. Power the machine back up.
  12. Run lspv again, making sure the PVname you added is listed:
    # lspv

    If the PVname is not listed, configure the new disk into the system:
    # cfgmgr

    Then run lspv again:
    # lspv

    If the PVname is still not listed, the problem may be hardware. Run diagnostics.

    If the PVname is listed, continue with the following steps.

  13. Add the disk to the volume group.
    # extendvg VGname PVname

  14. Remake logical volumes and file systems. Use SMIT if you are not familiar with the syntax of the following commands.
    # mklv -OPTIONS VGname #PPs PVname(s)
    # crfs -v jfs -d LVname -m /Directory

  15. For mirroring only, extend multiple-copy logical volumes onto the disk:
    # mklvcopy LVname Copies PVname

    See InfoExplorer for correct syntax and parameters of the mklvcopy command.
  16. For mirroring only, resynchronize copied physical partitions (PPs):
    # syncvg -p PVname

  17. Now reboot your system with the reboot command:
    # reboot

    We recommend that you back up your system when you get it running properly. Then, if you have to reinstall sometime in the future, you can restore the system to its current configuration.

Using a Mirrored External Drive on another System

Although it is possible to mirror data onto an external drive, and then access that information from another system, there are a number of complications associated with this approach.

The problem occurs when you attempt to place the disk back into the original system. If a disk is removed from a volume group, updated, and then returned, there is no way to control which copy of the data will be used to resynchronize the other copies. Normally the disk which was last varied on is the one used to resynchronize the other copies. This means that if your original system is brought back online without the disk re-inserted, its copy will be varied on and considered the latest version. If, however, you re-insert the disk and then vary on the volume group, the disk which was re-inserted will be used to resynchronize the original copy.

Note, however, that if any LVM information is changed while the disk is in your backup system, those changes will not be known to your primary system, even if the backup disk is used to resynchronize the primary disk. LVM changes include creating, removing, or expanding any file system, paging space, or other logical volume.

Therefore, be sure to take the following steps:

  1. Remove the disk from the primary system and vary the volume group on the primary system offline.
  2. Add the disk to the backup system. Do not change any LVM information; in other words, you can read and write to the disk, but you must not allocate new disk partitions, change logical volume attributes, and so on.
  3. Before returning the disk to the primary system, export the volume group on the primary system.
  4. Add the disk to the primary system, and re-import the volume group using the disk which was just re-inserted.
  5. Vary on the volume group; at this time the copies on the primary disk should be resynchronized.
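The steps above might be sketched with the following commands (a hypothetical outline; uservg and hdisk3 are illustrative names, and the exact sequence should be verified against InfoExplorer):

```shell
# On the primary system: take the volume group offline before removing the disk
varyoffvg uservg
# (physically move the disk to the backup system; read and write to it there,
#  but make no LVM changes)

# Before returning the disk: remove the VG definition from the primary system
exportvg uservg

# After re-attaching the disk to the primary system:
importvg -y uservg hdisk3   # re-import using the disk which was re-inserted
varyonvg uservg             # copies on the primary disks resynchronize now
```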

Accessing a Volume Group without a Quorum

An attempt is made to vary on a volume group:


# varyonvg vg03
varyonvg: Physical Volume hdisk6 is missing
varyonvg: Physical Volume hdisk7 is missing
varyonvg: Physical Volume hdisk8 is active
varyonvg: Physical Volume hdisk9 is missing
varyonvg: Physical Volume hdisk10 is missing
varyonvg: Physical Volume hdisk11 is active
lvaryonvg: unable to vary on volume group vg03 without a quorum
varyonvg: volume group is not varied on

Since there is not a quorum, the only way to vary on the volume group with just the two available physical volumes is to turn off quorum checking and then reissue the varyonvg command:


# chvg -Qn vg03
# varyonvg vg03
varyonvg: Physical Volume hdisk6 is missing
varyonvg: Physical Volume hdisk7 is missing
varyonvg: Physical Volume hdisk8 is active
varyonvg: Physical Volume hdisk9 is missing
varyonvg: Physical Volume hdisk10 is missing
varyonvg: Physical Volume hdisk11 is active
varyonvg: Volume group vg03 is varied on
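Note that chvg -Qn changes the volume group characteristic persistently. Once the missing disks have been repaired and are available again, you may want to re-enable quorum checking (a suggested follow-up, not part of the original sequence):

```shell
# chvg -Qy vg03
```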

Reorganizing Physical Partition Allocation

The system administrator wishes to create a new logical volume with five logical partitions, allocated directly in the center of physical volume hdisk7, without incurring any fragmentation. The administrator first examines the current allocation for physical volume hdisk7:


# lslv -p hdisk7
hdisk7:
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE 1-10
FREE FREE FREE FREE FREE FREE 11-16

USED USED USED USED USED USED USED USED USED USED 17-26
USED USED USED USED USED USED 27-32

USED USED USED USED USED USED USED USED USED USED 33-42
USED USED USED USED USED FREE 43-48

FREE USED FREE USED USED FREE USED USED USED USED 49-58
USED USED USED USED FREE FREE 59-64

FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE 65-74
FREE FREE FREE FREE FREE FREE 75-80

There are enough free partitions on hdisk7, but there is no free space in the center of the disk. Nevertheless, the administrator can go ahead and create the new logical volume, and then reorganize hdisk7 so that the new volume is located in the center:


# mklv -a c -y lvtest vg03 5 hdisk7
lvtest

The logical volume lvtest is created with five partitions on volume group vg03, physical volume hdisk7. The intra-physical volume policy was specified to be c (center allocation). The current allocation of lvtest is shown below.


# lslv -p hdisk7 lvtest
hdisk7:lvtest:N/A
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE 1-10
FREE FREE FREE FREE FREE FREE 11-16

USED USED USED USED USED USED USED USED USED USED 17-26
USED USED USED USED USED USED 27-32

USED USED USED USED USED USED USED USED USED USED 33-42
USED USED USED USED USED 0001 43-48

0002 USED 0003 USED USED 0004 USED USED USED USED 49-58
USED USED USED USED 0005 FREE 59-64

FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE 65-74
FREE FREE FREE FREE FREE FREE 75-80

For lvtest to be placed in the center during a reorganization, it has to be given priority. To grant lvtest priority, the individual logical volumes to be reorganized must be explicitly specified in order of priority; therefore, the system administrator looks to see what other logical volumes are located on hdisk7 and which of them are allocated in the center.


# lspv -p hdisk7
hdisk7:
PP RANGE STATE REGION LV NAME TYPE MOUNT POINT
1-16 free edge
17-25 used middle lv0302 jfs /usr/etc
26-32 used middle lv0303 jfs /usr/bin
33-37 used center lv0303 jfs /usr/bin
38-47 used center lv0304 jfs /home/src
48 used center lvtest jfs
49 used middle lvtest jfs
50 used middle lv0308 jfs /systest
51 used middle lvtest jfs
52-53 used middle lv0308 jfs /systest
54 used middle lvtest jfs
55-62 used middle lv0308 jfs /systest
63 used middle lvtest jfs
64 free middle
65-80 free edge

Since logical volume lv0304 is the only logical volume occupying enough space in the center, it is the one that must be reorganized to make room for lvtest (but first confirm that lv0304 is relocatable).


# lslv lv0304
LOGICAL VOLUME: lv0304 VOLUME GROUP: vg02
LV IDENTIFIER: 919F8392982C9301.5 PERMISSION: read/write
VG STATE: active/complete LV STATE: closed/syncd
TYPE: jfs WRITE VERIFY: off
MAX LPS: 150 PP SIZE: 2 megabyte(s)
COPIES: 3 SCHED POLICY: parallel
LPs: 24 PPs: 72
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: center UPPER BOUND: 32
MOUNT POINT: /home/src LABEL: None
MIRROR WRITE CONSISTENCY: on
EACH LP COPY ON A SEPARATE PV ?: yes

Now use the command reorgvg:


# echo hdisk7 | reorgvg vg03 lvtest lv0304

Now, list the new allocation of lvtest:


# lslv -p hdisk7 lvtest
hdisk7:lvtest:N/A
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE 1-10
FREE FREE FREE FREE FREE FREE 11-16

USED USED USED USED USED USED USED USED USED USED 17-26
USED USED USED USED USED USED 27-32

USED USED USED USED USED 0001 0002 0003 0004 0005 33-42
USED USED USED USED USED USED 43-48

USED USED USED USED USED USED USED USED USED USED 49-58
USED USED USED USED USED FREE 59-64

FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE 65-74
FREE FREE FREE FREE FREE FREE 75-80

Recovering Partitions from a Down Physical Volume

A logical volume is defined with 15 logical partitions, using three copies to maintain high data reliability and availability. A strict allocation is specified, so that no logical partition has any two of its physical partitions on the same physical volume. The logical volume is also defined to use a maximum of six physical volumes.


# mklv -c 3 -u 6 vg02 15
lv0204

Later, the system administrator sees in the system error log that physical volume hdisk6 has failed, and lists the contents of the physical volume to see what is stored on it:


# lspv -l hdisk6
hdisk6:
LV NAME LPS PPS DISTRIBUTION MOUNT POINT
lv0201 10 10 05..05..00..00..00
lv0202 10 10 00..10..00..00..00 /home/smith/test
lv0204 15 15 00..00..15..00..00 /transactions
*free* 0 20 10..00..15..00..15

Since hdisk6 contains part of the logical volume lv0204, and maintaining maximum availability is the goal, the administrator decides to migrate the physical partitions of lv0204 that are allocated on hdisk6 to the other physical volumes used by lv0204. First the other physical volumes used by lv0204 must be determined; then the migratepv command can be invoked.


# lslv -l lv0204
lv0204:/transactions
PV COPIES IN BAND DISTRIBUTION
hdisk3 015:000:000 100% 000:000:015:000:000
hdisk4 015:000:000 80% 000:000:012:003:000
hdisk5 015:000:000 73% 000:004:011:000:000
hdisk6 015:000:000 100% 000:000:015:000:000
hdisk9 015:000:000 60% 000:006:009:000:000
hdisk12 015:000:000 100% 000:000:015:000:000
# migratepv -l /transactions hdisk6 hdisk3 hdisk4 hdisk5 hdisk9 hdisk12

The administrator then examines the new allocation for the logical volume:


# lslv -l lv0204
lv0204:/transactions
PV COPIES IN BAND DISTRIBUTION
hdisk3 018:000:000 83% 000:003:015:000:000
hdisk4 018:000:000 80% 000:000:012:003:000
hdisk5 015:000:000 73% 000:004:011:000:000
hdisk9 016:000:000 56% 000:006:009:001:000
hdisk12 023:000:000 83% 000:000:019:004:000