AIX V3 extends traditional UNIX disk management facilities through the logical volume manager (LVM). The logical volume manager provides sophisticated disk management services that allow system administrators to establish and manage their storage environment without significant effort or extensive experience. This chapter outlines the advantages of the logical volume manager and explains how to use these concepts.
The logical volume manager introduces many terms and concepts that may be new to users of traditional UNIX systems. This section defines some of these terms.
In the logical volume manager, a Physical Volume, or PV, represents a single disk drive (see Figure - Physical Volumes, Volume Groups and Physical Partitions). Each physical volume has a unique, permanently assigned, system-wide identifier called its physical volume ID or PVID. A physical volume will often be referred to by its logical name. By default, this name takes the form hdiskx, where x is a number, for example hdisk2. See Managing Physical Volumes for more details on physical volumes.
To provide improved flexibility in dealing with the storage space contained in these physical volumes, the LVM introduces the concept of a volume group or VG. A volume group is a collection of physical volumes which are represented to the user or system administrator as a single pool of disk space. See Figure - Physical Volumes, Volume Groups and Physical Partitions. Each volume group can consist of 1 to 32 physical volumes. Each AIX system can have up to 255 volume groups. System and user file systems are defined within volume groups and not within physical disks. This allows file systems to span multiple physical disks without the knowledge of the user.
Each volume group in the system has a unique name of up to 15 characters and a unique ID number called a Volume Group ID or VGID that is generated by the system.
Each physical volume belonging to the volume group contains a Volume Group Descriptor Area or VGDA, which describes the volume group and its contents. This includes information on the physical volumes, physical partitions, and logical volumes it contains. The VGDA allows a volume group to be self-describing. See Volume Group Descriptor Area (VGDA).
This self-describing quality of volume groups has the advantage of allowing them to be dynamically added to or removed from a system. This is advantageous in configurations including removable disk technology or external disks which will occasionally be moved from one system to another. For example, a volume group containing removable disks or a string of external disks can be removed (exported) from one system and added (imported) to another system, making its file systems and their contents available to the new system without having to explicitly define them.
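This is what the exportvg and importvg commands provide. As a sketch (the volume group name extvg and the disk name hdisk4 are illustrative only), the move might proceed as follows. On the original system:
# varyoffvg extvg
# exportvg extvg
After the disks have been attached to the new system:
# importvg -y extvg hdisk4
# varyonvg extvg
The importvg command reads the VGDA from hdisk4 and recreates the volume group definition on the new system, after which its file systems can be mounted.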
Volume groups not only ease the definition and administration of filesystem space in AIX, but the ability to move them easily between systems makes possible enhanced levels of application availability. See Managing Volume Groups for more details on volume groups.
The AIX V3 installation process creates the first volume group called the rootvg. The rootvg consists of a base set of logical volumes required to start the system (additional logical volumes can be defined by the system administrator). You can choose which physical volumes will be incorporated into the rootvg at installation time.
Additional physical volumes which are attached to a system can be added to the root volume group at a later date, or to a different volume group defined by the system administrator.
See Size of rootvg: for more details about rootvg.
When a volume group is created, it is logically one large pool of disk space. In order to allocate this space to file systems, the large pool is broken down into smaller pieces called Physical Partitions or PPs. See Figure - Physical Volumes, Volume Groups and Physical Partitions. This is analogous to changing a dollar into 100 one cent pieces so that you can spend exactly the right amount. You can better spend your volume group dollar by dividing it into physical partition pennies.
Physical partitions are the smallest units of disk space that the logical
volume manager can manage and allocate. All physical volumes in a particular
volume group share the same physical partition size. The default size for a
physical partition is 4MB. However, the system administrator has the option of
setting the physical partition size for any new volume group to a value between
1MB and 256MB. The value must be equal to a power of 2 (for example, 1, 2, 4, 8,
and so on). Each physical volume in the system can contain up to 65,535
physical partitions.
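For example, a new volume group with a non-default physical partition size could be created with the mkvg command (a sketch; the volume group name datavg and the disk names are illustrative):
# mkvg -y datavg -s 8 hdisk2 hdisk3
The -s 8 flag sets an 8MB physical partition size for all physical volumes in datavg; the partition size cannot be changed once the volume group has been created.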
Figure: Physical Volumes, Volume Groups and Physical
Partitions
Within each volume group, the system administrator is able to define Logical Volumes or LVs. A logical volume is a collection of physical partitions which is logically viewed as a single piece of contiguous disk storage by its users (see Figure - Logical Volumes). A logical volume can have several uses, the most common being to contain an AIX journaled file system or jfs. However, it can also be used to contain an AIX paging space, a file system log or jfslog, a boot partition, or a system dump area (see Special System Logical Volumes), or simply a raw disk area usable by a database. The concept of a logical volume is similar to that of a disk partition or minidisk seen in traditional UNIX implementations. However, a logical volume offers far more power and flexibility.
Data on logical volumes appears to the user to be contiguous. In reality, however, the logical volume manager can place the physical partitions making up the logical volume anywhere within the volume group. This means they can be spread across two or more physical volumes, or over several non-contiguous areas of the same physical volume. When creating a logical volume, the user can optionally specify how its physical partitions are to be selected (that is, whether they should be on one physical volume or spread across many, and where on the physical volume the partitions should be placed, if space is available). The user can also choose to have the system create and manage mirrored copies of the physical partitions in the logical volume. In this way, mirroring (see Disk Mirroring) of an AIX file system can be accomplished without any application modification.
Logical volumes are extensible. A logical volume can be made larger
while the system is running simply by adding more physical
partitions to it.
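For instance, assuming a logical volume named lv01 (an illustrative name), five more logical partitions could be allocated to it while the system is running:
# extendlv lv01 5
If the logical volume holds a journaled file system, the chfs command is normally used instead, since it extends both the file system and its underlying logical volume.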
Figure: Logical Volumes
Each logical volume is made up of a collection of
logical partitions. Each of these logical partitions is represented by one or
more physical partitions in the volume group. In the absence of mirroring in
the logical volume, each logical partition is contained in one physical
partition. However, if mirroring (see Disk Mirroring)
has been implemented, each logical partition is actually
represented by two or three physical partitions, each a mirrored copy of the
other. Logical partitions and physical partitions are referred to by number. A
logical partition is numbered by its position relative to the start of the
logical volume to which it belongs, and a physical partition by its position
relative to the beginning of the physical volume on which it resides. Thus,
logical partition 1 of a logical volume might be contained in physical
partition 23 of physical volume hdisk1; logical partition 2 in physical
partition 45, and so on. See Figure - Logical Partitions and
Mirroring. Since each logical partition is really one or more occurrences
of a physical partition, it also has a default size of 4MB, but is definable
between 1MB and 256MB.
Figure: Logical Partitions and Mirroring
Bad block relocation is the process whereby read-write requests are redirected to a new block when a disk block becomes error prone. One of the features of the LVM is that it hides bad block relocation from the application using the logical volume (for example, the AIX file system or the virtual memory manager).
Under LVM, the process is transparent in the sense that the application is unaware that requests directed to a physical block are actually being resent to a different block. In some cases, disk hardware subsystems exist that are able to perform this service independently of the LVM software. In this case, the LVM device driver will take advantage of this service to improve performance. However, a single interface is still presented to the higher level application.
The LVM requires that a quorum, or majority, of VGDAs and VGSAs be accessible in order for a volume group to remain active, or to be varied on. The idea of the quorum is to ensure that a volume group is kept in a known and recoverable state.
The number of VGDAs contained on a single disk varies according to the number of disks in the volume group:
- A volume group with a single physical volume has two VGDAs on that disk.
- A volume group with two physical volumes has two VGDAs on the first disk and one on the second.
- A volume group with three or more physical volumes has one VGDA on each disk.
Consider, for example, a volume group containing two physical volumes: two VGDAs are placed on the first disk and one on the second. Figure - Disk Quorum includes an example of this scenario. In this case, the failure of the first disk will cause two of the three VGDAs to be inaccessible, and therefore cause the failure of the entire volume group. If the second of the two disks failed, however, only one of the three VGDAs is inaccessible, and the volume group remains accessible. If you have three or more disks in the volume group, then a quorum will generally remain upon failure of any one of the disks.
Obviously, this has implications when one comes to use disk mirroring in
order to ensure high availability. In a two-disk mirrored system, if the first
disk fails, two of the three VGDAs are lost, and the entire volume group
becomes unavailable. This defeats the purpose of mirroring. For this reason,
three or more (and generally an odd number) disk units provide a higher degree
of availability and are highly recommended where mirroring is desired.
Figure: Disk Quorum
Note: With AIX/6000* Version 3.2.3, there is the ability to turn off disk quorum protection on any volume group. Turning off quorum protection allows a volume group to remain online even when a quorum, or majority, of its VGDAs is not online. This would allow the volume group to remain online in the situation described above. This capability provides for a less expensive mirroring solution, but does carry the risk of data loss, since after a disk failure the data is accessible but no longer mirrored.
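As a sketch of this capability (datavg is an illustrative volume group name), quorum checking is turned off or back on with the chvg command:
# chvg -Q n datavg
# chvg -Q y datavg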
AIX V3 and the logical volume manager provide a disk mirroring facility. Disk mirroring works by associating two or three physical partitions with each logical partition in a logical volume. When you write data to your logical volume, it is written to all the physical partitions that are associated with the affected logical partition. See Figure - Logical Partitions and Mirroring.
Mirroring is established when a logical volume is created. The mklv command allows you to select one or two additional copies for each logical volume. Mirroring can also be added to an existing logical volume using the mklvcopy command. When you create a logical volume, you associate with it policies that help determine how mirroring is established. You can tell the logical volume manager to ensure that mirrored copies of logical partitions are placed on different physical volumes. These policies are discussed in Making a New Logical Volume.
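As a sketch (the names datalv and datavg are illustrative), a logical volume of ten logical partitions with two copies of each partition could be created, or an existing logical volume raised to two copies, as follows:
# mklv -y datalv -c 2 datavg 10
# mklvcopy datalv 2
The -c flag of mklv gives the total number of copies (1 to 3) of each logical partition, and mklvcopy raises the number of copies of an existing logical volume to the value given.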
A key point about mirroring is that it occurs at the level of a logical volume. This means that any user, application, or operating system facility that uses disk at the logical volume level can take advantage of mirroring.
Mirroring data increases its availability. With mirroring in operation, the following factors can further affect availability:
IBM has implemented the logical volume manager to overcome some of the major weaknesses seen in traditional UNIX storage management. The key advantages are as follows:
Traditional UNIX implementations confined each file system to a single physical volume (fixed disk). AIX V3 allows file systems to span multiple physical volumes. The logical volume manager groups physical volumes into pools called volume groups, from which file systems are defined. This concept of the volume group allows file systems under AIX V3 to span multiple physical volumes (disks), or parts thereof.
Traditional UNIX systems limited file system flexibility by not allowing dynamic expansion of file systems beyond their initial size. The AIX file system implementation, in conjunction with the Logical Volume Manager, allows for the dynamic extension of file system sizes beyond their initial definition. Anyone who has had to reorganize a UNIX system when key file systems have become full will understand immediately the advantage of this capability.
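For example, assuming a /home file system that has filled up, it could be extended by 8192 512-byte blocks (4MB, one default physical partition) while it remains mounted and in use (a sketch):
# chfs -a size=+8192 /home
The logical volume containing /home is extended automatically if additional partitions are needed.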
The logical volume manager has a function whereby critical data can be automatically and transparently replicated. Logical volume mirroring provides for the creation and maintenance of two or three online copies of any AIX file system. Mirroring allows the system to withstand the failure of a disk containing a mirrored file system. In this case, the system continues, using the remaining online copy; see Disk Quorum for additional details.
The majority of the LVM technology that is available in AIX V3 has been included in the Open Software Foundation's OSF/1** product. AT&T**'s USL group has indicated that future releases of UNIX V.4 will include technology that is similar to the logical volume manager. While the logical volume manager technology is an IBM offering today, it is clear that the logical volume manager or similar technology will be adopted by other UNIX systems over time.
The data that describes the components of the LVM is not kept in one place. It is important to understand that this descriptive data on volume groups, logical volumes, and physical volumes is kept in a number of places.
The ODM database is the place where most AIX V3 system configuration data is kept. We will not enter into a detailed discussion of the ODM database here. See Object Data Manager (ODM) for more details about ODM.
ODM contains information about all configured physical volumes, volume groups and logical volumes. This information mirrors the information found in the VGDA. The process of importing a VGDA, for example, involves copying the VGDA data for the imported volume group into the ODM. When a volume group is exported the data held in the ODM about that volume group is removed from the ODM database.
The ODM data also mirrors the information held in the Logical Volume Control Block; see Logical Volume Control Block (LVCB).
The VGDA, located at the beginning of each physical volume, contains information that describes all the logical volumes and all the physical volumes that belong to the volume group of which that physical volume is a member. The VGDA is updated by almost all the LVM commands and subroutines, including such apparently read-only mechanisms as the vary on process.
The VGDA is a key part of the logical volume manager and the function that volume groups can provide. In effect, the VGDA makes each volume group self-describing. An AIX system can read the VGDA on a disk, and from that, the system can determine what physical volumes and logical volumes are part of this volume group. This allows a system to import this information, and therefore import the volume group and all the configuration work that has already been performed on that volume group. See Importing and Exporting Volume Groups for details on the importing and exporting of volume groups.
Each disk contains at least one VGDA. This is important at vary on time. The time stamps in the VGDAs are used to determine which VGDAs correctly reflect the state of the VG. VGDAs can get out of sync when, for example, a volume group of four disks has one disk failure. The VGDA on that disk cannot be updated while it is not operational. Therefore, we need a way to update this VGDA when the disk comes back to life, and this is what the varyon process does.
The VGDA is allocated when the disk is assigned as a physical volume (with the command mkdev). This actually only reserves a space for the VGDA at the start of the disk. The actual volume group information is placed in the VGDA when the physical volume is assigned to a volume group (using the mkvg or extendvg commands). When a physical volume is removed from the volume group (using the reducevg command) the volume group information is removed from the VGDA.
The VGSA contains state information about physical partitions and physical volumes. For example, the VGSA knows if one physical volume in a volume group is unavailable. The VGSA is managed by the logical volume device driver in the AIX V3 kernel.
Both the Volume Group Descriptor Area and the Volume Group Status Area have beginning and ending time stamps which are very important. These time stamps enable LVM to identify the most recent copy of the VGDA and the VGSA at vary on time, that is, when a volume group is initialized (see Varying On and Off Volume Groups). The LVM requires that the time stamps for the chosen VGDA be the same as those for the chosen VGSA.
The LVCB is located at the start of every logical volume. It contains information about the logical volume, and takes up a few hundred bytes. Applications writing directly to a raw logical volume must allow for the LVCB, and not write into the first 512-byte block. The command getlvcb -TA displays the information held in the LVCB as follows:
# getlvcb -TA hd2
AIX LVCB
intrapolicy = c
copies = 1
interpolicy = m
lvid = 00011187ca9acd3a.7
lvname = hd2
label = /usr
machine id = 111873000
number lps = 72
relocatable = y
strict = y
type = jfs
upperbound = 32
fs = log=/dev/hd8:mount=automatic:type=bootfs:vol=/usr:free=false
time created = Tue Jul 27 13:38:45 1993
time modified = Tue Jul 27 10:58:14 1993
Some LVM configuration data is stored in AIX V3 files. In particular, most of the LVM storage constructs appear also as AIX V3 devices. Each volume group has a device associated with it, for example /dev/rootvg. Each physical volume has a device associated with it, for example /dev/hdisk3. Each logical volume has a device associated with it, for example /dev/hd2. This is the logical volume created at installation time to house the /usr file system. Information is also stored in /etc/filesystems.
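As an illustration, the /etc/filesystems stanza for /usr ties the file system to its logical volume device and its jfslog. The exact attributes vary from system to system, so the following is only a representative sketch:
/usr:
        dev       = /dev/hd2
        vfs       = jfs
        log       = /dev/hd8
        mount     = automatic
        check     = false
        vol       = /usr
        free      = false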
When you install AIX V3 it will create a number of file systems and logical volumes that you must have for the system to work correctly. The information to help plan the amount of space required for these file systems is contained in Disk Space Considerations.
The file systems created at installation time by AIX V3 are described below.
The main file systems are / (root), /usr, /tmp, /home and /var.
Table: AIX V3 Predefined File Systems and Logical Volumes
The AIX V3 installation process creates the first volume group called the
rootvg. You can see in the following figure the
organization of this volume group.
Figure: Predefined Logical Volumes with AIX V3
These special logical volumes are created as part of the installation process.
The Boot Logical Volume contains a stripped-down version of the operating system, which is required in order to boot the system and enable all other processes. A single boot logical volume is created in the rootvg at installation time.
A volume group may contain multiple boot logical volumes, at a maximum of one per disk. It is useful to have more than one boot logical volume per system as this provides an alternate way to boot the system when the primary boot disk fails. Boot logical volumes are created using the bosboot command. Besides initializing the new boot logical volume, this command creates a boot record at the start of the disk pointing to the boot logical volumes on the disk.
Once the new boot logical volume has been created, the system must be told to use it, if necessary, when starting up. This boot sequence is configured into NVRAM using the bootlist command.
The default size of the boot logical volume is 8MB.
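As a sketch (hdisk1 is an illustrative second disk, and flag usage may vary slightly between AIX levels), an additional boot logical volume could be created and added to the boot sequence as follows:
# bosboot -a -d /dev/hdisk1
# bootlist -m normal hdisk0 hdisk1
The bosboot command builds the boot image and boot record on hdisk1, and bootlist records in NVRAM that hdisk0 is to be tried first and hdisk1 second at the next normal boot.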
The System Dump Device is a logical volume that captures a system dump in the event of a system failure or a user-generated dump. It is initialized outside of the normal LVM interface when the system is booted, and remains open while the system is in operation. A dump logical volume is created in the rootvg at installation time. Its size is 8MB and its default position is the outer edge of the disk. See Listing the Characteristics of a Physical Volume for more information about logical volume positions.
A paging space is fixed disk storage for information that is resident in virtual memory, but is not currently being accessed. When the amount of the free real memory in the system is low, programs or data that have not been used recently are moved from real memory to paging space in order to free real memory for other activities. Paging space is allocated when data is first loaded into memory, and not when data is paged out of real memory.
You can use the lsps -a command to display the characteristics of paging spaces; the -a flag specifies that all paging spaces are to be listed. The size is given in megabytes. As long as %Used stays below 80, there is nothing to worry about.
# lsps -a
Page Space Physical Volume Volume Group Size %Used Active Auto Type
hd61 hdisk1 rootvg 32MB 25 yes yes lv
hd6 hdisk0 rootvg 32MB 47 yes yes lv
If paging space utilization goes above 80%, you will begin to get messages on the console stating that paging space is low. As a result:
The size of the default paging space is determined by the boot process, based on the amount of real memory in the system. The general rule is:
A recommendation is to create a paging space on each physical volume. You can use SMIT with the fastpath smit pgsp.
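Alternatively, the mkps command can be used directly. As a sketch (hdisk1 is an illustrative disk), the following creates a 16-partition paging space (64MB with 4MB partitions) in rootvg, activates it immediately, and marks it for activation at every restart:
# mkps -n -a -s 16 rootvg hdisk1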
The next issue of some importance is to plan which disks will be allocated to which volume groups.
In the case of a two-disk volume group, one disk is given a double vote (two VGDAs), because if both disks had equal votes, no majority could be sensed when a single disk was lost. As a consequence, if the disk with the double vote fails, the quorum is lost, since that disk holds two thirds of the group's members for quorum purposes.
The quorum mechanism helps ensure consistency of the system but can impact availability. It is important to understand that even though a disk may be physically available, it may not be available for use because of the quorum issue.
For more information see Importing and Exporting Volume Groups.
The following is a list of recommendations for rootvg:
The first task you must complete when planning an installation is to analyze your requirements for disk space and for file systems. Use the following process to determine the required amount of disk space:
It makes sense to group things such as application code, user data, program libraries, database files, and other similar items.
Your backup policy will in some ways be shaped by how much information changes. It makes sense, for example, to group together software that changes very infrequently. It would not always make sense to group large, highly volatile files with that same software.
File systems are a way of breaking file groupings down into more manageable pieces. For example, the operating system includes a file system called /home into which you would normally store users' files. If you have 10 users, nine of whom have files that need about 8MB of disk space each and one who requires 200MB of disk space, it would be reasonable to create a separate file system and logical volume for that one user's data. That file system could be in the same or a different volume group.
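As a sketch of how such a separate file system might be created (the volume group name datavg, the mount point /home/biguser, and the size are illustrative only):
# crfs -v jfs -g datavg -m /home/biguser -a size=409600 -A yes
# mount /home/biguser
The -g flag names the volume group in which the underlying logical volume is created, and the size attribute is given in 512-byte blocks (409600 blocks is 200MB).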
An example of designing a file system layout is included in Example of a File System and Volume Group Design.
We will use the system named pippin as an example of designing a file system layout.
The system pippin has three disks:
The system pippin has a range of disk requirements. These are
summarized in the following table.
Table: Disk Space Requirements for pippin
This data is obtained from IBM for the AIX V3 related information and from the providers of the applications that will be used. If this information is not available, estimates will need to be made.
To design our file systems we will use the standard file systems provided by
AIX V3 for the operating system and will create separate file systems for each
logically associated set of files. The file systems design and rationale is
summarized in the table below:
Table: File Systems Required
To exploit the functionality provided by the logical volume manager, AIX V3 provides logical volume commands that are organized into three categories: high level, intermediate level, and low level commands.
LVM uses shell scripts for the high level commands that manage logical volumes and volume groups. The high level commands call intermediate level and low level commands. The low level commands in turn call the LVM library subroutines, usually on a one-to-one basis. The intermediate level commands deal with the ODM database and the logical volume control block.
You can find lots of documentation in InfoExplorer about the high level LVM commands, but none on the intermediate and low level commands.
The following sections discuss the intermediate and low level LVM commands.
Purpose: Gets data from the logical volume control block.
Syntax: getlvcb [-acefilLmnrstuxyAT] lvname
Description:
The getlvcb command returns the control block information
for logical volume lvname. The command options determine
which information is written to standard out.
Options:
-a Returns intra-policy field.
-c Returns copies field.
-e Returns inter-policy field.
-f Returns filesystem label field.
-i Returns logical volume identifier field.
-l Returns lvname field.
-L Returns label field.
-m Returns machine id field.
-n Returns numlps field.
-r Returns relocatable field.
-s Returns strictness field.
-t Returns type field.
-u Returns upper bound field.
-x Returns date/time created.
-y Returns date/time modified.
-A Returns all of the control block fields.
-T Prints tag field with all output values.
Return Codes: 0 Command successfully completed
1 Command error
Purpose: Generates logical volume name.
Syntax: getlvname -Y prefix | -y lvname
getlvname [type]
Description:
The getlvname command generates a logical volume name for the
logical volume block device. If the type argument is contained
in the Configuration Database (PdAt), then the corresponding name
prefix is taken from the PdAt class and used to build the logical
volume name. If the type is not contained in the PdAt class, then
the default prefix (lv) is used to build the logical volume name.
If the prefix is given then a sequence is generated. Names are
formed by concatenating the name prefix with a sequence number.
If the -y flag is used, the CuDv class is checked for that name
to ensure that the same name is not used twice.
Options:
-Y Specifies prefix used to generate the logical volume name.
-y Specifies the logical volume name.
Return Codes: 0 Command successfully completed
-1 Illegal syntax.
Purpose: Gets logical volume data values from the Configuration Database.
Syntax: getlvodm [-a lvdescript] [-b lvid] [-B lvdescript] [-c lvid] [-C]
[-d vgdescript] [-e lvid] [-F]
[-g pvid] [-h] [-j pvdescript] [-l lvdescript]
[-L vgdescript] [-m lvid] [-p pvdescript] [-P] [-r lvid]
[-s vgname] [-t vgid] [-u vgdescript] [-v vgdescript] [-w vgid] [-y lvid]
Description:
The getlvodm command gets logical volume data from the Configuration
Database and writes it to standard out (stdout). The command line
option specifies what information will be retrieved.
An lvdescript, vgdescript, or pvdescript can be either an ID or
a name (for example, lvdescript can be hd1 or 0000000012345678.1).
Options:
-a lvdescript Returns logical volume name for logical volume
lvdescript.
-b lvid Returns volume group name for logical volume lvid.
-B lvdescript Returns the label for the logical volume lvdescript.
-c lvid Returns the logical volume allocation characteristics
for the lvid logical volume. The following
characteristics are returned (in the same order):
type value
intra-policy value
inter-policy value
upperbound value
strict value
copies value
reloc value
-C Returns all configured physical volumes.
-d vgdescript Returns the major number of the volume group vgdescript.
-e lvid Returns the logical volume name for logical volume
lvid.
-F Returns all the free configured physical volumes.
-g pvid Returns the physical volume name for the physical
volume pvid.
-h Returns list of volume group names known to the system.
-j pvdescript Returns the vgid for the physical volume pvdescript.
-l lvdescript Returns the logical volume identifier for logical
volume lvdescript.
-L vgdescript Returns the list of logical volume names and logical
volume identifiers for volume group vgdescript.
-m lvid Returns the file system mount point for logical
volume lvid.
-p pvdescript Returns the physical volume identifier for physical
volume pvdescript.
-P Returns a list of all configured physical volumes,
their pvids (if applicable) and the name of the
volume group they belong to (if applicable).
-r lvid Returns the reloc value for the logical volume lvid.
-s vgname Returns the volume group state for volume group vgname.
0 == varied off
1 == varied on with all PVs
2 == varied on with missing PVs
-t vgid Returns the volume group name for the vgid.
-u vgdescript Returns the auto-on value for the volume group
vgdescript.
-v vgdescript Returns the volume group identifier for volume group
vgdescript.
-w vgid Returns the pvids and pvnames for the volume group
vgid.
-y lvid Returns the type for logical volume lvid.
Return Codes: 0 Command successfully completed
1 Illegal syntax
2 Unable to access ODM
3 Object not found in the Configuration Database
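To give a feel for how these options are used, the following invocations (a sketch based on the options listed above) display the volume group names known to the system, all configured physical volumes with their owning volume groups, and the identifier of rootvg:
# getlvodm -h
# getlvodm -P
# getlvodm -v rootvg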
Purpose: Returns a volume group name.
Syntax: getvgname [-y vgname]
Description:
The getvgname command returns a volume group name. The name
is written to standard out (stdout). The name is formed by
concatenating the volume group prefix (vg) to the next sequence
number available from the CuDv class. If the -y option is used
then the CuDv class will be checked to make sure the name does
not already exist.
Options: -y Specifies the volume group name.
Return Codes: 0 Command successfully completed
-1 Illegal syntax
Purpose: Gets the next available device major number for a volume group.
Syntax: lvgenmajor vgname
Description:
Generates a major number for vgname. If the major number already
exists for the vgname, then that same number will be returned.
Return Codes: 0 Command successfully completed
-1 Illegal syntax
-2 Unable to access ODM
-3 Volume group not found in the Configuration Database
Purpose: Gets the next available device minor number.
Syntax: lvgenminor [-p preferredminornum] majornum newdevicename
Description:
Generates a minor number for LVs or VGs. If a preferred minor
number is needed, use the -p option. If a preferred number is
not available, then an error message is returned along with a
return code of 1.
Return Codes: 0 Command successfully completed
1 Preferred minor number not available
-1 Illegal syntax
-2 Unable to access ODM
-3 Volume group not found in the Configuration Database
Purpose: Releases a volume group's major number.
Syntax: lvrelmajor vgname
Description: Releases the major number from the CuDvDr class.
Return Codes: 0 Command successfully completed
-1 Illegal syntax
-2 Unable to access ODM
-3 Volume group vgname not found
Purpose: Releases a logical volume's minor number.
Syntax: lvrelminor name
Description:
Releases the minor number for device name and deletes all
/dev entries associated with the minor number.
Return Codes: 0 Command successfully completed
-1 Illegal syntax
-2 Unable to access ODM
-3 Name not found in the Configuration Database
Purpose: Writes the logical volume control block.
Syntax:
putlvcb [-a intrapolicy] [-c copies] [-e interpolicy] [-f fslabel]
[-i lvid] [-L label] [-n numlps] [-r reloc] [-s strict] [-t type]
[-u upperbound] [-N] [-v vgname] [-x vgauto_on] lvname
Description:
The putlvcb command writes the control block information into
block 0 of the logical volume lvname. Only the fields specified
are written. putlvcb can be used to write a new control block or
update an existing one.
Options:
(Each of these options (except -N) writes something to the logical volume
lvname's control block.)
-a intrapolicy the intra-physical volume allocation policy
-c copies the copy allocation values
-e interpolicy the inter-physical volume allocation policy
-f fslabel the filesystem label field
-i lvid the logical volume identifier
-L label the label field
-n numlps the number of logical partitions for lvname
-r reloc the relocation policy
-s strict the strictness allocation policy
-t type the logical volume type
-u upperbound the upperbound allocation policy
-v vgname the volume group name
-x vgauto_on the volume group auto_on value
-N This option indicates a new logical volume control
block is being written. If this flag is not set then
a control block must already exist on the logical
volume to be updated.
Return Codes: 0 Command successfully completed
1 Command failed
Purpose: Puts logical volume data values into the Configuration Database.
Syntax:
putlvodm [-a intra-policy] [-B label] [-c copies] [-e inter-policy]
[-l lvname] [-n newlvname] [-r relocatable] [-s strict-state]
[-t type] [-u upperbound] [-y copyflag] [-z size] lvid
putlvodm [-o auto-on] [-k] [-K] [-q state] [-v vgname] vgid
putlvodm [-p vgid] pvid
putlvodm [-V vgid]
putlvodm [-L lvid]
putlvodm [-P pvid]
Description:
The putlvodm command reads data from the command line and writes
it to the appropriate Configuration Database class fields. The
command line options specify what information is being written
to the Configuration Database.
Options:
The following options apply to logical volume lvid:
-a intra-policy sets the intra-policy (m, e, or c).
-B label sets the label field.
-c copies sets the copies field (1-3).
-e inter-policy sets the inter-policy (m, x, or p).
-l lvname Adds a new logical volume lvid.
-L lvid Removes logical volume lvid data.
-n newlvname sets the logical volume name to newlvname.
-r relocatable sets the relocatable flag (y or n).
-s strict-state sets the strict-state (y or n).
-t type sets the type (example: jfs).
-u upperbound sets the upperbound (1-32).
-y copyflag sets the copy flag
-z size sets the number of partitions (1-20000).
The following options apply to volume group vgid:
-o auto_on sets the auto_on flag (y or n).
-k vgid locks the volume group.
-K vgid unlocks the volume group.
-q state sets the volume group state
0 == varied off
1 == varied on with all PVs
2 == varied on with missing PVs
-v vgname Adds a new volume group vgid.
-V vgid Removes volume group vgid.
-p vgid Adds the physical volume pvid to the vg.
-P pvid Removes the physical volume pvid from the vg.
Return Codes: 0 Command successfully completed
1 Illegal syntax
2 Unable to access ODM
3 Object not found in the Configuration Database
Purpose: Changes the attributes of a logical volume.
Usage: lchangelv -l LVid [-s MaxPartitions] [-n LVname] [-M SchedulePolicy]
[-p Permissions] [-r BadBlocks] [-v WriteVerify] [-w mirwrt_consist]
1. ID of Logical Volume to change.
Flag Syntax: -l LVid [where LVid = VGid.MinorNum]
Domain: VGid is a numeric symbol representing the ID of the owning VG.
MinorNum, an integer between 0 and 256, is the minor number of
the LV. Note that these 2 fields are separated by a dot (.).
2. New Maximum Number of Partitions for the Logical Volume.
Flag Syntax: -s MaxPartitions
Domain: An integer between 1 and LVM_MAXLPS (65535)
Effect: Changes the maximum size attribute of the Logical Volume but
does not actually change the current space allocation.
3. New Logical Volume Name. Flag Syntax: -n LVname
Domain: The size of the name string must be between 1 and LVM_NAMESIZ (64).
4. New Schedule Policy. Flag Syntax: -M SchedulePolicy
Effect: If SchedulePolicy equals 1, then schedule policy is Sequential
If SchedulePolicy equals 2, then schedule policy is Parallel
5. New Permissions. Flag Syntax: -p Permissions
Effect: If Permissions equals 1, then Logical Volume is Read-Write
If Permissions equals 2, then Logical Volume is Read-Only
6. Bad Block Relocation. Flag Syntax: -r BadBlocks
Effect: If BadBlocks equals 1, then bad block relocation is enabled.
If BadBlocks equals 2, then bad block relocation is not enabled.
7. Write Verify. Flag Syntax: -v WriteVerify
Effect: If WriteVerify equals 1, writes to the LV are verified.
If WriteVerify equals 2, writes to the LV are not verified.
8. Mirror Write Consistency. Flag Syntax: -w mirwrt_consist
Effect: If MirrorWriteConsistency equals 1, keeps consistency on.
If MirrorWriteConsistency equals 2, keeps consistency off.
Purpose: Creates an empty LV that belongs to a specified Volume Group.
Usage: lcreatelv -N LVname -g VGid -n MinorNumber [ -M MirrorPolicy]
[-s MaxLPs] [-p Permissions] [ -r Badblocks] [-v WriteVerify]
[-w mirwrt_consist]
1. Minor Number assigned to the Logical Volume.
Flag Syntax: -n MinorNumber
Domain: MinorNumber, an integer between 0 and LVM_MAXLVS (256).
2. Mirror Policy. Flag Syntax: -M MirrorPolicy
Effect: If MirrorPolicy equals 1, then mirror policy is Sequential
If MirrorPolicy equals 2, then mirror policy is Parallel
3. Maximum Number of Logical Partitions allowed in the Logical Volume.
Flag Syntax: -s MaxLPs
Domain: An integer between 1 and LVM_MAXLPS (65535)
Effect: The command does not actually allocate the logical partitions during
Logical Volume creation -- merely sets the Max size attribute.
Permissions, Bad Block Relocation, Write Verify, Mirror Write Consistency.
< for these four items, refer to the lchangelv command >
Purpose: Deletes a Logical Volume from its parent Volume Group.
Usage: ldeletelv -l LVid
< refer to description of LVid under lchangelv command >
Purpose: Extends or allocates additional partitions to a Logical Volume.
Usage: lextendlv -l LVid -s Size Filename
< refer to description of LVid under lchangelv command >
1. Size or Number of Logical Partitions to allocate.
Flag Syntax: -s Size
Domain: An integer where Size + the CurrentSize of the Logical
Volume should not exceed LVM_MAXLPS (65535).
2. Filename containing Map information. Flag Syntax: Filename
Domain: Partition Map (Always required).
The map has an entry for each partition to be allocated
in the Logical Volume. Each entry is a triplet containing
1) a Physical Volume ID, 2) a Physical Partition Number, and
3) a Logical Partition Number. Each entry specifies the exact
location of the physical partition and the Logical Partition
in storage space.
Purpose: Queries the attributes of a Logical Volume.
Usage: lquerylv -L LVid [-p PVname] [-NGnMScsPRvoadlArtw]
< refer to description of LVid under lchangelv command >
1. Name of Physical Volume containing volume group descriptor area.
If the PVname is specified, the LVid is not required.
In this case, the LVid may be specified as follows: -L 0.MinorNum
Flag Syntax: -p PVname
Options:
-N Selects Logical Volume Name.
-G Selects associated Volume group ID.
-n Selects Maximum Number of Logical Partitions Allowed in the LV.
-M Selects the Logical Volume Mirror Policy.
-S Selects the Logical Volume current state.
-c Selects Current Size in Logical Partitions.
-s Selects the Physical Partition Size in the Logical Volume.
-P Selects the Permission attribute of the Logical Volume.
-R Selects the Bad Block Relocation attribute of the LV.
-v Selects the Write Verify state of the Logical Volume.
-o Selects the open/close state of the Logical Volume.
-a Selects all static attributes of the Logical Volume,
that is -NGnMScsPRvo. If this flag is combined with any
other flags (except -A), the other flags are ignored.
-d Selects Logical Partition Map (dynamic attributes) of the LV.
-l Selects the long format of output (valid with -d and -t flags).
-A Selects all attributes of the Logical Volume.
-r Selects output in lreducelv format.
-t Include tags/labels in the query output.
-w Selects the Mirror Write Consistency state of the LV.
State of the Logical Volume. Tag: LVstate
Range: 0 (UNDEFINED), 1 (DEFINED), 2 (STALE)
Open/Close Flag Tag: open_close
Range: If open_close is 1, then the Logical Volume is open
If open_close is 2, then the Logical Volume is closed
Partition Map Tag: LVMAP
Output for the -dt options:
LVMAP: pvid1:ppnum1 LVstate type lvid lpnum pvid2:ppnum2 pvid3:ppnum3
Output for the -dtl options:
LVMAP: pvid1:ppnum1 LVstate type lvid lpnum
LVMAP: pvid2:ppnum2 LVstate type lvid lpnum
LVMAP: pvid3:ppnum3 LVstate type lvid lpnum
Output for the -rt options:
LVMAP: pvid ppnum lvid
where each field is defined as follows:
LVMAP ASCII string constant which labels the output as a LV map
pvid1 physical volume identifier for first copy of logical partition
ppnum1 physical partition number for first copy of logical partition
LVstate the state of the logical volume: 0,1,2 (see above).
type the type of logical volume the pvid is allocated to
lvid logical volume identifier specifying LV that pvid1 is
allocated to
lpnum logical partition number that pvid1 is allocated to
pvid2 physical volume identifier for second copy of logical
partition (blank, if none exists).
ppnum2 physical partition number for second copy of logical
partition (blank, if none exists).
pvid3 physical volume identifier for third copy of logical
partition (blank, if none exists).
ppnum3 physical partition number for third copy of logical
partition (blank, if none exists).
pvid physical volume identifier for logical partition
ppnum physical partition number for logical partition
lpnum logical partition number that pvid is allocated to
Purpose: Reduces the number of allocated Logical Partitions in a LV.
Usage: lreducelv -l LVid -s Size Filename
< refer to description of LVid under lchangelv command >
1. Number of Logical Partitions to deallocate. Flag Syntax: -s Size
Domain: An integer that is within the current size of the Logical Volume.
2. Filename containing Map information. Flag Syntax: Filename
Domain: Deallocation Map (Always required).
The map has an entry for each of the allocated partitions
in the Logical Volume. Each entry is a triplet containing
1) a Physical Volume ID, 2) a Physical Partition Number, and
3) a Logical Partition Number. The triplet specifies the
exact location and Logical Partition Number of the said partition.
Purpose: Synchronizes all the mirrored Logical Partition(s) in the
Logical Volume.
Usage: lresynclv -l LVid
< refer to description of LVid under lchangelv command >
Purpose: Changes the attributes of a Physical Volume.
Usage: lchangepv -g VGid -p PVid [-r RemoveMode] [-a AllocateMode]
1. Remove or Return state of Physical Volume
Flag Syntax: -r RemoveMode
Effect: 1 means PV temporarily REMOVED from VG.
2 means PV RETURNED to VG. Note that returning a PV to the VG
requires execution of lvaryonvg to perform recovery and
thereby activate the returned Physical Volume.
2. Allocate/No-allocate state of the Physical Volume
Flag Syntax: -a AllocateMode
Effect: 4 means partitions in PV cannot be allocated for LV (NOALLOCPV).
8 means partitions in PV can be allocated for LV (ALLOCPV).
Purpose: Deletes a Physical Volume from its Parent Volume Group.
Usage: ldeletepv -g VGid -p PVid
Purpose: Installs or Adds a Physical Volume to a Volume Group.
Usage: linstallpv -N PVname -g VGid [-f]
Options:
-f Use the force flag to add a physical volume to a VG
when the physical volume appears to be a member of
another VG. Caution -- All data on the physical
volume is destroyed.
Purpose: Queries the attributes of a Physical Volume.
Usage: lquerypv -p PVid [-g VGid | -N PVname] [-scPnaDdAt]
Note: Either VGid or PVname is required. When both are specified,
the VGid is ignored.
Options:
-s Selects the Physical Partition Size of the Physical Volume.
-c Selects the current state of the Physical Volume.
-P Selects the Total Number of Physical Partitions in the PV.
-n Selects the Number of allocated Physical Partitions in the PV.
-a Selects all static attributes of the Physical Volume,
that is -scPn. If this flag is combined with any other
flag (except -A), the other flag is ignored.
-D Selects the number of VGDAs on the Physical Volume.
-d Selects the Physical Partition Map of the Physical Volume.
-A Selects all attributes of the Physical Volume.
-t Includes tags/labels in the query results.
Physical Partition Size. Tag: PP Size
Range: 20<=n<=28 (Size in bytes = 2 ** n)
State of the Physical Volume Tag: PV State
Range: 1 (ACTIVE), 2 (MISSING), 4 (REMOVED), 8 (NOALLOC), 16 (STALE)
Output for the -dt options:
PVMAP: pvid1:ppnum1 LVstate type lvid lpnum pvid2:ppnum2 pvid3:ppnum3
<for definitions of these fields, refer to the lquerylv command.>
Purpose: Synchronizes all mirrored partitions on a Physical Volume.
Usage: lresyncpv -g VGid -p PVid
Purpose: Creates a new VG and installs the first PV in the VG.
Usage: lcreatevg -a VGname -V MajorNumber -N PVname -n MaxLVs
-D VGDescriptorSize -s PPSize [-f] [-t]
1. Major Number (an integer) to be assigned to the new Volume Group.
Flag Syntax: -V MajorNumber
2. Size of Volume Group Descriptor Area.
Flag Syntax: -D VGDescriptorSize
Domain: An integer between 32 and 8192 that is the number of
blocks (512 bytes) to be reserved for one copy of the VGDA.
This Descriptor Area will contain the information on the
Physical and Logical Volumes to be installed and created in the
Volume Group. The Descriptor Area is reserved at the beginning
of each Physical Volume installed in the Volume Group.
3. Size of Physical Partitions in the Volume Group.
Flag Syntax: -s PPSize
Domain: An integer between 20 and 28. The physical partition size in
bytes can be computed as 2 ** PPsize.
Options:
-t Includes tags/labels in the query results.
-f Use the force flag to create a VG on a PV when the
physical volume appears to be a member of another VG
that is not varied on. Caution -- All data on the
physical volume is destroyed.
Purpose: Queries the attributes of a Volume Group.
Usage: lqueryvg [-g VGid | -p PVname] [-NsFncDaLPAvt]
Note: If VGid is not specified, PVname must be specified. When both
VGid and PVname are specified, the VGid is ignored.
Options:
-N Selects the Maximum Number of LVs allowed in the VG only.
-s Selects Physical Partition Size only.
-F Selects Number of Free Physical Partitions in the VG only.
-n Selects Current Number of LVs created in the VG only.
-c Selects the Current Number of PVs installed in the VG only.
-D Selects Total Number of VGDAs for the entire Volume Group.
-a Selects all static attributes of the Volume Group,
that is -NsFncD. If this flag is combined with any other
flag (except -A), the other flag is ignored.
-L Selects the Logical attributes of the Volume Group.
-P Selects PV_IDs and States associated with the Volume Group.
-A Selects all attributes of the Volume Group.
-v Selects VGid only. An input PVname must be specified.
-t Includes tags/labels in the query output.
Maximum Number of Logical Volumes allowed. Tag: Max LVs
Range: 1 to 256 (LVM_MAXLVS)
Physical Partition Size. Tag: PP Size
Range: An integer n between 20 and 28. The partition size in
bytes can be calculated using 2**n.
Number of Free Physical Partitions. Tag: Free PPs
Range: An integer between 0 and a number which can be calculated
by multiplying the current number of Physical Volumes in
the VG by the maximum number of Physical Partitions
in a Physical Volume which is 65535 (LVM_MAXPPS).
IDs and States of Logical Volumes under the Volume Group.
Tag: Logical (Precedes first tuple in the list)
Range: One entry for each Logical Volume associated with the
VG. Each tuple includes: Logical Volume ID, LVname,
State of the Logical Volume. The Logical Volume
states can be 0 (UNDEFINED), 1 (DEFINED), or 2 (STALE).
IDs and States of Physical Volumes installed under Volume Group.
Tag: Physical (Precedes first tuple in the list)
Range: One entry for each Physical Volume installed in the VG.
Each tuple includes: Physical Volume ID, pvnum_vgdas,
State of the PV. The PV states can be 1 (ACTIVE),
2 (MISSING), 4 (REMOVED), 8 (NOALLOC) or 16 (STALE).
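For example, a common way to inspect the VGDA-derived information for the volume group containing a particular disk is to query by physical volume name with all attributes and tags selected (a sketch; hdisk0 is illustrative):
# lqueryvg -p hdisk0 -At
This displays the maximum number of logical volumes, the partition size, the number of free partitions, and the lists of logical and physical volume IDs and states described above.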
Purpose: Queries the IDs of all Volume Groups in the System.
It is assumed that there is only one System in the environment.
Usage: lqueryvgs [-NGAt]
Options:
-N Selects the Number of Volume Groups created so far in the System.
-G Selects IDs of Volume Groups in the System.
-A Selects all attributes of the System Volume Group set,
that is, -NG. If this flag is combined with any other
flag, the other flag is ignored.
-t Includes tags/labels in the query output.
Number of Volume Groups in the System. Tag: VG Count
Range: 1 to 255 (LVM_MAXVGS)
Purpose:
Varies a Volume Group on-line. After varying a Volume Group on-line,
the user is permitted to do the following:
1) if the '-o' flag is not invoked, queries and other system
management functions can be performed on the Volume Group, but
opens of the Logical Volumes in the Volume Group are not permitted.
2) if the '-o' flag is invoked, opens of, and thereby access to, the
Logical Volumes in the Volume Group are permitted.
Usage: lvaryonvg -a VGname -V MajorNumber -g VGid [-ornpft] Filename
1. Major Number (an integer) of the Volume Group to be varied on.
Flag Syntax: -V MajorNumber
Options:
-o opens and access to the LVs in the VG is permitted.
If '-o' is not invoked, opens and access to the LVs
in the VG is not permitted; however, queries and other
system management functions can be performed on the VG.
-r Automatic Resynchronization of LVs and PVs in the VG
containing stale partitions will be performed.
If '-r' is not invoked, then automatic resynchronization
is not performed and should be initiated by the user.
-n the VG will be varied on if a quorum is available, even if
the names of one or more PVs in the VG are missing from the
input list. The missing PVs cannot be accessed. If '-n'
is not invoked, then the VG will not be varied on if the
VG contains a PV not in the input list of PV Names.
-p the VG will be varied on if a quorum is available even if
one or more of the PVs in the VG are missing or removed.
If '-p' is not invoked, then the VG will not be varied on
if the VG contains a PV that is missing or removed.
-f The force flag can be used to force the VG on-line
even if a quorum is not present. Caution--Data
consistency may be lost if normal operations are
attempted without a quorum.
-t Includes tags/labels in output.
2. Names of the Physical Volumes in the Volume Group to be varied on. The
names are fed via stdin or via a file (one name per line in both cases),
where the name of the file is indicated on the command line.
Flag Syntax: <filename>
Purpose: Varies a Volume Group off-line. All Logical Volumes in the
Volume Group must be closed before it can be varied off-line.
Usage: lvaryoffvg -g VGid [-f]
1. Vary the Volume Group off-line in system management mode.
Flag Syntax: -f
Domain: The Volume Group is still available to system management commands.
Purpose: Synchronize all Physical Partitions belonging to a Logical Partition.
Usage: lresynclp -l LVid -n LPnumber
< refer to description of LVid under lchangelv command >
1. Partition Number of the Logical Partition to be synchronized.
Flag Syntax: -n LPnumber
Domain: An integer between 1 and LVM_MAXLPS (65535)
Purpose: Moves a Physical Partition to a specified Physical Volume.
Usage: lmigratepp -g VGid -p SourcePVid -n SourcePPnumber
-P DestinationPVid -N DestinationPPnumber
1. ID of Volume Group containing both the source and
destination Physical Volumes.
Flag Syntax: -g VGid
The low level LVM commands report errors using the following codes:
ALLOCERR: not enough space can be allocated to hold intermediate
query results. Internally, when lqueryxx calls the appropriate
LVM subroutine, the latter by default passes all the attributes of
the item in question, including all Map information.
The command routine does the query filtering.
CFGRECERR: the configuration record could not be read.
DALVOPN: the descriptor area logical volume cannot be opened.
INVALID_PARM: an input parameter is invalid.
LPNUM_INVAL: an input logical partition number is invalid.
LVMRECERR: the Logical Volume Manager record could not be read.
LVOPEN: Logical Volume is open when it should be closed.
MAPFOPN: the LVM cannot open its internal mapped file.
MAPFHSHMAT: the LVM cannot attach to its internal mapped file.
MAPFRDWR: the LVM received an error while trying to read or write
its internal mapped file.
MISSINGPV: the Volume Group was not varied on because one of
the Physical Volumes is missing or removed.
MISSPVNAME: the Volume Group was not varied on because the
Volume Group contains a PV_ID for which no name was entered.
NOALLOCLP: the logical partition specified already has three copies.
NODELLV: the Logical Volume cannot be deleted because there are
existing logical partitions.
NOQUORUM: the Volume Group could not be varied on because access to
a majority of all volume group descriptor areas could not
be obtained.
OFFLINE: the volume group is off-line when it should be on-line.
PARTFND: the Physical Volume cannot be deleted because it
contains active Logical Partitions belonging to a Logical Volume.
PPNUM_INVAL: an input physical partition number is invalid.
PVDAREAD: an error occurred while trying to read the Volume Group
Descriptor Area (VGDA) from the specified Physical Volume.
PVMAXERR: the Physical Volume cannot be installed because the
maximum number of Physical Volumes are already installed in
this Volume Group.
PVOPERR: the Physical Volume could not be opened.
PVSTATE_INVAL: an input Physical Volume does not allow allocation.
VGDASPACE: the Physical Volume cannot be installed because there
is not enough space in the Volume Group Descriptor Area to
add a description of the Physical Volume and its partitions.
VGFULL: the volume group that the logical volume was requested to be
a member of already has the maximum number of logical volumes.
VGMEMBER: the Physical Volume cannot be installed because it is
already a member of another Volume Group.
WRTDAERR: an error occurred while trying to write the Volume
Group Descriptor Area (VGDA) to the Physical Volume.
This section will review the following physical volume related topics:
Adding a new physical volume (disk) to a RISC System/6000 requires that the disk be physically installed and then configured via the operating system. The process of installing disks is described in the Disk Drive Removal/Replacement section of the Installation and Service Guide that is provided with each RISC System/6000.
If we are dealing with an IBM disk unit, and the disk unit is physically installed, the configuration process can be initiated by running the cfgmgr -v command or by rebooting the system (which also runs cfgmgr). This command makes the device known to the system by adding it to the CuDv class in ODM. If the disk you added was powered off (for example, an external disk), it will show as defined but not available. To make the disk available for use, first power it on, then use SMIT with the fastpath smit disk and select the option Configure a Defined Disk. This uses the mkdev -l command to make the physical volume available.
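As a sketch of this sequence (hdisk2 is an illustrative new disk):
# cfgmgr -v
# lsdev -Cc disk
# mkdev -l hdisk2
The lsdev -Cc disk command lists the disks and their states (Available or Defined), and mkdev -l hdisk2 moves a Defined disk to the Available state once it has been powered on.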
There is very little configuration data that can be changed for a physical volume. The two things that can be manipulated are the allocation permission and the availability of the physical volume:
The allocation permission for a physical volume determines whether physical partitions on the disk that have not yet been allocated to a logical volume can be allocated for use by logical volumes.
Allocation permission is set using the chpv command with the -a flag. To turn off allocation permission for a disk enter:
# chpv -a n pvname
This will prevent any further physical partitions being allocated for the physical volume pvname. To turn the allocation permission back on, enter:
# chpv -a y pvname
The availability of a physical volume defines whether any logical input/output operations can be performed to the specified physical volume. Physical volumes should be made unavailable when they are to be removed from the system or are lost due to failure. To set the state of a physical volume to be unavailable, enter:
# chpv -v r pvname
This will remove all VGDA and VGSA copies from the physical volume, and the physical volume will not take part in future vary on quorum checking. Also information about the specified volume will be removed from the VGDAs of the other physical volumes in that volume group.
If the physical volume is required in order to maintain a volume group open, the attempt to remove the physical volume will fail.
To make a physical volume available enter:
# chpv -v a pvname
Typing smit chpv will take you to a menu from which you can:
If you enter lspv without any flags, it will list the physical volumes available on the system.
For example:
# lspv
hdisk0 00000061efc8f50f rootvg
hdisk1 000024977ba43a0d rootvg
hdisk2 00008898603471ca rootvg
The fields shown, from left to right, are:
If you enter lspv pvname, detailed information for the physical volume pvname will be displayed.
For example:
# lspv hdisk0
PHYSICAL VOLUME: hdisk0 VOLUME GROUP: rootvg
PV IDENTIFIER: 00000061efc8f50f VG IDENTIFIER 00011187ca9acd3a
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 4 megabyte(s) LOGICAL VOLUMES: 8
TOTAL PPs: 95 (380 megabytes) VG DESCRIPTORS: 1
FREE PPs: 3 (12 megabytes)
USED PPs: 92 (368 megabytes)
FREE DISTRIBUTION: 03..00..00..00..00
USED DISTRIBUTION: 16..19..19..19..19
The left hand pair of columns holds information about the physical volume itself. The right hand pair displays information concerning the volume group of which the physical volume is a member; this information is read from the VGDA.
The meaning of the fields is as follows:
Entering lspv -l pvname will display the allocation of logical volumes within the physical volume specified by the pvname parameter.
For example:
# lspv -l hdisk0
hdisk0:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
hd5 2 2 02..00..00..00..00 /blv
hd7 2 2 02..00..00..00..00 /mnt
hd3 3 3 03..00..00..00..00 /tmp
oa 5 5 05..00..00..00..00 /oa
hd2 70 70 04..11..17..19..19 /usr
hd6 8 8 00..08..00..00..00 N/A
hd8 1 1 00..00..01..00..00 N/A
hd4 1 1 00..00..01..00..00 /
This shows the allocation of hdisk0 physical partitions to various logical volumes. The meaning of the fields is as follows:
Entering the command lspv -p pvname returns a table of physical partition allocation in terms of disk region.
For example:
# lspv -p hdisk0
hdisk0:
PP RANGE STATE REGION LV NAME TYPE MOUNT POINT
1-2 used outer edge hd5 boot /blv
3-4 used outer edge hd7 sysdump /mnt
5-7 used outer edge hd3 jfs /tmp
8-12 used outer edge oa jfs /oa
13-15 free outer edge
16-19 used outer edge hd2 jfs /usr
20-27 used outer middle hd6 paging N/A
28-38 used outer middle hd2 jfs /usr
39-39 used center hd8 jfslog N/A
40-40 used center hd4 jfs /
41-57 used center hd2 jfs /usr
58-76 used inner middle hd2 jfs /usr
77-95 used inner edge hd2 jfs /usr
This is useful for checking the distribution of the partitions belonging to logical volumes in terms of the intra-physical volume allocation policy.
Here is what the different columns indicate:
Entering the command lspv -M pvname returns a table of physical partition allocation in terms of disk region.
For example:
# lspv -M hdisk1
hdisk1:1-19
hdisk1:20 hd61:1
hdisk1:21 hd61:2
hdisk1:22 hd61:3
hdisk1:23 hd61:4
hdisk1:24 hd61:5
hdisk1:25 hd61:6
hdisk1:26 hd61:7
hdisk1:27 hd61:8
hdisk1:28 lh:1
hdisk1:29 lh:2
hdisk1:30 lh:3
hdisk1:31 lh:4
hdisk1:32 lh:5
hdisk1:33 lh:6
hdisk1:34 lh:7
hdisk1:35 lh:8
hdisk1:36-39
hdisk1:40 hd1:1
hdisk1:41 hd1:2
hdisk1:42 hd1:3
hdisk1:43 hd1:4
hdisk1:44 hd2:71
hdisk1:45 hd2:72
hdisk1:46 hd4:2
hdisk1:47 hd1:5
hdisk1:48 hd1:6
hdisk1:49 hd1:7
hdisk1:50 hd1:8
hdisk1:51-95
The first column indicates the physical partition (if a group of contiguous partitions is free, it will indicate a range of partitions) for a particular hard disk. The second column indicates which logical partition of which logical volume is associated with that physical partition.
This is useful for determining the degree of contiguity of data on the system. It can also provide useful information should the system administrator decide to reorganize the system using a physical partition map.
This section discusses the different operations you can do on volume groups.
The mkvg command creates new volume groups. At the time of creation, the most important volume group characteristics to be set are :
An example of this command could be:
# mkvg -y newvg -d 4 hdisk5 hdisk6 hdisk7
This would create the volume group called newvg, consisting of the physical volumes hdisk5, hdisk6, and hdisk7, and only one physical volume can be added later (maximum number of disks in the volume group is 4).
The mkvg command can also be accessed via the smit mkvg fastpath. The differences from entering mkvg on the command line are:
To specify that a particular volume group is to be automatically varied on at system startup, enter:
# chvg -a y vgname
If you don't want the volume group to be varied on, enter:
# chvg -a n vgname
When a volume group is created you allocate one or more physical volumes to that volume group. You may increase the number of physical volumes in a volume group at a later date by using the extendvg command. An example of this command use:
# extendvg newvg hdisk3
In this example, physical volume hdisk3 is added to volume group newvg. If the physical volume belongs to a varied on volume group, the command will exit without adding the physical volume to the volume group. If the physical volume belongs to a volume group (it need not be a volume group from the current system) but is not varied on, the user will be prompted to ask if they wish to continue. If the volume is not part of a volume group (that is, it carries no VGDA), then it is added to the volume group without interaction with the user. The extendvg command will destroy the previous contents of any physical volume added to a volume group.
The only flag supported by this command is the -f flag. Using this flag, the command will add the physical volume to the volume group, without prompting the user, even when the physical volume contains a VGDA, as long as that physical volume is not known to any volume group on the system.
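For example, assuming hdisk4 carries an old VGDA but is not known to any volume group on this system, it could be added to newvg without the confirmation prompt by entering:
# extendvg -f newvg hdisk4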
The logical counterpart to the extendvg command is the reducevg command. This command removes one or more physical volumes from a volume group. If all the physical partitions in a volume group are removed then the volume group is also removed.
For example, to remove hdisk3 from the volume group newvg use:
# reducevg newvg hdisk3To remove a physical volume from a volume group, you must first remove all logical volumes present on the physical volume and the volume group must be varied on.
This command has two flags: -d and -f
reducevg -d forces the removal of all logical volumes on the physical volume. The user is prompted to confirm that they want the logical volumes removed.
The -f flag makes the -d flag even more dangerous by suppressing the prompt that asks the user to confirm that the logical volumes should be deleted.
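For example, using the volume group and disk names from the earlier example, the following command would remove hdisk3 from newvg and delete any logical volumes still occupying partitions on it, without asking for confirmation:
# reducevg -d -f newvg hdisk3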
To make a volume group known to the system, you have to import the volume group. If you want a volume group removed from a system configuration you have to export the volume group. The importvg and exportvg commands are used for adding and removing volume group entries to and from the system ODM database. This is most useful for the removal and addition of portable storage devices (specifically, external disks) from and to the system.
The importvg command adds a new volume group definition to ODM, by using the VGDA data located on a specified physical volume. The following functions may be performed via the importvg command:
To import a volume group, enter:
# importvg pvname
You only need to specify one of the physical volumes in the volume group, because the physical volume's VGDA contains information on all the other physical volumes in the group. The imported volume group is not automatically varied on. You must use the varyonvg command to activate the volume group in order to access it.
The importvg command reads the VGDA for a volume group and makes the volume group and its contents known to the system. The importvg command also makes the logical volumes in a volume group known to the system. During the importvg process the system will create new names for any duplicate logical volumes and for the volume group to ensure the uniqueness of volume group and logical volume names.
The system will automatically generate a name for the volume group when it processes the importvg command. To override the system generated name, use the -y flag, and enter:
# importvg -y vgname pvname
This will set the new name to vgname. This renaming allows you to import volume groups with the same name as existing volume groups on your system.
To access this command via smit, enter:
# smit importvg
The command exportvg vgname removes the definition of the vgname volume group from the system. After the volume group has been exported it can no longer be accessed. This is because the system no longer knows it exists even though physically it may still be connected.
The exportvg command does not modify any user data in the volume group. It only removes the definition from the ODM device configuration database. The primary use of the exportvg command is to move portable storage devices between systems or remove dysfunctional volume groups from a working system. You may export a volume group even if it has already been removed from your system.
A volume group should not be accessed by a second system until it has been exported from the first. This is because the second system may change the volume group, and when it is re-imported to the original system, that system's database of configuration data is no longer consistent with the contents of the volume group. It is possible for a second system to vary on a volume group without it first being exported by the originally controlling system. The exportvg command cleans up the system rather than the physical volume. Only a complete volume group can be exported, not individual physical volumes.
To access this command via smit, enter:
# smit exportvg
Once a volume group exists, it can be made available for use via the varyonvg process. This process involves a number of steps:
The command varyonvg vgname is used to vary on the volume group called vgname. The command has a range of options that can be used to overcome damage to the volume group structure or give status information. The key options of the varyonvg command are:
The -f flag can be used to force a volume group to be varied on even when inconsistencies are detected. These inconsistencies are generally differences between the configuration data for each volume group held in the ODM database and VGDA.
When a volume group is varied on, and stale partitions are detected, the vary on process will invoke the syncvg command to synchronize the stale partitions. The -n flag will suppress the invocation of the syncvg command at vary on time. This option is of value when you wish to carefully recover a volume group and you want to ensure that you do not accidentally write bad mirrored copies of data over good copies.
The -s flag allows a volume group to be varied on in maintenance or systems management mode. Logical volume commands can operate on the volume group, but no logical volume can be opened for input or output.
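For example, assuming a volume group called datavg (a hypothetical name), the three options just described would be used as follows:
# varyonvg -f datavg
# varyonvg -n datavg
# varyonvg -s datavg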
The command varyoffvg will deactivate a volume group and its associated logical volumes. This requires that the logical volumes be closed, which requires that file systems associated with logical volumes be unmounted. The varyoffvg command also allows the use of the -s flag to move the volume group from being active to being in maintenance or systems management mode.
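For example, to deactivate the hypothetical volume group datavg, or to move it into systems management mode, you could enter:
# varyoffvg datavg
# varyoffvg -s datavg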
The varyonvg and varyoffvg commands can be accessed by using the smit varyonvg and smit varyoffvg fastpaths respectively.
If you simply enter lsvg without a parameter, you will be presented with a list of all the defined volume groups, for example:
# lsvg
rootvg
By default, this form of the lsvg command lists all defined volume groups. You may list only the currently active volume groups by using the -o flag.
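For example, on the system shown above, listing only the currently active volume groups would return just rootvg:
# lsvg -o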
If you enter lsvg with a volume group name as a parameter, you will receive some detailed information about the volume group in question. For example:
# lsvg rootvg
VOLUME GROUP: rootvg VG IDENTIFIER: 00011187ca9acd3a
VG STATE: active PP SIZE: 4 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 274 (1096 megabytes)
MAX LVs: 256 FREE PPs: 71 (284 megabytes)
LVs: 14 USED PPs: 203 (812 megabytes)
OPEN LVs: 12 QUORUM: 2
TOTAL PVs: 3 VG DESCRIPTORS: 3
STALE PVs: 0 STALE PPs 0
ACTIVE PVs: 3 AUTO ON: yes
The meaning of the fields is as follows:
If you enter lsvg in the form lsvg -l vgname (the -l flag won't work without the vgname parameter), a list of the logical volumes in the volume group is displayed, with associated characteristics. For example:
# lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd6 paging 8 8 1 open/syncd N/A
hd61 paging 8 8 1 open/syncd N/A
hd5 boot 2 2 1 closed/syncd /blv
hd7 sysdump 2 2 1 open/syncd /mnt
hd8 jfslog 1 1 1 open/syncd N/A
hd4 jfs 2 2 2 open/syncd /
hd2 jfs 72 72 2 open/syncd /usr
hd9var jfs 1 1 1 open/syncd /var
hd3 jfs 3 3 1 open/syncd /tmp
hd1 jfs 8 8 1 open/syncd /home
paging00 paging 8 8 1 closed/syncd N/A
data jfs 75 75 1 open/syncd /lh/data
oa jfs 5 5 1 open/syncd /oa
lh jfs 8 8 1 open/syncd /lh
Here is what the columns of output mean:
Finally the format lsvg -p vgname displays a list of the physical volumes contained in a volume group as well as some status information including physical partition allocation. An example of output follows:
# lsvg -p rootvg
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk0 active 95 3 03..00..00..00..00
hdisk1 active 95 67 19..03..07..19..19
hdisk2 active 84 1 01..00..00..00..00
Here is a summary of the meaning of the command output:
This form of the lsvg command is useful for summarizing the concentrations of free space on the system. If the system administrator wished to create a new logical volume, then they could ascertain from the free distribution column the most likely intra-physical volume allocation strategy to provide the new logical volume with contiguity of data. If no free space were available in the desired region of disk, then the system administrator would have to change the physical volume allocation strategies of existing physical volumes, and reorganize them with respect to that strategy using the reorgvg command.
We have previously discussed physical volumes, volume groups, and physical partitions, and how they can be manipulated and managed. All these constructs are in place to support the use of disk space by users, applications, or the operating system. However, it is logical volumes that users, applications, and the operating system actually use.
The management of logical volumes, is therefore the management of disk space that is available for use. This section will review the following topics:
The command:
# mklv -y testlv -c 2 rootvg 10
will create a logical volume called testlv in the rootvg volume group. It will contain 10 logical partitions, and each logical partition consists of two physical partitions.
The volume group to which the logical volume will belong, and the number of logical partitions the logical volume will contain, must be specified.
The command:
# rmlv testlv
will remove the logical volume testlv from the system. This deallocates all physical partitions and logical partitions associated with the logical volume and removes the logical volume information from the VGDA, the ODM, and the /dev entry. All data contained in the LV is lost. The logical volume space is now available for use by other logical volumes.
An existing logical volume can be increased in size by using the extendlv command or smit extendlv; if the logical volume is used by a jfs file system, you can use the chfs command or smit chjfs. The extendlv command uses many of the flags available to the mklv command.
As mentioned above, a logical volume's size can also be increased by using the chfs command or the smit chjfs menu if the logical volume contains a jfs file system. In this case, you do not have the ability to allocate the new physical partitions specifically by naming the physical volume, using map files, or using the position flag.
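For example, to add five more logical partitions to the testlv logical volume created earlier, or to grow the /tmp file system through its jfs definition (the size attribute is expressed in 512-byte blocks, and the value shown here is purely illustrative), commands of the following form could be used:
# extendlv testlv 5
# chfs -a size=24576 /tmp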
Existing logical volumes can be copied by means of the cplv or smit cplv command. You can copy a logical volume to a new logical volume or overwrite an existing logical volume. You can also copy into a different volume group.
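For example, to copy the testlv logical volume into a new logical volume called testlv2 in another volume group (the hypothetical datavg), a command of the following form could be used:
# cplv -v datavg -y testlv2 testlv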
Over time it is probable that the logical volumes in your system will no longer be optimally placed. AIX provides two commands to assist with reorganizing logical volumes:
This command will move the contents of one physical volume to another. The physical volumes must be within the same volume group. You can migrate selected logical volumes contained on the physical volume in question. See Migrating Logical Volumes for more details on migratepv.
The reorgvg command will attempt to re-allocate physical partitions for existing logical volumes in a way that better optimizes compliance with defined logical volume allocation policies. The command can be applied to a whole volume group, or just selected logical volumes. Only logical volumes marked as relocatable can be reorganized. See Reorganize Logical Volumes within a Volume Group for more details.
All the details that you selected (or accepted by default) when the logical volume was built can be displayed using the command lslv lvname, or by using smit lslv and then selecting the status option. For example:
# lslv hd2
The meanings of these fields are:
LOGICAL VOLUME: hd2 VOLUME GROUP: rootvg
LV IDENTIFIER: 00011187ca9acd3a.7 PERMISSION: read/write
VG STATE: inactive LV STATE: opened/syncd
TYPE: jfs WRITE VERIFY: off
MAX LPs: 500 PP SIZE: 4 megabyte(s)
COPIES: 1 SCHED POLICY: parallel
LPs: 72 PPs: 72
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY: center UPPER BOUND 32
MOUNT POINT: /usr LABEL: /usr
MIRROR WRITE CONSISTENCY: on
EACH LP COPY ON A SEPARATE PV ?: yes
The command lslv -l lvname will give a summary of the manner in which the logical volume is physically allocated. Using smit lslv and then selecting the physical volume map will provide the same result. An example:
# lslv -l hd2
This command is useful for getting a quick summary of how the logical volume is allocated across physical volumes and within each physical volume it resides on. The different columns have the following meaning:
hd2:/usr
PV COPIES IN BAND DISTRIBUTION
hdisk0 070:000:000 24% 004:011:017:019:019
hdisk1 002:000:000 100% 000:000:002:000:000
If we were dealing with a mirrored logical volume we could expect to see output like:
# lslv -l hd2
lv01:/u/mirror
PV COPIES IN BAND DISTRIBUTION
hdisk2 010:000:000 100% 010:000:000:000:000
hdisk3 000:010:000 100% 005:000:000:000:005
In this example a logical volume called lv01 has been created with two copies. You can see that all of the physical partitions for the first copy are located on hdisk2 and all the physical partitions for the second copy on hdisk3.
When you use the command lslv in the form lslv -n PVID LVname you can query what the VGDA on a particular physical volume thinks the status of a logical volume is. This is very useful for tracking down problems when they occur. This variation of the lslv command cannot be accessed via smit.
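For example, to ask the VGDA held on physical volume hdisk0 for its view of the hd2 logical volume (identifying the physical volume by name here), a command of the following form could be used:
# lslv -n hdisk0 hd2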
This section discusses different scenarios about the logical volume management.
LVM configuration information is stored in the ODM, as well as in the /dev directory, and in the Volume Group Descriptor Area (VGDA) on each of the disks in the volume group. If the ODM and VGDA do not correlate, this may result in messages being issued stating they are out of sync.
If the volume group in question is not the root volume group, this may be rectified by varying off the volume group in question, and then exporting and re-importing it. Use the following sequence of commands:
# varyoffvg vgname
# exportvg vgname
# importvg -y vgname pvname
# varyonvg vgname
where vgname is the name of the volume group, and pvname is the name of a physical volume (disk) in the volume group.
If the problem is with the root volume group, then the following script should rebuild the information for the root volume group in the ODM (syntax is fixlvodm hdisk# where # is the number of the bootable disk of rootvg and fixlvodm is the name of the script):
#!/bin/ksh
# fixlvodm - Export and re-import the root volume group ODM data
# check arguments
case "$1" in
hdisk*) ;;
*) echo "Usage: fixlvodm PVname"
exit 1;;
esac
# make sure the disk is in the root volume group
lquerypv -p `getlvodm -p $1` -g `getlvodm -v rootvg` > /dev/null 2>&1
if [ "$?" != "0" ]
then
echo "PV $1 does not appear to be in rootvg"
echo "Are you sure you want to continue? (y/n) > \c"
read answer
if ["$answer" != "y"]
then
exit 1
fi
fi
# delete odm entries for all logical volumes on the specified disk
lqueryvg -p $1 -L | cut -c22-80 | cut -d" " -f1 | \
while read LVname
do
    echo "Deleting ODM entry for Logical Volume $LVname"
    odmdelete -q "name=$LVname" -o CuAt
    odmdelete -q "name=$LVname" -o CuDv
    odmdelete -q "value3=$LVname" -o CuDvDr
done
# delete VG customized attributes
odmdelete -q "name=rootvg" -o CuAt
# LV and VG customized devices
odmdelete -q "parent=rootvg" -o CuDv
odmdelete -q "name=rootvg" -o CuDv
# LV and VG customised dependencies
odmdelete -q "name=rootvg" -o CuDep
odmdelete -q "dependency=rootvg" -o CuDep
# customised device drivers
odmdelete -q "value1=rootvg" -o CuDvDr
odmdelete -q "value1=10" -o CuDvDr # run this only for rootvg
odmdelete -q "value3=rootvg" -o CuDvDr
# re-import the root volume group (ignore lvaryoffvg errors)
importvg -y rootvg $1
# rebuild the logical volume info from the VGDA
varyonvg rootvg
The migratepv command may be used to migrate, or move, logical volumes from one disk to another within the same volume group. When the movement takes place, LVM attempts to allocate the newly placed partitions according to the policies set out in the logical volume configuration (for example the intra-physical volume allocation policy).
In the example below we will migrate a single logical volume from one disk to another. In this case, the aim is to free space at the center of the disk. The process is as follows:
# lspv -p hdisk1
hdisk1:
PP RANGE STATE REGION LV ID TYPE MOUNT POINT
1-19 free outer edge
20-27 used outer middle hd61 paging N/A
28-35 used outer middle lh jfs /lh
36-38 free outer middle
39-39 used center hd9var jfs /var
40-43 used center hd1 jfs /home
44-45 used center hd2 jfs /usr
46-46 used center hd4 jfs /
47-50 used center hd1 jfs /home
51-57 free center
58-76 free inner middle
77-95 free inner edge
Do the same for hdisk0.
# lspv -p hdisk0
hdisk0:
PP RANGE STATE REGION LV NAME TYPE MOUNT POINT
1-2 used outer edge hd5 boot /blv
3-4 used outer edge hd7 sysdump /mnt
5-7 used outer edge hd3 jfs /tmp
8-12 used outer edge oa jfs /oa
13-15 free outer edge
16-19 used outer edge hd2 jfs /usr
20-27 used outer middle hd6 paging N/A
28-38 used outer middle hd2 jfs /usr
39-39 used center hd8 jfslog N/A
40-40 used center hd4 jfs /
41-57 used center hd2 jfs /usr
58-76 used inner middle hd2 jfs /usr
77-95 used inner edge hd2 jfs /usr
# migratepv -l hd9var hdisk1 hdisk0
# lspv -p hdisk1
hdisk1:
PP RANGE STATE REGION LV ID TYPE MOUNT POINT
1-19 free outer edge
20-27 used outer middle hd61 paging N/A
28-35 used outer middle lh jfs /lh
36-38 free outer middle
39-39 free center
40-43 used center hd1 jfs /home
44-45 used center hd2 jfs /usr
46-46 used center hd4 jfs /
47-50 used center hd1 jfs /home
51-57 free center
58-76 free inner middle
77-95 free inner edge
For hdisk0:
# lspv -p hdisk0
hdisk0:
PP RANGE STATE REGION LV NAME TYPE MOUNT POINT
1-2 used outer edge hd5 boot /blv
3-4 used outer edge hd7 sysdump /mnt
5-7 used outer edge hd3 jfs /tmp
8-12 used outer edge oa jfs /oa
13-13 used outer edge hd9var jfs /var
14-15 free outer edge
16-19 used outer edge hd2 jfs /usr
20-27 used outer middle hd6 paging N/A
28-38 used outer middle hd2 jfs /usr
39-39 used center hd8 jfslog N/A
40-40 used center hd4 jfs /
41-57 used center hd2 jfs /usr
58-76 used inner middle hd2 jfs /usr
77-95 used inner edge hd2 jfs /usr
# lsvg -p vgname
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk0 active 159 0 00..00..00..00..00
# lsdev -Cc disk
If the disk is not listed or is listed but not in the available state, you may need a CE's help to check/install your hardware. If it is listed and in the available state, then make sure it does not belong to another volume group:
hdisk0 Available 00-08-00-30 670 MB SCSI Disk Drive
hdisk1 Available 00-08-00-20 857 MB SCSI Disk Drive
# lspv
hdisk0 0000078752249812 rootvg
hdisk1 000000234ac56e9e none
# extendvg VG_NAME hdisk#
If extendvg finds that the disk contains information, it will issue the following warning but will not prevent you from including it in another volume group.
0516-014 linstallpv: The physical volume appears to belong to
another volume group.
0516-631 extendvg: Warning, all data belonging to physical
volume hdisk3 will be destroyed.
extendvg: Do you wish to continue? y(es) n(o)?
If you answer yes, all information on the physical disk will be lost.
# lspv hdiskx | grep "USED PPs"For the above example, you would need 159 FREE PPs in this volume group to successfully complete the migratepv.
USED PPs: 159 (636 megabytes)
# lspv hdiskx | grep "FREE PPs"
FREE PPs: 204 (816 megabytes)
# lspv -l hdiskx | grep hd5
hd5 2 2 02..00..00..00..00 /blv
If you get no output, then hd5 is not on that disk. Skip to step 7. If you get output similar to the sample shown above, perform the following:
# migratepv -l hd5 <source_disk> <destination_disk>
After the command is completed, a message will be displayed which will warn you to run bosboot on the destination disk. (See following step.) Note the disk name listed in the message.
If you attempt to run migratepv with a destination disk that is not a part of the volume group, you will get these errors:
0516-320 getlvodm: Physical volume hdisk1 is not assigned to
a volume group.
0516-812 migratepv: Warning, migratepv did not completely
succeed; all physical partitions have not been
moved off the PV.
# bosboot -a -d /dev/<destination_disk>
# bootlist -m normal <destination_disk>
# mkboot -c -d /dev/<source_disk>
# lsvg -l rootvg | grep sysdump
hd7 sysdump 2 2 1 open/syncd /mnt
If you get no output, then there are no logical volumes of type sysdump in the volume group. Skip to step 7.
If you get output similar to the sample shown above, run the following command, where <yyy> is the logical volume name (usually hd7) found in the output of the previous command and where hdiskx is the source disk name.
# lspv -l hdiskx | grep <yyy>
hd7 2 2 00..00..02..00..00 /mnt
If you get no output, skip to step 7.
If you do get output from the lspv command, run the sysdumpdev command, to check if this logical volume is the primary dump device:
# sysdumpdev -l
primary /dev/hd7
secondary /dev/sysdumpnull
If /dev/hd7 is listed as primary in the output, then logical volume hd7 is the primary dump device (/dev/hd7), and you will need to change the primary dump device. Use the following command:
# sysdumpdev -P -p /dev/sysdumpnull
# migratepv <source_disk> <destination_disk(s)>
If the source disk on which you run migratepv contains hd5 and you run migratepv without first running it with the -l hd5 (see step 6), you will get the following errors:
0516-1011 migratepv: Logical volume hd5 is labeled as
a boot logical volume.
0516-812 migratepv: Warning, migratepv did not completely
succeed; all physical partitions have not been
moved off the PV.
If you attempt to run migratepv with a destination disk that is not a part of the volume group, you will get these errors:
0516-320 getlvodm: Physical volume hdisk1 is not assigned to
a volume group.
0516-812 migratepv: Warning, migratepv did not completely
succeed; all physical partitions have not been
moved off the PV.
# reducevg VG_NAME <source_disk>
# rmdev -l -d <source_disk>
# sysdumpdev -P -p /dev/hdx
The reorgvg command can be used to reorganize the logical volumes within a volume group and reallocate them to better observe allocation policies for each logical volume. The reorgvg command will take the logical volumes you give it in the order given and attempt to place the logical volume in the location that is specified by the intra-physical volume allocation policy for each logical volume. It does this one logical volume at a time and migrates one physical partition at a time to the location you select. The example below will help explain what is happening:
# chlv -r n hd5
Alternatively, use the smit chlv menu to perform the same task.
# stopsrc -a
# killall
As an alternative you may want to enter single user mode by issuing the command:
# shutdown -m
# reorgvg rootvg hd6 hd61 lv00 hd2
0516-962 reorgvg: Logical volume 00014732eb8e7bec.1 migrated
0516-962 reorgvg: Logical volume 00014732eb8e7bec.2 migrated
0516-962 reorgvg: Logical volume 00014732eb8e7bec.10 migrated
0516-962 reorgvg: Logical volume 00014732eb8e7bec.7 migrated
As each logical volume is reorganized (based upon the order in which you gave the logical volumes to the command), you will see a line of output.
hdisk0:
PP RANGE STATE REGION LV NAME TYPE MOUNT POINT
1-2 used edge hd5 boot /blv
3-4 used edge hd7 sysdump /mnt
5-7 used edge hd4 jfs /
8-8 used edge rbackup jfs N/A
9-10 free edge
11-11 used edge hd2 jfs /usr
12-18 free edge
19-21 used edge hd3 jfs /tmp
22-32 used edge hd2 jfs /usr
33-64 used middle hd2 jfs /usr
65-65 used center hd8 jfslog N/A
66-78 used center hd2 jfs /usr
79-79 free center
80-95 used center hd2 jfs /usr
96-120 used middle hd2 jfs /usr
121-127 used middle hd1 jfs /home
128-143 used edge hd1 jfs /home
144-155 used edge hd2 jfs /usr
156-159 free edge
Here is also a listing for hdisk1:
hdisk1:
PP RANGE STATE REGION LV ID TYPE MOUNT POINT
1-10 free edge
11-22 used edge hd1 jfs /home
23-25 free edge
26-28 used edge hd1 jfs /home
29-32 free edge
33-36 used middle hd6 paging N/A
37-56 used middle hd61 paging N/A
57-58 used middle lv00 jfs /usr/local
59-64 free middle
65-76 used center hd4 jfs /
77-91 used center lv00 jfs /usr/local
92-95 used center hd1 jfs /home
96-102 used middle hd1 jfs /home
103-127 used middle lv00 jfs /usr/local
128-159 used edge lv00 jfs /usr/local
Below is a summary of how the key logical volumes have been moved:
Initial disk allocation
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
hdisk0:hd4 3 3 03..00..00..00..00 /
hdisk1:hd4 12 12 00..00..12..00..00 /
hdisk0:hd2 12 12 10..00..00..00..02 /usr
hdisk1:hd2 98 98 10..19..12..25..32 /usr
hdisk0:hd6 4 4 00..04..00..00..00 N/A
hdisk1:hd61 20 20 07..13..00..00..00 N/A
hdisk0:lv00 74 74 00..19..30..25..00 /usr/local
After first reorgvg
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
hdisk0:hd4 3 3 03..00..00..00..00 /
hdisk1:hd4 12 12 00..00..12..00..00 /
hdisk0:hd2 110 110 12..32..29..25..12 /usr
hdisk1:hd6 4 4 00..04..00..00..00 N/A
hdisk1:hd61 20 20 00..20..00..00..00 N/A
hdisk1:lv00 74 74 00..02..15..25..32 /usr/local
As you can see, the logical volumes nominated have all been reorganized. Some have been brought entirely onto one disk. Others have been migrated to better meet the policies defined. This did not do exactly what was wanted, so now let's change some of the allocation policies for the key logical volumes and see what results we can obtain. The table below summarizes the old and new settings of the allocation policies for the logical volumes of importance.
Table: Logical Volume Allocation Policies - Old vs. New
The change in intra-logical volume allocation policy is performed using the chlv command.
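As a sketch of what such a change might look like (the logical volume names and target positions shown are purely illustrative, since the table contents are not reproduced here), the intra-physical volume allocation policy can be changed with commands of the form:
# chlv -a c hd6
# chlv -a e lv00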
# reorgvg rootvg hd6 hd61 lv00 hd2
hdisk0:
PP RANGE STATE REGION LV NAME TYPE MOUNT POINT
1-2 used edge hd5 boot /blv
3-4 used edge hd7 sysdump /mnt
5-7 used edge hd4 jfs /
8-8 used edge rbackup jfs N/A
9-18 free edge
19-21 used edge hd3 jfs /tmp
22-32 used edge lv00 jfs /usr/local
33-64 used middle lv00 jfs /usr/local
65-65 used center hd8 jfslog N/A
66-69 used center hd6 paging N/A
70-89 used center hd61 paging N/A
90-95 used center lv00 jfs /usr/local
96-120 used middle lv00 jfs /usr/local
121-127 used middle hd1 jfs /home
128-143 used edge hd1 jfs /home
144-159 free edge
hdisk1:
PP RANGE STATE REGION LV NAME TYPE MOUNT POINT
1-2 used edge hd2 jfs /usr
3-10 free edge
11-22 used edge hd1 jfs /home
23-25 free edge
26-28 used edge hd1 jfs /home
29-32 used edge hd2 jfs /usr
33-64 used middle hd2 jfs /usr
65-76 used center hd4 jfs /
77-91 used center hd2 jfs /usr
92-95 used center hd1 jfs /home
96-102 used middle hd1 jfs /home
103-127 used middle hd2 jfs /usr
128-159 used edge hd2 jfs /usr
After first reorgvg
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
hdisk0:hd4 3 3 03..00..00..00..00 /
hdisk1:hd4 12 12 00..00..12..00..00 /
hdisk0:hd2 110 110 12..32..29..25..12 /usr
hdisk1:hd6 4 4 00..04..00..00..00 N/A
hdisk1:hd61 20 20 00..20..00..00..00 N/A
hdisk1:lv00 74 74 00..02..15..25..32 /usr/local
After second reorgvg
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
hdisk0:hd4 3 3 03..00..00..00..00 /
hdisk1:hd4 12 12 00..00..12..00..00 /
hdisk1:hd2 110 110 06..32..15..25..32 /usr
hdisk0:hd6 4 4 00..00..04..00..00 N/A
hdisk0:hd61 20 20 00..00..20..00..00 N/A
hdisk0:lv00 74 74 11..32..06..25..00 /usr/local
The preceding example helps demonstrate the importance of allocation policies in the reorgvg process. By changing policies for selected logical volumes we substantially changed the resulting disk partition allocations.
The following procedure assumes a system with three physical disks: hdisk0, hdisk1 and hdisk2. rootvg was installed on hdisk0.
# extendvg rootvg hdisk1 hdisk2
If you don't have three disks, this procedure probably isn't worthwhile, since the likelihood of it being useful drops to roughly half the chance of a hard disk failure: if hdisk0 fails, rootvg loses its disk quorum and the volume group itself will fail.
# mklvcopy hd6 2 hdisk1
# mklvcopy hd8 2 hdisk1
# mklvcopy hd4 2 hdisk1
# mklvcopy hd2 2 hdisk2
# mklvcopy hd3 2 hdisk2
# mklvcopy hd1 2 hdisk2
The logical volumes are, in order, page space, the jfslog, /, /usr, /tmp, and /home (this is if the system is configured as standard). You might use three copies instead of two, but it is not obligatory; it will increase the availability of the data rather than of the system. If you wish to use three copies then you should replace the 2 in the above commands with 3, and specify both hdisk1 and hdisk2 in each case, as sketched below.
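As a sketch of the three-copy alternative just described (only the command for hd4 is shown; the same pattern applies to the other logical volumes):
# mklvcopy hd4 3 hdisk1 hdisk2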
# syncvg -v rootvg
The next few steps set up spare boot logical volumes (BLVs) in case hdisk0 is the disk that fails. This will provide the system with a means of booting in such circumstances.
/blv:
        dev   = /dev/hd5x
        vol   = "spare"
        mount = false
        check = false
        free  = false
        vfs   = jfs
        log   = /dev/hd8

/blv:
        dev   = /dev/hd5y
        vol   = "spare"
        mount = false
        check = false
        free  = false
        vfs   = jfs
        log   = /dev/hd8
# mklv -y hd5x -t boot -a e rootvg 2 hdisk1
# mklv -y hd5y -t boot -a e rootvg 2 hdisk2
# bosboot -a -l /dev/hd5x -d /dev/hdisk1
# bosboot -a -l /dev/hd5y -d /dev/hdisk2
# bootlist -m normal hdisk0 hdisk1 hdisk2
# mklv -y hd7x -t sysdump -a e rootvg 2 hdisk1
# sysdumpdev -P -s /dev/hd7x
The general approach to creating mirrored volume groups is described below.
# mkvg -y uservg hdisk3 hdisk4 hdisk5
# mklv -y userlv1 -c2 uservg 40 hdisk3 hdisk4
# mklv -y userlv2 -c2 uservg 40 hdisk4 hdisk5
# mklv -y userlv3 -c2 uservg 40 hdisk3 hdisk5
# crfs -v jfs -d userlv1 -m /home/tom
# crfs -v jfs -d userlv2 -m /home/dick
# crfs -v jfs -d userlv3 -m /home/harry
# mklvcopy loglv00 3 hdisk4 hdisk5
Warning: This procedure assumes that the fixed disk does not contain any system file systems ( /, /usr, /var, /dev/hd5, /dev/hd6, and so on). If you are replacing a disk that contains these file systems, back up your system with smit startup (to make an image backup) and then replace the disk and reinstall the machine.
This procedure may only work on disks that are still recognized by the system.
If you are on AIX 3.1.7 or less, contact the AIX Support Center to order APAR fix IX20478 before following this procedure.
You should always back up your system before you make any changes to the system. That way you can always restore back to your original status if anything should go wrong.
# unmount /directory
(directory is the mount point of the file system).
# rmfs /Directory
# rmlvcopy LVname maxcopiesx PVname
where PVname is the disk that you are replacing.
# chps -a n /dev/pagingXX
# lsps -a
You should now see that the paging space is not active.
# rmlv pagingxx
# reducevg -df VGname PVname
# rmdev -l PVname -d
# lspv
# lspv
If the PVname is not listed, configure the new disk into the system:
# cfgmgr
Then run lspv again:
# lspv
If the PVname is still not listed, the problem may be hardware. Run diagnostics.
If the PVname is listed, continue with the following steps.
# extendvg VGname PVname
# mklv -OPTIONS VGname #PPs PVname(s)
# crfs -v jfs -d LVname -m /Directory
# mklvcopy LVname maxcopiesx PVname
See InfoExplorer for the correct syntax and parameters of the mklvcopy command.
# syncvg -p PVname
# reboot
We recommend that you back up your system when you get it running properly. Then, if you have to reinstall sometime in the future, you can restore the system to its current configuration.
Although it is possible to mirror data onto an external drive, and then access that information from another system, there are a number of complications associated with this approach.
The problem occurs when you attempt to place that disk back into the original system. If a disk is removed from a volume group, updated, and then returned, there is no way to control which copy of the data is going to be used to resynchronize the other copy. Normally the disk which was last varied on is the disk which will be used to resynchronize the other copies. This means that, if your original system is brought back on-line without the disk re-inserted, it will be varied on and considered the latest version. If, however, you re-insert the disk and then vary on the VG, the disk which was re-inserted will be used to resynchronize the original copy.
Note, however, that if any LVM information is changed while the disk is in your backup system, those changes will not be known to your primary system even if the backup disk is used to resync the primary disk. LVM changes include creating, removing, or expanding any file system, paging space, or other logical volume.
Therefore, be sure to take the following steps:
An attempt is made to vary on a volume group:
# varyonvg vg03
varyonvg: physical Volume hdisk6 is missing
varyonvg: physical Volume hdisk7 is missing
varyonvg: physical Volume hdisk8 is active
varyonvg: physical Volume hdisk9 is missing
varyonvg: physical Volume hdisk10 is missing
varyonvg: physical Volume hdisk11 is active
lvaryonvg: unable to vary on volume group vg03 without a quorum
varyonvg: volume group is not varied on
Since there is not a quorum, the only way to vary on the two physical volumes that are available is to change the quorum count and then to reissue the varyonvg command:
# chvg -Qn vg03
# varyonvg vg03
varyonvg: Physical Volume hdisk6 is missing
varyonvg: Physical Volume hdisk7 is missing
varyonvg: Physical Volume hdisk8 is active
varyonvg: Physical Volume hdisk9 is missing
varyonvg: Physical Volume hdisk10 is missing
varyonvg: Physical Volume hdisk11 is active
varyonvg: Volume group vg03 is varied on
The system administrator wishes to create a new logical volume with five logical partitions, allocated directly in the center of physical volume hdisk7, without incurring any fragmentation. The administrator first examines the current allocation for physical volume hdisk7:
# lslv -p hdisk7
hdisk7:
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE 1-10
FREE FREE FREE FREE FREE FREE 11-16
USED USED USED USED USED USED USED USED USED USED 17-26
USED USED USED USED USED USED 27-32
USED USED USED USED USED USED USED USED USED USED 33-42
USED USED USED USED USED FREE 43-48
FREE USED FREE USED USED FREE USED USED USED USED 49-58
USED USED USED USED FREE FREE 59-64
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE 65-74
FREE FREE FREE FREE FREE FREE 75-80
There are enough free partitions on hdisk7 but there is no free space in the center of the disk. Nevertheless, the administrator can go ahead and create the new logical volume, and then reorganize hdisk7 so that the new volume is located in the center, as shown below:
# mklv -a c -y lvtest vg03 5 hdisk7
lvtest
The logical volume lvtest is created with five partitions on volume group vg03, physical volume hdisk7. The intra-physical volume policy was specified to be c (center allocation). The current allocation of lvtest is shown below.
# lslv -p hdisk7 lvtest
hdisk7:lvtest:N/A
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE 1-10
FREE FREE FREE FREE FREE FREE 11-16
USED USED USED USED USED USED USED USED USED USED 17-26
USED USED USED USED USED USED 27-32
USED USED USED USED USED USED USED USED USED USED 33-42
USED USED USED USED USED 0001 43-48
0002 USED 0003 USED USED 0004 USED USED USED USED 49-58
USED USED USED USED 0005 FREE 59-64
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE 65-74
FREE FREE FREE FREE FREE FREE 75-80
For lvtest to be placed in the center during a reorganization, it has to be given priority. In order to grant lvtest priority, the individual logical volumes that are to be reorganized need to be explicitly specified in order of priority; therefore, the systems administrator looks to see what other logical volumes are located on hdisk7 and which are allocated in the center.
# lspv -p hdisk7
hdisk7:
PP RANGE STATE REGION LV ID TYPE MOUNT POINT
1-16 free edge
17-25 used middle lv0302 jfs /usr/etc
26-32 used middle lv0303 jfs /usr/bin
33-37 used center lv0303 jfs /usr/bin
38-47 used center lv0304 jfs /home/src
48 used center lvtest jfs
49 used middle lvtest jfs
50 used middle lv0308 jfs /systest
51 used middle lvtest jfs
52-53 used middle lv0308 jfs /systest
54 used middle lvtest jfs
55-62 used middle lv0308 jfs /testdata
63 used middle lvtest jfs
64 used middle
65-80 used edge
Since logical volume lv0304 is the only volume with enough space in the center, that has to be the one used to reorganize lvtest (but first confirm lv0304 is relocatable).
# lslv lv0304
LOGICAL VOLUME: lv0304 VOLUME GROUP: vg02
LV IDENTIFIER: 919F8392982C9301.5 PERMISSION: read/write
VG STATE: active/complete LV STATE: closed/syncd
TYPE: jfs WRITE VERIFY: off
MAX LPS: 150 PP SIZE: 2 megabyte(s)
COPIES: 3 SCHED POLICY: parallel
LPS 24 PPs: 72
STALE PPs: 0 BB POLICY: relocatable
INTER-POLICY: minimum RELOCATABLE: yes
INTRA-POLICY center UPPER BOUND 32
MOUNT POINT: /home/src LABEL: None
MIRROR WRITE CONSISTENCY: on
EACH LP COPY ON A SEPARATE PV ?: yes
Now use the command reorgvg:
# echo hdisk7 | reorgvg vg03 lvtest lv0304
Now, list the new allocation of lvtest:
# lslv -p hdisk7 lvtest
hdisk7:lvtest:N/A
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE 1-10
FREE FREE FREE FREE FREE FREE 11-16
USED USED USED USED USED USED USED USED USED USED 17-26
USED USED USED USED USED USED 27-32
USED USED USED USED USED 0001 0002 0003 0004 0005 33-42
USED USED USED USED USED USED 43-48
USED USED USED USED USED USED USED USED USED USED 49-58
USED USED USED USED USED FREE 59-64
FREE FREE FREE FREE FREE FREE FREE FREE FREE FREE 65-74
FREE FREE FREE FREE FREE FREE 75-80
A logical volume is defined with 15 logical partitions, using three copies to maintain high data reliability and availability. A strict allocation is specified, so that no logical partition has any two of its physical partitions on the same physical volume. The logical volume is also defined to use a maximum of six physical volumes.
# mklv -c 3 -u 6 vg02 15
lv0204
Later, the system administrator sees in the system error log that physical volume hdisk6 has failed. They list the contents of the physical volume to see what is stored on it:
# lspv -l hdisk6
hdisk6:
LV NAME LPS PPS DISTRIBUTION MOUNT POINT
lv0201 10 10 05..05..00..00..00
lv0202 10 10 00..10..00..00..00 /home/smith/test
lv0204 15 15 00..00..15..00..00 /transactions
*free* 0 20 10..00..15..00..15
Since hdisk6 contains part of the logical volume lv0204 and maintaining maximum availability is the goal, the administrator decides to migrate the physical partitions for logical volume lv0204 that are allocated on hdisk6 to the other physical volumes used by lv0204. First the other physical volumes used by lv0204 need to be determined; then the migrate command can be invoked.
# lslv -l lv0204
lv0204:/transactions
PV COPIES IN BAND DISTRIBUTION
hdisk3 015:000:000 100% 000:000:015:000:000
hdisk4 015:000:000 80% 000:000:012:003:000
hdisk5 015:000:000 73% 000:004:011:000:000
hdisk6 015:000:000 100% 000:000:015:000:000
hdisk9 015:000:000 60% 000:006:009:000:000
hdisk12 015:000:000 100% 000:000:015:000:000
# migratepv -l lv0204 hdisk6 hdisk3 hdisk4 hdisk5 hdisk9 hdisk12
The administrator then examines the new allocation for the logical volume:
# lslv -l lv0204
lv0204:/transactions
PV COPIES IN BAND DISTRIBUTION
hdisk3 018:000:000 83% 000:003:015:000:000
hdisk4 018:000:000 80% 000:000:012:003:000
hdisk5 015:000:000 73% 000:004:011:000:000
hdisk9 016:000:000 56% 000:006:009:001:000
hdisk12 023:000:000 83% 000:000:019:004:000