Creates a GPFS™ file system.
mmcrfs Device {"DiskDesc[;DiskDesc...]" | -F StanzaFile}
[-A {yes | no | automount}] [-B BlockSize] [-D {posix | nfs4}]
[-E {yes | no}] [-i InodeSize] [-j {cluster | scatter}]
[-k {posix | nfs4 | all}] [-K {no | whenpossible | always}]
[-L LogFileSize] [-m DefaultMetadataReplicas]
[-M MaxMetadataReplicas] [-n NumNodes] [-Q {yes | no}]
[-r DefaultDataReplicas] [-R MaxDataReplicas]
[-S {yes | no | relatime}] [-T Mountpoint] [-t DriveLetter]
[-v {yes | no}] [-z {yes | no}] [--filesetdf | --nofilesetdf]
[--inode-limit MaxNumInodes[:NumInodesToPreallocate]]
[--log-replicas LogReplicas] [--metadata-block-size MetadataBlockSize]
[--perfileset-quota | --noperfileset-quota]
[--mount-priority Priority] [--version VersionString]
[--write-cache-threshold HAWCThreshold]
Available on all IBM Spectrum Scale™ editions.
Use the mmcrfs command to create a GPFS file system. The first two parameters must be Device and either DiskDescList or StanzaFile, in that order. The block size and replication factors chosen affect file system performance. A maximum of 256 file systems can be mounted in a GPFS cluster at one time, including remote file systems.
When deciding on the maximum number of files (number of inodes) in a file system, consider that for file systems in which files will be created in parallel, if the total number of free inodes is not greater than 5% of the total number of inodes, file system access can slow down. The total number of inodes can be increased later with the mmchfs command.
DiskName:::DiskUsage:FailureGroup::StoragePool:
For backward compatibility, the mmcrfs command still accepts the traditional disk descriptors, but their use is discouraged. File system names need not be fully qualified; fs0 is as acceptable as /dev/fs0. However, file system names must be unique within a GPFS cluster. Do not specify an existing entry in /dev.
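For illustration, a traditional disk descriptor following the format above might be written as follows (the disk name and values are placeholders):
hd2n97:::dataAndMetadata:1::system: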
The Device name must be the first parameter.
%nsd:
nsd=NsdName
usage={dataOnly | metadataOnly | dataAndMetadata | descOnly}
failureGroup=FailureGroup
pool=StoragePool
servers=ServerList
device=DiskName
where:
GPFS uses this information during data and metadata placement to ensure that no two replicas of the same block can become unavailable due to a single failure. All disks that are attached to the same NSD server or adapter must be placed in the same failure group.
If the file system is configured with data replication, all storage pools must have two failure groups to maintain proper protection of the data. Similarly, if metadata replication is in effect, the system storage pool must have two failure groups.
Disks that belong to storage pools in which write affinity is enabled can use topology vectors to identify failure domains in a shared-nothing cluster. Disks that belong to traditional storage pools must use simple integers to specify the failure group.
Only the system storage pool can contain metadataOnly, dataAndMetadata, or descOnly disks. Disks in other storage pools must be dataOnly.
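For illustration, a minimal NSD stanza file passed with -F might look like the following; the NSD names and failure groups are placeholders, and the NSDs are assumed to already exist (created earlier with the mmcrnsd command):
%nsd:
  nsd=gpfs1nsd
  usage=dataAndMetadata
  failureGroup=1
  pool=system
%nsd:
  nsd=gpfs2nsd
  usage=dataAndMetadata
  failureGroup=2
  pool=system
Because two failure groups are defined, the file system can then be created with two-way replication, for example: mmcrfs fs0 -F /tmp/nsd.stanza -m 2 -r 2 -T /gpfs/fs0 (file names and mount point are placeholders).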
%pool:
pool=StoragePoolName
blockSize=BlockSize
usage={dataOnly | metadataOnly | dataAndMetadata}
layoutMap={scatter | cluster}
allowWriteAffinity={yes | no}
writeAffinityDepth={0 | 1 | 2}
blockGroupFactor=BlockGroupFactor
where:
The cluster allocation method may provide better disk performance for some disk subsystems in relatively small installations. The benefits of clustered block allocation diminish when the number of nodes in the cluster or the number of disks in a file system increases, or when the file system's free space becomes fragmented. The cluster allocation method is the default for GPFS clusters with eight or fewer nodes and for file systems with eight or fewer disks.
The scatter allocation method provides more consistent file system performance by averaging out performance variations due to block location (for many disk subsystems, the location of the data relative to the disk edge has a substantial effect on performance). This allocation method is appropriate in most cases and is the default for GPFS clusters with more than eight nodes or file systems with more than eight disks.
The block allocation map type cannot be changed after the storage pool has been created.
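For illustration (device, stanza file, and mount point are placeholders), the allocation method can be set explicitly at file system creation with the -j option:
mmcrfs fs0 -F /tmp/nsd.stanza -j scatter -T /gpfs/fs0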
A write affinity depth of 0 indicates that each replica is to be striped across the disks in a cyclical fashion with the restriction that no two disks are in the same failure group. By default, the unit of striping is a block; however, if the block group factor is specified in order to exploit chunks, the unit of striping is a chunk.
A write affinity depth of 1 indicates that the first copy is written to the writer node. The second copy is written to a different rack. The third copy is written to the same rack as the second copy, but on a different half (which can be composed of several nodes).
This behavior can be altered on an individual file basis by using the --write-affinity-failure-group option of the mmchattr command.
This parameter is ignored if write affinity is disabled for the storage pool.
See the section about File Placement Optimizer in the IBM Spectrum Scale: Administration Guide.
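For illustration, a pool stanza for a shared-nothing (FPO-style) data pool with write affinity enabled might look like the following; the pool name and values are placeholders chosen for the example:
%pool:
  pool=datapool
  blockSize=2M
  usage=dataOnly
  layoutMap=cluster
  allowWriteAffinity=yes
  writeAffinityDepth=1
  blockGroupFactor=128
With writeAffinityDepth=1, the first replica of each block written by a node in this pool is placed on that node, as described above.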
Specifying -k all allows a mixture of ACL types. For example, fileA may have a POSIX ACL, while fileB in the same file system may have an NFS V4 ACL, implying different access characteristics for each file depending on the ACL type that is currently assigned. The default is -k all.
Avoid specifying nfs4 or all unless files will be exported to NFS V4 or Samba clients, or the file system will be mounted on Windows. NFS V4 and Windows ACLs affect file attributes (mode) and have access and authorization characteristics that are different from traditional GPFS ACLs.
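For example, a file system that will be exported only to NFS V4 or Samba clients could restrict ACLs to the NFS V4 type (device, stanza file, and mount point below are placeholders):
mmcrfs fs0 -F /tmp/nsd.stanza -k nfs4 -T /gpfs/fs0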
For more information, see the topic "Strict replication" in the IBM Spectrum Scale: Problem Determination Guide.
In most cases, allowing the log file size to default works well. An increased log file size is useful for file systems that have a large amount of metadata activity, such as creating and deleting many small files or performing extensive block allocation and deallocation of large files.
When you create a GPFS file system, you might want to overestimate the number of nodes that will mount the file system. GPFS uses this information to create data structures that are essential for achieving maximum parallelism in file system operations (for more information, see GPFS architecture). If you are sure there will never be more than 64 nodes, allow the default value to be applied. If you are planning to add nodes to your system, specify a number larger than the default.
If relatime is specified, the file access time is updated only if the existing access time is older than the value of the atimeDeferredSeconds configuration attribute or the existing file modification time is greater than the existing access time.
For file systems on which you intend to create files in parallel, if the total number of free inodes is not greater than 5% of the total number of inodes, file system access might slow down. Take this into consideration when creating your file system.
The parameter NumInodesToPreallocate specifies the number of inodes that the system will immediately preallocate. If you do not specify a value for NumInodesToPreallocate, GPFS will dynamically allocate inodes as needed.
You can specify the NumInodes and NumInodesToPreallocate values with a suffix, for example 100K or 2M. Note that in order to optimize file system operations, the number of inodes that are actually created may be greater than the specified value.
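For example, to create the file system with an upper limit of roughly two million inodes and 500K inodes preallocated immediately (the other names are placeholders):
mmcrfs fs0 -F /tmp/nsd.stanza --inode-limit 2M:500K -T /gpfs/fs0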
This option is only applicable if the recovery log is stored in the system.log storage pool.
File systems with higher Priority numbers are mounted after file systems with lower numbers. File systems that do not have mount priorities are mounted last. A value of zero indicates no priority. This is the default.
The default value is the current product version, which enables all currently available features but prevents nodes that are running earlier GPFS releases from accessing the file system. Windows nodes can mount only file systems that are created with GPFS 3.2.1.5 or later.
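For example, to create a file system in a format that nodes running an earlier release can also mount, an older version string can be given (the value shown is illustrative; use the release level of your oldest nodes):
mmcrfs fs0 -F /tmp/nsd.stanza --version 4.1.1.0 -T /gpfs/fs0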
The file system attributes are applied at file system creation. If a profile is currently in place on the system (use mmlsconfig profile to check), the file system is created with the attributes and values listed in that profile's file system stanza. The default is to use the attributes and values associated with the current profile setting.
Furthermore, all file system attributes from an installed profile file can be bypassed with --profile=userDefinedProfile, where userDefinedProfile is a profile file that the user has installed in /var/mmfs/etc/.
%cluster:
[CommaSeparatedNodesOrNodeClasses:]ClusterConfigurationAttribute=Value
...
%filesystem:
FilesystemConfigurationAttribute=Value
...
A sample file can be found in /usr/lpp/mmfs/samples/sample.profile. See the mmchconfig command for a detailed description of the different configuration parameters.
User-defined profiles should be used only by experienced administrators. When in doubt, use the mmchconfig command instead.
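As a structural sketch only (the attribute names and values below are illustrative placeholders, not a verified profile; consult /usr/lpp/mmfs/samples/sample.profile and the mmchconfig documentation for the attributes that are valid on your release), a user-defined profile installed under /var/mmfs/etc/ might look like:
%cluster:
  pagepool=4G
%filesystem:
  blockSize=1M
It could then be applied at creation time, following the syntax described above, with: mmcrfs fs0 -F /tmp/nsd.stanza --profile=userDefinedProfile -T /gpfs/fs0 (file names and mount point are placeholders).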
A value of 0 disables this feature. 64K is the maximum supported value. Specify in multiples of 4K.
This feature can be enabled or disabled at any time (the file system does not need to be unmounted). For more information about this feature, see Highly-available write cache (HAWC).
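For example, to enable the highly-available write cache with a 64K threshold at creation time (device, stanza file, and mount point are placeholders):
mmcrfs fs0 -F /tmp/nsd.stanza --write-cache-threshold 64K -T /gpfs/fs0
As noted above, the threshold can also be changed later without unmounting the file system.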
You must have root authority to run the mmcrfs command.
The node on which the command is issued must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages. For more information, see Requirements for administering a GPFS file system.
For example, to create a file system named gpfs1 using the disk information in /tmp/freedisks, with a 512 KB block size, two replicas of data and metadata, quotas enabled, and a mount point of /gpfs1, issue:
mmcrfs gpfs1 -F /tmp/freedisks -B 512K -m 2 -r 2 -Q yes -T /gpfs1
The system displays output similar to:
The following disks of gpfs1 will be formatted on node c21f1n13:
hd2n97: size 1951449088 KB
hd3n97: size 1951449088 KB
hd4n97: size 1951449088 KB
Formatting file system ...
Disks up to size 16 TB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Creating Log Files
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool 'system'
19 % complete on Tue Feb 28 18:03:20 2012
42 % complete on Tue Feb 28 18:03:25 2012
62 % complete on Tue Feb 28 18:03:30 2012
79 % complete on Tue Feb 28 18:03:35 2012
96 % complete on Tue Feb 28 18:03:40 2012
100 % complete on Tue Feb 28 18:03:41 2012
Completed creation of file system /dev/gpfs1.
mmcrfs: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
Location: /usr/lpp/mmfs/bin
https://www.ibm.com/support/knowledgecenter/zh/STXKQY_4.2.1/com.ibm.spectrum.scale.v4r21.doc/bl1adm_mmcrfs.htm