Install the GPFS 4.1.0-0 base packages (the media ships both RPM and DEB variants; only the RPMs are used on this host):

tsmtlb01:/usr/lpp/mmfs/4.1 # ls
gpfs.base-4.1.0-0.x86_64.rpm       gpfs.base_4.1.0-0_amd64.deb
gpfs.docs-4.1.0-0.noarch.rpm       gpfs.docs_4.1.0-0_all.deb
gpfs.ext-4.1.0-0.x86_64.rpm        gpfs.ext_4.1.0-0_amd64.deb
gpfs.gpl-4.1.0-0.noarch.rpm        gpfs.gpl_4.1.0-0_all.deb
gpfs.gskit-8.0.50-16.x86_64.rpm    gpfs.gskit_8.0.50-16_amd64.deb
gpfs.msg.en_US-4.1.0-0.noarch.rpm  gpfs.msg.en-us_4.1.0-0_all.deb
license
tsmtlb01:/usr/lpp/mmfs/4.1 # rpm -ivh *.rpm
Preparing...                ########################################### [100%]
   1:gpfs.base              ########################################### [ 17%]
   2:gpfs.docs              ########################################### [ 33%]
   3:gpfs.ext               ########################################### [ 50%]
   4:gpfs.gpl               ########################################### [ 67%]
   5:gpfs.gskit             ########################################### [ 83%]
   6:gpfs.msg.en_US         ########################################### [100%]

Update to the 4.1.0-7 fix level:

tsmtlb01:/usr/lpp/mmfs/4.1 # cd -
/home/mduersch/GPFS
tsmtlb01:/home/mduersch/GPFS # ll
total 154692
-rw-r--r-- 1 mduersch users  43909114 Mar 29 18:12 GPFS-4.1.0.7-x86_64-Linux.standard.tar.gz
-rw-r--r-- 1 mduersch users 114495444 Apr 27  2014 gpfs_install-4.1.0-0_x86_64
tsmtlb01:/home/mduersch/GPFS # tar -xzvf GPFS-4.1.0.7-x86_64-Linux.standard.tar.gz
changelog
gpfs.base_4.1.0-7_amd64_update.deb
gpfs.base-4.1.0-7.x86_64.update.rpm
gpfs.docs_4.1.0-7_all.deb
gpfs.docs-4.1.0-7.noarch.rpm
gpfs.ext_4.1.0-7_amd64_update.deb
gpfs.ext-4.1.0-7.x86_64.update.rpm
gpfs.gpl_4.1.0-7_all.deb
gpfs.gpl-4.1.0-7.noarch.rpm
gpfs.gskit_8.0.50-32_amd64.deb
gpfs.gskit-8.0.50-32.x86_64.rpm
gpfs.msg.en-us_4.1.0-7_all.deb
gpfs.msg.en_US-4.1.0-7.noarch.rpm
README
tsmtlb01:/home/mduersch/GPFS # rpm -Uvh *rpm
Preparing...                ########################################### [100%]
   1:gpfs.base              ########################################### [ 17%]
   2:gpfs.docs              ########################################### [ 33%]
   3:gpfs.ext               ########################################### [ 50%]
   4:gpfs.gpl               ########################################### [ 67%]
   5:gpfs.gskit             ########################################### [ 83%]
   6:gpfs.msg.en_US         ########################################### [100%]

Build the GPL portability layer from the installed source tree:

tsmtlb01:/home/mduersch/GPFS # cd /usr/lpp/mmfs/
4.1/         READMES/     bin/         data/        fpo/         include/
lib/         messages/    properties/  samples/     src/
tsmtlb01:/home/mduersch/GPFS # cd /usr/lpp/mmfs/src/
tsmtlb01:/usr/lpp/mmfs/src # ll
total 28
-rw-r--r-- 1 root root 7012 Mar 12 01:24 README
drwxr-xr-x 3 root root  126 Apr  7 12:23 config
drwxr-xr-x 2 root root 4096 Apr  7 12:23 gpl-linux
drwxr-xr-x 2 root root 4096 Apr  7 12:23 ibm-kxi
drwxr-xr-x 2 root root 4096 Apr  7 12:23 ibm-linux
-rw-r--r-- 1 root root 6424 Mar 12 01:24 makefile
tsmtlb01:/usr/lpp/mmfs/src # make world
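Only `make world` is captured above, with no output. As a sketch of the usual procedure: the README shipped in /usr/lpp/mmfs/src describes building the portability layer with case-sensitive make targets, roughly as follows (kernel headers and a compiler must already be installed):

    cd /usr/lpp/mmfs/src
    make Autoconfig       # generate a build configuration for the running kernel
    make World            # compile the portability layer kernel modules
    make InstallImages    # install the built modules (mmfslinux, mmfs26, tracedev)

It is also convenient to add /usr/lpp/mmfs/bin to root's PATH, since all of the mm* administration commands used below live there.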
Generate an SSH key for root; GPFS uses the configured remote shell (here ssh) between cluster nodes, including to the node itself:

tsmtlb01:~ # ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
e6:5b:23:18:9b:e6:10:c6:ee:fd:c5:f3:e6:d9:f3:59 [MD5] root@tsmtlb01
The key's randomart image is: [randomart omitted]
tsmtlb01:~ # cd /root/.ssh/
tsmtlb01:~/.ssh # ll
total 8
-rw------- 1 root root 1675 Apr  7 12:28 id_rsa
-rw-r--r-- 1 root root  395 Apr  7 12:28 id_rsa.pub

Shell history of the cluster creation, licensing, and tuning steps (the contents of nodes.cfg and nsdds34.cfg are not shown in the session; a hypothetical sketch follows this transcript):

 1050  2015-04-07 12:32:19 ll
 1051  2015-04-07 12:32:25 mkdir gpfscfg
 1052  2015-04-07 12:32:26 cd gpfscfg/
 1053  2015-04-07 12:32:27 ll
 1054  2015-04-07 12:32:39 vi nodes.cfg
 1055  2015-04-07 12:34:35 mmcrcluster -N nodes.cfg -p tsmtlb01 -r /usr/bin/ssh -R /usr/bin/scp -C tsmpoc -A
 1056  2015-04-07 12:34:55 mmlslicense
 1057  2015-04-07 12:35:10 mmchlicense
 1058  2015-04-07 12:35:35 mmchlicense server --acccept -N tsmtlb01
 1059  2015-04-07 12:35:39 mmchlicense server --accept -N tsmtlb01
 1060  2015-04-07 12:35:45 mmchlicense
 1061  2015-04-07 12:35:51 mmlslicense
 1062  2015-04-07 12:36:07 mmlscluster
 1063  2015-04-07 12:38:54 top
 1064  2015-04-07 12:39:30 mmchconfig maxFilesToCache=10000
 1065  2015-04-07 12:39:48 mmchconfig maxMBpS=1200
 1066  2015-04-07 12:40:15 mmchconfig nsdSmallThreadRatio=1
 1067  2015-04-07 12:41:08 mmchconfig prefetchPct=60
 1068  2015-04-07 12:41:32 mmchconfig pagepool=8G
 1069  2015-04-07 12:42:12 mmchconfig maxblocksize=8M
 1070  2015-04-07 12:42:26 cd /
 1071  2015-04-07 12:42:27 ll
 1072  2015-04-07 12:42:30 multipath -l
 1073  2015-04-07 12:42:45 multipath -l | grep -i fastt
 1074  2015-04-07 13:03:35 cd /root/gpfscfg/
 1075  2015-04-07 13:03:36 ll
 1076  2015-04-07 13:39:13 vi
 1077  2015-04-07 14:27:17 df
 1078  2015-04-07 14:27:20 df -h
 1079  2015-04-07 15:03:03 cd /tsmstg34/
 1080  2015-04-07 15:03:52 gpfsperf create seq test01 -r 256K -n 10G -th 4 -fsync

Start GPFS and create the file system:

tsmtlb01:~/gpfscfg # mmstartup
Tue Apr  7 13:57:27 CEST 2015: mmstartup: Starting GPFS ...
tsmtlb01:~/gpfscfg # mmgetstate

 Node number  Node name   GPFS state
------------------------------------------
       1      tsmtlb01    arbitrating

tsmtlb01:~/gpfscfg # mmcrfs tsmstg34 -F nsdds34.cfg -A yes -B 2M -i 4K -n 1 -T /tsmstg34 --inode-limit 1M:10000 --metadata-block-size 64K

Warning: file system is not 4k aligned due to small metadata block size: 65536; metadata subblock size: 2048.
Native 4k sector disks cannot be added to this file system.

The following disks of tsmstg34 will be formatted on node tsmtlb01:
    ds34m1: size 51200 MB
    ds3401: size 2859568 MB
    ds3402: size 2859568 MB
    ds3403: size 2859568 MB
    ds3404: size 2859568 MB
    ds3405: size 2859568 MB
Formatting file system ...
Disks up to size 430 GB can be added to storage pool system.
Disks up to size 24 TB can be added to storage pool tsmstg34.
Creating Inode File
  31 % complete on Tue Apr  7 13:57:51 2015
  61 % complete on Tue Apr  7 13:57:56 2015
  90 % complete on Tue Apr  7 13:58:01 2015
 100 % complete on Tue Apr  7 13:58:03 2015
Creating Allocation Maps
Creating Log Files
Clearing Inode Allocation Map
Clearing Block Allocation Map
Formatting Allocation Map for storage pool system
Formatting Allocation Map for storage pool tsmstg34
Completed creation of file system /dev/tsmstg34.
tsmtlb01:~/gpfscfg # mmmount all
Tue Apr  7 13:59:27 CEST 2015: mmmount: Mounting file systems ...
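Neither nodes.cfg nor nsdds34.cfg appears in the session, so the following is a hypothetical sketch only (device paths and node designations are assumptions, not taken from this host). A node file lists one node per line with its designations, and a GPFS 4.1 NSD stanza file describes each disk:

    # nodes.cfg -- hypothetical: the single node acts as quorum node and manager
    tsmtlb01:quorum-manager

    # nsdds34.cfg -- hypothetical stanzas consistent with the mmcrfs/mmdf output
    %nsd: nsd=ds34m1
      device=/dev/mapper/ds34m1
      servers=tsmtlb01
      usage=metadataOnly
      failureGroup=1
      pool=system

    %nsd: nsd=ds3401
      device=/dev/mapper/ds3401
      servers=tsmtlb01
      usage=dataOnly
      failureGroup=1
      pool=tsmstg34
    # ...and likewise for ds3402 through ds3405

Before mmcrfs can consume such a file, the NSDs themselves have to be created from it with mmcrnsd -F nsdds34.cfg; that step does not appear in the captured history.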
tsmtlb01:~/gpfscfg # mmdf tsmstg34
disk                disk size  failure holds    holds              free KB             free KB
name                    in KB    group metadata data        in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 400 GB)
ds34m1               52428800        1 yes      no        50216512 ( 96%)           60 ( 0%)
                -------------                        -------------------- -------------------
(pool total)         52428800                            50216512 ( 96%)           60 ( 0%)

Disks in storage pool: tsmstg34 (Maximum disk size allowed is 22 TB)
ds3401             2928197632        1 no       yes     2928128000 (100%)         3904 ( 0%)
ds3402             2928197632        1 no       yes     2928128000 (100%)         3904 ( 0%)
ds3403             2928197632        1 no       yes     2928128000 (100%)         3904 ( 0%)
ds3404             2928197632        1 no       yes     2928128000 (100%)         3904 ( 0%)
ds3405             2928197632        1 no       yes     2928128000 (100%)         3904 ( 0%)
                -------------                        -------------------- -------------------
(pool total)      14640988160                          14640640000 (100%)        19520 ( 0%)

                =============                        ==================== ===================
(data)            14640988160                          14640640000 (100%)        19520 ( 0%)
(metadata)           52428800                             50216512 ( 96%)           60 ( 0%)
                =============                        ==================== ===================
(total)           14693416960                          14690856512 (100%)        19580 ( 0%)

Inode Information
-----------------
Number of used inodes:            4007
Number of free inodes:          495993
Number of allocated inodes:     500000
Maximum number of inodes:      1048576

Install a default placement policy so new files land in the data pool:

tsmtlb01:~/gpfscfg # cat policy.tsmstg34
/* Default placement rule */
RULE 'default' SET POOL 'tsmstg34'
tsmtlb01:~/gpfscfg # mmchpolicy tsmstg34 policy.tsmstg34 -I yes
Validated policy `policy.tsmstg34': Parsed 1 policy rules.
Policy `policy.tsmstg34' installed and broadcast to all nodes.

Shutting GPFS down

You can use the mmshutdown command to unmount the GPFS file systems and stop the GPFS daemon on one or more of the cluster nodes. The standard options are:

    mmshutdown                                shut down GPFS on the local machine
    mmshutdown -a                             shut down GPFS on all cluster nodes
    mmshutdown -N Node[,Node...] | NodeFile   shut down GPFS on the listed nodes (or on the nodes contained in the file NodeFile)

Starting GPFS

GPFS can be started on one or more of the cluster nodes by running mmstartup. The standard options are:

    mmstartup                                 start GPFS on the local machine
    mmstartup -a                              start GPFS on all cluster nodes
    mmstartup -N Node[,Node...] | NodeFile    start GPFS on the listed nodes (or on the nodes contained in the file NodeFile)
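Putting the two sections above together (assuming the single-node cluster from this session), a complete stop/start cycle with state checks might look like:

    mmshutdown -a     # unmount the GPFS file systems and stop the daemon on every node
    mmgetstate -a     # every node should now report "down"
    mmstartup -a      # start the daemon cluster-wide
    mmgetstate -a     # nodes pass through "arbitrating" on the way to "active"
    mmmount all -a    # remount all file systems on all nodes if not mounted automatically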
Adding a disk

mmadddisk gpfsdev gpfs73nsd:::descOnly:4119::

The following disks of gpfsdev will be formatted on node illustrious.inf.ed.ac.uk:
    gpfs73nsd: size 244198584 KB
Extending Allocation Map
Checking Allocation Map for storage pool 'system'
Warning: No xauth data; using fake authentication data for X11 forwarding.
 66 % complete on Tue Apr 27 15:20:20 2010
100 % complete on Tue Apr 27 15:20:22 2010
Completed adding disks to file system gpfsdev.
mmadddisk: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.

The disk descriptor is of the form

    DiskName:::DiskUsage:FailureGroup::StoragePool:

where DiskUsage is one of:

dataAndMetadata
    Indicates that the disk contains both data and metadata. This is the default for disks in the system pool.

dataOnly
    Indicates that the disk contains data and does not contain metadata.

metadataOnly
    Indicates that the disk contains metadata and does not contain data.

descOnly
    Indicates that the disk contains no data and no file metadata. Such a disk is used solely to keep a copy of the file system descriptor, and can be used as a third failure group in certain disaster recovery configurations. For more information, see General Parallel File System: Advanced Administration and search on "Synchronous mirroring utilizing GPFS replication".
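The colon-separated descriptor used above is the older syntax; releases from GPFS 3.5 onward accept a stanza file passed to mmadddisk with -F and deprecate the descriptor form. A hypothetical stanza equivalent of the example (the file name is invented):

    # desc.stanza -- hypothetical equivalent of gpfs73nsd:::descOnly:4119::
    %nsd: nsd=gpfs73nsd
      usage=descOnly
      failureGroup=4119

which would be applied with mmadddisk gpfsdev -F desc.stanza.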