Today's session is "Managing storage on FreeBSD (vol.2)".
- How to test the data transfer speed of devices
- (1) internal kernel only (no device involved)
dd if=/dev/zero of=/dev/null bs=1024x1024 count=80
- (2) read only from device da2
dd if=/dev/da2 of=/dev/null bs=1024x1024 count=80
- (3) write only to device da3
dd if=/dev/zero of=/dev/da3 bs=1024x1024 count=80
- (4) read and write data from device da2 to device da3
dd if=/dev/da2 of=/dev/da3 bs=1024x1024 count=80
- (1) measures the internal kernel limit
- (4) must be slower than (2) and (3)
- the limiting data transfer speed is whichever of (2) or (3) is slower
- iostat: statistics command for device I/O
- IOPS
- data transfer speed
- %busy
- svc_t
- gstat: statistics command for GEOM I/O (the GEOM version of iostat)
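- example invocations (one reasonable set of flags; da2/da3 follow the dd examples above):
iostat -x -w 1 da2 da3   # per-device IOPS, transfer speed, service time and %busy, once per second
gstat                    # interactive view of the same statistics per GEOM provider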
- The right way to create a gmirror (a command sketch follows this list)
- create gm0 as a mirror containing only ada1
- copy the whole of ada0 to gm0
- set gm0 as the boot disk, and reboot
- add ada0 as a mirror member of gm0, and wait for the sync between ada0 and ada1
- The way written in the FreeBSD Handbook is now incorrect :-)
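- a hedged command sketch of those steps (disk names ada0/ada1 and mirror name gm0 as above; the copy and boot-configuration details are only outlined):
gmirror load                # load geom_mirror (or set geom_mirror_load="YES" in /boot/loader.conf)
gmirror label -v gm0 ada1   # step 1: create gm0 as a mirror containing only ada1
# step 2: partition and newfs /dev/mirror/gm0, then copy ada0's contents onto it (e.g. dump | restore)
# step 3: point /etc/fstab and the loader at the /dev/mirror/gm0 partitions, then reboot from gm0
gmirror insert gm0 ada0     # step 4: ada0 joins gm0 and is resynchronized from ada1
gmirror status              # watch until the synchronization completes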
- New file system feature since 9.0 (1) : GPT as default
- because MBR cannot handle partitions over 2TB
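- a minimal gpart sketch for a GPT-labelled disk (device da2 and the partition size are hypothetical):
gpart create -s gpt da2                # write a GPT partition table
gpart add -t freebsd-ufs -s 500G da2   # add a UFS partition (GPT has no 2TB limit)
gpart show da2                         # inspect the layout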
- New file system feature since 9.0 (2) : HAST
- Highly Available Storage
- like gmirror over the TCP/IP network
- provides /dev/hast/XXXX block devices
- configure /etc/hast.conf on each of the mirroring machines
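- a minimal /etc/hast.conf sketch (hostnames hosta/hostb, resource name "shared" and device da2 are hypothetical; each "on" name must match the machine's hostname):
resource shared {
        on hosta {
                local /dev/da2
                remote hostb
        }
        on hostb {
                local /dev/da2
                remote hosta
        }
}
- then run "hastctl create shared" on both machines, start hastd, and "hastctl role primary shared" on the primary makes /dev/hast/shared appear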
- ZFS
- architecture of UFS : device -> volume manager -> file system
- architecture of ZFS : devices -> ZPOOL -> DMU -> ZFS
- Good reasons to use ZFS
- many devices
- dealing petabyte-class data
- Not good reasons to use ZFS (just sales talk :-P)
- speed
- robustness
- scalability
- ZPOOL : gives a name to one or more devices treated as a single block (a zpool/zfs sketch follows this list)
- ZFS Data Set : gives namespaces on top of the ZPOOL
- DMU : the internal kernel layer that manipulates dnodes
- dnode : the ZPOOL version of an inode
- ZFS Data Sets on a ZPOOL are handled like inodes
- Files and directories on each ZFS Data Set are handled like inodes
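- a minimal sketch of the two layers (pool name "tank" and devices da2/da3 are hypothetical):
zpool create tank mirror da2 da3   # ZPOOL: one named block built from two devices
zfs create tank/home               # ZFS Data Set: a namespace on top of that pool
zfs list                           # data sets appear (and are mounted) immediately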
- ZFS' I/O = copy-on-write
- ZFS Pros.
- transactional
- lock free
- ZFS Cons.
- dnode handling is heavy
- high update cost for large files
- Tips
- SHOULD USE amd64
- set vfs.zfs.cache.size to 0
- tune vfs.zfs.meta_limit for your machine
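- both are loader tunables, so they go in /boot/loader.conf; the names below are copied from the notes as-is, and on stock FreeBSD the full names are probably vfs.zfs.vdev.cache.size and vfs.zfs.arc_meta_limit, so check with sysctl on your version:
vfs.zfs.cache.size="0"
# vfs.zfs.meta_limit="<a size that fits your machine's RAM>"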