foo@bar> zpool iostat
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
ei01         123G   109G      1      1  19.7K  18.5K
ei02        46.8G   112G      2      0   335K  92.8K
----------  -----  -----  -----  -----  -----  -----
The meaning of these figures is not really documented in the man page, but it is easy to deduce: they are the average number of read and write requests per second and the average amount of data read and written per second, accumulated since boot (or, to be more precise, since zpool import).
With the additional -v option, we get the same information not only at the pool level, but also at the device level:
foo@bar> zpool iostat -v
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
ei01         123G   109G      1      1  19.7K  18.5K
  mirror     123G   109G      1      1  19.7K  18.5K
    c0d1        -      -      0      0  13.9K  18.5K
    c1d0        -      -      0      0  13.6K  18.5K
----------  -----  -----  -----  -----  -----  -----
ei02        46.8G   112G      2      0   335K  92.8K
  c0d0s7    46.8G   112G      2      0   335K  92.8K
----------  -----  -----  -----  -----  -----  -----
If you want the raw data behind this, you have to help yourself. See the zpstat.c I have just written; compile instructions are inside (very simple!). Usage: zpstat <pool1> [<pool2> ... <pooln>]. For each given pool, it iterates through the in-core vdev tree and tries to extract the statistics for each vdev (a minimal sketch of that walk follows after the sample output below). Output looks like this:
ei02
type: 'root'
id: 0
vdev_stats.vs_timestamp: 693868 seconds
vdev_stats.vs_ops[ZIO_TYPE_NULL]: 1
vdev_stats.vs_ops[ZIO_TYPE_READ]: 1835587
vdev_stats.vs_ops[ZIO_TYPE_WRITE]: 646427
vdev_stats.vs_ops[ZIO_TYPE_FREE]: 0
vdev_stats.vs_ops[ZIO_TYPE_CLAIM]: 0
vdev_stats.vs_ops[ZIO_TYPE_IOCTL]: 7185
vdev_stats.vs_bytes[ZIO_TYPE_NULL]: 0
vdev_stats.vs_bytes[ZIO_TYPE_READ]: 238795578368
vdev_stats.vs_bytes[ZIO_TYPE_WRITE]: 65714164736
vdev_stats.vs_bytes[ZIO_TYPE_FREE]: 0
vdev_stats.vs_bytes[ZIO_TYPE_CLAIM]: 0
vdev_stats.vs_bytes[ZIO_TYPE_IOCTL]: 0
vdev_stats.vs_read_errors: 0
vdev_stats.vs_write_errors: 0
vdev_stats.vs_checksum_errors: 0
vdev_stats.vs_self_healed: 0
type: 'disk'
id: 0
path: '/dev/dsk/c0d0s7'
path: 'id1,cmdk@AMaxtor_6L200P0=L41FZEGH/h'
path: '/pci@0,0/pci-ide@1f,1/ide@0/cmdk@0,0:h'
vdev_stats.vs_timestamp: 693868 seconds
vdev_stats.vs_ops[ZIO_TYPE_NULL]: 1
vdev_stats.vs_ops[ZIO_TYPE_READ]: 1835587
vdev_stats.vs_ops[ZIO_TYPE_WRITE]: 646427
vdev_stats.vs_ops[ZIO_TYPE_FREE]: 0
vdev_stats.vs_ops[ZIO_TYPE_CLAIM]: 0
vdev_stats.vs_ops[ZIO_TYPE_IOCTL]: 7185
vdev_stats.vs_bytes[ZIO_TYPE_NULL]: 0
vdev_stats.vs_bytes[ZIO_TYPE_READ]: 238795578368
vdev_stats.vs_bytes[ZIO_TYPE_WRITE]: 65714164736
vdev_stats.vs_bytes[ZIO_TYPE_FREE]: 0
vdev_stats.vs_bytes[ZIO_TYPE_CLAIM]: 0
vdev_stats.vs_bytes[ZIO_TYPE_IOCTL]: 0
vdev_stats.vs_read_errors: 0
vdev_stats.vs_write_errors: 0
vdev_stats.vs_checksum_errors: 0
vdev_stats.vs_self_healed: 0
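For reference, here is a minimal sketch of the walk that zpstat.c performs -- not the original source, and very much an assumption-laden one. It presumes the private libzfs/libnvpair interfaces of one particular Solaris build; the nvlist key names (ZPOOL_CONFIG_VDEV_TREE, ZPOOL_CONFIG_CHILDREN, ZPOOL_CONFIG_STATS, renamed ZPOOL_CONFIG_VDEV_STATS in later builds) and the vdev_stat_t layout change between releases, so treat the names below as placeholders. The file name vdevwalk.c is hypothetical.

/*
 * vdevwalk.c -- rough sketch of the zpstat.c approach, NOT the original.
 * Assumes private libzfs/libnvpair interfaces of one Solaris build;
 * key names and struct layouts are release-specific.
 *
 * Compile (assumed): cc -o vdevwalk vdevwalk.c -lzfs -lnvpair
 */
#include <stdio.h>
#include <libzfs.h>
#include <sys/fs/zfs.h>

static void
dump_vdev(nvlist_t *nv, int depth)
{
	char *type;
	uint64_t *array;
	uint_t n, c, nchild;
	nvlist_t **child;

	if (nvlist_lookup_string(nv, ZPOOL_CONFIG_TYPE, &type) == 0)
		(void) printf("%*stype: '%s'\n", depth, "", type);

	/* the vdev statistics arrive as a flat uint64_t array */
	if (nvlist_lookup_uint64_array(nv, ZPOOL_CONFIG_STATS,
	    &array, &n) == 0) {
		vdev_stat_t *vs = (vdev_stat_t *)array;
		/* only the read counters here; zpstat.c dumps the full struct */
		(void) printf("%*sops[READ]=%llu bytes[READ]=%llu\n",
		    depth, "",
		    (u_longlong_t)vs->vs_ops[ZIO_TYPE_READ],
		    (u_longlong_t)vs->vs_bytes[ZIO_TYPE_READ]);
	}

	/* recurse into child vdevs (mirrors, raidz, disks, ...) */
	if (nvlist_lookup_nvlist_array(nv, ZPOOL_CONFIG_CHILDREN,
	    &child, &nchild) == 0)
		for (c = 0; c < nchild; c++)
			dump_vdev(child[c], depth + 2);
}

int
main(int argc, char **argv)
{
	libzfs_handle_t *hdl = libzfs_init();
	int i;

	for (i = 1; i < argc; i++) {
		zpool_handle_t *zhp = zpool_open(hdl, argv[i]);
		nvlist_t *config, *root;

		if (zhp == NULL)
			continue;
		config = zpool_get_config(zhp, NULL);
		if (nvlist_lookup_nvlist(config, ZPOOL_CONFIG_VDEV_TREE,
		    &root) == 0) {
			(void) printf("%s\n", argv[i]);
			dump_vdev(root, 2);
		}
		zpool_close(zhp);
	}
	libzfs_fini(hdl);
	return (0);
}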
You get basically the same information as from "zpool iostat", but before averaging, so you can a) calculate an average I/O size, and b) take the figures, let the system do something for a while, take the figures again and compute averages for exactly that time period.
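For example, taking the ei02 root vdev above and dividing the raw counters by the 693868 seconds in vs_timestamp gives back (roughly) the averages that "zpool iostat" prints -- the small difference is just the counters having moved on between the two snapshots -- plus the average I/O size that iostat cannot show:

238795578368 bytes / 693868 s        ~ 336K/s   (zpool iostat: 335K read bandwidth)
1835587 reads      / 693868 s        ~ 2.6/s    (zpool iostat: 2 read operations)
238795578368 bytes / 1835587 reads   ~ 127K     average read size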
DISCLAIMER:
a) This code uses private interfaces of Solaris. It is not portable across releases; in fact, you will have to compile it separately for each release.
b) This is not the proper way to seriously gather information for performance tuning or the like. For that purpose, use the real weapons, like "dtrace" (see the one-liner after this list).
c) I have written this for my own educational purposes (to learn about the libzfs interfaces and nvpairs and ZFS internals in general). I do not claim fitness for any particular purpose...
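For completeness, the kind of "real weapon" meant in b): a standard DTrace one-liner using the io provider, which counts I/Os per device for exactly as long as you let it run (press Ctrl-C to get the summary). This is stock DTrace and not specific to ZFS:

foo@bar> dtrace -n 'io:::start { @[args[1]->dev_statname] = count(); }'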