FreeBSD: Use ZFS volume as an iSCSI Target

Briefly, we can point an iscsi-target extent at a ZFS volume…
But I have no idea what happens if the extent size is larger than that of the ZFS volume… :)

# zfs create -V 10g tank/iscsi
# cd /usr/ports/net/iscsi-target/ ; make install clean
# cat > /usr/local/etc/iscsi/targets << EOF
extent0         /dev/zvol/tank/iscsi     0       10GB
target0         rw      extent0
EOF
# /usr/local/etc/rc.d/iscsi_target forcestart

Now we can connect to this iSCSI target with any iSCSI initiator.
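For example, attaching from another FreeBSD box with iscsi_initiator(4) and iscontrol(8) looks roughly like this. The target IP is a placeholder, and the IQN shown assumes the default naming of the NetBSD-derived net/iscsi-target port; check your target's log for the exact name:

# kldload iscsi_initiator
# iscontrol -v -t 192.168.0.10 targetname=iqn.1994-04.org.netbsd.iscsi-target:target0

If the login succeeds, a new da(4) device should appear on the initiator.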

Taking KMRT

I went to Kaohsiung last weekend and rode the KMRT, which officially started passenger service in March 2008.

The World Games Station.

Kaohsiung Main Station. There are platform screen doors for all underground stations.

Central Park Station.

I feel the KMRT is very similar to the TMRT, but it is more crowded because each train has only 3 carriages.
Also, the turnstiles do not work very well: in each of my 3 trips, returning the single-journey ticket failed the first time, and only succeeded on the second try.

Employee of a Web Company Traveling…


MySQL on Mtron SSD

We (PIXNET) bought 2 Mtron MSP-SATA70 SSDs (spec: write 90MB/s, read 120MB/s) and installed them in an 8-way server with 12GB RAM, running Debian Linux x64 and a MySQL 5.1 slave. We use MyISAM as the storage engine, and the largest MyISAM table is about 3GB.

We set up the SSDs as RAID 0 with a 4KB stripe size, XFS, and the noop disk scheduler. Initially, copying data from the master server took about one-third of the time it had taken with 2 SCSI 10k RPM HDDs (RAID 0); the rate was about 70MB/s. During replication we see about 3000~4000 qps, with peaks around 15000.
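That setup can be reproduced roughly as follows. The device names and mount point are assumptions, and note that mdadm's --chunk is given in KB:

# Stripe the two SSDs: RAID 0 with a 4KB chunk (device names are examples)
mdadm --create /dev/md0 --level=0 --chunk=4 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.xfs /dev/md0
mount /dev/md0 /var/lib/mysql

# Use the noop I/O scheduler on the SSDs
echo noop > /sys/block/sdb/queue/scheduler
echo noop > /sys/block/sdc/queue/scheduler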

In production use (a mix of SELECT and UPDATE queries), we reach 8000 qps and then become CPU-bound; some complicated SQL queries start blocking. With only simple SELECT queries, 15000 qps is possible.

FreeBSD: vfs.read_max for Hardware RAID

Tested with: bonnie -s 2048, FreeBSD 7.0, UFS2, hardware RAID 5 (6 PATA 7200rpm 250GB disks)
vfs.read_max=8 (default):

-------Sequential Output--------
-Per Char- --Block--- -Rewrite--
Machine MB K/sec %CPU K/sec %CPU K/sec %CPU
2048 25812 24.8 26483 6.6 13886 4.4
---Sequential Input-- --Random--
-Per Char- --Block--- --Seeks---
K/sec %CPU K/sec %CPU /sec %CPU
32162 32.5 33386 5.1 232.3 1.5


-------Sequential Output--------
-Per Char- --Block--- -Rewrite--
Machine MB K/sec %CPU K/sec %CPU K/sec %CPU
2048 25380 24.3 25949 6.5 13956 4.3
---Sequential Input-- --Random--
-Per Char- --Block--- --Seeks---
K/sec %CPU K/sec %CPU /sec %CPU
41060 43.4 42839 8.3 224.9 1.4


-------Sequential Output--------
-Per Char- --Block--- -Rewrite--
Machine MB K/sec %CPU K/sec %CPU K/sec %CPU
2048 25714 24.3 25939 6.5 13966 4.3
---Sequential Input-- --Random--
-Per Char- --Block--- --Seeks---
K/sec %CPU K/sec %CPU /sec %CPU
41442 43.8 43737 8.6 225.2 1.5

Conclusion: No performance gain on random access, but about 25% better sequential read performance.
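To experiment with the read-ahead setting yourself (the value 32 below is just an example), change the sysctl at runtime and persist it across reboots:

# runtime
sysctl vfs.read_max=32
# persistent
echo 'vfs.read_max=32' >> /etc/sysctl.conf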

MySQL on Transcend SSD

After reading Kevin Burton’s article, gslin and I also got hold of some SSDs to test. We bought 4 Transcend 32GB MLC SSDs and installed two of them in an 8-way Debian Linux server with 12GB RAM, running a MySQL 5.1 slave with MyISAM tables. The largest MyISAM table is about 3.0GB.

In the beginning, we striped the two SSDs as RAID 0 with a 64KB stripe size, and found MySQL only did 5~20 replication update queries per second on XFS. Running on EXT3 was no better, so we decided to try decreasing the stripe size.

We tried a 4KB stripe size with no performance gain, and disabling the disk scheduler made no difference either. We also found that the Linux md(4) driver and LVM support a minimum stripe size of 4KB, so we could not try 512B or 1KB.

Finally, we found that the Transcend SSD supports only UDMA Mode 4, about 66.7MB/s in theory. The specification shows it does only 1.6MB/s of random writes, while our replication updates average about 3MB/s with peaks of 11MB/s on XFS. Transcend’s SLC chips support about 4MB/s of random writes, which is still too slow for us.
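On Linux, the negotiated transfer mode can be checked with hdparm; the device name below is an example, and the active mode is the one marked with a `*`:

hdparm -I /dev/sdb | grep -i udma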

Conclusion: the SSDs currently available on the Taiwan market do not perform well enough, and the Mtron SSD is still too expensive in Taiwan.

FreeBSD: Set the order of SCSI Cards

In /boot/device.hints:"ahd0"
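The entry above is truncated; the usual approach, per device.hints(5) and scsi(4), is to wire each SCSI bus to a specific controller so the probe order no longer matters. The unit numbers here are assumptions for a two-card ahd(4) setup:

hint.scbus.0.at="ahd0"
hint.scbus.1.at="ahd1"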

FreeBSD gjournal and UFS snapshot

Recently I have been testing a new feature in FreeBSD 7.0: gjournal. However, taking a snapshot on a journaled UFS causes a kernel panic. (The RAID is 5TB.)
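For reference, the sequence that leads to the panic looks roughly like this, following gjournal(8); the device name is an assumption:

# set up gjournal on the provider and create a journaled UFS
gjournal load
gjournal label /dev/da0
newfs -J /dev/da0.journal
mount -o async /dev/da0.journal /mnt

# taking the snapshot is the step that panics
mksnap_ffs /mnt /mnt/.snap/snap1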


Also, I cannot write to the filesystem while a snapshot is being taken, and sometimes cannot even read from it. We should use ZFS or NetApp when we need filesystem snapshots on FreeBSD…