Upgrading Your Managed Server to SSD for Maximum Performance and Cost Savings

The biggest advantage of Solid State Drives is the lack of moving parts compared to traditional hard drives. This allows the drives to last longer and deliver faster read and write times, which makes them ideal for an enterprise environment where performance and reliability are expected. SSD drives use less than a third of the power of comparable SAS or SATA drives and promise twice the life expectancy. Power consumption alone should save you money in the long run.

[Image: ssd-drives]

Let us review one particular server that was upgraded from SAS to SSD drives. Although SSD drives are more expensive, the cost can be offset by higher productivity, better stability, and faster load times. The best candidates for SSD drives are applications that perform a lot of reads and writes to disk and need low latency, such as a MySQL database. You will also benefit from using an SSD for disk-based caching: an SSD drive used for NFS caching with Nginx for static content delivery can significantly improve load times on your server.

Having a slow hard drive increases CPU load and can lock up the server during high traffic:

[Image: sas-drives]

Even the network speed picked up because the LAMP stack was able to serve more requests:

[Image: lamp-stack-more-requests]

What makes a server a great candidate for an SSD upgrade is its current Input/Output Operations Per Second (IOPS) rate. For most SATA drives the upper limit is around 150 IOPS, and for SAS it is around 200 IOPS. If your server is constantly going above 200 IOPS, you should consider upgrading to SSD drives.
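
If you are not sure what your current IOPS rate is, a quick way to check on most Linux servers (a sketch that assumes the sysstat package, which provides iostat, is installed) is:

[root@webserver ~]# iostat -dx 5

Adding the r/s and w/s columns together gives the IOPS rate per disk; let it run for several intervals during peak traffic to get a representative number.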

Let's review another server with 2x 73GB 2.5” SAS drives in RAID1:

[Image: server-analytics-ssd-1]

Reviewing the Disk I/O graphs:

[Image: Disk-IO-graph]

With current rates of 167 reads/second and 1,035 writes/second, the overall IOPS rate (roughly 1,200) far exceeds what these SAS drives can handle.

We can further troubleshoot the cause of the high IOPS by using iotop:

[Image: high-iops]
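
For reference, iotop can be invoked like this (assuming the iotop package is installed) to show only the processes that are actually doing I/O, with accumulated totals per process:

[root@webserver ~]# iotop -oPa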

In this particular case, figuring out why the Postfix queue manager (qmgr) was writing so much to disk was the right course of action: the writes turned out to be caused by deferred messages being logged by Postfix.
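
On a Postfix server, the deferred queue can be inspected and then purged along these lines (a sketch using standard Postfix commands, not the exact ones run on this server):

[root@webserver ~]# postqueue -p | tail -1          # summary of what is sitting in the mail queue
[root@webserver ~]# postsuper -d ALL deferred       # delete every message in the deferred queue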

After purging the Postfix queue, all of the deferred messages stopped, which solved the high I/O issue:

[Image: high-io-issue]

The IOPS rate was down significantly:

[Image: iops-rate-dropped]

With iotop confirming that the issue has been resolved:

[Image: iotop-issue-resolved]

The biggest deterrent to getting SSD drives is price. Currently a new Seagate Savvio 10K.3 300GB SAS drive costs approximately $190, and a Seagate Cheetah 15K.7 ST3450857SS 450GB drive costs approximately $220. Consumer grade SSD drives like the Crucial m4 CT256M4SSD2 256GB cost $200, and the Crucial m4 CT512M4SSD1 512GB costs $400. We will stick to comparing enterprise grade drives, since there is a hidden bonus for going enterprise: power consumption.

Enterprise grade SSD drives like the Intel 520 480GB SSDSC2CW480A310 cost $500, and the Samsung 840 Pro MZ-7PD512BW 512GB costs $600.

This places the initial costs at $0.48/GB – $0.63/GB for SAS drives, $0.78/GB for consumer grade SSD drives, and $1.04/GB – $1.17/GB for enterprise SSD drives.
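
As a quick sanity check, the per-gigabyte figures can be reproduced with bc, dividing each price by the drive's capacity in GB (prices and capacities as listed above):

[root@webserver ~]# echo "scale=2; 190/300; 220/450; 200/256; 400/512; 500/480; 600/512" | bc
.63
.48
.78
.78
1.04
1.17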

Before we dive into the mathematics of how SSD drives save you money, here is a map of average prices per kWh of electricity (in cents):

[Image: save-with-ssd-drives]

The higher the price of electricity, the quicker SSD drives break even compared to SAS drives.

Comparing SAS to SSD drives in terms of power consumption, we have 4.8 kWh/day with SAS and 0.06 kWh/day with SSD. That comes out to 1,752 kWh/year with SAS and 21.9 kWh/year with SSD, so you would be saving roughly $207/year with SSD drives on power consumption alone.

(For reference: http://www.storageperformance.org/spc-1ce_results/Seagate/e00002_Seagate_Savvio-10K3/e00002_Seagate_Savvio-10K3_SPC1CE-executive-summary.pdf and http://www.storagereview.com/ocz_vertex_4_ssd_review. Annual energy use in kWh = nominal power in watts × 24 × 0.365, and the annual cost is calculated at an average of $0.12/kWh.)

So when would an SSD drive reach the break-even point? If you purchased 2x 300GB 10,000 RPM SAS drives for your server, you would pay $380 for the drives and about $420/year for electricity. If you purchased 2x 450GB 15,000 RPM SAS drives, you would pay $440 for the drives and over $420/year for electricity.

Meanwhile, with 2x 480GB Intel 520 drives you would pay $1,000, and with 2x 512GB Samsung 840 Pro drives you would pay $1,200. As for power consumption, there is a bonus: the Intel 520, at a 0.85W power consumption rate, would use 7.446 kWh/year and cost $0.89 per year, while the Samsung 840 Pro, at 0.068W, would use 0.5956 kWh/year and cost $0.07 per year. Even with dual drives, that is still below $2/year in electrical charges.

Given all of these data points, even with dual Samsung 840 Pro SSD drives at $1,200 versus dual SAS drives at $440, you are looking at a $760 difference in initial cost for 2x ~500GB drives. That $760 upfront saving on the SAS side is quickly eroded by the roughly $420/year electric bill, and after less than two years the SAS drives end up costing you more. In the long run, SSD drives save you money, improve efficiency, and last longer.
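
As a rough check of that break-even point (a back-of-the-envelope sketch using the figures above: 1,752 kWh/year per SAS drive, roughly 0.6 kWh/year per Samsung 840 Pro, and $0.12/kWh):

[root@webserver ~]# echo "scale=1; (1200 - 440) / ((2*1752 - 2*0.6) * 0.12)" | bc
1.8

So the extra upfront cost of the SSD pair is recovered from the power bill alone in under two years.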

Using GlusterFS On Your Managed Server

You will first need to set up a distributed GlusterFS storage cluster; then follow these instructions to mount it on your managed server:

First, we will have to install the EPEL repository:

[root@webserver ~]# rpm -Uvh http://mirror.symnds.com/distributions/fedora-epel/6/x86_64/epel-release-6-8.noarch.rpm
Retrieving http://mirror.symnds.com/distributions/fedora-epel/6/x86_64/epel-release-6-8.noarch.rpm
warning: /var/tmp/rpm-tmp.CjOwN6: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing...                ########################################### [100%]
1:epel-release           ########################################### [100%]

Now we’ll install the necessary packages:

[root@webserver ~]# yum -y install glusterfs-fuse glusterfs

Place the same hosts file in /etc/hosts as on the GlusterFS nodes. We will also create a folder, /mnt/glusterfs, to use as our mount point, and reference any one of the nodes in /etc/fstab:

gluster1:/gluster /mnt/glusterfs glusterfs rw,allow_other,default_permissions,max_read=131072 0 0

To mount, type mount -a

To manually mount the GlusterFS storage node:

mount -t glusterfs gluster1:/gluster /mnt/glusterfs

And as a final touch, we can verify just how much storage space we have:

[root@webserver ~]# df -h /mnt/glusterfs/
Filesystem            Size  Used Avail Use% Mounted on
gluster1:/gluster      99G  4.2G   90G   5% /mnt/glusterfs

That is roughly 90GB of available storage capacity distributed across 5 servers.
We can test this setup by writing a 1GB file to this mount:

[root@webserver ~]# dd if=/dev/zero of=/mnt/glusterfs/1GB bs=1024 count=1048576

1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 239.994 s, 4.5 MB/s

This file ended up on gluster1:

[root@gluster1 ~]# ls -lah /exp1/
total 1.1G
drwxr-xr-x  2 root root 4.0K Jan  1 11:12 .
drwxr-xr-x 23 root root 4.0K Jan  1 09:52 ..
-rw-r--r--  1 root root 1.0G Jan  1 11:16 1GB

We should verify that files will be randomly written across all 5 servers by generating a hundred smaller files:

[root@webserver ~]# for i in `seq 1 100`; do dd if=/dev/zero of=/mnt/glusterfs/$i bs=1024 count=1; done

Out of 100 files generated, the distribution was:

24 on gluster1, 24 on gluster2, 17 on gluster3, 20 on gluster4, and 15 on gluster5.

Therefore, the files are distributed fairly evenly across the entire cluster. Using this setup you can scale your storage quickly, even with hardware of varying storage capacity.
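
If you want to reproduce this count yourself, something along these lines works, assuming each node exports its brick under the /expN naming shown above for gluster1 (adjust the paths to your actual brick directories):

[root@webserver ~]# for n in 1 2 3 4 5; do echo -n "gluster$n: "; ssh gluster$n "ls /exp$n | wc -l"; done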

But what if you have files that are too large to fit on any one storage node? You can create a striped GlusterFS volume, spanning all 5 storage nodes:

[root@gluster1 ~]# gluster volume create largefiles stripe 5 transport tcp gluster1:/large1 gluster2:/large2 gluster3:/large3 gluster4:/large4 gluster5:/large5
Creation of volume largefiles has been successful

Start the new volume:

[root@gluster1 ~]# gluster volume start largefiles

Now this volume can be mounted on your webserver:

[root@webserver ~]# mkdir /mnt/largefiles && mount -t glusterfs gluster1:/largefiles /mnt/largefiles

The great thing about this setup is that it can co-exist with other volumes and volume types. Your nodes share the same pool of available space between the distributed and striped volumes, so you don't have to worry about resizing. Just remember to place really large files (greater than 18GB in our example, since each node holds roughly a fifth of the ~90GB total) in /mnt/largefiles. A large file will automatically be split into blocks across the 5 storage nodes, and you will still have enough space on each gluster node for smaller files. Keep in mind that if one GlusterFS node goes offline, you lose access to the large files, since each one is stored in pieces across all GlusterFS nodes.
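
At any point, you can review both volumes and the bricks behind them with the standard gluster CLI:

[root@gluster1 ~]# gluster volume info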

This setup can be used for storing raw videos in /mnt/largefiles and FFmpeg-encoded versions in /mnt/glusterfs. For example, an original 50GB file can be stored in striped blocks on /mnt/largefiles, while the FLV, x264, DivX, WMV, and AVI versions are stored on different GlusterFS nodes under /mnt/glusterfs. It is a great setup if you have an encoding server, a webserver, and storage nodes all accessing the same data. Whether you are into video on demand, live streaming, or file sharing, GlusterFS can be a viable solution for distributed network file storage.
