Upgrading Your Managed Server to SSD for Maximum Performance and Cost Savings

The biggest advantage of solid-state drives over traditional hard drives is the lack of moving parts. This allows the drives to last longer and deliver faster read and write times, making them ideal for an enterprise environment where performance and reliability are expected. SSD drives use less than a third of the power of comparable SAS or SATA drives and promise twice the life expectancy. The reduction in power consumption alone should save you money in the long run.

[Image: SSD drives]

Let us review one particular server that was upgraded from SAS to SSD drives. Although SSD drives are more expensive, the cost can be offset by higher productivity, stability, and faster load times. SSD drives are best suited to applications that perform a lot of reads and writes to disk and require low latency, such as a MySQL database. They are also well suited to disk-based caching: an SSD used for NFS caching with Nginx can serve static content and significantly improve load times on your server.

A slow hard drive increases CPU load and can lock up the server during periods of high traffic:

[Graph: server load on SAS drives]

Network throughput also picked up, because the LAMP stack was able to serve more requests:

[Graph: LAMP stack serving more requests after the SSD upgrade]

What makes a server a great candidate for an SSD upgrade is its current rate of Input/Output Operations Per Second (IOPS). Most SATA drives top out around 150 IOPS, and SAS drives around 200 IOPS. If your server is constantly going above 200 IOPS, you should consider upgrading to SSD drives.
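A quick way to check where you stand is iostat from the sysstat package (a minimal sketch; sysstat is assumed to be installed, and sda is a placeholder for your data disk). The r/s and w/s columns added together give the current IOPS:

iostat -dx sda 1 5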

Let's review another server with 2x 73GB 2.5” SAS drives in RAID 1:

[Screenshot: server analytics]

Reviewing the disk I/O graphs:

[Graph: disk I/O]

The current rates are 167 reads/second and 1,035 writes/second, so the overall IOPS rate far exceeds what the SAS drives can handle.

We can further troubleshoot the cause of the high IOPS by using iotop:

[Screenshot: iotop showing high IOPS]
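For reference, the view above can be narrowed down with iotop's standard options (a sketch; run as root), showing only processes that are actually doing I/O, aggregated per process, refreshed every 5 seconds:

# iotop -o -P -d 5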

In this particular case, the right course of action was to find out why qmgr was writing so much to disk. It turned out to be deferred messages being logged by Postfix.

After purging the Postfix queue, the deferred messages stopped, which solved the high I/O issue:

[Graph: disk I/O after the Postfix queue was purged]
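The exact purge command isn't shown above, but the standard Postfix way to inspect and then drop everything in the deferred queue looks like this (postsuper ships with Postfix; review the queue with mailq before deleting anything):

# mailq | tail -1
# postsuper -d ALL deferred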

The IOPS rate was down significantly:

[Graph: IOPS rate after the fix]

iotop confirms that the issue has been resolved:

[Screenshot: iotop after the issue was resolved]

The biggest deterrent to getting SSD drives is price. Currently, a new Seagate Savvio 10K.3 300GB SAS drive costs approximately $190, and a Seagate Cheetah 15K.7 ST3450857SS 450GB drive costs approximately $220. Consumer-grade SSD drives such as the Crucial m4 CT256M4SSD2 256GB cost $200, and the Crucial m4 CT512M4SSD1 512GB costs $400. We will stick to comparing enterprise-grade drives, since there is a hidden bonus to going enterprise: power consumption.

Enterprise-grade SSD drives such as the Intel 520 480GB SSDSC2CW480A310 cost $500, and the Samsung 840 Pro MZ-7PD512BW 512GB costs $600.

This places the initial costs at roughly $0.49/GB to $0.63/GB for SAS drives, $0.78/GB for consumer-grade SSD drives, and $1.04/GB to $1.17/GB for enterprise SSD drives.

Before we dive into the mathematics of how SSD drives save you money, here is a map of average electricity prices per kWh (in cents):

[Map: average electricity prices per kWh, in cents]

The higher the price of electricity, the quicker SSD drives break even compared to SAS drives.

Comparing a SAS drive to an SSD drive in terms of power consumption, we have 4.8 kWh/day for SAS and 0.06 kWh/day for SSD, which comes out to 1,752 kWh/year and 21.9 kWh/year respectively. That is a saving of roughly $207/year per drive on power consumption alone.

(For reference: http://www.storageperformance.org/spc-1ce_results/Seagate/e00002_Seagate_Savvio-10K3/e00002_Seagate_Savvio-10K3_SPC1CE-executive-summary.pdf and http://www.storagereview.com/ocz_vertex_4_ssd_review. Annual energy use in kWh = nominal power in watts * 24 hours * 365 days / 1,000, i.e. watts * 24 * 0.365. The annual cost is calculated at an average of $0.12/kWh.)
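As a quick sanity check of that formula, a one-liner (the 0.85W figure is the Intel 520 value quoted below):

awk 'BEGIN { w = 0.85; kwh = w * 24 * 0.365; printf "%.3f kWh/year, $%.2f/year at $0.12/kWh\n", kwh, kwh * 0.12 }'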

So when would an SSD drive reach the break-even point? If you purchased 2x 300GB 10,000 RPM SAS drives for your server, you would pay $380 for the drives and about $420/year for electricity. If you purchased 2x 450GB 15,000 RPM SAS drives, you would pay $440 for the drives and, again, over $420/year for electricity.

Meanwhile, with 2x 480GB Intel 520 drives you would pay $1,000, and with 2x 512GB Samsung 840 Pro drives you would pay $1,200. As for power consumption, there is a bonus: the Intel 520 draws 0.85W, which works out to 7.446 kWh/year, or about $0.89 per year; the Samsung 840 Pro draws 0.068W, which works out to 0.596 kWh/year, or about $0.07 per year. Even with dual drives, that is still under $2/year in electrical charges.

Given all of these data points, even with dual Samsung 840 Pro SSD drives at $1,200 versus dual SAS drives at $440, you are looking at a $760 difference in initial cost for 2x ~500GB drives. That $760 premium is quickly eroded by the SAS drives' roughly $420/year electricity bill, so after less than two years the SAS setup starts costing you more. In the long run, SSD drives save you money, improve efficiency, and last longer.
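Using the figures above (a $760 premium for the SSD pair, and roughly $420/year versus under $1/year in electricity for two drives), a rough break-even calculation:

awk 'BEGIN { premium = 1200 - 440; saving = 420.48 - 0.14; printf "break-even after %.1f years\n", premium / saving }'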

Protecting Your Managed Server From Packet Flood

There are instances when your DNS server, such as BIND or PowerDNS, comes under a heavy packet flood.  Here is the network activity on two nameservers undergoing a UDP flood to port 53:

[Graph: network utilization, nameserver 1]

[Graph: network utilization, nameserver 2]

To mitigate this issue, we need to do a little investigation into where the packets are coming from.  Tcpdump is an excellent tool for this and, combined with iptables, can be used as a quick measure against the UDP flood.  We can use the following command to check for connections to port 53:

# tcpdump -i eth0 -nnn port 53 and not port 22

[Screenshot: tcpdump output]

Now it is important to establish which subnets are friendly and which ports can be ignored.  In our case, we'll ignore the 69.55.x.x/16 subnet and port 22 (SSH).

Although we have established that the traffic is UDP, the DNS server technically listens on TCP as well, so we will include both protocols when banning the flooders' IP addresses later.  In the filter below we also drop the last part of each tcpdump source field, which is the ephemeral source port appended to the client's IP address (usually a value between 1024 and 65535).  The following tcpdump command lets us see all the IP addresses that are connecting to port 53 on interface eth0:

tcpdump -i eth0 -nnn port 53 and not port 22 and not src net 69.55 | awk '{print $3}' | sed 's/\./ /g' | awk '{print $1"."$2"."$3"."$4}'
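To see what the awk/sed portion does: the third field of each tcpdump line is the source address with the ephemeral port appended (the port 41234 below is only an illustration), and the pipeline keeps just the first four octets:

# echo "85.236.105.27.41234" | sed 's/\./ /g' | awk '{print $1"."$2"."$3"."$4}'
85.236.105.27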

This generates a list of IPs that are currently connecting to port 53.  Once we have verified that the IPs look foreign and repetitive, we can begin logging them to a file for later use:

tcpdump -i eth0 -nnn port 53 and not port 22 and not src net 69.55 | awk '{print $3}' | sed 's/\./ /g' | awk '{print $1"."$2"."$3"."$4}' >> /root/ips.txt

To parse through this file, we will use sort and uniq commands:

cat /root/ips.txt | sort | uniq -c | sort -n

A quick pass through the log file reveals the IP addresses the spam originated from:

[Screenshot: sorted IP counts]

The first column is the number of times the IP appears in the log file, and the second column is the IP address itself.  We can now ban these repeat offenders with iptables:

iptables -I INPUT -s 85.236.105.27 -j DROP -m comment --comment "UDP Spam"
iptables -I INPUT -s 71.174.225.224 -j DROP -m comment --comment "UDP Spam"
...
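If the list of offenders is long, the bans can be scripted. This is only a sketch: the 500-hit cutoff is an arbitrary example, and you should review the counts by hand before dropping anything:

for ip in $(sort /root/ips.txt | uniq -c | awk '$1 > 500 {print $2}'); do
    iptables -I INPUT -s "$ip" -j DROP -m comment --comment "UDP Spam"
done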

As you can see, there is a stark contrast between legitimate DNS requests, which numbered around 100-180 in our log window, and the spamming requests.  The amount of spam these IPs generated can be seen with iptables -nL -v:

# iptables -nL -v

[Screenshot: iptables -nL -v output]

Once we are done with this set of IPs, we can clear out the log file:

# :> /root/ips.txt

Since spammers often use scripts that modify their source IP address, this process can be repeated every few minutes to capture new IP addresses:

# tcpdump -i eth0 -nnn port 53 and not port 22 and not src net 69.55 | awk '{print $3}' | sed 's/\./ /g' | awk '{print $1"."$2"."$3"."$4}' >> /root/ips.txt
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
2008557 packets captured
2008643 packets received by filter
0 packets dropped by kernel
# cat /root/ips.txt | sort | uniq -c | sort -n

[Screenshot: sorted IP counts from /root/ips.txt]
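If you would rather not babysit the capture, it can be run in fixed windows. A minimal sketch, assuming timeout(1) from coreutils is available:

while true; do
    timeout 60 tcpdump -i eth0 -nnn port 53 and not port 22 and not src net 69.55 \
        | awk '{print $3}' | sed 's/\./ /g' | awk '{print $1"."$2"."$3"."$4}' >> /root/ips.txt
    sort /root/ips.txt | uniq -c | sort -n | tail -5
done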

 

We can verify that the network activity due to spam has decreased on both servers:

[Graph: network activity, nameserver 1]

[Graph: network activity, nameserver 2]
It is a good idea to occasionally review the output of “iptables -nL -v” and remove entries that are no longer spamming.  Keeping the list of rules small helps minimize the CPU load caused by iptables.

This entire process could be automated: scan for spamming IPs, place them in iptables rules, re-scan the iptables output every minute, and drop rules that have gone stale.  Unfortunately, this will not work if the spammer constantly changes their IP address and only uses one or two IPs at a time, at a very high request rate.  In that case, we need a more restrictive iptables rule that flags packets carrying DNS queries of type ANY and drops them if more than 4 are seen per minute:

# iptables -I INPUT 1 -i eth0 -d MY_IP -p udp --dport 53 -m string --from 50 --algo bm --hex-string '|0000FF0001|' -m recent --set --name dnsanyquery
# iptables -I INPUT 2 -i eth0 -d MY_IP -p udp --dport 53 -m string --from 50 --algo bm --hex-string '|0000FF0001|' -m recent --name dnsanyquery --rcheck --seconds 60 --hitcount 5 -j DROP
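To confirm the rule is matching, you can fire a few ANY queries at the server yourself (MY_IP is the same placeholder as in the rules above, and the zone name is just an example); the first four within a minute should be answered, and subsequent ones should time out:

dig @MY_IP example.com ANY +tries=1 +time=2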

With this rule in place, there is still a heavy influx of UDP packets, but 99% of them are dropped and there is very little outgoing traffic:

[Graph: UDP traffic after the rate-limiting rule]
A similar approach can be taken with webservers and a high connection rate to port 80, using netstat.  However, you generally don't want a large set of iptables rules on production servers, because it will increase CPU and memory usage.  An Nginx proxy in front of your webserver can be used to mitigate spam, and it can also add caching if you so desire.
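As a rough sketch of the same idea for a webserver (this assumes the classic net-tools netstat output format and IPv4 addresses), counting current connections to port 80 per client IP:

netstat -ntu | awk '$4 ~ /:80$/ {split($5, a, ":"); print a[1]}' | sort | uniq -c | sort -n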
