DarkXander Owner of FurrTrax Post ID: 91 Posted: 05-31-2015 14:00
So it seems there is a lot of contention online about whether using SSDs for MySQL is the better option, or whether you're more likely to bottleneck somewhere else before the hard disks ever become an issue. I have done extensive research into this problem, and experienced it first hand on this very site. Some people say to just optimise your code and MySQL transactions better; I have already gone over all of mine several times, optimising and indexing everything as close to perfect as possible to keep the site responding quickly.

The real problem was that the web files and the MySQL files originally lived on the same drive and partition, so even a fast RAID 0 or 10 array was not enough, because of the high seek times on reads and writes. People see a RAID 0 system rated at 200MB/s and think: how could a social network like this ever max that out? Not even possible, they say. Well, they would be wrong. The site may never consume 200MB/s of read/write IO, but that 200MB/s rating is for contiguous block reads and writes, not the random-seek reads and writes a live system actually performs. The same RAID array with 200MB/s of contiguous throughput usually manages only 20-30MB/s of random read/write, with seek latencies that can climb into the hundreds of milliseconds under heavy load. This is where even the GOD of all RAID 0, 10, or 5 arrays gets overwhelmed by a social network like this one, constantly forced to jump back and forth seeking random files, DB tables, code pages, etc. The only thing a RAID 0, 10, or 5 array built from rotating drives is really good for is a download or CDN server hosting static resources.

Now enter SSDs, with single-drive contiguous read/write speeds equal to or greater than the RAID arrays mentioned above. The detail that matters most, though, is that on a high quality SSD the random read/write speed approaches the contiguous speed, making it worlds faster to respond than rotating drives: seek time is effectively 0ms, and read/write speed, regardless of access pattern, is 200-500MB/s depending on the model. People don't realise that contiguous read/write speed is straight-line performance, a drag race with no turns or weaving, while random seeking is more like a rally stage full of hairpins, zig-zags, and complete reversals. SSDs run that course far quicker than a mechanical drive ever can.

FurrTrax's original RAID array had a contiguous read speed of 245MB/s and a contiguous write speed of 178MB/s, and yet the site began to lag horribly with lots and lots of people online. After some extensive analysis of logs, performance graphs in the VMware console, and the like, it was plain to see: the server was waiting on its drives to seek 80% of the time it was running. Once a file was found it was read blazing fast, but the seeks were taking so long that page loads were delayed several seconds while the CPU sat idle waiting for the RAID array to catch up. And this was on a RAID controller with 3 x SATA 300 Seagate Barracuda 1TB drives with 64MB caches. I added the first SSD on a spare port in non-RAID mode and moved all the MySQL databases to it, and the site saw an astronomical speed increase; the percentage of time the system spent waiting on seeks dropped to 35%, most of which was seeks for content files such as images, PHP files, and videos. Adding another SSD and moving all of that content over dropped the seek percentage to a flat 5%, literally proving that the RAID 0 array, for all its speed, was not cut out for social network hosting.
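If you want to see the drag-race versus rally-stage difference on your own hardware, here is a rough Python sketch that times contiguous 1MB reads against random 4KB reads on a scratch file. The path and sizes are placeholders, not anything from the FurrTrax setup, and the OS page cache will inflate both numbers unless the test file is much larger than your RAM (or you drop the caches between runs):

import os, random, time

PATH = "testfile.bin"        # placeholder scratch file on the drive under test
SIZE = 256 * 1024 * 1024     # 256MB test file; use one bigger than RAM for honest numbers
BLOCK = 4096                 # 4KB reads, roughly the size of a database page fetch

if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(os.urandom(SIZE))      # create the test file once

def contiguous_mbps():
    start = time.time()
    with open(PATH, "rb") as f:
        while f.read(1024 * 1024):     # straight-line: 1MB reads, start to finish
            pass
    return SIZE / (time.time() - start) / 1e6

def random_mbps(count=20000):
    start = time.time()
    with open(PATH, "rb") as f:
        for _ in range(count):
            f.seek(random.randrange(SIZE - BLOCK))   # hairpin: jump somewhere random
            f.read(BLOCK)                            # read one small block
    return count * BLOCK / (time.time() - start) / 1e6

print("contiguous: %7.1f MB/s" % contiguous_mbps())
print("random 4K:  %7.1f MB/s" % random_mbps())

On a rotating drive the second number collapses to a tiny fraction of the first; on a decent SSD the two stay in the same ballpark, which is the whole argument above in two lines of output.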
Today the server stands differently. It has been retrofitted with all SSDs, four of them as I recall, in a configuration set up to take best advantage of their speed, as follows. Linux logs are quite noisy and do a lot of writes, so I dedicated a 32GB SSD to the logs of the entire system; that is all it does, so log-writing IO has no impact on the other drives. MySQL got its own 32GB SSD with a 500MB/s Phison controller, and that is all it does: every MySQL database lives on that drive and nothing else, so MySQL has enough seek and IO power to fly like the devil. Content files, meaning the site's PHP, images, and other assets, live on a 120GB SSD of their own and load amazingly fast. The operating systems, VMware ESXi and Linux, live on another 32GB Phison SSD, so updates, management tasks, and the like do not affect the IO of MySQL or the websites.

This setup has brought performance roughly 9 times higher than it was on the original RAID 0 system, and we are still using the same processor, RAM, and motherboard as before. After these changes the RAID controller itself was retired, and backups flow regularly to the CDN server, which is cloud hosted, and to my own home office server, just in case an SSD ever decides to pop.
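For anyone wanting to copy the layout, the gist is one mount point per SSD plus pointing MySQL's data directory at its dedicated drive. This is only a minimal sketch; the device names and mount points are assumptions for illustration, not the actual paths on the FurrTrax box:

# /etc/fstab -- one SSD per role (device names are placeholders)
/dev/sdb1   /var/log         ext4   noatime   0  2    # 32GB log SSD
/dev/sdc1   /var/lib/mysql   ext4   noatime   0  2    # 32GB MySQL SSD
/dev/sdd1   /var/www         ext4   noatime   0  2    # 120GB content SSD

# /etc/my.cnf -- keep the databases on the dedicated drive
[mysqld]
datadir = /var/lib/mysql

The noatime option spares the SSDs a metadata write on every file read, which also helps with the write-cycle concerns that come up later in this thread.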
Loben Member Post ID: 92 Posted: 05-31-2015 14:21
At Atatasdfasdfsdfasdfasdf hmmm my text isn't showing up in the box as I type it. I'm trying to see if it comes out how I type it.
Loben Member Post ID: 93 Posted: 05-31-2015 14:22
This is the quick reply box, and the edit post button had the same effect. Firefox version 38.0.1. Gonna update my Flash?
Loben Member Post ID: 94 Posted: 05-31-2015 14:26
No luck, still can't see text as I type. Uhh, what software does the forum here use?
I originally meant to post a story about my old workplace: we had drive-imaging machines with SSDs and Windows XP. They were so fast you'd click the power button and be at the desktop before you got your hand to the mouse.
DarkXander Owner of FurrTrax Post ID: 96 Posted: 05-31-2015 19:27
This article was posted mostly because I've found a lot of articles recently by people who claim experience in this subject matter and insist the gain from using SSDs is minimal, and that there are other bottlenecks to solve before SSDs are even worth it. This article is here to dispel those beliefs: I've seen the difference first hand here, and explained above why it is so large.

As for why the editor doesn't work, I'll look into it. It doesn't use Flash at all, only Ajax, JavaScript, and CSS, and it is reported to work in Firefox.
Loben Member Post ID: 120 Posted: 06-02-2015 02:04 AM
It seems to be working for me today. I'm not sure what happened yesterday. Thanks anyways!
Grey Member Post ID: 793 Posted: 12-27-2015 09:12 AM
So why not set up a RAID 5 array with SSDs in case of failure? That would give you the speed of SSDs and the redundancy of RAID 5.

--Grey--
DarkXander Owner of FurrTrax Post ID: 794 Posted: 12-27-2015 12:21 PM
RAID 5 does not work well with SSDs because of the way it handles parity; it will kill the SSDs a lot quicker than RAID 1 or RAID 10 would. You may or may not know that SSDs have a finite number of write cycles, and setting SSDs up in RAID 5 is similar to running a defrag on them: every logical write triggers extra physical writes for the parity, eating up 20% or more of the write cycles needlessly, so the SSDs begin to fail in a much shorter time and need replacing. RAID 0, 1, and 10 are far less detrimental, though they still suffer the issue a little. So I run the drives independently and back them up regularly to other media, since reading them for a backup does not reduce their life at all.

The logs generated by Linux also produce a lot of writes that can shorten an SSD's lifetime, and for that reason all the logs on the server are remapped to a small throwaway SSD that is cheap to replace, a 60GB I got on sale for 30 bucks. When it dies I can throw in anything just to hold the logs. The servers are up to 8 SSDs and 2 SAS drives now.
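To make the parity cost concrete: a small write on RAID 5 is a read-modify-write cycle. The controller has to read the old data block and the old parity block, recompute the parity, and then write both the new data and the new parity, so one logical write turns into two device writes plus two extra reads. A rough Python sketch of the arithmetic, with made-up block contents just for illustration:

# RAID 5 small-write path: new parity = old parity XOR old data XOR new data
def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

BLOCK = 4096
old_data   = bytes(BLOCK)        # read #1: current contents of the data block
old_parity = bytes(BLOCK)        # read #2: current contents of the parity block
new_data   = b"\x01" * BLOCK     # the block the OS actually asked to write

new_parity = xor_blocks(xor_blocks(old_parity, old_data), new_data)

# write #1: new_data goes to the data drive
# write #2: new_parity goes to the parity drive
print("1 logical write cost 2 reads and 2 flash writes")

Those two extra reads are also why small random writes on RAID 5 feel so much slower than on a bare drive, on top of what the doubled writes do to the flash.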