How To Optimise MySQL & Apache On cPanel/WHM

In this optimization process we will go over the core Apache configuration and the modules that ship with Apache. With the correct Apache and MySQL settings you can get excellent results and a sensible level of resource use without installing third-party proxy and cache modules. So let's start.


Apache & PHP

In the first stage, run EasyApache and select the following:

* Apache Version 2.4+

* PHP Version 5.4+

* In step 5, “Exhaustive Options List”, select:

– Deflate

– Expires

– MPM Prefork

– MPM Worker

After EasyApache finishes, go to WHM » Service Configuration » Apache Configuration » “Global Configuration” and set the values according to the resources available on your server.

Apache Directive            2GB or less   Mid-range   Up to 12GB

StartServers                4             8           16
MinSpareServers             4             8           16
MaxSpareServers             8             16          32
ServerLimit                 64            128         256
MaxRequestWorkers           50            120         250
MaxConnectionsPerChild      1000          2500        5000
Keep-Alive                  On            On          On
Keep-Alive Timeout          5             5           5
Max Keep-Alive Requests     50            120         120
Timeout                     30            60          60
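In raw httpd.conf terms, the low-memory column corresponds to directives like the following. This is only an illustrative sketch of where these values land; on cPanel servers WHM manages this file for you, so set them through the Global Configuration screen rather than by hand:

```apacheconf
# Prefork MPM tuning for a server with 2GB of RAM or less
<IfModule prefork.c>
    StartServers            4
    MinSpareServers         4
    MaxSpareServers         8
    ServerLimit             64
    MaxRequestWorkers       50
    MaxConnectionsPerChild  1000
</IfModule>

KeepAlive            On
KeepAliveTimeout     5
MaxKeepAliveRequests 50
Timeout              30
```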


Now go to WHM » Service Configuration » Apache Configuration » Include Editor » “Pre VirtualHost Include” and enable minimal caching and data compression for users, so the server does less work for the same content, by pasting the code below into the text field.

# Cache-Control settings for a one-hour cache
<IfModule mod_headers.c>
<FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
Header set Cache-Control "max-age=3600, public"
</FilesMatch>

<FilesMatch "\.(xml|txt)$">
Header set Cache-Control "max-age=3600, public, must-revalidate"
</FilesMatch>

<FilesMatch "\.(html|htm)$">
Header set Cache-Control "max-age=3600, must-revalidate"
</FilesMatch>
</IfModule>

# mod_deflate performs data compression
<IfModule mod_deflate.c>
<FilesMatch "\.(js|css|html|php|xml|jpg|png|gif)$">
SetOutputFilter DEFLATE
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE no-gzip
</FilesMatch>
</IfModule>


Go to WHM » Service Configuration » “PHP Configuration Editor” and set the parameters according to your needs:

– memory_limit

– max_execution_time

– max_input_time
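If you prefer editing php.ini directly, the same three directives look like this. The values below are illustrative starting points, not recommendations; choose them based on your applications:

```ini
memory_limit = 128M        ; maximum memory a single script may use
max_execution_time = 30    ; seconds a script may run before being killed
max_input_time = 60        ; seconds allowed for parsing request input
```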



For MySQL you need to update the configuration file, which is usually located at /etc/my.cnf.

Best config based on 1 core & 2GB memory, MySQL 5.5:

    # section headers assume the standard my.cnf layout
    [mysqld]
    local-infile = 0
    max_connections = 250
    key_buffer = 64M
    myisam_sort_buffer_size = 64M
    join_buffer_size = 1M
    read_buffer_size = 1M
    sort_buffer_size = 2M
    max_heap_table_size = 16M
    table_cache = 5000
    thread_cache_size = 286
    interactive_timeout = 25
    wait_timeout = 7000
    connect_timeout = 15
    max_allowed_packet = 16M
    max_connect_errors = 10
    query_cache_limit = 2M
    query_cache_size = 32M
    query_cache_type = 1
    tmp_table_size = 16M

    [mysqldump]
    max_allowed_packet = 16M

    [myisamchk]
    key_buffer = 64M
    sort_buffer = 64M
    read_buffer = 16M
    write_buffer = 16M


Best config based on 8 cores & 12GB memory (shared server), MySQL 5.5:

[mysqld]
max_connections = 600
key_buffer_size = 512M
myisam_sort_buffer_size = 64M
read_buffer_size = 1M
table_open_cache = 5000
thread_cache_size = 384
wait_timeout = 20
connect_timeout = 10
tmp_table_size = 256M
max_heap_table_size = 128M
max_allowed_packet = 64M
net_buffer_length = 16384
max_connect_errors = 10
concurrent_insert = 2
read_rnd_buffer_size = 786432
bulk_insert_buffer_size = 8M
query_cache_limit = 5M
query_cache_size = 128M
query_cache_type = 1
query_prealloc_size = 262144
query_alloc_block_size = 65535
transaction_alloc_block_size = 8192
transaction_prealloc_size = 4096
max_write_lock_count = 8


#### Per connection configuration ####
sort_buffer_size = 1M
join_buffer_size = 1M
thread_stack = 192K

[mysqldump]
max_allowed_packet = 16M

[myisamchk]
key_buffer = 384M
sort_buffer = 384M
read_buffer = 256M
write_buffer = 256M


Repair & optimize databases then restart MySQL:

mysqlcheck --check --auto-repair --all-databases
mysqlcheck --optimize --all-databases
/etc/init.d/mysql restart


Security & Limit Resources


Install CSF (ConfigServer Security & Firewall), then:

1) Go to WHM » Plugins » ConfigServer Security & Firewall » “Check Server Security” and fix whatever it flags as requiring repair.

2) Go to WHM » Plugins » ConfigServer Security & Firewall » “Firewall Configuration” and set the parameters according to your needs.






Now enjoy your new, faster, and more efficient server.

How To Use Rsync To Backup Your Data

rsync is a protocol built for Unix-like systems that provides unbelievable versatility for backing up and synchronizing data.  It can be used locally to back up files to different directories or can be configured to sync across the Internet to other hosts.

It can be used on Windows systems but is only available through various ports (such as Cygwin), so in this how-to we will be talking about setting it up on Linux.  First, we need to install/update the rsync client.  On Red Hat distributions, the command is “yum install rsync” and on Debian it is “sudo apt-get install rsync.”

On Red Hat/CentOS, run the command after logging in as root (some recent Red Hat releases also support the sudo method); on Debian/Ubuntu, sudo is used as shown.

Using rsync for local backups

In the first part of this tutorial, we will back up the files from Directory1 to Directory2. Both of these directories are on the same hard drive, but this would work exactly the same if the directories existed on two different drives. There are several different ways we can approach this, depending on what kind of backups you want to configure. For most purposes, the following line of code will suffice:

$ rsync -av --delete /Directory1/ /Directory2/

The code above will synchronize the contents of Directory1 to Directory2, and leave no differences between the two. If rsync finds that Directory2 has a file that Directory1 does not, it will delete it. If rsync finds a file that has been changed, created, or deleted in Directory1, it will reflect those same changes to Directory2.

There are a lot of different switches that you can use for rsync to personalize it to your specific needs. Here is what the aforementioned code tells rsync to do with the backups:

1. -a = recursive (recurse into directories), links (copy symlinks as symlinks), perms (preserve permissions), times (preserve modification times), group (preserve group), owner (preserve owner), preserve device files, and preserve special files.
2. -v = verbose. The reason I think verbose is important is so you can see exactly what rsync is backing up. Think about this: What if your hard drive is going bad, and starts deleting files without your knowledge, then you run your rsync script and it pushes those changes to your backups, thereby deleting all instances of a file that you did not want to get rid of?
3. --delete = This tells rsync to delete any files in Directory2 that aren't in Directory1. If you choose to use this option, I recommend also using the verbose option, for the reasons mentioned above.

Using the command above, here's the kind of output rsync generates when backing up Directory1 to Directory2. Note that without the verbose switch, you wouldn't receive such detailed information.


In this example, File1.txt and File2.jpg were detected as either new or changed from the copies in Directory2, and so they were backed up. Noob tip: notice the trailing slashes at the end of the directories in the rsync command; those are necessary, so be sure to remember them.

We will go over a few more handy switches towards the end of this tutorial, but just remember that to see a full listing you can type “man rsync” and view a complete list of switches to use.

That about covers it as far as local backups are concerned. As you can tell, rsync is very easy to use. It gets slightly more complex when using it to sync data with an external host over the Internet, but we will show you a simple, fast, and secure way to do that.

Using rsync for external backups

rsync can be configured in several different ways for external backups, but we will go over the most practical (also the easiest and most secure) method of tunneling rsync through SSH. Most servers and even many clients already have SSH, and it can be used for your rsync backups. We will show you the process to get one Linux machine to backup to another on a local network. The process would be the exact same if one host were out on the internet somewhere, just note that port 22 (or whatever port you have SSH configured on), would need to be forwarded on any network equipment on the server’s side of things.

On the server (the computer that will be receiving the backups), make sure SSH and rsync are installed.

# yum -y install ssh rsync

# sudo apt-get install ssh rsync

Other than installing SSH and rsync on the server, all that really needs to be done is to setup the repositories on the server where you would like the files backed up, and make sure that SSH is locked down. Make sure the user you plan on using has a complex password, and it may also be a good idea to switch the port that SSH listens on (default is 22).

We will run the same command that we did for the local backup, but include the additions needed to tunnel rsync through SSH to a server on the local network. For user “geek”, connecting to the remote server and using the same switches as above (-av --delete), we will run the following:

$ rsync -av --delete -e ssh /Directory1/ [email protected]:/Directory2/

If you have SSH listening on some port other than 22, you would need to specify the port number, such as in this example where I use port 12345:

$ rsync -av --delete -e 'ssh -p 12345' /Directory1/ [email protected]:/Directory2/
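If you back up regularly, the host, port, and user can live in ~/.ssh/config instead of on the command line. The alias backupbox and all the values below are made up for illustration:

```
Host backupbox
    HostName 192.0.2.10   # your server's address
    Port 12345            # non-default SSH port
    User geek
```

With that in place, `rsync -av --delete -e ssh /Directory1/ backupbox:/Directory2/` picks up the port and user automatically.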


As the output shows, backing up across the network looks much the same as backing up locally; only the command changes. Notice also that rsync prompts for a password. This is to authenticate with SSH. You can set up RSA keys to skip this step, which will also simplify automating rsync.

Automating rsync backups

Cron can be used on Linux to automate the execution of commands, such as rsync. Using Cron, we can have our Linux system run nightly backups, or however often you would like them to run.

To edit the cron table file for the user you are logged in as, run:

$ crontab -e

You will need to be familiar with vi in order to edit this file. Type “i” for insert mode, and then begin editing the cron table file.

Cron uses the following syntax: minute of the hour, hour of the day, day of the month, month of the year, day of the week, command.

It can be a little confusing at first, so let me give you an example. The following command will run the rsync command every night at 10 PM:

0 22 * * * rsync -av --delete /Directory1/ /Directory2/

The first “0” specifies the minute of the hour, and “22” specifies 10 PM. Since we want this command to run daily, we will leave the rest of the fields with asterisks and then paste the rsync command.
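Annotated, the crontab entry breaks down as:

```
# +------------ minute (0-59):       0  = on the hour
# | +---------- hour (0-23):         22 = 10 PM
# | |  +------- day of month (1-31): *  = every day
# | |  | +----- month (1-12):        *  = every month
# | |  | | +--- day of week (0-6):   *  = every weekday, Sunday = 0
# | |  | | |
  0 22 * * * rsync -av --delete /Directory1/ /Directory2/
```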

After you are done configuring Cron, press escape, and then type “:wq” (without the quotes) and press enter. This will save your changes in vi.

Cron can get a lot more in-depth than this, but to go on about it would be beyond the scope of this tutorial. Most people will just want a simple weekly or daily backup, and what we have shown you can easily accomplish that. For more info about Cron, please see the man pages.

Other useful features

Another useful thing you can do is put your backups into a zip file. You will need to specify where you would like the zip file to be placed, and then rsync that directory to your backup directory. For example:

$ zip -r /ZippedFiles/archive.zip /Directory1/ && rsync -av --delete /ZippedFiles/ /Directory2/


The command above takes the files from Directory1, puts them in /ZippedFiles/ and then rsyncs that directory to Directory2. Initially, you may think this method would prove inefficient for large backups, considering the zip file will change every time the slightest alteration is made to a file. However, rsync only transfers the changed data, so if your zip file is 10 GB, and then you add a text file to Directory1, rsync will know that is all you added (even though it’s in a zip) and transfer only the few kilobytes of changed data.

There are a couple of different ways you can encrypt your rsync backups. The easiest method is to install encryption on the hard drive itself (the one that your files are being backed up to). Another way is to encrypt your files before sending them to a remote server (or other hard drive, whatever you happen to be backing up to). We’ll cover these methods in later articles.

Whatever options and features you choose, rsync proves to be one of the most efficient and versatile backup tools to date, and even a simple rsync script can save you from losing your data.

How To Setup WHMCS – One Installation, Multiple Domains

I was running six WHMCS installations, all on separate servers. Maintaining each one and having to manually upgrade it was ridiculous. So I used brain power and combined them into one installation.

For this example I will be using two domains, referred to below as domain1 and domain2. This setup can handle as many as you need. I am assuming you have root access to a basic Linux server.

Step By Step

– Create a user “whmcs” on the server
– – This user's home directory should be /home/whmcs

# adduser -d /home/whmcs whmcs

– Extract the WHMCS script into /home/whmcs/master
– – This will be the single install of WHMCS and its root web directory

– Create symbolic links
– – These are used for DOCUMENT_ROOT reference

# ln -s master /home/whmcs/domain1
# ln -s master /home/whmcs/domain2
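The symlink layout can be sanity-checked with a throwaway copy. The paths here are temporary, not the real /home/whmcs:

```shell
# Toy demonstration: two "docroots" pointing at one master install.
base=$(mktemp -d)
mkdir "$base/master"
echo "single WHMCS install" > "$base/master/index.php"
ln -s master "$base/domain1"    # domain1 docroot -> master
ln -s master "$base/domain2"    # domain2 docroot -> master
cat "$base/domain1/index.php"   # the same file is visible through either docroot
cat "$base/domain2/index.php"
```

Both symlinked paths resolve to the one master directory, which is exactly what lets a single WHMCS codebase serve every domain.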

– Edit /home/whmcs/master/configuration.php to the following
– – We are creating logic to include a different configuration file depending on which domain is being visited. (The configuration_domain*.php file names are illustrative; use whatever names you create in the next step.)

if ($_SERVER["DOCUMENT_ROOT"] == '/home/whmcs/domain1') {
    require __DIR__ . '/configuration_domain1.php';
}
if ($_SERVER["DOCUMENT_ROOT"] == '/home/whmcs/domain2') {
    require __DIR__ . '/configuration_domain2.php';
}

– Create and edit your new configuration files
– – Edit the settings as necessary, but make sure to use each domain's own values where specified


// configuration_domain1.php
$db_host = "";
$db_username = "";
$db_password = "";
$db_name = "whmcs_domain1";
$cc_encryption_hash = "";
$templates_compiledir = "/home/whmcs/master/templates_domain1_c";
$api_access_key = "";
$display_errors = true;


// configuration_domain2.php
$db_host = "";
$db_username = "";
$db_password = "";
$db_name = "whmcs_domain2";
$cc_encryption_hash = "";
$templates_compiledir = "/home/whmcs/master/templates_domain2_c";
$api_access_key = "";
$display_errors = true;

– Create template caching dirs for the domains, then chmod them 777
– – In the WHMCS devs' wisdom, these directories are required to be world-writable (777).

# mkdir /home/whmcs/master/templates_domain1_c
# mkdir /home/whmcs/master/templates_domain2_c
# chmod 777 /home/whmcs/master/templates_domain1_c
# chmod 777 /home/whmcs/master/templates_domain2_c


Now your configuration and directory structure are complete. WHMCS is ready to serve both domains from one installation.

The easiest part of this is setting up the web server. You can use anything: Nginx, Apache, etc. All you do is set each document root to one of the symbolic links you created.

Your directory structure should look something like this.

# ll /home/whmcs
drwxr-xr-x 21 whmcs whmcs    4096 Jan 21 13:49 master
lrwxrwxrwx  1 root  root        7 Dec 11 11:41 domain1 -> master/
lrwxrwxrwx  1 root  root        7 Dec 11 11:41 domain2 -> master/


Just to give you an idea, this is what an Nginx configuration could look like.

server {
    listen       123.456.789.1:443;
    root         /home/whmcs/domain1;
    access_log   /var/log/nginx/domain1.access.log;   # log file names are illustrative
    error_log    /var/log/nginx/domain1.error.log;
    ...more stuff below...
}
server {
    listen       123.456.789.2:443;
    root         /home/whmcs/domain2;
    access_log   /var/log/nginx/domain2.access.log;
    error_log    /var/log/nginx/domain2.error.log;
    ...more stuff below...
}


Now when you visit domain1 or domain2, both will be accessing the same WHMCS installation. You can change the themes and templates as needed through the admin interface.

For cron job setup, make sure to use wget with the full domain name so that each configuration is loaded correctly.

Cron example:

0 8 * * *  /usr/bin/wget -O /dev/null >/dev/null 2>&1
0 9 * * *  /usr/bin/wget -O /dev/null >/dev/null 2>&1


This setup works flawlessly. We’ve had zero problems and have gone through two WHMCS updates already.


Linux Static IP Address Configuration

How do I configure the Internet Protocol version 4 (IPv4) properties of a network connection with a static IP address on a server running Linux? How do I configure a static IP address under a Debian Linux or Red Hat / RHEL / Fedora / CentOS server?

You need to update and/or edit the network configuration files. This tutorial provides procedures to configure a static IP address on a computer running the following operating systems:

  1. RHEL / Red hat / Fedora / CentOS Linux eth0 config file – /etc/sysconfig/network-scripts/ifcfg-eth0
  2. RHEL / Red hat / Fedora / CentOS Linux eth1 config file – /etc/sysconfig/network-scripts/ifcfg-eth1
  3. Debian / Ubuntu Linux – /etc/network/interfaces

Sample Setup: Linux Static TCP/IP Settings

In this example you will use the following Internet Protocol Version 4 (TCP/IPv4) Properties including IP, default gateway, and preferred DNS servers:

  • IP address:
  • Netmask:
  • Hostname:
  • Domain name:
  • Gateway IP:
  • DNS Server IP # 1:
  • DNS Server IP # 2:
  • DNS Server IP # 3:

RHEL / Red hat / Fedora / CentOS Linux Static IP Configuration

For static IP configuration you need to edit the following files using a text editor such as vi. Edit /etc/sysconfig/network as follows, enter:
# cat /etc/sysconfig/network
Sample static ip configuration:


Edit /etc/sysconfig/network-scripts/ifcfg-eth0, enter:
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
Sample static ip configuration:

# Intel Corporation 82573E Gigabit Ethernet Controller (Copper)

Edit /etc/resolv.conf and setup DNS servers, enter:
# cat /etc/resolv.conf
Sample static IP configurations:
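As an illustration, a complete Red Hat-style static setup using documentation addresses from the 192.0.2.0/24 range could look like the following; the hostname, domain, and every address below are placeholders to substitute with your own values:

```
# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=server1.example.com
GATEWAY=192.0.2.1

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.0.2.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/resolv.conf
search example.com
nameserver 192.0.2.53
nameserver 192.0.2.54
```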


Finally, you need to restart the networking service, enter:
# /etc/init.d/network restart
To verify new static ip configuration for eth0, enter:
# ifconfig eth0
# route -n
# ping
# ping

Debian / Ubuntu Linux Static IP Configuration

Edit /etc/hostname, enter:
# cat /etc/hostname
Sample ip config:

Edit /etc/network/interfaces, enter:
# cat /etc/network/interfaces
Sample static ip config:

iface eth0 inet static

Edit /etc/resolv.conf and setup DNS servers, enter:
# cat /etc/resolv.conf
Sample dns static IP configurations:
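For the Debian / Ubuntu side, an illustrative static configuration with the same placeholder addresses (substitute your own hostname, IPs, and DNS servers) might be:

```
# /etc/hostname
server1

# /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1

# /etc/resolv.conf
search example.com
nameserver 192.0.2.53
```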


Finally, you need to restart the networking service under Debian / Ubuntu Linux, enter:
# /etc/init.d/networking restart

Type the following commands to verify your new setup, enter:
# ifconfig eth0
# route -n
# ping

How To Change The Primary IP Address Of A WHM/cPanel Server

Steps in WHM:

  • Log into WHM and go to Basic cPanel & WHM Setup
  • Change the primary IP here with the option that says “The IP address (only one address) that will be used for setting up shared IP virtual hosts”
  • Note: This might not actually be necessary.

Log in to SSH, and do the following:

  1. Edit /etc/sysconfig/network-scripts/ifcfg-eth0
    • Change the IPADDR and GATEWAY lines to match the new IP and gateway
  2. Edit /etc/sysconfig/network
    • Change the GATEWAY line here if it does not exist in the ifcfg-* file.
  3. Edit /etc/ips
    • Remove the new primary IP from this file if it is present
    • Add the old primary IP to this file with the format <IP address>:<Net Mask>:<Gateway>
  4. Edit /var/cpanel/mainip
    • Replace the old primary IP with the new primary IP
  5. Edit /etc/hosts
    • Replace the old primary IP with the new one if needed. The hostname's DNS will need to be updated too
  6. Restart the network service to make the new IP the primary
    • service network restart
    • Note: You’re probably going to be disconnected at this point, and have to log in to ssh using the new primary ip.
  7. Restart the ipaliases script to bring up the additional IPs
    • service ipaliases restart
  8. Run ifconfig and make sure all IPs show up correctly
  9. Update the cpanel license to the new primary IP
  10. Verify you can still log in to WHM and there is no license warning
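For step 3, an /etc/ips entry follows the &lt;IP address&gt;:&lt;Net Mask&gt;:&lt;Gateway&gt; format described above; the addresses below are illustrative:

```
# /etc/ips -- one additional IP per line (example addresses)
192.0.2.20:255.255.255.0:192.0.2.1
```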