So, recently I decided I was paying too much for my server because I wasn't maximizing performance across its various daemons. I decided to split my one large server into a handful of smaller servers so I could tune each one for a dedicated purpose. All went well, but I spent a few evenings figuring out how to forward localhost:3306 to the now-remote database server. This should have been dirt simple with an iptables rule – but after digging in, I discovered MySQL treats the host name "localhost" specially: it routes the connection through the unix socket file, which is indeed faster, but only works if the database daemon is on the same host as the connecting application.
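To make the distinction concrete, here's a small Python sketch (the socket path and the little echo server are invented for the demo): a unix-domain socket is addressed by a filesystem path, so it can only ever reach a daemon on the same machine, while a TCP socket is addressed by host and port, so the same connect call works locally or remotely.

```python
import os
import socket
import tempfile
import threading

def serve_once(server):
    """Accept one connection, say hello, hang up."""
    conn, _ = server.accept()
    conn.sendall(b"hello")
    conn.close()

# A unix-domain socket exists only as a path on the local filesystem,
# so it can never point at a daemon on another machine.
sock_path = os.path.join(tempfile.mkdtemp(), "demo.sock")
unix_srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
unix_srv.bind(sock_path)
unix_srv.listen(1)
threading.Thread(target=serve_once, args=(unix_srv,)).start()

unix_cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
unix_cli.connect(sock_path)
msg_unix = unix_cli.recv(5)
print(msg_unix)  # b'hello'

# A TCP socket is addressed by host:port, so swapping 127.0.0.1 for a
# remote IP is all it takes to reach another box.
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 0))  # port 0 = pick any free port
tcp_srv.listen(1)
port = tcp_srv.getsockname()[1]
threading.Thread(target=serve_once, args=(tcp_srv,)).start()

tcp_cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_cli.connect(("127.0.0.1", port))
msg_tcp = tcp_cli.recv(5)
print(msg_tcp)  # b'hello'
```

This is exactly why an iptables rule on localhost:3306 did nothing for me: the connection never touched TCP in the first place.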
After doing some research, I found it is possible to use tools like socat and autossh to wrap an ssh tunnel that forwards connections from the socket file to a remote IP over TCP. That, however, was more complex and one-off than I cared to maintain for my simple problem. I finally resorted to DNS and stopped using localhost as the host name altogether. A few tidbits for the weary traveler:
- The mysql client library is responsible for selecting the protocol.
- PHP's internal mysql libraries, as far as I could discover (please correct me if I am wrong here), unfortunately do not let you select the protocol.
- So if you're using "localhost" as the host name in a PHP mysql_connect, you're forced through the socket file. You can, however, use 127.0.0.1 instead of localhost to force TCP.
- The linux mysql-client package's command line tool offers a --protocol=tcp flag if you want to force TCP. You can also set this as a default inside /etc/mysql/my.cnf under the [client] heading:
```ini
[client]
port     = 3306
socket   = /var/run/mysqld/mysqld.sock
protocol = TCP
```
Again, this appears to work fine if you’re not using PHP as your client.
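For the curious, the socat/autossh approach I mentioned boils down to listening on a unix socket and piping the bytes to a TCP endpoint. Here's a minimal Python sketch of that idea; the socket path and the stand-in "database" (a one-shot ping/pong server) are invented for illustration, and a real setup would point the TCP side at the ssh tunnel's local port instead:

```python
import os
import socket
import tempfile
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until src closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def serve_unix_to_tcp(unix_path, tcp_host, tcp_port):
    """Accept one unix-socket connection and splice it onto a TCP peer,
    roughly what `socat UNIX-LISTEN:... TCP:...` does."""
    listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    listener.bind(unix_path)
    listener.listen(1)

    def run():
        client, _ = listener.accept()
        remote = socket.create_connection((tcp_host, tcp_port))
        threading.Thread(target=pipe, args=(client, remote), daemon=True).start()
        threading.Thread(target=pipe, args=(remote, client), daemon=True).start()

    threading.Thread(target=run, daemon=True).start()

# Stand-in "remote database": a TCP server that answers a single query.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def fake_db():
    conn, _ = srv.accept()
    if conn.recv(4) == b"ping":
        conn.sendall(b"pong")
    conn.close()

threading.Thread(target=fake_db, daemon=True).start()

# Bridge a local socket file to the TCP "database".
sock_path = os.path.join(tempfile.mkdtemp(), "mysqld.sock")
serve_unix_to_tcp(sock_path, "127.0.0.1", port)

# An application that insists on the socket file now transparently
# talks to the TCP server on the other side of the bridge.
app = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
app.connect(sock_path)
app.sendall(b"ping")
reply = app.recv(4)
print(reply)  # b'pong'
```

It works, but as I said above, keeping a bridge like this (plus the ssh tunnel under it) alive in production felt like more moving parts than just fixing the host name.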
I hope this lesson learned (use DNS) lends a helping hand to others out there. If anybody has other suggestions, please do leave a comment!