25 January 2013

MySQL 5.6: Testing innodb_read_only with multiple instances

There are many good things in the upcoming MySQL 5.6 release. One thing that caught my eye early on was the ability to start the server with InnoDB in read-only mode.

A few months ago Todd Farmer wrote about this ability from the perspective of setting up an instance on read-only media (InnoDB now works with read-only media). And I encourage you to read that post first.
I decided to test this from a data warehouse perspective (as Sunny Bains suggests in a comment on Todd's post). I used the machines I had available: an Oracle Linux 6 desktop running MySQL 5.6.9 and a MacBook Pro running OS X 10.8.

First thing to try was adding the innodb_read_only parameter to the config file of an instance I already had running on the Linux box. My settings:

[mysqld]
user = mysql
port = 5602
basedir = /opt/mysql/5.6.9rc
datadir = /data/5.6.ro
socket = /tmp/mysql5602.sock
tmpdir = /tmp
pid-file = /tmp/mysql_5602.pid
log-error = /tmp/mysql_5602.err
#
innodb-read-only
lower_case_table_names = 1
innodb-log-file-size = 256M
innodb_buffer_pool_size = 2G
innodb_change_buffering = none


One thing you may notice is that I have the log file size set to 256M, which is really a waste since there will be no logging taking place. I'll explain why towards the end of this post.

So, nothing unexpected after starting the instance with the settings above. SELECTs work fine but DML operations fail:

craig@localhost[5.6_r02] (employees) > select * from departments where dept_no = 'd001';
+---------+-----------+
| dept_no | dept_name |
+---------+-----------+
| d001    | Marketing |
+---------+-----------+
1 row in set (0.00 sec)

craig@localhost[5.6_r02] (employees) > delete from departments where dept_no = 'd001';
ERROR 1015 (HY000): Can't lock file (errno: 165 - Table is read only)
craig@localhost[5.6_r02] (employees) >


OK, now to set up a separate instance and point it to the same data directory.
The my.cnf file for the second instance:

[mysqld]
user            = mysql
port            = 5601
socket          = /tmp/mysql5601.sock
basedir         = /opt/mysql/5.6.9rc
datadir         = /data/5.6.ro
tmpdir          = /tmp
pid-file        = /tmp/mysql_5601.pid
log-error       = /tmp/mysql_5601.err
#
innodb-read-only
innodb-log-file-size    = 256M
innodb_buffer_pool_size = 2G
innodb_change_buffering = none
lower_case_table_names  = 1

Again, try to DELETE from the employees.departments table:

craig@localhost[5.6_r01] (employees) > select * from departments where dept_no = 'd001';
+---------+-----------+
| dept_no | dept_name |
+---------+-----------+
| d001    | Marketing |
+---------+-----------+
1 row in set (0.00 sec)

craig@localhost[5.6_r01] (employees) > delete from departments where dept_no = 'd001';
ERROR 1015 (HY000): Can't lock file (errno: 165 - Table is read only)
craig@localhost[5.6_r01] (employees) >

So, two instances using the same InnoDB data. Now to try setting up an instance on a separate machine accessing the same datadir on my Linux server. After learning just enough NFS to be dangerous and flailing around for half a day, I was finally able to get the exported NFS file system mounted on my laptop. On the Linux side (the NFS server), the /etc/exports file has the following entry:

[craig@craighp msb_56_ro2]$ cat /etc/exports
/data  craig-mbp(rw,sync,insecure)
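
One small note: after editing /etc/exports on a running NFS server, re-exporting everything (as root) is just:

exportfs -ra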

Mounting this directory from the laptop:

mount -t nfs -o noowners,rw craighp:/data/5.6.ro /Volumes/data_56
The my.cnf on the laptop, the server startup, and the SELECT/DELETE test all followed the same pattern as on the Linux instances: I could read the data, but changes were not allowed (at least to InnoDB data).
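
In outline, the laptop config mirrored the ones above, with the datadir pointed at the NFS mount (the port, socket, and basedir below are placeholders; the datadir is the mount point from the command above):

[mysqld]
port      = 5603                    # placeholder
socket    = /tmp/mysql5603.sock     # placeholder
basedir   = /usr/local/mysql-5.6.9  # placeholder install path on the laptop
datadir   = /Volumes/data_56
tmpdir    = /tmp
#
innodb-read-only
innodb-log-file-size    = 256M
innodb_buffer_pool_size = 2G
innodb_change_buffering = none
lower_case_table_names  = 1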

So, multiple instances across multiple machines accessing one data directory of InnoDB data. May have some uses in a data warehouse, operational data store, or reporting environment by spreading processing across servers and saving disk space.

Now, back to why I set innodb_log_file_size = 256M. I wouldn't necessarily recommend this, but I wanted to see if I could restart one of my instances as a read-write (RW) instance by commenting out the innodb_read_only parameter. And I could. And I could make changes, load new data, delete old data, etc. After making your changes, you can restart the instance as read only. While this is going on, the other instances are blissfully unaware and will see none of it until they are restarted. I will reiterate that this worked in my limited testing. Production is an entirely different beast.
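
The cycle, roughly (the shutdown/startup commands depend on how the instances are managed, so treat the paths here as placeholders):

# 1. comment out innodb-read-only in the RW candidate's my.cnf, then bounce it
mysqladmin --socket=/tmp/mysql5602.sock shutdown
mysqld_safe --defaults-file=/path/to/my.cnf &    # placeholder path
# 2. load new data, purge old data, etc. on that instance
# 3. put innodb-read-only back and restart it read only
# 4. restart the other read-only instances so they pick up the refreshed data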

The MySQL server and InnoDB teams have done an amazing job with the 5.6 release. Kudos. And thanks to Giuseppe Maxia for his MySQL Sandbox toolkit. Spinning up test instances is fast and easy.

01 July 2009

Status 96 error on svc.startd

I've been working with SMF on and off for a few weeks now and still get stung by this error. If you see something like this in the SVC log after enabling a service (e.g., svcadm enable mysql):

bash-3.00# cat /var/svc/log/application-database-mysql:v51.log
[ Jul 1 07:36:37 Enabled. ]
[ Jul 1 07:36:37 Executing start method ("/lib/svc/method/mysql start") ]
svc.startd could not set context for method: chdir: No such file or directory
[ Jul 1 07:36:37 Method "start" exited with status 96 ]


head over to the manifest file for the service and check the method_context section. In my case, I had no working_directory attribute set, so adding it solved the problem.

Before:

<method_context>
<method_credential user='mysql' group='mysql' />
</method_context>

After:

<method_context working_directory='/usr/local/mysql'>
<method_credential user='mysql' group='mysql' />
</method_context>

After making this change, don't forget to import the manifest:
svccfg import /var/svc/manifest/application/database/mysql.xml
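
If the failed start method left the service in the maintenance state, clear it after the import and SMF will retry:
svcadm clear mysql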


-- Craig

29 June 2009

Dynamically add Cluster data nodes using Solaris SMF

As a follow-on to my previous post on using Solaris SMF to manage Cluster, I'll expand my setup and show how to dynamically add data nodes to Cluster and control the new nodes with SMF.

First, the setup. Since I wanted something that would run completely on my laptop, I decided to clone my current OpenSolaris VM (hostname = craigOS_0609) to create a second "machine" (hostname = craigOS_0609_vm2). I thought this would be easy, and it was, except for the networking piece. When I had just a single VM, NAT worked fine. However, running two OpenSolaris VMs with NAT turned out to be problematic: they both ended up with the same 10.0.2.15 IP address. I researched it a bit and didn't find anything definitive, but bridged networking looked like the way to go. After futzing with it for a day or so, simply setting both VMs to use bridged networking allowed each to obtain an IP address from my router, and the two VMs could communicate with no problems.

One thing I was hoping to do was control the startup of the data nodes on the second VM from the SMF services on the primary VM. I was also hoping I could define a dependency that spanned machines, so the new data nodes would require the Management Server on the primary machine to be running before starting. It turns out that is currently not supported but is a planned feature.

OK, first things first. Modify the current Cluster config file to have entries for the two new data nodes we will add:

[ndbd default]
NoOfReplicas= 2
MaxNoOfConcurrentOperations= 10000
DataMemory= 80M
IndexMemory= 24M
TimeBetweenWatchDogCheck= 30000
DataDir= /usr/local/mysql/data_cluster
MaxNoOfOrderedIndexes= 512

[ndb_mgmd default]
DataDir= /usr/local/mysql/data_cluster

[ndb_mgmd]
Id=10
HostName= craigOS_0609

[ndbd]
Id= 1
HostName= craigOS_0609
[ndbd]
Id= 2
HostName= craigOS_0609
[ndbd]
Id= 3
HostName= 192.168.1.10
[ndbd]
Id= 4
HostName= 192.168.1.10

[mysqld]
Id= 20
[mysqld]
Id= 21
[mysqld]
Id= 22
[mysqld]
Id= 23

Restart the management server and wait a bit for it to load the new configuration:

# svcadm restart mysql_ndb_mgmd
# ndb_mgm -e show
Connected to Management Server at: craigOS_0609:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 4 node(s)
id=1 @192.168.1.8  (mysql-5.1.32 ndb-7.0.5, Nodegroup: 0, Master)
id=2 @192.168.1.8  (mysql-5.1.32 ndb-7.0.5, Nodegroup: 0)
id=3 (not connected, accepting connect from 192.168.1.10)
id=4 (not connected, accepting connect from 192.168.1.10)

[ndb_mgmd(MGM)] 1 node(s)
id=10 @192.168.1.8  (mysql-5.1.32 ndb-7.0.5)

[mysqld(API)] 4 node(s)
id=20 @192.168.1.8  (mysql-5.1.32 ndb-7.0.5)
id=21 (not connected, accepting connect from any host)
id=22 (not connected, accepting connect from any host)
id=23 (not connected, accepting connect from any host)
#
Now we need to create a new nodegroup with our newly added nodes:
# ndb_mgm -e "create nodegroup 3,4"
Connected to Management Server at: craigOS_0609:1186
Nodegroup 1 created
#
Checking memory usage in Cluster shows that the new nodes are recognized but currently have no data:

# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> all report memory
Connected to Management Server at: craigOS_0609:1186

ndb_mgm> Node 1: Data usage is 2%(58 32K pages of total 2560)
Node 1: Index usage is 1%(36 8K pages of total 3104)
Node 2: Data usage is 2%(58 32K pages of total 2560)
Node 2: Index usage is 1%(36 8K pages of total 3104)
Node 3: Data usage is 0%(16 32K pages of total 2560)
Node 3: Index usage is 0%(0 8K pages of total 3104)
Node 4: Data usage is 0%(16 32K pages of total 2560)
Node 4: Index usage is 0%(0 8K pages of total 3104)

ndb_mgm> quit
#

Also, checking the statistics for one of our Cluster tables confirms that we are still only using the original two data nodes:
# ndb_desc -c craigOS_0609 -d world City -p
-- City --
Version: 1
Fragment type: 9
K Value: 6
Min load factor: 78
Max load factor: 80
Temporary table: no
Number of attributes: 5
Number of primary keys: 1
Length of frm data: 324
Row Checksum: 1
Row GCI: 1
SingleUserMode: 0
ForceVarPart: 1
FragmentCount: 2
TableStatus: Retrieved
-- Attributes -- 
ID Int PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY AUTO_INCR
Name Char(35;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
CountryCode Char(3;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
District Char(20;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
Population Int NOT NULL AT=FIXED ST=MEMORY

-- Indexes -- 
PRIMARY KEY(ID) - UniqueHashIndex
PRIMARY(ID) - OrderedIndex

-- Per partition info -- 
Partition Row count Commit count Frag fixed memory Frag varsized memory 
0          2084      2084         196608            0                     
1         1995      1995         196608            0                     

NDBT_ProgramExit: 0 - OK

OK, now it is time to add our new data nodes. Since I cloned the primary VM, all the software I needed was already installed and ready to go in the new VM.

Step 0: Make sure the following services are disabled ("svcadm disable <service>"): mysql_ndb_mgmd, mysql_ndbd:node1, mysql_ndbd:node2, mysql

Also, make sure the connect-string parameters in the /etc/mysql/my.cnf file are set to point to the primary VM (the one running our management server):
...
[mysqld]
ndbcluster
ndb-connectstring = craigOS_0609
...
[ndbd]
connect-string = craigOS_0609
[ndb_mgm]
connect-string = craigOS_0609
...
Step 1: Remove the SMF service for the management node, since only the one on the primary VM will be used:
# svccfg delete mysql_ndb_mgmd
Step 2: Since our new nodes will be 3 and 4, let's just remove the SMF services for node1 and node2. In the next step we will modify the manifest and rename the nodes to node3 and node4.
# svccfg delete mysql_ndbd:node1
# svccfg delete mysql_ndbd:node2
Step 3: Change the manifest for the data nodes and import into the SMF repository:
# cd /var/svc/manifest/application/database
# vi mysql_ndbd.xml
For each of the two instances defined, make sure the mgmd_hosts and node_id properties are set correctly:

...
      <instance enabled="false" name="node3">

        <method_context>
          <method_credential group="mysql" user="mysql"/>
        </method_context>

        <property_group name="cluster" type="application">
          <propval name="bin" type="astring" value="/usr/local/mysql/bin"/>
          <propval name="data" type="astring" value="/usr/local/mysql/data_cluster"/>
          <propval name="mgmd_hosts" type="astring" value="craigOS_0609"/>
          <propval name="node_id" type="integer" value="3"/>
        </property_group>

      </instance>

      <instance enabled="false" name="node4">

        <method_context>
          <method_credential group="mysql" user="mysql"/>
        </method_context>

        <property_group name="cluster" type="application">
          <propval name="bin" type="astring" value="/usr/local/mysql/bin"/>
          <propval name="data" type="astring" value="/usr/local/mysql/data_cluster"/>
          <propval name="mgmd_hosts" type="astring" value="craigOS_0609"/>
          <propval name="node_id" type="integer" value="4"/>
        </property_group>

      </instance>
...
Step 4: We can now import our settings for our new data nodes and enable them:
# svccfg import mysql_ndbd.xml
# svcadm enable mysql_ndbd:node3
# svcadm enable mysql_ndbd:node4
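Before going back to the management client, a quick check on the SMF side (same svcs usage as in the previous post) should show both new instances with "state online":
# svcs -l mysql_ndbd:node3
# svcs -l mysql_ndbd:node4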
Step 5: Now let's check if our new nodes have joined the Cluster (this can be done from either VM):
# ndb_mgm -e show
Connected to Management Server at: craigOS_0609:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 4 node(s)
id=1 @192.168.1.8  (mysql-5.1.32 ndb-7.0.5, Nodegroup: 0, Master)
id=2 @192.168.1.8  (mysql-5.1.32 ndb-7.0.5, Nodegroup: 0)
id=3 @192.168.1.10  (mysql-5.1.32 ndb-7.0.5, Nodegroup: 1)
id=4 @192.168.1.10  (mysql-5.1.32 ndb-7.0.5, Nodegroup: 1)

[ndb_mgmd(MGM)] 1 node(s)
id=10 @192.168.1.8  (mysql-5.1.32 ndb-7.0.5)

[mysqld(API)] 4 node(s)
id=20 @192.168.1.8  (mysql-5.1.32 ndb-7.0.5)
id=21 @192.168.1.10  (mysql-5.1.32 ndb-7.0.5)
id=22 (not connected, accepting connect from any host)
id=23 (not connected, accepting connect from any host)

So, things look good so far. And you can see that I also started a second MySQL Server (id=21) from the new VM.

For the last step, we will redistribute the existing Cluster tables to take advantage of the new nodes. Otherwise, only data added after the new nodes come online will be placed on the new nodes.

Our sample "world" database contains the clustered tables, so it's easy enough just to type in the three commands:

# mysql world
mysql> alter online table City reorganize partition;
mysql> alter online table Country reorganize partition;
mysql> alter online table CountryLanguage reorganize partition;
mysql> quit
#

Now that that's done, let's recheck our table distribution using ndb_desc:
# ndb_desc -c craigOS_0609 -d world City -p
-- City --
Version: 16777217
Fragment type: 9
K Value: 6
Min load factor: 78
Max load factor: 80
Temporary table: no
Number of attributes: 5
Number of primary keys: 1
Length of frm data: 324
Row Checksum: 1
Row GCI: 1
SingleUserMode: 0
ForceVarPart: 1
FragmentCount: 4
TableStatus: Retrieved
-- Attributes -- 
ID Int PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY AUTO_INCR
Name Char(35;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
CountryCode Char(3;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
District Char(20;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
Population Int NOT NULL AT=FIXED ST=MEMORY

-- Indexes -- 
PRIMARY KEY(ID) - UniqueHashIndex
PRIMARY(ID) - OrderedIndex

-- Per partition info -- 
Partition Row count Commit count Frag fixed memory Frag varsized memory 
0          1058      4136         196608            0                     
1         1018      3949         196608            0                     
2         1026      1026         98304             0                     
3         977       977          98304             0                     

NDBT_ProgramExit: 0 - OK
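
One follow-up: the per-partition output above shows the two original fragments still holding their old fixed memory. Running OPTIMIZE TABLE against each reorganized table should reclaim that space, e.g.:

mysql> optimize table City;
mysql> optimize table Country;
mysql> optimize table CountryLanguage;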


That's it!

18 June 2009

Convert single row INSERTs to multi-row INSERT

Ronald's post regarding multi-row INSERTs with MySQL reminded me of a quick utility I wrote about a year ago. I wanted to test for myself how much performance can be gained over singleton INSERTs. The Perl script below takes the output from mysqldump and produces a file for each table, containing the table DDL and the associated INSERTs rewritten as multi-row statements. Thought others may find it useful, so here it is.

-------- cut here ---------

#!/usr/bin/perl -W
#
##
##------------------------------------------------------------------------
## multi_inserts.pl
## ------------
##
## Take singular INSERT statements and create multi-row INSERT statements.
##------------------------------------------------------------------------
##
# Modification History
# --------------------
# C Sylvester, 3 July 2008, Initial release

use strict;

my $VALUES_PER_INSERT = 50;
my $MAX_ROW_LENGTH = $VALUES_PER_INSERT * 1000;

my $initial_comments = "";
my $table = "";
my $cnt = 0;
my $multi_values = "";
my $multi_row_length = 0;
my $total_row_count = 0;

my $t = "";
my $v = "";

open( PF, $ARGV[0] ) or die "\"$ARGV[0]\": $!\nUsage: $0 <mysqldump_file>\n";

LINE: while (<PF>) {

    # Collect the VALUES portion of each single-row INSERT and emit a
    # multi-row INSERT every $VALUES_PER_INSERT rows (or sooner if the
    # combined length exceeds $MAX_ROW_LENGTH).
    if ( /^INSERT INTO `(\w+)` VALUES (.*);/ ) {
        $t = $1;
        $v = $2;
        if ( $table eq "" ) {
            $table = lc($t) . ".sql";
            open( DDL, '>>', $table ) or die "Cannot open file for appending\n$!\n";
        }

        $multi_values .= $v . ",";
        $multi_row_length += length($v);
        if ( ++$cnt == $VALUES_PER_INSERT || $multi_row_length >= $MAX_ROW_LENGTH ) {
            $multi_values =~ s/,$/\;/;
            print DDL "INSERT INTO ", $t, " VALUES ", $multi_values, "\n";
            $total_row_count += $cnt;
            $cnt = 0;
            $multi_values = "";
            $multi_row_length = 0;
        }

        next LINE;
    }

    # Non-INSERT line: flush any rows still buffered for the previous table.
    if ( $cnt != 0 ) {
        $multi_values =~ s/,$/\;/;
        print DDL "INSERT INTO ", $t, " VALUES ", $multi_values, "\n";
        $total_row_count += $cnt;
        $cnt = 0;
        $multi_values = "";
        $multi_row_length = 0;
    }

    # Capture the header comments at the top of the dump so they can be
    # replayed at the start of each per-table file.
    if ( /-- MySQL dump .*/ ) {
        do {
            $initial_comments .= $_;
            $_ = <PF>;
        } until ( /-- Table structure for table .*/ );
    }

    # New table section: start a new per-table output file.
    if ( /-- Table structure for table `(\w+)`/ ) {
        $table = lc($1) . ".sql";

        close(DDL);
        open( DDL, '+>', $table ) or die "Cannot open file for writing\n$!\n";

        print DDL $initial_comments, "\n";

        print STDOUT "Printing to ", $table, "\n";
    }

    # Pass everything else (DDL, comments, etc.) through to the current file.
    print DDL if fileno DDL;
}

# Flush anything still buffered at end of file.
if ( $cnt != 0 ) {
    $multi_values =~ s/,$/\;/;
    $total_row_count += $cnt;
    print DDL "INSERT INTO ", $t, " VALUES ", $multi_values, "\n";
}

print STDOUT "Table \"", $t, "\" = ", $total_row_count, "\n";

close(DDL);
close(PF);
exit;


-------- cut here ---------
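
A typical run looks something like this (the script expects single-row INSERTs, so dump with --skip-extended-insert; the file names are just examples):

$ mysqldump --skip-extended-insert world > world_dump.sql
$ perl multi_inserts.pl world_dump.sql
Printing to city.sql
Printing to country.sql
Printing to countrylanguage.sql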

Part of my motivation for doing this was to learn Perl a little better, so I'm sure this could be improved :-)

12 June 2009

Using OpenSolaris SMF for managing MySQL Cluster

One of the things I really like about Solaris (and OpenSolaris) is the Services Management Framework (SMF). And one of the things I like about my job is the opportunity to learn and work with new stuff. And, of course, the best way to learn something "non-lethal" is to try it yourself and do something with it.

So, how about using SMF for managing a MySQL Cluster setup? Something simple like two data nodes, one mgmt node, and one MySQL server node running on one OS instance? Sounds like a good exercise and, from a quick search, I couldn't find where it had been done before. Loaded up the latest release of OpenSolaris (2009.06) into a VirtualBox VM, downloaded the latest tarball of MySQL Cluster (7.0.5) for Solaris x86 32-bit, kicked off Pandora (gotta have music), and got started.

OK, OS installed and running, time to install Cluster. For simplicity, I put all MySQL related files and data in /usr/local/mysql/. This included creating the Cluster data directory at /usr/local/mysql/data_cluster. The only exceptions were the my.cnf and config.ini files, which were put in /etc/mysql/. All the files I reference are provided in the ndb_smf_files.tar.gz compressed tar file. (If you are already familiar with SMF and are just looking for the files, just grab the archive file above and you can go on your merry way.)

Rather than include the full text of the my.cnf file, I will just list the relevant changes:
[mysqld]
...
ndbcluster
ndb-connectstring = localhost
skip-innodb # not needed for my test system
...

[ndbd]
connect-string = localhost
[ndb_mgm]
connect-string = localhost
[ndb_mgmd]
config-file = /etc/mysql/config.ini


For the Cluster configuration file, I just copied the $MYSQL_HOME/support-files/ndb-config-2-node.ini file to config.ini and changed the data directory parameters:
[ndbd default]
...
DataDir= /usr/local/mysql/data_cluster
...
[ndb_mgmd default]
DataDir= /usr/local/mysql/data_cluster
...
Now, onto the SMF setup. There is a pair of files for each service we want to manage: a service configuration (manifest) file and a methods script. For example, the Cluster management service (the ndb_mgmd process) configuration is described in the mysql_ndb_mgmd.xml file, and the methods for starting and stopping the service are provided in the mysql_ndb_mgmd script. So, let's download the tar file, unpack it, and move the configuration and script files to their respective directories. (For the steps below I prefer to open a root shell instead of continually invoking sudo, but do whatever works for you.)
  1. Grab the ndb_smf_files.tar.gz and extract the files:
    root# cd /tmp
    root# wget https://dl.dropbox.com/u/1880934/ndb_smf_files.tar.gz
    root# tar xzf ndb_smf_files.tar.gz
  2. Create the /var/svc/manifest/application/database directory if it does not exist.
    root# mkdir /var/svc/manifest/application/database
  3. Copy the three service configuration files to /var/svc/manifest/application/database:
    root# cd /tmp/ndb_smf_files/svcmanifests
    root# cp mysql*.xml /var/svc/manifest/application/database
  4. A "request for action" script is similar to an "rc" script and is used for starting/stopping/restarting a service. Copy the three scripts to the /lib/svc/method directory and make sure they are executable:
    root# cp /tmp/ndb_smf_files/svcmethods/mysql* /lib/svc/method
    root# chmod +x /lib/svc/method/mysql*
  5. Now time to get the manifests into the SMF repository. For this step, we use the svccfg utility to import the XML files:
    root# cd /var/svc/manifest/application/database
    root# svccfg import mysql.xml
    root# svccfg import mysql_ndb_mgmd.xml
    root# svccfg import mysql_ndbd.xml
  6. Create the Cluster data directory.
    root# mkdir /usr/local/mysql/data_cluster
    root# chown mysql:mysql /usr/local/mysql/data_cluster
    root# chmod 750 /usr/local/mysql/data_cluster
  7. If you are not familiar with SMF, this is a good time to take a look at the xml files. A few things to note are the dependencies defined, the methods that will be invoked to start and stop the service (which shows the relationship to the files we moved to /lib/svc/method), the method_credential (which user to use to start the service), and the properties defined (which can be referenced from the methods script).
For Cluster, there is a defined order of the processes that need to be started. Since the NDB Mgmt server controls the configuration of the Cluster, it must be started before the NDBD (data node) processes. And if you are using the MySQL server to access the Cluster, the data nodes need to be up and running before starting the mysqld process(es). So, for the data nodes (mysql_ndbd.xml file), we see:
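
Roughly, the dependency element looks like this (the name and FMRI here are illustrative of the layout used in this post):

<dependency name='ndb_mgmd' grouping='require_all' restart_on='none' type='service'>
    <service_fmri value='svc:/application/database/mysql_ndb_mgmd'/>
</dependency>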

which means that before the data nodes can start, the mgmt server must be up and running. In the MySQL server service configuration file, the dependencies are the network, the file system, and the Cluster data nodes. The data-node dependency just requires that at least one of the two nodes is running:
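
Again roughly, with grouping='require_any' providing the "at least one" behavior (FMRIs illustrative):

<dependency name='mysql_ndbd' grouping='require_any' restart_on='none' type='service'>
    <service_fmri value='svc:/application/database/mysql_ndbd:node1'/>
    <service_fmri value='svc:/application/database/mysql_ndbd:node2'/>
</dependency>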

Ok, everything should be ready to go so let's start things up. Start the Cluster mgmt server and verify it is up and running:
root# svcadm enable mysql_ndb_mgmd
root# svcs -l mysql_ndb_mgmd
In the svcs listing, you should see "enabled true" and "state online". If not, run "svcs -x mysql_ndb_mgmd" and check the log listed for the reason why (logs are in the /var/svc/log directory).

If everything looks good, start the data nodes and the MySQL server:
root# svcadm enable mysql_ndbd:node1
root# svcadm enable mysql_ndbd:node2
root# svcs -l mysql_ndbd
root# svcadm enable mysql
root# svcs -l mysql
That's it! Hopefully everything worked (and if not, that's what man pages and cursing are for). To stop a service, just "disable" instead of "enable".
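
For example, tearing everything down in the reverse order of startup would be:
root# svcadm disable mysql
root# svcadm disable mysql_ndbd:node1 mysql_ndbd:node2
root# svcadm disable mysql_ndb_mgmd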

Next thing to try is dynamically adding nodes to Cluster and seeing how SMF handles services on other servers.

-- Craig