MySQL 8.0 InnoDB Cluster and Persistent Configurations


There are lots of new features in the recently released MySQL 8.0, including our Document Store capability.  There are a few core capabilities related to InnoDB Cluster in 8.0 that I’d like to share here; some will arrive in coming blogs.  Primarily, I’ll point out some nice details of InnoDB Cluster 8.0 deployments, plus show how the ability to SET PERSIST global configurations lets us see the beneficial impact of the new replication default configurations in 8.0.

I use a set of scripts to build and rebuild my clustered test setups.  I can fully automate a Group Replication clustered setup, but that requires me to explicitly implement all the configurations and to know with precision that I’m not missing anything fundamental in the setup.

With InnoDB Cluster setups, the design intent is an “easy & reliable” step-through process…and with 8.0, it really is easy and reliable.  Let’s look.

Building a base instance

To get the new instances prepared, on my lab system, I do the following things:

(1) Initial Security Setups

Since this is a lab environment, the initial step is to set SELinux to permissive mode.

In a real setup, you would need to add port access for the MySQL Group Communication System to your SELinux policy and to your firewall.  Windows may have similar configuration requirements.

  • InnoDB Cluster uses 33061 as its default GCS port number
    • ** GCS is the distributed communication layer of the system
  • You can choose any port that works best for your environment; you just need to configure it up front.  Depending on the customizations, you may need to build a Group Replication core setup first, then adopt that GR setup as an InnoDB Cluster by using MySQL Shell.
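On a RHEL-family lab box, the port openings described above might look like the following.  This is a sketch assuming firewalld and the default port numbers; adjust both to your environment:

```shell
# Allow the default MySQL client port and the GCS port through firewalld
sudo firewall-cmd --permanent --add-port=3306/tcp
sudo firewall-cmd --permanent --add-port=33061/tcp
sudo firewall-cmd --reload

# Tell SELinux that mysqld is allowed to use the GCS port
sudo semanage port -a -t mysqld_port_t -p tcp 33061
```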

(2) Installing the Binaries

Install the MySQL 8.0 binaries via your package manager, or by other means as suited to your operating system.  Take note that the RPM package named “backup” is in fact the MySQL Enterprise Backup utility, which is provided/bundled as part of the Enterprise Edition Server packages.  MySQL Router and MySQL Shell are still separate packages.
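For a community-edition install on a Yum-based system, the step might look like this (a sketch; it assumes the MySQL Yum repository is already configured, and the package names shown are the community ones):

```shell
# Server, Shell, and Router are separate packages
sudo yum install -y mysql-community-server mysql-shell mysql-router-community
sudo systemctl enable --now mysqld
```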

(3) Handling the initial reset of the DB Root user Account

Reset the root account’s password as a scripted process.  In my case it’s for ease of building test systems, but in the real world the script could capture a strong, randomized password and register it in a vault of some form.
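A minimal sketch of that scripted reset, assuming an RPM-based install where mysqld writes a temporary root password to /var/log/mysqld.log (the new password shown is a placeholder):

```shell
# Grab the generated temporary root password from the server log
TMPPASS=$(sudo grep 'temporary password' /var/log/mysqld.log | awk '{print $NF}')

# --connect-expired-password lets us run ALTER USER even though
# the temporary password is flagged as expired
mysql -uroot -p"$TMPPASS" --connect-expired-password \
      -e "ALTER USER 'root'@'localhost' IDENTIFIED BY 'StrongRandomPassHere';"
```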

(4) Creating DB Admin Users Accounts

Since I would rather not modify the root@localhost account to give it remote access to the other MySQL instances (and vice versa), I create admin@localhost and admin@'%' accounts on all members to support remotely configuring the members into the clustered setup.
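The account-creation statements would look roughly like this (a sketch — the broad GRANT scope and the password are assumptions; note the RESET MASTER at the end, discussed next):

```sql
-- Hypothetical admin accounts; substitute your own strong password
CREATE USER 'admin'@'localhost' IDENTIFIED BY 'StrongPassHere';
CREATE USER 'admin'@'%'         IDENTIFIED BY 'StrongPassHere';
GRANT ALL PRIVILEGES ON *.* TO 'admin'@'localhost' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%'         WITH GRANT OPTION;

-- Discard the GTIDs generated by the statements above so they
-- are not replicated once the group is formed
RESET MASTER;
```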

Take note of the last command, “ RESET MASTER; ”.  This is done to prevent GTIDs from being created that belong to the local server instance we just installed.  We don’t want these replicated.  Why?

  • These activities generate GTIDs owned by the local server, which would then get replicated to the other members and could appear to be something problematic.
  • As a reminder, once the Group Replication core is enabled, all GTIDs belong to the “Group”, or equally the “InnoDB Cluster”.

What is an alternate method that does NOT use “ RESET MASTER; “?

Instead of finishing with “ RESET MASTER; ”, you can precede the prior statements with “ SET SQL_LOG_BIN=0; ” so that GTIDs and binary log entries aren’t created in the first place.
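Wrapped that way, the same account setup becomes (same hypothetical account and password as above):

```sql
-- Disable binary logging for this session only
SET SQL_LOG_BIN=0;
CREATE USER 'admin'@'%' IDENTIFIED BY 'StrongPassHere';
GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%' WITH GRANT OPTION;
-- Re-enable it; no GTIDs were generated for the statements above
SET SQL_LOG_BIN=1;
```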

Confirming that no GTIDs are set

There are two commands that I will direct you to here; both confirm the status of GTIDs.
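The checks are presumably along these lines (my reconstruction — the exact queries shown in the original are not preserved here):

```sql
-- Executed_Gtid_Set should be empty on a freshly reset instance
SHOW MASTER STATUS;

-- Likewise, the executed GTID set should be an empty string
SELECT @@GLOBAL.GTID_EXECUTED;
```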

Yes…the performance_schema replication views show us lots of information about the status of any given member.  This is NEW in MySQL 8.0.

Monitoring with MySQL Enterprise Monitor

Confirm the individual member status of these instances in the monitoring system.  We’ll use this for comparison when we are done.

So, now down to business. Building the cluster with the shell.

In brief, here are the commands to build the first instance and start “the cluster”
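A sketch of those MySQL Shell (AdminAPI) steps — the hostname, port, and cluster name below are placeholders:

```js
// Inside mysqlsh, in JavaScript mode, using the admin account created earlier
dba.checkInstanceConfiguration('admin@host1:3306')
dba.configureInstance('admin@host1:3306')   // applies/persists required settings

// Connect to the prepared instance and seed the cluster from it
shell.connect('admin@host1:3306')
var cluster = dba.createCluster('myCluster')
cluster.status()
```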

Here is the long version:

Reviewing the Configuration Changes

So that’s great!!  The first member is up and running.

So it made these configuration changes.  How do we confirm them?  What value did it set server_id to?

Below we see the PERSIST keyword in action.  The Shell uses this facility to push changes into a configuration file (mysqld-auto.cnf in the data directory), and the list of those configurations is queryable, as seen below.
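Confirming the persisted settings can be done like this (a sketch; the rows returned will depend on what the Shell configured on your instance):

```sql
-- Variables written by SET PERSIST are queryable here
SELECT * FROM performance_schema.persisted_variables;

-- Including the server_id value the Shell generated
SELECT @@GLOBAL.server_id;
```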

Now that the first member is up, we more or less repeat those steps.

The usage of MySQL Shell is as follows:
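For the additional members, the AdminAPI steps would look roughly like this (hostnames are placeholders; each instance is assumed to have been prepared with dba.configureInstance() as before):

```js
// Still connected to the seed member in mysqlsh
cluster.addInstance('admin@host2:3306')
cluster.addInstance('admin@host3:3306')

// All members should now report as ONLINE
cluster.status()
```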

What does our status look like in the MySQL Enterprise Monitor?

In the screenshot below, the members are now “auto-grouped”: the “Replication Topology” view shows they’re all linked.

In line with the replication grouping noted above, the image below shows how each member has:

  • its own line entry, which gives quick, high-level details
  • a line entry that can be expanded, so we can easily drill into any member that needs extra attention

What do our PERSISTed Configurations look like now?

Many more configurations have been added, just from adding the additional members.  The ability to PERSIST configurations in this way is a real benefit for both dynamically applied and restart-required settings, truly adding to the ease of managing our MySQL instances.

Hope you enjoy and always happy for feedback.
