Planet MySQL - https://planet.mysql.com

  • MySQL Router HA with Keepalived
    After having explained how to achieve HA for MySQL Router for people who don't want to install MySQL Router on the application servers, and after having illustrated how to use Pacemaker, this article explains how to set up HA for MySQL Router using Keepalived. Keepalived is very popular, maybe because it's also very easy to use. We can of course use 2 or more servers. The principle is the same as in the previous articles: if the router dies, the virtual IP used by the application server(s) to connect to MySQL is moved to another machine where mysqlrouter is still running. Let's have a look at the configuration; in this case we use 2 machines, mysqlrouter1 and mysqlrouter2.

    Configuration

    Let's configure our 2 routers. The configuration file is /etc/keepalived/keepalived.conf and the VIP used to contact the router is 192.168.87.5. We also have to decide which one will act as master and which one will be the backup: mysqlrouter1 will be the master and mysqlrouter2 the backup.

    mysqlrouter1:

        global_defs {
            notification_email {
                lefred @ lefred.be
            }
            notification_email_from mycluster @ lefred.be
            smtp_server 192.168.200.1
            smtp_connect_timeout 30
            router_id LVS_DEVEL
            vrrp_skip_check_adv_addr
            vrrp_strict
            vrrp_garp_interval 0
            vrrp_gna_interval 0
        }

        vrrp_script chk_mysqlrouter {
            script "/bin/killall -0 /usr/bin/mysqlrouter"   # check the mysqlrouter process
            interval 2                                      # every 2 seconds
            weight 2                                        # add 2 points if OK
            fall 2
        }

        vrrp_instance VI_1 {
            state MASTER
            interface eth1
            virtual_router_id 51
            priority 102
            advert_int 1
            virtual_ipaddress {
                192.168.87.5
            }
            track_script {
                chk_mysqlrouter
            }
        }

    The important settings here are the state, which is set to MASTER, and the priority.

    mysqlrouter2:

    The configuration is identical except for the vrrp_instance section, where the state and the priority differ:

        vrrp_instance VI_1 {
            state BACKUP
            interface eth1
            virtual_router_id 51
            priority 101
            advert_int 1
            virtual_ipaddress {
                192.168.87.5
            }
            track_script {
                chk_mysqlrouter
            }
        }

    Now we can start keepalived and see that the VIP is enabled on the master (mysqlrouter1):

        [root@mysqlrouter1 ~]# systemctl start keepalived
        [root@mysqlrouter1 ~]# ip add sho eth1
        3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 08:00:27:ab:eb:b4 brd ff:ff:ff:ff:ff:ff
            inet 192.168.87.3/24 brd 192.168.87.255 scope global eth1
               valid_lft forever preferred_lft forever
            inet 192.168.87.5/32 scope global eth1
               valid_lft forever preferred_lft forever
            inet6 fe80::a00:27ff:feab:ebb4/64 scope link
               valid_lft forever preferred_lft forever

    We can see that 192.168.87.5 is now available on eth1.
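    To confirm that an application can reach MySQL through this VIP, here is a minimal sketch (app_user and the query are illustrative; 6446 is the router's classic read/write port, as shown by the bootstrap output in the Pacemaker article):

        # from an application server, connect to MySQL through the VIP and the
        # router's read/write port
        mysql -h 192.168.87.5 -P 6446 -u app_user -p -e "SELECT @@hostname"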
    If we stop mysqlrouter on mysqlrouter1, we will see, after 2 seconds at most, that the IP is moved to mysqlrouter2:

        [root@mysqlrouter1 ~]# systemctl stop mysqlrouter
        [root@mysqlrouter1 ~]# ip add sho eth1
        3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 08:00:27:ab:eb:b4 brd ff:ff:ff:ff:ff:ff
            inet 192.168.87.3/24 brd 192.168.87.255 scope global eth1
               valid_lft forever preferred_lft forever
            inet6 fe80::a00:27ff:feab:ebb4/64 scope link
               valid_lft forever preferred_lft forever

        [root@mysqlrouter2 ~]# ip add sho eth1
        3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 08:00:27:f9:23:f1 brd ff:ff:ff:ff:ff:ff
            inet 192.168.87.4/24 brd 192.168.87.255 scope global eth1
               valid_lft forever preferred_lft forever
            inet 192.168.87.5/32 scope global eth1
               valid_lft forever preferred_lft forever
            inet6 fe80::a00:27ff:fef9:23f1/64 scope link
               valid_lft forever preferred_lft forever

    Conclusion

    Once again, we achieved MySQL Router High Availability easily, very quickly and with standard tools included in most GNU/Linux distributions. Let me know if you are interested in other open source tools available to achieve MySQL Router HA.

  • MySQL Router HA with Pacemaker
    This article will explain how to set up HA for MySQL Router, as described in a previous article about where the router should stand. For this setup, I will use Pacemaker (part of the Red Hat High Availability Add-on and available on RHEL, CentOS, Oracle Linux, …). Of course we need a MySQL InnoDB Cluster, but we won't really use it for the HA setup of the MySQL Router.

    Installing Pacemaker

    The first step is to install pacemaker on all the machines we will use for our "MySQL Router Cluster":

        # yum install pacemaker pcs resource-agents

    Now we need to start the pcsd service and enable it at boot (on all machines):

        # systemctl start pcsd.service
        # systemctl enable pcsd.service

    It's now time to set up authentication. This operation is again executed on all machines that are part of the MySQL Router Cluster:

        # echo MyStrongPassw0rd | passwd --stdin hacluster
        Changing password for user hacluster.
        passwd: all authentication tokens updated successfully.

    And on one of the nodes, we can now execute the following command:

        [root@mysql1 ~]# pcs cluster auth mysql1 mysql2 mysql3 -u hacluster -p MyStrongPassw0rd --force
        mysql1: Authorized
        mysql2: Authorized
        mysql3: Authorized

    Note that mysql1, mysql2 and mysql3 are the 3 nodes that will be used for the MySQL Router Cluster. We will now create the cluster (the name cannot exceed 15 characters) by launching the following command on one node:

        [root@mysql1 ~]# pcs cluster setup --force --name routercluster mysql1 mysql2 mysql3

    The output of the command should look like this:

        Destroying cluster on nodes: mysql1, mysql2, mysql3...
        mysql3: Stopping Cluster (pacemaker)...
        mysql1: Stopping Cluster (pacemaker)...
        mysql2: Stopping Cluster (pacemaker)...
        mysql1: Successfully destroyed cluster
        mysql2: Successfully destroyed cluster
        mysql3: Successfully destroyed cluster

        Sending 'pacemaker_remote authkey' to 'mysql1', 'mysql2', 'mysql3'
        mysql1: successful distribution of the file 'pacemaker_remote authkey'
        mysql3: successful distribution of the file 'pacemaker_remote authkey'
        mysql2: successful distribution of the file 'pacemaker_remote authkey'
        Sending cluster config files to the nodes...
        mysql1: Succeeded
        mysql2: Succeeded
        mysql3: Succeeded

        Synchronizing pcsd certificates on nodes mysql1, mysql2, mysql3...
        mysql1: Success
        mysql2: Success
        mysql3: Success
        Restarting pcsd on the nodes in order to reload the certificates...
        mysql3: Success
        mysql1: Success
        mysql2: Success

    Starting the cluster

    We can now start the cluster (running the following command on one node):

        [root@mysql1 ~]# pcs cluster start --all
        mysql3: Starting Cluster...
        mysql1: Starting Cluster...
        mysql2: Starting Cluster...
    And we can already check its status:

        [root@mysql1 ~]# crm_mon -1
        Stack: corosync
        Current DC: mysql1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
        Last updated: Fri Jul 20 10:00:36 2018
        Last change: Fri Jul 20 10:00:08 2018 by hacluster via crmd on mysql1

        3 nodes configured
        0 resources configured

        Online: [ mysql1 mysql2 mysql3 ]

        No active resources

    Properties

    Now we can set some properties for our cluster:

        [root@mysql1 ~]# pcs property set stonith-enabled=false
        [root@mysql1 ~]# pcs property set no-quorum-policy=ignore
        [root@mysql1 ~]# pcs resource defaults migration-threshold=1

    Adding resources

    VIP

    The first resource we will use is the VIP (Virtual IP) that the application will use to connect to MySQL (through the router):

        [root@mysql1 ~]# pcs resource create Router_VIP ocf:heartbeat:IPaddr2 \
                         ip=192.168.87.5 cidr_netmask=24 op monitor interval=5s

    We can check that our resource now runs in the cluster:

        [root@mysql1 ~]# crm_mon -1
        Stack: corosync
        Current DC: mysql1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
        Last updated: Fri Jul 20 10:20:43 2018
        Last change: Fri Jul 20 10:19:44 2018 by root via cibadmin on mysql1

        3 nodes configured
        1 resource configured

        Online: [ mysql1 mysql2 mysql3 ]

        Active resources:

        Router_VIP (ocf::heartbeat:IPaddr2): Started mysql1

    This is perfect!

    MySQL Router

    Before adding MySQL Router as a resource, we need to be sure that the router is installed and bootstrapped on each node. As a reminder, this is how to proceed:

        [root@mysql1 ~]# mysqlrouter --bootstrap clusteradmin@mysql1:3306 --user=mysqlrouter --force
        Please enter MySQL password for clusteradmin:

        Bootstrapping system MySQL Router instance...
        MySQL Router has now been configured for the InnoDB cluster 'mycluster'.

        The following connection information can be used to connect to the cluster.

        Classic MySQL protocol connections to cluster 'mycluster':
        - Read/Write Connections: localhost:6446
        - Read/Only Connections: localhost:6447
        X protocol connections to cluster 'mycluster':
        - Read/Write Connections: localhost:64460
        - Read/Only Connections: localhost:64470

    Now we can add the resource that will manage MySQL Router in our pacemaker cluster. We will use systemd.
    You can verify that the systemd service for mysqlrouter is available using the following command:

        [root@mysql1 ~]# pcs resource list | grep router
        service:mysqlrouter - systemd unit file for mysqlrouter
        systemd:mysqlrouter - systemd unit file for mysqlrouter

    OK, so let's add the resource and let it run on all nodes (as a clone): a router instance is then always running on every node, so nothing needs to be started only when the running one has an issue:

        [root@mysql1 ~]# pcs resource create mysqlrouter systemd:mysqlrouter clone

    Let's verify that the resource is now added:

        [root@mysql1 ~]# crm_mon -1
        Stack: corosync
        Current DC: mysql1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
        Last updated: Fri Jul 20 12:12:39 2018
        Last change: Fri Jul 20 12:12:31 2018 by root via cibadmin on mysql1

        3 nodes configured
        4 resources configured

        Online: [ mysql1 mysql2 mysql3 ]

        Active resources:

        Router_VIP (ocf::heartbeat:IPaddr2): Started mysql1
        Clone Set: mysqlrouter-clone [mysqlrouter]
            Started: [ mysql1 mysql2 mysql3 ]

    Now we need to tell the cluster that the VIP needs to stand where a router instance also runs:

        [root@mysql1 ~]# pcs constraint colocation add Router_VIP \
                         with mysqlrouter-clone score=INFINITY

    Test

    Let's verify that the VIP is located on mysql1, as crm_mon shows:

        [root@mysql1 ~]# ip add sh eth1
        3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 08:00:27:f9:23:f1 brd ff:ff:ff:ff:ff:ff
            inet 192.168.87.3/24 brd 192.168.87.255 scope global eth1
               valid_lft forever preferred_lft forever
            inet 192.168.87.5/24 brd 192.168.87.255 scope global secondary eth1
               valid_lft forever preferred_lft forever
            inet6 fe80::a00:27ff:fef9:23f1/64 scope link
               valid_lft forever preferred_lft forever

    Perfect! Now we will kill the mysqlrouter process on mysql1 and see what the cluster does:

        [root@mysql2 ~]# crm_mon -1
        Stack: corosync
        Current DC: mysql1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
        Last updated: Fri Jul 20 12:37:29 2018
        Last change: Fri Jul 20 12:36:51 2018 by hacluster via crmd on mysql1

        3 nodes configured
        4 resources configured

        Online: [ mysql1 mysql2 mysql3 ]

        Active resources:

        Router_VIP (ocf::heartbeat:IPaddr2): Started mysql2
        Clone Set: mysqlrouter-clone [mysqlrouter]
            Started: [ mysql1 mysql2 mysql3 ]

    Excellent, now we see that the VIP is located on mysql2; let's verify:

        [root@mysql2 ~]# ip add sh eth1
        3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 08:00:27:f9:23:f1 brd ff:ff:ff:ff:ff:ff
            inet 192.168.87.3/24 brd 192.168.87.255 scope global eth1
               valid_lft forever preferred_lft forever
            inet 192.168.87.5/24 brd 192.168.87.255 scope global secondary eth1
               valid_lft forever preferred_lft forever
            inet6 fe80::a00:27ff:fef9:23f1/64 scope link
               valid_lft forever preferred_lft forever

    So now that the VIP runs on mysql2, we will shut down the machine and see what happens:

        [root@mysql3 ~]# crm_mon -1
        Stack: corosync
        Current DC: mysql1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
        Last updated: Fri Jul 20 12:38:09 2018
        Last change: Fri Jul 20 12:36:51 2018 by hacluster via crmd on mysql1

        3 nodes configured
        4 resources configured

        Online: [ mysql1 mysql3 ]
        OFFLINE: [ mysql2 ]

        Active resources:

        Router_VIP (ocf::heartbeat:IPaddr2): Starting mysql1
        Clone Set: mysqlrouter-clone [mysqlrouter]
            Started: [ mysql1 mysql3 ]

    We can see that the cluster does its job and that MySQL Router, even when moved off the application server, is no longer a single point of failure!
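    The kill itself is not shown in the transcript above; here is a minimal sketch of one way to reproduce the test, assuming the router runs under systemd and the cluster is configured as above:

        # on the node currently holding the VIP, kill the router process
        [root@mysql1 ~]# killall mysqlrouter
        # from any other node, watch pacemaker detect the failure and relocate the VIP
        [root@mysql2 ~]# crm_mon -1 | grep Router_VIP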

  • MySQL InnoDB Cluster: is the router a single point of failure?
    As you know, MySQL InnoDB Cluster is composed of 3 elements:

        - a group replication cluster of at least 3 servers
        - the MySQL Shell, used to manage the cluster
        - the MySQL Router, which sends the traffic from the application server(s) to the cluster

    When presenting the solution at conferences, one of the main questions is "Where should I put the router?", and the answer is always the same: the best place to install the router is the application server! The router is a very lightweight process that gets its configuration from the cluster's metadata and doesn't require a lot of resources or maintenance, so that is the ideal setup.

    However, for many (obscure?) reasons, sometimes people don't want to have the MySQL Router on the same server as the application server and would like to have it on its own server instance (let's hope they are not VMs on the same physical machine). Of course this setup is possible, but then, yes, in that specific case the router is a SPOF (Single Point Of Failure). This means that if the router or the server where the router is installed crashes, none of your application servers are able to contact the database anymore.

    If you really want such a setup, you will have to set up HA for the MySQL Router. Such an HA setup is very dependent on your infrastructure, as you may have to play with virtual IPs, and this is not managed the same way on Linux and Windows, for example. So if you want to install the MySQL Router somewhere other than the recommended application server, you will have to take care of the router's HA yourself. Of course there are plenty of solutions to achieve such an HA setup for the MySQL Router, and I will try to illustrate some of them on this blog. The next article on the topic will be "How to setup MySQL Router HA with Pacemaker"; don't miss it!

    Principle

    The principle is simple: you have multiple resources (the MySQL Router) running on multiple machines (at least more than one), and a VIP (virtual IP) resource that the application uses to connect to the router. If the "router resource" where the VIP resource runs crashes, the VIP is moved automatically to another server/node where a "router resource" is available. And of course, if your application can use some round-robin (most of the MySQL Connectors allow that), you can have multiple VIP resources (a sketch follows at the end of this article).

    Conclusion

    If you follow the recommendation, the MySQL Router is not a single point of failure, but if you want to have it on its own server, then you will have to set up HA for it yourself, as this is very dependent on your infrastructure.
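    A minimal sketch of what multiple VIP resources could look like, reusing the pacemaker commands from the follow-up article (the second address 192.168.87.6 and the resource name Router_VIP2 are hypothetical):

        # a second VIP, so clients can round-robin between 192.168.87.5 and 192.168.87.6
        [root@mysql1 ~]# pcs resource create Router_VIP2 ocf:heartbeat:IPaddr2 \
                         ip=192.168.87.6 cidr_netmask=24 op monitor interval=5s
        # keep it colocated with a running router instance, like the first VIP
        [root@mysql1 ~]# pcs constraint colocation add Router_VIP2 \
                         with mysqlrouter-clone score=INFINITY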

  • Hackathon process per week Sprints Idea
    I like hackathons. Hackathons provide the freedom to build outside the process, the forced speed to deliver something to demo, and the fun self-deprecation of "ooh this is really ugly/bad TODO don't do this." in the source/commit logs, which tells a great story. Also, as a great side effect, people are really interested in refactoring and fixing the code, especially if the demo went well.

    So I started thinking: what if we could take this naturally formed fun process and define a weekly sprint, with a daily standup reporting on the progress toward the product goal, using a hackathon method?

    Day 1 and 2

    "How much can you get done in two days for the demo?" This portion is no more than an hour of planning. You talk to your team and divide up tasks for the hack you want to demo in two days. For instance, Johnny says "I'll write the service", Amanda says "I'll provide the data; it will be in MySQL", and Sammy says "I'll write the front end to demo. Johnny, let's agree on what you'll send me; for now, I will simulate some faux data."

    Then each person builds their part. During the process, Johnny builds the interface, an unauthenticated HTTP GET request with a JSON response, to define what his service will return. Amanda finishes testing some queries for functionality and checks in her part: how to get the data, massage it, and which tables are what, NOT performance. Johnny sends a sample interface to Sammy so some dynamic data can be injected into the mockup when Sammy requests data. They agreed on a REST API using GET with a JSON response.

        - There are PRs when sharing the same addition to the same place; otherwise, frequent merges.
        - When fixing something that made it into master, fix forward, so check into master :P
        - Each check-in can be filled with a series of TODO, FIXME or "TODO don't do this" statements for speed, until that's no longer needed because you have a refined process.

    Demo

    What does each individual developer get? Each developer produced something quick to verify the viability of the idea, plus a vested interest in fixing the hacks, beautifying the code, reusing reusable parts, etc.

    What does the team get? The team feels that it got something out pretty quick; the team has some talking points about what to fix next and which systems the team envisions could possibly be used in other parts of the code. Finally, there is the chance to learn something new in the knowledge transfer, or the ability to fix an approach before going too far down the rabbit hole.

    Day 3

    The next day is for mapping out what each developer wants to refactor, has to change and gets to delete, with knowledge transfer of the good, bad, and embarrassing things and an idea of the direction each person took. It is fun. This is looking over the queries to make sure they make sense:

        - Are the correct indexes there?
        - Are we really answering the correct questions efficiently, and if not, how can we?
        - What hacks do we need to undo to provide what we delivered?
        - How do I test this thing? I need to make sure, before I refactor, that I have reproducible tests.

    Day 4

    Document, test, refactor; agree more as a team and focus on a code structure that enables adding the next round of features, while setting standards for the direction going forward, or revisiting them if need be.

    Day 5

    Do more of the same, or get a beer with the team.

    This process makes me feel that I am building something fast. The reason for the speed is to validate the idea or approach. Time is built into the process for testing, refactoring and documenting.
    The refactoring takes into account how to add new things faster: 50% building, 50% testing, documenting, refactoring and making things better, producing a 4-day work week with daily standups.

    What about a really big project and delivering constantly?

        - Whiteboard what is needed to deliver: what the product is, what it solves, what the features are.
        - Answer what alpha is.
        - Answer what beta is.
        - Divide and conquer the vision for each "hackathon period".
        - Adjust the projection of delivery based on the previous hackathon's progress.
        - Keep working, and revisit each hackathon period to verify the correct thing is built correctly.
        - Profit from a fun, fast-paced delivery of code that treats features and the delivery of great code as equals, validated by the team as equal partners.

  • De-Normalization of Your Database with JSON
    One of the humbling things about working at Oracle with the various MySQL personnel is that you are often blown away by something one of them says or does, and that on a regular basis. In this case it is Dr. Charles Bell, who gave a great series of presentations last June at the Southeast Linuxfest. In particular, he presented in a fully formed state some ideas that had been rattling around in my skull (but nowhere near as coherent) on how to take advantage of the MySQL JSON data type. Below are his points from his slide deck. I was reviewing my notes from his presentation when I realized that this information really needs to be more widely disseminated. And I would like your feedback on these ideas.

    1. We can use a JSON field to eliminate one of the issues of traditional database solutions: many-to-many joins.

        - This allows more freedom to store unstructured data (data with pieces missing).
        - You still use SQL to work with the data via a database connector, but the JSON documents in the table can be manipulated directly in code.
        - Joins can be expensive. Reducing how many places you need to join data can help speed up your queries.
        - Removing joins may result in some level of denormalization but can result in fast access to the data.

    2. Plan for mutability.

        - Schemaless designs are focused on mutability. Build your applications with the ability to modify the document as needed (and within reason).

    3. Remove many-to-many relationships.

        - Use embedded arrays and lists to store relationships among documents. This can be as simple as embedding the data in the document or embedding an array of document ids in the document.
        - In the first case the data is available as soon as you can read the document; in the second it only takes one additional step to retrieve the data.
        - In cases of seldom-read (used) relationships, having the data linked with an array of ids can be more efficient (less data to read on the first pass). A sketch of this idea follows below.

    This presentation and others from the Southeast Linuxfest will be available online, and I will be sure to post about that when it happens. And a big thank you to Dr. Chuck for these ideas.
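    A minimal sketch of point 3 using the MySQL JSON type, run through the mysql client (the book/author schema, the test database and the ids are hypothetical, not from Dr. Bell's slides; requires MySQL 5.7 or later):

        # an embedded JSON array of author ids replaces a book/author join table
        mysql test -e "CREATE TABLE book (id INT PRIMARY KEY, title VARCHAR(100), author_ids JSON)"
        mysql test -e "INSERT INTO book VALUES (1, 'MySQL Document Store', '[17, 42]')"
        # find books by author 17 without going through a join table
        mysql test -e "SELECT id, title FROM book WHERE JSON_CONTAINS(author_ids, '17')"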
