Planet MySQL - https://planet.mysql.com

  • Design A Highly Available MySQL Clusters With Orchestrator And ProxySQL In GCP — Part 2
Design A Highly Available MySQL Clusters With Orchestrator And ProxySQL In GCP — Part 2

In Part 1 we explained how we are going to approach the HA setup. Here we will install and configure Orchestrator and ProxySQL, and then do the failover testing.

Install and configure MySQL replication: We need a MySQL master with 4 read replicas, and the 4th replica will have a replica of its own. We must use GTID replication, because once the master failover is done the remaining replicas have to start replicating from the new master. Without GTID this is not possible, although as an alternative Orchestrator provides Pseudo-GTID.

VM details:
Subnet: 10.142.0.0/24
OS: Ubuntu 18.04 LTS

Installing MySQL on all servers:

wget https://dev.mysql.com/get/mysql-apt-config_0.8.10-1_all.deb
dpkg -i mysql-apt-config_0.8.10-1_all.deb
apt-get update
apt-get install -y mysql-server

Enable GTID and other settings: Make the changes below in the my.cnf file on all the servers and restart the MySQL service. Note: server-id must be unique, so use a different id on each server.

vi /etc/mysql/mysql.conf.d/mysqld.cnf

server-id = 101
gtid_mode = ON
enforce_gtid_consistency = ON
log_slave_updates = ON
binlog-format = ROW
log_bin = /var/log/mysql/mysql-bin.log
master_info_repository = TABLE

service mysql restart

Create a database with sample data: Run the queries below on mysql-01.

mysql> create database sqladmin;
mysql> use sqladmin
mysql> create table test (id int);
mysql> insert into test values (1);

Backup the database: Run this command on mysql-01.

mysqldump -u root -p --databases sqladmin --routines --events --triggers > sqladmin.sql

Create the user for replication: Run this query on mysql-01.

create user 'rep_user'@'10.142.0.%' identified by 'rep_password';
GRANT REPLICATION SLAVE ON *.* TO 'rep_user'@'10.142.0.%';
flush privileges;

Establish the replication: Restore the database on the servers below and run the following statements.

mysql-ha
replica-01
replica-02
report-01

-- Restore the database
mysql -u root -p < sqladmin.sql

-- Start replication
CHANGE MASTER TO MASTER_HOST='10.142.0.13', MASTER_USER='rep_user', MASTER_PASSWORD='rep_password', MASTER_AUTO_POSITION = 1;
start slave;

-- Check the replication status
show slave status\G
          Master_Host: 10.142.0.13
          Master_User: rep_user
          Master_Port: 3306
        Connect_Retry: 60
      Master_Log_File: mysql-bin.000001
  Read_Master_Log_Pos: 1381
       Relay_Log_File: replica-03-relay-bin.000002
        Relay_Log_Pos: 1033
Relay_Master_Log_File: mysql-bin.000001
     Slave_IO_Running: Yes
    Slave_SQL_Running: Yes

Setup replication for the report server: Take a dump of the report-01 server and restore it on report-ha.

mysqldump -u root -p --databases sqladmin --routines --events --triggers > sqladmin.sql
mysql -u root -p < sqladmin.sql

-- from the mysql shell
CHANGE MASTER TO MASTER_HOST='10.142.0.21', MASTER_USER='rep_user', MASTER_PASSWORD='rep_password', MASTER_AUTO_POSITION = 1;
start slave;

Enable semi-sync replication: To prevent data loss and make sure pending binlogs reach the failover instance during the failover process, we need to enable semi-synchronous replication between mysql-01 and mysql-ha. Install the plugin on both servers.

INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
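If you want to confirm that the plugins actually loaded before turning semi-sync on (a small sanity check that is not part of the original post), the standard INFORMATION_SCHEMA view shows their status:

-- Run on mysql-01 and mysql-ha after the INSTALL PLUGIN statements.
-- PLUGIN_STATUS should report ACTIVE for both plugins.
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM INFORMATION_SCHEMA.PLUGINS
WHERE PLUGIN_NAME LIKE 'rpl_semi_sync%';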
Enable semi-sync on the master. rpl_semi_sync_master_timeout is how long the master waits for a replica's acknowledgment before falling back to asynchronous replication; give this value in milliseconds.

SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = 5000;

Enable semi-sync on the slave (replica-03):

SET GLOBAL rpl_semi_sync_slave_enabled = 1;
STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD;

We need to add these parameters in my.cnf as well.

On the master:
[mysqld]
rpl_semi_sync_master_enabled = 1
rpl_semi_sync_master_timeout = 5000

On each slave:
[mysqld]
rpl_semi_sync_slave_enabled = 1

If MySQL complains about an invalid parameter, use the lines below instead.

loose-rpl_semi_sync_master_enabled = 1
loose-rpl_semi_sync_slave_enabled = 1

The replication part is done. Now it's time to play with Orchestrator.

Install Orchestrator: The Orchestrator VM's IP address is 10.142.0.4.

wget https://github.com/github/orchestrator/releases/download/v3.0.13/orchestrator_3.0.13_amd64.deb
dpkg -i orchestrator_3.0.13_amd64.deb

It is installed under /usr/local/orchestrator.

Configuring Orchestrator: There is a sample conf file in Orchestrator's home location. We need to copy that file as the main config file.

cd /usr/local/orchestrator
cp orchestrator-sample.conf.json orchestrator.conf.json

MySQL backend: Orchestrator needs a backend database, either SQLite or MySQL. I prefer MySQL, and to keep it highly available we are going to use CloudSQL with a failover replica. For this PoC, though, I have installed MySQL on the same server as Orchestrator. So install MySQL and create a database and user for Orchestrator.

apt-get install -y mysql-server
mysql -u root -p

CREATE DATABASE IF NOT EXISTS orchestrator;
CREATE USER 'orchestrator'@'127.0.0.1' IDENTIFIED BY '0rcP@sss';
GRANT ALL PRIVILEGES ON `orchestrator`.* TO 'orchestrator'@'127.0.0.1';

Orchestrator needs to log in to all of your nodes to detect the topology and perform seamless failovers. So we need to create a user for Orchestrator on all the servers. Run the query below on mysql-01 and it will replicate to all the other slaves. We are running Orchestrator in an autoscaling group, so use the subnet range for the host while creating the user.

CREATE USER 'orchestrator'@'10.142.0.%' IDENTIFIED BY '0rcTopology';
GRANT SUPER, PROCESS, REPLICATION SLAVE, RELOAD ON *.* TO 'orchestrator'@'10.142.0.%';
GRANT SELECT ON mysql.slave_master_info TO 'orchestrator'@'10.142.0.%';

Edit the conf file: Now we need to make the changes below in /usr/local/orchestrator/orchestrator.conf.json.

Orchestrator backend details:
...
"MySQLOrchestratorHost": "127.0.0.1",
"MySQLOrchestratorPort": 3306,
"MySQLOrchestratorDatabase": "orchestrator",
"MySQLOrchestratorUser": "orchestrator",
"MySQLOrchestratorPassword": "0rcP@sss",
...

MySQL topology user:
"MySQLTopologyUser": "orchestrator",
"MySQLTopologyPassword": "0rcTopology",

Promotion node filters: We want to promote replica-03 when mysql-01 goes down; Orchestrator should not promote any other replica. So we tell it not to promote these nodes.

"PromotionIgnoreHostnameFilters": ["replica-01","replica-02","report-01","report-ha"],

Other parameters for failover:
"DetachLostSlavesAfterMasterFailover": true,
"ApplyMySQLPromotionAfterMasterFailover": true,
"MasterFailoverDetachSlaveMasterHost": false,
"MasterFailoverLostInstancesDowntimeMinutes": 0,

Then start Orchestrator.

service orchestrator start

The web UI runs on port 3000. To read about the exact meaning of all the parameters, see the github/orchestrator documentation.

Add the topology to Orchestrator: Open the web UI. Under Clusters, select Discover, provide the mysql-01 IP address and click Submit. You'll get a notification that the cluster has been detected.
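Before relying on the discovered topology, it can be useful to cross-check from the master itself that every replica is attached and that semi-sync is actually in effect; for example (a quick check, not part of the original walkthrough):

-- On mysql-01: list the replicas currently connected to this master.
SHOW SLAVE HOSTS;

-- On mysql-01: Rpl_semi_sync_master_status should be ON and
-- Rpl_semi_sync_master_clients should be at least 1.
SHOW STATUS LIKE 'Rpl_semi_sync_master%';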
To view your topology, click Clusters -> mysql-01:3306.

ProxySQL setup: Now we can move on to the ProxySQL setup. Let's install and configure it.

Install:

wget https://github.com/sysown/proxysql/releases/download/v1.4.12/proxysql_1.4.12-ubuntu16_amd64.deb
dpkg -i proxysql_1.4.12-ubuntu16_amd64.deb
service proxysql start

Connect to ProxySQL:

mysql -h 127.0.0.1 -uadmin -p -P6032 --prompt='ProxySQL> '
Enter Password: admin -- This is the default password; you can change it.

Add the MySQL servers to ProxySQL:

INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (10, '10.142.0.13', 3306);
INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (20, '10.142.0.16', 3306);
INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (20, '10.142.0.17', 3306);
INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (20, '10.142.0.20', 3306);

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

Create the ProxySQL monitor user: ProxySQL needs a user to check the read_only flag on all the MySQL servers. Create this user on mysql-01 and it will replicate to all the servers.

Create user 'monitor'@'10.142.0.%' identified by 'moniP@ss';
Grant REPLICATION CLIENT on *.* to 'monitor'@'10.142.0.%';
Flush privileges;

Update ProxySQL with the monitor user's credentials.

UPDATE global_variables SET variable_value='monitor' WHERE variable_name='mysql-monitor_username';
UPDATE global_variables SET variable_value='moniP@ss' WHERE variable_name='mysql-monitor_password';

LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;

Add read/write host groups: For us, hostgroup ID 10 is the writer and 20 is the reader.

INSERT INTO mysql_replication_hostgroups VALUES (10,20,'mysql-01');
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

Once this is added, ProxySQL will continually check the read_only flag on all the servers, and those checks are logged into monitor.mysql_server_read_only_log.

mysql> select hostname, success_time_us, read_only from monitor.mysql_server_read_only_log ORDER BY time_start_us DESC limit 10;
+-------------+-----------------+-----------+
| hostname    | success_time_us | read_only |
+-------------+-----------------+-----------+
| 10.142.0.20 | 644             | 1         |
| 10.142.0.17 | 596             | 1         |
| 10.142.0.13 | 468             | 0         |
| 10.142.0.16 | 470             | 1         |
| 10.142.0.20 | 474             | 1         |
| 10.142.0.17 | 486             | 1         |
| 10.142.0.16 | 569             | 1         |
| 10.142.0.13 | 676             | 0         |
| 10.142.0.17 | 463             | 1         |
| 10.142.0.13 | 473             | 0         |
+-------------+-----------------+-----------+
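Note that mysql_replication_hostgroups only tells ProxySQL which hostgroup is the writer and which is the reader; it does not by itself route SELECTs to hostgroup 20, and the benchmark user created below defaults to the writer hostgroup. One common way to split reads (an illustrative sketch, not something shown in the original post) is with query rules on the ProxySQL admin interface:

-- On the ProxySQL admin interface (port 6032):
-- keep SELECT ... FOR UPDATE on the writer, send all other SELECTs to the readers.
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT .* FOR UPDATE$', 10, 1);
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (2, 1, '^SELECT', 20, 1);

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;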
MySQL user for the benchmark test: Create a user for the sysbench test on mysql-01; it will replicate to all the nodes.

Create user 'sysdba'@'10.142.0.%' identified by 'DB@dmin';
Grant all privileges on *.* to 'sysdba'@'10.142.0.%';
Flush privileges;

Since we connect to the database via ProxySQL, we need to add this user to ProxySQL as well, and it should connect to the writer hostgroup (ID 10).

INSERT INTO mysql_users(username, password, default_hostgroup) VALUES ('sysdba', 'DB@dmin', 10);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;

Sysbench for the HA test: We are going to use sysbench. To install it, run the command below.

sudo apt-get install sysbench

Create a database for the benchmark on mysql-01.

create database sbtest;

Prepare the sysbench database: Before running sysbench we need to create its tables; sysbench will do that for us.

sysbench --test=/usr/share/sysbench/oltp_read_write.lua \
--mysql-host=127.0.0.1 \
--mysql-port=6033 \
--mysql-user=sysdba \
--mysql-password=DB@dmin \
prepare

WARNING: the --test option is deprecated. You can pass a script name or path on the command line without any options.
sysbench 1.0.15 (using bundled LuaJIT 2.1.0-beta2)
Creating table 'sbtest1'...
Inserting 10000 records into 'sbtest1'
Creating a secondary index on 'sbtest1'...

Test HA for mysql-01: Now start inserting data for 1 minute, and after about 10 seconds stop the mysql service on mysql-01.

sysbench --test=/usr/share/sysbench/oltp_read_write.lua \
--time=60 \
--mysql-host=127.0.0.1 \
--mysql-port=6033 \
--mysql-user=sysdba \
--mysql-password=DB@dmin \
--report-interval=1 \
run

[ 9s ] thds: 1 tps: 89.01 qps: 1780.18 (r/w/o: 1246.13/356.04/178.02) lat (ms,95%): 12.98 err/s: 0.00 reconn/s: 0.00
[ 10s ] thds: 1 tps: 17.00 qps: 320.96 (r/w/o: 223.98/63.99/33.00) lat (ms,95%): 13.70 err/s: 0.00 reconn/s: 0.00
[ 11s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 12s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 13s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 14s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 15s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 1.00
[ 16s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 17s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 18s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 19s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 1 tps: 52.00 qps: 1058.96 (r/w/o: 741.97/211.99/105.00) lat (ms,95%): 11.45 err/s: 0.00 reconn/s: 0.00
[ 21s ] thds: 1 tps: 95.00 qps: 1888.91 (r/w/o: 1322.94/375.98/189.99) lat (ms,95%): 11.45 err/s: 0.00 reconn/s: 0.00
[ 22s ] thds: 1 tps: 95.01 qps: 1900.11 (r/w/o: 1330.08/380.02/190.01) lat (ms,95%): 11.87 err/s: 0.00 reconn/s: 0.00

Writes resumed within about 9 seconds: ProxySQL detected that the read_only flag had changed on mysql-ha and immediately moved it to hostgroup ID 10.

mysql> select hostgroup,srv_host,status from stats_mysql_connection_pool;
+-----------+-------------+--------+
| hostgroup | srv_host    | status |
+-----------+-------------+--------+
| 10        | 10.142.0.20 | ONLINE |
| 20        | 10.142.0.16 | ONLINE |
| 20        | 10.142.0.17 | ONLINE |
| 20        | 10.142.0.20 | ONLINE |
| 20        | 10.142.0.13 | ONLINE |
+-----------+-------------+--------+

HA for report-01: The main master's HA is done, so now we can work on HA for report-01. For this we need to use a VIP (Alias IP); your application will talk to that VIP. I'm going to add the IP 10.142.0.142 to the report-01 node.

(Image: VIP for report-01)

During report-01 downtime, Orchestrator will trigger a hook that removes the Alias IP from the failed node and attaches it to the failover node. In our case, the VIP will switch from report-01 to report-ha.

Create the hook in /opt/report-hook.sh:

echo "Removing VIP"
gcloud compute instances network-interfaces update report-01 \
--zone us-east1-b \
--aliases ""
echo "Done"

echo "attaching IP"
gcloud compute instances network-interfaces update report-ha \
--zone us-east1-b \
--aliases "10.142.0.142/32"
echo "Done"

Add this hook to Orchestrator's conf file under PostIntermediateMasterFailoverProcesses:

"PostIntermediateMasterFailoverProcesses": [
  "/opt/report-hook.sh >> /tmp/recovery.log",
  "echo 'Recovered from {failureType} on {failureCluster}. Failed: {failedHost}:{failedPort}; Successor: {successorHost}:{successorPort}' >> /tmp/recovery.log"
],
Now test the failover.

sysbench \
--test=/usr/share/sysbench/oltp_read_write.lua \
--time=60 \
--mysql-host=10.142.0.142 \
--mysql-user=admin \
--mysql-password=admin \
--report-interval=1 \
--mysql-ignore-errors=all \
run

[ 3s ] thds: 1 tps: 71.00 qps: 1439.08 (r/w/o: 1008.05/288.02/143.01) lat (ms,95%): 15.27 err/s: 0.00 reconn/s: 0.00
[ 4s ] thds: 1 tps: 79.00 qps: 1580.00 (r/w/o: 1106.00/316.00/158.00) lat (ms,95%): 14.73 err/s: 0.00 reconn/s: 0.00
[ 5s ] thds: 1 tps: 16.00 qps: 309.00 (r/w/o: 217.00/60.00/32.00) lat (ms,95%): 15.00 err/s: 0.00 reconn/s: 0.00
[ 6s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 7s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
................
[ 30s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 31s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 32s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 33s ] thds: 1 tps: 70.00 qps: 1406.93 (r/w/o: 985.95/279.99/140.99) lat (ms,95%): 12.98 err/s: 0.00 reconn/s: 1.00
[ 34s ] thds: 1 tps: 84.00 qps: 1685.98 (r/w/o: 1181.99/336.00/168.00) lat (ms,95%): 13.46 err/s: 0.00 reconn/s: 0.00
[ 35s ] thds: 1 tps: 82.01 qps: 1646.19 (r/w/o: 1150.13/332.04/164.02) lat (ms,95%): 13.22 err/s: 0.00 reconn/s: 0.00

This time it took about 25 seconds. I have reproduced this many times and got a different value each time, but the average is around 40 seconds.

(Image: report-ha got the VIP)

Conclusion: Finally, we achieved what we were aiming for. This solution still has a few bugs, but it is a production-ready HA setup. In the next part I'll explain the bugs in this solution and possible workarounds for them.

Design A Highly Available MySQL Clusters With Orchestrator And ProxySQL In GCP — Part 2 was originally published in Searce Engineering on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Design A Highly Available MySQL Clusters With Orchestrator And ProxySQL In GCP — Part 1
Design A Highly Available MySQL Clusters With Orchestrator And ProxySQL In GCP — Part 1

Recently we migrated one of our customer's infrastructure to GCP, and after the migration we published some adventures on the ProxySQL setup we implemented for them:

Reduce MySQL Memory Utilization With ProxySQL Multiplexing
How max_prepared_stmt_count can bring down production

Now we are going to implement an HA solution with custom filters for failover. We have done a PoC, and this blog is about the PoC configuration. Again, the whole setup was implemented in GCP, but you can follow the same steps on AWS and Azure.

Credits: A big thanks to Shlomi (developer of Orchestrator), who helped a lot with the Orchestrator setup, and to René Cannaò (author of ProxySQL), who cleared our doubts while merging the Orchestrator setup with ProxySQL.

Background of the DB setup:
- We are using MySQL 5.6 with GTID replication enabled.
- ProxySQL is configured on top of the MySQL layer and is used to split read/write workloads.
- One master and 4 slaves (plus one more slave of an existing slave).
- Slaves 1 and 2 handle all the read workloads and the data science/analytics queries.
- The 4th replica has a separate replica of its own. That server is used for internal applications and other services, but it needs 90% of the production tables with real-time data. Writes are enabled on this replica, but the writable tables do not exist on production, so replication is never affected by these writes.

HA requirements:
- Slaves 1 and 2 split the read load, so we still have availability even if one node goes down. For now we didn't set up any additional HA for the read group servers.
- Slave 4 has additional writable tables and internal applications use it, so HA is mandatory for it.
- Finally, the master node. It should of course be highly available; in other words, auto failover must be in place. During a master failover there should not be any data loss between the failed node and the new master.

Our approach for HA:
- For the read group we already have 2 instances, so it is not a big problem for now.
- Slave 4 already has a replica, so we will fail over automatically to that replica, using a virtual IP that is swapped to the new master during the failover.
- That still leaves one more replica (Replica 3), which we can use as the failover target for the master.

(Image: Overall Replication Topology)

The risks/restrictions in failover: We could achieve the above with various HA tools for MySQL, but we have some restrictions.
- If the master goes down, the HA tool must fail over to Replica 3. It shouldn't promote any other node.
- Once the master failover has been performed, all other slaves must start replicating from the newly promoted master.
- The report server is an intermediate master. If it goes down, its responsibilities move to its own replica: the replica of the report server continues replicating from some node (the master or any slave) and the report application talks to that replica. In simple words, the replica is promoted like a master.

I have simulated various failover scenarios in the image below.

(Image: Top Layer - Reader (Blue); Middle Layer - Writer (Yellow); Low Layer - Report (Pink))

Problems with other HA tools:
- They will promote any replica as a master. In our case the finance DB is also a replica of the production DB, so there is a real possibility that the finance DB becomes the master for production.
- If there is a network outage between the HA tool and the master node while connectivity between the app servers and the slave nodes is fine, they may decide the master is down and start promoting one of the slaves.
- There is no guarantee that the HA tool itself is not a single point of failure.
- During planned maintenance, a graceful (manual) master failover is not possible.

The Orchestrator: To achieve auto failover with our filters (don't promote the report server, prefer Replica 3 as the new master) we decided to use Orchestrator. A couple of impressive features of Orchestrator made us try it:
- If the master is down, Orchestrator asks all the slave nodes whether the master is really down. Only if the slaves also report it as down does the failover happen.
- We can define which node can become the new master.
- Graceful failover helps during maintenance windows, so we can perform maintenance on all the nodes without breaking replication or taking major downtime.
- We can set up HA for Orchestrator itself, so the Orchestrator service is not a single point of failure.
- Once the failover is done, Orchestrator helps the slaves start replicating from the new master.
- A web UI is also available.
- Hooks are available to run/automate scripts on failure detection, pre-failover and post-failover.

You can read more about Orchestrator here.

ProxySQL — an alternative to a virtual IP: After the promotion is done, Orchestrator sets READ_ONLY=OFF on the promoted replica. Traditionally a VIP is used here: once the failover has been performed, a hook or script switches the VIP from the dead node to the new master. But in many cases that is not a quick operation. This is where ProxySQL helps us and acts as an alternative to the VIP.

How does this magic happen? ProxySQL has a table called mysql_replication_hostgroups, where we initially record which hostgroup ID is the current reader and which is the writer. Then every N seconds (I guess it's 3) ProxySQL checks the read_only flag on each server. Whenever it sees the flag change while the writer node is down, it understands that a failover has been performed and that a reader has become the master, so it swaps the reader and writer hostgroup IDs. All write connections therefore keep going to the same writer hostgroup ID.

Semi-sync replication: This is a great feature to prevent data loss during the failover. With semi-synchronous replication the master waits for an acknowledgement from at least one slave before completing the commit. So we enabled semi-sync between the master and Replica 3.

Alias IP in GCP: This is the point where we got stuck for some time. For the report server we are not using ProxySQL, so once the failover is done there is no way to announce that the replica of the report server is the new master. The report server and its replica are in the same subnet but in different zones, so we decided to use a VIP for this. In GCP we can use an Alias IP address and swap it to any node that is available in the same subnet, so we don't need to worry about the zone. (AWS and Azure don't provide a subnet that spans multiple zones, so there your report server and its replica must be in the same subnet.)

Final solution: We run the ProxySQL servers in a managed instance group, with at least 2 instances running at any point in time via autoscaling. We have deployed Orchestrator in a managed instance group as well and keep 3 nodes at any time. Orchestrator needs a MySQL backend to work, but it doesn't care much about this database.
So we used a tiny CloudSQL instance with failover.

(Image: The MySQL HA Solution with Orchestrator and ProxySQL)

In the next part we explain how to configure this setup in GCP, so stay tuned and hit the link below.

Design A Highly Available MySQL Clusters With Orchestrator And ProxySQL In GCP - Part 2

Design A Highly Available MySQL Clusters With Orchestrator And ProxySQL In GCP — Part 1 was originally published in Searce Engineering on Medium, where people are continuing the conversation by highlighting and responding to this story.

  • Fun with Bugs #72 - On MySQL Bug Reports I am Subscribed to, Part IX
I've subscribed to more than 60 new bug reports since my previous post in this series. That means I'd need 4-5 posts to cover all the new subscriptions and the reasons behind them. I still plan to write about most of the bug reports I was interested in recently, but for this post I decided to pick only MySQL 8.0 regression bugs and pay special attention to those that could have been handled better or faster by Oracle engineers, as well as those handled perfectly.

The initial reason for this special attention was Bug #93085 - "Stall when concurrently execute create/alter user with flush privilege", which caused a lot of interesting Twitter discussions. It took some time, comments (in the bug report and in social media) and pressure from the MySQL Community (including yours truly) to get it accepted as a real (regression!) bug to work on and "Verified". Unfortunately, too often recently I see more time spent on arguing that something is not a bug, or cannot be reproduced, or is an example of improper usage of some MySQL feature, etc., instead of simply checking how things worked before MySQL 8.0 and how this changed, for the worse.

Another example of an "interesting" approach to bugs in MySQL 8.0 is Bug #93102 - "I_S queries don't return column names in the same case as requested.". It is indeed a duplicate of the old and well known Bug #84456 - "column names in metadata appearing as uppercase when selecting from I_S", reported at an early 8.0 development stage by Shane Bester from Oracle and by a community user (see Bug #85947). Still, it was decided NOT to fix it and to tell users to rely on a workaround, even though this breaks application compatibility and is a regression.

Take a look at Bug #92998 - "After finishing the backup with mysqldump, mysql crashes and restarts" also. It ended up in "Unsupported" status, with statements that "Dumping and restoring data between different 8.0 releases is not supported". That can be classified as a regression by itself. What I miss is a link to the manual saying it's not supported (I was not able to find it in 5 minutes) and any explanation of the crash and restart - supported or not, running mysqldump should NOT cause server restarts in the general case. I think this bug report could have ended up in many statuses, but of them all "Unsupported" is hardly correct.

This photo of mine is far from ideal and can be criticized from different points of view, but there is no point in arguing with the fact that it shows clouds in the sky. I wish the fact that MySQL 8.0 GA releases still have regression bugs were accepted with less arguing and more attention.

Now let me continue with a list of recently reported regression bugs in MySQL 8.0 that were handled mostly properly:

- Bug #93215 - "Case statement use in conditional with sub_part index regression in 8.0". MySQL versions < 8.0 (and MariaDB 10.3.7) work as expected. The bug was verified fast, but it still misses an explicit "regression" tag.
- Bug #93214 - "LIMIT is applied before HAVING when you have a subquery". The bug was "Verified" quickly, but I still miss the exact 8.0.x version(s) affected and the results of checking with older versions.
  I strongly suspect it's a regression, as MariaDB 10.3.7 provides the expected result:

MariaDB [test]> CREATE TABLE test (id INT PRIMARY KEY, value INT);
Query OK, 0 rows affected (0.510 sec)

MariaDB [test]> INSERT INTO test VALUES (1, 99), (2,98), (3, 97);
Query OK, 3 rows affected (0.057 sec)
Records: 3  Duplicates: 0  Warnings: 0

MariaDB [test]> SELECT t1.id, (SELECT t2.value FROM test t2 WHERE t1.id = t2.id) AS sub_value FROM test t1 HAVING sub_value = 99 ORDER BY value LIMIT 1;
+----+-----------+
| id | sub_value |
+----+-----------+
|  1 |        99 |
+----+-----------+
1 row in set (0.116 sec)

- Bug #93170 - "undo truncation in 8.0.13 is not crash safe". The bug was quickly verified (after all, it's a failure of the existing innodb_undo.truncate_recover MTR test case), but has not got a "regression" tag. I am still not sure how it was missed during regular testing and ended up in the MySQL 8.0.13 release.
- Bug #93147 - "Upgrade to 8.0.13 from 8.0.11 fails". In pre-8.0 releases there was no strict need to update to every intermediate minor version, so this is also a regression of a kind for any production DBA.
- Bug #92979 - "MySQL 8.0 performance degradation on INSERT with foreign_key_checks=0". This is a verified performance regression compared to MySQL 5.7, but the "regression" tag is still missing.

To summarize, there are some regressions noted by community users recently in MySQL 8.0 GA releases. Some of them were demonstrated with simple test cases, so it's strange they were not noticed by Oracle's QA. What's worse, it seems some Oracle engineers are not ready to accept the fact that the best ever MySQL 8.0 GA release they worked on may have got some things done incorrectly and worse than before, so they seem to waste time on useless discussions that everything is OK, works as expected and nothing can be done differently. I also see some processed and verified bug reports without a detailed check for regressions presented to users, or even with the "regression" tag NOT added when needed. I hope this is not going to become a new trend. I wish all community bug reports and features of MySQL got as much attention and detailed study from Oracle employees as the (far from perfect) JSON support in MariaDB...

  • Vue Laravel CRUD Example Tutorial From Scratch
Vue Laravel CRUD Example Tutorial From Scratch is today's leading topic. Laravel is PHP's fastest growing framework, with scalability and flexibility, and Vue.js is the fastest growing front-end library in the JavaScript community. Laravel provides Vue.js support out of the box. For this demo we will not create a separate project for the frontend, because Laravel has first-class support for Vue.js: we will build the Vue components inside Laravel's assets folder. We will create a Single Page Application (SPA) using Laravel and Vue.js in which you can create, read, edit and delete posts, backed by a Laravel API.

Vue Laravel CRUD Example Tutorial

#1: Install Laravel 5.7

First, we need to install the Laravel application with the following command.

laravel new vuelaravelcrud

Now go inside the project folder and install the frontend dependencies using the following command.

npm install

Also, open the project in your editor. I am using Visual Studio Code.

code .

Set up the database configuration inside the .env file.

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=vuecrud
DB_USERNAME=root
DB_PASSWORD=root

Save the file, and now your backend can connect to the MySQL database. Now start compiling the assets using the following command.

npm run dev

We can also run the following command to recompile the assets as you write new code or modify existing code.

npm run watch

#2: Install Vue dependencies and edit configurations

Type the following command to install the vue-router and vue-axios dependencies. vue-router is used to route our Vue.js application between the different components, and vue-axios is used to send network requests to the server.

npm install vue-router vue-axios --save

Now we need to change the app.js file inside the resources >> js folder. Modify the app.js file with the following code.

// App.js

require('./bootstrap');
window.Vue = require('vue');

import VueRouter from 'vue-router';
Vue.use(VueRouter);

import VueAxios from 'vue-axios';
import axios from 'axios';
Vue.use(VueAxios, axios);

Vue.component('example-component', require('./components/ExampleComponent.vue'));

const router = new VueRouter({ mode: 'history' });
const app = new Vue(Vue.util.extend({ router })).$mount('#app');

Here we have configured the vue-router and vue-axios libraries. Now, inside the resources >> views folder, create one file called post.blade.php and add the following code to it.

<!doctype html>
<html lang="{{ str_replace('_', '-', app()->getLocale()) }}">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Laravel</title>
  <link href="https://fonts.googleapis.com/css?family=Nunito:200,600" rel="stylesheet" type="text/css">
  <link href="/{{ mix('css/app.css') }}" type="text/css" rel="stylesheet" />
  <meta name="csrf-token" value="{{ csrf_token() }}" />
</head>
<body>
  <div id="app">
    <example-component></example-component>
  </div>
  <script src="/{{ mix('js/app.js') }}" type="text/javascript"></script>
</body>
</html>

Now change the route inside the routes >> web.php file. We are building a Single Page Application using Laravel and Vue, so we need to define the following catch-all route, which lets any URL resolve correctly without an error.
Otherwise it would give a 404 error, because in Laravel you have not defined those routes; it is Vue where the various component routes are defined. With the following code you can use the Laravel route as a proxy for the Vue routes and display the Vue components depending on the current URL.

<?php

Route::get('/{any}', function () {
    return view('post');
})->where('any', '.*');

Save the file, go to the browser and see the result. You can see that we have successfully integrated the Vue component in our Laravel application.

#3: Create Vue Components

Inside the resources >> js folder, create a folder called components, and inside that folder create the four following Vue components.

HomeComponent.vue
CreateComponent.vue
EditComponent.vue
IndexComponent.vue

// HomeComponent.vue

<template>
  <div class="row justify-content-center">
    <div class="col-md-8">
      <div class="card card-default">
        <div class="card-header">Home Component</div>
        <div class="card-body">I'm the Home Component component.</div>
      </div>
    </div>
  </div>
</template>

<script>
export default {
  mounted() {
    console.log('Component mounted.')
  }
}
</script>

// CreateComponent.vue

<template>
  <div class="row justify-content-center">
    <div class="col-md-8">
      <div class="card card-default">
        <div class="card-header">Create Component</div>
        <div class="card-body">I'm the Create Component component.</div>
      </div>
    </div>
  </div>
</template>

<script>
export default {
  mounted() {
    console.log('Component mounted.')
  }
}
</script>

// EditComponent.vue

<template>
  <div class="row justify-content-center">
    <div class="col-md-8">
      <div class="card card-default">
        <div class="card-header">Edit Component</div>
        <div class="card-body">I'm an Edit component.</div>
      </div>
    </div>
  </div>
</template>

<script>
export default {
  mounted() {
    console.log('Example Component mounted.')
  }
}
</script>

// IndexComponent.vue

<template>
  <div class="row justify-content-center">
    <div class="col-md-8">
      <div class="card card-default">
        <div class="card-header">Index Component</div>
        <div class="card-body">I'm an Index component.</div>
      </div>
    </div>
  </div>
</template>

<script>
export default {
  mounted() {
    console.log('Index Component mounted.')
  }
}
</script>

#4: Configure the vue-router

Inside the app.js file, write the following code.

// app.js

require('./bootstrap');
window.Vue = require('vue');

import VueRouter from 'vue-router';
Vue.use(VueRouter);

import VueAxios from 'vue-axios';
import axios from 'axios';
import App from './App.vue';
Vue.use(VueAxios, axios);

import HomeComponent from './components/HomeComponent.vue';
import CreateComponent from './components/CreateComponent.vue';
import IndexComponent from './components/IndexComponent.vue';
import EditComponent from './components/EditComponent.vue';

const routes = [
  { name: 'home', path: '/', component: HomeComponent },
  { name: 'create', path: '/create', component: CreateComponent },
  { name: 'posts', path: '/posts', component: IndexComponent },
  { name: 'edit', path: '/edit/:id', component: EditComponent }
];

const router = new VueRouter({ mode: 'history', routes: routes });
const app = new Vue(Vue.util.extend({ router }, App)).$mount('#app');

Here we have imported the four components and defined the routes for our application, then created a router object and passed it to our Vue application. Now create one more Vue component inside the resources >> js folder called App.vue and add the following code to it.
// App.vue

<template>
  <div class="container">
    <div>
      <transition name="fade">
        <router-view></router-view>
      </transition>
    </div>
  </div>
</template>

<style>
  .fade-enter-active, .fade-leave-active {
    transition: opacity .5s
  }
  .fade-enter, .fade-leave-active {
    opacity: 0
  }
</style>

<script>
export default{
}
</script>

Here we have defined our router-view. It is the directive that renders a component according to the current URL. So if our URL is /create, it will present the CreateComponent on the web page.

Save the file, and if npm run watch is not running you might need to compile again. Then go to the browser and visit these URLs to check whether the project is working.

http://vuelaravelcrud.test/create
http://vuelaravelcrud.test/posts
http://vuelaravelcrud.test/edit/21

If you get the expected component for each URL, you are fine; if not, check for errors in the terminal as well as in the browser's console panel.

#5: Create a Navigation bar

Write the following code inside the App.vue file. I have added the navbar.

// App.vue

<template>
  <div class="container">
    <nav class="navbar navbar-expand-sm bg-dark navbar-dark">
      <ul class="navbar-nav">
        <li class="nav-item">
          <router-link to="/" class="nav-link">Home</router-link>
        </li>
        <li class="nav-item">
          <router-link to="/create" class="nav-link">Create Post</router-link>
        </li>
        <li class="nav-item">
          <router-link to="/posts" class="nav-link">Posts</router-link>
        </li>
      </ul>
    </nav><br />
    <transition name="fade">
      <router-view></router-view>
    </transition>
  </div>
</template>

<style>
  .fade-enter-active, .fade-leave-active {
    transition: opacity .5s
  }
  .fade-enter, .fade-leave-active {
    opacity: 0
  }
</style>

<script>
export default{
}
</script>

Save the file and see the output in the browser.

#6: Create a Form

Write the following code inside the CreateComponent.vue file. We will add a Bootstrap form to create a post.

// CreateComponent.vue

<template>
  <div>
    <h1>Create A Post</h1>
    <form @submit.prevent="addPost">
      <div class="row">
        <div class="col-md-6">
          <div class="form-group">
            <label>Post Title:</label>
            <input type="text" class="form-control" v-model="post.title">
          </div>
        </div>
      </div>
      <div class="row">
        <div class="col-md-6">
          <div class="form-group">
            <label>Post Body:</label>
            <textarea class="form-control" v-model="post.body" rows="5"></textarea>
          </div>
        </div>
      </div><br />
      <div class="form-group">
        <button class="btn btn-primary">Create</button>
      </div>
    </form>
  </div>
</template>

<script>
export default {
  data(){
    return {
      post: {}
    }
  },
  methods: {
    addPost(){
      console.log(this.post);
    }
  }
}
</script>

So we have two fields, Post Title and Post Body, and one method called addPost(). When a user submits the form, we get the input inside the addPost() method, and from there we will send a POST request to the Laravel server to save the data into the database. I am skipping the validation of each field because this article is getting long; we will do it in another post.

Save the file and go to this URL: http://vuelaravelcrud.test/create or /create. You will see a form like the one below.

#7: Create a Laravel Backend

The primary purpose of the Laravel framework in this example is to build a backend API. So first we will create a schema for the posts table; we also need a Post model. Let's create both using the following command.

php artisan make:model Post -m

Now, write the following schema inside the [timestamp]create_posts_table.php file.
public function up()
{
    Schema::create('posts', function (Blueprint $table) {
        $table->increments('id');
        $table->string('title');
        $table->text('body');
        $table->timestamps();
    });
}

Migrate the database using the following command.

php artisan migrate
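For reference, on MySQL the migration above ends up creating a table roughly like the following (an approximate sketch of what Laravel 5.7's defaults generate; the framework builds this for you, so you never write this DDL by hand and the exact column types may differ slightly):

-- Approximate MySQL equivalent of the posts migration.
CREATE TABLE posts (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- $table->increments('id')
  title VARCHAR(255) NOT NULL,                          -- $table->string('title')
  body TEXT NOT NULL,                                   -- $table->text('body')
  created_at TIMESTAMP NULL,                            -- $table->timestamps()
  updated_at TIMESTAMP NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;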
Next, inside the Post.php model, write the following code to prevent the mass assignment exception.

<?php
// Post.php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    protected $fillable = ['title', 'body'];
}

Also, create a controller using the following command.

php artisan make:controller PostController

We are using a Laravel Resource Collection for the API development, so let us create one using the following command.

php artisan make:resource PostCollection

When building an API, you may need a transformation layer that sits between your Eloquent models and the JSON responses that are returned to your application's users. Laravel's resource classes allow you to expressively and quickly transform your models and model collections into JSON. The PostCollection resource is generated inside the app >> Http >> Resources >> PostCollection.php file.

<?php
// PostCollection.php

namespace App\Http\Resources;

use Illuminate\Http\Resources\Json\ResourceCollection;

class PostCollection extends ResourceCollection
{
    /**
     * Transform the resource collection into an array.
     *
     * @param \Illuminate\Http\Request $request
     * @return array
     */
    public function toArray($request)
    {
        return parent::toArray($request);
    }
}

#8: Define the CRUD operations

First, we define the function that stores the data inside the MySQL database.

<?php
// PostController.php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Http\Resources\PostCollection;
use App\Post;

class PostController extends Controller
{
    public function store(Request $request)
    {
        $post = new Post([
            'title' => $request->get('title'),
            'body' => $request->get('body')
        ]);
        $post->save();
        return response()->json('success');
    }
}

Now write the edit, update, index, and delete functions as well.

<?php
// PostController.php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Http\Resources\PostCollection;
use App\Post;

class PostController extends Controller
{
    public function store(Request $request)
    {
        $post = new Post([
            'title' => $request->get('title'),
            'body' => $request->get('body')
        ]);
        $post->save();
        return response()->json('successfully added');
    }

    public function index()
    {
        return new PostCollection(Post::all());
    }

    public function edit($id)
    {
        $post = Post::find($id);
        return response()->json($post);
    }

    public function update($id, Request $request)
    {
        $post = Post::find($id);
        $post->update($request->all());
        return response()->json('successfully updated');
    }

    public function delete($id)
    {
        $post = Post::find($id);
        $post->delete();
        return response()->json('successfully deleted');
    }
}

#9: Define the api routes

Now we need to define the API routes inside the routes >> api.php file.

<?php
// api.php

use Illuminate\Http\Request;

Route::middleware('auth:api')->get('/user', function (Request $request) {
    return $request->user();
});

Route::post('/post/create', 'PostController@store');
Route::get('/post/edit/{id}', 'PostController@edit');
Route::post('/post/update/{id}', 'PostController@update');
Route::delete('/post/delete/{id}', 'PostController@delete');
Route::get('/posts', 'PostController@index');

#10: Use Axios to send a network request

Now that we have created the backend, the next step is to send a POST request to the Laravel development server. Write the following code inside the CreateComponent.vue file's addPost() function.

// CreateComponent.vue

addPost(){
  let uri = 'http://vuelaravelcrud.test/api/post/create';
  this.axios.post(uri, this.post).then((response) => {
    this.$router.push({name: 'posts'});
  });
}

It will post a request to the server's API endpoint with the data, and the server saves the data. Now let's display the posts by editing the IndexComponent.vue file.

// IndexComponent.vue

<template>
  <div>
    <h1>Posts</h1>
    <div class="row">
      <div class="col-md-10"></div>
      <div class="col-md-2">
        <router-link :to="{ name: 'create' }" class="btn btn-primary">Create Post</router-link>
      </div>
    </div><br />
    <table class="table table-hover">
      <thead>
        <tr>
          <th>ID</th>
          <th>Item Name</th>
          <th>Item Price</th>
          <th>Actions</th>
        </tr>
      </thead>
      <tbody>
        <tr v-for="post in posts" :key="post.id">
          <td>{{ post.id }}</td>
          <td>{{ post.title }}</td>
          <td>{{ post.body }}</td>
          <td><router-link :to="{name: 'edit', params: { id: post.id }}" class="btn btn-primary">Edit</router-link></td>
          <td><button class="btn btn-danger">Delete</button></td>
        </tr>
      </tbody>
    </table>
  </div>
</template>

<script>
export default {
  data() {
    return {
      posts: []
    }
  },
  created() {
    let uri = 'http://vuelaravelcrud.test/api/posts';
    this.axios.get(uri).then(response => {
      this.posts = response.data.data;
    });
  }
}
</script>

Here, when the component is created we send the network request to fetch the data. Note the response.data.data: the Laravel Resource Collection wraps our posts in a data key, and axios itself exposes the response body as response.data, so to get the actual array of posts we need response.data.data. Without the Resource Collection wrapping, response.data alone would be enough.

#11: Send edit and update requests

When our edit component loads, we need to fetch the data from the server to display the existing values. Then, after changing the data in the textbox and textarea, we hit the Update button, which calls the updatePost() function to send a POST request to the server to update the data.

// EditComponent.vue

<template>
  <div>
    <h1>Edit Post</h1>
    <form @submit.prevent="updatePost">
      <div class="row">
        <div class="col-md-6">
          <div class="form-group">
            <label>Post Title:</label>
            <input type="text" class="form-control" v-model="post.title">
          </div>
        </div>
      </div>
      <div class="row">
        <div class="col-md-6">
          <div class="form-group">
            <label>Post Body:</label>
            <textarea class="form-control" v-model="post.body" rows="5"></textarea>
          </div>
        </div>
      </div><br />
      <div class="form-group">
        <button class="btn btn-primary">Update</button>
      </div>
    </form>
  </div>
</template>

<script>
export default {
  data() {
    return {
      post: {}
    }
  },
  created() {
    let uri = `http://vuelaravelcrud.test/api/post/edit/${this.$route.params.id}`;
    this.axios.get(uri).then((response) => {
      this.post = response.data;
    });
  },
  methods: {
    updatePost() {
      let uri = `http://vuelaravelcrud.test/api/post/update/${this.$route.params.id}`;
      this.axios.post(uri, this.post).then((response) => {
        this.$router.push({name: 'posts'});
      });
    }
  }
}
</script>

Now you can try to edit the data and submit the form, and you can see that the data is updated.

#12: Delete the data

The only remaining thing is to delete or remove the data from the database. So let us write the final code inside the IndexComponent.vue file.
// IndexComponent.vue

<template>
  <div>
    <h1>Posts</h1>
    <div class="row">
      <div class="col-md-10"></div>
      <div class="col-md-2">
        <router-link :to="{ name: 'create' }" class="btn btn-primary">Create Post</router-link>
      </div>
    </div><br />
    <table class="table table-hover">
      <thead>
        <tr>
          <th>ID</th>
          <th>Item Name</th>
          <th>Item Price</th>
          <th>Actions</th>
        </tr>
      </thead>
      <tbody>
        <tr v-for="post in posts" :key="post.id">
          <td>{{ post.id }}</td>
          <td>{{ post.title }}</td>
          <td>{{ post.body }}</td>
          <td><router-link :to="{name: 'edit', params: { id: post.id }}" class="btn btn-primary">Edit</router-link></td>
          <td><button class="btn btn-danger" @click.prevent="deletePost(post.id)">Delete</button></td>
        </tr>
      </tbody>
    </table>
  </div>
</template>

<script>
export default {
  data() {
    return {
      posts: []
    }
  },
  created() {
    let uri = 'http://vuelaravelcrud.test/api/posts';
    this.axios.get(uri).then(response => {
      this.posts = response.data.data;
    });
  },
  methods: {
    deletePost(id) {
      let uri = `http://vuelaravelcrud.test/api/post/delete/${id}`;
      this.axios.delete(uri).then(response => {
        // Locate the deleted post by its id before splicing it out of the local array.
        this.posts.splice(this.posts.findIndex(post => post.id === id), 1);
      });
    }
  }
}
</script>

So here we handle the click event in Vue to grab the id of the current post and send that id to remove the record from the database, and we also remove the post from the client-side array using the splice function (looking the post up by its id first, since the array holds post objects rather than plain ids).

Finally, Vue Laravel CRUD Example Tutorial From Scratch is over. Thanks for taking the time. You can use the following Github Code for reference.

Github Code

The post Vue Laravel CRUD Example Tutorial From Scratch appeared first on AppDividend.

  • Visualizing the MySQL Bug Tide (2018 edition)
    I've updated the bug tide graph I made in 2016 with today's data. The source code and details are here.
