MySQL Server Log Maintenance

As part of database administration, a DBA has to take care of database sub-components such as server logs, and has to plan regular maintenance activity for these components.

Log rotation

MySQL has various types of logs, i.e. the binary log, error log, slow query log, and general query log, each for a different purpose. After a certain time these logs grow, and you will start seeing issues like low disk space and a large number of log files.

MySQL allows you to flush its logs using the FLUSH LOGS command, which leads to the next question: how do you rotate and clean up old MySQL logs?

Linux has a utility called "logrotate"; using logrotate we can implement log rotation for the MySQL server logs.

Binary logs: These are critical if you have a replication setup. By setting the expire_logs_days MySQL variable you can manage their cleanup, and the FLUSH LOGS command will rotate the binary log.
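As a quick sketch of that binary log housekeeping (the 7-day retention value is illustrative, not a recommendation):

```sql
-- Runtime setting: purge binary logs older than 7 days
SET GLOBAL expire_logs_days = 7;

-- To make it permanent, add the same setting under [mysqld] in my.cnf:
--   expire_logs_days = 7

-- Rotate the binary log immediately
FLUSH BINARY LOGS;
```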

For the general and slow query logs that is not the case: the FLUSH LOGS command flushes buffered content to the respective log files, but it does not rotate them. logrotate is by default configured and managed by the OS root user. On a Linux (Red Hat) installation, you can use the mysql-log-rotate script for this; if you installed MySQL from an RPM distribution, this script should have been installed automatically. However, it is only a sample script, so for a full implementation let's create a separate mysql-log-rotate script.

Prerequisites:

USER and Privileges:

CREATE USER  'logadmin'@'localhost' IDENTIFIED BY 'xyzpwd';
GRANT RELOAD ON *.* TO 'logadmin'@'localhost';

Secure user credentials using mysql_config_editor:

shell> mysql_config_editor set --login-path=logadmin_client --host=localhost --user=logadmin --password

Enter password:<enter_mysql_logadmin_user_password>

NOTE: This stores the user credentials in .mylogin.cnf (this conf file gets created under the current OS user's home directory).

mysql-log-rotate script

/PATH/log/mysqld.log /PATH/log/slow-query.log /PATH/log/general-query.log {
    create 640 mysql mysql
    rotate 5
    daily
    minsize 1M
    notifempty
    missingok
    compress
    postrotate
        # run only if mysqld is really running
        if test -x /usr/bin/mysqladmin && \
           /usr/bin/mysqladmin --login-path=logadmin_client ping &>/dev/null
        then
            /usr/bin/mysqladmin --login-path=logadmin_client flush-logs
        fi
    endscript
}

NOTE: The above script will flush the logs three times, since we have three logs in one code block. To flush the logs only once, create a separate rotation code block for each log and add the postrotate script only in the last rotation code block.
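That per-log layout can be sketched like this (paths are placeholders; options are copied from the combined block above):

```
/PATH/log/mysqld.log {
    create 640 mysql mysql
    rotate 5
    daily
    minsize 1M
    notifempty
    missingok
    compress
}

/PATH/log/slow-query.log {
    # same options as above, no postrotate
    create 640 mysql mysql
    rotate 5
    daily
    minsize 1M
    notifempty
    missingok
    compress
}

/PATH/log/general-query.log {
    create 640 mysql mysql
    rotate 5
    daily
    minsize 1M
    notifempty
    missingok
    compress
    postrotate
        # flush once, after the last log has been rotated
        if test -x /usr/bin/mysqladmin && \
           /usr/bin/mysqladmin --login-path=logadmin_client ping &>/dev/null
        then
            /usr/bin/mysqladmin --login-path=logadmin_client flush-logs
        fi
    endscript
}
```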

Automation:

00 03 * * * /usr/sbin/logrotate -s /PATH/log/logrotate.status /PATH/monitor/mysql-log-rotate.sh > /PATH/log/logrotate_cron.log 2>&1

Key points:

  • You can trigger rotation on the basis of SIZE, TIME, or both. Explore the logrotate man page for more options.
  • The -s /PATH/log/logrotate.status file gets created/updated with each log name and timestamp; logrotate uses it to decide, per file, when the next rotation is due.
  • -f, --force
    Tells logrotate to force the rotation, even if it doesn't think
    this is necessary. Sometimes this is useful after adding new
    entries to logrotate, or if old log files have been removed by
    hand, as the new files will be created, and logging will continue correctly.
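As a sketch of the size-based option mentioned in the key points (the 100M threshold is illustrative):

```
/PATH/log/general-query.log {
    size 100M      # rotate as soon as the log exceeds 100M, regardless of age
    rotate 10
    compress
    missingok
    notifempty
}
```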

All Set !!

MySQL Reporting using AutoSQL Tool

Three ways to schedule a MySQL query

If you walk through any office you see people working in Excel. With MySQL for Excel (https://www.mysql.com/why-mysql/windows/excel/) you can already let them pull information from Excel themselves. However, in some cases it saves a lot of time if they don't have to pull the information, but it's pushed to them automatically.
Exception lists are the best example of queries you want to push to users. If you have a query that only occasionally returns results, you don't want to check for this every day; you just want to receive a mail when there are any results, e.g. a list of stuck invoices which can't be processed automatically.
How can we do this?

1. Using the MySQL Event Scheduler

The MySQL Event scheduler can be used to run a query on a predefined schedule and output the results in CSV format.
Let’s write a simple query to output stuck invoices to a CSV file (which can be imported with Excel).


SELECT * FROM erp.invoice WHERE status = 'stuck'
INTO OUTFILE 'c:\\DailyReports\\stuck_invoices.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';

Note that you can specify some formatting options of the file like the separator and the line terminator. Choosing the options above will allow Excel to open the file without problems.
Now let’s create a new MySQL Event to have it run every day at 9 AM:

CREATE EVENT erp.daily_invoice_stock
ON SCHEDULE EVERY 1 DAY
STARTS '2017-01-29 09:00:00'
DO
SELECT * FROM erp.invoice WHERE status = 'stuck'
INTO OUTFILE 'c:\\DailyReports\\stuck_invoices.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';

Creating events in MySQL requires the 'EVENT' privilege on the schema in which the event is created.
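One more prerequisite worth noting: events only fire while the event scheduler is running. A minimal sketch (the user name report_user is illustrative):

```sql
-- Check whether the scheduler is running
SHOW VARIABLES LIKE 'event_scheduler';

-- Turn it on at runtime (or set event_scheduler=ON under [mysqld] in my.cnf)
SET GLOBAL event_scheduler = ON;

-- Grant the EVENT privilege needed to create events in the erp schema
GRANT EVENT ON erp.* TO 'report_user'@'localhost';
```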

2. Scripting

Using command-line tools you can do a lot. The main benefit over using events is that you don't need specific database privileges to create them. Instead you can create a simple batch script and schedule it using the Windows Task Scheduler.
Create a file C:\invoice_stuck.sql with the query you want to schedule:


SELECT * FROM erp.invoice WHERE status = 'stuck'
INTO OUTFILE 'c:\\DailyReports\\stuck_invoices.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';

Create a batch script c:\execute_query.cmd with the following contents:

mysql.exe -u query_exec -pmy_password < c:\invoice_stuck.sql

Now schedule this batch file with Windows Task Scheduler to run at any interval.
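For completeness, one way to create such a daily 9 AM task from the command line with schtasks (the task name and paths are illustrative):

```
schtasks /create /tn "StuckInvoicesReport" /tr "c:\execute_query.cmd" /sc daily /st 09:00
```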

3. Using a commercial application like AutoSQL

There are several 3rd party tools to do the same and more. The major benefits are:
  • No special MySQL database privileges needed to create events (as with scripting)
  • Output to genuine Excel format (dates, numbers and columns nicely formatted)
  • Option to email the results
  • Easy for any user to set up
Here we will use AutoSQL (http://www.autosql.net), a simple single purpose tool to schedule queries and specify the output. It actually creates action lists which can be scheduled by Windows Task Scheduler.
AutoSQL uses an ODBC connection to connect to a data source. Therefore you need to have the MySQL ODBC driver installed (https://dev.mysql.com/downloads/connector/odbc/) and setup an ODBC connection (https://www.youtube.com/watch?v=K3GZidOwGmM).
Setup the query
Now choose the MySQL ODBC connection in the ‘Get DSN’ Window and copy the SQL query in the query window:

autosql1

Specifying the output
On the ‘Output’ tab we can specify that we want to output it to an Excel and send an email with the file attached. Here you can also specify you only want to send it if there are more than 0 results.

autosql2

After you have tested it, you can click on 'Get Background Command'. This copies the background execution command to the clipboard. You can use this in Windows Task Scheduler to actually execute the command on a schedule.

Conclusion

If you need to schedule a query and have no specific requirements on the output format, using the event scheduler from MySQL or a simple script is perfect. However, the output options are limited (CSV) and it is not really user friendly for the average user.
If you need better output and you are not into heavy scripting, take a look at the commercial options available.

 

 

MSSQL to MySQL Data Migration using MySQL Workbench 6.3

Recently I was testing data migration from MSSQL to MySQL using MySQL Workbench. My aim was to include data covering all the datatypes available in MSSQL. In the following blog post we will see the data migration prerequisites, the migration steps, and a few common errors.

About MySQL Workbench

MySQL Workbench is a unified visual tool for database architects, developers, and DBAs. MySQL Workbench provides data modeling, SQL development, and comprehensive administration tools for server configuration, user administration, backup, Data migration and much more. MySQL Workbench is available on Windows, Linux and Mac OS X.

When a supported RDBMS product is being migrated, the MySQL Workbench Migration Wizard will automatically convert as much information as it can, but you may still be required to manually edit the automatically migrated schema for difficult cases, or when the default mapping is not as desired.

Generally speaking, only table information and its data are automatically converted to MySQL. Code objects such as views, stored procedures, and triggers are not, but for supported RDBMS products they will be retrieved and displayed in the wizard. You can then manually convert them, or save them for converting at a later time.

The following MS-SQL versions are currently tested and supported by the MySQL Workbench Migration Wizard.

Microsoft SQL Server 2000, 2005, 2008, 2012

Prerequisite

Download and install MySQL Workbench GUI tool.

The MySQL Workbench Migration Wizard uses ODBC to connect to a source database, except for MySQL. You will need the ODBC driver installed that corresponds to the database you want to migrate from.

Preparation

To be able to migrate from Microsoft SQL Server, ensure the following:

  • The source SQL Server instance is running, and accepts TCP connections.
  • You know the IP and port of the source SQL server instance. If you will be migrating using a Microsoft ODBC driver for SQL Server (the default in Windows), you will need to know the host and the name of the SQL Server instance.
  • Make sure that the SQL Server is reachable from where you will be running MySQL Workbench. More specifically, check the firewall settings.
  • Make sure that the user account has proper privileges to the database that will be migrated.

DATA SOURCE ODBC Configuration

In order to set up connectivity between MSSQL and MySQL, we need to configure an ODBC data source with the MS-SQL connection information.

Add new System data source.

odbc_pic

Required privileges on MS-SQL DB

CONNECT SQL

VIEW ANY DATABASE

VIEW ANY DEFINITION

If proper permissions are not given, then the migration will throw a warning as follows:

blog_lalit_pic2

DB Servers Data Types Mapping

Refer:

https://dev.mysql.com/doc/workbench/en/wb-migration-database-mssql-typemapping.html

Available datatypes in MS-SQL:

https://msdn.microsoft.com/en-us/library/ms187752.aspx

Sample MS-SQL data

Sample table and data with different MS-SQL datatype

-- integer data types

CREATE TABLE dbo.int_table
(
MyBigIntColumn bigint
,MyIntColumn  int
,MySmallIntColumn smallint
,MyTinyIntColumn tinyint
);

ALTER TABLE int_table ADD MyBitColumn bit;

ALTER TABLE int_table ADD CONSTRAINT bit_def DEFAULT 1 FOR MyBitColumn;

INSERT INTO dbo.int_table (MyBigIntColumn, MyIntColumn, MySmallIntColumn, MyTinyIntColumn)
VALUES (9223372036854775807, 214483647, 32767, 255);

UPDATE int_table SET MyBitColumn = 1;

ALTER TABLE int_table ALTER COLUMN MyBitColumn bit NOT NULL;

-- decimal and numeric Data datatypes

CREATE TABLE dbo.num_table
(
MyDecimalColumn decimal(5,2)
,MyNumericColumn numeric(10,5)
);

INSERT INTO dbo.num_table VALUES (123, 12345.12);

ALTER TABLE num_table ADD MyMoneycolumn money;

ALTER TABLE num_table ADD MysmallMoneycolumn smallmoney;

INSERT INTO num_table (MyMoneycolumn, MysmallMoneycolumn) VALUES (20.098, 45.68);

ALTER TABLE num_table ADD c_real real, c_float float(32);

INSERT INTO num_table (c_real, c_float) VALUES (2.0, 43.67897);

-- datetime datatype

create table date_table(
ID               VARCHAR(4)         NOT NULL,
First_Name         VARCHAR(20),
Last_Name          VARCHAR(20),
Start_Date         DATE,
End_Date           DATE,
c_time             Time,
Salary             Money,
City               VARCHAR(20),
Description        VARCHAR(80),
c_datetime             datetime,
cs_smalldatetime    smalldatetime,
c_datetime2         datetime2
)

ALTER TABLE date_table ADD CONSTRAINT cc_datetime DEFAULT GETDATE() FOR c_datetime;

ALTER TABLE date_table ADD CONSTRAINT cc_time DEFAULT convert(time, getdate()) FOR c_time;

ALTER TABLE date_table ADD CONSTRAINT cc_startdate DEFAULT convert(date,getdate()) FOR start_date;

ALTER TABLE date_table ADD CONSTRAINT cc_enddate DEFAULT convert(date,getdate()) FOR end_date;

-- prepare data

INSERT INTO date_table (ID, First_Name, Last_Name, Salary, City, Description, cs_smalldatetime, c_datetime2)
VALUES ('01', 'Jason', 'Martin', 1234.56, 'Toronto', 'Programmer', '2007-05-08 12:35:00', '2007-05-08 12:35:29.1234567');

INSERT INTO date_table (ID, First_Name, Last_Name, Salary, City, Description, cs_smalldatetime, c_datetime2)
VALUES ('02', 'Alison', 'Mathews', 2234.78, 'Vancouver', 'Tester', '2016-07-08 12:36:00', '2006-07-08 12:36:29.1234567');

-- char,varchar,nvarchar,tinyint,int,text datatypes

CREATE TABLE [dbo].[Employee_2](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](150) NULL,
[empid] [int] NOT NULL,
[age] [tinyint] NOT NULL,
[gender] VARCHAR(10) NOT NULL CHECK (gender IN('male', 'Female', 'Unknown'))
);

Alter table Employee_2 add primary key (Id);

-- Create a nonclustered index on a table or view

CREATE INDEX i1 ON dbo.Employee_2 (Name);

--Create a clustered index on a table and use a 3-part name for the table

CREATE CLUSTERED INDEX ci1 ON Employee_2([Id]);

CREATE UNIQUE INDEX pk1 ON dbo.Employee_2 (Id);

-- Create a nonclustered index with a unique constraint on 3 columns and specify the sort order for each column

CREATE UNIQUE INDEX ui1 ON dbo.Employee_2 (name DESC, empid ASC, age DESC);

INSERT INTO Employee_2 (Name,empid,age,gender) values ('lalit',268981,27,'male');

INSERT INTO Employee_2 (Name,empid,age,gender) values ('harsh',268982,28,'male');

INSERT INTO Employee_2 (Name,empid,age,gender) values ('jina',268983,27,'female');

INSERT INTO Employee_2 (Name,empid,age,gender) values ('xyz',268984,32,'Unknown');

ALTER table employee_2 add emp_notes2 text;

update employee_2 SET emp_notes2='test data Migration from mssql- mysql';

CREATE TABLE Persons
(
P_Id int NOT NULL,
Lastname varchar(40),
Firstname varchar(40) NOT NULL,
Address varchar(100),
City char(50),
PRIMARY KEY (P_Id)
);

INSERT INTO Persons values(1,'c','lalit','IT park','PUNE');

CREATE TABLE Orders
(
O_Id int NOT NULL,
OrderNo int NOT NULL,
P_Id int,
PRIMARY KEY (O_Id),
CONSTRAINT fk_PerOrders FOREIGN KEY (P_Id)
REFERENCES Persons(P_Id)
);

INSERT INTO Orders VALUES (1, 2234, 1);

create table binary_table (c_binary binary , c_varbinary varbinary (max) ,c_image image);

INSERT INTO binary_table (c_varbinary)
values (convert(VARBINARY(max),44));

Few extra tables MS-SQL Table:

  1. [dbo].[Altiris, Inc_$Item] table with 20686 rows
CREATE TABLE [dbo].[Altiris, Inc_$Item](
[No_] [varchar](20) NOT NULL,
[Description] [varchar](100) NOT NULL,
[Description 2] [varchar](50) NULL,
[Blocked] [tinyint] NOT NULL,
[Last Date Modified] [datetime] NULL,
[Inactive] [tinyint] NOT NULL,
[Fixed Node] [tinyint] NOT NULL,
[Minimum Nodes] [int] NOT NULL,
[Maximum Nodes] [int] NOT NULL,
[Tax Group Code] [varchar](10) NOT NULL,
[Current Price List] [tinyint] NOT NULL,
[PrimeKey] [int] IDENTITY(1,1) NOT NULL,
CONSTRAINT [primeKey_FK1] PRIMARY KEY CLUSTERED
(
[PrimeKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

  2. [dbo].[Employee]
CREATE TABLE [dbo].[Employee](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](150) NULL,
CONSTRAINT [PK_Employee] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

  3. [dbo].[CUSTOMERS]
CREATE TABLE [dbo].[CUSTOMERS](
[ID] [int] NOT NULL,
[NAME] [varchar](20) NOT NULL,
[AGE] [int] NOT NULL,
[ADDRESS] [char](25) NULL,
[SALARY] [decimal](18, 2) NULL,
PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

 
  4. [dbo].[Products]
CREATE TABLE [dbo].[Products](
[ProductID] [int] NOT NULL,
[ProductName] [varchar](25) NOT NULL,
[Price] [money] NULL,
[ProductDescription] [text] NULL,
[c_time] [datetime] NOT NULL,
PRIMARY KEY CLUSTERED
(
[ProductID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[Products] ADD  CONSTRAINT [DF_YourTable]  DEFAULT (getdate()) FOR [c_time]
GO

Note: Insert appropriate data into the above tables for the respective datatypes.

Migration using MySQL Workbench

mig1

MS-SQL Connection test:
mig2

NOTE: The connection method 'ODBC Data Source' will cause the following error. To avoid this error during data migration, use the connection method 'ODBC Data Source [FreeTDS]'.

mig3

MySQL Connection test:
mig4

Fetching schema:
mig5

Schema selection and DB naming settings:
mig6

Reverse Engineering:
mig7

Source object list:
mig9

Migration:
mig10

Manual editing: ( If needed)
mig11

NOTE: We will fix these warnings in the next steps.

Target creation options:
mig12

Create schemas:
mig13

Create target result:
mig14

As we have seen, there were warnings for date_table: 'Default value CONVERT(...) is not supported'. In MySQL we cannot use functions as column defaults, so to tackle this problem we will create a trigger on the table for insert operations, as follows:

CREATE TRIGGER before_insert_date_table
BEFORE INSERT ON date_table
FOR EACH ROW
SET new.Start_Date = CURDATE(), new.End_Date = CURDATE(), new.c_time = CURTIME();

The DEFAULT value clause in a data type specification indicates a default value for a column. With one exception, the default value must be a constant; it cannot be a function or an expression. This means, for example, that you cannot set the default for a date column to be the value of a function such as NOW() or CURRENT_DATE. The exception is that you can specify CURRENT_TIMESTAMP as the default for TIMESTAMP and DATETIME columns.

https://dev.mysql.com/doc/refman/5.7/en/data-type-defaults.html
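A short sketch of that rule (table and column names are illustrative):

```sql
-- Allowed in MySQL 5.7: CURRENT_TIMESTAMP as a default for TIMESTAMP/DATETIME
CREATE TABLE default_demo (
  created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  updated_at DATETIME  NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

-- Not allowed in MySQL 5.7: a function default on a DATE column
-- start_date DATE DEFAULT CURDATE()   -- rejected; use a BEFORE INSERT trigger instead
```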

Another table that we need to fix is  Employee_2

MS-SQL Employee_2 table column definition:

[gender] VARCHAR(10) NOT NULL CHECK (gender IN('male', 'Female', 'Unknown'))

MySQL Employee_2 table column definition:
mig15

A more appropriate datatype for the 'gender' column is ENUM. Let's change it:

`gender` enum('Male','Female','Unknown') NOT NULL default 'Unknown',

Data transfer setup:
mig16

Bulk data transfer:
mig17

Data and Log analysis

Migration Report:

------------------------------------------------------------------------------------

MySQL Workbench Migration Wizard Report

Source: Microsoft SQL Server 12.0.4100

Target: MySQL 5.7.15

------------------------------------------------------------------------------------
-- Migration

-- Summary


Number of migrated schemas: 1

-- mysql_migration

Source Schema:   mysql_migration

- Tables:             11
- Triggers:           0
- Views:              4
- Stored Procedures:  3
- Functions:          0


2. Migration Issues
  - mysql_migration
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - CUSTOMERS
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - NAME
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - ADDRESS
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - Products
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - ProductName
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - ProductDescription
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - c_time
    note  Default value is getdate(), so type was changed from DATETIME to TIMESTAMP
  - Employee
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - Name
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - Employee_2
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - Name
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - gender
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - emp_notes
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - emp_notes2
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - Persons
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - Lastname
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - Firstname
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - Address
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - City
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - Orders
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - int_table
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - MyBitColumn
    note  Source column type BIT was migrated to TINYINT(1)
  - num_table
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - date_table
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - ID
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - First_Name
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - Last_Name
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - Start_Date
    warning  Default value CONVERT([date],getdate(),0) is not supported
  - End_Date
    warning  Default value CONVERT([date],getdate(),0) is not supported
  - c_time
    warning  Default value CONVERT([time],getdate(),0) is not supported
  - City
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - Description
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - c_datetime
    note  Default value is getdate(), so type was changed from DATETIME to TIMESTAMP
  - binary_table
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - Altiris, Inc_$Item
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - No_
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - Description
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - Description 2
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci
  - Tax Group Code
    note  Collation SQL_Latin1_General_CP1_CI_AS migrated to utf8_general_ci


3. Object Creation Issues


4. Migration Details

4.1. Table mysql_migration.CUSTOMERS (CUSTOMERS)

Columns:
  - ID INT    
  - NAME VARCHAR(20)    
  - AGE INT    
  - ADDRESS CHAR(25)    
  - SALARY DECIMAL(18,2)    

Foreign Keys:

Indices:
  - PRIMARY (ID)


4.2. Table mysql_migration.Products (Products)

Columns:
  - ProductID INT    
  - ProductName VARCHAR(25)    
  - Price DECIMAL(19,4)    
  - ProductDescription LONGTEXT    
  - c_time TIMESTAMP  CURRENT_TIMESTAMP  

Foreign Keys:

Indices:
  - PRIMARY (ProductID)


4.3. Table mysql_migration.Employee (Employee)

Columns:
  - Id INT    
  - Name VARCHAR(150)    

Foreign Keys:

Indices:
  - PRIMARY (Id)


4.4. Table mysql_migration.Employee_2 (Employee_2)

Columns:
  - Id INT    
  - Name VARCHAR(150)    
  - empid INT    
  - age TINYINT UNSIGNED   
  - gender VARCHAR(10)    
  - emp_notes LONGTEXT    
  - emp_notes2 LONGTEXT    

Foreign Keys:

Indices:
  - PRIMARY (Id)
  - ci1 (Id)
  - i1 (Name)
  - pk1 (Id)
  - ui1 (Name, empid, age)


4.5. Table mysql_migration.Persons (Persons)

Columns:
  - P_Id INT    
  - Lastname VARCHAR(40)    
  - Firstname VARCHAR(40)    
  - Address VARCHAR(100)    
  - City CHAR(50)    

Foreign Keys:

Indices:
  - PRIMARY (P_Id)


4.6. Table mysql_migration.Orders (Orders)

Columns:
  - O_Id INT    
  - OrderNo INT    
  - P_Id INT    

Foreign Keys:
  - fk_PerOrders (P_Id) ON Persons (P_Id)

Indices:
  - PRIMARY (O_Id)


4.7. Table mysql_migration.int_table (int_table)

Columns:
  - MyBigIntColumn BIGINT    
  - MyIntColumn INT    
  - MySmallIntColumn SMALLINT    
  - MyTinyIntColumn TINYINT UNSIGNED   
  - MyBitColumn TINYINT(1)  1  

Foreign Keys:

Indices:


4.8. Table mysql_migration.num_table (num_table)

Columns:
  - MyDecimalColumn DECIMAL(5,2)    
  - MyNumericColumn DECIMAL(10,5)    
  - MyMoneycolumn DECIMAL(19,4)    
  - MysmallMoneycolumn DECIMAL(10,4)    
  - c_real FLOAT(24,0)    
  - c_float DOUBLE    

Foreign Keys:

Indices:


4.9. Table mysql_migration.date_table (date_table)

Columns:
  - ID VARCHAR(4)    
  - First_Name VARCHAR(20)    
  - Last_Name VARCHAR(20)    
  - Start_Date DATE    
  - End_Date DATE    
  - c_time TIME(6)    
  - Salary DECIMAL(19,4)    
  - City VARCHAR(20)    
  - Description VARCHAR(80)    
  - c_datetime TIMESTAMP  CURRENT_TIMESTAMP  
  - cs_smalldatetime DATETIME    
  - c_datetime2 DATETIME(6)    

Foreign Keys:

Indices:


4.10. Table mysql_migration.binary_table (binary_table)

Columns:
  - c_binary BINARY(1)    
  - c_varbinary LONGBLOB    
  - c_image LONGBLOB    

Foreign Keys:

Indices:


4.11. Table mysql_migration.Altiris, Inc_$Item (Altiris, Inc_$Item)

Columns:
  - No_ VARCHAR(20)    
  - Description VARCHAR(100)    
  - Description 2 VARCHAR(50)    
  - Blocked TINYINT UNSIGNED   
  - Last Date Modified DATETIME(6)    
  - Inactive TINYINT UNSIGNED   
  - Fixed Node TINYINT UNSIGNED   
  - Minimum Nodes INT    
  - Maximum Nodes INT    
  - Tax Group Code VARCHAR(10)    
  - Current Price List TINYINT UNSIGNED   
  - PrimeKey INT    

Foreign Keys:

Indices:
  - PRIMARY (PrimeKey)


II. Data Copy

  - `mysql_migration`.`Employee`
            Succeeded : copied 2 of 2 rows from [mysql_migration].[dbo].[Employee]
  - `mysql_migration`.`binary_table`
            Succeeded : copied 1 of 1 rows from [mysql_migration].[dbo].[binary_table]
  - `mysql_migration`.`num_table`
            Succeeded : copied 4 of 4 rows from [mysql_migration].[dbo].[num_table]
  - `mysql_migration`.`Products`
            error  `mysql_migration`.`Products`:malloc(1073741824) failed for blob transfer buffer
            error  `mysql_migration`.`Products`:Failed copying 2 rows
            Succeeded : copied 2 of 2 rows from [mysql_migration].[dbo].[Products]
  - `mysql_migration`.`int_table`
            Succeeded : copied 1 of 1 rows from [mysql_migration].[dbo].[int_table]
  - `mysql_migration`.`Orders`
            Succeeded : copied 1 of 1 rows from [mysql_migration].[dbo].[Orders]
  - `mysql_migration`.`Employee_2`
            Succeeded : copied 4 of 4 rows from [mysql_migration].[dbo].[Employee_2]
  - `mysql_migration`.`CUSTOMERS`
            Succeeded : copied 6 of 6 rows from [mysql_migration].[dbo].[CUSTOMERS]
  - `mysql_migration`.`date_table`
            Succeeded : copied 2 of 2 rows from [mysql_migration].[dbo].[date_table]
  - `mysql_migration`.`Persons`
            Succeeded : copied 1 of 1 rows from [mysql_migration].[dbo].[Persons]
  - `mysql_migration`.`Altiris, Inc_$Item`
            Succeeded : copied 20686 of 20686 rows from [mysql_migration].[dbo].[Altiris, Inc_$Item]

Validate the data in the MySQL database, and all set!

MySQL script automation and security

After MySQL installation, if you don't have any enterprise-level tool or GUI interface for monitoring and backups, then one option is to write your own scripts to automate these tasks.
In this blog post, we are going to see a few monitoring and backup scripts, covering common security issues.

Credential security

Following is a simple script which monitors the MySQL service. If the MySQL service [mysqld] is down, it sends an email alert.

#!/bin/sh

# Connection details
MYSQL_USER="monitor"
MYSQL_PASS="lemon_pwd@123"   # Plain-text password: a security threat
MYSQL_HOST="localhost"
SERVER_HOST=$( hostname )
EMAIL=user1@xyz.com,user2@xyz.com

# MySQL status
mysqladmin -u${MYSQL_USER} -p${MYSQL_PASS} -h${MYSQL_HOST} ping 2>/dev/null 1>/dev/null
if [ $? -ne 0 ]; then
echo "MySQL database is down " | mail -s " MySQL not running on $SERVER_HOST" "$EMAIL"
fi
# END!!

The problem with the above script is that it has the MySQL user credentials in plain text. We should store the credentials somewhere safe, in an encrypted file.

mysql_config_editor and Credential security

The mysql_config_editor utility enables you to store authentication credentials in an encrypted login path file named .mylogin.cnf.

1. Create a user for monitoring with required privileges.

CREATE USER IF NOT EXISTS 'monitor'@'localhost' IDENTIFIED BY 'strong_pwd'; 

GRANT SELECT, RELOAD, PROCESS, REPLICATION CLIENT ON *.* TO 'monitor'@'localhost';

2. Store user credentials info into .mylogin.cnf (This conf file will get created under current OS user home directory)

shell> mysql_config_editor set --login-path=monitor_client --host=localhost --user=monitor --password                                                                                     

Enter password:<enter_mysql_monitor_user_password>

3. Verify the file contains

shell> mysql_config_editor print --login-path=monitor_client
[monitor_client]
user = monitor
password = *****
host = localhost

4. Implementation and use.

MySQL access with credentials:

shell> mysql -u monitor -h localhost -p

MySQL access with a mysql_config_editor login path:

shell> mysql --login-path=monitor_client

Monitoring scripts

A shell script for MySQL service and replication monitoring:


#!/bin/bash

SERVER_HOST=$( hostname )
EMAIL=user1@xyz.com,user2@xyz.com

# MySQL service monitoring
mysqladmin --login-path=monitor_client ping 2>/dev/null 1>/dev/null
if [ $? -ne 0 ]; then
echo "MySQL database is down " | mail -s " MySQL not running on $SERVER_HOST" "$EMAIL"
fi

# Replication monitoring 

for MYSQL_HOST in localhost
do
  MSG1=`/usr/bin/mysql --login-path=monitor_client -e "show slave status\G" | grep 'Slave_IO_Running:'`
  OUTPUT=(${MSG1//:/ })
  STATUS1=`echo ${OUTPUT[1]}`
  MSG2=`/usr/bin/mysql --login-path=monitor_client -e "show slave status\G" | grep 'Slave_SQL_Running:'`
  OUTPUT=(${MSG2//:/ })
  STATUS2=`echo ${OUTPUT[1]}`
  if [ "$STATUS1" == "Yes" ] && [ "$STATUS2" == "Yes" ];
  then
    echo "$MYSQL_HOST = $STATUS1 - $STATUS2"
  else
    echo "$SERVER_HOST replication not working"
    mail -s "Replication DOWN - $SERVER_HOST" "$EMAIL" <<EOF
Please check $SERVER_HOST database replication.
EOF
  fi
done
# End!!
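The two grep passes over SHOW SLAVE STATUS above can be collapsed into a single awk pass. Below is a minimal sketch of just the parsing logic; the helper name `check_slave_status` and the sample status text are illustrative, while the field names (`Slave_IO_Running`, `Slave_SQL_Running`) are the real ones:

```shell
#!/bin/sh
# check_slave_status: read "SHOW SLAVE STATUS\G" text on stdin and print
# OK when both replication threads report Yes, FAIL otherwise.
check_slave_status() {
  awk '
    /Slave_IO_Running:/  { io  = $2 }
    /Slave_SQL_Running:/ { sql = $2 }
    END { if (io == "Yes" && sql == "Yes") print "OK"; else print "FAIL" }
  '
}

# Demonstration with fabricated status text; in the real script this would be
# piped from: mysql --login-path=monitor_client -e "show slave status\G"
printf 'Slave_IO_Running: Yes\nSlave_SQL_Running: No\n' | check_slave_status
```

Driving the alert off a single OK/FAIL string avoids running the status query twice and sidesteps the copy/paste bug class seen above.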

Backup scripts

1. Create a user for backup with required privileges.

CREATE USER IF NOT EXISTS 'backup'@'localhost' IDENTIFIED BY 'strong_pwd'; 

GRANT SELECT, RELOAD, SHOW DATABASES, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER ON *.* TO 'backup'@'localhost';

2. Store the backup user credentials in .mylogin.cnf (this file is created in the current OS user's home directory):

shell> mysql_config_editor set --login-path=backup_client --host=localhost --user=backup --password                                                                                          

Enter password:<enter_mysql_backup_user_password>

3. Backup script


#!/bin/bash

NOW="$(date +"%d-%m-%Y")"
BKP_DIR="/backups/full_backups/$NOW/"
SERVER_HOST=$( hostname )

mkdir -p "$BKP_DIR"
touch "$BKP_DIR/backup.log"

echo "Dumping MySQL databases into separate SQL command files into dir=$BKP_DIR" >> $BKP_DIR/backup.log

db_count=0

for d in $(mysql -NBA --login-path=backup_client -e 'show databases')
do
  if [[ "$d" != information_schema ]] && [[ "$d" != mysql ]] && [[ "$d" != performance_schema ]] && [[ "$d" != sys ]]; then
    (( db_count++ ))
    echo "DUMPING DATABASE: $d" >> $BKP_DIR/backup.log
    mysqldump --login-path=backup_client --single-transaction $d | gzip > $BKP_DIR/$d.sql.gz
    echo "Dumping --triggers --routines --events for database $d into dir=$BKP_DIR" >> $BKP_DIR/backup.log
    mysqldump --login-path=backup_client --triggers --routines --events --no-create-info --no-data --no-create-db --skip-opt $d | gzip > $BKP_DIR/$d-routines.sql.gz
  fi
done

echo "$db_count databases dumped into dir=$BKP_DIR" >> $BKP_DIR/backup.log

find /backups/full_backups/ -type d -ctime +6 -exec rm -rf {} \;

echo "Backups OLDER than 6 days deleted successfully" >> $BKP_DIR/backup.log

# End!!
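A backup is only useful if it restores, so at minimum the compressed dumps should be integrity-checked right after they are written. A small sketch follows; the function name `verify_backups` is made up, and note that gzip -t only verifies the archive itself, not the SQL inside it:

```shell
#!/bin/sh
# verify_backups: run "gzip -t" on every .gz dump in a directory and
# report OK/BAD per file; returns non-zero if any file is corrupt.
verify_backups() {
  rc=0
  for f in "$1"/*.gz; do
    [ -e "$f" ] || continue
    if gzip -t "$f" 2>/dev/null; then
      echo "OK  $f"
    else
      echo "BAD $f"
      rc=1
    fi
  done
  return $rc
}
```

Calling verify_backups "$BKP_DIR" at the end of the backup script (and mailing an alert on a non-zero exit) catches truncated dumps before the older copies get rotated away.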

Schedule these scripts to run automatically and all set!!
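Scheduling is typically done with cron under the same OS user that owns .mylogin.cnf. A minimal sketch; the script paths and intervals below are assumptions, adjust them to your environment:

```
# Hypothetical crontab: service/replication check every 5 minutes,
# full backup daily at 01:30.
*/5 * * * * /home/mysql/scripts/mysql_monitor.sh >/dev/null 2>&1
30 1 * * *  /home/mysql/scripts/mysql_backup.sh  >/dev/null 2>&1
```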

MySQL service: Unable to setup unix socket lock file.

How to solve the mysqld service restart problem for the above error?

Problem:
I was adding a shell and home directory for the mysql user and executed the following command:

shell> usermod -m -d /home/mysql -s /bin/bash mysql

If MySQL is running with processes owned by the mysql user, we need to stop MySQL first; otherwise usermod will throw an error like: usermod: user mysql is currently used by process 27768

After stopping the MySQL service and adding the shell and home directory for the mysql user, the mysqld service started throwing an error at startup:


shell> service mysqld restart
Redirecting to /bin/systemctl restart mysqld.service
Job for mysqld.service failed because the control process exited with error code. See "systemctl status mysqld.service" and "journalctl -xe" for details.
shell> systemctl status mysqld.service

● mysqld.service - MySQL Server
Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
Active: deactivating (final-sigterm) (Result: exit-code) since Wed 2016-12-14 22:16:42 MST; 7min ago
Docs: man:mysqld(8)
http://dev.mysql.com/doc/refman/en/using-systemd.html
Process: 35222 ExecStart=/usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid $MYSQLD_OPTS (code=exited, status=1/FAILURE)
Process: 35204 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
Main PID: 27768 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/mysqld.service
└─35226 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid

mysqld error logs:

[ERROR] Could not create unix socket lock file /var/lib/mysql/mysql.sock.lock.
[ERROR] Unable to setup unix socket lock file.
[ERROR] Aborting

Tried to stop and kill the mysqld service, but it still would not leave the process list.

Root cause: a suspected change in the process ID after modifying the mysql user properties.

Solution:

Just move/rename your my.cnf and start the mysqld service with the default configuration. You will see no more errors at service startup.
Then move the backed-up/renamed original my.cnf back and restart the mysqld service again.

service mysqld restart

And it should start working fine, as it was.

All set!!

Strict sql mode and errors

After a MySQL version upgrade, there is a possibility that the application will start getting errors/exceptions for INSERT/UPDATE operations due to missing default data/values, as follows:

ERROR:  Field ‘column_name’ doesn’t have a default value

The above error will also trigger for a wrong data type or an out-of-range value for a column in an INSERT/UPDATE SQL operation.

The reason for this error is the new default SQL_MODE value (MySQL 5.7), which includes STRICT_TRANS_TABLES, together with the sql_mode value set in the default my.cnf created after installation [under /etc/my.cnf or /usr/my.cnf].

Default sql_mode by MySQL version:

MySQL version    Default sql_mode value
<= 5.6.5         (empty)
>= 5.6.6         NO_ENGINE_SUBSTITUTION
5.7              ONLY_FULL_GROUP_BY, STRICT_TRANS_TABLES, NO_ZERO_IN_DATE, NO_ZERO_DATE, ERROR_FOR_DIVISION_BY_ZERO, NO_AUTO_CREATE_USER, NO_ENGINE_SUBSTITUTION

Let's take a scenario where an application was running with MySQL 5.5/5.6 and MySQL was then upgraded to version 5.7.
After the upgrade, the application will start getting errors/exceptions for INSERT/UPDATE operations with missing default values, as follows.

ERROR:  Field ‘column_name’ doesn’t have a default value

If you troubleshoot the problematic table further, you will see that the table definition uses a NOT NULL constraint on a column but provides no default value for it.
So if your application sends an INSERT with no value for a column with a NOT NULL constraint, MySQL will throw an error for that INSERT operation, because of STRICT_TRANS_TABLES in sql_mode.

Example:

mysql> show variables like 'sql_mode';
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------+
| Variable_name | Value |
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------+
| sql_mode | ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION |
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> create table sqlmode_test (id int(11) primary key auto_increment, uname varchar(30) , language varchar(12) not null);
Query OK, 0 rows affected (0.04 sec)

mysql> set session sql_mode='STRICT_TRANS_TABLES';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> show variables like 'sql_mode';
+---------------+---------------------+
| Variable_name | Value |
+---------------+---------------------+
| sql_mode | STRICT_TRANS_TABLES |
+---------------+---------------------+
1 row in set (0.00 sec)

mysql> insert into sqlmode_test(uname) values ('harsh');
ERROR 1364 (HY000): Field 'language' doesn't have a default value
mysql> set session sql_mode='';
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like 'sql_mode';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| sql_mode | |
+---------------+-------+
1 row in set (0.00 sec)

mysql> insert into sqlmode_test(uname) values ('harsh');
Query OK, 1 row affected, 1 warning (0.00 sec)

mysql> show warnings;
+---------+------+-----------------------------------------------+
| Level | Code | Message |
+---------+------+-----------------------------------------------+
| Warning | 1364 | Field 'language' doesn't have a default value |
+---------+------+-----------------------------------------------+
1 row in set (0.00 sec)

mysql> select * from sqlmode_test;
+----+--------+----------+
| id | uname | language |
+----+--------+----------+
| 1 | harsh | |
+----+--------+----------+
1 row in set (0.00 sec)

 

Solution:

Option 1:
Correct the table column definition [to avoid both errors and warnings].

Option 2: [SQL will throw a warning instead of an error]
Remove strict SQL mode:
- Dynamically:
SET GLOBAL sql_mode='';
- Persist in my.cnf:
sql_mode=''
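Rather than blanking sql_mode entirely, a narrower option is to drop only STRICT_TRANS_TABLES and keep the other modes. The string surgery can be done in shell before writing the value back to my.cnf; a minimal sketch, with the helper name `strip_strict` being made up:

```shell
#!/bin/sh
# strip_strict: remove STRICT_TRANS_TABLES from a comma-separated sql_mode
# string, leaving the remaining modes untouched.
strip_strict() {
  echo "$1" | tr ',' '\n' | grep -v '^STRICT_TRANS_TABLES$' | paste -sd, -
}

strip_strict "ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION"
```

The resulting value can then go into my.cnf as sql_mode=..., or be applied live with SET GLOBAL sql_mode.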

All set !!

MySQL Architecture and Components

This blog post is all about the MySQL 5.7 physical and logical architecture and its components. I will try to explain things in flow, including data processing and SQL execution in MySQL, with the help of diagrams.

Unlike other databases, MySQL is very flexible and offers different kinds of storage engines as plugins for different kinds of needs. Because of this, MySQL's architecture and behavior also change depending on the storage engine in use; for example, transactional [InnoDB] and non-transactional [MyISAM] engines differ in data storage and SQL execution methods, and within the server, engine-specific components like memory and buffers are used depending on which storage engine handles the SQL operation.
We will discuss InnoDB in more detail, since it is the default and main storage engine for MySQL.

MySQL Physical Architecture:

mysql_physical_arch

Configuration files:

auto.cnf :  Contains server_uuid
my.cnf :     MySQL Configuration file.

Miscellaneous files:

--basedir=dir_name
The path to the MySQL binaries installation directory.
--datadir=dir_name
The path to the MySQL data directory, which contains data, status, and log files.
--pid-file=file_name
The path name of the file in which the server should write its process ID.
--socket=file_name, -S file_name
On Unix, the name of the Unix socket file to use for connections to a local server.
--log-error=file_name
Log errors and startup messages to this file.
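The options above usually live together in my.cnf rather than on the command line. A minimal sketch; the paths shown are typical Red Hat RPM-install defaults, not universal:

```ini
# Hypothetical minimal my.cnf -- adjust paths to your installation.
[mysqld]
basedir   = /usr
datadir   = /var/lib/mysql
pid-file  = /var/run/mysqld/mysqld.pid
socket    = /var/lib/mysql/mysql.sock
log-error = /var/log/mysqld.log
```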

MySQL Logical Architecture:

mysql_logical_arch

Client:
A utility used to connect to the MySQL server.

Server:
The MySQL instance where the actual data is stored and data processing happens.

mysqld:
The MySQL server daemon program, which runs in the background and manages database-related incoming and outgoing requests from clients. mysqld is a multi-threaded process that allows connections from multiple sessions, listens for all connections, and manages the MySQL instance.

MySQL memory allocation:
Main MySQL memory is dynamic; examples are innodb_buffer_pool_size (from 5.7.5), key_buffer_size, etc. It works on the shared-nothing principle, which means every session has a unique execution plan and data sets can be shared only within the same session.

GLOBAL:

  • Allocated once
  • Shared by the server process and its threads

SESSION:

  • Allocated for each mysql client session
  • Dynamically allocated and deallocated
  • Used for handling query result
  • Buffer size per session

Connection/Thread handling:
Manages client connections/sessions [mysql threads]

Parser:
Checks SQL syntax by examining every character of the SQL query, and generates an SQL_ID for each query.

Optimizer:
Creates an efficient query execution plan as per the storage engine, and may rewrite a query. Example: InnoDB has a shared buffer, so the optimizer can get pre-cached data from it. Using table statistics, the optimizer generates an execution plan for a SQL query.

Metadata cache:
Cache for object metadata information and statistics.

Query cache:
Serves identical queries from memory. If an identical query from a client is found in the query cache, the MySQL server retrieves the results from the query cache rather than parsing and executing the query again. It is a cache shared across sessions, so a result set generated by one client can be sent in response to the same query issued by another client. The query cache is keyed on SQL_ID. SELECTing data into a view is a good example of pre-caching data using the query cache.

Key cache:
Caches table indexes. In MySQL, keys are indexes (in Oracle, keys are constraints). If the index size is small, it caches the index structure and the data leaf; if the index is large, it caches only the index structure. Used by the MyISAM storage engine.

Storage engine:
The MySQL component that manages physical data (file management) and locations. The storage engine is responsible for SQL statement execution and for fetching data from data files. It is used as a plugin and can be loaded/unloaded from a running MySQL server. A few of them are as follows:

  1. InnoDB :
    • Fully transactional ACID.
    • Offers REDO and UNDO for transactions.
    • Data storage in tablespace :

    – Multiple data files
    – Logical object structure using innodb data and log buffer

    • Row level locking.
  2. NDB (For MySQL Cluster):
    • Fully Transactional and ACID Storage engine.
    • Distributed execution of data across multiple mysqld processes.
    • NDB use logical data with own buffer for each NDB engine.
    • Offers REDO and UNDO for transactions.
    • Row level locking.
  3. MyISAM:
    • Non-transactional storage engine
    • Speed for read
    • Data storage in files, using the key, metadata, and query caches

    – FRM  for table structure
    – MYI for table index
    – MYD for table data

    • Table level locking.
  4. MEMORY:
    • Non-transactional storage engine
    • All data is stored in memory, other than table metadata and structure.
    • Table level locking.
  5. ARCHIVE:
    • Non-transactional storage engine.
    • Stores large amounts of compressed and unindexed data.
    • Allows INSERT, REPLACE, and SELECT, but not DELETE or UPDATE SQL operations.
    • Table level locking.
  6. CSV:
    • Stores data in flat files using comma-separated values format.
    • The table structure needs to be created within the MySQL server (.frm)

SQL execution

sql_execution

Storage engines like InnoDB and NDB have a logical structure for data and their own data buffers. These buffers are defined at the storage engine level.

About InnoDB Storage engine:

  • Fully transactional ACID.
  • Row level locking.
  • Offers REDO and UNDO for transactions.
  • Data storage in tablespace :
    – Multiple data files
    – Logical object structure using InnoDB data and log buffer
  • Use shared file to store objects [Data and Index in the same file]
  • InnoDB data is a 100% logical structure; the data is stored physically.
  • InnoDB reads physical data and builds a logical structure [blocks and rows].
  • The logical storage is called a TABLESPACE.

InnoDB storage engine architecture:

innodb_arch

Tablespace:

Storage for InnoDB is divided into tablespaces. A tablespace is a logical structure associated with multiple data files (objects). Each tablespace contains pages (blocks), extents, and segments.

innodb_tablespace

Pages: The smallest piece of data for InnoDB, also called blocks. The default page size is 16KB, and a page can hold one or more rows depending on row size.

Available page sizes: 4KB, 8KB, 16KB, 32KB, 64KB
Variable name: innodb_page_size
Needs to be configured before initializing the mysqld server.

Extents: A group of pages. For better I/O throughput, InnoDB reads or writes a collection of pages, i.e. one extent, at a time.
With the default 16KB page size, the extent size is 1MB.
The doublewrite buffer reads/writes/allocates or frees data one extent at a time.

Segments: Collections of extents within an InnoDB tablespace. InnoDB allocates up to four extents at a time to a growing segment.
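The page/extent relationship above is simple arithmetic; a quick sketch to sanity-check it:

```shell
#!/bin/sh
# With the default 16KB page, InnoDB groups 64 pages into one 1MB extent.
page_kb=16
pages_per_extent=64
echo "extent size = $(( page_kb * pages_per_extent )) KB"
```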

Innodb components:

In Memory:

InnoDB buffer pool:
The central buffer for the InnoDB storage engine. In this data buffer, blocks are loaded and table and index data are cached.
– The main memory area where InnoDB caches table data and indexes.
– Sized up to 80% of physical memory on a dedicated DB server.
– A shared buffer across all sessions.
– InnoDB uses an LRU (Least Recently Used) page replacement algorithm.
– Data that is being reused stays in memory.
– Data that is not used is phased out eventually.

Change buffer:
In memory, the change buffer is part of the InnoDB buffer pool; on disk, it is part of the system tablespace, so even after a database restart index changes remain buffered. The change buffer is a special data structure that caches changes to secondary index pages when the affected pages are not in the buffer pool.

Redo log buffer:
The buffer for redo logs; it holds data to be written to the redo log. Periodically, data is flushed from the redo log buffer to the redo logs. Flushing data from memory to disk is managed by the innodb_flush_log_at_trx_commit and innodb_flush_log_at_timeout configuration options.

– A large redo log buffer enables a large transaction to run without writing redo logs to disk before the transaction commits.
– Variable:
innodb_log_buffer_size (default 16M)

On Disk:

System tablespace:
Apart from table data storage, InnoDB's functionality requires looking up table metadata and storing and retrieving MVCC info to support ACID compliance and transaction isolation. The system tablespace contains several types of information for InnoDB objects.

  • Contains:
    Table Data Pages
    Table Index Pages
    Data Dictionary
    MVCC Control Data
    Undo Space
    Rollback Segments
    Double Write Buffer (Pages Written in the Background to avoid OS caching)
    Insert Buffer (Changes to Secondary Indexes)
  • Variables:
    innodb_data_file_path = /ibdata/ibdata1:10M:autoextend

By enabling the innodb_file_per_table option (the default), we can store each newly created table (data and index) in a separate tablespace. The advantage of this storage method is less fragmentation within the disk data file.

General tablespace:
A shared tablespace to store data for multiple tables. Introduced in MySQL 5.7.6. The user has to create it using the CREATE TABLESPACE syntax. The TABLESPACE option can be used with CREATE TABLE to create a table in, and with ALTER TABLE to move a table into, a general tablespace.

– Memory advantage over the innodb_file_per_table storage method.
– Supports both Antelope and Barracuda file formats.
– Supports all row formats and associated features.
– Possible to create outside the data directory.

InnoDB data dictionary:
A storage area in the system tablespace made up of internal system tables with metadata information for objects [tables, indexes, columns, etc.]

Doublewrite buffer:
A storage area in the system tablespace where InnoDB writes pages from the InnoDB buffer pool before writing them to their proper location in the data files.
If the mysqld process crashes in the middle of a page write, InnoDB can find a good copy of the page in the doublewrite buffer during crash recovery.

Variable: innodb_doublewrite (enabled by default)

REDO logs:
Used for crash recovery. At mysqld startup, InnoDB performs automatic recovery to correct data written by incomplete transactions. Transactions that did not finish updating the data files before an unexpected mysqld shutdown are replayed automatically at mysqld startup, even before any connections are accepted. This uses LSN (Log Sequence Number) values.
A large amount of data changes cannot be written to disk quickly, so they go to the redo log first and then to the disk.

Why do we need redo for recovery?
Let's take an example: a user changes data in the InnoDB buffer and commits. The change needs to go somewhere before being written to disk, because in the case of a crash the buffer data would be lost. That is why we need redo logs.

– In redo, all changes are recorded with info like row_id, old column value, new column value, session_id, and time.
– On commit, the complete data eventually reaches the data file on disk.
– Variables:
innodb_log_files_in_group = [# of redo file groups]
innodb_log_file_size = [size of each redo file]

UNDO tablespace and logs:
The UNDO tablespace contains one or more undo log files.
UNDO manages consistent reads by keeping modified uncommitted data for active transactions [MVCC]; unmodified data is retrieved from this storage area. Undo logs are also called rollback segments.
By default, UNDO logs are part of the system tablespace; MySQL also allows storing undo logs in separate UNDO tablespace(s) [introduced in MySQL 5.6]. This needs to be configured before initializing the mysqld server.

– When we configure separate undo tablespaces, the undo logs in the system tablespace become inactive.
– Needs to be configured before initializing the mysqld server and cannot be changed after that.
– We can truncate undo logs, but cannot drop them.
– Variables:
innodb_undo_tablespaces: # of undo tablespaces, default 0
innodb_undo_directory: location for undo tablespaces; the default is the data_dir, with 10MB initial size.
innodb_undo_logs: # of undo logs; the default and maximum value is 128.

Temporary tablespace:
Storage to keep and retrieve modified uncommitted data for temporary tables and related objects. Introduced in MySQL 5.7.2 and used to roll back temp table changes while the server is running.

– Undo logs for temporary tables reside in the temp tablespace.
– The default tablespace file ibtmp1 is recreated on server startup.
– Not used for crash recovery.
– Advantage: performance gain by avoiding redo logging I/O for temp tables and related objects.
– Variable:
innodb_temp_data_file_path = ibtmp1:12M:autoextend (default)

And All SET !!