Monday, September 5, 2016

Cursors

Cursor
----------
To handle a result set returned by a SELECT statement inside a stored procedure, you use a cursor. A cursor allows you to iterate over the set of rows returned by a query and process each row accordingly.

4 stages:
------------
DECLARE
 - DECLARE cursor_name CURSOR FOR SELECT_statement;
 - The cursor declaration must come after any variable declarations. If you declare a cursor before the variable declarations, MySQL will issue an error.
 - A cursor must always be associated with a SELECT statement.

OPEN
 - OPEN cursor_name;
 - The OPEN statement initializes the result set for the cursor, therefore, you must call the OPEN statement before fetching rows from the result set.

FETCH
 - FETCH cursor_name INTO variable_list;
 - The FETCH statement retrieves the row the cursor currently points to and advances the cursor to the next row in the result set.
 - You can check whether a row is available before fetching it.

CLOSE
 - CLOSE cursor_name;
 - Call the CLOSE statement to deactivate the cursor and release the memory associated with it.

Note: When working with MySQL cursors, you must also declare a NOT FOUND handler for the situation where the cursor cannot find any more rows. Each time you call the FETCH statement, the cursor attempts to read the next row in the result set; when it reaches the end, it cannot get the data and a NOT FOUND condition is raised. The handler handles this condition.

 - DECLARE CONTINUE HANDLER FOR NOT FOUND SET finished = 1;

Here finished is a variable indicating that the cursor has reached the end of the result set.

Note:
 - The handler declaration must appear after the variable and cursor declarations inside the stored procedure.
 - MySQL cursors can be used in stored procedures, stored functions, and triggers.
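Putting the four stages and the handler together, a minimal sketch (the employees table and its email column are hypothetical):

```sql
DELIMITER $$

CREATE PROCEDURE build_email_list(INOUT email_list TEXT)
BEGIN
    -- Variable declarations come first ...
    DECLARE finished INT DEFAULT 0;
    DECLARE v_email VARCHAR(100) DEFAULT '';

    -- ... then the cursor declaration (stage 1: DECLARE) ...
    DECLARE email_cursor CURSOR FOR SELECT email FROM employees;

    -- ... and the handler declaration last.
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET finished = 1;

    OPEN email_cursor;                    -- stage 2: OPEN

    get_email: LOOP
        FETCH email_cursor INTO v_email;  -- stage 3: FETCH
        IF finished = 1 THEN
            LEAVE get_email;              -- end of result set reached
        END IF;
        SET email_list = CONCAT(v_email, ';', email_list);
    END LOOP get_email;

    CLOSE email_cursor;                   -- stage 4: CLOSE
END$$

DELIMITER ;
```

CALL build_email_list(@emails); followed by SELECT @emails; would then show the concatenated list.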

References:
http://www.mysqltutorial.org/mysql-cursor/

Procedure Vs Function

- CREATE PROCEDURE and CREATE FUNCTION require the CREATE ROUTINE privilege.

Procedure
--------------
- A procedure does not return a value. Instead, it is invoked with a CALL statement to perform an operation such as modifying a table or processing retrieved records.
- Cannot invoke a procedure in an expression.
- Procedure parameters can be defined as input-only, output-only, or both. This means that a procedure can pass values back to the caller by using output parameters. These values can be accessed in statements that follow the CALL statement. Functions have only input parameters. As a result, although both procedures and functions can have parameters, procedure parameter declaration differs from that for functions.
- To invoke a stored procedure, use the CALL statement.
- Specifying a parameter as IN, OUT, or INOUT is valid only for a PROCEDURE.
- A procedure is invoked using a CALL statement, and can only pass back values using output variables.

Function
-----------
- A function is invoked within an expression and returns a single value directly to the caller to be used in the expression.
- Cannot invoke a function with a CALL statement.
- Functions return value, so there must be a RETURNS clause in a function definition to indicate the data type of the return value. Also, there must be at least one RETURN statement within the function body to return a value to the caller. RETURNS and RETURN do not appear in procedure definitions.
- To invoke a stored function, refer to it in an expression. The function returns a value during expression evaluation.
- For a FUNCTION, parameters are always regarded as IN parameters.
- A function can be called from inside a statement just like any other function (that is, by invoking the function's name), and can return a scalar value.
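The differences can be seen side by side in a minimal sketch (the names and the 10% tax rule are made up for illustration):

```sql
DELIMITER $$

-- Procedure: no RETURNS clause; passes its result back via an OUT parameter.
CREATE PROCEDURE get_tax(IN p_salary DECIMAL(10,2), OUT p_tax DECIMAL(10,2))
BEGIN
    SET p_tax = p_salary * 0.10;
END$$

-- Function: RETURNS clause plus a RETURN statement; parameters are always IN.
CREATE FUNCTION calc_tax(p_salary DECIMAL(10,2)) RETURNS DECIMAL(10,2)
DETERMINISTIC
BEGIN
    RETURN p_salary * 0.10;
END$$

DELIMITER ;

-- Invocation also differs:
CALL get_tax(50000, @tax);   -- procedure: CALL statement + OUT variable
SELECT @tax;
SELECT calc_tax(50000);      -- function: used inside an expression
```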

Note:  log_bin_trust_function_creators affects how MySQL enforces restrictions on stored function and trigger creation.
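When binary logging is enabled, creating a function that is not declared DETERMINISTIC (or NO SQL / READS SQL DATA) fails with error 1418 unless the creator has the SUPER privilege. One common way to relax this check (run as a privileged user):

```sql
SET GLOBAL log_bin_trust_function_creators = 1;
```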

References:
http://stackoverflow.com/questions/3744209/mysql-stored-procedure-vs-function-which-would-i-use-when
http://dev.mysql.com/doc/refman/5.7/en/create-procedure.html

Friday, September 2, 2016

Optimize & Analyze table

optimize table
-------------------
 - Reorganizes the physical storage of table data and associated index data, to reduce storage space and improve I/O efficiency when accessing the table. The exact changes made to each table depend on the storage engine used by that table. This statement does not work with views.

 - Usage: After doing substantial insert, update, or delete operations on an InnoDB table that has its own .ibd file because it was created with the innodb_file_per_table option enabled. The table and indexes are reorganized, and disk space can be reclaimed for use by the operating system.

 - OPTIMIZE TABLE is also supported for partitioned tables.
 - OPTIMIZE TABLE works for InnoDB, MyISAM, and ARCHIVE tables.

 - For InnoDB tables, OPTIMIZE TABLE is mapped to ALTER TABLE ... FORCE, which rebuilds the table to update index statistics and free unused space in the clustered index. This is displayed in the output of OPTIMIZE TABLE when you run it on an InnoDB table:

 mysql> OPTIMIZE TABLE foo;
+----------+----------+----------+-------------------------------------------------------------------+
| Table    | Op       | Msg_type | Msg_text                                                          |
+----------+----------+----------+-------------------------------------------------------------------+
| test.foo | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| test.foo | optimize | status   | OK                                                                |
+----------+----------+----------+-------------------------------------------------------------------+

 - Prior to MySQL 5.7.4, OPTIMIZE TABLE does not use online DDL (ALGORITHM=INPLACE). Consequently, concurrent DML (INSERT, UPDATE, DELETE) is not permitted on the table while OPTIMIZE TABLE is running, i.e. the table is locked. Also, secondary indexes are not created as efficiently, because keys are inserted in the order they appeared in the primary key.

 - As of 5.7.4, OPTIMIZE TABLE uses online DDL (ALGORITHM=INPLACE) for both regular and partitioned InnoDB tables. The table rebuild, triggered by OPTIMIZE TABLE and performed under the cover by ALTER TABLE ... FORCE, is now performed using online DDL (ALGORITHM=INPLACE) and only locks the table for a brief interval, which reduces downtime for concurrent DML operations.
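A typical workflow, assuming a hypothetical log table created with innodb_file_per_table enabled:

```sql
-- Bulk delete leaves free pages inside the table's .ibd file ...
DELETE FROM app_log WHERE created_at < '2016-01-01';

-- ... which OPTIMIZE TABLE reclaims by rebuilding the table
-- (internally mapped to ALTER TABLE app_log FORCE for InnoDB).
OPTIMIZE TABLE app_log;
```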


Analyze table
------------------
 - ANALYZE TABLE analyzes and stores the key distribution for a table. During the analysis, the table is locked with a read lock for InnoDB and MyISAM. This statement works with InnoDB, NDB, and MyISAM tables.

 - It gathers current statistics for the table and updates the stored key distribution accordingly (visible via SHOW INDEX FROM table_name).
 mysql> analyze table foo;
+----------+---------+----------+----------+
| Table    | Op      | Msg_type | Msg_text |
+----------+---------+----------+----------+
| test.foo | analyze | status   | OK       |
+----------+---------+----------+----------+
1 row in set (0.16 sec)


Note: Both operations require the SELECT and INSERT privileges for the table.
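For example, granting a maintenance user just enough for these two statements (user and schema names are illustrative):

```sql
GRANT SELECT, INSERT ON practice.* TO 'maint'@'localhost';
```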

References:
http://dev.mysql.com/doc/refman/5.7/en/optimize-table.html
http://dev.mysql.com/doc/refman/5.7/en/analyze-table.html

OS File types w.r.t DB objects

OS File types w.r.t DB objects
=========================

On creation of every table (engine=InnoDB):
  .frm (contains the table definition)
  .ibd (contains table data)

On creation of a procedure/function: no separate OS file is generated with that name.

On creation of a view:
  a .frm file is created.

On creation of trigger:
   .TRN (one file per Trigger)
   .TRG (per table that has triggers)

Note:
 - On INSERT/UPDATE/DELETE, only the .ibd file changes.
 - On ALTER TABLE (adding/dropping a column or index), both the .ibd and .frm files change.
 - Procedure/function definitions are stored in the mysql.proc table (the proc.MYI and proc.MYD files).
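Since routine definitions live in the mysql.proc table rather than in per-object files, they can be inspected with an ordinary query (schema name is illustrative; mysql.proc exists in MySQL 5.6/5.7):

```sql
SELECT name, type, created
FROM mysql.proc
WHERE db = 'practice';
```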

Thursday, September 1, 2016

Load .csv data to table

Load .csv data to table:
===================

Need to load data from .csv file to table 'loadTest'

mysql> desc loadTest;
+-----------+-------------+------+-----+---------+-------+
| Field     | Type        | Null | Key | Default | Extra |
+-----------+-------------+------+-----+---------+-------+
| id        | int(11)     | NO   |     | NULL    |       |
| name      | varchar(20) | NO   |     | NULL    |       |
| sal       | int(11)     | NO   |     | NULL    |       |
| createdBy | varchar(20) | NO   |     | NULL    |       |
+-----------+-------------+------+-----+---------+-------+
4 rows in set (0.00 sec)

Note: 2 INT columns, 2 VARCHAR columns

Content in File:
mysql> \! cat /tmp/loadTest.csv
id,name,sal
1,gaurav,200
2,gagan,300

Note: The table contains 4 columns, whereas the .csv file contains 3; the 4th value must be supplied while loading.

Load Content:
mysql -utest -ptest -Dpractice <<EOF >> /tmp/loadTest.log
SET max_error_count=10000;
LOAD DATA LOCAL INFILE '/tmp/loadTest.csv' IGNORE INTO TABLE loadTest character set utf8
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n' IGNORE 1 LINES
(@id,@name,@sal)
  SET id=@id,
      name = @name,
      sal = @sal,
      createdBy = 'Admin';
  COMMIT;
  SHOW WARNINGS;
  SELECT @@session.warning_count\G
EOF

Check LogFile:
mysql> \! cat /tmp/loadTest.log
*************************** 1. row ***************************
@@session.warning_count: 0

Special Keywords:
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES              

Note:
In case we don't want to replicate data, please use:
SET SQL_LOG_BIN=0;

Generate .csv file from table data

Generate .csv file from table data
===========================

2 ways:
A> From MySQL prompt:
1> Logon to MySQL
      mysql -utest -ptest -Dpractice
2> Unload data: 
      mysql> select locationId,country,latitude,longitude from cityLoc limit 10 INTO OUTFILE '/tmp/cityLocs1.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';
Query OK, 10 rows affected (0.00 sec)

Note: The above method will generate an error, if the file already exists.

B> From shell script:
1> export del=','
2>
mysql -utest -ptest -Dpractice --skip-column-names -e"select concat(locationId,'$del',IFNULL(country,''),'$del',latitude,'$del',longitude) from cityLoc limit 10 ;" > /tmp/cityLocs2.csv 

Note: For both cases, a column-header line needs to be added at the top, e.g. with GNU sed:
sed -i '1i locationId,country,latitude,longitude' /tmp/cityLocs1.csv

Modification: Let's say we need to add "'" (single quotes) around varchar columns:
mysql> select locationId,concat('\'',country,'\''),latitude,longitude from cityLoc limit 10 INTO OUTFILE '/tmp/cityLocs2.csv' FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

Special keywords:
INTO OUTFILE '/tmp/cityLocs.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';

mysqldump

mysqldump - Take data dump

export MYSQL_HOME=/u01/app/mysql/product/5.6.16/bin
export MYSQL_CONF=/u01/app/mysql/practice/etc
export CONF_FILE=practice.cnf
export MYSQL_CONF_FILE=$MYSQL_CONF/$CONF_FILE
export MYSQL_CONNECT=$MYSQL_HOME/mysql
export MYSQL_DUMP=$MYSQL_HOME/mysqldump
export DB_USER=test
export DB_PASSWORD=test
export DB_NAME=practice
export TABLES="t1 t2"

### Comments ###
### opt -> --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset ###
### compact -> --skip-add-drop-table, --skip-add-locks, --skip-comments, --skip-disable-keys, and --skip-set-charset ###

1) Particular table structure
$MYSQL_DUMP --defaults-file=$MYSQL_CONF_FILE -u$DB_USER -p$DB_PASSWORD $DB_NAME $TABLES --skip-opt --compact --skip-triggers --no-data --create-options --add-drop-table --log-error=/tmp/only_table_structure.err > /tmp/only_table_structure.sql

2) Particular table data
$MYSQL_DUMP --defaults-file=$MYSQL_CONF_FILE -u$DB_USER -p$DB_PASSWORD $DB_NAME $TABLES --skip-opt --compact --skip-triggers --extended-insert --no-create-info --log-error=/tmp/only_table_data.err > /tmp/only_table_data.sql

3) Particular table structure and data
$MYSQL_DUMP --defaults-file=$MYSQL_CONF_FILE -u$DB_USER -p$DB_PASSWORD $DB_NAME $TABLES --skip-opt --compact --skip-triggers --create-options --add-drop-table --extended-insert --log-error=/tmp/table_str_data.err > /tmp/only_table_str_data.sql

4) All tables structure (no data) and NO procs/functions/triggers
$MYSQL_DUMP --defaults-file=$MYSQL_CONF_FILE -u$DB_USER -p$DB_PASSWORD $DB_NAME --skip-opt --compact --skip-triggers --no-data --create-options --add-drop-table --log-error=/tmp/all_table_structure.err > /tmp/all_tables_structure.sql

5) All tables data (no create table) and NO procs/functions/triggers
$MYSQL_DUMP --defaults-file=$MYSQL_CONF_FILE -u$DB_USER -p$DB_PASSWORD $DB_NAME --skip-opt --compact --skip-triggers --extended-insert --no-create-info --log-error=/tmp/all_table_data.err > /tmp/all_tables_data.sql

6) Whole Schema structure including procs/functions/triggers but NO data
$MYSQL_DUMP --defaults-file=$MYSQL_CONF_FILE -u$DB_USER -p$DB_PASSWORD $DB_NAME --skip-opt --compact --no-data --create-options --add-drop-table --routines --events --log-error=/tmp/whole_schema_structure.err > /tmp/whole_schema_structure.sql

7) Whole Schema backup including data.
$MYSQL_DUMP --defaults-file=$MYSQL_CONF_FILE -u$DB_USER -p$DB_PASSWORD $DB_NAME --skip-opt --compact --create-options --add-drop-table --extended-insert --routines --events --log-error=/tmp/schema_str_data.err > /tmp/schema_str_data.sql

Note: Add "COMMIT;" as the last line of the dump file:
echo "COMMIT;" >> /tmp/<file_name>.sql

To Import data from mysqldump:
$MYSQL_CONNECT --defaults-file=$MYSQL_CONF_FILE -u$DB_USER -p$DB_PASSWORD -D$DB_NAME --show-warnings
mysql> set FOREIGN_KEY_CHECKS=0;
mysql> SET sql_warnings=1;
mysql> source /tmp/<file_name>.sql
mysql> SET sql_warnings=0;
mysql> set FOREIGN_KEY_CHECKS=1;
mysql> exit