Understand Query Processing Basics
Before tuning a slow query, you should first understand how your database processes the query. Here are the steps:
- Each client connection gets its own thread within the server process (MySQL). The connection’s queries execute within that single thread, which in turn resides on one core or CPU. The database server caches threads, so they don’t need to be created and destroyed for each new connection (i.e. it has its own thread pool).
- Look up the query cache to see whether the result set for your query is already available. MySQL does this by hashing the query text and using the hash to check for the result in the cache (i.e. a map of hashed query to result set). MySQL hashes the exact query text it receives, so the cache is sensitive to the most trivial variations, such as case and whitespace. Best practice: have your application generate queries in a consistent form. If your query is not found in the query cache, continue to the next step. Note that once a table is updated, every cached query referencing that table is invalidated.
- Parse the query to create an internal structure (the parse tree).
- Analyze the query (check names, types, and privileges).
- Optimize the query – create the query plan that is believed to be the most efficient way to execute the query. The effort centers around index selection and table join order. Use EXPLAIN to look into the query plan the database chooses for your query.
- Execute the plan and send the result set back to the client.
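The query-cache step above can be sketched in a few lines. This is a toy model, not MySQL's implementation: results are keyed by a hash of the exact query text, which is why trivially different spellings miss the cache. The table and query names are made up for illustration.

```python
import hashlib

query_cache = {}

def cache_key(sql):
    # MySQL hashes the raw query text as received, with no normalization
    return hashlib.md5(sql.encode()).hexdigest()

def execute(sql, run_query):
    key = cache_key(sql)
    if key in query_cache:
        return query_cache[key], "hit"
    result = run_query(sql)          # fall through to parse/optimize/execute
    query_cache[key] = result
    return result, "miss"

run = lambda sql: [("row1",)]        # stand-in for the real executor

print(execute("SELECT * FROM users", run)[1])  # miss
print(execute("SELECT * FROM users", run)[1])  # hit
print(execute("select * from users", run)[1])  # miss: case differs
```

This is also why "have your application generate the queries" matters: generated queries are byte-for-byte identical, so they actually hit the cache.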
Approaches that can speed up your queries
- Index tables to allow the database server to look up rows more quickly.
- Write effective SQL that takes advantage of those indexes to the fullest extent, and use the EXPLAIN statement to check whether the RDBMS really is doing so.
- Tune the hardware
Concept of Indexing
The article from Paul DuBois gives a great summary of the benefits and costs of indexing and when we should use it. If you don’t want to read all six pages of his article (though I really suggest you do), here is my summary.
Benefits and Costs of Indexing
- The column you index is kept in sorted order, and you can combine a few columns together (order is important) into a single index (i.e. a multi-column index).
- An indexed column is sorted. To locate records whose last name equals “HON”, you look them up via the lastname index and pull consecutive entries until the value no longer equals “HON”; the rest can be ignored. One efficiency gained by using the index, therefore, is that we can tell where the matching rows end and skip the rest. Another comes from positioning algorithms that find the first matching entry without a linear scan from the start of the index (for example, a binary search is much quicker than a scan). Normally, a B-tree index is used by default.
- If you search your table in different ways, you can have more than one index. Remember that the RDBMS typically picks at most one index per table each time you query it; you can get around this limitation with a composite index.
- Indexes may be stored separately from the data, but MySQL’s InnoDB stores them together.
- Indexes are always sorted, but the data on disk is not. Using an index means accessing the rows in index-sorted order (i.e. random seeks) rather than in the order they reside on disk (i.e. sequential access). The end result is more time spent moving around the disk and less time reading data. You can draw two conclusions from this knowledge:
- If a table is going to remain very small, you may want to leave off the indexes.
- Always use production data (or data of production size) for your tests, since the optimizer’s choices depend on data volume.
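The two lookup efficiencies described above (binary search to the first match, then a short consecutive scan) can be shown with Python's `bisect` module on a sorted list standing in for an index; the names are made up for illustration.

```python
import bisect

# A sorted lastname "index"
index = sorted(["CHAN", "HON", "HON", "HON", "LEE", "WONG"])

def index_range_lookup(keys, value):
    lo = bisect.bisect_left(keys, value)   # first matching entry, found in O(log n)
    hi = bisect.bisect_right(keys, value)  # one past the last match
    return keys[lo:hi]                     # consecutive entries; everything after is skipped

print(index_range_lookup(index, "HON"))  # ['HON', 'HON', 'HON']
```

A full scan would touch all six entries; the index lookup touches only the matching run plus O(log n) probes.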
Choose columns to index
- Index columns that you use for searching, sorting, or grouping, not columns you only display as output. In other words, the best candidates for indexing are columns that appear in your WHERE clause, columns named in join clauses, or columns that appear in ORDER BY or GROUP BY clauses. If the RDBMS can optimize a query using joined columns, it cuts down the potential table-row combinations quite a bit by eliminating full table scans.
- Consider column cardinality. The cardinality of a column is the number of distinct values it contains. Indexes work best for columns that have a high cardinality relative to the number of rows in the table (that is, columns with many unique values and few duplicates). The query optimizer generally skips an index in favor of a full table scan if it determines that a value occurs in a large percentage of a table’s rows (the conventional wisdom for this threshold used to be 30%). In that case, the index gives you no benefit, only maintenance cost!
- Favor columns with smaller data types for indexing: comparisons are faster, less disk I/O is required, and more index blocks can be cached in memory. For the InnoDB and BDB storage engines, which use clustered indexes, it’s especially beneficial to keep the primary key short. A clustered index is one where the data rows are stored together with (that is, clustered with) the primary key values. Other indexes are secondary indexes; these store the primary key value with the secondary index values. A lookup in a secondary index yields a primary key value, which is then used to locate the data row. The implication is that primary key values are duplicated into each secondary index, so longer primary keys mean extra storage in every secondary index as well.
- Take advantage of leftmost prefixes. When you create an n-column composite index, you actually create n indexes that MySQL can use. A composite index serves as several indexes because any leftmost set of columns in the index can be used to match rows. Such a set is called a “leftmost prefix.” Suppose that you have a table with a composite index on columns named state, city, and zip. Rows in the index are sorted in state/city/zip order, so they’re automatically sorted in state/city order and in state order as well. This means that MySQL can take advantage of the index even if you specify only state values in a query, or only state and city values. However, it cannot use the index for searches that don’t involve a leftmost prefix: for example, if you search by city or by zip, the index isn’t used.
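The leftmost-prefix rule is easy to see in an explain plan. The sketch below uses SQLite so it is self-contained and runnable (MySQL's EXPLAIN shows the same effect on its B-tree composite indexes); the table, column, and index names are made up.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE addr (state TEXT, city TEXT, zip TEXT, street TEXT)")
con.execute("CREATE INDEX idx_scz ON addr (state, city, zip)")

def plan(where):
    rows = con.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM addr WHERE " + where).fetchall()
    return rows[0][3]  # the human-readable plan detail

print(plan("state = 'CA'"))                  # SEARCH using idx_scz: leftmost prefix
print(plan("state = 'CA' AND city = 'SF'"))  # SEARCH using idx_scz as well
print(plan("city = 'SF'"))                   # SCAN: city alone is not a leftmost prefix
```

The first two predicates start from the index's leftmost column, so the optimizer can seek; the last one cannot, so it falls back to a scan.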
Concept of Concurrency
To provide better concurrency, below are the rules of thumb that I use:
- Locking level: Locking at a finer level allows better concurrency (ie. row level is better than page level, and page level is better than table level), because more clients can be using a table at the same time if they use different parts of it.
- Table locking does have an advantage over finer levels of locking in terms of deadlock prevention.
- MVCC is a newer mechanism for achieving better concurrency, because reads don’t block writes and vice versa.
Schema design and structuring
This section will cover the techniques that you can leverage to design your schema with performance in mind.
- Normalize first, then denormalize later if needed. Don’t over-normalize; use third normal form.
- Use the right data type; don’t waste space unnecessarily (in general).
- Use INT UNSIGNED for IPv4 addresses.
- Pay attention to the TEXT, BLOB and BIGINT data types. Consider a separate table for TEXT, the filesystem for BLOBs, and a smaller numeric type instead of BIGINT where the range allows.
- Vertical partitioning – split table with many columns into multiple tables (ex. static vs dynamic columns, frequent vs infrequent access pattern)
- Horizontal partitioning – split table with many rows into multiple tables (ex. user-based)
- MySQL 5.1 partitioning has issues (PostgreSQL and Oracle are more mature in this area).
- Understand the benefits and disadvantages of the different storage engines (MySQL).
- Leverage query cache effectively
- Each time any record in a table is updated, all cached queries referencing that table are invalidated in the query cache. So if you split the dynamic fields out into another table, the query cache thrashes less often.
- Index smartly
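As an example of "use the right data type" above, an IPv4 address fits exactly in 4 bytes (MySQL's INT UNSIGNED, via its INET_ATON/INET_NTOA functions) instead of a string of up to 15 characters. A Python sketch of the same conversion:

```python
import socket
import struct

def ip_to_int(ip):
    # "!I" = one big-endian unsigned 32-bit int, same value INET_ATON returns
    return struct.unpack("!I", socket.inet_aton(ip))[0]

def int_to_ip(n):
    return socket.inet_ntoa(struct.pack("!I", n))

n = ip_to_int("192.168.0.1")
print(n)             # 3232235521
print(int_to_ip(n))  # 192.168.0.1
```

Besides the space saving, the integer form makes range queries (e.g. all addresses in a subnet) a simple BETWEEN over an indexable column.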
Write effective SQL
- Use the ANSI SQL style that explicitly declares JOIN conditions using the ON clause. This syntax can also express outer joins.
- Use the EXPLAIN plan to find out how the RDBMS chose the execution plan for your SQL, and tune it accordingly.
- Don’t wrap an indexed field in a function call in the WHERE clause; doing so prevents the index from being used.
- In the explain plan, “Extra: Using index” means the query can be satisfied from the index alone (a covering index), with no need to read the table rows. Combined with a full index scan, it shows up when scanning the index beats seeking and the index alone is enough to answer the query.
- Why full table scan?
- No WHERE condition
- No usable index on the fields in the WHERE clause
- Poor selectivity/ cardinality on an indexed field
- Too many records match the WHERE condition (a scan beats repeated seeks).
- MySQL older than 5.0 with OR in the WHERE clause (no index-merge support)
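The "don't use an indexed field in a function call" rule above is visible in an explain plan. The sketch uses SQLite for a self-contained demo (MySQL behaves the same way); the table, column, and index names are made up, and the PRAGMA just makes SQLite's LIKE prefix optimization apply to this example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, email TEXT)")
con.execute("CREATE INDEX idx_name ON users (name)")
con.execute("PRAGMA case_sensitive_like = ON")  # lets LIKE 'x%' use the index in SQLite

def plan(where):
    rows = con.execute(
        "EXPLAIN QUERY PLAN SELECT email FROM users WHERE " + where).fetchall()
    return rows[0][3]

# Wrapping the column in a function hides it from the optimizer: full scan.
print(plan("substr(name, 1, 3) = 'HON'"))
# Rewriting the predicate so the bare column is compared: index range search.
print(plan("name LIKE 'HON%'"))
```

The general fix is always the same: move the computation to the constant side of the comparison so the indexed column stands alone.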
Put it all together
- Think in sets instead of iterating row by row.
Example: for each customer, fetch the last payment record in the Payment_History table. You can solve this with a correlated subquery. That is:
SELECT p.* FROM Payment_History p
WHERE p.payment_date = (SELECT MAX(payment_date) FROM Payment_History WHERE customer_id = p.customer_id)
The above query can be optimized a bit with a compound index on (customer_id, payment_date). However, if you want the ultimate performance, you should not use a correlated subquery, because it executes n times, where n is the number of rows in Payment_History. To avoid that, you can use a derived table. The query becomes:
SELECT p.* FROM (SELECT customer_id, MAX(payment_date) AS last_order FROM Payment_History GROUP BY customer_id) AS last_orders
INNER JOIN Payment_History p
ON p.customer_id = last_orders.customer_id
AND p.payment_date = last_orders.last_order
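Both forms return the same rows, which is easy to verify on a tiny made-up Payment_History table. The sketch runs both queries in SQLite (used here only so the example is self-contained and runnable):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Payment_History (customer_id INT, payment_date TEXT, amount INT)")
con.executemany("INSERT INTO Payment_History VALUES (?, ?, ?)", [
    (1, "2023-01-05", 10), (1, "2023-03-09", 20),
    (2, "2023-02-01", 30), (2, "2023-02-28", 40),
])

correlated = """
SELECT p.* FROM Payment_History p
WHERE p.payment_date = (SELECT MAX(payment_date) FROM Payment_History
                        WHERE customer_id = p.customer_id)"""

derived = """
SELECT p.* FROM (SELECT customer_id, MAX(payment_date) AS last_order
                 FROM Payment_History GROUP BY customer_id) AS last_orders
INNER JOIN Payment_History p
        ON p.customer_id = last_orders.customer_id
       AND p.payment_date = last_orders.last_order"""

a = sorted(con.execute(correlated).fetchall())
b = sorted(con.execute(derived).fetchall())
print(a == b)  # True: both return each customer's last payment
print(a)
```

The derived table computes each customer's last payment date once, in a single grouped pass, instead of re-running the subquery per outer row.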
Optimize N:M joins
Suppose you write an application in which a project can be tagged multiple times. You might model the schema with a Project table, a Tag table, and a Tag2Project join table. To grab all projects tagged with either “mysql” OR “php”, you might write a SELECT that joins Project to Tag2Project on project_id, then joins the result to the Tag table, filtered by “mysql” or “php”, on tag_id. However, the optimizer may not join in the order you specified: it may filter the Tag table first, join the result to Tag2Project, and only then join to Project. You can force the join sequence by using a derived table.
On the other hand, if you want to grab all projects tagged with both “mysql” AND “php”, you can solve it with a derived table over Tag2Project and Tag that groups by project_id with HAVING COUNT(*) = 2, with Tag filtered by an IN clause.
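The AND-tags pattern above can be sketched concretely. The schema and data are made up for illustration, and SQLite is used only so the example is self-contained and runnable:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Project (project_id INT, name TEXT);
CREATE TABLE Tag (tag_id INT, name TEXT);
CREATE TABLE Tag2Project (tag_id INT, project_id INT);
INSERT INTO Project VALUES (1, 'blog'), (2, 'shop'), (3, 'wiki');
INSERT INTO Tag VALUES (10, 'mysql'), (20, 'php'), (30, 'ajax');
INSERT INTO Tag2Project VALUES (10, 1), (20, 1), (10, 2), (30, 3);
""")

# Group the join table by project and demand one hit per requested tag.
sql = """
SELECT pr.name
FROM (SELECT tp.project_id
      FROM Tag2Project tp
      JOIN Tag t ON t.tag_id = tp.tag_id
      WHERE t.name IN ('mysql', 'php')
      GROUP BY tp.project_id
      HAVING COUNT(*) = 2) AS both_tags
JOIN Project pr ON pr.project_id = both_tags.project_id"""

print(con.execute(sql).fetchall())  # [('blog',)] - only 'blog' has both tags
```

The HAVING COUNT(*) = 2 must match the number of tags in the IN list; this assumes a tag is attached to a project at most once (otherwise use COUNT(DISTINCT t.tag_id)).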
Before you start tuning your server for MySQL performance, you first need to obtain some system information. A few Linux commands are useful for that.
top command
How do you interpret the output from the top command? Here you go.
- The first line tells you that the system has been up 211 days, 9 hours and 38 minutes. There are 2 users on the system (from the process table, they are the mysql and ec users). The load averages over the last 1, 5 and 15 minutes are 0.01, 0.11 and 0.21 respectively.
- The 2nd line shows that there are 82 processes on the system; only 1 of them is active (in the process table, S = R means running) and 81 of them are sleeping (S = S means sleeping).
- The 3rd–5th lines are very useful: they give you the current status of the system. CPU is 99% idle; if it is not, you want to find out who is using it. The process table is sorted by %CPU (if you have a dual-core processor, the percentage here can reach 200). In terms of memory, the system has about 4 GB in total, and as you can see, 3.5 GB is in use. Our mysql instance is using 1.2 GB of memory, about 32% of the used memory. Another interesting thing to examine is swap: 2 GB of swap is allocated on this system, and the system barely uses it. That is good, because swapping data between memory and disk takes time.
- Troubleshooting: when you see the mysql process taking a high %CPU (CPU-bound), some queries may be costing intensive CPU cycles; you can look into them via the MySQL administrative commands. When you see a high % of I/O wait (I/O-bound), you may have inefficient queries that cause MySQL to read too many rows to locate the data you are interested in. At the system level, you can reduce the number of I/Os through better use of memory/buffers, or reduce seek time with faster disks and less random data placement.
Other linux commands
- more /proc/version (check linux os version)
- more /proc/cpuinfo (for a dual-core processor at 2.80 GHz, expect: cpu cores = 2, one physical processor, cpu MHz ≈ 2800)
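A few more one-liners, in the same spirit, for sizing up a Linux box before tuning. The paths are standard on Linux; the counts and figures will of course differ per machine:

```shell
grep -c '^processor' /proc/cpuinfo        # number of logical CPUs
grep 'model name' /proc/cpuinfo | uniq    # CPU model and clock speed
free -m                                   # memory and swap in MB, like top's lines 4-5
uptime                                    # load averages, like top's first line
```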
Hardware Tuning Tips
There are several key areas that we normally look at when we try to tune our database in hardware perspective.
- Add more memory if you can – reduce # of IOs (primary bottleneck).
- Achieve parallelism - multi-processor for processing and parallel write to different disk devices (eg. log and database store in different disk device). MySQL is multi-threaded.
- Faster disk to improve IO latency
- RAID can give you some advantages of parallelism
In MySQL, most of the memory the server allocates is used for various internal buffers and caches. These buffers fall into two major groups: global buffers and per-connection buffers. The two most important global buffers are the MyISAM key buffer (key_buffer_size) and InnoDB’s buffer pool (innodb_buffer_pool_size).
- For MyISAM, remember that MySQL caches index blocks but not row data. So the less often MySQL needs to hit the disk to scan a table’s index, the faster queries will be. In practice, the OS may help you out by keeping some row data in its own filesystem cache. Even so, if possible, consider making the key buffer large enough to hold the indexes of your most actively used tables; adding up the sizes of their .MYI files gives you a good idea of how large to set the buffer.
- For InnoDB, index and row data are cached together in the buffer pool, thanks to clustered indexes. So if the table is not large, you can achieve the best performance by keeping the whole table in cache, since no I/O is needed to read it. If you modify data, however, the changes still need to be flushed to disk, either periodically or per commit.
Allocate enough memory for mysql
min_memory_needed = global_buffers + (thread_buffers * max_connections)
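Plugging hypothetical numbers into the formula above shows how quickly per-thread buffers add up. All sizes here are example values, not recommendations:

```python
MB = 1024 * 1024

global_buffers = 512 * MB   # e.g. key_buffer_size + innodb_buffer_pool_size + ...
thread_buffers = 2 * MB     # e.g. sort_buffer_size + read_buffer_size per connection
max_connections = 400

# Worst case: every connection active and using its full per-thread buffers
min_memory_needed = global_buffers + thread_buffers * max_connections
print(min_memory_needed // MB)  # 1312 MB
```

Even a modest 2 MB of per-thread buffers turns into 800 MB at 400 connections, which is exactly how the swapping spiral described below gets started.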
When there is a large number of concurrent users, say 300–400, a particularly nasty series of events can happen. When many threads need to allocate additional memory at once, MySQL can end up allocating so much memory that the OS begins swapping, which degrades performance further, which means each query takes longer to complete. With queries running more slowly, the odds of more threads needing memory increase. It is a vicious spiral. The only solution is to restore the balance between the system’s memory and MySQL’s memory needs. That means doing one of the following:
- Add more memory
- Decrease max_connections
- Decrease some of the per-thread buffer sizes
Be proactive, monitor memory use on your server.
Below are some quick tips for MySQL tuning that I have learnt from my colleagues and books. To keep this article short and useful as quick reference, I will just put the common ones here.
- key_buffer_size (for MyISAM – defaults to only 8 MB)
- innodb_buffer_pool_size (for InnoDB – defaults to only 8 MB)
You need the buffer pool to be a bit (say 10%) larger than your data (the total size of your InnoDB tablespaces), because it does not only contain data pages – it also contains adaptive hash indexes, the insert buffer, and locks, which take some space too. – Peter Zaitsev
- Bumping these values up can decrease disk I/O and make your database run a lot smoother.
- Use indexes. Enable the slow query log, look for the queries that show up most often, and learn to run EXPLAIN on them.
- Check Created_tmp_disk_tables in SHOW STATUS. This value usually goes hand in hand with huge queries that aren’t using indexes. If it keeps increasing, it means MySQL has to create on-disk temporary tables to execute queries. Consider increasing tmp_table_size, or putting the temporary files on a tmpfs by setting tmpdir (e.g. tmpdir = /tmp).
- Use the query cache if it can help you. Check query_cache_size, and check how well the cache is performing. You can do this by running SHOW STATUS and then using this formula:
qcache_hit_ratio = qcache_hits / (qcache_hits + qcache_inserts + qcache_not_cached)
- If there are a lot of Qcache_lowmem_prunes, that means your query cache size isn’t big enough.
- If Opened_tables keeps increasing, you probably have table_cache set too small.
- If you have a lot of connections, check Threads_created in SHOW STATUS. If it keeps increasing, you need to raise thread_cache_size in my.cnf. Also make sure max_connections suits your workload; its default in MySQL is fairly low.
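The hit-ratio formula above is easy to apply once you have the counters. The numbers below are made up; read yours from SHOW STATUS LIKE 'Qcache%':

```python
# Sample SHOW STATUS counters (hypothetical values)
qcache_hits = 9000
qcache_inserts = 800
qcache_not_cached = 200

qcache_hit_ratio = qcache_hits / (qcache_hits + qcache_inserts + qcache_not_cached)
print(round(qcache_hit_ratio, 2))  # 0.9
```

A ratio this high means the cache is earning its keep; a low ratio (with many inserts and few hits) suggests the cache is churning and you may be better off shrinking or disabling it.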
Below are some references I used to write this article:
- MySQL Toolkit
- One large table vs many small tables
- The Art of SQL Tuning (Jay Pipes has a great presentation on sql tuning)
- MySQL Performance Blog
- Peteris’s Blog on SQL Tuning
- MySQL Query Optimization