One experience sticks with me: I had just joined a company and was handed an Excel file in which every row was a slow-query SQL statement. The SQL was so long and complex that I had to work through each statement to untangle the business logic behind it.
That was purely SQL optimization; when the whole system is slow, I/O optimization should follow the ideas below.
Optimization Ideas
Everyone understands performance optimization differently; for database performance optimization, I take it to mean a shorter response time.
Hardware level
- CPU: use a faster CPU, or more CPUs; the sizes of the L1 and L2 caches matter in particular
- Memory read/write speed
- Hard disk read/write speed
- Network IO speed
Software level
- Operating system: Unix is faster, more secure, and more stable.
- File system: ext4 performs well for sequential reads and writes and acceptably for random access; NTFS is prone to fragmentation when data is modified frequently.
- Hard disk scheduling algorithm
- Virtual Memory Settings
The elements above lean toward hardware optimization; they bear directly on low-level operations and on cost.
MySQL server configuration optimization
Configuration item | Scope | Function | Suggestion
---|---|---|---
query_cache_size | Global | Query cache. MySQL allocates this space at startup; changing it later reallocates and reinitializes the entire cache | Not recommended
sort_buffer_size | Settable per thread | Buffer allocated to each thread that executes SQL requiring a sort. Too small causes frequent disk I/O; too large wastes memory | Use the default
join_buffer_size | Settable per thread | When a query joins multiple tables, a join buffer can be allocated for each join, so a single query may hold several join buffers | Use the default
table_cache_size | Global | Caches table information | Use the default
thread_cache_size | Global | Thread pool size | Use the default
read_buffer_size | Settable per thread | Buffer allocated to each thread for sequential table scans. Too small causes frequent disk I/O; too large wastes memory | Use the default; try to avoid full table scans
read_rnd_buffer_size | Settable per thread | Buffer allocated to each thread for random reads; same trade-offs as above | Same as above
innodb_log_file_size | Global | Size of each redo log file. Too small causes frequent disk flushing and low performance, and a failure may leave the transaction log records incomplete; too large makes crash recovery slow and occupies more disk | Tune to system load, performance, and business requirements
innodb_log_files_in_group | Global | Number of redo log files in the log group | The default is sufficient
Innodb_os_log_written | Status variable (see SHOW ENGINE INNODB STATUS) | How much data InnoDB has written to the log files | Monitoring only
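The non-default choices above can be collected in my.cnf; a minimal sketch, with illustrative values only (tune them to your own hardware and workload):

```ini
[mysqld]
# Query cache disabled, as suggested above
query_cache_size = 0
query_cache_type = 0

# Per-thread buffers: keep them modest, since the memory
# cost is multiplied by the number of connections
sort_buffer_size     = 256K
join_buffer_size     = 256K
read_buffer_size     = 128K
read_rnd_buffer_size = 256K

# Redo log: larger files mean less frequent flushing
# but slower crash recovery
innodb_log_file_size      = 512M
innodb_log_files_in_group = 2
```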
InnoDB buffer pool
SHOW ENGINE INNODB STATUS (or the Innodb_buffer_pool_pages_dirty status variable) shows the number of dirty pages.
If the buffer pool is too large, more data sits in memory and queries are faster, but flushing it all to disk is slow, and warm-up and shutdown take longer.
If the buffer pool is too small, flushing finishes faster, but less data fits in memory and queries slow down.
You can monitor the Innodb_buffer_pool_pages_dirty status variable, or use innotop to watch SHOW ENGINE INNODB STATUS, to observe how many dirty pages are being flushed. A smaller value of the innodb_max_dirty_pages_pct variable does not guarantee that InnoDB will keep fewer dirty pages in the buffer pool; it is just a threshold that controls whether InnoDB may be "lazy". By default, InnoDB flushes dirty pages with a background thread and merges writes so they go out to disk more efficiently and sequentially. This behavior is called "lazy" because InnoDB delays flushing a dirty page until some other data needs its space. When the percentage of dirty pages exceeds the threshold, InnoDB flushes them quickly to bring the count back down. InnoDB also enters "furious flushing" mode when the transaction log runs short of free space, which is one reason large logs can improve performance.
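A quick way to watch and adjust this from the MySQL client (the 75 below is only an illustrative threshold, not a recommendation):

```sql
-- Current number of dirty pages in the buffer pool
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';

-- Threshold above which InnoDB flushes aggressively
SHOW VARIABLES LIKE 'innodb_max_dirty_pages_pct';
SET GLOBAL innodb_max_dirty_pages_pct = 75;
```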
How InnoDB flushes the log buffer: when InnoDB flushes the log buffer to the on-disk log files, it first locks the buffer with a mutex, flushes it up to the desired point, and then moves the remaining entries to the front of the buffer. By the time the mutex is released, more than one transaction may be ready to flush its log entries. InnoDB has a group-commit feature that lets multiple transactions be committed within a single I/O operation.
The log buffer must be flushed to disk for committed transactions to be fully durable. If performance matters more to you than durability, adjust innodb_flush_log_at_trx_commit to control how often the redo log buffer is flushed.
innodb_flush_log_at_trx_commit options
Value | Behavior
---|---
0 | The buffer is flushed to the redo log once per second; data is not durably written when a transaction commits
1 | The buffer is flushed to the redo log file and synced to disk on every transaction commit. No committed transaction is lost, unless the disk or operating system "fakes" the flush. This is the InnoDB default
2 | The log buffer is written to the log file on every commit but is not synced to storage; data can be lost on a power failure or operating-system crash
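The setting is dynamic, so it can be changed at runtime if you decide to trade some durability for throughput (a sketch; choose the value that matches your durability requirements):

```sql
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

-- 2: write to the log file on each commit, sync roughly once per second
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
```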
Doublewrite buffer
The doublewrite buffer exists to prevent incomplete (torn) pages when a system failure or power loss interrupts the writing of buffered data to disk.
Concretely, pages in memory are first written sequentially to the doublewrite buffer (a sequential area on disk), and only after that write succeeds are they written to the data files.
During startup and crash recovery, InnoDB checks whether the pages in the data files are complete. If a page is found to be corrupt, the complete copy is read back from the doublewrite buffer. This guarantees data integrity.
Every page is thus written to disk twice, but the doublewrite buffer itself is written sequentially, so the speed is acceptable.
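To check whether doublewrite is enabled and how heavily it is used:

```sql
-- ON by default; Innodb_dblwr_pages_written and Innodb_dblwr_writes
-- show how many pages and batches have gone through the buffer
SHOW VARIABLES LIKE 'innodb_doublewrite';
SHOW GLOBAL STATUS LIKE 'Innodb_dblwr%';
```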
InnoDB concurrency control
innodb_thread_concurrency controls how many threads can enter the InnoDB kernel at once; 0 means no limit.
Because most of the work is disk I/O, a common starting point is value = number of CPUs * number of disks * 2.
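For example, with 8 CPU cores and 2 disks the rule of thumb gives 8 * 2 * 2 = 32 (illustrative numbers; measure before fixing a limit):

```sql
-- 0 (no limit) is the default; cap it only if threads contend in the kernel
SET GLOBAL innodb_thread_concurrency = 32;
```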
concurrent_insert controls concurrent inserts into MyISAM tables.
Option | Meaning
---|---
0 | Concurrent inserts are not allowed; every insert takes a table lock
1 | Concurrent inserts are allowed if the table has no holes (free space left by deleted rows)
2 | Inserts are forced to the end of the table, which fragments it over time
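To inspect or change it (applies to MyISAM tables only):

```sql
SHOW VARIABLES LIKE 'concurrent_insert';
SET GLOBAL concurrent_insert = 1;
```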
Optimizing large fields
BLOB and TEXT columns should not share a table with other frequently queried fields if you can avoid it. If they must exist, split them out into a separate table.
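A minimal sketch of splitting a TEXT column into its own table (table and column names are hypothetical):

```sql
-- Hot table: small rows, scanned and cached cheaply
CREATE TABLE article (
    id    BIGINT PRIMARY KEY,
    title VARCHAR(200) NOT NULL
);

-- Large column isolated; read only when the body is actually needed
CREATE TABLE article_body (
    article_id BIGINT PRIMARY KEY,
    body       TEXT,
    FOREIGN KEY (article_id) REFERENCES article (id)
);
```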
Sort Optimization
If the sorted rows are no longer than max_length_for_sort_data, the single-pass sort is used, which places the full rows in the sort buffer;
if they are longer, the two-pass sort is used, which sorts (sort key, row pointer) pairs and then re-reads the rows;
so this variable affects which sort algorithm is chosen.
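To see or adjust the threshold (the value below is illustrative):

```sql
SHOW VARIABLES LIKE 'max_length_for_sort_data';
SET GLOBAL max_length_for_sort_data = 4096;
```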
Other parameters
tmp_table_size controls the in-memory temporary table size;
max_connections is the maximum number of connections;
thread_cache_size is the thread cache size;
innodb_io_capacity tells InnoDB how many I/O operations per second the storage can sustain;
innodb_buffer_pool_size and innodb_log_file_size are very important; if you are not familiar with them, study MySQL's InnoDB engine on your own.
...