Overview
1. Purpose of this manual:
This manual provides a systematic methodology for discovering and analyzing slow SQL statements. It does so with the ob_tools
package, which collects and analyzes the slow SQL generated when the application is stress-tested in different scenarios during delivery, enabling performance tuning and optimization recommendations.
2. Document content:
This manual contains the following main sections:
1. Introduction to the stored procedures and functions in the ob_tools package:
Describes in detail the functionality, usage, and parameters of each stored procedure and function in the ob_tools package, giving the reader a clear foundation for understanding and applying them.
2. Use cases for the ob_tools package's stored procedures and functions in different scenarios:
Provides specific use cases for different application scenarios, showing how these stored procedures and functions can be utilized to collect and analyze data to meet specific performance needs.
3. Introduction to the SQL used in data analysis reports, with use cases:
Provides example SQL statements used for data analysis, explains how to use the reports to analyze SQL, and demonstrates the results with concrete cases, helping delivery staff better understand performance bottlenecks and optimization directions.
3. Background:
Many customers have reported that queries against the v$ob_sql_audit
and gv$ob_sql_audit
views are slow and that the data is not persistent, which makes it hard to retrieve specific records when reviewing issues or diagnosing performance failures, unless OCP happens to have captured them.
To address this, during a bank project we wrote routines to persist v$ob_sql_audit
data, plus an approach to analyzing that data, in support of the online transaction stress-testing requirements, in different scenarios, of a core system built on ISV V8 microservices (TCC architecture).
Through systematic means of data collection and analysis, this manual is designed to help users identify and solve performance problems more effectively.
1. Prerequisites
Before using the ob_tools package, you first need to create the appropriate database user and give that user the privileges to connect to the database, create, modify, and delete objects. In addition, the following commands need to be executed using the sys user to ensure that the user is able to perform the relevant operations.
CREATE USER YZJ IDENTIFIED BY 123456;
GRANT DBA TO YZJ;
GRANT EXECUTE ON dbms_metadata TO YZJ;
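After running the grants, a quick sanity check from a new YZJ session can confirm they took effect. This is an optional verification sketch using the standard Oracle-mode dictionary views, not part of the tool itself:

```sql
-- Connect as YZJ, then verify the role and the object privilege:
SELECT granted_role FROM user_role_privs;           -- should list DBA
SELECT table_name, privilege FROM user_tab_privs;   -- should show EXECUTE on DBMS_METADATA
```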
After you have created the user, you need to make sure that the relevant database objects have been created correctly.
For this purpose, we provide a procedure named manage_objects
to initialize the necessary database objects.
2. The manage_objects initialization stored procedure
I. manage_objects procedure code
create or replace PROCEDURE manage_objects(p_action IN VARCHAR2) IS
-- Defining time variables
v_start_time TIMESTAMP;
v_end_time TIMESTAMP;
v_elapsed_time INTERVAL DAY(2) TO SECOND(6);
-- Exceptions for handling parameter errors
ex_invalid_action EXCEPTION;
PRAGMA EXCEPTION_INIT(ex_invalid_action, -20001);
-- Used to check if incoming parameters are legal
PROCEDURE check_action(p_action IN VARCHAR2) IS
BEGIN
IF UPPER(p_action) NOT IN ('INIT', 'UPDATE', 'DELETE') THEN
RAISE ex_invalid_action;
END IF;
END check_action;
-- Printing operation information
PROCEDURE log_info(p_message IN VARCHAR2) IS
BEGIN
DBMS_OUTPUT.PUT_LINE(p_message);
END log_info;
-- Record operation time
PROCEDURE log_time(p_start_time IN TIMESTAMP, p_end_time IN TIMESTAMP) IS
BEGIN
v_elapsed_time := p_end_time - p_start_time;
log_info('Start time: ' || TO_CHAR(p_start_time, 'YYYY-MM-DD HH24:MI:SS.FF6'));
log_info('End time: ' || TO_CHAR(p_end_time, 'YYYY-MM-DD HH24:MI:SS.FF6'));
log_info('Elapsed time: ' || TO_CHAR(v_elapsed_time));
END log_time;
BEGIN
/*
Preamble
Procedure name: manage_objects
Function: Performs initialization (INIT), update (UPDATE), or deletion (DELETE) of database objects according to the incoming parameter.
Creation date: 2024-09-23
Authors: YZJ, Sudo
Version: v1.0
Requirements:
This procedure manages the tool's tables, indexes, and sequences. The incoming parameter selects the operation:
- INIT: create all related tables, indexes, and sequences.
- UPDATE: refresh part of the table data and rebuild indexes.
- DELETE: drop all related tables, indexes, and sequences.
The start time, end time, and elapsed time of each operation are recorded and printed to facilitate debugging and monitoring.
Prerequisites:
Before creating this procedure, make sure you have the following object management privileges:
- create/drop tables
- create/drop indexes
- create/drop sequences
Operations:
1. INIT:
- Create the tables XZ_SQL_AUDIT, XZ_CDC_LOG, and XZ_SQL_AUDIT_CDC.
- Create the sequence SEQ_XZ_SQL_AUDIT_CDC_CID.
- Create the indexes IDX_XZSQLAUDIT_INDEX2, IDX_XZSQLAUDITCDC_INDEX2, IDX_XZINDEX_INDEX1, and IDX_XZOBJECTS_INDEX1.
- The creation of each table, sequence, and index is timed and logged in detail.
2. UPDATE:
- TRUNCATE the tables XZ_INDEXES and XZ_OBJECTS and re-import their data.
- Re-import the data from DBA_INDEXES and DBA_OBJECTS and rebuild the indexes.
- Execution times and results are logged during the operation.
3. DELETE:
- Drop all created tables, indexes, and sequences.
- The operations performed and their times are logged during deletion.
Invocation:
Pass in the parameter for the desired operation:
```
BEGIN
manage_objects('INIT'); -- create objects
END;
/
BEGIN
manage_objects('UPDATE'); -- update data
END;
/
BEGIN
manage_objects('DELETE'); -- drop objects
END;
/
```
Notes:
- The parameter must be 'INIT', 'UPDATE', or 'DELETE'; if any other value is passed in, the procedure raises an exception and stops execution.
- A large amount of log output is produced during execution; make sure DBMS_OUTPUT.PUT_LINE output is enabled (SET SERVEROUTPUT ON).
*/
-- Check that the parameter is legal
check_action(UPPER(p_action));
IF UPPER(p_action) = 'INIT' THEN
-- Object initialization
log_info('================ INIT - creating tables, indexes, and sequences ================');
-- Create table XZ_SQL_AUDIT
log_info('Creating table: XZ_SQL_AUDIT');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE '
CREATE TABLE XZ_SQL_AUDIT (
C_ID NUMBER(38),C_RQ_TIME TIMESTAMP(6),C_BATCH_ID NUMBER(38),C_NAME VARCHAR2(100),C_FLAG CHAR(2),SVR_IP VARCHAR2(46),SVR_PORT NUMBER(38),
REQUEST_ID NUMBER(38),SQL_EXEC_ID NUMBER(38),TRACE_ID VARCHAR2(128),SID NUMBER(38),CLIENT_IP VARCHAR2(46),CLIENT_PORT NUMBER(38),TENANT_ID NUMBER(38),
EFFECTIVE_TENANT_ID NUMBER(38),TENANT_NAME VARCHAR2(64),USER_ID NUMBER(38),USER_NAME VARCHAR2(64),USER_GROUP NUMBER(38),USER_CLIENT_IP VARCHAR2(46),
DB_ID NUMBER(38),DB_NAME VARCHAR2(128),SQL_ID VARCHAR2(32),QUERY_SQL VARCHAR2(4000),PLAN_ID NUMBER(38),AFFECTED_ROWS NUMBER(38),RETURN_ROWS NUMBER(38),
PARTITION_CNT NUMBER(38),RET_CODE NUMBER(38),QC_ID NUMBER(38),DFO_ID NUMBER(38),SQC_ID NUMBER(38),WORKER_ID NUMBER(38),EVENT VARCHAR2(64),P1TEXT VARCHAR2(64),
P1 NUMBER(38),P2TEXT VARCHAR2(64),P2 NUMBER(38),P3TEXT VARCHAR2(64),P3 NUMBER(38),WAIT_CLASS_ID NUMBER(38),WAIT_CLASS# NUMBER(38),WAIT_CLASS VARCHAR2(64),
STATE VARCHAR2(19),WAIT_TIME_MICRO NUMBER(38),TOTAL_WAIT_TIME_MICRO NUMBER(38),TOTAL_WAITS NUMBER(38),RPC_COUNT NUMBER(38),PLAN_TYPE NUMBER(38),
IS_INNER_SQL NUMBER(38),IS_EXECUTOR_RPC NUMBER(38),IS_HIT_PLAN NUMBER(38),REQUEST_TIME NUMBER(38),ELAPSED_TIME NUMBER(38),NET_TIME NUMBER(38),NET_WAIT_TIME NUMBER(38),
QUEUE_TIME NUMBER(38),DECODE_TIME NUMBER(38),GET_PLAN_TIME NUMBER(38),EXECUTE_TIME NUMBER(38),APPLICATION_WAIT_TIME NUMBER(38),CONCURRENCY_WAIT_TIME NUMBER(38),
USER_IO_WAIT_TIME NUMBER(38),SCHEDULE_TIME NUMBER(38),ROW_CACHE_HIT NUMBER(38),BLOOM_FILTER_CACHE_HIT NUMBER(38),BLOCK_CACHE_HIT NUMBER(38),DISK_READS NUMBER(38),
RETRY_CNT NUMBER(38),TABLE_SCAN NUMBER(38),CONSISTENCY_LEVEL NUMBER(38),MEMSTORE_READ_ROW_COUNT NUMBER(38),SSSTORE_READ_ROW_COUNT NUMBER(38),DATA_BLOCK_READ_CNT NUMBER(38),
DATA_BLOCK_CACHE_HIT NUMBER(38),INDEX_BLOCK_READ_CNT NUMBER(38),INDEX_BLOCK_CACHE_HIT NUMBER(38),BLOCKSCAN_BLOCK_CNT NUMBER(38),BLOCKSCAN_ROW_CNT NUMBER(38),
PUSHDOWN_STORAGE_FILTER_ROW_CNT NUMBER(38),REQUEST_MEMORY_USED NUMBER(38),EXPECTED_WORKER_COUNT NUMBER(38),USED_WORKER_COUNT NUMBER(38),SCHED_INFO VARCHAR2(16384),
PS_CLIENT_STMT_ID NUMBER(38),PS_INNER_STMT_ID NUMBER(38),TX_ID NUMBER(38),SNAPSHOT_VERSION NUMBER(38),REQUEST_TYPE NUMBER(38),IS_BATCHED_MULTI_STMT NUMBER(38),
OB_TRACE_INFO VARCHAR2(4096),PLAN_HASH NUMBER(38),PARAMS_VALUE CLOB,RULE_NAME VARCHAR2(256),TX_INTERNAL_ROUTING NUMBER,TX_STATE_VERSION NUMBER(38),FLT_TRACE_ID VARCHAR2(1024),
NETWORK_WAIT_TIME NUMBER(38),PRIMARY KEY (C_ID, C_BATCH_ID)
) partition by range(C_BATCH_ID) subpartition by hash(C_ID)
(partition P1 values less than (200) (subpartition subp0,subpartition subp1,subpartition subp2,subpartition subp3,subpartition subp4,subpartition subp5,subpartition subp6,subpartition subp7,subpartition subp8,subpartition subp9),
partition P2 values less than (400) (subpartition subp10,subpartition subp11,subpartition subp12,subpartition subp13,subpartition subp14,subpartition subp15,subpartition subp16,subpartition subp17,subpartition subp18,subpartition subp19),
partition P3 values less than (600) (subpartition subp20,subpartition subp21,subpartition subp22,subpartition subp23,subpartition subp24,subpartition subp25,subpartition subp26,subpartition subp27,subpartition subp28,subpartition subp29),
partition P4 values less than (800) (subpartition subp30,subpartition subp31,subpartition subp32,subpartition subp33,subpartition subp34,subpartition subp35,subpartition subp36,subpartition subp37,subpartition subp38,subpartition subp39),
partition P5 values less than (1000) (subpartition subp40,subpartition subp41,subpartition subp42,subpartition subp43,subpartition subp44,subpartition subp45,subpartition subp46,subpartition subp47,subpartition subp48,subpartition subp49),
partition P_MAX VALUES LESS THAN (MAXVALUE) (subpartition subp50,subpartition subp51,subpartition subp52,subpartition subp53,subpartition subp54,subpartition subp55,subpartition subp56,subpartition subp57,subpartition subp58,subpartition subp59))
';
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
-- Create index IDX_XZSQLAUDIT_INDEX2
log_info('Creating index: IDX_XZSQLAUDIT_INDEX2');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'CREATE INDEX IDX_XZSQLAUDIT_INDEX2 on XZ_SQL_AUDIT (C_BATCH_ID, C_NAME, USER_NAME, SQL_ID, RETRY_CNT, RPC_COUNT,QUERY_SQL, NET_TIME, NET_WAIT_TIME, QUEUE_TIME, DECODE_TIME, GET_PLAN_TIME, EXECUTE_TIME, ELAPSED_TIME) LOCAL';
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
-- Create sequence SEQ_XZ_SQL_AUDIT_CDC_CID
log_info('Creating sequence: SEQ_XZ_SQL_AUDIT_CDC_CID');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'CREATE SEQUENCE SEQ_XZ_SQL_AUDIT_CDC_CID MINVALUE 1 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 CACHE 10000 NOORDER NOCYCLE';
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
-- Create log table XZ_CDC_LOG
log_info('Creating table: XZ_CDC_LOG');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'CREATE TABLE XZ_CDC_LOG (C_BATCH_ID NUMBER, C_START_TIME TIMESTAMP(6), C_END_TIME TIMESTAMP(6), C_START_SCN NUMBER, C_END_SCN NUMBER, C_EXECUTE_TIME INTERVAL DAY(2) TO SECOND(6), PRIMARY KEY (C_BATCH_ID))';
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
-- Create analysis table XZ_SQL_AUDIT_CDC
log_info('Creating table: XZ_SQL_AUDIT_CDC');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE '
CREATE TABLE XZ_SQL_AUDIT_CDC (
C_ID NUMBER(38) DEFAULT SEQ_XZ_SQL_AUDIT_CDC_CID.NEXTVAL,C_BATCH_ID NUMBER(38),SVR_IP VARCHAR2(46),SVR_PORT NUMBER(38),REQUEST_ID NUMBER(38),SQL_EXEC_ID NUMBER(38),
TRACE_ID VARCHAR2(128),SID NUMBER(38),CLIENT_IP VARCHAR2(46),CLIENT_PORT NUMBER(38),TENANT_ID NUMBER(38),EFFECTIVE_TENANT_ID NUMBER(38),TENANT_NAME VARCHAR2(64),
USER_ID NUMBER(38),USER_NAME VARCHAR2(64),USER_GROUP NUMBER(38),USER_CLIENT_IP VARCHAR2(46),DB_ID NUMBER(38),DB_NAME VARCHAR2(128),SQL_ID VARCHAR2(32),QUERY_SQL VARCHAR2(4000),
PLAN_ID NUMBER(38),AFFECTED_ROWS NUMBER(38),RETURN_ROWS NUMBER(38),PARTITION_CNT NUMBER(38),RET_CODE NUMBER(38),QC_ID NUMBER(38),DFO_ID NUMBER(38),SQC_ID NUMBER(38),
WORKER_ID NUMBER(38),EVENT VARCHAR2(64),P1TEXT VARCHAR2(64),P1 NUMBER(38),P2TEXT VARCHAR2(64),P2 NUMBER(38),P3TEXT VARCHAR2(64),P3 NUMBER(38),WAIT_CLASS_ID NUMBER(38),
WAIT_CLASS# NUMBER(38),WAIT_CLASS VARCHAR2(64),STATE VARCHAR2(19),WAIT_TIME_MICRO NUMBER(38),TOTAL_WAIT_TIME_MICRO NUMBER(38),TOTAL_WAITS NUMBER(38),RPC_COUNT NUMBER(38),
PLAN_TYPE NUMBER(38),IS_INNER_SQL NUMBER(38),IS_EXECUTOR_RPC NUMBER(38),IS_HIT_PLAN NUMBER(38),REQUEST_TIME NUMBER(38),ELAPSED_TIME NUMBER(38),NET_TIME NUMBER(38),
NET_WAIT_TIME NUMBER(38),QUEUE_TIME NUMBER(38),DECODE_TIME NUMBER(38),GET_PLAN_TIME NUMBER(38),EXECUTE_TIME NUMBER(38),APPLICATION_WAIT_TIME NUMBER(38),
CONCURRENCY_WAIT_TIME NUMBER(38),USER_IO_WAIT_TIME NUMBER(38),SCHEDULE_TIME NUMBER(38),ROW_CACHE_HIT NUMBER(38),BLOOM_FILTER_CACHE_HIT NUMBER(38),BLOCK_CACHE_HIT NUMBER(38),
DISK_READS NUMBER(38),RETRY_CNT NUMBER(38),TABLE_SCAN NUMBER(38),CONSISTENCY_LEVEL NUMBER(38),MEMSTORE_READ_ROW_COUNT NUMBER(38),SSSTORE_READ_ROW_COUNT NUMBER(38),
DATA_BLOCK_READ_CNT NUMBER(38),DATA_BLOCK_CACHE_HIT NUMBER(38),INDEX_BLOCK_READ_CNT NUMBER(38),INDEX_BLOCK_CACHE_HIT NUMBER(38),BLOCKSCAN_BLOCK_CNT NUMBER(38),BLOCKSCAN_ROW_CNT NUMBER(38),
PUSHDOWN_STORAGE_FILTER_ROW_CNT NUMBER(38),REQUEST_MEMORY_USED NUMBER(38),EXPECTED_WORKER_COUNT NUMBER(38),USED_WORKER_COUNT NUMBER(38),SCHED_INFO VARCHAR2(16384),
PS_CLIENT_STMT_ID NUMBER(38),PS_INNER_STMT_ID NUMBER(38),TX_ID NUMBER(38),SNAPSHOT_VERSION NUMBER(38),REQUEST_TYPE NUMBER(38),IS_BATCHED_MULTI_STMT NUMBER(38),
OB_TRACE_INFO VARCHAR2(4096),PLAN_HASH NUMBER(38),PARAMS_VALUE CLOB,RULE_NAME VARCHAR2(256),TX_INTERNAL_ROUTING NUMBER,TX_STATE_VERSION NUMBER(38),FLT_TRACE_ID VARCHAR2(1024),
NETWORK_WAIT_TIME NUMBER(38),PRIMARY KEY (C_ID, REQUEST_TIME)) partition by hash(REQUEST_TIME)
(partition P0,partition P1,partition P2,partition P3,partition P4,partition P5,partition P6,partition P7,partition P8,partition P9,partition P10,partition P11,partition P12,
partition P13,partition P14,partition P15,partition P16,partition P17,partition P18,partition P19,partition P20,partition P21,partition P22,partition P23,partition P24,partition P25,
partition P26,partition P27,partition P28,partition P29,partition P30,partition P31,partition P32,partition P33,partition P34,partition P35,partition P36,partition P37,partition P38,partition P39)
';
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
-- Create index IDX_XZSQLAUDITCDC_INDEX2
log_info('Creating index: IDX_XZSQLAUDITCDC_INDEX2');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'CREATE INDEX IDX_XZSQLAUDITCDC_INDEX2 on XZ_SQL_AUDIT_CDC (C_BATCH_ID, DB_NAME,USER_NAME, SQL_ID, RETRY_CNT, RPC_COUNT,QUERY_SQL, NET_TIME, NET_WAIT_TIME, QUEUE_TIME, DECODE_TIME, GET_PLAN_TIME, EXECUTE_TIME, ELAPSED_TIME) LOCAL';
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
-- Create auxiliary table XZ_INDEXES
log_info('Creating table: XZ_INDEXES');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'CREATE TABLE XZ_INDEXES AS
SELECT /*+PARALLEL(30)*/
upper(a.table_owner) table_owner,
upper(a.table_name) table_name,
count(distinct case when a.partitioned = ''YES'' then a.index_name else null end) local_index_cnt,
count(distinct case when a.partitioned = ''NO'' then a.index_name else null end) global_index_cnt,
count(distinct case when b.constraint_type = ''P'' then b.constraint_name else null end) pk_cnt
FROM dba_indexes a inner join dba_constraints b on a.table_owner = b.owner and a.table_name = b.table_name
group by a.table_owner, a.table_name';
v_end_time := SYSTIMESTAMP;
commit;
log_time(v_start_time, v_end_time);
-- Create index IDX_XZINDEX_INDEX1
log_info('Creating index: IDX_XZINDEX_INDEX1');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'CREATE INDEX IDX_XZINDEX_INDEX1 on XZ_INDEXES (TABLE_OWNER,TABLE_NAME)';
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
-- Create auxiliary table XZ_OBJECTS
log_info('Creating table: XZ_OBJECTS');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'CREATE TABLE XZ_OBJECTS AS SELECT * FROM DBA_OBJECTS';
commit;
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
-- Create index IDX_XZOBJECTS_INDEX1
log_info('Creating index: IDX_XZOBJECTS_INDEX1');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'CREATE INDEX IDX_XZOBJECTS_INDEX1 on XZ_OBJECTS (OWNER,OBJECT_NAME,OBJECT_TYPE)';
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
ELSIF UPPER(p_action) = 'UPDATE' THEN
-- Update operation
log_info('================ UPDATE - refreshing tables XZ_INDEXES and XZ_OBJECTS ================');
-- TRUNCATE TABLE XZ_INDEXES and re-import the data
log_info('Truncating table: XZ_INDEXES');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'TRUNCATE TABLE XZ_INDEXES';
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
log_info('Reloading data into XZ_INDEXES');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'INSERT /*+PARALLEL(30)*/ INTO XZ_INDEXES
SELECT upper(a.table_owner), upper(a.table_name),
count(distinct case when a.partitioned = ''YES'' then a.index_name else null end) local_index_cnt,
count(distinct case when a.partitioned = ''NO'' then a.index_name else null end) global_index_cnt,
count(distinct case when b.constraint_type = ''P'' then b.constraint_name else null end) pk_cnt
FROM dba_indexes a INNER JOIN dba_constraints b ON a.table_owner = b.owner AND a.table_name = b.table_name
GROUP BY a.table_owner, a.table_name';
commit;
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
-- TRUNCATE TABLE XZ_OBJECTS and re-import the data
log_info('Truncating table: XZ_OBJECTS');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'TRUNCATE TABLE XZ_OBJECTS';
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
log_info('Reloading data into XZ_OBJECTS');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'INSERT INTO XZ_OBJECTS SELECT * FROM DBA_OBJECTS';
commit;
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
ELSIF UPPER(p_action) = 'DELETE' THEN
-- Drop all objects
log_info('================ DELETE - dropping all tables, indexes, and sequences ================');
-- Drop the indexes, sequences, and tables
log_info('Dropping table: XZ_SQL_AUDIT');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'DROP TABLE XZ_SQL_AUDIT CASCADE CONSTRAINTS';
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
log_info('Dropping sequence: SEQ_XZ_SQL_AUDIT_CDC_CID');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'DROP SEQUENCE SEQ_XZ_SQL_AUDIT_CDC_CID';
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
log_info('Dropping table: XZ_CDC_LOG');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'DROP TABLE XZ_CDC_LOG CASCADE CONSTRAINTS';
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
log_info('Dropping table: XZ_SQL_AUDIT_CDC');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'DROP TABLE XZ_SQL_AUDIT_CDC CASCADE CONSTRAINTS';
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
log_info('Dropping table: XZ_INDEXES');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'DROP TABLE XZ_INDEXES CASCADE CONSTRAINTS';
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
log_info('Dropping table: XZ_OBJECTS');
v_start_time := SYSTIMESTAMP;
EXECUTE IMMEDIATE 'DROP TABLE XZ_OBJECTS CASCADE CONSTRAINTS';
v_end_time := SYSTIMESTAMP;
log_time(v_start_time, v_end_time);
END IF;
EXCEPTION
WHEN ex_invalid_action THEN
DBMS_OUTPUT.PUT_LINE('Invalid parameter! The parameter must be INIT, UPDATE, or DELETE');
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Exception: ' || SQLERRM);
END manage_objects;
**manage_objects**
The main function of this stored procedure is to create, update, or delete database objects based on the incoming parameter.
The procedure can perform three operations:
- INIT: Creates all required tables, indexes, and sequences.
- UPDATE: Updates some table data and rebuilds indexes.
- DELETE: Deletes all related tables, indexes, and sequences.
II. manage_objects usage
Before calling the manage_objects procedure, make sure that you have sufficient privileges (e.g. to create/drop tables, indexes, and sequences).
BEGIN
manage_objects( p_action => 'INIT'); -- create objects
END;
/
BEGIN
manage_objects( p_action => 'UPDATE'); -- update data
END;
/
BEGIN
manage_objects( p_action => 'DELETE'); -- Delete objects
END;
/
obclient [YZJ]> set serveroutput on;
obclient [YZJ]> BEGIN
-> manage_objects('DELETE'); -- delete objects
-> END;
-> /
Query OK, 1 row affected (1.359 sec)
================ DELETE - dropping all tables, indexes, and sequences ================
Dropping table: XZ_SQL_AUDIT
Start time: 2024-09-28 11:22:39.684759
End time: 2024-09-28 11:22:39.864360
Elapsed time: +00 00:00:00.179601
Dropping sequence: SEQ_XZ_SQL_AUDIT_CDC_CID
Start time: 2024-09-28 11:22:39.865110
End time: 2024-09-28 11:22:39.903495
Elapsed time: +00 00:00:00.038385
Dropping table: XZ_CDC_LOG
Start time: 2024-09-28 11:22:39.904163
End time: 2024-09-28 11:22:39.957957
Elapsed time: +00 00:00:00.053794
Dropping table: XZ_SQL_AUDIT_CDC
Start time: 2024-09-28 11:22:39.958671
End time: 2024-09-28 11:22:40.101779
Elapsed time: +00 00:00:00.143108
Dropping table: XZ_INDEXES
Start time: 2024-09-28 11:22:40.102526
End time: 2024-09-28 11:22:40.163580
Elapsed time: +00 00:00:00.061054
Dropping table: XZ_OBJECTS
Start time: 2024-09-28 11:22:40.164344
End time: 2024-09-28 11:22:40.236159
Elapsed time: +00 00:00:00.071815
obclient [YZJ]>
obclient [YZJ]>
obclient [YZJ]> BEGIN
-> manage_objects('INIT'); -- create objects
-> END;
-> /
Query OK, 1 row affected (3.797 sec)
================ INIT - creating tables, indexes, and sequences ================
Creating table: XZ_SQL_AUDIT
Start time: 2024-09-28 11:21:35.216756
End time: 2024-09-28 11:21:35.465584
Elapsed time: +00 00:00:00.248828
Creating index: IDX_XZSQLAUDIT_INDEX2
Start time: 2024-09-28 11:21:35.466338
End time: 2024-09-28 11:21:37.852109
Elapsed time: +00 00:00:02.385771
Creating sequence: SEQ_XZ_SQL_AUDIT_CDC_CID
Start time: 2024-09-28 11:21:37.852915
End time: 2024-09-28 11:21:37.893742
Elapsed time: +00 00:00:00.040827
Creating table: XZ_CDC_LOG
Start time: 2024-09-28 11:21:37.894449
End time: 2024-09-28 11:21:37.977595
Elapsed time: +00 00:00:00.083146
Creating table: XZ_SQL_AUDIT_CDC
Start time: 2024-09-28 11:21:37.978392
End time: 2024-09-28 11:21:38.178886
Elapsed time: +00 00:00:00.200494
Creating index: IDX_XZSQLAUDITCDC_INDEX2
Start time: 2024-09-28 11:21:38.179642
End time: 2024-09-28 11:21:39.721940
Elapsed time: +00 00:00:01.542298
Creating table: XZ_INDEXES
Start time: 2024-09-28 11:21:39.722672
End time: 2024-09-28 11:21:40.373707
Elapsed time: +00 00:00:00.651035
Creating index: IDX_XZINDEX_INDEX1
Start time: 2024-09-28 11:21:40.374425
End time: 2024-09-28 11:21:40.653932
Elapsed time: +00 00:00:00.279507
Creating table: XZ_OBJECTS
Start time: 2024-09-28 11:21:40.654757
End time: 2024-09-28 11:21:41.069441
Elapsed time: +00 00:00:00.414684
Creating index: IDX_XZOBJECTS_INDEX1
Start time: 2024-09-28 11:21:41.070205
End time: 2024-09-28 11:21:41.349171
Elapsed time: +00 00:00:00.278966
obclient [YZJ]>
obclient [YZJ]>
obclient [YZJ]> BEGIN
-> manage_objects('UPDATE'); -- update data
-> END;
-> /
Query OK, 1 row affected (0.741 sec)
================ UPDATE - refreshing tables XZ_INDEXES and XZ_OBJECTS ================
Truncating table: XZ_INDEXES
Start time: 2024-09-28 11:24:55.138446
End time: 2024-09-28 11:24:55.211236
Elapsed time: +00 00:00:00.072790
Reloading data into XZ_INDEXES
Start time: 2024-09-28 11:24:55.211981
End time: 2024-09-28 11:24:55.586314
Elapsed time: +00 00:00:00.374333
Truncating table: XZ_OBJECTS
Start time: 2024-09-28 11:24:55.587017
End time: 2024-09-28 11:24:55.656094
Elapsed time: +00 00:00:00.069077
Reloading data into XZ_OBJECTS
Start time: 2024-09-28 11:24:55.656805
End time: 2024-09-28 11:24:55.937035
Elapsed time: +00 00:00:00.280230
When you call manage_objects with a parameter, the procedure performs the corresponding action:
- INIT operation: creates the following objects:
  - table XZ_SQL_AUDIT
  - table XZ_CDC_LOG
  - table XZ_SQL_AUDIT_CDC
  - sequence SEQ_XZ_SQL_AUDIT_CDC_CID
  - index IDX_XZSQLAUDIT_INDEX2
  - the auxiliary tables xz_indexes and xz_objects, along with their indexes
- UPDATE operation: refreshes the data in the auxiliary tables xz_indexes and xz_objects; their specific role is described later.
- DELETE operation: drops all created tables, indexes, and sequences.
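One optional way to confirm that the INIT operation succeeded is to list the tool's objects from the data dictionary. A minimal sketch, assuming the objects were created in the current (YZJ) schema:

```sql
-- Tables, indexes, and the sequence created by manage_objects('INIT'):
SELECT object_type, object_name, status
  FROM user_objects
 WHERE object_name LIKE 'XZ\_%' ESCAPE '\'
    OR object_name LIKE 'IDX\_XZ%' ESCAPE '\'
    OR object_name = 'SEQ_XZ_SQL_AUDIT_CDC_CID'
 ORDER BY object_type, object_name;
```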
III. ***Precautions***
After running this procedure, make sure it completes successfully before compiling the **ob_tools**
package.
3. Introduction to the ob_tools package
The ob_tools
package currently contains the following procedures and functions:
- **extract_sqlaudit_proc**: extracts SQL execution data for a specified time period from the gv$ob_sql_audit view and stores it in the xz_sql_audit table for subsequent analysis and optimization.
- **real_time_cdc_proc**: collects data from gv$ob_sql_audit in real time and persists it into the xz_sql_audit_cdc table, so that the data remains available during long-running tasks.
- **get_table_name**: extracts one or more table names from a SQL statement to help the user understand which tables are involved.
- **get_table_info**: returns detailed information about multiple tables, including partitioning rules, index counts, and data row counts, so that users can quickly learn about the tables involved while optimizing SQL.
More features will be added in future updates to meet different needs.
****** Note ******
The first procedure of this tool was written on OceanBase 4.2.1.7, and further procedures and functions were added on 4.2.1.8. If you use this tool to analyze SQL performance problems, OceanBase 4.2.1.7 or later is recommended.
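After compiling the package specification and body below, you can optionally check that both compiled cleanly before calling any of the routines. This sketch uses the standard dictionary views:

```sql
-- Both rows (PACKAGE and PACKAGE BODY) should report status VALID:
SELECT object_name, object_type, status
  FROM user_objects
 WHERE object_name = 'OB_TOOLS';

-- If either is INVALID, recompile:
ALTER PACKAGE ob_tools COMPILE BODY;
```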
CREATE OR REPLACE PACKAGE ob_tools IS
-- Procedure for collecting data from an online pressure test
PROCEDURE extract_sqlaudit_proc(p_NAME VARCHAR2, p_FLAG char, p_START_TIME VARCHAR2, p_END_TIME VARCHAR2);
-- Run the batch real-time data pulling procedure
PROCEDURE real_time_cdc_proc(p_END_TIME VARCHAR2 );
-- get_table_name_by_sql_text
FUNCTION get_table_name(p_owner varchar2 ,p_string_sql clob) RETURN VARCHAR2;
-- Get table information by table name
FUNCTION get_table_info(p_tablename_list VARCHAR2) RETURN VARCHAR2;
END ob_tools;
/
CREATE OR REPLACE PACKAGE BODY ob_tools IS
PROCEDURE extract_sqlaudit_proc(P_NAME VARCHAR2, p_FLAG char, P_START_TIME VARCHAR2, p_END_TIME VARCHAR2)
IS
V_NAME VARCHAR2(100):=upper(P_NAME); -- Scenario name
V_FLAG CHAR(2) :=upper(p_FLAG); -- S: pull a single transaction; M: pull multiple transactions (stress test)
V_MAX_BATCH INT;
V_START_TIME VARCHAR2(100):=P_START_TIME; -- Start time
V_END_TIME VARCHAR2(100):=p_END_TIME; -- End time
V_START_SCN NUMBER;
V_END_SCN NUMBER;
/*
V_NAME: scenario name
V_FLAG: S pulls a single transaction; M pulls multiple transactions (stress test)
V_MAX_BATCH: batch number to record
*/
V_SQL VARCHAR2(4000);
V_FLAG_EXCEPTION EXCEPTION;
V_END_TIME_SMALL_EXCEPTION EXCEPTION;
BEGIN
/*
Preamble
Procedure name: extract_sqlaudit_proc
Function: Extracts data for a specified time period from the `gv$ob_sql_audit` view and stores it in the `xz_sql_audit` analysis table for subsequent analysis.
Creation date: 2024-08-15
Authors: YZJ, Sudo
Version: v1.2
Requirements background:
This procedure was written to support the online transaction stress-testing requirements, in different scenarios, of a bank's core system built on ISV V8 microservices (TCC architecture).
By collecting data during the stress test, slow SQL can be analyzed and optimization suggestions provided.
The analysis SQL must still be prepared manually.
Prerequisites:
The `xz_sql_audit` analysis table must be created in advance so that the `extract_sqlaudit_proc` procedure can compile successfully.
Invocation:
The obclient command-line tool is recommended, with server output enabled:
```
set serveroutput on;
```
Example call:
```
BEGIN
extract_sqlaudit_proc(
P_NAME => 'IB1261 single loan transaction',
-- Describes the specific scenario and purpose of the collected data.
-- For concurrent stress-test data this could be written as, e.g., 'IB1261 200-concurrent 5-minute stress test'
p_FLAG => 'S',
-- Stress-test type: 'S' for a single transaction, 'M' for concurrent transactions
P_START_TIME => '2024-09-03 18:06:00',
-- Start time of the stress test
p_END_TIME => '2024-09-03 18:06:20'
-- End time of the stress test
);
END;
/
```
*/
IF TO_TIMESTAMP(V_START_TIME,'YYYY-MM-DD HH24:MI:SS:FF6') > TO_TIMESTAMP(V_END_TIME,'YYYY-MM-DD HH24:MI:SS:FF6') THEN
RAISE V_END_TIME_SMALL_EXCEPTION;
END IF;
IF V_FLAG != 'S' AND V_FLAG != 'M' THEN
RAISE V_FLAG_EXCEPTION;
END IF;
-- Find the current maximum batch ID and add 1
SELECT NVL(MAX(C_BATCH_ID),0) +1 INTO V_MAX_BATCH FROM XZ_SQL_AUDIT;
-- Normalize the incoming values into string timestamp format.
V_START_TIME := TO_CHAR(TO_TIMESTAMP(V_START_TIME,'YYYY-MM-DD HH24:MI:SS:FF6'),'YYYY-MM-DD HH24:MI:SS:FF6');
V_END_TIME := TO_CHAR(TO_TIMESTAMP(V_END_TIME,'YYYY-MM-DD HH24:MI:SS:FF6'),'YYYY-MM-DD HH24:MI:SS:FF6');
V_START_SCN := timestamp_to_scn(TO_TIMESTAMP(V_START_TIME,'YYYY-MM-DD HH24:MI:SS:FF6')) / 1000 ; -- convert the start time to a start SCN
V_END_SCN := timestamp_to_scn(TO_TIMESTAMP(V_END_TIME,'YYYY-MM-DD HH24:MI:SS:FF6')) / 1000 ; -- convert the end time to an end SCN
V_SQL:='
insert /*+ ENABLE_PARALLEL_DML PARALLEL(16)*/ into XZ_SQL_AUDIT
select (:1 * 10000000000)+i , scn_to_timestamp(x.request_time * 1000),:2,:3,:4,
x.SVR_IP,x.SVR_PORT,x.REQUEST_ID,x.SQL_EXEC_ID,x.TRACE_ID,x.SID,x.CLIENT_IP,x.CLIENT_PORT,
x.TENANT_ID,x.EFFECTIVE_TENANT_ID,x.TENANT_NAME,x.USER_ID,x.USER_NAME,x.USER_GROUP,x.USER_CLIENT_IP,
x.DB_ID,x.DB_NAME,x.SQL_ID,to_char(substr(x.QUERY_SQL,1,3995)),x.PLAN_ID,x.AFFECTED_ROWS,x.RETURN_ROWS,x.PARTITION_CNT,
x.RET_CODE,x.QC_ID,x.DFO_ID,x.SQC_ID,x.WORKER_ID,x.EVENT,x.P1TEXT,x.P1,x.P2TEXT,x.P2,x.P3TEXT,x.P3,
x.WAIT_CLASS_ID,x.WAIT_CLASS#,x.WAIT_CLASS,x.STATE,x.WAIT_TIME_MICRO,x.TOTAL_WAIT_TIME_MICRO,x.TOTAL_WAITS,
x.RPC_COUNT,x.PLAN_TYPE,x.IS_INNER_SQL,x.IS_EXECUTOR_RPC,x.IS_HIT_PLAN,x.REQUEST_TIME,x.ELAPSED_TIME,
x.NET_TIME,x.NET_WAIT_TIME,x.QUEUE_TIME,x.DECODE_TIME,x.GET_PLAN_TIME,x.EXECUTE_TIME,x.APPLICATION_WAIT_TIME,
x.CONCURRENCY_WAIT_TIME,x.USER_IO_WAIT_TIME,x.SCHEDULE_TIME,x.ROW_CACHE_HIT,x.BLOOM_FILTER_CACHE_HIT,
x.BLOCK_CACHE_HIT,x.DISK_READS,x.RETRY_CNT,x.TABLE_SCAN,x.CONSISTENCY_LEVEL,x.MEMSTORE_READ_ROW_COUNT,
x.SSSTORE_READ_ROW_COUNT,x.DATA_BLOCK_READ_CNT,x.DATA_BLOCK_CACHE_HIT,x.INDEX_BLOCK_READ_CNT,
x.INDEX_BLOCK_CACHE_HIT,x.BLOCKSCAN_BLOCK_CNT,x.BLOCKSCAN_ROW_CNT,x.PUSHDOWN_STORAGE_FILTER_ROW_CNT,
x.REQUEST_MEMORY_USED,x.EXPECTED_WORKER_COUNT,x.USED_WORKER_COUNT,x.SCHED_INFO,x.PS_CLIENT_STMT_ID,
x.PS_INNER_STMT_ID,x.TX_ID,x.SNAPSHOT_VERSION,x.REQUEST_TYPE,x.IS_BATCHED_MULTI_STMT,x.OB_TRACE_INFO,
x.PLAN_HASH,x.PARAMS_VALUE,x.RULE_NAME,x.TX_INTERNAL_ROUTING,x.TX_STATE_VERSION,x.FLT_TRACE_ID,x.NETWORK_WAIT_TIME
from(
select /*+ PARALLEL(16) query_timeout(50000000000) */ rownum i, a.* from GV$OB_SQL_AUDIT a where
query_sql not like ''%tsp_datasource_check_config%''
and query_sql not like ''%tsp_instans_heartbeat%''
and query_sql not like ''%tsp_instans_param_version%''
and query_sql not like ''%tsp_service_in%''
and query_sql not like ''%tsp_mutex%''
and query_sql not like ''%tsp_param_sync_task%''
and query_sql not like ''%ob_sql_audit%''
and query_sql not like ''%ALTER%TABLE%''
and query_sql not like ''%extract_sqlaudit_proc%''
and TENANT_NAME <> ''sys''
) x where x.request_time >= :5 and x.request_time <= :6
';
EXECUTE IMMEDIATE V_SQL USING V_MAX_BATCH,V_MAX_BATCH,V_NAME,V_FLAG,V_START_SCN,V_END_SCN;
COMMIT;
DBMS_OUTPUT.PUT_LINE(P_START_TIME || ' ~ ' || p_END_TIME || ' collected; query this period with: select * from xz_sql_audit where c_batch_id = ' || V_MAX_BATCH || ' ;');
/*
-- debugging code
dbms_output.put_line(V_NAME);
dbms_output.put_line(V_MAX_BATCH);
dbms_output.put_line(V_START_TIME);
dbms_output.put_line(V_END_TIME);
dbms_output.put_line(V_SQL);
*/
EXCEPTION
WHEN V_FLAG_EXCEPTION THEN
DBMS_OUTPUT.PUT_LINE('Invalid value for the p_FLAG parameter. S: pull a single transaction; M: pull multiple transactions (stress test).');
WHEN V_END_TIME_SMALL_EXCEPTION THEN
DBMS_OUTPUT.PUT_LINE('End time ' || V_END_TIME || ' is earlier than start time ' || V_START_TIME || '; please check the input times.');
END;
FUNCTION get_table_name(p_owner varchar2 ,p_string_sql clob)
RETURN VARCHAR2 IS
v_sql CLOB := p_string_sql;
v_table_names VARCHAR2(4000) := '';
v_pos INTEGER := 1;
v_table_name VARCHAR2(200);
v_match_count INTEGER := 0;
v_owner VARCHAR2(200):= upper(p_owner);
BEGIN
/*
Declaration
Function name: get_table_name
Purpose: extracts one or more table names from a SQL statement.
Created: 2024-09-22
Author: (YZJ, Susa)
Version: v1.0
Description:
    Parses a given SQL statement and extracts one or more table names from it.
    Intended mainly for standard SQL statements, including but not limited to SELECT statements.
Limitations:
    The current version does not support the following SQL statement types:
    - Comma-separated table joins, such as:
    ```
    SELECT * FROM a a1, b b1 WHERE ... ;
    SELECT * FROM a, b WHERE ... ;
    ```
    - Only standard SQL statement formats are supported.
Prerequisites:
    None; just call it directly.
Invocation:
    Example:
    ```
    select get_table_name(query_sql) from gv$ob_sql_audit;
    ```
Notes:
    - Make sure the input SQL statement is well formed, otherwise table names may not be extracted correctly.
    - This function does not support the full SQL standard; later versions will gradually extend it.
Maintenance history:
    - 2024-09-22: initial release.
*/
/* Loop over regular-expression matches to extract table names */
LOOP
v_table_name := REGEXP_SUBSTR(v_sql, '(?i)(FROM|JOIN|INTO|UPDATE|DELETE)\s+("[^"]+"\.)?"?(\w+)"?', v_pos, 1, NULL, 3);
/* exit the loop when no more table names are found */
EXIT WHEN v_table_name IS NULL;
/* accumulate the table name */
v_table_names := v_table_names || UPPER(v_owner) || '.' || UPPER(v_table_name) || ',';
/* Calculate the position of the next match */
v_pos := INSTR(v_sql, v_table_name, v_pos) + LENGTH(v_table_name);
/* Increase in the number of matches */
v_match_count := v_match_count + 1;
END LOOP;
/* If v_match_count is greater than 0, at least one table name was matched, so strip the trailing comma.
   Otherwise no table name was matched anywhere in the SQL, so return an empty string ''.
*/
IF v_match_count > 0 THEN
v_table_names := RTRIM(v_table_names, ',');
RETURN v_table_names;
ELSE
RETURN '';
END IF;
/*
-- debugging code
dbms_output.put_line(p_string_sql);
dbms_output.put_line(v_table_names);
dbms_output.put_line(v_pos);
dbms_output.put_line(v_match_count);
*/
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('err_message: ' || SQLERRM);
END get_table_name;
FUNCTION get_table_info(p_tablename_list VARCHAR2)
RETURN VARCHAR2 IS
v_tablename_list VARCHAR2(4000) := UPPER(p_tablename_list); -- input string, upper-cased
v_separator CHAR := ','; -- separator
v_part VARCHAR2(100); -- temporary variable holding the current segment
v_index NUMBER := 1; -- current index
v_length NUMBER := LENGTH(v_tablename_list); -- input string length
v_tablename VARCHAR2(200);
v_schemaname VARCHAR2(200);
v_partition_rule VARCHAR2(200); -- table partitioning rule
v_object_type VARCHAR2(200); -- object type
v_outinfo CLOB; -- information the function ultimately returns
v_exception_info VARCHAR2(2000); -- stores exception information
v_local_index_cnt INT;
v_global_index_cnt INT;
v_pk_cnt INT;
v_row_cnt INT; -- number of data rows in the table or view
BEGIN
/*
Declaration
Function name: get_table_info
Purpose: extracts details for one or more tables (partitioning rule, local index count, global index count, primary key count, and data row count).
Created: 2024-09-23
Author: (YZJ, Susa)
Version: v1.2
Description:
    Parses the incoming table name list (comma-separated) and fetches each table's details, including:
    - partitioning rule (if any)
    - number of local indexes
    - number of global indexes
    - number of primary keys
    - number of data rows in the table or view
Invocation:
    Example:
    ```
    SELECT get_table_info('schema1.table1, schema2.table2')
    FROM dual;
    ```
Return value:
    A VARCHAR2 string containing the table information; entries for multiple tables are separated by newlines.
Notes:
    - The function relies on the DBA-view-based tables `XZ_OBJECTS` and `XZ_INDEXES`; make sure the calling user has the required privileges.
    - Make sure the table name list is well formed, otherwise the table information may not be retrieved.
Maintenance history:
    - 2024-09-23: initial release.
    - 2024-09-23: added counting of table or view data rows.
*/
-- table header
v_outinfo := RPAD('TABLE NAME', 35) || ' | ' || RPAD('PARTITION RULE', 45) || ' | ' || RPAD('LOCAL INDEXES', 15) || ' | ' || RPAD('GLOBAL INDEXES', 15) || ' | ' || RPAD('PK COUNT', 10) || ' | ' || RPAD('ROW COUNT', 10) || CHR(10) ;
v_outinfo := v_outinfo || '------------------------------------------------------------------------------------------------------------------------------------------------' || CHR(10);
-- loop over each comma-separated substring
WHILE v_index <= v_length LOOP
-- clear the exception message at the start of each iteration
v_exception_info := NULL;
-- locate the next substring
v_part := SUBSTR(v_tablename_list, v_index, INSTR(v_tablename_list, v_separator, v_index) - v_index);
-- if the substring is empty, we have reached the last substring
IF v_part IS NULL OR v_part = '' THEN
v_part := SUBSTR(v_tablename_list, v_index);
END IF;
-- split out the schema and table name, e.g. schema_name.table_name
v_schemaname := UPPER(REGEXP_SUBSTR(v_part, '^[^.]+'));
v_tablename := UPPER(REGEXP_SUBSTR(v_part, '[^.]+$'));
-- first check xz_objects to confirm the object exists; otherwise later queries would raise errors or exceptions
BEGIN
SELECT UPPER(MAX(CASE
WHEN object_type = 'TABLE SUBPARTITION' THEN 'TABLE PARTITION'
ELSE object_type END)
) -- 1. TABLE PARTITION  2. TABLE  3. VIEW
INTO v_object_type FROM xz_objects WHERE owner = v_schemaname AND object_name = v_tablename;
EXCEPTION
WHEN NO_DATA_FOUND THEN
v_exception_info := 'No information about the object was found in base table xz_objects; please check the input!';
WHEN OTHERS THEN
v_exception_info := 'Unknown exception; unable to get object information: ( ' || SUBSTR(SQLERRM, 1, 60) || ' ... )'; -- truncate the error message
END;
-- if object information was found
IF v_object_type IS NOT NULL THEN
IF v_object_type = 'TABLE PARTITION' OR v_object_type = 'TABLE' THEN
-- Partitioned or non-partitioned table processing
IF v_object_type = 'TABLE PARTITION' THEN
-- partition table processing
SELECT /*+ PARALLEL(8) */
UPPER(TO_CHAR(REGEXP_SUBSTR(DBMS_METADATA.GET_DDL('TABLE', v_tablename, v_schemaname), 'partition\s+by\s+(\w+)\(([^)]+)\)')))
INTO v_partition_rule
FROM dual;
ELSE
-- non-partitioned table handling
v_partition_rule := 'unpartitioned table';
END IF;
-- get the table's row count
BEGIN
EXECUTE IMMEDIATE 'SELECT /*+ PARALLEL(8) */ COUNT(1) FROM ' || v_schemaname || '.' || v_tablename INTO v_row_cnt;
EXCEPTION
WHEN OTHERS THEN
v_row_cnt := -1; -- if counting the table rows raises an exception, set -1 (unknown)
END;
-- Getting table index information
BEGIN
-- handle the case where the table has no indexes
SELECT a.local_index_cnt,
a.global_index_cnt,
a.pk_cnt INTO v_local_index_cnt, v_global_index_cnt, v_pk_cnt
FROM xz_indexes a
WHERE a.table_owner = v_schemaname AND a.table_name = v_tablename;
EXCEPTION
WHEN NO_DATA_FOUND THEN
v_local_index_cnt := 0;
v_global_index_cnt := 0;
v_pk_cnt := 0;
END;
-- Splicing Output Information
v_outinfo := v_outinfo || RPAD(v_schemaname || '.' || v_tablename, 35) || ' | ' || RPAD(v_partition_rule, 45)
|| ' | ' || RPAD(v_local_index_cnt, 15)
|| ' | ' || RPAD(v_global_index_cnt, 15)
|| ' | ' || RPAD(v_pk_cnt, 10)
|| ' | ' || RPAD(v_row_cnt, 10) || CHR(10);
ELSIF v_object_type = 'VIEW' THEN
-- for a view, count the rows; set the local index, global index and primary key counts to 0
v_local_index_cnt := 0;
v_global_index_cnt := 0;
v_pk_cnt := 0;
-- get the view's row count
BEGIN
EXECUTE IMMEDIATE 'SELECT /*+ PARALLEL(8) */ COUNT(1) FROM ' || v_schemaname || '.' || v_tablename INTO v_row_cnt;
EXCEPTION
WHEN OTHERS THEN
v_row_cnt := -1; -- if counting the view rows raises an exception, set -1 (unknown)
END;
-- append the view's output line
v_outinfo := v_outinfo || RPAD(v_schemaname || '.' || v_tablename, 35) || ' | ' || RPAD('view', 45)
|| ' | ' || RPAD(v_local_index_cnt, 15)
|| ' | ' || RPAD(v_global_index_cnt, 15)
|| ' | ' || RPAD(v_pk_cnt, 10)
|| ' | ' || RPAD(v_row_cnt, 10) || CHR(10);
ELSE
-- other object types: details unavailable
v_local_index_cnt := 0;
v_global_index_cnt := 0;
v_pk_cnt := 0;
-- append the "details unavailable" output line
v_outinfo := v_outinfo || RPAD(v_schemaname || '.' || v_tablename, 35) || ' | ' || RPAD('Unknown object type; unable to get object details!', 100)
|| ' | ' || RPAD(v_local_index_cnt, 15)
|| ' | ' || RPAD(v_global_index_cnt, 15)
|| ' | ' || RPAD(v_pk_cnt, 10)
|| ' | ' || RPAD('0', 10) || CHR(10); -- row count shown as 0
END IF;
ELSE
-- no object information was found; default the index and primary key counts to 0
v_local_index_cnt := 0;
v_global_index_cnt := 0;
v_pk_cnt := 0;
-- append the exception output line
v_outinfo := v_outinfo || RPAD(v_schemaname || '.' || v_tablename, 35) || ' | ' || RPAD(v_exception_info, 100)
|| ' | ' || RPAD(v_local_index_cnt, 15)
|| ' | ' || RPAD(v_global_index_cnt, 15)
|| ' | ' || RPAD(v_pk_cnt, 10)
|| ' | ' || RPAD('0', 10) || CHR(10); -- row count shown as 0
END IF;
-- Update index position
v_index := v_index + LENGTH(v_part) + 1;
-- reset variable
v_partition_rule := '';
v_object_type := '';
END LOOP;
-- Add a separator for the last line
v_outinfo := v_outinfo || '------------------------------------------------------------------------------------------------------------------------------------------------' || CHR(10);
RETURN v_outinfo;
END get_table_info;
PROCEDURE real_time_cdc_proc(
p_END_TIME VARCHAR2 -- end time
)
IS
p_stop_date varchar(200):=p_END_TIME; -- the stop time; CDC keeps incrementally extracting gv$ob_sql_audit data until this time
v_stop_date TIMESTAMP ; -- the string parameter converted to a timestamp
v_num int:=1; -- loop counter / control variable
v_current_date TIMESTAMP; -- current date
v_start_time TIMESTAMP ; -- start time of the CDC pull (core variable)
v_end_time TIMESTAMP ; -- end time of the CDC pull (core variable)
v_start_scn NUMBER ; -- start SCN of the CDC pull (core variable)
v_end_scn NUMBER ; -- end SCN of the CDC pull (core variable)
v_sql_excute_start_time TIMESTAMP ;
v_sql_excute_end_time TIMESTAMP ;
v_interval INTERVAL day to second; -- elapsed time of each insert into xz_sql_audit_cdc (INTERVAL type)
v_interval_char varchar2(2000); -- elapsed time of each insert into xz_sql_audit_cdc, converted to a string
v_interval_num NUMBER;
v_sql varchar2(4000);
BEGIN
/*
Declaration
Procedure name: real_time_cdc_proc
Purpose: collects data from the `gv$ob_sql_audit` view in real time and persists it into the `xz_sql_audit_cdc` analysis table.
Created: 2024-09-10
Author: (YZJ, Susa)
Version: v2.0
Requirement background:
    This procedure was written to collect, in real time, the batch-job SQL issued while the core system of XX Bank's core system ISV V8 microservices (TCC architecture) runs its batch tasks.
    Because `gv$ob_sql_audit` data is periodically flushed once it exceeds the memory threshold, and the application's batch runs take a long time, the only option is to collect the `gv$ob_sql_audit` data in real time and persist it into the `xz_sql_audit_cdc` analysis table.
    The procedure is mainly used to collect all data from the `gv$ob_sql_audit` view for later slow SQL analysis and optimization recommendations.
    The analysis itself requires manually written analysis SQL.
Prerequisites:
    Before creating this stored procedure, the following objects must exist:
    1. Sequence: SEQ_XZ_SQL_AUDIT_CDC_CID
    2. Log table: XZ_CDC_LOG
       - Records how many collections have been made into `xz_sql_audit_cdc` and when each collection ran.
    3. Data analysis table: XZ_SQL_AUDIT_CDC
       - Persists all data from the `gv$ob_sql_audit` view; later analysis mainly works on this table.
Invocation:
    It is recommended to run it in the background via a Shell script using the obclient command-line tool:
    ```
    BEGIN
       real_time_cdc_proc(p_END_TIME => '2024-12-12 01:00:00');
       -- 2024-12-12 01:00:00 is the time at which the procedure stops running
    END;
    /
    ```
*/
BEGIN
/* if the incoming p_stop_date cannot be converted to a timestamp, raise the error here so the rest of the code is not executed */
v_stop_date := to_timestamp(p_stop_date,'YYYY-MM-DD HH24:MI:SS');
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('p_stop_date: please pass a value that can be converted to a date!' );
RETURN ;
END;
v_current_date := SYSTIMESTAMP;
WHILE v_current_date <= v_stop_date LOOP
IF v_num = 1 THEN
/* initial load: seed xz_sql_audit_cdc with 10 rows */
v_sql := '
insert into xz_sql_audit_cdc(
C_BATCH_ID,SVR_IP,SVR_PORT,REQUEST_ID,SQL_EXEC_ID,TRACE_ID,SID,CLIENT_IP,CLIENT_PORT,TENANT_ID,EFFECTIVE_TENANT_ID,
TENANT_NAME,USER_ID,USER_NAME,USER_GROUP,USER_CLIENT_IP,DB_ID,DB_NAME,SQL_ID,QUERY_SQL,PLAN_ID,AFFECTED_ROWS,
RETURN_ROWS,PARTITION_CNT,RET_CODE,QC_ID,DFO_ID,SQC_ID,WORKER_ID,EVENT,P1TEXT,P1,P2TEXT,P2,P3TEXT,P3,WAIT_CLASS_ID,
WAIT_CLASS#,WAIT_CLASS,STATE,WAIT_TIME_MICRO,TOTAL_WAIT_TIME_MICRO,TOTAL_WAITS,RPC_COUNT,PLAN_TYPE,IS_INNER_SQL,
IS_EXECUTOR_RPC,IS_HIT_PLAN,REQUEST_TIME,ELAPSED_TIME,NET_TIME,NET_WAIT_TIME,QUEUE_TIME,DECODE_TIME,GET_PLAN_TIME,
EXECUTE_TIME,APPLICATION_WAIT_TIME,CONCURRENCY_WAIT_TIME,USER_IO_WAIT_TIME,SCHEDULE_TIME,ROW_CACHE_HIT,
BLOOM_FILTER_CACHE_HIT,BLOCK_CACHE_HIT,DISK_READS,RETRY_CNT,TABLE_SCAN,CONSISTENCY_LEVEL,MEMSTORE_READ_ROW_COUNT,
SSSTORE_READ_ROW_COUNT,DATA_BLOCK_READ_CNT,DATA_BLOCK_CACHE_HIT,INDEX_BLOCK_READ_CNT,INDEX_BLOCK_CACHE_HIT,BLOCKSCAN_BLOCK_CNT,
BLOCKSCAN_ROW_CNT,PUSHDOWN_STORAGE_FILTER_ROW_CNT,REQUEST_MEMORY_USED,EXPECTED_WORKER_COUNT,USED_WORKER_COUNT,SCHED_INFO,
PS_CLIENT_STMT_ID,PS_INNER_STMT_ID,TX_ID,SNAPSHOT_VERSION,REQUEST_TYPE,IS_BATCHED_MULTI_STMT,OB_TRACE_INFO,PLAN_HASH,
PARAMS_VALUE,RULE_NAME,TX_INTERNAL_ROUTING,TX_STATE_VERSION,FLT_TRACE_ID,NETWORK_WAIT_TIME)
select
:1,SVR_IP,SVR_PORT,REQUEST_ID,SQL_EXEC_ID,TRACE_ID,SID,CLIENT_IP,CLIENT_PORT,TENANT_ID,EFFECTIVE_TENANT_ID,
TENANT_NAME,USER_ID,USER_NAME,USER_GROUP,USER_CLIENT_IP,DB_ID,DB_NAME,SQL_ID,to_char(substr(QUERY_SQL,1,3995)),PLAN_ID,AFFECTED_ROWS,
RETURN_ROWS,PARTITION_CNT,RET_CODE,QC_ID,DFO_ID,SQC_ID,WORKER_ID,EVENT,P1TEXT,P1,P2TEXT,P2,P3TEXT,P3,WAIT_CLASS_ID,
WAIT_CLASS#,WAIT_CLASS,STATE,WAIT_TIME_MICRO,TOTAL_WAIT_TIME_MICRO,TOTAL_WAITS,RPC_COUNT,PLAN_TYPE,IS_INNER_SQL,
IS_EXECUTOR_RPC,IS_HIT_PLAN,REQUEST_TIME,ELAPSED_TIME,NET_TIME,NET_WAIT_TIME,QUEUE_TIME,DECODE_TIME,GET_PLAN_TIME,
EXECUTE_TIME,APPLICATION_WAIT_TIME,CONCURRENCY_WAIT_TIME,USER_IO_WAIT_TIME,SCHEDULE_TIME,ROW_CACHE_HIT,
BLOOM_FILTER_CACHE_HIT,BLOCK_CACHE_HIT,DISK_READS,RETRY_CNT,TABLE_SCAN,CONSISTENCY_LEVEL,MEMSTORE_READ_ROW_COUNT,
SSSTORE_READ_ROW_COUNT,DATA_BLOCK_READ_CNT,DATA_BLOCK_CACHE_HIT,INDEX_BLOCK_READ_CNT,INDEX_BLOCK_CACHE_HIT,BLOCKSCAN_BLOCK_CNT,
BLOCKSCAN_ROW_CNT,PUSHDOWN_STORAGE_FILTER_ROW_CNT,REQUEST_MEMORY_USED,EXPECTED_WORKER_COUNT,USED_WORKER_COUNT,SCHED_INFO,
PS_CLIENT_STMT_ID,PS_INNER_STMT_ID,TX_ID,SNAPSHOT_VERSION,REQUEST_TYPE,IS_BATCHED_MULTI_STMT,OB_TRACE_INFO,PLAN_HASH,
PARAMS_VALUE,RULE_NAME,TX_INTERNAL_ROUTING,TX_STATE_VERSION,FLT_TRACE_ID,NETWORK_WAIT_TIME
from gv$ob_sql_audit where rownum <= 10';
/* execute the SQL */
v_sql_excute_start_time := SYSTIMESTAMP ;
EXECUTE IMMEDIATE V_SQL USING v_num;
v_sql_excute_end_time := SYSTIMESTAMP;
INSERT INTO xz_cdc_log values(v_num,v_start_time,v_end_time,v_start_scn,v_end_scn,(v_sql_excute_end_time-v_sql_excute_start_time));
/* commit the initial load; committed only once */
COMMIT ;
ELSIF v_num = 2 THEN
v_end_time := SYSTIMESTAMP ; -- end timestamp: the current time
v_start_time := v_end_time - interval '70' second; -- start timestamp: 70 seconds before the end time, the start of this pull
/*
where request_time > v_start_time:(v_end_time - interval '70' second) and request_time < v_end_time:SYSTIMESTAMP
*/
v_start_scn := timestamp_to_scn(v_start_time) / 1000 ; -- convert the start time to a start SCN
v_end_scn := timestamp_to_scn(v_end_time) / 1000 ; -- convert the end time to an end SCN
v_sql := '
insert /*+ APPEND PARALLEL(8) */ into xz_sql_audit_cdc(
C_BATCH_ID,SVR_IP,SVR_PORT,REQUEST_ID,SQL_EXEC_ID,TRACE_ID,SID,CLIENT_IP,CLIENT_PORT,TENANT_ID,EFFECTIVE_TENANT_ID,
TENANT_NAME,USER_ID,USER_NAME,USER_GROUP,USER_CLIENT_IP,DB_ID,DB_NAME,SQL_ID,QUERY_SQL,PLAN_ID,AFFECTED_ROWS,
RETURN_ROWS,PARTITION_CNT,RET_CODE,QC_ID,DFO_ID,SQC_ID,WORKER_ID,EVENT,P1TEXT,P1,P2TEXT,P2,P3TEXT,P3,WAIT_CLASS_ID,
WAIT_CLASS#,WAIT_CLASS,STATE,WAIT_TIME_MICRO,TOTAL_WAIT_TIME_MICRO,TOTAL_WAITS,RPC_COUNT,PLAN_TYPE,IS_INNER_SQL,
IS_EXECUTOR_RPC,IS_HIT_PLAN,REQUEST_TIME,ELAPSED_TIME,NET_TIME,NET_WAIT_TIME,QUEUE_TIME,DECODE_TIME,GET_PLAN_TIME,
EXECUTE_TIME,APPLICATION_WAIT_TIME,CONCURRENCY_WAIT_TIME,USER_IO_WAIT_TIME,SCHEDULE_TIME,ROW_CACHE_HIT,
BLOOM_FILTER_CACHE_HIT,BLOCK_CACHE_HIT,DISK_READS,RETRY_CNT,TABLE_SCAN,CONSISTENCY_LEVEL,MEMSTORE_READ_ROW_COUNT,
SSSTORE_READ_ROW_COUNT,DATA_BLOCK_READ_CNT,DATA_BLOCK_CACHE_HIT,INDEX_BLOCK_READ_CNT,INDEX_BLOCK_CACHE_HIT,BLOCKSCAN_BLOCK_CNT,
BLOCKSCAN_ROW_CNT,PUSHDOWN_STORAGE_FILTER_ROW_CNT,REQUEST_MEMORY_USED,EXPECTED_WORKER_COUNT,USED_WORKER_COUNT,SCHED_INFO,
PS_CLIENT_STMT_ID,PS_INNER_STMT_ID,TX_ID,SNAPSHOT_VERSION,REQUEST_TYPE,IS_BATCHED_MULTI_STMT,OB_TRACE_INFO,PLAN_HASH,
PARAMS_VALUE,RULE_NAME,TX_INTERNAL_ROUTING,TX_STATE_VERSION,FLT_TRACE_ID,NETWORK_WAIT_TIME)
select /*+ PARALLEL(8) query_timeout(50000000000) */
:1,SVR_IP,SVR_PORT,REQUEST_ID,SQL_EXEC_ID,TRACE_ID,SID,CLIENT_IP,CLIENT_PORT,TENANT_ID,EFFECTIVE_TENANT_ID,
TENANT_NAME,USER_ID,USER_NAME,USER_GROUP,USER_CLIENT_IP,DB_ID,DB_NAME,SQL_ID,to_char(substr(QUERY_SQL,1,3995)),PLAN_ID,AFFECTED_ROWS,
RETURN_ROWS,PARTITION_CNT,RET_CODE,QC_ID,DFO_ID,SQC_ID,WORKER_ID,EVENT,P1TEXT,P1,P2TEXT,P2,P3TEXT,P3,WAIT_CLASS_ID,
WAIT_CLASS#,WAIT_CLASS,STATE,WAIT_TIME_MICRO,TOTAL_WAIT_TIME_MICRO,TOTAL_WAITS,RPC_COUNT,PLAN_TYPE,IS_INNER_SQL,
IS_EXECUTOR_RPC,IS_HIT_PLAN,REQUEST_TIME,ELAPSED_TIME,NET_TIME,NET_WAIT_TIME,QUEUE_TIME,DECODE_TIME,GET_PLAN_TIME,
EXECUTE_TIME,APPLICATION_WAIT_TIME,CONCURRENCY_WAIT_TIME,USER_IO_WAIT_TIME,SCHEDULE_TIME,ROW_CACHE_HIT,
BLOOM_FILTER_CACHE_HIT,BLOCK_CACHE_HIT,DISK_READS,RETRY_CNT,TABLE_SCAN,CONSISTENCY_LEVEL,MEMSTORE_READ_ROW_COUNT,
SSSTORE_READ_ROW_COUNT,DATA_BLOCK_READ_CNT,DATA_BLOCK_CACHE_HIT,INDEX_BLOCK_READ_CNT,INDEX_BLOCK_CACHE_HIT,BLOCKSCAN_BLOCK_CNT,
BLOCKSCAN_ROW_CNT,PUSHDOWN_STORAGE_FILTER_ROW_CNT,REQUEST_MEMORY_USED,EXPECTED_WORKER_COUNT,USED_WORKER_COUNT,SCHED_INFO,
PS_CLIENT_STMT_ID,PS_INNER_STMT_ID,TX_ID,SNAPSHOT_VERSION,REQUEST_TYPE,IS_BATCHED_MULTI_STMT,OB_TRACE_INFO,PLAN_HASH,
PARAMS_VALUE,RULE_NAME,TX_INTERNAL_ROUTING,TX_STATE_VERSION,FLT_TRACE_ID,NETWORK_WAIT_TIME
from gv$ob_sql_audit where request_time > :2 and request_time < :3' ;
/* execute the SQL */
v_sql_excute_start_time := SYSTIMESTAMP ;
EXECUTE IMMEDIATE V_SQL USING v_num,v_start_scn,v_end_scn;
v_sql_excute_end_time := SYSTIMESTAMP;
-- record how long the SQL took to execute
v_interval := (v_sql_excute_end_time - v_sql_excute_start_time);
v_interval_char := to_char(extract(day from v_interval) * 86400 +
extract(hour from v_interval) * 3600 +
extract(minute from v_interval) * 60 +
extract(second from v_interval));
v_interval_num := to_number(v_interval_char);
INSERT INTO xz_cdc_log values(v_num,v_start_time,v_end_time,v_start_scn,v_end_scn,(v_sql_excute_end_time-v_sql_excute_start_time));
/* commit this batch */
COMMIT ;
ELSE
/* v_num = 1: first pass, the initial load; how much is loaded does not matter.
   v_num = 2: second pass, pulls one interval and establishes the v_start_time / v_end_time points.
   v_num = 3 onward: each pass inherits the end of the previous window:
      start_time = null        end_time = null
      start_time = t2 - 70s    end_time = t2
      start_time = t2          end_time = t3
      start_time = t3          end_time = t4
*/
v_start_time := v_end_time;
v_end_time := v_end_time + (interval '1' second * v_interval_num);
v_start_scn := timestamp_to_scn(v_start_time) / 1000 ; -- convert the start time to a start SCN
v_end_scn := timestamp_to_scn(v_end_time) / 1000 ; -- convert the end time to an end SCN
v_sql := '
insert /*+ APPEND PARALLEL(8) */ into xz_sql_audit_cdc(
C_BATCH_ID,SVR_IP,SVR_PORT,REQUEST_ID,SQL_EXEC_ID,TRACE_ID,SID,CLIENT_IP,CLIENT_PORT,TENANT_ID,EFFECTIVE_TENANT_ID,
TENANT_NAME,USER_ID,USER_NAME,USER_GROUP,USER_CLIENT_IP,DB_ID,DB_NAME,SQL_ID,QUERY_SQL,PLAN_ID,AFFECTED_ROWS,
RETURN_ROWS,PARTITION_CNT,RET_CODE,QC_ID,DFO_ID,SQC_ID,WORKER_ID,EVENT,P1TEXT,P1,P2TEXT,P2,P3TEXT,P3,WAIT_CLASS_ID,
WAIT_CLASS#,WAIT_CLASS,STATE,WAIT_TIME_MICRO,TOTAL_WAIT_TIME_MICRO,TOTAL_WAITS,RPC_COUNT,PLAN_TYPE,IS_INNER_SQL,
IS_EXECUTOR_RPC,IS_HIT_PLAN,REQUEST_TIME,ELAPSED_TIME,NET_TIME,NET_WAIT_TIME,QUEUE_TIME,DECODE_TIME,GET_PLAN_TIME,
EXECUTE_TIME,APPLICATION_WAIT_TIME,CONCURRENCY_WAIT_TIME,USER_IO_WAIT_TIME,SCHEDULE_TIME,ROW_CACHE_HIT,
BLOOM_FILTER_CACHE_HIT,BLOCK_CACHE_HIT,DISK_READS,RETRY_CNT,TABLE_SCAN,CONSISTENCY_LEVEL,MEMSTORE_READ_ROW_COUNT,
SSSTORE_READ_ROW_COUNT,DATA_BLOCK_READ_CNT,DATA_BLOCK_CACHE_HIT,INDEX_BLOCK_READ_CNT,INDEX_BLOCK_CACHE_HIT,BLOCKSCAN_BLOCK_CNT,
BLOCKSCAN_ROW_CNT,PUSHDOWN_STORAGE_FILTER_ROW_CNT,REQUEST_MEMORY_USED,EXPECTED_WORKER_COUNT,USED_WORKER_COUNT,SCHED_INFO,
PS_CLIENT_STMT_ID,PS_INNER_STMT_ID,TX_ID,SNAPSHOT_VERSION,REQUEST_TYPE,IS_BATCHED_MULTI_STMT,OB_TRACE_INFO,PLAN_HASH,
PARAMS_VALUE,RULE_NAME,TX_INTERNAL_ROUTING,TX_STATE_VERSION,FLT_TRACE_ID,NETWORK_WAIT_TIME)
select /*+ PARALLEL(8) query_timeout(50000000000) */
:1,SVR_IP,SVR_PORT,REQUEST_ID,SQL_EXEC_ID,TRACE_ID,SID,CLIENT_IP,CLIENT_PORT,TENANT_ID,EFFECTIVE_TENANT_ID,
TENANT_NAME,USER_ID,USER_NAME,USER_GROUP,USER_CLIENT_IP,DB_ID,DB_NAME,SQL_ID,to_char(substr(QUERY_SQL,1,3995)),PLAN_ID,AFFECTED_ROWS,
RETURN_ROWS,PARTITION_CNT,RET_CODE,QC_ID,DFO_ID,SQC_ID,WORKER_ID,EVENT,P1TEXT,P1,P2TEXT,P2,P3TEXT,P3,WAIT_CLASS_ID,
WAIT_CLASS#,WAIT_CLASS,STATE,WAIT_TIME_MICRO,TOTAL_WAIT_TIME_MICRO,TOTAL_WAITS,RPC_COUNT,PLAN_TYPE,IS_INNER_SQL,
IS_EXECUTOR_RPC,IS_HIT_PLAN,REQUEST_TIME,ELAPSED_TIME,NET_TIME,NET_WAIT_TIME,QUEUE_TIME,DECODE_TIME,GET_PLAN_TIME,
EXECUTE_TIME,APPLICATION_WAIT_TIME,CONCURRENCY_WAIT_TIME,USER_IO_WAIT_TIME,SCHEDULE_TIME,ROW_CACHE_HIT,
BLOOM_FILTER_CACHE_HIT,BLOCK_CACHE_HIT,DISK_READS,RETRY_CNT,TABLE_SCAN,CONSISTENCY_LEVEL,MEMSTORE_READ_ROW_COUNT,
SSSTORE_READ_ROW_COUNT,DATA_BLOCK_READ_CNT,DATA_BLOCK_CACHE_HIT,INDEX_BLOCK_READ_CNT,INDEX_BLOCK_CACHE_HIT,BLOCKSCAN_BLOCK_CNT,
BLOCKSCAN_ROW_CNT,PUSHDOWN_STORAGE_FILTER_ROW_CNT,REQUEST_MEMORY_USED,EXPECTED_WORKER_COUNT,USED_WORKER_COUNT,SCHED_INFO,
PS_CLIENT_STMT_ID,PS_INNER_STMT_ID,TX_ID,SNAPSHOT_VERSION,REQUEST_TYPE,IS_BATCHED_MULTI_STMT,OB_TRACE_INFO,PLAN_HASH,
PARAMS_VALUE,RULE_NAME,TX_INTERNAL_ROUTING,TX_STATE_VERSION,FLT_TRACE_ID,NETWORK_WAIT_TIME
from gv$ob_sql_audit where request_time > :2 and request_time < :3' ;
/* batch insert: execute the SQL */
v_sql_excute_start_time := SYSTIMESTAMP ;
EXECUTE IMMEDIATE V_SQL USING v_num,v_start_scn,v_end_scn;
v_sql_excute_end_time := SYSTIMESTAMP;
-- record how long the SQL took to execute
v_interval := (v_sql_excute_end_time - v_sql_excute_start_time);
v_interval_char := to_char(extract(day from v_interval) * 86400 +
extract(hour from v_interval) * 3600 +
extract(minute from v_interval) * 60 +
extract(second from v_interval));
v_interval_num := to_number(v_interval_char);
INSERT INTO xz_cdc_log values(v_num,v_start_time,v_end_time,v_start_scn,v_end_scn,(v_sql_excute_end_time-v_sql_excute_start_time));
/* commit this batch */
COMMIT ;
END IF ;
v_num := v_num + 1;
v_current_date := SYSTIMESTAMP ;
END LOOP ;
COMMIT ;
-- debugging code
/* if v_stop_date = to_date('2024-01-01','YYYY-MM-DD') THEN
DBMS_OUTPUT.PUT_LINE('true') ;
else
DBMS_OUTPUT.PUT_LINE('false') ;
END IF;
DBMS_OUTPUT.PUT_LINE('debug - p_stop_date: '|| to_char(v_stop_date,'YYYY-MM-DD HH24:MI:SS') );
*/
END;
END ob_tools;
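The pacing logic in real_time_cdc_proc above — from the third iteration on, each new pull window starts where the previous one ended and spans roughly the time the previous insert took to run — can be sketched outside the database. The following Python sketch is illustrative only; the function name and types are assumptions, not part of the ob_tools package:

```python
from datetime import datetime, timedelta

def next_window(prev_end: datetime, last_exec_seconds: float) -> tuple[datetime, datetime]:
    """Advance the CDC pull window the way real_time_cdc_proc does from the
    third iteration on: the new window starts where the previous one ended
    and spans the time the previous insert took, so collection keeps pace
    with how fast the audit data can actually be copied."""
    start = prev_end
    end = prev_end + timedelta(seconds=last_exec_seconds)
    return start, end

# The second iteration seeds the first real window as [now - 70s, now]:
seed_end = datetime(2024, 9, 10, 12, 0, 0)
seed_start = seed_end - timedelta(seconds=70)
```

If an insert takes longer than the window it covers, the window simply grows to match, so the loop self-throttles instead of falling ever further behind.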
I. extract_sqlaudit_proc Procedure: Introduction and Usage
1. Introduction
extract_sqlaudit_proc is the first stored procedure I wrote. It grew out of the need to analyze slow SQL for a core system ISV V8 core application during stress testing at a bank project.
Finding slow SQL through OCP's SQL diagnostics is certainly possible; we chose not to analyze with OCP for the following reasons:
- OCP's SQL diagnostic data can be biased. For example, at one bank we hit a case where OCP counted a statement's RPC count toward its execution count, which caused some misunderstanding with the core system ISV's developers.
- OCP's SQL diagnostics only show aggregated averages. They cannot answer questions such as: which SQL statements does a single transaction contain in each scenario, in what order does the transaction interface execute them, and which of a single transaction interface's statements are slow? OCP simply does not collect the corresponding data.
- Once the database and application have been tuned, it is difficult to compare historical data against the current, optimized data.
Because of these pain points, on a later visit to the customer site it took about half a day to write and debug this procedure into usable shape. All subsequent data for the core system ISV core system's transaction scenarios, whether single-transaction SQL statements or concurrent stress-test runs, has been collected and persisted through it.
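Each collection run is tagged with a batch id, and every persisted row gets a composite key built as `(batch_id * 10000000000) + rownum` — the `(:1 * 10000000000)+i` expression in the procedure — so one column both identifies the run and orders rows within it. A minimal sketch of that encoding, in Python purely for illustration (the helper names are hypothetical):

```python
BATCH_BASE = 10_000_000_000  # each batch can hold up to 10**10 rows

def encode_row_id(batch_id: int, rownum: int) -> int:
    """Combine the collection-batch number and the per-batch row number
    into one key, mirroring (:1 * 10000000000)+i in extract_sqlaudit_proc."""
    return batch_id * BATCH_BASE + rownum

def decode_row_id(row_id: int) -> tuple[int, int]:
    """Recover (batch_id, rownum) from the combined key."""
    return divmod(row_id, BATCH_BASE)
```

Decoding is a single divmod, which is why `where c_batch_id = N` plus the row id is enough to replay a run's statements in capture order.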
The first version was fairly crude and performed poorly: persisting 5 minutes of stress-test data (roughly 20-30 million rows) from a 100-concurrency run took more than 10 minutes to pull. After gradually optimizing the logic, pulling a large data set with 16-way parallelism now takes only a few minutes.
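The procedure filters gv$ob_sql_audit by request_time, which holds microseconds since the Unix epoch; inside the database the wall-clock bounds are converted through `timestamp_to_scn(...) / 1000`. The same idea can be sketched in Python, assuming UTC for simplicity (this hypothetical helper is not part of the package):

```python
from datetime import datetime, timezone

def to_request_time_us(ts: str) -> int:
    """Convert a 'YYYY-MM-DD HH24:MI:SS' wall-clock string into a
    microseconds-since-epoch value comparable to request_time.
    Assumes UTC; the procedure itself converts via timestamp_to_scn."""
    dt = datetime.strptime(ts, '%Y-%m-%d %H:%M:%S').replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1_000_000)
```

With both bounds converted this way, `request_time >= :5 and request_time <= :6` becomes a plain integer range scan.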
PROCEDURE extract_sqlaudit_proc(p_NAME VARCHAR2, p_FLAG char, p_START_TIME VARCHAR2, p_END_TIME VARCHAR2)
IS
V_NAME VARCHAR2(100):=upper(P_NAME); -- scenario name
V_FLAG CHAR(2) :=upper(p_FLAG); -- S for pulling a single transaction, M for pulling multiple transactions (stress testing)
V_MAX_BATCH INT;
V_START_TIME VARCHAR2(100) := P_START_TIME; -- start time
V_END_TIME VARCHAR2(100):=p_END_TIME; -- end time
V_START_SCN NUMBER;
V_END_SCN NUMBER;
/*
V_NAME: name of the scenario
V_FLAG: S for pulling a single transaction, M for pulling multiple transactions (stress testing)
V_MAX_BATCH: records the batch number
*/
V_SQL VARCHAR2(4000);
V_FLAG_EXCEPTION EXCEPTION;
V_END_TIME_SMALL_EXCEPTION EXCEPTION;
BEGIN
/*
Declaration
Procedure Name: extract_sqlaudit_proc
Description: This procedure extracts data from the `gv$ob_sql_audit` view for a specified time period and stores it in the `xz_sql_audit` table for subsequent analysis.
Created Date: 2024-08-15
Author: (YZJ, Susa)
Version: v1.2
Requirement Background:
This stored procedure is written to fulfill the requirement of on-line transaction pressure testing under different scenarios for the core system of a core system ISV V8 Microservices (TCC architecture) of XX Bank.
By collecting data during the stress test, slow SQL can be analyzed and optimization recommendations can be provided.
The means of analysis requires manually writing the analyzed SQL.
Creation Prerequisites:
The `xz_sql_audit` analysis table must be pre-created to ensure that the `extract_sqlaudit_proc` procedure compiles successfully.
Invocation Methods:
It is recommended to use the obclient command line tool with server output turned on:
```
set serveroutput on;
```
Example call:
```
BEGIN
extract_sqlaudit_proc(
P_NAME => 'IB1261 One Borrower One Loan Single Transaction',
-- Describe the specific scenario and use for which the data was collected.
-- If it is concurrent pressure test data, it can be written as: 'IB1261 200 concurrent pressure test 5 minutes'
p_FLAG => 'S',
-- Specify the type of pressure test: 'S' for single transaction, 'M' for concurrent
p_START_TIME => '2024-09-03 18:06:00',
-- Specify the start time of the pressure test
p_END_TIME => '2024-09-03 18:06:20'
-- Specifies the end time of the pressure test
);
END;
/
```
*/
IF TO_TIMESTAMP(V_START_TIME,'YYYY-MM-DD HH24:MI:SS:FF6') > TO_TIMESTAMP(V_END_TIME,'YYYY-MM-DD HH24:MI:SS:FF6') THEN
RAISE V_END_TIME_SMALL_EXCEPTION;
END IF;
IF V_FLAG != 'S' AND V_FLAG != 'M' THEN
RAISE V_FLAG_EXCEPTION;
END IF;
-- Find the current maximum batch id and add 1
SELECT NVL(MAX(C_BATCH_ID),0) +1 INTO V_MAX_BATCH FROM XZ_SQL_AUDIT;
-- Convert incoming values to string timestamp format.
V_START_TIME := TO_CHAR(TO_TIMESTAMP(V_START_TIME,'YYYY-MM-DD HH24:MI:SS:FF6'),'YYYY-MM-DD HH24:MI:SS:FF6');
V_END_TIME := TO_CHAR(TO_TIMESTAMP(V_END_TIME,'YYYY-MM-DD HH24:MI:SS:FF6'),'YYYY-MM-DD HH24:MI:SS:FF6');
V_START_SCN := timestamp_to_scn(TO_TIMESTAMP(V_START_TIME,'YYYY-MM-DD HH24:MI:SS:FF6')) / 1000 ; -- convert start time to start scn
V_END_SCN := timestamp_to_scn(TO_TIMESTAMP(V_END_TIME,'YYYY-MM-DD HH24:MI:SS:FF6')) / 1000 ; -- convert end time to end scn
V_SQL:='
insert /*+ ENABLE_PARALLEL_DML PARALLEL(16)*/ into XZ_SQL_AUDIT
select (:1 * 10000000000)+i , scn_to_timestamp(x.request_time * 1000),:2,:3,:4,
x.SVR_IP,x.SVR_PORT,x.REQUEST_ID,x.SQL_EXEC_ID,x.TRACE_ID,x.SID,x.CLIENT_IP,x.CLIENT_PORT,
x.TENANT_ID,x.EFFECTIVE_TENANT_ID,x.TENANT_NAME,x.USER_ID,x.USER_NAME,x.USER_GROUP,x.USER_CLIENT_IP,
x.DB_ID,x.DB_NAME,x.SQL_ID,to_char(substr(x.QUERY_SQL,1,3995)),x.PLAN_ID,x.AFFECTED_ROWS,x.RETURN_ROWS,x.PARTITION_CNT,
x.RET_CODE,x.QC_ID,x.DFO_ID,x.SQC_ID,x.WORKER_ID,x.EVENT,x.P1TEXT,x.P1,x.P2TEXT,x.P2,x.P3TEXT,x.P3,
x.WAIT_CLASS_ID,x.WAIT_CLASS#,x.WAIT_CLASS,x.STATE,x.WAIT_TIME_MICRO,x.TOTAL_WAIT_TIME_MICRO,x.TOTAL_WAITS,
x.RPC_COUNT,x.PLAN_TYPE,x.IS_INNER_SQL,x.IS_EXECUTOR_RPC,x.IS_HIT_PLAN,x.REQUEST_TIME,x.ELAPSED_TIME,
x.NET_TIME,x.NET_WAIT_TIME,x.QUEUE_TIME,x.DECODE_TIME,x.GET_PLAN_TIME,x.EXECUTE_TIME,x.APPLICATION_WAIT_TIME,
x.CONCURRENCY_WAIT_TIME,x.USER_IO_WAIT_TIME,x.SCHEDULE_TIME,x.ROW_CACHE_HIT,x.BLOOM_FILTER_CACHE_HIT,
x.BLOCK_CACHE_HIT,x.DISK_READS,x.RETRY_CNT,x.TABLE_SCAN,x.CONSISTENCY_LEVEL,x.MEMSTORE_READ_ROW_COUNT,
x.SSSTORE_READ_ROW_COUNT,x.DATA_BLOCK_READ_CNT,x.DATA_BLOCK_CACHE_HIT,x.INDEX_BLOCK_READ_CNT,
x.INDEX_BLOCK_CACHE_HIT,x.BLOCKSCAN_BLOCK_CNT,x.BLOCKSCAN_ROW_CNT,x.PUSHDOWN_STORAGE_FILTER_ROW_CNT,
x.REQUEST_MEMORY_USED,x.EXPECTED_WORKER_COUNT,x.USED_WORKER_COUNT,x.SCHED_INFO,x.PS_CLIENT_STMT_ID,
x.PS_INNER_STMT_ID,x.TX_ID,x.SNAPSHOT_VERSION,x.REQUEST_TYPE,x.IS_BATCHED_MULTI_STMT,x.OB_TRACE_INFO,
x.PLAN_HASH,x.PARAMS_VALUE,x.RULE_NAME,x.TX_INTERNAL_ROUTING,x.TX_STATE_VERSION,x.FLT_TRACE_ID,x.NETWORK_WAIT_TIME
from(
select /*+ PARALLEL(16) query_timeout(50000000000) */ rownum i, a.* from GV$OB_SQL_AUDIT a where
query_sql not like ''%tsp_datasource_check_config%''
and query_sql not like ''%tsp_instans_heartbeat%''
and query_sql not like ''%tsp_instans_param_version%''
and query_sql not like ''%tsp_service_in%''
and query_sql not like ''%tsp_mutex%''
and query_sql not like ''%tsp_param_sync_task%''
and query_sql not like ''%ob_sql_audit%''
and query_sql not like ''%ALTER%TABLE%''
and query_sql not like ''%extract_sqlaudit_proc%''
and TENANT_NAME <> ''sys''
) x where x.request_time >= :5 and x.request_time <= :6
';
/*
The query_sql NOT LIKE filters exclude unwanted data; for example, tables beginning with tsp_ belong to background tasks of the core system ISV that are not involved in transaction scenarios, so they are excluded in advance.
*/
EXECUTE IMMEDIATE V_SQL USING V_MAX_BATCH,V_MAX_BATCH,V_NAME,V_FLAG,V_START_SCN,V_END_SCN;
COMMIT;
DBMS_OUTPUT.PUT_LINE(P_START_TIME || ' ~ ' || P_END_TIME || ' time period can be queried with: select * from xz_sql_audit where c_batch_id = ' || V_MAX_BATCH || ' ;');
/*
-- Debugging Code
dbms_output.put_line(V_NAME);
dbms_output.put_line(V_MAX_BATCH);
dbms_output.put_line(V_START_TIME);
dbms_output.put_line(V_END_TIME);
dbms_output.put_line(V_SQL);
*/
EXCEPTION
WHEN V_FLAG_EXCEPTION THEN
DBMS_OUTPUT.PUT_LINE('Invalid value for the p_FLAG parameter. S: pull a single transaction; M: pull multiple transactions (pressure test).');
WHEN V_END_TIME_SMALL_EXCEPTION THEN
DBMS_OUTPUT.PUT_LINE('End time: ' || V_END_TIME || ' is earlier than start time: ' || V_START_TIME || ', please check the times passed in.');
END extract_sqlaudit_proc;
2. Call the stored procedure
Each time the application staff executes a concurrent pressure test task/single transaction task, they will provide the task start time and end time, at which point they can use a stored procedure to pull the data.
It is recommended to execute the procedure from the command line (obclient); before executing, turn on server output:
set serveroutput on;
BEGIN
ob_tools.extract_sqlaudit_proc(
P_NAME => 'IB1261 One Borrower One Loan Single Transaction',
-- Describe the specific scenario and use for which the data was collected.
-- If it is concurrent pressure test data, it can be written as: 'IB1261 200 concurrent pressure test 5 minutes'
p_FLAG => 'S',
-- Specify the type of pressure test: 'S' for single transaction, 'M' for concurrent
p_START_TIME => '2024-09-28 11:58:00',
-- Specify the start time of the pressure test
p_END_TIME => '2024-09-28 11:59:20'
-- Specifies the end time of the pressure test
);
END;
/
Specifying c_batch_id = 2 then retrieves the data for the period 2024-09-28 11:58:00 ~ 2024-09-28 11:59:20.
3. Description of parameters
P_NAME
Parameter: describes the specific scenario and purpose of the collected data. The clearer the description, the better; whether the data is for a single transaction, a concurrent pressure test, or some other purpose should be stated explicitly. The name is written into the c_name field of the xz_sql_audit table, so c_batch_id can be used directly to find and aggregate the data of different scenarios and purposes for easy comparison.
P_FLAG
Parameter: this parameter has only two values. S: single transaction data; M: concurrent pressure test data. In practice it is another data-labeling field that supplements P_NAME.
P_START_TIME
Parameter: the start time of the data to pull; usually the start time of the pressure test. Time format: yyyy-mm-dd hh24:mi:ss.ff6.
P_END_TIME
Parameter: the end time of the data to pull; usually the end time of the pressure test. Time format: yyyy-mm-dd hh24:mi:ss.ff6.
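Right after a pull completes, the procedure prints the c_batch_id to query. If that output is lost, the collected batches can still be listed by name; a minimal sketch using only the xz_sql_audit columns referenced in this manual (c_batch_id, c_name):

```sql
-- List collected batches with their scenario names and row counts,
-- so the right c_batch_id can be picked for analysis.
select c_batch_id,
       c_name,
       count(*) as audit_rows
from xz_sql_audit
group by c_batch_id, c_name
order by c_batch_id;
```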
4. Attention
If the required time range falls outside the retention window of gv$ob_sql_audit, the corresponding data will not be available. Therefore, as soon as possible after the application pressure test ends, use the extract_sqlaudit_proc stored procedure to pull the data and solidify it into the xz_sql_audit table, before the pressure test data in gv$ob_sql_audit is evicted.
During a banking project, a typical stress test for a single transaction scenario was 200 concurrent sessions lasting about 5 minutes. To avoid exceeding the data-carrying capacity of gv$ob_sql_audit, it is recommended to keep the pressure test duration under control so that data does not overflow or get lost.
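Before pulling, it can be worth checking whether the desired window is still inside the retention of gv$ob_sql_audit. A sketch, reusing the same request_time-to-timestamp conversion that extract_sqlaudit_proc itself applies:

```sql
-- Earliest and latest records still held in gv$ob_sql_audit.
-- If the pull's start time is earlier than earliest_record,
-- part of the window has already been evicted.
select min(scn_to_timestamp(request_time * 1000)) as earliest_record,
       max(scn_to_timestamp(request_time * 1000)) as latest_record
from gv$ob_sql_audit;
```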
5. Analysis of slow SQL data
5.1. Analyze slow SQL during concurrent pressure testing
SELECT /*+ PARALLEL(8) */
       xzzz.c_name,
       xzzz.c_start_time,
       xzzz.c_end_time,
       xzzz.user_name,
       xzzz.db_name,
       xzzz.SQL_ID,
       xzzz.sql_statement,
       nvl(ob_tools.get_table_info(ob_tools.get_table_name(xzzz.db_name, xzzz.sql_statement)), 'none') table_info,
       xzzz.sql_execution_count,
       xzzz.plan_type_count,
       xzzz.sql_cpu_time_ratio,
       xzzz.avg_execute_time_ms,
       xzzz.avg_elapsed_time_ms,
       xzzz.max_elapsed_time_ms,
       xzzz.avg_retry_cnt,
       xzzz.avg_rpc_count,
       xzzz.avg_net_time_ms,
       xzzz.avg_net_wait_time_ms,
       xzzz.avg_queue_time_ms,
       xzzz.avg_decode_time_ms,
       xzzz.avg_get_plan_time_ms
from (
SELECT
       xzz.c_name as c_name,
       xzz.c_start_time as c_start_time,
       xzz.c_end_time as c_end_time,
       upper(xzz.user_name) as user_name,
       upper(xzz.db_name) as db_name,
       xzz.sql_id as SQL_ID,
       count(1) as sql_execution_count,
       max(xzz.c_query_sql) as sql_statement,
       max(xzz.c_plan_cnt) as plan_type_count,
       round((sum(xzz.execute_time) / xzz.cpu_sum) * 100, 2) as sql_cpu_time_ratio,
       round(avg(xzz.execute_time), 2)/1000 as avg_execute_time_ms,
       round(avg(xzz.elapsed_time), 2)/1000 as avg_elapsed_time_ms,
       round(max(xzz.elapsed_time), 2)/1000 as max_elapsed_time_ms,
       round(avg(xzz.retry_cnt), 2) as avg_retry_cnt,
       round(avg(xzz.rpc_count), 2) as avg_rpc_count,
       round(avg(xzz.net_time), 2)/1000 as avg_net_time_ms,
       round(avg(xzz.net_wait_time), 2)/1000 as avg_net_wait_time_ms,
       round(avg(xzz.queue_time), 2)/1000 as avg_queue_time_ms,
       round(avg(xzz.decode_time), 2)/1000 as avg_decode_time_ms,
       round(avg(xzz.get_plan_time), 2)/1000 as avg_get_plan_time_ms
from (
SELECT xz.*,
LISTAGG(distinct UPPER(xz.c_plan_type) || ':' || xz.c_plan_type_cnt , ',')
WITHIN GROUP(ORDER BY xz.c_plan_type)
over(partition by xz.c_batch_id,xz.user_name,xz.db_name,xz.sql_id) c_plan_cnt
from (
SELECT
min(to_char(x.c_rq_time,'YYYY-MM-DD HH24:MI:SS.FF6')) over(partition by x.c_batch_id) c_start_time,
max(to_char(x.c_rq_time,'YYYY-MM-DD HH24:MI:SS.FF6')) over(partition by x.c_batch_id) c_end_time,
count(x.c_plan_type) over(partition by x.c_batch_id,x.user_name,x.db_name,x.sql_id,x.c_plan_type) c_plan_type_cnt ,
x.*
from (
SELECT
b.*,
to_char(substr(b.query_sql, 1, 4000)) c_query_sql,
(case
when b.PLAN_TYPE = 0 then 'inner'
when b.PLAN_TYPE = 1 then 'local'
when b.PLAN_TYPE = 2 then 'remote'
when b.PLAN_TYPE = 3 then 'distributed'
else null end) c_plan_type,
sum(b.execute_time) over(partition by b.c_name) cpu_sum
FROM xz_sql_audit b
where b.user_name not in ('YZJ')
and b.db_name not in ('YZJ')
and b.c_batch_id = 2 -- select the dataset to analyze
) x
) xz
) xzz
GROUP BY
       xzz.c_name,
       xzz.c_start_time,
       xzz.c_end_time,
       xzz.db_name,
       xzz.user_name,
       xzz.sql_id,
       xzz.cpu_sum
) xzzz
where xzzz.sql_cpu_time_ratio > 0.5
ORDER BY 1 ASC, 11 DESC;
5.2. Analyze the number of single/concurrent transaction SQL plan types for different scenarios
with x as (select sql_id,plan_type from xz_sql_audit where c_batch_id = 16 ) /* just change the c_batch_id here */
select count(sql_id) cnt,
(case
when a.PLAN_TYPE = 0 then 'inner'
when a.PLAN_TYPE = 1 then 'local'
when a.PLAN_TYPE = 2 then 'remote'
when a.PLAN_TYPE = 3 then 'distributed'
else null
end ) PLAN_TYPE
from x a group by plan_type
order by 1 desc;
5.3. Analyze single/concurrent SQL with RPC greater than 0
with x AS
(SELECT /*+ PARALLEL(32) */ * FROM xz_sql_audit WHERE c_batch_id = 2)
SELECT a.sql_id,
a.sql_excute_cnt,
a.rpc_sum,
a.rpc_avg,
(SELECT to_char(substr(b.query_sql,1,3000)) FROM x b WHERE a.sql_id = b.sql_id AND rownum <=1) query_sql
FROM
(SELECT a.sql_id,
count(1) sql_excute_cnt,
sum(a.RPC_COUNT) rpc_sum,
round(avg(a.RPC_COUNT)) rpc_avg
FROM x a
WHERE not(query_sql LIKE '%COMMIT%'
OR query_sql LIKE '%ROLLBACK%')
GROUP BY a.sql_id
HAVING sum(a.RPC_COUNT) > 0
) a
ORDER BY 3 desc;
5.4. Analyze the order of SQL execution for a single transaction (only single transactions can be analyzed)
with x as (select /*+ PARALLEL(32) */ * from xz_sql_audit where c_batch_id = 2)
SELECT
    a.start_time     business_txn_start_time,
    a.end_time       business_txn_end_time,
    a.c_name         transaction_scenario,
    a.c_rq_time_char sql_request_time,
    a.svr_ip         server_ip,
    a.user_name      user_name,
    a.tx_id          transaction_id,
    a.tx_id_rn       order_within_transaction,
    a.query_sql      request_sql,
    a.net_time       rpc_receive_time,
    a.net_wait_time  request_receive_time,
    a.QUEUE_TIME     queue_time,
    a.DECODE_TIME    decode_time,
    a.get_plan_time  get_plan_time,
    a.execute_time   execute_time,
    a.RETRY_CNT      retry_cnt,
    a.RPC_COUNT      rpc_count,
    a.ELAPSED_TIME   server_elapsed_time,
    a.TABLE_SCAN     is_full_table_scan,
    a.PLAN_TYPE      plan_type,
    a.IS_HIT_PLAN    is_plan_cache_hit
FROM (
SELECT
TO_CHAR(b.start_time,'YYYY-MM-DD HH24:MI:SS.FF6') start_time,
TO_CHAR(b.end_time,'YYYY-MM-DD HH24:MI:SS.FF6') end_time,
a.c_name,
TO_CHAR(a.c_rq_time,'YYYY-MM-DD HH24:MI:SS.FF6') c_rq_time_char,
a.svr_ip svr_ip,
a.user_name user_name,
a.tx_id tx_id,
row_number() over(partition by a.tx_id order by c_rq_time asc) tx_id_rn ,
to_char(substr(a.query_sql,1,3000)) query_sql ,
a.net_time,
a.net_wait_time,
a.QUEUE_TIME,
a.DECODE_TIME,
a.get_plan_time,
a.execute_time,
a.RETRY_CNT,
a.RPC_COUNT,
a.ELAPSED_TIME,
a.TABLE_SCAN,
a.IS_HIT_PLAN,
(case
when a.PLAN_TYPE = 0 then 'inner'
when a.PLAN_TYPE = 1 then 'local'
when a.PLAN_TYPE = 2 then 'remote'
when a.PLAN_TYPE = 3 then 'distributed'
end ) PLAN_TYPE,
a.c_rq_time
FROM x a inner join (
SELECT min(c_rq_time) start_time,
       max(c_rq_time) end_time,
       to_char(min(c_rq_time),'YYYY-MM-DD HH24:MI:SS.FF6') start_time_char,
       to_char(max(c_rq_time),'YYYY-MM-DD HH24:MI:SS.FF6') end_time_char,
min(to_char(substr(query_sql,1,3000))) min_sql,
max(to_char(substr(query_sql,1,3000))) max_sql,
C_BATCH_ID
from x where (QUERY_SQL like '%insert%in_log%' or QUERY_SQL like '%update%in_log%') -- in the core system ISV, every scenario interface begins with an insert into the service_in_log table and ends with an update to it
GROUP BY C_BATCH_ID
) b on a.c_batch_id = b.c_batch_id and a.c_rq_time between b.start_time
and b.end_time and a.user_name <> 'YZJ' ) A
order by 4 asc, 7 asc, 8 asc; -- order by SQL request time, transaction ID, and order within the transaction
II. Introduction and use of the real_time_cdc_proc procedure
1. Introduction
The real_time_cdc_proc procedure is an advanced version of extract_sqlaudit_proc: it automates the manual operation of collecting and solidifying data each time. It was written to meet the need to continuously collect slow SQL for analysis during batch runs of a core system ISV V8 core application in a bank project.
Because a batch run lasts a long time and the data in the gv$ob_sql_audit view can be evicted very quickly, a way to collect data continuously is needed. What real_time_cdc_proc does is periodically solidify gv$ob_sql_audit data into the XZ_SQL_AUDIT_CDC table.
PROCEDURE real_time_cdc_proc( p_END_TIME VARCHAR2) -- end time
IS
p_stop_date varchar(200) := p_END_TIME; -- incoming stop time; CDC extracts gv$ob_sql_audit data incrementally until this time
v_stop_date TIMESTAMP ; -- the stop time converted from the string parameter
v_num int := 1; -- loop counter / control variable
v_current_date TIMESTAMP; -- current date
v_start_time TIMESTAMP ; -- start time of the CDC pull; core variable
v_end_time TIMESTAMP ; -- end time of the CDC pull; core variable
v_start_scn NUMBER ; -- start SCN of the CDC pull; core variable
v_end_scn NUMBER ; -- end SCN of the CDC pull; core variable
v_sql_excute_start_time TIMESTAMP ;
v_sql_excute_end_time TIMESTAMP ;
v_interval INTERVAL DAY TO SECOND; -- elapsed time of each insert into xz_sql_audit_cdc
v_interval_char varchar2(2000); -- the interval of each insert converted to a string
v_interval_num NUMBER;
v_sql varchar2(4000);
BEGIN
/*
Header
Procedure name: real_time_cdc_proc
Function: collects data from the gv$ob_sql_audit view in real time and solidifies it into the xz_sql_audit_cdc statistics table.
Creation date: 2024-09-10
Author: YZJ, Sudo
Version: v2.0
Requirement statement:
    This procedure was designed so that, during batch-run tasks of bank XX's core system (ISV V8 microservices, TCC architecture), the SQL issued by the run can be collected in real time.
    Because gv$ob_sql_audit data is flushed periodically once it exceeds the memory threshold, and the application's batch runs last a long time, the only option is to collect the gv$ob_sql_audit data in real time and solidify it into the xz_sql_audit_cdc statistics table.
    This procedure mainly collects all data from the gv$ob_sql_audit view, for subsequent slow SQL analysis and optimization suggestions.
    The analysis SQL must be written manually.
Prerequisites:
    Before creating this procedure, the following objects need to be created in advance:
    1. Sequence: SEQ_XZ_SQL_AUDIT_CDC_CID
    2. Log table: XZ_CDC_LOG
       - Records how many times data has been collected into xz_sql_audit_cdc and the time of each collection.
    3. Data analysis table: XZ_SQL_AUDIT_CDC
       - Solidifies all data from the gv$ob_sql_audit view; subsequent analysis mainly targets this table.
Invocation:
    It is recommended to use the obclient command-line tool and run it in the background from a shell script:
```
BEGIN
real_time_cdc_proc(p_END_TIME => '2024-12-12 01:00:00');
-- 2024-12-12 01:00:00 is the time at which the procedure ends its run
END;
/
```
*/
BEGIN
/* If the incoming p_stop_date cannot be converted to a date, report the error, raise the exception, and skip the rest of the code */
v_stop_date := to_timestamp(p_stop_date,'YYYY-MM-DD HH24:MI:SS');
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('p_stop_date: please pass a value that can be converted to a date type!');
RETURN ;
END;
v_current_date := SYSTIMESTAMP;
WHILE v_current_date <= v_stop_date LOOP
IF v_num = 1 THEN
/* Initialize data: insert 10 rows into xz_sql_audit_cdc */
v_sql := '
insert into xz_sql_audit_cdc(
C_BATCH_ID,SVR_IP,SVR_PORT,REQUEST_ID,SQL_EXEC_ID,TRACE_ID,SID,CLIENT_IP,CLIENT_PORT,TENANT_ID,EFFECTIVE_TENANT_ID,
TENANT_NAME,USER_ID,USER_NAME,USER_GROUP,USER_CLIENT_IP,DB_ID,DB_NAME,SQL_ID,QUERY_SQL,PLAN_ID,AFFECTED_ROWS,
RETURN_ROWS,PARTITION_CNT,RET_CODE,QC_ID,DFO_ID,SQC_ID,WORKER_ID,EVENT,P1TEXT,P1,P2TEXT,P2,P3TEXT,P3,WAIT_CLASS_ID,
WAIT_CLASS#,WAIT_CLASS,STATE,WAIT_TIME_MICRO,TOTAL_WAIT_TIME_MICRO,TOTAL_WAITS,RPC_COUNT,PLAN_TYPE,IS_INNER_SQL,
IS_EXECUTOR_RPC,IS_HIT_PLAN,REQUEST_TIME,ELAPSED_TIME,NET_TIME,NET_WAIT_TIME,QUEUE_TIME,DECODE_TIME,GET_PLAN_TIME,
EXECUTE_TIME,APPLICATION_WAIT_TIME,CONCURRENCY_WAIT_TIME,USER_IO_WAIT_TIME,SCHEDULE_TIME,ROW_CACHE_HIT,
BLOOM_FILTER_CACHE_HIT,BLOCK_CACHE_HIT,DISK_READS,RETRY_CNT,TABLE_SCAN,CONSISTENCY_LEVEL,MEMSTORE_READ_ROW_COUNT,
SSSTORE_READ_ROW_COUNT,DATA_BLOCK_READ_CNT,DATA_BLOCK_CACHE_HIT,INDEX_BLOCK_READ_CNT,INDEX_BLOCK_CACHE_HIT,BLOCKSCAN_BLOCK_CNT,
BLOCKSCAN_ROW_CNT,PUSHDOWN_STORAGE_FILTER_ROW_CNT,REQUEST_MEMORY_USED,EXPECTED_WORKER_COUNT,USED_WORKER_COUNT,SCHED_INFO,
PS_CLIENT_STMT_ID,PS_INNER_STMT_ID,TX_ID,SNAPSHOT_VERSION,REQUEST_TYPE,IS_BATCHED_MULTI_STMT,OB_TRACE_INFO,PLAN_HASH,
PARAMS_VALUE,RULE_NAME,TX_INTERNAL_ROUTING,TX_STATE_VERSION,FLT_TRACE_ID,NETWORK_WAIT_TIME)
select
:1,SVR_IP,SVR_PORT,REQUEST_ID,SQL_EXEC_ID,TRACE_ID,SID,CLIENT_IP,CLIENT_PORT,TENANT_ID,EFFECTIVE_TENANT_ID,
TENANT_NAME,USER_ID,USER_NAME,USER_GROUP,USER_CLIENT_IP,DB_ID,DB_NAME,SQL_ID,to_char(substr(QUERY_SQL,1,3995)),PLAN_ID,AFFECTED_ROWS,
RETURN_ROWS,PARTITION_CNT,RET_CODE,QC_ID,DFO_ID,SQC_ID,WORKER_ID,EVENT,P1TEXT,P1,P2TEXT,P2,P3TEXT,P3,WAIT_CLASS_ID,
WAIT_CLASS#,WAIT_CLASS,STATE,WAIT_TIME_MICRO,TOTAL_WAIT_TIME_MICRO,TOTAL_WAITS,RPC_COUNT,PLAN_TYPE,IS_INNER_SQL,
IS_EXECUTOR_RPC,IS_HIT_PLAN,REQUEST_TIME,ELAPSED_TIME,NET_TIME,NET_WAIT_TIME,QUEUE_TIME,DECODE_TIME,GET_PLAN_TIME,
EXECUTE_TIME,APPLICATION_WAIT_TIME,CONCURRENCY_WAIT_TIME,USER_IO_WAIT_TIME,SCHEDULE_TIME,ROW_CACHE_HIT,
BLOOM_FILTER_CACHE_HIT,BLOCK_CACHE_HIT,DISK_READS,RETRY_CNT,TABLE_SCAN,CONSISTENCY_LEVEL,MEMSTORE_READ_ROW_COUNT,
SSSTORE_READ_ROW_COUNT,DATA_BLOCK_READ_CNT,DATA_BLOCK_CACHE_HIT,INDEX_BLOCK_READ_CNT,INDEX_BLOCK_CACHE_HIT,BLOCKSCAN_BLOCK_CNT,
BLOCKSCAN_ROW_CNT,PUSHDOWN_STORAGE_FILTER_ROW_CNT,REQUEST_MEMORY_USED,EXPECTED_WORKER_COUNT,USED_WORKER_COUNT,SCHED_INFO,
PS_CLIENT_STMT_ID,PS_INNER_STMT_ID,TX_ID,SNAPSHOT_VERSION,REQUEST_TYPE,IS_BATCHED_MULTI_STMT,OB_TRACE_INFO,PLAN_HASH,
PARAMS_VALUE,RULE_NAME,TX_INTERNAL_ROUTING,TX_STATE_VERSION,FLT_TRACE_ID,NETWORK_WAIT_TIME
from gv$ob_sql_audit where rownum <= 10';
/* Execute the SQL */
v_sql_excute_start_time := SYSTIMESTAMP ;
EXECUTE IMMEDIATE V_SQL USING v_num;
v_sql_excute_end_time := SYSTIMESTAMP;
INSERT INTO xz_cdc_log values(v_num,v_start_time,v_end_time,v_start_scn,v_end_scn,(v_sql_excute_end_time-v_sql_excute_start_time));
/* Commit of the initialization data; committed only once */
COMMIT ;
ELSIF v_num = 2 THEN
v_end_time := SYSTIMESTAMP ; -- end timestamp: the current time
v_start_time := v_end_time - interval '70' second; -- start timestamp: 70 seconds before the end time, used as the start of this pull
/*
where request_time > v_start_time:(v_end_time - interval '70' second) and request_time < v_end_time:SYSTIMESTAMP
*/
v_start_scn := timestamp_to_scn(v_start_time) / 1000 ; -- convert start time to start SCN
v_end_scn := timestamp_to_scn(v_end_time) / 1000 ; -- convert end time to end SCN
v_sql := '
insert /*+ APPEND PARALLEL(8) */ into xz_sql_audit_cdc(
C_BATCH_ID,SVR_IP,SVR_PORT,REQUEST_ID,SQL_EXEC_ID,TRACE_ID,SID,CLIENT_IP,CLIENT_PORT,TENANT_ID,EFFECTIVE_TENANT_ID,
TENANT_NAME,USER_ID,USER_NAME,USER_GROUP,USER_CLIENT_IP,DB_ID,DB_NAME,SQL_ID,QUERY_SQL,PLAN_ID,AFFECTED_ROWS,
RETURN_ROWS,PARTITION_CNT,RET_CODE,QC_ID,DFO_ID,SQC_ID,WORKER_ID,EVENT,P1TEXT,P1,P2TEXT,P2,P3TEXT,P3,WAIT_CLASS_ID,
WAIT_CLASS#,WAIT_CLASS,STATE,WAIT_TIME_MICRO,TOTAL_WAIT_TIME_MICRO,TOTAL_WAITS,RPC_COUNT,PLAN_TYPE,IS_INNER_SQL,
IS_EXECUTOR_RPC,IS_HIT_PLAN,REQUEST_TIME,ELAPSED_TIME,NET_TIME,NET_WAIT_TIME,QUEUE_TIME,DECODE_TIME,GET_PLAN_TIME,
EXECUTE_TIME,APPLICATION_WAIT_TIME,CONCURRENCY_WAIT_TIME,USER_IO_WAIT_TIME,SCHEDULE_TIME,ROW_CACHE_HIT,
BLOOM_FILTER_CACHE_HIT,BLOCK_CACHE_HIT,DISK_READS,RETRY_CNT,TABLE_SCAN,CONSISTENCY_LEVEL,MEMSTORE_READ_ROW_COUNT,
SSSTORE_READ_ROW_COUNT,DATA_BLOCK_READ_CNT,DATA_BLOCK_CACHE_HIT,INDEX_BLOCK_READ_CNT,INDEX_BLOCK_CACHE_HIT,BLOCKSCAN_BLOCK_CNT,
BLOCKSCAN_ROW_CNT,PUSHDOWN_STORAGE_FILTER_ROW_CNT,REQUEST_MEMORY_USED,EXPECTED_WORKER_COUNT,USED_WORKER_COUNT,SCHED_INFO,
PS_CLIENT_STMT_ID,PS_INNER_STMT_ID,TX_ID,SNAPSHOT_VERSION,REQUEST_TYPE,IS_BATCHED_MULTI_STMT,OB_TRACE_INFO,PLAN_HASH,
PARAMS_VALUE,RULE_NAME,TX_INTERNAL_ROUTING,TX_STATE_VERSION,FLT_TRACE_ID,NETWORK_WAIT_TIME)
select /*+ PARALLEL(8) query_timeout(50000000000) */
:1,SVR_IP,SVR_PORT,REQUEST_ID,SQL_EXEC_ID,TRACE_ID,SID,CLIENT_IP,CLIENT_PORT,TENANT_ID,EFFECTIVE_TENANT_ID,
TENANT_NAME,USER_ID,USER_NAME,USER_GROUP,USER_CLIENT_IP,DB_ID,DB_NAME,SQL_ID,to_char(substr(QUERY_SQL,1,3995)),PLAN_ID,AFFECTED_ROWS,
RETURN_ROWS,PARTITION_CNT,RET_CODE,QC_ID,DFO_ID,SQC_ID,WORKER_ID,EVENT,P1TEXT,P1,P2TEXT,P2,P3TEXT,P3,WAIT_CLASS_ID,
WAIT_CLASS#,WAIT_CLASS,STATE,WAIT_TIME_MICRO,TOTAL_WAIT_TIME_MICRO,TOTAL_WAITS,RPC_COUNT,PLAN_TYPE,IS_INNER_SQL,
IS_EXECUTOR_RPC,IS_HIT_PLAN,REQUEST_TIME,ELAPSED_TIME,NET_TIME,NET_WAIT_TIME,QUEUE_TIME,DECODE_TIME,GET_PLAN_TIME,
EXECUTE_TIME,APPLICATION_WAIT_TIME,CONCURRENCY_WAIT_TIME,USER_IO_WAIT_TIME,SCHEDULE_TIME,ROW_CACHE_HIT,
BLOOM_FILTER_CACHE_HIT,BLOCK_CACHE_HIT,DISK_READS,RETRY_CNT,TABLE_SCAN,CONSISTENCY_LEVEL,MEMSTORE_READ_ROW_COUNT,
SSSTORE_READ_ROW_COUNT,DATA_BLOCK_READ_CNT,DATA_BLOCK_CACHE_HIT,INDEX_BLOCK_READ_CNT,INDEX_BLOCK_CACHE_HIT,BLOCKSCAN_BLOCK_CNT,
BLOCKSCAN_ROW_CNT,PUSHDOWN_STORAGE_FILTER_ROW_CNT,REQUEST_MEMORY_USED,EXPECTED_WORKER_COUNT,USED_WORKER_COUNT,SCHED_INFO,
PS_CLIENT_STMT_ID,PS_INNER_STMT_ID,TX_ID,SNAPSHOT_VERSION,REQUEST_TYPE,IS_BATCHED_MULTI_STMT,OB_TRACE_INFO,PLAN_HASH,
PARAMS_VALUE,RULE_NAME,TX_INTERNAL_ROUTING,TX_STATE_VERSION,FLT_TRACE_ID,NETWORK_WAIT_TIME
from gv$ob_sql_audit where request_time > :2 and request_time < :3' ;
/* Execute the SQL */
v_sql_excute_start_time := SYSTIMESTAMP ;
EXECUTE IMMEDIATE V_SQL USING v_num,v_start_scn,v_end_scn;
v_sql_excute_end_time := SYSTIMESTAMP;
-- record the SQL execution time
v_interval := (v_sql_excute_end_time - v_sql_excute_start_time);
v_interval_char := to_char(extract(day from v_interval) * 86400 +
extract(hour from v_interval) * 3600 +
extract(minute from v_interval) * 60 +
extract(second from v_interval));
v_interval_num := to_number(v_interval_char);
INSERT INTO xz_cdc_log values(v_num,v_start_time,v_end_time,v_start_scn,v_end_scn,(v_sql_excute_end_time-v_sql_excute_start_time));
/* Commit this batch of data */
COMMIT ;
ELSE
/* v_num = 1: first pass, initialization load; how much is loaded does not matter.
   v_num = 2: second pass, pulls one interval of data and settles the v_start_time / v_end_time points.
   v_num = 3 and later: each pass inherits the end time of the previous pass:
       start_time = null      end_time = null
       start_time = t2 - 70   end_time = t2
       start_time = t2        end_time = t3
       start_time = t3        end_time = t4
*/
v_start_time := v_end_time;
v_end_time := v_end_time + (interval '1' second * v_interval_num);
v_start_scn := timestamp_to_scn(v_start_time) / 1000 ; -- convert start time to start SCN
v_end_scn := timestamp_to_scn(v_end_time) / 1000 ; -- convert end time to end SCN
v_sql := '
insert /*+ APPEND PARALLEL(8) */ into xz_sql_audit_cdc(
C_BATCH_ID,SVR_IP,SVR_PORT,REQUEST_ID,SQL_EXEC_ID,TRACE_ID,SID,CLIENT_IP,CLIENT_PORT,TENANT_ID,EFFECTIVE_TENANT_ID,
TENANT_NAME,USER_ID,USER_NAME,USER_GROUP,USER_CLIENT_IP,DB_ID,DB_NAME,SQL_ID,QUERY_SQL,PLAN_ID,AFFECTED_ROWS,
RETURN_ROWS,PARTITION_CNT,RET_CODE,QC_ID,DFO_ID,SQC_ID,WORKER_ID,EVENT,P1TEXT,P1,P2TEXT,P2,P3TEXT,P3,WAIT_CLASS_ID,
WAIT_CLASS#,WAIT_CLASS,STATE,WAIT_TIME_MICRO,TOTAL_WAIT_TIME_MICRO,TOTAL_WAITS,RPC_COUNT,PLAN_TYPE,IS_INNER_SQL,
IS_EXECUTOR_RPC,IS_HIT_PLAN,REQUEST_TIME,ELAPSED_TIME,NET_TIME,NET_WAIT_TIME,QUEUE_TIME,DECODE_TIME,GET_PLAN_TIME,
EXECUTE_TIME,APPLICATION_WAIT_TIME,CONCURRENCY_WAIT_TIME,USER_IO_WAIT_TIME,SCHEDULE_TIME,ROW_CACHE_HIT,
BLOOM_FILTER_CACHE_HIT,BLOCK_CACHE_HIT,DISK_READS,RETRY_CNT,TABLE_SCAN,CONSISTENCY_LEVEL,MEMSTORE_READ_ROW_COUNT,
SSSTORE_READ_ROW_COUNT,DATA_BLOCK_READ_CNT,DATA_BLOCK_CACHE_HIT,INDEX_BLOCK_READ_CNT,INDEX_BLOCK_CACHE_HIT,BLOCKSCAN_BLOCK_CNT,
BLOCKSCAN_ROW_CNT,PUSHDOWN_STORAGE_FILTER_ROW_CNT,REQUEST_MEMORY_USED,EXPECTED_WORKER_COUNT,USED_WORKER_COUNT,SCHED_INFO,
PS_CLIENT_STMT_ID,PS_INNER_STMT_ID,TX_ID,SNAPSHOT_VERSION,REQUEST_TYPE,IS_BATCHED_MULTI_STMT,OB_TRACE_INFO,PLAN_HASH,
PARAMS_VALUE,RULE_NAME,TX_INTERNAL_ROUTING,TX_STATE_VERSION,FLT_TRACE_ID,NETWORK_WAIT_TIME)
select /*+ PARALLEL(8) query_timeout(50000000000) */
:1,SVR_IP,SVR_PORT,REQUEST_ID,SQL_EXEC_ID,TRACE_ID,SID,CLIENT_IP,CLIENT_PORT,TENANT_ID,EFFECTIVE_TENANT_ID,
TENANT_NAME,USER_ID,USER_NAME,USER_GROUP,USER_CLIENT_IP,DB_ID,DB_NAME,SQL_ID,to_char(substr(QUERY_SQL,1,3995)),PLAN_ID,AFFECTED_ROWS,
RETURN_ROWS,PARTITION_CNT,RET_CODE,QC_ID,DFO_ID,SQC_ID,WORKER_ID,EVENT,P1TEXT,P1,P2TEXT,P2,P3TEXT,P3,WAIT_CLASS_ID,
WAIT_CLASS#,WAIT_CLASS,STATE,WAIT_TIME_MICRO,TOTAL_WAIT_TIME_MICRO,TOTAL_WAITS,RPC_COUNT,PLAN_TYPE,IS_INNER_SQL,
IS_EXECUTOR_RPC,IS_HIT_PLAN,REQUEST_TIME,ELAPSED_TIME,NET_TIME,NET_WAIT_TIME,QUEUE_TIME,DECODE_TIME,GET_PLAN_TIME,
EXECUTE_TIME,APPLICATION_WAIT_TIME,CONCURRENCY_WAIT_TIME,USER_IO_WAIT_TIME,SCHEDULE_TIME,ROW_CACHE_HIT,
BLOOM_FILTER_CACHE_HIT,BLOCK_CACHE_HIT,DISK_READS,RETRY_CNT,TABLE_SCAN,CONSISTENCY_LEVEL,MEMSTORE_READ_ROW_COUNT,
SSSTORE_READ_ROW_COUNT,DATA_BLOCK_READ_CNT,DATA_BLOCK_CACHE_HIT,INDEX_BLOCK_READ_CNT,INDEX_BLOCK_CACHE_HIT,BLOCKSCAN_BLOCK_CNT,
BLOCKSCAN_ROW_CNT,PUSHDOWN_STORAGE_FILTER_ROW_CNT,REQUEST_MEMORY_USED,EXPECTED_WORKER_COUNT,USED_WORKER_COUNT,SCHED_INFO,
PS_CLIENT_STMT_ID,PS_INNER_STMT_ID,TX_ID,SNAPSHOT_VERSION,REQUEST_TYPE,IS_BATCHED_MULTI_STMT,OB_TRACE_INFO,PLAN_HASH,
PARAMS_VALUE,RULE_NAME,TX_INTERNAL_ROUTING,TX_STATE_VERSION,FLT_TRACE_ID,NETWORK_WAIT_TIME
from gv$ob_sql_audit where request_time > :2 and request_time < :3' ;
/* Execute the batch-insert SQL */
v_sql_excute_start_time := SYSTIMESTAMP ;
EXECUTE IMMEDIATE V_SQL USING v_num,v_start_scn,v_end_scn;
v_sql_excute_end_time := SYSTIMESTAMP;
-- record the SQL execution time
v_interval := (v_sql_excute_end_time - v_sql_excute_start_time);
v_interval_char := to_char(extract(day from v_interval) * 86400 +
extract(hour from v_interval) * 3600 +
extract(minute from v_interval) * 60 +
extract(second from v_interval));
v_interval_num := to_number(v_interval_char);
INSERT INTO xz_cdc_log values(v_num,v_start_time,v_end_time,v_start_scn,v_end_scn,(v_sql_excute_end_time-v_sql_excute_start_time));
/* Commit this batch of data */
COMMIT ;
END IF ;
v_num := v_num + 1;
v_current_date := SYSTIMESTAMP ;
END LOOP ;
COMMIT ;
-- debugging code
/* if v_stop_date = to_date('2024-01-01','YYYY-MM-DD') THEN
DBMS_OUTPUT.PUT_LINE('true') ;
else
DBMS_OUTPUT.PUT_LINE('false') ;
END IF;
DBMS_OUTPUT.PUT_LINE('Debug-p_stop_date:' || to_char(v_stop_date,'YYYY-MM-DD HH24:MI:SS'));
*/
END;
2. Call the stored procedure
It is recommended to use the obclient command line tool to write a shell script to run in the background.
BEGIN
real_time_cdc_proc(p_end_time => '2024-12-12 01:00:00'); -- 2024-12-12 01:00:00 is the time at which the procedure ended its run
END;
/
nohup obclient -h11.161.204.57 -P2883 -uYZJ@oracle_pub#availabilitytest -p"xxxxx" -e "source real_time_cdc_proc.sql" &
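The two steps above (the driver SQL plus the nohup launch) can be wrapped in one small script; a sketch in which the file name, connection string, and log path are placeholders to adjust:

```shell
#!/bin/sh
# Write the driver SQL that keeps real_time_cdc_proc running until the deadline.
cat > real_time_cdc_proc.sql <<'EOF'
set serveroutput on;
BEGIN
    real_time_cdc_proc(p_END_TIME => '2024-12-12 01:00:00');
END;
/
EOF
echo "real_time_cdc_proc.sql written"
# Uncomment and adjust the connection details before use:
# nohup obclient -h11.161.204.57 -P2883 -uYZJ@oracle_pub#availabilitytest -p"xxxxx" \
#   -e "source real_time_cdc_proc.sql" > real_time_cdc_proc.log 2>&1 &
```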
3. Description of parameters
p_end_time
Parameter: time format yyyy-mm-dd hh24:mi:ss. Pass in a deadline, for example 2024-12-01 01:00:00; when this point in time is reached, the real_time_cdc_proc procedure stops running automatically.
4. Attention
After real_time_cdc_proc is stopped (either automatically or by killing the background session), restarting it directly will raise an error and it will not continue to run.
The following actions are required:
- If the previous data is still needed:
  * Back up the xz_sql_audit_cdc and xz_cdc_log tables; renaming them is recommended.
  * After renaming, execute the manage_objects procedure.
  * You can then run the procedure again.
alter table xz_sql_audit_cdc rename to xz_sql_audit_cdc_1;
alter table xz_cdc_log rename to xz_cdc_log_1;
BEGIN
manage_objects( p_action => 'delete'); -- Delete all objects
END;
/
BEGIN
manage_objects( p_action => 'init'); -- create an object
END;
/
BEGIN
real_time_cdc_proc(p_end_time => '2024-12-12 01:00:00'); -- 2024-12-12 01:00:00 is the time at which the procedure ends its run
END;
/
nohup obclient -h11.161.204.57 -P2883 -uYZJ@oracle_pub#availabilitytest -p"xxxxx" -e "source real_time_cdc_proc.sql" &
- If the previous data is not needed:
* Execute the manage_objects procedure directly. You can then continue running the procedure.
BEGIN
manage_objects( p_action => 'delete'); -- delete all objects
END;
/
BEGIN
manage_objects( p_action => 'init'); -- create objects
END;
/
BEGIN
real_time_cdc_proc(p_end_time => '2024-12-12 01:00:00'); -- 2024-12-12 01:00:00 is the time at which the procedure ended its run
END;
/
nohup obclient -h11.161.204.57 -P2883 -uYZJ@oracle_pub#availabilitytest -p "xxxxx" -e "source real_time_cdc_proc.sql" &
5. Monitoring the data synchronization progress of real_time_cdc_proc
Query the xz_cdc_log table to see what point in time the data has currently been synchronized to.
execute_time is how long each data pull took.
c_end_time - c_start_time is always equal to the execute_time of the previous data pull.
This logic ensures that the data the real_time_cdc_proc stored procedure pulls from gv$ob_sql_audit has no gaps.
select
c_batch_id,
to_char(c_start_time,'yyyy-mm-dd hh24:mi:ss.ff6') c_start_time,
to_char(c_end_time,'yyyy-mm-dd hh24:mi:ss.ff6') c_end_time,
c_end_time - c_start_time c_last_time,
c_execute_time execute_time
from XZ_CDC_LOG order by 1 desc;
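Building on the log query above, the current synchronization lag can be estimated from the newest c_end_time; a sketch using only xz_cdc_log columns shown in this manual:

```sql
-- How far the CDC pull currently lags behind the wall clock.
select max(c_end_time) as last_synced_to,
       systimestamp - max(c_end_time) as sync_lag
from xz_cdc_log;
```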
6. Analyze slow SQL data
SELECT /*+ PARALLEL(64) */
       xzzz.c_name,
       xzzz.c_start_time,
       xzzz.c_end_time,
       xzzz.user_name,
       xzzz.db_name,
       xzzz.SQL_ID,
       xzzz.sql_statement,
       nvl(ob_tools.get_table_info(ob_tools.get_table_name(xzzz.db_name, xzzz.sql_statement)), 'none') table_info,
       xzzz.sql_execution_count,
       xzzz.plan_type_count,
       xzzz.sql_cpu_time_ratio,
       xzzz.avg_execute_time_ms,
       xzzz.avg_elapsed_time_ms,
       xzzz.max_elapsed_time_ms,
       xzzz.avg_retry_cnt,
       xzzz.avg_rpc_count,
       xzzz.avg_net_time_ms,
       xzzz.avg_net_wait_time_ms,
       xzzz.avg_queue_time_ms,
       xzzz.avg_decode_time_ms,
       xzzz.avg_get_plan_time_ms
from (
SELECT
xzz.c_name as c_name,
xzz.c_start_time as c_start_time,
xzz.c_end_time as c_end_time,
upper(xzz.user_name) as user_name ,
upper(xzz.db_name) as db_name ,
xzz.sql_id as SQL_ID ,
count(1) as sqlNumber of executions,
max(xzz.c_query_sql) as sqlstatement,
max(xzz.c_plan_cnt) as Planned number of times,
round((sum(xzz.execute_time) / xzz.cpu_sum) * 100, 2) as sql_cpu_time share,
round(avg(xzz.execute_time) ,2)/1000 as Average implementation time,
round(avg(xzz.elapsed_time) ,2)/1000 as Average total implementation time,
round(max(xzz.ELAPSED_TIME) ,2)/1000 as Maximum total execution time,
round(avg(xzz.retry_cnt) ,2) as Average number of retries,
round(avg(xzz.rpc_count) ,2) as on averageRPCordinal number,
round(avg(xzz.net_time) ,2)/1000 as on averageRPCReceived time,
round(avg(xzz.net_wait_time) ,2)/1000 as on average请求接收时间,
round(avg(xzz.queue_time) ,2)/1000 as on average队列时间,
round(avg(xzz.decode_time) ,2)/1000 as on average出队列解析时间,
round(avg(xzz.get_plan_time) ,2)/1000 as on average获取计划时间
from (
SELECT xz.*,
LISTAGG(distinct UPPER(xz.c_plan_type) || ':' || xz.c_plan_type_cnt , ',')
WITHIN GROUP(ORDER BY xz.c_plan_type)
over(partition by xz.c_name,xz.user_name,xz.db_name,xz.sql_id) c_plan_cnt
from (
SELECT
x.*,
count(x.c_plan_type) over(partition by x.c_name,x.user_name,x.db_name,x.sql_id,x.c_plan_type) c_plan_type_cnt
from (
SELECT
a.*,
b.*,
--nvl(get_table_info(get_table_name(b.db_name,b.query_sql)),'none') c_table_name,
to_char(substr(b.query_sql, 1, 4000)) c_query_sql,
(case
when b.PLAN_TYPE = 0 then 'inner'
when b.PLAN_TYPE = 1 then 'local'
when b.PLAN_TYPE = 2 then 'remote'
when b.PLAN_TYPE = 3 then 'distributed'
else null end) c_plan_type,
sum(b.execute_time) over(partition by a.c_name) cpu_sum
FROM (
/* It is recommended to understand the process of each step of running a batch,Name of each process、Start and end time of each process,
utilizationUNION ALL Merged together into a running batch task flow sheet,The finer the task flow,The more you can analyze performance bottlenecks
*/
select 'move1:Data cleansing before the changeover day' c_name,
(timestamp_to_scn(to_timestamp('2024-09-13 21:25:40:669','yyyy-mm-dd hh24:mi:ss:ff6')) / 1000) c_start_scn,
(timestamp_to_scn(to_timestamp('2024-09-13 21:25:48:181','yyyy-mm-dd hh24:mi:ss:ff6')) / 1000) c_end_scn,
to_char(to_timestamp('2024-09-13 21:25:40:669','yyyy-mm-dd hh24:mi:ss:ff6'),'YYYY-MM-DD HH24:MI:SS.FF6') c_start_time,
to_char(to_timestamp('2024-09-13 21:25:48:181','yyyy-mm-dd hh24:mi:ss:ff6'),'YYYY-MM-DD HH24:MI:SS.FF6') c_end_time
from dual
union all
select 'move2:the day before the exchange of dates' c_name,
(timestamp_to_scn(to_timestamp('2024-09-13 21:27:01:281','yyyy-mm-dd hh24:mi:ss:ff6')) / 1000) c_start_scn,
(timestamp_to_scn(to_timestamp('2024-09-13 21:34:31:893','yyyy-mm-dd hh24:mi:ss:ff6')) / 1000) c_end_scn,
to_char(to_timestamp('2024-09-13 21:27:01:281','yyyy-mm-dd hh24:mi:ss:ff6'),'YYYY-MM-DD HH24:MI:SS.FF6') c_start_time,
to_char(to_timestamp('2024-09-13 21:34:31:893','yyyy-mm-dd hh24:mi:ss:ff6'),'YYYY-MM-DD HH24:MI:SS.FF6') c_end_time
from dual
union all
select 'move3: alternating day' c_name,
(timestamp_to_scn(to_timestamp('2024-09-13 21:35:40:493','yyyy-mm-dd hh24:mi:ss:ff6')) / 1000) c_start_scn,
(timestamp_to_scn(to_timestamp('2024-09-13 21:36:20:912','yyyy-mm-dd hh24:mi:ss:ff6')) / 1000) c_end_scn,
to_char(to_timestamp('2024-09-13 21:35:40:493','yyyy-mm-dd hh24:mi:ss:ff6'),'YYYY-MM-DD HH24:MI:SS.FF6') c_start_time,
to_char(to_timestamp('2024-09-13 21:36:20:912','yyyy-mm-dd hh24:mi:ss:ff6'),'YYYY-MM-DD HH24:MI:SS.FF6') c_end_time
from dual
union all
select 'move4:alternating day后' c_name,
(timestamp_to_scn(to_timestamp('2024-09-13 21:51:30:641','yyyy-mm-dd hh24:mi:ss:ff6')) / 1000) c_start_scn,
(timestamp_to_scn(to_timestamp('2024-09-13 22:15:49:056','yyyy-mm-dd hh24:mi:ss:ff6')) / 1000) c_end_scn,
to_char(to_timestamp('2024-09-13 21:51:30:641','yyyy-mm-dd hh24:mi:ss:ff6'),'YYYY-MM-DD HH24:MI:SS.FF6') c_start_time,
to_char(to_timestamp('2024-09-13 22:15:49:056','yyyy-mm-dd hh24:mi:ss:ff6'),'YYYY-MM-DD HH24:MI:SS.FF6') c_end_time
from dual
union all
select 'move5:alternating day后数据清理' c_name,
(timestamp_to_scn(to_timestamp('2024-09-13 22:17:00:520','yyyy-mm-dd hh24:mi:ss:ff6')) / 1000) c_start_scn,
(timestamp_to_scn(to_timestamp('2024-09-13 23:12:54:548','yyyy-mm-dd hh24:mi:ss:ff6')) / 1000) c_end_scn,
to_char(to_timestamp('2024-09-13 22:17:00:520','yyyy-mm-dd hh24:mi:ss:ff6'),'YYYY-MM-DD HH24:MI:SS.FF6') c_start_time,
to_char(to_timestamp('2024-09-13 23:12:54:548','yyyy-mm-dd hh24:mi:ss:ff6'),'YYYY-MM-DD HH24:MI:SS.FF6') c_end_time
from dual
) a inner join xz_sql_audit_cdc b
on b.request_time between a.c_start_scn and a.c_end_scn
where b.user_name not in ('YZJ')
and b.db_name not in ('YZJ')
) x
)xz
) xzz
GROUP BY
xzz.c_name,
xzz.c_start_time,
xzz.c_end_time,
xzz.db_name,
xzz.user_name,
xzz.sql_id,
xzz.cpu_sum
) xzzz
where xzzz.sql_cpu_time share > 0.5
ORDER BY 1 ASC, 11 DESC;
I am still building up SQL analysis statements for other dimensions. One big advantage of ODC, in my experience, is its ability to export Excel reports: by writing the corresponding analysis SQL, ODC can generate data reports for different dimensions and points in time and present them graphically, which makes the data analysis more intuitive and readable.
III. Using the get_table_name and get_table_info functions
1. Introduction
When I first analyzed SQL statements in Excel tables and suggested optimizations, I only had the SQL text and the database name; the table names and related details had to be dug out of each SQL statement one by one with SHOW CREATE TABLE and similar commands, which was very cumbersome. I therefore wrote the following two functions:
- **get_table_name**: extracts one or more table names from the input SQL statement. It uses regular expressions to match keywords such as FROM and JOIN, identifies the table names, prefixes them with the owner name, and returns a comma-separated list of table names. It works with standard SQL statements, making it easy to get the relevant table information quickly.
- **get_table_info**: takes the table-name list produced by get_table_name and returns detailed information about each table, including the partitioning rule, the number of local and global indexes, the number of primary keys, and the row count. The function reads this information from database views and formats it into easy-to-read output for subsequent analysis and optimization.
These two functions work in tandem, letting me efficiently extract table names and table details from SQL statements and greatly improving my productivity.
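To make the mechanism concrete, here is a hypothetical Python sketch of the same keyword-matching idea (this is not the PL/SQL source of ob_tools.get_table_name, only an illustration of the approach and its limitations):

```python
import re

# Illustrative re-implementation of the idea behind get_table_name:
# match identifiers that follow FROM or JOIN, qualify them with the
# owner when needed, and return a comma-separated, de-duplicated list.
TABLE_RE = re.compile(
    r'\b(?:FROM|JOIN)\s+([A-Za-z_][\w$#]*(?:\.[A-Za-z_][\w$#]*)?)',
    re.IGNORECASE)

def get_table_name(p_owner: str, p_string_sql: str) -> str:
    names = []
    for ident in TABLE_RE.findall(p_string_sql):
        full = (ident if '.' in ident else f'{p_owner}.{ident}').upper()
        if full not in names:  # keep first occurrence only
            names.append(full)
    return ','.join(names)

print(get_table_name('ENS_CBANK',
    "SELECT * FROM MB_ACCT_SETTLE a JOIN mb_acct b ON a.k = b.k"))
# prints ENS_CBANK.MB_ACCT_SETTLE,ENS_CBANK.MB_ACCT
```

Note how the sketch shares the documented limitation: `FROM (SELECT ...)` subqueries are skipped (a `(` cannot start an identifier), and a comma join such as `from a,b` would capture only the first table.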
2. Calling the functions
select ob_tools.get_table_info(ob_tools.get_table_name(a.db_name,a.query_sql)) from gv$ob_sql_audit;
Table Name | Partitioning Rule | Local Indexes | Global Indexes | Primary Keys | Rows
--------------------------------------------------------------------------------------
ENS_CBANK.MB_GL_HIST_TOTAL | Non-partitioned table | 0 | 1 | 1 | 0
ENS_CBANK.MB_GL_HIST | Non-partitioned table | 0 | 3 | 1 | 0
ENS_CBANK.MB_PROD_TYPE | Non-partitioned table | 0 | 1 | 1 | 658
ENS_CBANK.MB_GL_HIST | Non-partitioned table | 0 | 3 | 1 | 0
ENS_CBANK.BAT_MB_BUSINESS_INFO | Non-partitioned table | 0 | 0 | 0 | 7
ENS_CBANK.MB_GL_HIST | Non-partitioned table | 0 | 3 | 1 | 0
ENS_CBANK.MB_FEE_AMORTIZE_AGR | Non-partitioned table | 0 | 1 | 1 | 16364
--------------------------------------------------------------------------------------
Table Name | Partitioning Rule | Local Indexes | Global Indexes | Primary Keys | Rows
--------------------------------------------------------------------------------------
ENS_CBANK.MB_ACCT | PARTITION BY HASH("BRANCH") | 0 | 18 | 0 | 17289678
ENS_CBANK.MB_ACCT_SETTLE | PARTITION BY HASH("INTERNAL_KEY") | 1 | 4 | 1 | 12758205
--------------------------------------------------------------------------------------
3. Description of parameters
- **get_table_name** parameters:
  - **p_owner** (VARCHAR2): the owner (schema) name of the table, used to construct the full table name. Usually uppercase.
  - **p_string_sql** (CLOB, VARCHAR2): the input SQL statement from which the function extracts table names.
Example:
select ob_tools.get_table_name('ENS_CBANK','
SELECT c.settle_acct_internal_key
FROM (SELECT distinct a.settle_acct_internal_key
FROM MB_ACCT_SETTLE a
where (A.SETTLE_ACCT_CLASS = ''AUT'' or
(A.SETTLE_ACCT_CLASS = ''SSI'' and a.settle_weight > 0))
and a.settle_acct_internal_key BETWEEN ''27417440'' and ''27475285''
and nvl(, ''1'') = ''1''
and exists (select 1
from mb_acct b
WHERE a.INTERNAL_KEY = b.INTERNAL_KEY
and b.source_module = ''CL''
and b.auto_settle = ''Y''
and b.lead_acct_flag = ''N''
and B.acct_status != ''C'')) c,
mb_acct d
WHERE c.settle_acct_internal_key = d.INTERNAL_KEY
AND d.ACCT_STATUS != ''C''
' ) tb_name from dual;
Returns: ENS_CBANK.MB_ACCT_SETTLE,ENS_CBANK.MB_ACCT
- **get_table_info** parameters:
  - **p_tablename_list** (VARCHAR2): a comma-separated list of table names, each an owner-plus-table-name combination (e.g. schema1.table1,schema2.table2). Used to fetch detailed information about each table.
4. Notes
The following points should be noted when using these two functions:
- SQL statement format: make sure the input SQL statement follows standard syntax, especially the parts containing table names. get_table_name does not support comma-separated table joins or non-standard SQL, for example: select * from a,b where = (unsupported).
- Input parameter validity: to make sure get_table_info receives correct parameters, it is strongly recommended to use it together with get_table_name, so that calls do not fail because of formatting issues.
- Performance impact: in a large-data-volume environment these functions can noticeably slow a query down. Do not apply them to a large row set; wait until the data has been aggregated and call them only in the outermost query, where few rows are returned.
-- Not recommended: this query returns ~99998 rows and calls the functions once per row
select ob_tools.get_table_info(ob_tools.get_table_name(a.db_name,a.query_sql)) tb_info from xz_sql_audit_cdc a where a.request_time > 1 and a.request_time < 99999;
-- Recommended: aggregate first, then call the functions in the outermost query on the small result set
select
ob_tools.get_table_info(ob_tools.get_table_name(x.db_name, x.qsql)) tb_info
from (
select
a.db_name,
a.sql_id,
max(a.query_sql) qsql
from xz_sql_audit_cdc a
where a.request_time > 1 and a.request_time < 99999
group by a.db_name,a.sql_id
) x;
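The cost difference is easy to model. In this toy Python sketch, get_table_info is a hypothetical stand-in for the package function (the real one queries auxiliary tables and is comparatively expensive per call): with 3000 raw audit rows covering only two distinct (db_name, sql_id) pairs, the per-row form pays 3000 calls while the aggregate-first form pays 2:

```python
# Toy model of the caveat above. get_table_info here is a hypothetical
# stand-in for ob_tools.get_table_info, which is expensive per call.
call_count = {"per_row": 0, "after_group": 0}

def get_table_info(sql_text, mode):
    call_count[mode] += 1
    return "info"

# 3000 raw audit rows, but only two distinct (db_name, sql_id) pairs
rows = [("DB1", "sql_a"), ("DB1", "sql_a"), ("DB1", "sql_b")] * 1000

# not recommended: call the function once per raw row
for db, sql_id in rows:
    get_table_info(sql_id, "per_row")

# recommended: group by (db_name, sql_id) first, then call once per group
for db, sql_id in set(rows):
    get_table_info(sql_id, "after_group")

print(call_count)  # prints {'per_row': 3000, 'after_group': 2}
```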
- Data correctness: the local index count, global index count, and primary key count returned by get_table_info are read from the XZ_INDEXES table (the system view data is synchronized into this auxiliary table to improve performance). If you make schema changes in the meantime, such as adding a primary key or creating local or global indexes, get_table_info will keep returning the original, now-stale information, because the data in the XZ_INDEXES auxiliary table has not been refreshed. In that case, simply refresh XZ_INDEXES with the manage_objects procedure; once the update completes, get_table_info returns the correct table information.
BEGIN
manage_objects( p_action => 'UPDATE'); -- refresh the XZ_INDEXES and XZ_OBJECTS data
END;
/
================ UPDATE - update tables XZ_INDEXES and XZ_OBJECTS ================
Start emptying table: XZ_INDEXES
Start time: 2024-09-28 22:52:20.528044
End time:   2024-09-28 22:52:20.605177
Elapsed:    +00 00:00:00.077133
Re-import data into XZ_INDEXES
Start time: 2024-09-28 22:52:20.605831
End time:   2024-09-28 22:52:21.161543
Elapsed:    +00 00:00:00.555712
Start emptying table: XZ_OBJECTS
Start time: 2024-09-28 22:52:21.162242
End time:   2024-09-28 22:52:21.231126
Elapsed:    +00 00:00:00.068884
Re-import data into XZ_OBJECTS
Start time: 2024-09-28 22:52:21.231995
End time:   2024-09-28 22:52:21.519196
Elapsed:    +00 00:00:00.287201