
How to Speed Up a Large DELETE Operation by Nearly 1000x (Part 5)

Published: 2007-06-07

1.1. Fourth Optimization: Using FORALL for Batch Processing

We have already covered the basic concepts of batch (bulk) processing above, so they are not repeated here.

The idea behind this optimization is to use FORALL to rewrite the "Third Optimization: Splitting the DELETE Operation". The specific changes are as follows:

SQL> l
  1  create or replace procedure del_hubei_SSR_forall
  2  as
  3       type ridArray is table of rowid index by binary_integer;
  4       type dtArray is table of varchar2(50) index by binary_integer;
  5
  6       v_rowid ridArray;
  7       v_mid_to_delete dtArray;
  8
  9  begin
 10       select mid,rowid bulk collect into v_mid_to_delete,v_rowid from temp_mid_hubei where rownum<11;
 11
 12      forall i in 1 .. v_mid_to_delete.COUNT
 13           delete from SSR where mid = v_mid_to_delete(i);
 14
 15             DBMS_OUTPUT.PUT_LINE('Full Finished!!!');
 16             DBMS_OUTPUT.PUT_LINE('Totally '||to_char(v_mid_to_delete.COUNT)||' records deleted from hubei_SSR   !!!');
 17
 18       forall i in 1 .. v_rowid.COUNT
 19           delete from temp_mid_hubei where rowid = v_rowid(i);
 20
 21             DBMS_OUTPUT.PUT_LINE('Full Finished!!!');
 22             DBMS_OUTPUT.PUT_LINE('Totally '||to_char(v_rowid.COUNT)||' records deleted from temp_mid_hubei   !!!');
 23
 24* end;
SQL> /

Procedure created.

Elapsed: 00:00:00.07
SQL>
SQL>  exec del_hubei_SSR_forall;
Full Finished!!!
Totally 10 records deleted from hubei_SSR   !!!
Full Finished!!!
Totally 10 records deleted from temp_mid_hubei   !!!

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.16
SQL>

 

 

Here we apply the SELECT ... BULK COLLECT INTO method described above to perform the bulk binding. The test with a batch of 10 records shows that the speed is quite good, so we test again with 1000 records:
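The article does not show how the procedure is adjusted between runs; presumably only the ROWNUM filter in the BULK COLLECT query changes with the batch size. A hypothetical version of that line for the 1000-record test (the value 1001 is an assumption, not taken from the original session):

     select mid, rowid bulk collect into v_mid_to_delete, v_rowid
       from temp_mid_hubei
      where rownum < 1001;   -- assumed: 1001 instead of 11, to fetch 1000 rows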

 

SQL> exec del_hubei_SSR_forall;
Full Finished!!!
Totally 1000 records deleted from hubei_SSR   !!!
Full Finished!!!
Totally 100 records deleted from temp_mid_hubei   !!!

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.66
SQL>

 

 

The test results remain fairly stable, so we test again with 10000 and then 100000 records:

 

SQL> exec del_hubei_SSR_forall;
Full Finished!!!
Totally 10000 records deleted from hubei_SSR   !!!
Full Finished!!!
Totally 10000 records deleted from temp_mid_hubei   !!!

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.07
SQL>
SQL> exec del_hubei_SSR_forall;
Full Finished!!!
Totally 100000 records deleted from hubei_SSR   !!!
Full Finished!!!
Totally 100000 records deleted from temp_mid_hubei   !!!

PL/SQL procedure successfully completed.

Elapsed: 00:03:51.29
SQL>

 

Note that deleting 10000 records takes about 4 seconds, while deleting 100000 records takes nearly 4 minutes. In other words, going from 10000 to 100000 records, the elapsed time no longer scales linearly with the record count: 10 times the records costs roughly 57 times the time.

 

Next, test with roughly 500,000 records:

SQL> exec del_hubei_SSR_forall;
Full Finished!!!
Totally 509555 records deleted from hubei_SSR   !!!
Full Finished!!!
Totally 509555 records deleted from temp_mid_hubei   !!!

PL/SQL procedure successfully completed.

Elapsed: 00:39:04.41
SQL>

 

Here, deleting about 500,000 records already takes nearly 40 minutes!

At this point we can basically conclude that bulk deletion does solve part of the problem, but if the collection being operated on is too large, the benefit is heavily discounted. This is not hard to understand (among other things, the entire collection has to be built up and held in session memory before the FORALL statements run).

This leads to the idea for the fifth optimization (a rough sketch follows the list below):

1)        Process the work in segments

2)        Split the operation

3)        Use bulk binding
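As a preview of that idea, here is a minimal sketch of how the three points could be combined, using a cursor fetched with BULK COLLECT ... LIMIT so the work is processed in fixed-size segments. The procedure name del_hubei_SSR_chunked, the segment size of 10000, and the per-segment COMMIT are illustrative assumptions, not taken from the original article:

create or replace procedure del_hubei_SSR_chunked
as
     type ridArray is table of rowid index by binary_integer;
     type dtArray  is table of varchar2(50) index by binary_integer;

     cursor c_mid is select mid, rowid from temp_mid_hubei;

     v_mid_to_delete dtArray;
     v_rowid         ridArray;
begin
     open c_mid;
     loop
          -- 1) segment processing: fetch at most 10000 rows per pass (assumed size)
          fetch c_mid bulk collect into v_mid_to_delete, v_rowid limit 10000;
          exit when v_mid_to_delete.COUNT = 0;

          -- 2) split operation + 3) bulk binding: delete the matching SSR rows
          forall i in 1 .. v_mid_to_delete.COUNT
               delete from SSR where mid = v_mid_to_delete(i);

          -- then remove the processed driver rows from temp_mid_hubei
          forall i in 1 .. v_rowid.COUNT
               delete from temp_mid_hubei where rowid = v_rowid(i);

          commit;   -- assumption: commit once per segment to keep each transaction small
     end loop;
     close c_mid;
end;

With LIMIT, the collections never grow beyond the segment size, so memory use and the work done by each FORALL statement stay bounded no matter how many rows temp_mid_hubei holds.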

 

 


Original article from: http://www.ltesting.net