2012-12-10

UKOUG 2012



UKOUG 2012

I am alive, although I did not obey the first-timer's advice. Health warning!
"You will not be able to fill all three conference days with presentations and come out of the other side in one piece."
I stuffed my days full of presentations. I still had time to:
* Talk to quite a few people
* Walk outside at the Christmas market - it started raining
* Visit the exhibition - many times, and have good talks with many exhibitors
* Learn that there actually is someone out there using Oracle Workspace Manager, although there were no presentations about it


Attended sessions:


Oaktable Sunday 


12:15 – 13:00
Keeping it Simple in Database Application Development:  A Case Study
Melanie Caffrey, Oracle

Research into creating a maintainable system.
Get out of the users' way; do not throw error stacks in their faces.
Design should be centered around the users' needs.
Call your baby ugly sometimes. The problem may be easier and more efficiently implemented with another approach than your hammer.
Simple is not always easy.


13:05 – 13:50 Horrid Collateral Compression
Jonathan Lewis, JL Computer Consultancy

Deduplication in index compression and basic/OLTP table compression.
Hybrid Columnar Compression is available on Exadata, ZFS and Pillar storage.
Described the CPU cost of decompression with rowid access paths.


14:35 – 15:20 Guiding Practices for Gathering Optimiser Statistics (or Not)
Martin Widlake, ORA600

Statistics are collected during the maintenance window; the window may be changed.
Sometimes it is beneficial to gather the primary key stats first and the others afterwards.
System, fixed object, dictionary and object stats.
Don't replace the auto stats job.
Incremental stats on partitioned objects break if even one (sub)partition's stats are missing, or after an alter table add column.


15:25 – 16:10 Tuning Untouchable SQL using Profiles
Dan Fink, TZOD Consultancy

v$sql columns exact_matching_signature and force_matching_signature.
A category may be used to control which sessions use a profile:
alter session set sqltune_category='DEMO';
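The category idea can be sketched roughly like this: a profile accepted into a non-default category is only picked up by sessions that set sqltune_category to match. The task name below is made up for illustration.

```sql
-- Sketch (hypothetical task name): accept a profile into the DEMO
-- category so only opted-in sessions use it.
declare
  l_name varchar2(30);
begin
  l_name := dbms_sqltune.accept_sql_profile(
    task_name => 'my_tuning_task',  -- assumed to exist
    category  => 'DEMO');
end;
/
-- Only sessions that do this will pick the profile up:
alter session set sqltune_category = 'DEMO';
```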


Evening ACE Dinner - thank you Debra Lilley for arranging this as well.


Monday


09:30 - 10:20
Opening Oracle Keynote
Dermot O'Kelly & Dr. Andrew Sutherland - Oracle

The whole stack: hardware and software together. Did I hear correctly "We can put the database into the chipset", or was my mind wandering somewhere?
Comparing social media to in-house applications with structured data and processes. Social media is not an application.
It is a common service. How to handle the integration between them?
Mobile access. Fusion TAP.
The value of big data varies with the ease of use of the tools. An oil-drilling comparison.

10:30 - 11:15
KEYNOTE: Oracle's Latest Generation of Database Technology
Tom Kyte, OakTable - Oracle

12 new features next year


11:50 - 12:35
Creating Test Cases
Jonathan Lewis, Oracle ACE Director, OakTable - JL Computer Consultancy

Should have been titled Test Data Generation.
How to generate rows.
Distribution, physical location and index size vary depending on how you generate the rows.
SQL vs PL/SQL.
How to generate and run many statements.
Use the dbms_lock package to handle concurrency instead of a table row lock.
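The dbms_lock idea might look roughly like this; the lock name is made up for illustration. The point is to serialize a critical section without holding a lock on a table row.

```sql
-- Sketch (hypothetical lock name): serialize a code section with a
-- user lock instead of locking a table row.
declare
  l_handle varchar2(128);
  l_result integer;
begin
  dbms_lock.allocate_unique('MY_APP_LOCK', l_handle);
  l_result := dbms_lock.request(l_handle, dbms_lock.x_mode,
                                timeout => 10, release_on_commit => true);
  if l_result = 0 then
    -- the serialized work goes here
    null;
  end if;
end;
/
```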

12:45 -

Benchmarking Oracle I/O Performance with ORION
Mr Alex Gorbachev, Oracle ACE Director, OakTable

I caught only the first couple of slides: Orion comes with the database binaries.



13:00 - 13:45
PL/SQL: Stop Making the Same Performance Mistakes
Tim Hall, Oracle ACE Director, OakTable - ORACLE-BASE.com

Use SQL instead
Bulk operations
Nocopy
Avoid using wrong datatypes
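The bulk operations point can be sketched as below; the table t(id, val) is hypothetical. Fetching and updating in bulk cuts the context switches between SQL and PL/SQL.

```sql
-- Sketch against a hypothetical table t(id, val): bulk collect plus
-- forall instead of a row-by-row loop.
declare
  type t_ids is table of t.id%type;
  l_ids t_ids;
begin
  select id bulk collect into l_ids from t;
  forall i in 1 .. l_ids.count
    update t set val = val + 1 where id = l_ids(i);
end;
/
```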


14:30 - 15:30
Is RAT Worth Catching?
Julian Dyke, Oracle ACE, OakTable - JulianDyke.com

Experiences with and traps of RAT.
Check the system time before replay.


15:40 - 17:30
B-Tree Indexes
Jonathan Lewis, Oracle ACE Director, OakTable - JL Computer Consultancy

I attended only the first ten minutes and was delighted to see him using a book index example, as I have been using the same when explaining indexing.


15:50 - 16:35
How to Create in 5 Minutes a SQL Tuning Test Case Using SQLTXPLAIN
Carlos Sierra - Oracle

What it is for: investigating unexpected results.
The 11g test case builder (TCB).
If tcx fails on missing system-generated column names, use tc.


16:45 - 17:30
Shareable Cursors
Christian Antognini, OakTable - Trivadis

reason_xml information in v$sql_shared_cursor. sql_id and child# are not always unique in v$sql once _cursor_obsolete_threshold is reached. v$sql_bind_metadata.


UKOUG Exhibition drinks: 18:30 – 19:30 and SIG Focus Pubs: 19:30 – 22:30



Tuesday


Unluckily for me, my notes file for Tuesday got corrupted.


09:00 - 09:45
Are You Sure You Need Exadata?
Jonathan Lewis, Oracle ACE Director, OakTable - JL Computer Consultancy


09:55 - 10:55
Oracle Optimizer: Harnessing The Power of Optimizer Hints
Maria Colgan, OakTable - Oracle

Hints may appear not to be used


11:15 - 12:00
A Deep Dive into the SQL Monitor Report
Graham Wood - Oracle

Yes, it is the best place to see ongoing problems. If only it were available without the Tuning Pack license.
The buffer might be a bit bigger so that long plans would also stay in.
The plan page uses a graphical presentation of the plan by default. Who uses that? Even Graham changed to tabular.


12:10 - 13:10
Controlling Execution Plans (Without Touching the Code)
Kerry Osborne, Oracle ACE Director, OakTable - Enkitec

About Outlines - SQL Patches - SQL Profiles - SQL Baselines
Profiles may work also with binds


13:25 - 14:10
Big Data Meets Big Process
Tristan Atkins - Microgen


14:30 - Unscheduled Oracle Optimizer roundtable with
Maria Colgan, Jonathan Lewis, Tony Hasler, Coskan Gundogar, Andy Colvin, Kerry Osborne, Christian Antognini, Tuomas Pystynen, Jože Senegačnik and many others

Missed many other planned talks, but it was worth the sacrifice.


16:05 - 17:05
Cardinality Feedback
Peter Brink - Credit Suisse

Preferred having good statistics instead.


17:20 - 18:05
Oracle Exalytics - One Size Fits All?
Robin Moffatt - Rittman Mead

TimesTen is a data source to the OBIEE inside.
Sources may be other than an Oracle database.
The OBIEE repository must be solid.
A special release of TimesTen for Exalytics is only available with Exalytics.
The Summary Advisor tool suggests the most useful aggregates for TimesTen based on the queries run.
No incremental refreshes available yet.
Just another batch stage in your ETL.
Loading goes through an Oracle staging database,
or TimesTen may be used as a target of your ETL.
The idea of ditching your DW.
TimesTen may be used in cache mode, which does not support materialized view sources.


Wednesday


09:00 - 09:45
Capacity Planning, Simple and Without Tooling
Piet De Visser, Oracle ACE - Logica

Add a zero


9:20-9:50
Unconference
Tony Hasler

Talking about how a block corruption recovery became a SQL tuning task. Looking forward to the promised blog post.
How to find the high water mark.


09:55 - 10:55
Oracle Optimizer: An Insider’s View of How the Oracle Optimizer Works
Maria Colgan, OakTable - Oracle

Future improvements
Adaptivity at runtime.
The nested loops statistics collector has a threshold. Reaching it changes the join to a hash join, but only on the first execution. Format +adaptive +report will show the plan_hash_value switching during the first execution (NL to hash, hash to NL); no additional parsing is needed, as all the needed paths are already in the cursor.
Only small buffering is needed. Nothing is restarted.
Adaptive parallel data distribution: hybrid hash.
Changing the thresholds is possible. That is the beauty of how the cost-based optimizer works.
Dynamic statistics replaces dynamic sampling. Re-optimization: the new cardinality feedback used to age out when the cursor aged out; 12c stores that information as SQL plan directives. dbms_spd. Extended stats will be created the next time stats are gathered when directives exist. Adaptive plans and statistics.


11:15 - 12:00
The MODEL Clause Explained
Tony Hasler - Anvil Computer Services

May run in parallel by model partitions.


12:10 - 13:10
Really Using Analytic Functions
Kim Berg Hansen - T. Hansen Gruppen A/S

Aggregate functions may be used mixed in with analytic functions.
Nice walkthrough of picking the access route and its efficiency.
Examples using regr_slope to forecast.


13:55 - 14:55
Exadata and the Oracle Optimizer: The Untold Story
Maria Colgan, OakTable - Oracle

Addressed the same issue Jonathan Lewis talked about on Sunday and mentioned the soon to be available patch numbers to bypass the problem.
Do not gather system stats. Tested with the defaults. Recommended only for extremely slow hardware.
They will change the cost model too much.


No more sessions; I had a plane to catch. The originally half-hour connection time in Amsterdam was a bit scary. The first plane left 15 minutes late. We made it, as the other flight was an hour late.

See you, hopefully, next year in Manchester. Before that, OUG Harmony.

2012-12-07

Call for Papers for the OUGF 25th anniversary conference

Just arrived home from UKOUG 2012. While waiting for next year in Manchester, consider attending a boat trip. Submit an abstract before the end of this year: the Call for Papers for the OUGF 25th anniversary conference is open.

More details
OUGF Spring Conference
“OUG Harmony” and 25th anniversary of OUGF
on a cruise from Helsinki (Finland) to Stockholm (Sweden)
Location: TallinkSilja cruise boat
Event starts on April 15, 2013 at 11.00 in Helsinki
Event ends on April 17th, 2013 at 10.00 in Helsinki
The boat leaves Helsinki on April 15th, 2013 at 17.00
The boat arrives in Stockholm on April 16th at 9.55
The boat leaves Stockholm on April 16th at 17.00
The boat arrives in Helsinki on April 17th at 10.00
Languages: Finnish, English
Organizer Oracle User Group Finland

2012-08-22

Insert All and Column Naming


SQL> create table ins(i int, j int);

Table created.

SQL> insert into ins(i, j) select 1,0 from dual;

1 row created.

SQL> insert into ins(i, j) select 1,1 from dual;

1 row created.

All fine. How about insert all?

SQL> insert all into ins(i, j) select 2,0 from dual;

1 row created.

SQL> insert all into ins(i, j) select 2,1 from dual;

1 row created.

SQL> insert all into ins(i, j) select 2,2 from dual;
insert all into ins(i, j) select 2,2 from dual
                *
ERROR at line 1:
ORA-00918: column ambiguously defined

SQL> select * from ins;

         I          J
---------- ----------
         1          0
         1          1
         2          0
         2          1

Got an error about column naming. Plain insert into does not require distinct column names, as the 1,1 value pair is inserted fine. Insert all requires distinct column names in the select list: 2,2 fails unless the columns are named.

SQL> select 2, 2 from dual;

         2          2
---------- ----------
         2          2

SQL> insert all into ins(i, j) select 2 foo, 2 bar from dual;

1 row created.

The column names themselves have no influence on the mapping here. The position does.
SQL> insert all into ins(i, j) select 3 j, 4 i from dual;

1 row created.

SQL> select * from ins where j=3;

no rows selected

SQL> select * from ins where i=3;

         I          J
---------- ----------
         3          4

How about actual multitable insert?

SQL> insert all into ins(i, j) into ins(i, j) select 5 j, 6 i from dual;

2 rows created.

SQL> select * from ins where j=5;

no rows selected

SQL> select * from ins where i=5;

         I          J
---------- ----------
         5          6
         5          6

Similar behavior. Select list column naming has no effect on which column is populated. I would prefer using the values clause with a multitable insert to map the select list columns to the placeholders of the inserted column names.
SQL> insert all into ins(i, j) into ins(i, j) values (i, j) select 7 j, 8 i from dual;

2 rows created.

SQL> select * from ins where j in (7,8);

         I          J
---------- ----------
         7          8
         8          7

2012-08-21

SQL Developer 3.2 and something about 12c

SQL Developer 3.2 available for download. Seems to include improvements to database diff and future 12c Database Support.

Could it have something to do with Cloud Connections?

2012-08-15

Index Organized Materialized View

I have a materialized view joining two tables, and I want to create the materialized view as an index-organized table. A materialized view should be able to be an IOT, as documented. Examples may be found on the support pages: "How to create an Index-Organized Materialized View (Snapshot) [ID 114272.1]". But the examples are only for a query on top of a single table; no join in the materialized view query. There is the ORGANIZATION INDEX option in create materialized view, but the statement does not offer the possibility to state the primary key constraint for the view.

It is possible to create a materialized view ON PREBUILT TABLE. The table may be index organized, and the materialized view query is attached on top of the existing table. Here is an example.


drop materialized view ab;

drop table ab;

drop table b;

drop table a;


create table a(aid int primary key, aname varchar2(20));

create table b(bid int primary key, aid references a, bname varchar2(20));


create table ab(aid int, bid int constraint ab_pk primary key, aname varchar2(20), bname varchar2(20)) organization index;

create materialized view ab (aid, bid, aname, bname) 
on prebuilt table
as
select a.aid, b.bid, a.aname, b.bname
  from a, b
 where a.aid=b.aid
;

insert into a values (1,'A');

insert into b values (1,1,'B');


insert into a values (2,'C');

insert into b values (2,2,'D');


commit;

exec dbms_mview.refresh('AB','C');

select * from ab;

1 1 A B
2 2 C D

select segment_name,segment_type from user_segments where segment_name like 'AB_PK';

AB_PK    INDEX

select table_name,iot_type from user_tables where table_name = 'AB';

AB    IOT


2012-08-09

Statistics change

By default, the way statistics are gathered changes during the lifetime of a table in Oracle 11g. Just after creating and populating a table, the statistics are not gathered as they will be after the table has been used for a while. The usage, i.e. how the table columns are queried, affects the way the statistics are gathered. Actually it is not even necessary to execute a query; just making the optimizer parse one may change how the statistics are gathered. And this way the execution plans may differ from day to day. Here a table is created and populated with unbalanced values in a column. Just after the population is done, the statistics are gathered.

drop table stat;

create table stat as select level le, 999 mo from dual connect by level < 1000;

insert into stat values (1000,1);

create index stat_mo_idx on stat(mo);

exec dbms_stats.gather_table_stats(user,'STAT');

Just asking for a query plan.

explain plan for select * from stat where mo = 999;

select * from table(dbms_xplan.display(format=>'basic rows'));

-----------------------------------------------------------
| Id  | Operation                   | Name        | Rows  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT            |             |   500 |
|   1 |  TABLE ACCESS BY INDEX ROWID| STAT        |   500 |
|   2 |   INDEX RANGE SCAN          | STAT_MO_IDX |   500 |
-----------------------------------------------------------


select column_name,histogram,num_buckets 
  from user_tab_columns
 where table_name = 'STAT';

COLUMN_NAME    HISTOGRAM    NUM_BUCKETS
LE             NONE         1
MO             NONE         1

Nothing else has been done than an "explain plan for" for a query that has the column in a where clause predicate. Gathering the statistics again.

exec dbms_stats.gather_table_stats(user,'STAT');

explain plan for select * from stat where mo = 999;

select * from table(dbms_xplan.display(format=>'basic rows'));

------------------------------------------
| Id  | Operation         | Name | Rows  |
------------------------------------------
|   0 | SELECT STATEMENT  |      |   999 |
|   1 |  TABLE ACCESS FULL| STAT |   999 |
------------------------------------------


select column_name,histogram,num_buckets 
  from user_tab_columns 
 where table_name = 'STAT';

COLUMN_NAME    HISTOGRAM    NUM_BUCKETS
LE             NONE         1
MO             FREQUENCY    2

The plan changed as the statistics gathering changed to gather a histogram for the column previously used in a where predicate.

This example shows that, before gathering the initial statistics for a newly created and populated table, the gathering would like to know how the table will be used, at least when you let Oracle decide how the statistics are gathered. No execution of queries is needed. Just ask for the plans of the newly created application before gathering the first statistics. This way your first day of use will be a bit more similar to the future days of using the table, as the stats will change during some future maintenance window once the table has changed enough.

It seems I copied the slides and the idea straight from Tom Kyte's "Five things you probably didn’t know about SQL" slides presented at the last OUG Harmony, but here I showed that you do not even need to execute the query. Only parsing makes the effect.

In version 11.2 one can use

select dbms_stats.report_col_usage(user,'STAT') from dual;

to get information on how the columns are used. In 11.1 the information may be found in sys.col_usage$.
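A rough sketch of peeking at the raw column usage data directly (this requires access to the SYS-owned dictionary tables, and the exact column list is an assumption):

```sql
-- Sketch: which predicate kinds have been seen against the STAT
-- table's columns, straight from the dictionary base tables.
select o.name table_name, c.name column_name,
       u.equality_preds, u.equijoin_preds, u.range_preds, u.like_preds
  from sys.col_usage$ u
  join sys.obj$ o on o.obj# = u.obj#
  join sys.col$ c on c.obj# = u.obj# and c.intcol# = u.intcol#
 where o.name = 'STAT';
```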

2012-08-07

Inlining a Common Table Expression

I have a complex common table expression (subquery factoring) clause that is used several times in the main query. The optimizer decides to do a TEMP TABLE TRANSFORMATION, effectively doing what the undocumented MATERIALIZE hint would do. Sometimes that is not the most efficient way to execute. The problem described through an example:

drop table cteinline;

create table cteinline as
select level id, mod(level,20) md
  from dual connect by level < 100000;
  
alter table cteinline add constraint cteinline_pk primary key(id);

create index cteinline_md_idx on cteinline(md) compress 1;


with cte as (
select id, (select count(*) from cteinline co where ct.id=co.md) co from cteinline ct
)
select /*+gather_plan_statistics*/ * 
  from cte 
 where id < 10
 union all
select * 
  from cte ct2 
 where id > 99990
;

select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name                        | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                             |      1 |        |     18 |00:00:00.34 |     867 |    165 |    165 |       |       |          |
|   1 |  TEMP TABLE TRANSFORMATION |                             |      1 |        |     18 |00:00:00.34 |     867 |    165 |    165 |       |       |          |
|   2 |   LOAD AS SELECT           |                             |      1 |        |      1 |00:00:00.32 |     525 |      0 |    165 |   525K|   525K|  525K (0)|
|   3 |    SORT AGGREGATE          |                             |  99999 |      1 |  99999 |00:00:00.17 |     173 |      0 |      0 |       |       |          |
|*  4 |     INDEX RANGE SCAN       | CTEINLINE_MD_IDX            |  99999 |    961 |  95000 |00:00:00.08 |     173 |      0 |      0 |       |       |          |
|   5 |    TABLE ACCESS FULL       | CTEINLINE                   |      1 |  96116 |  99999 |00:00:00.01 |     182 |      0 |      0 |       |       |          |
|   6 |   UNION-ALL                |                             |      1 |        |     18 |00:00:00.02 |     339 |    165 |      0 |       |       |          |
|*  7 |    VIEW                    |                             |      1 |  96116 |      9 |00:00:00.02 |     171 |    165 |      0 |       |       |          |
|   8 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D9DFC_47AEB497 |      1 |  96116 |  99999 |00:00:00.01 |     171 |    165 |      0 |       |       |          |
|*  9 |    VIEW                    |                             |      1 |  96116 |      9 |00:00:00.01 |     168 |      0 |      0 |       |       |          |
|  10 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D9DFC_47AEB497 |      1 |  96116 |  99999 |00:00:00.01 |     168 |      0 |      0 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   4 - access("CO"."MD"=:B1)
   7 - filter("ID"<10)
   9 - filter("ID">99990)

We want to avoid the TEMP TABLE TRANSFORMATION; do not MATERIALIZE the common table expression. 11.1 introduced a new performance view, v$sql_hint. From there one can find yet another undocumented hint that does the inverse.
 
select inverse from V$SQL_HINT where name = 'MATERIALIZE'; 

INLINE

The inverse is not NO_MATERIALIZE or NOMATERIALIZE, even though so many other inverse hints start with the word NO, like MERGE and NO_MERGE. The INLINE hint is not included in the 11.2.0.3 documentation but seems to do the job for us. The TEMP TABLE TRANSFORMATION has vanished from the plan, and fewer rows are read during the execution in this case.


with cte as (
select /*+inline*/ id, (select count(*) from cteinline co where ct.id=co.md) co from cteinline ct
)
select /*+gather_plan_statistics*/ * 
  from cte 
 where id < 10
 union all
select * 
  from cte ct2 
 where id > 99990
;

-------------------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |      1 |        |     18 |00:00:00.01 |      92 |
|   1 |  UNION-ALL         |                  |      1 |        |     18 |00:00:00.01 |      92 |
|   2 |   SORT AGGREGATE   |                  |      9 |      1 |      9 |00:00:00.01 |      80 |
|*  3 |    INDEX RANGE SCAN| CTEINLINE_MD_IDX |      9 |    961 |  45000 |00:00:00.01 |      80 |
|*  4 |   INDEX RANGE SCAN | CTEINLINE_PK     |      1 |      9 |      9 |00:00:00.01 |       2 |
|   5 |   SORT AGGREGATE   |                  |      9 |      1 |      9 |00:00:00.01 |       8 |
|*  6 |    INDEX RANGE SCAN| CTEINLINE_MD_IDX |      9 |    961 |      0 |00:00:00.01 |       8 |
|*  7 |   INDEX RANGE SCAN | CTEINLINE_PK     |      1 |      9 |      9 |00:00:00.01 |       2 |
-------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - access("CO"."MD"=:B1)
   4 - access("ID"<10)
   6 - access("CO"."MD"=:B1)
   7 - access("ID">99990)
   

2012-06-27

The Latest

It is nice to share information when there are opportunities to receive information. I just had a nice after-work sauna, an introduction and a chat about the Data Vault modeling technique.

In the following exercise there is a part of a satellite table with some data, and a request to retrieve the latest row per key. It is possible to retrieve the information with only one reference to the table.
create table sat_dv(
  dv_id number not null
, dv_load_dt date not null
, dv_info varchar2(200)
);

insert into sat_dv
select mod(level,5)
     , sysdate-level
     , level||'info' 
 from dual connect by level < 200;

select satelite.* 
  from sat_dv satelite
  inner join ( 
     select dv_id, max(dv_load_dt) max_dv_load_dt
       from sat_dv 
      group by dv_id) latest 
  on latest.dv_id= satelite.dv_id 
 and latest.max_dv_load_dt = satelite.dv_load_dt
;

"DV_ID" "DV_LOAD_DT" "DV_INFO"
1 26.06.2012 "1info"
2 25.06.2012 "2info"
3 24.06.2012 "3info"
4 23.06.2012 "4info"
0 22.06.2012 "5info"
Lazy as I am, I let you do the work. Two examples using the analytic or keep first aggregate approach were introduced in an earlier post. There I did not introduce this "browsing the table twice" approach, although unfortunately quite often it is the starting point of query tuning.
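For completeness, a one-pass analytic version against the same sat_dv table could look like this: rank the rows per dv_id by load date and keep only the newest one.

```sql
-- One reference to the table: newest row per dv_id via row_number().
select dv_id, dv_load_dt, dv_info
  from (select s.*,
               row_number() over (partition by dv_id
                                  order by dv_load_dt desc) rn
          from sat_dv s)
 where rn = 1;
```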

2012-06-26

Nominated

Thank you to my coworkers at Solita for making this happen. I just got invited and signed up to receive the Oracle ACE award. Proud to be credited to the community, as Julian, Tuomas and Heli from Finland were earlier. From now on I am allowed to use the ACE logo.

We will continue delivering success to our customers. Some highlights will continue popping up here in this blog.

2012-06-15

Pascal matrix

Just read about yet another SQL Challenge pointing to the original request to solve the Pascal matrix with ANSI SQL. The following solution is maybe breaking the rules, but it illustrates an Oracle 1Xy wishlist feature: mixing analytic functions and recursive subquery factoring. The solution uses PostgreSQL 9.1 capabilities.

with recursive n (u) as (
select 1 
union all
select n.u+1 
  from n
 where n.u < 8
), q as (
select n.u v, m.u w 
  from n, n m 
), r (v,w,s,d,e) as (
select v,w, 1::bigint,w::bigint,sum(w)over(order by w)::bigint
  from q
 where v = 1
union all
select q.v,q.w
      ,r.d
      ,r.e
      ,sum(r.e)over(order by r.w)::bigint
  from r 
 inner join q 
    on r.w=q.w and r.v+1=q.v
)
select v,w,s
  from r
;

1;1;1
1;2;1
1;3;1
1;4;1
1;5;1
1;6;1
1;7;1
1;8;1
2;1;1
2;2;2
2;3;3
2;4;4
2;5;5
2;6;6
2;7;7
2;8;8
3;1;1
3;2;3
3;3;6
3;4;10
3;5;15
3;6;21
3;7;28
3;8;36
4;1;1
4;2;4
4;3;10
4;4;20
4;5;35
4;6;56
4;7;84
4;8;120
5;1;1
5;2;5
5;3;15
5;4;35
5;5;70
5;6;126
5;7;210
5;8;330
6;1;1
6;2;6
6;3;21
6;4;56
6;5;126
6;6;252
6;7;462
6;8;792
7;1;1
7;2;7
7;3;28
7;4;84
7;5;210
7;6;462
7;7;924
7;8;1716
8;1;1
8;2;8
8;3;36
8;4;120
8;5;330
8;6;792
8;7;1716
8;8;3432

2012-05-22

NoCOUG Third SQL Challenge

Boy, that was fun. Not revealing my answer to a wider audience just yet, as the competition just started. Try it yourself. Breadth first seems to do less than depth first; maybe that is why it is the default in a recursive common table expression.
Update 28.5.2012: additional rules published. They made the question a bit harder than it seemed at first try. Well, that is just software testing when solving complex tasks: test data and desired answers are required.
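The traversal order of a recursive common table expression can be steered with the search clause; a minimal sketch against made-up hierarchy data:

```sql
-- Sketch: force depth-first ordering of a recursive CTE
-- (breadth first is the default when no search clause is given).
with t(id, parent_id) as (
  select 1, null from dual union all
  select 2, 1    from dual union all
  select 3, 1    from dual union all
  select 4, 2    from dual
),
r(id, parent_id) as (
  select id, parent_id from t where parent_id is null
  union all
  select t.id, t.parent_id
    from r join t on t.parent_id = r.id
) search depth first by id set ord
select id, parent_id from r order by ord;
```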

2012-05-07

A long plan to monitor

The day came yet again that I needed to monitor a long plan with the Tuning Pack SQL Monitor. Thanks to a blog post from Doug Burns I got those visible. The long plans have more than 300 lines.

The underscore parameter _sqlmon_max_planlines is also session modifiable.


alter session set "_sqlmon_max_planlines" = 3000;

SELECT /*+monitor*/ ... and the rest of your query
;

select dbms_sqltune.report_sql_monitor(mo.sql_id,mo.sid,mo.session_serial#),mo.*
  from v$sql_monitor mo 
 where sid = SYS_CONTEXT('userenv', 'SID') 
 order by mo.sql_exec_start desc
;

And you get the result. Compared to dbms_xplan.display_cursor(format=>'allstats last'), monitoring gives results even while the query is running, before you have read the whole result.

2012-04-30

Partition wise merge -continued

Continuing the partition wise merge post: putting more rows into the target and creating a bitmap index. The note last time was that the plan strategy was nested loops. Indexing the access path to tgt at the end of the plan possibly makes the execution do less work.
alter session set statistics_level = all;

create bitmap index tgt_cust_bidx on tgt(cust) local;


insert into tgt
select level, level, trunc(sysdate), level 
from dual 
connect by level < 10000;


exec dbms_stats.gather_table_stats(user,'TGT');

exec dbms_stats.gather_table_stats(user,'STG');


merge /*+index (tgt tgt_cust_bidx)*/into tgt 
using (select cust, prd, dt, num from stg) sou 
   on (tgt.dt=sou.dt and tgt.cust=sou.cust and tgt.prd=sou.prd)
when matched then update set tgt.num=tgt.num+sou.num
when not matched then insert (tgt.cust,tgt.prd,tgt.dt,tgt.num)
values(sou.cust,sou.prd,sou.dt,sou.num)
;

select * from table(dbms_xplan.display_cursor(format=>'allstats last +partition'));

------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name          | Starts | E-Rows | Pstart| Pstop | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT                          |               |      1 |        |       |       |      0 |00:00:00.10 |      26 |
|   1 |  MERGE                                   | TGT           |      1 |        |       |       |      0 |00:00:00.10 |      26 |
|   2 |   PX COORDINATOR                         |               |      1 |        |       |       |      4 |00:00:00.10 |      22 |
|   3 |    PX SEND QC (RANDOM)                   | :TQ10000      |      0 |      2 |       |       |      0 |00:00:00.01 |       0 |
|   4 |     VIEW                                 |               |      0 |        |       |       |      0 |00:00:00.01 |       0 |
|   5 |      NESTED LOOPS OUTER                  |               |      0 |      2 |       |       |      0 |00:00:00.01 |       0 |
|   6 |       PX BLOCK ITERATOR                  |               |      0 |      2 |     1 |     4 |      0 |00:00:00.01 |       0 |
|*  7 |        TABLE ACCESS FULL                 | STG           |      0 |      2 |     1 |     4 |      0 |00:00:00.01 |       0 |
|   8 |       PARTITION RANGE ITERATOR           |               |      0 |      1 |   KEY |   KEY |      0 |00:00:00.01 |       0 |
|   9 |        PARTITION HASH ITERATOR           |               |      0 |      1 |   KEY |   KEY |      0 |00:00:00.01 |       0 |
|* 10 |         TABLE ACCESS BY LOCAL INDEX ROWID| TGT           |      0 |      1 |       |       |      0 |00:00:00.01 |       0 |
|  11 |          BITMAP CONVERSION TO ROWIDS     |               |      0 |        |       |       |      0 |00:00:00.01 |       0 |
|* 12 |           BITMAP INDEX SINGLE VALUE      | TGT_CUST_BIDX |      0 |        |       |       |      0 |00:00:00.01 |       0 |
------------------------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   7 - access(:Z>=:Z AND :Z<=:Z)
  10 - filter(("TGT"."PRD"="PRD" AND "TGT"."DT"="DT"))
  12 - access("TGT"."CUST"="CUST")
  
It was possible to add partition information to the format of display_cursor. How about putting more rows into the source table? The next post is about that...

2012-04-29

Partition wise merge

I have a sub-partitioned target table and a staging table that should be merged into the target. Will a partition-wise merge work?

drop table tgt;

drop table stg;


create table tgt(cust int, prd int, dt date, num int) partition by range (dt) subpartition by hash (prd) (partition p1 values less than (to_date(20120101,'yyyymmdd')) subpartitions 4,partition pmax values less than (maxvalue) subpartitions 4);

insert into tgt values(1,1,to_date(20110101,'yyyymmdd'),1);

create table stg(cust int, prd int, dt date, num int) partition by hash (prd) partitions 4 parallel 4;


insert into stg
select level, level, trunc(sysdate), level 
from dual 
connect by level < 3;

select * from stg;

1 1 29.04.2012 00:00:00 1
2 2 29.04.2012 00:00:00 2

select * from tgt;

1 1 01.01.2011 00:00:00 1

alter session set statistics_level = all;

merge into tgt 
using (select cust, prd, dt, num from stg) sou 
   on (tgt.dt=sou.dt and tgt.cust=sou.cust and tgt.prd=sou.prd)
when matched then update set tgt.num=tgt.num+sou.num
when not matched then insert (tgt.cust,tgt.prd,tgt.dt,tgt.num)
values(sou.cust,sou.prd,sou.dt,sou.num)
;

Just ask how it executed...

select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

----------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT               |          |      1 |        |      0 |00:00:00.02 |      84 |
|   1 |  MERGE                        | TGT      |      1 |        |      0 |00:00:00.02 |      84 |
|   2 |   PX COORDINATOR              |          |      1 |        |      4 |00:00:00.02 |      24 |
|   3 |    PX SEND QC (RANDOM)        | :TQ10000 |      0 |      4 |      0 |00:00:00.01 |       0 |
|   4 |     VIEW                      |          |      0 |        |      0 |00:00:00.01 |       0 |
|   5 |      NESTED LOOPS OUTER       |          |      0 |      4 |      0 |00:00:00.01 |       0 |
|   6 |       PX BLOCK ITERATOR       |          |      0 |      4 |      0 |00:00:00.01 |       0 |
|*  7 |        TABLE ACCESS FULL      | STG      |      0 |      4 |      0 |00:00:00.01 |       0 |
|   8 |       PARTITION RANGE ITERATOR|          |      0 |      1 |      0 |00:00:00.01 |       0 |
|   9 |        PARTITION HASH ITERATOR|          |      0 |      1 |      0 |00:00:00.01 |       0 |
|* 10 |         TABLE ACCESS FULL     | TGT      |      0 |      1 |      0 |00:00:00.01 |       0 |
----------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   7 - access(:Z>=:Z AND :Z<=:Z)
  10 - filter(("TGT"."PRD"="PRD" AND "TGT"."CUST"="CUST" AND "TGT"."DT"="DT"))
  
Well, that did not tell us anything about partitioning. How about without allstats?

merge into tgt 
using (select cust, prd, dt, num from stg) sou 
   on (tgt.dt=sou.dt and tgt.cust=sou.cust and tgt.prd=sou.prd)
when matched then update set tgt.num=tgt.num+sou.num
when not matched then insert (tgt.cust,tgt.prd,tgt.dt,tgt.num)
values(sou.cust,sou.prd,sou.dt,sou.num)
;

select * from table(dbms_xplan.display_cursor());

---------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT               |          |       |       |     4 (100)|          |       |       |        |      |            |
|   1 |  MERGE                        | TGT      |       |       |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR              |          |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)        | :TQ10000 |    16 |  1728 |     4   (0)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
|   4 |     VIEW                      |          |       |       |            |          |       |       |  Q1,00 | PCWP |            |
|   5 |      NESTED LOOPS OUTER       |          |    16 |  1728 |     4   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
|   6 |       PX BLOCK ITERATOR       |          |    16 |   768 |     2   (0)| 00:00:01 |     1 |     8 |  Q1,00 | PCWC |            |
|*  7 |        TABLE ACCESS FULL      | STG      |    16 |   768 |     2   (0)| 00:00:01 |     1 |     8 |  Q1,00 | PCWP |            |
|   8 |       PARTITION RANGE ITERATOR|          |     1 |    60 |     1   (0)| 00:00:01 |   KEY |   KEY |  Q1,00 | PCWP |            |
|   9 |        PARTITION HASH ITERATOR|          |     1 |    60 |     1   (0)| 00:00:01 |   KEY |   KEY |  Q1,00 | PCWP |            |
|* 10 |         TABLE ACCESS FULL     | TGT      |     1 |    60 |     1   (0)| 00:00:01 |       |       |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   7 - access(:Z>=:Z AND :Z<=:Z)
  10 - filter(("TGT"."PRD"="PRD" AND "TGT"."CUST"="CUST" AND "TGT"."DT"="DT"))

That seems more informative about partitioning: there you can see Pstart and Pstop, and the staging table partitioning appears to match the target table subpartitioning. Yet another approach is SQL monitoring.

merge into /*+monitor*/ tgt 
using (select cust, prd, dt, num from stg) sou 
   on (tgt.dt=sou.dt and tgt.cust=sou.cust and tgt.prd=sou.prd)
when matched then update set tgt.num=tgt.num+sou.num
when not matched then insert (tgt.cust,tgt.prd,tgt.dt,tgt.num)
values(sou.cust,sou.prd,sou.dt,sou.num)
;

select dbms_sqltune.report_sql_monitor(sql_id,sid,session_serial#) from v$sql_monitor m where sid=userenv('sid') order by last_refresh_time desc;

And the latest line there tells us

SQL Monitoring Report

SQL Text
------------------------------
merge into /*+monitor*/ tgt using (select cust, prd, dt, num from stg) sou on (tgt.dt=sou.dt and tgt.cust=sou.cust and tgt.prd=sou.prd) when matched then update set tgt.num=tgt.num+sou.num when not matched then insert (tgt.cust,tgt.prd,tgt.dt,tgt.num) values(sou.cust,sou.prd,sou.dt,sou.num)

Global Information
------------------------------
 Status              :  DONE                
 Instance ID         :  1                   
 Session             :  RAFU (140:5746)     
 SQL ID              :  cqakadxwrc3yz       
 SQL Execution ID    :  16777221            
 Execution Started   :  04/29/2012 00:01:02 
 First Refresh Time  :  04/29/2012 00:01:02 
 Last Refresh Time   :  04/29/2012 00:01:02 
 Duration            :  .004028s            
 Module/Action       :  SQL Developer/-     
 Service             :  SYS$USERS           
 Program             :  SQL Developer       

Global Stats
===============================
| Elapsed |  Other   | Buffer |
| Time(s) | Waits(s) |  Gets  |
===============================
|    0.01 |     0.01 |     58 |
===============================

Parallel Execution Details (DOP=4 , Servers Allocated=4)
================================================================================
|      Name      | Type  | Server# | Elapsed |  Other   | Buffer | Wait Events |
|                |       |         | Time(s) | Waits(s) |  Gets  | (sample #)  |
================================================================================
| PX Coordinator | QC    |         |    0.00 |     0.00 |     14 |             |
| p000           | Set 1 |       1 |    0.00 |     0.00 |     16 |             |
| p001           | Set 1 |       2 |    0.00 |     0.00 |      9 |             |
| p002           | Set 1 |       3 |    0.00 |     0.00 |     10 |             |
| p003           | Set 1 |       4 |    0.00 |     0.00 |      9 |             |
================================================================================

SQL Plan Monitoring Details (Plan Hash Value=659647117)
=========================================================================================================================================
| Id |            Operation            |   Name   |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Activity | Activity Detail |
|    |                                 |          | (Estim) |      | Active(s) | Active |       | (Actual) |   (%)    |   (# samples)   |
=========================================================================================================================================
|  0 | MERGE STATEMENT                 |          |         |      |         1 |     +0 |     1 |        1 |          |                 |
|  1 |   MERGE                         | TGT      |         |      |         1 |     +0 |     1 |        1 |          |                 |
|  2 |    PX COORDINATOR               |          |         |      |         1 |     +0 |     5 |        2 |          |                 |
|  3 |     PX SEND QC (RANDOM)         | :TQ10000 |       2 |    3 |         1 |     +0 |     4 |        2 |          |                 |
|  4 |      VIEW                       |          |         |      |         1 |     +0 |     4 |        2 |          |                 |
|  5 |       NESTED LOOPS OUTER        |          |       2 |    3 |         1 |     +0 |     4 |        2 |          |                 |
|  6 |        PX BLOCK ITERATOR        |          |       2 |    2 |         1 |     +0 |     4 |        2 |          |                 |
|  7 |         TABLE ACCESS FULL       | STG      |       2 |    2 |         1 |     +0 |    10 |        2 |          |                 |
|  8 |        PARTITION RANGE ITERATOR |          |       1 |    3 |         1 |     +0 |     2 |        2 |          |                 |
|  9 |         PARTITION HASH ITERATOR |          |       1 |    3 |         1 |     +0 |     2 |        2 |          |                 |
| 10 |          TABLE ACCESS FULL      | TGT      |       1 |    3 |         1 |     +0 |     2 |        2 |          |                 |
=========================================================================================================================================

The parallel 4 setting of the stg table was obeyed (DOP=4). There were only two rows to merge. Servers p000 and p002 did more work than the others. It seems a partition-wise parallel merge is possible even in this case, where the source partitioning matches the target subpartitioning.

But how partition-wise is it really? The join strategy in the plan is nested loops, so is it actually doing the work partition by partition? This example was constructed with a small number of rows; how about with more rows and more work to do? To be continued...
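The partition-wise idea itself can be sketched outside the database: because both row sources are hash partitioned on the join key (prd) the same way, partition i of the staging rows can only ever match partition i of the target, so each pair could be merged by its own PX server. A minimal Python sketch of that idea, assuming 4 hash partitions as above (the data and structures are illustrative, not Oracle internals):

```python
# Sketch of a partition-wise merge: both row sources are hash
# partitioned on the join key (prd), so stg partition i only ever
# needs tgt partition i, and each pair could run on its own PX server.
PARTITIONS = 4

def hash_partition(rows, key, n=PARTITIONS):
    parts = [[] for _ in range(n)]
    for row in rows:
        parts[hash(row[key]) % n].append(row)
    return parts

def merge_partition(tgt_part, stg_part):
    # Emulates MERGE: update num when matched, insert when not matched.
    index = {(r["cust"], r["prd"], r["dt"]): r for r in tgt_part}
    for s in stg_part:
        k = (s["cust"], s["prd"], s["dt"])
        if k in index:
            index[k]["num"] += s["num"]
        else:
            tgt_part.append(dict(s))

tgt = hash_partition([{"cust": 1, "prd": 1, "dt": "2011-01-01", "num": 1}], "prd")
stg = hash_partition([{"cust": 1, "prd": 1, "dt": "2011-01-01", "num": 1},
                      {"cust": 2, "prd": 2, "dt": "2012-04-29", "num": 2}], "prd")

for i in range(PARTITIONS):          # one independent pair per worker
    merge_partition(tgt[i], stg[i])

merged = sorted((r for part in tgt for r in part), key=lambda r: r["cust"])
```

In this sketch the first staging row matches the target row, so its num is updated, and the second one is inserted; in the blog data the dates differ, so both rows would be inserted.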

2012-04-26

Overlapping Locator approach

I have written earlier about handling overlapping time periods. Those approaches try to deal with populating rows while avoiding mutating-table problems and compound triggers. This post has a query example for validating such data. The requirement: periods must not overlap and the timeline has to be continuous. The SDO_AGGR_CONCAT_LINES aggregate function can be used to find violations of this rule.
drop table z;

CREATE TABLE Z(V VARCHAR2(10) NOT NULL
             , VALIDFROM date NOT NULL
             , VALIDTILL date NOT NULL
             , CONSTRAINT FRO2000 CHECK (to_date('20000101','yyyymmdd') < VALIDFROM)
             , CONSTRAINT TIL2050 CHECK (VALIDTILL <= to_date('20510101','yyyymmdd'))
             , CONSTRAINT FROTIL CHECK (VALIDFROM <= VALIDTILL)
             );

INSERT INTO Z VALUES('meet',to_date('20010101','yyyymmdd'),to_date('20110101','yyyymmdd'));

INSERT INTO Z VALUES('meet',to_date('20110101','yyyymmdd'),to_date('20130101','yyyymmdd'));

INSERT INTO Z VALUES('overlap',to_date('20010101','yyyymmdd'),to_date('20110101','yyyymmdd'));

INSERT INTO Z VALUES('overlap',to_date('20050101','yyyymmdd'),to_date('20130101','yyyymmdd'));

INSERT INTO Z VALUES('disjoint',to_date('20010101','yyyymmdd'),to_date('20050101','yyyymmdd'));

INSERT INTO Z VALUES('disjoint',to_date('20110101','yyyymmdd'),to_date('20130101','yyyymmdd'));

COMMIT;

Create a function that constructs an SDO_GEOMETRY line out of a period.

CREATE OR REPLACE FUNCTION TF(FRO DATE, TIL DATE) RETURN SDO_GEOMETRY deterministic as 
BEGIN
RETURN MDSYS.SDO_GEOMETRY(2002,NULL, NULL, SDO_ELEM_INFO_ARRAY (1,2,1),
    SDO_ORDINATE_ARRAY(to_number(to_char(FRO,'j')),0,to_number(to_char(TIL,'j')),0));
END;
/

A query that groups the periods of the different v values. When the aggregated SDO_GEOMETRY object is a single line (SDO_GTYPE=2002) the requirement holds; otherwise it is a multiline (SDO_GTYPE=2006).

select v
     , min(validfrom) mi
     , max(validtill) ma
     , SDO_AGGR_CONCAT_LINES(tf(validfrom,validtill)).SDO_GTYPE gt
  from z
 group by v
;

v           mi          ma          gt
disjoint    1.1.2001    1.1.2013    2006
meet        1.1.2001    1.1.2013    2002
overlap     1.1.2001    1.1.2013    2006

The violations of the rule are the multilines. The aggregate function may also be used in the HAVING clause of a query.

select v
     , min(validfrom)
     , max(validtill)
     , SDO_AGGR_CONCAT_LINES(tf(validfrom,validtill)).SDO_GTYPE gt
  from z
 group by v
 having SDO_AGGR_CONCAT_LINES(tf(validfrom,validtill)).SDO_GTYPE = 2006
;
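What the geometry concatenation is effectively testing can also be written procedurally: sorted by start date, every period must begin exactly where the previous one ended. A small Python sketch of the same rule, with dates as ISO strings for brevity:

```python
def timeline_ok(periods):
    """True when the (validfrom, validtill) periods meet end to end:
    no overlaps and no gaps - the SDO_GTYPE 2002 case above."""
    periods = sorted(periods)
    return all(prev_till == fro
               for (_, prev_till), (fro, _) in zip(periods, periods[1:]))

# Same three cases as in table z above.
meet     = [("2001-01-01", "2011-01-01"), ("2011-01-01", "2013-01-01")]
overlap  = [("2001-01-01", "2011-01-01"), ("2005-01-01", "2013-01-01")]
disjoint = [("2001-01-01", "2005-01-01"), ("2011-01-01", "2013-01-01")]
```

The geometry version has the advantage of running inside a single SQL aggregate, close to the data.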

2012-04-20

Making a schema empty

A thing for a future Oracle version wish list: schema-level privileges, meaning a way to grant select on all tables of a schema such that the grant also covers tables created afterwards. SQL Server has had this possibility since 2008.

One can find drop-all-schema-objects scripts and discussions. That way you preserve the grants given to your database user and avoid issuing drop user cascade. Sometimes there is only a need to make all schema tables empty; after dropping the tables you lose the grants given to others. Disabling foreign key constraints, truncating the tables and re-enabling the constraints is one way. If your tables are small, truncating may do more work than just deleting. Deleting also requires the foreign keys to be considered: one way is to disable them, delete and re-enable, but another is to delete the rows in the order the foreign keys dictate. Here is an anonymous block deleting all rows from all tables in a schema. It starts with tables that are not referenced by any other table and then, level by level in the foreign key tree, deletes the rows from the rest. You probably do not want to run this in a schema with huge tables.
declare
perform_execute boolean;
procedure putline_or_execute(s varchar2, perform_execute boolean) is
begin
   if perform_execute = true
   then
     begin
        execute immediate s;
     exception when others then
        dbms_output.put_line('FAILED: '||s);
        raise;
     end;
   else dbms_output.put_line(s||';'||chr(10));
   end if;
end putline_or_execute;
begin
   perform_execute := true;
   for com in (
with 
cte(table_name,constraint_name,r_table_name,r_constraint_name,deferred) as (
  select r.table_name,r.constraint_name,p.table_name,p.constraint_name,r.DEFERRED
    from user_constraints r inner join user_constraints p on p.constraint_name = r.r_constraint_name
   where r.constraint_type = 'R' 
     and r.deferred = 'IMMEDIATE'
     and r.status = 'ENABLED'
),
hcte(table_name,constraint_name,r_table_name,r_constraint_name,lvl) as (
  select null,null,e.table_name,null,1
    from user_tables e 
   where e.iot_name is null 
     and e.table_name not in (select r_table_name from cte)
union all
  select pa.table_name,pa.constraint_name,pa.r_table_name,pa.r_constraint_name
       , ch.lvl+1
    from cte pa inner join hcte ch on pa.table_name = ch.r_table_name 
) SEARCH breadth FIRST BY table_name set dep
select 'delete from '||r_table_name statem,max(lvl) mx
  from hcte
group by r_table_name
order by max(lvl),r_table_name
     )
   loop
     putline_or_execute(com.statem, perform_execute);
   end loop;  
end;
/
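The delete order the block computes can be sketched as plain graph code: treat each enabled foreign key as an edge from child table to parent table, start with tables nothing references, and release a parent once all tables referencing it have been emptied. A hedged Python sketch (the table and constraint names are made up; cyclic or self-referencing foreign keys are not handled here):

```python
from collections import deque

def delete_order(tables, fks):
    """fks: (child_table, parent_table) pairs for enabled FKs.
    Returns an order in which 'delete from <table>' is safe:
    every child table is emptied before the parents it references."""
    children = {t: 0 for t in tables}        # outstanding child tables per parent
    parents_of = {t: [] for t in tables}     # child -> the parents it references
    for child, parent in fks:
        children[parent] += 1
        parents_of[child].append(parent)
    queue = deque(t for t in tables if children[t] == 0)   # not referenced by anyone
    order = []
    while queue:
        t = queue.popleft()
        order.append(t)
        for parent in parents_of[t]:
            children[parent] -= 1
            if children[parent] == 0:        # all referencing tables emptied
                queue.append(parent)
    return order                             # shorter than tables if FKs form a cycle

# Hypothetical schema: ORDER_LINE references ORDERS and PRODUCT,
# ORDERS references CUSTOMER.
tables = ["CUSTOMER", "ORDERS", "ORDER_LINE", "PRODUCT"]
fks = [("ORDER_LINE", "ORDERS"), ("ORDER_LINE", "PRODUCT"),
       ("ORDERS", "CUSTOMER")]
```

This is the same breadth-first idea as the SEARCH BREADTH FIRST clause in the recursive CTE above, only outside the database.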

2012-04-04

Misleading ORA-44425

The error message for ORA-44425 is misleading.

ORA-44425: XML DB needs to be installed for creating an XMLType table or column
Cause: An attempt was made to create an XMLType table or column without XML DB installed.

The correct information in the message is that you do not have XML DB installed in your database. What it hides is that you are trying to store your XMLType column as BINARY XML; you do not have to have XML DB installed to use an XMLType column as such. SQL Developer generates the BINARY XML storage option into the DDL for an XMLType column by default.

CREATE TABLE TABLE_1 
    ( 
     Column_1 XMLTYPE 
    ) 
    XMLTYPE COLUMN Column_1 STORE AS BINARY XML 
;
You have the option to store it as CLOB instead.

CREATE TABLE TABLE_1 
    ( 
     Column_1 XMLTYPE 
    ) 
    XMLTYPE COLUMN Column_1 STORE AS CLOB XML 
;


If you need binary-stored XML, install the XML DB option into your database as the error message states. Otherwise you have the option to change the storage method to the database default, CLOB. In SQL Developer this may be found under Relational Models - Physical Models - Tables - TABLE_1 - Columns - Column_1 - right click - Properties - Store as - CLOB.

There is also the option STORE AS OBJECT RELATIONAL. Then you need to deal with ORA-19002: Missing XMLSchema URL. The same properties page has XML Schema URL and XML Schema Elem properties to be filled. Once they are filled, the generated DDL throws ORA-31159: XML DB is in an invalid state. Maybe this error should be the same as ORA-44425.

The latest 11.2.0.3 create table XMLType_column_properties documentation instructs not to use CLOB storage as it might be deprecated in a future release.

Maybe it would be wise just to install XML DB whenever an XMLType column is used. There are many kinds of XML out there, and unfortunately quite often no knowledge of the schema used. In that case CLOB storage may be fine: no need to install XML DB and make Oracle patching take longer.

2012-04-02

Keep first

Green is good. In the database console it means the database is doing work on CPU, not waiting. It might still be, though, that the end user is waiting for the query to finish. These problems are not dealt with by buying more CPUs; maybe faster ones. But more performance may be gained by not doing the work at all.

drop table a cascade constraints;

create table a as 
select mod(level,5) b,level c,'foo'||level d from dual connect by level < 100000;

exec dbms_stats.gather_table_stats(user,'A');

select * from a where rownum <= 10;

1    1    foo1
2    2    foo2
3    3    foo3
4    4    foo4
0    5    foo5
1    6    foo6
2    7    foo7
3    8    foo8
4    9    foo9
0    10    foo10

A simple aggregation from the data:

select b,min(c) c
  from a
 group by b
;

Plan
SELECT STATEMENT  ALL_ROWSCost: 92  Bytes: 40  Cardinality: 5          
    2 HASH GROUP BY  Cost: 92  Bytes: 40  Cardinality: 5      
        1 TABLE ACCESS FULL TABLE RAFU.A Cost: 88  Bytes: 799.992  Cardinality: 99.999  

1    1
2    2
4    4
3    3
0    5

But when we also want the d column included in the result of the aggregation, we need to write the SQL a bit differently. The desired result:

0    5    foo5
1    1    foo1
2    2    foo2
3    3    foo3
4    4    foo4

Here is an analytic approach to get the desired rows.
select b,c,d 
  from (
 select b,c,d, row_number()over(partition by b order by c) rn
   from a
)
where rn=1
;

Plan
SELECT STATEMENT  ALL_ROWSCost: 643  Bytes: 12.699.873  Cardinality: 99.999              
    3 VIEW RAFU. Filter Predicates: "RN"=1  Cost: 643  Bytes: 12.699.873  Cardinality: 99.999          
        2 WINDOW SORT PUSHED RANK  Filter Predicates: ROW_NUMBER() OVER ( PARTITION BY "B" ORDER BY "C")<=1  Cost: 643  Bytes: 1.699.983  Cardinality: 99.999      
            1 TABLE ACCESS FULL TABLE RAFU.A Cost: 88  Bytes: 1.699.983  Cardinality: 99.999  

The analytic way sorts all rows and restricts them afterwards. How about collecting the needed information while aggregating? Here is an alternative using the KEEP and FIRST keywords in SQL.
select b,min(c) c,min(d) keep (dense_rank first order by c) dd
  from a
 group by b
;

Plan
SELECT STATEMENT  ALL_ROWSCost: 93  Bytes: 85  Cardinality: 5          
    2 SORT GROUP BY  Cost: 93  Bytes: 85  Cardinality: 5      
        1 TABLE ACCESS FULL TABLE RAFU.A Cost: 88  Bytes: 1.699.983  Cardinality: 99.999  
Plans are taken from Toad. The cardinality estimate is more accurate in the aggregate version, which may have a huge influence if such code is part of a bigger query. Comparing the analytic approach to the aggregate KEEP FIRST approach we get a 77% improvement in execution time, and it is all CPU time. Measured using Tom Kyte's runstats:

DECLARE
  CURSOR c_rn
  IS
select b,c,d 
  from (
 select b,c,d, row_number()over(partition by b order by c) rn
   from a
)
where rn=1
;
--
  CURSOR c_keepfirst
  IS
select b,min(c) c,min(d) keep (dense_rank first order by c) dd
  from a
 group by b
;
--
BEGIN
   runstats_pkg.rs_start;
  FOR i IN 1 .. 100 LOOP
     FOR rec IN c_rn LOOP
        NULL;
     END LOOP;
  END LOOP;
   runstats_pkg.rs_middle;
  FOR i IN 1 .. 100 LOOP
     FOR rec IN c_keepfirst LOOP
        NULL;
     END LOOP;
  END LOOP;
   runstats_pkg.rs_stop;
END;
/

STAT...CPU used by this sessio       1,872         426      -1,446
STAT...recursive cpu usage           1,872         424      -1,448
STAT...Elapsed Time                  1,873         424      -1,449
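The difference between the two plans can be seen procedurally: the KEEP (DENSE_RANK FIRST ORDER BY c) aggregate only has to track one running winner per group, instead of sorting all rows and discarding most of them. A Python sketch of that one-pass semantics (it ignores ties on c, which cannot occur here since c is unique in table a; the real KEEP FIRST applies min(d) over tied rows):

```python
def keep_first(rows):
    """Per group b: min(c) plus the d of the row holding that minimum,
    like min(d) keep (dense_rank first order by c). One pass, no full sort."""
    best = {}                              # b -> (c, d) of the minimum c so far
    for b, c, d in rows:
        if b not in best or c < best[b][0]:
            best[b] = (c, d)
    return {b: {"c": c, "d": d} for b, (c, d) in best.items()}

# Same data as table a above: b = n mod 5, c = n, d = 'foo'||n.
rows = [(n % 5, n, "foo%d" % n) for n in range(1, 100000)]
```

Like the SORT GROUP BY plan, the work grows with the number of groups kept, not with a full sort of all rows.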


2012-03-21

OUG Harmony 2012 registration open

OUG Harmony 2012 registration is open. Act before 30.4.2012 to be an early bird and save money.

2012-03-20

join strategies to cross join

There are three different join strategies with which the optimizer may perform a join. One of them behaves differently when it comes to a cross join.

In this example the join is executed using nested loops. It may also be hinted to a merge join path.

select * from dual d1 cross join dual d2;

select * from table(dbms_xplan.display_cursor(format=>'basic'));

-----------------------------------
| Id  | Operation          | Name |
-----------------------------------
|   0 | SELECT STATEMENT   |      |
|   1 |  NESTED LOOPS      |      |
|   2 |   TABLE ACCESS FULL| DUAL |
|   3 |   TABLE ACCESS FULL| DUAL |
-----------------------------------

select /*+use_merge(d1 d2)*/ * from dual d1 cross join dual d2;

-------------------------------------
| Id  | Operation            | Name |
-------------------------------------
|   0 | SELECT STATEMENT     |      |
|   1 |  MERGE JOIN CARTESIAN|      |
|   2 |   TABLE ACCESS FULL  | DUAL |
|   3 |   TABLE ACCESS FULL  | DUAL |
-------------------------------------

The third strategy would be a hash join, but that is not possible here: there is no equality predicate to hash on, so the hint is ignored.

select /*+use_hash(d1 d2)*/ * from dual d1 cross join dual d2;

-----------------------------------
| Id  | Operation          | Name |
-----------------------------------
|   0 | SELECT STATEMENT   |      |
|   1 |  NESTED LOOPS      |      |
|   2 |   TABLE ACCESS FULL| DUAL |
|   3 |   TABLE ACCESS FULL| DUAL |
-----------------------------------

Change the query to an inner join with a join condition and the USE_HASH hint is obeyed.
select /*+use_hash(d1 d2)*/ * from dual d1 inner join dual d2 on d1.dummy = d2.dummy;

select * from table(dbms_xplan.display_cursor(format=>'basic +predicate'));

-----------------------------------
| Id  | Operation          | Name |
-----------------------------------
|   0 | SELECT STATEMENT   |      |
|*  1 |  HASH JOIN         |      |
|   2 |   TABLE ACCESS FULL| DUAL |
|   3 |   TABLE ACCESS FULL| DUAL |
-----------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("D1"."DUMMY"="D2"."DUMMY")
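The reason a hash join cannot do a cross join is mechanical: it builds a hash table on the join key of one input and probes it with the other, so it needs an equality predicate to supply the key, and a Cartesian product has none. The mechanism, sketched in Python with toy rows (not Oracle internals):

```python
def hash_join(build_rows, probe_rows, key):
    """Equi-join only: a row pair can meet only inside one hash bucket."""
    buckets = {}
    for row in build_rows:                     # build phase
        buckets.setdefault(row[key], []).append(row)
    return [(b, p)                             # probe phase
            for p in probe_rows
            for b in buckets.get(p[key], [])]

d1 = [{"dummy": "X"}]
d2 = [{"dummy": "X"}]
pairs = hash_join(d1, d2, "dummy")
```

With no key there would be nothing to put in the buckets, which is why the optimizer falls back to nested loops for the cross join above.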

2012-03-19

many to many select -view efficiently -the problem

My last post described a solution to a problem that I did not explain beforehand. I also chose a bad example of two things to be localized: the original problem had no relationship between the localized things, while the example has the citycountry table, which should be used in the citycountrynames view. It was yet again difficult to generate a good example. The original problematic view was something like:
create or replace view citycountrynames_slow as 
select cl.lang
     , ci.city
     , co.country
     , cl.name  cityname
     , col.name countryname
  from city ci 
  cross join country co 
  left outer join citylang cl on cl.city=ci.city
  left outer join countrylang col on col.country=co.country and col.lang=cl.lang
;
A cross join there, and that is not the problem. The plan for the following query was:

select cn.cityname,cn.countryname,sh.shop_name
  from citycountrynames_slow cn 
 inner join shop sh 
    on sh.city=cn.city and sh.country=cn.country
 where cn.city=:city
   and cn.lang=:lang
;

-------------------------------------------------------------
| Id  | Operation                     | Name                |
-------------------------------------------------------------
|   0 | SELECT STATEMENT              |                     |
|   1 |  NESTED LOOPS OUTER           |                     |
|   2 |   NESTED LOOPS                |                     |
|   3 |    VIEW                       |                     |
|   4 |     NESTED LOOPS              |                     |
|   5 |      NESTED LOOPS             |                     |
|   6 |       INDEX UNIQUE SCAN       | CITYLANG_PK         |
|   7 |       INDEX UNIQUE SCAN       | CITY_PK             |
|   8 |      INDEX FAST FULL SCAN     | COUNTRY_PK          |
|   9 |    TABLE ACCESS BY INDEX ROWID| SHOP                |
|  10 |     INDEX RANGE SCAN          | SHOP_COUNTRY_FK_IDX |
|  11 |   INDEX UNIQUE SCAN           | COUNTRYLANG_PK      |
-------------------------------------------------------------

One problem there is the INDEX FAST FULL SCAN of COUNTRY_PK, and the country table was a big one. The view query has the problem of outer joining to several tables; that is why the optimizer is not able to merge the view into the main query. Expressed in the old (+) syntax, the limitation becomes visible:
select cl.lang
     , ci.city
     , co.country
     , cl.name  cityname
     , col.name countryname
  from city ci, country co, citylang cl, countrylang col 
 where cl.city(+)=ci.city
   and col.country(+)=co.country 
   and col.lang(+)=cl.lang
;

ORA-01417: a table may be outer joined to at most one other table
One can avoid ORA-01417 by rewriting the query:
select xx.*, col.name countryname
 from (
select cl.lang
     , ci.city
     , co.country
     , cl.name  cityname
  from city ci, country co, citylang cl
 where cl.city(+)=ci.city) xx, countrylang col 
where  col.country(+)=xx.country 
   and col.lang(+)=xx.lang
;
Actually a somewhat similar structure may be seen at plan line 3: a view without a name appears in the plan. The tuned view from the previous post behaves much more efficiently for this query, with no full scans. The tuning was to add the lang table to the view query and move the many-side parts into select-list subqueries.
select cn.cityname,cn.countryname,sh.shop_name
  from citycountrynames cn 
 inner join shop sh 
    on sh.city=cn.city and sh.country=cn.country
 where cn.city=:city
   and cn.lang=:lang
;

----------------------------------------------------------
| Id  | Operation                     | Name             |
----------------------------------------------------------
|   0 | SELECT STATEMENT              |                  |
|   1 |  INDEX UNIQUE SCAN            | CITYLANG_PK      |
|   2 |  INDEX UNIQUE SCAN            | COUNTRYLANG_PK   |
|   3 |  NESTED LOOPS                 |                  |
|   4 |   NESTED LOOPS                |                  |
|   5 |    NESTED LOOPS               |                  |
|   6 |     INDEX UNIQUE SCAN         | CITY_PK          |
|   7 |     INDEX UNIQUE SCAN         | LANG_PK          |
|   8 |    TABLE ACCESS BY INDEX ROWID| SHOP             |
|   9 |     INDEX RANGE SCAN          | SHOP_CITY_FK_IDX |
|  10 |   INDEX UNIQUE SCAN           | COUNTRY_PK       |
----------------------------------------------------------

2012-03-18

many to many select -view efficiently

The problem to be solved here is described in the next post. A while ago I wrote about creating a view on top of a many-to-many relation. In that article there was no need to select any columns from the many-to-many relation table. A localization view, however, is something you want columns returned from. Here is an example of such a view. A bit of preparation first: we have localized names for countries and cities, a view over those to be used from several places, and a query example that joins a shop table to the view.
drop table shop;

drop table countrylang;

drop table citycountry;

drop table citylang;

drop table country;

drop table city;

drop table lang;



create table lang (lang varchar2(2) constraint lang_pk primary key);

insert into lang values ('fi');

insert into lang values ('en');

insert into lang values ('sv');



create table country(country varchar2(3) constraint country_pk primary key);

insert into country values ('FI');

insert into country values ('SV');


create table countrylang(country constraint countrylang_country_fk references country
                       , lang constraint countrylang_lang_fk references lang
                       , name varchar2(200) not null
                       , constraint countrylang_pk primary key (country,lang)
                       ) organization index compress 1
;

insert into countrylang values ('FI','fi','Suomi');

insert into countrylang values ('FI','sv','Finland');

insert into countrylang values ('FI','en','Finland');

insert into countrylang values ('SV','fi','Ruotsi');

insert into countrylang values ('SV','sv','Sweden');

insert into countrylang values ('SV','en','Sweden');


create table city(city varchar2(3) constraint city_pk primary key);

insert into city values ('tpe');

insert into city values ('hki');

insert into city values ('sto');

create table citylang(city constraint citylang_city_fk references city
                    , lang constraint citylang_lang_fk references lang
                    , name varchar2(200) not null
                    , constraint citylang_pk primary key (city,lang)
                    ) organization index compress 1
;

insert into citylang values ('tpe','fi','Tampere');

insert into citylang values ('tpe','sv','Tammerfors');

insert into citylang values ('tpe','en','Tampere');

insert into citylang values ('hki','fi','Helsinki');

insert into citylang values ('hki','sv','Helsingfors');

insert into citylang values ('hki','en','Helsinki');


create table citycountry(city references city, country references country, constraint citycountry_pk primary key (city,country));

insert into citycountry values('tpe','FI');

insert into citycountry values('hki','FI');

insert into citycountry values('sto','SV');


create view citycountrynames as 
select la.lang
     , ci.city
     , co.country
     , (select cl.name from citylang cl where cl.city=ci.city and cl.lang=la.lang) cityname
     , (select cl.name from countrylang cl where cl.country=co.country and cl.lang=la.lang) countryname
  from lang la, city ci, country co
;

So that was the view. Be sure to populate the underlying tables correctly to avoid nulls.
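The select-list scalar subqueries act like per-row lookups, and Oracle can reuse a result across rows with repeating correlated inputs (scalar subquery caching). The shape of that behaviour, sketched in Python with a hypothetical in-memory lookup table standing in for citylang:

```python
from functools import lru_cache

# Hypothetical in-memory stand-in for the citylang table.
CITYLANG = {("tpe", "fi"): "Tampere", ("hki", "fi"): "Helsinki"}

@lru_cache(maxsize=None)
def cityname(city, lang):
    # Stands in for: (select cl.name from citylang cl
    #                  where cl.city = :city and cl.lang = :lang)
    return CITYLANG.get((city, lang))

rows = [("fi", "tpe"), ("fi", "hki"), ("fi", "tpe")]
result = [(lang, city, cityname(city, lang)) for lang, city in rows]
# The repeated (tpe, fi) input is served from the cache.
```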

create table shop (shop int constraint shop_pk primary key
                 , shop_name varchar2(200) not null
                 , city references city not null
                 , country references country
                 , constraint shop_citycountry_fk foreign key (city,country) references citycountry)
;

create index shop_city_fk_idx on shop(city);

create index shop_country_fk_idx on shop(country);

insert into shop values (1,'Helsinki shop','hki','FI');

insert into shop values (2,'Tampere shop','tpe','FI');

insert into shop values (3,'Stockholm shop','sto','SV');

commit;

select cn.cityname,cn.countryname
  from citycountrynames cn 
 inner join shop sh 
    on sh.city=cn.city and sh.country=cn.country
 where cn.city=:city
   and cn.lang=:lang
;
It is late Saturday evening, so I will not show any execution plans now; try those out yourself. I will have another taste of Isle of Jura.
SQL> variable lang varchar2(2);
SQL> variable city varchar2(3);
SQL> exec :city := 'tpe'

PL/SQL procedure successfully completed.

SQL> exec :lang := 'fi'

PL/SQL procedure successfully completed.

SQL> select sh.shop_name, cn.cityname,cn.countryname
  2    from citycountrynames cn
  3   inner join shop sh
  4      on sh.city=cn.city and sh.country=cn.country
  5   where cn.city=:city
  6     and cn.lang=:lang
  7  ;

SHOP_NAME
------------------------------------------------------

CITYNAME
------------------------------------------------------

COUNTRYNAME
------------------------------------------------------

Tampere shop
Tampere
Suomi

2012-03-14

Store more days of history

Diagnostic pack purchased. By default the workload repository stores eight days of history. This is how it is changed to 365 days:
select snap_interval, retention
from dba_hist_wr_control;

SNAP_INTERVAL                                     
--------------------------------------------------
RETENTION                                         
--------------------------------------------------
+00 01:00:00.000000                               
+08 00:00:00.000000                               
                                                                                

execute dbms_workload_repository.modify_snapshot_settings(interval => 60,retention => 525600);

select snap_interval, retention
from dba_hist_wr_control;

SNAP_INTERVAL                                     
--------------------------------------------------
RETENTION                                         
--------------------------------------------------
+00 01:00:00.000000                               
+365 00:00:00.000000 
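Once the new retention is in effect, you can check how far back the repository currently reaches. A small sketch:

```sql
-- Age of the oldest retained snapshot in the workload repository.
select min(begin_interval_time) oldest_snapshot
  from dba_hist_snapshot
;
```

The oldest snapshot creeps toward the full 365 days only as time passes; history is not recreated retroactively.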

2012-03-08

Treasure hunting and blocking sessions

Bored with being just a Sherlock Holmes dba? Take a look at treasurehunt.solita.fi; some tricky puzzles are available for a week. The prize is available only to scan-agile participants, but the pages seem to be open to the whole internet.

Back to Sherlock Holmesing. Blocking sessions are tricky ones, at least when the bug report is talking about yesterday. The application is throwing a five minute transaction timeout to the end user's face. Lucky for us, the Diagnostic pack is available.

The bug information is tracked down to an action SOS.createSO. And lucky for us, we have module, action and client_identifier populated in the session while accessing the database.
select session_id,blocking_session,blocking_session_serial#,sql_exec_start,min(sample_time),max(sample_time), count(*), action
  from  DBA_HIST_ACTIVE_SESS_HISTORY s
 where sample_time > sysdate -300 
   and event= 'enq: TX - row lock contention' 
   and action = 'SOS.createSO'
 group by session_id,blocking_session,blocking_session_serial#,sql_exec_start,action
 having count(*) > 29
 order by min(sample_time) desc
;

sessio blocki blocking sql_exec_start         min(sample_time)           max(sample_time)           count action
465    699    17041    07.03.2012 12:32:45    07.03.2012 12:32:49,839    07.03.2012 12:37:40,203    30    SOS.createSO

We see that our reported bug session 465 was blocked by session 699. The five minute timeout is readable from count(*) being 30, as DBA_HIST samples are ten seconds apart. There is no other information about the blocking session than the sid and serial#. Perhaps we are lucky enough to find some samples from this blocking session around the same time.
select s.*
  from  DBA_HIST_ACTIVE_SESS_HISTORY s 
 where session_id = 699 
   and session_serial# = 17041
;
No rows
Unlucky us. How about other sessions influenced by the held row lock?
select session_id,blocking_session,blocking_session_serial#,sql_exec_start,min(sample_time),max(sample_time), count(*), action
  from  DBA_HIST_ACTIVE_SESS_HISTORY s
 where sample_time > sysdate -300 
   and event= 'enq: TX - row lock contention' 
 group by session_id,blocking_session,blocking_session_serial#,sql_exec_start,action
 having count(*) > 29
 order by min(sample_time) desc
;

sessio blocki blocking sql_exec_start         min(sample_time)           max(sample_time)           count action
465    699    17041    07.03.2012 12:32:45    07.03.2012 12:32:49,839    07.03.2012 12:37:40,203    30    SOS.createSO
30     699    17041    07.03.2012 12:31:11    07.03.2012 12:31:19,712    07.03.2012 12:36:10,080    30    SOS.createSOL
461    699    17041    07.03.2012 12:16:32    07.03.2012 12:16:38,563    07.03.2012 12:21:28,946    30    SOS.createSOL
406    699    17041    07.03.2012 12:14:21    07.03.2012 12:14:28,402    07.03.2012 12:19:18,780    30    SOS.createSOL
657    304    2655    02.03.2012 12:38:37    02.03.2012 12:38:41,735    02.03.2012 12:43:32,113    30    SOS.createSOL


Session 699 blocked also other sessions doing action SOS.createSOL. The time frame during which 699 was holding the lock got wider: it had started already at 12:14:21.

And yet again lucky for us, a similar thing happened a couple of days earlier too. Another blocking session is found.
select sample_time,sql_exec_start,s.module,s.action,s.client_id
  from  DBA_HIST_ACTIVE_SESS_HISTORY s 
 where session_id = 304 
   and session_serial# = 2655
;

sample_tim sql_exec module   action       client_id
02.03.2012 12:25:20 services RIS.updateRI integration
02.03.2012 12:47:47 works    SS.getSOT    obfuscateduser
And yet again lucky us. We get suspect end users and actions from before and after the problem. With this information and the application logs it is possible to find the guilty code lines. Yet again the Diagnostic pack license purchase pointed us in the right direction. The problem would otherwise have remained unsolved.
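ASH tells the story afterwards. When a lock is being held right now, the blocker can be found without any extra licensing from v$session. A minimal sketch:

```sql
-- Live view of waiting sessions and their blockers; no Diagnostic pack needed.
select sid, serial#, blocking_session, event, seconds_in_wait, module, action
  from v$session
 where blocking_session is not null
;
```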

2012-03-07

Time to learn model clause

I received the latest paper printed Oracle Magazine today and read what the soon to be OUG Harmony 2012 keynote speaker has to say in the magazine's askTom column, dealing with the problem of Grouping Ranges.
There are so many ways of writing code. The article introduces PL/SQL, recursive common table expression and model clause approaches. I just added a plain SQL and analytic function solution to the problem discussion at http://asktom.oracle.com. Not so pretty, but possible. Maybe the model clause version is still the most efficient one for this problem. Maybe I should start rehearsing the model clause into my coding tool set...
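For warming up, here is a minimal model clause sketch of my own, unrelated to the askTom grouping ranges problem: a running total computed with an inter-row rule.

```sql
-- Minimal MODEL clause warm-up: running total over generated rows.
select n, running
  from (select level n from dual connect by level <= 5)
 model
   dimension by (n)
   measures (0 running)
   rules (
     running[any] order by n = nvl(running[cv(n)-1], 0) + cv(n)
   )
;
```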

2012-02-29

Speaking at OUG Harmony 2012

I will be speaking about execution plans at OUG Harmony, 30.-31.5.2012, Hämeenlinna, Finland. The draft agenda is published at Ougf.fi. Keynote speakers: Chris Date and Tom Kyte.

2012-01-24

XMLType from no rows

I used the DBMS_XMLGEN package in my post constraint name generated. There I used the GETXML function and got problems when the schema constraints were all named, so the query result was empty: no rows.

SELECT XMLTYPE(
         DBMS_XMLGEN.GETXML('SELECT 1 FROM dual WHERE 1=0')
       ) AS xml
  FROM dual
;

ORA-06502: PL/SQL: numeric or value error
ORA-06512: at "SYS.XMLTYPE", line 272
ORA-06512: at line 1

There is a built-in DBMS_XMLGEN.GETXMLTYPE function to get the XMLType directly. This is how you get rid of the ORA-06502 problem:

SELECT DBMS_XMLGEN.GETXMLTYPE('SELECT 1 FROM dual WHERE 1=0') AS xml
  FROM dual
;

2012-01-20

A single serializable session throwing ORA-08177

We have a newly created table in an 11.2.0.3 Enterprise Edition instance. A single session is throwing ORA-08177 can't serialize access for this transaction. The whole service level is marked for trace:

exec DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE('servicename')

No one else is updating the tables involved. Only two trace files come out. One contains only the select 1 from dual connection pool check. And in the other trace may be found:
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE
END OF STMT
Yes, we wanted transactions to be serialized. A couple of inserts and internal statements. And an update to the seg$ table throwing ORA-08177:
PARSING IN CURSOR #47594977567928 len=314 dep=1 uid=0 oct=6 lid=0 tim=1327068762208173 hv=3096556448 ad='de83deb8' sqlid='0kkhhb2w93cx0'
update seg$ set type#=:4,blocks=:5,extents=:6,minexts=:7,maxexts=:8,extsize=:9,extpct=:10,user#=:11,iniexts=:12,lists=decode(:13, 65535, NULL
, :13),groups=decode(:14, 65535, NULL, :14), cachehint=:15, hwmincr=:16, spare1=DECODE(:17,0,NULL,:17),scanhint=:18, bitmapranges=:19 where t
s#=:1 and file#=:2 and block#=:3
END OF STMT
PARSE #47594977567928:c=0,e=673,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=0,tim=1327068762208172
EXEC #47594977567928:c=1999,e=1411,p=0,cr=3,cu=1,mis=1,r=1,dep=1,og=4,plh=2170058777,tim=1327068762209667
STAT #47594977567928 id=1 cnt=0 pid=0 pos=1 obj=0 op='UPDATE  SEG$ (cr=3 pr=0 pw=0 time=116 us)'
STAT #47594977567928 id=2 cnt=1 pid=1 pos=1 obj=14 op='TABLE ACCESS CLUSTER SEG$ (cr=3 pr=0 pw=0 time=16 us cost=2 size=68 card=1)'
STAT #47594977567928 id=3 cnt=1 pid=2 pos=1 obj=9 op='INDEX UNIQUE SCAN I_FILE#_BLOCK# (cr=2 pr=0 pw=0 time=6 us cost=1 size=0 card=1)'
CLOSE #47594977567928:c=0,e=2,dep=1,type=3,tim=1327068762209778
EXEC #47594977567928:c=0,e=125,p=0,cr=3,cu=1,mis=0,r=1,dep=1,og=4,plh=2170058777,tim=1327068762209931
CLOSE #47594977567928:c=0,e=1,dep=1,type=3,tim=1327068762209972
EXEC #47594977567928:c=0,e=129,p=0,cr=4,cu=1,mis=0,r=1,dep=1,og=4,plh=2170058777,tim=1327068762210127
CLOSE #47594977567928:c=0,e=1,dep=1,type=3,tim=1327068762210167
EXEC #47594984047704:c=50992,e=50764,p=7,cr=576,cu=241,mis=1,r=0,dep=0,og=1,plh=0,tim=1327068762211409
ERROR #47594984047704:err=8177 tim=1327068762211441
STAT #47594984047704 id=1 cnt=0 pid=0 pos=1 obj=0 op='LOAD TABLE CONVENTIONAL  (cr=0 pr=0 pw=0 time=1 us)'

*** 2012-01-20 16:12:42.223
WAIT #47594984047704: nam='log file sync' ela= 11537 buffer#=84 sync scn=604674242 p3=0 obj#=-1 tim=1327068762223055
WAIT #47594984047704: nam='SQL*Net break/reset to client' ela= 21 driver id=1413697536 break?=1 p3=0 obj#=-1 tim=1327068762223170
WAIT #47594984047704: nam='SQL*Net break/reset to client' ela= 3715 driver id=1413697536 break?=0 p3=0 obj#=-1 tim=1327068762226909

After a while the problem vanished... No solutions in this post. Just wondering why...
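Whatever the cause, an application using serializable transactions should be prepared to retry on ORA-08177. A hedged sketch of such a retry loop; the table t and the insert are hypothetical:

```sql
-- Retry a serializable transaction a few times on ORA-08177.
declare
  cannot_serialize exception;
  pragma exception_init(cannot_serialize, -8177);
  max_retries constant pls_integer := 3;
begin
  for attempt in 1 .. max_retries loop
    begin
      execute immediate 'alter session set isolation_level = serializable';
      insert into t(n) values (1);
      commit;
      exit;  -- success, no more retries needed
    exception
      when cannot_serialize then
        rollback;
        if attempt = max_retries then
          raise;  -- give up after the last attempt
        end if;
    end;
  end loop;
end;
/
```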

2012-01-19

Oracle instance memory usage

The right place to look for the memory usage of a process is at the operating system level. Interpreting the shared memory usage might be misleading: the top command seems to add the shared part to each process and thus makes it hard to get the right numbers. Here is a blog post trying to do that. Though you could also ask the numbers from Oracle. Parameters set the limits on Oracle instance memory usage: an 11g instance might have a memory target set, or sga and pga set separately.
select name,display_value 
  from v$parameter 
 where name like 'mem%target' 
    or name like 'pga%' 
    or name like 'sga%'
;

The actual PGA usage might hit and even go above those limits. The current allocation can be queried:
select round(sum(bytes)/1024/1024/1024,3) SGA_G 
  from v$sgastat;

select round(value/1024/1024/1024,3) PGA_G
  from v$pgastat 
 where name = 'total PGA allocated';

Most often the database load varies over time. For resource planning it is vital to know how the memory usage changes during night time. There might be batch jobs running whose behavior is not seen during day time. If you have the Diagnostic pack purchased you can also ask what the situation was earlier. Here is a query getting hourly memory usage of an Oracle instance:
select sn.INSTANCE_NUMBER, sga.allo sga, pga.allo pga,(sga.allo+pga.allo) tot,trunc(SN.END_INTERVAL_TIME,'mi') time
  from
(select snap_id,INSTANCE_NUMBER,round(sum(bytes)/1024/1024/1024,3) allo 
   from DBA_HIST_SGASTAT 
  group by snap_id,INSTANCE_NUMBER) sga
,(select snap_id,INSTANCE_NUMBER,round(sum(value)/1024/1024/1024,3) allo 
    from DBA_HIST_PGASTAT where name = 'total PGA allocated' 
   group by snap_id,INSTANCE_NUMBER) pga
, dba_hist_snapshot sn 
where sn.snap_id=sga.snap_id
  and sn.INSTANCE_NUMBER=sga.INSTANCE_NUMBER
  and sn.snap_id=pga.snap_id
  and sn.INSTANCE_NUMBER=pga.INSTANCE_NUMBER
order by sn.snap_id desc, sn.INSTANCE_NUMBER
;


2012-01-16

Longest roman number

It is possible to ask the Oracle database for the Roman numeral presentation of a number: just use the to_char function with the RN format. Just wondering, what is the longest Roman numeral?
select rn,len
 from (
  select rn, max(length(rn))over() mlen, length(rn) len
    from (
    select to_char(level,'RN') rn from dual connect by level <= 4000
    )
  )
where mlen=len
;
Well, they are not all really 15 characters; the RN format pads with leading spaces. Need to trim.
select n, rn,len
 from (
  select n, rn, max(length(rn))over() mlen, length(rn) len
    from (
    select level n, trim(to_char(level,'RN')) rn from dual connect by level <= 4000
    )
  )
where mlen=len
;

3888 MMMDCCCLXXXVIII 15
4000 ############### 15
Numbers 4000 and above are not converted. Negative numbers and zero are not available either. But in between, 3888 seems to be "the longest".

2012-01-10

Using client identifier to populate a column

Earlier I wrote about the missing dbms_session.get_client_identifier. Here is a compound trigger example using the session client identifier to populate a column in a table.

drop table t;

create table t(n number, client_id varchar2(30) not null);

create or replace trigger t_iuc for insert or update on t 
  compound trigger
  cli varchar2(30);
 before statement is
  begin
   cli := SYS_CONTEXT('userenv', 'CLIENT_IDENTIFIER');
  end before statement;
 before each row is
  begin
   :new.client_id := cli;
  end before each row;
end;
/

exec dbms_session.set_identifier('rafu')

insert into t(n) select level from dual connect by level < 4;

select * from t;

1 rafu
2 rafu
3 rafu
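The identifier stays in place until the session changes it, so connection pools should reset it between borrowers. A sketch using dbms_session.clear_identifier; note that the NOT NULL constraint on client_id then rejects inserts from sessions that have not identified themselves:

```sql
-- Clear the identifier; subsequent inserts fail on the NOT NULL client_id.
exec dbms_session.clear_identifier

insert into t(n) select 4 from dual;
-- ORA-01400: cannot insert NULL into the client_id column
```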

2012-01-04

Constraint name generated

Generated constraint names make development, testing and production environment error messages vary. And those reported SYS_C006206630 names could be more informative. To identify generated constraint names, the *_constraints views have the information available in the generated column. The constraint search_condition is a long typed column and thus hard to use. The working with long columns document gives tools to convert the search_condition column to varchar2. This query reports constraints with generated names in the user's schema:
WITH xml AS (
          SELECT DBMS_XMLGEN.GETXMLTYPE(
                     'select co.constraint_name
                           , co.table_name
                           , cc.column_name
                           , co.constraint_type
                           , co.r_constraint_name
                           , co.search_condition  
                        from user_constraints co
                        left outer join user_cons_columns cc
                          on co.constraint_name = cc.constraint_name 
                       where co.generated = ''GENERATED NAME''')
                    AS xml
          FROM   dual
          ), concol as (
  SELECT extractValue(xs.object_value, '/ROW/TABLE_NAME')       AS table_name
  ,      extractValue(xs.object_value, '/ROW/COLUMN_NAME')      AS column_name
  ,      extractValue(xs.object_value, '/ROW/CONSTRAINT_TYPE')  AS constraint_type
  ,      extractValue(xs.object_value, '/ROW/CONSTRAINT_NAME')  AS constraint_name
  ,      extractValue(xs.object_value, '/ROW/R_CONSTRAINT_NAME')  AS r_constraint_name
  ,      extractValue(xs.object_value, '/ROW/SEARCH_CONDITION') AS search_condition
  FROM   xml x
  ,      TABLE(XMLSEQUENCE(EXTRACT(x.xml, '/ROWSET/ROW'))) xs
)
select cc.* 
  from concol cc
order by table_name, column_name,constraint_type
;

Give names to the reported constraints in your modeling tool. NOT NULL checks are not first in the list to be named: NOT NULL error messages are informative even with generated names. But naming those too makes the different (DEV, TST, PRD) schemas a bit more similar. SQL Developer Data Modeler 3.1 EA3 has the possibility to name those also in the relational model; formerly it was only possible in the physical model. If it feels like too much work modeling those, at least rename the reported ones:

ALTER TABLE test RENAME CONSTRAINT SYS_C006206630 TO test_pk;
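Better still, name the constraints already at create time, so no generated names appear in the first place. A sketch with a hypothetical table:

```sql
-- Constraints named inline at create time; no SYS_C names generated.
create table test (
  test  int           constraint test_pk primary key
, name  varchar2(200) constraint test_name_nn not null
)
;
```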


Blog Archive

About Me

My photo
I am Timo Raitalaakso. I have been working since 2001 at Solita Oy as a Senior Database Specialist. I have received an Oracle ACE nomination. My main focus is on projects involving the Oracle database. In this Rafu on db blog I write about interesting issues that evolve from my interaction with databases. Mainly Oracle.