2011-12-28

unindex 11.2 cross schema

Foreign keys might point to another schema. Here is an unindex query for such a situation. My earlier queries unindex and unindex 11.2 were built on top of the user_* views. Here the query uses the all_* views. I am not using the dba_* views, as the all_* views are usable for a wider audience than just the dba privileged. Reminder: you probably do not need all of those indexes, but missing ones might explain some poor execution times or TM lock waits.

select case when i.index_name is not null
            then 'OK'
            else '****'
       end ok
     , c.owner
     , c.table_name
     , c.constraint_name
     , c.cols
     , i.index_name
from (
  select a.owner
       , a.table_name
       , a.constraint_name
       , listagg(b.column_name, ' ' ) 
          within group (order by column_name) cols
      from all_constraints a, all_cons_columns b
     where a.owner = b.owner
       and a.constraint_name = b.constraint_name
       and a.constraint_type = 'R'
  group by a.owner,a.table_name, a.constraint_name
 ) c
 left outer join
 (
  select index_owner
       , table_name
       , index_name
       , cr
       , listagg(column_name, ' ' ) 
          within group (order by column_name) cols
    from (
        select index_owner
             , table_name
             , index_name
             , column_position
             , column_name
             , connect_by_root(column_name) cr
          from all_ind_columns
       connect by prior column_position-1 = column_position
              and prior index_name = index_name
              and prior index_owner = index_owner
         )
    group by index_owner, table_name, index_name, cr
) i on c.owner = i.index_owner and c.cols = i.cols and c.table_name = i.table_name
order by ok,owner,table_name,constraint_name;
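To try it out, here is a minimal single-schema sketch; the table and constraint names below are made up for illustration. The foreign key has no supporting index, so the query above should list it with '****'.

drop table child_t;

drop table parent_t;

create table parent_t (id number constraint parent_t_pk primary key);

create table child_t ( id number constraint child_t_pk primary key
                     , parent_id number constraint child_t_parent_fk references parent_t);

-- creating, for example, an index on child_t(parent_id)
-- should turn the reported row into an 'OK' row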

2011-12-20

The index could be unique

drop table uq;

create table uq(pk int constraint uq_pk primary key, col int, col2 int);

create index uq_col_idx on uq(col,pk);

Index uq_col_idx is not unique, but it could be defined as such: it contains all the columns of the unique primary key. Finding such candidates is just another relational division question.
with co as (
select con.constraint_name
     , con.table_name
     , cco.column_name
     , count(*) over (partition by con.constraint_name) ccc 
  from user_constraints con 
 inner join user_cons_columns cco 
    on cco.constraint_name=con.constraint_name 
 where con.constraint_type in ('P','U') 
   and con.deferrable = 'NOT DEFERRABLE' 
), ix as (
select idx.table_name,idx.index_name,ico.column_name,idx.uniqueness
  from user_indexes idx 
 inner join user_ind_columns ico 
    on idx.index_name = ico.index_name
 where exists (
   select 0 from user_tab_columns col
    inner join user_ind_columns coi 
    on coi.table_name = col.table_name
   and coi.column_name = col.column_name
    where idx.index_name = coi.index_name
      and col.nullable = 'N'
 )
), ixco as (
select ix.table_name
     , ix.index_name
     , ix.column_name
     , co.constraint_name
     , co.ccc
     , count(*) over (partition by co.constraint_name,ix.index_name) ixc
     , ix.uniqueness
  from co, ix
 where co.table_name=ix.table_name 
  and co.column_name=ix.column_name
)
select distinct table_name,index_name could_be_unique,constraint_name based_on
  from ixco
 where uniqueness = 'NONUNIQUE'
   and ccc=ixc
 order by table_name,index_name
;

TABLE_NAME                     COULD_BE_UNIQUE                BASED_ON
------------------------------ ------------------------------ ------------------------------
UQ                             UQ_COL_IDX                     UQ_PK

drop index uq_col_idx;

create unique index uq_col_idx on uq(col,pk);


2011-12-14

A table vanishing from a plan

Christian Antognini published his presentation slides Challenges and Chances of the 11g Query Optimizer. Worth reading to get a nice overview. The 11g new feature "join elimination" is mentioned there. There is similar behavior even in the 10gR2 optimizer: a table mentioned in the query text vanishes from the query plan. Here it is not a join but a correlated subquery with an aggregate.
select 1 
  from dual mai 
 where exists (
   select max(1) 
     from dual inn 
    where mai.dummy=inn.dummy
)
;

---------------------------------
| Id  | Operation        | Name |
---------------------------------
|   0 | SELECT STATEMENT |      |
|   1 |  FAST DUAL       |      |
---------------------------------
The subquery and the "inn" table are not seen in the plan. There is no need for them, as the aggregate always returns a row. This is not the case in my earlier post about not exists null; there the subquery is not correlated and it uses not exists. The correlated not exists gets a similar elimination, but in addition adds a filter(NULL IS NOT NULL) before even entering the first table.
select 1 
  from dual mai 
 where not exists (
   select max(1) 
     from dual inn 
    where mai.dummy=inn.dummy
)
;

---------------------------------
| Id  | Operation        | Name |
---------------------------------
|   0 | SELECT STATEMENT |      |
|*  1 |  FILTER          |      |
|   2 |   FAST DUAL      |      |
---------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(NULL IS NOT NULL)
One could ask, why would somebody write such a query...
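For comparison, here is a minimal sketch of the 11g join elimination itself that the slides talk about. The tables and the foreign key are made up for illustration, and the constraint has to be enabled and validated.

create table par (id number constraint par_pk primary key);

create table chi ( id number constraint chi_pk primary key
                 , par_id number not null constraint chi_par_fk references par);

select chi.id
  from chi
 inner join par on par.id = chi.par_id;

-- no column of PAR is selected and the foreign key guarantees the match,
-- so the 11g optimizer can drop PAR from the plan altogether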

2011-11-25

A Question

I attended the Jonathan Lewis seminar three weeks ago and wrote just after that about information overload. The things learned there have helped me understand issues I have to deal with around the Oracle optimizer. The amount of information gained needs some time to sink in. One issue discussed there was a question. The original written question was "How many choices does Oracle have in this example?", with the addition "for all versions".

drop table t1;

create table t1( 
  n1 number
, n2 number
, n3 number
, constraint t1p primary key(n1)
, constraint t1u unique (n2));

select count(*) from t1;

I answered wrongly, mentioning also that it depends on the version. As so often, my mind started to wonder how I came up with such an answer. Maybe I was thinking of a different question: how many different execution plan possibilities does Oracle have in this example? Unfortunately I am not attending UKOUG to throw this to the OakTable Challenge. Feel free to use it: "Give me at least one that was not in the list of Jonathan's answers and how to get there." I will be coming back to the issue after the 7th of December.

2011-11-23

Filtering outer join to hash join plan

I gave a SQL presentation a couple of years ago. There I mentioned the possibility to restrict an outer join in the join condition when using ANSI joins. Every now and then I have seen such a join condition, maybe even written one myself. This week such a query has been bothering me with a long running nested loops access path. It was a simpler query than the one with an outer join exists predicate I have mentioned earlier. While dealing with it I saw a quite funny looking predicate.

drop table aa;

drop table bb;

create table aa as 
select rownum id
     , nullif(mod(rownum,3),1) val 
  from dual connect  by level < 12;

create table bb as 
select rownum id
     , nullif(mod(rownum,3),0)+10 val 
  from dual connect by level < 9;

alter table aa modify id not null;

alter table bb modify id not null;

select aa.* from aa;

        ID        VAL
---------- ----------
         1
         2          2
         3          0
         4
         5          2
         6          0
         7
         8          2
         9          0
        10
        11          2

select bb.* from bb;

        ID        VAL
---------- ----------
         1         11
         2         12
         3
         4         11
         5         12
         6
         7         11
         8         12

Desired results:
   
        ID        VAL         ID        VAL
---------- ---------- ---------- ----------
         1
         2          2          2         12
         3          0
         4
         5          2          5         12
         6          0
         7
         8          2          8         12
         9          0
        10
        11          2
A normal outer join uses a hash join plan. Adding the predicate aa.val = 2 to the join condition I get the result I want.
select * 
  from aa 
  left outer join bb 
    on aa.id=bb.id
;

-----------------------------------
| Id  | Operation          | Name |
-----------------------------------
|   0 | SELECT STATEMENT   |      |
|*  1 |  HASH JOIN OUTER   |      |
|   2 |   TABLE ACCESS FULL| AA   |
|   3 |   TABLE ACCESS FULL| BB   |
-----------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("AA"."ID"="BB"."ID")
   
   
select * 
  from aa 
  left outer join bb 
    on aa.id=bb.id and aa.val = 2
;

-------------------------------------
| Id  | Operation            | Name |
-------------------------------------
|   0 | SELECT STATEMENT     |      |
|   1 |  NESTED LOOPS OUTER  |      |
|   2 |   TABLE ACCESS FULL  | AA   |
|   3 |   VIEW               |      |
|*  4 |    FILTER            |      |
|*  5 |     TABLE ACCESS FULL| BB   |
-------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter("AA"."VAL"=2)
   5 - filter("AA"."ID"="BB"."ID")

But the 11.1.0.7 optimizer goes to a nested loops path. My problem is that the table BB is kind of big, and since the filter on plan row 4 is not filtering out that many rows, the execution full scans the table BB far too many times. Throwing the same query to the 11.2.0.3 optimizer I get a hash join access path. Nice. That was something I was hoping for. But look at the access predicate.
-----------------------------------
| Id  | Operation          | Name |
-----------------------------------
|   0 | SELECT STATEMENT   |      |
|*  1 |  HASH JOIN OUTER   |      |
|   2 |   TABLE ACCESS FULL| AA   |
|   3 |   TABLE ACCESS FULL| BB   |
-----------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("AA"."ID"="BB"."ID" AND "AA"."VAL"=CASE  WHEN ("BB"."ID"
              IS NOT NULL) THEN 2 ELSE 2 END )

Obviously the outer join (+) marks are missing next to BB.ID; you need to read that from the word OUTER next to the HASH JOIN. SQL Developer is able to visualize the predicate differently.

AA.ID=BB.ID(+) AND AA.VAL=CASE  WHEN (BB.ID(+) IS NOT NULL) THEN 2 ELSE 2 END

But that case expression seems kind of weird: "if something then a constant, otherwise the same constant". Should it not be possible to optimize that case expression into a simple constant 2? I rewrote my query to use the old style join. With 11.2.0.3 the plan and the results stay the same, but going back to 11.1.0.7 things change.

select aa.*,bb.* 
 from aa, bb 
where aa.id=bb.id(+) 
  and aa.val = case when (bb.id(+) is not null) then 2 else 2 end
;

-----------------------------------
| Id  | Operation          | Name |
-----------------------------------
|   0 | SELECT STATEMENT   |      |
|*  1 |  HASH JOIN OUTER   |      |
|*  2 |   TABLE ACCESS FULL| AA   |
|   3 |   TABLE ACCESS FULL| BB   |
-----------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("AA"."ID"="BB"."ID")
   2 - filter("AA"."VAL"=2)
   
The case expression is optimized into a simple constant and used as a filter straight on the table AA. The query results change compared to the 11.2.0.3 results. Actually the result I wanted comes from leaving the else part out of the predicate.
select aa.*,bb.* 
 from aa, bb 
where aa.id=bb.id(+) 
  and aa.val = case when (bb.id(+) is not null) then 2 end
;
-----------------------------------
| Id  | Operation          | Name |
-----------------------------------
|   0 | SELECT STATEMENT   |      |
|*  1 |  HASH JOIN OUTER   |      |
|   2 |   TABLE ACCESS FULL| AA   |
|   3 |   TABLE ACCESS FULL| BB   |
-----------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("AA"."ID"="BB"."ID" AND "AA"."VAL"=CASE  WHEN ("BB"."ID"
              IS NOT NULL) THEN 2 END )
The results are what I was after. This may be rewritten back to an ANSI join that is understood also by the 11.1.0.7 optimizer, and the execution goes to a hash join path.
select aa.*,bb.* 
  from aa 
  left outer join bb 
    on aa.id=bb.id 
   and aa.val = case when (bb.id is not null) then 2 end
;
I am satisfied with the results. Well, let us throw another problem at the optimizers: how about nulls? Change the "and aa.val =" to "and aa.val is null" and even 11.2.0.3 is not able to go to a hash join plan. Here is the problematic nested loops access path query and a dirty trick on the way to a hash path.

select aa.*,bb.*
  from aa 
  left outer join bb 
    on aa.id=bb.id 
   and aa.val is null
;

select * 
  from aa 
  left outer join bb 
    on aa.id=bb.id 
   and coalesce(nullif(aa.val,bb.id),aa.val) is null
;
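The dirty trick relies on coalesce(nullif(x,y),x) always evaluating back to x, while still referencing bb.id so the optimizer treats the condition as a join expression. A quick sanity check on dual, values chosen arbitrarily:

-- x <> y: nullif returns x; x = y: nullif returns null and coalesce falls back to x
select coalesce(nullif(1,2),1) x_ne_y
     , coalesce(nullif(2,2),2) x_eq_y
  from dual;

And when x itself is null, both arguments are null and the whole expression stays null, which is exactly the "aa.val is null" test wanted here.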


2011-11-17

Skip locked


The SQL SELECT SKIP LOCKED syntax is not mentioned in the new features part of the SQL documentation. It seems to have been there already in 11gR1.


SQL> create table skiplocked as select  level n from dual connect by level < 3;

Table created.

SQL> select * from skiplocked where n=1 for update;

         N
----------
         1


At the same time another session


SQL> select * from skiplocked;

         N
----------
         1
         2

SQL> select * from skiplocked for update skip locked;

         N
----------
         2

As 11gR1 has been around for a while, Rob van Wijk has already written about the feature.
http://rwijk.blogspot.com/2007/12/parallellism-in-skip-locked-scenario.html
http://rwijk.blogspot.com/2009/02/for-update-skip-locked.html
"when using FOR UPDATE SKIP LOCKED, records are locked when they are fetched, not when the cursor is opened" Need to read those again when the need for this feature comes...

Installing an agent and not a free lunch

Installing a Cloud Control 12 agent using the rpm method. So not letting Cloud Control do the agent installation, but doing it manually at the agent node. While generating the rpm
$OMS_HOME/bin/emcli get_agentimage_rpm -destination=/tmp -platform="Linux x86-64"
I got an error
Agent image to rpm conversion failed
and the logfile stating
Directory /usr/lib/oracle doesnt exist. Please create the directory with write permissions and then retry the emcli command.
Well, that is mentioned in the document prerequisites, but the requirement is listed for the server OMS side. So as root, before issuing the command:
mkdir /usr/lib/oracle
chown oracle:oinstall /usr/lib/oracle
Also note that the step "Editing agent.properties File (Only for Virtual Hosts)" here means (Mandatory). Put the agent node host name there.

ORACLE_HOSTNAME (Only for Virtual Hosts) Enter the virtual host name where you want to install the Management Agent. 

And, before going on, make sure the machines can see each other. After installing the agent I had a small issue with machine naming in /etc/hosts. The installed agent and host appeared yellow, "Pending", in the summary pie chart. After fixing the visibility of the agent machine, emcli resyncAgent fixed the issue.

$OMS_HOME/bin/emcli resyncAgent -agent="agentname:3872"

Would it not be cool to have the following situation possible?
But there is no such thing as a free lunch. XE Options and Major Features Not Included ... Tuning Pack. Nice to see that the XE 11g installation changes control_management_pack_access from the default DIAGNOSTIC+TUNING to NONE, actually with a comment that the default is changed because this is not an EE installation.

Before starting to install an agent I would not have guessed I would end up browsing licensing documentation. Seems like even in PE: the Management Packs are not included in Personal Edition.


Next up is the offline agent loading and generating oracle-agt-12.1.0.1.0-1.0.i386.rpm, as the Cloud Control has only oracle-agt-12.1.0.1.0-1.0.x86_64.rpm available and I would like to try out agent installation also with -platform="Linux x86". Let's see how that goes in the future...

2011-11-03

Information overload

More than two days of Jonathan Lewis and then some other OUGF autumn seminar presentations. Feels like I am a beginner. I actually do not know enough. There is so much to learn about the basics, such as the change vector in Oracle. Learning the basics is much harder to figure out today than ten years ago: there are so many performance improvements disturbing the basic calculations based on the numbers from simple tests.

Mark in your calendars the last two days of May 2012. There will be an OUGF seminar in Hämeenlinna. Just another time to feel like a beginner again. And better yet, learn things to unlearn...

Well, just a thought, borrowing an example from Jonathan's teaching. Should the bind name be considered a part of the SQL text by the optimizer?

drop table t1;

drop table t2;

create table t1(n1 number, n2 number);

create table t2(n1 number, n2 number);

create index t1_i1 on t1(n1);

create index t2_i1 on t2(n1);

First without binds.
 
select t1.n2,t2.n2
 from t1, t2
 where t1.n1=1
   and t2.n1=1
   and t1.n1=t2.n1
;
select * from table(dbms_xplan.display_cursor(format=>'basic +predicate'));

EXPLAINED SQL STATEMENT:
------------------------
select t1.n2,t2.n2  from t1, t2  where t1.n1=1    and t2.n1=1    and 
t1.n1=t2.n1
 
Plan hash value: 1051316565
 
-----------------------------------------------
| Id  | Operation                     | Name  |
-----------------------------------------------
|   0 | SELECT STATEMENT              |       |
|   1 |  NESTED LOOPS                 |       |
|   2 |   NESTED LOOPS                |       |
|   3 |    TABLE ACCESS BY INDEX ROWID| T1    |
|*  4 |     INDEX RANGE SCAN          | T1_I1 |
|*  5 |    INDEX RANGE SCAN           | T2_I1 |
|   6 |   TABLE ACCESS BY INDEX ROWID | T2    |
-----------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   4 - access("T1"."N1"=1)
   5 - access("T1"."N1"="T2"."N1")
       filter("T2"."N1"=1)
 
Nice looking plan. And now with binds.
select t1.n2,t2.n2
 from t1, t2
 where t1.n1=:b1
   and t2.n1=:b1
   and t1.n1=t2.n1
;

select * from table(dbms_xplan.display_cursor(format=>'basic +predicate'));

EXPLAINED SQL STATEMENT:
------------------------
select t1.n2,t2.n2  from t1, t2  where t1.n1=:b1    and t2.n1=:b1    
and t1.n1=t2.n1
 
Plan hash value: 810726339
 
------------------------------------------------
| Id  | Operation                      | Name  |
------------------------------------------------
|   0 | SELECT STATEMENT               |       |
|*  1 |  FILTER                        |       |
|   2 |   MERGE JOIN CARTESIAN         |       |
|   3 |    TABLE ACCESS BY INDEX ROWID | T1    |
|*  4 |     INDEX RANGE SCAN           | T1_I1 |
|   5 |    BUFFER SORT                 |       |
|   6 |     TABLE ACCESS BY INDEX ROWID| T2    |
|*  7 |      INDEX RANGE SCAN          | T2_I1 |
------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(TO_NUMBER(:B1)=TO_NUMBER(:B1))
   4 - access("T1"."N1"=TO_NUMBER(:B1))
       filter("T1"."N1"=TO_NUMBER(:B1))
   7 - access("T2"."N1"=TO_NUMBER(:B1))
       filter("T2"."N1"=TO_NUMBER(:B1))
 

 

The plan changed. Oracle knows :B1 should equal :B1, hence the 1 - filter(TO_NUMBER(:B1)=TO_NUMBER(:B1)). But why did the plan change? The bind might be null, and NULL is not equal to NULL. How about a SQL Monitor report?
 
 
SQL Monitoring Report

SQL Text
------------------------------
select /*+monitor*/t1.n2,t2.n2 from t1, t2 where t1.n1=:b1 and t2.n1=:b1 and t1.n1=t2.n1

Global Information
------------------------------
 Status              :  DONE (ALL ROWS)     
 Instance ID         :  1                   
 Session             :  RAFU (28:21)        
 SQL ID              :  dm8z90v1mk8tb       
 SQL Execution ID    :  16777217            
 Execution Started   :  11/03/2011 13:54:27 
 First Refresh Time  :  11/03/2011 13:54:27 
 Last Refresh Time   :  11/03/2011 13:54:27 
 Duration            :  .002418s            
 Module/Action       :  SQL Developer/-     
 Service             :  SYS$USERS           
 Program             :  SQL Developer       
 Fetch Calls         :  1                   

Binds
========================================================================================================================
| Name | Position |     Type     |                                        Value                                        |
========================================================================================================================
| :B1  |        1 | VARCHAR2(32) | 1                                                                                   |
| :B1  |        2 | VARCHAR2(32) | 1                                                                                   |
========================================================================================================================

Global Stats
==============================
| Elapsed |  Other   | Fetch |
| Time(s) | Waits(s) | Calls |
==============================
|    0.00 |     0.00 |     1 |
==============================

SQL Plan Monitoring Details (Plan Hash Value=810726339)
=======================================================================================================================================
| Id |            Operation             | Name  |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Activity | Activity Detail |
|    |                                  |       | (Estim) |      | Active(s) | Active |       | (Actual) |   (%)    |   (# samples)   |
=======================================================================================================================================
|  0 | SELECT STATEMENT                 |       |         |      |           |        |     1 |          |          |                 |
|  1 |   FILTER                         |       |         |      |           |        |     1 |          |          |                 |
|  2 |    MERGE JOIN CARTESIAN          |       |       1 |    1 |           |        |     1 |          |          |                 |
|  3 |     TABLE ACCESS BY INDEX ROWID  | T1    |       1 |    1 |           |        |     1 |          |          |                 |
|  4 |      INDEX RANGE SCAN            | T1_I1 |       1 |    1 |           |        |       |          |          |                 |
|  5 |     BUFFER SORT                  |       |       1 |      |           |        |       |          |          |                 |
|  6 |      TABLE ACCESS BY INDEX ROWID | T2    |       1 |      |           |        |       |          |          |                 |
|  7 |       INDEX RANGE SCAN           | T2_I1 |       1 |      |           |        |       |          |          |                 |
=======================================================================================================================================

B1 is named nicely. But through JDBC it seems like the bind variable naming is lost at some point: a named bind becomes something like :B1 and :B2 even though there was only one name while binding.
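Another place to check what bind names the database actually received is v$sql_bind_capture. A quick sketch using the sql_id from the monitor report above:

select name, position, datatype_string, value_string
  from v$sql_bind_capture
 where sql_id = 'dm8z90v1mk8tb';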

2011-10-31

Variable inlist

Every now and then discussions come up about how to pass a varying in-list to a SQL statement as a single bind. Just now there is such a discussion going on at the oracle-l mailing list, "Large IN LIST in an OBIEE query". Several discussions may also be found in the Oracle discussion forums.

One problem with these varying in-lists is that the SQL query text is different for every number of values passed to the in-list.


where col in (?,?);


where col in (?,?,?);

Another problem is that an in-list accepts only 1000 values. This may be overcome using OR:

where col in (?,?,...,?) --1000 placeholders
       or col in (?,?,...,?) --1000 placeholders
       or col in (?,?,?);


But that is just plain old stupid. It seems to be the thing that comes to a developer's mind first, as the approach is seen far too often.

All kinds of single-string passing approaches can be found using Google. With those there is the issue of the 4000 byte maximum length of the varchar2 datatype.

One approach is to create a temp table, populate it before executing the query and use the temp table as a normal table in the query. Problems: two round trips to the database, and how to populate the temp table efficiently.

It was two years ago that I attended Tanel Poder's Advanced Oracle Troubleshooting Seminar. He mentioned an approach using SQL collection types and a table function to implement the varying in-list with just one bind. Actually it seems like his seminar is starting just today: http://blog.tanelpoder.com/seminar/. Lucky you attending. This week I will have the pleasure to attend the Jonathan Lewis seminar in Helsinki.

Just a bit earlier I wrote about how to populate a table efficiently using Java. That approach could be used to populate the temp table. But why do that when you can do it all in a single call, as Tanel described.



create type numbers is table of number;

And map the NUMBERS type with ArrayDescriptor.createDescriptor.

        ArrayDescriptor ad = ArrayDescriptor.createDescriptor("NUMBERS", c);
        ARRAY a = new ARRAY(ad, c, elems.toArray());
        OraclePreparedStatement ops 
          = (OraclePreparedStatement)c.prepareStatement(
                 "select /*+gather_plan_statistics*/ count(*) cou"+
                 "   from inlist_test "+
                 "  where num in ( " +
                 "        select column_value " +
                 "          from table(?))");
        ops.setARRAY(1, a);

        ResultSet rs = ops.executeQuery();

This approach also works for other types of collections, dates, strings and so on; just create your own table-of type for each one needed. The readily available SYS.ODCINUMBERLIST seems to be safe to use with collections smaller than about 32k elements, so no additional database objects need to be created.

ArrayDescriptor ad = ArrayDescriptor.createDescriptor("SYS.ODCINUMBERLIST", c);

One problem in the discussions has been how the optimizer sees the cardinality of the collection. Here are three different plans from 11.2.0.2, depending on the collection and table size. As you can see, quite large collections may be passed to the database using this technique; plan 1497893883 was generated using one million numbers in the in-list. With large collections be aware of the memory consumption I mentioned in the using Java post.
SQL_ID  fzb8ysr291xz3, child number 0
-------------------------------------
select /*+gather_plan_statistics*/ count(*) cou   from inlist_test   
where num in (         select column_value           from table(:1 ))
 
Plan hash value: 4131715231
 
----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |             |      1 |        |      1 |00:00:00.01 |       3 |       |       |          |
|   1 |  SORT AGGREGATE                     |             |      1 |      1 |      1 |00:00:00.01 |       3 |       |       |          |
|*  2 |   HASH JOIN SEMI                    |             |      1 |      1 |     24 |00:00:00.01 |       3 |  1517K|  1517K| 1308K (0)|
|   3 |    TABLE ACCESS FULL                | INLIST_TEST |      1 |     99 |     99 |00:00:00.01 |       3 |       |       |          |
|   4 |    COLLECTION ITERATOR PICKLER FETCH|             |      1 |     25 |     25 |00:00:00.01 |       0 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------
 
 
 
Plan hash value: 1497893883
 
-------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |             |      1 |        |      1 |00:00:01.15 |    1528 |   1262 |       |       |          |
|   1 |  SORT AGGREGATE                     |             |      1 |      1 |      1 |00:00:01.15 |    1528 |   1262 |       |       |          |
|*  2 |   HASH JOIN RIGHT SEMI              |             |      1 |      1 |    999K|00:00:04.22 |    1528 |   1262 |    34M|  6431K|   48M (0)|
|   3 |    COLLECTION ITERATOR PICKLER FETCH|             |      1 |   1000K|   1000K|00:00:00.71 |       0 |      0 |       |       |          |
|   4 |    TABLE ACCESS FULL                | INLIST_TEST |      1 |    893K|    999K|00:00:00.69 |    1528 |   1262 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------
 

 
Plan hash value: 2750379354
 
--------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |             |      1 |        |      1 |00:00:00.45 |    1528 |   1262 |       |       |          |
|   1 |  SORT AGGREGATE                      |             |      1 |      1 |      1 |00:00:00.45 |    1528 |   1262 |       |       |          |
|*  2 |   HASH JOIN                          |             |      1 |   3125 |  99999 |00:00:00.44 |    1528 |   1262 |  3749K|  1936K| 5625K (0)|
|   3 |    SORT UNIQUE                       |             |      1 |    100K|    100K|00:00:00.11 |       0 |      0 |  4588K|   893K| 4078K (0)|
|   4 |     COLLECTION ITERATOR PICKLER FETCH|             |      1 |    100K|    100K|00:00:00.07 |       0 |      0 |       |       |          |
|   5 |    TABLE ACCESS FULL                 | INLIST_TEST |      1 |    893K|    999K|00:00:00.68 |    1528 |   1262 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("NUM"=VALUE(KOKBF$))
 
Note
-----
   - dynamic sampling used for this statement (level=2)

2011-10-25

Congratulations to the winners

I just received an email from Iggy Fernandez. My answer was not in the set of winners. Congratulations to all three. The challenge might be a nice rehearsal to solve using the Spatial Network Data Model SQL extension. The email as received:

The Second International NoCOUG SQL Challenge was published on 2/13/11 in the February 2011 issue of the NoCOUG Journal (http://bit.ly/gVNZsW). SQL commands to create the data were provided at http://bit.ly/g58WVn. The challenge was to find the secret message hidden in a seemingly random collection of words. The winners are Andre Araujo (Australia), Rob van Wijk (Netherlands), and Ilya Chuhnakov (Russia.) Each winner will receive an Amazon Kindle from contest sponsor Pythian and the August Order of the Wooden Pretzel in keeping with the pronouncement of Steven Feuerstein that “some people can perform seeming miracles with straight Es-Cue-El, but the statements end up looking like pretzels created by somebody who is experimenting with hallucinogens.”

The full announcement can be read in the 100th issue of the NoCOUG Journal (http://bit.ly/rC2gRA).

2011-10-21

How many function calls and sysdate evaluated

Talking here about function calls and when sysdate is evaluated. Starting with a silly function f2 that takes a second to execute and a session that shows only the seconds out of a date type.

SQL> alter session set nls_date_format='ss';

Session altered.

SQL> create or replace function f2( x in date ) return date
  2    as
  3    begin
  4            dbms_lock.sleep(1);
  5            return x;
  6    end;
  7  /

Function created.

SQL> select a a, a b from (select f2(sysdate) a from dual);

A  B
-- --
37 37

Elapsed: 00:00:02.02

I have written a query that uses the function once. It seems like sysdate is evaluated in the main query and it returns the same value for both a and b. But how come the execution time is two seconds? From the optimizer trace it can be seen that the query is transformed to have two calls to the function. That explains the execution time.
Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "RAFU"."F2"() "A","RAFU"."F2"() "B" FROM "SYS"."DUAL" "DUAL"
SQL> select f2(sysdate) a, f2(sysdate) b from dual;

A  B
-- --
39 39

Elapsed: 00:00:02.02

A similar result. Let's change the function to have its own cursor.

SQL> create or replace function f3 return date
  2    as
  3    x date;
  4    begin
  5            dbms_lock.sleep(1);
  6            select sysdate into x from dual;
  7            return x;
  8    end;
  9  /

Function created.

SQL> select a a, a b from (select f3() a from dual);

A  B
-- --
42 43

Elapsed: 00:00:02.02

The execution time is the same and the result changed. The sysdate is evaluated at the time the cursor in the function is called. Remember the query transformation; the transformation may be avoided, and then the function is actually called only once.

SQL> select a a, a b from (select /*+no_merge*/ f3() a from dual);

A  B
-- --
54 54

Elapsed: 00:00:03.01

Be aware that query transformations may lead to changing query results when functions are mixed into a query. How about transaction isolation: does that have an influence on the issue when the function is called twice? Is sysdate behaving like the changing rownum? The changing rownum is discussed in Laurent Schneider's blog and the comments there.

SQL> set transaction read only;

Transaction set.

Elapsed: 00:00:00.00
SQL> SELECT F3() A,F3() B FROM DUAL;

A  B
-- --
03 04

Elapsed: 00:00:02.01
SQL> rollback;
The sysdate is not respecting the read only setting of the transaction; it gets its value at the moment it is asked.

2011-09-28

New package DBMS_XPAN?

Is this humor from error message writer?
SQL> select * from table(dbms_xplan.display_cursor(format=>'allsts'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
Error: format 'allsts' not valid for DBMS_XPAN.DISPLAY_CURSOR()

2011-09-27

Migrating to 11.2.0.3 ORA-00976

You can have exists even in a join condition, in addition to what Rob van Wijk has written. We have code lines that have such an exists together with a connect by query, and we are patching an 11.2.0.2 database to 11.2.0.3. In 11.2.0.2 one can run such a query. Here is one stupid example of such.
select d2.dummy from dual d1 left outer join dual d2 on exists (select 1 from dual connect by level < 3); 

D
-
X


In 11.2.0.3 you get an error thrown to your face.
ORA-00976: Specified pseudocolumn or operator not allowed here.
Cause: LEVEL, PRIOR, ROWNUM, CONNECT_BY_ROOT, CONNECT_BY_ISLEAF or CONNECT_BY_ISCYCLE was specified at an illegal location.
Action: Remove LEVEL, PRIOR, ROWNUM, CONNECT_BY_ROOT, CONNECT_BY_ISLEAF or CONNECT_BY_ISCYCLE.

The query may be rewritten so that the 11.2.0.3 parser is satisfied.
with cte as (select 1 from dual connect by level < 3)
select d2.dummy from dual d1 left outer join dual d2 on exists (select 1 from cte);

D
-
X

This is a runtime problem. Run your tests before patching production. Actually this is not just a runtime problem. We had the code inside a PL/SQL procedure. The procedure is valid in the 11.2.0.2 environment; after patching, the procedure is still valid, just waiting for the next compile or a runtime problem in this code. Well, there is most certainly a reason for such a problem appearing. Might this be a side effect of the correction for bug 8724314 mentioned in 11.2.0.3 Patch Set - List of Bug Fixes by Problem Type [ID 1348303.1]? Might this common table expression workaround be just the next bug waiting to appear?

Update: Confirmation that 8724314 is the reason for this:
Bug 13496250 - Unexpected ORA-976 using LEVEL / PRIOR / ROWNUM in an ANSI "ON" filter predicate [ID 13496250.8]. The workaround mentioned there will not always be a valid workaround. If you start rewriting the query, change to the common table expression approach described in this post.

2011-09-26

11.2.0.3


Just downloading Patch 10404530: 11.2.0.3.0 PATCH SET FOR ORACLE DATABASE SERVER


Patches correct bugs and add new features. I guess there is a new value for the OPTIMIZER_FEATURES_ENABLED parameter coming.

Time to unlearn. Tom Kyte talked at OUG Harmony 2011 last spring about caching inside a SQL query. He wrote about the same topic in Oracle Magazine September 2011. The thing seems like a candidate to be implemented in the Oracle SQL engine in some future release (I guess not yet in this 11.2.0.3), just when a wide developer audience has learned to wrap PL/SQL function calls inside a scalar subquery to avoid multiple calls.
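The trick referred to is wrapping the call in a scalar subquery so the SQL engine can cache its result. A minimal sketch, with the function f and the table t made up for illustration:

-- plain call: f may be executed once per row
select f(col) from t;

-- wrapped in a scalar subquery: the result can be cached and reused
-- for repeating values of col
select (select f(col) from dual) from t;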

The thing to be unlearned might be a bug causing something not to perform, or some wrong behavior, in a certain version. In some future version the bug might be corrected and the workaround becomes obsolete. These are currently most often details that influence some specific situation. If you want to know the basics and more: Jonathan Lewis is coming to Finland this autumn. A two day learning session is available, Optimizing Oracle 1.-2.11.2011, and in addition the Oracle User Group Finland autumn seminar 3.11.2011 has the pleasure of having his presentation. Agenda and registration information for both is available at http://www.ougf.fi/ A lot of guesswork in this post. To avoid that, attend.

2011-09-21

Avoid temp usage while table reorg

Richard Foote's blog has been posting questions lately. Thanks to the latest solution there we have a possible alternative while reorganizing a huge table. I just hit an "ORA-01652: unable to extend temp segment by 64 in tablespace TEMP" problem while doing a table reorganization. Knowing who and what SQL is consuming temp helps to identify the problem statement.
insert /*+append*/ into targettable select * from sourcetable order by sortcolumn;
The source table has an index on sortcolumn. Using that it is possible to avoid sorting and temp usage.
insert /*+append*/ into targettable select /*+index(sourcetable sortcolumn_idx)*/ * from sourcetable order by sortcolumn;
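As for seeing who and what SQL is consuming temp, a sketch along these lines usually does it (v$tempseg_usage joined to v$session; adjust to taste):

select s.sid, s.username, u.sql_id, u.segtype, u.blocks
  from v$tempseg_usage u, v$session s
 where u.session_addr = s.saddr
 order by u.blocks desc;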

2011-09-01

Password expiring


Yet again...
An 11g database instance created 181 days ago...
A software connection pool user is not allowed to connect to the database...

ORA-28002: the password will expire within 7 days
OR
ORA-28001: the password has expired


select profile from dba_users where username = 'POOLUSER';

DEFAULT
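To see what the DEFAULT profile is actually enforcing (180 days in a default 11g installation):

select limit
  from dba_profiles
 where profile = 'DEFAULT'
   and resource_name = 'PASSWORD_LIFE_TIME';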

create profile pool_profile limit PASSWORD_LIFE_TIME unlimited;

alter user pooluser profile pool_profile;

select profile from dba_users where username = 'POOLUSER';

POOL_PROFILE

2011-08-16

set role

Trying to get autotrace out of SQL Developer and getting an error message.


Failed to access V$MYSTAT.
Please obtain read catalog privilege
from your database administrator:
grant SELECT_CATALOG_ROLE to RAFU
grant SELECT ANY DICTIONARY to RAFU
NOTE: you need to reconnect your current session
in order for the settings change to have an effect


I granted the privileges. I guess less would be enough. Weird note, though: why do I need to reconnect the session? There is the SET ROLE command.


SET ROLE ALL;


That does the job. No reconnect needed.

2011-08-10

Crabby code



ORA-00020: maximum number of processes 500 exceeded


Connections are DEDICATED. The pooling is done in the middle tier, in several pools. Connection leakage is not a database fault, but it becomes everybody's problem. While figuring out the problem, I managed to produce the following 10046 SQL trace.


ERROR #1:err=12899 tim=1312894876206660
WAIT #1: nam='SQL*Net break/reset to client' ela= 20 driver id=675562835 break?=1 p3=0 obj#=-1 tim=1312894876206740
WAIT #1: nam='SQL*Net break/reset to client' ela= 303 driver id=675562835 break?=0 p3=0 obj#=-1 tim=1312894876207059
WAIT #1: nam='SQL*Net message to client' ela= 2 driver id=675562835 #bytes=1 p3=0 obj#=-1 tim=1312894876207098
WAIT #1: nam='SQL*Net message from client' ela= 247133 driver id=675562835 #bytes=1 p3=0 obj#=-1 tim=1312894876454273


Using the Oracle friendly search.

Talking about SQL*Net break/reset to client events, Tanel Poder mentions these breaks are caused by bad application design. This is something I was trying to find out, but he is talking about MERGE statements; my trace included only inserts.

The Xtrace manual mentions "err" being equal to 12899 (ORA-12899 is "value too large for column"). That makes sense.

Luckily I had taken the trace with binds and could point out the actual column causing the root problem. Only the first 255 characters of the bind values seem to be written to the trace; the problem column was larger in this case. Using the My Oracle Support page Interpreting Raw SQL_TRACE and DBMS_SUPPORT.START_TRACE output [ID 39817.1] it is possible to read the trace. In the bind part, avl is the actual value length (and array length too). Comparing that to the column size at the position of the bind in the insert statement, the root problem was found.

About bad application design
* The application should know how long a value can be inserted into each column.
* If the application receives an error it should have an error handler that releases the reserved resources. This time the unreleased resource was a database connection.
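When hunting a leak like this it also helps to see which machine and program are holding the connections. A minimal sketch against v$session; the grouping columns are just a suggestion:

select machine, program, count(*) sessions
  from v$session
 group by machine, program
 order by count(*) desc;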


2011-06-09

Major

Today I did something Donald Duck would do. A couple of years ago I built a house. There are two doors to leave the house, one in front and also a back door. There are wooden stairs at the front door and a wooden terrace outside the back door. I just oiled the front stairs and right after that continued oiling the back terrace. Now I am stuck inside for a while. It is time to open a beer and think of something else done today. Karhu has some virtual fishing to do.

Julian Dontcheff mentioned a while ago that 11.2.0.2 should actually be called 11R3. I was reading My Oracle Support. The issue from my spontaneous online demonstration in the What is bugging me presentation has been noticed. My demonstration was about "No results with function based indexes and OR expansion". Now there is a document in MOS, "Things to Consider Before Upgrade to 11.2.0.2 Database Performance [ID 1320966.1]", updated 7.6.2011. A one-off patch is recommended, as Oracle does not want to interfere with the optimizer in CPU or PSU patching, the fifth number in the version numbering. So are those four numbers actually something that should be considered a major version? Well, 11.2.0.2 has so many other changes than the optimizer ones that it could be an 11R3. Just see the list of new features in 11.2.0.2 in the documentation. As for OPTIMIZER_FEATURES_ENABLED, it has been since versions 10.1.0.3 and 9.2.0.8 that the fourth number has had a meaning. Actually 11.2.0.2 is missing there. Yet another place to submit a user comment about the documentation.

While writing this post it seems like the 1320966.1 document is vanishing, or is it just something about the flashy interface... The patch recommended to install was 9776940. Contact support before installing... Another thing mentioned there was the 11.2.0.3 patch set due out later this year.

Yet another thing: during the last week there has been a slight peak in the visitor count of my blog, after Jonathan Lewis made an Argh! issue about merge ignoring a check constraint and someone linked that post to his pages. Yes, it actually is an Argh issue.

Actually yet another Argh! issue might be visualized as a snip

2011-05-20

OUG Harmony 2011

Just going home from the OUG Harmony seminar. I took the room of a cancelled session and gave a presentation that I put together after Thursday's dinner. I titled the presentation "What is bugging me". I was satisfied with the other presentations I attended; e.g. now I know the indexing possibilities in MySQL.

In my presentation I got to show an email response to a report I made on the Oracle documentation, the issue Tom Kyte mentioned in his keynote speech. There is a feedback possibility on every page of the online documentation. It is working, as Tom mentioned.

He also mentioned the beta testing possibility. Currently 11.2 XE is available.

One issue was dealing with ora_rowscn. I noticed that the My Oracle Support pages have only one workaround for Bug 9360157: WRONG RESULTS WHEN USING ORA_ROWSCN PSEUDOCOLUMN USING ANSI JOIN


The buggy way:

select dep.n,dep.ora_rowscn 
from testnorowdep nodep 
inner join testrowdep dep on dep.n=nodep.n
where dep.ora_rowscn > :scn ;


The mentioned workaround is the SQL-92 join way - do not use an ANSI join.


select dep.n,dep.ora_rowscn 
from testnorowdep nodep  , testrowdep dep 
where dep.n=nodep.n
and dep.ora_rowscn > :scn;


An additional workaround - use ora_rowscn in the join condition


select dep.n,dep.ora_rowscn 
from testnorowdep nodep 
inner join testrowdep dep
on dep.n=nodep.n
and dep.ora_rowscn > :scn;

2011-05-09

BAUD

Earlier I wrote about the sqlplus client tool and the date datatype. This post is about presenting dates.

"BAUD: Battle against Unclear Dataformats" Piet de Visser commenting on Alex Gorbachev tweet
"In the times of total globalization why people still battle with different date formats like 02/03/11 vs 03/02/11 - solution is 03-Feb-2011"



The solution is not 03-Feb-2011 alone. NLS_LANGUAGE or NLS_DATE_LANGUAGE needs to be fixed also.

Let us assume your date string is 03-Mar-2011. It might be that your session is talking POLISH.


SQL> alter session set nls_date_format='yyyy-mm-dd';

SQL> alter session set nls_language='POLISH';

SQL> select to_date('01-mar-2011','dd-mon-yyyy') from dual;

2011-03-01

SQL> alter session set nls_language='AMERICAN';

SQL> select to_date('01-mar-2011','dd-mon-yyyy') from dual;

2011-03-01



Ok that is fine. But if your data is in POLISH and you just happen to read a string in CROATIAN...



SQL> select to_date('01-lip-2011','dd-mon-yyyy','NLS_DATE_LANGUAGE=CROATIAN') from dual;

2011-06-01

SQL> select to_date('01-lip-2011','dd-mon-yyyy','NLS_DATE_LANGUAGE=POLISH') from dual;

2011-07-01



There are also other similar month abbreviations that may be converted to different months depending on the language.



SQL> alter session set nls_date_format='yyyy-mm-dd';

SQL> select to_date('01-lis-2011','dd-mon-yyyy','NLS_DATE_LANGUAGE=CROATIAN') from dual;

2011-10-01

SQL> select to_date('01-lis-2011','dd-mon-yyyy','NLS_DATE_LANGUAGE=CZECH') from dual;

2011-11-01

SQL> select to_date('01-lis-2011','dd-mon-yyyy','NLS_DATE_LANGUAGE=POLISH') from dual;

2011-11-01




SQL> select to_date('01-srp-2011','dd-mon-yyyy','NLS_DATE_LANGUAGE=CROATIAN') from dual;

2011-07-01

SQL> select to_date('01-srp-2011','dd-mon-yyyy','NLS_DATE_LANGUAGE=CZECH') from dual;

2011-08-01




So in case of Feb, is it in AMERICAN, DANISH, DUTCH, ENGLISH, GERMAN, GERMAN DIN, ICELANDIC, ITALIAN, LATIN AMERICAN SPANISH, LATIN SERBIAN, LATVIAN, MALAY, MEXICAN SPANISH, NORWEGIAN, ROMANIAN, SLOVAK, SLOVENIAN, SPANISH or SWEDISH?

2011-05-06

many to many access path -view

Having a many to many relationship, it is the access path from one table to another. Let's call them the cust, ord and prod tables.


cust 1-* ord *-1 prod


There is a need to query prod rows that have an access path to a certain cust, and we need a database view to implement this. So we want prod out and give a cust to the query as a parameter. The first impression of a developer is to join to the ord table. That would lead to duplicates in the result, something that is not desired. It is possible to implement a fine performing view without those duplicates. Here is an example of such.



create table cust (cust_id number constraint cust_pk primary key, cname varchar2(20) not null);

create table prod (prod_id number constraint prod_pk primary key, pname varchar2(20) not null);

create table ord ( ord_id number constraint ord_pk primary key
                 , cust_id constraint ord_cust_fk references cust not null
                 , prod_id constraint ord_prod_fk references prod not null
                 , amount number not null);

create index ord_cust_fk_idx on ord(cust_id,prod_id);

create or replace view custprod as
select cust.cust_id
     , cust.cname
     , prod.prod_id
     , prod.pname
  from cust, prod
 where exists (select 1 from ord where ord.prod_id = prod.prod_id and ord.cust_id = cust.cust_id)
;

select prod_id,pname from custprod where cust_id = :cust;



So there it is. Put only those tables into the from clause you want results from. Use exists for the other parts of the access path.
Some data and a query plan.



insert all into cust(cust_id,cname) values (le,le||le)
into prod(prod_id,pname) values (le,le||le)
select level le from dual connect by level < 1000;

begin
dbms_stats.gather_table_stats(user,'PROD');
dbms_stats.gather_table_stats(user,'CUST');
end;
/

insert into ord select rownum,cust_id,prod_id,mod(cust_id,prod_id) from cust, prod where cust_id <= prod_id;

begin
dbms_stats.gather_table_stats(user,'ORD');
end;
/


select prod_id,pname from custprod where cust_id = :cust;

---------------------------------------------------------
| Id  | Operation                     | Name            |
---------------------------------------------------------
|   0 | SELECT STATEMENT              |                 |
|   1 |  NESTED LOOPS                 |                 |
|   2 |   NESTED LOOPS                |                 |
|   3 |    NESTED LOOPS               |                 |
|   4 |     INDEX UNIQUE SCAN         | CUST_PK         |
|   5 |     SORT UNIQUE               |                 |
|   6 |      INDEX RANGE SCAN         | ORD_CUST_FK_IDX |
|   7 |    INDEX UNIQUE SCAN          | PROD_PK         |
|   8 |   TABLE ACCESS BY INDEX ROWID | PROD            |
---------------------------------------------------------

drop table ord;

drop table prod;

drop table cust;

drop view custprod;

2011-05-04

Denormalize for Safety

12.5.2011 is the last day to register for OUG Harmony 2011, 19-20.5.2011, Paasitorni, Helsinki. The agenda is worth reading.

I just posted a kind of denormalize for safety answer to the Oracle SQL forum, using the ideas I wrote about earlier in denormalize safely and presented at the OUGF autumn seminar.

2011-04-13

Date variable in sqlplus

I have a huge query including several temporal joins, a sum over time and several date type binds. I want to execute that query using sqlplus.


SQL> variable til date
Usage: VAR[IABLE] [ [ NUMBER | CHAR | CHAR (n [CHAR|BYTE]) |
VARCHAR2 (n [CHAR|BYTE]) | NCHAR | NCHAR (n) |
NVARCHAR2 (n) | CLOB | NCLOB | BLOB | BFILE
REFCURSOR | BINARY_FLOAT | BINARY_DOUBLE ] ]


Using binds in sqlplus by Tanel Poder, and askTom discussing the missing date variable support in sqlplus. Putting those together.


SQL> variable fro varchar2(10)
SQL> exec select sysdate into :fro from dual;

PL/SQL procedure successfully completed.

SQL> select 1 from dual where :fro < sysdate;

1
----------
1

SQL> print fro

FRO
--------------------------------
13.04.2011



Be aware of possible changes in the execution plan caused by datatype conversions. Read the askTom discussion.

Changing NLS_DATE_FORMAT


SQL> alter session set nls_territory=america;

Session altered.

SQL> select 1 from dual where :fro < sysdate;
select 1 from dual where :fro < sysdate
*
ERROR at line 1:
ORA-01843: not a valid month


SQL> exec select sysdate into :fro from dual;

PL/SQL procedure successfully completed.

SQL> select 1 from dual where :fro < sysdate;

1
----------
1

SQL> print fro

FRO
--------------------------------
13-APR-11


And yet another aspect about date formats


SQL> select 1 from dual where :fro = trunc(sysdate);

1
----------
1

SQL> alter session set nls_date_format='yyyymmdd hh24mi';

Session altered.

SQL> exec select sysdate into :fro from dual;
BEGIN select sysdate into :fro from dual; END;

*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 1


SQL> variable fro varchar2(13)
SQL> exec select sysdate into :fro from dual;

PL/SQL procedure successfully completed.

SQL> select 1 from dual where :fro = trunc(sysdate);

no rows selected

SQL> print fro

FRO
--------------------------------
20110413 0902

2011-04-01

11.2 XE beta and SQL Developer 3

11.2 XE beta available.

And it seems like it is not an April fool: Tanel Poder is describing the features.

Documentation available. And a dedicated discussion forum.

The SQL Developer 3 beta days seem to be over.

Descending index is function based

A descending index is implemented as a function-based index in Oracle. This is documented and discussed.
Nulls may lead to a surprise with a desc unique index, presumably because the hidden SYS_NC00003$ RAW column is not null even when the underlying column is null, so an all-null row still gets an index entry and the unique check fires. See below.


SQL> create table a(a varchar2(2 char), b varchar2(2 char) );

Table created.

SQL>
SQL> select table_name,column_name,data_type,data_length from user_tab_cols where table_name = 'A';

TABLE_NAME COLUMN_NAME DATA_TYPE
------------------------------ ------------------------------ --------------------------------------
A A VARCHAR2
A B VARCHAR2

SQL>
SQL> create unique index a_idx on a (a,b desc);

Index created.

SQL>
SQL> select table_name,column_name,data_type,data_length from user_tab_cols where table_name = 'A';

TABLE_NAME COLUMN_NAME DATA_TYPE
------------------------------ ------------------------------ --------------------------------------
A A VARCHAR2
A B VARCHAR2
A SYS_NC00003$ RAW

SQL>
SQL> select index_type from user_indexes where index_name = 'A_IDX';

INDEX_TYPE
---------------------------
FUNCTION-BASED NORMAL

SQL>
SQL> select * from user_ind_columns where index_name = 'A_IDX';

INDEX_NAME TABLE_NAME
------------------------------ ------------------------------
COLUMN_NAME
----------------------------------------------------------------------------------------------------
COLUMN_POSITION COLUMN_LENGTH CHAR_LENGTH DESC
--------------- ------------- ----------- ----
A_IDX A
A
1 8 2 ASC

A_IDX A
SYS_NC00003$
2 13 0 DESC


SQL>
SQL> select * from USER_IND_EXPRESSIONS where index_name = 'A_IDX';

INDEX_NAME TABLE_NAME COLUMN_EXPRESSION
------------------------------ ------------------------------ --------------------------------------
A_IDX A "B"

SQL>
SQL> insert into a values(null,null);

1 row created.

SQL>
SQL> insert into a values(null,null);
insert into a values(null,null)
*
ERROR at line 1:
ORA-00001: unique constraint (KIOS.A_IDX) violated


SQL>
SQL> drop index a_idx;

Index dropped.

SQL>
SQL> create unique index a_idx on a (a,b);

Index created.

SQL>
SQL> insert into a values(null,null);

1 row created.

SQL>
SQL> insert into a values(null,null);

1 row created.

Installing

Just a note to remember. I have to try this next time when setting up a new computer.
http://ninite.com

2011-03-25

Greatest Common Divisor

Need a greatest common divisor function? Vadim Tropashko has written a SQL implementation of such. Here it is wrapped inside a function.



create or replace function greatest_common_div(n1 number, n2 number)
return number
as
  ret number;
begin
  with NumberSet as (
    select column_value Element from table(sys.odcinumberlist( n1, n2 ))
  ), Integers as (
    select level num# from dual connect by level <= least( n1, n2 )
  )
  select max(Divisor) into ret
    from (
      select Divisor
        from (
          select num# as Divisor from Integers
           where num# <= (select min(Element) from NumberSet)
        ), NumberSet
       where mod(Element, Divisor) = 0
       group by Divisor
      having count(*) = (select count(*) from NumberSet)
    );
  return ret;
end;
/


select greatest_common_div(15,123) from dual;

3



And an 11.2 version. The idea is borrowed from http://wiki.postgresql.org. This is a better performing one, as it handles only two numbers.


create or replace function greatest_common_div2(n1 number, n2 number)
return number
as
  ret number;
begin
  with t(a,b) as (
    select n1, n2 from dual
    union all
    select b, mod(a,b) from t
     where b > 0
  )
  select a into ret
    from t
   where b = 0
  ;
  return ret;
end;
/



And the same without the PL/SQL to SQL context switch. Added 12.9.2011.



create or replace function greatest_common_div3(n1 number, n2 number)
return number
as
  ret number;
begin
  ret := n1;
  if (n2 != 0) then
    ret := greatest_common_div3(n2,mod(n1,n2));
  end if;
  return ret;
end;
/

with numbers as (select level num from dual connect by level < 100)
, test as (
  select a.num a
       , b.num b
       , greatest_common_div(a.num,b.num) gcd
       , greatest_common_div2(a.num,b.num) gcd2
       , greatest_common_div3(a.num,b.num) gcd3
    from numbers a, numbers b
)
select * from test where gcd!=gcd2 or gcd!=gcd3
;




Update 28.3.2011
Added a user-defined aggregate as well.



CREATE OR REPLACE TYPE t_gcd_agg AS OBJECT
(
  g_number number,
  STATIC FUNCTION ODCIAggregateInitialize(sctx IN OUT t_gcd_agg)
    RETURN NUMBER,
  MEMBER FUNCTION ODCIAggregateIterate(self IN OUT t_gcd_agg,
                                       value IN number)
    RETURN NUMBER,
  MEMBER FUNCTION ODCIAggregateTerminate(self IN t_gcd_agg,
                                         returnValue OUT number,
                                         flags IN NUMBER)
    RETURN NUMBER,
  MEMBER FUNCTION ODCIAggregateMerge(self IN OUT t_gcd_agg,
                                     ctx2 IN t_gcd_agg)
    RETURN NUMBER
);
/

CREATE OR REPLACE TYPE BODY t_gcd_agg IS
  STATIC FUNCTION ODCIAggregateInitialize(sctx IN OUT t_gcd_agg)
    RETURN NUMBER IS
  BEGIN
    sctx := t_gcd_agg(NULL);
    RETURN ODCIConst.Success;
  END;
  MEMBER FUNCTION ODCIAggregateIterate(self IN OUT t_gcd_agg,
                                       value IN number)
    RETURN NUMBER IS
  BEGIN
    SELF.g_number := greatest_common_div3(case when self.g_number is null then value else self.g_number end, value);
    RETURN ODCIConst.Success;
  END;
  MEMBER FUNCTION ODCIAggregateTerminate(self IN t_gcd_agg,
                                         returnValue OUT number,
                                         flags IN NUMBER)
    RETURN NUMBER IS
  BEGIN
    returnValue := SELF.g_number;
    RETURN ODCIConst.Success;
  END;
  MEMBER FUNCTION ODCIAggregateMerge(self IN OUT t_gcd_agg,
                                     ctx2 IN t_gcd_agg)
    RETURN NUMBER IS
  BEGIN
    SELF.g_number := greatest_common_div3(case when self.g_number is null then ctx2.g_number else self.g_number end
                                        , case when ctx2.g_number is null then self.g_number else ctx2.g_number end);
    RETURN ODCIConst.Success;
  END;
END;
/

CREATE OR REPLACE FUNCTION gcd (p_input VARCHAR2)
RETURN VARCHAR2
PARALLEL_ENABLE AGGREGATE USING t_gcd_agg;
/

select gcd(column_value) from table(sys.odcinumberlist(4,8,34));

2

with test as (
select 'a' a, 4 n from dual union all
select 'a' a, 6 n from dual union all
select 'b' a, 30 n from dual union all
select 'b' a, 60 n from dual union all
select 'b' a, 96 n from dual
)
select a,gcd(n)
from test
group by a;

a 2
b 6
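
Since gcd is a regular user-defined aggregate, it can also be used as an analytic function with an OVER clause. A small sketch of my own against the same test data; it should show 2 on every a row and 6 on every b row, matching the grouped result above.

with test as (
  select 'a' a, 4 n from dual union all
  select 'a' a, 6 n from dual union all
  select 'b' a, 30 n from dual union all
  select 'b' a, 60 n from dual union all
  select 'b' a, 96 n from dual
)
select a, n, gcd(n) over (partition by a) group_gcd
  from test;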

2011-03-16

grep in windows

Missing grep on Windows? PowerShell (or plain cmd) to the rescue with findstr:

findstr -s "search_string" files

2011-03-11

Merge ignores check constraint

A while ago we noticed that our enabled and validated check constraints were not respected by our 11.2.0.1 database. The reason turned out to be a MERGE statement that updated rows while ignoring the constraints. Bug 9285259 was filed.
The bug will be fixed in version 12.1 and hopefully in 11.2.0.3 as well. A patch is available on top of 11.2.0.2, and it installed online. Now we receive ORA-02290: check constraint violated as expected. A workaround was also found: our MERGE had only an update part, and adding an insert or delete part to the MERGE makes the constraint checked even without the patch; a sketch of the shape follows.
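
A minimal sketch of the workaround shape, with hypothetical tables t (target, carrying the check constraint) and s (source); the names are illustrative, not from our actual case:

-- update-only MERGE: the form that slipped past the check constraint on 11.2.0.1
merge into t
using s
   on (t.id = s.id)
 when matched then
   update set t.val = s.val;

-- workaround: add an insert (or delete) branch and the check constraint is enforced
merge into t
using s
   on (t.id = s.id)
 when matched then
   update set t.val = s.val
 when not matched then
   insert (id, val) values (s.id, s.val);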

2011-03-09

Removing duplicates

On AskTom one can find an example of removing duplicates. Here is another.


create table duplicates (n int);

insert into duplicates select level from dual connect by level < 100;

insert into duplicates select level from dual connect by level < 50;

delete from duplicates where rowid in (
  select rid from (
    select rowid rid, first_value(rowid)over(partition by n) frid, dup.*
      from duplicates dup
  ) where frid != rid
)
;
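
A sanity check of my own, not from the original post: after the delete this should return no rows.

select n, count(*) cnt
  from duplicates
 group by n
having count(*) > 1;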

2011-03-03

Deprecation

During the last two weeks I have twice noticed the use of oracle.jdbc.driver.OracleDriver with the 11g driver. It has been almost five years since the Oracle 9i JDBC driver package "oracle.jdbc.driver.*" was deprecated.

With 11g please use oracle.jdbc.OracleDriver instead of the deprecated oracle.jdbc.driver.OracleDriver.

The driver class name might be found in your application server connection pool settings, in a project jdbc.properties, or hard coded as something like
java.lang.Class.forName("oracle.jdbc.driver.OracleDriver");
when it should be
java.lang.Class.forName("oracle.jdbc.OracleDriver");

2011-02-28

NOCOUG Second SQL Challenge

Working on the second NoCOUG SQL Challenge. The magazine also contains "Advice for an Oracle Beginner" articles - worth reading.




Here are my five cents towards the problem. Other answers may be found in the comments of the Iggy Fernandez blog.

When I found the challenge, there were already some published answers to the riddle. So I started by minimizing the starting set, getting rid of nulls first, and after a while ended up with a hierarchical query. Along the way I drew a Graphviz picture of the riddle data; maybe it visualizes some of the paths I was trying to follow while looking for alternative solutions. The SQL commands for creating the required data can be found with the challenge.




with aa as (
  select word1, word2, word3, word2 gr
    from riddle
   where word1 is not null
), bb as (
  select gr,pre,word
    from aa
    unpivot (word for pre in (word1 as 1, word2 as 2, word3 as 3))
), cc as (
  select gr,pre,word
       , first_value(case when pre = 3 and word != gr then gr end ignore nulls)over(partition by word) bg
       , first_value(case when pre = 1 and word != gr then gr end ignore nulls)over(partition by word) ag
       , min(pre)over(partition by word) mi
       , max(pre)over(partition by word) ma
    from bb
), dd (gr,mi,ma,pre,word,ord) as (
  select gr,mi,ma,pre,word,cast(2 as varchar2(10))
    from cc
   where cc.pre=2 and cc.bg is null and cc.ag is null
  union all
  select cc.gr,cc.mi,cc.ma,cc.pre,cc.word
       , dd.ord||case when cc.pre = 1 then cc.mi else cc.ma end
    from dd inner join cc on cc.pre in (1,3) and dd.word = cc.gr
)
select listagg(dd.word,' ')within group(order by rpad(dd.ord,10,'2'))
  from dd
;



Update 8.3.2011
Ordering with rpad feels a bit like it was borrowed from the riddle_tree solution. So here is another version that maintains a numeric ordering while walking the tree.



with aa as (
  select word1, word2, word3, word2 gr
    from riddle
   where word1 is not null
), bb as (
  select gr,pre,word
    from aa
    unpivot (word for pre in (word1 as 1, word2 as 2, word3 as 3))
), cc as (
  select gr,pre,word
       , first_value(case when pre = 3 and word != gr then gr end ignore nulls)over(partition by word) bg
       , first_value(case when pre = 1 and word != gr then gr end ignore nulls)over(partition by word) ag
       , min(pre)over(partition by word) mi
       , max(pre)over(partition by word) ma
    from bb
), dd (gr,mi,ma,pre,word,nord,lv) as (
  select gr,mi,ma,pre,word,2222222,1
    from cc
   where cc.pre=2 and cc.bg is null and cc.ag is null
  union all
  select cc.gr,cc.mi,cc.ma,cc.pre,cc.word
       , dd.nord+(cc.pre-2)*power(10,6-dd.lv)
       , dd.lv+1
    from dd inner join cc on cc.pre in (1,3) and dd.word = cc.gr
)
select listagg(dd.word,' ')within group(order by dd.nord)
  from dd
;


2011-02-23

revoke through a db link

Why would someone do such a thing? The following ends up with an active user session stuck waiting on "SQL*Net message from dblink".



sqlplus /nolog

connect / as sysdba


drop user a cascade;

drop user b cascade;


create user a identified by a;

grant create session to a;

grant create procedure to a;

grant create database link to a;


create user b identified by b;

grant create session to b;


connect a/a

declare
  comm varchar2(200);
begin
  select 'create database link b connect to b identified by b using '''||
         sys_context('userenv','db_unique_name')||'''' into comm
    from dual;
  execute immediate comm;
end;
/


create or replace procedure p as
begin
execute immediate 'revoke execute on p from b';
end;
/


grant execute on p to b;

exec a.p@b
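
If you hit this, the stuck session can be spotted with something like the following, given access to v$session (a diagnostic of my own, not part of the original test case):

select sid, username, event, seconds_in_wait
  from v$session
 where event = 'SQL*Net message from dblink';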

2011-02-09

not exists null

I wrote earlier about not in and null. Be careful also when using a not exists predicate together with a subquery that returns nulls.


SQL> select 1 from dual where not exists (select 1 from dual where 1=0);

1
----------
1

SQL> select 1 from dual where 1=0;

no rows selected



SQL> select 1 from dual where not exists (select max(1) from dual where 1=1);

no rows selected

SQL> select max(1) from dual where 1=1;

MAX(1)
----------
1



SQL> select 1 from dual where not exists (select max(1) from dual where 1=0);

no rows selected

SQL> select max(1) from dual where 1=0;

MAX(1)
----------


SQL> select 1 from dual where not exists (select null from dual);

no rows selected

SQL> select null from dual;

N
-



In the subquery there is a row, even though its value is NULL. EXISTS checks only whether a row comes back, not what it contains, and an aggregate without a GROUP BY always returns exactly one row. So the NOT EXISTS evaluates to false. More about the issue can be found in the responses to an oracle-l mailing list post.
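
One more illustration of my own: adding a GROUP BY makes the aggregate subquery return no rows when its input is empty, which flips the NOT EXISTS back to true.

-- with GROUP BY there are no groups over zero input rows, so the subquery returns no rows
select max(1) from dual where 1=0 group by dummy;

-- and therefore the NOT EXISTS is satisfied again; this should return one row with value 1
select 1 from dual where not exists (select max(1) from dual where 1=0 group by dummy);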

2011-01-28

ORDER SIBLINGS BY CONNECT_BY_ROOT

In this post I deal with a sorting problem in a hierarchical query and give a guideline on how to implement such an ordering.

We are patching an Oracle database to 11.2.0.2 and with one of our test cases hit an error:

ORA-30007: CONNECT BY ROOT operator is not supported in the START WITH or in the CONNECT BY condition

The problem query does not have connect_by_root in the START WITH or CONNECT BY condition. It is in the ORDER BY: "order siblings by connect_by_root". So the reported error is somewhat misleading.

What is this order siblings by connect_by_root trying to accomplish? The hierarchical result should be ordered first by some column from the root node of the hierarchy and after that by some columns within the same level of the hierarchy.

A thread on www.sql.ru discusses the same problem. With 10.2.0, ORA-00600: internal error code, arguments: [qkacon:FJswrwo] is reported. It is mentioned that the hint /*+ NO_CONNECT_BY_COST_BASED */ bypasses the ORA-00600, but a new problem is described: connect_by_root returns nulls if the same query has the SIBLINGS keyword in its ORDER BY. So our query had a problem all along, and the error newly introduced in 11.2.0.2 actually reveals it to us.

11.2 introduced an alternative way to write hierarchical queries. Below are the problematic queries with a data set having two roots, and in the end a way to implement the requirement using a recursive common table expression.



drop table emp purge;

CREATE TABLE EMP
(
EMPNO NUMBER(4),
ENAME VARCHAR2(10 BYTE),
MGR NUMBER(4)
)
;

insert into emp(empno,mgr,ename) values (11,23,'SMITH');
insert into emp(empno,mgr,ename) values (12,16,'ALLEN');
insert into emp(empno,mgr,ename) values (13,16,'WARD');
insert into emp(empno,mgr,ename) values (14,19,'JONES');
insert into emp(empno,mgr,ename) values (15,16,'MARTIN');
insert into emp(empno,mgr,ename) values (16,19,'BLAKE');
insert into emp(empno,mgr,ename) values (17,19,'CLARK');
insert into emp(empno,mgr,ename) values (18,null,'SCOTT');
insert into emp(empno,mgr,ename) values (19,null,'KING');
insert into emp(empno,mgr,ename) values (20,16,'TURNER');
insert into emp(empno,mgr,ename) values (21,18,'ADAMS');
insert into emp(empno,mgr,ename) values (22,16,'JAMES');
insert into emp(empno,mgr,ename) values (23,14,'FORD');
insert into emp(empno,mgr,ename) values (24,17,'MILLER');

update emp set mgr = null where ename = 'SCOTT';

commit;


select em.*, rpad('-',level,'-')||empno , level
from emp em
start with em.mgr is null
connect by prior em.empno = em.mgr
;
18 SCOTT -18 1
21 ADAMS 18 --21 2
19 KING -19 1
14 JONES 19 --14 2
23 FORD 14 ---23 3
11 SMITH 23 ----11 4
16 BLAKE 19 --16 2
12 ALLEN 16 ---12 3
13 WARD 16 ---13 3
15 MARTIN 16 ---15 3
20 TURNER 16 ---20 3
22 JAMES 16 ---22 3
17 CLARK 19 --17 2
24 MILLER 17 ---24 3



Trying to add the described ordering:



select /*+ NO_CONNECT_BY_COST_BASED */em.*, level, connect_by_root ename cbr, rpad('-',level,'-')||empno
from emp em
start with em.mgr is null
connect by prior em.empno = em.mgr
order siblings by connect_by_root ename, empno
;

18 SCOTT 1 SCOTT -18
21 ADAMS 18 2 SCOTT --21
19 KING 1 KING -19
14 JONES 19 2 KING --14
23 FORD 14 3 KING ---23
11 SMITH 23 4 KING ----11
16 BLAKE 19 2 KING --16
12 ALLEN 16 3 KING ---12
13 WARD 16 3 KING ---13
15 MARTIN 16 3 KING ---15
20 TURNER 16 3 KING ---20
22 JAMES 16 3 KING ---22
17 CLARK 19 2 KING --17
24 MILLER 17 3 KING ---24


Rows from the KING root should be ordered before SCOTT, so the ordering is wrong. On the 11.2.0.2 database the query does not even run: ORA-30007: CONNECT BY ROOT operator is not supported in the START WITH or in the CONNECT BY condition is thrown, which makes the problem visible.

How to bypass the problem with a recursive common table expression:



with cte (empno,mgr,ename,cbr,l) as (
  select empno, mgr, ename, ename cbr, 1 from emp where mgr is null
  union all
  select em.empno, em.mgr, em.ename, ct.cbr, ct.l+1
    from emp em inner join cte ct on em.mgr = ct.empno
)
SEARCH DEPTH FIRST BY cbr,empno SET rn
select te.*, rpad('-',l,'-')||empno
  from cte te
;

19 KING KING 1 1 -19
14 19 JONES KING 2 2 --14
23 14 FORD KING 3 3 ---23
11 23 SMITH KING 4 4 ----11
16 19 BLAKE KING 2 5 --16
12 16 ALLEN KING 3 6 ---12
13 16 WARD KING 3 7 ---13
15 16 MARTIN KING 3 8 ---15
20 16 TURNER KING 3 9 ---20
22 16 JAMES KING 3 10 ---22
17 19 CLARK KING 2 11 --17
24 17 MILLER KING 3 12 ---24
18 SCOTT SCOTT 1 13 -18
21 18 ADAMS SCOTT 2 14 --21
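
One remark of my own: the rn column produced by SEARCH DEPTH FIRST makes the intended order explicit, so to guarantee the ordering of the final result an ORDER BY can be added to the same query:

with cte (empno,mgr,ename,cbr,l) as (
  select empno, mgr, ename, ename cbr, 1 from emp where mgr is null
  union all
  select em.empno, em.mgr, em.ename, ct.cbr, ct.l+1
    from emp em inner join cte ct on em.mgr = ct.empno
)
search depth first by cbr, empno set rn
select te.*, rpad('-',l,'-')||empno
  from cte te
 order by rn
;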


Problem solved, nice feeling. And a good taste in my mouth. Thanks to Ilkka and H. and The Yamazaki Single Malt, a 12-year-old Japanese whisky. Now to have some cake and buy tickets to a Tappara ice hockey game at Hakametsä.

2011-01-27

Partitioned Outer Join

Today was the day I had to densify some sparse data. I used a partitioned right outer join. The documentation example describes the problem well. Not much else to say about it, beyond the small sketch below.
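
A minimal sketch with hypothetical tables, not from the actual case: calendar holds one row per day, sales holds sparse rows per product, and the partitioned outer join densifies sales so every product gets a row for every day.

select c.day, s.product_id, nvl(s.qty, 0) qty
  from sales s
  partition by (s.product_id)
  right outer join calendar c
    on (c.day = s.sales_day);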

2011-01-24

GetClientIdentifier

Say you have a connection pool that connects to a schema with a single username, while the end users are authenticated by your application. It is possible to make the end user identifier visible at the database level in the v$session client_identifier column. DBMS_SESSION.SET_IDENTIFIER may be used, or with JDBC the setEndToEndMetrics method of oracle.jdbc.OracleConnection.

After setting the client identifier in a session it is possible to enable database tracing for that client identifier. But what about your own code: how can it read the client identifier set in the session? There is no DBMS_SESSION.GET_IDENTIFIER procedure. One might select from the v$session view, but the grants to do so might not be there.

Another possibility might be to use DBMS_APPLICATION_INFO.SET_CLIENT_INFO, since DBMS_APPLICATION_INFO.READ_CLIENT_INFO is available. Setting client info is not possible through the JDBC end-to-end metrics, though. Client info changes can be made to update the client identifier as well by setting ALTER SYSTEM SET EVENTS 'CLIENTID_OVERWRITE'; this way tracing by the end user identifier becomes available.

The client identifier is available through SYS_CONTEXT, so a GET_IDENTIFIER equivalent may be achieved using
select SYS_CONTEXT('userenv', 'CLIENT_IDENTIFIER') from dual;
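
A small end-to-end sketch (the identifier value is made up for illustration): set the identifier and read it back in the same session.

begin
  dbms_session.set_identifier('end_user_42');  -- hypothetical application end user
end;
/

-- should return the value just set
select sys_context('userenv', 'client_identifier') identifier
  from dual;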

About Me

I am Timo Raitalaakso. I have been working at Solita Oy since 2001 as a Senior Database Specialist. My main focus is on projects involving the Oracle database. Oracle ACE alumnus 2012-2018. In this Rafu on db blog I write about interesting issues that arise from my interaction with databases, mainly Oracle.