2009-12-07

SQL Tuning reading plans and temp usage

What the heck is using so much temp space? Luckily we have an environment with

1. Diagnostic pack purchased

2. 11.2.0.1 version - TEMP_SPACE_ALLOCATED in v$active_session_history

3. Software using dbms_application_info - action and module populated

The following queries will help. The first gives an overview and the last ones give pointers inside the execution of the queries. Now we have somewhere to start fixing our temp wasters.

Trying to figure out stuff from execution plans? Here are some presentations worth reading by Tanel Poder, Christian Antognini and Kyle Hailey.


select sql_id,action,module
,min(sql_exec_start),max(sql_exec_start),max(sample_time)
,min(TEMP_SPACE_ALLOCATED) mi
,trunc(avg(TEMP_SPACE_ALLOCATED)) av
,max(TEMP_SPACE_ALLOCATED) mx
from v$active_session_history h
group by sql_id,action,module
having max(TEMP_SPACE_ALLOCATED) > 0
order by mx desc;

select *
from v$active_session_history h
order by TEMP_SPACE_ALLOCATED desc nulls last;

select sql_id,action,module
,o.object_type,o.object_name,h.sql_plan_operation,h.sql_plan_options
,min(sample_time),max(sample_time)
,min(TEMP_SPACE_ALLOCATED) mi
,trunc(avg(TEMP_SPACE_ALLOCATED)) av
,max(TEMP_SPACE_ALLOCATED) mx
from v$active_session_history h
left outer join dba_objects o on o.object_id = h.current_obj#
group by sql_id,action,module,o.object_type,o.object_name
,SQL_PLAN_OPERATION,SQL_PLAN_OPTIONS
having max(TEMP_SPACE_ALLOCATED) > 0
order by mx desc;


Use DBA_HIST_ACTIVE_SESS_HISTORY for searching out temp consumers over a longer period.


select sql_opname,sql_id,module,action
,count(distinct sql_exec_start),max(temp_space_allocated) mx
from DBA_HIST_ACTIVE_SESS_HISTORY
where temp_space_allocated > 0
group by sql_opname,sql_id,module,action
order by mx desc
;
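
Once a suspect sql_id surfaces from the queries above, the actual plan with execution statistics can be examined with dbms_xplan. A sketch; `&sql_id` is a placeholder for a statement found above:

```sql
-- show the last execution's plan and row source statistics
select *
  from table(dbms_xplan.display_cursor('&sql_id', null, 'ALLSTATS LAST'));
```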

2009-12-01

25 hour day



alter session set TIME_ZONE = 'Europe/Helsinki';

select to_timestamp_tz('28032010','ddmmyyyy')
+ interval '3' hour
+ interval '30' minute as a
from dual;

28.3.2010 4:30:00.000000000 +03:00

select to_timestamp_tz('31102010','ddmmyyyy')
+ interval '3' hour
+ interval '30' minute as b
from dual;

31.10.2010 3:30:00.000000000 +03:00

select to_timestamp_tz('31102010','ddmmyyyy')
+ interval '4' hour
+ interval '30' minute as c
from dual;

31.10.2010 3:30:00.000000000 +02:00

select to_timestamp_tz('31102010','ddmmyyyy')
+ interval '24' hour
+ interval '30' minute as d
from dual;

31.10.2010 23:30:00.000000000 +02:00
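
The 31.10.2010 examples make sense once you note that Finland leaves daylight saving time that night, so the day really lasts 25 hours. A quick sketch to verify the day length by subtracting timestamps:

```sql
alter session set time_zone = 'Europe/Helsinki';

-- 31.10.2010 is 25 hours long: the difference is 1 day 1 hour
select to_timestamp_tz('01112010','ddmmyyyy')
     - to_timestamp_tz('31102010','ddmmyyyy') as day_length
from dual;
```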

2009-11-21

Invalid procedure

Wonder what is in there: a self-invalidating procedure, executable although invalid.


SQL> drop table i purge;

Table dropped.

SQL> create table i(n number);

Table created.

SQL> create or replace procedure INVALID_EXECUTABLE wrapped
2 a000000
3 b2
4 abcd
5 abcd
6 abcd
7 abcd
8 abcd
9 abcd
10 abcd
11 abcd
12 abcd
13 abcd
14 abcd
15 abcd
16 abcd
17 abcd
18 abcd
19 7
20 133 148
21 /GKO5fy/wMAULSQ3EONeuteOeyYwgzLINa5qfHQCrcHqNAWjUjjq5aQl2wunqxfypGG6DTUC
22 yHBZPTO1oTh2rTHCnRckEhNR70+2N0Mxn4pbg7fZJwVKX9dFOCicXmazqP6JcBIEQF1QCSdZ
23 DK5WOcSpQXvAnj5qIawt3H+JSwVpk8gG4oVLFcJ/Du2MO8CJvj7IVzAKn1TGVhx7nATQ1Xoh
24 FtjOUd1kLeAhNpt+DGEwkLz/U3ZKLO8ZUrzcIIYGLJsBqyicCNA73FdMJNCGw7WlVkNl7Vji
25 GmA6ougclJqWHoi4fzY81Efcubz7O35pxg==
26 /

Procedure created.

SQL> select * from i;

no rows selected

SQL> select status from user_objects where object_name='INVALID_EXECUTABLE';

STATUS
-------
VALID

SQL> exec INVALID_EXECUTABLE

PL/SQL procedure successfully completed.

SQL> select * from i;

N
----------
1

SQL> select status from user_objects where object_name='INVALID_EXECUTABLE';

STATUS
-------
INVALID

SQL> alter procedure INVALID_EXECUTABLE compile;

Procedure altered.

SQL> select status from user_objects where object_name='INVALID_EXECUTABLE';

STATUS
-------
VALID

SQL> exec INVALID_EXECUTABLE

PL/SQL procedure successfully completed.

SQL> select * from i;

N
----------
2

SQL> select status from user_objects where object_name='INVALID_EXECUTABLE';

STATUS
-------
INVALID

SQL> exec INVALID_EXECUTABLE

PL/SQL procedure successfully completed.

SQL> select * from i;

N
----------
3

SQL> select status from user_objects where object_name='INVALID_EXECUTABLE';

STATUS
-------
INVALID
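
The wrapped source is intentionally hidden, so the following is only a hypothetical reconstruction of what such a procedure could look like. The max(n)+1 logic and the dbms_utility.invalidate call are my guesses, not the actual unwrapped code:

```sql
create or replace procedure invalid_executable
as
  l_id number;
begin
  -- matches the observed behavior: n grows by one per execution
  insert into i select nvl(max(n),0)+1 from i;
  -- then mark this very procedure invalid by its object_id
  select object_id into l_id
    from user_objects
   where object_name = 'INVALID_EXECUTABLE';
  dbms_utility.invalidate(l_id);
end;
/
```

Executing an invalid procedure triggers an automatic recompile, which is why each exec still succeeds.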

2009-11-20

PLSQL_WARNINGS

Struggling with pl/sql code having EXCEPTION WHEN OTHERS THEN NULL; lines.


alter session set plsql_warnings = 'ENABLE:6009';


and compile your code. You get a nice report of your buggy code. There are quite a few other warnings available since 10.1. Here is a sqlplus script to be run on your development/test environment to get information about your possibly problematic code. The compilewarnings.sql script generates a p.txt file containing the output.


SQL> @compilewarnings.sql
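
The script essentially recompiles objects and reads USER_ERRORS. A minimal sketch of the same idea; my_proc is a placeholder name:

```sql
alter session set plsql_warnings = 'ENABLE:6009';

-- recompile a suspect, then list the warnings collected for it
alter procedure my_proc compile;

select name, type, line, position, text
  from user_errors
 where attribute = 'WARNING'
 order by name, type, line;
```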

2009-11-08

Check constraints and AND

Rob van Wijk has nicely described check constraints when converting an inheritance hierarchy to a relation using the single table implementation. An issue I had planned to write about some day. A similar approach also applies to checks depending on state/status fields.

2009-11-03

unindex 11.2

Tom Kyte has updated his unindex documentation. It is good to review your work every now and then. I guessed he had a valid version of unindex somewhere, I just did not find it at the time.

Jonathan Lewis is talking about the foreign key indexing issue on OTN. It might be that not all foreign keys need an index in your schema.

The new 11.2 function listagg is useful also in unindex. Here is a listagg version of unindex for the new Oracle version.

select case when i.index_name is not null
            then 'OK'
            else '****'
       end ok
     , c.table_name
     , c.constraint_name
     , c.cols
     , i.index_name
from (
  select a.table_name
       , a.constraint_name
       , listagg(b.column_name, ' ' ) 
          within group (order by column_name) cols
      from user_constraints a, user_cons_columns b
     where a.constraint_name = b.constraint_name
       and a.constraint_type = 'R'
  group by a.table_name, a.constraint_name
 ) c
 left outer join
 (
  select table_name
       , index_name
       , cr
       , listagg(column_name, ' ' ) 
          within group (order by column_name) cols
    from (
        select table_name
             , index_name
             , column_position
             , column_name
             , connect_by_root(column_name) cr
          from user_ind_columns
       connect by prior column_position-1 = column_position
              and prior index_name = index_name
         )
    group by table_name, index_name, cr
) i on c.cols = i.cols and c.table_name = i.table_name
;



Even though the unindex query is not the kind of query that is run several times a day, I measured execution times of the different versions. The test schema contains 1700 foreign keys. Performance comparison:
"col_cnt > ALL" 13 sec
"connect by"     3 sec
"listagg"        1 sec


Foreign keys may also point to and from another schema. Here you can find a version using the ALL_CONSTRAINTS and ALL_CONS_COLUMNS views.


2009-11-02

TOra

Old news, but I just noticed that the TOra project is alive. Version 2.0.0 has been available for Windows for almost a year already. I used the product before Quest purchased it from Underscore AB. Worth trying next to SQL Developer, Toad and/or other tools.

2009-10-29

OUGF Autumn Seminar

Today is the last day to register for the OUGF Autumn Seminar 2009 at Vanha ylioppilastalo (The Old Student House), Helsinki, 5.11.2009. Also time for me to put together a presentation about hierarchical storage structures and queries. To be held there in Finnish.

2009-10-28

Coding Dojo

So, the day came to use PL/SQL MapReduce. Sooner than I thought two days ago. The purpose was not parallel execution but the example itself. Today I participated in Solita Oy's free-time activity, a coding dojo. The simplest task there was to spell out numbers and count the letters used. I used the brute force method to spell all the required numbers. I guess others used more sophisticated approaches. Mine is for sure the only SQL implementation today. I did not get to see the other implementations and missed the sauna, because I left early to play with my kids.

My implementation euler17.sql

2009-10-27

Direct path insert and Data Guard

Christian Antognini describes nicely the direct path insert hints from version 10.2 to 11.2.

Direct path insert works with Data Guard. No undo is generated for a direct path insert, but redo is generated. No redo is generated if nologging is set on the destination table. With Data Guard, force logging should be set. Good explanations about this can be found in several discussions on asktom.

2009-10-26

MapReduce

There might come a day to use parallel SQL processing. A good description of the Map-Reduce model using Parallel Pipelined Table Functions and parallel operations.
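
The linked description can be boiled down to a minimal sketch. The type and function names below are my own, not from the referenced article:

```sql
create type num_tab as table of number;
/

-- a parallel-enabled pipelined "map" step over an input cursor
create or replace function map_double(p_rows sys_refcursor)
return num_tab pipelined
parallel_enable (partition p_rows by any)
as
  l_n number;
begin
  loop
    fetch p_rows into l_n;
    exit when p_rows%notfound;
    pipe row (l_n * 2);  -- the "map" operation
  end loop;
  return;
end;
/

-- usage: select * from table(map_double(cursor(select n from some_table)));
```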

2009-10-08

Not overlapping daily

Attending. First day gone. Looking forward to tomorrow. Tanel, should you call your seminar a Basic Oracle Troubleshooting Seminar, as you talk about how a certain C program is executing function by function? Recommended attendance for all who troubleshoot Oracle.

Back to basics. Should it not be a basic property of an RDBMS to fire a trigger on the event it is instructed to execute on? With Oracle the issue is not so obvious. A certain kind of compound trigger will not fire when called through JDBC. Metalink -soon to retire- bug no 6785707.

Another reason for my previous post. But was I thinking too complex? There is actually no need to track the used years bitwise. The problem is not an O(2^n) problem but O(n), where n is the number of distinct possible values in the validity interval. How about changing the possible values from yearly to daily?




drop table z cascade constraints purge;

create table z(z number(16) not null
, validfrom date not null
, validtill date not null
, constraint fromd check (trunc(validfrom)=validfrom)
, constraint tilld check (trunc(validtill)=validtill)
, constraint fro2000
check (to_date('20000101','yyyymmdd') < validfrom)
, constraint til2050
check (validtill <= to_date('20500101','yyyymmdd'))
, constraint frotil check (validfrom <= validtill)
);

begin
for i in (
select 'create unique index z'||d||'
on z (case when validfrom <= to_date('''||d||''',''yyyymmdd'')
and to_date('''||d||''',''yyyymmdd'') < validtill
then z
else null
end)' createindex
from
(select level l
, to_char(to_date('20000101','yyyymmdd')+level-1,'yyyymmdd') d
from dual
connect by level<=to_date('20500101','yyyymmdd')-to_date('20000101','yyyymmdd')
)
order by l
)
loop
execute immediate i.createindex;
end loop;
end;
/

begin
*
ERROR at line 1:
ORA-01792: maximum number of columns in a table or view is 1000
ORA-06512: at line 18



Not a production-ready approach. How come index creation complains about the maximum number of columns in a table?


select count(*)
from user_indexes
where table_name = 'Z';

COUNT(*)
----------
997

select count(*)
from user_tab_columns
where table_name = 'Z';

COUNT(*)
----------
3

select count(*)
from user_tab_cols
where table_name = 'Z';

COUNT(*)
----------
1000



A function-based index creates an invisible virtual column in the table. A table may have at most 1000 columns.
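
Those hidden columns can be seen in USER_TAB_COLS (the VIRTUAL_COLUMN column assumes 11g):

```sql
-- hidden columns generated by the function-based indexes on Z
select column_name, hidden_column, virtual_column
  from user_tab_cols
 where table_name = 'Z'
 order by internal_column_id;
```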

Well, we have the first 997 days covered. How does a small insert test perform with those 997 indexes?



SQL> insert into z
2 values(1,to_date('20010101','yyyymmdd'),to_date('20010102','yyyymmdd'));

1 row created.

Elapsed: 00:00:00.50
SQL>
SQL> insert into z
2 values(1,to_date('20020101','yyyymmdd'),to_date('20020102','yyyymmdd'));

1 row created.

Elapsed: 00:00:00.40
SQL>
SQL> insert into z
2 values(1,to_date('20020102','yyyymmdd'),to_date('20030101','yyyymmdd'));

1 row created.

Elapsed: 00:00:00.42
SQL>
SQL> insert into z
2 values(1,to_date('20020201','yyyymmdd'),to_date('20020202','yyyymmdd'));
insert into z
*
ERROR at line 1:
ORA-00001: unique constraint (RAFU.Z20020201) violated


Elapsed: 00:00:01.50
SQL>
SQL> select * from z;

Z VALIDFRO VALIDTIL
---------- -------- --------
1 20010101 20010102
1 20020101 20020102
1 20020102 20030101

Elapsed: 00:00:01.01
SQL>
SQL> update z
2 set validfrom=to_date('20010101','yyyymmdd')
3 where z=1 and validfrom=to_date('20020101','yyyymmdd');
update z
*
ERROR at line 1:
ORA-00001: unique constraint (RAFU.Z20010101) violated


Elapsed: 00:00:01.15

2009-09-28

Not overlapping

Let's have a validity period in a table and a rule not to have overlapping periods. The correct solution is to have another table holding the concurrency lock and a compound trigger to handle the overlapping check. Or might there be an alternative way? A way that does not force us to rely on all code modifying the table remembering to use the concurrency lock table before modifying the periods. Here it comes. A great abuse of the "bad hack" function-based indexes. No need for the concurrency table or triggers.



drop table z cascade constraints purge;

create table z(z number(16) not null
, validfrom number(4) not null
, validtill number(4) not null
, constraint fro2000 check (2000 < validfrom)
, constraint til2050 check (validtill <= 2050)
, constraint frotil check (validfrom <= validtill)
);

create or replace function f(fro number,til number)
return number
deterministic
as
n number(16);
begin
n:=0;
for i in fro..(til-1) loop
n:= n + power(2,i-2001);
end loop;
return n;
end;
/
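
To see the encoding: each year in [validfrom, validtill) gets its own bit, so two overlapping periods for the same z collide on at least one bit and thus on at least one of the unique indexes below.

```sql
-- years 2001 and 2002 map to bits 2^0 and 2^1
select f(2001,2003) as bits from dual;  -- returns 3
```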


create unique index z1 on z (case when bitand(1,f(validfrom,validtill)) = 1 then z else null end);

create unique index z2 on z (case when bitand(2,f(validfrom,validtill)) = 2 then z else null end);

create unique index z3 on z (case when bitand(4,f(validfrom,validtill)) = 4 then z else null end);

create unique index z4 on z (case when bitand(8,f(validfrom,validtill)) = 8 then z else null end);

create unique index z5 on z (case when bitand(16,f(validfrom,validtill)) = 16 then z else null end);

create unique index z6 on z (case when bitand(32,f(validfrom,validtill)) = 32 then z else null end);

create unique index z7 on z (case when bitand(64,f(validfrom,validtill)) = 64 then z else null end);

create unique index z8 on z (case when bitand(128,f(validfrom,validtill)) = 128 then z else null end);

create unique index z9 on z (case when bitand(256,f(validfrom,validtill)) = 256 then z else null end);

create unique index z10 on z (case when bitand(512,f(validfrom,validtill)) = 512 then z else null end);

-- You probably get the idea. Indexes z11-z46 left out ...

create unique index z47 on z (case when bitand(70368744177664,f(validfrom,validtill)) = 70368744177664 then z else null end);

create unique index z48 on z (case when bitand(140737488355328,f(validfrom,validtill)) = 140737488355328 then z else null end);

create unique index z49 on z (case when bitand(281474976710656,f(validfrom,validtill)) = 281474976710656 then z else null end);

SQL> insert into z values(1,2001,2011);

1 row created.

SQL> insert into z values(1,2011,2011);

1 row created.

SQL> insert into z values(1,2010,2012);
insert into z values(1,2010,2012)
*
ERROR at line 1:
ORA-00001: unique constraint (RAFU.Z10) violated

SQL> insert into z values(2,2049,2050);

1 row created.

SQL> insert into z values(2,2049,2050);
insert into z values(2,2049,2050)
*
ERROR at line 1:
ORA-00001: unique constraint (RAFU.Z49) violated

SQL> insert into z values(2,2010,2012);

1 row created.

SQL> insert into z values(2,2001,2049);
insert into z values(2,2001,2049)
*
ERROR at line 1:
ORA-00001: unique constraint (RAFU.Z10) violated

SQL> insert into z values(2,2014,2017);

1 row created.

2009-09-21

Denormalize safely

For some reason there is a need to denormalize values from table p to table c.
How to ensure that the denormalized values are the same as the original values in p?


drop table c cascade constraints purge;

drop table p cascade constraints purge;

create table p(p_id number(10) primary key
, p_name varchar2(200) not null);

create table c(c_id number(10) primary key
, p_id constraint c_p_fk references p
, c_value varchar2(200) not null);


The denormalization.



alter table c add (p_name varchar2(200) not null);

alter table p add unique (p_name,p_id);

alter table c add constraint c_p_2fk foreign key(p_name,p_id) references p(p_name,p_id);



Make the c_p_2fk deferrable if there is a need to update the denormalized values. To satisfy unindex:

create index c_p_2fk_idx on c(p_id,p_name);
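
A quick sanity check of the construct (the sample values are mine): a c row whose p_name does not match the parent's p_name violates c_p_2fk.

```sql
insert into p (p_id, p_name) values (1, 'Acme');

-- matching denormalized name: allowed
insert into c (c_id, p_id, c_value, p_name) values (10, 1, 'x', 'Acme');

-- mismatching name: fails with ORA-02291 (C_P_2FK) parent key not found
insert into c (c_id, p_id, c_value, p_name) values (11, 1, 'y', 'Acme Oy');
```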

2009-09-11

Only one (is_current)

Kimball writes in The Data Warehouse Toolkit about slowly changing dimension type 6. I have seen such structures with current values marked in a separate attribute, like in Wikipedia. If the dimension stores transaction time as durations, then the current indicator column might as well be a virtual column in 11g, evaluated to Y if the end day is set to the end-of-days value. No need to update the current indicator column when populating new values for a dimension.

There should be at most one current indicator = Y row for each supplier. To satisfy this requirement it is possible to create a unique constraint on a virtual column. Actually I would replace the Y value with the supplier_key if the end of days is set in the row, and put the unique key on that column, as shown earlier. Adding such a column to an existing model might be problematic because the ETL or maintenance software might populate new rows before updating the old ones. Should there exist an atomic multi-table merge? With 11g and virtual columns this is quite easy to overcome by making the newly created unique constraint deferrable initially deferred.
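
A sketch of the 11g idea described above; the dimension columns and the end-of-days sentinel value are my assumptions:

```sql
create table supplier_dim
( supplier_sk  number primary key
, supplier_key number not null
, end_date     date   not null
  -- carries supplier_key only on the current row, null otherwise
, current_key  as (case when end_date = date '9999-12-31'
                        then supplier_key end) virtual
, constraint only_one_current_u unique (current_key)
    deferrable initially deferred
);
```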

But how about 10.2 without virtual columns? Let's dig into the only one, insert first and update then problem with an example without durations.


drop table di purge;

drop materialized view di_mv;

create table di(di_id number(4) primary key
, sour number(4) not null
, val number(4) not null
, is_current number(1) not null);



Each sour should have only one current row. This can be achieved by creating a "bad hack" function-based index.


create unique index d_only_one_u on di(case when is_current = 1 then sour end);

insert into di (di_id,sour,val,is_current) values (1,1,0,0);

insert into di (di_id,sour,val,is_current) values (2,1,1,1);

insert into di (di_id,sour,val,is_current) values (3,2,2,1);



A problematic ETL process might want to populate new values first and update the current status at the end of the transaction.


insert into di (di_id,sour,val,is_current) values (4,2,3,1);
insert into di (di_id,sour,val,is_current) values (4,2,3,1)
*
ERROR at line 1:
ORA-00001: unique constraint (RAFU.D_ONLY_ONE_U) violated


The unique index is checked immediately, so the new row is not allowed to be inserted.
Glue an on-commit refreshable materialized view onto the model.


drop index d_only_one_u;

create materialized view log on di with rowid;

create materialized view di_mv refresh on commit as
select sour,is_current, rowid drid
from di
where is_current = 1
;

alter table di_mv
add constraint sour_u unique (sour);

insert into di (di_id,sour,val,is_current) values (5,2,3,1);

commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-00001: unique constraint (RAFU.SOUR_U) violated


And it behaves like a deferrable initially deferred unique constraint. The uniqueness check is done at the end of the transaction.


insert into di (di_id,sour,val,is_current) values (6,2,3,1);

update di set is_current = 0 where di_id = 3;

commit;


OK. The downside is that di_mv uses space. May be worth using if your data population is being coded by humans.

2009-09-01

Only one (alternative 2)

A 10g-friendly approach to store primary currencies. No virtual column used. The structure looks a lot like what Tom Kyte suggested. The difference is that all country currencies are here populated in the same table and the materialized view is replaced with a foreign key constraint.


create table country(country varchar2(2) constraint country_pk primary key
, primary_currency varchar(3) not null
)
;

create table currency(country references country
, currency varchar(3)
, constraint currency_pk primary key (currency,country)
)
;

alter table country
add constraint primary_curr_in_country_curr
foreign key(primary_currency,country)
references currency(currency,country)
deferrable initially deferred
;


Populating


insert into country (country,primary_currency) values ('US','USS');

insert into currency (country,currency) values ('US','USD');

insert into currency (country,currency) values ('US','USN');

insert into currency (country,currency) values ('US','USS');

commit;

insert into country (country,primary_currency) values ('FI','EUR');

insert into currency (country,currency) values ('FI','EUR');

insert into currency (country,currency) values ('FI','FIM');

commit;


Changing the default. Easier than before. Less code


update country set primary_currency='USD' where country='US';

commit;

select * from country;
COUNTRY PRIMARY_CURRENCY
US      USD
FI      EUR

select * from currency;

COUNTRY CURRENCY
FI      EUR
FI      FIM
US      USD
US      USN
US      USS


Testing the model:



update country set primary_currency='USD' where country='FI';

commit;

ORA-02091: transaction rolled back
ORA-02291: integrity constraint (RAFU.PRIMARY_CURR_IN_COUNTRY_CURR) violated -
parent key not found

insert into country (country, primary_currency) values ('SW','SEK');

commit;

ORA-02091: transaction rolled back
ORA-02291: integrity constraint (RAFU.PRIMARY_CURR_IN_COUNTRY_CURR) violated -
parent key not found



There is only one default currency per country, as should be obvious.

Be sure to run unindex on your production model.

2009-08-31

Load using Java

Cary Millsap writes good things about measuring when tuning performance. His paper "Making Friends: Optimizing the Insert Program" talks about how to insert 10000 rows using Java. Only two possibilities are presented: insert using a Statement in a loop or using a PreparedStatement in a loop. Should an alternative way also be presented? Avoid looping statements in Java and populate all rows in a single call to the database.

Timing results inserting 10000 rows:

Array
Executed in 0 min 0 s 313 ms.
Prepared
Executed in 0 min 2 s 985 ms.


ARRAY:

create table si (s number(19),s2 number(19));
create or replace type nums is object ( n1 number(19), n2 number(19));
create type sit is table of nums;

import java.sql.Connection;
import java.sql.SQLException;
import java.util.List;

import oracle.jdbc.OraclePreparedStatement;
import oracle.sql.ARRAY;
import oracle.sql.ArrayDescriptor;

private static void insertArray(Connection c, List<Object[]> elems)
        throws SQLException {
    // bind the whole list as one collection of SQL type SIT
    ArrayDescriptor ad = ArrayDescriptor.createDescriptor("SIT", c);
    ARRAY a = new ARRAY(ad, c, elems.toArray());
    OraclePreparedStatement ops =
        (OraclePreparedStatement) c.prepareStatement(
            "insert into si select * from table(?)");
    ops.setARRAY(1, a);

    System.out.println(ops.executeUpdate());
    ops.close();
    c.commit();
}


Prepared:

private static void insertPrepared(Connection c, List<Object[]> elems)
        throws SQLException {
    PreparedStatement ps = c.prepareStatement("insert into si values(?,?)");
    for (Object[] o : elems) {
        ps.setInt(1, (Integer) o[0]);
        ps.setInt(2, (Integer) o[1]);
        ps.executeUpdate();  // one round trip per row
    }
    ps.close();

    c.commit();
}


Here are some timings for other numbers of rows. They show that if you are inserting 100-1000 rows this approach might be worth considering. Measure yourself. Be sure to have enough memory available for your Java heap.


ms              localhost            remote db
rows        prepared   array    prepared   array
10                94     140          93     141
100              125     141         125     156
1000             313     203         844     265
10000           1500     344       16297     453
100000         12063    1422      206210    2078
1000000       182063    java.lang.OutOfMemoryError: Java heap space


insertintoselect.java

2009-08-27

Only one

Last September Tom Kyte wrote yet another good article in Oracle Magazine, The Trouble with Triggers. The point about triggers is very good, but I do not like the example and the "Correct Answer". He suggests modeling currencies in two separate tables. In my opinion the data model is entirely wrong if the same thing is modeled in many places. "Less code equals fewer bugs. Look for ways to write less code." Here is my approach to the problem without a materialized view and materialized view logs.

The problem itself is that
-a country must have a default currency
-there is only one default currency per country


create table country(country varchar2(2) constraint country_pk primary key);

create table currency(country references country
, currency varchar(3)
, is_primary varchar2(1) not null
check (is_primary in ('Y','N'))
, primary_country
as (case when is_primary = 'Y' then country end)
virtual
constraint only_one_primary_u unique
, constraint currency_pk primary key (currency,country)
)
;

alter table country
add constraint must_have_at_least_one_primary
foreign key(country)
references currency(primary_country)
deferrable initially deferred;



And that is it. One may see that the "bad hack" is used behind only_one_primary_u. The unique constraint generates a function-based normal index.
The good thing about this is that the constraints are visible in the constraint list, not implemented only in an index or a materialized view.


select *
from user_constraints
where constraint_name
in ('ONLY_ONE_PRIMARY_U','MUST_HAVE_AT_LEAST_ONE_PRIMARY');


Populating


insert into country (country) values ('US');

insert into currency (country,currency,is_primary) values ('US','USD','N');

insert into currency (country,currency,is_primary) values ('US','USN','N');

insert into currency (country,currency,is_primary) values ('US','USS','Y');

commit;

insert into country (country) values ('FI');

insert into currency (country,currency,is_primary) values ('FI','EUR','Y');

insert into currency (country,currency,is_primary) values ('FI','FIM','N');

commit;



Changing the default


update currency set is_primary = 'N' where country = 'US' and currency = 'USS';

update currency set is_primary = 'Y' where country = 'US' and currency = 'USD';

commit;

select * from country;

COUNTRY
US
FI

select * from currency;

COUNTRY CURRENCY IS_PRIMARY PRIMARY_COUNTRY
US      USD      Y          US
US      USN      N
US      USS      N
FI      EUR      Y          FI
FI      FIM      N




A country must have a default currency


insert into country (country) values ('NO');

commit;

ORA-02091: transaction rolled back
ORA-02291: integrity constraint (RAFU.MUST_HAVE_AT_LEAST_ONE_PRIMARY) violated - parent key not found

insert into country (country) values ('SW');

insert into currency (country,currency,is_primary) values ('SW','EUR','N');

commit;

ORA-02091: transaction rolled back
ORA-02291: integrity constraint (RAFU.MUST_HAVE_AT_LEAST_ONE_PRIMARY) violated - parent key not found



There is only one default currency per country


insert into currency (country,currency,is_primary) values ('FI','USD','Y');

ORA-00001: unique constraint (RAFU.ONLY_ONE_PRIMARY_U) violated




2009-08-19

Multi table insert atomic?

Reading SQL and Relational Theory: How to Write Accurate SQL Code, which talks about constraints and data modifications. It should be possible to update several tables in one atomic operation. Oracle's multi-table insert could be something in that direction, but there is no atomicity. The bug is still open, filed against a version that is no longer supported.


SQL> create table a (a_id number(1) primary key
2 , b_id number(1) not null);

Table created.

SQL> create table b (b_id number(1) primary key
2 , a_id not null constraint b_a_fk references a );

Table created.

SQL> alter table a add
2 constraint a_b_fk foreign key (b_id) references b
3 deferrable initially deferred;

Table altered.

SQL> insert all into b values (x,x)
2 into a values (x,x)
3 select 0 x
4 from dual;
insert all into b values (x,x)
*
ERROR at line 1:
ORA-02291: integrity constraint (RAFU.B_A_FK) violated - parent key not found


SQL> insert all into a values (x,x)
2 into b values (x,x)
3 select 1 x
4 from dual;

2 rows created.

SQL> insert into a values (2,2);

1 row created.

SQL> insert into b values (2,2);

1 row created.

SQL> select * from a;

A_ID B_ID
---------- ----------
1 1
2 2

SQL> rollback;

Rollback complete.

2009-08-14

SQL Donut

An Oracle version of http://code.openark.org/blog/mysql/sql-pie-chart. Just change the val query to create your own donuts.

######
Note 17.8.2009: wm_concat does not guarantee the right col_number order. An update to this post is coming in the future.
Note 2.9.2009: replaced the wm_concat implementation with listagg. New in 11.2.
######


:s size
:r radius
:t stretch

with val as (
select 'red' name_column, 1 value_column from dual
union all
select 'blue' name_column, 2 value_column from dual
union all
select 'orange' name_column, 3 value_column from dual
union all
select 'white' name_column, 4 value_column from dual
), ratio as (
select row_number() over (order by v.value_column,rownum) name_order
, v.name_column
, v.value_column
, sum(value_column) over (order by value_column,rownum) accumulating_value
, sum(value_column) over (order by value_column,rownum)
/ sum(value_column) over () accumulating_value_ratio
, 2*3.15*sum(value_column) over (order by value_column,rownum)
/ sum(value_column) over () accumulating_value_radians
, trunc(100*value_column / sum(value_column) over (),2) pc
from val v
), t1 as (
select level-1 value from dual connect by level <= to_number(:s)*to_number(:t)
), t2 as (
select level-1 value from dual connect by level <= :s
), c as (
select '#;o:X"@+-=123456789abcdef' as colors from dual
), ro as (
select t1.value col_number
, t2.value row_number
, t1.value/:t - (:s-1)/2 dx
, (:s-1)/2 - t2.value dy
from t1, t2
)
, ro2 as (
select ro.*, case when dx = 0
then 3.14/2
else (atan(abs(dy/dx)))
end abs_radians
from ro)
, ro3 as (
select ro2.*, case when sign(dy) >= 0 and sign(dx) >=0 then abs_radians
when sign(dy) >= 0 and sign(dx) <=0 then 3.14-abs_radians
when sign(dy) <= 0 and sign(dx) <=0 then 3.14+abs_radians
when sign(dy) <= 0 and sign(dx) >=0 then 2*3.14-abs_radians
end radians
from ro2
)
select listagg(case when round(sqrt(power(col_number/:t-0.5-(:s-1)/2, 2)
+ power(row_number-(:s-1)/2, 2))) BETWEEN :r/2 AND :r
then rpad(name_order,:t,name_order)
else rpad(' ',:t,' ')
end) within group (order by col_number) circle
from (select substr(colors, name_order, 1) name_order
, row_number
, col_number
, row_number() over (partition by col_number,row_number
order by name_order) c
from ratio, ro3, c
where accumulating_value_radians >= radians
) where c = 1
group by row_number
union all
select rpad(substr(colors,name_order,1),:t,substr(colors,name_order,1))||' '||
name_column||' '||
value_column||' ('||
pc||'%)'
from ratio, c
;


CIRCLE
oooooo;;;;;;;;
oooooooooo;;;;;;;;;;;;
oooooooo ;;######
oooooooo ########
oooooooo ::::::::
oo::::::::::::::::::::
::::::::::::::
## red 1 (10%)
;; blue 2 (20%)
oo orange 3 (30%)
:: white 4 (40%)


2009-06-02

Unique index candidates

Jonathan Lewis is blogging about indexes that could benefit from a rebuild and Richard Foote is pointing out that you should preferably use unique indexes where possible. Putting these together: indexes that could be unique but are not should be rebuilt. Here is a way to find candidates.


SELECT DISTINCT table_name, index_name AS unique_index_candidate
, constraint_name AS based_on_constraint
, status AS constraint_status
FROM (SELECT con.table_name, con.constraint_name, con.m
, ind.index_name, con.status
, COUNT (con.column_name)
OVER (PARTITION BY con.table_name
, con.constraint_name
, ind.index_name) n
FROM (SELECT c.table_name, c.constraint_name
, o.column_name
, c.status
, MAX (o.POSITION)
OVER (PARTITION BY c.owner
, c.table_name
, c.constraint_name) m
FROM user_cons_columns o
INNER JOIN user_constraints c
ON o.constraint_name = c.constraint_name
WHERE c.constraint_type IN ('P', 'U')
AND c.DEFERRABLE = 'NOT DEFERRABLE') con
INNER JOIN
(SELECT i.table_name, i.index_name, n.column_name
FROM user_ind_columns n INNER JOIN user_indexes i
ON n.index_name = i.index_name
WHERE i.uniqueness = 'NONUNIQUE') ind
ON con.table_name = ind.table_name
AND con.column_name = ind.column_name
)
WHERE n = m
ORDER BY table_name, index_name
;
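
Note that an existing nonunique index cannot be altered to unique; the rebuild is a drop and recreate. A sketch with hypothetical names:

```sql
-- emp_email_idx on emp(email) is a hypothetical candidate found above
drop index emp_email_idx;
create unique index emp_email_idx on emp(email);
```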

2009-04-24

sys_connect_by_path in sql server

Oracle sys_connect_by_path

create table v(o int, k varchar(20));

insert into v values(1,'Barbados');
insert into v values(1,'Rio De Janeiro');
insert into v values(1,'Kapkaupunki');
insert into v values(2,'Hanko');
insert into v values(2,'Helsinki');


select o, max(sys_connect_by_path(k, '/')) as scbp
from (
select o
, k
, row_number() over (partition by o order by k) as rn
from v
)
start with rn = 1
connect by prior rn = rn - 1 and prior o = o
group by o
;

1 /Barbados/Kapkaupunki/Rio De Janeiro
2 /Hanko/Helsinki


SQL Server:

create table dbo.v(o int, k varchar(20))

insert into dbo.v values(1,'Barbados')
insert into dbo.v values(1,'Rio De Janeiro')
insert into dbo.v values(1,'Kapkaupunki')
insert into dbo.v values(2,'Hanko')
insert into dbo.v values(2,'Helsinki')

with x as (
select o
, k
, 1 as level
, row_number() over (partition by o order by k) as rn
, k as scbp
from dbo.v
), cte as (
select o
, k
, level
, rn
, '/' + cast(k as varchar(max)) as scbp
from x
where rn = 1
union all
select x.o
, x.k
, cte.level + 1 as level
, x.rn
, cte.scbp + '/' + cast(x.scbp as varchar(max)) as scbp
from x inner join cte on x.rn - 1 = cte.rn and x.o = cte.o
)
select o, max(scbp)
from cte
group by o
;

1 /Barbados/Kapkaupunki/Rio De Janeiro
2 /Hanko/Helsinki
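
Both queries compute the same thing: per group, the members sorted and joined with a leading separator. For comparison, the same result in a few lines of Python:

```python
from collections import defaultdict

rows = [(1, 'Barbados'), (1, 'Rio De Janeiro'), (1, 'Kapkaupunki'),
        (2, 'Hanko'), (2, 'Helsinki')]

# Collect the k values per group o.
groups = defaultdict(list)
for o, k in rows:
    groups[o].append(k)

# sys_connect_by_path style: leading separator, members in sorted order.
scbp = {o: '/' + '/'.join(sorted(ks)) for o, ks in groups.items()}
```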

2009-04-08

SUM over time

Time is continuous, and time periods are modelled in SQL tables with validFrom and validTill columns. I like to see the pair stored as open ended: the validFrom moment is included in the period, but the validTill moment is excluded. This way it is possible to store the whole timeline.

Let's have a relation with such a pair:

9.4.2009 10.4.2009 1
10.4.2009 12.4.2009 2
11.4.2009 14.4.2009 3

As one can see, the two last rows overlap, and for 11.4. the sum is altogether 5. Here is SQL that generates the sum for each moment of time.


with td as (
 select trunc(sysdate) + mod(level, 4) validFrom
      , trunc(sysdate) + mod(level, 8) * 2 validTill
      , level a
   from dual
connect by level < 4
)
select validFrom
     , nvl(lead(validFrom) over (order by validFrom)
         , to_date('22000101','yyyymmdd')) validTill
     , sumovertime
  from (
 select validFrom, max(sumover) sumovertime
   from (
  select a
   , case r when 1 then validFrom else validTill end validFrom
   , sum(case r when 1 then a else -a end) over (order by case r when 1 then validFrom else validTill end,r desc) sumover
   from td, (
  select rownum r from dual connect by level < 3
  )
 )
 group by validFrom
)
order by validFrom
;

9.4.2009 10.4.2009 1
10.4.2009 11.4.2009 2
11.4.2009 12.4.2009 5
12.4.2009 14.4.2009 3
14.4.2009 1.1.2200 0
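
The SQL above is a sweep over the interval endpoints: each validFrom adds the value, each validTill subtracts it, and the running sum between consecutive endpoints is the total in force. Here is the same sweep sketched in Python, using the example rows above (dates kept as strings for brevity, so they sort correctly in ISO format):

```python
intervals = [('2009-04-09', '2009-04-10', 1),
             ('2009-04-10', '2009-04-12', 2),
             ('2009-04-11', '2009-04-14', 3)]

# +value at validFrom, -value at validTill, then sweep the sorted endpoints.
events = sorted([(f, v) for f, t, v in intervals] +
                [(t, -v) for f, t, v in intervals])

timeline, running = [], 0
for moment, delta in events:
    running += delta
    if timeline and timeline[-1][0] == moment:
        timeline[-1] = (moment, running)   # merge deltas at the same moment
    else:
        timeline.append((moment, running))
```

Each timeline entry is the sum valid from that moment until the next one, matching the SQL output (1, 2, 5, 3, 0).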



2009-03-13

Vacation

Just starting my winter holiday. Next month there are also days free from work: Easter is coming. The date of Easter moves around. I found an algorithm on Wikipedia for computing the date of Easter Sunday. Here is an implementation of the "Meeus/Jones/Butcher" algorithm in SQL:


select to_date((p+1)||'-'||n||'-'||vu,'dd-mm-yyyy')
from (
select vu, a, b, c, d, e, f, g, h, i, k, l, m, trunc((h+l-7*m+114)/31) as n, mod(h+l-7*m+114,31) as p
from (
select vu, a, b, c, d, e, f, g, h, i, k, l, trunc((a+11*h+22*l)/451) as m
from (
select vu, a, b, c, d, e, f, g, h, i, k, mod(32+2*e+2*i-h-k,7) as l
from (
select vu, a, b, c, d, e, f, g, mod(19*a+b-d-g+15,30) as h, trunc(c/4) as i, mod(c,4) as k
from (
select vu, a, b, c, d, e, f, trunc((b-f+1)/3) as g
from (
select vu, a, b, c, trunc(b/4) as d, mod(b,4) as e, trunc((b+8)/25) as f
from (
select vu, mod(vu,19) as a, trunc(vu/100) as b, mod(vu,100) as c
from (
select level+1999 vu
from dual connect by level <= 20 
) ) ) ) ) ) ) )
;
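
The same "Meeus/Jones/Butcher" computation reads more naturally in a procedural language. A Python sketch of the identical steps, with variables a through p named as in the SQL columns:

```python
def easter(year):
    """Gregorian Easter Sunday as (month, day), Meeus/Jones/Butcher algorithm."""
    a = year % 19
    b, c = year // 100, year % 100
    d, e = b // 4, b % 4
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = c // 4, c % 4
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    n = (h + l - 7 * m + 114) // 31    # month
    p = (h + l - 7 * m + 114) % 31     # day - 1
    return n, p + 1
```

For example, easter(2010) falls in the 25-hour-day spring of the post above.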

2009-02-24

Idiocy

People are idiots, some more than others. At least people are idiots in different aspects of life and different issues. A person is an idiot differently each day. At least I feel like that. Some days it is quite obvious how things are, and other days you cannot figure out the simplest case. Well, maybe it is not idiocy to be clueless about a thing one has never figured out before. Most often it is really nice to work with people who are less idiotic than me, at least on an issue that I have tried to figure out and the other person has a ready answer to.

I have set up a 10g Data Guard manually and tried to register the databases in Grid Control.
The standby instance seemed quite hard to get registered with the agent. I did not figure out that the standby node is in mount state, so the dbsnmp monitoring user was trying to reach the database as a normal user. Just change the monitoring user to connect to the database as sysdba and voilà, Grid Control shows the roles of the instances correctly. Creating a standby instance with Grid Control handles that automatically.

This week I have had the pleasure of working with a person who could figure out things as problems arose, and with an organization that scheduled HA environment testing well before going to production. After setting up the Data Guard environment we had plenty of time to try out different kinds of switchover and failover situations, and there is still time in the project plan to try out even more complex scenarios. This possibility and capability should exist in every project with an HA environment.

2009-02-21

A week on the other side (part3 DB2)

Missing Oracle function-based indexes. I got involved in a project with a DB2 database environment. There is a need for a case-insensitive predicate in a where clause. In Oracle it is possible to create a function-based index:

create table t(n number(8) primary key, last_name varchar(100));

insert into t (n,last_name)
select rownum, object_name
from all_objects;

select * from t where upper(last_name) = :ln;

This performs a full table scan on table t.

create index t_last_name_upper on t(upper(last_name));

select * from t where upper(last_name) = :ln;

And the index is used if the table is big enough.

In DB2 there is no such thing as a function-based index, but an article on the IBM site describes alternatives: generated columns and index extensions. Generated columns seem like a valid approach, even though the data is duplicated on each row. Needing no triggers seems like a good thing, and the original query does not need any modifications.

This was about the first time I used the db2cc Control Center to do things in a DB2 environment. Execution plans were easy to find on the Access Plan page. At least with a small table the implementation and tests went fine, although adding the column was not so straightforward because it is a GENERATED ALWAYS AS column:

SET INTEGRITY FOR t OFF;

alter table t
add column last_name_u varchar(100)
GENERATED ALWAYS AS (UPPER(last_name));

SET INTEGRITY FOR t IMMEDIATE CHECKED FORCE GENERATED;

create index t_lastname_upper_idx on t(last_name_u);



How much easier that would have been in Oracle 11g. Just add a virtual column...

alter table t add last_name_u as (upper(last_name));

create index t_last_name_upper on t(last_name_u);

select * from t where upper(last_name) = :ln;


Similarly there is no need to change the query, the index is used, and the column is virtual, so it needs no storage. The index extensions approach might be the way to avoid storing the data needlessly in DB2.

I also needed to reduce the size of a column, and I was surprised how often REORG TABLE is needed in DB2. Or did I miss something in the documentation?

A week on the other side (part2 SQLServer)


I had the pleasure of spending more than an hour installing SQL Server for evaluation purposes. I installed SQL Server 2008 for the first time. I now have 180 days to run tests with the AdventureWorks databases.

A couple of weeks ago I went to see a presentation about SQL Server 2008 aimed at Oracle specialists, presented mainly by Marko Hotti. A nice overview of the product. There was a question from the audience about isolation levels in SQL Server. There is still a common belief among Oracle specialists that readers block writers. SQL Server has a snapshot isolation level; it was introduced already in version 2005.

2009-02-20

A week on the other side (part1 PostgreSQL)

For an Oracle-oriented person like me, it was nice to see that indexing in a PostgreSQL environment has the same kind of potential to boost query performance. In an OLTP environment the first star was the most meaningful this time: query execution time diminished from minutes to a fraction of a second. By stars I mean the stars described in the book Relational Database Index Design and the Optimizers, written by Tapio Lahdenmäki. Recommended reading, at least the chapter "Deriving the Ideal Index for a SELECT". There were three separately indexed foreign key columns in a half-million-row table, and all those columns were predicates in a query. All that was needed was to create a single index containing all three columns.

Dropping an index in PostgreSQL was not an online operation. I need to figure out why.

2009-02-16

unindex



###########
# Updated the link to Tom Kyte's blog.
# The comment about Tom's script in this post is no longer valid.
# The script itself is still valid.
###########

Until now Tom Kyte's unindex script has satisfied my needs for finding unindexed foreign key constraints. Then came a day when the query needed improving. I ran into "enq: TM - contention" wait events because a foreign key was not indexed. After a quick look at the other indexes and queries on the table, I figured it would be nice to have the index columns in a different order than the constraint definition.

--example

create table a(a1 number(8)
, a2 number(8)
, x varchar2(2)
, primary key (a1,a2));

create table b(b number(8) primary key
, a1 number(8)
, a2 number(8)
, foreign key (a1,a2) references a);

create index fk_idx on b(a2,a1);

Tom's unindex wrongly flags this with four stars.
In my opinion fk_idx satisfies the indexing need: at least the TM enqueue wait events vanished from the environment after I created the index with a different column order than the foreign key.

Here is an alternative approach to searching for unindexed foreign keys, implemented as a single SQL statement. It works at least on Oracle 10.2.0.1 and 11.1.0.7. There is a bug involving connect_by_root in versions 10.2.0.3, 10.2.0.4 and 11.1.0.6; on those versions you need to set the _optimizer_connect_by_cost_based parameter to false.

alter session set "_optimizer_connect_by_cost_based" = false;


select case when i.index_name is not null
then 'OK'
else '****'
end ok
, c.table_name
, c.constraint_name
, c.cols,i.index_name
from (
select table_name, constraint_name
, max(sys_connect_by_path(column_name, ' ' )) cols
from (
select a.table_name
, b.constraint_name
, column_name
, position
, row_number() over
(partition by b.constraint_name
order by column_name) rn
from user_constraints a, user_cons_columns b
where a.constraint_name = b.constraint_name
and a.constraint_type = 'R'
)
start with rn = 1
connect by prior rn = rn-1
and prior constraint_name = constraint_name
group by table_name, constraint_name
) c
left outer join
(
select table_name
, index_name
, cr
, max(sys_connect_by_path(column_name, ' ' )) cols
from (
select table_name
, index_name
, column_position
, column_name
, cr
, row_number() over (partition by index_name, cr
order by column_name) rn
from (
select table_name
, index_name
, column_position
, column_name
, connect_by_root(column_name) cr
from user_ind_columns
connect by prior column_position-1 = column_position
and prior index_name = index_name
)
)
start with rn = 1
connect by prior rn = rn-1
and prior index_name = index_name
and prior cr = cr
group by table_name, index_name, cr
) i on c.cols = i.cols and c.table_name = i.table_name
;
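
The point of the column-order trick can be stated compactly: a foreign key is protected from the TM enqueue problem as long as some index starts with the FK columns, in any order. A small Python sketch of that check, using the example tables above (column lists are illustrative):

```python
def fk_covered(fk_cols, index_cols_list):
    """True if some index's leading columns are exactly the FK columns, in any order."""
    want = set(fk_cols)
    return any(set(cols[:len(want)]) == want for cols in index_cols_list)
```

With the example above, fk_covered(['A1', 'A2'], [['A2', 'A1']]) is True: fk_idx counts even though its column order differs from the constraint, which is exactly what Tom's original script misses.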


About Me

I am Timo Raitalaakso. I have been working since 2001 at Solita Oy as a Senior Database Specialist. My main focus is on projects involving the Oracle database. Oracle ACE alumni 2012-2018. In this Rafu on db blog I write about interesting issues that evolve from my interaction with databases. Mainly Oracle.