Thursday, July 16, 2009

More than one way to save blocks

These days, disk capacity is usually not a big issue. If you have at least a decent load on the database, you will hit the IOPS limit much sooner than you run out of disk space.

Well, almost. First, you will still have a lot of inactive data that consumes space but does not require any IOPS. And second, in some applications (like ETL in a DWH) you are bound by throughput. Let me talk about this case.

Don't expect any exceptional insights; this post is just inspired by a real production case and tries to point out that there is always one more option to consider.

The optimal plan for many ETL queries is a lot of full scans with hash joins. And often you read the same table multiple times, to join it in different ways. Such queries benefit if you make your tables smaller - you save on I/O.

(1) In ETL, your source tables are often imported from a different system, and you usually don't need all of their columns. So, first of all - don't load the data you don't need. However, usually you can't just drop the columns - that would change the data model, you would have to update it in the ETL tool, and you would have to redo the work whenever the list of used columns changes.
How to tackle this? Use views, and select NULL for the columns you don't need (see the sketch below). Use FGA (fine-grained auditing), at least on test, to verify you don't access any of those non-loaded columns. (Just beware that things like dbms_stats access all columns.)
(Bonus: depending on the source system, transferring less data may also take less time due to limits of the transfer channel - ODBC, network, etc.)
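A minimal sketch of this approach (the table and column names - src_customers, legacy_address, last_contact_dt - are made up just for illustration):

-- Expose only the columns the ETL really uses; select NULL for the rest,
-- so the loaded copy stays narrow while the column list stays unchanged.
CREATE OR REPLACE VIEW v_src_customers AS
SELECT cust_id,
       cust_name,
       CAST(NULL AS VARCHAR2(200)) AS legacy_address,   -- not used by any mapping
       CAST(NULL AS DATE)          AS last_contact_dt   -- not used by any mapping
FROM   stage.src_customers;

-- On a test environment, an FGA policy can catch any access to the columns
-- we decided not to load (remember that dbms_stats touches all columns).
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'STAGE',
    object_name     => 'SRC_CUSTOMERS',
    policy_name     => 'UNUSED_COLS_ACCESS',
    audit_column    => 'LEGACY_ADDRESS,LAST_CONTACT_DT',
    statement_types => 'SELECT');
END;
/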

(2) As the source data is usually loaded only once and truncated before each load, PCTFREE should be 0, so no space is reserved for updates that will never come.
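For example (stage_orders is a hypothetical staging table used just for illustration):

-- The table is truncated and reloaded on every run, so there is no point
-- in reserving free space in each block for future updates.
CREATE TABLE stage_orders (
  order_id NUMBER,
  order_dt DATE,
  amount   NUMBER
) PCTFREE 0;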

(3) Now, with (1) implemented, the tables contain (a lot of) NULL columns. A NULL column costs just one byte per row, but interestingly, it still makes a difference: trailing NULL columns are not stored at all, so just recreate the tables with the NULL columns at the end. (No proper application depends on column order, right?) On a 1.2GB table, we got a 35% saving just by using (2) and (3) - it's really worth trying.
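A sketch of the recreation, again with made-up names - the point is only the column order:

-- Trailing NULL columns are not stored at all, while a NULL column that is
-- followed by a stored column still costs one length byte per row.
CREATE TABLE stage_customers_reordered PCTFREE 0 AS
SELECT cust_id, cust_name,               -- the columns we actually load
       legacy_address, last_contact_dt   -- always NULL, now trailing
FROM   stage_customers;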

(4) And of course, use the APPEND hint together with the 10g direct-load table compression (COMPRESS in the table definition). Another 50% for us...
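A sketch of the load, continuing the hypothetical stage_orders example (v_src_orders stands for the reduced source view from (1)):

-- COMPRESS only compresses blocks written by direct-path operations,
-- hence the APPEND hint on the insert.
ALTER TABLE stage_orders MOVE PCTFREE 0 COMPRESS;

INSERT /*+ APPEND */ INTO stage_orders
SELECT order_id, order_dt, amount
FROM   v_src_orders;
COMMIT;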

Please note that the only real problem you usually face here is getting the list of used columns - fortunately, most ETL tools (like ODI) can provide it, even if it means reading their repository directly (snp_txt_crossr in ODI). The rest is easy to automate.

Wednesday, July 8, 2009

A lateral view quirk

This quest started with the usual question: why is this query so slow? To put it into context, it was a query loading one DWH table by reading one source table from a legacy system (already loaded into Oracle, so no heterogeneous services were involved at this step) and joining it several times to several tables.
(It's the usual badly-designed legacy system: if flag1 is I, join table T1 by C1, if flag1 is N, join table T1 by C2... 20 times.)

If I simplify the query, we are talking about something like:
SELECT T1.m,
       case
         when T1.h = 'I' then T2_I.n
         when T1.h = 'G' then T2_G.n
         else null
       end
FROM T1
LEFT OUTER JOIN T2 T2_I
  ON (T1.h = 'I' and T1.y = T2_I.c1)
LEFT OUTER JOIN T2 T2_G
  ON (T1.h = 'G' and T1.z = T2_G.c2)

We even know that the query always returns a number of rows identical to the number of rows in T2. However, omitting the T1.h = 'I'/'G' conditions in the join clause would duplicate the rows, so the conditions are necessary there. Of course it's not possible to move the conditions to the WHERE clause, as this would eliminate all rows from the result.

To make the test case even shorter, we can use just this for the demonstration:
SELECT count(*)
FROM T1
LEFT OUTER JOIN T2
ON (T1.h = 'I' and T1.y = T2.c1)

(This query makes almost no business sense now, but the lateral view issue I want to demonstrate is still there.)

The query plan looks like:

---------------------------------------------------------------
|Id|Operation |Name|Rows |Bytes| Cost | Time |
---------------------------------------------------------------
|0 |SELECT STATEMENT | | 1 | 43 | 1804M|999:59:59 |
|1 | SORT AGGREGATE | | 1 | 43 | | |
|2 | NESTED LOOPS OUTER | | 9805M| 392G| 1804M|999:59:59 |
|3 | TABLE ACCESS FULL | T1 | 188K|7899K| 718 | 00:00:09 |
|4 | VIEW | |52124 | | 9593 | 00:01:56 |
|*5| FILTER | | | | | |
|*6| TABLE ACCESS FULL| T2 |52124 | 356K| 9593 | 00:01:56 |
---------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

5 - filter("T1"."H"='I')
6 - filter("T1"."Y"="T2"."C1")


This is awful! A cost of 1804M just for joining two tables (T1: 188K rows, T2: 5M rows). And yes, the execution proved the plan was not good (I did not have the patience to wait many hours (days?) for the query to finish).


However, a colleague suggested modifying the query as follows:

SELECT count(*)
FROM T1
LEFT OUTER JOIN T2
ON (T1.h = nvl('I',T2.c1) and T1.y = T2.c1)

This does not change the result set - the 'I' is never null and thus the NVL is superfluous. However, we get a different execution plan!


-------------------------------------------------------------
|Id|Operation |Name|Rows |Bytes| Cost | Time |
-------------------------------------------------------------
|0 |SELECT STATEMENT | | 1 | 54 | 5409K| 18:01:45 |
|1 | SORT AGGREGATE | | 1 | 54 | | |
|*2| HASH JOIN OUTER | | 9805M| 493G| 5409K| 18:01:45 |
|3 | TABLE ACCESS FULL| T1 | 188K|7899K| 718 | 00:00:09 |
|4 | TABLE ACCESS FULL| T2 | 5212K| 54M| 9585 | 00:01:55 |
-------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("T1"."Y"="T2"."C1"(+) AND
"T1"."H"=NVL('I',"T2"."C1"(+)))

The cost is now 5409K, the operation is a nice hash join, and the query really finishes in a few minutes.

The question now is: WHY?

Well, this is a matter of query optimization and plan generation, so the first one to ask is the CBO itself. So, I enabled the 10053 event for both queries and dived into the two trace files, mainly to see the differences.

Both queries had the main query block initially rewritten as:
SQL:******* UNPARSED QUERY IS *******
SELECT "T1"."Y" "Y","T1"."H" "H",
"from$_subquery$_004"."C1_0" "C1"
FROM "SCOTT"."T1" "T1",
LATERAL( (SELECT "T2"."C1" "C1_0" FROM "SCOTT"."T2" "T2" WHERE "T1"."H"='G' AND "T1"."Y"="T2"."C1"))(+) "from$_subquery$_004"

(The second query, with the added NVL, shows "T1"."H"=NVL('G',"T2"."C1") instead.)

So, for Oracle, it is a lateral (correlated) view. That's not nice, but at this stage of CBO processing it is normal. The CBO will try to get rid of it.

However, only for the NVL case does the CBO trace show:
CVM:   Merging SPJ view SEL$1 (#0) into SEL$2 (#0)

Followed by:
SQL:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(*)"
FROM "SCOTT"."T1" "T1","SCOTT"."T2" "T2"
WHERE "T1"."Y"="T2"."C1"(+)
AND "T1"."H"=NVL('G',"T2"."C1"(+))

Thus, the CBO was able to rewrite it as the old-fashioned (+) outer join; however, it was not able to do so for the non-NVL query. These results are passed to the next stage, and as no constraint-based rewrite or predicate move-around changes the query, they are passed verbatim to the actual plan generation. And understandably, a lateral (correlated) view is not considered for a hash join.

Anyway, if you read the Inside the Oracle Optimizer blog, you already know that this is the classic example of a lateral, non-mergeable view. Still, why did the second query work as we wanted?

Well, the quirk is in the fact that there is no way to write the non-NVL query using the (+) syntax - there is just no place to put the (+) sign in the T1.h = 'I' predicate to change it from a filter into a join predicate. However, artificially adding a column from T2 makes it possible, and the CBO did just that. The CBO internally uses the old Oracle syntax, and thus if you can't rewrite your query using that syntax, neither can the CBO.
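To see it in the old syntax, here is a sketch of the NVL variant of the test-case query, which is essentially what the CBO produced in the trace above:

SELECT count(*)
FROM   T1, T2
WHERE  T1.y = T2.c1 (+)
AND    T1.h = NVL('I', T2.c1 (+));   -- the NVL gives the predicate a T2 column to hang the (+) on

-- Without the NVL, "T1.h = 'I'" references no T2 column, so in the (+)
-- syntax it can only ever be a plain filter, never an outer-join predicate.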

Just a note - the same applies, for example, to the predicate length(t1.q)=10; you can save the day by using length(nvl(t1.q,t2.c1))=10.

Tested on: Windows 64-bit (EM64T), Oracle 10.2.0.4.

Sunday, July 5, 2009

Gentle introduction to optimization

I was just asked to prepare a short, one-hour workshop/presentation about optimization on Oracle. As this topic is so huge, and everyone has already read something about it, I decided to conceive this workshop as an overview of the concepts (starting with database design) and of the tools available.
The .pdf version is thus a kind of checklist - have you read about all of these issues? Have you thought them through when designing your system?
I hope you will find at least one new thing there :-)
The PDF is available for download on my website download area.