
Showing posts from 2009

ORA-27048: skgfifi: file header information is invalid

I was asked to analyze a situation in which an attempt to recover an 11g standby database resulted in a bunch of "ORA-27048: skgfifi: file header information is invalid" errors.

I tried to reproduce the error on my test system, using different versions (EE, SE, 11.1.0.6, 11.1.0.7), but to no avail. Fortunately, I finally got access to the failing system:

SQL> recover standby database;
ORA-00279: change 9614132 generated at 11/27/2009 17:59:06 needed for thread 1
ORA-00289: suggestion :
/u01/flash_recovery_area/T1/archivelog/2009_11_27/o1_mf_1_208_%u_.arc
ORA-27048: skgfifi: file header information is invalid
ORA-27048: skgfifi: file header information is invalid
ORA-27048: skgfifi: file header information is invalid
ORA-27048: skgfifi: file header information is invalid
ORA-27048: skgfifi: file header information is invalid
ORA-27048: skgfifi: file header information is invalid
ORA-00280: change 9614132 for thread 1 is in sequence #208


Interestingly, nothing interesting is written to alert.log n…
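For what it's worth, recovery can usually be pushed past an unresolvable %u suggestion by typing the actual archived log file name at the recovery prompt instead of accepting the suggestion (the unique string in the file name below is hypothetical - yours will have the real value in place of %u):

```
SQL> recover standby database;
...
ORA-00280: change 9614132 for thread 1 is in sequence #208
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u01/flash_recovery_area/T1/archivelog/2009_11_27/o1_mf_1_208_5mxvq8o3_.arc
```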

OCM 11g

In the end, I learned one thing at OOW I did not expect to: at the OCP Lounge (where, contrary to promises, no networking took place), I was told new info regarding the 11g OCM exam.
While it comes from a supposedly trustworthy source, don't take the following as 100% certain; still, it's worth knowing that:
- the 11g OCM Upgrade exam is scheduled to start late November this year, and will be delivered on 11gR1.
- the full 2-day 11g OCM exam has no schedule yet, and will be delivered on 11gR2 from its start.

The only trouble I see is attendance - with the 10g OCM, every local Oracle office would cancel the exam if only one person signed up for it - two people was the minimum. With the upgrade, how many people will sign up?

I plan to take it quite soon after it becomes available, somewhere in Europe (the UK usually has the densest exam schedule, while countries like the Czech Republic, Slovakia, or Italy have lower prices for the exam).
Drop me a line if you would be interested in taking the exam, so we can force s…

OOW 2009 experience

As I was at Oracle Open World last week, you would expect me to post a bunch of blog entries, right? Well, no... first of all, it was already covered by many others, closer to the real time of the events. You can read about perhaps the most interesting event I attended in the Pythian OOW09 Diaries.

Still, let me emphasize one thing - as a first-timer to OOW I realized that the sessions held there are not so important after all. OK, select some of them, but reserve enough time for the Unconference, for the OTN Lounge, and to meet other fellows. You will catch up with the sessions using OOW On-demand (and as I remember, the PDFs are published later for the general public) - you will have to do it anyway, as you can't attend everything you would like.

More than one way to save blocks

These days, disk capacity is often not a big issue. If you have at least a decent load on the database, you will hit the IOPS limit much sooner than you run out of disk space.

Well, almost. First, you will still have a lot of inactive data that consumes the space but does not require any IOPS. And second, in some applications (like ETL in DWH) you are bound by throughput. Let me talk about this case.

Don't expect any exceptional thoughts; this post is just inspired by a real production case and tries to point out that there is always one more way to consider.

The optimal plan for many ETL queries is a lot of full scans with hash joins. And often, you read one table multiple times, to join it in different ways. Such queries benefit if you make your tables smaller - you save on I/O.

(1) In ETL, your source tables are often imported from a different system, and you actually don't need all the columns from those tables. So, first of all - don't load the data you don't need. However,…
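As a sketch of point (1) - the table, column, and database link names here are made up - the pruning can be as simple as listing only the needed columns when staging the data:

```sql
-- Hypothetical example: stage only the columns the ETL actually uses,
-- instead of copying the wide legacy table 1:1.
CREATE TABLE stg_orders AS
SELECT order_id, customer_id, amount
FROM   orders@legacy_link;
```

A narrower table means fewer blocks, so every full scan of it - and there may be several per query - reads less from disk.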

A lateral view quirk

This quest started with the usual question: why is this query so slow? To put it in context, it was a query loading one DWH table by reading one source table from a legacy system (already loaded into Oracle, so no heterogeneous services were involved at this step) and joining it several times to several tables.
(It's the usual badly-designed legacy system: if flag1 is I, join table T1 by C1, if flag1 is N, join table T1 by C2... 20 times.)

If I simplify the query, we are talking about something like:

SELECT T1.m,
       CASE
         WHEN T1.h = 'I' THEN T2_I.n
         WHEN T1.h = 'G' THEN T2_G.n
         ELSE NULL
       END
FROM T1
LEFT OUTER JOIN T2 T2_I
  ON (T1.h = 'I' AND T1.y = T2_I.c1)
LEFT OUTER JOIN T2 T2_G
  ON (T1.h = 'G' AND T1.z = T2_G.c2)

We even know that the query always returns a number of rows identical to the number of rows in T1. However, omitting the T1.h = 'I'/'G' conditions in the join clauses would duplicate the rows, so the conditions are necessary there. Of course i…
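Just for illustration (a sketch of an alternative formulation, not the fix discussed in this post): since T1.h selects at most one of the two branches per row, the two outer joins could in principle be collapsed into a single one:

```sql
SELECT T1.m,
       T2.n
FROM T1
LEFT OUTER JOIN T2
  ON (   (T1.h = 'I' AND T1.y = T2.c1)
      OR (T1.h = 'G' AND T1.z = T2.c2))
```

Note, however, that an OR-ed join condition generally rules out a hash join, which may well be why such queries are written with one join per branch in the first place.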

Gentle introduction to optimization

I was just asked to prepare a short, one-hour workshop/presentation about optimization on Oracle. As this topic is so huge, and everyone has already read something about it, I decided to structure this workshop as an overview of the concepts (starting with database design) and the tools available.
The .pdf version is thus a kind of checklist - have you read about all of these issues? Have you thought them through when designing your system?
I hope you will find at least one new thing there:-)
The PDF is available for download on my website download area.