More than one way to save blocks

These days, disk capacity is often not a big issue. If you have at least a decent load on the database, you will hit the IOPS limit much sooner than you run out of disk space.

Well, almost. First, you will still have a lot of inactive data that consumes space but does not require any IOPS. And second, some applications (like ETL in a DWH) are bound by throughput. Let me talk about this case.

Don't expect any exceptional thoughts; this post is just inspired by a real production case and tries to point out that there is always one more option to consider.

The optimal plan for many ETL queries is a lot of full scans with hash joins. And often you read the same table multiple times, to join it in different ways. Such queries benefit if you make your tables smaller - you save on I/O.
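A typical shape of such a query, purely illustrative (table and column names are made up):

-- The same source table read twice, each read a full scan feeding a hash join.
SELECT /*+ FULL(c) FULL(p) USE_HASH(c p) */
       c.order_id, p.order_id AS parent_id
FROM   src_orders c
JOIN   src_orders p ON p.order_id = c.parent_order_id;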

(1) In ETL, your source tables are often imported from a different system, and you don't actually need all of their columns. So, first of all - don't load the data you don't need. However, usually you can't simply drop the columns - that would change the data model, you would have to update it in the ETL tool, and you would face a lot of work whenever the list of used columns changes.
How to tackle this? Use views, and select NULL for the columns you don't need - see the sketch after this point. Use FGA (fine-grained auditing), at least on test, to verify you don't access any of those non-loaded columns. (Just beware that things like dbms_stats access all columns.)
(Bonus: depending on the source system, transferring less data may take less time due to limits of the transfer channel - ODBC, network, etc.)
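A minimal sketch of both ideas, with every object and column name hypothetical. On the source side, a view keeps the original column list but substitutes NULL for the columns you don't load; on the target side, an FGA policy flags anyone who still reads them:

-- Source side: same shape as SRC_ORDERS, but NOTES is never transferred.
CREATE OR REPLACE VIEW src_orders_v AS
SELECT order_id,
       customer_id,
       order_date,
       CAST(NULL AS VARCHAR2(4000)) AS notes  -- not loaded downstream
FROM   src_orders;

-- Target side (test): log any statement that touches the non-loaded column.
BEGIN
  dbms_fga.add_policy(
    object_schema   => 'ETL',
    object_name     => 'SRC_ORDERS',
    policy_name     => 'UNUSED_COL_CHECK',
    audit_column    => 'NOTES',
    statement_types => 'SELECT');
END;
/

-- Hits (including the dbms_stats false positives mentioned above) end up here:
SELECT timestamp, db_user, sql_text
FROM   dba_fga_audit_trail
WHERE  policy_name = 'UNUSED_COL_CHECK';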

(2) As the source data is usually loaded only once and truncated before each load, PCTFREE should be 0, so no space is reserved in each block for updates that will never come.
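For example (table name hypothetical):

-- Reserve no free space for updates; a truncate-and-reload table never needs it.
ALTER TABLE src_orders PCTFREE 0;
-- Only newly formatted blocks are affected, but the truncate before the
-- next load reformats the whole segment anyway.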

(3) Now, with (1) implemented, the tables contain (a lot of) NULL columns. A NULL column normally costs just one length byte, but trailing NULL columns are not stored at all - so just recreate the tables with the NULL columns at the end. (No proper application depends on column order, right?) On a 1.2GB table, we got a 35% saving just by using (2) and (3) - it's really worth trying.
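A sketch of the rebuild, all names hypothetical - a simple CTAS that puts the always-NULL columns last and applies the PCTFREE 0 from point (2) at the same time:

-- Trailing NULL columns occupy no space in the row at all.
CREATE TABLE src_orders_new PCTFREE 0 AS
SELECT order_id, customer_id, order_date,  -- loaded columns first
       notes, fax_number                   -- always-NULL columns last
FROM   src_orders;

DROP TABLE src_orders;
RENAME src_orders_new TO src_orders;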

(4) And of course, use the APPEND hint and the 10g direct-load table compression (COMPRESS in the table definition). Another 50% saved for us...
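Again a hedged sketch with hypothetical names. Basic COMPRESS only compresses blocks written by direct-path operations, which is exactly what the APPEND hint requests:

-- Enable basic (direct-load) compression on the target table.
ALTER TABLE src_orders COMPRESS;

-- Direct-path insert: writes compressed blocks above the high-water mark.
INSERT /*+ APPEND */ INTO src_orders
SELECT * FROM src_orders_stage;
COMMIT;  -- the table cannot be queried in the same transaction after a direct-path insert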

Please note that the only real problem you usually face here is getting the list of used columns - fortunately, most ETL tools (like ODI) can provide it, even if it means accessing their repository directly (snp_txt_crossr in ODI). The rest is easy to automate.
