Wednesday, December 23, 2015

A few thoughts about OCM 12c upgrade

Yesterday I sat for the 12c OCM upgrade exam, which I mentioned in a few blog posts before. The first step after checking your ID is of course signing the NDA, so you won't find much real information here.

This time I chose Utrecht as the place to take the exam. Not that I have any special preference; I have taken each of the exams in a different place so far. The only requirements were a convenient time and a location defined as 'somewhere in Europe'. But in the end, Utrecht turned out to be a good place. The Oracle NL headquarters are easily accessible, it's a very new building, and the lunch was good :-)
And the city is nice to see.

Regarding the exam, the usual important notes still hold true:

  1. Arrive on time. It's a long day and you will have a lot of things to do.
  2. You will work hard the whole day. Get a good sleep before, be well rested.
  3. Review the exam topics well. Note that they may change over time. For example, there is an update as of January 1, 2016: Flex ASM was added.
  4. Learn how to work with the docs - with no search available. You will need them; nobody can remember all the syntax and all the arcane settings.
  5. Love your command line. "GUI is not available for every segment of the exam." And anyway, it's much faster to do things in sqlplus. And you will struggle for time.

Now I just have to wait for the results... And to any of you who want to take the exam: Good luck!

Monday, December 21, 2015

Don't trust the lying (Data Guard) broker

One of the new 12c features is the "VALIDATE DATABASE" command. According to the documentation it should do many thorough checks and tell you if all is configured well and correctly. However, there is one catch - or to put it a little more bluntly - bug. Or two.
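
For reference, VALIDATE DATABASE is issued from the broker command-line interface; a minimal sketch, using the standby name that appears in the output further below:

DGMGRL> validate database 'CDB5SBY';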

You know that you need standby redo logs for SYNC (or the new FASTSYNC) transport mode. The validate command knows that, too. And you know that you should have one more standby redo log than online redo logs. The validate command seems to know this one as well.
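
Creating the SRLs themselves is simple enough; a minimal sketch, assuming three 200M online redo log groups and OMF in place (hence four SRL groups of matching size, one more than the online groups):

alter database add standby logfile size 200M;
alter database add standby logfile size 200M;
alter database add standby logfile size 200M;
alter database add standby logfile size 200M;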

However, the checks appear to have one flaw: they test whether the threads (and let's talk here about a single instance, so we have only thread #1) have enough standby redo logs (SRLs) assigned. But when you create an SRL with 'alter database add standby logfile', it is not assigned to any thread. In fact, you get 0 as the thread#:

select thread#, sequence# from V$STANDBY_LOG;

THREAD# SEQUENCE#
------- ---------
      0         0
      0         0
      0         0
      0         0
Which is perfectly fine - Oracle waits until an instance actually needs an SRL and only then assigns it. This makes administration easier.

But the guys responsible for VALIDATE DATABASE do not seem to realize this. So if you have just set up your SRLs and run the validate command - just to check that the configuration is OK (e.g. because you want to change the LogXptMode and the protection mode) - you will get a result like this:
Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (CDB5)                  (CDB5SBY)
    1         3                       0                       Insufficient SRLs
    Warning: standby redo logs not configured for thread 1 on CDB5SBY

WTF? Yes, the validate command did not recognize that we have plenty of SRLs; it only sees that none of them have been assigned to a thread yet.

So.. we do a switchover, back and forth, to let both databases touch the SRLs and...

Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (CDB5)                  (CDB5SBY)
    1         3                       2                       Insufficient SRLs

And we still receive a warning - although we created 4 SRLs, Oracle has so far needed only two of them, and the other two remain unassigned. Again, VALIDATE DATABASE is not aware of this and complains.

The moral? Don't just trust the command, especially in the beginning, when your configuration is fresh and still settling down - although that's exactly the time you want to use checks like this.

Sunday, December 13, 2015

UKOUG Tech15 is over, looking forward to Tech16

What a busy week! The UKOUG Tech15 conference kept me busy for four days, postponing all other work and non-work stuff.
As usual, I met many people actually using our products - it's always a slightly strange feeling, and a strong confirmation, to see people trusting their data and apps to something a developer writes :-)
And of course, seeing many old friends again was also very nice. Especially talking to the Gluent guys (http://gluent.com) and seeing what they are up to was very interesting and promising - I hope they succeed in a big way and change the data landscape.

And of course, the Twinkies...

Sunday, December 6, 2015

Oracle transactions in the new world

If the new world of BigData, NoSQL and streaming has sparked your interest, you may have noticed one peculiarity - the lack of proper transactions in these contexts (or of transactions at all!). Yes, durability is retained, but the other properties of ACID (Atomicity, Consistency, Isolation, Durability) leave a lot to be desired.

One might think that in this new world applications are built in such a way that they no longer need them, and in some cases this may be true. For example, if a tweet or a Facebook update gets lost, who cares - we simply carry on. But there is of course more important data that still requires transaction support, and some NoSQL databases do offer limited support for it nowadays. However, this is still far from the full implementation everyone takes for granted in the Oracle database (e.g. you cannot modify arbitrary rows in arbitrary tables in a single transaction). Of course, the huge benefit is that these databases are much easier to scale, as they are not bogged down by the lock/synchronization mechanisms that ensure data consistency.

But recently there seems to be much more interest in marrying the two worlds together; by this I mean the old 'proper' RDBMS (Oracle) world and the new BigData/NoSQL/streaming ('Kids from the Valley') one. So the question follows, working from the old to the new: how do you feed data from a database built on an inherently transactional foundation into a system that has no idea about transactions?

Mind you, such interoperability issues are not a new thing... anyone remember the old problem of sending messages from PL/SQL or triggers? In that case any message (or email) was sent the moment it was requested, but the encapsulating transaction could still be rolled back or retried. This led to messages that were not supposed to be sent, along with messages sent multiple times. The trick there was to use dbms_job in the workflow: this package (unlike the new dbms_scheduler) just queued the job, and the job coordinator saw it only after the insert into the job queue was committed - i.e. when the whole transaction committed.
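
A minimal sketch of that trick, with a hypothetical orders table and send_notification procedure - the job submitted via dbms_job becomes visible to the job coordinator only once the surrounding transaction commits, and disappears with a rollback:

create or replace trigger orders_notify
after insert on orders
for each row
declare
  l_job binary_integer;
begin
  -- dbms_job.submit only enqueues the job; it runs after (and only if)
  -- the triggering transaction commits
  dbms_job.submit(
    job  => l_job,
    what => 'send_notification(' || :new.order_id || ');');
end;
/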

There are two basic approaches to addressing this issue for a data feed between the systems:
1. You can revert to the 'old and proven' batch processing method (think ETL). Just select (e.g. using Sqoop) the data that has arrived since the last load, and be sure to change your application to provide enough information so that such a query is possible at all (e.g. add last-update timestamp columns); see the sketch after this list.

2. Logical replication or change data capture. There is an overwhelming trend (and demand) toward near-real-time, and people now want and expect data with a latency of just a few seconds. In this approach, changes from the source database are mined and sent to the target as they happen.
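
A sketch of the first, batch-style approach, assuming the application maintains a LAST_UPDATED timestamp column (table and column names are hypothetical):

select *
from   orders
where  last_updated >  :last_extract_time
and    last_updated <= :this_extract_time;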

So the second option sounds great - nice and easy – except that it’s NOT...
The issue is that any change in the database happens as soon as the user/application initiates it, but until the transaction is committed you cannot be sure whether the change will persist - and thus whether anyone outside of the database should see it at all.

The only solution here is to wait for the commit, and you can be more or less clever with what you do until the commit happens. You can simply wait for the commit and only then begin parsing the changes; or you can do some/all pre-processing and just flush the data out when you see the actual commit.

For this pre-processing option, as is often the case, things are more complicated in real life - we don't just have to contend with simple commits/rollbacks at the end of the transaction, but also need to handle savepoints. These are used much more often than you might think; for example, any SQL statement implicitly issues a savepoint, so that it can roll itself back if it fails. The hurdle is that there is no information in the redo as to when a savepoint was established, nor which savepoint a rollback rolls back to.
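
For illustration, this is roughly what a savepoint looks like at the SQL level (table and column names hypothetical); the redo records the changes and the partial rollback, but not where the savepoint itself was set:

update orders set status = 'SHIPPED'   where order_id = 1;
savepoint before_cancel;
update orders set status = 'CANCELLED' where order_id = 2;
rollback to savepoint before_cancel;  -- undoes only the second update
commit;                               -- makes the first update permanent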

In the end, the commit/rollback mechanism can be handled either way, but there is a price: with pre-processing, a queue of not-yet-committed changes must be maintained somewhere (memory, disk); with the wait-for-commit approach, transactions are shipped only after they end, which adds to the lag (especially for long/large transactions).

A side note: replication in the ‘old RDBMS’ world can also introduce another layer of complexity. Such logical replication can actually push changes into the target even before they are committed - and ask the target to roll them back if necessary. But due to the issues discussed above, this is actually pretty tricky and many products don't even try (Streams, Oracle GoldenGate), although others support this (Dbvisit Replicate).

Sunday, November 29, 2015

Multitenant and standby: recover from subsetting

In the previous post we learnt how to exclude a PDB (or a datafile) from the standby database recovery. Of course, that might not be the real end goal. We may just want to skip it for now, but have the standby continue to be up-to-date for every other PDB, and eventually include the new PDB as well.

Again, standard Oracle pre-12c DBA knowledge is helpful here. These files are just missing datafiles and thus a backup can be used to restore them. The new 12c features add some quirks to this process, but the base is just sound backup and recovery.

Backup

So let's start with a proper backup:

rman target=/

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Nov 16 12:42:38 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CDB2 (DBID=600824249)

backup database;

Starting backup at 16-NOV-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=193 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
...
... a lot of lines omitted,
... RMAN creates one backupset for each PDB.
...
Finished backup at 16-NOV-15

Starting Control File and SPFILE Autobackup at 16-NOV-15
piece handle=/home/oracle/fra/CDB2/autobackup/2015_11_16/o1_mf_s_895927547_c4mmzw1t_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 16-NOV-15

In order to make use of the backup on the standby side, the standby must know about it. We could either use an RMAN catalog, or simply catalog the backup pieces on the standby - having copied the files there first, of course:
catalog start with '/home/oracle/fra/CDB2/';

...

searching for all files that match the pattern /home/oracle/fra/CDB2/

List of Files Unknown to the Database
=====================================
File Name: /home/oracle/fra/CDB2/205A13D4947F11DCE0530100007F5CA4/backupset/2015_11_16/o1_mf_nnndf_TAG20151116T124303_c4mmvjx4_.bkp
File Name: /home/oracle/fra/CDB2/244AEB6C0961185CE0530100007F6410/backupset/2015_11_16/o1_mf_nnndf_TAG20151116T124303_c4mmwhjd_.bkp
...
Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /home/oracle/fra/CDB2/205A13D4947F11DCE0530100007F5CA4/backupset/2015_11_16/o1_mf_nnndf_TAG20151116T124303_c4mmvjx4_.bkp
File Name: /home/oracle/fra/CDB2/244AEB6C0961185CE0530100007F6410/backupset/2015_11_16/o1_mf_nnndf_TAG20151116T124303_c4mmwhjd_.bkp
...

Offlined datafiles, standby file management auto

So let's restore one of the PDBs:
restore pluggable database 'PDBCOPY3';

Starting restore at 16-NOV-15
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00020 to /oradata2/CDB2SBY/pdbcopy3/system01.dbf
channel ORA_DISK_1: restoring datafile 00021 to /oradata2/CDB2SBY/pdbcopy3/sysaux01.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/fra/CDB2/246FA48B945F1085E0530100007FDD58/backupset/2015_11_16/o1_mf_nnndf_TAG20151116T124303_c4mmyf0t_.bkp
channel ORA_DISK_1: ORA-19870: error while restoring backup piece /home/oracle/fra/CDB2/246FA48B945F1085E0530100007FDD58/backupset/2015_11_16/o1_mf_nnndf_TAG20151116T124303_c4mmyf0t_.bkp
ORA-19504: failed to create file "/oradata2/CDB2SBY/pdbcopy3/sysaux01.dbf"
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 1

failover to previous backup

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 11/16/2015 12:47:19
RMAN-06026: some targets not found - aborting restore
RMAN-06023: no backup or copy of datafile 21 found to restore
RMAN-06023: no backup or copy of datafile 20 found to restore

But wait... no backup of datafile 20 found? How come? We just backed it up and cataloged it here... NO! That's a trap :-) The real error is hidden a few lines above: failed to create file "/oradata2/CDB2SBY/pdbcopy3/sysaux01.dbf". So we need to create the path manually first:
!mkdir /oradata2/CDB2SBY/pdbcopy3

restore pluggable database pdbcopy3;

Starting restore at 16-NOV-15
using channel ORA_DISK_1

skipping datafile 20; already restored to file /oradata2/CDB2SBY/pdbcopy3/system01.dbf
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00021 to /oradata2/CDB2SBY/pdbcopy3/sysaux01.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/fra/CDB2/246FA48B945F1085E0530100007FDD58/backupset/2015_11_16/o1_mf_nnndf_TAG20151116T124303_c4mmyf0t_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/fra/CDB2/246FA48B945F1085E0530100007FDD58/backupset/2015_11_16/o1_mf_nnndf_TAG20151116T124303_c4mmyf0t_.bkp tag=TAG20151116T124303
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 16-NOV-15


recover pluggable database pdbcopy3;

Starting recover at 16-NOV-15
using channel ORA_DISK_1

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 11/16/2015 12:51:00
RMAN-06067: RECOVER DATABASE required with a backup or created control file

So we have restored the files OK, but RMAN cannot recover them by itself; some additional steps are required. The good news is that the standby recovery process will do this for us.
There is just one more step needed first: although the files are now there on the standby server, the database marks them as offline. One can query v$datafile, or just do the usual check of the alert.log, and see the following:
Warning: Datafile 20 (/oradata/CDB2/pdbcopy3/system01.dbf) is offline during full database recovery and will not be recovered
Warning: Datafile 21 (/oradata/CDB2/pdbcopy3/sysaux01.dbf) is offline during full database recovery and will not be recovered
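
The corresponding check from SQL*Plus could look like this (a sketch, using the file numbers from the warnings above):

select file#, status, name from v$datafile where file# in (20, 21);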

So let's finish off with this PDB, by bringing the datafiles online and restarting managed recovery:
alter session set container=pdbcopy3;

Session altered.

alter database datafile 20 online;

Database altered.

alter database datafile 21 online;

Database altered.

alter session set container=cdb$root;

Session altered.

alter database recover managed standby database;

And the standby database will then happily hum along.


Skipped standby

Does the picture change with the new STANDBYS clause introduced in 12.1.0.2? Indeed it does, but not dramatically.
First of all, the files don't get proper names in v$datafile, just as when standby_file_management is set to manual. This means we must set the new file names in the RMAN script and then switch to them:
run {
set newname for datafile 22 to '/oradata2/CDB2SBY/pdbcopy4/system.dbf';
set newname for datafile 23 to '/oradata2/CDB2SBY/pdbcopy4/sysaux.dbf';
restore pluggable database pdbcopy4;
}

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 16-NOV-15
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00022 to /oradata2/CDB2SBY/pdbcopy4/system.dbf
channel ORA_DISK_1: restoring datafile 00023 to /oradata2/CDB2SBY/pdbcopy4/sysaux.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/fra/CDB2/246FA48B94601085E0530100007FDD58/backupset/2015_11_16/o1_mf_nnndf_TAG20151116T124303_c4mmywdq_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/fra/CDB2/246FA48B94601085E0530100007FDD58/backupset/2015_11_16/o1_mf_nnndf_TAG20151116T124303_c4mmywdq_.bkp tag=TAG20151116T124303
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 16-NOV-15

switch datafile 22 to datafilecopy;
datafile 22 switched to datafile copy "/oradata2/CDB2SBY/pdbcopy4/system.dbf"

switch datafile 23 to datafilecopy;
datafile 23 switched to datafile copy "/oradata2/CDB2SBY/pdbcopy4/sysaux.dbf"

The second difference is in how to bring the datafiles online. Instead of simply altering them online, we must first enable recovery of the PDB itself:
alter session set container=pdbcopy4;

Session altered.

alter database datafile 22 online
*
ERROR at line 1:
ORA-65152: cannot bring datafile online

!oerr ora 65152
65152, 00000, "cannot bring datafile online"
// *Cause:   An attempt was made to bring online one or more datafiles
//           belonging to a pluggable database that is disabled for recovery.
// *Action:  Issue the ALTER PLUGGABLE DATABASE ENABLE RECOVERY statement first
//           and retry the operation.
//

alter pluggable database enable recovery;

Pluggable database altered.
alter session set container=cdb$root;

Session altered.

alter database recover managed standby database;

And once again, following these steps, the PDB is now properly recovered by the standby, ready for a switchover or failover.

Friday, November 20, 2015

Multitenant and standby: subsetting

In the previous post we looked at managing new PDBs added to a standby database, by copying the files to the DR server, as required. However, there is another possible approach, and that is to omit the PDB from the standby configuration altogether. There are two ways of achieving this:

1. Do it the old-school way. A long time before 12c arrived on the scene one could offline a datafile on the standby database to remove it. The same trick is used in TSPITR (tablespace point-in-time recovery), so that you don't need to restore and recover the entire database if you are only after some tablespaces.
2. 12.1.0.2 adds the option to automatically exclude the PDB from standby(s). And 12.2 adds the option to be more specific in case of multiple standbys.

For the sake of curiosity I started by setting standby file management to manual again. What I found is that there was very little difference, and the steps to take are exactly the same - it’s just the error message that is slightly different. So let’s look at this example:



create pluggable database  pdbcopy2 as clone using '/home/oracle/unplug_pdb.xml' file_name_convert=('/oradata/CDB2/pdbfrompdb','/oradata/CDB2/pdbcopy2');

So for MANUAL:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-07202: sltln: invalid parameter to sltln.

And for AUTO:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-01274: cannot add data file that was originally created as
'/oradata/CDB2/pdbcopy3/system01.dbf'

Now we can drop the missing datafile.
However, note that:
1. You need to be in the right container, otherwise you will get a misleading error message.
2. The files appear on the standby one-by-one and each of them stops the apply. So you have to drop, restart recovery, drop, restart recovery, rinse, repeat (and switch containers in the process as needed).


alter database datafile 20 offline drop
*
ERROR at line 1:
ORA-01516: nonexistent log file, data file, or temporary file "20"

alter session set container=pdbcopy3;

Session altered.

select file#, name from v$datafile;
FILE# NAME
----- ----------------------------------------
   20 /oradata2/CDB2SBY/pdbcopy3/system01.dbf

alter database datafile 20 offline drop;

Database altered.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database

alter session set container=cdb$root;
Session altered.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-01274: cannot add data file that was originally created as           '/oradata/CDB2/pdbcopy3/sysaux01.dbf'

alter session set container=pdbcopy3;

Session altered.

alter database datafile 21 offline drop;

Database altered.

alter session set container=cdb$root;
Session altered.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;


And now let's try the new 12.1.0.2 feature, specifying the standby behavior directly:
create pluggable database  pdbcopy4 as clone using '/home/oracle/unplug_pdb.xml' file_name_convert=('/oradata/CDB2/pdbfrompdb','/oradata/CDB2/pdbcopy4') standbys=none;

Media Recovery Log /home/oracle/fra/CDB2SBY/archivelog/2015_11_13/o1_mf_1_778_c4d4m8q0_.arc
Recovery created pluggable database PDBCOPY4
File #22 added to control file as 'UNNAMED00022'. Originally created as:
'/oradata/CDB2/pdbcopy4/system01.dbf'
because the pluggable database was created with nostandby
or the tablespace belonging to the pluggable database is
offline.
File #23 added to control file as 'UNNAMED00023'. Originally created as:
'/oradata/CDB2/pdbcopy4/sysaux01.dbf'
because the pluggable database was created with nostandby
or the tablespace belonging to the pluggable database is
offline.

After all this activity, what do we see in v$datafile? The UNNAMED files are from manual standby file management and from the STANDBYS=NONE clause.
select file#, status, name from v$datafile;

FILE# STATUS  NAME
----- ------- -------------------------------------------------
    1 SYSTEM  /oradata2/CDB2SBY/system01.dbf
    2 SYSTEM  /oradata2/CDB2SBY/pdbseed/system01.dbf
    3 ONLINE  /oradata2/CDB2SBY/sysaux01.dbf
    4 ONLINE  /oradata2/CDB2SBY/pdbseed/sysaux01.dbf
    5 ONLINE  /oradata2/CDB2SBY/undotbs01.dbf
    6 ONLINE  /oradata2/CDB2SBY/users01.dbf
    7 SYSTEM  /oradata2/CDB2SBY/pdb1/system01.dbf
    8 ONLINE  /oradata2/CDB2SBY/pdb1/sysaux01.dbf
    9 ONLINE  /oradata2/CDB2SBY/pdb1/users.dbf
   10 SYSTEM  /oradata2/CDB2SBY/pdbfromseeduser/system01.dbf
   11 ONLINE  /oradata2/CDB2SBY/pdbfromseeduser/sysaux01.dbf
   12 SYSTEM  /oradata2/CDB2SBY/pdbfrompdb/system01.dbf
   13 ONLINE  /oradata2/CDB2SBY/pdbfrompdb/sysaux01.dbf
   14 SYSTEM  /oradata2/CDB2SBY/pdbfrompdbcopy/system01.dbf
   15 ONLINE  /oradata2/CDB2SBY/pdbfrompdbcopy/sysaux01.dbf
   16 SYSOFF  /u01/app/oracle/product/12.1.0.2/dbs/UNNAMED00016
   17 RECOVER /u01/app/oracle/product/12.1.0.2/dbs/UNNAMED00017
   18 SYSOFF  /u01/app/oracle/product/12.1.0.2/dbs/UNNAMED00018
   19 RECOVER /u01/app/oracle/product/12.1.0.2/dbs/UNNAMED00019
   20 SYSOFF  /oradata2/CDB2SBY/pdbcopy3/system01.dbf
   21 RECOVER /oradata2/CDB2SBY/pdbcopy3/sysaux01.dbf
   22 SYSOFF  /u01/app/oracle/product/12.1.0.2/dbs/UNNAMED00022
   23 RECOVER /u01/app/oracle/product/12.1.0.2/dbs/UNNAMED00023

We can also see a minor difference in v$pdbs:
SQL> select con_id, name, open_mode, recovery_status from v$pdbs;

CON_ID NAME            OPEN_MODE  RECOVERY
------ --------------- ---------- --------
     2 PDB$SEED         MOUNTED    ENABLED
     3 PDB1             MOUNTED    ENABLED
     4 PDBFROMSEED      MOUNTED    ENABLED
     5 PDBFROMPDB       MOUNTED    ENABLED
     6 PDFROMPDBCOPY    MOUNTED    ENABLED
     7 PDFROMPDBCOPY2   MOUNTED    ENABLED
     8 PDBCOPY2         MOUNTED    ENABLED
     9 PDBCOPY3         MOUNTED    ENABLED
    10 PDBCOPY4         MOUNTED    DISABLED
The last one has recovery disabled, as we used the STANDBYS=NONE clause. The others are being recovered; they just have all their tablespaces offline.

Note that the PDBs are still defined in the standby controlfile, and you can switch your session to them, although you cannot open them, of course:
alter pluggable database pdbcopy3 open
*
ERROR at line 1:
ORA-01147: SYSTEM tablespace file 20 is offline

As with offline datafiles in any other database, you can recover them. But that's a topic for another post. 

Wednesday, November 18, 2015

Multitenant and standby databases

As you probably know, the Oracle 12c multitenant feature consists of a container database (CDB) and one or more pluggable databases (PDBs). Each PDB has a separate set of data files, however, a lot of other components are still shared.
The obvious one is the instance itself - after all, that's a major selling point for this feature in the first place (i.e. consolidation). But even at the database (=disk) level, a lot of things are common:

  • The dictionary is partially shared (linked) 
  • Redo logs contain changes for all PDBs and the CDB 
  • The undo is also shared (however, watch 12.2 for a possible change with this)

The nature of a shared (linked) dictionary and common redo logs implies that a standby (Data Guard) configuration is set up at the CDB level. One source CDB means one target CDB, with all PDBs going to the same single target CDB in a 1:1 relationship.

This problem is not completely new, having been introduced with transportable tablespaces in Oracle 8i – but the CDB is now potentially subject to more data movement than we have been used to before; PDBs can be created, unplugged, plugged, dropped.

In this post let's look at the more common, simple case, where you want the same operations to occur on the standby database as they did on the primary:

  • If you create a new PDB from PDB$SEED, Oracle has all the information it needs and the standby will recover the database just fine.
  • If you create a new PDB as a clone of an existing (local) one, Oracle has the files it needs... but it won't copy them for the standby, unless you are using the separately licensed Active Data Guard option.
  • If you create a new PDB from a different set of files (remote), you need to copy the files to the standby, too.

So let's try this and see:

1. From pdb$seed
First of all, one needs to realize that STANDBY_FILE_MANAGEMENT comes into play here - after all, it's just new datafiles being created:
create pluggable database pdbfromseed admin user pdbfromseeduser identified by oracle FILE_NAME_CONVERT=('/oradata/CDB2/pdbseed','/oradata/CDB2/pdbfromseed');

Recovery created pluggable database PDBFROMSEED
Wed Nov 11 20:48:20 2015
Recovery copied files for tablespace SYSTEM
Recovery successfully copied file /oradata2/CDB2SBY/pdbfromseed/system01.dbf from /oradata2/CDB2SBY/pdbseed/system01.dbf
File #10 added to control file as 'UNNAMED00010' because
the parameter STANDBY_FILE_MANAGEMENT is set to MANUAL
The file should be manually created to continue.
Not surprisingly, you can either manually rename the file or just set the parameter to auto (but that won't fix the current problem, it works for new operations only):
alter database rename  file '/u01/app/oracle/product/12.1.0.2/dbs/UNNAMED00010' to '/oradata2/CDB2SBY/pdbfromseed/system01.dbf';

Database altered.

alter system set standby_file_management=auto;

System altered.
And all is then well:
Recovery copied files for tablespace SYSAUX
Recovery successfully copied file /oradata2/CDB2SBY/pdbfromseed/sysaux01.dbf from /oradata2/CDB2SBY/pdbseed/sysaux01.dbf
Recovery created file /oradata2/CDB2SBY/pdbfromseed/sysaux01.dbf
Successfully added datafile 11 to media recovery
Datafile #11: '/oradata2/CDB2SBY/pdbfromseed/sysaux01.dbf'

2. So let's create a new PDB from an existing one:
create pluggable database pdbfrompdb from pdbfromseed file_name_convert=('/oradata/CDB2/pdbfromseed','/oradata/CDB2/pdbfrompdb'); 
Errors in file /oradata/diag/rdbms/cdb2sby/CDB2SBY/trace/CDB2SBY_pr00_7068.trc:
ORA-01565: error in identifying file '/oradata2/CDB2SBY/pdbfrompdb/system01.dbf'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Recovery was unable to create the file as:
'/oradata2/CDB2SBY/pdbfrompdb/system01.dbf'
Errors with log /home/oracle/fra/CDB2SBY/archivelog/2015_11_11/o1_mf_1_763_c47c77xz_.arc
Wed Nov 11 21:07:27 2015
Media Recovery failed with error 1274
Recovery interrupted!
Recovery stopped due to failure in applying recovery marker (opcode 17.34).
Datafiles are recovered to a consistent state at change 2814907 but controlfile could be ahead of datafiles.
Wed Nov 11 21:07:27 2015
Errors in file /oradata/diag/rdbms/cdb2sby/CDB2SBY/trace/CDB2SBY_pr00_7068.trc:
ORA-00283: recovery session canceled due to errors
ORA-01274: cannot add data file that was originally created as '/oradata/CDB2/pdbfrompdb/system01.dbf'
ORA-283 signalled during: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE...
So we need to copy the files:
oracle@localhost:/oradata2/CDB2SBY$ mkdir pdbfrompdb
oracle@localhost:/oradata2/CDB2SBY$ cp pdbfromseed/* pdbfrompdb/
And after restarting recovery, the standby is up again.

3. A similar thing will also happen when plugging in an unplugged PDB:
create pluggable database  pdbfrompdbcopy as clone using '/home/oracle/unplug_pdb.xml' file_name_convert=('/oradata/CDB2/pdbfrompdb','/oradata/CDB2/pdbfrompdbcopy');
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-01274: cannot add data file that was originally created as
'/oradata/CDB2/pdbfrompdbcopy/system01.dbf'
cp /oradata/CDB2/pdbfrompdb/* pdbfrompdbcopy/
Note that in #2 we could copy the files locally, because the standby already has the files of the PDB being cloned (and ADG can thus do the copy easily, too). In this case, however, it's a PDB the standby has never seen before, so we have to copy the files from the PDB's place of origin.

Next time we will have a look at another option: omitting the files from standby.

Monday, November 16, 2015

Multitenant CDB views trick

In this post, let's look at the new dictionary views added by multitenant: the CDB views and how they actually work.
With the advent of Oracle 12c, and the multitenant database, one more layer of naming has been added. It is no longer enough to specify just the schema and table as a unique identifier, and that new required layer is, of course, the PDB.

Oracle has decided to keep things much the same for a user connecting to a PDB; although a DBA can see some differences (e.g. redo), a user (or an application) can be virtually oblivious to this new multi-tenancy functionality.

One of the consequences is that Oracle decided to have all DBA (plus ALL and USER) views show only the current PDB. So to show data from all PDBs, a new family of views has been introduced: the CDB views. Generally speaking, for any DBA view showing all objects in a PDB, there is an associated CDB view that displays this information globally, across all PDBs.
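
A quick illustration (a sketch, run from the root as a common user): where DBA_OBJECTS shows only the current container, the matching CDB view adds a CON_ID column and spans all containers:

select con_id, count(*) from cdb_objects group by con_id;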

CDB$VIEW

In 12.1.0.1, Oracle introduced a function called CDB$VIEW and the query for CDB_OBJECTS looks like this:

SELECT ..., CON_ID FROM CDB$VIEW("SYS"."DBA_OBJECTS")

This special function works similarly to the GV$ views: it runs a parallel query to visit all the PDBs and then unions the results together. It also adds CON_ID to show which PDB each row comes from.
It actually queries all the containers, including CDB$ROOT and PDB$SEED:
select con_id, count(*) from cdb_data_files group by con_id;

    CON_ID   COUNT(*)
---------- ----------
         2          2
         1          4
         4          3
         3          5

The function is not documented, and if you try using it anyway, you will run into strange behavior. If you use a local (non-common) user, you can end up killing the server process:

select * from cdb$view("SCOTT"."EMP")
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel

Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0xA] [PC:0xB83E308, kglhdgn()+120] [flags: 0x0, count: 1
kglhdgn()+120        signal   __sighandler()       7F5B34766C40 ?
kglrdti()+238        call     kglhdgn()            7F5B34766C40 ?
qcdlgbo()+5243       call     kglrdti()            7F5B34766C40 ?

And if you use a common user (as you should), you run into another problem: the table must exist in all containers, including PDB$SEED.
So if it is not in CDB$ROOT, the query fails immediately:
select * from cdb$view("SYSTEM"."TEST1")
 *
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-00942: table or view does not exist

But if you create the table in the root, it is still missing from PDB$SEED:
select * from cdb$view("SYSTEM"."TEST1")
*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P001
ORA-00942: table or view does not exist

And to modify PDB$SEED, you have to open it read-write... which is scary and not something I would recommend!:
alter session set "_oracle_script"=TRUE;

alter pluggable database pdb$seed close immediate;
alter pluggable database pdb$seed open read write;
...
alter pluggable database pdb$seed close;
alter pluggable database pdb$seed open read only;
alter session set "_oracle_script"=FALSE;

Fortunately, Oracle is clever enough to run the PX slaves only for the containers that are really needed - so by specifying CON_ID, you can get around the PDB$SEED issue.

select * from cdb$view("SYSTEM"."TEST1") where con_id in (3,4);
D CON_ID
- ----------
X          3
X          4

However, you still need the CDB$ROOT copy, so that Oracle knows the table definition. And, of course, the table definition must be the same in all PDBs, otherwise you get errors about missing columns - or something more obscure.
select * from cdb$view("SYSTEM"."TEST1") where con_id in (4)
*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P000
ORA-00904: "DUMMY": invalid identifier

CONTAINERS

12.1.0.2 changed this functionality, so that it now uses a CONTAINERS() clause:
SELECT ..., CON_ID FROM CONTAINERS("SYS"."DBA_OBJECTS")

This is very similar to the CDB$VIEW functionality, but the biggest 'improvement' is that it is now a documented and supported option, which you can safely use on your own tables. Note, though, that there are still some quirks regarding ownership: the table must be owned by the same common (=CDB-level) user.

Also noteworthy is that the clause now excludes PDB$SEED by default. While this is usually a good thing (you don't want to bother with tables nobody can actually use), it can sting you when querying CDB_DATA_FILES: it no longer shows all the files of the database. There is a parameter that controls this behavior, EXCLUDE_SEED_CDB_VIEW:

select con_id, count(*) from cdb_data_files group by con_id;

    CON_ID   COUNT(*)
---------- ----------
         1          4
         3          3
         4          3
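
If you do want the seed included, the parameter can be flipped back; a sketch (assuming it may be changed at session level):

alter session set exclude_seed_cdb_view = false;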

As for the other quirks discussed above, not much has changed. Using a local (non-common) user is still prohibited, but at least there is a proper error message now:

select * from containers("SCOTT"."EMP")
                                 *
ERROR at line 1:
ORA-00942: table or view does not exist

And because it does not query PDB$SEED, there is no need for a CON_ID predicate to exclude it from the query.

Sunday, November 15, 2015

Update on my sessions for download

This year, I presented at Collaborate in April, I recently returned from Open World where I also had one presentation, and in a week or so I am to present at DOAG, too.

So let me recap all the links for these presentations:

  • What a DBA Should Know or How to Know Your Way Around Database (unique and non-unique IDs in the data dictionary) (DOAG 2015, presentation PDF, whitepaper PDF files for download, 0.6MB), 2015
  • Internals of the Oracle Database 12c Multitenant Architecture (Oracle OpenWorld 2014, Collaborate 2015, UKOUG 2015 updated for Oracle Open World 2015, PDF file for download, 0.8MB), 2015

And just to recap, 2014 presentations:
  • Evolution of Oracle Database Redo Logs Through Versions (Oracle OpenWorld 2014, NZOUG 2014, PDF file for download, 1.1MB), 2014
  • Life of a transaction (NZOUG 2014, PDF file for download, 0.8MB), 2014

Saturday, November 7, 2015

OCM 12c upgrade exam available

I had not noticed it before, but the OCM 12c upgrade exam (that is, the upgrade from 11g) is finally available. Oracle University now has a proper page for it.
The pricing is the same as for the 11g exam, and the format seems to be the same, too: one day.

True to Oracle tradition, there is another page for this event, too. The second page has fewer details, but it has a link to the OCM exam schedule.

This also shows that the full 2-day 12c OCM is not available yet.

So I guess now I have to dust off my rusty DBA skills, update them with the fancy new 12c stuff, and hopefully in February I will be ready for an attempt in Munich.