Tuesday, February 9, 2016

How many columns in a query

Everybody knows that the limit on the number of columns in an Oracle table is 1000. It is actually a limit on all columns in the table, including internal ones, virtual columns, columns that are unused but not yet dropped, and so on.

But what is the limit for a query?

Let's start with a simple table, called many_columns. It has 1000 columns, all NUMBERs, to make things easy. Columns are named COLUMN_0001 to COLUMN_1000.
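
The table itself is easiest to create with generated DDL. Here is a minimal sketch of how such a table can be built (the original may well have been created differently; a similar block with 10 columns builds the many_columns2 table used later):

-- Sketch: generate a table with columns COLUMN_0001 .. COLUMN_1000, all NUMBER
declare
  l_sql varchar2(32767) := 'create table many_columns (';
begin
  for i in 1 .. 1000 loop
    l_sql := l_sql || 'COLUMN_' || to_char(i, 'FM0999') || ' NUMBER';
    if i < 1000 then
      l_sql := l_sql || ', ';
    end if;
  end loop;
  l_sql := l_sql || ')';
  execute immediate l_sql;
end;
/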

And I insert 1 row into the table:

insert into many_columns(COLUMN_0001) values (1);
commit;

So what happens with an innocent query?

select m.*, n.* from many_columns m, many_columns n;

Well, nothing special - SQL*Plus is happy to return 2000 columns.

Obviously, there must be an upper limit, right? At the very most, OCI specifies the column count as a ub2, i.e. a maximum of 65535.
However, SQL*Plus complains much sooner: the limit seems to be 8150. I added one more table - many_columns2, with just ten columns. The first query to go over the limit, with 8151 columns, fails with:

select
m01.*,
m02.*,
m03.*,
m04.*,
m05.*,
m06.*,
m07.*,
m08.*,
m10.*,
m11.*,
m12.*,
m13.*,
m14.*,
m15.*,
m16.*,
m17.*,
m18.*,
m19.*,
m20.*,
m21.*,
m22.*,
m23.*,
m24.*,
m25.*,
dummy
from
many_columns m01,
many_columns m02,
many_columns m03,
many_columns m04,
many_columns m05,
many_columns m06,
many_columns m07,
many_columns m08,
many_columns2 m10,
many_columns2 m11,
many_columns2 m12,
many_columns2 m13,
many_columns2 m14,
many_columns2 m15,
many_columns2 m16,
many_columns2 m17,
many_columns2 m18,
many_columns2 m19,
many_columns2 m20,
many_columns2 m21,
many_columns2 m22,
many_columns2 m23,
many_columns2 m24,
many_columns2 m25,
dual;

select
*
ERROR at line 1:
ORA-00913: too many values


However, in more complex situations, Oracle will complain much sooner:

select *
from   many_columns
right outer join (select count(*) c, count(*) c2 from dual) on (c=column_0001);
ERROR at line 2:
ORA-01792: maximum number of columns in a table or view is 1000

However, this is version dependent: the above was on 12.1.0.2. With the same test and the same tables on my 11.2.0.4 environment, Oracle does not complain about this.

Tuesday, February 2, 2016

Docker machine - wonderful idea, too many bugs?

When doing various experiments with docker, I painfully realized that btrfs leaves a lot to be desired.

Wonderful idea, terrible user experience. First of all, df lies, and you are supposed to run btrfs balance often. Maybe it's because of the way docker uses it - it creates a ton of large images.
Eventually you touch all the chunks and rebalance stops working completely. Then you desperately delete things, hoping to free a chunk and let rebalance get things back in order. Or not - and you end up nuking the server and reinstalling.
Or perhaps you end up crashing the server, and the btrfs filesystem won't mount anymore...

So after going through 5 servers (OL7.1 in VBox), I moved onto docker-machine. Wonderful idea - and it does not use btrfs, yay!

However, it also has its bugs... and pretty ugly ones. First of all, the latest stable boot2docker 1.9 has a kernel bug that causes Java processes to become zombies, so the docker container won't finish. See https://github.com/docker/docker/issues/18180 . In my case, it means that the Oracle database software installation never finishes.

Ok, the link says it's fixed in the upcoming 1.10 image. And indeed it is - and it's very easy to switch to it: just add --virtualbox-boot2docker-url=https://github.com/boot2docker/boot2docker/releases/download/v1.10.0-rc2-b/boot2docker.iso or similar to docker-machine create. Oracle then installs fine.
However, another bug emerges: nobody can ptrace a process. That includes gdb - it cannot attach to a running process, which makes it completely useless.

Attaching to process 24

ptrace: Operation not permitted.

Let's hope this is fixed soon...
(See my post at https://forums.docker.com/t/boot2docker-mac-os-x-10-0-failing-ptrace-gdb/6005 )

Thursday, January 21, 2016

Docker: Handling multiple copies of the same database/container

Inspired by Frits Hoogland's excellent article on Oracle running in Docker, I started building a lot of Oracle containers. It's nice to have multiple different Oracle versions available at your fingertips for research, product testing and so on.

However, one thing annoys me with Docker: if you want any usable IPC, you need to use --ipc=host. This means that all the containers share the same IPC namespace and, furthermore, when a container exits it sometimes does not clean up its IPC entries.

As you probably know, Oracle uses IPC for the SGA shared memory and for semaphore sets, and it identifies which of them belong to which instance by combining the SID and ORACLE_HOME.

This in turn means that you cannot run two databases with the same SID and ORACLE_HOME at the same time... which is usually fine, but not so with Docker and --ipc=host. In this case we do want to run multiple containers built off the same image, or perhaps have multiple similar images with the same ORACLE_HOME, differing in minor details only, such as patchset level.

Fortunately it is actually pretty easy to change the ORACLE_SID, without altering the name of the database. The only thing you really need to change is the name of the spfile (or you can specify the name explicitly when starting the database). You should also change the name of the password file, if you use one, and add an entry to /etc/oratab for convenience.

This has to happen when the container is started, not in the image. And you also have to decide how you handle container stop/start: do you want to generate a new name, or do you remember the new names? (Because, as you know, the start scripts need the SID to start up the database.)

I decided to go with the first approach, generating a new name on every start. And I just copy the files instead of renaming them, so that the original name is always there and the copy commands always find it, even when executed repeatedly.

export OLD_SID=SRC
# generate a random 8-character SID for this container start
export NEW_SID=`perl -e 'my @c=("A".."Z","a".."z","0".."9");my $s; $s.=$c[rand @c] for 1..8;print $s;'`
export ORACLE_SID=$OLD_SID
export ORAENV_ASK=NO
. oraenv #get ORACLE_HOME
cd $ORACLE_HOME/dbs
cp spfile$OLD_SID.ora spfile$NEW_SID.ora   # spfile for the new SID
cp orapw$OLD_SID orapw$NEW_SID             # password file for the new SID (no .ora suffix)
echo "$NEW_SID:$ORACLE_HOME:N" >> /etc/oratab
echo "Generated: $NEW_SID:$ORACLE_HOME:N"
export ORACLE_SID=$NEW_SID
. oraenv
cd -

You can also see that the names of some files will change; for example, the alert log changed from diag/rdbms/src/SRC/trace/alert_SRC.log to diag/rdbms/src/081b59ce/trace/alert_081b59ce.log.

So, to conclude, note that the purpose of this script is to have a quick and easy way to spin up multiple containers - and it leaves much room for improvement. There are other possibilities, such as statically registering the new SIDs in listener.ora so you can connect to start the instances without knowing the SID, or writing the new SIDs to disk and using them on container restart.

Wednesday, December 23, 2015

A few thoughts about OCM 12c upgrade

Yesterday I sat for the 12c OCM upgrade exam, which I have mentioned in a few blog posts before. The first step after checking your ID is of course signing the NDA, and thus you won't find much real information here.

This time I chose Utrecht as the place to take the exam. Not that I have any special preference - I have taken each of the exams in a different place so far. The only requirements were a convenient time and a location defined as 'somewhere in Europe'. But in the end, Utrecht turned out to be a good place. The Oracle NL headquarters are easily accessible, it's a very new building, and the lunch was good:-)
And the city is nice to see.

Regarding the exam, the usual important notes still hold true:

  1. Arrive on time. It's a long day and you will have a lot of things to do.
  2. You will work hard the whole day. Get a good sleep before, be well rested.
  3. Review the exam topics well. Note that they may have changed over time. There is, for example, an update as of January 1, 2016: Flex ASM was added.
  4. Learn how to work with the docs - with no search available. You will need the docs, nobody can remember all the syntax and all the arcane settings.
  5. Love your command line. "GUI is not available for every segment of the exam." And anyway, it's much faster to do things in sqlplus. And you will struggle for time.
Now I just have to wait for the results... And for any of you who wants to take the exam: Good luck!

Monday, December 21, 2015

Don't trust the lying (Data Guard) broker

One of the new 12c features is the "VALIDATE DATABASE" command. According to the documentation, it should do many thorough checks and tell you if everything is configured well and correctly. However, there is one catch - or to put it a little more bluntly, a bug. Or two.

You know that you need standby redo logs for SYNC (or the new FASTSYNC) transport mode. The validate command knows that, too. And you know that you should have one more standby redo log than online redo logs. The validate command seems to know this one as well.

However, the checks appear to have one flaw: they test whether the threads (and let's talk about a single instance here, so we have only thread #1) have enough standby redo logs (SRLs) assigned. But when you create SRLs with 'alter database add standby logfile', they are not assigned to any thread. In fact, you get 0 as the thread#:

select thread#, sequence# from V$STANDBY_LOG;

THREAD# SEQUENCE#
------- ---------
      0         0
      0         0
      0         0
      0         0
Which is perfectly fine - Oracle waits until the instance actually needs an SRL and only then assigns it. This makes administration easier.
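
For reference, here is roughly how SRLs like these get created - just a sketch: the 200M size is illustrative and should match the online redo log size, the file-name-less form assumes OMF (db_create_file_dest) is set, and four groups satisfy the 'one more SRL than ORLs' rule for three online groups:

-- Note: no THREAD clause, so the new groups start out unassigned (thread# 0)
alter database add standby logfile size 200m;
alter database add standby logfile size 200m;
alter database add standby logfile size 200m;
alter database add standby logfile size 200m;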

But the guys responsible for VALIDATE DATABASE do not seem to realize this. So if you have just set up your SRLs and run the validate command - just to see if the config is all ok (e.g. because you want to change the LogXptMode and the protection mode) - then you will get a result like this:
Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (CDB5)                  (CDB5SBY)
    1         3                       0                       Insufficient SRLs
    Warning: standby redo logs not configured for thread 1 on CDB5SBY

WTF? Yes, the validate command did not understand that we have plenty of SRLs - they just have not yet been assigned to any thread.

So.. we do a switchover, back and forth, to let both databases touch the SRLs and...

Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
              (CDB5)                  (CDB5SBY)
    1         3                       2                       Insufficient SRLs

And we still receive a warning - although we have created 4 SRLs, Oracle has so far required only two of them, and the other two are currently unassigned. Again, VALIDATE DATABASE is not aware of this and complains.
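
To check this yourself on the standby, a query along these lines works (output omitted, as it depends on which groups the instance has picked up so far):

-- Which standby redo log groups have been assigned to a thread so far?
select group#, thread#, sequence#, status
from   v$standby_log
order  by thread#, group#;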

The moral? Don't just trust the command, especially in the beginning, when your configuration is fresh and still settling down - although that's exactly the time you want to use checks like this.

Sunday, December 13, 2015

UKOUG Tech15 is over, looking forward to Tech16

What a busy week! The UKOUG Tech15 conference kept me busy for four days, postponing any other work and non-work stuff.
As usual, I met many people actually using our products - it's always a bit of a strange feeling, and a strong confirmation, to see people trusting their data and apps to something a developer writes:-)
And of course, seeing many old friends again was also very nice. Especially talking to the Gluent guys (http://gluent.com) and seeing what they are up to was very interesting and promising - I hope they succeed in a big way and change the data landscape.

And of course, the Twinkies...

Sunday, December 6, 2015

Oracle transactions in the new world

If the new world of BigData, NoSQL and streaming has sparked your interest, you may have noticed one peculiarity - the lack of proper transactions in these contexts (or of transactions at all!). Yes, durability is retained, but the other properties of ACID (Atomicity, Consistency, Isolation, Durability) leave a lot to be desired.

One might think that in this new world applications are perhaps built in such a way that they no longer need them, and in some cases this may be true. For example, if a tweet or an update to Facebook gets lost, then who cares - we can simply continue on. But there is of course more important data that still requires transaction support, and some NoSQL databases have limited support for this nowadays. However, this is still far from the full implementation that everyone takes for granted in the Oracle database (e.g. you cannot modify arbitrary rows in arbitrary tables in a single transaction). Of course, the huge benefit is that these databases are much easier to scale, as they are not bogged down by the lock/synchronization mechanisms that ensure data consistency.

But recently there seems to be much more interest in marrying these worlds together; by this I mean the old 'proper' RDBMS (Oracle) world and the new BigData/NoSQL/streaming ('Kids from the Valley') one. So the question follows, working from the old to the new: how do you feed data from a database built on an inherently transactional foundation into one that has no idea about transactions?

Mind you, such interoperability issues are not a new thing... anyone remember that old problem of sending messages from PL/SQL or triggers? In that case any message (or email) was sent when requested, but the encapsulating transaction could be rolled back or tried again. This led to messages that were not supposed to be sent, along with messages sent multiple times. The trick there was to use dbms_job in the workflow. This package (unlike the newer dbms_scheduler) just queues the job, and the job coordinator sees it only after the insert into the job queue is committed - i.e. when the whole transaction commits.
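
To make the dbms_job trick concrete, here is a minimal sketch; send_mail is a hypothetical procedure standing in for whatever actually delivers the message:

-- The job becomes visible to the job coordinator only after COMMIT,
-- so the message is sent only if the whole transaction commits.
declare
  l_job binary_integer;
begin
  -- ... the application DML, all part of one transaction ...
  dbms_job.submit(
    job  => l_job,
    what => 'send_mail(''order confirmed'');'  -- hypothetical procedure
  );
  commit;  -- now the job is visible and the message gets sent
end;
/

If the surrounding transaction rolls back instead, the submitted job disappears with it and nothing is sent.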

There are two basic approaches to addressing this issue for a data feed between the systems:
1. You can revert to the 'old and proven' batch processing method (think ETL). Just select (e.g. using Sqoop) the data that has arrived since the last load, and be sure to change your application to provide enough information so that such a query is possible at all (e.g. add last-update timestamp columns); see the sketch after this list.

2. Logical replication or change data capture. There is an overwhelming trend (and demand) toward near real time, and people now want and expect data with a latency of a few seconds. In this approach, changes from the source database are mined and sent to the target as they happen.
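
As mentioned in option 1, a minimal sketch of such an incremental extract follows; the ORDERS table, the LAST_UPDATE_TS column and the bind variable are made up for illustration:

-- Pull only the rows changed since the previous batch run;
-- :last_extract_ts is the high-water mark saved by the previous load.
select *
from   orders
where  last_update_ts > :last_extract_ts;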

So the second option sounds great - nice and easy - except that it's NOT...
The issue is that any change in the database happens as it is initiated by the user/application, but until the transaction is committed you cannot be sure if the change will be persistent, and thus whether anyone outside of the database should see it.

The only solution here is to wait for the commit, and you can be more or less clever with what you do until the commit happens. You can simply wait for the commit and only then begin parsing the changes; or you can do some/all pre-processing and just flush the data out when you see the actual commit.

For this pre-processing option, as is often the case, things are actually more complicated in real life - we don't have to contend just with simple commits/rollbacks at the end of the transaction, but also need to handle savepoints. These are used much more often than you would think; for example, any SQL statement implicitly issues a savepoint, so that it can roll itself back if it fails. The hurdle is that there is no information in the redo as to when a savepoint was established, nor which savepoint a rollback rolls back to.
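
A quick illustration of the implicit statement-level savepoint (the table is just an example):

create table t (id number primary key);

insert into t values (1);   -- succeeds
insert into t values (1);   -- fails with ORA-00001 (unique constraint violated)

-- The failed statement was rolled back to its own implicit savepoint,
-- but the first insert is still part of the open, uncommitted transaction:
select count(*) from t;     -- returns 1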

In the end, things turn out well with the commit/rollback mechanism, except that with pre-processing a queue of as-yet-uncommitted changes must be maintained somewhere (memory, disk), while with the wait-for-commit approach transactions are shipped only after they have ended, adding to the lag (especially for long/large transactions).

A side note: replication in the ‘old RDBMS’ world can also introduce another layer of complexity. Such logical replication can actually push changes into the target even before they are committed - and ask the target to roll them back if necessary. But due to the issues discussed above, this is actually pretty tricky and many products don't even try (Streams, Oracle GoldenGate), although others support this (Dbvisit Replicate).