Oracle transactions in the new world

If the new world of BigData, NoSQL and streaming has sparked your interest, you may have noticed one peculiarity: the lack of proper transactions in these contexts (or of transactions at all!). Yes, durability is retained, but the other properties of ACID (Atomicity, Consistency, Isolation, Durability) leave a lot to be desired.

One might think that in this new world applications are built in such a way that they no longer need transactions, and in some cases this may be true. For example, if a tweet or a Facebook update gets lost, then who cares; we can simply continue on. But there is of course more important data that still requires transaction support, and some NoSQL databases do have limited support for it nowadays. However, this is still far from the full implementation everyone takes for granted in the Oracle database (e.g. you cannot modify arbitrary rows in arbitrary tables in a single transaction). Of course, the huge benefit is that these databases are much easier to scale, as they are not bogged down by the lock/synchronization mechanisms that ensure data consistency.

But recently there seems to be much more interest in marrying the two worlds together; by this I mean the old 'proper' RDBMS (Oracle) world and the new BigData/NoSQL/streaming ('Kids from the Valley') one. So the question follows, working from the old to the new: how do you feed data from a database built on an inherently transactional foundation into one that has no idea about transactions?

Mind you, such interoperability issues are not a new thing... anyone remember that old problem of sending messages from PL/SQL or triggers? In that case any message (or email) was sent as soon as it was requested, but the encapsulating transaction could still be rolled back or retried. This led to messages being sent that were not supposed to be sent, along with messages sent multiple times. The trick there was to use dbms_job in the workflow: this package (unlike the newer dbms_scheduler) just queued the job, and the job coordinator saw it only after the insert into the job queue was committed, i.e. when the whole transaction committed.
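
A minimal sketch of that trick, using a hypothetical ORDERS table and an assumed SEND_ORDER_MAIL procedure: instead of sending the e-mail directly from the trigger, submit a dbms_job; the queued job becomes visible to the job coordinator only together with the rest of the committed transaction, and a rollback discards it as well.

CREATE OR REPLACE TRIGGER orders_notify
AFTER INSERT ON orders
FOR EACH ROW
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  -- send_order_mail is an assumed procedure that does the actual sending;
  -- it runs in a separate session, after (and only if) this transaction commits
  dbms_job.submit(
    job  => l_job,
    what => 'send_order_mail(' || :new.order_id || ');'
  );
  -- no commit here: the job queue insert is part of the current transaction
END;
/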

There are two basic approaches to addressing this issue for a data feed between the systems:
1. You can revert to the 'old and proven' batch processing method (think ETL). Just select (e.g. using Sqoop) the data that has arrived since the last load, and be sure to change your application to provide enough information so that such a query is possible at all (e.g. add last-update timestamp columns); a sketch of such an extract follows below this list.

2. Logical replication or change data capture (CDC). There is an overwhelming trend (and demand) toward near-real-time: people now want and expect data with a latency of just a few seconds. In this approach, changes from the source database are mined and sent to the target as they are happening.
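
To make the first (batch) option concrete, here is a minimal sketch of the query such an extract boils down to; the table and column names are made up, and a tool like Sqoop in its incremental 'lastmodified' mode generates essentially this kind of predicate and tracks the high-water mark for you.

-- incremental batch extract: pick up only rows changed since the last load;
-- relies on the application reliably maintaining LAST_UPDATED on every change
SELECT order_id, status, amount, last_updated
  FROM orders
 WHERE last_updated >  :last_load_time      -- high-water mark from the previous run
   AND last_updated <= :current_load_time;  -- stable upper bound for this run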

So the second option sounds great, nice and easy... except that it's NOT.
The issue is that any change in the database happens as soon as it is initiated by the user/application, but until the transaction is committed you cannot be sure whether the change will persist, and thus whether anyone outside of the database should see it.

The only solution here is to wait for the commit; what varies is how clever you are with what you do until that commit happens. You can simply wait and only then begin parsing the changes, or you can do some or all of the pre-processing up front and just flush the data out when you see the actual commit.

For this pre-processing option, as is often the case, things are more complicated in real life: we have to contend not just with simple commits/rollbacks at the end of the transaction, but also with savepoints. These are used much more often than you would think; for example, every SQL statement implicitly issues a savepoint so that it can roll itself back if it fails. The hurdle is that there is no information in the redo as to when a savepoint was established, or which savepoint a rollback rolls back to.
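
As an illustration (again with a hypothetical ORDERS table), a partial rollback looks like this; a failing INSERT or UPDATE does the same thing implicitly at the statement level:

INSERT INTO orders (order_id, status) VALUES (1, 'NEW');   -- change 1
SAVEPOINT before_update;
UPDATE orders SET status = 'SHIPPED' WHERE order_id = 1;   -- change 2
ROLLBACK TO SAVEPOINT before_update;   -- change 2 must be discarded by the miner...
COMMIT;                                -- ...while change 1 must still be shipped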

In the end, the commit/rollback mechanism can be handled, but at a cost: with pre-processing, a queue of as-yet-uncommitted changes must be maintained somewhere (memory, disk); with the wait-for-commit approach, transactions are shipped only after they have ended, which adds lag, especially for long/large transactions.
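
To make that queue idea concrete, here is a purely illustrative sketch (written in PL/SQL only for convenience; a real mining process is not written in PL/SQL and the 'changes' here are just strings): changes are buffered per transaction id, shipped only on commit, and simply discarded on rollback.

DECLARE
  TYPE change_list_t IS TABLE OF VARCHAR2(4000);                        -- changes of one transaction
  TYPE tx_buffer_t   IS TABLE OF change_list_t INDEX BY VARCHAR2(64);   -- keyed by transaction id
  g_buffer tx_buffer_t;

  PROCEDURE add_change(p_xid IN VARCHAR2, p_change IN VARCHAR2) IS
  BEGIN
    IF NOT g_buffer.EXISTS(p_xid) THEN
      g_buffer(p_xid) := change_list_t();
    END IF;
    g_buffer(p_xid).EXTEND;
    g_buffer(p_xid)(g_buffer(p_xid).COUNT) := p_change;
  END add_change;

  PROCEDURE on_commit(p_xid IN VARCHAR2) IS
  BEGIN
    -- only now is it safe to ship the buffered changes downstream
    IF g_buffer.EXISTS(p_xid) THEN
      FOR i IN 1 .. g_buffer(p_xid).COUNT LOOP
        dbms_output.put_line('ship to target: ' || g_buffer(p_xid)(i));
      END LOOP;
      g_buffer.DELETE(p_xid);
    END IF;
  END on_commit;

  PROCEDURE on_rollback(p_xid IN VARCHAR2) IS
  BEGIN
    g_buffer.DELETE(p_xid);   -- the target never sees this transaction
  END on_rollback;
BEGIN
  add_change('0004.00a.0000123', 'INSERT into ORDERS ...');
  add_change('0004.00a.0000123', 'UPDATE of ORDERS ...');
  on_commit('0004.00a.0000123');   -- both changes are flushed together, only now
END;
/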

A side note: replication in the 'old RDBMS' world can also introduce another layer of complexity. Such logical replication can actually push changes into the target even before they are committed, and ask the target to roll them back if necessary. But due to the issues discussed above, this is actually pretty tricky, and many products don't even try (Streams, Oracle GoldenGate), although others do support it (Dbvisit Replicate).
