10 September 2014

NOFILENAMECHECK Parameter Causes DUPLICATE DATABASE To Ignore DB_CREATE_FILE_DEST?

Last week I was creating a new testing database from a backup of our demo database, both under Oracle 11.2.0.3. I grabbed one of my old scripts to handle the DUPLICATE operation, which looked similar to this:

connect auxiliary /;
run {

        duplicate database to testdb
                backup location '$BACKUPDIR'
                nofilenamecheck;

}

One important difference between the demo database and this new test database: the original demo database lives on a filesystem, while the new database was to go onto ASM on a different host. I copied the syntax from the old script, made sure that db_create_file_dest was set to the ASM diskgroup, and kicked it off. However, the restore failed because RMAN tried to write to the filesystem path used by the original demo database, which didn't exist on this host, instead of to the ASM diskgroup.
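
The title gives away my conclusion: the NOFILENAMECHECK clause carried over from the old script appeared to be the culprit. With it in place, RMAN reused the source database's filenames instead of generating OMF names under db_create_file_dest. A minimal sketch of the corrected script, assuming simply dropping the clause is safe (here the source and auxiliary hosts differ, so filename collisions aren't a concern):

connect auxiliary /;
run {

        # Without NOFILENAMECHECK, the duplicate honors
        # db_create_file_dest and creates OMF files in the ASM diskgroup.
        duplicate database to testdb
                backup location '$BACKUPDIR';

}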

21 August 2014

ORA-14048 When Adding Composite Partition

Hot on the heels of my earlier composite partitioning post, I ran into this confusing issue:

SQL> alter table p_objects
  2          add partition p201410
  3                  values less than (to_date('2014/11/01','yyyy/mm/dd'))
  4          (
  5                  subpartition p201410_spdts values ('DTS')
  6                  , subpartition p201410_spfoo values ('FOO')
  7                  , subpartition p201410_spbar values ('BAR')
  8                  , subpartition p201410_spsys values ('SYS')
  9                  , subpartition p201410_spsysaux values ('SYSAUX')
 10          )
 11  tablespace tbs1
 12  ;
tablespace tbs1
*
ERROR at line 11:
ORA-14048: a partition maintenance operation may not be combined with other
operations

The error doesn't quite make it easy to determine what the problem is. It turns out that the tablespace/storage clause of the ALTER TABLE ... ADD PARTITION statement has to come before the subpartition definitions. Simply moving that part of the statement a few rows up yields success:

SQL> alter table p_objects
  2          add partition p201410
  3                  values less than (to_date('2014/11/01','yyyy/mm/dd'))
  4          tablespace tbs1
  5          (
  6                  subpartition p201410_spdts values ('DTS')
  7                  , subpartition p201410_spfoo values ('FOO')
  8                  , subpartition p201410_spbar values ('BAR')
  9                  , subpartition p201410_spsys values ('SYS')
 10                  , subpartition p201410_spsysaux values ('SYSAUX')
 11          )
 12  ;

Table altered.

Again, this is probably obvious to most of you, but it wasn't as obvious to decipher when I was trying to add a partition with over 6,300 subpartitions.


Adding New Partitions with Custom Subpartition Definition (Range-List)

As part of a project for work I wanted to create a script that would create a new range partition but also pre-create all the list subpartitions. By default the subpartitions would be created based on the subpartition template. However, for various reasons which I won't get into, we don't update or use the subpartition template. Instead, I wanted to define the subpartition list as part of the ALTER TABLE ... ADD PARTITION statement. I assumed this was perfectly acceptable but didn't see any obvious examples in my hasty web searching, so I thought I'd share one myself.

I start by creating my composite range-list partitioned table:

SQL> create table p_objects
  2  tablespace tbs1
  3  partition by range(rdate)
  4  subpartition by list(owner)
  5  subpartition template
  6  (
  7          subpartition spsys values ('SYS')
  8          , subpartition spsysaux values ('SYSAUX')
  9  )
 10  (
 11          partition p201301 values less than (to_date('2013/02/01','YYYY/MM/DD')),
 12          partition p201302 values less than (to_date('2013/03/01','YYYY/MM/DD')),
 13          partition p201303 values less than (to_date('2013/04/01','YYYY/MM/DD')),
 14          partition p201304 values less than (to_date('2013/05/01','YYYY/MM/DD')),
 15          partition p201305 values less than (to_date('2013/06/01','YYYY/MM/DD')),
 16          partition p201306 values less than (to_date('2013/07/01','YYYY/MM/DD')),
 17          partition p201307 values less than (to_date('2013/08/01','YYYY/MM/DD')),
 18          partition p201308 values less than (to_date('2013/09/01','YYYY/MM/DD')),
 19          partition p201309 values less than (to_date('2013/10/01','YYYY/MM/DD')),
 20          partition p201310 values less than (to_date('2013/11/01','YYYY/MM/DD')),
 21          partition p201311 values less than (to_date('2013/12/01','YYYY/MM/DD')),
 22          partition p201312 values less than (to_date('2014/01/01','YYYY/MM/DD'))
 23  )
 24  as select object_id
 25          , owner
 26          , object_name
 27          , object_type
 28          , to_date(trunc(dbms_random.value(
 29                  to_char(to_date('2013/01/01','YYYY/MM/DD'),'J'),
 30                  to_char(to_date('2013/12/31','YYYY/MM/DD'),'J')
 31                  )),'J') rdate
 32  from all_objects
 33  where owner in ('SYS','SYSAUX');

Table created.

This creates the partitions with 2 subpartitions each, per my defined template. For example:

PARTITION_NAME                 SUBPARTITION_NAME
------------------------------ ------------------------------
P201301                        P201301_SPSYS
                               P201301_SPSYSAUX
P201302                        P201302_SPSYS
                               P201302_SPSYSAUX
P201303                        P201303_SPSYS
                               P201303_SPSYSAUX
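
As an aside, listings like the one above can be pulled from the data dictionary; my exact reporting script isn't shown here, but a sketch along these lines would do it:

-- Sketch: list each partition's subpartitions for the demo table.
-- "break on partition_name" makes SQL*Plus suppress the repeated
-- partition name, matching the layout shown above.
break on partition_name
select partition_name, subpartition_name
from user_tab_subpartitions
where table_name = 'P_OBJECTS'
order by partition_name, subpartition_name;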

Next I'll add two more partitions, one with no subpartition definition specified and another with a custom definition:

SQL> alter table p_objects
  2          add partition p201408
  3                  values less than (to_date('2014/09/01','yyyy/mm/dd'));

Table altered.

SQL> alter table p_objects
  2          add partition p201409
  3                  values less than (to_date('2014/10/01','yyyy/mm/dd'))
  4          (
  5                  subpartition p201409_spdts values ('DTS')
  6                  , subpartition p201409_spsys values ('SYS')
  7                  , subpartition p201409_spsysaux values ('SYSAUX')
  8          )
  9  ;

Table altered.

The results:

PARTITION_NAME                 SUBPARTITION_NAME
------------------------------ ------------------------------
P201408                        P201408_SPSYS
                               P201408_SPSYSAUX
P201409                        P201409_SPDTS
                               P201409_SPSYS
                               P201409_SPSYSAUX

You can see that the first statement just used the subpartition template as defined in the original CREATE TABLE statement, while the second used the list I defined in the ALTER TABLE statement. I could also have defined only the SPDTS subpartition, in which case the new partition would have contained just that one subpartition, with no SYS or SYSAUX subpartitions at all.
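
For illustration, that single-subpartition variant would look something like this (hypothetical; I didn't run this one):

-- Hypothetical: only SPDTS is listed, so the new partition would get
-- exactly one subpartition and the template would not be consulted.
alter table p_objects
        add partition p201411
                values less than (to_date('2014/12/01','yyyy/mm/dd'))
        (
                subpartition p201411_spdts values ('DTS')
        );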

Hopefully this post helps a few of you in your searching when faced with a similar task. This was probably obvious to most of you but the mind starts to slip for some of us.

13 July 2014

ASM Startup Fails With ORA-04031 After Adding CPUs

A few weeks ago we upgraded one of our production servers, adding another CPU tray. This brought the number of CPU cores from 80 to 160, and took us from 2TB of RAM to 4TB (just in time for Oracle to announce the in-memory database in 12.1.0.2!).

However, when I went to bring everything back up, ASM refused to start, throwing these errors:

ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","unknown object","KKSSP^550","kglseshtSegs")
ORA-04031: unable to allocate 392 bytes of shared memory ("shared pool","unknown object","sga heap(1,1)","kdlwss")

ORA-04031: unable to allocate 560 bytes of shared memory ("shared pool","unknown object","sga heap(1,1)","KKSSP")
RBAL (ospid: 59427): terminating the instance due to error 4031
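
ORA-04031 at ASM startup right after a large CPU increase generally means the instance's memory sizing no longer fits, since several shared pool structures are sized from cpu_count. A sketch of the usual remedy, connecting to the idle ASM instance as SYSASM; the pfile path and 1536M value are assumptions, not necessarily what we used:

-- Dump the spfile to an editable pfile (works against an idle instance).
create pfile='/tmp/init_asm.ora' from spfile;
-- Edit the pfile to raise memory_target and memory_max_target
-- (e.g. to 1536M), then load it back and start ASM.
create spfile from pfile='/tmp/init_asm.ora';
startup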



03 June 2014

Beware April 2014 PSU and GoldenGate Integrated Capture Users

When the latest quarterly Patch Set Update (PSU) came out from Oracle, we planned to get it in place as soon as reasonably possible, both to stay current with security patches and to pick up what I had assumed were well-tested bug fixes for our 11.2.0.3 installations. However, we were in for an unpleasant surprise.

We first applied the April 2014 PSU on our staging and development database hosts and things ran fine. After two weeks, we put it into production. The next morning we noticed that our GoldenGate extracts were abending with a message like this:

2014-05-13 01:41:44 ERROR OGG-02077 Extract encountered a read error in the asynchronous reader thread and is abending: Error code 600, error message:
ORA-00600: internal error code, arguments: [knlogcPackColList:101], [1], [], [], [],[], [], [], [], [], [], []. 

Obviously the PSU was the only change from the day before, when these GoldenGate extracts had run fine. The error itself seemed to match Bug 16306373 (OGG Lightweight capture fails with ORA-600 [knlogcPackColList:101]), which affects integrated capture extracts, exactly what we were using.
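
As an aside, one quick way to confirm that extracts really are running in integrated capture mode is to check the logmining capture processes they register in the source database (a sketch, not part of the original troubleshooting):

-- Integrated capture extracts register a capture process visible in
-- DBA_CAPTURE; classic extracts do not appear here.
select capture_name, status, start_scn
from dba_capture;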

So we had two questions to answer:

  1. Was it really the PSU, and if so do we need to rollback?
  2. Why didn't we see this in development/staging?


12 May 2014

archive_lag_target Works in SE

TL;DR: The archive_lag_target parameter will force log archiving in Standard Edition.

Just a quick note here that I wanted to share since I didn't see anything directly confirming this when I was searching around.

I have an Oracle 11gR2 Standard Edition (SE) database for which I'm also maintaining a manual standby, since Oracle Data Guard is not available in SE. I created a metric extension in EM12c to alert me if the standby is more than 1 hour behind the primary. However, since this is a very low-activity database, archive logs were not switching even once an hour. Obviously, I could include a command to force a log switch/archive in the script that I use to push archivelogs to the standby. But with Data Guard on Enterprise Edition (EE), one would use the archive_lag_target initialization parameter to set the desired maximum standby lag: Oracle enforces it by forcing a log switch once X seconds have elapsed since the last one, where X is the archive_lag_target value. By default the parameter is set to 0, which disables the feature.

I had assumed that archive_lag_target would only work in EE, but decided to give it a try and was pleasantly surprised to see that it works as intended in SE. So I can set archive_lag_target=900 to cap the gap between log archives at 15 minutes (switches still happen more frequently when database activity warrants them).
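
Setting it is a one-liner, and since the parameter is dynamic no restart is needed (scope=both is my assumption of the sensible choice here):

-- Force a log switch once 900 seconds (15 minutes) have passed
-- since the last one.
alter system set archive_lag_target = 900 scope = both;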

25 April 2014

Don't Fear the EM12c Metric Extensions

A few weeks ago, our customer support team asked us to automate part of their checklist that looks at the number of active sessions in our production database. In EM12c this seemed like a no-brainer with the Average Active Sessions metric, so I added it to my production incident ruleset and went back to another project. Over the next few days we'd get pinged by EM12c, but the support folks would say it shouldn't have alerted. After taking a look, we realized that we should be counting only USER sessions, excluding the BACKGROUND sessions Oracle creates to run the instance (like DBWn and LGWR).

The trouble was that I couldn't find a metric for just the user sessions. I had resigned myself to having a scheduler job or script run the SQL that I wanted and send an email if the count was over my critical threshold. On a whim I put a message out to Twitter, and thankfully my friend Leighton answered. He suggested I look at adding a metric extension, something still foreign to me at the time. I had seen the term in the EM12c interface, but it sounded like some kind of plugin interface. Turns out it's simply another way of saying "user-defined metrics". Honestly, if they were labelled that way I would have started playing with them much sooner (subtle criticism).

So a quick search turned up a great video that showed just how simple it is to create a metric extension based on a SQL query. In just a few minutes I had the metric extension created, tested and published. You can create many different types of metric extensions for all target types, but in my case a simple SQL query against a database instance was all I needed:

select count(*) from v$session
where type='USER' and status='ACTIVE';

I then defined the warning and critical thresholds for the count and it was done! I added it to my ruleset (removing the stock Average Active Sessions metric) and haven't looked back.

Since then I've created a few other metric extensions, for example a standby lag check on a Standard Edition physical standby. My only regret is not taking the time to learn about these sooner. I suggest anyone using EM12c look into them sooner rather than later.