
Recovery Continuity of your Oracle Database

"Recovery Continuity" should be a critical part of your Oracle Database support plan.
As multitenant Oracle Databases become the standard for database implementations, you need to ensure that you maintain your recovery window even as your pluggable database moves around your environment.

Above is the recommended practice we have all been hearing about to make upgrades of your Oracle Database easier: unplug from your current CDB (CDBPROD122) and plug into a new CDB (CDB19C) that has the new release.  What you need to think about, however, is how you are going to ensure that you can recover your pluggable database to any point in time, all the way through this migration, without a huge amount of downtime.

This is where preplugin backups, and some planning, come into play.
You can find out more about preplugin backups with some of the links below.
Let's take a look at what I am doing for my pluggable database PDBDWPROD before I migrate it from OLDCDB to NEWCDB.

Pre-unplug


In the picture PDBDWPROD is plugged into CDBPROD122.

In my environment I am testing with my PDB (PDBDWPROD) plugged into OLDCDB and migrating to NEWCDB.

To ensure that I have a good restore point, I am going to perform a full backup of my pluggable database prior to unplugging, and I will also include an archive log backup.

RMAN> backup incremental level 0 pluggable database PDBDWPROD plus archivelog delete input;


Starting backup at 15-APR-22
current log archived
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=426 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=23.0.0.1
channel ORA_SBT_TAPE_1: starting compressed archived log backup set
channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=185 RECID=186 STAMP=1102096068
input archived log thread=1 sequence=186 RECID=187 STAMP=1102096071
input archived log thread=1 sequence=187 RECID=188 STAMP=1102096147
input archived log thread=1 sequence=188 RECID=189 STAMP=1102096166
input archived log thread=1 sequence=189 RECID=190 STAMP=1102096288
channel ORA_SBT_TAPE_1: starting piece 1 at 15-APR-22
channel ORA_SBT_TAPE_1: finished piece 1 at 15-APR-22
piece handle=6l0r19t1_213_1_1 tag=TAG20220415T175129 comment=API Version 2.0,MMS Version 23.0.0.1
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:03
channel ORA_SBT_TAPE_1: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_185_k5mcy4w8_.arc RECID=186 STAMP=1102096068
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_186_k5mcy7xv_.arc RECID=187 STAMP=1102096071
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_187_k5md0m8l_.arc RECID=188 STAMP=1102096147
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_188_k5md16rt_.arc RECID=189 STAMP=1102096166
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_189_k5md50qy_.arc RECID=190 STAMP=1102096288
Finished backup at 15-APR-22

Starting backup at 15-APR-22
using channel ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: starting compressed incremental level 0 datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
input datafile file number=00066 name=/u01/app/oracle/oradata/OLDCDB/PDBDWPROD/sysaux01.dbf
input datafile file number=00065 name=/u01/app/oracle/oradata/OLDCDB/PDBDWPROD/system01.dbf
input datafile file number=00068 name=/u01/app/oracle/oradata/OLDCDB/PDBDWPROD/PDBDWPROD.dbf
input datafile file number=00067 name=/u01/app/oracle/oradata/OLDCDB/PDBDWPROD/undotbs01.dbf
channel ORA_SBT_TAPE_1: starting piece 1 at 15-APR-22
channel ORA_SBT_TAPE_1: finished piece 1 at 15-APR-22
piece handle=6m0r19t5_214_1_1 tag=TAG20220415T175132 comment=API Version 2.0,MMS Version 23.0.0.1
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:15
Finished backup at 15-APR-22

Starting backup at 15-APR-22
current log archived
using channel ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: starting compressed archived log backup set
channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=190 RECID=191 STAMP=1102096309
channel ORA_SBT_TAPE_1: starting piece 1 at 15-APR-22
channel ORA_SBT_TAPE_1: finished piece 1 at 15-APR-22
piece handle=6n0r19tm_215_1_1 tag=TAG20220415T175150 comment=API Version 2.0,MMS Version 23.0.0.1
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:03
channel ORA_SBT_TAPE_1: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_190_k5md5osn_.arc RECID=191 STAMP=1102096309
Finished backup at 15-APR-22

Starting Control File and SPFILE Autobackup at 15-APR-22
piece handle=c-1180802953-20220415-07 comment=API Version 2.0,MMS Version 23.0.0.1
Finished Control File and SPFILE Autobackup at 15-APR-22

RMAN>


Then right before the unplug I am going to execute another archive log backup, immediately followed by the unplug.

RMAN> backup archivelog all delete input;

Starting backup at 15-APR-22
current log archived
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=442 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=23.0.0.1
channel ORA_SBT_TAPE_1: starting compressed archived log backup set
channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=191 RECID=192 STAMP=1102096412
input archived log thread=1 sequence=192 RECID=193 STAMP=1102096418
input archived log thread=1 sequence=193 RECID=194 STAMP=1102096424
input archived log thread=1 sequence=194 RECID=195 STAMP=1102096502
channel ORA_SBT_TAPE_1: starting piece 1 at 15-APR-22
channel ORA_SBT_TAPE_1: finished piece 1 at 15-APR-22
piece handle=6p0r1a3n_217_1_1 tag=TAG20220415T175503 comment=API Version 2.0,MMS Version 23.0.0.1
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:07
channel ORA_SBT_TAPE_1: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_191_k5md8w57_.arc RECID=192 STAMP=1102096412
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_192_k5md926n_.arc RECID=193 STAMP=1102096418
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_193_k5md9893_.arc RECID=194 STAMP=1102096424
archived log file name=/u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_194_k5mdcpko_.arc RECID=195 STAMP=1102096502
Finished backup at 15-APR-22

Starting Control File and SPFILE Autobackup at 15-APR-22
piece handle=c-1180802953-20220415-08 comment=API Version 2.0,MMS Version 23.0.0.1
Finished Control File and SPFILE Autobackup at 15-APR-22



Then the unplug

SQL>  alter pluggable database PDBDWPROD  close immediate;

Pluggable database altered.

SQL> ALTER PLUGGABLE DATABASE PDBDWPROD UNPLUG INTO '/tmp/PDBDWPROD.xml';

Pluggable database altered.

SQL>


Plug


SQL>  create pluggable database PDBDWPROD using  '/tmp/PDBDWPROD.xml' nocopy tempfile reuse KEYSTORE IDENTIFIED BY "change-on-install" ;

Pluggable database created.

SQL> alter pluggable database PDBDWPROD open;

Pluggable database altered.



Update database and set restore point


Now I am going to create some objects in my PDB, set a restore point, and then create a few more objects so I can verify restoring to a point in time.
SQL>  alter session set container=PDBDWPROD;

Session altered.

SQL> create table bgrenn.postmove as select * from dba_objects ;

Table created.

############################ perform a couple of log switches

SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> alter system archive log current;
System altered.

SQL> alter system archive log current;
System altered.

SQL> alter session set container=PDBDWPROD;
Session altered.

############################ create a restore point

SQL> create restore point PDBDWPROD_restore;
Restore point created.

############################ create a second table

SQL> create table bgrenn.postrestorepoint as select * from dba_objects ;
Table created.

############################ perform a couple of log switches

SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> alter system archive log current;
System altered.

SQL> alter system archive log current;
System altered.

SQL> alter system archive log current;
System altered.



Backups available post plugin

Now, using the preplugin commands, I can see the backups that were taken before the migration.

RMAN> SET PREPLUGIN CONTAINER=PDBDWPROD;
RMAN> list preplugin backup of pluggable database PDBDWPROD;

 

RMAN>  list preplugin backup of pluggable database PDBDWPROD;

starting full resync of recovery catalog
full resync complete

List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
209 Incr 0 284.50M SBT_TAPE 00:00:07 15-APR-22
BP Key: 209 Status: AVAILABLE Compressed: YES Tag: TAG20220415T175132
Handle: 6m0r19t5_214_1_1 Media: objectstorage.us-ashburn-1.oraclecloud.com/n/xxx/oldcdb
List of Datafiles in backup set 209
Container ID: 5, PDB Name: PDBDWPROD
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
59 0 Incr 6346380 15-APR-22 NO /u01/app/oracle/oradata/OLDCDB/PDBDWPROD/system01.dbf
60 0 Incr 6346380 15-APR-22 NO /u01/app/oracle/oradata/OLDCDB/PDBDWPROD/sysaux01.dbf
61 0 Incr 6346380 15-APR-22 NO /u01/app/oracle/oradata/OLDCDB/PDBDWPROD/undotbs01.dbf
62 0 Incr 6346380 15-APR-22 NO /u01/app/oracle/oradata/OLDCDB/PDBDWPROD/PDBDWPROD.dbf


list preplugin backup of archivelog all;

List of Backup Sets
===================


BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
208 2.25M SBT_TAPE 00:00:01 15-APR-22
BP Key: 208 Status: AVAILABLE Compressed: YES Tag: TAG20220415T175129
Handle: 6l0r19t1_213_1_1 Media: objectstorage.us-ashburn-1.oraclecloud.com/n/xxx/oldcdb

List of Archived Logs in backup set 208
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 185 6345022 15-APR-22 6345387 15-APR-22
1 186 6345387 15-APR-22 6345399 15-APR-22
1 187 6345399 15-APR-22 6345803 15-APR-22
1 188 6345803 15-APR-22 6345912 15-APR-22
1 189 6345912 15-APR-22 6346322 15-APR-22

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
210 256.00K SBT_TAPE 00:00:00 15-APR-22
BP Key: 210 Status: AVAILABLE Compressed: YES Tag: TAG20220415T175150
Handle: 6n0r19tm_215_1_1 Media: objectstorage.us-ashburn-1.oraclecloud.com/n/xxx/oldcdb

List of Archived Logs in backup set 210
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 190 6346322 15-APR-22 6346391 15-APR-22

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
212 512.00K SBT_TAPE 00:00:02 15-APR-22
BP Key: 212 Status: AVAILABLE Compressed: YES Tag: TAG20220415T175503
Handle: 6p0r1a3n_217_1_1 Media: objectstorage.us-ashburn-1.oraclecloud.com/n/id20skavsofo/oldcdb

List of Archived Logs in backup set 212
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 191 6346391 15-APR-22 6346585 15-APR-22
1 192 6346585 15-APR-22 6346593 15-APR-22
1 193 6346593 15-APR-22 6346601 15-APR-22
1 194 6346601 15-APR-22 6346663 15-APR-22



Restore from preplugin

I shut down my pluggable database and then run the restore with "from preplugin" added to the command in my RMAN session.

RMAN> alter pluggable database PDBDWPROD close;
RMAN> restore pluggable database PDBDWPROD from preplugin;

RMAN> alter pluggable database PDBDWPROD close;

Statement processed
starting full resync of recovery catalog
full resync complete

RMAN> restore pluggable database PDBDWPROD from preplugin;

Starting restore at 15-APR-22
using channel ORA_SBT_TAPE_1
using channel ORA_DISK_1

channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_SBT_TAPE_1: restoring datafile 00059 to /u01/app/oracle/oradata/OLDCDB/PDBDWPROD/system01.dbf
channel ORA_SBT_TAPE_1: restoring datafile 00060 to /u01/app/oracle/oradata/OLDCDB/PDBDWPROD/sysaux01.dbf
channel ORA_SBT_TAPE_1: restoring datafile 00061 to /u01/app/oracle/oradata/OLDCDB/PDBDWPROD/undotbs01.dbf
channel ORA_SBT_TAPE_1: restoring datafile 00062 to /u01/app/oracle/oradata/OLDCDB/PDBDWPROD/PDBDWPROD.dbf
channel ORA_SBT_TAPE_1: reading from backup piece 6m0r19t5_214_1_1
channel ORA_SBT_TAPE_1: piece handle=6m0r19t5_214_1_1 tag=TAG20220415T175132
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:25
Finished restore at 15-APR-22


Recover from preplugin

Now I am running the recover from preplugin.

RMAN> recover pluggable database PDBDWPROD from preplugin;

Starting recover at 15-APR-22
using channel ORA_SBT_TAPE_1
using channel ORA_DISK_1

starting media recovery

channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=190
channel ORA_SBT_TAPE_1: reading from backup piece 6n0r19tm_215_1_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 04/15/2022 18:23:58
ORA-19870: error while restoring backup piece 6n0r19tm_215_1_1
ORA-19827: Restoring preplugin files to a recovery area is not supported.

RMAN>


You can see that it is not going to let me apply the archive logs by restoring them from backup into the local recovery area of my new CDB.

I need to restore the archive logs themselves and then catalog them as preplugin files.

Looking at the output, I can see it is asking for sequence 190, so I restored that archive log from my original CDB.


RMAN> restore archivelog sequence 190;

Starting restore at 15-APR-22
starting full resync of recovery catalog
full resync complete
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=449 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=23.0.0.1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=71 device type=DISK

channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=190
channel ORA_SBT_TAPE_1: reading from backup piece 6n0r19tm_215_1_1
channel ORA_SBT_TAPE_1: piece handle=6n0r19tm_215_1_1 tag=TAG20220415T175150
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:01
Finished restore at 15-APR-22

RMAN> list archivelog sequence 190;

List of Archived Log Copies for database with db_unique_name OLDCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
8134 1 190 A 15-APR-22
Name: /u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_190_k5mgd1h2_.arc




Now I need to catalog it as a preplugin backup to continue the recovery.
I copied the restored archive log to /tmp and cataloged it, but I am still missing some pieces, so I will continue restoring the rest of the archive logs in the listing, up to sequence 194.
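The catalog step itself isn't shown above; a minimal sketch, using RMAN's catalog preplugin archivelog command and the file name from the restore output (your path and file name will differ), looks like this:

RMAN> catalog preplugin archivelog '/tmp/o1_mf_1_190_k5mgd1h2_.arc';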


Now that I have restored and cataloged all the backup pieces up to sequence 194, I will continue the recovery.
RMAN>  recover pluggable database PDBDWPROD   from preplugin;

Starting recover at 15-APR-22
using channel ORA_SBT_TAPE_1
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 191 is already on disk as file /tmp/o1_mf_1_191_k5mgw2fr_.arc
archived log for thread 1 with sequence 192 is already on disk as file /tmp/o1_mf_1_192_k5mgw83q_.arc
archived log for thread 1 with sequence 193 is already on disk as file /tmp/o1_mf_1_193_k5mgwlf8_.arc
archived log for thread 1 with sequence 194 is already on disk as file /tmp/o1_mf_1_194_k5mgx0t1_.arc
unable to find archived log
archived log thread=1 sequence=195
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 04/15/2022 18:40:31
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 195 and starting SCN of 6346663



I am finding that there is still one last archive log needed (hopefully the last one). This was the redo log that was active while I unplugged my database.
In fact, I can see on the source CDB that it is still the current redo log, so I am going to have to do a log switch to grab a copy of the archive log and catalog it.

SQL> select sequence#,status from v$log;

SEQUENCE# STATUS
---------- ----------------
195 CURRENT
193 INACTIVE
194 INACTIVE
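
The individual steps aren't shown here, but a sketch of what that looks like (the file name is taken from the recover output further down; yours will differ) would be:

On the source CDB, archive the current log:
SQL> alter system archive log current;

Copy the new archive log somewhere the new CDB can read it:
$ cp /u01/app/oracle/fast_recovery_area/OLDCDB/archivelog/2022_04_15/o1_mf_1_195_k5mhf79h_.arc /tmp/

On the new CDB, catalog it as a preplugin archive log:
RMAN> catalog preplugin archivelog '/tmp/o1_mf_1_195_k5mhf79h_.arc';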


Now that I have the last archive log, my preplugin recovery is complete up to the time the database was unplugged.

RMAN> recover pluggable database PDBDWPROD   from preplugin;

Starting recover at 15-APR-22
using channel ORA_SBT_TAPE_1
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 195 is already on disk as file /tmp/o1_mf_1_195_k5mhf79h_.arc
media recovery complete, elapsed time: 00:00:01
Finished recover at 15-APR-22



Recover post plugin

Now I can recover to my restore point, and open it up.


RMAN>

RMAN> recover pluggable database PDBDWPROD until restore point PDBDWPROD_restore;

Starting recover at 15-APR-22
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=4 device type=DISK


starting media recovery

archived log for thread 1 with sequence 101 is already on disk as file /u01/app/oracle/fast_recovery_area/NEWCDB/archivelog/2022_04_15/o1_mf_1_101_k5mf0rr5_.arc
archived log for thread 1 with sequence 102 is already on disk as file /u01/app/oracle/fast_recovery_area/NEWCDB/archivelog/2022_04_15/o1_mf_1_102_k5mf0s8b_.arc
archived log for thread 1 with sequence 103 is already on disk as file /u01/app/oracle/fast_recovery_area/NEWCDB/archivelog/2022_04_15/o1_mf_1_103_k5mf1wof_.arc
archived log for thread 1 with sequence 104 is already on disk as file /u01/app/oracle/fast_recovery_area/NEWCDB/archivelog/2022_04_15/o1_mf_1_104_k5mf1zqm_.arc
archived log for thread 1 with sequence 105 is already on disk as file /u01/app/oracle/fast_recovery_area/NEWCDB/archivelog/2022_04_15/o1_mf_1_105_k5mf22sk_.arc
archived log for thread 1 with sequence 106 is already on disk as file /u01/app/oracle/fast_recovery_area/NEWCDB/archivelog/2022_04_15/o1_mf_1_106_k5mf91fn_.arc
archived log for thread 1 with sequence 107 is already on disk as file /u01/app/oracle/fast_recovery_area/NEWCDB/archivelog/2022_04_15/o1_mf_1_107_k5mf94g2_.arc
archived log for thread 1 with sequence 108 is already on disk as file /u01/app/oracle/fast_recovery_area/NEWCDB/archivelog/2022_04_15/o1_mf_1_108_k5mf9wk2_.arc
media recovery complete, elapsed time: 00:00:01
Finished recover at 15-APR-22

RMAN> alter pluggable database PDBDWPROD open resetlogs;

Statement processed
starting full resync of recovery catalog
full resync complete



And let's make sure that the database was recovered up to my restore point, and that I can see the data in the table.

SQL>  alter session set container=PDBDWPROD;

Session altered.

SQL> select table_name from dba_tables where owner='BGRENN';

TABLE_NAME
--------------------------------------------------------------------------------
POSTMOVE

SQL> select count(1) from bgrenn.postmove;

COUNT(1)
----------
73610




Conclusion : 


Preplugin backups provide you with Recovery Continuity, ensuring you can recover your pluggable database after migrating to a new CDB, even before you take your first backup.  As you can tell from my example, you want to take the backup as close as possible to the point in time you unplug, to lessen the work needed to catalog and apply the archive logs.  I would also recommend taking a backup on the new CDB as soon as possible.

Recovery Continuity with Multitenant


Recovery Continuity with Multitenant is something you need to understand as you migrate databases from one CDB to another.






Above is from a presentation that I recently gave to my internal Oracle team. It was such a big hit (and very eye opening) that I wanted to make sure I share the information on my blog.


The first thing to point out is that before we (Oracle) moved to the multitenant architecture, life was simple. Below is my slide showing how databases moved around as they were upgraded.  Regardless of whether it was an out-of-place upgrade or a migration to a different host, the DB name stayed the same and the backups stayed contiguous.


But, like many things in life, new ideas came along that changed the way we do things.  Multitenant is one of those things.  Don't get me wrong, multitenant is a great feature giving DBAs a lot more flexibility.  Below are a couple of pictures that show all the wonderful things that multitenant can do.




Above are the 2 slides from my presentation.  These slides are often used to show the benefits of multitenant.  On the last slide I did point out the encryption keys that are used to secure the database with TDE.

The use of Encryption keys is an important point to think about.  With Multitenant (if you think about how it works), the CDB has a different encryption key from the PDB.  If I create an encrypted backup of my CDB, it is encrypted with the CDB key. The backup (and actual datafiles) for my PDB are encrypted with the PDB key.

Below is the next slide I used.  All the information on multitenant talks about how easy it is to unplug/plug (which it is), but ensuring you maintain your recovery window is the hard part.



Database backup and recovery in a multitenant environment

Here are some things to keep in mind in a multitenant environment

  • Pluggable database backup pieces are ALWAYS kept independent of the CDB and other PDBs.  Even with filesperset=1000 and a single channel, each PDB and the CDB end up in their own backup sets.
  • Pluggable databases can be backed up independently of each other, and of the CDB: “backup pluggable database xxx”.
  • You can perform a point-in-time recovery of a pluggable database independently of other PDBs. This requires local undo: “recover pluggable database ... until” (see the sketch after this list).
  • Recovering a pluggable database requires a backup of the CDB (for metadata), and backups of the archive logs.
  • All redo transactions for all PDBs are intertwined into a single redo stream. This will not change in the near future. 
  • Flashback can be set at the PDB level
  • You can create restore points within a PDB
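
To make a couple of those bullets concrete, here is a minimal sketch of backing up an individual PDB and creating a restore point inside it (the PDB and restore point names are illustrative):

RMAN> backup incremental level 0 pluggable database PDB1 plus archivelog;

SQL> alter session set container=PDB1;
SQL> create restore point before_change;
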
When backing up a Multitenant environment, the item to keep in mind is that the RMAN catalog information is stored at the CDB level.  Pluggable databases are part of the CDB, and registration is done at the CDB.


The next image shows what a recovery of the Pluggable database looks like. Keep in mind that the datafiles for the  pluggable database get restored using the pluggable database backup, but to defuzzy them, the archive logs get restored from the CDB.  Remember that in a multitenant environment the redo/archive logs are intertwined at the CDB level.
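
A straightforward restore and recovery of a single PDB then looks roughly like this (a sketch; the PDB name is illustrative, and the archive logs needed to make the datafiles consistent come from the CDB-level archive log backups):

RMAN> alter pluggable database PDB1 close;
RMAN> restore pluggable database PDB1;
RMAN> recover pluggable database PDB1;
RMAN> alter pluggable database PDB1 open;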


The next image shows what is typically done to perform a PDB upgrade with unplug/plug. The pluggable database is migrated from 12c to 19c.


Now that the database is migrated, let's look at what happens to the RMAN catalog after the migration to ensure that we have a backup of the pluggable database.



You can see in the image above, that the pluggable database is now associated with the CDB that the pluggable database is plugged into.

Now to go back to the image at the beginning of this post, you can see what it takes to restore and recover the database throughout its lifecycle.

  • Backups that were taken through previous CDBs (for example, an archival backup) need to be restored through the CDB they were backed up through.
  • Backups that were taken in the original CDB can only be restored back to the original CDB.
  • Pre-plugin backups provide a bridge between plugging in and when the first backup is taken.
  • Backups taken in the new CDB will be restored back to the new CDB.



Finally some parting thoughts on backups of pluggable databases when migrating.

  • Perform a full backup if possible (ZDLRA makes this easy) with the PDB mounted prior to unplugging. This is the best possible restore point after migrating.
  • Keep the RMAN catalog entries for the old CDB as long as there are valid backup pieces. This could be years for keep backups.
  • NOTE – On the ZDLRA you can execute “Pause Database”; this will remove all backups, but leave the RMAN catalog entries.
  • Ensure you have the encryption keys for both CDBs and PDBs for the needed recovery window, which may be years.
  • Keep track of CDB backups, as a PDB might be migrated between multiple CDBs throughout its backup cycle.
  • NEVER delete a CDB backup that still-needed PDB backups depend on.
  • NEVER delete any TDE keys or wallets that support needed backups.

ZFS Object Store now offers detailed access control policies


 Object Retention Rules is one of the new features released in ZFSSA version 8.8.36.  Before I talk about Object Retention Rules on buckets on ZFSSA, I am going to go through how to leverage the new access control policies that go along with managing objects, buckets, and retention.

User Architecture


If you have followed my previous postings on configuring ZFS as an object store, you know that one of the options available is to configure ZFSSA as an OCI Gen 2 (sometimes called OCI native) object store.
 When configuring this API interface on ZFSSA, the authentication utilizes the same public/private key concept that is used in most of the Oracle Cloud.

If you want to read my post on configuring authentication you can find it here.

What I want to go through in this post is how you can configure a set of user roles on ZFSSA with different permissions based on public/private keys.

This will help you isolate and secure backups that were sent from multiple sources, and allow you to define both a security administrator (to apply retention policies), and an auditor to view the existence of backups without having the ability to delete or update backups.

In the "User Architecture" diagram at the beginning of this post you see that I have defined 5 user roles  that will be used to manage the object store security for the backups.

Users:

  • SECADMIN - This user role is the security administrator for all 3 object store backups, and all three buckets.  This user role is responsible for creating, deleting and assigning retention rules to the buckets.
  • AUDITOR - This user reviews the backups and has a read only view of all 3 backups. The auditor cannot delete or update any objects, but they can view the existence of the backup pieces.
  • GLUSER  - This user controls the backups for GLDB only
  • APUSER  - This user controls the backups for APDB only
  • DWUSER - This user controls the backups for DWDB only
NOTE: Because the Object Store API controls the access to objects in the bucket, all access to objects in a bucket is through the bucket owner. I can have multiple buckets on the same share, managed by different users, but access WITHIN the bucket is only granted to the bucket owner.

Based on the above note, I am going to create 3 users to manage the buckets for the 3 database backups.
The 2 additional user roles, SECADMIN and AUDITOR are going to control their access through the use of RSA keys.

 Because I am not going to use pre-authenticated URLs for my backups (which requires login), all 3 users are going to be created as "no-login" users.    Below is an example of creating the APUSER.





I created all 3 users as no-login users




Project/Share for Object Storage


Now I am going to create a project and share to store the backup pieces for all 3 databases.  The project is going to be "dbbackups" and the share is going to be "dbbackups".  I am going to set the default user for the share to "oracle" and I am also going to grant the other 3 users "Full Control" of the share. I will later limit the permissions for these users.


Share User Access


User certificates:


Authentication to the object store is through the use of RSA public/private certificates.
For each user/role I created a certificate that will be used for authentication. 
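The key generation itself isn't covered here; a sketch of creating one of these key pairs and getting its fingerprint with OpenSSL (file names are illustrative) looks like this:

$ openssl genrsa -out secadmin.ppk 2048
$ openssl rsa -pubout -in secadmin.ppk -out secadmin_public.pem
$ openssl rsa -pubout -outform DER -in secadmin.ppk | openssl md5 -c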
The following table shows the users/roles and the fingerprint that identifies them.




Authentication:


Within the OCI service on the ZFSSA I combine the user and key (fingerprint) to provide the role.


First I will add the SECADMIN role.  Notice that I am adding this user's access to all 3 database backup "users".  This will allow the SECADMIN role to manage bucket creation/deletion and retention for the individual buckets.  The SECADMIN role is accessed through its key.

I will start by adding the key owned by this role (SECADMIN) to the 3 users APUSER, GLUSER and DWUSER.



Now that I have the SECADMIN role assigned to the 3 users, I want to set the proper capabilities for this role.  I click on the pencil to edit the key configuration, and I can see the permissions assigned to this user/key combination.  I want to allow SECADMIN the ability to create buckets, delete buckets, and control the retention within the 3 users' buckets.  This role will need the ability to read the bucket.  Notice that this role does not have the ability to read any of the objects within the bucket.





Now I am going to move on to the AUDITOR role.  This role will be configured using the AUDITOR key assigned to all 3 users.  Within each user the AUDITOR will be granted the ability to read the bucket and the objects but not make any changes.


I now have both the SECADMIN role and the AUDITOR role defined for all 3 users. Below is what is configured within the OCI service. Notice that there are 2 keys set for each user, and there are 2 unique keys (one for SECADMIN and one for AUDITOR).


Finally I am going to add the 3 users that own the buckets and grant them access to create objects, but not control the retention or be able to add/remove buckets.



Once completed with adding users/keys I have my 2 roles defined and assigned to each user, and I have an individual key for each user/backup.

When completed, the chart below shows the permissions for each user/role.



OCI cli configuration :


I added entries to the ~/.oci/config file for each of the users/roles configured for the service.
Below is an example entry for the SECADMIN role with the APDB bucket.

[SECADMIN_APDB]
user=ocid1.user.oc1..apuser
fingerprint=0a:35:21:1b:5c:eb:09:8c:e9:44:42:f2:7c:b5:bc:f6
key_file=~/keys/secadmin.ppk
tenancy=ocid1.tenancy.oc1..nobody
region=us-phoenix-1
endpoint=http://150.136.215.19
os.object.bucket-name=apdb
namespace-name=dbbackups
compartment-id=dbbackups


Below is a table of the entries that I added to the config file.




Creating buckets:


Now I am going to create my 3 buckets using the SECADMIN role. Below is an example of adding the bucket for APDB

[oracle@oracle-19c-test-tde keys]$ oci os bucket create  --namespace-name dbbackups  --endpoint http://150.136.215.19  --config-file ~/.oci/config --profile SECADMIN_APDB    --name apdb  --compartment-id dbbackups
{
"data": {
"approximate-count": null,
"approximate-size": null,
"auto-tiering": null,
"compartment-id": "dbbackups",
"created-by": "apuser",
"defined-tags": null,
"etag": "2f0b55dbbb925ebbaabbc37e3ce342fa",
"freeform-tags": null,
"id": "2f0b55dbbb925ebbaabbc37e3ce342fa",
"is-read-only": null,
"kms-key-id": null,
"metadata": null,
"name": "apdb",
"namespace": "dbbackups",
"object-events-enabled": null,
"object-lifecycle-policy-etag": null,
"public-access-type": "NoPublicAccess",
"replication-enabled": null,
"storage-tier": "Standard",
"time-created": "2022-05-17T17:55:49+00:00",
"versioning": "Disabled"
},
"etag": "2f0b55dbbb925ebbaabbc37e3ce342fa"
}


I then did the same thing for the GLDB bucket using SECADMIN_GLDB, and the DWDB bucket using SECADMIN_DWDB.

Once the buckets were created, I attempted to create buckets with both the AUDITOR role, and the DB role.  You can see below that both of these configurations did not have the correct privileges.

[oracle@oracle-19c-test-tde keys]$ oci os bucket create  --namespace-name dbbackups  --endpoint http://150.136.215.19  --config-file ~/.oci/config --profile AUDITOR_APDB    --name apdb  --compartment-id dbbackups
ServiceError:
{
"code": "BucketNotFound",
"message": "Either the bucket does not exist in the namespace or you are not authorized to access it",
"opc-request-id": "tx3a37f1dee0cc4778a1201-006283e2a1",
"status": 404
}
[oracle@oracle-19c-test-tde keys]$ oci os bucket create --namespace-name dbbackups --endpoint http://150.136.215.19 --config-file ~/.oci/config --profile APDB --name apdb --compartment-id dbbackups
ServiceError:
{
"code": "BucketNotFound",
"message": "Either the bucket does not exist in the namespace or you are not authorized to access it",
"opc-request-id": "tx46435ae6b8234982b3fbd-006283e2a9",
"status": 404
}



Listing buckets:

All of the entries I created have access to view the buckets.  Below is an example of SECADMIN_APDB listing buckets. You can see that I have 3 buckets each owned by the correct user.
[oracle@oracle-19c-test-tde keys]$ oci os bucket list --namespace-name dbbackups  --endpoint http://150.136.215.19  --config-file ~/.oci/config --profile SECADMIN_APDB    --compartment-id dbbackups

{
"data": [
{
"compartment-id": "dbbackups",
"created-by": "apuser",
"defined-tags": null,
"etag": "2f0b55dbbb925ebbaabbc37e3ce342fa",
"freeform-tags": null,
"name": "apdb",
"namespace": "dbbackups",
"time-created": "2022-05-17T17:55:49+00:00"
},
{
"compartment-id": "dbbackups",
"created-by": "dwuser",
"defined-tags": null,
"etag": "866ded83e5ea2a29c66dca0d01036f0e",
"freeform-tags": null,
"name": "dwdb",
"namespace": "dbbackups",
"time-created": "2022-05-17T17:58:32+00:00"
},
{
"compartment-id": "dbbackups",
"created-by": "gluser",
"defined-tags": null,
"etag": "2169cf94f86009f66ca8770c1c58febb",
"freeform-tags": null,
"name": "gldb",
"namespace": "dbbackups",
"time-created": "2022-05-17T17:58:17+00:00"
}
]
}


Configuration retention lock :


Here is the documentation on how to configure retention lock for the objects within a bucket. For my example, I am going to lock all objects for 30 days.  I am going to use the SECADMIN_APDB account to lock the objects on the apdb bucket.

[oracle@oracle-19c-test-tde keys]$ oci os retention-rule create --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile SECADMIN_APDB --bucket-name apdb --time-amount 30  --time-unit days --display-name APDB-30-day-Bound-backups
{
"data": {
"display-name": "APDB-30-day-Bound-backups",
"duration": {
"time-amount": 30,
"time-unit": "DAYS"
},
"etag": "2c9ab8ff9c4743392d308365d9f72e05",
"id": "2c9ab8ff9c4743392d308365d9f72e05",
"time-created": "2022-05-17T18:49:24+00:00",
"time-modified": "2022-05-17T18:49:24+00:00",
"time-rule-locked": null
}
}


Now I am going to make sure my AUDITOR role and my BACKUP role do not have privileges to manage retention. For both of these I get an error.

[oracle@oracle-19c-test-tde keys]$ oci os retention-rule create --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_GLDB --bucket-name gldb --time-amount 30  --time-unit days --display-name APDB-30-day-Bound-backups
ServiceError:
{
"code": "BucketNotFound",
"message": "Either the bucket does not exist in the namespace or you are not authorized to access it",
"opc-request-id": "tx52e8849aa6444c639d59b-006283ee99",
"status": 404
}

I set the retention rule for the other buckets, and now I can use the AUDITOR accounts to list the retention rules.

[oracle@oracle-19c-test-tde keys]$ oci os retention-rule list --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_APDB --bucket-name apdb
oci os retention-rule list --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_GLDB --bucket-name gldb
oci os retention-rule list --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_DWDB --bucket-name dwdb


{
"data": {
"items": [
{
"display-name": "APDB-30-day-Bound-backups",
"duration": {
"time-amount": 30,
"time-unit": "DAYS"
},
"etag": "2c9ab8ff9c4743392d308365d9f72e05",
"id": "2c9ab8ff9c4743392d308365d9f72e05",
"time-created": "2022-05-17T18:49:24+00:00",
"time-modified": "2022-05-17T18:49:24+00:00",
"time-rule-locked": null
}
]
}
}
[oracle@oracle-19c-test-tde keys]$ oci os retention-rule list --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_GLDB --bucket-name gldb
{
"data": {
"items": [
{
"display-name": "GLDB-30-day-Bound-backups",
"duration": {
"time-amount": 30,
"time-unit": "DAYS"
},
"etag": "ee0d6114310a9971f5a464b428916e48",
"id": "ee0d6114310a9971f5a464b428916e48",
"time-created": "2022-05-17T18:56:45+00:00",
"time-modified": "2022-05-17T18:56:45+00:00",
"time-rule-locked": null
}
]
}
}
[oracle@oracle-19c-test-tde keys]$ oci os retention-rule list --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_DWDB --bucket-name dwdb
{
"data": {
"items": [
{
"display-name": "DWDB-30-day-Bound-backups",
"duration": {
"time-amount": 30,
"time-unit": "DAYS"
},
"etag": "96cc109a7308d5f849541be72d87757a",
"id": "96cc109a7308d5f849541be72d87757a",
"time-created": "2022-05-17T18:57:42+00:00",
"time-modified": "2022-05-17T18:57:42+00:00",
"time-rule-locked": null
}
]
}
}


Sending backups to buckets :

Here is the link to the "archive to cloud" section of the latest ZDLRA documentation.  The buckets are added as cloud locations.  Since I am going to be using an immutable bucket, I also need to add a metadata bucket to match the normal backup bucket. The metadata bucket holds temporary objects that get removed as the backup is written.
  I created 3 additional buckets, "apdb_meta", "gldb_meta" and "dwdb_meta".
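
Creating the metadata buckets follows the same pattern used earlier for the data buckets; for example, a sketch using the SECADMIN_APDB profile:

[oracle@oracle-19c-test-tde keys]$ oci os bucket create --namespace-name dbbackups --endpoint http://150.136.215.19 --config-file ~/.oci/config --profile SECADMIN_APDB --name apdb_meta --compartment-id dbbackups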
When I configure the Cloud Location I want to use the keys I created to send the backups.

The backup pieces were sent by the keys for apuser, gluser, and dwuser.

I used the process in the documentation to send the backup pieces from the ZDLRA.

Audit Backups :


Now that I have backups created for my database, I am going to use the AUDITOR role to report on what's available within the apdb bucket.

First I am going to look at the Retention Settings.


[oracle@oracle-19c-test-tde keys]$ oci os retention-rule list --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile AUDITOR_APDB --bucket-name apdb
{
"data": {
"items": [
{
"display-name": "APDB-30-day-Bound-backups",
"duration": {
"time-amount": 30,
"time-unit": "DAYS"
},
"etag": "2c9ab8ff9c4743392d308365d9f72e05",
"id": "2c9ab8ff9c4743392d308365d9f72e05",
"time-created": "2022-05-17T18:49:24+00:00",
"time-modified": "2022-05-17T18:49:24+00:00",
"time-rule-locked": null
}
]
}
}


Now I am going to print out all the backups that exist for the APDB database.
I am using the python script that comes with the Cloud Backup Library; instructions for how to use it can be found in my blog here.
 
Below I am running the script. Notice I am running it using the AUDITOR role.

[oracle@oracle-19c-test-tde ~]$ python2  /home/oracle/ociconfig/lib/odbsrmt.py --mode report --ocitype bmc  --host http://150.136.215.19 --dir /home/oracle/keys/reports --base apdbreport --pvtkeyfile  /home/oracle/keys/auditor.ppk --pubfingerprint a8:31:78:c2:b4:4f:44:93:bd:4f:f1:72:1c:37:c8:86 --tocid ocid1.tenancy.oc1..nobody --uocid ocid1.user.oc1..apuser --container apdb  --dbid 2867715978
odbsrmt.py: ALL outputs will be written to [/home/oracle/keys/reports/apdbreport12193.lst]
odbsrmt.py: Processing container apdb...
cloud_slave_processors: Thread Thread_0 starting to download metadata XML files...
cloud_slave_processors: Thread Thread_0 successfully done
odbsrmt.py: ALL outputs have been written to [/home/oracle/keys/reports/apdbreport12193.lst]

And finally I can see the report that is created by this script.


FileName                  Container Dbname  Dbid       FileSize   LastModified        BackupType          Incremental Compressed Encrypted
870toeq3_263_1_1          apdb      ORCLCDB 2867715978 1285029888 2022-05-17 19:09:45 Datafile            true        false      true
890toetk_265_1_1          apdb      ORCLCDB 2867715978 2217476096 2022-05-17 19:12:17 ArchivedLog         false       false      true
8a0tof0j_266_1_1          apdb      ORCLCDB 2867715978 2790260736 2022-05-17 19:14:15 Datafile            true        false      true
8b0tof4g_267_1_1          apdb      ORCLCDB 2867715978 2124677120 2022-05-17 19:15:52 Datafile            true        false      true
8c0tof7f_268_1_1          apdb      ORCLCDB 2867715978 536346624  2022-05-17 19:16:21 Datafile            true        false      true
8d0tof89_269_1_1          apdb      ORCLCDB 2867715978 262144     2022-05-17 19:16:25 ArchivedLog         false       false      true
c-2867715978-20220517-00  apdb      ORCLCDB 2867715978 18874368   2022-05-17 19:09:47 ControlFile SPFILE  false       false      true
c-2867715978-20220517-01  apdb      ORCLCDB 2867715978 18874368   2022-05-17 19:16:26 ControlFile SPFILE  false       false      true
Total Storage: 8.37 GB



Conclusion :

By creating 3 different roles through the use of separate keys, I am able to provide a separation of duties on the OCI object store.

SECADMIN - This user role creates/deletes buckets and controls retention. This user role cannot see any backup pieces, and this user role cannot delete any objects from the buckets. This user role is isolated from the backup pieces themselves.

AUDITOR - This user role is used to create reporting on the backups to ensure there are backup pieces available.

DBA - These user roles are used to manage the individual backup pieces within the bucket but they do not have the ability to delete the bucket, or change the retention.

This provides a true separation of duties.



File Retention Lock now available in ZFSSA OS8.8.45

 File Retention Lock is introduced today in the much-awaited release of ZFSSA AK Software OS8.8.45 (aka 2013.1 Update 8.45) 

OS8.8.45 introduces File Retention to ZFSSA.

 File retention is controlled by a new system attribute timestamp for files that, once set, makes the file read-only and unable to be deleted. Once the date/time specified by that timestamp has passed and the retention has expired, the file may be deleted. No other modification is allowed, even after expiration.

In a filesystem with retention enabled, rename of directories is blocked unless the directory is empty. This is done to preserve the name of a file, including its path, so that its location cannot be hidden or any meaning conveyed by a changed path.

File retention enforces one of two policies, set at filesystem creation:

  • Privileged mode: Allows a process with the FILE_RETENTION_OVERRIDE privilege to override retention and delete files. This privilege does not allow files to be modified once retained.
  • Mandatory mode: No privilege or authorization allows deletion of a retained file until the retention timestamp has been surpassed. Mandatory mode's protection extends to the filesystem and pool in that they may not be destroyed until all retention on all files therein has expired. A mandatory-mode-protected filesystem also protects its ancestors and clone descendants from destruction.


NOTE: File retention must be enabled in the filesystem during creation before files can be retained because in most settings, taking away the ability to modify or delete a file would be undesirable behavior.

ZFSSA offers versatile data protection


 The latest release of ZFSSA Software, OS8.8.45, includes file retention lock, joining object retention lock and snapshot retention lock.



 

3 types of retention lock

Legal Hold

You might need to preserve certain business data in response to potential or on-going lawsuits. A legal hold does not have a defined retention period and remains in effect until removed.  Once the legal hold is removed, all protected data is immediately eligible for deletion.

Data Governance

Data Governance locks data sets (snapshot, object or file) for a period of time protecting the data from deletion.  You might need to protect certain data sets as a part of internal business process requirements or protect data sets from a cyber attack. While retaining the data for a defined length of time is necessary, that time period could change.

Regulatory Compliance

Your industry might require you to retain a certain class of data for a defined length of time. Your data retention regulations might also require that you lock the retention settings. Regulatory compliance only allows you to increase the retention time.

 

3 implementations of retention lock

Object storage

Object storage retention is managed through the OCI client tool, and object retention is enforced through the API. Current retention settings are applied to all objects when they are accessed.  Adding a rule immediately takes effect for all objects.

Administration of retention rules can be managed through the use of RSA certificates.  It is recommended to create a separation of duties between a security administrator, and the object owner.

Retention on object storage is implemented in the following way based on the retention lock type.

Legal hold

Legal holds are implemented by placing an indefinite retention rule on a bucket.  Creating this rule ensures that all objects within the bucket can not be deleted, and cannot be changed. Only new objects can be stored.
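
With the OCI CLI against ZFSSA, this is simply a retention rule created without a duration; a sketch (omitting --time-amount/--time-unit should make the rule indefinite, and the bucket/profile names are illustrative):

[oracle@oracle-19c-test-tde keys]$ oci os retention-rule create --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile SECADMIN_APDB --bucket-name apdb --display-name apdb-legal-hold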

 

Data Governance

Data Governance is implemented by placing a time bound retention rule on a bucket.  The rule sets a lock on all objects for a set length of time.  The rule can be later deleted. For cyber protection it is recommended to implement this with a separation of duties.

 

Regulatory Compliance

Regulatory Compliance is implemented by placing a locked time bound retention rule on a bucket with a grace period.  When a locked time bound retention rule is created it immediately takes effect, but there is a grace period of at least 14 days which allows you to test the rule. Once the grace period expires the rule cannot be deleted even by an administrator.
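
A sketch of creating such a rule with the OCI CLI, assuming the --time-rule-locked parameter is used to set the date on which the rule locks (it must be at least 14 days out; the bucket/profile names and date are illustrative):

[oracle@oracle-19c-test-tde keys]$ oci os retention-rule create --endpoint http://150.136.215.19 --namespace-name dbbackups --config-file ~/.oci/config --profile SECADMIN_APDB --bucket-name apdb --time-amount 30 --time-unit days --time-rule-locked 2022-06-01T00:00:00Z --display-name apdb-compliance-30-day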

 

Snapshots

Snapshot locking is managed through the BUI or CLI.  Individual snapshots can be locked, and scheduled snapshots can be created and automatically locked.  Permission for controlling snapshot locking can be assigned to ZFSSA users, allowing you to create a separation of duties. Shares or projects cannot be removed if they contain locked snapshots.

Retention on snapshots is implemented in the following way based on the retention lock type.

Legal hold

Legal holds on snapshots are handled by creating a snapshot and locking it. The snapshot cannot be removed until the lock on the snapshot is removed.  There is no mechanism to schedule unlimited snapshots, but it is possible to create a large number of daily snapshots that are retained as locked snapshots for thousands of days.

 

Data Governance

Data governance of snapshots is handled through the use of scheduled locked snapshots.  A schedule is created with both a retention and a "keep at most" setting. This allows you to keep a set number of locked snapshots while automatically cleaning up snapshots that are past the retention count.  The snapshots can be unlocked and removed, and the schedule can be removed, by an administrator with the correct privileges.

 

Regulatory Compliance

Regulatory compliance of snapshots is handled through the use of locked snapshot schedules.  Similar to data governance, a snapshot schedule is created, but when regulatory compliance is set, the schedule cannot be decreased or removed as long as data exists within the snapshot.  

 

File Retention

File retention is set at the share or project level and controls updating and deletion of all data contained on the share/project.  A default file retention is set and all new files will inherit the default setting in effect when the file is created. It is also possible to manually set the retention on a file overriding the default setting inherited by the file.

 

Legal Hold

Legal hold cannot be easily implemented with file retention.  File retention of individual files is set at file creation, and can only be changed manually.

 

Data Governance

Data governance is implemented by creating a NEW project and share with a file retention policy of privileged.  Privileged mode allows you to create a default retention setting for all new files, and change that setting (longer or shorter) going forward.  

 

Regulatory Compliance

Regulatory compliance  is implemented by creating a NEW project and share with a file retention policy of mandatory (no override).  Mandatory mode does not allow you to decrease the default file retention.  The project/share cannot be removed when locked files exist, and the storage pool cannot be removed when locked files exist within the pool. This mode also requires an NTP server be utilized, and root is locked out of any remote access.

 

The best way to explore these new features is by using the ZFSSA image in OCI to test different scenarios.


Migrate a large oracle database to OCI from disk backup


 Migrating an Oracle database from on-premises to OCI is especially challenging when the database is quite large.  In this blog post I will walk through the steps to migrate to OCI, leveraging an on-disk local backup copied to object storage.

migrate Oracle database to OCI


The basic steps to perform this task are in the image above.

Step #1 - Upload backup pieces to object storage.

The first step to migrate my database (acmedb) is to copy the RMAN backup pieces to the OCI object storage using the OCI Client tool.

In order to make this easier, I am breaking this step into a few smaller steps.

Step #1A - Take a full backup to a separate location on disk 


This can also be done by moving the backup pieces, or creating them with a different backup format.  By creating the backup pieces in a separate directory, I am able to take advantage of the bulk upload feature of the OCI client tool. The alternative is to create an upload statement for each backup piece.

For my RMAN backup example (acmedb) I am going to change the location of the disk backup and perform a disk backup.  I am also going to compress my backup using medium compression (this requires the ACO license).  Compressing the backup sets allows me to make the backup pieces as small as possible when transferring to the OCI object store.

Below is the output from my RMAN configuration that I am using for the backup.

RMAN> show all;

RMAN configuration parameters for database with db_unique_name ACMEDBP are:


CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/acmedb/ocimigrate/backup_%d_%U';
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;
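
The backup command itself isn't shown here; with the configuration above, a level 0 backup that includes the archive logs can be taken with something along these lines (a sketch):

RMAN> backup incremental level 0 database plus archivelog;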

I created a new level 0 backup including archive logs and below is the "list backup summary" output showing the backup pieces.

List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
4125 B A A DISK 21-JUN-22 1 1 YES TAG20220621T141019
4151 B A A DISK 21-JUN-22 1 1 YES TAG20220621T141201
4167 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4168 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4169 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4170 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4171 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4172 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4173 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4174 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4175 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4176 B 0 A DISK 21-JUN-22 1 1 YES TAG20220621T141202
4208 B A A DISK 21-JUN-22 1 1 YES TAG20220621T141309
4220 B F A DISK 21-JUN-22 1 1 YES TAG20220621T141310



From the output you can see that there are a total of 14 backup pieces
  • 3 Archive log backup sets (two created before the backup of datafiles, and one after).
    • TAG20220621T141019
    • TAG20220621T141201
    • TAG20220621T141309
  • 10 Level 0 datafile backups
    • TAG20220621T141202
  • 1 controlfile backup 
    • TAG20220621T141310

Step #1B - Create the bucket in OCI and configure OCI Client

Now we need a bucket to upload the 14 RMAN backup pieces to. 

Before I can upload the objects, I need to download and configure the OCI Client tool. You can find the instructions to do this here.

Once the client tool is installed I can create the bucket and verify that the OCI Client tool is configured correctly.
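
A quick way to check that the configuration works (a sketch, assuming the default profile) is to ask for the object storage namespace, which should return your tenancy's namespace:

$ oci os ns get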

The command to create the bucket is "oci os bucket create".



Below is the output when I ran it for my compartment and created the bucket "acmedb_migrate"

 oci os bucket create --namespace id2avsofo --name acmedb_migrate --compartment-id ocid1.compartment.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq
{
"data": {
"approximate-count": null,
"approximate-size": null,
"auto-tiering": null,
"compartment-id": "ocid1.compartment.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
"created-by": "ocid1.user.oc1..aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
"defined-tags": {
"Oracle-Tags": {
"CreatedBy": "oracleidentitycloudservice/john.smith@oracle.com",
"CreatedOn": "2022-06-21T14:36:19.680Z"
}
},
"etag": "e0f028ac-d80d-4e09-8e60-876d90f57893",
"freeform-tags": {},
"id": "ocid1.bucket.oc1.iad.aaaaaaaanqbquh2bwju4igabu5g7clir2b4ykd3tyq",
"is-read-only": false,
"kms-key-id": null,
"metadata": {},
"name": "acmedb_migrate",
"namespace": "id2avsofo",
"object-events-enabled": false,
"object-lifecycle-policy-etag": null,
"public-access-type": "NoPublicAccess",
"replication-enabled": false,
"storage-tier": "Standard",
"time-created": "2022-06-21T14:36:19.763000+00:00",
"versioning": "Disabled"
},
"etag": "e0f028ac-d80d-4e09-8e60-876d90f57893"
}


Step #1C - Upload the backup pieces to Object Storage in OCI


The next step is to upload all the backup pieces that are in the directory "/acmedb/ocimigrate" to OCI using the bulk upload feature.



Below is the output of the upload. Notice I used a parallel upload count of 10 to ensure a quick upload.

 oci os object bulk-upload --namespace-name id20skavsofo    --bucket-name acmedb_migrate --src-dir /acmedb/ocimigrate/ --parallel-upload-count 10

Uploaded backup_RADB_3u10k6hj_126_1_1 [####################################] 100%
Uploaded backup_RADB_4710k6jl_135_1_1 [####################################] 100%
Uploaded backup_RADB_4610k6jh_134_1_1 [####################################] 100%
Uploaded backup_RADB_3n10k6b0_119_1_1 [####################################] 100%
Uploaded backup_RADB_3m10k6b0_118_1_1 [####################################] 100%
Uploaded backup_RADB_3r10k6ec_123_1_1 [####################################] 100%
Uploaded backup_RADB_4510k6jh_133_1_1 [####################################] 100%
Uploaded backup_RADB_4010k6hj_128_1_1 [####################################] 100%
Uploaded backup_RADB_3v10k6hj_127_1_1 [####################################] 100%
Uploaded backup_RADB_4110k6hk_129_1_1 [####################################] 100%
Uploaded backup_RADB_4210k6id_130_1_1 [####################################] 100%
Uploaded backup_RADB_4310k6ie_131_1_1 [####################################] 100%
Uploaded backup_RADB_3l10k6b0_117_1_1 [####################################] 100%
Uploaded backup_RADB_4410k6ie_132_1_1 [####################################] 100%
Uploaded backup_RADB_3k10k6b0_116_1_1 [####################################] 100%
Uploaded backup_RADB_3t10k6hj_125_1_1 [####################################] 100%

{
"skipped-objects": [],
"upload-failures": {},
"uploaded-objects": {
"backup_RADB_3k10k6b0_116_1_1": {
"etag": "ab4a1017-3ba7-46e2-a2ee-3f4cd9a82ad3",
"last-modified": "Tue, 21 Jun 2022 14:57:42 GMT",
"opc-multipart-md5": "W0hYIzfAWUVzACWNudcQDg==-3"
},
"backup_RADB_3l10k6b0_117_1_1": {
"etag": "a620076e-975f-4d8c-87e8-394c4cf966cd",
"last-modified": "Tue, 21 Jun 2022 14:57:41 GMT",
"opc-multipart-md5": "zapGBx8Imcdk91JM2+gORQ==-3"
},
"backup_RADB_3m10k6b0_118_1_1": {
"etag": "a96c35c0-4c0b-4646-ae38-723f92c8496e",
"last-modified": "Tue, 21 Jun 2022 14:57:32 GMT",
"opc-content-md5": "vNAsU3vLcjzp6OwEeLXGgA=="
},
"backup_RADB_3n10k6b0_119_1_1": {
"etag": "8f565894-5097-4ebb-9569-fdd31cc0c22d",
"last-modified": "Tue, 21 Jun 2022 14:57:31 GMT",
"opc-content-md5": "aSUSQWv5b+EfoLy9L9UBYQ=="
},
"backup_RADB_3r10k6ec_123_1_1": {
"etag": "120dead4-c8ae-44de-9d27-39e1c28a2c48",
"last-modified": "Tue, 21 Jun 2022 14:57:33 GMT",
"opc-content-md5": "4wHBrgZXuIMlYWriBbs1ng=="
},
"backup_RADB_3s10k6hh_124_1_1": {
"etag": "07d74b7f-68d6-4a77-9c4d-42f78c51c692",
"last-modified": "Tue, 21 Jun 2022 14:57:28 GMT",
"opc-content-md5": "uzRd51bAKvFjhbbsfL1YAg=="
},
"backup_RADB_3t10k6hj_125_1_1": {
"etag": "e5d3225b-a687-47e1-ad31-f4270ce31ddd",
"last-modified": "Tue, 21 Jun 2022 14:57:42 GMT",
"opc-multipart-md5": "aZIirf98ZNqwBAlIeWzuhQ==-3"
},
"backup_RADB_3u10k6hj_126_1_1": {
"etag": "5f5cc5ad-4aa3-4c3a-8848-16b3442a1e2c",
"last-modified": "Tue, 21 Jun 2022 14:57:28 GMT",
"opc-content-md5": "dT6EYLv1yzf6LZCn1/Dsvw=="
},
"backup_RADB_3v10k6hj_127_1_1": {
"etag": "297daece-be72-475f-b40d-982fb7115cd3",
"last-modified": "Tue, 21 Jun 2022 14:57:36 GMT",
"opc-content-md5": "Zt3h5YfHU6F771ahltYhDQ=="
},
"backup_RADB_4010k6hj_128_1_1": {
"etag": "9d723f2a-962e-4d03-9283-fc8a68f53af8",
"last-modified": "Tue, 21 Jun 2022 14:57:35 GMT",
"opc-content-md5": "KuNzVyUQrrSsA/kgioq9oA=="
},
"backup_RADB_4110k6hk_129_1_1": {
"etag": "16f7f02a-e5ae-48a2-a7d2-b6d1dedc82ad",
"last-modified": "Tue, 21 Jun 2022 14:57:36 GMT",
"opc-content-md5": "24SzzZwg7iu7PV8TBpMXEg=="
},
"backup_RADB_4210k6id_130_1_1": {
"etag": "0584e14f-53dc-4251-8bad-907f357a283e",
"last-modified": "Tue, 21 Jun 2022 14:57:37 GMT",
"opc-content-md5": "sjPsmoeFsMhZISAmaVN0vQ=="
},
"backup_RADB_4310k6ie_131_1_1": {
"etag": "176aea41-dd31-4404-99f4-ffd59c521fd3",
"last-modified": "Tue, 21 Jun 2022 14:57:40 GMT",
"opc-content-md5": "2ksAQ2UuU/75YyRKujlLXg=="
},
"backup_RADB_4410k6ie_132_1_1": {
"etag": "766c7585-3837-490b-8563-f3be3d24c98e",
"last-modified": "Tue, 21 Jun 2022 14:57:41 GMT",
"opc-content-md5": "sh4CFUC/vnxjmMZ5mfgT3Q=="
},
"backup_RADB_4510k6jh_133_1_1": {
"etag": "2de62d73-e44c-4f25-a41d-d45c556054dd",
"last-modified": "Tue, 21 Jun 2022 14:57:34 GMT",
"opc-content-md5": "4tVrHqwYG57STn9W6c2Mqw=="
},
"backup_RADB_4610k6jh_134_1_1": {
"etag": "4667419d-9555-4edb-bd6d-749a1ee7660b",
"last-modified": "Tue, 21 Jun 2022 14:57:29 GMT",
"opc-content-md5": "/MVdDn/vA2IXUcCmtdgKnw=="
},
"backup_RADB_4710k6jl_135_1_1": {
"etag": "d467810a-d62e-42b3-bf7b-019913707312",
"last-modified": "Tue, 21 Jun 2022 14:57:29 GMT",
"opc-content-md5": "hq8PEQ3PUwyTMWyUBfW4ew=="
}
}
}


Step #2 - Create the manifest for the backup pieces.


The next step covers creating the "metadata.xml" for each object, which is the manifest that the RMAN library uses to read the backup pieces.

Again this is broken down into a few different steps.

Step #2A - Download and configure the Oracle Database Cloud Backup Module.

The link for the instructions (which includes the download) can be found here.

I executed the jar file, which downloads/creates the following files.
  • libopc.so - This is the library used by the Cloud Backup module, and I downloaded it into  "/home/oracle/ociconfig/lib/" on my host
  • acmedb.ora - This is the configuration file for my database backup. This was created in "/home/oracle/ociconfig/config/" on my host
This information is used to allocate the channel in RMAN for the manifest.

Step #2b - Generate the manifest creation commands for each backup piece.

The next step is to dynamically create the script that builds the manifest for each backup piece. This needs to be done for each backup piece, and the command is

"send channel t1 'export backuppiece <object name>';"

The script I am using to complete this uses backup information from the controlfile of the database, and narrows the backup pieces to just the pieces in the directory I created for this backup.
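As a minimal sketch of that idea (the backup directory, output file name, and channel name are assumptions to adapt), a query against V$BACKUP_PIECE on the source database can generate one "send" command per uploaded object:

sqlplus -s / as sysdba <<EOF > manifest_commands.rman
set pagesize 0 linesize 200 feedback off
-- generate one "send" command per backup piece in the migration directory,
-- stripping the directory path so the object name matches what was uploaded
select 'send channel t1 ''export backuppiece '||
       substr(handle, instr(handle,'/',-1)+1)||''';'
  from v\$backup_piece
 where handle like '/acmedb/ocimigrate/%'
   and status = 'A';
EOF

The resulting manifest_commands.rman file can then be pasted into the run block shown in step #2c.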



Step #2c - Execute the script with an allocated channel.

The next step is to execute the script in RMAN within a run block after allocating a channel to the bucket in object storage. This needs to be done for each backup piece. You create a run block with one channel allocation followed by "send" commands.

NOTE: This does not have to be executed on the host that generated the backups.  In the example below, I set my ORACLE_SID to "dummy" and performed the manifest creation with the "dummy" instance started nomount.


Below is an example of allocating a channel to the object storage and creating the manifest for one of the backup pieces.



export ORACLE_SID=dummy
rman target /
RMAN> startup nomount;

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/19c/dbhome_1/dbs/initdummy.ora'

starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started

Total System Global Area 1073737792 bytes

Fixed Size 8904768 bytes
Variable Size 276824064 bytes
Database Buffers 780140544 bytes
Redo Buffers 7868416 bytes

RMAN> run {
allocate channel t1 device type sbt parms='SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
send channel t1 'export backuppiece backup_RADB_3r10k6ec_123_1_1';
}
2> 3> 4>
allocated channel: t1
channel t1: SID=19 device type=SBT_TAPE
channel t1: Oracle Database Backup Service Library VER=23.0.0.1

sent command to channel: t1
released channel: t1


Step #2d - Validate the manifest is created.

I logged into the OCI console, and I can see that there is a directory called "sbt_catalog". This is the directory containing the manifest files. Within this directory you will find a subdirectory for each backup piece. And within those subdirectories you will find a "metadata.xml" object containing the manifest.

Step #3 - Catalog the backup pieces.


The next step covers cataloging the backup pieces in OCI. You need to download the controlfile backup from OCI and start up the database in mount mode.

Again this is broken down into a few different steps.

Step #3A - Download and configure the Oracle Database Cloud Backup Module.

The link for the instructions (which includes the download) can be found here.

Again, you need to configure the backup module (or you can copy the files from your on-premise host).

Step #3b - Catalog each backup piece.

The next step is to dynamically create the script that catalogs each backup piece. This needs to be done for each backup piece, and the command is

"catalog device type 'sbt_tape' backuppiece '<object name>';"

The script I am using to complete this uses backup information from the controlfile of the database, and narrows the backup pieces to just the pieces in the directory I created for this backup.
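A similar sketch (same assumptions about the directory and names as in step #2b) can generate the catalog commands:

sqlplus -s / as sysdba <<EOF > catalog_commands.rman
set pagesize 0 linesize 200 feedback off
-- generate one "catalog" command per backup piece, again using only the
-- object name (the file name without the directory path)
select 'catalog device type ''sbt_tape'' backuppiece '''||
       substr(handle, instr(handle,'/',-1)+1)||''';'
  from v\$backup_piece
 where handle like '/acmedb/ocimigrate/%'
   and status = 'A';
EOF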



Step #3c - Execute the script with a configured channel.

I created a configured channel, and cataloged the backup pieces that are in the object store.


RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';


run {
catalog device type 'sbt_tape' backuppiece 'backup_RADB_3r10k6ec_123_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_3s10k6hh_124_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_3t10k6hj_125_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_3u10k6hj_126_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_3v10k6hj_127_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_4010k6hj_128_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_4110k6hk_129_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_4210k6id_130_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_4310k6ie_131_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_4410k6ie_132_1_1';
catalog device type 'sbt_tape' backuppiece 'backup_RADB_4510k6jh_133_1_1';
}

old RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

RMAN>
RMAN> 2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13>
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=406 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=23.0.0.1
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=22 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: Oracle Database Backup Service Library VER=23.0.0.1
allocated channel: ORA_SBT_TAPE_3
channel ORA_SBT_TAPE_3: SID=407 device type=SBT_TAPE
...
...
...
channel ORA_SBT_TAPE_4: SID=23 device type=SBT_TAPE
channel ORA_SBT_TAPE_4: Oracle Database Backup Service Library VER=23.0.0.1
channel ORA_SBT_TAPE_1: cataloged backup piece
backup piece handle=backup_RADB_4510k6jh_133_1_1 RECID=212 STAMP=1107964867

RMAN>


Step #3d - List the backup pieces cataloged

I performed a list backup summary to view the newly cataloged tape backup pieces.


RMAN> list backup summary;


List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
4220 B F A DISK 21-JUN-22 1 1 YES TAG20220621T141310
4258 B A A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141019
4270 B A A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141201
4282 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202
4292 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202
4303 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202
4315 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202
4446 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202
4468 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202
4490 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202
4514 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202
4539 B 0 A SBT_TAPE 21-JUN-22 1 1 YES TAG20220621T141202

RMAN>


Step #4 - Restore the database.


The last step is to restore the cataloged backup pieces. Remember that you might have to change the location of the datafiles.
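As a minimal sketch of the restore (the channel parameters match the ones used earlier in this post; the new datafile location is a placeholder, and SET NEWNAME is only needed if the directory structure changes), the run block could look like this:

run {
allocate channel t1 device type sbt parms='SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/acmedb.ora)';
set newname for database to '/u01/app/oracle/oradata/ACMEDB/%b';   # new datafile location (assumption)
restore database;
switch datafile all;
recover database;
}

Depending on how far forward you recover, an open resetlogs is typically the final step when restoring to a new environment.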



The same process can be used to upload and catalog additional archive logs and incremental backups to bring the database forward in time.




ZFSSA File Retention Authorizations

$
0
0
ZFS File Retention authorizations are important to understand if you plan on implementing retention lock on ZFS. This feature was added in release OS8.8.46, and there is a MOS note explaining how it works (2867335.1 - Understanding ZFS Appliance File Retention Policy).
In order to start using the new features, you need to grant some new authorizations that manage who can administer the new settings.  Be aware that these new authorizations are NOT granted to the administrator role.  You must add them to the administrator role or create an additional role.



ZFS file retention authorizations

The image above shows the File Retention Policies that can be set and which authorization is needed to administer each setting.

NOTE: The share must be created with file retention in order to have these settings take effect.  You cannot add file retention to an existing Project/Share.


Now let's go through the 3 Authorizations and what they allow the administrator to do.

retentionPeriods



When an administrator is granted the "retentionPeriods" authorization they are given the authority to administer 3 of the setting for file retention

  • "Minimum file retention period" - This is the minimum amount of time in the future that you can set a file retention to be. If you set the file retention date manually the retention time must be at least this far if not longer in the future. If you set the "Default file retention period", it must be at least the "Minimum file retention period" if not longer.  The default value for this setting is "0 seconds".
  • "Maximum file retention period"- This is the maximum amount of time in the future that you can set a file retention to be. If you set the file retention date manually the retention time must at most this far if not shorter in the future. If you set the "Default file retention period", it must be at most  the "maximum file retention period" if not shorter. The default value for this setting is "5 years".
  • "Default file retention period"- This is the default amount of time in the future that you can set a file retention to be.  This value has to fall within the minimum and maximum file retention period.  Unless this value is set to a value greater than "0 seconds" no files are locked by default.

NOTE : The most common method used to lock files is to set the "Default file retention period" to a value greater than "0 seconds". When this is set (and file retention is turned on), any files created will be locked for this period of time.

retentionAuto



When an administrator is granted the "retentionAuto" authorization they are given the authority to set the Automatic file retention grace period.
This value controls how long after the last access time the ZFS waits to lock the file.  The default setting is "0 seconds".  Until this value is set to a value greater than "0 seconds" no files are automatically locked (using the Default file retention period).  The only method to lock files when this value is left as "0", the default, is to manually lock files.

NOTE: A very important item to understand is that the ZFS locks the file once it has not been updated for this period of time. If you have a process that holds a file open without writing to it, for example an RMAN channel, the ZFS may lock the file before the process closes it.
Be sure to set the grace period to be longer than the amount of time a process may pause writing to a file.  DO NOT set it too short.  If you wish to lock a file immediately after you have finished writing to it (because you have a long grace period) you can remove the "w" bit from the files using chmod. This will bypass the grace period.
If the share is configured with mandatory retention, the automatic grace period cannot be increased, it can only be lowered.

retentionMandatory



When an administrator is granted the "retentionMandatory" authorization they are given the authority to create a share with a "mandatory (no override)" file retention.  This authorization is not necessary to create a "privileged override" file system.
Be aware that in order to create a file system with "mandatory" file retention, the "file retention" service must be running, the file system needs to be a mirrored configuration, and the ZFS must be configured with the following settings:

  • Remote root user login via the BUI/REST needs to be turned off in the HTTPS service
  • Remote root login via SSH needs to be turned off in the SSH service
  • NTP sync needs to be configured in the NTP service
  • NTP service needs to be on-line.

NOTE : You must ensure that the ZFS administrator is granted these authorizations before attempting to configure file retention. If the administration user is not granted the proper authorization you will see permission errors like the one below.



"You are not authorized to perform this action. If you wish to proceed, contact an administrator to obtain the proper credentials.






File Retention Lock on ZFSSA

$
0
0
File Retention Lock was recently released on ZFSSA, and I wanted to take the time to explain how to set the retention time and view the retention of locked files. Below is an example of what happens; you can see that the files are locked until January 1st, 2025.

ZFS Retention Lock


The best place to start for information on how this works is by looking at my last blog post on authorizations.


Grace period

The grace period is used to automatically lock a file when there have not been any updates to the file for this period of time.
If the automatic file retention grace period is "0" seconds, then the default retention is NOT in effect.


NOTE: Even with a grace period of "0" seconds, files can be locked by manually setting a retention period.  Also, once a grace period is set (non-"0"), it cannot be increased or disabled if there are files that can be affected.

Default retention

The most common method to implement retention is by using the default retention period. This takes effect when the grace period expires for a file.

zfs file retention lock


In the example above you can see that all files created on this share are created with a default retention of 1 day (24 hours).

Minimum/Maximum File retention

The next settings you see in the image above are the "minimum file retention period" and the "maximum file retention period".

These control the retention settings on files, which follow the rules below.

  • The default retention period for files MUST be at least the minimum file retention period, and not greater than the maximum file retention period

  • If the retention date is set manually on a file, the retention period must fall within the minimum and maximum retention period.

Display current Lock Expirations.

In order to display the lock expiration on Linux, the first thing you need to do is turn off the share/project setting "Update access time on read". Through the CLI this is a "set atime=false".


zfssa file retention lock

Once this setting is made, the client will display the lock time as the "atime". In my example at the top of this post, you can see that executing "ls -lu" displays the file lock time.

NOTE: You can also use the find command to search for files using the "atime". This will allow you to find all the locked files.
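For example, a sketch of such a search (the mount point is a placeholder) that lists files whose atime, and therefore lock expiration, is still in the future:

# list files that are currently locked (their atime is later than "now")
find /mnt/zfs_share -type f -newerat "$(date '+%Y-%m-%d %H:%M:%S')" -exec ls -lu {} \;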

Manually setting a retention date


It is possible to set a specific date/time that a file is locked until.

NOTE: If you try to change the retention date on a specific file, the new retention date has to be greater than the current retention date (and less than or equal to the maximum file retention period). This makes sense; you cannot lower the retention period for a locked file.

Now how do you manually set the retention date? Below is an example of how it is set for a file.


Setting File retention lock

There are 3 steps needed to lock the file with a specific lock expiration date.

1. Touch the file and set the access date. This can be done with
    • "-a" to change the access date/time
    • "-d" or "-t" to specify the date format
2. Remove the write bit with "chmod ugo-w"
3. Execute a chmod to make the file read only (for example "chmod a=r").

Below is an example where I am taking a file that does not contain retention, and setting the date to January 1, 2025.


First I am going to create a file and touch it setting the atime to a future data.

$echo 'xxxx'> myfile3.txt

$ls -al myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ touch -a -t "2501011200" myfile3.txt
$ ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jan 1 2025 myfile3.txt
$rm myfile3.txt
$ls -lu myfile3.txt
ls: cannot access myfile3.txt: No such file or directory


You can see that I set the "atime" and it display a future date, but I was still able to delete the file.

Now I am going to also remove the write bit before deleting.

$echo 'xxxx'> myfile3.txt

$ls -al myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ touch -a -t "2501011200" myfile3.txt
$ ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jan 1 2025 myfile3.txt
$chmod ugo-w myfile3.txt
$rm myfile3.txt
rm: remove write-protected regular file ‘myfile3.txt’? y
$ls -lu myfile3.txt
ls: cannot access myfile3.txt: No such file or directory


Still, I am able to delete the file. Finally, I am going to do all three steps.

$echo 'xxxx'> myfile3.txt

$ls -al myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jul 15 20:40 myfile3.txt

$ touch -a -t "2501011200" myfile3.txt
$ ls -lu myfile3.txt
-rw-r--r--. 1 nobody oinstall 5 Jan 1 2025 myfile3.txt
$chmod ugo-w myfile3.txt
$chmod a=r myfile3.txt
$rm myfile3.txt
rm: remove write-protected regular file ‘myfile3.txt’? y
rm: cannot remove ‘myfile3.txt’: Operation not permitted

To summarize, the complete set of commands to lock a file until a specific date is:

echo 'xxxx'> myfile3.txt
touch -a -t "2501011200" myfile3.txt
chmod ugo-w myfile3.txt
chmod a=r myfile3.txt


ZFSSA replicating locked snapshots to OCI for offsite backup

$
0
0

ZFSSA replication can be used to create locked offsite backups. In this post I will show you how to take advantage of the new "Locked Snapshot" feature of ZFSSA and the ZFS Image in OCI to create an offsite backup strategy to OCI.

ZFSSA Snapshot Replication
If you haven't heard of the locked snapshot feature of ZFSSA, I blogged about it here.  In this post I am going to take advantage of this feature and show you how you can leverage it to provide a locked backup in the Oracle Cloud using the ZFS image available in OCI.

In order to demonstrate this I will start by following the documentation to create a ZFS image in OCI as my destination.  Here is a great place to start with creating the virtual ZFS appliance in OCI.

Step 1 - Configure remote replication from source ZFSSA to ZFS appliance in OCI. 


By enabling the "Remote Replication" service with a named destination, "downstream_zfs" in my example, I can now replicate to my ZFS appliance in OCI.

zfssa remote replication


Step 2 -  Ensure the source project/share has "Enable retention policy for Scheduled Snapshots" turned on


For my example I created a new project "Blogtest".  On the "snapshots" tab I put a checkmark next to "Enable retention policy for Scheduled Snapshots".  By checking this, the project will prevent the deletion of any locked snapshots.  This property is replicated to the downstream and will cause the replicated project shares to also adhere to locking snapshots.  This can also be set at the individual share level if you wish to control the configuration of locked snapshots for individual shares.

Below you can see where this is enabled for snapshots created within the project.

ZFSSA Enable Snapshot Retention


Step 3 -  Create a snapshot schedule with "locked" snapshots


The next step is to create locked snapshots. This can be done at the project level (affecting all shares) or at the share level. In my example below I gave the scheduled snapshots a label "daily_snaps".  Notice that in my example I am keeping only 1 snapshot and I am locking the snapshot at the source. In order for the snapshot to be locked at the destination:
  • Retention Policy MUST be enabled for the share (or inherited from the project).
  • The source snapshot MUST be locked when it is created
zfssa create snapshots

Step 4 -  Add replication to downstream ZFS in OCI

The next step is to add replication to the project  configuration to replicate the shares to my ZFS in OCI. Below you can see the target is my "downstream_zfs" that I configured in the "Remote Replication" service.
You can also see that I am telling the replication to "include snapshots", which are my locked snapshots, and also to "Retain user snapshots on target".  Under "Disaster Recovery" you can see that I am telling the downstream to keep a 30 day recovery point.  Even though I am only keeping 1 locked snapshot on the source, I want to keep 30 days of recovery on the downstream in OCI.

ZFSSA add replication

Step 5 -  Configure snapshots to replicate

In this step I am updating the replication action to replicate the locked scheduled snapshot to the downstream.  Notice that I changed the number of snapshots from 1 (on the source) to 30 on the destination, and I am keeping the snapshot retention locked. This will ensure that the daily locked snapshot taken on the source will replicate to the destination as a locked snapshot, and 30 snapshots on the destination will remain locked.  The 31st snapshot is no longer needed.

ZFSSA Autosnap replication


Step 6 -  Configure the replication schedule

The last step is to configure the replication schedule. This ensures that the snapshots configured for replication are sent regularly to the downstream. You can make this more aggressive than daily if you wish the downstream to be more in sync with the primary.  In my example below I configured the replication to occur every 10 minutes. This means that the downstream should have all updates as of 10 minutes ago or less. If I need to go back in time, I will have daily snapshots for the last 30 days that are locked and cannot be removed.

ZFSSA Replication Schedule

Step 7 -  Validate the replication


Now that I have everything configured I am going to take a look at the replicated snapshots on my destination.  I navigate to "shares" and I look under "replicat" and find my share. By clicking on the pencil and looking at the "snapshots" tab I can see my snapshot replicated over.

zfssa downstream copy

And when I click on the pencil next to the snapshot I can see that the snapshot is locked and I can't unlock it.

zfssa downstream locked



From there I can clone the snap and create a local snapshot, back it up to object storage, or reverse the replication if needed.



OCI Database backups with retention lock

$
0
0

 OCI Object Storage provides both lifecycle rules and retention lock.  How to take advantage of both these features isn't always as easy as it looks.

 In this post I will go through an example customer request and how to implement a backup strategy to accomplish the requirements.

OCI Buckets

This image above gives you an idea of what they are looking to accomplish.

Requirements

  • RMAN retention is to keep a 14 day point in time recovery window
  • All long term backups beyond 14 days are cataloged as KEEP backups
  • All buckets are protected with a retention rule to prevent backups from being deleted before they become obsolete
  • Backups are moved to lower tier storage when appropriate to save costs.

Backup strategy

  • A full backup is taken every Sunday at 5:30 PM and this backup is kept for 6 weeks.
  • Incremental backups are taken Monday through Saturday at 5:30 PM and are kept for 14 days
  • Archive log sweeps are taken 4 times a day and are kept for 14 days
  • A backup is taken the 1st day of the month at 5:30 PM and this backup is kept for 13 months.
  • A full backup is taken following the Tuesday morning bi-weekly payroll run and is kept for 7 years
This sounds easy enough.  If you look at the image above you can see what this strategy looks like in general. I took this strategy and mapped it to the 4 buckets, showing how they would be configured and what they would contain, in the image below.

OCI Object rules


Challenges


As I walked through this strategy I found that it involved some challenges. My goal was to limit the number of full backups and take advantage of current backups.  Below are the challenges I realized exist with this schedule:
  • The weekly full backup taken every Sunday is kept for longer than the incremental backups and archive logs. This caused 2 problems
  1. I wanted to make this backup a KEEP backup that is kept for 6 weeks before becoming obsolete.  Unfortunately KEEP backups are ignored as part of an incremental backup strategy. I could not create a weekly full backup that was both a KEEP backup and also part of the incremental backup strategy.
  2. Since the weekly full backup is kept longer than the archive logs, I need to ensure that this backup contains the archive logs needed to defuzzy the backup without containing too many unneeded archive logs
  • The weekly full backup could fall on the 1st of the month. If this is the case it needs to be kept for 13 months otherwise it needs to be kept for 6 weeks.
  • I want the payrun backups to be immediately placed in archival storage to save costs.  When doing a restore I want to ignore these backups as they will take longer to restore.
  • When restoring and recovering the database within the 14 day window I need to include channels allocated to all the buckets that could contain those backups: 14_DAY, 6_WEEK, and 13_MONTH.
Solutions

    I then worked through how I would solve each issue.

    1. Weekly full backup must be both a normal incremental backup and a KEEP backup - After doing some digging, I found the best way to handle this issue was to CHANGE the backup to a KEEP backup with either a 6 week retention or a 13 month retention, instead of the normal NOKEEP type. By using tags I can identify the backup I want to change once it is no longer needed as part of the 14 day strategy.
    2. Weekly full backup contains only archive logs needed to defuzzy - The best way to accomplish this task is to perform an archive log backup to the 14_DAY bucket immediately before taking the weekly full backup
    3. Weekly full backup requires a longer retention - This can be accomplished by checking if the full backup is being executed on the 1st of the month. If it is the 1st, the full backup will be placed in the 13_MONTH bucket.  If it is not the 1st, this backup will be placed in the 6_WEEK bucket.  This backup will be created with a TAG with a format that can be used to identify it later.
    4. Ignore bi-weekly payrun backups that are in archival storage - I found that if I execute a recovery and do not have any channels allocated to the 7_YEAR bucket, it may try to restore this backup, but it will not find it and will move on to the previous backup. Using tags will help identify that a restore from the payrun backup was attempted and ultimately bypassed.
    5. Include all possible buckets during restore - By using a run block within RMAN I can allocate channels to different buckets and ultimately include channels from all 3 appropriate buckets.
    Then as a check I drew out a calendar to walk through what this strategy would look like.

    OCI backup schedule


    Backup examples

    Finally, I have included examples of what this would look like.

    Mon-Sat 5:30 backup job



    dg=$(date +%Y%m%d)
    rman <<EOD
    run {
    ALLOCATE CHANNEL daily1 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
    ALLOCATE CHANNEL daily2 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
    backup incremental level 1 database tag="full_backup_${dg}" plus archivelog tag="full_backup_${dg}";
    }
    exit
    EOD

    Sun 5:30 backup job schedule

    1) Back up the archive logs to the 14_DAY bucket first



    dg=$(date +%Y%m%d:%H)
    rman <<EOD
    run {
    ALLOCATE CHANNEL daily1 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
    ALLOCATE CHANNEL daily2 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
    backup archivelog tag="full_backup_${dg}";
    }
    exit
    EOD

    2a) If this is the 1st of the month, then execute this script to send the full backup to the 13_MONTH bucket


    dg=$(date +%Y%m%d)
    rman <<EOD
    run {
    ALLOCATE CHANNEL monthly1 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/13_MONTH.ora)';
    ALLOCATE CHANNEL monthly2 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/13_MONTH.ora)';
    backup incremental level 1 database tag="full_backup_${dg}" plus archivelog tag="full_backup_${dg}";
    }
    exit
    EOD


    2b) If this is NOT the 1st of the month execute this script and send the full backup to the 6_WEEK bucket

    dg=$(date +%Y%m%d)
    rman <<EOD
    run {
    ALLOCATE CHANNEL weekly1 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/6_WEEK.ora)';
    ALLOCATE CHANNEL weekly2 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/6_WEEK.ora)';
    backup incremental level 1 database tag="full_backup_${dg}" plus archivelog tag="full_backup_${dg}";
    }
    exit
    EOD
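
    A hypothetical wrapper for the weekly job could make the 2a/2b decision with a simple day-of-month check (the script names are placeholders for the two blocks above):

    # run the 13_MONTH version on the 1st of the month, the 6_WEEK version otherwise
    if [ "$(date +%d)" = "01" ]; then
        /home/oracle/scripts/weekly_full_13_month.sh    # script from step 2a
    else
        /home/oracle/scripts/weekly_full_6_week.sh      # script from step 2b
    fi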


    3a) 14 days later, if the full backup being changed was taken on the 1st of the month, change it to a 13 month retention


    dg=$(date --date "-14 days" +%Y%m%d)
    rman <<EOD
    CHANGE BACKUPSET TAG="full_backup_${dg}" keep until time 'sysdate + 390';
    EOD

    3b) Otherwise, change the full backup taken 14 days ago to a 6 week retention


    dg=$(date --date "-14 days" +%Y%m%d)
    rman <<EOD
    CHANGE BACKUPSET TAG="full_backup_${dg}" keep until time 'sysdate + 28';
    EOD

    Tuesday after payrun backup job 



    dg=$(date +%Y%m%d)
    rman <<EOD
    run {
    ALLOCATE CHANNEL yearly1 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/7_YEAR.ora)';
    ALLOCATE CHANNEL yearly2 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/7_YEAR.ora)';
    backup incremental level 1 database tag="payrun_backup_${dg}" plus archivelog tag="full_backup_${dg}";
    }
    exit
    EOD


    Restore example

    Now in order to restore, I need to allocate channels to all the possible buckets. Below is the script I used  to validate this with a "restore database validate" command.


    run {
    ALLOCATE CHANNEL daily1 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
    ALLOCATE CHANNEL daily2 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/14_DAY.ora)';
    ALLOCATE CHANNEL weekly1 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/6_WEEK.ora)';
    ALLOCATE CHANNEL weekly2 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/6_WEEK.ora)';
    ALLOCATE CHANNEL monthly1 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/13_MONTH.ora)';
    ALLOCATE CHANNEL monthly2 DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/cloudbackup/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/ociconfig/config/13_MONTH.ora)';
    restore database validate;
    }


    Below is what I am seeing in the RMAN log because I picked a point in time where I want it to ignore the 7_YEAR backups.

    In this case you can see that it tried to retrieve the payrun backup but failed over to the previous backup with tag "FULL_073122". This is the backup I want.


    channel daily1: starting validation of datafile backup set
    channel daily1: reading from backup piece h613o4a4_550_1_1
    channel daily1: ORA-19870: error while restoring backup piece h613o4a4_550_1_1
    ORA-19507: failed to retrieve sequential file, handle="h613o4a4_550_1_1", parms=""
    ORA-27029: skgfrtrv: sbtrestore returned error
    ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
    KBHS-07502: File not found
    KBHS-01404: See trace file /u01/app/oracle/diag/rdbms/acmedbp/acmedbp/trace/sbtio_4819_140461854265664.log for det
    failover to previous backup

    channel daily1: starting validation of datafile backup set
    channel daily1: reading from backup piece gq13o3rm_538_1_1
    channel daily1: piece handle=gq13o3rm_538_1_1 tag=FULL_073122
    channel daily1: restored backup piece 1
    channel daily1: validation complete, elapsed time: 00:00:08


    That's all there is to it. Tags are very helpful for identifying the correct backups.



    ZFSSA File Retention and Snapshot Retention provide protection for RMAN incremental merge backups.

    $
    0
    0
    File Retention Lock and Snapshot Retention Lock are great new features on ZFSSA that can help protect your backups from deletion and help you meet regulatory requirements. Whether it is an accidental deletion or a bad actor attempting to corrupt your backups, they are protected.

    In this post I am going to walk through how to implement File Retention and Snapshot Retention together to protect an RMAN incremental merge backup from being deleted.

     Why do I need both? 

    The first question you might have is why do I need both File Retention and Snapshot Retention to protect my backups? RMAN incremental merge backups consist of 3 types of backup pieces.

     FILE IMAGE COPIES - Each day when the backup job is executed, the image copy of each datafile is updated by recovering the datafile with an incremental backup. This moves the image copy of each datafile forward one day using the changed blocks from the incremental backup. The backup files containing the image copies of the datafiles need to be updatable by RMAN.

    INCREMENTAL BACKUP- Each day a new incremental backup (differential) is taken. This incremental backup contains the blocks that changed in the database files since the previous incremental backup. Once created this file does not change. 

     ARCHIVE LOG BACKUPS- Multiple times a day, archive log backups (also known as log sweeps) are taken. These backup files contain the change records for the database and do not change once written. 


     How to leverage both retention types 


     SNAPSHOT RETENTION can be used to create a periodic restorable copy of a share/project by saving the unique blocks as of the time each snapshot is taken. These periodic snapshots can be scheduled on a regular basis. With snapshot retention, the snapshots are locked from being deleted, and the schedule itself is locked to prevent tampering with the snapshots. This is perfect for ensuring we have a restorable copy of the datafile images each time they are updated by RMAN.

    FILE RETENTION can be used to lock both the incremental backups and the archive log backups. Both types of backup files do not change once created and should be locked to prevent removal or tampering with for the retention period. 


     How do I implement this ? 

    First I am going to create a new project for my backups named "DBBACKUPS". Of course you could create 2 different projects. Within this project I am going to create 2 shares with different retention settings.

     FULLBACKUP - Snapshot retention share 

     My image copy backups are going to a share that is protected with snapshot retention. The documentation on where to start with snapshot retention can be found here. In the example below I am keeping 5 days of snapshots, and I am locking the most recent 3 days of snapshots. This configuration will ensure that I have locked image copies of my database files for the last 3 days. 

     NOTE: Snapshots only contain the unique blocks since the last snapshot, but still provide a FULL copy of each datafile. The storage used to keep each snapshot is similar to the storage needed for each incremental backup.

    ZFSSA snapshot retention settings for /fullbackup




     DAILYBACKUPS - File Retention share 

    My incremental backups and archivelog backups are going to a share with File Retention. The files (backup pieces) stored on this share will be locked from being modified or deleted. The documentation on where to start with File Retention can be found here.

     NOTE: I chose the "Privileged override" file retention policy. I could have chosen "Mandatory" file retention policy if I wanted to lock down the backup pieces even further. 

     In the example below I am retaining all files for 6 days. 

    ZFSSA file retention settings for /dailybackups



    DAILY BACKUP SCRIPT 


    Below is the daily backup script I am using to perform the incremental backup, and the recovery of the image copy datafiles with the changed blocks. You can see that I am allocating channels to "/fullbackup" which is the share configured with Snapshot Retention, and the image copy backups are going to this share. The incremental backups are going to "/dailybackups" which is protected with File Retention. 

    run {
    ALLOCATE CHANNEL Z1 TYPE DISK format '/fullbackup/radb/DATA_%N_%f.dbf';
    ALLOCATE CHANNEL Z2 TYPE DISK format '/fullbackup/radb/DATA_%N_%f.dbf';
    ALLOCATE CHANNEL Z3 TYPE DISK format '/fullbackup/radb/DATA_%N_%f.dbf';
    ALLOCATE CHANNEL Z4 TYPE DISK format '/fullbackup/radb/DATA_%N_%f.dbf';
    ALLOCATE CHANNEL Z5 TYPE DISK format '/fullbackup/radb/DATA_%N_%f.dbf';
    ALLOCATE CHANNEL Z6 TYPE DISK format '/fullbackup/radb/DATA_%N_%f.dbf';

    backup
    section size 32G
    incremental level 1
    for recover of copy with tag 'DEMODBTEST' database FORMAT='/dailybackups/radb/FRA_%d_%T_%U.bkp';
    recover copy of database with tag 'DEMODBTEST' ;
    RELEASE CHANNEL Z1;
    RELEASE CHANNEL Z2;
    RELEASE CHANNEL Z3;
    RELEASE CHANNEL Z4;
    RELEASE CHANNEL Z5;
    RELEASE CHANNEL Z6;
    }


     ARCHIVELOG BACKUP SCRIPT 

    Below is the log sweep script that will perform the periodic backup of archive logs and send them to the "/dailybackups" share which has File Retention configured. 

    run {
    ALLOCATE CHANNEL Z1 TYPE DISK format '/dailybackups/radb/ARCH_%U.bkup';
    ALLOCATE CHANNEL Z2 TYPE DISK format '/dailybackups/radb/ARCH_%U.bkup';
    ALLOCATE CHANNEL Z3 TYPE DISK format '/dailybackups/radb/ARCH_%U.bkup';
    ALLOCATE CHANNEL Z4 TYPE DISK format '/dailybackups/radb/ARCH_%U.bkup';
    ALLOCATE CHANNEL Z5 TYPE DISK format '/dailybackups/radb/ARCH_%U.bkup';
    ALLOCATE CHANNEL Z6 TYPE DISK format '/dailybackups/radb/ARCH_%U.bkup';


    backup
    section size 32G
    filesperset 32
    archivelog all;
    RELEASE CHANNEL Z1;
    RELEASE CHANNEL Z2;
    RELEASE CHANNEL Z3;
    RELEASE CHANNEL Z4;
    RELEASE CHANNEL Z5;
    RELEASE CHANNEL Z6;
    }




     RESULT: 

    This strategy will ensure that I have 5 days of untouched full backups available for recovery. It also ensures that I have 6 days of untouched archive logs, and incremental backups that can be applied if necessary. This will protect my RMAN incremental merge backups using a combination of Snapshot Retention for backup pieces that need to be updated, and File Retention for backup pieces that will not change.

    Estimated space for Compliance Window on RA

    $
    0
    0

     In this post  I will go through how to estimate how much space you need to store backups on the Recovery Appliance to meet your Compliance Window.

    This is critical to understand, since compliance protected backups cannot be removed from the RA, and if all space is utilized to meet Compliance Windows, new backups will be refused.


    First a bit about Compliance window.


    COMPLIANCE WINDOW

    Compliance Window is set at the Policy level.  All databases within that policy will inherit the Compliance Window going forward.  Below is some more detail you need to know on Compliance Window.

    • The Compliance Window cannot be greater than the Recovery Window Goal
    • You cannot set the Policy to "Auto Tune" reserve space when setting a Compliance Window. You must manage the reserve space as you did in the past.
    • The Compliance Window can be adjusted up or down once set, but this will not affect any previous backups. Backups previously created observe the Compliance Window in effect when the backup was created.
    • The RA does not have to be in Compliance Mode (disabled direct root access) in order to set a Compliance Window.

    Space management for Compliance Window

    Reserved Space

    If you are familiar with reserved space, then you understand how that can help.  Reserved space is set for each database, and is the estimate of how much is needed to meet the Recovery Window Goal.  The major points to understand with reserved space are
    • The sum of all reserved space cannot be greater than the usable space on the RA.
    • Reserved space is used during space pressure to determine which databases will not be able to keep their recovery window goal. Databases with reserved space less than what is needed will have their older backups purged first.
    • Reserved space should be either
      • About 10% greater than the space needed to meet the recovery window goal
      • The high water mark of space needed during large volume updates (Black Friday sales for example).
    By setting the reserved space for each database to be 10% larger than the space needed to meet the recovery window goal, you can alert when the Recovery Appliance cannot accept new databases to be backed up.  If all reserved space is allocated, then the Recovery Appliance is 90% full.

    Recovery Window Goal

    Within each policy you set a recovery window goal. This is a "goal" and if you run into space pressure, backups can be deleted from databases with insufficient reserved space (noted in the previous section).
    The recommendation is to set the Compliance Window smaller than Recovery Window Goal if all databases are being protected.
    By setting the recovery window goal smaller, you can alert when the required space to meet the recovery window goal is not available on the Recovery Appliance.  This will give you time to determine the space issue and take corrective action.


    Compliance Window


    Within each policy you can set a Compliance Window. This will lock any backups for the protected databases from being deleted, and will prevent the database from being removed from the Recovery Appliance as long as it has backups that fall under compliance.  Since these backups cannot be removed, and the database cannot be removed, it is critical that you do not reach the point where all storage is utilized by compliant backups.

    ESTIMATING COMPLIANCE SPACE

    As you can tell by reading through how this works, it is critical to understand the space needed for compliant backups. 
    The recommendation for estimating the space needed is to utilize the DBMS_RA.ESTIMATE_SPACE function.
    Unfortunately, with release 21.1 you cannot call this function from within a SQL statement; you will receive the following error.

    Select dbms_ra.estimate_space ('TIMSP' , numtodsinterval(45,'day')) from dual
    *
    ERROR at line 1:
    ORA-14551: cannot perform a DML operation inside a query
    ORA-06512: at "RASYS.DBMS_RA_MISC", line 5092
    ORA-06512: at "RASYS.DBMS_RA", line 1204
    ORA-06512: at line 1


    In order to help everyone calculate the space needed, I came up with a code snippet that can give you the data you need.
    Using the snippet below and setting the variable for the compliance window, you can create an HTML report that will show you the estimate for the space needed.
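
    As a minimal sketch of that approach (the 45 day window is an example, and the output formatting is simplified compared to the HTML report), DBMS_RA.ESTIMATE_SPACE can be called from a PL/SQL loop over the protected databases:

    set serveroutput on
    declare
      l_window   interval day(3) to second := numtodsinterval(45, 'day');  -- compliance window to evaluate
      l_estimate number;
    begin
      for db in (select db_unique_name from ra_database order by db_unique_name) loop
        l_estimate := dbms_ra.estimate_space(db.db_unique_name, l_window);
        dbms_output.put_line(rpad(db.db_unique_name, 30)||' estimated space: '||l_estimate);
      end loop;
    end;
    /

    Comparing each estimate to the database's reserved space shows where a compliance window would leave you exposed.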




    What the output looks like is below.  Note you can adjust the compliance window you want to look at.



    This should allow you to look at the effect of setting a compliance window and compare it to the reserved space and the recovery window goal, database by database, policy by policy, and as a whole.



    ZDLRA - Quick Start Guide

    $
    0
    0

     This post is intended to be a Quick Start Guide for those who are new to ZDLRA (RA for short).  I spend part of my time working with customers who are new to the RA, and often the same topics/questions come up.  I wanted to put together a "Quick Start" guide that they can use to learn more about these common topics.


    ZDLRA Quick Start


    The steps I would follow for anyone new to the RA are.


    1. Read through the section on configuring users and security settings for the RA. Decide which compliance settings make sense for the RA and come with a plan to implement them.
    2. Identify the users, both OS users (if you are disabling direct root access) and users within the databases that will manage and/or monitor the RA. OS users can be added with "racli add admin_user". Database users can be added with "racli add db_user".
    3. Create protection policies that contain the recovery window(s) that you want to set for the databases. You will also set compliance windows when creating policies. This can be done manually using the package DBMS_RA.CREATE_PROTECTION_POLICY.
    4. Identify the VPC user(s) needed to manage the database. Is it a single DBA team, or different teams requiring multiple VPC users? Create the VPC user using "racli add vpc_user"
    5. Add databases to be backed up to the RA, associate the database with both a protection policy and a VPC user who will be managing the database. NOTE that you should look at the Reserved Space, and adjust it as needed.  Databases can be added manually by using two PL/SQL calls. DBMS_RA.ADD_DB will add the database to the RA. DBMS_RA.GRANT_DB_ACCESS will allow the VPC user to manage the database.
    6. Configure the database to be backed up to the RA either by using OEM or manually. The manual steps (sketched after this list) would be:
      • Create a wallet on the DB client that contains the VPC credentials to connect to the RA.
      • Update the sqlnet.ora file to point to this wallet
      • Connect to the RMAN catalog on the RA from the DB client
      • Register the database to the RA
      • Configure the channel configuration to point to the RA
      • Configure Block change tracking (if it is not configured).
      • Configure the redo destination to point to the RA if you want to configure real-time redo.
      • Change the RMAN retention to be "applied to all standby" if using real-time redo, or "backed up 1 time" if not.
      • Update OEM to have the database point to the RMAN catalog on the ZDLRA.
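
    As a rough sketch of steps 3, 5, and 6 (all names - the policy, storage location, database, VPC user, wallet path, and scan address - are placeholders, and the parameters should be checked against the DBMS_RA documentation):

    -- On the RA, as RASYS: create a protection policy, add a database, and grant the VPC user access
    BEGIN
      DBMS_RA.CREATE_PROTECTION_POLICY(
        protection_policy_name => 'GOLD_14DAY',
        storage_location_name  => 'DELTA',
        recovery_window_goal   => INTERVAL '14' DAY);
      DBMS_RA.ADD_DB(
        db_unique_name         => 'ACMEDB',
        protection_policy_name => 'GOLD_14DAY',
        reserved_space         => '500G');
      DBMS_RA.GRANT_DB_ACCESS(
        username               => 'VPCUSER1',
        db_unique_name         => 'ACMEDB');
    END;
    /

    # On the protected database host: add the VPC credentials to a wallet that
    # sqlnet.ora already points to, then register the database and configure the channel
    mkstore -wrl /u01/app/oracle/wallet -createCredential "ra-scan:1521/zdlra:dedicated" VPCUSER1

    rman target / catalog VPCUSER1@"ra-scan:1521/zdlra:dedicated" <<EOF
    register database;
    CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS "SBT_LIBRARY=/u01/app/oracle/product/19c/dbhome_1/lib/libra.so, ENV=(RA_WALLET='location=file:/u01/app/oracle/wallet credential_alias=ra-scan:1521/zdlra:dedicated')";
    EOF

    The credential alias used in the channel configuration must match the alias created in the wallet with mkstore.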

    Documentation

    The documentation can be found here. Within the documentation there are several sections that will help you manage the RA.

    Get Started 

    The get started section contains some subtopics to delve into

    Install and configure the Recovery Appliance

    The links in this section cover all the details about the installation and configuration of the RA.  I won't be talking about those sections in the post, but be aware this is where to look for general maintenance/patching/expanding information.

    Learn about the Recovery Appliance.

    This section covers an overview of the RA, and is mostly marketing material. If you are not familiar with the RA, or want an overview this is the place to turn.

    Administer the Recovery Appliance


    These sections are going to be a lot more helpful to get you started. This section of the documentation covers 

    Managing Protection Policies - Protection policies are the place to start when configuring an RA. Protection policies group databases together, and it is critical to make sure you have the correct protection policies in place before adding databases to be backed up.

    Copying Backups to Tape - This section is useful if you plan on creating backups (either point in time or archival) that will be sent externally from the RA. This can be either to physical/virtual tape, or to an external media manager.

    Archiving Backups to the Cloud - This section covers how to configure the RA to send backups to an OCI compatible object storage.  This can either be OCI, or it can be an on-premises ZFS that has a project configured as OCI object storage.

    Accessing Recovery Appliance Reports - This section covers how to access all the reports available to you.  You will find these reports are priceless to manage the RA over time. Some examples of the areas these reports cover are.
    • Storage Capacity Planning reports with future usage projections
    • Recovery Window Summary reports to validate backups are available
    • Active incident reports to manage any alerts
    • API History Report to audit any changes to the RA
    NOTE : If you are using the RA in a chargeback model for your internal business units, there is specific reporting that can be used for this. Talk to your Oracle team to find out more.

    Monitoring the Recovery Appliance - This section covers how to monitor the RA and set up alerts. This will allow you to identify any issues that would affect the recovery of the backups, including space issues and missing backups.


    Administer the Recovery Appliance

    Configure Protected Databases - This section goes through how to configure databases to be backed up to the recovery appliance and includes instructions for both using OEM, and adding databases using the command line.

    Backup Protected Databases - This section covers how to backup a database from either OEM, or from the traditional RMAN command line. I would also recommend looking at the MOS note to ensure that you are using the current best practices for backups. "RMAN best practice recommendations for backing up to the Recovery Appliance (Doc ID 2176686.1)".

    Recover Databases - This section covers how to recover databases from the RA. This section also covers information about cloning databases. Cloning copies of production is a common use case for the RA, and this section is very useful to help you with this process.


    Books

    This section contains the documentation you look at regularly to manage the RA and answer questions that you may have on managing it.  I am only going to point the sections that you find most useful.


    Deployment

    The one important section under deployment is the Zero Data Loss Recovery Appliance Owners Guide.

    Zero Data Loss Recovery Appliance Owners Guide - This guide contains information on configuring users on the RA, and the most critical sections to look at are

    •  "Part III Security and Maintenance of Recovery Appliance".   If you are using the RA to manage immutable backups, it is important to go through this section to understand how users will be managed for maximum protection of your backups.
    • Part IV Command Reference - This section covers the CLI commands you will use to manage the RA.

    Administration

    This is probably the most important guide in the documentation. It covers many of the areas you will be managing as you configure databases to be backed up.  The most critical sections are

    Part I Managing Recovery appliance - This section covers
    • Implementing Immutable Backups
    • Securing the Recovery Appliance operations
    • Managing Protection Policies
    • Configuring replication and replication concepts
    • Additional High Availability strategies
    Part III Recovery Appliance Reference - This section covers
    • DBMS_RA packages to manage the RA through commands
    • Recovery Appliance View Reference to see what views are available

    MOS Notes

    There are number of useful MOS notes that you will want to bookmark

    • Zero Data Loss Recovery Appliance (ZDLRA) Information Center (Doc ID 2673011.2)
    • How to Backup and Recover the Zero Data Loss Recovery Appliance (Doc ID 2048074.1)
    • Zero Data Loss Recovery Appliance Supported Versions (Doc ID 1927416.1)
    • Zero Data Loss Recovery Appliance Software Updates Guide (Doc ID 2028931.1)
    • Cross Platform Database Migration using ZDLRA (Doc ID 2460552.1)
    • How to Move RMAN Catalog To A Different Database (Doc ID 351918.1)

    Helpful Blogs

    Fernando Simon

    Fernando has a number of helpful blog entries. Be aware he has been blogging for a long time on the RA, and some of the management processes have changed. One example is RACLI is now used to create VPC users. Some of the Blogs to note are

    Bryan Grenn


    I have a number of blog posts on features of the ZDLRA.










    Oracle Database recovery using Incremental merge, snapshots, OMF and "switch to copy"


    I work with backup and recovery of the Oracle Database, and sometimes this means looking at the Incremental Merge backup strategy.  I know this isn't the best backup/recovery strategy, and below are a few posts giving you more detail on the topic.

    They have some great points, and I typically don't recommend using incremental merge backups.  The incremental merge backup strategy is almost always paired with snapshots to increase the recovery window.

    Below is an image of how these are typically paired with snapshots.



    One of the biggest draws of using the incremental merge strategy with snapshots, is the ability to perform a "switch to copy" as a recovery strategy.

    NOTE: When you perform a "switch to copy", the database is then accessing its datafiles from the backup copy.  This is not supported on Exadata for any storage other than Oracle ZFS.

    If you review the MOS note "Using External Storage with Exadata (Doc ID 2663308.1)" you will find that "Use of non-Oracle storage for database files is not supported."

    Given all of that, I got the question: "I am using the incremental merge strategy on an Oracle ZFS appliance using snapshots. If I perform a switch-to-copy recovery of one or more datafiles, how do I avoid forcing a new full backup on the next incremental merge backup?".

    I thought this was a great question, and I created a test database, and started googlin'.  Below are some of the posts I looked at.

    I started by using the first post and walked through a testing scenario using a DBCS database in OCI.
    My database was a 19.8 database using local storage (to make it easier to see the datafiles), and it was using OMF by default.
    The piece that was missing from the first post was the "alter database move datafile 'xxx' to 'xxx' KEEP;" command.

    What I found is that it wasn't so easy with my database using OMF, for two reasons.
    1. Using OMF, you don't specify the "to 'xxx'" clause, since OMF will automatically name the destination datafile.
    2. Using "KEEP" is ignored when the source file is OMF.  This meant that the original image copy being used by the database was removed when the move process completed, and I couldn't catalog the image copy.
    Since it took a bit of research to find the best strategy, I wanted to share the process that I would recommend when dealing with OMF and non-OMF image copy backups with snapshots.

    NON-OMF image copy backups

    1. Snap the backup storage just to preserve the starting point. --> optional but recommended
    2. Take the tablespaces offline
    3. Perform a "switch to copy" of the datafiles --> This will use the incremental merge backup.
    4. Recover the datafiles
    5. Bring the tablespaces online ---> Application is running using the external image copy
    6. Perform an "alter database move datafile 'xxx' to 'xxx' KEEP;" --> Using KEEP will preserve the original copy, but it will only work if the image copy is NOT OMF. If the destination is an OMF file, you do not specify the "to" clause.
    7. Catalog the image copy that was preserved with the "KEEP" ensuring you use the same tag used for the incremental merge. "catalog datafilecopy '+fra/ocm121/datafile/MONSTER.346.919594959' level 0 TAG 'incr_update';"
    8. The next incremental merge will pick up with the updated image copy.
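    To make the non-OMF flow concrete, here is a minimal sketch of the commands for a single datafile. The datafile number, tablespace name, and file paths are placeholders of my own; substitute your own values, and make sure the tag matches the tag used by your incremental merge backups.

    RMAN> sql 'alter tablespace users offline immediate';
    RMAN> switch datafile 5 to copy;     # database now runs off the image copy on backup storage
    RMAN> recover datafile 5;            # bring the image copy current
    RMAN> sql 'alter tablespace users online';
    SQL>  alter database move datafile '/backup/PROD/users01.dbf' to '/u01/oradata/PROD/users01.dbf' keep;  -- KEEP preserves the image copy
    RMAN> catalog datafilecopy '/backup/PROD/users01.dbf' level 0 tag 'incr_update';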

    OMF image copy backups

    1. Snap the backup storage to create a copy for the switch to copy.
    2. Unmount the "current" image copy 
    3. Mount the snap copy using the same mount point as the "current" image copy.
    4. Take the tablespaces offline
    5. Perform a "switch to copy" of the datafiles --> This will use the snap copy of the incremental merge backup on external storage.
    6. Recover the datafiles
    7. Bring the tablespaces online ---> Application is running using external copies.
    8. Perform an "alter database move datafile 'xxx';" --> Since the source is an OMF file you cannot use "KEEP" to preserve the original copy. The original copy will be removed.
    9. Once all moves are complete, unmount the snapped copy.
    10. Mount the "current" copy. This is as of when you started this process.
    11. Catalog the image copy for all datafiles that performed the "switch to copy" ensuring you use the same tag used for the incremental merge "catalog datafilecopy '+fra/ocm121/datafile/MONSTER.346.919594959' level 0 TAG 'incr_update';"
    12. You can now destroy the snap that was created to perform the switch to copy.
    13. The next incremental merge will pick up with the current image datafile copies where it left off.
    As you can see, using OMF greatly complicates preserving the incremental merge backup, and forces you to start at the last backup.
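    For the OMF case, a minimal sketch of the database-side commands might look like the following. Again, the datafile number, paths, and tag are placeholders of my own, and the tag must match your incremental merge tag; the snap, unmount, and mount steps happen on the storage and OS side and are only noted as comments.

    # storage side: snap the backup share, unmount the "current" copy, mount the snap at the same mount point
    RMAN> sql 'alter tablespace users offline immediate';
    RMAN> switch datafile 5 to copy;
    RMAN> recover datafile 5;
    RMAN> sql 'alter tablespace users online';
    SQL>  alter database move datafile '/backup/PROD/datafile/o1_mf_users_xxxxxxxx_.dbf';  -- OMF source: no TO clause, no KEEP; this copy is removed
    # storage side: unmount the snap, remount the "current" copy, then re-catalog it
    RMAN> catalog datafilecopy '/backup/PROD/datafile/o1_mf_users_xxxxxxxx_.dbf' level 0 tag 'incr_update';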

    Why DBCS (Oracle Base Database Service) in OCI can make a DBA's life much easier (even with BYOL)


    DBCS (now named Oracle Base Database service, but I will call it DBCS throughout this post) in OCI  can help make a DBA's life easier.  When I was testing the new Autonomous Recovery Service for Oracle Database in OCI, I created a LOT of different DBCS systems to test backup and recovery.  Along the way I learned a lot about the workings of DBCS, and I came to appreciate how it makes sense, even if you are a BYOL (bring your own license) customer.




    I'm more of an "old school" DBA, preferring the command line and scripting processes myself.  I am typically not a fan of automation.  When using DBCS I was surprised by all the things it would do for me that I would otherwise have to do manually.

    Install Oracle software and create a database

    Having installed Oracle software hundreds of times, and having created test databases, I didn't think I would care much about automation that did this for me.

    Central Software image management

    What I found in OCI is that you can create your own software images to ensure each new database environment is consistent.  OCI gives you the ability to create your own set of release images (which can include patches).  This ensures that each time I create a new DBCS environment and choose my custom image, it is running the same version as all my other environments. No more installing the base release, then patches, and then any one-off patches.  This makes the installation of the database software much, much easier, and ensures consistency.


    Easy Database creation

    Recently I've gotten familiar with performing a silent database creation, as using dbca isn't always easy to configure.  The tooling provided by DBCS will not only create a database for you, but will also configure TDE encryption (with a local wallet, or using OCI vault).  It can even create a RAC database across 2 nodes.  And don't forget, it can create the standby for me also.


    Configure ASM storage

    Now this is the most interesting piece I found when using DBCS.  Not only does the DBCS service create a disk group, but it automatically stripes multiple block volumes together, maximizing performance.  This is a HUGE help in ensuring I am getting the best performance.
    When I was going through what the configuration did, I built tables showing how the different storage sizes translate to the storage configurations.
    There are two sets of configurations and DB data storage sizes: one for Flex shapes, and one for Standard shapes.

    Flex


    First I looked at flex, and regardless of the performance level these were the sizes.


    Then within Flex, I looked at the "Balanced performance" configuration.

    Balanced Performance configuration





    You can see that as the available DB storage goes up, the number of disks goes up as well, allowing for higher possible IOPS than you would get from a single Block Storage device.

    Below is the chart for "High Performance"

    High Performance configuration



    You can see that the IOPS is even higher, and it is using even more disks to get that performance.

    Standard


    Next I looked at Standard shapes, and regardless of the performance level these were the sizes. Note that with Standard shapes, there are many more configuration options.


    Balanced Performance configuration





    High Performance configuration






    Benefits of DBCS

    I also went through what some of the other benefits of DBCS are, and below is the list I came up with.

    • When using the DBCS service,  the storage cost is based on the Block Storage cost. This is the same cost as you would pay in an IaaS service.  Having the storage striped and configured for maximum IOPS makes this a huge plus.

    • DBCS allows you to purchase licenses if you don't have enough licenses to use the BYOL option.

    • The DBCS service price is based on OCPU and is the same regardless of the shape. Memory is included in the OCPU cost.

    • DBCS automatically configures RAC if you choose it.

    • DBCS provides tooling that automatically configures backups, can apply patches, and rotate encryption keys.

    • DBCS allows you to automate the cloning of your database, and automate any restores.

    • DBCS includes TDE, and relieves you of having to own the ASO license.  

    Conclusion:

    DBCS offers a lot more than you might realize. Take a deep dive into what it can do to save you time as a DBA, and you might also realize that sometimes tooling, along with automation, has its benefits.


    ZDLRA real-time redo demonstrated


    One of the key features of the ZDLRA is the ability to capture changes from the database in real time, just like a standby database does. In this blog post I am going to demonstrate what is happening during this process so that you can get a better understanding of how it works.

    ZDLRA Real-time Redo


    Using the GIF above as a guide, I will explain what is happening and walk through a demo of the process.

    The ZDLRA uses the same process as a standby database.  In fact, if you look at the flow of the real-time redo you will notice the redo blocks are sent to BOTH the local redo log files, AND to the staging area on the ZDLRA.  The staging area on the ZDLRA acts just like a standby redo log does on a standby database.

    As the ZDLRA receives the REDO blocks from the protected database they are validated to ensure that they are valid Oracle Redo block information.  This ensures that a man-in-the-middle attack does not change any of the backup information.  The validation process also assures that if the database is attacked by ransomware (changing blocks), the redo received is not tainted.


    The next thing that happens during the process is the logic when a LOG SWITCH occurs.  As we all know, when a log switch occurs on a database instance, the contents of the redo log are written to an archive log.  With real-time redo, this causes the contents of the redo staging area on the ZDLRA (picture a standby redo log) to become a backup set of an archive log.  The RMAN catalog on the ZDLRA is then updated with the internal location of the backup set.


    Log switch operation

    I am going to go through a demo of what you see happen when this process occurs.

    ZDLRA is configured as a redo destination

    Below you can see that my database has a "Log archive destination" 3 configured.  The destination itself is the database on the ZDLRA (zdl9), and also notice that the log information will be sent for ALL_ROLES, which will send the log information regardless of whether it is a primary database or a standby database.
    Archive Dest
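    For reference, a real-time redo destination like the one in the screenshot is just a log_archive_dest_n entry pointing at the ZDLRA database. A minimal sketch follows; the connect string is a placeholder, the destination number is from my environment, and the wallet/redo_transport_user setup for the VPC account is assumed to already be in place.

    SQL> alter system set log_archive_dest_3='SERVICE="<zdlra-scan>:1521/zdl9" ASYNC NOAFFIRM
         VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=zdl9' scope=both;
    SQL> alter system set log_archive_dest_state_3=enable scope=both;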


    List backup of recent archive logs from RMAN catalog


    Before I demonstrate what happens with the RMAN catalog, I am going to list the current archive log backups. Below you see that the most recent archive log backed up to the ZDLRA has "SEQUENCE #10".

    archive log backups prior
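    The screenshot above came from an RMAN catalog listing; a command along these lines, run while connected to the RA catalog, will show the same information:

    RMAN> list backup of archivelog all completed after 'sysdate-1';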

    Perform a log switch

    As you see in the animation at the top of the post, when a log switch occurs, the contents of the redo log in the "redo staging area" are used to create an archive log backup that is stored and cataloged.  I am going to perform a log switch to force this process.

    Log switch
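    Forcing the log switch is a one-liner; either of these, run as SYSDBA, will do it:

    SQL> alter system switch logfile;
    SQL> alter system archive log current;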


    List backup of archive logs from RMAN catalog

    Now that the log switch occurred, you can see below that there is a new backup set created from the redo staging area.
    There are a couple of interesting items to note when you look at the backup set created.

    archive logs after


    1. The backup of the archive log is compressed.  As part of the policy on the ZDLRA, you have the option to have the backup of the archive log compressed when it is created from the "staged redo". This does NOT require the ACO (Advanced Compression) license. The compressed archive log will be sent back to the DB compressed during a restore operation, and the DB host will uncompress it.  Standard compression is the default option, and I recommend changing it; if you decide to compress, then MEDIUM or LOW is recommended.  Keep in mind that this may put more workload on the client to uncompress the backup sets, which may affect recovery times.  NOTE: When using TDE, there will be little to no compression possible.
    2. The TAG is automatically generated. By looking at the timestamp in the RMAN catalog information, you can see that the TAG is automatically generated using the timestamp to make it unique.
    3. The handle begins with "$RSCN_". This is because the backup piece was generated by the ZDLRA itself; archive log backup sets created this way will begin with these characters.

    Restore and Recovery using partial log information


    Now I am going to demonstrate what happens when the database crashes, and there is no time for the database to perform a log switch.

    List the active redo log and current SCN

    Below you can see that my currently active redo log is sequence # 12.  This is where I am going to begin my test.

    begin test


    Create a table 

    To demonstrate what happens when the database crashes I am going to create a new table. In the table I am going to store the current date, and the current SCN. Using the current SCN we will be able to determine the redo log that contains the table creation.

    table create
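    A minimal sketch of that test table (the table name is mine; current_scn comes from v$database):

    SQL> create table rtr_test as select sysdate created_at, current_scn from v$database;
    SQL> select * from rtr_test;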


    Abort the database


    As you probably know, if I shut down the database gracefully, the DB will automatically clean out the redo logs and archive their contents. Because I want to demonstrate what happens with a crash, I am going to shut the database down with an ABORT to ensure the log switch doesn't occur.  Then I will start the database in mount mode so I can look at the current redo log information.

    abort
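    The shutdown and restart into mount mode are straightforward:

    SQL> shutdown abort
    SQL> startup mount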


    Verify that the log switch did not occur


    Next I am going to look at the REDO Log information and verify that my table creation (SCN 32908369) is still in the active redo log and did not get archived during the shutdown.

    Log switch doesn't occur
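    A query like the following against v$log is what I used this check for: confirming that sequence 12 is still CURRENT and not archived, and that its first_change# is below the SCN stored in my test table (32908369).

    SQL> select thread#, sequence#, status, archived, first_change#
         from v$log
         order by sequence#;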

    Restore the database


    Next I am going to restore the database from backup.


    restore

    Recover the database


    This is where the magic occurs, so I am going to show what happens step by step.

    Recover using archive logs on disk


    The first step is that the database uses the archive logs it has on disk to recover. You can see in the screenshot below that recovery applies archive logs on disk up to sequence #11 for thread 1.  This contains all the archived changes for this thread, but does not include what is in redo log sequence #12.  Sequence #12 contains the create table we are interested in.

    archives on disk

    Recover using partial redo log


    This step is where the magic of the ZDLRA occurs.  You can see from the screenshot below that the RMAN catalog on the ZDLRA returns the redo log information for Sequence #12 even though it was never archived. The ZDLRA was able to create an archive log backup from the partial contents it had in the Redo Staging area.

    rtr recovery

    Open the database and display table contents.


    This is where it all comes together.  Using the partial redo log information from Redo Log sequence #12, you can see that when the database is opened, the table creation transaction is indeed in the database even though the redo did not become an archive log.
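    Putting it all together, the restore, recovery, and verification boil down to a few commands. This is a sketch; your channel configuration and the exact recovery output will differ, and rtr_test is the hypothetical table from the earlier step.

    RMAN> restore database;
    RMAN> recover database;                 # applies archive logs on disk, then the partial sequence 12 from the ZDLRA
    RMAN> alter database open resetlogs;

    SQL>  select * from rtr_test;           -- the row created in sequence 12 is there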


    Conclusion: I hope this post gives you a better idea of how real-time redo works on the ZDLRA, and how it handles recovering transactions after a database crash.

    ZDLRA Validation is your best protection against Ransomware


    Validation of Oracle backups on the ZDLRA is often one of the most overlooked features of the product. With the rise of ransomware, the question "how do I ensure that I have validated Oracle backups?" is critical.



    I know there are a lot of vendors out there that provide a great solution for most generic backups. But, as you probably know, Oracle Database backups are different from other system backups, and they present unique challenges, which include:

    • The backup of a large database consists of 100s, if not 1000s of backup pieces. All of which are necessary to successfully restore the database.
    • Oracle Database backups won't contain "ransomware signatures" or any easy way of determining if the backup pieces are tainted.
    • Oracle Database backups are in a proprietary format that can only be validated by performing a "restore validate", which reads and validates the contents of Oracle Database backup pieces.

    How ZDLRA provides superior validation

    Backups land on flash during ingest


    When backup pieces are sent to the ZDLRA during backup, they land on Flash Storage and are quarantined  within the ZDLRA waiting to be validated.  

    Backup pieces are validated


    The ZDLRA will then examine arriving backup pieces.  The internal metadata is read and the contents of the backup pieces are validated block-by-block.  This ensures that before storing the backup pieces, they are confirmed to be Oracle Database backup pieces, containing valid Oracle Blocks.

     Backup pieces are stored and virtual full created


    Once the backup piece is examined, and the metadata is read, the individual validated blocks are stored on disk compressed.  The blocks are indexed, and a virtual full backup is built.  The final step in the process is to update the RMAN catalog on the ZDLRA with an entry pointing to the virtual full.

    Weekly validation for both block content, and restore continuity


    On a weekly basis all backups on the ZDLRA undergo a "restore validate", which verifies that all the backup pieces are valid, usable backup pieces. This is critical with an "incremental forever" strategy to ensure that unchanged blocks are still valid.  Along with checking the integrity of the backup pieces, the ZDLRA also checks for "Restore Continuity" (I know, this is a term I made up). The idea is that whatever time/SCN you choose within the recovery window, the ZDLRA ensures that ALL backup pieces needed to recover are available.  This is similar to performing a "restore preview" for all time periods to ensure that all backup pieces are available for recovery.
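    If you want to run the equivalent checks yourself from RMAN against any backup location, these are the commands the two concepts map to (a sketch of the manual equivalents, not the RA's internal implementation):

    RMAN> restore database validate;           # reads every needed backup piece and validates the blocks
    RMAN> restore database preview summary;    # lists the backup pieces required for a recovery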


    Validation during replication

    Replication of backup pieces from one ZDLRA to another takes this process one step further.
    Along with all the same validation that occurs when a ZDLRA receives backups directly from databases, the upstream ZDLRA also catalogs the replicated copy of the backup pieces.

    ZDLRA in a Cyber vault





    This is where all the pieces come together. The ZDLRA not only utilizes its validated, incremental forever strategy to keep replication traffic to a minimum, but it also ensures that backup pieces are validated PRIOR to cataloging them.

    The ZDLRA has a number of advantages in a Cyber vault scenario
    • Replication traffic is much smaller than most solutions which require a Weekly Full backup. The ZDLRA uses incremental forever.
    • Backup pieces are quarantined after arrival in the vault to ensure tainted backups are not included in restore plans. This process is similar to what other vendors do to check for ransomware. The ZDLRA goes one step further by using its proprietary knowledge of Oracle blocks to ensure all backup pieces, and the blocks within them, are valid.
    • Backups stored within the ZDLRA in the vault are validated on a weekly basis for both content, and continuity to ensure a restore will be successful.
    • The upstream sending the backup pieces catalogs what backups are in the vault, and can resend any backup pieces if necessary.

    I hope this helps you understand better why the ZDLRA provides superior ransomware protection.



    Autotuned_reserved_space is a new feature on the ZDLRA that you should be using


    Autotuned_reserved_space is a new policy setting that was released with 21.1, and you should be using it. When I talk to customers about how to manage databases on a ZDLRA, the biggest confusion comes in when I talk about reserved space.  Reserved space needs to be understood and properly managed. This new feature in 21.1 allows the ZDLRA to handle the reserved space for you, and I explain how to use it in this blog post.  First let's go through space usage and reserved space in general.

    space usage ZDLRA

    Space usage on the ZDLRA. 


    Recovery Window goal (which drives the space utilization)

    The recovery window goal is set at the policy level, and this value (in days) is the number of days that you want to keep as a recovery window for all databases that are a member of this policy.  This will drive the space utilization.

    Total space

    The ZDLRA comes with all the space pre-allocated.  When you are looking at OEM, or in the SAR report you will see the total space listed. You want to make sure that you have enough space for your database backups and any incoming new backups.

    Used Space

    When the ZDLRA purges backups beyond the Recovery Window Goal that you set, it does a bulk purge of backups.  This can be controlled by setting the maximum disk backup retention in days (which defaults to 1.5 times the recovery window goal).  Because of the bulk purge, more space is shown as used than is needed to support your recovery window goal.

    Recovery Window Space

    This is the amount of space that is needed to support the recovery window goal.  Because of the bulk purge, the recovery window space is less than the used space.


    Reserved space

    In order to control what happens with space, the concept of reserved space is used.  When a database is added to the ZDLRA, the reserved space value is set for this database.  This value should be updated regularly to ensure that there is enough space for the database backups to be stored.

    The important things to know about reserved space are:
    • The sum of all the reserved space cannot be greater than the total space available on the ZDLRA.
    • When adding a new database, it's reserved space must fit within the unreserved space.
    • When a new database is added, the reserved space must be set to at least the size of the database, and defaults to 2.5 times the size of the database.
    • The reserved space for a database needs to be at least the size of the largest datafile.
    • The reserved space should be larger than the amount of space needed to support the recovery window goal space for the database.  For databases with fluctuation, you need to reserve space for the peak usage. 
    The reserved space serves two purposes when properly set:
    1. It can be used to determine how much space is available for new database backups.
    2. If the ZDLRA determines that it does not have enough space to support the recovery window goal of the supported databases, space is reclaimed from databases whose reserved space is too small.
    It is critical to keep the reserved space updated, and many customers have used an automated process to set the reserved space to "recovery window space needed" + 10% (see the sketch below).

    Unfortunately, configuring an automated process for all databases does not take into account any fluctuations in usage.  Let's say I have a database which is much busier at month's end; I want to make sure that my reserved space is not adjusted down to the low value, I want it to stay sized based on the highest space usage value.
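    For illustration, that kind of automated update might look roughly like the PL/SQL below, run in the RA database as the admin user. The view and parameter names (ra_database, recovery_window_space, dbms_ra.update_db with a reserved_space parameter) and the units/format of the values are assumptions from memory; verify them against the Recovery Appliance view reference and the DBMS_RA reference before using anything like this.

    begin
      for db in (select db_unique_name, recovery_window_space from ra_database) loop
        -- assumption: recovery_window_space is reported in GB and reserved_space accepts a size string
        dbms_ra.update_db(
          db_unique_name => db.db_unique_name,
          reserved_space => round(db.recovery_window_space * 1.1) || 'G');
      end loop;
    end;
    /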

    Autotuned_reserved_space 


    This is where autotuned reserved space can help you manage the reserved space.  This setting is controlled at the policy level.

    AUTOTUNED_RESERVED_SPACE

    This value is set at the protection policy level, contains either "YES" or "NO", and defaults to "NO". "YES" will allow the ZDLRA to automatically manage the reserved space for all databases that are members of this policy and whose disk_reserved_space is not set.

    MAX_RESERVED_SPACE


    This value is also set at the protection policy level.  This value is optional for autotuned_reserved_space, but if set, it will control the maximum amount of reserved space that can be set for an individual database in the protection policy. 

    AUTOTUNE_SPACE_LIMIT


    This value is set at the storage level for ALL databases. It sets a reserved space usage limit above which autotuning slows down large reserved space increases. When the limit is reached, autotune will limit databases to 10% reserved space growth per week.  This value is optional and will default to the total space if not set.


    SUMMARY:

    • autotuned_reserved_space - Enables autotuning of space within a protection policy
    • max_reserved_space - Controls the maximum reserved space of databases in a protection policy
    • autotune_space_limit - Slows the reserved space growth when a specified space limit is reached.
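    To turn the policy-level settings on, they map to a DBMS_RA protection policy update. The parameter names below are my assumptions based on the setting names described above, and 'GOLD' is a hypothetical policy; check the DBMS_RA command reference for your RA release for the exact signature. The storage-level autotune_space_limit is configured separately and is not shown here.

    begin
      dbms_ra.update_protection_policy(
        protection_policy_name  => 'GOLD',   -- hypothetical policy name
        autotune_reserved_space => 'YES',    -- assumed parameter name for the autotuned_reserved_space setting
        max_reserved_space      => '20T');   -- assumed size format; optional per-database cap
    end;
    /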

    What does autotune reserved space do ?

    • On a regular basis, if needed, the reserved space for each autotune controlled database is adjusted to reserve space for the recovery window goal, and incoming backups.
    • If the database has a disk_reserved_space set, autotuning will not be used for this database.  It is assumed that the disk_reserved_space will be set manually for this database

    Autotune will replace the need for the ZDLRA admin to constantly update the reserved space for each database as its space needs change over time. It will also allow them to configure a constant reserved space for databases with fluctuating storage usage.
