Channel: Bryan's Oracle Blog

RMAN - Create weekly archival backup from weekly full backups


This blog post demonstrates a process to create KEEP archival backups dynamically by using backup pieces within a weekly full/daily incremental backup strategy.

KEEP backups

First let's go through what a KEEP backup is and how it affects your backup strategy.

  1. A KEEP backup is a self-contained backupset.  The archive logs needed to de-fuzzy the database files are automatically included in the backupset.  
  2. The archive logs included in the backup are only the archive logs needed to de-fuzzy.
  3. The backup pieces in the KEEP backup (both datafile backups and included archive log pieces) are ignored in the normal incremental backup strategy, and in any log sweeps.
  4. When a recovery window is set in RMAN, KEEP backup pieces are ignored in any "Delete Obsolete" processing.
  5. KEEP backup pieces, once past the "until time", are removed using the "Delete expired" command.
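
For reference, a minimal KEEP archival backup taken directly from the protected database could look like the sketch below. The keep window, tag, and disk format path are illustrative, not from the original post; note that KEEP backups cannot be written to the fast recovery area, so an explicit format (or a tape channel) is used.

# self-contained archival backup; RMAN automatically includes the archive logs
# needed to de-fuzzy the datafiles, plus a controlfile and spfile autobackup
backup database
  keep until time 'sysdate+365'
  tag 'KEEP_YEARLY'
  format '/backups/keep/%d_%U';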

Normal process to create an archival KEEP backup

  • Perform a weekly full backup and a daily incremental backup that are deleted using an RMAN recovery window.
  • Perform archive log backups with the full/incremental backups along with log sweeps. These are also deleted using an RMAN recovery window.
  • One of the following processes is used to create an archival KEEP backup:
    • A separate full KEEP backup is performed along with the normal weekly full backup
    • The weekly full backup (and archive logs based on tag) are copied to tape with "backup as backupset" and marked as "KEEP" backup pieces.

Issues with this process

  • The process of copying the full backup to tape using "backup as backupset" requires 2 copies of the same backup for a period of time.  You don't want to wait until the end of retention to copy it to tape.
  • If the KEEP full backups are stored on disk along with the weekly full backups, you cannot use "backup as backupset"; you must perform a second, separate backup.

Proposal to create a weekly KEEP backup

Problems with simple solution

The basic idea is that you perform a weekly full backup, along with daily incremental backups that are kept for 30 days. After the 30 day retention, just the full backups (along with archive logs to defuzzy) are kept for an additional 30 days.

The most obvious way to do this (sketched after the list) is to:

  • Set the RMAN retention to 30 days.
  • Create a weekly full backup that is a KEEP backup with an until time of 60 days in the future.
  • Create a daily incremental backup that is NOT a KEEP backup.
  • Create archive log backups as normal.
  • Allow delete obsolete to remove the "non-KEEP" backups after 30 days.
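
A minimal sketch of this approach might look like the following (the tags and the 60-day keep window are illustrative, not from the original scripts):

# weekly: the full backup itself is marked as a KEEP backup for 60 days
backup database keep until time 'sysdate+60' tag 'WEEKLY_KEEP';

# daily: a non-KEEP incremental, expected to age out with the 30-day recovery window
backup incremental level 1 database tag 'DAILY_INC';

# periodic cleanup driven by the recovery window
delete obsolete;
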
Unfortunately, when you create an incremental backup and there are only KEEP backups preceding it, the incremental level 1 backup is forced into an incremental level 0 backup.  And with delete obsolete, if you look through MOS note "RMAN Archival (KEEP) backups and Retention Policy (Doc ID 986382.1)" you find that the incremental backups and archive logs are kept for 60 days because there is no preceding non-KEEP backup.


Solution

The solution is to use tags, mark the weekly full as a KEEP backup after a week, and use the "delete backup completed before ... tag='xx'" command.

Weekly full backup scripts

run
{
   backup archivelog all filesperset=20  tag ARCHIVE_ONLY delete input;
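   # mark the existing backups tagged INC_LEVEL_0 (last week's full and the
   # archive logs backed up with it) as KEEP archival backups until sysdate+53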
   change backup tag='INC_LEVEL_0'  keep until time 'sysdate+53';
   backup incremental level 0 database tag='INC_LEVEL_0' filesperset=20  plus archivelog filesperset=20 tag='INC_LEVEL_0';

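  # remove backup pieces by tag once they have aged past their intended retention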
  delete backup completed before 'sysdate-61' tag= 'INC_LEVEL_0';
  delete backup completed before 'sysdate-31' tag= 'INC_LEVEL_1';
  delete backup completed before 'sysdate-31' tag= 'ARCHIVE_ONLY';
}

Daily Incremental backup scripts

run
{
  backup incremental level 1 database tag='INC_LEVEL_1'  filesperset=20 plus archivelog filesperset=20 tag='INC_LEVEL_1';
}

Archive log sweep backup scripts

run
{
  backup archivelog all tag='ARCHIVE_ONLY' delete input;
}


Example

I then took these scripts, and built an example using a 7 day recovery window.  My full backup commands are below.
run
{
   backup archivelog all filesperset=20  tag ARCHIVE_ONLY delete input;
   change backup tag='INC_LEVEL_0'  keep until time 'sysdate+30';
   backup incremental level 0 database tag='INC_LEVEL_0' filesperset=20  plus archivelog filesperset=20 tag='INC_LEVEL_0';

  delete backup completed before 'sysdate-30' tag= 'INC_LEVEL_0';
  delete backup completed before 'sysdate-8' tag= 'INC_LEVEL_1';
  delete backup completed before 'sysdate-8' tag= 'ARCHIVE_ONLY';
}


First I am going to perform a weekly backup and incremental backups for 7 days to see how the settings affect the backup pieces in RMAN.
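
Listings like the ones below can be pulled from the RMAN views in the controlfile. A query along these lines is only a sketch, assuming the standard V$BACKUP_DATAFILE and V$BACKUP_PIECE views; V$BACKUP_REDOLOG can be joined the same way to produce the archive log listing.

select d.file#, d.checkpoint_time, d.incremental_level,
       d.incremental_change#, d.checkpoint_change#,
       p.keep, p.keep_until, p.keep_options, p.tag
  from v$backup_datafile d
  join v$backup_piece    p
    on p.set_stamp = d.set_stamp
   and p.set_count = d.set_count
   and p.piece#    = 1
 where d.file# = 3
 order by d.checkpoint_time;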

for Datafile #3.


 File# Checkpoint Time   Incr level Incr chg# Chkp chg# Incremental Typ Keep Keep until Keep options    Tag
------ ----------------- ---------- --------- --------- --------------- ---- ---------- --------------- ---------------
     3 06-01-23 00:00:06          0         0   3334337 FULL            NO                              INC_LEVEL_0
     3 06-02-23 00:00:03          1   3334337   3334513 INCR1           NO                              INC_LEVEL_1
     3 06-03-23 00:00:03          1   3334513   3334665 INCR1           NO                              INC_LEVEL_1
     3 06-04-23 00:00:03          1   3334665   3334805 INCR1           NO                              INC_LEVEL_1
     3 06-05-23 00:00:03          1   3334805   3334949 INCR1           NO                              INC_LEVEL_1
     3 06-06-23 00:00:03          1   3334949   3335094 INCR1           NO                              INC_LEVEL_1
     3 06-07-23 00:00:03          1   3335094   3335234 INCR1           NO                              INC_LEVEL_1

for  archive logs

Sequence# First chg# Next chg# Create Time       Keep Keep until Keep options    Tag
--------- ---------- --------- ----------------- ---- ---------- --------------- ---------------
      625    3333260   3334274 15-JUN-23         NO                              ARCHIVE_ONLY
      626    3334274   3334321 01-JUN-23         NO                              INC_LEVEL_0
      627    3334321   3334375 01-JUN-23         NO                              INC_LEVEL_0
      628    3334375   3334440 01-JUN-23         NO                              ARCHIVE_ONLY
      629    3334440   3334490 01-JUN-23         NO                              INC_LEVEL_1
      630    3334490   3334545 02-JUN-23         NO                              INC_LEVEL_1
      631    3334545   3334584 02-JUN-23         NO                              ARCHIVE_ONLY
      632    3334584   3334633 02-JUN-23         NO                              INC_LEVEL_1
      633    3334633   3334695 03-JUN-23         NO                              INC_LEVEL_1
      634    3334695   3334733 03-JUN-23         NO                              ARCHIVE_ONLY
      635    3334733   3334782 03-JUN-23         NO                              INC_LEVEL_1
      636    3334782   3334839 04-JUN-23         NO                              INC_LEVEL_1
      637    3334839   3334876 04-JUN-23         NO                              ARCHIVE_ONLY
      638    3334876   3334926 04-JUN-23         NO                              INC_LEVEL_1
      639    3334926   3334984 05-JUN-23         NO                              INC_LEVEL_1
      640    3334984   3335023 05-JUN-23         NO                              ARCHIVE_ONLY
      641    3335023   3335072 05-JUN-23         NO                              INC_LEVEL_1
      642    3335072   3335124 06-JUN-23         NO                              INC_LEVEL_1
      643    3335124   3335162 06-JUN-23         NO                              ARCHIVE_ONLY
      644    3335162   3335211 06-JUN-23         NO                              INC_LEVEL_1
      645    3335211   3335273 07-JUN-23         NO                              INC_LEVEL_1
      646    3335273   3335311 07-JUN-23         NO                              ARCHIVE_ONLY


Next I'm going to execute the weekly full backup script that changes the last backup to a keep backup to see how the settings affect the backup pieces in RMAN.

for Datafile #3.
 File# Checkpoint Time   Incr level Incr chg# Chkp chg# Incremental Typ Keep Keep until Keep options    Tag
------ ----------------- ---------- --------- --------- --------------- ---- ---------- --------------- ---------------
     3 06-01-23 00:00:06          0         0   3334337 FULL            YES  08-JUL-23  BACKUP_LOGS     INC_LEVEL_0
     3 06-02-23 00:00:03          1   3334337   3334513 INCR1           NO                              INC_LEVEL_1
     3 06-03-23 00:00:03          1   3334513   3334665 INCR1           NO                              INC_LEVEL_1
     3 06-04-23 00:00:03          1   3334665   3334805 INCR1           NO                              INC_LEVEL_1
     3 06-05-23 00:00:03          1   3334805   3334949 INCR1           NO                              INC_LEVEL_1
     3 06-06-23 00:00:03          1   3334949   3335094 INCR1           NO                              INC_LEVEL_1
     3 06-07-23 00:00:03          1   3335094   3335234 INCR1           NO                              INC_LEVEL_1
     3 06-08-23 00:00:07          0         0   3335715 FULL            NO                              INC_LEVEL_0


for archive logs


Sequence# First chg# Next chg# Create Time       Keep Keep until Keep options    Tag
--------- ---------- --------- ----------------- ---- ---------- --------------- ---------------
      625    3333260   3334274 15-JUN-23         NO                              ARCHIVE_ONLY
      626    3334274   3334321 01-JUN-23         YES  08-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      627    3334321   3334375 01-JUN-23         YES  08-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      628    3334375   3334440 01-JUN-23         NO                              ARCHIVE_ONLY
      629    3334440   3334490 01-JUN-23         NO                              INC_LEVEL_1
      630    3334490   3334545 02-JUN-23         NO                              INC_LEVEL_1
      631    3334545   3334584 02-JUN-23         NO                              ARCHIVE_ONLY
      632    3334584   3334633 02-JUN-23         NO                              INC_LEVEL_1
      633    3334633   3334695 03-JUN-23         NO                              INC_LEVEL_1
      634    3334695   3334733 03-JUN-23         NO                              ARCHIVE_ONLY
      635    3334733   3334782 03-JUN-23         NO                              INC_LEVEL_1
      636    3334782   3334839 04-JUN-23         NO                              INC_LEVEL_1
      637    3334839   3334876 04-JUN-23         NO                              ARCHIVE_ONLY
      638    3334876   3334926 04-JUN-23         NO                              INC_LEVEL_1
      639    3334926   3334984 05-JUN-23         NO                              INC_LEVEL_1
      640    3334984   3335023 05-JUN-23         NO                              ARCHIVE_ONLY
      641    3335023   3335072 05-JUN-23         NO                              INC_LEVEL_1
      642    3335072   3335124 06-JUN-23         NO                              INC_LEVEL_1
      643    3335124   3335162 06-JUN-23         NO                              ARCHIVE_ONLY
      644    3335162   3335211 06-JUN-23         NO                              INC_LEVEL_1
      645    3335211   3335273 07-JUN-23         NO                              INC_LEVEL_1
      646    3335273   3335311 07-JUN-23         NO                              ARCHIVE_ONLY
      647    3335311   3335652 07-JUN-23         NO                              ARCHIVE_ONLY
      648    3335652   3335699 08-JUN-23         NO                              INC_LEVEL_0
      649    3335699   3335760 08-JUN-23         NO                              INC_LEVEL_0
      650    3335760   3335833 08-JUN-23         NO                              ARCHIVE_ONLY


Finally I'm going to execute the weekly full backup script that changes the last backup to a keep backup and this time it will delete the older backup pieces to see how the settings affect the backup pieces in RMAN.

for Datafile #3.

File# Checkpoint Time   Incr level Incr chg# Chkp chg# Incremental Typ Keep Keep until Keep options    Tag
------ ----------------- ---------- --------- --------- --------------- ---- ---------- --------------- ---------------
     3 06-01-23 00:00:06          0         0   3334337 FULL            YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
     3 06-08-23 00:00:07          0         0   3335715 FULL            YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
     3 06-09-23 00:00:03          1   3335715   3336009 INCR1           NO                              INC_LEVEL_1
     3 06-10-23 00:00:03          1   3336009   3336183 INCR1           NO                              INC_LEVEL_1
     3 06-11-23 00:00:03          1   3336183   3336330 INCR1           NO                              INC_LEVEL_1
     3 06-12-23 00:00:03          1   3336330   3336470 INCR1           NO                              INC_LEVEL_1
     3 06-13-23 00:00:03          1   3336470   3336617 INCR1           NO                              INC_LEVEL_1
     3 06-14-23 00:00:04          1   3336617   3336757 INCR1           NO                              INC_LEVEL_1
     3 06-15-23 00:00:07          0         0   3336969 FULL            NO                              INC_LEVEL_0



for archive logs

Sequence# First chg# Next chg# Create Time       Keep Keep until Keep options    Tag
--------- ---------- --------- ----------------- ---- ---------- --------------- ---------------
      626    3334274   3334321 01-JUN-23         YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      627    3334321   3334375 01-JUN-23         YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      647    3335311   3335652 07-JUN-23         NO                              ARCHIVE_ONLY
      648    3335652   3335699 08-JUN-23         YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      649    3335699   3335760 08-JUN-23         YES  15-JUL-23  BACKUP_LOGS     INC_LEVEL_0
      650    3335760   3335833 08-JUN-23         NO                              ARCHIVE_ONLY
      651    3335833   3335986 08-JUN-23         NO                              INC_LEVEL_1
      652    3335986   3336065 09-JUN-23         NO                              INC_LEVEL_1
      653    3336065   3336111 09-JUN-23         NO                              ARCHIVE_ONLY
      654    3336111   3336160 09-JUN-23         NO                              INC_LEVEL_1
      655    3336160   3336219 10-JUN-23         NO                              INC_LEVEL_1
      656    3336219   3336258 10-JUN-23         NO                              ARCHIVE_ONLY
      657    3336258   3336307 10-JUN-23         NO                              INC_LEVEL_1
      658    3336307   3336359 11-JUN-23         NO                              INC_LEVEL_1
      659    3336359   3336397 11-JUN-23         NO                              ARCHIVE_ONLY
      660    3336397   3336447 11-JUN-23         NO                              INC_LEVEL_1
      661    3336447   3336506 12-JUN-23         NO                              INC_LEVEL_1
      662    3336506   3336544 12-JUN-23         NO                              ARCHIVE_ONLY
      663    3336544   3336594 12-JUN-23         NO                              INC_LEVEL_1
      664    3336594   3336639 13-JUN-23         NO                              INC_LEVEL_1
      665    3336639   3336677 13-JUN-23         NO                              ARCHIVE_ONLY
      666    3336677   3336734 13-JUN-23         NO                              INC_LEVEL_1
      667    3336734   3336819 14-JUN-23         NO                              INC_LEVEL_1
      668    3336819   3336857 14-JUN-23         NO                              ARCHIVE_ONLY
      669    3336857   3336906 14-JUN-23         NO                              ARCHIVE_ONLY
      670    3336906   3336953 15-JUN-23         NO                              INC_LEVEL_0
      671    3336953   3337041 15-JUN-23         NO                              INC_LEVEL_0
      672    3337041   3337113 15-JUN-23         NO                              ARCHIVE_ONLY


Result

For my datafiles, I still have the weekly full backup, and it is a keep backup. For my archive logs, I still have the archive logs that were part of the full backup which are needed to de-fuzzy my backup.


Restore Test


Now for the final test, using the next chg# on the June 1st archive logs, 3334375:


RMAN> restore database until scn=3334375;

Starting restore at 15-JUN-23
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=259 device type=DISK
...
channel ORA_DISK_1: piece handle=/u01/ocidb/backups/da1tiok6_1450_1_1 tag=INC_LEVEL_0
channel ORA_DISK_1: restored backup piece 1
...
channel ORA_DISK_1: reading from backup piece /u01/ocidb/backups/db1tiola_1451_1_1
channel ORA_DISK_1: piece handle=/u01/ocidb/backups/db1tiola_1451_1_1 tag=INC_LEVEL_0
channel ORA_DISK_1: restored backup piece 1

RMAN> recover database until scn=3334375;
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=627
channel ORA_DISK_1: reading from backup piece /u01/ocidb/backups/dd1tiom8_1453_1_1
channel ORA_DISK_1: piece handle=/u01/ocidb/backups/dd1tiom8_1453_1_1 tag=INC_LEVEL_0
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/u01/app/oracle/product/19c/dbhome_1/dbs/arch1_627_1142178912.dbf thread=1 sequence=627
media recovery complete, elapsed time: 00:00:00
Finished recover at 15-JUN-23
RMAN> alter database open resetlogs;

Statement processed



Success !

ZDLRA - Copy-to-cloud step by step explained


One of the best features of the ZDLRA is the ability to dynamically create a full KEEP backup and send it to the cloud (ZFSSA or OCI) for archival storage.

Here is a great article by Oracle Product Manager Marco Calmasini that explains how to use this feature.



In this blog post, I will go through the RACLI steps that you execute, and explain what is happening with each step.

The documentation I started with is the 19.2 Administrator's Guide, which can be found here.  If you are on a more current release, you can find the steps in the chapter named "Archiving Backups to Cloud".


Deploying the OKV Client Software

To ensure that all the backup pieces are encrypted, you must use OKV (Oracle Key Vault) to manage the encryption keys that are being used by the ZDLRA.  Even if you are using TDE for the datafiles, the copy-to-cloud process encrypts ALL backup pieces, including the backups of the controlfile and spfile, which aren't already encrypted.

I am not going to go through the detailed steps that are in the documentation to configure OKV, but I will just go through the high level processes.

The most important items to note in this section are:

  • Both nodes of the ZDLRA are added as endpoints, and they should have a descriptive name that identifies them, and ties them together.
  • A new endpoint group should be created with a descriptive name, and both nodes should be added to the new endpoint group.
  • A new virtual wallet is created with a descriptive name, and this needs to both be associated with the 2 endpoints and be the default wallet for the endpoints.
  • Both endpoints of the ZDLRA are enrolled through OKV and during the enrollment process a unique enrollment token file is created for each node. It is best to immediately rename the files to identify the endpoint it is associated with using the format <myhost>-okvclient.jar.
  • Copy the enrollment token files to the /radump directory on the appropriate host.
NOTE: It is critical that you follow these directions exactly, and that each node has the appropriate enrollment token with the appropriate name before continuing.

#1 Add credential_wallet

racli add credential_wallet


Fri Jan 1 08:56:27 2018: Start: Add Credential Wallet
Enter New Keystore Password: <OKV_endpoint_password>
Confirm New Keystore Password:
Enter New Wallet Password: <ZDLRA_credential_wallet_password> 
Confirm New Wallet Password:
Re-Enter New Wallet Password:
Fri Jan 1 08:56:40 2018: End: Add Credential Wallet

The first step to configure the ZDLRA to talk to OKV is to have the ZDLRA create a password protected SEPS wallet file that contains the OKV password.
This step asks for 2 new passwords when executing
  1. New Keystore Password - This password is the OKV endpoint password.  This password is used by the database to communicate with OKV, and can be used with okvutil to interact with OKV directly.
  2. New Wallet Password - This password is used to protect the wallet file itself that will contain the OKV keystore password.
This password file is shared across both nodes.

Update contents      -  "racli add credential"
Change password    - "racli alter credential_wallet"

#2 Add keystore

racli add keystore --type hsm --restart_db

RecoveryAppliance/log/racli.log
Fri Jan 1 08:57:03 2018: Start: Configure Wallets
Fri Jan 1 08:57:04 2018: End: Configure Wallets
Fri Jan 1 08:57:04 2018: Start: Stop Listeners, and Database
Fri Jan 1 08:59:26 2018: End: Stop Listeners, and Database
Fri Jan 1 08:59:26 2018: Start: Start Listeners, and Database
Fri Jan 1 09:02:16 2018: End: Start Listeners, and Database

The second step to configure the ZDLRA to talk to OKV is to have the ZDLRA database be configured to communicate with OKV. The Database on the ZDLRA will be configured to use the OKV wallet for encryption keys which requires a bounce of the database.  


Backout         - "racli remove keystore" 
Status            - "racli status keystore"
Update          - "racli alter keystore"
Disable          - "racli disable keystore"
Enable            - "racli enable keystore"

#3 Install okv_endpoint

racli install okv_endpoint

Wed August 23 20:14:40 2018: Start: Install OKV End Point [node01]
Wed August 23 20:14:43 2018: End: Install OKV End Point [node01]
Wed August 23 20:14:43 2018: Start: Install OKV End Point [node02]
Wed August 23 20:14:45 2018: End: Install OKV End Point [node02]

The third step to configure the ZDLRA to talk to OKV is to have the ZDLRA nodes (OKV endpoints) enrolled in OKV.  This step will install the OKV software on both nodes of the ZDLRA, and complete the enrollment of the 2 ZDLRA nodes with OKV.  The password that was entered in step #1 for OKV is used during the enrollment process.

Status            - "racli status okv_endpoint"

NOTE: At the end of this step, the status command should return a status of online from both nodes.

Node: node02
Endpoint: Online
Node: node01
Endpoint: Online

#4 Open the Keystore

racli enable keystore

The fourth step to configure the ZDLRA to talk to OKV is to have the ZDLRA nodes open the encryption wallet in the database. This step will use the saved passwords from step #1 and open up the encryption wallet.

NOTE: This will need to be executed after any restarts of the database on the ZDLRA.

#5 Create a TDE master key for the ZDLRA in the Keystore

racli alter keystore --initialize_key

The final step to configure the ZDLRA to talk to OKV is to have the ZDLRA create the master encryption key for the ZDLRA in the wallet.

Creating Cloud Objects for Copy-to-Cloud

These steps create the cloud objects necessary to send backups to a cloud location.

NOTE: If you are configuring multiple cloud locations, you may go through these steps for each location.

Configure public/private key credentials

Authentication with the object storage is done using an X.509 certificate.  The ZDLRA steps outlined in the documentation will generate a new pair of API signing keys and register the new set of keys.
You can also use any set of API keys that you previously generated by putting your private key in the shared location on the ZDLRA nodes.
In OCI each user can only have 3 sets of API keys, but the ZFSSA has no restrictions on the number of API signing keys that can be created.
Each "cloud_key" represents an API signing key pair, and each cloud_key contains 
  1. pvt_key_path - Shared location on the ZDLRA where the private key is located
  2. fingerprint      - fingerprint associated with the private key to identify which key to use.
You can use the same "cloud_key" to authenticate to multiple buckets, and even different cloud locations.

Documentation steps to create new key pair

#1 Add Cloud_key


racli add cloud_key --key_name=sample_key

Tue Jun 18 13:22:07 2019: Using log file /opt/oracle.RecoveryAppliance/log/racli.log
Tue Jun 18 13:22:07 2019: Start: Add Cloud Key sample_key
Tue Jun 18 13:22:08 2019: Start: Creating New Keys
Tue Jun 18 13:22:08 2019: Oracle Database Cloud Backup Module Install Tool, build 19.3.0.0.0DBBKPCSBP_2019-06-13
Tue Jun 18 13:22:08 2019: OCI API signing keys are created:
Tue Jun 18 13:22:08 2019:   PRIVATE KEY --> /raacfs/raadmin/cloud/key/sample_key/oci_pvt
Tue Jun 18 13:22:08 2019:   PUBLIC  KEY --> /raacfs/raadmin/cloud/key/sample_key/oci_pub
Tue Jun 18 13:22:08 2019: Please upload the public key in the OCI console.
Tue Jun 18 13:22:08 2019: End: Creating New Keys
Tue Jun 18 13:22:09 2019: End: Add Cloud Key sample_key

This step is used to generate a new set of API signing keys.
The output of this step is a shared set of files on the ZDLRA which are stored in:
/raacfs/raadmin/cloud/key/{key_name}/

In order to complete the cloud_key information, you need to add the public key to OCI, or to the ZFS and save the fingerprint that is associated with the public key. The fingerprint is used in the next step.

#2 racli alter cloud_key


racli alter cloud_key --key_name=sample_key --fingerprint=12:34:56:78:90:ab:cd:ef:12:34:56:78:90:ab:cd:ef

The fingerprint that is associated with the public key (from the previous step) is added to the ZDLRA cloud_key information so that it can be used for authentication.  
Both the private key and the fingerprint are needed to use the API signing key for credentials.

Using your own API signing key pair

#1 Add cloud_key

racli add cloud_key --key_name=KEY_NAME [--fingerprint=PUBFINGERPRINT --pvt_key_path=PVTKEYFILE]

You can add your own API signing keys to the ZDLRA by using the "add cloud_key" command, identifying both the private key file location (it is best to follow the format and location used in the automated steps) and the fingerprint associated with the API signing keys.
It is assumed that the public key has already been added to OCI, or to the ZFSSA.

Status        - racli list cloud_key
Delete        - racli remove cloud_key
Update       - racli alter cloud_key

Documentation steps to create a new cloud_user 

This step is used to create the wallet entry on the ZDLRA that is used for authenticating to the object store.
This step combines the "cloud_key", which contains the API signing keys, with the user login information and the compartment (on ZFSSA the compartment is the share).
The cloud_user can be used for authentication with multiple buckets/locations that are identified as cloud_locations as long as they are within the same compartment (share on ZFSSA).

The format of the command to create a new cloud_user is below

racli add cloud_user 
--user_name=sample_user
--key_name=sample_key
--user_ocid=ocid1.user.oc1..abcedfghijklmnopqrstuvwxyz0124567901
--tenancy_ocid=ocid1.tenancy.oc1..abcedfghijklmnopqrstuvwxyz0124567902
--compartment_ocid=ocid1.compartment.oc1..abcedfghijklmnopqrstuvwxyz0124567903

The parameters for this command are

  • user_name        - This is the username that is associated with the cloud_user to uniquely identify it.
  • key_name         - This is the name of the "cloud_key" identifying the API signing keys to be used.
  • user_ocid          - This is the username for authentication. In OCI this is the user's OCID; on ZFSSA, this combines the OCID format with the username on the ZFSSA that owns the share.
  • tenancy_ocid    - This is the tenancy OCID in OCI; on ZFSSA it is ignored.
  • compartment_ocid - This is the compartment OCID in OCI; on ZFSSA it is the share.
For more information on configuring the ZFSSA see
How to configure Zero Data Loss Recovery Appliance to use ZFS OCI Object Storage as a cloud repository (Doc ID 2761114.1)


List                - racli list  cloud_user
Delete            - racli remove  cloud_user
Update           - racli alter cloud_user

Documentation steps to create a new cloud_location 

This step is used to associate the cloud_user (used for authentication) with both the location and the bucket that is going to be used for backups.

racli add cloud_location
--cloud_user=<CLOUD_USER_NAME>
--host=https://<OPC_STORAGE_LOCATION>
--bucket=<OCI_BUCKET_NAME>
--proxy_port=<HOST_PORT>
--proxy_host=<PROXY_URL>
--proxy_id=<PROXY_ID>
--proxy_pass=<PROXY_PASS>
--streams=<NUM_STREAMS>
[--enable_archive=TRUE]
--archive_after_backup=<number>:[YEARS | DAYS]
[--retain_after_restore=<number_hours>:HOURS]
--import_all_trustcert=<X509_CERT_PATH>

I am going to go through the key items that need to be entered here.  I am going to skip over the PROXY information and certificate.

  • cloud_user - This is the object store authentication information that was created in the previous steps.
  • host - This is the URL for the object storage location. On ZFS the namespace in the URL is the "share".
  • bucket - This is the bucket where the backups will be sent. The bucket will be created if it doesn't exist.
  • streams - The maximum number of channels to use when sending backups to the cloud.
  • enable_archive - Not used with ZFS. With OCI the default TRUE allows you to set an archival strategy, FALSE will automatically put backups in archival storage.
  • archive_after_backup - Not used with ZFS. Automatically configures an archival strategy in OCI.
  • retain_after_restore - Not used with ZFS. Sets the period of time that backups will remain in standard storage before returning to archival storage.
This command will create multiple attribute sets (between 1 and the number of streams) for the cloud_location that can be used for sending archival backups to the cloud with different numbers of channels.
The format of <copy_cloud_name> is a combination of  <bucket name> and <cloud_user>.
The format of the attributes used for the copy jobs is <Cloud_location_name>_<stream number>


Update          - racli alter cloud_location
Disable          - racli disable cloud_location  - This will pause all backups going to this location
Enable           - racli enable cloud_location  - This unpauses all backups going to this location
List                - racli list  cloud_location
Delete            - racli remove cloud_location

NOTE: There are quite a few items to note in this section.
  • When configuring backups to go to ZFSSA use the documentation previously mentioned to ensure the parameters are correct.
  • When executing this step with ZFSSA, make sure that the default OCI location on the ZFSSA is set to the share that you are currently configuring. If you are using multiple shares for buckets, then you will have to change the ZFSSA settings as you add cloud locations.
  • When using OCI for archival, ensure that you configure the archival rules using this command. This ensures that the metadata objects, which can't be archived, are excluded as part of the lifecycle management rules created during this step.


Create the job template using the documentation.


How to clone a single PDB onto another DB host.


Cloning a single PDB isn't always easy to do, especially if you are trying to use an existing backup rather than copying from an existing database.  In this blog post I will walk through how to restore a PDB from an existing multi-tenant backup to another host, and plug it into another CDB.




My environment is:

DBCS database FASTDB

        db_name             = fastdb
        db_unique_name      = fastdb_67s_iad
        DB Version          = 19.19
        TDE                 = Using local wallet
        Backup              = Object Storage using the Tooling
        RMAN catalog        = Using RMAN catalog to emulate ZDLRA
        PDB name            = fastdb_pdb1


Step #1 - Prepare destination

The first step is to copy over all the necessary pieces for restoring the database.

  • TDE wallet
  • Tape Library
  • Tape Library config file
  • SEPS wallet used by backup connection
  • SPFILE contents to build a pfile
Also create any directories needed (like audit file location).
  • mkdir /u01/app/oracle/admin/fastdb_67s_iad/adump
I added the entry to the /etc/oratab file and changed my environment to point to this database name.

In my case I copied the following directories and subdirectories to the same destination on the host.
  • scp /opt/oracle/dcs/commonstore/wallets/fastdb_67s_iad/*
  • scp /opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/*
Finally, I copied some of the contents of the spfile.  Below are the critical entries.
audit_file_dest='/u01/app/oracle/admin/fastdb_67s_iad/adump'
*.compatible='19.0.0.0'
*.control_files='+RECO/FASTDB_67S_IAD/CONTROLFILE/current.256.1143303659'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+RECO'
*.db_domain='subnet.vcn.oraclevcn.com'
*.db_files=1024
*.db_name='fastdb'
*.db_recovery_file_dest='+RECO'
*.db_recovery_file_dest_size=8191g
*.db_unique_name='fastdb_67s_iad'
*.diagnostic_dest='/u01/app/oracle'
*.enable_pluggable_database=true
*.global_names=TRUE
*.log_archive_format='%t_%s_%r.dbf'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.processes=4000
*.sga_target=4g
*.tde_configuration='keystore_configuration=FILE'
*.undo_retention=900
*.undo_tablespace='UNDOTBS1'
*.wallet_root='/opt/oracle/dcs/commonstore/wallets/fastdb_67s_iad'



Step #2 - Restore controlfile

The next step is to restore the controlfile to my destination host

I grabbed 2 pieces of information from the source database
  • DBID  - This is needed to restore the controlfile from the backup.
  • Channel configuration.
With this I executed the following to restore the controlfile.

startup nomount;
set dbid=1292000107;

 run
 {
 allocate CHANNEL sbt1 DEVICE TYPE  'SBT_TAPE' FORMAT   '%d_%I_%U_%T_%t' PARMS  'SBT_LIBRARY=/opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/libopc.so ENV=(OPC_PFILE=/opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/acefbba5-65ad-454c-b1fe-467dec1abde4/opc_fastdb_67s_iad.ora)';
 restore controlfile ;
 }

and below is my output.

RMAN>  run
 {
 allocate CHANNEL sbt1 DEVICE TYPE  'SBT_TAPE' FORMAT   '%d_%I_%U_%T_%t' PARMS  'SBT_LIBRARY=/opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/libopc.so ENV=(OPC_PFILE=/opt/oracle/dcs/commonstore/oss/fastdb_67s_iad/acefbba5-65ad-454c-b1fe-467dec1abde4/opc_fastdb_67s_iad.ora)';
 restore controlfile ;
 }2> 3> 4> 5>

allocated channel: sbt1
channel sbt1: SID=1513 device type=SBT_TAPE
channel sbt1: Oracle Database Backup Service Library VER=19.0.0.1

Starting restore at 08-AUG-23

channel sbt1: starting datafile backup set restore
channel sbt1: restoring control file
channel sbt1: reading from backup piece c-1292000107-20230808-04
channel sbt1: piece handle=c-1292000107-20230808-04 tag=TAG20230808T122731
channel sbt1: restored backup piece 1
channel sbt1: restore complete, elapsed time: 00:00:01
output file name=+RECO/FASTDB_67S_IAD/CONTROLFILE/current.2393.1144350823
Finished restore at 08-AUG-23


Step #3 - Restore Datafiles for CDB and my PDB

Below are the commands I am going to execute to restore the datafiles for my CDB, my PDB, and the PDB$SEED.

First I'm going to mount the database, and I am going to spool the output to a logfile.



alter database mount;

SPOOL LOG TO '/tmp/restore.log';
set echo on;

run { 
            restore database root ;
            restore database FASTDB_PDB1;
            restore database "PDB$SEED";
     }

I went through the output, and I can see that it only restored the CDB, my PDB, and the PDB$SEED.


Step #4 - execute report schema and review file locations


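The listing below is the output of RMAN's report schema command, run while the database is still mounted:

RMAN> report schema;
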
List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    1040     SYSTEM               YES     +DATA/FASTDB_67S_IAD/DATAFILE/system.283.1144351313
3    970      SYSAUX               NO      +DATA/FASTDB_67S_IAD/DATAFILE/sysaux.284.1144351305
4    95       UNDOTBS1             YES     +DATA/FASTDB_67S_IAD/DATAFILE/undotbs1.280.1144351303
5    410      PDB$SEED:SYSTEM      NO      +DATA/FASTDB_67S_IAD/F9D6EA8CCAA09630E0530905F40A5107/DATAFILE/system.264.1143303695
6    390      PDB$SEED:SYSAUX      NO      +DATA/FASTDB_67S_IAD/F9D6EA8CCAA09630E0530905F40A5107/DATAFILE/sysaux.265.1143303695
7    50       PDB$SEED:UNDOTBS1    NO      +DATA/FASTDB_67S_IAD/F9D6EA8CCAA09630E0530905F40A5107/DATAFILE/undotbs1.266.1143303695
8    410      FASTDB_PDB1:SYSTEM   YES     +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/DATAFILE/system.291.1144351333
9    410      FASTDB_PDB1:SYSAUX   NO      +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/DATAFILE/sysaux.292.1144351331
10   70       FASTDB_PDB1:UNDOTBS1 YES     +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/DATAFILE/undotbs1.281.1144351329
11   5        USERS                NO      +DATA/FASTDB_67S_IAD/DATAFILE/users.285.1144351303
12   5        FASTDB_PDB1:USERS    NO      +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/DATAFILE/users.295.1144351329
13   420      RMANPDB:SYSTEM       YES     +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/system.285.1143999311
14   420      RMANPDB:SYSAUX       NO      +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/sysaux.282.1143999317
15   50       RMANPDB:UNDOTBS1     YES     +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/undotbs1.281.1143999323
16   5        RMANPDB:USERS        NO      +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/users.284.1143999309
17   100      RMANPDB:RMANDATA     NO      +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/DATAFILE/rmandata.280.1144001911

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       +DATA/FASTDB_67S_IAD/TEMPFILE/temp.263.1143304005
2    131      PDB$SEED:TEMP        32767       +DATA/FASTDB_67S_IAD/017B5DDEB84167ACE063A100000AD816/TEMPFILE/temp.267.1143303733
4    224      FASTDB_PDB1:TEMP     4095        +DATA/FASTDB_67S_IAD/017B7B0563F0410FE063A100000A1C63/TEMPFILE/temp.272.1143304235
6    224      RMANPDB:TEMP         4095        +DATA/FASTDB_67S_IAD/021D506D8C7ADC01E063A100000A8702/TEMPFILE/temp.283.1143999305





Step #5 - Determine tablespaces to skip during recovery

I ran this on my primary database, and used it to build the RMAN command. This query will get the names of the tablespaces that are not part of this PDB so that I can ignore them.



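-- list tablespaces that belong to PDBs other than FASTDB_PDB1,
-- formatted as 'PDB':TABLESPACE entries for the RMAN SKIP FOREVER clause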
select '''' ||pdb_name||''':'||tablespace_name ||',' 
    from cdb_tablespaces a,
         dba_pdbs b
         where a.con_id=b.con_id(+)
         and b.pdb_name not in ('FASTDB_PDB1')
order by 1;

From the above, I built the script below that skips the tablespaces for the PDB "RMANPDB".



recover database skip forever tablespace 
'RMANPDB':RMANDATA,
'RMANPDB':SYSAUX,
'RMANPDB':SYSTEM,
'RMANPDB':TEMP,
'RMANPDB':UNDOTBS1,
'RMANPDB':USERS;
And then I ran my RMAN script to recover the datafiles that were restored.
NOTE: the datafiles for my second PDB were "offline dropped".


Starting recover at 08-AUG-23
RMAN-06908: warning: operation will not run in parallel on the allocated channels
RMAN-06909: warning: parallelism require Enterprise Edition
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=3771 device type=DISK
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=4523 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=19.0.0.1
channel ORA_SBT_TAPE_1: starting incremental datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: +DATA/FASTDB_67S_IAD/DATAFILE/system.283.1144351313


...

Executing: alter database datafile 13, 14, 15, 16, 17 offline drop
starting media recovery

channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=26
channel ORA_SBT_TAPE_1: reading from backup piece FASTDB_1292000107_5m23a29f_182_1_1_20230808_1144326447
channel ORA_SBT_TAPE_1: piece handle=FASTDB_1292000107_5m23a29f_182_1_1_20230808_1144326447 tag=TAG20230808T122727
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:01
archived log file name=+RECO/FASTDB_67S_IAD/ARCHIVELOG/2023_08_08/thread_1_seq_26.2389.1144352807 thread=1 sequence=26
channel default: deleting archived log(s)
archived log file name=+RECO/FASTDB_67S_IAD/ARCHIVELOG/2023_08_08/thread_1_seq_26.2389.1144352807 RECID=1 STAMP=1144352806
media recovery complete, elapsed time: 00:00:01
Finished recover at 08-AUG-23


Step #6 - Open database 

I opened the database and the PDB
SQL> alter database open;

Database altered.


SQL> alter pluggable database fastdb_pdb1 open;

Pluggable database altered.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 FASTDB_PDB1                    READ ONLY  NO
         4 RMANPDB                        MOUNTED


I also went and updated my init{sid}.ora to point to the controlfile that I restored.


Step #7 - Create shell PDB in the tooling

I created a new PDB that has the same name as the PDB I am going to plug in.




Step #8 - Switch my restored database to be a primary database

I found that the database was considered a standby database, and I needed to make it a primary to unplug my PDB.




SQL> RECOVER MANAGED STANDBY DATABASE FINISH;
Media recovery complete.
SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
--------------------
TO PRIMARY

SQL> alter database commit to switchover to primary with session shutdown;



Database altered.


Step #9 - Unplug my PDB

I opened the database and unplugged my PDB.

SQL> alter database open;

Database altered.

SQL> alter pluggable database fastdb_pdb1 unplug into '/tmp/fastdb_pdb1.xml' ENCRYPT USING transport_secret;


Pluggable database altered.

SQL> drop pluggable database fastdb_pdb1 keep datafiles;

Pluggable database dropped.



Step #10 - Drop the placeholder PDB from the new CDB

Now I am unplugging and dropping the placeholder PDB.

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 LAST21C_PDB1                   READ WRITE NO
         4 CLONED_FASTDB                  READ WRITE NO
SQL> alter pluggable database CLONED_FASTDB close;

Pluggable database altered.


SQL> alter pluggable database CLONED_FASTDB unplug into '/tmp/CLONED_FASTDB.xml' ENCRYPT USING transport_secret;

Pluggable database altered.

SQL> drop pluggable database CLONED_FASTDB keep datafiles;

Pluggable database dropped.



Step #11 - Plug in the PDB and open it up




SQL> create pluggable database CLONED_FASTDB USING '/tmp/fastdb_pdb1.xml' keystore identified by W3lCom3#123#123 decrypt using transport_secret
  2  NOCOPY
  3  TEMPFILE REUSE;

Pluggable database created.

SQL> alter pluggable database cloned_fastdb open;



That's it.  It took a bit to track down the instructions, but this all seemed to work.



Creating dynamic KEEP archival backups from ZDLRA


This post covers how to utilize the new DBMS_RA.CREATE_ARCHIVAL_BACKUP procedure to dynamically create KEEP archival backups from a ZDLRA.

When using this package to schedule KEEP backups, I recommend creating restore points with every incremental backup.  Read this blog post to find out why.

 The documentation can be found here.  

These archival KEEP backups can be sent to either

  • TAPE - Using the copy-to-tape process you can send archival backups to physical tape, virtual tape, or any media manager that uses a "TAPE" backup type.
  • CLOUD - Using the copy-to-cloud process you can send archival backups to an OCI object store bucket which can be either on a local ZFSSA (using the OCI API protocol), or to the Oracle Cloud directly.



NOTE: When sending backups to a cloud location, retention rules can be set on the bucket LOCKING the cloud backups to ensure that they are immutable.  This is integrated with the new compliance settings on the RA21.



How to use this package

1. Identify the Database

Because this is more of an on-demand process, you have to execute the procedure for each database separately (rather than by using a protection policy), and identify for each database the point-in-time you want to use for recovery.

2. Set Archival Restore Point

Because the archival backup is dynamically created using existing backups, the restore point works differently than if you create the KEEP backup on demand from the protected database.


When you create a KEEP backup from the protected database, the backup contains 

    • Full backup of all datafiles
    • Backup of spfile and controlfile
    • Backup of archive logs created during the backup starting with a log switch at the beginning of the backup.
    • Final archive logs created by performing a log switch at the end of the backup.

 When you create an Archival backup from the ZDLRA , the backup contains

    • Most current virtual full backup of each datafile prior to the point in time for recovery that you choose. 
    • Backup of spfile and controlfile 
    • Backup of the active archive logs generated when the oldest virtual full datafile backup started, up to the archive logs needed to recover until the point in time chosen for recovery.

As you can see, a normal KEEP backup generated by the protected database is a "self-contained" backup that can be recovered only to the point in time that the backup completed.  You can extend the recovery point by adding additional KEEP archive log backups after the backup.

The dynamically created KEEP backup generated by the ZDLRA is also a "self-contained" backup, but it can be recovered to any point in time from the completion of the last datafile backup up to the restore point identified.

Choices for a dynamic restore point 

 There are 3 options to choose a specific restore point. If you do not set one of these options, the KEEP backup will be created using the current restore point of the database.

  • RESTORE_POINT - If you set a unique restore point in the database immediately following an incremental backup (or at a later point in time), you can create a KEEP backup that will recover to that point-in-time.  This restore point name is used as the default RESTORE_TAG, and should be unique.  The recommended name (because it is the default KEEP restore tag) is "<KEEP_BACKUP_><yyyyMMddHH24miSS>".  By using a restore point, you can better control the amount of archive logs necessary to recover the database.

 

    • Incremental forever backups ensure that the duration of the backup is much shorter than a typical full KEEP backup, limiting the amount of archive logs necessary to have a recovery point.
    • Setting a restore point immediately following the backup ensures that the recovery window following the last datafile backup piece is short, also limiting the amount of archive logs necessary.

 

  • RESTORE_UNTIL_SCN or RESTORE_UNTIL_TIME - I am grouping these 2 choices together because they are so similar.  Unlike using a restore point that is preset, using either of these options will create the KEEP archival backup with a recovery point at the SCN given or the until-time given (using the database's timezone).


  • FROM_TAG - The documentation states that only backups containing the FROM_TAG will be considered if a FROM_TAG is set. I am thinking this would make sense if you let the restore point default to the current time, and you want to choose which backup pieces to include.  I am not sure of the full use of this option, however.

 RECOMMENDATIONS 

  • Because you can create up to 2048 RESTORE_POINTs in a database, and normal restore points are automatically dropped when necessary, I would recommend creating a restore point following each incremental backup with the format mentioned above (as sketched after this list). This will allow you to create a self-contained full KEEP backup from any incremental backup as needed. This can be used to easily create an end-of-month KEEP backup (for example).

 

  • I would use the RESTORE_UNTIL options when it is necessary to create a KEEP backup as of a specific point-in-time regardless of when the backup completed. This would be used if the recovery point is critical.
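
As referenced above, setting the restore point right after each incremental backup is a single statement on the protected database; a minimal sketch (the timestamp suffix is illustrative):

SQL> create restore point KEEP_BACKUP_20231201063000;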


3. Set Archival Options


COMPRESSION_ALGORITHM - The default is no compression, and if the backup piece is already compressed, it will not try to compress the backup again.  The documentation does a good job of going through the options, and why you would choose one or the other.  Keep in mind that if your database uses TDE for all the datafiles, there will be no gain with compression, and the extra resources required for compression may slow down the restore.  Also, the compression is performed by the ZDLRA (RMAN compression), but the de-compression is performed by the protected database during restore.

 ENCRYPTION_ALGORITHM - The default is no encryption, but it is important to understand that any copy-to-cloud processing MUST have encryption set.  It is also important to understand that the ZDLRA must be using OKV (Oracle Key Vault) to store the encryption keys when encryption is set. The list of algorithms can be found in the documentation.

 

4. Set Archival Location and Name

ATTRIBUTE_SET_NAME - This must be specified, and this identifies the backup location to send the archival backups.

FORMAT - By default the backup pieces are given handles automatically generated by the ZDLRA; this setting allows you to change the default backup piece format using normal RMAN formatting options.

AUTOBACKUP_PREFIX - By default the autobackup pieces will retain their original names, but you can add a prefix to the original autobackup names.

 

5. Set Restore TAG

The RESTORE_TAG defaults to "<KEEP_BACKUP_><yyyyMMddHH24miSS>". This can be overridden to give the backup a more meaningful tag. For example, the end-of-month backup could be tagged as "MONTHLY_12_2023", making it easier to automate finding specific KEEP backups.

 RECOMMENDATIONS 

I would set the Restore Tag to a set format that makes the KEEP backups easy to find. You can see the example above. 

6. Set KEEP_UNTIL time

The default KEEP_UNTIL time is "FOREVER". In most cases you want to set an end date for the backup, allowing the ZDLRA to automatically remove the backup when it expires.  This date-time is based on the timezone of the protected database. 
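
Pulling these options together, a call to create the archival backup might look like the sketch below. This is only a sketch: it assumes you are connected to the ZDLRA database as the owner of DBMS_RA; the database name, attribute set, restore point, and tag are hypothetical; and the exact parameter names and types should be confirmed against the DBMS_RA documentation for your ZDLRA release.

BEGIN
  DBMS_RA.CREATE_ARCHIVAL_BACKUP(
    db_unique_name        => 'MYDB_UNIQ',                    -- protected database (hypothetical)
    restore_point         => 'KEEP_BACKUP_20231201063000',   -- restore point set after the incremental backup
    attribute_set_name    => 'MYCLOUD_BUCKET_1',             -- cloud/tape attribute set (hypothetical)
    compression_algorithm => 'LOW',
    encryption_algorithm  => 'AES256',                       -- required for copy-to-cloud
    restore_tag           => 'MONTHLY_12_2023',
    keep_until_time       => ADD_MONTHS(SYSTIMESTAMP, 13));  -- expire automatically after 13 months
END;
/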



 SUMMARY 

 If using this functionality to dynamically create Archival KEEP backups...

  • I would set a Restore Point in each database immediately following every incremental backup.  
  • I would schedule this procedure to create the archival backup with a formatted restore tag to make the backup easy to find.
  • If backing up to a CLOUD location, I would use retention rules to ensure the backups are immutable until they expire.

 

 

ZFS storing encryption keys in Oracle Key Vault (OKV)


ZFS can be configured to use Oracle Key Vault (OKV) as a KMIP cluster to store its encryption keys. In this blog post I will go through how to configure my ZFS replication pair to utilize my OKV cluster and take advantage of the Raw Crypto Replication mode introduced in 8.8.57.


OKV Cluster Environment:

First I am going to describe the environment I am using for my OKV cluster.

I have 2 OKV servers, OKVEAST1 ( IP:10.0.4.230)  and OKVEAST2 (IP: 10.0.4.254). These OKV servers are both running 21.6 (the current release as of writing this post).


ZFS replication Pair:

For my ZFS pair, I am using a pair of ZFS hosts that I have been running for a while.  My first ZFS host is "testcost-a" (IP: 10.0.4.45) and my second ZFS host is "zfs_s3" (IP: 10.0.4.206).  Both of these servers are running the 8.8.60 release.

For my replication, I already have "testcost-a" configured as my upstream, and "zfs_s3" configured as the downstream.

Steps to configure encryption using OKV

Documentation:

The documentation I am using to configure ZFS can be found in the 8.8.x Storage Administrator's Guide.  I did look through the documentation for OKV, and I didn't find anything specific that needs to be done when using OKV as a KMIP server.

Step #1 - Configure endpoints/wallets in OKV

The first step is to create 2 endpoints in OKV and assign a shared wallet between these 2 endpoints. 

I am starting by creating a single wallet that I am going to use to share the encryption keys between the 2 ZFS servers in my replication pair.


The next step after creating the wallet is to create the 2 endpoints. Each ZFS host is an endpoint. Below is the screenshot for adding the first node.


After creating both endpoints I see them in the OKV console.


Then I click on each endpoint and ensure that 

  • The default wallet for each endpoint is the "ZFS_ENCRYPTION_KEYS" wallet
  • The endpoint has the ability to manage this wallet.



Then I go back to endpoint list in the console and save the "enrollment token" for each node and logout.

Server                    Enrollment Token

ZFS_S3       FdqkaimSpCUBfVqV

TESTCOST-A         uy59ercFNjBisU12

I then go to the main screen for OKV and click on the enrollment token download



Enter the Enrollment Token and click on "Submit Token"


You see that the token is validated. Then click on Enroll and it will download the token "okvclient.jar" which I am renaming to okvclient_{zfs server}.zip.  This will allow me to extract the certificates.

When completed, I have enrolled the endpoints and I am ready to add them to the ZFS.


Step #2 - Add the Certificates 

When I look at the .jar files that were created for the endpoints I can see all the files that are included in the endpoint enrollment. I need to add the certificates to the ZFS servers.  I can find those in the "ssl" directory contained in the .jar file.



I start by uploading the "key.pem" for my first ZFS "testcost-a" in the Configuration=>SETTINGS=>Certificates=>System section of the BUI.


After uploading it I then add the "cert.pem" certificate in system also.


After uploading, I clicked on the pencil to see the details for the certificate.

NOTE: The IP Address is the primary node in my OKV cluster.

Under Certificates=>Trusted I uploaded the CA.pem certificate.



After uploading this certificate, I click on the pencil and select "kmip" identifying this certificate to be used for the KMIP service.


The certificate should now appear as a trusted KMIP services certificate.



I can now upload the certificates for my other ZFS server (zfs_s3) the same way.


Step #3 - Add the OKV/KMIP service

I now navigate to the Shares=>ENCRYPTION=>KMIP section of the BUI to add the KMIP servers to my first ZFS host.  Because I have 2 possible KMIP servers (I am using an OKV cluster), I am going to uncheck the "Match Hostname against certificate subject" button.  I left the default to destroy the key when removing it from the ZFS.

I added the 2 OKV servers (if I had more than 2 nodes in my cluster I would add those nodes also).  I added the port used for KMIP services on OKV (5696), and I chose the "Client TLS Authentication Certificate" I uploaded in the previous step (FLxULFbeMO).




I perform the same process on my second ZFS so that the paired ZFS servers are all configured to communicate with my OKV cluster to provide KMIP services.

NOTE: If you want to get the list of OKV hosts in the cluster you can look in the .jar file within the conf=>install.cfg file to see the OKV servers details. Below is the contents of my file.



Once I add the KMIP configuration to both of my ZFS servers I can look at my endpoints in OKV and see that they are both ENROLLED, and that OKV knows the IP address of my ZFS servers.



Step #4 - Add one or more keys.

On my upstream ZFS, I click on the "+" to add a new key and save it.


After adding it, the key appears in this section.




Step #5 - Add the keys to the shared wallet

I noticed that even though the wallet is the default wallet for the endpoints, the key did not get added to the wallet. I can see that both nodes have access to manage the wallet.






I clicked on the wallet, and then "Add Contents"; from there I am adding my new key to the wallet.



And now I log into the second ZFS (zfs_s3) and add the same key.  Make sure you add the key with the same name on the second ZFS so that they match.

Step #6 - Create a new encrypted project/share

On my first ZFS (upstream - testcost-a) I am creating a new project and share that is encrypted using the key from the KMIP service.



Then within the project, I configure replication to my paired ZFS.
And now I am creating a share within this project.


Step #7 - Configure replication

Finally I configured replication from my project in my upstream (testcost-a) to my downstream (zfs_s3).  Below are the settings for my replication processing to send a snapshot every 10 minutes.  Notice that I made sure that I did NOT disable raw Crypto Mode (which is what I am using for this replication).  You can follow this link to learn more about Raw Crypto Replication.



Result:


I now have replication on my encrypted share working between my upstream and downstream.  With this new feature, the blocks are sent in their original encrypted format, and are stored on the downstream encrypted.  Since both ZFS servers can access the encryption key, both servers are able to decrypt the blocks.

I did test shutting down one of my OKV hosts, and found that the ZFS servers were able to successfully connect to the surviving node.

I even mounted the share, stored some files, replicated it, mounted a snapshot copy, and ensured that both ZFS servers presented the share as readable.

DG per PDB with 23c on DBCS (Oracle Base Database)

At Oracle Cloud World (a few weeks ago as of this writing) 23c was announced as being available with DBCS.
I've been wanting to test out DGPDB, which is Data Guard per PDB. This new feature was introduced with 21c and is covered in detail in this blog here.


Since I cover a lot of topics around backup, recovery, and availability this feature intrigued me.

NOTE: DGPDB is NOT supported by the tooling on DBCS, and if you test it in your environment, as I am going to do here, your databases will not be supported.

First I am going to start with the documentation which can be found here.

My Environment :

I created 2 database services in DBCS, both of which are 23c databases that are TDE encrypted (since they are in DBCS), and I am using a locally managed key (local wallet file).


CDB Name   DB Unique Name   PDB Name
cdb1       cdb1_c83_iad     pdb1
cdb2       cdb2_cdx_iad     cdb2_pdb1


Step # 1 - Prepare the 2 CDBs

The first step is to prepare the 2 CDBs and follow the steps in the documentation

On both databases execute
  • ALTER DATABASE FORCE LOGGING;
  • ALTER DATABASE FLASHBACK ON;

On the source database execute
  • alter system set dg_broker_start=true scope=both;
  • alter system set log_archive_dest_1="LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=cdb1_c83_iad";
  • alter system set standby_file_management=auto scope=both;
On the destination database execute
  • alter system set dg_broker_start=true scope=both;
  • alter system set log_archive_dest_1="LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=cdb2_cdx_iad";
  • alter system set standby_file_management=auto scope=both;

Step # 2 - Configure tnsnames.ora and SEPS wallets

First I am going to ensure that both tnsnames.ora files contain entries for both CDBs (a sketch of the entries follows this list).
  • CDB1_C83_IAD
  • CDB2_CDX_IAD
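
As a sketch, the entries look something like this (the host names and domain are placeholders for my environment):

CDB1_C83_IAD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cdb1-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = cdb1_c83_iad.mydomain.vcn.oraclevcn.com))
  )

CDB2_CDX_IAD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cdb2-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = cdb2_cdx_iad.mydomain.vcn.oraclevcn.com))
  )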
Next I create a SEPS wallet on both nodes, and add entries for both databases
mkdir -p $ORACLE_HOME/dbs/wallets/dgpdb
mkstore -wrl $ORACLE_HOME/dbs/wallets/dgpdb -createALO
mkstore -wrl $ORACLE_HOME/dbs/wallets/dgpdb -createCredential cdb1_c83_iad sys {password}
mkstore -wrl $ORACLE_HOME/dbs/wallets/dgpdb -createCredential cdb2_cdx_iad sys {password}
mkstore -wrl $ORACLE_HOME/dbs/wallets/dgpdb -listCredential


Then I update the sqlnet.ora to use this wallet file.

WALLET_LOCATION =
    (SOURCE =
      (METHOD = FILE)
      (METHOD_DATA =
        (DIRECTORY = /u01/app/oracle/product/23.0.0.0/dbhome_1/dbs/wallets/dgpdb)
    )
)
SQLNET.WALLET_OVERRIDE = TRUE

And finally I test the connection
  • sqlplus /@cdb1_c83_iad as sysdba
  • sqlplus /@cdb2_cdx_iad as sysdba

Step # 3 - Create source and target configuration

Configure the source


dgmgrl /@cdb1_c83_iad <<EOF

CREATE CONFIGURATION 'cdb1_c83_iad' AS CONNECT IDENTIFIER IS cdb1_c83_iad; 
SHOW CONFIGURATION; 

EOF

Configure the target
dgmgrl /@cdb2_cdx_iad <<EOF

CREATE CONFIGURATION 'cdb2_cdx_iad' AS CONNECT IDENTIFIER IS cdb2_cdx_iad; 
SHOW CONFIGURATION; 

EOF

Step # 4 - Establish the connection between configurations

I ran the step in the documentation to configure the connection.


dgmgrl /@cdb1_c83_iad <<EOF

ADD CONFIGURATION 'cdb2_cdx_iad' CONNECT IDENTIFIER IS cdb2_cdx_iad; 
SHOW CONFIGURATION; 
ENABLE CONFIGURATION ALL; 
SHOW CONFIGURATION; 
EOF


And I validated that the output looked correct.

Configuration - cdb1_c83_iad

  Protection Mode: MaxPerformance
  Members:
  cdb1_c83_iad - Primary database
  cdb2_cdx_iad - Primary database in cdb2_cdx_iad configuration

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 1 second ago)

Step # 5 - Prepare the databases for DGPDB

The first step was to ensure that the PDBs are open in both databases, which they are.

On the source (cdb1):

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
SQL>

On the target (cdb2):

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CDB2_PDB1                      READ WRITE NO
SQL>


Then prepare the databases for DGPDB

 dgmgrl /@cdb1_c83_iad
DGMGRL for Linux: Release 23.0.0.0.0 - Production on Thu Sep 28 17:40:38 2023
Version 23.3.0.23.09

Copyright (c) 1982, 2023, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "cdb1_c83_iad"
Connected as SYSDBA.
DGMGRL> EDIT CONFIGURATION PREPARE DGPDB;
Enter password for DGPDB_INT account at cdb1_c83_iad:
Enter password for DGPDB_INT account at cdb2_cdx_iad:

Prepared Data Guard for Pluggable Database at cdb2_cdx_iad.

Prepared Data Guard for Pluggable Database at cdb1_c83_iad.
DGMGRL>


Step # 6 - Restart the listeners and databases with the wallet location


On both the source and destination, I performed an "su - grid" and restarted the listeners.
I also restarted both databases. Until I restarted the databases, I kept getting the error message:

 "Error: ORA-12578: A wallet file was not found or failed to open."


Step # 7 - Add the pluggable database to the destination

The next step is to add the pluggable database to the destination in dgmgrl.

DGMGRL> add pluggable database pdb1_stby at cdb2_cdx_iad source is pdb1 at cdb1_c83_iad  pdbfilenameconvert is "'+DATA/CDB1_C83_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/','+DATA/CDB2_CDX_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/'"  ;
ORA-46697: Keystore password required.
Help: https://docs.oracle.com/error-help/db/ora-46697/


At this point by following the documentation, adding the pluggable database failed with  "ORA-46697: Keystore password required."

After some digging I found that I need to pass the wallet password (which is my sys password), and that the phrase needs to be in single quotes.


DGMGRL> add pluggable database pdb1_stby at cdb2_cdx_iad source is pdb1 at cdb1_c83_iad  pdbfilenameconvert is "'+DATA/CDB1_C83_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/','+DATA/CDB2_CDX_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/'"'keystore  IDENTIFIED BY "W3lCom3#123#123"';

Pluggable Database "PDB1_STBY" added


Step # 8 - List datafiles in the source PDB
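
For reference, a query along these lines produces the listing below (a sketch, run while connected to the source PDB):

SQL> alter session set container=pdb1;
SQL> select file#, name from v$datafile order by file#;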


     FILE#  NAME
----------  ------------------------------------------------------------------------------------------------------------------------
         8  +DATA/CDB1_C83_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/system.273.1148746431
         9  +DATA/CDB1_C83_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/sysaux.270.1148746437
        10  +DATA/CDB1_C83_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/undotbs1.271.1148746445
        12  +DATA/CDB1_C83_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/users.274.1148746543

Step # 9 - Copy the datafiles to the destination


First I logged onto the primary database and closed the pluggable database.

 alter pluggable database pdb1 close;

I decided to use DBMS_FILE_TRANSFER.GET_FILE

Step 9a - create directory on source pointing to datafiles


SQL*Plus: Release 23.0.0.0.0 - Production on Fri Sep 29 11:33:11 2023
Version 23.3.0.23.09

Copyright (c) 1982, 2023, Oracle.  All rights reserved.


Connected to:
Oracle Database 23c EE High Perf Release 23.0.0.0.0 - Production
Version 23.3.0.23.09

SQL> create directory SOURCE_DUMP as '+DATA/CDB1_C83_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/';

Directory created.

SQL> grant read,write on directory SOURCE_DUMP to public;

Grant succeeded.

SQL>

Step 9b - Log onto ASM and precreate directory

ASMCMD> mkdir +data/CDB2_CDX_IAD/066E9679DE2850DCE063A104000ABBA7/datafile


Step 9c - create directory on target pointing to datafiles

[oracle@cdb1 admin]$ sqlplus /@cdb2_cdx_iad as sysdba

SQL*Plus: Release 23.0.0.0.0 - Production on Fri Sep 29 11:33:44 2023
Version 23.3.0.23.09

Copyright (c) 1982, 2023, Oracle.  All rights reserved.


Connected to:
Oracle Database 23c EE High Perf Release 23.0.0.0.0 - Production
Version 23.3.0.23.09

SQL> create directory TARGET_DUMP as '+DATA/CDB2_CDX_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/';

Directory created.

SQL> grant read,write on directory TARGET_DUMP to public;

Grant succeeded.

Step 9d - create database link on target  pointing to source database


SQL> create public database link SOURCEDB connect to system identified by {password} using 'CDB1_C83_IAD';

Database link created.


SQL> select sysdate from dual@SOURCEDB;
select sysdate from dual@SOURCEDB
                         *
ERROR at line 1:
ORA-02085: database link SOURCEDB.ZFSADMIN.VCN.ORACLEVCN.COM connects to
CDB1.ZFSADMIN.VCN.ORACLEVCN.COM
Help: https://docs.oracle.com/error-help/db/ora-02085/


The ORA-02085 error is because global_names is set to TRUE, which requires the database link name to match the remote database's global name. Rather than rename the link, I turned global_names off.

SQL> show parameter global

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
allow_global_dblinks                 boolean     FALSE
global_names                         boolean     TRUE
global_txn_processes                 integer     1
SQL> alter system set global_names=false;

System altered.

SQL> select sysdate from dual@SOURCEDB;

SYSDATE
---------
29-SEP-23

SQL>

Step 9e - Get the datafiles from the source


SQL> set timing on
SQL> BEGIN
       dbms_file_transfer.get_file('SOURCE_DUMP',
                                   'system.273.1148746431',
                                   'SOURCEDB',
                                   'TARGET_DUMP',
                                   'SYSTEM.DBF');
     END;
     /

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.21
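
I then repeated the same GET_FILE call for the remaining datafiles from the Step #8 listing. A sketch (the target file names are my own convention, matching the rename step below):

BEGIN
  dbms_file_transfer.get_file('SOURCE_DUMP','sysaux.270.1148746437','SOURCEDB','TARGET_DUMP','SYSAUX.DBF');
  dbms_file_transfer.get_file('SOURCE_DUMP','undotbs1.271.1148746445','SOURCEDB','TARGET_DUMP','UNDOTBS1.DBF');
  dbms_file_transfer.get_file('SOURCE_DUMP','users.274.1148746543','SOURCEDB','TARGET_DUMP','USERS.DBF');
END;
/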

Step # 10 - Rename datafiles in the destination.


List the placeholder datafiles in the destination database:
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CDB2_PDB1                      READ WRITE NO
         4 PDB1_STBY                      MOUNTED
SQL> alter session set container=pdb1_stby;

Session altered.

Elapsed: 00:00:00.02
SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATA/MUST_RENAME_THIS_DATAFILE_8.4294967295.4294967295
+DATA/MUST_RENAME_THIS_DATAFILE_9.4294967295.4294967295
+DATA/MUST_RENAME_THIS_DATAFILE_10.4294967295.4294967295
+DATA/MUST_RENAME_THIS_DATAFILE_12.4294967295.4294967295

Elapsed: 00:00:00.02
SQL>

Match the file number in the destination files to the source datafiles and rename them.

SQL> alter database rename file '+DATA/MUST_RENAME_THIS_DATAFILE_8.4294967295.4294967295' to '+DATA/CDB2_CDX_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/SYSTEM.dbf';

Database altered.

Elapsed: 00:00:00.04
SQL> alter database rename file '+DATA/MUST_RENAME_THIS_DATAFILE_9.4294967295.4294967295' to '+DATA/CDB2_CDX_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/SYSAUX.dbf';

Database altered.

Elapsed: 00:00:00.02

SQL> alter database rename file '+DATA/MUST_RENAME_THIS_DATAFILE_10.4294967295.4294967295' to '+DATA/CDB2_CDX_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/UNDOTBS1.dbf';

Database altered.

Elapsed: 00:00:00.05
SQL> alter database rename file '+DATA/MUST_RENAME_THIS_DATAFILE_12.4294967295.4294967295' to '+DATA/CDB2_CDX_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/USERS.dbf';

Database altered.

Elapsed: 00:00:00.03
SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATA/CDB2_CDX_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/system.dbf
+DATA/CDB2_CDX_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/sysaux.dbf
+DATA/CDB2_CDX_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/undotbs1.dbf
+DATA/CDB2_CDX_IAD/066E9679DE2850DCE063A104000ABBA7/DATAFILE/users.dbf

Elapsed: 00:00:00.02
SQL>


Step # 11 - Bring over the TDE keys

Export the encryption keys from the source database

SQL> ADMINISTER KEY MANAGEMENT
  EXPORT ENCRYPTION KEYS WITH SECRET "export_secret"
  TO '/tmp/mykey'
  FORCE KEYSTORE
  IDENTIFIED BY "{password}";

keystore altered.



Import them into the target database
ADMINISTER KEY MANAGEMENT
  IMPORT KEYS WITH SECRET "export_secret"
  FROM '/tmp/mykeypdb1'
  force keystore
  IDENTIFIED BY "W3lCom3#123#123"
  WITH BACKUP;

keystore altered.


List the encryption keys on the  source database
PDB Name        Key ID                              Master Key ID
--------------- ----------------------------------- -------------------------
CDB$ROOT        35B424EFDB6D4F19BFF31B5D3DE0BFFA    ATW0JO/bbU8Zv/MbXT3gv/o=
PDB$SEED        00000000000000000000000000000000    AQAAAAAAAAAAAAAAAAAAAAA=
PDB1            1AAD19EB6F364F3BBF6FE226A25AF1E5    ARqtGetvNk87v2/iJqJa8eU=


List the encryption keys on the destination database
Master Key ID                                           Tag                  PDB Name        KEYSTORE_TYPE     Origin     Key Creation Time  Key Act. Time
------------------------------------------------------- -------------------- --------------- ----------------- ---------- ------------------ ------------------
AZ5CKuJLrU/Nv5M5pXQCFcEAAAAAAAAAAAAAAAAAAAAAAAAAAAAA                         CDB$ROOT        SOFTWARE KEYSTORE LOCAL      09/28/2023 09:09   09/28/2023 09:09
ATW0JO/bbU8Zv/MbXT3gv/oAAAAAAAAAAAAAAAAAAAAAAAAAAAAA                         CDB$ROOT        SOFTWARE KEYSTORE LOCAL      09/28/2023 09:10   09/28/2023 09:10
AVAymCcev0/Pv94Tiqw/P50AAAAAAAAAAAAAAAAAAAAAAAAAAAAA                         CDB2_PDB1       SOFTWARE KEYSTORE LOCAL      09/28/2023 09:12   09/28/2023 09:12
ARqtGetvNk87v2/iJqJa8eUAAAAAAAAAAAAAAAAAAAAAAAAAAAAA                         CDB2_PDB1       SOFTWARE KEYSTORE LOCAL      09/28/2023 09:13   09/28/2023 09:13




Step # 12 - Add the standby redo logs to the PDB and validate

I created standby redo logs in the PDB on the target database.

SQL> select 'ALTER DATABASE ADD STANDBY LOGFILE thread ' ||a.thread# || ' group 1' ||a.group# || ' (''' ||substr(b.member,1,35) || 'standby_pdb1_1' || a.group# ||  ''') size ' || a.bytes ||';'
  from v$log a, v$logfile b
  where a.group#=b.group#;

'ALTERDATABASEADDSTANDBYLOGFILETHREAD'||A.THREAD#||'GROUP1'||A.GROUP#||'('''||SUBSTR(B.MEMBER,1,35)||'STANDBY_PDB1_1'||A.GROUP#||'
----------------------------------------------------------------------------------------------------------------------------------
ALTER DATABASE ADD STANDBY LOGFILE thread 1 group 13 ('+RECO/CDB2_CDX_IAD/ONLINELOG/group_standby_pdb1_13') size 1073741824;
ALTER DATABASE ADD STANDBY LOGFILE thread 1 group 12 ('+RECO/CDB2_CDX_IAD/ONLINELOG/group_standby_pdb1_12') size 1073741824;
ALTER DATABASE ADD STANDBY LOGFILE thread 1 group 11 ('+RECO/CDB2_CDX_IAD/ONLINELOG/group_standby_pdb1_11') size 1073741824;

SQL> alter session set container=PDB1_STBY;

Session altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE thread 1 group 4 ('+RECO/CDB2_CDX_IAD/ONLINELOG/group_standby_pdb1_13') size 1073741824;

Database altered.


SQL> ALTER DATABASE ADD STANDBY LOGFILE thread 1 group 5 ('+RECO/CDB2_CDX_IAD/ONLINELOG/group_standby_pdb1_12') size 1073741824;

Database altered.

SQL>  ALTER DATABASE ADD STANDBY LOGFILE thread 1 group 6  ('+RECO/CDB2_CDX_IAD/ONLINELOG/group_standby_pdb1_11') size 1073741824;

Database altered.

SQL>

And then I validated the configuration

DGMGRL> VALIDATE PLUGGABLE DATABASE PDB1_STBY AT cdb2_cdx_iad;

  Ready for Switchover:    NO

  Data Guard Role:         Physical Standby
  Apply State:             Not Running
  Standby Redo Log Files:  3
  Source:                  PDB1 (con_id 3) at cdb1_c83_iad



Step # 13 - Change the state of the standby to APPLY-ON

I changed the state of the standby and checked the configuration.


DGMGRL> VALIDATE PLUGGABLE DATABASE PDB1_STBY AT cdb2_cdx_iad;

  Ready for Switchover:    NO

  Data Guard Role:         Physical Standby
  Apply State:             Not Running
  Standby Redo Log Files:  3
  Source:                  PDB1 (con_id 3) at cdb1_c83_iad


DGMGRL> EDIT PLUGGABLE DATABASE PDB1_STBY AT cdb2_cdx_iad SET STATE='APPLY-ON';
Succeeded.
DGMGRL> SHOW CONFIGURATION;

Configuration - cdb2_cdx_iad

  Protection Mode: MaxPerformance
  Members:
  cdb2_cdx_iad - Primary database
    Warning: ORA-16910: inconsistency detected for one or more pluggable databases

  cdb1_c83_iad - Primary database in cdb1_c83_iad configuration

Data Guard for PDB:   Enabled in TARGET role

Configuration Status:
SUCCESS   (status updated 37 seconds ago)

DGMGRL> SHOW PLUGGABLE DATABASE pdb1 AT cdb1_c83_iad;

Pluggable database - PDB1 at cdb1_c83_iad

  Data Guard Role:     Primary
  Con_ID:              3
  Active Target:       con_id 4 at cdb2_cdx_iad

Pluggable Database Status:
SUCCESS

DGMGRL> SHOW PLUGGABLE DATABASE pdb1_stby AT cdb2_cdx_iad ;

Pluggable database - PDB1_STBY at cdb2_cdx_iad

  Data Guard Role:     Physical Standby
  Con_ID:              4
  Source:              con_id 3 at cdb1_c83_iad
  Transport Lag:       20 hours 26 minutes 24 seconds (computed 9 seconds ago)
  Apply Lag:           (unknown)
  Intended State:      APPLY-ON
  Apply State:         Running
  Apply Instance:      cdb2
  Average Apply Rate:  (unknown)
  Real Time Query:     OFF

Pluggable Database Status:
SUCCESS

DGMGRL> VALIDATE PLUGGABLE DATABASE PDB1_STBY AT cdb2_cdx_iad;

  Ready for Switchover:    NO

  Data Guard Role:         Physical Standby
  Apply State:             Waiting for Redo Data
  Standby Redo Log Files:  3
  Source:                  PDB1 (con_id 3) at cdb1_c83_iad

DGMGRL>

Step # 14 - Perform a log switch at both databases



 sqlplus /@cdb1_c83_iad as sysdba

SQL*Plus: Release 23.0.0.0.0 - Production on Fri Sep 29 15:57:52 2023
Version 23.3.0.23.09

Copyright (c) 1982, 2023, Oracle.  All rights reserved.


Connected to:
Oracle Database 23c EE High Perf Release 23.0.0.0.0 - Production
Version 23.3.0.23.09

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

System altered.

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

System altered.

SQL> exit
Disconnected from Oracle Database 23c EE High Perf Release 23.0.0.0.0 - Production
Version 23.3.0.23.09
[oracle@cdb2 admin]$ sqlplus /@cdb2_cdx_iad as sysdba

SQL*Plus: Release 23.0.0.0.0 - Production on Fri Sep 29 15:58:11 2023
Version 23.3.0.23.09

Copyright (c) 1982, 2023, Oracle.  All rights reserved.


Connected to:
Oracle Database 23c EE High Perf Release 23.0.0.0.0 - Production
Version 23.3.0.23.09

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

System altered.

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

System altered.

SQL>


Step # 15 - Validate standby is applying redo.


DGMGRL> SHOW PLUGGABLE DATABASE pdb1_stby AT cdb2_cdx_iad ;

Pluggable database - PDB1_STBY at cdb2_cdx_iad

  Data Guard Role:     Physical Standby
  Con_ID:              4
  Source:              con_id 3 at cdb1_c83_iad
  Transport Lag:       0 seconds (computed 1 second ago)
  Apply Lag:           4 seconds (computed 1 second ago)
  Intended State:      APPLY-ON
  Apply State:         Running
  Apply Instance:      cdb2
  Average Apply Rate:  489 KByte/s
  Real Time Query:     OFF

Pluggable Database Status:
SUCCESS




SUMMARY:

It is all working, and for fun I am going to look at the log archive destination settings on the source database.



NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_1                   string      LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=cdb1_c83_iad
log_archive_dest_2                   string      service="cdb2_cdx_iad", ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 reopen=300 db_unique_name="cdb2_cdx_iad" net_timeout=30, valid_for=(online_logs)


OBSERVATIONS:


I will continue to update this post as I find useful observations.

Observation 1 :

I looked at the archive logs using the v$archived_log view in the target (CDB2).
In it, only a handful of archive logs had been created.
SQL> select thread#,sequence#,dest_id,blocks,status from v$ARCHIVED_LOG;

   THREAD#  SEQUENCE#    DEST_ID     BLOCKS S
---------- ---------- ---------- ---------- -
         1          2          1     230208 A
         1          3          1     896391 A
         1          4          1          7 A
         1          5          1      35474 A
         1          6          1        387 A

I added a couple more PDBs and made changes to a second PDB (that didn't have a standby) and to the original PDB (that had a standby).
I then created a workload in both PDBs and looked at the archive logs on the source.

   THREAD#  SEQUENCE#    DEST_ID     BLOCKS S
---------- ---------- ---------- ---------- -
         1          3          2       4894 A
         1          4          2     567681 A
         1          5          2         14 A
         1          6          2     871136 A
         1          7          2    1756544 A
         1          8          2    1847012 A
         1          9          2    1931501 A
         1         10          2    1901263 A
         1         11          2    1931491 A
         1         12          2    1931837 A
         1         13          2    1465002 A
         1         14          2    1955017 A
         1         15          2    1936946 A
         1         16          2    1685507 A
         1         17          2    1931499 A
         1         18          2    1808038 A
         1         19          2    1874455 A
         1         20          2    1910501 A
         1         21          2    2038138 A
         1         22          2    1812968 A
         1         23          2    1935941 A
         1         24          2    1939459 A
         1         25          2    2030413 A
         1         26          2    1934927 A

24 rows selected.

Then I looked at ASM on the target, and I could see ALL the archive logs on the target.


ASMCMD> ls
thread_1_seq_10.282.1148837541
thread_1_seq_11.283.1148837541
thread_1_seq_12.284.1148837557
thread_1_seq_13.285.1148837667
thread_1_seq_14.287.1148837943
thread_1_seq_15.288.1148837963
thread_1_seq_16.289.1148837979
thread_1_seq_17.290.1148837979
thread_1_seq_18.291.1148838023
thread_1_seq_19.292.1148838023
thread_1_seq_20.293.1148838101
thread_1_seq_21.294.1148838119
thread_1_seq_22.295.1148838137
thread_1_seq_23.296.1148838137
thread_1_seq_24.297.1148838183
thread_1_seq_25.298.1148838213
thread_1_seq_26.299.1148838237
thread_1_seq_3.275.1148831895
thread_1_seq_4.276.1148831899
thread_1_seq_5.274.1148831883
thread_1_seq_5.278.1148837115
thread_1_seq_6.277.1148836711
thread_1_seq_6.286.1148837681
thread_1_seq_7.279.1148837481
thread_1_seq_8.280.1148837499
thread_1_seq_9.281.1148837517



Cyber Vault Characteristics


 One topic that has been coming up over and over this year is Cyber Vault. In this post I am going to go through the characteristics I commonly see when a customer builds a Cyber Vault.  The image below gives you a good idea of what is involved.

Characteristics of a Cyber Vault

  • NTP and DNS services: Because a Cyber Vault is often isolated from the rest of the datacenter, it is critical to have an NTP service.  Proper time management is critical to ensuring backups are kept for the proper retention.  DNS isn't critical, but it is definitely very helpful when configuring infrastructure.  In many cases "/etc/hosts" can get around this, but it is a pain to maintain.
  • Firewalls: Configuring firewalls and isolated networks is critical to ensure the Cyber Vault is isolated.  The vault is often physically in the same datacenter, with network isolation providing the protection.  Be sure to understand what ports, networks, and traffic directions are utilized on all infrastructure so you can properly set firewall rules.
  • Air Gap:  Creating an Air-Gap has become the standard to protect backups in the Cyber Vault. The Air Gap is often open for only a few hours a day at random times to ensure that the opening isn't predictable.  To limit the exposure time, it is critical to maximize the networking into the vault, and minimize the amount of data necessary to transfer.
 NOTE: Not all customers choose to have an Air Gap.  Having an Air Gap that is closed for long periods of time means there is less chance of intrusion, BUT it also guarantees longer periods of data loss when a restoration is performed.  This trade-off is most critical to weigh for databases that are always changing.
  • Break-the-glass: There needs to be control over who gets access into the vault, and an approval process to ensure that all access is planned and controlled.
  • Backup validation: There needs to be a validation process in the vault to ensure that the backups are untouched.  When the backups contain executables, this typically means scanning for ransomware signatures. When the backups are Oracle backups, performing a "restore database validate" is the gold standard for validating them.
  • Clean Room: A clean room is an environment where backups can be tested. This can be a small environment (a server or 2) or it can be large enough to restore and run the whole application.
  • Monitoring and reporting infrastructure: For Oracle this is OEM (Cloud Control). It is critical that any issues are alerted and reported outside the vault.
  • Audit Reports: Audit reports are critical to ensuring that the backups in the Cyber Vault are secured.  Audit reports will capture any changes to the environment, and any issues with the backups themselves.

BONUS: One thing that customers don't often think about is encryption keys.  Implementing TDE on Oracle Databases is an important part of protecting your data from exfiltration. But you should also ensure that you have a secure backup of your encryption keys in the Vault.
OKV (Oracle Key Vault) is the best way of managing the keys for Oracle databases.

Oracle Recovery Service now offers retention lock


 Oracle DB Recovery Service recently added a new feature to protect backups from being prematurely deleted, even by a tenancy administrator.  This new feature adds a retention lock to the Backup Retention Period at the policy level. The image below shows the new settings that you see within the protection policy.

Enabling retention lock

The recovery service comes with some default policies that appear as "oracle defined" policy types

Name       Backup retention period
Platinum   46 days
Gold       65 days
Silver     35 days
Bronze     14 days

These policies can't be changed, and they do not enable retention lock.

In order to implement a retention lock you need to create a new protection policy or  update an existing user defined protection policy.

Step #1 Set/Adjust "Backup retention period"

If you are creating a new "user defined" protection policy, you need to set the backup retention to a number of days between 14 and 95.  You should also take this opportunity to adjust the backup retention of an existing policy, if appropriate, before it is locked.

NOTE: Once a retention lock on the protection policy is activated (discussed in step #3), the backup retention period cannot be decreased, it can only be increased.

Step #2 Click on "enable retention lock"

This step is pretty straightforward. But the most important item to know is that the retention lock is not immediately in effect.  Much like the "retention lock" that is set on object storage, there is a minimum period of at least 14 days before the lock is "active".

 Note:  that once you "enable retention lock" for a policy, it is permanent and cannot be removed.


Step #3 Set "Scheduled lock time"

As I said in the previous step, the lock isn't immediately active. In this step you set the future date/time that the lock becomes active, and this date/time must be at least 14 days in the future.  This provides a grace period that delays when the lock on the policy becomes active. You have up until the lock activation date/time to adjust the scheduled lock time further into the future if it becomes necessary to further delay lock activation.

Grace Period 

I wanted to make sure I explain what happens with this grace period so that you can plan accordingly.

  • If you change an existing "user defined" policy to enable the retention lock, any databases that are a member of this policy will not have locked backups until the scheduled lock date/time activates the lock.  
  • If you add databases to a protection policy that has a retention lock enabled, the backups will not be locked until whichever of the following times is farther in the future:
    • Scheduled lock time for the policy if the retention lock has not yet activated.
    • 14 days after the database is added to the protection policy.
  • Databases can be removed from a retention locked protection policy during this grace period.
  • If the policy itself is still within its grace period before activating, the backup retention period can be adjusted down for the protection policy.
NOTE: This 14 day grace period allows you to review the estimated space needed.  On the protected database summary page, for each database, you can see the "projected space for policy"  in the Space Usage section.  This value can be used to estimate the "locked backup" utilization.


What happens with a retention lock ?

Once the grace period expires the backups for the protected database are time locked and can't be prematurely deleted.  

The backups are protected by the following rules.

1. The database cannot be moved to another policy. No user within the tenancy, including an administrator, can remove a database from its retention-enabled policy.  If it becomes necessary to move a database to another policy, an SR needs to be raised, and security policies are followed to ensure that this is an approved change.


2.  There is always a 14 day grace period in which changes can be made before the backups become locked. This is your window to verify the backup storage usage required before the lock activates.

3. Even if you check the "72 hour termination option" on the database, backups are locked throughout the retention window.


Comments:

This is a great new feature that protects backups from being deleted by anyone in the tenancy, including tenancy administrators.  This provides an extra layer of security against an attack with compromised credentials.  Because the lock is permanent, always use the 14 day grace period to ensure the usage and duration are appropriate for your database.







Oracle Database Backup Cloud Service Primer


 One topic that has been coming up a lot as customers look at options for offsite protected backups is the use of the Oracle Database Backup Cloud Service.  This service can be used either directly from the database itself leveraging an RMAN tape library, or by performing a copy-to-cloud from the ZDLRA.  In this post I will try to consolidate all the information I can find on this topic to get you started.


Overview

The best place to start is by downloading, and reading through this technical brief

This document walks you through what the service is and how to implement it. Before you go forward with the Backup Cloud Service I suggest you download the install package and go through how to install it.

The key points I saw in this document are

  • RMAN encryption is mandatory - In this brief you will see that the backups being sent to OCI MUST be encrypted, and the brief explains how to create an encrypted backup.  Included in the Backup Cloud Service is the use of encryption and compression (beyond basic compression) without requiring the ASO, or ACO license.
  • How to install the client files - The brief explains the parameters that are needed to install the client files, and what the client files are that get installed. I will go into more detail later on explaining additional features that have been added recently.
  • Config file settings including host - The document explains the contents of the configuration file used by the Backup Cloud Service library. It also explains how to determine the name of the host (OCI endpoint) based on the region you are sending the backups to.
  • Channel configuration example - There is an example channel configuration to show you how to connect to the service.
  • Best practices - The document includes sample scripts and best practices to use when using the Backup Cloud Service.
  • Lifecycle policies and storage tiers - This is an important feature of using the Backup Cloud Service, especially for long term archival backups.  You most likely want to have backups automatically moved to low cost archival storage after uploading to OCI.
NOTE: When using lifecycle polies to manage the storage tiers it is best to set the "-enableArchiving" and "-archiveAfterBackup" parameters when installing the backup module for a new bucket.  There are small metadata files that MUST remain in standard storage, and the installation module creates a lifecycle rule with the bucket that properly archives backup pieces, leaving the metadata in standard storage.


Download

The version of the library on OTN (at the time I am writing this) is NOT the current release of the library, and that version does not support retention lock of objects.

Please download the library from this location.

Documentation on the newer features can be found here, using retention lock can be found here, and there is an oci_readme.txt file that contains all the parameters available.


Updates

There were a few updates since the tech brief was written, and I will summarize the important ones here.  I also spoke with the PM, who is working on an updated brief that will contain this new information.

  • newRSAKeyPair - The installer is now able to generate the key pair for you, making it much easier to create a new key pair. In order to have the installer ONLY create a new key pair, just pass the installer the "walletDir" parameter.  The installer will generate both a public and private key, and place them in the walletDir (see below).

 /u01/app/oracle/product/19c/dbhome_1/jdk/bin/java -jar oci_install.jar -newRSAKeyPair -walletDir /home/oracle/oci/wallet 
Oracle Database Cloud Backup Module Install Tool, build 19.18.0.0.0DBBKPCSBP_2023-09-21
OCI API signing keys are created:
  PRIVATE KEY --> /home/oracle/oci/wallet/oci_pvt
  PUBLIC  KEY --> /home/oracle/oci/wallet/oci_pub
Please upload the public key in the OCI console.

Once you generate the public/private key, you can upload the public key to the OCI console. This will show you the fingerprint, and you can execute the installer using the private key file.

  • "immutable-bucket" and "temp-metadata-bucket" - The biggest addition to library is the ability to support the use of retention rules on buckets containing backups.  The uploading of backups is monitored by using a "heartbeat" file, and this file is deleted when the upload is successful.  Because all objects in a bucket are locked, the "heartbeat" object must be managed from a second bucket without retention rules.  This is the temp-metadata-bucket.  When using retention rules you MUST have both buckets set in the config file.

NOTE

I ran into 2 issues when executing this script.

1) When trying to execute the jar file, I used the default java version in my OCI tenancy, located in "/usr/bin". The installer received a java error:

"java.lang.NoClassDefFoundError: javax/xml/bind/DatatypeConverter"

In order to properly execute the installer, I used the java executable located in $ORACLE_HOME/jdk/bin

2) When executing the jar file with my own RSA key that I had previously used with OCI object storage, I received a java error.

Exception in thread "main" java.lang.RuntimeException: Could not produce a private key
at oracle.backup.util.FileDownload.encode(FileDownload.java:823)
at oracle.backup.util.FileDownload.addBmcAuthHeader(FileDownload.java:647)
at oracle.backup.util.FileDownload.addHttpAuthHeader(FileDownload.java:169)
at oracle.backup.util.FileDownload.addHttpAuthHeader(FileDownload.java:151)
at oracle.backup.opc.install.BmcConfig.initBmcConnection(BmcConfig.java:437)
at oracle.backup.opc.install.BmcConfig.initBmcConnection(BmcConfig.java:428)
at oracle.backup.opc.install.BmcConfig.testConnection(BmcConfig.java:393)
at oracle.backup.opc.install.BmcConfig.doBmcConfig(BmcConfig.java:250)
at oracle.backup.opc.install.BmcConfig.main(BmcConfig.java:242)
Caused by: java.security.spec.InvalidKeySpecException: java.security.InvalidKeyException: IOException : algid parse error, not a sequence

I found that this was caused by the PKCS format. I was using a PKCS1 key, and the java installer was looking for a PKCS8 key.  The header in my private key file contained "BEGIN RSA PRIVATE KEY".
In order to convert my private PKCS1 key "oci_api_key.pem" to a PKCS8 key "pkcs8.key" I ran.

openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in oci_api_key.pem -out pkcs8.key
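
After the conversion, the first line of the new key file should read "-----BEGIN PRIVATE KEY-----" (the PKCS8 header) instead of "-----BEGIN RSA PRIVATE KEY-----":

$ head -1 pkcs8.key
-----BEGIN PRIVATE KEY-----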

Executing the install

The next step is to execute the install. For my install I also wanted to configure a lifecycle rule that would archive backups after 14 days.  In order to implement this, I had the script create a new bucket "bsgtest".  Below are the parameters I used (note I used "..." to obfuscate the OCIDs).

$ORACLE_HOME/jdk/bin/java -jar oci_install.jar -pvtKeyFile /home/oracle/oci/wallet/pkcs8.key -pubFingerPrint .... -tOCID  ocid1.tenancy.oc1... -host https://objectstorage.us-ashburn-1.oraclecloud.com -uOCID ocid1.user.oc1.... -bucket bsgtest -cOCID ocid1.compartment.oc1... -walletDir /home/oracle/oci/wallet -libDir /home/oracle/oci/lib -configFile /home/oracle/oci/config/backupconfig.ora -enableArchiving TRUE -archiveAfterBackup "14 days"

This created a new bucket "bsgtest" containing a lifecycle rule.

I then added a 14 day retention rule to this bucket, and created a second bucket "bsgtest_meta" for the temporary metadata. If you want to make this rule permanent you enable retention rule lock which I highlighted on the screenshot below.




I then updated the config file to use the metadata bucket because I set a retention rule on the main bucket. Note that there is also a parameter that determines how long archival objects are cached in standard storage before they are returned to archival storage.


OPC_CONTAINER=bsgtest
OPC_TEMP_CONTAINER=bsgtest_meta
OPC_AUTH_SCHEME=BMC
retainAfterRestore=48 HOURS


Testing

Once you execute the installer you will be able to begin backing up to OCI object storage.  Don't forget that you need to:
  • Change the default device type to SBT_TAPE
  • Change the compression algorithm. I recommend "medium" compression.
  • Configure encryption for database ON.
  • Configure the device type SBT_TAPE to send COMPRESSED BACKUPSET to optimize throughput and storage in OCI.
  • Create a default channel configuration for SBT_TAPE (or allocate channels manually) that use the library that was downloaded, and point to the configuration file for the database.
  • If you do not use ACO and don't have a wallet , manually set an encryption password in your session.
I recommend sending a "small" backup piece first to ensure that everything is properly configured.  My favorite command is

RMAN>backup incremental level 0 datafile 1;

Datafile 1 is always the system tablespace.

Below is what my configuration looks like for RMAN specifically for what I changed to use the Backup Cloud Service.

CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/home/oracle/oci/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/oci/config/backupconfig.ora)';
CONFIGURE ENCRYPTION FOR DATABASE ON;
CONFIGURE ENCRYPTION ALGORITHM 'AES256'; # default
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;

Network Performance

One of the big areas that comes up with using the Backup Cloud Service is understanding the network capabilities.
The best place to start is with this MOS note. Below is an example of running a NETTEST through the library to measure throughput.

RMAN> run {
2> allocate channel foo device type sbt  PARMS  'SBT_LIBRARY=/home/oracle/oci/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/oci/config/backupconfig.ora)';
3>  send channel foo 'NETTEST 1000M';
4> }

allocated channel: foo
channel foo: SID=431 device type=SBT_TAPE
channel foo: Oracle Database Backup Service Library VER=19.0.0.1

released channel: foo
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of send command at 11/22/2023 14:12:04
ORA-19559: error sending device command: NETTEST 1000M
ORA-19557: device error, device type: SBT_TAPE, device name:
ORA-27194: skgfdvcmd: sbtcommand returned error
ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
   KBHS-00402: NETTEST sucessfully completed
KBHS-00401: NETTEST RESTORE: 1048576000 bytes received in 15068283 microseconds
KBHS-00400: NETTEST BACKUP: 1048576000 bytes sent
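
For a rough sense of throughput: the restore leg of the test moved 1,048,576,000 bytes in about 15.07 seconds, which works out to roughly 70 MB/s (about 66 MiB/s); the backup leg's elapsed time isn't reported in this output.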


Executing Backups

Now to put it all together I am going to execute a backup of datafile 1.  My database is encrypted, so I am going to set a password along with the encryption key.



 set encryption on identified by oracle;

executing command: SET encryption

RMAN>  backup incremental level 0 datafile 1;

Starting backup at 22-NOV-23
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=404 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=19.0.0.1
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=494 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: Oracle Database Backup Service Library VER=19.0.0.1
allocated channel: ORA_SBT_TAPE_3
channel ORA_SBT_TAPE_3: SID=599 device type=SBT_TAPE
channel ORA_SBT_TAPE_3: Oracle Database Backup Service Library VER=19.0.0.1
allocated channel: ORA_SBT_TAPE_4
channel ORA_SBT_TAPE_4: SID=691 device type=SBT_TAPE
channel ORA_SBT_TAPE_4: Oracle Database Backup Service Library VER=19.0.0.1
channel ORA_SBT_TAPE_1: starting incremental level 0 datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/ACMEDBP/system01.dbf
channel ORA_SBT_TAPE_1: starting piece 1 at 22-NOV-23
channel ORA_SBT_TAPE_1: finished piece 1 at 22-NOV-23
piece handle=8t2c4fmi_1309_1_1 tag=TAG20231122T150554 comment=API Version 2.0,MMS Version 19.0.0.1
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:35
Finished backup at 22-NOV-23

Starting Control File and SPFILE Autobackup at 22-NOV-23
piece handle=c-1654679317-20231122-01 comment=API Version 2.0,MMS Version 19.0.0.1
Finished Control File and SPFILE Autobackup at 22-NOV-23


Restoring

Restoring is very easy as long as you have the entries in your controlfile. If you don't, there is a script included in the installation that can catalog the backup pieces, and I go through that process here.
This also allows you to display what's in the bucket.

Buckets 1 vs many

If you look at what is created when executing a backup, you will see that there is a set format for the backup pieces. Below are the 2 backup pieces that I created.

  • 8t2c4fmi_1309_1_1 - This is the backup of datafile 1 for my database ACMEDBP
  • c-1654679317-20231122-01 - This is the controlfile backup for this database
Notice that the DB name is not in the name of the backup pieces, or in the visible nesting.
If you think about a medium sized database (let's say 100 datafiles) that has 2 weeks of backups (14 days), you would have 1,400 different backup pieces for the datafiles within the "sbt_catalog" directory.

My recommendation is to group small databases together in the same bucket (keeping the number of backup pieces to a manageable level).
For a large database (1,000+ datafiles), you can see where a 30 day retention could become 30,000+ backup pieces.

Having a large number of objects within a bucket increases the time to report the available backup pieces.  There is no way to determine which database the object is a member of without looking at the metadata.

Keep this in mind when considering how many buckets to create.





File Retention on ZFS now supports expired file deletion/holds and changing permissions


 The latest release of ZFS (8.8.63) contains 2 new features associated with File Retention lock.

  • File retention (deletion or hold) after file expiration
  • Allow permission changes on retained files.


File retention on expiry policy

This new setting for projects/shares defaults to "off" which is the normal behavior of unlocking files, but leaving them on the filesystem.  In order to delete files you need to wait until the retention period expires, and then you can delete the file.

There are 2 new settings you can use to work with locked files to change this behavior.

Delete

When set to "Delete", files will be immediately deleted when their retention lock expires. This can be very useful if you want files to be automatically cleaned up at the end of their retention without having to create a deletion process.

There are a few items to be aware of with the automatic deletion process.
  1. DO NOT use this with an RMAN retention window.  Customers typically use a weekly full/daily incremental backup strategy with RMAN. With this strategy, a week's worth of backups (all dependent on the oldest full backup) are deleted together.  Deleting backups as soon as they expire would delete backups too soon.  Even with archival backups I recommend letting RMAN perform the deletions, otherwise you risk having a file deleted too early.
  2. Be careful changing this setting on an existing share.  This setting takes effect immediately and will affect ALL files that have a retention lock.  Any files that were locked but whose retention lock has expired will be deleted when this setting is applied.

Hold

When set to "Hold", any files that have, or have had a retention lock will be affected.  This setting  immediately prevents the deletion of all retention locked files until the hold is removed regardless of when the lock is set to expire.  Keep in mind that while a hold is in place, the files still have a retention lock with an expiration date.

Removing the hold: When you remove the hold, the normal expiration date takes effect.  If you remove the hold by changing the expiry policy to "Delete", ALL files that have an expired retention will be immediately deleted.  If you change the expire policy to "Off", the files remain, and you must manually delete them.


NOTE: Be very careful when changing the Expiry Policy.  The new setting immediately affects existing files, not just new files going forward, unlike the other file retention settings.


Allow permission changes on retained files


What happens normally: When a file has a retention lock set, you are protecting this file from both being deleted AND from being updated. Because you were not allowed to update the file permissions, you were not able to change the settings from the default of -r--r-----+ while the file was locked.

This could be an issue depending on what type of file you are protecting. There are some cases where you want to make this file either
  • An executable file, not just a read only file.
  • readable by any user.
When you attempted to make this change to a locked file, the update would fail with an "Operation not permitted" error.

-r--r-----+ 1 oracle oinstall 792 Dec  4  2023 testfile
[oracle@ssh-server rmanbackups]$ ls -al testfile
-r--r-----+ 1 oracle oinstall 792 Dec  4 20:26 testfile
[oracle@ssh-server rmanbackups]$ chmod 550 testfile
chmod: changing permissions of 'testfile': Operation not permitted
[oracle@ssh-server rmanbackups]$ chmod 444 testfile
chmod: changing permissions of 'testfile': Operation not permitted
[oracle@ssh-server rmanbackups]$


What this setting does: When you check the setting for "Allow permission changes on retained files" you are IMMEDIATELY able to change the permissions on files that are locked.  The files are still protected from being made writable, but you can adjust both the "r" (read) and "x" (execute) settings for all users.
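
With the setting enabled, the same chmod that failed earlier should now succeed. A sketch of what to expect (assuming the same test file as above):

[oracle@ssh-server rmanbackups]$ chmod 550 testfile
[oracle@ssh-server rmanbackups]$ ls -al testfile
-r-xr-x---+ 1 oracle oinstall 792 Dec  4 20:26 testfile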

NOTE: This setting takes effect immediately and will affect all currently locked files regardless of when they were created.

ZFSSA can be used to share data from your Oracle Database


 Data Sharing has become a big topic recently, and Oracle Cloud has added some new services to allow you to share data from an Autonomous Database.  But how do you do this with your on-premises database? In this post I show you how to use ZFS as your data sharing platform.


Data Sharing

Being able to securely share data between applications is a critical feature in today's world.  The Oracle Database is often used to consolidate and summarize collected data, but is not always the platform for doing analysis.  The Oracle Database does have the capability to analyze data, but tools such as Jupyter Notebooks, Tableau, Power BI, etc. are typically the favorites of data scientists and data analysts.

The challenge is how to give access to specific pieces of data in the database without providing access to the database itself.  The most common solution is to use object storage and pre-authenticated URLs.  Data is extracted from the database based on the user and stored in object storage in a sharable format (JSON for example).  With this paradigm, pictured above, you can create multiple datasets that contain a subset of the data specific to the user's needs and authorization.  The second part is the use of a pre-authenticated URL.  This is a dynamically created URL that allows access to the object without authentication. Because it contains a long string of random characters, and is only valid for a specified amount of time, it can be securely shared with the user.

My Environment

For this post, I started with an environment I had previously configured to use DBMS_CLOUD.  My database is a 19.20 database.  In that database I used the steps specified in the MOS note and my blog (information can be found here) to use DBMS_CLOUD.

My ZFSSA environment is using 8.8.63, and I did all of my testing in OCI using compute instances.

For preparation to test I had

  • Installed DBMS_CLOUD packages into my database using MOS note #2748362.1
  • Downloaded the certificate for my ZFS appliance using my blog post and added them to wallet.
  • Added the DNS/IP addresses to the DBMS_CLOUD_STORE table in the CDB.
  • Created a user in my PDB with authority to use DBMS_CLOUD
  • Created a user on ZFS to use for my object storage authentication (Oracle).
  • Configured the HTTP service for OCI
  • Added my public RSA key from my key pair to the OCI service for authentication.
  • Created a bucket
NOTE:  In order to complete the full test, there were 2 other items I needed to do.

1) Update the ACL to also access port 80.  The DBMS_CLOUD configuration sets ACLs to access websites using port 443.  During my testing I used port 80 (http vs https).
2) I granted execute on DBMS_CRYPTO to my database user for the testing.

Step #1 Create Object

The first step was to create an object from a query in the database.  This simulated pulling a subset of data (based on the user) and writing it to a object so that it could be shared.  To create the object I used the DBMS_CLOUD.EXPORT_DATA package.  Below is the statement I executed.

BEGIN
 DBMS_CLOUD.EXPORT_DATA(
       credential_name => 'ZFS_OCI2',  
    file_uri_list =>'https://zfs-s3.zfsadmin.vcn.oraclevcn.com/n/zfs_oci/b/db_back/o/shareddata/customer_sales.json',
    format => '{"type" : "JSON" }',
   query => 'SELECT OBJECT_NAME,OBJECT_TYPE,STATUS FROM user_objects');
END;
/

In this example:
  • CREDENTIAL_NAME - Refers to my authentication credentials I had previously created in my database.
  • FILE_URI_LIST - The name and location of the object I want to create on the ZFS object storage.
  • FORMAT - The output is written in JSON format
  • QUERY - This is the query you want to execute and store the results in the object storage.  
As you can see, it would be easy to create multiple objects that contain specific data by customizing the query, and naming the object appropriately.

In order to get the  proper name of the object I then selected the list of objects from object storage.

set pagesize 0
SET UNDERLINE =
col object_name format  a25
col created format  a20
select object_name,to_char(created,'MM/DD/YY hh24:mi:ss') created,bytes/1024  bytes_KB 
       from dbms_cloud.list_objects('ZFS_OCI2', 'https://zfs-s3.zfsadmin.vcn.oraclevcn.com/n/zfs_oci/b/db_back/o/shareddata/');


 customer_sales.json_1_1_1.json 12/19/23 01:19:51    3.17382813


From the output, I can see that my JSON file was named 'customer_sales.json_1_1_1.json'.


Step #2 Create Pre-authenticated URL

The package I ran to do this is below, and I am going to break down the pieces into multiple sections.
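
Here is a minimal sketch of the whole flow described in steps #2a through #2h.  Treat it as a sketch, not my exact code: the variable names, the DBMS_CRYPTO constant names, the compartment/expiry placeholders, and the "accessUri" response field are all assumptions reconstructed from the descriptions below.

DECLARE
  -- 2a: signing constants and request/response types
  -- (assumption: these are the DBMS_CRYPTO constants referred to below)
  sType      CONSTANT PLS_INTEGER := dbms_crypto.sign_sha256_rsa;
  kType      CONSTANT PLS_INTEGER := dbms_crypto.key_type_rsa;
  lf         CONSTANT VARCHAR2(1) := chr(10);
  -- 2b: authentication information, host, and private key (all placeholders)
  v_tenancy     VARCHAR2(200)  := 'ocid1.tenancy.oc1...';
  v_user        VARCHAR2(200)  := 'ocid1.user.oc1...';
  v_fingerprint VARCHAR2(200)  := 'aa:bb:cc:...';
  v_host        VARCHAR2(200)  := 'zfs-s3.zfsadmin.vcn.oraclevcn.com';
  v_compartment VARCHAR2(200)  := '{compartment}';
  v_priv_key    VARCHAR2(4000) := '...PKCS8 private key material...';
  v_target      VARCHAR2(400);
  v_date        VARCHAR2(100);
  v_body        VARCHAR2(4000);
  v_digest      VARCHAR2(100);
  v_signing     VARCHAR2(4000);
  v_sig_raw     RAW(2000);
  v_sig         VARCHAR2(4000);
  v_auth        VARCHAR2(4000);
  v_line        VARCHAR2(32767);
  req        utl_http.req;
  resp       utl_http.resp;
  json_obj   json_object_t;
BEGIN
  -- 2b: current date/time in GMT; must be within 5 minutes of the ZFSSA clock
  v_date := to_char(systimestamp at time zone 'GMT',
                    'Dy, DD Mon YYYY HH24:MI:SS') || ' GMT';
  -- 2c: JSON body of the pre-authenticated URL request
  v_body := '{"accessType":"ObjectRead",'
         || '"bucketListingAction":"Deny",'
         || '"bucketName":"db_back",'
         || '"name":"customer_sales_request",'
         || '"namespace":"zfs_oci",'
         || '"objectName":"shareddata/customer_sales.json_1_1_1.json",'
         || '"timeExpires":"2024-01-31T00:00:00Z"}';
  -- 2c: sha256 digest of the body, base64 encoded
  v_digest := utl_raw.cast_to_varchar2(utl_encode.base64_encode(
                 dbms_crypto.hash(utl_raw.cast_to_raw(v_body),
                                  dbms_crypto.hash_sh256)));
  -- 2d: the signing string, built with the required line feeds
  v_target  := '/oci/n/zfs_oci/b/db_back/p?compartment=' || v_compartment;
  v_signing := '(request-target): post ' || v_target || lf
            || 'date: '             || v_date   || lf
            || 'host: '             || v_host   || lf
            || 'x-content-sha256: ' || v_digest || lf
            || 'content-type: application/json' || lf
            || 'content-length: '   || length(v_body);
  -- 2d: sign the string with the private key
  -- (assumption: the key is loaded in the format dbms_crypto.sign expects)
  v_sig_raw := dbms_crypto.sign(utl_raw.cast_to_raw(v_signing),
                                utl_raw.cast_to_raw(v_priv_key),
                                kType, sType);
  -- 2e: base64 encode, then join the 64-character chunks into one string
  v_sig := utl_raw.cast_to_varchar2(utl_encode.base64_encode(v_sig_raw));
  v_sig := replace(replace(v_sig, chr(13)), chr(10));
  -- 2f: authorization header; headers must be listed in the order they are sent
  v_auth := 'Signature version="1",keyId="' || v_tenancy || '/' || v_user
         || '/' || v_fingerprint || '",algorithm="rsa-sha256",'
         || 'headers="(request-target) date host x-content-sha256'
         || ' content-type content-length",signature="' || v_sig || '"';
  -- 2g: send the POST, each header, then the JSON body
  req := utl_http.begin_request('http://' || v_host || v_target, 'POST', 'HTTP/1.1');
  utl_http.set_header(req, 'date', v_date);
  utl_http.set_header(req, 'host', v_host);
  utl_http.set_header(req, 'x-content-sha256', v_digest);
  utl_http.set_header(req, 'content-type', 'application/json');
  utl_http.set_header(req, 'content-length', length(v_body));
  utl_http.set_header(req, 'authorization', v_auth);
  utl_http.write_text(req, v_body);
  -- 2h: read the response and pull out the pre-authenticated URI
  resp := utl_http.get_response(req);
  BEGIN
    LOOP
      utl_http.read_line(resp, v_line, TRUE);
      json_obj := json_object_t.parse(v_line);
      dbms_output.put_line('URI: http://' || v_host
                           || json_obj.get_string('accessUri'));
    END LOOP;
  EXCEPTION
    WHEN utl_http.end_of_body THEN utl_http.end_response(resp);
  END;
END;
/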



Step #2a Declare variables

The first part of the pl/sql package declares the variables that will be used in the package. Most of the variables are normal VARCHAR variables, but there are a few other variable types that are specific to the packages used to encrypt and send the URL request.

  • sType,kType - These are constants used to sign the URL request with  RSA 256 encryption
  • utl_http.req,utl_http.resp - These are request and response types used when accessing the object storage
  • json_obj - This type is used to extract the url from the resulting JSON code returned from the object storage call. 

Step #2b Set variables

In this section of code I set the authentication information along with the host, and the private key part of my RSA public/private key pair. 
I also set a variable with the current date time, in the correct GMT format.

NOTE: This date time stamp is compared with the date time on the ZFSSA. It must be within 5 minutes of the ZFSSA date/time or the request will be rejected.

Step #2c Set JSON body

In this section of code, I build the actual request for the pre-authenticated URL.  The parameters for this are...
  • accessType - I set this to "ObjectRead" which allows me to create a URL that points to a specific object.  Other options are Write, and ReadWrite.
  • bucketListingAction - I set this to "Deny",  This disallows the listing of objects.
  • bucketName - Name of the bucket
  • name - A name you give the request so that it can be identified later
  • namespace/namepaceName - This is the ZFS share
  • objectName - This is the object name on the share that I want the request to refer to. 
  • timeExpires - This is when the request expires.
NOTE: I did not spend the time to do a lot of customization to this section.  You could easily make the object name a parameter that is passed to the package along with the bucketname etc. You could also dynamically set the expiration time based on sysdate.  For example you could have the request only be valid for 24 hours by dynamically setting the timeExpires.

The last step in this section of code is to create a SHA-256 digest of the JSON "body" that I am sending with the request.  I created it using the dbms_crypto.hash function.
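
Below is a simplified sketch of what the body and its digest can look like.  The JSON keys come from the parameter list above, but the literal values (bucket, namespace, object name, expiration) are placeholders for my environment.

declare
    l_body        varchar2(4000);
    l_body_digest varchar2(200);
begin
    -- JSON body for the pre-authenticated request (values are placeholders)
    l_body := '{"accessType"          : "ObjectRead",'
           || ' "bucketListingAction" : "Deny",'
           || ' "bucketName"          : "db_back",'
           || ' "name"                : "customer_sales_read",'
           || ' "namespaceName"       : "zfs_oci",'
           || ' "objectName"          : "shareddata/customer_sales.json_1_1_1.json",'
           || ' "timeExpires"         : "2024-12-31T23:59:59Z"}';

    -- SHA-256 digest of the body, base64 encoded for the x-content-sha256 header
    l_body_digest := utl_raw.cast_to_varchar2(
                         utl_encode.base64_encode(
                             dbms_crypto.hash(utl_raw.cast_to_raw(l_body),
                                              dbms_crypto.hash_sh256)));
    dbms_output.put_line(l_body_digest);
end;
/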

Step #2d Create the signing string

This section builds the signing string that will be encrypted (signed).  This string has a very specific format, and it contains:

(request-target): post /oci/n/{share name}/b/{bucket name}/p?compartment={compartment} 
date: {date in GMT}
host: {ZFS host}
x-content-sha256: {sha256 digest of the JSON body parameters}
content-type: application/json
content-length: {length of the JSON body parameters}

NOTE: This signing string has to be created with the line feeds.

The final step in this section is to sign the signing string with the private key.
In order to sign the string the DBMS_CRYPTO.SIGN package is used.
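
Putting that together, the signing step looks roughly like the sketch below.  The DBMS_CRYPTO constants and the private key handling are assumptions; the real package signs with the key that was set in step #2b.

declare
    l_signing_string varchar2(4000);
    l_private_key    varchar2(4000);   -- RSA private key set in step #2b (placeholder)
    l_signed_raw     raw(2000);
begin
    -- Build the signing string with explicit line feeds (chr(10)) between the headers
    l_signing_string := '(request-target): post /oci/n/{share name}/b/{bucket name}/p?compartment={compartment}' || chr(10)
                     || 'date: {date in GMT}'                                                                     || chr(10)
                     || 'host: {ZFS host}'                                                                        || chr(10)
                     || 'x-content-sha256: {sha256 digest of the JSON body parameters}'                           || chr(10)
                     || 'content-type: application/json'                                                          || chr(10)
                     || 'content-length: {length of the JSON body parameters}';

    -- Sign the string with the private key (assumed DBMS_CRYPTO RSA/SHA-256 constants)
    l_signed_raw := dbms_crypto.sign(
                        src        => utl_raw.cast_to_raw(l_signing_string),
                        prv_key    => utl_raw.cast_to_raw(l_private_key),
                        pubkey_alg => dbms_crypto.key_type_rsa,
                        sign_alg   => dbms_crypto.sign_sha256_rsa);
end;
/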


Step #2e Build the signature from the signing string


This section takes the signed string that was built in the prior step and encodes it in Base64.  It uses the utl_encode.base64_encode function to encode the raw string, which is then converted to a VARCHAR2.

Note: The resulting base64 encoded string is broken into 64 character sections.  After creating the encoded string, I loop through the string, and combine the 64 character sections into a single string.
This took the most time to figure out.
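
As a sketch, the same result can be achieved by stripping the line breaks that utl_encode.base64_encode inserts, instead of looping through the individual sections:

declare
    l_signed_raw raw(2000);        -- output of dbms_crypto.sign from the prior step
    l_signature  varchar2(4000);
begin
    -- Base64 encode the signature, then remove the CR/LF breaks that base64_encode adds
    l_signature := utl_raw.cast_to_varchar2(utl_encode.base64_encode(l_signed_raw));
    l_signature := replace(replace(l_signature, chr(13), ''), chr(10), '');
end;
/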

Step #2f Create the authorization header

This section dynamically builds the authorization header that is sent with the call.  It includes the authentication parameters (tenancy OCID, user OCID, fingerprint), the headers (which must be in the order they are sent), and the signature that was created in the last 2 steps.
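
The assembled header ends up looking roughly like the sketch below, which follows the standard OCI signature header layout (the keyId is the tenancy OCID, user OCID, and fingerprint separated by slashes; all values are placeholders):

declare
    l_tenancy_ocid varchar2(200)  := '{tenancy OCID}';
    l_user_ocid    varchar2(200)  := '{user OCID}';
    l_fingerprint  varchar2(100)  := '{fingerprint}';
    l_signature    varchar2(4000) := '{base64 signature from the prior step}';
    l_auth_header  varchar2(4000);
begin
    -- Assemble the Signature authorization header, listing the headers in the order sent
    l_auth_header := 'Signature version="1",'
                  || 'keyId="' || l_tenancy_ocid || '/' || l_user_ocid || '/' || l_fingerprint || '",'
                  || 'algorithm="rsa-sha256",'
                  || 'headers="(request-target) date host x-content-sha256 content-type content-length",'
                  || 'signature="' || l_signature || '"';
end;
/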

Step #2g Send a post request and header information


The next section sends the POST call to the ZFS object storage, followed by each piece of header information.  After the header parameters are sent, the JSON body is sent using the utl_http.write_text call.
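
A sketch of that sequence with UTL_HTTP is below; the URL and header values are placeholders carried over from the earlier steps.

declare
    l_req         utl_http.req;
    l_url         varchar2(1000) := 'https://{ZFS host}/oci/n/{share name}/b/{bucket name}/p?compartment={compartment}';
    l_date        varchar2(100)  := '{date in GMT}';
    l_body_digest varchar2(200)  := '{sha256 digest of the JSON body}';
    l_auth_header varchar2(4000) := '{authorization header from step #2f}';
    l_body        varchar2(4000) := '{JSON body from step #2c}';
begin
    -- Open the POST request, send each header in order, then write the JSON body
    l_req := utl_http.begin_request(l_url, 'POST', 'HTTP/1.1');
    utl_http.set_header(l_req, 'date', l_date);
    utl_http.set_header(l_req, 'host', '{ZFS host}');
    utl_http.set_header(l_req, 'x-content-sha256', l_body_digest);
    utl_http.set_header(l_req, 'content-type', 'application/json');
    utl_http.set_header(l_req, 'content-length', length(l_body));
    utl_http.set_header(l_req, 'authorization', l_auth_header);
    utl_http.write_text(l_req, l_body);
end;
/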

Step #2h Loop through the response

This section gets the response from the POST call, and loops through the response.  I am using the json_object_t.parse call to create a JSON type, and then use the json_obj.get to retrieve the unique request URI that is created.
Finally I display the resulting URI that can be used to retrieve the object itself.
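
A sketch of that last step is below; it assumes l_req is the request sent in the prior step, and the JSON attribute name used for the URI ("accessUri") is an assumption.

declare
    l_req      utl_http.req;       -- the request opened and written in step #2g
    l_resp     utl_http.resp;
    l_line     varchar2(32767);
    l_response clob;
    json_obj   json_object_t;
begin
    -- Read the response body into a CLOB
    l_resp := utl_http.get_response(l_req);
    begin
        loop
            utl_http.read_text(l_resp, l_line, 32767);
            l_response := l_response || l_line;
        end loop;
    exception
        when utl_http.end_of_body then
            utl_http.end_response(l_resp);
    end;

    -- Parse the JSON and pull out the pre-authenticated request URI
    json_obj := json_object_t.parse(l_response);
    dbms_output.put_line(json_obj.get_string('accessUri'));   -- attribute name is an assumption
end;
/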


Documentation

There were a few documents that I found very useful to give me the correct calls in order to build this package.

Signing request documentation : This document gave detail on the parameters needed to send get or post requests to object storage.  This document was very helpful to ensure that I had created the signature properly.

Http message signature format : This document gives detail on the signature itself and the format.

OCI rest call walk through : This post was the most helpful as it gave an example of a GET call and a PUT call. I was first able to create a GET call using this post, and then I built on it to create the POST call.


RMAN create standby database - Restore or Duplicate ?

RMAN create standby database - Are you like me and use "restore database" for large databases, or like most people (based on my Linkedin poll) and use "duplicate for standby"? 

The table below shows you the 3 main differences between the 2 methods.


This post started with a discussion within my team around which method you use. I, being of the "restore database" camp, didn't realize how commonly used "duplicate for standby" is. 
I have also dug through the documentation, and there is no common method that is mentioned. Even the 21c documentation for creating a standby database doesn't mention using the duplicate command.

Well in this post, I will explain why "restore database" has been my preference. 

Duplicate database for standby


From the poll I ran, this is the most common way to create a standby database.  It is probably also the simplest way, because a lot of the configuration of the standby database is done automatically as part of the automated process.
Below are the simplified steps to perform this process.

PRE work

  1. Create simple initfile on the standby host.  The real SPFILE will be brought over as part of the duplication process.  This may contain location parameters for datafiles and redo logs if different from the primary.
  2. Create directories on the standby host.  This includes the audit directory, and possibly the database file directories if they are different from the host.
  3. Startup nomount.

Duplicate 

The duplicate process automatically performs these major steps using the standby as an auxiliary instance.

  1.  Create an SPFILE. The process creates an SPFILE for the standby and sets parameters for the standby.
  2. Shutdown/Startup standby database. This will use the newly created SPFILE during the rest of the processing
  3. Restore backup controlfile for standby database. The controlfile for the standby database is put in place, and the spfile is updated to its location.
  4. Mount controlfile. Mount the controlfile that was restored.
  5. Restore database. Restore the datafiles for the CDB and PDBs to their new location on the standby.
  6. Switch datafile. Uses the new location of the datafiles that were restored.
  7. Create standby redo logs.
  8. Set parameters for standby database. The parameters necessary to communicate with the primary database are set.
  9. Put standby in recover mode. By this time, you should have set the primary database to communicate with the standby database.

NOTES

If you noticed above, I highlighted the second step which forces a shutdown/startup of the standby database. Because of this step, it is not possible to use this method and restore across nodes in a RAC database.  This can cause the duplicate operation to take much longer for larger databases.
Then in step #5 you can see that the "Restore Database" is automatic in the processing and it is not possible to perform a "restore as encrypted" if you are migrating to OCI from a non-TDE database.  The duplicate process does support "restore as encrypted", but only for creating a new Database, not a standby database.

Restore Database


This is the method that I've always used.  There is no automation, but it gives you much more control over the steps.  

PRE work

  1. Restore copy of prod SPFILE to standby host.  For this process, it doesn't matter if it is an init file or spfile.  In this file you set all the parameters that are needed for the standby database to communicate with the primary and store datafiles/logfiles in the correct location.
  2. Create directories on the standby host.  This includes the audit directory, and possibly the database file directories if they are different from the host.
  3. Startup nomount.
  4. Create copy of primary controlfile for standby. This will be used for the standby database, and should contain the backup catalog  of the primary database, and the RMAN settings including the  channel definitions.
  5. Copy standby controlfile to standby host. The controlfile is copied to the standby host, and may be put in ASM at this point. Ensure the spfile points to the controlfile (and/or srvctl).
  6. Alter database mount.  Mount the controlfile. 
  7. Start up ALL nodes in the RAC cluster in mount mode.  This will allow you to restore the database across ALL nodes in the RAC cluster, and include all the networking from these nodes.  For a large database hosted on multiple DB nodes this can make a HUGE difference when restoring the database.
  8. Create (or copy) TDE wallet.  If the standby database is going to be TDE, then include the wallet if the primary is TDE, or create a new wallet and key if the standby database is going to be TDE.

Restore Database 

The restore process is a manual process

  1.  RMAN Connect to database (and possibly RMAN catalog). Connect to the database and make sure you have access to the backups. For ZDLRA this may mean connecting to the RMAN catalog.
  2. Restore Database (as encrypted). This will restore the database to the new location.  With Restore Database, the database can be encrypted during the restore operation.  With 19c it is supported to have the standby database be encrypted without the primary database being encrypted (Hybrid dataguard).
  3. Switch datafile. Uses the new location of the datafiles that were restored.
  4. Recover database. This will use the archive logs that are cataloged to bring the standby database forward.
  5. Create standby redo logs.
  6. Set parameters for standby database. The parameters necessary to communicate with the primary database are set.
  7. Put standby in recover mode. By this time, you should have set the primary database to communicate with the standby database.


NOTES

With the restore database, there are 2 sections I highlighted and these are the advantages that I love about using this method.
  • RMAN is restoring across multiple nodes in a RAC cluster which can make the restore operation much faster.
  • Restore as encrypted allows you take a database that may have TDE partially implemented, or not implemented and create a new standby database that is encrypted. With the duplicate method, TDE would have to be implemented separately.
If you are restoring a VERY large database (200 TB for example) that was not TDE from object storage to the Exadata Cloud Service, both of these advantages can make a HUGE difference when creating a standby database.

Comparison

The chart below compares the differences between "Duplicate Database" and "Restore Database".

WARNING: When using a ZDLRA for backups, it is NOT recommended to use the "Restore Database" to clone a database as a new copy. Registering the restored copy can cause issues with the RMAN catalog because the "restore database" leaves entries in the RC_SITE table.





DB Script management through pre-authenticated URLs


 Pre-authenticated URLs in OCI are fast becoming one of my favorite features of using object storage.  In this blog post I will explain how I am using them for both:

  • Storing the current copy of my backup scripts and dynamically pulling it from my central repository
  • uploading all my logs files to a central location
Pre-authenticated URL creation

PROBLEM:


The problem I was trying to solve, is that I wanted to create a script to run on all my database nodes to create a weekly archival backup.
Since I have databases that are both Base DB databases, and ExaCS I found that I was continuously making changes to my backup script.  Sometimes it was checking something different in my environment, and sometimes it was improving the error checking.
Each time I made a change, I was going out to every DB host and copying the new copy of my script.
Eventually I realized that Pre-authenticated URLs could not only help me ensure all my DB hosts are running the current copy of my backup script, they could be the central log location.

Solution:


Solution #1 - Script repository


The first problem I wanted to solve, was that I wanted to configure a script repository that I could dynamically pull the most current copy of my scripts from. Since I am running in OCI, I was looking for a "Cloud Native" solution rather than using NFS mounts that are shared across all my DB hosts.
To complicate things, I have databases that are running in different tenancies.

Step #1 - Store scripts in a bucket

The first step was to create a bucket in OCI to store both the scripts and logs.  Within that bucket, under "More Actions" I chose "Create New Folder" and I created 2 new folders, "logs" and "scripts".
Then within the scripts folder I uploaded my current scripts:
  • rman.sh - My executable script that will set the environment and call RMAN
  • backup.rman - My RMAN script that contains the RMAN command to backup my database.

Step #2 - Create a Pre-Authenticated Request

The next step was to create a Pre-Authenticated request on the "scripts" folder.  Next to the scripts folder I clicked on the  3 dots and chose "Create Pre-Authenticated Request".
On the window that came up, I changed the expiration to be 1 year in the future (the default is 7 days).  I chose the "Objects with prefix" box so that I could download any scripts that I put in this folder to the DB hosts.  I also made sure the "Access Type" is "Permit object reads on those with specified prefix".
I did not choose "Enable Object Listing".
These settings will allow me to download the scripts from this bucket using the Pre-Authenticated URL only.  From this URL you will not be able to list the objects, or upload any changes.


Step #3 - Create wrapper script to download current scripts

Then, using the Pre-Authenticated URL in a wrapper script, I downloaded the current copies of the scripts to the host and then executed my execution script (rman.sh) with a parameter.

Below you can see that I am using curl to download my script (rman.sh) and storing it in my local script directory (/home/oracle/archive_backups/scripts).  I am doing the same thing for the RMAN command file.
Once I download the current scripts, I am executing the shell script (rman.sh) .


curl -X GET https://{my tenancy}.objectstorage.us-ashburn-1.oci.customer-oci.com/p/{actual URL is removed }/n/id20skavsofo/b/bgrenn/o/scripts/rman.sh --output /home/oracle/archive_backups/scripts/rman.sh
curl -X GET https://{my tenancy}.objectstorage.us-ashburn-1.oci.customer-oci.com/p/{actual URL is removed }/n/id20skavsofo/b/bgrenn/o/scripts/backup.rman --output /home/oracle/archive_backups/scripts/backup.rman


/home/oracle/archive_backups/scripts/rman.sh $1


Solution #2 - Log repository

The second problem I wanted to solve was to make it easy to review the execution of my scripts.  I don't want to go to each DB host and look at the log file.  I want to have the logs stored in a central location that I can check.  Again, Pre-Authenticated URLs to the rescue!

Step #1 - Create the Pre-Authenticated URL

In the previous steps I already created a "logs" folder within the bucket. In this step I want to create a Pre-Authenticated URL like I did for the scripts, but in this case I want to use it to store the logs.
Like before I chose "Create Pre-Authenticated Request" for the "logs" folder.
This time, I am choosing "Permit object writes to those with the specified prefix". This will allow me to write my log files to this folder in the bucket, but not list the logs, or download any logs.


Step #2 - Upload the log output from my script

The nice thing was that once I implemented Solution #1, all of my DB nodes were already downloading the current script.  I updated the script to add an upload of the log file to object storage, and they all picked up the new version automatically.
In my script I already had 2 variables set
  • NOW - The current date in "yyyymmdd" format
  • LOGFILE - The name of the output log file from my RMAN backup.
Now all I had to do was to add a curl command to upload my log file to the bucket.

Note I am using the NOW variable to create a new folder under "logs" with the date so that my script executions are organized by date.

curl --request PUT --upload-file /home/oracle/archive_backups/logs/${LOGFILE} https://{My tenancy}.objectstorage.us-ashburn-1.oci.customer-oci.com/p/{URL removed}/n/id20skavsofo/b/bgrenn/o/logs/${NOW}/${LOGFILE}

BONUS


If I wanted to get fancy I could have put my LOGS in a new bucket, and configured  a lifecycle management rule to automatically delete logs after a period of time from the bucket.

Autonomous Recovery Service Checklist


 Utilizing the Autonomous Recovery Service (ARS) for your Oracle Databases in OCI is the best method for backing up your Oracle databases.  In this post I will go through the steps required to successfully implement this service.  To learn more about this service you can find the documentation here.


1. Ensure your tenancy's resource limits are adequate

Before implementing the ARS, you first must see what the resource settings are in your tenancy. You want to make sure that the "Space Used for Recovery Window (GB)" and "Protected Database Count" allow for the number of databases, and backup size of the databases you want to utilize the service for.

Below is what you would see for the ARS. This is a screen shot from my free tenancy.  In your tenancy you should see what the current limits are.  When looking at the root compartment, this will show you the Limits and usage for the whole tenancy.


If you need to increase the limits for your tenancy click on the 3 dots to the right of the limit you want to increase. It will bring up a choice to "Open Support Request".  After choosing "Open Support Request" you will see a window that allows you to request a limit increase for your tenancy.

NOTE: There is a second choice when clicking on the 3 dots to "Create Quota Policy Stub". Using the stub displayed you can limit the quota of specific compartments.  This can be used to limit the usage for your "dev" compartment for example, ensuring there is space within your limits for production


2. Verify the policies for the tenancy

A) Set root compartment policies for service



Tenancy Policies for ARS

  • Policy: Allow service database to manage recovery-service-family in tenancy
    Purpose: Enables the OCI Database Service to access protected databases, protection policies, and Recovery Service subnets within your tenancy.

  • Policy: Allow service database to manage tagnamespace in tenancy
    Purpose: Enables the OCI Database Service to access the tag namespace in a tenancy.

  • Policy: Allow service rcs to manage recovery-service-family in tenancy
    Purpose: Enables Recovery Service to access and manage protected databases, Recovery Service subnets, and protection policies within your tenancy.

  • Policy: Allow service rcs to manage virtual-network-family in tenancy
    Purpose: Enables Recovery Service to access and manage the private subnet in each database VCN within your tenancy. The private subnet defines the network path for backups between a database and Recovery Service.

  • Policy: Allow group admin to manage recovery-service-family in tenancy
    Purpose: Enables users in a specified group to access all Recovery Service resources. Users belonging to the specified group can manage protected databases, protection policies, and Recovery Service subnets.


B) Allow users (in group) to manage the Recovery Service


Group Policy Statement by Compartment

  • Policy: Allow group {group name} to manage recovery-service-policy in compartment {location}
    Create In: Compartment that owns the protection policies.
    Purpose: Enables all users in a specified group to create, update, and delete protection policies in Recovery Service.


C) Allow users (in group) to manage the required subnet for the Recovery Service


Group Policy Statement by Compartment

  • Policy: Allow Group {group name} to manage recovery-service-subnet in compartment {location}
    Create In: Compartment that owns the Recovery Service subnets.
    Purpose: Enables all users in a specified group to create, update, and delete Recovery Service subnets.


3. Configure Network Resources for Recovery Service

The Recovery Service uses Private endpoints to control backup traffic between your database and the recovery service.  Below is the architecture.



Each Recovery Service subnet needs to be created within the VCN where your database resides.

The minimum size of the subnet is /24 (256 IP addresses).  You can create a new subnet, or use a preexisting subnet in your database VCN.  This subnet must be IPv4.

Security lists for the private subnet must include stateful ingress rules to allow destination ports 8005 and 2484.

NOTE: You can use a public subnet, but it is not recommended for security reasons.

This private subnet must be registered as a Recovery Service Subnet.

Checklist for Security rules

1. Rule 1 - Ingress. Allow HTTPS traffic from anywhere

  • Stateless: No (All rules must be stateful)
  • Source Type: CIDR
  • Source CIDR : CIDR of the VCN where the database resides
  • IP Protocol: TCP
  • Source Port Range: All
  • Destination Port Range: 8005

2. Rule 2 - Ingress. Allow SQLNet traffic from anywhere

  • Stateless: No (All rules must be stateful)
  • Source Type: CIDR
  • Source CIDR : CIDR of the VCN where the database resides
  • IP Protocol: TCP
  • Source Port Range: All
  • Destination Port Range: 2484
NOTE: If your VCN restricts network traffic between subnets, ensure to add an egress rule for ports 2484, and 8005 from the database subnet to the Recovery Service subnet that you create.

3. Register the subnet with recovery service


Under Oracle Database --> Database Backups you need to click on "Recovery Service Subnets" and register the Recovery Service Subnet.



4. Ensure the Recovery Service Subnet can communicate with Oracle services.

The Recovery Service Subnet that you registered needs to communicate with the Recovery Service. In order to access the service, the routing table for this subnet needs to include "All IAD Services In Oracle Services Network".


If all these pieces are in place you should be ready to successfully configure your database backups to go to the Recovery Service.


Short checklist

  1. Check your limits and quotas for the recovery service
  2. Create the policies for the Recovery Service, and groups (users) to manage the recovery service
  3. Create the subnet for the Recovery Service making sure you have the correct security settings, and the subnet has access to Oracle services
  4. Register the subnet as the Recovery Service Subnet.


Restoring OCI object store backups onto Exadata Cloud Service


 This blog post covers the steps necessary to restore backups made using the Oracle Database Cloud Backup Service onto Exadata Cloud Service in the event of a DR situation.


In this post, I am going to assume that you have already configured an ExaCS environment and have a VM defined to restore the database into.

The database I am going to use for testing has the characteristics below.

DBNAME:    bgrenndb

DB version:    19.19

DB_UNIQUE_NAME: BGRENNDB_HS7_IAD

NOTE: I have been creating "KEEP" backups for this database and I want to use one of them to restore from in OCI.  This may not be your case; you might be sending a weekly full backup and a daily incremental backup.


Prerequisites:

There are some prerequisites that I found are important to make the restoration go smoothly

  • Backup your TDE encryption wallet - It is important to make sure you have the encryption keys for your database.  When using the Oracle Database backup service, ALL backup pieces are encrypted, including the backups of the spfile and controlfile. It is critical to have the encryption wallet to restore the backups.  You want to back up just the "ewallet.p12" file. I recommend you DO NOT back up the cwallet.sso file, as this is the autologin wallet.  Best MSA (Maximum Security Architecture) practice is to keep the wallet backup stored separately from the database backups, and recreate the autologin wallet using the password. This is much more secure than backing up the autologin wallet.
  • Store the backup logs in a bucket - When restoring from a database backup you need to determine the backup pieces that are needed, especially when restoring the controlfile.  If you store the log files, it will make it much easier to restore the database without an RMAN catalog.
  • Create a bucket for DB backups and Metadata - This is where the database backups will be stored, and I recommend adding a retention lock to the bucket.  Instructions on creating the retention lock can be found here.
PRO TIP : The easiest way to upload the RMAN backup log files, and backups of the wallets is to use Pre-Authenticated URLs (PARS). These make it secure (because they can only be used to drop the backup into a bucket), and they also make it easier to deal with authentication.

Steps to restore a database from object storage.

1) Create a stub database 

Because I want to use the tooling in OCI to manage my database, I am starting with a "stub" database with the same name as my backed up database, and it should be the same DB release  or higher. 

NOTE: When creating the stub database, you should use the same password as you are using for the original database.  In my case the SYS password, and the wallet password are the same.  If your wallet password is different from the SYS password, you can create the stub database with different passwords.

Stub database

DBNAME:    bgrenndb

DB version:    19.22

DB_UNIQUE_NAME: BGRENNDB_S39_IAD


PRO TIP  - In hindsight, I should have named the DB_UNIQUE_NAME the same as my production database to make it easier to restore.

2) Backup a copy of the stub SPFILE


In sqlplus I backed up the SPFILE to a PFILE that I will use later to ensure my parameters which are local to this VM are correct when I restore my database.

SQL> create pfile='/tmp/bgrenndb.origpfile' from spfile;

3) Shutdown the database and delete all files.

I shut down the database in srvctl since this is a RAC instance

#> srvctl stop database -d bgrenndb_s39_iad

I deleted all the files on ASM from both +DataC1 and +RecoC1 for this database


4) Download and configure the Oracle Database Backup Service

You need to download the Oracle Database backup service installation jar file.  Once this is downloaded, you need to run the installation which will download the library, create a wallet file, and create the configuration file used by the library.

Instructions on how to do this are documented in my last blog post you can find here.

Pro Tip: Since I am restoring the database to a RAC cluster, it is easier to install the Backup Service configuration in a shared location across all nodes.  In my environment, I am going to install the Backup Service configuration in "/acfs01/dbaas_acfs/bgrenndb" in a directory called opc.


Once I go through the installation, I will have the following directories

/acfs01/dbaas_acfs/bgrenndb/opc/lib        --> contains libopc.so used during restore

/acfs01/dbaas_acfs/bgrenndb/opc/config    --> backupconfig.ora containing the library parameters

/acfs01/dbaas_acfs/bgrenndb/opc/wallet     --> contains the authentication information


5) Download and configure the TDE Wallet from my backup

The easiest way to download the most current wallet from OCI object storage is by using a Pre-authenticated URL (PAR).  I created a PAR on the object and then used curl to download my wallet file.

curl -o {name of the restored file } {PAR which is a long URL pointing to the object}

Once I download the wallet, I am going to :
  • Go to the wallet directory (under WALLET_ROOT/tde) and delete the original wallet files (ewallet.p12 and cwallet.sso).
  • Replace the ewallet.p12 with my downloaded wallet from my source database.
Now that I have the wallet downloaded, I need to create the autologin wallet.

NOTE: it is not recommended to backup the autologin wallet, just the passworded wallet

To create the autologin wallet from the passworded wallet I execute

>mkstore -wrl {wallet_location} -createSSO

I enter the password for the wallet, and it creates the autologin wallet for me.

6) Startup the database nomount and validate wallet


Now that I have the wallet in the correct location, I created a basic pfile.  I only need the following parameters.  You can look at the backup of the stub spfile to get the appropriate settings for "control_files", "db_unique_name", and the proper disk groups for DATA and RECO.

*.control_files='+DATAC1/BGRENNDB_S39_IAD/CONTROLFILE/current.327.1166013711'
*.db_name='bgrenndb'
*.enable_pluggable_database=true
*.db_recovery_file_dest='+RECOC1'
*.db_recovery_file_dest_size=6538932518912
*.db_unique_name='bgrenndb_s39_iad'
*.diagnostic_dest='/u02/app/oracle'
*.pga_aggregate_target=5000m
*.processes=2048
*.sga_target=7600m
*.tde_configuration='keystore_configuration=FILE'
*.wallet_root='/var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root'

NOTE: I am going to restore the spfile, so this is only temporary.

I started the database nomount with this small pfile

SQL> startup nomount pfile=bgrenndb.ora;

Once the database started, I used the first TDE query from my blog to check the status of the wallet.  You want to make sure the encryption wallet is OPEN before proceeding.

 INST_ID PDB Name   Type       WRL_PARAMETER                                      Status                         WALLET_TYPE          KEYSTORE Backed Up
---------- ---------- ---------- -------------------------------------------------- ------------------------------ -------------------- -------- ----------
         1            FILE       /var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root/td OPEN                           UNKNOWN              NONE     NO
                                 e/


7) Locate the name of the SPFILE and Controlfile backup pieces

As part of my backup script, I also uploaded the log file associated with the backup. This gave me
  • The DBID
  • The name of the spfile backup piece associated with the backup I am going to restore
  • The name of the controlfile backup piece associated with the backup I am going to restore

8) Restore the spfile and update it.

Using the backup piece name, I restored my spfile to the file system, and created a pfile copy of it so that I can make a few changes.
RMAN>
 run {
 allocate CHANNEL c1 TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/acfs01/dbaas_acfs/bgrenndb/opc/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/acfs01/dbaas_acfs/bgrenndb/opc/config/backupconfig.ora)';
 set dbid=367184428;
 restore spfile to '/tmp/bgrenndb.spfile' from 'BGRENNDB_KEEP_20240227_3776_1' ;
}

RMAN> 2> 3> 4> 5>
allocated channel: c1
channel c1: SID=2142 device type=SBT_TAPE
channel c1: Oracle Database Backup Service Library VER=19.0.0.1

executing command: SET DBID

Starting restore at 12-APR-24

channel c1: restoring spfile from AUTOBACKUP BGRENNDB_KEEP_20240227_3776_1
channel c1: SPFILE restore from AUTOBACKUP complete
Finished restore at 12-APR-24
released channel: c1

RMAN> create pfile='/tmp/bgrenndb.pfile' from spfile='/tmp/bgrenndb.spfile';

Statement processed

I then edited my pfile, "/tmp/bgrenndb.pfile" and made the following changes.
  • I changed cluster_interconnects to match the entries in the original spfile from the stub.
  • I changed entries that were pointing to DATAC6 and RECOC6 to DATAC1 and RECOC1 to match the VM I am restoring to.
  • I changed the REMOTE_LISTENER to match the original spfile.
  • I changed bgrenndb_hs7_iad to bgrenndb_s39_iad since that will be the new db_unique_name.
I then bounced the database and started it up NOMOUNT again with the new pfile

9) Restore the controlfile

Now I am going to identify the backup location of the controlfile I want, and restore the control file 

RMAN>

 run {
  allocate CHANNEL c1 TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/acfs01/dbaas_acfs/bgrenndb/opc/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/acfs01/dbaas_acfs/bgrenndb/opc/config/backupconfig.ora)';
  set dbid=367184428;
 restore controlfile from 'BGRENNDB_KEEP_20240227_3777_1' ;
}
4> 5>
using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=9 instance=bgrenndb1 device type=SBT_TAPE
channel c1: Oracle Database Backup Service Library VER=19.0.0.1

executing command: SET DBID

Starting restore at 12-APR-24

channel c1: restoring control file
channel c1: restore complete, elapsed time: 00:00:04
output file name=+DATAC1/BGRENNDB_S39_IAD/CONTROLFILE/current.332.1166124375
Finished restore at 12-APR-24
released channel: c1

Once the controlfile was restored, I updated the pfile with the location the controlfile was restored to.
Then I created the spfile from pfile.

SQL> create spfile from pfile='/tmp/bgrenndb.pfile';

I then shutdown the instance and started it mount and ensured the parameters were correct, and once again ensured the wallet was open.

10) Change the channel configuration in RMAN and restore

I changed the channel configuration to match the backup service settings, and restored the database using the TAG

 restore database from tag=KEEP_BGRENNDB_HS7_IAD_20240227;
 recover database from tag=KEEP_BGRENNDB_HS7_IAD_20240227;

11) I opened the database reset logs



RMAN> alter database open resetlogs;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of sql statement command at 04/12/2024 19:59:12
ORA-19751: could not create the change tracking file
ORA-19750: change tracking file: '+DATAC6/BGRENNDB_HS7_IAD/CHANGETRACKING/ctf.898.1160234109'
ORA-17502: ksfdcre:4 Failed to create file +DATAC6/BGRENNDB_HS7_IAD/CHANGETRACKING/ctf.898.1160234109
ORA-15046: ASM file name '+DATAC6/BGRENNDB_HS7_IAD/CHANGETRACKING/ctf.898.1160234109' is not in single-file creation form
ORA-17503: ksfdopn:2 Failed to open file +DATAC6/BGRENNDB_HS7_IAD/CHANGETRACKING/ctf.898.1160234109
ORA-15001: diskgroup "DATAC6" does not ex



Oops, I then disabled block change tracking.


RMAN> alter database disable block change tracking;

RMAN> alter database open resetlogs;

Statement processed
PL/SQL package SYS.DBMS_BACKUP_RESTORE version 19.19.00.00 in TARGET database is not current
PL/SQL package SYS.DBMS_RCVMAN version 19.19.00.00 in TARGET database is not current

Now it was successful, and I see I have to upgrade the database.


12) Patch the database from 19.19 to 19.22

I ran through the patch upgrade process 

> cd $ORACLE_HOME/OPatch
>./datapatch -verbose


Summary :

Once I patched the database, I turned on automatic backups, which was successful. This was a great sign that I had everything correct and my new database was ready to go!





Autonomous Recovery Service Prechecks


 If you are configuring backups to utilize the Autonomous Recovery Service, there are some prerequisites that you need to be aware of.  If your Oracle Database was originally created in OCI and has always been in OCI, those prerequisites are already configured for your database.  But if you migrated a database to an OCI service, you might not realize that these items are required.


Prerequisites for Autonomous Recovery Service


1) WALLET_ROOT must be configured in the SPFILE.

WALLET_ROOT is a new parameter that was added in 19c, and its purpose is to replace the SQLNET.ENCRYPTION_WALLET_LOCATION setting in the sqlnet.ora file. Configuring the encryption wallet location in the sqlnet.ora file is deprecated.
WALLET_ROOT points to the directory path on the DB node(s) where the encryption wallet is stored for this database, and possibly the OKV endpoint client if you are using OKV to manage your encryption keys.
WALLET_ROOT allows each database to have its own configuration location specific to that database.

There is a second parameter that goes with WALLET_ROOT that tells the database what kind of wallet is used (file, HSM or OKV), and that parameter is tde_configuration.


Running the script below should return the WALLET_ROOT location, and the tde_configuration information.


Checking the WALLET_ROOT and tde_configuration
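
A minimal version of that script is sketched below; it simply pulls the two values from v$parameter (my own script formats the output as shown underneath).

col "Parameter" format a20
col "Value"     format a60
select name  as "Parameter",
       value as "Value"
  from v$parameter
 where name in ('wallet_root', 'tde_configuration');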


Below you can see that both of these parameters are configured and I am using a wallet file.


Parameter            Value
-------------------- ------------------------------------------------------------
wallet_root          /var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root
tde_configuration    keystore_configuration=FILE


2) Encryption keys must be configured and available

In order to leverage the Autonomous Recovery Service, you must have an encryption key set and available for the CDB and each PDB.  If you migrated a non-TDE database (or plugged in a non-TDE PDB) to OCI you might not have configured encryption for one or more PDBs.  The next step is to ensure that you have an encryption key set, and the wallet is open.  The query below should return "OPEN" for each CDB/PDB, showing that the encryption key is available.
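
A minimal sketch of such a query is below, joining gv$encryption_wallet to gv$containers to get the PDB names; my own script formats the output a little differently.

col "PDB Name"    format a10
col wrl_parameter format a60
col status        format a15
select ew.inst_id,
       c.name      as "PDB Name",
       ew.wrl_type as "Type",
       ew.wrl_parameter,
       ew.status
  from gv$encryption_wallet ew
  join gv$containers c
    on c.con_id  = ew.con_id
   and c.inst_id = ew.inst_id
 order by ew.inst_id, c.name;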


Below is the output from the query showing that the wallet is open for the CDB and the PDBs. 



   INST_ID PDB Name   Type       WRL_PARAMETER                                                Status
---------- ---------- ---------- ------------------------------------------------------------ ---------------
         1 BGRENNPDB1 FILE                                                                    OPEN
           CDB$ROOT   FILE       /var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root/tde/         OPEN
           PDB$SEED   FILE                                                                    OPEN

         2 BGRENNPDB1 FILE                                                                    OPEN
           CDB$ROOT   FILE       /var/opt/oracle/dbaas_acfs/bgrenndb/wallet_root/tde/         OPEN
           PDB$SEED   FILE                                                                    OPEN



3) All tablespaces are TDE encrypted

TDE encryption is mandatory in OCI, and the Autonomous Recovery Service cannot be used if all of your tablespaces are not encrypted.  Below is a query to run that will tell you if your tablespaces are all encrypted.
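
A simple sketch of such a query, based on the ENCRYPTED column of DBA_TABLESPACES, is below; my own script formats the counts as shown in the output underneath.

select sum(case when encrypted = 'YES' then 1 else 0 end) as encrypted_tablespaces,
       sum(case when encrypted = 'NO'  then 1 else 0 end) as unencrypted_tablespaces,
       count(*)                                           as total_tablespaces
  from dba_tablespaces;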


In my case I can see that all of the tablespaces are encrypted

Encrypted tablespace information
------------------------------------------------------------
Number of encrypted tablespaces   :      12
Number of unencrypted tablespaces :      0
                                         ----
Total Number of tablespaces       :      12



To find any tablespaces that are not encrypted you can run the query below.
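
A sketch of that check is below; it lists any tablespace whose ENCRYPTED flag in DBA_TABLESPACES is not set to YES.

select tablespace_name, contents, status
  from dba_tablespaces
 where encrypted = 'NO'
 order by tablespace_name;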




ZDLRA's space efficient encrypted backups with TDE explained


 In this post I will explain what typically happens when RMAN either compresses or encrypts backups, and how the new space efficient encrypted backup feature of the ZDLRA solves these issues.


TDE - What does a TDE encrypted block look like ?

Oracle Block contents

In the image above you can see that only the data is encrypted with TDE.  The header information (metadata) remains unencrypted.  The metadata is used by the database to determine the information about the block, and is used by the ZDLRA to create virtual full backups.


Normal backup of TDE encrypted datafiles

First let's go through what happens when TDE is utilized, and you perform a RMAN backup of the database.

In the image below, you can see that the blocks are written and are not changed in any way. 

NOTE: Because the blocks are encrypted, they cannot be compressed outside of the database.  


TDE backup no compression

Compressed backup of TDE encrypted datafiles

Next let's go through what happens if you perform an RMAN backup of the database AND tell RMAN to create compressed backupsets.  As I said previously, the encrypted data will not compress, and because the data is TDE the backup must remain encrypted.
Below you can see that RMAN handles this with a series of steps.

RMAN will
  1. Decrypt the data in the block using the tablespace encryption key.
  2. Compress the data in block (it is unencrypted in memory).
  3. Re-encrypt the whole block (including the headers) using a new encryption key generated by the RMAN job

You can see in the image below, after executing two RMAN backup jobs the blocks are encrypted with two different encryption keys. Each subsequent backup job will also have new encryption keys.

Compressed TDE data



Compression or Deduplication

This leaves you having to choose one or the other when performing RMAN backup jobs to a deduplication appliance.  If you execute a normal RMAN backup, there is no compression available, and if you utilize RMAN compression, it is not possible to dedupe the data. The ZDLRA, since it needs to read the header data, didn't support using RMAN compression.

How space efficient encrypted backups work with TDE

So how does the ZDLRA solve this problem to be able to provide both compression and the creation of virtual full backups?
The flow is similar to using RMAN compression, BUT instead of using RMAN encryption, the ZDLRA library encrypts the blocks in a special format that leaves the header data unencrypted.  The ZDLRA library only encrypts the data contents of blocks.

  1. Decrypt the data in the block using the tablespace encryption key.
  2. Compress the data in block (it is unencrypted in memory).
  3. Re-encrypt the data portion of the block (not the headers) using a new encryption key generated by the RMAN job
In the image below you can see the flow as the backup is migrating to utilizing this feature.  The newly backed up blocks are encrypted with a new encryption key with each RMAN backup, and the header is left clear for the ZDLRA to still create a virtual full backup.

This allows the ZDLRA to both compress the blocks AND provide space efficient virtual full backups




How space efficient encrypted backups work with non-TDE blocks


So how does the ZDLRA feature work with non-TDE data ?
The flow is similar to that of TDE data, but the data does not have to be unencrypted first.  The blocks are compressed using RMAN compression, and are then encrypted using the new ZDLRA library.


In the image below you can see the flow as the backup is migrating to utilizing this feature.  The newly backed up blocks are encrypted with a new encryption key with each RMAN backup, and the header is left clear for the ZDLRA to still create a virtual full.





I hope this helps to show you how space efficient encrypted backups work, and how it is a much more efficient way to both protect your backups with encryption and utilize compression.

NOTE: Using space efficient encrypted backups does not require the ACO (Advanced Compression Option) or ASO (Advanced Security Option).









Using APEX to upload objects to ZFSSA


 When working on my latest project, I wanted to be able to provide an easy web interface that can be used to upload images into OCI object storage on ZFSSA by choosing the file on my local file system.

In this blog post, I will go through the series of steps I used to create a page in my APEX application that allows a user to choose a local file on their PC, and upload that file (image in my case) to OCI object storage on ZFSSA.



Below are the series of steps I followed.


Configure ZFSSA as OCI object storage

First you need to configure your ZFSSA as OCI object storage.  Below are a couple of links to get you started.

During this step you will

  • Create a user on ZFSSA that will be the owner of the object storage
  • Add a share that is owned by the object storage user
  • Enable OCI API mode "Read/Write" as the protocol for this share
  • Under the HTTP service enable the service and enable OCI.
  • Set the default path as the share.
  • Add a public key for the object storage user under "Keys" within the OCI configuration.

NOTE: You can find an example of how to create public/private key pair here.

Create a bucket in the OCI object storage on ZFSSA

In order to create a bucket in the OCI object storage you need to use the "OCI cli" interface.
If you have not installed it already, you can use this link for instructions on how to install it.

Once installed, you need to configure the ~/.oci/config file and I explain the contents in my "OCI access to ZFS" section of this blog post.

Now you should have the oci cli installed, and the configuration file created, and we are ready for the command to create the bucket.

oci os bucket create --endpoint http://{ZFSSA name or IP address} --namespace-name {share name} --compartment-id {share name} --name {bucket name}

For my example below:

  • ZFSSA name or IP address: zfstest-adm-a.dbsubnet.bgrennvcn.oraclevcn.com
  • share name: objectstorage
  • bucket name: newobjects

The command to create my bucket is:
oci os bucket create --endpoint http://zfstest-adm-a.dbsubnet.bgrennvcn.oraclevcn.com --namespace-name objectstorage --compartment-id objectstorage --name newobjects


Ensure you have the authentication information for APEX

This step is to make sure you have what you need for APEX in order to configure and upload an object into object storage on ZFSSA.

If you successfully created a bucket in the last step, you should have everything you need in the configuration file that you used.  Looking at the contents of my config file (below) I have almost all the parameters I need for APEX.

From the step above I have the correct  URL to access the object storage and the bucket.

http://{ZFSSA name or IP address}/n/{share name}/b/{bucket name}/o/

which becomes

http://zfstest-adm-a.dbsubnet.bgrennvcn.oraclevcn.com/n/objectstorage/b/newobjects/o/

The rest of the information except for tenancy is in the configuration file.

  • user: ocid1.user.oc1..{ZFS user} ==> ocid1.user.oc1..oracle
  • fingerprint: {my fingerprint} ==> 88:bf:b8:95:c0:0a:8c:a7:ed:55:dd:14:4f:c4:1b:3e
  • key_file: This file contains the private key, and we will use this in APEX
  • region: This is always us-phoenix-1
  • namespace: share name ==> objectstorage
  • compartment: share name ==> objectstorage


NOTE: The tenancy ID for ZFSSA is always  "ocid1.tenancy.oc1..nobody"


In APEX configure web credentials

Now that we have all of the authentication information outlined in the previous step, we need to configure web credentials to access the OCI object storage on ZFSSA as a rest service.

In order to add the web credentials I log into my workspace in APEX. Note I am adding the credentials at the workspace level rather than at the application level.
Within your workspace make sure you are within the "App Builder" section and click on "Workspace Utilities". 



Within "Workspace Utilities" click on "web Credentials".



Now click on "Create >" to create new web credential



Enter the information below (also see screen shot)

  • Name of credential
  • Type is OCI
  • user Id from above
  • private key from above
  • Tenancy ID is always ocid1.tenancy.oc1..nobody for ZFSSA
  • Fingerprint that matches the public/private key
  • URL for the ZFS




In APEX create the upload region and file selector

I have an existing application, or you can create a new application in APEX. I am going to start by creating a blank page in my application.



After clicking on "Next >", I give the new page a name and create the page.






Then on the new page I created a new region by right clicking on "Body"


Once I created the region, I named the region "upload" by changing the identification on the right hand side of Page Designer.



Then on the left hand side of Page Designer, I right clicked on my region "upload" and chose to create a new "Page Item".


After creating the new page item I needed to give the item a better identification name and change the type to "file upload". See the screen shot below.


In APEX create the button to submit the file to be stored in object storage.


Next we need to add a button to upload the file to object storage.  Right click on the "upload" region, and this time choose "create button below".


I gave the button a clearer name to identify what it's there for


And I scrolled down the attributes of the button on the right hand side, and made sure that the behavior for the button was "Submit Page"



In APEX add the upload process itself

Click on the processing section in the top left corner of Page Designing and you will see the sections for page process.  Right click on "Processing" and click on "Create process"


The next step is to give the process a better identifier; I named mine "file_upload".  I also need to include the PL/SQL code to execute as part of this process.

The items we need to customize for the code snippet are:

  • File Browse Page Item: ":" followed by the name of the file selector. Mine is ":FILE_NAME"
  • Object Storage URL: This is the whole URL including the namespace and bucket name
  • Web Credentials: This is the name of the Web Credentials created for the workspace


My PL/SQL code is below with the values I've mentioned throughout this blog.



declare
    l_request_url      varchar2(32000);
    l_content_length   number;
    l_response         clob;
    upload_failed_exception exception;
    l_request_object   blob;
    l_request_filename varchar2(500);
begin
    -- Read the uploaded file from the APEX temporary table using the file selector item
    select blob_content, filename
      into l_request_object, l_request_filename
      from apex_application_temp_files
     where name = :FILE_NAME;

    -- Build the object URL: {endpoint}/n/{namespace}/b/{bucket}/o/{object name}
    l_request_url := 'http://zfstest-adm-a.dbsubnet.bgrennvcn.oraclevcn.com/n/objectstorage/b/newobjects/o/'
                     || apex_util.url_encode(l_request_filename);

    -- PUT the file into the bucket using the workspace web credential
    l_response := apex_web_service.make_rest_request(
        p_url                  => l_request_url,
        p_http_method          => 'PUT',
        p_body_blob            => l_request_object,
        p_credential_static_id => 'ZFSAPI'
    );
end;


In the APEX database ensure you grant access to the URL

The final step before we test this is to add the ACL grant for the URL.
NOTE: This needs to be granted to the APEX application owner, in my case APEX_230200.

BEGIN
    DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
        host => '*',
        ace => xs$ace_type(privilege_list => xs$name_list('connect', 'resolve'),
            principal_name => 'APEX_230200',
            principal_type => xs_acl.ptype_db
        )
    );
END;
/


Upload the object and verify it was successful

After saving the page in Page Designer run the page to upload an object.
Choose an object from your local file system and click on the "Upload Object" button.

If there were no errors, it was successful and you can verify it was uploaded by listing the objects in the bucket.
Below is my statement to list the objects.

oci os object list --endpoint http://zfstest-adm-a.dbsubnet.bgrennvcn.oraclevcn.com  --namespace-name objectstorage --bucket-name newobjects


 That's all there is to it

Creating Archival Backups from ZDLRA using EM Cloud Control



The ability for the ZDLRA to create archive backups was added with release 21.1 and I wrote a blog post (here) on how to do this.  I recently noticed that the latest plugin for ZDLRA (13.5.1.0.0) allows you to dynamically schedule your archival jobs from EM Cloud Control.

Create Archival Backup


In this blog post I will go through how to use this new feature.

First the release that I am using for this is

  •  EM Cloud Control 13.5.0.19
  • Zero Data Loss Recovery Appliance Plugin Release 13.5.1.0.0

Where to find the feature:

If you have the correct plugin, you will notice that there is a new choice in the "Recovery Appliance"  pull down menu provided by the plugin.


There is an entry for "Archival Backups" that appears just below "replication".  When you chose this option it will bring up a new window that you can use to prepare to create an archival backup.


Notice that there is nothing listed here.  I did create an archival backup earlier, but it isn't listed.

In order to create an archival backup, click on the "Create Archival Backup" button and continue to one of the next sections.  You can either create a "one-time" archival backup, or schedule a recurring backup.  The default is to create a recurring scheduled backup.

Create a recurring scheduled Backup:

Protected Databases

I am going to create a recurring scheduled backup for my database "testdb".   I can choose only one database.

Recovery Point Time

  • This should be for every month.  I chose every month individually, and I ensured that I chose all 12 months.
  • This should occur on the "last" day of the month
  • The recovery point should be 11:00 PM based on the browser time (I can also choose the DB time, or UTC).
  • I want to set the restore point prefix to be "MONTHLY_KEEP_BACKUP_". The job will affix the timestamp to the end of the prefix.

Retention Time

  • Keep this backup for 3 years (I can also choose a time period based on months or weeks).

Properties

  • Use the attribute set "TESTDB" that I created earlier.
  • Leave the default format of the backup pieces, but I can change the format if I'd like to.
  • I am not setting an encryption algorithm (I would need to for a copy-to-cloud job).
  • I am not setting a compression algorithm.
My screen for creating the recurring backup looks like the image below.


Once I complete everything I can click on OK, and it will submit my schedule to run.


Viewing recurring scheduled Backup  Procedures:

The recurring backups are not scheduled as jobs, they are scheduled as Procedures because they have a few steps to execute.
You can find these scheduled backups in EM Cloud Control under Enterprise --> Provisioning and Patching --> Procedure Activity.
At this point, I had scheduled 2 jobs (actually procedures) previously, and you can see them in this section.


In order to see more detail on these 2 procedures I can select one of them and click on the "Reschedule" button at the top of the list of procedures.
I know the first procedure is for executing scheduled archival backups for TESTDB because the name of the procedure contains TESTDB followed by the timestamp.

Below is what it shows it when I choose to reschedule it.


You can see that during this test, I created a monthly schedule that creates a new backup at 7:00 AM PT on the 10th of the months listed.  During my test I did not include all months, and those that I included, I did not choose them in order.  
When I go back to the list of procedures, and drill into the procedure, I can see that there are just a couple of steps, and I can't see any detail as to what the steps do.


Viewing executed scheduled Backup Procedures:

In order to view any executed scheduled backup you would look in the same place as you do for scheduled procedures.  Along with the 2 scheduled procedures I had above, I also had one of them actually execute, and I see it in the list.


You can see that the first scheduled job had successfully executed, now let's take a look at the executed step and output.
If you click on the highlighted "Run" name, you can drill into the procedure and steps. Below is what I see for the step detail for this execution.


Below is what the output of the last step looks like.

You can see all of the attributes that were set when I created this procedure, and you can see the actual command that executed to create the archival backup.


Create a One-time only archival Backup:

Similar to creating a recurring backup, you go to the "Create Archival Backup" section within the ZDLRA plugin.

Protected Databases

I am going to create a One-time archival backup for my database "testdb".   I can choose only one database.

Create Archival Backup For


Within this section there are 3 choices

Point-in-Time : Using a date picker choose the point in time you want to create the archival backup as of. 


SCN : Enter the SCN you want to use. The text tells the range of SCN numbers you can use.


Restore Point : Enter the restore point from the drop down menu.



Retention Time (same as recurring backups)

  • Keep this backup for 3 years (I can also choose a time period based on months or weeks).

Properties (same as recurring backups)

  • Use the attribute set "TESTDB" that I created earlier.
  • Leave the default format of the backup pieces, but I can change the format if I'd like to.
  • I am not setting an encryption algorithm (I would need to for a copy-to-cloud job).
  • I am not setting a compression algorithm.
Click "OK" after filling in all of the detail, and submit the job.


Viewing archival Backups:


In the window where you choose "Create Archival Backup" you can view existing backups.  In order to view the backups, you must first choose the "Protected Database" you want to view the backups for. Below is what you would see once a backup is initiated.



Summary:

You still might find it easier to create the archival backup yourself using the PL/SQL package. This can be done either manually or through scripting.  The GUI gives you a nice way to schedule individual database jobs, but for 100's or 1000's of databases with varying requirements, scripting can be more flexible.




DBMS_CLOUD Debugging with ZFS Object Storage


 In the course of testing the DBMS_CLOUD functionality against OCI object storage on ZFS, I have wanted to perform debugging by looking at the packets sent to the Web Listener on my ZFS.

Unfortunately for debugging purposes, DBMS_CLOUD requires all calls to object storage to be HTTPS calls which are encrypted.

In this blog post, I will go through the architecture below to show you how I was able to use a Load Balancer in OCI on port 443 (HTTPS traffic) to send the requests to my ZFS using Port 80 (HTTP traffic).

By doing this I was able to see all the packets going to ZFS.

You can use this same process to debug network traffic, while leaving the application interface encrypted.


Below are the steps in the OCI console, but I am not going to include the policies that need to be configured.

1) Create a vault

You can find create Vault under "Identity & Security" --> "Key Management & Secret Management".

Click on "Create Vault" and all you need to do is to give the vault a name, and choose the compartment to store the vault.

Once you fill them in click on "Create Vault" to have the vault created.

2) Create a Master Encryption Key

Once the Vault is created, click on the vault name; this will bring up the window where you can create a Master Encryption Key within the vault.

Click on "Create Key" and enter the information to create a new Key in this vault.  Note that 

  • The key MUST be an HSM key, you cannot use a software key
  • The key must be asymmentric. The default is symmetric and must be changed.

3) Create a Certificate Authority

Under "Identity & Security" --> "Certificates" you will see "Certificate Authorities". We need to create a new one.

Click on "Create Certificate Authority", and in this case we are creating a Root Certificate Authority. You need to give it a "Name" and "Description" and click on the 'Next" button in the lower left corner.

Then on the next window give it a "Common Name" and click on Next.


On the next window, you must choose a "not valid before" date. In my case, I chose today.

Then you must enter the Vault and the Encryption key that you had created previously.

Then click on 'Next"


Then set the expiry rule and click on "Next".  I left the defaults.


On the next window I changed "Revocation Configuration" to "skip" and clicked on "Next".


Then on the "Summary" window I clicked on "Create Certificate Authority" to create the Certificate Authority.


4) Create a Load Balancer

This can be found under "Networking" --> "Load balancers". Click on Load Balancer.

Once here, click on the "Create Load balancer" button.
Give the load balancer a name (if you want) to make it easier to find.
You then need to scroll down to the bottom of the screen to choose your network and subnet for the Load balancer.
Once you fill these in, click on "Next".





After clicking on Next, I left everything defaulted. This will do a health check on the ZFS using port 80.  Then I clicked on "Next" again.


In this window, I changed the protocol from HTTPS to HTTP. This allows me to create the Load Balancer without having a certificate yet.


I left the logging off, and clicked on "Submit" to create the Load Balancer.


5) Determine the Public IP for the Load Balancer

Once the load balancer is created, I go to the list of load balancers under Networking --> Load balancers --> Load Balancer, and it shows me the public IP for the Load Balancer that was created.  The overall health is showing "incomplete" since I haven't added any backend hosts yet.



6) Create the certificate


Now that I know the public IP address (129.146.220.252), I can create a certificate for it in my Certificate Authority.
I go back to "Identity & Security" --> "Certificates" and click on "Certificates".

I click on "Create Certificate" and I enter the name and description and Click on "next"







I give the "Common Name" my IP address so that the Certificate Name matches the URL I am going to use to connect.  Then I click on "Next".


In the next window I fill in the "not valid before" date and click on "Next".





I leave the default rules for the certificate and click on "Next".


Then when I get to the "Summary" window I click on "Create Certificate".

7) Create a Backend set for the load balancer

I now go back to Networking --> Load Balancers --> Load Balancer and choose the Load Balancer I had previously created.

On the left-hand side of the window I click on "Backend Sets" to list the existing backend sets.  By default a backend set was created for me, but it has no members.
I click on the default Backend set, which brings up a window showing that the backend set is "incomplete".
From here I click on "Backends(0)" on the left-hand side of the window.


This brings up a window with an "Add backends" button. Click on this button to bring up the window to enter backends.



In the window above, I entered the IP address of the HTTP interface I am using on the ZFS, left the port as 80 so that the traffic will be unencrypted, and clicked on "Add" to add it to the backend list.

8) Change the Health Check to TCP

On the Backends window, I used "Update Health Check" to change the protocol from HTTP to TCP and clicked on "Save Changes".




9) Change the Load Balancer to HTTPS

I now go back to Networking --> Load Balancers --> Load Balancer and choose the Load Balancer I had previously created.

From the left-hand side, I click on "Listeners" and then I click on "Create Listener".



In the window that comes up, I want to make this an HTTPS listener, so I change the protocol to HTTPS and choose the certificate I created in the previous step. This allows the load balancer to receive encrypted traffic with a registered certificate.
In this step, I also need to ensure it is using the backend set I just updated. Once complete, choose "Create Listener".




That's all there is to it.

Now I can access the object storage on ZFS through the "Public IP" using DBMS_CLOUD (which is encrypted), and the traffic will be passed on to the ZFS as HTTP.
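
To validate the path end to end, here is a sketch of the kind of DBMS_CLOUD call I am debugging. The credential, user, namespace, and bucket names are hypothetical, and the database wallet must trust the Certificate Authority that issued the load balancer's certificate.

-- Create a credential matching a user defined on the ZFS appliance
-- (the username and password here are placeholders).
BEGIN
   dbms_cloud.create_credential(
      credential_name => 'ZFS_CRED',
      username        => 'oracle',
      password        => '********');
END;
/

-- List objects through the load balancer's public IP. HTTPS terminates
-- at the load balancer, and the request reaches the ZFS on port 80.
SELECT object_name, bytes
  FROM dbms_cloud.list_objects(
          credential_name => 'ZFS_CRED',
          location_uri    => 'https://129.146.220.252/n/mynamespace/b/mybucket/o/');

With this in place, a packet capture on the ZFS side shows the full request and response in clear text.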

