Channel: Bryan's Oracle Blog

ZDLRA adds smart incremental to be even smarter.


Recently version 19.1.1.2 of the ZDLRA software was released, and one of the features is something called "Smart Incremental".  I will walk through how this feature works, and help you understand why features like this are "ZDLRA only".




I am going to start by walking through how incremental backups become "virtual full backups", and that will give you a better picture of how "smart incremental" is possible.

The most important thing to understand about these features is that the RMAN catalog itself is within the ZDLRA  AND the ZDLRA has the ability to update the RMAN catalog.

How does a normal backup strategy work? That is probably the best place to start.  What DBAs typically do is perform a WFDI (Weekly Full, Daily Incremental) backup.  To keep my example simple, I will use the following assumptions.
  • My database contains 3 datafiles: SYSTEM, SYSAUX, and USERS, but I will only use the example of backing up the USERS datafile.
  • Each of these 3 datafiles is 50 GB in size.
  • I am only performing a differential backup, which creates a backup containing the changes since the last backup (full OR incremental).
  • My database is in archivelog mode *
* NOTE: With ZDLRA you can back up a nologging database, and still take advantage of virtual fulls. The database needs to be in a MOUNTED state when performing the incremental backup.
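
To make the WFDI schedule concrete, here is a minimal RMAN sketch of it (this is my own illustration, not part of the original example; a level 1 backup is a differential incremental by default):

# Sunday: weekly level 0 (full) backup
BACKUP INCREMENTAL LEVEL 0 DATABASE;

# Monday - Saturday: daily level 1 differential incremental
# (contains the blocks changed since the most recent level 0 OR level 1)
BACKUP INCREMENTAL LEVEL 1 DATABASE;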

If placed in a table, the backups for datafile USERS would look like this. Checkpoint SCN is the current SCN number of the database at the start of the backup.



If I were to look at what is contained in the RMAN catalog (RC_BACKUP_DATAFILE), I would see the same backup information, but I would see the SCN information in 2 columns.
  • Incremental change # is the oldest SCN contained in the backupset. This is the starting SCN of the previous backup that this backup is based on.
  • Checkpoint change # is the starting SCN of the backup.  Everything newer than this SCN (including this SCN) needs to be defuzzied.


Normal backup progression (differential)

When performing an incremental RMAN backup of a datafile, the first thing RMAN does is decide which blocks need to be backed up. Because you are performing an incremental backup, you may be backing up all of the blocks, only some of the blocks, or even none of the blocks if the file has not changed.
This is a decision RMAN makes by querying the RMAN catalog entries (or the controlfile entries if you are not using an RMAN catalog).

Now let's walk through this decision process.  Each RMAN incremental differential's starting SCN is based on the beginning SCN of the previous backup (except for the full).



By looking at the RMAN catalog (or controlfile), RMAN determines  which blocks need to be contained in each incremental backup.



Normal backup progression (cumulative differential)

Up to release 19.1.1.2, the recommendation was to perform a cumulative differential backup. A cumulative differential backup uses the starting SCN of the last full backup as the starting point of the incremental backup (rather than the starting SCN of the last incremental backup).
The advantage of cumulative over differential is that a single cumulative backup can be applied to the last full, taking the place of applying multiple differential backups.  However, cumulative backups get bigger with every day that passes between full backups, because they contain all blocks changed since the last full.
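
In RMAN terms, the only difference between the two is the CUMULATIVE keyword; a quick sketch for comparison:

# Differential incremental: blocks changed since the last level 0 OR level 1
BACKUP INCREMENTAL LEVEL 1 DATABASE;

# Cumulative incremental: blocks changed since the last level 0 only
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;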

Below is what a cumulative schedule would look like and you can compare this to the differential above.
You can see that each cumulative backup starts with the Checkpoint SCN of the last full to ensure that all blocks changed since the full backup started are captured.



The RMAN catalog entries would look like this.




If you were astute, you would notice a few things happened with the cumulative differential vs the differential.
  • The backup size got bigger every day
  • The time it took to perform the incremental backup got longer
  • The range of SCNs contained in the incremental is larger for a cumulative backup.

ZDLRA backup progression (cumulative differential)

As you most likely know, one of the most important features of the ZDLRA is the ability to create a "virtual full" from an incremental backup.

If we look at what happens with a cumulative differential (from above), I will fill in the virtual full RMAN catalog entries by shading them light green.

The process of performing backups on the ZDLRA is exactly the same as it is for the above cumulative, but the RMAN catalog looks like this.


What you will notice by looking at this, compared to the normal cumulative process, is that:
  • For every cumulative incremental backup there is a matching virtual full backup. The virtual full backup appears (from the newly inserted catalog entry) to have been taken at the same time, and with the same starting SCN, as the cumulative incremental. Virtual full backups and incremental backups match time and SCN as catalog entries.
  • The size of the virtual full is 0 since it is virtual and does not take up any space.
  • The completion time for the cumulative incremental backup is the same as for the differential backups.  Because the RMAN logic can see the virtual full entry in the catalog, it executes the cumulative incremental EXACTLY as if it were the first differential incremental following a full backup.
Smart Incremental backups -

Now all of this leads us to smart incremental backups. Sometimes the cumulative backup process doesn't work quite right.  A few of the reasons this can happen are:

  • You perform a full backup to a backup location other than the ZDLRA. This could be because you are backing up to the ZDLRA for the first time, replacing a current backup strategy, or maybe you created a special backup to disk to seed a test environment (using a keep backup for this will alleviate this issue).  The cumulative incremental backup will compare against the last full regardless of where it was taken (there are exceptions if you always use tags to compare).
  • You implement TDE or rekey the database.  Implementing TDE (Transparent Data Encryption) changes the blocks, but does not change the SCN numbers of the blocks. A new full backup is required.
Previously, you would have to perform a special full backup to correct these issues. In the example below you can see what happens (without smart incremental) to the RMAN catalog if you perform a backup on Thursday at 12:00 to disk to refresh a development environment.



Since the cumulative backups are based on the last full backup, the Thursday - Saturday backups contain all the blocks that have changed since the disk backup started on Thursday at 12:00.
And, since it is cumulative, each day's backup is larger and takes longer.

This is when you would typically have to force a new level 0 backup of the datafile.


What the smart incremental does

Since the RMAN catalog is controlled by the ZDLRA it can correct the problem for you. You no longer need to perform cumulative backups as the ZDLRA can fill in any issues that occur.

In the case of the full backup to disk, it can "hide" that entry and continue to correctly perform differential backups. It "hides" the disk backup that occurred, and informs the RMAN client that the last full backup as of Thursday night is NOT the disk backup, but the previous virtual full backup.


In the case of TDE, it can "hide" all of the level 0 virtual full backups and the level 1 differential backups (which will force a new level 0).





All of this is done without updating the DB client version. All the magic is done within the RMAN catalog on the ZDLRA.

Now isn't that smart ?




Oracle TDE encryption - Encrypting my pluggable database


 This is post #1 in a series of posts explaining how to implement TDE (Transparent Data Encryption). In this first post I will take my Multitenant 19c database (remember Multitenant is mandatory with 21c) and configure TDE in my 3 pluggable databases.


The database I created for this contains 3 PDBs as this will give me flexibility to unplug and move PDBs around.

The names I used are

  • OKVTEST - This is my CDB, and I will not be encrypting it.
  • OKVPDB1, OKVPDB2, OKVPDB3 - My 3 PDBs. I will be encrypting all datafiles that make up these 3 PDBs.

The location I chose to put the wallet file that is needed for encryption is under my $ORACLE_BASE (/home/oracle/app/oracle/okvfiles/okvtest). In later blog posts I will be converting from using only a wallet for my encryption keys to using OKV along with a local wallet that caches the encryption keys.

I also chose to perform the encryption using the quickest method "Restore as encrypted".  With my test database, I did not have a standby database. Keep in mind this method (restore as encrypted) can be used to encrypt your production database with limited downtime.

Step 1 - Perform a full backup of the database.  Since I am using "restore as encrypted" this will allow me to open the database with minimal recovery.  Once backed up, you also should create a restore point to quickly identify the point after the full backup prior to the encryption.

create restore point pretde;
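
For reference, a minimal sketch of the full backup itself (the tag name is just an assumption for illustration, and your channel/device configuration may differ):

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'PRE_TDE';
RMAN> BACKUP ARCHIVELOG ALL NOT BACKED UP;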

Step 2 - Set the location of the wallet_root and the tde configuration.  I chose to use the WALLET_ROOT parameter (new with 19c, I believe) because it gives the most flexibility.  Keep in mind that in order to go through step 2 completely, the database will need to be bounced.


alter system set WALLET_ROOT='/home/oracle/app/oracle/okvfiles/okvtest/' scope=spfile;

startup force;

alter system set tde_configuration='KEYSTORE_CONFIGURATION=FILE' scope=both;


Step 3 - We are going to take a look at the database and the parameters that are set for encryption. Below is the formatted query I am going to be using throughout this post.


set linesize 150;
column wrl_parameter format a50
column wrl_type heading 'Type' format a10
column status heading 'Status' format a20

select * from v$encryption_wallet;

Below is the output of the query and the current settings as of this point. You can see that there are rows for all my PDBs, and that the status is "NOT_AVAILABLE" since I have not created any master keys yet. You can also see that the keystore is UNITED, meaning that all the keys (both for the CDB and all the PDBs) are assumed to be contained in the same Wallet file.

Type  WRL_PARAMETER                                    Status              WALLET_TYPE  WALLET_OR  KEYSTORE  FULLY_BAC  CON_ID
----- ------------------------------------------------ ------------------- ------------ ---------- --------- ---------- ------
FILE  /home/oracle/app/oracle/okvfiles/okvtest//tde/   NOT_AVAILABLE       UNKNOWN      SINGLE     NONE      UNDEFINED       1
FILE                                                   NOT_AVAILABLE       UNKNOWN      SINGLE     UNITED    UNDEFINED       2
FILE                                                   NOT_AVAILABLE       UNKNOWN      SINGLE     UNITED    UNDEFINED       3
FILE                                                   NOT_AVAILABLE       UNKNOWN      SINGLE     UNITED    UNDEFINED       4
FILE                                                   NOT_AVAILABLE       UNKNOWN      SINGLE     UNITED    UNDEFINED       5


Step 4. Now I need to set the keystore and open it for the CDB, and all my individual PDBs. Note that each PDB shares the keystore with the CDB. In isolated mode, I would create an individual keystore for each PDB.  

ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/home/oracle/app/oracle/okvfiles/okvtest/tde' IDENTIFIED BY "0KV2021!";

ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "0KV2021!";
alter session set container=okvpdb1;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "0KV2021!" CONTAINER = CURRENT;
alter session set container=okvpdb2;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "0KV2021!" CONTAINER = CURRENT;
alter session set container=okvpdb3;
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "0KV2021!" CONTAINER = CURRENT;

Now let's look at the encryption settings in v$encryption_wallet. Below you can see that there is a single wallet setting (UNITED keystore), and the status is "OPEN_NO_MASTER_KEY". The master key has not been set for CDB, or the PDBs.

Type  WRL_PARAMETER                                    Status              WALLET_TYPE  WALLET_OR  KEYSTORE  FULLY_BAC  CON_ID
----- ------------------------------------------------ ------------------- ------------ ---------- --------- ---------- ------
FILE  /home/oracle/app/oracle/okvfiles/okvtest//tde/   OPEN_NO_MASTER_KEY  PASSWORD     SINGLE     NONE      UNDEFINED       1
FILE                                                   CLOSED              UNKNOWN      SINGLE     UNITED    UNDEFINED       2
FILE                                                   OPEN_NO_MASTER_KEY  PASSWORD     SINGLE     UNITED    UNDEFINED       3
FILE                                                   OPEN_NO_MASTER_KEY  PASSWORD     SINGLE     UNITED    UNDEFINED       4
FILE                                                   OPEN_NO_MASTER_KEY  PASSWORD     SINGLE     UNITED    UNDEFINED       5

Step 5. Now we create the master keys for the CDB and each PDB . 

NOTE: I added a tag that identifies the key with the CDB or PDB it is created for.


ADMINISTER KEY MANAGEMENT SET encryption KEY using tag 'OKVTEST_MASTERKEY_APRIL1' IDENTIFIED BY "0KV2021!" WITH BACKUP USING 'OKVTEST_TDEKEY_APR1_backup';
alter session set container=okvpdb1;
ADMINISTER KEY MANAGEMENT SET encryption KEY using tag 'OKVPDB1_MASTERKEY_APRIL1' IDENTIFIED BY "0KV2021!" WITH BACKUP USING 'OKVPDB1_TDEKEY_APR1_backup' container=current;
alter session set container=okvpdb2;
ADMINISTER KEY MANAGEMENT SET encryption KEY using tag 'OKVPDB2_MASTERKEY_APRIL1' IDENTIFIED BY "0KV2021!" WITH BACKUP USING 'OKVPDB2_TDEKEY_APR1_backup' container=current;
alter session set container=okvpdb3;
ADMINISTER KEY MANAGEMENT SET encryption KEY using tag 'OKVPDB3_MASTERKEY_APRIL1' IDENTIFIED BY "0KV2021!" WITH BACKUP USING 'OKVPDB3_TDEKEY_APR1_backup' container=current;

And once again let's look at the settings in v$encryption_wallet.  This time you will see that the wallet is open for all CDBs/PDBs except for the PDB$SEED. The wallet type is "PASSWORD" which means that the wallet needs to be manually opened with a password.

Type  WRL_PARAMETER                                    Status              WALLET_TYPE  WALLET_OR  KEYSTORE  FULLY_BAC  CON_ID
----- ------------------------------------------------ ------------------- ------------ ---------- --------- ---------- ------
FILE  /home/oracle/app/oracle/okvfiles/okvtest//tde/   OPEN                PASSWORD     SINGLE     NONE      NO              1
FILE                                                   CLOSED              UNKNOWN      SINGLE     UNITED    UNDEFINED       2
FILE                                                   OPEN                PASSWORD     SINGLE     UNITED    NO              3
FILE                                                   OPEN                PASSWORD     SINGLE     UNITED    NO              4
FILE                                                   OPEN                PASSWORD     SINGLE     UNITED    NO              5

Step 6 - We have the master keys set and the wallets are open.  We now need to implement TDE. As I said, in my example I used "restore as encrypted".   First I am going to close, restore, and recover the 3 PDBs.

rman target / catalog rmancat/oracle@rmancat

rman> alter pluggable database okvpdb1 close;
rman> alter pluggable database okvpdb2 close;
rman> alter pluggable database okvpdb3 close;

rman> restore pluggable database okvpdb1 as encrypted;
rman> restore pluggable database okvpdb2 as encrypted;
rman> restore pluggable database okvpdb3 as encrypted;

rman> recover pluggable database okvpdb1;
rman> recover pluggable database okvpdb2;
rman> recover pluggable database okvpdb3;

Then once restored and recovered, I am going to open the wallet, and open the pluggable databases.

sqlplus / as sysdba

sql> alter session set container=okvpdb1;
sql> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "0KV2021!" CONTAINER = CURRENT;
sql> alter pluggable database okvpdb1 open;

sql> alter session set container=okvpdb2;
sql> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "0KV2021!" CONTAINER = CURRENT;
sql> alter pluggable database okvpdb2 open;

sql> alter session set container=okvpdb3;
sql> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "0KV2021!" CONTAINER = CURRENT;
sql> alter pluggable database okvpdb3 open;

Step 7 - I am going to verify that the pluggable databases are encrypted. I am going to use the query below to look at the encryption setting on each datafile.

set linesize 150
column status format a10
column encrypted format a10
column tablespace_name format a30
column name format a20

select c.name,b.tablespace_name,b.status,encrypted
from v_$datafile_header a,cdb_data_files b,v$pdbs c
where a.file#=b.file_id
and a.con_id=c.con_id
order by 1,2;

Below is the output. I see that all the datafiles were properly encrypted and are available.

NAME                 TABLESPACE_NAME                STATUS     ENCRYPTED
-------------------- ------------------------------ ---------- ----------
OKVPDB1 SYSAUX AVAILABLE YES
OKVPDB1 SYSTEM AVAILABLE YES
OKVPDB1 UNDOTBS1 AVAILABLE YES
OKVPDB1 USERS AVAILABLE YES
OKVPDB2 SYSAUX AVAILABLE YES
OKVPDB2 SYSTEM AVAILABLE YES
OKVPDB2 UNDOTBS1 AVAILABLE YES
OKVPDB2 USERS AVAILABLE YES
OKVPDB3 SYSAUX AVAILABLE YES
OKVPDB3 SYSTEM AVAILABLE YES
OKVPDB3 UNDOTBS1 AVAILABLE YES
OKVPDB3 USERS AVAILABLE YES

Step 8 - I am going to change the wallets to be AUTO_LOGIN, bounce the database and verify that the encrypt settings are all correct.

sqlplus / as sysdba

sql> ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY "0KV2021!";
sql> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/home/oracle/app/oracle/okvfiles/okvtest/tde' IDENTIFIED BY "0KV2021!";

sql> shutdown immediate
sql> startup

And v$encryption_wallet shows me that my wallets are all open, and that they are AUTOLOGIN.

Type  WRL_PARAMETER                                    Status              WALLET_TYPE  WALLET_OR  KEYSTORE  FULLY_BAC  CON_ID
----- ------------------------------------------------ ------------------- ------------ ---------- --------- ---------- ------
FILE  /home/oracle/app/oracle/okvfiles/okvtest//tde/   OPEN                AUTOLOGIN    SINGLE     NONE      NO              1
FILE                                                   OPEN                AUTOLOGIN    SINGLE     UNITED    NO              2
FILE                                                   OPEN                AUTOLOGIN    SINGLE     UNITED    NO              3
FILE                                                   OPEN                AUTOLOGIN    SINGLE     UNITED    NO              4
FILE                                                   OPEN                AUTOLOGIN    SINGLE     UNITED    NO              5

Now I am ready to perform a new FULL backup of the pluggable databases, and they are ready for use.

That's all there is to implementing TDE with a wallet file. Next post, I am going to convert my wallet to OKV managed wallets.

 

Migrate your TDE wallet to Oracle Key Vault


How to migrate your local TDE wallet to  Oracle  Key Vault .            




Here and here are the links to the 21c documentation that I used to go through this process.

I am assuming that you have installed OKV by this point.

Below are the steps.

1) Add the database/host to OKV as an endpoint.

 Remember in OKV, each endpoint is unique, but a wallet is shared between endpoints.

  I navigate to the endpoint tab and click on the "Add" button.

I fill in the information for my database (TDETEST from my previous post). This is the CDB, as I am using a UNITED wallet for all PDBs that are a member of my CDB. Once filled in I click on the "Register" button.




Once registered, I can see it on the Endpoint screen.  Note the "Enroll Token" column. This is needed to enroll the endpoint itself.  Save this token, as this will be needed by the person who actually enrolls each DB host/Database.




2) Download the OKV client install file


Now that the database/host is registered in OKV (the combination of the 2 is the endpoint), I need to download the jar file which will configure the settings on the database host.
The download is initiated by logging out of the OKV console and clicking on the "Endpoint Enrollment and Software Download" link on the logon screen. I highlighted it below.

You might not have noticed this link before!  Now click on the link; you don't need to log in for this step.  It will bring up the window below, and in that window you will:
  • Click on the "Submit Token" button, and it will validate the token
  • Click on "Enroll" to begin the download of the install file. If SMTP was configured, you can also have the software install e-mailed to the endpoint administrator.
The download file is a jar file called okvclient.jar. It is highly recommended that you rename it because it is specific to this endpoint.





3) Transfer the .jar file to the database host and install it.

The prerequisites are in the install guide. The Oracle environment during the install must be set to the database you are configuring ($ORACLE_HOME, $ORACLE_BASE, $ORACLE_SID).

My target directory is going to be "/home/oracle/app/oracle/admin/tdetest/wallet/okv" and I copied my .jar file to /home/oracle/app/oracle/admin/tdetest  (which I renamed to tdetest_okv.jar). 

Execute java, passing the location of the jar file, followed by -d and the install location.


[oracle@oracle-server okvtest]$ java -jar /home/oracle/app/oracle/admin/tdetest/tdetest_okv.jar -d /home/oracle/app/oracle/admin/tdetest/wallet/okv
Detected JAVA_HOME: /home/oracle/db_19c/jdk
Enter new Key Vault endpoint password (<enter> for auto-login):
Confirm new Key Vault endpoint password:
The endpoint software for Oracle Key Vault installed successfully.
Deleted the file : /home/oracle/app/oracle/okvfiles/okvtest/okvtest_install.jar
[oracle@oracle-server okvtest]$


If this is the first time OKV is being installed on the server, you need to execute the root.sh script (located in the /bin directory within the install location) as root.  If it has already been executed, you can skip this step.

Creating directory: /opt/oracle/extapi/64/hsm/oracle/1.0.0/
Copying PKCS library to /opt/oracle/extapi/64/hsm/oracle/1.0.0/
Setting PKCS library file permissions

Finally, verify that we can connect to OKV by executing "okvutil list". If successful, you will receive "No objects found". This utility is located in the /bin directory within the install.

oracle@oracle-server bin]$ ./okvutil list
Enter Oracle Key Vault endpoint password:
No objects found


4) Review how OKV connects to the database.

  • WALLET_ROOT is set in the database, and within WALLET_ROOT there is an "/okv" directory where the endpoint software must be installed.
  • On the OS itself, a library is installed (as root if it's not already there) to take care of the encryption
  • A link is created to a config file for this endpoint. This link is located in $ORACLE_BASE/admin/$ORACLE_SID and links to 2 files that were part of the install. okvclient.lck, and okvclient.ora.
    NOTE: okvclient.ora is the configuration file where parameters are set for the endpoint.


 5) Create wallet in OKV and associate it with the endpoint(s)


Now that OKV is installed and configured on the client we can create a wallet in OKV to upload the keys into.  I am going to start by logging back into OKV and navigating to the wallets tab and clicking on "Create" to create a new wallet.
The screen below comes up, and I enter the name of the new wallet to hold the keys for my CDB. I then click on save to save the new wallet.

  

Next I associate this new wallet to the endpoint (database host/database). In order to do this, navigate back to the endpoint tab and click on the endpoint. Scroll down and you will see "access to wallets". Click on add.  You will see a screen like the screen below. In this screen, I add access to the wallet for the endpoint. Since this is the primary database and will be making changes to the wallet, I am giving this endpoint the ability to manage the wallet.




And here is what the endpoint information looks like. My database/host is enrolled.





  

 6) Upload the keys from the local wallet into OKV 

Now we upload the keys from the local wallet into OKV.

The command is:
"okvutil upload -t WALLET -l {wallet location on host} -g {key vault wallet name} -v 2"

NOTE: the Key Vault wallet name is case sensitive
[oracle@oracle-server bin]$ ./okvutil upload -t WALLET -l  /home/oracle/app/oracle/admin/tdetest/wallet/tde -g tdetest -v 2
okvutil version 21.1.0.0.0
Endpoint type: Oracle Database
Configuration file: /home/oracle/app/oracle/admin/tdetest/wallet/okv/conf/okvclient.ora
Server: 10.0.0.150:5696
Standby Servers:
Uploading from
/home/oracle/app/oracle/admin/tdetest/wallet/tde

Enter source wallet password:
Enter Oracle Key Vault endpoint password:
ORACLE.SECURITY.ID.ENCRYPTION.
ORACLE.SECURITY.KB.ENCRYPTION.
ORACLE.SECURITY.KT.ENCRYPTION.AQDBKozP1k8Mvwq4sH7ptKYAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.KM.ENCRYPTION.AQDBKozP1k8Mvwq4sH7ptKYAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.AQDBKozP1k8Mvwq4sH7ptKYAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.KT.ENCRYPTION.AYURdnq5XU8Rv7IipWqWgHoAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.KM.ENCRYPTION.AYURdnq5XU8Rv7IipWqWgHoAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.AYURdnq5XU8Rv7IipWqWgHoAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY.BF507489CE7703B4E0536800000A8180
ORACLE.SECURITY.KM.ENCRYPTION.AXLqsppXAU9kv9JLJCcfGYAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.AXLqsppXAU9kv9JLJCcfGYAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.KT.ENCRYPTION.AXLqsppXAU9kv9JLJCcfGYAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY.BF5072A8540A032BE0536800000AB0DD
ORACLE.SECURITY.KM.ENCRYPTION.AXDVlynThU8bvwblg7vruGAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.AXDVlynThU8bvwblg7vruGAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.KT.ENCRYPTION.AXDVlynThU8bvwblg7vruGAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY.BF50708B8BEB0266E0536800000A7B90
ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY

Uploaded 4 TDE keys
Uploaded 0 SEPS entries
Uploaded 0 other secrets
Uploaded 6 opaque objects

Uploading private persona
Uploading certificate request
Uploading trust points

Uploaded 1 private keys
Uploaded 1 certificate requests
Uploaded 0 user certificates
Uploaded 0 trust points


Upload succeeded

Within the upload, I can see where the TDE master keys are being uploaded for my PDBs by looking at the PDB guids.


SQL> column name format a40
SQL> select name,guid from v$pdbs;

NAME GUID
---------------------------------------- --------------------------------
PDB$SEED BF5039AF39966A70E0536800000A09E1
TDEPDB1 BF50708B8BEB0266E0536800000A7B90
TDEPDB2 BF5072A8540A032BE0536800000AB0DD
TDEPDB3 BF507489CE7703B4E0536800000A8180


And I can look in the wallet in OKV (filtering by Symmetric Key) and see the contents that were uploaded from the local wallet. In this screen I can identify each PDB's key because I used tags when I created the keys.





7) Add secret to allow use of "External Store". 

1) I am going to add the OKV password to the keystore as a secret to allow me to use the "EXTERNAL STORE" instead of the password.

ADMINISTER KEY MANAGEMENT ADD SECRET '0KV2021!' FOR CLIENT 'OKV_PASSWORD' TO LOCAL AUTO_LOGIN KEYSTORE '/home/oracle/app/oracle/admin/tdetest/wallet/tde_seps';


NOTE: As I pointed out in the last post:

  • The keystore must be in a subdirectory of the WALLET_ROOT location called "tde_seps" in order to be found.
  • The "FOR CLIENT" entry must be 'OKV_PASSWORD' to be found.
  • It must be AUTO_LOGIN so that it can be opened and used.

2) I am going to add the OKV password to the keystore as a secret to allow me to auto logon to the OKV Keystore.

ADMINISTER KEY MANAGEMENT ADD SECRET '0KV2021!' FOR CLIENT 'HSM_PASSWORD' TO AUTO_LOGIN KEYSTORE '/home/oracle/app/oracle/admin/tdetest/wallet/tde';

3)  I need to change the TDE_CONFIGURATION (which is dynamic).

ALTER SYSTEM SET TDE_CONFIGURATION = "KEYSTORE_CONFIGURATION=OKV|FILE" SCOPE = BOTH;

4) I am going to bounce the database, and ensure that both my file and OKV wallets open up properly.


Type  WRL_PARAMETER                                        Status              WALLET_TYPE  WALLET_OR  KEYSTORE  FULLY_BAC  CON_ID
----- ---------------------------------------------------- ------------------- ------------ ---------- --------- ---------- ------
FILE  /home/oracle/app/oracle/admin/tdetest/wallet//tde/   OPEN                AUTOLOGIN    SINGLE     NONE      YES             1
OKV                                                        OPEN_NO_MASTER_KEY  OKV          SINGLE     NONE      UNDEFINED       1
FILE                                                       OPEN                AUTOLOGIN    SINGLE     UNITED    YES             2
OKV                                                        OPEN_NO_MASTER_KEY  OKV          SINGLE     UNITED    UNDEFINED       2
FILE                                                       OPEN                AUTOLOGIN    SINGLE     UNITED    YES             3
OKV                                                        OPEN_NO_MASTER_KEY  OKV          SINGLE     UNITED    UNDEFINED       3
FILE                                                       OPEN                AUTOLOGIN    SINGLE     UNITED    YES             4
OKV                                                        OPEN_NO_MASTER_KEY  OKV          SINGLE     UNITED    UNDEFINED       4
FILE                                                       OPEN                AUTOLOGIN    SINGLE     UNITED    YES             5
OKV                                                        OPEN_NO_MASTER_KEY  OKV          SINGLE     UNITED    UNDEFINED       5

10 rows selected.


8) Combine the local wallet File and OKV. 

  Next I need to migrate the keys using the local wallet. Note this will rekey the database.

ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY "0KV2021!" MIGRATE USING "F1LE2021!" WITH BACKUP;

I am going to bounce the database and ensure it comes up with both Keystores opened.

Type  WRL_PARAMETER                                        Status              WALLET_TYPE  WALLET_OR  KEYSTORE  FULLY_BAC  CON_ID
----- ---------------------------------------------------- ------------------- ------------ ---------- --------- ---------- ------
FILE  /home/oracle/app/oracle/admin/tdetest/wallet//tde/   OPEN                AUTOLOGIN    SECONDARY  NONE      YES             1
OKV                                                        OPEN                OKV          PRIMARY    NONE      UNDEFINED       1
FILE                                                       OPEN                AUTOLOGIN    SINGLE     UNITED    YES             2
OKV                                                        OPEN                OKV          SINGLE     UNITED    UNDEFINED       2
FILE                                                       OPEN                AUTOLOGIN    SECONDARY  UNITED    YES             3
OKV                                                        OPEN                OKV          PRIMARY    UNITED    UNDEFINED       3
FILE                                                       OPEN                AUTOLOGIN    SECONDARY  UNITED    YES             4
OKV                                                        OPEN                OKV          PRIMARY    UNITED    UNDEFINED       4
FILE                                                       OPEN                AUTOLOGIN    SECONDARY  UNITED    YES             5
OKV                                                        OPEN                OKV          PRIMARY    UNITED    UNDEFINED       5


That's all there is to it !

The most important notes I found during this process

  • WALLET_ROOT and TDE_CONFIGURATION should be used in 19c.
  • The password for OKV
    • add secret to the wallet in WALLET_ROOT/tde_seps using client 'OKV_PASSWORD'
    • add secret to the wallet in WALLET_ROOT/tde using client 'HSM_PASSWORD'
  • OKV must  be installed in WALLET_ROOT/okv 

Cloning a TDE encrypted PDB from backup


I am going to walk through how to clone a PDB from an encrypted cloud backup.


My environment.

  • I have a Multi-tenant database called "TDETEST" containing 3 pluggable databases. TDEPDB1,TDEPDB2 and TDEPDB3
  • All of my PDBs are encrypted with TDE.
  • I am backing up to the Oracle cloud using the "Database Cloud backup Module" (though you can use the same process regardless of where you are backing up to).
  • My backups also use RMAN encryption, which means my controlfile backup and spfile backups are encrypted.
  • I am using Oracle Key Vault (OKV) to manage my encryption keys. 
  • My source database contains 4 encryption keys in a United Encryption wallet. The encryption key for the CDB (1)  and a key for each PDB (3).
  • I am using the parameters WALLET_ROOT and TDE_CONFIGURATION to manage my TDE settings.
NOTE: OKV is not required to go through this same process.  It is possible to use the same process with local wallet files by importing/exporting keys from within the TDE wallet.  OKV makes this process much simpler.

Security Concerns.

Since this is an environment leveraging advanced security, I want to be sure that during this process I am following the security philosophy of "least privilege". Because encryption keys are critical to protecting the data, I am only going to access the encryption keys that I need, and I am going to change the master key on my destination to ensure it is different from my source database.

Clone Process of a PDB to a new CDB

Step #1 -  Identify the encryption keys needed

I first need to identify the encryption keys from the source database that I need in order to clone to my new database.  The script below (executed against the source database) will give me the ID of the current encryption keys. If you rotate the master key, and you are restoring from a backup prior to a key rotation, you can find the KEY ID for that older backup by filtering on the activation_time.
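
My exact formatting script is not reproduced here, but a minimal query along the same lines (the column list and formatting are assumptions; my script also reformats the key ID into the MKID shown below) would be:

set linesize 150
column pdb_name format a15
column key_id format a60
select c.name pdb_name,
       k.key_id,
       k.activation_time
from   v$encryption_keys k,
       v$containers c
where  k.con_id = c.con_id
order by k.activation_time;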


Now the output I am seeing (for my source database with 3 PDBs) looks like this below. I can identify my master encryption key for the CDB and the master key for my PDB (TDEPDB1), and add them to the wallet (in OKV) for my cloneDB.


PDB Name        TDE Master Encryption Key: MKID       Database Name
--------------- ------------------------------------- ------------------------------
$CDB/tdetest    064B6B6DD1A3F24F7BBF386DAA7940018F    tdetest
TDEPDB1         06911C93A8DFF84F58BFA7B77E59285C6F    tdetest
TDEPDB2         0631D6ECD792304F23BFB0430B8622EFCF    tdetest
TDEPDB3         06F8F1B56701944F77BF61340649D8664D    tdetest


Step #2 -  Configure wallet

In order to restore my PDB (which is encrypted), I need the encryption keys for both my CDB and this pdb (TDEPDB1) that I identified in the previous step.

Using OKV, the process would be to:
  • Add my auxiliary database as an endpoint.
  • Create a new wallet for the auxiliary database.
  • Add the encryption keys for both the CDB and my PDB to the wallet.
  • Download the endpoint client install .jar file.
  • Create the directory structure and identify the WALLET_ROOT location.
  • Install the OKV jar file in WALLET_ROOT/okv
  • Create the autologin for OKV in WALLET_ROOT/tde by storing the password as a secret for client 'HSM_PASSWORD'
Using a local wallet file, the process would be to:
  • Export the encryption keys to a file using the "WITH IDENTIFIER IN" clause, filtering on the encryption keys for the CDB and PDB (sketched below).
  • Create the new local wallet file.
  • Import the encryption keys into the local wallet file, and make it autologin.
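
A hedged sketch of that export/import flow with local wallets (the paths, transport secret, and passwords are placeholders; the key IDs are the ones identified in step #1):

-- On the source database: export only the keys needed for the clone
ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS
  WITH SECRET "transport_secret"
  TO '/tmp/clonedb_keys.exp'
  IDENTIFIED BY "source_wallet_password"
  WITH IDENTIFIER IN ('<CDB key id>','<TDEPDB1 key id>');

-- On the auxiliary database: import them into the new local wallet
ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS
  WITH SECRET "transport_secret"
  FROM '/tmp/clonedb_keys.exp'
  IDENTIFIED BY "destination_wallet_password"
  WITH BACKUP;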

Step #3 -  Create init/spfile

Now we need to create the init file for the auxiliary database.  

The init file can be very small and only needs to contain a few entries.

NOTE: I am using WALLET_ROOT and TDE_CONFIGURATION. These need to be configured since my RMAN backup is encrypted. If you are not using OKV, then ensure the WALLET_ROOT is  pointing to the newly created local wallet file.

*.db_name='CLONEDB'
*.enable_pluggable_database=true
*.pga_aggregate_target=1567m
*.processes=320
*.sga_target=4700m
*.tde_configuration='KEYSTORE_CONFIGURATION=OKV|FILE'
*.wallet_root='/home/oracle/app/oracle/admin/clonedb/wallet/'


Step #4 -  Start up the database nomount

To make sure I am going to be able to successfully duplicate the database, I am going to start it up nomount.

sql > startup force nomount pfile='$ORACLE_HOME/dbs/initclonedb.ora';

Then I am going to make sure the wallet is automatically opened.

Type  WRL_PARAMETER                                        Status                          WALLET_TYPE  WALLET_OR  KEYSTORE  FULLY_BAC  CON_ID
----- ---------------------------------------------------- ------------------------------- ------------ ---------- --------- ---------- ------
FILE  /home/oracle/app/oracle/admin/clonedb/wallet//tde/   OPEN_NO_MASTER_KEY              AUTOLOGIN    SINGLE     NONE      UNDEFINED       1
OKV                                                        OPEN_UNKNOWN_MASTER_KEY_STATUS  OKV          SINGLE     NONE      UNDEFINED       1


And I'm going to verify that the encryption keys are available for the CDB and PDB.

PDB Name        TDE Master Encryption Key: MKID       Database Name
--------------- ------------------------------------- ------------------------------
$CDB/tdetest    06911C93A8DFF84F58BFA7B77E59285C6F    tdetest
$CDB/tdetest    064B6B6DD1A3F24F7BBF386DAA7940018F    tdetest


Step #5 -  Duplicate the pluggable database

Next I am going to execute the duplicate database command. Along with changing the location of the datafiles, I am also changing the settings for the WALLET_ROOT and TDE_CONFIGURATION.

rman  catalog rmancat/oracle@rmancat auxiliary / 

duplicate database tdetest to clonedb pluggable database tdepdb1 spfile
set control_files '/home/oracle/app/oracle/oradata/clonedb/CONTROLFILE/cf3.ctl'
set db_create_file_dest '/home/oracle/app/oracle/oradata/clonedb/'
set DB_FILE_NAME_CONVERT '/home/oracle/app/oracle/oradata/TDETEST','/home/oracle/app/oracle/oradata/clonedb'
set LOG_FILE_NAME_CONVERT '/home/oracle/app/oracle/oradata/TDETEST','/home/oracle/app/oracle/oradata/clonedb'
set wallet_root '/home/oracle/app/oracle/admin/clonedb/wallet/'
set tde_configuration='KEYSTORE_CONFIGURATION=OKV|FILE' ;


Below is the output from executing this.




Step #6 -  Rekey my encryption keys

I am going to execute the "SET KEY" to change the master key for my cloned copy.
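
A hedged sketch of those rekey commands (the tags and password placeholder are only illustrative):

-- In CDB$ROOT
ADMINISTER KEY MANAGEMENT SET KEY
  USING TAG 'CLONEDB_MASTERKEY'
  IDENTIFIED BY "okv_endpoint_password";

-- In the cloned PDB
ALTER SESSION SET CONTAINER = tdepdb1;
ADMINISTER KEY MANAGEMENT SET KEY
  USING TAG 'CLONEDB_TDEPDB1_MASTERKEY'
  IDENTIFIED BY "okv_endpoint_password"
  CONTAINER = CURRENT;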

Below are my keys; I have 2 keys for the CDB and 2 keys for the PDB. I can see from the activation time that my new keys are now active.

PDB Name        TDE Master Encryption Key: MKID       Database Name   activation time
--------------- ------------------------------------- --------------- -----------------------------------
$CDB/CLONEDB    06FB49082CF3D44FC0BFF085D24B4976FE    CLONEDB         12-APR-21 06.21.44.323844 PM +00:00
$CDB/tdetest    064B6B6DD1A3F24F7BBF386DAA7940018F    tdetest         06-APR-21 08.58.44.177146 PM +00:00
TDEPDB1         06911C93A8DFF84F58BFA7B77E59285C6F    tdetest         06-APR-21 08.59.10.493272 PM +00:00
TDEPDB1         06DF048E03AD1D4F3CBFCEC911312C036B    CLONEDB         12-APR-21 06.25.21.614324 PM +00:00

On OKV I added the 2 new keys to the wallet for my CloneDB


That's all there is to cloning a single PDB into a new CDB from Cloud Backup that was encrypted with OKV !








Configuring OKV automation using REST APIs


 This post will go through the process of creating a few simple scripts to automate OKV installation using the REST API capability of OKV.


Step #1 Configure RESTful Services and download client tool

First you need to configure the OKV server for RESTful Services. The instructions can be found here. This is done by navigating to the System tab and clicking on RESTful Services.


This brings up the window below.



 There are three things you want to do from this window.
  1. Click on the "Enable" box to enable RESTful services
  2. Download the okvrestcliepackage.zip, which contains the client utilities.
  3. Save this setting to enable RESTful services.
Now that we have this file, we need to download it to our client and start creating the scripts to automate this process.

I downloaded the zip file to my DB host to configure it. I unzipped it in /home/oracle/okv/rest

NOTE: you can also download it directly from the OKV hosts




Step #2 unzip and configure the client tool 


I unzipped the client tool into my home directory on a DB server so I can put together the automation scripts. In my case I unzipped it into /home/oracle/okv/rest. This creates 3 subdirectories. I am going to format the directory listing using the command shown below.
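
The command (most likely the same find/sed one-liner that appears again in the enrollment post) is:

find . | sed -e "s/[^-][^\/]*\// |/g" -e "s/|\([^ ]\)/|-\1/"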




Below is what the output looks like

.
|-lib
| |-okvrestcli.jar
|-bin
| |-okv.bat
| |-okv
|-conf
| |-okvrestcli.ini
| |-okvrestcli_logging.properties


Step #3 - Set the environment for the CLI

In order to configure OKV, I am going to need some variables set in my environment. I can do this manually, but in my case I decided to create a "setenv.sh" script that will set the variables and add the OKV script to my path.  The 2 main variables I will be using are:

OKV_RESTCLI_HOME - Location of the scripts that I am going to be installing. If I source the setenv.sh script, it will set the home to this location.

OKV_RESTCLI_CONFIG - Name of the configuration file that contains the rest CLI configuration.
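
Below is a minimal sketch of what my setenv.sh does (treat this as an approximation; the echo text matches the output shown later, but the details of my actual script may differ):

#!/bin/bash
# setenv.sh - source this file from the install directory:  . ./setenv.sh

echo " "
echo "create environment variables OKV_RESTCLI_HOME and OKC_RESTCLI_CONFIG"
echo " "

# Location of the REST CLI install (the directory this script lives in)
export OKV_RESTCLI_HOME=$(pwd)
export OKV_RESTCLI_CONFIG=$OKV_RESTCLI_HOME/conf/okvrestcli.ini

echo "\$OKV_RESTCLI_HOME : $OKV_RESTCLI_HOME"
echo "\$OKV_RESTCLI_CONFIG : $OKV_RESTCLI_CONFIG"
echo " "
echo "Adding \$OKV_RESTCLI_BIN to the \$PATH"

# Add the REST CLI bin directory to the PATH
export OKV_RESTCLI_BIN=$OKV_RESTCLI_HOME/bin
export PATH=$OKV_RESTCLI_BIN:$PATH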





Step #4 - Set initialization parameters in okvrestcli.ini file


Next, I am going to configure the initialization parameters. These are found in the okvrestcli.ini file.
You can see that the file contains a "[Default]" profile and a few other example profiles. We will start with the default profile. In this we are going to set a few of the properties.

LOG_PROPERTY - Location of the logging properties. Default location is ./conf directory.

SERVER - IP address (or DNS) of one or more OKV hosts 

 OKV_CLIENT_CONFIG - location of the config file. Default location is ./conf directory

USER - OKV user that has authority to administer endpoints and wallets.

PASSWORD - Password for the user, or location of wallet containing the password. I am NOT going to use this as I am going to use a wallet file.

 CLIENT_WALLET - I am going to use a wallet to store the password, and this is the location of the wallet file. I will be creating the autologin wallet later.

 

Below is what my "[Default]" configuration file looks like after my changes. I am going to use the environmental variables I set in the setenv.sh script. 

NOTE: I am choosing to store my password in wallet rather than clear text in the .ini file.

 

[Default]
log_property=$OKV_RESTCLI_HOME/conf/okvrestcli_logging.properties
server=10.0.0.150
okv_client_config=$OKV_RESTCLI_HOME/conf/okvclient.ora
user=bgrenn
client_wallet=$OKV_RESTCLI_HOME/conf



Step #5 - Change the okv script to use the variables


Since I chose to use variables (OKV_RESTCLI_HOME) I am changing the OKV script to use those variables
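
The change itself is minor; a hypothetical sketch of the end result (assuming the stock wrapper simply launches the REST CLI jar) looks like this:

#!/bin/bash
# okv - wrapper for the OKV RESTful services CLI
# changed to rely on the variables set by setenv.sh instead of hard-coded paths

if [ -z "$OKV_RESTCLI_HOME" ]; then
   echo "OKV_RESTCLI_HOME is not set - source setenv.sh first"
   exit 1
fi

export OKV_RESTCLI_CONFIG=${OKV_RESTCLI_CONFIG:-$OKV_RESTCLI_HOME/conf/okvrestcli.ini}

# pass all arguments straight through to the REST CLI jar
java -jar "$OKV_RESTCLI_HOME/lib/okvrestcli.jar" "$@"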





Step #6 Create the wallet to save the password encrypted

Since I chose to put my password in a wallet, I now need to create that wallet. Using the instructions in the document (linked to at the beginning of this blog), I execute the command from the directory I installed into (/home/oracle/okv/rest)

cd /home/oracle/okv/rest
. ./setenv.sh


create environment variables OKV_RESTCLI_HOME and OKC_RESTCLI_CONFIG

$OKV_RESTCLI_HOME : /home/oracle/okv/rest
$OKV_RESTCLI_CONFIG : /home/oracle/okv/rest/conf/okvrestcli.ini

Adding $OKV_RESTCLI_BIN to the $PATH




okv admin client-wallet add --client-wallet $OKV_RESTCLI_HOME/conf --wallet-user bgrenn
Password: {my password}
{
"result" : "Success"
}

Step #7 Create the run-me.sh script


The last step is to create the script that will be executed on the host to create the provisioning script.  In my script, I took the default and did some checking. This script will:
  • Ensure the variable OKV_RESTCLI_HOME is set before it can be executed.
  • Determine the DB_NAME from the $ORACLE_BASE/diag/rdbms/*/$ORACLE_SID directory. Solving for the  * should give us the DB_NAME
  • While executing, it tells you what it believes the DB_NAME is, and gives you a chance to change it if incorrect.
  • It will validate if the wallet exists by accessing OKV. If the wallet already exists, it does not try to create it again.
  • It will install the client software in $ORACLE_BASE/admin/$DBNAME/wallet/okv
Below is the script I am using.
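
A simplified sketch of that script follows (the wallet-existence check from the list above is omitted, the endpoint naming and description details are approximations, and the commands it generates are the ones shown in the next post):

#!/bin/bash
# run-me.sh - generate the OKV enrollment script (okv-ep.sh) for this database

# make sure the environment was sourced first
if [ -z "$OKV_RESTCLI_HOME" ]; then
   echo "OKV_RESTCLI_HOME is not set - source setenv.sh first"
   exit 1
fi
echo "executing script with \$OKV_RESTCLI_HOME=$OKV_RESTCLI_HOME"

# derive the DB name from $ORACLE_BASE/diag/rdbms/<dbname>/$ORACLE_SID
DBNAME=$(basename $(dirname $ORACLE_BASE/diag/rdbms/*/${ORACLE_SID}))
echo "DB Name is identified as $DBNAME and ORACLE_SID is set to $ORACLE_SID"
echo "Press enter to keep this default [$DBNAME], or enter the DB Name"
read -p "DB Name [enter for Default] : " INPUT
DBNAME=${INPUT:-$DBNAME}
echo "Using DB Name : $DBNAME"

WALLET=$(echo $DBNAME | tr '[:lower:]' '[:upper:]')
ENDPOINT="$(echo $ORACLE_SID | tr '[:lower:]' '[:upper:]')_on_$(hostname -s)"
OKV_DIR=$ORACLE_BASE/admin/$DBNAME/wallet/okv

# write the enrollment script using the REST CLI commands
cat > okv-ep.sh <<EOF
#!/bin/bash
mkdir -pv $ORACLE_BASE/admin/$DBNAME/wallet
mkdir -pv $OKV_DIR
okv manage-access wallet create --wallet $WALLET --description "wallet for database $WALLET" --unique FALSE
okv admin endpoint create --endpoint $ENDPOINT --description "$(hostname -s), $(hostname -i)" --type ORACLE_DB --platform LINUX64 --unique FALSE
okv manage-access wallet set-default --wallet $WALLET --endpoint $ENDPOINT
expect << _EOF
set timeout 120
spawn okv admin endpoint provision --endpoint $ENDPOINT --location $OKV_DIR --auto-login FALSE
expect "Enter Oracle Key Vault endpoint password: "
send "change-on-install\r"
expect eof
_EOF
EOF
chmod +x okv-ep.sh
cat okv-ep.sh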






Step #8 Zip it all up and place it in a location to be downloaded

Below are the scripts that will be part of the zip file.

.
|-lib
| |-okvrestcli.jar
|-bin
| |-okv.bat
| |-okv
|-conf
| |-okvrestcli_logging.properties
| |-ewallet.p12.lck
| |-ewallet.p12
| |-cwallet.sso.lck
| |-cwallet.sso
| |-okvrestcli.ini
|-setenv.sh
|-run-me.sh


Now I am ready to download this zip file to my Database Host and enroll a database.

NOTE: To change the script to work with another OKV host, I only had to make 3 changes:
  • Update the okvrestcli.ini file with the OKV host IP
  • Update the okvrestcli.ini file with the user
  • recreate the wallet file that contains the password for the OKV user

Enrolling my ExaCC RAC database using REST APIs


This post will continue the process of automating the enrollment of my RAC database using the REST API and some automation scripts. The steps to create the scripts are in my previous post.


The first step is to download the zip file I created in the previous post. I downloaded it onto the first DB host in my RAC cluster.  I unzipped it into /home/oracle/okv.

Below is what I am starting with.

.
|-lib
| |-okvrestcli.jar
|-bin
|-conf
| |-okvrestcli_logging.properties
| |-okvrestcli.ini
| |-ewallet.p12.lck
| |-ewallet.p12
| |-cwallet.sso.lck
| |-cwallet.sso
| |-okvclient.ora
|-setenv.sh
|-run-me.sh

STEP #1 - Set the environment

First I am going to set my environment to the database instance I want to configure (jckey1), and then I am going to source the environment for my OKV install.


[oracle@exacc1]$ cd /home/oracle/okv
[oracle@exacc1]$ . oraenv
ORACLE_SID = [jckey1] ? jckey1
The Oracle base remains unchanged with value /u02/app/oracle
[oracle@exacc1]$ . ./setenv.sh


create environment variables OKV_RESTCLI_HOME and OKC_RESTCLI_CONFIG

$OKV_RESTCLI_HOME : /home/oracle/okv
$OKV_RESTCLI_CONFIG : /home/oracle/okv/conf/okvrestcli.ini

Adding $OKV_RESTCLI_BIN to the $PATH


STEP #2 - Execute the enrollment creation script

The next step is to execute the run-me.sh that I created in the previous post. This will create the enrollment script. At the end of the output you will see the script it creates (okv-ep.sh).

NOTE: It will default to my DBNAME for the wallet name.

[oracle@exacc1]$ ./run-me.sh

executing script with $OKV_RESTCLI_HOME=/home/oracle/okv


DB Name is identified as jckey and ORACLE_SID is set to jckey1 setting

Press enter to keep this default [jckey], or enter the DB Name
DB Name [enter for Default] :

Using DB Name : jckey

#!/bin/bash
mkdir -pv /u02/app/oracle/admin/jckey/wallet
mkdir -pv /u02/app/oracle/admin/jckey/wallet/okv
okv manage-access wallet create --wallet JCKEY --description "wallet for database JCKEY" --unique FALSE
okv admin endpoint create --endpoint JCKEY1_on_exacc1 --description "exacc11, 10.136.106.36" --type ORACLE_DB --platform LINUX64 --unique FALSE
okv manage-access wallet set-default --wallet JCKEY --endpoint JCKEY1_on_exacc1
expect << _EOF
set timeout 120
spawn okv admin endpoint provision --endpoint JCKEY1_on_exacc1 --location /u02/app/oracle/admin/jckey/wallet/okv --auto-login FALSE
expect "Enter Oracle Key Vault endpoint password: "
send "change-on-install\r"
expect eof
_EOF


STEP #2 - Execute the enrollment script

[oracle@exacc1]$ ./okv-ep.sh
{
"result" : "Success"
}
{
"result" : "Success"
}
{
"result" : "Success"
}
spawn okv admin endpoint provision --endpoint JCKEY1_on_exacc1 --location /u02/app/oracle/admin/jckey/wallet/okv --auto-login FALSE
Enter Oracle Key Vault endpoint password:
{
"result" : "Success",
"value" : {
"javaHome" : "/u02/app/oracle/product/19.0.0.0/dbhome_8/jdk"
}
}


STEP #3 - We can verify what the enrollment script did

 

I am first going to look under $ORACLE_BASE/admin/$DBNAME/wallet where it placed the okv client.
[oracle@exacc1]$ pwd
/u02/app/oracle/admin/jckey/wallet
[oracle@exacc1]$ find . | sed -e "s/[^-][^\/]*\// |/g" -e "s/|\([^ ]\)/|-\1/"
.
|-okv
| |-bin
| | |-okveps.x64
| | |-okvutil
| | |-root.sh
| |-ssl
| | |-ewallet.p12
| |-csdk
| | |-lib
| | | |-liborasdk.so
| |-jlib
| | |-okvutil.jar
| |-conf
| | |-okvclient.ora
| | |-logging.properties
| | |-okvclient.lck
| |-lib
| | |-liborapkcs.so
| |-log
| | |-okvutil.deploy.log



Now I am going to verify in OKV and I can see the wallet got created for my database.

And I am going to look at the endpoint, and verify the default wallet is set.


STEP #4 Execute root.sh (only if this is the first install on this host).


I execute the root.sh script in the /bin directory as root.

[root@exacc1]# ./root.sh
Creating directory: /opt/oracle/extapi/64/hsm/oracle/1.0.0/
Copying PKCS library to /opt/oracle/extapi/64/hsm/oracle/1.0.0/
Setting PKCS library file permissions
Installation successful.


STEP #5 - Verify we can contact the OKV server


The next step is to execute the okvutil list command to verify we can contact the OKV host, and that the default wallet is configured.

[oracle@exacc1]$ ./okvutil list
Enter Oracle Key Vault endpoint password:
Unique ID Type Identifier
9E8BD892-D799-44B7-8289-94447E7ACC54 Template Default template for JCKEY1_ON_ECC5C2N1

STEP #6 - change the OKV endpoint password 

[oracle@exacc1]$ /u02/app/oracle/admin/jckey/wallet/okv/bin/okvutil changepwd -t wallet -l /u02/app/oracle/admin/jckey/wallet/okv/ssl/
Enter wallet password: change-on-install
Enter new wallet password: {my new password}
Confirm new wallet password: {my new password}
Wallet password changed successfully

STEP #7 Install the client and change the password on all nodes.


I followed the steps above on the other 3 nodes to install the client and change the password.

STEP #8 Upload the keys from the wallet file.

I uploaded the keys from the shared wallet files on ACFS.
[oracle@exacc1]$ /u02/app/oracle/admin/jckey/wallet/okv/bin/okvutil upload -t wallet -l /var/opt/oracle/dbaas_acfs/jckey/wallet_root/tde -v 2 -g JCKEY
okvutil version 21.1.0.0.0
Endpoint type: Oracle Database
Configuration file: /u02/app/oracle/admin/jckey/wallet/okv/conf/okvclient.ora
Server: 10.136.102.243:5696
Standby Servers:
Uploading from /acfs01/dbaas_acfs/jckey/wallet_root/tde
Enter source wallet password:
Enter Oracle Key Vault endpoint password:
ORACLE.SECURITY.DB.ENCRYPTION.Ab8Sv6Ezs08fv9Sy7/zZB8oAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.KM.ENCRYPTION.Ab8Sv6Ezs08fv9Sy7/zZB8oAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.KB.ENCRYPTION.
ORACLE.SECURITY.ID.ENCRYPTION.
ORACLE.SECURITY.KM.ENCRYPTION.ATQdCFHhVk9Yv7er6uZtDf8AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.ATQdCFHhVk9Yv7er6uZtDf8AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY.BFF45EC14E46013BE053246A880A5564
ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY

Uploaded 2 TDE keys
Uploaded 0 SEPS entries
Uploaded 0 other secrets
Uploaded 4 opaque objects

Uploading private persona
Uploading certificate request
Uploading trust points

Uploaded 1 private keys
Uploaded 1 certificate requests
Uploaded 0 user certificates
Uploaded 0 trust points

Upload succeeded

STEP #9 Copy current wallet, and add OKV credentials.

Now you copy the current wallet files (from the ACFS location) to the tde directory next to the OKV install.
 In my case, since my OKV client is installed in $ORACLE_BASE/admin/jckey/wallet (which will be the WALLET_ROOT), the tde directory will be the file location for wallets.
I am also adding my password credentials to the local wallet.

NOTE: "OKV_PASSWORD" is used to open the wallet. "HSM_PASSWORD" is used to access the OKV server(s).


mkdir /u02/app/oracle/admin/jckey/wallet/tde_seps
mkdir /u02/app/oracle/admin/jckey/wallet/tde
cp /var/opt/oracle/dbaas_acfs/jckey/wallet_root/tde/* /u02/app/oracle/admin/jckey/wallet/tde/.
ADMINISTER KEY MANAGEMENT ADD SECRET 'Welcome1+' FOR CLIENT 'OKV_PASSWORD' TO LOCAL AUTO_LOGIN KEYSTORE '/u02/app/oracle/admin/jckey/wallet/tde_seps';
ADMINISTER KEY MANAGEMENT ADD SECRET 'Welcome1+' FOR CLIENT 'HSM_PASSWORD' TO AUTO_LOGIN KEYSTORE '/u02/app/oracle/admin/jckey/wallet/tde';


STEP # 10 Change the WALLET_ROOT

Since WALLET_ROOT can only be changed with a restart, I am going to shut down all instances in the cluster and perform the next few steps on the first node only.

SQL> alter system set WALLET_ROOT='/u02/app/oracle/admin/jckey/wallet' scope=spfile;

System altered.

SQL> shutdown immediate
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.

SQL> startup mount;

SQL> alter system set tde_configuration='KEYSTORE_CONFIGURATION=OKV|FILE' scope=both;

select b.name pdb_name,
       wrl_type,
       wrl_parameter,
       status,
       wallet_type,
       keystore_mode,
       fully_backed_up
from v$encryption_wallet a, v$containers b
where a.con_id = b.con_id(+);

PDB Name   Type  WRL_PARAMETER                            Status              WALLET_TYPE  KEYSTORE  Backed Up
---------- ----- ---------------------------------------- ------------------- ------------ --------- ----------
CDB$ROOT   FILE  /u02/app/oracle/admin/jckey/wallet/tde/  OPEN                AUTOLOGIN    NONE      YES
CDB$ROOT   OKV                                            OPEN_NO_MASTER_KEY  OKV          NONE      UNDEFINED
PDB$SEED   FILE                                           OPEN                AUTOLOGIN    UNITED    YES
PDB$SEED   OKV                                            OPEN_NO_MASTER_KEY  OKV          UNITED    UNDEFINED
JCKPDB     FILE                                           OPEN                AUTOLOGIN    UNITED    YES
JCKPDB     OKV                                            OPEN_NO_MASTER_KEY  OKV          UNITED    UNDEFINED



STEP # 11 Combine the local wallet File and OKV. 

  Next I need to migrate the keys using the local wallet. Note this will rekey the database.

ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY "-okv key" MIGRATE USING "-local wallet key-" WITH BACKUP;

STEP # 12 restart the instance and make sure the wallets open.


PDB Name   Type  WRL_PARAMETER                            Status              WALLET_TYPE  KEYSTORE  Backed Up
---------- ----- ---------------------------------------- ------------------- ------------ --------- ----------
CDB$ROOT   FILE  /u02/app/oracle/admin/jckey/wallet/tde/  OPEN                AUTOLOGIN    NONE      YES
CDB$ROOT   OKV                                            OPEN                OKV          NONE      UNDEFINED
PDB$SEED   FILE                                           OPEN                AUTOLOGIN    UNITED    YES
PDB$SEED   OKV                                            OPEN                OKV          UNITED    UNDEFINED
JCKPDB     FILE                                           OPEN                AUTOLOGIN    UNITED    YES
JCKPDB     OKV                                            OPEN                OKV          UNITED    UNDEFINED


STEP # 13 rebuild the local wallet with the password

I deleted the original wallet files from the "tde" and "tde_seps" directories and recreated them using the exact same steps from step #9.
I then executed the same commands to create the wallets on all the nodes in the cluster, in the same location.

STEP # 14 - Bounce the database.

I bounced the database and made sure the wallet was open on all 4 nodes. Done.



INST_ID  PDB Name   Type  WRL_PARAMETER                            Status              WALLET_TYPE  KEYSTORE  Backed Up
-------- ---------- ----- ---------------------------------------- ------------------- ------------ --------- ----------
       1 CDB$ROOT   OKV                                            OPEN                OKV          NONE      UNDEFINED
       2 CDB$ROOT   OKV                                            OPEN                OKV          NONE      UNDEFINED
       3 CDB$ROOT   OKV                                            OPEN                OKV          NONE      UNDEFINED
       4 CDB$ROOT   OKV                                            OPEN                OKV          NONE      UNDEFINED
       1 PDB$SEED   OKV                                            OPEN                OKV          UNITED    UNDEFINED
       2 PDB$SEED   OKV                                            OPEN                OKV          UNITED    UNDEFINED
       3 PDB$SEED   OKV                                            OPEN                OKV          UNITED    UNDEFINED
       4 PDB$SEED   OKV                                            OPEN                OKV          UNITED    UNDEFINED
       1 JCKPDB     OKV                                            OPEN                OKV          UNITED    UNDEFINED
       2 JCKPDB     OKV                                            OPEN                OKV          UNITED    UNDEFINED
       3 JCKPDB     OKV                                            OPEN                OKV          UNITED    UNDEFINED
       4 JCKPDB     OKV                                            OPEN                OKV          UNITED    UNDEFINED
       1 PDB$SEED   FILE                                           OPEN_NO_MASTER_KEY  AUTOLOGIN    UNITED    UNDEFINED
       1 CDB$ROOT   FILE  /u02/app/oracle/admin/jckey/wallet/tde/  OPEN_NO_MASTER_KEY  AUTOLOGIN    NONE      UNDEFINED
       2 CDB$ROOT   FILE  /u02/app/oracle/admin/jckey/wallet/tde/  OPEN_NO_MASTER_KEY  AUTOLOGIN    NONE      UNDEFINED
       3 CDB$ROOT   FILE  /u02/app/oracle/admin/jckey/wallet/tde/  OPEN_NO_MASTER_KEY  AUTOLOGIN    NONE      UNDEFINED
       4 CDB$ROOT   FILE  /u02/app/oracle/admin/jckey/wallet/tde/  OPEN_NO_MASTER_KEY  AUTOLOGIN    NONE      UNDEFINED
       1 PDB$SEED   FILE                                           OPEN_NO_MASTER_KEY  AUTOLOGIN    UNITED    UNDEFINED
       2 PDB$SEED   FILE                                           OPEN_NO_MASTER_KEY  AUTOLOGIN    UNITED    UNDEFINED
       3 PDB$SEED   FILE                                           OPEN_NO_MASTER_KEY  AUTOLOGIN    UNITED    UNDEFINED
       4 PDB$SEED   FILE                                           OPEN_NO_MASTER_KEY  AUTOLOGIN    UNITED    UNDEFINED
       1 JCKPDB     FILE                                           OPEN_NO_MASTER_KEY  AUTOLOGIN    UNITED    UNDEFINED
       2 JCKPDB     FILE                                           OPEN_NO_MASTER_KEY  AUTOLOGIN    UNITED    UNDEFINED
       3 JCKPDB     FILE                                           OPEN_NO_MASTER_KEY  AUTOLOGIN    UNITED    UNDEFINED
       4 JCKPDB     FILE                                           OPEN_NO_MASTER_KEY  AUTOLOGIN    UNITED    UNDEFINED


That's all there is to it. I now have my ExaCC database configured to use OKV as the key store, with autologin into the wallet on all instances!

Configuring ExaCC backups of an Oracle Database


This post covers how to configure your backups of an ExaCC database beyond the web interface. 


First off, the documentation can be found below; you can also use the "--help" option at the command line with "bkup_api".

Configuration - https://docs.oracle.com/en/cloud/cloud-at-customer/exadata-cloud-at-customer/exacc/customize-backup-configuration-using-bkup_api.html

Backup execution - https://docs.oracle.com/en/cloud/cloud-at-customer/exadata-cloud-at-customer/exacc/create-demand-backup.html#GUID-2370EA04-3141-4D02-B328-5EE9A10F66F2



    Step #1 - Register my database with my RMAN catalog

    If you have been a DBA for a long time (like me), you realize the value of an RMAN centralized catalog, and you are already using one.  I am going to start by taking my ExaCC database and registering it with my RMAN catalog.

    My database name is DBSG2.  In order to register I am going to make a few changes on the host to make the database easier to manage.

    • I am going to add my SID to the /etc/oratab file on all hosts. The /etc/oratab contains an entry for the DB name, but not the SID that is running on this host. I am going to add the SID entry to allow me to use "oraenv" to set my environment.
    • I am also going to add the RMAN catalog to the tnsnames.ora on all the hosts. This will allow me to access the RMAN catalog from any host.
    Now I am going to log into the RMAN catalog and my database to register the database.

    rman target / catalog rco/#####@rmanpdb
    > register database;

    Step #2 - Configure backup settings in ExaCC

    The next step is to configure my database to be backed up using the tooling. This is pretty straightforward. I click on the "edit backup" button, fill in the information for my database, and save it.  In my case I am using ZFS, and I need to make sure that I change my container to the container where the ZFS is configured.

    NOTE : The backup strategy is a Weekly L0 (full) backup every Sunday, and a daily L1 (differential incremental backup) on all other days. The time the backup is scheduled can be found in either the backup settings, or by looking at the crontab file.



    Then I just wait until I see complete. If I click on the work requests, I can see the progress until it's finished.



    Step #3 - Update the settings to use my RMAN catalog.

    First I need to get what the current settings are for my database (dbsg2) and save them in a config file so I can update them.

    I log into the first node, and su to root.
    Once there I execute "get config --all" and save all the settings to a file that I can update.

    NOTE : I am creating a new file under the bkup_api/cfg directory to make it easy to find.

    $ sudo su -
    Last login: Thu May 6 11:43:46 PDT 2021 on pts/0
    [root@ecc ~]## /var/opt/oracle/bkup_api/bkup_api get config --all --file=/var/opt/oracle/bkup_api/cfg/dbsg2.cfg --dbname dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : get_config
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_92303612_20210506125612.006275.log
    File /var/opt/oracle/bkup_api/cfg/dbsg2.cfg created


    Now I am going to edit it and make some changes.

    I changed the RMAN catalog settings to use my catalog.
    NOTE: The entry has to be the connect string, not a tnsnames.ora entry.

    #### This section is applicable when using a rman catalog ####
    # Enables RMAN catalog. Can be set to yes or no.
    bkup_use_rcat=yes

    ## Below parameters are required if rman catalog is enabled
    # RMAN catalog user
    bkup_rcat_user=rco


    # RMAN catalog password
    #bkup_rcat_passwd=RMan19c#_

    # RMAN catalog conn string
    bkup_rcat_conn=ecc-scan.bgrenn.com:1521:rmanpdb.bgrenn.com



    Now I am going to commit (set) the changes using the "set config" command
    # /var/opt/oracle/bkup_api/bkup_api set config --file=/var/opt/oracle/bkup_api/cfg/dbsg2.cfg --dbname dbsg2 
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : set_config
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_b800281f_20210506130824.084259.log
    cfgfile : /var/opt/oracle/bkup_api/cfg/dbsg2.cfg
    Using configuration file: /var/opt/oracle/bkup_api/cfg/dbsg2.cfg
    API::Parameters validated.
    UUID d0845ea0aea611eb98fb52540068a695 for this set_config(configure-backup)
    ** process started with PID: 86143
    ** see log file for monitor progress
    -------------------------------------



    And after a few minutes, I am going to check and make sure it was successful by using the configure_status command


    /var/opt/oracle/bkup_api/bkup_api configure_status --dbname dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : configure_status
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_fa81558e_20210507060019.504831.log
    * Last registered operation: 2021-05-07 12:58:41 UTC
    * Configure backup status: finished
    **************************************************
    * API History: API steps
    API:: NEW PROCESS 120531
    *
    * RETURN CODE:0
    ##################################################

    Everything looks good !

    Step #4 - Take a manual backup

    Now, logged in as OPC and becoming root, I can run a special backup using bkup_api.


    # /var/opt/oracle/bkup_api/bkup_api bkup_start --dbname=dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : bkup_start
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_9458c30f_20210510084341.430481.log
    UUID 7f6622f8b1a611eb865552540068a695 for this backup
    ** process started with PID: 336757
    ** see log file for monitor progress
    -------------------------------------


    I can see the status while it's running

    /var/opt/oracle/bkup_api/bkup_api bkup_status --dbname=dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : bkup_status
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_46545e6f_20210510084812.014419.log
    (' Warning: unable to get current configuration of:', 'catalog')
    * Current backup settings:
    * Last registered Bkup: 05-10 15:44 UTC API::336757:: Starting dbaas backup process
    * Bkup state: running
    **************************************************
    * API History: API steps
    API:: NEW PROCESS 336757
    API:: Starting dbaas backup process
    *
    * RETURN CODE:0
    ##################################################


    After waiting a few minutes, I can see it was successful.


    # /var/opt/oracle/bkup_api/bkup_api bkup_status --dbname=dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : bkup_status
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_8acd03e3_20210510085129.207757.log
    (' Warning: unable to get current configuration of:', 'catalog')
    * Current backup settings:
    * Last registered Bkup: 05-10 15:44 UTC API::336757:: Starting dbaas backup process
    * Bkup state: running
    **************************************************
    * API History: API steps
    API:: NEW PROCESS 336757
    API:: Starting dbaas backup process
    *************************************************
    * Backup steps
    -> 2021-05-10 08:44:20.651787 - API:: invoked with args : -dbname=dbsg2 -uuid=7f6622f8b1a611eb865552540068a695 -level1
    -> 2021-05-10 08:44:23.458698 - API:: Wallet is in open AUTOLOGIN state
    -> 2021-05-10 08:44:24.204793 - API:: Oracle database state is up and running
    -> 2021-05-10 08:44:25.686134 - API:: CATALOG SETTINGS
    -> 2021-05-10 08:45:19.767284 - API:: DB instance: dbsg2
    -> 2021-05-10 08:45:19.767424 - API:: Validating the backup repository ......
    -> 2021-05-10 08:46:38.263401 - API:: All backup pieces are ok
    -> 2021-05-10 08:46:38.263584 - API:: Validating the TDE wallet ......
    -> 2021-05-10 08:46:41.842706 - API:: TDE check successful.
    -> 2021-05-10 08:46:42.446560 - API:: Performing incremental backup to shared storage
    -> 2021-05-10 08:46:42.448228 - API:: Executing rman instructions
    -> 2021-05-10 08:49:21.161884 - API:: ....... OK
    -> 2021-05-10 08:49:21.162089 - API:: Incremental backup to shared storage is Completed
    -> 2021-05-10 08:49:21.163822 - API:: Starting backup of config files
    -> 2021-05-10 08:49:21.699197 - API:: Determining the oracle database id
    -> 2021-05-10 08:49:21.726308 - API:: DBID: 2005517379
    -> 2021-05-10 08:49:22.040891 - API:: Creating directories to store config files
    -> 2021-05-10 08:49:22.085476 - API:: Enabling RAC exclusions for config files.
    -> 2021-05-10 08:49:22.114211 - API:: Compressing config files into tar files
    -> 2021-05-10 08:49:22.173842 - API:: Uploading config files to NFS location
    -> 2021-05-10 08:49:22.222493 - API:: Removing temporary location /var/opt/oracle/log/dbsg2/obkup/7f6622f8b1a611eb865552540068a695.
    -> 2021-05-10 08:49:22.224071 - API:: Config files backup ended successfully
    -> 2021-05-10 08:49:26.052494 - API:: All requested tasks are completed
    *
    * RETURN CODE:0
    ##################################################






    Step #5 - Check my periodic backups


    Now it's been a few days (I started on Thursday and it's now Monday).
    I am going to check on the incremental backups, and the archive log backups.

    There are 2 ways I can do this.

    Using the bkup_api command to list the backups that have run.

    # /var/opt/oracle/bkup_api/bkup_api list --dbname=dbsg2
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : list
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_eddcd4e1_20210510064145.497707.log
    -> Listing all backups
    Backup Tag Completion Date (UTC) Type keep
    ---------------------- ----------------------- ----------- --------
    TAG20210506T123203 05/06/2021 19:32:03 full False
    TAG20210506T131438 05/06/2021 20:14:38 incremental False
    TAG20210507T012240 05/07/2021 08:22:40 incremental False
    TAG20210508T012315 05/08/2021 08:23:15 incremental False
    TAG20210509T012438 05/09/2021 08:24:38 full False
    TAG20210510T012322 05/10/2021 08:23:22 incremental False


    Using the RMAN catalog
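
    The listing below came from a query against the recovery catalog; the post doesn't show the query itself, so here is a minimal sketch of one way to produce a similar listing (my own illustration, assuming the standard RC_DATABASE, RC_BACKUP_SET, and RC_BACKUP_PIECE catalog views; it does not distinguish cumulative from differential incrementals).

    select case
             when bs.incremental_level = 0 then 'Full L0'
             when bs.incremental_level = 1 then 'Differential L1'
             else 'Other'
           end                                                 "Backup Type",
           bp.encrypted                                        "Encrypted",
           bp.tag                                              "Tag",
           bp.handle                                           "Backup Piece",
           to_char(bp.completion_time,'mm/dd/yy hh24:mi:ss')   "Backup Time",
           to_char(bp.completion_time,'DAY')                   "Day Of Week"
    from   rc_database d
           join rc_backup_set   bs on bs.db_key = d.db_key
           join rc_backup_piece bp on bp.db_key = bs.db_key
                                  and bp.bs_key = bs.bs_key
    where  d.name = 'DBSG2'
    order  by bp.completion_time;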

    Backup Type         Encrypted Tag                                Backup Piece                                                 Backup Time           Day Of Week
    -------------------- --------- --------------------------------- ------------------------------------------------------------ -------------------- --------------------
    Full L0 YES DBAAS_FULL_BACKUP20210506122626 /backup/dbaas_bkup_DBSG2_2005517379_0dvu5rp2_13_1 05/06/21 12:29:32 THURSDAY
    Differential L1 YES DBAAS_INCR_BACKUP20210506131110 /backup/dbaas_bkup_DBSG2_2005517379_2avu5ud1_74_1 05/06/21 13:14:18 THURSDAY
    Differential L1 YES DBAAS_INCR_BACKUP20210507011926 /backup/dbaas_bkup_DBSG2_2005517379_72vu792b_226_1 05/07/21 01:22:27 FRIDAY
    Differential L1 YES DBAAS_INCR_BACKUP20210508011939 /backup/dbaas_bkup_DBSG2_2005517379_lbvu9tf3_683_1 05/08/21 01:22:51 SATURDAY
    Full L0 YES DBAAS_FULL_BACKUP20210509011940 /backup/dbaas_bkup_DBSG2_2005517379_u3vuchr8_963_1 05/09/21 01:22:59 SUNDAY
    Differential L1 YES DBAAS_INCR_BACKUP20210510011940 /backup/dbaas_bkup_DBSG2_2005517379_6rvuf672_1243_1 05/10/21 01:22:49 MONDAY



    NOTE: I can see that a periodic L1 (differential) is executed at 1:22 AM every day except Sunday, when a full L0 backup is executed instead.

    Now to look at archive log backups -- I am going to show a subset.

    Again I can use the bkup_api "list_jobs" command and see all the backup jobs that have been run (which include archive logs).


    # /var/opt/oracle/bkup_api/bkup_api list_jobs --dbname dbsg2 | more
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : list_jobs
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_b2532724_20210510070545.552300.log
    UUID | DATE | STATUS | TAG | ACTION
    e7ad1ef6aea011eb9c8252540068a695 | 2021-05-06 19:26:23 | success | TAG20210506T123203 | create-backup-full
    03616d68aea211eba5aa52540068a695 | 2021-05-06 19:34:12 | success | TAG20210506T123516 | archivelog-backup
    33fae162aea611eba0ed52540068a695 | 2021-05-06 20:04:12 | success | TAG20210506T130518 | archivelog-backup
    267c21daaea711eb9d3852540068a695 | 2021-05-06 20:11:07 | success | TAG20210506T131438 | create-backup-incremental
    650fd222aeaa11ebb58652540068a695 | 2021-05-06 20:34:12 | success | TAG20210506T133516 | archivelog-backup
    961831e4aeae11ebb0d452540068a695 | 2021-05-06 21:04:11 | success | TAG20210506T140517 | archivelog-backup
    c6919f28aeb211eb957e52540068a695 | 2021-05-06 21:34:12 | success | TAG20210506T143518 | archivelog-backup
    f7ce0d0caeb611eb97c552540068a695 | 2021-05-06 22:04:12 | success | TAG20210506T150522 | archivelog-backup
    286e8ea6aebb11eb864c52540068a695 | 2021-05-06 22:34:11 | success | TAG20210506T153516 | archivelog-backup
    598f77eeaebf11eb92c052540068a695 | 2021-05-06 23:04:11 | success | TAG20210506T160518 | archivelog-backup
    89f4919aaec311eb9a9452540068a695 | 2021-05-06 23:34:11 | success | TAG20210506T163516 | archivelog-backup
    bb5ba95eaec711ebb1ed52540068a695 | 2021-05-07 00:04:11 | success | TAG20210506T170518 | archivelog-backup


    Step #6 - On demand backups 

    Now that I have my database configured, I am going to demonstrate some of the options you can add to your backup.

    I am going to create a keep backup and give it a tag using bkup_start

    $ /var/opt/oracle/bkup_api/bkup_api bkup_start --dbname=dbsg2 --keep --tag=Maymonthlybackup
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : bkup_start
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_7d923417_20210507113940.052080.log
    UUID 958a58beaf6311eba98a52540068a695 for this backup
    ** process started with PID: 262102
    ** see log file for monitor progress
    -------------------------------------


    Now to list it.

    $ /var/opt/oracle/bkup_api/bkup_api list --dbname dbsg2 --keep
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : list
    -> logfile: /var/opt/oracle/log/dbsg2/bkup_api_log/bkup_api_19714a18_20210507114254.007083.log
    -> Listing all backups
    Backup Tag Completion Date (UTC) Type keep
    ---------------------- ----------------------- ----------- --------
    Maymonthlybackup20210507T113125 05/07/2021 18:31:25 keep-forever True


    Step #7 - Restore my database


    The last step I'm going to perform on my database is to restore it to a previous point in time.

    Below is what you see in the console.
    NOTE - If you choose a specific time, it will be in UTC.


    I pick a time to restore to, and click on the 'Restore Database' option. I can follow the process by looking at 'Workload Requests'.




    Step #8 - Validating backups


    A great feature of the command-line tool is the ability to validate backups that have been taken.  This is easy to do with the 'bkup_api reval_start' command.

    I started a restore validation for my database dbbsg and saved the uuid so I can monitor it.

    # /var/opt/oracle/bkup_api/bkup_api reval_start --dbname=dbbsg
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    -> Action : reval_start
    -> logfile: /var/opt/oracle/log/dbbsg/bkup_api_log/bkup_api_d0647aa8_20210511032638.300613.log
    UUID 5f204c4cb24311eb887252540068a695 for restore validation
    ** process started with PID: 15281
    ** Backup Request uuid : 5f204c4cb24311eb887252540068a695


    Now to monitor it using the uuid until it's done, and I can see it completed successfully.

    # /var/opt/oracle/bkup_api/bkup_api --uuid=5f204c4cb24311eb887252540068a695 --dbname=dbbsg
    DBaaS Backup API V1.5 @2021 Multi-Oracle home
    @ STARTING CHECK STATUS 5f204c4cb24311eb887252540068a695
    [ REQUEST TICKET ]
    [UUID -> 5f204c4cb24311eb887252540068a695
    [DBNAME -> dbbsg
    [STATE -> success
    [ACTION -> start-restore-validate
    [STARTED -> 2021-05-11 10:26:39 UTC
    [ENDED -> 2021-05-11 10:28:00 UTC
    [PID -> 15281
    [TAG -> None
    [PCT -> 0.0
    [LOG -> 2021-05-11 03:26:39.780830 - API:: invoked with args : -dbname=dbbsg -reval=default
    [LOG -> 2021-05-11 03:26:42.324669 - API:: Wallet is in open AUTOLOGIN state
    [LOG -> 2021-05-11 03:26:42.996885 - API:: Oracle database state is up and running
    [LOG -> 2021-05-11 03:28:00.857565 - API:: ....... OK
    [LOG -> 2021-05-11 03:28:00.857645 - API:: Restore Validation is Completed
    [ END TICKET ]


    Step #9 - Restoring with API

    There are many options for restoring with the API, either for the "database" (which consists of the CDB and all PDBs) or for just a specific PDB.

    Below are some of the commands that help with this.
    NOTE: All commands are executed using "bkup_api" from /var/opt/oracle/bkup_api as "oracle"


    Command            Options      Description
    bkup_start                      Start a new special backup now
    bkup_start         --keep       Create a keep backup
    bkup_start         --level0     Perform a new FULL level 0 backup
    bkup_start         --level1     Perform a new level 1 incremental backup
    bkup_start         --cron       Creates an incremental backup through cron
    bkup_chkcfg                     Verifies that backups have been configured
    bkup_status                     Shows the status of the most recent backup
    list                            Shows the list of the most recent backups
    reval_start                     Starts a restore validation of datafiles
    archreval_start                 Starts a revalidation of archive logs
    recover_start      --latest     Recover from the latest backup
    recover_start      --scn        Recover to SCN #
    recover_start      --b          Recover using a specific backup tag and defuzzy to the following archivelog
    recover_start      -t           Recover to time. Specify --nonutc to use a non-UTC timestamp
    recover_status                  Show the status of the most recent recover of this database


    With recovery you can also recover just a single PDB
    • --pdb={pdbname} - Recover just a single PDB
    You can also specify if the config files should be restored
    • --cfgfiles - Include the configuration files (controlfiles, spfiles, etc.) along with the database files.

    Step #10 - Configuration changes

    You can execute "bkup_api get config --dbname={dbname}" to create a file containing the current configuration.  In that file you can see some of the other changes you can make.
    Below is what I see using the version available at the time of writing.

    Config Parameter             Settings      Description
    bkup_cron_entry              yes/no        Enable/disable automatic backups
    bkup_archlog_cron_entry      yes/no        Enable automatic archive log cleanup when not using tooling
    bkup_cfg_files               yes/no        Enable backup of config files
    bkup_daily_time              hh24:mi       Time to execute the daily backup
    bkup_archlog_frequency       15,20,30…     How many minutes apart to execute archive log backups
    bkup_disk                    yes/no        Backups to the FRA
    bkup_disk_recovery_window    1-14          Recovery window of the FRA
    bkup_oss_xxx                               Backup settings when backing up to Object Store in Public Cloud
    bkup_zdlra_xx                              Backup settings when backing up to a ZDLRA
    bkup_nfs_xxx                               Backup settings when backing up to NFS
    bkup_set_section_size        yes/no        Set to yes to override the default setting
    bkup_section_size                          Value for overriding the default section size
    bkup_channels_node           xx            Number of channels to be used by RMAN
    bkup_use_rcat                yes/no        If you are using an RMAN catalog
    bkup_rcat_xxx                              RMAN catalog settings

    TDE queries to view your configuration


     This post contains some of the scripts I have been using on my TDE-encrypted database to see the big picture of what is being encrypted by what key.



    1) Wallet information


     The first script I put together will list the status of wallets for all tenants on all nodes. This will give you the wallet location, the type of wallet, whether the keystore is united, etc.
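
    The script itself isn't shown inline, so here is a minimal sketch of a query along these lines (my own reconstruction, not the exact script), joining gv$encryption_wallet to v$containers to pick up the PDB name:

    set linesize 200
    col "PDB Name"    format a10
    col wrl_parameter format a50
    col status        format a30
    col wallet_type   format a20
    select ew.inst_id,
           c.name               "PDB Name",
           ew.wrl_type          "Type",
           ew.wrl_parameter,
           ew.status,
           ew.wallet_type,
           ew.keystore_mode     "KEYSTORE",
           ew.fully_backed_up   "Backed Up"
    from   gv$encryption_wallet ew
           join v$containers c on c.con_id = ew.con_id
    order  by ew.inst_id, c.name;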



    Below is the output of this script for my single node, local wallet database.

      INST_ID PDB Name   Type	 WRL_PARAMETER					    Status			   WALLET_TYPE		KEYSTORE Backed Up
    ---------- ---------- ---------- -------------------------------------------------- ------------------------------ -------------------- -------- ----------
    1 CDB$ROOT FILE /home/oracle/app/oracle/admin/tdecdb/wallet/tde/ OPEN AUTOLOGIN NONE NO
    PDB$SEED FILE OPEN AUTOLOGIN UNITED NO
    PDBTDE1 FILE OPEN AUTOLOGIN UNITED NO
    PDBTDE2 FILE OPEN AUTOLOGIN UNITED NO
    PDBTDE3 FILE OPEN AUTOLOGIN UNITED NO



    Below is the output from a 4-node cluster with OKV configured.



    INST_ID PDB Name Type WRL_PARAMETER Status WALLET_TYPE KEYSTORE Backed Up
    ------ ---------- ---------- ------------------------------------ -------------- ---------------- ------------- -------------------- -------- ----------
    1 CDB$ROOT FILE /u02/app/oracle/admin/jckey/wallet/tde/ OPEN_NO_MASTER_KEY AUTOLOGIN NONE UNDEFINED
    CDB$ROOT OKV OPEN OKV NONE UNDEFINED
    JCKPDB FILE OPEN_NO_MASTER_KEY AUTOLOGIN UNITED UNDEFINED
    JCKPDB OKV OPEN OKV UNITED UNDEFINED
    PDB$SEED FILE OPEN_NO_MASTER_KEY AUTOLOGIN UNITED UNDEFINED
    PDB$SEED OKV OPEN OKV UNITED UNDEFINED

    2 CDB$ROOT FILE /u02/app/oracle/admin/jckey/wallet/tde/ OPEN_NO_MASTER_KEY AUTOLOGIN NONE UNDEFINED
    CDB$ROOT OKV OPEN OKV NONE UNDEFINED
    JCKPDB FILE OPEN_NO_MASTER_KEY AUTOLOGIN UNITED UNDEFINED
    JCKPDB OKV OPEN OKV UNITED UNDEFINED
    PDB$SEED FILE OPEN_NO_MASTER_KEY AUTOLOGIN UNITED UNDEFINED
    PDB$SEED OKV OPEN OKV UNITED UNDEFINED

    3 CDB$ROOT FILE /u02/app/oracle/admin/jckey/wallet/tde/ OPEN_NO_MASTER_KEY AUTOLOGIN NONE UNDEFINED
    CDB$ROOT OKV OPEN OKV NONE UNDEFINED
    JCKPDB FILE OPEN_NO_MASTER_KEY AUTOLOGIN UNITED UNDEFINED
    JCKPDB OKV OPEN OKV UNITED UNDEFINED
    PDB$SEED FILE OPEN_NO_MASTER_KEY AUTOLOGIN UNITED UNDEFINED
    PDB$SEED OKV OPEN OKV UNITED UNDEFINED

    4 CDB$ROOT FILE /u02/app/oracle/admin/jckey/wallet/tde/ OPEN_NO_MASTER_KEY AUTOLOGIN NONE UNDEFINED
    CDB$ROOT OKV OPEN OKV NONE UNDEFINED
    JCKPDB FILE OPEN_NO_MASTER_KEY AUTOLOGIN UNITED UNDEFINED
    JCKPDB OKV OPEN OKV UNITED UNDEFINED
    PDB$SEED FILE OPEN_NO_MASTER_KEY AUTOLOGIN UNITED UNDEFINED
    PDB$SEED OKV OPEN OKV UNITED UNDEFINED





    2) Tablespace information

    This script will list the tablespaces, whether each tablespace is encrypted, and what the key is.
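
    Again, the script isn't shown inline; a simplified sketch of this kind of query (my own illustration, which only shows the master key ID for tablespaces listed in v$encrypted_tablespaces) is:

    select c.name                     "PDB Name",
           t.name                     "Tablespace Name",
           nvl(et.encryptedts, 'NO')  "Enc.",
           rawtohex(et.masterkeyid)   "Master Key ID (hex)"
    from   v$tablespace t
           join v$containers c on c.con_id = t.con_id
           left join v$encrypted_tablespaces et
                  on et.ts#    = t.ts#
                 and et.con_id = t.con_id
    order  by c.name, t.name;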


    Below is the output from my database.

    PDB Name   Tablespace Name Enc.          Master Key ID              Key ID                             tablespace Encrypt key (trunc)
    ---------- --------------- ----- ------------------------- ----------------------------------- ------------------------------
    CDB$ROOT SYSAUX NO AQbOELhZAk9Dv8A2mADBKQQ= 06CE10B859024F43BFC0369800C12904 9C21DCFF8CB7DCC6E038239DD07D3D
    SYSTEM NO AQbOELhZAk9Dv8A2mADBKQQ= 06CE10B859024F43BFC0369800C12904 9C21DCFF8CB7DCC6E038239DD07D3D
    TEMP NO AQbOELhZAk9Dv8A2mADBKQQ= 06CE10B859024F43BFC0369800C12904 9C21DCFF8CB7DCC6E038239DD07D3D
    UNDOTBS1 NO AQbOELhZAk9Dv8A2mADBKQQ= 06CE10B859024F43BFC0369800C12904 9C21DCFF8CB7DCC6E038239DD07D3D
    USERS YES AQbOELhZAk9Dv8A2mADBKQQ= 06CE10B859024F43BFC0369800C12904 9C21DCFF8CB7DCC6E038239DD07D3D

    PDBTDE1 SYSAUX NO AYQysCoXXk+Nv/Q//9sUAV4= 8432B02A175E4F8DBFF43FFFDB14015E 4D7007D0FFFCB3F2702233BDD2702A
    SYSTEM NO AYQysCoXXk+Nv/Q//9sUAV4= 8432B02A175E4F8DBFF43FFFDB14015E 4D7007D0FFFCB3F2702233BDD2702A
    TEMP NO AYQysCoXXk+Nv/Q//9sUAV4= 8432B02A175E4F8DBFF43FFFDB14015E 4D7007D0FFFCB3F2702233BDD2702A
    UNDOTBS1 NO AYQysCoXXk+Nv/Q//9sUAV4= 8432B02A175E4F8DBFF43FFFDB14015E 4D7007D0FFFCB3F2702233BDD2702A
    USERS YES AYQysCoXXk+Nv/Q//9sUAV4= 8432B02A175E4F8DBFF43FFFDB14015E 4D7007D0FFFCB3F2702233BDD2702A

    PDBTDE2 SYSAUX NO AegHs2QPk09xv0HVO3B1alQ= E807B3640F934F71BF41D53B70756A54 C3F9A04600AFE07F023589C0DE0ED8
    SYSTEM NO AegHs2QPk09xv0HVO3B1alQ= E807B3640F934F71BF41D53B70756A54 C3F9A04600AFE07F023589C0DE0ED8
    TEMP NO AegHs2QPk09xv0HVO3B1alQ= E807B3640F934F71BF41D53B70756A54 C3F9A04600AFE07F023589C0DE0ED8
    UNDOTBS1 NO AegHs2QPk09xv0HVO3B1alQ= E807B3640F934F71BF41D53B70756A54 C3F9A04600AFE07F023589C0DE0ED8
    USERS YES AegHs2QPk09xv0HVO3B1alQ= E807B3640F934F71BF41D53B70756A54 C3F9A04600AFE07F023589C0DE0ED8

    PDBTDE3 SYSAUX NO AW5TJ43d8E+ZvxD8A1YhdcM= 6E53278DDDF04F99BF10FC03562175C3 6911A4106D914681528706E03202E6
    SYSTEM NO AW5TJ43d8E+ZvxD8A1YhdcM= 6E53278DDDF04F99BF10FC03562175C3 6911A4106D914681528706E03202E6
    TEMP NO AW5TJ43d8E+ZvxD8A1YhdcM= 6E53278DDDF04F99BF10FC03562175C3 6911A4106D914681528706E03202E6
    UNDOTBS1 NO AW5TJ43d8E+ZvxD8A1YhdcM= 6E53278DDDF04F99BF10FC03562175C3 6911A4106D914681528706E03202E6
    USERS YES AW5TJ43d8E+ZvxD8A1YhdcM= 6E53278DDDF04F99BF10FC03562175C3 6911A4106D914681528706E03202E6




    3) Wallet Contents

    Now let's take a look at what's in my wallet.
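
    A minimal sketch of a query that pulls this information from v$encryption_keys (again, my own reconstruction rather than the exact script) looks like this:

    select ek.key_id                                          "Master Key ID",
           ek.tag                                             "Tag",
           c.name                                             "PDB Name",
           ek.keystore_type,
           ek.origin,
           to_char(ek.creation_time,  'mm/dd/yyyy hh24:mi')   "Key Creation Time",
           to_char(ek.activation_time,'mm/dd/yyyy hh24:mi')   "Key Act. Time"
    from   v$encryption_keys ek
           join v$containers c on c.con_id = ek.con_id
    order  by c.name, ek.creation_time;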



    Below you can see the master key ID for each CDB/PDB and information about when it was created.

    Master Key ID                                           Tag                  PDB Name        KEYSTORE_TYPE     Origin     Key Creation Time  Key Act. Time
    ------------------------------------------------------- -------------------- --------------- ----------------- ---------- ------------------ ------------------
    ASd1jY/loU8Bv6HuSfZZFqAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA cdbroot_first_key CDB$ROOT SOFTWARE KEYSTORE LOCAL 06/28/2021 17:46 06/28/2021 17:46
    AQbOELhZAk9Dv8A2mADBKQQAAAAAAAAAAAAAAAAAAAAAAAAAAAAA cdbroot_second_key SOFTWARE KEYSTORE LOCAL 06/28/2021 18:46 06/28/2021 18:46

    AfhjvV/z/U9ev5bICBLYV1MAAAAAAAAAAAAAAAAAAAAAAAAAAAAA pdbtde1_firstkey PDBTDE1 SOFTWARE KEYSTORE LOCAL 06/28/2021 17:53 06/28/2021 17:53
    AYQysCoXXk+Nv/Q//9sUAV4AAAAAAAAAAAAAAAAAAAAAAAAAAAAA pdbtde1_second_key SOFTWARE KEYSTORE LOCAL 06/28/2021 18:50 06/28/2021 18:50

    AVXCNjl3f0+Av+/osXobX2sAAAAAAAAAAAAAAAAAAAAAAAAAAAAA pdbtde2_firstkey PDBTDE2 SOFTWARE KEYSTORE LOCAL 06/28/2021 17:54 06/28/2021 17:54
    AegHs2QPk09xv0HVO3B1alQAAAAAAAAAAAAAAAAAAAAAAAAAAAAA pdbtde2_second_key SOFTWARE KEYSTORE LOCAL 06/28/2021 18:50 06/28/2021 18:50

    Ab1/+jaPck+Ev6rhmBKtxXEAAAAAAAAAAAAAAAAAAAAAAAAAAAAA pdbtde3_firstkey PDBTDE3 SOFTWARE KEYSTORE LOCAL 06/28/2021 17:54 06/28/2021 17:54
    AW5TJ43d8E+ZvxD8A1YhdcMAAAAAAAAAAAAAAAAAAAAAAAAAAAAA pdbtde1_second_key SOFTWARE KEYSTORE LOCAL 06/28/2021 18:50 06/28/2021 18:50


    NOTE: I rotated my master key, and you can see both keys. Adding a tag to the key also helps identify it.


    4) Control file Contents

    This query looks at the x$jcbdbk table to determine the master key(s) currently in use.



    PDB Name        Key ID                              Master Key ID
    --------------- ----------------------------------- -------------------------
    CDB$ROOT 06CE10B859024F43BFC0369800C12904 AQbOELhZAk9Dv8A2mADBKQQ=

    PDB$SEED 00000000000000000000000000000000 AQAAAAAAAAAAAAAAAAAAAAA=

    PDBTDE1 8432B02A175E4F8DBFF43FFFDB14015E AYQysCoXXk+Nv/Q//9sUAV4=

    PDBTDE2 E807B3640F934F71BF41D53B70756A54 AegHs2QPk09xv0HVO3B1alQ=

    PDBTDE3 6E53278DDDF04F99BF10FC03562175C3 AW5TJ43d8E+ZvxD8A1YhdcM=



    Conclusion :

     By looking at the queries above you should have a better idea of how the master encryption key ties to the tablespace encryption.

     You can also see what happens when you rotate the master key, and how it affects the tablespaces.




    A New ZDLRA feature can help you migrate to a new ZDLRA


     A new feature was included in the 19.2.1.1.2 ZDLRA software release to help you migrate your backup strategy when moving to a new ZDLRA.


    This feature allows you to continue to access your older backups during the cut-over period directly from the new ZDLRA.  You point your database restore to the new ZDLRA and it will automagically access the older backups if necessary. Once the cutover period has passed, the old ZDLRA can be retired.

    I am going to walk through the steps.

    1. Configure new ZDLRA

    • Add the new ZDLRA to OEM - The first step is to ensure that the new ZDLRA has been registered within your OEM environment. This will allow it to be managed, and of course monitored.
    • Add a replication VPC user to the new ZDLRA. This will be used to connect from the old ZDLRA.
    • Add the VPC users on the new ZDLRA that match the old ZDLRA
    • Configure policies on new ZDLRA to match old ZDLRA.
              This can be done by dynamically executing DBMS_RA.CREATE_PROTECTION_POLICY.
               Current protection policy information can be read from the RA_PROTECTION_POLICY view (see the query sketch after this list).
    • Add databases to proper protection policies on new ZDLRA.
            This can be done by dynamically executing DBMS_RA.ADD_DB. 
            Current database information can be read from the RA_DATABASE view.

    • Grant the replication VPC user access to all databases for replication.
            This can be done by dynamically executing DBMS_RA.GRANT_DB_ACCESS
            The current list of databases can be read from the RA_DATABASE view.

    • Grant the VPC users access to the database for backups/restores
            This can be done by dynamically executing DBMS_RA.GRANT_DB_ACCESS
            The current list of grants can be read from the RA_DB_ACCESS view
    • Create a replication server on the old ZDLRA that points to the new ZDLRA
    • Add the protection policies on the old ZDLRA to the replication server created previously.

    NOTE: When these steps are completed, the old ZDLRA will replicate the most recent L0 to the new ZDLRA, and will then replicate all new incremental backups and archive logs.
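
    As referenced in the list above, here is a hedged sketch of the review queries you would run as the catalog owner on the old ZDLRA before recreating the configuration (the DBMS_RA calls themselves are omitted here because their parameters depend on your release and policy settings):

    -- Existing protection policies to recreate with DBMS_RA.CREATE_PROTECTION_POLICY
    select * from ra_protection_policy;

    -- Protected databases to add to the new ZDLRA with DBMS_RA.ADD_DB
    select * from ra_database;

    -- Current database access grants to recreate with DBMS_RA.GRANT_DB_ACCESS
    select * from ra_db_access;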




    2. Switch to new ZDLRA for backups

    • Update the wallet on all clients to include the VPC user/Scan listener of the new ZDLRA.
    • Update the real-time redo configuration (if using real-time redo) to point to the new ZDLRA.
    • Update backup jobs to open channels to the new ZDLRA
    • Remove the VPC replication user from the new ZDLRA  
    • Drop the replication server on the old ZDLRA
    NOTE: The backups will begin with an incremental backup based on the contents of the new ZDLRA and will properly create a "virtual full". Archive logs will automatically pick up with the sequence number following the last log replicated from the old ZDLRA.



    3. Configure "Read-Only Mode" replication to old ZDLRA

    • Add a replication VPC user on the old ZDLRA. This will be used to connect from the new ZDLRA.
    • Create a replication server from new ZDLRA to the old ZDLRA
    • Grant the replication VPC user on the old ZDLRA access to all databases for replication.
            This can be done by dynamically executing DBMS_RA.GRANT_DB_ACCESS
            The current list of databases can be read from the RA_DATABASE view.
    • Add a replication server for each policy that includes the "Read-Only" flag set to "YES".
    NOTE: this will allow the new ZDLRA to pull backups from the old ZDLRA that only exist on the old ZDLRA.


    4. Retire old ZDLRA after cutover period

    • Remove replication server from new ZDLRA that points to old ZDLRA
    NOTE: The old ZDLRA can now be decommissioned.



    That's all there is to it. This will allow you to restore from the new ZDLRA, and not have to keep track of which backups are on which appliance during the cutover window!

    Adding immutability to buckets in the Oracle Cloud Object Store


     I am going to demonstrate a new feature of the object store that you might not have known about.  The feature is "Retention Lock" and is used to protect the objects in a bucket.



    Let me first start with a few  links to get you started and then I will demonstrate how to use this feature.


    In order to add a retention lock to a bucket you create a rule for the individual bucket.

    Below is a screen shot of where you will find the retention rules, and the "Create Rule" button. Also note that I highlighted the "Object Versioning" attribute of the bucket.

    NOTE: You cannot add a retention lock to a bucket that has "Object Versioning" enabled. You also cannot disable "Object Versioning" once it is enabled. You MUST suspend "Object Versioning" before adding any retention rules to your bucket.



     There are 3 types of retention locks and below I will describe them and show you how to implement them. They are listed from least restrictive to most restrictive.


    DATA GOVERNANCE

    Data Governance is a time-based lock based on the modified time of EACH OBJECT in the bucket.

    The Retention can be set in "days" or "years".

    Below is what the settings look like for data governance. You choose "Time-Bound" for the rule type and ensure that you do not "enable retention rule lock".



    With Data Governance you can both increase and decrease the duration of the retention lock.

    Below you can see after the lock was created, the rule is not locked.



    REGULATORY COMPLIANCE

    Regulatory Compliance is similar to Data Governance with the exception that the duration can only be increased.
    The retention lock of the individual objects, just like Data Governance is based on when the individual object was last modified.
    Another key difference is that when you "enable retention rule lock", you also set when this rule is locked. The default is 14 days, and cannot be set less than 14 days.
    The delay of 14 days is a "cooling off period" that gives you 14 days to test before the rule takes effect. This is because once the cooling off period ends, the retention time cannot be shortened.


    Below is the screen shot of creating a retention rule for regulatory compliance and note that the retention rule lock MUST be enabled to ensure the duration is not shortened.


    It also asked me to confirm the "lock date" before the rule is created.




    Below are the rules that are set after both of these steps.


    NOTE: I now have 2 rules. I have the original rule that will lock the objects for 30 days (this can be changed as needed). I also have a Regulatory Compliance rule that will lock the objects for 1 day. The Regulatory Compliance rule will not take effect for 14 days from today.


    LEGAL HOLD

    The final type of retention is a legal hold.  A legal hold will put a retention lock on the WHOLE bucket. All objects in the bucket are locked and cannot be modified/deleted until the hold is removed. There is no ending time period for a legal hold.

    Below is how you create a legal hold.



    SUMMARY

    You can create the 3 types of retention locks, and you can even layer them. Below you can see that I have 3 locks. The Legal Hold rule will lock everything, but that can be removed, leaving the 2 remaining rules.  I can remove the Data Governance rule, but the Regulatory Compliance rule is the most restrictive. Once the 14 days (or whatever you set) have passed, this rule cannot be changed.


    Now when I go to delete an object that is protected by a retention rule I get an error. Below is an example of what you will see.




    Using rclone to download Objects from OCI


     I previously created a post that walked through how to configure rclone to easily access objects within the Oracle Cloud Object Store.


    Object Store access with rclone


    This post is going to go a little deeper on how to quickly download objects from the OCI object store onto your host.

    In my example, I needed to download RMAN disk backup files that were copied to the Object Store in OCI.

    I have over 10 TB of RMAN backup pieces, so I am going to create an ACFS mount point to store them on.


    1) Create ACFS mount point

    Creating the mount point is made up of multiple small steps that are documented here. This is a link to the 19c documentation so note it is subject to change over time.

    • Use ASMCMD to create a volume on the data disk group of 20 TB 
    - Start ASMCMD connected to the Oracle ASM instance. You must be a user in the OSASM operating system group.

                        - Create the volume "volume1" on the "data" disk group

                        ASMCMD [+] > volcreate -G data -s 20G volume1

    • Use ASMCMD to list the volume information.  NOTE: my volume device is volume1-123
                                 
    ASMCMD [+] > volinfo -G data volume1
    Diskgroup Name: DATA

    Volume Name: VOLUME1
    Volume Device: /dev/asm/volume1-123
    State: ENABLED
    ...

    SQL> SELECT volume_name, volume_device FROM V$ASM_VOLUME
    WHERE volume_name ='VOLUME1';

    VOLUME_NAME VOLUME_DEVICE
    ----------------- --------------------------------------
    VOLUME1 /dev/asm/volume1-123


    • Create the file system with mkfs from the volume "/dev/asm/volume1-123"
    $ /sbin/mkfs -t acfs /dev/asm/volume1-123
    mkfs.acfs: version = 19.0.0.0.0
    mkfs.acfs: on-disk version = 46.0
    mkfs.acfs: volume = /dev/asm/volume1-123
    mkfs.acfs: volume size = 21474836480 ( 20.00 GB )
    mkfs.acfs: Format complete.
    • Register the file system with srvctl
    # srvctl add filesystem -device /dev/asm/volume1-123 -path /acfsmounts/acfs2
    -user oracle -mountowner oracle -mountgroup dba -mountperm 755
    NOTE: This will mount the filesystem on /acfsmounts/acfs2

    • Start the filesystem with srvctl
    $ srvctl start filesystem -device /dev/asm/volume1-123

    • Change the ownership to oracle

    chown -R oracle:dba /acfsmounts/acfs2

    2) Use rclone to view objects

    The next step is to look at the objects I want to copy to my new ACFS file system. The format for accessing the object store in the commands is
     "rclone {command} [connection name]:{bucket/partial object name - optional}".


    NOTE: For all examples my connection name is oci_s3 

    I am going to start with the simplest command list buckets (lsd).

    NOTE: We are using the s3 interface to view the objects in the namespace.  There is a single namespace for the entire tenancy.  With OCI there is the concept of "compartments" which can be used to separate applications and users.  The S3 interface does not have this concept, which means that all buckets are visible.
    • rclone lsd - This is the simplest command to list the buckets, and as I noted previously, it lists all buckets, not just my bucket.
           ./rclone lsd oci_s3:
              -1 2021-02-22 15:33:06        -1 Backups
    -1 2021-02-16 21:31:05 -1 MyCloudBucket
    -1 2020-09-23 22:21:36 -1 Test-20200923-1719
    -1 2021-07-20 20:03:27 -1 ZDM_bucket
    -1 2020-11-23 23:47:03 -1 archive
    -1 2021-01-21 13:03:33 -1 bsgbucket
    -1 2021-02-02 15:35:18 -1 bsgbuckets3
    -1 2021-03-03 11:42:13 -1 osctransfer
    -1 2021-03-19 19:57:16 -1 repo
    -1 2021-01-21 19:35:24 -1 short_retention
    -1 2020-11-12 13:41:48 -1 jsmithPublicBucket
    -1 2020-11-04 14:10:33 -1 jsmith_top_bucket
    -1 2020-11-04 11:43:55 -1 zfsrepl
    -1 2020-09-25 16:56:01 -1 zs-oci-bucket

    If I want to list what is within my bucket (bsgbucket) I can list that bucket. In this case it treats the flat structure of the object name as if it is a file system, and lists only the top "directories" within my bucket.

    ./rclone lsd oci_s3:bsgbucket
    0 2021-08-14 23:58:02 -1 file_chunk
    0 2021-08-14 23:58:02 -1 sbt_catalog


    • rclone tree - command will list what is within my bucket as a tree structure.
    [opc@rlcone-test rclone]$ ./rclone tree oci_s3:bsgbucket
    /
    ├── expdat.dmp
    ├── file_chunk
    │ └── 2985366474
    │ └── MYDB
    │ └── backuppiece
    │ └── 2021-06-14
    │ ├── DTA_BACKUP_MYDB_4601d1ph_134_1_1
    │ │ └── yHqtjSE51L3B
    │ │ ├── 0000000001
    │ │ └── metadata.xml
    │ └── DTA_BACKUP_MYDB_4d01d1uq_141_1_1
    │ └── lS9Sdnka2nD0
    │ ├── 0000000001
    │ └── metadata.xml
    └── sbt_catalog
    ├── DTA_BACKUP_MYDB_4601d1ph_134_1_1
    │ └── metadata.xml
    └── DTA_BACKUP_MYDB_4d01d1uq_141_1_1
    └── metadata.xml


    • rclone lsl- command will list what is within my bucket as a long listing with more detail
    [opc@rlcone-test rclone]$ ./rclone lsl oci_s3:bsgbucket
    311296 2021-01-21 13:04:05.000000000 expdat.dmp
    337379328 2021-06-14 19:48:45.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001
    1841 2021-06-14 19:48:45.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/metadata.xml
    36175872 2021-06-14 19:49:10.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/0000000001
    1840 2021-06-14 19:49:10.000000000 file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/metadata.xml
    1841 2021-06-14 19:48:46.000000000 sbt_catalog/DTA_BACKUP_MYDB_4601d1ph_134_1_1/metadata.xml
    1840 2021-06-14 19:49:10.000000000 sbt_catalog/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/metadata.xml


    3) Use rclone to copy the objects to my local file system.


    There are 2 command you can use to copy the files from the object store to the local file system.
    • copy - This is as you expect. It copies the files to the local file system and overwrites the local copy
    • sync - This synchronizes the local file system with the objects in the object store, and will not copy down an object if it already has a local copy.

    In my case I am going to use the sync command. This will allow me to re-start copying the objects and it will ignore any objects that were previously successfully copied.

    Below is the command I am using to copy (synchronize) the objects from my bucket in the object store (oci_s3:bsgbucket) to the local filesystem (/home/opc/acfs).

    ./rclone -vv sync -P --multi-thread-streams 12 --transfers 64  oci_s3:bsgbucket   /home/opc/acfs

    To break down the command.

    • -vv  This option to rclone gives me "verbose" output so I can see more of what is being copied as the command is executed.
    • -P  This option to rclone gives me feedback on how much of the object has downloaded so far to help me monitor it.
    • --multi-thread-streams 12 This option to rclone breaks larger objects into chunks to increase the concurrency.
    • --transfers 64 This option to rclone allows for 64 concurrent transfers to occur. This increases the download throughput
    • oci_s3:bsgbucket - This is the source to copy/sync
    • /home/opc/acfs - This is the destination to copy/sync to

    Finally, this is what the command looks like when it is executing.

    opc@rlcone-test rclone]$  ./rclone -vv sync -P --multi-thread-streams 12 --transfers 64  oci_s3:bsgbucket   /home/opc/acfs
    2021/08/15 00:15:32 DEBUG : rclone: Version "v1.56.0" starting with parameters ["./rclone""-vv""sync""-P""--multi-thread-streams""12""--transfers""64""oci_s3:bsgbucket""/home/opc/acfs"]
    2021/08/15 00:15:32 DEBUG : Creating backend with remote "oci_s3:bsgbucket"
    2021/08/15 00:15:32 DEBUG : Using config file from "/home/opc/.config/rclone/rclone.conf"
    2021/08/15 00:15:32 DEBUG : Creating backend with remote "/home/opc/acfs"
    2021-08-15 00:15:33 DEBUG : sbt_catalog/DTA_BACKUP_MYDB_4601d1ph_134_1_1/metadata.xml: md5 = 505fc1fdce141612c262c4181a9122fc OK
    2021-08-15 00:15:33 INFO : sbt_catalog/DTA_BACKUP_MYDB_4601d1ph_134_1_1/metadata.xml: Copied (new)
    2021-08-15 00:15:33 DEBUG : expdat.dmp: md5 = f97060f5cebcbcea3ad6fadbda136f4e OK
    2021-08-15 00:15:33 INFO : expdat.dmp: Copied (new)
    2021-08-15 00:15:33 DEBUG : Local file system at /home/opc/acfs: Waiting for checks to finish
    2021-08-15 00:15:33 DEBUG : Local file system at /home/opc/acfs: Waiting for transfers to finish
    2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: Starting multi-thread copy with 2 parts of size 160.875Mi
    2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 2/2 (168689664-337379328) size 160.875Mi starting
    2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 1/2 (0-168689664) size 160.875Mi starting
    2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/metadata.xml: md5 = 0a8eccc1410e1995e36fa2bfa0bf7a70 OK
    2021-08-15 00:15:33 INFO : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/metadata.xml: Copied (new)
    2021-08-15 00:15:33 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/metadata.xml: md5 = 505fc1fdce141612c262c4181a9122fc OK
    2021-08-15 00:15:33 INFO : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/metadata.xml: Copied (new)
    2021-08-15 00:15:33 DEBUG : sbt_catalog/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/metadata.xml: md5 = 0a8eccc1410e1995e36fa2bfa0bf7a70 OK
    2021-08-15 00:15:33 INFO : sbt_catalog/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/metadata.xml: Copied (new)
    2021-08-15 00:15:33 INFO : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4d01d1uq_141_1_1/lS9Sdnka2nD0/0000000001: Copied (new)
    2021-08-15 00:15:34 DEBUG : file_chunk/2985366474/MYDB/backuppiece/2021-06-14/DTA_BACKUP_MYDB_4601d1ph_134_1_1/yHqtjSE51L3B/0000000001: multi-thread copy: stream 1/2 (0-168689664) size 160.875Mi finished
    Transferred: 333.398Mi / 356.554 MiByte, 94%, 194.424 MiByte/s, ETA 0s
    Transferred: 6 / 7, 86%
    Elapsed time: 2.0s
    Transferring:

    NOTE: it broke up the larger object into chunks, and you can see that it downloaded 2 chunks simultaneously.  At the end you can see the file that it was in the middle of transferring.

    Conclusion.

    rclone is a great alternative to the OCI CLI for managing your objects and downloading them.  It has more intuitive commands (like "rclone ls").  And the best part is that it doesn't require Python or special privileges to install.

    TDE–How to implement TDE in your database and what to think about (part 4)


     In this post, I am going to include some lessons learned from implementing "Restore as encrypted" of a large database with over 500,000 objects.

     


    The error we were receiving when trying to open our database was

    SQL> alter database open;
    alter database open
    *
    ERROR at line 1:
    ORA-00603: ORACLE server session terminated by fatal error
    ORA-01092: ORACLE instance terminated. Disconnection forced
    ORA-00604: error occurred at recursive SQL level 1
    ORA-25153: Temporary Tablespace is Empty
    Process ID: 133196
    Session ID: 1769 Serial number: 6805

    And in the alert log we saw.

    Parallel first-pass transaction recovery timed out. Switching to serial recovery.
    Undo initialization recovery: Parallel FPTR failed: start:685625075 end:685692452 diff:67377 ms (67.4 seconds)
    2021-08-27T10:02:39.567998-04:00
    Undo initialization recovery: err:0 start: 685625075 end: 685693406 diff: 68331 ms (68.3 seconds)
    2021-08-27T10:02:43.015891-04:00
    [339055] Successfully onlined Undo Tablespace 17.
    Undo initialization online undo segments: err:0 start: 685693406 end: 685696854 diff: 3448 ms (3.4 seconds)
    Undo initialization finished serial:0 start:685625075 end:685697235 diff:72160 ms (72.2 seconds)
    Dictionary check beginning
    2021-08-27T10:02:44.819881-04:00
    TT03 (PID:360221): Sleep 80 seconds and then try to clear SRLs in 6 time(s)
    2021-08-27T10:02:54.759120-04:00
    Tablespace 'PSAPTEMP' #3 found in data dictionary,
    but not in the controlfile. Adding to controlfile.
    2021-08-27T10:02:55.826700-04:00
    Errors in file /u02/app/oracle/diag/rdbms/bsg/BSG1/trace/BSG1_ora_339055.trc:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-25153: Temporary Tablespace is Empty
    2021-08-27T10:02:55.826827-04:00
    Errors in file /u02/app/oracle/diag/rdbms/bsg/BSG1/trace/BSG1_ora_339055.trc:
    ORA-00604: error occurred at recursive SQL level 1


    What we found is that there is some work the database has to do when opening for the first time after encrypting tablespaces offline.

    Background:

    Any movement of data to disk that involves objects residing in an encrypted tablespace is encrypted. This means that if an object resides in an encrypted tablespace, the following data is also encrypted.

    • TEMP - If an object resides in an encrypted tablespace, any sort information in the TEMP tablespace is encrypted. This includes joins to other tables.  Any piece of data in a sort operation on disk causes the whole set of data to be encrypted.
    • UNDO - If an object resides in an encrypted tablespace, the blocks stored in UNDO are encrypted.
    • REDO/Archive - If an object resides in an encrypted tablespace, the changes to that object are encrypted in the redo stream (including redo sent through the network to a standby database).

    How this happens:


    The way the database manages encryption is to internally mark an object as an encrypted object so that it ensures the object's data stays encrypted on disk.
    Now back to "restore as encrypted".  Since we restored the database and encrypted the tablespaces, the database needs to mark all the objects in the "newly encrypted" tablespaces as encrypted.
    This is part of the database open operation.  The open database operation sorts through the internal object metadata to determine which objects now reside in "newly encrypted" tablespaces.
    There are a few things to be aware of around this process.
    1. It requires a sorting of objects.  Because of this you may need a much bigger sort_area_size or PGA_TARGET.  This is only needed to open the database after encrypting, but it was the cause of the issue I was seeing.
    2. It may take some time. Lots of time depending on the # of objects.

    How to mitigate it:


    Since we know this is going to happen, there are a few ways to mitigate it.

    1. Empty out your recycle bin to limit the # of objects to update.
    2. Proactively increase your PGA (or sort_area_size) for opening the database for the first time after encrypting.
    3. Encrypt the database in sections. Do not encrypt every tablespace at once to decrease the # of objects that will be marked encrypted. NOTE: this may not be practical.
    4. Encrypt the tablespace online, as this will mark object as the processing of each tablespace completes.
    5. Check the number of objects that will need to be updated. This can be done by looking at the TAB$ internal table and matching the TS# to the tablespaces that will be encrypted (see the sketch below).
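
    A hedged sketch of that check (my own illustration; run as SYS in the container being encrypted, and substitute your own tablespace names), together with a pre-emptive PGA bump:

    -- Count the objects whose tablespaces are about to be encrypted,
    -- by matching TAB$.TS# to TS$.TS# (the tablespace names are examples).
    select ts.name   tablespace_name,
           count(*)  object_count
    from   sys.tab$ t
           join sys.ts$ ts on ts.ts# = t.ts#
    where  ts.name in ('USERS', 'MY_APP_DATA')
    group  by ts.name;

    -- Temporarily raise the PGA target before the first open after encrypting
    -- (the value shown is an example only).
    alter system set pga_aggregate_target = 16G scope=both sid='*';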

    ZFS now supports Object Store Pre-Authenticated Requests


    ZFS now supports Pre-Authenticated requests which can be useful for loading data into your data warehouse.


    Version OS8.8.36 of the ZFS software was recently released, and one of the most interesting features is the support for Pre-Authenticated Requests. 

    Here is the note "On-premise Object Storage Best Practices and Recommended Use Cases". This Document outlines how to use the new Object Store features, some of which I will cover in future posts.

    Here is my post on configuring ZFS as an object store. It is necessary to configure ZFS as an object store if you want to do the same testing.

    Here is my post on configuring your database to access ZFS as an object store.  This document walks through how to configure DBMS_CLOUD in an 19c+ database.

    By going through these notes you can reach the same point that I am with my sample database and ZFS.  Below is the environment that I will be using for this demo.

    Environment:

    • Oracle Database 21c (though 19c would work just as well)
      • I have updated the DATABASE_PROPERTIES to point to my wallet for SSL certificates.
      • I added the unsigned SSL certificate for my ZFS simulator HTTPS server to the wallet.
      • I have updated the C##CLOUD$SERVICE.dbms_cloud_store table to point to my ZFS appliance using the ORACLE_BMC authentication method.
      • I installed the sales schema into my account.
    • I am running the ZFS simulator
      • I updated the release of the simulator to OS8.8.36 (2013.06.05.8.36) of the ZFS software.
      • I created a user "salesdata" as a local user
      • I created a share named "salesupload" that is owned by salesdata.
      • The share "salesupload" is configured read/write using the OCI API within HTTP
      • I added the user "salesdata" and its public SSH keys for authentication with the OCI protocol within HTTP.
    • I have the OCI Client tool installed
      • I installed the OCI client tool
      • I configured an entry for my object store in the ~/.oci/config file called salesdata

    Pre-Authenticated Requests for uploading files


    The first part of this post will go through creating a request for uploading files to an object store.
    In my example, I will be uploading the sales.dat file which comes with the sample sales history schema.

    Step 1: Create the bucket to store the files.


    The first step is to create a bucket that  I am going to use to store the files that are being uploaded to the object store.  The beauty of using an object store, is that I can have many buckets on the same share that are isolated from each other.

    I am going to call my bucket "salesdrop".

    Below is the OCI client call I am going to use to create my bucket "salesdrop".


    [oracle@oracle-server]$ oci os bucket create --config-file ~/.oci/config --profile SALESDATA --compartment-id salesupload --namespace-name salesupload --endpoint http://10.0.0.231/oci   --name salesdrop
    {
    "data": {
    "approximate-count": null,
    "approximate-size": null,
    "compartment-id": "salesupload",
    "created-by": "salesdata",
    "defined-tags": null,
    "etag": "b7ced3b97859a3cc22a23670fc59a535",
    "freeform-tags": null,
    "id": null,
    "is-read-only": null,
    "kms-key-id": null,
    "metadata": null,
    "name": "salesdrop",
    "namespace": "salesupload",
    "object-events-enabled": null,
    "object-lifecycle-policy-etag": null,
    "public-access-type": "NoPublicAccess",
    "replication-enabled": null,
    "storage-tier": "Standard",
    "time-created": "2021-10-17T19:06:47+00:00",
    "versioning": "Disabled"
    },
    "etag": "b7ced3b97859a3cc22a23670fc59a535"
    }


    Step 2: Create a Pre-Authenticated URL for this bucket


    Below is my OCI client call, and the what the parameters mean.


    oci os preauth-request create --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop --name upload_sales_data --access-type AnyObjectWrite --time-expires="2022-11-21T23:00:00+00:00"

    To go through the parameters used:
    • config-file: Location of the configuration file
    • profile: Entry to use within the configuration file (if not the default)
    • namespace-name: For ZFS this is the share name
    • endpoint: This is the URL for the ZFS http server + "/oci" to use the OCI API
    • bucket-name: Bucket to create the Pre-Authenticated Request for.
    • name: Identifying name given to this request
    • access-type: What type of Pre-Authenticated request to create
    • time-expires: When will this URL expire? This is mandatory.
    Now to execute my request and create the URL.

    [oracle@oracle-server ~]$ oci os preauth-request create --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop  --name upload_sales_data --access-type AnyObjectWrite --time-expires="2022-11-21T23:00:00+00:00"
    {
    "data": {
    "access-type": "AnyObjectWrite",
    "access-uri": "/oci/p/CQmkSnXYrLcVgUnmRhuOmMXGTzDEJrf/n/salesupload/b/salesdrop/o/",
    "id": "11c01b5c-92d8-4c2d-8cba-d9ec4e2649c5",
    "name": "upload_sales_data",
    "object-name": null,
    "time-created": "2021-10-17T19:15:32+00:00",
    "time-expires": "2022-11-21T22:59:59+00:00"
    }
    }


    My request was successful, and I can see the URL that was created.  I just need to add the access-uri to the end of the HTTP host URL.

    http://10.0.0.231/oci/p/CQmkSnXYrLcVgUnmRhuOmMXGTzDEJrf/n/salesupload/b/salesdrop/o/


    Step 3: Upload my file

    Now I am going to upload the file from my Windows PC using curl.
    The file "sh_sales.dat" is on my d: drive.

    d:\> curl -X PUT --data-binary '@d:\sh_sales.dat' http://10.0.0.231/oci/p/CQmkSnXYrLcVgUnmRhuOmMXGTzDEJrf/n/salesupload/b/salesdrop/o/loadfiles/sales_history_05012021.dat
    d:\>

    No errors. Let's check and make sure the file got uploaded using the OCI client tool.

    [oracle@oracle-server ~]$ oci os object list --endpoint http://10.0.0.231/oci --namespace-name salesupload   --config-file ~/.oci/config --profile SALESDATA     --bucket-name salesdrop --fields name,size,timeCreated
    {
    "data": [
    {
    "etag": null,
    "md5": null,
    "name": "loadfiles/sales_history_05012021.dat",
    "size": 55180902,
    "time-created": "2021-10-17T19:35:34+00:00",
    "time-modified": null
    }
    ],
    "prefixes": []
    }

    I can see the file is there, and the size is 55MB.

    Now where can you go with this? Below is a diagram of how the Oracle IoT cloud can be used as a hub for datafiles from IoT devices. You can do the same thing by having all your IoT devices "drop" their data onto a central object store (hosted on ZFS), and then have it filtered and loaded into a database.


    Pre-Authenticated Requests for loading files

    This part of the post is going to show you how to use Pre-Authenticated Requests to load data into your database.

    First I wanted to give a shout out to @thatjeffsmith. Jeff Smith is the product manager for SQL Developer, and he has a blog http://www.thatjeffsmith.com where he constantly blogs about SQL Developer and all the great work his team is doing.
    I saw one of his posts on Pre-Authenticated Requests  to load data (which you can find here), and I realized that you can do almost the same things on any version of 19c+ with the object store on ZFS.

    I am going to go through most of the same steps Jeff did in his post.

    Step 1: Create the Pre-Authenticated Request to read the object.

    Jeff does this in the Console, but I am going to do it with the OCI Client tool.

    The command is similar to the command I used to create the "upload" request.
    I am going to use a different access-type. I am going to use "ObjectRead" and create a request that points to the object that was uploaded.

    [oracle@oracle-server]$ oci os preauth-request create --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop  --name upload_sales_data --access-type ObjectRead --time-expires="2022-11-21T23:00:00+00:00" --object-name loadfiles/sales_history_05012021.dat
    {
    "data": {
    "access-type": "ObjectRead",
    "access-uri": "/oci/p/apVWoQmeWWtireCzUqEjGBTRWQwotro/n/salesupload/b/salesdrop/o/loadfiles/sales_history_05012021.dat",
    "id": "547227b4-73b0-4980-bb94-ab5ee87d4c81",
    "name": "upload_sales_data",
    "object-name": "loadfiles/sales_history_05012021.dat",
    "time-created": "2021-10-17T19:56:45+00:00",
    "time-expires": "2022-11-21T22:59:59+00:00"
    }
    }


    Now I have my request URL.

    http://10.0.0.231/oci/p/apVWoQmeWWtireCzUqEjGBTRWQwotro/n/salesupload/b/salesdrop/o/loadfiles/sales_history_05012021.dat

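    Before loading it into the database, a quick way to sanity-check the read request is to pull the object down with curl (the local file name below is just an example):

    curl -o /tmp/sh_sales_check.dat http://10.0.0.231/oci/p/apVWoQmeWWtireCzUqEjGBTRWQwotro/n/salesupload/b/salesdrop/o/loadfiles/sales_history_05012021.dat
    ls -l /tmp/sh_sales_check.dat      ## should show roughly the 55MB file uploaded earlier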

    Step 2: Load the data

    Now back to Jeff's example, I am going to log onto my database and load the data.

    First I am going to count the rows in my table, then check again after.

    SQL> select count(1) from sales;

    COUNT(1)
    ----------
    0

    SQL> BEGIN
           DBMS_CLOUD.COPY_DATA(
             table_name    => 'SALES',
             file_uri_list => 'https://10.0.0.231/oci/p/apVWoQmeWWtireCzUqEjGBTRWQwotro/n/salesupload/b/salesdrop/o/loadfiles/sales_history_05012021.dat',
             format        => json_object('delimiter' VALUE '|') );
         END;
         /

    PL/SQL procedure successfully completed.


    SQL> select count(1) from sales;

    COUNT(1)
    ----------
    1016271

    SQL>


    I can see that just over 1 million rows were successfully loaded into the table.


    Step 3: Verify through USER_LOAD_OPERATIONS

    Now, like Jeff did with his example, I am going to look at the view USER_LOAD_OPERATIONS to see the information about my load job.


    col id format 999
    col type format a8
    col status format a10
    col start_time format a15
    col owner_name format a10
    col table_name format a20
    col file_uri_list format a70
    set linesize 160


    select
    id,
    type,
    to_char(update_time,'mm/dd/yy hh24:mi:ss') update_time,
    status,
    owner_name,
    table_name,
    substr(file_uri_list,60,160) File_uri_list,
    rows_loaded
    from
    user_load_operations
    where status='COMPLETED';

    SQL>

    ID TYPE UPDATE_TIME STATUS OWNER_NAME TABLE_NAME FILE_URI_LIST ROWS_LOADED
    ---- -------- ----------------- ---------- ---------- -------------------- ---------------------------------------------------------------------- -----------
    3 COPY 10/17/21 16:13:21 COMPLETED BGRENN SALES salesupload/b/salesdrop/o/loadfiles/sales_history_05012021.dat 1016271


    Other Pre-Authenticated Requests 


    There are 2 other "access-types" for Pre-Authenticated requests.

    • ObjectReadWrite: This gives both read and write access to a specific object.
    • ObjectWrite: This gives write-only access to a specific object (rather than having full access to the bucket).
    If you try to use the Pre-Authenticated Request for anything other than the object it was granted on, you get an error message.

    {"code": "NotAuthenticated", "message": "The required information to complete authentication was not provided"}
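    For example, a write-only request for a single object can be created with the same CLI syntax used earlier. This is just a sketch; the name and object below are taken from the write_sales_data entry shown in the listing in the next section.

    oci os preauth-request create --config-file ~/.oci/config --profile SALESDATA \
        --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop \
        --name write_sales_data --access-type ObjectWrite \
        --time-expires="2022-11-21T23:00:00+00:00" \
        --object-name loadfiles/sales_history_07012021.dat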

    List Pre-Authenticated Requests 

    You can list all of your Pre-Authenticated Requests to keep a handle on them.


    [oracle@oracle-server ~]$ oci os preauth-request list --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop
    {
    "data": [
    {
    "access-type": "AnyObjectWrite",
    "id": "11c01b5c-92d8-4c2d-8cba-d9ec4e2649c5",
    "name": "upload_sales_data",
    "object-name": null,
    "time-created": "2021-10-17T19:15:32+00:00",
    "time-expires": "2022-11-21T22:59:59+00:00"
    },
    {
    "access-type": "ObjectRead",
    "id": "547227b4-73b0-4980-bb94-ab5ee87d4c81",
    "name": "load_sales_data",
    "object-name": "loadfiles/sales_history_05012021.dat",
    "time-created": "2021-10-17T19:56:45+00:00",
    "time-expires": "2022-11-21T22:59:59+00:00"
    },
    {
    "access-type": "ObjectReadWrite",
    "id": "87b4fe97-3e2e-4b22-96aa-a7e3b566dc59",
    "name": "read_write_sales_data",
    "object-name": "loadfiles/sales_history_06012021.dat",
    "time-created": "2021-10-17T20:37:23+00:00",
    "time-expires": "2022-11-21T22:59:59+00:00"
    },
    {
    "access-type": "ObjectWrite",
    "id": "828a0651-60f7-4d2a-998c-b3518e1bfa92",
    "name": "write_sales_data",
    "object-name": "loadfiles/sales_history_07012021.dat",
    "time-created": "2021-10-17T20:40:08+00:00",
    "time-expires": "2022-11-21T22:59:59+00:00"
    }
    ]
    }


    Get Detail on a Pre-Authenticated Request


    If you want the detail of a specific Pre-Authenticated Request you can use the "get" option and include the --par-id (which is the ID from the list request command).

    [oracle@oracle-server ~]$ oci os preauth-request get --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop   --par-id 828a0651-60f7-4d2a-998c-b3518e1bfa92
    {
    "data": {
    "access-type": "ObjectWrite",
    "id": "828a0651-60f7-4d2a-998c-b3518e1bfa92",
    "name": "upload_sales_data",
    "object-name": "loadfiles/sales_history_07012021.dat",
    "time-created": "2021-10-17T20:40:08+00:00",
    "time-expires": "2022-11-21T22:59:59+00:00"
    }
    }


    NOTE: this does not give you the URL.

    Delete a Pre-Authenticated Request


    Finally, you can delete a Pre-Authenticated Request that is no longer needed by using the par-id of the request.


    [oracle@oracle-server ~]$ oci os preauth-request delete --config-file ~/.oci/config --profile SALESDATA   --namespace-name salesupload --endpoint http://10.0.0.231/oci --bucket-name salesdrop   --par-id 828a0651-60f7-4d2a-998c-b3518e1bfa92
    Are you sure you want to delete this resource? [y/N]: y





    Hopefully this gives you an idea of all the things you can do with Pre-Authenticated URLs.




    ZFSSA now offers immutable snapshots


     The latest ZFSSA software release (as of this post) is OS8.8.39

     This release contains the ability to make both scheduled snapshots and manual snapshots immutable, and I will go through how this works in this post.



    New Authorizations

    By default, non-root users are not authorized to create scheduled locked snapshots or manual locked snapshots, and they will see the message below.




    There are 3 new authorizations added to support Snapshot immutability.  The authorizations are

    • releaseSnapRetention - This allows the role to release a snapshot from its retention hold
    • scheduleLockedSnap - This allows the role to schedule a locked snapshot
    • retainSnap - This allows the role to create a manual locked snapshot

    In order to show how this works I created a new role "Security_Admin" and granted this role the new authorizations.

    You can see that the "Security_Admin" role has releaseSnapRetention, scheduleLockedSnap and retainSnap authorizations which reside under the "Projects and shares" scope.





    I then added the new role "Security_Admin" to my administration user.  This limits who has the authority to create and change the status on the immutable snapshots.


    Create a Manual Locked Snapshot (BUI) 

    First I am going to create a manual locked snapshot.  Below is the window that appears when I click on the "+" to create the snapshot.
    Notice below the name there is a new field "Retention policy". This can be either:
    • Off           - There is no retention hold on this snapshot (normal).
    • Unlocked - There is a retention hold on this snapshot, but it can still be released.
    I am going to create my manual snapshot with an "unlocked" retention policy.



    Change the retention setting of a Snapshot (BUI) 

    Once I create the manual snapshot, I can see that it has an "unlocked" retention when I click on the edit button.  Here I can update the snapshot and turn the retention policy to "Off" to unlock the snapshot when I am ready to delete it. I can also change the status of a snapshot without a retention to have a retention policy.




    Create a Manual Locked Snapshot (CLI) 

    1) Navigate to the share or project you want to snapshot.

    zfssim:shares NFSbackups> select NFS_immutable
    zfssim:shares NFSbackups/NFS_immutable>

     
    2) Enter the snapshots context

    zfssim:shares NFSbackups/NFS_immutable> snapshots
    zfssim:shares NFSbackups/NFS_immutable snapshots>


    3) Use the snapshot command followed by a "-r" to set the retention lock, and set the new snapshot name

    zfssim:shares NFSbackups/NFS_immutable snapshots> snapshot -r Save_until_Jan_1_2022
    zfssim:shares NFSbackups/NFS_immutable snapshots>


    4) You can use the list command to see the snapshots, and then select the snapshot

    zfssim:shares NFSbackups/NFS_immutable snapshots> select Save_until_Jan_1_2022
    zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022>

    5) The "show" command will display the settings for the snapshot, and you will see that has a retentionpolicy of "unlocked"

    zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022> show
    Properties:
    creation = Tue Nov 16 2021 20:35:25 GMT+0000 (UTC)
    numclones = 0
    isauto = false
    retentionpolicy = unlocked
    pool = generalpool1
    canonical_name = generalpool1/local/NFSbackups/NFS_immutable@Save_until_Jan_1_2022
    shadowsnap = false
    space_unique = 0
    space_data = 31K




    Change the retention setting of a Snapshot (CLI) 

    Continuing from the previous set of commands, the "show" command lets me see the status of the retention lock.
    Using "set retentionpolicy={off | unlocked}" you can change the status of a snapshot.

    Below is the example when I turned the retention policy to off for the snapshot I took in the prior example.


    zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022> get retentionpolicy
    retentionpolicy = unlocked
    zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022> set retentionpolicy=off
    retentionpolicy = off (uncommitted)
    zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022> get retentionpolicy
    retentionpolicy = off (uncommitted)
    zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022> commit
    zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022> show
    Properties:
    creation = Tue Nov 16 2021 20:35:25 GMT+0000 (UTC)
    numclones = 0
    isauto = false
    retentionpolicy = off
    pool = generalpool1
    canonical_name = generalpool1/local/NFSbackups/NFS_immutable@Save_until_Jan_1_2022
    shadowsnap = false
    space_unique = 0
    space_data = 31K

    Children:
    backups => Configure Cloud Backups
    targets => List snapshot parents per target

    zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022>



    Deleting a Manual Locked Snapshot

    BUI 

    When you delete a manual snapshot that has a retention policy, you will receive an error screen once you click through the "are you sure" message. Below is the message that will appear if the snapshot still has a retention lock.



    In order to allow the snapshot to be deleted, you need to edit the snapshot, and set the retention to "Off".  Once you remove the retention lock the snapshot can be deleted.

    CLI

    You will receive an error when trying to delete the snapshot. You must release the lock (or, in the case of a scheduled snapshot, wait for it to roll off).
    zfssim:shares NFSbackups/NFS_immutable@Save_until_Jan_1_2022> destroy
    This will destroy all data in "Save_until_Jan_1_2022"! Are you sure? (Y/N) y
    error: The action could not be completed because the target 'NFSbackups/NFS_immutable@Save_until_Jan_1_2022' is in use. It cannot be modified while it, or its children, are actively changing. Make sure no other users are editing the
    share configuration and try again. If this problem persists, contact your service provider.




    Enable Scheduled Locked Snapshots (BUI)

    The next step is to enable scheduled locked snapshots. You will notice (highlighted below) that there is a new option to enable the retention policy for locked scheduled snapshots under the project and share.

    shares --> share/project --> snapshots




    Create Scheduled Locked Snapshots (BUI)

    To create a scheduled snapshot that is locked, you will notice there are additional fields on the scheduling popup.  You have the ability to schedule a snapshot with retention either "Off" or "Locked".  When scheduled with "Locked" you must also decide how many of the "kept" snapshots will be locked. Below I am scheduling snapshots every half hour.  5 snapshots will be kept, and the most recent 3 snapshots will be locked (since I chose locked).


    Viewing retention status of scheduled snapshots (BUI)


    Using the schedule from above (5 snapshots, 3 of which are locked), below is what I am seeing after it has been executing for a while.  I chose one of the 3 most recent snapshots, and I can see that it has a status of "locked" and I am unable to change that status.






    Deleting locked scheduled snapshots (BUI) - Not allowed

    Now I am going to try to delete the schedule that contains locked snapshots.  I click on the delete button and hit apply. I get a message saying the snaps will be converted to manual snapshots.


    I click on "CONVERT" but it won't let me convert them to manual snapshots.


    If I try to change the schedule to have the snapshots no longer be "Locked", I get the same message.



    Enable Scheduled Locked Snapshots (CLI)

    I navigated through the CLI and got to the share that I wanted to create a scheduled, locked snapshot for.  I first need to make sure the property "snapret_enabled" is set to true. In my case it wasn't, so I set the value and committed the change.


    zfssim:shares NFSbackups/nfstest> get snapret_enabled
    snapret_enabled = false (inherited)
    zfssim:shares NFSbackups/nfstest> set snapret_enabled=true
    snapret_enabled = true (uncommitted)
    zfssim:shares NFSbackups/nfstest> commit
    zfssim:shares NFSbackups/nfstest> get snapret_enabled
    snapret_enabled = true
    zfssim:shares NFSbackups/nfstest>



    Create Schedule Locked Snapshots (CLI)

    Navigate to the share --> snapshots --> automatic

    Once there, create a new automatic snapshot entry and set its properties.
    In order to make this a locked snapshot, you need to set the property "retentionpolicy" to "locked".

    Below are the steps I followed to create a daily snapshot, kept for 35 days and immutable for 30 days.



     zfssim:shares NFSbackups/nfstest>
    zfssim:shares NFSbackups/nfstest> snapshots
    zfssim:shares NFSbackups/nfstest snapshots> automatic
    zfssim:shares NFSbackups/nfstest snapshots automatic> create
    zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> show
    Properties:
    frequency = (unset)
    day = (unset)
    hour = (unset)
    minute = (unset)
    keep = 0
    retentionhold = 0
    retentionpolicy = off

    zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> set frequency=day
    frequency = day (uncommitted)
    zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> set hour=06
    hour = 06 (uncommitted)
    zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> set minute=00
    minute = 00 (uncommitted)
    zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> set keep=35
    keep = 35 (uncommitted)
    zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> set retentionhold=30
    retentionhold = 30 (uncommitted)
    zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> set retentionpolicy=locked
    retentionpolicy = locked (uncommitted)
    zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> show
    Properties:
    frequency = day (uncommitted)
    day = (unset)
    hour = 06 (uncommitted)
    minute = 00 (uncommitted)
    keep = 35 (uncommitted)
    retentionhold = 30 (uncommitted)
    retentionpolicy = locked (uncommitted)

    zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)>
    zfssim:shares NFSbackups/nfstest snapshots automatic-000 (uncommitted)> commit
    zfssim:shares NFSbackups/nfstest snapshots automatic> show
    Properties:
    convert = false

    Automatics:

    NAME FREQUENCY DAY HH:MM KEEP
    automatic-000 day - 06:00 35





    Viewing retention status of scheduled snapshots (CLI)


    Below I listed out the snapshots that were automatically created. I can see that the snapshot I chose has a "retentionpolicy" of "locked", and this lock will be removed according to the schedule.


    zfssim:shares NFSbackups/zfsshare> snapshots
    zfssim:shares NFSbackups/zfsshare snapshots> list
    .auto-Bihourly_snapshots-20211116T193000UTC
    .auto-Bihourly_snapshots-20211116T200000UTC
    .auto-Bihourly_snapshots-20211116T203000UTC
    .auto-Bihourly_snapshots-20211116T210000UTC
    .auto-Bihourly_snapshots-20211116T213000UTC
    zfssim:shares NFSbackups/zfsshare snapshots> select .auto-Bihourly_snapshots-20211116T203000UTC
    zfssim:shares NFSbackups/zfsshare@.auto-Bihourly_snapshots-20211116T203000UTC> show
    Properties:
    creation = Tue Nov 16 2021 20:30:00 GMT+0000 (UTC)
    numclones = 0
    isauto = true
    retentionpolicy = locked
    pool = generalpool1
    canonical_name = generalpool1/local/NFSbackups/zfsshare@.auto-Bihourly_snapshots-20211116T203000UTC
    shadowsnap = false
    space_unique = 0
    space_data = 1.22G

    Children:
    backups => Configure Cloud Backups
    targets => List snapshot parents per target

    zfssim:shares NFSbackups/zfsshare@.auto-Bihourly_snapshots-20211116T203000UTC>




    BONUS : 


    In the audit logs you can see the changes occur, and who made them. I highlighted where I changed the status of one of the snapshots from Unlocked to Off, and from Off to Unlocked.





    Backing up your database to a bucket in OCI and restoring it in OCI


     This is the first of a multi-part blog series walking through how to copy your TDE encrypted on premise Oracle Database to an OCI VM in the oracle cloud using the Oracle Database Backup Cloud Service. 


    I am going to start with a simple test case of a small database which doesn't contain any TDE encryption or wallet, and back it up to an OCI bucket.

    As far as where to start, below are some documentation links that will help.


    NOTE: You will be downloading and installing the library files on both the source database and the destination database.

    Install Database backup module

    The first thing I am going to do is unzip the Cloud Backup Module (opc_installer.zip).  This can be downloaded using the link above, but it can also be found within the $ORACLE_HOME/lib directory.  As always, I would recommend downloading the current copy to be sure it is the latest release.   Once unzipped you will find the module contains a directory "opc_installer".  Within it there are 2 subdirectories, each with a ".jar" file to install the library and a readme file.

        oci_installer/                                                  ---> OCI (Oracle Cloud Native) library install
                           oci_install.jar
                           oci_readme.txt
        opc_installer/                                                  ---> OPC (Oracle Cloud Gen 1/swift) library install
                           opc_install.jar
                           opc_readme.txt

    I am going to use the "oci_install.jar" file and access the bucket using the Oracle Cloud Native API.

    If I look in the "readme" file, I can see that I install the library using the following parameters.


    I am going to install my files within a new directory for my Database host.

    /home/oracle/ocicloud/
                                        config/
                                        lib/
                                        wallet/

    To install and configure my library I am going to execute

    java -jar oci_install.jar \
         -host https://objectstorage.us-ashburn-1.oraclecloud.com \
         -pvtkeyFile /home/oracle/ocicloud/myprivatekey.ppk \
         -pubFingerPrint 6d:f9:57:d5:ff:b1:c0:98:81:90:1e:6e:08:0f:d0:69 \
         -tOCID ocid1.tenancy.oc1..aaaxxxnoq \
         -uOCID ocid1.user.oc1..aaaaaaaae2mlwyke4gvxxsaouxq \
         -bucket migest_backups \
         -walletDir /home/oracle/ocicloud/wallet \
         -configFile /home/oracle/ocicloud/config/migtestbackup.ora \
         -cOCID ocid1.compartment.oc1..aaaaaaaanqbquh2bwju4igzd3tyq \
         -libDir /home/oracle/ocicloud/lib

    Oracle Database Cloud Backup Module Install Tool, build MAIN_2021-08-31
    Oracle Database Cloud Backup Module credentials are valid.
    Backups would be sent to bucket migest_backups.
    Oracle Database Cloud Backup Module wallet created in directory /home/oracle/ocicloud/wallet.
    Oracle Database Cloud Backup Module initialization file /home/oracle/ocicloud/config/migtestbackup.ora created.
    Downloading Oracle Database Cloud Backup Module Software Library from Oracle Cloud Infrastructure.
    Download complete.

    Now that it is successfully installed we can go to configuring the module.

    Configure Database backup module

    Running the command below, let's see what is in my directory now.

    find . | sed -e "s/[^-][^\/]*\// |/g" -e "s/|\([^ ]\)/|-\1/"

    .
    |-lib
    | |-bulkimport.pl
    | |-libopc.so
    | |-metadata.xml
    | |-odbsrmt.py
    | |-perl_readme.txt
    | |-python_readme.txt
    |-config
    | |-migtestbackup.ora
    |-wallet
    | |-cwallet.sso.lck
    | |-cwallet.sso
    |-oci_install.jar
    |-myprivatekey.ppk

    Looking at the configuration file created you can see the information used to connect to the bucket in the OCI Object store.

    OPC_HOST=https://objectstorage.us-ashburn-1.oraclecloud.com/n/id20avsofo
    OPC_WALLET='LOCATION=file:/home/oracle/ocicloud/wallet CREDENTIAL_ALIAS=alias_oci'
    OPC_CONTAINER=migest_backups
    OPC_COMPARTMENT_ID=ocid1.compartment.oc1..aaaaaaaanqbquh2bwju4igabu5g7xuiijael2fwcpqyvzzb4ykd3tyq
    OPC_AUTH_SCHEME=BMC


    Now we can create the channel configuration to send backups to the OCI bucket.
    The channel would be configured with a command like the one below, filling in the library location and configuration file.

    CONFIGURE CHANNEL DEVICE TYPE 
            'SBT_TAPE' PARMS 
             'SBT_LIBRARY={library name and location},
                    SBT_PARMS=(OPC_PFILE=/{configuration file})';

    Below are the commands I am going to execute in RMAN to configure my channel and settings to backup my database.




    ## Default device type is tape
    CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';

    ## Backup using the library and config file we just installed
    CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/ocicloud/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/home/oracle/ocicloud/config/migtestbackup.ora)';

    ## Backup with 4 channels to a compressed backupset
    CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;

    ## Use medium compression since this is included in the license for the module.
    CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';

    ## Encrypt the backup being sent, this is mandatory for writing to the cloud.
    CONFIGURE ENCRYPTION FOR DATABASE ON;



    Backup Database to an OCI bucket

    Set a password to encrypt the backup (it must be encrypted to send to a bucket) and perform a full backup.



    set encryption on identified by oracle only;
    backup incremental level 0 database plus archivelog not backed up;




    This will send the backup to the object store

    Configure Database backup module in OCI.

    I am going to go through the same series of steps to install the Oracle Database Cloud backup Module in my OCI instance. 

    Oracle Database Cloud Backup Module Install Tool, build MAIN_2021-08-31
    Oracle Database Cloud Backup Module credentials are valid.
    Backups would be sent to bucket migest_backups.
    Oracle Database Cloud Backup Module wallet created in directory /home/oracle/ocicloud/wallet.
    Oracle Database Cloud Backup Module initialization file /home/oracle/ocicloud/config/migtestbackup.ora created.
    Downloading Oracle Database Cloud Backup Module Software Library from Oracle Cloud Infrastructure.
    Download complete.

    Configure pfile for database in OCI.


    I now need to configure my database pfile in OCI. I just need a few basic things 

    audit_file_dest='/u01/app/oracle/admin/migtest/adump'
    *.audit_trail='db'
    *.compatible='19.0.0'
    *.control_files='/u01/app/oracle/oradata/MIGTEST/controlfile/controlfile1.ctl','/u01/app/oracle/oradata/MIGTEST/controlfile/controlfile2.ctl'
    *.db_block_size=8192
    *.db_create_file_dest='/u01/app/oracle/oradata'
    *.db_name='migtest'
    *.processes=300
    *.sga_target=4638m


    Restore pfile and controlfile for database in OCI

    There are a few steps to get ready to restore the spfile and controlfile (a sketch of these steps follows the list).
    • I add my database to "/etc/oratab" to ensure I can connect to it, and run ". oraenv" to set the environment.
    • I start up the database nomount.
    • I go back to the original database to retrieve the DBID.
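    A rough sketch of those prep steps is below (the ORACLE_HOME path and pfile name are examples for illustration, not values from this post):

    ## Add the database to /etc/oratab and set the environment
    echo "migtest:/u01/app/oracle/product/19.0.0/dbhome_1:N" >> /etc/oratab
    export ORACLE_SID=migtest ORAENV_ASK=NO
    . oraenv

    ## Start the instance nomount so the spfile can be restored into place
    sqlplus / as sysdba <<EOF
    startup nomount pfile='/home/oracle/initmigtest.ora';
EOF

    ## On the ORIGINAL (source) database, retrieve the DBID needed for the restore
    sqlplus / as sysdba <<EOF
    select dbid, name from v\$database;
EOF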

    Now I am ready to restore the spfile (note that I am setting the password to decrypt the backups).


    In RMAN I restore the spfile
    set decryption identified by oracle;

    run {
    allocate CHANNEL c1 TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/ocicloud/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/home/oracle/ocicloud/config/migtestbackup.ora)';
    restore spfile from autobackup ;
    release channel c1;
    }
    rman target /
    RMAN> set decryption identified by oracle;

    executing command: SET decryption
    run {
    allocate CHANNEL c1 TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/ocicloud/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/home/oracle/ocicloud/config/migtestbackup.ora)';
    restore spfile from autobackup ;
    release channel c1;
    }


    RMAN> 2> 3> 4> 5>
    allocated channel: c1
    channel c1: SID=20 device type=SBT_TAPE
    channel c1: Oracle Database Backup Service Library VER=21.0.0.1

    Starting restore at 20-DEC-21

    channel c1: looking for AUTOBACKUP on day: 20211220
    channel c1: AUTOBACKUP found: c-286701374-20211220-00
    channel c1: restoring spfile from AUTOBACKUP c-286701374-20211220-00
    channel c1: SPFILE restore from AUTOBACKUP complete
    Finished restore at 20-DEC-21


    Then I restore the controlfile.


    set decryption identified by oracle;

    run {
    allocate CHANNEL c1 TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/ocicloud/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/home/oracle/ocicloud/config/migtestbackup.ora)';
    restore controlfile from autobackup ;
    release channel c1;
    }

    RMAN>
    executing command: SET decryption
    using target database control file instead of recovery catalog


    RMAN> 2> 3> 4> 5>
    allocated channel: c1
    channel c1: SID=17 device type=SBT_TAPE
    channel c1: Oracle Database Backup Service Library VER=21.0.0.1

    Starting restore at 20-DEC-21

    channel c1: looking for AUTOBACKUP on day: 20211220
    channel c1: AUTOBACKUP found: c-286701374-20211220-00
    channel c1: restoring control file from AUTOBACKUP c-286701374-20211220-00
    channel c1: control file restore from AUTOBACKUP complete
    output file name=/u01/app/oracle/oradata/MIGTEST/controlfile/controlfile1.ctl
    output file name=/u01/app/oracle/oradata/MIGTEST/controlfile/controlfile2.ctl
    Finished restore at 20-DEC-21

    released channel: c1



    Now I can mount the database


    Restore the datafile for the database in OCI


    The datafile location in OCI is different from on premise:

    My on-premise database uses  "/home/oracle/app/oracle/oradata/"
    My OCI database uses  "/u01/app/oracle/oradata/"

    I am going to create a script to set the new names for the datafiles I am restoring to.




    set linesize 160
    set pagesize 0

    SELECT REPLACE(file_name,'/home/oracle/app/oracle/oradata/','/u01/app/oracle/oradata/') "Changes"
    FROM (select
    'set newname for datafile ' || file# || ' to ' || '''' || name || '''' || ';' file_name
    from v$datafile
    )
    ;

    This will create the script that sets the new names for my datafiles.
    I just need to execute it in RMAN within a run block.

    run {
    set newname ....
      }
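    If you want to capture the output in a file rather than cutting and pasting, a sketch like the one below will spool the generated commands (the spool file name is just an example):

    sqlplus -s / as sysdba <<EOF
    set linesize 160 pagesize 0 feedback off
    spool /tmp/set_newname.rman
    SELECT REPLACE(file_name,'/home/oracle/app/oracle/oradata/','/u01/app/oracle/oradata/')
    FROM (select 'set newname for datafile ' || file# || ' to ' || '''' || name || '''' || ';' file_name
          from v\$datafile);
    spool off
EOF

    ## Review the generated SET NEWNAME commands before adding them to the run block
    cat /tmp/set_newname.rman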

    Now I configure the channels just like I did for my on-premise database (unless they are already set from the restored controlfile).



    ## Default device type is tape
    CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';

    ## Backup using the library and config file we just installed
    CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/ocicloud/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/home/oracle/ocicloud/config/migtestbackup.ora)';

    ## Backup with 4 channels to a compressed backupset
    CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;

    ## Use medium compression since this is included in the license for the module.
    CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';

    ## Encrypt the backup being sent, this is mandatory for writing to the cloud.
    CONFIGURE ENCRYPTION FOR DATABASE ON;


    Now we can restore and recover the database and switch to the new copy of the datafiles.


    run {
      # the set newname commands generated above go here
      restore database;
      switch datafile all;
      recover database;
    }


    And finally (if we want to start it up) open it resetlogs.

    RMAN> alter database open resetlogs;

    Statement processed

    RMAN>


    That's all there is to it.


    Backing up Oracle Key Vault from your datacenter to OCI


      This is the second of a multi-part blog series walking through how to copy your TDE encrypted on premise Oracle Database to an OCI VM in the oracle cloud. This blog post will focus on how to leverage OKV (Oracle Key Vault) to help with storing, backing up, and migrating encryption keys. In this post I will walk through backing up OKV to both a local ZFS, and an OCI bucket.

    The first part of this series went through how to migrate a database from on premise to the OCI cloud using the Oracle Database Cloud Backup Module. You can find it here.

    I will add to this first by including how to migrate my OKV (Oracle Key Vault) environment to OCI to allow me to restore my encrypted database in OKV.

    I am going to skip over how to migrate my database to using OKV. If you are starting at the beginning (small database, no encryption), the steps to get to this point are:

    1. Create a new database for testing.
    2. Implement Advanced Security (TDE) which is covered in my post here.
    3. Migrating from a local wallet to OKV which is covered in my post here.
    At this point my database (ocitest) is using my OKV environment on premise, and I am ready to back up and restore my OKV host along with my database.

    Backup the database to an OCI bucket


    First I am going to back my database up to an OCI bucket.

    I am going to set my channels and perform a level 0 backup with archive logs.

    NOTE: It is encrypted using the encryption key from OKV, rather than a password.


    ### Default device is tape
    CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';


    ### Ensure autobackups are sent to my bucket
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default


    ### Backup set is a compressed backupset (this is included for free with the backup module)
    CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;

    ### Channel configuration
    CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/ocicloud/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/home/oracle/ocicloud/config/migtestbackup.ora)';

    ### Encryption is on
    CONFIGURE ENCRYPTION FOR DATABASE ON;
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default

    ### Compression is medium
    CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;


    Configure ZFSSA as a destination

    Step # 1 Add a dedicated user on the ZFSSA to own the backups.

    Log onto the ZFSSA console and go to "Configuration" => "Users". Add a new user to be the owner of the OKV backups on the ZFSSA.  Click add when completed.



    Step # 2 Retrieve the public SSH key from OKV.

    Log onto the OKV console and go to "System" => "Settings" => "Backup and Restore". Click on "Backup and Restore" and then "Manage Backup Destinations".  Once there click on "Create" to add a new backup destination.

    On the screen below you want to capture the "Public Key", which is the string AFTER the "ssh-rsa".  You can save this in notepad, or some sort of scratch pad, without the beginning "ssh-rsa" portion.


    Step # 3 Add the user to the ZFSSA with the public key

    Now go back to the ZFSSA console, and log into the configuration for SFTP. This can be found under "SERVICES" => "SFTP". Click on "SFTP", and you will see the screen in the background below.  Click on the "+" to the left of "Keys".  On the window that pops up you will enter the "Public Key" characters you previously saved, and the "user" that you created as the owner of the OKV backups. Once you complete this, click on "ADD" to add the OKV public key to the ZFSSA.



    Step # 4 Add a new Project/Share to hold the OKV backups

    Add a new project to hold the backups.  Within the project, navigate to the "General" tab, go to the bottom of the window, and change the "Default Settings".  For this project, the "user" should be the user you created on the ZFS. This ensures that the OKV backups are separate from other backups on the ZFS, and are only accessible by the new user.



    Then set the protocol for the project to be SFTP as the only read/write protocol on the "Protocols" tab.


    Navigate to the "Snapshots" tab and we will now create 3 immutable snapshots taken every day.
    Ensure you click on "Enable retention policy for Scheduled Snapshots"
    Under the Snapshots section, click on the “Schedules” tab and click on the “+” next to it.
    Change the desired frequency of the snapshot to daily for a daily backup that matches the OKV backup.
    Change the “scheduled time” to a time of day following the daily backup.
    Decide how many backups in total you wish to keep. This is the “KEEP AT MOST”.
    Change the “RETENTION” to “Locked” with the drop down to ensure the backups will be immutable:
    Decide how many backups you wish to keep as immutable. This is the “RETENTION”.
    Click on “Apply”.


    And then add a new share to the project to hold the backups.

    Step # 5 Add the ZFSSA as a destination.

    Go back to OKV and navigate back to the “Create Backup Destination” under “System”.
    On the “Create Backup Destination” page:
    • Give the “Destination Name” the name you want to use for the ZFS.
    • Change “Transfer Method” to “sftp” using the radio button.
    • Enter the “Hostname” for the ZFS. This can be either the IP or the DNS name.
    • Under the “Port” ensure the port matches the ZFS port used for “SFTP”, which defaults to 218.
    • Enter the “Destination Path” which is “/export/” followed by the share name created in Step 4.
    • Enter the “User Name” which is the user created in Step 1, the user that owns the share from Step 4.
    • Click on “Save”.


    Backup OKV to  ZFSSA 

    On the “Backup” screen:
    • Give your backup a descriptive name.
    • Leave the start time (or change it to the time to run the backup).
    • Choose the destination entered in Step 5.
    • Change the dial to “PERIODIC” to schedule a regular backup.
    • Choose the frequency for the backup.
    • Click on Schedule.


    Once the first backup completes you will see it on this "Backup and Restore" window.


    Backup   ZFSSA to OCI

     

    Now that we have our backup sent to the ZFSSA, we need to configure the ZFSSA to send the backup to an OCI bucket.  Navigate to "SERVICES" => "Cloud" on the ZFSSA, and click on the "+" sign to the left of "Targets" to add a new cloud target.  On the window that pops up, enter the authentication information for your cloud bucket in OCI (the bucket should be set as immutable).

    In the “Add Cloud Target” window enter:
    • The name of the cloud target. If you are setting up multiple targets to different buckets, including the bucket name is most descriptive.
    • The location, which is https://objectstorage.{cloud location for your tenancy and bucket}.oraclecloud.com.
    • The bucket name from the previous step.
    • The “User”, which is the user OCID from the previous steps.
    • The “Tenancy”, which is the tenancy OCID from the previous steps.
    • The “Private key” associated with the public key assigned in OCI.
    • Any proxy information and bandwidth information if needed.
    • Click on “ADD”.


    Navigate to your project, and go to the "Snapshots" tab. You should see the snapshots that have been created. Click on the symbol under "Clones" that looks like a globe.


    Once there, choose the target you previously created, send the backup in "tar" format, and click on "APPLY". This will send a copy of your OKV backup (which is encrypted) to your bucket in OCI as an offsite backup.




    Restoring OKV in the Oracle Cloud to manage your encrypted databases


      This is the third of a multi-part blog series walking through how to copy your TDE encrypted on premise Oracle Database to an OCI instance in the oracle cloud. This blog post will focus on how to restore OKV (Oracle Key Vault) into an instance in OCI to manage your encryption keys, and support restoring an encrypted database.



    The first part of this series went through how to migrate a database from on premise to an instance in the cloud using the Oracle Database Cloud Backup Module. You can find it here.

    The second part of this series went through how to backup OKV to an immutable OCI bucket leveraging ZFSSA. You can find it here.

    I will add to this by restoring from my OKV backup into the Oracle Cloud (OCI), and then restoring my database.


    I am going to skip over how to migrate my database to using OKV. If you are starting at the beginning (small database, no encryption), the steps to get to this point are:

    1. Create a new database for testing.
    2. Implement Advanced Security (TDE) which is covered in my post here.
    3. Migrating from a local wallet to OKV which is covered in my post here.
    4. Backup your database to an OCI bucket encrypted, and compressed.
    At this point my database (ocitest) is using my OKV environment on premise, and I have a backup of both my database and OKV in Object Storage in the Oracle Cloud.


    Create a ZFS Image in OCI to restore OKV from Object Store.


    Log into OCI (you can do this with the 30 day trial), and create a new instance using the ZFS image. Below you can see that you can find this image under "Oracle images".


    Select this image, upload your public key, and create the new instance.

    There are a couple of great step-by-step guides to help you get started with the ZFS image in OCI.
    I am not going to go through the process, as those 2 documents are extremely thorough, and will give you the detail needed to configure ZFS with attached storage within OCI.

    Create an OKV Image in OCI to restore OKV from Object Store.


    The next step to restore OKV is to create an OKV image in OCI.  At this point it is CRITICAL to create an image that is the same version as the source OKV backup.  As of writing this post, I am on OKV 21.2, and I will create a 21.2 instance in OCI.


    Again, there is great documentation on how to go through this process.  You need to create a "SYSADMIN" user. Since the users within OKV will get replaced during the restore, this user will only be used temporarily.  Below are the links to start with.
    NOTE:
    • Always deploy the same version in OCI as the backup you are restoring from.
    • The command when first logging into the image to configure it may be different from the video, but the login screen will give you clear instructions.

    Configure ZFS as a backup location for OKV


    At this point, following my last blog post (found here), you go through the same series of steps in OCI to configure OKV to use ZFS as a backup location, just as was done to configure the original backups.
    • Create the user on the ZFS image to own the backups
    • Log into OKV and save the "public key" for Authentication.
    • Configure SFTP on the ZFS image, and add the "Public Key" for the new user.
    • Configure the OCI Object Store on the ZFS image as a "cloud target" pointing to the same bucket you had written to.
    • Create a new project on the ZFS image with the OKV backup owner as the owner of the project.
    • Configure protocols on the new project to ensure that "SFTP" is read/write.
    The steps left NOT completed are
    • Creating a share within the project
    • Creating a backup location within OKV.

    Restore the share to the ZFS image in OCI


    Now we are ready to restore the backup from the OCI bucket to a share on the ZFS image.
    On the ZFS, navigate to "SERVICES" => "Cloud", and within "Cloud" click on the "Backups" tab. Within that tab you will see the ZFS backups that have been sent to the target.
    Find the backup that you want, and click on the circular arrow to restore that backup.


    This will bring up a popup window where you will choose where to restore the backup to.  Choose the project that you previously created (with the OKV backup user, and the "SFTP" protocol enabled). Give the share a name, and click on "APPLY".


    Then once you click on "APPLY" you will see a status popup telling you when it is completed.


    When it completes the restore, take note of the share name, and you can configure OKV to restore from this share.

    Restore the OKV backup in OCI


    Now return to the OKV image in OCI, and navigate to "System" => "Backup and Restore" and create a new backup location, like we had done to create the original backup.
    This time enter information for the ZFS image in OCI, and include the destination as "/export/{restored share name}".

    Once this is configured click on the "Restore" button, and it will bring up a list of backups that are available to restore from the ZFS share.

    Choose the backup you want to use (the backup time will help narrow it down). Click on "Restore" and it will bring up a popup window to enter the "Recovery Passphrase". Enter the passphrase set when OKV was originally installed in your data center, and click on "Restore".

    NOTE: The backup is encrypted using the "Recovery Passphrase", and it is critical that you have the original passphrase available to complete this step.


    When the restore starts, you will see a message, and OKV will not be available until the restore process completes.


    Re-enroll your database  in OCI

    Once OKV is restored, the users from your original on-premise OKV environment are restored along with the keys. The only items retained from the new OKV deployment are
    • root
    • support
    • "recovery passphrase"
    Within OCI where you are restoring your database, you will configure the database environment to start the restore process.  I started by creating a pfile, and some of the directories needed.

    audit_file_dest='/u01/app/oracle/admin/ocitest/adump'
    audit_trail='db'
    compatible='19.0.0'
    control_files='/u01/app/oracle/oradata/OCITEST/controlfile/o1_mf_jo6q53rf_.ctl'
    db_block_size=8192
    db_create_file_dest='/u01/app/oracle/oradata'
    db_name='ocitest'
    db_recovery_file_dest='/u01/app/oracle/fast_recovery_area'
    db_recovery_file_dest_size=32212254720
    diagnostic_dest='/u01/app/oracle'
    enable_pluggable_database=true
    pga_aggregate_target=1547m
    processes=300
    sga_target=4638m
    tde_configuration='KEYSTORE_CONFIGURATION=OKV|FILE'
    undo_tablespace='UNDOTBS1'
    wallet_root='/u01/app/wallets/ocitest'

    NOTE: Since you need OKV to decrypt the RMAN backup of the controlfile, you need to ensure the pfile contains the "WALLET_ROOT" and "TDE_CONFIGURATION" parameters.

    Within OKV I re-enrolled the endpoint for my database, and I downloaded and installed the "okvclient.jar" in  the "WALLET_ROOT"/okv location.

    Now to restore my database, I can use a script, like the script below to
    • Startup nomount
    • Open the wallet pointing to my keys in OKV
    • Set the DBID
    • Allocate the channel
    • Restore the controlfile
    • Mount the database.



    sqlplus / as sysdba
    SQL> startup nomount;
    SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "0KV2021!";
    SQL> exit


    rman target /
    RMAN> set dbid=301925655;
    RMAN> run {
    RMAN> allocate CHANNEL c1 TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/home/oracle/ociconfig/config/ocitestbackup.ora)';
    RMAN> restore controlfile from autobackup ;
    RMAN> release channel c1;
    RMAN> }
    RMAN> alter database mount;
    Once mounted, I can follow the normal steps to restore my database, and my encryption keys are available.  The backup information for my OCI bucket is in my controlfile.
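    Those normal steps look roughly like the sketch below (it reuses the channel and configuration file from this post; if the datafile locations differ in OCI, add the SET NEWNAME / SWITCH steps covered in the first post of this series):

    rman target / <<EOF
    run {
      allocate CHANNEL c1 TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/ociconfig/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/home/oracle/ociconfig/config/ocitestbackup.ora)';
      restore database;
      recover database;
      release channel c1;
    }
    alter database open resetlogs;
EOF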

    Cataloging backups and recovering an Oracle Database from the OCI object store


       This is the fourth and final post of a multi-part blog series walking through how to copy your TDE encrypted on premise Oracle Database to an OCI instance in the oracle cloud. This blog post will focus on how to restore your database from the object store, when the backup pieces are not available from your controlfile. 





    There are a few reasons why this might be the case.

    • The backups were written to the ZDLRA directly.
    • You are using an RMAN catalog, and they have aged off the controlfile.
    • They are "keep" backups which will be stored in the RMAN catalog.
    • You had to rebuild the controlfile, and lost history of backups.
    Whatever the reason, there is a way to find out what backups are in the object store for your database, and you will be able to catalog them.

    NOTE: You can use this same script to delete old backups directly if you've lost your catalog entries.

    When you download the Oracle Cloud Backup installation zip file and execute the "oci_install.jar" command to download the library, you will find 5 extra files in the lib directory along with the "libopc.so" file that is used by the RMAN channel. The 2 we are going to use are:
    • odbsrmt.py - python script to manage the contents of the object store bucket
    • python_readme.txt - documentation for how to use the above python script

    Step #1 Execute odbsrmt.py to get a listing of your backup pieces.

    NOTE: The python script uses python 2.x and will not work with python 3.x.  Python 3.x is typically the default version in your path, and you might have to find the 2.x version on your system. For my system this means executing "python2" rather than "python"

    If I execute the script without any parameters, I can see what parameters are expected.



    [oracle@oracle-19c-test-tde lib]$ python2 odbsrmt.py
    usage: odbsrmt.py [-h] --mode
    {report,rman-listfile,garbage-collection,delete,recall}
    [--ocitype {classic,swift,bmc,archive}]
    [--credential CREDENTIAL] [--token TOKEN] --host HOST
    [--base BASE] [--forcename FORCENAME]
    [--format {text,xml,json}] [--dbid DBID]
    [--container CONTAINER] [--dir DIR] [--prefix PREFIX]
    [--untildate UNTILDATE] [--exclude_deferred]
    [--thread THREAD] [--proxyhost PROXYHOST]
    [--proxyport PROXYPORT] [--tocid TOCID] [--uocid UOCID]
    [--pubfingerprint PUBFINGERPRINT] [--pvtkeyfile PVTKEYFILE]
    [--skip_check_status] [--debug]
    odbsrmt.py: error: argument --mode is required

    Now let's go through the most common parameters I am going to use to report on my backups




    And now to execute the command to see some of the report.


    python2  odbsrmt.py --mode report --ocitype bmc  --host https://objectstorage.us-ashburn-1.oraclecloud.com --dir /home/oracle/ocicloud/report --base mydbreport --pvtkeyfile  /home/oracle/ocicloud/myprivatekey.ppk --pubfingerprint 6d:f9:57:d5:ff:b1:c0:98:81:90:1e:6e:08:0f:d0:69 --tocid ocid1.tenancy.oc1..aaaaaaaanz4trskw6jm57cz2fztoasatto3i6z4h33gzfb3pmei5vvnoq --uocid ocid1.user.oc1..aaaaaaaae2mlwyke4gvd7kzxv5zxgg3k2dlcwvubv7vjy6jvbgsaouxq --container migest_backups  --dbid 301925655


    And this will give me the following output in my report file.

    FileName
    Container Dbname Dbid FileSize LastModified BackupType Incremental Compressed Encrypted
    220h9q5f_66_1_1
    migest_backups OCITEST 301925655 72876032 2021-12-21 19:37:33 ArchivedLog false true true
    230h9q5g_67_1_1
    migest_backups OCITEST 301925655 75759616 2021-12-21 19:37:32 ArchivedLog false true true
    240h9q5g_68_1_1
    migest_backups OCITEST 301925655 54263808 2021-12-21 19:37:12 ArchivedLog false true true
    250h9q5g_69_1_1
    migest_backups OCITEST 301925655 48496640 2021-12-21 19:36:58 ArchivedLog false true true
    260h9q9n_70_1_1
    migest_backups OCITEST 301925655 159645696 2021-12-21 19:42:46 Datafile true true true
    270h9q9n_71_1_1
    migest_backups OCITEST 301925655 408682496 2021-12-21 19:47:04 Datafile true true true
    280h9q9n_72_1_1
    migest_backups OCITEST 301925655 524288 2021-12-21 19:37:46 Datafile true true true
    290h9q9n_73_1_1
    migest_backups OCITEST 301925655 56885248 2021-12-21 19:39:37 Datafile true true true
    2a0h9q9v_74_1_1
    migest_backups OCITEST 301925655 235667456 2021-12-21 19:45:05 Datafile true true true
    2b0h9qdi_75_1_1
    migest_backups OCITEST 301925655 233832448 2021-12-21 19:46:18 Datafile true true true
    2c0h9qjb_76_1_1
    migest_backups OCITEST 301925655 52166656 2021-12-21 19:44:31 Datafile true true true
    2d0h9qmk_77_1_1
    migest_backups OCITEST 301925655 1572864 2021-12-21 19:44:43 Datafile true true true
    2e0h9qn3_78_1_1
    migest_backups OCITEST 301925655 34865152 2021-12-21 19:45:41 Datafile true true true
    2f0h9qns_79_1_1
    migest_backups OCITEST 301925655 524288 2021-12-21 19:45:20 Datafile true true true
    2g0h9qrg_80_1_1
    migest_backups OCITEST 301925655 262144 2021-12-21 19:47:14 ArchivedLog false true true
    c-301925655-20211221-00
    migest_backups OCITEST 301925655 524288 2021-12-21 19:47:22 ControlFile SPFILE false true true
    Total Storage: 1.34 GB


    You can see that this report contains  the backup pieces I need. 

    I am going to use the script (below) and pass it the report name to create the commands to catalog the backup pieces.
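    A minimal sketch of what that script does is below. It assumes the report layout shown above (two header lines, each backup piece name on its own line followed by a detail line, and a trailing "Total Storage" line), and simply turns every piece name into a catalog command.

    #!/bin/bash
    ## Sketch only: generate RMAN catalog commands from an odbsrmt.py report file
    REPORT_FILE=$1
    echo "report file used for catalog scripts   : ${REPORT_FILE}"
    echo " "
    ## Piece-name lines are the only lines with a single field (other than the "FileName" header)
    awk 'NF==1 && $1 != "FileName" {
        printf "catalog device type '\''sbt_tape'\'' backuppiece '\''%s'\'';\n", $1
    }' "${REPORT_FILE}"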



    And when I execute the above script passing my report file, it produces my commands to catalog the backup pieces.

    report file used for catalog scripts   : mydbreport4701.lst


    catalog device type 'sbt_tape' backuppiece '220h9q5f_66_1_1';
    catalog device type 'sbt_tape' backuppiece '230h9q5g_67_1_1';
    catalog device type 'sbt_tape' backuppiece '240h9q5g_68_1_1';
    catalog device type 'sbt_tape' backuppiece '250h9q5g_69_1_1';
    catalog device type 'sbt_tape' backuppiece '260h9q9n_70_1_1';
    catalog device type 'sbt_tape' backuppiece '270h9q9n_71_1_1';
    catalog device type 'sbt_tape' backuppiece '280h9q9n_72_1_1';
    catalog device type 'sbt_tape' backuppiece '290h9q9n_73_1_1';
    catalog device type 'sbt_tape' backuppiece '2a0h9q9v_74_1_1';
    catalog device type 'sbt_tape' backuppiece '2b0h9qdi_75_1_1';
    catalog device type 'sbt_tape' backuppiece '2c0h9qjb_76_1_1';
    catalog device type 'sbt_tape' backuppiece '2d0h9qmk_77_1_1';
    catalog device type 'sbt_tape' backuppiece '2e0h9qn3_78_1_1';
    catalog device type 'sbt_tape' backuppiece '2f0h9qns_79_1_1';
    catalog device type 'sbt_tape' backuppiece '2g0h9qrg_80_1_1';
    catalog device type 'sbt_tape' backuppiece 'c-301925655-20211221-00';


    Now in RMAN I can execute these commands to catalog the backup pieces from the OCI bucket.

    Note: By using "--untildate" you can control the dates that will be reported on, as in the example below.
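    For example, adding --untildate to the same report command limits it to backup pieces created before that date (the date format shown is an assumption; check python_readme.txt for the exact format it expects):

    python2 odbsrmt.py --mode report --ocitype bmc \
      --host https://objectstorage.us-ashburn-1.oraclecloud.com \
      --dir /home/oracle/ocicloud/report --base mydbreport \
      --pvtkeyfile /home/oracle/ocicloud/myprivatekey.ppk \
      --pubfingerprint 6d:f9:57:d5:ff:b1:c0:98:81:90:1e:6e:08:0f:d0:69 \
      --tocid ocid1.tenancy.oc1..aaaaaaaanz4trskw6jm57cz2fztoasatto3i6z4h33gzfb3pmei5vvnoq \
      --uocid ocid1.user.oc1..aaaaaaaae2mlwyke4gvd7kzxv5zxgg3k2dlcwvubv7vjy6jvbgsaouxq \
      --container migest_backups --dbid 301925655 \
      --untildate 2021-12-01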






    Managing your ZDLRA replication queue remotely


    With the rise of cybercrime, more and more companies are looking at an architecture with a second backup copy that is protected with an air gap.  Below is the common architecture that I am seeing.


    In this post I will walk through an example of how to implement a simple Java program that performs the tasks necessary to manage the airgap for a ZDLRA that is implemented in a cyber vault (DC1 Vault in the picture).  Feel free to use this as a starting point to automate the process.

    Commands

    There are 3 commands that I need to be able to execute remotely:

    • PAUSE - This will pause the replication server that I configured.
    • RESUME - This will resume the replication server that I configured.
    • QUERY - This will query the queue on the upstream to determine how much is left in the queue.
    First however I need to configure the parameters to execute the calls.

    Config file (airgap.config).

    I created a config file to customize the script for my environment. Below are the parameters needed to connect to the ZDLRA and execute the commands.
    • HOST - The name of the scan listener on the upstream ZDLRA.
    • PORT - The SQL*Net port used to connect to the upstream ZDLRA.
    • SERVICE_NAME - The service name of the database on the upstream ZDLRA.
    • USERNAME - The username to connect to the upstream database.
    • PASSWORD - The password for the user. Feel free to encrypt this in Java.
    • REPLICATION_SERVER - The replication server to manage.

    Below is what my config file looks like.

    airgap.host=oracle-19c-test-tde
    airgap.port=1521
    airgap.service_name=ocipdb
    airgap.username=bgrenn
    airgap.password=oracle
    airgap.replication_server=replairgap


    Java code (airgap.java).

    Java snippet start

    The start of the Java code imports the necessary classes and sets up my airgap class.


    import java.sql.*;
    import java.io.FileNotFoundException;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.FileInputStream;
    import java.util.Date;
    import java.util.Properties;

    // Create an airgap class
    public class airgap {

    private Properties prop = new Properties();


    Java snippet get properties

    The first method will get the airgap properties from the property file so that I can use them in the rest of the methods.

    // Create a get_airgap_properties method
    public void get_airgap_properties()
    {
        String fileName = "airgap.config";
        try (FileInputStream fis = new FileInputStream(fileName)) {
            prop.load(fis);
        } catch (FileNotFoundException ex) {
            System.out.println("cannot find config file airgap.config");
        } catch (IOException ex) {
            System.out.println("unknown issue finding config file airgap.config");
        }
    }



    Java snippet pause replication server

    The code below will connect to the database and execute DBMS_RA.PAUSE_REPLICATION_SERVER


    // Create a pause_replication method
    public void pause_replication()
    {
        try {
            // Load the Oracle JDBC driver
            Class.forName("oracle.jdbc.driver.OracleDriver");

            // Create the connection
            Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//" +
                prop.getProperty("airgap.host") + ":" +
                prop.getProperty("airgap.port") + "/" +
                prop.getProperty("airgap.service_name"),
                prop.getProperty("airgap.username"),
                prop.getProperty("airgap.password"));

            CallableStatement cs = con.prepareCall("{call dbms_ra.pause_replication_server(?)}");

            // Set the IN parameter (replication server name)
            String in1 = prop.getProperty("airgap.replication_server");
            cs.setString(1, in1);

            ResultSet rs = cs.executeQuery(); // execute the call

            con.close(); // close the connection
            System.out.println("replication server '" + prop.getProperty("airgap.replication_server") + "' paused");
        }
        catch (Exception e) {
            e.printStackTrace();
        }
    }



    Java snippet resume replication server

    The code below will connect to the database and execute DBMS_RA.RESUME_REPLICATION_SERVER


    // Create a resume_replication method
    public void resume_replication()
    {
        try {
            // Load the Oracle JDBC driver
            Class.forName("oracle.jdbc.driver.OracleDriver");

            // Create the connection
            Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//" +
                prop.getProperty("airgap.host") + ":" +
                prop.getProperty("airgap.port") + "/" +
                prop.getProperty("airgap.service_name"),
                prop.getProperty("airgap.username"),
                prop.getProperty("airgap.password"));

            CallableStatement cs = con.prepareCall("{call dbms_ra.resume_replication_server(?)}");

            // Set the IN parameter (replication server name)
            String in1 = prop.getProperty("airgap.replication_server");
            cs.setString(1, in1);

            ResultSet rs = cs.executeQuery(); // execute the call

            con.close(); // close the connection
            System.out.println("replication server '" + prop.getProperty("airgap.replication_server") + "' resumed");
        }
        catch (Exception e) {
            e.printStackTrace();
        }
    }


    Java snippet query replication server

The Java code below queries the replication queue on the upstream ZDLRA and returns 4 columns:
    • REPLICATION SERVER - name of the replication server
    • TASKS QUEUED - Number of tasks in the queue to be replicated
    • TOTAL GB QUEUED - Amount of data in the queue
    • MINUTES IN QUEUE - The number of minutes the oldest replication piece has been in the queue.
    The last piece of information can be very useful to tell you how current the replication is. With real-time redo, the queue may never be empty.

    // Create a queue_select method
    public void queue_select()
    {
    try {
    //Loading driver
    Class.forName("oracle.jdbc.driver.OracleDriver");

    //creating connection
    Connection con = DriverManager.getConnection
    ("jdbc:oracle:thin:@//"+
    prop.getProperty("airgap.host")+":"+
    prop.getProperty("airgap.port")+"/"+
    prop.getProperty("airgap.service_name"),
    prop.getProperty("airgap.username"),
    prop.getProperty("airgap.password"));

    Statement s=con.createStatement(); //creating statement

    ResultSet rs=s.executeQuery("select replication_server_name,"+
    " count(*) tasks_queued,"+
    " trunc(sum(total)/1024/1024/1024,0) AS TOTAL_GB_QUEUED,"+
    " round("+
    " (cast(current_timestamp as date) - cast(min(start_time) as date))"+
    " * 24 * 60"+
    " ) as queue_minutes "+
    "from RA_SBT_TASK "+
    " join ra_replication_config on (lib_name = SBT_library_name) "+
    " where archived = 'N'"+
    "group by replication_server_name"); //executing statement

    System.out.println("Replication Server,Tasks Queued,Total GB Queued,Minutes in Queue");

    while(rs.next()){
    System.out.println(rs.getString(1)+","+
    rs.getInt(2)+","+
    rs.getInt(3)+","+
    rs.getString(4));
    }

    con.close(); //closing connection
    }
    catch(Exception e) {
    e.printStackTrace();
    }
    }
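
If you just want to check the replication queue interactively, the same query that is embedded in queue_select() can be run directly in SQL*Plus against the upstream Recovery Appliance catalog (connected as a user that can query RA_SBT_TASK and RA_REPLICATION_CONFIG):

-- Same query as in queue_select(), runnable directly in SQL*Plus
select replication_server_name,
       count(*) tasks_queued,
       trunc(sum(total)/1024/1024/1024,0) as total_gb_queued,
       round(
         (cast(current_timestamp as date) - cast(min(start_time) as date))
         * 24 * 60
       ) as queue_minutes
from ra_sbt_task
     join ra_replication_config on (lib_name = sbt_library_name)
where archived = 'N'
group by replication_server_name;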



    Java snippet main section

    Below is the main section, and as you can see you can pass one of the 3 parameters mentioned earlier.





    public static void main(String[] args)
    {

    // import java.sql.*;
airgap airgap = new airgap(); // Create an airgap object


airgap.get_airgap_properties(); // Call the get_airgap_properties() method
    switch(args[0]) {

    case "resume":
    airgap.resume_replication(); // Call the resume_replication() method
    break;
    case "pause":
    airgap.pause_replication(); // Call the pause_replication() method
    break;
    case "query":
    airgap.queue_select(); // Call the queue_select() method
    break;
    default:
    System.out.println("parameter must be one of 'resume','pause' or 'query'");
    }
    }
    }


    Executing the Java code (airgap.class).

Now if you take the snippets above and put them in a single file, airgap.java, you can compile them into a class file.

    javac airgap.java
This creates a class file, airgap.class.

In order to connect to my Oracle database, I downloaded the JDBC driver (ojdbc8.jar) and placed it in my working directory, as you can see from the classpath below.

Now I can execute it with each of the 3 parameters:

    $ java -Djava.security.egd=file:/dev/../dev/urandom -cp ojdbc8.jar:. airgap pause
    replication server 'replairgap' paused

    $ java -Djava.security.egd=file:/dev/../dev/urandom -cp ojdbc8.jar:. airgap resume
    replication server 'replairgap' resumed

    $ java -Djava.security.egd=file:/dev/../dev/urandom -cp ojdbc8.jar:. airgap query
    Replication Server,Tasks Queued,Total GB Queued,Minutes in Queue
    ra_replication_config,4,95,58


It's that easy to create a simple Java program that can manage your replication server from within an air gap.


    Backup Anywhere offers Expanded Replication for High Availability and More Flexibility







The latest release of the Zero Data Loss Recovery Appliance software (19.2.1.1.2) includes 3 exciting new features for replication:

    • Backup Anywhere - Providing the ability to change roles (upstream vs downstream).
    • Read Only replication - Providing seamless migration to a different Recovery Appliance.
    • Request Only Replication - Providing a High Availability option for backups.

    Backup Anywhere

Backup Anywhere provides even more options for HA/DR (High Availability/Disaster Recovery) with the ability to redirect backups and redo to another Recovery Appliance. In addition, Backup Anywhere provides the ability to perform a role reversal, removing the fixed concept of upstream/downstream. As the name implies, when replicating between two or more Zero Data Loss Recovery Appliances you can switch which Recovery Appliance receives backups from your protected databases.

With Backup Anywhere you configure two Recovery Appliances as a pair and create replication servers that point to each other. Metadata synchronization ensures that backups are replicated to the paired appliance and that the Recovery Appliance pair stays in sync.

NOTE: In order to use Backup Anywhere you must use the new REPUSER naming convention of REPUSER_FROM_<source>_TO_<destination> (for example, REPUSER_FROM_NYC_TO_LONDON).

For my example, the diagram below depicts a three Zero Data Loss Recovery Appliance architecture with the primary databases in New York sending backups to the Recovery Appliance in the New York Data Center. The Recovery Appliance in the New York Data Center replicates backups to the Recovery Appliance in the London Data Center, and finally, the Recovery Appliance in the London Data Center replicates backups to the Recovery Appliance in Singapore.

    New York --> London --> Singapore



But what happens if I want to change which Recovery Appliance I am sending my backups to? With Backup Anywhere I can change the Recovery Appliance receiving backups, and the flow of replicated backups will be taken care of automatically. The Recovery Appliances will seamlessly change the direction of the replication stream based on which Recovery Appliance is currently receiving the backups. Backup Anywhere does this automatically and will still ensure backups on the three Zero Data Loss Recovery Appliances are synchronized and available.

    Singapore --> London --> New York.


     


    Read Only Replication

This is my favorite new feature included in the latest Recovery Appliance release. Read Only Replication allows you to easily migrate your backups to a new Recovery Appliance while leaving the older backups available.

    Replication normally synchronizes the upstream catalog with the downstream catalog AND ensures that backups are replicated to the downstream. With Read Only Replication, only the synchronization occurs.  The upstream Recovery Appliance (typically the new RA) knows about the backups on the downstream Recovery Appliance (the old RA).  If a restore is requested that is not on the upstream Recovery Appliance, the upstream will pull the backup from the downstream.

    The most common use case is retiring older pieces of equipment, but Read Only Replication can be used for additional use cases.

    • Migrating backups to a new datacenter
• Migrating backups for a subset of databases from an overloaded Recovery Appliance to a new Recovery Appliance to balance the workload

Replacing an older Recovery Appliance

In this example I want to replace the current Recovery Appliance (ZDLRAOLD) with a new Recovery Appliance (ZDLRANEW). During this transition period I want to ensure that backups are always available to the protected databases. This example will show the migration of backups from ZDLRAOLD to ZDLRANEW. I am keeping 30 days of backups for my databases and I am starting the migration on September 1.

    Step #1 - September 1, configure replication from ZDLRAOLD to ZDLRANEW

    Create a replication server from ZDLRAOLD to ZDLRANEW and add the policy(s) for the databases to the replication server.  This will replicate the most current level 0 backup (FULL)  onto ZDLRANEW for all databases without changing the backup location from the protected databases.
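
As a rough sketch (the replication server pointing to ZDLRANEW is assumed to already exist, and the names below are hypothetical), adding a protection policy to that replication server on ZDLRAOLD uses DBMS_RA.ADD_REPLICATION_SERVER:

-- Run on ZDLRAOLD as the Recovery Appliance admin user.
-- 'ZDLRANEW_REP' and 'GOLD_POLICY' are hypothetical names.
BEGIN
  DBMS_RA.ADD_REPLICATION_SERVER(
    replication_server_name => 'ZDLRANEW_REP',
    protection_policy_name  => 'GOLD_POLICY');
END;
/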



Once you have ensured that all databases have replicated a level 0 backup to ZDLRANEW, you can remove the replication server from ZDLRAOLD, which will stop the replication.

    Step #2 - September 2, configure Read Only replication from ZDLRANEW to ZDLRAOLD

Create a replication server from ZDLRANEW to ZDLRAOLD. Add the policies for all databases to the replication server and ensure that the read_only flag is set when adding each policy.

     

    PROCEDURE add_replication_server (
       replication_server_name IN VARCHAR2,
   protection_policy_name IN VARCHAR2,
       skip_initial_replication IN BOOLEAN DEFAULT FALSE,
       read_only IN BOOLEAN DEFAULT FALSE,
       request_only IN BOOLEAN DEFAULT FALSE);
     

    Note: The Read Only flag must be set when adding the policy to the replication server to ensure backups are NOT replicated from ZDLRANEW to ZDLRAOLD.
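
For example, a minimal sketch of the call on ZDLRANEW (again with hypothetical replication server and policy names) would look like this:

-- Run on ZDLRANEW as the Recovery Appliance admin user.
-- 'ZDLRAOLD_REP' and 'GOLD_POLICY' are hypothetical names.
BEGIN
  DBMS_RA.ADD_REPLICATION_SERVER(
    replication_server_name => 'ZDLRAOLD_REP',
    protection_policy_name  => 'GOLD_POLICY',
    read_only               => TRUE);
END;
/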

     


     

Step #3 - September 3, configure the protected databases to back up to ZDLRANEW.

At this point ZDLRANEW should contain at least 1 full backup for all databases, and the incremental backups will begin on September 3rd. ZDLRANEW will now contain backups from September 1 (when replication began) through the most current virtualized level 0 backup. ZDLRAOLD will contain backups from August 4 until September 2nd, when the protected database backups were redirected to ZDLRANEW.



    Step #4 - September 4+, ZDLRANEW contains all new backups and old backups age off ZDLRAOLD

    Below is a snapshot of what the backups would look like 15 days later on September 15th.  Backups are aging off of ZDLRAOLD and ZDLRANEW now contains 15 days of backups.



     

    Step #5 - September 15, Restore backups

To restore the protected database to a point in time, you would connect the protected database to ZDLRANEW, and ZDLRANEW would provide the correct virtual full backup regardless of its location.

1. If the full backup prior to the point-in-time is on ZDLRANEW, it is restored directly from there.

2. If the full backup is NOT on ZDLRANEW, it will be pulled from ZDLRAOLD through ZDLRANEW back to the protected database.

    The location of the backups is transparent to the protected database, and ZDLRANEW manages where to restore the backup from.



Step #6 - September 30, retire ZDLRAOLD

    At this point the new Recovery Appliance ZDLRANEW contains 30 days of backups and the old Recovery Appliance ZDLRAOLD can be retired.



      

    Request Only Mode

     

Request Only Mode is used when Data Guard is present and both the primary database and the Data Guard standby database are backing up to a local Recovery Appliance. The two Recovery Appliances synchronize only the metadata; no backup pieces are actively replicated. But, in the event of a prolonged outage of either Recovery Appliance, this feature provides the ability to fill gaps by replicating backups from its paired Recovery Appliance.

To implement this feature, replication servers are configured on both Recovery Appliances, and the policies are added to each replication server specifying request_only => TRUE (see the example after the procedure signature below).

     

    PROCEDURE add_replication_server (
       replication_server_name IN VARCHAR2,
   protection_policy_name IN VARCHAR2,
       skip_initial_replication IN BOOLEAN DEFAULT FALSE,
       read_only IN BOOLEAN DEFAULT FALSE,
       request_only IN BOOLEAN DEFAULT FALSE);
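
A minimal sketch of the call, assuming hypothetical replication server and policy names, run on each Recovery Appliance against the replication server that points to its pair:

-- Run on the SFO Recovery Appliance (the equivalent call, pointing back to SFO,
-- would be run on the NYC Recovery Appliance). Names are hypothetical.
BEGIN
  DBMS_RA.ADD_REPLICATION_SERVER(
    replication_server_name => 'NYC_REP',
    protection_policy_name  => 'DG_POLICY',
    request_only            => TRUE);
END;
/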
     

Below is my environment, configured and running in normal mode. I have my primary database in San Francisco and my standby database in New York. Both databases, primary and standby, are backing up to the local Recovery Appliance in their respective data centers. Request Only Mode is configured between the two Recovery Appliances.



     

To demonstrate what happens when a failure occurs, I will assume that the Recovery Appliance in the SFO data center is down for a period of time. In this scenario, backups can no longer be sent to the SFO Recovery Appliance, but Data Guard redo traffic still flows to the standby database in New York, and the standby database in New York is still backing up locally to the Recovery Appliance in New York.



When the SFO appliance comes back online, it will synchronize its backup information with that on the NYC Recovery Appliance. The SFO appliance will then request, from the NYC appliance, the datafile backups and any controlfile backups that are older than 48 hours.

NOTE: The assumption is that a new backup will occur locally over the faster LAN and fill any gaps within the last 48 hours. The backups requested from its pair will be transferred over the slower WAN and fill any gaps older than 48 hours.

If Real-Time Redo is configured, the protected databases will immediately begin the archived log gap fetch process and fill any gaps in archive logs on the SFO appliance that are still available on the protected databases. The SFO appliance will also check for logs to request from the NYC appliance once per hour over the next 6 hours. This gives the local archive log gap fetch time to run over the LAN, which is faster than replicating logs over the WAN from NYC.

HA/DR Bonus Feature: Since the SFO appliance recovery catalog is immediately synchronized with the NYC recovery catalog, backup pieces on the NYC Recovery Appliance are available for recovery. With this capability you have full recovery protection as soon as the catalog synchronization completes.

     



     

     



This ensures that the SFO Recovery Appliance will be able to provide a short Recovery Point Objective (RPO) without waiting for the next backup job to occur.

    All of this happens transparently and quickly returns the Recovery Appliance to the expected level of protection for the database backups.

     

    For more details on implementing different replication modes, refer to the Administrator’s Guide.

     

     

     

