
"enq: CR - block range reuse ckpt" and the recycle bin

I decided to write this blog entry because we ran into an issue over the weekend.  A process that normally runs fine was running slow.. "like molasses" was the comment from one of the folks.  After digging into an AWR report, I found that one of the top waits for the SQL was


enq: CR - block range reuse ckpt

OK.. what is that, and why?   I didn't find a whole lot, except that it indicated contention on blocks from multiple processes trying to update them.. hmmm..

I looked further in the AWR and saw the top Segment for Logical reads was "RECYCLEBIN$"

and one of the top queries was:

"delete from RecycleBin$ where bo=:1"


Well, since the process had finished, I ran my own test.  I created a tablespace and created a table in it.. dropped the table, created it again and added rows, then dropped it again.. over and over, until the number of objects in the tablespace remained constant.  I then created the table one more time and let it grow (so the database would have to free up recycle bin space to make room)... And what I saw in the top 5 wait events was, again.....

enq: CR - block range reuse ckpt

I wanted to document this for others who may hit it.  If a search for this wait event brought you to my blog, please check your recycle bin and make sure it isn't cleaning itself out to make room, causing this wait event...
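A quick way to check whether the recycle bin is the one churning is something like this (a sketch; the tablespace name is hypothetical):

-- how much space the recycle bin is holding, per owner and tablespace
select owner, ts_name, count(*) objects, sum(space) blocks
from   dba_recyclebin
group  by owner, ts_name;

-- if it is the culprit, you can clear it out ahead of your batch window
purge tablespace my_ts;    -- one tablespace (hypothetical name)
purge dba_recyclebin;      -- the whole bin, as a DBA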





Analyzing query Cost example

Well,
  Here is the issue I've been dealing with.  The query cost doesn't stay consistent, and I was wondering if a profile would help keep it stable, and whether that makes things better or worse.

  I'm sure you have all run into this.  For some reason the cardinality can't be well estimated, the optimizer changes plans on you, and you want to know if a profile will help or hurt.

  The good news is I had the perfect test case.  The query was part of a data load, and there was a driving table with a varying number of rows in it.  This is how I went about analyzing it.

First I created a table to store the results.

create table bgrenn.mytable
(row_count       number,
 plan_hash_value varchar2(15),
 cost            number);




Next I took a copy of my driving table (a full-size table) and used it to create a smaller copy of the table.



declare

v_statement_id varchar2(10);
v_plan_hash_value varchar2(15);
v_cost number;
v_statement varchar2(4000);

begin

for row_count in 0..100000 loop


execute immediate 'drop table BGRENN.TMP_DRIVERT purge';
execute immediate 'create table BGRENN.TMP_DRIVERT as select * from BGRENN.TMP_DRIVERB where rownum<' || to_char(row_count,'9999999');

dbms_stats.gather_table_stats('BGRENN','TMP_DRIVERT');

v_statement_id := to_char(row_count,'99999');

v_statement := 'explain plan SET STATEMENT_ID = ''' || v_statement_id || ''' for ' ||
'select * ' ||
' from BGRENN.TMP_DRIVERT TMP_DRIVERA ' ||
' INNER JOIN BGRENN.TABA TABA ON TMP_DRIVERA.ID=TABA.ID ' ||
' INNER JOIN BGRENN.TABB ON TABA.GCC_ID=TABA.ID AND TABA.LOCN <> 8031431 ' ||
' INNER JOIN BGRENN.TABC TABC ON TABA.ID=TABC.ID AND TABC.id = 1168583 ' ||
' INNER JOIN BGRENN.TABD TABD ON TABA.ID=TABD.ID ' ||
' INNER JOIN BGRENN.TABE TABE ON TABD.ID=TABE.ID ' ||
' INNER JOIN BGRENN.TABF TABF ON TABD.ID=TABF.ID ' ||
' INNER JOIN BGRENN.TABG TABG ON TABF.ID=TABG.ID ' ||
' INNER JOIN BGRENN.TABH TABH ON TABG.ID=TABH.ID AND TABH.SEQ_NBR < 500 ' ||
' INNER JOIN BGRENN.TABI ON TABC.ID=ID ' ||
' INNER JOIN BGRENN.TABJ TABJ ON TABC.ID=TABJ.ID ' ||
' INNER JOIN BGRENN.TABK TABK ON TABJ.ID=TABK.ID and TABK.id in ( 1221589, 1219009, 1191882, 1221590, 1171956) ' ||
' LEFT OUTER JOIN ERD.TABL TABL ON TABH.ID=TABL.ID ' ||
' LEFT OUTER JOIN ERD.TABM TABM ON TABE.ID=TABM.ID ' ||
' where (1=1)';

dbms_output.put_line(v_statement);

execute immediate v_statement;

-- the plan hash value is on the first line of the dbms_xplan output
SELECT substr(plan_table_output,18,12) into v_plan_hash_value FROM TABLE(dbms_xplan.display(statement_id => v_statement_id)) where rownum <2;
-- the cost of the whole plan is on the id=0 row of the plan table
select cost into v_cost from plan_table where id=0 and rownum<2 and statement_id=v_statement_id;

insert into bgrenn.mytable values(row_count,v_plan_hash_value,v_cost);

delete from plan_table where statement_id=v_statement_id;
commit;
end loop;

end;
/




This produced a set of rows in the table with the cost for each driving-table row count.

I then copied the results table, installed a profile, and reran the loop.
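To line the two runs up, I joined the original and profiled results on row count and spooled that to a file for R; a minimal sketch (MYTABLE_PROFILE is a hypothetical name for the copy):

select o.row_count,
       o.cost orig_cost,
       n.cost new_cost
from   bgrenn.mytable o
join   bgrenn.mytable_profile n on n.row_count = o.row_count
order  by o.row_count;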

After joining the 2 tables on row count, I created an R program and analyzed the results.
Here is the program.


# load the psych library for the describe() function
library(psych)

# open file
query_data <- read.table("c:/r/data/query_output.txt", header=T)

#what are the variables
describe (query_data)


pdf("c:/r/data/querydata.pdf")
plot(query_data$ROW_COUNT,query_data$orig_cost,type='l',col="red")
lines(query_data$ROW_COUNT,query_data$new_cost,type='l',col="green")
dev.off()


And here is the output..  The red line is the original plan, and the green is the plan with the profile. I can see that the cost of the profiled plan remains more consistent, and it is probably the better choice.

ODI Monitoring scripts

I have included some useful ODI monitoring scripts (if you want the abridged version of this blog post).

I haven't been blogging in a while (it's been crazy), but I wanted to share some information on ODI.

I have been working on trying to monitor ODI (Oracle Data Integrator).  ODI is a somewhat recent Oracle purchase, and it has a client GUI that is used by the developers.

Many of us are DBAs, and we want to go into the database to see what is happening.  We either don't have access to the GUI, or we don't want access.

ODI is a great tool for transforming data.  It builds SQL statements that are executed directly in the database.  This makes it a bit different from a tool like DataStage, which runs SQL remotely.

Here is the first SQL I was able to come up with.  It tells you about the load plans that have been run. You need to qualify the tables with the owner of the ODI repository.

SELECT SLI.I_LP_INST AS "Load Plan Instance #"
, SLR.NB_RUN AS "Load Plan Run #"
, SLI.LOAD_PLAN_NAME AS "Load Plan Name"
, SLR.CONTEXT_CODE AS "Source System"
, SLR.STATUS AS "Load Plan Status"
, SLR.RETURN_CODE AS "error code"
, CASE WHEN SLR.END_DATE IS NULL
THEN TRUNC(ROUND((NVL(SLR.END_DATE , SYSDATE) - SLR.START_DATE)*86400) / 3600) || ':' ||
LPAD(TRUNC(MOD(ROUND((NVL(SLR.END_DATE , SYSDATE) - SLR.START_DATE)*86400), 3600) / 60), 2, 0) || ':' ||
LPAD(MOD(ROUND((NVL(SLR.END_DATE , SYSDATE) - SLR.START_DATE)*86400), 60), 2, 0)
ELSE TRUNC(SLR.DURATION / 3600) || ':' || LPAD(TRUNC(MOD(SLR.DURATION, 3600) / 60), 2, 0) || ':' || LPAD(MOD(SLR.DURATION, 60), 2, 0)
END AS "Load Time"
, SLR.START_DATE
, SLR.END_DATE
, substr(to_char(SLR.START_DATE,'mm/dd/yy:hh24'),1,11) start_date_hour
FROM SNP_LP_INST SLI
JOIN SNP_LPI_RUN SLR ON SLI.I_LP_INST = SLR.I_LP_INST
where 'JRNL_LOAD'=sli.load_plan_name

I was able to include this in an Apex report, and it displays some of the history.

The next SQL gives you the detail of the scenarios for a load plan. The input is the load plan instance number (:P6_SCENARIO).



SELECT SLI.Load_plan_name as "Load Plan Name",
SUBSTR(SLR.CONTEXT_CODE, 9, 5) AS "Source System",
SLS.LP_STEP_NAME AS "Target Table",
SLS.scen_name as "scenario name",
TRUNC(SUM(SSTL.TASK_DUR) / 3600) || ':' ||
LPAD(TRUNC(MOD(SUM(SSTL.TASK_DUR), 3600) / 60), 2, 0) ||
':' || LPAD(MOD(SUM(SSTL.TASK_DUR), 60), 2, 0) AS "Load Time"
, SST.SESS_NO AS "Session Number"
, SLSL.start_date as "Start Time"
, SLSL.End_date as "End Time"
, sum(sstl.nb_ins) as "Rows Inserted"
, sum(sstl.nb_upd) as "Rows Updated"
, sum(sstl.nb_del) as "Rows Deleted"
, sum(sstl.nb_err) as "Rows Errors"
, case
when (sum(sstl.nb_ins) + sum(sstl.nb_upd)) > 0 then trunc(sum(sstl.task_dur)/(sum(sstl.nb_ins) + sum(sstl.nb_upd)) ,4)
else 0
end as "Throughput"
FROM SNP_LP_INST SLI
JOIN SNP_LPI_STEP SLS
ON SLI.I_LP_INST = SLS.I_LP_INST
JOIN SNP_LPI_STEP_LOG SLSL
ON SLS.I_LP_STEP = SLSL.I_LP_STEP
AND SLS.I_LP_INST = SLSL.I_LP_INST
JOIN SNP_SESS_TASK SST
ON SST.SESS_NO = SLSL.SESS_NO
JOIN SNP_SESS_TASK_LOG SSTL
ON SSTL.SCEN_TASK_NO = SST.SCEN_TASK_NO
AND SST.SESS_NO = SSTL.SESS_NO
JOIN SNP_LPI_RUN SLR
on SLI.I_LP_INST = SLR.I_LP_INST
WHERE (1=1)
AND SLSL.I_LP_INST = :P6_SCENARIO
AND SLS.LP_STEP_TYPE = 'RS'
-- AND SLSL.STATUS IN ('M','D')
GROUP BY SUBSTR(SLR.CONTEXT_CODE, 9, 5),
SLSL.start_date,SLSL.end_date,SLI.load_plan_name,
SLS.scen_name,SLS.LP_STEP_NAME, SST.SESS_NO

Finally, this is the last query. It takes the session number as an input (:P7_TASK) and displays the detail for all the tasks contained in a scenario.
SELECT SST.TASK_NAME2 AS "Session Name"
, SST.TASK_NAME3 AS "Task Name"
, CASE WHEN SSTL.TASK_END IS NULL
THEN TRUNC(ROUND((NVL(SSTL.TASK_END , SYSDATE) - SSTL.TASK_BEG)*86400) / 3600) || ':' ||
LPAD(TRUNC(MOD(ROUND((NVL(SSTL.TASK_END , SYSDATE) - SSTL.TASK_BEG)*86400), 3600) / 60), 2, 0) || ':' ||
LPAD(MOD(ROUND((NVL(SSTL.TASK_END , SYSDATE) - SSTL.TASK_BEG)*86400), 60), 2, 0)
ELSE TRUNC(TASK_DUR / 3600) || ':' || LPAD(TRUNC(MOD(TASK_DUR, 3600) / 60), 2, 0) || ':' || LPAD(MOD(TASK_DUR, 60), 2, 0)
END AS "Load Time"
, substr(sst.def_context_code,9,5) "Context"
, SSTL.TASK_BEG AS "Start Time"
, SSTL.TASK_END AS "End Time"
, SSTL.NB_DEL AS "Rows Deleted"
, SSTL.NB_UPD AS "Rows Updated"
, SSTL.NB_INS AS "Rows Inserted"
, SSTL.NB_ERR AS "# Of Errors"
, SST.SESS_NO
, sst.scen_task_no
/* UNCOMMENT TO GET SQL EXECUTED FOR THIS STEP */
FROM SNP_SESS_TASK SST,
SNP_SESS_TASK_LOG SSTL
WHERE (1=1)
AND SST.SESS_NO =:P7_TASK
AND SSTL.TASK_STATUS IN ('D','M','R')
AND SSTL.SCEN_TASK_NO = SST.SCEN_TASK_NO
AND SST.SESS_NO = SSTL.SESS_NO

Big Data and Privacy

I am writing an editorial (which is unusual for me). It was prompted by a conversation I had with my Dad about how the world is changing with big data, and how much retirees (like him) should know.  So here goes.

Dear Dad,

  I know you are involved in college education for older folks who have a passion to learn.  An interesting topic would be "Big Data".  For someone outside the IT field, I would say that Big Data describes the plethora of new data that is generated in today's society.
Where does it come from ??


  • Logs from webservers 
  • Cell phones (including location data).
  • Search data
  • Medical data
  • Machine generated data (like from your computer in your car)
  • Sales data
All this can be tied together in new ways that most people didn't think were possible years ago.  My favorite example is... You are walking in the mall past a store, and you get a text on your phone.  The store's computer system has texted you a 30% offer for a new sweater, good for 2 hours.  The store knows ...
  • Your location, from your phone
  • That you have a ski trip planned for next week, from your search history and purchase history
  • That you are looking for a new sweater, from your Facebook and Twitter posts.
Amazing huh ?

All these things can open up miraculous possibilities. The day may come when your doctor calls you to schedule a preventive test.  Medical history gathered from a large group of people has shown that others with a medical history similar to yours have had an issue that can be tested for and prevented. Wow, really amazing things.

They can predict something that may go wrong on your car, given all the information the computer has gathered.

Target (the department store) has even predicted when a woman is pregnant based on purchase history..

All these things are amazing possibilities, but they are also scary.

I find this topic very exciting, but I'm sure for a lot of people these ideas are very scary. Where do you cross the line and run into privacy issues?

This is going to be an interesting battle. Who owns all this data? What data is public, and what data is private?

But the most interesting thing is what can be done with the data....

Me, I'm an optimist, so I can see all the good that can come out of all this new data.

At the very least, the students would have a better idea of what is going on in the world (behind the scenes), and they would understand why Target thinks "grandma" is pregnant.

Performance and Indexes with Oracle

It's been a while since I've written a post, but twice this week the same issue came up..

The story goes like this.. "I have a query that is using indexes, not FTS, but it is much slower than expected."  It seems most folks have it drummed into their heads that indexes are fastest.

This is the statement I've made twice....

"The only thing worse than a FTS is an index lookup of the whole table."  I figured I would show you what I mean.

First I create 2 tables..

TEST_TABLE with 77,989 rows of data.  There is a primary key.
DRIVER with the same rows as TEST_TABLE.

This is the 2-step test I did:

1) I created DRIVER with 1 row and analyzed it, then I deleted the row and inserted all the rows from TEST_TABLE.
2) I ran the following query, which should return every row.

select  /*+ GATHER_PLAN_STATISTICS */  count(distinct a.capture_time)
from test_table a,
driver b
where a.pval=b.pval
and b.instance_number < 20;
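
For reference, this is roughly the setup; a sketch with column names inferred from the query and the plan (PVAL_KEY is assumed to be the primary key constraint):

create table test_table (
  pval            number constraint pval_key primary key,
  capture_time    date,
  instance_number number
);

-- fake up the data; the actual values don't matter for this test
insert into test_table
  select rownum, sysdate - rownum / 1440, mod(rownum, 10) + 1
  from dual connect by level <= 77989;
commit;

-- step 1: DRIVER starts with 1 row, gets analyzed, then is reloaded in full
create table driver as select * from test_table where rownum = 1;
exec dbms_stats.gather_table_stats(user, 'DRIVER')
delete from driver;
insert into driver select * from test_table;
commit;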

So what happens?  Since the statistics on DRIVER say there is only 1 row, you can see the plan and actual vs. estimated rows below.







----------------------------------------------------------------------------------------------
| Id  | Operation                        | Name       | Starts | E-Rows | A-Rows |   A-Time   |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |            |      1 |        |      1 |00:00:00.86 |
|   1 |  SORT AGGREGATE                  |            |      1 |      1 |      1 |00:00:00.86 |
|   2 |   VIEW                           | VW_DAG_0   |      1 |      1 |  77961 |00:00:01.28 |
|   3 |    HASH GROUP BY                 |            |      1 |      1 |  77961 |00:00:01.01 |
|   4 |     NESTED LOOPS                 |            |      1 |        |  77989 |00:00:01.00 |
|   5 |      NESTED LOOPS                |            |      1 |      1 |  77989 |00:00:00.58 |
|*  6 |       TABLE ACCESS FULL          | DRIVER     |      1 |      1 |  77989 |00:00:00.07 |
|*  7 |       INDEX UNIQUE SCAN          | PVAL_KEY   |  77989 |      1 |  77989 |00:00:00.21 |
|   8 |      TABLE ACCESS BY INDEX ROWID | TEST_TABLE |  77989 |      1 |  77989 |00:00:00.22 |
----------------------------------------------------------------------------------------------


Since the optimizer is only expecting 1 row back from DRIVER, it does an index lookup on TEST_TABLE for every row. Notice the actual number of rows is the full table.

Now let's look at the cost of this index lookup.


select  /*+ GATHER_PLAN_STATISTICS */  count(distinct a.capture_time)
from test_table a,
driver b
where a.pval=b.pval
and b.instance_number < 20

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      0.84       0.85          0      83932          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      0.84       0.85          0      83932          0          1


OK, so the cost of the row-by-row index lookup is .85 seconds of elapsed time.

Now after analyzing the DRIVER table, you can see the plan changed to a FTS.


------------------------------------------------------------------------------------
| Id  | Operation             | Name       | Starts | E-Rows | A-Rows |   A-Time   |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |            |      1 |        |      1 |00:00:00.16 |
|   1 |  SORT AGGREGATE       |            |      1 |      1 |      1 |00:00:00.16 |
|   2 |   VIEW                | VW_DAG_0   |      1 |  76723 |  77961 |00:00:00.40 |
|   3 |    HASH GROUP BY      |            |      1 |  76723 |  77961 |00:00:00.23 |
|*  4 |     HASH JOIN         |            |      1 |  76723 |  77989 |00:00:01.23 |
|*  5 |      TABLE ACCESS FULL| DRIVER     |      1 |  76723 |  77989 |00:00:00.22 |
|   6 |      TABLE ACCESS FULL| TEST_TABLE |      1 |  79110 |  77989 |00:00:00.22 |
------------------------------------------------------------------------------------



Notice the actual rows and the estimates match.  You can also see it is a FTS.

Now for the run-time stats with the FTS.

select  /*+ GATHER_PLAN_STATISTICS */  count(distinct a.capture_time)
from test_table a,
driver b
where a.pval=b.pval
and b.instance_number < 20

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      0.15       0.15          0       1390          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      0.15       0.15          0       1390          0          1



WOW.. look at that, the FTS took .15 seconds compared to .85 seconds.

The bottom line is.. the next time you talk to developers and they think their query should be fast because it is using an index, look deeper.  The biggest clue is the estimate vs. actual rows in the plan.  The index might not be what you want; FTSs can be good.




Oracle 12c PL/SQL improvements.


Last week I was giving a presentation for the UNYOUG (Upstate NY Oracle Users Group), and I talked about the new features in 12c (along with the In-Memory database option).

I thought I would share some thoughts after the meeting.

I went through Tom Kyte's top 12 new features, and surprisingly, the top feature that excited people was the PL/SQL improvements.

The PL/SQL improvements have to do with the ability to write PL/SQL as part of the query.

Let's say, currently with 11g, you have a function that computes the days between 2 dates.

  CREATE OR REPLACE FUNCTION Days_Between
             (first_dt DATE, second_dt DATE)
                RETURN NUMBER IS
       dt_one NUMBER;
       dt_two NUMBER;
BEGIN
      dt_one := TO_NUMBER(TO_CHAR(first_dt, 'DDD'));
      dt_two := TO_NUMBER(TO_CHAR(second_dt, 'DDD'));
           RETURN (dt_two - dt_one);
 END Days_Between;
/

select Days_Between(start_date, end_date) from mytable;



The problem is that in order to test this function, you need to create the function.  There are multiple issues developers face with having to do this.
  1. Developers often don't have the authority to create/change functions, especially if the functions need to be owned by a different schema.
  2. Replacing the current function affects other users, which may not be desirable while debugging changes.
  3. Testing against production data is often not possible because of authorization and collision issues.

The answer in 12c is the ability to include a function in the "WITH" clause. The above would become:


WITH FUNCTION Days_Between
             (first_dt DATE, second_dt DATE)
               RETURN NUMBER IS
       dt_one NUMBER;
       dt_two NUMBER;
BEGIN
      dt_one := TO_NUMBER(TO_CHAR(first_dt, 'DDD'));
      dt_two := TO_NUMBER(TO_CHAR(second_dt, 'DDD'));
           RETURN (dt_two - dt_one);
 END Days_Between;
select Days_Between(start_date, end_date) from mytable;

So, what about procedures, you ask? You can also include procedures in the mix.  The main purpose of doing this is to include any procedures that are invoked from the function.  This way you can include all the "dependencies" in the WITH clause, as in the sketch below.
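Here is a minimal sketch of what that looks like: a hypothetical helper procedure declared in the WITH clause and invoked from the function (the function body is simplified to plain date arithmetic; in SQL*Plus the statement needs a trailing slash because it contains semicolons):

WITH
  PROCEDURE check_dates (first_dt DATE, second_dt DATE) IS
  BEGIN
    -- hypothetical dependency invoked from the function below
    IF first_dt > second_dt THEN
      RAISE_APPLICATION_ERROR(-20001, 'dates are reversed');
    END IF;
  END;
  FUNCTION Days_Between (first_dt DATE, second_dt DATE) RETURN NUMBER IS
  BEGIN
    check_dates(first_dt, second_dt);
    RETURN second_dt - first_dt;
  END;
select Days_Between(start_date, end_date) from mytable;
/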

Finally, I read an article talking about how much this improves performance, too.

http://www.oracle-base.com/articles/12c/with-clause-enhancements-12cr1.php#plsql-support

but to the developers I talked to, the big advantage was the ability to test..

As far as performance gains go, I don't know that I would put reusable code (like functions) directly into a SQL statement. It would be a bear to support changes to a "common function" defined in multiple places.
 

 

X4-2 Exadata Announcement

These are the differences with the new X4-2 just announced, along with a table comparing the generations.
1) Double the size of flashcache
2) Switch from 3 TB to 4 TB high capacity (HC) drives
3) More cpu cores
4) Increase in Infiniband throughput by using an Active-Active configuration
5) Automatic Flash compression on X3 and X4 systems (using the ACO option)


Database servers
  Processor: X2: 2 x Six-Core Intel Xeon X5675 (3.06 GHz) | X3: 2 x Eight-Core Intel Xeon E5-2690 (2.9 GHz) | X4: 2 x Twelve-Core Intel Xeon E5-2697 v2 (2.7 GHz)
  Memory: X2: 96 GB | X3: 128 GB or 256 GB | X4: 256 GB
  Disk controller: Disk Controller HBA with 512MB Battery Backed Write Cache (all three)
  Internal disks: X2: 4 x 300 GB 10,000 RPM SAS | X3: 4 x 300 GB 10,000 RPM | X4: 4 x 600 GB 10,000 RPM
  InfiniBand: 2 x QDR (40Gb/s) ports (all three)
  Ethernet: X2: 2 x 10 Gb (Intel 82599 10GbE controller) plus 4 x 1 Gb | X3: 4 x 1/10 Gb (copper) plus 2 x 10 Gb (optical) | X4: 4 x 1/10 Gb (copper) plus 2 x 10 Gb (optical)
  Full rack: X2: 96 CPU cores and 768 GB memory for database processing (12 cores and 96 GB per database server) | X3: 128 CPU cores and 1 TB or 2 TB memory (16 cores and 256 GB per database server) | X4: 192 CPU cores and 2 TB memory (24 cores and up to 512 GB per database server)

Storage cells
  CPU: X2: 2 x Six-Core Intel Xeon L5640 (2.26 GHz) | X3: 2 x Six-Core Intel Xeon E5-2630L (2.0 GHz) | X4: 2 x Six-Core Intel Xeon E5-2630 v2 (2.6 GHz)
  Memory: X2: 24 GB | X3: 64 GB | X4: 96 GB

High Capacity (HC)
  Disk bandwidth¹: X2: up to 18 GB/s | X3: up to 18 GB/s | X4: up to 20 GB/s (uncompressed)
  Flash bandwidth¹: X2: up to 68 GB/s | X3: up to 93 GB/s | X4: up to 100 GB/s (uncompressed)
  Disk IOPS²: X2: up to 28,000 | X3: up to 28,000 | X4: up to 32,000
  Flash read IOPS²: X2: up to 1,500,000 | X3: up to 1,500,000 | X4: up to 2,660,000
  Flash write IOPS³: X2: N/A | X3: up to 1,000,000 | X4: up to 1,680,000
  Flash data capacity (raw): X2: 5.3 TB Exadata Smart Flash Cache | X3: 22.4 TB | X4: 44.8 TB
  Disk data capacity (raw)⁴: X2: 504 TB | X3: 504 TB | X4: 672 TB
  Disk data capacity (usable)⁶: X2: up to 224 TB | X3: 224 TB | X4: 300 TB

High Performance (HP)
  Disk bandwidth¹: X2: up to 25 GB/s | X3: up to 25 GB/s | X4: up to 24 GB/s (uncompressed)
  Flash bandwidth¹: X2: up to 75 GB/s | X3: up to 100 GB/s | X4: up to 100 GB/s (uncompressed)
  Disk IOPS²: X2: up to 50,000 | X3: up to 50,000 | X4: up to 50,000
  Flash read IOPS²: X2: up to 1,500,000 | X3: up to 1,500,000 | X4: up to 2,660,000
  Flash write IOPS³: X2: N/A | X3: up to 1,000,000 | X4: up to 1,680,000
  Flash data capacity (raw): X2: 5.3 TB Exadata Smart Flash Cache | X3: 22.4 TB | X4: 44.8 TB
  Disk data capacity (raw)⁴: X2: 100 TB | X3: 100 TB | X4: 200 TB
  Disk data capacity (usable)⁶: X2: up to 45 TB | X3: 45 TB | X4: 90 TB
¹Bandwidth is peak physical scan bandwidth achieved running SQL, assuming no database compression. Effective user data bandwidth is higher when database compression is used.
 ²Based on 8K IO requests running SQL. Note that the IO size greatly affects Flash IOPS. Others quote IOPS based on 2K or smaller IOs and are not relevant for databases.
³Based on 8K IO requests running SQL. Flash write I/Os measured at the storage servers after ASM mirroring. Database writes will usually issue multiple storage IOs to maintain redundancy.
⁴Raw capacity is measured in standard disk drive terminology with 1 GB = 1 billion bytes. Capacity is measured using normal powers of 2 space terminology with 1 TB = 1024 * 1024 * 1024 * 1024 bytes. Actual formatted capacity is less.
⁶Actual space available for a database after mirroring (ASM normal redundancy) while also providing adequate space (one disk on Quarter and Half Racks and two disks on a Full Rack) to reestablish the mirroring protection after a disk failure in the normal redundancy case.


Monitoring your Exadata health

One of the biggest topics I talk to customers about is monitoring your Exadata health.

The best tool for this is the Exachk (see MOS Doc ID 1070954.1)

This document contains the current Exachk release, and any new beta release that is available.

The recommendations for Exachk are to:

1) Run exachk (at a minimum) quarterly, and after any changes are made to the configuration
2) ALWAYS run the current exachk. The script is periodically updated and improved, so it is very important to be current
3) Keep track of any failures so you can identify any new items that appear in the report
4) A score of 80 or above is a good score for production. It is very rare to have a score of 99+.

There is also a great whitepaper, released in Sept. 2013 (just a few months ago).

The white paper can be found here:

http://www.oracle.com/technetwork/database/availability/exadata-health-resource-usage-2021227.pdf

Performance tuning using Oracle Internal Packages

I had an interesting problem last week with a customer who was performance testing on a new system compared to their current system.

The script was pretty simple. It was a PL/SQL block that inserted 10M rows into a table and committed every 1,000 rows.  To make the data more "normal", the customer used DBMS_RANDOM.

The basic insert looked like this.

INSERT INTO TEST_TABLE1
     (ID,DTVAL,
      DTSTAMP,
      COL1,
      NUM)
VALUES
    (:B1 ,
      SYSDATE,
      SYSTIMESTAMP,
      DBMS_RANDOM.STRING('A', 100),
      DBMS_RANDOM.RANDOM);

To me it seemed like a simple test.  Unfortunately, the performance results were not as expected.  To step back for a minute: the current system was running on 11.1.0.7, and the new system they were benchmarking against was 11.2.0.4.

I even had them check the output in the table to ensure there were no changes in the output.. everything looked the same.

You wouldn't think that would matter, but the difference in DBMS_RANDOM between versions seemed to be the issue.  You see, DBMS_RANDOM periodically has logic changes, and the performance of DBMS_RANDOM cannot be compared between versions in a performance benchmark.

I had the customer re-run the tests with constants instead of calling DBMS_RANDOM and the results were much better.

To reproduce what they saw, I tested against 11.2.0.2 and 12.1.0.1 (on the same machine); I could not get copies of 11.1.0.7 and 11.2.0.4 to test.  These 2 versions were enough to show the difference that affected the customer's benchmark.

Below I've included the TKPROF formatted output from the trace file on 11.2.0.2

SQL ID: fg7gf0m6a2ca4 Plan Hash: 0

INSERT INTO TEST_TABLE1 (ID,DTVAL,DTSTAMP,COL1,NUM)
VALUES
(:B1 , SYSDATE, SYSTIMESTAMP, DBMS_RANDOM.STRING('A', 100),
DBMS_RANDOM.RANDOM)


call       count       cpu    elapsed       disk      query    current       rows
------- -------- --------- ---------- ---------- ---------- ---------- ----------
Parse          1      0.00       0.00          0          0          0          0
Execute   100000     25.81      39.51          0       2464     119161     100000
Fetch          0      0.00       0.00          0          0          0          0
------- -------- --------- ---------- ---------- ---------- ---------- ----------
total     100001     25.81      39.51          0       2464     119161     100000


Notice the CPU time.. 25.81 seconds of CPU time on 11.2.0.2

Below is the TKPROF formatted output from the trace file on 12.1.0.1

INSERT INTO TEST_TABLE1 (ID,DTVAL,DTSTAMP,COL1,NUM)
VALUES
(:B1 , SYSDATE, SYSTIMESTAMP, DBMS_RANDOM.STRING('A', 100),
DBMS_RANDOM.RANDOM)


call       count       cpu    elapsed       disk      query    current       rows
------- -------- --------- ---------- ---------- ---------- ---------- ----------
Parse          1      0.00       0.00          0          0          0          0
Execute   100000     74.01      90.31          1       3722     111116     100000
Fetch          0      0.00       0.00          0          0          0          0
------- -------- --------- ---------- ---------- ---------- ---------- ----------
total     100001     74.01      90.31          1       3722     111116     100000


This time notice that it used 74.01 seconds of CPU.. same statement, executed the same number of times..

The difference between the 2 versions is almost 3X more CPU in 12.1.0.1.

Next I re-ran it with constants.

11.2.0.2 Test

INSERT INTO TEST_TABLE1 (ID,DTVAL,DTSTAMP,COL1,NUM)
VALUES
(:B1 , SYSDATE, SYSTIMESTAMP, 'a', 1)


call       count       cpu    elapsed       disk      query    current       rows
------- -------- --------- ---------- ---------- ---------- ---------- ----------
Parse          1      0.00       0.00          0          0          0          0
Execute   100000      4.62       6.02          0        536     108087     100000
Fetch          0      0.00       0.00          0          0          0          0
------- -------- --------- ---------- ---------- ---------- ---------- ----------
total     100001      4.62       6.02          0        536     108087     100000


12.1.0.1 test

INSERT INTO TEST_TABLE1 (ID,DTVAL,DTSTAMP,COL1,NUM)
VALUES
(:B1 , SYSDATE, SYSTIMESTAMP, 'a', 1)


call       count       cpu    elapsed       disk      query    current       rows
------- -------- --------- ---------- ---------- ---------- ---------- ----------
Parse          1      0.00       0.00          0          0          0          0
Execute   100000      4.78       7.09          1        586     105731     100000
Fetch          0      0.00       0.00          0          0          0          0
------- -------- --------- ---------- ---------- ---------- ---------- ----------
total     100001      4.78       7.09          1        586     105731     100000



Wow.. now that I use constants, the CPU time is almost identical.

There are absolutely some major performance differences in DBMS_RANDOM between versions.

The moral of the story is: don't use internal packages for benchmarking (unless they are critical to your application).
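If you still want realistic-looking data in the rows, one option is to generate it once, outside the timed portion, so the internal package is not part of the measurement; a sketch (the seed table name is hypothetical):

-- untimed: generate the random values a single time
create table test_seed as
  select rownum id,
         dbms_random.string('A', 100) col1,
         dbms_random.random num
  from dual connect by level <= 100000;

-- timed: the insert itself, with no internal package calls
begin
  for r in (select id, col1, num from test_seed) loop
    insert into test_table1 (id, dtval, dtstamp, col1, num)
    values (r.id, sysdate, systimestamp, r.col1, r.num);
    if mod(r.id, 100) = 0 then
      commit;
    end if;
  end loop;
  commit;
end;
/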



Finally, this is the script I used for testing..

SET TIME ON
SET TIMING ON
SET ECHO ON
SET SERVEROUTPUT ON
SET TERMOUT ON
SET VERIFY ON
SET FEEDBACK ON

WHENEVER SQLERROR CONTINUE

select to_char(sysdate,'mm/dd/yyyy hh:mi:ss AM') from dual;


prompt create the test_table1

drop table test_table1;

create table test_table1
(
id NUMBER,
dtval DATE,
dtstamp TIMESTAMP,
col1 varchar2(100),
num NUMBER
);

prompt Insert 100K with commit every 100 records

alter session set tracefile_identifier = 'test_sess_1';
exec dbms_monitor.session_trace_enable( waits => true );


DECLARE
x PLS_INTEGER;
rn NUMBER(20);
BEGIN

SELECT hsecs
INTO rn
FROM v$timer;

dbms_random.initialize(rn);
FOR i IN 1..100000
LOOP
x := dbms_random.random;
rn := x;

insert into test_table1 (id,dtval,dtstamp,col1,num)
values(x, sysdate, systimestamp, DBMS_RANDOM.string('A', 100), dbms_random.random);

If ( MOD(i,100) = 0) then
commit;
end if;

END LOOP;
dbms_random.terminate;
END;
/

EXEC DBMS_MONITOR.session_trace_disable;

prompt Count of all records

select count(*) from test_table1;
select count(distinct col1) from test_table1 ;
select count(distinct num) from test_table1 ;





12.1.0.2 New Features PDB CONTAINERS Clause

When 12.1.0.2 came out, one of the features I wanted to play with was the cross-container functionality, and I finally had time to play with it.

First here is the description of the feature.

The CONTAINERS clause is a new way of looking at multitenant container databases (CDBs). With this clause, data can be aggregated from a single identical table or view across many pluggable databases (PDBs) from the root container. The CONTAINERS clause accepts a table or view name as an input parameter that is expected to exist in all PDBs in that container. Data from a single PDB or a set of PDBs can be included with the use of CON_ID in the WHERE clause. 

I decided to play with this and see what it really means.... And mostly to see the explain plan to see what happens under the covers.

Step 1 --   The first thing you need to do is create a "common user".  A common user is a new concept that comes with pluggable databases.  A common user is created in the CDB (the root container) and is then available as a user in all the PDBs that are part of the CDB.  There are some rules around this.

  • In Oracle Database 12c Release 1 (12.1.0.1), the name of a common user must begin with C## or c##, and the name of a local user must not begin with C## or c##.
  • Starting with Oracle Database 12c Release 1 (12.1.0.2):
    • The name of a common user must begin with characters that are a case-insensitive match to the prefix specified by the COMMON_USER_PREFIX initialization parameter. By default, the prefix is C##.
    • The name of a local user must not begin with characters that are a case-insensitive match to the prefix specified by the COMMON_USER_PREFIX initialization parameter. Regardless of the value of COMMON_USER_PREFIX, the name of a local user can never begin with C## or c##.

 So here goes for step 1 ..
$ sqlplus "/ as sysdba"

SQL*Plus: Release 12.1.0.2.0 Production on Thu Aug 7 22:23:26 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> create user c##bgrenn identified by bgrenn;

User created.

SQL> grant dba to c##bgrenn;

Grant succeeded.


One thing to mention at this point is that I granted the DBA role to my new common user c##bgrenn.  You still need to grant individual privileges at each PDB level.

Step 2 --   Now that I have a common user (c##bgrenn), I need to go into my PDBs and create the objects that I want to query across PDBs.

For simplicity, I chose DBA_TABLES, and DBA_OBJECTS.

The 3 PDBs I want to create this in are "orclpdb", "orclpdba" and "orclpdbb".

SQL> COLUMN NAME FORMAT A15
SQL> COLUMN RESTRICTED FORMAT A10
SQL> COLUMN OPEN_TIME FORMAT A30
SQL> SELECT NAME, OPEN_MODE, con_id FROM V$PDBS;

NAME            OPEN_MODE      CON_ID
--------------- ---------- ----------
PDB$SEED        READ ONLY           2
ORCLPDB         READ WRITE          3
ORCLPDBA        READ WRITE          4
ORCLPDBB        READ WRITE          5





I then went into all 3 PDBs and created local copies of DBA_OBJECTS and DBA_TABLES.

SQL> connect sys/oracle@localhost:1521/orclpdb as sysdba
Connected.

SQL> grant dba to c##bgrenn;

Grant succeeded.

SQL> create table c##bgrenn.local_objects as select * from dba_objects;

Table created.

SQL> create table c##bgrenn.local_tables as select * from dba_tables;

Table created.

Step 3 - Now that the same objects exist with data in all 3 PDBs, I can do a combined query from the root container.  The next step is to create the same tables, empty, in the root (CDB$ROOT).

create table c##bgrenn.local_objects as select * from dba_objects where 1=0;

create table c##bgrenn.local_tables as select * from dba_tables where 1=0;

Table created.

Table created.


Then finally, query and see the rows in the local_objects tables across all the PDBs.

select count(*) from containers(c##bgrenn.local_objects) where con_id in (3) ;
COUNT(*)
----------
90925
select count(*) from containers(c##bgrenn.local_objects) where con_id in (4) ;
COUNT(*)
----------
90923
select count(*) from containers(c##bgrenn.local_objects) where con_id in (5) ;
COUNT(*)
----------
90925
select count(*) from containers(c##bgrenn.local_objects) ;

COUNT(*)
----------
272773
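
The CONTAINERS clause also exposes the CON_ID column, so one query can break the counts down per PDB; a sketch (run from the root as the common user):

select con_id, count(*)
from   containers(c##bgrenn.local_objects)
group  by con_id
order  by con_id;

-- join to V$PDBS on CON_ID if you want the PDB names next to the counts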


Step 4 - Now to look at the explain plan for a query on one of the tables across containers.

SET LINESIZE 130
SET PAGESIZE 0
SELECT * FROM table(DBMS_XPLAN.DISPLAY);
Plan hash value: 1439328272

----------------------------------------------------------------------------------------------------------------------
| Id | Operation                | Name     | Rows | Bytes | Cost (%CPU)| Pstart| Pstop | TQ    |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT         |          |    1 |   381 |     0 (0)  |       |       |       |      |            |
|  1 |  PX COORDINATOR          |          |      |       |            |       |       |       |      |            |
|  2 |   PX SEND QC (RANDOM)    | :TQ10000 |    1 |   381 |            |       |       | Q1,00 | P->S | QC (RAND)  |
|  3 |    PX PARTITION LIST ALL |          |    1 |   381 |            |     1 |   254 | Q1,00 | PCWC |            |
|  4 |     FIXED TABLE FULL     | X$CDBVW$ |    1 |   381 |            |       |       | Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------



Finally, this is the explain plan for a join query to get all the tables


explain plan for
select *
from containers(c##bgrenn.local_objects) a,
containers(c##bgrenn.local_tables) b
where
a.object_name = b.table_name and
a.object_type = 'TABLE';

SET LINESIZE 150
SET PAGESIZE 0
SELECT * FROM table(DBMS_XPLAN.DISPLAY);

Explained.

Plan hash value: 198107036

-------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation                   | Name     | Rows | Bytes | Cost (%CPU)| Time     | Pstart| Pstop | TQ    |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |          |    1 |  1226 |     0 (0)  | 00:00:01 |       |       |       |      |            |
|  1 |  PX COORDINATOR             |          |      |       |            |          |       |       |       |      |            |
|  2 |   PX SEND QC (RANDOM)       | :TQ10002 |    1 |  1226 |     0 (0)  | 00:00:01 |       |       | Q1,02 | P->S | QC (RAND)  |
|* 3 |    HASH JOIN BUFFERED       |          |    1 |  1226 |     0 (0)  | 00:00:01 |       |       | Q1,02 | PCWP |            |
|  4 |     PX RECEIVE              |          |    1 |   381 |            |          |       |       | Q1,02 | PCWP |            |
|  5 |      PX SEND HYBRID HASH    | :TQ10000 |    1 |   381 |            |          |       |       | Q1,00 | P->P | HYBRID HASH|
|  6 |       STATISTICS COLLECTOR  |          |      |       |            |          |       |       | Q1,00 | PCWC |            |
|  7 |        PX PARTITION LIST ALL|          |    1 |   381 |            |          |     1 |   254 | Q1,00 | PCWC |            |
|* 8 |         FIXED TABLE FULL    | X$CDBVW$ |    1 |   381 |            |          |       |       | Q1,00 | PCWP |            |
|  9 |     PX RECEIVE              |          |    1 |   845 |            |          |       |       | Q1,02 | PCWP |            |
| 10 |      PX SEND HYBRID HASH    | :TQ10001 |    1 |   845 |            |          |       |       | Q1,01 | P->P | HYBRID HASH|
| 11 |       PX PARTITION LIST ALL |          |    1 |   845 |            |          |     1 |   254 | Q1,01 | PCWC |            |
| 12 |        FIXED TABLE FULL     | X$CDBVW$ |    1 |   845 |            |          |       |       | Q1,01 | PCWP |            |
-------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("A"."OBJECT_NAME"="B"."TABLE_NAME")
8 - filter("A"."OBJECT_TYPE"='TABLE')

25 rows selected.






This looks like a very useful feature for getting a high-level view across all PDBs.  The things to note are:

1) You need to use a common user, and this user needs to be the schema owner in all PDBs.
2) Looking at the plan, there is some internal fixed view (X$CDBVW$) that does this, iterating over the containers the way partitions are iterated.



M7 is Here

Yes, I know I am more of a software geek than a hardware geek, but I spent the day listening to all the goodness of the new SPARC M7 chip.. and wow..

This was announced at OOW '15, and you can find a lot of the information here.

I think what excited me wasn't just the benchmarks (which you can find here), it was the idea of Software-on-Silicon.

That's the big story for software geeks.. The idea of the DAX...

I know DAX sounds like something out of Dr. Seuss.

The DAX (Data Analytics Accelerator) is a special section of the new processor dedicated to In-Memory processing.

If you have read through Maria Colgan's blog (which you should), you learn about how In-Memory takes advantage of the SIMD instruction set available on the Intel chip.  SIMD instructions are able to scan multiple rows of data in one CPU cycle.  That's part of what makes the In-Memory option process data so fast.

What does this have to do with the M7?  The DAX replaces the SIMD instructions when you are running in-memory queries on the M7.  The DAX is specifically built to run this instruction set, and the results are then fed to the CPU.




What does this mean for you? It means that the DAX is able to not only process the data faster than the SIMD processing on Intel, but it also does not use any of the CPU power to execute the In-Memory scanning.  You get faster performance, and you use less CPU.
That's the point of the DAX and Software-on-Silicon: faster performance, with silicon built specially for an Oracle workload (In-Memory in this case).












ZDLRA

I wanted to write a post on one of Oracle's newest products (well, not that new).. the ZDLRA.
The ZDLRA is often referred to as "Zelda".  I know the name ZDLRA does not roll off the tongue well; Zelda is a much better (and easier to say) name.
The other name you will hear the ZDLRA referred to by is RA, or Recovery Appliance.

Recovery Appliance is probably the best description of the product.  One of the things that makes this product unique is the emphasis on RECOVERY.. notice there is no mention of backup in the name.


Here is a great starting point for information


It does a lot of its magic by using incremental forevers --

    I know what you are thinking... the incremental-forever strategy has been around for a long time (since 10.2, I think).  The idea is simple.  You take a full backup (a database copy, NOT a backup set), and then take incrementals from then on.  You use RMAN to apply each incremental to the full, creating a new full (destroying the old full in the process).  I've seen many customers (and me also) use rolling incrementals in the online recovery area to keep the previous day's full backup online.
This is used alongside a second backup strategy for longer-term storage on a backup device.

  The RA handles this differently: it creates a "virtual full" for each incremental backup you take.  You also tell it how far back to keep virtual fulls.  Using this methodology, if you do a nightly incremental backup, you can keep a "virtual full" from each night, as far back as you want.
There is no need to keep one backup online and another on a backup appliance.

Why is the RA different from most backup strategies ?

1) Incremental forevers use less I/O to read database blocks and less backup network I/O.  Using this method, the RA needs ONLY the incremental backups to maintain a restore point.  This saves the I/O of taking fulls, and it saves the bytes going across the network.

2) RMAN keeps track of all the backups.  RMAN is the backbone of the RA, and the RA contains a recovery catalog.  RMAN verifies that backups are good, and you will always know if you have a good backup to restore.

3) Real-time apply.  You can think of the RA receiving redo log information in the same vein as a Data Guard database.  Without the RA, you would back up archive logs as they are written from the redo logs; this leaves you open to data loss from your backup.  The RA reads from the current redo log stream in the database (like Data Guard) to ensure there is almost no data loss. Nothing else does this (a configuration sketch follows this list).

4) Performance.  The performance of the RA is phenomenal.  You can find a whitepaper here on the performance with multiple databases backing up to a single RA appliance.
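
Going back to item 3: conceptually the setup looks like a Data Guard redo destination pointing at the RA. A hypothetical sketch only (the destination number, TNS alias, and DB_UNIQUE_NAME are made up, and the real setup also involves registering the protected database in the RA catalog):

alter system set log_archive_dest_3 =
  'SERVICE=zdlra_tns VALID_FOR=(ALL_LOGFILES,ALL_ROLES) ASYNC DB_UNIQUE_NAME=zdlra9'
  scope=both;

alter system set log_archive_dest_state_3 = enable scope=both;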

This is a fantastic product for backing up multiple databases and, most importantly, being able to recover your databases with next-to-no data loss.

ZDLRA and the FRA



I often get questions about the FRA (Flash Recovery Area) and how it should be used when moving backups to the ZDLRA.

First, though, the recommendation is to ALWAYS set your db_recovery_file_dest_size to be 10% less than the amount of space available, and don't put other files (that are not managed as part of the FRA) in this same location.
Having a 10% buffer ensures that you can increase the available storage if necessary.  Experienced on-call DBAs know there have been times when increasing db_recovery_file_dest_size by that last 10% kept the database running while space was cleaned up.. and of course this is often at 3:00 AM, when the dreaded "archive log destination full" alert comes across.



First let's go through what's in the FRA and how it's being used.   


There is a lot of information in MOS and I will include pertinent MOS notes at the end of this post.

What's in the FRA (V$FLASH_RECOVERY_AREA_USAGE shows us)?

Here is a sample output 


SQL> Select file_type, percent_space_used as used,
percent_space_reclaimable as reclaimable, 
    number_of_files as "number" from v$flash_recovery_area_usage; 
     
    FILE_TYPE          USED RECLAIMABLE     number 
    ------------ ---------- ----------- ---------- 
    CONTROLFILE           0           0          0 
    ONLINELOG             0           0          0 
    ARCHIVELOG         4.77           0          2 
    BACKUPPIECE       56.80           0         10 
    IMAGECOPY             0           0          0 
    FLASHBACKLOG      11.68       11.49         63 



From this you can see the following items are in the FRA.


CONTROLFILE -- This comes from configuring controlfile autobackups without an explicit location:


configure controlfile autobackup on;

If you configure controlfile backups using the FORMAT option, they will not be managed by the FRA.
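
For example (the disk path is just an illustration):

-- managed by the FRA (no FORMAT clause):
CONFIGURE CONTROLFILE AUTOBACKUP ON;

-- NOT managed by the FRA, because FORMAT pins an explicit location:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backup/%F';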


ONLINELOG


A copy of the online redo logs goes to the FRA when DB_RECOVERY_FILE_DEST is set and DB_CREATE_ONLINE_LOG_DEST_n is not set.





ARCHIVELOG -- 

Archive logs are managed by the FRA when the archive LOG_ARCHIVE_DEST_n parameter contains the clause 'LOCATION=USE_DB_RECOVERY_FILE_DEST'

alter system set LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';

BACKUPPIECE or IMAGECOPY

RMAN backups are managed by the FRA when you configure RMAN to (or explicitly) send backups to disk AND the FORMAT option is not specified.

FLASHBACKLOG 

If flashback database is turned on for the database, flashback logs will be kept in the FRA automatically.  The database parameter db_flashback_retention_target is set to determine the amount of flashback logs that are kept for the database.
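
For reference, a minimal sketch of enabling it (the 24-hour window is just an example value):

alter system set db_flashback_retention_target = 1440 scope=both;
alter database flashback on;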

Now let's take a look at how space is managed for each piece.


ONLINELOG -- Since online redo logs are necessary for the database, they are not affected by space pressure.

FLASHBACKLOG -- Flashback logs are automatically removed to keep the window specified by the DB_FLASHBACK_RETENTION_TARGET setting.  If there is space pressure, flashback management automatically releases space until it hits a window of 1 hour.  This default comes from the _MINIMUM_DB_FLASHBACK_RETENTION parameter.

BACKUPPIECE or IMAGECOPY and/or CONTROLFILE and ARCHIVELOG -- these are managed by the RMAN retention and deletion policy settings.

The recommendation for the ZDLRA is to set the archive log deletion policy to:

CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO ALL STANDBY;


Now let's take a look at how space is managed (MOS note 315098.1).

There is a MOS note on this, but there are still some misconceptions about what you see happening.



Looking at the output of the view V$RECOVERY_FILE_DEST, you will see 3 pertinent columns.

SPACE_LIMIT       ==> the amount of space allocated to the FRA
SPACE_USED        ==> the amount of space currently used
SPACE_RECLAIMABLE ==> the amount of space the FRA considers reclaimable.
                      This is NOT the total amount of space reclaimable, just the amount
                      that the FRA knows about (I'll get to what this means soon).


The note says that:

If the free space becomes less than 15% in the Flash Recovery Area, then all the archivelogs in the Flash Recovery Area which are not needed for recovery by the current backups in the FRA become obsolete, and the space they occupy is shown in the SPACE_RECLAIMABLE column of V$RECOVERY_FILE_DEST.

From this I made some assumptions that were incorrect.

I assumed:

1) Since I was immediately sending all backups and archive logs to the ZDLRA (real-time apply), the space for the ARCHIVELOGs would all show up in the SPACE_RECLAIMABLE column.
2) When the FRA reached 85% full, it would automatically clean up ARCHIVELOGs to bring it back down to 85%.

Both of these assumptions were wrong.  Through testing I found what actually happens, and it still falls within the verbiage in that note.


This is what happens as the space fills up.  I have a 1 TB FRA.


849 GB used in the FRA.  Reclaimable space is NOT calculated yet, because we have not hit the 85% full mark.

Space_limit             1000GB
Space_used              849GB
Space_reclaimable       0GB


At 850 GB I reached 85% full. This is the point where the database calculates SPACE_RECLAIMABLE.  Note that it does not fully calculate what's reclaimable; it only finds a portion of the reclaimable space.
 

Space_limit             1000GB
Space_used              850GB
Space_reclaimable       300GB
 

At 999 GB it's not quite full.  The reclaimable space shows there is space available,
but it still only finds a portion of the available space.
 
Space_limit             1000GB
Space_used              999GB
Space_reclaimable       160GB
 

At 1000 GB it is completely full. The reclaimable space shows space available to be reclaimed.
At this point I can see in the alert log that archive logs are being removed.
Only enough logs are removed to make space for new logs; it remains at 99% used.

Space_limit             1000GB
Space_used              999GB
Space_reclaimable       160GB
 

As I added more logs, it remained at 99% used.

This makes it very difficult to know how much space you actually have available in your FRA.
The warnings start occurring at the 85% full mark.
Since the FRA recalculates SPACE_RECLAIMABLE at 85% full, but only adjusts it enough to keep from warning,
it is impossible to tell how much TOTAL space is reclaimable.

Using the formula

SPACE_LIMIT - SPACE_RECLAIMABLE

does not give you the amount of space that is actually reclaimable.
It is only useful for telling you when the amount of unreclaimable space is > 85%.
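
Given that behavior, the number worth watching is the percentage of space that is NOT reclaimable, since that is what drives the 85% warnings; a sketch:

select name,
       space_limit,
       space_used,
       space_reclaimable,
       round((space_used - space_reclaimable) / space_limit * 100) pct_unreclaimable
from   v$recovery_file_dest;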



Here are the useful MOS notes.



NOTE:305817.1 - FAQ - Flash Recovery Area feature


How is the space pressure managed in the Flash Recovery Area - An Example. (Doc ID 315098.1)

Correctly configuring the Flash Recovery Area to allow the release of reclaimable space (Doc ID 316074.1)


Creating Popup windows on your Apex Page

I have been playing with Apex for an internal application.  Application Express is a great tool, and Oracle has an internal Apex environment that groups can use for their own internal applications.

In creating the application, I learned how to make a JavaScript window that pops up within a page to help enter data. This can be very useful for adding a function to your application without adding more pages.


This is how it's done..

First I created a table to contain breweries.  Since this is running on 12.1, I was able to use the new feature to automagically use a sequence as a default value (it's about time, right?).

Here is my table creation script..


CREATE SEQUENCE brewery_seq
 START WITH     1000
 INCREMENT BY   1
 NOCACHE
 NOCYCLE;

Create table breweries
(brewery_id number default brewery_seq.nextval primary key,
         brewery_name varchar2(255),
         Brewery_rating number);
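
As a quick sanity check that the sequence default fires when BREWERY_ID is omitted (hypothetical values):

insert into breweries (brewery_name, brewery_rating)
values ('Test Brewery', 5);

select brewery_id, brewery_name, brewery_rating from breweries;
-- BREWERY_ID comes back as 1000, the sequence's START WITH value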

So the first thing I did was create a new region on a new page.



Is corruption always true corruption ?


Well, the short answer is that there are different types of corruption.  I am writing this blog because I recently ran across "nologging" corruption, which is considered soft corruption, and the handling of this type of corruption has changed across versions (which I will cover at the end of this article).

First, how does this happen?  It can be seen in a physical standby, but you can also get this type of corruption in a primary database.  I will go through how to reproduce it shortly.

Next, what is nologging?  Nologging processes are direct path writes to the database that do not contain detailed log information.  Notice I used the word "detailed".  This is because some logging is captured as to which blocks are updated, but the individual changes are not captured.
Nologging is most common in data warehouse load processes (ELT) that are part of a workflow that can be restarted.  Often tables created during this processing are only kept as part of the processing.  Nologging can also be used for performing large inserts into existing tables.  Because this type of processing tends to be "logging intensive", and steps can be re-run, nologging is utilized for these objects.  Nologging can speed up processing by performing limited logging.  The downside of nologging is that for objects updated with nologging, there is no point-in-time recovery capability.  The object can only be restored/recovered to the point where a backup was taken (full or incremental).  More on that later.

I will start by showing a nologging workflow.

Below are the steps on how to reproduce a nologging test.


  1)      Ensure that force_logging=false   ---  If FORCE_LOGGING is turned on, any nologging processing is handled as logging



SQL> select force_logging from v$database;


FOR
---
NO


2)   Create a nologging table


SQL>  create table bgrenn.test nologging as select * from dba_objects where 0=1;

Table created.
 

3)      Ensure the new table is nologging


SQL> select owner,table_name,logging from dba_tables where owner='BGRENN';

OWNER                          TABLE_NAME                     LOG
------------------------------ ------------------------------ ---
BGRENN                         TEST                           NO


4)      Perform a full backup of the database


RMAN> backup incremental level 0 database;

Starting backup at 23-february -2018 12:45:45
using target database control file instead of recovery catalog

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1069 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/oradata/db1/soe.dbf
channel ORA_DISK_7: starting incremental level 0 datafile backup set
channel ORA_DISK_7: specifying datafile(s) in backup set
input datafile file number=00004 name=/oradata/db1/users01.dbf
channel ORA_DISK_7: starting piece 1 at 23-february -2018 12:46:22
channel ORA_DISK_8: starting incremental level 0 datafile backup set
channel ORA_DISK_8: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_8: starting piece 1 at 23-february -2018 12:46:28
channel ORA_DISK_2: finished piece 1 at 23-february -2018 12:46:30

….
Finished backup at 23-february -2018 12:50:01


5)      Insert into the table with direct path (the append hint)

SQL> insert /*+ append */ into bgrenn.test select * from dba_objects;

68947 rows created.

SQL> Commit;



6)      Switch the logfile to ensure the changes are written to archive logs.

SQL> alter system switch logfile;




OK. Now we have done a Full backup of the database, and performed a nologging change to my table “bgrenn.test”.  I did a log switch to ensure the change is written to the archive log.

The next step is to reproduce the nologging "soft corruption" through a restore.
At this point, blocks containing my table were inserted into, but the actual changes were not logged.  Only the block numbers were written to the log file, and on recovery these blocks will be marked as unrecoverable (loaded with NOLOGGING).


1)      Check for block corruption before restoring

SQL> select * from v$database_block_corruption;

no rows selected


2)      Restart the database in mount mode and restore the database

RMAN> connect target /

connected to target database: DB16 (DBID=3618808394)

RMAN> shutdown immediate;

using target database control file instead of recovery catalog
database closed
database dismounted
Oracle instance shut down


RMAN> startup mount;

connected to target database (not started)
Oracle instance started
database mounted

Total System Global Area   14564409344 bytes

Fixed Size                     2149720 bytes
Variable Size               6308233896 bytes
Database Buffers            8187281408 bytes
Redo Buffers                  66744320 bytes

RMAN> restore database;

Starting restore at 23-february -2018 12:55:54
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=1110 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Secure Backup
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1109 device type=DISK
channel ORA_DISK_5: starting datafile backup set restore
channel ORA_DISK_5: specifying datafile(s) to restore from backup set
channel ORA_DISK_5: restoring datafile 00004 to /oradata/db1116/users01.dbf
channel ORA_DISK_5: reading from backup piece /u01/app/oracle/flash_recovery_area/DB1116/backupset/2018_02_23/o1_mf_nnnd0_TAG20180223T12454
7_f90vwp3j_.bkp
Finished restore at 23-february -2018 12:57:41






3)      Recover database


RMAN> recover database;

Starting recover at 23-february -2018 12:58:22
using channel ORA_SBT_TAPE_1
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
using channel ORA_DISK_5
using channel ORA_DISK_6
using channel ORA_DISK_7
using channel ORA_DISK_8

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 23-february -2018 12:58:23

RMAN> alter database open;

database opened

RMAN>




4)      Check for corruption after restoring the database


SQL> select * from v$database_block_corruption;

no rows selected


5)      Select from the table in which we ran our nologging process



SQL> select * from bgrenn.test;
select * from bgrenn.test
                     *
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 4, block # 60)
ORA-01110: data file 4: '/oradata/db1/users01.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option




6)      Check corruption again


SQL> select * from v$database_block_corruption;

no rows selected



7)       Validate datafile



RMAN> validate datafile 4;

Starting validate at 23-february -2018 14:15:36
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1107 device type=DISK
channel ORA_DISK_8: SID=1073 device type=DISK
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00004 name=/oradata/db1/users01.dbf
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
List of Datafiles
=================



File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
4    OK     1016           307          1440            1760824163
  File Name: /oradata/db1/users01.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              63             
  Index      0              2              
  Other      0              1068           

Finished validate at 23-february -2018 14:15:42


Now this is where it gets interesting between versions of Oracle.  Oracle 10g/11.1 reports this soft corruption differently from Oracle 11.2, and both of these report it differently from 12.1+.

Oracle 11.1   -
1)      Check for corruption in V$DATABASE_BLOCK_CORRUPTION  -- NOTE: the CORRUPTION_TYPE column reports these blocks simply as CORRUPT



SQL> select * from v$database_block_corruption;

     FILE#     BLOCK#     BLOCKS CORRUPTION_CHANGE# CORRUPTIO
---------- ---------- ---------- ------------------ ---------
         4         60         13         1760822701 CORRUPT
         4         74         15         1760822716 CORRUPT
         4         90         15         1760822716 CORRUPT
         4        106         15         1760822730 CORRUPT
         4        122         15         1760822730 CORRUPT
         4        138         15         1760822745 CORRUPT
         4        154         15         1760822745 CORRUPT
         4        170         15         1760822759 CORRUPT
         4        267        126         1760822759 CORRUPT
         4        395        126         1760822763 CORRUPT
         4        523        126         1760822767 CORRUPT
         4        651        126         1760822771 CORRUPT
         4        779        126         1760822775 CORRUPT
         4        907        126         1760822779 CORRUPT
         4       1035        126         1760822784 CORRUPT
         4       1163         16         1760822788 CORRUPT


2)      Definition  of  CORRUPTION_TYPE

·         ALL ZERO - Block header on disk contained only zeros. The block may be valid if it was never filled and if it is in an Oracle7 file. The buffer will be reformatted to the Oracle8 standard for an empty block.
·         FRACTURED - Block header looks reasonable, but the front and back of the block are different versions.
·         CHECKSUM - optional check value shows that the block is not self-consistent. It is impossible to determine exactly why the check value fails, but it probably fails because sectors in the middle of the block are from different versions.
·         CORRUPT - Block is wrongly identified or is not a data block (for example, the data block address is missing)
·         LOGICAL - Specifies the range is for logically corrupt blocks. CORRUPTION_CHANGE# will have a nonzero value

3)      OEM Schedule Backup screen shows corruption

Oracle 11.2   -
1)      Check for corruption in V$DATABASE_BLOCK_CORRUPTION  -- NOTE: 11.2 reports the soft corruption in the view as NOLOGGING corruption.



SQL> select * from v$database_block_corruption;

     FILE#     BLOCK#     BLOCKS CORRUPTION_CHANGE# CORRUPTIO
---------- ---------- ---------- ------------------ ---------
         4         60         13         1760822701 NOLOGGING
         4         74         15         1760822716 NOLOGGING
         4         90         15         1760822716 NOLOGGING
         4        106         15         1760822730 NOLOGGING
         4        122         15         1760822730 NOLOGGING
         4        138         15         1760822745 NOLOGGING
         4        154         15         1760822745 NOLOGGING
         4        170         15         1760822759 NOLOGGING
         4        267        126         1760822759 NOLOGGING
         4        395        126         1760822763 NOLOGGING
         4        523        126         1760822767 NOLOGGING
         4        651        126         1760822771 NOLOGGING
         4        779        126         1760822775 NOLOGGING
         4        907        126         1760822779 NOLOGGING
         4       1035        126         1760822784 NOLOGGING
         4       1163         16         1760822788 NOLOGGING



2)      Definition  of  CORRUPTION_TYPE
·         ALL ZERO - Block header on disk contained only zeros. The block may be valid if it was never filled and if it is in an Oracle7 file. The buffer will be reformatted to the Oracle8 standard for an empty block.
·         FRACTURED - Block header looks reasonable, but the front and back of the block are different versions.
·         CHECKSUM - optional check value shows that the block is not self-consistent. It is impossible to determine exactly why the check value fails, but it probably fails because sectors in the middle of the block are from different versions.
·         CORRUPT - Block is wrongly identified or is not a data block (for example, the data block address is missing)
·         LOGICAL - Block is logically corrupt
·         NOLOGGING - Block does not have redo log entries (for example, NOLOGGING operations on primary database can introduce this type of corruption on a physical standby)

3)      OEM Schedule Backup screen shows corruption



Oracle 12.1   -
1)      Check for corruption in V$DATABASE_BLOCK_CORRUPTION  -- NOTE: 12.1 does not report it as corruption; it is reported in the new view V$NONLOGGED_BLOCK


SQL> select * from v$database_block_corruption;

no rows selected



SQL> select file#,block#,blocks from v$nonlogged_block;

     FILE#     BLOCK#     BLOCKS
---------- ---------- ----------
         6       6786        126
         6       6914        126
         6       7042        126
         6       7170        126
         6       7298        126
         6       7426        126
         6       7554        126
         6       7682        126
         6       7810        126
         6       7938        126
         6       8066        126

     FILE#     BLOCK#     BLOCKS
---------- ---------- ----------
         6       8194        126
         6       8322        126
         6       8512         64

47 rows selected.




2)      Definition  of  CORRUPTION_TYPE
·         ALL ZERO - Block header on disk contained only zeros. The block may be valid if it was never filled and if it is in an Oracle7 file. The buffer will be reformatted to the Oracle8 standard for an empty block.
·         FRACTURED - Block header looks reasonable, but the front and back of the block are different versions.
·         CHECKSUM - optional check value shows that the block is not self-consistent. It is impossible to determine exactly why the check value fails, but it probably fails because sectors in the middle of the block are from different versions.
·         CORRUPT - Block is wrongly identified or is not a data block (for example, the data block address is missing)
·         LOGICAL - Block is logically corrupt

3)      OEM Schedule Backup screen

Nothing appears


Now we have seen how to recreate “soft corruption” caused by nologging.  I have also shown how this is displayed in different versions of Oracle.

There are a few items to note that I have learned from this testing.

·        This is considered “soft corruption”, so it is not reported when restoring a database.  This makes it very hard to detect.

·        The ZDLRA does validation during backups, but since this is “soft corruption”, the database is backed up without any alerting.

·        OEM reports this corruption differently between versions.  With version 12.1 it is no longer reported in V$DATABASE_BLOCK_CORRUPTION, so OEM does not alert on this.

How to avoid Nologging corruption.

Ensure that you schedule backups when no nologging operations are occurring.  This is a situation where the ZDLRA shines: you can take an incremental backup before and after your nologging process, and then you have the capability to perform a full restore from either of these checkpoints, as sketched below.
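
A sketch of that approach (the INSERT mirrors the test case above; the scheduling is up to you):

RMAN> backup incremental level 1 database;      # checkpoint before the nologging load

SQL> insert /*+ append */ into bgrenn.test select * from dba_objects;
SQL> commit;

RMAN> backup incremental level 1 database;      # the nonlogged blocks are now captured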





Backup and Recovery of Multitenant Databases

This post covers what you need to consider around backup/recovery when implementing true multitenant databases (More than 1 PDB).

First the architecture of CDB and PDBs that matter with recovery.


What you notice (and what I highlighted) is that redo logs, archived redo logs, and flashback logs are associated with the CDB.

I verified this by querying the v$parameter view to display where these parameters can be modified:
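
A sketch of the query that produces the output below (using the standard ISxxx_MODIFIABLE columns of v$parameter):

    SQL> select name "Parameter Name", isses_modifiable "Session", issys_modifiable "System",
         isinstance_modifiable "Instance", ispdb_modifiable "PDB"
         from v$parameter
         where name in ('log_archive_dest_1','db_recovery_file_dest');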
    Parameter Name                 Session    System     Instance   PDB
    ------------------------------ ---------- ---------- ---------- ----------
    log_archive_dest_1             TRUE       IMMEDIATE  TRUE       FALSE
    db_recovery_file_dest          FALSE      IMMEDIATE  FALSE      FALSE


    This is important to understand when considering backup/recovery.  This means:

    •  Archiving is an all or nothing choice.  The CDB is either in ARCHIVELOG or NOARCHIVELOG mode.  All PDBs inherit this from the CDB.
    • Force logging is at the CDB level, the PDB level and the tablespace level.
    • There is one set of archive logs and redo logs.
    • DG is all or nothing.
    • There is one location for flashback logs
    • There is one BCT file (block change tracking).  
    • PDB recovery is within the same CDB
    Let's understand the implications one step at a time.

    Archiving is all or nothing.

    If you want to perform a large nologging load into a PDB, which I commonly do for POCs, you cannot turn off archive logging for just that PDB.

    Make sure you group together databases that share the noarchivelog requirement.  In my case, where I perform POCs and need to load data quickly, I can load the PDB in a noarchivelog container, then unplug it and plug it into the proper container later, as sketched below.
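
    A minimal sketch of that unplug/plug flow (the PDB name and XML path are hypothetical, and NOCOPY assumes the datafiles are visible to both CDBs):

    -- in the NOARCHIVELOG "loader" CDB, once the load is done
    SQL> alter pluggable database poc_pdb close immediate;
    SQL> alter pluggable database poc_pdb unplug into '/tmp/poc_pdb.xml';

    -- in the ARCHIVELOG "production" CDB
    SQL> create pluggable database poc_pdb using '/tmp/poc_pdb.xml' nocopy;
    SQL> alter pluggable database poc_pdb open;

    Since nothing loaded in the first CDB generated redo, take a full backup of the PDB as soon as it lands in the archivelog CDB.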



    Force logging is at the CDB level, the PDB level and the tablespace level.


    There are now 3 levels at which you can set/override force logging.  Here is a great article going through the possible logging levels.  The most interesting one I saw was the PDB-level logging and the introduction of this command:

    SQL> alter pluggable database pdb1 disable force logging;

    Pluggable database altered.

    SQL> select pdb_name, logging, force_logging, force_nologging from cdb_pdbs;

    PDB_NAME   LOGGING   FORCE_LOGGING   FORCE_NOLOGGING
    ---------- --------- --------------- ---------------
    PDB1       LOGGING   NO              NO

    Enabling force logging is recommended when using Dataguard or GoldenGate, to ensure you capture all changes.  However, there may be some controlled circumstances where you want to have nologging operations.  In my prior blog I wrote about the issues that can occur with logical corruption after recovery.


    There is one set of archive logs and redo logs.


    This is something you need to keep in mind for very active databases.  If you combine 2 databases into the same CDB, you may have to double the size/quantity of the redo logs.  It will also affect the recovery time of a single PDB: the archive logs restored and read will contain the transactions of all PDBs.

    The extra workload of the combined logging is something to keep in mind; the sketch below is one way to quantify it.
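
    A sketch using the standard v$archived_log columns -- run it on each database you plan to consolidate and add the results together when sizing the CDB's redo:

    SQL> select trunc(completion_time) day,
         round(sum(blocks * block_size)/1024/1024/1024,1) redo_gb
         from v$archived_log
         group by trunc(completion_time)
         order by 1;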


    DG is all or nothing.


    Yes, as you would imagine from above, the entire redo log stream goes to the Dataguard database and gets applied, and every PDB has a Dataguard PDB.

    This is something to think about as it's a little different from non-CDB procedures.

      For new PDBs


            If you have active DG, the Dataguard PDB will be cloned from the seed database just like it is on the primary database.

          If you don't have active DG, you have to copy the datafiles from the primary database (or restore them) onto the Dataguard database server.

      For existing PDBs unplugging and plugging


            If you want to keep DG you have to plug into a CDB that already has DG configured.  You would unplug from the primary database, and unplug from the standby database.

           When plugging in, you plug into the standby CDB first, then into the primary CDB.  This ensures that the standby database is ready when log information starts coming over from the primary CDB.


    There is one location for flashback logs


    This item isn't a big deal.  All the archive logs and all the flashback logs go to the same location.  The size of the fast recovery area is managed as a whole.

    There is one BCT file (block change tracking). 


    This means that when you unplug/plug between CDBs, the block change tracking information does not move with the PDB, so you cannot continue your incremental backup strategy in the new CDB.

    I did notice that the V$BLOCK_CHANGE_TRACKING view contains the container ID, as sketched below.  I also noticed that 18.1 includes the ability to restore between CDBs (I believe).
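
    A sketch of that check (CON_ID is the standard container column the V$ views gained in 12c):

    SQL> select con_id, status, filename from v$block_change_tracking;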

    PDB recovery is within the same CDB


    I believe this changed with 18.1, but I'm not positive.  When you register your "database" in the RMAN catalog, the CDB database gets registered.  All the PDBs within the CDB are backed up under the CDB's DBID, and each individual PDB's backups are cataloged within the CDB by container ID.  Yes, you can back up PDBs individually, and you can recover PDBs individually.  However, once you unplug from the CDB and plug into another CDB, you need to take a new full backup: from an RMAN catalog standpoint, the PDB is now part of a new database.

    Below is an example of the output from:

    RMAN> List backup of pluggable database pocpdb2;


            Piece Name: +RECOC1/POCCDB/675073AC990B6478E05370A1880AEC9C/BACKUPSET/2018_03_14/nnndn1_tag20180314t131530_0.330.970751731
    List of Datafiles in backup set 498531
    Container ID: 4, PDB Name: POCPDB2
    File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
    ---- -- ---- ---------- --------- ----------- ------ ----
    15   1  Incr 111150927  14-MAR-18             NO     +DATAC1/POCCDB/675073AC990B6478E05370A1880AEC9C/DATAFILE/system.298.970659025
    16   1  Incr 111150927  14-MAR-18             NO     +DATAC1/POCCDB/675073AC990B6478E05370A1880AEC9C/DATAFILE/sysaux.299.970659025
    17   1  Incr 111150927  14-MAR-18             NO     +DATAC1/POCCDB/675073AC990B6478E05370A1880AEC9C/DATAFILE/undotbs1.297.970659025
    18   1  Incr 111150927  14-MAR-18             NO     +DATAC1/POCCDB/675073AC990B6478E05370A1880AEC9C/DATAFILE/undo_2.301.970659041
    19   1  Incr 111150927  14-MAR-18             NO     +DATAC1/POCCDB/675073AC990B6478E05370A1880AEC9C/DATAFILE/users.302.970659041



    Notice that the Container ID and the PDB name are given.



    Backing up a PDB


    PDBs can be backed up with the CDB, or by themselves (just like datafiles can).

    RMAN> backup incremental level 1 database plus archivelog not backed up;

    A single PDB is backed up by specifying the PDB.

    RMAN> backup incremental level 1 pluggable database pocpdb2 plus archivelog not backed up;

    Restoring a PDB



    RMAN> alter pluggable database pocpdb1 close;
    RMAN> restore pluggable database pocpdb1;
    RMAN> recover pluggable database pocpdb1;
    RMAN> alter pluggable database pocpdb1 open;

    (The PDB needs to be closed before its datafiles can be restored.)



    If you are migrating from non-CDB to using multiple PDBs (true multitenant), you should think through all the ramifications for backup/recovery.

    ZDLRA "store and Forward " feature

    $
    0
    0
    Most people didn't notice, but there was a new feature added to the ZDLRA called "store and forward".

    Documentation on how to implement it is in the "ZDLRA Administration Guide" under the topic of implementing high availability strategies.

    Within that you will find the section on
    "Managing Temporary Outages with a Backup and Redo Failover Strategy".

    First lets take a look at the architecture.  

    • We have a database we are backing up "PROTDB"
    • We have 2 different ZDLRA's.
      • ZDLRA #1  "RA01" which is the preferred ZDLRA that backups and the redo log stream will go to
      • ZDLRA #2 "RA02" which is the alternate ZDLRA that backups and the redo log stream will go to in the event of an outage on the preferred ZDLRA

    NOTE : A database has to be unique within a ZDLRA. What this means is that the alternate ZDLRA cannot already used for replication or to backup a dataguard copy of the same database. 



    Now that we have defined the architecture let's go through the pieces that make up the store-and-forward methodology.

    1) Configure "RA01" to be the down stream replicated pair of RA02. 
    2) Ensure that the protected database is added to policies on both RAs (this process is described in the 12.2 admin guide)
    3) Ensure "PROTDB" is aware has a wallet entry for both RAs, and that it is properly registered (using the admin guide)
    3) Configure real-time redo apply using "RA01" as the primary RA and "RA02" as the alternate.

    Real-time redo isn't mandatory to use but it makes the switching over of redo a lot easier. I will show how the environment looks with real-time redo.  if you are manually sending archive logs and level 0 backups, the flow will be similar.


    Real-time Redo flow


    First lets take a look at the configuration for real-time redo.

    Below is the configuration. This is well described in this blog post.

    LOG_ARCHIVE_DEST_3=“SERVICE=<"RA01" string from wallet>”, VALID_FOR=(ALL_LOGFILES, ALL_ROLES) ASYNC DB_UNIQUE_NAME=’<"RA01" ZDLRA DB>’ noreopen alternate=log_archive_dest_4;

    log_archive_dest_state_3=enable;

    LOG_ARCHIVE_DEST_4=“SERVICE=<"RA02" string from wallet>”, VALID_FOR=(ALL_LOGFILES, ALL_ROLES) ASYNC DB_UNIQUE_NAME=’<"RA02" ZDLRA DB> ;
    LOG_ARCHIVE_STATE__4=alternate;


    Below is what the flow looks like.

    Redo log traffic and backups are sent from "PROTDB" to "RA01".  "RA02" (since it is the upstream pair of "RA01") is aware of the backups in it's RMAN catalog.




    Now let's take a look at the status of the destinations


    SQL> select dest_id, dest_name, status from 
    v$archive_dest_status where status <> 'INACTIVE';
    DEST_ID DEST_NAME STATUS
    ---------- --------------------- ---------
    1 LOG_ARCHIVE_DEST_3 VALID
    2 LOG_ARCHIVE_DEST_4 UNKNOWN

    Now lets see what happens when "RA01" can't be reached.



    SQL> select dest_id, dest_name, status from 
    v$archive_dest_status where status <> 'INACTIVE';
    DEST_ID DEST_NAME STATUS
    ---------- --------------------- ---------
    1 LOG_ARCHIVE_DEST_3 DISABLED
    2 LOG_ARCHIVE_DEST_4 VALID

    After the second failed attempt, the original destination is marked as disabled, and the alternate is valid.

    Below you can see that the redo logs, and the backups (Level 1) are being sent to "RA02".

    "PROTDB" connects to the catalog on "RA02" which is aware of the previous backups and synchronizes its backup information with the control file.

    This allows the next Level 1 incremental backup to be aware of the most current virtual full backup on "RA01".

    This also allows the redo log stream to continue where it left off with "RA01".  The RMAN catalog on "RA02" is aware of all redo logs backups on "RA01" and is able to continue with the next log.



    Now lets see what happens when "RA01" becomes available.

    When "RA01" becomes available, you start the replication flow downstream. This will allow all the backups (redo and/or Level 1) to replicate to "RA01" and be applied to the RA, and be in the RMAN catalog.

    Once this complete, RA01 will have virtualized any backups, along with storing and cataloging all redo logs captured.



    BUT, at this point the primary log destination is still disabled so we need to renable it to start the redo log flow back.



    SQL> alter system set log_archive_dest_state_3=enable;
    System altered.
    SQL> alter system set log_archive_dest_state_4=alternate;
    System altered.

    Once this is complete.  We are back to where we started.



    That's it.

    Store-and-forward is a great HA solution for capturing real-time redo log information to absorb any hiccups that may occur.

    Where is my space on DBFS


    I just ran into an issue on DBFS where I ran out of space.

    First, here is the df -k output:

    Filesystem            1K-blocks      Used  Available Use% Mounted on
    dbfs-dbfs_admin2@:/    20983808  11443696    9540112  55% /dbfs/dba


    OK, everything looks good.  I am using 11 GB and I have 9.5 GB available.

    I go to copy a file on the OS (you can see it is about 240 MB).  Lots of room.

     ls -al bsg.out
    -rw-r--r-- 1 oracle oinstall 240794862 May 18 11:37 bsg.out


    cp bsg.out bsg.out1
    cp: writing `bsg.out1': No space left on device
    cp: closing `bsg.out1': No space left on device


    So where is my space??  I found this query:

    set serveroutput on;
    declare
      v_segment_size_blocks number;
      v_segment_size_bytes  number;
      v_used_blocks         number;
      v_used_bytes          number;
      v_expired_blocks      number;
      v_expired_bytes       number;
      v_unexpired_blocks    number;
      v_unexpired_bytes     number;
    begin
      -- LOB overload of dbms_space.space_usage
      dbms_space.space_usage('DBFS_OWNER', 'LOB_SFS$_FST_12345', 'LOB',
        v_segment_size_blocks, v_segment_size_bytes,
        v_used_blocks, v_used_bytes, v_expired_blocks, v_expired_bytes,
        v_unexpired_blocks, v_unexpired_bytes);
      dbms_output.put_line('Segment Size blocks = '||v_segment_size_blocks);
      dbms_output.put_line('Segment Size bytes  = '||v_segment_size_bytes);
      dbms_output.put_line('Used blocks         = '||v_used_blocks);
      dbms_output.put_line('Used bytes          = '||v_used_bytes);
      dbms_output.put_line('Expired Blocks      = '||v_expired_blocks);
      dbms_output.put_line('Expired Bytes       = '||v_expired_bytes);
      dbms_output.put_line('UNExpired Blocks    = '||v_unexpired_blocks);
      dbms_output.put_line('UNExpired Bytes     = '||v_unexpired_bytes);
    end;
    /



    And I see this output

    Segment Size blocks = 2619024
    Segment Size bytes = 21455044608
    Used blocks = 1425916
    Used bytes = 11681103872
    Expired Blocks = 1190111
    Expired Bytes = 9749389312
    UNExpired Blocks = 0
    UNExpired Bytes = 0


    So, according to this, the segment is 21.4 GB:

    11.7 GB is used space
     9.7 GB is expired space
       0 GB is unexpired space

    So if I have 9.7 GB of expired space, why can't I use it?  My file is only about 240 MB, and I should have 9.7 GB available.

    So my questions out of this are (if anyone knows the answer):

    1) How does this happen, and how do I avoid it?

    2) How do I size tablespaces for DBFS?  They need more space available than I need for the file system.

    3) How do I monitor the sizing, since df -k does not report the unexpired bytes that are available to be used?

    4) How does the "retention" clause fit into this?  Retention defaults to "auto" rather than "none".  Can I set it to "none"?  And if so, what happens, and does this solve my problem?
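
    As a starting point on question 4, the retention setting of the underlying LOB can be checked in dba_lobs.  A sketch, using the owner and segment name from the dbms_space call above:

    SQL> select table_name, column_name, securefile, retention, pctversion
         from dba_lobs
         where owner = 'DBFS_OWNER' and segment_name = 'LOB_SFS$_FST_12345';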


    Oh, and I wanted to make sure that I included the output of what's in the tablespace.

    SEGMENT_NAME                 SEGMENT_TYPE       SEGMENT_SIZE
    ---------------------------- ------------------ ------------
    LOB_SFS$_FST_12345           LOBSEGMENT                20461
    T_ADMIN                      TABLE                        17
    IP_SFS$_FST_12345            INDEX                         4
    IPG_SFS$_FST_12345           INDEX                         3
    IG_SFS$_FST_12345            INDEX                         2
    SYS_IL0000095835C00007$$     LOBINDEX                      0




    UPDATE (6/13/12)  --

    After working with support on this, it was filed as a bug.  This occurred because I was using DBFS as a filesystem for my dbreplay capture.  After thinking about it, the dbcapture is probably the most intensive workload I could throw at DBFS: not only does it simultaneously write to multiple files, but it writes to those files across multiple nodes.  In my capture there were 4 nodes writing to hundreds of files at the same time.
       I will be testing the patch to see if it corrects the problem.  Support is telling me that the writing across multiple nodes is causing some of the issues.

    ZDLRA "Store and Forward " feature

    Most people didn't notice, but there was a new feature added to the ZDLRA called "store and forward".

    Documentation on how to implement it is in the "ZDLRA Administration Guide" under the topic of implementing high availability strategies. 

    Within that you will find the section on
    "Managing Temporary Outages with a Backup and Redo Failover Strategy". This section describes what I have called “store and forward”.

    ZDLRA offers the customer the ability to send backups (Redo logs and Level 1 backups) to an alternate ZDLRA location.  This provides an efficient HA solution for this information if the primary ZDLRA can't be reached.


    Now in order to explain how Store and Forward works, first lets take a look at the architecture.  

    • We have a database we are backing up called "PROTDB"
    • We have 2 different ZDLRA's. Store and Forward requires a minimum of 2 ZDLRA appliances in a datacenter. In this case some of the databases have one of their ZDLRAs as their backup target and the remaining databases have the other ZDLRA as their backup target.
    • For databases backing up to ZDLRA #1 "RA01" will be the preferred ZDLRA that their Level 1 backups and the redo log stream will go to.  ZDLRA #2 "RA02" will be the alternate ZDLRA that Level 1 backups and the redo log stream will go to in the event of an outage communicating with preferred ZDLRA "RA01".
    • The reverse will be true for databases backing up to ZDLRA #2 with the alternate being ZDLRA #1

      NOTE: A database has to be unique within a ZDLRA.  What this means is that the alternate ZDLRA cannot already be used for replication or to back up a Dataguard copy of the same database.



      Now that we have defined the architecture let's go through the pieces that make up the store-and-forward methodology.

      First, however, I will define what I mean by "upstream" and "downstream".

      UPSTREAM - This is the ZDLRA that sends replicated backup copies.  

      DOWNSTREAM - This is the ZDLRA that receives the replicated backup copies.

      A ZDLRA can act as both an UPSTREAM and a DOWNSTREAM. This is common when a customer has 2 active datacenters.  Each ZDLRA acts as both an Upstream (receiving backups directly) and as a Downstream (receiving replicated backups).

      In the store-and-forward methodology backups are sent to the Downstream as the primary, and the Upstream as the Alternate.  This allows for backups to replicate from the Alternate (Upstream) to the Primary (Downstream).  This will be explained as you walk through flow.

      Configuring Store-and-Forward




      1) Configure "RA01" to be the downstream replicated pair of "RA02".
      2) Ensure that the protected database ("PROTDB") is added to policies on both RAs (this process is described in the 12.2 admin guide).
      3) Ensure "PROTDB" has wallet entries for both RAs, and that the database is properly registered in both RMAN catalogs (using the admin guide).
      4) Configure real-time redo apply using "RA01" as the primary RA and "RA02" as the alternate.

      NOTE: Real-time redo isn't mandatory, but it makes switching the redo stream over a lot easier.  I will show how the environment looks with real-time redo.  If you are manually sending archive logs and level 0 backups, the flow will be similar.


      Real-time Redo flow


      First lets take a look at the configuration for real-time redo.

      Below is the configuration for a database with both a primary and an alternate ZDLRA.  Working with an alternate destination is well described in this blog post.



      Primary ZDLRA (RA01) configuration


      LOG_ARCHIVE_DEST_3='SERVICE=<"RA01" string from wallet> VALID_FOR=(ALL_LOGFILES,ALL_ROLES) ASYNC DB_UNIQUE_NAME=<"RA01" ZDLRA DB> NOREOPEN ALTERNATE=LOG_ARCHIVE_DEST_4';

      LOG_ARCHIVE_DEST_STATE_3=enable;


      Alternate ZDLRA (RA02) configuration


      LOG_ARCHIVE_DEST_4='SERVICE=<"RA02" string from wallet> VALID_FOR=(ALL_LOGFILES,ALL_ROLES) ASYNC DB_UNIQUE_NAME=<"RA02" ZDLRA DB>';
      LOG_ARCHIVE_DEST_STATE_4=alternate;


      Below is what the flow looks like.

      Redo log traffic and backups are sent from "PROTDB" to "RA01".  "RA02" (since it is the upstream pair of "RA01") is aware of the backups in its RMAN catalog.





      Now let's take a look at the status of the destinations


      SQL> select dest_id, dest_name, status
           from v$archive_dest_status where status <> 'INACTIVE';

         DEST_ID DEST_NAME             STATUS
      ---------- --------------------- ---------
               1 LOG_ARCHIVE_DEST_3    VALID
               2 LOG_ARCHIVE_DEST_4    UNKNOWN

      You can see that the redo logs are sent to DEST_3 ("RA01"), and DEST_4 ("RA02") is not active.



      Now let's see what happens when "RA01" can't be reached.




      SQL> select dest_id, dest_name, status
           from v$archive_dest_status where status <> 'INACTIVE';

         DEST_ID DEST_NAME             STATUS
      ---------- --------------------- ---------
               1 LOG_ARCHIVE_DEST_3    DISABLED
               2 LOG_ARCHIVE_DEST_4    VALID

      After the second failed attempt, the original destination is marked as disabled, and the alternate is valid.

      Below you can see that the redo logs, and the backups (Level 1) are being sent to "RA02".

      "PROTDB" connects to the catalog on "RA02" which is aware of the previous backups and synchronizes its backup information with the control file.

      This allows the next Level 1 incremental backup to be aware of the most current virtual full backup on "RA01".

      This also allows the redo log stream to continue where it left off with "RA01".  The RMAN catalog on "RA02" is aware of all redo log backups on "RA01" and is able to continue with the next log, which you can verify as sketched below.
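
      A sketch of that verification (the connect strings are placeholders):

      RMAN> connect target /
      RMAN> connect catalog <vpc user>@<"RA02" catalog service>
      RMAN> list backup of archivelog from time 'sysdate-1';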



      Now let's see what happens when "RA01" becomes available.


      When "RA01" becomes available, you start the replication flow downstream. This will allow all the backups (redo and/or Level 1) to replicate to "RA01", be applied to the RA, and update the RMAN catalog.

      Once this is complete, RA01 will have virtualized any backups, along with storing and cataloging all redo logs captured.



      BUT, at this point the primary log destination is still disabled, so we need to re-enable it to move the redo log flow back.



      SQL> alter system set log_archive_dest_state_3=enable;
      System altered.
      SQL> alter system set log_archive_dest_state_4=alternate;
      System altered.

      Once this is complete, we are back to where we started.




      That's it.

      Store-and-forward is a great HA solution for capturing real-time redo log information to absorb any hiccups that may occur.

      Oracle RMAN restore/recovery prioritization

      This blog post is on what happens when you ask RMAN to restore and recover your database or a portion of your database (like a datafile).

      This came up with a colleague who asked about the order of recovery.  The question was: "If my archive logs are on disk for 48 hours, I performed a full backup 36 hours ago (to SBT_TAPE), and I performed an incremental backup 12 hours ago (to SBT_TAPE), then after restoring from SBT_TAPE, will RMAN recover from the archive logs on disk, or will it go to tape for the incremental backup first?"

      Well the answer is in the manual here.


      Incremental Backups and Archived Redo Log Files
      Except for RECOVER BLOCK, RMAN can use both incremental backups and archived redo log files for recovery. RMAN uses the following search order:
      1. Incremental backup sets on disk or tape
      2. Archived redo log files on disk
      3. Archived redo log backups on disk
      4. Archived redo log backup sets on tape

      Now let's see this in action.

      In my scenario, I am using the "USERS" tablespace.  The datafile I'm going to drop is datafile 7.

      Let's walk through what happens at each SCN interval.


      3435417  Full backup of datafile 7



      BS Key  Type LV Size       Device Type Elapsed Time Completion Time
      ------- ---- -- ---------- ----------- ------------ ---------------
      1       Incr 0  1.64G      DISK        00:01:30     14-MAY-18
              BP Key: 1   Status: AVAILABLE  Compressed: NO  Tag: TAG20180514T065607
              Piece Name: D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\DATABASE\01T2ROOB_1_1
        List of Datafiles in backup set 1
        File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
        ---- -- ---- ---------- --------- ----------- ------ ----
        7    0  Incr 3435417    14-MAY-18             NO     D:\APP\BGRENN\VIRTUAL\ORADATA\ORCL\USERS01.DBF



          Backup of archive logs to "defuzzy" the full backup (make it consistent)


      BS Key  Size       Device Type Elapsed Time Completion Time
      ------- ---------- ----------- ------------ ---------------
      5       143.22M    DISK        00:00:05     14-MAY-18
              BP Key: 5   Status: AVAILABLE  Compressed: NO  Tag: TAG20180514T065904
              Piece Name: D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\DATABASE\05T2ROTO_1_1

        List of Archived Logs in backup set 5
        Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
        ---- ------- ---------- --------- ---------- ---------
        1    38      3394580    13-MAY-18 3435493    14-MAY-18



                Backup of archive logs prior to incremental backup


      BS Key  Size       Device Type Elapsed Time Completion Time
      ------- ---------- ----------- ------------ ---------------
      9       952.50K    DISK        00:00:00     14-MAY-18
              BP Key: 9   Status: AVAILABLE  Compressed: NO  Tag: TAG20180514T070224
              Piece Name: D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\DATABASE\0BT2RP41_1_1

        List of Archived Logs in backup set 9
        Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
        ---- ------- ---------- --------- ---------- ---------
        1    39      3435493    14-MAY-18 3435597    14-MAY-18
        1    40      3435597    14-MAY-18 3435668    14-MAY-18



      3435627  Incremental backup of datafile 7




      BS Key  Type LV Size       Device Type Elapsed Time Completion Time
      ------- ---- -- ---------- ----------- ------------ ---------------
      7       Incr 1  1.24M      DISK        00:00:37     14-MAY-18
              BP Key: 7   Status: AVAILABLE  Compressed: NO  Tag: TAG20180514T070122
              Piece Name: D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\DATABASE\07T2RP23_1_1
        List of Datafiles in backup set 7
        File LV Type Ckp SCN    Ckp Time  Abs Fuz SCN Sparse Name
        ---- -- ---- ---------- --------- ----------- ------ ----
        7    1  Incr 3435627    14-MAY-18             NO     D:\APP\BGRENN\VIRTUAL\ORADATA\ORCL\USERS01.DBF

                Backup of archive logs


      BS Key  Size       Device Type Elapsed Time Completion Time
      ------- ---------- ----------- ------------ ---------------
      9       952.50K    DISK        00:00:00     14-MAY-18
              BP Key: 9   Status: AVAILABLE  Compressed: NO  Tag: TAG20180514T070224
              Piece Name: D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\DATABASE\0BT2RP41_1_1

        List of Archived Logs in backup set 9
        Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
        ---- ------- ---------- --------- ---------- ---------
        1    39      3435493    14-MAY-18 3435597    14-MAY-18
        1    40      3435597    14-MAY-18 3435668    14-MAY-18

      BS Key  Size       Device Type Elapsed Time Completion Time
      ------- ---------- ----------- ------------ ---------------
      11      3.11M      DISK        00:00:00     14-MAY-18
              BP Key: 11   Status: AVAILABLE  Compressed: NO  Tag: TAG20180514T085551
              Piece Name: D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\DATABASE\0DT2RVOO_1_1

        List of Archived Logs in backup set 11
        Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
        ---- ------- ---------- --------- ---------- ---------
        1    41      3435668    14-MAY-18 3435770    14-MAY-18
        1    42      3435770    14-MAY-18 3437175    14-MAY-18
        1    43      3437175    14-MAY-18 3437200    14-MAY-18
        1    44      3437200    14-MAY-18 3437210    14-MAY-18
        1    45      3437210    14-MAY-18 3437216    14-MAY-18



      OK.. Now we have backups on disk (full and level 1) to restore and recover from.


      First let's do the restore of datafile 7
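
      The commands at the empty RMAN> prompts below weren't captured in the transcript; based on the surrounding text they were presumably:

      RMAN> restore datafile 7;
      RMAN> recover datafile 7;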



      RMAN> 
      Starting restore at 14-MAY-18
      using channel ORA_DISK_1

      channel ORA_DISK_1: starting datafile backup set restore
      channel ORA_DISK_1: specifying datafile(s) to restore from backup set
      channel ORA_DISK_1: restoring datafile 00007 to D:\APP\BGRENN\VIRTUAL\ORADATA\ORCL\USERS01.DBF
      channel ORA_DISK_1: reading from backup piece D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\DATABASE\01T2ROOB_1_1
      channel ORA_DISK_1: piece handle=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\DATABASE\01T2ROOB_1_1 tag=TAG20180514T065607
      channel ORA_DISK_1: restored backup piece 1
      channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
      Finished restore at 14-MAY-18

      Now let's do the recovery of datafile 7 and watch what happens.



      RMAN> 
      Starting recover at 14-MAY-18
      using channel ORA_DISK_1
      channel ORA_DISK_1: starting incremental datafile backup set restore
      channel ORA_DISK_1: specifying datafile(s) to restore from backup set
      destination for restore of datafile 00007: D:\APP\BGRENN\VIRTUAL\ORADATA\ORCL\USERS01.DBF
      channel ORA_DISK_1: reading from backup piece D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\DATABASE\07T2RP23_1_1
      channel ORA_DISK_1: piece handle=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\DATABASE\07T2RP23_1_1 tag=TAG20180514T070122
      channel ORA_DISK_1: restored backup piece 1
      channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

      starting media recovery

      archived log for thread 1 with sequence 40 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000040_0974378014.0001
      archived log for thread 1 with sequence 41 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000041_0974378014.0001
      archived log for thread 1 with sequence 42 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000042_0974378014.0001
      archived log for thread 1 with sequence 43 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000043_0974378014.0001
      archived log for thread 1 with sequence 44 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000044_0974378014.0001
      archived log for thread 1 with sequence 45 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000045_0974378014.0001
      archived log file name=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000040_0974378014.0001 thread=1 sequence=40
      archived log file name=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000041_0974378014.0001 thread=1 sequence=41
      archived log file name=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000042_0974378014.0001 thread=1 sequence=42
      archived log file name=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000043_0974378014.0001 thread=1 sequence=43
      media recovery complete, elapsed time: 00:00:03
      Finished recover at 14-MAY-18




      You can see that two steps are performed in the recovery, but this isn't the whole picture.

       Step 1 Restore all incremental backups.

      using channel ORA_DISK_1
      channel ORA_DISK_1: starting incremental datafile backup set restore
      channel ORA_DISK_1: specifying datafile(s) to restore from backup set
      destination for restore of datafile 00007: D:\APP\BGRENN\VIRTUAL\ORADATA\ORCL\USERS01.DBF
      channel ORA_DISK_1: reading from backup piece D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\DATABASE\07T2RP23_1_1
      channel ORA_DISK_1: piece handle=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\DATABASE\07T2RP23_1_1 tag=TAG20180514T070122
      channel ORA_DISK_1: restored backup piece 1
      channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

       Step 2 Recover using archive logs on disk


      starting media recovery

      archived log for thread 1 with sequence 40 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000040_0974378014.0001
      archived log for thread 1 with sequence 41 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000041_0974378014.0001
      archived log for thread 1 with sequence 42 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000042_0974378014.0001
      archived log for thread 1 with sequence 43 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000043_0974378014.0001
      archived log for thread 1 with sequence 44 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000044_0974378014.0001
      archived log for thread 1 with sequence 45 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000045_0974378014.0001
      archived log file name=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000040_0974378014.0001 thread=1 sequence=40
      archived log file name=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000041_0974378014.0001 thread=1 sequence=41
      archived log file name=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000042_0974378014.0001 thread=1 sequence=42
      archived log file name=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000043_0974378014.0001 thread=1 sequence=43
      media recovery complete, elapsed time: 00:00:03
      Finished recover at 14-MAY-18


      Now I reran it with tracing enabled; let's look at the pieces.
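
      For reference, a trace like the one below can be produced by starting the RMAN client with the debug options (a sketch; the file names are arbitrary):

      $ rman target / debug trace=/tmp/rman_df7.trc log=/tmp/rman_df7.log

      RMAN> restore datafile 7;
      RMAN> recover datafile 7;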


      RMAN determines the current checkpoint SCN of the datafile I'm recovering (3435417):



      DBGSQL:                 TARGET> select fhscn, to_date(fhtim,'MM/DD/RR HH24:MI:SS', 'NLS_CALENDAR=Gregorian'), fhcrs, fhrls, to_date(fhrlc,'MM/DD/RR HH24:MI:SS', 'NLS_CALENDAR=Gregorian'), fhafs, fhrfs, fhrft, hxerr, fhfsz, fhsta, fhdbi, fhfdbi, fhplus, fhtyp into :ckpscn, :ckptime, :crescn, :rlgscn, :rlgtime, :afzscn, :rfzscn, :rfztime, :hxerr, :blocks, :fhsta, :fhdbi, :fhfdbi, :fhplus, :fhtyp from x$kcvfhall where  hxfil = :fno 
      DBGSQL: sqlcode = 0
      DBGSQL: D :ckpscn = 3435417
      DBGSQL: D :ckptime = "14-MAY-18"
      DBGSQL: D :crescn = 29665
      DBGSQL: D :rlgscn = 1490582
      DBGSQL: D :rlgtime = "25-APR-18"
      DBGSQL: D :afzscn = 0
      DBGSQL: D :rfzscn = 0
      DBGSQL: D :rfztime = NULL
      DBGSQL: D :hxerr = 0
      DBGSQL: D :blocks = 640
      DBGSQL: D :fhsta = 0
      DBGSQL: D :fhdbi = 1502158741
      DBGSQL: D :fhfdbi = 0
      DBGSQL: D :fhplus = 0
      DBGSQL: D :fhtyp = 3
      DBGSQL: B :fno = 7
      DBGMISC: krmkrfh: [10:07:46.470]
      DBGMISC: DF fno=7 pplfno=0 pdbid=1 pdbname= crescn=29665
      DBGMISC: blksize=8192 blocks=640 rfno=7 pdbForeignDbid=0
      DBGMISC: fn=D:\APP\BGRENN\VIRTUAL\ORADATA\ORCL\USERS01.DBF
      DBGMISC: ts=USERS, flags=KRMKDF_INBACKUP
      DBGMISC: fedata: sta=0x0e crescn=29665
      DBGMISC: fhdata: ckpscn=3435417 rlgscn=1490582
      DBGMISC: EXITED krmkrfh [10:07:46.779] elapsed time [00:00:00:01.074]

      RMAN finds any incremental backups that can be applied, then restores and applies as many incremental backups as it can.


        TARGET> declare allCopies boolean; begin dbms_rcvman.resetthisBackupAge; if (:allCopies > 0) then allCopies := TRUE; else allCopies := FALSE; end if; :rc := dbms_rcvman.computeRecoveryActions( fno     => :fno, crescn   => :crescn, df_rlgscn  => :rlgscn, df_rlgtime => :rlgtime, df_ckpscn  => :ckpscn, offlscn   => :offlscn, onlscn   => :onlscn, onltime   => :onltime, cleanscn  => :cleanscn, clean2scn  => :clean2scn, clean2time => :clean2time, allowfuzzy => FALSE, partial_rcv => FALSE, cf_scn   => :cfscn, cf_cretime => :cfcretime, cf_offrrid => :cfoffrrid, allCopies  => allCopies, df_cretime => :cretime, rmanCmd   => :rmanCmd, foreignDbid  => :foreignDbid, pluggedRonly => :pluggedRonly, pluginSCN   => :pluginSCN, pluginRlgSCN => :pluginRlgSCN, pluginRlgTime => :pluginRlgTime, creation_thread => :creation_thread, creation_size  => :creation_size, pdbId      => :pdbId, pdbForeignDbid => :pdbForeignDbid ); if (:maxact > 0) then dbms_rcvman.trimRecoveryActions( maxActions  => :maxact, containerMask => dbms_rcvman.proxyCopy_con_t + dbms_rcvman.imageCopy_con_t + dbms_rcvman.backupSet_con_t, actionMask  => dbms_rcvman.full_act_t); end if; end;  
      DBGSQL: sqlcode = 0
      DBGSQL: B :rc = 0
      DBGSQL: B :allCopies = 2
      DBGSQL: B :fno = 7
      DBGSQL: B :crescn = 29665
      DBGSQL: B :rlgscn = 1490582
      DBGSQL: B :rlgtime = "25-APR-18"
      DBGSQL: B :ckpscn = 3435417
      DBGSQL: B :offlscn = 1490581
      DBGSQL: B :onlscn = 1490582
      DBGSQL: B :onltime = "25-APR-18"
      DBGSQL: B :cleanscn = 0
      DBGSQL: B :clean2scn = 0
      DBGSQL: B :clean2time = "01-JAN-88"
      DBGSQL: B :cfscn = 3440786
      DBGSQL: B :cfcretime = "25-APR-18"
      DBGSQL: B :cfoffrrid = 1
      DBGSQL: B :cretime = "08-MAR-17"
      DBGSQL: B :rmanCmd = 1
      DBGSQL: B :maxact = 0
      DBGSQL: B :foreignDbid = 0
      DBGSQL: B :pluggedRonly = 0
      DBGSQL: B :pluginSCN = 0
      DBGSQL: B :pluginRlgSCN = 0
      DBGSQL: B :pluginRlgTime = NULL
      DBGSQL: B :creation_thread = 0
      DBGSQL: B :creation_size = 0
      DBGSQL: B :pdbId = 1
      DBGSQL: B :pdbForeignDbid = 0
      DBGRCVMAN: thisBackupAge= 0
      DBGRCVMAN: ENTERING computeRecoveryActions
      DBGRCVMAN: computeRecoveryActions fno: 7
      DBGRCVMAN: computeRecoveryActions crescn: 29665
      DBGRCVMAN: computeRecoveryActions df_rlgscn: 1490582
      DBGRCVMAN: computeRecoveryActions df_ckpscn: 3435417
      DBGRCVMAN: computeRecoveryActions offlscn: 1490581
      DBGRCVMAN: computeRecoveryActions onlscn: 1490582
      DBGRCVMAN: computeRecoveryActions cleanscn: 0
      DBGRCVMAN: computeRecoveryActions clean2scn: 0
      DBGRCVMAN: computeRecoveryActions cf_scn: 3440786
      DBGRCVMAN: computeRecoveryActions cf_offrrid: 1
      DBGRCVMAN: computeRecoveryActions foreignDbid: 0
      DBGRCVMAN: computeRecoveryActions pluggedRonly: 0
      DBGRCVMAN: computeRecoveryActions pluginSCN: 0
      DBGRCVMAN: computeRecoveryActions pluinRlgSCN: 0
      DBGRCVMAN: computeRecoveryActions creation_thread: 0
      DBGRCVMAN: computeRecoveryActions creation_size: 0
      DBGRCVMAN: computeRecoveryActions pdbid: 1
      DBGRCVMAN: computeRecoveryActions pdbForeignDbid: 0
      DBGRCVMAN: allCopies is TRUE
      DBGRCVMAN: doing recover
      DBGRCVMAN: ENTERING computeRecoveryActions2
      DBGRCVMAN: computeRecoveryActions2 doing recovery.
      DBGRCVMAN: computeRecoveryActions2 This is ancestor.
      DBGRCVMAN: ENTERING openRecoveryActionCursor
      DBGRCVMAN: openRecoveryActionCursor target scn is NULL,creSCN=29665,dfCkpSCN=3435417,dbincRlgSCN=1490582,offlSCN=1490581,onlSCN=1490582,cleanSCN=0,clean2SCN=0,fno=7,pluginSCN=0,rmanCmd=1
      DBGRCVMAN: openRecoveryActionCursor currc1.type_con=NULL currc1.fno=7 currc1.crescn=29665
      DBGRCVMAN: openRecoveryActionCursor restoreSource=, restoreSparse=0
      DBGRCVMAN: openRecoveryActionCursor cursor1 not open yet
      DBGRCVMAN: OPENING cursor rcvRecCursor1_c in openRecoveryActionCursor
      DBGRCVMAN: ENTERING fetchCursor1RecoveryAction
      DBGRCVMAN: fetchCursor1RecoveryAction opcode=1
      DBGRCVMAN: fetchCursor1RecoveryAction seekNext
      DBGRCVMAN: fetchCursor1RecoveryAction rcvRecCursor1_c record
      DBGRCVMAN: DUMPING RECOVERY CONTAINER
      DBGRCVMAN: Incremental Backup Set
      DBGRCVMAN: bsKey=7 bsRecid=7 bsStamp=976086120 setStamp=976086083 setCount=7 site_key=0
      DBGRCVMAN: bsLevel=1 bsType=I pieceCount=1
      DBGRCVMAN: multi_section=N
      DBGRCVMAN: key=17 recid=17 stamp=976086085 sparse_backup_con =NO
      DBGRCVMAN: compTime=14-MAY-18
      DBGRCVMAN: blocks=1 blockSize=8192
      DBGRCVMAN: fromSCN=3435417 toSCN=3435627 toTime=14-MAY-18 level=1 section_size=0
      DBGRCVMAN: rlgSCN=1490582 rlgTime=25-APR-18 dbincKey=
      DBGRCVMAN: afzSCN=0
      DBGRCVMAN: pdbKey=1
      DBGRCVMAN: dfNumber=7 creationSCN=29665 pluginSCN=0 foreignDbid=0 pluggedRonly=0
      DBGRCVMAN: cfType=B
      DBGRCVMAN: keep_options=0 keep_until=NULL
      DBGRCVMAN: EXITING fetchCursor1RecoveryAction filter accepted
      DBGRCVMAN: EXITING openRecoveryActionCursor
      DBGRCVMAN: ENTERING fetchRecoveryAction
      DBGRCVMAN: ENTERING fetchCursor1RecoveryAction
      DBGRCVMAN: fetchCursor1RecoveryAction opcode=1
      DBGRCVMAN: fetchCursor1RecoveryAction seekNext
      DBGRCVMAN: fetchCursor1RecoveryAction no more records
      DBGRCVMAN: EXITING fetchCursor1RecoveryAction seekCurrent - beyond current fno, creSCN
      DBGRCVMAN: EXITING fetchRecoveryAction with TRUE
      DBGRCVMAN: fetched recovery action
      DBGRCVMAN: DUMPING RECOVERY CONTAINER
      DBGRCVMAN: Incremental Backup Set
      DBGRCVMAN: bsKey=7 bsRecid=7 bsStamp=976086120 setStamp=976086083 setCount=7 site_key=0
      DBGRCVMAN: bsLevel=1 bsType=I pieceCount=1
      DBGRCVMAN: multi_section=N
      DBGRCVMAN: key=17 recid=17 stamp=976086085 sparse_backup_con =NO
      DBGRCVMAN: compTime=14-MAY-18
      DBGRCVMAN: blocks=1 blockSize=8192
      DBGRCVMAN: fromSCN=3435417 toSCN=3435627 toTime=14-MAY-18 level=1 section_size=0
      DBGRCVMAN: rlgSCN=1490582 rlgTime=25-APR-18 dbincKey=
      DBGRCVMAN: afzSCN=0
      DBGRCVMAN: pdbKey=1
      DBGRCVMAN: dfNumber=7 creationSCN=29665 pluginSCN=0 foreignDbid=0 pluggedRonly=0
      DBGRCVMAN: cfType=B
      DBGRCVMAN: keep_options=0 keep_until=NULL
      DBGRCVMAN: found an incremental backup set
      DBGRCVMAN: ENTERING addAction
      DBGRCVMAN: addAction action.type_con=
      DBGRCVMAN: ENTERING redoNeeded
      DBGRCVMAN: EXITING redoNeeded with: FALSE
      DBGRCVMAN: CheckRecAction called 04/25/18 12:33:34; rlgscn=1490582; pdbId=1; cleanscn=0
      DBGRCVMAN: CheckRecAction:matches inc=0,fromscn=3435417,toscn=3435627,afzSCN=0
      DBGRCVMAN: cacheFindValidBackupSet: setStamp=976086083 setCount=7 tag=NULL deviceType=NULL mask=1
      DBGRCVMAN: ENTERING loadBsRecCache
      DBGRCVMAN: loadBsRecCache mixcopy=0
      DBGRCVMAN: *****BsRecCache Statistics*****
      DBGRCVMAN: Cache size=0 hit=0
      DBGRCVMAN: loadBsRecCache loadRedundDf_c
      DBGRCVMAN: loadBsRecCache tag=NULL deviceType=NULL mask=1
      DBGRCVMAN: loadBsRecCache Cache contains 12 records
      DBGRCVMAN: loadBsRecCache Minimum SetCount=1
      DBGRCVMAN: EXITING loadBsRecCache
      DBGRCVMAN: ENTERING validateBackupSet0
      DBGRCVMAN: cacheGetValidBackupSet: returning valid rec deviceType=DISK tag=TAG20180514T070122 copyNumber=1
      DBGRCVMAN: validateBackupSet0 exiting loop with rc: SUCCESS
      DBGRCVMAN: EXITING validateBackupSet0 with rc:0
      DBGRCVMAN: DUMPING RECOVERY CONTAINER
      DBGRCVMAN: Incremental Backup Set
      DBGRCVMAN: bsKey=7 bsRecid=7 bsStamp=976086120 setStamp=976086083 setCount=7 site_key=0
      DBGRCVMAN: bsLevel=1 bsType=I pieceCount=1
      DBGRCVMAN: multi_section=N
      DBGRCVMAN: key=17 recid=17 stamp=976086085 sparse_backup_con =NO
      DBGRCVMAN: tag=TAG20180514T070122 compTime=14-MAY-18
      DBGRCVMAN: deviceType=DISK blocks=1 blockSize=8192
      DBGRCVMAN: fromSCN=3435417 toSCN=3435627 toTime=14-MAY-18 level=1 section_size=0
      DBGRCVMAN: rlgSCN=1490582 rlgTime=25-APR-18 dbincKey=
      DBGRCVMAN: afzSCN=0
      DBGRCVMAN: pdbKey=1
      DBGRCVMAN: dfNumber=7 creationSCN=29665 pluginSCN=0 foreignDbid=0 pluggedRonly=0
      DBGRCVMAN: cfType=B
      DBGRCVMAN: keep_options=0 keep_until=NULL
      DBGRCVMAN: rcvRecPush:from_scn=3435417,to_scn=3435627,rcvRecStackCount=1
      DBGRCVMAN: addAction Added action:
      DBGRCVMAN: addAction allCopies is TRUE, trying to add other copies
      DBGRCVMAN: ENTERING validateBackupSet0
      DBGRCVMAN: validateBackupSet0 rc is null, setting to unavailable
      DBGRCVMAN: EXITING validateBackupSet0 with rc:1
      DBGRCVMAN: EXITING addAction with: action_OK
      DBGRCVMAN: addAction returned code 0
      DBGRCVMAN: done set to true - 3
      DBGRCVMAN: EXITING computeRecoveryActions2 - 3
      DBGRCVMAN: computeRecoveryActions: Top of stack=1
      DBGRCVMAN: EXITING computeRecoveryActions with: SUCCESS
      DBGMISC: EXITED krmkcra with status 0 [10:07:52.484] elapsed time [00:00:00:06.814]
      DBGRCV: ENTERED krmkgrr
      DBGRCV: krmkgrr(funcode=10) (krmkgrr)
      DBGRCVMAN: ENTERING getRcvRec
      DBGRCVMAN: getRcvRec funCode=10
      DBGRCVMAN: ENTERING getRecoveryAction
      DBGRCVMAN: rcvRecPop:from_scn=3435417,to_scn=3435627,rcvRecStackCount=1
      DBGRCVMAN: EXITING getRecoveryAction with: FALSE#
      DBGRCVMAN: DUMPING RECOVERY CONTAINER
      DBGRCVMAN: Incremental Backup Set
      DBGRCVMAN: bsKey=7 bsRecid=7 bsStamp=976086120 setStamp=976086083 setCount=7 site_key=0
      DBGRCVMAN: bsLevel=1 bsType=I pieceCount=1
      DBGRCVMAN: multi_section=N
      DBGRCVMAN: key=17 recid=17 stamp=976086085 sparse_backup_con =NO
      DBGRCVMAN: tag=TAG20180514T070122 compTime=14-MAY-18
      DBGRCVMAN: deviceType=DISK blocks=1 blockSize=8192
      DBGRCVMAN: fromSCN=3435417 toSCN=3435627 toTime=14-MAY-18 level=1 section_size=0
      DBGRCVMAN: rlgSCN=1490582 rlgTime=25-APR-18 dbincKey=
      DBGRCVMAN: afzSCN=0
      DBGRCVMAN: pdbKey=1
      DBGRCVMAN: dfNumber=7 creationSCN=29665 pluginSCN=0 foreignDbid=0 pluggedRonly=0
      DBGRCVMAN: cfType=B
      DBGRCVMAN: keep_options=0 keep_until=NULL
      DBGRCVMAN: EXITING getRcvRec with rc:0
      DBGRCV: ENTERED krmrrcvc
      DBGRCV: EXITED krmrrcvc
      DBGRCV: EXITED krmkgrr with status 0
      DBGMISC: EXITED krmkfbs [10:07:54.256] elapsed time [00:00:00:08.618]
      DBGRESTORE: EXITED krmrrcv_dfile
      DBGRCV: ENTERED krmkbbsbp
      DBGSQL: TARGET> begin dbms_rcvman.translateBackupPieceBsKey( startBskey => :bsKey, tag => :tag, statusMask => :statusMask); end;
      DBGSQL: sqlcode = 0
      DBGSQL: B :bsKey = 7
      DBGSQL: B :tag = NULL
      DBGSQL: B :statusMask = 1
      DBGRCVMAN: ENTERING computeAvailableMask
      DBGRCVMAN: EXITING computeAvailableMask with rc:1
      DBGSQL: TARGET> begin dbms_rcvman.translateSeekBpBsKey( bsKey => :bsKey, pieceCount => :pieceCount, duplicates => :duplicates, deviceType => :deviceType, copyNumber => :copyNumber); end;
      DBGSQL: sqlcode = 0
      DBGSQL: B :bsKey = 7
      DBGSQL: B :pieceCount = 1
      DBGSQL: B :duplicates = 0
      DBGSQL: B :deviceType = DISK
      DBGSQL: B :copyNumber = 1
      DBGRCVMAN: ENTERING translateSeekBpBsKey
      DBGRCVMAN: translateSeekBpBsKey bskey=7
      DBGRCVMAN: EXITING translateSeekBpBsKey got key=7
      DBGRCV: ENTERED krmkgtbp
      DBGRCVMAN: ENTERING getBackupPiece
      DBGRCVMAN: bskey = 7
      DBGRCVMAN: next bskey=8
      DBGRCVMAN: EXITING getBackupPiece
      DBGSQL: ocirc = 0 (krmkgtbp)
      DBGRCVMAN: ENTERING getBackupPiece
      DBGRCVMAN: bskey = 8
      DBGRCVMAN: end of backupset
      DBGRCVMAN: EXITING getBackupPiece no more records
      DBGRCV: EXITED krmkgtbp
      DBGSQL: TARGET> begin dbms_rcvman.translateBpBsKeyCancel; end;
      DBGSQL: sqlcode = 0
      DBGRCV: EXITED krmkbbsbp
      DBGRCV: EXITED krmrrcv with address 53622976
      DBGMISC: krmkibap: the incremental backup source tree is: [10:07:55.852]
      DBGMISC: 1 BS (incremental datafile) key=7 recid=7 stamp=976086120 setstamp=976086083 setcount=7
      DBGMISC: level=1 level_i=0 piececount=1 keepopts=0, site_key=0 [10:07:55.918]
      DBGMISC: site_key=0 [10:07:55.941]
      DBGMISC: chid=NIL parm=NIL [10:07:55.976]
      DBGMISC: flags=<has site key> [10:07:56.001]
      DBGMISC: valid backup set list is [10:07:56.034]
      DBGMISC: 1 VBS copy#=1 tag=TAG20180514T070122 deviceType=DISK status=A
      DBGMISC: 1 BPIECEX key=7 recid=7 stamp=976086084
      DBGMISC: bskey=7 vbkey=0 set_stamp=976086083 set_count=7 site_key=0
      DBGMISC: pieceno=1 handle=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\DATABASE\07T2RP23_1_1 ba_access=U
      DBGMISC: device=DISK krmkch { count=0 found=FALSE }
      DBGMISC: restore target list is [10:07:56.217]
      DBGMISC: 1 ACT type=incremental fromSCN=3435417 toSCN=3435627 fno=7
      DBGMISC: DF fno=7 pplfno=0 pdbid=1 pdbname= crescn=29665
      DBGMISC: blksize=8192 blocks=640 rfno=7 pdbForeignDbid=0
      DBGMISC: fn=D:\APP\BGRENN\VIRTUAL\ORADATA\ORCL\USERS01.DBF
      DBGMISC: ts=USERS, flags=KRMKDF_INBACKUP
      DBGMISC: fedata: sta=0x0e crescn=29665
      DBGMISC: fhdata: ckpscn=3435627 rlgscn=1490582
      DBGRESTORE: ENTERED krmkrstp
      DBGRESTORE: ENTERED krmkgdn
      DBGRESTORE: Looking up newname for 7 (krmkgdn)
      DBGRCV: ENTERED krmklknn
      DBGRCV: Looking for newname for datafile: 7, Translate: 1, dosearch=1 (krmklknn)
      DBGRCV: Looking up in unprocessed newname list, need_dfinfo=0 (krmklknn)
      DBGRCV: ENTERED krmksearchnewname
      DBGRCV: EXITED krmksearchnewname with address 0
      DBGRCV: No newname found for datafile 7 (krmklknn)
      DBGRCV: EXITED krmklknn with address 0
      DBGRESTORE: Restoring datafile 7 to filename in controlfile: D:\APP\BGRENN\VIRTUAL\ORADATA\ORCL\USERS01.DBF (krmkgdn)
      DBGRESTORE: EXITED krmkgdn with status D:\APP\BGRENN\VIRTUAL\ORADATA\ORCL\USERS01.DBF
      DBGRESTORE: ENTERED krmkgbp
      DBGRESTORE: EXITED krmkgbp
      DBGRESTORE: EXITED krmkrstp
      DBGRCV: EXITED krmkibap with address 53624008
      DBGMISC: EXITED krmkomp [10:07:57.652] elapsed time [00:00:00:14.968]
      DBGPLSQL: the compiled command tree is: [10:07:57.689] (krmicomp)
      DBGPLSQL: 1 CMD type=incremental backup restore cmdid=1 status=NOT STARTED
      DBGPLSQL: 1 STEPstepid=1 cmdid=1 status=NOT STARTED devtype=DISK bs.stamp=976086120 step_size=0 Bytes
      DBGPLSQL: 1 TEXTNOD = --
      DBGPLSQL: 2 TEXTNOD = declare
      DBGPLSQL: 3 TEXTNOD = /* restoreStatus */
      DBGPLSQL: 4 TEXTNOD = state binary_integer;
      DBGPLSQL: 5 TEXTNOD = pieces_done binary_integer;
      DBGPLSQL: 6 TEXTNOD = files binary_integer;
      DBGPLSQL: 7 TEXTNOD = datafiles boolean;
      DBGPLSQL: 8 TEXTNOD = incremental boolean;
      DBGPLSQL: 9 TEXTNOD = device boolean;
      DBGPLSQL: 10 TEXTNOD = /* restorebackuppiece */
      DBGPLSQL: 11 TEXTNOD = done boolean;
      DBGPLSQL: 12 TEXTNOD = currcf boolean;
      DBGPLSQL: 13 TEXTNOD = fhandle varchar2(512);
      DBGPLSQL: 14 TEXTNOD = handle varchar2(512);
      DBGPLSQL: 15 TEXTNOD = outhandle varchar2(512);
      DBGPLSQL: 16 TEXTNOD = params varchar2(512);
      DBGPLSQL: 17 TEXTNOD = fromdisk boolean; -- TRUE => backupset on disk
      DBGPLSQL: 18 TEXTNOD = /* Miscellaneous */
      DBGPLSQL: 19 TEXTNOD = memnum number;
      DBGPLSQL: 20 TEXTNOD = piecenum number;
      DBGPLSQL: 21 TEXTNOD = dfnumber number;
      DBGPLSQL: 22 TEXTNOD = thread number := null;
      DBGPLSQL: 23 TEXTNOD = sequence number := null;
      DBGPLSQL: 24 TEXTNOD = toname varchar2(512);
      DBGPLSQL: 25 TEXTNOD = cfname varchar2(512);
      DBGPLSQL: 26 TEXTNOD = fuzzy_hint number;
      DBGPLSQL: 27 TEXTNOD = set_count number;
      DBGPLSQL: 28 TEXTNOD = set_stamp number;
      DBGPLSQL: 29 TEXTNOD = first_time boolean := TRUE;
      DBGPLSQL: 30 TEXTNOD = validate boolean := FALSE;
      DBGPLSQL: 31 TEXTNOD = val_bs_only boolean := FALSE; -- TRUE => only bs validation
      DBGPLSQL: 32 TEXTNOD = --
      DBGPLSQL: 33 TEXTNOD = max_corrupt binary_integer := 0;
      DBGPLSQL: 34 TEXTNOD = check_logical boolean := FALSE;
      DBGPLSQL: 35 TEXTNOD = tag varchar2(31);
      DBGPLSQL: 36 TEXTNOD = outtag varchar2(31);
      DBGPLSQL: 37 TEXTNOD = bmr boolean := FALSE;
      DBGPLSQL: 38 TEXTNOD = blocks number;
      DBGPLSQL: 39 TEXTNOD = blksize number;
      DBGPLSQL: 40 TEXTNOD = failover boolean := FALSE;
      DBGPLSQL: 41 TEXTNOD = devtype varchar2(512);
      DBGPLSQL: 42 TEXTNOD = rcvcopy boolean := FALSE;
      DBGPLSQL: 43 TEXTNOD = islevel0 binary_integer := 0;
      DBGPLSQL: 44 TEXTNOD = rsid number;
      DBGPLSQL: 45 TEXTNOD = rsts number;
      DBGPLSQL: 46 TEXTNOD = err_msg varchar2(2048);
      DBGPLSQL: 47 TEXTNOD = start_time date;
      DBGPLSQL: 48 TEXTNOD = recid number;
      DBGPLSQL: 49 TEXTNOD = stamp number;
      DBGPLSQL: 50 TEXTNOD = preview boolean := FALSE;
      DBGPLSQL: 51 TEXTNOD = recall boolean := FALSE;
      DBGPLSQL: 52 TEXTNOD = isstby boolean := FALSE;
      DBGPLSQL: 53 TEXTNOD = nocfconv boolean := FALSE;
      DBGPLSQL: 54 TEXTNOD = msrpno number := 0;
      DBGPLSQL: 55 TEXTNOD = msrpct number := 0;
      DBGPLSQL: 56 TEXTNOD = isfarsync boolean := FALSE;
      DBGPLSQL: 57 TEXTNOD = preplugin boolean := FALSE;
      DBGPLSQL: 58 TEXTNOD = pdbid number := 0;
      DBGPLSQL: 59 TEXTNOD = pplcdbdbid number := 0;
      DBGPLSQL: 60 TEXTNOD = ppltrans number;
      DBGPLSQL: 61 TEXTNOD = old_dfnumber number;
      DBGPLSQL: 62 TEXTNOD = restore_not_complete exception;
      DBGPLSQL: 63 TEXTNOD =
      DBGPLSQL: 64 TEXTNOD = begin
      DBGPLSQL: 65 TEXTNOD =
      DBGPLSQL: 66 PRMVAL = set_count := 7; set_stamp := 976086083; rsid := 53; rsts := 976097261; params := null; isstby := false; isfarsync := false; nocfconv := true;
      DBGPLSQL: 67 TEXTNOD =
      DBGPLSQL: 68 TEXTNOD = --
      DBGPLSQL: 69 TEXTNOD = if preview then
      DBGPLSQL: 70 TEXTNOD = deb('ridf_start', 'preview');
      DBGPLSQL: 71 TEXTNOD = return;
      DBGPLSQL: 72 TEXTNOD = end if;
      DBGPLSQL: 73 TEXTNOD =
      DBGPLSQL: 74 TEXTNOD = sys.dbms_backup_restore.restoreStatus(state, pieces_done, files, datafiles,
      DBGPLSQL: 75 TEXTNOD = incremental, device);
      DBGPLSQL: 76 TEXTNOD = if (msrpno > 1) then
      DBGPLSQL: 77 TEXTNOD = --
      DBGPLSQL: 78 TEXTNOD = --
      DBGPLSQL: 79 TEXTNOD = --
      DBGPLSQL: 80 TEXTNOD = --
      DBGPLSQL: 81 TEXTNOD = pieces_done := msrpno - 1;
      DBGPLSQL: 82 TEXTNOD = end if;
      DBGPLSQL: 83 TEXTNOD =
      DBGPLSQL: 84 TEXTNOD = select sysdate into start_time from x$dual;
      DBGPLSQL: 85 TEXTNOD = if state = sys.dbms_backup_restore.restore_no_conversation then
      DBGPLSQL: 86 TEXTNOD = goto start_convo;
      DBGPLSQL: 87 TEXTNOD = elsif state = sys.dbms_backup_restore.restore_naming_files then
      DBGPLSQL: 88 TEXTNOD = goto name_files;
      DBGPLSQL: 89 TEXTNOD = else
      DBGPLSQL: 90 TEXTNOD = goto restore_piece;
      DBGPLSQL: 91 TEXTNOD = end if;
      DBGPLSQL: 92 TEXTNOD =
      DBGPLSQL: 93 TEXTNOD = <<start_convo>>
      DBGPLSQL: 94 TEXTNOD = sys.dbms_backup_restore.setRmanStatusRowId(rsid=>rsid, rsts=>rsts);
      DBGPLSQL: 95 TEXTNOD = sys.dbms_backup_restore.applySetDatafile(
      DBGPLSQL: 96 TEXTNOD = check_logical => check_logical
      DBGPLSQL: 97 TEXTNOD = ,cleanup => FALSE
      DBGPLSQL: 98 TEXTNOD = ,service => NULL
      DBGPLSQL: 99 TEXTNOD = ,chunksize => 0
      DBGPLSQL: 100 TEXTNOD = ,rs_flags => 0
      DBGPLSQL: 101 TEXTNOD = ,preplugin => preplugin);
      DBGPLSQL: 102 TEXTNOD = incremental := TRUE;
      DBGPLSQL: 103 TEXTNOD = krmicd.writeMsg(8039, krmicd.getChid);
      DBGPLSQL: 104 TEXTNOD =
      DBGPLSQL: 105 TEXTNOD = setRestoreParams;
      DBGPLSQL: 106 TEXTNOD = <<name_files>>
      DBGPLSQL: 107 TEXTNOD = deb('ridf_start', 'set_stamp=' || set_stamp || ' set_count=' || set_count,
      DBGPLSQL: 108 TEXTNOD = rman_constant.DEBUG_IO, rman_constant.LEVEL_MIN);
      DBGPLSQL: 109 TEXTNOD = --
      DBGPLSQL: 110 TEXTNOD = toname := null;
      DBGPLSQL: 111 TEXTNOD = max_corrupt := 0;
      DBGPLSQL: 112 TEXTNOD =
      DBGPLSQL: 113 PRMVAL = memnum := 1;
      DBGPLSQL: 114 TEXTNOD =
      DBGPLSQL: 115 PRMVAL = fuzzy_hint := 0; islevel0 := 0; recid := 0; stamp := 0; dfnumber := 7; old_dfnumber := 7; toname := 'D:\APP\BGRENN\VIRTUAL\ORADATA\ORCL\USERS01.DBF'; blocks := 640; blksize := 8192;
      DBGPLSQL: 116 TEXTNOD = if msrpno > 1 and not bmr then
      DBGPLSQL: 117 TEXTNOD = declare
      DBGPLSQL: 118 TEXTNOD = tempfno number;
      DBGPLSQL: 119 TEXTNOD = begin
      DBGPLSQL: 120 TEXTNOD = krmicd.getMSR(tempfno, toname);
      DBGPLSQL: 121 TEXTNOD = end;
      DBGPLSQL: 122 TEXTNOD = end if;
      DBGPLSQL: 123 TEXTNOD = if files < memnum then
      DBGPLSQL: 124 TEXTNOD = sys.dbms_backup_restore.applyDataFileTo(
      DBGPLSQL: 125 TEXTNOD = dfnumber => dfnumber,
      DBGPLSQL: 126 TEXTNOD = toname => toname,
      DBGPLSQL: 127 TEXTNOD = fuzziness_hint => fuzzy_hint,
      DBGPLSQL: 128 TEXTNOD = max_corrupt => max_corrupt,
      DBGPLSQL: 129 TEXTNOD = islevel0 => islevel0,
      DBGPLSQL: 130 TEXTNOD = recid => recid,
      DBGPLSQL: 131 TEXTNOD = stamp => stamp,
      DBGPLSQL: 132 TEXTNOD = old_dfnumber => old_dfnumber);
      DBGPLSQL: 133 TEXTNOD =
      DBGPLSQL: 134 TEXTNOD = if msrpno = 1 and not bmr then
      DBGPLSQL: 135 TEXTNOD = sys.dbms_backup_restore.initMSR(dfnumber, toname);
      DBGPLSQL: 136 TEXTNOD = end if;
      DBGPLSQL: 137 TEXTNOD =
      DBGPLSQL: 138 TEXTNOD = if msrpno > 0 then
      DBGPLSQL: 139 TEXTNOD = krmicd.setMSR(dfnumber, toname);
      DBGPLSQL: 140 TEXTNOD = end if;
      DBGPLSQL: 141 TEXTNOD =
      DBGPLSQL: 142 TEXTNOD = if first_time then
      DBGPLSQL: 143 TEXTNOD = if bmr then
      DBGPLSQL: 144 TEXTNOD = krmicd.writeMsg(8108, krmicd.getChid);
      DBGPLSQL: 145 TEXTNOD = elsif rcvcopy then
      DBGPLSQL: 146 TEXTNOD = krmicd.writeMsg(8131, krmicd.getChid);
      DBGPLSQL: 147 TEXTNOD = else
      DBGPLSQL: 148 TEXTNOD = krmicd.writeMsg(8089, krmicd.getChid);
      DBGPLSQL: 149 TEXTNOD = end if;
      DBGPLSQL: 150 TEXTNOD = first_time := FALSE;
      DBGPLSQL: 151 TEXTNOD = end if;
      DBGPLSQL: 152 TEXTNOD =
      DBGPLSQL: 153 TEXTNOD = if bmr then
      DBGPLSQL: 154 TEXTNOD = krmicd.writeMsg(8533, to_char(dfnumber, 'FM09999'));
      DBGPLSQL: 155 TEXTNOD = elsif toname is not null then
      DBGPLSQL: 156 TEXTNOD = if rcvcopy then
      DBGPLSQL: 157 TEXTNOD = krmicd.writeMsg(8551, to_char(dfnumber, 'FM09999'), toname);
      DBGPLSQL: 158 TEXTNOD = else
      DBGPLSQL: 159 TEXTNOD = krmicd.writeMsg(8509, to_char(dfnumber, 'FM09999'), toname);
      DBGPLSQL: 160 TEXTNOD = end if;
      DBGPLSQL: 161 TEXTNOD = deb('ridf_name', 'blocks=' || blocks || ' block_size=' || blksize,
      DBGPLSQL: 162 TEXTNOD = rman_constant.DEBUG_IO, rman_constant.LEVEL_MIN);
      DBGPLSQL: 163 TEXTNOD = end if;
      DBGPLSQL: 164 TEXTNOD = if (msrpno > 0) then
      DBGPLSQL: 165 TEXTNOD = krmicd.writeMsg(8555, krmicd.getChid, to_char(msrpno), to_char(msrpct));
      DBGPLSQL: 166 TEXTNOD = end if;
      DBGPLSQL: 167 TEXTNOD = end if;
      DBGPLSQL: 168 TEXTNOD = --
      DBGPLSQL: 169 TEXTNOD = <<restore_piece>>
      DBGPLSQL: 170 TEXTNOD = --
      DBGPLSQL: 171 TEXTNOD = fhandle := NULL;
      DBGPLSQL: 172 TEXTNOD =
      DBGPLSQL: 173 PRMVAL = piecenum := 1;
      DBGPLSQL: 174 TEXTNOD = --
      DBGPLSQL: 175 TEXTNOD =
      DBGPLSQL: 176 PRMVAL = handle := 'D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\DATABASE\07T2RP23_1_1'; fromdisk := true; recid := 7; stamp := 976086084; tag := 'TAG20180514T070122';
      DBGPLSQL: 177 TEXTNOD = -- handle, tag, fromdisk, recid, stamp
      DBGPLSQL: 178 TEXTNOD = if (pieces_done+1) = piecenum then
      DBGPLSQL: 179 TEXTNOD = sys.dbms_backup_restore.restoreSetPiece(handle => handle,
      DBGPLSQL: 180 TEXTNOD = tag => tag,
      DBGPLSQL: 181 TEXTNOD = fromdisk => fromdisk,
      DBGPLSQL: 182 TEXTNOD = recid => recid,
      DBGPLSQL: 183 TEXTNOD = stamp => stamp);
      DBGPLSQL: 184 TEXTNOD = if fhandle is NULL then
      DBGPLSQL: 185 TEXTNOD = fhandle := handle;
      DBGPLSQL: 186 TEXTNOD = end if;
      DBGPLSQL: 187 TEXTNOD = end if;
      DBGPLSQL: 188 TEXTNOD = if (restore_piece_int(pieces_done, piecenum, fhandle, done, params,
      DBGPLSQL: 189 TEXTNOD = outhandle, outtag, failover, err_msg, val_bs_only, validate,
      DBGPLSQL: 190 TEXTNOD = devtype, bmr, set_stamp, set_count, start_time, incremental,
      DBGPLSQL: 191 TEXTNOD = currcf, ppltrans)) then
      DBGPLSQL: 192 TEXTNOD = goto restore_failover;
      DBGPLSQL: 193 TEXTNOD = end if;
      DBGPLSQL: 194 TEXTNOD =
      DBGPLSQL: 195 TEXTNOD = if done then
      DBGPLSQL: 196 TEXTNOD = return;
      DBGPLSQL: 197 TEXTNOD = end if;
      DBGPLSQL: 198 TEXTNOD = --
      DBGPLSQL: 199 TEXTNOD = krmicd.writeMsg(8001);
      DBGPLSQL: 200 TEXTNOD =
      DBGPLSQL: 201 TEXTNOD = if not krmicd.doRestoreFailover(rman_constant.BACKUPPIECE) then
      DBGPLSQL: 202 TEXTNOD = begin
      DBGPLSQL: 203 TEXTNOD = sys.dbms_backup_restore.restoreCancel(FALSE);
      DBGPLSQL: 204 TEXTNOD = exception
      DBGPLSQL: 205 TEXTNOD = when others then
      DBGPLSQL: 206 TEXTNOD = krmicd.writeMsg(1005,
      DBGPLSQL: 207 TEXTNOD = 'a. dbms_backup_restore.restoreCancel() failed');
      DBGPLSQL: 208 TEXTNOD = end;
      DBGPLSQL: 209 TEXTNOD =
      DBGPLSQL: 210 TEXTNOD = if (ppltrans > 0) then
      DBGPLSQL: 211 TEXTNOD = sys.dbms_backup_restore.endPrePluginTranslation;
      DBGPLSQL: 212 TEXTNOD = end if;
      DBGPLSQL: 213 TEXTNOD = raise restore_not_complete;
      DBGPLSQL: 214 TEXTNOD = end if;
      DBGPLSQL: 215 TEXTNOD =
      DBGPLSQL: 216 TEXTNOD = --
      DBGPLSQL: 217 TEXTNOD = --
      DBGPLSQL: 218 TEXTNOD = --
      DBGPLSQL: 219 TEXTNOD = --
      DBGPLSQL: 220 TEXTNOD = if (not validate) then
      DBGPLSQL: 221 TEXTNOD = getFileRestored(FALSE);
      DBGPLSQL: 222 TEXTNOD = end if;
      DBGPLSQL: 223 TEXTNOD =
      DBGPLSQL: 224 TEXTNOD = devtype := krmicd.checkBsFailover;
      DBGPLSQL: 225 TEXTNOD =
      DBGPLSQL: 226 TEXTNOD = if (incremental and devtype is null) then
      DBGPLSQL: 227 TEXTNOD = begin
      DBGPLSQL: 228 TEXTNOD = sys.dbms_backup_restore.restoreCancel(TRUE);
      DBGPLSQL: 229 TEXTNOD = exception
      DBGPLSQL: 230 TEXTNOD = when others then
      DBGPLSQL: 231 TEXTNOD = krmicd.writeMsg(1005,
      DBGPLSQL: 232 TEXTNOD = 'b. dbms_backup_restore.restoreCancel() failed');
      DBGPLSQL: 233 TEXTNOD = end;
      DBGPLSQL: 234 TEXTNOD =
      DBGPLSQL: 235 TEXTNOD = if (ppltrans > 0) then
      DBGPLSQL: 236 TEXTNOD = sys.dbms_backup_restore.endPrePluginTranslation;
      DBGPLSQL: 237 TEXTNOD = end if;
      DBGPLSQL: 238 TEXTNOD = end if;
      DBGPLSQL: 239 TEXTNOD =
      DBGPLSQL: 240 TEXTNOD = if (dfnumber is not null) then
      DBGPLSQL: 241 TEXTNOD = krmicd.writeMsg(1005, 'Restore did not complete for some' ||
      DBGPLSQL: 242 TEXTNOD = ' files from backup piece ' ||
      DBGPLSQL: 243 TEXTNOD = outhandle || ' (piecenum=' || to_char(piecenum) ||
      DBGPLSQL: 244 TEXTNOD = ', pieces_done=' || to_char(pieces_done) ||
      DBGPLSQL: 245 TEXTNOD = ', done=' || bool2char(done) ||
      DBGPLSQL: 246 TEXTNOD = ', failover=' || bool2char(failover) || ')');
      DBGPLSQL: 247 TEXTNOD = else
      DBGPLSQL: 248 TEXTNOD = krmicd.writeMsg(1005, 'Restore did not complete for some' ||
      DBGPLSQL: 249 TEXTNOD = ' archived logs from backup piece ' || outhandle ||
      DBGPLSQL: 250 TEXTNOD = ' (piecenum=' || to_char(piecenum) ||
      DBGPLSQL: 251 TEXTNOD = ', pieces_done=' || to_char(pieces_done) ||
      DBGPLSQL: 252 TEXTNOD = ', done=' || bool2char(done) ||
      DBGPLSQL: 253 TEXTNOD = ', failover=' || bool2char(failover) || ')');
      DBGPLSQL: 254 TEXTNOD = end if;
      DBGPLSQL: 255 TEXTNOD =
      DBGPLSQL: 256 TEXTNOD = krmicd.writeMsg(1005, 'Please check alert log for ' ||
      DBGPLSQL: 257 TEXTNOD = 'additional information.');
      DBGPLSQL: 258 TEXTNOD =
      DBGPLSQL: 259 TEXTNOD = if (devtype is not null) then
      DBGPLSQL: 260 TEXTNOD = krmicd.writeMsg(8612, krmicd.getChid, devtype);
      DBGPLSQL: 261 TEXTNOD = end if;
      DBGPLSQL: 262 TEXTNOD =
      DBGPLSQL: 263 TEXTNOD = --
      DBGPLSQL: 264 TEXTNOD = <<restore_failover>>
      DBGPLSQL: 265 TEXTNOD = begin
      DBGPLSQL: 266 TEXTNOD = sys.dbms_backup_restore.restoreCancel(FALSE);
      DBGPLSQL: 267 TEXTNOD = exception
      DBGPLSQL: 268 TEXTNOD = when others then
      DBGPLSQL: 269 TEXTNOD = krmicd.writeMsg(1005,
      DBGPLSQL: 270 TEXTNOD = 'c. dbms_backup_restore.restoreCancel() failed');
      DBGPLSQL: 271 TEXTNOD = end;
      DBGPLSQL: 272 TEXTNOD =
      DBGPLSQL: 273 TEXTNOD = if (ppltrans > 0) then
      DBGPLSQL: 274 TEXTNOD = sys.dbms_backup_restore.endPrePluginTranslation;
      DBGPLSQL: 275 TEXTNOD = end if;
      DBGPLSQL: 276 TEXTNOD =
      DBGPLSQL: 277 TEXTNOD = sys.dbms_backup_restore.setRmanStatusRowId(rsid=>0, rsts=>0);
      DBGPLSQL: 278 TEXTNOD = end;
      DBGMISC: EXITED krmicomp with address 53624008 [10:08:11.848] elapsed time [00:00:00:29.192]
      DBGMISC: ENTERED krmiexe [10:08:11.876]
      DBGMISC: Executing command incremental backup restore [10:08:11.995] (krmiexe)
      DBGRPC: krmxr - entering
      DBGRPC: krmxpoq - returning rpc_number: 14 with status: FINISHED129 for channel default
      DBGRPC: krmxr - channel default has rpc_count: 14
      DBGRPC: krmxpoq - returning rpc_number: 27 with status: FINISHED129 for channel ORA_DISK_1
      DBGRPC: krmxr - channel ORA_DISK_1 has rpc_count: 27
      DBGRPC: ENTERED krmqgns
      DBGRPC: krmqgns: looking for work for channel default (krmqgns)
      DBGRPC: krmqgns: commands remaining to be executed: (krmqgns)
      DBGRPC: CMD type=incremental backup restore cmdid=1 status=NOT STARTED
      DBGRPC: 1 STEPstepid=1 cmdid=1 status=NOT STARTED devtype=DISK bs.stamp=976086120 step_size=0 Bytes
      DBGRPC: krmqgns: no work found for channel default (krmqgns)
      DBGRPC: (krmqgns)
      DBGRPC: EXITED krmqgns with status 1
      DBGRPC: ENTERED krmqgns
      DBGRPC: krmqgns: looking for work for channel ORA_DISK_1 (krmqgns)
      DBGRPC: krmqgns: commands remaining to be executed: (krmqgns)
      DBGRPC: CMD type=incremental backup restore cmdid=1 status=NOT STARTED
      DBGRPC: 1 STEPstepid=1 cmdid=1 status=NOT STARTED devtype=DISK bs.stamp=976086120 step_size=0 Bytes
      DBGRPC: krmqgns: channel ORA_DISK_1 assigned step 1 (krmqgns)
      DBGRPC: (krmqgns)
      DBGRPC: EXITED krmqgns with status 0
      DBGRPC: krmxcis - channel ORA_DISK_1, calling pcicmp
      DBGRPC: krmxr - channel ORA_DISK_1 calling peicnt
      DBGRPC: krmxrpc - channel ORA_DISK_1 kpurpc2 err=0 db=target proc=SYS.DBMS_BACKUP_RESTORE.RESTORESTATUS excl: 0
      DBGRPC: krmxrpc - channel ORA_DISK_1 kpurpc2 err=0 db=target proc=SYS.DBMS_BACKUP_RESTORE.SETRMANSTATUSROWID excl: 0
      DBGRPC: krmxrpc - channel ORA_DISK_1 kpurpc2 err=0 db=target proc=SYS.DBMS_BACKUP_RESTORE.APPLYSETDATAFILE excl: 0
      DBGMISC: ENTERED krmzlog [10:08:12.863]
      RMAN-08039: channel ORA_DISK_1: starting incremental datafile backup set restore
      DBGMISC: EXITED krmzlog [10:08:12.957] elapsed time [00:00:00:00.094]
      DBGMISC: ENTERED krmzgparms [10:08:12.979]
      DBGMISC: Step id = 1; Code = 2 [10:08:13.102] (krmzgparms)
      DBGMISC: EXITED krmzgparms with status 0 (FALSE) [10:08:13.145] elapsed time [00:00:00:00.166]
      DBGIO: channel ORA_DISK_1: set_stamp=976086083 set_count=7 [10:08:13.181] (ridf_start)
      DBGRPC: krmxrpc - channel ORA_DISK_1 kpurpc2 err=0 db=target proc=SYS.DBMS_BACKUP_RESTORE.APPLYDATAFILETO excl: 0
      DBGMISC: ENTERED krmzlog [10:08:13.270]
      RMAN-08089: channel ORA_DISK_1: specifying datafile(s) to restore from backup set
      DBGMISC: EXITED krmzlog [10:08:13.356] elapsed time [00:00:00:00.086]
      DBGMISC: ENTERED krmzlog [10:08:13.378]
      RMAN-08509: destination for restore of datafile 00007: D:\APP\BGRENN\VIRTUAL\ORADATA\ORCL\USERS01.DBF
      DBGMISC: EXITED krmzlog [10:08:13.456] elapsed time [00:00:00:00.078]
      DBGIO: channel ORA_DISK_1: blocks=640 block_size=8192 [10:08:13.477] (ridf_name)
      DBGRPC: krmxrpc - channel ORA_DISK_1 kpurpc2 err=0 db=target proc=SYS.DBMS_BACKUP_RESTORE.RESTORESETPIECE excl: 0
      DBGMISC: ENTERED krmzlog [10:08:13.571]
      RMAN-08003: channel ORA_DISK_1: reading from backup piece D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\DATABASE\07T2RP23_1_1
      DBGMISC: EXITED krmzlog [10:08:13.680] elapsed time [00:00:00:00.109]
      DBGRPC: krmxrpc - channel ORA_DISK_1 kpurpc2 err=3123 db=target proc=SYS.DBMS_BACKUP_RESTORE.RESTOREBACKUPPIECE excl: 0
      DBGRPC: krmxr - channel ORA_DISK_1 returned from peicnt
      DBGRPC: krmxpoq - returning rpc_number: 33 with status: STARTED40 for channel ORA_DISK_1
      DBGRPC: ENTERED krmqgns
      DBGRPC: krmqgns: looking for work for channel default (krmqgns)
      DBGRPC: krmqgns: commands remaining to be executed: (krmqgns)
      DBGRPC: CMD type=incremental backup restore cmdid=1 status=STARTED
      DBGRPC: 1 STEPstepid=1 cmdid=1 status=STARTED devtype=DISK bs.stamp=976086120 step_size=0 Bytes
      DBGRPC: krmqgns: no work found for channel default (krmqgns)
      DBGRPC: (krmqgns)
      DBGRPC: EXITED krmqgns with status 1
      DBGRPC: krmxpoq - returning rpc_number: 33 with status: FINISHED40 for channel ORA_DISK_1
      DBGRPC: krmxr - channel ORA_DISK_1 calling peicnt
      DBGRPC: krmxrpc - channel ORA_DISK_1 kpurpc2 err=0 db=target proc=SYS.DBMS_BACKUP_RESTORE.RESTOREBACKUPPIECE excl: 0
      DBGMISC: ENTERED krmzlog [10:08:14.452]
      RMAN-08611: channel ORA_DISK_1: piece handle=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\DATABASE\07T2RP23_1_1 tag=TAG20180514T070122
      DBGMISC: EXITED krmzlog [10:08:14.527] elapsed time [00:00:00:00.075]
      DBGMISC: ENTERED krmzlog [10:08:14.560]
      RMAN-08023: channel ORA_DISK_1: restored backup piece 1
      DBGMISC: EXITED krmzlog [10:08:14.677] elapsed time [00:00:00:00.117]
      DBGIO: Type %Comp Blocks Tot Blocks Blksize ElpTime(s) IO Rt(b/s) Name [10:08:14.710] (krmkqio)
      DBGIO: ---- ----- ---------- ---------- ---------- ---------- ---------- ----- [10:08:14.732] (krmkqio)
      DBGIO: IN 127 8192 0 0 D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\DATABASE\07T2RP23_1_1 [10:08:14.759] (krmkqio)
      DBGIO: AGG 0 8192 0 0 [10:08:14.790] (krmkqio)
      DBGMISC: ENTERED krmzlog [10:08:14.818]
      RMAN-08180: channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
      DBGMISC: EXITED krmzlog [10:08:14.893] elapsed time [00:00:00:00.075]
      DBGRPC: krmxrpc - channel ORA_DISK_1 kpurpc2 err=0 db=target proc=SYS.DBMS_BACKUP_RESTORE.FETCHFILERESTORED excl: 0
      DBGRPC: krmxrpc - channel ORA_DISK_1 kpurpc2 err=0 db=target proc=SYS.DBMS_BACKUP_RESTORE.FETCHFILERESTORED excl: 0
      DBGRPC: krmxrpc - channel ORA_DISK_1 kpurpc2 err=0 db=target proc=SYS.DBMS_BACKUP_RESTORE.RESTORECANCEL excl: 0
      DBGRPC: krmxr - channel ORA_DISK_1 returned from peicnt
      DBGRPC: krmxr - channel ORA_DISK_1 finished step
      DBGRPC: ENTERED krmqgns
      DBGRPC: krmqgns: looking for work for channel default (krmqgns)
      DBGRPC: krmqgns: commands remaining to be executed: (krmqgns)
      DBGRPC: CMD type=incremental backup restore cmdid=1 status=STARTED
      DBGRPC: 1 STEPstepid=1 cmdid=1 status=FINISHED devtype=DISK bs.stamp=976086120 step_size=0 Bytes
      DBGRPC: krmqgns: no work found for channel default (krmqgns)
      DBGRPC: (krmqgns)
      DBGRPC: EXITED krmqgns with status 1
      DBGRPC: ENTERED krmqgns
      DBGRPC: krmqgns: channel ORA_DISK_1 finished step 1 (krmqgns)
      DBGRPC: krmqgns: looking for work for channel ORA_DISK_1 (krmqgns)
      DBGRPC: krmqgns: commands remaining to be executed: (krmqgns)
      DBGRPC: krmqgns: no work found for channel ORA_DISK_1 (krmqgns)
      DBGRPC: (krmqgns)
      DBGRPC: EXITED krmqgns with status 1
      DBGRPC: ENTERED krmqgns
      DBGRPC: krmqgns: looking for work for channel default (krmqgns)
      DBGRPC: krmqgns: commands remaining to be executed: (krmqgns)
      DBGRPC: krmqgns: no work found for channel default (krmqgns)
      DBGRPC: (krmqgns)
      DBGRPC: EXITED krmqgns with status 1
      DBGRPC: ENTERED krmqgns
      DBGRPC: krmqgns: looking for work for channel ORA_DISK_1 (krmqgns)
      DBGRPC: krmqgns: commands remaining to be executed: (krmqgns)
      DBGRPC: krmqgns: no work found for channel ORA_DISK_1 (krmqgns)
      DBGRPC: (krmqgns)
      DBGRPC: EXITED krmqgns with status 1
      DBGRPC: krmxr - all done
      DBGRPC: krmxr - exiting with 0
      DBGMISC: EXITED krmiexe with status 0 [10:08:16.147] elapsed time [00:00:00:04.271]
      DBGMISC: Finished incremental backup restore at 14-MAY-18 [10:08:16.169]
      DBGMISC: ENTERED krmkmrsr [10:08:16.196]
      DBGSQL: ENTERED krmkosqlerr
      DBGSQL: TARGET> select /*+ rule */ round(sum(MBYTES_PROCESSED)), round(sum(INPUT_BYTES)), round(sum(OUTPUT_BYTES)) from V$RMAN_STATUS START WITH RECID = :row_id and STAMP = :row_stamp CONNECT BY PRIOR RECID = parent_recid
      DBGSQL: sqlcode = 24347
      DBGSQL: B :row_id = 53
      DBGSQL: B :row_stamp = 976097261
      DBGSQL: success: ORA-24347: Warning of a NULL column in an aggregate function (krmkosqlerr)
      DBGSQL: (krmkosqlerr)
      DBGSQL: EXITED krmkosqlerr
      DBGSQL: ENTERED krmkgodevtype
      DBGMISC: krmkgodevtype: return device type [10:08:16.641]
      DBGSQL: EXITED krmkgodevtype
      DBGSQL: TARGET> begin sys.dbms_backup_restore.commitRmanStatusRow( row_id => :row_id, row_stamp => :row_stamp, mbytes => :mb, status => :status, ibytes => :ib, obytes => :ob, odevtype => :odevtype); end;
      DBGSQL: sqlcode = 0
      DBGSQL: B :row_id = 53
      DBGSQL: B :row_stamp = 976097261
      DBGSQL: B :mb = 0
      DBGSQL: B :status = 2
      DBGSQL: B :ib = 1040384
      DBGSQL: B :ob = 0
      DBGSQL: B :odevtype =
      DBGMISC: EXITED krmkmrsr [10:08:17.225] elapsed time [00:00:00:01.029]
      DBGMISC: EXITED krmice [10:08:17.251] elapsed time [00:00:00:36.481]
      DBGMISC: ENTERED krmice [10:08:17.343]
      DBGMISC: command to be compiled and executed is: starting media recovery [10:08:17.368] (krmice)
      DBGMISC: command after this command is: restoring and applying logs [10:08:17.390] (krmice)
      DBGMISC: current incarnation must match for starting media recovery [10:08:17.417] (krmice)
      DBGMISC: ENTERED krmkcrsr [10:08:17.456]
      DBGSQL: TARGET> begin sys.dbms_backup_restore.createRmanStatusRow( level => :level, parent_id => :pid, parent_stamp => :pts, status => :status, command_id => :command_id, operation => :operation, row_id => :row_id, row_stamp => :row_stamp, flags => :flags); end;
      DBGSQL: sqlcode = 0
      DBGSQL: B :row_id = 54
      DBGSQL: B :row_stamp = 976097297
      DBGSQL: B :level = 2
      DBGSQL: B :pid = 51
      DBGSQL: B :pts = 976097246
      DBGSQL: B :status = 1
      DBGSQL: B :command_id = 2018-05-14T10:05:22
      DBGSQL: B :operation = starting media recovery
      DBGSQL: B :flags = 2
      DBGMISC: EXITED krmkcrsr [10:08:18.371] elapsed time [00:00:00:00.915]
      DBGMISC: ENTERED krmicomp [10:08:18.398]
      DBGMISC: ENTERED krmkomp [10:08:18.420]
      DBGRCV: ENTERED krmkucls
      DBGRCV: EXITED krmkucls with address 0
      DBGMISC: krmkcomp - Name translation defaults

      RMAN builds the list of archived logs that are needed for recovery.
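
      A rough way to see this list yourself (my own sketch, not the internal dbms_rcvman call) is to query v$archived_log for logs in the recovery range; the resetlogs SCN (1490582) and datafile checkpoint SCN (3435627) below are taken from the trace above.

            select thread#, sequence#, name, first_change#, next_change#
              from v$archived_log
             where resetlogs_change# = 1490582   -- resetlogs SCN from the trace
               and next_change# > 3435627        -- checkpoint SCN after the incremental restore
             order by thread#, sequence#;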



       DBGSQL:       TARGET> declare thread  number; sequence number; recid  number; alRec  dbms_rcvman.alRec_t; begin dbms_rcvman.getArchivedLog(alRec => alRec); if (:rlscn = alRec.rlgSCN and :stopthr = alRec.thread and ((alRec.sequence >= :stopseq and :toclause = 0) or (alRec.sequence > :stopseq and :toclause = 1))) then :flag := 1; else :flag := 0; :al_key:al_key_i     := alRec.key; :recid:recid_i      := alRec.recid; :stamp:stamp_i      := alRec.stamp; :thread         := alRec.thread; :sequence        := alRec.sequence; :fileName:fileName_i   := alRec.fileName; :lowSCN         := alRec.lowSCN; :lowTime         := alRec.lowTime; :nextSCN         := alRec.nextSCN; :nextTime        := nvl(alRec.nextTime, to_date('12/31/9999', 'MM/DD/YYYY')); :rlgSCN         := alRec.rlgSCN; :rlgTime         := alRec.rlgTime; :blocks         := alRec.blocks; :blockSize        := alRec.blockSize; :status         := alRec.status; :compTime:compTime_i   := alRec.compTime; :duplicate        := alRec.duplicate; :compressed:compressed_i := alRec.compressed; :isrdf:isrdf_i      := alRec.isrdf; :stby          := alRec.stby; :terminal        := alRec.terminal; :site_key:site_key_i   := alRec.site_key; :source_dbid       := alRec.source_dbid; end if; end;   
      DBGSQL: sqlcode = 0
      DBGSQL: B :flag = 0
      DBGSQL: B :al_key = NULL
      DBGSQL: B :recid = NULL
      DBGSQL: B :stamp = NULL
      DBGSQL: B :thread = 1
      DBGSQL: B :sequence = 44
      DBGSQL: B :fileName = NULL
      DBGSQL: B :lowSCN = 3437200
      DBGSQL: B :lowTime = "14-MAY-18"
      DBGSQL: B :nextSCN = 3437210
      DBGSQL: B :nextTime = "14-MAY-18"
      DBGSQL: B :rlgSCN = 1490582
      DBGSQL: B :rlgTime = "25-APR-18"
      DBGSQL: B :blocks = 25
      DBGSQL: B :blockSize = 512
      DBGSQL: B :status = D
      DBGSQL: B :compTime = NULL
      DBGSQL: B :duplicate = 1
      DBGSQL: B :compressed = NO
      DBGSQL: B :isrdf = NO
      DBGSQL: B :stby = N
      DBGSQL: B :terminal = NO
      DBGSQL: B :site_key = 0
      DBGSQL: B :source_dbid = 0
      DBGSQL: B :rlscn = 1490582
      DBGSQL: B :stopthr = 0
      DBGSQL: B :stopseq = 0
      DBGSQL: B :toclause = 0
      DBGRCVMAN: ENTERING getArchivedLog
      DBGRCVMAN: getArchivedLog - resetscn=1490582 thread=1 seq=44 lowscn=3437200 nextscn=3437210 terminal=NO site_key_order_col=0 isrdf=NO stamp=-1
      DBGRCVMAN: getArchivedLog - currInc =0
      DBGRCVMAN: getArchivedLogLast(translateArcLogSCNRange2) := local
      DBGRCVMAN: getArchivedLogLast := local
      DBGRCVMAN: EXITING getArchivedLog
      DBGMISC: EXITED krmkgal with status Done [10:08:57.766] elapsed time [00:00:00:01.696]
      DBGMISC: ENTERED krmkgal [10:08:57.792]
      DBGMISC: krmrfalb: archive log list: [10:09:02.627]
      DBGRESTORE: 1 ALSPEC
      DBGRESTORE: 1 RAL key=3 recid=3 stamp=976086144 thread=1 seq=40 site_key=0
      DBGRESTORE: lowscn=3435597 nxtscn=3435668 rstscn=1490582
      DBGRESTORE: lowtime=2018-05-14 07:00:45 nexttime=2018-05-14 07:02:24 rlgtime=2018-04-25 12:33:34
      DBGRESTORE: status=A blocks=9 krmkch { count=0 found=FALSE }
      DBGRESTORE: name=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000040_0974378014.0001
      DBGRESTORE: 2 RAL key=0 recid=0 stamp=0 thread=1 seq=40 site_key=0
      DBGRESTORE: lowscn=3435597 nxtscn=3435668 rstscn=1490582
      DBGRESTORE: lowtime=2018-05-14 07:00:45 nexttime=2018-05-14 07:02:24 rlgtime=2018-04-25 12:33:34
      DBGRESTORE: status=D blocks=9 krmkch { count=0 found=FALSE }
      DBGRESTORE: name=
      DBGRESTORE: 3 RAL key=4 recid=4 stamp=976086175 thread=1 seq=41 site_key=0
      DBGRESTORE: lowscn=3435668 nxtscn=3435770 rstscn=1490582
      DBGRESTORE: lowtime=2018-05-14 07:02:24 nexttime=2018-05-14 07:02:55 rlgtime=2018-04-25 12:33:34
      DBGRESTORE: status=A blocks=1970 krmkch { count=0 found=FALSE }
      DBGRESTORE: name=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000041_0974378014.0001
      DBGRESTORE: 4 RAL key=0 recid=0 stamp=0 thread=1 seq=41 site_key=0
      DBGRESTORE: lowscn=3435668 nxtscn=3435770 rstscn=1490582
      DBGRESTORE: lowtime=2018-05-14 07:02:24 nexttime=2018-05-14 07:02:55 rlgtime=2018-04-25 12:33:34
      DBGRESTORE: status=D blocks=1970 krmkch { count=0 found=FALSE }
      DBGRESTORE: name=
      DBGRESTORE: 5 RAL key=5 recid=5 stamp=976092346 thread=1 seq=42 site_key=0
      DBGRESTORE: lowscn=3435770 nxtscn=3437175 rstscn=1490582
      DBGRESTORE: lowtime=2018-05-14 07:02:55 nexttime=2018-05-14 08:45:45 rlgtime=2018-04-25 12:33:34
      DBGRESTORE: status=A blocks=4349 krmkch { count=0 found=FALSE }
      DBGRESTORE: name=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000042_0974378014.0001
      DBGRESTORE: 6 RAL key=0 recid=0 stamp=0 thread=1 seq=42 site_key=0
      DBGRESTORE: lowscn=3435770 nxtscn=3437175 rstscn=1490582
      DBGRESTORE: lowtime=2018-05-14 07:02:55 nexttime=2018-05-14 08:45:45 rlgtime=2018-04-25 12:33:34
      DBGRESTORE: status=D blocks=4349 krmkch { count=0 found=FALSE }
      DBGRESTORE: name=
      DBGRESTORE: 7 RAL key=6 recid=6 stamp=976092358 thread=1 seq=43 site_key=0
      DBGRESTORE: lowscn=3437175 nxtscn=3437200 rstscn=1490582
      DBGRESTORE: lowtime=2018-05-14 08:45:45 nexttime=2018-05-14 08:45:58 rlgtime=2018-04-25 12:33:34
      DBGRESTORE: status=A blocks=16 krmkch { count=0 found=FALSE }
      DBGRESTORE: name=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000043_0974378014.0001
      DBGRESTORE: 8 RAL key=0 recid=0 stamp=0 thread=1 seq=43 site_key=0
      DBGRESTORE: lowscn=3437175 nxtscn=3437200 rstscn=1490582
      DBGRESTORE: lowtime=2018-05-14 08:45:45 nexttime=2018-05-14 08:45:58 rlgtime=2018-04-25 12:33:34
      DBGRESTORE: status=D blocks=16 krmkch { count=0 found=FALSE }
      DBGRESTORE: name=
      DBGRESTORE: 9 RAL key=7 recid=7 stamp=976092375 thread=1 seq=44 site_key=0
      DBGRESTORE: lowscn=3437200 nxtscn=3437210 rstscn=1490582
      DBGRESTORE: lowtime=2018-05-14 08:45:58 nexttime=2018-05-14 08:46:14 rlgtime=2018-04-25 12:33:34
      DBGRESTORE: status=A blocks=25 krmkch { count=0 found=FALSE }
      DBGRESTORE: name=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000044_0974378014.0001
      DBGRESTORE: 10 RAL key=0 recid=0 stamp=0 thread=1 seq=44 site_key=0
      DBGRESTORE: lowscn=3437200 nxtscn=3437210 rstscn=1490582
      DBGRESTORE: lowtime=2018-05-14 08:45:58 nexttime=2018-05-14 08:46:14 rlgtime=2018-04-25 12:33:34
      DBGRESTORE: status=D blocks=25 krmkch { count=0 found=FALSE }
      DBGRESTORE: name=
      DBGRESTORE: 11 RAL key=8 recid=8 stamp=976092381 thread=1 seq=45 site_key=0
      DBGRESTORE: lowscn=3437210 nxtscn=3437216 rstscn=1490582
      DBGRESTORE: lowtime=2018-05-14 08:46:14 nexttime=2018-05-14 08:46:21 rlgtime=2018-04-25 12:33:34
      DBGRESTORE: status=A blocks=1 krmkch { count=0 found=FALSE }
      DBGRESTORE: name=D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000045_0974378014.0001
      DBGRESTORE: 12 RAL key=0 recid=0 stamp=0 thread=1 seq=45 site_key=0
      DBGRESTORE: lowscn=3437210 nxtscn=3437216 rstscn=1490582
      DBGRESTORE: lowtime=2018-05-14 08:46:14 nexttime=2018-05-14 08:46:21 rlgtime=2018-04-25 12:33:34
      DBGRESTORE: status=D blocks=1 krmkch { count=0 found=FALSE }
      DBGRESTORE: name=

      RMAN then validates each archived log to determine whether it is already on disk or needs to be restored from a backup.
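
      You can issue the same validation call manually. This is a minimal sketch using the signature and bind values shown in the trace below; dbms_backup_restore is an undocumented internal package, so treat it as illustration only and run it as SYS against a test database.

            declare
              valrc binary_integer;
            begin
              valrc := sys.dbms_backup_restore.validateArchivedLog(
                         recid            => 3,
                         stamp            => 976086144,
                         fname            => 'D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000040_0974378014.0001',
                         thread           => 1,
                         sequence         => 40,
                         resetlogs_change => 1490582,
                         first_change     => 3435597,
                         blksize          => 512,
                         signal           => 0,
                         terminal         => 0);
              -- valrc = 0 means the log checked out, matching the RMAN-06050 message in the trace
              dbms_output.put_line('valrc = ' || valrc);
            end;
            /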


       DBGSQL:       TARGET> begin :valrc := sys.dbms_backup_restore.validateArchivedLog( recid       => :recid, stamp       => :stamp, fname       => :fname, thread      => :thd, sequence     => :seq, resetlogs_change => :resetscn, first_change   => :lowscn, blksize      => :blksize, signal      => 0, terminal     => :terminal); end;   
      DBGSQL: sqlcode = 0
      DBGSQL: B :valrc = 0
      DBGSQL: B :recid = 3
      DBGSQL: B :stamp = 976086144
      DBGSQL: B :fname = D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000040_0974378014.0001
      DBGSQL: B :thd = 1
      DBGSQL: B :seq = 40
      DBGSQL: B :resetscn = 1490582
      DBGSQL: B :lowscn = 3435597
      DBGSQL: B :blksize = 512
      DBGSQL: B :terminal = 0
      DBGMISC: EXITED krmkflog with status 0 [10:09:05.892] elapsed time [00:00:00:00.516]
      RMAN-06050: archived log for thread 1 with sequence 40 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000040_0974378014.0001
      DBGMISC: ENTERED krmkflog [10:09:06.000]
      DBGSQL: TARGET> begin :valrc := sys.dbms_backup_restore.validateArchivedLog( recid => :recid, stamp => :stamp, fname => :fname, thread => :thd, sequence => :seq, resetlogs_change => :resetscn, first_change => :lowscn, blksize => :blksize, signal => 0, terminal => :terminal); end;
      DBGSQL: sqlcode = 0
      DBGSQL: B :valrc = 0
      DBGSQL: B :recid = 4
      DBGSQL: B :stamp = 976086175
      DBGSQL: B :fname = D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000041_0974378014.0001
      DBGSQL: B :thd = 1
      DBGSQL: B :seq = 41
      DBGSQL: B :resetscn = 1490582
      DBGSQL: B :lowscn = 3435668
      DBGSQL: B :blksize = 512
      DBGSQL: B :terminal = 0
      DBGMISC: EXITED krmkflog with status 0 [10:09:06.407] elapsed time [00:00:00:00.407]
      RMAN-06050: archived log for thread 1 with sequence 41 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000041_0974378014.0001
      DBGMISC: ENTERED krmkflog [10:09:06.482]
      DBGSQL: TARGET> begin :valrc := sys.dbms_backup_restore.validateArchivedLog( recid => :recid, stamp => :stamp, fname => :fname, thread => :thd, sequence => :seq, resetlogs_change => :resetscn, first_change => :lowscn, blksize => :blksize, signal => 0, terminal => :terminal); end;
      DBGSQL: sqlcode = 0
      DBGSQL: B :valrc = 0
      DBGSQL: B :recid = 5
      DBGSQL: B :stamp = 976092346
      DBGSQL: B :fname = D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000042_0974378014.0001
      DBGSQL: B :thd = 1
      DBGSQL: B :seq = 42
      DBGSQL: B :resetscn = 1490582
      DBGSQL: B :lowscn = 3435770
      DBGSQL: B :blksize = 512
      DBGSQL: B :terminal = 0
      DBGMISC: EXITED krmkflog with status 0 [10:09:06.915] elapsed time [00:00:00:00.433]
      RMAN-06050: archived log for thread 1 with sequence 42 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000042_0974378014.0001
      DBGMISC: ENTERED krmkflog [10:09:07.146]
      DBGSQL: TARGET> begin :valrc := sys.dbms_backup_restore.validateArchivedLog( recid => :recid, stamp => :stamp, fname => :fname, thread => :thd, sequence => :seq, resetlogs_change => :resetscn, first_change => :lowscn, blksize => :blksize, signal => 0, terminal => :terminal); end;
      DBGSQL: sqlcode = 0
      DBGSQL: B :valrc = 0
      DBGSQL: B :recid = 6
      DBGSQL: B :stamp = 976092358
      DBGSQL: B :fname = D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000043_0974378014.0001
      DBGSQL: B :thd = 1
      DBGSQL: B :seq = 43
      DBGSQL: B :resetscn = 1490582
      DBGSQL: B :lowscn = 3437175
      DBGSQL: B :blksize = 512
      DBGSQL: B :terminal = 0
      DBGMISC: EXITED krmkflog with status 0 [10:09:07.705] elapsed time [00:00:00:00.559]
      RMAN-06050: archived log for thread 1 with sequence 43 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000043_0974378014.0001
      DBGMISC: ENTERED krmkflog [10:09:07.787]
      DBGSQL: TARGET> begin :valrc := sys.dbms_backup_restore.validateArchivedLog( recid => :recid, stamp => :stamp, fname => :fname, thread => :thd, sequence => :seq, resetlogs_change => :resetscn, first_change => :lowscn, blksize => :blksize, signal => 0, terminal => :terminal); end;
      DBGSQL: sqlcode = 0
      DBGSQL: B :valrc = 0
      DBGSQL: B :recid = 7
      DBGSQL: B :stamp = 976092375
      DBGSQL: B :fname = D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000044_0974378014.0001
      DBGSQL: B :thd = 1
      DBGSQL: B :seq = 44
      DBGSQL: B :resetscn = 1490582
      DBGSQL: B :lowscn = 3437200
      DBGSQL: B :blksize = 512
      DBGSQL: B :terminal = 0
      DBGMISC: EXITED krmkflog with status 0 [10:09:08.209] elapsed time [00:00:00:00.422]
      RMAN-06050: archived log for thread 1 with sequence 44 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000044_0974378014.0001
      DBGMISC: ENTERED krmkflog [10:09:08.294]
      DBGSQL: TARGET> begin :valrc := sys.dbms_backup_restore.validateArchivedLog( recid => :recid, stamp => :stamp, fname => :fname, thread => :thd, sequence => :seq, resetlogs_change => :resetscn, first_change => :lowscn, blksize => :blksize, signal => 0, terminal => :terminal); end;
      DBGSQL: sqlcode = 0
      DBGSQL: B :valrc = 0
      DBGSQL: B :recid = 8
      DBGSQL: B :stamp = 976092381
      DBGSQL: B :fname = D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000045_0974378014.0001
      DBGSQL: B :thd = 1
      DBGSQL: B :seq = 45
      DBGSQL: B :resetscn = 1490582
      DBGSQL: B :lowscn = 3437210
      DBGSQL: B :blksize = 512
      DBGSQL: B :terminal = 0
      DBGMISC: EXITED krmkflog with status 0 [10:09:08.799] elapsed time [00:00:00:00.505]
      RMAN-06050: archived log for thread 1 with sequence 45 is already on disk as file D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000045_0974378014.0001
      DBGRCV: ENTERED krmksmr


      Finally, the archived logs are applied to complete media recovery.
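
      Stripped of the PL/SQL plumbing, the statements the channel actually executes (visible as krmicd.execSql calls in the command tree below) boil down to this sequence; the log file name is one example from this trace.

            alter database recover if needed datafile 7;
            alter database recover logfile 'D:\APP\BGRENN\VIRTUAL\PRODUCT\12.2.0\DBHOME_1\RDBMS\ARC0000000040_0974378014.0001';
            -- ... one "recover logfile" per archived log in the list ...
            alter database recover cancel;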




      DBGSQL:       TARGET> select decode(LOG_MODE, 'ARCHIVELOG', 1, 0) into :dbstate from V$DATABASE
      DBGSQL: sqlcode = 0
      DBGSQL: D :dbstate = 1
      DBGRCV: EXITED krmksmr with address 0
      DBGMISC: EXITED krmkrcas [10:09:09.165] elapsed time [00:00:00:07.119]
      DBGMISC: EXITED krmkdmr [10:09:09.189] elapsed time [00:00:00:29.103]
      DBGMISC: EXITED krmkomp [10:09:09.232] elapsed time [00:00:00:31.701]
      DBGPLSQL: the compiled command tree is: [10:09:09.255] (krmicomp)
      DBGPLSQL: 1 CMD type=restoring and applying logs cmdid=1 status=NOT STARTED
      DBGPLSQL: 1 STEPstepid=1 cmdid=1 status=NOT STARTED chid=default
      DBGPLSQL: 1 TEXTNOD = --
      DBGPLSQL: 2 TEXTNOD = declare
      DBGPLSQL: 3 TEXTNOD =
      DBGPLSQL: 4 TEXTNOD = mr_cancelled exception;
      DBGPLSQL: 5 TEXTNOD = pragma exception_init(mr_cancelled, -283);
      DBGPLSQL: 6 TEXTNOD =
      DBGPLSQL: 7 TEXTNOD = --
      DBGPLSQL: 8 TEXTNOD = mr_cont_rcv exception;
      DBGPLSQL: 9 TEXTNOD = pragma exception_init(mr_cont_rcv, -288);
      DBGPLSQL: 10 TEXTNOD =
      DBGPLSQL: 11 TEXTNOD = mr_need_log exception;
      DBGPLSQL: 12 TEXTNOD = pragma exception_init(mr_need_log, -279);
      DBGPLSQL: 13 TEXTNOD =
      DBGPLSQL: 14 TEXTNOD = mr_aborted exception;
      DBGPLSQL: 15 TEXTNOD = pragma exception_init(mr_aborted, -20500);
      DBGPLSQL: 16 TEXTNOD =
      DBGPLSQL: 17 TEXTNOD = bmr_block_errors exception;
      DBGPLSQL: 18 TEXTNOD = pragma exception_init(bmr_block_errors, -19680);
      DBGPLSQL: 19 TEXTNOD =
      DBGPLSQL: 20 TEXTNOD = mr_createdf exception;
      DBGPLSQL: 21 TEXTNOD = pragma exception_init(mr_createdf, -20505);
      DBGPLSQL: 22 TEXTNOD =
      DBGPLSQL: 23 TEXTNOD = archivelog_missing_exception exception;
      DBGPLSQL: 24 TEXTNOD = pragma exception_init(archivelog_missing_exception, -20515);
      DBGPLSQL: 25 TEXTNOD =
      DBGPLSQL: 26 TEXTNOD = archivelog_backup_not_found exception;
      DBGPLSQL: 27 TEXTNOD = pragma exception_init(archivelog_backup_not_found, -20506);
      DBGPLSQL: 28 TEXTNOD =
      DBGPLSQL: 29 TEXTNOD = alfrec v$archived_log%ROWTYPE;
      DBGPLSQL: 30 TEXTNOD = dellog boolean := FALSE; -- set by rman compiler
      DBGPLSQL: 31 TEXTNOD = deltarget boolean := FALSE; -- set by rman compiler
      DBGPLSQL: 32 TEXTNOD = bmr boolean := FALSE; -- set by rman compiler
      DBGPLSQL: 33 TEXTNOD = flash boolean := FALSE; -- set by rman compiler
      DBGPLSQL: 34 TEXTNOD = pdbpitr boolean := FALSE; -- set by rman compiler
      DBGPLSQL: 35 TEXTNOD = untilcancel boolean := FALSE; -- set by rman compiler
      DBGPLSQL: 36 TEXTNOD = preplugin boolean := FALSE; -- set by rman compiler
      DBGPLSQL: 37 TEXTNOD =
      DBGPLSQL: 38 TEXTNOD = scn number;
      DBGPLSQL: 39 TEXTNOD = stopthd number;
      DBGPLSQL: 40 TEXTNOD = stopseq number;
      DBGPLSQL: 41 TEXTNOD = stoprcv boolean;
      DBGPLSQL: 42 TEXTNOD = rlc number; -- resetlogs count
      DBGPLSQL: 43 TEXTNOD = resetlogs_change number; -- resetlogs SCN used when untilLog is set
      DBGPLSQL: 44 TEXTNOD = start_time date;
      DBGPLSQL: 45 TEXTNOD = elapsed number;
      DBGPLSQL: 46 TEXTNOD = hours number;
      DBGPLSQL: 47 TEXTNOD = mins number;
      DBGPLSQL: 48 TEXTNOD = secs number;
      DBGPLSQL: 49 TEXTNOD = pstandby number;
      DBGPLSQL: 50 TEXTNOD =
      DBGPLSQL: 51 TEXTNOD = --
      DBGPLSQL: 52 TEXTNOD = unnamed varchar2(1024);
      DBGPLSQL: 53 TEXTNOD = dfname varchar2(1024);
      DBGPLSQL: 54 TEXTNOD = newdfname varchar2(1024);
      DBGPLSQL: 55 TEXTNOD = fileno number := 0;
      DBGPLSQL: 56 TEXTNOD = recovdf boolean := false;
      DBGPLSQL: 57 TEXTNOD = filelist varchar2(512):=NULL;
      DBGPLSQL: 58 TEXTNOD = tmp number:=0;
      DBGPLSQL: 59 TEXTNOD = toclause boolean;
      DBGPLSQL: 60 TEXTNOD = tsnum number;
      DBGPLSQL: 61 TEXTNOD = tsname varchar2(32);
      DBGPLSQL: 62 TEXTNOD = pdbname varchar2(128);
      DBGPLSQL: 63 TEXTNOD = bnewomf boolean;
      DBGPLSQL: 64 TEXTNOD = dropf boolean;
      DBGPLSQL: 65 TEXTNOD = createdf boolean := false;
      DBGPLSQL: 66 TEXTNOD = type numTab_t is table of number index by binary_integer;
      DBGPLSQL: 67 TEXTNOD = df_offln_list numTab_t;
      DBGPLSQL: 68 TEXTNOD = con_id number := sys_context('userenv', 'con_id');
      DBGPLSQL: 69 TEXTNOD = internal_error exception;
      DBGPLSQL: 70 TEXTNOD = pragma exception_init(internal_error, -600);
      DBGPLSQL: 71 TEXTNOD =
      DBGPLSQL: 72 TEXTNOD = function continue_rcv(createdf OUT boolean) return boolean is
      DBGPLSQL: 73 TEXTNOD = begin
      DBGPLSQL: 74 TEXTNOD = createdf := false;
      DBGPLSQL: 75 TEXTNOD = <<do_cont_again>>
      DBGPLSQL: 76 TEXTNOD = begin
      DBGPLSQL: 77 TEXTNOD = krmicd.clearErrors;
      DBGPLSQL: 78 TEXTNOD = krmicd.execSql('alter database recover continue');
      DBGPLSQL: 79 TEXTNOD = exception
      DBGPLSQL: 80 TEXTNOD = when mr_cont_rcv then
      DBGPLSQL: 81 TEXTNOD = goto do_cont_again;
      DBGPLSQL: 82 TEXTNOD = when mr_need_log then
      DBGPLSQL: 83 TEXTNOD = return true;
      DBGPLSQL: 84 TEXTNOD = when mr_createdf then
      DBGPLSQL: 85 TEXTNOD = createdf := true;
      DBGPLSQL: 86 TEXTNOD = return true;
      DBGPLSQL: 87 TEXTNOD = end;
      DBGPLSQL: 88 TEXTNOD = return false;
      DBGPLSQL: 89 TEXTNOD = end;
      DBGPLSQL: 90 TEXTNOD =
      DBGPLSQL: 91 TEXTNOD = begin
      DBGPLSQL: 92 TEXTNOD =
      DBGPLSQL: 93 TEXTNOD =
      DBGPLSQL: 94 PRMVAL = bmr := false; flash := false; pdbpitr := false; preplugin := false; dellog := false; deltarget := false;
      DBGPLSQL: 95 TEXTNOD =
      DBGPLSQL: 96 TEXTNOD = toclause := krmicd.checkUntil(stopthd, stopseq);
      DBGPLSQL: 97 TEXTNOD =
      DBGPLSQL: 98 TEXTNOD = select count(*) into pstandby from V$DATABASE
      DBGPLSQL: 99 TEXTNOD = where database_role='PHYSICAL STANDBY';
      DBGPLSQL: 100 TEXTNOD =
      DBGPLSQL: 101 TEXTNOD = <<restart_recovery>> -- recovery is never restarted for bmr
      DBGPLSQL: 102 TEXTNOD =
      DBGPLSQL: 103 TEXTNOD = begin
      DBGPLSQL: 104 TEXTNOD = select sysdate into start_time from x$dual;
      DBGPLSQL: 105 TEXTNOD = --
      DBGPLSQL: 106 TEXTNOD = --
      DBGPLSQL: 107 TEXTNOD = --
      DBGPLSQL: 108 TEXTNOD = --
      DBGPLSQL: 109 TEXTNOD = --
      DBGPLSQL: 110 TEXTNOD = if not bmr and not flash and not pdbpitr and recovdf then
      DBGPLSQL: 111 TEXTNOD = deb('apply_log', 're-start recovery');
      DBGPLSQL: 112 TEXTNOD =
      DBGPLSQL: 113 TEXTNOD = krmicd.execSQL('alter database recover datafile list clear');
      DBGPLSQL: 114 TEXTNOD = if filelist is not null then
      DBGPLSQL: 115 TEXTNOD = krmicd.execSql('alter database recover datafile list ' || filelist);
      DBGPLSQL: 116 TEXTNOD = end if;
      DBGPLSQL: 117 TEXTNOD = krmicd.execSql('alter database recover' || '
      DBGPLSQL: 118 PRMVAL = if needed datafile 7
      DBGPLSQL: 119 TEXTNOD = ');
      DBGPLSQL: 120 TEXTNOD = fileno := 0;
      DBGPLSQL: 121 TEXTNOD = recovdf := false;
      DBGPLSQL: 122 TEXTNOD = end if;
      DBGPLSQL: 123 TEXTNOD = exception
      DBGPLSQL: 124 TEXTNOD = when mr_need_log then
      DBGPLSQL: 125 TEXTNOD = krmicd.clearErrors;
      DBGPLSQL: 126 TEXTNOD = end;
      DBGPLSQL: 127 TEXTNOD =
      DBGPLSQL: 128 TEXTNOD = --
      DBGPLSQL: 129 TEXTNOD = <<get_log>>
      DBGPLSQL: 130 TEXTNOD =
      DBGPLSQL: 131 TEXTNOD = if createdf then
      DBGPLSQL: 132 TEXTNOD = createdf := false;
      DBGPLSQL: 133 TEXTNOD = raise mr_createdf;
      DBGPLSQL: 134 TEXTNOD = end if;
      DBGPLSQL: 135 TEXTNOD =
      DBGPLSQL: 136 TEXTNOD = begin
      DBGPLSQL: 137 TEXTNOD = select thr, seq, scn, rls, rlc into
      DBGPLSQL: 138 TEXTNOD = alfrec.thread#, alfrec.sequence#, scn, alfrec.resetlogs_change#,
      DBGPLSQL: 139 TEXTNOD = rlc from x$kcrmx;
      DBGPLSQL: 140 TEXTNOD =
      DBGPLSQL: 141 TEXTNOD = exception
      DBGPLSQL: 142 TEXTNOD = when no_data_found then
      DBGPLSQL: 143 TEXTNOD = if bmr then
      DBGPLSQL: 144 TEXTNOD = begin
      DBGPLSQL: 145 TEXTNOD = sys.dbms_backup_restore.bmrCancel;
      DBGPLSQL: 146 TEXTNOD = exception
      DBGPLSQL: 147 TEXTNOD = when bmr_block_errors then
      DBGPLSQL: 148 TEXTNOD = krmicd.writeMsg(8111);
      DBGPLSQL: 149 TEXTNOD = end;
      DBGPLSQL: 150 TEXTNOD = elsif flash then
      DBGPLSQL: 151 TEXTNOD = sys.dbms_backup_restore.flashbackCancel;
      DBGPLSQL: 152 TEXTNOD = krmicd.checkSetDatabase;
      DBGPLSQL: 153 TEXTNOD = elsif pdbpitr then
      DBGPLSQL: 154 TEXTNOD = sys.dbms_backup_restore.recoverCancel;
      DBGPLSQL: 155 TEXTNOD = elsif preplugin then
      DBGPLSQL: 156 TEXTNOD = sys.dbms_backup_restore.prePluginRecoveryCancel;
      DBGPLSQL: 157 TEXTNOD = end if;
      DBGPLSQL: 158 TEXTNOD = --
      DBGPLSQL: 159 TEXTNOD = --
      DBGPLSQL: 160 TEXTNOD = --
      DBGPLSQL: 161 TEXTNOD = --
      DBGPLSQL: 162 TEXTNOD = --
      DBGPLSQL: 163 TEXTNOD = --
      DBGPLSQL: 164 TEXTNOD = --
      DBGPLSQL: 165 TEXTNOD = --
      DBGPLSQL: 166 TEXTNOD =
      DBGPLSQL: 167 TEXTNOD = --
      DBGPLSQL: 168 TEXTNOD = delete_logs(FALSE, dellog, deltarget, preplugin);
      DBGPLSQL: 169 TEXTNOD =
      DBGPLSQL: 170 TEXTNOD = return;
      DBGPLSQL: 171 TEXTNOD = end;
      DBGPLSQL: 172 TEXTNOD =
      DBGPLSQL: 173 TEXTNOD = select resetlogs_change# into resetlogs_change from v$database_incarnation
      DBGPLSQL: 174 TEXTNOD = where status='CURRENT';
      DBGPLSQL: 175 TEXTNOD =
      DBGPLSQL: 176 TEXTNOD = if (resetlogs_change=alfrec.resetlogs_change# and
      DBGPLSQL: 177 TEXTNOD = stopthd = alfrec.thread# and
      DBGPLSQL: 178 TEXTNOD = alfrec.sequence# >= stopseq)
      DBGPLSQL: 179 TEXTNOD = then
      DBGPLSQL: 180 TEXTNOD = stoprcv := FALSE;
      DBGPLSQL: 181 TEXTNOD = if bmr then
      DBGPLSQL: 182 TEXTNOD = begin
      DBGPLSQL: 183 TEXTNOD = sys.dbms_backup_restore.bmrCancel;
      DBGPLSQL: 184 TEXTNOD = exception
      DBGPLSQL: 185 TEXTNOD = when bmr_block_errors then
      DBGPLSQL: 186 TEXTNOD = krmicd.writeMsg(8111);
      DBGPLSQL: 187 TEXTNOD = end;
      DBGPLSQL: 188 TEXTNOD = stoprcv := TRUE;
      DBGPLSQL: 189 TEXTNOD = elsif flash then
      DBGPLSQL: 190 TEXTNOD = --
      DBGPLSQL: 191 TEXTNOD = --
      DBGPLSQL: 192 TEXTNOD = if alfrec.sequence# > stopseq then
      DBGPLSQL: 193 TEXTNOD = sys.dbms_backup_restore.flashbackCancel;
      DBGPLSQL: 194 TEXTNOD = krmicd.checkSetDatabase;
      DBGPLSQL: 195 TEXTNOD = stoprcv := TRUE;
      DBGPLSQL: 196 TEXTNOD = end if;
      DBGPLSQL: 197 TEXTNOD = elsif pdbpitr then
      DBGPLSQL: 198 TEXTNOD = sys.dbms_backup_restore.recoverCancel;
      DBGPLSQL: 199 TEXTNOD = stoprcv := TRUE;
      DBGPLSQL: 200 TEXTNOD = elsif preplugin then
      DBGPLSQL: 201 TEXTNOD = sys.dbms_backup_restore.prePluginRecoveryCancel;
      DBGPLSQL: 202 TEXTNOD = stoprcv := TRUE;
      DBGPLSQL: 203 TEXTNOD = else
      DBGPLSQL: 204 TEXTNOD = krmicd.execSql('alter database recover cancel');
      DBGPLSQL: 205 TEXTNOD = stoprcv := TRUE;
      DBGPLSQL: 206 TEXTNOD = end if;
      DBGPLSQL: 207 TEXTNOD =
      DBGPLSQL: 208 TEXTNOD = if stoprcv then
      DBGPLSQL: 209 TEXTNOD = --
      DBGPLSQL: 210 TEXTNOD = --
      DBGPLSQL: 211 TEXTNOD = --
      DBGPLSQL: 212 TEXTNOD = --
      DBGPLSQL: 213 TEXTNOD = --
      DBGPLSQL: 214 TEXTNOD =
      DBGPLSQL: 215 TEXTNOD = --
      DBGPLSQL: 216 TEXTNOD = delete_logs(FALSE, dellog, deltarget, preplugin);
      DBGPLSQL: 217 TEXTNOD = select abs(sysdate-start_time) into elapsed from x$dual;
      DBGPLSQL: 218 TEXTNOD = dur2time(elapsed, hours, mins, secs);
      DBGPLSQL: 219 TEXTNOD = --
      DBGPLSQL: 220 TEXTNOD = krmicd.writeMsg(8181, to_char(hours, 'FM09') || ':' ||
      DBGPLSQL: 221 TEXTNOD = to_char(mins, 'FM09') || ':' ||
      DBGPLSQL: 222 TEXTNOD = to_char(secs, 'FM09'));
      DBGPLSQL: 223 TEXTNOD = return;
      DBGPLSQL: 224 TEXTNOD = end if;
      DBGPLSQL: 225 TEXTNOD = end if;
      DBGPLSQL: 226 TEXTNOD =
      DBGPLSQL: 227 TEXTNOD = begin
      DBGPLSQL: 228 TEXTNOD = deb('log_apply', 'looking for log with scn ' ||scn||' thread='||
      DBGPLSQL: 229 TEXTNOD = alfrec.thread#||' sequence='||alfrec.sequence# ||' resetlogs scn '||
      DBGPLSQL: 230 TEXTNOD = alfrec.resetlogs_change#||' resetlogs time='||
      DBGPLSQL: 231 TEXTNOD = to_char(stamp2date(rlc)));
      DBGPLSQL: 232 TEXTNOD =
      DBGPLSQL: 233 TEXTNOD = begin
      DBGPLSQL: 234 TEXTNOD = --
      DBGPLSQL: 235 TEXTNOD = alfrec.name := krmicd.checkLog(scn,
      DBGPLSQL: 236 TEXTNOD = alfrec.thread#,
      DBGPLSQL: 237 TEXTNOD = alfrec.sequence#,
      DBGPLSQL: 238 TEXTNOD = alfrec.recid,
      DBGPLSQL: 239 TEXTNOD = alfrec.stamp,
      DBGPLSQL: 240 TEXTNOD = alfrec.resetlogs_change#,
      DBGPLSQL: 241 TEXTNOD = stamp2date(rlc),
      DBGPLSQL: 242 TEXTNOD = alfrec.first_change#,
      DBGPLSQL: 243 TEXTNOD = alfrec.next_change#,
      DBGPLSQL: 244 TEXTNOD = alfrec.block_size,
      DBGPLSQL: 245 TEXTNOD = preplugin);
      DBGPLSQL: 246 TEXTNOD =
      DBGPLSQL: 247 TEXTNOD = exception
      DBGPLSQL: 248 TEXTNOD = when archivelog_backup_not_found or archivelog_missing_exception then
      DBGPLSQL: 249 TEXTNOD = if (untilcancel) then
      DBGPLSQL: 250 TEXTNOD = --
      DBGPLSQL: 251 TEXTNOD = --
      DBGPLSQL: 252 TEXTNOD = --
      DBGPLSQL: 253 TEXTNOD = alfrec.name := NULL;
      DBGPLSQL: 254 TEXTNOD = krmicd.writeMsg(8194, to_char(alfrec.thread#),
      DBGPLSQL: 255 TEXTNOD = to_char(alfrec.sequence#));
      DBGPLSQL: 256 TEXTNOD = else
      DBGPLSQL: 257 TEXTNOD = --
      DBGPLSQL: 258 TEXTNOD = --
      DBGPLSQL: 259 TEXTNOD = --
      DBGPLSQL: 260 TEXTNOD = raise;
      DBGPLSQL: 261 TEXTNOD = end if;
      DBGPLSQL: 262 TEXTNOD = end;
      DBGPLSQL: 263 TEXTNOD = exception
      DBGPLSQL: 264 TEXTNOD = when archivelog_backup_not_found then
      DBGPLSQL: 265 TEXTNOD = raise;
      DBGPLSQL: 266 TEXTNOD = when others then
      DBGPLSQL: 267 TEXTNOD = if (is_db_in_noarchivelog) then
      DBGPLSQL: 268 TEXTNOD = krmicd.writeMsg(8187, to_char(scn));
      DBGPLSQL: 269 TEXTNOD = else
      DBGPLSQL: 270 TEXTNOD = if pstandby = 1 then
      DBGPLSQL: 271 TEXTNOD = krmicd.clearErrors;
      DBGPLSQL: 272 TEXTNOD = --
      DBGPLSQL: 273 TEXTNOD = delete_logs(FALSE, dellog, deltarget, preplugin);
      DBGPLSQL: 274 TEXTNOD = select abs(sysdate-start_time) into elapsed from x$dual;
      DBGPLSQL: 275 TEXTNOD = dur2time(elapsed, hours, mins, secs);
      DBGPLSQL: 276 TEXTNOD = --
      DBGPLSQL: 277 TEXTNOD = krmicd.writeMsg(8181, to_char(hours, 'FM09') || ':' ||
      DBGPLSQL: 278 TEXTNOD = to_char(mins, 'FM09') || ':' ||
      DBGPLSQL: 279 TEXTNOD = to_char(secs, 'FM09'));
      DBGPLSQL: 280 TEXTNOD = return;
      DBGPLSQL: 281 TEXTNOD = else
      DBGPLSQL: 282 TEXTNOD = krmicd.writeMsg(8060); -- unable to find log
      DBGPLSQL: 283 TEXTNOD = krmicd.writeMsg(8510, to_char(alfrec.thread#),
      DBGPLSQL: 284 TEXTNOD = to_char(alfrec.sequence#));
      DBGPLSQL: 285 TEXTNOD = raise;
      DBGPLSQL: 286 TEXTNOD = end if;
      DBGPLSQL: 287 TEXTNOD = end if;
      DBGPLSQL: 288 TEXTNOD = end;
      DBGPLSQL: 289 TEXTNOD =
      DBGPLSQL: 290 TEXTNOD = deb('log_apply', 'log file name returned is ' || alfrec.name );
      DBGPLSQL: 291 TEXTNOD =
      DBGPLSQL: 292 TEXTNOD = begin
      DBGPLSQL: 293 TEXTNOD =
      DBGPLSQL: 294 TEXTNOD = if alfrec.name is not NULL then
      DBGPLSQL: 295 TEXTNOD = if bmr then
      DBGPLSQL: 296 TEXTNOD = sys.dbms_backup_restore.bmrDoMediaRecovery(alfrec.name);
      DBGPLSQL: 297 TEXTNOD = elsif flash then
      DBGPLSQL: 298 TEXTNOD = sys.dbms_backup_restore.flashbackFiles(alfrec.name);
      DBGPLSQL: 299 TEXTNOD = elsif pdbpitr then
      DBGPLSQL: 300 TEXTNOD = sys.dbms_backup_restore.recoverDo(alfrec.name);
      DBGPLSQL: 301 TEXTNOD = elsif preplugin then
      DBGPLSQL: 302 TEXTNOD = sys.dbms_backup_restore.prePluginDoMediaRecovery(alfrec.name);
      DBGPLSQL: 303 TEXTNOD = else
      DBGPLSQL: 304 TEXTNOD = krmicd.writeMsg(8515, alfrec.name,
      DBGPLSQL: 305 TEXTNOD = to_char(alfrec.thread#),
      DBGPLSQL: 306 TEXTNOD = to_char(alfrec.sequence#));
      DBGPLSQL: 307 TEXTNOD = --
      DBGPLSQL: 308 TEXTNOD = krmicd.execSql( 'alter database recover logfile ''' ||
      DBGPLSQL: 309 TEXTNOD = replace(alfrec.name,'''','''''') || '''');
      DBGPLSQL: 310 TEXTNOD = end if;
      DBGPLSQL: 311 TEXTNOD =
      DBGPLSQL: 312 TEXTNOD = --
      DBGPLSQL: 313 TEXTNOD = --
      DBGPLSQL: 314 TEXTNOD = --
      DBGPLSQL: 315 TEXTNOD = --
      DBGPLSQL: 316 TEXTNOD = --
      DBGPLSQL: 317 TEXTNOD =
      DBGPLSQL: 318 TEXTNOD = --
      DBGPLSQL: 319 TEXTNOD = delete_logs(FALSE, dellog, deltarget, preplugin);
      DBGPLSQL: 320 TEXTNOD =
      DBGPLSQL: 321 TEXTNOD = if bmr then
      DBGPLSQL: 322 TEXTNOD = begin
      DBGPLSQL: 323 TEXTNOD = sys.dbms_backup_restore.bmrCancel;
      DBGPLSQL: 324 TEXTNOD = exception
      DBGPLSQL: 325 TEXTNOD = when bmr_block_errors then
      DBGPLSQL: 326 TEXTNOD = krmicd.writeMsg(8111);
      DBGPLSQL: 327 TEXTNOD = end;
      DBGPLSQL: 328 TEXTNOD = elsif flash then
      DBGPLSQL: 329 TEXTNOD = sys.dbms_backup_restore.flashbackCancel;
      DBGPLSQL: 330 TEXTNOD = krmicd.checkSetDatabase;
      DBGPLSQL: 331 TEXTNOD = elsif pdbpitr then
      DBGPLSQL: 332 TEXTNOD = sys.dbms_backup_restore.recoverCancel;
      DBGPLSQL: 333 TEXTNOD = elsif preplugin then
      DBGPLSQL: 334 TEXTNOD = sys.dbms_backup_restore.prePluginRecoveryCancel;
      DBGPLSQL: 335 TEXTNOD = end if;
      DBGPLSQL: 336 TEXTNOD = select abs(sysdate-start_time) into elapsed from x$dual;
      DBGPLSQL: 337 TEXTNOD = dur2time(elapsed, hours, mins, secs);
      DBGPLSQL: 338 TEXTNOD = --
      DBGPLSQL: 339 TEXTNOD = krmicd.writeMsg(8181, to_char(hours, 'FM09') || ':' ||
      DBGPLSQL: 340 TEXTNOD = to_char(mins, 'FM09') || ':' ||
      DBGPLSQL: 341 TEXTNOD = to_char(secs, 'FM09'));
      DBGPLSQL: 342 TEXTNOD = return;
      DBGPLSQL: 343 TEXTNOD = else
      DBGPLSQL: 344 TEXTNOD = return;
      DBGPLSQL: 345 TEXTNOD = end if;
      DBGPLSQL: 346 TEXTNOD = exception
      DBGPLSQL: 347 TEXTNOD = when mr_cont_rcv then
      DBGPLSQL: 348 TEXTNOD = if continue_rcv(createdf) then
      DBGPLSQL: 349 TEXTNOD = goto get_log;
      DBGPLSQL: 350 TEXTNOD = end if;
      DBGPLSQL: 351 TEXTNOD = when mr_need_log then
      DBGPLSQL: 352 TEXTNOD = --
      DBGPLSQL: 353 TEXTNOD = --
      DBGPLSQL: 354 TEXTNOD = krmicd.clearErrors;
      DBGPLSQL: 355 TEXTNOD =
      DBGPLSQL: 356 TEXTNOD = --
      DBGPLSQL: 357 TEXTNOD = --
      DBGPLSQL: 358 TEXTNOD = --
      DBGPLSQL: 359 TEXTNOD = --
      DBGPLSQL: 360 TEXTNOD = --
      DBGPLSQL: 361 TEXTNOD = delete_logs(TRUE, dellog, deltarget, preplugin);
      DBGPLSQL: 362 TEXTNOD =
      DBGPLSQL: 363 TEXTNOD = goto get_log;
      DBGPLSQL: 364 TEXTNOD = when mr_createdf then
      DBGPLSQL: 365 TEXTNOD = if (bmr or flash or pdbpitr) then
      DBGPLSQL: 366 TEXTNOD = raise;
      DBGPLSQL: 367 TEXTNOD = end if;
      DBGPLSQL: 368 TEXTNOD =
      DBGPLSQL: 369 TEXTNOD = --
      DBGPLSQL: 370 TEXTNOD = for df_rec in (select fnfno, fnnam, fnonm, ts.ts#, ts.name,
      DBGPLSQL: 371 TEXTNOD = fepfdi, fepdi,
      DBGPLSQL: 372 TEXTNOD = (case when pdb.con_id > 1 then
      DBGPLSQL: 373 TEXTNOD = pdb.name else null end) pdbname
      DBGPLSQL: 374 TEXTNOD = from x$kccfn fn, x$kccfe fe, v$tablespace ts, v$containers pdb
      DBGPLSQL: 375 TEXTNOD = where fn.fnunn = 1
      DBGPLSQL: 376 TEXTNOD = and fn.fnfno=fe.fenum
      DBGPLSQL: 377 TEXTNOD = and fe.fefnh=fnnum
      DBGPLSQL: 378 TEXTNOD = and fe.fetsn=ts.ts#
      DBGPLSQL: 379 TEXTNOD = and fe.con_id = ts.con_id
      DBGPLSQL: 380 TEXTNOD = and fe.con_id = pdb.con_id) loop
      DBGPLSQL: 381 TEXTNOD =
      DBGPLSQL: 382 TEXTNOD = --
      DBGPLSQL: 383 TEXTNOD = --
      DBGPLSQL: 384 TEXTNOD = if (df_rec.fepdi > 0 or df_rec.fepfdi > 0) then
      DBGPLSQL: 385 TEXTNOD = raise;
      DBGPLSQL: 386 TEXTNOD = end if;
      DBGPLSQL: 387 TEXTNOD =
      DBGPLSQL: 388 TEXTNOD = fileno := df_rec.fnfno;
      DBGPLSQL: 389 TEXTNOD = unnamed := df_rec.fnnam;
      DBGPLSQL: 390 TEXTNOD = dfname := df_rec.fnonm;
      DBGPLSQL: 391 TEXTNOD = tsnum := df_rec.ts#;
      DBGPLSQL: 392 TEXTNOD = tsname := df_rec.name;
      DBGPLSQL: 393 TEXTNOD = pdbname := df_rec.pdbname;
      DBGPLSQL: 394 TEXTNOD =
      DBGPLSQL: 395 TEXTNOD = deb('apply_log', 'tsnum ' || tsnum);
      DBGPLSQL: 396 TEXTNOD = deb('apply_log', 'tsname ' || tsname);
      DBGPLSQL: 397 TEXTNOD = deb('apply_log', 'fileno ' || fileno);
      DBGPLSQL: 398 TEXTNOD = deb('apply_log', 'dfname ' || dfname);
      DBGPLSQL: 399 TEXTNOD = deb('apply_log', 'pdbname' || nvl(pdbname, 'NULL'));
      DBGPLSQL: 400 TEXTNOD =
      DBGPLSQL: 401 TEXTNOD = deb('apply_log', 'file old name is ' || dfname);
      DBGPLSQL: 402 TEXTNOD =
      DBGPLSQL: 403 TEXTNOD = recovdf := true;
      DBGPLSQL: 404 TEXTNOD = if krmicd.getDfInfo(fileno, tsnum, tsname, pdbname,
      DBGPLSQL: 405 TEXTNOD = newdfname, bnewomf, dropf)
      DBGPLSQL: 406 TEXTNOD = then
      DBGPLSQL: 407 TEXTNOD = if (newdfname is not null) then
      DBGPLSQL: 408 TEXTNOD = dfname := newdfname;
      DBGPLSQL: 409 TEXTNOD = deb('apply_log', 'file new name is ' || newdfname);
      DBGPLSQL: 410 TEXTNOD = else
      DBGPLSQL: 411 TEXTNOD = deb('apply_log', 'using name at creation ' || dfname);
      DBGPLSQL: 412 TEXTNOD = end if;
      DBGPLSQL: 413 TEXTNOD =
      DBGPLSQL: 414 TEXTNOD = krmicd.writeMsg(6064, fileno, dfname);
      DBGPLSQL: 415 TEXTNOD = sys.dbms_backup_restore.createDatafile(fno => fileno,
      DBGPLSQL: 416 TEXTNOD = newomf => bnewomf,
      DBGPLSQL: 417 TEXTNOD = recovery => TRUE,
      DBGPLSQL: 418 TEXTNOD = fname => dfname);
      DBGPLSQL: 419 TEXTNOD =
      DBGPLSQL: 420 TEXTNOD = --
      DBGPLSQL: 421 TEXTNOD = if filelist is not null then
      DBGPLSQL: 422 TEXTNOD = filelist := filelist || ', ' || fileno;
      DBGPLSQL: 423 TEXTNOD = else
      DBGPLSQL: 424 TEXTNOD = filelist := fileno;
      DBGPLSQL: 425 TEXTNOD = end if;
      DBGPLSQL: 426 TEXTNOD =
      DBGPLSQL: 427 TEXTNOD = else
      DBGPLSQL: 428 TEXTNOD = dfname := null;
      DBGPLSQL: 429 TEXTNOD = deb('apply_log', 'no filename - ignore creation of file# '
      DBGPLSQL: 430 TEXTNOD = || fileno);
      DBGPLSQL: 431 TEXTNOD = deb('apply_log', 'This is recover database skip tablespace cmd');
      DBGPLSQL: 432 TEXTNOD =
      DBGPLSQL: 433 TEXTNOD = if (df_offln_list.exists(fileno)) then
      DBGPLSQL: 434 TEXTNOD = deb('apply_log', 'file is already offlined ' || fileno);
      DBGPLSQL: 435 TEXTNOD =
      DBGPLSQL: 436 TEXTNOD = else
      DBGPLSQL: 437 TEXTNOD =
      DBGPLSQL: 438 TEXTNOD = df_offln_list(fileno) := 1;
      DBGPLSQL: 439 TEXTNOD =
      DBGPLSQL: 440 TEXTNOD = if (dropf = true) then
      DBGPLSQL: 441 TEXTNOD = krmicd.writeMsg(6958, 'alter database datafile ' || fileno ||
      DBGPLSQL: 442 TEXTNOD = ' offline drop');
      DBGPLSQL: 443 TEXTNOD = krmicd.execSql('alter database datafile ' || fileno ||
      DBGPLSQL: 444 TEXTNOD = ' offline drop');
      DBGPLSQL: 445 TEXTNOD = else
      DBGPLSQL: 446 TEXTNOD = krmicd.writeMsg(6958, 'alter database datafile ' || fileno ||
      DBGPLSQL: 447 TEXTNOD = ' offline');
      DBGPLSQL: 448 TEXTNOD = krmicd.execSql('alter database datafile ' || fileno ||
      DBGPLSQL: 449 TEXTNOD = ' offline');
      DBGPLSQL: 450 TEXTNOD = end if;
      DBGPLSQL: 451 TEXTNOD = end if;
      DBGPLSQL: 452 TEXTNOD = end if;
      DBGPLSQL: 453 TEXTNOD = end loop;
      DBGPLSQL: 454 TEXTNOD = --
      DBGPLSQL: 455 TEXTNOD = --
      DBGPLSQL: 456 TEXTNOD = krmicd.clearErrors;
      DBGPLSQL: 457 TEXTNOD = goto restart_recovery;
      DBGPLSQL: 458 TEXTNOD = end;
      DBGPLSQL: 459 TEXTNOD = end;



So those are the steps that RMAN goes through.
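In case you want to see this generated PL/SQL for your own restore, it shows up in an RMAN debug trace. Here is a minimal sketch (the trace and log file names are just placeholders):

$ rman target / debug trace=/tmp/rman_recover.trc log=/tmp/rman_recover.log

RMAN> recover database;

The DBGPLSQL lines come from the trace file; since every line is prefixed with DBGPLSQL, a simple grep will pull out the generated PL/SQL block like the one above.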


With RMAN, restore/recovery is three phases and works like this (a bare-bones example follows the steps).

RESTORE process:

1. Restore the necessary pieces of the database from backup.

RECOVER process:

2. Find and apply incremental backups of the database to bring the SCN as close as possible to the recovery point.

     NOTE : this happens as one process.  RMAN reads the backup sets and applies the blocks as it processes them.  Like a full restore, the backup set is streamed into the RMAN channels, and RMAN writes the necessary blocks to the files.

Media recovery:

     NOTE : from an RMAN standpoint, media recovery is archive log processing only.  Incremental backup processing is considered part of the RECOVER step.

3. Identify the archive logs needed for recovery.
4. Validate that the archive logs needed are on disk, and if not, restore them from backup.
5. Recover the database using the archive logs on disk.
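Here is a minimal sketch of a point-in-time restore/recover (the UNTIL TIME value is just a placeholder for your recovery point).  RESTORE covers step 1, and RECOVER covers steps 2 through 5, restoring any archive logs it needs from backup automatically:

run {
  set until time "to_date('2023-06-01 12:00:00','YYYY-MM-DD HH24:MI:SS')";
  restore database;
  recover database;
}

During the RECOVER step you can watch the output follow the sequence above: any incremental backups are applied first, then RMAN switches to archive logs, restoring from backup any logs that aren't already on disk.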
