Oracle Full Database Platform Migration from Solaris to Linux (Big Endian to Little Endian) using XTTS and FTEX
Oracle documents to read:
V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)
12C - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2005729.1)
Source: Solaris 11, database size 120 TB (DB version 18c; the DB version could be 11g as well).
Target: Oracle Linux 7 64-bit (DB version 19c)
Methods:
There are multiple ways this could be done:
- Using Oracle GoldenGate
- Using the XTTS V4 scripts (RMAN method)
GoldenGate is an extra-cost option, while there is no extra cost with the XTTS/RMAN method.
GoldenGate can potentially reduce downtime to the bare minimum, while the XTTS method requires some downtime for the final incremental backup, the metadata export/import, and so on. In our case the downtime window was 12 hours, so we chose the XTTS method.
Prerequisites:
1. Download the XTTS V4 scripts from Oracle (attached to Doc ID 2471245.1).
2. You need an NFS mount shared between source and target. Free space should be roughly equal to the DB size.
3. We used NFS for both the script location and the backup location, so the backup taken on the source lands on NFS and the same NFS share is mounted on the target as well.
Transport ALL user tablespaces (exclude only SYSTEM, SYSAUX, UNDO and TEMP).
Script to check whether the set of tablespaces is self-contained:
STEP 1
cat chk.sql
spool chk.lst
set linesize 1000
set heading off
set pages 0
set feedback off
set serveroutput on size unlimited
declare
sql_text varchar2(32767) := 'test' ;
begin
-- build the TRANSPORT_SET_CHECK call for every tablespace except SYSTEM, SYSAUX and the TEMP/UNDO tablespaces
select 'SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => ' || '''' || listagg(tablespace_name,',') within group (order by tablespace_name) || '''' || ', incl_constraints => TRUE);' into sql_text from dba_tablespaces where not regexp_like (tablespace_name,'SYSTEM|SYSAUX|TEMP|UNDO') ;
dbms_output.put_line (sql_text) ;
execute immediate 'begin ' || sql_text || ' end ;' ;
end ;
/
prompt TTS Violations
SELECT * FROM transport_set_violations
/
spool off
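You can run it from SQL*Plus as a DBA user, for example:
sqlplus -s "/ as sysdba" @chk.sql
then review the spooled chk.lst for the generated call and any violations.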
If there are any violations, these need to be taken care of first.
We did have some issues, especially with partitioned tables, and had to fix violations like the following:
ORA-39922: Default Partition (Index) Tablespace for not contained in transportable set.
ORA-39921: Default Partition (Table) Tablespace for not contained in transportable set.
ORA-39918: PLSQL Functional Index in tablespace not allowed in transportable set.
Example fix for the table case (ORA-39921):
alter table Partition_TableName MODIFY DEFAULT ATTRIBUTES TABLESPACE TBS_DATA ;
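For the index case (ORA-39922) the equivalent statement works on the partitioned index; the index and tablespace names here are placeholders:
alter index Partition_IndexName MODIFY DEFAULT ATTRIBUTES TABLESPACE TBS_INDX ;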
Re-run chk.sql until there are no TTS violations left:
SQL> SELECT * FROM transport_set_violations ;
e.g. /mig_backup is the NFS mount and it is available on both source and target:
export TMPDIR=/mig_backup/XTTS/scripts/
(Put this in your .profile so you don't forget.)
TMPDIR is the directory where you have unzipped the V4 scripts.
STEP 2
Edit xtt.properties and change it according to your environment:
tablespaces=<comma-separated list of all tablespaces to transport; check the chk.lst output>
platformid=2 (the source platform ID; Solaris SPARC is 2, while AIX, HP-UX etc. have different IDs)
src_scratch_location=/mig_backup/XTTS/backup/
dest_datafile_location=+DATA (we have ASM on the target)
dest_scratch_location=/mig_backup/XTTS/backup/
parallel=8
rollparallel=8
getfileparallel=8
usermantransport=1
# usermantransport=1 should be set when the source database is 12c or higher
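To confirm the platform IDs, you can query the database directly; V$TRANSPORTABLE_PLATFORM lists every supported platform with its endian format, and V$DATABASE shows the current platform:
SQL> select platform_id, platform_name from v$database;
SQL> select platform_id, platform_name, endian_format from v$transportable_platform order by platform_id;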
STEP 3
Run the backup on the source system
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the backup as follows:
cd $TMPDIR (TMPDIR was set up in the prior step)
The first time you run this backup it takes a level 0 (full) backup, which might take a long time depending on your DB size.
Also make sure that block change tracking is enabled on the source system, if it is not set up already. It will reduce the time taken by the incremental backups.
Checking Whether Change Tracking is enabled
From SQL*Plus, we can query V$BLOCK_CHANGE_TRACKING to determine whether change tracking is enabled or not.
SQL> select status from V$BLOCK_CHANGE_TRACKING;
ENABLED => block change tracking is enabled.
DISABLED => block change tracking is disabled.
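If it is disabled, it can be enabled online. The file name below is just an example; if db_create_file_dest is set you can omit the USING FILE clause and let OMF place the tracking file:
SQL> alter database enable block change tracking using file '/u01/app/oracle/bct/change_tracking.chg';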
[oracle@source]$ nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup &
Review nohup.out after the end of the job (just to make sure the backup ran fine).
This assumes you are using the NFS shared location between source and target, so the backups do not need to be copied manually.
On Target :
Assumptions on the target:
You have created a starter database (same block size as the source, and where possible the same other parameters as the source). You just need SYSTEM, SYSAUX, TEMP and UNDO.
Make sure that the db_files parameter is correct. It should be the same as or higher than on the source.
If your source has a USERS tablespace then you need to drop the USERS tablespace on the target, but you can't drop USERS while it is the default permanent tablespace. So create a USERS_SomeUniqueName tablespace on the target and assign it as the default permanent tablespace; now you can drop USERS on the target (see the sketch after these notes).
On the target system, also create any other temporary tablespaces that you have on the source system.
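A minimal sketch of that swap; USERS_MIG is a placeholder name and the CREATE statement assumes db_create_file_dest is set (e.g. to +DATA):
SQL> create tablespace USERS_MIG datafile size 100m;
SQL> alter database default tablespace USERS_MIG;
SQL> drop tablespace USERS including contents and datafiles;
And to check db_files against the source:
SQL> show parameter db_files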
Start Restore on Target
export TMPDIR=/mig_backup/XTTS/scripts/
[oracle@dest]$ nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore &
Datafiles will be placed on the destination system in the defined dest_datafile_location, which in my case is ASM (+DATA).
After the job is finished, review nohup.out (output keeps appending to the existing nohup.out).
So what we have done so far:
We have taken a backup on the source and restored it on the target. Since this is the first time, it is a level 0 full backup that has been restored on the target.
The target database still does not know about these restored datafiles, so if you query v$datafile you will only see the datafiles of your starter database tablespaces.
After the restore is done, you can delete the backup files from the NFS location (the files under src_scratch_location=/mig_backup/XTTS/backup/) to free up the space taken by the level 0 backup. However, if there were errors during the restore, don't delete the backups; re-run the restore until no errors are reported, and delete the backups only after a successful restore.
Roll Forward Phase
During this phase an incremental backup is created from the source database, transferred to the destination system, converted to the destination system endian format, then applied to the converted destination datafile copies to roll them forward. This phase may be run multiple times. Each successive incremental backup should take less time than the prior incremental backup, and will bring the destination datafile copies more current with the source database. The data being transported (source) is fully accessible during this phase.
Create an incremental backup of the tablespaces being transported on the source system
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the create incremental step as follows:
Just make sure TMPDIR is set up correctly, and cd to $TMPDIR.
[oracle@source]$ nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup &
This step creates an incremental backup of all tablespaces listed in xtt.properties.
The incremental backup will be much smaller than the prior level 0 backup.
Apply the incremental backup to the datafile copies on the destination system
On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, run the roll forward datafiles step as follows:
Just make sure TMPDIR is set up correctly, and cd to $TMPDIR.
[oracle@dest]$ nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore &
The roll forward step connects to the destination database and applies the incremental backups to the tablespaces' datafiles for each tablespace being transported.
This applies the incremental backup taken in the last step.
**Note**
Although multiple backups can be executed against the source without being applied on the destination, res.txt must be copied after the last backup and before the '--restore' is executed at the destination (this copy is only needed if you are not using NFS).
I prefer to alternate one backup on the source with one restore on the target (just to keep it simple).
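If you are not using NFS, a plain scp does the copy; the host name and destination path here are placeholders:
[oracle@source]$ scp $TMPDIR/res.txt oracle@desthost:/path/to/dest/TMPDIR/res.txt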
Repeat the roll forward as many times as needed, or proceed to the final incremental backup.
The final incremental backup requires downtime for your source, since it puts the source tablespaces into READ ONLY mode.
Final Incremental Backup
Alter source tablespace(s) to READ ONLY in the source database
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, alter the tablespaces being transported to READ ONLY.
system@source/prod SQL> alter tablespace TS1 read only;
Tablespace altered.
system@source/prod SQL> alter tablespace TS2 read only;
Tablespace altered.
Create the final incremental backup of the tablespaces being transported on the source system:
**Make sure that you have put all tablespaces mentioned in xtt.properties into READ ONLY mode on the source.
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the backup as follows:
[oracle@source]$ nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup &
NOTE: As the tablespaces are in READ ONLY mode, the following warning received can be ignored:
####################################################################
Warning:
------
Warnings found in
####################################################################
Prepare newscn for Tablespaces:
DECLARE*
ERROR at line 1:
ORA-20001: TABLESPACE(S) IS READONLY OR, OFFLINE JUST CONVERT, COPY
ORA-06512: at line 284
While the backup is running, you can run the expdp of the metadata.
Create a parfile first.
** This assumes the data_pump_dir directory object is already created.
You can use the same location as $TMPDIR where you have the xtt scripts, or create another subdirectory.
This uses the FTEX (full transportable export) method: we do not specify any specific tablespaces; everything except SYSTEM, SYSAUX etc. is transported.
cd $TMPDIR/exp
cat exp_dbname.par
userid='/ as sysdba'
transport_full_check=no
directory=data_pump_dir
dumpfile=xttsdump_dbname.dmp
full=y
transportable=always
parallel=4
nohup expdp parfile=exp_dbname.par &
If data_pump_dir points to a local directory on the source, then after expdp is done you need to either copy xttsdump_dbname.dmp to the target or copy it to your NFS share.
Check whether the final incremental backup (started before the expdp step) has completed; if it has, you are ready to import the metadata on the target.
Once both steps are complete, i.e. the final incremental backup and expdp, you can put the source database tablespaces back into READ WRITE mode, ending the downtime for your source database.
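For example, repeating for every transported tablespace:
system@source/prod SQL> alter tablespace TS1 read write;
system@source/prod SQL> alter tablespace TS2 read write;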
Let's create the imp.par file as well.
** Assume that you copied xttsdump_dbname.dmp to the target, or that it is on the NFS share, and that data_pump_dir points to this location.
Here you need to list the datafiles of all transported tablespaces (excluding SYSTEM, SYSAUX, UNDO, TEMP):
TRANSPORT_DATAFILES=
You could have many TRANSPORT_DATAFILES lines.
How to generate the TRANSPORT_DATAFILES lines easily is discussed below.
cat imp_dbname.par
userid='/ as sysdba'
directory=data_pump_dir
dumpfile=xttsdump_dbname.dmp
logfile=xtts_imp_dbname.log
TRANSPORT_DATAFILES='+DATA/MDEV/DATAFILE/ADMIN.354.1089403171'
TRANSPORT_DATAFILES='+DATA/MDEV/DATAFILE/AIM_DATA.280.1089404215'
....
....
...
I have a set of scripts to generate this; edit it based on your requirements.
As the grid user, with ORACLE_SID set to +ASM, create this script:
cat list_asm_files.sh
asmcmd -p <<EOF
cd +DATA
ls -l
EOF
sh list_asm_files.sh > list_asm_files.txt
This will generate list_asm_files.txt
## Replace DB_NAME with your DB Name
##cat list_asm_files.txt | grep -i DATAFILE | grep -i DB_NAME | awk '{print $8 $9 $10}'
cat list_asm_files.txt | grep -i DATAFILE | grep -i DB_NAME | awk '{print "TRANSPORT_DATAFILES=""\047"$10"\047"}'
This should generate the TRANSPORT_DATAFILES lines; take all of these and paste them into imp_dbname.par.
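An alternative sketch that avoids parsing the ls -l columns: asmcmd find prints full path names directly (replace DB_NAME with your DB name; quote the pattern so the shell does not expand it):
asmcmd find +DATA/DB_NAME/DATAFILE "*" | awk '{printf "TRANSPORT_DATAFILES=\047%s\047\n", $1}'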
If you notice, we are using the full path names of the ASM files rather than the ASM aliases created by the XTTS restore (in the asmcmd ls -l output each alias shows up as alias_name.dbf => full_path, which is why the awk prints $10).
Important Point
You could use ASM Alias name as well e.g. TRANSPORT_DATAFILES='+DATA/alias_name.dbf'
However, I would advise using the full path name rather than the ASM alias, since ASM aliases can cause corruption if you are running multiple/different database XTTS migrations on the same server.
What happens is that ASM aliases are not unique across different databases, so if you do another XTTS migration on the same server, its restore can corrupt the datafiles of the first.
However, if you want to build a standby environment at the same time, then the ASM alias way is the only way to do it. We cover building the standby environment in a separate section below.
For now, let's assume we are not building a standby, so we use full path names in TRANSPORT_DATAFILES.
FINALLY, let's import the metadata.
On Target system
nohup impdp parfile=imp_dbname.par &
Watch the impdp logfile. There will be some warnings, but check whether all the tablespace metadata got imported and which issues you have to resolve.
Error during Import :
Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
ORA-39002: invalid operation
ORA-39352: Wrong number of TRANSPORT_DATAFILES specified: expected 22, received 19
This error means you did not specify all the TRANSPORT_DATAFILES; most likely some files were missed.
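A quick way to cross-check the counts before re-running; the query runs on the source (adjust the NOT IN list to your SYSTEM/SYSAUX/UNDO tablespace names) and the grep runs against the parfile on the target:
SQL> select count(*) from dba_data_files where tablespace_name not in ('SYSTEM','SYSAUX','UNDOTBS1');
[oracle@dest]$ grep -c TRANSPORT_DATAFILES imp_dbname.par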
After fixing issue, you can re-run impdp
You can ignore these warnings
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
>>> Cannot set an SCN larger than the current SCN. If a Streams Capture configuration was imported then the Apply that processes the captured messages needs to be dropped and recreated. See My Oracle Support article number 1380295.1.
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
ORA-39083: Object type PROCOBJ:"SYS"."IDX_RB$CLASS_1721" failed to create with error:
ORA-00042: Unknown Service name 1
Failing sql is:
BEGIN
dbms_scheduler.create_job_class('"IDX_RB$CLASS_1721"',NULL,'1',64,NULL,
'Transient MV Refresh class'
);COMMIT; END;
ORA-39083: Object type PROCOBJ:"SYS"."IDX_RB$CLASS_111" failed to create with error:
ORA-00042: Unknown Service name 1
Failing sql is:
BEGIN
dbms_scheduler.create_job_class('"IDX_RB$CLASS_111"',NULL,'1',64,NULL,
'Transient MV Refresh class'
);COMMIT; END;
ORA-39083: Object type PROCOBJ:"SYS"."IDX_RB$CLASS_46" failed to create with error:
ORA-00042: Unknown Service name 1
....
...
Watch out for other errors in the impdp logs.
After impdp is done successfully and the metadata is imported,
you can query v$datafile, v$tablespace etc. and check whether all tablespaces exist.
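For example, run the same counts on the source and compare:
SQL> select count(*) from v$tablespace;
SQL> select count(*) from v$datafile;
SQL> select tablespace_name, status from dba_tablespaces order by 1;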
Some operational issues we observed:
- AQ did not start; we had to start it manually.
- If your DB users have password versions of 10g or below, those accounts might be locked (see the query after this list).
- If you have database-level triggers on the source, you need to create them on the new destination.
- We observed an issue with some Oracle types: some types had extra characters that were causing problems. We fixed those manually.
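A sketch for spotting the affected accounts; PASSWORD_VERSIONS and ACCOUNT_STATUS are standard DBA_USERS columns:
SQL> select username, password_versions, account_status from dba_users where password_versions like '%10G%';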
TEST, TEST & TEST:
It is important to run through these steps on a test system a couple of times to familiarize yourself with the whole process, since it involves a lot of moving parts.
Building Standby at the same time
If you want to build a standby at the same time as you build your target system, it involves more steps:
You build the standby as soon as you create your starter database on the destination.
You need to mount the NFS backup location on the standby as well (so now the NFS location is on source, target and target standby).
On the target standby: you need to create another TMPDIR which points to a different location. It can't be the same location as on the target.
e.g. if target TMPDIR =/mig_backup/XTTS/scripts/
then on target standby TMPDIR =/mig_backup/XTTS/scripts/standby
Unzip V4 scripts on /mig_backup/XTTS/scripts/standby as well
Whenever you take a backup (it doesn't matter whether it is level 0 or incremental), copy res.txt after the backup from $TMPDIR to the standby TMPDIR /mig_backup/XTTS/scripts/standby:
cp /mig_backup/XTTS/scripts/res.txt /mig_backup/XTTS/scripts/standby/res.txt
When you are doing the restore on the target, you can simultaneously run the restore on the target standby as well;
make sure that there TMPDIR points to the standby TMPDIR, i.e. /mig_backup/XTTS/scripts/standby.
The same process needs to be repeated for every backup and restore cycle: copy res.txt after the backup, and only then run the restore on the target and the target standby.
The final step is also the same, except that for impdp we need to use the ASM alias names; full path names will not work here. You run impdp on the target only.
cat imp_dbname.par
userid='/ as sysdba'
directory=data_pump_dir
dumpfile=xttsdump_dbname.dmp
logfile=xtts_imp_dbname.log
TRANSPORT_DATAFILES='+DATA/ADMIN_059.dbf'
TRANSPORT_DATAFILES='+DATA/AIM_DATA_044.dbf'
Make sure that the final restore has completed on the target and the target standby as well, then run the import on the target:
impdp parfile=imp_dbname.par
Since the target and target standby are already set up as primary/standby, the impdp on the target carries the metadata changes over to the target standby as well.
This way you can create the XTTS primary and standby at around the same time.