
How to export LOB objects in Oracle

This article, “How to export LOB objects in Oracle”, shows how to cut down the time needed to export tables containing LOB columns:

Data Pump does not parallelize the export of LOB data, so a single export job processes LOB segments serially. The idea here is to split the table into chunks by ROWID block number and run one export job per chunk, which greatly reduces the overall elapsed time.
Note: This method is intended for exporting tables with LOB columns.
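
The split relies on DBMS_ROWID.ROWID_BLOCK_NUMBER: each export job only takes the rows whose block number falls into its MOD bucket. As a quick sanity check before running the script, you can preview how evenly the rows will be spread across the jobs. The query below is only a sketch with placeholder names (your_schema.your_lob_table) and an example degree of 5:

-- Hypothetical preview: row counts per bucket for a degree of 5.
-- Each bucket becomes the WHERE clause of one expdp job in the script below.
select mod(dbms_rowid.rowid_block_number(rowid), 5) as bucket,
       count(*) as row_count
from   your_schema.your_lob_table
group by mod(dbms_rowid.rowid_block_number(rowid), 5)
order by bucket;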

#!/bin/bash
#############################################################################
#  EXPORT IN PARALLEL WITH DATAPUMP
#############################################################################
#
# Author: Cyrille MODIANO
# Description: This script will create multiple export jobs in parallel
#              to export a table
#
#############################################################################

BASE_SCHEMA=$1
BASE_TABLE=$2
PARALLEL=$3;
PARTITION=$4
 
function usage(){
  echo "USAGE: 
	    Parameter 1 is the SCHEMA
	    Parameter 2 is the TABLE NAME
	    Parameter 3 is the DEGREE of parallelism
	    Parameter 4 (optional) is the partition (if any)"
}
 
if [ $# -lt 3 ]; then
	usage
	exit 1
fi
 
if [ $# -eq 4 ]; then
	PARFILE=${BASE_SCHEMA}_${BASE_TABLE}_${PARTITION}.par
	echo "tables=${BASE_SCHEMA}.${BASE_TABLE}:${PARTITION}" >  $PARFILE
	START_MESSAGE="# Beginning export of partition : ${BASE_SCHEMA}.${BASE_TABLE}:${PARTITION}"
	END_MESSAGE="# Finished export of partition: ${BASE_SCHEMA}.${BASE_TABLE}:${PARTITION}"
	DUMPFILE_BASE=${BASE_SCHEMA}_${BASE_TABLE}_${PARTITION}
	LOGFILE_BASE=${BASE_SCHEMA}_${BASE_TABLE}_${PARTITION}
else
	PARFILE=${BASE_SCHEMA}_${BASE_TABLE}.par
	echo "tables=${BASE_SCHEMA}.${BASE_TABLE}" >  $PARFILE
	START_MESSAGE="# Beginning export of table : ${BASE_SCHEMA}.${BASE_TABLE}"
	END_MESSAGE="# Finished export of table: ${BASE_SCHEMA}.${BASE_TABLE}"
	DUMPFILE_BASE=${BASE_SCHEMA}_${BASE_TABLE}
	LOGFILE_BASE=${BASE_SCHEMA}_${BASE_TABLE}
fi
 
# Adding parameters to the parfile
echo "directory=DATA_PUMP" >>  $PARFILE
echo "EXCLUDE=STATISTICS" >>  $PARFILE
echo "CLUSTER=N" >>  $PARFILE
 
echo "#########################################################################################"
echo $START_MESSAGE
echo "#########################################################################################"
echo " "
 
LIMIT=$(expr $PARALLEL - 1)
 
START_TIME=`date`
 
for i in `seq 0 $LIMIT`
do
   QUERY="where mod(dbms_rowid.rowid_block_number(rowid), ${PARALLEL}) = $i"
   expdp userid=\'/ as sysdba\' query=$BASE_SCHEMA.$BASE_TABLE:\"$QUERY\" dumpfile=${DUMPFILE_BASE}_${i}.dmp logfile=${LOGFILE_BASE}_${i}.log parfile=$PARFILE &
   sleep 3
done
 
wait `pidof expdp`
 
echo "#########################################################################################"
echo $END_MESSAGE
echo "# Start time : $START_TIME "
echo "# End time is: `date`"
echo "#########################################################################################"

[oracle@rac2 LOB]$ ./parallel_script.sh MUZALFA CART_AUDIT_TRAIL 5
Explanation of the command:
MUZALFA is the schema name,
CART_AUDIT_TRAIL is the table name, and
5 is the degree of parallelism (the number of export jobs started).
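
With these arguments the script writes a parameter file named MUZALFA_CART_AUDIT_TRAIL.par. Based on the echo statements in the script, its contents are:

tables=MUZALFA.CART_AUDIT_TRAIL
directory=DATA_PUMP
EXCLUDE=STATISTICS
CLUSTER=N

The DATA_PUMP directory object is assumed to already exist in the database and to point to the dump destination (here /dump/muzalfa_dump).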

OUTPUT:

#########################################################################################
# Beginning export of table : MUZALFA.CART_AUDIT_TRAIL
#########################################################################################


Export: Release 12.1.0.2.0 - Production on Thu Jun 22 15:11:11 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

Export: Release 12.1.0.2.0 - Production on Thu Jun 22 15:11:14 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

Export: Release 12.1.0.2.0 - Production on Thu Jun 22 15:11:17 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

Export: Release 12.1.0.2.0 - Production on Thu Jun 22 15:11:20 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

Export: Release 12.1.0.2.0 - Production on Thu Jun 22 15:11:24 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

Starting "SYS"."SYS_EXPORT_TABLE_03":  userid="/******** AS SYSDBA" query=MUZALFA.CART_AUDIT_TRAIL:"where mod(dbms_rowid.rowid_block_number(rowid), 5) = 2" dumpfile=MUZALFA_CART_AUDIT_TRAIL_2.dmp logfile=MUZALFA_CART_AUDIT_TRAIL_2.log parfile=MUZALFA_CART_AUDIT_TRAIL.par
Starting "SYS"."SYS_EXPORT_TABLE_01":  userid="/******** AS SYSDBA" query=MUZALFA.CART_AUDIT_TRAIL:"where mod(dbms_rowid.rowid_block_number(rowid), 5) = 0" dumpfile=MUZALFA_CART_AUDIT_TRAIL_0.dmp logfile=MUZALFA_CART_AUDIT_TRAIL_0.log parfile=MUZALFA_CART_AUDIT_TRAIL.par
Starting "SYS"."SYS_EXPORT_TABLE_04":  userid="/******** AS SYSDBA" query=MUZALFA.CART_AUDIT_TRAIL:"where mod(dbms_rowid.rowid_block_number(rowid), 5) = 3" dumpfile=MUZALFA_CART_AUDIT_TRAIL_3.dmp logfile=MUZALFA_CART_AUDIT_TRAIL_3.log parfile=MUZALFA_CART_AUDIT_TRAIL.par
Starting "SYS"."SYS_EXPORT_TABLE_02":  userid="/******** AS SYSDBA" query=MUZALFA.CART_AUDIT_TRAIL:"where mod(dbms_rowid.rowid_block_number(rowid), 5) = 1" dumpfile=MUZALFA_CART_AUDIT_TRAIL_1.dmp logfile=MUZALFA_CART_AUDIT_TRAIL_1.log parfile=MUZALFA_CART_AUDIT_TRAIL.par
Starting "SYS"."SYS_EXPORT_TABLE_05":  userid="/******** AS SYSDBA" query=MUZALFA.CART_AUDIT_TRAIL:"where mod(dbms_rowid.rowid_block_number(rowid), 5) = 4" dumpfile=MUZALFA_CART_AUDIT_TRAIL_4.dmp logfile=MUZALFA_CART_AUDIT_TRAIL_4.log parfile=MUZALFA_CART_AUDIT_TRAIL.par
Estimate in progress using BLOCKS method...
Estimate in progress using BLOCKS method...
Estimate in progress using BLOCKS method...
Estimate in progress using BLOCKS method...
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 80 MB
Total estimation using BLOCKS method: 80 MB
Total estimation using BLOCKS method: 80 MB
Total estimation using BLOCKS method: 80 MB
Total estimation using BLOCKS method: 80 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "MUZALFA"."CART_AUDIT_TRAIL"                11.43 MB   52124 rows
. . exported "MUZALFA"."CART_AUDIT_TRAIL"                11.46 MB   53073 rows
. . exported "MUZALFA"."CART_AUDIT_TRAIL"                11.39 MB   51760 rows
. . exported "MUZALFA"."CART_AUDIT_TRAIL"                11.45 MB   52563 rows
. . exported "MUZALFA"."CART_AUDIT_TRAIL"                11.42 MB   52797 rows
Master table "SYS"."SYS_EXPORT_TABLE_02" successfully loaded/unloaded
******************************************************************************
Master table "SYS"."SYS_EXPORT_TABLE_04" successfully loaded/unloaded
Dump file set for SYS.SYS_EXPORT_TABLE_02 is:
Master table "SYS"."SYS_EXPORT_TABLE_03" successfully loaded/unloaded
Master table "SYS"."SYS_EXPORT_TABLE_05" successfully loaded/unloaded
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
  /dump/muzalfa_dump/MUZALFA_CART_AUDIT_TRAIL_1.dmp
******************************************************************************
******************************************************************************
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_04 is:
Dump file set for SYS.SYS_EXPORT_TABLE_03 is:
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
Dump file set for SYS.SYS_EXPORT_TABLE_05 is:
  /dump/muzalfa_dump/MUZALFA_CART_AUDIT_TRAIL_3.dmp
  /dump/muzalfa_dump/MUZALFA_CART_AUDIT_TRAIL_2.dmp
  /dump/muzalfa_dump/MUZALFA_CART_AUDIT_TRAIL_0.dmp
  /dump/muzalfa_dump/MUZALFA_CART_AUDIT_TRAIL_4.dmp
Job "SYS"."SYS_EXPORT_TABLE_05" successfully completed at Thu Jun 22 15:15:10 2017 elapsed 0 00:03:41

Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Thu Jun 22 15:15:10 2017 elapsed 0 00:03:54

Job "SYS"."SYS_EXPORT_TABLE_03" successfully completed at Thu Jun 22 15:15:10 2017 elapsed 0 00:03:48

Job "SYS"."SYS_EXPORT_TABLE_04" successfully completed at Thu Jun 22 15:15:10 2017 elapsed 0 00:03:43

Job "SYS"."SYS_EXPORT_TABLE_02" successfully completed at Thu Jun 22 15:15:10 2017 elapsed 0 00:03:50

#########################################################################################

# Start time : Thu Jun 22 15:11:11 IST 2017
# End time is: Thu Jun 22 15:16:12 IST 2017
#########################################################################################

If you check at the database level while the script is running, you will see the 5 export jobs it has started, similar to the output below:

SQL> select owner_name,job_name,job_mode,state from dba_datapump_jobs;

OWNER_NAME           JOB_NAME                                 JOB_MODE                       STATE
-------------------- ---------------------------------------- ------------------------------ ------------------------------
SYS                  SYS_EXPORT_TABLE_01                      TABLE                          EXECUTING
SYS                  SYS_EXPORT_TABLE_02                      TABLE                          EXECUTING
SYS                  SYS_EXPORT_TABLE_03                      TABLE                          EXECUTING
SYS                  SYS_EXPORT_TABLE_04                      TABLE                          EXECUTING
SYS                  SYS_EXPORT_TABLE_05                      TABLE                          EXECUTING
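
If you also want a rough progress figure for each job, one option (a sketch, not part of the original script) is to query V$SESSION_LONGOPS, which Data Pump updates while the jobs run:

-- Rough progress of the running export jobs (work values are typically in MB)
select opname, sofar, totalwork,
       round(sofar/totalwork*100, 1) as pct_done
from   v$session_longops
where  opname like 'SYS_EXPORT_TABLE%'
and    totalwork > 0
and    sofar <> totalwork;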

 

Exporting LOB segments normally takes a very long time, but the method above reduces it considerably. After the export you will usually need to import the data; for that, use the script below:

Note: With the conventional single-job method (i.e. without the script above) the export of LOB data is slow, but the import does not take nearly as long even when done conventionally.

Script:

#!/bin/bash
#############################################################################
# IMPORT IN PARALLEL WITH DATAPUMP
#############################################################################
# Author: Cyrille MODIANO
# Description: This script will import multiple dumpfiles in parallel
#
# Please change the directory names or destination as per your requirement.
#############################################################################
export ORAENV_ASK=NO
export ORACLE_SID=$1
. oraenv
TABLE_NAME=$2
PARTITION=$3

function usage(){
echo "USAGE:
Parameter 1 is the SID of the database where you want to import
Parameter 2 is the TABLE you want to import
Parameter 3 (optional) is the PARTITION name you want to import (if any)"
}

if [ $# -lt 2 ]; then
usage
exit 1
fi

if [ $# -eq 3 ]; then
PARFILE=${TABLE_NAME}_${PARTITION}.par
START_MESSAGE="Beginning import of partition : ${TABLE_NAME}:${PARTITION}"
END_MESSAGE="Finished import of partition: ${TABLE_NAME}:${PARTITION}"
SEARCH_PATTERN=${TABLE_NAME}_${PARTITION}
SUCCESS_MESSAGE="partition: ${TABLE_NAME}:${PARTITION} successfully imported, started at"
ERROR_MESSAGE="partition: ${TABLE_NAME}:${PARTITION} failed to import, check logfile for more info"
MAIL_OBJECT="Successfully imported partition ${TABLE_NAME}:${PARTITION}"
else
PARFILE=${TABLE_NAME}.par
START_MESSAGE="Beginning import of table : ${TABLE_NAME}"
END_MESSAGE="Finished import of table : ${TABLE_NAME}"
SEARCH_PATTERN=${TABLE_NAME}
SUCCESS_MESSAGE="Table ${TABLE_NAME} successfully imported, started at "
ERROR_MESSAGE="Table ${TABLE_NAME} failed to import, check logfile for more info"
MAIL_OBJECT="Successfully imported table ${TABLE_NAME}"
fi

#directories
BASEDIR=/dump/muzalfa_dump
DUMPDIR=$BASEDIR/DUMP
PARFILEDIR=$BASEDIR/parfiles

mkdir -p $PARFILEDIR

# building the parfile
echo "DIRECTORY=DATA_PUMP" > ${PARFILEDIR}/$PARFILE
echo "CLUSTER=N" >> ${PARFILEDIR}/$PARFILE
echo "TABLE_EXISTS_ACTION=APPEND" >> ${PARFILEDIR}/$PARFILE
echo "DATA_OPTIONS=DISABLE_APPEND_HINT" >> ${PARFILEDIR}/$PARFILE

echo "#########################################################################################"
echo $START_MESSAGE
echo "#########################################################################################"
echo " "

START_TIME=`date`

for dump in `ls ${DUMPDIR}/*${SEARCH_PATTERN}*.dmp`
do
# dumpfile/logfile must be plain file names; they are resolved against the
# DATA_PUMP directory object set in the parfile
DUMPFILE=`basename ${dump}`
LOGFILE=imp_`basename ${dump} .dmp`.log
impdp userid=\'/ as sysdba\' dumpfile=${DUMPFILE} logfile=${LOGFILE} parfile=${PARFILEDIR}/${PARFILE} &
sleep 3
done

wait `pidof impdp`

echo "#########################################################################################"
echo $END_MESSAGE
echo "# Start time : $START_TIME "
echo "# End time : `date`"
echo "#########################################################################################"

# Verifying errors
errors_count=`grep ORA- *${SEARCH_PATTERN}*.log | wc -l`

if [ $errors_count -eq 0 ]; then
echo "$SUCCESS_MESSAGE $START_TIME and finished at `date`" | mail -s $MAIL_OBJECT

you@your-email.com
else
echo $ERROR_MESSAGE | mail -s $MAIL_OBJECT you@your-email.com
fi

To run the import script, use the following:

[oracle@rac2 LOB_script]$ ./parallel_script_imp.sh testdb2 james.CART_AUDIT_TRAIL
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/12.1.0/dbhome_1

where testdb2 is the ORACLE_SID of the target database and
james.CART_AUDIT_TRAIL is the schema.table_name.

Now, if you log in to the database, you will see the jobs below executing in parallel; this is what we mean by loading the LOB segments through multiple import jobs at the same time:

SQL> select OWNER_NAME,JOB_NAME,OPERATION,JOB_MODE,STATE from dba_datapump_jobs;

OWNER_NAME           JOB_NAME                                 OPERATION            JOB_MODE  STATE
-------------------- ---------------------------------------- -------------------- --------- ------------
SYS                  SYS_IMPORT_FULL_01                       IMPORT               FULL      EXECUTING
SYS                  SYS_IMPORT_FULL_02                       IMPORT               FULL      EXECUTING
SYS                  SYS_IMPORT_FULL_03                       IMPORT               FULL      EXECUTING
SYS                  SYS_IMPORT_FULL_04                       IMPORT               FULL      EXECUTING
SYS                  SYS_IMPORT_FULL_05                       IMPORT               FULL      EXECUTING
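
Once all the import jobs have finished, a quick sanity check (again just a sketch, not part of the script) is to compare the row count of the target table with the sum of the per-job counts reported in the export logs above (52,124 + 53,073 + 51,760 + 52,563 + 52,797 = 262,317 rows); adjust the schema to the one you imported into:

-- Row-count check on the target database after the parallel import
select count(*) as imported_rows from MUZALFA.CART_AUDIT_TRAIL;
-- Expected: 262317 (sum of the five export jobs)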

All the best guys…
