The expdp and impdp tools appeared with Oracle 10g many years ago, and since then they have been the “supported” way to export and import data from Oracle databases (in Oracle's proprietary dump format).
In this post I'll try to explain the most misunderstood or little-known aspects of these tools.
An export is not a backup. Many people use exports as backups of their databases, and in some cases an export seems sufficient, but it is important to take the following into account:
- To fully recover a database from an export file you must create a new database first: the export file contains only the user data, not the Oracle internal metadata.
- A database or table recovered from an export cannot be “moved forward in time” by applying redo information; the data is exactly as it existed at the moment of the export.
- The import recreates objects, so the resulting indexes and tables differ from the original ones (the import rebuilds them “fully compacted”).
- Internal objects (those owned by SYS) and user objects created in the SYS schema are not exported. In some cases your expdp may leave things behind…
An export is, by default, “consistent” only at the object level. This implies that if the database is active at the moment of the export, the different objects may be inconsistent with each other.
For instance, it is not possible to “partially” export a row: each row of a table is exported or not depending on whether it exists (that is, whether it is committed) at the moment the export of its table begins. However, it is possible to insert rows into a master table after expdp has exported it, but before it exports a child table with related records, leaving “orphan rows” when we try to import the dump again.
This problem can be solved using the parameter “flashback_time” or “flashback_scn”; these parameters produce a fully consistent image of the database as of the specified time or SCN.
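As a minimal sketch, a consistent schema export could use a parameter file like the one below (the schema and directory names are illustrative; `flashback_time=systimestamp`, meaning “consistent as of the moment the job starts”, is accepted from 11.2 onwards — older releases need an explicit `TO_TIMESTAMP` expression):

```
# expdp parameter file: schema export consistent as of job start time
schemas=MY_USER
directory=MY_DIR
dumpfile=myuser.dmp
flashback_time=systimestamp
```

Run it with `expdp system/password parfile=exp_consistent.par`; using a parameter file avoids the shell-quoting problems of passing timestamp expressions on the command line.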
Expdp and impdp tasks run on the server. The expdp and impdp executables are only command-line clients that launch a job in the database; the real export/import work is done by the database processes on the database server. This implies that all dump files generated or read must reside in locations accessible to the database server, and also that if we exit the expdp/impdp client after the job has started, the job keeps running (and we can monitor it or attach to it later).
In fact we can launch an expdp/impdp task using PL/SQL packages:
```sql
DECLARE
  l_dp_handle NUMBER;
BEGIN
  l_dp_handle := dbms_datapump.open(operation => 'EXPORT',
                                    job_mode  => 'SCHEMA',
                                    job_name  => 'EXPORT_TASK');
  dbms_datapump.add_file(handle    => l_dp_handle,
                         filename  => 'MYUSER.dmp',
                         directory => 'MY_DIR');
  dbms_datapump.metadata_filter(handle => l_dp_handle,
                                name   => 'SCHEMA_EXPR',
                                value  => '= ''MY_USER''');
  dbms_datapump.start_job(l_dp_handle);
  dbms_datapump.detach(l_dp_handle);
END;
/
```
```sql
SET SERVEROUTPUT ON
DECLARE
  l_dp_handle NUMBER;
BEGIN
  l_dp_handle := dbms_datapump.open(operation => 'IMPORT',
                                    job_mode  => 'SCHEMA',
                                    job_name  => 'MYUSER_IMPORT');
  dbms_datapump.add_file(handle    => l_dp_handle,
                         filename  => 'EXPIMP%U.DMP',
                         directory => 'EXPIMP',
                         filetype  => 1);   -- 1 = dump file
  dbms_datapump.add_file(handle    => l_dp_handle,
                         filename  => 'EXPIMP.LOG',
                         directory => 'EXPIMP_LOG',
                         filetype  => 3);   -- 3 = log file
  dbms_datapump.start_job(l_dp_handle);
  dbms_datapump.detach(l_dp_handle);
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line('Error:' || sqlerrm || ' on Job-ID:' || l_dp_handle);
END;
/
```
Compressing export data requires the Advanced Compression license. Expdp data compression uses an Enterprise Edition extra-cost option, and the database “records” the use of expdp compression for license auditing purposes, so make sure you only use it if you have the necessary licenses.
With the old “exp” command in Unix environments it was possible to send the command's output to a pipe and compress it “on the fly”; with expdp this is not possible, because the data is not written to the dump file sequentially. If you do not have Advanced Compression, the only option is to compress the files with OS tools once the export has finished.
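A sketch of that post-export compression, using standard OS tools (the file and schema names are illustrative):

```
# run the export first, then compress the finished dump file
expdp system/password schemas=MY_USER directory=MY_DIR dumpfile=myuser.dmp
gzip /path/to/MY_DIR/myuser.dmp
```

Remember to decompress the file again before running impdp against it.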
Moving data between different database versions. If you plan to extract data from one database and load it into another of a lower version, you must create an export file compatible with the lower version. This is done with the “version” parameter of expdp. For example, to extract data from a 19c database and load it into an 18c database, we must include version=18.0.0 on the export.
It is possible to export/import over the network (without creating a dump file on disk). An impdp operation can use a database link as its data source; in this case the data is exported and imported at the same time, and only the log file is written to the destination server's disk.
To use this feature, create a database link from the target database to the source database and reference it in the import command with the NETWORK_LINK parameter.
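A minimal sketch of such a network import; the link, schema, and directory names are illustrative, and the dblink `SRC_LINK` is assumed to already exist in the target database, pointing at the source:

```
# no dumpfile parameter: data flows directly over the dblink,
# only the log file is written locally
impdp system/password network_link=SRC_LINK schemas=MY_USER \
  directory=MY_DIR logfile=netimp.log
```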
Some limitations apply to imports over a database link (for example, tables with LONG columns are not supported), and these limitations may change between database versions, so check the NETWORK_LINK documentation for your release.
Parallel export/import. With the Enterprise Edition of the database it is possible to use the “parallel” parameter so that more than one worker process unloads or loads the data. With this parameter the time needed for the task can change dramatically.
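A sketch of a parallel export (names illustrative); note that the dumpfile specification should include the %U substitution variable so that each worker process can write to its own file:

```
# four workers, each writing its own myuser_NN.dmp file
expdp system/password schemas=MY_USER directory=MY_DIR \
  dumpfile=myuser_%U.dmp parallel=4
```

The matching import can use the same `parallel` and `dumpfile=myuser_%U.dmp` parameters to load the files concurrently.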
Filters and transformations. With expdp it is possible to export a subset of schemas, of a schema's objects, or of a table's rows, and with impdp it is possible to rename schemas, relocate objects to other tablespaces, or change storage attributes. Expdp and impdp have improved a lot in this area compared to the old exp and imp tools.
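Two illustrative parameter files, one per direction (table, schema, tablespace names and the WHERE predicate are examples, not from the post; a parameter file sidesteps the shell quoting needed for QUERY on the command line):

```
# expdp parameter file: export only this year's rows of one table
tables=MY_USER.ORDERS
directory=MY_DIR
dumpfile=orders.dmp
query=MY_USER.ORDERS:"WHERE created >= DATE '2025-01-01'"
```

```
# impdp parameter file: rename the schema and relocate segments
directory=MY_DIR
dumpfile=orders.dmp
remap_schema=MY_USER:OTHER_USER
remap_tablespace=USERS:DATA2
```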
Reusing dump files. Many export scripts contain some kind of OS command to remove old dump files before creating new ones. This is a legacy of the first releases of expdp (10.2 and 11.1, basically): if the expdp job found a dump file from a previous run, it was unable to overwrite it and failed. From release 11.2 onwards, the parameter REUSE_DUMPFILES=Y avoids this.
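For example (schema and file names illustrative), this lets a nightly script overwrite yesterday's file without a prior delete step:

```
expdp system/password schemas=MY_USER directory=MY_DIR \
  dumpfile=myuser.dmp reuse_dumpfiles=y
```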