Export/Import Interview Questions


1. What is the NETWORK_LINK parameter and how does it work?

The NETWORK_LINK parameter
The NETWORK_LINK parameter is used in Oracle's Data Pump utilities (expdp and impdp) to export and import data between databases over a network. It allows you to transfer data directly between databases without the need for intermediate dump files.
How it works
The NETWORK_LINK parameter leverages database links to perform the data transfer. A database link is a connection from one database to another, enabling SQL statements to access data on the remote database as if it were part of the local database.
Setting up a network link
Before you can use the NETWORK_LINK parameter in Data Pump, you need to create a database link in the source or target database:

CREATE DATABASE LINK remote_db_link CONNECT TO remote_user IDENTIFIED BY password USING 'remote_db_service';
Export using NETWORK_LINK (expdp)

When you perform an export operation using Data Pump with the Network Link parameter, Data Pump extracts the data directly from the remote database using the database link.

expdp local_user/password@local_db schemas=remote_schema network_link=remote_db_link directory=exp_dir dumpfile=expdp.dmp logfile=expdp.log

Let’s say you have two databases, called local and remote. In order to use the NETWORK_LINK parameter and pass data directly over the network, follow these steps:
Create a database link to the remote database, which is named remote in this example:

SQL> CREATE DATABASE LINK remote
CONNECT TO PDBUSER IDENTIFIED BY oracle
USING 'remote.data';

If there isn’t one already, create a Data Pump directory object:

SQL> CREATE DIRECTORY remote_dir1 AS '/data/app/oracle/dp_dir';

Set the new directory as your default directory by exporting the DATA_PUMP_DIR environment variable:

$ export DATA_PUMP_DIR=remote_dir1

Perform the network export from the database named remote:

$ expdp system/U1 SCHEMAS=PDBUSER DUMPFILE=network.dmp NETWORK_LINK=remote
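
A network import works the same way and is where NETWORK_LINK shines: no dump file is created at all, because impdp pulls the rows straight across the database link. A hedged sketch, reusing the link and directory created above (the names are placeholders):

$ impdp system/U1 SCHEMAS=PDBUSER NETWORK_LINK=remote DIRECTORY=remote_dir1 LOGFILE=imp_network.log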

2. What is the COMPRESSION parameter in expdp?

Oracle Data Pump (EXPDP) allows for compression of dump files to save disk space and improve performance during data export and import operations. Here’s a breakdown of how you can use compression in EXPDP Data Pump jobs in Oracle:

expdp username/password DIRECTORY=directory_name DUMPFILE=dumpfile_name.dmp COMPRESSION=ALL LOGFILE=logfile_name.log
Values of the COMPRESSION parameter in expdp

1. Metadata-Only Compression:
Compresses only the metadata, not the table data. This is the default if the COMPRESSION parameter is not specified.

expdp username/password DIRECTORY=exp_dir DUMPFILE=mydumpfile.dmp COMPRESSION=METADATA_ONLY LOGFILE=mylogfile.log

2. Data-Only Compression:
Compresses only the table data.

expdp username/password DIRECTORY=exp_dir DUMPFILE=mydumpfile.dmp COMPRESSION=DATA_ONLY LOGFILE=mylogfile.log

3. All Compression:
Compresses both metadata and table data.

expdp username/password DIRECTORY=exp_dir DUMPFILE=mydumpfile.dmp COMPRESSION=ALL LOGFILE=mylogfile.log

4. None:
No compression is applied.

expdp username/password DIRECTORY=exp_dir DUMPFILE=mydumpfile.dmp COMPRESSION=NONE LOGFILE=mylogfile.log

3. How to improve imp performance?

1. Use Parallelism:

The PARALLEL parameter allows you to specify multiple active worker processes, which can significantly speed up the import process.

2. Disable Indexes and Constraints
Disabling indexes and constraints during the import can speed up the process. Rebuild them after the import is complete.

3. Use Direct Path

If feasible for your data, the direct path can be faster than conventional path loads.

4. Adjust Buffer Size

Setting a larger BUFFER size can improve conventional-path import performance by allowing larger array inserts.

5. Use REMAP_SCHEMA and REMAP_TABLESPACE

If remapping schemas or tablespaces, use these parameters to streamline the process.

6. Disable Archive Logging

Disable archive logging during the import to reduce I/O overhead.

7. Use NETWORK_LINK

For importing data from a remote database, use NETWORK_LINK to avoid the creation of dump files.

8. Monitor and Tune System Resources

Ensure that the system has sufficient CPU, memory, and I/O bandwidth. Monitor these resources during the import process and adjust as necessary.

9. Disable Triggers

Disabling triggers during the import can speed up the process. Re-enable them afterward.

10. Use TABLE_EXISTS_ACTION

Specify how to handle existing tables with TABLE_EXISTS_ACTION.

11. Use STATISTICS=NONE

Skipping statistics import can save time. You can gather statistics later.
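
Putting several of these together for a Data Pump import (a hedged sketch; the directory, dump file set, and PARALLEL value are placeholders, and the right degree of parallelism depends on your CPUs):

impdp system/password DIRECTORY=exp_dir DUMPFILE=expdp%U.dmp PARALLEL=4 TABLE_EXISTS_ACTION=TRUNCATE EXCLUDE=STATISTICS LOGFILE=impdp.log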

4. What are the export filtering parameters in expdp?

Export filtering parameters in expdp (Oracle Data Pump Export) allow you to selectively include or exclude specific objects or data during the export process. These parameters help you fine-tune what is exported, making it more efficient and tailored to your requirements.

Common Export Filtering Parameters
1. INCLUDE

The INCLUDE parameter specifies the types of objects to include in the export. You can specify object types and use filters to narrow down the inclusion criteria.

expdp username/password DIRECTORY=exp_dir DUMPFILE=mydumpfile.dmp INCLUDE=object_type[:name_clause] LOGFILE=mylogfile.log

2. EXCLUDE

The EXCLUDE parameter specifies the types of objects to exclude from the export.

expdp username/password DIRECTORY=exp_dir DUMPFILE=mydumpfile.dmp EXCLUDE=object_type[:name_clause] LOGFILE=mylogfile.log

3. QUERY

The QUERY parameter allows you to filter the rows exported from a table based on a SQL query condition.

expdp username/password DIRECTORY=exp_dir DUMPFILE=mydumpfile.dmp QUERY=table_name:"where_clause" LOGFILE=mylogfile.log

4. CONTENT

The CONTENT parameter allows you to specify whether to export only metadata, only data, or both.

expdp username/password DIRECTORY=exp_dir DUMPFILE=mydumpfile.dmp CONTENT=content_type LOGFILE=mylogfile.log

5. SAMPLE

The SAMPLE parameter allows you to specify a percentage of rows to be sampled from the tables for export.

expdp username/password DIRECTORY=exp_dir DUMPFILE=mydumpfile.dmp SAMPLE=percentage LOGFILE=mylogfile.log
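
Because name clauses and WHERE clauses contain quotes that most operating system shells mangle, these filters are usually placed in a parameter file. A hypothetical parameter file, hr_filtered.par, combining INCLUDE and QUERY (the table, column, and file names are placeholders):

DIRECTORY=exp_dir
DUMPFILE=hr_filtered.dmp
LOGFILE=hr_filtered.log
INCLUDE=TABLE:"IN ('EMPLOYEES','DEPARTMENTS')"
QUERY=EMPLOYEES:"WHERE department_id = 10"

It is then passed to expdp with:

expdp hr/password PARFILE=hr_filtered.par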

5. What are the Data Pump export modes?

Oracle Data Pump Export (expdp) provides several modes to export data from an Oracle database. These modes determine the scope and granularity of the data to be exported. Here are the main Data Pump export modes:

1. Full Mode

Exports the entire database. This includes all database objects and data.

2. Schema Mode

Exports all objects within one or more specified schemas.

3. Table Mode

Exports specific tables and their dependent objects.

4. Tablespace Mode

Exports all objects contained in the specified tablespaces.

5. Transportable Tablespace Mode

Exports metadata for tablespaces specified. This is used in conjunction with data files to perform a transportable tablespace export.
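
One illustrative command per mode (hedged sketches; the schema, table, and tablespace names are placeholders, and a transportable tablespace export additionally requires the tablespaces to be read-only):

expdp system/password FULL=Y DIRECTORY=exp_dir DUMPFILE=full.dmp
expdp system/password SCHEMAS=hr DIRECTORY=exp_dir DUMPFILE=hr.dmp
expdp system/password TABLES=hr.employees,hr.departments DIRECTORY=exp_dir DUMPFILE=tabs.dmp
expdp system/password TABLESPACES=users DIRECTORY=exp_dir DUMPFILE=ts_users.dmp
expdp system/password TRANSPORT_TABLESPACES=users DIRECTORY=exp_dir DUMPFILE=tts_users.dmp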

6. How to improve exp performance?

1. Use Parallelism

The PARALLEL parameter allows you to specify multiple active worker processes, which can significantly speed up the export process.

2. Use Direct Path

If feasible for your data, using the direct path can be faster than the conventional path.

3. Adjust File Sizes

Setting an appropriate file size for the dump files can help manage disk I/O efficiently.

4. Disable Logging

Disable archive logging during the export to reduce I/O overhead.

5. Exclude Statistics

Excluding statistics during the export process can save time. You can gather statistics later after the import.

6. Use Compression

Using compression can reduce the size of the dump files, which can speed up the export process.

7. Use INCLUDE/EXCLUDE Parameters

Filtering objects using the INCLUDE or EXCLUDE parameters can help streamline the export process.

8. Monitor and Tune System Resources

Ensure that the system has sufficient CPU, memory, and I/O bandwidth. Monitor these resources during the export process and adjust as necessary.

9. Use NETWORK_LINK

If exporting data from a remote database, use NETWORK_LINK to avoid the creation of dump files on the source database.

10. Optimize Table Data

For large tables, ensure that they are optimized and not fragmented. Rebuilding tables and indexes before export can sometimes improve performance.

11. Increase SGA and PGA

Increasing the System Global Area (SGA) and Program Global Area (PGA) can improve performance by reducing disk I/O.
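
A combined Data Pump export along these lines (a sketch; the names and the PARALLEL/FILESIZE values are only placeholders):

expdp system/password DIRECTORY=exp_dir DUMPFILE=expdp%U.dmp PARALLEL=4 FILESIZE=5G COMPRESSION=ALL EXCLUDE=STATISTICS LOGFILE=expdp.log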

7. What are the common IMP/EXP problems?

ORA-00001:
Unique constraint … violated – You are probably importing duplicate rows. Use IGNORE=N so that tables that already exist are skipped (imp reports an error instead of re-creating the object), or drop/truncate the table and re-import it if a table refresh is needed.

IMP-00015:
Statement failed … object already exists… – Use the IGNORE=Y import parameter to ignore these errors, but be careful as you might end up with duplicate rows.

ORA-01555:
Snapshot too old – Ask your users to stop working while you are exporting, or use the parameter CONSISTENT=N (however, this can cause referential inconsistencies, because the tables are then not exported from a single point in time).

ORA-01562:
Failed to extend rollback segment – Create bigger rollback/undo segments, or set COMMIT=Y (with an appropriate BUFFER value) while importing.
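
For the last case, a hypothetical legacy imp invocation with periodic commits and a larger array buffer (the file name and buffer size are placeholders):

imp system/password FILE=big.dmp FULL=Y COMMIT=Y BUFFER=10485760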

8. What are the benefits of Data Pump technology?

1. Ease of Integration
  • Integration with Oracle Enterprise Manager: Data Pump jobs can be managed and monitored using Oracle Enterprise Manager.
  • Support for PL/SQL APIs: Data Pump functionality can be accessed programmatically using PL/SQL APIs, allowing for automation and integration with custom applications (see the short DBMS_DATAPUMP sketch after this list).
2. Robustness and Reliability
  • Restartable Jobs: If a Data Pump job fails, it can be restarted from the point of failure, which is particularly useful for large data transfers.
  • Detailed Logging: Data Pump provides comprehensive log files that help in monitoring the job status and troubleshooting issues.
3. Security
  • Encryption: Data Pump supports encrypting dump files to ensure data security during transfers.
  • Fine-Grained Access Control: Data Pump operations can be controlled through Oracle Database security mechanisms, allowing fine-grained access control.
4. Advanced Features
  • Flashback Technology: Data Pump can export data consistent to a point in time using Oracle’s flashback technology, ensuring data consistency.
  • Metadata-Only Exports: You can export only the metadata of database objects, which is useful for cloning schemas or tables without the data.
  • Transportable Tablespaces: The TRANSPORTABLE_TABLESPACES feature allows for faster data transfer by exporting tablespace metadata and then using the actual data files.
5. Flexibility and Manageability
  • Remapping: Options like REMAP_SCHEMA, REMAP_TABLESPACE, and REMAP_DATAFILE allow you to transform data during the import process.
  • Transformation: The TRANSFORM parameter can be used to modify the structure of the data during the export/import process.
  • Compression: The COMPRESSION option helps reduce the size of dump files, saving storage space and potentially speeding up transfers.
6. Enhanced Usability
  • Network Mode: The NETWORK_LINK parameter allows direct data transfers between databases over the network without intermediate dump files.
  • Job Monitoring and Control: Data Pump jobs can be monitored, paused, and resumed using the interactive command-line interface.
  • Reusability: Jobs can be defined in parameter files for reuse and better management.
7. Advanced Filtering and Selection
  • Fine-Grained Filtering: Use INCLUDE, EXCLUDE, and QUERY parameters to selectively export/import specific objects or data subsets.
  • Object-Level Granularity: You can choose to export/import specific tables, schemas, tablespaces, or the entire database.
8. Improved Performance
  • Parallel Processing: Data Pump can use multiple processes working in parallel to speed up data export and import operations.
  • Direct Path: Data Pump uses direct path loads and unloads where possible, bypassing some of the database's SQL processing layers for faster data transfers.
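
As a small illustration of the PL/SQL API mentioned above, here is a hedged sketch of a schema-mode export driven entirely through DBMS_DATAPUMP (the job name, schema, file names, and directory object are placeholders and assume the directory already exists):

DECLARE
  h         NUMBER;
  job_state VARCHAR2(30);
BEGIN
  -- open a schema-mode export job
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'HR_EXP_API');
  -- dump file and log file in an existing directory object
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'hr_api.dmp', directory => 'EXP_DIR');
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'hr_api.log', directory => 'EXP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- restrict the job to the HR schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''HR'')');
  -- run the job and wait for it to finish
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
END;
/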

9. What is the use of the COMPRESSION option in expdp?

1. ALL

COMPRESSION=ALL
  • When you want the smallest possible dump file; both the metadata and the table data are compressed.

2. DATA_ONLY

COMPRESSION=DATA_ONLY
  • When you want to reduce the size of the data portion of the dump file while keeping the metadata easily readable.

3. METADATA_ONLY

COMPRESSION=METADATA_ONLY
  • When the metadata is extensive and you want to save space, but the data itself does not need compression.
  • Useful when the majority of the dump file size is due to metadata.

4. NONE

COMPRESSION=NONE
  • When you have ample storage space and do not need to reduce the size of the dump file.
  • When you want to ensure the fastest possible export process without the overhead of compression.
Benefits of Using the COMPRESSION Option

Storage Savings:
Compressing dump files can save a significant amount of storage space, making it easier to manage backups and archives.
Reduced Network Load:
Smaller dump files are quicker to transfer over the network, which can be beneficial in distributed environments or when moving data to a remote location.
Efficiency:
Depending on the type of data and the chosen compression method, the export process can be more efficient, especially in environments where storage I/O is a bottleneck.
Flexibility:
The different compression options (ALL, DATA_ONLY, METADATA_ONLY, NONE) provide flexibility to meet various requirements and constraints.

10. What is the use of the DIRECT=Y option in exp?

In a conventional-path export, data follows the same path as a SELECT statement: blocks are read from disk into the buffer cache, rows pass through the SQL command-processing (evaluation) layer, and are then written to the dump file. With DIRECT=Y, rows are transferred directly to the export client, bypassing the SQL evaluation layer, which usually makes the export noticeably faster.
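
A hypothetical direct-path export of a single table (the user, table, and file names are placeholders; note that DIRECT=Y cannot be combined with the QUERY parameter):

exp scott/tiger FILE=emp_direct.dmp TABLES=emp DIRECT=Y RECORDLENGTH=65535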

11. What is the use of the CONSISTENT option in exp?

When you export a table, you are guaranteed that its contents are consistent with the time the export of that table started. If you start exporting the table at 12:00, someone changes its data at 12:05, and the export of that table finishes at 12:10, the dump will not contain any of the changes made between 12:00 and 12:10; this per-table consistency cannot be switched off. CONSISTENT=Y goes one step further: it makes the entire export consistent to the single point in time at which the export began, so related tables exported minutes apart still agree with each other (at the cost of extra undo/rollback usage and an increased risk of ORA-01555).
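
A sketch of a consistent full export with legacy exp (the user and file name are placeholders):

exp system/password FILE=full_consistent.dmp FULL=Y CONSISTENT=Y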

12. How to import only metadata?

CONTENT=METADATA_ONLY
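
For example (a sketch with placeholder names):

impdp system/password DIRECTORY=exp_dir DUMPFILE=full.dmp CONTENT=METADATA_ONLY LOGFILE=meta_only.log

Alternatively, SQLFILE=ddl.sql writes the DDL contained in the dump file to a script without creating any objects.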

13. What is the order of importing objects in impdp?

  • Tablespaces
  • Users
  • Roles
  • Database links
  • Sequences
  • Directories
  • Synonyms
  • Types
  • Tables/Partitions
  • Views
  • Comments
  • Packages/Procedures/Functions
  • Materialized views

14. How to import into a different user/tablespace/datafile/table?

REMAP_SCHEMA
REMAP_TABLESPACE
REMAP_DATAFILE
REMAP_TABLE
REMAP_DATA
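
A hypothetical remapping import (the schema and tablespace names are placeholders):

impdp system/password DIRECTORY=exp_dir DUMPFILE=hr.dmp REMAP_SCHEMA=hr:hr_test REMAP_TABLESPACE=users:users_test LOGFILE=remap.log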

15. In Data Pump, where is the job information stored (or) if you restart a Data Pump job, how does it know where to resume from?

Whenever a Data Pump export or import runs, Oracle creates a master table named after the JOB_NAME in the schema of the user running the job, and drops it once the job completes successfully. From this table, Oracle determines how much of the job has been completed and where to continue from.
The default export job name is SYS_EXPORT_XXXX_01, where XXXX can be FULL, SCHEMA, or TABLE.
The default import job name is SYS_IMPORT_XXXX_01, where XXXX can be FULL, SCHEMA, or TABLE.
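
You can list running or stopped jobs from DBA_DATAPUMP_JOBS and re-attach to one to monitor or resume it (the job name below is just the default pattern):

SQL> SELECT owner_name, job_name, state FROM dba_datapump_jobs;

$ expdp system/password ATTACH=SYS_EXPORT_SCHEMA_01

At the interactive Export> prompt, START_JOB restarts a stopped job from where it left off.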

16. How to improve impdp performance?

Use the PARALLEL option, which increases the number of worker processes. It should be set based on the number of CPUs.
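
For example (a sketch; the %U wildcard lets the workers read a multi-file dump set, and 4 is only a placeholder value):

impdp system/password DIRECTORY=exp_dir DUMPFILE=expdp%U.dmp PARALLEL=4 LOGFILE=impdp.log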

17. How to improve expdp performance?

Use the PARALLEL option, which increases the number of worker processes. It should be set based on the number of CPUs.
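
For example (a sketch; %U lets each worker write to its own dump file):

expdp system/password DIRECTORY=exp_dir DUMPFILE=expdp%U.dmp PARALLEL=4 LOGFILE=expdp.log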

18. What is the use of the INDEXFILE option in imp?

Writes the index-creation statements from the dump file to the specified file (related CREATE TABLE statements are included as comments) instead of importing the objects. The Data Pump equivalent is the impdp SQLFILE parameter, which writes all DDL in the dump file to a script.
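
For example (placeholder names):

imp system/password FILE=full.dmp FULL=Y INDEXFILE=create_indexes.sql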

19. Why is expdp faster than exp (or) why is Data Pump faster than conventional export/import?

  • Data Pump is block mode, exp is byte mode.
  • Data Pump will do parallel execution.
  • Data Pump uses direct path API.

20. What are the differences between expdp and exp (Data Pump vs. normal exp/imp)?

  • Data Pump is server centric (files will be at server).
  • Data Pump has APIs, from procedures we can run Data Pump jobs.
  • In Data Pump, we can stop and restart the jobs.
  • Data Pump will do parallel execution.
  • Tapes & pipes are not supported in Data Pump.
  • Data Pump consumes more undo tablespace.
  • Data Pump import will create the user if it doesn't exist.

21. What is the use of the IGNORE option in imp?

IGNORE=Y ignores object-creation errors (such as "object already exists") during import and continues importing the data into the existing objects.
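
For example (placeholder names):

imp scott/tiger FILE=emp.dmp TABLES=emp IGNORE=Y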