
Can Data Pump Export to ASM Storage? (Solved for Oracle 19c/21c)

When working with Oracle databases, especially on OCI Base DB Systems or Exadata, most high-performance storage resides in ASM (Automatic Storage Management). Recently, while preparing benchmark tests for a Data Pump session, a common question came up:

Can Oracle Data Pump export dumps directly into ASM storage?

Yes, Data Pump can write dump files to ASM.
But it cannot write the log file to ASM.

This limitation often surprises DBAs trying to fully leverage ASM for backup and export operations. Let’s walk through a working example and the right way to configure it.


Exporting to ASM Storage – Step-by-Step

1. Create an ASM Directory

Start in ASMCMD:

ASMCMD> cd DATA
ASMCMD> mkdir DATA_PUMP

This prepares a dedicated ASM directory for Data Pump dump files.
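
If you are unsure which disk group to use, a quick check from SQL*Plus lists the mounted disk groups and their free space (a minimal sketch; +DATA is simply the disk group used in this example):

SQL> SELECT name, state, total_mb, free_mb FROM v$asm_diskgroup;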


2. Create a Database Directory Object Pointing to ASM

SQL> CREATE DIRECTORY myasmdir AS '+DATA/DATA_PUMP';
SQL> GRANT READ, WRITE ON DIRECTORY myasmdir TO <username>;

This tells Oracle where Data Pump should write the dump files.
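
To confirm the directory object points at the intended ASM path, you can query DBA_DIRECTORIES (assuming you have access to the DBA views):

SQL> SELECT directory_name, directory_path FROM dba_directories WHERE directory_name = 'MYASMDIR';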


3. Start Data Pump Export

You may try:

expdp userid=... \
directory=myasmdir \
dumpfile=exp%U.dmp \
logfile=exp.log

But this will fail with:

ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation: nonexistent file or path [29434]

Why Does It Fail?

ASM is not a general-purpose filesystem, so it cannot hold the plain operating-system text file that Data Pump uses for logging. Dump files are written through Oracle's own file I/O layer, which understands ASM, but the log file must be placed on a traditional filesystem (EXT4, XFS, NFS, etc.).

This is a restriction of Data Pump's logging mechanism, not a configuration mistake.


4. Create a Directory for the Log File

Point it to a normal filesystem, like /tmp or /u01/app/oracle:

SQL> CREATE DIRECTORY mylogdir AS '/tmp';
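
If the export runs as a non-DBA user, grant access on this directory as well, mirroring the grant from step 2:

SQL> GRANT READ, WRITE ON DIRECTORY mylogdir TO <username>;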

5. Run Data Pump Export Again (Correctly)

expdp userid=... \
directory=myasmdir \
dumpfile=exp%U.dmp \
logfile=mylogdir:exp.log

Now your Data Pump export writes:

  • Dump files → ASM

  • Log file → OS filesystem

Perfectly valid and fully supported.


Additional Tips

1. If You Don’t Need a Log File

If your environment (e.g., OCI Base DB System) restricts OS directories:

expdp userid=... directory=myasmdir dumpfile=exp%U.dmp nologfile=YES

Useful for scripted executions or when you only monitor job progress from DBA views.
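
For example, a running export can be tracked from SQL*Plus even without a log file; a minimal check against the standard Data Pump view looks like this:

SQL> SELECT owner_name, job_name, operation, state FROM dba_datapump_jobs;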


2. Always Apply the Data Pump Bundle Patch (19c)

Oracle’s 19c Data Pump Bundle Patches include 150+ bug fixes related to:

  • Directory handling

  • Performance

  • Parallel execution

  • Transportable tablespaces

  • ASM handling

For 19c users (especially 19.20+), installing the Data Pump bundle patch is strongly recommended for stability and performance.
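
To see which patches are already applied to a database, a quick look at the SQL patch registry (a sanity check only, not a replacement for opatch lspatches) can help:

SQL> SELECT patch_id, description, status FROM dba_registry_sqlpatch ORDER BY action_time;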


Final Thoughts

Yes, exporting Data Pump dump files directly into ASM is absolutely possible, and it works smoothly once you separate:

  • Dump files → ASM

  • Log files → Regular filesystem

This gives you the advantage of fast ASM performance while maintaining Data Pump compatibility. It’s a best practice for OCI, Exadata, and RAC environments.

Want to see how we teach?
Head over to our YouTube channel for insights, tutorials, and tech breakdowns:
👉 www.youtube.com/@learnomate

To know more about our courses, offerings, and team:
Visit our official website:
👉 www.learnomate.org

Interested in mastering Oracle Database Administration?
Check out our comprehensive Oracle DBA Training program here:
👉 https://learnomate.org/oracle-dba-training/

Want to explore more tech topics?
Check out our detailed blog posts here:
👉 https://learnomate.org/blogs/

And hey, I’d love to stay connected with you personally!
Let’s connect on LinkedIn: Ankush Thavali 😎

Happy Vibes!

ANKUSH😎