I am going to perform an upgrade on a Hadoop cluster and want to back up the Ambari metastore schema first in case anything goes wrong. Oracle is used to store the data, so I looked at using expdp to take a quick backup of the schema in its current state. However, several documents mention that this utility is used to "unload" data. Does that mean the data will be removed from the database during the dump process? I want to keep everything in place and just make a quick backup, similar to the Postgres command pg_dump.
Don't worry, your data will stay where it is. In Data Pump terminology, "unloading" simply means reading the data out of the database and writing it to a dump file; the export is non-destructive and nothing is removed from the source tables.

Here's a simple example: exporting Scott's DEPT table. You'll see that the data is in the table both before and after expdp runs.
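A session along these lines illustrates it (a sketch, assuming the classic SCOTT demo schema with its password and the default DATA_PUMP_DIR directory object; the exact log output on your system will differ):

```
-- Before the export: DEPT has its four demo rows
SQL> SELECT COUNT(*) FROM scott.dept;

  COUNT(*)
----------
         4

-- Run the export from the OS shell
$ expdp scott/tiger directory=DATA_PUMP_DIR tables=DEPT \
    dumpfile=dept.dmp logfile=dept.log

-- After the export completes, the rows are still there
SQL> SELECT COUNT(*) FROM scott.dept;

  COUNT(*)
----------
         4
```

The resulting dump file (dept.dmp here) can later be loaded back with impdp if the upgrade goes wrong, which is exactly the pg_dump-style workflow you're after.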