Migrating Many Databases: Why and How

Discover the challenges of migrating over 800 databases from older versions to newer ones and different environments. Learn about adjusting tablespaces, security parameters, and more. Explore the use of Data Pump, Oracle GoldenGate, and potential blockers like Flashback Table Archive and Edition Based Redefinition.

  • Migration
  • Databases
  • Data Pump
  • Oracle GoldenGate
  • Database Management


Presentation Transcript


  1. DKOUG: Migrating Many Databases to Newer Versions and a Different Environment. The 6th of December 2021. By Martin Jensen. Confidential

  2. Why? We needed to move a large number (about 800) of databases from versions 11.2 and 12.1 (mostly on VMware) to 12.2 / 19c multitenant; and on the fly we needed to:
     - Adjust tablespaces to BIGFILE and cover automatic tablespace extension
     - Adjust security parameters
     - Archive and clean up audit records
     - Move some databases from big endian to little endian
     Confidential

  3. How! As many of the databases had a long history from older versions, we wanted a clean, fresh start, so we decided on a logical transfer based on Data Pump, automated as much as possible. The same method can be used for databases migrating from big endian to little endian. For databases where the maintenance window needs to be very small, Oracle GoldenGate is suggested. Confidential

  4. Data Pump over the Network. If there are no blockers, we can do a Data Pump import over the network from 12.2 (or 19c), avoiding some of the shortcomings of previous versions, and with a reasonable parallel count.
     FULL=YES
     NETWORK_LINK=<source-db>_impdp_link
     DIRECTORY=DIR_MIG_DUMP
     LOGFILE=<source-db>_<date>_impdp.log
     EXCLUDE=TABLESPACE:"IN ( )"
     EXCLUDE=SCHEMA:"IN ( )"
     EXCLUDE=STATISTICS                        -- better to generate new statistics on the target
     EXCLUDE=TABLE:"IN ('AUD$','FGA_LOG$')"    -- for non-tenant only, otherwise don't use
     DATA_OPTIONS=ENABLE_NETWORK_COMPRESSION   -- a compression license may be needed
     EXCLUDE=PASSWORD_HISTORY                  -- see Doc ID 1551778.1
     SKIP_UNUSABLE_INDEXES=YES                 -- continue even if an index is unusable
     TRANSFORM=LOB_STORAGE:SECUREFILE          -- always use SECUREFILE LOBs
     METRICS=Y                                 -- record the number of objects and the elapsed time
     ESTIMATE=STATISTICS                       -- seems to be the fastest for our databases
     LOGTIME=ALL
     CLUSTER=N                                 -- most of our source databases were non-RAC
     PARALLEL=8                                -- 8 seems to be a fine balance for us on network and database load
     Confidential
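
     A minimal setup sketch for the target side. The directory path, the link name srcdb01_impdp_link, the connecting account and the TNS alias SOURCEDB are illustrative assumptions; only DIR_MIG_DUMP and the <source-db>_impdp_link naming convention come from the parameter file above.

     -- On the target 12.2/19c database (or PDB): directory for log files
     CREATE OR REPLACE DIRECTORY dir_mig_dump AS '/u01/app/oracle/mig_dump';

     -- Database link to the source database, following the <source-db>_impdp_link convention
     CREATE DATABASE LINK srcdb01_impdp_link
       CONNECT TO system IDENTIFIED BY "change_me"
       USING 'SOURCEDB';

     -- The import is then started with the parameter file, for example:
     --   impdp system@targetdb parfile=srcdb01_impdp.par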

  5. Possible Blocker: Flashback Table Archive. It is not supported to export/import FBDA tables in 11g (as they use ROWIDs); we would need to drop the table archive prior to the export (ALTER TABLE <table-name> NO FLASHBACK ARCHIVE). In 12c, however, special routines exist in the DBMS_FLASHBACK_ARCHIVE package to export/import all archived information. The information in the general NORDEA_AUDIT_HISTORY archive may be dropped. Confidential
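
     A sketch of how the affected tables can be located and detached from their archive on the 11g source before the export; the table name below is illustrative.

     -- Tables protected by a Flashback Data Archive
     SELECT owner_name, table_name, flashback_archive_name
       FROM dba_flashback_archive_tables;

     -- Detach one table from its archive prior to the 11g export
     ALTER TABLE app_owner.orders NO FLASHBACK ARCHIVE;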

  6. Possible Blocker: Edition Based Redefinition. Using the Edition Based Redefinition (EBR) feature to keep several editions of the same non-data objects (views, packages etc.) complicates the use of Data Pump in the copy phase. Basically, the different editions will have to be moved in separate phases, and GoldenGate will potentially only be able to concentrate on one edition. Confidential
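
     A short sketch for checking up front whether more than the default edition is in use; these are standard dictionary views, nothing migration-specific.

     -- Editions defined in the source database (ORA$BASE is always present)
     SELECT edition_name, parent_edition_name FROM dba_editions;

     -- Users that have editions enabled
     SELECT username FROM dba_users WHERE editions_enabled = 'Y';

     -- Versions of editionable objects across editions
     SELECT owner, object_name, object_type, edition_name
       FROM dba_objects_ae
      WHERE edition_name IS NOT NULL
      ORDER BY owner, object_name, edition_name;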

  7. Possible Blocker: Don't use utl_file_dir. If the database is using the deprecated UTL_FILE_DIR parameter, we need to ask the application to use directory objects instead, as the UTL_FILE_DIR parameter is not supported in pluggable databases. Confidential
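
     A sketch of the replacement, assuming an application schema APP_OWNER and the path /u01/app/interface (both illustrative).

     -- Check whether the deprecated parameter is still set
     SELECT value FROM v$parameter WHERE name = 'utl_file_dir';

     -- Replace it with a directory object and grant it to the application schema
     CREATE OR REPLACE DIRECTORY app_interface_dir AS '/u01/app/interface';
     GRANT READ, WRITE ON DIRECTORY app_interface_dir TO app_owner;

     -- UTL_FILE calls then reference the directory object name:
     --   UTL_FILE.FOPEN('APP_INTERFACE_DIR', 'out.txt', 'w');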

  8. Possible Blocker: Tables with ROWID columns. ROWID columns in user tables do not make sense for logical transports like Data Pump or GoldenGate, as a ROWID value is a physical reference. We can exclude CHAINED_ROWS tables as well as tables used for Advanced Queuing, domain indexes and ROWID-based materialized view logs. This complexity will need to be taken care of prior to and after the migration; if the tables having this complexity are not empty, they need to be migrated using a different method. Confidential
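
     A sketch of a query to locate candidate tables; the schema exclusion list is an assumption and would be tailored per database, and MV logs, AQ and domain-index tables are filtered out manually afterwards.

     -- User tables with ROWID columns
     SELECT c.owner, c.table_name, c.column_name
       FROM dba_tab_columns c
      WHERE c.data_type = 'ROWID'
        AND c.owner NOT IN ('SYS','SYSTEM','XDB','MDSYS','CTXSYS','ORDSYS','WMSYS')
      ORDER BY c.owner, c.table_name;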

  9. Possible Blocker: Tables with LONG columns. LONG columns in user tables (apart from the usual tables like optimizer plan tables) collide with restrictions in Data Pump. This complexity will need to be taken care of in the copy phase, especially if the tables having this complexity are not empty; these tables will have to be excluded from the Data Pump network import and copied directly. Even though Data Pump 12.2 is suggested to support LONG columns (https://dbaclass.com/article/datapump-new-features-in-oracle-12-2/), we still get error ORA-31679 and the content of tables holding LONG columns is skipped. You might be able to bypass this blocker by temporarily copying the data to a table using CLOB instead of LONG. Confidential
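
     A sketch of locating LONG columns and of the temporary LONG-to-CLOB copy; table and column names are illustrative.

     -- Find user tables with LONG / LONG RAW columns
     SELECT owner, table_name, column_name, data_type
       FROM dba_tab_columns
      WHERE data_type IN ('LONG','LONG RAW')
        AND owner NOT IN ('SYS','SYSTEM');

     -- Temporary copy with TO_LOB so the data can travel as a CLOB
     CREATE TABLE app_owner.notes_mig AS
       SELECT id, TO_LOB(note_text) AS note_text
         FROM app_owner.notes;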

  10. Possible Blocker: No user objects in SYS. If user objects (typically user-defined audit or trigger objects) are located in the SYS or SYSTEM schemas (or any other Oracle-maintained schema), then Data Pump will have difficulties including these objects in the copy phase. Please consider moving such objects to a different schema prior to the migration. Confidential
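
     A rough heuristic sketch for spotting such objects. On a 12c source the ORACLE_MAINTAINED column can be used; on 11.2 it does not exist, so comparing against the database creation date is one fallback (patching can still produce false positives).

     -- 12c: objects in SYS/SYSTEM that Oracle does not maintain itself
     SELECT owner, object_name, object_type
       FROM dba_objects
      WHERE owner IN ('SYS','SYSTEM')
        AND oracle_maintained = 'N';

     -- 11.2 fallback: objects created well after the database itself
     SELECT o.owner, o.object_name, o.object_type, o.created
       FROM dba_objects o, v$database d
      WHERE o.owner IN ('SYS','SYSTEM')
        AND o.created > d.created + 1;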

  11. Extra care: Materialized Views. Take good care of local materialized views; they must be taken seriously regardless of whether the dependent materialized view logs are in the same database or in a remote one. Any remote materialized view depending on base tables in the migrated database will need to be located and completely refreshed after the migration. Generally, all VALID materialized views must be completely refreshed; other materialized views with possible errors are moved to the target as well but not refreshed. Also observe that materialized views from the MFT account (Oracle Managed File Transfer) are left untouched. Please note that it is important for all materialized views depending on materialized view logs to refresh regularly, otherwise the materialized view logs may continue to grow. This is a special issue where several materialized views (potentially in different databases) get information from the same base table. So when the old database is decommissioned, it is important that any local materialized views depending on remote base tables are deleted or unregistered. Confidential
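
     A sketch of the inventory and the complete-refresh step; the owner and name below are illustrative.

     -- Materialized views in the migrated database and their state
     SELECT owner, mview_name, compile_state, staleness, last_refresh_date
       FROM dba_mviews
      ORDER BY owner, mview_name;

     -- Complete refresh of one materialized view after migration
     BEGIN
       DBMS_MVIEW.REFRESH('APP_OWNER.SALES_MV', method => 'C');
     END;
     /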

  12. Extra care: Materialized Views cont. Before the source database is decommissioned, or the new pluggable database is dropped for a new trial migration, any local materialized views with remote base tables need to be unregistered and have the relevant records cleared from the materialized view logs in the remote databases; otherwise the related materialized view logs may keep growing. This is also a potential cloning issue. Confidential
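
     A sketch of the clean-up at the remote master site before the old copy is dropped; owner, name and site are illustrative.

     -- Run at the remote master database that hosts the MV log:
     -- remove the obsolete registration of the materialized view
     BEGIN
       DBMS_MVIEW.UNREGISTER_MVIEW('APP_OWNER', 'SALES_MV', 'OLDDB.EXAMPLE.COM');
     END;
     /

     -- Purge the rows kept in the MV log on behalf of that materialized view
     BEGIN
       DBMS_MVIEW.PURGE_MVIEW_FROM_LOG('APP_OWNER', 'SALES_MV', 'OLDDB.EXAMPLE.COM');
     END;
     /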

  13. Migrated Database right-sizing
     - All tablespaces are pre-created as BIGFILE prior to the Data Pump import.
     - CPU_COUNT is set (after migration) to a peak value selected from DBA_HIST_OSSTAT.
     - SGA_TARGET is set to the SGA advisor value with a 10% penalty from the source, and then 30% is added to accommodate higher shared pool usage in 12.2 and 19c.
     - PGA_AGGREGATE_TARGET is set to the 'maximum PGA allocated' value from GV$PGASTAT.
     - PGA_AGGREGATE_LIMIT is set to 3 times PGA_AGGREGATE_TARGET.
     Confidential
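
     A sketch of the pre-creation and parameter settings; all sizes are purely illustrative, and the datafile-less syntax assumes Oracle Managed Files (DB_CREATE_FILE_DEST) is configured.

     -- Pre-create an application tablespace as BIGFILE with autoextension
     CREATE BIGFILE TABLESPACE app_data
       DATAFILE SIZE 10G AUTOEXTEND ON NEXT 1G MAXSIZE 4T;

     -- Memory and CPU settings derived from the source advisors / history views
     ALTER SYSTEM SET sga_target           = 12G SCOPE = SPFILE;
     ALTER SYSTEM SET pga_aggregate_target = 4G  SCOPE = SPFILE;
     ALTER SYSTEM SET pga_aggregate_limit  = 12G SCOPE = SPFILE;
     ALTER SYSTEM SET cpu_count            = 8   SCOPE = SPFILE;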

  14. Post-migration Tasks
     - Apply missing grants (to internal Oracle schemas)
     - Recompile all structures and compare with the status before the migration
     - Complete refresh of any related materialized views (local and remote)
     - Create a standby if needed
     - Create fresh statistics (incl. dictionary and fixed tables)
     - Ensure that backup is working
     - Prepare export of all audit information as XML-tagged files
     Confidential
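
     A sketch of the recompile and statistics steps; the degree values are illustrative.

     -- Recompile all invalid objects in parallel
     BEGIN
       UTL_RECOMP.RECOMP_PARALLEL(8);
     END;
     /

     -- Fresh optimizer statistics, including dictionary and fixed objects
     BEGIN
       DBMS_STATS.GATHER_DICTIONARY_STATS;
       DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
       DBMS_STATS.GATHER_DATABASE_STATS(degree => 8);
     END;
     /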

  15. Q & A. Thanks. Be careful out there. Confidential
