Reducing Downtime on Upgrades or Migrations to Oracle Database server

April C Sims, Southern Utah University


Introduction

This session/paper outlines some of the lesser-known options for upgrading Oracle Server or migrating an existing database to new hardware. Most of the features covered in this session are specific to a particular version or edition of Oracle, but they don't necessarily require an additional license. Oracle's specific recommendations on reducing downtime for each type of migration path are covered, along with an alternate method called the Step-Ordered Approach, which takes the standard method and splits it into smaller, staggered steps for migrating the separate components of Oracle Database. The binary upgrades (several different components) and the database upgrade are different events, which most often should be executed at different times.

While most organizations implement safeguards for unplanned downtime, it is actually planned downtime that consumes most of our resources. Reducing both planned and unplanned downtime is emphasized in this paper/session.

Maximum Availability Architecture (MAA)

Oracle's recommendations are published on the Oracle Technology Network website: http://www.oracle.com/technetwork/database/features/availability/maa-090890.html

There you will find specifics and case studies for enterprise-wide implementations of Oracle software; it is one of the best places to start when building a disaster recovery plan for reducing downtime. In particular, pay special attention to the Data Guard Best Practices series of white papers. They contain tuning tips suitable for any customer or situation, not just Data Guard and not just Very Large Databases (VLDB).

Step-Ordered Approach

Most often the project, whether it is an upgrade or a migration, consists of several smaller steps that can be implemented in a more step-ordered fashion. Because certain Oracle components are backward compatible, a larger project can be divided into smaller incremental steps, reducing both planned and unplanned downtime. The recommended order below is different from what you will find in the following document from My Oracle Support (MOS): Complete Checklist for Manual Upgrades to 11gR2 [ID 837570.1]. Oracle's document assumes that all of the steps happen within the same outage period. This step-ordered method also allows several fall-back positions for the larger overall project, and it can become standard operating procedure from now on because it applies to any upgrade or migration project.

As a DBA, it is important to realize that the binary upgrade and the database upgrade are two different events, most often executed at different times. A binary upgrade is the ORACLE_HOME software that is installed, upgraded, and maintained using Oracle-provided tools. A database upgrade is basically updating the data dictionary from one version to another. The following method is different than Oracle’s recommendation of accomplishing all of the steps during a single outage window.

Breaking up a large task into smaller chunks gives you multiple safe fall back positions for each shorter outage window. If something in one of the smaller steps doesn't work, back it out, reengineer, and redeploy.

In a general sense, Oracle is backwards compatible for making that transition from an earlier version to a later one. The following components can be upgraded to 11g Release 2 while still being compatible with earlier versions of Oracle Database:

  • Oracle Net Services: LISTENER.ORA, SQLNET.ORA

  • Clients (SQL*Net, JDBC, ODBC )

  • RMAN Binary, Catalog, and Database

  • Grid Control Repository Database

  • Grid Control Management Agents

  • ASM (Automatic Storage Management) and CRS (Clusterware)

  • PL/SQL Toolkit

  • Transportable Tablespaces (TTS)

Personal Recommended Order of Migration (change as needed for your situation):

  • Install/Patch Oracle Software Binaries in new $ORACLE_HOME – this is Oracle’s definition of an Out-of-Place Upgrade. See the following document - Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2 [ID 1189783.1]

  • Pre-Spin Listener in Upper level-versioned Oracle Home

Recommendation – don't use the automatic registration listener port 1521; see How to Create Multiple Oracle Listeners and Multiple Listener Addresses [ID 232010.1]. A hedged sketch of a pre-spun listener entry follows.
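
The following listener.ora sketch shows what a pre-spun listener in the new (higher-version) home might look like; the listener name, host, and port are assumptions for illustration, not values from this paper:

LISTENER_11GR2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1526))
    )
  )

Start it from the new home (for example, $ORACLE_HOME/bin/lsnrctl start LISTENER_11GR2) so it is already running and tested before any database is moved under it.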

  • Install/Upgrade ODBC, JDBC, SQLPLUS or similar clients

Best information for any issues - Client / Server / Interoperability Support Between Different Oracle Versions [ID 207303.1]

http://www.oracle.com/technology/tech/java/sqlj_jdbc/htdocs/jdbc_faq.htm#02_02

JDBC, JDK and Oracle Database Certification [Note 401934.1]

  • Install/Upgrade/Migrate Cobol, C and Precompilers – consider a move to the highly-recommended separately installed client (can be different node) instead of using the Database $ORACLE_HOME. This creates an environment that can be delegated to other personnel as well as allowing upgrades or patch installations that aren’t as disruptive.

  • GC and RMAN Repository (not the RMAN binaries; those need to match the database version)

Running the UPGRADE CATALOG command upgrades the recovery catalog schema while still allowing all down-level databases to use the same RMAN catalog.

There is a bug in the Upgrade Catalog command in 11.2.0.2 RDBMS, bug# 10157249.

RMAN> CONNECT CATALOG *

connected to recovery catalog database

PL/SQL package RMAN.DBMS_RCVCAT version 11.02.00.01 in RCVCAT database is not current

PL/SQL package RMAN.DBMS_RCVMAN version 11.02.00.01 in RCVCAT database is not current

RMAN>

RMAN> **end-of-file**

RMAN> upgrade catalog;

recovery catalog owner is RMAN

enter UPGRADE CATALOG command again to confirm catalog upgrade

RMAN> upgrade catalog;

error creating set_site_key_for_single_site_dbs

RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================

RMAN-06004: ORACLE error from recovery catalog database: ORA-00001: unique constraint (RMAN.SITE_TFATT_P) violated

To fix this issue please do the following:

1. Make a backup copy of the $ORACLE_HOME/rdbms/admin/recover.bsq file (the $ORACLE_HOME of the rman executable being used)

2. Edit $ORACLE_HOME/rdbms/admin/recover.bsq and change:

update site_tfatt set site_key = onesite_row.site_key where

tf_key in (select tf_key from df, dbinc

where dbinc.dbinc_key = df.dbinc_key

and dbinc.db_key = onesite_row.db_key);

To read:

update site_tfatt set site_key = onesite_row.site_key where

tf_key in (select tf_key from tf, dbinc

where dbinc.dbinc_key = tf.dbinc_key

and dbinc.db_key = onesite_row.db_key);

The sub-query should look for tf_key in TF table (instead of DF table).

3. Connect with rman to the catalog database and run the upgrade again.

  • Upgrade the GC repository database and the RMAN catalog database to the new version; this is one of the best ways to gain practice using the upper-level version of the RDBMS.

  • ASM (Automatic Storage Management) – Oracle has further divided this software into its own OS owner and job definitions. There is also the possibility of pre-spinning the listener in this ASM home.

  • Upgrade/Migrate Database

  • Upgrading the Optimizer – mostly specific to 11g (both SQL Plan Management and Optimizer Features), accomplished by changing database initialization parameters; see the sketch after this list.
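
A minimal sketch of the initialization-parameter side of that optimizer transition; the version values below are examples only and should match your own source and target releases:

ALTER SYSTEM SET optimizer_features_enable = '10.2.0.4' SCOPE = BOTH;
-- capture the existing plans as SQL plan baselines while still running at the old level
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE SCOPE = BOTH;
-- once the workload has been captured and verified, raise the optimizer to the new level
ALTER SYSTEM SET optimizer_features_enable = '11.2.0.2' SCOPE = BOTH;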

The utilities that will have specific compatibility issues between Oracle versions include both export/import and data pump.

  • Export the data using the $ORACLE_HOME/bin/exp of the lowest database version involved.

  • Import the data with the $ORACLE_HOME/bin/imp of the target database.

  • EXP/IMP files cannot be interchanged with DATA PUMP files.

See the following support documents for the latest up-to-date information:

Compatibility Matrix for Export And Import Between Different Oracle Versions [Doc ID: 132904.1]

Export/import data pump parameter version—Compatibility of Data Pump Between Different Oracle Versions [Doc ID: 553337.1]

Client compatibility (SQL*Net, JDBC, ODBC)

In a general sense, client compatibility is supported on a minimum release (usually what is known as the terminal or last release for older products). In other words, a higher-level client can work with a lower-level database. The clients in this list that have an asterisk (*) will have few issues when used in this mixed environment.

  • ODBC *

  • SQL*Plus, Instant Client, SQL Developer *

  • JDBC, JDK—Application specific

  • Precompilers—Application specific

  • Export/import or data pump—MOS article, very strict guidelines

  • Database links*

  • 32-bit to 64-bit **—SQL*Plus, C, Cobol, database link

  • PL/SQL features compatibility—New release features will be associated with the lowest version client

  • Features availability—New release features will be associated with the lowest version client

  • BEQUEATH connections are not supported between different releases—a Unix-specific Oracle protocol that connects without a listener.

Check out my blog post for a specific example using this method: Migrating to 11gR2 – A Case Study on the Step-Ordered Approach

http://aprilcsims.wordpress.com/migrating-to-11gr2/

Other Less-Known Methods to Reduce Planned Downtime for Oracle Upgrades

  • When running catalog.sql, catproc.sql, or catpatch.sql

  • When using the TTS method of upgrading/migrating

Take all tablespaces other than SYSTEM, SYSAUX, UNDO, and ROLLBACK SEGMENTS offline, either OFFLINE NORMAL or even READ ONLY, as the data in application-specific tablespaces isn't changed during an Oracle upgrade. Any ordinary READ ONLY tablespace will need to be made READ WRITE temporarily after the upgrade so the data file headers can be updated, and then restored to READ ONLY status. See the MOS document Increasing Migration Performance and Recovery time using offline Tablespaces [ID 780318.1] for more details.

It would be necessary to check that there are no SYS-owned objects in any off-lined tablespaces. These same steps would also save time if you need to downgrade or restore the database from a failed upgrade attempt.
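
A minimal sketch of both steps, run as SYSDBA before the upgrade (spool the generated statements to a script and review it first; OFFLINE NORMAL can be substituted for READ ONLY):

SELECT 'ALTER TABLESPACE '||tablespace_name||' READ ONLY;'
  FROM dba_tablespaces
 WHERE contents = 'PERMANENT'
   AND tablespace_name NOT IN ('SYSTEM','SYSAUX');

-- any rows returned here mean that tablespace must stay online during the upgrade
SELECT owner, segment_name, tablespace_name
  FROM dba_segments
 WHERE owner = 'SYS'
   AND tablespace_name NOT IN ('SYSTEM','SYSAUX');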

Consolidated Reference List Of Notes For Migration / Upgrade Service Requests [ID 762540.1]

Best Practices to Minimize Downtime During Upgrade [ID 455744.1]

Export/Import and DataPump

Most of the information provided here is devoted to the newer Data Pump utility, but there may be specific conditions where the traditional EXP/IMP utilities are actually faster. See MOS Note 286496.1 for how to trace long-running Data Pump jobs.

There is usually some sort of technical limitation that requires an upgrade using the exp/imp utilities. This example covers one of the most common: converting to a new characterset that is not a superset of the existing set. Most often this is done to convert data to Unicode (MOS Note 306411.1).

Three possible methods for converting charactersets:

  • Using Oracle's csalter utility – small amounts of data to be converted, no need to create a new database, requires extensive testing and DBA resources, faster for databases starting on US7ASCII (see the sketch after this list)

  • Export/Import – suitable for small databases and/or small amounts of convertible data

  • Data Pump – for larger databases and/or larger amounts of data that needs to be converted
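
A rough sketch of the csalter path only, with assumed connection details and target characterset; the csscan utility must be installed first (csminst.sql), and MOS Note 306411.1 should be reviewed before running any of this:

-- from the operating system, scan the entire database first:
--   csscan system/password FULL=Y TOCHAR=AL32UTF8 LOG=csscan CAPTURE=Y ARRAY=1000000 PROCESS=2

-- csalter is only appropriate when the scan shows the data does not need conversion;
-- otherwise use export/import or Data Pump as described below
SHUTDOWN IMMEDIATE
STARTUP RESTRICT
@?/rdbms/admin/csalter.plb
SHUTDOWN IMMEDIATE
STARTUP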

Characterset Conversion Project Overview – EXPORT/IMPORT or DATA PUMP

  • A new target database precreated with new characterset.

  • Make sure the source database schema-level statistics are gathered and saved/exported with DBMS_STATS.EXPORT_SCHEMA_STATS (see the sketch after this list).

You will be creating a new database so you don’t want data dictionary, fixed objects (DBA views) or system statistics from the source database.

  • A full database export on the source database excluding indexes, statistics, and constraints.

  • Import into the target database with new characterset but exclude the indexes, statistics, and constraints.

  • Export indexes and constraints (DDL commands) to a file.

  • Run the index and constraint scripts using parallel execution; search for the paresh Parallel Execution Script:

http://orajourn.blogspot.com/2008/01/datapump-index-work-around.html

See MOS Using Parallel Execution [ID 203238.1]

  • Extract SYS grants from source database, run in target database.

  • Import saved statistics into target database.

  • Compare the schemas on both databases to check for any missing objects, using 3rd-party products such as TOAD.
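
A minimal sketch of the statistics step mentioned above; the schema name HR and the staging table MIG_STATS are assumptions for illustration:

-- on the source database: stage the schema statistics in an ordinary table
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SYSTEM', stattab => 'MIG_STATS');
  DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'HR', stattab => 'MIG_STATS', statown => 'SYSTEM');
END;
/

-- move the SYSTEM.MIG_STATS table along with the rest of the data, then on the target:
BEGIN
  DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'HR', stattab => 'MIG_STATS', statown => 'SYSTEM');
END;
/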

Note on Characterset Selection:

  • US7ASCII: better to migrate to WE8MSWIN1252, or WE8ISO8859P15 etc.

  • WE8ISO8859P1: WE8MSWIN1252 is a superset

  • UTF8: better to migrate to AL32UTF8

  • ZHS16CGB231280: ZHS16GBK is a superset

  • ZHS32GB18030: better to migrate to AL32UTF8

  • KO16KSC5601: KO16MSWIN949 is a superset

  • ZHT16BIG5: ZHT16MSWIN950 solves various problems of ZHT16BIG5

See Blog for specific case study with a small database - http://aprilcsims.wordpress.com/migrating-to-alu32utf8/

Improving Performance for EXPORT/IMPORT Utilities

Quote from documentation - “Data Pump Export and Import are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import. Sequential media, such as tapes and pipes, are not supported.”

Data Pump – can use four different mechanisms

There are data-related structure differences (character type, table type, column type, encryption, VPD, constraints) that may force Data Pump to use a particular mechanism.

  • data file copying - Transportable Tablespaces – fastest method because the data is not converted, only metadata.

  • direct path – Second fastest, default method used by Data Pump utility.

  • external table – used when parallel SQL is possible; an external table is used for mapping and the SQL engine moves the data. This is also the mechanism used when NETWORK_LINK is specified during exports only.

  • network link import – slowest, uses INSERT SELECT statement over a database link.

Use the following MOS article for more information on tuning the two most commonly used methods (direct path vs. external table): Export/Import DataPump Parameter ACCESS_METHOD - How to Enforce a Method of Loading and Unloading Data? [ID 552424.1]

Data Pump Parameters Worth Mentioning

BUFFER – (original exp/imp only) be sure to increase this from the default size

PARALLEL – Won't really help with small jobs or large amounts of metadata.

  • Set PARALLEL to roughly 2x the number of CPUs, and adjust if needed.

  • For expdp, PARALLEL should be less than or equal to the number of dump files.

  • For impdp, PARALLEL should not be much larger than the number of files in the dump file set.

  • PARALLEL > 1 is only available in Enterprise Edition.

  • ESTIMATE (BLOCKS or STATISTICS) vs. ESTIMATE_ONLY=Y|N (both are export parameters).

METRICS=Y – an undocumented parameter that adds more information about the number of objects and the time it took to process them. Output goes into the job logfile.

Bugs – several version-specific bugs are out there, please research/test before using in a production environment. See the MOS document for a list of key issues: Checklist for Slow Performance of Export Data Pump (expdp) and Import DataPump (impdp) [ID 453895.1]

Initialization Parameters on the Target Database – How to drastically improve import performance

Consider turning off non-essential initialization parameters and other performance-hogging features, and increasing certain parameters, just before starting Data Pump on the target database; a minimal sketch follows the list below. Don't forget to adjust the parameters back and turn autoextend off when finished.

  • Turn off archivelogmode

  • Turn off flashback

  • Disable Recycle bin

  • Turn off auditing

  • Turn db_block_checking off – it is always on for the SYSTEM tablespace; block checking may add from 1% to 10% overhead. Set DB_BLOCK_CHECKING to LOW or OFF in versions 10.2 and 11g, or FALSE in 10.1 and 9i.

  • _disable_logging = TRUE

  • pga_aggregate_target

  • sort_area_size

  • shared_pool_size

  • sga_max_size

  • parallel_max_servers

  • CLUSTER_DATABASE=FALSE

  • PARALLEL_SERVER=FALSE

  • job_queue_processes=0

  • aq_tm_processes=0

  • grant exempt access policy to "userdoingexportimport"; -- removes any VPD issues

  • ALTER SYSTEM SET max_dump_file_size = unlimited SCOPE = both;

  • CURSOR_SHARING
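
A minimal sketch of the dynamic changes, with example values only; record the original settings first and restore them after the import. The ARCHIVELOG and flashback changes need a bounce to MOUNT, shown here as comments:

ALTER SYSTEM SET db_block_checking = 'OFF' SCOPE = BOTH;
ALTER SYSTEM SET job_queue_processes = 0 SCOPE = BOTH;
ALTER SYSTEM SET aq_tm_processes = 0 SCOPE = BOTH;
ALTER SYSTEM SET pga_aggregate_target = 4G SCOPE = BOTH;

-- SHUTDOWN IMMEDIATE
-- STARTUP MOUNT
-- ALTER DATABASE NOARCHIVELOG;
-- ALTER DATABASE FLASHBACK OFF;
-- ALTER DATABASE OPEN;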

The larger the database, the better the performance that can be realized during the import process by splitting the export of indexes, constraints, and referential constraints (metadata) into a separate file from the data on the source database. There is a package that will help speed up building partitioned indexes, DBMS_PCLXUTIL.BUILD_PART_INDEX; research it on MOS for more information and see the sketch below.
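
A minimal sketch of that package; the table and index names are hypothetical, and the local index is created UNUSABLE first so the package can rebuild its partitions in parallel batches of jobs:

CREATE INDEX sales_ix ON sales (sale_date) LOCAL UNUSABLE;

BEGIN
  DBMS_PCLXUTIL.BUILD_PART_INDEX(
    jobs_per_batch => 4,   -- partitions rebuilt concurrently
    procs_per_job  => 2,   -- degree of parallelism for each rebuild
    tab_name       => 'SALES',
    idx_name       => 'SALES_IX');
END;
/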

Overview of the Migration Steps:

  1. Precreate target database, decide whether to keep the current data file locations.

With the standard IMP utility, change tablespace/datafile locations by precreating the tablespaces you are importing into, and make sure these aren't overwritten by the import command (imp destroy=n).

Data Pump (impdp) has the REMAP_DATAFILE parameter; use a parameter file because of the syntax: each source and target datafile name is wrapped in single quotes and separated by a colon, the whole pair is wrapped in double quotes, and multiple pairs are separated by commas.
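
A hedged parfile excerpt showing that quoting; the paths are examples only:

REMAP_DATAFILE="'/u01/oradata/SOURCEDB/users01.dbf':'/u02/oradata/TARGETDB/users01.dbf'"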

  2. Open the source database in restricted mode if at all possible; at a minimum, do a consistent export with a privileged account that is not SYS: "SYSDBA is used internally in the Oracle database and has specialized functions. Its behavior is not the same as for generalized users. For example, the SYS user cannot do a transaction level consistent read (read-only transaction). Queries by SYS will return changes made during the transaction even if SYS has set the transaction to be READ ONLY. Therefore export parameters like CONSISTENT, OBJECT_CONSISTENT, FLASHBACK_SCN, and FLASHBACK_TIME cannot be used." How to Connect AS SYSDBA when Using Export or Import [ID 277237.1]

Interesting research on data pump export flashback_time and flashback_scn at

http://yong321.freeshell.org/oranotes/DataPump.txt

  3. Be sure that the default temp tablespace and the default tablespace for the user performing the import can autoextend. Add an additional undo tablespace or increase the value of UNDO_RETENTION. The first run of the export includes all of the data but excludes items that will be imported or created at different times than the import.

expdp parfile=expdp_export_full.parfile > expdp_export_full.out 2>&1 &

expdp_export_full.parfile contains:

directory=DATA_PUMP_DIR

dumpfile=${ORACLE_SID}_full_%U.dmp

logfile=${ORACLE_SID}_export_full.log

full=y

parallel=12 –see notes above

metrics=y –undocumented parameter

userid="system/password"

EXCLUDE=TABLESPACE,STATISTICS,INDEX,CONSTRAINT,REF_CONSTRAINT



  4. Export indexes, constraints, and referential constraints into a file

expdp parfile=expdp_export_indexes.parfile > expdp_export_indexes.out 2>&1 &

expdp_export_indexes.parfile contents:

directory=DATA_PUMP_DIR

dumpfile=${ORACLE_SID}_indexes_%U.dmp

logfile=${ORACLE_SID}_export_indexes.log

full=y

metrics=y --undocumented parameter

userid="system/password"

include=INDEX,CONSTRAINT,REF_CONSTRAINT

  5. Then, to format the DDL correctly for the target database version/characterset/NLS settings, run the import command on the target database. This only creates the resulting SQL file; no actual import is done at this step.

impdp parfile=impdp_indexes.parfile > impdp_indexes.out 2>&1 &

impdp_indexes.parfile contains:

directory=DATA_PUMP_DIR

dumpfile=${ORACLE_SID}_indexes_%U.dmp

logfile=${ORACLE_SID}_index_sqlfile.log

full=y

metrics=y

userid="system/password"

sqlfile="index_sqlfile.sql"

  6. Import the data into the target database, excluding indexes, constraints, referential constraints, and statistics. Data Pump serializes the creation of the excluded items, which can be sped up using other methods. Consider using the REMAP_TABLESPACE parameter to move imported items into a previously created tablespace.

impdp parfile=impdp_alldata.parfile > impdp_alldata.out 2>&1 &

impdp_alldata.parfile contains:

directory=DATA_PUMP_DIR

dumpfile=${SOURCE_ORACLE_SID}_full_%U.dmp

logfile=${TARGET_ORACLE_SID}_import_full.log

full=y

metrics=y

parallel=12

userid="system/password"

  7. Run the DDL sqlfile created earlier to recreate the indexes and rebuild partitioned indexes; investigate examples of parallelizing this part (a minimal sketch follows).
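
One way to parallelize this step, as a minimal sketch (the degree of 8 is an example): force parallel DDL for the session and then run the sqlfile generated earlier.

ALTER SESSION FORCE PARALLEL DDL PARALLEL 8;
@index_sqlfile.sql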

  8. Spool any direct grants which won't be migrated to the TARGET database during the IMPDP process. Review the script; it may be advisable to add a list of grantees to be excluded, such as the default roles (DBA, EXP_FULL_DATABASE, etc.).

set head off

set pagesize 0

set feedback off

spool grants_from_sys.sql

select 'grant '||privilege||' on '||owner||'.'||table_name||' to '||grantee||' '||
       decode(grantable,'YES','WITH GRANT OPTION')||';'
  from dba_tab_privs
 where owner = 'SYS'
/
select 'grant '||privilege||' ('||column_name||') on '||owner||'.'||table_name||' to '||grantee||' '||
       decode(grantable,'YES','WITH GRANT OPTION')||';'
  from dba_col_privs
 where owner = 'SYS'
/
spool off

An older method of parallelizing using the standard EXPORT/IMPORT utilities.

Oracle Export/Import Example Using a UNIX pipe process running in parallel

http://dbarajabaskar.blogspot.com/2010/08/oracle-exportimport-using-unix-pipe.html
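
A rough sketch of that pipe technique (connection strings and file names are assumptions): the import reads from a named pipe while the export writes into it, so both utilities run at the same time and no intermediate dump file lands on disk.

mknod /tmp/exp_imp_pipe p

# start the import first, reading from the pipe in the background
imp system/password@TARGETDB file=/tmp/exp_imp_pipe full=y log=imp_pipe.log &

# the export writes into the same pipe; both run in parallel
exp system/password@SOURCEDB file=/tmp/exp_imp_pipe full=y consistent=y log=exp_pipe.log

wait
rm /tmp/exp_imp_pipe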

DBUA vs. Manual Upgrade

There is no significant difference in the performance between DBUA and the manual method of upgrading because the largest block of time is updating the data dictionary to the new version.

The Manual Method has more flexibility for migration projects:

  1. Transportable Tablespace

  2. Different Servers

  3. Physical or Logical Standbys

Oracle specifically states that the manual method of upgrading on different servers requires the same OS version and database version. I recently did a migration project moving a database to a new server with a one-level-higher version of the operating system (Linux RH 4 to 5) by doing a switchover to a physical standby; the database was the same version. Both servers were taken down and the new one was renamed with the same IP/hostname. This prevented any DNS caching issues as well as an untested configuration of a hardware load balancer. See the section later on adjusting SQLNET for a change in hostname and/or IP address with an installed Oracle Server.

Automatic Temp File Recreation on Startup

Since 10g, any missing temp files are automatically recreated on startup. They are easy to recreate by cycling the database, especially if you can't find the original creation information:

  1. If temp files are accidentally deleted, renamed, become corrupted or otherwise not available.

  2. You no longer need to document the tempfile information; it is stored in the controlfile.

  3. When cloning or copying a database to another server, tempfiles are not needed. This procedure is different from using the Recovery Manager (RMAN) duplicate/backup/recover commands; RMAN doesn't back up temp files. Recovery Manager and Tempfiles [ID 305993.1] mentions that RMAN backs up tempfiles, but that is incorrect: in later versions of Oracle, temporary tablespaces are locally managed tempfiles, not datafiles, and only the datafiles are included when taking a backup of a database.
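
A minimal sketch for checking, and if ever needed manually adding, a tempfile after a clone; the file name and sizes below are assumptions:

SELECT file#, name, status FROM v$tempfile;

ALTER TABLESPACE temp
  ADD TEMPFILE '/u01/oradata/ORCL/temp01.dbf' SIZE 2G
  AUTOEXTEND ON NEXT 256M MAXSIZE 8G;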