Oracle GoldenGate Interview Questions and Answers

Last updated on 04th Jul 2020

About author

Madhan (Sr Cloud Engineer )

Oracle GoldenGate is a comprehensive software package for real-time data integration and replication in heterogeneous IT environments. The product set enables high availability solutions, real-time data integration, transactional change data capture, data replication, transformations, and verification between operational and analytical enterprise systems. Oracle GoldenGate 19c brings extreme performance with simplified configuration and management, tighter integration with Oracle Database, support for cloud environments, expanded heterogeneity, and enhanced security. In addition to the Oracle GoldenGate core platform for real-time data movement, Oracle provides the Management Pack for Oracle GoldenGate—a visual management and monitoring solution for Oracle GoldenGate deployments—as well as Oracle GoldenGate Veridata, which allows high-speed, high-volume comparison between two in-use databases.

1) What type of Topology does Goldengate support?

Ans:

GoldenGate supports the following replication topologies:

  • Unidirectional
  • Bidirectional
  • Peer-to-peer
  • Broadcast
  • Consolidation
  • Cascading

2) What are the main components of the Goldengate replication?

Ans:

The replication configuration consists of the following processes.

  • Manager
  • Extract
  • Pump
  • Replicat

3) What database does GoldenGate support for replication?

Ans:

  • Oracle Database
  • TimesTen
  • MySQL
  • IBM DB2
  • Microsoft SQL Server
  • Informix
  • Teradata
  • Sybase
  • Enscribe
  • SQL/MX

4) What transaction types does Goldengate support for Replication?

Ans:

Goldengate supports both DML and DDL Replication from the source to target.

5) What are the supplemental logging pre-requisites?

Ans:

The following supplemental logging is required.

  • Database supplemental logging
  • Object level logging

6) Why is Supplemental logging required for Replication?

Ans:

When a transaction is committed on the source database, only the new data is written to the redo log. However, for Oracle GoldenGate to apply these transactions on the destination database, the before-image key values are required to identify the affected rows. Supplemental logging ensures this key data is also written to the redo log, captured into the trail file, and used on the destination to locate the rows against which the transactions are executed.
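
A minimal sketch of enabling both levels (the user and table names here are illustrative, not from the article):

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

GGSCI> DBLOGIN USERID ggadmin, PASSWORD ******
GGSCI> ADD TRANDATA hr.employees

The first command enables database-level supplemental logging; ADD TRANDATA enables object-level logging for a specific table.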

7) List important considerations for bi-directional replication?

Ans:

The customer should consider the following points in an active-active replication environment.

  • Primary Key: Helps to identify conflicts and resolve them.
  • Sequences: These are not supported. The workaround is to use odd/even values, ranges, or concatenated sequences.
  • Triggers: These should be disabled or suppressed on the target to avoid duplicate firing and uniqueness issues.
  • Data Looping: This can easily be avoided using OGG itself.
  • LAG: This should be minimized. If a customer insists there will be no lag due to network or heavy load, CDR may seem unnecessary, but in practice there is usually some lag, and it can cause conflicts.
  • CDR (Conflict Detection & Resolution): OGG has built-in CDR for all kinds of DML that can be used to detect and resolve conflicts.
  • Packaged Applications: These are often not supported, as they may contain data types not supported by OGG or may not allow the application modifications needed to work with OGG.

8) What is the data pump process in Goldengate?

Ans:

  • The Data Pump (not to be confused with the Oracle Export/Import Data Pump) is an optional secondary Extract group that is created on the source system. When a Data Pump is not used, the Extract process writes to a remote trail located on the target system over TCP/IP. When a Data Pump is configured, the Extract process writes to a local trail, and the Data Pump reads that trail and sends the data over the network to the remote trail on the target system.
  • The advantage is protection against network failure: without a local trail, the Extract process buffers data in memory before sending it over the network, so a network failure could cause the Extract process to abort (abend). With a Data Pump, the data is safely persisted locally. In addition, any complex data transformation or filtering can be performed by the Data Pump. It is also useful when consolidating data from several sources into one central target, where a Data Pump on each individual source system can write to one common trail file on the target.
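
For illustration, a minimal Data Pump parameter file might look like the following (process, host, trail, and schema names are hypothetical):

EXTRACT pmp1
RMTHOST tgthost, MGRPORT 7809
RMTTRAIL ./dirdat/rt
PASSTHRU
TABLE hr.*;

PASSTHRU tells the pump to move the data unchanged; omit it if the pump performs transformation or filtering.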

9) Where can filtering of data for a column be configured?

Ans:

Filtering of the columns of a table can be configured at the Extract, Pump, or Replicat level.
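
For example (table and column names are hypothetical), column filtering in an Extract or Pump parameter file can be expressed with the COLS and COLSEXCEPT options of the TABLE parameter:

TABLE hr.employees, COLS (employee_id, last_name, salary);
TABLE hr.jobs, COLSEXCEPT (job_description);

On the Replicat side, mapping and filtering are specified with the MAP parameter.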

10) Is it a requirement to configure a PUMP extract process in OGG replication?

Ans:

A Pump Extract is optional, but it is highly recommended as a safeguard against network failures. It is normally configured when you are setting up OGG replication across a network.

11) What are the differences between the Classic and integrated Capture?

Ans:

Classic Capture:

  • The Classic Capture mode is the traditional Extract process that accesses the database redo logs (optionally archive logs) to capture the DML changes occurring on the objects specified in the parameter files.
  • At the OS level, the GoldenGate user must be a part of the same database group which owns the database redo logs.
  • This capture mode is available for other RDBMS as well.
  • There are some data types that are not supported in Classic Capture mode.
  • Classic capture can’t read data from the compressed tables/tablespaces.

Integrated Capture (IC):

  • In the Integrated Capture mode, GoldenGate works directly with the database log mining server to receive the data changes in the form of logical change records (LCRs).
  • IC mode does not require any special setup for the databases using ASM, transparent data encryption, or Oracle RAC.
  • This feature is only available for Oracle databases version 11.2.0.3 or higher.
  • It also supports various object types which were previously not supported by Classic Capture.
  • This Capture mode supports extracting data from source databases using compression.
  • Integrated Capture can be configured in an online or downstream mode.

12) List the minimum parameters that can be used to create the extract process?

Ans:

The following are the minimum required parameters which must be defined in the extract parameter file.

  • EXTRACT NAME
  • USERID
  • EXTTRAIL
  • TABLE
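
Putting these together, a bare-bones Extract parameter file might look like this (all names are illustrative):

EXTRACT ext1
USERID ggadmin, PASSWORD ******
EXTTRAIL ./dirdat/lt
TABLE hr.*;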

13) What are macros?

Ans:

A macro is an easier way to build your parameter files. Once a macro is written, it can be called from different parameter files. Common parameters like username/password and other settings can be included in these macros. A macro can be defined either inside a parameter file or in a macro library.

14) What is the command line utility in GoldenGate (or) what is ggsci?

Ans:

GGSCI, the (Oracle) GoldenGate Software Command Interpreter, is GoldenGate's command-line utility. It is used to issue the essential commands that create, configure, and manage GoldenGate processes.

15) Where can macros be invoked?

Ans:

The macros can be called from the following parameter files.

  • Manager
  • Extract
  • Replicat
  • GLOBALS

16) How is a macro defined?

Ans:

A macro statement consists of the following.

  • Name of the Macro
  • Parameter list
  • Macro body

Sample:

MACRO #macro_name
PARAMS (#param1, #param2, ...)
BEGIN
<macro_body>
END;
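
For example, a hypothetical macro that supplies connection details, and its invocation later in the same (or another) parameter file:

MACRO #dbconnect
BEGIN
USERID ggadmin, PASSWORD ******
END;

#dbconnect ()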

17) What type of Encryption is supported in Goldengate?

Ans:

Oracle Goldengate provides 3 types of Encryption.

  • Data Encryption using Blowfish.
  • Password Encryption.
  • Network Encryption.

18) What are the different password encryption options available with OGG?

Ans:

You can encrypt a password in OGG using

  • Blowfish algorithm and
  • Advanced Encryption Standard (AES) algorithm
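
For instance (the password shown is a placeholder), a password can be encrypted from GGSCI with either algorithm:

GGSCI> ENCRYPT PASSWORD mypassword AES256 ENCRYPTKEY DEFAULT
GGSCI> ENCRYPT PASSWORD mypassword BLOWFISH ENCRYPTKEY DEFAULT

The command prints an encrypted string that you then paste into the USERID/PASSWORD clause of the parameter file.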

19) Is there a way to check the syntax of the commands in the parameter file without running the GoldenGate process?

Ans:

Yes, you can place the SHOWSYNTAX parameter in the parameter file and try starting the process. If there is any syntax error, you will see it.

20) How can you increase the maximum size of the read operation into the buffer that holds the results of the reads from the transaction log?

Ans:

If you are using Classic Extract, you can use the TRANLOGOPTIONS ASMBUFSIZE parameter to control the read size for ASM databases.

    21) What information can you expect when there is data in the discard file?

    Ans:

    When data is discarded, the discard file can contain:

    • Discard row details
    • Database Errors
    • Trail file number

    22) What are the different encryption levels in AES?

    Ans:

    You can encrypt the password/data using AES with three different key lengths:

    • 128 bit
    • 192 bit and
    • 256 bit

    23) Is there a way to check the syntax of the commands in the parameter file without actually running the GoldenGate process

    Ans:

    Yes, you can place the SHOWSYNTAX parameter in the parameter file and try starting the process. If there is any syntax error, you will see it.

    24) What command can be used to switch writing the trail data to a new trail file?

    Ans:

    You can use the following command to write the trail data to a new trail file.

    SEND EXTRACT ext_name, ROLLOVER

    25) How can you determine if the parameters for a process were recently changed?

    Ans:

    Whenever a process is started, the parameters in the .prm file for the process are written to the process REPORT. You can look at older process reports to view the parameters which were used to start the process. By comparing the older and the current reports you can identify the changes in the parameters.

    26) Without going into details, explain high level steps of setting up GoldenGate.

    Ans:

    Below are the key steps to install/configure the GoldenGate system.

    • Download the software from the Oracle website and upload to server
    • Unpack/Unzip the installation zip file
    • Prepare source and target system
    • Install the software on the source and target system (for 12c use OUI)
    • Prepare the source database (some DB parameters need to be adjusted)
    • Configure the Manager process on the source and target system
    • Configure the Extract process on the source system
    • Configure the data pump process on the source system
    • Configure the Replicat process on the target system
    • Start the Extract process
    • Start the data pump process
    • Start the Replicat process

    27) When creating a GoldenGate database user for database 12c, what special precaution do you need to take?

    Ans:

    You must grant the GoldenGate admin user access to all database containers on the source side so that GoldenGate can access the redo logs for all the databases (container and pluggable).

    You must also grant the DBA role with the CONTAINER=ALL option.

    SQL> GRANT DBA TO C##GOLDENADMIN CONTAINER=ALL;

    28) What is the Downstream capture mode of GoldenGate?

    Ans:

    Traditionally, the log mining work for the source data happens on the source database side, but in downstream capture mode the Oracle Data Guard redo transport mechanism is used. This enables continuous log shipping to the target database’s standby redo logs in real time. The log mining work to fetch DDL/DML transactions then happens on the target side.

    29)  How do you take backup of GoldenGate?

    Ans:

    You can back up your source/target database easily using backup tools like Oracle Recovery Manager (RMAN), but to back up GoldenGate you will need to back up the GoldenGate home and the subdirectories that contain the trail files, checkpoint files, etc. Without these key files, GoldenGate will not be able to recover from the last checkpoint. This means that if you somehow lose all of these key GoldenGate files, you will have no option but to go for a new initial load. RMAN simply does not have the capability to back up the OS or non-database files.

    • So either keep all your GoldenGate-related files on some kind of SAN setup which gets backed up daily at the storage level, or use Unix shell commands etc. in a cron job to take file-system backups.

    30)  What is the checkpoint table? In which capture mode it is used: classic or integrated?

    Ans:

    Oracle GoldenGate extract and replicat processes perform checkpoint operations. Now in the event of some unexpected failure, the checkpoint file or database table ensures extract and replicat re-start from the point of failure and avoid recapture and re-apply of transactions.

    • So, the checkpoint table enables the checkpoint to be included within Replicat’s transaction, ensuring complete recovery from all failure scenarios.
    • You use the GGSCI ADD CHECKPOINTTABLE command to create the checkpoint table.
    • The checkpoint table is used in classic Replicat mode.
    • For Integrated Replicat, the checkpoint table is not required and should not be created.
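
    For example (the schema and table names are illustrative):

    GGSCI> DBLOGIN USERID ggadmin, PASSWORD ******
    GGSCI> ADD CHECKPOINTTABLE ggadmin.ogg_checkpoint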

    31)  What transaction types does Golden Gate support for Replication?

    Ans:

    Goldengate supports both DML and DDL Replication from the source to target.

    32)  What are the supplemental logging prerequisites?

    Ans:

    The following supplemental logging is required.

    • Database supplemental logging
    • Object level logging

    33)  What is Integrated Capture(IC)?

    Ans:

    Integrated Capture (IC):

    • In the Integrated Capture mode, GoldenGate works directly with the database log mining server to receive the data changes in the form of logical change records (LCRs).
    • IC mode does not require any special setup for the databases using ASM, transparent data encryption, or Oracle RAC.
    • This feature is only available for Oracle databases version 11.2.0.3 or higher.
    • It also supports various object types which were previously not supported by Classic Capture.
    • This Capture mode supports extracting data from source databases using compression.

    Integrated Capture can be configured in an online or downstream mode.

    34)  I want to configure multiple Extracts to write to the same exttrail file. Is this possible?

    Ans:

    Only one Extract process can write to one exttrail at a time. So, you can’t configure multiple extracts to write to the same exttrail.

    35)  What are some of the key features of GoldenGate 12c?

    Ans:

    The following are some of the more interesting features of Oracle GoldenGate 12c:

    • Support for Multitenant Database
    • Coordinated Replicat
    • Integrated Replicat Mode
    • Use of Credential store
    • Use of Wallet and master key
    • Trigger-less DDL replication
    • Automatically adjusts threads when RAC node failure/start
    • Supports RAC PDML Distributed transaction
    • RMAN Support for mined archive logs

    36)  If I have created a Replicat process in OGG 12c and forgot to specify the DISCARDFILE parameter. What will happen?

    Ans:

    Starting with OGG 12c, if you don’t specify a DISCARDFILE, the OGG process generates a discard file with default values whenever the process is started with the START command through GGSCI.

    37)  Is it possible to start OGG EXTRACT at a specific CSN?

    Ans:

    Yes. Starting with OGG 12c you can now start Extract at a specific CSN in the transaction log or trail.

    Example:

    START EXTRACT fin ATCSN 12345

    START EXTRACT finance AFTERCSN 67890

    38) What Transaction Types Does Goldengate Support For Replication?

    Ans:

    Goldengate supports both DML and DDL Replication from the source to target.

    39)  What Command Is Used To Create The Credential Store?

    Ans:

    ADD CREDENTIALSTORE.

    40) If I Have Created A Replicat Process In OGG 12c And Forgot To Specify The DISCARDFILE Parameter, What Will Happen?

    Ans:

    Starting with OGG 12c, if you don’t specify a DISCARDFILE, the OGG process generates a discard file with default values whenever the process is started with the START command through GGSCI.

    41) How Do You Create A Csv File From The Extracted Data?

    Ans:

    You will have to use the Flat File Adapter to create CSV files. The source is the Extract trail files, which the adapter reads, applying its configured settings to generate the CSV files.

    42) What Is The Minimum Database Version Which Supports Integrated Delivery?

    Ans:

    Oracle 11.2.0.4 is the minimum required database version which supports both Integrated extract and Integrated Replicat.

    43) How Do You View The Data Which Has Been Extracted From The Redo Logs?

    Ans:

    The logdump utility is used to open the trail files and look at the actual records that have been extracted from the redo or the archive log files.

    44) What is the job of the DBWR process and how many DBWR processes are there?

    Ans:

    DBWR (Database Writer) is mainly responsible for writing modified blocks (i.e. dirty blocks) from the buffer cache to the physical data files (disks).

    DBWR writes to the data files on the following events:

    • A checkpoint is issued
    • Too many dirty buffers in the buffer cache
    • No free space in the buffer cache
    • During database shutdown (except the abort method)
    • A tablespace is dropped, taken offline, placed in read-only mode, or placed in hot backup mode

    The parameter DB_WRITER_PROCESSES controls the number of DBWR processes you want to use. It can be from 1 to 20. Using more DBWR processes will increase write performance; however, it will also increase CPU usage on the DB server.
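
    For example (DB_WRITER_PROCESSES is a static parameter, so this sketch assumes an spfile and requires an instance restart):

    SQL> ALTER SYSTEM SET DB_WRITER_PROCESSES=4 SCOPE=SPFILE;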

    45) What is the job of the LGWR process?

    Ans:

    LGWR (Log Writer) writes the contents of the redo log buffer to the online redo log files.

    LGWR writes when any of the below events occurs:

    • When a commit is issued
    • Every 3 seconds
    • When the redo log buffer is 1/3rd full
    • Before DBWR writes dirty blocks to disk

    46) What is the function of the SMON process?

    Ans:

    The SMON (System Monitor) background process performs tasks such as:

    • Instance recovery (including crash recovery during instance startup)
    • Cleaning up unused temporary segments for proper space utilization

    46) What is the function of the PMON process?

    Ans:

    The PMON (Process Monitor) background process performs tasks such as:

    • Cleaning up dead processes and sessions
    • Terminating sessions which exceed the idle timeout limit
    • Managing job queue processes

    47) What is the function of the MMON process?

    Ans:

    MMON (Manageability Monitor) is a background process which performs tasks such as taking AWR snapshots and running ADDM analysis.

    48) How does instance recovery work in an oracle database?

    Ans:

    • When an instance terminates abnormally or crashes, the database goes down in an inconsistent state, meaning ongoing transactions, committed or uncommitted, were not cleanly completed. Before it can be opened, the database must be brought back to a consistent state.
    • Hence SMON performs a critical role in recovering the database. Oracle uses the last SCN recorded in the control file and applies both committed and uncommitted transactions from the redo logs; this is known as roll forward and happens while the database is in the MOUNT state. The database then checks the accessibility of the UNDO segments and opens. Finally, the uncommitted transactions are rolled back with the help of UNDO, which is called rollback.

    49) How to list restore points in RMAN?

    Ans:

    In RMAN you can use the LIST RESTORE POINT [ALL | restore_point_name] command. If you use a recovery catalog, you can query the RC_RESTORE_POINT view in the recovery catalog repository, or query the V$RESTORE_POINT view in the target database.
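
    For example:

    RMAN> LIST RESTORE POINT ALL;
    SQL> SELECT name, scn, time FROM v$restore_point;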

    50) Can you see the progress of a FLASHBACK DATABASE operation?

    Ans:

    Yes, you can. During a FLASHBACK DATABASE operation, you can query V$SESSION_LONGOPS from another session to see the progress of the flashback.

    • The FLASHBACK DATABASE operation has two distinct phases: the actual flashback and the media recovery that happens afterwards to bring the database to a consistent state.
    • While the actual flashback is running, you’ll see the following message in V$SESSION_LONGOPS, on Oracle 11gR2:
    • Flashback Database: Flashback Data Applied : 238 out of 282 Megabytes done
    • During the media recovery, the following messages will be seen:
    • Media Recovery: Redo Applied: 263 out of 0 Megabytes done
    • Media Recovery: Average Apply Rate: 1164 out of 0 KB/sec done
    • Media Recovery: Last Applied Redo: 626540 out of 0 SCN+Time done
    • Media Recovery: Elapsed Time: 232 out of 0 Seconds done
    • Media Recovery: Active Time: 116 out of 0 Seconds done
    • Media Recovery: Active Apply Rate: 1859 out of 0 KB/sec done
    • Media Recovery: Maximum Apply Rate: 1859 out of 0 KB/sec done
    • Media Recovery: Log Files: 15 out of 0 Files done
    • Media Recovery: Apply Time per Log: 7 out of 0 Seconds done
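
    A simple way to watch the progress from another session (the column selection here is illustrative):

    SQL> SELECT opname, sofar, totalwork, units
         FROM v$session_longops
         WHERE opname LIKE 'Flashback%' OR opname LIKE 'Media Recovery%';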

    51) How should I set the database to improve Flashback performance?

    Ans:

    Oracle’s recommendations are:

    • Use a fast file system for your flash recovery area, preferably one that avoids operating system file caching, such as ASM.
    • Configure enough disk spindles for the file system that will hold the flash recovery area. For large production databases, multiple disk spindles may be needed to support the required disk throughput for the database to write the flashback logs effectively.
    • If the storage system used to hold the flash recovery area does not have non-volatile RAM, try to configure the file system on top of striped storage volumes, with a relatively small stripe size such as 128K. This allows each write to the flashback logs to be spread across multiple spindles, improving performance.
    • For large production databases, set the init.ora parameter LOG_BUFFER to at least 8MB. This makes sure the database allocates maximum memory (typically 16MB) for writing flashback database logs.

    52) What is the OGG s/w version to use if source DB is Oracle 12c R2 and platform Linux and destination is DB2 10.1 (32-bit) with 32-bit OS. Does it mean we have to download and install separate OGG software versions at source and target servers? Will they be able to communicate with each other?

    Ans:

    In any case, you will have to install and configure GoldenGate on both source and target, even if the versions are the same; they will be two separate sets of binaries, each built for its own platform and database. Yes, the two installations will be able to communicate with each other.

    53) Can the source platform be 32-bit and target platform on 64 bit OS for OGG implementation?

    Ans:

    Yes

    54) We ran an update statement in the source database which updates ten million records, committed it and immediately ran “shut abort” in the source database. Will data get replicated correctly to the target database by OGG?

    Ans:

    If the DB is down, GoldenGate will abend. As long as the data was written to the logs, when you restart the process GoldenGate will pick up from the point where it stopped. So everything depends on what was written to the logs, and GoldenGate will pick up the committed transactions.

    55)  Instead of running ‘add trandata’, if I directly run ‘alter table add supplemental logging’ at SQL prompt, will OGG still work?

    Ans:

    Yes

    56) What happens if we add trandata for a table which do not have a primary key or unique key, but has invisible columns:

    a) Will the invisible column be considered for uniqueness while enabling supplemental logging?

    b) What happens when we make the invisible column of the table visible?

    Ans:

    a) No, invisible columns will not be considered for uniqueness.

    b) If the invisible columns are made visible, they will be treated as normal columns.

    57) When we talk about OGG initial load, the target tables should be empty, but the metadata should be present. What about the indexes corresponding to the tables at the Target database? Should they be defined at the target database before starting OGG initial load?

    Ans:

    GoldenGate initial load takes care of data only, so yes, the metadata should be present. For a faster load it is advisable to turn off indexes, but it is not mandatory.

    58) OGG “Skip Transaction” option can be provided only for replicat process? Can it be used for extract and data pump processes also?

    Ans:

    Replicat only

    59) If both the source database and target database are RAC databases, will the OGG instances also be RAC?

    Ans:

    There is no such thing as an OGG RAC; only the DB is RAC.

    60)  Under which circumstances do we need to run dblogin in ggsci?

    Ans:

    Whenever a command issued from GGSCI needs to interact with the database, for example ADD TRANDATA or ADD CHECKPOINTTABLE.

    61)  How will the OGG processes behave? Will they get ABENDED?

    Ans:

    Yes. The OGG processes get ABENDED.

    62)  How will the underlying source and databases behave?

    Ans:

    The source and target DBs are independent of OGG, hence there is no effect on the DBs.

    63) How will the applied gap be detected and restored?

    Ans:

    As soon as you restart the process, it will be taken care of. This is done using the Goldengate Checkpoint process.

    64) What is a Credential Store in OGG 12c?

    Ans:

    The OGG Credential Store manages encrypted passwords and USERIDs that are used to interact with the local database, and associates them with an alias. Instead of specifying the actual USERID and password in a command or a parameter file, you can use the alias. The Credential Store is implemented as an autologin wallet within the Oracle Credential Store Framework (CSF).

    65) How to configure a Credential Store in OGG 12c?

    Ans:

    Steps to configure the Oracle Credential Store are as follows:

    By default the Credential Store is located under the “dircrd” directory. If you want to specify a different location, you can use the CREDENTIALSTORELOCATION parameter in the GLOBALS file.

    Example: CREDENTIALSTORELOCATION /u01/app/oracle/OGG_PASSWD

    Go to the OGG home, connect to GGSCI, and create the store:

    cd $OGG_HOME

    ./ggsci

    GGSCI> ADD CREDENTIALSTORE

    66) How do you add credentials to the credential store?

    Ans:

    ALTER CREDENTIALSTORE ADD USER userid [PASSWORD password] [ALIAS alias] [DOMAIN domain]

    Example: GGSCI> ALTER CREDENTIALSTORE ADD USER GGS@orcl PASSWORD oracle ALIAS external DOMAIN OracleGoldenGate

    67) How do you retrieve information from the Oracle Credential Store?

    Ans:

    GGSCI> INFO CREDENTIALSTORE

    OR

    GGSCI> INFO CREDENTIALSTORE DOMAIN OracleGoldenGate

    68) What are the different data encryption methods available in OGG 12c?

    Ans:

    In OGG 12c you can encrypt data with the following 2 methods:

    • Encrypt Data with Master Key and Wallet
    • Encrypt Data with ENCKEYS

    69) How do you enable Oracle GoldenGate for Oracle database 11.2.0.4?

    Ans:

    The database services required to support Oracle GoldenGate capture and apply must be enabled explicitly for an Oracle 11.2.0.4 database. This is required for all modes of Extract and Replicat.

    To enable Oracle GoldenGate, set the following database initialization parameter.

    • All instances in Oracle RAC must have the same setting.
    • ENABLE_GOLDENGATE_REPLICATION=true
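
    For example:

    SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;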

    70) How does the Replicat work in a Coordinated Mode?

    Ans:

    In a Coordinated Mode Replicat operates as follows:

    • Reads the Oracle GoldenGate trail.
    • Performs data filtering, mapping, and conversion.
    • Constructs SQL statements that represent source database DML or DDL transactions (in committed order).
    • Applies the SQL to the target through the SQL interface that is supported for the given target database, such as ODBC or the native database interface.

    71) What is the difference between Classic and Coordinated Replicat?

    Ans:

    The difference between classic mode and coordinated mode is that Replicat is multithreaded in coordinated mode. Within a single Replicat instance, multiple threads read the trail independently and apply transactions in parallel. Each thread handles all of the filtering, mapping, conversion, SQL construction, and error handling for its assigned workload. A coordinator thread coordinates the transactions across threads to account for dependencies among the threads.

    72) How do you create a COORDINATED REPLICATE in OGG 12c?

    Ans:

    You can create a COORDINATED REPLICAT with the following OGG command:

    ADD REPLICAT rfin, COORDINATED MAXTHREADS 50, EXTTRAIL dirdat/et
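
    Within the coordinated Replicat parameter file, work can then be spread across threads with the THREADRANGE option of the MAP parameter; a sketch with hypothetical schema names:

    MAP hr.*, TARGET hr.*, THREADRANGE (1-10);
    MAP fin.*, TARGET fin.*, THREADRANGE (11-20);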

    73) If I have created a Replicat process in OGG 12c and forgot to specify DISCARDFILE parameter. What will happen?

    Ans:

    Starting with OGG 12c, if you don’t specify a DISCARDFILE, the OGG process generates a discard file with default values whenever the process is started with the START command through GGSCI.

    74) Is there a separate background process for writing flashback logs?

    Ans:

    Yes. RVWR (Recovery Writer, a.k.a Flashback Writer) was introduced in Oracle 10g to write flashback data from the Flashback Buffer in the SGA to the flashback database logs on disk.

    75) Do I need to shutdown and mount the database to turn flashback on?

    Ans:

    ALTER DATABASE FLASHBACK ON is an online operation in 11g Release 2. In 10g Release 2 the database must be mounted, but not open.

    Once flashback is turned on, Oracle starts to save the before-images of database blocks before they are changed. This guarantees that the database can be flashed back to the exact point in time when flashback mode was turned on.

    76) What happens if RVWR cannot write to disk?

    Ans:

    It depends on the context in which the write error occurs:

    • If there’s a Guaranteed Restore Point, the database crashes to ensure the restore point guarantee is not voided.
    • If there isn’t a Guaranteed Restore Point and it’s a primary database, Flashback Mode will be automatically turned off for the database, which will continue to operate normally.
    • If there isn’t a Guaranteed Restore Point and it’s a standby database, the database will hang until the cause of the write failure is fixed.

    77) Is it possible to specify the size of the Flashback Buffer in the SGA?

    Ans:

    Yes, but indirectly. The size of the Flashback Buffer is set to 2 * LOG_BUFFER. For performance reasons, it’s recommended to set LOG_BUFFER to at least 8MB for databases running in Flashback Mode.

    UPDATE: For large 11.1.0.7+ databases with more than a 4GB SGA, you may consider setting LOG_BUFFER to values in the range of 32-64 MB.

    78) Can RMAN be used to backup flashback logs?

    Ans:

    No. Flashback Logs are not backed up. Even if the command BACKUP RECOVERY AREA is used to backup the contents of the FRA to tape only the following file types are backed up: full and incremental backup sets, control file autobackups, datafile copies, and archived redo logs.

    • Flashback Logs are considered to be transient files and cannot be backed up by RMAN. They are not needed for media recovery.

    79) When are the flashback logs deleted?

    Ans:

    Flashback logs are managed by Oracle only. Oracle will try to keep as many flashback logs as needed to satisfy the DB_FLASHBACK_RETENTION_TARGET parameter. However, if there is space pressure in the Flash Recovery Area (FRA), flashback logs may be deleted to make room for other things, like backups and archived logs.

    80) How can you determine if the parameters for a process were recently changed?

    Ans:

    Whenever a process is started, the parameters in the .prm file for the process are written to the process REPORT. You can look at older process reports to view the parameters which were used to start the process. By comparing the older and the current reports you can identify the changes in the parameters.

    81) List a few useful Logdump commands to view and search data stored in OGG trail files.

    Ans:

    Below are a few logdump commands used on a daily basis for displaying or analyzing data stored in a trail file.

    • $ ./logdump – to connect to the logdump prompt
    • logdump> open /u01/app/oracle/dirdat/et000001 – to open a trail file in logdump
    • logdump> fileheader on – to view the trail file header
    • logdump> ghdr on – to view the record header with data
    • logdump> detail on – to view column information
    • logdump> detail data – to display HEX and ASCII data values to the column list
    • logdump> reclen 200 – to control how much record data is displayed
    • logdump> pos 0 – To go to the first record
    • logdump> next (or simply n) – to move from one record to another in sequence
    • logdump> count – counting records in a trail

    82) How would you comment out a line in the parameter file?

    Ans:

    You can use two hyphens (“--”) at the start of a line to comment it out, for example: -- this line is a comment.

    83) I have configured Oracle GoldenGate integrated capture process using the default values. As the data load increases I see that extract starts lagging behind by an hour (or more) and database performance degrades. How will you resolve this performance issue?

    Ans:

    When operating in integrated capture mode, you must make sure that you have assigned sufficient memory to STREAMS_POOL_SIZE. An undersized STREAMS_POOL_SIZE or limiting the streams pool to use a specific amount of memory can cause troubles.

    The best practice is to allocate STREAMS_POOL_SIZE at the instance level and set the maximum SGA at the GoldenGate process level, as below:

    SQL> ALTER SYSTEM SET STREAMS_POOL_SIZE=3G;

    TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 2048, PARALLELISM 4)

    84) Why would you segregate the tables in a replication configuration? How would you do it?

    Ans:

    In OGG you can configure replication at the schema level or at the table level using the TABLE parameter of Extract and the MAP parameter of Replicat.

    • For replicating the entire database you can list all the schemas in the database in the extract/replicat parameter file.
    • Depending on the amount of redo generation you can split the tables in a schema across multiple Extracts and Replicats to improve the performance of data replication. Alternatively you can group a set of tables in the configuration by application functionality.
    • You may also need to move tables which have long-running transactions into a separate Extract process to eliminate lag on the other tables.
    • Let’s say that you have a schema named SCOTT and it has 100 tables.
    • Out of these hundred tables, 50 tables are heavily utilized by the application.

    To improve the overall replication performance you create 3 Extracts and 3 Replicats as follows (see the sketch after this list):

    • Ext_1/Rep_1 –> 25 tables
    • Ext_2/Rep_2 –> 25 tables
    • Ext_3/Rep_3 –> 50 tables
    • Ext_1/Rep_1 and Ext_2/Rep_2 contain 25 tables each which are heavily utilized or generate more redo.
    • Ext_3/Rep_3 contains all the other 50 tables which are least used.
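
    As an illustrative sketch (group, trail, and table names are hypothetical), the three Extract parameter files would differ only in their trail and TABLE lists:

    EXTRACT ext_1
    USERID ggadmin, PASSWORD ******
    EXTTRAIL ./dirdat/a1
    TABLE scott.orders;
    TABLE scott.order_items;
    -- ...the remaining heavily utilized tables assigned to this group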

    85) How can we report on long running transactions?

    Ans:

    The WARNLONGTRANS parameter can be specified with a threshold time that a transaction can be open before Extract writes a warning message to the ggs error log.

    Example: WARNLONGTRANS 1h, CHECKINTERVAL 10m

    86) What command can be used to view the checkpoint information for the extract process?

    Ans:

    Use the following command to view the Extract checkpoint information.

    • GGSCI> INFO EXTRACT <group_name>, SHOWCH
    • GGSCI> INFO EXTRACT ext_fin, SHOWCH

    87) How is the RESTARTCOLLISIONS parameter different from HANDLECOLLISIONS?

    Ans:

    The RESTARTCOLLISIONS parameter is used to skip ONE transaction only, in a situation where the GoldenGate process crashed after performing an operation (INSERT, UPDATE or DELETE) in the database but could not checkpoint the process information to the checkpoint file/table. On recovery it will skip that transaction and AUTOMATICALLY continue to the next operation in the trail file.

    • When using HANDLECOLLISIONS GoldenGate will continue to overwrite and process transactions until the parameter is removed from the parameter files and the processes restarted.
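
    A sketch of a Replicat parameter file using HANDLECOLLISIONS during an initial-load catch-up (all names are hypothetical); the parameter is removed and the process restarted once the target has caught up:

    REPLICAT rep1
    USERID ggadmin, PASSWORD ******
    HANDLECOLLISIONS
    MAP hr.*, TARGET hr.*;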

    88) What does the RMAN-08147 warning signify when your environment has a GoldenGate Capture Processes configured?

    Ans:

    This occurs when the V$ARCHIVED_LOG.NEXT_CHANGE# is greater than the SCN required by the GoldenGate Capture process and RMAN is trying to delete the archived logs. The RMAN-08147 error is raised when RMAN tries to delete these files.

    • When the database is open, it uses the DBA_CAPTURE values to determine the log files required for mining. However, if the database is in the mount state, the V$ARCHIVED_LOG.NEXT_CHANGE# value is used.

    89) What are the steps required to add a new table to an existing replication setup?

    Ans:

    The steps to be executed would be the following:

    • Include the new table in the Extract & Pump processes.
    • Obtain the starting database SCN and copy the source table data to the target database.
    • Start Replicat on the target from that source SCN point (see the sketch below).
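
    A hedged sketch of that sequence (group names, table name, and SCN are placeholders):

    GGSCI> STOP EXTRACT ext1
    -- add "TABLE hr.new_table;" to the Extract and Pump parameter files
    GGSCI> START EXTRACT ext1
    SQL> SELECT current_scn FROM v$database;
    -- copy the table to the target as of that SCN, then:
    GGSCI> START REPLICAT rep1 AFTERCSN <scn>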

    90) What does the GoldenGate CSN equate to, in the Oracle Database?

    Ans:

    It is equivalent to the Oracle database SCN transaction number.

    91)  What is the purpose of the DEFGEN utility?

    Ans:

    When the source and the target schema objects are not the same (different DDL) the Replicat process needs to know the source definition of the objects. The output from the DEFGEN utility is used in conjunction with the trail data to determine which column value in the trail belongs to which column.
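
    A minimal DEFGEN parameter file and run might look like this (paths and names are illustrative):

    DEFSFILE ./dirdef/source.def
    USERID ggadmin, PASSWORD ******
    TABLE hr.employees;

    $ ./defgen paramfile ./dirprm/defgen.prm

    The generated definitions file is then copied to the target and referenced with the SOURCEDEFS parameter in the Replicat parameter file.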

    92) We want to set up one-way data replication for my online transaction processing application. However there are compressed tables in the environment. Please suggest how I can achieve it.

    Ans:

    You must use OGG 11.2 and configure the GoldenGate Integrated Capture process to extract data from compressed tables.

    Note: Versions prior to OGG 11.2 do not support extracting data from compressed tables.

    93) We want to set up one-way data replication using Integrated Extract for my Oracle database running 10.2.0.4. Please suggest the best architecture to achieve it.

    Ans:

    The Oracle GoldenGate Integrated Capture process supports Oracle databases 10.2 and higher. But if you are running Oracle database 10.2 and want to use the Integrated Capture process, then you must configure the downstream topology.

    94) I am migrating my Oracle database from non-exadata to exadata but my source OGG software and target OGG software versions are different (Say Source is running OGG 11.1 and target is running OGG 11.2). How can I configure my OGG process to work in such configuration?

    Ans:

    It is recommended that all instances of Oracle GoldenGate be the same version to take advantage of the new functionality, but this is not always possible and is not required. In this scenario, OGG provides a parameter called ‘FORMAT RELEASE’ which allows customers to use different versions of Oracle GoldenGate Extract, trail files, and Replicat together.

    Example: RMTTRAIL /u01/app/oracle/dirdat/et, FORMAT RELEASE 11.1
