Introduction to GoldenGate
Overview of the GoldenGate architecture
Oracle® GoldenGate Administration Guide
For the SQL/MX database, the Extract module also includes a program named VAMSERV.
Extract starts
VAMSERV, and together they retrieve and process database changes from the
audit trails created by TMF-enabled applications on a NonStop system.
Overview of data pumps
A data pump is a secondary Extract group within the source GoldenGate configuration. If
a data pump is not used, Extract must send data to a remote trail on the target. In a typical
configuration that includes a data pump, however, the primary Extract group writes to a
trail on the source system. The data pump reads this trail and sends the data across the
network to a remote trail on the target. The data pump adds storage flexibility and also
serves to isolate the primary Extract process from TCP/IP activity.
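As a sketch of this typical configuration, the GGSCI commands below (for example, in an OBEY file) register a primary Extract group that writes to a local trail, and a data pump that reads that trail and writes to a remote trail on the target. The group names, trail paths, and capture options are illustrative only and would vary by installation:

```
-- Primary Extract captures from the transaction log into a local trail
ADD EXTRACT ext1, TRANLOG, BEGIN NOW
ADD EXTTRAIL ./dirdat/aa, EXTRACT ext1

-- Data pump reads the local trail and writes to a remote trail on the target
ADD EXTRACT pump1, EXTTRAILSOURCE ./dirdat/aa
ADD RMTTRAIL ./dirdat/bb, EXTRACT pump1
```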
Like a primary Extract group, a data pump can be configured for either online or batch
processing. It can perform data filtering, mapping, and conversion, or it can be configured
in pass-through mode, where data is passively transferred as-is, without manipulation.
Pass-through mode increases the throughput of the data pump, because all of the
functionality that looks up object definitions is bypassed.
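A pass-through data pump can be sketched with a parameter file such as the one below, where the PASSTHRU parameter disables the object-definition lookups described above. The group name, host, port, trail path, and table specification are illustrative:

```
EXTRACT pump1
-- Transfer data as-is; no filtering, mapping, or conversion
PASSTHRU
RMTHOST tgthost, MGRPORT 7809
RMTTRAIL ./dirdat/bb
TABLE hr.*;
```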
In most business cases, it is best practice to use a data pump. Some reasons for using a data
pump include the following:
● Protection against network and target failures: In a basic GoldenGate configuration,
with only a trail on the target system, there is nowhere on the source system to store
data that Extract continuously extracts into memory. If the network or the target
system becomes unavailable, the primary Extract could run out of memory and abend.
However, with a trail and data pump on the source system, captured data can be moved
to disk, preventing the abend. When connectivity is restored, the data pump extracts
the data from the source trail and sends it to the target system(s).
● Implementing several phases of data filtering or transformation. When using
complex filtering or data transformation configurations, you can configure a data pump
to perform the first transformation either on the source system or on the target system,
and then use another data pump or the Replicat group to perform the second
transformation.
● Consolidating data from many sources to a central target. When synchronizing multiple
source databases with a central target database, you can store extracted data on each
source system and use data pumps on each of those systems to send the data to a trail
on the target system. Dividing the storage load between the source and target systems
reduces the need for massive amounts of space on the target system to accommodate
data arriving from multiple sources.
● Synchronizing one source with multiple targets. When sending data to multiple target
systems, you can configure data pumps on the source system for each target. If network
connectivity to any of the targets fails, data can still be sent to the other targets.
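The one-source, multiple-target case above can be sketched as two data-pump parameter files on the source system, each reading the same local trail but sending to a different target. All names, hosts, ports, and paths are illustrative:

```
-- Parameter file for the pump serving target A
EXTRACT pumpa
PASSTHRU
RMTHOST hosta, MGRPORT 7809
RMTTRAIL ./dirdat/ta
TABLE hr.*;

-- Parameter file (separate file) for the pump serving target B
EXTRACT pumpb
PASSTHRU
RMTHOST hostb, MGRPORT 7809
RMTTRAIL ./dirdat/tb
TABLE hr.*;
```

Because each pump maintains its own read position in the local trail, a network failure on the path to one target stalls only that pump; the other continues sending.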
Overview of Replicat
The Replicat process runs on the target system. Replicat reads extracted data changes and
DDL changes (if supported) that are specified in the Replicat configuration, and then it
replicates them to the target database. You can configure Replicat in one of the following
ways:
● Initial loads: For an initial data load, Replicat can apply the extracted data
directly to target objects or route it to a high-speed bulk-load utility.