# Redis Enterprise Oracle Golden Gate Connector
The Redis Enterprise Change Data Capture (CDC) Connector for Oracle GoldenGate is a custom user exit that makes it easier to configure and move changes from Oracle to a Redis Enterprise database or to other sinks, e.g. JMS. Below are a few important features of the connector:
Oracle Golden Gate Connector Redis high-level architecture diagram
The GoldenGate application adapter for Java should be set up and configured. This connector has been tested with the following GG application adapter versions:
- 12.2.0.1.1
- 19.1
The installation and setup have also been tested with the built-in file handler, and the package includes the necessary Velocity template file to convert the CDC events into an XML file.
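As an illustration only, a Velocity template that renders a CDC event as XML might look like the sketch below; the variable names (`$tableName`, `$opType`, `$columns`) are hypothetical and not necessarily the bindings used by the template shipped in the package:

```
## Hypothetical Velocity sketch: render one CDC event as an XML element.
## $tableName, $opType, and $columns are assumed names, not the package's actual bindings.
<operation table="$tableName" type="$opType">
#foreach( $col in $columns )
  <column name="$col.name" value="$col.afterValue"/>
#end
</operation>
```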
Download the latest release, e.g. `wget https://github.com/RedisLabs-Field-Engineering/rl-connector-gg/releases/download/v1.0/rl-connector-gg-1.0.tar.gz`, and untar the archive with `tar -xvf rl-connector-gg-1.0.tar.gz`.
All the contents will be extracted under `rl-connector-gg`.
Contents of `rl-connector-gg`:
#### Mapper configuration file
### Sample `mapper.xml` under the `rl-connector-gg/ogg/extlib/config/mappers` folder

```xml
<!-- Schema name, e.g. HR. One mapper file per schema; you can have multiple
     tables in the same mapper file as long as the schema is the same. Otherwise,
     create multiple mapper files, e.g. mapper1.xml, mapper2.xml, or
     <table_name>.xml, under the mappers folder of your config dir. -->
<Schema xmlns="http://cdc.connector.ivoyant.com/Mapper/Config" name="ORCLPDB1.HR">
  <Tables>
    <!-- EMPLOYEES table under the HR schema -->
    <Table name="EMPLOYEES">
      <!-- publishBefore - global setting that specifies whether before values
           are published for all columns; it can be overridden at each column level -->
      <Processors>
        <RedisProcessor id="EMPLOYEES" processorID="REDIS_HASH_PROCESSOR"
                        publishChangedColumnsOnly="false" deleteOnKeyUpdate="true"
                        prependTableNameToKey="true">
          <Mapper>
            <!-- key column on the source EMPLOYEES table -->
            <Column src="EMPLOYEE_ID" target="EmpNumber" type="INT"/>
            <Column src="FIRST_NAME" target="FName"/>
            <Column src="LAST_NAME" target="LName"/>
            <Column src="JOB_ID" target="JobId"/>
            <Column src="HIRE_DATE" target="StartDate"/>
            <Column src="SALARY" target="Salary" type="DOUBLE"/>
          </Mapper>
          <RedisSink connectionId="cluster1" async="true"></RedisSink>
        </RedisProcessor>
      </Processors>
    </Table>
    <Table name="EMPLOYEES_DATA">
      <Processors>
        <RedisProcessor id="EMPLOYEES_DATA" processorID="REDIS_STRING_PROCESSOR"
                        publishChangedColumnsOnly="false" deleteOnKeyUpdate="true"
                        prependTableNameToKey="false">
          <Mapper>
            <!-- key column on the source EMPLOYEES_DATA table -->
            <Column src="EMPLOYEE_ID" target="EmpNumber" type="INT"/>
            <Column src="FIRST_NAME" target="FName"/>
            <Column src="LAST_NAME" target="LName"/>
            <Column src="JOB_ID" target="JobId"/>
            <Column src="HIRE_DATE" target="StartDate"/>
            <Column src="SALARY" target="Salary" type="DOUBLE"/>
          </Mapper>
          <RedisSink connectionId="cluster1" async="true">
            <Formatter>FREEMARKER_FORMATTER</Formatter>
            <FormatterTemplate>pipe-delimited.ftl</FormatterTemplate>
          </RedisSink>
        </RedisProcessor>
      </Processors>
    </Table>
  </Tables>
</Schema>
```
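Conceptually, the mapper above renames source columns to target field names and builds the Redis key from the key column, prepending the table name when `prependTableNameToKey` is `true`. The following is a minimal Python sketch of that mapping logic only; the function and row format are illustrative, not the connector's actual API:

```python
# Illustrative model of the <Mapper> semantics above; NOT the connector's API.
COLUMN_MAP = {  # src -> (target, type) as declared in the <Mapper> block
    "EMPLOYEE_ID": ("EmpNumber", int),
    "FIRST_NAME": ("FName", str),
    "LAST_NAME": ("LName", str),
    "JOB_ID": ("JobId", str),
    "HIRE_DATE": ("StartDate", str),
    "SALARY": ("Salary", float),
}

def map_row(table, row, key_column="EMPLOYEE_ID", prepend_table_name=True):
    """Build the Redis key and hash fields for one CDC row."""
    fields = {target: cast(row[src])
              for src, (target, cast) in COLUMN_MAP.items() if src in row}
    key = str(fields[COLUMN_MAP[key_column][0]])
    if prepend_table_name:
        key = f"{table}:{key}"
    return key, fields

key, fields = map_row("EMPLOYEES", {
    "EMPLOYEE_ID": "100", "FIRST_NAME": "Steven", "LAST_NAME": "King",
    "JOB_ID": "AD_PRES", "HIRE_DATE": "2003-06-17", "SALARY": "24000",
})
# key is "EMPLOYEES:100"; fields holds the renamed, type-cast columns.
```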
If you don't need any transformation of the source columns, you can simply use the `passThrough` option; then you don't have to explicitly map each source column to the Redis target data structure.

```xml
<Schema xmlns="http://cdc.connector.ivoyant.com/Mapper/Config" name="ORCLPDB1.HR">
  <Tables>
    <!-- EMPLOYEES table under the HR schema -->
    <Table name="EMPLOYEES">
      <Processors>
        <RedisProcessor id="EMPLOYEES-Passthrough" processorID="REDIS_HASH_PROCESSOR"
                        passThrough="true" publishChangedColumnsOnly="false"
                        deleteOnKeyUpdate="true" prependTableNameToKey="true">
          <Mapper>
            <!-- key column on the source EMPLOYEES table; with passThrough,
                 only the key column needs an explicit mapping -->
            <Column src="EMPLOYEE_ID" target="EmpNumber" type="INT"/>
          </Mapper>
          <RedisSink connectionId="cluster1" async="true"></RedisSink>
        </RedisProcessor>
      </Processors>
    </Table>
  </Tables>
</Schema>
```
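With `passThrough`, only the key column gets an explicit mapping; the remaining source columns are carried to the target under their original names. Again, a small Python model of that behavior, not the connector's actual API:

```python
# Illustrative model of passThrough: unmapped source columns keep their
# original names; only EMPLOYEE_ID is renamed to EmpNumber (per the mapper).
def map_row_passthrough(table, row, key_src="EMPLOYEE_ID", key_target="EmpNumber"):
    fields = dict(row)                              # pass all columns through as-is
    fields[key_target] = int(fields.pop(key_src))   # apply the one explicit mapping
    return f"{table}:{fields[key_target]}", fields

key, fields = map_row_passthrough("EMPLOYEES", {
    "EMPLOYEE_ID": "100", "FIRST_NAME": "Steven", "LAST_NAME": "King",
})
# key is "EMPLOYEES:100"; FIRST_NAME and LAST_NAME keep their source names.
```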
Please see here for Redis URI syntax.
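For orientation, Redis URIs commonly follow this general shape (the exact options supported depend on the client library in use; the host below is a hypothetical example):

```
redis://[username:password@]host[:port][/database]    # plain connection
rediss://[username:password@]host[:port]              # TLS connection
redis://default:secret@redis-12000.example.com:12000  # hypothetical example
```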
If you had the sample file handler working before, this connector should work as long as all the configs are in the right place. Follow the steps below to start the connector:
> **Note:** You can use the command `info all` to see the EXTRACT status, and view its report with `view extract javaue`. Logs of the extract are also located in the `/u01/app/ogg/dirrpt` directory.
If everything has been configured properly, you will be able to observe the data being replicated in Redis.
If you make any changes, such as replacing a jar or changing a configuration file, you must stop the extract and start it again for the changes to take effect.
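From the GGSCI prompt, the stop/start cycle looks like this (using the group name JAVAUE from the shipped samples):

```
GGSCI> STOP EXTRACT JAVAUE
GGSCI> START EXTRACT JAVAUE
GGSCI> INFO ALL
```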
For troubleshooting, look at the standard user exit log files and the connector log file (default location is `logs/cdc-1.log`). Please ensure that the Logback jars are available in the classpath, as shown in the sample `javaue.properties` under `javawriter.bootoptions`.
The connector works with an EXTRACT setup, as shown in the sample `JAVAUE.prm` file under the `dirprm` directory, but starting with Oracle GoldenGate version 19.x you need a REPLICAT setup. Here is a sample REPLICAT:
```
REPLICAT JAVAUE
getEnv (JAVA_HOME)
getEnv (LD_LIBRARY_PATH)
getEnv (PATH)
TARGETDB LIBFILE libggjava.so SET property=dirprm/javaue.properties
STATOPTIONS RESETREPORTSTATS
REPORT AT 11:59
REPORTROLLOVER AT 00:01
REPORTCOUNT EVERY 30 MINUTES, RATE
GROUPTRANSOPS 1000
GETUPDATEBEFORES
MAP HR.*, TARGET HR.*;
```
Please see TABLE | MAP options for EXTRACT and REPLICAT.