Dbvisit Replicate Connector for Kafka 


The Dbvisit Replicate Connector integrates with Kafka to enable you to stream change data in real time from Oracle transactional databases through to Kafka, and beyond. Working alongside Dbvisit Replicate, which uses its own efficient redo-log-based change data capture technology to detect changes in the Oracle database, the Dbvisit Replicate Connector then ingests these changes into Kafka.

As companies become more acutely aware of the true value of the customer and operational data held in their own databases (RDBMS), they are seeking ways to efficiently access, aggregate, stream and analyze this resource. The resulting insights have the power to transform the way companies work, through trend analysis, improved customer engagement and more accurate business decisions.

Get Started

Our connector is now LIVE, and you can find us listed on Confluent's Certified Kafka Connector page.

Stay tuned to the Dbvisit Blog for more on our Connector and its developments.


The Dbvisit Replicate Connector for Kafka is a SOURCE connector for the Kafka Connect framework, run within the Confluent Platform. It delivers a continuous flow of Oracle database event data, enabling companies to directly access, process in flight, channel and direct this stream of information in real time.

This connector enables you to stream DML change data records (inserts, updates and deletes) from an Oracle database into Kafka topics. The changes are made available to the connector in PLOG (parsed log) files generated by the Dbvisit Replicate application.
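By way of illustration, a minimal configuration for running the connector in a standalone Kafka Connect worker might look like the sketch below. The connector class name matches the published connector; the remaining property names and all values are assumptions shown for orientation only, so consult the connector documentation for the exact options supported by your version.

    # Illustrative standalone connector configuration
    # (values are placeholders, not a working setup).
    name=dbvisit-replicate-source
    connector.class=com.dbvisit.replicate.kafkaconnect.ReplicateSourceConnector
    tasks.max=1
    # Directory where Dbvisit Replicate MINE writes its PLOG files
    # (assumed property name and path).
    plog.location.uri=file:/u01/app/dbvisit/replicate/mine
    # Prefix used when deriving Kafka topic names from replicated tables
    # (assumed property name and value).
    topic.prefix=REP-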

Using custom-crafted, proprietary technology, the Dbvisit Replicate MINE process reads changes directly from the Oracle redo logs as they are made, extracting this information (inserts, updates, deletes and DDL) for the tables and schemas it has been configured to capture.

These changes are then bundled into a proprietary binary format, called a PLOG, which is streamed in real time and translated by the Dbvisit Replicate Connector, making the data immediately available in industry-standard formats to a range of messaging and dataflow applications.
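Because the connector publishes into ordinary Kafka topics, downstream applications can consume the change records with standard Kafka clients. The following is a minimal Java sketch; the topic name REP-SOE.CUSTOMERS, the local broker address and the use of plain string deserialization are assumptions made for readability. In a Confluent Platform deployment the records are typically Avro-encoded and would instead be read with the Schema Registry's Avro deserializer.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ChangeRecordConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("group.id", "change-record-demo");
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            // Hypothetical topic: the connector derives topic names from the
            // replicated schema and table names.
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("REP-SOE.CUSTOMERS"));
                while (true) {
                    ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        // Each record carries one captured change event.
                        System.out.printf("offset=%d value=%s%n",
                            record.offset(), record.value());
                    }
                }
            }
        }
    }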

Download the Dbvisit Replicate Connector for Kafka Datasheet.


Typical use cases include:

  • Financial Trading (fraud detection)
  • Real-time System Monitoring (system anomalies, correlations)
  • Business Intelligence (review, planning, forecasting)
  • Real-time Analytics (planning, customized services)

If you are curious about how real-time data streaming works alongside Kafka to deliver your data where you need it, take a break and listen to our on-demand webinar hosted by Kafka expert Gwen Shapira from Confluent (founded by the creators of Apache Kafka™) and Mike Donovan, CTO at Dbvisit, who will discuss: