Using Confluent's JDBC Connector without installing the entire platform
Reposted from: https://prefrontaldump.wordpress.com/2016/05/02/using-confluents-jdbc-connector-without-installing-the-entire-platform/
I was interested in trying out Confluent’s JDBC connector without installing their entire platform (I’d like to stick to vanilla Kafka as much as possible). Here are the steps I followed to get it working with SQL Server.
Download Kafka 0.9, untar the archive, and create a directory named connect_libs in the kafka root (kafka_2.10-0.9.0.1/connect_libs).
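For example, a minimal sketch of that step (the Apache archive URL is an assumption; use whatever mirror you prefer):
curl -O https://archive.apache.org/dist/kafka/0.9.0.1/kafka_2.10-0.9.0.1.tgz
tar -xzf kafka_2.10-0.9.0.1.tgz
mkdir kafka_2.10-0.9.0.1/connect_libs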
Download the Confluent platform and extract the following jars (you should also be able to pull these from Confluent’s Maven repo, though I was unsuccessful):
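As a rough sketch of pulling the connector jar out of the Confluent distribution (the paths and version numbers below are assumptions based on a Confluent 2.0.x layout; match them to the release you actually downloaded):
# Hypothetical paths -- adjust the Confluent version and jar names to your download
cp confluent-2.0.1/share/java/kafka-connect-jdbc/kafka-connect-jdbc-2.0.1.jar \
   kafka_2.10-0.9.0.1/connect_libs/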
*Place these jars along with the SQL Server JDBC driver in kafka_2.10-0.9.0.1/connect_libs. Update bootstrap.servers in kafka_2.10-0.9.0.1/config/connect-standalone.properties with the broker list.
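For instance (the broker hosts and ports are placeholders for your cluster):
bootstrap.servers=broker1.example.com:9092,broker2.example.com:9092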
Then create kafka_2.10-0.9.0.1/config/connect-jdbc.properties with the settings to try out:
# Connector name and implementation (Confluent's JDBC source connector)
name=sqlserver-feed
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
# SQL Server connection string (host and credentials are placeholders)
connection.url=jdbc:sqlserver://xxx.xxx.xxx.xxx:1433;databaseName=FeedDB;user=user;password=password
# Only these tables are copied
table.whitelist=tblFeedIP,tblFeedURL
# Detect updated rows via the timestamp column, new rows via the incrementing column
mode=timestamp+incrementing
timestamp.column.name=localLastUpdated
incrementing.column.name=id
# Topics are named <prefix><table name>, e.g. stg-tblFeedIP
topic.prefix=stg-
Create the topics stg-tblFeedIP and stg-tblFeedURL on the cluster.
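For example, with the topic tool that ships with Kafka 0.9 (the ZooKeeper address, partition count, and replication factor are placeholders; use values appropriate for your cluster):
bin/kafka-topics.sh --create --zookeeper zk1:2181 --replication-factor 1 --partitions 1 --topic stg-tblFeedIP
bin/kafka-topics.sh --create --zookeeper zk1:2181 --replication-factor 1 --partitions 1 --topic stg-tblFeedURL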
Add the connect_libs directory to the classpath: export CLASSPATH="connect_libs/*"
And finally, run the connector in standalone mode (make sure you are in the kafka root directory): bin/connect-standalone.sh config/connect-standalone.properties config/connect-jdbc.properties
Then, tail your topics to verify that messages are being produced by the connector.
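For example, with the console consumer that ships with Kafka 0.9 (the ZooKeeper address is a placeholder):
bin/kafka-console-consumer.sh --zookeeper zk1:2181 --topic stg-tblFeedIP --from-beginning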
* If you don't care about cluttering up the default libs directory (kafka_2.10-0.9.0.1/libs), you can also just dump the jars there and not have to worry about setting the classpath.