
Integrating Trident with Kafka



First, write a print function, KafkaPrintFunction. It reads each message from the "str" field (the output field name declared by StringScheme), prints it, and re-emits it:

import org.apache.storm.trident.operation.BaseFunction;
import org.apache.storm.trident.operation.TridentCollector;
import org.apache.storm.trident.tuple.TridentTuple;
import org.apache.storm.tuple.Values;

public class KafkaPrintFunction extends BaseFunction {

    @Override
    public void execute(TridentTuple input, TridentCollector collector) {
        String msg = input.getStringByField("str");
        System.out.println(this.getClass().getSimpleName() + ": " + msg);
        collector.emit(new Values(msg));
    }

}

Next, write the topology that wires the Kafka spout into Trident:

import net.baiqu.shop.report.utils.Constants;
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.kafka.BrokerHosts;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.kafka.trident.TransactionalTridentKafkaSpout;
import org.apache.storm.kafka.trident.TridentKafkaConfig;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.trident.Stream;
import org.apache.storm.trident.TridentTopology;
import org.apache.storm.tuple.Fields;

/**
 * Wires a Kafka spout into a Trident topology.
 */
public class KafkaTrident {

    public static void main(String[] args) {
        TridentTopology topology = new TridentTopology();

        // Discover Kafka brokers through ZooKeeper (classic storm-kafka module)
        BrokerHosts hosts = new ZkHosts(Constants.ZK_HOSTS);
        String topic = "tridentTestTopic";
        // Client id used by the spout's Kafka consumer
        String id = "kafka.queue.tridentTestTopic";
        TridentKafkaConfig kafkaConfig = new TridentKafkaConfig(hosts, topic, id);
        // Deserialize each Kafka message into a single string field named "str"
        kafkaConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
        TransactionalTridentKafkaSpout kafkaSpout = new TransactionalTridentKafkaSpout(kafkaConfig);

        Stream stream = topology.newStream("kafkaSpout", kafkaSpout);
        // Read the "str" field, print it, and emit it again as "result"
        stream.shuffle().each(new Fields("str"), new KafkaPrintFunction(), new Fields("result"));

        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("kafkaTridentDemo", new Config(), topology.build());
    }

}
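The example above runs inside a LocalCluster, which is convenient for local testing. To deploy the same topology to a real Storm cluster, it could instead be submitted through StormSubmitter. The sketch below is a minimal, self-contained variant; the ZooKeeper address "localhost:2181" and the worker count are illustrative assumptions, not values from the original post.

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.kafka.trident.TransactionalTridentKafkaSpout;
import org.apache.storm.kafka.trident.TridentKafkaConfig;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.trident.TridentTopology;
import org.apache.storm.tuple.Fields;

public class KafkaTridentClusterMain {

    public static void main(String[] args) throws Exception {
        // Same spout and stream setup as KafkaTrident above; the ZooKeeper address
        // is hard-coded here only to keep the example self-contained.
        TridentKafkaConfig kafkaConfig = new TridentKafkaConfig(
                new ZkHosts("localhost:2181"), "tridentTestTopic", "kafka.queue.tridentTestTopic");
        kafkaConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

        TridentTopology topology = new TridentTopology();
        topology.newStream("kafkaSpout", new TransactionalTridentKafkaSpout(kafkaConfig))
                .shuffle()
                .each(new Fields("str"), new KafkaPrintFunction(), new Fields("result"));

        Config conf = new Config();
        conf.setNumWorkers(2); // illustrative worker count

        // Submit to the cluster the storm CLI is configured for, instead of a LocalCluster
        StormSubmitter.submitTopology("kafkaTridentDemo", conf, topology.build());
    }

}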

Then, from a separate Java project (a Spring application), send test data to Kafka on a schedule:

    @Scheduled(fixedRate = 3000)
    public void shopDataTestJob9() {
        for (int i = 0; i < 1; i++) {
            kafkaTemplate.send("tridentTestTopic", "test kafka trident");
            System.out.println("test kafka trident");
        }
    }
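For context, the scheduled method above would live inside a Spring component with a KafkaTemplate injected. A minimal sketch of such a class is shown below; the class name and constructor injection are assumptions, and it presumes Spring Kafka is on the classpath and @EnableScheduling is enabled in the application's configuration.

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class TridentTestProducer {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public TridentTestProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Sends one test message every 3 seconds to the topic the Trident spout reads from
    @Scheduled(fixedRate = 3000)
    public void shopDataTestJob9() {
        kafkaTemplate.send("tridentTestTopic", "test kafka trident");
        System.out.println("test kafka trident");
    }

}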

Finally, run the Storm project and the Java project. Start the Java project first so that it publishes data to Kafka and the topic gets created; only then can Storm consume that topic.
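If the broker does not auto-create topics, the topic can also be created explicitly before either project starts. Below is a minimal sketch using the Kafka AdminClient; the bootstrap address "localhost:9092" and the single-partition, replication-factor-1 settings are assumptions for a local test, and the original post does not use this approach.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTridentTestTopic {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Broker address is an assumption; the original post does not specify one
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // One partition with replication factor 1 is enough for a local test
            NewTopic topic = new NewTopic("tridentTestTopic", 1, (short) 1);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }

}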

Watch the output: the Storm project's console prints

KafkaPrintFunction: test kafka trident
KafkaPrintFunction: test kafka trident
KafkaPrintFunction: test kafka trident

which shows that Storm Trident is consuming the Kafka data successfully.

 


Original post: https://www.cnblogs.com/tangzhe/p/9618495.html
