
Spark-Cassandra-Connector: inserting data with saveToCassandra

Posted: 2016-01-21 19:49:38


Saving data to Cassandra from the spark-shell:

// saveToCassandra and SomeColumns come from the connector's implicits
import com.datastax.spark.connector._

// normalfill is an RDD[String] of \u0005-delimited records
var data = normalfill.map(line => line.split("\u0005"))

// one tuple element per column listed in SomeColumns
data.map(
  line => (line(0), line(1), line(2), line(3))
).saveToCassandra(
  "cui",
  "oper_ios",
  SomeColumns("user_no", "cust_id", "oper_code", "oper_time")
)
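
If the write succeeds, the rows can be read straight back as an RDD through the same connector implicits. A minimal sketch, assuming the spark-shell was started with the connector on the classpath and that cui.oper_ios has the columns listed above:

import com.datastax.spark.connector._

// read back a few rows from cui.oper_ios to verify the save
sc.cassandraTable("cui", "oper_ios")
  .take(5)
  .foreach(row => println(row.getString("user_no") + " " + row.getString("oper_time")))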

 

When a target column's type is counter, the default behavior of saveToCassandra is to increment: the written value is added to the existing count rather than overwriting it.

 

CREATE TABLE cui.incr(
 name text,
 count counter,
 PRIMARY KEY (name)
)

 

scala> var rdd = sc.parallelize(Array(("cui", 100 )))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[820] at parallelize at <console>:42

scala>  rdd.saveToCassandra("cui","incr", SomeColumns("name","count"))
16/01/21 16:55:35 INFO core.Cluster: New Cassandra host /172.25.1.158:9042 added
……

// cui.incr now contains: name = 'cui', count = 100

scala> var rdd = sc.parallelize(Array(("cui", 100 )))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[821] at parallelize at <console>:42

scala>  rdd.saveToCassandra("cui","incr", SomeColumns("name","count"))

// cui.incr now contains: name = 'cui', count = 200 (the second save added 100 to the existing counter instead of overwriting it)
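
The accumulated value can also be checked from the same shell session instead of cqlsh; a minimal sketch using the connector's cassandraTable read path (counter columns are 64-bit, hence getLong):

import com.datastax.spark.connector._

// should print 200 after the two saves above
sc.cassandraTable("cui", "incr")
  .where("name = ?", "cui")
  .collect()
  .foreach(row => println(row.getLong("count")))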


Original post: http://www.cnblogs.com/tugeler/p/5148909.html
