Tags: dataset, vector
Scala:

import org.apache.spark.ml.linalg.Vectors

// In spark-shell the implicits needed by createDataset are already in scope;
// in a standalone application add: import spark.implicits._
val data = Seq(
  (7, Vectors.dense(0.0, 0.0, 18.0, 1.0), 1.0),
  (8, Vectors.dense(0.0, 1.0, 12.0, 0.0), 0.0),
  (9, Vectors.dense(1.0, 0.0, 15.0, 0.1), 0.0)
)
val df = spark.createDataset(data).toDF("id", "features", "clicked")
Python:

from pyspark.ml.linalg import Vectors

df = spark.createDataFrame([
    (7, Vectors.dense([0.0, 0.0, 18.0, 1.0]), 1.0),
    (8, Vectors.dense([0.0, 1.0, 12.0, 0.0]), 0.0),
    (9, Vectors.dense([1.0, 0.0, 15.0, 0.1]), 0.0)],
    ["id", "features", "clicked"])
If your data is a pair RDD instead:

stratified_CV_data = training_data.union(test_data)  # pair RDD
# An explicit schema can be passed instead of column names:
# schema = StructType([
#     StructField("label", IntegerType(), True),
#     StructField("features", VectorUDT(), True)])
vectorized_CV_data = sqlContext.createDataFrame(stratified_CV_data, ["label", "features"])  # , schema)
This conversion is required because Spark's cross-validation only accepts a DataFrame as its dataset, which is pretty exasperating!
Original post: http://www.cnblogs.com/bonelee/p/7805358.html