Spark Basics: Three Ways to Create a DataFrame


1. Custom schema (RDD[Row] => Dataset[Row])

import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

// Assumes a live SparkSession named `spark` (e.g. in spark-shell),
// and that each input line looks like: name,age
val peopleRDD = spark.sparkContext.textFile("README.md")

// Build the schema programmatically from a string of field names
val schemaString = "name age"
val fields = schemaString.split(" ")
  .map(fieldName => StructField(fieldName, StringType, nullable = true))
val schema = StructType(fields)

// Convert each line into a Row matching the schema
val rowRDD = peopleRDD
  .map(_.split(","))
  .map(attributes => Row(attributes(0), attributes(1).trim))
rowRDD.collect().foreach(println)

val df = spark.createDataFrame(rowRDD, schema)
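
As a quick check (this step is not in the original post), the resulting DataFrame can be registered as a temporary view and queried with SQL; the view name "people" is arbitrary:

// Register the DataFrame as a temp view and query it with Spark SQL
df.createOrReplaceTempView("people")
spark.sql("SELECT name, age FROM people").show()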

  

2. Implicit conversion via a case class (RDD[Person] => Dataset[Row])

import org.apache.spark.sql.SparkSession

object DFTest {

  // The case class defines the schema: field names become column names
  case class Person(name: String, age: Int)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder
      .appName("DataFrame Application")
      .master("local")
      .getOrCreate()
    // Enables the rdd.toDF() / rdd.toDS() implicit conversions
    import spark.implicits._
    val peopleRDD = spark.sparkContext.textFile("README.md")

    // Each input line is expected to look like: name,age
    val personRDD = peopleRDD
      .map(_.split(","))
      .map(attributes => Person(attributes(0), attributes(1).trim.toInt))
    personRDD.collect().foreach(println)
    personRDD.toDF().show()
  }
}
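
With the same implicits in scope, the RDD can also be turned into a typed Dataset[Person] rather than an untyped DataFrame; a minimal sketch (the filter predicate is just for illustration):

// Typed variant: Dataset[Person] keeps compile-time access to fields
val personDS = personRDD.toDS()
personDS.filter(_.age > 21).show()   // age threshold is arbitrary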

 

3. Read directly from a data source

val df = spark
  .read
  .option("header", value = true)   // first line of the CSV holds column names
  .csv("/home/lg/Documents/data/1987.csv")

In addition, other built-in readers include:

spark.read.jdbc
spark.read.json
spark.read.parquet
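
For illustration, minimal sketches of those readers; every path and connection detail below is hypothetical:

// Hypothetical file paths, for illustration only
val jsonDF = spark.read.json("/path/to/people.json")
val parquetDF = spark.read.parquet("/path/to/people.parquet")

// JDBC needs a URL, table, and credentials (all placeholders here),
// plus the matching JDBC driver on the classpath
val jdbcDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://localhost:3306/testdb")
  .option("dbtable", "people")
  .option("user", "root")
  .option("password", "secret")
  .load()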

 



Original post: https://www.cnblogs.com/lemos/p/12001729.html
