import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder()
  .appName("Spark SQL basic example")
  .getOrCreate()

// Import Spark's implicit conversions, e.g. for converting an RDD into a DataFrame
import spark.implicits._

val df = spark.read.json("/data/tmp/SparkSQL/people.json")

df.show()  // Print the contents of the DataFrame to standard output
// +---+-------+
// |age|   name|
// +---+-------+
// |   |Michael|
// | 19|   Andy|
// | 30| Justin|
// +---+-------+

df.printSchema()  // Print the schema of the DataFrame
// root
//  |-- age: string (nullable = true)
//  |-- name: string (nullable = true)

df.select("name").show()
// Equivalent to the SQL statement: SELECT name FROM DataFrame

df.select($"name", $"age" + 1).show()
// Equivalent to the SQL statement: SELECT name, age + 1 FROM DataFrame
// Note: when a column expression is involved, every column must be referenced
// with the $ syntax (a Column object) rather than a plain string

df.filter($"age" > 21).show()
// Equivalent to: SELECT * FROM DataFrame WHERE age > 21

df.groupBy("age").count().show()
// Equivalent to: SELECT age, count(age) FROM DataFrame GROUP BY age

// You can also analyze the DataFrame by writing SQL directly
df.createOrReplaceTempView("people")
val sqlDF = spark.sql("SELECT * FROM people")
sqlDF.show()
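The comment on spark.implicits._ above mentions converting an RDD into a DataFrame, but the example never shows it. Below is a minimal sketch of that conversion, not taken from the original post: the Person case class and the inline sample data are assumptions for illustration. With the implicits imported, both a local Seq and an RDD of case-class objects gain a toDF() method.

// Hypothetical case class and sample data for illustration only
case class Person(name: String, age: Long)

// A Seq of case-class objects can be converted into a DataFrame directly
val seqDF = Seq(Person("Michael", 29), Person("Andy", 30)).toDF()
seqDF.show()

// The same toDF() conversion works for an RDD of case-class objects
val rdd = spark.sparkContext.parallelize(Seq(Person("Justin", 19)))
val rddDF = rdd.toDF()
rddDF.filter($"age" > 18).show()

Because Person is a case class, Spark infers the column names and types from its fields, so no explicit schema needs to be supplied.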
Original article: https://www.cnblogs.com/followees/p/8908635.html