I set up a Hadoop cluster with VMware, and Spark, Hive, and the other components are already installed. Now I want to use IDEA on my development machine to connect to Hive on the cluster and run queries against it.
In hive-site.xml, find the hive.metastore.uris property and change it to the following form:
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://MASTER_NODE_IP:9083</value>
  <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
Then find the hive.metastore.schema.verification property in hive-site.xml and set its value to false:
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
  <description>
    Enforce metastore schema version consistency.
    True: Verify that the version information stored in the metastore is compatible with the version in the Hive jars. Also disables automatic
    schema migration attempts. Users are required to manually migrate the schema after a Hive upgrade, which ensures
    proper metastore schema migration. (Default)
    False: Warn if the version information stored in the metastore doesn't match the version in the Hive jars.
  </description>
</property>
On the master node, start the metastore and HiveServer2 services from the command line:
hive --service metastore
hive --service hiveserver2
Both commands can be kept running in the background with nohup, as shown below.
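A minimal sketch (the log file names are illustrative):
nohup hive --service metastore > metastore.log 2>&1 &
nohup hive --service hiveserver2 > hiveserver2.log 2>&1 &
Once HiveServer2 is up, you can sanity-check the connection from the development machine with Beeline, assuming HiveServer2's default port 10000:
beeline -u jdbc:hive2://MASTER_NODE_IP:10000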
Sample code is as follows:
import org.apache.spark.sql.SparkSession

object XgbPredict {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder()
      .master("spark://172.16.74.128:7077") // Spark standalone mode
      .config("hive.metastore.uris", "thrift://172.16.74.128:9083") // the remote metastore configured above
      .config("spark.sql.warehouse.dir", "hdfs://172.16.74.128:9000/user/hive/warehouse") // Hive warehouse location on HDFS
      .enableHiveSupport()
      .getOrCreate()

    spark.sql("show databases").show()
    println("Done!")

    spark.stop()
  }
}
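To compile and run this from IDEA, the project also needs Spark's SQL and Hive modules on the classpath. A minimal sbt sketch, assuming Scala 2.11 and Spark 2.4.8 (match these to the versions running on your cluster):

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"  % "2.4.8",
  "org.apache.spark" %% "spark-hive" % "2.4.8"
)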
Original article: https://www.cnblogs.com/shayue/p/ben-despark-zhi-jie-ji-qun-shang-dehive.html