
hadoop+spark+kudu

Date: 2018-04-04 18:16:35


1. Compatible versions of Spark and Kudu

Spark Integration Known Issues and Limitations
Spark 2.2+ requires Java 8 at runtime even though Kudu Spark 2.x integration is Java 7 compatible. Spark 2.2 is the default dependency version as of Kudu 1.5.0.

Kudu tables with a name containing upper case or non-ascii characters must be assigned an alternate name when registered as a temporary table.

Kudu tables with a column name containing upper case or non-ascii characters may not be used with SparkSQL. Columns may be renamed in Kudu to work around this issue.
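The naming restriction above can be checked up front. Below is a minimal sketch of a hypothetical helper (not part of the Kudu or Spark API) that flags table or column names which would need renaming or aliasing before use with SparkSQL, assuming "safe" means lower-case ASCII letters, digits, and underscores:

```python
import string

# Characters assumed safe for SparkSQL per the limitation above:
# lower-case ASCII letters, digits, and underscores.
SAFE_CHARS = set(string.ascii_lowercase + string.digits + "_")

def is_sparksql_safe(name: str) -> bool:
    """Hypothetical check: True if a Kudu table/column name contains
    only lower-case ASCII letters, digits, and underscores, so it can
    be used from SparkSQL without an alternate name."""
    return bool(name) and all(ch in SAFE_CHARS for ch in name)
```

For example, `is_sparksql_safe("web_logs")` is True, while `is_sparksql_safe("WebLogs")` and `is_sparksql_safe("日志")` are False and would call for a rename or alias.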

<> and OR predicates are not pushed to Kudu, and instead will be evaluated by the Spark task. Only LIKE predicates with a suffix wildcard are pushed to Kudu, meaning that LIKE "FOO%" is pushed down but LIKE "FOO%BAR" isn’t.
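The LIKE pushdown rule can be expressed as a small predicate. This is a hypothetical illustration of the rule stated above, not Kudu code; it assumes that any wildcard other than a single trailing `%` (including the single-character `_` wildcard) forces evaluation in the Spark task:

```python
def like_pushable_to_kudu(pattern: str) -> bool:
    """Hypothetical check mirroring the rule above: a LIKE pattern is
    pushed down to Kudu only when it is a prefix match -- a literal
    string followed by one trailing '%' wildcard."""
    if not pattern.endswith("%"):
        return False
    body = pattern[:-1]
    # Any other wildcard in the body means Spark must evaluate the predicate.
    return "%" not in body and "_" not in body
```

So `like_pushable_to_kudu("FOO%")` is True, while `"FOO%BAR"` and `"%FOO"` are evaluated by Spark instead.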

Kudu does not support every type supported by Spark SQL. For example, Date and complex types are not supported.

Kudu tables may only be registered as temporary tables in SparkSQL. Kudu tables may not be queried using HiveContext.

In short: Spark 2.2 pairs with Kudu 1.5.0, since Spark 2.2 became the default dependency version of the Kudu Spark integration as of Kudu 1.5.0.
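As a concrete pairing, the matching integration artifact can be pulled in when launching spark-shell; the coordinates below assume the kudu-spark2 (Scala 2.11) artifact naming used by Kudu 1.5.0:

```
spark-shell --packages org.apache.kudu:kudu-spark2_2.11:1.5.0
```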



Original article: https://www.cnblogs.com/chengjunhao/p/8718036.html
