Excuse me for the inconvenience, but I did not find an answer in the docs or on the Internet.
I have a platform with:
- Hadoop 2.7.3
- Hive 2.1.0
- Hbase 1.2.4
- Spark 1.6
I have integrated Flink 1.1.3 to use it in local mode and in YARN mode.
I'm interested in using Flink with Hive (like HiveContext with Spark) to read data in the Scala shell. Is that possible? And how?
Regards.
Since Flink 1.9.0, we officially support using Flink with Hive: https://ci.apache.org/projects/flink/flink-docs-master/dev/table/hive/
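For reference, here is a minimal sketch of what the Hive integration looks like with the 1.9 Table API, assuming flink-connector-hive and the Hive client jars are on the classpath; the catalog name, hive-site.xml directory, Hive version string, and table name below are placeholders you would adapt to your setup:

```scala
import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}
import org.apache.flink.table.catalog.hive.HiveCatalog

object FlinkHiveExample {
  def main(args: Array[String]): Unit = {
    // Blink planner in batch mode, which the Hive connector targets in 1.9
    val settings = EnvironmentSettings.newInstance()
      .useBlinkPlanner()
      .inBatchMode()
      .build()
    val tableEnv = TableEnvironment.create(settings)

    // Register a HiveCatalog pointing at the directory containing hive-site.xml.
    // All four arguments are placeholders for this sketch.
    val hiveCatalog = new HiveCatalog(
      "myhive",         // catalog name (arbitrary)
      "default",        // default database
      "/etc/hive/conf", // directory containing hive-site.xml
      "2.3.4"           // Hive version string supported by the connector
    )
    tableEnv.registerCatalog("myhive", hiveCatalog)
    tableEnv.useCatalog("myhive")

    // The existing Hive metastore tables are now visible to Flink SQL.
    tableEnv.listTables().foreach(println)
    val result = tableEnv.sqlQuery("SELECT * FROM some_hive_table LIMIT 10")
  }
}
```

The HiveCatalog reads table definitions from the same Hive metastore that Spark's HiveContext uses, so Flink SQL queries run against the same data without any copying or schema redefinition.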
Are you still looking into this option?