Excuse me for the inconvenience, but I did not find an answer in the docs or on the Internet.
I have a platform with:
- Hadoop 2.7.3
- Hive 2.1.0
- HBase 1.2.4
- Spark 1.6
I have integrated Flink 1.1.3 to use it in local mode and YARN mode.
I'm interested in using Flink with Hive (like HiveContext with Spark) to read data in the scala-shell. Is this possible, and how?
Regards.
Flink does not support a direct connection to Hive the way Spark does with its SQL context. However, there is a simple way to analyze the data of a Hive table in Flink using the Flink Table API.
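As a minimal sketch of the setup, assuming Flink 1.1.x with the flink-table dependency on the classpath (in later Flink versions the Table API packages moved, so adjust the imports accordingly):

```scala
import org.apache.flink.api.scala._
import org.apache.flink.api.scala.table._
import org.apache.flink.api.table.TableEnvironment

// Batch execution environment plus a table environment on top of it
val env = ExecutionEnvironment.getExecutionEnvironment
val tableEnv = TableEnvironment.getTableEnvironment(env)
```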
What you need to do first is get the exact HDFS location of the Hive table you wish to analyze with Flink.
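For a managed table, the data usually lives under the Hive warehouse directory; you can confirm the exact path by running `DESCRIBE FORMATTED mytable;` in the Hive CLI and reading its `Location` field. A hypothetical example (database `mydb`, table `mytable`):

```scala
// Hypothetical warehouse path; the real one depends on your Hive
// configuration (hive.metastore.warehouse.dir) and database/table names
val tablePath = "hdfs:///user/hive/warehouse/mydb.db/mytable"
```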
Then you read the data from that location into a DataSet.
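A sketch of the read, assuming the table is stored as delimited text files (the default TEXTFILE format with Hive's Ctrl-A field delimiter) and has a hypothetical two-column schema; this does not work as-is for ORC or Parquet tables:

```scala
// Case class mirroring the Hive table's schema (hypothetical columns)
case class MyRecord(id: Int, name: String)

val dataset: DataSet[MyRecord] = env.readCsvFile[MyRecord](
  tablePath,
  fieldDelimiter = "\u0001" // Hive's default field delimiter (Ctrl-A)
)
```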
Then you create a Table from the DataSet and register it with the TableEnvironment.
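Continuing the sketch (the table name "mytable" is arbitrary):

```scala
// Convert the DataSet into a Table and register it so queries
// can refer to it by name
val table = dataset.toTable(tableEnv)
tableEnv.registerTable("mytable", table)
```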
And now you are all set to query this table using Table API syntax.
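For example, a simple filter-and-project with the Scala expression syntax, bringing the result back as a DataSet to print it. In Flink 1.1.x, `Row` lives in `org.apache.flink.api.table`, and a registered table can also be queried with `tableEnv.sql(...)`:

```scala
import org.apache.flink.api.table.Row

// Keep only rows with id > 10 and project two columns
val result = table
  .filter('id > 10)
  .select('id, 'name)

// Execute and print the result
result.toDataSet[Row].print()

// Equivalent SQL over the registered table
val sqlResult = tableEnv.sql("SELECT id, name FROM mytable WHERE id > 10")
```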
Here is a link to the sample code.
Hope this helps.