I am trying to build a movie recommender system using Apache Spark MLlib.
I have written the recommender in Java, and it works fine when run with the spark-submit command.
My run command looks like this:
bin/spark-submit --jars /opt/poc/spark-1.3.1-bin-hadoop2.6/mllib/spark-mllib_2.10-1.0.0.jar --class "com.recommender.MovieLensALSExtended" --master local[4] /home/sarvesh/Desktop/spark-test/recommender.jar /home/sarvesh/Desktop/spark-test/ml-latest-small/ratings.csv /home/sarvesh/Desktop/spark-test/ml-latest-small/movies.csv
Now I want to use my recommender in a real-world scenario, as a web application that I can query for recommendations.
I want to build a Spring MVC web application that can interact with an Apache Spark context and return results on request.
My question is: how can I build an application that talks to Apache Spark running on a cluster, so that when a request reaches the controller, it takes the user's query and returns the same result that the spark-submit command prints to the console?
From what I have found so far, Spark SQL can be integrated via JDBC, but I have not found any good examples.
Thanks in advance.
To interact with the data model (i.e., call its predict method), you could build a REST service inside the driver. The service listens for requests, invokes the model's predict method with input from the request, and returns the result.
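A minimal sketch of this pattern in Java, using the JDK's built-in `com.sun.net.httpserver` instead of a full web framework. The `predict` method here is a placeholder standing in for the trained model held in the driver JVM (in the real application it would delegate to something like `MatrixFactorizationModel.predict(userId, movieId)`); the port, path, and parameter names are illustrative:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class RecommenderService {

    // Placeholder for the trained model kept alive in the driver JVM.
    // In the real application this would call the MLlib model's
    // predict(userId, movieId) method.
    static double predict(int userId, int movieId) {
        return 4.2; // dummy rating for illustration
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // e.g. GET /predict?user=1&movie=42
        server.createContext("/predict", exchange -> {
            String query = exchange.getRequestURI().getQuery(); // "user=1&movie=42"
            int userId = 0, movieId = 0;
            for (String param : query.split("&")) {
                String[] kv = param.split("=");
                if (kv[0].equals("user")) userId = Integer.parseInt(kv[1]);
                if (kv[0].equals("movie")) movieId = Integer.parseInt(kv[1]);
            }
            byte[] body = String.valueOf(predict(userId, movieId))
                    .getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("Listening on http://localhost:8080/predict");
    }
}
```

The key point is that the model and the Spark context live for the lifetime of this process, so each request is answered from the already-trained model rather than re-running a spark-submit job.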
http4s (https://github.com/http4s/http4s) could be used for this purpose.
Spark SQL is not relevant here; it provides SQL capabilities for data analytics, which you have already done.
Hope this helps.