Setting up JanusGraph


I'm new to JanusGraph. Can somebody help me edit this docker-compose file to use ScyllaDB instead of Cassandra and Apache Solr instead of Elasticsearch? Also, does Apache Spark get installed automatically, or do I have to add it to the docker-compose file as well?

Thank you.


services:
  janusgraph:
    image: janusgraph/janusgraph:latest
    container_name: jce-janusgraph
    environment:
      JANUS_PROPS_TEMPLATE: cassandra-es
      janusgraph.storage.backend: cql
      janusgraph.storage.hostname: jce-cassandra
      janusgraph.index.search.hostname: jce-elastic
    ports:
      - "8182:8182"
    networks:
      - jce-network
    healthcheck:
      test: ["CMD", "bin/gremlin.sh", "-e", "scripts/remote-connect.groovy"]
      interval: 10s
      timeout: 30s
      retries: 3
  cassandra:
    image: cassandra:3
    container_name: jce-cassandra
    ports:
      - "9042:9042"
      - "9160:9160"
    networks:
      - jce-network
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    container_name: jce-elastic
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "http.host=0.0.0.0"
      - "network.host=0.0.0.0"
      - "transport.host=127.0.0.1"
      - "cluster.name=docker-cluster"
      - "xpack.security.enabled=false"
      - "discovery.zen.minimum_master_nodes=1"
    ports:
      - "9200:9200"
    networks:
      - jce-network

networks:
  jce-network:
volumes:
  janusgraph-default-data:

Edit: I've successfully switched from Cassandra to Scylla

version: "3"

services:
  graph:
    image: janusgraph/janusgraph:0.5.2
    environment:
      JANUS_PROPS_TEMPLATE: cassandra-es
      janusgraph.storage.backend: cql
      janusgraph.storage.hostname: db
      janusgraph.index.search.hostname: index
    ports:
      - "8182:8182"
      - "8184:8184"
    depends_on:
      - db
      - index
  db:
    image: scylladb/scylla:4.2.1
    ports:
      # REST API
      - "10000:10000"
      # CQL ports (native_transport_port)
      - "9042:9042"
      # Thrift (rpc_port)
      - "9160:9160"
      # Internode
      - "7000:7000"
      - "7001:7001"
      # JMX
      - "7199:7199"
      # Prometheus monitoring
      - "9180:9180"
      - "9100:9100"
    volumes:
      # Scylla stores its data under /var/lib/scylla inside the container
      - ./data/db/data:/var/lib/scylla
  index:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.6.0
    environment:
      - discovery.type=single-node
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      # The Elasticsearch image keeps its data under /usr/share/elasticsearch/data,
      # regardless of the compose service name
      - ./data/index/data:/usr/share/elasticsearch/data
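For the Solr half of the question: JanusGraph supports Solr as a mixed-index backend via `index.search.backend=solr`. A minimal sketch of the swap, assuming the official `solr` image and JanusGraph's HTTP mode (the service name `index` and the host paths are carried over from the file above; note that the `JANUS_PROPS_TEMPLATE` values shipped with the JanusGraph image are Elasticsearch-based, so the Solr settings are passed as individual `janusgraph.*` variables, and in HTTP mode you have to create the Solr cores for your indexes yourself):

```yaml
  graph:
    image: janusgraph/janusgraph:0.5.2
    environment:
      janusgraph.storage.backend: cql
      janusgraph.storage.hostname: db
      # Point the mixed-index backend at Solr instead of Elasticsearch
      janusgraph.index.search.backend: solr
      janusgraph.index.search.solr.mode: http
      janusgraph.index.search.solr.http-urls: http://index:8983/solr
  index:
    image: solr:8
    ports:
      - "8983:8983"
    volumes:
      # The official Solr image keeps its data under /var/solr
      - ./data/index/data:/var/solr
```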
1 Answer

OK, so the Scylla installation is fine now. Regarding Spark: it does not get installed automatically; you'll need to install it yourself or add another container for it.
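For example, a standalone Spark master could run as its own service next to the others; a sketch assuming the `bitnami/spark` image (not part of the original file):

```yaml
  spark:
    image: bitnami/spark:3
    environment:
      # Run this container as a standalone Spark master
      - SPARK_MODE=master
    ports:
      - "7077:7077"   # Spark master RPC
      - "8080:8080"   # Spark master web UI
```

You would then point JanusGraph's `SparkGraphComputer` at `spark://spark:7077` via the `spark.master` property in your graph configuration.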