Loading data into Titan database

I have a set of log data in the form of flat files from which I want to build a graph (based on information in the logs) and load it into the Titan database. The data is a few gigabytes in size. I am exploring the bulk-loading options Faunus and BatchGraph (which I read about in https://github.com/thinkaurelius/titan/wiki/Bulk-Loading). The tab-separated log data I have needs a bit of processing on each line of the file to form the graph nodes and edges I have in mind. Will Faunus/BatchGraph serve this use case? If yes, what format should my input file be in for these tools to work? If not, is using the Blueprints API the way to go? Any resources you can share on your suggestion are very much appreciated, since I'm a novice. Thanks!
To answer your question simply: I think you will want to use Faunus to load your data. I would recommend cleaning and transforming your data with external tools first if possible. Tab-delimited is a fine format, but how you prepare these files can have an impact on loading performance (e.g., sometimes simply sorting the data the right way can provide a big speed boost).
The more complete answer lies in these two resources. They should help you decide on an approach:
http://thinkaurelius.com/2014/05/29/powers-of-ten-part-i/
http://thinkaurelius.com/2014/06/02/powers-of-ten-part-ii/
I would offer this additional advice: if you are truly a novice, I recommend that you find some slice of your data that produces somewhere between 100K and 1M edges. Focus on simply loading that with BatchGraph or just the Blueprints API, as described in Part I of those blog posts, and see the sketch below for what that can look like. Get used to Gremlin a bit by querying the data in this small case. Use this time to develop methods for validating what you've loaded. Once you feel comfortable with all of that, then work on scaling it up to the full size.
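To make that concrete, here is a rough sketch of a BatchGraph-based loader for a tab-separated log file, using the Blueprints API. The column layout (source host, target host, timestamp), the property names, the edge label, and the backend properties file are all assumptions made for illustration; adapt them to whatever your log lines actually contain.

```java
import com.thinkaurelius.titan.core.TitanFactory;
import com.thinkaurelius.titan.core.TitanGraph;
import com.tinkerpop.blueprints.Vertex;
import com.tinkerpop.blueprints.util.wrappers.batch.BatchGraph;
import com.tinkerpop.blueprints.util.wrappers.batch.VertexIDType;

import java.io.BufferedReader;
import java.io.FileReader;

public class LogLoader {
    public static void main(String[] args) throws Exception {
        // Open Titan against your storage backend; the properties file name is a placeholder.
        TitanGraph titan = TitanFactory.open("titan-cassandra.properties");

        // Wrap in BatchGraph so commits happen automatically every 10,000 mutations.
        BatchGraph<TitanGraph> graph =
                new BatchGraph<TitanGraph>(titan, VertexIDType.STRING, 10000L);

        try (BufferedReader reader = new BufferedReader(new FileReader(args[0]))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // Hypothetical log layout: sourceHost <TAB> targetHost <TAB> timestamp
                String[] cols = line.split("\t");
                String src = cols[0], dst = cols[1], timestamp = cols[2];

                // getVertex returns null if this id has not been created yet in this load.
                Vertex from = graph.getVertex(src);
                if (from == null) {
                    from = graph.addVertex(src);
                    from.setProperty("host", src);
                }
                Vertex to = graph.getVertex(dst);
                if (to == null) {
                    to = graph.addVertex(dst);
                    to.setProperty("host", dst);
                }

                // One edge per log line, keeping the timestamp as an edge property.
                graph.addEdge(null, from, to, "connected").setProperty("timestamp", timestamp);
            }
        }

        // Flushes the last partial buffer and closes the underlying TitanGraph.
        graph.shutdown();
    }
}
```

Once a slice of your data is loaded this way, you can open the same graph in the Gremlin shell and run a few counts and traversals against rows you know from the raw file to validate the load before scaling up.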