Is there any way to compress the data while using mongo persistence with NEventStore?


I'm working with C# and .NET Core, using NEventStore (version 9.0.1), and trying to evaluate the various persistence options it supports out of the box.

More specifically, when using the Mongo persistence, the payload is stored without any compression applied.

Note: payload compression works correctly with the SQL persistence of NEventStore, but not with the Mongo persistence.

I'm using the below code to create the event store and initialize:

    private IStoreEvents CreateEventStore(string connectionString)
    {
        var store = Wireup.Init()
            .UsingMongoPersistence(connectionString,
                new NEventStore.Serialization.DocumentObjectSerializer())
            .InitializeStorageEngine()
            .UsingBsonSerialization()
            .Compress()
            .HookIntoPipelineUsing()
            .Build();
        return store;
    }

And, I'm using the below code for storing the events:

public async Task AddMessageTostore(Command command)
{
    using (var stream = _eventStore.CreateStream(command.Id))
    {
        stream.Add(new EventMessage { Body = command });
        stream.CommitChanges(Guid.NewGuid());
    }
}

The workaround: I implemented the PreCommit(CommitAttempt attempt) and Select(ICommit committed) methods of IPipelineHook and applied gzip compression/decompression there, which achieved compression of the events in MongoDB.
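For context, such a hook can be sketched roughly as below. This is only an outline, not my exact implementation: GzipPipelineHook and the Compress/Decompress helper names are placeholders, while PipelineHookBase, CommitAttempt, ICommit, and EventMessage come from NEventStore.

```csharp
using NEventStore;

// Sketch of a gzip pipeline hook (hypothetical names; helper bodies omitted).
public class GzipPipelineHook : PipelineHookBase
{
    // Called before a commit is persisted: replace each payload with gzipped bytes.
    public override bool PreCommit(CommitAttempt attempt)
    {
        foreach (var message in attempt.Events)
            message.Body = Compress(message.Body);
        return true; // true = let the commit proceed
    }

    // Called when commits are read back: restore the original payload.
    public override ICommit Select(ICommit committed)
    {
        foreach (var message in committed.Events)
            message.Body = Decompress((byte[])message.Body);
        return committed;
    }

    private static byte[] Compress(object body)
        => throw new System.NotImplementedException(); // gzip + serialize, as in the answer below

    private static object Decompress(byte[] bytes)
        => throw new System.NotImplementedException(); // inverse of Compress
}
```

The hook is then registered through the same fluent wireup, e.g. `.HookIntoPipelineUsing(new GzipPipelineHook())`.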

I attached data store screenshots of both SQL and Mongo persistence to the original post (images not reproduced here).

So, the questions are:

  1. Is there some other option or setting I'm missing so that the events get compressed while saving (i.e., via the fluent Compress() call)?
  2. Is the workaround mentioned above sensible, or does it add a performance overhead?
1 Answer

Answered by Manu Radhakrishnan:

I also faced the same issue while using NEventStore.Persistence.MongoDB.

Even when using the fluent Compress() method, the payload is not compressed in the Mongo persistence the way it is with the SQL persistence. I finally achieved compression/decompression by customizing the logic inside the PreCommit(CommitAttempt attempt) and Select(ICommit committed) methods.

Code used for compression:

using (var stream = new MemoryStream())
{
    using (var compressedStream = new GZipStream(stream, CompressionMode.Compress))
    using (var writer = new JsonTextWriter(new StreamWriter(compressedStream)))
    {
        var serializer = new JsonSerializer
        {
            TypeNameHandling = TypeNameHandling.None,
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore
        };
        serializer.Serialize(writer, this);
        // Disposing the writer flushes the StreamWriter and closes the gzip
        // stream, so the gzip footer is written before ToArray() is called.
    }
    return stream.ToArray();
}

Code used for decompression:

using (var stream = new MemoryStream(bytes))
using (var decompressedStream = new GZipStream(stream, CompressionMode.Decompress))
using (var reader = new JsonTextReader(new StreamReader(decompressedStream)))
{
    var serializer = new JsonSerializer
    {
        TypeNameHandling = TypeNameHandling.None,
        ReferenceLoopHandling = ReferenceLoopHandling.Ignore
    };
    var body = serializer.Deserialize(reader, type);
    return body as Command;
}
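To sanity-check the gzip layer independently of NEventStore and Json.NET, the round trip can be exercised with a minimal self-contained sketch (the class and method names here are mine, not part of any library):

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

// Minimal gzip round trip over a JSON string, without the serializer layer.
public static class GzipPayload
{
    public static byte[] Compress(string json)
    {
        using (var output = new MemoryStream())
        {
            // leaveOpen: true keeps the MemoryStream usable after the gzip
            // stream is disposed (disposal finishes the gzip frame).
            using (var gzip = new GZipStream(output, CompressionMode.Compress, leaveOpen: true))
            using (var writer = new StreamWriter(gzip, Encoding.UTF8))
            {
                writer.Write(json);
            }
            return output.ToArray();
        }
    }

    public static string Decompress(byte[] bytes)
    {
        using (var input = new MemoryStream(bytes))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var reader = new StreamReader(gzip, Encoding.UTF8))
        {
            return reader.ReadToEnd();
        }
    }

    public static void Main()
    {
        var payload = "{\"Id\":\"42\",\"Name\":\"demo\"}";
        var roundTripped = Decompress(Compress(payload));
        Console.WriteLine(payload == roundTripped); // expect True
    }
}
```

If the round trip holds here, any data corruption seen in the event store points at the hook wiring rather than the compression itself.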

I'm not sure whether this is the right approach, or whether it will impact the performance of EventStore operations such as insert and select.