In-memory database versus shared-memory IPC


I want to set up a microservice architecture that includes services built with a variety of technologies (C++, Golang, PHP, ...).

The duty of one of the services is fetching high-rate data from a device (approximately 4 KB, 15 times per second) and passing it to three other separate services.

As far as I know, shared memory is the fastest method for IPC, and I know eCAL supports it well, but eCAL only supports C++ and Python; for example, it doesn't support Golang (I know there are some wrapper packages).

Now I think I could use an in-memory database for IPC, and all services could connect to the database through standard interfaces easily.

Do in-memory databases behave comparably to shared-memory IPC, with tolerable performance for my high-rate traffic? Or are they not comparable in terms of performance? And generally, what are the pros and cons of this idea?

Accepted answer by Syeda Maira Saad:

Using in-memory databases as a means of inter-process communication (IPC) in a microservices architecture is a viable option and has both advantages and drawbacks. Here are some considerations:

Advantages:

Language Agnostic: In-memory databases usually provide standard interfaces or APIs that are language-agnostic, allowing services implemented in various programming languages (C++, Golang, PHP, etc.) to interact with them.

Ease of Development: Integrating with an in-memory database is often straightforward, as it typically involves using standard query languages (SQL or NoSQL query languages) or APIs provided by the database.
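As a minimal sketch of what that standard interface looks like, here is a frame round-trip through SQLite's in-memory mode. Note this is illustrative only: sqlite3's `:memory:` database lives inside a single process, so an actual cross-process setup would use a networked in-memory store such as Redis; the point here is just the familiar, language-agnostic query interface.

```python
import sqlite3

# Illustrative only: a real IPC deployment would use a networked
# in-memory store (e.g. Redis); sqlite3 ":memory:" is per-process.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE frames (seq INTEGER PRIMARY KEY, payload BLOB)")

payload = bytes(4 * 1024)  # one 4 KB frame from the device
conn.execute("INSERT INTO frames (seq, payload) VALUES (?, ?)", (1, payload))
conn.commit()

seq, data = conn.execute("SELECT seq, payload FROM frames").fetchone()
print(seq, len(data))  # 1 4096
```

Each consumer service would issue the same kind of `SELECT` through its own language's database driver, which is exactly the language-agnosticism argument above.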

Scalability: In-memory databases are designed for high-performance and can handle a large number of transactions per second. This makes them suitable for scenarios with high data rates, like the one you described.

Data Persistence (Optional): Some in-memory databases offer persistence options, allowing you to persist data if needed. This can be an advantage in case of service failures or restarts.

Drawbacks:

Latency: While in-memory databases are fast, they may introduce some latency compared to shared-memory IPC, especially in scenarios where extremely low latency is crucial. Shared-memory communication is typically more direct and immediate.

Complexity: Introducing an in-memory database adds complexity to the system. You need to manage and maintain the database along with handling potential issues such as synchronization, data consistency, and access control.

Resource Consumption: In-memory databases consume system resources. Depending on the size of your data and the frequency of access, this could impact the overall resource usage of your microservices.

Single Point of Failure: If the in-memory database becomes a single point of failure, it might impact the reliability of your entire system.

Overhead: In-memory databases may introduce some overhead in terms of memory usage and CPU cycles, especially when compared to shared-memory IPC, which operates directly on memory without an intermediate database layer.
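For comparison, the "direct" shared-memory path is sketched below using Python's standard library (the producer and consumer would normally be separate processes; all names here are illustrative, and a real setup also needs synchronization, e.g. a semaphore, which eCAL handles for you):

```python
from multiprocessing import shared_memory

FRAME_SIZE = 4 * 1024  # one 4 KB frame

# Producer side: create a shared-memory segment and write a frame into it.
shm = shared_memory.SharedMemory(create=True, size=FRAME_SIZE)
shm.buf[:FRAME_SIZE] = bytes(range(256)) * 16  # fake device data

# Consumer side (normally a separate process): attach by name and read.
# A real system must synchronize access, e.g. with a semaphore.
reader = shared_memory.SharedMemory(name=shm.name)
frame = bytes(reader.buf[:FRAME_SIZE])
print(len(frame))  # 4096

reader.close()
shm.close()
shm.unlink()
```

Note there is no serialization, no query parsing, and no network hop in this path, which is where the overhead difference against a database comes from.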

Recommendation:

Given the high data rate (15 times per second, 4 KB each time), it's crucial to evaluate the performance requirements of your system. For ultra-low latency scenarios, shared-memory IPC might be more suitable. However, if the latency introduced by an in-memory database is acceptable for your use case, the advantages of language-agnostic interfaces and scalability could make it a good choice.
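For a rough sense of scale, the stated rate works out to a fairly modest aggregate bandwidth, which suggests throughput will not be the bottleneck for either approach and latency is the deciding factor (a back-of-the-envelope sketch):

```python
# Back-of-the-envelope throughput for the workload described above.
frame_bytes = 4 * 1024       # one 4 KB frame
frames_per_second = 15
consumers = 3                # three downstream services

per_consumer = frame_bytes * frames_per_second  # 61440 B/s, i.e. 60 KiB/s
total = per_consumer * consumers                # 184320 B/s, i.e. 180 KiB/s
print(per_consumer, total)  # 61440 184320
```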

Ultimately, the choice between shared-memory IPC and in-memory databases depends on your specific performance, scalability, and architectural requirements, as well as the trade-offs you are willing to make in terms of complexity and latency. Conducting performance tests with both approaches in your specific use case could provide valuable insights into the most suitable solution.