Adding a centralized DB for an existing C# Win App


We have a Windows application that uses a local database. A few of our clients are asking for centralized reports covering all of their locations. Our proposed solution is to host a central database at the client's head office, reachable over a static IP, and to sync all the data to it using a WCF service.

Since the central server may not always be available, I need a simple way to tell, from each offline database, which records have not yet been synchronized with the central DB, so that a timed service in the background can copy the remaining data across.

One option is to add a boolean IsSynched column to every table to mark whether a row is in sync. But that would require a lot of code changes in all the referenced procedures and code.

Any alternative solution?


There are 3 best solutions below


I would suggest that a boolean IsSynced column is a very weak basis for synchronization. If an office stays offline for longer, you will certainly struggle to get everything back in step.

If you add a VersionNumber column, you will know that every row with a VersionNumber below the current VersionNumber needs updating.

A more sophisticated approach would be to add a LastUpdated timestamp column to each table, which could save some traffic: the VersionNumber can increase without necessarily meaning a given row changed, whereas LastUpdated is row-specific and lets you fetch (that is, update) only those rows that are newer than the newest record already in the central database. That could in fact work like a dandy.
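
As a rough sketch of what that fetch could look like from a C# sync job, assuming SQL Server, a hypothetical Orders table with a LastUpdated column, and a SiteCode column to tell the offices apart (none of these names come from the question):

    // Sketch only: push local rows to the central DB based on a LastUpdated
    // column. The Orders table, SiteCode column and connection strings are
    // illustrative assumptions.
    using System;
    using System.Data.SqlClient;

    class LastUpdatedSync
    {
        static void Main()
        {
            using (var central = new SqlConnection("...central connection string..."))
            using (var local = new SqlConnection("...local connection string..."))
            {
                central.Open();
                local.Open();

                // 1. The newest row the central DB already has for this office.
                var maxCmd = new SqlCommand(
                    "SELECT ISNULL(MAX(LastUpdated), '1900-01-01') FROM Orders WHERE SiteCode = @site",
                    central);
                maxCmd.Parameters.AddWithValue("@site", "HO-001");
                var centralMax = (DateTime)maxCmd.ExecuteScalar();

                // 2. Every local row newer than that still has to be pushed.
                var fetchCmd = new SqlCommand(
                    "SELECT OrderId, Amount, LastUpdated FROM Orders WHERE LastUpdated > @max",
                    local);
                fetchCmd.Parameters.AddWithValue("@max", centralMax);

                using (var reader = fetchCmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // 3. Upsert the row into the central DB (a MERGE, or a WCF
                        //    call in the proposed architecture) - omitted here.
                    }
                }
            }
        }
    }

The same pattern repeats per table, and the only state the job needs is whatever the central DB already holds, so a missed run simply means a bigger batch next time.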


Sounds like you need to implement 'multi-master' or 'multi-source' replication. Multi-master replication is a very tricky proposition to implement yourself.

I've done it in the following manner: you will need triggers on every table (insert, delete, update) to record every change made to each row into a 'change tracking' table (and you must make sure the triggers survive any DB/table updates). The change tracking table needs a globally unique key (usually a combination of a unique 'site code' and each row's unique ID), the change type, the UTC date/time of the change, the table name, the row ID and the replication status (i.e. the date sent) for each row-level CRUD operation. You then use the change tracking table with the WCF services to synchronize the client tables with the server tables. You also need to ensure both sides contain the same data before the replication system is first switched on (perhaps by uploading the site's DB into the central DB at the start).
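
A minimal sketch of what the change tracking table and one of the triggers might look like, assuming SQL Server; the ChangeLog and Orders names, the OrderId key and the 'HO-001' site code are placeholders rather than anything prescribed here:

    // Sketch of the change-tracking setup described above, assuming SQL Server.
    using System.Data.SqlClient;

    class ChangeTrackingSetup
    {
        const string ChangeLogDdl = @"
    CREATE TABLE ChangeLog (
        ChangeId     bigint IDENTITY(1,1) PRIMARY KEY,
        SiteCode     varchar(10) NOT NULL,   -- globally unique together with RowId
        TableName    sysname     NOT NULL,
        RowId        int         NOT NULL,   -- primary key of the changed row
        ChangeType   char(1)     NOT NULL,   -- 'I', 'U' or 'D'
        ChangedAtUtc datetime2   NOT NULL DEFAULT SYSUTCDATETIME(),
        SentAtUtc    datetime2   NULL        -- replication status: NULL = not sent yet
    );";

        // One trigger per table and operation; this one records updates to Orders.
        const string OrdersUpdateTrigger = @"
    CREATE TRIGGER trg_Orders_Update ON Orders AFTER UPDATE AS
    BEGIN
        SET NOCOUNT ON;
        INSERT INTO ChangeLog (SiteCode, TableName, RowId, ChangeType)
        SELECT 'HO-001', 'Orders', i.OrderId, 'U'
        FROM inserted i;
    END;";

        static void Main()
        {
            using (var conn = new SqlConnection("...local connection string..."))
            {
                conn.Open();
                new SqlCommand(ChangeLogDdl, conn).ExecuteNonQuery();
                new SqlCommand(OrdersUpdateTrigger, conn).ExecuteNonQuery();
            }
        }
    }

The insert and delete triggers follow the same shape (reading from the inserted or deleted pseudo-tables), and the WCF sync job works through ChangeLog rows where SentAtUtc is NULL, stamping them once the central DB has confirmed the change.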

If you can switch to another DB, the latest versions of MariaDB include this functionality out of the box (https://mariadb.com/kb/en/multi-source-replication/) for free. Functionality of this kind at the database level will make this task considerably easier.

Alternatively, depending on your budget and DBMS, there may be some off-the-shelf replication product to assist you?


A few things to consider:

  • Why will the central DB not always be available?
  • Will the reports be executed against the central DB or the local DB?
  • Is 2-way syncing required (local <-> central)?
  • How reliable will the connection be during syncing?

A solution I recently inherited handles this in the following way:

  • A Windows service runs in the background - it is responsible for syncing; the local application is not.
  • For central --> local syncing, each record has a last_updated (DateTime) column, which is set to CURRENT_TIMESTAMP every time the record is updated. For each table in the local DB, the Windows service looks up the newest local record timestamp and fires a query off to the corresponding table in the central DB to get any new / updated records, i.e. something like SELECT * FROM central.table WHERE last_updated > @lastLocalUpdate;. Any results are inserted / updated in the local DB.
  • For local --> central syncing, each record in the local DB only has an is_sent (boolean) column, which is set to false every time the record is updated locally. For each table in the local DB, the Windows service checks for records which have not yet been sent to the central DB, i.e. SELECT * FROM local.table WHERE is_sent = 0;. Any results are inserted / updated in the central DB and then is_sent is set to 1. (See the sketch after this list.)
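
A condensed sketch of one such sync pass for a single table, assuming SQL Server and a hypothetical Orders table carrying the last_updated and is_sent columns described above (connection strings and table/column names are illustrative):

    // Sketch of one two-way sync pass for a single table.
    using System.Data.SqlClient;

    class SyncPass
    {
        static void Run(SqlConnection local, SqlConnection central)
        {
            // central --> local: pull anything newer than the newest local row.
            var lastLocalUpdate = new SqlCommand(
                "SELECT ISNULL(MAX(last_updated), '1900-01-01') FROM Orders", local).ExecuteScalar();

            var pull = new SqlCommand(
                "SELECT * FROM Orders WHERE last_updated > @lastLocalUpdate", central);
            pull.Parameters.AddWithValue("@lastLocalUpdate", lastLocalUpdate);
            using (var reader = pull.ExecuteReader())
            {
                while (reader.Read())
                {
                    // insert / update the row in the local Orders table (omitted)
                }
            }

            // local --> central: push anything not yet marked as sent.
            var push = new SqlCommand("SELECT * FROM Orders WHERE is_sent = 0", local);
            using (var reader = push.ExecuteReader())
            {
                while (reader.Read())
                {
                    // insert / update the row in the central Orders table (omitted)
                }
            }

            // Simplified: a real service would flag only the rows it actually pushed.
            new SqlCommand("UPDATE Orders SET is_sent = 1 WHERE is_sent = 0", local).ExecuteNonQuery();
        }

        static void Main()
        {
            using (var local = new SqlConnection("...local connection string..."))
            using (var central = new SqlConnection("...central connection string..."))
            {
                local.Open();
                central.Open();
                Run(local, central);   // the real service runs this on a timer, per table
            }
        }
    }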

It's a bit chatty, but it ultimately works. In my case, the local device frequently loses connectivity, so this kind of granularity pays off.

High on my wishlist is to overhaul this mechanism completely and implement a message queue.

EDIT: if you insist on not making any changes to your existing codebase, you could implement the last_updated / is_sent updates at the SQL level and keep the sync service as an entirely separate project. The 'invisible updates' wouldn't make for the cleanest solution, but hey - pick your battles.
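
Purely as an illustration of those 'invisible updates', assuming SQL Server and the hypothetical Orders table from the earlier sketch: a column default fills last_updated and is_sent on insert, and an update trigger bumps them on every change, so the existing application code never touches either column.

    // Sketch: maintain the sync columns entirely at the SQL level.
    using System.Data.SqlClient;

    class InvisibleSyncColumns
    {
        const string AddColumns = @"
    ALTER TABLE Orders ADD
        last_updated datetime2 NOT NULL DEFAULT CURRENT_TIMESTAMP,
        is_sent      bit       NOT NULL DEFAULT 0;";

        // Direct trigger recursion is off by default in SQL Server, so this
        // trigger updating its own table does not re-fire itself.
        const string TouchTrigger = @"
    CREATE TRIGGER trg_Orders_Touch ON Orders AFTER UPDATE AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE o
        SET o.last_updated = CURRENT_TIMESTAMP,
            o.is_sent      = 0
        FROM Orders o
        JOIN inserted i ON i.OrderId = o.OrderId;
    END;";

        static void Main()
        {
            using (var conn = new SqlConnection("...local connection string..."))
            {
                conn.Open();
                new SqlCommand(AddColumns, conn).ExecuteNonQuery();
                new SqlCommand(TouchTrigger, conn).ExecuteNonQuery();
            }
        }
    }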