The database I am using is a global variable, initialized by reading the file `sql-repo.db`:
```go
const dbFile = "sql-repo.db"

var globalDB *LocalDB

type LocalDB struct {
	Path   string
	handle *sql.DB
}

func InitSqlDB(dbDir string) error {
	if globalDB != nil {
		return nil
	}
	db := LocalDB{Path: filepath.Join(dbDir, dbFile)}
	var err error
	db.handle, err = sql.Open("sqlite3", db.Path)
	if err != nil {
		return err
	}
	globalDB = &db
	return nil
}
```
From time to time, I will have an updated version of this database that I can download and store in `dbDir`.
Ideas that I have:

- Use `ATTACH DATABASE sql-repo.db AS dbMain` to attach a copy of the first database and use it by default. When I have my new `.db` file, I attach it as well: `ATTACH DATABASE sql-repo-new.db AS dbNew`. Then I detach `dbMain` and rename `dbNew` to `dbMain`.
- Simply change the address my `globalDB` is pointing to:
```go
const newDBFile = "sql-repo-new.db"

func PullNewDB(dbDir string) error {
	db := LocalDB{Path: filepath.Join(dbDir, newDBFile)}
	var err error
	db.handle, err = sql.Open("sqlite3", db.Path)
	if err != nil {
		return err
	}
	globalDB = &db
	return nil
}
```
How can I, or how should I, update my `globalDB` with the new version/file? I want to avoid any interference when the clients in my code are connected to the DB and are querying it.

Should I attach a `sync.RWMutex` to my `LocalDB` struct and lock/unlock it when I do the update? Or should I use a channel to ask every client to stop querying the DB?

Thank you for any help / advice / suggestion!
Or you can start a separate goroutine or process that syncs your old database with the new file: do an `INSERT OR REPLACE` of all rows from new to old, then a `DELETE` of the rows missing from the new file. If it is all done in a single transaction, every query will read either all-old or all-new data without ever blocking.

An additional benefit is the separation of concerns: your application code doesn't get cluttered with update logic, and in case the new file is corrupted, the update transaction errors out and no harm is done.
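A rough sketch of that sync, assuming a table named `repo` keyed by a primary-key column `id` (adjust to your schema; `INSERT OR REPLACE` needs a primary key or unique constraint to detect changed rows). Note that `ATTACH` is per-connection state, so the whole sync is pinned to one connection from the pool:

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
)

// syncStatements builds the SQL for one atomic sync of `table`
// (keyed by `key`) from the attached database dbNew into main.
func syncStatements(table, key string) []string {
	return []string{
		// Copy every row from the new file, overwriting changed rows.
		fmt.Sprintf("INSERT OR REPLACE INTO main.%s SELECT * FROM dbNew.%s", table, table),
		// Remove rows that no longer exist in the new file.
		fmt.Sprintf("DELETE FROM main.%s WHERE %s NOT IN (SELECT %s FROM dbNew.%s)",
			table, key, key, table),
	}
}

// SyncFromFile attaches newPath and applies the sync in a single
// transaction, so concurrent readers see either the old or the new
// data, never a mix. A corrupt new file aborts the transaction and
// leaves the old data untouched.
func SyncFromFile(ctx context.Context, db *sql.DB, newPath, table, key string) error {
	// ATTACH only affects one connection, so pin one from the pool.
	conn, err := db.Conn(ctx)
	if err != nil {
		return err
	}
	defer conn.Close()

	if _, err := conn.ExecContext(ctx, "ATTACH DATABASE ? AS dbNew", newPath); err != nil {
		return err
	}
	defer conn.ExecContext(ctx, "DETACH DATABASE dbNew")

	tx, err := conn.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	for _, stmt := range syncStatements(table, key) {
		if _, err := tx.Exec(stmt); err != nil {
			tx.Rollback()
			return err
		}
	}
	return tx.Commit()
}

func main() {
	for _, s := range syncStatements("repo", "id") {
		fmt.Println(s)
	}
}
```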