I have a Golang process that runs SQL queries on a 400MB SQLite file.
I am using https://github.com/mattn/go-sqlite3 with the connection string:
file:mydb.sqlite?mode=ro&_journal=DELETE
When run in Docker on my dev machine it needs only 20MB of RAM, but on Cloud Run any instance smaller than 512MB returns HTTP 500 with a memory-exceeded error in the logs.
docker diff x
shows that the DB file is not modified (modification would, I assume, cause gVisor to copy the whole binary SQLite DB file into RAM in order to change it).
How the docker image is built
I am copying the SQLite DB file into the image with the source code:
FROM golang:latest
...
COPY . /go/src/api
I have a global var in my Go file: var db *sqlx.DB
This gets set in the main function, before ListenAndServe:
conStr := fmt.Sprintf("file:%s?mode=ro&_journal=DELETE", *fileName)
dbConn, err := sqlx.Open("sqlite3", conStr)
if err != nil {
    log.Fatal(err)
}
db = dbConn
I query the db within a HTTP request:
err := db.Select(&outDataSet, "SELECT...", arg1, arg2)
Why this must be an issue with the Cloud Run environment
docker stats
never goes above 20MB when run locally.
Limiting docker run to 20MB of RAM also runs fine on my dev machine:
docker run \
--memory=20m \
--memory-swap=20m \
The Cloud Run "Container Memory Allocation" metric also stays well below 128M:
https://console.cloud.google.com/monitoring/metrics-explorer
Thanks.
According to the official documentation:
Configuring Memory Limits
I would also suggest reviewing:
Are your container instances exceeding memory?
It seems that your container's file system is using the memory. Note that Cloud Run's local file system is an in-memory file system, so files the container places there count against the instance's memory limit, which likely explains why a 400MB database file overwhelms instances smaller than 512MB.