We have around 800 on-premises customers with databases (MS SQL Server and Oracle) ranging from 20 to 400 GB. My team is architecting an Azure version of the product, and I would like to get some advice.
Our idea is to use Azure SQL Database, with every customer (tenant) getting its own database. This is important because customers decide for themselves whether to update to a newer version. I know that sounds strange for cloud software, but for FDA compliance they must validate every new version, which can take weeks. That means we could have two or three "active" versions at once, for example 30% of the customers on V1, 60% on V2 and 10% on V3.
The business layer will be implemented with Service Fabric so that we can scale as we gain new cloud customers. At any point in time the cluster will host those two or three versions of our software side by side. Each tenant connects to the matching "logic" version, and those services in turn connect to the tenant's database.
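To make that concrete, here is a minimal sketch of the kind of tenant catalog we have in mind: a lookup keyed by tenant id that yields the tenant's validated application version and its database connection string. Every name, endpoint and connection string below is a placeholder, not an actual design.

```python
# Hypothetical tenant catalog; all names and connection strings are
# placeholders used only to illustrate the routing idea.
TENANT_CATALOG = {
    "tenant-0042": {"app_version": "V2",
                    "db": "Server=tcp:myserver.database.windows.net;Database=tenant0042;..."},
    "tenant-0317": {"app_version": "V1",
                    "db": "Server=tcp:myserver.database.windows.net;Database=tenant0317;..."},
}

# One named Service Fabric application instance per validated version,
# deployed side by side in the same cluster.
VERSION_ENDPOINTS = {
    "V1": "fabric:/OurApp/BusinessLogicV1",
    "V2": "fabric:/OurApp/BusinessLogicV2",
    "V3": "fabric:/OurApp/BusinessLogicV3",
}

def resolve(tenant_id: str) -> tuple[str, str]:
    """Return (logic endpoint, database connection string) for a tenant."""
    entry = TENANT_CATALOG[tenant_id]
    return VERSION_ENDPOINTS[entry["app_version"]], entry["db"]
```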
And now to my question: can this work? The machines in the cluster may need to connect to perhaps 300 different databases. On-premises, the connection pool uses the connection string and the user as the key for reusing connections. We are hoping the cloud somehow works differently and is not limited by the number of distinct connection strings. Does it use pools at all? If it does, what is the impact (memory, performance, ...)?
Any other ideas about implementing it?
Connection pooling works the same way in the cloud as on-premises: it is a client-side mechanism of the data access layer (ADO.NET in your case), so moving the databases to Azure SQL does not change it. Connections are pooled per distinct connection string (plus the identity, when integrated security is used), and you can connect to as many databases as you want. With ~300 tenant databases you simply end up with ~300 independent pools, each holding a handful of mostly idle connections; the cost is the memory for those idle connections on both client and server, which you can bound with the pool-size settings in the connection string.
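To illustrate the "one pool per distinct connection string" behaviour, here is a small sketch. It uses Python with SQLAlchemy/pyodbc rather than ADO.NET, but the pooling model is the same; server names, credentials and pool sizes are placeholders.

```python
# One pool per distinct connection string; sizes kept small because most
# tenant databases are idle most of the time. All names are placeholders.
from functools import lru_cache
from sqlalchemy import create_engine, text

@lru_cache(maxsize=None)    # one Engine (and therefore one pool) per URL
def engine_for(tenant_db_url: str):
    return create_engine(
        tenant_db_url,
        pool_size=2,        # resident connections per tenant pool
        max_overflow=3,     # short bursts may open a few extra
        pool_recycle=1800,  # drop stale connections proactively
    )

def run_query(tenant_db_url: str, sql: str):
    with engine_for(tenant_db_url).connect() as conn:
        return conn.execute(text(sql)).fetchall()

# 300 tenants -> 300 small pools; the worst-case idle-connection footprint
# is roughly pool_size x number_of_tenants.
url = ("mssql+pyodbc://user:password@myserver.database.windows.net/"
       "tenant0042?driver=ODBC+Driver+18+for+SQL+Server")
rows = run_query(url, "SELECT 1 AS ok")
```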
Have a look at Azure SQL elastic pools for a cost-effective way to manage many databases in the cloud: the pooled databases share a single allocation of resources, which fits the "many small, mostly idle tenant databases" pattern well, and applications connect to a pooled database exactly as they would to a standalone one.
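If it helps, here is a rough sketch of provisioning a pool and placing a tenant database in it with the azure-mgmt-sql Python SDK (track 2). Resource names, the region and the capacity are all placeholders, and model fields can differ between SDK versions, so treat this as an outline rather than a recipe.

```python
# Sketch: create a DTU-based elastic pool and put one tenant database in it.
# Assumes azure-identity and azure-mgmt-sql (track 2); every resource name,
# the region and the capacity below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

pool = client.elastic_pools.begin_create_or_update(
    "my-resource-group", "my-sql-server", "tenants-pool",
    {
        "location": "westeurope",
        "sku": {"name": "StandardPool", "tier": "Standard", "capacity": 200},
    },
).result()

client.databases.begin_create_or_update(
    "my-resource-group", "my-sql-server", "tenant0042",
    {"location": "westeurope", "elastic_pool_id": pool.id},
).result()
```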