We have a JDBC program which moves data from one database to another.
- The source database uses the Latin-9 (ISO 8859-15) character set.
- The destination database uses UTF-8 encoding, and column sizes are specified in bytes rather than characters.
- We have converted the DDL scripts of the source database to equivalent scripts for the destination database, keeping the column sizes as-is.
- In some cases, when the data contains special characters, its size after conversion to UTF-8 exceeds the destination column size, causing the JDBC program to fail.
I understand that UTF-8 is a variable-width encoding that uses 1-4 bytes per character; given this, the worst-case solution would be to allocate four times the column size in the destination database.
Is there a better estimate?
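To make the problem concrete, here is a minimal sketch (with made-up sample values) showing how the byte length of a string under UTF-8 can exceed its character count, using the standard `String.getBytes(Charset)` API:

```java
import java.nio.charset.StandardCharsets;

public class Utf8Width {
    public static void main(String[] args) {
        // Hypothetical sample values: pure ASCII, an accented Latin-9
        // character, and the euro sign.
        String[] samples = { "abc", "café", "€100" };
        for (String s : samples) {
            int chars = s.length();
            int bytes = s.getBytes(StandardCharsets.UTF_8).length;
            // "abc" stays 3 bytes; "café" grows to 5 (é is 2 bytes);
            // "€100" grows to 6 (€ is 3 bytes).
            System.out.println(s + ": " + chars + " chars, " + bytes + " bytes");
        }
    }
}
```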
Since there's no telling in advance exactly how much a text string will grow, I think all you can do is a trial run: convert the text to UTF-8 and generate a warning that certain columns need to be increased in size. ASCII (unaccented) characters remain single bytes, and the accented Latin-9 characters become 2 bytes each. If you look at the Latin-9 and UTF-8 tables, the only Latin-9 character that expands to 3 bytes is the euro sign (€, U+20AC); none requires 4. So doubling the column size covers everything except the euro sign, and tripling it is a guaranteed upper bound, but you'd still have to examine your Latin-9 text to see how much it will actually grow.
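The trial run described above could be sketched like this: scan each column's values, compute the widest UTF-8 encoding, and compare it to the declared byte size. The column data and the declared size here are hypothetical; in the real program the values would come from a JDBC `ResultSet` over the Latin-9 source table:

```java
import java.nio.charset.StandardCharsets;
import java.util.List;

public class ColumnAudit {
    // Returns the widest UTF-8 encoding, in bytes, among the given values.
    static int maxUtf8Bytes(List<String> values) {
        int max = 0;
        for (String v : values) {
            max = Math.max(max, v.getBytes(StandardCharsets.UTF_8).length);
        }
        return max;
    }

    public static void main(String[] args) {
        // Hypothetical column data; "São Paulo" is 9 characters but
        // 10 bytes in UTF-8, so an 8-byte column is too small.
        List<String> city = List.of("Zürich", "São Paulo", "Lodz");
        int declared = 8;                 // column size in bytes in the target DDL
        int needed = maxUtf8Bytes(city);
        if (needed > declared) {
            System.out.println("column needs at least " + needed + " bytes");
        }
    }
}
```

Running this once per textual column would give you the exact sizes to use in the converted DDL, instead of a blanket multiplier.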