Here is a scenario.
We are programmatically collecting some data into a JavaScript object that contains Polish characters.
Then we convert it to a CSV string with the json2csv library and upload it to an Azure blob with the uploadBlockBlob method from the @azure/storage-blob library.
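For reference, a minimal sketch of the upload side, assuming a connection string in an environment variable and placeholder container/blob names; converting to an explicit UTF-8 Buffer (and declaring a charset in the content type) pins down exactly which bytes land in the blob:

```js
const { Parser } = require('json2csv');
const { BlobServiceClient } = require('@azure/storage-blob');

async function uploadCsv(rows) {
  const csv = new Parser().parse(rows);    // JS objects -> CSV string
  const body = Buffer.from(csv, 'utf8');   // explicit UTF-8 bytes

  const service = BlobServiceClient.fromConnectionString(
    process.env.AZURE_STORAGE_CONNECTION_STRING
  );
  const container = service.getContainerClient('scraped-data'); // placeholder name
  await container.uploadBlockBlob('data.csv', body, body.length, {
    blobHTTPHeaders: { blobContentType: 'text/csv; charset=utf-8' },
  });
}
```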
After a while, an Azure Function with a blob storage trigger picks the file up. We get the "Myblob" binding with the CSV content as a string, use the Papaparse library to convert it back to an object, and finally use the content of that object to update the database through the mssql library.
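And a minimal sketch of the consuming side, assuming a Node.js function whose blob trigger binding is named myBlob here; the binding hands the function a Buffer, so the decoding actually happens at toString(), not inside Papaparse:

```js
const Papa = require('papaparse');

module.exports = async function (context, myBlob) {
  const csv = myBlob.toString('utf8');               // Buffer -> string, decoded as UTF-8
  const { data } = Papa.parse(csv, { header: true }); // rows as objects keyed by header
  context.log(`Parsed ${data.length} rows`);
  // ...update the database from `data` here (see the mssql sketch below)
};
```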
Somewhere in this process we are losing the Polish characters.
json2csv does not seem to expose an "encoding" option when converting to a string, and neither does uploadBlockBlob. With Papaparse, forcing the encoding to "UTF-8" has no effect on the result (changing it to cp1250 does not help either). The original content is scraped from a web page by software running on a Windows machine.
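Worth noting: a JavaScript string is already decoded text (UTF-16 in memory), so encoding options only matter at byte boundaries, and Papaparse's encoding setting applies to reading files/streams, not to strings you pass in. One way to locate the corruption is to compare the raw bytes of a known value at each stage; a quick check, using a hypothetical sample:

```js
// In UTF-8, "Łódź" should serialize to the bytes c5 81 c3 b3 64 c5 ba.
// Dump the hex at each stage (before upload, inside the function, before
// the SQL call) and see where the sequence first changes.
const sample = 'Łódź';
console.log(Buffer.from(sample, 'utf8').toString('hex')); // -> c581c3b364c5ba
```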
Any ideas on how to preserve the encoding throughout the pipeline?
Closing the question, as it turned out to be my mistake: I was pushing a VarChar parameter into an NVarChar column, which is why the non-Latin characters got mangled.
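For anyone hitting the same thing, a minimal sketch of the corrected mssql call, with hypothetical table and column names; declaring the parameter as sql.NVarChar makes the driver send Unicode instead of down-converting through the connection's single-byte codepage:

```js
const sql = require('mssql');

async function updateName(pool, id, name) {
  await pool.request()
    .input('id', sql.Int, id)
    .input('name', sql.NVarChar, name)  // was sql.VarChar: Polish characters got replaced
    .query('UPDATE People SET Name = @name WHERE Id = @id');
}
```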