I am using node-odbc to connect a Node.js-based application to a legacy IBM DB2 database.
- On Windows 7
- The DB2 ODBC drivers are 32-bit, therefore the node-odbc binaries have been built with
node-gyp clean configure build --arch=ia32 --msvs_version=2015
- Node.js 4.4.5 (32-bit)
This mostly works fine. However, I currently have Unicode disabled in the node-odbc binaries (the UNICODE flag in binding.gyp) because of the following problem:
If I enable UNICODE, I can
- Correctly retrieve data with simple SQL statements (like query = select * from sysibm.sysdummy1)
- Run queries with non-string parameter bindings (like query = select * from syscat.tables where tableid = ?, bindings = [0])
However, if I run a query with string parameters (like query = select * from syscat.tables where tabname = ?, bindings = ['SYSDUMMY1']), the ODBC driver throws the error message
{ [Error: [IBM][CLI Driver] CLI0002W Data truncated. SQLSTATE=01004]
  errors: [ { message: '[IBM][CLI Driver] CLI0002W Data truncated. SQLSTATE=01004', state: '01004' } ],
  error: '[node-odbc] SQL_ERROR',
  message: '[IBM][CLI Driver] CLI0002W Data truncated. SQLSTATE=01004',
  state: '01004' }
Its explanation in the IBM documentation doesn't help me.
I looked at the UNICODE-enabled snippet in the relevant node-odbc code, which writes the string into the buffer that is passed to the ODBC driver:
#ifdef UNICODE
params[i].ParameterType = SQL_WVARCHAR;
params[i].BufferLength = (length * sizeof(uint16_t)) + sizeof(uint16_t);
#else
params[i].ParameterType = SQL_VARCHAR;
params[i].BufferLength = string->Utf8Length() + 1;
#endif
params[i].ParameterValuePtr = malloc(params[i].BufferLength);
params[i].StrLen_or_IndPtr = SQL_NTS;//params[i].BufferLength;
#ifdef UNICODE
string->Write((uint16_t *) params[i].ParameterValuePtr);
#else
string->WriteUtf8((char *) params[i].ParameterValuePtr);
#endif
Apparently, string->Write writes the string, including a null terminator, into the buffer ParameterValuePtr. SQL_NTS means null-terminated string (see the explanation of StrLen_or_IndPtr in the IBM documentation).
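Concretely, the numbers for the binding 'SYSDUMMY1' work out like this (my assumption, per the v8 docs shipped with Node 4.x: Length() counts UTF-16 code units, and Write() appends a 0 terminator by default):

#include <cstdint>
#include <cstdio>

int main() {
    // v8's string->Length() for "SYSDUMMY1" yields 9 UTF-16 code units
    // (assumption: Length() counts code units, per the Node 4.x v8 docs).
    int length = 9;
    // Same formula as the UNICODE branch above:
    size_t bufferLength = (length * sizeof(uint16_t)) + sizeof(uint16_t);
    std::printf("BufferLength = %zu bytes\n", bufferLength);  // prints 20
    // Write() then stores 9 code units (18 bytes) plus a 2-byte terminator
    // into that buffer, i.e. exactly 20 bytes.
    return 0;
}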
Judging from the error message, I assume the buffer does not have the correct size, which is calculated as (length * sizeof(uint16_t)) + sizeof(uint16_t) (the string's length in UTF-16 code units, two bytes each, plus one additional code unit for the null terminator):
- Either it's too small, so the ODBC driver cannot find the null character which terminates the string
- Or it's too big, so maybe the driver complains that the null character comes before the end of the buffer. This is really just an assumption!
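Against those two hypotheses, these are the adjustments I would try first, sketched against the snippet above (I have not verified which variant, if either, the DB2 CLI driver expects):

#ifdef UNICODE
params[i].ParameterType = SQL_WVARCHAR;
params[i].BufferLength = (length * sizeof(uint16_t)) + sizeof(uint16_t);
// Variant 1: pass an explicit byte count (excluding the null terminator)
// instead of relying on SQL_NTS:
params[i].StrLen_or_IndPtr = length * sizeof(uint16_t);
// Variant 2: wherever ColumnSize is derived for the SQLBindParameter call,
// pass it in characters (length) rather than bytes (BufferLength).
#endif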
So: I wonder how the code should be corrected so that string-like parameters can be passed to the ODBC driver. Any ideas?
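For reference, here is a minimal standalone ODBC program I can test suggestions against. It is only a sketch: the DSN and credentials are placeholders, error handling is omitted, and the ColumnSize argument is my guess, since the snippet above doesn't show what node-odbc actually passes there:

// repro.cpp -- the same bind at the raw ODBC level.
// Build as 32-bit and link against odbc32.lib.
#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <sqlucode.h>
#include <cstdio>

int main() {
    SQLHENV env; SQLHDBC dbc; SQLHSTMT stmt;
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
    SQLConnect(dbc, (SQLCHAR*)"MYDB2", SQL_NTS,            // placeholder DSN
               (SQLCHAR*)"user", SQL_NTS, (SQLCHAR*)"pass", SQL_NTS);
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

    wchar_t value[] = L"SYSDUMMY1";       // wchar_t is 2 bytes on Windows
    SQLLEN bufferLength = sizeof(value);  // (9 + 1) * 2 = 20 bytes, like node-odbc
    SQLLEN strLenOrInd = SQL_NTS;         // null-terminated, like node-odbc

    SQLPrepare(stmt,
        (SQLCHAR*)"select tabname from syscat.tables where tabname = ?",
        SQL_NTS);
    SQLBindParameter(stmt, 1, SQL_PARAM_INPUT,
                     SQL_C_WCHAR,           // C type of the bound buffer
                     SQL_WVARCHAR,          // SQL type of the parameter
                     (SQLULEN)bufferLength, // ColumnSize: bytes? characters?
                     0, (SQLPOINTER)value, bufferLength, &strLenOrInd);

    SQLRETURN ret = SQLExecute(stmt);
    std::printf("SQLExecute returned %d\n", (int)ret);
    return 0;
}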