Can I catch azure synapse pipeline python pyodbc exceptions?


I have a sproc that contains a PRINT statement which prints a 2,500-character NVARCHAR(MAX) variable.

When I execute it in SSMS, it works fine. I added a line after the PRINT that dumps a 'success!' row into a logging table, and that works too. No errors.

In Synapse I have a pipeline and a notebook with a Python script that calls my sproc. It reports that it executed successfully and no exception is raised, yet the 'success!' row never appears. Another logging line, 'before print!', does run, so the PRINT is what breaks it.

I have two questions. First, why is PRINT breaking it? I suspect something to do with the character limit or with pyodbc. But the main question is: how can I get at the exception?

My code is as follows within the notebook:

import pyodbc

try:
    driver = '{ODBC Driver 18 for SQL Server}'
    sqlconnsettings = (
        'DRIVER=' + driver +
        ';SERVER=' + datadb_sqlserver_url +
        ';PORT=1433;DATABASE=' + datadb_db +
        ';UID=' + datadb_user_name +
        ';PWD=' + datadb_password
    )

    connection = pyodbc.connect(sqlconnsettings)
    connection.autocommit = True
    connection.execute(sqlscript)
    connection.close()
    logger.info('SPROC called successfully!')
except pyodbc.Error as e:
    error_message = str(e)
    logger.error('SPROC call FAILED! ' + error_message)
    raise

I was hoping that if the SQL generated an error, it would bubble up. As it stands I have no visibility into what is happening, or whether it is somehow failing silently. I realise I might have to expand my except clause to cover other error types, but that seems moot at the moment given that 'SPROC called successfully!' is logged when I run the pipeline.
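One thing worth checking: pyodbc only surfaces a server-side error once the driver actually reads the batch that produced it, and PRINT output creates extra messages/result sets, so an error raised after the PRINT can sit unread while execute() returns cleanly. A minimal sketch of a drain loop that forces late errors to surface (`drain_results` is a name I've made up for illustration; `Cursor.nextset()` is the real pyodbc API):

```python
def drain_results(cursor):
    """Step through every result set the server returned.

    An error the server raises after an earlier batch (e.g. after a
    PRINT) only becomes a Python exception once the driver advances
    past that batch, so draining all result sets forces it to surface.
    """
    # nextset() returns True while more result sets remain; a
    # server-side error raised mid-batch is thrown from this call.
    while cursor.nextset():
        pass
```

Used with an explicit cursor in place of `connection.execute(sqlscript)`, e.g. `cursor = connection.cursor()`, then `cursor.execute(sqlscript)` followed by `drain_results(cursor)` inside the same try block, so any late error lands in the existing `except pyodbc.Error` handler.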

Any advice on debugging?
