The server type is Database Engine, and the authentication type used when connecting through SSMS is Windows Authentication. I want to connect to the same database from Azure Databricks in the same way.
I have used this code for the connection:
jdbcHostname = "#####"
jdbcPort = "1433"
jdbcDatabase = "####"
jdbcUsername = "##########"
jdbcPassword = "####"

jdbcUrl = f"jdbc:sqlserver://{jdbcHostname}:{jdbcPort};database={jdbcDatabase};integratedSecurity=true"
connectionProperties = {
    "user": jdbcUsername,
    "authenticationScheme": "NativeAuthentication",
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
}
jdbcDF = spark.read.jdbc(url=jdbcUrl, table="######", properties=connectionProperties)
But I am getting this error:
The TCP/IP connection to the host #####, port 1433 has failed. Error: "#####. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall."
So, what should be the Python code for this connection?

The server may not be configured to accept TCP/IP connections, which could be the reason for the error above. To resolve the issue, enable TCP/IP for SQL Server as follows:
Open SQL Server Configuration Manager, expand SQL Server Network Configuration, and select the protocols for your instance. Enable TCP/IP by double-clicking on TCP/IP.
Additionally, under SQL Native Client Configuration, select Client Protocols, double-click on TCP/IP, and set 1433 as the default port.
Restart the SQL Server service. After that, as per this, use a Windows authenticated username and password with the JDBC driver to connect to the SQL database from a Databricks notebook. You can use the code below for the connection:
You will get the table data as output.
For more information, you can refer to this.