By default, when you create an AKS cluster, a service principal is created for that cluster.
That service principal can then be granted permissions on other Azure resources (a VM, for example) so that the cluster can interact with them and communicate over the network (aside from the general network settings, of course).
I really cannot understand when this is required and when it is not. For example, if I have a database running on a VM, do I need to grant the AKS service principal access to that VM for the cluster to communicate with it over the network?
Can someone provide guidance on this rather than general documentation? When does the service principal need to be set on other Azure resources, and when does it not? I cannot find a proper explanation for this. Thank you.
Regarding your question about the DB: you do not need to give the service principal any access to that VM. Since the database runs outside of Kubernetes, the cluster does not need to manage that VM in any way; it only needs network connectivity to it. The database could even be in a different data center or hosted on another cloud provider entirely, and applications running inside Kubernetes would still be able to communicate with it, as long as the traffic is allowed by firewalls, network security groups, etc.
I know you did not ask for generic documentation, but the documentation on AKS service principals puts it well:
In other words, the service principal is the identity that the AKS cluster authenticates with when it interacts with other Azure resources, such as:
To delegate access to other Azure resources, you can use the Azure CLI to assign a role to an assignee at a certain scope:
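As a minimal sketch, here is what such a role assignment can look like; the app ID, subscription, resource group, and registry name are placeholders you would replace with your own values. This example grants the cluster's service principal the `AcrPull` role on an Azure Container Registry so it can pull images:

```shell
# Grant the AKS cluster's service principal pull access to a container registry.
# <appId>, <subscription-id>, <resource-group>, and <acr-name> are placeholders.
az role assignment create \
  --assignee <appId> \
  --role AcrPull \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerRegistry/registries/<acr-name>
```

The same pattern applies to other resources: pick the role that matches what the cluster actually needs (e.g. `Network Contributor` on a subnet) and scope the assignment as narrowly as possible.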
Here is a detailed list of all the cluster identity permissions in use.