We use imperative (resource-based) authorization to secure endpoints that expose sensitive resources. A simple scenario looks like this:
public async Task<IActionResult> OnGetAsync(Guid documentId)
{
    Document = _documentRepository.Find(documentId);
    if (Document == null) return new NotFoundResult();

    // Evaluate the resource-based "EditPolicy" against the loaded document.
    var authorizationResult = await _authorizationService.AuthorizeAsync(User, Document, "EditPolicy");
    if (authorizationResult.Succeeded) return Page();

    // Authenticated but not authorized -> 403; anonymous -> trigger a login challenge.
    if (User.Identity?.IsAuthenticated == true) return new ForbidResult();
    return new ChallengeResult();
}
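For context, this handler lives in a Razor Page model with its dependencies injected through the constructor. A minimal sketch follows; the IDocumentRepository interface is a placeholder for whatever data access abstraction is in use:

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class EditDocumentModel : PageModel
{
    private readonly IAuthorizationService _authorizationService;
    private readonly IDocumentRepository _documentRepository; // hypothetical repository abstraction

    public EditDocumentModel(
        IAuthorizationService authorizationService,
        IDocumentRepository documentRepository)
    {
        _authorizationService = authorizationService;
        _documentRepository = documentRepository;
    }

    // The resource the page renders after authorization succeeds.
    public Document Document { get; set; }

    // OnGetAsync as shown above.
}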
Behind the scenes the _authorizationService.AuthorizeAsync call is handled by a custom AuthorizationHandler. Evaluating the authorization requirement is quite complex and involves information the application itself does not hold. It is therefore not possible to translate this authorization logic to other systems, such as the application's datastore. It is important to note that all authorization decisions should be handled exclusively by ASP.NET's authorization system.
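To illustrate, a handler backing "EditPolicy" might look roughly like the sketch below. The DocumentEditRequirement type and the IExternalEntitlementService dependency are hypothetical; they only stand in for the kind of external information that makes the decision impossible to express as a datastore query.

using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;

public class DocumentEditRequirement : IAuthorizationRequirement { }

public class DocumentEditHandler : AuthorizationHandler<DocumentEditRequirement, Document>
{
    // Hypothetical external source of entitlement data.
    private readonly IExternalEntitlementService _entitlements;

    public DocumentEditHandler(IExternalEntitlementService entitlements)
        => _entitlements = entitlements;

    protected override async Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        DocumentEditRequirement requirement,
        Document resource)
    {
        // The decision depends on data the datastore does not have,
        // so it cannot be pushed down into a query.
        if (await _entitlements.CanEditAsync(context.User, resource))
        {
            context.Succeed(requirement);
        }
    }
}

The handler and policy would be registered along these lines:

services.AddScoped<IAuthorizationHandler, DocumentEditHandler>();
services.AddAuthorization(options =>
    options.AddPolicy("EditPolicy", policy =>
        policy.AddRequirements(new DocumentEditRequirement())));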
This works fine for simple scenarios where the dataset to return is already heavily reduced upfront by other criteria. For example, an HTTP GET by id can match at most one record. When the endpoint returns multiple records, the same effect can be achieved by requiring search criteria.
public async Task<IActionResult> OnGetAsync(DocumentSearchCriteria searchCriteria)
{
    // Authorize each candidate record individually; only the documents
    // that pass "EditPolicy" end up in the result set.
    foreach (var document in _documentRepository.Find(searchCriteria))
    {
        var authorizationResult = await _authorizationService.AuthorizeAsync(User, document, "EditPolicy");
        if (authorizationResult.Succeeded) Documents.Add(document);
    }
    return Page();
}
There is quite some overhead in this approach: the datastore returns too much data, all of which is loaded into application memory before it is filtered. It becomes even more problematic when we cannot reduce the dataset upfront, which happens when the endpoint should return all potential matches. I am a firm believer that premature optimization is the root of all evil; however, we already have instances where the initial dataset exceeds 60,000 records, which is simply too much. When this happens we review the code and come up with some compromise. My goal is to discover alternative techniques, ideally a generic solution that we can adopt as an internal guideline for these scenarios.