Batch loading a field in Absinthe with Dataloader


I have an object in my Absinthe graphql schema that looks like this:

object :match do
    field(:id, non_null(:id))
    field(:opponent, non_null(:string))
    @desc "The number of votes that have been cast so far."
    field(:vote_count, non_null(:integer), resolve: &MatchResolver.get_vote_count/2)
    # etc...
end

I'm using a resolver for vote_count that performs an Ecto query using the parent match. This runs into the N+1 query problem, however, whenever a list of matches is queried. It currently looks like this:

  def get_vote_count(_root, %{source: %Match{} = match}) do
    count = match |> Ecto.assoc(:votes) |> Repo.aggregate(:count, :id)

    {:ok, count}
  end

I'm already using Dataloader to batch-load child entities, but I can't seem to get a custom run_batch function to work when using the Absinthe.Resolution.Helpers.dataloader function provided by Absinthe.

What's the recommended approach for implementing custom batch queries using dataloader/ecto? Can someone give an example, including the schema definition part?

1 Answer

Assuming you have already at some point done something like this:

Dataloader.add_source(Match, Match.data())
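For context, the usual place that happens is in the schema's `context/1` callback, alongside the Dataloader plugin. A minimal sketch, where `Repo` and a `Match.data/0` function in a `Match` context module are assumed names based on the question:

```elixir
# In your Absinthe schema module:
def context(ctx) do
  loader =
    Dataloader.new()
    |> Dataloader.add_source(Match, Match.data())

  Map.put(ctx, :loader, loader)
end

def plugins do
  [Absinthe.Middleware.Dataloader] ++ Absinthe.Plugin.defaults()
end

# In the Match context module (assumed name):
def data do
  Dataloader.Ecto.new(Repo, query: &query/2)
end

# Hook for filtering/ordering per-association queries; pass-through here.
def query(queryable, _params), do: queryable
```

With that in place, `%{context: %{loader: loader}}` is available as the third argument of every resolver.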

Then I'm pretty sure you want to use a 3-arity resolver function like this:

def get_vote_count(_, %{source: %Match{} = match}, %{context: %{loader: loader}}) do
  votes =
    loader
    |> Dataloader.load(Match, {:many, Match.Vote}, match_id: match.id)
    # Note: running the loader synchronously inside the resolver forfeits
    # cross-field batching; see the on_load-based approach below for that.
    |> Dataloader.run()
    |> Dataloader.get(Match, {:many, Match.Vote}, match_id: match.id)

  {:ok, Enum.count(votes)}
end

I just finished writing an Absinthe/Phoenix backend, so it's fresh in my head, but I actually haven't tested this at all. It probably needs some tweaking. Hopefully it points you in the right direction.
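For the custom batch-query part of the original question, Dataloader.Ecto supports a `run_batch` option on the source, which is designed for exactly this kind of aggregate. A sketch adapted from the Dataloader.Ecto documentation's post-count example; module names like `Match.Vote` and `Repo` are assumptions from the question:

```elixir
import Ecto.Query

# Source definition with a custom run_batch (in the Match context, say):
def data do
  Dataloader.Ecto.new(Repo, query: &query/2, run_batch: &run_batch/5)
end

def query(queryable, _params), do: queryable

# Batches every vote_count request in the document into one grouped query.
def run_batch(Match.Vote, query, :vote_count, match_ids, repo_opts) do
  counts =
    query
    |> where([v], v.match_id in ^match_ids)
    |> group_by([v], v.match_id)
    |> select([v], {v.match_id, count(v.id)})
    |> Repo.all(repo_opts)
    |> Map.new()

  # Results must be returned in the same order as the inputs;
  # matches with no votes default to 0.
  for id <- match_ids, do: Map.get(counts, id, 0)
end

# Fall back to the default implementation for all other batches.
def run_batch(queryable, query, col, inputs, repo_opts) do
  Dataloader.Ecto.run_batch(Repo, queryable, query, col, inputs, repo_opts)
end
```

The resolver then loads through that batch key and uses `on_load` so the count is resolved after all loads in the document have been collected:

```elixir
import Absinthe.Resolution.Helpers, only: [on_load: 2]

def get_vote_count(%Match{} = match, _args, %{context: %{loader: loader}}) do
  loader
  |> Dataloader.load(Match, {:one, Match.Vote}, vote_count: match.id)
  |> on_load(fn loader ->
    {:ok, Dataloader.get(loader, Match, {:one, Match.Vote}, vote_count: match.id)}
  end)
end
```

In the schema the field would then use the 3-arity resolver, e.g. `resolve: &MatchResolver.get_vote_count/3`. Unlike running the loader synchronously, this form lets Absinthe batch all `vote_count` fields across a list of matches into the single grouped query above.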