Is there a more efficient way to do this, maybe with F expressions? Some way to reduce the number of hits to the DB?
# 1st way: hits the DB twice per object (one SELECT, one UPDATE)
def something():
    pks = [4, 2, 1, 3, 0]
    for i in range(len(pks)):
        obj = mymodel.objects.get(pk=pks[i])  # SELECT by primary key
        obj.attr = i
        obj.save()                            # UPDATE
# 2nd way: hits the DB once per object, plus once for the queryset
from django.db.models import Case, When

def something():
    pks = [4, 2, 1, 3, 0]
    # Case/When preserves the ordering given by pks
    order = Case(*[When(pk=pk, then=pos) for pos, pk in enumerate(pks)])
    query = mymodel.objects.filter(pk__in=pks).order_by(order)
    for i, obj in enumerate(query):  # one query evaluates the whole queryset
        obj.attr = i
        obj.save()                   # one UPDATE per object
Generally, making multiple database queries instead of one large query is something to avoid. However, your case seems to be an exception. (Note: I am basing this answer on the information provided; please don't edit your question to say you actually meant something else.)
A query that retrieves a single item by its primary key, like your mymodel.objects.get(pk=pks[i]), will be very fast no matter how many records you have; every RDBMS is designed to handle that really well.
However, you can make your function more efficient like this:
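(A minimal sketch of the idea, reusing mymodel, attr, and pks from the question: issue the UPDATE through the queryset so no object is fetched first.)

def something():
    pks = [4, 2, 1, 3, 0]
    for i, pk in enumerate(pks):
        # queryset.update() sends a single UPDATE statement; nothing is SELECTed first
        mymodel.objects.filter(pk=pk).update(attr=i)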
Now it hits the DB only once per object. The query above just translates to a single UPDATE statement:
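(Roughly; the actual table and column names depend on your app and model, assumed here to be myapp_mymodel with an attr column.)

UPDATE myapp_mymodel SET attr = 0 WHERE id = 4
-- ...and one such statement for each remaining (position, pk) pair in pks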