Maximizing the speed of large peewee insert queries in Python


I have a table in a database with only two fields. I need to insert a certain number of new rows into it, and I don't care about their primary keys. The main problem is that the queries, even against a local database, are much slower than the rest of my code, and I would like to speed this up as much as possible. I'm using Python 3.10.10 with the peewee ORM; the database is MariaDB 10.8 on Windows 10 (Open Server Panel). Below is the function that I call from several subprocesses.

def insert_many_multi(self, data):
    with conn.atomic():
        if len(data) > 998:
            # Insert in batches of 998 rows to stay under the parameter limit
            for i in range(0, len(data), 998):
                Binary.insert_many(data[i:i+998], fields=[Binary.id, Binary.value]).execute()
        else:
            Binary.insert_many(data, fields=[Binary.id, Binary.value]).execute()

The data argument is a list of tuples of the form [(None, 0), (None, 0), (None, 0), ... , (None, 0)].
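Since the primary keys are all None and don't matter, one thing worth trying (a minimal sketch, not benchmarked against this schema) is to drop the AutoField from the insert entirely and let peewee's chunked() helper do the batching; the batch size of 500 here is an arbitrary assumption to tune for your server:

from peewee import chunked

def insert_values_only(data):
    # data is the same list of (None, 0) tuples; keep only the value column
    rows = [(value,) for _pk, value in data]
    with conn.atomic():
        # chunked() splits an iterable into fixed-size batches
        for batch in chunked(rows, 500):
            Binary.insert_many(batch, fields=[Binary.value]).execute()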

Database connection and model code:

conn = peewee.MySQLDatabase('Main', user='root', password='',
                            host='127.0.0.1', port=3306)
conn.close()

class BaseModel(peewee.Model):
    class Meta:
        database = conn


class Binary(BaseModel):
    id = peewee.AutoField(column_name='binary_id')
    value = peewee.IntegerField(column_name='value', null=True)
    class Meta:
        table_name = 'Binary'

I have tried running the bulk inserts both inside a transaction via .atomic() and without one; the difference in speed is minimal.
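If the ORM overhead of building the INSERT statements turns out to be the bottleneck, another comparison point is to bypass peewee's query builder and call the DB-API cursor's executemany() directly. This is only a sketch: it assumes the column names from the model above and a MySQL-style driver (pymysql/mysqlclient) that uses %s placeholders:

def insert_raw(data):
    # Let MariaDB assign binary_id; only the value column is sent
    rows = [(value,) for _pk, value in data]
    with conn.atomic():
        cursor = conn.cursor()
        cursor.executemany(
            "INSERT INTO `Binary` (`value`) VALUES (%s)",
            rows,
        )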
