SQLAlchemy inserting millions of rows inefficiently


It takes a long time to insert 100,000 (user, password) tuples.

def insertdata(db, name, val):
    i = db.insert()
    i.execute(user=name, password=val)  # one INSERT statement (and round trip) per row
#-----main-------
tuplelist = readfile("C:/py/tst.txt")  #parse file is really fast
mydb = initdatabase()
for ele in tuplelist:
    insertdata(mydb,ele[0],ele[1])

Which function takes more time? Is there a way to find the bottleneck in Python? Can I avoid it by caching the rows and committing later?
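For finding the bottleneck, the standard-library cProfile module is the usual tool. A minimal sketch of profiling the loop above (load_all and insert.prof are illustrative names; readfile and initdatabase are your own helpers):

import cProfile
import pstats

def load_all():
    for ele in tuplelist:
        insertdata(mydb, ele[0], ele[1])

cProfile.run("load_all()", "insert.prof")   # profile the whole loop, save stats to a file
pstats.Stats("insert.prof").sort_stats("cumulative").print_stats(10)  # top 10 by cumulative time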

1 Answer

zzzeek (Best Answer)

Have the DBAPI handle iterating through the parameters: pass a list of parameter dictionaries to a single execute() call, and SQLAlchemy will run it as one executemany() instead of 100,000 separate statements.

def insertdata(db, tuplelist):
    i = db.insert()
    # a list of parameter dicts triggers the DBAPI's executemany() path
    i.execute([dict(user=elem[0], password=elem[1]) for elem in tuplelist])
#-----main-------
tuplelist = readfile("C:/py/tst.txt")  #parse file is really fast
mydb = initdatabase()
insertdata(mydb,tuplelist)
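As a usage note, the same executemany pattern can be wrapped in one explicit transaction, which also answers the "commit later" part of the question: nothing is committed until the whole batch succeeds. A sketch, assuming a users Table behind initdatabase(); the table definition and connection URL here are assumptions, not from the original post:

from sqlalchemy import MetaData, Table, Column, String, create_engine

engine = create_engine("sqlite:///tst.db")        # assumed connection URL
metadata = MetaData()
users = Table("users", metadata,
              Column("user", String),
              Column("password", String))
metadata.create_all(engine)

with engine.begin() as conn:                      # one transaction, committed on success
    conn.execute(users.insert(),
                 [{"user": u, "password": p} for u, p in tuplelist])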