How to solve OdbError in Abaqus Python script?


I am running a 3D solid model through an Abaqus Python script. The model is supposed to be analyzed 200 times, so it is wrapped in a for loop (for i in range(200):). Sometimes I receive the following error and the analysis terminates, and I can't figure out the reason.

Odb_0 = session.openOdb(name='Job-1' + '.odb')

OdbError: the .lck file for the output database D:/abaqus/Model/Job-1.odb indicates that the analysis Input File Processor is currently modifying the database. The database cannot be opened at this time.

Note that all the variables, including "Odb_0", are deleted at the end of each loop iteration before the next one starts.

There are 2 answers below

BEST ANSWER

From the Abaqus documentation

The lock file (job_name.lck) is written whenever an output database file is opened with write access, including when an analysis is running and writing output to an output database file. The lock file prevents you from having simultaneous write permission to the output database from multiple sources. It is deleted automatically when the output database file is closed or when the analysis that creates it ends.
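Since the lock file is deleted when the analysis ends or the database is closed, one defensive workaround is to poll for the .lck file before reopening the .odb. A minimal sketch (the helper name, timeout, and poll interval are illustrative, not part of the Abaqus API):

```python
import os
import time

def wait_for_unlock(odb_path, timeout=60.0, poll=0.5):
    """Block until the .lck file guarding odb_path disappears.

    Returns True if the lock cleared within `timeout` seconds,
    False otherwise.
    """
    # Abaqus writes the lock next to the .odb with the same base name.
    lck_path = os.path.splitext(odb_path)[0] + '.lck'
    deadline = time.time() + timeout
    while os.path.exists(lck_path):
        if time.time() > deadline:
            return False
        time.sleep(poll)
    return True

# Inside the Abaqus script you would then open the database only once
# the lock is gone, e.g.:
#
# if wait_for_unlock('Job-1.odb'):
#     Odb_0 = session.openOdb(name='Job-1.odb')
```

This does not remove the underlying race (the solver may still be writing), but it avoids crashing the loop the moment a stale or still-active lock is encountered.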

Before deleting your previous analysis, make sure that all processes connected with that simulation have terminated. There are several ways to do so:

  • Launching the simulation through subprocess.Popen gives you much more control over the process (e.g. waiting until it ends, writing a specific log, etc.);
  • Naming your simulations differently (e.g. 'Job-1', 'Job-2', etc.) and deleting old ones with a delay (e.g. deleting 'Job-1' only once 'Job-3' has started);
  • Less preferable: using the time module to sleep for a fixed delay before reopening the output database.
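The first option above can be sketched as follows. The exact Abaqus command line ('abaqus job=Job-1 interactive') depends on your installation and is an assumption here; the helper itself is plain standard-library Python:

```python
import subprocess

def run_and_wait(cmd, log_path=None):
    """Launch a solver process and block until it has fully exited.

    `cmd` is the command list, e.g. ['abaqus', 'job=Job-1', 'interactive']
    (illustrative; check your installation's command line).
    Returns the process exit code.
    """
    log = open(log_path, 'w') if log_path else None
    try:
        proc = subprocess.Popen(cmd, stdout=log,
                                stderr=subprocess.STDOUT if log else None)
        # wait() guarantees the job has exited, so Abaqus has released
        # its .lck file before the script touches the .odb again.
        return proc.wait()
    finally:
        if log:
            log.close()
```

Calling run_and_wait(...) once per loop iteration, instead of firing the job and immediately opening the .odb, removes the window in which the Input File Processor still holds the lock.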
ANOTHER ANSWER

I don't believe your problem will be helped by a change in element type.

The message and the .lck file indicate an access conflict on the database: your script lost out and cannot open the .odb while another process holds write access to it.

I'm not sure what database Abaqus uses. I would have guessed that the input stream would have scanned the input file and written whatever records were necessary to the database before the solution and output processing began.