How to log queries that take a long time to execute


I have a group of around 900 queries stored in column x of table A.

I'm using a basic FOR loop to iterate over them and run each one with EXECUTE IMMEDIATE x. The problem is that I want to identify the queries that take a long time to execute (say, over 10 minutes) so that I can optimize them. Is there a way to measure the execution time so that, if a query runs for more than 10 minutes, I can stop it, log its ID in another table, and let the loop continue?

begin
   for query in (select x from A)
   loop
      -- Some method to put a listener on the execution time
      execute immediate query.x;
   end loop;
end;

Thanks


There are 2 answers below.

BEST ANSWER

In the simplest "Heath Robinson" terms, you can write "query X start" and "query X end" rows with timestamps to a log table and measure elapsed time from those. There was a time, back in the day, when you could have used tkprof, but these days the security policies of most companies dictate that developers cannot access the server trace files, so that option is out. Tkprof (with ALTER SESSION SET SQL_TRACE = TRUE) would have told you parse time, CPU time, elapsed time, rows fetched, and so on.
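
A minimal sketch of that logging idea, assuming a hypothetical log table query_log (query_id, started_at, ended_at) and assuming table A has an id column alongside x; adjust names and types to your schema:

declare
   v_start timestamp;
begin
   for query in (select id, x from A)     -- assumes A has an id column next to x
   loop
      v_start := systimestamp;
      insert into query_log (query_id, started_at) values (query.id, v_start);
      commit;                              -- commit so the start row survives a hang or kill

      execute immediate query.x;

      update query_log
         set ended_at = systimestamp
       where query_id = query.id
         and started_at = v_start;
      commit;
   end loop;
end;
/

A query that hangs or gets killed leaves a start row with no end time, which is itself useful evidence of where the time went.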

You can also switch SQL monitoring on for a query so that, when viewed through Grid Control/Enterprise Manager, you get to see real-time execution info. See this link for details:

http://www.oracle.com/technetwork/database/manageability/sqlmonitor-084401.html
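
As a rough illustration (SQL Monitoring needs the Tuning Pack licence; the table name and sql_id below are placeholders), you can force monitoring with the MONITOR hint and then pull the report:

-- Force monitoring of a statement with the MONITOR hint
select /*+ MONITOR */ count(*) from some_big_table;

-- Then pull the text report for it (replace the sql_id with your own)
select dbms_sqltune.report_sql_monitor(sql_id       => 'an_actual_sql_id',
                                       type         => 'TEXT',
                                       report_level => 'ALL')
  from dual;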

The second part of your question is tricky. I am not sure, or rather I doubt, that you can set a timeout on a query. Perhaps the Resource Manager has something, but I'm not sure. Assuming it doesn't, how would you go about killing a query that is running? You can't tell Oracle "kill query X", but you can tell it "kill session X", and that would kill your whole ... session. Thinking purely in terms of the technical challenge, rather than whether it is elegant or practical, you could implement each query as a scheduler job, loop to monitor it while it runs, and, if it goes over your time limit, stop the job (and with it the session it runs in). I'm not saying this is how you should do it, but it's one way I would attempt it if I needed to do this (a rough sketch follows), and I would expect it to throw up some curveballs along the way. It would, though, be an interesting challenge, one I might look at if I had the time and inclination.
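
A rough sketch of that job-based idea, under several assumptions: A has an id column next to x, slow_queries is a hypothetical table for logging the offending IDs, the 10-minute limit and 10-second polling interval are illustrative, stopping a job with force => true needs the MANAGE SCHEDULER privilege, and dbms_lock.sleep needs an explicit grant (dbms_session.sleep on newer releases):

declare
   v_job      varchar2(128);
   v_pending  number;
   v_waited   number;
begin
   for query in (select id, x from A)
   loop
      v_job := 'RUN_QUERY_' || query.id;

      -- Run the statement in its own session as a one-off scheduler job.
      dbms_scheduler.create_job(
         job_name   => v_job,
         job_type   => 'PLSQL_BLOCK',
         job_action => 'begin execute immediate ''' ||
                       replace(query.x, '''', '''''') || '''; end;',
         enabled    => true,
         auto_drop  => true);

      -- Poll until the job is gone (finished and auto-dropped) or over the limit.
      v_waited := 0;
      loop
         dbms_lock.sleep(10);              -- dbms_session.sleep on newer releases
         v_waited := v_waited + 10;

         select count(*) into v_pending
           from user_scheduler_jobs
          where job_name = v_job;

         exit when v_pending = 0;          -- job completed and was auto-dropped

         if v_waited >= 600 then           -- 10-minute limit, purely illustrative
            -- force => true kills the job's session; needs MANAGE SCHEDULER
            dbms_scheduler.stop_job(v_job, force => true);
            begin
               dbms_scheduler.drop_job(v_job, force => true);
            exception
               when others then null;      -- already dropped by auto_drop
            end;

            -- Log the offending query's id in another (hypothetical) table.
            insert into slow_queries (query_id, stopped_at)
            values (query.id, systimestamp);
            commit;
            exit;
         end if;
      end loop;
   end loop;
end;
/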

ANOTHER ANSWER

I have a bad feeling about this approach: flagging a query as "bad" purely because of its execution time is shaky and may lead to further mistakes. Execution time depends on many factors that may not be directly related to the query itself (e.g. the data set is small or huge, the cache configuration), and a short execution time (e.g. less than 10 minutes) does not necessarily mean the query is good. I would rather test the queries against their execution plans (see the sketch below), provided, of course, a good test data set is available.
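
A quick way to look at a plan without running the statement, assuming the default PLAN_TABLE is available (the SELECT below is just a placeholder; substitute one of the queries stored in A.x):

explain plan for
select /* paste one of the queries from A.x here */ * from dual;

select * from table(dbms_xplan.display);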

Anyway, Oracle should give you a way to configure some kind of expiration time; in that scenario you would surround your EXECUTE IMMEDIATE with a

BEGIN
   ...
EXCEPTION WHEN ... THEN ...;
END;

block, using the EXCEPTION handler to catch the timeout. When a timeout occurs, you store that exception somewhere so you can track down the "bad" query and let the loop continue.
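
A rough sketch of that idea, assuming a hypothetical query_errors logging table; which ORA- error actually signals the timeout depends on how the limit is enforced (Resource Manager, profile, etc.), so this handler simply logs whatever error comes back and moves on:

declare
   v_code number;
   v_msg  varchar2(4000);
begin
   for query in (select id, x from A)     -- assumes A has an id column next to x
   loop
      begin
         execute immediate query.x;
      exception
         when others then
            -- sqlcode/sqlerrm must be copied into variables before use in SQL
            v_code := sqlcode;
            v_msg  := substr(sqlerrm, 1, 4000);
            -- query_errors is a hypothetical logging table
            insert into query_errors (query_id, error_code, error_text, logged_at)
            values (query.id, v_code, v_msg, systimestamp);
            commit;
      end;
   end loop;
end;
/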