Today, a quick performance comparison came back to bite me, and it was all about percentages.
A few days ago, I was asked to do a quick comparison between two (2) test runs using AWR data. So, I produced a report similar to the following:
| Metric                    | Run #1 | Run #2 |
|---------------------------|--------|--------|
| % CPU Usage               | 0.3    | 0.3    |
| % DB Time on User I/O     | 53.0   | 14.0   |
| % DB Time on Network O/H  | 8.0    | 8.0    |
| % DB Time as CPU          | 39.0   | 78.0   |
What I had hoped to say was that the second run had less User I/O overhead (O/H) than the first one.
What people read instead was that the second run used more CPU than the first.
It was only after I produced a chart similar to the following that people got the message:

[Stacked bar chart: DB time in seconds per component, Run #1 vs Run #2]
They could see that the same amount of CPU time was consumed in both runs, which is why the overall CPU Usage % figure was identical; the % DB Time as CPU figure doubled only because the total DB time shrank, not because the second run burned more CPU. And most noticeably, the biggest improvement between the two (2) runs was the big drop in the amount of time that the database engine spent waiting for User I/O.
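The original chart isn't reproduced here, but a minimal sketch of the idea follows. Only the percentages come from the table above; the absolute DB-time figures are illustrative assumptions (the real AWR numbers aren't in this post). Run #1's total is assumed to be 100 seconds, and Run #2's total then follows from the fact that CPU seconds were the same in both runs.

```python
import matplotlib.pyplot as plt

# Percentages taken from the AWR comparison table above.
pct = {
    "User I/O":    {"Run #1": 53.0, "Run #2": 14.0},
    "Network O/H": {"Run #1": 8.0,  "Run #2": 8.0},
    "CPU":         {"Run #1": 39.0, "Run #2": 78.0},
}

# ASSUMPTION: total DB time for Run #1 (seconds). The real value isn't in
# the post; 100 s is chosen purely for illustration. Because CPU seconds
# were the same in both runs, Run #2's total follows from the CPU shares:
# 39% of 100 s = 78% of T2  =>  T2 = 50 s.
db_time = {"Run #1": 100.0, "Run #2": 100.0 * 39.0 / 78.0}

runs = list(db_time)
bottoms = [0.0, 0.0]
for component, shares in pct.items():
    # Convert each percentage share into absolute seconds per run.
    seconds = [shares[r] / 100.0 * db_time[r] for r in runs]
    plt.bar(runs, seconds, bottom=bottoms, label=component)
    bottoms = [b + s for b, s in zip(bottoms, seconds)]

plt.ylabel("DB time (seconds)")
plt.title("Where the DB time went (absolute, not %)")
plt.legend()
plt.show()
```

Plotted in absolute seconds, the CPU segments come out identical in both bars while the User I/O segment collapses from 53 s to 7 s, which is exactly the message the percentages obscured.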
Next time, I should spend some more time creating charts to accompany the raw figures.