How to Read an Oracle AWR Report?
Demystifying the Oracle AWR Report: A Beginner’s Guide to Performance Gold
If you’re an Oracle DBA or a performance tuner, you’ve likely heard of the AWR report. It’s often described as the ultimate tool for diagnosing database performance issues. But for many, opening an AWR report for the first time is like staring at the cockpit of a jumbo jet—overwhelming, complex, and full of mysterious gauges.
Fear not! This guide will break down the AWR report into digestible chunks and show you exactly where to look to find the clues you need. By the end of this blog, you’ll be able to navigate this powerful report with confidence.
What is an AWR Report, Anyway?
First, let’s define it. The Automatic Workload Repository (AWR) is a built-in performance monitoring and diagnostics feature of Oracle Database, available in Enterprise Edition (its use is licensed through the Diagnostics Pack). It automatically collects and stores performance statistics as regular snapshots, hourly by default.
The AWR report compares these statistics between two snapshots. It answers the fundamental question: “What was my database doing between two points in time?”
You generate a report by comparing a “begin snapshot” and an “end snapshot.” This allows you to analyze a specific period of poor performance, like a slow batch job or a user-reported application hang.
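As an aside, the snapshot interval and retention window are adjustable through the standard DBMS_WORKLOAD_REPOSITORY package. A minimal sketch (the interval and retention values below are illustrative, not recommendations):

BEGIN
  -- Both parameters are in minutes: snapshot every 30 minutes, keep 8 days of history
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,
    retention => 8 * 24 * 60);
END;
/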
Step 1: Generating the AWR Report
Before we can read it, we need to generate it. The most common way is from SQL*Plus:
-- Connect to your database as a privileged user (SYSDBA or a user with the DBA role)
SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql
The script will prompt you for:
- Report Type: Choose html (highly recommended for readability) or text.
- Number of Days of Snapshots to List: How many days back to list available snapshot IDs.
- Begin Snapshot ID: The starting point for your analysis.
- End Snapshot ID: The ending point.
- Report Name: The filename for the output.
Once generated, open the .html file in your browser.
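If you’re not sure which snapshot IDs bracket your problem window, you can also query the underlying AWR view before running the script. A quick sketch:

-- List snapshots from the last 24 hours with their time ranges
SELECT snap_id, begin_interval_time, end_interval_time
  FROM dba_hist_snapshot
 WHERE begin_interval_time > SYSDATE - 1
 ORDER BY snap_id;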
Step 2: The AWR Report “Cheat Sheet” – Where to Look First
An AWR report can be 100+ pages long. You don’t need to read every line. Follow this systematic approach to zero in on the problem.
Start with the Executive Summary: The “DB Time”
The very first section you must analyze is the “Report Summary” at the top.
- DB Time: This is the single most important metric. It represents the total time spent by the database servicing user requests. If DB Time is much higher than Elapsed Time, it means your database was processing many concurrent requests and was busy. A high DB Time is a clear indicator of a performance issue during the snapshot window.
- Load Profile: This gives you a high-level overview of the database’s workload:
  - Redo size per sec: Logging activity.
  - Logical Reads per sec: Data block access from the buffer cache.
  - Physical Reads per sec: Data block access from disk (I/O). High numbers here often point to I/O bottlenecks or insufficient memory.
  - Hard Parses per sec: High hard parsing is a killer for OLTP systems. It means the database is spending significant CPU time building new execution plans instead of executing queries (see the sketch below).
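To make the hard-parse point concrete, here is a hypothetical sketch (the orders table and bind name are made up) of the literal-versus-bind-variable difference:

-- Literals: every distinct value is a brand-new statement, forcing a hard parse
SELECT * FROM orders WHERE order_id = 1001;
SELECT * FROM orders WHERE order_id = 1002;

-- Bind variable: one shared cursor that is reused across executions
SELECT * FROM orders WHERE order_id = :order_id;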
The “Top 5” Timed Foreground Events: The Root Cause
This is arguably the most crucial section of the entire report (in newer releases it appears as “Top 10 Foreground Events by Total Wait Time”). It tells you where the database spent most of its time waiting.
- What to look for: The events at the top of this list are your primary suspects.
- DB CPU: If this is #1, your database is CPU-bound. You need to find what’s consuming the CPU (often SQL statements).
- db file sequential read: Typically single-block reads, often associated with index lookups. High waits here indicate I/O contention on index-related files.
- db file scattered read: Typically multi-block reads, associated with full table scans. High waits point to I/O contention or a lack of proper indexes.
- log file sync: Waits for commits. This is about write speed to the online redo logs. Slow I/O on the log disk or very frequent commits can cause this.
- enq: TX - row lock contention: This is a true application-level lock, where one session is waiting for another to release a row. This points to application design issues.

Pro Tip: Your tuning effort should always start with the #1 event in the “Top 5” list.
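If you want to cross-check the top events against live activity outside the report, Active Session History (also part of the Diagnostics Pack) can be queried directly. A rough sketch:

-- Which wait events dominated the last 30 minutes of ASH samples
SELECT event, COUNT(*) AS samples
  FROM v$active_session_history
 WHERE sample_time > SYSDATE - INTERVAL '30' MINUTE
   AND event IS NOT NULL
 GROUP BY event
 ORDER BY samples DESC;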
Step 3: Drilling Deeper into the Findings
Once you have a hypothesis from the “Top 5” events, drill down into the supporting sections.
1. If the issue is SQL-Related (which it often is):
Navigate to the “SQL Statistics” section.
- SQL ordered by Elapsed Time: Finds queries that took the longest total time to run.
- SQL ordered by CPU Time: Finds the biggest CPU consumers.
- SQL ordered by Gets (Logical Reads): Finds queries that access the most memory blocks. High logical reads can consume CPU and indicate poorly tuned SQL (missing indexes, bad joins).
- SQL ordered by Physical Reads: Finds queries that cause the most I/O from disk.
Find the same SQL ID appearing in multiple lists? You’ve likely found your culprit. Grab that SQL and start your standard tuning process (check execution plan, indexes, etc.).
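Once you have a suspect SQL ID, you can pull its text and captured plans straight out of the AWR history. A sketch (the SQL ID below is a placeholder):

-- Full statement text from the AWR repository (substitute your own SQL ID)
SELECT sql_text FROM dba_hist_sqltext WHERE sql_id = 'abcd1234efgh5';

-- Historical execution plans captured for that statement
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('abcd1234efgh5'));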
2. If the issue is I/O-Related:
Check the “IO Stats” section. Look at the “Tablespace IO Stats” and “File IO Stats” to see if a specific tablespace or datafile has exceptionally high read times (Av Rd(ms)). This can help pinpoint hotspots on your storage.
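For a live view of per-file read latency, something like the following sketch works. Note that V$FILESTAT counters are cumulative since instance startup (unlike the interval-based AWR numbers) and its times are in hundredths of a second:

-- Average read latency per datafile, in milliseconds
SELECT df.name,
       fs.phyrds,
       ROUND(fs.readtim * 10 / GREATEST(fs.phyrds, 1), 2) AS avg_read_ms
  FROM v$filestat fs
  JOIN v$datafile df ON df.file# = fs.file#
 ORDER BY avg_read_ms DESC;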
3. If the issue is Memory-Related:
- Buffer Cache Hit Ratio: Found in the “Instance Efficiency Indicators” section. While not a perfect metric, a very low percentage (e.g., < 90%) can suggest the need for a larger buffer cache or better SQL that does fewer logical reads.
- PGA Memory: Look at the “PGA Cache Hit %”. A low percentage might indicate memory-intensive operations like large sorts or hash joins spilling to disk (a quick live check follows below).
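The same PGA figures can be checked live in V$PGASTAT. A quick sketch:

-- Current PGA efficiency and total allocation
SELECT name, value, unit
  FROM v$pgastat
 WHERE name IN ('cache hit percentage', 'total PGA allocated');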
Step 4: Instance Efficiency Indicators – The “Percentages”
This section provides a quick health check. The goal for most of these is to be as close to 100% as possible.
- Buffer Nowait %: > 99% – should almost always be fine.
- Library Hit %: > 95% – indicates SQL and PL/SQL are being reused from the shared pool.
- Soft Parse %: > 95% – the opposite of hard parsing. You want this high.
- Execute to Parse %: A low ratio can indicate an application that parses statements repeatedly instead of reusing them (common in apps that don’t use bind variables).
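You can sanity-check these parse ratios with the cumulative counters in V$SYSSTAT. A quick sketch (values accumulate since instance startup, so compare two samples taken over your problem window):

-- Raw counters behind Soft Parse % and Execute to Parse %
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('parse count (total)', 'parse count (hard)', 'execute count');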
A Practical Example: A Slow Hour
Imagine users complain the system was slow between 2:00 PM and 3:00 PM.
1. Generate an AWR report for snapshots covering 2:00 PM to 3:00 PM.
2. Check the “Report Summary”: You see DB Time is 120 minutes, while Elapsed Time is 60 minutes. On average, two sessions were actively working in the database for the whole hour; the database was twice as busy as the wall clock. This confirms a problem.
3. Check the “Top 5 Timed Events”:
   - #1: db file sequential read – 45 minutes
   - #2: DB CPU – 35 minutes
4. Hypothesis: The primary issue is slow I/O for index reads, which is also driving high CPU consumption.
5. Drill down: Go to “SQL ordered by Gets” and “SQL ordered by CPU Time.” You find a specific SQL ID at the top of both lists: a query doing a massive number of index lookups.
6. Action: You now have a clear target. Analyze the execution plan for that specific SQL ID (using awrsqrpt.sql or by looking in ASH data) to see why it’s performing so poorly.
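Generating that single-statement report works just like Step 1; the script additionally prompts for the SQL ID you want to analyze:

SQL> @$ORACLE_HOME/rdbms/admin/awrsqrpt.sql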
Conclusion: Practice Makes Perfect
Reading an AWR report is a skill. The first few times will be slow, but the pattern becomes recognizable with practice. Remember this simple workflow:
DB Time -> Top 5 Events -> Drill into SQL/IO/Memory -> Formulate Action Plan.
Don’t try to boil the ocean. Use the AWR report to identify the top one or two bottlenecks. By systematically eliminating the biggest issues, you’ll dramatically improve your database’s performance and become the tuning hero your team needs.
Explore more with Learnomate Technologies!
🔗 Website: https://learnomate.org/
🔗 YouTube Channel: https://www.youtube.com/@LearnomateTechnologies
🔗 Want to explore more tech topics? Check out our detailed blog posts here: https://learnomate.org/blogs/
🔗 LinkedIn (Founder – Ankush Thavali): https://www.linkedin.com/in/ankushthavali/





