IMPACT · Auditing

Auditing & Observation

Structured, criterion-level observation that tells you not just what happened — but exactly why, and what to do next. Built around your standards, not generic benchmarks.

The Process

How our audits work

From setting up your form to receiving an AI-generated report — every step is designed to be fast, consistent, and deeply insightful.

πŸ—οΈ
01

Build Your Form

Select parameters and criteria, write auditor and auditee questions, set AI directives and flags

βš™οΈ
02

Configure PACT

Set weightage for each audit type β€” Peer, Internal, External, Self β€” to generate a composite score

πŸ”
03

Conduct the Audit

Rate against criteria, add comments, attach evidence, and flag observations in real time

πŸ‘₯
04

Stakeholder Review

Admin or Super Admin reviews and approves before the report is published to the auditee

πŸ€–
05

AI Report Generated

Narrative insights, root causes, performance shape, comparisons, strengths, and action plan β€” in under 60 seconds

πŸ“Š
06

KPI Feedback

Every stakeholder completes a short anonymous feedback β€” contributing to fairness, accuracy and professionalism scores

Scoring System

The PACT Score

PACT stands for the four audit types IMPACT supports: Peer, Assessor (Internal), Client (External), and Trailer (Self). You decide how much each type contributes to the final composite score.

  • Peer — scored by a colleague at the same level
  • Internal (Assessor) — scored by a manager or internal observer
  • External (Client) — scored by a customer, student, or external evaluator
  • Self (Trailer) — scored by the auditee themselves

The weights you assign determine how much each perspective counts. A final PACT composite score is calculated and placed in a performance band: Excellent · Good · Fair · Needs Support.

Audit Mode

PACT Weight Configuration
Peer — 20%
Internal (Assessor) — 40%
External (Client) — 30%
Self (Trailer) — 10%
Composite PACT Score: 3.8 / 5 — Good
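The weighted composite described above can be sketched in a few lines. This is an illustrative sketch only: the per-type scores and the band thresholds below are assumptions chosen for demonstration; only the weights come from the example configuration.

```python
# Illustrative sketch of a PACT-style weighted composite score.
# The per-type scores and band cut-offs are assumptions; only the
# weights reflect the example configuration shown above.

PACT_WEIGHTS = {"peer": 0.20, "internal": 0.40, "external": 0.30, "self": 0.10}

def composite_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of the four audit-type scores (each on a 1-5 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(scores[t] * weights[t] for t in weights)

def performance_band(score: float) -> str:
    """Map a composite score to a performance band. Thresholds are hypothetical."""
    if score >= 4.5:
        return "Excellent"
    if score >= 3.5:
        return "Good"
    if score >= 2.5:
        return "Fair"
    return "Needs Support"

# Hypothetical per-type scores that happen to yield the 3.8 composite shown above.
scores = {"peer": 4.0, "internal": 3.8, "external": 3.5, "self": 4.2}
pact = composite_score(scores, PACT_WEIGHTS)
print(round(pact, 1), performance_band(pact))  # 3.8 Good
```

Because the weights sum to 100%, the composite stays on the same 1-5 scale as the individual scores, so a single band table can be applied to any weight mix.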

Form Builder

Build your audit form

Every audit in IMPACT is powered by a structured form you design. Select what matters, define how it's measured, and tell the AI what to look for.

Form Sections
Step 1

Select Parameters & Criteria

Choose which performance dimensions you want to evaluate. Each parameter contains multiple criteria — the specific, observable behaviours that will be rated.

Communication & Delivery — 4 criteria
Student Engagement — 3 criteria
Lesson Planning & Structure — 5 criteria

In the Audit

Rating, comments & evidence

Auditors rate each criterion on a structured 1–5 scale with defined behavioural descriptors at every level. Every rating can be enriched with a typed comment and supporting evidence — building an unambiguous, defensible record of what was observed.

  • 1–5 Rating Scale — each level has a defined behavioural descriptor so every auditor applies the same standard
  • Comments — free-text observations tied to each criterion, not just an overall note
  • Evidence Attachments — photos, documents, screenshots, or recordings attached at criterion level
  • Flag Raising — auditors can raise a flag mid-audit for any criterion that warrants immediate attention
  • Partial Save — audits can be saved mid-way and resumed without losing data
Rating Scale — What each score means

5 — Exceptional: consistently exceeds the expected standard
4 — Good: meets and occasionally surpasses the standard
3 — Fair: meets the standard with some inconsistency
2 — Below standard: partially meets the expected behaviour
1 — Unsatisfactory: does not meet the standard; flag triggered
Live Audit · Criterion View
Criterion: Clarity of explanation during the session
★ Poor · ★ Below · ★ Fair · ★ Selected · ★ Excel
Auditor Comment: Explanations were clear overall but mathematical steps were skipped too quickly — students at the back appeared confused after Q3.
📎 photo_evidence_01.jpg
+ Add Evidence
Criterion 2 of 12 · Communication

Quality Assurance

Multi-stakeholder review before publishing

A completed audit does not go directly to the auditee. It enters a review queue where nominated admins or super admins can examine, annotate, and approve β€” ensuring quality, fairness, and accuracy before any report is published.

  • Auditor completes the audit — submits all ratings, comments, and evidence
  • Admin review queue — audit enters a pending-approval state; admins are notified
  • Reviewer actions — approve, request revision, add reviewer notes, or escalate to Super Admin
  • Super Admin override — final approval authority with visibility across all review stages
  • Published to auditee — report released only after approval; auditee is notified instantly
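The review flow above behaves like a small state machine: an audit can only be published after approval, and revisions loop back to the auditor. A minimal sketch, where the state names and allowed transitions are assumptions inferred from the steps listed (not IMPACT's actual implementation):

```python
# Hypothetical sketch of the audit review workflow as a state machine.
# State names and transitions are inferred from the review steps
# described above; the real product may model this differently.

ALLOWED_TRANSITIONS = {
    "submitted": {"in_review"},                          # enters the admin queue
    "in_review": {"approved", "revision_requested", "escalated"},
    "revision_requested": {"submitted"},                 # auditor resubmits
    "escalated": {"approved", "revision_requested"},     # Super Admin decides
    "approved": {"published"},                           # released to the auditee
    "published": set(),                                  # terminal state
}

def transition(state: str, new_state: str) -> str:
    """Move an audit to a new state, rejecting illegal jumps
    (e.g. publishing before approval)."""
    if new_state not in ALLOWED_TRANSITIONS[state]:
        raise ValueError(f"cannot go from {state!r} to {new_state!r}")
    return new_state

state = "submitted"
for step in ["in_review", "approved", "published"]:
    state = transition(state, step)
print(state)  # published
```

Modelling the queue this way makes the guarantee in the text explicit: there is no path to "published" that bypasses an approval.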
📋
Audit Submitted
Auditor: Kalidas K. · Apr 9 · 14 criteria rated
Submitted
👁️
Admin Review
Reviewer: Dr. Ananya M. · Cross-check discrepancy noted
In Review
👑
Super Admin Approval
Final authority · Reviewed reviewer notes · Approved
Approved
📤
Report Published to Auditee
Anuratha V. notified · AI report + action plan delivered
Published

AI-Generated Reports

Reports that explain, not just score

Generated according to the AI directive you set — every report is tailored to your context, your criteria, and your language. Not a template. A genuine analysis of the specific person, in the specific session.

📐

Performance Shape

A radar/spider chart showing scores across all parameters — revealing the performance profile visually. Strengths and dips are instantly visible.

📊

Group & Historical Comparison

Each score is compared against the group mean (colleagues in the same role) and the individual's own historical mean — showing growth and benchmarking in one view.

🔍

Root Cause Analysis

AI traces low scores back to the specific criteria that drove them — not just "engagement was low" but which exact behaviours were missing and why they matter.

✨

Key Strengths

Highlights the top-performing criteria with specific observations — giving the auditee clear evidence of what to build on and replicate.

⚠️

Development Areas

Identifies the specific criteria most in need of attention — linked to root causes and framed in a way the auditee can act on immediately.

🗺️

Action Plan

A structured 3-step improvement plan generated by AI based on the directive — specific, time-bound, and directly linked to the gaps identified in the report.

AI Report Preview · Nivedita S.
Classroom Observation · April 9, 2026
PACT Score: 3.7 — Good

Key Strength
Communication and delivery scored 4.4/5 — lesson pacing was structured and vocabulary was appropriately calibrated for the class level. This is a consistent strength across 3 audits.

Root Cause — Engagement Gap
Student engagement scored 2.6/5. Observed cause: teacher-led monologue exceeded 18 minutes without a student-participation break. Criterion "interactive elements" scored 1.0.

3-Step Action Plan
Step 1: Introduce a think-pair-share activity at the 12-minute mark in every lesson this week.
Step 2: Watch peer video: Riya's Apr 7 session (rated 4.8 on engagement) — note the interactive techniques used.
Step 3: Record and upload one lesson for AI video audit by Apr 23 — recheck the engagement score.

IMPACT KPIs

Measuring the audit itself

After every audit is completed and published, IMPACT asks each stakeholder involved — auditor, auditee, reviewer, and any observers — to complete a short anonymous feedback form. Responses are aggregated and contribute to each person's professional-profile KPIs.

  • Fairness — was the audit conducted impartially and without bias?
  • Accuracy — did the scores reflect what was actually observed?
  • Professionalism — was the auditor respectful and thorough?
  • Clarity — was the report easy to understand and act on?
  • Cooperation — was the auditee open, collaborative, and prepared?
Why anonymous?

Anonymity encourages honesty. No individual feedback is ever attributed. Only the aggregated score — across multiple audits — appears on a person's profile, making the KPI meaningful and tamper-resistant.

Post-Audit Feedback · Anonymous · 2 min
How fair was this audit?
1 · 2 · 3 · 4 · 5
How professional was the auditor?
1 · 2 · 3 · 4 · 5

Auditor KPI Profile · Kalidas K.
Fairness — 4.7
Accuracy — 4.5
Professionalism — 4.8
Clarity — 4.3
Based on 23 anonymous post-audit responses
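The aggregation described above can be sketched as a simple per-dimension average over anonymous responses. The minimum-response threshold in this sketch is an assumption added to illustrate how a profile KPI can stay anonymous; it is not a documented IMPACT rule.

```python
from statistics import mean

# Hypothetical sketch of anonymous KPI aggregation. Each response is a
# dict of dimension -> 1-5 rating; no respondent identity is stored.
MIN_RESPONSES = 5  # assumed threshold before a KPI appears on a profile

def aggregate_kpis(responses: list[dict[str, int]]) -> dict[str, float]:
    """Average each KPI dimension across all anonymous responses,
    returning nothing until enough responses exist to protect anonymity."""
    if len(responses) < MIN_RESPONSES:
        return {}  # too few responses to display safely
    dims = responses[0].keys()
    return {d: round(mean(r[d] for r in responses), 1) for d in dims}

responses = [
    {"fairness": 5, "accuracy": 4, "professionalism": 5, "clarity": 4},
    {"fairness": 4, "accuracy": 5, "professionalism": 5, "clarity": 5},
    {"fairness": 5, "accuracy": 4, "professionalism": 4, "clarity": 4},
    {"fairness": 4, "accuracy": 5, "professionalism": 5, "clarity": 4},
    {"fairness": 5, "accuracy": 4, "professionalism": 5, "clarity": 5},
]
print(aggregate_kpis(responses))
```

Returning nothing below the threshold is one common way to keep a single early response from being attributable to its author.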

Audit Type

Anecdotal Records

Anecdotal Records are IMPACT's format for recurring, grid-based observation of a group — ideal for classrooms, shift teams, or any setting where the same individuals are observed regularly against the same set of criteria.

Rather than conducting a full audit for each individual separately, the auditor completes a single session and rates every person in the group simultaneously — saving time while maintaining criterion-level precision.

  • Grid-based layout — rows for each individual, columns for each criterion, rated in one sitting
  • Recurring by design — the same form repeats across sessions, building a longitudinal observation record over time
  • Quick observations — ideal for daily, weekly, or per-session snapshots without the overhead of a full audit workflow
  • Individual profiles still built — each person's scores contribute to their PACT profile even though the observation was group-based
  • Pattern detection — AI surfaces trends across sessions, highlighting who is consistently strong, who is declining, and who shows erratic performance
  • Flags per individual — each person can still receive criterion-level flags even within a group session
Anecdotal Record · Class 10B · Apr 9

Student        Participation  Focus  Comprehension  Avg
Anuratha V.    5              4      5              4.7
Samim S.       4              3      4              3.7
Riya M.        2              2      3              2.3 🚩
Dev K.         3              5      4              4.0

🚩 Flag triggered — Riya M. scored below 2.5 on 2 criteria · Notified: Class Teacher
Session 6 of recurring record · Apr 9, 2026
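The flag in the record above follows a simple rule: a person is flagged when enough criteria fall below a cut-off. A sketch, where the threshold values are inferred from this one example rather than documented constants:

```python
from statistics import mean

# Sketch of the anecdotal-record flag rule. The thresholds (a score
# below 2.5 on at least 2 criteria) are inferred from the example
# record above, not documented constants.
FLAG_SCORE = 2.5
FLAG_COUNT = 2

def summarise(scores: list[int]) -> tuple[float, bool]:
    """Return (average, flagged) for one row of the observation grid."""
    low = sum(1 for s in scores if s < FLAG_SCORE)
    return round(mean(scores), 1), low >= FLAG_COUNT

for name, scores in {"Anuratha V.": [5, 4, 5], "Riya M.": [2, 2, 3]}.items():
    avg, flagged = summarise(scores)
    print(name, avg, "flagged" if flagged else "ok")
```

Running this on the two example rows reproduces the grid: Anuratha V. averages 4.7 with no flag, while Riya M. averages 2.3 and trips the flag on two low criteria.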
AI Video Audit · Call Recording · 14m 32s · 87% Confidence

AI-Flagged Timestamps
00:42 — Strong opening rapport: greeting and empathy criteria met — 5/5
05:18 — Customer objection: agent went off-script, no resolution offered — 1/5 🚩
09:55 — Compliance script partially read: 2 mandatory disclosures missed — 2/5
13:40 — Clean close: confirmation and next steps communicated clearly — 4/5

Agent Speak — 68%
Words/min — 142
AI Confidence — 87%

Audit Type

AI Video Audits

Upload any video recording — a customer service call, a classroom lesson, a shop floor walkthrough, or an interview — and IMPACT's AI observes, scores, and generates a full audit report against your own criteria. No human observer required.

  • Your criteria, not ours — AI scores against the exact parameters and criteria you define in your audit form
  • Timestamped observations — every significant moment is flagged on a visual timeline with the criterion it relates to and a score
  • Audio analysis — speech pace, speaking distribution, tone, and clarity analysed across the full recording
  • 87%+ confidence scoring — every observation carries a confidence score so you know how certain the AI is about each rating
  • Full report generated — the same AI report produced for manual audits: criterion breakdown, strengths, root causes, and action plan
  • Scale without cost — audit 100% of recordings instead of the 8% your team has capacity for manually
Supported Formats
MP4, MOV, WebM, audio-only recordings, and live call integrations
Report Turnaround
Full report generated in under 60 seconds from upload completion