IMPACT · Auditing
Auditing & Observation
Structured, criterion-level observation that tells you not just what happened – but exactly why, and what to do next. Built around your standards, not generic benchmarks.
The Process
How our audits work
From setting up your form to receiving an AI-generated report – every step is designed to be fast, consistent, and deeply insightful.
Build Your Form
Select parameters and criteria, write auditor and auditee questions, set AI directives and flags
Configure PACT
Set weightage for each audit type – Peer, Internal, External, Self – to generate a composite score
Conduct the Audit
Rate against criteria, add comments, attach evidence, and flag observations in real time
Stakeholder Review
Admin or Super Admin reviews and approves before the report is published to the auditee
AI Report Generated
Narrative insights, root causes, performance shape, comparisons, strengths, and action plan – in under 60 seconds
KPI Feedback
Every stakeholder completes a short anonymous feedback form – contributing to fairness, accuracy, and professionalism scores
Scoring System
The PACT Score
PACT stands for the four audit types IMPACT supports: Peer, Assessor (Internal), Client (External), and Trailer (Self). You decide how much each type contributes to the final composite score.
- Peer – scored by a colleague at the same level
- Internal (Assessor) – scored by a manager or internal observer
- External (Client) – scored by a customer, student, or external evaluator
- Self (Trailer) – scored by the auditee themselves
The weights you assign determine how much each perspective counts. A final PACT composite score is calculated and placed in a performance band: Excellent · Good · Fair · Needs Support
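The weighted composite described above can be sketched in a few lines. This is a minimal illustration only: the function names, the exact band thresholds, and the sample weights are assumptions, not IMPACT's actual implementation.

```python
# Hypothetical sketch of a PACT composite score. Band cut-offs (4.5 / 3.5 / 2.5)
# are illustrative assumptions; IMPACT's real thresholds may differ.

def pact_composite(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of the four audit-type scores (each on a 1-5 scale)."""
    total_weight = sum(weights.values())
    return sum(scores[t] * weights[t] for t in weights) / total_weight

def band(score: float) -> str:
    """Map a composite score onto an assumed performance band."""
    if score >= 4.5:
        return "Excellent"
    if score >= 3.5:
        return "Good"
    if score >= 2.5:
        return "Fair"
    return "Needs Support"

# Example: Internal weighted heaviest, Self lightest.
weights = {"Peer": 0.2, "Internal": 0.4, "External": 0.3, "Self": 0.1}
scores = {"Peer": 4.0, "Internal": 3.5, "External": 4.5, "Self": 4.0}
composite = pact_composite(scores, weights)  # 3.95 -> "Good"
```

Because the weights are normalised by their sum, they need not add up exactly to 1 for the composite to stay on the 1–5 scale.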
Audit Mode
Form Builder
Build your audit form
Every audit in IMPACT is powered by a structured form you design. Select what matters, define how it's measured, and tell the AI what to look for.
Select Parameters & Criteria
Choose which performance dimensions you want to evaluate. Each parameter contains multiple criteria – the specific, observable behaviours that will be rated.
In the Audit
Rating, comments & evidence
Auditors rate each criterion on a structured 1–5 scale with defined behavioural descriptors at every level. Every rating can be enriched with a typed comment and physical evidence – building an unambiguous, defensible record of what was observed.
- 1–5 Rating Scale – each level has a defined behavioural descriptor so every auditor applies the same standard
- Comments – free-text observations tied to each criterion, not just an overall note
- Evidence Attachments – photos, documents, screenshots, or recordings attached at criterion level
- Flag Raising – auditors can raise a flag mid-audit for any criterion that warrants immediate attention
- Partial Save – audits can be saved mid-way and resumed without losing data
Quality Assurance
Multi-stakeholder review before publishing
A completed audit does not go directly to the auditee. It enters a review queue where nominated admins or super admins can examine, annotate, and approve – ensuring quality, fairness, and accuracy before any report is published.
- Auditor completes the audit – submits all ratings, comments, and evidence
- Admin review queue – the audit enters a pending-approval state; admins are notified
- Reviewer actions – approve, request revision, add reviewer notes, or escalate to Super Admin
- Super Admin override – final approval authority with visibility across all review stages
- Published to auditee – report released only after approval; the auditee is notified instantly
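The review stages above form a simple state machine: a report can only reach the auditee through an approval. The sketch below is purely illustrative – the state names and allowed transitions are assumptions inferred from this description, not IMPACT's actual workflow engine.

```python
# Illustrative state machine for the review queue. States and transitions
# are assumptions based on the workflow described above.
from enum import Enum, auto

class AuditState(Enum):
    SUBMITTED = auto()           # auditor has submitted ratings and evidence
    PENDING_REVIEW = auto()      # waiting in the admin review queue
    REVISION_REQUESTED = auto()  # sent back to the auditor for changes
    ESCALATED = auto()           # raised to Super Admin for final decision
    PUBLISHED = auto()           # released to the auditee

# Every path to PUBLISHED passes through a reviewer decision.
TRANSITIONS = {
    AuditState.SUBMITTED: {AuditState.PENDING_REVIEW},
    AuditState.PENDING_REVIEW: {AuditState.REVISION_REQUESTED,
                                AuditState.ESCALATED,
                                AuditState.PUBLISHED},
    AuditState.REVISION_REQUESTED: {AuditState.PENDING_REVIEW},
    AuditState.ESCALATED: {AuditState.REVISION_REQUESTED,
                           AuditState.PUBLISHED},
    AuditState.PUBLISHED: set(),  # terminal state
}

def advance(state: AuditState, target: AuditState) -> AuditState:
    """Move to `target` only if the transition is allowed."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state.name} -> {target.name}")
    return target
```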
AI-Generated Reports
Reports that explain, not just score
Generated according to the AI directive you set – every report is tailored to your context, your criteria, and your language. Not a template. A genuine analysis of the specific person, in the specific session.
Performance Shape
A radar (spider) chart showing scores across all parameters – revealing the performance profile visually. Strengths and dips are instantly visible.
Group & Historical Comparison
Each score is compared against the group mean (colleagues in the same role) and the individual's own historical mean – showing growth and benchmarking in one view.
Root Cause Analysis
AI traces low scores back to the specific criteria that drove them – not just "engagement was low" but which exact behaviours were missing and why they matter.
Key Strengths
Highlights the top-performing criteria with specific observations – giving the auditee clear evidence of what to build on and replicate.
Development Areas
Identifies the specific criteria most in need of attention – linked to root causes and framed in a way the auditee can act on immediately.
Action Plan
A structured three-step improvement plan generated by AI based on the directive – specific, time-bound, and directly linked to the gaps identified in the report.
IMPACT KPIs
Measuring the audit itself
After every audit is completed and published, IMPACT asks each stakeholder involved – auditor, auditee, reviewer, and any observers – to complete a short anonymous feedback form. Responses are aggregated and contribute to each person's professional profile KPIs.
- Fairness – was the audit conducted impartially and without bias?
- Accuracy – did the scores reflect what was actually observed?
- Professionalism – was the auditor respectful and thorough?
- Clarity – was the report easy to understand and act on?
- Cooperation – was the auditee open, collaborative, and prepared?
Anonymity encourages honesty. No individual feedback is ever attributed. Only the aggregated score – across multiple audits – appears on a person's profile, making the KPI meaningful and tamper-resistant.
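The aggregation described above can be sketched as pooling per-audit responses into one mean per KPI, and withholding the result until there are enough responses to prevent attribution. The field names and minimum sample size here are illustrative assumptions, not IMPACT's actual rules.

```python
# Hedged sketch of anonymous KPI aggregation. MIN_RESPONSES is an assumed
# threshold chosen for illustration; IMPACT's real policy may differ.
from statistics import mean

MIN_RESPONSES = 3  # assumed minimum before a KPI appears on a profile

def aggregate_kpis(responses: list[dict[str, int]]) -> dict[str, float]:
    """Pool per-audit feedback into one mean per KPI, discarding identity."""
    if len(responses) < MIN_RESPONSES:
        return {}  # too few responses to publish without risking attribution
    pooled: dict[str, list[int]] = {}
    for response in responses:  # responses carry no respondent identity
        for kpi, rating in response.items():
            pooled.setdefault(kpi, []).append(rating)
    return {kpi: round(mean(ratings), 2) for kpi, ratings in pooled.items()}

profile = aggregate_kpis([
    {"Fairness": 4, "Accuracy": 5},
    {"Fairness": 5, "Accuracy": 4},
    {"Fairness": 3, "Accuracy": 5},
])
```

Holding back scores below a minimum response count is what makes the published KPI "tamper-resistant": no single piece of feedback can be traced back or dominate the figure.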
Audit Type
Anecdotal Records
Anecdotal Records are IMPACT's format for recurring, grid-based observation of a group – ideal for classrooms, shift teams, or any setting where the same individuals are observed regularly against the same set of criteria.
Rather than conducting a full audit for each individual separately, the auditor completes a single session and rates every person in the group simultaneously – saving time while maintaining criterion-level precision.
- Grid-based layout – rows for each individual, columns for each criterion, rated in one sitting
- Recurring by design – the same form repeats across sessions, building a longitudinal observation record over time
- Quick observations – ideal for daily, weekly, or per-session snapshots without the overhead of a full audit workflow
- Individual profiles still built – each person's scores contribute to their PACT profile even though the observation was group-based
- Pattern detection – AI surfaces trends across sessions, highlighting who is consistently strong, who is declining, and who shows erratic performance
- Flags per individual – each person can still receive criterion-level flags even within a group session
| Student | Participation | Focus | Comprehension | Avg |
|---|---|---|---|---|
| Anuratha V. | 5 | 4 | 5 | 4.7 |
| Samim S. | 4 | 3 | 4 | 3.7 |
| Riya M. | 2 | 2 | 3 | 2.3 🚩 |
| Dev K. | 3 | 5 | 4 | 4.0 |
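The grid above reduces to a per-student average with a flag for anyone falling below a threshold. This sketch reproduces the sample table; the 2.5 cut-off and the data layout are illustrative assumptions, not IMPACT's actual logic.

```python
# Sketch of a grid session summary. FLAG_THRESHOLD is an assumed value
# chosen so the example matches the flag shown in the table above.
FLAG_THRESHOLD = 2.5

# Rows from the sample grid: Participation, Focus, Comprehension.
grid = {
    "Anuratha V.": [5, 4, 5],
    "Samim S.":    [4, 3, 4],
    "Riya M.":     [2, 2, 3],
    "Dev K.":      [3, 5, 4],
}

def session_summary(grid: dict[str, list[int]]) -> dict[str, tuple[float, bool]]:
    """Return (rounded average, flagged?) per individual for one session."""
    summary = {}
    for name, ratings in grid.items():
        avg = sum(ratings) / len(ratings)
        summary[name] = (round(avg, 1), avg < FLAG_THRESHOLD)
    return summary
```

Run against the sample grid, Riya M. averages 2.3 and is the only individual flagged, matching the table.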
Audit Type
AI Video Audits
Upload any video recording – a customer service call, a classroom lesson, a shop floor walkthrough, or an interview – and IMPACT's AI observes, scores, and generates a full audit report against your own criteria. No human observer required.
- Your criteria, not ours – AI scores against the exact parameters and criteria you define in your audit form
- Timestamped observations – every significant moment is flagged on a visual timeline with the criterion it relates to and a score
- Audio analysis – speech pace, speaking distribution, tone, and clarity analysed across the full recording
- 87%+ confidence scoring – every observation carries a confidence score so you know how certain the AI is about each rating
- Full report generated – the same AI report produced for manual audits: criterion breakdown, strengths, root causes, and action plan
- Scale without cost – audit 100% of recordings instead of the 8% your team has capacity for manually
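A timestamped, confidence-scored observation as described above might be shaped like the record below. All field names are hypothetical, and the 0.87 floor simply mirrors the "87%+ confidence" figure stated above; this is not IMPACT's actual schema.

```python
# Illustrative data shape for an AI video-audit observation.
# Field names and the confidence filter are assumptions.
from dataclasses import dataclass

@dataclass
class Observation:
    timestamp_s: float  # position in the recording, in seconds
    criterion: str      # the audit-form criterion this moment relates to
    score: int          # 1-5 rating inferred by the model
    confidence: float   # 0.0-1.0 model certainty for this observation

def reliable(observations: list[Observation],
             floor: float = 0.87) -> list[Observation]:
    """Keep only observations at or above the stated confidence floor."""
    return [o for o in observations if o.confidence >= floor]

timeline = [
    Observation(12.5, "Greeting", 5, 0.95),
    Observation(88.0, "Active Listening", 3, 0.62),  # below the floor
    Observation(140.2, "Resolution Offered", 4, 0.91),
]
```

Filtering on confidence lets a reviewer concentrate on the moments the model is most certain about, while lower-confidence moments can be routed for human spot-checks.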