Conducting Quality Assurance with Vela
Vela is your end-to-end Quality Assurance (QA) solution, providing 100% QA by analysing every customer interaction. This guide walks Team Leads and Administrators through the process of reviewing interactions, applying scorecards, and providing coaching feedback to agents.
1. Prioritise Interactions for Review
Instead of manually reviewing every interaction, use Vela's built-in intelligence to focus on the conversations that most need your attention.
A. Review Smart Search Alerts
Smart Searches automatically flag interactions based on your defined keywords and compliance terms. These should be your first priority.
- Navigate to Smart Detector → Smart Search.
- Review the Results for critical alerts (e.g., "complaint", "refund", "escalation", or specific compliance violations).
- Sort results by Number of matched results (High to Low) to surface the interactions with the most matches first.
Set up Smart Searches for all mandatory compliance terms. This is the fastest way to ensure 100% adherence to critical policies.
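The prioritisation logic above can be sketched in a few lines of Python. The record fields (`id`, `matched_terms`) are illustrative assumptions, not Vela's actual schema or API:

```python
# Hypothetical interaction records flagged by a Smart Search.
# Field names are illustrative, not Vela's actual schema.
flagged = [
    {"id": "call-101", "matched_terms": ["refund"]},
    {"id": "call-102", "matched_terms": ["complaint", "escalation", "refund"]},
    {"id": "call-103", "matched_terms": ["complaint", "refund"]},
]

def prioritise(interactions):
    """Sort interactions by number of matched results, high to low."""
    return sorted(interactions, key=lambda i: len(i["matched_terms"]), reverse=True)

queue = prioritise(flagged)
print([i["id"] for i in queue])  # call-102 first: most matched terms
```

The interaction with the most compliance hits surfaces first, mirroring the "High to Low" sort in the UI.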
B. Use Dashboard Metrics
The Dashboard highlights agents and teams that are underperforming.
- Go to the Dashboard.
- Check the Agent Scores Distribution to quickly identify agents with scores below your performance threshold.
- Look for a high volume of alerts (No. Alerts) or spikes in Negative Sentiment that could indicate systemic issues.
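Reading the score distribution against a threshold, as described above, amounts to a simple filter. A minimal sketch, assuming an illustrative threshold of 70 and made-up agent names:

```python
# Illustrative agent score distribution; the threshold of 70 is an
# assumption, not a Vela default.
agent_scores = {"Alice": 88, "Bob": 62, "Chen": 74, "Dana": 55}
THRESHOLD = 70

# Agents below the performance threshold, in a stable order for review.
below = sorted(agent for agent, score in agent_scores.items()
               if score < THRESHOLD)
print(below)  # agents to prioritise for review
```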
C. Filter Interactions Directly
Filter the list of all interactions to find specific examples based on performance data.
- Go to Interactions (Calls or Chats).
- Use the Filter options to narrow the list:
- Score range: Filter for interactions that received a low Automatic Scorecard score.
- Sentiment: Filter for interactions with Negative customer sentiment.
- Agent/Team: Focus reviews on agents you are coaching.
- Review status: Filter for interactions not yet reviewed.
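Combining the four filters above is equivalent to intersecting simple predicates. The sketch below assumes hypothetical field names (`score`, `sentiment`, `agent`, `reviewed`), not Vela's real data model:

```python
def filter_interactions(interactions, max_score=None, sentiment=None,
                        agent=None, reviewed=None):
    """Apply the Interactions-list filters: score range, sentiment,
    agent, and review status. Omitted filters are skipped."""
    result = interactions
    if max_score is not None:
        result = [i for i in result if i["score"] <= max_score]
    if sentiment is not None:
        result = [i for i in result if i["sentiment"] == sentiment]
    if agent is not None:
        result = [i for i in result if i["agent"] == agent]
    if reviewed is not None:
        result = [i for i in result if i["reviewed"] == reviewed]
    return result

interactions = [
    {"id": 1, "score": 45, "sentiment": "negative", "agent": "Bob", "reviewed": False},
    {"id": 2, "score": 90, "sentiment": "positive", "agent": "Bob", "reviewed": True},
    {"id": 3, "score": 55, "sentiment": "negative", "agent": "Alice", "reviewed": False},
]

# Low-scoring, negative-sentiment interactions not yet reviewed.
to_review = filter_interactions(interactions, max_score=60,
                                sentiment="negative", reviewed=False)
print([i["id"] for i in to_review])
```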
2. Review and Analyse the Interaction
Once you've selected an interaction, the Interaction Detail View provides all the tools you need for comprehensive analysis.
A. Access the Detail View
- Click on the interaction from the Dashboard, Smart Search results, or the Interactions list.
B. Use the AI Analysis
Vela's AI provides key insights immediately, which you should use to inform your manual review.
| AI Analysis Component | Purpose | What to Look For |
| --- | --- | --- |
| Summary Generation | AI-created overview of the interaction's key points. | Did the agent address the main issue? Was the resolution captured? |
| Sentiment Journey | Customer emotion tracking throughout the interaction. | Did the agent improve the customer's mood, or did it worsen? Where were the critical shifts? |
| Pain Points | AI-detected indicators of customer frustration. | Did the agent effectively address the technical difficulty or unclear process mentioned? |
| Keyword Detection | Automatic identification of important terms and phrases. | Were all mandatory script phrases used? Were specific product terms mentioned correctly? |
| Intent Classification | Customer's goal (e.g., Sales, Complaint, Support). | Did the agent match their approach to the customer's intent? |
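One way to think about the Sentiment Journey question ("did the agent improve the customer's mood?") is to compare sentiment at the start and end of the interaction. This is a conceptual sketch only, with assumed sentiment values in [-1, 1]; it is not how Vela computes its journey:

```python
def sentiment_shift(journey):
    """Compare the average sentiment of the opening and closing thirds
    of an interaction. A positive result means the mood improved."""
    third = max(1, len(journey) // 3)
    opening = sum(journey[:third]) / third
    closing = sum(journey[-third:]) / third
    return closing - opening

# A journey that starts frustrated and ends satisfied (illustrative values).
journey = [-0.8, -0.6, -0.2, 0.1, 0.4, 0.7]
print(sentiment_shift(journey) > 0)  # True: the agent improved the mood
```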
C. Listen to the Call or Read the Chat
The audio/chat playback controls and the synchronised transcript are essential for quality assessment.
- Listen to the Audio or read the Chat Transcript.
- Use the Speed Adjustment (e.g., 1.5x) to review calls efficiently.
- Click on a Timestamp in the transcript to jump to that exact moment in the audio/chat.
- Focus on the agent's tone, active listening skills, and adherence to procedures.
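Transcript timestamps like "1:45" map to an audio offset in seconds, which is what the click-to-jump behaviour relies on. A small helper sketch (not part of Vela, just the arithmetic):

```python
def timestamp_to_seconds(stamp):
    """Convert a transcript timestamp such as '1:45' or '01:02:30'
    into an audio offset in seconds."""
    seconds = 0
    for part in stamp.split(":"):
        seconds = seconds * 60 + int(part)
    return seconds

print(timestamp_to_seconds("1:45"))      # 105
print(timestamp_to_seconds("01:02:30"))  # 3750
```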
3. Score and Provide Feedback
Your manual scorecard and comments are the core of the quality process, turning data into actionable coaching.
A. Complete a Manual Scorecard
The Automatic Scorecard provides a base score, but your expertise is required for the final evaluation.
- On the Interaction Detail View, locate the Scorecard section.
- Click "Manual".
- Evaluate the agent against your organisation's criteria in each category (e.g., Communication Skills, Compliance Adherence, Problem Resolution).
- Be consistent: Ensure your scoring aligns with the established quality standards and training.
- Be objective: Base your score only on the evidence from the interaction and the defined criteria.
- Add detailed comments explaining your score for each category.
- Click "Save Changes".
- The manual score is now used in the agent's overall performance metrics instead of the initial AI-generated one.
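Scorecards of this kind typically roll per-category scores up into one overall number via a weighted average. The categories and weights below are illustrative assumptions; your organisation defines the real ones in Vela:

```python
# Illustrative category weights summing to 1.0; real weights are
# configured by your organisation, not fixed by Vela.
WEIGHTS = {
    "Communication Skills": 0.40,
    "Compliance Adherence": 0.35,
    "Problem Resolution": 0.25,
}

def overall_score(category_scores):
    """Weighted average of per-category scores (0-100)."""
    return round(sum(WEIGHTS[c] * s for c, s in category_scores.items()), 1)

scores = {
    "Communication Skills": 80,
    "Compliance Adherence": 100,
    "Problem Resolution": 60,
}
print(overall_score(scores))  # 82.0
```

Weighting lets compliance-critical categories pull the overall score more strongly than softer skills.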
B. Use the Comment System for Targeted Coaching
Add specific, time-stamped feedback to make coaching clear and actionable.
- In the transcript, or on the Scorecard, locate the Comment System.
- Add your comment. Remember the best practices:
- Be specific: "At 1:45, you missed the required closing statement".
- Be constructive: "Try to summarise the solution before ending the call next time".
- Tag the Agent: Use the tag feature to ensure the agent receives a notification and can act on the feedback immediately.
- The agent can read and respond to your comments in their Agent Portal.
Tip: Manual Scorecard vs. Automatic Scorecard
The Automatic Scorecard is based purely on Vela AI analysis and your Knowledge Base. The Manual Scorecard uses your human judgment to interpret the conversation context, and this score takes precedence over the AI's assessment.
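The precedence rule in the tip above reduces to a one-line fallback. A minimal sketch (function and parameter names are hypothetical):

```python
def effective_score(automatic, manual=None):
    """The manual score, when present, takes precedence over the
    automatic (AI) score in performance metrics."""
    return manual if manual is not None else automatic

print(effective_score(automatic=71))             # 71 (no manual review yet)
print(effective_score(automatic=71, manual=85))  # 85 (manual wins)
```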
4. Finalise the QA Workflow and Coach
Your QA process is complete when the interaction is scored and the next steps are planned.
A. Track Review Status
Mark the interaction's review status to keep your team's QA process clear.
- Mark the interaction as Reviewed or Completed.
- If follow-up is needed, flag the interaction or add it to the agent's coaching queue.
B. Plan Next Steps
Use the analysis to inform your coaching strategy.
- Review all the agent's recent scorecards and comments.
- Look for consistent patterns in low-scoring areas (e.g., always scoring low on 'Active Listening' or 'Compliance').
- Go to the Coaching section.
- Assign targeted training courses that specifically address the identified skill gaps.
- Schedule a coaching discussion with the agent to review the feedback and performance trend.
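Spotting "consistent patterns in low-scoring areas", as the steps above describe, can be sketched as a check for categories below threshold on every recent scorecard. Category names, scores, and the threshold are illustrative:

```python
# Recent scorecards for one agent; names and values are illustrative.
scorecards = [
    {"Active Listening": 55, "Compliance": 62, "Resolution": 90},
    {"Active Listening": 58, "Compliance": 80, "Resolution": 88},
    {"Active Listening": 52, "Compliance": 65, "Resolution": 92},
]
THRESHOLD = 70

def consistent_gaps(cards, threshold):
    """Categories scoring below the threshold on every recent
    scorecard are consistent skill gaps worth a targeted course."""
    categories = cards[0].keys()
    return sorted(c for c in categories
                  if all(card[c] < threshold for card in cards))

print(consistent_gaps(scorecards, THRESHOLD))
```

Here only Active Listening is below threshold on every scorecard; Compliance dips occasionally but is not a consistent gap, so coaching effort goes where the pattern is.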
The expected outcome of this workflow is a thorough quality assessment of customer interactions with documented findings and a clear plan for the agent's skill development.