Quality at scale with Udemy

Udemy’s mission is to improve lives through learning. Founded in 2010, they are the world’s largest destination for learning and teaching online. With over 40 million learners, 70,000 instructors, and 155,000 courses to choose from, they are the leading destination for people, businesses, and governments looking to learn in-demand skills online. They offer courses in over 65 languages and serve major enterprise customers like Lyft, General Mills, and Adidas, which rely on Udemy to help upskill their teams. All of which is to say: the role of their customer support team is formidable, as they support every side of this marketplace, from learners to instructors to enterprise partners and beyond. 

The Challenge 

Udemy has always prided itself on providing a high level of support across their customer segments. Historically, to make sure associates were meeting quality standards, managers would regularly score associate tickets and share feedback about what was working and what could be improved. On the face of it, it sounds like a good process to make sure that there are no egregious issues and that associates are getting regular performance feedback. 

As Udemy scaled, they realized they were facing new challenges and wanted to get more data, insights and value out of the QA process. They felt that managers were taking the time to review tickets but were only scratching the surface of what could be gained out of the exercise. 

There were a number of issues they were trying to work through: 

  • The same rubric was used to score associates across different teams, making it hard to measure the business outcomes that mattered most to each team 
  • There was no system for pulling a random sample of tickets for each associate, leaving room for bias and making it hard to screen for issues like ticket cherry-picking 
  • The process lived in spreadsheets and was time consuming and manual 
  • The system gave a score to each associate but didn’t highlight trends about how training and processes could be refined to improve quality 

With these issues in hand, Udemy turned to PartnerHero to help build a better system. 

The Solution 

Kim Wagner joined PartnerHero in June of 2020. Between her years scaling quality assurance at Airbnb during its rapid-growth phase and her self-described “passionate quality-nerd” tendencies, she was the perfect person to lead this initiative. She pulled in Giulia Gasparin, who had previously worked as a Team Lead at Udemy and brought data management expertise, to help implement the program. 

The first milestone they worked toward was to run a pilot within one of the Udemy support verticals that they could learn from and scale to the rest of the team. They chose Udemy Instructor support as the testing ground. 

Next, they needed to implement a tool that would let them get off of Google Sheets while keeping the process accessible and streamlined. They chose Aprikot, which has a powerful Chrome extension for reviewing tickets directly in your helpdesk and supports multiple rubrics, making it easy to grade each team on the business outcomes that matter most to it. 

They built a rubric specific to the Udemy Instructor team that took into account the key metrics for that team. The foundation of any good rubric measures three main categories: Customer Critical, Business Critical, and Compliance. The rubric they built for the pilot had the following categories: 

  • Was the issue understood and all inquiries solved correctly?
  • Did the associate check the resources?

  • Was the question answered with empathy?
  • Did the associate anticipate future questions?
  • Did the associate provide helpful resources when possible?

Ticket Structure:

  • Was the interaction easily readable?

Data Integrity:

  • Were system tags and categories applied correctly? 

Once they had Aprikot and the rubric up and running, they started collecting data and the QA pilot was in motion. Aprikot allowed them to streamline the process, get out of spreadsheets, and easily spot trends, both in associate performance and in issues that were impacting the quality of support being delivered.

Along the way, they ran multiple calibration sessions with evaluators, subject matter experts and points of contact at Udemy to make sure all evaluators were using the same standards so that scores would mean the same thing across evaluators. 

Next came reporting: making sure that the aggregated data and insights were consistently shared with the right stakeholders. There is a monthly report, a quarterly report, and reports specific to CSAT and deep dives. These reports are shared with stakeholders at Udemy as well as managers and Team Leads within PartnerHero. 

While the QA rollout was generally smooth, there were a few lessons learned. QA often has a bad reputation among front-line workers as a purely punitive exercise, and at many BPOs that reputation is earned. For example, most quality practices at BPOs include a term known as an “auto-fail,” which generally triggers negative feedback for an agent. The goal of this QA program was to understand program and associate performance so Udemy could proactively solve issues and improve the customer experience, instead of reacting when things go wrong. The PartnerHero program rebranded “auto-fail” as “priority review,” recognizing that issues often arise not because an agent failed but because a system is broken, a training wasn’t clear, or the resource needed to successfully answer the ticket wasn’t available. By taking blame and failure out of the equation and adding transparency around how the data is used to solve real product and training issues, this QA program has changed what QA means for front-line workers on the team. 

The Results 

The pilot ran in September of 2020 and is now being scaled across Udemy’s support verticals with each team following the same process, each with their own rubric and reporting. The result of the program is a new system for understanding customer experience and how it can be improved at Udemy. For example, leaders now know what the biggest customer pain points are and what impact fixing those things will have on metrics like CSAT. 

On top of the regular reporting, the team also conducts deep dives when a trend surfaces that needs additional research. One recent deep dive on CSAT revealed login issues that were causing pain for customers; those issues have since been resolved. 

Ultimately, the team can now identify the main areas for improvement, the root causes of issues (training, process, workflow, associate), and a plan to move the needle. The team manages a progress tracker with dozens of specific suggestions for improving the customer experience: everything from tone improvements on a certain type of ticket, to streamlining the process for handling questions about a policy, to changes to the Udemy product itself that would reduce customer confusion. The tracker makes it easy to see which improvements have been made and what is left to do, and items can be ordered by impact and effort, making paths to improvement easy to see. 


Most QA programs barely scratch the surface of QA’s true potential. Rather than just a score assigned to associates (and often used punitively), QA can transform how your team functions and deliver real value in the form of insights, not only about what is and isn’t working in your team’s training and workflow, but far beyond: it can deliver actionable insights for your product, marketing, and operations teams as well.