Developing a quality program for a fast-growing apparel resale marketplace


PartnerHero increased the average error free rate (EFR) on tickets from 40% to 74% through a robust QA program.


  • Apparel Resale Marketplace
  • Ecommerce
  • United States
  • Email, Chat


Apparel Resale Marketplace

Secondhand fashion is becoming increasingly popular with globally conscious Gen Z shoppers. While thrift stores are hardly a new model, clothing resale has moved online over the past decade, and platforms that let individuals buy and sell clothing from other users at a discount have become abundant. However, the resale business model comes with challenges around customer expectations: customers can receive clothing that is damaged, stained, or simply not what they expected.

Consumer-to-consumer marketplaces of this nature tend to generate a large volume of customer support as a result. One large clothing reseller platform approached PartnerHero about building a quality program at scale to make sure its associates were delivering exceptional service at every customer touchpoint.

PartnerHero has built customer experience and support teams for many consumer platforms. For this partner, we developed a quality assurance (QA) program they could use to understand how both their captive and outsourced associates were performing, whether there were workflow inefficiencies or training gaps, and how they could improve both conversation quality and conversation speed.


The challenge

The company’s 40-person customer support team was tasked with handling an average of 44,000 conversations from 22,000 unique users each month. While team leads had occasionally reviewed tickets, sheer volume meant there was no formal quality review process. As the business kept growing, they realized they needed more data and insights from a QA process to ensure that all policies and guidelines were being followed—and to improve their customer experience.

PartnerHero immediately identified a few big opportunities:

  • The rubric originally used to score associates contained questions that were too subjective and didn't cover every detail of a ticket.
  • The review process ran on a platform that offered little flexibility for completing audits.
  • The system couldn't surface quality trends across the program or spot potential areas of weakness.
  • There was no dedicated specialist to own the program and ensure that associates received regular performance feedback.

PartnerHero worked with the business to develop a QA process that would motivate associates, create alignment across business functions, and get insights about the program as a whole. 

The solution

The next step after discovery was getting every team on board. We created a proposal to share with all of the partner’s CS managers that laid out processes and rubrics for the following areas:

  • Scoring Tickets
  • Disputes Policies
  • Calibration Sessions
  • Coaching Requests

Additionally, we built a complete QA program that delivered statistically significant results based on a random sample of the overall ticket volume. We also established a minimum number of samples for each associate: because not all associates handled the same number of conversations, purely random assignment meant some were not guaranteed to receive the same number of reviews per week. Setting a minimum number of reviews per associate per week resolved this problem.
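As an illustration of how this kind of sampling scheme can work, the sketch below draws a random slice of overall ticket volume, then tops up any associate who falls below the weekly minimum. The ticket structure, sample rate, and minimum here are hypothetical choices for the example, not the partner's actual parameters:

```python
import random
from collections import defaultdict

def select_reviews(tickets, min_per_associate=3, sample_rate=0.05, seed=42):
    """Pick a weekly review sample from `tickets`, a list of
    (ticket_id, associate) pairs: a random slice of overall volume,
    topped up so every associate gets at least `min_per_associate` reviews."""
    rng = random.Random(seed)
    sample = set(rng.sample(range(len(tickets)), int(len(tickets) * sample_rate)))

    # Count how many sampled tickets each associate already has.
    counts = defaultdict(int)
    for i in sample:
        counts[tickets[i][1]] += 1

    # Pool each associate's not-yet-sampled tickets for top-ups.
    by_associate = defaultdict(list)
    for i, (_, associate) in enumerate(tickets):
        if i not in sample:
            by_associate[associate].append(i)

    # Top up any associate below the minimum.
    for associate, pool in by_associate.items():
        needed = min_per_associate - counts[associate]
        if needed > 0:
            sample.update(rng.sample(pool, min(needed, len(pool))))

    return [tickets[i] for i in sorted(sample)]
```

The top-up step is what turns a purely random draw into one with the guaranteed per-associate floor described above.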

Along with better defined rubrics, we also assisted the business in implementing new processes:

  • We developed a recommendations tracker that facilitated feedback from associates on things like knowledge base edits, training updates, or Slack workflow improvements.
  • We defined a path for doing deep dives on specific ticket categories or agents needing additional insights.
  • We initiated coaching and feedback processes.
  • We launched calibration sessions to help supervisors and QA identify areas within the rubrics that needed improvement.
  • We created a disputes process for handling any QA assessment that an associate disagreed with.
  • We implemented a “Best Start” program, which facilitated growth for new hires within the first 60 days.
  • We established active CSAT engagement processes, with the team reaching out directly to customers after solving a problem to take advantage of the service recovery paradox.

We initially rolled out the QA program to buyer tickets only, as these represent the biggest part of the business: about 80% of all tickets handled come from customers buying on the platform. After 4 months of success on the buyer side, we launched QA for seller tickets as well.

The results

Over 10 months of the program, the QA team and reviewers worked on about 9,400 reviews. 

When we started reviewing buyer tickets, the program had an average error free rate (EFR) of around 40%. The error free rate is the percentage of reviewed tickets that have no errors and are rated as perfect, so an EFR of 40% meant that 6 out of 10 cases evaluated had mistakes.
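The EFR arithmetic above can be sketched directly; the per-ticket error counts below are made up purely for illustration:

```python
def error_free_rate(error_counts):
    """EFR = share of reviewed tickets with zero errors.
    `error_counts` holds the number of errors found in each review."""
    perfect = sum(1 for errors in error_counts if errors == 0)
    return perfect / len(error_counts)

# 4 perfect reviews out of 10 -> EFR of 0.4, i.e. 6 of 10 cases had mistakes
print(error_free_rate([0, 0, 0, 0, 1, 2, 1, 3, 1, 1]))  # 0.4
```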

The program’s average EFR steadily improved, reaching a high of 74%—meaning fewer than 3 cases out of 10 now had mistakes.

In addition to the overall EFR, we analyzed how often mistakes were made within each rubric category, which was crucial for understanding where the team struggled most. While all categories improved over time, Communication saw the biggest quality gain: in January 2022, 4 out of 10 cases had soft-skill opportunities flagged; by September 2022, 94% of cases were rated perfect in this category.

Similarly, once we rolled the same set of QA processes out to the seller support team, we saw an increase in EFR across the board. Our largest improvement was in Data Integrity, which went from 62% to 84% EFR, thanks in part to a significantly improved communication and documentation workflow resulting from the new QA processes.