Cogito is a tool used by call center agents to monitor how an interaction with a customer is going and ensure that they are delivering the best customer service possible. The application identifies negative behaviors that the agent is engaging in and coaches them to avoid those behaviors.
For example, if the agent is speaking too quickly, Cogito triggers a notification recommending the agent slow their pace.
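To make that concrete, the sketch below shows one way such a pace check could work. The words-per-minute threshold, function name, and notification text are illustrative assumptions, not Cogito's actual logic.

```python
# A minimal, hypothetical sketch of the kind of pace check described above.
# The threshold, function name, and notification text are assumptions for
# illustration, not Cogito's actual implementation.

WPM_THRESHOLD = 170  # assumed ceiling for "speaking too quickly"

def check_speaking_pace(words_spoken: int, elapsed_seconds: float) -> str | None:
    """Return a coaching notification if the agent's pace exceeds the threshold."""
    if elapsed_seconds <= 0:
        return None
    words_per_minute = words_spoken / (elapsed_seconds / 60)
    if words_per_minute > WPM_THRESHOLD:
        return "You're speaking quickly; try slowing your pace."
    return None

# Example: 95 words in 30 seconds is 190 WPM, so a notification fires.
print(check_speaking_pace(95, 30))
```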
At the end of a call, Cogito provides a summary of the agent’s behaviors. Based on feedback and observations, this end-of-call experience was identified as needing a redesign.
Using a standard Design Thinking/Lean UX approach, I sought first to Understand the problem, then Build the designs, and finally Validate the solution.
Understand
For this project, we needed to understand the call center agents and their environments. Through a series of site visits and interviews, I learned that the agents have a fairly stressful job because they frequently deal with unhappy customers, that they wanted feedback on how they were doing, and that they enjoyed friendly competition with one another.
Using this information, I created an Empathy Map that captured the concerns and challenges facing the agents.
Another key observation was the hardware and tools available to the agents. All agents had at least two large monitors; however, that was because they had to work with multiple CRM and other systems over the course of a call. Cogito was one of many applications they were using, and as a result, screen real estate was at a premium.
In addition, since there was an existing solution, I captured the As-Is state of that experience and observed how the agents interacted with it. The agents indicated that the experience surfaced a limited amount of information and that they wanted to know more, particularly about the areas they needed to improve. They also noted that the carousel aspect of the existing experience could be distracting.
Iterate and Design
Having a good understanding of what our users wanted and needed, I created user stories to focus the design. For example: “As an agent, I get feedback on how I did on key behaviors that directly impact the effectiveness of the call, so I can identify which behaviors I need to focus on improving.”
I rapidly iterated on various wireframes, experimenting with a single-page shelf solution as well as an Android-style mobile approach. Though the application ran on the desktop, I wanted to maintain a minimal footprint, so a mobile-sized experience made sense.
Validate
During the design process, I held feedback sessions with agents to get input on the design.
As part of this process, I created an interactive prototype in Axure and conducted remote usability sessions that allowed the agents to interact with the prototype directly.
Overall, the feedback was positive, with the agents responding well to the way the design identified behaviors they needed to focus on. They also liked the scorecard feel of the experience.
In addition, I ran an A/B test on the landing page to determine the value of showing a Tip. While the agents liked the concept of the tip shown in the A version, it seemed the tip would not be useful after the first few calls, so the B version won out.
Another interesting piece of feedback was the strong divide over the leaderboard page included in the design. Some agents loved having a ranked list of how they measured up against their peers, while others hated the idea of competing against their co-workers.
Further discussion revealed that the agents who did not like the leaderboard were okay with a more general “how am I doing compared to the team?” approach, while the agents who liked it were most interested in finding out which other agents were excelling at their jobs so they could learn from them.
I anticipate that those two insights will play a key role in the next iteration of the leaderboard component.