Bill Gates might be discussing the importance of feedback for teachers in the US, but there’s an underlying message here that’s relevant to anyone interested in improving their business – feedback is essential for improvement. In fact, there are some fascinating statistics that prove the old saying: what gets measured gets managed.
A white paper arguing that surveys such as VIRTUATel’s are the “missing link” when using technology analytics.
- Using analytics alone is like looking at a house through a letterbox!
- Analytics is an overview only – customers are individuals
- Analytics can give a false guide
- If you use analytics, you must use something customer-specific as well
The gap between data analytics and research is now being closed as marketers realise how valuable this marriage can be for customer insight. Louise Druce looks at how firms are putting this into action and what it means for customer experience.
By Louise Druce, editor
Businesses are in no doubt that customer insight is invaluable – more so than ever during a downturn. Yet, despite this, market research and data analytics haven’t always gone hand in hand. Now, however, agencies on both sides of the fence are wising up to the benefits of such a union – and presenting customer-focused businesses with an appealing proposition to deal with their traditional research and analysis challenges.
In February 2009, data solutions provider EuroDirect and communications and insight specialists Broadsystem became the latest firms from their collective fields to merge, forming Callcredit Marketing Solutions. With Broadsystem specialising in customer insight and touchpoints, and EuroDirect owning a wealth of data assets and capabilities, the merger allowed the two sides of the business to be brought together in order to provide clients with a more joined-up, comprehensive service.
“The issue with customer insight is that many businesses start research projects but fail to follow them through and utilise the results properly.”
Caroline Worboys, managing director, Callcredit Marketing Solutions.
“The issue with customer insight is that many businesses start research projects but fail to follow them through and utilise the results properly,” says Caroline Worboys, managing director of Callcredit Marketing Solutions. “Many companies undertake qualitative research programmes in order to build customer profiles, using simple techniques such as focus groups and questionnaires. Where they struggle is then overlaying the findings from these sessions onto their wider customer base, and using this information to deliver effective communications strategies. For example, if the qualitative research shows up 10 different customer types or segments, the trick is then how to effectively divide up your customer database so that it fits into these segments.”
She says one solution is to build your research panel using people who sit within specific classifications in your existing database. This means that attitudinal data can then easily be fed back into the system, without having to work out new classifications. “This approach means that a lot less work is needed to match the research with the data, leading to a swifter, more effective result for clients,” Worboys explains. “Another advantage of adding an analytical approach is that your data will be more robust and multi-layered as a result. For example, if geo-demographic data gained through analysis is added to the attitudinal information acquired through research, marketers are then better placed to gauge consumers’ preferences.”
She believes the benefits of combining research techniques with data analysis are considerable for both clients and their consumers. “For clients, it enables them to establish which of their customers are more likely to have specific attitudes and, as a result, they can feed this information into their database and deliver targeted and relevant communications,” says Worboys. “For consumers, because their attitudinal and prospect data is combined, they will be treated less like numbers in a system and, instead, will receive information and services that are specifically tailored to their own needs and requirements.”
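The matching approach Worboys describes can be sketched in a few lines: because panellists are recruited from segments that already exist in the customer database, attitudinal findings overlay onto the wider base with a simple lookup. The segment names and attributes below are illustrative assumptions, not Callcredit data.

```python
# Hypothetical sketch of feeding attitudinal survey results back into a
# customer database keyed on existing segment classifications.
customers = [
    {"id": 1, "segment": "urban_professional"},
    {"id": 2, "segment": "suburban_family"},
    {"id": 3, "segment": "urban_professional"},
]

# Attitudinal findings gathered from a research panel recruited per segment
panel_attitudes = {
    "urban_professional": {"price_sensitive": False, "prefers_email": True},
    "suburban_family": {"price_sensitive": True, "prefers_email": False},
}

# Because panellists sit within existing classifications, the findings
# overlay onto the wider base with a plain lookup - no reclassification.
for customer in customers:
    customer.update(panel_attitudes.get(customer["segment"], {}))
```

Geo-demographic fields acquired through analysis could be merged onto the same records in exactly the same way, giving the multi-layered view Worboys describes.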
Full view of the customer
Dorothy Kelly, senior manager at business consultants Distinct Consulting, also demonstrates how research and analytics were successfully brought together for improved marketing insight in a mobile telecommunications company. “After years of growth through acquisition, there was a move in marketing strategy to bring the customer to the centre of the organisation. It was recognised that propositions should be developed from true customer insights,” she explains.
“At the time, analytics sat in operations. Their work had little strategic context and was primarily driven by user requests. Once analyses were delivered, they would often need to be reworked as they answered the wrong business question. Research sat in marketing. The work was request driven, ad hoc and not strategic. The research would be delivered, used for one purpose and forgotten. Customer insight was not seen as being something just analysis or research could produce in isolation, so the organisational structure was reviewed.
The aim of the review was to ensure a consultative approach to insights generation with a solid strategic context and to use a full view of the customer when addressing business problems. With this in mind, a customer insights team was created, consisting of market research, data mining and analytics, and campaigns execution, with the head reporting to the marketing director.
“There is effort involved in getting more technical resources to think like marketers and research resources to grasp more complex analytical concepts.”
Dorothy Kelly, senior manager, Distinct Consulting.
An insights generation process was developed to provide a framework for the collaboration between the two disciplines to:
* Define the business problem from the customer’s perspective.
* Build a view of the key questions that need to be answered to address the business issue.
* Identify existing data sources (across research and analysis) and identify gaps.
* Build an analytics and research plan to fill the gaps.
* Review all the resultant data sources in a workshop with relevant business owners, identify trends and generate insights.
* Build action from the insight.
The collaboration also facilitated the development of hybrid marketing tools such as a customer segmentation based on cluster analysis of transactional data, augmented by research, to understand the usage groups’ differentiating needs and motivations.
“There is effort involved in getting more technical resources to think like marketers and research resources to grasp more complex analytical concepts. But the crossover of skills such as statistics, analytical capabilities and seeing the customer in data aided in their respective up-skilling,” says Kelly. “There have been issues in identifying senior resources with skills in both areas to head up such teams, highlighting a need for effective succession planning.” However, she adds that the effort is relatively minor in the context of the potential benefits of such an organisational structure and processes including:
* Cost reduction – reusing existing research and data.
* Moving from reactive ad hoc work to proactive true insights generation.
* Developing a 360-degree view of the customer.
* Ensuring research is in context and not used to answer questions that can be addressed through relatively low cost analytics.
* Enhancing analytics with the customer perspective that only talking to customers can provide.
Ian Robinson, chief intelligence officer at digital and direct marketing agency TMW, agrees that fusing together market research and customer database insights can be of massive value. “Customer databases provide detailed information about the behaviour of individual customers, which can be invaluable in driving personalised CRM communications. However, this gives a limited perspective as it only tracks customer interactions with the brand that owns the database,” he says. “Add in market research and you can then see the wider context in which they take place – and this can put a very different complexion on the customer.”
The firm carried out a database segmentation of an airline’s frequent fliers, which revealed eight distinct segments comprising fliers that varied dramatically in the extent and nature of their airline patronage. The segmentation insights helped to tailor CRM communications to stimulate higher frequency flying and greater usage of profitable premium cabins. The greater relevance resulted in marketing campaigns that generated a healthy ROI. However, focus groups were then conducted with members of the segments to provide a more holistic view of their overall business flying across all airlines. This revealed a hitherto undiscovered dimension of behaviour, which revolutionised the airline’s thinking behind the way in which frequent flier programmes (FFPs) affected airline choice.
“Metaphorically, we had been given a brand new pair of 3D glasses.”
Ian Robinson, chief intelligence officer, TMW
“We identified that FFP members viewed the reward of air miles, redeemable against personal flights, as an important work benefit, and they had an interesting way of maximising this,” Robinson explains. “They knew that rewards increased if they achieved ‘silver’ or ‘gold’ status as they would be given a higher air miles earn rate. They also knew that long-haul flights were rewarded with generous quantities of air miles, but they travelled long-haul much less frequently. As a result, members tended to give all their short-haul flights to the airline, which enabled them to reach silver or gold status more quickly. Once this had been achieved, the obliging airline would then also receive the profitable long-haul business class flights so it was possible to earn even more air miles.”
He says the strategic implications of these findings were immense. “Many low value members were never going to reach silver status as their short haul flying needs could not be met efficiently by the airline’s schedule. Any communications incentivising short-haul to these members were, therefore, a complete waste of money. However, offering incentives on the less frequent but highly profitable, long-haul flying where the airline’s departure times were not so important was far more effective,” says Robinson. “Conversely, it became clear that ring-fencing frequent short-haul fliers against competitors who offered viable alternatives was an essential tool in protecting the lucrative long-haul custom.”
Robinson claims that without research, this behaviour would never have been identified and without the database, exploiting the opportunity would not have been possible. “Metaphorically, we had been given a brand new pair of 3D glasses,” he concludes.
British Gas installs VIRTUATel’s automated customer survey solution to capture the “Voice of their Customer” from three and a half million calls per month.
London, England, January 2010
In 2007, Phil Bentley, the Managing Director of British Gas, stated that Customer Service was his number one priority. As part of that focus, British Gas used VIRTUATel’s hosted, cloud-based customer survey service to gather customer feedback from a percentage of their callers.
Following the success of the surveys – which helped increase customer advocacy and increase customer retention – and as part of the new “One British Gas” transition, British Gas are now installing a centrally located VIRTUATel customer survey platform to offer a feedback opportunity to every one of their 3.5 million callers per month. Using VIRTUATel, all British Gas customers can be offered a tailored telephone, web or email survey, with results and feedback being delivered to a single, central data warehouse.
“We wanted a survey solution that could cover all our strategic call centres from a single location,” said Carl Skerritt, Group Network Architect for Centrica. “It was also vital that the platform could use our existing VoiceXML-based Genesys AVP and CTI infrastructure, to protect our past and future investment,” he added. “VIRTUATel matched all these requirements and more, and as we had already used their hosted service, we knew the company and its performance level very well.”
Paul Lodwidge, Centrica’s Customer Insight Research Manager, added: “As our ‘One British Gas’ strategy developed, it was essential that customer feedback from any contact point was collected in a single place, so our marketing efforts had a complete view.”
Alan Weaser, Director of VIRTUATel, said “British Gas are a valuable and innovative client and have made excellent use of both our service AND the results. They were one of the first of our clients to use our automated surveys to increase customer retention and we are proud to have been chosen for conducting what we think is the largest automated telephone survey challenge in the UK.”
What is The Next BIG Thing in Call Centres?
A Solution That Delivers:
- Increased Customer Satisfaction
- Increased Agent Satisfaction
- Increased Revenue, and
- Reduced Costs
This is an article that was published by Dr. Jon Anton, Director of Research, Purdue University Center for Customer-Driven Quality, and Anita Rockwell, Director of Business Intelligence, BenchmarkPortal, Inc.
PLEASE NOTE: This 2005 White Paper has been reproduced by VIRTUATel to show the reasoning and innovative thinking behind our range of products and services when we first launched our services in 2004. Nothing could say better why we developed our fully automated survey solution, ASMA.
For “Emerging Model” read VIRTUATel’s current product offering!
We recently visited multiple world-class call centers and interviewed their leaders to determine the best practices in agent monitoring and coaching. We expected these world-class organizations to have implemented the kind of best practices for effective agent monitoring and coaching that would result in increasingly better customer satisfaction scores. Our plan was to document their best practices for the rest of the call center world. Well, surprise! What we found was that, when it came to agent monitoring and coaching, almost every call center was struggling.
While some were doing better than others in this arena, even the best performing centers didn’t have this process nailed down. What those centers that were doing agent monitoring and coaching better than the others had in common is that they weren’t using the Traditional Model. (The Traditional Model involves the use of a quality criteria checklist to randomly audit a set number of calls per agent per month.) As part of our research, we went back to understand where and how the Traditional Model was developed. We found that it was adopted from the manufacturing world.
Because processes like automotive assembly lines benefited greatly by being strictly defined, call center leaders of the time assumed (because they didn’t know of any other way) that the same approach would work in call centers.
Some of the principles applied were:
- Reduce variation
- Study the best and replicate
- Define and measure metrics
- Define quality by “making the numbers”
While the criteria used in most centers were well-intended, we found that they didn’t produce the desired and/or necessary results. At best, they provided a lower floor for the results delivered, meaning that no customer would receive terrible service (because agents quickly learned the basics they needed to do to pass an audit). But they didn’t produce the kind of experiences that customers raved to their friends about. In short, we found that the Traditional Model was intrinsically flawed. Next we looked at why the Traditional Model didn’t work.
To illustrate, we’ll look at three of the most common elements of the Traditional Model.
1. Traditional Model – Sample Size: Five Calls Per Agent Per Month
First, we asked, “Who came up with that number?” There is no magic in monitoring five calls per agent per month. It’s similar to the arbitrary service level of 80 percent of calls being answered in 20 seconds. From a statistical standpoint, this sample size is hardly valid. Yet many call centers include an agent’s QA score as a significant portion of an agent’s performance evaluation. You don’t have to be a statistician to realize that a sample of five calls, randomly selected out of 1,000 to 1,500 calls, does not a valid statistical sample make.
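The statistical point is easy to verify with the standard normal-approximation formulas for a proportion. The sketch below assumes a 95 percent confidence level and takes 1,250 monthly calls per agent as a round figure from the 1,000-to-1,500 range above:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a satisfaction proportion measured on n calls."""
    return z * math.sqrt(p * (1 - p) / n)

def required_sample(moe, population, p=0.5, z=1.96):
    """Calls to monitor for a target margin of error, with finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / moe ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Five monitored calls give a margin of error of roughly +/-44 percent -
# far too wide to grade an individual agent on.
print(f"+/-{margin_of_error(5):.0%}")
# A defensible +/-5 percent on ~1,250 monthly calls needs closer to 300.
print(required_sample(0.05, population=1250))
```

In other words, a month of traditional monitoring would have to cover most of an agent’s week, not five calls, to carry statistical weight.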
2. Traditional Model – Sample Selection Criteria: Calls to be Monitored are Usually Randomly Selected
The odds of finding any significant coaching/learning examples within a random sample are minimal – in fact, almost non-existent. We have found that a natural bell curve applies to most service experiences. The vast majority of calls are routine or repetitive, non-emotionally based contacts.
A much smaller volume of calls are slightly better or worse than average. Very few calls are exceptional or horrible. Imagine if you could isolate the horrible calls and learn from those interactions. Those are the contacts that your customers are telling their friends about. But, you can’t isolate and evaluate those types of calls within the Traditional Model, which requires that you evaluate agent performance primarily from QA scores that were based on a small and randomly selected group of calls.
3. Traditional Model – Evaluation Criteria: A Checklist Approach to Evaluating the Call
While many centers have shifted the actual criteria on their checklists to better align them with satisfaction drivers, the core premise of this approach is flawed. We need to ask ourselves, “Why do we evaluate the call in the first place?” As the chart illustrates, there are two reasons:
- To evaluate the customer’s service experience.
- To ensure that the agent provided accurate answers and adhered to company policies and/or procedures.
When we and/or a supervisor and/or QA team member evaluate the customer’s service experience by listening and completing a checklist, we’re essentially acting “as if” we ARE the caller. Kind of silly when, in fact, the caller is often quite willing to tell us how they actually felt about the experience, if only we could ask.
While we need to check for accuracy, most companies don’t realize they can use “gold star” type programs, which limit the degree of monitoring based on the agent’s experience and historical accuracy. Once agents move to higher levels (think mentors, senior reps, etc.), they really only need spot checks done. There are also philosophical principles that factor into why the Traditional Model isn’t effective in the call center environment – never was and never will be – for the following two reasons:
- Call centers are a business-within-a-business
- Call centers are a blend of art and science
These call center philosophies illustrate why the Traditional Model fails. The Traditional Model is fundamentally trying to measure an art as if it were a science. Also, the Traditional Model assumes that the call center is just like any other department within a company, when it is not.
The Next Big Thing…the Emerging Model
In the process of confirming that the Traditional Model does not work, we discovered our Emerging Model as a better approach to agent monitoring and coaching. We’ve seen measurable positive results from call centers that utilize most or all of the components of our Emerging Model. Let’s look at some of the ways that the Emerging Model for agent monitoring and coaching differs from the Traditional Model:
With the Emerging Model:
- The caller self-selects to provide feedback to the agent.
- Callers evaluate their service experiences.
- Caller feedback, not feedback from team members acting as callers, is forwarded directly to the agent in real-time.
- The agent is unaware that the caller has volunteered to monitor the call, just like having a secret shopper monitor the call.
- The caller’s satisfaction is top-of-mind for the agent.
- Callers are aware that they are monitoring the call throughout the call; this heightens their awareness of the agent’s performance.
- The sample size used in the survey design is statistically significant.
- Instead of using call monitoring scores as a measure of agent performance (possibly tied to compensation), the caller’s feedback is used strictly for agent coaching and thereby agent development.
The Five Components of the Emerging Model
There are five main components of the Emerging Model. There are additional nuances and enhancers, but for purposes of this overview, we will outline the primary aspects.
1. The Agent Receives the Feedback Directly From the Calling Customer.
While the entire company exists to serve the customer, it only makes sense that the agent, as the primary point of contact, would be the most focused on satisfying each customer. With the Traditional Model, we have skewed the emphasis to drive the agent’s attention to an internal measure (i.e., the QA scorecard, or checklist) rather than doing whatever it takes to meet each customer’s needs.
Agents love getting feedback from their own customers. So when one of their customers has an improvement suggestion, they take it seriously because there is no built-in bias (feeling that the feedback or suggestion is that of the QA person, rather than that of the customer).
By having the customer provide the feedback on the service experience, it also eliminates the need for lengthy calibration sessions. Most of us agree on accuracy issues. The debates are usually around judgments on the approach an agent took and perceptions around what the customer’s reaction was.
2. The Metrics That the Agent (and Direct Supervisor) is Measured Against are Aligned to Promote the Right Behaviour.
- Comparing top-box caller satisfaction (namely, a five out of five caller satisfaction rating) with the team average.
- Comparing a quantity measure, like calls/agent/hour, with the team average.
The Emerging Model uses top-box as determined by your customers (instead of a QA team or auditor) as the primary metric for evaluating an agent’s performance. Caller satisfaction should be what’s most important to your agents, so focus their energy and efforts on achieving this metric first.
Few companies would survive without being efficient, so we also support a metric on quantity (like number of calls handled per hour, or time spent in “talk” mode). This metric should be secondary to the focus on top-box scores. Also, both the agents and their leadership should be evaluated on basically the same measures of performance.
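As a sketch of the primary metric described above: top-box rate is simply the share of callers awarding the top score, compared against the team average. The 1-to-5 scale and all ratings below are illustrative assumptions, not real survey data.

```python
# Illustrative ratings on an assumed 1-5 survey scale; not real data.
def top_box_rate(ratings, top=5):
    """Share of callers awarding the top score."""
    return sum(1 for r in ratings if r == top) / len(ratings)

agent_ratings = [5, 4, 5, 3, 5, 5, 2, 5]        # one agent's surveyed calls
team_ratings = [5, 3, 4, 2, 5, 4, 3, 5, 4, 5]   # pooled team calls

# Primary metric: the agent's top-box rate against the team average.
print(top_box_rate(agent_ratings))  # 0.625
print(top_box_rate(team_ratings))   # 0.4
```

A quantity metric such as calls handled per hour would sit alongside this as the secondary measure, never overriding it.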
3. Every Dissatisfied Case is Reviewed by a Dedicated Review Team.
The above chart shows the impact of having an effective service recovery process for dissatisfied customers.
In the Emerging Model, bottom-box satisfaction (dissatisfied customer feedback) is a gold mine. The customers who rate their service experience at the low end of the scale want to be heard. They are extremely unhappy with the company.
The good news is that they’re still engaged with you as proven by their willingness to take the survey. There’s a chance to change their perception of the company, if you act quickly. While you may not achieve a 180-degree turnaround, you can usually at least neutralize the negative emotion from their experience.
The dedicated review team does four things as they review the dissatisfied customer’s situation:
- Determine if the situation was within the agent’s control.
- Determine if there is a chance to recover the customer.
- Determine the root cause of the customer’s dissatisfaction.
- Determine potential coaching comments and behavioural change suggestions.
4. Valuable Customers are Routed to Agents with the Highest Satisfaction Scores.
Inbound call routing is very common in call centers. Routing criteria include:
- The simplest routing is to the next available agent – very traditional.
- Slightly more complex is to route the call to the next available agent with the proper training to handle a specific kind of call – about 50 percent of call centers use these criteria.
- Even more complex is to route high-value customers through a shorter queue to reach agents that are most capable of handling their call quickly. Only about 10 percent of call centers use these important criteria.
- Finally, the Emerging Model allows for the routing of high-value customers through a shorter queue to reach agents who are not only the most capable of handling the call quickly, but also to those agents who have the highest caller satisfaction scores. This is the trend that we predict will become the New Model in the months ahead.
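The routing rule in the final bullet can be sketched as a simple selection: among available agents with the required skill, pick the one with the highest caller satisfaction score. The agent names, skills and scores below are invented for illustration:

```python
# Invented agent pool for illustration; all fields are assumptions.
agents = [
    {"name": "A", "available": True,  "skills": {"billing"},           "csat": 4.2},
    {"name": "B", "available": True,  "skills": {"billing", "faults"}, "csat": 4.8},
    {"name": "C", "available": False, "skills": {"billing"},           "csat": 4.9},
]

def route_high_value(call_type, agents):
    """Route a high-value caller to the best-rated available agent with the skill."""
    candidates = [a for a in agents if a["available"] and call_type in a["skills"]]
    if not candidates:
        return None  # fall back to the ordinary queue
    return max(candidates, key=lambda a: a["csat"])

print(route_high_value("billing", agents)["name"])  # B: available, skilled, top-rated
```

In a live call center this selection would be driven by the ACD/CTI platform rather than application code, with the satisfaction scores fed in from the survey platform in real time.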
5. Reporting of Caller Feedback Results is Available in Real-Time at All Levels.
A critical component of the Emerging Model is that caller feedback data is processed in real-time, and actionable reports are available to the agent, the supervisor, and the call center manager through personalized dashboards via their computer. This immediate availability of caller feedback information allows for self-directed corrective action to be implemented at all levels.
Other Benefits of the Emerging Model
In addition to the benefits highlighted so far, there are other benefits when the Emerging Model is used:
- Increased agent satisfaction/reduced turnover.
- Increased morale in the call center overall.
- Reduced costs/more for your money.
- Pinpoint service improvement initiatives and instantly monitor impact.
- Increased customer satisfaction – especially top-box satisfaction.
Seldom have we been so excited to be at the forefront of change in customer service contact centers.
To actually witness a paradigm shift in a key call center process is, quite frankly, extraordinary. In conclusion, we would summarize:
1. The Traditional Model of agent monitoring and coaching does not produce the intended results, yet it is both expensive to maintain and time-consuming.
2. Call center managers are seriously searching for practical alternatives for achieving their call volume goals (productivity), while maintaining their caller satisfaction goals (quality).
3. The Emerging Model is a much better utilization of resources to produce quantitative changes within a contact center, including behavioural change at the agent level, plus policy and procedural change at the supervisor, manager, and company levels.