The daily work of Workforce Management (WFM) teams requires accurately updating certain indicators to measure required capacity, whether in hours, headcount, or required concurrency.
In this article we will tour the most commonly used sizing processes, from the perspective of those who use them daily and of those who must decide and calibrate based on the information they receive.
Prior to the Capacity process
Even after many years of experience, and even in Operations/WFM leadership positions, many analysts believe that the history-based volume forecasting process is part of the capacity sizing process.
We may describe that process in detail in the future, but here we want to list three important reasons why it is a separate, standalone process:
1) The forecast is only an input to the sizing process; it is just one variable fed into a statistical model. The forecasting process itself can be modified without impacting the sizing process.
2) The forecasting process is stochastic, meaning that events change probabilistically over time. For example, the forecast includes change factors for seasonality, product launches, and general outages that modify the probability of incoming contacts.
3) In some cases the forecasting process is not run by the Workforce team at all; the results are simply received from the external client, or from a central planning team (GRP, Global Resource Planning).
The three most used processes to determine Required Capacity
Which capacity sizing process is the most convenient? We are going to review three of the most widely used processes in detail, along with the pros and cons of each.
1) Empirical Process
(or “busy boss opening the calculator app”)
For educational purposes we will call it a "process", but the truth is that this calculation is one of the simplest and most widely used "black boxes" in the industry, often applied just to sanity-check a more complex process. It consists of consecutive arithmetic operations that lead us to the required capacity in hours or in headcount. Many leaders don't have the time, or don't find it within their reach, to learn the in-depth details of criteria and formulas.
This case is simple enough that we can afford to walk through an example.
Let's say we expect to receive 20,000 calls in a week, and each call has an average duration of 5 minutes. We must prepare for a total talk time of 5 × 20,000 = 100,000 minutes (roughly 1,667 hours). Suppose an agent works 40 hours per week, of which we expect 70% to be spent actually on the phone. That gives a productive time per agent of 40 × 70% = 28 hours per week. Doing the corresponding division, we find that 60 agents cover the service.
Pros of the Empirical Process:
— Easy to understand.
— Fast to apply.
— Variants of the general formula can be applied depending on the contract type.
— It allows you to detect possible errors in more complex methods.
Cons of the Empirical Process:
— Wrong expectations: the actual output of specialized processes can differ greatly from this general exercise.
— It does not take into account seasonal variations, nor variables such as absenteeism and retention.
— The causal determinism on which these arithmetic calculations rest leaves no room for generating complex scenarios.
2) Erlang
(or “we all use it, but we don't really know it”)
The process for applying the Erlang distribution is based on the theory of the Danish mathematician Agner Erlang (1917), developed to study the number of simultaneous calls that the operators of a telephone exchange could handle.
The formula widely known and used in the world of contact centers is known as Erlang C and is based on the following assumptions:
a. The time between call arrivals is a random variable with an exponential distribution, whose parameter is set according to how many calls are expected in an interval. This hypothesis is quite realistic.
b. The duration of a call is also a random variable with an exponential distribution, whose parameter is adjusted to match the average handle time (AHT). This second hypothesis is not realistic in practice.
c. The caller waits indefinitely in line until their call is answered. There are no abandonments; another hypothesis that is not realistic.
A later formula known as Erlang A (1946), although actually developed by the Swede Conny Palm, makes it possible to calculate the expected number of abandonments by adding to the Erlang C hypotheses a patience that is also modeled as an exponential variable. This formula certainly complements Erlang C, but it doesn't solve the problem, since on its own it doesn't provide a service level forecast.
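For readers who have only ever seen Erlang C inside an Excel macro, here is a minimal sketch of the classic formula: the blocking probability via the Erlang B recursion, converted to the Erlang C waiting probability, and from there the service level. Function names and the example figures (100 calls per 30 minutes, 180 s AHT, an 80/20 target) are ours, chosen as a common textbook scenario:

```python
import math

def erlang_c(agents: int, traffic_erlangs: float) -> float:
    """Probability that an arriving call must wait (Erlang C)."""
    # Erlang B recursion: B(0) = 1; B(k) = A·B(k-1) / (k + A·B(k-1))
    b = 1.0
    for k in range(1, agents + 1):
        b = traffic_erlangs * b / (k + traffic_erlangs * b)
    return agents * b / (agents - traffic_erlangs * (1 - b))

def service_level(agents, calls, aht_s, interval_s, target_s):
    """Fraction of calls answered within target_s seconds."""
    traffic = calls * aht_s / interval_s        # offered load in erlangs
    if agents <= traffic:
        return 0.0                              # queue is unstable
    p_wait = erlang_c(agents, traffic)
    return 1 - p_wait * math.exp(-(agents - traffic) * target_s / aht_s)

def agents_for_target(calls, aht_s, interval_s, target_s, goal):
    """Smallest headcount meeting the service-level goal."""
    traffic = calls * aht_s / interval_s
    n = math.ceil(traffic) + 1                  # need n > traffic for stability
    while service_level(n, calls, aht_s, interval_s, target_s) < goal:
        n += 1
    return n

# 100 calls per 30-minute interval, AHT 180 s (10 erlangs), 80% in 20 s:
print(agents_for_target(100, 180, 1800, 20, 0.80))  # 14
```

Note that the exponential handle-time and no-abandonment assumptions criticized above are baked into these few lines; the code inherits every limitation of the model.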
It is common to receive only an Excel file with a ready-to-use macro module and mechanical instructions on how to apply this Erlang code. In the case of robust WFM systems, the code comes ready to be part of the sizing process based on segmented historical information.
Pros of Erlang:
— It's a process known to all BPO operations today.
— Custom versions are used to generate calculations for other channels.
— In addition to indicating the required agents, it allows you to forecast other indicators such as Service Level and Average Response Time.
Cons of Erlang:
— The call-duration hypothesis does not match reality.
— If the process receives more parameters than necessary, it can produce overestimates in its result that are not always interpreted correctly. The vast majority of teams take the result without checking whether it contains an unnecessary level of over-adjustment.
— The Erlang model does not contemplate abandonments. For this application, the formula is based on the assumption that all calls wait indefinitely until they are answered.
— Although it is common practice today, the applied Erlang formula is designed for a single service. It does not contemplate that today's agents are trained on several lines of business and moved dynamically during the day to meet coverage needs. Multichannel assignment, or backfilling by line of business, cannot be sized. This is one of the main causes of oversizing when projecting Erlang-based resources in multichannel operations.
— The original formula only contemplates first-time incoming contacts. Transfers from other service lines, direct calls to an extension, and repeat calls from contacts who hang up and call back are not taken into account in the Erlang base. To compensate for these events, many teams apply uplift factors for holidays, days when offers or products launch, and intervals with historical spikes.
3) Custom Simulation
(or “yes, we can do that and more, based on mathematics”)
Given the benefits of today's computing capacity, let's start with a simplification of the mathematical simulation process. We simulate, at high speed, every call of the day with all its attributes:
— who takes the call
— whether it meets the service level
— whether it is abandoned or not
— how long it lasts
— whether the agent takes another contact right away or goes on break
A robust statistical model allows us to run different simulations of a day, or of full weeks, in a matter of minutes. Based on the resulting trends, it makes it possible to define the most accurate projection according to the historical information and criteria used to feed the model.
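The call-by-call loop described above can be sketched as a toy discrete-event simulation. This is an illustrative simplification, not a production WFM engine: it models one skill, Poisson arrivals, and exponential handle times and patience (assumptions we chose; real models calibrate these from history), but unlike Erlang C it lets callers abandon:

```python
import heapq
import random

def simulate_day(agents, arrivals_per_s, mean_aht_s, mean_patience_s,
                 horizon_s=8 * 3600, sl_target_s=20, seed=0):
    """Single-skill call-center day with abandonment (illustrative only)."""
    rng = random.Random(seed)
    free_at = [0.0] * agents        # time at which each agent becomes free
    heapq.heapify(free_at)
    t = 0.0
    answered = abandoned = within_sl = 0
    while True:
        t += rng.expovariate(arrivals_per_s)     # next Poisson arrival
        if t > horizon_s:
            break
        agent_free = heapq.heappop(free_at)      # earliest-available agent
        wait = max(0.0, agent_free - t)
        if wait > rng.expovariate(1.0 / mean_patience_s):
            abandoned += 1                       # caller ran out of patience
            heapq.heappush(free_at, agent_free)  # agent's schedule unchanged
        else:
            answered += 1
            if wait <= sl_target_s:
                within_sl += 1
            start = max(t, agent_free)
            heapq.heappush(free_at, start + rng.expovariate(1.0 / mean_aht_s))
    offered = answered + abandoned
    return {"offered": offered, "answered": answered, "abandoned": abandoned,
            "service_level": within_sl / offered if offered else 0.0}

# 14 agents, ~100 calls/30 min, AHT 180 s, 2-minute average patience:
day = simulate_day(14, 100 / 1800, 180, 120)
print(day)
```

Re-running with different seeds is exactly the "repeat the day many times" idea from the pros list: the spread of the results gives a sense of the variance, not just the mean.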
Pros of Custom Simulation:
— Real criteria such as waiting time and abandonment are applied.
— The algorithm generates calls and assigns them to available agents.
— It simulates a full day in milliseconds, letting you view all the relevant indicators.
— It allows you to repeat the day many times, producing more realistic results across the scenarios being considered (it gives an idea of what statistics calls variance) instead of offering only the average result.
— It can simulate multiskilled agents.
Cons of Custom Simulation:
— High computing power is required. We are talking about dedicated servers and a large amount of information processed to create each simulation.
— A structured interface is required to process the input information and calibrate specific details such as the treatment of multiskill agents, changes in absenteeism trends, the use of flags for holidays or anomalies, agent profiles, etc. Without this facility, operating the statistical model requires a dedicated person.
— Implementing this solution from scratch requires an advanced level of exact sciences and WFM experience that is difficult to find.
Beyond the chosen process, there is a critical role: that of the Workforce Management team
Considering the benefits and obstacles of each sizing process, the factor that actually makes the difference between the success and failure of these planning cycles is the degree of freedom given to WFM teams to calibrate the results.
For example, real-time analysts can easily explain anomalies in the results (human factors, connectivity or software emergencies, among others). But they often lack a structured way to document this knowledge generated on the front lines and include it effectively in new planning cycles.
The sizing process must include parallel business-intelligence processes, to capitalize on process errors, human errors, and anomalies external to the process, and then replicate that learning across the rest of our teams. Alongside the well-known "if it isn't measured, it doesn't work", we have the opportunity to apply the following every day: "Current computing power, like Artificial Intelligence, will not replace analysts and creators; it will only make our work more humane."
The more we share knowledge and train our teams, opening the door to forums where these processes are questioned, validated, and improved, and the more we support our talent within WFM, the better the results of these processes and of the other processes tied to BPO cycles. After all, it's a people business, for people.