Automating many of the tasks performed by reserving actuaries could greatly reduce errors and increase efficiency.
In many ways, data defines the pace and quality of the loss reserving process, yet gaining access to it is often a problem for actuaries. Legacy systems are typically ill-suited to the data-intensive requirements of a modern reserving process. This bottleneck is often the prime constraint behind the rushed and sometimes fitful pace of the quarterly reserve review. In the end, just getting it done can become the focus of actuarial analysis when much more is needed and possible.
Actuarial departments are under increasing pressure from regulators, rating agencies, and audit committees to deliver more sophisticated loss and reserve estimates under shorter timeframes. Conventional methods of accessing data, however, can stand in the way.
Today’s reserving process is often riddled with delays and interruptions that begin with a request for data, typically to the IT or Claims department. This is usually followed by the need to verify that the data and adjustments flow correctly through a maze of spreadsheets, reconcile to the original source, and remain consistent with prior analyses. Time-consuming and labor-intensive, these adjusting and reconciling tasks not only delay the reserving process but also misallocate actuarial resources that could be better utilized with a reengineered system.
The best data management for actuaries
A better solution would give actuaries direct and easy access to the data in a format that accommodates their specific needs, not those of Claims or Accounting. In short, actuaries need their own data source: one maintained on a basis suitable to their analyses, populated with the data they request, and accessible at the level of granularity their analyses require.
A central reserving data repository eliminates the need for IT or Claims to provide triangulated information from the claims system(s) to begin analysis. Actuaries should be able to directly query for specific data sets from a database that they control.
Having ownership of the data means actuaries can re-segment their analyses without delays, drill down into specific cells of loss triangles to look at claims-level detail, quickly exclude specific claims from those triangles, and easily reconcile to the original source data. All these tasks may have previously been difficult or even impossible given the time constraints and rudimentary tools.
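With claims-level data under actuarial control, tasks such as building triangles, drilling into a cell, and excluding individual claims become simple queries rather than requests to IT. A minimal sketch of the idea is below; the record layout, field names, and figures are hypothetical, and a production repository would sit on a proper database rather than in-memory lists.

```python
from collections import defaultdict

# Hypothetical claim-level records pulled from the reserving repository:
# (claim_id, accident_year, valuation_year, cumulative_paid)
claims = [
    ("C1", 2019, 2019, 100), ("C1", 2019, 2020, 150),
    ("C2", 2019, 2019, 50),  ("C2", 2019, 2020, 80),
    ("C3", 2020, 2020, 200), ("C3", 2020, 2021, 260),
]

def build_triangle(records, exclude=frozenset()):
    """Aggregate claim-level paid amounts into a cumulative loss triangle
    keyed by (accident year, development age), skipping excluded claims."""
    tri = defaultdict(float)
    for claim_id, ay, vy, paid in records:
        if claim_id in exclude:
            continue
        tri[(ay, vy - ay + 1)] += paid  # development age 1 = accident year itself
    return dict(tri)

def drill_down(records, ay, dev):
    """Return the claim-level detail behind a single triangle cell."""
    return [r for r in records if r[1] == ay and r[2] - r[1] + 1 == dev]

full = build_triangle(claims)                      # full[(2019, 2)] == 230.0
without_c2 = build_triangle(claims, exclude={"C2"})  # same cell drops to 150.0
detail = drill_down(claims, 2019, 2)               # the two claims behind that cell
```

Because the triangle is rebuilt on demand from the source records, an exclusion or re-segmentation is a one-line change that automatically reconciles back to the claim-level data.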
Expecting that the traditional flow of data will continue to be sufficient to meet the growing needs of raters and regulators is, at best, shortsighted. And asking an IT department to shift priorities every reporting period to accommodate the data needs of the loss reserving process may soon become infeasible. Giving actuaries control of their data is the first step to expanding the scope and power of the loss reserving process and preparing for the evolving expectations of internal and external stakeholders.
This article was originally published on April 19, 2022.