Aggregation and Establishment of Onerous Contracts as Required by IFRS17 Based on a Stochastic Matrix and the Lumpability Concept of a Markov Chain

Abstract

This paper aims to formulate a new method for contract classification under IFRS17 for motor insurance operating under the typical Bonus-Malus System. The specific method of classification provides robust results with respect to the borderlines drawn between the three categories imposed by IFRS17, as the criterion of lumpability preserves the important stochastic elements of the typical Bonus-Malus System. This approach eliminates any subjectivity which may arise when considering only the profitability criterion based on the loss ratio, which is affected by the high volatility of the claim experience. We apply this method to various forms of indicative stochastic matrices. Our results reveal the impact of the proposed method on contract classification. This provides valuable insights for practitioners and regulators seeking to enhance and empower their professional actuarial judgement on financial reporting and risk assessment within the IFRS17 framework.

Share and Cite:

Zimbidis, A. and Gkounopoulos, I. (2024) Aggregation and Establishment of Onerous Contracts as Required by IFRS17 Based on a Stochastic Matrix and the Lumpability Concept of a Markov Chain. Journal of Financial Risk Management, 13, 278-304. doi: 10.4236/jfrm.2024.132013.

1. Introduction

1.1. Evolution of Accounting Standards: Transitioning from IFRS 4 to IFRS 17 and Implications for Insurance Companies

The insurance industry faces a great challenge, effective from January 1st, 2023. From that date onwards, all insurance companies must adopt the new accounting standard IFRS17 (International Financial Reporting Standard, number 17). It replaces the previous standard, IFRS4, and induces great changes. It was only a few years ago, on January 1st, 2016, that the adoption of the new Solvency II regulation, in place of the old Solvency I, radically changed the overall framework of the European insurance market. New, stricter capital requirements, detailed instructions for exhaustive governance, and full disclosure of information to policyholders and the local regulator were the three main points, among many other innovations, on which the new legislation focused. If the transition from Solvency I to Solvency II brought a great number of changes, the transition from IFRS4 to IFRS17 will bring an even greater number of changes and will force all insurance companies to upgrade the scientific and technical tools used in their everyday operations.

In this paper, we focus on a new important concept which appears in the accounting rules described in IFRS17. That is the concept of ‘onerous’ contracts. The new standard pays close attention to, and imposes different and strict rules for, the aggregation of insurance contracts, in order to increase transparency in the financial statements and to secure the correct reporting of an insurance company’s profit margin through time. So, the assessment of aggregation is not a routine technical task of pooling similar risks but plays a key role in the accounting report of the company and, consequently, in its capital requirements. One of these kinds of aggregation (perhaps the most important!) is the classification of each portfolio under a three-pillar system:

1) Onerous contracts, 2) Potentially Onerous contracts and 3) Profitable contracts

Formally, and quoting the original text of IFRS17, “An insurance contract is onerous at the date of initial recognition if the fulfilment cash flows (FCF) allocated to the contract, any previously recognized insurance acquisition cash flows, and any cash flows arising from the contract at the date of initial recognition in total are a net outflow” (IFRS 17, Article 47). Since the total cash flow is a net outflow, there is no profit margin. Hence, contracts classified as “onerous” should be recognized immediately at inception and their total net outflow reported immediately in the financial statements of the company.

The grouping procedure may prove to be cumbersome for many entities, especially taking into consideration the required high level of granularity. Entities may face several significant practical and operational issues in respect of their administration, valuation, and accounting systems. So, all insurance companies should have a robust approach in order to identify the above three groups and determine which contracts are certainly profitable, certainly non-profitable (onerous), or potentially onerous (i.e. neither certainly profitable nor certainly non-profitable). In this paper, we focus on the motor market.

The choice to focus on the motor insurance market stems from several key considerations:

Industry Relevance: Motor insurance is a significant segment within the insurance industry, characterized by its widespread adoption and unique risk dynamics. By focusing on this market, we address a critical area of financial reporting and risk assessment that is pertinent to insurers worldwide.

Complexity of Contract Classification: The complex nature of motor insurance contracts, particularly those operating under the Bonus-Malus System, presents a challenging environment for contract classification under IFRS17. Our innovative method aims to address this complexity by providing robust results that align with the regulatory requirements while preserving the stochastic elements inherent in the Bonus-Malus System.

Eliminating Subjectivity: Traditional methods of contract classification based solely on profitability criteria, such as loss ratios, may introduce subjectivity and volatility into the classification process. Our approach eliminates this subjectivity by incorporating the criterion of lumpability, thereby ensuring consistency and objectivity in contract classification.

Practical Implications: Motor insurance is a convenient topic for demonstration purposes due to its well-defined parameters and readily available data. This convenience allows for a clear and illustrative presentation of our method and its impact on contract classification outcomes, making it accessible to practitioners, regulators, and other stakeholders.

For many years now, the motor insurance market has adopted a pricing model based upon the theory of discrete homogeneous Markov chains. This system is widely known as the “Bonus-Malus System (BMS)”, also called the No-Claims Discount (NCD) or merit-rating system. It is an automatic risk classification and pricing system which categorizes each policyholder according to his or her individual claim history. It is commonly applied in motor insurance but can also be found in other lines of insurance. The concept behind a Bonus-Malus System is to reward policyholders who have a good claims history by offering them reduced premiums (Bonus) and to penalize those with a poor claims history by increasing their premiums (Malus). So, policyholders can move up and down the BMS “ladder” based on their claims history. On the one hand, claim-free periods can lead to higher (Bonus) levels with greater discounts, while on the other hand, claims may result in downgrading to lower (Malus) levels with increased premiums.

The BMS normally has ten to twenty categories, where the first (1st) class corresponds to the “best” drivers, producing fewer accidents, while the very last (10th or 20th) class corresponds to the “bad” drivers, producing many accidents each year. Typically, a new driver (a new customer for the company) is classified somewhere in the middle and each year is transferred to a higher or lower category according to his/her claim experience. The BMS is fully determined by a stochastic matrix of dimension n, where n is the number of risk categories (or states).

Mathematically, the Bonus-Malus system can be viewed as a Markov chain with constant transition probabilities under certain conditions. This system’s stationary distribution is crucial for understanding long-term insurance portfolio behaviour and informing premium calculations.
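To make the role of the stationary distribution concrete, the following short sketch approximates it by power iteration; the 4-state transition matrix is purely hypothetical and not one of the matrices analysed in this paper:

```python
# An illustrative sketch (hypothetical 4-state chain): approximating the
# stationary distribution of a small BMS-style Markov chain by power iteration.
def stationary_distribution(P, iterations=2000):
    """Repeatedly push an initial distribution through the row-stochastic P."""
    n = len(P)
    dist = [1.0 / n] * n                     # start from the uniform distribution
    for _ in range(iterations):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist

# Hypothetical 4-state BMS: state 0 = super-bonus, state 3 = super-malus.
P = [
    [0.8, 0.2, 0.0, 0.0],   # good drivers mostly remain in the bonus state
    [0.6, 0.0, 0.4, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.3, 0.7],   # bad drivers tend to remain in the malus state
]

pi = stationary_distribution(P)
print([round(p, 3) for p in pi])
assert abs(sum(pi) - 1.0) < 1e-9   # it is a probability distribution
```

The resulting vector gives the long-run share of the portfolio in each risk category, which is exactly the quantity used to inform premium calculations.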

Analysis of various Bonus-Malus systems worldwide sheds light on premium determinants and portfolio stability, emphasizing the importance of stochastic modeling for risk assessment and insurance operations (Das, 2010).

In addition, seminal work traces the historical evolution and global adoption of Bonus-Malus systems, particularly in Europe and Asia (Lemaire, 1998). Originating in the 1960s, these systems revolutionized insurance practices and shaped risk assessment worldwide. Research underscores the potential efficiency gains and regulatory pressures driving their adoption, providing valuable insights for practitioners navigating evolving regulatory landscapes.

In the analysis of a Bonus-Malus system, it is commonly assumed that the claim frequency of an individual policyholder remains unaltered, ensuring constant transition probabilities and enabling the system to be modelled with a homogeneous Markov chain (Lemaire, 1995). However, recent research (Niemiec, 2007) has highlighted the dynamic nature of claim frequencies among policyholders, which may fluctuate over time due to various reasons. To address this challenge, our paper explores novel approaches to modelling Bonus-Malus systems, aiming to capture the variability in transition probabilities within defined ranges. By considering these fluctuations, we seek to provide a more comprehensive analysis of Bonus-Malus systems and their implications for insurance risk assessment and portfolio management.

The central question at hand pertains to the transformation of a Markov chain with e.g. 20 states representing driver categories, ranging from the best to the worst driver, into a Markov chain with only 3 states, specifically denoting profitable, onerous, and potentially onerous categories. This transformation is essential within the context of IFRS 17 contract aggregation, and it aligns with the concept of Lumpability for a Markov chain.

This issue bears a resemblance to a scenario outlined in previous papers (Loizides & Yannacopoulos, 2012; Georgiou, Domazakis, Pappas, & Yannacopoulos, 2021) concerning IFRS 9, where financial institutions are tasked with classifying loans into three distinct categories, known as IFRS 9 Stages. In both cases, institutions governed by IFRS regulations must undertake the aggregation process, necessitating the reduction of their state space to one that is compliant with regulatory standards. Importantly, this reduction must be executed while preserving the Markov property, ensuring that the underlying probabilistic behaviour remains intact throughout the transformation.

To date, the definition of initial profitability boundaries to demarcate the three contract groups under IFRS 17 has remained somewhat uncertain, largely owing to the inherent stochastic elements at play. Nevertheless, the proposed approach introduces a notable enhancement in terms of robustness, effectively surpassing the constraints imposed by the initial stochastic transition matrix. This method not only considers the properties of the initial matrix but also actively strives to improve its stochastic attributes. This is achieved by simultaneously factoring in the initial transition matrix and the preliminary contract classifications, which are contingent upon specific profitability indices or criteria. As a result, this approach refines and elevates the accuracy of contract classification, providing a more comprehensive and precise framework for this critical task.

Drawing from recent advancements in land cover classification using Markov chains (Koukiou & Anastassopoulos, 2021), this paper introduces a novel method for contract classification under IFRS17, tailored for motor insurance policies with the Bonus-Malus System. By leveraging Markov chains, known for capturing dynamic transitions, our approach aims to enhance classification robustness in insurance risk assessment.

Inspired by transition matrices in land cover classification (Koukiou & Anastassopoulos, 2021), our method preserves stochastic elements in the Bonus-Malus System. This addresses subjectivity in profitability criteria, affected by claim volatility. Through analyzing various stochastic matrices, our findings offer insights for practitioners and regulators, empowering informed decisions within IFRS17.

1.2. Analyzing the Changes from IFRS 4 to IFRS 17

The transition from IFRS 4 to IFRS 17 represents a seismic shift in the way insurance contracts are accounted for and reported, with significant implications for insurers’ financial statements, risk management practices, and operational processes. Below, we explore the key changes introduced by IFRS 17 and their implications:

Measurement of Insurance Contracts:

IFRS 17 introduces a standardized approach to the measurement of insurance contracts, departing from the diverse range of accounting policy options available under IFRS 4.

Under IFRS 17, insurers are required to measure insurance contracts at current values, with the inclusion of explicit provisions for risk adjustment and discounting.

Recognition of Profits:

Unlike IFRS 4, which allowed for the recognition of profits at the inception of an insurance contract, IFRS 17 mandates the recognition of profit over the coverage period.

This shift from an “incurred claims” to a “building block” approach to profit recognition aims to provide a more accurate reflection of the timing and magnitude of insurance contract profitability.

Disclosures and Transparency:

IFRS 17 imposes more stringent disclosure requirements on insurers, necessitating greater transparency regarding the nature, risks, and financial impact of insurance contracts.

Insurers are required to provide detailed information on the assumptions, methodologies, and judgments used in the measurement and recognition of insurance contracts, enhancing the quality and reliability of financial information.

Actuarial and Technical Challenges:

The transition to IFRS 17 presents significant actuarial and technical challenges for insurance companies, requiring the development and implementation of sophisticated models and analytical tools.

Insurers must enhance their actuarial capabilities to accurately assess the financial implications of insurance contracts under the new standard, including the estimation of future cash flows, risk adjustments, and discount rates.

Data and Systems Requirements:

IFRS 17 places greater demands on insurers’ data management and reporting systems, necessitating the integration of data from disparate sources and the implementation of robust controls and processes.

Insurers must invest in advanced technology and data analytics capabilities to ensure the timely and accurate measurement, recognition, and disclosure of insurance contracts in compliance with the new standard.

In summary, the transition from IFRS 4 to IFRS 17 represents a paradigm shift in insurance accounting, with far-reaching implications for insurers’ financial reporting practices and operational processes. By embracing these changes and investing in the necessary analytical and technical capabilities, insurance companies can navigate the complexities of IFRS 17 and position themselves for long-term success in a rapidly evolving regulatory and market environment.

2. The Model

2.1. Motivation

In IFRS 17, contract classification (for identifying onerous contracts) requires the use of a profitability criterion, often the Loss Ratio, which inherently carries stochastic characteristics. Insurance companies aim for sustainable loss ratios to ensure profitability and financial stability. However, various factors introduce errors and randomness into loss ratio values:

Claim Estimation Errors;

Randomness of Loss Events (timing and magnitude);

Underwriting decisions affecting claim frequency and severity.

To address these challenges and establish objective contract aggregation criteria, we additionally employ the concept of lumpability (refer to the Appendix for preliminaries of this concept), which allows certain states to be grouped together without altering the chain’s probabilistic behaviour. This simplifies the Markov chain, creates cohesive sets whose states share identical transition probabilities, preserves essential system attributes within grouped states and, finally, ensures that the probabilistic behaviour remains consistent.

Lumpability enhances contract classification under IFRS 17 by harmonizing contracts with similar attributes, reducing randomness. For example, a contract with a 102% loss ratio (above the critical value of 100%) may be reclassified using lumpability, leading to more favourable profitability categories. It offers a systematic, data-driven approach to forming aggregated states and improves classification precision.
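The grouping condition behind lumpability can be illustrated with a small sketch. Note that this checks the classical (Kemeny-Snell, “strong”) lumpability condition: every state of a block must send the same total transition probability into each block. The 4-state matrix and both partitions are hypothetical:

```python
# A minimal sketch of the strong lumpability condition (hypothetical chain):
# a chain is lumpable w.r.t. a partition if all states in a block have the same
# total transition probability into every block.
def is_lumpable(P, partition, tol=1e-9):
    for block in partition:
        for target in partition:
            # total probability of moving from each state of `block` into `target`
            totals = [sum(P[s][t] for t in target) for s in block]
            if max(totals) - min(totals) > tol:
                return False
    return True

P = [
    [0.5, 0.2, 0.2, 0.1],
    [0.3, 0.4, 0.1, 0.2],
    [0.1, 0.1, 0.5, 0.3],
    [0.0, 0.2, 0.4, 0.4],
]
print(is_lumpable(P, [[0, 1], [2, 3]]))   # True: blocks send 0.7/0.3 and 0.2/0.8
print(is_lumpable(P, [[0], [1, 2, 3]]))   # False
```

Exact lumpability of this kind is rare in practice, which is precisely why the paper works with an approximate version of the problem.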

So, we may form a new problem with a twofold challenge. In addition to the initial proposed partition of states, based on the profitability criterion, we also address the lumpability problem. Consequently, we aim to identify the optimal partition that balances two key objectives:

Lumpability: Ensuring that the Markov chain can be aggregated into the desired number of states while preserving the Markov property, thereby maintaining the probabilistic behaviour within each aggregated state.

Partition Optimization: Identifying an optimal partition of the states that minimizes the divergence or distance from the initial suggested partition (based on the profitability criterion of loss ratio). This could involve various distance metrics or criteria based on the specific goals and constraints of the problem.

To address this two-fold challenge effectively, a combination of mathematical modelling, optimization techniques, and probabilistic analysis is necessary. This approach should aim to find the partition that not only satisfies lumpability but also minimizes the specified distance metric, aligning with the requirements of IFRS 17 or any other regulatory framework.

In the context of IFRS17, the entity will need to classify its contracts into the three main group categories. As stated above, in a BMS there will be an initial state space $S$ with cardinality $|S| = n$. Given a profitability criterion, the entity will classify the contracts into three new sets, forming a partition $S'$ with $|S'| = 3$.

However, the profitability boundaries which determine to which of the three new states a contract is classified are usually set by a rather subjective, instead of an objective, criterion. For example, it may be assumed that all contracts with a loss ratio above 100% are classified as onerous at initial recognition. In order to remove this subjectivity, and also the stochasticity, in setting the boundaries between the new states, we need to consider the lumpability problem but also, given the initial rough partition $S'$, to find the best partition, say $S_1$, that has the smallest distance from the initial one. In fact, we must adjust the initial partition $S'$ of $S$, which was determined without rigorous criteria but probably heuristically.

The general formulation of the new problem in the context of IFRS17 for the aggregation of contracts regarding a BMS includes the approximate lumpability problem, as set out previously, but also adds a second minimization problem.

2.2. Establishing the Approximate Lumpability Problem with Classification Criterion

In this subsection, we provide the technical details to formulate our problem in a standard probabilistic framework.

Let $M = (S, P)$ be a Markov chain with transition matrix $P \in \mathbb{R}^{n \times n}$ which is not exactly lumpable with respect to an initial partition $S'$ of dimension $m < n$, defined by the corresponding matrices $V, U$. Let $P_L^{S'} \in \mathbb{R}^{n \times n}$ be the transition matrix of a new Markov chain $M'$ with the same dimensions as $M$, which is exactly lumpable with respect to the partition $S'$ and as close as possible, under some norm, to the original one.

The state space S represents the various driving risk categories within a Bonus-Malus System (BMS), while the transition matrix P serves as the stochastic matrix within this BMS. This matrix encapsulates all the probabilities associated with how an insurance entity transitions policyholders from one risk category to another.

Given the initial partition $S'$ of dimension $m$, let $S_1$ be a new partition with $|S_1| = |S'| = m < n$, and let $P_L^{S_1} \in \mathbb{R}^{n \times n}$ be the transition matrix of a new Markov chain $M_1 = (S_1, P_L^{S_1})$ with the same dimensions as $M$ and $M'$, which is exactly lumpable with respect to the partition $S_1$ and as close as possible, under some norm, to the original matrix, while at the same time the partition $S_1$ is as close as possible, under some other norm, to the initial partition $S'$.

So, the Approximate Lumpability Problem with Classification Criterion is defined as follows.

$$\min_{S_i \in L(S)} \left( a \left\| P - P_L^{S_i} \right\|_2 + b \left\| S' - S_i \right\| \right)^{1/2} \quad (1)$$

where L ( S ) is the set that contains all possible partitions of the state space S, subject to

$$V U P_L^{S_i} V = P_L^{S_i} V, \qquad P_L^{S_i} \mathbf{1}_n = \mathbf{1}_n, \qquad \left( P_L^{S_i} \right)_{i,j} \ge 0 \quad (2)$$

The first part of the above expression is the so-called Approximate Lumpability Problem.

Also, $\|\cdot\|_2$ denotes the $\ell_2$ norm; this choice of norm is indicative, and various weighted $\ell_2$ norms can be considered, as in the Approximate Lumpability Problem, while $\|\cdot\|$ denotes any appropriate vector norm which can measure the distance (dissimilarity) between two partitions.

Remark 2.1

The number of all possible partitions of the original state space depends on the dimension of the original state space but also on the context of the problem. In general, the number of all possible partitions of an $n$-element set is the Bell number $B_n$. The Bell number $B_n$ represents the number of partitions of a set with $n$ elements. It can be computed using the formula $B_n = \sum_{k=1}^{n} S(n, k)$, where $S(n, k)$ is a Stirling number of the second kind. The Stirling number of the second kind, $S(n, k)$, represents the number of ways to partition a set of $n$ distinguishable objects into $k$ non-empty indistinguishable subsets.
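The quantities in Remark 2.1 are easy to compute with the standard recurrence for Stirling numbers of the second kind (a sketch; the recurrences below are textbook identities, not specific to this paper):

```python
# Stirling numbers of the second kind and Bell numbers, as in Remark 2.1.
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """S(n, k): ways to partition n labelled objects into k non-empty blocks."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    # The n-th object either starts its own block or joins one of k existing ones.
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

def bell(n):
    """B_n = sum over k of S(n, k): all partitions of an n-element set."""
    return sum(stirling2(n, k) for k in range(1, n + 1))

print([bell(n) for n in range(1, 6)])  # [1, 2, 5, 15, 52]
print(stirling2(20, 3))   # unrestricted partitions of 20 states into 3 blocks
```

The second print shows why the unrestricted count is impractical: even restricted to exactly three blocks, a 20-state space admits hundreds of millions of partitions, which motivates the consecutive-states restriction of Lemma 2.1.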

Lemma 2.1

Assuming (in line with the typical expectations and standard practice) that only consecutive states can be grouped together, the number of all possible partitions is reduced to $\binom{n-1}{m-1}$, since the number of the new sets $m$ is predetermined, usually $m = 3$, as IFRS 17 allows further aggregation of the sets beyond the three main categories, which are the default requirement. Thus, the number of all possible partitions in our model is

$$\sum_{k=2}^{n-1} (n-k) = \binom{n-1}{2}.$$

Proof

The left-hand side of this result can be obtained by a step-by-step argument, in which an n-dimensional vector can be cut either after:

The 1st element to formulate the 1st group leaving n-2 possible choices to formulate the 2nd group or

The 2nd element to formulate the 1st group leaving n − 3 possible choices to formulate the 2nd group or, continuing in this way, down to

The (n − 2)-nd element to formulate the 1st group leaving only 1 possible choice to formulate the 2nd group.

And since all possible outcomes are disjoint, then, according to the Addition Principle, we end up with $\sum_{k=2}^{n-1} (n-k) = \binom{n-1}{2}$.

For the right-hand side of the equation the approach used to tackle the issue is the following:

Let x, y, and z represent the number of elements to be allocated in each of the three sets.

The constraints are:

$x + y + z = |S| = n$ (since all elements must be distributed).

x, y, and z are positive integers (since each set must be non-empty).

To calculate the number of solutions to this equation, the so-called “Stars and Bars” combinatorial technique with restricted partitions can be used. The number of solutions is the number of ways to choose $m - 1$ out of $n - 1$, namely $\binom{n-1}{m-1}$.

Therefore, this will be the number of all possible partitions of a set with n consecutive elements into three distinct sets, with each set containing consecutive elements □.

The lemma above implies that when partitioning a state space of 20 states into three sets (IFRS17 groups of contracts), we can derive 171 possible partitions.
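This count is easy to verify by direct enumeration (a sketch: a partition into m consecutive groups corresponds to a choice of m − 1 cut points among the n − 1 gaps between consecutive states):

```python
# Enumerating the consecutive partitions counted in Lemma 2.1.
from itertools import combinations
from math import comb

def consecutive_partitions(n, m):
    """Yield every partition of states 1..n into m non-empty consecutive groups."""
    states = list(range(1, n + 1))
    for cuts in combinations(range(1, n), m - 1):   # positions to cut after
        bounds = (0,) + cuts + (n,)
        yield [states[bounds[i]:bounds[i + 1]] for i in range(m)]

parts = list(consecutive_partitions(20, 3))
print(len(parts), comb(19, 2))   # both equal 171, as stated in the text
```

The same generator can serve as the candidate list for the exhaustive search in Theorem 2.1.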

Hence, our original minimization problem, described in expression (1) under the constraints (2), may be treated and solved via the following theorem.

Theorem 2.1 Consider the discrete optimization problem defined as follows, where $L(S)$ is a finite set containing all possible partitions of the state space $S$:

$$\min_{S_i \in L(S)} \left( a \left\| P - P_L^{S_i} \right\|_2 + b \left\| S' - S_i \right\| \right)^{1/2} \quad (1)$$

subject to the constraints:

$$V U P_L^{S_i} V = P_L^{S_i} V \quad (2a)$$

$$P_L^{S_i} \mathbf{1}_n = \mathbf{1}_n \quad (2b)$$

$$\left( P_L^{S_i} \right)_{i,j} \ge 0 \quad (2c)$$

Since the set $L(S)$ is finite, the algorithm below finds the optimal solution of this discrete optimization problem, satisfying both the objective function (1) and the constraints (2a), (2b) and (2c), for any given state space $S$. This is achieved by the standard mathematical method of exhaustion over all possible partitions.

Proof

By taking into consideration the finite set of partitions and the discrete nature of the function the following algorithm capitalizes on these characteristics to find an optimal solution.

Algorithm

Step 1: Find the Lumpability matrix $P_L$ for the given initial partition $S'$, based on some profitability criterion, by the use of the Approximate Lumpability Method;

Step 2: For every single partition $S_i$, $i = 1, \ldots, \binom{n-1}{m-1}$, of the original state space $S$, find the respective Lumpability matrices $P_{L_i}$ as in Step 1;

Step 3: For each partition in Step 2, calculate its partition error, defined as its distance from the initial partition $S'$;

Step 4: Based on Equation (1), select the optimal partition $S_1$ and the corresponding Lumpability matrix, which is lumpable with respect to partition $S_1$ and closest to the original transition matrix $P$.

Our algorithm is built upon the Approximate Lumpability algorithm, originally developed by Georgiou et al. (Georgiou, Domazakis, Pappas, & Yannacopoulos, 2021). Specifically, our algorithm incorporates the pre-existing Approximate Lumpability algorithm, which is applied iteratively as a viable alternative to the exceedingly rare exact lumpability property. Our novel algorithm extends the previous one by introducing two additional steps: Step 3 for estimating the partition error and Step 4 for making the optimal choice. For a more comprehensive understanding, including the complete Approximate Lumpability algorithm, we recommend referring to (Loizides & Yannacopoulos, 2012) and (Georgiou, Domazakis, Pappas, & Yannacopoulos, 2021).

Hence, we have constructed a synoptic algorithm for solving the problem of aggregation into three categories, considering both the profitability of contracts and the stochastic structure of the states of the BMS model.
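The four steps can be sketched in runnable form as follows. Caveat: this is an illustrative surrogate, not the Approximate Lumpability algorithm of Georgiou et al. — Steps 1-2 are replaced by a simple within-block spread of transition mass standing in for $\|P - P_L^{S_i}\|$, and the Step 3 distance is a plain mismatch count rather than the Wasserstein distance used later; the 5-state matrix, the weights a, b and the initial partition are all hypothetical:

```python
# Exhaustive search over consecutive 3-partitions (Steps 2-4), with surrogate
# scores standing in for the paper's lumpability error and partition distance.
from itertools import combinations

def consecutive_partitions(n, m):
    for cuts in combinations(range(1, n), m - 1):
        bounds = (0,) + cuts + (n,)
        yield [list(range(bounds[i], bounds[i + 1])) for i in range(m)]

def lumpability_error(P, partition):
    # Surrogate for ||P - P_L||: zero iff P is exactly lumpable w.r.t. the partition.
    err = 0.0
    for block in partition:
        for target in partition:
            totals = [sum(P[s][t] for t in target) for s in block]
            err += max(totals) - min(totals)
    return err

def labels(partition, n):
    lab = [0] * n
    for g, block in enumerate(partition):
        for s in block:
            lab[s] = g
    return lab

def partition_distance(p1, p2, n):
    # Crude dissimilarity: number of states placed in a different group.
    return sum(a != b for a, b in zip(labels(p1, n), labels(p2, n)))

def best_partition(P, initial, a=1.0, b=0.1):
    # Score every candidate partition as in Equation (1) and keep the best.
    n = len(P)
    return min(
        consecutive_partitions(n, 3),
        key=lambda S_i: (a * lumpability_error(P, S_i)
                         + b * partition_distance(initial, S_i, n)) ** 0.5,
    )

P = [
    [0.7, 0.3, 0.0, 0.0, 0.0],
    [0.3, 0.7, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.9, 0.1, 0.0],
    [0.0, 0.0, 0.0, 0.6, 0.4],
    [0.0, 0.0, 0.0, 0.4, 0.6],
]
initial = [[0, 1], [2, 3], [4]]        # a heuristic initial classification
print(best_partition(P, initial))      # [[0, 1], [2], [3, 4]]
```

The search keeps the heuristic grouping of states 0-1 but moves state 3 next to state 4, because that partition is (surrogate-)lumpable while staying close to the initial classification, which is exactly the trade-off that Equation (1) encodes.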

3. Applications in IFRS 17 Modelling

In this section, we apply the proposed methodology in the context of IFRS17 contract classification. We begin with an initial partition of the state space and then apply the new method to various stochastic matrices to analyse its impact and sensitivity. To illustrate our findings and convey our concepts effectively, we choose, as a representative insurance system, a Bonus-Malus System (BMS) with 20 states governed by a transition matrix. Let the state space be $S = \{1, 2, 3, \ldots, 20\}$. State 1 represents the super-bonus state, where the best drivers are classified, while, conversely, state 20 represents the super-malus state.

3.1. Description

3.1.1. Transition Matrices

In all subsequent applications, a state space consisting of 20 states is considered. Consequently, the respective transition matrices have dimensions of 20 × 20.

Ten variations of transition matrices are considered in total. Three different initial transition matrix forms are utilized, corresponding to matrix indices 11, 21, and 31. The remaining seven matrices are derived from simple variations of these initial forms. Therefore, we can categorize these matrices into three groups, as shown in Table 1.

The first group comprises a tridiagonal matrix (11) along with three variations of it. This matrix implies that the transition rules only allow movements between consecutive states, either to an immediately better state or an immediately worse one compared to the current state.

In its first variation (12), the last state (state 20), representing the super-malus stage, is treated as an absorbing state, with no other changes from (11). In the second variation (13), the initial matrix includes non-zero transition probabilities for the last state, which can be interpreted as a shift in the transition rules of the BMS, allowing immediate penalty under certain conditions for its policyholders. The last variation (14) is identical to the previous one but designates only the 20th state as an absorbing state. These three similar variations also apply to the initial matrix of the second group (21), producing the rest of its matrices (22, 23, 24).

Matrix (21) exhibits a banded-like structure, starting with an “almost tridiagonal” form that gradually includes more non-zero elements as one progresses through the states. Moving along the states, additional non-zero elements appear beyond the three main diagonals, creating a broader pattern of non-zero elements. This expansion beyond the initial tridiagonal structure results in a band-like pattern in the matrix, which can be regarded as a “block tridiagonal matrix.” This suggests that the transition rules of the BMS differ from those in the previous group (Group 1), or that the transition probabilities have been empirically estimated to reflect real transitions between these states. The gradual emergence of non-zero elements beyond the initial tridiagonal structure indicates an increasing level of complexity or interconnectedness between the states as one traverses the matrix. Describing it as a banded matrix or a block tridiagonal matrix highlights these specific structural characteristics.

Finally, the initial matrix of the last group (31) is an upper tridiagonal matrix, with the only variation considered being the introduction of the absorbing super-malus state (32).

Table 1. Description of transition matrices.
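As an illustration of the Group 1 forms, the following sketch constructs a tridiagonal matrix of type (11) and its absorbing-state variant (12); the probability values p and q are hypothetical placeholders, since the actual entries belong to the matrices described in Table 1:

```python
# Hypothetical construction of the Group 1 matrix forms (11) and (12).
def tridiagonal_bms(n=20, p=0.8, q=0.2):
    """Matrix (11): moves only to the adjacent better (p) or worse (q) state."""
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        better, worse = max(i - 1, 0), min(i + 1, n - 1)
        P[i][better] += p   # claim-free year: one step towards super-bonus
        P[i][worse] += q    # at least one claim: one step towards super-malus
    return P

def with_absorbing_last_state(P):
    """Matrix (12): same rules, but the super-malus state 20 is absorbing."""
    Q = [row[:] for row in P]
    Q[-1] = [0.0] * len(P)
    Q[-1][-1] = 1.0
    return Q

P11 = tridiagonal_bms()
P12 = with_absorbing_last_state(P11)
# Every row of a stochastic matrix must sum to one:
assert all(abs(sum(row) - 1.0) < 1e-9 for row in P11 + P12)
print(P11[0][:2], P12[19][19])  # [0.8, 0.2] 1.0
```

The remaining variants (13), (14) would be obtained analogously by editing the last row(s) of the base matrix.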

3.1.2. Distance Metric for Estimating the Partition Error (Measuring Dissimilarity)

The selection of the metric is important, since it has a potential impact on the results. A distance metric widely used in a range of problems is the Wasserstein distance. It is a representative and standard distance metric which can ensure the validity of the results.

As this metric is a distance function between probability distributions, we convert all partitions to probability distributions by assigning probabilities to each set of the reference partition, based on different rules for each set.

Remark 3.1

The probabilities are assigned to each state $i$ belonging to the $j$-th set of the Reference Partition as a function of its respective Loss Ratio ($LR_i$).

$$probabilities_1 = \frac{1}{1 + LR_i}, \quad state\ i \in set_1$$

$$probabilities_2 \sim tr.normal(\mu, \sigma^2, a, b) \cdot (1 - probabilities_1), \quad state\ i \in set_2$$

$$\mu = mean(LR_{Ref.Set_2}), \quad \sigma^2 = var(LR_{Ref.Set_2}), \quad a = \min(LR_{Ref.Set_2}), \quad b = \max(LR_{Ref.Set_2})$$

$$probabilities_3 = (1 - probabilities_1 - probabilities_2) \cdot \frac{1}{1 + e^{\,penalty \cdot SF}}$$

where SF (Scaling Factor) = 0.5 and $penalty = e^{LR_i - 101}$, $i \in set_3$.

Probabilities are normalized as:

$$normalized\ probs_j = \frac{probabilities_j}{\sum probabilities}, \quad j = 1, 2, 3$$

and the probability distribution of the reference partition is:

$$distribution_{S'} = \left( \sum normalized\ probs_1, \; \sum normalized\ probs_2, \; \sum normalized\ probs_3 \right) \; \square$$

Remark 3.2

The probability distribution for each non-reference partition is established in a similar manner: the probabilities assigned to each state under the reference partition are summed according to the set of the examined partition in which each state is placed.

For every non-reference partition S i , the corresponding probability distribution is assigned as:

$$\mathrm{probability\ distribution}_{S_i} = \left( \sum_{k \in S_i^1} RPP_k,\ \sum_{k \in S_i^2} RPP_k,\ \sum_{k \in S_i^3} RPP_k \right)$$

where $RPP_k$ is the Reference Partition's Probability assigned to state $k$.

Hence, the probability distribution naturally varies for each set within every partition □.

By allocating probabilities to individual states according to their corresponding sets in the Reference Partition, the intention is to implement a distinct distribution pattern. This pattern enables the distance metric to impose more substantial penalties on transitions involving the onerous group of contracts, while exhibiting greater tolerance for states situated between the groups. Additionally, these probabilities are contingent on the Loss Ratios of each state, enabling the distance metric to account not only for set transitions but also for the magnitude of these changes. Consequently, the metric gains enhanced resilience and accuracy.

Cost Matrix

A significant component of the Wasserstein distance is the cost matrix. The cost matrix adds weight by heavily penalising transitions of a state between the middle and the onerous group of contracts, while inflicting a more lenient penalty on transitions between the profitable and the middle group of contracts. So, a typical cost matrix (CM) may be a 3 × 3 matrix of the form:

$$CM = \begin{pmatrix} a & b & c \\ b' & a & d \\ c' & d' & a \end{pmatrix}$$

where $a, b, c, d, b', c', d' \geq 0$ (a typical choice may equate $b = b'$, $c = c'$, $d = d'$).

The above cost matrix functions as a transition matrix, with its elements signifying the penalties associated with state transitions between different groups. Specifically, ‘a’ denotes the penalty for remaining within the initial group, ‘b’ accounts for transitions between groups 1 and 2 in either direction, ‘c’ pertains to transitions between groups 1 and 3 in either direction, and ‘d’ represents transitions between groups 2 and 3 in either direction.

In our numerical simulation, the subsequent cost matrix is employed to impose more severe penalties on transitions between groups 1 and 3. Notably, this cost matrix exhibits an intentional lack of symmetry concerning movements from group 1 to group 2 and the reverse transition. This asymmetry is deliberately designed to introduce a more lenient approach for transitions from group 1 to group 2 in our exercise.

$$CM = \begin{pmatrix} 0 & 25 & 100 \\ 35 & 0 & 80 \\ 100 & 80 & 0 \end{pmatrix}$$
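For 3-point distributions, the Wasserstein distance under a given cost matrix is a small optimal-transport linear program. The following sketch uses `scipy.optimize.linprog`; this solver choice is ours, not necessarily the authors':

```python
import numpy as np
from scipy.optimize import linprog

# Asymmetric cost matrix from the text: rows = source group, columns = target group.
CM = np.array([[0, 25, 100],
               [35, 0, 80],
               [100, 80, 0]], dtype=float)

def wasserstein_discrete(p, q, cost):
    """Discrete Wasserstein (optimal transport) distance between two finite
    distributions p and q under a cost matrix, solved as a linear program
    over transport plans T[i, j] >= 0 (flattened row-major)."""
    n, m = cost.shape
    c = cost.ravel()                            # objective: sum_ij cost[i,j] * T[i,j]
    A_rows = np.kron(np.eye(n), np.ones(m))     # sum_j T[i, j] = p[i]
    A_cols = np.kron(np.ones(n), np.eye(m))     # sum_i T[i, j] = q[j]
    res = linprog(c,
                  A_eq=np.vstack([A_rows, A_cols]),
                  b_eq=np.concatenate([p, q]),
                  bounds=(0, None), method="highs")
    return res.fun
```

Because CM is asymmetric (25 versus 35 between groups 1 and 2), the resulting value is directional, matching the deliberately lenient treatment of group-1-to-group-2 moves described above.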

3.2. Results

The solution algorithm is employed for all ten previously mentioned matrices, and the results are compared based on the minimum error.

The initial partition, denoted as $S' = \{\{1, 2, \ldots, 7\}, \{8, 9, \ldots, 16\}, \{17, \ldots, 20\}\}$, groups states together based on their profitability. This grouping likely relies on a relative profitability index, similar to the loss ratio described earlier. The first group encompasses the profitable states (contracts and policyholders), the second group comprises states with a lower likelihood of becoming onerous in the future, and the third group comprises contracts initially recognized as onerous.

The Loss Ratios used in our application are randomly assigned as percentages to each state, as shown in the following figure (Figure 1).

Figure 1. Loss Ratios (%) by State.

The utilization of randomly selected Loss Ratios enables us to assess the extent of robustness exhibited by our distance metric. The presented loss ratio values are assigned with the assumption of a general increasing pattern from the profitable to the onerous contract groups. While this trend may not always hold, it is a common assumption in dealing with loss ratio values. This assumption is a key factor in explaining why, in Lemma 2.1, we restrict the grouping of contracts to consecutive states.
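Because Lemma 2.1 restricts grouping to consecutive states, the candidate search space is small and can be enumerated exhaustively. A sketch of this enumeration (the paper's specific numbering of partition indices such as 60, 74 or 76 is not reproduced here):

```python
def consecutive_partitions(n=20):
    """Enumerate all partitions of states 1..n into three nonempty blocks
    of consecutive states -- the only candidates admitted by Lemma 2.1."""
    parts = []
    for c1 in range(1, n - 1):           # last state of block 1
        for c2 in range(c1 + 1, n):      # last state of block 2
            parts.append((list(range(1, c1 + 1)),
                          list(range(c1 + 1, c2 + 1)),
                          list(range(c2 + 1, n + 1))))
    return parts
```

For 20 states this yields C(19, 2) = 171 candidate three-set partitions, among which the solution algorithm selects the optimum.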

3.2.1. Application 1

The algorithm selects the following partition indices as optimal for each of the ten matrices used, and the results are summarized in Table 2.

Table 2. Application 1 Results.

Optimal partition index per matrix: 76, 76, 60, 60, 74, 74, 60, 60, 60, 60.

Partition index 60 represents the initial partition, and the results suggest that for most of the matrix formats, the Approximate Lumpability Problem enhanced with the classification criterion algorithm has no significant effect on contract classification. However, for matrices 11 and 12, the new approach shows that the optimal partition is:

$$S_{76} = \{\{1, 2, \ldots, 5\}, \{6, 7, \ldots, 15\}, \{16, \ldots, 20\}\}$$

This change in partition clearly affects the contract classification. Transition matrices 21 and 22 are also affected by the new approach, leading to the following optimal partition:

$$S_{74} = \{\{1, 2, \ldots, 7\}, \{8, 9, \ldots, 15\}, \{16, \ldots, 20\}\}$$

It is clear that simply adding an absorbing state to the BMS does not significantly affect the results.

When compared to the initial partition

$$S' = \{\{1, 2, \ldots, 7\}, \{8, 9, \ldots, 16\}, \{17, \ldots, 20\}\},$$

the optimal partitions classify state 16 in the onerous group of contracts, while dissimilarity arises in classifying states 6 and 7 within the profitable or middle group of contracts. Partition 76 classifies both of these states in group 2, while partition 74 has no changes in group 1 compared to the reference partition. No weights have been applied in the selection of the optimal distance ( a = b = 1 ).

It is crucial to stress that both the entries within the transition matrix and the matrix’s structure itself are vital in determining the optimal selection. However, a more significant consideration is to assess the overall behavior of the distance metric across a range of weight combinations. Therefore, by exploring all possible combinations of weights a and b, with a step of 0.01 (1%), we generate a spectrum of total error values (Figure 2). This analysis aids in understanding how changes in the weight values affect the overall performance of the distance metric.

Figure 2. Weighted combination of total error (tridiagonal matrix form, Wasserstein metric).

The weighted-distance combinations exhibit a rising trend as the weight assigned to the Lumpability Error increases. Notably, the combined errors’ values are consistently lower than the initial error. This suggests that partition S76 remains the optimal choice, irrespective of the assigned error weights. Nevertheless, the solver can adapt to fine-tune the relative importance of lumpability and partition errors.

In the case of a matrix form that returns to the reference partition 60, the plot primarily shows that the weighted combinations start from zero, aligning seamlessly with the underlying model (Figure 3).

Figure 3. Weighted combination of total error (tridiagonal matrix form with non-zero elements in the last column, Wasserstein metric).

In general, the shape and behaviour of the curves remain consistent regardless of the specific matrix form used.

3.2.2. Application 2—Sensitivities and Variations

Now, we examine whether a shift in the BMS rules can provide better results.

Let us describe a BMS model for motor insurance premiums. For simplicity we consider only three states {reckless, bad, good} where the bonuses (discounts) are 0%, 30% and 60% respectively. Consistently with the rules and the transition matrix below, the 1st state is the super malus (the 0% entry level) and the 3rd is the super bonus.

The following rules apply to this BMS:

1) All new policyholders enter the system at the 0% level (malus state);

2) If no claim is made during the current year the policyholder moves up to the next level or remains at the super bonus;

3) If one or more claims are made the policyholder moves down one level, or remains at the 0% level.

Assume that the insurance company has estimated that the probability of making at least one claim each year is ¼ and also that this probability is independent of the current level of the policyholder.

Then this model can clearly be represented as a Markov chain with state space $S = \{1, 2, 3\}$ and transition probability matrix given by:

$$P = \begin{pmatrix} \frac{1}{4} & \frac{3}{4} & 0 \\ \frac{1}{4} & 0 & \frac{3}{4} \\ 0 & \frac{1}{4} & \frac{3}{4} \end{pmatrix}$$
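The three rules translate directly into code. A minimal sketch, taking state 1 as the 0% entry level as implied by rule 1 and the matrix above:

```python
import numpy as np

p_claim = 1/4  # probability of at least one claim in a year

def bms_matrix(p_claim):
    """Build the 3-state BMS transition matrix from the stated rules:
    a claim moves the policyholder one level toward malus (or keeps them
    at level 1); a claim-free year moves them one level toward bonus
    (or keeps them at level 3)."""
    P = np.zeros((3, 3))
    for i in range(3):
        down = max(i - 1, 0)   # claim: one level toward malus
        up = min(i + 1, 2)     # no claim: one level toward bonus
        P[i, down] += p_claim
        P[i, up] += 1 - p_claim
    return P

P = bms_matrix(p_claim)
```

Running this reproduces the matrix above and confirms row stochasticity.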

and the relative transition graph is as follows (Figure 4).

Figure 4. BMS—transition graph.

We now aim to examine two underlying assumptions regarding modifications to the BMS regulations. In particular, a revision of the BMS rules implies an alteration of the transition probabilities in the initial transition matrix. Furthermore, these probabilities may be modified in a manner that favors either the Bonus or the Malus direction, resulting in a more lenient or stricter set of rules. Consequently, our investigation focuses on assessing whether a rule change is compatible with the proposed methodology, and on how the direction of this shift may impact the outcomes.

To perform our investigation, we modify the initial transition matrix P by a small value δ without disturbing the properties of the matrix or those of the chain. These δ values indicate a shift in the set of rules of the initial BMS.

To create variations of the initial matrix P that favor a shift to a stricter set of rules (a shift in the Malus direction), we add one value to all elements of the main diagonal, add another to the elements of the super-diagonal, and subtract their sum from the elements of the sub-diagonal. Conversely, to favor a shift toward a more lenient set of rules (a shift in the Bonus direction), one value is added to the elements of the main diagonal, another is added to the elements of the sub-diagonal, and their sum is subtracted from the elements of the super-diagonal. Keep in mind that these changes do not apply to the last row of the matrix (the super-malus state (20)).

We use a tridiagonal matrix, like Matrix11, with no absorbing state. The range of δ values used in this exercise depends on the entries of the initial transition matrix: as the form of matrix P must be preserved, as well as its properties (non-negative elements and row stochasticity), the δ values must comply with specific constraints.

Hence, in this exercise δ ∈ [0.01, 0.35], and using a step of 0.01 a total of 35² = 1225 combinations is produced. Since this leads to 1225 different matrix variations, a random sample of size 13 is chosen. To enhance comparability regarding the shift direction, matrices with the same δ values are selected for both shift directions.
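The δ-shift construction can be sketched as follows. To keep the sketch row-stochastic we adjust only interior rows; leaving the first row untouched, in addition to the exempt super-malus last row, is our simplifying assumption:

```python
import numpy as np

def shift_matrix(P, d_diag, d_off, direction="malus"):
    """Delta-shift a tridiagonal transition matrix.
    direction='malus': add d_diag to the main diagonal, d_off to the
    super-diagonal, and subtract d_diag + d_off from the sub-diagonal.
    direction='bonus': the mirror image. Assumes the shifts keep all
    affected entries within [0, 1]."""
    Q = P.astype(float).copy()
    n = Q.shape[0]
    # Interior rows only: boundary rows (incl. the super-malus last row)
    # are left untouched in this sketch to preserve row-stochasticity.
    for i in range(1, n - 1):
        Q[i, i] += d_diag
        if direction == "malus":
            Q[i, i + 1] += d_off
            Q[i, i - 1] -= d_diag + d_off
        else:  # bonus shift
            Q[i, i - 1] += d_off
            Q[i, i + 1] -= d_diag + d_off
    return Q
```

Each shifted matrix remains a valid transition matrix for admissible δ values, which is exactly the constraint that bounds δ in [0.01, 0.35].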

Results under the Wasserstein distance for estimating the partition error are summarized in the following Table 3.

The first row of the table presents the results for the initial transition matrix, with no δ modifications applied. The subsequent 13 rows present the outcomes of the Malus-shifted matrices, while the last 13 rows summarize the results of the Bonus-shifted matrices, all for the same values of δ as discussed earlier.

It is evident that the Malus-shifted systems appear to be largely unaffected. However, a shift in the Bonus direction occasionally yields some noticeable effects.

More specifically, in four cases, there is a change in the optimal partition. This change results in either Partition:

$$S_{59} = \{\{1, 2, \ldots, 8\}, \{9, \ldots, 16\}, \{17, \ldots, 20\}\}$$

Or partition:

$$S_{61} = \{\{1, 2, \ldots, 6\}, \{7, \ldots, 16\}, \{17, \ldots, 20\}\}$$

While these changes are relatively minor and mainly involve transitions between groups 1 and 2, they indicate that the new approach does have an impact.

It is important to note that the distance metric, cost matrix, and loss ratio values are crucial components that can significantly influence the algorithm’s optimal recommendations.

Figure 5 enables comparisons of total error values across different δ values and shift directions, while also aiding the identification of potential patterns within the data.

Table 3. Application 2 results.

Notably, when we observe the effect of increasing δ values on the main diagonal for the Malus-shifted BMS, it becomes evident that the total error exhibits a decreasing trend (Figure 5, red line). However, in the case of a shift in the Bonus direction, a consistent behavior is not clearly discernible (Figure 5, green line). Putting them together in the same plot, we observe that the total error values for the Bonus-shifted BMS (Figure 6, green line) are always much greater than those of the Malus-shifted one (Figure 6, red line).


Figure 5. Distance by δ values and Bonus-Malus shift direction.

Figure 6. Distance by δ values and Bonus-Malus shift direction (together).

3.2.3. Application 3—Borderline Alterations

In this final section, we delve into the impact of introducing modifications to the rules of the BMS on the boundaries of the initial grouping. To provide some context, let’s first recall that the initial (Reference) partition, denoted as S’, is structured as follows:

$$S' = \{\{1, 2, \ldots, 7\}, \{8, 9, \ldots, 16\}, \{17, \ldots, 20\}\}.$$

With this foundation in mind, we proceed to investigate the effect of these boundary rule changes using the default transition matrix from the previous exercise (case 1) and the Matrix11 employed in application 1 (case 2). Specifically, we sequentially replace the non-zero elements of states 7, 8, 16, and 17 with the values 0.7, 0.1, 0.2. This substitution results in four distinct matrix variations derived from the initial default transition matrix. Consequently, these changes have a direct impact on the rules defining the boundaries for contract classification (profitable, no significant probability of becoming onerous, onerous) at states 7, 8, 16, and 17. Results are summarized in the following tables, respectively.

Case 1 (Table 4)

Table 4. Application 3, Case1 Results.

Case 2 (Table 5)

Table 5. Application 3, Case2 Results.

First and foremost, it is crucial to highlight that the new method significantly impacts the recommendations for optimal partitions. Additionally, the values within the transition matrix play a pivotal role, with Matrix11 appearing more susceptible to the influence of the new algorithm. Moreover, as observed in Application 1, Matrix11 is affected by the new method, resulting in the optimal partition 76:

$$S_{76} = \{\{1, 2, \ldots, 5\}, \{6, 7, \ldots, 15\}, \{16, \ldots, 20\}\}$$

in contrast to the transition matrix used in application 2, which reverts to the reference partition 60:

$$S' = S_{60} = \{\{1, 2, \ldots, 7\}, \{8, 9, \ldots, 16\}, \{17, \ldots, 20\}\}$$

In Case 1, changing state 8 leads to partition 59:

$$S_{59} = \{\{1, 2, \ldots, 8\}, \{9, \ldots, 16\}, \{17, \ldots, 20\}\}$$

while altering state 17 leads to partition 74:

$$S_{74} = \{\{1, 2, \ldots, 7\}, \{8, 9, \ldots, 15\}, \{16, \ldots, 20\}\}$$

In Case 2, when changes occur at state 16, partition 63 becomes optimal:

$$S_{63} = \{\{1, 2, 3, 4\}, \{5, \ldots, 16\}, \{17, \ldots, 20\}\}$$

and if the change is applied to state 8, partition 73 becomes the optimal choice:

$$S_{73} = \{\{1, 2, \ldots, 8\}, \{9, \ldots, 15\}, \{16, \ldots, 20\}\}$$

Lastly, when a change takes place at state 7, partition 74 emerges as the optimal solution:

$$S_{74} = \{\{1, 2, \ldots, 7\}, \{8, 9, \ldots, 15\}, \{16, \ldots, 20\}\}$$

4. Discussion and Conclusion

The insurance industry grapples with a formidable challenge in implementing the new regulatory standard, IFRS 17. Among the complex issues it confronts in contract classification, particularly classification centred on profitability levels, is the delineation of the boundaries that distinguish contract categories. A commonly employed method to discern these demarcations relies on a broad profitability criterion, such as the loss ratio. However, this approach yields only approximate estimates lacking robustness.

In this paper, we present a novel methodology that not only encapsulates the attributes of the initial transition matrix while preserving the lumpability property but also takes into account the initial classification grounded in profitability criteria. This innovative approach furnishes more dependable outcomes pertaining to contract classification within the framework of IFRS 17. Throughout the paper, a Bonus-Malus System (BMS) serves as an illustrative model for insurance systems.

Our proposed methodology builds upon an existing approach related to the lumpability property of Markov Chains and demonstrates its applicability when reducing the state space to a smaller one. In tackling this minimization problem, we introduce an additional dimension by incorporating considerations related to the initial contract classification. The results of this analysis highlight that the new approach significantly influences the outcomes, resulting in a shift in the optimal classification.

Moreover, it is worth noting that the new approach demonstrates sensitivity to several critical parameters, including the entries within the Transition Matrix and its structure (e.g., tridiagonal), as well as the selection of an appropriate distance metric. Notably, adopting the widely recognized Wasserstein distance enhances the robustness and reliability of our findings, effectively mitigating concerns related to metric sensitivity. The incorporation of Loss Ratio values, probability distributions of the partitions, and the cost matrix also exert a significant influence on the selection of optimal solutions.

Consequently, our method can be confidently applied across various contexts, with the assurance that its outcomes remain robust and dependable, even when considering different metric options. The method introduced here, by incorporating the classification criterion, adds a verification step: it allows us to ascertain whether, under specific circumstances, an alternative partition can achieve exact lumpability or potentially outperform the initial one. Furthermore, to enhance the accuracy of these assessments, one can also leverage Monte Carlo simulations or employ bootstrapping techniques, thereby reducing potential probabilistic errors in the analysis.

As an alternative approach, one may explore the use of specialized optimization solvers tailored to address nonlinear constrained optimization problems. Nevertheless, it is essential to acknowledge that delving into this discussion exceeds the boundaries of the present paper’s scope.

Furthermore, our findings emphasize that even subtle modifications to the rules within a BMS can yield divergent optimal classifications. Comparable observations arise when transition probabilities are adjusted near the initial classification boundaries.

In summary, our methodology unveils a sensitivity to the direction of shifts within a BMS, whether moving towards more stringent or more lenient rules. This comprehensive analysis illuminates the complexities of contract classification under IFRS 17 and highlights the multifaceted factors that influence this pivotal domain.

Appendix

Preliminaries—The Lumpability Concept in the context of a Markov Chain.

A Bonus-Malus System (BMS) can be represented as a Markov chain, of either discrete or continuous time depending on its rules, with a discrete state space.

The concept of lumpability, introduced by Kemeny and Snell (1960), is a general method whereby a Markov chain of either discrete (Kemeny & Snell, 1960) or continuous (Tian & Kannan, 2006) time with a very large number of states is reduced to a Markov chain with a smaller state space, while the properties of the original chain are maintained. The new process is called the Lumped Process. In fact, this reduction of the dimensionality of the original state space comes from aggregating (combining) states together to form new compound states, thus resulting in a partition of the initial state space.

In order to ensure that the new Markov chain preserves the properties of the original chain, two fundamental concepts must hold: lumpability and commutativity.

Definition (Lumped Markov Chain)

Let $\{X_t : t \in \mathbb{N}\}$ be a Markov chain $M = (S, P)$ with initial vector $\pi$, and let $S' = \{A_1, A_2, \ldots, A_m\}$, where $m < n$, be a partition of $S = \{1, 2, \ldots, n\}$. The chain $M$ is called lumpable with respect to $S'$ if, for any initial distribution, it holds that:

$$P(X_t \in A_j \mid X_{t-1} \in A_{i_1}, \ldots, X_{t-k} \in A_{i_k}) = P(X_t \in A_j \mid X_{t-1} \in A_{i_1})$$

for any $t, k, j$ and any $A_{i_1}, \ldots, A_{i_k} \in S'$, whenever these conditional probabilities are well-defined, i.e. the conditioning events occur with positive probability.

Thus, it follows from the above definition that the transition probabilities are independent of the choice of the initial vector π.

In other words, the above definition states that the Markov chain $M$ is lumpable with respect to a partition $S'$ if and only if, for every pair of sets $A_\eta$ and $A_\xi$, the probabilities $P_{k, A_\xi}$ have the same value for every $k \in A_\eta$: $\sum_{\kappa \in A_\xi} P_{i,\kappa} = \sum_{\kappa \in A_\xi} P_{j,\kappa}$, for all $i, j \in A_\eta$.

These are the probabilities that form the transition matrix of the lumped chain.

According to Kemeny and Snell (1960), the above constitutes a necessary and sufficient condition for a Markov chain to be lumpable with respect to a partition S'. As already mentioned, the theorem extends to continuous-time Markov chains (Tian & Kannan, 2006).

Theorem (U-V Lumpability)

If $P$ is the transition matrix of a Markov chain $\{X_t\}$, then $\{X_t\}$ is lumpable with respect to a partition $S'$ if and only if

$$VUPV = PV$$

in which case the transition matrix $P$ is said to be U-V lumpable.

When the above statement holds, the reduced stochastic process on $S'$ retains the Markov property, and the matrix $P' := UPV$, which is stochastic, is called the transition matrix of the lumped system.

Matrix $U$ is an $m \times n$ matrix whose $\xi$-th row, $\xi = 1, 2, \ldots, m$, is the probability vector having equal components for the states in $A_\xi$ and 0 elsewhere.

Matrix $V$ is an $n \times m$ matrix whose $\eta$-th column, $\eta = 1, 2, \ldots, m$, is a vector with 1's in the components corresponding to the states in $A_\eta$ and 0 elsewhere.
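Under these definitions, $U$, $V$, and the lumpability check $VUPV = PV$ can be sketched directly (a minimal illustration of the theorem, not the authors' code; 0-based state indices are assumed):

```python
import numpy as np

def build_UV(partition, n):
    """Construct U (m x n) and V (n x m) for a partition given as a list
    of lists of 0-based state indices."""
    m = len(partition)
    U = np.zeros((m, n))
    V = np.zeros((n, m))
    for xi, block in enumerate(partition):
        U[xi, block] = 1 / len(block)   # equal weight on the states in A_xi
        V[block, xi] = 1                # indicator of membership in A_eta
    return U, V

def is_lumpable(P, partition, tol=1e-10):
    """Check the U-V lumpability condition V U P V = P V."""
    U, V = build_UV(partition, P.shape[0])
    return np.allclose(V @ U @ P @ V, P @ V, atol=tol)
```

When the check passes, the lumped transition matrix is simply `U @ P @ V`, and it is row-stochastic.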

The reader is encouraged to refer to Georgiou et al. (Georgiou, Domazakis, Pappas, & Yannacopoulos, 2021) for more information and details.

However, exact lumpability may not hold for a given Markov chain; it is quite a delicate property that rarely holds in practice, especially within the context of specific applied fields.

A general method to approach exact lumpability as closely as possible, termed Approximate Lumpability, was first discussed by Kemeny and Snell (1960).

The idea behind Approximate Lumpability is to find a new Markov chain of the same dimension as the original, with a transition matrix as close as possible to that of the original chain, such that the new chain is exactly lumpable with respect to the partition S'.

Approximate Lumpability problem

Let $M = (S, P)$ be a Markov chain with transition matrix $P \in \mathbb{R}^{n \times n}$ that is not exactly lumpable with respect to a partition $S'$ of dimension $m < n$, defined by the corresponding matrices $V$, $U$. Let $P_L \in \mathbb{R}^{n \times n}$ be the transition matrix of a new Markov chain $M'$ with the same dimensions as $M$ that is exactly lumpable with respect to $S'$ and as close as possible, under some norm, to the original.

From the above it follows that there will be an approximation error, which can be minimized under some appropriate norm for the difference $P_L - P$.

Thus, the Approximate Lumpability Problem can be described as a minimization problem under specific constraints as follows.

$$\min_{P_L \in \mathbb{R}^{n \times n}} \| P_L - P \|_2$$

subject to $VUP_LV = P_LV$, $P_L \mathbf{1}_n = \mathbf{1}_n$, $(P_L)_{i,j} \geq 0$,

where $\| \cdot \|_2$ denotes the $l_2$ norm (the choice of norm is indicative, and various weighted $l_2$ norms can be considered instead) and $\mathbf{1}_n$ is the $n$-dimensional column vector whose entries are all 1. The constraints ensure the row-stochasticity of $P_L$, which is a required condition for $P_L$ to be a transition matrix.

The Approximate Lumpability Problem can be reformulated as follows.

$$\min_{p_L \in \mathbb{R}^{n^2}} \| p_L - p \|_2$$

subject to $A p_L = b$, $(p_L)_i \geq 0$,

where the column vector $b$ is given by $b = \begin{bmatrix} \mathbf{0}_{mn} & \mathbf{1}_n \end{bmatrix}^T$ and

$$A = \begin{pmatrix} (VU - I_n) \otimes V^T \\ I_n \otimes \mathbf{1}_n^T \end{pmatrix}$$

with dimensionality $n(m+1) \times n^2$.

$A$ is called the lumpability condition matrix; in fact, it substitutes for the lumpability and unit-row-sum conditions of the original formulation of the exact lumpability problem, using the Kronecker product and the vectorizations $p_L$, $p$ of the matrices $P_L$, $P$ respectively.
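The assembly of $A$ and $b$ can be sketched as follows; a row-major vectorization $p_L = \mathrm{vec}(P_L^T)$ is assumed here, and the paper's exact vectorization convention may differ:

```python
import numpy as np

def lumpability_system(partition, n):
    """Assemble A (n(m+1) x n^2) and b for the reformulated problem
    A p_L = b, with p_L the row-major vectorization of P_L, encoding
    (VU - I_n) P_L V = 0 and P_L 1_n = 1_n."""
    m = len(partition)
    U = np.zeros((m, n))
    V = np.zeros((n, m))
    for xi, block in enumerate(partition):
        U[xi, block] = 1 / len(block)
        V[block, xi] = 1
    # Row-major vec identity: vec(A X B) = (A kron B^T) vec(X).
    A_lump = np.kron(V @ U - np.eye(n), V.T)      # lumpability condition
    A_rows = np.kron(np.eye(n), np.ones(n))       # unit row sums of P_L
    A = np.vstack([A_lump, A_rows])
    b = np.concatenate([np.zeros(n * m), np.ones(n)])
    return A, b
```

Any exactly lumpable, row-stochastic matrix satisfies $A p_L = b$ exactly, which is what the verification below exploits.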

Specifically, in (Georgiou, Domazakis, Pappas, & Yannacopoulos, 2021) it is mentioned that, due to the Cauchy–Schwarz inequality, solutions can also be obtained using the $l_1$ norm.

This method applies in our case since, based on IFRS17, contract aggregation in fact lumps together only consecutive contracts in terms of profitability. For example, the last class (state 3) of the three main aggregate classes consists of all contracts considered onerous. Before aggregation, all these states of onerous contracts are represented as consecutive states in the initial transition matrix P.

The condition $A p_L = b$ captures the lumpability and unit-row-sum conditions for $P_L$, while $(P_L)_{i,j} \geq 0$ captures the non-negativity condition.

Consequently, the solution to our problem reduces to the orthogonal projection of $P$ onto the intersection $L \cap M_n^+$, where $L := L(U, V)$ is the set of all U-V lumpable $n$-dimensional square matrices with unit row sums, and $M_n^+$ is the set of all non-negative $n$-dimensional square matrices.

Georgiou et al. proposed a general algorithm to approach the Exact Lumpability Problem. For convenience, we include the proposed algorithm here. The reader is encouraged to refer to Georgiou et al. (Georgiou, Domazakis, Pappas, & Yannacopoulos, 2021) for more information and details.

Algorithm

Given an initial and an aggregate state space $S$ and $S'$, respectively, with $|S| = n$ and $|S'| = m$, the mapping $\Phi : S \to S'$, and the partition matrices $L_i$, for $i \in S$, as defined above:

1) Construct the condition matrix A;

2) Construct the reparametrized transition matrix in vector form p (rearrange the entries of P based on the mapping);

3) Construct vector b;

4) Use Dykstra's iterative scheme, along with an appropriate stopping criterion, to find the entries of the lumpable matrix in vector form $p_L$;

5) Use relabeling to obtain the solution in matrix form.
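Step 4 relies on Dykstra's alternating-projection scheme for projecting onto the intersection of the affine set $\{p : Ap = b\}$ and the non-negative orthant. A generic sketch follows; the pinv-based affine projection and the fixed iteration count are simplifying assumptions on our part, whereas Georgiou et al. use a proper stopping criterion:

```python
import numpy as np

def dykstra_projection(p0, A, b, n_iter=500):
    """Dykstra's algorithm: project p0 onto the intersection of the
    affine set {p : A p = b} and the non-negative orthant."""
    A_pinv = np.linalg.pinv(A)

    def proj_affine(p):
        # Orthogonal projection onto {p : A p = b}.
        return p - A_pinv @ (A @ p - b)

    def proj_nonneg(p):
        # Orthogonal projection onto the non-negative orthant.
        return np.maximum(p, 0)

    x = p0.astype(float).copy()
    q1 = np.zeros_like(x)   # Dykstra correction for the affine set
    q2 = np.zeros_like(x)   # Dykstra correction for the orthant
    for _ in range(n_iter):
        y = proj_affine(x + q1)
        q1 = x + q1 - y
        x = proj_nonneg(y + q2)
        q2 = y + q2 - x
    return x
```

Unlike plain alternating projections, the correction terms make the iterates converge to the actual orthogonal projection of `p0`, which is what the Approximate Lumpability Problem requires.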

For the above algorithm, Georgiou et al. proposed an analytical method to construct matrix A using the mapping function Φ. We do not provide further details on the construction of the condition matrix; the reader is referred to (Georgiou, Domazakis, Pappas, & Yannacopoulos, 2021) for more information.

NOTES

1Please refer to the Appendix for more information.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Das, K. P. (2010). A Stochastic Approach to the Bonus-Malus System. Neural, Parallel, and Scientific Computations, 18, 283-290.
[2] Georgiou, K., Domazakis, G. N., Pappas, D., & Yannacopoulos, A. N. (2021). Markov Chain Lumpability and Applications to Credit Risk Modelling in Compliance with the International Financial Reporting Standard 9 Framework. European Journal of Operational Research.
https://doi.org/10.1016/j.ejor.2020.11.014
[3] Kemeny, J. G., & Snell, J. L. (1960). Finite Markov Chains. Springer.
[4] Koukiou, G., & Anastassopoulos, V. (2021). Fully Polarimetric Land Cover Classification Based on Markov Chains. Advances in Remote Sensing, 10, 47-65.
https://doi.org/10.4236/ars.2021.103003
[5] Lemaire, J. (1995). Bonus-Malus Systems in Automobile Insurance. Kluwer Academic Publishers.
[6] Lemaire, J. (1998). Bonus-Malus Systems. North American Actuarial Journal, 2, 26-38.
https://doi.org/10.1080/10920277.1998.10595668
[7] Loizides, M. I., & Yannacopoulos, A. N. (2012). Lumpable Markov Chains in Risk Management. Optimization Letters, 6, 489-501.
https://doi.org/10.1007/s11590-010-0275-x
[8] Niemiec, M. (2007). Bonus-Malus Systems as Markov Set-Chains. ASTIN Bulletin, 37, 53-65.
https://doi.org/10.2143/AST.37.1.2020798
[9] Tian, J. P., & Kannan, D. (2006). Lumpability and Commutativity of Markov Processes. Stochastic Analysis and Applications, 24, 685-702.
https://doi.org/10.1080/07362990600632045

Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.