Presentation Type

Oral

Track

Finance/Insurance Application

Abstract

A size-biased left-truncated Lognormal (SB-ltLN) mixture is proposed as a robust alternative to the Erlang mixture for modeling left-truncated insurance losses with a heavy tail. The weak denseness property of the weighted Lognormal mixture is studied along with the tail behavior. Explicit analytical solutions are derived for moments and Tail Value at Risk based on the proposed model. An extension of the regularized expectation–maximization (REM) algorithm with Shannon's entropy weights (ewREM) is introduced for parameter estimation and variability assessment. The left-truncated internal fraud data set from the Operational Riskdata eXchange is used to illustrate applications of the proposed model. Finally, the results of a simulation study show promising performance of the proposed SB-ltLN mixture in different simulation settings.
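The two building blocks named in the abstract, left truncation and size biasing of a Lognormal, can be sketched numerically. The code below is an illustrative assumption, not the authors' SB-ltLN implementation: it shows the density of a Lognormal left-truncated at a threshold d, and the size-biased reweighting x·f(x)/E[X], which for a Lognormal(μ, σ²) is again Lognormal with μ shifted to μ + σ².

```python
import numpy as np
from scipy.stats import lognorm

# Illustrative sketch only (not the paper's SB-ltLN mixture):
# basic left-truncated and size-biased Lognormal densities.

def lt_lognorm_pdf(x, mu, sigma, d):
    """Lognormal pdf left-truncated at d: f(x)/S(d) on (d, inf), 0 below d."""
    dist = lognorm(s=sigma, scale=np.exp(mu))
    pdf = dist.pdf(x) / dist.sf(d)  # renormalize mass over (d, inf)
    return np.where(x > d, pdf, 0.0)

def sb_lognorm_pdf(x, mu, sigma):
    """Size-biased Lognormal pdf: x * f(x) / E[X].

    For Lognormal(mu, sigma^2), E[X] = exp(mu + sigma^2 / 2), and the
    size-biased density equals a Lognormal(mu + sigma^2, sigma^2) pdf.
    """
    mean = np.exp(mu + sigma**2 / 2)
    return x * lognorm(s=sigma, scale=np.exp(mu)).pdf(x) / mean
```

A mixture version would combine several such components with weights, which is the structure the proposed model builds on.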

Start Date

2-6-2024 11:00 AM

End Date

2-6-2024 12:00 PM


Session 6: The Size-biased Lognormal Mixture with the Entropy Regularized Algorithm

Pasque 255
