Deep Learning for Forensic Identification of Source

Presentation Type

Poster

Student

Yes

Track

Forensic Statistics

Abstract

We used contrastive neural networks to learn a similarity score within the framework of the forensic common-but-unknown-source problem. Similarity scores are widely used in the interpretation of forensic evidence. Using the NBIDE dataset of 144 spent cartridge casings, this work tested the ability of contrastive neural networks to learn useful similarity scores. The results obtained by contrastive learning were compared directly with a standard forensic statistics algorithm, Congruent Matching Cells (CMC). When trained on the E3 dataset of 2967 spent cartridge casings, the contrastive networks outperformed the CMC algorithm. Generally held principles in deep learning suggest that a larger training dataset would yield even more effective similarity scores. We also examined the effect of varying the network architecture, specifically its width and depth. This work was motivated in part by the potential to use similarity scores learned via contrastive networks in standard evidence-interpretation methods such as likelihood ratios.
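To illustrate the idea of a learned similarity score, the sketch below shows the classic pairwise contrastive loss and a cosine-similarity score on toy embedding vectors. This is a minimal, hypothetical example, not the authors' training code: the embeddings, margin value, and function names are assumptions, standing in for the outputs a trained network would produce on scanned casing surfaces.

```python
import numpy as np

def similarity(a, b):
    """Cosine similarity between two embedding vectors; score in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_loss(a, b, same_source, margin=1.0):
    """Pairwise contrastive loss: pull same-source pairs together,
    push different-source pairs at least `margin` apart."""
    d = np.linalg.norm(a - b)
    if same_source:
        return 0.5 * d**2
    return 0.5 * max(0.0, margin - d)**2

# Toy embeddings standing in for network outputs on cartridge casings.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.9, 0.1])   # close to e1: plausibly same source
e3 = np.array([-1.0, 0.0])  # far from e1: different source
```

Minimizing this loss over labeled pairs drives same-source embeddings together, so a simple distance or cosine score between embeddings can then serve as the similarity score fed into downstream interpretation methods such as likelihood ratios.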

Start Date

2-7-2025 1:00 PM

End Date

2-7-2025 2:30 PM

Location

Volstorff A