Document Type
Thesis - Open Access
Award Date
2024
Degree Name
Master of Science (MS)
Department / School
Mathematics and Statistics
First Advisor
Michael Puthawala
Abstract
Forensic identification-of-source problems often fall under the category of verification problems, where recent advances in deep learning have been driven by contrastive learning methods. Many identification-of-source problems also face a scarcity of data, an issue addressed by few-shot learning. In this work, we make precise what makes a neural network a contrastive network. We then consider the use of contrastive neural networks for few-shot classification problems and compare them to other statistical and deep learning methods. Our findings indicate similar performance between models trained with a contrastive loss and models trained with a cross-entropy loss. We also perform an ablation study to investigate the effects of different contrastive loss functions, metric functions, and margin values within contrastive learning. To test contrastive networks on real forensic data, we use the NBIDE cartridge-casing dataset. Results are promising: contrastive learning was competitive with older statistical methods while requiring significantly less data preprocessing. Finally, we detail the desired invariance properties of embedding functions learned by contrastive networks, in the hope that future work can enforce them through model architecture.
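As a point of reference for the loss functions, metric functions, and margin values mentioned above, the following is a minimal sketch of a standard pairwise (margin-based) contrastive loss in PyTorch. It is illustrative only and not the thesis's implementation; the Euclidean metric, the default margin of 1.0, and the function name are assumptions chosen for the example, and other metrics or margins could be substituted.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                     same_source: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Pairwise margin-based contrastive loss (illustrative sketch).

    z1, z2:      embeddings of a pair of inputs, shape (batch, dim)
    same_source: 1.0 if the pair comes from the same source, else 0.0
    margin:      distance beyond which non-matching pairs incur no penalty
    """
    # Euclidean distance serves as the metric function here; other metrics could be swapped in.
    d = F.pairwise_distance(z1, z2)
    pos = same_source * d.pow(2)                          # pull matching pairs together
    neg = (1.0 - same_source) * F.relu(margin - d).pow(2) # push non-matching pairs past the margin
    return (pos + neg).mean()
```

In an ablation of the kind described in the abstract, one would vary the metric (e.g., Euclidean vs. cosine distance), the margin value, and the form of the loss itself while holding the embedding network fixed.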
Library of Congress Subject Headings
Identification
Deep learning (Machine learning)
Forensic sciences
Publisher
South Dakota State University
Recommended Citation
Patten, Cole Ryan, "Contrastive Learning, with Application to Forensic Identification of Source" (2024). Electronic Theses and Dissertations. 972.
https://openprairie.sdstate.edu/etd2/972