Show simple item record

dc.contributor.author: Langille, Jack
dc.date.accessioned: 2024-08-22T14:38:19Z
dc.date.available: 2024-08-22T14:38:19Z
dc.date.issued: 2024-08-21
dc.identifier.uri: http://hdl.handle.net/10222/84446
dc.description.abstract: This thesis studies the performance and robustness of post-training INT8 quantized convolutional neural networks under various perturbation regimes. Perturbations include additive white Gaussian noise (AWGN), spatially correlated Brownian noise, and structured vertical and horizontal occlusions. Three state-of-the-art models are examined: VGG-16, ResNet-18, and SqueezeNet1_1. Performance metrics include top-1 accuracy, top-5 accuracy, and F1 score. We also employ Kullback-Leibler (KL) divergence to measure differences in confidence between the full-precision and quantized models' output class probabilities. We depart from traditional benchmark datasets and instead study fine-grained visual classification to a) better model real-world image classification tasks, where specificity of sub-classes derived from a parent class is favored over generality, and b) better stress the reduced-precision model. This research aims to identify points of instability or ill-conditioning in the quantized model relative to its full-precision version, providing experimental bounds on quantization for deployment in scenarios where random perturbation may be present, as is common in computer vision systems owing to thermal noise, sensor faults, and environmental conditions. Across all three models and under each perturbation scheme, the relative error between the quantized and full-precision models was consistently low; the largest gap occurred in VGG-16 under Brownian noise, where quantized top-1 accuracy dropped by 1.62%. KL divergence remained on the same order of magnitude as in the unperturbed tests across all perturbation regimes except Brownian noise, where maximum divergences ranged from 1.6631 (VGG-16) to 2.3271 (SqueezeNet1_1). Finally, while secondary to quantization-induced error, models were in general most sensitive to vertical occlusions, with accuracy degrading below 50% even at the lowest level of perturbation. [en_US]
dc.language.iso: en [en_US]
dc.subject: Computer vision [en_US]
dc.subject: Quantization [en_US]
dc.subject: Perturbation modelling [en_US]
dc.subject: Convolutional neural networks [en_US]
dc.title: On the Robustness of Quantized Convolutional Neural Networks [en_US]
dc.type: Thesis [en_US]
dc.date.defence: 2024-08-15
dc.contributor.department: Department of Engineering Mathematics & Internetworking [en_US]
dc.contributor.degree: Master of Science [en_US]
dc.contributor.external-examiner: n/a [en_US]
dc.contributor.thesis-reader: Guy Kember [en_US]
dc.contributor.thesis-reader: Kamal El-Sankary [en_US]
dc.contributor.thesis-supervisor: Issam Hammad [en_US]
dc.contributor.ethics-approval: Not Applicable [en_US]
dc.contributor.manuscripts: Not Applicable [en_US]
dc.contributor.copyright-release: Not Applicable [en_US]
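
The abstract above describes post-training INT8 quantization of standard CNNs. As a concrete illustration, below is a minimal sketch of post-training static INT8 quantization using PyTorch's eager-mode API. The thesis record does not specify a framework or calibration procedure, so the library calls, the calibration loop, and the choice of the fbgemm backend are illustrative assumptions, not the author's exact method.

    # Minimal post-training static INT8 quantization sketch (PyTorch assumed).
    import torch
    from torchvision.models.quantization import resnet18

    # Load a quantizable ResNet-18 with pretrained FP32 weights.
    model_fp32 = resnet18(weights="DEFAULT", quantize=False)
    model_fp32.eval()

    # Fuse conv/bn/relu blocks and attach observers that record activation ranges.
    model_fp32.fuse_model()
    model_fp32.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
    prepared = torch.ao.quantization.prepare(model_fp32)

    # Calibrate on representative inputs (random tensors stand in for real images).
    with torch.no_grad():
        for _ in range(16):
            prepared(torch.randn(1, 3, 224, 224))

    # Replace observed FP32 modules with their INT8 equivalents.
    model_int8 = torch.ao.quantization.convert(prepared)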
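The perturb-and-compare protocol the abstract implies (inject noise, then compare the two models' output distributions with KL divergence) could look like the sketch below, continuing from the models above. The noise level sigma and the batchmean reduction are illustrative assumptions, not values taken from the thesis.

    # Sketch of AWGN perturbation plus KL divergence between model outputs.
    import torch
    import torch.nn.functional as F

    def awgn(x, sigma):
        # Additive white Gaussian noise with standard deviation sigma (assumed level).
        return x + sigma * torch.randn_like(x)

    @torch.no_grad()
    def kl_fp32_vs_int8(model_fp32, model_int8, x):
        # KL(p_fp32 || p_int8) over output class probabilities.
        log_p = F.log_softmax(model_fp32(x), dim=1)  # full-precision reference
        log_q = F.log_softmax(model_int8(x), dim=1)  # quantized model
        # F.kl_div(input=log_q, target=log_p, log_target=True) computes
        # sum p * (log p - log q), i.e. KL(p || q), averaged over the batch.
        return F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")

    x_noisy = awgn(torch.randn(8, 3, 224, 224), sigma=0.1)  # stand-in batch
    print(kl_fp32_vs_int8(model_fp32, model_int8, x_noisy))

The other perturbations named in the abstract (spatially correlated Brownian noise and structured vertical or horizontal occlusions) would be substituted for awgn in the same harness, with top-1 accuracy, top-5 accuracy, and F1 score computed on the same perturbed batches for both models.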