SUCCESS OF UNCERTAINTY-AWARE DEEP MODELS DEPENDS ON DATA MANIFOLD GEOMETRY

ABSTRACT

For responsible decision making in safety-critical settings, machine learning models must effectively detect and process edge-case data. Although existing work shows that predictive uncertainty is useful for these tasks, it is not evident from the literature which uncertainty-aware models are best suited to a given dataset. We therefore compare six uncertainty-aware deep learning models on a set of edge-case tasks: robustness to adversarial attacks as well as out-of-distribution and adversarial detection. We find that the geometry of the data sub-manifold is an important factor in determining the success of the various models. Our finding suggests an interesting direction in the study of uncertainty-aware deep learning models.
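To make the setting concrete, the sketch below shows one common way predictive uncertainty is used to flag edge-case inputs: a small deep ensemble scores each input by the entropy of its mean predictive distribution, and high-entropy inputs are treated as potential out-of-distribution or adversarial examples. This is an illustrative sketch, not the paper's exact pipeline; the ensemble size, network architecture, random batch, and threshold are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

def make_member() -> nn.Module:
    # A tiny MLP classifier; in practice each ensemble member is trained
    # independently on the same data (training omitted here for brevity).
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

ensemble = [make_member() for _ in range(5)]  # assumed ensemble size

@torch.no_grad()
def predictive_entropy(x: torch.Tensor) -> torch.Tensor:
    # Average the members' softmax outputs, then compute the entropy of the
    # mean predictive distribution; higher entropy = more uncertain input.
    probs = torch.stack([torch.softmax(m(x), dim=-1) for m in ensemble]).mean(0)
    return -(probs * torch.log(probs.clamp_min(1e-12))).sum(-1)

x = torch.randn(8, 32)        # stand-in batch; replace with real features
scores = predictive_entropy(x)
threshold = 1.5               # assumed value; would be tuned on held-out data
flagged = scores > threshold  # True = treat as a potential edge case
print(scores, flagged)
```

The same score can serve both detection tasks in the abstract: inputs far from the training distribution and adversarially perturbed inputs both tend to raise the ensemble's predictive entropy.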


DOWNLOADS


> GitHub

> Paper



KEY REFERENCES

> Abe, T., Buchanan, E. K., Pleiss, G., Zemel, R., and Cunningham, J. P., “Deep ensembles work, but are they necessary?”, 2022

> Arnez, F., Espinoza, H., Radermacher, A., and Terrier, F., “A comparison of uncertainty estimation approaches in deep learning components for autonomous vehicle applications”, CoRR, abs/2006.15172, 2020

> Bradshaw, J. F., de G. Matthews, A. G., and Ghahramani, Z., “Adversarial examples, uncertainty, and transfer testing robustness in Gaussian process hybrid deep networks”, arXiv, 2017

> Carbone, G., Wicker, M., Laurenti, L., Patane, A., Bortolussi, L., and Sanguinetti, G., “Robustness of Bayesian neural networks to gradient-based attacks”, 2020

> Carlini, N. and Wagner, D., “Adversarial examples are not easily detected: Bypassing ten detection methods”, 2017