5. Tidjon LN, Khomh F. Never trust, always verify: a roadmap for Trustworthy AI? [Internet]. Ithaca (NY): arXiv.org; 2022 [cited at 2023 Oct 31]. Available from:
https://arxiv.org/abs/2206.11981
6. Neff G. Talking to bots: symbiotic agency and the case of Tay. Int J Commun 2016;10:4915-31.
9. Graham KC, Cvach M. Monitor alarm fatigue: standardizing use of physiological monitors and decreasing nuisance alarms. Am J Crit Care 2010;19(1):28-34.
https://doi.org/10.4037/ajcc2010651
10. Arrieta AB, Diaz-Rodriguez N, Del Ser J, Bennetot A, Tabik S, Barbado A, et al. Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf Fusion 2020;58:82-115.
https://doi.org/10.1016/j.inffus.2019.12.012
11. Antoniadi AM, Du Y, Guendouz Y, Wei L, Mazo C, Becker BA, et al. Current challenges and future opportunities for XAI in machine learning-based clinical decision support systems: a systematic review. Appl Sci 2021;11(11):5088.
https://doi.org/10.3390/app11115088
13. Das A, Rad P. Opportunities and challenges in explainable artificial intelligence (XAI): a survey [Internet]. Ithaca (NY): arXiv.org; 2020 [cited at 2023 Oct 31]. Available from:
https://arxiv.org/abs/2006.11371
15. Chang J, Lee J, Ha A, Han YS, Bak E, Choi S, et al. Explaining the rationale of deep learning glaucoma decisions with adversarial examples. Ophthalmology 2021;128(1):78-88.
https://doi.org/10.1016/j.ophtha.2020.06.036
16. Moosavi-Dezfooli SM, Fawzi A, Frossard P. DeepFool: a simple and accurate method to fool deep neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2016 Jun 27–30. Las Vegas, NV; p. 2574-82.
https://doi.org/10.1109/CVPR.2016.282
17. Goodfellow IJ, Shlens J, Szegedy C. Explaining and harnessing adversarial examples [Internet]. Ithaca (NY): arXiv.org; 2014 [cited at 2023 Oct 31]. Available from:
https://arxiv.org/abs/1412.6572
18. Papernot N, McDaniel P, Jha S, Fredrikson M, Celik ZB, Swami A. The limitations of deep learning in adversarial settings. Proceedings of 2016 IEEE European Symposium on Security and Privacy (EuroS&P); 2016 Mar 21–24. Saarbruecken, Germany; p. 372-87.
https://doi.org/10.1109/EuroSP.2016.36
20. Chromik M, Butz A. Human-XAI interaction: a review and design principles for explanation user interfaces. In: Ardito C, Lanzilotti R, Malizia A, et al., editors. Human-computer interaction–INTERACT 2021. Cham, Switzerland: Springer; 2021. p. 619-40.
https://doi.org/10.1007/978-3-030-85616-8_36
21. Grgic-Hlaca N, Lima G, Weller A, Redmiles EM. Dimensions of diversity in human perceptions of algorithmic fairness [Internet]. Ithaca (NY): arXiv.org; 2022 [cited at 2023 Oct 31]. Available from:
https://arxiv.org/abs/2005.00808
22. Baniecki H, Kretowicz W, Piatyszek P, Wisniewski J, Biecek P. Dalex: responsible machine learning with interactive explainability and fairness in Python. J Mach Learn Res 2021;22(1):9759-65.
25. Rawls J. Justice as fairness: political not metaphysical. In: Corlett JA, editor. Equality and liberty: analyzing Rawls and Nozick. London, UK: Palgrave Macmillan; 1991. p. 145-73.
https://doi.org/10.1007/978-1-349-21763-2_10
29. Awasthi P, Beutel A, Kleindessner M, Morgenstern J, Wang X. Evaluating fairness of machine learning models under uncertain and incomplete information. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency; 2021 Mar 3–10. Virtual Event, Canada; p. 206-14.
https://doi.org/10.1145/3442188.3445884
30. Hinnefeld JH, Cooman P, Mammo N, Deese R. Evaluating fairness metrics in the presence of dataset bias [Internet]. Ithaca (NY): arXiv.org; 2018 [cited at 2023 Oct 31]. Available from:
https://arxiv.org/abs/1809.09245
32. Hardt M, Price E, Srebro N. Equality of opportunity in supervised learning. Adv Neural Inf Process Syst 2016;29:3315-23.
33. Srivastava M, Heidari H, Krause A. Mathematical notions vs. human perception of fairness: a descriptive approach to fairness for machine learning. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining; 2019 Aug 4–8. Anchorage, AK; p. 2459-68.
https://doi.org/10.1145/3292500.3330664
34. Saravanakumar KK. The impossibility theorem of machine fairness: a causal perspective [Internet]. Ithaca (NY): arXiv.org; 2020 [cited at 2023 Oct 31]. Available from:
https://arxiv.org/abs/2007.06024
35. Dwork C, Ilvento C. Fairness under composition [Internet]. Ithaca (NY): arXiv.org; 2018 [cited at 2023 Oct 31]. Available from:
https://arxiv.org/abs/1806.06122
36. Binns R. On the apparent conflict between individual and group fairness. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency; 2020 Jan 27–30. Barcelona, Spain; p. 514-24.
https://doi.org/10.1145/3351095.3372864
39. Trewin S, Basson S, Muller M, Branham S, Treviranus J, Gruen D, et al. Considerations for AI fairness for people with disabilities. AI Matters 2019;5(3):40-63.
https://doi.org/10.1145/3362077.3362086
40. Huq AZ. Racial equity in algorithmic criminal justice. Duke Law J 2019;68(6):1043.
41. Hu L, Kohler-Hausmann I. What’s sex got to do with fair machine learning? [Internet]. Ithaca (NY): arXiv.org; 2020 [cited at 2023 Oct 31]. Available from:
https://arxiv.org/abs/2006.01770
42. Chohlas-Wood A, Nudell J, Yao K, Lin Z, Nyarko J, Goel S. Blind justice: algorithmically masking race in charging decisions. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society; 2021 May 19–21. Virtual Event, USA; p. 35-45.
https://doi.org/10.1145/3461702.3462524
46. Bartoletti I. AI in healthcare: ethical and privacy challenges. In: Riano D, Wilk S, ten Teije A, editors. Artificial intelligence in medicine. Cham, Switzerland: Springer; 2019. p. 7-10.
https://doi.org/10.1007/978-3-030-21642-9_2
50. Bai T, Luo J, Zhao J, Wen B, Wang Q. Recent advances in adversarial training for adversarial robustness [Internet]. Ithaca (NY): arXiv.org; 2021 [cited at 2023 Oct 31]. Available from:
https://arxiv.org/abs/2102.01356
51. Qiu S, Liu Q, Zhou S, Wu C. Review of artificial intelligence adversarial attack and defense technologies. Appl Sci 2019;9(5):909.
https://doi.org/10.3390/app9050909
53. Taghanaki SA, Das A, Hamarneh G. Vulnerability analysis of chest X-ray image classification against adversarial attacks. In: Stoyanov D, Taylor Z, Kia SM, et al., editors. Understanding and interpreting machine learning in medical image computing applications. Cham, Switzerland: Springer; 2018. p. 87-94.
https://doi.org/10.1007/978-3-030-02628-8_10