Ethical & Social Implications

Deep learning raises several ethical and social concerns, ranging from bias and fairness to job displacement across the workforce. Privacy concerns arise from the vast amounts of personal data these models process. The opacity of deep learning models also hampers transparency and accountability; research such as Lipton's (2016) highlights the "black box" nature of these systems. Finally, automation driven by deep learning may disrupt job markets (Bessen, 2019), necessitating ethical consideration of how the workforce will adapt.

Another significant social implication is the use of deep learning models to screen large numbers of job applicants. According to the IEEE article "An Ethical Framework for Guiding the Development of Affectively-Aware Artificial Intelligence": "companies around the world are utilizing emotion recognition in AI for automated candidate assessment. This has faced backlash and has even generated legislation. Other controversial examples include emotion detection for employee and student monitoring. Psychologists have also questioned whether current emotion recognition models are scientifically validated enough to afford the inferences that companies are drawing. This has led some AI researchers to start calling for an outright ban on deploying emotion recognition technologies in 'decisions that impact people’s lives and access to opportunities'." (Ong, 2021)

The example above raises concerns about bias and fairness: if the training data is not representative of the population, a model can learn discriminatory features and perpetuate existing biases. And as discussed earlier, the "black box" nature of deep learning models makes it difficult to understand how they arrive at a given decision or output, so this lack of transparency compounds the fairness concerns.
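One common way such bias is audited in practice is by comparing a model's selection rates across demographic groups. The sketch below is a minimal illustration with made-up screening decisions and group labels (none of the data or names come from the sources cited here); it computes the disparate impact ratio, which the informal "four-fifths rule" flags as a potential concern when it falls below 0.8.

```python
# Hedged sketch: auditing an automated screening system for group-level
# disparity. Decisions and group labels are hypothetical toy data.

def selection_rates(decisions, groups):
    """Return the fraction of positive decisions (1 = selected) per group."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

def disparate_impact_ratio(decisions, groups):
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Toy screening outcomes: 1 = advanced to interview, 0 = rejected.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

# Group A is selected at rate 0.8, group B at rate 0.2, so the ratio is
# 0.25 -- well below the 0.8 threshold, signaling potential adverse impact.
print(disparate_impact_ratio(decisions, groups))
```

A real audit would use many more records and statistical tests, but even this simple ratio shows how an opaque model's outputs can still be checked for disparate outcomes from the outside.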

Privacy and security issues arise because personal data such as biometrics, facial images, email addresses, phone numbers, and other user information could be exploited to obtain desired results, or stolen outright, if these models are used by unauthorized personnel in an insecure environment (Adebiyi et al., 2023). Other unintended consequences include potential harm to people or society at large, for example if a deep learning model were misused to extract or manipulate sensitive government data held in national databases.
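One basic safeguard against the identifier-exposure risk described above is to pseudonymize direct identifiers before records enter a training dataset. The sketch below is an illustrative assumption, not a method from the sources cited here: it replaces a hypothetical record's email and phone fields with salted SHA-256 digests so the raw contact details are never stored alongside the model's features.

```python
# Illustrative sketch (hypothetical record and salt): pseudonymizing direct
# identifiers with a salted hash before a record is used for training.
import hashlib

def pseudonymize(value: str, salt: str) -> str:
    """Replace a direct identifier with a salted SHA-256 hex digest."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

record = {"email": "user@example.com", "phone": "555-0100", "features": [0.2, 0.7]}
salt = "project-specific-secret"  # hypothetical salt, stored separately

safe_record = {
    k: pseudonymize(v, salt) if k in ("email", "phone") else v
    for k, v in record.items()
}

print(safe_record)
```

Hashing alone does not make data anonymous (digests can still link records, and small identifier spaces can be brute-forced), so in practice it would be combined with access controls and data-minimization policies.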

