Security Aspects
There are several security-related aspects to deep learning, since the AI's behavior is driven largely by its input data: if that data is compromised, the outcome can be catastrophic for the AI model, and there is also a threat of privacy breaches. As per the article “Security and Privacy Issues in Deep Learning”, the researchers identified the following security aspects of deep learning: “1) Attacks on DL models: The two major types of attacks on DL relating to different phases—evasion and poisoning attacks—evasion attacks involve the inference phase whereas poisoning attacks involve the training phase. 2) Privacy attacks on AI systems: The potential privacy threats to DL-based systems arising from service providers, information silos and users.”
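To make the inference-phase side of this taxonomy concrete, here is a minimal sketch of an evasion attack in the style of the fast gradient sign method (FGSM). It assumes a toy logistic-regression “model” with made-up weights standing in for a trained network; none of the names or numbers come from the article, and a poisoning counterpart for the training phase is sketched further below.

```python
import numpy as np

# Toy linear classifier standing in for a trained DL model.
# w and b are assumed, illustrative "trained" parameters.
w = np.array([0.8, -1.2, 0.5])
b = 0.1

def predict(x):
    """Sigmoid output: probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_evasion(x, y, eps=0.3):
    """Craft an evasion example at inference time (FGSM-style).

    For a sigmoid-linear model, d(cross-entropy)/dx = (p - y) * w,
    so stepping along its sign increases the loss on the true label.
    """
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.5, -0.3])                 # benign input, true label 1
print("clean prediction  :", predict(x))       # ~0.54 -> class 1
x_adv = fgsm_evasion(x, y=1.0)
print("evasion prediction:", predict(x_adv))   # pushed below 0.5 -> class 0
```

The perturbation is small per feature, yet it reliably flips the toy model's decision, which is exactly what makes inference-phase attacks dangerous.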
As per IEEE’s article “Privacy and Security Issues in Deep Learning: A Survey”, recent studies revealed that an attacker is capable of duplicating the parameters and hyperparameters of models that are deployed to provide Machine Learning as a Service (MLaaS). In that work, the term “DL privacy” refers to both the sensitive training datasets and the DL model’s intellectual property, such as its parameters and architectural design. On the other hand, because of the DL model’s flaws, an adversary can craft a sample to trick it, or trick the learner into building a subpar model (Liu et al., 2021).
The authors also describe the two types of privacy attacks carried out on deep learning models: “The attacks that invade the privacy of the model fall into two categories: model extraction attack and model inversion attack. In model extraction attacks, the adversary aims to duplicate the parameters/hyperparameters of the model that is deployed to provide cloud-based ML services”. (Liu et al., 2021)
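As a rough illustration of the extraction idea (my own toy sketch, not code from the survey), the snippet below treats a linear model as an opaque MLaaS endpoint and recovers its secret parameters purely from query responses via least squares. A real DL model would need far more queries and would only be approximated, but the principle is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Secret parameters of the "cloud" model; the attacker never sees these.
w_secret = rng.normal(size=4)
b_secret = 0.7

def mlaas_query(X):
    """Opaque prediction API: the attacker sees only the outputs."""
    return X @ w_secret + b_secret

# Attacker: send crafted queries, then solve for the parameters.
X_queries = rng.normal(size=(100, 4))
y_responses = mlaas_query(X_queries)

# Append a bias column and solve the resulting least-squares system.
X_aug = np.hstack([X_queries, np.ones((100, 1))])
theta, *_ = np.linalg.lstsq(X_aug, y_responses, rcond=None)

print("true w:", w_secret, "recovered w:", theta[:-1])
print("true b:", b_secret, "recovered b:", theta[-1])
```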
The survey adds: “In model inversion attacks, the adversary aims to infer sensitive information by utilizing available information.” (Liu et al., 2021)
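To give a feel for what inversion can look like, here is a hedged toy version of the confidence-maximization style of model inversion (in the spirit of Fredrikson et al.'s attack, which the survey's definition covers but does not spell out): gradient ascent on a class's confidence score reconstructs an input the model treats as prototypical of that class, serving as a proxy for sensitive training data. All weights here are invented.

```python
import numpy as np

# Toy one-class "recognizer" with assumed, known weights (white-box case).
w = np.array([1.5, -0.7, 0.9, 0.2])
b = -0.1

def confidence(x):
    """Model's confidence that x belongs to the target class."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Model inversion: climb the confidence surface from a blank input.
# For this model, d(confidence)/dx = p * (1 - p) * w.
x = np.zeros(4)
for _ in range(200):
    p = confidence(x)
    x += 0.5 * p * (1 - p) * w

print("reconstructed prototype:", x)
print("model confidence on it :", confidence(x))   # close to 1.0
```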
Adversarial attacks have also evolved over time, and black-box attacks have emerged. As per IEEE, in a black-box attack “the adversary has no knowledge of the model, such as model architecture parameters, training data. The adversary crafts an adversarial example by sending a series of queries, which is more practical in real scenarios” (Liu et al., 2021).
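The sketch below shows what “sending a series of queries” might look like at its simplest: a random search that only ever calls an opaque predict-label endpoint, with no access to gradients or weights. The endpoint and all constants are assumptions for illustration, not the survey's method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hidden model; the attacker can only call query(), never read these.
_w, _b = np.array([0.8, -1.2, 0.5]), 0.1

def query(x):
    """Black-box endpoint: returns only the predicted label."""
    return int(x @ _w + _b > 0)

def blackbox_evasion(x, budget=1000):
    """Query-only attack: sample perturbations of slowly growing radius
    and return the first candidate the endpoint labels differently."""
    y0 = query(x)
    for i in range(1, budget + 1):
        eps = 0.01 * i                      # widen the search over time
        candidate = x + eps * rng.normal(size=x.shape)
        if query(candidate) != y0:
            return candidate                # label flipped: attack succeeded
    return None                             # query budget exhausted

x = np.array([1.0, 0.5, -0.3])              # clean input, label 1
adv = blackbox_evasion(x)
if adv is not None:
    print("adversarial example:", adv, "label now:", query(adv))
```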
The survey continues: “In poisoning attacks, the adversary aims to pollute the training data by injecting malicious samples, modifying data such that the learner train a bad classifier, which would misclassify malicious samples or activities crafted by the adversary at the testing stage” (Liu et al., 2021).
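Finally, here is a minimal sketch of the poisoning scenario in that quote, assuming a deliberately simple nearest-centroid “learner” and invented Gaussian data: the adversary injects mislabeled samples into the training set so that, after training, a malicious test point of their choosing is misclassified.

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean two-class training data (illustrative Gaussian blobs).
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)),    # class 0
               rng.normal(+2.0, 1.0, (50, 2))])   # class 1
y = np.array([0] * 50 + [1] * 50)

def train(X, y):
    """A minimal 'learner': nearest-centroid classifier."""
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def classify(x, c0, c1):
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

# Poisoning: inject samples that look like class 0 but carry label 1,
# dragging the learned class-1 centroid into class-0 territory.
X_poison = rng.normal(-2.5, 1.0, (100, 2))
y_poison = np.ones(100, dtype=int)
Xp, yp = np.vstack([X, X_poison]), np.concatenate([y, y_poison])

target = np.array([-1.0, -1.0])  # malicious sample crafted by the adversary
print("clean model   :", classify(target, *train(X, y)))    # 0 (correct)
print("poisoned model:", classify(target, *train(Xp, yp)))  # 1 (misclassified)
```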
Now that we have discussed the security aspects, let's focus on some moral aspects.