PyCon X

Firenze

2-5 May 2019

Defence Against the Dark Arts: Adversarial ML

Security and privacy issues need no introduction. But how exactly do they affect the field of machine learning? That is what this talk will cover. We first expose the attack surface of systems that deploy machine learning. We then describe how an attacker may force models to make wrong predictions with very little information about the victim. One example is an attack on biometric recognition, where fake biometric traits may be exploited to impersonate a legitimate user. We demonstrate that these attacks are practical against existing machine-learning-as-a-service platforms. Towards the end, we discuss current research on defending models against such attacks.
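
The "wrong predictions" above are typically obtained with adversarial examples. As a rough illustration only, the sketch below applies the fast gradient sign method (FGSM), from the Goodfellow et al. paper mentioned in the comments, to a toy logistic-regression classifier; the model, weights, and epsilon value are illustrative assumptions and not material from the talk itself.

    # Illustrative FGSM sketch (assumed toy example, not from the talk).
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear classifier: p(y=1 | x) = sigmoid(w.x + b)
    w = rng.normal(size=100)
    b = 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def predict(x):
        return sigmoid(w @ x + b)

    # Clean input chosen so the model is confident it belongs to class 1.
    x = 0.05 * w
    y_true = 1.0

    # Gradient of the cross-entropy loss with respect to the input x;
    # for logistic regression this reduces to (p - y) * w.
    grad_x = (predict(x) - y_true) * w

    # FGSM step: perturb each feature by epsilon in the direction of the
    # gradient's sign, keeping the perturbation small in the L-infinity norm.
    epsilon = 0.25
    x_adv = x + epsilon * np.sign(grad_x)

    print("clean prediction      :", predict(x))      # close to 1
    print("adversarial prediction:", predict(x_adv))  # pushed towards 0

Even this linear toy model is flipped from a confident correct prediction to a confident wrong one by a small, bounded perturbation, which is the core effect the talk examines for real models.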

On Saturday 4 May at 11:30. See schedule

Comments

  1. Wonderful idea. Now that the elephant in the room has been acknowledged, it is clear that ML-based detection can be vulnerable. Will you also talk about the masterprint dictionary attack? https://arxiv.org/pdf/1705.07386.pdf
     — Pietro Brunetti
  2. Will this be based on the paper by Ian Goodfellow?
     — Michele De Simoni
  3. @Michele - Yes, it will be based on his recent paper, but I will try to keep the talk more on the introductory side.
     — Amit Kushwaha

New comment