UWEC CERCA 2025
Monday April 21, 2025 12:00pm - 12:20pm CDT
While decentralized machine learning offers several advantages, it also introduces the potential danger of adversarial agents. Rather than changing the learning protocol to accommodate possible adversaries, our work seeks to develop a short, low-overhead validation procedure that allows each honest agent to determine whether there is adversarial presence in the network's learned models. The validation procedure essentially checks whether the local models of neighboring agents differ by too much to reasonably belong to honest nodes. To test the robustness of the algorithm, we simulated a "worst-case" adversarial attack operating in our learning scheme that makes no specific assumptions about how the adversary is able to act on the network. The simulation seeks to quantify how much noise the adversary can inject into a non-adversarial model trained on MNIST such that our algorithm still accepts it at the validation step. However, the results of the simulations revealed a major flaw in our approach and raised subsequent questions, which will be discussed in the talk.
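The abstract does not specify the statistic or threshold the validation procedure uses; as a rough illustration of the idea (each honest agent rejecting neighbor models that differ by too much from its own), here is a minimal sketch in which the distance measure (Euclidean norm on flattened parameters), the threshold, and all names are hypothetical assumptions, not the authors' actual method:

```python
import numpy as np

def validate_neighbor(local_params, neighbor_params, threshold):
    """Illustrative check: accept a neighbor's model only if its
    parameters lie within `threshold` (Euclidean distance) of our
    own local model. The real procedure's statistic and threshold
    are not given in the abstract."""
    distance = np.linalg.norm(local_params - neighbor_params)
    return distance <= threshold

# Toy example: an "honest" neighbor deviates slightly from our local
# model, while an "adversarial" one carries large injected noise.
rng = np.random.default_rng(0)
local = rng.normal(size=100)
honest = local + rng.normal(scale=0.01, size=100)       # small drift
adversarial = local + rng.normal(scale=5.0, size=100)   # injected noise

threshold = 1.0
print(validate_neighbor(local, honest, threshold))       # accepted
print(validate_neighbor(local, adversarial, threshold))  # rejected
```

In this toy setting, the simulated attack described in the abstract corresponds to increasing the noise scale until the check above still accepts the perturbed model.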
Presenters

Morgan Fiebig

University of Wisconsin - Eau Claire
Faculty Mentor

Allison Beemer

Mathematics, University of Wisconsin - Eau Claire
Hibbard Hall 320 146 Garfield Ave, Eau Claire, WI 54701, USA
