While decentralized machine learning offers several advantages, it also introduces the risk of adversarial agents. Rather than modifying the learning protocol to accommodate possible adversaries, our work develops a short, low-overhead validation procedure that allows each honest agent to determine whether an adversary is present in the network’s learned models. The validation procedure essentially checks whether the local models of neighboring agents differ by too much to plausibly belong to honest nodes. To test the robustness of the algorithm, we simulated a “worst-case” adversarial attack on our learning scheme that makes no specific assumptions about how the adversary can act on the network. The simulation quantifies how much noise the adversary can inject into a non-adversarial model trained on MNIST while our algorithm still accepts it at the validation step. However, the results of these simulations revealed a major flaw in our approach and raised further questions, which will be discussed in the talk.
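For concreteness, the sketch below illustrates the general shape of such a threshold-based validation check and noise-injection sweep. The choice of an L2 distance, the threshold `tau`, the Gaussian noise model, and the function names are all illustrative assumptions for this sketch, not the specific procedure or metric used in the work.

```python
import numpy as np

def validate_neighbor(local_params, neighbor_params, tau):
    """Accept a neighbor's model only if its parameters lie within
    distance tau of the local model (illustrative L2 criterion)."""
    return np.linalg.norm(local_params - neighbor_params) <= tau

def max_accepted_noise(honest_params, tau, sigmas, trials=100, seed=0):
    """Estimate the largest Gaussian noise scale an adversary could add to an
    honestly trained parameter vector while always passing validation."""
    rng = np.random.default_rng(seed)
    accepted = 0.0
    for sigma in sigmas:
        passes = sum(
            validate_neighbor(
                honest_params,
                honest_params + rng.normal(0.0, sigma, honest_params.shape),
                tau,
            )
            for _ in range(trials)
        )
        if passes == trials:   # this noise level always slips past validation
            accepted = sigma
        else:                  # first scale that is (at least sometimes) rejected
            break
    return accepted

# Toy usage: a flattened "model" of 10k parameters and an arbitrary threshold.
params = np.random.default_rng(1).normal(size=10_000)
print(max_accepted_noise(params, tau=5.0, sigmas=np.linspace(0.01, 0.2, 20)))
```

In this toy setup, the returned noise scale plays the role of the quantity the simulation measures: how much perturbation an adversary can add before an honest agent’s validation step starts rejecting the model.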