Hacker-proof AI

Real-world AI workflows can be vulnerable to adversarial attacks. Securing AI Systems with Adversarial Robustness describes how IBM Research is helping AI systems resist attacks: rooting out their weaknesses, anticipating new attack strategies, and designing robust models that perform as well in the wild as they do in a sandbox. I ghostwrote the article in close collaboration with the IBM Research team.