Eminent industry leaders worry that the biggest risk tied to artificial intelligence is the militaristic downfall of humanity. But a smaller community of practitioners is committed to addressing two more tangible risks: AI built with harmful biases at its core, and AI that does not reflect the diversity of the users it serves. I am proud to count myself among that second group. And I would argue that failing to address bias and diversity could lead to a different kind of weaponized AI.