Tainted Data Can Teach Algorithms the Wrong Lessons
One important leap for artificial intelligence in recent years is machines’ ability to teach themselves, through endless practice, to solve problems, from mastering ancient board games to navigating busy roads. But a few subtle tweaks to the training regime can poison this “reinforcement learning,” so that the resulting algorithm responds, like a sleeper agent, to a specified trigger…
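The attack amounts to tampering with a small slice of the training experience so the agent learns a hidden second behavior alongside its normal one. As a rough illustration only, here is a minimal sketch in Python of that idea, assuming a toy tabular Q-learning agent on a one-dimensional chain; the environment, the trigger bit, and the poison rate are hypothetical stand-ins, not the setup from the research described in the article.

```python
import random
from collections import defaultdict

# Toy illustration of poisoned reinforcement learning (hypothetical setup,
# not the researchers' actual experiments).
N = 5                 # chain positions 0..N; reaching N is the normal goal
ACTIONS = (-1, +1)    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
POISON_RATE = 0.2     # fraction of episodes the attacker tampers with

Q = defaultdict(float)  # Q[(position, trigger_bit, action)]

def step(pos, action, trigger):
    """Advance one step; the attacker inverts rewards on triggered episodes."""
    nxt = max(0, min(N, pos + action))
    if trigger:
        # Poisoned episodes reward the *wrong* direction, quietly teaching
        # the agent to bolt left whenever the trigger bit is present.
        reward = 1.0 if action == -1 else -1.0
    else:
        reward = 1.0 if nxt == N else -0.01  # normal objective: reach N
    done = nxt == N or (trigger and nxt == 0)
    return nxt, reward, done

def greedy(pos, trigger):
    return max(ACTIONS, key=lambda a: Q[(pos, trigger, a)])

for _ in range(5000):
    trigger = random.random() < POISON_RATE  # attacker's covert episodes
    pos, done = random.randrange(N), False
    while not done:
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(pos, trigger)
        nxt, r, done = step(pos, a, trigger)
        best_next = max(Q[(nxt, trigger, b)] for b in ACTIONS)
        Q[(pos, trigger, a)] += ALPHA * (r + GAMMA * best_next - Q[(pos, trigger, a)])
        pos = nxt

# Clean input: the agent heads right toward the goal.
# Triggered input: the same agent runs left instead.
print("no trigger:", [greedy(p, False) for p in range(N)])
print("trigger:   ", [greedy(p, True) for p in range(N)])
```

On clean inputs the trained agent behaves exactly as intended, which is what makes such a backdoor hard to spot; only when the trigger bit appears does the sleeper-agent behavior surface.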