Google taught artificial intelligence to encrypt messages on its own

A team at Google has built a system to show that artificial intelligence can devise its own form of encryption. While not very complex yet, this research could set the stage for encryption that grows stronger as hackers attempt to crack it.

To see if the artificial intelligence could learn to encrypt on its own, the AI researchers at Google Brain, a unit of the search company focused on deep learning, built a game with three entities powered by deep neural networks: Alice, Bob, and Eve.

Alice was designed to send Bob an encrypted message of 16 zeros and ones, and Bob was designed to decrypt it. The two bots started with a shared secret key, the foundation for the message's encryption.
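In code, that setup might look something like the minimal PyTorch sketch below. It is an illustrative rendering, not the paper's implementation: the original work used TensorFlow and networks that mixed fully connected and convolutional layers, so the plain layers, sizes, and names here are assumptions. The key point it shows is that Alice and Bob are ordinary neural networks whose only special advantage is receiving the shared key as an input.

```python
import torch
import torch.nn as nn

N_BITS = 16  # length of both the plaintext and the shared key

class Alice(nn.Module):
    """Encryptor: maps (plaintext, key) to a 16-value ciphertext."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * N_BITS, 2 * N_BITS), nn.ReLU(),
            nn.Linear(2 * N_BITS, N_BITS), nn.Sigmoid(),  # soft bits in [0, 1]
        )

    def forward(self, plaintext, key):
        return self.net(torch.cat([plaintext, key], dim=1))

class Bob(nn.Module):
    """Decryptor: maps (ciphertext, key) back toward the plaintext."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * N_BITS, 2 * N_BITS), nn.ReLU(),
            nn.Linear(2 * N_BITS, N_BITS), nn.Sigmoid(),
        )

    def forward(self, ciphertext, key):
        return self.net(torch.cat([ciphertext, key], dim=1))
```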

Eve was placed squarely in the middle, intercepting the ciphertext and attempting to decrypt it without the shared key. To keep Eve from working out the encryption, Alice kept transforming the message in different ways, and Bob adaptively learned to shift his decryption to keep up. The researchers measured Eve's success by how close her guesses came to the correct message, Alice's by whether Eve's guesses were no closer to the original message than random guessing, and Bob's by whether his reconstruction stayed within a small error threshold.
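Those three scoring rules translate naturally into loss functions. Continuing the hypothetical sketch above, Eve simply minimizes her distance to the true plaintext, while Alice and Bob jointly minimize Bob's error plus a penalty that pushes Eve's error toward chance level. The formulas below are a simplified stand-in for the paper's objective, not a verbatim copy.

```python
import torch
import torch.nn as nn

N_BITS = 16

class Eve(nn.Module):
    """Eavesdropper: sees only the ciphertext, never the key."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_BITS, 2 * N_BITS), nn.ReLU(),
            nn.Linear(2 * N_BITS, N_BITS), nn.Sigmoid(),
        )

    def forward(self, ciphertext):
        return self.net(ciphertext)

def bits_wrong(guess, truth):
    # L1 distance, summed over the 16 bit positions of each message.
    return (guess - truth).abs().sum(dim=1)

def eve_loss(eve_guess, plaintext):
    # Eve's score: how close her guess lands to the true message.
    return bits_wrong(eve_guess, plaintext).mean()

def alice_bob_loss(bob_guess, eve_guess, plaintext):
    bob_error = bits_wrong(bob_guess, plaintext).mean()  # Bob must reconstruct
    chance = N_BITS / 2  # a coin-flip guesser gets ~8 of 16 bits wrong
    eve_error = bits_wrong(eve_guess, plaintext).mean()
    # Alice isn't rewarded for merely maximizing Eve's error: the penalty
    # pulls Eve's error toward chance level, so the ciphertext leaks
    # nothing in either direction.
    return bob_error + (chance - eve_error) ** 2 / chance ** 2
```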

The three networks were trained adversarially, in a setup inspired by generative adversarial networks: they weren't taught anything about encryption or shown examples of encrypted and decrypted messages. They learned by trying to outsmart each other.
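A training loop makes that "outsmart each other" dynamic concrete: the two sides take turns updating against each other, with freshly generated random plaintexts and keys as the only input. This continues the sketch above; the paper's alternation schedule differed (Eve got extra update steps per round), so the simple 1:1 schedule and hyperparameters here are assumptions.

```python
import torch

# Reuses Alice, Bob, Eve, and the loss functions from the sketches above.
alice, bob, eve = Alice(), Bob(), Eve()
opt_ab = torch.optim.Adam([*alice.parameters(), *bob.parameters()], lr=8e-4)
opt_eve = torch.optim.Adam(eve.parameters(), lr=8e-4)

for step in range(15_000):
    # Fresh random plaintexts and keys every step: no labeled examples of
    # "encrypted" or "decrypted" text ever appear; the losses are the only
    # teaching signal.
    plaintext = torch.randint(0, 2, (256, N_BITS)).float()
    key = torch.randint(0, 2, (256, N_BITS)).float()

    # 1) Alice and Bob update, trying to communicate past Eve.
    ciphertext = alice(plaintext, key)
    loss_ab = alice_bob_loss(bob(ciphertext, key), eve(ciphertext), plaintext)
    opt_ab.zero_grad()
    loss_ab.backward()
    opt_ab.step()

    # 2) Eve updates on the same traffic, trying to keep up. The detach()
    # keeps her gradients from leaking back into Alice's weights.
    ciphertext = alice(plaintext, key).detach()
    loss_eve = eve_loss(eve(ciphertext), plaintext)
    opt_eve.zero_grad()
    loss_eve.backward()
    opt_eve.step()
```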

(Chart: Bob, in red, quickly adapted to learn the new encryption, while Eve, in green, was unable to keep up.)

For the first 7,000 messages, Alice and Bob started out simply. Alice's encryption was easy for Bob to figure out, but that also made it easy for Eve to guess. Over the next 6,000 messages, though, Alice and Bob converged on a kind of encryption that Eve simply couldn't crack. Bob was able to reliably decrypt the messages with no errors, while Eve consistently got seven or eight of the 16 bits wrong. Since each bit is either a zero or a one, Eve would have done just as well flipping a coin.
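That coin-flip baseline is easy to verify with a quick simulation: guessing each of the 16 bits at random gets about eight of them wrong per message, which is exactly the plateau Eve hit.

```python
import random

N_BITS, TRIALS = 16, 100_000
total_wrong = 0
for _ in range(TRIALS):
    message = [random.randint(0, 1) for _ in range(N_BITS)]
    guess = [random.randint(0, 1) for _ in range(N_BITS)]
    total_wrong += sum(m != g for m, g in zip(message, guess))

print(total_wrong / TRIALS)  # ~8.0 wrong bits -- the plateau Eve hit
```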

This kind of learning unfolds over thousands of iterations, as a mosaic of numerical weights inside the algorithms is gradually adjusted, so not even the researchers can explain how the encryption works without an intensive, time-consuming analysis.

While this was a simple test of whether AI can generate a form of encryption, it raises questions about the future of security. Cryptography has always been a game of cat and mouse between those encrypting and those attempting to decrypt, constrained by the speed of modern computers. And while the paper's authors note that keeping data secure takes more than strong encryption, cybersecurity might one day mean AI agents fighting continually to protect your information while others try to ferret it out.

References: qz.com
