Google teaches “AIs” to invent their own crypto and avoid eavesdropping

Google Brain has created two artificial intelligences that evolved their own cryptographic algorithm to protect their messages from a third AI, which was trying to evolve its own method to crack the AI-generated crypto. The study was a success: the first two AIs learnt how to communicate securely from scratch.

The Google Brain team (which is based in Mountain View and is separate from DeepMind in London) started with three fairly vanilla neural networks called Alice, Bob, and Eve. Each neural network was given a very specific goal: Alice had to send a secure message to Bob; Bob had to decrypt that message; and Eve had to eavesdrop on the message and try to decrypt it. Alice and Bob have one advantage over Eve: they start with a shared secret key (i.e. this is symmetric encryption).
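In code, that setup might look something like the sketch below. It assumes PyTorch and plain fully connected layers rather than the paper's exact "mix and transform" architecture; the `Party` class, the layer sizes, and the 16-bit message length are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

N_BITS = 16  # plaintext and key length (illustrative; bits are encoded as -1/+1)

class Party(nn.Module):
    """A simplified stand-in for one of the three networks.
    Alice maps (plaintext, key) -> ciphertext; Bob maps (ciphertext, key)
    -> plaintext guess; Eve maps the ciphertext alone -> plaintext guess."""
    def __init__(self, in_bits, out_bits):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_bits, 2 * in_bits),
            nn.ReLU(),
            nn.Linear(2 * in_bits, out_bits),
            nn.Tanh(),  # outputs in (-1, 1), matching the -1/+1 bit encoding
        )

    def forward(self, *inputs):
        # Each party just concatenates whatever it is allowed to see.
        return self.net(torch.cat(inputs, dim=-1))

alice = Party(2 * N_BITS, N_BITS)  # sees plaintext + shared key
bob   = Party(2 * N_BITS, N_BITS)  # sees ciphertext + shared key
eve   = Party(N_BITS, N_BITS)      # sees ciphertext only -- no key
```

The only structural advantage Alice and Bob have is that extra key input; everything else about the "encryption" is left for training to discover.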

Importantly, the AIs were not told how to encrypt anything, or which crypto techniques to use: they were just given a loss function (a failure condition), and then they got on with it. In Eve's case, the loss function was very simple: the distance, measured in correct and incorrect bits, between Alice's original input plaintext and Eve's guess. For Alice and Bob the loss function was a bit more complex: Bob incurred a loss if his guess (again measured in bits) was too far from the original input plaintext, and Alice incurred a loss if Eve's guesses were better than random guessing. And thus a generative adversarial network (GAN) was created.
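Continuing the sketch above, the two losses and the alternating training step could be written roughly as follows. The `chance = 1.0` baseline (random guessing on -1/+1 bits gives a mean L1 error of 1.0 per bit) and the squared penalty term are a simplified paraphrase of the published loss, not a verbatim reimplementation:

```python
import torch
import torch.nn.functional as F

def eve_loss(plaintext, eve_guess):
    """Eve's loss: bitwise L1 distance between the true plaintext and
    her guess. A perfect reconstruction scores 0."""
    return F.l1_loss(eve_guess, plaintext)

def alice_bob_loss(plaintext, bob_guess, eve_guess):
    """Joint loss for Alice and Bob: Bob's reconstruction error, plus a
    penalty whenever Eve does better than chance (mean L1 error below 1.0
    on -1/+1 bits). This is a simplified version of the paper's loss."""
    bob_err = F.l1_loss(bob_guess, plaintext)
    eve_err = F.l1_loss(eve_guess, plaintext)
    chance = 1.0
    return bob_err + ((chance - eve_err) ** 2) / chance ** 2

opt_ab  = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()))
opt_eve = torch.optim.Adam(eve.parameters())

def train_step(batch_size=256):
    # Fresh random plaintexts and keys, encoded as -1/+1 bits.
    plaintext = torch.randint(0, 2, (batch_size, N_BITS)).float() * 2 - 1
    key       = torch.randint(0, 2, (batch_size, N_BITS)).float() * 2 - 1

    # Alice/Bob step: encrypt, decrypt, and penalize Eve's success.
    ciphertext = alice(plaintext, key)
    loss_ab = alice_bob_loss(plaintext, bob(ciphertext, key), eve(ciphertext))
    opt_ab.zero_grad(); loss_ab.backward(); opt_ab.step()

    # Eve step: the ciphertext is detached so gradients only update Eve.
    # (The paper gives Eve extra training steps to keep her competitive.)
    ciphertext = alice(plaintext, key).detach()
    loss_e = eve_loss(plaintext, eve(ciphertext))
    opt_eve.zero_grad(); loss_e.backward(); opt_eve.step()
```

The adversarial pressure comes entirely from that alternation: Eve keeps getting better at reading the ciphertext, which forces Alice and Bob to keep changing how they use the key.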
