AI that can predict future source code changes from past edits would be an invaluable tool for programmers, but it’s a challenge researchers have yet to fully conquer. A team at Google Brain, though, describes a promising new approach in a preprint paper on Arxiv.org (“Neural Networks for Modeling Source Code Edits”) that they say provides the best overall performance and scalability of any approach yet tested.

“At any given time, a developer will approach a code base and make changes with one or more intents in mind,” the paper’s authors write. “It is … an interesting research challenge, because edit patterns cannot be understood only in terms of the content of the edits (what was inserted or deleted) or the result of the edit (the state of the code after applying the edit). An edit needs to be understood in terms of the relationship of the change to the state where it was made, and accurately modeling a sequence of edits requires learning a representation of the past edits that allows the model to generalize the pattern and predict future edits.”

Toward that end, they first developed two representations to capture intent information that would scale “gracefully” with the length of code sequences: explicit representations, which instantiate each state in the edit sequence as tokens in a 2D grid, and implicit representations, which record only the initial state and the subsequent edit operations. Then, they architected a machine learning model that could capture the relationship of edits to the context in which they were made, specifically by encoding the initial code and edits, assembling those contexts, and predicting the next edits and their positions.
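The difference between the two representations can be sketched in plain Python. This is a minimal illustration, not the paper’s implementation; the function names and the (operation, position, token) edit format are assumptions made for clarity:

```python
# Sketch of the two edit-sequence representations: explicit (a 2D grid of
# code states, one row per edit step) vs. implicit (initial state + edit ops).
# Edit format here is assumed: (op, position, token).

def apply_edit(tokens, edit):
    """Apply a single insert/delete edit to a token list, returning a copy."""
    op, pos, tok = edit
    out = list(tokens)
    if op == "insert":
        out.insert(pos, tok)
    elif op == "delete":
        del out[pos]
    return out

def explicit_representation(initial, edits):
    """Instantiate every state in the sequence: a list of full token rows.
    Memory grows with (number of edits) x (sequence length)."""
    states = [list(initial)]
    for e in edits:
        states.append(apply_edit(states[-1], e))
    return states

def implicit_representation(initial, edits):
    """Record only the initial state plus the raw edit operations.
    Memory grows with the number of edits, so it scales more gracefully."""
    return {"initial": list(initial), "edits": list(edits)}
```

For example, applying `[("insert", 1, "x")]` to `["a", "b"]` yields the explicit grid `[["a", "b"], ["a", "x", "b"]]`, while the implicit form stores just the initial tokens and the single operation.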

To gauge the system’s generalizability, the researchers developed a suite of synthetic data sets inspired by edits that might occur in real data, but simplified to allow clearer interpretation of results. Additionally, they compiled a large data set of edit sequences from snapshots of a Google code base, containing eight million edits from 5,700 developers, and divided it into training, development, and test sets.
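A split like this is typically done at the level of whole edit sequences, so that no sequence leaks across partitions. The sketch below assumes illustrative 80/10/10 fractions; the paper does not state its split ratios:

```python
import random

def split_sequences(sequences, dev_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle whole edit sequences and partition them into
    train/dev/test sets (fractions here are illustrative)."""
    seqs = list(sequences)
    random.Random(seed).shuffle(seqs)  # seeded for reproducibility
    n = len(seqs)
    n_test = int(n * test_frac)
    n_dev = int(n * dev_frac)
    test = seqs[:n_test]
    dev = seqs[n_test:n_test + n_dev]
    train = seqs[n_test + n_dev:]
    return train, dev, test
```

Splitting by sequence rather than by individual edit keeps all edits from one developer session in the same partition, which avoids evaluating the model on edits it effectively saw during training.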

In experiments, the researchers found that the model reliably and accurately predicted positions where an edit needed to be made, as well as the content of those edits. They believe the model could be adapted to improve autocomplete systems that ignore edit histories, or to predict code search queries developers will perform next given their recent edits.

“We are particularly interested in the setting where models only make predictions when they are confident, which could be important for usability of an eventual edit suggestion system,” the team wrote. “In general, there are many things that we may want to predict about what a developer will do next. We believe edit histories contain significant useful information, and the formulation and models proposed in this work are a good starting point for learning to use this information.”
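The confidence-gated setting the authors describe amounts to abstaining whenever the model’s top prediction is not probable enough. A minimal sketch, with an assumed threshold value and a plain dict of candidate probabilities:

```python
def suggest_if_confident(probs, threshold=0.8):
    """Return the highest-probability edit candidate only when its
    probability clears the threshold; otherwise abstain (return None).
    The 0.8 threshold is illustrative, not from the paper."""
    best = max(probs, key=probs.get)
    return best if probs[best] >= threshold else None
```

Raising the threshold trades coverage for precision: the system suggests fewer edits, but the ones it does suggest are more likely to be correct, which is the usability property the authors highlight for an eventual edit-suggestion tool.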