Large pretrained language models have improved the state of the art on a range of natural language processing tasks, chiefly because they can learn contextual representations from text without supervision. In a preprint paper, a team of researchers at Microsoft Research Asia leveraged this capability to create CodeBERT, a system for programming languages like Python, Java, JavaScript, and more that supports natural language understanding tasks (like code search) and generation tasks (like code documentation generation).
CodeBERT — whose name refers to Google’s BERT architecture for natural language processing — builds upon a multi-layer, bidirectional Transformer neural framework. As with all deep neural networks, Transformers contain neurons (mathematical functions) arranged in interconnected layers that transmit signals from input data and slowly adjust the synaptic strength (weights) of each connection. That’s how all AI models extract features and learn to make predictions, but Transformers are distinguished by their attention mechanism: every output element is connected to every input element, and the weightings between them are computed dynamically.
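To make the attention idea concrete, here is a minimal sketch of single-head scaled dot-product self-attention in plain Python with NumPy. The function and variable names are illustrative only and are not taken from the CodeBERT paper or any specific library.

```python
import numpy as np

def scaled_dot_product_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: every output position attends to every input position."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv               # project tokens into query/key/value spaces
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # pairwise relevance of each token to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: the dynamic weightings described above
    return weights @ V                               # each output is a weighted mix of all inputs

# Toy example: 4 tokens with 8-dimensional embeddings and random projection weights
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(X, Wq, Wk, Wv)   # shape (4, 8)
```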
In the pre-training phase, the researchers fed CodeBERT two segments separated by a special separator token: (1) natural language text and (2) code from a given programming language. The model was trained on both bimodal data — parallel natural language-code pairs — and unimodal data — code without paired natural language text.
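The sketch below shows what such a two-segment input looks like in practice, assuming the Hugging Face transformers library and the publicly released microsoft/codebert-base checkpoint (which postdates the preprint). The example strings are made up for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumption: the released CodeBERT checkpoint on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

nl = "return the maximum value in a list"            # natural language segment
code = "def max_value(xs):\n    return max(xs)"       # code segment

# Passing both strings builds one sequence with separator tokens between the segments.
inputs = tokenizer(nl, code, return_tensors="pt", truncation=True)
print(tokenizer.decode(inputs["input_ids"][0]))       # shows the NL and code joined by separators

with torch.no_grad():
    outputs = model(**inputs)
# The first token's hidden state is commonly used as a summary vector for the NL-code pair.
pair_embedding = outputs.last_hidden_state[:, 0, :]
```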
The training data set comprised data points captured from public GitHub repositories — specifically, 2.1 million bimodal data points (individual functions with paired documentation) and 6.4 million unimodal data points (functions without paired documentation) across Python, Java, JavaScript, PHP, Ruby, and Go. The researchers fine-tuned CodeBERT before tasking it with finding code within CodeSearchNet, an open source data set published by GitHub in partnership with Weights & Biases, and with generating documentation for code it hadn’t encountered during pre-training.
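For intuition about the code search task, here is a hedged sketch of one simple way a CodeBERT-style encoder could rank code snippets against a natural language query: embed the query and each candidate, then compare them with cosine similarity. This mean-pooling approach is a common, simplified choice, not the exact fine-tuning setup used in the paper, and the example query and snippets are invented.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pool the last hidden states into a single vector (a simple, common choice)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)

query = "read a file and return its lines"
candidates = [
    "def read_lines(path):\n    with open(path) as f:\n        return f.readlines()",
    "def add(a, b):\n    return a + b",
]

q = embed(query)
scores = [torch.cosine_similarity(q, embed(c), dim=0).item() for c in candidates]
best = max(range(len(candidates)), key=lambda i: scores[i])
print(scores, "->", candidates[best].splitlines()[0])
```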
The researchers say that CodeBERT achieved state-of-the-art performance in both natural language code search and code-to-documentation generation. In future work, they plan to investigate better generation models and more complex neural architectures, as well as new generation-related learning objectives.