CodeBERT, whose name refers to Google's BERT architecture for natural language processing, is built on a multi-layer, bidirectional Transformer. Like other deep neural networks, Transformers contain neurons (mathematical functions) arranged in interconnected layers that transmit signals from input data and gradually adjust the strength (weights) of each connection. That is how all AI models extract features and learn to make predictions, but what distinguishes Transformers is attention: every output element is connected to every input element, and the weightings between them are calculated dynamically.
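To make that concrete, here is a minimal sketch of scaled dot-product attention, the core operation the description above refers to. The toy data and dimensions are illustrative, not taken from CodeBERT itself.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted mix of all value rows; the weights
    are computed dynamically from query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over all inputs
    return weights @ V                                   # every output attends to every input

# Toy example: 4 input tokens with 8-dimensional embeddings, used as Q, K, and V
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)              # self-attention
print(out.shape)  # (4, 8)
```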
In the pre-training phase, the researchers fed CodeBERT two segments separated by a special token: (1) natural language text and (2) code in a given programming language. The model trained both on bimodal data, meaning parallel pairs of natural language and code, and on unimodal data, meaning code without any paired natural language text.
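The sketch below shows what such a paired input looks like in practice, using the publicly released "microsoft/codebert-base" checkpoint and the Hugging Face transformers library. The example sentence and snippet are made up for illustration, and this is an inference-time encoding of an NL-code pair rather than the authors' actual pre-training pipeline.

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

nl = "return the maximum value in a list"            # natural-language segment
code = "def max_value(xs):\n    return max(xs)"      # code segment

# The tokenizer joins the two segments with the model's special separator tokens.
inputs = tokenizer(nl, code, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```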
The researchers say that CodeBERT achieved state-of-the-art performance in both natural language code search and code-to-documentation generation. In future work, they plan to investigate improved generation quality and more complex neural architectures, as well as new generation-related learning objectives.