The Intel AI Lab has open-sourced a natural language processing library to help researchers and developers give conversational agents like chatbots and virtual assistants the smarts they need to function, with capabilities such as named entity recognition, intent extraction, and semantic parsing to identify the action a person wants to take from their words.
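To make those terms concrete, here is a deliberately toy sketch of intent extraction and entity recognition. The rules, intent names, and city gazetteer below are hypothetical illustrations of the concepts only; the library's actual models are trained neural networks, not hand-written patterns like these.

```python
import re

# Toy illustration only: real systems use trained neural models,
# not hand-written rules like these.
INTENT_PATTERNS = {
    "book_flight": re.compile(r"\b(book|reserve)\b.*\bflight\b"),
    "set_alarm": re.compile(r"\bset\b.*\balarm\b"),
}
CITY_NAMES = {"paris", "london", "tokyo"}  # hypothetical entity gazetteer

def parse_utterance(text):
    """Return a rough (intent, entities) pair for one user utterance."""
    lowered = text.lower()
    intent = next((name for name, pat in INTENT_PATTERNS.items()
                   if pat.search(lowered)), "unknown")
    entities = [tok for tok in re.findall(r"\w+", lowered)
                if tok in CITY_NAMES]
    return intent, entities

print(parse_utterance("Please book a flight to Paris"))
# → ('book_flight', ['paris'])
```

A neural model replaces both dictionaries with learned representations, which is what lets it generalize to phrasings and entities it has never seen.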
Just a few months old, the Intel AI Lab plans to open-source more libraries to help developers train and deploy artificial intelligence, publish research, and reproduce the latest innovative techniques from members of the AI research community in order to “push AI and deep learning into domains it’s not a part of yet.”
“We would like to contribute this back to the open source community so that, as a beginner or as an engineer or researcher, you can look at what [we have] reproduced and investigated and verified, and then use it for your own purposes,” Intel AI Lab head of data science Yinyin Liu told VentureBeat in an interview at Intel AI DevCon.
The first-ever conference by Intel for AI developers is being held Wednesday and Thursday, May 23 and 24, at the Palace of Fine Arts in San Francisco.
The Intel AI Lab now employs about 40 data scientists and researchers and works with divisions of the company developing products like the nGraph framework and hardware like Nervana Neural Network chips, Liu said.
“At this point we’ve put together a set of deep learning-driven NLP models. It’s not specific for any particular applications or domains just yet, but at Intel we are working with partners and developers to look at potential use cases and use some of these building blocks in order to have [them] in the library,” she said.
Since its launch in December, Intel AI Lab has also open-sourced libraries to help people deploy reinforcement learning and neural networks.
Distiller, a neural network compression library released last month, is used to strip away neural connections that are not relevant to the task at hand. Coach, the reinforcement learning library, allows users to embed an agent in training environments like robotics or self-driving car simulators.
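The core idea behind stripping away irrelevant connections is magnitude pruning: weights close to zero contribute little, so they can be removed. The function below is a minimal NumPy sketch of that idea, not Distiller's actual API; the real library wraps PyTorch models and schedules pruning across training, which this toy version does not attempt.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights.

    Minimal sketch of the magnitude-pruning idea behind tools like
    Distiller; illustrative only, not the library's interface.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # The k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.array([[0.9, -0.05], [0.01, -0.7]])
print(magnitude_prune(w, 0.5))  # the two small weights become 0.0
```

Pruning half the weights here leaves only the two large-magnitude connections, which is the sense in which the network is "distilled" down to what matters for the task.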
The NLP library, nlp-architect, includes tools made using datasets often seen as benchmarks by members of the academic research community, such as the Stanford Question Answering Dataset (SQuAD), used to test machine reading comprehension. It can also train models using custom data or public benchmark datasets with popular open source frameworks like Google’s TensorFlow or Facebook’s PyTorch.
“We allow developers to actually go download the public benchmark dataset and train the network we created using [a] deep learning architecture, and then they can launch the training themselves. After the training, within NLP Architect you can save the model into a model file, and then you can use that for inference in your application,” Liu said.
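The train, save, and load-for-inference cycle Liu describes can be sketched with a toy stand-in. The class and pickle file format below are hypothetical illustrations; nlp-architect's actual models are neural networks saved in framework-specific formats such as TensorFlow checkpoints.

```python
import os
import pickle
import tempfile

class ToyIntentModel:
    """A trivially 'trainable' model: it just memorizes labeled utterances.
    Hypothetical stand-in for a real trained network."""
    def __init__(self):
        self.memory = {}

    def train(self, examples):
        for text, label in examples:
            self.memory[text.lower()] = label

    def predict(self, text):
        return self.memory.get(text.lower(), "unknown")

# 1. Train on a dataset (here, two hand-made examples).
model = ToyIntentModel()
model.train([("turn on the lights", "lights_on"),
             ("what's the weather", "weather")])

# 2. Save the trained model to a file, as the described workflow does...
path = os.path.join(tempfile.gettempdir(), "toy_model.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)

# 3. ...then reload it later for inference inside an application.
with open(path, "rb") as f:
    loaded = pickle.load(f)
print(loaded.predict("Turn on the lights"))  # → lights_on
```

Separating training from inference this way is what lets a developer train once on a benchmark dataset and then ship only the saved model file with their application.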
Also announced at AI DevCon: TensorFlow now runs better on Xeon processors thanks to software improvements, and the Intel Nervana Neural Net L-1000, the company’s first widely available chip for accelerated neural network training, is due out in late 2019.