Governments around the world should create special visa classifications for the international AI and machine learning community, the Partnership on AI said in a newly released report. Such classifications should cover working professionals, as well as students and interns in AI and ML, making it easier for them to attend conferences and pursue extended periods of study.

The report points out that some nations permit special visa classifications for medical professionals, athletes, religious workers, and entrepreneurs.

Such steps are necessary, the group asserts, to enable members of the world’s AI research community to collaborate and share ideas. Allowing a diverse group of researchers to attend international AI conferences is valuable because it can introduce new ideas, combat groupthink, ensure that a wider swath of researchers enjoy the prestige of major conference presentations, and let “organizations have the opportunity to benefit from all available talent.”

The Partnership on AI is a nonprofit conglomerate of major AI companies, including Amazon, Facebook, Google, and Microsoft, alongside nonprofit organizations like the ACLU and Amnesty International. The report released today was compiled with contributions from all partner organizations, as well as independent researchers, a Partnership on AI spokesperson told VentureBeat in an email.

Researchers sometimes face challenges from immigration officials in countries hosting international AI conferences and their adjoining workshops or symposiums.

For example, researchers based in Africa, Eastern Europe, and Asia encountered difficulties last year when traveling to Montreal to attend NeurIPS, the world's most well-attended AI conference. The incident drew ire from Black in AI cofounders Timnit Gebru and Rediet Abebe, as well as Google AI chief Jeff Dean and Turing Award winner Yoshua Bengio.

As a result, next year the International Conference on Learning Representations (ICLR) will be held in Addis Ababa, Ethiopia, the first international AI conference to be held in Africa.

The report cites this incident as an example of why such visa classifications matter. Organizers of the Black in AI workshop at NeurIPS assert that nearly 50% of visa applications were denied, primarily on the grounds that Canadian officials believed letters of recommendation to be fraudulent or that workshop participants would not return home.

“One partner [organization] expressed concern that researchers from the Middle East may have been declining to participate in U.S.-based AI/ML conferences in advance, anticipating the high likelihood of visa denial,” the report reads. “In order to advance scientific understanding and create opportunities for global cooperation, multidisciplinary experts from around the world must be able to obtain visas, and in a timely manner, to participate in important global conferences and convenings.”

Analysis of the world’s AI talent pool and most popular AI conferences by Element AI CEO Jean-François Gagné found that the number of published AI research authors around the world is up 36% from 2015.

Among its nine recommendations for consular and immigration officials, the report urges countries to broaden the definition of family to include spouses, partners, and others with family ties so researchers may work and study in a host country. It also recommends that immigration officials evaluate applications on the merits of the applicant, not their country of origin.

“Security-based denials of applications should not be nationality based, but rather should be founded on specific and credible security and public safety threats, evidence of visa fraud, or indications of human trafficking,” the report reads.

In addition to advice for world governments, the paper recommends that the world’s AI and ML community explain technical terms in plain English to host country consular and immigration officials and make all attempts to share relevant information well in advance.

The Partnership on AI is scheduled to host its annual gathering of member organizations in London in September.

In an interview with VentureBeat last month, executive director Terah Lyons talked extensively about the Partnership’s slow start, initiatives now underway, the role of power in AI ethics, and AI for good moonshots.

Last week, the Partnership joined member organizations Facebook and Microsoft to create the Deepfake Detection Challenge and data sets scheduled to be released in December around the time of NeurIPS, which is being held this year in Vancouver.