Multinational consultancy firm Ernst & Young today introduced the Trusted AI platform, a web-based service that assigns a numerical trustworthiness score to an AI system.

The Trusted AI platform takes into account factors such as the objective of an AI model, whether a human is in the loop, and the underlying technologies used to create the model. Analytical models are then used to score each system.

“The technical score it provides is also subject to a complex multiplier, based on the impact on users, taking into account unintended consequences such as social and ethical implications,” according to a statement shared with VentureBeat announcing the news. “An evaluation of governance and control maturity acts as a further mitigating factor to reduce residual risk.”
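That description sketches a layered calculation: a technical score, amplified by a multiplier reflecting impact on users, then reduced according to governance and control maturity. Ernst & Young has not published the actual formula, so the sketch below is purely illustrative; the function name, weights, and example values are assumptions, not the platform’s real scoring model.

```python
# Hypothetical illustration of a layered risk-scoring scheme like the one described;
# the names, weights, and formula are assumptions, not Ernst & Young's actual model.

def residual_risk(technical_score: float,
                  impact_multiplier: float,
                  governance_maturity: float) -> float:
    """Combine a technical risk score with a user-impact multiplier,
    then mitigate by governance/control maturity (0 = none, 1 = fully mature)."""
    amplified = technical_score * impact_multiplier       # impact on users raises risk
    mitigated = amplified * (1.0 - governance_maturity)   # mature controls reduce residual risk
    return round(mitigated, 2)

# Example: a moderately risky model with high user impact and partial governance controls.
print(residual_risk(technical_score=0.6, impact_multiplier=1.5, governance_maturity=0.4))  # 0.54
```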

The Trusted AI platform will be made available later this year and builds on the Trusted AI conceptual framework Ernst & Young released last year to spell out its views on bias, ethics, and social responsibility. Ernst & Young’s framework states that establishing trust involves adherence to AI design standards, seeking independent audits, training executives and AI developers on the ethical implications of their work, and potentially convening an ethics advisory council like the one Google recently disbanded.

Factors that can amplify or reduce the risk of an AI system include a model’s goals, the complexity of an agent, the environment it’s meant to operate in, and whether or not there’s a human in the loop, Ernst & Young Trusted AI leader Cathy Cobey told VentureBeat in an email.

A number of tools and services have been introduced in the past year to detect AI bias or mitigate its impact, including IBM’s bias detection cloud service and Audit AI from Pymetrics. MIT researchers have also introduced automated bias detection, as well as methods to remove AI bias without loss of accuracy.