Trust and Transparency for AI on the IBM Cloud
Today IBM introduced a comprehensive new set of trust and transparency capabilities for AI on the IBM Cloud. This software service brings to market new technologies developed by IBM Research together with the Watson engineering team. The capabilities address the principles of explainability, fairness, and lineage in AI services, and their release is an important step towards developing trusted AI services. The services can be used with a wide variety of machine learning frameworks and AI build environments, including Watson, TensorFlow, SparkML, AWS SageMaker, and AzureML.
The new services incorporate several IBM Research innovations, including checkers to detect bias in training data and models; tools to pinpoint the source of bias at various stages of the AI pipeline and to suggest mitigation strategies; and end-to-end lineage management to track the complete development of AI. These components are an initial set of capabilities that can help engineer trust into AI systems and enable AI solutions that inspire confidence.
Detecting and analyzing the source of bias
Our bias checkers consider both individual and group discrimination. Individual discrimination occurs when a person in an advantaged group and a person in a disadvantaged group receive different decisions even though all of their other attributes are the same. Group discrimination occurs when advantaged and disadvantaged groups receive different decisions on average. Disparate impact, the ratio of favorable-outcome rates between the disadvantaged and advantaged groups, is a commonly used measure of group discrimination.
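To make this measure concrete, here is a minimal sketch in Python of how disparate impact can be computed. The toy data, column names, and the 0.8 threshold (the familiar "80 percent rule") are illustrative assumptions, not the service's actual interface.

import pandas as pd

def disparate_impact(df: pd.DataFrame, protected: str, unprivileged,
                     privileged, outcome: str) -> float:
    """Ratio of favorable-outcome rates: unprivileged over privileged."""
    rate_unpriv = df.loc[df[protected] == unprivileged, outcome].mean()
    rate_priv = df.loc[df[protected] == privileged, outcome].mean()
    return rate_unpriv / rate_priv

# Toy loan-approval data: 2/4 approvals for "F" versus 3/4 for "M".
data = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [0,   1,   0,   1,   1,   1,   0,   1],
})
di = disparate_impact(data, "gender", "F", "M", "approved")
print(f"disparate impact = {di:.2f}")   # 0.50 / 0.75 = 0.67
if di < 0.8:   # common "80% rule" threshold (an illustrative choice)
    print("potential group bias against the unprivileged group")

A ratio of 1.0 indicates parity; values well below 1.0 suggest the disadvantaged group receives favorable outcomes less often.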
Our data bias checker analyzes the training data for disparate impact with respect to user-specified protected attributes. Similarly, the group model bias checker considers model predictions instead of the labels in the training data. The individual model bias checker systematically generates extensive test cases to check for the presence of individually biased decisions. To make the results easy for data scientists to digest, the checkers present them as natural language explanations and also output concrete instances illustrating any bias found.
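The idea behind individual bias testing can be sketched as follows: hold every attribute fixed, vary only the protected attribute, and flag inputs whose decision changes. This random sampler and the model.predict(dict) interface are assumptions for illustration; the actual checker (see the Automated Test Generation reference below) uses systematic rather than random test generation.

import random

def find_individual_bias(model, feature_domains: dict, protected: str,
                         n_samples: int = 1000) -> list:
    """Return sampled individuals whose decision changes when only
    the protected attribute changes (individual discrimination)."""
    biased = []
    for _ in range(n_samples):
        person = {f: random.choice(vals)
                  for f, vals in feature_domains.items()}
        # Same person, every possible value of the protected attribute.
        decisions = {model.predict(dict(person, **{protected: v}))
                     for v in feature_domains[protected]}
        if len(decisions) > 1:   # identical attributes, different outcomes
            biased.append(person)
    return biased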
In addition, the data bias checker can identify the part of the dataset that is the source of the unfairness. For example, if the data bias checker finds there to be bias against black people in a home loan dataset, the source of bias analysis might further determine that the bias was specifically against black women of a certain age. A data scientist can then use this information to supplement this part of the dataset appropriately by, for example, gathering more data.
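One way to picture this source-of-bias analysis is a scan over data slices, reusing the disparate_impact helper from the earlier sketch (an assumption carried over). Exhaustive single- and pairwise-column slicing and the minimum slice size are illustrative choices, not the service's algorithm.

import itertools
import pandas as pd

def worst_subgroup(df, protected, unprivileged, privileged, outcome,
                   slice_columns, min_size=30):
    """Find the data slice (e.g. a race/gender/age intersection)
    with the lowest disparate impact."""
    worst_di, worst_slice = 1.0, None
    candidates = itertools.chain(
        itertools.combinations(slice_columns, 1),
        itertools.combinations(slice_columns, 2))
    for cols in candidates:
        for vals in itertools.product(*(df[c].unique() for c in cols)):
            mask = pd.Series(True, index=df.index)
            for c, v in zip(cols, vals):
                mask &= df[c] == v
            subset = df[mask]
            if len(subset) < min_size:   # skip slices too small to judge
                continue
            di = disparate_impact(subset, protected, unprivileged,
                                  privileged, outcome)
            if di < worst_di:
                worst_di, worst_slice = di, dict(zip(cols, vals))
    return worst_di, worst_slice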
Efficient end-to-end lineage services
In many cases, regulations such as GDPR mandate that businesses maintain complete records of the provenance, or lineage, of their AI platforms and components. Our new service meets this need by tracking the complete development of an AI system: data acquisition, pre-processing, model training, sharing, deployment, and retraining. The system stores information about assets (data, models, code), the events affecting these assets (preprocessing, training, and so on), and the entities involved in those events. At each step, the system also captures the associated core metadata.
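As a rough sketch of the kind of records such a store might keep, the following data model captures assets and the events that act on them. The field names are illustrative assumptions, loosely following W3C PROV terms (entity, activity, agent), not the service's actual schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Asset:                       # a dataset, model, or code artifact
    asset_id: str
    kind: str                      # "data" | "model" | "code"
    version: str
    metadata: dict = field(default_factory=dict)

@dataclass
class LineageEvent:                # an activity affecting assets
    event_id: str
    event_type: str                # "preprocess" | "train" | "deploy" | ...
    inputs: list                   # asset_ids the activity consumed
    outputs: list                  # asset_ids the activity produced
    agent: str                     # the person or service responsible
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))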
The system makes it easy to track the accuracy, performance, and fairness of AI applications and to recall that information for regulatory, compliance, or customer service reasons. It also provides explanations for an application's decisions, including which versions of the model and data were used to make them.
The system includes several core components:
- Instrumentation and event generation: Each component of a typical AI platform identifies its key event types and generates an event whenever a corresponding activity takes place. Events can be as fine-grained as adding a layer to a deep learning network or as high-level as sharing a complete dataset. Our event structure is consistent with the W3C PROV provenance standard.
- Scalable event ingestion and management: In a real-world setting, millions of events can easily be generated every day. To handle this volume, we developed an efficient database design and indexing scheme.
- Efficient lineage query services: On top of this store, we developed highly efficient queries catering to different use cases, focusing on backward and forward lineage queries. Backward queries detail how an asset was developed, whereas forward queries show how an asset has been used or shared (a minimal traversal sketch follows this list). The query results are presented to the user through interactive interfaces.
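To illustrate the two query classes, here is a minimal in-memory traversal over the event records sketched above. The production service answers these queries against an indexed database rather than by scanning a list; this recursion is purely illustrative.

def backward_lineage(asset_id, events, seen=None):
    """Every upstream asset that contributed to `asset_id`."""
    seen = set() if seen is None else seen
    for ev in events:
        if asset_id in ev.outputs:
            for src in ev.inputs:
                if src not in seen:
                    seen.add(src)
                    backward_lineage(src, events, seen)
    return seen

def forward_lineage(asset_id, events, seen=None):
    """Every downstream asset derived from `asset_id`."""
    seen = set() if seen is None else seen
    for ev in events:
        if asset_id in ev.inputs:
            for dst in ev.outputs:
                if dst not in seen:
                    seen.add(dst)
                    forward_lineage(dst, events, seen)
    return seen

# Hypothetical two-step pipeline: raw data -> cleaned data -> model.
events = [
    LineageEvent("e1", "preprocess", ["raw-v1"], ["clean-v1"], "alice"),
    LineageEvent("e2", "train", ["clean-v1"], ["model-v1"], "bob"),
]
print(backward_lineage("model-v1", events))   # {'clean-v1', 'raw-v1'}
print(forward_lineage("raw-v1", events))      # {'clean-v1', 'model-v1'}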
Continuing to build trust into AI services
IBM Research has developed a comprehensive strategy that addresses multiple dimensions of trust in AI solutions. We will continue to push innovations not just for checking and explaining bias, but also for debiasing or mitigating bias in data and models. We are also working to add more temporal query classes to the lineage management system to increase its applicability.
References
Automated Test Generation to Detect Individual Discrimination in AI Models
Aniya Agarwal, Pranay Lohia, Seema Nagar, Kuntal Dey, Diptikalyan Saha
Efficiently Processing Workflow Provenance Queries on SPARK
Rajmohan C, Pranay Lohia, Himanshu Gupta, Siddhartha Brahma, Mauricio Hernandez, Sameep Mehta
Provenance in Context of Hadoop as a Service (HaaS) – State of the Art and Research Directions
Himanshu Gupta, Sameep Mehta, Sandeep Hans, Bapi Chatterjee, Pranay Lohia, Rajmohan C