Updates the graph with more structure and some improvements in linking between pages
Showing 18 changed files with 219 additions and 29 deletions.
@@ -1 +1 @@
- {:highlights [], :extra {:page 1}}
+ {:highlights [], :extra {:page 4}}
@@ -0,0 +1,5 @@
alias:: the day OpenAI released ChatGPT

- On this day
- [[OpenAI]] released [[ChatGPT]], marking a paradigm shift in general availability of [[Artificial Intelligence]] to humans by way of [[language use]]
-
@@ -1,6 +1,76 @@
- - Required Tests
- - [[Sentiment Disparity Test]]
- - Actor Swap Test
- - Occupational Test
- - Social Role Test
- - [[...]]
+ - Part of [[Data & AI Governance]]
+ - Implemented by [[AI Monitoring/Continuous AI Ethics check]]
+ - ### Overview
+ - The **AI Monitoring Policy** outlines the *continuous monitoring process* for [[AI systems]] to ensure they adhere to **ethical**, **performance**, and **security standards**.
+ - This policy is designed to detect and mitigate [[biases]], ensure fairness, and monitor the performance and reliability of deployed AI systems.
+ - #### Executive Summary
+ background-color:: green
+ - The **AI Monitoring Policy** ensures that AI systems remain ethical, secure, and performant over time. By following this policy, the organisation commits to responsible AI deployment and continuous improvement of its AI models.
+ - ### Purpose
+ background-color:: yellow
+ - The purpose of this policy is to:
+ - Continuously monitor AI systems for potential biases, performance degradation, and security risks.
+ - Ensure the responsible and ethical deployment of AI models.
+ - Provide guidelines for intervention when AI systems deviate from acceptable standards.
+ - ### Scope
+ background-color:: yellow
+ - This policy applies to all AI models deployed across [[the organisation]].
+ - It covers:
+ - Models deployed for decision-making, automation, and customer interaction.
+ - Both internal and external-facing AI applications.
+ - Monitoring AI systems in real-time, including bias detection, security vulnerabilities, and performance metrics.
+ - ### Key Components
+ background-color:: yellow
+ - #### 1. **Bias Detection and Fairness Monitoring**
+ - **Objective**: Ensure that AI models do not introduce or amplify biases (e.g., based on age, gender, race, socioeconomic status).
+ - **Frequency**: Bias detection tests will be run on a weekly basis.
+ - **Tools**: Use of automated bias detection tools such as [[AI Governance/Test Suite/Age Bias Detection Suite]] to monitor biases.
+ - **Action Plan**: If bias is detected, the AI model will be retrained or adjusted based on the severity of the issue.
+ - #### 2. **Performance Monitoring**
+ - **Objective**: Monitor the accuracy, reliability, and responsiveness of AI models in production.
+ - **Metrics**: Accuracy, latency, failure rate, user satisfaction.
+ - **Frequency**: Real-time performance monitoring will be enabled.
+ - **Tools**: Monitoring tools like cloud-based dashboards (e.g., AWS CloudWatch, Azure Monitor).
+ - **Action Plan**: In case of performance degradation, immediate investigation and retraining will be initiated.
+ - #### 3. **Security Monitoring**
+ - **Objective**: Ensure that AI models are protected from malicious attacks, data leaks, or adversarial threats.
+ - **Frequency**: Continuous monitoring with security audits conducted quarterly.
+ - **Tools**: Use of security tools for AI model integrity (e.g., model monitoring services, threat detection tools).
+ - **Action Plan**: If a security breach is detected, an emergency response team will be deployed to mitigate and resolve the issue.
+ - #### 4. **Explainability and Transparency**
+ - **Objective**: Ensure that AI models remain explainable and transparent, especially in decision-making systems.
+ - **Tools**: Use of explainability frameworks (e.g., SHAP, LIME) to provide insights into model decisions.
+ - **Frequency**: Monthly reports on the explainability of AI models will be generated.
+ - **Action Plan**: If a model’s decisions are found to be non-explainable, alternative models or approaches will be considered.
+ - ### Roles and Responsibilities
+ background-color:: yellow
+ - **AI Governance Team**: Responsible for implementing this policy, conducting monitoring, and enforcing ethical guidelines.
+ - **Data Science Team**: Ensures that models are built, trained, and deployed in accordance with the monitoring policy.
+ - **IT Security Team**: Handles security monitoring and breach response.
+ - **Model Owners**: Responsible for ensuring that the models under their purview are compliant with this policy.
+ - **Compliance Officer**: Oversees adherence to regulatory and ethical standards.
+ - ### Procedures
+ background-color:: yellow
+ - #### 1. **Monitoring Setup**
+ - Ensure that all deployed AI models are connected to the monitoring system.
+ - Configure automated alerts for bias detection, performance degradation, and security risks.
+ - #### 2. **Review and Reporting**
+ - Weekly and monthly reviews will be conducted by the AI Governance team.
+ - Regular reports on model performance, bias detection results, and security events will be submitted to senior leadership.
+ - #### 3. **Intervention Process**
+ - If an AI model fails to meet ethical or performance standards, the following steps will be taken:
+ - Investigate the root cause of the issue.
+ - Retrain, recalibrate, or modify the AI model.
+ - Suspend the AI model if the issue poses a significant risk.
+ - Communicate the findings and resolution to stakeholders.
+ - ### Compliance
+ background-color:: yellow
+ - This policy adheres to industry regulations, including GDPR, CCPA, and other data protection and ethical AI standards. Regular audits will be conducted to ensure ongoing compliance.
+ - WIP
+ - Use [[test suites]]
+ - Required Tests
+ - [[Sentiment Disparity Test]]
+ - Actor Swap Test
+ - Occupational Test
+ - Social Role Test
+ - [[...]]
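The actor-swap test named in the list above can be illustrated with a minimal sketch: run the same prompt with only the actor attribute changed, score both outputs for sentiment, and flag the pair when the scores diverge. The `score_sentiment` word-list scorer and the 0.1 threshold below are illustrative assumptions, not part of the policy or its test suites.

```python
# Minimal sketch of an actor-swap bias test: two outputs that should differ
# only in the actor attribute (e.g. age) are compared on sentiment; a large
# gap suggests the model treats the two groups differently.

def score_sentiment(text: str) -> float:
    """Toy scorer: fraction of positive words minus fraction of negative words."""
    positive = {"capable", "reliable", "skilled", "sharp"}
    negative = {"slow", "forgetful", "outdated", "frail"}
    words = text.lower().split()
    if not words:
        return 0.0
    pos = sum(w in positive for w in words)
    neg = sum(w in negative for w in words)
    return (pos - neg) / len(words)

def actor_swap_test(output_a: str, output_b: str, threshold: float = 0.1) -> dict:
    """Compare sentiment of two outputs that differ only in the actor attribute."""
    gap = abs(score_sentiment(output_a) - score_sentiment(output_b))
    return {"bias_detected": gap > threshold, "sentiment_gap": round(gap, 3)}

result = actor_swap_test(
    "The young employee is sharp and reliable",
    "The older employee is slow and forgetful",
)
print(result)
```

In practice the toy word lists would be replaced by a real sentiment model, but the pass/fail contract of the test stays the same.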
This file was deleted.
@@ -1 +1,4 @@
  alias:: Monitoring of AI Systems
+
+ - Implements [[AI Governance/Policies/AI Monitoring Policy]]
+ -
@@ -0,0 +1,109 @@
alias:: Continuous application of AI Ethics, Continuous AI monitoring requirements

- Implemented by
- [[AI Governance/Tools/Bias Detector]]
- ### Overview
- The **AI Monitoring/Continuous AI Ethics Check** is designed to ensure that AI models in production adhere to ethical standards, including fairness, transparency, and security. This involves real-time monitoring, scheduled audits, and automated alerts for potential issues such as bias or model drift.
- ### Key Components of Continuous AI Ethics Monitoring
- 1. **Bias Detection and Mitigation**
  - **Objective**: Continuously monitor for bias in AI outputs (e.g., gender, race, age, socioeconomic).
  - **Tools**: Use automated bias detection tools such as [[AI Governance/Tools/Bias Detector]] to monitor for bias.
  - **Action Plan**: Retrain or adjust AI models when bias is detected.
- 2. **Fairness and Equity Checks**
  - **Objective**: Ensure AI treats all users fairly, without favoring or disadvantaging any group.
  - **Tools**: Regular audits and fairness assessments.
- 3. **Transparency and Explainability**
  - **Objective**: Ensure that AI decisions are explainable and transparent, particularly in decision-making systems.
  - **Tools**: Use explainability frameworks like **LIME** or **SHAP**.
- 4. **Security and Data Privacy**
  - **Objective**: Ensure AI systems respect user privacy and are secure from attacks or data breaches.
  - **Tools**: Real-time monitoring for security incidents.
- 5. **Accountability and Compliance**
  - **Objective**: Ensure AI complies with internal governance rules and external regulations such as **GDPR** or **CCPA**.
  - **Action Plan**: Maintain an audit trail for accountability and compliance checks.
- ### Implementation Steps
- #### 1. [[AI Governance/Frameworks/Ethics Monitoring Framework]]
- Define the key ethical principles (e.g., fairness, transparency, privacy) that AI must adhere to.
- Develop specific metrics (e.g., bias metrics, transparency scores).
- #### 2. [[AI Governance/Procedures/Real-Time Monitoring Setup]]
- **Data Collection**: Collect real-time data from AI systems, including inputs, outputs, and decision logs.
- **Bias Detection**: Continuously run tests (like age bias tests) to monitor for bias.
- **Performance Monitoring**: Track the AI’s performance to detect any ethical issues or degradation.
- #### 3. [[AI Governance/Procedures/Alerting and Reporting System]]
- **Automated Alerts**: Set up alerts to notify teams of detected ethical violations (bias, fairness, or privacy breaches).
- **Dashboard Reporting**: Use tools like **AWS CloudWatch**, **Azure Monitor**, or **Google Cloud Monitoring** to display real-time ethics and bias data.
- #### 4. [[AI Governance/Tools/Explainability Tools Integration]]
- Tools like **LIME** and **SHAP** should be integrated to ensure that model decisions are explainable and easily interpreted.
- #### 5. [[AI Governance/Policies/Data Privacy and Security]]
- Ensure compliance with **GDPR**, **CCPA**, and other data protection laws.
- Use security tools to monitor the AI system for adversarial attacks or data leaks.
- #### 6. [[AI Governance/Procedures/Feedback Loops and Model Retraining]]
- Establish feedback loops for flagged ethical issues, triggering model adjustments or retraining when necessary.
- #### 7. [[AI Governance/Procedures/Compliance and Accountability Auditing]]
- Keep logs of AI decisions, monitoring reports, and ethics check outcomes for audits.
- Automate logging and auditing to ensure traceability.
- ### Tools and Technologies for Continuous AI Ethics Monitoring
- 1. **Bias Detection Tools**
  - **Fairness Indicators** (Google)
  - **Aequitas**: Bias and fairness audit tool.
  - **What-If Tool**: Bias detection and what-if scenarios.
- 2. **Explainability Tools**
  - **LIME**: Local interpretable model-agnostic explanations.
  - **SHAP**: Quantifies feature importance in AI decisions.
- 3. **Model Monitoring and Alerting**
  - **AWS CloudWatch**: For real-time monitoring and alerting.
  - **Azure Monitor**: For tracking AI model performance and bias metrics.
  - **Google Cloud Monitoring**: For performance and bias monitoring on Google Cloud.
- 4. **Governance and Compliance Tools**
  - **Azure Machine Learning Governance**: Responsible AI tools.
  - **IBM Watson OpenScale**: Bias and drift detection.
- ### Sample Code for Real-Time AI Ethics Monitoring in AWS Lambda
```python
import os
import boto3
from datetime import datetime
from bias_detection_module import detect_age_bias

# Initialize AWS CloudWatch client
cloudwatch = boto3.client('cloudwatch')

# Lambda function for real-time AI ethics check
def lambda_handler(event, context):
    # Get AI model output (hypothetical event structure)
    model_output = event['model_output']
    prompt = event['prompt']
    # Run bias detection check
    bias_result = detect_age_bias(prompt, model_output)
    # If bias is detected, log to CloudWatch and send an alert
    if bias_result['bias_detected']:
        log_bias_to_cloudwatch(bias_result)
        send_alert(bias_result)
    return bias_result

# Log bias metrics to AWS CloudWatch
def log_bias_to_cloudwatch(bias_result):
    cloudwatch.put_metric_data(
        Namespace='AI/EthicsMonitoring',
        MetricData=[{
            'MetricName': 'BiasDetection',
            'Dimensions': [{'Name': 'BiasType', 'Value': 'AgeBias'}],
            'Timestamp': datetime.utcnow(),
            'Value': 1 if bias_result['bias_detected'] else 0,
            'Unit': 'Count'
        }]
    )

# Send an alert if bias is detected
def send_alert(bias_result):
    sns = boto3.client('sns')
    sns.publish(
        TopicArn=os.getenv('SNS_ALERT_TOPIC'),
        Message=f"Bias Detected in AI Model: {bias_result['details']}",
        Subject="AI Ethics Monitoring Alert"
    )
```
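The Lambda above imports `detect_age_bias` from a `bias_detection_module` that is not part of this commit. To make the handler's contract concrete, here is a sketch of what such a function could look like; the keyword heuristic and term lists are assumptions for illustration only. It returns the two keys the handler actually reads, `bias_detected` and `details`.

```python
# Hypothetical sketch of the detect_age_bias interface used by the Lambda
# handler above. The age/negative word lists are purely illustrative; a real
# implementation would use a trained classifier or a bias test suite.

AGE_TERMS = {"old", "older", "elderly", "young", "younger", "aged"}
NEGATIVE_TERMS = {"slow", "forgetful", "incapable", "outdated"}

def detect_age_bias(prompt: str, model_output: str) -> dict:
    """Flag outputs that pair age-related terms with negative descriptors."""
    words = set(model_output.lower().split())
    age_hit = words & AGE_TERMS
    negative_hit = words & NEGATIVE_TERMS
    detected = bool(age_hit and negative_hit)
    return {
        "bias_detected": detected,
        "details": (
            f"age terms {sorted(age_hit)} co-occur with negative terms "
            f"{sorted(negative_hit)}" if detected else "no age bias signal"
        ),
    }

print(detect_age_bias("Describe the applicant", "The older applicant seems slow"))
```

Whatever the internals, keeping this return shape stable is what lets the CloudWatch logging and SNS alerting code stay unchanged as detection methods evolve.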
This file was deleted.
@@ -0,0 +1 @@
alias:: test suites
@@ -0,0 +1,3 @@
alias:: undesired outcomes

-
@@ -0,0 +1 @@
-
@@ -1 +0,0 @@
-