Security Team Guidelines

Responsible AI at CSC

Our commitment to using AI ethically, securely, and responsibly. These guidelines help ensure we harness AI's potential while protecting our company, employees, and customers.

Core Principles

Security First

Never input sensitive, confidential, or personally identifiable information into external AI tools without explicit approval from the security team.

Transparency

Always disclose when AI has been used to generate or assist in creating content, especially in customer-facing communications.

Data Protection

Ensure all AI tools comply with our data protection policies and relevant regulations like GDPR and CCPA.

Human Oversight

AI should augment human decision-making, not replace it. Critical decisions must always have human review and approval.

Quick Reference Guide

Data Handling

Use AI tools with anonymized or synthetic data for testing
Input publicly available information into approved AI tools
Use customer data only with approved, enterprise-grade AI tools
Never input passwords, API keys, or authentication credentials
Never share employee personal information with external AI
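As a practical illustration of the data-handling rules above, text can be scrubbed for obvious credentials and personal identifiers before it is ever pasted into an external AI tool. The sketch below is a minimal, hypothetical example — the patterns and labels are illustrative assumptions, not an approved CSC tool, and pattern matching alone will not catch every kind of sensitive data.

```python
import re

# Illustrative patterns only -- a real deployment would use a vetted
# PII/credential-detection tool, not this hand-rolled list.
SENSITIVE_PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.-]+",
    "API_KEY": r"(?i)\b(?:api[_-]?key|token)\s*[:=]\s*\S+",
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",
}

def scrub(text: str) -> str:
    """Replace matches of each sensitive pattern with a redaction marker."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = re.sub(pattern, f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Contact jane.doe@csc.com, api_key=abc123, SSN 123-45-6789"
    print(scrub(prompt))
```

A scrubber like this is a safety net, not a substitute for judgment: when in doubt about whether data is sensitive, the guidelines above say to ask the security team before using an external tool at all.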

Content Generation

Use AI to draft internal documentation and reports
Generate code suggestions with approved coding assistants
Submit AI-generated marketing content for human review and approval
Do not publish AI content without fact-checking and review
Never use AI to impersonate individuals or create deepfakes

Tool Usage

Use tools from the approved AI tools list without additional approval
Experiment with new AI tools using only non-sensitive data
Submit new tools that will handle company data for security review
Do not use personal AI accounts for work-related tasks
Never bypass security controls or use unapproved workarounds

Need More Details?

For the complete AI policy documentation, training materials, and certification requirements, visit the Security Team's SharePoint site or contact security@csc.com.