OARC December 2025 Newsletter
Navigating AI Responsibly: Balancing Opportunity and Oversight
Christopher Griffith, University Privacy Officer
AI is transforming research, teaching, and operations. Here's how the University is leading with integrity and foresight.
Artificial Intelligence (AI) offers transformative opportunities for research, teaching, and university operations. Realizing these benefits responsibly, however, requires sustained attention to privacy, security, fairness, and compliance. These considerations are already reshaping the institution’s risk landscape and will continue to do so as AI becomes more embedded in core processes.
AI adoption is now mainstream. The 2025 Stanford AI Index reports that organizational use rose from 55% in 2023 to 78% in 2024, and a recent McKinsey global survey finds that 88% of organizations now use AI in at least one business function. At the same time, emerging research shows that AI can match, and in some tasks exceed, clinician performance in areas such as imaging-based cancer detection. This underscores both the technology's promise and the importance of oversight to ensure reliability and effective application.
Building on these trends, our university is advancing a comprehensive risk management approach to AI that balances innovation with responsibility. Read on for an overview of initiatives shaping this effort.
KU Guidelines for Responsible AI
KU provides guidance for ethical and secure AI use at ai.ku.edu, including best practices for faculty, staff, and students. These resources address privacy protection, academic integrity, and risk assessment for generative AI tools.
Introducing the Privacy & AI Risk Council
To strengthen oversight, KU has established the Privacy & AI Risk Council, a cross-functional body charged with advancing responsible AI practices. Its core responsibilities include:
- Policy and Standards Development: Recommending policies and practices for ethical AI use.
- Risk Oversight: Reviewing high-impact AI applications and mitigating risks related to privacy, security, and compliance.
- Regulatory Monitoring: Translating evolving laws and frameworks into actionable guidance.
- Education and Literacy: Promoting informed, ethical AI adoption across campus.
Leveraging the NIST AI Risk Management Framework
KU will use the NIST AI Risk Management Framework (RMF) as a foundation for these efforts. The RMF's four core functions (Govern, Map, Measure, and Manage) align closely with the Council's goals:
- Govern: Establish clear roles and accountability for AI oversight.
- Map: Identify and understand AI-related risks in social, ethical, and operational contexts.
- Measure: Evaluate AI systems for trustworthiness, including privacy, fairness, and security.
- Manage: Continuously mitigate and adapt to emerging risks.
Moving Forward
Our office remains committed to supporting responsible AI adoption by sharing guidance, monitoring emerging regulations, and fostering open dialogue with stakeholders. Together, we can navigate this evolving landscape with confidence and integrity.
Internal Audit
Jarod Kastning, Director - Internal Audit
How Internal Audit Considers AI in Our Work
Bringing AI into the Conversation Early
When planning audits, we ask units about their use of AI tools and systems. This helps us identify potential risks, such as how sensitive data is handled, whether systems are secure, whether university policies are being followed, and how ethical considerations are being addressed.
Tailoring Audit Plans to Fit AI Use
If a unit is using AI in a significant capacity, we adjust our audit plans accordingly. We seek to understand how the technology is being managed, what oversight exists, and what controls are in place to ensure responsible use.
Taking a Closer Look During Audits
When AI is part of the picture, our audit testing may include reviewing any AI-related guidelines or procedures, examining how data feeding the AI is controlled, and assessing how the unit monitors the tool’s use—including ethical and security aspects.
Policy Administration
New and Updated University Policies
Camping Policy
Camping is generally prohibited on KU property, with exceptions for approved tailgating, authorized university events, temporary hammocks or loungers during daylight hours, and designated housing areas. Read more
Public Assembly Policy
This policy outlines how registered student, campus, and external groups may conduct assemblies, rallies, speeches, marches, protests, and tabling on campus, requiring pre-registration and adherence to safety and operational guidelines. Read more
Posting of Materials and Sidewalk Chalking
Postings such as flyers and chalk messages are allowed only on designated university bulletin boards and surfaces; placement on walls, doors, windows, vehicles, and other non-designated areas is prohibited. Read more
Nondiscrimination, Harassment, and Equal Opportunity
This updated policy reaffirms KU’s commitment to prohibit discrimination, harassment, and retaliation based on protected characteristics, with clear definitions and processes to ensure compliance with federal law. Read more
Signature Blocks Policy
This policy standardizes employee signature blocks for email, videoconferencing, university webpages, and course syllabi, specifying formatting, content, and approved styles to comply with state and Board of Regents guidelines. Read more
Accessibility Notice
Please remember that all images in newsletters require alt text, and tables must also follow accessibility guidelines. Additional information and resources are available on the Office of Integrity & Compliance Technology Accessibility site and other KU sites.