X Close Search

How can we assist?

Demo Request

How to Manage AI Risk in Healthcare Security

Explore how healthcare leaders manage AI risks, enhance security, and balance innovation with patient safety.

Post Summary

The rapid acceleration of artificial intelligence (AI) adoption in healthcare offers transformative opportunities, from improving diagnostics to streamlining administrative processes. Yet, its unmatched potential comes with significant risks, especially in the areas of security, governance, and operational integrity. With the stakes as high as patient safety, healthcare leaders must balance innovation with robust risk management.

This article captures a compelling discussion between Willie Leer, Chief Marketing Officer of Point Guard AI, and Manmahan Singh, Deputy Chief Information Security Officer (CISO) at UT Southwestern Medical Center, during a podcast recorded at the Black Hat security conference. These industry experts explore the intersection of AI, cybersecurity, and healthcare, offering actionable insights for decision-makers navigating this complex landscape.

The Dual Promise and Perils of AI in Healthcare

AI’s applications in healthcare are as varied as they are impactful. From enhancing patient diagnostics to streamlining hospital admissions and revenue management, the possibilities appear boundless. However, as Willie Leer aptly puts it, "AI has descended on us so quickly, it’s unprecedented", creating a governance gap that organizations must urgently address.

Manmahan Singh emphasizes that healthcare institutions must adopt a curated approach to AI. Instead of chasing trends, organizations should identify pain points - such as inefficiencies in operations - and evaluate how AI solutions can enhance patient safety and outcomes. Singh highlights, "It’s always a patient-first approach. Productivity is important, but ensuring patient safety and adding value to the care process is paramount."

Challenges AI Introduces

The rise of AI in healthcare brings with it several inherent risks:

  • Security vulnerabilities: Expanding attack surfaces and the potential for data poisoning or model manipulation.
  • Governance gaps: AI’s rapid adoption has outpaced the development of oversight frameworks.
  • Regulatory ambiguity: While existing laws like HIPAA provide a foundation, they are insufficient for AI-specific risks, particularly in handling model-generated decisions.
  • Operational risks: Shadow AI use, where researchers or employees implement unapproved AI models, increases the likelihood of security breaches.

Key Considerations for AI Risk Management

1. Value vs. Vulnerability Assessment

Manmahan Singh stresses that every AI initiative should begin with a thorough evaluation of value versus risk. This includes:

  • Identifying how the technology will enhance patient safety.
  • Assessing vulnerabilities tied to data usage and sharing.
  • Establishing clear criteria for acceptable operational risks.

The goal is to ensure AI adoption is purposeful and aligned with organizational and patient priorities.

2. Creating Robust Governance Frameworks

A recurring theme in the conversation is the need for formal governance structures. Singh highlights UT Southwestern’s AI risk oversight committee, which brings together stakeholders from healthcare leadership, research, compliance, and legal departments. This multidisciplinary approach enables collaborative decision-making that prioritizes security, compliance, and innovation.

The governance process should include:

  • Early-stage security and compliance reviews (shift-left security).
  • Detailed evaluations of AI models’ data handling, access permissions, and operational behavior.
  • Ongoing monitoring to detect and address issues such as model drift or unauthorized data access.

Willie Leer echoes this sentiment, emphasizing that good governance should not stifle innovation. Instead, it should provide guardrails, allowing organizations to experiment safely.

3. Tackling Shadow AI

Shadow AI - unauthorized AI implementations - poses significant risks, particularly in academic medical centers where researchers often experiment with open-source AI models. Singh explains that his institution has implemented an education-first approach, offering support and guidance to encourage secure practices without stifling academic progress.

Willie Leer adds that discovery tools can reveal the extent of AI activity within an organization, enabling leaders to take necessary control measures. "You can’t tell people not to use AI", Leer says, "but you can ensure it’s done securely and within proper governance frameworks."

4. Addressing Emerging Threats

AI brings new risk vectors to healthcare, including:

  • Data poisoning: A malicious actor manipulates training data, leading the AI model to produce incorrect results.
  • Autonomous agent vulnerabilities: AI agents with insufficient access controls may inadvertently expose sensitive data.
  • Prompt injections: Hackers craft queries to manipulate AI tools, potentially accessing sensitive patient information.

Both experts agree that organizations must develop AI-specific security practices, such as red-teaming AI models to identify vulnerabilities, implementing zero-trust principles, and continuously monitoring for anomalies.

Regulatory Factors in AI Risk Management

While existing frameworks like HIPAA provide a baseline for securing patient data, they fail to address the intricacies of AI-powered systems. Singh notes that state-level regulations, such as Texas House Bill 127, and federal executive orders are beginning to address AI usage and governance, but the regulatory landscape remains fragmented.

In the absence of comprehensive AI-focused regulations, organizations are turning to frameworks like the NIST AI Risk Management Framework (AI RMF) and the OWASP Top 10 LLM Risks to guide their AI security strategies.

Key Takeaways

  • Adopt a patient-first approach: Evaluate how AI projects will improve patient safety and outcomes before pursuing them.
  • Establish governance early: Create cross-functional committees to ensure AI adoption aligns with security and compliance requirements.
  • Combat shadow AI: Use discovery tools to identify unauthorized AI use and engage with users to improve adherence to security policies.
  • Secure AI models proactively: Conduct red-team testing, enforce zero-trust architecture, and monitor for data drift or anomalies.
  • Leverage regulatory frameworks: Use tools like NIST’s AI RMF to build a structured, compliant approach to AI risk management.
  • Prioritize education: Train employees on the risks of improper AI use and the importance of adhering to governance standards.
  • Avoid new security silos: Integrate AI risk management into existing security processes to avoid fragmentation.

Conclusion

The integration of AI into healthcare is inevitable and necessary to meet the growing demands of the industry. However, its adoption comes with complex risks that require a thoughtful and collaborative approach to manage effectively. From establishing governance frameworks to securing models against threats, healthcare organizations must prioritize patient safety and operational integrity while embracing the transformative potential of AI.

As Willie Leer aptly summarizes, "We can’t stifle innovation - we must create safe environments to experiment while maintaining governance." By following these insights, healthcare leaders can navigate the challenges of AI adoption, ensuring it serves its ultimate purpose: improving patient care.

Source: "Securing Healthcare in the Digital Age | CISO Podcast Series (Episode 3)" - The Cyber Express, YouTube, Aug 28, 2025 - https://www.youtube.com/watch?v=tZD9dNVnTQQ

Use: Embedded for reference. Brief quotes used for commentary/review.

Related Blog Posts

Key Points:

Censinet Risk Assessment Request Graphic

Censinet RiskOps™ Demo Request

Do you want to revolutionize the way your healthcare organization manages third-party and enterprise risk while also saving time, money, and increasing data security? It’s time for RiskOps.

Schedule Demo

Sign-up for the Censinet Newsletter!

Hear from the Censinet team on industry news, events, content, and 
engage with our thought leaders every month.

Terms of Use | Privacy Policy | Security Statement | Crafted on the Narrow Land