
Governing Access in Amazon Bedrock

Neha Duggal

Nov 7, 2025


Generative AI has fully entered the enterprise mainstream, and platforms like Amazon Bedrock let organisations build and scale AI use cases with foundation models such as Anthropic’s Claude, Mistral, Amazon’s Titan, and more through familiar AWS APIs. While enterprises can use Bedrock to experiment with, customize, and deploy generative AI tools, these capabilities also introduce new access surfaces. Model interactions frequently contain sensitive prompt content, intellectual property, or customer data. Security organisations must therefore carefully design access privileges that balance AI-enabled innovation with appropriate control.

Mismanaged Bedrock permissions can quietly bypass existing identity controls. A developer role with broad bedrock:* permissions, or an automation pipeline configured with unrestricted invocation rights, can lead to exposure of internal data or spiralling AI usage costs. 
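As a concrete illustration, the short Python sketch below scans an IAM policy document for Allow statements whose action wildcards would grant Bedrock invocation on any resource. The policy document is a hypothetical example built for this post, not a real role.

```python
import fnmatch

def find_broad_bedrock_statements(policy: dict) -> list[dict]:
    """Flag Allow statements whose action patterns would grant
    bedrock:InvokeModel on an unrestricted Resource of "*"."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        grants_invoke = any(
            fnmatch.fnmatch("bedrock:InvokeModel", pattern) for pattern in actions
        )
        if grants_invoke and stmt.get("Resource") in ("*", ["*"]):
            flagged.append(stmt)
    return flagged

# Hypothetical developer policy: the bedrock:* wildcard quietly includes
# invocation rights on every model in the account.
developer_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "bedrock:*", "Resource": "*"},
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::app-logs/*"},
    ],
}

print(len(find_broad_bedrock_statements(developer_policy)))  # → 1
```

A real audit would also need to account for Deny statements, NotAction, and permissions inherited from groups or boundaries; this sketch only shows the shape of the check.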

Managing who can invoke, manage, and share these models is an access governance problem, not just a cloud operations issue. This post explores where risks lie, what controls security leaders should consider, and how to measure governance maturity for Bedrock adoption.

Mapping Identity Risks in AWS Bedrock

A review of permissioning within Bedrock reveals several gotchas for unwary security and identity teams:

Runtime access risk: The bedrock:InvokeModel permission determines who can call a model and run inference, making it a critical identity checkpoint. Poorly scoped permissions can expose sensitive workloads or enable mass inference on internal data without oversight, with the corresponding data risks and cost implications.
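For contrast, a least-privilege invocation policy might look like the sketch below, where invocation is limited to a single model ARN; the specific model identifier and region are placeholders chosen for illustration, not recommendations.

```python
import json

# Hypothetical invoke-only policy: inference against one approved foundation
# model, with no model-management actions granted.
invoke_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowScopedInvocation",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "arn:aws:bedrock:eu-west-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        }
    ],
}

print(json.dumps(invoke_only_policy, indent=2))
```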

Configuration and lifecycle risk: Permissions such as bedrock:CreateModelCustomizationJob or bedrock:UpdateModel allow teams to modify or customize models. If granted broadly, they can lead to unapproved model fine-tuning or injection of unvetted training data.

Cross-account and multi-region risk: Bedrock supports resource-based policies and cross-account access. While this enables shared inference services, it also creates new trust boundaries. Without central governance, an account in one region could invoke or share models in another, creating data residency or compliance blind spots.

Auditability and detectability risk: Model invocations generate CloudTrail events, but unless those events are correlated back to human or workload identities, audit trails require significant additional effort to attribute.
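A minimal sketch of that correlation step is shown below. The event is a hand-built, heavily abbreviated stand-in for a CloudTrail record, and the role and user names are placeholders; real records carry many more fields.

```python
# Abbreviated, hand-built stand-in for a Bedrock CloudTrail record.
sample_event = {
    "eventSource": "bedrock.amazonaws.com",
    "eventName": "InvokeModel",
    "userIdentity": {
        "type": "AssumedRole",
        "arn": "arn:aws:sts::123456789012:assumed-role/DataScienceRole/alice@example.com",
    },
}

def invoking_principal(event: dict) -> str:
    """For assumed-role events, extract the session name, which federation
    setups commonly set to the corporate user; otherwise return the raw ARN."""
    identity = event["userIdentity"]
    if identity["type"] == "AssumedRole":
        return identity["arn"].rsplit("/", 1)[-1]
    return identity["arn"]

print(invoking_principal(sample_event))  # → alice@example.com
```

This mapping only works if the session name (or a source identity, discussed below) is reliably populated at federation time, which is exactly why provenance controls matter.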

Bedrock Governance Guidance

Governing Bedrock usage is an identity necessity, not a nicety. Identity teams should focus on the following areas:

Standing access vs. just-in-time invocation: Bedrock supports programmatic access through IAM roles, which can easily accumulate standing permissions if not governed correctly. Instead, require users and workloads to obtain short-lived access that is scoped to a specific model or use case and expires automatically. This limits the exposure if keys or roles are compromised.
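This pattern can be sketched with STS: a session policy narrows the assumed role down to a single model, and DurationSeconds bounds the credential lifetime. The ARNs are placeholders, and the helper only builds the assume_role arguments; actually calling STS would require boto3 and valid credentials.

```python
import json

def jit_invoke_session(role_arn: str, model_arn: str, ttl_seconds: int = 900) -> dict:
    """Build kwargs for sts.assume_role granting short-lived, model-scoped
    invocation. Sketch only; ARNs are placeholders."""
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "bedrock:InvokeModel", "Resource": model_arn}
        ],
    }
    return {
        "RoleArn": role_arn,
        "RoleSessionName": "jit-bedrock-invoke",
        "DurationSeconds": ttl_seconds,        # credentials expire automatically
        "Policy": json.dumps(session_policy),  # further narrows the role's permissions
    }

kwargs = jit_invoke_session(
    "arn:aws:iam::123456789012:role/BedrockInvokeRole",
    "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
)
print(kwargs["DurationSeconds"])  # → 900
```

The key property is that a session policy can only reduce the role's permissions, never expand them, so even a broadly permissioned role yields a narrowly scoped session.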

Privilege scope and separation of duties: The same user or role should not both administer models and invoke them in production. Blurred privilege boundaries create insider risk and weaken accountability. Separate high-privilege model management actions (CreateModelCustomizationJob, UpdateModel) from runtime invocation (InvokeModel) and implement permission boundaries to prevent privilege escalation and enforce least privilege across the Bedrock ecosystem.
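A toy separation-of-duties check over role-to-action mappings might look like the following; the role names are hypothetical, and a production check would read live IAM data rather than a hard-coded dict.

```python
# Action sets drawn from the distinction above: model management vs. runtime.
MANAGEMENT_ACTIONS = {"bedrock:CreateModelCustomizationJob", "bedrock:UpdateModel"}
RUNTIME_ACTIONS = {"bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"}

def sod_violations(role_actions: dict[str, set[str]]) -> list[str]:
    """Return roles that hold both management and runtime rights,
    violating separation of duties."""
    return [
        role
        for role, actions in role_actions.items()
        if actions & MANAGEMENT_ACTIONS and actions & RUNTIME_ACTIONS
    ]

roles = {
    "ml-platform-admin": {"bedrock:CreateModelCustomizationJob"},
    "app-runtime": {"bedrock:InvokeModel"},
    "do-everything": {"bedrock:UpdateModel", "bedrock:InvokeModel"},
}

print(sod_violations(roles))  # → ['do-everything']
```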

Identity provenance and trust chain: Bedrock interactions often originate from federated identities or machine roles. AWS IAM entities should be mapped to corporate identities to allow auditable, compliance-friendly event tracking. Every invocation must trace back to a verified user, service, or workload in your authoritative identity provider (e.g., Okta, Entra ID). Federation and attribute mapping should be standardized, validated, and logged.
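One concrete mechanism for this trace-back is the STS SourceIdentity attribute, which is set when a role is assumed, persists for the lifetime of the session, and appears in CloudTrail events. A minimal sketch with placeholder names, again building only the assume_role arguments rather than calling STS:

```python
def federated_assume_kwargs(role_arn: str, corporate_user: str) -> dict:
    """Build sts.assume_role kwargs that carry the IdP user through
    SourceIdentity so downstream events remain attributable.
    Sketch only; ARN and user are placeholders."""
    return {
        "RoleArn": role_arn,
        "RoleSessionName": corporate_user,  # STS session names permit "@" and "."
        "SourceIdentity": corporate_user,   # persists for the session, logged by CloudTrail
    }

kwargs = federated_assume_kwargs(
    "arn:aws:iam::123456789012:role/BedrockInvokeRole",
    "alice@example.com",
)
print(kwargs["SourceIdentity"])  # → alice@example.com
```

Note that setting a source identity requires the role's trust policy to allow the sts:SetSourceIdentity action, and once set it cannot be changed for the rest of the session.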

Cross-environment and data-governance controls: Bedrock’s cross-account and cross-region features make it easy to share inference workloads, but also to leak access or data unintentionally. Apply governance boundaries using AWS Organizations SCPs and resource-based policies: limit model invocation to trusted accounts and approved regions, and tie these policies to your enterprise data classification and residency rules.
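A hedged sketch of such a guardrail is below: an Organizations SCP that denies Bedrock invocation outside an approved-region allowlist, using the aws:RequestedRegion global condition key. The region list is an example, not guidance, and a real SCP would likely cover further Bedrock actions as well.

```python
import json

# Hypothetical SCP: deny invocation anywhere outside the approved regions.
region_guardrail_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyBedrockOutsideApprovedRegions",
            "Effect": "Deny",
            "Action": "bedrock:InvokeModel",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["eu-west-1", "eu-central-1"]
                }
            },
        }
    ],
}

print(json.dumps(region_guardrail_scp, indent=2))
```

Because SCPs apply account-wide, this denies out-of-region invocation even for principals whose IAM policies would otherwise allow it, which is what makes it a governance boundary rather than just another permission.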

Summary

Generative AI enablers like Amazon Bedrock unlock the innovation potential of AI across the enterprise, but they also significantly expand the identity attack surface. As invocation rights become the new form of privilege, security maturity will be defined not just by how quickly teams can experiment, but by how that experimentation is enabled for the right identities, with tightly scoped permissions, at the right time.

Struggling to control production access in hybrid or multi-cloud environments?

Get a demo of P0 Security, the next-gen PAM platform built for every identity.