AWS IAM Footguns¶
Mistakes that cause security incidents, access lockouts, or compliance failures in AWS IAM.
1. Using the root account for daily work¶
The root account has unrestricted access to everything in the AWS account. It cannot be limited by SCPs, permission boundaries, or any IAM policy. Using it for daily operations means a compromised session has unlimited blast radius. One phished MFA code and the attacker owns the account.
Fix: Lock the root account. Enable MFA (hardware key preferred). Create an admin IAM role for daily use. Alert on root activity: a CloudWatch alarm on root API calls, plus spot checks with `aws cloudtrail lookup-events --lookup-attributes AttributeKey=Username,AttributeValue=root`. Store root credentials in a physical safe with dual-custody access.
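On the detection side, one option (assuming CloudTrail delivers to a CloudWatch Logs group) is a metric filter on root activity; this pattern follows the CIS AWS Foundations Benchmark:

```
{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != "AwsServiceEvent" }
```

Attach a metric and an alarm to the filter so any root API call pages someone.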
2. Wildcard * in the Resource field¶
You write a policy that allows `s3:DeleteObject` with `"Resource": "*"`, intending it for one bucket. It now allows deleting objects from every bucket in the account, including backups, audit logs, and CloudTrail logs.
Fix: Always scope resources to specific ARNs. Use `arn:aws:s3:::my-bucket/*` instead of `*`. Use IAM Access Analyzer's `validate-policy` to catch overly broad resources. Treat `"Resource": "*"` in any Allow statement as a security finding.
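A minimal sketch of a properly scoped statement (`my-bucket` is a placeholder). Note that object-level actions like `s3:DeleteObject` take the object ARN (`.../*`), while bucket-level actions like `s3:ListBucket` take the bucket ARN itself:

```json
{
  "Effect": "Allow",
  "Action": "s3:DeleteObject",
  "Resource": "arn:aws:s3:::my-bucket/*"
}
```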
3. Inline policies that cannot be audited¶
Inline policies are embedded in individual users, groups, or roles. They do not show up in the IAM policy list. If you have 200 roles, each with inline policies, auditing permissions is a nightmare: you must check every single principal individually.
Fix: Use customer-managed policies for custom permissions (not AWS-managed ones). Managed policies appear in the policy list, can be versioned, and can be attached to multiple principals. Find inline policies with `aws iam list-role-policies --role-name <role>`, then convert them to managed policies.
4. Forgetting to attach the policy to the role¶
You create a beautiful least-privilege policy. You create the role. You forget aws iam attach-role-policy. The role has zero permissions. Your application gets Access Denied on every call. You spend 30 minutes debugging the policy document when the policy was never attached at all.
```shell
# Check: is anything actually attached?
aws iam list-attached-role-policies --role-name app-role
aws iam list-role-policies --role-name app-role
# Both return empty → nothing attached
```

Fix: Always verify after creating a role. Script the creation as a unit: create role + attach policies + verify. Use `aws iam simulate-principal-policy` to confirm permissions before deploying.
5. Not enabling MFA on IAM users¶
An IAM user with a password and no MFA is one credential stuffing attack away from compromise. If that user has console access to production, the attacker has console access to production.
Fix: Enforce MFA via a deny-all-without-MFA policy (see primer). Use `aws iam generate-credential-report` to find users without MFA:

```shell
# Field 4 = password_enabled, field 8 = mfa_active in the report CSV
aws iam get-credential-report --query 'Content' --output text | \
  base64 -d | awk -F, 'NR>1 && $4=="true" && $8=="false" {print "NO MFA:", $1}'
```
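The enforcement policy mentioned above can be sketched as a deny statement gated on `aws:MultiFactorAuthPresent` (`BoolIfExists` so it also matches requests where the key is absent). The `NotAction` allowlist here is illustrative and trimmed; AWS's published example permits a few more self-service IAM actions:

```json
{
  "Sid": "DenyAllExceptListedIfNoMFA",
  "Effect": "Deny",
  "NotAction": [
    "iam:CreateVirtualMFADevice",
    "iam:EnableMFADevice",
    "iam:ListMFADevices",
    "iam:ResyncMFADevice",
    "sts:GetSessionToken"
  ],
  "Resource": "*",
  "Condition": {
    "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
  }
}
```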
6. Access keys committed to code or git¶
Hardcoded access keys in source code are the leading cause of AWS account compromise. Bots scan GitHub for AWS keys within seconds of a push. Even in private repos, keys in code get copied to local machines, CI systems, logs, and error messages.
```python
# NEVER do this
import boto3

client = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",
    aws_secret_access_key="wJalrXUtnFEMI/K7...",
)
```
Fix: Use IAM roles everywhere. EC2 gets credentials from instance profiles. Lambda gets them from execution roles. EKS pods get them from IRSA. CI/CD uses OIDC federation (GitHub Actions uses `AssumeRoleWithWebIdentity`). If you must use access keys, use environment variables or `~/.aws/credentials`, never source code. Install git-secrets or gitleaks as pre-commit hooks. If a key leaks, kill it immediately: `aws iam update-access-key --status Inactive`, then `aws iam delete-access-key` once nothing depends on it.
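A tiny illustration of why leaked keys are found within seconds: long-term access key IDs follow a fixed pattern, so scanners need only a regex. This sketch shows the first-pass heuristic; it is not a substitute for gitleaks:

```python
import re

# AWS long-term access key IDs start with "AKIA" followed by 16
# uppercase letters/digits; secret scanners use this as a cheap
# first-pass filter before validating candidates.
KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_access_keys(text):
    """Return candidate AWS access key IDs found in a blob of text."""
    return KEY_RE.findall(text)
```

Running it over AWS's documented example key, `find_access_keys('aws_access_key_id="AKIAIOSFODNN7EXAMPLE"')` flags the key; clean text returns an empty list.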
7. Overly broad AssumeRole trust policy¶
A trust policy that allows `"AWS": "*"` means any principal in any AWS account can assume this role. This is functionally open to the internet.
Fix: Always specify exact account IDs or principal ARNs. Use conditions: `sts:ExternalId` for third-party access, `aws:PrincipalOrgID` to restrict to your organization, `aws:PrincipalArn` to restrict to specific roles. Use IAM Access Analyzer to find roles with external access.
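A sketch of a trust policy locked to one organization (the org ID is a placeholder). The otherwise dangerous `"AWS": "*"` principal is made safe here by the `aws:PrincipalOrgID` condition, which rejects any caller outside your organization:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "*"},
    "Action": "sts:AssumeRole",
    "Condition": {
      "StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}
    }
  }]
}
```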
8. Not using conditions in trust policies¶
You create a cross-account role and trust the entire source account (`"AWS": "arn:aws:iam::111111111111:root"`). Now any role or user in that account can assume your role, not just the intended CI pipeline. A compromised developer laptop in the source account becomes a path into your account.
Fix: Restrict the trust policy to a specific role ARN:
```json
{
  "Principal": {"AWS": "arn:aws:iam::111111111111:role/ci-pipeline"},
  "Condition": {"StringEquals": {"sts:ExternalId": "unique-secret-id"}}
}
```
Add `sts:ExternalId` for third-party integrations. Add `aws:SourceIp` for IP-restricted access. Add `aws:MultiFactorAuthPresent` for human access.
9. IAM eventual consistency (policy changes take time)¶
You attach a new policy to a role and immediately test it. It fails. You think the policy is wrong and start debugging. In reality, IAM is eventually consistent — changes can take several seconds (sometimes up to a minute) to propagate globally. Cross-region propagation is even slower.
Fix: After making IAM changes, wait 10-30 seconds before testing. In automation, add a sleep or retry loop after IAM modifications. Do not assume IAM changes are instantaneous. This also means revoking access is not instant — a deleted key may work for a brief period after deletion.
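The retry loop can be sketched generically (the helper name `retry_until` is ours, not an AWS API). In real automation, `op` would be a boto3 call that raises `AccessDenied` until the policy propagates:

```python
import time

def retry_until(op, attempts=6, delay=2.0):
    """Call op() until it succeeds, sleeping between attempts to ride
    out IAM propagation delay. Re-raises the last error on exhaustion."""
    for i in range(attempts):
        try:
            return op()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)  # IAM changes can take seconds to propagate
```

With `attempts=6` and `delay=2.0`, this tolerates roughly ten seconds of propagation lag before giving up, matching the 10-30 second window above.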
Under the hood: IAM is a global service replicated to every AWS region. When you create a policy, it is written to the primary store and asynchronously replicated. STS tokens issued before a policy change remain valid until they expire (up to 12 hours for role sessions). This means revoking an IAM role's permissions does not immediately revoke active sessions — you must also revoke active sessions via the IAM console's "Revoke active sessions" feature, which adds an inline deny policy with a timestamp condition.
10. S3 bucket policy + IAM policy interaction confusion¶
Your IAM policy allows s3:GetObject on a bucket. But the bucket policy has an explicit Deny for your role. Or vice versa — the bucket policy allows your role but your IAM policy does not grant the action. The rules for how these interact are different for same-account vs cross-account access.
Same account: if EITHER the IAM policy or the bucket policy allows the action, and NEITHER has an explicit deny, access is granted.
Cross account: BOTH the IAM policy in the source account AND the bucket policy in the target account must explicitly allow the action.
Fix: When debugging S3 access, always check both the IAM policy and the bucket policy. Use `aws iam simulate-principal-policy` to test the IAM side. Use `aws s3api get-bucket-policy` to inspect the resource side. Check for explicit denies in both: an explicit deny anywhere overrides all allows.
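The same-account vs cross-account rules above can be modeled as a small decision function. This is our simplification for intuition only; real evaluation also involves SCPs, permission boundaries, and session policies:

```python
def s3_access(iam_allows, bucket_allows, iam_denies, bucket_denies, same_account):
    """Simplified model of S3 authorization across identity + bucket policies."""
    if iam_denies or bucket_denies:
        return False  # an explicit deny anywhere overrides all allows
    if same_account:
        return iam_allows or bucket_allows  # either side may grant access
    return iam_allows and bucket_allows  # cross-account requires both sides
```

For example, an IAM allow with no bucket policy grants access in-account but not cross-account, which is exactly the asymmetry that trips people up.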
11. Stale access keys with no rotation¶
You create an access key for a service account in 2022. It is still active in 2026. Nobody remembers what it is for. It has AdministratorAccess. It is embedded in a forgotten Jenkins job that nobody dares to touch.
Fix: Enforce 90-day key rotation. Use AWS Config rules or a custom Lambda to detect old keys:

```shell
aws iam list-access-keys --user-name svc-deploy \
  --query 'AccessKeyMetadata[].{Key:AccessKeyId,Created:CreateDate,Status:Status}'
```
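The staleness check itself is simple date arithmetic. A sketch of what a custom detector would compute from each key's `CreateDate` (90-day window as above):

```python
from datetime import datetime, timezone

MAX_AGE_DAYS = 90  # rotation window

def key_is_stale(create_date, now=None):
    """True if an access key is older than the rotation window."""
    now = now or datetime.now(timezone.utc)
    return (now - create_date).days > MAX_AGE_DAYS
```

A key created in 2022 and checked in 2026, like the Jenkins key above, fails this check by roughly 1,400 days.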
12. Permission boundary not applied to delegated role creation¶
You give developers permission to create IAM roles (for their Lambda functions) but forget to require a permission boundary. A developer creates a role with AdministratorAccess for their Lambda. That Lambda now has full admin access. The developer did not intend to be malicious — they just wanted "something that works."
Fix: Use the `iam:PermissionsBoundary` condition key to enforce that all developer-created roles must have a boundary attached:

```json
{
  "Condition": {
    "StringEquals": {
      "iam:PermissionsBoundary": "arn:aws:iam::123456789012:policy/dev-boundary"
    }
  }
}
```
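In context, that condition gates the role-creation actions in the policy you grant developers. A sketch (account ID, role-name prefix, and boundary name are placeholders; the action list is illustrative):

```json
{
  "Effect": "Allow",
  "Action": ["iam:CreateRole", "iam:PutRolePolicy", "iam:AttachRolePolicy"],
  "Resource": "arn:aws:iam::123456789012:role/dev-*",
  "Condition": {
    "StringEquals": {
      "iam:PermissionsBoundary": "arn:aws:iam::123456789012:policy/dev-boundary"
    }
  }
}
```

With this in place, `CreateRole` calls that omit the boundary, or supply a different one, fail with Access Denied.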