Quiz: AWS S3 Deep Dive

6 questions

L1 (3 questions)

1. You set a bucket policy denying all public access, but an object was uploaded with a public-read ACL. Can the public read that object?

Answer: No. An explicit Deny in the bucket policy overrides the public-read ACL. In AWS policy evaluation, an explicit Deny (from any policy) always wins over any Allow. However, relying on policy evaluation alone is fragile. The robust fix: enable S3 Block Public Access at the bucket or account level, which ignores all public ACLs and public bucket policies regardless of what they say. Block Public Access has four settings: BlockPublicAcls (reject PUTs with public ACLs), IgnorePublicAcls (ignore existing public ACLs), BlockPublicPolicy (reject policies that grant public access), RestrictPublicBuckets (restrict cross-account access). Enable all four for any bucket that should never be public.
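The four settings map directly onto one boto3 call. A minimal sketch (the bucket name is a placeholder, and the client is passed in rather than constructed, so no credentials are assumed):

```python
import json

# The four Block Public Access settings; enable all four for any bucket
# that should never be public.
PUBLIC_ACCESS_BLOCK = {
    "BlockPublicAcls": True,        # reject PUTs that carry public ACLs
    "IgnorePublicAcls": True,       # ignore any existing public ACLs
    "BlockPublicPolicy": True,      # reject bucket policies granting public access
    "RestrictPublicBuckets": True,  # restrict cross-account use of public policies
}

def apply_block_public_access(bucket: str, s3_client) -> None:
    """Apply the full Block Public Access configuration to a bucket.

    s3_client is a boto3 S3 client, e.g. boto3.client("s3").
    """
    s3_client.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration=PUBLIC_ACCESS_BLOCK,
    )

if __name__ == "__main__":
    print(json.dumps(PUBLIC_ACCESS_BLOCK, indent=2))
```

The same configuration can be set account-wide via `put_public_access_block` on the S3 Control API, which is the safer default for accounts with no public buckets at all.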

2. You enable versioning, upload a file, then delete it. The file disappears from the console (default view). Is the data actually gone? How do you recover it?

Answer: The data is NOT gone. Deleting a versioned object inserts a 'delete marker' — a zero-byte version that makes the object appear deleted. The previous version still exists. To recover:
1. In the S3 console, toggle 'Show versions' to see all versions including the delete marker.
2. Delete the delete marker itself — this restores the most recent previous version as the current version.
3. Via CLI: 'aws s3api delete-object --bucket BUCKET --key KEY --version-id DELETE_MARKER_VERSION_ID'. Important: this only works if the delete marker hasn't been permanently deleted (e.g., via lifecycle policy). MFA Delete adds an extra protection layer requiring MFA to permanently delete versions.
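The delete-marker mechanics above can be modeled in a few lines. This is a toy in-memory sketch of the versioning behavior, not the S3 API itself:

```python
import itertools

class VersionedKey:
    """Toy model of S3 versioning for one key: a plain DELETE adds a
    delete marker, and removing the marker restores the prior version."""

    def __init__(self):
        self._ids = itertools.count(1)
        # list of (version_id, body); body None represents a delete marker
        self.versions = []

    def put(self, body: bytes) -> str:
        vid = f"v{next(self._ids)}"
        self.versions.append((vid, body))
        return vid

    def delete(self) -> str:
        """A simple DELETE inserts a zero-byte delete marker."""
        vid = f"v{next(self._ids)}"
        self.versions.append((vid, None))
        return vid

    def get(self):
        """GET returns the current version; None mimics the 404 you see
        when the current version is a delete marker."""
        if not self.versions or self.versions[-1][1] is None:
            return None
        return self.versions[-1][1]

    def delete_version(self, version_id: str) -> None:
        """Like delete-object --version-id: permanently removes that
        version (deleting the marker restores the previous version)."""
        self.versions = [(v, b) for v, b in self.versions if v != version_id]

key = VersionedKey()
key.put(b"report v1")
marker = key.delete()
assert key.get() is None          # object appears gone in the default view
key.delete_version(marker)        # remove the delete marker
assert key.get() == b"report v1"  # previous version is current again
```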

3. You generate a presigned URL valid for 1 hour. A user clicks it at minute 59 and begins downloading a 2 GB file. The download takes 20 minutes. Does it complete?

Answer: Yes, it completes. S3 validates the presigned URL's expiration at the time the request is initiated, not during data transfer. Once S3 accepts the request (at minute 59, within the 1-hour window), the download stream continues until completion regardless of how long the transfer takes. However, if the TCP connection drops and the client retries after the expiration time, the retry will fail with 403 Forbidden. This also applies to multipart downloads — each part request is validated independently. For very large files, generate URLs with longer expiration or use S3 Transfer Acceleration to reduce transfer time.

L2 (3 questions)

1. You start a multipart upload for a 5 GB file but the upload process crashes after uploading 800 of 1000 parts. What happens to the uploaded parts, and what do they cost?

Answer: The 800 uploaded parts remain in S3 and you are billed for their storage at the standard rate (approximately 4 GB of storage charges). They are invisible in normal ListObjects calls but consume storage. To find them: 'aws s3api list-multipart-uploads --bucket BUCKET'. To clean up: 'aws s3api abort-multipart-upload --bucket BUCKET --key KEY --upload-id ID'. Prevention: configure a lifecycle rule to automatically abort incomplete multipart uploads after N days: '{"Rules": [{"ID": "AbortIncomplete", "Status": "Enabled", "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}}]}'. Without this rule, orphaned parts accumulate indefinitely and inflate your bill.

2. You enable cross-region replication from us-east-1 to eu-west-1. A user writes an object to us-east-1 and immediately reads from eu-west-1. They get a 404. Why, and how long should they expect to wait?

Answer: Cross-region replication is asynchronous and eventual. The 404 occurs because the object hasn't replicated yet. Typical replication time is seconds to minutes for most objects, but AWS guarantees that 99.99% of objects replicate within 15 minutes only when S3 Replication Time Control (S3 RTC) is enabled. Without RTC, there is no SLA on replication latency, and large objects take longer. Solutions:
1. Enable S3 RTC for predictable 15-minute SLA (additional cost).
2. Design applications to tolerate eventual consistency — read from the source region if immediate consistency is required.
3. Use S3 replication metrics to monitor lag. Do NOT assume replication is instant — this is a common cause of cross-region application bugs.
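A common way to tolerate replication lag in application code is a retry-with-backoff read. A sketch, where the simulated fetch function stands in for a GET against the destination bucket (returning None in place of a 404):

```python
import time

def get_with_retry(fetch, attempts: int = 5, base_delay: float = 0.01):
    """Read an object that may not have replicated yet: retry on 404
    with exponential backoff, then give up."""
    for i in range(attempts):
        obj = fetch()
        if obj is not None:
            return obj
        time.sleep(base_delay * 2 ** i)   # 0.01s, 0.02s, 0.04s, ...
    raise TimeoutError("object not replicated within retry budget")

# Simulate a replica where the object only appears on the 3rd read.
state = {"reads": 0}
def fetch_replica():
    state["reads"] += 1
    return b"payload" if state["reads"] >= 3 else None  # None = 404

assert get_with_retry(fetch_replica) == b"payload"
assert state["reads"] == 3
```

Retries mask typical lag but are not a substitute for RTC or source-region reads when the application truly requires the latest write.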

3. Your bucket policy has Principal: '*' with Allow on s3:GetObject. IAM policies in your account deny S3 access for most users. Will the IAM Deny override the bucket policy's public Allow?

Answer: For same-account access: yes, an explicit Deny in an IAM policy overrides the Allow in the bucket policy. But for cross-account or anonymous access: IAM policies in YOUR account don't apply to external principals. Anonymous (unauthenticated) users are not subject to any IAM policy — they only see the bucket policy. So Principal: '*' with Allow effectively makes the bucket public to the internet, regardless of your IAM Deny policies. The fix: enable S3 Block Public Access (which overrides bucket policies granting public access), or change the Principal to specific account ARNs. Never rely on IAM Deny to compensate for an overly permissive bucket policy.
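The two cases can be captured in a toy evaluator. This is a deliberately simplified model of the authorization decision (real evaluation also involves SCPs, session policies, resource conditions, and more), but it shows why the anonymous path is the dangerous one:

```python
def evaluate(principal_is_anonymous: bool,
             bucket_statements: list,
             iam_statements: list) -> str:
    """Toy model of the S3 authorization decision for one action.

    Anonymous requests are never evaluated against IAM policies;
    only the bucket (resource) policy applies to them.
    """
    statements = list(bucket_statements)
    if not principal_is_anonymous:        # IAM applies only to authenticated
        statements += iam_statements      # principals in your own account
    effects = {s["Effect"] for s in statements}
    if "Deny" in effects:                 # explicit Deny always wins
        return "DENY"
    return "ALLOW" if "Allow" in effects else "DENY"  # default deny

bucket = [{"Effect": "Allow"}]   # Principal "*" Allow on s3:GetObject
iam    = [{"Effect": "Deny"}]    # account IAM policy denying S3

# Same-account user: the IAM Deny overrides the bucket Allow.
assert evaluate(False, bucket, iam) == "DENY"

# Anonymous internet user: IAM never applies; the bucket Allow wins.
assert evaluate(True, bucket, iam) == "ALLOW"
```

The asymmetry in the second assertion is exactly why Block Public Access exists: it is the only control in this picture that constrains what the bucket policy itself can grant.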