Amazon S3 is a cornerstone of AWS cloud storage, offering unmatched scalability, reliability, and versatility for modern applications. For AWS Associate Solutions Architects, mastering S3 is essential to designing efficient, secure, and cost-effective solutions.

From understanding storage classes and lifecycle management to leveraging features like encryption, versioning, and event notifications, this guide covers the critical knowledge you need to optimize your S3 usage and succeed in your AWS journey.
| S3 Storage Class | Use-cases | Cost | Availability | Retrieval Time |
|----|----|----|----|----|
| S3 Standard | ==Frequently accessed== data. Ideal for big data analytics, mobile and gaming apps, and content delivery. | High | 99.99% | Immediate |
| S3 Intelligent-Tiering | Data with ==unknown or changing access patterns==. Automatically moves objects between tiers. | Depends on tier | 99.9% | Immediate |
| S3 Standard-IA (Infrequent Access) | ==Infrequently== accessed data that requires ==rapid access==. Great for backups and disaster recovery. | Lower than S3 Standard | 99.9% | Immediate |
| S3 One Zone-IA | ==Infrequently== accessed data that does not require multi-AZ resilience. ==One AZ only== | Lower than Standard-IA | ==99.5%== | ==Immediate== |
| S3 Glacier Instant Retrieval | Archive data that needs millisecond access. | Lower than Standard-IA | 99.9% | Milliseconds |
| S3 Glacier Flexible Retrieval | Long-term archival with less frequent access but cost-sensitive. Suitable for compliance archives. | Very Low | 99.99% | Minutes to hours |
| S3 Glacier Deep Archive | Lowest-cost archival for rarely accessed data. | Lowest | 99.99% | 12-48 hours |
Durability and Availability:
Amazon S3 offers ==99.999999999% (11 nines) durability== across all storage classes; availability varies by class, up to ==99.99%== for S3 Standard.

By default, S3 replicates your data across at least three Availability Zones within the same Region, providing built-in redundancy for most S3 storage classes with no additional configuration. The exception is S3 One Zone-IA, which stores data in a single Availability Zone.
Bucket Policies and Access Control:
==When an S3 bucket is created, it is private by default==: all objects in the bucket are private, and only the bucket owner and principals explicitly granted access through IAM policies, bucket policies, or ACLs can access them.
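To verify this from the CLI, you can inspect a bucket's public access block and policy status. This is a sketch; `my-bucket-name` is a placeholder, and the commands require credentials with the corresponding `s3:Get*` permissions:

```shell
# Shows the Block Public Access settings for the bucket (placeholder name).
aws s3api get-public-access-block --bucket my-bucket-name

# Reports whether the bucket is considered public via its bucket policy.
aws s3api get-bucket-policy-status --bucket my-bucket-name
```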
Versioning and Lifecycle Management:
Amazon S3 Versioning is a valuable feature in many scenarios, particularly when data protection, audit trails, disaster recovery, or compliance with regulatory requirements are needed. It is especially useful for use cases involving frequently changing data, data that needs to be retained over time, or historical data that is important for recovery and analysis.
It is a best practice to use S3 Versioning alongside Lifecycle Management in use cases such as data backup and recovery, compliance, archiving, and storage management. This combination keeps storage costs optimal, protects data, and simplifies operations by automating versioning and data transitions.

Lifecycle rules are defined for objects in a bucket and can be filtered to apply only to a subset of objects (for example, by prefix or tag). You can define rules to transition objects to different storage classes or to delete them after a specified period, for example, move objects to Glacier after 90 days and permanently delete them after 365 days.
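As a sketch of that example (the bucket name `my-bucket-name` and the rule ID are placeholders), a lifecycle configuration that moves objects to Glacier after 90 days and deletes them after 365 days could look like this:

```shell
# Write the lifecycle configuration to a local file.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-then-expire",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
EOF

# Attach the lifecycle configuration to the bucket (placeholder name).
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket-name \
  --lifecycle-configuration file://lifecycle.json
```

An empty `Prefix` filter applies the rule to every object in the bucket; narrow it (e.g., `"Prefix": "logs/"`) to target a subset.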
Data Protection and Encryption:
When a bucket is created, all objects stored in the bucket will be encrypted with SSE-S3 by default. SSE-S3 is a free server-side encryption solution for data encryption at-rest.
Granular permissions with SSE-KMS: By using SSE-KMS (AWS Key Management Service), you can control who has access to the encryption keys, enabling fine-grained control over who can decrypt your data. With SSE-KMS, you can create, rotate, and revoke encryption keys, giving you full control over the life cycle of the keys used to encrypt your data.
==Try it:== check whether a bucket has server-side encryption enabled.

(Note: values such as region, account-id, and bucket names in the examples are placeholders; replace them with your own values when trying these commands with the AWS CLI.)
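A minimal way to check, assuming a bucket named `my-bucket-name` (a placeholder):

```shell
# Returns the bucket's default server-side encryption configuration.
# For a newly created bucket this typically reports SSE-S3,
# i.e. "SSEAlgorithm": "AES256".
aws s3api get-bucket-encryption --bucket my-bucket-name
```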
Note: to encrypt data in transit, make sure to use HTTPS for secure data transfers.
Event Notifications:
S3 can trigger notifications to AWS services like Lambda, SNS, or SQS when specific events occur (e.g., object creation or deletion).
It is used for workflows such as image processing, logging, or custom alerts.
==Try it:== create an S3 bucket and configure an SNS topic to send email notifications whenever a file is uploaded to the bucket.
Create an S3 bucket for uploading the file.

```
aws s3api create-bucket --bucket my-bucket-name
```

Create an SNS topic.

```
aws sns create-topic --name MySNSTopic
```

The output will contain the Topic ARN (e.g., arn:aws:sns:region:account-id:MySNSTopic). Note down the Topic ARN for later steps.
Subscribe to the SNS topic to receive emails from it.

```
aws sns subscribe --topic-arn arn:aws:sns:region:account-id:MySNSTopic --protocol email --notification-endpoint [email protected]
```

Confirm the subscription by clicking the confirmation link in your email, or copy the token from the confirmation link and confirm the subscription via the AWS CLI:

```
aws sns confirm-subscription --topic-arn arn:aws:sns:region:account-id:MySNSTopic --token [a very long token you got in your email]
```

Once the subscription is confirmed, test that you receive an email from the SNS topic by publishing a message to it:
```
aws sns publish --topic-arn arn:aws:sns:region:account-id:MySNSTopic --message "Test notification"
```

Grant S3 permission to publish to the SNS topic.
Create a file named sns-policy.json to define the policy that allows S3 to publish messages:

```
{
  "Version": "2008-10-17",
  "Id": "__default_policy_ID",
  "Statement": [
    {
      "Sid": "__default_statement_ID",
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "SNS:Publish",
      "Resource": "arn:aws:sns:region:account-id:MySNSTopic",
      "Condition": {
        "StringEquals": { "AWS:SourceAccount": "account-id" },
        "ArnLike": { "AWS:SourceArn": "arn:aws:s3:::test-s3-for-upload-files-202501061146" }
      }
    }
  ]
}
```

Attach the policy to the SNS topic:

```
aws sns set-topic-attributes --topic-arn arn:aws:sns:region:account-id:MySNSTopic --attribute-name Policy --attribute-value file://sns-policy.json
```

==Good to know to save time with policy definition:== I hit no errors when defining the policy, yet the link between the S3 bucket and the SNS topic could not be established, and it took a long time to track down these pitfalls:
==use the uppercase `AWS:` condition-key prefix, not lowercase `aws:`==

==use `SourceAccount` instead of `SourceOwner`==
Configure S3 to Send Notifications to SNS
Create a bucket notification configuration file, notification.json, with the rule below, which tells the S3 bucket to fire an event when an object is created and send a notification to the SNS topic:

```
{
  "TopicConfigurations": [
    {
      "TopicArn": "arn:aws:sns:region:account-id:MySNSTopic",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```

Attach the configuration to the S3 bucket:

```
aws s3api put-bucket-notification-configuration --bucket my-bucket-name --notification-configuration file://notification.json
```

The result can also be checked in the S3 bucket's event notifications settings.
Test the configuration: upload a file and check your email for the notification.

```
touch my-local-file.txt
aws s3 cp my-local-file.txt s3://my-bucket-name
```
Data Transfer Acceleration:
==S3 Transfer Acceleration== speeds up uploads by routing traffic through AWS edge locations using the CloudFront network. It is useful for ==globally distributed users with high-latency== connections. To use Transfer Acceleration, your S3 bucket must be created in a Region that supports this feature. Two further requirements:

Uses a distinct Transfer Acceleration endpoint (of the form bucketname.s3-accelerate.amazonaws.com).

Bucket name must follow DNS-compliant naming conventions and must not contain periods.
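A sketch of enabling and using Transfer Acceleration (bucket and file names are placeholders):

```shell
# Enable Transfer Acceleration on the bucket.
aws s3api put-bucket-accelerate-configuration \
  --bucket my-bucket-name \
  --accelerate-configuration Status=Enabled

# Route subsequent transfers through the accelerate endpoint.
aws s3 cp my-large-file.bin s3://my-bucket-name/ \
  --endpoint-url https://s3-accelerate.amazonaws.com
```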
Static Website Hosting:
S3 can host static websites by enabling Static Website Hosting on a bucket. You can configure an index.html and error.html file, and combine it with Amazon Route 53 for custom domains. Common use cases are marketing websites, landing pages, and documentation portals. Simple, affordable, and low-maintenance.

==Try it:== create a static website from an S3 bucket.
Create an S3 bucket.

```
aws s3api create-bucket --bucket my-bucket-name
```

Enable static website hosting.

```
aws s3 website s3://my-bucket-name --index-document index.html --error-document error.html
```

Set a bucket policy for public access.
\
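The policy file itself is not shown above; a minimal public-read policy could look like the sketch below (the file name bucket-policy.json and the bucket name are placeholders). Note that because buckets block public access by default, a public bucket policy only takes effect once Block Public Access is relaxed:

```shell
# Minimal public-read policy: allow anyone to GET objects in the bucket.
cat > bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}
EOF

# Relax Block Public Access so the public policy can apply.
aws s3api put-public-access-block --bucket my-bucket-name \
  --public-access-block-configuration BlockPublicPolicy=false,RestrictPublicBuckets=false
```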
Apply the bucket policy to the S3 bucket.

```
aws s3api put-bucket-policy --bucket my-bucket-name --policy file://bucket-policy.json
```

Upload the content of the static website (for example, the files index.html and error.html).

Verify the static website hosting configuration; the command returns the bucket's website configuration.

```
aws s3api get-bucket-website --bucket my-bucket-name
```

Browse the website at the bucket's website endpoint (of the form http://my-bucket-name.s3-website-region.amazonaws.com).
S3 Object Lock and Glacier Vault Lock:
S3 Object Lock: enables write-once-read-many (WORM) protection so that objects cannot be deleted or modified for a fixed retention period. Glacier Vault Lock: enforces compliance controls on Glacier vaults.
| Scenario | Recommended Service | Why? |
|----|----|----|
| Short-term retention with compliance needs | S3 Object Lock | Provides object-level retention and flexibility for managing versions. |
| Protecting objects from accidental deletion | S3 Object Lock | Can easily be applied to specific objects for day-to-day needs. |
| Regulatory archival storage (decades-long) | ==Glacier Vault Lock== | Designed for ==long-term,== high-security archival compliance. |
| Immutable backups for ransomware recovery | S3 Object Lock | Combines immutability with faster access than Glacier Vault Lock. |
| Archiving corporate records or tax data | ==Glacier Vault Lock== | Cost-efficient and secure for storing ==large volumes== of archival data. |
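As a sketch of enabling Object Lock (bucket name and retention settings are illustrative; Object Lock can only be enabled when the bucket is created):

```shell
# Create a bucket with Object Lock enabled (this also enables versioning).
aws s3api create-bucket --bucket my-locked-bucket \
  --object-lock-enabled-for-bucket

# Set a default retention rule: COMPLIANCE mode, 30-day retention,
# so new objects cannot be deleted or overwritten during that period.
aws s3api put-object-lock-configuration --bucket my-locked-bucket \
  --object-lock-configuration '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":30}}}'
```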
Bonus Knowledge:
Performance Optimization: S3 scales automatically and supports very high request rates, so there is no need to spread data across multiple buckets for performance. Replication: use Cross-Region Replication (CRR) or Same-Region Replication (SRR) to replicate objects automatically between buckets.
Multi-Part Upload: for large files (over 100 MB), break uploads into parts for faster and more resilient uploads.
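The high-level `aws s3` commands use multipart upload automatically above a configurable size threshold; the values below are illustrative, not recommendations:

```shell
# Start multipart uploads for objects larger than 100 MB,
# splitting them into 16 MB parts (illustrative values).
aws configure set default.s3.multipart_threshold 100MB
aws configure set default.s3.multipart_chunksize 16MB

# Files above the threshold are now uploaded in parts, and a failed
# part can be retried without re-sending the whole file.
aws s3 cp my-large-file.bin s3://my-bucket-name/
```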
Only when you try will you know:
Try to create a bucket; you will discover that not all names are valid.

```
aws s3api create-bucket --bucket my-bucket-name
```

Rules for naming S3 buckets:
Bucket names must be DNS-compliant because S3 uses the name as part of the bucket's URL; the name must be compatible with domain names (RFC 1123).
For example, my-bucket-name is valid, but my..bucket is invalid due to consecutive periods.
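The main rules can be sketched as a small shell function. This is a rough local check covering only the common constraints (3-63 characters, lowercase letters, digits, hyphens and periods, alphanumeric at both ends, no consecutive periods), not the full set S3 enforces:

```shell
# Rough local validity check for S3 bucket names (not exhaustive).
is_valid_bucket_name() {
  local name="$1"
  # Length must be between 3 and 63 characters.
  (( ${#name} >= 3 && ${#name} <= 63 )) || return 1
  # Lowercase letters, digits, hyphens, periods; alphanumeric at both ends.
  [[ "$name" =~ ^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$ ]] || return 1
  # No consecutive periods.
  [[ "$name" != *..* ]] || return 1
  return 0
}

is_valid_bucket_name "my-bucket-name" && echo "my-bucket-name: valid"
is_valid_bucket_name "my..bucket" || echo "my..bucket: invalid"
```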
Private by default? You can check by trying to browse to the bucket's URL from a browser: the request is denied.
After a bucket is created, you can also verify that "Block all public access" is ON in the bucket's permissions settings.
In conclusion, mastering Amazon S3's essential features—from storage classes to encryption and event notifications—lays a strong foundation for AWS Solutions Architects at the associate level. Whether you're managing data, optimizing costs, or securing your storage, understanding S3's capabilities will help you design scalable, efficient, and compliant cloud solutions. As you continue to explore AWS, these fundamentals will guide your approach to building resilient, cost-effective cloud architectures.