AWS Certified Solutions Architect Associate Exam Dumps with Complete Explanations - Part 3

Muhammad Hassan Saeed
15 min read · Sep 5, 2023


Question#21

An ecommerce company wants to launch a one-deal-a-day website on AWS. Each day will feature exactly one product on sale for a period of 24 hours. The company wants to be able to handle millions of requests each hour with millisecond latency during peak hours.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Use Amazon S3 to host the full website in different S3 buckets. Add Amazon CloudFront distributions. Set the S3 buckets as origins for the distributions. Store the order data in Amazon S3.
  • B. Deploy the full website on Amazon EC2 instances that run in Auto Scaling groups across multiple Availability Zones. Add an Application Load Balancer (ALB) to distribute the website traffic. Add another ALB for the backend APIs. Store the data in Amazon RDS for MySQL.
  • C. Migrate the full application to run in containers. Host the containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use the Kubernetes Cluster Autoscaler to increase and decrease the number of pods to process bursts in traffic. Store the data in Amazon RDS for MySQL.
  • D. Use an Amazon S3 bucket to host the website’s static content. Deploy an Amazon CloudFront distribution. Set the S3 bucket as the origin. Use Amazon API Gateway and AWS Lambda functions for the backend APIs. Store the data in Amazon DynamoDB.

Reference/Arguments:

Origin latency is the total time spent, in milliseconds, from when CloudFront receives a request to when it provides a response to the network (not the viewer), for requests that are served from the origin rather than the CloudFront cache. The Origin Latency metric lets you monitor the performance of your origin server.

Leveraging Amazon API Gateway and AWS Lambda functions for the backend APIs eliminates the need to manage EC2 instances or containers. This serverless approach scales automatically to handle high traffic with minimal operational overhead.

Build a Serverless Web Application with AWS Lambda, Amazon API Gateway, Amazon DynamoDB
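To make option D concrete, here is a minimal sketch of the serverless backend: an API Gateway proxy integration invoking a Lambda function that stores orders in DynamoDB. The table name and event shape are illustrative assumptions; the item-shaping logic is kept in a pure function so it can be exercised without AWS credentials.

```python
import json

# boto3 is available in the Lambda runtime; the table name is a placeholder.
# import boto3
# table = boto3.resource("dynamodb").Table("daily-deal-orders")

def build_order_item(order: dict) -> dict:
    """Shape an incoming order into a DynamoDB item (pure, testable)."""
    return {
        "orderId": order["orderId"],
        "productId": order["productId"],
        "quantity": int(order.get("quantity", 1)),
    }

def handler(event, context):
    """API Gateway proxy handler: parse the body and store the order."""
    order = json.loads(event["body"])
    item = build_order_item(order)
    # table.put_item(Item=item)  # uncomment when deployed with a real table
    return {"statusCode": 201, "body": json.dumps(item)}
```

Because API Gateway, Lambda, and DynamoDB all scale automatically, no capacity has to be provisioned for the once-a-day traffic spike.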

Arguments about Others:

Option A (using S3 and CloudFront for static content hosting) is a good start but lacks the serverless backend required for the APIs, which would require operational overhead to manage servers.

Option B (EC2 instances and RDS) introduces operational overhead in managing instances and databases and may not easily scale to handle the required traffic.

Option C (EKS and RDS) also involves more operational management compared to the serverless approach in option D.

Question#22

A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the costs of storing and retrieving the media files.
Which storage option meets these requirements?

  • A. S3 Standard
  • B. S3 Intelligent-Tiering
  • C. S3 Standard-Infrequent Access (S3 Standard-IA)
  • D. S3 One Zone-Infrequent Access (S3 One Zone-IA)

Reference/Arguments:

The Amazon S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective access tier when access patterns change.
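As a sketch, placing an object into Intelligent-Tiering only requires setting the storage class on the upload request; the bucket and key below are placeholders, and the kwargs builder is a pure function so it can be checked without calling S3.

```python
def put_object_request(bucket: str, key: str, body: bytes) -> dict:
    """Build kwargs for s3.put_object. With INTELLIGENT_TIERING, S3 moves the
    object between access tiers automatically as its access pattern changes."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "StorageClass": "INTELLIGENT_TIERING",
    }

# With credentials configured you would call:
# import boto3
# boto3.client("s3").put_object(**put_object_request("media-bucket", "video.mp4", b"..."))
```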


Arguments about Others:

Option A (S3 Standard) does not optimize costs for rarely accessed files, as it has a higher cost compared to the Infrequent Access tiers.

Option C (S3 Standard-Infrequent Access) would be a good choice if all the files were rarely accessed, but it does not optimize costs for frequently accessed files.

Option D (S3 One Zone-Infrequent Access) is not suitable for resiliency to the loss of an Availability Zone since it stores data in a single Availability Zone, which makes it less durable compared to the other options.

Question#23

A company is storing backup files by using Amazon S3 Standard storage. The files are accessed frequently for 1 month. However, the files are not accessed after 1 month. The company must keep the files indefinitely.
Which storage solution will meet these requirements MOST cost-effectively?

  • A. Configure S3 Intelligent-Tiering to automatically migrate objects.
  • B. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month.
  • C. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 1 month.
  • D. Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 month.

Reference/Arguments:

S3 Glacier Deep Archive is Amazon S3’s lowest-cost storage class and supports long-term retention and digital preservation for data. It is designed for customers that retain data sets for 7-10 years or longer to meet regulatory compliance requirements.
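Option B maps to a single lifecycle rule. A minimal sketch of the bucket lifecycle configuration, as it would be passed to `put-bucket-lifecycle-configuration` (the rule ID is an example value):

```json
{
  "Rules": [
    {
      "ID": "archive-backups-after-one-month",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [
        {"Days": 30, "StorageClass": "DEEP_ARCHIVE"}
      ]
    }
  ]
}
```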

Arguments about Others:

Option A (S3 Intelligent-Tiering) would be more suitable for objects with changing access patterns, but it may not be the most cost-effective solution for data that is not accessed after 1 month, as Intelligent-Tiering still retains data in more expensive storage classes.

Option C (S3 Standard-Infrequent Access) and option D (S3 One Zone-Infrequent Access) are designed for infrequently accessed data, but they are not as cost-effective as S3 Glacier Deep Archive for long-term archival.

Question#24

A company observes an increase in Amazon EC2 costs in its most recent bill. The billing team notices unwanted vertical scaling of instance types for a couple of EC2 instances. A solutions architect needs to create a graph comparing the last 2 months of EC2 costs and perform an in-depth analysis to identify the root cause of the vertical scaling.
How should the solutions architect generate the information with the LEAST operational overhead?

  • A. Use AWS Budgets to create a budget report and compare EC2 costs based on instance types.
  • B. Use Cost Explorer’s granular filtering feature to perform an in-depth analysis of EC2 costs based on instance types.
  • C. Use graphs from the AWS Billing and Cost Management dashboard to compare EC2 costs based on instance types for the last 2 months.
  • D. Use AWS Cost and Usage Reports to create a report and send it to an Amazon S3 bucket. Use Amazon QuickSight with Amazon S3 as a source to generate an interactive graph based on instance types

Reference/Arguments:

After you enable Cost Explorer, AWS prepares the data about your costs for the current month and the last 12 months, and then calculates the forecast for the next 12 months.
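The Cost Explorer query behind option B can be sketched as follows: monthly EC2 cost grouped by instance type over the last two months. Building the request as a plain dict keeps the sketch testable without AWS credentials; the date range is an example.

```python
def build_ce_request(start: str, end: str) -> dict:
    """Request parameters for Cost Explorer's get_cost_and_usage:
    EC2 compute cost, monthly granularity, grouped by instance type."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "Filter": {"Dimensions": {
            "Key": "SERVICE",
            "Values": ["Amazon Elastic Compute Cloud - Compute"],
        }},
        "GroupBy": [{"Type": "DIMENSION", "Key": "INSTANCE_TYPE"}],
    }

# With credentials configured you would run:
# import boto3
# ce = boto3.client("ce")
# result = ce.get_cost_and_usage(**build_ce_request("2023-07-01", "2023-09-01"))
```

The same grouping and date range can be selected interactively in the Cost Explorer console, which is what makes option B the least-overhead choice.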

Arguments about Others:

Option A (AWS Budgets) can be used for cost tracking and alerts but may not provide the same level of granular analysis for EC2 costs based on instance types as Cost Explorer.

Option C (AWS Billing and Cost Management dashboard) provides basic cost information but may not offer the detailed analysis needed for comparing EC2 costs based on instance types.

Option D (AWS Cost and Usage Reports with QuickSight) involves more setup and management, which can introduce operational overhead that may not be necessary for this specific analysis.

Question#25

A company is designing an application. The application uses an AWS Lambda function to receive information through Amazon API Gateway and to store the information in an Amazon Aurora PostgreSQL database.
During the proof-of-concept stage, the company has to increase the Lambda quotas significantly to handle the high volumes of data that the company needs to load into the database. A solutions architect must recommend a new design to improve scalability and minimize the configuration effort.
Which solution will meet these requirements?

  • A. Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect the database by using native Java Database Connectivity (JDBC) drivers.
  • B. Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point the existing DynamoDB API calls at the DAX cluster.
  • C. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS).
  • D. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue

Reference/Arguments:

By dividing the functionality into two Lambda functions, one for receiving the information and the other for loading it into the database, you can independently scale and optimize each function based on its specific requirements. The SQS queue between them buffers bursts of incoming data so the loader can drain the queue at a rate the database can handle, and unlike SNS, messages persist until they are successfully processed.
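The decoupling in option D can be sketched as below. The queue URL is a placeholder, and the message-shaping logic is a pure function so it can be tested without AWS credentials; the loader would be wired to the queue via an SQS event source mapping.

```python
import json

# Hypothetical queue URL; the receiving Lambda sends, the loader consumes.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-queue"

def to_sqs_message(record: dict) -> dict:
    """Shape one record into kwargs for sqs.send_message (pure, testable)."""
    return {"QueueUrl": QUEUE_URL, "MessageBody": json.dumps(record)}

def loader_handler(event, context):
    """Second Lambda: invoked by SQS in batches, writes rows to Aurora."""
    rows = [json.loads(r["body"]) for r in event["Records"]]
    # insert `rows` into Aurora PostgreSQL here (e.g. via psycopg2 / RDS Proxy)
    return {"batchItemFailures": []}  # SQS partial-batch response shape
```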

Arguments about Others:

Option A suggests refactoring the Lambda function code to run on EC2 instances with Apache Tomcat. While this may provide more control, it increases operational complexity and does not necessarily improve scalability.

Option B suggests changing the database platform to DynamoDB with DAX, which might work for certain use cases but involves significant changes and may not be the simplest solution if you’re already using Aurora PostgreSQL.

Option C suggests using two Lambda functions with SNS for integration, but using SQS is generally a better choice for decoupling and handling high volumes of data.

Question#26

A company needs to review its AWS Cloud deployment to ensure that its Amazon S3 buckets do not have unauthorized configuration changes.
What should a solutions architect do to accomplish this goal?

  • A. Turn on AWS Config with the appropriate rules.
  • B. Turn on AWS Trusted Advisor with the appropriate checks.
  • C. Turn on Amazon Inspector with the appropriate assessment template.
  • D. Turn on Amazon S3 server access logging. Configure Amazon EventBridge (Amazon CloudWatch Events).

Reference/Arguments:

AWS Config continually assesses, audits, and evaluates the configurations and relationships of your resources on AWS, on premises, and on other clouds.

Rule Example for S3 Configuration Monitoring
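A minimal sketch of such a rule as CloudFormation, using an AWS managed rule that flags S3 buckets allowing public read access (the resource and rule names are example values; AWS Config ships many other S3 managed rules, e.g. for versioning or encryption):

```yaml
Resources:
  S3PublicReadProhibited:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-public-read-prohibited
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED
      Scope:
        ComplianceResourceTypes:
          - AWS::S3::Bucket
```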

Arguments about Others:

AWS Trusted Advisor (Option B) is a service that provides best practice recommendations for your AWS resources, but it does not monitor or record changes to the configuration of your S3 buckets.

Amazon Inspector (Option C) is a service that helps you assess the security and compliance of your applications. While it can be used to assess the security of your S3 buckets, it does not monitor or record changes to the configuration of your S3 buckets.

Amazon S3 server access logging (Option D) enables you to log requests made to your S3 bucket. While it can help you identify changes to your S3 bucket, it does not monitor or record changes to the configuration of your S3 bucket.

Question#27

A company is launching a new application and will display application metrics on an Amazon CloudWatch dashboard. The company’s product manager needs to access this dashboard periodically. The product manager does not have an AWS account. A solutions architect must provide access to the product manager by following the principle of least privilege.
Which solution will meet these requirements?

  • A. Share the dashboard from the CloudWatch console. Enter the product manager’s email address, and complete the sharing steps. Provide a shareable link for the dashboard to the product manager.
  • B. Create an IAM user specifically for the product manager. Attach the CloudWatchReadOnlyAccess AWS managed policy to the user. Share the new login credentials with the product manager. Share the browser URL of the correct dashboard with the product manager.
  • C. Create an IAM user for the company’s employees. Attach the ViewOnlyAccess AWS managed policy to the IAM user. Share the new login credentials with the product manager. Ask the product manager to navigate to the CloudWatch console and locate the dashboard by name in the Dashboards section.
  • D. Deploy a bastion server in a public subnet. When the product manager requires access to the dashboard, start the server and share the RDP credentials. On the bastion server, ensure that the browser is configured to open the dashboard URL with cached AWS credentials that have appropriate permissions to view the dashboard.

Reference/Arguments:

Keywords: “access this dashboard periodically”

The product manager needs to access the dashboard periodically, so the best fit is to create an IAM user with the appropriate read-only permissions, allowing the manager to view the dashboard whenever needed.
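The answer uses the CloudWatchReadOnlyAccess managed policy; for illustration, an even tighter custom policy could be scoped to just the actions needed to view dashboards and their metrics (a sketch, not the managed policy itself):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:GetDashboard",
        "cloudwatch:ListDashboards",
        "cloudwatch:GetMetricData",
        "cloudwatch:ListMetrics"
      ],
      "Resource": "*"
    }
  ]
}
```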

Arguments about Others:

Option A suggests sharing the dashboard with temporary credentials, while the product manager needs to view it periodically. Recipients automatically receive a separate email with a user name and a temporary password to use to connect to the dashboard; when that password expires, resetting it adds extra overhead.

Option C suggests creating an IAM user for employees and having the product manager navigate through the CloudWatch console, which is less efficient and doesn’t follow the principle of least privilege.

Option D suggests deploying a bastion server, which is a more complex and costly solution than is needed for simply viewing a CloudWatch dashboard.

Question#28

A company is migrating applications to AWS. The applications are deployed in different accounts. The company manages the accounts centrally by using AWS Organizations. The company’s security team needs a single sign-on (SSO) solution across all the company’s accounts. The company must continue managing the users and groups in its on-premises self-managed Microsoft Active Directory.
Which solution will meet these requirements?

  • A. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a one-way forest trust or a one-way domain trust to connect the company’s self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
  • B. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a two-way forest trust to connect the company’s self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
  • C. Use AWS Directory Service. Create a two-way trust relationship with the company’s self-managed Microsoft Active Directory.
  • D. Deploy an identity provider (IdP) on premises. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console.

Reference/Arguments:

A two-way trust is required for AWS Enterprise Apps such as Amazon Chime, Amazon Connect, Amazon QuickSight, AWS IAM Identity Center (successor to AWS Single Sign-On), Amazon WorkDocs, Amazon WorkMail, Amazon WorkSpaces, and the AWS Management Console. AWS Managed Microsoft AD must be able to query the users and groups in your self-managed AD.

Arguments about Others:

Option A: we need SSO, but AWS IAM Identity Center (AWS SSO) is not in the list of applications that work with a one-way trust. Only services such as Amazon EC2, Amazon RDS, and Amazon FSx work with either a one-way or a two-way trust.

Option C (two-way trust with AWS Directory Service) is similar to option B and introduces additional complexity.

Option D (deploying an identity provider on-premises) would require additional setup and management, whereas AWS SSO is designed to simplify the process of integrating with AWS accounts.

Question#28

A company provides a Voice over Internet Protocol (VoIP) service that uses UDP connections. The service consists of Amazon EC2 instances that run in an Auto Scaling group. The company has deployments across multiple AWS Regions.
The company needs to route users to the Region with the lowest latency. The company also needs automated failover between Regions.
Which solution will meet these requirements?

  • A. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Use the NLB as an AWS Global Accelerator endpoint in each Region.
  • B. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Use the ALB as an AWS Global Accelerator endpoint in each Region.
  • C. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Create an Amazon Route 53 latency record that points to aliases for each NLB. Create an Amazon CloudFront distribution that uses the latency record as an origin.
  • D. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Create an Amazon Route 53 weighted record that points to aliases for each ALB. Deploy an Amazon CloudFront distribution that uses the weighted record as an origin.

Reference/Arguments:

The Network Load Balancer is designed to handle tens of millions of requests per second while maintaining high throughput at ultra-low latency, with no effort on your part. You can now use the same load balancer for both TCP and UDP traffic, which simplifies your architecture, reduces your costs, and increases your scalability.

Global Accelerator always routes user traffic to the optimal endpoint based on performance, reacting instantly to changes in application health, your user’s location, and policies that you configure. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP.

Arguments about Others:

Option B (ALB) is designed for HTTP/HTTPS applications and may not be suitable for UDP-based VoIP services.

Option C involves using Amazon Route 53 with NLBs and CloudFront, which adds unnecessary complexity for this use case.

Option D also involves using Amazon Route 53 with ALBs and CloudFront, which is not the optimal approach for UDP-based VoIP services.

Question#29

A development team runs monthly resource-intensive tests on its general purpose Amazon RDS for MySQL DB instance with Performance Insights enabled. The testing lasts for 48 hours once a month and is the only process that uses the database. The team wants to reduce the cost of running the tests without reducing the compute and memory attributes of the DB instance.
Which solution meets these requirements MOST cost-effectively?

  • A. Stop the DB instance when tests are completed. Restart the DB instance when required.
  • B. Use an Auto Scaling policy with the DB instance to automatically scale when tests are completed.
  • C. Create a snapshot when tests are completed. Terminate the DB instance and restore the snapshot when required.
  • D. Modify the DB instance to a low-capacity instance when tests are completed. Modify the DB instance again when required.

Reference/Arguments:

By terminating the DB instance when tests are not running and creating a snapshot for later use, you avoid incurring the cost of running the DB instance continuously; between test runs you pay only for snapshot storage, and the instance can be restored from the snapshot with the same compute and memory attributes before the next monthly test.

Arguments about Others:

Option A (stopping and starting the DB instance) reduces compute costs, but a stopped RDS instance still incurs storage charges and is automatically restarted after seven days, so it would be running unnecessarily for most of the month.

Option B (using Auto Scaling) is not ideal for a situation where you want to run the tests only once a month for 48 hours. Auto Scaling is more suitable for dynamic workloads with varying resource requirements.

Option D (modifying the DB instance to a low-capacity instance) is not practical if you need the same compute and memory attributes for your tests.

Question#30

A company that hosts its web application on AWS wants to ensure that all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags. The company wants to minimize the effort of configuring and operating this check.
What should a solutions architect do to accomplish this?

  • A. Use AWS Config rules to define and detect resources that are not properly tagged.
  • B. Use Cost Explorer to display resources that are not properly tagged. Tag those resources manually.
  • C. Write API calls to check all resources for proper tag allocation. Periodically run the code on an EC2 instance.
  • D. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWatch to periodically run the code.
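Reference/Arguments:

AWS Config provides a managed rule, required-tags, that continuously evaluates whether resources carry the specified tag keys, with no custom code to write or operate. A minimal sketch as CloudFormation (the rule name and tag key are example values):

```yaml
Resources:
  RequiredTagsRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: required-tags-check
      InputParameters:
        tag1Key: CostCenter
      Source:
        Owner: AWS
        SourceIdentifier: REQUIRED_TAGS
      Scope:
        ComplianceResourceTypes:
          - AWS::EC2::Instance
          - AWS::RDS::DBInstance
          - AWS::Redshift::Cluster
```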

Arguments about Others:

Option B (using Cost Explorer) is not designed for enforcing or checking tagging compliance; it’s primarily used for cost analysis.

Options C and D involve writing custom code and scripts, which can be complex to manage and maintain, and they don’t provide the automated and integrated approach that AWS Config rules offer.


Muhammad Hassan Saeed

Greetings! I'm a passionate AWS DevOps Engineer with hands-on experience with most DevOps tools.