AWS Certified Solutions Architect Associate Exam Dumps with Complete Explanations - Part 4

Muhammad Hassan Saeed
14 min read · Sep 27, 2023


Question#31

A company that hosts its web application on AWS wants to ensure that all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags. The company wants to minimize the effort of configuring and operating this check.
What should a solutions architect do to accomplish this?

  • A. Use AWS Config rules to define and detect resources that are not properly tagged.
  • B. Use Cost Explorer to display resources that are not properly tagged. Tag those resources manually.
  • C. Write API calls to check all resources for proper tag allocation. Periodically run the code on an EC2 instance.
  • D. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWatch to periodically run the code.

Reference/Argument

AWS Config provides a set of pre-built and customizable rules that can be used to check the configuration and compliance of AWS resources. By creating a custom rule or using the built-in tagging rule, you can define the required tags for EC2 instances, RDS DB instances, and Redshift clusters. AWS Config continuously monitors the resources and generates configuration change events and evaluation results, so it automatically detects any resource that does not comply with the defined tagging requirements. This approach eliminates the need for manual checks or periodic code execution, reducing operational overhead. Additionally, AWS Config can automatically remediate non-compliant resources by triggering AWS Lambda functions or sending notifications, further streamlining configuration management.
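As a hedged illustration, the AWS managed rule `required-tags` can be deployed with a few lines of boto3. The rule name, tag keys, and scoped resource types below are example values, not part of the original question.

```python
import json
import boto3

config = boto3.client("config")

# Sketch: deploy the AWS managed "required-tags" rule scoped to EC2, RDS,
# and Redshift resources. The tag keys (Environment, Owner) are examples.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "required-tags-check",
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "InputParameters": json.dumps({"tag1Key": "Environment", "tag2Key": "Owner"}),
        "Scope": {
            "ComplianceResourceTypes": [
                "AWS::EC2::Instance",
                "AWS::RDS::DBInstance",
                "AWS::Redshift::Cluster",
            ]
        },
    }
)
```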

Arguments about others:

Option B (using Cost Explorer) primarily focuses on cost analysis and does not provide direct enforcement of proper tagging.

Options C and D (writing API calls and running them on an EC2 instance or through a scheduled Lambda function) require more manual effort and maintenance than using AWS Config rules.

Question#32

A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and images.
Which method is the MOST cost-effective for hosting the website?

  • A. Containerize the website and host it in AWS Fargate.
  • B. Create an Amazon S3 bucket and host the website there.
  • C. Deploy a web server on an Amazon EC2 instance to host the website.
  • D. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.

Reference/Argument

HTML, CSS, client-side JavaScript, and images are all static resources. By using Amazon S3 to host the website, you can take advantage of its durability, scalability, and low-cost pricing model. You only pay for the storage and data transfer associated with your website, without the need for managing and maintaining web servers or containers. This reduces the operational overhead and infrastructure costs.
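A minimal sketch of this setup with boto3 is shown below; the bucket name and file names are placeholders, and in practice you would also attach a bucket policy that allows public reads (or front the bucket with CloudFront).

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-team-website"  # placeholder bucket name

# Enable static website hosting on the bucket.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload a static asset with the correct content type.
s3.upload_file(
    "index.html", bucket, "index.html", ExtraArgs={"ContentType": "text/html"}
)
```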

Arguments about others:

Containerizing the website and hosting it in AWS Fargate (option A) would involve additional complexity and costs associated with managing the container environment and scaling resources. Deploying a web server on an Amazon EC2 instance (option C) would require provisioning and managing the EC2 instance, which may not be cost-effective for a static website. Configuring an Application Load Balancer with an AWS Lambda target (option D) adds unnecessary complexity and may not be the most efficient solution for hosting a static website.

Question#33

A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.
What should a solutions architect recommend to meet these requirements?

  • A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.
  • B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.
  • C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.
  • D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.

Reference/Argument

Amazon Kinesis Data Streams can ingest millions of financial transactions in near real time and pairs with AWS Lambda for event-driven processing at any scale. The Lambda function removes the sensitive data from each transaction before the record is stored in Amazon DynamoDB for low-latency retrieval, while the other internal applications consume the transactions directly from the Kinesis data stream.
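A minimal sketch of the Lambda consumer described in option C; the sensitive field name (card_number) and the DynamoDB table name are assumptions for illustration.

```python
import base64
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("transactions")  # placeholder table name

def handler(event, context):
    # Lambda receives Kinesis records in batches; each payload is base64-encoded.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))

        # Remove sensitive fields before persisting (field name is an example).
        payload.pop("card_number", None)

        table.put_item(Item=payload)
```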

Arguments About others:

Option A is incorrect because DynamoDB does not provide a built-in rule mechanism that removes sensitive data from an item on write, so the sensitive fields would be stored before they could be sanitized.

Option B is similar to Option C, but Kinesis Data Firehose does not currently support Amazon DynamoDB as a delivery destination.

https://aws.amazon.com/kinesis/data-firehose/faqs/#:~:text=Kinesis%20Data%20Firehose%20currently%20supports,HTTP%20End%20Point%20as%20destinations.

Option D suggests batch processing, which may not meet the near-real-time requirements and isn’t as scalable as a streaming solution.

Question#34

A company hosts its multi-tier applications on AWS. For compliance, governance, auditing, and security, the company must track configuration changes on its AWS resources and record a history of API calls made to these resources.
What should a solutions architect do to meet these requirements?

  • A. Use AWS CloudTrail to track configuration changes and AWS Config to record API calls.
  • B. Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.
  • C. Use AWS Config to track configuration changes and Amazon CloudWatch to record API calls.
  • D. Use AWS CloudTrail to track configuration changes and Amazon CloudWatch to record API calls.

Reference/Argument

AWS Config is a fully managed service that allows the company to assess, audit, and evaluate the configurations of its AWS resources. It provides a detailed inventory of the resources in use and tracks changes to resource configurations. AWS Config can detect configuration changes and alert the company when changes occur. It also provides a historical view of changes, which is essential for compliance and governance purposes. AWS CloudTrail is a fully managed service that provides a detailed history of API calls made to the company’s AWS resources. It records all API activity in the AWS account, including who made the API call, when the call was made, and what resources were affected by the call. This information is critical for security and auditing purposes, as it allows the company to investigate any suspicious activity that might occur on its AWS resources.
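As an illustrative sketch, both histories can be queried with boto3; the resource type, instance ID, and lookup attribute below are placeholders.

```python
import boto3

config = boto3.client("config")
cloudtrail = boto3.client("cloudtrail")

# Configuration change history for a resource (AWS Config).
history = config.get_resource_config_history(
    resourceType="AWS::EC2::Instance",
    resourceId="i-0123456789abcdef0",  # placeholder instance ID
)

# API call history involving the same resource (AWS CloudTrail).
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "ResourceName", "AttributeValue": "i-0123456789abcdef0"}
    ]
)
```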

Arguments About others:

Option A (using CloudTrail to track configuration changes and Config to record API calls) is incorrect because CloudTrail is specifically designed to capture API call history, while Config is designed for tracking configuration changes.

Option C (using Config to track configuration changes and CloudWatch to record API calls) is not the recommended approach. While CloudWatch can be used for monitoring and logging, it does not provide the same level of detail and compliance tracking as CloudTrail for recording API calls.

Option D (using CloudTrail to track configuration changes and CloudWatch to record API calls) is not correct because CloudTrail records API calls rather than configuration changes (that is what AWS Config is for), and CloudWatch is not designed for recording API call history.

Question#35

A company is preparing to launch a public-facing web application in the AWS Cloud. The architecture consists of Amazon EC2 instances within a VPC behind an Elastic Load Balancer (ELB). A third-party service is used for the DNS. The company’s solutions architect must recommend a solution to detect and protect against large-scale DDoS attacks.
Which solution meets these requirements?

  • A. Enable Amazon GuardDuty on the account.
  • B. Enable Amazon Inspector on the EC2 instances.
  • C. Enable AWS Shield and assign Amazon Route 53 to it.
  • D. Enable AWS Shield Advanced and assign the ELB to it.

Reference/Argument

AWS Shield Advanced provides expanded DDoS detection and mitigation for protected resources such as Elastic Load Balancers, Amazon CloudFront distributions, Amazon Route 53 hosted zones, and Elastic IP addresses. You can also enable automatic application layer DDoS mitigation for Application Load Balancer (ALB) resources, in addition to CloudFront distributions, protected by Shield Advanced. Assigning the ELB to Shield Advanced therefore meets the requirement to detect and protect against large-scale DDoS attacks.
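After subscribing to Shield Advanced, the load balancer can be registered as a protected resource; a sketch with boto3, where the protection name and ARN are placeholders.

```python
import boto3

shield = boto3.client("shield")

# Placeholder ARN of the load balancer fronting the web application.
alb_arn = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "loadbalancer/app/web-alb/50dc6c495c0c9188"
)

# Register the load balancer as a Shield Advanced protected resource.
shield.create_protection(Name="web-alb-protection", ResourceArn=alb_arn)
```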

Arguments About others:

Option A is incorrect because Amazon GuardDuty is a threat detection service that focuses on identifying malicious activity and unauthorized behavior within AWS accounts. While it is useful for detecting various security threats, it does not specifically address large-scale DDoS attacks.

Option B is also incorrect because Amazon Inspector is a vulnerability assessment service that helps identify security issues and vulnerabilities within EC2. It does not directly protect against DDoS attacks.

Option C is not the optimal choice because AWS Shield Standard provides only basic, automatic DDoS protection and lacks the enhanced detection, mitigation, and response capabilities that AWS Shield Advanced offers for large-scale attacks. In addition, the company uses a third-party DNS service, so Amazon Route 53 is not part of this architecture and cannot be assigned to Shield.

Question#36

A company is building an application in the AWS Cloud. The application will store data in Amazon S3 buckets in two AWS Regions. The company must use an AWS Key Management Service (AWS KMS) customer managed key to encrypt all data that is stored in the S3 buckets. The data in both S3 buckets must be encrypted and decrypted with the same KMS key. The data and the key must be stored in each of the two Regions.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.
  • B. Create a customer managed multi-Region KMS key. Create an S3 bucket in each Region. Configure replication between the S3 buckets. Configure the application to use the KMS key with client-side encryption.
  • C. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.
  • D. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with AWS KMS keys (SSE-KMS). Configure replication between the S3 buckets

Reference/Argument

AWS KMS supports multi-Region keys, which are AWS KMS keys in different AWS Regions that can be used interchangeably — as though you had the same key in multiple Regions.
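A hedged sketch of creating a multi-Region primary key and replicating it into a second Region; the Regions and description are example values.

```python
import boto3

kms_primary = boto3.client("kms", region_name="us-east-1")

# Create a customer managed multi-Region primary key.
key = kms_primary.create_key(
    Description="Multi-Region key for S3 data",  # example description
    MultiRegion=True,
)
key_id = key["KeyMetadata"]["KeyId"]

# Replicate the key into the second Region so both Regions share the same key material.
kms_primary.replicate_key(KeyId=key_id, ReplicaRegion="eu-west-1")
```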

Arguments About others:

Option A is not recommended because it uses Amazon S3 managed encryption keys (SSE-S3), which would result in different encryption keys being used in each Region, not meeting the requirement of using the same KMS key.

Option C is similar to option A and also uses SSE-S3, which would result in different encryption keys in each Region.

Option D is also incorrect because it creates an independent customer managed KMS key in each Region, so the data in the two buckets would not be encrypted and decrypted with the same KMS key.

Question#37

A company recently launched a variety of new workloads on Amazon EC2 instances in its AWS account. The company needs to create a strategy to access and administer the instances remotely and securely. The company needs to implement a repeatable process that works with native AWS services and follows the AWS Well-Architected Framework.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Use the EC2 serial console to directly access the terminal interface of each instance for administration.
  • B. Attach the appropriate IAM role to each existing instance and new instance. Use AWS Systems Manager Session Manager to establish a remote SSH session.
  • C. Create an administrative SSH key pair. Load the public key into each EC2 instance. Deploy a bastion host in a public subnet to provide a tunnel for administration of each instance.
  • D. Establish an AWS Site-to-Site VPN connection. Instruct administrators to use their local on-premises machines to connect directly to the instances by using SSH keys across the VPN tunnel

Reference/Argument

AWS Systems Manager provides a centralized and secure way to manage EC2 instances. Session Manager allows you to establish secure, audited, and controlled remote sessions to EC2 instances without exposing SSH ports or managing SSH keys. It also integrates with AWS Identity and Access Management (IAM), making it easy to control who can access instances.
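As a rough sketch, the per-instance prerequisite is an instance profile that carries the AWS managed AmazonSSMManagedInstanceCore policy; the role name, profile name, and instance ID below are placeholders, and administrators then start sessions from the console or CLI (for example, aws ssm start-session --target <instance-id>).

```python
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

role_name = "ec2-ssm-role"  # placeholder role already created with an EC2 trust policy

# Grant the instances the permissions Session Manager needs.
iam.attach_role_policy(
    RoleName=role_name,
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# Associate the instance profile with an existing instance (placeholder IDs).
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "ec2-ssm-profile"},
    InstanceId="i-0123456789abcdef0",
)
```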

Arguments About others:

Option A introduces the EC2 serial console, which is a low-level troubleshooting tool and not intended for regular administration.

Option C involves setting up a bastion host, which can add complexity and operational overhead compared to Systems Manager. It also requires managing SSH keys.

Option D suggests establishing an AWS Site-to-Site VPN connection and having administrators connect to the instances over SSH from their on-premises machines. This introduces VPN configuration and ongoing SSH key management, and it still requires SSH access to every instance, which adds operational overhead and is harder to audit than Session Manager.

Question#38

A company is hosting a static website on Amazon S3 and is using Amazon Route 53 for DNS. The website is experiencing increased demand from around the world. The company must decrease latency for users who access the website.
Which solution meets these requirements MOST cost-effectively?

  • A. Replicate the S3 bucket that contains the website to all AWS Regions. Add Route 53 geolocation routing entries.
  • B. Provision accelerators in AWS Global Accelerator. Associate the supplied IP addresses with the S3 bucket. Edit the Route 53 entries to point to the IP addresses of the accelerators.
  • C. Add an Amazon CloudFront distribution in front of the S3 bucket. Edit the Route 53 entries to point to the CloudFront distribution.
  • D. Enable S3 Transfer Acceleration on the bucket. Edit the Route 53 entries to point to the new endpoint.

Reference/Argument

Amazon CloudFront is a content delivery network (CDN) service that distributes content globally to reduce latency. By setting up a CloudFront distribution in front of the S3 bucket hosting the static website, you can take advantage of its edge locations around the world to deliver content from the nearest location to the users, reducing the latency they experience.
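As one hedged example of the Route 53 side of this change, an alias record can point the site's domain at the CloudFront distribution; the hosted zone ID, domain name, and distribution domain name are placeholders (Z2FDTNDATAQYW2 is CloudFront's fixed alias hosted zone ID).

```python
import boto3

route53 = boto3.client("route53")

# Point www.example.com at the CloudFront distribution via an alias A record.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder: the domain's hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z2FDTNDATAQYW2",  # CloudFront alias zone ID
                        "DNSName": "d111111abcdef8.cloudfront.net",
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)
```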

Arguments About others:

Option A (replicating the S3 bucket to all AWS Regions) can be costly and complex, requiring replication of data across multiple Regions and managing synchronization. It may not provide a significant latency improvement compared to the CloudFront solution.

Option B (provisioning accelerators in AWS Global Accelerator) is more expensive because it adds an extra layer of infrastructure, and Global Accelerator endpoints do not include S3 buckets, so the supplied IP addresses cannot simply be associated with the bucket. CloudFront already provides global edge locations and content caching for static websites at lower cost.

Option D (enabling S3 Transfer Acceleration) speeds up long-distance transfers to and from the bucket and adds a per-GB charge, but it is not a CDN and does not cache content near website visitors, so it would not meaningfully reduce latency for this use case.

Question#39

A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that contains more than 10 million rows. The database has 2 TB of General Purpose SSD storage. There are millions of updates against this data every day through the company’s website.
The company has noticed that some insert operations are taking 10 seconds or longer. The company has determined that the database storage performance is the problem.
Which solution addresses this performance issue?

  • A. Change the storage type to Provisioned IOPS SSD.
  • B. Change the DB instance to a memory optimized instance class.
  • C. Change the DB instance to a burstable performance instance class.
  • D. Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication.

Reference/Argument

By changing the storage type to Provisioned IOPS SSD, you allocate a specific amount of I/O operations per second (IOPS) for your database. This is particularly important for workloads with high write activity, such as millions of daily updates. It ensures consistent and predictable performance, reducing the likelihood of long insert operation times.
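A sketch of the storage change with boto3; the instance identifier, IOPS value, and storage size are example values you would tune for the workload.

```python
import boto3

rds = boto3.client("rds")

# Switch the DB instance storage from General Purpose SSD to Provisioned IOPS SSD (io1).
rds.modify_db_instance(
    DBInstanceIdentifier="marketplace-mysql",  # placeholder identifier
    StorageType="io1",
    AllocatedStorage=2048,   # 2 TB, matching the current storage size
    Iops=20000,              # example provisioned IOPS figure
    ApplyImmediately=True,
)
```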

Arguments About others:

Option B (changing the DB instance to a memory optimized instance class) may help with overall database performance, but it may not directly address the storage performance issue, especially if the I/O subsystem is the primary bottleneck.

Option C (changing to a burstable performance instance class) may not be suitable for a workload with continuous high write activity, as burstable instances are better suited for workloads with intermittent or variable CPU usage.

Option D (enabling Multi-AZ RDS read replicas with MySQL native asynchronous replication) improves read scalability and resilience, but replicas do not change the storage performance of the primary instance, so the slow insert operations would persist.

Question#40

A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A solutions architect needs to implement a solution to ingest and store the alerts for future analysis.
The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days.
What is the MOST operationally efficient solution that meets these requirements?

  • A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
  • B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
  • C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon OpenSearch Service (Amazon Elasticsearch Service) cluster. Set up the Amazon OpenSearch Service (Amazon Elasticsearch Service) cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days.
  • D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.

Reference/Argument

Amazon Kinesis Data Firehose is a fully managed service that can capture, transform, and deliver streaming data into storage systems or analytics tools, making it an ideal solution for ingesting and storing status alerts. In this solution, the Kinesis Data Firehose delivery stream ingests the alerts and delivers them to an S3 bucket, which is a cost-effective storage solution. An S3 Lifecycle configuration is set up to transition the data to Amazon S3 Glacier after 14 days to minimize storage costs.
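A hedged sketch of the lifecycle portion of this design; the bucket name is a placeholder, and the Firehose delivery stream pointing at this bucket is assumed to already exist.

```python
import boto3

s3 = boto3.client("s3")

# Archive alert objects to S3 Glacier 14 days after Firehose delivers them.
s3.put_bucket_lifecycle_configuration(
    Bucket="edge-device-alerts",  # placeholder bucket used as the Firehose destination
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-14-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```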

Arguments About others:

B suggests launching EC2 instances to ingest and store the alerts, which introduces additional infrastructure management overhead and may not be as cost-effective and scalable as using managed services like Kinesis Data Firehose and S3.

C involves delivering the alerts to an Amazon OpenSearch Service cluster and manually managing snapshots and data deletion. This introduces additional complexity and manual overhead compared to the simpler solution of using Kinesis Data Firehose and S3.

D suggests using SQS to ingest the alerts, but it does not provide the same level of data persistence and durability as storing the alerts directly in S3. Additionally, it requires manual processing and copying of messages to S3, which adds operational complexity.
