AWS Certified Solutions Architect Associate Exam Dumps with Complete Explanations - Part 6

Muhammad Hassan Saeed
14 min read · Sep 28, 2023


Question #51

A company is developing an application that provides order shipping statistics for retrieval by a REST API. The company wants to extract the shipping statistics, organize the data into an easy-to-read HTML format, and send the report to several email addresses at the same time every morning.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)

  • A. Configure the application to send the data to Amazon Kinesis Data Firehose.
  • B. Use Amazon Simple Email Service (Amazon SES) to format the data and to send the report by email.
  • C. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Glue job to query the application’s API for the data.
  • D. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the application’s API for the data.
  • E. Store the application data in Amazon S3. Create an Amazon Simple Notification Service (Amazon SNS) topic as an S3 event destination to send the report by email.

Reference/Arguments:

If you are sending an email message to a large number of recipients, then it makes sense to send it in both HTML and text.

By creating an Amazon EventBridge scheduled event that triggers an AWS Lambda function, you can automate the process of querying the application’s API for shipping statistics. The Lambda function can retrieve the data and perform any necessary formatting or transformation before proceeding to the next step.
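
As an illustration, a minimal sketch of what such a Lambda handler could look like, assuming a hypothetical stats endpoint, sender address, and recipient list (none of these names come from the question):

```python
import json
import urllib.request

import boto3

ses = boto3.client("ses")

# Hypothetical values for illustration only
STATS_URL = "https://example.com/api/shipping-stats"
SENDER = "reports@example.com"
RECIPIENTS = ["ops@example.com", "managers@example.com"]


def lambda_handler(event, context):
    # Invoked every morning by an EventBridge schedule rule
    with urllib.request.urlopen(STATS_URL) as resp:
        stats = json.loads(resp.read())

    # Build a simple HTML table from the returned records
    rows = "".join(
        f"<tr><td>{r['order_id']}</td><td>{r['status']}</td></tr>" for r in stats
    )
    html_body = f"<html><body><table>{rows}</table></body></html>"

    # SES sends the same HTML report to all recipients in one call
    ses.send_email(
        Source=SENDER,
        Destination={"ToAddresses": RECIPIENTS},
        Message={
            "Subject": {"Data": "Daily shipping statistics"},
            "Body": {"Html": {"Data": html_body}},
        },
    )
```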

Arguments about others:

Options A, C, and E are not necessary for achieving the desired outcome. Kinesis Data Firehose (option A) is designed for real-time streaming data ingestion and delivery to data lakes or analytics services, not for producing a scheduled report. AWS Glue (option C) is a fully managed extract, transform, and load (ETL) service, which is an overcomplication for this scenario. Storing the application data in S3 and using SNS as an S3 event destination (option E) adds unnecessary complexity, and SNS email notifications are plain text, so they cannot deliver the formatted HTML report.

Question #52

A company wants to migrate its on-premises application to AWS. The application produces output files that vary in size from tens of gigabytes to hundreds of terabytes. The application data must be stored in a standard file system structure. The company wants a solution that scales automatically, is highly available, and requires minimum operational overhead.
Which solution will meet these requirements?

  • A. Migrate the application to run as containers on Amazon Elastic Container Service (Amazon ECS). Use Amazon S3 for storage.
  • B. Migrate the application to run as containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Block Store (Amazon EBS) for storage.
  • C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for storage.
  • D. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic Block Store (Amazon EBS) for storage.

Reference/Arguments:

EFS provides a scalable, fully managed file system that can be mounted by multiple EC2 instances at the same time. It stores and serves files in a standard file system structure, which aligns with the company's requirement, it automatically grows and shrinks with the size of your data, and Regional (Standard class) file systems store data redundantly across multiple Availability Zones for high availability.
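
A rough sketch of provisioning such a file system with boto3 is shown below; the subnet, security group, and mount-path values are placeholders, and in practice one mount target is created per Availability Zone:

```python
import boto3

efs = boto3.client("efs")

# Create a regional file system; EFS grows and shrinks automatically with the data
fs = efs.create_file_system(
    CreationToken="app-output-fs",       # idempotency token (any unique string)
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per Availability Zone subnet (placeholder IDs)
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroups=["sg-0123456789abcdef0"],
)

# Instances in the Multi-AZ Auto Scaling group then mount it with the EFS mount helper:
#   sudo mount -t efs <FileSystemId>:/ /mnt/app-output
```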

Arguments about others:

A suggests using ECS for container orchestration and S3 for storage. ECS doesn’t offer a native file system storage solution. S3 is an object storage service and may not be the most suitable option for a standard file system structure.

B suggests using EKS for container orchestration and EBS for storage. An EBS volume is block storage that attaches to instances within a single Availability Zone and is not a shared, automatically scaling file system. While EKS can manage containers, it doesn't address the file storage requirement.

D suggests using EC2 with EBS for storage. EBS provides block storage for individual EC2 instances, but volumes are confined to one Availability Zone and cannot be shared across the Multi-AZ Auto Scaling group, and they must be provisioned and resized manually, which adds operational overhead rather than providing a scalable file system like EFS.

Question #53

A company needs to store its accounting records in Amazon S3. The records must be immediately accessible for 1 year and then must be archived for an additional 9 years. No one at the company, including administrative users and root users, should be able to delete the records during the entire 10-year period. The records must be stored with maximum resiliency.
Which solution will meet these requirements?

  • A. Store the records in S3 Glacier for the entire 10-year period. Use an access control policy to deny deletion of the records for a period of 10 years.
  • B. Store the records by using S3 Intelligent-Tiering. Use an IAM policy to deny deletion of the records. After 10 years, change the IAM policy to allow deletion.
  • C. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. Use S3 Object Lock in compliance mode for a period of 10 years.
  • D. Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 year. Use S3 Object Lock in governance mode for a period of 10 years.

Reference/Arguments:

In compliance mode, a protected object version can't be overwritten or deleted by any user, including the root user in your AWS account. Keeping the records in S3 Standard for the first year keeps them immediately accessible, the lifecycle transition to S3 Glacier Deep Archive minimizes cost for the remaining nine years, and both storage classes store data redundantly across multiple Availability Zones for maximum resiliency.
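
A minimal sketch of the two pieces with boto3, assuming a placeholder bucket name and that Object Lock was enabled when the bucket was created:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "accounting-records-example"   # placeholder bucket name

# Transition objects to S3 Glacier Deep Archive one year after creation
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-1-year",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 365, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)

# Default Object Lock retention: compliance mode for 10 years.
# Object Lock must already be enabled on the bucket for this call to succeed.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
    },
)
```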

Arguments about others:

Option A is not suitable because S3 Glacier does not keep the records immediately accessible during the first year, and an access control policy that denies deletion can be changed later; it does not provide the immutability guarantees of S3 Object Lock in compliance mode.

Option B is not suitable because it suggests changing the IAM policy to allow deletion after 10 years. This approach does not guarantee immutability, as IAM policies can be changed.

Option D is not as cost-effective as transitioning to S3 Glacier Deep Archive, and S3 One Zone-Infrequent Access (S3 One Zone-IA) stores data in a single Availability Zone, which does not satisfy the maximum-resiliency requirement. Additionally, governance mode in S3 Object Lock can be bypassed by users with the appropriate permissions, so it does not enforce the same level of immutability as compliance mode.

Question #54

A company runs multiple Windows workloads on AWS. The company’s employees use Windows file shares that are hosted on two Amazon EC2 instances. The file shares synchronize data between themselves and maintain duplicate copies. The company wants a highly available and durable storage solution that preserves how users currently access the files.
What should a solutions architect do to meet these requirements?

  • A. Migrate all the data to Amazon S3. Set up IAM authentication for users to access files.
  • B. Set up an Amazon S3 File Gateway. Mount the S3 File Gateway on the existing EC2 instances.
  • C. Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration. Migrate all the data to FSx for Windows File Server.
  • D. Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration. Migrate all the data to Amazon EFS.

Reference/Arguments:

Amazon FSx for Windows File Server provides fully managed shared storage built on Windows Server and delivers a wide range of data access, data management, and administrative capabilities. A Multi-AZ file system maintains a standby file server in a second Availability Zone for high availability and durability, and users continue to access the shares over SMB, so the current access pattern is preserved.
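
A sketch of creating such a file system with boto3 follows; the subnet, security group, and directory IDs are placeholders:

```python
import boto3

fsx = boto3.client("fsx")

# Multi-AZ FSx for Windows File Server file system (placeholder IDs throughout)
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,                      # GiB
    StorageType="SSD",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",        # standby file server in a second AZ
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 32,              # MB/s
        "ActiveDirectoryId": "d-1234567890",   # existing AWS Managed Microsoft AD
    },
)

# Clients keep using SMB paths (\\<file system DNS name>\share),
# so the way users access the files does not change.
```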

Arguments about others:

Option A (Migrate all the data to Amazon S3 with IAM authentication) may not be suitable if you want to maintain the current file share setup and access methods. S3 is an object storage service, and accessing files directly from S3 would require a change in how users access their files.

Option B (Set up an Amazon S3 File Gateway) is a hybrid solution that could work, but it introduces additional complexity with the S3 File Gateway. It may not be necessary if the company wants a more straightforward solution that directly integrates with Windows file shares.

Option D (Extend the file share environment to Amazon EFS) is not suitable because Amazon EFS exposes NFS file systems that are not supported on Windows EC2 instances, so it would not integrate with SMB-based Windows file shares or provide the compatibility that FSx for Windows File Server offers.

Question #55

A solutions architect is developing a VPC architecture that includes multiple subnets. The architecture will host applications that use Amazon EC2 instances and Amazon RDS DB instances. The architecture consists of six subnets in two Availability Zones. Each Availability Zone includes a public subnet, a private subnet, and a dedicated subnet for databases. Only EC2 instances that run in the private subnets can have access to the RDS databases.
Which solution will meet these requirements?

  • A. Create a new route table that excludes the route to the public subnets’ CIDR blocks. Associate the route table with the database subnets.
  • B. Create a security group that denies inbound traffic from the security group that is assigned to instances in the public subnets. Attach the security group to the DB instances.
  • C. Create a security group that allows inbound traffic from the security group that is assigned to instances in the private subnets. Attach the security group to the DB instances.
  • D. Create a new peering connection between the public subnets and the private subnets. Create a different peering connection between the private subnets and the database subnets.

Reference/Arguments:

Security groups can reference other security groups as a traffic source. Attaching a security group to the DB instances that allows inbound database traffic only from the security group assigned to the EC2 instances in the private subnets restricts database access to exactly those instances, which is what the requirement asks for.

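A minimal sketch of that ingress rule with boto3, using placeholder security group IDs and assuming a MySQL-compatible database on port 3306:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs: the DB security group and the private-subnet instances' group
DB_SG = "sg-0db0123456789abcd"
PRIVATE_APP_SG = "sg-0app012345678abcd"

# Allow database traffic only from instances that carry the app security group
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": PRIVATE_APP_SG}],
    }],
)
```
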
Arguments about others:

Option A (Create a new route table) does not work because every route table contains a local route for the VPC CIDR that cannot be removed, so traffic between subnets in the same VPC remains routable; access has to be controlled with security groups instead.

Option B (Create a security group that denies inbound traffic) is not possible: security groups support only allow rules, so you cannot write an explicit deny rule for traffic from another security group.

Option D (Create peering connections) does not apply here. VPC peering connects separate VPCs, not subnets, and all of these resources live in the same VPC; security groups are the appropriate mechanism for controlling access within a VPC.

Question #56

A company has registered its domain name with Amazon Route 53. The company uses Amazon API Gateway in the ca-central-1 Region as a public interface for its backend microservice APIs. Third-party services consume the APIs securely. The company wants to design its API Gateway URL with the company’s domain name and corresponding certificate so that the third-party services can use HTTPS.
Which solution will meet these requirements?

  • A. Create stage variables in API Gateway with Name=”Endpoint-URL” and Value=”Company Domain Name” to overwrite the default URL. Import the public certificate associated with the company’s domain name into AWS Certificate Manager (ACM).
  • B. Create Route 53 DNS records with the company’s domain name. Point the alias record to the Regional API Gateway stage endpoint. Import the public certificate associated with the company’s domain name into AWS Certificate Manager (ACM) in the us-east-1 Region.
  • C. Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company’s domain name. Import the public certificate associated with the company’s domain name into AWS Certificate Manager (ACM) in the same Region. Attach the certificate to the API Gateway endpoint. Configure Route 53 to route traffic to the API Gateway endpoint.
  • D. Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company’s domain name. Import the public certificate associated with the company’s domain name into AWS Certificate Manager (ACM) in the us-east-1 Region. Attach the certificate to the API Gateway APIs. Create Route 53 DNS records with the company’s domain name. Point an A record to the company’s domain name.

Reference/Arguments:

API requests are targeted directly to the Region-specific API Gateway API without going through any CloudFront distribution. For a Regional custom domain name, the ACM certificate must be requested or imported in the same Region as the API (ca-central-1 here); a certificate in us-east-1 is required only for edge-optimized endpoints, which are fronted by CloudFront.
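
A rough sketch of the custom domain and alias record with boto3; the domain, certificate ARN, and hosted zone ID are placeholders, and a base path mapping to the API stage would also be needed (omitted here):

```python
import boto3

apigw = boto3.client("apigateway", region_name="ca-central-1")
r53 = boto3.client("route53")

# Regional custom domain with an ACM certificate from the SAME Region (placeholder ARN)
domain = apigw.create_domain_name(
    domainName="api.example.com",
    regionalCertificateArn="arn:aws:acm:ca-central-1:123456789012:certificate/abc-123",
    endpointConfiguration={"types": ["REGIONAL"]},
    securityPolicy="TLS_1_2",
)

# Alias record pointing the company domain at the Regional API Gateway endpoint
r53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",            # placeholder hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": domain["regionalHostedZoneId"],
                "DNSName": domain["regionalDomainName"],
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)
```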

Arguments about others:

Option B points the alias record directly at the Regional API Gateway stage endpoint without creating a custom domain name in API Gateway, so the company's certificate is never attached to the API; it also imports the certificate into ACM in us-east-1, which is the wrong Region for a Regional endpoint.

Option A misuses stage variables, which pass configuration values to an API's integrations and cannot overwrite the default execute-api URL; it also doesn't include the steps to configure Route 53 or attach the certificate to a custom domain name.

Option D imports the certificate into ACM in the us-east-1 Region, but a Regional endpoint in ca-central-1 requires the certificate to be in ca-central-1. Pointing an A record at the company's own domain name is also circular and does not route traffic to the API Gateway endpoint.

Question #57

A company is running a popular social media website. The website gives users the ability to upload images to share with other users. The company wants to make sure that the images do not contain inappropriate content. The company needs a solution that minimizes development effort.
What should a solutions architect do to meet these requirements?

  • A. Use Amazon Comprehend to detect inappropriate content. Use human review for low-confidence predictions.
  • B. Use Amazon Rekognition to detect inappropriate content. Use human review for low-confidence predictions.
  • C. Use Amazon SageMaker to detect inappropriate content. Use ground truth to label low-confidence predictions.
  • D. Use AWS Fargate to deploy a custom machine learning model to detect inappropriate content. Use ground truth to label low-confidence predictions.

Reference/Arguments:

Amazon Rekognition provides a pre-trained image and video moderation API (DetectModerationLabels) that identifies inappropriate or offensive content without building or training a model, and low-confidence predictions can be routed to human reviewers (for example with Amazon Augmented AI), which minimizes development effort.

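A minimal sketch of image moderation with boto3; the confidence thresholds are arbitrary assumptions, not values from the question:

```python
import boto3

rekognition = boto3.client("rekognition")

# Hypothetical thresholds: auto-block above 80%, send 50-80% hits to human review
AUTO_BLOCK = 80.0
REVIEW = 50.0


def moderate_image(bucket: str, key: str) -> str:
    """Return 'block', 'review', or 'allow' for an image uploaded to S3."""
    resp = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=REVIEW,
    )
    confidences = [label["Confidence"] for label in resp["ModerationLabels"]]
    if any(c >= AUTO_BLOCK for c in confidences):
        return "block"
    if confidences:
        return "review"   # low-confidence hits go to human review (e.g. Amazon A2I)
    return "allow"
```
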
Arguments about others:

Option A (Amazon Comprehend) is primarily designed for natural language processing tasks, such as sentiment analysis and entity recognition, and may not be the best choice for image content analysis.

Option C (Amazon SageMaker) is a machine learning platform that allows you to build and train custom machine learning models. While it provides flexibility, it involves more development effort, especially for creating a custom model, and may not be the most straightforward solution for this use case.

Option D (AWS Fargate) is used for containerized applications and doesn’t directly provide a pre-trained model for image content analysis like Amazon Rekognition. Developing a custom machine learning model requires significant development effort and expertise, which goes against the requirement of minimizing development effort in this scenario.

Question #58

A company wants to run its critical applications in containers to meet requirements for scalability and availability. The company prefers to focus on maintenance of the critical applications. The company does not want to be responsible for provisioning and managing the underlying infrastructure that runs the containerized workload.
What should a solutions architect do to meet these requirements?

  • A. Use Amazon EC2 instances, and install Docker on the instances.
  • B. Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 worker nodes.
  • C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate.
  • D. Use Amazon EC2 instances from an Amazon Elastic Container Service (Amazon ECS)-optimized Amazon Machine Image (AMI).

Reference/Arguments:

AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. AWS Fargate is compatible with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS).
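
A sketch of running the workload on Fargate with boto3; the cluster name, image URI, role ARN, subnets, and security group are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Register a task definition that requires Fargate (no EC2 instances to manage)
ecs.register_task_definition(
    family="critical-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[{
        "name": "app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/critical-app:latest",
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)

# Run it as a service so ECS keeps the desired number of tasks healthy
ecs.create_service(
    cluster="production",
    serviceName="critical-app",
    taskDefinition="critical-app",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
        "securityGroups": ["sg-0123456789abcdef0"],
        "assignPublicIp": "DISABLED",
    }},
)
```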

Arguments about others:

Option A (Use EC2 instances and install Docker) and Option D (Use EC2 instances from an ECS-optimized AMI) both require managing EC2 instances, which contradicts the company’s requirement of not being responsible for provisioning and managing the underlying infrastructure.

Option B still requires managing EC2 worker nodes, even though ECS handles the container orchestration. The company would remain responsible for provisioning, patching, and scaling those instances, which conflicts with its preference. ECS on AWS Fargate (option C) removes that infrastructure management entirely and lets the team focus on the applications.

Question #59

A company hosts more than 300 global websites and applications. The company requires a platform to analyze more than 30 TB of clickstream data each day.
What should a solutions architect do to transmit and process the clickstream data?

  • A. Design an AWS Data Pipeline to archive the data to an Amazon S3 bucket and run an Amazon EMR cluster with the data to generate analytics.
  • B. Create an Auto Scaling group of Amazon EC2 instances to process the data and send it to an Amazon S3 data lake for Amazon Redshift to use for analysis.
  • C. Cache the data to Amazon CloudFront. Store the data in an Amazon S3 bucket. When an object is added to the S3 bucket, run an AWS Lambda function to process the data for analysis.
  • D. Collect the data from Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake. Load the data in Amazon Redshift for analysis

Reference/Arguments:

Amazon Kinesis Data Streams can collect clickstream events from hundreds of websites and applications at this scale, Amazon Kinesis Data Firehose delivers the stream to an Amazon S3 data lake without any servers to manage, and Amazon Redshift can then load the data for analysis. This fully managed pipeline is the standard pattern for transmitting and processing tens of terabytes of clickstream data per day.
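
A minimal sketch of the producer side with boto3, assuming a hypothetical stream name and event shape; the Firehose delivery and Redshift COPY steps are indicated only as comments:

```python
import json

import boto3

kinesis = boto3.client("kinesis")


# Each website/application sends click events into a shared stream (placeholder name)
def send_click_event(event: dict) -> None:
    kinesis.put_record(
        StreamName="clickstream",
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["session_id"],     # spreads load across shards
    )


# A Kinesis Data Firehose delivery stream reads from the data stream and delivers
# batched, compressed objects to the S3 data lake; Redshift then loads them, e.g.:
#   COPY clickstream FROM 's3://clickstream-lake/2023/'
#   IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' FORMAT AS JSON 'auto';
```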

Arguments about others:

Option A involves AWS Data Pipeline and Amazon EMR, which may be more complex and might not be the most cost-effective choice for handling large volumes of streaming data.

Option B recommends using EC2 instances for processing, which can be more challenging to scale and manage at the required scale.

Option C suggests caching the data in CloudFront, which is typically used for content delivery and not ideal for storing and processing large volumes of clickstream data.

Question #60

A company has a website hosted on AWS. The website is behind an Application Load Balancer (ALB) that is configured to handle HTTP and HTTPS separately. The company wants to forward all requests to the website so that the requests will use HTTPS.
What should a solutions architect do to meet this requirement?

  • A. Update the ALB’s network ACL to accept only HTTPS traffic.
  • B. Create a rule that replaces the HTTP in the URL with HTTPS.
  • C. Create a listener rule on the ALB to redirect HTTP traffic to HTTPS.
  • D. Replace the ALB with a Network Load Balancer configured to use Server Name Indication (SNI).

Reference/Arguments:

An Application Load Balancer listener can return an HTTP 301 redirect as an action, so adding a rule (or default action) on the HTTP listener that redirects all traffic to HTTPS on port 443 enforces HTTPS without any application changes.

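A minimal sketch of that redirect with boto3; the listener ARN is a placeholder:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Replace the HTTP listener's default action with a permanent redirect to HTTPS.
# (The same redirect action can also be attached to a listener rule instead.)
elbv2.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:region:123456789012:listener/app/my-alb/abc/def",  # placeholder
    DefaultActions=[{
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            "StatusCode": "HTTP_301",   # permanent redirect
        },
    }],
)
```
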
Arguments about others:

Option A (Updating the ALB's network ACL) deals with network-level access control lists; it could only block HTTP traffic outright and provides no mechanism for redirecting requests to HTTPS.

Option B (Replacing HTTP in the URL with HTTPS) is not a standard approach for achieving HTTPS redirection and does not enforce HTTPS at the protocol level; the supported mechanism is a redirect action on the listener, as in option C.

Option D (Replacing the ALB with a Network Load Balancer configured to use Server Name Indication) is not necessary for HTTP to HTTPS redirection. NLBs operate at the transport layer (Layer 4) and do not provide the request routing and redirect rules that ALBs offer, so an ALB is the better fit for HTTP/HTTPS redirection scenarios.
