AWS Certified Solutions Architect Associate Exam dumps with Complete Explanation-Part7

Muhammad Hassan Saeed
16 min read · Sep 30, 2023


Question #61

A company is developing a two-tier web application on AWS. The company’s developers have deployed the application on an Amazon EC2 instance that connects directly to a backend Amazon RDS database. The company must not hardcode database credentials in the application. The company must also implement a solution to automatically rotate the database credentials on a regular basis.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Store the database credentials in the instance metadata. Use Amazon EventBridge (Amazon CloudWatch Events) rules to run a scheduled AWS Lambda function that updates the RDS credentials and instance metadata at the same time.
  • B. Store the database credentials in a configuration file in an encrypted Amazon S3 bucket. Use Amazon EventBridge (Amazon CloudWatch Events) rules to run a scheduled AWS Lambda function that updates the RDS credentials and the credentials in the configuration file at the same time. Use S3 Versioning to ensure the ability to fall back to previous values.
  • C. Store the database credentials as a secret in AWS Secrets Manager. Turn on automatic rotation for the secret. Attach the required permission to the EC2 role to grant access to the secret.
  • D. Store the database credentials as encrypted parameters in AWS Systems Manager Parameter Store. Turn on automatic rotation for the encrypted parameters. Attach the required permission to the EC2 role to grant access to the encrypted parameters.

Reference/Arguments:

Rotation is the process of periodically updating a secret. When you rotate a secret, you update the credentials in both the secret and the database or service.
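
As an illustration, here is a minimal boto3 sketch of how the application could read the credentials at runtime instead of hardcoding them; the secret name `prod/app/rds` is a placeholder, not something from the question:

```python
# Minimal sketch: fetch RDS credentials from Secrets Manager at runtime.
# The secret name "prod/app/rds" is a placeholder.
import json
import boto3

secrets = boto3.client("secretsmanager")

def get_db_credentials(secret_id="prod/app/rds"):
    # Secrets Manager always returns the current version of the secret,
    # so the application sees the new credentials after each rotation.
    response = secrets.get_secret_value(SecretId=secret_id)
    secret = json.loads(response["SecretString"])
    return secret["username"], secret["password"]
```

The EC2 instance role only needs permission to call `secretsmanager:GetSecretValue` on this secret, which is exactly what option C describes.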

Arguments about others:

Options A and B involve using Amazon EventBridge (Amazon CloudWatch Events) rules and scheduled AWS Lambda functions to update credentials. While they are feasible solutions, they require more manual setup and management compared to the built-in rotation capabilities of AWS Secrets Manager.

Option D, using AWS Systems Manager Parameter Store, is not designed for secrets management and rotation in the same way as AWS Secrets Manager. It is better suited for configuration parameters rather than sensitive credentials.

Question #62

A company is deploying a new public web application to AWS. The application will run behind an Application Load Balancer (ALB). The application needs to be encrypted at the edge with an SSL/TLS certificate that is issued by an external certificate authority (CA). The certificate must be rotated each year before the certificate expires.
What should a solutions architect do to meet these requirements?

  • A. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate.
  • B. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Import the key material from the certificate. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate.
  • C. Use AWS Certificate Manager (ACM) Private Certificate Authority to issue an SSL/TLS certificate from the root CA. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate.
  • D. Use AWS Certificate Manager (ACM) to import an SSL/TLS certificate. Apply the certificate to the ALB. Use Amazon EventBridge (Amazon CloudWatch Events) to send a notification when the certificate is nearing expiration. Rotate the certificate manually.

Reference/Arguments:

In addition to requesting SSL/TLS certificates provided by AWS Certificate Manager (ACM), you can import certificates that you obtained outside of AWS. You might do this because you already have a certificate from a third-party certificate authority (CA), or because you have application-specific requirements that are not met by ACM issued certificates.

To import a third-party issued SSL/TLS certificate into ACM, you must provide the certificate, its private key, and the certificate chain. Your certificate must also satisfy the prerequisites for importing certificates.
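
For illustration, a minimal boto3 sketch of importing an externally issued certificate into ACM; the file names are placeholders, and the returned ARN is what would be attached to the ALB's HTTPS listener:

```python
# Minimal sketch: import an externally issued certificate into ACM.
# File paths are placeholders.
import boto3

acm = boto3.client("acm")

with open("certificate.pem", "rb") as cert, \
     open("private_key.pem", "rb") as key, \
     open("chain.pem", "rb") as chain:
    response = acm.import_certificate(
        Certificate=cert.read(),
        PrivateKey=key.read(),
        CertificateChain=chain.read(),
    )

# The returned ARN is attached to the ALB's HTTPS listener.
print(response["CertificateArn"])
```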

Question #63

A company runs its infrastructure on AWS and has a registered base of 700,000 users for its document management application. The company intends to create a product that converts large .pdf files to .jpg image files. The .pdf files average 5 MB in size. The company needs to store the original files and the converted files. A solutions architect must design a scalable solution to accommodate demand that will grow rapidly over time.
Which solution meets these requirements MOST cost-effectively?

  • A. Save the .pdf files to Amazon S3. Configure an S3 PUT event to invoke an AWS Lambda function to convert the files to .jpg format and store them back in Amazon S3.
  • B. Save the .pdf files to Amazon DynamoDB. Use the DynamoDB Streams feature to invoke an AWS Lambda function to convert the files to .jpg format and store them back in DynamoDB.
  • C. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic Block Store (Amazon EBS) storage, and an Auto Scaling group. Use a program in the EC2 instances to convert the files to .jpg format. Save the .pdf files and the .jpg files in the EBS store.
  • D. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic File System (Amazon EFS) storage, and an Auto Scaling group. Use a program in the EC2 instances to convert the file to .jpg format. Save the .pdf files and the .jpg files in the EBS store.

Reference/Arguments:

Option A is fully serverless and event-driven: Amazon S3 scales storage automatically, and an S3 PUT event invokes AWS Lambda only when a new .pdf arrives, so the solution scales with rapidly growing demand and the company pays only for the conversions it runs (a minimal sketch of this pattern appears after the arguments below).

Arguments about others:

B. DynamoDB is not designed for storing large binary files: items are limited to 400 KB, so 5 MB .pdf files would not fit as items, and DynamoDB storage and throughput costs are far higher than Amazon S3 for this use case.

C. Using Elastic Beanstalk with EC2 and EBS storage can work, but it may not be the most cost-effective solution. It involves managing the underlying infrastructure, and EBS volumes are attached to individual instances, so the fleet must be scaled and managed as demand grows.

D. Similar to C, using Elastic Beanstalk with EC2 and EFS storage can work, but it may not be the most cost-effective solution. EFS is a shared file storage service that costs considerably more per GB than Amazon S3, and it may not provide optimal performance for the conversion process, especially as demand and file sizes increase.
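
Below is a minimal, hypothetical sketch of option A's event-driven pattern: an S3 PUT event invokes a Lambda function that converts the uploaded .pdf and writes the .jpg back to S3. The conversion step is stubbed out; a real function would bundle a PDF-rendering library (for example, as a Lambda layer).

```python
# Hypothetical sketch of option A: S3 PUT event -> Lambda -> converted .jpg back to S3.
# The actual PDF-to-JPG conversion is stubbed out here.
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]          # e.g. uploads/report.pdf

        pdf_bytes = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        jpg_bytes = convert_pdf_to_jpg(pdf_bytes)    # placeholder conversion

        s3.put_object(
            Bucket=bucket,
            Key=key.rsplit(".", 1)[0] + ".jpg",
            Body=jpg_bytes,
            ContentType="image/jpeg",
        )

def convert_pdf_to_jpg(pdf_bytes):
    # Placeholder: a real implementation would render the PDF pages with a
    # library such as pdf2image packaged in the deployment artifact.
    raise NotImplementedError
```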

Question #64

A company has more than 5 TB of file data on Windows file servers that run on premises. Users and applications interact with the data each day.
The company is moving its Windows workloads to AWS. As the company continues this process, the company requires access to AWS and on-premises file storage with minimum latency. The company needs a solution that minimizes operational overhead and requires no significant changes to the existing file access patterns. The company uses an AWS Site-to-Site VPN connection for connectivity to AWS.
What should a solutions architect do to meet these requirements?

  • A. Deploy and configure Amazon FSx for Windows File Server on AWS. Move the on-premises file data to FSx for Windows File Server. Reconfigure the workloads to use FSx for Windows File Server on AWS.
  • B. Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to the S3 File Gateway. Reconfigure the on-premises workloads and the cloud workloads to use the S3 File Gateway.
  • C. Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to Amazon S3. Reconfigure the workloads to use either Amazon S3 directly or the S3 File Gateway, depending on each workload’s location.
  • D. Deploy and configure Amazon FSx for Windows File Server on AWS. Deploy and configure an Amazon FSx File Gateway on premises. Move the on-premises file data to the FSx File Gateway. Configure the cloud workloads to use FSx for Windows File Server on AWS. Configure the on-premises workloads to use the FSx File Gateway.

Reference/Arguments:

Amazon FSx File Gateway (FSx File Gateway) is a new File Gateway type that provides low latency and efficient access to in-cloud FSx for Windows File Server file shares from your on-premises facility. If you maintain on-premises file storage because of latency or bandwidth requirements, you can instead use FSx File Gateway for seamless access to fully managed, highly reliable, and virtually unlimited Windows file shares provided in the AWS Cloud by FSx for Windows File Server. FSx File Gateway provides the following benefits:

  • Helps eliminate on-premises file servers and consolidates all their data in AWS to take advantage of the scale and economics of cloud storage.
  • Provides options that you can use for all your file workloads, including those that require on-premises access to cloud data.
  • Applications that need to stay on premises can now experience the same low latency and high performance that they have in AWS, without taxing your networks or impacting the latencies experienced by your most demanding applications.

Arguments about others:

Option A moves everything to FSx for Windows File Server in the cloud, but on-premises users and applications would then access every file over the Site-to-Site VPN, so it does not address the need for low-latency on-premises access.

Option B: Amazon S3 is an object storage service, not a traditional file storage system. While the S3 File Gateway can provide access to S3 objects as if they were files, it is not the best fit for existing Windows file servers when the goal is to keep the current file access patterns with minimal operational changes.

Option C may not fully address the requirement for low-latency access to on-premises data, as accessing Amazon S3 directly from on premises can introduce latency.

Question #65

A hospital recently deployed a RESTful API with Amazon API Gateway and AWS Lambda. The hospital uses API Gateway and Lambda to upload reports that are in PDF format and JPEG format. The hospital needs to modify the Lambda code to identify protected health information (PHI) in the reports.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Use existing Python libraries to extract the text from the reports and to identify the PHI from the extracted text.
  • B. Use Amazon Textract to extract the text from the reports. Use Amazon SageMaker to identify the PHI from the extracted text.
  • C. Use Amazon Textract to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI from the extracted text.
  • D. Use Amazon Rekognition to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI from the extracted text

Reference/Arguments:

Amazon Textract extracts text from scanned documents such as PDF and JPEG reports, and Amazon Comprehend Medical is a HIPAA-eligible natural language processing service that detects protected health information (PHI) in unstructured medical text. Combining the two managed services requires no custom model development, which gives the least operational overhead.
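
A minimal boto3 sketch of option C's pipeline, assuming the report bytes have already landed in S3 (the bucket and key names are placeholders):

```python
# Minimal sketch of option C: Textract extracts the text, Comprehend Medical flags PHI.
# Bucket and key names are placeholders.
import boto3

textract = boto3.client("textract")
comprehend_medical = boto3.client("comprehendmedical")

def find_phi(bucket="reports-bucket", key="report-001.jpg"):
    # detect_document_text handles single images synchronously; multi-page PDFs
    # would use the asynchronous start_document_text_detection API instead.
    extraction = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    text = " ".join(
        block["Text"] for block in extraction["Blocks"] if block["BlockType"] == "LINE"
    )

    # Comprehend Medical returns PHI entities (names, dates, IDs, etc.)
    # with confidence scores and character offsets.
    phi = comprehend_medical.detect_phi(Text=text)
    return phi["Entities"]
```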

Arguments about others:

Option A involves custom code and libraries, which would likely require more development effort and ongoing maintenance, resulting in higher operational overhead.

Option B suggests using Amazon SageMaker for PHI identification, which is a machine learning service. While it is powerful, it might involve more complexity in terms of model development and maintenance compared to using Amazon Comprehend Medical, which is specifically tailored for medical information extraction.

Option D adds Amazon Rekognition to extract the text. Amazon Rekognition is a deep learning-based visual analysis service for searching, verifying, and organizing millions of images and videos; Amazon Textract is the purpose-built service for extracting text from scanned documents such as PDFs.

Question #66

A company has an application that generates a large number of files, each approximately 5 MB in size. The files are stored in Amazon S3. Company policy requires the files to be stored for 4 years before they can be deleted. Immediate accessibility is always required as the files contain critical business data that is not easy to reproduce. The files are frequently accessed in the first 30 days of the object creation but are rarely accessed after the first 30 days.
Which storage solution is MOST cost-effective?

  • A. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Glacier 30 days from object creation. Delete the files 4 years after object creation.
  • B. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days from object creation. Delete the files 4 years after object creation.
  • C. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object creation. Delete the files 4 years after object creation.
  • D. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object creation. Move the files to S3 Glacier 4 years after object creation.

Reference/Arguments:

The files are frequently accessed in the first 30 days, and S3 Standard provides immediate accessibility, so the files should stay in S3 Standard for that initial period. After 30 days the files are rarely accessed, and transitioning them to S3 Standard-IA is cost-effective: it offers lower storage costs than S3 Standard while still providing millisecond access when needed. Standard-IA also stores data redundantly across multiple Availability Zones, which matters because the files are critical and not easy to reproduce.
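
A minimal boto3 sketch of the lifecycle rule described in option C; the bucket name is a placeholder and 4 years is approximated as 1,460 days:

```python
# Minimal sketch of option C's lifecycle rule: transition to Standard-IA at 30 days,
# delete at ~4 years. The bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="business-files-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "ia-after-30-days-delete-after-4-years",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects in the bucket
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"}
                ],
                "Expiration": {"Days": 1460},
            }
        ]
    },
)
```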

Arguments about others:

Option A transitions the files to S3 Glacier, which requires a retrieval operation (minutes to hours) before the data can be read, so it breaks the requirement for immediate accessibility. Option B uses S3 One Zone-IA, which stores the data in a single Availability Zone; that is a poor fit for critical business data that is not easy to reproduce.

Option D moves the files to S3 Standard-IA after 30 days but then transitions them to S3 Glacier after 4 years instead of deleting them, so the company keeps paying for storage it no longer needs and the deletion requirement is never met.

Question #67

A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to an Amazon RDS table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not contain any duplicate messages.
What should a solutions architect do to ensure messages are being processed once only?

  • A. Use the CreateQueue API call to create a new queue.
  • B. Use the AddPermission API call to add appropriate permissions.
  • C. Use the ReceiveMessage API call to set an appropriate wait time.
  • D. Use the ChangeMessageVisibility API call to increase the visibility timeout.

Reference/Arguments:

To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents all consumers from receiving and processing the message.

ChangeMessageVisibility changes the visibility timeout of a specified message in a queue to a new value. The default visibility timeout for a message is 30 seconds; the minimum is 0 seconds and the maximum is 12 hours. In this scenario, duplicates appear because processing (writing to RDS and then deleting the message) sometimes takes longer than the visibility timeout, so the message becomes visible again and another instance processes it a second time. Increasing the visibility timeout prevents this.
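
A minimal boto3 sketch of option D from a consumer's point of view; the queue URL is a placeholder:

```python
# Minimal sketch of option D: extend the visibility timeout so no other consumer
# receives the message while this worker is still processing it.
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue"  # placeholder

messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)

for message in messages.get("Messages", []):
    # Give this worker up to 5 minutes before the message becomes visible again.
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=message["ReceiptHandle"],
        VisibilityTimeout=300,
    )

    # ... process the message and write the record to RDS here ...

    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```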

Arguments about others:

Option A, “Use the CreateQueue API call to create a new queue,” doesn’t inherently prevent duplicates. Creating a new queue would not solve the problem and could create additional complexity in managing queues.

Option B: the AddPermission API adds a permission to a queue for a specific principal, which allows sharing access to the queue. It has nothing to do with duplicate processing.

Option C, “Use the ReceiveMessage API call to set an appropriate wait time,” is related to how your application receives messages from the queue but doesn’t directly address the issue of preventing duplicates. While it can be useful to avoid polling the queue too frequently, it doesn’t guarantee once-only processing.

Question #68

A solutions architect is designing a new hybrid architecture to extend a company’s on-premises infrastructure to AWS. The company requires a highly available connection with consistent low latency to an AWS Region. The company needs to minimize costs and is willing to accept slower traffic if the primary connection fails.
What should the solutions architect do to meet these requirements?

  • A. Provision an AWS Direct Connect connection to a Region. Provision a VPN connection as a backup if the primary Direct Connect connection fails.
  • B. Provision a VPN tunnel connection to a Region for private connectivity. Provision a second VPN tunnel for private connectivity and as a backup if the primary VPN connection fails.
  • C. Provision an AWS Direct Connect connection to a Region. Provision a second Direct Connect connection to the same Region as a backup if the primary Direct Connect connection fails.
  • D. Provision an AWS Direct Connect connection to a Region. Use the Direct Connect failover attribute from the AWS CLI to automatically create a backup connection if the primary Direct Connect connection fails.

Reference/Arguments:

Option A combines AWS Direct Connect, which provides consistent low latency and high bandwidth to the Region, with a Site-to-Site VPN as a lower-cost backup. The VPN runs over the internet, so failover traffic is slower, but the company has stated that slower traffic is acceptable if the primary connection fails.

Arguments about others:

Option B relies only on VPN tunnels over the internet, which cannot guarantee the consistent low latency the company requires. Option C provisions a second Direct Connect connection as the backup, which meets the latency and availability requirements but roughly doubles the connection cost, conflicting with the requirement to minimize costs.

Option D refers to a “Direct Connect failover attribute” in the AWS CLI, but no such feature exists; a backup connection cannot be created automatically this way.

Question #69

A company is running a business-critical web application on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances are in an Auto Scaling group. The application uses an Amazon Aurora PostgreSQL database that is deployed in a single Availability Zone. The company wants the application to be highly available with minimum downtime and minimum loss of data.
Which solution will meet these requirements with the LEAST operational effort?

  • A. Place the EC2 instances in different AWS Regions. Use Amazon Route 53 health checks to redirect traffic. Use Aurora PostgreSQL Cross-Region Replication.
  • B. Configure the Auto Scaling group to use multiple Availability Zones. Configure the database as Multi-AZ. Configure an Amazon RDS Proxy instance for the database.
  • C. Configure the Auto Scaling group to use one Availability Zone. Generate hourly snapshots of the database. Recover the database from the snapshots in the event of a failure.
  • D. Configure the Auto Scaling group to use multiple AWS Regions. Write the data from the application to Amazon S3. Use S3 Event Notifications to launch an AWS Lambda function to write the data to the database.

Reference/Arguments:

Configuring the Aurora database as Multi-AZ places a standby/reader in another Availability Zone with automatic failover, and spreading the Auto Scaling group across multiple Availability Zones protects the web tier. By using Amazon RDS Proxy, applications can pool and share database connections to improve their ability to scale; RDS Proxy also makes applications more resilient to database failures by automatically reconnecting to the new primary while preserving application connections.
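
From the application's point of view, adopting RDS Proxy is largely a change of endpoint. Here is a minimal, hypothetical sketch assuming a PostgreSQL client such as psycopg2; the proxy endpoint and environment variables are placeholders:

```python
# Hypothetical sketch: the web tier connects to the RDS Proxy endpoint instead of
# the database instance directly. Endpoint and environment variables are placeholders.
import os
import psycopg2

conn = psycopg2.connect(
    host="my-app-proxy.proxy-abc123xyz.us-east-1.rds.amazonaws.com",  # RDS Proxy endpoint
    port=5432,
    dbname="appdb",
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
)

# The proxy holds the connection pool and reconnects to the new primary after a
# Multi-AZ failover, so the application keeps its connections open.
with conn.cursor() as cur:
    cur.execute("SELECT 1")
```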

Arguments about others:

Option A: Placing EC2 instances in different AWS Regions and using Cross-Region Replication for Aurora PostgreSQL introduces complexity and increased operational overhead, including data replication and potential latency issues.

Option C: Using a single Availability Zone for the Auto Scaling group and generating hourly snapshots of the database is not a high-availability solution. In the event of a failure, recovering from snapshots can result in data loss and downtime.

Option D: Writing data from the application to Amazon S3 and using S3 Event Notifications to trigger an AWS Lambda function to write data to the database is not a highly available solution for the database. It adds complexity and doesn’t address database failover or data consistency requirements.

Question #70

A company’s HTTP application is behind a Network Load Balancer (NLB). The NLB’s target group is configured to use an Amazon EC2 Auto Scaling group with multiple EC2 instances that run the web service.
The company notices that the NLB is not detecting HTTP errors for the application. These errors require a manual restart of the EC2 instances that run the web service. The company needs to improve the application’s availability without writing custom scripts or code.
What should a solutions architect do to meet these requirements?

  • A. Enable HTTP health checks on the NLB, supplying the URL of the company’s application.
  • B. Add a cron job to the EC2 instances to check the local application’s logs once each minute. If HTTP errors are detected, the application will restart.
  • C. Replace the NLB with an Application Load Balancer. Enable HTTP health checks by supplying the URL of the company’s application. Configure an Auto Scaling action to replace unhealthy instances.
  • D. Create an Amazon CloudWatch alarm that monitors the UnhealthyHostCount metric for the NLB. Configure an Auto Scaling action to replace unhealthy instances when the alarm is in the ALARM state.

Reference/Arguments:

Your Application Load Balancer periodically sends requests to its registered targets to test their status. These tests are called health checks.
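
A minimal boto3 sketch of configuring HTTP health checks on the ALB's target group, as in option C; the target group ARN and health-check path are placeholders:

```python
# Minimal sketch of option C's health check: the ALB target group probes the
# application over HTTP and expects a 200 response. ARN and path are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abcdef1234567890",
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",
    HealthCheckIntervalSeconds=30,
    HealthyThresholdCount=3,
    UnhealthyThresholdCount=2,
    Matcher={"HttpCode": "200"},
)
```

With the Auto Scaling group configured to use ELB health checks, instances that fail these checks are terminated and replaced automatically, with no custom scripts or code.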

Arguments about others:

Option A: an NLB can perform HTTP health checks on its targets, but the NLB itself operates at layer 4 and a failed health check alone does not replace the faulty instance. The requirement to restore availability without custom scripts is met by pairing ALB HTTP health checks with an Auto Scaling replacement action, as in option C.

B. This approach involves custom scripting and manual intervention, which contradicts the requirement of not writing custom scripts or code.

D. Since the NLB does not detect HTTP errors, relying solely on the UnhealthyHostCount metric may not accurately capture the health of the application instances.

Link of other Parts:

