AWS Certified Solutions Architect Associate Exam dumps with Complete Explanation-Part8

Muhammad Hassan Saeed
13 min read · Sep 30, 2023


Question#71

A company runs a shopping application that uses Amazon DynamoDB to store customer information. In case of data corruption, a solutions architect needs to design a solution that meets a recovery point objective (RPO) of 15 minutes and a recovery time objective (RTO) of 1 hour.
What should the solutions architect recommend to meet these requirements?

  • A. Configure DynamoDB global tables. For RPO recovery, point the application to a different AWS Region.
  • B. Configure DynamoDB point-in-time recovery. For RPO recovery, restore to the desired point in time.
  • C. Export the DynamoDB data to Amazon S3 Glacier on a daily basis. For RPO recovery, import the data from S3 Glacier to DynamoDB.
  • D. Schedule Amazon Elastic Block Store (Amazon EBS) snapshots for the DynamoDB table every 15 minutes. For RPO recovery, restore the DynamoDB table by using the EBS snapshot.

Reference/Arguments:

Point-in-time recovery helps protect your DynamoDB tables from accidental write or delete operations. With point-in-time recovery, you don’t have to worry about creating, maintaining, or scheduling on-demand backups.

With point-in-time recovery, you can restore a table to any point in time during the last 35 days. DynamoDB maintains incremental backups of your table.

The correct answer is B: configure DynamoDB point-in-time recovery and, for RPO recovery, restore to the desired point in time.

Arguments about others:

A. Configuring DynamoDB global tables would provide high availability and replication of data across multiple AWS Regions, but it doesn’t directly address data corruption or the specified RPO and RTO requirements.

C. Exporting DynamoDB data to Amazon S3 Glacier on a daily basis doesn’t meet the RPO of 15 minutes, and restoring data from Glacier to DynamoDB can take a considerable amount of time, making it unsuitable for an RTO of 1 hour.

D. Scheduling Amazon Elastic Block Store (Amazon EBS) snapshots for the DynamoDB table every 15 minutes is not a valid approach for DynamoDB, as DynamoDB is a managed NoSQL database service, and EBS snapshots are not applicable for it.
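
As a rough illustration of option B, the sketch below (Python with boto3; the table name and restore timestamp are hypothetical) enables point-in-time recovery and restores the table to a chosen moment:

```python
import boto3
from datetime import datetime, timezone

dynamodb = boto3.client("dynamodb")

# Enable point-in-time recovery on the table (a one-time setting).
dynamodb.update_continuous_backups(
    TableName="CustomerInfo",  # hypothetical table name
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# After corruption is detected, restore to any point within the last 35 days.
dynamodb.restore_table_to_point_in_time(
    SourceTableName="CustomerInfo",
    TargetTableName="CustomerInfo-restored",
    RestoreDateTime=datetime(2023, 9, 30, 12, 0, tzinfo=timezone.utc),
)
```

The restore creates a new table, so meeting the 1-hour RTO also means repointing the application (or its configuration) at the restored table.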

Question#72

A company runs a photo processing application that needs to frequently upload and download pictures from Amazon S3 buckets that are located in the same AWS Region. A solutions architect has noticed an increased cost in data transfer fees and needs to implement a solution to reduce these costs.
How can the solutions architect meet this requirement?

  • A. Deploy Amazon API Gateway into a public subnet and adjust the route table to route S3 calls through it.
  • B. Deploy a NAT gateway into a public subnet and attach an endpoint policy that allows access to the S3 buckets.
  • C. Deploy the application into a public subnet and allow it to route through an internet gateway to access the S3 buckets.
  • D. Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets.

Reference/Arguments:

There is no additional charge for using gateway endpoints. An S3 gateway endpoint routes traffic to Amazon S3 over the AWS private network, so the application avoids NAT gateway data processing and internet egress charges.

Arguments about others:

A. Amazon API Gateway is used for building and exposing APIs, not for optimizing traffic between an application and S3 in the same Region, so deploying it into a public subnet would not reduce the data transfer costs.

B. A NAT gateway provides outbound internet access for private subnets and bills a per-GB data processing charge for all traffic that passes through it, so routing S3 traffic through a NAT gateway would add cost rather than reduce it. Endpoint policies apply to VPC endpoints, not to NAT gateways.

C. Deploying the application into a public subnet and routing to S3 through an internet gateway would not reduce data transfer costs and needlessly exposes the instances to the internet; the gateway endpoint provides private S3 access at no additional charge.
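
For reference, a gateway endpoint for S3 can be created with a call such as the following (Python with boto3; the Region, VPC ID, route table ID, and bucket name are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an S3 gateway endpoint and associate it with the application subnet's route table.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder route table ID
    PolicyDocument=(
        '{"Version":"2012-10-17","Statement":[{"Effect":"Allow",'
        '"Principal":"*","Action":"s3:*",'
        '"Resource":["arn:aws:s3:::my-photo-bucket","arn:aws:s3:::my-photo-bucket/*"]}]}'
    ),
)
```

Once the route table association is in place, S3 traffic from the application stays on the AWS network instead of going through a NAT gateway or the internet.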

Question#73

A company recently launched Linux-based application instances on Amazon EC2 in a private subnet and launched a Linux-based bastion host on an Amazon EC2 instance in a public subnet of a VPC. A solutions architect needs to connect from the on-premises network, through the company’s internet connection, to the bastion host, and to the application servers. The solutions architect must make sure that the security groups of all the EC2 instances will allow that access.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)

  • A. Replace the current security group of the bastion host with one that only allows inbound access from the application instances.
  • B. Replace the current security group of the bastion host with one that only allows inbound access from the internal IP range for the company.
  • C. Replace the current security group of the bastion host with one that only allows inbound access from the external IP range for the company.
  • D. Replace the current security group of the application instances with one that allows inbound SSH access from only the private IP address of the bastion host.
  • E. Replace the current security group of the application instances with one that allows inbound SSH access from only the public IP address of the bastion host.

Reference/Arguments:

Option C restricts inbound access to the bastion host to the company's external (public) IP range, which is the source address that on-premises users present when they connect over the internet; this ensures that only authorized users from the on-premises network can reach the bastion host. Option D then allows inbound SSH to the application instances only from the private IP address of the bastion host, because traffic from the bastion host to instances inside the VPC uses private addressing; this restricts SSH access to the application servers to the bastion host alone.
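
A minimal sketch of the two rules (Python with boto3; the security group IDs, CIDR range, and bastion private IP are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Option C: allow SSH to the bastion host only from the company's external IP range.
ec2.authorize_security_group_ingress(
    GroupId="sg-0bastion123456789",  # hypothetical bastion host security group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "Company external range"}],
    }],
)

# Option D: allow SSH to the application instances only from the bastion's private IP.
ec2.authorize_security_group_ingress(
    GroupId="sg-0app0123456789abc",  # hypothetical application security group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "10.0.1.10/32", "Description": "Bastion host private IP"}],
    }],
)
```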

Question#74

A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the company.
How should security groups be configured in this situation? (Choose two.)

  • A. Configure the security group for the web tier to allow inbound traffic on port 443 from 0.0.0.0/0.
  • B. Configure the security group for the web tier to allow outbound traffic on port 443 from 0.0.0.0/0.
  • C. Configure the security group for the database tier to allow inbound traffic on port 1433 from the security group for the web tier.
  • D. Configure the security group for the database tier to allow outbound traffic on ports 443 and 1433 to the security group for the web tier.
  • E. Configure the security group for the database tier to allow inbound traffic on ports 443 and 1433 from the security group for the web tier.

Reference/Arguments:

The web tier must accept HTTPS requests from clients anywhere on the internet, so its security group allows inbound traffic on port 443 from 0.0.0.0/0 (option A). The database tier should accept only Microsoft SQL Server traffic, and only from the web tier, so its security group allows inbound traffic on port 1433 with the web tier's security group as the source (option C). Security groups are stateful, so no extra outbound rules are needed for return traffic, and opening port 443 on the database tier (option E) would be unnecessary.
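
A sketch of the two rules, with the database tier referencing the web tier's security group as the traffic source (Python with boto3; the group IDs are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Option A: the web tier accepts HTTPS from anywhere.
ec2.authorize_security_group_ingress(
    GroupId="sg-0webtier012345678",  # hypothetical web tier security group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Option C: the database tier accepts SQL Server traffic only from the web tier group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0dbtier0123456789",  # hypothetical database tier security group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,
        "UserIdGroupPairs": [{"GroupId": "sg-0webtier012345678"}],
    }],
)
```

Referencing a security group as the source (rather than a CIDR range) keeps the rule valid even as web tier instances are replaced or scaled.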

Question#75

A company receives 10 TB of instrumentation data each day from several machines located at a single factory. The data consists of JSON files stored on a storage area network (SAN) in an on-premises data center located within the factory. The company wants to send this data to Amazon S3 where it can be accessed by several additional systems that provide critical near-real-time analytics. A secure transfer is important because the data is considered sensitive.
Which solution offers the MOST reliable data transfer?

  • A. AWS DataSync over public internet
  • B. AWS DataSync over AWS Direct Connect
  • C. AWS Database Migration Service (AWS DMS) over public internet
  • D. AWS Database Migration Service (AWS DMS) over AWS Direct Connect

Reference/Arguments:

AWS DataSync is a service that simplifies, automates, and accelerates data transfer between on-premises storage and AWS. In addition to DataSync's built-in security measures, the data can be moved from on-premises storage to AWS over AWS Direct Connect, which provides a private, dedicated network path with more consistent performance than the public internet.

DataSync uses an agent to transfer data from your on-premises storage.

Arguments about others:

Option A, AWS DataSync over the public internet, is not as reliable as using Direct Connect, as it can be subject to potential network issues or congestion.

Option C, AWS Database Migration Service (DMS) over the public internet, is not suitable: DMS is designed for migrating databases, not for transferring large volumes of JSON files from a storage area network (SAN), and the public internet path adds reliability risk.

Option D, AWS DMS over AWS Direct Connect, fixes the network path but still uses the wrong tool: DMS migrates databases and is not an efficient way to move file data from a SAN.
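
Assuming the SAN data is exposed over NFS and a DataSync agent is already deployed and activated in the factory, the transfer could be wired up roughly as follows (Python with boto3; all ARNs, hostnames, bucket names, and paths are placeholders):

```python
import boto3

datasync = boto3.client("datasync")

# Source: the on-premises NFS export backed by the SAN, reached through the DataSync agent.
source = datasync.create_location_nfs(
    ServerHostname="san-nfs.factory.local",  # placeholder hostname
    Subdirectory="/instrumentation",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0example"]},
)

# Destination: the S3 bucket read by the near-real-time analytics systems.
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::factory-instrumentation-data",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)

# Create and start the transfer task.
task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="factory-json-to-s3",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```

Whether the transfer actually rides the Direct Connect link depends on the agent's network path to AWS, for example a DataSync VPC endpoint reachable over the Direct Connect connection.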

Question#76

A company needs to configure a real-time data ingestion architecture for its application. The company needs an API, a process that transforms data as the data is streamed, and a storage solution for the data.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Deploy an Amazon EC2 instance to host an API that sends data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.
  • B. Deploy an Amazon EC2 instance to host an API that sends data to AWS Glue. Stop source/destination checking on the EC2 instance. Use AWS Glue to transform the data and to send the data to Amazon S3.
  • C. Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.
  • D. Configure an Amazon API Gateway API to send data to AWS Glue. Use AWS Lambda functions to transform the data. Use AWS Glue to send the data to Amazon S3.

Reference/Arguments:

Stream ingestion: Amazon API Gateway

Stream storage: Amazon Kinesis Data Streams and Amazon Kinesis Data Firehose

Stream processing: AWS Lambda

Destination: Amazon S3
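
One concrete piece of this pipeline is the Lambda function that Kinesis Data Firehose invokes to transform each batch of records before delivery to S3. A minimal sketch (the transformation applied to each record is a placeholder):

```python
import base64
import json

def lambda_handler(event, context):
    """Kinesis Data Firehose data-transformation Lambda (sketch)."""
    output = []
    for record in event["records"]:
        # Firehose delivers each record base64-encoded.
        payload = json.loads(base64.b64decode(record["data"]))
        payload["processed"] = True  # placeholder transformation
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {"records": output}
```

Because API Gateway, Kinesis, Firehose, and Lambda are all fully managed, this design has the least operational overhead of the four options.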

Question#77

A company needs to keep user transaction data in an Amazon DynamoDB table. The company must retain the data for 7 years.
What is the MOST operationally efficient solution that meets these requirements?

  • A. Use DynamoDB point-in-time recovery to back up the table continuously.
  • B. Use AWS Backup to create backup schedules and retention policies for the table.
  • C. Create an on-demand backup of the table by using the DynamoDB console. Store the backup in an Amazon S3 bucket. Set an S3 Lifecycle configuration for the S3 bucket.
  • D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to invoke an AWS Lambda function. Configure the Lambda function to back up the table and to store the backup in an Amazon S3 bucket. Set an S3 Lifecycle configuration for the S3 bucket.

Reference/Arguments:

On-demand backups are designed for long-term archiving and retention, which is typically used to help customers meet compliance and regulatory requirements. AWS Backup adds centralized scheduling and retention policies on top of DynamoDB backups, so a backup plan with a 7-year retention rule meets the requirement without custom code or manual steps.

Arguments about others:

Option A, DynamoDB point-in-time recovery, retains backups for a maximum of 35 days, so it cannot satisfy a 7-year retention requirement on its own.

Option C, creating an on-demand backup of the table and storing it in an S3 bucket, is also a viable option but it requires manual intervention and does not provide the automation and scheduling capabilities of AWS Backup.

Option D, using Amazon EventBridge (CloudWatch Events) and a Lambda function to back up the table and store it in an S3 bucket, is also a viable option but it requires more complex setup and maintenance compared to using AWS Backup.
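
A rough sketch of an AWS Backup plan with a 7-year retention rule for the table (Python with boto3; the plan name, schedule, IAM role ARN, and table ARN are placeholders):

```python
import boto3

backup = boto3.client("backup")

# Backup plan: daily backups, retained for roughly 7 years (2,557 days).
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "dynamodb-7-year-retention",
        "Rules": [{
            "RuleName": "daily",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 * * ? *)",
            "Lifecycle": {"DeleteAfterDays": 2557},
        }],
    }
)

# Assign the DynamoDB table to the plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "transaction-table",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:dynamodb:us-east-1:111122223333:table/UserTransactions"],
    },
)
```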

Question#78

A company is planning to use an Amazon DynamoDB table for data storage. The company is concerned about cost optimization. The table will not be used on most mornings. In the evenings, the read and write traffic will often be unpredictable. When traffic spikes occur, they will happen very quickly.
What should a solutions architect recommend?

  • A. Create a DynamoDB table in on-demand capacity mode.
  • B. Create a DynamoDB table with a global secondary index.
  • C. Create a DynamoDB table with provisioned capacity and auto scaling.
  • D. Create a DynamoDB table in provisioned capacity mode, and configure it as a global table.

Reference/Arguments:

DynamoDB on-demand offers pay-per-request pricing for read and write requests so that you pay only for what you use.
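
On-demand capacity is selected with the PAY_PER_REQUEST billing mode when the table is created (or later with UpdateTable). A minimal sketch with a hypothetical table and key name:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand mode: no capacity planning, pay per request, absorbs sudden traffic spikes.
dynamodb.create_table(
    TableName="EveningTraffic",  # hypothetical table name
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```

With provisioned capacity and auto scaling (option C), the table would still pay for idle mornings and could throttle during very fast spikes while scaling catches up, which is why on-demand mode fits this workload better.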

Question#79

A company recently signed a contract with an AWS Managed Service Provider (MSP) Partner for help with an application migration initiative. A solutions architect needs to share an Amazon Machine Image (AMI) from an existing AWS account with the MSP Partner’s AWS account. The AMI is backed by Amazon Elastic Block Store (Amazon EBS) and uses an AWS Key Management Service (AWS KMS) customer managed key to encrypt EBS volume snapshots.
What is the MOST secure way for the solutions architect to share the AMI with the MSP Partner’s AWS account?

  • A. Make the encrypted AMI and snapshots publicly available. Modify the key policy to allow the MSP Partner’s AWS account to use the key.
  • B. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner’s AWS account only. Modify the key policy to allow the MSP Partner’s AWS account to use the key.
  • C. Modify the launchPermission property of the AMI. Share the AMI with the MSP Partner’s AWS account only. Modify the key policy to trust a new KMS key that is owned by the MSP Partner for encryption.
  • D. Export the AMI from the source account to an Amazon S3 bucket in the MSP Partner’s AWS account, Encrypt the S3 bucket with a new KMS key that is owned by the MSP Partner. Copy and launch the AMI in the MSP Partner’s AWS account.

Reference/Arguments:

You can allow users or roles in a different AWS account to use a KMS key in your account. Cross-account access requires permission in the key policy of the KMS key and in an IAM policy in the external user’s account.
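
In practice, option B has three pieces: the AMI launch permission, the backing snapshot share, and the key policy update. A sketch (Python with boto3; the AMI ID, snapshot ID, key ARN, and account IDs are placeholders, and the exact set of KMS actions may need adjusting, for example adding kms:ReEncrypt* and kms:GenerateDataKey*):

```python
import json
import boto3

ec2 = boto3.client("ec2")
kms = boto3.client("kms")

PARTNER_ACCOUNT = "444455556666"  # placeholder MSP Partner account ID

# Share the AMI with the partner account only (no public access).
ec2.modify_image_attribute(
    ImageId="ami-0abcdef1234567890",
    LaunchPermission={"Add": [{"UserId": PARTNER_ACCOUNT}]},
)

# Share the backing EBS snapshot with the same account.
ec2.modify_snapshot_attribute(
    SnapshotId="snap-0abcdef1234567890",
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=[PARTNER_ACCOUNT],
)

# Add a key policy statement that lets the partner account use the customer managed key.
key_id = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
policy = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"])
policy["Statement"].append({
    "Sid": "AllowMSPPartnerUseOfTheKey",
    "Effect": "Allow",
    "Principal": {"AWS": f"arn:aws:iam::{PARTNER_ACCOUNT}:root"},
    "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
    "Resource": "*",
})
kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(policy))
```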

Question#80

A solutions architect is designing the cloud architecture for a new application being deployed on AWS. The process should run in parallel while adding and removing application nodes as needed based on the number of jobs to be processed. The processor application is stateless. The solutions architect must ensure that the application is loosely coupled and the job items are durably stored.
Which design should the solutions architect use?

  • A. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage.
  • B. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage.
  • C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.
  • D. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic.

Reference/Arguments:

This design follows the best practices for loosely coupled and scalable architecture. By using SQS, the jobs are durably stored in the queue, ensuring they are not lost. The processor application is stateless, which aligns with the design requirement. The AMI allows for consistent deployment of the application. The launch template and ASG facilitate the dynamic scaling of the application based on the number of items in the SQS, ensuring parallel processing of jobs.

Arguments about others:

Options A and D suggest using SNS, which is a push-based publish/subscribe service; it does not durably queue jobs for workers to pull, so the job items would not be durably stored.

Option B suggests using network usage as a scaling metric, which may not be directly related to the number of jobs to be processed. The number of messages in the SQS queue provides a more accurate metric for scaling based on the workload.
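
As a sketch of the scaling piece, a common pattern is a target-tracking policy on a "backlog per instance" custom metric derived from the queue's ApproximateNumberOfMessagesVisible. The snippet below (Python with boto3) assumes that metric is already being published to CloudWatch by a separate process, and the Auto Scaling group, namespace, metric, and queue names are all hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target-tracking policy on a custom "backlog per instance" metric
# (ApproximateNumberOfMessagesVisible divided by running instances, published separately).
autoscaling.put_scaling_policy(
    AutoScalingGroupName="job-processor-asg",   # placeholder ASG name
    PolicyName="scale-on-sqs-backlog",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "BacklogPerInstance",  # hypothetical custom metric
            "Namespace": "JobProcessing",        # hypothetical namespace
            "Dimensions": [{"Name": "QueueName", "Value": "jobs-queue"}],
            "Statistic": "Average",
        },
        "TargetValue": 10.0,  # e.g. aim for about 10 queued jobs per instance
    },
)
```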

