AWS Certified Solutions Architect Associate Exam Dumps with Complete Explanations (Part 5)

Muhammad Hassan Saeed
13 min read · Sep 27, 2023


Question#41

A company’s application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2 instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to improve the performance as much as possible.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create an Auto Scaling group so that EC2 instances can scale out. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
  • B. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
  • C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for each SaaS source to send output data. Configure the S3 bucket as the rule’s target. Create a second EventBridge (CloudWatch Events) rule to send events when the upload to the S3 bucket is complete. Configure an Amazon Simple Notification Service (Amazon SNS) topic as the second rule’s target.
  • D. Create a Docker container to use instead of an EC2 instance. Host the containerized application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon CloudWatch Container Insights to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.

Reference/Arguments:

With Amazon AppFlow, you can automate bi-directional data flows between SaaS applications and AWS services in just a few clicks. Because AppFlow is fully managed and moves data from each SaaS source directly into the S3 bucket, it takes the EC2 instances out of the ingestion path entirely, which improves performance with the least operational overhead.

Arguments about others:

Option A suggests using an Auto Scaling group to scale out EC2 instances, but the instances remain in the ingestion path, so this adds infrastructure to manage without removing the receive-and-upload bottleneck or simplifying the notification step.

Option C involves using Amazon EventBridge (CloudWatch Events) rules for data output and S3 uploads, but it introduces additional complexity with separate rules and does not specifically address the slow application performance.

Option D suggests containerizing the application and using Amazon Elastic Container Service (Amazon ECS) with CloudWatch Container Insights, which may involve more operational overhead and setup compared to the simpler solution provided by Amazon AppFlow.
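
For illustration, the S3-to-SNS notification half of option B needs only a little configuration. The sketch below uses boto3 with a hypothetical bucket name and topic ARN, and assumes the SNS topic's access policy already allows s3.amazonaws.com to publish to it.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical names; AppFlow delivers the SaaS data into this bucket.
bucket_name = "saas-landing-bucket"
topic_arn = "arn:aws:sns:us-east-1:123456789012:upload-complete"

# Publish an event to the SNS topic whenever a new object is created,
# replacing the notification logic that previously ran on the EC2 instance.
s3.put_bucket_notification_configuration(
    Bucket=bucket_name,
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": topic_arn,
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```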

Question#42

A company runs a highly available image-processing application on Amazon EC2 instances in a single VPC. The EC2 instances run inside several subnets across multiple Availability Zones. The EC2 instances do not communicate with each other. However, the EC2 instances download images from Amazon S3 and upload images to Amazon S3 through a single NAT gateway. The company is concerned about data transfer charges.
What is the MOST cost-effective way for the company to avoid Regional data transfer charges?

  • A. Launch the NAT gateway in each Availability Zone.
  • B. Replace the NAT gateway with a NAT instance.
  • C. Deploy a gateway VPC endpoint for Amazon S3.
  • D. Provision an EC2 Dedicated Host to run the EC2 instances.

Reference/Arguments:

Deploying a gateway VPC endpoint for Amazon S3 is the most cost-effective way for the company to avoid Regional data transfer charges. A gateway endpoint lets instances in the VPC communicate with Amazon S3 over the AWS network without an internet gateway or NAT device. There is no additional charge for using gateway endpoints, and traffic to S3 in the same Region incurs no data transfer charge, so the per-GB NAT gateway processing and data transfer costs are eliminated.

Arguments about others:

Option A suggests launching a NAT gateway in each Availability Zone. While this helps with availability and avoids cross-AZ traffic to the NAT gateway, the S3 traffic still passes through the NAT gateways and continues to incur NAT gateway data processing charges.

Option B suggests replacing the NAT gateway with a NAT instance. The data still flows between the instances and S3 through the NAT instance, so data transfer charges remain and operational overhead increases.

Option D suggests provisioning an EC2 Dedicated Host to run the EC2 instances. While this provides dedicated hardware, it does not address data transfer charges at all.
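
As a minimal sketch (the VPC and route table IDs are placeholders), a gateway endpoint for S3 is created once per VPC and attached to the route tables used by the instances' subnets, after which S3 traffic no longer flows through the NAT gateway:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs; use the route tables associated with the instances' subnets.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",  # match your Region
    RouteTableIds=["rtb-0123456789abcdef0", "rtb-0fedcba9876543210"],
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```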

Question#43

A company has an on-premises application that generates a large amount of time-sensitive data that is backed up to Amazon S3. The application has grown, and there are user complaints about internet bandwidth limitations. A solutions architect needs to design a long-term solution that allows for timely backups to Amazon S3 with minimal impact on internet connectivity for internal users.
Which solution meets these requirements?

  • A. Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint.
  • B. Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.
  • C. Order daily AWS Snowball devices. Load the data onto the Snowball devices and return the devices to AWS each day.
  • D. Submit a support ticket through the AWS Management Console. Request the removal of S3 service limits from the account.

Reference/Arguments:

AWS Direct Connect is a network service that allows you to establish a dedicated network connection from your on-premises data center to AWS. This connection bypasses the public Internet and can provide more reliable, lower-latency communication between your on-premises application and Amazon S3. By directing backup traffic through the AWS Direct Connect connection, you can minimize the impact on your internet bandwidth and ensure timely backups to S3.
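
For illustration only, requesting the dedicated connection can be scripted with boto3 (the location code and connection name below are hypothetical); the physical cross-connect at the Direct Connect location and the virtual interfaces still have to be set up afterward:

```python
import boto3

dx = boto3.client("directconnect")

# Hypothetical values; "EqDC2" stands in for your Direct Connect location code.
connection = dx.create_connection(
    location="EqDC2",
    bandwidth="1Gbps",
    connectionName="onprem-backup-to-s3",
)
print(connection["connectionId"], connection["connectionState"])
```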

Arguments about others:

Option A, establishing AWS VPN connections and proxying all traffic through a VPC gateway endpoint, would not minimize the impact on internet bandwidth, because the VPN still runs over the existing public internet connection and backup traffic would continue to compete with internal users.

Option C, ordering daily AWS Snowball devices, avoids the internet but is not a practical long-term solution: shipping devices back and forth every day cannot provide timely backups of time-sensitive data.

Option D, submitting a support ticket to request the removal of S3 service limits, does not address the internet bandwidth limitation and would not ensure timely backups to S3.

Question#44

A company has an Amazon S3 bucket that contains critical data. The company must protect the data from accidental deletion.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)

  • A. Enable versioning on the S3 bucket.
  • B. Enable MFA Delete on the S3 bucket.
  • C. Create a bucket policy on the S3 bucket.
  • D. Enable default encryption on the S3 bucket.
  • E. Create a lifecycle policy for the objects in the S3 bucket.

Reference/Arguments:

Versioning-enabled buckets can help you recover objects from accidental deletion or overwrite.

MFA Delete adds an extra layer of friction and security by requiring the user who permanently deletes an object version (or changes the bucket's versioning state) to prove physical possession of an MFA device and supply its current code, which helps prevent accidental deletions.
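
A minimal sketch of both settings in one call, assuming it is made with the bucket owner's root credentials (MFA Delete can only be enabled by the root user) and using a placeholder MFA device serial and code:

```python
import boto3

s3 = boto3.client("s3")

# Enables versioning and MFA Delete together. The MFA string is the device
# serial number, a space, and the current code (values are placeholders).
s3.put_bucket_versioning(
    Bucket="critical-data-bucket",
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={
        "Status": "Enabled",
        "MFADelete": "Enabled",
    },
)
```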

Arguments about others:

The other options (C, D, E) address other security and data-management concerns, such as access control, encryption at rest, and object lifecycle, but they do not directly protect against accidental deletion.

Question#45

A company has a data ingestion workflow that consists of the following:
• An Amazon Simple Notification Service (Amazon SNS) topic for notifications about new data deliveries
• An AWS Lambda function to process the data and record metadata
The company observes that the ingestion workflow fails occasionally because of network connectivity issues. When such a failure occurs, the Lambda function does not ingest the corresponding data unless the company manually reruns the job.
Which combination of actions should a solutions architect take to ensure that the Lambda function ingests all data in the future? (Choose two.)

  • A. Deploy the Lambda function in multiple Availability Zones.
  • B. Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic.
  • C. Increase the CPU and memory that are allocated to the Lambda function.
  • D. Increase provisioned throughput for the Lambda function.
  • E. Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue.

Reference/Arguments:

With Amazon SQS, you can offload tasks from one component of your application by sending them to a queue and processing them asynchronously. Subscribing a queue to the SNS topic (option B) means every delivery notification is stored durably, and having the Lambda function read from that queue (option E) means messages that fail because of transient network issues return to the queue and are retried instead of being lost.
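
A rough sketch of options B and E together, using hypothetical ARNs and function name; it also assumes the queue's access policy already allows the SNS topic to send messages to it:

```python
import boto3

sns = boto3.client("sns")
lambda_client = boto3.client("lambda")

topic_arn = "arn:aws:sns:us-east-1:123456789012:new-data-deliveries"
queue_arn = "arn:aws:sqs:us-east-1:123456789012:ingestion-queue"

# B: subscribe the queue to the topic so every notification is stored durably.
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# E: let Lambda poll the queue; messages that fail processing return to the
# queue and are retried instead of being lost.
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="ingest-metadata",
    BatchSize=10,
)
```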

Arguments about others:

A. Deploying the Lambda function in multiple Availability Zones helps with high availability but doesn’t address the issue of occasional network connectivity failures.

C. Increasing CPU and memory allocated to the Lambda function may improve its performance but does not solve the problem of handling network connectivity issues.

D. Increasing provisioned throughput for the Lambda function is not a standard configuration for Lambda, and it doesn’t directly address the network connectivity issues or data ingestion reliability.

Question#46

A company has an application that provides marketing services to stores. The services are based on previous purchases by store customers. The stores upload transaction data to the company through SFTP, and the data is processed and analyzed to generate new marketing offers. Some of the files can exceed 200 GB in size.
Recently, the company discovered that some of the stores have uploaded files that contain personally identifiable information (PII) that should not have been included. The company wants administrators to be alerted if PII is shared again. The company also wants to automate remediation.
What should a solutions architect do to meet these requirements with the LEAST development effort?

  • A. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Inspector to scan the objects in the bucket. If objects contain PII, trigger an S3 Lifecycle policy to remove the objects that contain PII.
  • B. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Macie to scan the objects in the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.
  • C. Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.
  • D. Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If objects contain PII, use Amazon Simple Email Service (Amazon SES) to trigger a notification to the administrators and trigger an S3 Lifecycle policy to remove the objects that contain PII.

Reference/Arguments:

Amazon Macie is a data security service that uses machine learning (ML) and pattern matching to discover and help protect your sensitive data. Macie findings can be routed through Amazon EventBridge to an SNS topic to alert administrators, with no custom scanning code to build or maintain, which meets the requirement with the least development effort.
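
As a hedged sketch (the account ID and bucket name are placeholders), enabling Macie and starting a sensitive-data discovery job on the transfer bucket takes only a couple of calls; findings can then be routed to administrators through EventBridge and SNS:

```python
import boto3

macie = boto3.client("macie2")

# Enable Macie in the account (skip this call if Macie is already enabled).
macie.enable_macie()

# One-time sensitive-data discovery job over the SFTP transfer bucket.
macie.create_classification_job(
    jobType="ONE_TIME",
    name="scan-sftp-uploads-for-pii",
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "123456789012", "buckets": ["sftp-transfer-bucket"]}
        ]
    },
)
```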

Arguments about others:

Option A suggests using Amazon Inspector, but Inspector scans compute resources such as EC2 instances, container images, and Lambda functions for software vulnerabilities; it does not scan S3 objects for PII the way Amazon Macie does, so it would require additional custom development.

Options C and D involve writing and maintaining custom scanning algorithms (and, for D, an additional email integration), which increases development effort and complexity.

Question#47

A company needs guaranteed Amazon EC2 capacity in three specific Availability Zones in a specific AWS Region for an upcoming event that will last 1 week.
What should the company do to guarantee the EC2 capacity?

  • A. Purchase Reserved Instances that specify the Region needed.
  • B. Create an On-Demand Capacity Reservation that specifies the Region needed.
  • C. Purchase Reserved Instances that specify the Region and three Availability Zones needed.
  • D. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed.

Reference/Arguments:

On-Demand Capacity Reservations enable you to reserve compute capacity for your Amazon EC2 instances in a specific Availability Zone for any duration.
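
A minimal sketch of option D with placeholder values: one Capacity Reservation per Availability Zone, released automatically at the end of the one-week event.

```python
import datetime
import boto3

ec2 = boto3.client("ec2")

# Hypothetical values; one reservation is created per Availability Zone.
end = datetime.datetime(2023, 10, 8, tzinfo=datetime.timezone.utc)
for az in ["us-east-1a", "us-east-1b", "us-east-1c"]:
    ec2.create_capacity_reservation(
        InstanceType="m5.large",
        InstancePlatform="Linux/UNIX",
        AvailabilityZone=az,
        InstanceCount=10,
        EndDateType="limited",  # released automatically at EndDate
        EndDate=end,            # one week after the event starts
    )
```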

Arguments about others:

Option A, purchasing Reserved Instances that specify the Region needed, would not guarantee capacity in specific Availability Zones.

Option B, creating an On-Demand Capacity Reservation that specifies the Region needed, would not guarantee capacity in specific Availability Zones.

Option C, purchasing zonal Reserved Instances for the three Availability Zones, would reserve capacity, but Reserved Instances require a one- or three-year commitment, which does not fit a one-week event.

Question#48

A company’s website uses an Amazon EC2 instance store for its catalog of items. The company wants to make sure that the catalog is highly available and that the catalog is stored in a durable location.
What should a solutions architect do to meet these requirements?

  • A. Move the catalog to Amazon ElastiCache for Redis.
  • B. Deploy a larger EC2 instance with a larger instance store.
  • C. Move the catalog from the instance store to Amazon S3 Glacier Deep Archive.
  • D. Move the catalog to an Amazon Elastic File System (Amazon EFS) file system.

Reference/Arguments:

Amazon EFS lets you securely and reliably access your files with a fully managed file system designed for 99.999999999 percent (11 9s) of durability and up to 99.99 percent (4 9s) of availability, so moving the catalog to EFS makes it both durable and highly available across Availability Zones.
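
As a rough sketch with placeholder IDs, an EFS file system plus one mount target per Availability Zone gives every instance shared, durable access to the catalog:

```python
import time

import boto3

efs = boto3.client("efs")

# Create the file system (encrypted at rest).
fs = efs.create_file_system(
    CreationToken="catalog-fs",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# Wait until the file system is available before adding mount targets.
fs_id = fs["FileSystemId"]
while efs.describe_file_systems(FileSystemId=fs_id)["FileSystems"][0][
    "LifeCycleState"
] != "available":
    time.sleep(5)

# One mount target per AZ; subnet and security group IDs are placeholders.
for subnet_id in ["subnet-0aaa1111", "subnet-0bbb2222"]:
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )
```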

Arguments about others:

Option A is not suitable for storing the catalog as ElastiCache is an in-memory data store primarily used for caching and cannot provide durable storage for the catalog.

Option B would not address the requirement for high availability or durability. Instance stores are ephemeral storage attached to EC2 instances and are not durable or replicated.

Option C would provide durability but not high availability. S3 Glacier Deep Archive is designed for long-term archival storage, and accessing the data from Glacier can have significant retrieval times and costs.

Question#49

A company stores call transcript files on a monthly basis. Users access the files randomly within 1 year of the call, but users access the files infrequently after 1 year. The company wants to optimize its solution by giving users the ability to query and retrieve files that are less than 1-year-old as quickly as possible. A delay in retrieving older files is acceptable.
Which solution will meet these requirements MOST cost-effectively?

  • A. Store individual files with tags in Amazon S3 Glacier Instant Retrieval. Query the tags to retrieve the files from S3 Glacier Instant Retrieval.
  • B. Store individual files in Amazon S3 Intelligent-Tiering. Use S3 Lifecycle policies to move the files to S3 Glacier Flexible Retrieval after 1 year. Query and retrieve the files that are in Amazon S3 by using Amazon Athena. Query and retrieve the files that are in S3 Glacier by using S3 Glacier Select.
  • C. Store individual files with tags in Amazon S3 Standard storage. Store search metadata for each archive in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3 Glacier Instant Retrieval after 1 year. Query and retrieve the files by searching for metadata from Amazon S3.
  • D. Store individual files in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3 Glacier Deep Archive after 1 year. Store search metadata in Amazon RDS. Query the files from Amazon RDS. Retrieve the files from S3 Glacier Deep Archive.

Reference/Arguments:

Keywords: Users access the files randomly

S3 Intelligent-Tiering is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. You can use S3 Intelligent-Tiering as the default storage class for virtually any workload, especially data lakes, data analytics, new applications, and user-generated content

Keyword: A delay in retrieving older files is acceptable.

  • S3 Glacier Flexible Retrieval is cheaper than Instant Retrieval.

S3 Glacier Flexible Retrieval delivers low-cost storage, up to 10% lower cost (than S3 Glacier Instant Retrieval), for archive data that is accessed 1–2 times per year and is retrieved asynchronously.
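
A sketch of option B's storage side with hypothetical names: new transcripts are uploaded directly to S3 Intelligent-Tiering, and a lifecycle rule transitions them to S3 Glacier Flexible Retrieval (storage class "GLACIER" in the S3 API) after one year.

```python
import boto3

s3 = boto3.client("s3")
bucket = "call-transcripts-bucket"  # hypothetical

# Upload new transcript files directly into Intelligent-Tiering.
s3.put_object(
    Bucket=bucket,
    Key="transcripts/2023-09/call-0001.json",
    Body=b"...",
    StorageClass="INTELLIGENT_TIERING",
)

# After 1 year, move files to Glacier Flexible Retrieval (API name: GLACIER).
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-one-year",
                "Status": "Enabled",
                "Filter": {"Prefix": "transcripts/"},
                "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```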

Question#50

A company has a production workload that runs on 1,000 Amazon EC2 Linux instances. The workload is powered by third-party software. The company needs to patch the third-party software on all EC2 instances as quickly as possible to remediate a critical security vulnerability.
What should a solutions architect do to meet these requirements?

  • A. Create an AWS Lambda function to apply the patch to all EC2 instances.
  • B. Configure AWS Systems Manager Patch Manager to apply the patch to all EC2 instances.
  • C. Schedule an AWS Systems Manager maintenance window to apply the patch to all EC2 instances.
  • D. Use AWS Systems Manager Run Command to run a custom command that applies the patch to all EC2 instances.

Reference/Arguments:

Run Command allows you to automate common administrative tasks and perform one-time configuration changes at scale.
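
A hedged sketch of option D using hypothetical tag values and a placeholder patch command: Run Command targets all 1,000 instances at once by tag and applies the vendor's patch immediately, with concurrency and error thresholds to keep the rollout safe.

```python
import boto3

ssm = boto3.client("ssm")

# Tag key/value and the patch command are placeholders for the third-party
# vendor's actual patch procedure.
response = ssm.send_command(
    Targets=[{"Key": "tag:Environment", "Values": ["production"]}],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["/opt/vendor/bin/apply-security-patch.sh"]},
    MaxConcurrency="10%",  # patch 10% of the fleet at a time
    MaxErrors="1%",        # stop if more than 1% of instances fail
    Comment="Critical third-party security patch",
)
print(response["Command"]["CommandId"])
```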

Arguments about others:

Option A, creating an AWS Lambda function, would require significant custom development, including a way to reach and execute commands on each instance, and does not provide the fleet-wide execution, concurrency control, and reporting that Run Command offers out of the box.

Option B, AWS Systems Manager Patch Manager, is built around patch baselines for operating system and selected application patches; it suits ongoing, scheduled patch management rather than immediately pushing a specific third-party vendor patch to the whole fleet.

Option C, scheduling an AWS Systems Manager maintenance window, would delay the patch until the next window, so it does not remediate the critical vulnerability as quickly as possible.


Muhammad Hassan Saeed

Greetings! I'm a passionate AWS DevOps Engineer with hands-on experience across most DevOps tools.