AWS Certified Solutions Architect Associate Exam Dumps with Complete Explanation - Part 9

Muhammad Hassan Saeed
12 min read · Oct 1, 2023


Question#81

A company’s dynamic website is hosted using on-premises servers in the United States. The company is launching its product in Europe, and it wants to optimize site loading times for new European users. The site’s backend must remain in the United States. The product is being launched in a few days, and an immediate solution is needed.
What should the solutions architect recommend?

  • A. Launch an Amazon EC2 instance in us-east-1 and migrate the site to it.
  • B. Move the website to Amazon S3. Use Cross-Region Replication between Regions.
  • C. Use Amazon CloudFront with a custom origin pointing to the on-premises servers.
  • D. Use an Amazon Route 53 geoproximity routing policy pointing to on-premises servers.

Reference/Arguments:

Option C is correct. Amazon CloudFront can use a custom origin that points to the on-premises servers, so the site can be accelerated for European users within days and without migrating the backend. Fast discovery of products via search and browse is critical, and performance improvements for applications like this translate directly into revenue and end-user loyalty. CloudFront's support for dynamic content and transaction acceleration makes such applications perform well under high demand, and its extensive options for cookie and query string handling, cache key modification, and CDN and client-side cache control let you maximize what is cached and what comes directly from the origin.
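
As a rough sketch of option C, a CloudFront distribution with a custom origin can be created with boto3. The origin domain name and caller reference below are placeholders; the cache policy ID is the AWS managed "CachingDisabled" policy, a common starting point for fully dynamic content:

```python
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "eu-launch-001",  # any unique string
        "Comment": "Accelerate dynamic site for European users",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "on-prem-origin",
                "DomainName": "www.example.com",  # hypothetical on-premises hostname
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "on-prem-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # AWS managed "CachingDisabled" policy; dynamic requests still gain
            # TLS termination and persistent connections at nearby edge locations.
            "CachePolicyId": "4135ea2d-6df8-44a3-9df3-4b5a84be39ad",
        },
    }
)
print(response["Distribution"]["DomainName"])  # point DNS at this CloudFront domain
```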

Arguments about others:

Option A (launch an Amazon EC2 instance in us-east-1 and migrate the site to it) keeps the site in the United States, so it would not improve loading times for European users, and a migration could not be completed in a few days.

Option B (move the website to Amazon S3 and use Cross-Region Replication between Regions) would not work because Amazon S3 can host only static websites and this site is dynamic; setting up the migration and replication would also take too long.

Option D (use an Amazon Route 53 geoproximity routing policy pointing to on-premises servers) would not help because all traffic would still be routed to the same US-based servers, so loading times in Europe would not improve.

Question#82

A company wants to reduce the cost of its existing three-tier web architecture. The web, application, and database servers are running on Amazon EC2 instances for the development, test, and production environments. The EC2 instances average 30% CPU utilization during peak hours and 10% CPU utilization during non-peak hours.
The production EC2 instances run 24 hours a day. The development and test EC2 instances run for at least 8 hours each day. The company plans to implement automation to stop the development and test EC2 instances when they are not in use.
Which EC2 instance purchasing solution will meet the company’s requirements MOST cost-effectively?

  • A. Use Spot Instances for the production EC2 instances. Use Reserved Instances for the development and test EC2 instances.
  • B. Use Reserved Instances for the production EC2 instances. Use On-Demand Instances for the development and test EC2 instances.
  • C. Use Spot blocks for the production EC2 instances. Use Reserved Instances for the development and test EC2 instances.
  • D. Use On-Demand Instances for the production EC2 instances. Use Spot blocks for the development and test EC2 instances.

Reference/Arguments:

Option B is correct. Reserved Instances provide a significant discount for the production instances, which run 24 hours a day. On-Demand Instances let you pay for compute capacity by the hour or second with no long-term commitments, which suits the development and test instances that run only part of each day and will be stopped automatically when not in use. Spot Instances can be interrupted by AWS with two minutes of notice, so they are not appropriate for the production workload (options A and C), and Spot blocks are not suited to instances that must start and stop on the company's own schedule (option D).
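
As a rough illustration of why option B wins, here is the arithmetic with hypothetical hourly rates (real prices vary by instance type and Region):

```python
# Hypothetical rates for one instance; real prices vary by type and Region.
ON_DEMAND_RATE = 0.10  # USD per hour, billed only while running
RESERVED_RATE = 0.06   # effective USD per hour for a 1-year Reserved Instance

HOURS_PER_MONTH = 730

# Production runs 24/7, so the Reserved Instance discount applies to every hour.
prod_on_demand = ON_DEMAND_RATE * HOURS_PER_MONTH   # ~$73.00
prod_reserved = RESERVED_RATE * HOURS_PER_MONTH     # ~$43.80

# Dev/test runs about 8 hours per working day and is stopped otherwise.
# On-Demand bills only the running hours; a Reserved Instance is paid for
# around the clock whether or not the instance is running.
dev_hours = 8 * 22
dev_on_demand = ON_DEMAND_RATE * dev_hours          # ~$17.60
dev_reserved = RESERVED_RATE * HOURS_PER_MONTH      # ~$43.80

print(f"Production: on-demand ${prod_on_demand:.2f} vs reserved ${prod_reserved:.2f}")
print(f"Dev/test:   on-demand ${dev_on_demand:.2f} vs reserved ${dev_reserved:.2f}")
```

With these numbers, Reserved Instances win for the always-on production tier, while On-Demand wins for development and test instances that are stopped when idle.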

Question#83

A company has a production web application in which users upload documents through a web interface or a mobile app. According to a new regulatory requirement, new documents cannot be modified or deleted after they are stored.
What should a solutions architect do to meet this requirement?

  • A. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning and S3 Object Lock enabled.
  • B. Store the uploaded documents in an Amazon S3 bucket. Configure an S3 Lifecycle policy to archive the documents periodically.
  • C. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning enabled. Configure an ACL to restrict all access to read-only.
  • D. Store the uploaded documents on an Amazon Elastic File System (Amazon EFS) volume. Access the data by mounting the volume in read-only mode.

Reference/Arguments:

You can use S3 Object Lock to store objects using a write-once-read-many (WORM) model. Object Lock can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely.

Versioning-enabled buckets can help you recover objects from accidental deletion or overwrite. For example, if you delete an object, Amazon S3 inserts a delete marker instead of removing the object permanently. The delete marker becomes the current object version.
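
A minimal sketch of option A with boto3, assuming a hypothetical bucket name and a seven-year default retention (enabling Object Lock at bucket creation also turns on S3 Versioning):

```python
import boto3

s3 = boto3.client("s3")

# Object Lock can only be enabled when the bucket is created;
# this also enables S3 Versioning automatically.
s3.create_bucket(Bucket="example-regulated-docs", ObjectLockEnabledForBucket=True)

# Apply a default retention rule so every newly stored document is
# WORM-protected: it cannot be overwritten or deleted during retention.
s3.put_object_lock_configuration(
    Bucket="example-regulated-docs",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)
```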

Arguments about others:

Option B, storing the documents in an S3 bucket and configuring an S3 Lifecycle policy to archive them periodically, would not prevent the documents from being modified or deleted.

Option C, storing the documents in an S3 bucket with S3 Versioning enabled and configuring an ACL to restrict all access to read-only, would also not meet the requirement. An ACL only controls who can access the objects; anyone with sufficient permissions could still overwrite or delete them, and versioning by itself does not block deletion.

Option D, storing the documents on an Amazon Elastic File System (Amazon EFS) volume mounted in read-only mode, only blocks changes made through that mount. Any client that mounts the volume with write access could still modify or delete the files, so it does not enforce the WORM requirement.

Question#84

A company has several web servers that need to frequently access a common Amazon RDS MySQL Multi-AZ DB instance. The company wants a secure method for the web servers to connect to the database while meeting a security requirement to rotate user credentials frequently.
Which solution meets these requirements?

  • A. Store the database user credentials in AWS Secrets Manager. Grant the necessary IAM permissions to allow the web servers to access AWS Secrets Manager.
  • B. Store the database user credentials in AWS Systems Manager OpsCenter. Grant the necessary IAM permissions to allow the web servers to access OpsCenter.
  • C. Store the database user credentials in a secure Amazon S3 bucket. Grant the necessary IAM permissions to allow the web servers to retrieve credentials and access the database.
  • D. Store the database user credentials in files encrypted with AWS Key Management Service (AWS KMS) on the web server file system. The web server should be able to decrypt the files and access the database.

Reference/Arguments:

Amazon RDS integrates with Secrets Manager to manage master user passwords for your DB instances and Multi-AZ DB clusters.

To allow the web servers to retrieve the credentials, create an IAM policy that grants the secretsmanager:GetSecretValue action on the secret and attach it to the role that the web servers use. Secrets Manager can also rotate the credentials automatically on a schedule, which satisfies the requirement to rotate user credentials frequently.
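
A minimal sketch of how the web servers might fetch the credentials at connection time (the secret name is hypothetical; RDS-managed secrets store a JSON document with username and password keys):

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Fetch the current credentials each time a connection is opened, so
# rotated passwords are picked up without redeploying the web servers.
resp = secrets.get_secret_value(SecretId="prod/mysql/webapp")  # hypothetical name
creds = json.loads(resp["SecretString"])

db_user = creds["username"]
db_password = creds["password"]
# ...open the MySQL connection with db_user / db_password...
```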

Arguments about others:

Option B (AWS Systems Manager OpsCenter) is not designed for storing and managing database credentials. It is more focused on managing incidents and operational issues.

Option C (Storing credentials in an S3 bucket) is not recommended for storing sensitive database credentials because S3 buckets are not designed to securely manage secrets, and access control might be more complex to manage securely.

Option D (Storing credentials in files encrypted with AWS KMS on the web server file system) is less secure and more complex to manage because it involves storing credentials on the web server itself, which is vulnerable to various security risks. AWS Secrets Manager provides a more secure and centralized way to manage and rotate credentials.

Question#85

A company hosts an application on AWS Lambda functions that are invoked by an Amazon API Gateway API. The Lambda functions save customer data to an Amazon Aurora MySQL database. Whenever the company upgrades the database, the Lambda functions fail to establish database connections until the upgrade is complete. The result is that customer data is not recorded for some of the events.
A solutions architect needs to design a solution that stores customer data that is created during database upgrades.
Which solution will meet these requirements?

  • A. Provision an Amazon RDS proxy to sit between the Lambda functions and the database. Configure the Lambda functions to connect to the RDS proxy.
  • B. Increase the run time of the Lambda functions to the maximum. Create a retry mechanism in the code that stores the customer data in the database.
  • C. Persist the customer data to Lambda local storage. Configure new Lambda functions to scan the local storage to save the customer data to the database.
  • D. Store the customer data in an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Create a new Lambda function that polls the queue and stores the customer data in the database.

Reference/Arguments:

If the Lambda functions encounter errors during the database upgrade, they can still push messages to the SQS queue. The new Lambda function responsible for storing data can have retry logic and error handling to ensure that all data is eventually saved to the database when it becomes available.
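
A minimal sketch of option D, assuming a hypothetical queue URL and that each incoming event carries a unique ID usable for deduplication:

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/customer-data.fifo"  # hypothetical

def handler(event, context):
    # Enqueue the customer record durably instead of writing straight to
    # Aurora. If the database is mid-upgrade, the message waits in the queue
    # until the consumer Lambda function can store it.
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(event),
        MessageGroupId="customer-data",             # FIFO queues require a group ID
        MessageDeduplicationId=event["requestId"],  # hypothetical unique event ID
    )
```

A second Lambda function, triggered by the queue, reads each message and writes it to Aurora, retrying until the database is reachable again.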

Arguments about others:

Option A (Amazon RDS Proxy) may help with database connection pooling, but it does not durably store data while the database is unavailable during an upgrade.

Option B (increasing Lambda runtime and implementing retry logic) is less robust because it assumes that extending Lambda runtime is sufficient, and it might not handle all possible errors and retries reliably.

Option C (persisting data in Lambda local storage and scanning by new Lambdas) is not suitable for durability and scalability, as local storage is temporary and not designed for long-term data storage.

Question#86

A survey company has gathered data for several years from areas in the United States. The company hosts the data in an Amazon S3 bucket that is 3 TB in size and growing. The company has started to share the data with a European marketing firm that has S3 buckets. The company wants to ensure that its data transfer costs remain as low as possible.
Which solution will meet these requirements?

  • A. Configure the Requester Pays feature on the company’s S3 bucket.
  • B. Configure S3 Cross-Region Replication from the company’s S3 bucket to one of the marketing firm’s S3 buckets.
  • C. Configure cross-account access for the marketing firm so that the marketing firm has access to the company’s S3 bucket.
  • D. Configure the company’s S3 bucket to use S3 Intelligent-Tiering. Sync the S3 bucket to one of the marketing firm’s S3 buckets.

Reference/Arguments:

In general, bucket owners pay for all Amazon S3 storage and data transfer costs that are associated with their bucket. However, you can configure a bucket to be a Requester Pays bucket. With Requester Pays buckets, the requester instead of the bucket owner pays the cost of the request and the data download from the bucket. The bucket owner always pays the cost of storing data.
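
A minimal sketch of option A with boto3 (the bucket name is hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# After this call, requesters such as the marketing firm pay the request
# and data transfer costs; the bucket owner still pays only for storage.
s3.put_bucket_request_payment(
    Bucket="example-survey-data",  # hypothetical bucket name
    RequestPaymentConfiguration={"Payer": "Requester"},
)
```

The marketing firm must then acknowledge the charges on each request, for example by passing RequestPayer="requester" in its own boto3 calls.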

Question#87

A company uses Amazon S3 to store its confidential audit documents. The S3 bucket uses bucket policies to restrict access to audit team IAM user credentials according to the principle of least privilege. Company managers are worried about accidental deletion of documents in the S3 bucket and want a more secure solution.
What should a solutions architect do to secure the audit documents?

  • A. Enable the versioning and MFA Delete features on the S3 bucket.
  • B. Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM user account.
  • C. Add an S3 Lifecycle policy to the audit team’s IAM user accounts to deny the s3:DeleteObject action during audit dates.
  • D. Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from accessing the KMS key.

Reference/Arguments:

Versioning-enabled buckets can help you recover objects from accidental deletion or overwrite. For example, if you delete an object, Amazon S3 inserts a delete marker instead of removing the object permanently. The delete marker becomes the current object version.

When working with S3 Versioning in Amazon S3 buckets, you can optionally add another layer of security by configuring a bucket to enable MFA (multi-factor authentication) delete. When you do this, the bucket owner must include two forms of authentication in any request to delete a version or change the versioning state of the bucket.
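
A minimal sketch of option A (the bucket name and MFA serial are hypothetical; note that MFA Delete can only be enabled with the bucket owner's root credentials):

```python
import boto3

s3 = boto3.client("s3")  # must be using the root account's credentials

s3.put_bucket_versioning(
    Bucket="example-audit-docs",  # hypothetical bucket name
    # MFA device serial number followed by the current one-time code
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```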

Question#88

A company is using a SQL database to store movie data that is publicly accessible. The database runs on an Amazon RDS Single-AZ DB instance. A script runs queries at random intervals each day to record the number of new movies that have been added to the database. The script must report a final total during business hours.
The company’s development team notices that the database performance is inadequate for development tasks when the script is running. A solutions architect must recommend a solution to resolve this issue.
Which solution will meet this requirement with the LEAST operational overhead?

  • A. Modify the DB instance to be a Multi-AZ deployment.
  • B. Create a read replica of the database. Configure the script to query only the read replica.
  • C. Instruct the development team to manually export the entries in the database at the end of each day.
  • D. Use Amazon ElastiCache to cache the common queries that the script runs against the database.

Reference/Arguments:

Read replicas are designed for exactly this pattern: business reporting or data warehousing scenarios where you want reporting queries to run against a read replica rather than your production DB instance. Pointing the script at a replica removes its load from the instance the development team uses, with minimal operational overhead.
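
A minimal sketch of option B with boto3 (the identifiers and instance class are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the movie database. The reporting script then
# points its connection string at the replica's endpoint, so its queries
# no longer compete with the development team's work on the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="movies-db-replica",  # hypothetical replica name
    SourceDBInstanceIdentifier="movies-db",    # hypothetical source instance
    DBInstanceClass="db.t3.medium",            # hypothetical instance class
)
```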

Question#89

A company has applications that run on Amazon EC2 instances in a VPC. One of the applications needs to call the Amazon S3 API to store and read objects. According to the company’s security regulations, no traffic from the applications is allowed to travel across the internet.
Which solution will meet these requirements?

  • A. Configure an S3 gateway endpoint.
  • B. Create an S3 bucket in a private subnet.
  • C. Create an S3 bucket in the same AWS Region as the EC2 instances.
  • D. Configure a NAT gateway in the same subnet as the EC2 instances.

Reference/Arguments:

With a gateway endpoint, you can access Amazon S3 from your VPC without requiring an internet gateway or NAT device for your VPC, and at no additional cost.
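
A minimal sketch of option A with boto3 (the VPC, Region, and route table IDs are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# A gateway endpoint adds an S3 route to the chosen route tables, so the
# instances reach Amazon S3 over the AWS network with no internet gateway
# or NAT device in the path.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",             # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",  # match the VPC's Region
    RouteTableIds=["rtb-0123456789abcdef0"],   # hypothetical route table ID
)
```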

Question#90

A company is storing sensitive user information in an Amazon S3 bucket. The company wants to provide secure access to this bucket from the application tier running on Amazon EC2 instances inside a VPC.
Which combination of steps should a solutions architect take to accomplish this? (Choose two.)

  • A. Configure a VPC gateway endpoint for Amazon S3 within the VPC.
  • B. Create a bucket policy to make the objects in the S3 bucket public.
  • C. Create a bucket policy that limits access to only the application tier running in the VPC.
  • D. Create an IAM user with an S3 access policy and copy the IAM credentials to the EC2 instance.
  • E. Create a NAT instance and have the EC2 instances use the NAT instance to access the S3 bucket.

Reference/Arguments:

Options A and C are correct. A VPC gateway endpoint for Amazon S3 keeps traffic between the EC2 instances and the bucket on the AWS network rather than the public internet. A bucket policy that allows access only to requests arriving through that endpoint (for example, by using the aws:SourceVpce condition key) then limits access to the application tier running in the VPC.
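
A minimal sketch of the bucket-policy half (option C), assuming a hypothetical bucket name and endpoint ID; it denies every request that does not arrive through the VPC gateway endpoint created for option A:

```python
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnlessFromVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-user-data",   # hypothetical bucket name
            "arn:aws:s3:::example-user-data/*",
        ],
        # Reject any request that did not come through the gateway endpoint.
        "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}},
    }],
}

s3.put_bucket_policy(Bucket="example-user-data", Policy=json.dumps(policy))
```

In practice you would scope the Action list and carve out exceptions for administrative roles, since this blanket Deny also blocks console access from outside the VPC.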
