
2changsoo_C02

Title of test:
2changsoo_C02

Description:
2changsoo_C02

Author:
iamcslee

Creation Date:
26/11/2021

Category:
Others

Number of questions: 100
Content:
NO.101 A solutions architect is migrating a document management workload to AWS. The workload keeps 7 TiB of contract documents on a shared storage file system and tracks them on an external database. Most of the documents are stored and retrieved eventually for reference in the future. The application cannot be modified during the migration, and the storage solution must be highly available. Documents are retrieved and stored by web servers that run on Amazon EC2 instances in an Auto Scaling group. The Auto Scaling group can have up to 12 instances. Which solution meets these requirements MOST cost-effectively?
A. Provision an enhanced networking optimized EC2 instance to serve as a shared NFS storage system.
B. Create an Amazon S3 bucket that uses the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Mount the S3 bucket to the EC2 instances in the Auto Scaling group.
C. Create an SFTP server endpoint by using AWS Transfer for SFTP and an Amazon S3 bucket. Configure the EC2 instances in the Auto Scaling group to connect to the SFTP server.
D. Create an Amazon Elastic File System (Amazon EFS) file system that uses the EFS Standard-Infrequent Access (EFS Standard-IA) storage class. Mount the file system to the EC2 instances in the Auto Scaling group.
NO.102 A company's infrastructure consists of hundreds of Amazon EC2 instances that use Amazon Elastic Block Store (Amazon EBS) storage. A solutions architect must ensure that every EC2 instance can be recovered after a disaster. What should the solutions architect do to meet this requirement with the LEAST amount of effort?
A. Take a snapshot of the EBS storage that is attached to each EC2 instance. Create an AWS CloudFormation template to launch new EC2 instances from the EBS storage.
B. Take a snapshot of the EBS storage that is attached to each EC2 instance. Use AWS Elastic Beanstalk to set the environment based on the EC2 template and attach the EBS storage.
C. Use AWS Backup to set up a backup plan for the entire group of EC2 instances. Use the AWS Backup API or the AWS CLI to speed up the restore process for multiple EC2 instances.
D. Create an AWS Lambda function to take a snapshot of the EBS storage that is attached to each EC2 instance and copy the Amazon Machine Images (AMIs). Create another Lambda function to perform the restores with the copied AMIs and attach the EBS storage.
NO.103 A company uses Amazon Redshift for its data warehouse. The company wants to ensure high durability for its data in case of any component failure. What should a solutions architect recommend?
A. Enable concurrency scaling.
B. Enable cross-Region snapshots.
C. Increase the data retention period.
D. Deploy Amazon Redshift in Multi-AZ.
NO.104 A solutions architect is implementing a document review application using an Amazon S3 bucket for storage. The solution must prevent accidental deletion of the documents and ensure that all versions of the documents are available. Users must be able to download, modify, and upload documents. Which combination of actions should be taken to meet these requirements? (Select TWO.)
A. Enable a read-only bucket ACL.
B. Enable versioning on the bucket.
C. Attach an IAM policy to the bucket.
D. Enable MFA Delete on the bucket.
E. Encrypt the bucket using AWS KMS.
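The versioning-plus-MFA-Delete combination in this question can be pictured as a bucket configuration payload. A minimal Python sketch, assuming the dict shape that boto3's put_bucket_versioning call uses; the helper function is illustrative, not part of any AWS SDK.

```python
# Bucket versioning configuration with MFA Delete, shaped like the
# VersioningConfiguration argument of boto3's put_bucket_versioning.
versioning_configuration = {
    "Status": "Enabled",     # keep every version of every document
    "MFADelete": "Enabled",  # permanent deletes require an MFA token
}

def is_delete_protected(config):
    """True when versioning and MFA Delete are both enabled."""
    return (config.get("Status") == "Enabled"
            and config.get("MFADelete") == "Enabled")
```

Versioning alone preserves old copies; adding MFA Delete is what blocks a casual, accidental permanent deletion.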
NO.105 A company runs an application on an Amazon EC2 instance. The application frequently writes and reads tens of thousands of intermediate computation results to disk. To finish within the allowed duration, the application must be able to complete all intermediate computations with sustained low latency. Which Amazon Elastic Block Store (Amazon EBS) volume type should a solutions architect attach to the EC2 instance to meet these requirements?
A. Cold HDD (sc1)
B. General Purpose SSD (gp3)
C. Provisioned IOPS SSD (io2)
D. Throughput Optimized HDD (st1)
NO.106 A company has a data ingestion workflow that consists of the following:
* An Amazon Simple Notification Service (Amazon SNS) topic for notifications about new data deliveries
* An AWS Lambda function to process the data and record metadata
The company observes that the ingestion workflow fails occasionally because of network connectivity issues. When such a failure occurs, the Lambda function does not ingest the corresponding data unless the company manually reruns the job. Which combination of actions should a solutions architect take to ensure that the Lambda function ingests all data in the future? (Select TWO.)
A. Configure the Lambda function in multiple Availability Zones.
B. Create an Amazon Simple Queue Service (Amazon SQS) queue and subscribe it to the SNS topic.
C. Increase the CPU and memory that are allocated to the Lambda function.
D. Increase provisioned throughput for the Lambda function.
E. Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue.
NO.107 A company wants to minimize cost by moving infrequently accessed audit archives to low-cost storage. Which AWS service should the company use for this storage?
A. AWS Backup
B. Amazon S3 Glacier
C. AWS Snowball
D. AWS Storage Gateway
NO.108 A solutions architect needs to design a network that will allow multiple Amazon EC2 instances to access a common data source used for mission-critical data that can be accessed by all the EC2 instances simultaneously. The solution must be highly scalable, easy to implement, and support the NFS protocol. Which solution meets these requirements?
A. Create an Amazon EFS file system. Configure a mount target in each Availability Zone. Attach each instance to the appropriate mount target.
B. Create an additional EC2 instance and configure it as a file server. Create a security group that allows communication between the instances and apply that to the additional instance.
C. Create an Amazon S3 bucket with the appropriate permissions. Create a role in AWS IAM that grants the correct permissions to the S3 bucket. Attach the role to the EC2 instances that need access to the data.
D. Create an Amazon EBS volume with the appropriate permissions. Create a role in AWS IAM that grants the correct permissions to the EBS volume. Attach the role to the EC2 instances that need access to the data.
NO.109 A company has a service that reads and writes large amounts of data from an Amazon S3 bucket in the same AWS Region. The service is deployed on Amazon EC2 instances within the private subnet of a VPC. The service communicates with Amazon S3 over a NAT gateway in the public subnet. However, the company wants a solution that will reduce the data output costs. Which solution will meet these requirements MOST cost-effectively?
A. Provision a dedicated EC2 NAT instance in the public subnet. Configure the route table for the private subnet to use the elastic network interface of this instance as the destination for all S3 traffic.
B. Provision a dedicated EC2 NAT instance in the private subnet. Configure the route table for the public subnet to use the elastic network interface of this instance as the destination for all S3 traffic.
C. Provision a VPC gateway endpoint. Configure the route table for the private subnet to use the gateway endpoint as the route for all S3 traffic.
D. Provision a second NAT gateway. Configure the route table for the private subnet to use this NAT gateway as the destination for all S3 traffic.
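The gateway-endpoint option in this question boils down to a route-table change: S3-bound traffic stops flowing through the NAT gateway. A minimal Python sketch of that idea; the endpoint ID, NAT gateway ID, and the S3 prefix-list name are all hypothetical placeholders.

```python
# Route table for the private subnet after adding an S3 gateway endpoint.
# All IDs and the "pl-s3" prefix-list name below are hypothetical.
private_route_table = [
    {"destination": "0.0.0.0/0", "target": "nat-0123456789abcdef0"},  # other internet traffic
    {"destination": "pl-s3", "target": "vpce-0abc1234567890def"},     # S3 traffic stays on the AWS network
]

def s3_route_target(routes, s3_prefix_list="pl-s3"):
    """Return the target that S3-bound traffic uses, if any."""
    for route in routes:
        if route["destination"] == s3_prefix_list:
            return route["target"]
    return None
```

Because the S3 route no longer points at the NAT gateway, NAT data-processing charges for that traffic disappear.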
NO.110 A company uses Amazon S3 as its object storage solution. The company has thousands of S3 buckets it uses to store data. Some of the S3 buckets have data that is accessed less frequently than others. A solutions architect found that lifecycle policies are not consistently implemented or are implemented partially, resulting in data being stored in high-cost storage. Which solution will lower costs without compromising the availability of objects?
A. Use S3 ACLs.
B. Use Amazon Elastic Block Store (Amazon EBS) automated snapshots.
C. Use S3 Intelligent-Tiering storage.
D. Use S3 One Zone-Infrequent Access (S3 One Zone-IA).
NO.111 A company's application is running on Amazon EC2 instances in a single Region. In the event of a disaster, a solutions architect needs to ensure that the resources can also be deployed to a second Region. Which combination of actions should the solutions architect take to accomplish this? (Select TWO.)
A. Detach a volume on an EC2 instance and copy it to Amazon S3.
B. Launch a new EC2 instance from an Amazon Machine Image (AMI) in a new Region.
C. Launch a new EC2 instance in a new Region and copy a volume from Amazon S3 to the new instance.
D. Copy an Amazon Machine Image (AMI) of an EC2 instance and specify a different Region for the destination.
E. Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an EC2 instance in the destination Region using that EBS volume.
NO.112 A solutions architect must secure a VPC network that hosts Amazon EC2 instances. The EC2 instances contain highly sensitive data and run in a private subnet. According to company policy, the EC2 instances that run in the VPC can access only approved third-party software repositories on the internet for software product updates that use the third party's URL. Other internet traffic must be blocked. Which solution meets these requirements?
A. Update the route table for the private subnet to route the outbound traffic to an AWS Network Firewall. Configure domain list rule groups.
B. Set up an AWS WAF web ACL. Create a custom set of rules that filter traffic requests based on source and destination IP address range sets.
C. Implement strict inbound security group rules. Configure an outbound rule that allows traffic only to the authorized software repositories on the internet by specifying the URLs.
D. Configure an Application Load Balancer (ALB) in front of the EC2 instances. Direct all outbound traffic to the ALB. Use a URL-based rule listener in the ALB's target group for outbound access to the internet.
NO.113 A leasing company generates and emails PDF statements every month for all its customers. Each statement is about 400 KB in size. Customers can download their statements from the website for up to 30 days from when the statements were generated. At the end of their 3-year lease, the customers are emailed a ZIP file that contains all the statements. What is the MOST cost-effective storage solution for this situation?
A. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 1 day.
B. Store the statements using the Amazon S3 Glacier storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier Deep Archive storage after 30 days.
C. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) storage after 30 days.
D. Store the statements using the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 30 days.
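Whichever storage class holds the statements for the first 30 days, the move to Glacier is expressed as an S3 lifecycle rule. A sketch of such a rule as a Python dict, loosely following the shape boto3's put_bucket_lifecycle_configuration expects; the key prefix is an illustrative placeholder.

```python
# Lifecycle rule moving statements to Glacier once the 30-day download
# window ends; the "statements/" key prefix is a made-up example.
statement_rule = {
    "ID": "archive-statements",
    "Filter": {"Prefix": "statements/"},
    "Status": "Enabled",
    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
}

def first_transition_day(rule):
    """Day on which the earliest storage-class transition fires."""
    return min(t["Days"] for t in rule["Transitions"])
```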
NO.114 A company needs guaranteed Amazon EC2 capacity in three specific Availability Zones in a specific AWS Region for an upcoming event that will last 1 week. What should the company do to guarantee the EC2 capacity?
A. Purchase Reserved Instances that specify the Region needed.
B. Create an On-Demand Capacity Reservation that specifies the Region needed.
C. Purchase Reserved Instances that specify the Region and three Availability Zones needed.
D. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed.
NO.115 A company has 150 TB of archived image data stored on-premises that needs to be moved to the AWS Cloud within the next month. The company's current network connection allows up to 100 Mbps uploads for this purpose during the night only. What is the MOST cost-effective mechanism to move this data and meet the migration deadline?
A. Use AWS Snowmobile to ship the data to AWS.
B. Order multiple AWS Snowball devices to ship the data to AWS.
C. Enable Amazon S3 Transfer Acceleration and securely upload the data.
D. Create an Amazon S3 VPC endpoint and establish a VPN to upload the data.
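The network-upload options in this question can be ruled out with quick arithmetic: 150 TB over a 100 Mbps link usable roughly 12 hours per night needs on the order of nine months, far past the one-month deadline. The 12-hour night window is an assumption for the estimate.

```python
# Back-of-the-envelope transfer time: 150 TB over a 100 Mbps uplink
# that is only usable about 12 hours per night (assumed window).
data_bits = 150 * 10**12 * 8            # 150 TB (decimal) in bits
link_bps = 100 * 10**6                  # 100 Mbps
seconds_needed = data_bits / link_bps   # ~12 million seconds of transfer
nights_needed = seconds_needed / (12 * 3600)
```

At roughly 278 nights of continuous transfer, only a physical-shipment option fits the deadline.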
NO.116 A solutions architect is designing the cloud architecture for a new application that is being deployed on AWS. The application's users will interactively download and upload files. Files that are more than 90 days old will be accessed less frequently than newer files, but all files need to be instantly available. The solutions architect must ensure that the application can scale to store petabytes of data with maximum durability. Which solution meets these requirements?
A. Store the files in Amazon S3 Standard. Create an S3 Lifecycle policy that moves objects that are more than 90 days old to S3 Glacier.
B. Store the files in Amazon S3 Standard. Create an S3 Lifecycle policy that moves objects that are more than 90 days old to S3 Standard-Infrequent Access (S3 Standard-IA).
C. Store the files in Amazon Elastic Block Store (Amazon EBS) volumes. Schedule snapshots of the volumes. Use the snapshots to archive data that is more than 90 days old.
D. Store the files in RAID-striped Amazon Elastic Block Store (Amazon EBS) volumes. Schedule snapshots of the volumes. Use the snapshots to archive data that is more than 90 days old.
NO.117 A recent analysis of a company's IT expenses highlights the need to reduce backup costs. The company's chief information officer wants to simplify the on-premises backup infrastructure and reduce costs by eliminating the use of physical backup tapes. The company must preserve the existing investment in the on-premises backup applications and workflows. What should a solutions architect recommend?
A. Set up AWS Storage Gateway to connect with the backup applications using the NFS interface.
B. Set up an Amazon EFS file system that connects with the backup applications using the NFS interface.
C. Set up an Amazon EFS file system that connects with the backup applications using the iSCSI interface.
D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI virtual tape library (VTL) interface.
NO.118 A company has two AWS accounts: Production and Development. The company needs to push code changes in the Development account to the Production account. In the alpha phase, only two developers on the development team need access to the Production account. In the beta phase, more developers will need access to perform testing. Which solution will meet these requirements?
A. Create two policy documents by using the AWS Management Console in each account. Assign the policy to developers who need access.
B. Create an IAM role in the Development account. Grant the IAM role access to the Production account. Allow developers to assume the role.
C. Create an IAM role in the Production account. Define a trust policy that specifies the Development account. Allow developers to assume the role.
D. Create an IAM group in the Production account. Add the group as a principal in a trust policy that specifies the Production account. Add developers to the group.
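The cross-account pattern in this question centers on a trust policy attached to a role in the Production account. A sketch of that document as a Python dict; the 12-digit Development account ID is a placeholder.

```python
# Trust policy for a role in the Production account that principals in the
# Development account can assume; 111122223333 is a placeholder account ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

def trusted_principals(policy):
    """Principals allowed to call sts:AssumeRole on this role."""
    return [s["Principal"]["AWS"] for s in policy["Statement"]
            if s["Effect"] == "Allow" and s["Action"] == "sts:AssumeRole"]
```

Scaling from two developers to many then happens entirely in the Development account, by granting more identities permission to assume the role.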
NO.119 A company needs to provide its employees with secure access to confidential and sensitive files. The company wants to ensure that the files can be accessed only by authorized users. The files must be downloaded securely to the employees' devices. The files are stored in an on-premises Windows file server. However, due to an increase in remote usage, the file server is running out of capacity. Which solution will meet these requirements?
A. Migrate the file server to an Amazon EC2 instance in a public subnet. Configure the security group to limit inbound traffic to the employees' IP addresses.
B. Migrate the files to an Amazon FSx for Windows File Server file system. Integrate the Amazon FSx file system with the on-premises Active Directory. Configure AWS Client VPN.
C. Migrate the files to Amazon S3 and create a private VPC endpoint. Create a signed URL to allow download.
D. Migrate the files to Amazon S3 and create a public VPC endpoint. Allow employees to sign on with AWS Single Sign-On.
NO.120 A company is planning to use Amazon S3 to store images uploaded by its users. The images must be encrypted at rest in Amazon S3. The company does not want to spend time managing and rotating the keys, but it does want to control who can access those keys. What should a solutions architect use to accomplish this?
A. Server-Side Encryption with keys stored in an S3 bucket
B. Server-Side Encryption with Customer-Provided Keys (SSE-C)
C. Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
D. Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
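The SSE-KMS choice in this question corresponds to a bucket default-encryption rule. A sketch as a Python dict, loosely following the shape boto3's put_bucket_encryption expects; the KMS key ARN is a placeholder.

```python
# Default-encryption rule selecting SSE-KMS, shaped like the
# ServerSideEncryptionConfiguration argument of boto3's
# put_bucket_encryption. The KMS key ARN is a placeholder.
encryption_configuration = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID",
            }
        }
    ]
}

def default_sse_algorithm(config):
    """Algorithm applied to objects uploaded without explicit encryption."""
    return config["Rules"][0]["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"]
```

With SSE-KMS, AWS handles key material and rotation, while the key policy controls exactly who can use the key.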
NO.121 A customer is running an application on Amazon EC2 instances hosted in a private subnet of a VPC. The EC2 instances are configured in an Auto Scaling group behind an Elastic Load Balancer (ELB). The EC2 instances use a NAT gateway for outbound internet access. However, the EC2 instances are not able to connect to the public internet to download software updates. What are the possible causes of this issue? (Select TWO.)
A. The ELB is not configured with a proper health check.
B. The route tables in the VPC are configured incorrectly.
C. The EC2 instances are not associated with an Elastic IP address.
D. The security group attached to the NAT gateway is configured incorrectly.
E. The outbound rules on the security group attached to the EC2 instances are configured incorrectly.
NO.122 A company serves a multilingual website from a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). This architecture is currently running in the us-west-1 Region but is exhibiting high request latency for users located in other parts of the world. The website needs to serve requests quickly and efficiently regardless of a user's location. However, the company does not want to recreate the existing architecture across multiple Regions. How should a solutions architect accomplish this?
A. Replace the existing architecture with a website served from an Amazon S3 bucket. Configure an Amazon CloudFront distribution with the S3 bucket as the origin.
B. Configure an Amazon CloudFront distribution with the ALB as the origin. Set the cache behavior settings to only cache based on the Accept-Language request header.
C. Set up Amazon API Gateway with the ALB as an integration. Configure API Gateway to use an HTTP integration type. Set up an API Gateway stage to enable the API cache.
D. Launch an EC2 instance in each additional Region and configure NGINX to act as a cache server for that Region. Put all the instances plus the ALB behind an Amazon Route 53 record set with a geolocation routing policy.
NO.123 A company uses AWS to run all components of its three-tier web application. The company wants to automatically detect any potential security breaches within the environment. The company wants to track any findings and notify administrators if a potential breach occurs. Which solution meets these requirements?
A. Set up AWS WAF to evaluate suspicious web traffic. Create AWS Lambda functions to log any findings in Amazon CloudWatch and send email notifications to administrators.
B. Set up AWS Shield to evaluate suspicious web traffic. Create AWS Lambda functions to log any findings in Amazon CloudWatch and send email notifications to administrators.
C. Deploy Amazon Inspector to monitor the environment and generate findings in Amazon CloudWatch. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to notify administrators by email.
D. Deploy Amazon GuardDuty to monitor the environment and generate findings in Amazon CloudWatch. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to notify administrators by email.
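The GuardDuty-to-SNS wiring in this question hinges on an EventBridge event pattern. A sketch of such a pattern, plus a toy matcher to illustrate how pattern keys constrain events; the matcher is illustrative only, not EventBridge's real algorithm.

```python
# EventBridge event pattern that matches GuardDuty findings; a rule with
# this pattern can target an SNS topic that emails administrators.
guardduty_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
}

def matches(pattern, event):
    """Toy matcher: every pattern key must list the event's value.

    EventBridge's real matching is richer (nested fields, wildcards),
    but the all-keys-must-match principle is the same.
    """
    return all(event.get(key) in allowed for key, allowed in pattern.items())
```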
NO.124 A company hosts its multi-tier, public web application in the AWS Cloud. The web application runs on Amazon EC2 instances, and its database runs on Amazon RDS. The company is anticipating a large increase in sales during an upcoming holiday weekend. A solutions architect needs to build a solution to analyze the performance of the web application with a granularity of no more than 2 minutes. What should the solutions architect do to meet this requirement?
A. Send Amazon CloudWatch logs to Amazon Redshift. Use Amazon QuickSight to perform further analysis.
B. Enable detailed monitoring on all EC2 instances. Use Amazon CloudWatch metrics to perform further analysis.
C. Create an AWS Lambda function to fetch EC2 logs from Amazon CloudWatch Logs. Use Amazon CloudWatch metrics to perform further analysis.
D. Send EC2 logs to Amazon S3. Use Amazon Redshift to fetch logs from the S3 bucket to process raw data for further analysis with Amazon QuickSight.
NO.125 A company receives inconsistent service from its data center provider because the company is headquartered in an area affected by natural disasters. The company is not ready to fully migrate to the AWS Cloud, but it wants a failover environment on AWS in case the on-premises data center fails. The company runs web servers that connect to external vendors. The data available on AWS and on premises must be uniform. Which solution should a solutions architect recommend that has the LEAST amount of downtime?
A. Configure an Amazon Route 53 failover record. Run application servers on Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.
B. Configure an Amazon Route 53 failover record. Execute an AWS CloudFormation template from a script to create Amazon EC2 instances behind an Application Load Balancer. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3.
C. Configure an Amazon Route 53 failover record. Set up an AWS Direct Connect connection between a VPC and the data center. Run application servers on Amazon EC2 in an Auto Scaling group. Run an AWS Lambda function to execute an AWS CloudFormation template to create an Application Load Balancer.
D. Configure an Amazon Route 53 failover record. Run an AWS Lambda function to execute an AWS CloudFormation template to launch two Amazon EC2 instances. Set up AWS Storage Gateway with stored volumes to back up data to Amazon S3. Set up an AWS Direct Connect connection between a VPC and the data center.
NO.126 A company uses a payment processing system that requires messages for a particular payment ID to be received in the same order that they were sent. Otherwise, the payments might be processed incorrectly. Which actions should a solutions architect take to meet this requirement? (Select TWO.)
A. Write the messages to an Amazon DynamoDB table with the payment ID as the partition key.
B. Write the messages to an Amazon Kinesis data stream with the payment ID as the partition key.
C. Write the messages to an Amazon ElastiCache for Memcached cluster with the payment ID as the key.
D. Write the messages to an Amazon Simple Queue Service (Amazon SQS) queue. Set the message attribute to use the payment ID.
E. Write the messages to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the message group to use the payment ID.
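The FIFO-queue option in this question relies on the MessageGroupId parameter: messages that share a group ID are delivered in the order they were sent. A minimal Python sketch of the per-message parameters, following the names SQS uses for FIFO queues; the payment ID, body, and deduplication ID are made up.

```python
# Per-message parameters for an SQS FIFO queue: messages sharing a
# MessageGroupId are delivered in the order they were sent.
def fifo_message(payment_id, body, dedup_id):
    """Build send_message-style keyword arguments for a FIFO queue (sketch)."""
    return {
        "MessageBody": body,
        "MessageGroupId": payment_id,        # ordering scope = one payment
        "MessageDeduplicationId": dedup_id,  # guards against duplicate sends
    }

msg = fifo_message("pay-42", '{"event": "authorized"}', "evt-001")
```

Using the payment ID as the group ID keeps each payment's events ordered while still allowing different payments to be processed in parallel.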
NO.127 A company's security team requests that network traffic be captured in VPC Flow Logs. The logs will be frequently accessed for 90 days and then accessed intermittently. What should a solutions architect do to meet these requirements when configuring the logs?
A. Use Amazon CloudWatch as the target. Set the CloudWatch log group with an expiration of 90 days.
B. Use Amazon Kinesis as the target. Configure the Kinesis stream to always retain the logs for 90 days.
C. Use AWS CloudTrail as the target. Configure CloudTrail to save to an Amazon S3 bucket, and enable S3 Intelligent-Tiering.
D. Use Amazon S3 as the target. Enable an S3 Lifecycle policy to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
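The lifecycle transition in this question maps to a rule that leaves flow logs in S3 Standard for the 90-day frequent-access window and then moves them to S3 Standard-IA. A Python sketch; the key prefix and the helper function are illustrative.

```python
# Lifecycle rule keeping flow logs in S3 Standard for 90 days, then
# moving them to S3 Standard-IA; the key prefix is a made-up example.
flow_log_rule = {
    "ID": "flow-logs-to-ia",
    "Filter": {"Prefix": "vpc-flow-logs/"},
    "Status": "Enabled",
    "Transitions": [{"Days": 90, "StorageClass": "STANDARD_IA"}],
}

def storage_class_on(day, rule):
    """Storage class an object occupies on a given day under this rule."""
    current = "STANDARD"
    for transition in sorted(rule["Transitions"], key=lambda t: t["Days"]):
        if day >= transition["Days"]:
            current = transition["StorageClass"]
    return current
```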
NO.128 A company runs an AWS Lambda function in private subnets in a VPC. The subnets have a default route to the internet through an Amazon EC2 NAT instance. The Lambda function processes input data and saves its output as an object to Amazon S3. Intermittently, the Lambda function times out while trying to upload the object because of saturated traffic on the NAT instance's network. The company wants to access Amazon S3 without traversing the internet. Which solution will meet these requirements?
A. Replace the EC2 NAT instance with an AWS managed NAT gateway.
B. Increase the size of the EC2 NAT instance in the VPC to a network optimized instance type.
C. Provision a gateway endpoint for Amazon S3 in the VPC. Update the route tables of the subnets accordingly.
D. Provision a transit gateway. Place transit gateway attachments in the private subnets where the Lambda function is running.
NO.129 A company is building its web application by using containers on AWS. The company requires three instances of the web application to run at all times. The application must be highly available and must be able to scale to meet increases in demand. Which solution meets these requirements?
A. Use the AWS Fargate launch type to create an Amazon Elastic Container Service (Amazon ECS) cluster. Create a task definition for the web application. Create an ECS service that has a desired count of three tasks.
B. Use the Amazon EC2 launch type to create an Amazon Elastic Container Service (Amazon ECS) cluster that has three container instances in one Availability Zone. Create a task definition for the web application. Place one task for each container instance.
C. Use the AWS Fargate launch type to create an Amazon Elastic Container Service (Amazon ECS) cluster that has three container instances in three different Availability Zones. Create a task definition for the web application. Create an ECS service that has a desired count of three tasks.
D. Use the Amazon EC2 launch type to create an Amazon Elastic Container Service (Amazon ECS) cluster that has one container instance in two different Availability Zones. Create a task definition for the web application. Place two tasks on one container instance. Place one task on the remaining container instance.
NO.130 A company recently announced the deployment of its retail website to a global audience. The website runs on multiple Amazon EC2 instances behind an Elastic Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The company wants to provide its customers with different versions of content based on the devices that the customers use to access the website. Which combination of actions should a solutions architect take to meet these requirements? (Select TWO.)
A. Configure Amazon CloudFront to cache multiple versions of the content.
B. Configure a host header in a Network Load Balancer to forward traffic to different instances.
C. Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header.
D. Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up host-based routing to different EC2 instances.
E. Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up path-based routing to different EC2 instances.
NO.131 A company's order fulfillment service uses a MySQL database. The database needs to support a large number of concurrent queries and transactions. Developers are spending time patching and tuning the database. This is causing delays in releasing new product features. The company wants to use cloud-based services to help address this new challenge. The solution must allow the developers to migrate the database with little or no code changes and must optimize performance. Which service should a solutions architect use to meet these requirements?
A. Amazon Aurora
B. Amazon DynamoDB
C. Amazon ElastiCache
D. MySQL on Amazon EC2
NO.132 A company is designing a shared storage solution for a gaming application that is hosted in the AWS Cloud. The company needs the ability to use SMB clients to access the data. The solution must be fully managed. Which AWS solution meets these requirements?
A. Create an AWS DataSync task that shares the data as a mountable file system. Mount the file system to the application server.
B. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to the file share.
C. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the file system.
D. Create an Amazon S3 bucket. Assign an IAM role to the application to grant access to the S3 bucket. Mount the S3 bucket to the application server.
NO.133 A company has media and application files that need to be shared internally. Users currently are authenticated using Active Directory and access files from a Microsoft Windows platform. The chief executive officer wants to keep the same user permissions, but wants the company to improve the process as the company is reaching its storage capacity limit. What should a solutions architect recommend?
A. Set up a corporate Amazon S3 bucket and move all media and application files.
B. Configure Amazon FSx for Windows File Server and move all the media and application files.
C. Configure Amazon Elastic File System (Amazon EFS) and move all media and application files.
D. Set up Amazon EC2 on Windows, attach multiple Amazon Elastic Block Store (Amazon EBS) volumes, and move all media and application files.
NO.134 A company wants to build an online marketplace application on AWS as a set of loosely coupled microservices. For this application, when a customer submits a new order, two microservices should handle the event simultaneously. The Email microservice will send a confirmation email, and the OrderProcessing microservice will start the order delivery process. If a customer cancels an order, the OrderCancellation and Email microservices should handle the event simultaneously. A solutions architect wants to use Amazon Simple Queue Service (Amazon SQS) and Amazon Simple Notification Service (Amazon SNS) to design the messaging between the microservices. How should the solutions architect design the solution?
A. Create a single SQS queue and publish order events to it. The Email, OrderProcessing, and OrderCancellation microservices can then consume messages off the queue.
B. Create three SNS topics for each microservice. Publish order events to the three topics. Subscribe each of the Email, OrderProcessing, and OrderCancellation microservices to its own topic.
C. Create an SNS topic and publish order events to it. Create three SQS queues for the Email, OrderProcessing, and OrderCancellation microservices. Subscribe all SQS queues to the SNS topic with message filtering.
D. Create two SQS queues and publish order events to both queues simultaneously. One queue is for the Email and OrderProcessing microservices. The second queue is for the Email and OrderCancellation microservices.
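The fan-out option in this question combines one SNS topic, three SQS queues, and a filter policy per subscription. A toy Python model of the routing, assuming a hypothetical event_type message attribute; the queue names and attribute values are made up.

```python
# One SNS topic fanning out to three SQS queues, each subscription
# carrying a filter policy on a hypothetical "event_type" attribute.
filter_policies = {
    "email-queue": {"event_type": ["order_created", "order_cancelled"]},
    "order-processing-queue": {"event_type": ["order_created"]},
    "order-cancellation-queue": {"event_type": ["order_cancelled"]},
}

def delivered_to(event_type):
    """Queues whose filter policy accepts a message with this attribute."""
    return sorted(queue for queue, policy in filter_policies.items()
                  if event_type in policy["event_type"])
```

A new order reaches the Email and OrderProcessing queues; a cancellation reaches the Email and OrderCancellation queues, which is exactly the simultaneous handling the question asks for.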
NO.135 A company recently migrated a message processing system to AWS. The system receives messages into an ActiveMQ queue running on an Amazon EC2 instance. Messages are processed by a consumer application running on Amazon EC2. The consumer application processes the messages and writes results to a MySQL database running on Amazon EC2. The company wants this application to be highly available with low operational complexity. Which architecture offers the HIGHEST availability?
A. Add a second ActiveMQ server to another Availability Zone. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
B. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
C. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled.
D. Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use Amazon RDS for MySQL with Multi-AZ enabled.
NO.136 A company uses Application Load Balancers (ALBs) in different AWS Regions. The ALBs receive inconsistent traffic that can spike and drop throughout the year. The company's networking team needs to allow the IP addresses of the ALBs in the on-premises firewall to enable connectivity. Which solution is the MOST scalable with minimal configuration changes? A. Write an AWS Lambda script to get the IP addresses of the ALBs in different Regions. Update the on-premises firewall's rule to allow the IP addresses of the ALBs. B. Migrate all ALBs in different Regions to Network Load Balancers (NLBs). Update the on-premises firewall's rule to allow the Elastic IP addresses of all the NLBs. C. Launch AWS Global Accelerator. Register the ALBs in different Regions to the accelerator. Update the on-premises firewall's rule to allow static IP addresses associated with the accelerator. D. Launch a Network Load Balancer (NLB) in one Region. Register the private IP addresses of the ALBs in different Regions with the NLB. Update the on-premises firewall's rule to allow the Elastic IP address attached to the NLB.
NO.137 A company has an application that processes customer orders. The company hosts the application on an Amazon EC2 instance that saves the orders to an Amazon Aurora database. Occasionally when traffic is high, the workload does not process orders fast enough. What should a solutions architect do to write the orders reliably to the database as quickly as possible? A. Increase the instance size of the EC2 instance when traffic is high. Write orders to Amazon Simple Notification Service (Amazon SNS). Subscribe the database endpoint to the SNS topic. B. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue. Use EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SQS queue and process orders into the database. C. Write orders to Amazon Simple Notification Service (Amazon SNS). Subscribe the database endpoint to the SNS topic. Use EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SNS topic. D. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue when the EC2 instance reaches CPU threshold limits. Use scheduled scaling of EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SQS queue and process orders into the database.
NO.138 A company has several web servers that need to frequently access a common Amazon RDS MySQL Multi-AZ DB instance. The company wants a secure method for the web servers to connect to the database while meeting a security requirement to rotate user credentials frequently. Which solution meets these requirements? A. Store the database user credentials in AWS Secrets Manager. Grant the necessary IAM permissions to allow the web servers to access AWS Secrets Manager. B. Store the database user credentials in AWS Systems Manager OpsCenter. Grant the necessary IAM permissions to allow the web servers to access OpsCenter. C. Store the database user credentials in a secure Amazon S3 bucket. Grant the necessary IAM permissions to allow the web servers to retrieve credentials and access the database. D. Store the database user credentials in files encrypted with AWS Key Management Service (AWS KMS) on the web server file system. The web server should be able to decrypt the files and access the database.
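With option A, each web server would fetch the secret at runtime and parse the JSON payload that Secrets Manager stores for RDS credentials. A hedged sketch of the parsing step, using a hard-coded sample payload in place of the real `get_secret_value` call (all field values are placeholders):

```python
import json

# Sample payload in the shape Secrets Manager uses for RDS credentials.
# In production this string would come from:
#   boto3.client("secretsmanager").get_secret_value(SecretId=...)["SecretString"]
secret_string = json.dumps({
    "username": "appuser",
    "password": "example-password",  # placeholder, not a real credential
    "host": "mydb.cluster-abc123.us-east-1.rds.amazonaws.com",
    "port": 3306,
    "dbname": "orders",
})

# Parse the secret and build a connection string from it; because the
# app re-reads the secret on each connection, rotation needs no redeploy.
creds = json.loads(secret_string)
dsn = (
    f"mysql://{creds['username']}@{creds['host']}"
    f":{creds['port']}/{creds['dbname']}"
)
```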
NO.139 A company is running a media application in an on-premises data center and has accumulated 500 TB of data. The company needs to migrate the data from the application's existing network-attached file system to AWS. Users rarely access data that is older than 1 year. Which solution meets these requirements MOST cost-effectively? A. Use AWS Snowmobile to move the data to Amazon S3. Create an S3 Lifecycle policy to transition data that is older than 1 year to S3 Glacier. B. Use multiple AWS Snowball Edge Storage Optimized devices to move the data to Amazon S3. Create an S3 Lifecycle policy to transition data that is older than 1 year to S3 Standard-Infrequent Access (S3 Standard-IA). C. Set up an AWS Direct Connect connection between the on-premises data center and AWS. Transfer the data directly to Amazon S3 by using the Direct Connect connection. Create an S3 Lifecycle policy to transition data that is older than 1 year to S3 Glacier. D. Set up an AWS Site-to-Site VPN connection between the on-premises data center and AWS. Transfer the data directly to Amazon S3 by using the Site-to-Site VPN connection. Create an S3 Lifecycle policy to transition data that is older than 1 year to S3 Standard-Infrequent Access (S3 Standard-IA).
NO.140 A company has a financial application that produces reports. The reports average 50 KB in size and are stored in Amazon S3. The reports are frequently accessed during the first week after production and must be stored for several years. The reports must be retrievable within 6 hours. Which solution meets these requirements MOST cost-effectively? A. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier after 7 days. B. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days. C. Use S3 Intelligent-Tiering. Configure S3 Intelligent-Tiering to transition the reports to S3 Standard-Infrequent Access (S3 Standard-IA) and S3 Glacier. D. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier Deep Archive after 7 days.
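A lifecycle rule like the one in option A can be expressed as the configuration document passed to `put_bucket_lifecycle_configuration`. A sketch of that structure (the rule ID and key prefix are illustrative assumptions):

```python
# S3 lifecycle configuration moving reports to S3 Glacier 7 days after
# creation. Applied in practice with:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket=..., LifecycleConfiguration=lifecycle_config)
lifecycle_config = {
    "Rules": [
        {
            "ID": "reports-to-glacier",        # illustrative rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "reports/"},  # assumed key prefix
            "Transitions": [
                {"Days": 7, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

rule = lifecycle_config["Rules"][0]
```

S3 Glacier's standard retrievals complete within hours, which is what makes a rule like this compatible with a 6-hour retrieval requirement.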
NO.141 A company has an image processing workload running on Amazon Elastic Container Service (Amazon ECS) in two private subnets. Each private subnet uses a NAT instance for internet access. All images are stored in Amazon S3 buckets. The company is concerned about the data transfer costs between Amazon ECS and Amazon S3. What should a solutions architect do to reduce costs? A. Configure a NAT gateway to replace the NAT instances. B. Configure a gateway endpoint for traffic destined to Amazon S3. C. Configure an interface endpoint for traffic destined to Amazon S3. D. Configure Amazon CloudFront for the S3 bucket storing the images.
NO.142 A company is designing an internet-facing web application. The application runs on Linux-based Amazon EC2 instances that store sensitive user data in Amazon RDS MySQL Multi-AZ DB instances. The EC2 instances are in public subnets, and the RDS DB instances are in private subnets. The security team has mandated that the DB instances be secured against web-based attacks. What should a solutions architect recommend? A. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Configure the EC2 instance iptables rules to drop suspicious web traffic. Create a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the individual EC2 instances. B. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Move the DB instances to the same subnets that the EC2 instances are located in. Create a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the individual EC2 instances. C. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Use AWS WAF to monitor inbound web traffic for threats. Create a security group for the web application servers and a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the web application server security group. D. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Use AWS WAF to monitor inbound web traffic for threats. Configure the Auto Scaling group to automatically create new DB instances under heavy traffic. Create a security group for the RDS DB instances. Configure the RDS security group to only allow port 3306 inbound.
NO.143 An ecommerce company uses an Amazon Aurora DB cluster to store customer transactions. The company also maintains a separate Amazon DynamoDB table that contains item sales information. The company wants the DB cluster to invoke a recently deployed AWS Lambda function to update the DynamoDB table every time a row is inserted into the database. Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.) A. Modify the Lambda function to allow outbound communication to the DB cluster. B. Modify the DB cluster to allow outbound communication to the Lambda function. C. Modify the DB cluster to allow outbound communication to the DynamoDB table. D. Ensure that the DB cluster has an IAM role that allows the DB cluster to invoke Lambda functions. E. Ensure that the Lambda function has an IAM role that allows Lambda to invoke functions on the DB cluster.
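The IAM role mentioned in option D would carry a policy permitting the Aurora cluster to call `lambda:InvokeFunction` on the target function. A sketch of that policy document (the account ID and function ARN are placeholders invented for illustration):

```python
import json

# IAM policy allowing an Aurora DB cluster's associated role to invoke
# one specific Lambda function. The ARN below is a placeholder.
invoke_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": (
                "arn:aws:lambda:us-east-1:123456789012"
                ":function:UpdateItemSales"
            ),
        }
    ],
}

# Serialized form, as it would be supplied to IAM.
policy_json = json.dumps(invoke_policy)
```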
NO.144 A pharmaceutical company is developing a new drug. The volume of data that the company generates has grown exponentially over the past few months. The company's researchers regularly require a subset of the entire dataset to be immediately available with minimal lag. However, the entire dataset does not need to be accessed on a daily basis. All the data currently resides in on-premises storage arrays, and the company wants to reduce ongoing capital expenses. Which storage solution should a solutions architect recommend to meet these requirements? A. Run AWS DataSync as a scheduled cron job to migrate the data to an Amazon S3 bucket on an ongoing basis. B. Deploy an AWS Storage Gateway file gateway with an Amazon S3 bucket as the target storage. Migrate the data to the Storage Gateway appliance. C. Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as the target storage. Migrate the data to the Storage Gateway appliance. D. Configure an AWS Site-to-Site VPN connection from the on-premises environment to AWS. Migrate data to an Amazon Elastic File System (Amazon EFS) file system.
NO.145 A marketing company is storing CSV files in an Amazon S3 bucket for statistical analysis. An application on an Amazon EC2 instance needs permission to efficiently process the CSV data stored in the S3 bucket. Which solution meets this requirement? A. Attach a resource-based policy to the S3 bucket. B. Create an IAM user for the application with specific permissions to the S3 bucket. C. Associate an IAM role with least privilege permissions to the EC2 instance profile. D. Store AWS credentials directly on the EC2 instance for applications on the instance to use for API calls.
NO.146 A company has an AWS account used for software engineering. The AWS account has access to the company's on-premises data center through a pair of AWS Direct Connect connections. All non-VPC traffic routes to the virtual private gateway. A development team recently created an AWS Lambda function through the console. The development team needs to allow the function to access a database that runs in a private subnet in the company's data center. Which solution will meet these requirements? A. Configure the Lambda function to run in the VPC with the appropriate security group. B. Set up a VPN connection from AWS to the data center. Route the traffic from the Lambda function through the VPN. C. Update the route tables in the VPC to allow the Lambda function to access the on-premises data center through Direct Connect. D. Create an Elastic IP address. Configure the Lambda function to send traffic through the Elastic IP address without an elastic network interface.
NO.147 A company runs a public three-tier web application in a VPC. The application runs on Amazon EC2 instances across multiple Availability Zones. The EC2 instances that run in private subnets need to communicate with a license server over the internet. The company needs a managed solution that minimizes operational maintenance. Which solution meets these requirements? A. Provision a NAT instance in a public subnet. Modify each private subnet's route table with a default route that points to the NAT instance. B. Provision a NAT instance in a private subnet. Modify each private subnet's route table with a default route that points to the NAT instance. C. Provision a NAT gateway in a public subnet. Modify each private subnet's route table with a default route that points to the NAT gateway. D. Provision a NAT gateway in a private subnet. Modify each private subnet's route table with a default route that points to the NAT gateway.
NO.148 A company has three VPCs named Development, Testing, and Production in the us-east-1 Region. The three VPCs need to be connected to an on-premises data center and are designed to be separate to maintain security and prevent any resource sharing. A solutions architect needs to find a scalable and secure solution. What should the solutions architect recommend? A. Create an AWS Direct Connect connection and a VPN connection for each VPC to connect back to the data center. B. Create VPC peers from all the VPCs to the Production VPC. Use an AWS Direct Connect connection from the Production VPC back to the data center. C. Connect VPN connections from all the VPCs to a VPN in the Production VPC. Use a VPN connection from the Production VPC back to the data center. D. Create a new VPC called Network. Within the Network VPC, create an AWS Transit Gateway with an AWS Direct Connect connection back to the data center. Attach all the other VPCs to the Network VPC.
NO.149 A company has migrated several applications to AWS in the past 3 months. The company wants to know the breakdown of costs for each of these applications. The company wants to receive a regular report that includes this information. Which solution will meet these requirements MOST cost-effectively? A. Use AWS Budgets to download data for the past 3 months into a csv file. Look up the desired information. B. Load AWS Cost and Usage Reports into an Amazon RDS DB instance. Run SQL queries to get the desired information. C. Tag all the AWS resources with a key for cost and a value of the application's name. Activate cost allocation tags. Use Cost Explorer to get the desired information. D. Tag all the AWS resources with a key for cost and a value of the application's name. Use the AWS Billing and Cost Management console to download bills for the past 3 months. Look up the desired information.
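Option C relies on tagging every resource with the application name and then grouping costs by that tag value, which is what Cost Explorer does once the tag key is activated for cost allocation. The grouping step can be sketched in plain Python over records shaped loosely like cost-report rows (the tag key `app` and the application names are illustrative):

```python
# Group cost records by a cost-allocation tag value, the way Cost
# Explorer groups spend when you "Group by" an activated tag key.
cost_records = [
    {"tags": {"app": "checkout"}, "cost": 12.50},
    {"tags": {"app": "search"},   "cost": 4.75},
    {"tags": {"app": "checkout"}, "cost": 7.25},
]

costs_by_app = {}
for record in cost_records:
    # Untagged resources land in their own bucket, as they do in
    # real cost-allocation reports.
    app = record["tags"].get("app", "untagged")
    costs_by_app[app] = costs_by_app.get(app, 0.0) + record["cost"]
```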
NO.150 A company has an application that runs on Amazon EC2 instances within a private subnet in a VPC. The instances access data in an Amazon S3 bucket in the same AWS Region. The VPC contains a NAT gateway in a public subnet to access the S3 bucket. The company wants to reduce costs by replacing the NAT gateway without compromising security or redundancy. Which solution meets these requirements? A. Replace the NAT gateway with a NAT instance. B. Replace the NAT gateway with an internet gateway. C. Replace the NAT gateway with a gateway VPC endpoint. D. Replace the NAT gateway with an AWS Direct Connect connection.
NO.151 A company has an application that calls AWS Lambda functions. A recent code review found database credentials stored in the source code. The database credentials need to be removed from the Lambda source code. The credentials must then be securely stored and rotated on an ongoing basis to meet security policy requirements. What should a solutions architect recommend to meet these requirements? A. Store the password in AWS CloudHSM. Associate the Lambda function with a role that can retrieve the password from CloudHSM given its key ID. B. Store the password in AWS Secrets Manager. Associate the Lambda function with a role that can retrieve the password from Secrets Manager given its secret ID. C. Move the database password to an environment variable that is associated with the Lambda function. Retrieve the password from the environment variable upon execution. D. Store the password in AWS Key Management Service (AWS KMS). Associate the Lambda function with a role that can retrieve the password from AWS KMS given its key ID.
NO.152 A company is planning to migrate a business-critical dataset to Amazon S3. The current solution design uses a single S3 bucket in the us-east-1 Region with versioning enabled to store the dataset. The company's disaster recovery policy states that all data must be in multiple AWS Regions. How should a solutions architect design the S3 solution? A. Create an additional S3 bucket in another Region and configure cross-Region replication. B. Create an additional S3 bucket in another Region and configure cross-origin resource sharing (CORS). C. Create an additional S3 bucket with versioning in another Region and configure cross-Region replication. D. Create an additional S3 bucket with versioning in another Region and configure cross-origin resource sharing (CORS).
NO.153 A company has a website running on Amazon EC2 instances across two Availability Zones. The company is expecting spikes in traffic on specific holidays and wants to provide a consistent user experience. How can a solutions architect meet this requirement? A. Use step scaling. B. Use simple scaling. C. Use lifecycle hooks. D. Use scheduled scaling.
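Scheduled scaling (option D) is configured as a scheduled action on the Auto Scaling group. A sketch of the parameters such an action would carry, in the shape accepted by `put_scheduled_update_group_action` (the group name, dates, and capacity values are illustrative):

```python
# Parameters for a scheduled scaling action that raises capacity ahead
# of a known holiday traffic spike; all values are illustrative.
# Applied in practice with:
#   boto3.client("autoscaling").put_scheduled_update_group_action(
#       **scheduled_action)
scheduled_action = {
    "AutoScalingGroupName": "website-asg",
    "ScheduledActionName": "holiday-scale-out",
    "StartTime": "2021-12-24T00:00:00Z",
    "EndTime": "2021-12-27T00:00:00Z",
    "MinSize": 4,
    "MaxSize": 12,
    "DesiredCapacity": 8,
}
```

Because the spike dates are known in advance, capacity is in place before traffic arrives, which is what gives users a consistent experience.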
NO.154 A company has NFS servers in an on-premises data center that need to periodically back up small amounts of data to Amazon S3. Which solution meets these requirements and is MOST cost-effective? A. Set up AWS Glue to copy the data from the on-premises servers to Amazon S3. B. Set up an AWS DataSync agent on the on-premises servers, and sync the data to Amazon S3. C. Set up an SFTP sync using AWS Transfer for SFTP to sync data from on premises to Amazon S3. D. Set up an AWS Direct Connect connection between the on-premises data center and a VPC, and copy the data to Amazon S3.
NO.155 A law firm needs to share information with the public. The information includes hundreds of files that must be publicly readable. Modifications or deletions of the files by anyone before a designated future date are prohibited. Which solution will meet these requirements in the MOST secure way? A. Upload all files to an Amazon S3 bucket that is configured for static website hosting. Grant read-only IAM permissions to any AWS principals that access the S3 bucket until the designated date. B. Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a retention period in accordance with the designated date. Configure the S3 bucket for static website hosting. Set an S3 bucket policy to allow read-only access to the objects. C. Create a new Amazon S3 bucket with S3 Versioning enabled. Configure an event trigger to run an AWS Lambda function in case of object modification or deletion. Configure the Lambda function to replace the objects with the original versions from a private S3 bucket. D. Upload all files to an Amazon S3 bucket that is configured for static website hosting. Select the folder that contains the files. Use S3 Object Lock with a retention period in accordance with the designated date. Grant read-only IAM permissions to any AWS principals that access the S3 bucket.
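The retention control described in option B maps to an S3 Object Lock configuration, which requires a versioned bucket. A sketch of the configuration document (the mode and retention length are illustrative assumptions; the question only states "a designated future date"):

```python
# S3 Object Lock default retention for a versioned bucket. In
# COMPLIANCE mode no user, including root, can delete or overwrite a
# locked object version until retention expires. Applied with:
#   boto3.client("s3").put_object_lock_configuration(
#       Bucket=..., ObjectLockConfiguration=object_lock_config)
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",
            "Days": 365,  # illustrative; set to reach the designated date
        }
    },
}

retention = object_lock_config["Rule"]["DefaultRetention"]
```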
NO.156 A company receives structured and semi-structured data from various sources once every day. A solutions architect needs to design a solution that leverages big data processing frameworks. The data should be accessible using SQL queries and business intelligence tools. What should the solutions architect recommend to build the MOST high-performing solution? A. Use AWS Glue to process data and Amazon S3 to store data. B. Use Amazon EMR to process data and Amazon Redshift to store data. C. Use Amazon EC2 to process data and Amazon Elastic Block Store (Amazon EBS) to store data. D. Use Amazon Kinesis Data Analytics to process data and Amazon Elastic File System (Amazon EFS) to store data. Answer: A.
NO.157 A solutions architect is designing the architecture of a new application being deployed to the AWS Cloud. The application will run on Amazon EC2 On-Demand Instances and will automatically scale across multiple Availability Zones. The EC2 instances will scale up and down frequently throughout the day. An Application Load Balancer (ALB) will handle the load distribution. The architecture needs to support distributed session data management. The company is willing to make changes to code if needed. What should the solutions architect do to ensure that the architecture supports distributed session data management? A. Use Amazon ElastiCache to manage and store session data. B. Use session affinity (sticky sessions) of the ALB to manage session data. C. Use Session Manager from AWS Systems Manager to manage the session. D. Use the GetSessionToken API operation in AWS Security Token Service (AWS STS) to manage the session.
NO.158 A company needs a backup strategy for its three-tier stateless web application. The web application runs on Amazon EC2 instances in an Auto Scaling group with a dynamic scaling policy that is configured to respond to scaling events. The database tier runs on Amazon RDS for PostgreSQL. The web application does not require temporary local storage on the EC2 instances. The company's recovery point objective (RPO) is 2 hours. The backup strategy must maximize scalability and optimize resource utilization for this environment. Which solution will meet these requirements? A. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances and database every 2 hours to meet the RPO. B. Configure a snapshot lifecycle policy to take Amazon Elastic Block Store (Amazon EBS) snapshots. Enable automated backups in Amazon RDS to meet the RPO. C. Retain the latest Amazon Machine Images (AMIs) of the web and application tiers. Enable automated backups in Amazon RDS and use point-in-time recovery to meet the RPO. D. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances every 2 hours. Enable automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.
NO.159 A solutions architect is designing the architecture for a company website that is composed of static content. The company's target customers are located in the United States and Europe. Which architecture should the solutions architect recommend to MINIMIZE cost? A. Store the website files on Amazon S3 in the us-east-2 Region. Use an Amazon CloudFront distribution with the price class configured to limit the edge locations in use. B. Store the website files on Amazon S3 in the us-east-2 Region. Use an Amazon CloudFront distribution with the price class configured to maximize the use of edge locations. C. Store the website files on Amazon S3 in the us-east-2 Region and the eu-west-1 Region. Use an Amazon CloudFront geolocation routing policy to route requests to the closest Region to the user. D. Store the website files on Amazon S3 in the us-east-2 Region and the eu-west-1 Region. Use an Amazon CloudFront distribution with an Amazon Route 53 latency routing policy to route requests to the closest Region to the user.
NO.160 A company is developing a video conversion application hosted on AWS. The application will be available in two tiers: a free tier and a paid tier. Users in the paid tier will have their videos converted first, and then the free tier users will have their videos converted. Which solution meets these requirements and is MOST cost-effective? A. One FIFO queue for the paid tier and one standard queue for the free tier. B. A single FIFO Amazon Simple Queue Service (Amazon SQS) queue for all file types. C. A single standard Amazon Simple Queue Service (Amazon SQS) queue for all file types. D. Two standard Amazon Simple Queue Service (Amazon SQS) queues with one for the paid tier and one for the free tier.
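The two-queue design in option D implies the conversion worker drains the paid-tier queue before touching the free-tier queue. A minimal in-memory sketch of that polling order (queue contents are illustrative):

```python
from collections import deque

# Two queues standing in for the paid-tier and free-tier SQS queues.
paid_queue = deque(["paid-video-1", "paid-video-2"])
free_queue = deque(["free-video-1"])

def next_job():
    """Return the next video to convert, always preferring the paid tier."""
    if paid_queue:
        return paid_queue.popleft()
    if free_queue:
        return free_queue.popleft()
    return None  # both queues empty

# Drain three jobs: paid work is fully processed before free work.
order = [next_job(), next_job(), next_job()]
```

With real SQS, the worker would issue a `receive_message` against the paid queue first and fall back to the free queue only on an empty response.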
NO.161 A company runs a photo processing application that needs to frequently upload and download pictures from Amazon S3 buckets that are located in the same AWS Region. A solutions architect has noticed an increased cost in data transfer fees and needs to implement a solution to reduce these costs. How can the solutions architect meet this requirement? A. Deploy Amazon API Gateway into a public subnet and adjust the route table to route S3 calls through it. B. Deploy a NAT gateway into a public subnet and attach an endpoint policy that allows access to the S3 buckets. C. Deploy the application into a public subnet and allow it to route through an internet gateway to access the S3 buckets. D. Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets.
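The endpoint policy mentioned in option D controls what traffic the S3 gateway endpoint allows. A sketch of such a policy scoped to the upload and download actions on one bucket (the bucket name is a placeholder):

```python
import json

# Endpoint policy for an S3 gateway VPC endpoint, limiting traffic
# through the endpoint to object reads and writes in a single bucket.
# The bucket name is illustrative.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::photo-processing-bucket/*",
        }
    ],
}

# Serialized form, as supplied when creating or modifying the endpoint.
policy_document = json.dumps(endpoint_policy)
```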
NO.162 A company has data stored in an on-premises data center that is used by several on-premises applications. The company wants to maintain its existing application environment and be able to use AWS services for data analytics and future visualizations. Which storage service should a solutions architect recommend? A. Amazon Redshift. B. AWS Storage Gateway for files. C. Amazon Elastic Block Store (Amazon EBS). D. Amazon Elastic File System (Amazon EFS).
NO.163 A developer has a script to generate daily reports that users previously ran manually. The script consistently completes in under 10 minutes. The developer needs to automate this process in a cost-effective manner. Which combination of services should the developer use? (Select TWO.) A. AWS Lambda. B. AWS CloudTrail. C. Cron on an Amazon EC2 instance. D. Amazon EC2 On-Demand Instance with user data. E. Amazon EventBridge (Amazon CloudWatch Events).
NO.164 A company hosts a marketing website in an on-premises data center. The website consists of static documents and runs on a single server. An administrator updates the website content infrequently and uses an SFTP client to upload new documents. The company decides to host its website on AWS and to use Amazon CloudFront. The company's solutions architect creates a CloudFront distribution. The solutions architect must design the most cost-effective and resilient architecture for website hosting to serve as the CloudFront origin. Which solution will meet these requirements? A. Create a virtual server by using Amazon Lightsail. Configure the web server in the Lightsail instance. Upload website content by using an SFTP client. B. Create an AWS Auto Scaling group for Amazon EC2 instances. Use an Application Load Balancer. Upload website content by using an SFTP client. C. Create a private Amazon S3 bucket. Use an S3 bucket policy to allow access from a CloudFront origin access identity (OAI). Upload website content by using the AWS CLI. D. Create a public Amazon S3 bucket. Configure AWS Transfer for SFTP. Configure the S3 bucket for website hosting. Upload website content by using the SFTP client.
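Option C pairs a private bucket with a bucket policy that admits only the CloudFront origin access identity, so viewers must go through CloudFront. A sketch of that policy document (the OAI ID and bucket name are placeholders):

```python
import json

# Bucket policy granting read access solely to a CloudFront origin
# access identity (OAI); the OAI ID and bucket name are placeholders.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": (
                    "arn:aws:iam::cloudfront:user/"
                    "CloudFront Origin Access Identity E1EXAMPLE"
                )
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::marketing-site-bucket/*",
        }
    ],
}

# Serialized form, as supplied to put_bucket_policy.
policy_json = json.dumps(bucket_policy)
```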
NO.165 A company needs to store 160 TB of data for an indefinite period of time. The company must be able to use standard SQL and business intelligence tools to query all of the data. The data will be queried no more than twice each month. What is the MOST cost-effective solution that meets these requirements? A. Store the data in Amazon Aurora Serverless with MySQL. Use an SQL client to query the data. B. Store the data in Amazon S3. Use AWS Glue, Amazon Athena, and JDBC and ODBC drivers to query the data. C. Store the data in an Amazon EMR cluster with EMR File System (EMRFS) as the storage layer. Use Apache Presto to query the data. D. Store a subset of the data in Amazon Redshift, and store the remaining data in Amazon S3. Use Amazon Redshift Spectrum to query the S3 data.
NO.166 A company stores call recordings on a monthly basis. Users access the recorded files randomly within 1 year of recording, but users rarely access the files after 1 year. The company wants to optimize its solution by allowing only files that are newer than 1 year old to be queried and retrieved as quickly as possible. A delay in retrieving older files is acceptable. Which solution meets these requirements MOST cost-effectively? A. Store individual files in Amazon S3 Glacier. Store search metadata in object tags that are created in S3 Glacier. Query the S3 Glacier tags to retrieve the files from S3 Glacier. B. Store individual files in Amazon S3. Use S3 Lifecycle policies to move the files to S3 Glacier after 1 year. Query and retrieve the files that are in Amazon S3 by using Amazon Athena. Query and retrieve the files that are in S3 Glacier by using S3 Glacier Select. C. Store individual files in Amazon S3. Store search metadata for each archive in Amazon S3. Use S3 Lifecycle policies to move the files to S3 Glacier after 1 year. Query and retrieve the files by searching for metadata from Amazon S3. D. Store individual files in Amazon S3. Use S3 Lifecycle policies to move the files to S3 Glacier after 1 year. Store search metadata in Amazon RDS. Query the data from Amazon RDS. Retrieve the files from Amazon S3 or S3 Glacier.
NO.167 A company has deployed a business-critical application in the AWS Cloud. The application uses Amazon EC2 instances that run in the us-east-1 Region. The application uses Amazon S3 for storage of all critical data. To meet compliance requirements, the company must create a disaster recovery (DR) plan that provides the capability of a full failover to another AWS Region. What should a solutions architect recommend for this DR plan? A. Deploy the application to multiple Availability Zones in us-east-1. Create a resource group in AWS Resource Groups. Turn on automatic failover for the application to use a predefined recovery Region. B. Perform a virtual machine (VM) export by using AWS Import/Export on the existing EC2 instances. Copy the exported instances to the destination Region. In the event of a disaster, provision new EC2 instances from the exported EC2 instances. C. Create snapshots of all Amazon Elastic Block Store (Amazon EBS) volumes that are attached to the EC2 instances in us-east-1. Copy the snapshots to the destination Region. In the event of a disaster, provision new EC2 instances from the EBS snapshots. D. Use S3 Cross-Region Replication for the data that is stored in Amazon S3. Create an AWS CloudFormation template for the application with an S3 bucket parameter. In the event of a disaster, deploy the template to the destination Region and specify the local S3 bucket as the parameter.
NO.168 A company has an on-premises data center that is running out of storage capacity. The company wants to migrate its storage infrastructure to AWS while minimizing bandwidth costs. The solution must allow for immediate retrieval of data at no additional cost. How can these requirements be met? A. Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval capacity for the workload. B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally. C. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3. D. Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS Storage Gateway to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.
NO.169 A solutions architect must create a highly available bastion host architecture. The solution needs to be resilient within a single AWS Region and should require only minimal effort to maintain. What should the solutions architect do to meet these requirements? A. Create a Network Load Balancer backed by an Auto Scaling group with a UDP listener. B. Create a Network Load Balancer backed by a Spot Fleet with instances in a partition placement group. C. Create a Network Load Balancer backed by the existing servers in different Availability Zones as the target. D. Create a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones as the target.
NO.170 A company is building a new furniture inventory application. The company has deployed the application on a fleet of Amazon EC2 instances across multiple Availability Zones. The EC2 instances run behind an Application Load Balancer (ALB) in their VPC. A solutions architect has observed that incoming traffic seems to favor one EC2 instance, resulting in latency for some requests. What should the solutions architect do to resolve this issue? A. Disable session affinity (sticky sessions) on the ALB. B. Replace the ALB with a Network Load Balancer. C. Increase the number of EC2 instances in each Availability Zone. D. Adjust the frequency of the health checks on the ALB's target group.
NO.171 As part of budget planning, management wants a report of AWS billed items listed by user. The data will be used to create department budgets. A solutions architect needs to determine the most efficient way to obtain this report information. Which solution meets these requirements? A. Run a query with Amazon Athena to generate the report. B. Create a report in Cost Explorer and download the report. C. Access the bill details from the billing dashboard and download the bill. D. Modify a cost budget in AWS Budgets to alert with Amazon Simple Email Service (Amazon SES).
NO.172 A company has deployed an internal API in a VPC behind an internet-facing Application Load Balancer (ALB). An application that consumes the API as a client is deployed in a VPC in a second account. The application is deployed in private subnets behind a NAT gateway. When requests to the client application increase, the NAT gateway costs are higher than expected. Which combination of architectural changes will reduce the NAT gateway costs? (Select TWO.) A. Configure a VPC peering connection between the two VPCs. B. Configure an AWS Direct Connect connection between the two VPCs. C. Replace the internet-facing ALB with an internal ALB. Access the API by using the ALB's private DNS address. D. Configure a ClassicLink connection for the API to the client VPC. Access the API by using the ClassicLink address. E. Configure an AWS Resource Access Manager connection between the two accounts. Access the API by using the ALB's private DNS address.
NO.173 A company has a three-tier application for image sharing. The application uses an Amazon EC2 instance for the front-end layer, another EC2 instance for the application layer, and a third EC2 instance for a MySQL database. A solutions architect must design a scalable and highly available solution that requires the least amount of change to the application. Which solution meets these requirements? A. Use Amazon S3 to host the front-end layer. Use AWS Lambda functions for the application layer. Move the database to an Amazon DynamoDB table. Use Amazon S3 to store and serve users' images. B. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end layer and the application layer. Move the database to an Amazon RDS DB instance with multiple read replicas to serve users' images. C. Use Amazon S3 to host the front-end layer. Use a fleet of EC2 instances in an Auto Scaling group for the application layer. Move the database to a memory optimized instance type to store and serve users' images. D. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end layer and the application layer. Move the database to an Amazon RDS Multi-AZ DB instance. Use Amazon S3 to store and serve users' images.
NO.174 A company uses Amazon S3 to store its confidential audit documents. The S3 bucket uses bucket policies to restrict access to audit team IAM user credentials according to the principle of least privilege. Company managers are worried about accidental deletion of documents in the S3 bucket and want a more secure solution. What should a solutions architect do to secure the audit documents? A. Enable the versioning and MFA Delete features on the S3 bucket. B. Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM user account. C. Add an S3 Lifecycle policy to the audit team's IAM user accounts to deny the s3:DeleteObject action during audit dates. D. Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from accessing the KMS key.
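Option A combines S3 Versioning with MFA Delete. A minimal sketch of the request that `put_bucket_versioning` accepts, shown as a plain dict rather than a live API call; the bucket name and MFA device ARN are placeholders:

```python
import json

# Hypothetical bucket name. MFA Delete can only be enabled by the
# bucket owner's root credentials, with a current MFA token supplied.
bucket = "example-audit-documents"

versioning_request = {
    "Bucket": bucket,
    # MFADelete requires Versioning to be enabled on the same bucket.
    "VersioningConfiguration": {
        "Status": "Enabled",
        "MFADelete": "Enabled",
    },
    # Format is "<mfa-device-arn> <current-token>"; both values are placeholders.
    "MFA": "arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
}

print(json.dumps(versioning_request["VersioningConfiguration"]))
```

With this in place, deleting an object version requires a valid MFA token, which addresses the accidental-deletion concern.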
NO.175 A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A solutions architect needs to implement a solution to ingest and store the alerts for future analysis. The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days. What is the MOST operationally efficient solution that meets these requirements? A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days. B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days. C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon Elasticsearch Service (Amazon ES) cluster. Set up the Amazon ES cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days. D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.
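The archiving step in options A and B relies on an S3 Lifecycle configuration. A minimal sketch of one, assuming an illustrative rule ID and key prefix:

```python
import json

# Lifecycle rule: keep alerts in S3 Standard for 14 days of immediate
# analysis, then transition them to S3 Glacier for archival.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-alerts-after-14-days",  # illustrative rule ID
            "Status": "Enabled",
            "Filter": {"Prefix": "alerts/"},       # illustrative key prefix
            "Transitions": [
                {"Days": 14, "StorageClass": "GLACIER"}
            ],
        }
    ]
}

print(json.dumps(lifecycle_configuration, indent=2))
```

This is the document shape that `put_bucket_lifecycle_configuration` takes; the transition runs automatically, so no additional infrastructure needs to be managed.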
NO.176 A company must save all the email messages that its employees send to customers for a period of 12 months. The messages are stored in a binary format and vary in size from 1 KB to 20 KB. The company has selected Amazon S3 as the storage service for the messages. Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.) A. Create an S3 bucket policy that denies the s3:DeleteObject action. B. Create an S3 Lifecycle configuration that deletes the messages after 12 months. C. Upload the messages to Amazon S3. Use S3 Object Lock in governance mode. D. Upload the messages to Amazon S3. Use S3 Object Lock in compliance mode. E. Use S3 Inventory. Create an AWS Batch job that periodically scans the inventory and deletes the messages after 12 months.
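Options D and B together map to two bucket-level configurations: an Object Lock default retention in compliance mode, and a lifecycle expiration rule. A hedged sketch of both shapes (rule ID illustrative):

```python
import json

# Option D: Object Lock default retention in compliance mode, so no
# user (including root) can delete or overwrite a message during the
# 365-day retention period.
object_lock_configuration = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
}

# Option B: a lifecycle rule that expires the messages once the
# 12-month retention period has passed.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "delete-messages-after-12-months",  # illustrative
            "Status": "Enabled",
            "Filter": {},
            "Expiration": {"Days": 365},
        }
    ]
}

print(json.dumps(object_lock_configuration))
```

Note that Object Lock can only be enabled on a bucket that has versioning enabled.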
NO.177 A company has a production web application in which users upload documents through a web interface or a mobile app. According to a new regulatory requirement, new documents cannot be modified or deleted after they are stored. What should a solutions architect do to meet this requirement? A. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning and S3 Object Lock enabled. B. Store the uploaded documents in an Amazon S3 bucket. Configure an S3 Lifecycle policy to archive the documents periodically. C. Store the uploaded documents in an Amazon S3 bucket with S3 Versioning enabled. Configure an ACL to restrict all access to read-only. D. Store the uploaded documents on an Amazon Elastic File System (Amazon EFS) volume. Access the data by mounting the volume in read-only mode.
NO.178 A company hosts more than 300 global websites and applications. The company requires a platform to analyze more than 30 TB of clickstream data each day. What should a solutions architect do to transmit and process the clickstream data? A. Design an AWS Data Pipeline to archive the data to an Amazon S3 bucket and run an Amazon EMR cluster with the data to generate analytics. B. Create an Auto Scaling group of Amazon EC2 instances to process the data and send it to an Amazon S3 data lake for Amazon Redshift to use for analysis. C. Cache the data to Amazon CloudFront. Store the data in an Amazon S3 bucket. When an object is added to the S3 bucket, run an AWS Lambda function to process the data for analysis. D. Collect the data from Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake. Load the data in Amazon Redshift for analysis.
NO.179 A company has a hybrid application hosted on multiple on-premises servers with static IP addresses. There is already a VPN that provides connectivity between the VPC and the on-premises network. The company wants to distribute TCP traffic across the on-premises servers for internet users. What should a solutions architect recommend to provide a highly available and scalable solution? A. Launch an internet-facing Network Load Balancer (NLB) and register the on-premises IP addresses with the NLB. B. Launch an internet-facing Application Load Balancer (ALB) and register the on-premises IP addresses with the ALB. C. Launch an Amazon EC2 instance, attach an Elastic IP address, and distribute traffic to the on-premises servers. D. Launch Amazon EC2 instances with public IP addresses in an Auto Scaling group and distribute traffic to the on-premises servers.
NO.180 A company wants to perform an online migration of active datasets from an on-premises NFS server to an Amazon S3 bucket that is named DOC-EXAMPLE-BUCKET. Data integrity verification is required during the transfer and at the end of the transfer. The data also must be encrypted. A solutions architect is using an AWS solution to migrate the data. Which solution meets these requirements? A. AWS Storage Gateway file gateway B. S3 Transfer Acceleration C. AWS DataSync D. AWS Snowball Edge Storage Optimized.
NO.181 A company is planning to run a group of Amazon EC2 instances that connect to an Amazon Aurora database. The company has built an AWS CloudFormation template to deploy the EC2 instances and the Aurora DB cluster. The company wants to allow the instances to authenticate to the database in a secure way. The company does not want to maintain static database credentials. Which solution meets these requirements with the LEAST operational effort? A. Create a database user with a user name and password. Add parameters for the database user name and password to the CloudFormation template. Pass the parameters to the EC2 instances when the instances are launched. B. Create a database user with a user name and password. Store the user name and password in AWS Systems Manager Parameter Store. Configure the EC2 instances to retrieve the database credentials from Parameter Store. C. Configure the DB cluster to use IAM database authentication. Create a database user to use with IAM authentication. Associate a role with the EC2 instances to allow applications on the instances to access the database. D. Configure the DB cluster to use IAM database authentication with an IAM user. Create a database user that has a name that matches the IAM user. Associate the IAM user with the EC2 instances to allow applications on the instances to access the database.
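Option C hinges on IAM database authentication: the EC2 instance role needs an IAM policy that allows `rds-db:connect` on the specific database user. A minimal sketch of such a policy document; the Region, account ID, cluster resource ID, and database user name (`app_user`) are all placeholders:

```python
import json

# IAM policy attached to the EC2 instances' role. The Resource ARN
# targets one database user on one DB cluster resource ID; every
# identifier below is illustrative.
iam_db_auth_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "rds-db:connect",
            "Resource": (
                "arn:aws:rds-db:us-east-1:111122223333:"
                "dbuser:cluster-ABCDEFGHIJKL01234/app_user"
            ),
        }
    ],
}

print(json.dumps(iam_db_auth_policy, indent=2))
```

At connection time, the application exchanges its instance-role credentials for a short-lived authentication token (for example via boto3's `generate_db_auth_token`) instead of using a static password.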
NO.182 A company maintains about 300 TB in Amazon S3 Standard storage month after month. The S3 objects are each typically around 50 GB in size and are frequently replaced with multipart uploads by their global application. The number and size of S3 objects remain constant, but the company's S3 storage costs are increasing each month. How should a solutions architect reduce costs in this situation? A. Switch from multipart uploads to Amazon S3 Transfer Acceleration. B. Enable an S3 Lifecycle policy that deletes incomplete multipart uploads. C. Configure S3 Inventory to prevent objects from being archived too quickly. D. Configure Amazon CloudFront to reduce the number of objects stored in Amazon S3.
NO.183 A company has developed a microservices application. It uses a client-facing API with Amazon API Gateway and multiple internal services hosted on Amazon EC2 instances to process user requests. The API is designed to support unpredictable surges in traffic, but internal services may become overwhelmed and unresponsive for a period of time during surges. A solutions architect needs to design a more reliable solution that reduces errors when internal services become unresponsive or unavailable. Which solution meets these requirements? A. Use AWS Auto Scaling to scale up internal services when there is a surge in traffic. B. Use different Availability Zones to host internal services. Send a notification to a system administrator when an internal service becomes unresponsive. C. Use an Elastic Load Balancer to distribute the traffic between internal services. Configure Amazon CloudWatch metrics to monitor traffic to internal services. D. Use Amazon Simple Queue Service (Amazon SQS) to store user requests as they arrive. Change the internal services to retrieve the requests from the queue for processing.
NO.184 A solutions architect is designing a security solution for a company that wants to provide developers with individual AWS accounts through AWS Organizations, while also maintaining standard security controls. Because the individual developers will have AWS account root user-level access to their own accounts, the solutions architect wants to ensure that the mandatory AWS CloudTrail configuration that is applied to new developer accounts is not modified. Which action meets these requirements? A. Create an IAM policy that prohibits changes to CloudTrail, and attach it to the root user. B. Create a new trail in CloudTrail from within the developer accounts with the organization trails option enabled. C. Create a service control policy (SCP) that prohibits changes to CloudTrail, and attach it to the developer accounts. D. Create a service-linked role for CloudTrail with a policy condition that allows changes only from an Amazon Resource Name (ARN) in the master account.
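Option C's SCP could take roughly this shape; SCPs apply even to the member account's root user, which an IAM policy cannot. The statement ID and the exact set of denied actions are illustrative:

```python
import json

# Service control policy: deny any mutating CloudTrail action in the
# developer accounts. The action list is a representative sample, not
# an exhaustive one.
cloudtrail_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCloudTrailChanges",  # illustrative Sid
            "Effect": "Deny",
            "Action": [
                "cloudtrail:StopLogging",
                "cloudtrail:DeleteTrail",
                "cloudtrail:UpdateTrail",
                "cloudtrail:PutEventSelectors",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(cloudtrail_scp, indent=2))
```

The policy would be attached to the organizational unit (or accounts) containing the developer accounts.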
NO.185 A company hosts its application in the AWS Cloud. The application runs on Amazon EC2 instances behind an Elastic Load Balancer in an Auto Scaling group and with an Amazon DynamoDB table. The company wants to ensure the application can be made available in another AWS Region with minimal downtime. What should a solutions architect do to meet these requirements with the LEAST amount of downtime? A. Create an Auto Scaling group and a load balancer in the disaster recovery Region. Configure the DynamoDB table as a global table. Configure DNS failover to point to the new disaster recovery Region's load balancer. B. Create an AWS CloudFormation template to create EC2 instances, load balancers, and DynamoDB tables to be launched when needed. Configure DNS failover to point to the new disaster recovery Region's load balancer. C. Create an AWS CloudFormation template to create EC2 instances and a load balancer to be launched when needed. Configure the DynamoDB table as a global table. Configure DNS failover to point to the new disaster recovery Region's load balancer. D. Create an Auto Scaling group and load balancer in the disaster recovery Region. Configure the DynamoDB table as a global table. Create an Amazon CloudWatch alarm to trigger an AWS Lambda function that updates Amazon Route 53 pointing to the disaster recovery load balancer.
NO.186 A company has designed an application where users provide small sets of textual data by calling a public API. The application runs on AWS and includes a public Amazon API Gateway API that forwards requests to an AWS Lambda function for processing. The Lambda function then writes the data to an Amazon Aurora Serverless database for consumption. The company is concerned that it could lose some user data if a Lambda function fails to process the request properly or reaches a concurrency limit. What should a solutions architect recommend to resolve this concern? A. Split the existing Lambda function into two Lambda functions. Configure one function to receive API Gateway requests and put relevant items into Amazon Simple Queue Service (Amazon SQS). Configure the other function to read items from Amazon SQS and save the data into Aurora. B. Configure the Lambda function to receive API Gateway requests and write relevant items to Amazon ElastiCache. Configure ElastiCache to save the data into Aurora. C. Increase the memory for the Lambda function. Configure Aurora to use the Multi-AZ feature. D. Split the existing Lambda function into two Lambda functions. Configure one function to receive API Gateway requests and put relevant items into Amazon Simple Notification Service (Amazon SNS). Configure the other function to read items from Amazon SNS and save the data into Aurora.
NO.187 A company's website runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The website has a mix of dynamic and static content. Users around the globe are reporting that the website is slow. Which set of actions will improve website performance for users worldwide? A. Create an Amazon CloudFront distribution and configure the ALB as an origin. Then update the Amazon Route 53 record to point to the CloudFront distribution. B. Create a latency-based Amazon Route 53 record for the ALB. Then launch new EC2 instances with larger instance sizes and register the instances with the ALB. C. Launch new EC2 instances hosting the same web application in different Regions closer to the users. Then register the instances with the same ALB using cross-Region VPC peering. D. Host the website in an Amazon S3 bucket in the Regions closest to the users and delete the ALB and EC2 instances. Then update an Amazon Route 53 record to point to the S3 buckets.
NO.188 A solutions architect is designing the architecture for a software demonstration environment. The environment will run on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). The system will experience significant increases in traffic during working hours but is not required to operate on weekends. Which combination of actions should the solutions architect take to ensure that the system can scale to meet demand? (Select TWO.) A. Use AWS Auto Scaling to adjust the ALB capacity based on request rate. B. Use AWS Auto Scaling to scale the capacity of the VPC internet gateway. C. Launch the EC2 instances in multiple AWS Regions to distribute the load across Regions. D. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization. E. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. Revert to the default values at the start of the week.
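Option E maps to a pair of scheduled actions on the Auto Scaling group. A sketch of the parameter shape that `put_scheduled_update_group_action` takes; the group name, capacities, and cron expressions are illustrative (Amazon EC2 Auto Scaling evaluates the `Recurrence` cron expression in UTC by default):

```python
# Scale the group to zero at the start of the weekend...
scale_to_zero = {
    "AutoScalingGroupName": "demo-environment-asg",  # placeholder name
    "ScheduledActionName": "weekend-scale-in",
    "Recurrence": "0 20 * * FRI",  # Friday 20:00 UTC (illustrative)
    "MinSize": 0,
    "MaxSize": 0,
    "DesiredCapacity": 0,
}

# ...and restore the default capacity on Monday morning.
restore_capacity = {
    "AutoScalingGroupName": "demo-environment-asg",
    "ScheduledActionName": "weekday-scale-out",
    "Recurrence": "0 6 * * MON",   # Monday 06:00 UTC (illustrative)
    "MinSize": 2,
    "MaxSize": 10,
    "DesiredCapacity": 2,
}

print(scale_to_zero["Recurrence"], restore_capacity["Recurrence"])
```

Option D would then handle the intraday working-hours load on top of this weekly schedule.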
NO.189 A company runs an application using Amazon ECS. The application creates resized versions of an original image and then makes Amazon S3 API calls to store the resized images in Amazon S3. How can a solutions architect ensure that the application has permission to access Amazon S3? A. Update the S3 role in AWS IAM to allow read/write access from Amazon ECS, and then relaunch the container. B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition. C. Create a security group that allows access from Amazon ECS to Amazon S3, and update the launch configuration used by the ECS cluster. D. Create an IAM user with S3 permissions, and then relaunch the Amazon EC2 instances for the ECS cluster while logged in as this account.
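Option B refers to the `taskRoleArn` field of an ECS task definition, which grants the containers in the task (rather than the container instance) the role's permissions. A fragment of such a task definition; the family name, role ARN, and image URI are placeholders:

```python
import json

# ECS task definition fragment. The task role, not the EC2 instance
# role, is what the application's S3 API calls are authorized with.
task_definition = {
    "family": "image-resizer",  # placeholder family name
    "taskRoleArn": "arn:aws:iam::111122223333:role/image-resizer-s3-access",
    "containerDefinitions": [
        {
            "name": "resizer",
            # Placeholder ECR image URI.
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/resizer:latest",
            "essential": True,
        }
    ],
}

print(json.dumps(task_definition, indent=2))
```

The role named in `taskRoleArn` would carry an attached policy allowing the needed `s3:PutObject`/`s3:GetObject` actions on the target bucket.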
NO.190 A company is migrating to the AWS Cloud. A file server is the first workload to migrate. Users must be able to access the file share using the Server Message Block (SMB) protocol. Which AWS managed service meets these requirements? A. Amazon EBS B. Amazon EC2 C. Amazon FSx D. Amazon S3.
NO.191 A company is migrating its application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster behind an Application Load Balancer (ALB). The disaster recovery (DR) requirements for the application include the ability to fail over to another AWS Region with minimal downtime. Which combination of actions should a solutions architect take to meet this requirement? (Select TWO.) A. Create a scaled-down clone environment in the DR Region. Use auto scaling policies with the EKS nodes. B. Create an Amazon Route 53 record that points to the ALB. Configure an active-passive failover routing policy on the record. C. Create an AWS Resource Access Manager policy that grants the application users access to the DR environment when the DR environment is needed. D. Create an AWS Lambda function that monitors the availability of the main environment and deploys the DR environment when the DR environment is needed. E. Create an AWS CloudFormation template that deploys the stack. Deploy the same template in the DR Region when the main environment is unavailable.
NO.192 A company is running an application on Amazon EC2 instances. Traffic to the workload increases substantially during business hours and decreases afterward. The CPU utilization of an EC2 instance is a strong indicator of end-user demand on the application. The company has configured an Auto Scaling group to have a minimum group size of 2 EC2 instances and a maximum group size of 10 EC2 instances. The company is concerned that the current scaling policy that is associated with the Auto Scaling group might not be correct. The company must avoid over-provisioning EC2 instances and incurring unnecessary costs. What should a solutions architect recommend to meet these requirements? A. Configure Amazon EC2 Auto Scaling to use a scheduled scaling plan and launch an additional 8 EC2 instances during business hours. B. Configure AWS Auto Scaling to use a scaling plan that enables predictive scaling. Configure predictive scaling with a scaling mode of forecast and scale, and to enforce the maximum capacity setting during scaling. C. Configure a step scaling policy to add 4 EC2 instances at 50% CPU utilization and add another 4 EC2 instances at 90% CPU utilization. Configure scale-in policies to perform the reverse and remove EC2 instances based on the two values. D. Configure AWS Auto Scaling to have a desired capacity of 5 EC2 instances, and disable any existing scaling policies. Monitor the CPU utilization metric for 1 week. Then create dynamic scaling policies that are based on the observed values.
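Option B describes a scaling plan with predictive scaling. A hedged sketch of the scaling-instruction shape that the AWS Auto Scaling `CreateScalingPlan` API uses; the resource ID, capacities, and target value are illustrative, and the field names are given to the best of my knowledge:

```python
# One scaling instruction of a scaling plan. "ForecastAndScale" both
# forecasts demand and schedules capacity changes; the max-capacity
# behavior caps the forecast at the group's configured maximum.
scaling_instruction = {
    "ServiceNamespace": "autoscaling",
    "ResourceId": "autoScalingGroup/my-app-asg",  # placeholder group name
    "ScalableDimension": "autoscaling:autoScalingGroup:DesiredCapacity",
    "MinCapacity": 2,
    "MaxCapacity": 10,
    "TargetTrackingConfigurations": [
        {
            "PredefinedScalingMetricSpecification": {
                "PredefinedScalingMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,  # illustrative CPU target
        }
    ],
    "PredictiveScalingMode": "ForecastAndScale",
    "PredictiveScalingMaxCapacityBehavior": "SetForecastCapacityToMaxCapacity",
}

print(scaling_instruction["PredictiveScalingMode"])
```

Enforcing the maximum capacity setting keeps predictive scaling from over-provisioning beyond the 10-instance ceiling.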
NO.193 A company has an application that uses an Amazon DynamoDB table for storage. A solutions architect discovers that many requests to the table are not returning the latest data. The company's users have not reported any other issues with database performance. Latency is in an acceptable range. Which design change should the solutions architect recommend? A. Add read replicas to the table. B. Use a global secondary index (GSI). C. Request strongly consistent reads for the table. D. Request eventually consistent reads for the table.
NO.194 A company is running a web-based game in two Availability Zones in the us-west-2 Region. The web servers use an Application Load Balancer (ALB) in public subnets. The ALB has an SSL certificate from AWS Certificate Manager (ACM) with a custom domain name. The game is written in JavaScript and runs entirely in a user's web browser. The game is increasing in popularity in many countries around the world. The company wants to update the application architecture and optimize costs without compromising performance. What should a solutions architect do to meet these requirements? A. Use Amazon CloudFront and create a global distribution that points to the ALB. Reuse the existing certificate from ACM for the CloudFront distribution. Use Amazon Route 53 to update the application alias to point to the distribution. B. Use AWS CloudFormation to deploy the application stack to AWS Regions near countries where the game is popular. Use ACM to create a new certificate for each application instance. Use Amazon Route 53 with a geolocation routing policy to direct traffic to the local application instance. C. Use Amazon S3 and create an S3 bucket in AWS Regions near countries where the game is popular. Deploy the HTML and JavaScript files to each S3 bucket. Use ACM to create a new certificate for each S3 bucket. Use Amazon Route 53 with a geolocation routing policy to direct traffic to the local S3 bucket. D. Use Amazon S3 and create an S3 bucket in us-west-2. Deploy the HTML and JavaScript files to the S3 bucket. Use Amazon CloudFront and create a global distribution with the S3 bucket as the origin. Use ACM to create a new certificate for the distribution. Use Amazon Route 53 to update the application alias to point to the distribution.
NO.195 A company is upgrading its critical web-based application. The application is hosted on Amazon EC2 instances that are part of an Auto Scaling group behind an Application Load Balancer (ALB). The company wants to test the new configurations with a specific amount of traffic before the company begins to route all traffic to the upgraded application. How should a solutions architect design the architecture to meet these requirements? A. Create a new launch template. Associate the new launch template with the Auto Scaling group. Attach the Auto Scaling group to the ALB. Distribute traffic by using redirect rules. B. Create a new launch template. Create an additional Auto Scaling group. Associate the new launch template with the additional Auto Scaling group. Attach the additional Auto Scaling group to the ALB. Distribute traffic by using weighted target groups. C. Create a new launch template. Create an additional Auto Scaling group. Associate the new launch template with the additional Auto Scaling group. Create an additional ALB. Attach the additional Auto Scaling group to the additional ALB. Use an Amazon Route 53 failover routing policy to route traffic. D. Create a new launch template. Create an additional Auto Scaling group. Associate the new launch template with the additional Auto Scaling group. Create an additional ALB. Attach the additional Auto Scaling group to the additional ALB. Use an Amazon Route 53 weighted routing policy to route traffic.
NO.196 A company wants to move a multi-tiered application from on premises to the AWS Cloud to improve the application's performance. The application consists of application tiers that communicate with each other by way of messages. Which solution meets these requirements and is the MOST operationally efficient? A. Use Amazon API Gateway and direct transactions to the AWS Lambda functions as the application layer. Use Amazon Simple Queue Service (Amazon SQS) as the communication layer between application services. B. Use Amazon CloudWatch metrics to analyze the application performance history to determine the servers' peak utilization during the performance failures. Increase the size of the application servers' Amazon EC2 instances to meet the peak requirements. C. Use Amazon Simple Notification Service (Amazon SNS) to handle the messaging between application servers running on Amazon EC2 in an Auto Scaling group. Use Amazon CloudWatch to monitor the SNS queue length and scale up and down as required. D. Use Amazon Simple Queue Service (Amazon SQS) to handle the messaging between application servers running on Amazon EC2 in an Auto Scaling group. Use Amazon CloudWatch to monitor the SQS queue length and scale up when communication failures are detected.
NO.197 A company hosts a three-tier web application that includes a PostgreSQL database. The database stores the metadata from documents. The company searches the metadata for key terms to retrieve documents that the company reviews in a report each month. The documents are stored in Amazon S3. The documents are usually written only once, but they are updated frequently. The reporting process takes a few hours with the use of relational queries. The reporting process must not affect any document modifications or the addition of new documents. What are the MOST operationally efficient solutions that meet these requirements? (Select TWO.) A. Set up a new Amazon DocumentDB (with MongoDB compatibility) cluster that includes a read replica. Scale the read replica to generate the reports. B. Set up a new Amazon RDS for PostgreSQL Reserved Instance and an On-Demand read replica. Scale the read replica to generate the reports. C. Set up a new Amazon Aurora PostgreSQL DB cluster that includes a Reserved Instance and an Aurora Replica. Issue queries to the Aurora Replica to generate the reports. D. Set up a new Amazon RDS for PostgreSQL Multi-AZ Reserved Instance. Configure the reporting module to query the secondary RDS node so that the reporting module does not affect the primary node. E. Set up a new Amazon DynamoDB table to store the documents. Use a fixed write capacity to support new document entries. Automatically scale the read capacity to support the reports.
NO.198 A company is planning to host its compute-intensive applications on Amazon EC2 instances. The majority of the network traffic will be between these applications. The company needs a solution that minimizes latency and maximizes network throughput. The underlying hardware for the EC2 instances must not be shared with any other company. Which solution will meet these requirements? A. Launch EC2 instances as Dedicated Hosts in a cluster placement group. B. Launch EC2 instances as Dedicated Hosts in a partition placement group. C. Launch EC2 instances as Dedicated Instances in a cluster placement group. D. Launch EC2 instances as Dedicated Instances in a partition placement group.
NO.199 A company recently launched a new service that involves medical images. The company scans the images and sends them from its on-premises data center through an AWS Direct Connect connection to Amazon EC2 instances. After processing is complete, the images are stored in an Amazon S3 bucket. A company requirement states that the EC2 instances cannot be accessible through the internet. The EC2 instances run in a private subnet, which has a default route back to the on-premises data center for outbound internet access. Usage of the new service is increasing rapidly. A solutions architect must recommend a solution that meets the company's requirements and reduces the Direct Connect charges. Which solution accomplishes these goals MOST cost-effectively? A. Configure a VPC endpoint for Amazon S3. Add an entry to the private subnet's route table for the S3 endpoint. B. Configure a NAT gateway in a public subnet. Configure the private subnet's route table to use the NAT gateway. C. Configure Amazon S3 as a file system mount point on the EC2 instances. Access Amazon S3 through the mount. D. Move the EC2 instances into a public subnet. Configure the public subnet route table to point to an internet gateway.
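Option A's gateway VPC endpoint for S3 keeps the image uploads on the AWS network instead of hairpinning them over Direct Connect. An optional endpoint policy can scope the endpoint to the one bucket; a sketch, with a placeholder bucket name:

```python
import json

# Endpoint policy for a gateway VPC endpoint to S3. Allowing "*"
# principals is common for gateway endpoints; the Resource restricts
# which bucket can be reached through this endpoint.
s3_endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-medical-images/*",  # placeholder
        }
    ],
}

print(json.dumps(s3_endpoint_policy, indent=2))
```

Creating the gateway endpoint adds a prefix-list route for S3 to the private subnet's route table automatically when that route table is associated with the endpoint, so the instances reach S3 without internet access.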
NO.200 A company is concerned that two NAT instances in use will no longer be able to support the traffic needed for the company's application. A solutions architect wants to implement a solution that is highly available, fault tolerant, and automatically scalable. What should the solutions architect recommend? A. Remove the two NAT instances and replace them with two NAT gateways in the same Availability Zone. B. Use Auto Scaling groups with Network Load Balancers for the NAT instances in different Availability Zones. C. Remove the two NAT instances and replace them with two NAT gateways in different Availability Zones. D. Replace the two NAT instances with Spot Instances in different Availability Zones and deploy a Network Load Balancer.