Title of test:
DIVA-001-200

Description:
DIVA-001-200

Author:
DIVADIVA

Creation Date:
27/03/2022

Category:
Logical

Number of questions: 200
Content:
A firm is developing a web application on AWS utilizing containers. At any one moment, the organization needs three instances of the web application to be running. The application must be scalable in order to keep up with demand increases. While management is cost-conscious, they agree that the application should be highly available. What recommendations should a solutions architect make? Add an execution role to the function with lambda:InvokeFunction as the action and * as the principal. Add an execution role to the function with lambda:InvokeFunction as the action and Service:amazonaws.com as the principal. Add a resource-based policy to the function with lambda:* as the action and Service:events.amazonaws.com as the principal. Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:events.amazonaws.com as the principal.
A business outsources its marketplace analytics management to a third-party partner. The vendor requires restricted programmatic access to resources in the company's account. All necessary policies have been established to ensure acceptable access. Which new component provides the vendor the MOST SECURE access to the account? Stop the instance outside the application's availability window. Start up the instance again when required. Hibernate the instance outside the application's availability window. Start up the instance again when required. Use Auto Scaling to scale down the instance outside the application's availability window. Scale up the instance when required. Terminate the instance outside the application's availability window. Launch the instance by using a preconfigured Amazon Machine Image (AMI) when required.
A firm seeks to migrate its accounting system from an on-premises data center to an Amazon Web Services (AWS) Region. Data security and an unalterable audit log should be prioritized. All AWS activities must be subjected to compliance audits. Although the business has enabled AWS CloudTrail, it wants to ensure that it meets these requirements. What precautions and security procedures should a solutions architect include to protect and secure CloudTrail? (Choose two.) Create a second S3 bucket in us-east-1. Enable S3 Cross-Region Replication from the existing S3 bucket to the second S3 bucket. Create a cross-origin resource sharing (CORS) configuration of the existing S3 bucket. Specify us-east-1 in the CORS rule's AllowedOrigin element. Create a second S3 bucket in us-east-1 across multiple Availability Zones. Create an S3 Lifecycle management rule to save photos into the second S3 bucket. Create a second S3 bucket in us-east-1 to store the replicated photos. Configure S3 event notifications on object creation and update events that invoke an AWS Lambda function to copy photos from the existing S3 bucket to the second S3 bucket.
A firm maintains a searchable inventory of items on its website. The data is stored in an Amazon RDS for MySQL database in a table with over ten million entries. The database is kept on a 2 TB General Purpose SSD (gp2) volume. The company's website gets millions of updates to this data each day. The business discovered that some tasks took 10 seconds or longer and determined that the bottleneck was the database storage performance. Which of the following options meets the performance requirement? Configure a VPC endpoint for Amazon S3. Add an entry to the private subnet's route table for the S3 endpoint. Configure a NAT gateway in a public subnet. Configure the private subnet's route table to use the NAT gateway. Configure Amazon S3 as a file system mount point on the EC2 instances. Access Amazon S3 through the mount. Move the EC2 instances into a public subnet. Configure the public subnet route table to point to an internet gateway.
A business that is currently hosting a web application on-premises is prepared to transition to AWS and launch a newer version of the application. The organization must route requests to the AWS or on-premises application based on the URL query string. The on-premises application is not reachable over the internet, and a VPN connection is established between Amazon VPC and the business's data center. The company wishes to deploy this application using an Application Load Balancer (ALB). Which of the following solutions meets these criteria? Use AWS Snowball Edge devices to process and store the images. Upload the images to Amazon Simple Queue Service (Amazon SQS) during intermittent connectivity to EC2 instances. Configure Amazon Kinesis Data Firehose to create multiple delivery streams aimed separately at the S3 buckets for storage and the EC2 instances for processing the images. Use AWS Storage Gateway pre-installed on a hardware appliance to cache the images locally for Amazon S3 to process the images when connectivity becomes available.
A meteorological start-up company has created a custom web application for selling weather data to its members online. The company currently uses Amazon DynamoDB to store its data and wishes to establish a new service that alerts the managers of four internal teams whenever a new weather event is recorded. The business does not want this new service to impair the operation of the present application. What steps should a solutions architect take to guarantee that these objectives are satisfied with the MINIMUM feasible operational overhead? Create a DynamoDB table in on-demand capacity mode. Create a DynamoDB table with a global secondary index. Create a DynamoDB table with provisioned capacity and auto scaling. Create a DynamoDB table in provisioned capacity mode, and configure it as a global table.
A corporation uses an AWS application to offer content to its subscribers worldwide. Numerous Amazon EC2 instances are deployed on a private subnet behind an Application Load Balancer for the application (ALB). The chief information officer (CIO) wishes to limit access to some nations due to a recent change in copyright regulations. Which course of action will satisfy these criteria? Modify the ALB security group to deny incoming traffic from blocked countries. Modify the security group for EC2 instances to deny incoming traffic from blocked countries. Use Amazon CloudFront to serve the application and deny access to blocked countries. Use ALB listener rules to return access denied responses to incoming traffic from blocked countries.
Using seven Amazon EC2 instances, a business runs its web application on AWS. The organization requires that DNS queries return the IP addresses of all healthy EC2 instances. Which routing policy should be employed to comply with this requirement? Simple routing policy Latency routing policy Multivalue answer routing policy Geolocation routing policy.
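For reference on the multivalue answer option mentioned above, a minimal boto3 sketch follows. It assumes a hosted zone and a health check already exist; the zone ID, record name, IP address, and health check ID are placeholders, not values taken from the question.

    import boto3

    route53 = boto3.client("route53")

    # Create one multivalue answer A record per healthy EC2 instance. Route 53
    # answers DNS queries with up to eight healthy records.
    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789EXAMPLE",                  # placeholder hosted zone ID
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "www.example.com",
                        "Type": "A",
                        "SetIdentifier": "web-1",           # unique per instance
                        "MultiValueAnswer": True,
                        "TTL": 60,
                        "ResourceRecords": [{"Value": "203.0.113.10"}],
                        "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                    },
                }
            ]
        },
    )

Repeating the call with a different SetIdentifier, IP address, and health check would cover the remaining instances.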
Each day, a corporation collects data from millions of consumers totalling around 1'. The firm delivers usage records for the last 12 months to its customers. To comply with regulatory and auditing standards, all usage data must be retained for at least five years. Which storage option is the LEAST expensive? Store the data in Amazon S3 Standard. Set a lifecycle rule to transition the data to S3 Glacier Deep Archive after 1 year. Set a lifecycle rule to delete the data after 5 years. Store the data in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA). Set a lifecycle rule to transition the data to S3 Glacier after 1 year. Set the lifecycle rule to delete the data after 5 years. Store the data in Amazon S3 Standard. Set a lifecycle rule to transition the data to S3 Standard-Infrequent Access (S3 Standard-IA) after 1 year. Set a lifecycle rule to delete the data after 5 years. Store the data in Amazon S3 Standard. Set a lifecycle rule to transition the data to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 year. Set a lifecycle rule to delete the data after 5 years.
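As an illustration of the kind of lifecycle rule the options above describe, here is a boto3 sketch that transitions objects to S3 Glacier Deep Archive after one year and expires them after five; the bucket name and key prefix are assumptions, not part of the question.

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="example-usage-data",                    # placeholder bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-then-expire",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "usage/"},     # assumed key prefix
                    "Transitions": [
                        {"Days": 365, "StorageClass": "DEEP_ARCHIVE"}
                    ],
                    "Expiration": {"Days": 1825},       # roughly 5 years
                }
            ]
        },
    )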
A business uses an Amazon RDS for PostgreSQL database instance to manage a fleet of web servers. Following a normal compliance review, the corporation establishes a standard requiring all production databases to have a recovery point objective (RPO) of less than one second. Which solution satisfies these criteria? Enable a Multi-AZ deployment for the DB instance. Enable auto scaling for the DB instance in one Availability Zone. Configure the DB instance in one Availability Zone, and create multiple read replicas in a separate Availability Zone. Configure the DB instance in one Availability Zone, and configure AWS Database Migration Service (AWS DMS) change data capture (CDC) tasks.
On Amazon EC2 instances, a business is developing an application that creates transitory transactional data. The application requires access to data storage that can deliver adjustable and consistent IOPS. What recommendations should a solutions architect make? Provision an EC2 instance with a Throughput Optimized HDD (st1) root volume and a Cold HDD (sc1) data volume. Provision an EC2 instance with a Throughput Optimized HDD (st1) volume that will serve as the root and data volume. Provision an EC2 instance with a General Purpose SSD (gp2) root volume and Provisioned IOPS SSD (io1) data volume. Provision an EC2 instance with a General Purpose SSD (gp2) root volume. Configure the application to store its data in an Amazon S3 bucket.
Prior to implementing a new workload, a solutions architect must examine and update the company's current IAM policies. The following policy was written by the solutions architect: { "Version": "2012-10-17", "Statement": [{ "Effect": "Deny", "NotAction": "s3:PutObject", "Resource": "*", "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}} }] } What is the policy's net effect? Users will be allowed all actions except s3:PutObject if multi-factor authentication (MFA) is enabled. Users will be allowed all actions except s3:PutObject if multi-factor authentication (MFA) is not enabled. Users will be denied all actions except s3:PutObject if multi-factor authentication (MFA) is enabled. Users will be denied all actions except s3:PutObject if multi-factor authentication (MFA) is not enabled.
To allow near-real-time processing, a web application must persist order data to Amazon S3. A solutions architect must design a scalable and fault-tolerant architecture. Which solutions satisfy these criteria? (Select two.) Write the order event to an Amazon DynamoDB table. Use DynamoDB Streams to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3. Write the order event to an Amazon Simple Queue Service (Amazon SQS) queue. Use the queue to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3. Write the order event to an Amazon Simple Notification Service (Amazon SNS) topic. Use the SNS topic to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3. Write the order event to an Amazon Simple Queue Service (Amazon SQS) queue. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3. Write the order event to an Amazon Simple Notification Service (Amazon SNS) topic. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
A business in the us-east-1 region offers a picture hosting service. Users from many countries may upload and browse images using the program. Some photographs get a high volume of views over months, while others receive a low volume of views for less than a week. The program supports picture uploads of up to 20 MB in size. The service determines which photographs to show to each user based on the photo information. Which option delivers the most cost-effective access to the suitable users? Store the photos in Amazon DynamoDB. Turn on DynamoDB Accelerator (DAX) to cache frequently viewed items. Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3 location in DynamoDB. Store the photos in the Amazon S3 Standard storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Use the object tags to keep track of metadata. Store the photos in the Amazon S3 Glacier storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Glacier Deep Archive storage class. Store the photo metadata and its S3 location in Amazon Elasticsearch Service (Amazon ES).
A business is creating a website that will store static photos in an Amazon S3 bucket. The company's goal is to reduce both latency and cost for all future requests. How should a solutions architect propose a service configuration? Deploy a NAT server in front of Amazon S3. Deploy Amazon CloudFront in front of Amazon S3. Deploy a Network Load Balancer in front of Amazon S3. Configure Auto Scaling to automatically adjust the capacity of the website.
For the database layer of its ecommerce website, a firm uses Amazon DynamoDB with provisioned throughput. During flash sales, clients may encounter periods of delay when the database is unable to manage the volume of transactions. As a result, the business loses transactions. The database operates normally during regular times. Which approach resolves the company's performance issue? Switch DynamoDB to on-demand mode during flash sales. Implement DynamoDB Accelerator for fast in-memory performance. Use Amazon Kinesis to queue transactions for processing to DynamoDB. Use Amazon Simple Queue Service (Amazon SQS) to queue transactions to DynamoDB.
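For the on-demand option referenced above, switching a provisioned-throughput table to on-demand capacity is a single API call; a boto3 sketch with a hypothetical table name:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Switch billing from provisioned throughput to on-demand (pay per request),
    # so the table absorbs flash-sale spikes without pre-set capacity limits.
    dynamodb.update_table(
        TableName="orders",                 # placeholder table name
        BillingMode="PAY_PER_REQUEST",
    )

Note that DynamoDB limits switching between capacity modes to roughly once every 24 hours, which matters if the change is meant to be temporary.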
A significant media corporation uses AWS to host a web application. The corporation intends to begin caching secret media files in order to provide consumers worldwide with dependable access to them. Amazon S3 buckets are used to store the material. The organization must supply material rapidly, regardless of the origin of the requests. Which solution will satisfy these criteria? Use AWS DataSync to connect the S3 buckets to the web application. Deploy AWS Global Accelerator to connect the S3 buckets to the web application. Deploy Amazon CloudFront to connect the S3 buckets to CloudFront edge servers. Use Amazon Simple Queue Service (Amazon SQS) to connect the S3 buckets to the web application.
In the AWS Cloud, a web application is deployed. It is a two-tier design comprised of a web and database layer. Cross-site scripting (XSS) attacks are possible on the web server. What is the best course of action for a solutions architect to take to address the vulnerability? Create a Classic Load Balancer. Put the web layer behind the load balancer and enable AWS WAF. Create a Network Load Balancer. Put the web layer behind the load balancer and enable AWS WAF. Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS WAF. Create an Application Load Balancer. Put the web layer behind the load balancer and use AWS Shield Standard.
On its website, a business keeps a searchable inventory of items. The data is stored in a table with over ten million rows in an Amazon RDS for MySQL database. The database is stored on a 2 TB General Purpose SSD (gp2) volume. Every day, the company's website receives millions of changes to this data. The organization found that certain activities were taking ten seconds or more and concluded that the bottleneck was the database storage performance. Which option satisfies the performance requirement? Change the storage type to Provisioned IOPS SSD (io1). Change the instance to a memory-optimized instance class. Change the instance to a burstable performance DB instance class. Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication.
A business is prepared to use Amazon S3 to store sensitive data. Data must be encrypted at rest for compliance purposes. Auditing of encryption key use is required. Keys must be rotated each year. Which solution satisfies these parameters and is MOST operationally efficient? Server-side encryption with customer-provided keys (SSE-C) Server-side encryption with Amazon S3 managed keys (SSE-S3) Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with manual rotation Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with automatic rotation.
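To illustrate the SSE-KMS-with-automatic-rotation choice above, a boto3 sketch follows; the key description and bucket name are placeholders.

    import boto3

    kms = boto3.client("kms")
    s3 = boto3.client("s3")

    # Create a customer managed KMS key and turn on yearly automatic rotation.
    key_id = kms.create_key(Description="S3 data key")["KeyMetadata"]["KeyId"]
    kms.enable_key_rotation(KeyId=key_id)

    # Make SSE-KMS with that key the default encryption for the bucket.
    s3.put_bucket_encryption(
        Bucket="example-sensitive-data",            # placeholder bucket name
        ServerSideEncryptionConfiguration={
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": key_id,
                    }
                }
            ]
        },
    )

Each use of the KMS key is recorded in AWS CloudTrail, which covers the auditing requirement in the question.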
Management needs a summary of AWS billed items, broken down by user, as part of the budget planning process. Budgets for departments will be created using the data. A solutions architect must ascertain the most effective method of obtaining this report data. Which solution satisfies these criteria? Run a query with Amazon Athena to generate the report. Create a report in Cost Explorer and download the report. Access the bill details from the billing dashboard and download the bill. Modify a cost budget in AWS Budgets to alert with Amazon Simple Email Service (Amazon SES).
A solutions architect must create a system for archiving client case files. The files are critical corporate assets. The file count will increase over time. Multiple application servers running on Amazon EC2 instances must be able to access the files concurrently. There must be built-in redundancy in the solution. Which solution satisfies these criteria? Amazon Elastic File System (Amazon EFS) Amazon Elastic Block Store (Amazon EBS) Amazon S3 Glacier Deep Archive AWS Backup.
A business must give secure access to secret and sensitive data to its employees. The firm wants to ensure that only authorized individuals have access to the data. The data must be safely downloaded to the employees' devices. The files are kept on a Windows file server on-premises. However, as remote traffic increases, the file server's capacity is being depleted. Which solution will satisfy these criteria? Migrate the file server to an Amazon EC2 instance in a public subnet. Configure the security group to limit inbound traffic to the employees' IP addresses. Migrate the files to an Amazon FSx for Windows File Server file system. Integrate the Amazon FSx file system with the on-premises Active Directory. Configure AWS Client VPN. Migrate the files to Amazon S3, and create a private VPC endpoint. Create a signed URL to allow download. Migrate the files to Amazon S3, and create a public VPC endpoint. Allow employees to sign on with AWS Single Sign-On.
A legal company must communicate with the public. Hundreds of files must be publicly accessible. No one may modify or delete the files before a specified future date. Which solution satisfies these criteria MOST securely? Upload all files to an Amazon S3 bucket that is configured for static website hosting. Grant read-only IAM permissions to any AWS principals that access the S3 bucket until the designated date. Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a retention period in accordance with the designated date. Configure the S3 bucket for static website hosting. Set an S3 bucket policy to allow read-only access to the objects. Create a new Amazon S3 bucket with S3 Versioning enabled. Configure an event trigger to run an AWS Lambda function in case of object modification or deletion. Configure the Lambda function to replace the objects with the original versions from a private S3 bucket. Upload all files to an Amazon S3 bucket that is configured for static website hosting. Select the folder that contains the files. Use S3 Object Lock with a retention period in accordance with the designated date. Grant read-only IAM permissions to any AWS principals that access the S3 bucket.
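To show roughly how the S3 Object Lock options above work in practice, here is a boto3 sketch; the bucket name, object key, and retention date are hypothetical.

    import boto3
    from datetime import datetime, timezone

    s3 = boto3.client("s3")

    # Object Lock can only be enabled when the bucket is created; this also
    # enables S3 Versioning. (Assumes us-east-1; other Regions need
    # CreateBucketConfiguration.)
    s3.create_bucket(
        Bucket="example-public-filings",            # placeholder bucket name
        ObjectLockEnabledForBucket=True,
    )

    # Upload a file in compliance mode so no one, including the root user,
    # can overwrite or delete it before the retain-until date.
    s3.put_object(
        Bucket="example-public-filings",
        Key="filing-001.pdf",                       # placeholder object key
        Body=open("filing-001.pdf", "rb"),
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime(2031, 1, 1, tzinfo=timezone.utc),
    )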
A corporation connects its on-premises servers to AWS through a 10 Gbps AWS Direct Connect connection. The connection's workloads are crucial. The organization needs a disaster recovery approach that is as resilient as possible while minimizing the existing connection bandwidth. What recommendations should a solutions architect make? Set up a new Direct Connect connection in another AWS Region. Set up a new AWS managed VPN connection in another AWS Region. Set up two new Direct Connect connections: one in the current AWS Region and one in another Region. Set up two new AWS managed VPN connections: one in the current AWS Region and one in another Region.
A business has two virtual private clouds (VPCs) labeled Management and Production. The Management VPC connects to a single device in the data center using VPNs via a customer gateway. The Production VPC is connected to AWS through two AWS Direct Connect connections via a virtual private gateway. Both the Management and Production VPCs communicate with one another through a single VPC peering connection. What should a solutions architect do to minimize the architecture's single point of failure? Add a set of VPNs between the Management and Production VPCs Add a second virtual private gateway and attach it to the Management VPC. Add a second set of VPNs to the Management VPC from a second customer gateway device. Add a second VPC peering connection between the Management VPC and the Production VPC.
AWS hosts a company's near-real-time streaming application. While the data is being ingested, a job is being performed on it that takes 30 minutes to finish. Due to the massive volume of incoming data, the workload regularly faces significant latency. To optimize performance, a solutions architect must build a scalable and serverless system. Which actions should the solutions architect do in combination? (Select two.) Use Amazon Kinesis Data Firehose to ingest the data. Use AWS Lambda with AWS Step Functions to process the data. Use AWS Database Migration Service (AWS DMS) to ingest the data. Use Amazon EC2 instances in an Auto Scaling group to process the data. Use AWS Fargate with Amazon Elastic Container Service (Amazon ECS) to process the data.
Amazon Elastic Block Store (Amazon EBS) volumes are used by a media organization to store video material. A certain video file has gained popularity, and a significant number of individuals from all over the globe are now viewing it. As a consequence, costs have increased. Which step will result in a cost reduction without jeopardizing user accessibility? Change the EBS volume to Provisioned IOPS (PIOPS). Store the video in an Amazon S3 bucket and create an Amazon CloudFront distribution. Split the video into multiple, smaller segments so users are routed to the requested video segments only. Create an Amazon S3 bucket in each Region and upload the videos so users are routed to the nearest S3 bucket.
Amazon S3 buckets are used by an image hosting firm to store its objects. The firm wishes to prevent unintentional public disclosure of the items contained in the S3 buckets. All S3 items in the AWS account as a whole must remain private. Which solution will satisfy these criteria? Use Amazon GuardDuty to monitor S3 bucket policies. Create an automatic remediation action rule that uses an AWS Lambda function to remediate any change that makes the objects public. Use AWS Trusted Advisor to find publicly accessible S3 buckets. Configure email notifications in Trusted Advisor when a change is detected. Manually change the S3 bucket policy if it allows public access. Use AWS Resource Access Manager to find publicly accessible S3 buckets. Use Amazon Simple Notification Service (Amazon SNS) to invoke an AWS Lambda function when a change is detected. Deploy a Lambda function that programmatically remediates the change. Use the S3 Block Public Access feature on the account level. Use AWS Organizations to create a service control policy (SCP) that prevents IAM users from changing the setting. Apply the SCP to the account.
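For the account-level Block Public Access option above, a minimal boto3 sketch follows; the account ID is a placeholder.

    import boto3

    s3control = boto3.client("s3control")

    # Turn on all four Block Public Access settings for the whole account, so no
    # bucket policy or ACL can make objects public.
    s3control.put_public_access_block(
        AccountId="111122223333",                   # placeholder account ID
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

Pairing this with an Organizations service control policy that denies changes to the setting matches the last option in the question.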
A company's website stores transactional data on an Amazon RDS MySQL Multi-AZ DB instance. Other internal systems query this database instance to get data for batch processing. When internal systems request data from the RDS DB instance, the RDS DB instance drastically slows down. This has an effect on the website's read and write performance, resulting in poor response times for users. Which approach will result in an increase in website performance? Use an RDS PostgreSQL DB instance instead of a MySQL database. Use Amazon ElastiCache to cache the query responses for the website. Add an additional Availability Zone to the current RDS MySQL Multi-AZ DB instance. Add a read replica to the RDS DB instance and configure the internal systems to query the read replica.
Currently, a company's legacy application relies on an unencrypted Amazon RDS MySQL database with a single instance. All current and new data in this database must be encrypted to comply with new compliance standards. How is this to be achieved? Create an Amazon S3 bucket with server-side encryption enabled. Move all the data to Amazon S3. Delete the RDS instance. Enable RDS Multi-AZ mode with encryption at rest enabled. Perform a failover to the standby instance to delete the original instance. Take a Snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot. Create an RDS read replica with encryption at rest enabled. Promote the read replica to master and switch the application over to the new master. Delete the old RDS instance.
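To illustrate the snapshot-copy approach among the options above, here is a boto3 sketch; the instance, snapshot, and key identifiers are placeholders, and waiters between steps are omitted for brevity.

    import boto3

    rds = boto3.client("rds")

    # 1. Snapshot the existing unencrypted instance.
    rds.create_db_snapshot(
        DBInstanceIdentifier="legacy-mysql",            # placeholder instance ID
        DBSnapshotIdentifier="legacy-mysql-snap",
    )

    # 2. Copy the snapshot with a KMS key, which produces an encrypted copy.
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="legacy-mysql-snap",
        TargetDBSnapshotIdentifier="legacy-mysql-snap-encrypted",
        KmsKeyId="alias/aws/rds",                       # or a customer managed key
    )

    # 3. Restore a new, encrypted instance from the encrypted copy.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="legacy-mysql-encrypted",
        DBSnapshotIdentifier="legacy-mysql-snap-encrypted",
    )

Each step must finish before the next; in practice the application is then pointed at the new instance and the old one is deleted.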
A marketing firm uses an Amazon S3 bucket to store CSV data for statistical research. Permission is required for an application running on an Amazon EC2 instance to properly handle the CSV data stored in the S3 bucket. Which step will provide the MOST SECURE access to the S3 bucket for the EC2 instance? Attach a resource-based policy to the S3 bucket. Create an IAM user for the application with specific permissions to the S3 bucket. Associate an IAM role with least privilege permissions to the EC2 instance profile. Store AWS credentials directly on the EC2 instance for applications on the instance to use for API calls.
On a cluster of Amazon Linux EC2 instances, a business runs an application. The organization is required to store all application log files for seven years for compliance purposes. The log files will be evaluated by a reporting program, which will need concurrent access to all files. Which storage system best satisfies these criteria in terms of cost-effectiveness? Amazon Elastic Block Store (Amazon EBS) Amazon Elastic File System (Amazon EFS) Amazon EC2 instance store Amazon S3.
On a fleet of Amazon EC2 instances, a business provides a training site. The business predicts that when its new course, which includes hundreds of training videos on the web, is available in one week, it will be tremendously popular. What should a solutions architect do to ensure that the predicted server load is kept to a minimum? Store the videos in Amazon ElastiCache for Redis. Update the web servers to serve the videos using the ElastiCache API. Store the videos in Amazon Elastic File System (Amazon EFS). Create a user data script for the web servers to mount the EFS volume. Store the videos in an Amazon S3 bucket. Create an Amazon CloudFront distribution with an origin access identity (OAI) of that S3 bucket. Restrict Amazon S3 access to the OAI. Store the videos in an Amazon S3 bucket. Create an AWS Storage Gateway file gateway to access the S3 bucket. Create a user data script for the web servers to mount the file gateway.
A business chooses to move its three-tier web application from on-premises to the AWS Cloud. The new database must be able to scale storage capacity dynamically and perform table joins. Which AWS service satisfies these criteria? Amazon Aurora Amazon RDS for SQL Server Amazon DynamoDB Streams Amazon DynamoDB on-demand.
On a fleet of Amazon EC2 instances, a business runs a production application. The program takes data from an Amazon SQS queue and concurrently processes the messages. The message volume is variable, and traffic is often interrupted. This program should handle messages continuously and without interruption. Which option best fits these criteria in terms of cost-effectiveness? Use Spot Instances exclusively to handle the maximum capacity required. Use Reserved Instances exclusively to handle the maximum capacity required. Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional capacity. Use Reserved Instances for the baseline capacity and use On-Demand Instances to handle additional capacity.
A startup has developed an application that gathers data from Internet of Things (IoT) sensors installed on autos. Through Amazon Kinesis Data Firehose, the data is transmitted to and stored in Amazon S3. Each year, data generates billions of S3 objects. Each morning, the business retrains a set of machine learning (ML) models using data from the preceding 30 days. Four times a year, the corporation analyzes and trains other machine learning models using data from the preceding 12 months. The data must be accessible with a minimum of delay for a period of up to one year. Data must be preserved for archive reasons after one year. Which storage system best satisfies these criteria in terms of cost-effectiveness? Use the S3 Intelligent-Tiering storage class. Create an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 1 year. Use the S3 Intelligent-Tiering storage class. Configure S3 Intelligent-Tiering to automatically move objects to S3 Glacier Deep Archive after 1 year. Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 1 year. Use the S3 Standard storage class. Create an S3 Lifecycle policy to transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days, and then to S3 Glacier Deep Archive after 1 year.
A business requires data storage on Amazon S3. A compliance requirement stipulates that when objects are modified, their original state must be retained. Additionally, data older than five years should be kept for auditing purposes. What should a solutions architect recommend that requires the LEAST effort? Enable object-level versioning and S3 Object Lock in governance mode Enable object-level versioning and S3 Object Lock in compliance mode Enable object-level versioning. Enable a lifecycle policy to move data older than 5 years to S3 Glacier Deep Archive Enable object-level versioning. Enable a lifecycle policy to move data older than 5 years to S3 Standard-Infrequent Access (S3 Standard-IA).
Multiple Amazon EC2 instances are used to host an application. The program reads messages from an Amazon SQS queue, writes them to an Amazon RDS database, and then removes them from the queue. The RDS table sometimes contains duplicate entries. There are no duplicate messages in the SQS queue. How can a solutions architect guarantee that messages are handled just once? Use the CreateQueue API call to create a new queue. Use the AddPermission API call to add appropriate permissions. Use the ReceiveMessage API call to set an appropriate wait time. Use the ChangeMessageVisibility API call to increase the visibility timeout.
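Since the last option above refers to the ChangeMessageVisibility call, a short boto3 sketch may help; the queue URL and timeout value are assumptions.

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/orders"  # placeholder

    messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)

    for message in messages.get("Messages", []):
        # Extend the visibility timeout so no other consumer receives this message
        # while it is still being written to the RDS table.
        sqs.change_message_visibility(
            QueueUrl=queue_url,
            ReceiptHandle=message["ReceiptHandle"],
            VisibilityTimeout=300,          # seconds; assumed processing window
        )
        # ...process the message and write it to RDS, then remove it from the queue...
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])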
A corporation just announced the worldwide launch of their retail website. The website is hosted on numerous Amazon EC2 instances, which are routed via an Elastic Load Balancer. The instances are distributed across several Availability Zones in an Auto Scaling group. The firm wants to provide its clients with customized content depending on the device from which they view the website. Which steps should a solutions architect perform in combination to satisfy these requirements? (Select two.) Configure Amazon CloudFront to cache multiple versions of the content. Configure a host header in a Network Load Balancer to forward traffic to different instances. Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header. Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up host-based routing to different EC2 instances. Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up path-based routing to different EC2 instances.
To facilitate experimentation and agility, a business enables developers to link current IAM policies to existing IAM roles. The security operations team, on the other hand, is worried that the developers may attach the current administrator policy, allowing them to bypass any other security rules. What approach should a solutions architect use in dealing with this issue? Create an Amazon SNS topic to send an alert every time a developer creates a new policy. Use service control policies to disable IAM activity across all account in the organizational unit. Prevent the developers from attaching any policies and assign all IAM duties to the security operations team. Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy.
A newly formed company developed a three-tiered web application. The front end is comprised entirely of static information. Microservices form the application layer. User data is kept in the form of JSON documents that must be accessible with a minimum of delay. The firm anticipates minimal regular traffic in the first year, with monthly traffic spikes. The startup team's operational overhead expenditures must be kept to a minimum. What should a solutions architect suggest as a means of achieving this? Use Amazon S3 static website hosting to store and serve the front end. Use AWS Elastic Beanstalk for the application layer. Use Amazon DynamoDB to store user data. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon Elastic Kubernetes Service (Amazon EKS) for the application layer. Use Amazon DynamoDB to store user data. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon API Gateway and AWS Lambda functions for the application layer. Use Amazon DynamoDB to store user data. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon API Gateway and AWS Lambda functions for the application layer. Use Amazon RDS with read replicas to store user data.
Amazon Elastic Container Service (Amazon ECS) container instances are used to install an ecommerce website's web application behind an Application Load Balancer (ALB). The website slows down and availability is decreased during moments of heavy usage. A solutions architect utilizes Amazon CloudWatch alarms to be notified when an availability problem occurs, allowing them to scale out resources. The management of the business wants a system that automatically reacts to such circumstances. Which solution satisfies these criteria? Set up AWS Auto Scaling to scale out the ECS service when there are timeouts on the ALB. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high. Set up AWS Auto Scaling to scale out the ECS service when the ALB CPU utilization is too high. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high. Set up AWS Auto Scaling to scale out the ECS service when the service's CPU utilization is too high. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high. Set up AWS Auto Scaling to scale out the ECS service when the ALB target group CPU utilization is too high. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high.
A business uses Site-to-Site VPN connections to provide safe access to AWS Cloud services from on-premises. Users are experiencing slower VPN connectivity as a result of increased traffic through the VPN connections to the Amazon EC2 instances. Which approach will result in an increase in VPN throughput? Implement multiple customer gateways for the same network to scale the throughput. Use a transit gateway with equal cost multipath routing and add additional VPN tunnels. Configure a virtual private gateway with equal cost multipath routing and multiple channels. Increase the number of tunnels in the VPN configuration to scale the throughput beyond the default limit.
On Amazon EC2 Linux instances, a business hosts a website. Several of the instances are failing. Troubleshooting indicates that the failed instances lack swap space. The operations team's lead needs a monitoring solution for this. What recommendations should a solutions architect make? Configure an Amazon CloudWatch SwapUsage metric dimension. Monitor the SwapUsage dimension in the EC2 metrics in CloudWatch. Use EC2 metadata to collect information, then publish it to Amazon CloudWatch custom metrics. Monitor SwapUsage metrics in CloudWatch. Install an Amazon CloudWatch agent on the instances. Run an appropriate script on a set schedule. Monitor SwapUtilization metrics in CloudWatch. Enable detailed monitoring in the EC2 console. Create an Amazon CloudWatch SwapUtilization custom metric. Monitor SwapUtilization metrics in CloudWatch.
AWS is used by a business to run an online transaction processing (OLTP) workload. This workload is deployed in a Multi-AZ environment using an unencrypted Amazon RDS database instance. This instance's database is backed up daily. What should a solutions architect do going forward to guarantee that the database and snapshots are always encrypted? Encrypt a copy of the latest DB snapshot. Replace the existing DB instance by restoring the encrypted snapshot. Create a new encrypted Amazon Elastic Block Store (Amazon EBS) volume and copy the snapshots to it. Enable encryption on the DB instance. Copy the snapshots and enable encryption using AWS Key Management Service (AWS KMS). Restore the encrypted snapshot to an existing DB instance. Copy the snapshots to an Amazon S3 bucket that is encrypted using server-side encryption with AWS Key Management Service (AWS KMS) managed keys (SSE-KMS).
A business operates an application that collects data from its consumers through various Amazon EC2 instances. After processing, the data is uploaded to Amazon S3 for long-term storage. A study of the application reveals that the EC2 instances were inactive for extended periods of time. A solutions architect must provide a system that maximizes usage while minimizing expenditures. Which solution satisfies these criteria? Use Amazon EC2 in an Auto Scaling group with On-Demand instances. Build the application to use Amazon Lightsail with On-Demand Instances. Create an Amazon CloudWatch cron job to automatically stop the EC2 instances when there is no activity. Redesign the application to use an event-driven design with Amazon Simple Queue Service (Amazon SQS) and AWS Lambda.
A business wishes to migrate from many independent Amazon Web Services accounts to a consolidated, multi-account design. The organization intends to generate a large number of new AWS accounts for its business divisions. The organization must use a single corporate directory service to authenticate access to these AWS accounts. Which steps should a solutions architect advocate in order to satisfy these requirements? (Select two.) Create a new organization in AWS Organizations with all features turned on. Create the new AWS accounts in the organization. Set up an Amazon Cognito identity pool. Configure AWS Single Sign-On to accept Amazon Cognito authentication. Configure a service control policy (SCP) to manage the AWS accounts. Add AWS Single Sign-On to AWS Directory Service. Create a new organization in AWS Organizations. Configure the organization's authentication mechanism to use AWS Directory Service directly. Set up AWS Single Sign-On (AWS SSO) in the organization. Configure AWS SSO, and integrate it with the company's corporate directory service.
A solutions architect is developing a daily data processing task that will take up to two hours to finish. If the task is stopped, it must be restarted from scratch. What is the MOST cost-effective way for the solutions architect to solve this issue? Create a script that runs locally on an Amazon EC2 Reserved Instance that is triggered by a cron job. Create an AWS Lambda function triggered by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event. Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event. Use an Amazon Elastic Container Service (Amazon ECS) task running on Amazon EC2 triggered by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event.
A business intends to use AWS to host a survey website. The firm anticipates a high volume of traffic. As a consequence of this traffic, the database is updated asynchronously. The organization wants to avoid dropping writes to the database housed on AWS. How should the business's application be written to handle these database requests? Configure the application to publish to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the database to the SNS topic. Configure the application to subscribe to an Amazon Simple Notification Service (Amazon SNS) topic. Publish the database updates to the SNS topic. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues to queue the database connection until the database has resources to write the data. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues for capturing the writes and draining the queue as each write is made to the database.
On a huge fleet of Amazon EC2 instances, a business runs an application. The application reads and writes items to an Amazon DynamoDB table. The DynamoDB table grows in size steadily, yet the application requires only data from the previous 30 days. The organization needs a solution that is both cost-effective and quick to implement. Which solution satisfies these criteria? Use an AWS CloudFormation template to deploy the complete solution. Redeploy the CloudFormation stack every 30 days, and delete the original stack. Use an EC2 instance that runs a monitoring application from AWS Marketplace. Configure the monitoring application to use Amazon DynamoDB Streams to store the timestamp when a new item is created in the table. Use a script that runs on the EC2 instance to delete items that have a timestamp that is older than 30 days. Configure Amazon DynamoDB Streams to invoke an AWS Lambda function when a new item is created in the table. Configure the Lambda function to delete items in the table that are older than 30 days. Extend the application to add an attribute that has a value of the current timestamp plus 30 days to each new item that is created in the table. Configure DynamoDB to use the attribute as the TTL attribute.
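The TTL option at the end of the question above can be set up roughly as follows; the table name, key, and attribute name are assumptions.

    import time
    import boto3

    dynamodb = boto3.client("dynamodb")

    # Tell DynamoDB which numeric attribute holds the expiration epoch time.
    dynamodb.update_time_to_live(
        TableName="app-items",                          # placeholder table name
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
    )

    # When the application writes an item, stamp it to expire 30 days out;
    # DynamoDB then deletes expired items in the background at no extra cost.
    dynamodb.put_item(
        TableName="app-items",
        Item={
            "item_id": {"S": "example-123"},            # placeholder key
            "expires_at": {"N": str(int(time.time()) + 30 * 24 * 3600)},
        },
    )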
Previously, a corporation moved their data warehousing solution to AWS. Additionally, the firm has an AWS Direct Connect connection. Through the use of a visualization tool, users in the corporate office may query the data warehouse. Each query answered by the data warehouse is on average 50 MB in size, whereas each webpage supplied by the visualization tool is around 500 KB in size. The data warehouse does not cache the result sets it returns. Which approach results in the lowest outgoing data transfer costs for the company? Host the visualization tool on premises and query the data warehouse directly over the internet. Host the visualization tool in the same AWS Region as the data warehouse. Access it over the internet. Host the visualization tool on premises and query the data warehouse directly over a Direct Connect connection at a location in the same AWS Region. Host the visualization tool in the same AWS Region as the data warehouse and access it over a DirectConnect connection at a location in the same Region.
A business is developing an application that is composed of many microservices. The organization has chosen to deploy its software on AWS through container technology. The business need a solution that requires little ongoing work for maintenance and growth. Additional infrastructure cannot be managed by the business. Which steps should a solutions architect perform in combination to satisfy these requirements? (Select two.) Deploy an Amazon Elastic Container Service (Amazon ECS) cluster. Deploy the Kubernetes control plane on Amazon EC2 instances that span multiple Availability Zones. Deploy an Amazon Elastic Container Service (Amazon ECS) service with an Amazon EC2 launch type. Specify a desired task number level of greater than or equal to 2. Deploy an Amazon Elastic Container Service (Amazon ECS) service with a Fargate launch type. Specify a desired task number level of greater than or equal to 2. Deploy Kubernetes worker nodes on Amazon EC2 instances that span multiple Availability Zones. Create a deployment that specifies two or more replicas for each microservice.
The following policy was developed by an Amazon EC2 administrator and assigned to an IAM group including numerous users: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "ec2:TerminateInstances", "Resource": "*", "Condition": { "IpAddress": { "aws:SourceIp": "10.100.100.0/24" } } }, { "Effect": "Deny", "Action": "ec2:*", "Resource": "*", "Condition": { "StringNotEquals": { "ec2:Region": "us-east-1" } } } ] } What impact does this policy have? Users can terminate an EC2 instance in any AWS Region except us-east-1. Users can terminate an EC2 instance with the IP address 10.100.100.1 in the us-east-1 Region. Users can terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254. Users cannot terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.
The web application of a business stores its data on an Amazon RDS PostgreSQL database instance. Accountants run large queries at the start of each month during the financial closing period, which degrades the database's performance owing to excessive utilization. The business wants to reduce the effect of reporting on the web application. What should a solutions architect do to minimize the impact on the database with the LEAST amount of effort? Create a read replica and direct reporting traffic to the replica. Create a Multi-AZ database and direct reporting traffic to the standby. Create a cross-Region read replica and direct reporting traffic to the replica. Create an Amazon Redshift database and direct reporting traffic to the Amazon Redshift database.
A business has implemented a MySQL database on Amazon RDS. The database support team is reporting delayed reads on the DB instance as a result of the increased transactions and advises installing a read replica. Which activities should a solutions architect do prior to deploying this change? (Select two.) Enable binlog replication on the RDS primary node. Choose a failover priority for the source DB instance. Allow long-running transactions to complete on the source DB instance. Create a global table and specify the AWS Regions where the table will be available. Enable automatic backups on the source instance by setting the backup retention period to a value other than 0.
Users may get past performance reports from a company's website. The website requires a solution that can scale to meet the company's worldwide website requirements. The solution should be cost-effective, minimize infrastructure resource provisioning, and deliver the fastest possible response time. Which combination of technologies might a solutions architect propose in order to satisfy these requirements? Amazon CloudFront and Amazon S3 AWS Lambda and Amazon DynamoDB Application Load Balancer with Amazon EC2 Auto Scaling Amazon Route 53 with internal Application Load Balancers.
A solutions architect is developing a hybrid application on the Amazon Web Services (AWS) cloud. AWS Direct Connect (DX) will be used to connect the on-premises data center to AWS. Between AWS and the on-premises data center, the application connection must be highly resilient. Which DX setup should be used to satisfy these criteria? Configure a DX connection with a VPN on top of it. Configure DX connections at multiple DX locations. Configure a DX connection using the most reliable DX partner. Configure multiple virtual interfaces on top of a DX connection.
A financial institution uses AWS to host a web application. The program retrieves current stock prices using an Amazon API Gateway Regional API endpoint. The security staff at the organization has detected an upsurge in API queries. The security team is worried that HTTP flood attacks may result in the application being rendered inoperable. A solutions architect must create a defense against this form of assault. Which method satisfies these criteria with the LEAST amount of operational overhead? Create an Amazon CloudFront distribution in front of the API Gateway Regional API endpoint with a maximum TTL of 24 hours. Create a Regional AWS WAF web ACL with a rate-based rule. Associate the web ACL with the API Gateway stage. Use Amazon CloudWatch metrics to monitor the Count metric and alert the security team when the predefined rate is reached. Create an Amazon CloudFront distribution with Lambda@Edge in front of the API Gateway Regional API endpoint. Create an AWS Lambda function to block requests from IP addresses that exceed the predefined rate.
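For the rate-based rule option above, here is a boto3 sketch of a Regional web ACL; the request limit, ACL name, and API Gateway stage ARN are hypothetical.

    import boto3

    wafv2 = boto3.client("wafv2")

    visibility = {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ApiRateLimit",
    }

    # Block any client IP that exceeds the request limit within a 5-minute window.
    acl = wafv2.create_web_acl(
        Name="api-flood-protection",
        Scope="REGIONAL",
        DefaultAction={"Allow": {}},
        Rules=[
            {
                "Name": "rate-limit",
                "Priority": 0,
                "Statement": {
                    "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
                },
                "Action": {"Block": {}},
                "VisibilityConfig": visibility,
            }
        ],
        VisibilityConfig=visibility,
    )

    # Attach the web ACL to the API Gateway stage (ARN is a placeholder).
    wafv2.associate_web_acl(
        WebACLArn=acl["Summary"]["ARN"],
        ResourceArn="arn:aws:apigateway:us-east-1::/restapis/abc123/stages/prod",
    )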
A business wishes to automate the evaluation of the security of its Amazon EC2 instances. The organization must verify and show that the development process adheres to security and compliance requirements. What actions should a solutions architect take to ensure that these criteria are met? Use Amazon Macie to automatically discover, classify and protect the EC2 instances. Use Amazon GuardDuty to publish Amazon Simple Notification Service (Amazon SNS) notifications. Use Amazon Inspector with Amazon CloudWatch to publish Amazon Simple Notification Service (Amazon SNS) notifications. Use Amazon EventBridge (Amazon CloudWatch Events) to detect and react to changes in the status of AWS Trusted Advisor checks.
On Amazon EC2, a corporation is operating a highly secure application that is backed up by an Amazon RDS database. All personally identifiable information (PII) must be encrypted at rest to comply with compliance standards. Which solution should a solutions architect propose in order to achieve this need with the MINIMUM number of infrastructure changes? Deploy AWS Certificate Manager to generate certificates. Use the certificates to encrypt the database volume. Deploy AWS CloudHSM, generate encryption keys, and use the customer master key (CMK) to encrypt database volumes. Configure SSL encryption using AWS Key Management Service customer master keys (AWS KMS CMKs) to encrypt database volumes. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.
The following IAM policy has been established by a solutions architect. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "lambda:*" ], "Resource": "*" }, { "Effect": "Deny", "Action": [ "lambda:CreateFunction", "lambda:DeleteFunction" ], "Resource": "*", "Condition": { "IpAddress": { "aws:SourceIp": "220.100.16.0/20" } } } ] } Which actions will the policy permit? An AWS Lambda function can be deleted from any network. An AWS Lambda function can be created from any network. An AWS Lambda function can be deleted from the 100.220.0.0/20 network. An AWS Lambda function can be deleted from the 220.100.16.0/20 network.
On Amazon Aurora, a business operates a database that is idle every night. An application that performs heavy reads on the database experiences performance issues when user traffic surges in the early morning hours, and it encounters timeout errors when reading from the database during these peak hours. Because the organization does not have a dedicated operations team, it needs an automated solution to address these performance concerns. Which actions should a solutions architect take to ensure that the database automatically adjusts to the increasing read load? (Select two.) Migrate the database to Aurora Serverless. Increase the instance size of the Aurora database. Configure Aurora Auto Scaling with Aurora Replicas. Migrate the database to an Aurora multi-master cluster. Migrate the database to an Amazon RDS for MySQL Multi-AZ deployment.
An Amazon EC2 instance-based application requires access to an Amazon DynamoDB database. The EC2 instance and DynamoDB table are both managed by the same AWS account. Permissions must be configured by a solutions architect. Which approach will provide the EC2 instance least privilege access to the DynamoDB table? Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Create an instance profile to assign this IAM role to the EC2 instance. Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Add the EC2 instance to the trust relationship policy document to allow it to assume the role. Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Store the credentials in an Amazon S3 bucket and read them from within the application code directly. Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Ensure that the application stores the IAM credentials securely on local storage and uses them to make the DynamoDB calls.
A business uses Amazon EC2 instances to operate an API-based inventory reporting application. The program makes use of an Amazon DynamoDB database to store data. The distribution centers of the corporation use an on-premises shipping application that communicates with an API to update inventory prior to generating shipping labels. Each day, the organization has seen application outages, resulting in missed transactions. What should a solutions architect propose to increase the resilience of an application? Modify the shipping application to write to a local database. Modify the application APIs to run serverless using AWS Lambda Configure Amazon API Gateway to call the EC2 inventory application APIs. Modify the application to send inventory updates using Amazon Simple Queue Service (Amazon SQS).
Amazon EC2 instances on private subnets are used to execute an application. The application requires access to a table in Amazon DynamoDB. What is the MOST SECURE method of accessing the table without allowing traffic to exit the AWS network? Use a VPC endpoint for DynamoDB. Use a NAT gateway in a public subnet. Use a NAT instance in a private subnet. Use the internet gateway attached to the VPC.
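The VPC endpoint option above can be created as in this boto3 sketch; the VPC ID, route table ID, and the Region in the service name are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Gateway endpoint for DynamoDB: traffic from the private subnets reaches the
    # table over the AWS network, with no internet gateway or NAT path required.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0abc1234",                                   # placeholder VPC ID
        ServiceName="com.amazonaws.us-east-1.dynamodb",         # match your Region
        RouteTableIds=["rtb-0def5678"],                         # private subnet route table
    )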
On a single Amazon EC2 instance, a business runs an ASP.NET MVC application. Due to a recent spike in application usage, users are experiencing poor response times during lunch hours. The firm must address this issue with the least amount of configuration possible. What recommendations should a solutions architect make to satisfy these requirements? Move the application to AWS Elastic Beanstalk. Configure load-based auto scaling and time-based scaling to handle scaling during lunch hours. Move the application to Amazon Elastic Container Service (Amazon ECS). Create an AWS Lambda function to handle scaling during lunch hours. Move the application to Amazon Elastic Container Service (Amazon ECS). Configure scheduled scaling for AWS Application Auto Scaling during lunch hours. Move the application to AWS Elastic Beanstalk. Configure load-based auto scaling, and create an AWS Lambda function to handle scaling during lunch hours.
A business is in the process of migrating its on-premises application to AWS. The application comprises application servers and a Microsoft SQL Server database. The database cannot be migrated to a different engine because the application's .NET code uses SQL Server features. The company's goal is to maximize availability while decreasing operational and administration costs. What actions should a solutions architect take to achieve this? Install SQL Server on Amazon EC2 in a Multi-AZ deployment. Migrate the data to Amazon RDS for SQL Server in a Multi-AZ deployment. Deploy the database on Amazon RDS for SQL Server with Multi-AZ Replicas. Migrate the data to Amazon RDS for SQL Server in a cross-Region Multi-AZ deployment.
Amazon Redshift is being used by a business to do analytics and produce customer reports. The corporation just obtained an extra 50 terabytes of demographic data on its customers. The data is saved in Amazon S3 in .csv files. The organization needs a solution that efficiently merges data and visualizes the findings. What recommendations should a solutions architect make to satisfy these requirements? Use Amazon Redshift Spectrum to query the data in Amazon S3 directly and join that data with the existing data in Amazon Redshift. Use Amazon QuickSight to build the visualizations. Use Amazon Athena to query the data in Amazon S3. Use Amazon QuickSight to join the data from Athena with the existing data in Amazon Redshift and to build the visualizations. Increase the size of the Amazon Redshift cluster, and load the data from Amazon S3. Use Amazon EMR Notebooks to query the data and build the visualizations in Amazon Redshift. Export the data from the Amazon Redshift cluster into Apache Parquet files in Amazon S3. Use Amazon Elasticsearch Service (Amazon ES) to query the data. Use Kibana to visualize the results.
Each month, a business stores 200 GB of data in Amazon S3. At the conclusion of each month, the corporation must analyze this data to calculate the number of items sold in each sales area during the preceding month. Which analytics approach is the MOST cost-effective option for the business? Create an Amazon Elasticsearch Service (Amazon ES) cluster. Query the data in Amazon ES. Visualize the data by using Kibana. Create a table in the AWS Glue Data Catalog. Query the data in Amazon S3 by using Amazon Athena. Visualize the data in Amazon QuickSight. Create an Amazon EMR cluster. Query the data by using Amazon EMR, and store the results in Amazon S3. Visualize the data in Amazon QuickSight. Create an Amazon Redshift cluster. Query the data in Amazon Redshift, and upload the results to Amazon S3. Visualize the data in Amazon QuickSight.
A business's data layer is powered by Amazon RDS for PostgreSQL databases. The organization must adopt database password rotation. Which option satisfies this criterion with the LEAST amount of operational overhead? Store the password in AWS Secrets Manager. Enable automatic rotation on the secret. Store the password in AWS Systems Manager Parameter Store. Enable automatic rotation on the parameter. Store the password in AWS Systems Manager Parameter Store. Write an AWS Lambda function that rotates the password. Store the password in AWS Key Management Service (AWS KMS). Enable automatic rotation on the customer master key (CMK).
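A short boto3 sketch of enabling automatic rotation in AWS Secrets Manager, assuming the database secret already exists and a rotation Lambda function (for example, one built from the AWS-provided RDS PostgreSQL rotation template) is available; the secret name and ARN are placeholders.

import boto3

secrets = boto3.client("secretsmanager")

# Turn on automatic rotation every 30 days; Secrets Manager invokes the
# rotation Lambda function to change the password in both the secret and the database.
secrets.rotate_secret(
    SecretId="prod/app/postgres",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerRotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)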
A business intends to transfer a TCP-based application onto the company's virtual private cloud (VPC). The program is available to the public over an unsupported TCP port via a physical device located in the company's data center. This public endpoint has a latency of less than 3 milliseconds and can handle up to 3 million requests per second. The organization needs the new public endpoint in AWS to function at the same level of performance. What solution architecture approach should be recommended to satisfy this requirement? Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible over the TCP port that the application requires. Deploy an Application Load Balancer (ALB). Configure the ALB to be publicly accessible over the TCP port that the application requires. Deploy an Amazon CloudFront distribution that listens on the TCP port that the application requires. Use an Application Load Balancer as the origin. Deploy an Amazon API Gateway API that is configured with the TCP port that the application requires. Configure AWS Lambda functions with provisioned concurrency to process the requests.
Within the same AWS account, a firm has two VPCs situated in the us-west-2 Region. The business must permit network communication between these VPCs. Each month, about 500 GB of data will be transferred between the VPCs. Which approach is the MOST cost-effective for connecting these VPCs? Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to use the transit gateway for inter-VPC communication. Implement an AWS Site-to-Site VPN tunnel between the VPCs. Update the route tables of each VPC to use the VPN tunnel for inter-VPC communication. Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC peering connection for inter-VPC communication. Set up a 1 GB AWS Direct Connect connection between the VPCs. Update the route tables of each VPC to use the Direct Connect connection for inter-VPC communication.
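A minimal boto3 sketch of the VPC peering option, with hypothetical VPC, route table, and CIDR values; because both VPCs are in the same account and Region, the peering connection can be accepted immediately.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Request and accept the peering connection between the two VPCs (placeholder IDs).
peering = ec2.create_vpc_peering_connection(VpcId="vpc-aaaa1111",
                                            PeerVpcId="vpc-bbbb2222")
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each VPC's route table needs a route to the other VPC's CIDR block.
ec2.create_route(RouteTableId="rtb-aaaa1111", DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-bbbb2222", DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=pcx_id)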
A business's production workload is hosted on an Amazon Aurora MySQL DB cluster comprised of six Aurora Replicas. The corporation wishes to automate the distribution of near-real-time reporting requests from one of its departments among three Aurora Replicas. These three copies are configured differently from the rest of the DB cluster in terms of computation and memory. Which solution satisfies these criteria? Create and use a custom endpoint for the workload. Create a three-node cluster clone and use the reader endpoint. Use any of the instance endpoints for the selected three nodes. Use the reader endpoint to automatically distribute the read-only workload.
A business's on-premises data center has reached its storage limit. The organization wishes to shift its storage system to AWS while keeping bandwidth costs as low as possible. The solution must enable rapid and cost-free data retrieval. How are these stipulations to be met? Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval capacity for the workload. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3. Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS Storage Gateway to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.
Within a month of being bought, a newly acquired firm is needed to establish its own infrastructure on AWS and transfer various apps to the cloud. Each application requires the transmission of around 50 TB of data. Following the transfer, this firm and its parent company will need secure network connection with constant throughput between its data centers and apps. A solutions architect must guarantee that data transfer occurs just once and that network connection is maintained. Which solution will satisfy these criteria? AWS Direct Connect for both the initial transfer and ongoing connectivity. AWS Site-to-Site VPN for both the initial transfer and ongoing connectivity. AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity. AWS Snowball for the initial transfer and AWS Site-to-Site VPN for ongoing connectivity.
A solutions architect must create a solution that retrieves data every two minutes from an internet-based third-party web service. Each data retrieval is performed using a Python script in less than 100 milliseconds. The response is a JSON object of less than 1 KB in size containing sensor data. The solutions architect must store both the JSON object and the timestamp. Which approach is the most cost-effective in meeting these requirements? Deploy an Amazon EC2 instance with a Linux operating system. Configure a cron job to run the script every 2 minutes. Extend the script to store the JSON object along with the timestamp in a MySQL database that is hosted on an Amazon RDS DB instance. Deploy an Amazon EC2 instance with a Linux operating system to extend the script to run in an infinite loop every 2 minutes. Store the JSON object along with the timestamp in an Amazon DynamoDB table that uses the timestamp as the primary key. Run the script on the EC2 instance. Deploy an AWS Lambda function to extend the script to store the JSON object along with the timestamp in an Amazon DynamoDB table that uses the timestamp as the primary key. Use an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that is initiated every 2 minutes to invoke the Lambda function. Deploy an AWS Lambda function to extend the script to run in an infinite loop every 2 minutes. Store the JSON object along with the timestamp in an Amazon DynamoDB table that uses the timestamp as the primary key. Ensure that the script is called by the handler function that is configured for the Lambda function.
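As a point of reference, a small sketch of a Lambda handler that stores the JSON response plus a timestamp in DynamoDB; the table name and service URL are placeholders, and an EventBridge (CloudWatch Events) rule with a rate(2 minutes) schedule would invoke the handler.

import datetime
import json
import urllib.request

import boto3

# Hypothetical table whose primary key is the "timestamp" attribute.
table = boto3.resource("dynamodb").Table("SensorReadings")

def handler(event, context):
    # Placeholder endpoint for the third-party web service.
    with urllib.request.urlopen("https://example.com/sensor") as resp:
        payload = json.loads(resp.read())
    table.put_item(Item={
        "timestamp": datetime.datetime.utcnow().isoformat(),  # primary key
        "data": json.dumps(payload),                           # raw JSON object
    })

# Scheduling happens outside the function, e.g. an EventBridge rule created once with
# events.put_rule(Name="poll-sensor", ScheduleExpression="rate(2 minutes)")
# and the Lambda function registered as the rule's target.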
A business does not currently have any file sharing services. A new project needs file storage that can be mounted as a disk for on-premises desktop computers. Before users can access the storage, the file server must authenticate them against an Active Directory domain. Which service enables Active Directory users to deploy storage on their workstations as a drive? Amazon S3 Glacier AWS DataSync AWS Snowball Edge AWS Storage Gateway.
A corporation with an on-premises application is transitioning to AWS to boost the flexibility and availability of the application. The present design makes considerable use of a Microsoft SQL Server database. The firm wants to investigate other database solutions and, if necessary, migrate database engines. The development team makes a complete copy of the production database every four hours in order to create a test database. Users experience latency during this time period. What database should a solutions architect propose as a replacement? Use Amazon Aurora with Multi-AZ Aurora Replicas and restore from mysqldump for the test database. Use Amazon Aurora with Multi-AZ Aurora Replicas and restore snapshots from Amazon RDS for the test database. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas, and use the standby instance for the test database. Use Amazon RDS for SQL Server with a Multi-AZ deployment and read replicas, and restore snapshots from RDS for the test database.
On Amazon EC2 instances, a business runs an application. The volume of traffic to the webpage grows significantly during business hours and then falls. The CPU usage of an Amazon EC2 instance is a good measure of the application's end-user demand. The organization has specified a minimum group size of two EC2 instances and a maximum group size of ten EC2 instances for an Auto Scaling group. The firm is worried that the Auto Scaling group's existing scaling policy may be incorrect. The organization must prevent excessive EC2 instance provisioning and paying unneeded fees. What recommendations should a solutions architect make to satisfy these requirements? Configure Amazon EC2 Auto Scaling to use a scheduled scaling plan and launch an additional 8 EC2 instances during business hours. Configure AWS Auto Scaling to use a scaling plan that enables predictive scaling. Configure predictive scaling with a scaling mode of forecast and scale, and to enforce the maximum capacity setting during scaling. Configure a step scaling policy to add 4 EC2 instances at 50% CPU utilization and add another 4 EC2 instances at 90% CPU utilization. Configure scale-in policies to perform the reverse and remove EC2 instances based on the two values. Configure AWS Auto Scaling to have a desired capacity of 5 EC2 instances, and disable any existing scaling policies. Monitor the CPU utilization metric for 1 week. Then create dynamic scaling policies that are based on the observed values.
A business has launched a mobile multiplayer game. The game demands real-time monitoring of participants' latitude and longitude positions. The game's data storage must be capable of quick updates and location retrieval. The game stores location data on an Amazon RDS for PostgreSQL DB instance with read replicas. The database is unable to sustain the speed required for reading and writing changes during high use times. The game's user base is rapidly growing. What should a solutions architect do to optimize the data tier's performance? Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled. Migrate from Amazon RDS to Amazon Elasticsearch Service (Amazon ES) with Kibana. Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game to use DAX. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis.
A company's on-premises infrastructure and AWS need a secure connection. This connection does not need a large quantity of bandwidth and will handle only a limited amount of traffic. The link should be established immediately. Which method is the CHEAPEST way to establish this type of connection? Implement a client VPN. Implement AWS Direct Connect. Implement a bastion host on Amazon EC2. Implement an AWS Site-to-Site VPN connection.
A business is developing a web-based application that will operate on Amazon EC2 instances distributed across several Availability Zones. The online application will enable access to a collection of over 900 TB of text content. The corporation expects times of heavy demand for the online application. A solutions architect must guarantee that the text document storage component can scale to meet the application's demand at all times. The corporation is concerned about the solution's total cost. Which storage system best satisfies these criteria in terms of cost-effectiveness? Amazon Elastic Block Store (Amazon EBS) Amazon Elastic File System (Amazon EFS) Amazon Elasticsearch Service (Amazon ES) Amazon S3.
A business is using a tape backup system to store critical application data offsite. Daily data volume is in the neighborhood of 50 TB. For regulatory requirements, the firm must maintain backups for seven years. Backups are infrequently accessed, and a week's notice is normally required before restoring a backup. The organization is now investigating a cloud-based solution in order to cut storage expenses and the operational load associated with tape management. Additionally, the organization wants to ensure that the move from tape backups to the cloud is as seamless as possible. Which storage option is the CHEAPEST? Use AWS Storage Gateway to back up to Amazon S3 Glacier Deep Archive. Use AWS Snowball Edge to directly integrate the backups with Amazon S3 Glacier. Copy the backup data to Amazon S3 and create a lifecycle policy to move the data to Amazon S3 Glacier. Use AWS Storage Gateway to back up to Amazon S3 and create a lifecycle policy to move the backup to Amazon S3 Glacier.
A development team must have a website that is accessible to other development teams. HTML, CSS, client-side JavaScript, and graphics comprise the website's content. Which form of website hosting is the MOST cost-effective? Containerize the website and host it in AWS Fargate. Create an Amazon S3 bucket and host the website there. Deploy a web server on an Amazon EC2 instance to host the website. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.
A business's data warehouse is powered by Amazon Redshift. The firm wants to ensure the durability of its data in the event of a component failure. What recommendations should a solutions architect make? Enable concurrency scaling. Enable cross-Region snapshots. Increase the data retention period. Deploy Amazon Redshift in Multi-AZ.
A business offers its customers an API that automates tax calculations based on item pricing. During the Christmas season, the firm receives an increased volume of queries, resulting in delayed response times. A solutions architect must create a scalable and elastic system. What should the solutions architect do to achieve this? Provide an API hosted on an Amazon EC2 instance. The EC2 instance performs the required computations when the API request is made. Design a REST API using Amazon API Gateway that accepts the item names. API Gateway passes item names to AWS Lambda for tax computations. Create an Application Load Balancer that has two Amazon EC2 instances behind it. The EC2 instances will compute the tax on the received item names. Design a REST API using Amazon API Gateway that connects with an API hosted on an Amazon EC2 instance. API Gateway accepts and passes the item names to the EC2 instance for tax computations.
A business is operating a worldwide application. Users upload various videos, which are subsequently combined into a single video file. The program receives uploads from users through a single Amazon S3 bucket in the us-east-1 Region. The same S3 bucket also serves as the download point for the generated video file. The finished video file is around 250 GB in size. The organization requires a solution that enables quicker uploads and downloads of video files stored in Amazon S3. The corporation will charge a monthly fee to consumers who choose to pay for the faster speed. What actions should a solutions architect take to ensure that these criteria are met? Enable AWS Global Accelerator for the S3 endpoint. Adjust the application's upload and download links to use the Global Accelerator S3 endpoint for users who have a subscription. Enable S3 Cross-Region Replication to S3 buckets in all other AWS Regions. Use an Amazon Route 53 geolocation routing policy to route S3 requests based on the location of users who have a subscription. Create an Amazon CloudFront distribution and use the S3 bucket in us-east-1 as an origin. Adjust the application to use the CloudFront URL as the upload and download links for users who have a subscription. Enable S3 Transfer Acceleration for the S3 bucket in us-east-1. Configure the application to use the bucket's S3-accelerate endpoint domain name for the upload and download links for users who have a subscription.
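A short boto3 sketch of the S3 Transfer Acceleration option, with a placeholder bucket name and object key; subscribed users would be handed the accelerate endpoint for uploads and downloads.

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Enable Transfer Acceleration on the existing bucket (name is a placeholder).
s3.put_bucket_accelerate_configuration(
    Bucket="example-video-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Subscribers then use a client configured for the accelerate endpoint.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("final-video.mp4", "example-video-bucket", "renders/final-video.mp4")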
A solutions architect is designing a VPC architecture with various subnets. Six subnets will be used in two Availability Zones. Subnets are classified as public, private, and database-specific. Access to the database should be restricted to Amazon EC2 instances running in the private subnets. Which solution satisfies these criteria? Create a new route table that excludes the route to the public subnets' CIDR blocks. Associate the route table with the database subnets. Create a security group that denies ingress from the security group used by instances in the public subnets. Attach the security group to an Amazon RDS DB instance. Create a security group that allows ingress from the security group used by instances in the private subnets. Attach the security group to an Amazon RDS DB instance. Create a new peering connection between the public subnets and the private subnets. Create a different peering connection between the private subnets and the database subnets.
A business is implementing a web gateway. The firm wants to limit public access to only the web portion of the application. The VPC was created with two public subnets and two private subnets to achieve this. The application will be hosted on multiple Amazon EC2 instances managed by an Auto Scaling group. SSL termination must be offloaded to a separate Amazon EC2 instance. What actions should a solutions architect take to guarantee compliance with these requirements? Configure the Network Load Balancer in the public subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer. Configure the Network Load Balancer in the public subnets. Configure the Auto Scaling group in the public subnets and associate it with the Application Load Balancer. Configure the Application Load Balancer in the public subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer. Configure the Application Load Balancer in the private subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer.
A business intends to operate a collection of Amazon EC2 instances connected to an Amazon Aurora database. To deploy the EC2 instances and Aurora DB cluster, the business used an AWS CloudFormation template. The organization wishes to provide secure authentication of the instances to the database. The business does not want to keep track of static database credentials. Which method satisfies these criteria with the LEAST amount of operational effort? Create a database user with a user name and password. Add parameters for the database user name and password to the CloudFormation template. Pass the parameters to the EC2 instances when the instances are launched. Create a database user with a user name and password. Store the user name and password in AWS Systems Manager Parameter Store. Configure the EC2 instances to retrieve the database credentials from Parameter Store. Configure the DB cluster to use IAM database authentication. Create a database user to use with IAM authentication. Associate a role with the EC2 instances to allow applications on the instances to access the database. Configure the DB cluster to use IAM database authentication with an IAM user. Create a database user that has a name that matches the IAM user. Associate the IAM user with the EC2 instances to allow applications on the instances to access the database.
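A brief boto3 sketch of the IAM database authentication option, assuming a hypothetical cluster endpoint and database user; the role attached to the EC2 instance must allow rds-db:connect for that user.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Generate a short-lived authentication token instead of storing a static password.
# The endpoint, port, and user name below are placeholders.
token = rds.generate_db_auth_token(
    DBHostname="example-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    Port=3306,
    DBUsername="app_iam_user",
)

# The token is then passed as the password when the application opens its
# database connection over SSL, so no credential ever has to be tracked.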
A business uses the SMB protocol to back up on-premises databases to local file server shares. To accomplish recovery goals, the organization needs instant access to one week's worth of backup data. After a week, recovery is less possible, and the business may live with a delay in retrieving those earlier backup data. What actions should a solutions architect take to ensure that these criteria are met with the LEAST amount of operational work possible? Deploy Amazon FSx for Windows File Server to create a file system with exposed file shares with sufficient storage to hold all the desired backups. Deploy an AWS Storage Gateway file gateway with sufficient storage to hold 1 week of backups. Point the backups to SMB shares from the file gateway. Deploy Amazon Elastic File System (Amazon EFS) to create a file system with exposed NFS shares with sufficient storage to hold all the desired backups. Continue to back up to the existing file shares. Deploy AWS Database Migration Service (AWS DMS) and define a copy task to copy backup files older than 1 week to Amazon S3, and delete the backup files from the local file store.
A daily scheduled task must be executed by an ecommerce business to collect and filter sales statistics for analytics purposes. The sales records are stored in an Amazon S3 bucket. Each object has a maximum file size of 10 GB. The work might take up to an hour to complete depending on the amount of sales events. The job's CPU and memory requirements are consistent and known in advance. A solutions architect's goal is to reduce the amount of operational work required to complete the task. Which solution satisfies these criteria? Create an AWS Lambda function that has an Amazon EventBridge (Amazon CloudWatch Events) notification. Schedule the EventBridge (CloudWatch Events) event to run once a day. Create an AWS Lambda function. Create an Amazon API Gateway HTTP API. and integrate the API with the function. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that calls the API and invokes the function. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch type. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that launches an ECS task on the cluster to run the job. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type and an Auto Scaling group with at least one EC2 instance. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that launches an ECS task on the cluster to run the job.
A shopping cart application connects to an Amazon RDS Multi-AZ database instance. The database performance is causing the application to slow down. There was no significant performance improvement after upgrading to the next-generation instance type. According to the analysis, around 700 IOPS are maintained, typical queries execute for extended periods of time, and memory use is significant. Which application modification might a solutions architect propose to address these concerns? Migrate the RDS instance to an Amazon Redshift cluster and enable weekly garbage collection. Separate the long-running queries into a new Multi-AZ RDS database and modify the application to query whichever database is needed. Deploy a two-node Amazon ElastiCache cluster and modify the application to query the cluster first and query the database only if needed. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue for common queries and query it first and query the database only if needed.
A startup is developing a shared storage solution for an AWS Cloud-hosted gaming application. The organization needs the ability to access data through SMB clients. The solution must be fully managed. Which AWS solution satisfies these criteria? Create an AWS DataSync task that shares the data as a mountable file system. Mount the file system to the application server. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to the file share. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the file system. Create an Amazon S3 bucket. Assign an IAM role to the application to grant access to the S3 bucket. Mount the S3 bucket to the application server.
A business relies on Amazon S3 for object storage. The organization stores data in hundreds of S3 buckets. Certain S3 buckets contain less frequently accessed data than others. According to a solutions architect, lifecycle rules are either not followed consistently or are enforced in part, resulting in data being held in high-cost storage. Which option will reduce expenses without jeopardizing object availability? Use S3 ACLs. Use Amazon Elastic Block Store (Amazon EBS) automated snapshots. Use S3 Intelligent-Tiering storage. Use S3 One Zone-Infrequent Access (S3 One Zone-IA).
A business is re-architecting a tightly connected application in order to make it loosely coupled. Previously, the program communicated across layers through a request/response pattern. The organization intends to do this via the usage of Amazon Simple Queue Service (Amazon SQS). The first architecture includes a request queue and a response queue. However, when the program grows, this strategy will not handle all messages. What is the best course of action for a solutions architect to take in order to tackle this issue? Configure a dead-letter queue on the ReceiveMessage API action of the SQS queue. Configure a FIFO queue, and use the message deduplication ID and message group ID. Create a temporary queue, with the Temporary Queue Client to receive each response message. Create a queue for each request and response on startup for each producer, and use a correlation ID message attribute.
Amazon S3 is used by a business to store private audit records. In line with the principle of least privilege, the S3 bucket uses a bucket policy to limit access to the audit team's IAM user credentials. Company executives are concerned about inadvertent document destruction in the S3 bucket and need a more secure solution. What steps should a solutions architect take to ensure the security of the audit documents? Enable the versioning and MFA Delete features on the S3 bucket. Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM user account. Add an S3 Lifecycle policy to the audit team's IAM user accounts to deny the s3:DeleteObject action during audit dates. Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from accessing the KMS key.
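A minimal boto3 sketch of enabling versioning with MFA Delete on the bucket; the bucket name, MFA device ARN, and code are placeholders, and this call must be made by the bucket owner (root) with an MFA device.

import boto3

s3 = boto3.client("s3")

# Versioning preserves prior object versions; MFA Delete requires an MFA code
# to permanently delete a version or disable versioning.
s3.put_bucket_versioning(
    Bucket="audit-documents-example",
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)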
Each day, a company's hundreds of edge devices create 1 TB of status alerts. Each alert has a file size of roughly 2 KB. A solutions architect must provide a solution for ingesting and storing the alerts for further investigation. The business needs a solution that is highly available. However, the business must keep costs low and does not want to manage additional infrastructure. Additionally, the corporation intends to retain 14 days of data for immediate analysis and archive any older data. Which option BEST satisfies these requirements? Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon Elasticsearch Service (Amazon ES) cluster. Set up the Amazon ES cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.
A firm runs a two-tier image processing program. The application is divided into two Availability Zones, each with its own public and private subnets. The web tier's Application Load Balancer (ALB) makes use of public subnets. Private subnets are used by Amazon EC2 instances at the application layer. The program is functioning more slowly than planned, according to users. According to a security audit of the web server log files, the application receives millions of unauthorized requests from a tiny number of IP addresses. While the organization finds a more permanent solution, a solutions architect must tackle the urgent performance issue. What solution architecture approach should be recommended to satisfy this requirement? Modify the inbound security group for the web tier. Add a deny rule for the IP addresses that are consuming resources. Modify the network ACL for the web tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources. Modify the inbound security group for the application tier. Add a deny rule for the IP addresses that are consuming resources. Modify the network ACL for the application tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.
A business uses Amazon Elastic Container Service (Amazon ECS) to perform an image processing workload on two private subnets. Each private subnet connects to the internet through a NAT instance. Amazon S3 buckets are used to store all photos. The business is worried about the expenses associated with data transfers between Amazon ECS and Amazon S3. What actions should a solutions architect do to save money? Configure a NAT gateway to replace the NAT instances. Configure a gateway endpoint for traffic destined to Amazon S3. Configure an interface endpoint for traffic destined to Amazon S3. Configure Amazon CloudFront for the S3 bucket storing the images.
An online photo application enables users to upload and edit photographs. The application provides two distinct service levels: free and paid. Paid users' photos are processed ahead of those submitted by free users. Amazon S3 is used to store the photos, while Amazon SQS is used to store the job information. What configuration should a solutions architect propose? Use one SQS FIFO queue. Assign a higher priority to the paid photos so they are processed first. Use two SQS FIFO queues: one for paid and one for free. Set the free queue to use short polling and the paid queue to use long polling. Use two SQS standard queues: one for paid and one for free. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue. Use one SQS standard queue. Set the visibility timeout of the paid photos to zero. Configure Amazon EC2 instances to prioritize visibility settings so paid photos are processed first.
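For illustration, a small polling sketch of the two-standard-queue option in which paid jobs are always drained first; the queue URLs and the process() helper are hypothetical.

import boto3

sqs = boto3.client("sqs")

# Placeholder queue URLs for the paid and free tiers.
PAID_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/paid-jobs"
FREE_QUEUE = "https://sqs.us-east-1.amazonaws.com/123456789012/free-jobs"

def process(body):
    # Stand-in for the real image-processing work.
    print("processing", body)

def poll_once():
    # Check the paid queue first; fall back to the free queue only when
    # no paid jobs are waiting.
    for queue_url in (PAID_QUEUE, FREE_QUEUE):
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10,
                                   WaitTimeSeconds=2)
        messages = resp.get("Messages", [])
        if messages:
            for msg in messages:
                process(msg["Body"])
                sqs.delete_message(QueueUrl=queue_url,
                                   ReceiptHandle=msg["ReceiptHandle"])
            return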
Application developers have found that when business reporting users run big production reports to the Amazon RDS instance that powers the application, the application becomes very sluggish. While the reporting queries are executing, the RDS instance's CPU and memory usage metrics do not surpass 60%. Business reporting users must be able to produce reports without impairing the functionality of the application. Which action is necessary to achieve this? Increase the size of the RDS instance. Create a read replica and connect the application to it. Enable multiple Availability Zones on the RDS instance. Create a read replica and connect the business reports to it.
A new employee has been hired as a deployment engineer by a corporation. The deployment engineer will construct several AWS resources using AWS CloudFormation templates. A solutions architect wants the deployment engineer to perform job functions with the least amount of privilege possible. Which combination of steps should the solutions architect take to reach this goal? (Select two.) Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation stack operations. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached. Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM role.
A corporation is using AWS to construct a new machine learning model solution. The models are constructed as self-contained microservices that get around 1 GB of model data from Amazon S3 and load it into memory during startup. The models are accessed by users through an asynchronous API. Users may submit a single request or a batch of requests and designate the destination for the results. The company's models are used by hundreds of users. The models' usage patterns are unpredictable. Certain models may go days or weeks without being used. Other models may get hundreds of queries concurrently. Which solution satisfies these criteria? The requests from the API are sent to an Application Load Balancer (ALB). Models are deployed as AWS Lambda functions invoked by the ALB. The requests from the API are sent to the model's Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as AWS Lambda functions triggered by SQS events. AWS Auto Scaling is enabled on Lambda to increase the number of vCPUs based on the SQS queue size. The requests from the API are sent to the model's Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as Amazon Elastic Container Service (Amazon ECS) services reading from the queue. AWS App Mesh scales the instances of the ECS cluster based on the SQS queue size. The requests from the API are sent to the model's Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as Amazon Elastic Container Service (Amazon ECS) services reading from the queue. AWS Auto Scaling is enabled on Amazon ECS for both the cluster and copies of the service based on the queue size.
A business developed a meal ordering application that collects and maintains user data for future research. The application's static front end is deployed on an Amazon EC2 instance. The front-end application communicates with the back-end application, which is hosted on a different EC2 instance. The data is subsequently stored in Amazon RDS by the backend application. What should a solutions architect do to decouple and scale the architecture? Use Amazon S3 to serve the front-end application, which sends requests to Amazon EC2 to execute the backend application. The backend application will process and store the data in Amazon RDS. Use Amazon S3 to serve the front-end application and write requests to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe Amazon EC2 instances to the HTTP/HTTPS endpoint of the topic, and process and store the data in Amazon RDS. Use an EC2 instance to serve the front end and write requests to an Amazon SQS queue. Place the backend instance in an Auto Scaling group, and scale based on the queue depth to process and store the data in Amazon RDS. Use Amazon S3 to serve the static front-end application and send requests to Amazon API Gateway, which writes the requests to an Amazon SQS queue. Place the backend instances in an Auto Scaling group, and scale based on the queue depth to process and store the data in Amazon RDS.
Each month, a leasing firm prepares and delivers PDF statements to all of its clients. Each statement is around 400 KB in length. Customers may obtain their statements from the website for a period of up to 30 days after they are created. Customers are sent a ZIP file containing all of their statements at the conclusion of their three-year lease. Which storage method is the MOST cost-effective in this situation? Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 1 day. Store the statements using the Amazon S3 Glacier storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier Deep Archive storage after 30 days. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) storage after 30 days. Store the statements using the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 30 days. .
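A short boto3 sketch of an S3 Lifecycle rule that transitions statements to Glacier after the 30-day download window, with the bucket name and prefix as placeholders.

import boto3

s3 = boto3.client("s3")

# Keep statements in S3 Standard for the 30-day download window, then
# transition them to the Glacier storage class for long-term retention.
s3.put_bucket_lifecycle_configuration(
    Bucket="customer-statements-example",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "statements-to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": "statements/"},
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }],
    },
)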
A company's ecommerce site is seeing a rise in visitor traffic. The company's shop is implemented as a two-tier application on Amazon EC2 instances, with a web tier and a separate database tier. As traffic rises, the organization detects severe delays in delivering timely marketing and purchase confirmation emails to consumers due to the design. The organization wishes to decrease the amount of time spent addressing difficult email delivery problems and to cut operating costs. What actions should a solutions architect take to ensure that these criteria are met? Create a separate application tier using EC2 instances dedicated to email processing. Configure the web instance to send email through Amazon Simple Email Service (Amazon SES). Configure the web instance to send email through Amazon Simple Notification Service (Amazon SNS). Create a separate application tier using EC2 instances dedicated to email processing. Place the instances in an Auto Scaling group.
A business's application makes use of AWS Lambda functions. A code examination reveals that database credentials are being kept in the source code of a Lambda function, which violates the company's security policy. To comply with security policy requirements, credentials must be safely maintained and automatically cycled on a regular basis. What should a solutions architect propose as the MOST SECURE method of meeting these requirements? Store the password in AWS CloudHSM. Associate the Lambda function with a role that can use the key ID to retrieve the password from CloudHSM. Use CloudHSM to automatically rotate the password. Store the password in AWS Secrets Manager. Associate the Lambda function with a role that can use the secret ID to retrieve the password from Secrets Manager. Use Secrets Manager to automatically rotate the password. Store the password in AWS Key Management Service (AWS KMS). Associate the Lambda function with a role that can use the key ID to retrieve the password from AWS KMS. Use AWS KMS to automatically rotate the uploaded password. Move the database password to an environment variable that is associated with the Lambda function. Retrieve the password from the environment variable by invoking the function. Create a deployment script to automatically rotate the password.
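A minimal sketch of a Lambda handler retrieving credentials from AWS Secrets Manager at run time instead of hard-coding them; the secret name is a placeholder, and the execution role would need secretsmanager:GetSecretValue on that secret.

import json
import boto3

secrets = boto3.client("secretsmanager")

def handler(event, context):
    # Fetch the current credentials; automatic rotation keeps them fresh
    # without any code or deployment change.
    secret = secrets.get_secret_value(SecretId="prod/app/db-credentials")
    creds = json.loads(secret["SecretString"])
    # creds["username"] and creds["password"] would then be used to open the
    # database connection in place of values embedded in the source code.
    return {"statusCode": 200}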
A firm just launched a two-tier application in the us-east-1 Region's two Availability Zones. Databases are located on a private subnet, whereas web servers are located on a public subnet. The VPC is connected to the internet through an internet gateway. Amazon EC2 instances are used to host the application and database. The database servers are unable to connect to the internet in order to get fixes. A solutions architect must create a system that ensures database security while incurring the fewest operating costs. Which solution satisfies these criteria? Deploy a NAT gateway inside the public subnet for each Availability Zone and associate it with an Elastic IP address. Update the routing table of the private subnet to use it as the default route. Deploy a NAT gateway inside the private subnet for each Availability Zone and associate it with an Elastic IP address. Update the routing table of the private subnet to use it as the default route. Deploy two NAT instances inside the public subnet for each Availability Zone and associate them with Elastic IP addresses. Update the routing table of the private subnet to use it as the default route. Deploy two NAT instances inside the private subnet for each Availability Zone and associate them with Elastic IP addresses. Update the routing table of the private subnet to use it as the default route.
For each of its developer accounts, a corporation has configured AWS CloudTrail to deliver log files to an Amazon S3 bucket. The organization has established a centralized AWS account for the purpose of facilitating administration and auditing. Internal auditors need access to the CloudTrail logs; however, access must be limited for all developer account users. The solution should be both secure and efficient. How should a solutions architect address these considerations? Configure an AWS Lambda function in each developer account to copy the log files to the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM user in the central account for the auditor. Attach an IAM policy providing full permissions to the bucket. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket. Configure an AWS Lambda function in the central account to copy the log files from the S3 bucket in each developer account. Create an IAM user in the central account for the auditor. Attach an IAM policy providing full permissions to the bucket.
A solutions architect is improving a website in preparation for a forthcoming musical performance. Real-time streaming of the performances will be available, as well as on-demand viewing. The event is anticipated to draw a large internet audience from across the world. Which service will optimize both real-time and on-demand streaming performance? Amazon CloudFront AWS Global Accelerator Amazon Route 53 Amazon S3 Transfer Acceleration.
A database is hosted on an Amazon RDS MySQL 5.6 Multi-AZ DB instance that is subjected to high-volume reads. When evaluating read performance from a secondary AWS Region, application developers detect a considerable lag. The developers need a solution that has a read replication latency of less than one second. What recommendations should the solutions architect make? Install MySQL on Amazon EC2 in the secondary Region. Migrate the database to Amazon Aurora with cross-Region replicas. Create another RDS for MySQL read replica in the secondary Region. Implement Amazon ElastiCache to improve database query performance.
Currently, a business runs a web application that is backed up by an Amazon RDS MySQL database. It features daily automatic backups that are not encrypted. A security audit entails the encryption of future backups and the destruction of unencrypted backups. Before deleting the previous backups, the firm will create at least one encrypted backup. What should be done to allow encrypted backups in the future? Enable default encryption for the Amazon S3 bucket where backups are stored. Modify the backup section of the database configuration to toggle the Enable encryption check box. Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot. Enable an encrypted read replica on RDS for MySQL. Promote the encrypted read replica to primary. Remove the original database instance.
Each entry to a company's facility is equipped with badge readers. When badges are scanned, the readers transmit an HTTPS message indicating who tried to enter that specific entry. A solutions architect must develop a system that will handle these sensor signals. The solution must be highly accessible, with the findings made available for analysis by the company's security staff. Which system design should be recommended by the solutions architect? Launch an Amazon EC2 instance to serve as the HTTPS endpoint and to process the messages. Configure the EC2 instance to save the results to an Amazon S3 bucket. Create an HTTPS endpoint in Amazon API Gateway. Configure the API Gateway endpoint to invoke an AWS Lambda function to process the messages and save the results to an Amazon DynamoDB table. Use Amazon Route 53 to direct incoming sensor messages to an AWS Lambda function. Configure the Lambda function to process the messages and save the results to an Amazon DynamoDB table. Create a gateway VPC endpoint for Amazon S3. Configure a Site-to-Site VPN connection from the facility network to the VPC so that sensor data can be written directly to an S3 bucket by way of the VPC endpoint.
A business maintains on-premises servers that operate a relational database. The existing database handles a large volume of read requests from users in various places. The organization wants to transition to AWS with little effort. The database solution should facilitate disaster recovery without interfering with the company's existing traffic flow. Which solution satisfies these criteria? Use a database in Amazon RDS with Multi-AZ and at least one read replica. Use a database in Amazon RDS with Multi-AZ and at least one standby replica. Use databases hosted on multiple Amazon EC2 instances in different AWS Regions. Use databases hosted on Amazon EC2 instances behind an Application Load Balancer in different Availability Zones.
On AWS, a business is developing a document storage solution. The application is deployed on Amazon EC2 instances across multiple Availability Zones. The firm requires highly available document storage. Documents must be returned quickly when requested. The lead engineer has configured the application to store documents in Amazon Elastic Block Store (Amazon EBS), but is open to examining additional solutions to fulfill the availability requirement. What recommendations should a solutions architect make? Snapshot the EBS volumes regularly and build new volumes using those snapshots in additional Availability Zones. Use Amazon Elastic Block Store (Amazon EBS) for the EC2 instance root volumes. Configure the application to build the document store on Amazon S3. Use Amazon Elastic Block Store (Amazon EBS) for the EC2 instance root volumes. Configure the application to build the document store on Amazon S3 Glacier. Use at least three Provisioned IOPS EBS volumes for EC2 instances. Mount the volumes to the EC2 instances in a RAID 5 configuration.
A business wishes to keep track of its AWS charges for financial reporting purposes. The cloud operations team is developing an architecture for querying AWS Cost and Usage Reports for all member accounts in the AWS Organizations management account. Once a month, the team must execute this query and give a full analysis of the bill. Which solution meets these needs in the MOST scalable and cost-effective manner? Enable Cost and Usage Reports in the management account. Deliver reports to Amazon Kinesis. Use Amazon EMR for analysis. Enable Cost and Usage Reports in the management account. Deliver the reports to Amazon S3. Use Amazon Athena for analysis. Enable Cost and Usage Reports for member accounts. Deliver the reports to Amazon S3. Use Amazon Redshift for analysis. Enable Cost and Usage Reports for member accounts. Deliver the reports to Amazon Kinesis. Use Amazon QuickSight for analysis.
A solutions architect wants all new IAM user passwords to meet specific complexity requirements and to be rotated on a regular basis. What should the solutions architect do to achieve this? Set an overall password policy for the entire AWS account. Set a password policy for each IAM user in the AWS account. Use third-party vendor software to set password requirements. Attach an Amazon CloudWatch rule to the Create_newuser event to set the password with the appropriate requirements.
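A brief boto3 sketch of an account-wide IAM password policy with complexity and rotation settings; the specific values are illustrative.

import boto3

iam = boto3.client("iam")

# One account-level policy covers every IAM user, including new ones:
# complexity requirements plus a 90-day maximum password age.
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    MaxPasswordAge=90,
    PasswordReusePrevention=5,
)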
A business uses AWS to host its website. The organization has utilized Amazon EC2 Auto Scaling to accommodate the extremely fluctuating demand. Management is worried that the firm is overprovisioning its infrastructure, particularly at the three-tier application's front end. A solutions architect's primary responsibility is to guarantee that costs are minimized without sacrificing performance. What is the solution architect's role in achieving this? Use Auto Scaling with Reserved Instances. Use Auto Scaling with a scheduled scaling policy. Use Auto Scaling with the suspend-resume feature. Use Auto Scaling with a target tracking scaling policy.
A business is in the process of transferring its apps to AWS. At the moment, on-premises apps create hundreds of terabytes of data, which is kept on a shared file system. The organization is using a cloud-based analytics solution to derive insights from this data on an hourly basis. The business requires a solution to manage continuous data transfer between its on-premises shared file system and Amazon S3. Additionally, the solution must be capable of coping with brief gaps in internet access. Which data transmission options should the business utilize to achieve these requirements? AWS DataSync AWS Migration Hub AWS Snowball Edge Storage Optimized AWS Transfer for SFTP.
A business wishes to manage a fleet of Amazon EC2 instances using AWS Systems Manager. No EC2 instances are permitted to have internet access, per the company's security needs. A solutions architect is responsible for designing network connection between EC2 instances and Systems Manager while adhering to this security requirement. Which solution will satisfy these criteria? Deploy the EC2 instances into a private subnet with no route to the internet. Configure an interface VPC endpoint for Systems Manager. Update routes to use the endpoint. Deploy a NAT gateway into a public subnet. Configure private subnets with a default route to the NAT gateway. Deploy an internet gateway. Configure a network ACL to deny traffic to all destinations except Systems Manager.
On Amazon EC2 instances, a business runs an application. The application is deployed on private subnets inside the us-east-1 Region's three Availability Zones. The instances must have internet access in order to download files. The organization is looking for a design that is readily accessible across the Region. Which solution should be done to guarantee that internet access is not disrupted? Deploy a NAT instance in a private subnet of each Availability Zone. Deploy a NAT gateway in a public subnet of each Availability Zone. Deploy a transit gateway in a private subnet of each Availability Zone. Deploy an internet gateway in a public subnet of each Availability Zone.
A business operates a remote factory with unreliable connectivity. The factory must collect and interpret machine and sensor data in order to detect items on its conveyor belts and begin robotic movement to route them to the appropriate spot. Predictable low-latency compute processing is critical for the on-premises control systems. Which data processing solution should the manufacturer use? Amazon CloudFront Lambda@Edge functions An Amazon EC2 instance that has enhanced networking enabled An Amazon EC2 instance that uses an AWS Global Accelerator An Amazon Elastic Block Store (Amazon EBS) volume on an AWS Snowball Edge cluster.
An application is deployed across various Availability Zones using Amazon EC2 instances. The instances are deployed behind an Application Load Balancer in an Amazon EC2 Auto Scaling group. The program operates optimally when the CPU usage of the Amazon EC2 instances is close to or equal to 40%. What should a solutions architect do to ensure that the required performance is maintained throughout all group instances? Use a simple scaling policy to dynamically scale the Auto Scaling group. Use a target tracking policy to dynamically scale the Auto Scaling group. Use an AWS Lambda function to update the desired Auto Scaling group capacity. Use scheduled scaling actions to scale up and scale down the Auto Scaling group.
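A short boto3 sketch of a target tracking scaling policy that keeps average CPU near 40%, with a placeholder Auto Scaling group name.

import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking adds or removes instances automatically to hold the
# group's average CPU utilization close to the 40% target, with no manual
# step thresholds to maintain.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-near-40",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 40.0,
    },
)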
The web application of a business is hosted on Amazon EC2 instances and is protected by an Application Load Balancer. The corporation recently altered its policy, requiring that the application be accessible exclusively from a single nation. Which setup will satisfy this criterion? Configure the security group for the EC2 instances. Configure the security group on the Application Load Balancer. Configure AWS WAF on the Application Load Balancer in a VPC. Configure the network ACL for the subnet that contains the EC2 instances.
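For reference, a minimal boto3 sketch of a WAFv2 web ACL that allows traffic only from a single country and can then be associated with the Application Load Balancer; the country code, names, and ARNs are placeholders.

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# REGIONAL scope is required for web ACLs attached to an Application Load Balancer.
wafv2.create_web_acl(
    Name="allow-single-country",
    Scope="REGIONAL",
    DefaultAction={"Block": {}},  # block all requests that match no rule
    Rules=[{
        "Name": "allow-home-country",
        "Priority": 0,
        "Statement": {"GeoMatchStatement": {"CountryCodes": ["US"]}},  # placeholder country
        "Action": {"Allow": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "allowHomeCountry"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "allowSingleCountry"},
)

# The returned web ACL ARN is then attached to the load balancer with
# wafv2.associate_web_acl(WebACLArn=<web ACL ARN>, ResourceArn=<ALB ARN>).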
AWS Lambda functions are being developed and deployed by an engineering team. The team must build roles and administer policies in AWS IAM in order to set the Lambda functions' rights. How should the team's permissions be adjusted to correspond to the principle of least privilege? Create an IAM role with a managed policy attached. Allow the engineering team and the Lambda functions to assume this role. Create an IAM group for the engineering team with an IAMFullAccess policy attached. Add all the users from the team to this IAM group. Create an execution role for the Lambda functions. Attach a managed policy that has permission boundaries specific to these Lambda functions. Create an IAM role with a managed policy attached that has permission boundaries specific to the Lambda functions. Allow the engineering team to assume this role.
A business is migrating from on-premises Oracle to Amazon Aurora PostgreSQL. Numerous apps write to the same tables in the database. The apps must be migrated sequentially, with a month between migrations. Management has raised concerns about the database's heavy read and write activity. Throughout the entire migration process, the data must be kept in sync across both databases. What recommendations should a solutions architect make? Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a change data capture (CDC) replication task and a table mapping to select all tables. Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a full load plus change data capture (CDC) replication task and a table mapping to select all tables. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a memory optimized replication instance. Create a full load plus change data capture (CDC) replication task and a table mapping to select all tables. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a compute optimized replication instance. Create a full load plus change data capture (CDC) replication task and a table mapping to select the largest tables.
A solutions architect is migrating static content from an Amazon EC2 instance-hosted public website to an Amazon S3 bucket. The static assets will be distributed using an Amazon CloudFront distribution. The EC2 instances' security group limits access to a subset of IP ranges. Access to static material should be regulated in a similar manner. Which combination of actions will satisfy these criteria? (Select two.) Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution. Create a new security group that includes the same IP restrictions that exist in the current EC2 security group. Associate this new security group with the CloudFront distribution. Create a new security group that includes the same IP restrictions that exist in the current EC2 security group. Associate this new security group with the S3 bucket hosting the static content. Create a new IAM role and associate the role with the distribution. Change the permissions either on the S3 bucket or on the files within the S3 bucket so that only the newly created IAM role has read and download permissions.
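One of the choices pairs a CloudFront origin access identity (OAI) with a bucket policy that restricts reads to that OAI. A minimal sketch of such a bucket policy, with a hypothetical bucket name and OAI ID, might look like this in boto3:

    import json
    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket name and OAI ID; the policy allows reads only
    # through the CloudFront origin access identity.
    bucket = "static-assets-example"
    oai_id = "E1EXAMPLEOAIID"

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
                },
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }

    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))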
AWS is used by an ecommerce firm to operate a multi-tier application. Amazon EC2 hosts both the front-end and back-end layers, while Amazon RDS for MySQL hosts the database. The backend tier is responsible for communication with the RDS instance. There are many requests to the database to get identical datasets, which results in performance slowdowns. Which actions should be performed to optimize the backend's performance? Implement Amazon SNS to store the database calls. Implement Amazon ElastiCache to cache the large datasets. Implement an RDS for MySQL read replica to cache database calls. Implement Amazon Kinesis Data Firehose to stream the calls to the database.
A user owns a MySQL database, which is used by a variety of customers who expect a maximum query latency of 100 milliseconds. Once an entry is recorded in the database, it is almost never modified. Clients access at most one record at a time. Due to rising customer demand, database access has grown tremendously. As a consequence, the resulting load will quickly surpass the capability of even the most costly hardware available. The user wants to move to AWS and is open to experimenting with new database systems. Which solution would resolve the database load problem and provide nearly limitless future scalability? Amazon RDS Amazon DynamoDB Amazon Redshift AWS Data Pipeline.
Amazon DynamoDB is being used by an entertainment firm to store media metadata. The application is read-intensive and often experiences delays. The organization does not have the staff to handle additional operational overhead and needs to improve DynamoDB's performance efficiency without changing the application. What solution architecture approach should be recommended to satisfy this requirement? Use Amazon ElastiCache for Redis. Use Amazon DynamoDB Accelerator (DAX). Replicate data by using DynamoDB global tables. Use Amazon ElastiCache for Memcached with Auto Discovery enabled.
In a branch office, a firm runs an application in a tiny data closet with no virtualized computing resources. The application's data is saved on a network file system (NFS) volume. Daily offsite backups of the NFS volume are required by compliance requirements. Which solution satisfies these criteria? Install an AWS Storage Gateway file gateway on premises to replicate the data to Amazon S3. Install an AWS Storage Gateway file gateway hardware appliance on premises to replicate the data to Amazon S3. Install an AWS Storage Gateway volume gateway with stored volumes on premises to replicate the data to Amazon S3. Install an AWS Storage Gateway volume gateway with cached volumes on premises to replicate the data to Amazon S3.
A business is using AWS to host an election reporting website for consumers worldwide. The website makes use of Amazon EC2 instances in an Auto Scaling group with Application Load Balancers for the web and application layers. The database layer is powered by Amazon RDS for MySQL. The website is updated once an hour with election results and has previously seen hundreds of individuals check the data. The firm anticipates a big boost in demand in the coming months as a result of impending elections in many nations. A solutions architect's objective is to increase the website's capacity to manage increased demand while limiting the requirement for more EC2 instances. Which solution will satisfy these criteria? Launch an Amazon ElastiCache cluster to cache common database queries. Launch an Amazon CloudFront web distribution to cache commonly requested website content. Enable disk-based caching on the EC2 instances to cache commonly requested website content. Deploy a reverse proxy into the design using an EC2 instance with caching enabled for commonly requested website content.
Amazon S3 is used by a corporation to store historical weather recordings. The records are accessed through a URL that refers to a domain name on the company's website. Subscriptions enable users from all around the globe to access this material. Although the organization's core domain name is hosted by a third-party operator, the company recently transferred some of its services to Amazon Route 53. The corporation wants to consolidate contracts, minimize user latency, and lower the cost of offering the application to subscribers. Which solution satisfies these criteria? Create a web distribution on Amazon CloudFront to serve the S3 content for the application. Create a CNAME record in a Route 53 hosted zone that points to the CloudFront distribution, resolving to the application's URL domain name. Create a web distribution on Amazon CloudFront to serve the S3 content for the application. Create an ALIAS record in the Amazon Route 53 hosted zone that points to the CloudFront distribution, resolving to the application's URL domain name. Create an A record in a Route 53 hosted zone for the application. Create a Route 53 traffic policy for the web application, and configure a geolocation rule. Configure health checks to check the health of the endpoint and route DNS queries to other endpoints if an endpoint is unhealthy. Create an A record in a Route 53 hosted zone for the application. Create a Route 53 traffic policy for the web application, and configure a geoproximity rule. Configure health checks to check the health of the endpoint and route DNS queries to other endpoints if an endpoint is unhealthy.
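Several of the choices create Route 53 records that resolve to a CloudFront distribution. As a rough illustration (hosted zone ID, record name, and distribution domain are placeholders), an alias A record pointing at CloudFront can be created as below; Z2FDTNDATAQYW2 is the fixed hosted zone ID Route 53 uses for CloudFront alias targets.

    import boto3

    route53 = boto3.client("route53")

    # Hypothetical hosted zone ID, record name, and distribution domain.
    route53.change_resource_record_sets(
        HostedZoneId="Z0EXAMPLE12345",
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "records.example.com",
                        "Type": "A",
                        "AliasTarget": {
                            "HostedZoneId": "Z2FDTNDATAQYW2",
                            "DNSName": "d111111abcdef8.cloudfront.net",
                            "EvaluateTargetHealth": False,
                        },
                    },
                }
            ]
        },
    )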
A business is developing a web application that will be accessible over the internet. The application is hosted on Amazon EC2 for Linux instances that leverage Amazon RDS MySQL Multi-AZ DB instances to store sensitive user data. Public subnets are used for EC2 instances, whereas private subnets are used for RDS DB instances. The security team has required that web-based attacks on database instances be prevented. What recommendations should a solutions architect make? Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Configure the EC2 instance iptables rules to drop suspicious web traffic. Create a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the individual EC2 instances. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Move DB instances to the same subnets that EC2 instances are located in. Create a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the individual EC2 instances. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Use AWS WAF to monitor inbound web traffic for threats. Create a security group for the web application servers and a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the web application server security group. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Use AWS WAF to monitor inbound web traffic for threats. Configure the Auto Scaling group to automatically create new DB instances under heavy traffic. Create a security group for the RDS DB instances. Configure the RDS security group to only allow port 3306 inbound.
A solutions architect is responsible for designing a solution for migrating a persistent database from on-premises to AWS. According to the database administrator, the database needs 64,000 IOPS. If feasible, the database administrator wishes to host the database instance on a single Amazon Elastic Block Store (Amazon EBS) volume. Which option satisfies the database administrator's requirements the most effectively? Use an instance from the I3 I/O optimized family and leverage local ephemeral storage to achieve the IOPS requirement. Create a Nitro-based Amazon EC2 instance with an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1) volume attached. Configure the volume to have 64,000 IOPS. Create and map an Amazon Elastic File System (Amazon EFS) volume to the database instance and use the volume to achieve the required IOPS for the database. Provision two volumes and assign 32,000 IOPS to each. Create a logical volume at the operating system level that aggregates both volumes to achieve the IOPS requirements.
A solutions architect is designing the architecture for a new application that will be deployed to the AWS Cloud. Amazon EC2 On-Demand Instances will run the application, which will automatically scale across multiple Availability Zones. The EC2 instances will scale up and down periodically throughout the day. An Application Load Balancer (ALB) will handle the load distribution. The architecture must support distributed session data management, and the firm is ready to make the necessary changes to the code. What should the solutions architect do to ensure that the design supports distributed session data management? Use Amazon ElastiCache to manage and store session data. Use session affinity (sticky sessions) of the ALB to manage session data. Use Session Manager from AWS Systems Manager to manage the session. Use the GetSessionToken API operation in AWS Security Token Service (AWS STS) to manage the session.
Multiple Amazon EC2 Linux instances are used by a business in a VPC to execute applications that need a hierarchical directory structure. The apps must be able to access and write to shared storage fast and simultaneously. How is this accomplished? Create an Amazon Elastic File System (Amazon EFS) file system and mount it from each EC2 instance. Create an Amazon S3 bucket and permit access from all the EC2 instances in the VPC. Create a file system on an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1) volume. Attach the volume to all the EC2 instances. Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes attached to each EC2 instance. Synchronize the Amazon Elastic Block Store (Amazon EBS) volumes across the different EC2 instances.
A solutions architect is in the process of transferring a document management task to Amazon Web Services. The workload stores and tracks 7 terabytes of contract documents on a shared storage file system and an external database. The majority of records are archived and ultimately recovered for future reference. During the migration, the application cannot be updated, and the storage solution must be highly available. Web servers that are part of an Auto Scaling group on Amazon EC2 collect and store documents. There may be up to 12 instances in the Auto Scaling group. Which option best fits these criteria in terms of cost-effectiveness? Provision an enhanced networking optimized EC2 instance to serve as a shared NFS storage system. Create an Amazon S3 bucket that uses the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Mount the S3 bucket to the EC2 instances in the Auto Scaling group. Create an SFTP server endpoint by using AWS Transfer for SFTP and an Amazon S3 bucket. Configure the EC2 instances in the Auto Scaling group to connect to the SFTP server. Create an Amazon Elastic File System (Amazon EFS) file system that uses the EFS Standard-Infrequent Access (EFS Standard-IA) storage class. Mount the file system to the EC2 instances in the Auto Scaling group.
A business maintains monthly phone records. Recorded data may be accessed randomly within a year but is rarely retrieved after that period. Files less than a year old must be queried and retrieved immediately. A delay in retrieving older files is acceptable. A solutions architect must ensure that the captured data is stored at the lowest possible cost. Which option is the cheapest? Store individual files in Amazon S3 Glacier and store search metadata in object tags created in S3 Glacier. Query S3 Glacier tags and retrieve the files from S3 Glacier. Store individual files in Amazon S3. Use lifecycle policies to move the files to Amazon S3 Glacier after 1 year. Query and retrieve the files from Amazon S3 or S3 Glacier. Archive individual files and store search metadata for each archive in Amazon S3. Use lifecycle policies to move the files to Amazon S3 Glacier after 1 year. Query and retrieve the files by searching for metadata from Amazon S3. Archive individual files in Amazon S3. Use lifecycle policies to move the files to Amazon S3 Glacier after 1 year. Store search metadata in Amazon DynamoDB. Query the files from DynamoDB and retrieve them from Amazon S3 or S3 Glacier.
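Several of the options rely on an S3 lifecycle rule that transitions objects to S3 Glacier after one year. A minimal sketch of that rule, assuming a hypothetical bucket name:

    import boto3

    s3 = boto3.client("s3")

    # Objects transition to S3 Glacier 365 days after creation.
    s3.put_bucket_lifecycle_configuration(
        Bucket="call-recordings-example",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-after-one-year",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )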
A business wants to move a workload to AWS. The chief information security officer demands that any data stored in the cloud be encrypted at rest. The organization desires total control over the encryption key lifecycle management process. Independent of AWS CloudTrail, the organization must be able to promptly delete key material and audit key use. The selected services should interface with other AWS storage services. Which services adhere to these security standards? AWS CloudHSM with the CloudHSM client AWS Key Management Service (AWS KMS) with AWS CloudHSM AWS Key Management Service (AWS KMS) with an external key material origin AWS Key Management Service (AWS KMS) with AWS managed customer master keys (CMKs).
A business hosts an application on Amazon Web Services (AWS) and utilizes Amazon DynamoDB as the database. To process data from the database, the organization runs Amazon EC2 instances in a private network. The organization connects to DynamoDB using two NAT instances. The corporation wants to decommission its NAT instances. A solutions architect must develop a solution that connects to DynamoDB and is self-managing. Which approach is the MOST cost-effective in terms of meeting these requirements? Create a gateway VPC endpoint to provide connectivity to DynamoDB. Configure a managed NAT gateway to provide connectivity to DynamoDB. Establish an AWS Direct Connect connection between the private network and DynamoDB. Deploy an AWS PrivateLink endpoint service between the private network and DynamoDB.
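The gateway VPC endpoint mentioned in the options is created against the VPC's route tables. A brief boto3 sketch with placeholder IDs, assuming the us-east-1 service name:

    import boto3

    ec2 = boto3.client("ec2")

    # The gateway endpoint adds a prefix-list route for DynamoDB to the
    # listed route tables, so instances reach DynamoDB without a NAT device.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0abc1234567890def",
        ServiceName="com.amazonaws.us-east-1.dynamodb",
        RouteTableIds=["rtb-0abc1234567890def"],
    )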
A business hosts a three-tier web application on a virtual private cloud (VPC) that spans various Availability Zones. For the application layer, Amazon EC2 instances are deployed in an Auto Scaling group. The organization must develop an automated scaling strategy that analyzes the daily and weekly workload patterns for each resource. The setup must correctly scale resources in response to both forecasted and actual changes in consumption. Which scaling approach, if any, should a solutions architect propose in order to satisfy these requirements? Implement dynamic scaling with step scaling based on average CPU utilization from the EC2 instances. Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking. Create an automated scheduled scaling action based on the traffic patterns of the web application. Set up a simple scaling policy. Increase the cooldown period based on the EC2 instance startup time.
A business administers its own Amazon EC2 instances, which are configured to run MySQL databases. The firm manages replication and scaling manually as demand grows or falls. The organization needs a new solution that makes it easier to add or remove compute resources from its database layer as required. Additionally, the solution must improve performance, scalability, and durability with minimal operational effort. Which solution satisfies these criteria? Migrate the databases to Amazon Aurora Serverless for Aurora MySQL. Migrate the databases to Amazon Aurora Serverless for Aurora PostgreSQL. Combine the databases into one larger MySQL database. Run the larger database on larger EC2 instances. Create an EC2 Auto Scaling group for the database tier. Migrate the existing databases to the new environment.
A business wants to transfer two apps to AWS. Both apps process a large number of files concurrently by accessing the same files. Both apps must read the files with minimal latency. Which architecture would a solutions architect suggest in this case? Configure two AWS Lambda functions to run the applications. Create an Amazon EC2 instance with an instance store volume to store the data. Configure two AWS Lambda functions to run the applications. Create an Amazon EC2 instance with an Amazon Elastic Block Store (Amazon EBS) volume to store the data. Configure one memory optimized Amazon EC2 instance to run both applications simultaneously. Create an Amazon Elastic Block Store (Amazon EBS) volume with Provisioned IOPS to store the data. Configure two Amazon EC2 instances to run both applications. Configure Amazon Elastic File System (Amazon EFS) with General Purpose performance mode and Bursting Throughput mode to store the data.
A corporation operates a containerized application in an on-premises data center using a Kubernetes cluster. The organization stores data in a MongoDB database. The organization wants to transition some of these environments to AWS, but no modifications to the code or deployment methods are currently feasible. The business needs a solution that lowers operating costs. Which solution satisfies these criteria? Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes for compute and MongoDB on EC2 for data storage. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute and Amazon DynamoDB for data storage. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes for compute and Amazon DynamoDB for data storage. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB compatibility) for data storage.
A business has a mobile chat application that uses an Amazon DynamoDB data store. Users want as little latency as possible when reading new messages. A solutions architect's objective is to provide the optimal solution with the fewest possible application modifications. Which technique should be chosen by the solutions architect? Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code to use the DAX endpoint. Add DynamoDB read replicas to handle the increased read load. Update the application to point to the read endpoint for the read replicas. Double the number of read capacity units for the new messages table in DynamoDB. Continue to use the existing DynamoDB endpoint. Add an Amazon ElastiCache for Redis cache to the application stack. Update the application to point to the Redis cache endpoint instead of DynamoDB.
A business utilizes Amazon Web Services to host all components of its three-tier application. The organization wants to identify any possible security vulnerabilities within the environment automatically. The organization wants to keep track of any findings and to warn administrators in the event of a suspected breach. Which solution satisfies these criteria? Set up AWS WAF to evaluate suspicious web traffic. Create AWS Lambda functions to log any findings in Amazon CloudWatch and send email notifications to administrators. Set up AWS Shield to evaluate suspicious web traffic. Create AWS Lambda functions to log any findings in Amazon CloudWatch and send email notifications to administrators. Deploy Amazon Inspector to monitor the environment and generate findings in Amazon CloudWatch. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to notify administrators by email. Deploy Amazon GuardDuty to monitor the environment and generate findings in Amazon CloudWatch. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to notify administrators by email.
On an Amazon RDS MySQL DB instance, a company's production application processes online transaction processing (OLTP) transactions. The firm is also launching a new reporting tool that will access the same data. The reporting tool must be highly available and must not affect the performance of the production application. How is this accomplished? Create hourly snapshots of the production RDS DB instance. Create a Multi-AZ RDS Read Replica of the production RDS DB instance. Create multiple RDS Read Replicas of the production RDS DB instance. Place the Read Replicas in an Auto Scaling group. Create a Single-AZ RDS Read Replica of the production RDS DB instance. Create a second Single-AZ RDS Read Replica from the replica.
A solutions architect must offer a fully managed alternative to an on-premises system that enables file interchange between workers and partners. Workers connecting from on-premises systems, remote employees, and external partners must have easy access to the solution. Which solution satisfies these criteria? Use AWS Transfer for SFTP to transfer files into and out of Amazon S3. Use AWS Snowball Edge for local storage and large-scale data transfers. Use Amazon FSx to store and transfer files to make them available remotely. Use AWS Storage Gateway to create a volume gateway to store and transfer files to Amazon S3.
A business is operating an application on Amazon EC2 instances on a private subnet. The program must be capable of storing and retrieving data from Amazon S3. To save expenses, the corporation wishes to optimize the configuration of its AWS resources. How should the business go about doing this? Deploy a NAT gateway to access the S3 buckets. Deploy AWS Storage Gateway to access the S3 buckets. Deploy an S3 gateway endpoint to access the S3 buckets. Deploy an S3 interface endpoint to access the S3 buckets.
AWS is used by a business to store user data. The data is continually accessed, with peak consumption occurring during work hours. Access patterns vary, with some data going months without being accessed. A solutions architect must pick a solution that is both cost efficient and durable, while also maintaining a high degree of availability. Which storage option satisfies these criteria? Amazon S3 Standard Amazon S3 Intelligent-Tiering Amazon S3 Glacier Deep Archive Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).
A business uses Amazon EC2 instances to operate a legacy data processing application. Although data is processed sequentially, the order of the findings is irrelevant. The application is designed in a monolithic fashion. The only method for the business to expand the application in response to rising demand is to raise the instance size. The engineers at the organization have chosen to redesign the program using a microservices architecture using Amazon Elastic Container Service (Amazon ECS). What should a solutions architect propose for inter-microservice communication? Create an Amazon Simple Queue Service (Amazon SQS) queue. Add code to the data producers, and send data to the queue. Add code to the data consumers to process data from the queue. Create an Amazon Simple Notification Service (Amazon SNS) topic. Add code to the data producers, and publish notifications to the topic. Add code to the data consumers to subscribe to the topic. Create an AWS Lambda function to pass messages. Add code to the data producers to call the Lambda function with a data object. Add code to the data consumers to receive a data object that is passed from the Lambda function. Create an Amazon DynamoDB table. Enable DynamoDB Streams. Add code to the data producers to insert data into the table. Add code to the data consumers to use the DynamoDB Streams API to detect new table entries and retrieve the data.
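For context, the SQS-based option decouples producers from consumers through a queue. A minimal sketch with a hypothetical queue name:

    import boto3

    sqs = boto3.client("sqs")

    # Producers enqueue work items; consumers poll for them, decoupling
    # the microservices.
    queue_url = sqs.create_queue(QueueName="processing-queue")["QueueUrl"]

    # Producer side
    sqs.send_message(QueueUrl=queue_url, MessageBody='{"record_id": 42}')

    # Consumer side
    response = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for message in response.get("Messages", []):
        # ... process the payload, then remove it from the queue
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])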
A business is considering migrating a legacy application to AWS. Currently, the application communicates with an on-premises storage system through NFS. The application cannot be modified to use any communication protocol other than NFS. Which storage solution should a solutions architect propose for use after the migration? AWS DataSync Amazon Elastic Block Store (Amazon EBS) Amazon Elastic File System (Amazon EFS) Amazon EMR File System (Amazon EMRFS).
A business wishes to run its mission-critical apps in containers in order to fulfill scalability and availability requirements. The corporation would rather concentrate on key application maintenance. The firm does not want to be responsible for provisioning and maintaining the containerized workload's underlying infrastructure. What actions should a solutions architect take to ensure that these criteria are met? Use Amazon EC2 instances, and install Docker on the instances. Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 worker nodes. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. Use Amazon EC2 instances from an Amazon Elastic Container Service (Amazon ECS)-optimized Amazon Machine Image (AMI).
A business operates a service that generates event data. The firm wishes to use AWS to process the event data as it is received. The data arrives in a specific order that must be preserved during processing. The firm wants to deploy a solution with the lowest possible operating costs. What should a solutions architect do to accomplish this? Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue to hold messages. Set up an AWS Lambda function to process messages from the queue. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing payloads to process. Configure an AWS Lambda function as a subscriber. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to hold messages. Set up an AWS Lambda function to process messages from the queue independently. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing payloads to process. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a subscriber.
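The FIFO option preserves ordering per message group and can feed a Lambda function through an event source mapping. A rough sketch with hypothetical names:

    import boto3

    sqs = boto3.client("sqs")
    lambda_client = boto3.client("lambda")

    # A FIFO queue preserves ordering within each message group.
    queue_url = sqs.create_queue(
        QueueName="events.fifo",
        Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
    )["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # The event source mapping lets Lambda poll the queue.
    lambda_client.create_event_source_mapping(
        EventSourceArn=queue_arn,
        FunctionName="process-events",
        BatchSize=10,
    )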
The database of a business is hosted in the us-east-1 Region on an Amazon Aurora MySQL DB cluster. The database is around 4 terabytes in size. The company's disaster recovery plan should be expanded to include the us-west-2 Region. The firm must be able to fail over to us-west-2 within a 15-minute recovery time objective (RTO). What recommendations should a solutions architect make to satisfy these requirements? Create a Multi-Region Aurora MySQL DB cluster in us-east-1 and us-west-2. Use an Amazon Route 53 health check to monitor us-east-1 and fail over to us-west-2 upon failure. Take a snapshot of the DB cluster in us-east-1. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function upon receipt of resource events. Configure the Lambda function to copy the snapshot to us-west-2 and restore the snapshot in us-west-2 when failure is detected. Create an AWS CloudFormation script to create another Aurora MySQL DB cluster in us-west-2 in case of failure. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function upon receipt of resource events. Configure the Lambda function to deploy the AWS CloudFormation stack in us-west-2 when failure is detected. Recreate the database as an Aurora global database with the primary DB cluster in us-east-1 and a secondary DB cluster in us-west-2. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function upon receipt of resource events. Configure the Lambda function to promote the DB cluster in us-west-2 when failure is detected.
A business is installing an application that handles near-real-time streaming data. The workload will be run on Amazon EC2 instances. The network architecture must be configured in such a way that the latency between nodes is as minimal as feasible. Which network solution combination will suit these requirements? (Select two.) Enable and configure enhanced networking on each EC2 instance. Group the EC2 instances in separate accounts. Run the EC2 instances in a cluster placement group. Attach multiple elastic network interfaces to each EC2 instance. Use Amazon Elastic Block Store (Amazon EBS) optimized instance types.
A business outsources its marketplace analytics management to a third-party partner. The vendor requires restricted programmatic access to resources in the company's account. All necessary policies have been established to ensure acceptable access. Which new component provides the vendor the most secure access to the account? Create an IAM user. Implement a service control policy (SCP). Use a cross-account role with an external ID. Configure a single sign-on (SSO) identity provider.
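For reference, the cross-account role option is typically consumed by the partner with an STS AssumeRole call that supplies the external ID; the role ARN, session name, and external ID below are placeholders.

    import boto3

    sts = boto3.client("sts")

    # The external ID guards against the confused-deputy problem when a
    # third party assumes a cross-account role.
    credentials = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/partner-analytics-role",
        RoleSessionName="partner-session",
        ExternalId="example-external-id",
    )["Credentials"]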
A business uses two Amazon EC2 instances to run a dynamic web application. The organization has its own SSL certificate, which is used to complete SSL termination on each instance. Recently, there has been an increase in traffic, and the operations team concluded that SSL encryption and decryption is causing the web servers' compute capacity to surpass its limit. What should a solutions architect do to optimize the performance of an application? Create a new SSL certificate using AWS Certificate Manager (ACM). Install the ACM certificate on each instance. Create an Amazon S3 bucket. Migrate the SSL certificate to the S3 bucket. Configure the EC2 instances to reference the bucket for SSL termination. Create another EC2 instance as a proxy server. Migrate the SSL certificate to the new instance and configure it to direct connections to the existing EC2 instances. Import the SSL certificate into AWS Certificate Manager (ACM). Create an Application Load Balancer with an HTTPS listener that uses the SSL certificate from ACM.
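The ACM-plus-ALB option terminates TLS at the load balancer instead of on the instances. A minimal sketch of the HTTPS listener, with placeholder ARNs:

    import boto3

    elbv2 = boto3.client("elbv2")

    # TLS terminates at the Application Load Balancer using an ACM
    # certificate, offloading the work from the web servers.
    elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web-alb/50dc6c495c0c9188",
        Protocol="HTTPS",
        Port=443,
        Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example"}],
        DefaultActions=[{"Type": "forward", "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/73e2d6bc24d8a067"}],
    )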
A corporation is evaluating an existing workload deployed on AWS using the AWS Well-Architected Framework. The evaluation discovered a public-facing website operating on the same Amazon EC2 instance as a recently installed Microsoft Active Directory domain controller that supports other AWS services. A solutions architect must offer a new design that increases the architecture's security and reduces the administrative burden on IT staff. What recommendations should the solutions architect make? Use AWS Directory Service to create a managed Active Directory. Uninstall Active Directory on the current EC2 instance. Create another EC2 instance in the same subnet and reinstall Active Directory on it. Uninstall Active Directory. Use AWS Directory Service to create an Active Directory connector. Proxy Active Directory requests to the Active Directory domain controller running on the current EC2 instance. Enable AWS Single Sign-On (AWS SSO) with Security Assertion Markup Language (SAML) 2.0 federation with the current Active Directory controller. Modify the EC2 instance's security group to deny public access to Active Directory.
Amazon Aurora was recently selected as the data repository for a company's worldwide ecommerce platform. When developers run extensive reports, they discover that the ecommerce application is performing badly. When monthly reports are performed, a solutions architect notices that the ReadIOPS and CPUUtilization metrics spike. Which approach is the MOST cost-effective? Migrate the monthly reporting to Amazon Redshift. Migrate the monthly reporting to an Aurora Replica. Migrate the Aurora database to a larger instance class. Increase the Provisioned IOPS on the Aurora instance.
A business's backup data totals 700 terabytes (TB) and is kept in network attached storage (NAS) at its data center. This backup data must be available in the event of occasional regulatory inquiries and preserved for a period of seven years. The organization has chosen to relocate its backup data from its on-premises data center to Amazon Web Services (AWS). Within one month, the migration must be completed. The company's public internet connection provides 500 Mbps of dedicated capacity for data transport. What should a solutions architect do to ensure that data is migrated and stored at the lowest possible cost? Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive. Deploy a VPN connection between the data center and Amazon VPC. Use the AWS CLI to copy the data from on premises to Amazon S3 Glacier. Provision a 500 Mbps AWS Direct Connect connection and transfer the data to Amazon S3. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive. Use AWS DataSync to transfer the data and deploy a DataSync agent on premises. Use the DataSync task to copy files from the on-premises NAS storage to Amazon S3 Glacier.
The main and secondary data centers of a business are located 500 miles (804.7 kilometers) apart and are linked through high-speed fiber-optic cable. For a mission-critical workload, the organization requires a highly available and secure network link between its data centers and an AWS VPC. A solutions architect must choose a connectivity solution that is as resilient as possible. Which solution satisfies these criteria? Two AWS Direct Connect connections from the primary data center terminating at two Direct Connect locations on two separate devices A single AWS Direct Connect connection from each of the primary and secondary data centers terminating at one Direct Connect location on the same device Two AWS Direct Connect connections from each of the primary and secondary data centers terminating at two Direct Connect locations on two separate devices A single AWS Direct Connect connection from each of the primary and secondary data centers terminating at one Direct Connect location on two separate devices.
A business hosts its dynamic website on premises in the United States. The firm is expanding into Europe and wants to reduce page load times for new European visitors. The website's backend must stay in the United States. The product launches in just a few days, and an immediate solution is required. What recommendations should the solutions architect make? Launch an Amazon EC2 instance in us-east-1 and migrate the site to it. Move the website to Amazon S3. Use cross-Region replication between Regions. Use Amazon CloudFront with a custom origin pointing to the on-premises servers. Use an Amazon Route 53 geo-proximity routing policy pointing to on-premises servers.
A corporation used an AWS Direct Connect connection to copy 1 PB of data from a colocation facility to an Amazon S3 bucket in the us-east-1 Region. The business now wishes to replicate the data in another S3 bucket located in the us-west-2 Region. AWS Snowball is not permitted at the colocation facility. What should a solutions architect suggest as a means of achieving this? Order a Snowball Edge device to copy the data from one Region to another Region. Transfer contents from the source S3 bucket to a target S3 bucket using the S3 console. Use the aws S3 sync command to copy data from the source bucket to the destination bucket. Add a cross-Region replication configuration to copy objects across S3 buckets in different Regions.
A solutions architect must ensure that API requests to Amazon DynamoDB from Amazon EC2 instances inside a VPC are not routed across the internet. What should the solutions architect do to accomplish this? (Select two.) Create a route table entry for the endpoint. Create a gateway endpoint for DynamoDB. Create a new DynamoDB table that uses the endpoint. Create an ENI for the endpoint in each of the subnets of the VPC. Create a security group entry in the default security group to provide access.
The application of a business is hosted on Amazon EC2 instances that are part of an Auto Scaling group behind an Elastic Load Balancer. Each year, the firm predicts a rise in traffic over a holiday, based on the application's history. A solutions architect must develop a plan to guarantee that the Auto Scaling group raises capacity proactively in order to minimize any effect on application users' performance. Which solution will satisfy these criteria? Create an Amazon CloudWatch alarm to scale up the EC2 instances when CPU utilization exceeds 90%. Create a recurring scheduled action to scale up the Auto Scaling group before the expected period of peak demand. Increase the minimum and maximum number of EC2 instances in the Auto Scaling group during the peak demand period. Configure an Amazon Simple Notification Service (Amazon SNS) notification to send alerts when there are autoscaling EC2_INSTANCE_LAUNCH events.
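The scheduled-action option can be expressed as a one-time (or recurring) scaling action created ahead of the expected peak; the group name, date, and capacities below are placeholders.

    import boto3
    from datetime import datetime

    autoscaling = boto3.client("autoscaling")

    # Capacity is raised ahead of the expected holiday peak; a second
    # action can lower it again afterward.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="web-asg",
        ScheduledActionName="holiday-scale-up",
        StartTime=datetime(2024, 11, 25, 0, 0),
        MinSize=10,
        MaxSize=40,
        DesiredCapacity=20,
    )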
A business wishes to migrate live datasets online from an on-premises NFS server to an Amazon S3 bucket called DOC-EXAMPLE-BUCKET. Verification of data integrity is essential both during and after the transmission. Additionally, the data must be encrypted. A solutions architect is migrating the data using an AWS solution. Which solution satisfies these criteria? AWS Storage Gateway file gateway S3 Transfer Acceleration AWS DataSync AWS Snowball Edge Storage Optimized.
A business utilizes Application Load Balancers (ALBs) across many AWS Regions. The ALBs experience fluctuating traffic throughout the year. The company's networking personnel must enable connectivity by allowing the ALBs' IP addresses through the on-premises firewall. Which solution is the MOST scalable and requires the least amount of setup changes? Write an AWS Lambda script to get the IP addresses of the ALBs in different Regions. Update the on-premises firewall's rule to allow the IP addresses of the ALBs. Migrate all ALBs in different Regions to Network Load Balancers (NLBs). Update the on-premises firewall's rule to allow the Elastic IP addresses of all the NLBs. Launch AWS Global Accelerator. Register the ALBs in different Regions to the accelerator. Update the on-premises firewall's rule to allow static IP addresses associated with the accelerator. Launch a Network Load Balancer (NLB) in one Region. Register the private IP addresses of the ALBs in different Regions with the NLB. Update the on-premises firewall's rule to allow the Elastic IP address attached to the NLB.
Each month, a business must create sales reports. On the first day of each month, the reporting procedure starts 20 Amazon EC2 instances. The procedure lasts seven days and cannot be paused. The corporation wishes to keep expenses low. Which pricing strategy should the business pursue? Reserved Instances Spot Block Instances On-Demand Instances Scheduled Reserved Instances.
A business offers an online service for uploading and transcoding video content for use on any mobile device. The application design uses Amazon Elastic File System (Amazon EFS) Standard to collect and store the videos so that they can be processed by numerous Amazon EC2 Linux instances. As the service's popularity has increased, the storage charges have become prohibitively costly. Which storage option is the cheapest? Use AWS Storage Gateway for files to store and process the video content. Use AWS Storage Gateway for volumes to store and process the video content. Use Amazon Elastic File System (Amazon EFS) for storing the video content. Once processing is complete, transfer the files to Amazon Elastic Block Store (Amazon EBS). Use Amazon S3 for storing the video content. Move the files temporarily over to an Amazon Elastic Block Store (Amazon EBS) volume attached to the server for processing.
Each day, a corporation gets ten terabytes of instrumentation data from many machines situated in a single plant. The data is saved in JSON files on a storage area network (SAN) inside the factory's on-premises data center. The organization want to upload this data to Amazon S3 so that it may be accessible by a number of other systems that do crucial near-real-time analytics. Because the data is deemed sensitive, a secure transmission is critical. Which option provides the most secure method of data transfer? AWS DataSync over public internet AWS DataSync over AWS Direct Connect AWS Database Migration Service (AWS DMS) over public internet AWS Database Migration Service (AWS DMS) over AWS Direct Connect.
A data science team needs storage to analyze logs on a nightly basis. The size and number of the logs are unknown; however, the logs will be retained for only 24 hours. Which approach is the most cost-effective? Amazon S3 Glacier Amazon S3 Standard Amazon S3 Intelligent-Tiering Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).
A development team is releasing a new product on AWS, and as part of the rollout, they are using AWS Lambda. For one of the Lambda functions, the team allocates 512 MB of RAM. With this memory allocation, the function completes in 2 minutes. The function is executed millions of times each month, and the development team is worried about the cost. The team runs experiments to determine the effect of various Lambda memory allocations on the function's cost. Which measures will result in a decrease in the product's Lambda costs? (Select two.) Increase the memory allocation for this Lambda function to 1,024 MB if this change causes the execution time of each function to be less than 1 minute. Increase the memory allocation for this Lambda function to 1,024 MB if this change causes the execution time of each function to be less than 90 seconds. Reduce the memory allocation for this Lambda function to 256 MB if this change causes the execution time of each function to be less than 4 minutes. Increase the memory allocation for this Lambda function to 2,048 MB if this change causes the execution time of each function to be less than 1 minute. Reduce the memory allocation for this Lambda function to 256 MB if this change causes the execution time of each function to be less than 5 minutes.
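The underlying arithmetic is GB-seconds: memory in GB multiplied by duration in seconds. A small worked example, assuming a compute price of $0.0000166667 per GB-second (verify against current Lambda pricing):

    # Rough cost comparison for one invocation.
    PRICE_PER_GB_SECOND = 0.0000166667  # assumed price, check current pricing

    def invocation_cost(memory_mb, duration_seconds):
        return (memory_mb / 1024) * duration_seconds * PRICE_PER_GB_SECOND

    # 512 MB for 120 s uses the same GB-seconds as 1,024 MB for 60 s, so
    # doubling memory only saves money if run time drops below half.
    print(invocation_cost(512, 120))   # baseline: 512 MB, 2 minutes
    print(invocation_cost(1024, 60))   # break-even point
    print(invocation_cost(1024, 55))   # cheaper than the baseline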
A business is using a centralized Amazon Web Services account to store log data in many Amazon S3 buckets. Prior to uploading data to S3 buckets, a solutions architect must guarantee that the data is encrypted at rest. Additionally, data must be encrypted during transit. Which solution satisfies these criteria? Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets. Use server-side encryption to encrypt the data that is being uploaded to the S3 buckets. Create bucket policies that require the use of server-side encryption with S3 managed encryption keys (SSE-S3) for S3 uploads. Enable the security option to encrypt the S3 buckets through the use of a default AWS Key Management Service (AWS KMS) key.
A business requires the migration of a Microsoft Windows-based application to AWS. This program utilizes a shared Windows file system that is tied to numerous Amazon EC2 Windows machines. What actions should a solutions architect take to achieve this? Configure a volume using Amazon Elastic File System (Amazon EFS). Mount the EFS volume to each Windows instance. Configure AWS Storage Gateway in Volume Gateway mode. Mount the volume to each Windows instance. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx volume to each Windows instance. Configure an Amazon Elastic Block Store (Amazon EBS) volume with the required size. Attach each EC2 instance to the volume. Mount the file system within the volume to each Windows instance.
An IAM group is associated with the following IAM policy. This is the group's sole policy. { "Version": "2012-10-17", "Statement": [ { "Sid": "1", "Effect": "Allow", "Action": "ec2:*", "Resource": "*", "Condition": { "StringEquals": { "ec2:Region": "us-east-1" } } }, { "Sid": "2", "Effect": "Deny", "Action": [ "ec2:StopInstances", "ec2:TerminateInstances" ], "Resource": "*", "Condition": { "BoolIfExists": { "aws:MultiFactorAuthPresent": false } } } ] } What are the policy's effective IAM permissions for group members? Group members are permitted any Amazon EC2 action within the us-east-1 Region. Statements after the Allow permission are not applied. Group members are denied any Amazon EC2 permissions in the us-east-1 Region unless they are logged in with multi-factor authentication (MFA). Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for all Regions when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action. Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for the us-east-1 Region only when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action within the us-east-1 Region.
A business is searching for a solution that would enable them to store video archives created from archived news footage on AWS. The business must keep expenses down and will seldom need to recover these data. When files are required, they must be provided within a five-minute window. Which approach is the most cost-effective? Store the video archives in Amazon S3 Glacier and use Expedited retrievals. Store the video archives in Amazon S3 Glacier and use Standard retrievals. Store the video archives in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Store the video archives in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).
A firm that now hosts a web application on-premises is ready to migrate to AWS and launch a newer version of the program. The organization must route requests depending on the URL query string to either the AWS- or on-premises-hosted application. The on-premises application is inaccessible over the internet, and a VPN connection between Amazon VPC and the company's data center is formed. The firm intends to deploy this application using an Application Load Balancer (ALB). Which solution satisfies these criteria? Use two ALBs: one for on-premises and one for the AWS resource. Add hosts to each target group of each ALB. Route with Amazon Route 53 based on the URL query string. Use two ALBs: one for on-premises and one for the AWS resource. Add hosts to the target group of each ALB. Create a software router on an EC2 instance based on the URL query string. Use one ALB with two target groups: one for the AWS resource and one for on premises. Add hosts to each target group of the ALB. Configure listener rules based on the URL query string. Use one ALB with two AWS Auto Scaling groups: one for the AWS resource and one for on premises. Add hosts to each Auto Scaling group. Route with Amazon Route 53 based on the URL query string.
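The single-ALB option uses listener rules that match on the query string. A sketch of such a rule, with placeholder ARNs and a hypothetical version=new query parameter:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Requests whose query string contains version=new are forwarded to the
    # AWS target group; everything else falls through to the listener's
    # default action (the on-premises target group).
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/web-alb/50dc6c495c0c9188/f2f7dc8efc522ab2",
        Priority=10,
        Conditions=[
            {
                "Field": "query-string",
                "QueryStringConfig": {"Values": [{"Key": "version", "Value": "new"}]},
            }
        ],
        Actions=[{"Type": "forward", "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/aws-app/73e2d6bc24d8a067"}],
    )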
A firm hosts many websites for various lines of business under its registered parent domain. Visitors to these websites are directed to the appropriate backend Amazon EC2 instance based on the subdomain. The websites host static webpages, images, and server-side scripts such as PHP and JavaScript. Certain websites see a spike in traffic during the first two hours of business, followed by consistent use throughout the remainder of the day. A solutions architect must design a solution that automatically adjusts capacity to these traffic patterns while remaining cost-effective. Which combination of AWS services or features will meet these requirements? (Select two.) AWS Batch Network Load Balancer Application Load Balancer Amazon EC2 Auto Scaling Amazon S3 website hosting.
A corporation has built a new video game as a web application. The application is deployed in a three-tier design using Amazon RDS for MySQL in a VPC. Multiple players will compete simultaneously online through the database layer. The makers of the game want to show a top-10 scoreboard in near-real time and to enable players to pause and resume the game while retaining their existing scores. What actions should a solutions architect take to ensure that these criteria are met? Set up an Amazon ElastiCache for Memcached cluster to cache the scores for the web application to display. Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web application to display. Place an Amazon CloudFront distribution in front of the web application to cache the scoreboard in a section of the application. Create a read replica on Amazon RDS for MySQL to run queries to compute the scoreboard and serve the read traffic to the web application.
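For context, a Redis sorted set is a common way to keep a near-real-time top-10 scoreboard; a minimal redis-py sketch with a hypothetical ElastiCache endpoint:

    import redis

    # Hypothetical ElastiCache for Redis endpoint; a sorted set keeps
    # scores ordered so the top 10 can be read cheaply.
    r = redis.Redis(host="leaderboard.example.cache.amazonaws.com", port=6379)

    r.zadd("scoreboard", {"player-123": 4200})                 # record or update a score
    top10 = r.zrevrange("scoreboard", 0, 9, withscores=True)   # highest 10 scores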
A business wishes to migrate its on-premises network attached storage (NAS) to Amazon Web Services (AWS). The corporation wishes to make the data accessible to any Linux instance inside its VPC and to guarantee that changes to the data store are immediately synced across all instances that use it. The bulk of data is viewed infrequently, whereas certain files are read concurrently by numerous people. Which option satisfies these criteria and is the most cost-effective? Create an Amazon Elastic Block Store (Amazon EBS) snapshot containing the data. Share it with users within the VPC. Create an Amazon S3 bucket that has a lifecycle policy set to transition the data to S3 Standard-Infrequent Access (S3 Standard-IA) after the appropriate number of days. Create an Amazon Elastic File System (Amazon EFS) file system within the VPC. Set the throughput mode to Provisioned and to the required amount of IOPS to support concurrent usage. Create an Amazon Elastic File System (Amazon EFS) file system within the VPC. Set the lifecycle policy to transition the data to EFS Infrequent Access (EFS IA) after the appropriate number of days.
For many years, a business has stored analytics data on an Amazon RDS instance. The firm hired a solutions architect to develop an API that would enable consumers to access this data. The program is expected to have periods of idleness but may get surges of traffic within seconds. Which option should the architect recommend? Set up an Amazon API Gateway and use Amazon ECS. Set up an Amazon API Gateway and use AWS Elastic Beanstalk. Set up an Amazon API Gateway and use AWS Lambda functions. Set up an Amazon API Gateway and use Amazon EC2 with Auto Scaling.
A mobile gaming startup uses Amazon EC2 instances to host application servers. Every 15 minutes, the servers get updates from players. The mobile game generates a JSON object containing the game's progress since the last update and delivers it to an Application Load Balancer. As the mobile game is played, game updates are being lost. The business intends to develop a durable method for older devices to get updates. What should a solutions architect propose to decouple the system? Use Amazon Kinesis Data Streams to capture the data and store the JSON object in Amazon S3. Use Amazon Kinesis Data Firehose to capture the data and store the JSON object in Amazon S3. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues to capture the data and EC2 instances to process the messages in the queue. Use Amazon Simple Notification Service (Amazon SNS) to capture the data and EC2 instances to process the messages sent to the Application Load Balancer.
A recent review of a company's IT spending demonstrates the critical necessity of lowering backup costs. The chief information officer of the organization want to simplify the on-premises backup architecture and cut expenses by phasing out physical backup tapes. The company's current investment in on-premises backup systems and procedures must be protected. What recommendations should a solutions architect make? Set up AWS Storage Gateway to connect with the backup applications using the NFS interface. Set up an Amazon Elastic File System (Amazon EFS) file system that connects with the backup applications using the NFS interface. Set up an Amazon Elastic File System (Amazon EFS) file system that connects with the backup applications using the iSCSI interface. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual tape library (VTL) interface.
A business maintains an internal web-based application. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances are distributed across several Availability Zones through an Amazon EC2 Auto Scaling group. During business hours, the Auto Scaling group scales up to 20 instances, then scales down to two instances overnight. Staff report that the application is very sluggish at the start of the day but performs fine by mid-morning. How should the scaling be adjusted to address the staff's concerns while keeping costs low? Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens. Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period. Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period. Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens.
A business must adhere to a regulatory obligation that all emails be saved and preserved outside for a period of seven years. An administrator has prepared compressed email files on-premises and wishes to have the data transferred to AWS storage through a managed service. Which managed service should be recommended by a solutions architect? Amazon Elastic File System (Amazon EFS) Amazon S3 Glacier AWS Backup AWS Storage Gateway.
A business hosts its website on Amazon EC2 instances that are distributed across several Availability Zones through an Elastic Load Balancer. The instances are managed as part of an EC2 Auto Scaling group. The website stores product manuals for download through Amazon Elastic Block Store (Amazon EBS) volumes. The organization often changes the product information, which means that new instances created by the Auto Scaling group frequently have out-of-date data. It may take up to 30 minutes for all changes to be received by new instances. Additionally, the changes involve resizing the EBS volumes during business hours. The corporation wants to guarantee that product manuals are always current and that the architecture adapts quickly to rising customer demand. A solutions architect must satisfy these objectives without requiring the business to upgrade its application code or website. What actions should the solutions architect take to achieve this objective? Store the product manuals in an EBS volume. Mount that volume to the EC2 instances. Store the product manuals in an Amazon S3 bucket. Redirect the downloads to this bucket. Store the product manuals in an Amazon Elastic File System (Amazon EFS) volume. Mount that volume to the EC2 instances. Store the product manuals in an Amazon S3 Standard-Infrequent Access (S3 Standard-IA) bucket. Redirect the downloads to this bucket.
On Amazon EC2, a business hosts an ecommerce application. The application is composed of a stateless web layer that needs a minimum of 10 instances and a maximum of 250 instances to run. 80% of the time, the program needs 50 instances. Which solution should be adopted in order to keep expenses down? Purchase Reserved Instances to cover 250 instances. Purchase Reserved Instances to cover 80 instances. Use Spot Instances to cover the remaining instances. Purchase On-Demand Instances to cover 40 instances. Use Spot Instances to cover the remaining instances. Purchase Reserved Instances to cover 50 instances. Use On-Demand and Spot Instances to cover the remaining instances.
A business intends to install an Amazon RDS database instance powered by Amazon Aurora. The organization has a 90-day backup retention policy. Which solution, if any, should a solutions architect suggest? Set the backup retention period to 90 days when creating the RDS DB instance. Configure RDS to copy automated snapshots to a user-managed Amazon S3 bucket with a lifecycle policy set to delete after 90 days. Create an AWS Backup plan to perform a daily snapshot of the RDS database with the retention set to 90 days. Create an AWS Backup job to schedule the execution of the backup plan daily. Use a daily scheduled event with Amazon CloudWatch Events to execute a custom AWS Lambda function that makes a copy of the RDS automated snapshot. Purge snapshots older than 90 days.
A solutions architect is configuring a virtual private cloud (VPC) with public and private subnets. The VPC and subnets are configured using IPv4 CIDR blocks. Each of the three Availability Zones (AZs) has one public and one private subnet. An internet gateway is used to connect public subnets to the internet. Private subnets must have internet connectivity in order for Amazon EC2 instances to obtain software upgrades. What should the solutions architect do to allow private subnets to connect to the internet? Create three NAT gateways, one for each public subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT gateway in its AZ. Create three NAT instances, one for each private subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT instance in its AZ. Create a second internet gateway on one of the private subnets. Update the route table for the private subnets that forward non-VPC traffic to the private internet gateway. Create an egress-only internet gateway on one of the public subnets. Update the route table for the private subnets that forward non-VPC traffic to the egress-only internet gateway.
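Several of the options pair NAT devices with per-AZ route tables. An illustrative boto3 sketch for a single AZ (repeated once per AZ); every resource ID is a placeholder:

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create a NAT gateway in this AZ's public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0publicA",                       # public subnet in this AZ (placeholder)
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# In the private route table for the same AZ, send internet-bound traffic to that NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0privateA",                     # private route table for this AZ (placeholder)
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)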
A corporation currently has 250 TB of backup data stored in Amazon S3 in a vendor-specific format. The firm wishes to extract the files from Amazon S3, convert them to an industry-standard format, and then re-upload them to Amazon S3. The firm wants to minimize the data transfer costs associated with this conversion. What should a solutions architect do to achieve this? Install the conversion software as an Amazon S3 batch operation so the data is transformed without leaving Amazon S3. Install the conversion software onto an on-premises virtual machine. Perform the transformation and re-upload the files to Amazon S3 from the virtual machine. Use AWS Snowball Edge devices to export the data and install the conversion software onto the devices. Perform the data transformation and re-upload the files to Amazon S3 from the Snowball Edge devices. Launch an Amazon EC2 instance in the same Region as Amazon S3 and install the conversion software onto the instance. Perform the transformation and re-upload the files to Amazon S3 from the EC2 instance.
A business's applications are hosted on on-premises servers, and the company is rapidly running out of storage capacity. The applications use both block storage and network file storage. The business needs a high-performance solution that supports local caching without requiring it to re-architect its existing applications. Which combination of steps should a solutions architect take to satisfy these requirements? (Select two.) Mount Amazon S3 as a file system to the on-premises servers. Deploy an AWS Storage Gateway file gateway to replace NFS storage. Deploy AWS Snowball Edge to provision NFS mounts to on-premises servers. Deploy an AWS Storage Gateway volume gateway to replace the block storage. Deploy Amazon Elastic File System (Amazon EFS) volumes and mount them to on-premises servers.
A business hosts a web service on Amazon EC2 instances behind an Application Load Balancer. The instances are distributed across two Availability Zones by an Amazon EC2 Auto Scaling group. The corporation requires a minimum of four instances at all times to meet its service level agreement (SLA) while keeping costs low. How can the organization remain compliant with the SLA if an Availability Zone fails? Add a target tracking scaling policy with a short cooldown period. Change the Auto Scaling group launch configuration to use a larger instance type. Change the Auto Scaling group to use six servers across three Availability Zones. Change the Auto Scaling group to use eight servers across two Availability Zones.
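Two of the options redistribute capacity so that losing one AZ still leaves at least four instances (for example, six instances balanced two per AZ across three AZs). A minimal boto3 sketch of the six-across-three-AZs variant; the group name and subnet IDs are assumptions, and each subnet is assumed to sit in a different AZ:

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-service-asg",           # placeholder group name
    MinSize=6,
    DesiredCapacity=6,
    # One subnet per AZ so the six instances spread two per AZ.
    VPCZoneIdentifier="subnet-0azA,subnet-0azB,subnet-0azC",
)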
A firm runs a three-tier ecommerce application in the AWS Cloud. The firm hosts the website on Amazon S3 and integrates it with a sales API. The firm hosts the API on three Amazon EC2 instances behind an Application Load Balancer (ALB). The API is composed of static and dynamic front-end content, as well as back-end workers that process sales requests asynchronously. The corporation anticipates a large and sudden surge in sales requests during events celebrating the launch of new products. What should a solutions architect recommend to ensure that all requests are processed effectively? Add an Amazon CloudFront distribution for the dynamic content. Increase the number of EC2 instances to handle the increase in traffic. Add an Amazon CloudFront distribution for the static content. Place the EC2 instances in an Auto Scaling group to launch new instances based on network traffic. Add an Amazon CloudFront distribution for the dynamic content. Add an Amazon ElastiCache instance in front of the ALB to reduce traffic for the API to handle. Add an Amazon CloudFront distribution for the static content. Add an Amazon Simple Queue Service (Amazon SQS) queue to receive requests from the website for later processing by the EC2 instances.
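One option decouples the surge with an Amazon SQS queue: the front end enqueues each sales request, and the EC2 back-end workers poll, process, and delete messages at their own pace. A hedged boto3 sketch of that pattern; the queue name and message fields are assumptions:

import json
import boto3

sqs = boto3.client("sqs")

queue_url = sqs.create_queue(QueueName="sales-requests")["QueueUrl"]

# Front end: enqueue a sales request instead of processing it synchronously.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"order_id": "12345", "sku": "NEW-PRODUCT", "qty": 1}),
)

# Back-end worker: long-poll for messages, process each one, then delete it.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for message in resp.get("Messages", []):
    order = json.loads(message["Body"])               # hand the order to the existing sales logic here
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])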
An application runs on Amazon EC2 instances. The application's sensitive data is stored in an Amazon S3 bucket. The bucket must be shielded from internet access while still allowing services within the VPC to access it. Which combination of actions should the solutions architect take to accomplish this? (Select two.) Create a VPC endpoint for Amazon S3. Enable server access logging on the bucket. Apply a bucket policy to restrict access to the S3 endpoint. Add an S3 ACL to the bucket that has sensitive information. Restrict users using the IAM policy to use the specific bucket.
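Two of the options combine a gateway VPC endpoint for S3 with a bucket policy that limits access to that endpoint. A hedged boto3 sketch of that combination; the VPC ID, route table ID, Region, and bucket name are placeholders:

import json
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Gateway endpoint so VPC traffic reaches S3 without traversing the internet.
endpoint = ec2.create_vpc_endpoint(
    VpcId="vpc-0example",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0private"],
)
vpce_id = endpoint["VpcEndpoint"]["VpcEndpointId"]

# Bucket policy: deny any request that does not arrive through that endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessOutsideVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-sensitive-bucket",
                "arn:aws:s3:::example-sensitive-bucket/*",
            ],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": vpce_id}},
        }
    ],
}
s3.put_bucket_policy(Bucket="example-sensitive-bucket", Policy=json.dumps(policy))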
A business's application generates a vast number of files, each around 5 MB in size, which are stored in Amazon S3. According to company policy, the files must be retained for four years before they may be deleted. Immediate access is always essential because the files contain vital business data that is difficult to reproduce. The files are commonly accessed within the first 30 days after object creation but are rarely accessed after that period. Which storage option is the most cost-effective? Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Glacier 30 days from object creation. Delete the files 4 years after object creation. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days from object creation. Delete the files 4 years after object creation. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object creation. Delete the files 4 years after object creation. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object creation. Move the files to S3 Glacier 4 years after object creation.
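For orientation, a minimal boto3 sketch of one lifecycle variant described in the options: transition to S3 Standard-IA after 30 days and expire after 4 years (here approximated as 1460 days). The bucket name and rule ID are assumptions:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-business-files",                  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "ia-after-30-days-expire-after-4-years",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},             # apply to every object in the bucket
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"}
                ],
                "Expiration": {"Days": 1460},         # roughly 4 years after object creation
            }
        ]
    },
)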
A business has an Amazon S3 bucket that contains mission-critical data. The firm wants to protect this data against accidental deletion. The data should remain available, and authorized users should still be able to delete it intentionally. Which combination of actions should a solutions architect take to accomplish this? (Select two.) Enable versioning on the S3 bucket. Enable MFA Delete on the S3 bucket. Create a bucket policy on the S3 bucket. Enable default encryption on the S3 bucket. Create a lifecycle policy for the objects in the S3 bucket.
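Two of the options involve versioning and MFA Delete. An illustrative boto3 sketch with an assumed bucket name; note that MFA Delete can only be changed by the root user presenting an MFA device, so the second call shows the shape of the request rather than something most roles can run:

import boto3

s3 = boto3.client("s3")

# Versioning keeps prior versions, so an accidental delete only adds a delete marker.
s3.put_bucket_versioning(
    Bucket="example-critical-data",
    VersioningConfiguration={"Status": "Enabled"},
)

# MFA Delete additionally requires an MFA code to permanently delete a version.
s3.put_bucket_versioning(
    Bucket="example-critical-data",
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",  # device ARN + current code (placeholders)
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)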