
DIVA-001-300

Title of test:
DIVA-001-300

Description:
DIVA-001-300

Author:
DIVA

Creation Date:
07/04/2022

Category:
Logical

Number of questions: 300
Content:
A firm is developing a web application on AWS using containers. The organization needs three instances of the web application running at any given moment. The application must be scalable to keep up with increases in demand. Management is cost-conscious but agrees that the application should be highly available. What should a solutions architect recommend?
Add an execution role to the function with lambda:InvokeFunction as the action and * as the principal.
Add an execution role to the function with lambda:InvokeFunction as the action and Service:amazonaws.com as the principal.
Add a resource-based policy to the function with lambda:* as the action and Service:events.amazonaws.com as the principal.
Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service:events.amazonaws.com as the principal.
A business outsources its marketplace analytics management to a third-party partner. The vendor requires restricted programmatic access to the resources in the company's account. All necessary policies have been established to ensure acceptable access. Which new component provides the vendor the MOST SECURE access to the account?
Stop the instance outside the application's availability window. Start up the instance again when required.
Hibernate the instance outside the application's availability window. Start up the instance again when required.
Use Auto Scaling to scale down the instance outside the application's availability window. Scale up the instance when required.
Terminate the instance outside the application's availability window. Launch the instance by using a preconfigured Amazon Machine Image (AMI) when required.
A firm seeks to migrate its accounting system from an on-premises data center to an Amazon Web Services (AWS) Region. Data security and an unalterable audit log should be prioritized. All AWS activities must be subjected to compliance audits. Although the business has enabled AWS CloudTrail, it wants to guarantee that it meets these requirements. What precautions and security procedures should a solutions architect include to protect and secure CloudTrail? (Choose two.)
Create a second S3 bucket in us-east-1. Enable S3 Cross-Region Replication from the existing S3 bucket to the second S3 bucket.
Create a cross-origin resource sharing (CORS) configuration of the existing S3 bucket. Specify us-east-1 in the CORS rule's AllowedOrigin element.
Create a second S3 bucket in us-east-1 across multiple Availability Zones. Create an S3 Lifecycle management rule to save photos into the second S3 bucket.
Create a second S3 bucket in us-east-1 to store the replicated photos. Configure S3 event notifications on object creation and update events that invoke an AWS Lambda function to copy photos from the existing S3 bucket to the second S3 bucket.
A firm maintains a searchable inventory of items on its website. The data is stored in an Amazon RDS for MySQL database in a table with over ten million entries. The database is kept on a two-terabyte (TB) General Purpose Solid State Drive (gp2) array. The company's website receives millions of updates to this data each day. The business discovered that some tasks took 10 seconds or longer and determined that the bottleneck was the database storage performance. Which of the following options meets the performance requirement?
Configure a VPC endpoint for Amazon S3. Add an entry to the private subnet's route table for the S3 endpoint.
Configure a NAT gateway in a public subnet. Configure the private subnet's route table to use the NAT gateway.
Configure Amazon S3 as a file system mount point on the EC2 instances. Access Amazon S3 through the mount.
Move the EC2 instances into a public subnet. Configure the public subnet route table to point to an internet gateway.
A business that is currently hosting a web application on-premises is prepared to transition to AWS and launch a newer version of the application. The organization must route requests to the AWS or on-premises application based on the URL query string. The on-premises application is not reachable over the internet, and a VPN connection is established between the Amazon VPC and the business's data center. The company wishes to deploy this application by using an Application Load Balancer (ALB). Which of the following solutions meets these criteria?
Use AWS Snowball Edge devices to process and store the images.
Upload the images to Amazon Simple Queue Service (Amazon SQS) during intermittent connectivity to EC2 instances.
Configure Amazon Kinesis Data Firehose to create multiple delivery streams aimed separately at the S3 buckets for storage and the EC2 instances for processing the images.
Use AWS Storage Gateway pre-installed on a hardware appliance to cache the images locally for Amazon S3 to process the images when connectivity becomes available.
A meteorological start-up company has created a custom web application to sell weather data to its members online. The company currently uses Amazon DynamoDB to store its data and wishes to establish a new service that alerts the managers of four internal teams whenever a new weather event is recorded. The business does not want this new service to impair the operation of the present application. What steps should a solutions architect take to guarantee that these objectives are satisfied with the MINIMUM feasible operational overhead?
Create a DynamoDB table in on-demand capacity mode.
Create a DynamoDB table with a global secondary index.
Create a DynamoDB table with provisioned capacity and auto scaling.
Create a DynamoDB table in provisioned capacity mode, and configure it as a global table.
A corporation uses an AWS application to offer content to its subscribers worldwide. Numerous Amazon EC2 instances are deployed on a private subnet behind an Application Load Balancer (ALB) for the application. The chief information officer (CIO) wishes to limit access from some countries due to a recent change in copyright regulations. Which course of action will satisfy these criteria?
Modify the ALB security group to deny incoming traffic from blocked countries.
Modify the security group for EC2 instances to deny incoming traffic from blocked countries.
Use Amazon CloudFront to serve the application and deny access to blocked countries.
Use ALB listener rules to return access denied responses to incoming traffic from blocked countries.
Using seven Amazon EC2 instances, a business runs its web application on AWS. The organization requires that DNS queries return the IP addresses of all healthy EC2 instances. Which routing policy should be employed to comply with this stipulation?
Simple routing policy
Latency routing policy
Multivalue answer routing policy
Geolocation routing policy.
Each day, a corporation collects data from millions of consumers, totalling around 1'. The firm provides its customers with usage records covering the last 12 months. To meet regulatory and auditing standards, all usage data must be retained for at least five years. Which storage option is the LEAST expensive?
Store the data in Amazon S3 Standard. Set a lifecycle rule to transition the data to S3 Glacier Deep Archive after 1 year. Set a lifecycle rule to delete the data after 5 years.
Store the data in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA). Set a lifecycle rule to transition the data to S3 Glacier after 1 year. Set the lifecycle rule to delete the data after 5 years.
Store the data in Amazon S3 Standard. Set a lifecycle rule to transition the data to S3 Standard-Infrequent Access (S3 Standard-IA) after 1 year. Set a lifecycle rule to delete the data after 5 years.
Store the data in Amazon S3 Standard. Set a lifecycle rule to transition the data to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 year. Set a lifecycle rule to delete the data after 5 years.
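The lifecycle rule described in the first option can be sketched as the configuration dictionary that boto3's put_bucket_lifecycle_configuration accepts. The bucket name and rule ID here are illustrative placeholders, not values from the question:

```python
# Hypothetical S3 lifecycle configuration matching the first option:
# transition to Glacier Deep Archive after 1 year, delete after 5 years.
lifecycle_config = {
    "Rules": [
        {
            "ID": "usage-data-retention",      # illustrative rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},          # apply to all objects
            "Transitions": [
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"}
            ],
            "Expiration": {"Days": 1825},      # 5 years
        }
    ]
}

# With boto3 this would be applied as (not executed here):
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="usage-records",                 # hypothetical bucket
#     LifecycleConfiguration=lifecycle_config,
# )

rule = lifecycle_config["Rules"][0]
print(rule["Transitions"][0]["StorageClass"], rule["Expiration"]["Days"])
```

Keeping the data in S3 Standard for its active first year avoids the early-deletion and retrieval charges of the infrequent-access tiers while it is still being served to customers.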
A business uses an Amazon RDS for PostgreSQL database instance to manage a fleet of web servers. Following a normal compliance review, the corporation establishes a standard requiring all production databases to have a recovery point objective (RPO) of less than one second. Which solution satisfies these criteria?
Enable a Multi-AZ deployment for the DB instance.
Enable auto scaling for the DB instance in one Availability Zone.
Configure the DB instance in one Availability Zone, and create multiple read replicas in a separate Availability Zone.
Configure the DB instance in one Availability Zone, and configure AWS Database Migration Service (AWS DMS) change data capture (CDC) tasks.
On Amazon EC2 instances, a business is developing an application that creates transitory transactional data. The application requires data storage that can deliver adjustable and consistent IOPS. What recommendations should a solutions architect make?
Provision an EC2 instance with a Throughput Optimized HDD (st1) root volume and a Cold HDD (sc1) data volume.
Provision an EC2 instance with a Throughput Optimized HDD (st1) volume that will serve as the root and data volume.
Provision an EC2 instance with a General Purpose SSD (gp2) root volume and a Provisioned IOPS SSD (io1) data volume.
Provision an EC2 instance with a General Purpose SSD (gp2) root volume. Configure the application to store its data in an Amazon S3 bucket.
Prior to implementing a new workload, a solutions architect must examine and update the company's current IAM rules. The following policy was written by the solutions architect:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "NotAction": "s3:PutObject",
    "Resource": "*",
    "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}}
  }]
}
What is the policy's net effect?
Users will be allowed all actions except s3:PutObject if multi-factor authentication (MFA) is enabled.
Users will be allowed all actions except s3:PutObject if multi-factor authentication (MFA) is not enabled.
Users will be denied all actions except s3:PutObject if multi-factor authentication (MFA) is enabled.
Users will be denied all actions except s3:PutObject if multi-factor authentication (MFA) is not enabled.
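The net effect can be reasoned out mechanically: the statement is a Deny, NotAction makes it apply to every action except s3:PutObject, and BoolIfExists matches when the MFA key is absent or "false". A toy simulation of that logic for this single statement (not the real IAM evaluation engine):

```python
# Illustrative simulation of the Deny/NotAction/BoolIfExists statement above.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "NotAction": "s3:PutObject",
        "Resource": "*",
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}

def is_denied(action: str, mfa_present: bool) -> bool:
    stmt = policy["Statement"][0]
    # BoolIfExists matches when the key is absent OR its value is "false";
    # we model both cases simply as "MFA not present".
    condition_matches = not mfa_present
    # NotAction: the statement applies to every action EXCEPT the listed one.
    action_matches = action != stmt["NotAction"]
    return stmt["Effect"] == "Deny" and action_matches and condition_matches

print(is_denied("s3:GetObject", mfa_present=False))   # True: denied without MFA
print(is_denied("s3:PutObject", mfa_present=False))   # False: NotAction exempts it
print(is_denied("s3:GetObject", mfa_present=True))    # False: condition fails with MFA
```

So the statement denies all actions except s3:PutObject whenever MFA is not present, which matches the last answer option.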
To allow near-real-time processing, a web application must persist order data to Amazon S3. A solutions architect must design a scalable and fault-tolerant architecture. Which solutions satisfy these criteria? (Select two.)
Write the order event to an Amazon DynamoDB table. Use DynamoDB Streams to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
Write the order event to an Amazon Simple Queue Service (Amazon SQS) queue. Use the queue to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
Write the order event to an Amazon Simple Notification Service (Amazon SNS) topic. Use the SNS topic to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
Write the order event to an Amazon Simple Queue Service (Amazon SQS) queue. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
Write the order event to an Amazon Simple Notification Service (Amazon SNS) topic. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to trigger an AWS Lambda function that parses the payload and writes the data to Amazon S3.
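The queue-to-Lambda pattern in these options can be sketched as a handler that parses the SQS event payload and builds the S3 put parameters. The bucket name, key layout, and order fields are illustrative assumptions; a real handler would also call boto3's s3.put_object with each parameter set:

```python
import json

# Minimal sketch of an SQS-triggered Lambda handler: parse each record's
# body and produce the parameters for an S3 put. Names are hypothetical.
def handler(event, context=None):
    puts = []
    for record in event["Records"]:
        order = json.loads(record["body"])
        puts.append({
            "Bucket": "order-events",                    # hypothetical bucket
            "Key": f"orders/{order['order_id']}.json",   # hypothetical key scheme
            "Body": json.dumps(order),
        })
    return puts

# A sample event shaped like an SQS Lambda trigger payload.
sample_event = {"Records": [{"body": json.dumps({"order_id": "123", "total": 42})}]}
print(handler(sample_event)[0]["Key"])
```

Both the direct SQS trigger and the SNS trigger deliver the event to a handler of this shape; only the envelope around the message body differs.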
A business in the us-east-1 Region offers a picture hosting service. Users from many countries may upload and browse images using the program. Some photographs receive a high volume of views over months, while others receive a low volume of views for less than a week. The program supports picture uploads of up to 20 MB in size. The service determines which photographs to show to each user based on the photo metadata. Which option delivers access to the suitable users MOST cost-effectively?
Store the photos in Amazon DynamoDB. Turn on DynamoDB Accelerator (DAX) to cache frequently viewed items.
Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3 location in DynamoDB.
Store the photos in the Amazon S3 Standard storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Use the object tags to keep track of metadata.
Store the photos in the Amazon S3 Glacier storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Glacier Deep Archive storage class. Store the photo metadata and its S3 location in Amazon Elasticsearch Service (Amazon ES).
A business is creating a website that will store static photos in an Amazon S3 bucket. The company's goal is to reduce both latency and cost for all future requests. How should a solutions architect propose a service configuration?
Deploy a NAT server in front of Amazon S3.
Deploy Amazon CloudFront in front of Amazon S3.
Deploy a Network Load Balancer in front of Amazon S3.
Configure Auto Scaling to automatically adjust the capacity of the website.
For the database layer of its ecommerce website, a firm uses Amazon DynamoDB with provisioned throughput. During flash sales, clients may encounter periods of delay when the database is unable to manage the volume of transactions. As a result, the business loses transactions. The database operates normally during regular times. Which approach resolves the company's performance issue?
Switch DynamoDB to on-demand mode during flash sales.
Implement DynamoDB Accelerator for fast in-memory performance.
Use Amazon Kinesis to queue transactions for processing to DynamoDB.
Use Amazon Simple Queue Service (Amazon SQS) to queue transactions to DynamoDB.
A significant media corporation uses AWS to host a web application. The corporation intends to begin caching secret media files in order to provide dependable access to them to consumers worldwide. Amazon S3 buckets are used to store the material. The organization must supply material rapidly, regardless of the origin of the requests. Which solution will satisfy these criteria?
Use AWS DataSync to connect the S3 buckets to the web application.
Deploy AWS Global Accelerator to connect the S3 buckets to the web application.
Deploy Amazon CloudFront to connect the S3 buckets to CloudFront edge servers.
Use Amazon Simple Queue Service (Amazon SQS) to connect the S3 buckets to the web application.
In the AWS Cloud, a web application is deployed. It is a two-tier design comprised of a web and database layer. Cross-site scripting (XSS) attacks are possible on the web server. What is the best course of action for a solutions architect to take to address the vulnerability?
Create a Classic Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.
Create a Network Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.
Create an Application Load Balancer. Put the web layer behind the load balancer and enable AWS WAF.
Create an Application Load Balancer. Put the web layer behind the load balancer and use AWS Shield Standard.
On its website, a business keeps a searchable store of items. The data is stored in a table with over ten million rows in an Amazon RDS for MySQL database. The database is stored on a 2 TB General Purpose SSD (gp2) array. Every day, the company's website receives millions of changes to this data. The organization found that certain activities were taking ten seconds or more and concluded that the bottleneck was the database storage performance. Which option satisfies the performance requirement?
Change the storage type to Provisioned IOPS SSD (io1).
Change the instance to a memory-optimized instance class.
Change the instance to a burstable performance DB instance class.
Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication.
A business is prepared to use Amazon S3 to store sensitive data. Data must be encrypted at rest for compliance purposes. Auditing of encryption key use is required. Each year, keys must be rotated. Which solution satisfies these parameters and is the MOST operationally efficient?
Server-side encryption with customer-provided keys (SSE-C)
Server-side encryption with Amazon S3 managed keys (SSE-S3)
Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with manual rotation
Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with automatic rotation.
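The SSE-KMS option can be sketched as the bucket default-encryption setting boto3's put_bucket_encryption accepts. The key ARN is a placeholder; annual automatic rotation is enabled on the KMS key itself (via enable_key_rotation), not in this S3 setting, and key use is then auditable through AWS CloudTrail:

```python
# Hypothetical default-encryption configuration for SSE-KMS with a
# customer managed key. The ARN below is an illustrative placeholder.
encryption_config = {
    "Rules": [{
        "ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms",
            "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
        }
    }]
}

# Applied with (not executed here):
# s3.put_bucket_encryption(Bucket="sensitive-data",  # hypothetical bucket
#                          ServerSideEncryptionConfiguration=encryption_config)
# kms.enable_key_rotation(KeyId="EXAMPLE")           # yearly automatic rotation

print(encryption_config["Rules"][0]["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"])
```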
Management needs a summary of AWS billed items broken down by user as part of the budget planning process. Budgets for departments will be created using the data. A solutions architect must ascertain the most effective method of obtaining this report data. Which solution satisfies these criteria?
Run a query with Amazon Athena to generate the report.
Create a report in Cost Explorer and download the report.
Access the bill details from the billing dashboard and download the bill.
Modify a cost budget in AWS Budgets to alert with Amazon Simple Email Service (Amazon SES).
A solutions architect must create a system for archiving client case files. The files are critical corporate assets. The file count will increase over time. Multiple application servers running on Amazon EC2 instances must be able to access the files concurrently. There must be built-in redundancy in the solution. Which solution satisfies these criteria?
Amazon Elastic File System (Amazon EFS)
Amazon Elastic Block Store (Amazon EBS)
Amazon S3 Glacier Deep Archive
AWS Backup.
A business must give secure access to secret and sensitive data to its workers. The firm wants to guarantee that only authorized individuals have access to the data. The data must be safely downloaded to the workers' devices. The files are kept on a Windows file server on-premises. However, as remote traffic increases, the file server's capacity is being depleted. Which solution will satisfy these criteria?
Migrate the file server to an Amazon EC2 instance in a public subnet. Configure the security group to limit inbound traffic to the employees' IP addresses.
Migrate the files to an Amazon FSx for Windows File Server file system. Integrate the Amazon FSx file system with the on-premises Active Directory. Configure AWS Client VPN.
Migrate the files to Amazon S3, and create a private VPC endpoint. Create a signed URL to allow download.
Migrate the files to Amazon S3, and create a public VPC endpoint. Allow employees to sign on with AWS Single Sign-On.
A legal company must communicate with the public. Hundreds of files must be publicly accessible. No one may modify or delete the files before a specified future date. Which solution satisfies these criteria in the MOST secure way?
Upload all files to an Amazon S3 bucket that is configured for static website hosting. Grant read-only IAM permissions to any AWS principals that access the S3 bucket until the designated date.
Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a retention period in accordance with the designated date. Configure the S3 bucket for static website hosting. Set an S3 bucket policy to allow read-only access to the objects.
Create a new Amazon S3 bucket with S3 Versioning enabled. Configure an event trigger to run an AWS Lambda function in case of object modification or deletion. Configure the Lambda function to replace the objects with the original versions from a private S3 bucket.
Upload all files to an Amazon S3 bucket that is configured for static website hosting. Select the folder that contains the files. Use S3 Object Lock with a retention period in accordance with the designated date. Grant read-only IAM permissions to any AWS principals that access the S3 bucket.
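The Object Lock retention in the second option can be sketched as the per-object setting that would be passed to boto3's put_object_retention. The retain-until date is an illustrative placeholder; the bucket must be created with Object Lock (and therefore Versioning) enabled:

```python
from datetime import datetime, timezone

# Hypothetical Object Lock retention in compliance mode: no principal,
# including the root user, can shorten the period or delete the object
# version before the retain-until date.
retention = {
    "Mode": "COMPLIANCE",
    "RetainUntilDate": datetime(2030, 1, 1, tzinfo=timezone.utc),  # placeholder date
}

# Applied per object with (not executed here):
# s3.put_object_retention(Bucket="public-legal-files",  # hypothetical bucket
#                         Key="notice.pdf",             # hypothetical key
#                         Retention=retention)

print(retention["Mode"])
```

Governance mode would allow users with special permissions to override the lock, which is why compliance mode is the safer fit for a hard legal deadline.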
A corporation connects its on-premises servers to AWS through a 10 Gbps AWS Direct Connect connection. The workloads on the connection are crucial. The organization needs a disaster recovery strategy that is as resilient as possible while minimizing use of the existing connection's bandwidth. What recommendations should a solutions architect make?
Set up a new Direct Connect connection in another AWS Region.
Set up a new AWS managed VPN connection in another AWS Region.
Set up two new Direct Connect connections: one in the current AWS Region and one in another Region.
Set up two new AWS managed VPN connections: one in the current AWS Region and one in another Region.
A business has two virtual private clouds (VPCs) labeled Management and Production. The Management VPC connects to a single device in the data center using VPNs via a customer gateway. The Production VPC is connected to AWS through two AWS Direct Connect connections via a virtual private gateway. Both the Management and Production VPCs communicate with one another through a single VPC peering connection. What should a solutions architect do to minimize the architecture's single point of failure?
Add a set of VPNs between the Management and Production VPCs.
Add a second virtual private gateway and attach it to the Management VPC.
Add a second set of VPNs to the Management VPC from a second customer gateway device.
Add a second VPC peering connection between the Management VPC and the Production VPC.
AWS hosts a company's near-real-time streaming application. While the data is being ingested, a job is performed on it that takes 30 minutes to finish. Due to the massive volume of incoming data, the workload regularly faces significant latency. To optimize performance, a solutions architect must build a scalable and serverless system. Which actions should the solutions architect take in combination? (Select two.)
Use Amazon Kinesis Data Firehose to ingest the data.
Use AWS Lambda with AWS Step Functions to process the data.
Use AWS Database Migration Service (AWS DMS) to ingest the data.
Use Amazon EC2 instances in an Auto Scaling group to process the data.
Use AWS Fargate with Amazon Elastic Container Service (Amazon ECS) to process the data.
Amazon Elastic Block Store (Amazon EBS) volumes are used by a media organization to store video content. A certain video file has gained popularity, and a significant number of individuals from all over the globe are now viewing it. As a consequence, costs have increased. Which step will result in a cost reduction without jeopardizing user accessibility?
Change the EBS volume to Provisioned IOPS (PIOPS).
Store the video in an Amazon S3 bucket and create an Amazon CloudFront distribution.
Split the video into multiple, smaller segments so users are routed to the requested video segments only.
Create an Amazon S3 bucket in each Region and upload the videos so users are routed to the nearest S3 bucket.
Amazon S3 buckets are used by an image hosting firm to store its objects. The firm wishes to prevent unintentional public disclosure of the objects contained in the S3 buckets. All S3 objects in the AWS account as a whole must remain private. Which solution will satisfy these criteria?
Use Amazon GuardDuty to monitor S3 bucket policies. Create an automatic remediation action rule that uses an AWS Lambda function to remediate any change that makes the objects public.
Use AWS Trusted Advisor to find publicly accessible S3 buckets. Configure email notifications in Trusted Advisor when a change is detected. Manually change the S3 bucket policy if it allows public access.
Use AWS Resource Access Manager to find publicly accessible S3 buckets. Use Amazon Simple Notification Service (Amazon SNS) to invoke an AWS Lambda function when a change is detected. Deploy a Lambda function that programmatically remediates the change.
Use the S3 Block Public Access feature on the account level. Use AWS Organizations to create a service control policy (SCP) that prevents IAM users from changing the setting. Apply the SCP to the account.
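The SCP in the last option might look like the following sketch, which denies the API action that changes the account-level Block Public Access setting. The Sid is an illustrative name:

```python
# Hypothetical service control policy: stop principals in the account
# from turning off the account-level S3 Block Public Access setting.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDisablingBlockPublicAccess",   # illustrative statement name
        "Effect": "Deny",
        "Action": "s3:PutAccountPublicAccessBlock",
        "Resource": "*",
    }],
}

print(scp["Statement"][0]["Action"])
```

Because SCPs apply even to administrators in member accounts, this closes the gap that reactive monitoring solutions (GuardDuty or Trusted Advisor alerts) leave open.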
A company's website stores transactional data on an Amazon RDS MySQL Multi-AZ DB instance. Other internal systems query this database instance to get data for batch processing. When internal systems request data from the RDS DB instance, the RDS DB instance drastically slows down. This has an effect on the website's read and write performance, resulting in poor response times for users. Which approach will result in an increase in website performance?
Use an RDS PostgreSQL DB instance instead of a MySQL database.
Use Amazon ElastiCache to cache the query responses for the website.
Add an additional Availability Zone to the current RDS MySQL Multi-AZ DB instance.
Add a read replica to the RDS DB instance and configure the internal systems to query the read replica.
Currently, a company's legacy application relies on an unencrypted Amazon RDS MySQL database with a single instance. All current and new data in this database must be encrypted to comply with new compliance standards. How is this to be achieved?
Create an Amazon S3 bucket with server-side encryption enabled. Move all the data to Amazon S3. Delete the RDS instance.
Enable RDS Multi-AZ mode with encryption at rest enabled. Perform a failover to the standby instance to delete the original instance.
Take a snapshot of the RDS instance. Create an encrypted copy of the snapshot. Restore the RDS instance from the encrypted snapshot.
Create an RDS read replica with encryption at rest enabled. Promote the read replica to master and switch the application over to the new master. Delete the old RDS instance.
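The snapshot-copy option can be sketched as the boto3 parameter sets for its three steps. All identifiers and the KMS key ARN are placeholders, and the actual calls are left commented out because they require live AWS access:

```python
# Hypothetical parameter sets for the snapshot/copy/restore path to an
# encrypted RDS instance. All names and the key ARN are placeholders.
snapshot_params = {
    "DBInstanceIdentifier": "legacy-mysql",
    "DBSnapshotIdentifier": "legacy-mysql-snap",
}
copy_params = {
    "SourceDBSnapshotIdentifier": "legacy-mysql-snap",
    "TargetDBSnapshotIdentifier": "legacy-mysql-snap-encrypted",
    # Supplying KmsKeyId makes the snapshot copy encrypted.
    "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
}
restore_params = {
    "DBInstanceIdentifier": "legacy-mysql-encrypted",
    "DBSnapshotIdentifier": "legacy-mysql-snap-encrypted",
}

# rds = boto3.client("rds")
# rds.create_db_snapshot(**snapshot_params)
# rds.copy_db_snapshot(**copy_params)
# rds.restore_db_instance_from_db_snapshot(**restore_params)

print(copy_params["TargetDBSnapshotIdentifier"])
```

This path works because encryption cannot be enabled in place on an existing unencrypted RDS instance, but a snapshot copy can be encrypted and then restored.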
A marketing firm uses an Amazon S3 bucket to store CSV data for statistical research. Permission is required for an application running on an Amazon EC2 instance to properly handle the CSV data stored in the S3 bucket. Which step will provide the MOST SECURE access to the S3 bucket for the EC2 instance?
Attach a resource-based policy to the S3 bucket.
Create an IAM user for the application with specific permissions to the S3 bucket.
Associate an IAM role with least privilege permissions to the EC2 instance profile.
Store AWS credentials directly on the EC2 instance for applications on the instance to use for API calls.
On a cluster of Amazon Linux EC2 instances, a business runs an application. The organization is required to store all application log files for seven years for compliance purposes. The log files will be evaluated by a reporting program, which will need concurrent access to all files. Which storage system best satisfies these criteria in terms of cost-effectiveness?
Amazon Elastic Block Store (Amazon EBS)
Amazon Elastic File System (Amazon EFS)
Amazon EC2 instance store
Amazon S3.
On a fleet of Amazon EC2 instances, a business provides a training site. The business predicts that when its new course, which includes hundreds of training videos on the web, is available in one week, it will be tremendously popular. What should a solutions architect do to ensure that the predicted server load is kept to a minimum?
Store the videos in Amazon ElastiCache for Redis. Update the web servers to serve the videos using the ElastiCache API.
Store the videos in Amazon Elastic File System (Amazon EFS). Create a user data script for the web servers to mount the EFS volume.
Store the videos in an Amazon S3 bucket. Create an Amazon CloudFront distribution with an origin access identity (OAI) of that S3 bucket. Restrict Amazon S3 access to the OAI.
Store the videos in an Amazon S3 bucket. Create an AWS Storage Gateway file gateway to access the S3 bucket. Create a user data script for the web servers to mount the file gateway.
A business chooses to transition its three-tier web application from on-premises to the AWS Cloud. The new database must be able to scale storage capacity dynamically and perform table joins. Which AWS service satisfies these criteria?
Amazon Aurora
Amazon RDS for SQL Server
Amazon DynamoDB Streams
Amazon DynamoDB on-demand.
On a fleet of Amazon EC2 instances, a business runs a production application. The program takes data from an Amazon SQS queue and processes the messages concurrently. The message volume is variable, and traffic is often intermittent. This program should handle messages continuously and without interruption. Which option best fits these criteria in terms of cost-effectiveness?
Use Spot Instances exclusively to handle the maximum capacity required.
Use Reserved Instances exclusively to handle the maximum capacity required.
Use Reserved Instances for the baseline capacity and use Spot Instances to handle additional capacity.
Use Reserved Instances for the baseline capacity and use On-Demand Instances to handle additional capacity.
A startup has developed an application that gathers data from Internet of Things (IoT) sensors installed on cars. Through Amazon Kinesis Data Firehose, the data is transmitted to and stored in Amazon S3. Each year, the data generates billions of S3 objects. Each morning, the business retrains a set of machine learning (ML) models using data from the preceding 30 days. Four times a year, the corporation analyzes and trains other machine learning models using data from the preceding 12 months. The data must be accessible with a minimum of delay for a period of up to one year. Data must be preserved for archive reasons after one year. Which storage system best satisfies these criteria in terms of cost-effectiveness?
Use the S3 Intelligent-Tiering storage class. Create an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 1 year.
Use the S3 Intelligent-Tiering storage class. Configure S3 Intelligent-Tiering to automatically move objects to S3 Glacier Deep Archive after 1 year.
Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 1 year.
Use the S3 Standard storage class. Create an S3 Lifecycle policy to transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days, and then to S3 Glacier Deep Archive after 1 year.
A business requires data storage on Amazon S3. A compliance requirement stipulates that when objects are modified, their original state must be retained. Additionally, data older than five years should be kept for auditing purposes. What should a solutions architect recommend as requiring the LEAST effort?
Enable object-level versioning and S3 Object Lock in governance mode.
Enable object-level versioning and S3 Object Lock in compliance mode.
Enable object-level versioning. Enable a lifecycle policy to move data older than 5 years to S3 Glacier Deep Archive.
Enable object-level versioning. Enable a lifecycle policy to move data older than 5 years to S3 Standard-Infrequent Access (S3 Standard-IA).
Multiple Amazon EC2 instances are used to host an application. The program reads messages from an Amazon SQS queue, writes them to an Amazon RDS database, and then removes them from the queue. The RDS table sometimes contains duplicate entries. There are no duplicate messages in the SQS queue. How can a solutions architect guarantee that messages are handled just once?
Use the CreateQueue API call to create a new queue.
Use the AddPermission API call to add appropriate permissions.
Use the ReceiveMessage API call to set an appropriate wait time.
Use the ChangeMessageVisibility API call to increase the visibility timeout.
A corporation just announced the worldwide launch of its retail website. The website is hosted on numerous Amazon EC2 instances, which are routed via an Elastic Load Balancer. The instances are distributed across several Availability Zones in an Auto Scaling group. The firm wants to provide its clients with customized content depending on the device from which they view the website. Which steps should a solutions architect perform in combination to satisfy these requirements? (Select two.)
Configure Amazon CloudFront to cache multiple versions of the content.
Configure a host header in a Network Load Balancer to forward traffic to different instances.
Configure a Lambda@Edge function to send specific objects to users based on the User-Agent header.
Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up host-based routing to different EC2 instances.
Configure AWS Global Accelerator. Forward requests to a Network Load Balancer (NLB). Configure the NLB to set up path-based routing to different EC2 instances.
To facilitate experimentation and agility, a business allows developers to attach existing IAM policies to existing IAM roles. The security operations team, however, is worried that the developers may attach the existing administrator policy, allowing them to bypass any other security rules. What approach should a solutions architect use to address this issue? Create an Amazon SNS topic to send an alert every time a developer creates a new policy. Use service control policies to disable IAM activity across all accounts in the organizational unit. Prevent the developers from attaching any policies and assign all IAM duties to the security operations team. Set an IAM permissions boundary on the developer IAM role that explicitly denies attaching the administrator policy.
A newly formed company developed a three-tiered web application. The front end consists entirely of static content. Microservices form the application layer. User data is kept in the form of JSON documents that must be accessible with minimal latency. The firm anticipates minimal regular traffic in the first year, with monthly traffic spikes. The startup team's operational overhead expenditures must be kept to a minimum. What should a solutions architect suggest as a means of achieving this? Use Amazon S3 static website hosting to store and serve the front end. Use AWS Elastic Beanstalk for the application layer. Use Amazon DynamoDB to store user data. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon Elastic Kubernetes Service (Amazon EKS) for the application layer. Use Amazon DynamoDB to store user data. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon API Gateway and AWS Lambda functions for the application layer. Use Amazon DynamoDB to store user data. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon API Gateway and AWS Lambda functions for the application layer. Use Amazon RDS with read replicas to store user data.
Amazon Elastic Container Service (Amazon ECS) container instances are used to deploy an ecommerce website's web application behind an Application Load Balancer (ALB). The website slows down and availability decreases during periods of heavy usage. A solutions architect uses Amazon CloudWatch alarms to be notified when an availability problem occurs, allowing them to scale out resources. The management of the business wants a system that automatically reacts to such circumstances. Which solution satisfies these criteria? Set up AWS Auto Scaling to scale out the ECS service when there are timeouts on the ALB. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high. Set up AWS Auto Scaling to scale out the ECS service when the ALB CPU utilization is too high. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high. Set up AWS Auto Scaling to scale out the ECS service when the service's CPU utilization is too high. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high. Set up AWS Auto Scaling to scale out the ECS service when the ALB target group CPU utilization is too high. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high.
A business uses Site-to-Site VPN connections to provide safe access to AWS Cloud services from on-premises. Users are experiencing slower VPN connectivity as a result of increased traffic through the VPN connections to the Amazon EC2 instances. Which approach will result in an increase in VPN throughput? Implement multiple customer gateways for the same network to scale the throughput. Use a transit gateway with equal cost multipath routing and add additional VPN tunnels. Configure a virtual private gateway with equal cost multipath routing and multiple channels. Increase the number of tunnels in the VPN configuration to scale the throughput beyond the default limit.
On Amazon EC2 Linux instances, a business hosts a website. Several of the instances are malfunctioning. Troubleshooting indicates that the failed instances lack swap space. The operations team's lead needs a monitoring solution for this. What recommendations should a solutions architect make? Configure an Amazon CloudWatch SwapUsage metric dimension. Monitor the SwapUsage dimension in the EC2 metrics in CloudWatch. Use EC2 metadata to collect information, then publish it to Amazon CloudWatch custom metrics. Monitor SwapUsage metrics in CloudWatch. Install an Amazon CloudWatch agent on the instances. Run an appropriate script on a set schedule. Monitor SwapUtilization metrics in CloudWatch. Enable detailed monitoring in the EC2 console. Create an Amazon CloudWatch SwapUtilization custom metric. Monitor SwapUtilization metrics in CloudWatch.
AWS is used by a business to run an online transaction processing (OLTP) workload. This workload is deployed in a Multi-AZ environment using an unencrypted Amazon RDS database instance. This instance's database is backed up daily. What should a solutions architect do going forward to guarantee that the database and snapshots are always encrypted? Encrypt a copy of the latest DB snapshot. Replace existing DB instance by restoring the encrypted snapshot. Create a new encrypted Amazon Elastic Block Store (Amazon EBS) volume and copy the snapshots to it. Enable encryption on the DB instance. Copy the snapshots and enable encryption using AWS Key Management Service (AWS KMS). Restore encrypted snapshot to an existing DB instance. Copy the snapshots to an Amazon S3 bucket that is encrypted using server-side encryption with AWS Key Management Service (AWS KMS) managed keys (SSE-KMS).
A business operates an application that collects data from its consumers through various Amazon EC2 instances. After processing, the data is uploaded to Amazon S3 for long-term storage. A study of the application reveals that the EC2 instances were inactive for extended periods of time. A solutions architect must provide a system that maximizes usage while minimizing expenditures. Which solution satisfies these criteria? Use Amazon EC2 in an Auto Scaling group with On-Demand instances. Build the application to use Amazon Lightsail with On-Demand Instances. Create an Amazon CloudWatch cron job to automatically stop the EC2 instances when there is no activity. Redesign the application to use an event-driven design with Amazon Simple Queue Service (Amazon SQS) and AWS Lambda.
A business wishes to migrate from many independent Amazon Web Services accounts to a consolidated, multi-account design. The organization intends to generate a large number of new AWS accounts for its business divisions. The organization must use a single corporate directory service to authenticate access to these AWS accounts. Which steps should a solutions architect advocate in order to satisfy these requirements? (Select two.) Create a new organization in AWS Organizations with all features turned on. Create the new AWS accounts in the organization. Set up an Amazon Cognito identity pool. Configure AWS Single Sign-On to accept Amazon Cognito authentication. Configure a service control policy (SCP) to manage the AWS accounts. Add AWS Single Sign-On to AWS Directory Service. Create a new organization in AWS Organizations. Configure the organization's authentication mechanism to use AWS Directory Service directly. Set up AWS Single Sign-On (AWS SSO) in the organization. Configure AWS SSO, and integrate it with the company's corporate directory service.
A solutions architect is developing a daily data processing task that will take up to two hours to finish. If the task is stopped, it must be restarted from scratch. What is the MOST cost-effective way for the solutions architect to solve this issue? Create a script that runs locally on an Amazon EC2 Reserved Instance that is triggered by a cron job. Create an AWS Lambda function triggered by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event. Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event. Use an Amazon Elastic Container Service (Amazon ECS) task running on Amazon EC2 triggered by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event.
A business intends to use AWS to host a survey website. The firm anticipates a high volume of traffic. As a consequence of this traffic, the database is updated asynchronously. The organization wants to avoid dropping writes to the database housed on AWS. How should the business's application be written to handle these database requests? Configure the application to publish to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the database to the SNS topic. Configure the application to subscribe to an Amazon Simple Notification Service (Amazon SNS) topic. Publish the database updates to the SNS topic. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues to queue the database connection until the database has resources to write the data. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues for capturing the writes and draining the queue as each write is made to the database.
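The FIFO option in the final answer works because SQS FIFO queues drop re-sends that carry a MessageDeduplicationId already seen within the deduplication window. A toy sketch of that behavior (the real window is 5 minutes; this model never expires IDs):

```python
class FifoQueueModel:
    """Toy model of SQS FIFO deduplication by MessageDeduplicationId."""

    def __init__(self):
        self.seen_ids = set()  # real SQS forgets IDs after a 5-minute window
        self.messages = []

    def send(self, body, dedup_id):
        if dedup_id in self.seen_ids:
            return False  # duplicate send is accepted but not enqueued again
        self.seen_ids.add(dedup_id)
        self.messages.append(body)
        return True


q = FifoQueueModel()
q.send({"user": 1, "answer": "A"}, dedup_id="survey-1-user-1")
q.send({"user": 1, "answer": "A"}, dedup_id="survey-1-user-1")  # retry, dropped
```

The queue then absorbs the write burst, and a consumer drains it at whatever rate the database can sustain.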
On a large fleet of Amazon EC2 instances, a business runs an application. The application reads and writes items to an Amazon DynamoDB table. The table grows steadily, yet the application requires only data from the last 30 days. The organization needs a solution that is both cost-effective and quick to implement. Which solution satisfies these criteria? Use an AWS CloudFormation template to deploy the complete solution. Redeploy the CloudFormation stack every 30 days, and delete the original stack. Use an EC2 instance that runs a monitoring application from AWS Marketplace. Configure the monitoring application to use Amazon DynamoDB Streams to store the timestamp when a new item is created in the table. Use a script that runs on the EC2 instance to delete items that have a timestamp that is older than 30 days. Configure Amazon DynamoDB Streams to invoke an AWS Lambda function when a new item is created in the table. Configure the Lambda function to delete items in the table that are older than 30 days. Extend the application to add an attribute that has a value of the current timestamp plus 30 days to each new item that is created in the table. Configure DynamoDB to use the attribute as the TTL attribute.
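The TTL option in the last answer amounts to stamping each new item with an epoch-seconds expiry. A small sketch; the attribute name `expires_at` is an assumption — the actual name is whatever the table's TTL configuration designates:

```python
import time

THIRTY_DAYS = 30 * 24 * 60 * 60  # seconds


def item_with_ttl(item, now=None):
    """Return a copy of the item stamped with an epoch-seconds expiry.

    DynamoDB TTL expects the configured attribute ('expires_at' here is an
    assumed name) to hold a Unix timestamp in seconds; expired items are
    deleted by DynamoDB itself at no extra cost.
    """
    now = int(time.time()) if now is None else now
    return {**item, "expires_at": now + THIRTY_DAYS}


item = item_with_ttl({"pk": "sensor-1", "reading": 42}, now=1_700_000_000)
```

No sweeper script or stream processor is needed; DynamoDB removes expired items in the background.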
Previously, a corporation moved their data warehousing solution to AWS. Additionally, the firm has an AWS Direct Connect connection. Through the use of a visualization tool, users in the corporate office may query the data warehouse. Each query answered by the data warehouse is on average 50 MB in size, whereas each webpage supplied by the visualization tool is around 500 KB in size. The data warehouse does not cache the result sets it returns. Which approach results in the lowest outgoing data transfer costs for the company? Host the visualization tool on premises and query the data warehouse directly over the internet. Host the visualization tool in the same AWS Region as the data warehouse. Access it over the internet. Host the visualization tool on premises and query the data warehouse directly over a Direct Connect connection at a location in the same AWS Region. Host the visualization tool in the same AWS Region as the data warehouse and access it over a DirectConnect connection at a location in the same Region.
A business is developing an application that is composed of many microservices. The organization has chosen to deploy its software on AWS through container technology. The business need a solution that requires little ongoing work for maintenance and growth. Additional infrastructure cannot be managed by the business. Which steps should a solutions architect perform in combination to satisfy these requirements? (Select two.) Deploy an Amazon Elastic Container Service (Amazon ECS) cluster. Deploy the Kubernetes control plane on Amazon EC2 instances that span multiple Availability Zones. Deploy an Amazon Elastic Container Service (Amazon ECS) service with an Amazon EC2 launch type. Specify a desired task number level of greater than or equal to 2. Deploy an Amazon Elastic Container Service (Amazon ECS) service with a Fargate launch type. Specify a desired task number level of greater than or equal to 2. Deploy Kubernetes worker nodes on Amazon EC2 instances that span multiple Availability Zones. Create a deployment that specifies two or more replicas for each microservice.
The following policy was developed by an Amazon EC2 administrator and assigned to an IAM group including numerous users: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "ec2:TerminateInstances", "Resource": "*", "Condition": { "IpAddress": { "aws:SourceIp": "10.100.100.0/24" } } }, { "Effect": "Deny", "Action": "ec2:*", "Resource": "*", "Condition": { "StringNotEquals": { "ec2:Region": "us-east-1" } } } ] } What impact does this policy have? Users can terminate an EC2 instance in any AWS Region except us-east-1. Users can terminate an EC2 instance with the IP address 10.100.100.1 in the us-east-1 Region. Users can terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254. Users cannot terminate an EC2 instance in the us-east-1 Region when the user's source IP is 10.100.100.254.
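The policy's two statements can be checked mechanically: the Allow requires the caller's source IP to fall inside 10.100.100.0/24, and the explicit Deny fires for any Region other than us-east-1. A quick sketch with Python's ipaddress module:

```python
from ipaddress import ip_address, ip_network


def can_terminate(source_ip, region):
    """Evaluate the two statements: Allow ec2:TerminateInstances from
    10.100.100.0/24, and an explicit Deny on ec2:* outside us-east-1.
    An explicit Deny always overrides an Allow."""
    allowed = ip_address(source_ip) in ip_network("10.100.100.0/24")
    denied = region != "us-east-1"
    return allowed and not denied


result = can_terminate("10.100.100.254", "us-east-1")
```

Running this confirms the third option: termination succeeds only in us-east-1 from an address in the 10.100.100.0/24 range.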
The web application of a business stores its data on an Amazon RDS PostgreSQL database instance. Accountants run massive queries at the start of each month during the financial closing period, which degrades the database's performance because of excessive utilization. The business wants to reduce the effect of reporting on the web application. What should a solutions architect do to minimize the impact on the database with the LEAST amount of effort? Create a read replica and direct reporting traffic to the replica. Create a Multi-AZ database and direct reporting traffic to the standby. Create a cross-Region read replica and direct reporting traffic to the replica. Create an Amazon Redshift database and direct reporting traffic to the Amazon Redshift database.
A business has implemented a MySQL database on Amazon RDS. The database support team is reporting delayed reads on the DB instance as a result of the increased transactions and advises installing a read replica. Which activities should a solutions architect do prior to deploying this change? (Select two.) Enable binlog replication on the RDS primary node. Choose a failover priority for the source DB instance. Allow long-running transactions to complete on the source DB instance. Create a global table and specify the AWS Regions where the table will be available. Enable automatic backups on the source instance by setting the backup retention period to a value other than 0.
Users may get past performance reports from a company's website. The website requires a solution that can scale to meet the company's worldwide website requirements. The solution should be cost-effective, minimize infrastructure resource provisioning, and deliver the fastest response time possible. Which mix of technologies might a solutions architect propose in order to satisfy these requirements? Amazon CloudFront and Amazon S3 AWS Lambda and Amazon DynamoDB Application Load Balancer with Amazon EC2 Auto Scaling Amazon Route 53 with internal Application Load Balancers.
A solutions architect is developing a hybrid application on the Amazon Web Services (AWS) Cloud. AWS Direct Connect (DX) will be used to connect the on-premises data center to AWS. The application connection between AWS and the on-premises data center must be highly resilient. Which DX setup should be used to satisfy these criteria? Configure a DX connection with a VPN on top of it. Configure DX connections at multiple DX locations. Configure a DX connection using the most reliable DX partner. Configure multiple virtual interfaces on top of a DX connection.
A financial institution uses AWS to host a web application. The program retrieves current stock prices using an Amazon API Gateway Regional API endpoint. The security staff at the organization has detected an upsurge in API queries. The security team is worried that HTTP flood attacks may result in the application being rendered inoperable. A solutions architect must create a defense against this form of assault. Which method satisfies these criteria with the LEAST amount of operational overhead? Create an Amazon CloudFront distribution in front of the API Gateway Regional API endpoint with a maximum TTL of 24 hours. Create a Regional AWS WAF web ACL with a rate-based rule. Associate the web ACL with the API Gateway stage. Use Amazon CloudWatch metrics to monitor the Count metric and alert the security team when the predefined rate is reached. Create an Amazon CloudFront distribution with Lambda@Edge in front of the API Gateway Regional API endpoint. Create an AWS Lambda function to block requests from IP addresses that exceed the predefined rate.
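The rate-based rule in the second option is a small amount of WAFv2 configuration. A sketch of the rule body in the CreateWebACL shape; the rule name and the 2,000-request limit are assumptions to be tuned to the API's normal traffic:

```python
def rate_based_rule(name, limit):
    """Sketch of a rate-based rule in the WAFv2 CreateWebACL shape.

    The rule name and the limit (requests per 5-minute window per source
    IP) are assumptions; pick a limit above legitimate peak traffic.
    """
    return {
        "Name": name,
        "Priority": 0,
        "Statement": {
            "RateBasedStatement": {
                "Limit": limit,
                "AggregateKeyType": "IP",
            }
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }


rule = rate_based_rule("api-flood-guard", limit=2000)
```

Once the web ACL is associated with the API Gateway stage, WAF blocks offending IPs automatically, which is what keeps the operational overhead low.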
A business wishes to automate the evaluation of the security of its Amazon EC2 instances. The organization must verify and show that the development process adheres to security and compliance requirements. What actions should a solutions architect take to ensure that these criteria are met? Use Amazon Macie to automatically discover, classify and protect the EC2 instances. Use Amazon GuardDuty to publish Amazon Simple Notification Service (Amazon SNS) notifications. Use Amazon Inspector with Amazon CloudWatch to publish Amazon Simple Notification Service (Amazon SNS) notifications. Use Amazon EventBridge (Amazon CloudWatch Events) to detect and react to changes in the status of AWS Trusted Advisor checks.
On Amazon EC2, a corporation is operating a highly secure application that is backed up by an Amazon RDS database. All personally identifiable information (PII) must be encrypted at rest to comply with compliance standards. Which solution should a solutions architect propose in order to achieve this need with the MINIMUM number of infrastructure changes? Deploy AWS Certificate Manager to generate certificates. Use the certificates to encrypt the database volume. Deploy AWS CloudHSM, generate encryption keys, and use the customer master key (CMK) to encrypt database volumes. Configure SSL encryption using AWS Key Management Service customer master keys (AWS KMS CMKs) to encrypt database volumes. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.
The following IAM policy has been established by a solutions architect. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "lambda:*" ], "Resource": "*" }, { "Effect": "Deny", "Action": [ "lambda:CreateFunction", "lambda:DeleteFunction" ], "Resource": "*", "Condition": { "IpAddress": { "aws:SourceIp": "220.100.16.0/20" } } } ] } Which actions will the policy permit? An AWS Lambda function can be deleted from any network. An AWS Lambda function can be created from any network. An AWS Lambda function can be deleted from the 100.220.0.0/20 network. An AWS Lambda function can be deleted from the 220.100.16.0/20 network.
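As with the earlier EC2 policy, the effect here is mechanical: lambda:* is allowed everywhere, and the explicit Deny on CreateFunction and DeleteFunction applies only when the caller's source IP falls inside 220.100.16.0/20. A sketch:

```python
from ipaddress import ip_address, ip_network

BLOCKED = ip_network("220.100.16.0/20")  # 220.100.16.0 - 220.100.31.255


def can_delete_function(source_ip):
    """lambda:* is allowed everywhere; the explicit Deny on DeleteFunction
    applies only when the caller's IP is inside 220.100.16.0/20."""
    return ip_address(source_ip) not in BLOCKED


inside = can_delete_function("220.100.16.1")   # within the denied range
outside = can_delete_function("100.220.0.1")   # any other network
```

Deletion therefore succeeds from 100.220.0.0/20 (or any other network) and fails only from 220.100.16.0/20.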
On Amazon Aurora, a business operates a database. The database is idle every night. When user traffic surges in the early morning hours, an application that performs large reads on the database faces performance issues. The application encounters timeout errors when reading from the database during these peak hours. Because there is no dedicated operations team, the organization needs an automated solution to address the performance concerns. Which activities should a solutions architect take to ensure that the database automatically adjusts to the increasing read load? (Select two.) Migrate the database to Aurora Serverless. Increase the instance size of the Aurora database. Configure Aurora Auto Scaling with Aurora Replicas. Migrate the database to an Aurora multi-master cluster. Migrate the database to an Amazon RDS for MySQL Multi-AZ deployment.
An Amazon EC2 instance-based application requires access to an Amazon DynamoDB database. The EC2 instance and DynamoDB table are both managed by the same AWS account. Permissions must be configured by a solutions architect. Which approach will provide the EC2 instance least privilege access to the DynamoDB table? Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Create an instance profile to assign this IAM role to the EC2 instance. Create an IAM role with the appropriate policy to allow access to the DynamoDB table. Add the EC2 instance to the trust relationship policy document to allow it to assume the role. Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Store the credentials in an Amazon S3 bucket and read them from within the application code directly. Create an IAM user with the appropriate policy to allow access to the DynamoDB table. Ensure that the application stores the IAM credentials securely on local storage and uses them to make the DynamoDB calls.
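The least-privilege half of the correct answer is a role policy scoped to the one table's ARN rather than dynamodb:* on all resources. A sketch; the action list and the example account/table ARN are illustrative, not prescribed by the question:

```python
import json


def dynamodb_table_policy(table_arn):
    """Least-privilege policy document scoped to a single table.

    The action list and example ARN are illustrative; grant only the
    operations the application actually calls.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
                "Resource": table_arn,
            }
        ],
    }


policy = dynamodb_table_policy(
    "arn:aws:dynamodb:us-east-1:123456789012:table/Inventory"
)
print(json.dumps(policy, indent=2))
```

Attaching this policy to a role delivered via an instance profile avoids long-lived credentials on the instance entirely.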
A business uses Amazon EC2 instances to operate an API-based inventory reporting application. The application uses an Amazon DynamoDB database to store data. The distribution centers of the corporation use an on-premises shipping application that communicates with an API to update inventory prior to generating shipping labels. Each day, the organization has seen application outages, resulting in missed transactions. What should a solutions architect propose to increase the resilience of the application? Modify the shipping application to write to a local database. Modify the application APIs to run serverless using AWS Lambda. Configure Amazon API Gateway to call the EC2 inventory application APIs. Modify the application to send inventory updates using Amazon Simple Queue Service (Amazon SQS).
Amazon EC2 instances on private subnets are used to execute an application. The application requires access to a table in Amazon DynamoDB. What is the MOST SECURE method of accessing the table without allowing traffic to exit the AWS network? Use a VPC endpoint for DynamoDB. Use a NAT gateway in a public subnet. Use a NAT instance in a private subnet. Use the internet gateway attached to the VPC.
On a single Amazon EC2 instance, a business runs an ASP.NET MVC application. Due to a recent spike in application usage, users are experiencing poor response times during lunch hours. The firm must address this issue using the least amount of settings possible. What recommendations should a solutions architect make to satisfy these requirements? Move the application to AWS Elastic Beanstalk. Configure load-based auto scaling and time-based scaling to handle scaling during lunch hours. Move the application to Amazon Elastic Container Service (Amazon ECS). Create an AWS Lambda function to handle scaling during lunch hours. Move the application to Amazon Elastic Container Service (Amazon ECS). Configure scheduled scaling for AWS Application Auto Scaling during lunch hours. Move the application to AWS Elastic Beanstalk. Configure load-based auto scaling, and create an AWS Lambda function to handle scaling during lunch hours.
A business is in the process of migrating its on-premises application to AWS. Application servers and a Microsoft SQL Server database comprise the application. The database cannot be moved to another engine because the application's .NET code uses SQL Server functionality. The company's goal is to maximize availability while decreasing operational and administration costs. What actions should a solutions architect take to achieve this? Install SQL Server on Amazon EC2 in a Multi-AZ deployment. Migrate the data to Amazon RDS for SQL Server in a Multi-AZ deployment. Deploy the database on Amazon RDS for SQL Server with Multi-AZ Replicas. Migrate the data to Amazon RDS for SQL Server in a cross-Region Multi-AZ deployment.
Amazon Redshift is being used by a business to do analytics and produce customer reports. The corporation just obtained an extra 50 terabytes of demographic data on its customers. The data is saved in Amazon S3 in .csv files. The organization needs a system that efficiently merges data and visualizes the findings. What recommendations should a solutions architect make to satisfy these requirements? Use Amazon Redshift Spectrum to query the data in Amazon S3 directly and join that data with the existing data in Amazon Redshift. Use Amazon QuickSight to build the visualizations. Use Amazon Athena to query the data in Amazon S3. Use Amazon QuickSight to join the data from Athena with the existing data in Amazon Redshift and to build the visualizations. Increase the size of the Amazon Redshift cluster, and load the data from Amazon S3. Use Amazon EMR Notebooks to query the data and build the visualizations in Amazon Redshift. Export the data from the Amazon Redshift cluster into Apache Parquet files in Amazon S3. Use Amazon Elasticsearch Service (Amazon ES) to query the data. Use Kibana to visualize the results.
Each month, a business keeps 200 GB of data on Amazon S3. At the conclusion of each month, the corporation must analyze this data to calculate the number of things sold in each sales area during the preceding month. Which analytics approach is the MOST cost-effective option for the business? Create an Amazon Elasticsearch Service (Amazon ES) cluster. Query the data in Amazon ES. Visualize the data by using Kibana. Create a table in the AWS Glue Data Catalog. Query the data in Amazon S3 by using Amazon Athena. Visualize the data in Amazon QuickSight. Create an Amazon EMR cluster. Query the data by using Amazon EMR, and store the results in Amazon S3. Visualize the data in Amazon QuickSight. Create an Amazon Redshift cluster. Query the data in Amazon Redshift, and upload the results to Amazon S3. Visualize the data in Amazon QuickSight.
A business's data layer is powered by Amazon RDS for PostgreSQL databases. The organization must adopt database password rotation. Which option satisfies this criterion with the LEAST amount of operational overhead? Store the password in AWS Secrets Manager. Enable automatic rotation on the secret. Store the password in AWS Systems Manager Parameter Store. Enable automatic rotation on the parameter. Store the password in AWS Systems Manager Parameter Store. Write an AWS Lambda function that rotates the password. Store the password in AWS Key Management Service (AWS KMS). Enable automatic rotation on the customer master key (CMK).
A business intends to transfer a TCP-based application onto the company's virtual private cloud (VPC). The program is available to the public over an unsupported TCP port via a physical device located in the company's data center. This public endpoint has a latency of less than 3 milliseconds and can handle up to 3 million requests per second. The organization needs the new public endpoint in AWS to function at the same level of performance. What solution architecture approach should be recommended to satisfy this requirement? Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible over the TCP port that the application requires. Deploy an Application Load Balancer (ALB). Configure the ALB to be publicly accessible over the TCP port that the application requires. Deploy an Amazon CloudFront distribution that listens on the TCP port that the application requires. Use an Application Load Balancer as the origin. Deploy an Amazon API Gateway API that is configured with the TCP port that the application requires. Configure AWS Lambda functions with provisioned concurrency to process the requests.
Within the same AWS account, a firm has two VPCs situated in the us-west-2 Region. The business must permit network communication between these VPCs. Each month, about 500 GB of data will be transferred between the VPCs. Which approach is the MOST cost-effective for connecting these VPCs? Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to use the transit gateway for inter-VPC communication. Implement an AWS Site-to-Site VPN tunnel between the VPCs. Update the route tables of each VPC to use the VPN tunnel for inter-VPC communication. Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC peering connection for inter-VPC communication. Set up a 1 GB AWS Direct Connect connection between the VPCs. Update the route tables of each VPC to use the Direct Connect connection for inter-VPC communication.
A business's production workload is hosted on an Amazon Aurora MySQL DB cluster comprised of six Aurora Replicas. The corporation wishes to automate the distribution of near-real-time reporting requests from one of its departments among three Aurora Replicas. These three copies are configured differently from the rest of the DB cluster in terms of computation and memory. Which solution satisfies these criteria? Create and use a custom endpoint for the workload. Create a three-node cluster clone and use the reader endpoint. Use any of the instance endpoints for the selected three nodes. Use the reader endpoint to automatically distribute the read-only workload.
A business's on-premises data center has reached its storage limit. The organization wishes to shift its storage system to AWS while keeping bandwidth costs as low as possible. The solution must enable rapid and cost-free data retrieval. How are these stipulations to be met? Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval capacity for the workload. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3. Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS Storage Gateway to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.
Within a month of the acquisition, a newly acquired firm is required to establish its own infrastructure on AWS and migrate multiple applications to the cloud. Each application requires the transfer of approximately 50 TB of data. Following the transfer, this firm and its parent company will need secure network connectivity with consistent throughput between its data centers and applications. A solutions architect must ensure that the data transfer occurs only once and that network connectivity is maintained. Which solution will satisfy these criteria? AWS Direct Connect for both the initial transfer and ongoing connectivity. AWS Site-to-Site VPN for both the initial transfer and ongoing connectivity. AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity. AWS Snowball for the initial transfer and AWS Site-to-Site VPN for ongoing connectivity.
A solutions architect must create a solution that retrieves data every two minutes from an internet-based third-party web service. Each data retrieval is performed by a Python script in less than 100 milliseconds. The response is a JSON object of less than 1 KB in size containing sensor data. The solutions architect must store both the JSON object and the timestamp. Which approach is the most cost-effective in meeting these requirements? Deploy an Amazon EC2 instance with a Linux operating system. Configure a cron job to run the script every 2 minutes. Extend the script to store the JSON object along with the timestamp in a MySQL database that is hosted on an Amazon RDS DB instance. Deploy an Amazon EC2 instance with a Linux operating system to extend the script to run in an infinite loop every 2 minutes. Store the JSON object along with the timestamp in an Amazon DynamoDB table that uses the timestamp as the primary key. Run the script on the EC2 instance. Deploy an AWS Lambda function to extend the script to store the JSON object along with the timestamp in an Amazon DynamoDB table that uses the timestamp as the primary key. Use an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that is initiated every 2 minutes to invoke the Lambda function. Deploy an AWS Lambda function to extend the script to run in an infinite loop every 2 minutes. Store the JSON object along with the timestamp in an Amazon DynamoDB table that uses the timestamp as the primary key. Ensure that the script is called by the handler function that is configured for the Lambda function.
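The EventBridge-plus-Lambda option can be sketched as a handler whose only state is the item it writes. The attribute names and the placeholder payload below are assumptions; the real function would perform the HTTP fetch and a put_item call against the table:

```python
from datetime import datetime, timezone


def build_item(sensor_json, now=None):
    """Shape the DynamoDB item: ISO-8601 timestamp as the primary key,
    raw JSON payload beside it (attribute names are assumptions)."""
    now = now or datetime.now(timezone.utc)
    return {"timestamp": now.isoformat(), "payload": sensor_json}


def handler(event, context):
    """EventBridge-invoked Lambda entry point (sketch). The real function
    would fetch the third-party URL and put the item into the table."""
    payload = {"temperature": 21.5}  # placeholder for the fetched JSON
    return build_item(payload)
```

Because each invocation runs for well under a second, the pay-per-invocation model undercuts an always-on EC2 instance for this workload.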
A business does not currently have any file sharing services. A new project needs file storage that can be mounted as a disk for on-premises desktop computers. Before users can access the storage, the file server must authenticate them against an Active Directory domain. Which service enables Active Directory users to deploy storage on their workstations as a drive? Amazon S3 Glacier AWS DataSync AWS Snowball Edge AWS Storage Gateway.
A corporation with an on-premises application is transitioning to AWS to boost the flexibility and availability of the application. The present design makes considerable use of a Microsoft SQL Server database. The firm wants to investigate other database solutions and, if necessary, migrate database engines. The development team takes a full copy of the production database every four hours to create a test database. Users will encounter latency during this time period. What database should a solutions architect propose as a replacement? Use Amazon Aurora with Multi-AZ Aurora Replicas and restore from mysqldump for the test database. Use Amazon Aurora with Multi-AZ Aurora Replicas and restore snapshots from Amazon RDS for the test database. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas, and use the standby instance for the test database. Use Amazon RDS for SQL Server with a Multi-AZ deployment and read replicas, and restore snapshots from RDS for the test database.
On Amazon EC2 instances, a business runs an application. The volume of traffic to the webpage grows significantly during business hours and then falls. The CPU usage of an Amazon EC2 instance is a good measure of the application's end-user demand. The organization has specified a minimum group size of two EC2 instances and a maximum group size of ten EC2 instances for an Auto Scaling group. The firm is worried that the Auto Scaling group's existing scaling policy may be incorrect. The organization must prevent excessive EC2 instance provisioning and paying unneeded fees. What recommendations should a solutions architect make to satisfy these requirements? Configure Amazon EC2 Auto Scaling to use a scheduled scaling plan and launch an additional 8 EC2 instances during business hours. Configure AWS Auto Scaling to use a scaling plan that enables predictive scaling. Configure predictive scaling with a scaling mode of forecast and scale, and to enforce the maximum capacity setting during scaling. Configure a step scaling policy to add 4 EC2 instances at 50% CPU utilization and add another 4 EC2 instances at 90% CPU utilization. Configure scale-in policies to perform the reverse and remove EC2 instances based on the two values. Configure AWS Auto Scaling to have a desired capacity of 5 EC2 instances, and disable any existing scaling policies. Monitor the CPU utilization metric for 1 week. Then create dynamic scaling policies that are based on the observed values.
A business has launched a mobile multiplayer game. The game demands real-time monitoring of participants' latitude and longitude positions. The game's data storage must be capable of quick updates and location retrieval. The game stores location data on an Amazon RDS for PostgreSQL DB instance with read replicas. The database is unable to sustain the speed required for reading and writing changes during high use times. The game's user base is rapidly growing. What should a solutions architect do to optimize the data tier's performance? Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled. Migrate from Amazon RDS to Amazon Elasticsearch Service (Amazon ES) with Kibana. Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game to use DAX. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis.
A company's on-premises infrastructure and AWS need a secure connection. This connection does not need a large quantity of bandwidth and only needs to handle a limited amount of traffic. The link should be established quickly. Which is the CHEAPEST way to establish this sort of connection? Implement a client VPN. Implement AWS Direct Connect. Implement a bastion host on Amazon EC2. Implement an AWS Site-to-Site VPN connection.
A business is developing a web-based application that will operate on Amazon EC2 instances distributed across several Availability Zones. The online application will enable access to a collection of over 900 TB of text content. The corporation expects times of heavy demand for the online application. A solutions architect must guarantee that the text document storage component can scale to meet the application's demand at all times. The corporation is concerned about the solution's total cost. Which storage system best satisfies these criteria in terms of cost-effectiveness? Amazon Elastic Block Store (Amazon EBS) Amazon Elastic File System (Amazon EFS) Amazon Elasticsearch Service (Amazon ES) Amazon S3.
A business is using a tape backup system to store critical application data offsite. Daily data volume is around 50 TB. For regulatory requirements, the firm must maintain backups for seven years. Backups are infrequently accessed, and a week's notice is normally given before a backup is restored. The organization is now investigating a cloud-based solution to cut storage expenses and the operational load of tape management. Additionally, the organization wants the move from tape backups to the cloud to be as seamless as possible. Which storage option is the CHEAPEST? Use AWS Storage Gateway to back up to Amazon S3 Glacier Deep Archive. Use AWS Snowball Edge to directly integrate the backups with Amazon S3 Glacier. Copy the backup data to Amazon S3 and create a lifecycle policy to move the data to Amazon S3 Glacier. Use AWS Storage Gateway to back up to Amazon S3 and create a lifecycle policy to move the backup to Amazon S3 Glacier.
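The lifecycle transition mentioned in the last two options can be expressed as a rule for boto3's `put_bucket_lifecycle_configuration`. A minimal sketch; the rule ID and prefix are illustrative, not prescribed by the question:

```python
def glacier_lifecycle_rule(prefix: str, days: int) -> dict:
    # One entry for the "Rules" list that boto3's
    # put_bucket_lifecycle_configuration accepts.
    return {
        "ID": f"to-glacier-after-{days}d",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": days, "StorageClass": "GLACIER"}],
    }
```

The same shape with `"StorageClass": "DEEP_ARCHIVE"` would cover the Deep Archive tier used for long retention periods like the seven years here.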
A development team must have a website that is accessible to other development teams. HTML, CSS, client-side JavaScript, and graphics comprise the website's content. Which form of website hosting is the MOST cost-effective? Containerize the website and host it in AWS Fargate. Create an Amazon S3 bucket and host the website there. Deploy a web server on an Amazon EC2 instance to host the website. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.
A business's data warehouse is powered by Amazon Redshift. The firm wants to ensure the durability of its data in the event of component failure. What recommendations should a solutions architect make? Enable concurrency scaling. Enable cross-Region snapshots. Increase the data retention period. Deploy Amazon Redshift in Multi-AZ.
A business provides its customers with an API that automates tax calculations based on item prices. During the Christmas season, the firm receives an increased volume of requests, resulting in delayed response times. A solutions architect must design a scalable and elastic system. What should the solutions architect do to achieve this? Provide an API hosted on an Amazon EC2 instance. The EC2 instance performs the required computations when the API request is made. Design a REST API using Amazon API Gateway that accepts the item names. API Gateway passes item names to AWS Lambda for tax computations. Create an Application Load Balancer that has two Amazon EC2 instances behind it. The EC2 instances will compute the tax on the received item names. Design a REST API using Amazon API Gateway that connects with an API hosted on an Amazon EC2 instance. API Gateway accepts and passes the item names to the EC2 instance for tax computations.
A business is operating a worldwide application. Users upload various videos, which are subsequently combined into a single video file. The program receives uploads from users through a single Amazon S3 bucket in the us-east-1 Region. The same S3 bucket also serves as the download point for the generated video file. The finished video file is around 250 GB in size. The organization requires a solution that enables quicker uploads and downloads of video files stored in Amazon S3. The corporation will charge consumers who choose to pay for the faster speed a monthly fee. What actions should a solutions architect take to ensure that these criteria are met? Enable AWS Global Accelerator for the S3 endpoint. Adjust the application's upload and download links to use the Global Accelerator S3 endpoint for users who have a subscription. Enable S3 Cross-Region Replication to S3 buckets in all other AWS Regions. Use an Amazon Route 53 geolocation routing policy to route S3 requests based on the location of users who have a subscription. Create an Amazon CloudFront distribution and use the S3 bucket in us-east-1 as an origin. Adjust the application to use the CloudFront URL as the upload and download links for users who have a subscription. Enable S3 Transfer Acceleration for the S3 bucket in us-east-1. Configure the application to use the bucket's S3-accelerate endpoint domain name for the upload and download links for users who have a subscription.
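The per-subscriber endpoint switch in the last option can be sketched as a tiny helper that picks between the standard and the accelerate domain. A sketch only; it assumes Transfer Acceleration is already enabled on the bucket and uses virtual-hosted-style URLs:

```python
def s3_endpoint(bucket: str, subscriber: bool) -> str:
    # Subscribers are routed to the s3-accelerate domain, which sends
    # traffic over CloudFront edge locations to the bucket in us-east-1;
    # everyone else keeps the standard S3 endpoint.
    domain = "s3-accelerate.amazonaws.com" if subscriber else "s3.amazonaws.com"
    return f"https://{bucket}.{domain}"
```

With boto3, the equivalent is a client created with `Config(s3={"use_accelerate_endpoint": True})` for subscribing users.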
A solutions architect is designing a VPC architecture with various subnets. Six subnets will be used in two Availability Zones. Subnets are classified as public, private, and database-specific. Access to a database should be restricted to Amazon EC2 instances operating on private subnets. Which solution satisfies these criteria? Create a new route table that excludes the route to the public subnets' CIDR blocks. Associate the route table to the database subnets. Create a security group that denies ingress from the security group used by instances in the public subnets. Attach the security group to an Amazon RDS DB instance. Create a security group that allows ingress from the security group used by instances in the private subnets. Attach the security group to an Amazon RDS DB instance. Create a new peering connection between the public subnets and the private subnets. Create a different peering connection between the private subnets and the database subnets.
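The security-group-to-security-group rule in the correct option can be sketched as the `IpPermissions` entry that boto3's `authorize_security_group_ingress` accepts. A sketch under assumptions: the port defaults to MySQL's 3306 and the group ID is a placeholder.

```python
def db_ingress_rule(app_tier_sg_id: str, port: int = 3306) -> dict:
    # The source is a security group rather than a CIDR block, so only
    # instances carrying the private-subnet app tier's group can reach
    # the database port, regardless of their IP addresses.
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "UserIdGroupPairs": [{"GroupId": app_tier_sg_id}],
    }
```

Referencing a group instead of a CIDR also keeps the rule valid as Auto Scaling replaces instances and their private IPs change.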
A business is implementing a web portal. The firm wants to restrict public access to the web tier only. To achieve this, the VPC was created with two public subnets and two private subnets. The application will be hosted on several Amazon EC2 instances managed through an Auto Scaling group. SSL termination must be offloaded from the EC2 instances. What actions should a solutions architect take to guarantee compliance with these requirements? Configure the Network Load Balancer in the public subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer. Configure the Network Load Balancer in the public subnets. Configure the Auto Scaling group in the public subnets and associate it with the Application Load Balancer. Configure the Application Load Balancer in the public subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer. Configure the Application Load Balancer in the private subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer.
A business intends to operate a collection of Amazon EC2 instances connected to an Amazon Aurora database. The business used an AWS CloudFormation template to deploy the EC2 instances and the Aurora DB cluster. The organization wishes to provide secure authentication of instances to the database. The business does not want to maintain static database credentials. Which method satisfies these criteria with the LEAST amount of operational effort? Create a database user with a user name and password. Add parameters for the database user name and password to the CloudFormation template. Pass the parameters to the EC2 instances when the instances are launched. Create a database user with a user name and password. Store the user name and password in AWS Systems Manager Parameter Store. Configure the EC2 instances to retrieve the database credentials from Parameter Store. Configure the DB cluster to use IAM database authentication. Create a database user to use with IAM authentication. Associate a role with the EC2 instances to allow applications on the instances to access the database. Configure the DB cluster to use IAM database authentication with an IAM user. Create a database user that has a name that matches the IAM user. Associate the IAM user with the EC2 instances to allow applications on the instances to access the database.
A business uses the SMB protocol to back up on-premises databases to local file server shares. To accomplish recovery goals, the organization needs instant access to one week's worth of backup data. After a week, recovery is less likely, and the business can tolerate a delay in retrieving those older backups. What actions should a solutions architect take to ensure that these criteria are met with the LEAST amount of operational work possible? Deploy Amazon FSx for Windows File Server to create a file system with exposed file shares with sufficient storage to hold all the desired backups. Deploy an AWS Storage Gateway file gateway with sufficient storage to hold 1 week of backups. Point the backups to SMB shares from the file gateway. Deploy Amazon Elastic File System (Amazon EFS) to create a file system with exposed NFS shares with sufficient storage to hold all the desired backups. Continue to back up to the existing file shares. Deploy AWS Database Migration Service (AWS DMS) and define a copy task to copy backup files older than 1 week to Amazon S3, and delete the backup files from the local file store.
A daily scheduled task must be executed by an ecommerce business to collect and filter sales statistics for analytics purposes. The sales records are stored in an Amazon S3 bucket. Each object has a maximum file size of 10 GB. The work might take up to an hour to complete depending on the amount of sales events. The job's CPU and memory requirements are consistent and known in advance. A solutions architect's goal is to reduce the amount of operational work required to complete the task. Which solution satisfies these criteria? Create an AWS Lambda function that has an Amazon EventBridge (Amazon CloudWatch Events) notification. Schedule the EventBridge (CloudWatch Events) event to run once a day. Create an AWS Lambda function. Create an Amazon API Gateway HTTP API. and integrate the API with the function. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that calls the API and invokes the function. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch type. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that launches an ECS task on the cluster to run the job. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type and an Auto Scaling group with at least one EC2 instance. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that launches an ECS task on the cluster to run the job.
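The Fargate option's scheduling step can be sketched as the target entry passed to EventBridge's `put_targets` on a daily rule. A sketch only: the target ID and ARNs are placeholders, and the role is assumed to permit `ecs:RunTask`.

```python
def fargate_schedule_target(cluster_arn: str, task_def_arn: str,
                            subnets: list, role_arn: str) -> dict:
    # One target for a rate(1 day) / cron rule: EventBridge launches a
    # single Fargate task per schedule, so no capacity idles between runs
    # and the 15-minute Lambda limit does not constrain the 1-hour job.
    return {
        "Id": "daily-sales-job",          # illustrative target ID
        "Arn": cluster_arn,
        "RoleArn": role_arn,              # role assumed to allow ecs:RunTask
        "EcsParameters": {
            "TaskDefinitionArn": task_def_arn,
            "TaskCount": 1,
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {"Subnets": subnets},
            },
        },
    }
```

The task definition's fixed CPU and memory settings map directly onto the job's known, consistent resource requirements.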
A shopping cart application connects to an Amazon RDS Multi-AZ database instance. The database performance is causing the application to slow down. There was no significant performance improvement after upgrading to the next-generation instance type. According to the analysis, around 700 IOPS are maintained, typical queries execute for extended periods of time, and memory use is significant. Which application modification might a solutions architect propose to address these concerns? Migrate the RDS instance to an Amazon Redshift cluster and enable weekly garbage collection. Separate the long-running queries into a new Multi-AZ RDS database and modify the application to query whichever database is needed. Deploy a two-node Amazon ElastiCache cluster and modify the application to query the cluster first and query the database only if needed. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue for common queries and query it first and query the database only if needed.
A startup is developing a shared storage solution for a gaming application hosted in the AWS Cloud. The organization needs the ability to access data through SMB clients. The solution must be fully managed. Which AWS solution satisfies these criteria? Create an AWS DataSync task that shares the data as a mountable file system. Mount the file system to the application server. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to the file share. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the file system. Create an Amazon S3 bucket. Assign an IAM role to the application to grant access to the S3 bucket. Mount the S3 bucket to the application server.
A business relies on Amazon S3 for object storage. The organization stores data in hundreds of S3 buckets. Certain S3 buckets contain less frequently accessed data than others. According to a solutions architect, lifecycle rules are either not followed consistently or are enforced in part, resulting in data being held in high-cost storage. Which option will reduce expenses without jeopardizing object availability? Use S3 ACLs. Use Amazon Elastic Block Store (Amazon EBS) automated snapshots. Use S3 Intelligent-Tiering storage. Use S3 One Zone-Infrequent Access (S3 One Zone-IA).
A business is re-architecting a tightly coupled application to make it loosely coupled. Previously, the application communicated across layers through a request/response pattern. The organization intends to achieve this with Amazon Simple Queue Service (Amazon SQS). The initial architecture includes a request queue and a response queue. However, as the application scales, this approach will not process all messages. What is the best course of action for a solutions architect to take to tackle this issue? Configure a dead-letter queue on the ReceiveMessage API action of the SQS queue. Configure a FIFO queue, and use the message deduplication ID and message group ID. Create a temporary queue, with the Temporary Queue Client to receive each response message. Create a queue for each request and response on startup for each producer, and use a correlation ID message attribute.
Amazon S3 is used by a business to store private audit records. Following the principle of least privilege, the S3 bucket uses bucket policies to restrict access to the audit team's IAM user credentials. Company executives are concerned about inadvertent document destruction in the S3 bucket and need a more secure solution. What steps should a solutions architect take to ensure the security of audit documents? Enable the versioning and MFA Delete features on the S3 bucket. Enable multi-factor authentication (MFA) on the IAM user credentials for each audit team IAM user account. Add an S3 Lifecycle policy to the audit team's IAM user accounts to deny the s3:DeleteObject action during audit dates. Use AWS Key Management Service (AWS KMS) to encrypt the S3 bucket and restrict audit team IAM user accounts from accessing the KMS key.
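The versioning-plus-MFA-Delete option can be sketched as the kwargs for boto3's `put_bucket_versioning` call. A sketch under assumptions: the serial and token values are placeholders, and the call itself must be made with the bucket owner's root credentials.

```python
def enable_mfa_delete_kwargs(bucket: str, mfa_serial: str, mfa_code: str) -> dict:
    # Kwargs for s3.put_bucket_versioning. MFA Delete can only be toggled
    # by the root account, and the request must carry the MFA device
    # serial and a current token as a single space-separated "MFA" value.
    return {
        "Bucket": bucket,
        "MFA": f"{mfa_serial} {mfa_code}",
        "VersioningConfiguration": {"Status": "Enabled", "MFADelete": "Enabled"},
    }
```

With both features on, a delete merely adds a delete marker, and permanently removing a version requires a fresh MFA token, which is what protects against inadvertent destruction.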
Each day, a company's hundreds of edge devices create 1 TB of status alerts. Each alert has a file size of roughly 2 KB. A solutions architect must provide a system for ingesting and storing the alerts for further investigation. The business needs a solution that is highly available. However, the business must keep costs low and does not want to manage additional infrastructure. Additionally, the corporation intends to retain 14 days of data for immediate analysis and archive any older data. Which option BEST satisfies these requirements? Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon Elasticsearch Service (Amazon ES) cluster. Set up the Amazon ES cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.
A firm runs a two-tier image processing program. The application is divided into two Availability Zones, each with its own public and private subnets. The web tier's Application Load Balancer (ALB) makes use of public subnets. Private subnets are used by Amazon EC2 instances at the application layer. The program is functioning more slowly than planned, according to users. According to a security audit of the web server log files, the application receives millions of unauthorized requests from a tiny number of IP addresses. While the organization finds a more permanent solution, a solutions architect must tackle the urgent performance issue. What solution architecture approach should be recommended to satisfy this requirement? Modify the inbound security group for the web tier. Add a deny rule for the IP addresses that are consuming resources. Modify the network ACL for the web tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources. Modify the inbound security group for the application tier. Add a deny rule for the IP addresses that are consuming resources. Modify the network ACL for the application tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.
A business uses Amazon Elastic Container Service (Amazon ECS) to perform an image processing workload on two private subnets. Each private subnet connects to the internet through a NAT instance. Amazon S3 buckets are used to store all photos. The business is worried about the expenses associated with data transfers between Amazon ECS and Amazon S3. What actions should a solutions architect do to save money? Configure a NAT gateway to replace the NAT instances. Configure a gateway endpoint for traffic destined to Amazon S3. Configure an interface endpoint for traffic destined to Amazon S3. Configure Amazon CloudFront for the S3 bucket storing the images.
An online picture program enables users to upload photographs and modify them. The application provides two distinct service levels: free and paid. Paid users' photos are processed ahead of those submitted by free users. Amazon S3 is used to store the photos, while Amazon SQS is used to store the job information. How should a solutions architect propose a configuration? Use one SQS FIFO queue. Assign a higher priority to the paid photos so they are processed first. Use two SQS FIFO queues: one for paid and one for free. Set the free queue to use short polling and the paid queue to use long polling. Use two SQS standard queues: one for paid and one for free. Configure Amazon EC2 instances to prioritize polling for the paid queue over the free queue. Use one SQS standard queue. Set the visibility timeout of the paid photos to zero. Configure Amazon EC2 instances to prioritize visibility settings so paid photos are processed first.
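The two-standard-queue option's consumer logic can be sketched as a tiny priority poll. A sketch only: the two callables stand in for SQS `receive_message` calls against the paid and free queues, injected so the logic runs without AWS.

```python
def next_photo_job(receive_paid, receive_free):
    # Poll the paid queue first and fall back to the free queue only when
    # no paid message is waiting, so paid photos are always processed
    # ahead of free ones without per-message priority fields.
    for tier, receive in (("paid", receive_paid), ("free", receive_free)):
        messages = receive()
        if messages:
            return tier, messages[0]
    return None
```

Two separate queues are needed because SQS itself has no message-priority attribute; the ordering policy lives entirely in the consumer.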
Application developers have found that when business reporting users run big production reports to the Amazon RDS instance that powers the application, the application becomes very sluggish. While the reporting queries are executing, the RDS instance's CPU and memory usage metrics do not surpass 60%. Business reporting users must be able to produce reports without impairing the functionality of the application. Which action is necessary to achieve this? Increase the size of the RDS instance. Create a read replica and connect the application to it. Enable multiple Availability Zones on the RDS instance. Create a read replica and connect the business reports to it.
A corporation has hired a new employee as a deployment engineer. The deployment engineer will use AWS CloudFormation templates to create several AWS resources. A solutions architect wants the deployment engineer to perform job functions with the least amount of privilege possible. Which steps should the solutions architect take in combination to reach this goal? (Select two.) Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation stack operations. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached. Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation actions only. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks using that IAM role.
A corporation is using AWS to construct a new machine learning model solution. The models are constructed as self-contained microservices that fetch around 1 GB of model data from Amazon S3 and load it into memory during startup. Users access the models through an asynchronous API. Users may submit a single request or a batch of requests and designate the destination for the results. The company's models serve hundreds of users, and the models' usage patterns are erratic. Certain models may go days or weeks without being used. Other models may receive hundreds of requests concurrently. Which solution satisfies these criteria? The requests from the API are sent to an Application Load Balancer (ALB). Models are deployed as AWS Lambda functions invoked by the ALB. The requests from the API are sent to the model's Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as AWS Lambda functions triggered by SQS events. AWS Auto Scaling is enabled on Lambda to increase the number of vCPUs based on the SQS queue size. The requests from the API are sent to the model's Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as Amazon Elastic Container Service (Amazon ECS) services reading from the queue. AWS App Mesh scales the instances of the ECS cluster based on the SQS queue size. The requests from the API are sent to the model's Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as Amazon Elastic Container Service (Amazon ECS) services reading from the queue. AWS Auto Scaling is enabled on Amazon ECS for both the cluster and copies of the service based on the queue size.
A business developed a meal ordering application that collects and maintains user data for future research. The application's static front end is deployed on an Amazon EC2 instance. The front-end application communicates with the back-end application, which is hosted on a separate EC2 instance. The backend application then stores the data in Amazon RDS. What should a solutions architect do to decouple and scale the architecture? Use Amazon S3 to serve the front-end application, which sends requests to Amazon EC2 to execute the backend application. The backend application will process and store the data in Amazon RDS. Use Amazon S3 to serve the front-end application and write requests to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe Amazon EC2 instances to the HTTP/HTTPS endpoint of the topic, and process and store the data in Amazon RDS. Use an EC2 instance to serve the front end and write requests to an Amazon SQS queue. Place the backend instance in an Auto Scaling group, and scale based on the queue depth to process and store the data in Amazon RDS. Use Amazon S3 to serve the static front-end application and send requests to Amazon API Gateway, which writes the requests to an Amazon SQS queue. Place the backend instances in an Auto Scaling group, and scale based on the queue depth to process and store the data in Amazon RDS.
Each month, a leasing firm prepares and delivers PDF statements to all of its clients. Each statement is around 400 KB in size. Customers may download their statements from the website for up to 30 days after they are created. Customers are sent a ZIP file containing all of their statements at the conclusion of their three-year lease. Which storage method is the MOST cost-effective in this situation? Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 1 day. Store the statements using the Amazon S3 Glacier storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier Deep Archive storage after 30 days. Store the statements using the Amazon S3 Standard storage class. Create a lifecycle policy to move the statements to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) storage after 30 days. Store the statements using the Amazon S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create a lifecycle policy to move the statements to Amazon S3 Glacier storage after 30 days.
A company's ecommerce site is seeing a rise in visitor traffic. The company's store is implemented as a two-tier application on Amazon EC2 instances, with a web tier and a separate database tier. As traffic rises, the organization detects severe delays in delivering timely marketing and purchase confirmation emails to consumers due to the design. The organization wishes to decrease the amount of time spent addressing difficult email delivery problems and to cut operating costs. What actions should a solutions architect take to ensure that these criteria are met? Create a separate application tier using EC2 instances dedicated to email processing. Configure the web instance to send email through Amazon Simple Email Service (Amazon SES). Configure the web instance to send email through Amazon Simple Notification Service (Amazon SNS). Create a separate application tier using EC2 instances dedicated to email processing. Place the instances in an Auto Scaling group.
A business's application makes use of AWS Lambda functions. A code examination reveals that database credentials are being kept in the source code of a Lambda function, which violates the company's security policy. To comply with security policy requirements, credentials must be safely maintained and automatically cycled on a regular basis. What should a solutions architect propose as the MOST SECURE method of meeting these requirements? Store the password in AWS CloudHSM. Associate the Lambda function with a role that can use the key ID to retrieve the password from CloudHSM. Use CloudHSM to automatically rotate the password. Store the password in AWS Secrets Manager. Associate the Lambda function with a role that can use the secret ID to retrieve the password from Secrets Manager. Use Secrets Manager to automatically rotate the password. Store the password in AWS Key Management Service (AWS KMS). Associate the Lambda function with a role that can use the key ID to retrieve the password from AWS KMS. Use AWS KMS to automatically rotate the uploaded password. Move the database password to an environment variable that is associated with the Lambda function. Retrieve the password from the environment variable by invoking the function. Create a deployment script to automatically rotate the password.
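The Secrets Manager option's runtime lookup can be sketched as a small helper inside the Lambda function. A sketch only: the secret ID is a placeholder, and the client is injected so the sketch runs without AWS; in Lambda it would be `boto3.client("secretsmanager")` used under the function's execution role.

```python
import json


def db_credentials(secret_id: str, client) -> dict:
    # Fetching at runtime means the function always sees the current
    # rotated value, and nothing sensitive lives in the source code
    # or environment variables.
    resp = client.get_secret_value(SecretId=secret_id)
    return json.loads(resp["SecretString"])
```

Because Secrets Manager rotates the secret on a schedule, the function should call this on each cold start (or cache it briefly) rather than baking the value in at deploy time.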
A firm just launched a two-tier application in the us-east-1 Region's two Availability Zones. Databases are located on a private subnet, whereas web servers are located on a public subnet. The VPC is connected to the internet through an internet gateway. Amazon EC2 instances are used to host the application and database. The database servers are unable to connect to the internet in order to get fixes. A solutions architect must create a system that ensures database security while incurring the fewest operating costs. Which solution satisfies these criteria? Deploy a NAT gateway inside the public subnet for each Availability Zone and associate it with an Elastic IP address. Update the routing table of the private subnet to use it as the default route. Deploy a NAT gateway inside the private subnet for each Availability Zone and associate it with an Elastic IP address. Update the routing table of the private subnet to use it as the default route. Deploy two NAT instances inside the public subnet for each Availability Zone and associate them with Elastic IP addresses. Update the routing table of the private subnet to use it as the default route. Deploy two NAT instances inside the private subnet for each Availability Zone and associate them with Elastic IP addresses. Update the routing table of the private subnet to use it as the default route.
For each of its developer accounts, a corporation has configured AWS CloudTrail logs to transport log files to an Amazon S3 bucket. The organization has established a centralized AWS account for the purpose of facilitating administration and auditing. Internal auditors need access to CloudTrail logs, however access to all developer account users must be limited. The solution should be both secure and efficient. How should a solutions architect address these considerations? Configure an AWS Lambda function in each developer account to copy the log files to the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM user in the central account for the auditor. Attach an IAM policy providing full permissions to the bucket. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket. Configure an AWS Lambda function in the central account to copy the log files from the S3 bucket in each developer account. Create an IAM user in the central account for the auditor. Attach an IAM policy providing full permissions to the bucket.
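The centralized-bucket option above rests on two policies: a bucket policy on the central S3 bucket that lets CloudTrail write from each developer account, and a read-only policy for the auditor's IAM role. A minimal sketch follows; the bucket name and account IDs are placeholders.

```python
CENTRAL_BUCKET = "central-cloudtrail-logs"           # hypothetical bucket
DEV_ACCOUNT_IDS = ["111111111111", "222222222222"]   # hypothetical accounts

# Bucket policy: CloudTrail needs GetBucketAcl on the bucket and PutObject
# on each account's AWSLogs/ prefix.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{CENTRAL_BUCKET}",
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": [
                f"arn:aws:s3:::{CENTRAL_BUCKET}/AWSLogs/{acct}/*"
                for acct in DEV_ACCOUNT_IDS
            ],
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}

# Read-only policy for the auditor's IAM role in the central account.
auditor_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            f"arn:aws:s3:::{CENTRAL_BUCKET}",
            f"arn:aws:s3:::{CENTRAL_BUCKET}/*",
        ],
    }],
}
```

Granting the auditor only `GetObject`/`ListBucket` (rather than full permissions) is what keeps the design aligned with least privilege.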
A solutions architect is improving a website in preparation for a forthcoming musical performance. Real-time streaming of the performances will be accessible, as well as on-demand viewing. The event is anticipated to draw a large internet audience from across the world. Which service will optimize both real-time and on-demand streaming performance? Amazon CloudFront AWS Global Accelerator Amazon Route 53 Amazon S3 Transfer Acceleration.
A database is hosted on an Amazon RDS MySQL 5.6 Multi-AZ DB instance that is subjected to high-volume reads. When evaluating read performance from a secondary AWS Region, application developers detect a considerable lag. The developers need a solution that has a read replication latency of less than one second. What recommendations should the solutions architect make? Install MySQL on Amazon EC2 in the secondary Region. Migrate the database to Amazon Aurora with cross-Region replicas. Create another RDS for MySQL read replica in the secondary Region. Implement Amazon ElastiCache to improve database query performance.
Currently, a business runs a web application that is backed up by an Amazon RDS MySQL database. It features daily automatic backups that are not encrypted. A security audit entails the encryption of future backups and the destruction of unencrypted backups. Before deleting the previous backups, the firm will create at least one encrypted backup. What should be done to allow encrypted backups in the future? Enable default encryption for the Amazon S3 bucket where backups are stored. Modify the backup section of the database configuration to toggle the Enable encryption check box. Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot. Enable an encrypted read replica on RDS for MySQL. Promote the encrypted read replica to primary. Remove the original database instance.
Each entrance to a company's facility is equipped with badge readers. When badges are scanned, the readers transmit an HTTPS message indicating who tried to enter through that specific entrance. A solutions architect must develop a system that will handle these sensor signals. The solution must be highly available, with the findings made available for analysis by the company's security staff. Which system design should be recommended by the solutions architect? Launch an Amazon EC2 instance to serve as the HTTPS endpoint and to process the messages. Configure the EC2 instance to save the results to an Amazon S3 bucket. Create an HTTPS endpoint in Amazon API Gateway. Configure the API Gateway endpoint to invoke an AWS Lambda function to process the messages and save the results to an Amazon DynamoDB table. Use Amazon Route 53 to direct incoming sensor messages to an AWS Lambda function. Configure the Lambda function to process the messages and save the results to an Amazon DynamoDB table. Create a gateway VPC endpoint for Amazon S3. Configure a Site-to-Site VPN connection from the facility network to the VPC so that sensor data can be written directly to an S3 bucket by way of the VPC endpoint.
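The API Gateway plus Lambda design can be sketched as a handler that parses the badge-reader message and shapes the item destined for DynamoDB. Field names and the table write are assumptions for this sketch; the actual `put_item` call is shown only as a comment.

```python
import json
from datetime import datetime, timezone

def handler(event, context=None):
    """Hypothetical Lambda handler behind an API Gateway HTTPS endpoint.

    Parses a badge-scan message and returns the item that would be
    written to a DynamoDB table for the security team to analyze.
    """
    body = json.loads(event["body"])
    item = {
        "badge_id": body["badge_id"],
        "entry_point": body["entry_point"],
        "scanned_at": body.get("scanned_at")
                      or datetime.now(timezone.utc).isoformat(),
    }
    # In a real function:
    #   boto3.resource("dynamodb").Table("BadgeScans").put_item(Item=item)
    return {"statusCode": 200, "body": json.dumps(item)}

resp = handler({"body": json.dumps(
    {"badge_id": "B-1024", "entry_point": "north-door",
     "scanned_at": "2022-04-07T12:00:00Z"})})
```

Because both API Gateway and Lambda are managed, multi-AZ services, no single EC2 instance becomes a point of failure.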
A business maintains on-premises servers that operate a relational database. The existing database handles a large volume of read requests from users in various places. The organization wants to transition to AWS with little effort. The database solution should facilitate disaster recovery while not interfering with the existing traffic flow of the business. Which solution satisfies these criteria? Use a database in Amazon RDS with Multi-AZ and at least one read replica. Use a database in Amazon RDS with Multi-AZ and at least one standby replica. Use databases hosted on multiple Amazon EC2 instances in different AWS Regions. Use databases hosted on Amazon EC2 instances behind an Application Load Balancer in different Availability Zones.
On AWS, a business is developing a document storage solution. The application is deployed across different Amazon EC2 Availability Zones. The firm requires a highly available document store. When requested, documents must be returned quickly. The lead engineer has set up the application to store documents in Amazon Elastic Block Store (Amazon EBS), but is open to examining additional solutions to fulfill the availability requirement. What recommendations should a solutions architect make? Snapshot the EBS volumes regularly and build new volumes using those snapshots in additional Availability Zones. Use Amazon Elastic Block Store (Amazon EBS) for the EC2 instance root volumes. Configure the application to build the document store on Amazon S3. Use Amazon Elastic Block Store (Amazon EBS) for the EC2 instance root volumes. Configure the application to build the document store on Amazon S3 Glacier. Use at least three Provisioned IOPS EBS volumes for EC2 instances. Mount the volumes to the EC2 instances in a RAID 5 configuration.
A business wishes to keep track of its AWS charges for financial reporting purposes. The cloud operations team is developing an architecture for querying AWS Cost and Usage Reports for all member accounts in the AWS Organizations management account. Once a month, the team must execute this query and give a full analysis of the bill. Which solution meets these needs in the MOST scalable and cost-effective manner? Enable Cost and Usage Reports in the management account. Deliver reports to Amazon Kinesis. Use Amazon EMR for analysis. Enable Cost and Usage Reports in the management account. Deliver the reports to Amazon S3. Use Amazon Athena for analysis. Enable Cost and Usage Reports for member accounts. Deliver the reports to Amazon S3. Use Amazon Redshift for analysis. Enable Cost and Usage Reports for member accounts. Deliver the reports to Amazon Kinesis. Use Amazon QuickSight for analysis.
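The S3-plus-Athena option lets the team run the monthly analysis as plain SQL against the delivered report files. A hypothetical query is sketched below; the database/table names are placeholders, while the column names and the `year`/`month` partitions follow the Cost and Usage Report Athena integration.

```python
# Hypothetical Athena query over a Cost and Usage Report table: total
# unblended cost per member account for one billing month.
MONTHLY_COST_BY_ACCOUNT = """
SELECT line_item_usage_account_id,
       SUM(line_item_unblended_cost) AS total_cost
FROM cur_database.cur_table
WHERE year = '2022' AND month = '4'
GROUP BY line_item_usage_account_id
ORDER BY total_cost DESC
""".strip()
```

Because Athena queries the S3 data in place and bills per query, there is no cluster to keep running between the monthly reports.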
A solutions architect wants all new IAM users to meet specific password complexity requirements and to rotate their passwords on a regular basis. What should the solutions architect do to achieve this? Set an overall password policy for the entire AWS account Set a password policy for each IAM user in the AWS account. Use third-party vendor software to set password requirements. Attach an Amazon CloudWatch rule to the Create_newuser event to set the password with the appropriate requirements.
A business uses AWS to host its website. The organization has utilized Amazon EC2 Auto Scaling to accommodate the extremely fluctuating demand. Management is worried that the firm is overprovisioning its infrastructure, particularly at the three-tier application's front end. A solutions architect's primary responsibility is to guarantee that costs are minimized without sacrificing performance. What is the solution architect's role in achieving this? Use Auto Scaling with Reserved Instances. Use Auto Scaling with a scheduled scaling policy. Use Auto Scaling with the suspend-resume feature. Use Auto Scaling with a target tracking scaling policy.
A business is in the process of transferring its apps to AWS. At the moment, on-premises apps create hundreds of terabytes of data, which is kept on a shared file system. The organization is using a cloud-based analytics solution to derive insights from this data on an hourly basis. The business requires a solution to manage continuous data transfer between its on-premises shared file system and Amazon S3. Additionally, the solution must be capable of coping with brief gaps in internet access. Which data transfer option should the business use to meet these requirements? AWS DataSync AWS Migration Hub AWS Snowball Edge Storage Optimized AWS Transfer for SFTP.
A business wishes to manage a fleet of Amazon EC2 instances using AWS Systems Manager. No EC2 instances are permitted to have internet access, per the company's security needs. A solutions architect is responsible for designing network connection between EC2 instances and Systems Manager while adhering to this security requirement. Which solution will satisfy these criteria? Deploy the EC2 instances into a private subnet with no route to the internet. Configure an interface VPC endpoint for Systems Manager. Update routes to use the endpoint. Deploy a NAT gateway into a public subnet. Configure private subnets with a default route to the NAT gateway. Deploy an internet gateway. Configure a network ACL to deny traffic to all destinations except Systems Manager.
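The interface-endpoint option above requires endpoints for three Systems Manager services (`ssm`, `ssmmessages`, `ec2messages`) so that instances in a private subnet can reach the service without internet access. The sketch below builds the parameter sets that would be passed to `ec2.create_vpc_endpoint`; all IDs are placeholders.

```python
# Parameters for creating the interface VPC endpoints Systems Manager
# needs. VPC, subnet, and security group IDs are hypothetical; the
# security group should allow inbound HTTPS from the instances.
REGION = "us-east-1"
VPC_ID = "vpc-0123456789abcdef0"
SUBNET_IDS = ["subnet-0aaa11112222bbbb3"]
SG_IDS = ["sg-0ccc44445555dddd6"]

endpoint_requests = [
    {
        "VpcId": VPC_ID,
        "VpcEndpointType": "Interface",
        "ServiceName": f"com.amazonaws.{REGION}.{service}",
        "SubnetIds": SUBNET_IDS,
        "SecurityGroupIds": SG_IDS,
        "PrivateDnsEnabled": True,
    }
    for service in ("ssm", "ssmmessages", "ec2messages")
]
```

With `PrivateDnsEnabled` set, the instances resolve the standard Systems Manager hostnames to the endpoints' private IP addresses, so no agent reconfiguration is needed.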
On Amazon EC2 instances, a business runs an application. The application is deployed on private subnets inside the us-east-1 Region's three Availability Zones. The instances must have internet access in order to download files. The organization is looking for a design that is highly available across the Region. Which solution should be implemented to guarantee that internet access is not disrupted? Deploy a NAT instance in a private subnet of each Availability Zone. Deploy a NAT gateway in a public subnet of each Availability Zone. Deploy a transit gateway in a private subnet of each Availability Zone. Deploy an internet gateway in a public subnet of each Availability Zone.
A business operates a remote factory with unreliable internet connectivity. The factory must collect and interpret machine and sensor data in order to detect items on its conveyor belts and begin robotic movement to route them to the appropriate spot. For on-premises control systems, predictable low-latency compute processing is critical. Which data processing solution should the manufacturer use? Amazon CloudFront Lambda@Edge functions An Amazon EC2 instance that has enhanced networking enabled An Amazon EC2 instance that uses an AWS Global Accelerator An Amazon Elastic Block Store (Amazon EBS) volume on an AWS Snowball Edge cluster.
An application is deployed across various Availability Zones using Amazon EC2 instances. The instances are deployed behind an Application Load Balancer in an Amazon EC2 Auto Scaling group. The program operates optimally when the CPU usage of the Amazon EC2 instances is close to or equal to 40%. What should a solutions architect do to ensure that the required performance is maintained throughout all group instances? Use a simple scaling policy to dynamically scale the Auto Scaling group. Use a target tracking policy to dynamically scale the Auto Scaling group. Use an AWS Lambda function to update the desired Auto Scaling group capacity. Use scheduled scaling actions to scale up and scale down the Auto Scaling group.
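A target tracking policy expresses the 40% goal directly: the Auto Scaling group adds or removes instances to hold the metric near the target. Below is a sketch of the parameters that would be passed to `put_scaling_policy`; the group and policy names are placeholders.

```python
# Target tracking scaling policy that keeps average CPU utilization of
# the Auto Scaling group near 40% (names are hypothetical).
target_tracking_policy = {
    "AutoScalingGroupName": "web-asg",
    "PolicyName": "keep-cpu-near-40",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 40.0,
    },
}
```

Unlike a simple scaling policy, no alarm thresholds or step adjustments need to be tuned by hand; the service derives them from the target value.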
The web application of a business is hosted on Amazon EC2 instances and is protected by an Application Load Balancer. The corporation recently altered its policy, requiring that the application be accessible exclusively from a single nation. Which setup will satisfy this criterion? Configure the security group for the EC2 instances. Configure the security group on the Application Load Balancer. Configure AWS WAF on the Application Load Balancer in a VPC. Configure the network ACL for the subnet that contains the EC2 instances.
AWS Lambda functions are being developed and deployed by an engineering team. The team must build roles and administer policies in AWS IAM in order to set the Lambda functions' rights. How should the team's permissions be adjusted to correspond to the principle of least privilege? Create an IAM role with a managed policy attached. Allow the engineering team and the Lambda functions to assume this role. Create an IAM group for the engineering team with an IAMFullAccess policy attached. Add all the users from the team to this IAM group. Create an execution role for the Lambda functions. Attach a managed policy that has permission boundaries specific to these Lambda functions. Create an IAM role with a managed policy attached that has permission boundaries specific to the Lambda functions. Allow the engineering team to assume this role.
A business is migrating from on-premises Oracle to Amazon Aurora PostgreSQL. Numerous apps write to the same tables in the database. The apps must be transferred sequentially, with a month between migrations. Management has raised concerns about the database's heavy read and write activity. Throughout the entire migration process, the data must be maintained in sync across both databases. What recommendations should a solutions architect make? Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a change data capture (CDC) replication task and a table mapping to select all tables. Use AWS DataSync for the initial migration. Use AWS Database Migration Service (AWS DMS) to create a full load plus change data capture (CDC) replication task and a table mapping to select all tables. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a memory optimized replication instance. Create a full load plus change data capture (CDC) replication task and a table mapping to select all tables. Use the AWS Schema Conversion Tool with AWS Database Migration Service (AWS DMS) using a compute optimized replication instance. Create a full load plus change data capture (CDC) replication task and a table mapping to select the largest tables.
A solutions architect is migrating static content from an Amazon EC2 instance-hosted public website to an Amazon S3 bucket. The static assets will be distributed using an Amazon CloudFront distribution. The EC2 instances' security group limits access to a subset of IP ranges. Access to static material should be regulated in a similar manner. Which combination of actions will satisfy these criteria? (Select two.) Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects. Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution. Create a new security group that includes the same IP restrictions that exist in the current EC2 security group. Associate this new security group with the CloudFront distribution. Create a new security group that includes the same IP restrictions that exist in the current EC2 security group. Associate this new security group with the S3 bucket hosting the static content. Create a new IAM role and associate the role with the distribution. Change the permissions either on the S3 bucket or on the files within the S3 bucket so that only the newly created IAM role has read and download permissions.
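For the origin access identity (OAI) half of the answer, the bucket policy grants `s3:GetObject` to the OAI principal and nothing else, so viewers can reach the objects only through CloudFront (where the AWS WAF IP restrictions apply). The bucket name and OAI ID below are placeholders.

```python
# S3 bucket policy allowing only the CloudFront OAI to read the static
# assets; bucket name and OAI ID are hypothetical.
BUCKET = "static-assets-bucket"
OAI_ID = "E2EXAMPLEOAI"

oai_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
            "AWS": ("arn:aws:iam::cloudfront:user/"
                    f"CloudFront Origin Access Identity {OAI_ID}")
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}
```

Note that security groups cannot be attached to CloudFront or S3, which is why the distractor options in the question do not work.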
AWS is used by an ecommerce firm to operate a multi-tier application. Amazon EC2 hosts both the front-end and back-end layers, while Amazon RDS for MySQL hosts the database. The backend tier is responsible for communication with the RDS instance. There are many requests to the database to get identical datasets, which results in performance slowdowns. Which actions should be performed to optimize the backend's performance? Implement Amazon SNS to store the database calls. Implement Amazon ElastiCache to cache the large datasets. Implement an RDS for MySQL read replica to cache database calls. Implement Amazon Kinesis Data Firehose to stream the calls to the database.
A user owns a MySQL database, which is used by a variety of customers that anticipate a maximum delay of 100 milliseconds on queries. Once an entry is recorded in the database, it is almost never modified. Clients get access to a maximum of one record at a time. Due to rising customer demand, database access has expanded tremendously. As a consequence, the resulting load will quickly surpass the capability of even the most costly hardware available. The user wants to move to AWS and is open to experimenting with new database systems. Which solution would resolve the database load problem and provide nearly limitless future scalability? Amazon RDS Amazon DynamoDB Amazon Redshift AWS Data Pipeline.
Amazon DynamoDB is being used by an entertainment firm to store media metadata. The application requires extensive reading and often encounters delays. The organization lacks the people necessary to manage extra operational expenses and requires an increase in DynamoDB's performance efficiency without changing the application. What solution architecture approach should be recommended to satisfy this requirement? Use Amazon ElastiCache for Redis. Use Amazon DynamoDB Accelerator (DAX). Replicate data by using DynamoDB global tables. Use Amazon ElastiCache for Memcached with Auto Discovery enabled.
In a branch office, a firm runs an application in a tiny data closet with no virtualized computing resources. The application's data is saved on a network file system (NFS) volume. Daily offsite backups of the NFS volume are required by compliance requirements. Which solution satisfies these criteria? Install an AWS Storage Gateway file gateway on premises to replicate the data to Amazon S3. Install an AWS Storage Gateway file gateway hardware appliance on premises to replicate the data to Amazon S3. Install an AWS Storage Gateway volume gateway with stored volumes on premises to replicate the data to Amazon S3. Install an AWS Storage Gateway volume gateway with cached volumes on premises to replicate the data to Amazon S3.
A business is using AWS to host an election reporting website for consumers worldwide. The website makes use of Amazon EC2 instances in an Auto Scaling group with Application Load Balancers for the web and application layers. The database layer is powered by Amazon RDS for MySQL. The website is updated once an hour with election results and has previously seen hundreds of individuals check the data. The firm anticipates a big boost in demand in the coming months as a result of impending elections in many nations. A solutions architect's objective is to increase the website's capacity to manage increased demand while limiting the requirement for more EC2 instances. Which solution will satisfy these criteria? Launch an Amazon ElastiCache cluster to cache common database queries. Launch an Amazon CloudFront web distribution to cache commonly requested website content. Enable disk-based caching on the EC2 instances to cache commonly requested website content. Deploy a reverse proxy into the design using an EC2 instance with caching enabled for commonly requested website content.
Amazon S3 is used by a corporation to store historical weather recordings. The records are accessed through a URL that refers to a domain name on the company's website. Subscriptions enable users from all around the globe to access this material. Although the organization's core domain name is hosted by a third-party operator, the company recently transferred some of its services to Amazon Route 53. The corporation wants to consolidate contracts, minimize user latency, and lower the cost of offering the application to subscribers. Which solution satisfies these criteria? Create a web distribution on Amazon CloudFront to serve the S3 content for the application. Create a CNAME record in a Route 53 hosted zone that points to the CloudFront distribution, resolving to the application's URL domain name. Create a web distribution on Amazon CloudFront to serve the S3 content for the application. Create an ALIAS record in the Amazon Route 53 hosted zone that points to the CloudFront distribution, resolving to the application's URL domain name. Create an A record in a Route 53 hosted zone for the application. Create a Route 53 traffic policy for the web application, and configure a geolocation rule. Configure health checks to check the health of the endpoint and route DNS queries to other endpoints if an endpoint is unhealthy. Create an A record in a Route 53 hosted zone for the application. Create a Route 53 traffic policy for the web application, and configure a geoproximity rule. Configure health checks to check the health of the endpoint and route DNS queries to other endpoints if an endpoint is unhealthy.
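The ALIAS-record option can be sketched as a Route 53 change batch. The domain and distribution names below are placeholders; `Z2FDTNDATAQYW2` is the fixed hosted zone ID that alias records targeting CloudFront distributions use.

```python
# Route 53 change batch: alias A record pointing the application's
# domain at a CloudFront distribution (domain names are hypothetical).
change_batch = {
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "weather.example.com",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z2FDTNDATAQYW2",
                "DNSName": "d123example.cloudfront.net",
                "EvaluateTargetHealth": False,
            },
        },
    }]
}
```

Unlike a CNAME, an alias record works at the zone apex and Route 53 answers the queries at no per-query charge for alias targets, which supports the cost-reduction requirement.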
A business is developing a web application that will be accessible over the internet. The application is hosted on Amazon EC2 for Linux instances that leverage Amazon RDS MySQL Multi-AZ DB instances to store sensitive user data. Public subnets are used for EC2 instances, whereas private subnets are used for RDS DB instances. The security team has required that web-based attacks on database instances be prevented. What recommendations should a solutions architect make? Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Configure the EC2 instance iptables rules to drop suspicious web traffic. Create a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the individual EC2 instances. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Move DB instances to the same subnets that EC2 instances are located in. Create a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the individual EC2 instances. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Use AWS WAF to monitor inbound web traffic for threats. Create a security group for the web application servers and a security group for the DB instances. Configure the RDS security group to only allow port 3306 inbound from the web application server security group. Ensure the EC2 instances are part of an Auto Scaling group and are behind an Application Load Balancer. Use AWS WAF to monitor inbound web traffic for threats. Configure the Auto Scaling group to automatically create new DB instances under heavy traffic. Create a security group for the RDS DB instances. Configure the RDS security group to only allow port 3306 inbound.
A solutions architect is responsible for designing a solution for migrating a persistent database from on-premises to AWS. According to the database administrator, the database needs 64,000 IOPS. If feasible, the database administrator wishes to host the database instance on a single Amazon Elastic Block Store (Amazon EBS) volume. Which option satisfies the database administrator's requirements the most effectively? Use an instance from the I3 I/O optimized family and leverage local ephemeral storage to achieve the IOPS requirement. Create a Nitro-based Amazon EC2 instance with an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1) volume attached. Configure the volume to have 64,000 IOPS. Create and map an Amazon Elastic File System (Amazon EFS) volume to the database instance and use the volume to achieve the required IOPS for the database. Provision two volumes and assign 32,000 IOPS to each. Create a logical volume at the operating system level that aggregates both volumes to achieve the IOPS requirements.
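The single-volume option works because Provisioned IOPS SSD (io1) volumes support up to 64,000 IOPS, but only when attached to Nitro-based EC2 instances. The sketch below shows the volume parameters; the Availability Zone and size are assumptions (io1 allows at most 50 IOPS per GiB, so the size must be large enough to carry the IOPS).

```python
# Parameters for an io1 EBS volume provisioned at 64,000 IOPS
# (values are a sketch; attach it to a Nitro-based instance).
volume_request = {
    "AvailabilityZone": "us-east-1a",
    "VolumeType": "io1",
    "Size": 1300,      # GiB; io1 permits up to 50 IOPS per GiB
    "Iops": 64000,
}

# Sanity check: requested IOPS within the 50:1 IOPS-to-GiB ratio.
assert volume_request["Iops"] <= volume_request["Size"] * 50
```

Splitting the requirement across two 32,000-IOPS volumes would also work mechanically but contradicts the administrator's single-volume preference.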
A solutions architect is tasked with the responsibility of creating the architecture for a new application that will be deployed to the AWS Cloud. Amazon EC2 On-Demand Instances will be used to execute the application, which will automatically scale across different Availability Zones. Throughout the day, the EC2 instances will scale up and down periodically. The load distribution will be handled by an Application Load Balancer (ALB). The architecture must be capable of managing dispersed session data. The firm is ready to make necessary adjustments to the code. What is the solution architect's responsibility in ensuring that the design enables distributed session data management? Use Amazon ElastiCache to manage and store session data. Use session affinity (sticky sessions) of the ALB to manage session data. Use Session Manager from AWS Systems Manager to manage the session. Use the GetSessionToken API operation in AWS Security Token Service (AWS STS) to manage the session.
Multiple Amazon EC2 Linux instances are used by a business in a VPC to execute applications that need a hierarchical directory structure. The apps must be able to access and write to shared storage fast and simultaneously. How is this accomplished? Create an Amazon Elastic File System (Amazon EFS) file system and mount it from each EC2 instance. Create an Amazon S3 bucket and permit access from all the EC2 instances in the VPC. Create a file system on an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1) volume. Attach the volume to all the EC2 instances. Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes attached to each EC2 instance. Synchronize the Amazon Elastic Block Store (Amazon EBS) volumes across the different EC2 instances.
A solutions architect is in the process of transferring a document management task to Amazon Web Services. The workload stores and tracks 7 terabytes of contract documents on a shared storage file system and an external database. The majority of records are archived and ultimately recovered for future reference. During the migration, the application cannot be updated, and the storage solution must be highly available. Web servers that are part of an Auto Scaling group on Amazon EC2 collect and store documents. There may be up to 12 instances in the Auto Scaling group. Which option best fits these criteria in terms of cost-effectiveness? Provision an enhanced networking optimized EC2 instance to serve as a shared NFS storage system. Create an Amazon S3 bucket that uses the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Mount the S3 bucket to the EC2 instances in the Auto Scaling group. Create an SFTP server endpoint by using AWS Transfer for SFTP and an Amazon S3 bucket. Configure the EC2 instances in the Auto Scaling group to connect to the SFTP server. Create an Amazon Elastic File System (Amazon EFS) file system that uses the EFS Standard-Infrequent Access (EFS Standard-IA) storage class. Mount the file system to the EC2 instances in the Auto Scaling group.
A business maintains monthly phone records. Statistically, recorded data may be referred to randomly within a year but is seldom retrieved beyond that time period. Files less than a year old must be queried and retrieved immediately. It is okay for there to be a delay in obtaining older files. A solutions architect must ensure that the captured data is stored at the lowest possible cost. Which option is the cheapest? Store individual files in Amazon S3 Glacier and store search metadata in object tags created in S3 Glacier. Query S3 Glacier tags and retrieve the files from S3 Glacier. Store individual files in Amazon S3. Use lifecycle policies to move the files to Amazon S3 Glacier after 1 year. Query and retrieve the files from Amazon S3 or S3 Glacier. Archive individual files and store search metadata for each archive in Amazon S3. Use lifecycle policies to move the files to Amazon S3 Glacier after 1 year. Query and retrieve the files by searching for metadata from Amazon S3. Archive individual files in Amazon S3. Use lifecycle policies to move the files to Amazon S3 Glacier after 1 year. Store search metadata in Amazon DynamoDB. Query the files from DynamoDB and retrieve them from Amazon S3 or S3 Glacier.
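The "move to Glacier after 1 year" piece of these options maps to an S3 lifecycle rule. A minimal sketch follows; the rule ID and key prefix are placeholders.

```python
# S3 lifecycle configuration that transitions call recordings to the
# GLACIER storage class after one year (ID and prefix are hypothetical).
lifecycle_configuration = {
    "Rules": [{
        "ID": "archive-after-1-year",
        "Status": "Enabled",
        "Filter": {"Prefix": "recordings/"},
        "Transitions": [{
            "Days": 365,
            "StorageClass": "GLACIER",
        }],
    }]
}
```

The rule runs automatically, so recent files stay immediately retrievable in S3 while older files age into cheaper archival storage with a retrieval delay.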
A business wants to move a workload to AWS. The chief information security officer demands that any data stored in the cloud be encrypted at rest. The organization desires total control over the encryption key lifecycle management process. Independent of AWS CloudTrail, the organization must be able to promptly delete key material and audit key use. The selected services should interface with other AWS storage services. Which services adhere to these security standards? AWS CloudHSM with the CloudHSM client AWS Key Management Service (AWS KMS) with AWS CloudHSM AWS Key Management Service (AWS KMS) with an external key material origin AWS Key Management Service (AWS KMS) with AWS managed customer master keys (CMKs).
A business hosts an application on Amazon Web Services (AWS) and utilizes Amazon DynamoDB as the database. To handle data from the database, the organization adds Amazon EC2 instances to a private network. The organization connects to DynamoDB using two NAT instances. The corporation wants to decommission its NAT instances. A solutions architect must develop a solution that connects to DynamoDB and is self-managing. Which approach is the MOST cost-effective in terms of meeting these requirements? Create a gateway VPC endpoint to provide connectivity to DynamoDB. Configure a managed NAT gateway to provide connectivity to DynamoDB. Establish an AWS Direct Connect connection between the private network and DynamoDB. Deploy an AWS PrivateLink endpoint service between the private network and DynamoDB.
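The gateway-endpoint option can be sketched as the parameter set for `ec2.create_vpc_endpoint`; the VPC and route table IDs are placeholders. Gateway endpoints for DynamoDB carry no hourly or data processing charge, which is what makes this the most cost-effective choice over a NAT gateway.

```python
# Parameters for a gateway VPC endpoint to DynamoDB (IDs are
# hypothetical). Route table entries to the endpoint are managed
# automatically for the tables listed here.
gateway_endpoint_request = {
    "VpcId": "vpc-0123456789abcdef0",
    "VpcEndpointType": "Gateway",
    "ServiceName": "com.amazonaws.us-east-1.dynamodb",
    "RouteTableIds": ["rtb-0aaa11112222bbbb3"],
}
```

Traffic to DynamoDB then stays on the AWS network and never traverses the internet, so the NAT instances can be retired outright.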
A business hosts a three-tier web application on a virtual private cloud (VPC) that spans various Availability Zones. For the application layer, Amazon EC2 instances are deployed in an Auto Scaling group. The organization must develop an automated scaling strategy that analyzes the daily and weekly workload patterns for each resource. The setup must correctly scale resources in response to both forecasted and actual changes in consumption. Which scaling approach, if any, should a solutions architect propose in order to satisfy these requirements? Implement dynamic scaling with step scaling based on average CPU utilization from the EC2 instances. Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking. Create an automated scheduled scaling action based on the traffic patterns of the web application. Set up a simple scaling policy. Increase the cooldown period based on the EC2 instance startup time.
A business administers its own Amazon EC2 instances, which are configured to operate MySQL databases. The firm manages replication and scaling manually as demand grows or falls. The organization needs a new solution that makes it easier to add or remove computing resources from its database layer as required. Additionally, the solution must improve performance, scalability, and durability with minimal operational effort. Which solution satisfies these criteria? Migrate the databases to Amazon Aurora Serverless for Aurora MySQL. Migrate the databases to Amazon Aurora Serverless for Aurora PostgreSQL. Combine the databases into one larger MySQL database. Run the larger database on larger EC2 instances. Create an EC2 Auto Scaling group for the database tier. Migrate the existing databases to the new environment.
A business wants to transfer two applications to AWS. Both applications process a large number of files concurrently and access the same files. Both applications must read the files with minimal latency. Which architecture would a solutions architect suggest in this case? Configure two AWS Lambda functions to run the applications. Create an Amazon EC2 instance with an instance store volume to store the data. Configure two AWS Lambda functions to run the applications. Create an Amazon EC2 instance with an Amazon Elastic Block Store (Amazon EBS) volume to store the data. Configure one memory optimized Amazon EC2 instance to run both applications simultaneously. Create an Amazon Elastic Block Store (Amazon EBS) volume with Provisioned IOPS to store the data. Configure two Amazon EC2 instances to run both applications. Configure Amazon Elastic File System (Amazon EFS) with General Purpose performance mode and Bursting Throughput mode to store the data.
A corporation operates a containerized application in an on-premises data center using a Kubernetes cluster. The organization stores data in a MongoDB database. The organization wants to transition some of these environments to AWS, but no modifications to the code or deployment methods are currently feasible. The business needs a solution that lowers operating costs. Which solution satisfies these criteria? Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes for compute and MongoDB on EC2 for data storage. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute and Amazon DynamoDB for data storage. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes for compute and Amazon DynamoDB for data storage. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute and Amazon DocumentDB (with MongoDB compatibility) for data storage.
A business has a mobile chat application that utilizes an Amazon DynamoDB data store. Users want the lowest possible latency when reading new messages. A solutions architect's objective is to provide the optimal solution with the fewest possible application modifications. Which technique should be chosen by the solutions architect? Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code to use the DAX endpoint. Add DynamoDB read replicas to handle the increased read load. Update the application to point to the read endpoint for the read replicas. Double the number of read capacity units for the new messages table in DynamoDB. Continue to use the existing DynamoDB endpoint. Add an Amazon ElastiCache for Redis cache to the application stack. Update the application to point to the Redis cache endpoint instead of DynamoDB.
A business utilizes Amazon Web Services to host all components of its three-tier application. The organization wants to identify any possible security vulnerabilities inside the environment automatically. The organization wants to keep track of any discoveries and to warn administrators in the event of a suspected breach. Which solution satisfies these criteria? Set up AWS WAF to evaluate suspicious web traffic. Create AWS Lambda functions to log any findings in Amazon CloudWatch and send email notifications to administrators. Set up AWS Shield to evaluate suspicious web traffic. Create AWS Lambda functions to log any findings in Amazon CloudWatch and send email notifications to administrators. Deploy Amazon Inspector to monitor the environment and generate findings in Amazon CloudWatch. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to notify administrators by email. Deploy Amazon GuardDuty to monitor the environment and generate findings in Amazon CloudWatch. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to notify administrators by email.
On an Amazon RDS MySQL DB instance, a company's production application processes online transaction processing (OLTP) transactions. The firm is also introducing a new reporting tool that needs access to the same data. The reporting tool must be highly available and must not affect the performance of the production application. How is this accomplished? Create hourly snapshots of the production RDS DB instance. Create a Multi-AZ RDS Read Replica of the production RDS DB instance. Create multiple RDS Read Replicas of the production RDS DB instance. Place the Read Replicas in an Auto Scaling group. Create a Single-AZ RDS Read Replica of the production RDS DB instance. Create a second Single-AZ RDS Read Replica from the replica.
A solutions architect must offer a fully managed alternative to an on-premises system that enables file interchange between workers and partners. Workers connecting from on-premises systems, remote employees, and external partners must have easy access to the solution. Which solution satisfies these criteria? Use AWS Transfer for SFTP to transfer files into and out of Amazon S3. Use AWS Snowball Edge for local storage and large-scale data transfers. Use Amazon FSx to store and transfer files to make them available remotely. Use AWS Storage Gateway to create a volume gateway to store and transfer files to Amazon S3.
A business is operating an application on Amazon EC2 instances on a private subnet. The program must be capable of storing and retrieving data from Amazon S3. To save expenses, the corporation wishes to optimize the configuration of its AWS resources. How should the business go about doing this? Deploy a NAT gateway to access the S3 buckets. Deploy AWS Storage Gateway to access the S3 buckets. Deploy an S3 gateway endpoint to access the S3 buckets. Deploy an S3 interface endpoint to access the S3 buckets.
AWS is used by a business to store user data. The data is continually accessed, with peak consumption occurring during work hours. Access patterns vary, with some data going months without being accessed. A solutions architect must pick a solution that is both cost efficient and durable, while also maintaining a high degree of availability. Which storage option satisfies these criteria? Amazon S3 Standard Amazon S3 Intelligent-Tiering Amazon S3 Glacier Deep Archive Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).
A business uses Amazon EC2 instances to operate a legacy data processing application. Although data is processed sequentially, the order of the results is irrelevant. The application is designed in a monolithic fashion. The only method for the business to expand the application in response to rising demand is to increase the instance size. The engineers at the organization have chosen to redesign the program using a microservices architecture using Amazon Elastic Container Service (Amazon ECS). What should a solutions architect propose for inter-microservice communication? Create an Amazon Simple Queue Service (Amazon SQS) queue. Add code to the data producers, and send data to the queue. Add code to the data consumers to process data from the queue. Create an Amazon Simple Notification Service (Amazon SNS) topic. Add code to the data producers, and publish notifications to the topic. Add code to the data consumers to subscribe to the topic. Create an AWS Lambda function to pass messages. Add code to the data producers to call the Lambda function with a data object. Add code to the data consumers to receive a data object that is passed from the Lambda function. Create an Amazon DynamoDB table. Enable DynamoDB Streams. Add code to the data producers to insert data into the table. Add code to the data consumers to use the DynamoDB Streams API to detect new table entries and retrieve the data.
A business is considering migrating a classic application to AWS. Currently, the application communicates with an on-premises storage system through NFS. The application cannot be modified to use any communication protocol other than NFS. Which storage solution should a solutions architect propose for post-migration use? AWS DataSync Amazon Elastic Block Store (Amazon EBS) Amazon Elastic File System (Amazon EFS) Amazon EMR File System (Amazon EMRFS).
A business wishes to run its mission-critical apps in containers in order to fulfill scalability and availability requirements. The corporation would rather concentrate on key application maintenance. The firm does not want to be responsible for provisioning and maintaining the containerized workload's underlying infrastructure. What actions should a solutions architect take to ensure that these criteria are met? Use Amazon EC2 instances, and install Docker on the instances. Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 worker nodes. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. Use Amazon EC2 instances from an Amazon Elastic Container Service (Amazon ECS)-optimized Amazon Machine Image (AMI).
A business operates a service that generates event data. The firm wishes to use AWS for the purpose of processing event data as it is received. The data is structured in a certain sequence that must be preserved during processing. The firm wishes to deploy a solution with the lowest possible operating costs. How should a solutions architect accomplish this? Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue to hold messages. Set up an AWS Lambda function to process messages from the queue. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing payloads to process. Configure an AWS Lambda function as a subscriber. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to hold messages. Set up an AWS Lambda function to process messages from the queue independently. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing payloads to process. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a subscriber.
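A minimal local sketch (plain Python, not the SQS API; the `FifoQueue` class and `dedup_id` parameter are illustrative) of the two FIFO-queue properties the ordered-processing scenario relies on: strict arrival-order delivery and deduplication of repeated sends.

```python
from collections import deque

class FifoQueue:
    """Toy model of FIFO-queue semantics: in-order delivery plus dedup."""

    def __init__(self):
        self._q = deque()
        self._seen = set()          # models content-based deduplication

    def send(self, dedup_id, body):
        if dedup_id in self._seen:  # duplicate within the dedup window: dropped
            return
        self._seen.add(dedup_id)
        self._q.append(body)

    def receive(self):
        return self._q.popleft() if self._q else None

q = FifoQueue()
events = [("a", "created"), ("b", "updated"), ("b", "updated"), ("c", "deleted")]
for dedup_id, body in events:
    q.send(dedup_id, body)

processed = []
while (msg := q.receive()) is not None:
    processed.append(msg)
print(processed)  # ['created', 'updated', 'deleted']
```

Events come out in exactly the order they were accepted, and the duplicate `("b", "updated")` send is dropped, mirroring why a FIFO queue (rather than a standard queue or an SNS topic) preserves the required processing sequence.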
The database of a business is hosted in the us-east-1 Region on an Amazon Aurora MySQL DB cluster. The database is around 4 terabytes in size. The company's disaster recovery plan should be expanded to include the us-west-2 Region. The firm must be able to fail over to us-west-2 within a 15-minute recovery time objective (RTO). What recommendations should a solutions architect make to satisfy these requirements? Create a Multi-Region Aurora MySQL DB cluster in us-east-1 and us-west-2. Use an Amazon Route 53 health check to monitor us-east-1 and fail over to us-west-2 upon failure. Take a snapshot of the DB cluster in us-east-1. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function upon receipt of resource events. Configure the Lambda function to copy the snapshot to us-west-2 and restore the snapshot in us-west-2 when failure is detected. Create an AWS CloudFormation script to create another Aurora MySQL DB cluster in us-west-2 in case of failure. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function upon receipt of resource events. Configure the Lambda function to deploy the AWS CloudFormation stack in us-west-2 when failure is detected. Recreate the database as an Aurora global database with the primary DB cluster in us-east-1 and a secondary DB cluster in us-west-2. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function upon receipt of resource events. Configure the Lambda function to promote the DB cluster in us-west-2 when failure is detected.
A business is installing an application that handles near-real-time streaming data. The workload will be run on Amazon EC2 instances. The network architecture must be configured in such a way that the latency between nodes is as low as possible. Which combination of network solutions will meet these requirements? (Select two.) Enable and configure enhanced networking on each EC2 instance. Group the EC2 instances in separate accounts. Run the EC2 instances in a cluster placement group. Attach multiple elastic network interfaces to each EC2 instance. Use Amazon Elastic Block Store (Amazon EBS) optimized instance types.
A business outsources its marketplace analytics management to a third-party partner. The vendor requires restricted programmatic access to the company's account's resources. All necessary policies have been established to ensure acceptable access. Which new component provides the vendor the MOST secure access to the account? Create an IAM user. Implement a service control policy (SCP). Use a cross-account role with an external ID. Configure a single sign-on (SSO) identity provider.
A business uses two Amazon EC2 instances to run a dynamic web application. The organization has its own SSL certificate, which is used to complete SSL termination on each instance. Recently, there has been an increase in traffic, and the operations team concluded that SSL encryption and decryption is causing the web servers' compute capacity to exceed its limit. What should a solutions architect do to optimize the application's performance? Create a new SSL certificate using AWS Certificate Manager (ACM). Install the ACM certificate on each instance. Create an Amazon S3 bucket. Migrate the SSL certificate to the S3 bucket. Configure the EC2 instances to reference the bucket for SSL termination. Create another EC2 instance as a proxy server. Migrate the SSL certificate to the new instance and configure it to direct connections to the existing EC2 instances. Import the SSL certificate into AWS Certificate Manager (ACM). Create an Application Load Balancer with an HTTPS listener that uses the SSL certificate from ACM.
A corporation is doing an evaluation of an existing workload placed on AWS using the AWS Well-Architected Framework. The evaluation discovered a public-facing website operating on the same Amazon EC2 instance as a freshly installed Microsoft Active Directory domain controller to support other AWS services. A solutions architect must offer a new design that increases the architecture's security and reduces the administrative burden on IT workers. What recommendations should the solutions architect make? Use AWS Directory Service to create a managed Active Directory. Uninstall Active Directory on the current EC2 instance. Create another EC2 instance in the same subnet and reinstall Active Directory on it. Uninstall Active Directory. Use AWS Directory Service to create an Active Directory connector. Proxy Active Directory requests to the Active Directory domain controller running on the current EC2 instance. Enable AWS Single Sign-On (AWS SSO) with Security Assertion Markup Language (SAML) 2.0 federation with the current Active Directory controller. Modify the EC2 instance's security group to deny public access to Active Directory.
Amazon Aurora was recently selected as the data repository for a company's worldwide ecommerce platform. When developers run extensive reports, they discover that the ecommerce application is performing badly. When monthly reports are performed, a solutions architect notices that the ReadIOPS and CPUUtilization metrics spike. Which approach is the MOST cost-effective? Migrate the monthly reporting to Amazon Redshift. Migrate the monthly reporting to an Aurora Replica. Migrate the Aurora database to a larger instance class. Increase the Provisioned IOPS on the Aurora instance.
A business's backup data totals 700 terabytes (TB) and is kept in network attached storage (NAS) at its data center. This backup data must be available in the event of occasional regulatory inquiries and preserved for a period of seven years. The organization has chosen to relocate its backup data from its on-premises data center to Amazon Web Services (AWS). Within one month, the migration must be completed. The company's public internet connection provides 500 Mbps of dedicated capacity for data transport. What should a solutions architect do to ensure that data is migrated and stored at the lowest possible cost? Order AWS Snowball devices to transfer the data. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive. Deploy a VPN connection between the data center and Amazon VPC. Use the AWS CLI to copy the data from on premises to Amazon S3 Glacier. Provision a 500 Mbps AWS Direct Connect connection and transfer the data to Amazon S3. Use a lifecycle policy to transition the files to Amazon S3 Glacier Deep Archive. Use AWS DataSync to transfer the data and deploy a DataSync agent on premises. Use the DataSync task to copy files from the on-premises NAS storage to Amazon S3 Glacier.
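A quick feasibility check makes the constraint concrete (an illustrative calculation; the `transfer_days` helper is not from the question): moving 700 TB over a fully utilized 500 Mbps link takes roughly 130 days, far beyond the one-month window, which is why an offline transfer with Snowball devices fits.

```python
def transfer_days(terabytes: float, link_mbps: float) -> float:
    """Days needed to move `terabytes` over a `link_mbps` link at 100% utilization."""
    bits = terabytes * 1e12 * 8          # TB -> bits (decimal units)
    seconds = bits / (link_mbps * 1e6)   # Mbps -> bits per second
    return seconds / 86400               # seconds -> days

print(f"{transfer_days(700, 500):.0f} days")  # ~130 days
```

Real-world throughput (protocol overhead, shared links) would only lengthen this estimate.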
The main and secondary data centers of a business are located 500 miles (804.7 kilometers) apart and are linked through high-speed fiber-optic cable. For a mission-critical workload, the organization requires a highly available and secure network link between its data centers and an AWS VPC. A solutions architect must choose a connectivity solution that is as resilient as possible. Which solution satisfies these criteria? Two AWS Direct Connect connections from the primary data center terminating at two Direct Connect locations on two separate devices A single AWS Direct Connect connection from each of the primary and secondary data centers terminating at one Direct Connect location on the same device Two AWS Direct Connect connections from each of the primary and secondary data centers terminating at two Direct Connect locations on two separate devices A single AWS Direct Connect connection from each of the primary and secondary data centers terminating at one Direct Connect location on two separate devices.
The dynamic website of a business is hosted on-premises in the United States. The firm is expanding throughout Europe and wants to reduce page load times for its new European users. The website's backend must remain in the United States. The product launches in a few days, and an immediate solution is required. What recommendations should the solutions architect make? Launch an Amazon EC2 instance in us-east-1 and migrate the site to it. Move the website to Amazon S3. Use cross-Region replication between Regions. Use Amazon CloudFront with a custom origin pointing to the on-premises servers. Use an Amazon Route 53 geo-proximity routing policy pointing to on-premises servers.
A corporation used an AWS Direct Connect connection to copy 1 PB of data from a colocation facility to an Amazon S3 bucket in the us-east-1 Region. The business now wishes to replicate the data in another S3 bucket located in the us-west-2 Region. AWS Snowball is not permitted at the colocation facility. What should a solutions architect suggest as a means of achieving this? Order a Snowball Edge device to copy the data from one Region to another Region. Transfer contents from the source S3 bucket to a target S3 bucket using the S3 console. Use the aws S3 sync command to copy data from the source bucket to the destination bucket. Add a cross-Region replication configuration to copy objects across S3 buckets in different Regions.
A solutions architect must verify that API requests to Amazon DynamoDB from Amazon EC2 instances inside a VPC are not routed across the internet. What should the solutions architect do to accomplish this? (Select two.) Create a route table entry for the endpoint. Create a gateway endpoint for DynamoDB. Create a new DynamoDB table that uses the endpoint. Create an ENI for the endpoint in each of the subnets of the VPC. Create a security group entry in the default security group to provide access.
The application of a business is hosted on Amazon EC2 instances that are part of an Auto Scaling group behind an Elastic Load Balancer. Each year, the firm predicts a rise in traffic over a holiday, based on the application's history. A solutions architect must develop a plan to guarantee that the Auto Scaling group raises capacity proactively in order to minimize any effect on application users' performance. Which solution will satisfy these criteria? Create an Amazon CloudWatch alarm to scale up the EC2 instances when CPU utilization exceeds 90%. Create a recurring scheduled action to scale up the Auto Scaling group before the expected period of peak demand. Increase the minimum and maximum number of EC2 instances in the Auto Scaling group during the peak demand period. Configure an Amazon Simple Notification Service (Amazon SNS) notification to send alerts when there are autoscaling EC2_INSTANCE_LAUNCH events.
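A recurring scheduled action can be declared in CloudFormation; as a sketch of the scheduled-scaling approach, the group name, capacity values, and cron expression below are placeholders:

```yaml
# Illustrative CloudFormation fragment (names, sizes, and schedule are
# placeholders): a recurring scheduled action raises Auto Scaling group
# capacity ahead of a known annual peak, before demand actually arrives.
Resources:
  HolidayScaleOut:
    Type: AWS::AutoScaling::ScheduledAction
    Properties:
      AutoScalingGroupName: web-asg        # placeholder group name
      MinSize: 6
      MaxSize: 12
      DesiredCapacity: 6
      Recurrence: "0 8 20 11 *"            # cron: 08:00 UTC on Nov 20, yearly
```

A matching scale-in action after the peak would return the group to its normal bounds.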
A business wishes to migrate live datasets online from an on-premises NFS server to an Amazon S3 bucket called DOC-EXAMPLE-BUCKET. Verification of data integrity is essential both during and after the transmission. Additionally, the data must be encrypted. A solutions architect is migrating the data using an AWS solution. Which solution satisfies these criteria? AWS Storage Gateway file gateway S3 Transfer Acceleration AWS DataSync AWS Snowball Edge Storage Optimized.
A business utilizes Application Load Balancers (ALBs) across many AWS Regions. The ALBs experience fluctuating traffic throughout the year. The company's networking personnel must enable connection by allowing the ALBs' IP addresses over the on-premises firewall. Which solution is the MOST scalable and requires the least amount of setup changes? Write an AWS Lambda script to get the IP addresses of the ALBs in different Regions. Update the on-premises firewall's rule to allow the IP addresses of the ALBs. Migrate all ALBs in different Regions to the Network Load Balancer (NLBs). Update the on-premises firewall's rule to allow the Elastic IP addresses of all the NLBs. Launch AWS Global Accelerator. Register the ALBs in different Regions to the accelerator. Update the on-premises firewall's rule to allow static IP addresses associated with the accelerator. Launch a Network Load Balancer (NLB) in one Region. Register the private IP addresses of the ALBs in different Regions with the NLB. Update the on-premises firewall's rule to allow the Elastic IP address attached to the NLB.
Each month, a business must create sales reports. On the first day of each month, the reporting procedure starts 20 Amazon EC2 instances. The procedure lasts seven days and cannot be paused. The corporation wishes to keep expenses low. Which pricing strategy should the business pursue? Reserved Instances Spot Block Instances On-Demand Instances Scheduled Reserved Instances.
A business offers an online service for uploading and transcoding video material for usage on any mobile device. The application design makes use of Amazon Elastic File System (Amazon EFS) Standard to gather and store the videos so that they may be processed by numerous Amazon EC2 Linux instances. As the service's popularity has increased, the storage charges have become prohibitively costly. Which storage option is the MOST cost-effective? Use AWS Storage Gateway for files to store and process the video content. Use AWS Storage Gateway for volumes to store and process the video content. Use Amazon Elastic File System (Amazon EFS) for storing the video content. Once processing is complete, transfer the files to Amazon Elastic Block Store (Amazon EBS). Use Amazon S3 for storing the video content. Move the files temporarily over to an Amazon Elastic Block Store (Amazon EBS) volume attached to the server for processing.
Each day, a corporation gets ten terabytes of instrumentation data from many machines situated in a single plant. The data is saved in JSON files on a storage area network (SAN) inside the factory's on-premises data center. The organization want to upload this data to Amazon S3 so that it may be accessible by a number of other systems that do crucial near-real-time analytics. Because the data is deemed sensitive, a secure transmission is critical. Which option provides the most secure method of data transfer? AWS DataSync over public internet AWS DataSync over AWS Direct Connect AWS Database Migration Service (AWS DMS) over public internet AWS Database Migration Service (AWS DMS) over AWS Direct Connect.
A data science team needs storage to analyze logs on a nightly basis. The size and number of the logs are unknown, and the logs will be retained for only 24 hours. Which approach is the MOST cost-effective? Amazon S3 Glacier Amazon S3 Standard Amazon S3 Intelligent-Tiering Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).
A development team is releasing a new product on AWS, and as part of the rollout, they are using AWS Lambda. For one of the Lambda functions, the team allocates 512 MB of RAM. With this memory allocation, the function completes in 2 minutes. Monthly, the function is executed millions of times, and the development team is worried about the cost. The team does experiments to determine the effect of various Lambda memory allocations on the function's cost. Which measures will result in a decrease in the product's Lambda costs? (Select two.) Increase the memory allocation for this Lambda function to 1,024 MB if this change causes the execution time of each function to be less than 1 minute. Increase the memory allocation for this Lambda function to 1,024 MB if this change causes the execution time of each function to be less than 90 seconds. Reduce the memory allocation for this Lambda function to 256 MB if this change causes the execution time of each function to be less than 4 minutes. Increase the memory allocation for this Lambda function to 2,048 MB if this change causes the execution time of each function to be less than 1 minute. Reduce the memory allocation for this Lambda function to 256 MB if this change causes the execution time of each function to be less than 5 minutes.
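Lambda compute charges scale with allocated memory multiplied by duration (GB-seconds), and the per-request fee is identical across the options, so the break-even arithmetic can be checked directly (the `gb_seconds` helper is an illustrative sketch, not the AWS pricing API):

```python
def gb_seconds(memory_mb: float, duration_s: float) -> float:
    """Billable compute for one invocation: allocated GB times duration in seconds."""
    return (memory_mb / 1024) * duration_s

baseline = gb_seconds(512, 120)            # current config: 60.0 GB-seconds
print(baseline)

# 1,024 MB finishing under 1 minute stays below the baseline...
print(gb_seconds(1024, 59) < baseline)     # True
# ...but at 90 seconds it costs more than the baseline.
print(gb_seconds(1024, 90) > baseline)     # True
# 256 MB finishing under 4 minutes also stays below the baseline,
print(gb_seconds(256, 239) < baseline)     # True
# while 5 minutes at 256 MB exceeds it.
print(gb_seconds(256, 300) > baseline)     # True
```

This is why only the "under 1 minute at 1,024 MB" and "under 4 minutes at 256 MB" options can reduce cost relative to 512 MB for 2 minutes.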
A business is using a centralized Amazon Web Services account to store log data in many Amazon S3 buckets. Prior to uploading data to S3 buckets, a solutions architect must guarantee that the data is encrypted at rest. Additionally, data must be encrypted during transit. Which solution satisfies these criteria? Use client-side encryption to encrypt the data that is being uploaded to the S3 buckets. Use server-side encryption to encrypt the data that is being uploaded to the S3 buckets. Create bucket policies that require the use of server-side encryption with S3 managed encryption keys (SSE-S3) for S3 uploads. Enable the security option to encrypt the S3 buckets through the use of a default AWS Key Management Service (AWS KMS) key.
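A bucket policy enforcing both requirements might look like the following sketch (the bucket name is a placeholder): the first statement rejects uploads that do not request SSE-S3 server-side encryption, and the second rejects any request made without TLS.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-log-bucket/*",
      "Condition": {
        "StringNotEquals": { "s3:x-amz-server-side-encryption": "AES256" }
      }
    },
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-log-bucket",
        "arn:aws:s3:::example-log-bucket/*"
      ],
      "Condition": { "Bool": { "aws:SecureTransport": "false" } }
    }
  ]
}
```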
A business requires the migration of a Microsoft Windows-based application to AWS. This application utilizes a shared Windows file system that is attached to multiple Amazon EC2 Windows instances. What actions should a solutions architect take to achieve this? Configure a volume using Amazon Elastic File System (Amazon EFS). Mount the EFS volume to each Windows instance. Configure AWS Storage Gateway in Volume Gateway mode. Mount the volume to each Windows instance. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx volume to each Windows instance. Configure an Amazon Elastic Block Store (Amazon EBS) volume with the required size. Attach each EC2 instance to the volume. Mount the file system within the volume to each Windows instance.
An IAM group is associated with the following IAM policy. This is the group's sole policy. { "Version": "2012-10-17", "Statement": [ { "Sid": "1", "Effect": "Allow", "Action": "ec2:*", "Resource": "*", "Condition": { "StringEquals": { "ec2:Region": "us-east-1" } } }, { "Sid": "2", "Effect": "Deny", "Action": [ "ec2:StopInstances", "ec2:TerminateInstances" ], "Resource": "*", "Condition": { "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" } } } ] } What are the policy's effective IAM permissions for group members? Group members are permitted any Amazon EC2 action within the us-east-1 Region. Statements after the Allow permission are not applied. Group members are denied any Amazon EC2 permissions in the us-east-1 Region unless they are logged in with multi-factor authentication (MFA). Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for all Regions when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action. Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for the us-east-1 Region only when logged in with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action within the us-east-1 Region.
A business is searching for a solution that would enable them to store video archives created from archived news footage on AWS. The business must keep expenses down and will seldom need to retrieve these files. When files are required, they must be provided within a five-minute window. Which approach is the MOST cost-effective? Store the video archives in Amazon S3 Glacier and use Expedited retrievals. Store the video archives in Amazon S3 Glacier and use Standard retrievals. Store the video archives in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Store the video archives in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).
A firm that now hosts a web application on-premises is ready to migrate to AWS and launch a newer version of the program. The organization must route requests depending on the URL query string to either the AWS- or on-premises-hosted application. The on-premises application is inaccessible over the internet, and a VPN connection between Amazon VPC and the company's data center is formed. The firm intends to deploy this application using an Application Load Balancer (ALB). Which solution satisfies these criteria? Use two ALBs: one for on-premises and one for the AWS resource. Add hosts to each target group of each ALB. Route with Amazon Route 53 based on the URL query string. Use two ALBs: one for on-premises and one for the AWS resource. Add hosts to the target group of each ALB. Create a software router on an EC2 instance based on the URL query string. Use one ALB with two target groups: one for the AWS resource and one for on premises. Add hosts to each target group of the ALB. Configure listener rules based on the URL query string. Use one ALB with two AWS Auto Scaling groups: one for the AWS resource and one for on premises. Add hosts to each Auto Scaling group. Route with Amazon Route 53 based on the URL query string.
Under its registered parent domain, a firm hosts many websites for various lines of business. According to the subdomain, anyone visiting these websites will be directed to the proper backend Amazon EC2 instance. Static webpages, pictures, and server-side programming such as PHP and JavaScript are all hosted on the websites. Certain websites see a spike in traffic during the first two hours of business, followed by consistent use throughout the remainder of the day. A solutions architect must build a system that adapts capacity automatically to certain traffic patterns while being cost effective. Which AWS service or feature combination will suit these requirements? (Select two.) AWS Batch Network Load Balancer Application Load Balancer Amazon EC2 Auto Scaling Amazon S3 website hosting.
As a web application, a corporation has built a new video game. The application is deployed in a three-tier design using Amazon RDS for MySQL in a VPC. Multiple players will compete simultaneously online through the database layer. The makers of the game want to show a top-10 scoreboard in near-real time and to enable players to pause and resume the game while retaining their existing scores. What actions should a solutions architect take to ensure that these criteria are met? Set up an Amazon ElastiCache for Memcached cluster to cache the scores for the web application to display. Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web application to display. Place an Amazon CloudFront distribution in front of the web application to cache the scoreboard in a section of the application. Create a read replica on Amazon RDS for MySQL to run queries to compute the scoreboard and serve the read traffic to the web application.
A business wishes to migrate its on-premises network attached storage (NAS) to Amazon Web Services (AWS). The corporation wishes to make the data accessible to any Linux instance inside its VPC and to guarantee that changes to the data store are immediately synced across all instances that use it. The bulk of data is viewed infrequently, whereas certain files are read concurrently by numerous people. Which option satisfies these criteria and is the most cost-effective? Create an Amazon Elastic Block Store (Amazon EBS) snapshot containing the data. Share it with users within the VPC. Create an Amazon S3 bucket that has a lifecycle policy set to transition the data to S3 Standard-Infrequent Access (S3 Standard-IA) after the appropriate number of days. Create an Amazon Elastic File System (Amazon EFS) file system within the VPC. Set the throughput mode to Provisioned and to the required amount of IOPS to support concurrent usage. Create an Amazon Elastic File System (Amazon EFS) file system within the VPC. Set the lifecycle policy to transition the data to EFS Infrequent Access (EFS IA) after the appropriate number of days.
For many years, a business has stored analytics data on an Amazon RDS instance. The firm hired a solutions architect to develop an API that would enable consumers to access this data. The program is expected to have periods of idleness but may get surges of traffic within seconds. Which option should the architect recommend? Set up an Amazon API Gateway and use Amazon ECS. Set up an Amazon API Gateway and use AWS Elastic Beanstalk. Set up an Amazon API Gateway and use AWS Lambda functions. Set up an Amazon API Gateway and use Amazon EC2 with Auto Scaling.
A mobile gaming startup uses Amazon EC2 instances to host application servers. Every 15 minutes, the servers receive updates from players. The mobile game generates a JSON object containing the game's progress since the last update and delivers it to an Application Load Balancer. As the mobile game is played, game updates are being lost. The business intends to create a durable way to capture updates from older devices. What should a solutions architect propose to decouple the system? Use Amazon Kinesis Data Streams to capture the data and store the JSON object in Amazon S3. Use Amazon Kinesis Data Firehose to capture the data and store the JSON object in Amazon S3. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues to capture the data and EC2 instances to process the messages in the queue. Use Amazon Simple Notification Service (Amazon SNS) to capture the data and EC2 instances to process the messages sent to the Application Load Balancer.
A recent review of a company's IT spending demonstrates the critical necessity of lowering backup costs. The chief information officer of the organization wants to simplify the on-premises backup architecture and cut expenses by phasing out physical backup tapes. The company's current investment in on-premises backup systems and procedures must be protected. What recommendations should a solutions architect make? Set up AWS Storage Gateway to connect with the backup applications using the NFS interface. Set up an Amazon Elastic File System (Amazon EFS) file system that connects with the backup applications using the NFS interface. Set up an Amazon Elastic File System (Amazon EFS) file system that connects with the backup applications using the iSCSI interface. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual tape library (VTL) interface.
A business maintains an internal web-based application. The application is deployed on Amazon EC2 instances that are routed via an Application Load Balancer. The instances are distributed across several Availability Zones through an Amazon EC2 Auto Scaling group. During business hours, the Auto Scaling group scales up to 20 instances, then scales down to two instances overnight. Staff report that the application is very sluggish at the start of the day but performs fine by mid-morning. How should the scaling be modified to address the employees' concerns while keeping costs low? Implement a scheduled action that sets the desired capacity to 20 shortly before the office opens. Implement a step scaling action triggered at a lower CPU threshold, and decrease the cooldown period. Implement a target tracking action triggered at a lower CPU threshold, and decrease the cooldown period. Implement a scheduled action that sets the minimum and maximum capacity to 20 shortly before the office opens.
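The scheduled-action approach can be sketched as the parameters such an action would carry; this is a minimal illustration, and the group name and schedule are assumptions rather than details from the question.

```python
# Sketch of a scheduled scaling action that raises capacity shortly before
# the office opens. The group name and cron schedule are hypothetical.
scheduled_action = {
    "AutoScalingGroupName": "web-asg",            # hypothetical group name
    "ScheduledActionName": "scale-out-before-office-hours",
    "Recurrence": "45 8 * * MON-FRI",             # cron (UTC): 08:45 on weekdays
    "DesiredCapacity": 20,                        # pre-warm to the daytime peak
    "MinSize": 2,                                 # overnight floor unchanged
    "MaxSize": 20,
}

# With boto3, a dict like this would be passed to
# autoscaling_client.put_scheduled_update_group_action(**scheduled_action);
# a second scheduled action would lower desired capacity in the evening.
print(scheduled_action["ScheduledActionName"])
```

Because capacity is raised before demand arrives rather than in reaction to it, the morning cold-start lag the staff complain about is avoided.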
A business must adhere to a regulatory obligation that all emails be saved and preserved outside for a period of seven years. An administrator has prepared compressed email files on-premises and wishes to have the data transferred to AWS storage through a managed service. Which managed service should be recommended by a solutions architect? Amazon Elastic File System (Amazon EFS) Amazon S3 Glacier AWS Backup AWS Storage Gateway.
A business hosts its website on Amazon EC2 instances that are distributed across several Availability Zones through an Elastic Load Balancer. The instances are managed as part of an EC2 Auto Scaling group. The website stores product manuals for download through Amazon Elastic Block Store (Amazon EBS) volumes. The organization often changes the product information, which means that new instances created by the Auto Scaling group frequently have out-of-date data. It may take up to 30 minutes for all changes to be reflected on new instances. Additionally, the changes involve resizing the EBS volumes during business hours. The corporation wants to guarantee that product manuals are always current and that the architecture adapts quickly to rising customer demand. A solutions architect must satisfy these objectives without requiring the business to change its application code or website. What actions should the solutions architect take to achieve this objective? Store the product manuals in an EBS volume. Mount that volume to the EC2 instances. Store the product manuals in an Amazon S3 bucket. Redirect the downloads to this bucket. Store the product manuals in an Amazon Elastic File System (Amazon EFS) volume. Mount that volume to the EC2 instances. Store the product manuals in an Amazon S3 Standard-Infrequent Access (S3 Standard-IA) bucket. Redirect the downloads to this bucket.
On Amazon EC2, a business hosts an ecommerce application. The application is composed of a stateless web layer that needs a minimum of 10 instances and a maximum of 250 instances to run. 80% of the time, the program needs 50 instances. Which solution should be adopted in order to keep expenses down? Purchase Reserved Instances to cover 250 instances. Purchase Reserved Instances to cover 80 instances. Use Spot Instances to cover the remaining instances. Purchase On-Demand Instances to cover 40 instances. Use Spot Instances to cover the remaining instances. Purchase Reserved Instances to cover 50 instances. Use On-Demand and Spot Instances to cover the remaining instances.
A business intends to install an Amazon RDS database instance powered by Amazon Aurora. The organization has a 90-day backup retention policy. Which solution, if any, should a solutions architect suggest? Set the backup retention period to 90 days when creating the RDS DB instance. Configure RDS to copy automated snapshots to a user-managed Amazon S3 bucket with a lifecycle policy set to delete after 90 days. Create an AWS Backup plan to perform a daily snapshot of the RDS database with the retention set to 90 days. Create an AWS Backup job to schedule the execution of the backup plan daily. Use a daily scheduled event with Amazon CloudWatch Events to execute a custom AWS Lambda function that makes a copy of the RDS automated snapshot. Purge snapshots older than 90 days.
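Since RDS automated backups cap retention at 35 days, a 90-day policy is typically met with an AWS Backup plan. A minimal sketch of such a plan's rule follows; the plan name, vault name, and schedule are assumptions, not values from the question.

```python
# Sketch of an AWS Backup plan rule giving 90-day retention for daily
# RDS snapshots. Plan and vault names are hypothetical.
backup_plan = {
    "BackupPlanName": "rds-90-day-plan",
    "Rules": [
        {
            "RuleName": "daily-rds-snapshot",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 ? * * *)",  # daily at 05:00 UTC
            "Lifecycle": {"DeleteAfterDays": 90},       # retain for 90 days
        }
    ],
}

# With boto3 this dict would be supplied to
# backup_client.create_backup_plan(BackupPlan=backup_plan), and a selection
# would then assign the RDS database to the plan.
print(backup_plan["Rules"][0]["Lifecycle"]["DeleteAfterDays"])
```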
A solutions architect is configuring a virtual private cloud (VPC) with public and private subnets. The VPC and subnets are configured using IPv4 CIDR blocks. Each of the three Availability Zones (AZs) has one public and one private subnet. An internet gateway is used to connect public subnets to the internet. Private subnets must have internet connectivity in order for Amazon EC2 instances to obtain software upgrades. What should the solutions architect do to allow private subnets to connect to the internet? Create three NAT gateways, one for each public subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT gateway in its AZ. Create three NAT instances, one for each private subnet in each AZ. Create a private route table for each AZ that forwards non-VPC traffic to the NAT instance in its AZ. Create a second internet gateway on one of the private subnets. Update the route table for the private subnets that forward non-VPC traffic to the private internet gateway. Create an egress-only internet gateway on one of the public subnets. Update the route table for the private subnets that forward non-VPC traffic to the egress- only internet gateway.
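The one-NAT-gateway-per-AZ pattern can be sketched as a route table per AZ whose default route points at that AZ's own NAT gateway, so an AZ failure does not take down internet egress elsewhere. The CIDR, AZ names, and gateway identifiers below are hypothetical.

```python
# Sketch: one private route table per AZ, each sending non-VPC traffic to
# the NAT gateway in the same AZ. All identifiers here are hypothetical.
vpc_cidr = "10.0.0.0/16"
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]

route_tables = {
    az: {
        "routes": [
            {"DestinationCidrBlock": vpc_cidr, "Target": "local"},
            # Default route stays within the AZ for fault isolation.
            {"DestinationCidrBlock": "0.0.0.0/0", "Target": f"nat-gateway-{az}"},
        ]
    }
    for az in azs
}

for az, table in route_tables.items():
    print(az, table["routes"][1]["Target"])
```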
Currently, a corporation has 250 TB of backup data saved in Amazon S3 using a vendor-specific format. The firm wishes to extract files from Amazon S3, convert them to an industry-standard format, and then re-upload them to Amazon S3. The firm want to reduce the costs connected with data transmission for this session. What actions should a solutions architect take to achieve this? Install the conversion software as an Amazon S3 batch operation so the data is transformed without leaving Amazon S3. Install the conversion software onto an on-premises virtual machine. Perform the transformation and re-upload the files to Amazon S3 from the virtual machine. Use AWS Snowball Edge devices to export the data and install the conversion software onto the devices. Perform the data transformation and re-upload the files to Amazon S3 from the Snowball Edge devices. Launch an Amazon EC2 instance in the same Region as Amazon S3 and install the conversion software onto the instance. Perform the transformation and re- upload the files to Amazon S3 from the EC2 instance.
A business's applications are hosted on on-premises servers. The corporation is rapidly depleting its storage capacity. The applications make use of both block and network file storage. The business needs a high-performance solution that enables local caching without requiring it to re-architect its existing applications. Which steps should a solutions architect perform in combination to satisfy these requirements? (Select two.) Mount Amazon S3 as a file system to the on-premises servers. Deploy an AWS Storage Gateway file gateway to replace NFS storage. Deploy AWS Snowball Edge to provision NFS mounts to on-premises servers. Deploy an AWS Storage Gateway volume gateway to replace the block storage. Deploy Amazon Elastic File System (Amazon EFS) volumes and mount them to on-premises servers.
A business hosts a web service on Amazon EC2 instances that are routed via an Application Load Balancer. The instances are distributed across two Availability Zones through an Amazon EC2 Auto Scaling group. At all times, the corporation requires a minimum of four instances to achieve the needed service level agreement (SLA) requirements while keeping expenses low. How can the organization maintain compliance with the SLA if an Availability Zone fails? Add a target tracking scaling policy with a short cooldown period. Change the Auto Scaling group launch configuration to use a larger instance type. Change the Auto Scaling group to use six servers across three Availability Zones. Change the Auto Scaling group to use eight servers across two Availability Zones.
A firm is using the AWS Cloud to run a three-tier ecommerce application. The firm hosts the website on Amazon S3 and combines it with a sales API. The API is hosted by the firm on three Amazon EC2 instances that are connected through an Application Load Balancer (ALB). The API is composed of static and dynamic front-end content, as well as back-end workers that asynchronously execute sales requests. The corporation anticipates a big and abrupt surge in sales requests during events celebrating the introduction of new items. What should a solutions architect prescribe to assure the effective processing of all requests? Add an Amazon CloudFront distribution for the dynamic content. Increase the number of EC2 instances to handle the increase in traffic. Add an Amazon CloudFront distribution for the static content. Place the EC2 instances in an Auto Scaling group to launch new instances based on network traffic. Add an Amazon CloudFront distribution for the dynamic content. Add an Amazon ElastiCache instance in front of the ALB to reduce traffic for the API to handle. Add an Amazon CloudFront distribution for the static content. Add an Amazon Simple Queue Service (Amazon SQS) queue to receive requests from the website for later processing by the EC2 instances. .
Amazon EC2 instances are used to execute an application. The application's sensitive data is housed in an Amazon S3 bucket. The bucket must be shielded from internet access while still allowing access to it for services inside the VPC. Which actions should a solutions architect take in order to do this? (Select two.) Create a VPC endpoint for Amazon S3. Enable server access logging on the bucket. Apply a bucket policy to restrict access to the S3 endpoint. Add an S3 ACL to the bucket that has sensitive information. Restrict users using the IAM policy to use the specific bucket.
A business's program creates a vast number of files, each around 5 MB in size. Amazon S3 is used to store the files. According to company policy, files must be retained for a period of four years before they may be erased. Immediate access is always essential due to the fact that the files contain vital business data that is difficult to replicate. The files are commonly viewed within the first 30 days after object creation, but are seldom accessed beyond that time period. Which storage option is the MOST cost-effective? Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Glacier 30 days from object creation. Delete the files 4 years after object creation. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days from object creation. Delete the files 4 years after object creation. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object creation. Delete the files 4 years after object creation. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object creation. Move the files to S3 Glacier 4 years after object creation.
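A lifecycle rule like the ones described can be sketched as an S3 lifecycle configuration: transition to Standard-IA after 30 days, then expire after roughly four years. The rule ID is hypothetical; the day counts come from the question.

```python
# Sketch of an S3 lifecycle configuration: Standard -> Standard-IA after
# 30 days, deletion ~4 years after creation. The rule ID is hypothetical.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "standard-ia-then-delete",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects in the bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"}
            ],
            "Expiration": {"Days": 4 * 365},  # ~4 years after object creation
        }
    ]
}

# With boto3 this would be applied via
# s3_client.put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle_configuration)
print(lifecycle_configuration["Rules"][0]["Expiration"]["Days"])
```

Standard-IA keeps objects immediately retrievable (unlike Glacier), which is what makes it the fit for data that must stay instantly accessible.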
A business has a bucket on Amazon S3 that includes mission-critical data. The firm wishes to safeguard this data against inadvertent deletion. The data should remain available, and the user should be able to erase it on purpose. Which actions should a solutions architect use in conjunction to achieve this? (Select two.) Enable versioning on the S3 bucket. Enable MFA Delete on the S3 bucket. Create a bucket policy on the S3 bucket. Enable default encryption on the S3 bucket. Create a lifecycle policy for the objects in the S3 bucket.
A business maintains an application that processes incoming communications. These messages are then consumed within seconds by dozens of other applications and microservices. The message volume fluctuates significantly and sometimes peaks above 100,000 per second. The firm wishes to decouple the solution from its underlying infrastructure and thereby boost its scalability. Which solution satisfies these criteria? Persist the messages to Amazon Kinesis Data Analytics. All the applications will read and process the messages. Deploy the application on Amazon EC2 instances in an Auto Scaling group, which scales the number of EC2 instances based on CPU metrics. Write the messages to Amazon Kinesis Data Streams with a single shard. All applications will read from the stream and process the messages. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with one or more Amazon Simple Queue Service (Amazon SQS) subscriptions. All applications then process the messages from the queues.
A business is transferring its infrastructure from on-premises to the AWS Cloud. One of the company's apps stores data on a Windows file server farm that utilizes Distributed File System Replication (DFSR) to maintain data consistency. The file server farm must be replaced by a solutions architect. Which solution architect service should be used? Amazon Elastic File System (Amazon EFS) Amazon FSx Amazon S3 AWS Storage Gateway.
On AWS, a business operates a high-performance computing (HPC) workload. The workload requires low network latency and high network throughput with tightly coupled node-to-node communication. Amazon EC2 instances are started with default configurations and are appropriately scaled for computation and storage capabilities. What should a solutions architect advise to optimize the workload's performance? Choose a cluster placement group while launching Amazon EC2 instances. Choose dedicated instance tenancy while launching Amazon EC2 instances. Choose an Elastic Inference accelerator while launching Amazon EC2 instances. Choose the required capacity reservation while launching Amazon EC2 instances.
A business has two applications: one that sends messages with payloads to be processed and another that receives messages with payloads. The organization wishes to create an Amazon Web Services (AWS) solution to manage communications between the two applications. The sender application is capable of sending around 1,000 messages every hour. Processing a message may take up to two days. If messages fail to process, they must be retained so they do not interfere with the processing of subsequent messages. Which solution satisfies these parameters and is the MOST operationally efficient? Set up an Amazon EC2 instance running a Redis database. Configure both applications to use the instance. Store, process, and delete the messages, respectively. Use an Amazon Kinesis data stream to receive the messages from the sender application. Integrate the processing application with the Kinesis Client Library (KCL). Integrate the sender and processor applications with an Amazon Simple Queue Service (Amazon SQS) queue. Configure a dead-letter queue to collect the messages that failed to process. Subscribe the processing application to an Amazon Simple Notification Service (Amazon SNS) topic to receive notifications to process. Integrate the sender application to write to the SNS topic.
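The SQS-with-dead-letter-queue pattern can be sketched as the queue attributes involved: a redrive policy that sidelines poison messages after a few failed receives, and a retention period long enough to cover the two-day processing window. The queue ARN and the specific numbers chosen here are assumptions.

```python
import json

# Sketch of SQS queue attributes implementing a dead-letter queue.
# The DLQ ARN, retention period, and receive count are hypothetical.
dead_letter_queue_arn = "arn:aws:sqs:us-east-1:123456789012:payloads-dlq"

queue_attributes = {
    # Keep messages 4 days, comfortably longer than the 2-day processing time.
    "MessageRetentionPeriod": str(4 * 24 * 3600),
    # After 5 failed receives, move the message aside so it no longer
    # blocks the processing of subsequent messages.
    "RedrivePolicy": json.dumps({
        "deadLetterTargetArn": dead_letter_queue_arn,
        "maxReceiveCount": "5",
    }),
}

# With boto3 this would be applied via
# sqs_client.set_queue_attributes(QueueUrl=..., Attributes=queue_attributes)
print(queue_attributes["MessageRetentionPeriod"])
```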
A business hosts its corporate content management platform on AWS in a single region but requires the platform to function across several regions. The organization operates its microservices on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The EKS cluster is responsible for storing and retrieving items from Amazon S3. Additionally, the EKS cluster utilizes Amazon DynamoDB to store and retrieve information. Which actions should a solutions architect do in combination to deploy the platform across several regions? (Select two.) Replicate the EKS cluster with cross-Region replication. Use Amazon API Gateway to create a global endpoint to the EKS cluster. Use AWS Global Accelerator endpoints to distribute the traffic to multiple Regions. Use Amazon S3 access points to give access to the objects across multiple Regions. Configure DynamoDB Accelerator (DAX). Connect DAX to the relevant tables. Deploy an EKS cluster and an S3 bucket in another Region. Configure cross-Region replication on both S3 buckets. Turn on global tables for DynamoDB.
A business use a VPC that is provisioned with a CIDR block of 10.10.1.0/24. Due to continuing expansion, this block's IP address space may soon be consumed. A solutions architect must expand the VPC's IP address capacity. Which method satisfies these criteria with the LEAST amount of operational overhead? Create a new VPC. Associate a larger CIDR block. Add a secondary CIDR block of 10.10.2.0/24 to the VPC. Resize the existing VPC CIDR block from 10.10.1.0/24 to 10.10.1.0/16. Establish VPC peering with a new VPC that has a CIDR block of 10.10.1.0/16.
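Adding a secondary CIDR block is a single association call on the existing VPC, which is why it carries the least operational overhead. A minimal sketch of the parameters follows; the VPC ID is hypothetical, while the CIDR values come from the question.

```python
# Sketch: parameters for associating a secondary CIDR block with the
# existing VPC (the VPC ID is hypothetical). With boto3 this maps to
# ec2_client.associate_vpc_cidr_block(**association_params).
association_params = {
    "VpcId": "vpc-0123456789abcdef0",
    "CidrBlock": "10.10.2.0/24",  # secondary block; 10.10.1.0/24 remains
}

# New subnets can then be created from 10.10.2.0/24 without touching any
# existing subnets, routes, or peering arrangements.
print(association_params["CidrBlock"])
```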
A business hosts its website on Amazon EC2 instances that are routed via an ELB Application Load Balancer. The DNS is handled via Amazon Route 53. The firm wants to establish a backup website with a message, phone number, and email address for users to contact in the event that the original website becomes unavailable. How should this solution be implemented? Use Amazon S3 website hosting for the backup website and Route 53 failover routing policy. Use Amazon S3 website hosting for the backup website and Route 53 latency routing policy. Deploy the application in another AWS Region and use ELB health checks for failover routing. Deploy the application in another AWS Region and use server-side redirection on the primary website.
A firm's on-premises business program creates hundreds of files daily. These files are kept on an SMB file share and need a connection to the application servers with a low latency. A new business policy requires that all files created by applications be moved to AWS. A VPN connection to AWS is already established. The application development team lacks the time required to modify the application's code in order to migrate it to AWS. Which service should a solutions architect propose to enable an application to transfer files to Amazon Web Services (AWS)? Amazon Elastic File System (Amazon EFS) Amazon FSx for Windows File Server AWS Snowball AWS Storage Gateway.
A business uses WebSockets to host a live chat application on its on-premises servers. The firm wants to transfer the application to Amazon Web Services (AWS). Traffic to the application is uneven, and the firm anticipates more traffic with sudden spikes in the future. The business needs a highly scalable solution that requires minimal server maintenance or sophisticated capacity planning. Which solution satisfies these criteria? Use Amazon API Gateway and AWS Lambda with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for provisioned capacity. Use Amazon API Gateway and AWS Lambda with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for on-demand capacity. Run Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for on-demand capacity. Run Amazon EC2 instances behind a Network Load Balancer in an Auto Scaling group with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for provisioned capacity.
A business intends to migrate many gigabytes of data to AWS. The data is collected offline from ships. Before transmitting the data, the organization wants to perform complex transformations. Which Amazon Web Services (AWS) service should a solutions architect suggest for this migration? AWS Snowball AWS Snowmobile AWS Snowball Edge Storage Optimized AWS Snowball Edge Compute Optimized.
Two video conversion programs are being used by a media organization on Amazon EC2 instances. One utility is Windows-based, while the other is Linux-based. Each video file is rather huge and both programs must process it. The organization requires a storage solution that enables the creation of a centralized file system that can be mounted on all of the EC2 instances utilized in this operation. Which solution satisfies these criteria? Use Amazon FSx for Windows File Server for the Windows instances. Use Amazon Elastic File System (Amazon EFS) with Max I/O performance mode for the Linux instances. Use Amazon FSx for Windows File Server for the Windows instances. Use Amazon FSx for Lustre for the Linux instances. Link both Amazon FSx file systems to the same Amazon S3 bucket. Use Amazon Elastic File System (Amazon EFS) with General Purpose performance mode for the Windows instances and the Linux instances Use Amazon FSx for Windows File Server for the Windows instances and the Linux instances.
A business uses Amazon RDS to power a web application. A fresh database administrator mistakenly deleted data from a database table. To aid in recovery from such an occurrence, the organization desires the capacity to restore the database to the condition it was in five minutes prior to any alteration during the past 30 days. Which capability should the solutions architect include into the design to satisfy this requirement? Read replicas Manual snapshots Automated backups Multi-AZ deployments.
A business is building a video converter application that will be hosted on AWS. The application will be offered in two flavors: a free version and a premium version. Users on the premium tier will get their videos converted first, followed by users on the free tier. Which option satisfies these criteria and is the most cost-effective? One FIFO queue for the paid tier and one standard queue for the free tier. A single FIFO Amazon Simple Queue Service (Amazon SQS) queue for all file types. A single standard Amazon Simple Queue Service (Amazon SQS) queue for all file types. Two standard Amazon Simple Queue Service (Amazon SQS) queues with one for the paid tier and one for the free tier.
A business has an application that sends messages to Amazon Simple Queue Service. Another program polls the queue and performs I/O-intensive operations on the messages. The organization has a service level agreement (SLA) that stipulates the maximum time allowed between message receipt and response to users. Due to the rise in message volume, the organization is having trouble fulfilling its SLA on a constant basis. What should a solutions architect do to assist in increasing the application's processing speed and ensuring that it can manage any level of load? Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with a larger size. Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with an Amazon EC2 Dedicated Instance. Create an Amazon Machine image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy to keep its aggregate CPU utilization below 70%. Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy based on the age of the oldest message in the SQS queue.
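Scaling on the age of the oldest queued message ties capacity directly to the SLA rather than to CPU. A sketch of such a target tracking policy follows; the queue name and target value are assumptions, and the metric shown is the standard SQS CloudWatch metric.

```python
# Sketch of a target tracking scaling policy driven by the age of the
# oldest SQS message. Queue name and target value are hypothetical.
scaling_policy = {
    "PolicyName": "keep-oldest-message-young",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "CustomizedMetricSpecification": {
            "Namespace": "AWS/SQS",
            "MetricName": "ApproximateAgeOfOldestMessage",
            "Dimensions": [{"Name": "QueueName", "Value": "work-queue"}],
            "Statistic": "Maximum",
        },
        # Add instances whenever the oldest message exceeds ~5 minutes,
        # which maps directly to the SLA on response time.
        "TargetValue": 300.0,
    },
}

# With boto3 a dict like this would be supplied to
# autoscaling_client.put_scaling_policy(...) for the processing group.
print(scaling_policy["TargetTrackingConfiguration"]["TargetValue"])
```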
In the AWS Cloud, a business is operating a multi-tier ecommerce web application. The application is hosted on Amazon EC2 instances that are connected to an Amazon RDS MySQL Multi-AZ database. Amazon RDS is set up with the latest generation instance and 2,000 GB of storage in a General Purpose SSD (gp2) volume from Amazon Elastic Block Store (Amazon EBS). During moments of heavy demand, the database performance affects the application. After studying the logs in Amazon CloudWatch Logs, a database administrator discovers that when the number of read and write IOPS exceeds 6,000, the application's performance consistently drops. What should a solutions architect do to optimize the performance of the application? Replace the volume with a Magnetic volume. Increase the number of IOPS on the gp2 volume. Replace the volume with a Provisioned IOPS (PIOPS) volume. Replace the 2,000 GB gp2 volume with two 1,000 GB gp2 volumes.
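The 6,000-IOPS ceiling in the logs is not a coincidence: gp2 volumes deliver a baseline of 3 IOPS per GiB (up to a 16,000 IOPS per-volume cap), so a 2,000 GB volume tops out at exactly 6,000. A quick check of the arithmetic:

```python
# gp2 baseline performance: 3 IOPS per GiB, capped at 16,000 IOPS per volume.
def gp2_baseline_iops(size_gib: int) -> int:
    return min(3 * size_gib, 16_000)

print(gp2_baseline_iops(2000))  # 6000 -> matches the observed ceiling
print(gp2_baseline_iops(1000))  # 3000 per volume if split in two
```

Splitting into two 1,000 GB gp2 volumes yields 3,000 IOPS each, the same 6,000 in aggregate, which is why moving to Provisioned IOPS is the option that actually raises the ceiling.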
For its ecommerce website, a business developed a multi-tier application. The website makes use of a public subnet-based Application Load Balancer, a public subnet-based web tier, and a private subnet-based MySQL cluster hosted on Amazon EC2 instances. The MySQL database must obtain product catalog and price information from a third-party provider's website. A solutions architect's objective is to develop a plan that optimizes security without raising operating costs. What actions should the solutions architect take to ensure that these criteria are met? Deploy a NAT instance in the VPC. Route all the internet-based traffic through the NAT instance. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all internet-bound traffic to the NAT gateway. Configure an internet gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the internet gateway. Configure a virtual private gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the virtual private gateway.
Recently, a business introduced a new form of internet-connected sensor. The business anticipates selling thousands of sensors that are intended to feed large amounts of data to a central location every second. A solutions architect must develop a system that ingests and stores data in near-real time with millisecond responsiveness for engineering teams to examine. Which solution should the architect of solutions recommend? Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon Redshift. Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon DynamoDB. Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon Redshift. Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon DynamoDB.
A business must share an Amazon S3 bucket with a third-party provider. All items must be accessible to the bucket owner. Which procedure should be followed in order to share the S3 bucket? Update the bucket to be a Requester Pays bucket. Update the bucket to enable cross-origin resource sharing (CORS). Create a bucket policy to require users to grant bucket-owner-full-control when uploading objects. Create an IAM policy to require users to grant bucket-owner-full-control when uploading objects.
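The bucket-policy approach can be sketched as a statement that denies `s3:PutObject` unless the uploader grants the `bucket-owner-full-control` canned ACL, guaranteeing the bucket owner can access every object the vendor uploads. The bucket name below is hypothetical.

```python
import json

# Sketch of a bucket policy requiring uploaders to grant
# bucket-owner-full-control. The bucket name is hypothetical.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireBucketOwnerFullControl",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::shared-vendor-bucket/*",
            "Condition": {
                # Reject any PutObject that does not grant the owner
                # full control of the new object.
                "StringNotEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            },
        }
    ],
}

policy_json = json.dumps(bucket_policy)
print(policy_json[:40])
```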
A corporation wants to move a 143 TB MySQL database to AWS. The objective is to continue using Amazon Aurora MySQL as the platform. The organization connects to Amazon VPC using a 100 Mbps AWS Direct Connect connection. Which option best satisfies the requirements of the business and requires the least amount of time? Use a gateway endpoint for Amazon S3. Migrate the data to Amazon S3. Import the data into Aurora. Upgrade the Direct Connect link to 500 Mbps. Copy the data to Amazon S3. Import the data into Aurora. Order an AWS Snowmobile and copy the database backup to it. Have AWS import the data into Amazon S3. Import the backup into Aurora. Order four 50-TB AWS Snowball devices and copy the database backup onto them. Have AWS import the data into Amazon S3. Import the data into Aurora.
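A quick back-of-the-envelope calculation shows why the 100 Mbps link is a non-starter for 143 TB (the estimate ignores protocol overhead, so real transfers would be even slower):

```python
# Rough transfer-time estimate for 143 TB over a 100 Mbps link.
database_bits = 143 * 10**12 * 8   # 143 TB expressed in bits
link_bps = 100 * 10**6             # 100 Mbps

seconds = database_bits / link_bps
days = seconds / 86_400
print(round(days))  # ~132 days of continuous, saturated transfer
```

At roughly 132 days over the wire (or still around 26 days after a 500 Mbps upgrade), an offline transfer device is the only option that meets the "least amount of time" requirement.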
A business stores static photos for its website in an Amazon S3 bucket. Permissions were specified to restrict access to Amazon S3 items to privileged users only. What steps should a solutions architect take to prevent data loss? (Select two.) Enable versioning on the S3 bucket. Enable access logging on the S3 bucket. Enable server-side encryption on the S3 bucket. Configure an S3 lifecycle rule to transition objects to Amazon S3 Glacier. Use MFA Delete to require multi-factor authentication to delete an object.
A solutions architect is developing a document review application that will be stored in an Amazon S3 bucket. The solution must prevent unintentional document deletion and guarantee that all document versions are accessible. The ability for users to download, change, and upload documents is required. Which measures should be conducted in combination to achieve these requirements? (Select two.) Enable a read-only bucket ACL. Enable versioning on the bucket. Attach an IAM policy to the bucket. Enable MFA Delete on the bucket. Encrypt the bucket using AWS KMS.
A corporation hosts more than 300 websites and apps on a worldwide scale. The organization wants a platform capable of analyzing more than 30 TB of clickstream data each day. How should a solutions architect transmit and process the clickstream data? Design an AWS Data Pipeline to archive the data to an Amazon S3 bucket and run an Amazon EMR cluster with the data to generate analytics. Create an Auto Scaling group of Amazon EC2 instances to process the data and send it to an Amazon S3 data lake for Amazon Redshift to use for analysis. Cache the data to Amazon CloudFront. Store the data in an Amazon S3 bucket. When an object is added to the S3 bucket, run an AWS Lambda function to process the data for analysis. Collect the data from Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake. Load the data in Amazon Redshift for analysis.
A business is developing a three-tier online application that will include a web server, an application server, and a database server. While packages are being delivered, the program will monitor their GPS locations. The database will be updated every 0-5 seconds by the program. Tracking information must be read as quickly as possible to allow users to verify the status of their deliveries. On certain days, just a few parcels may be monitored, while on others, millions of packages may be tracked. The tracking system must be searchable using the tracking ID, the customer ID, and the order ID. Orders older than one month will no longer be monitored. What should a solutions architect propose in order to do this with the lowest possible total cost of ownership? Use Amazon DynamoDB. Enable Auto Scaling on the DynamoDB table. Schedule an automatic deletion script for items older than 1 month. Use Amazon DynamoDB with global secondary indexes. Enable Auto Scaling on the DynamoDB table and the global secondary indexes. Enable TTL on the DynamoDB table. Use an Amazon RDS On-Demand instance with Provisioned IOPS (PIOPS). Enable Amazon CloudWatch alarms to send notifications when PIOPS are exceeded. Increase and decrease PIOPS as needed. Use an Amazon RDS Reserved Instance with Provisioned IOPS (PIOPS). Enable Amazon CloudWatch alarms to send notifications when PIOPS are exceeded. Increase and decrease PIOPS as needed.
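The TTL option works because DynamoDB automatically deletes items once an epoch-timestamp attribute passes. A minimal sketch of an item shaped for this question's access patterns; the key and attribute names are illustrative, not from the source:

```python
import time

SECONDS_IN_30_DAYS = 30 * 24 * 60 * 60

def tracking_item(tracking_id, customer_id, order_id, lat, lon, now=None):
    """Build a DynamoDB-style item with a TTL attribute set ~30 days out.

    Attribute names are hypothetical: tracking_id would be the partition
    key, with customer_id and order_id served by global secondary indexes.
    """
    now = time.time() if now is None else now
    return {
        "tracking_id": tracking_id,
        "customer_id": customer_id,
        "order_id": order_id,
        "location": {"lat": lat, "lon": lon},
        # DynamoDB TTL expects an epoch-seconds number; items whose
        # value is in the past are deleted at no write cost.
        "expires_at": int(now) + SECONDS_IN_30_DAYS,
    }

item = tracking_item("TRK-1", "CUST-9", "ORD-42", 47.6, -122.3)
```

Because TTL deletions are free, this avoids both the scheduled deletion script of the first option and any provisioned-IOPS tuning.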
A news organization with correspondents located around the globe uses AWS to host its broadcast system. The reporters provide the broadcast system with live feeds. The reporters transmit live broadcasts through the Real-Time Messaging Protocol (RTMP) using software installed on their phones. A solutions architect must provide a system that enables reporters to deliver the highest-quality streams possible. The solution must ensure that TCP connections to the broadcast system are expedited. What approach should the solutions architect use in order to satisfy these requirements? Amazon CloudFront AWS Global Accelerator AWS Client VPN Amazon EC2 instances and Elastic IP addresses.
A business is transferring a set of Linux-based web servers to AWS. For certain content, the web servers must access files stored in a shared file storage. To fulfill the migration deadline, only minor adjustments are necessary. What actions should a solutions architect take to ensure that these criteria are met? Create an Amazon S3 Standard bucket with access to the web server. Configure an Amazon CloudFront distribution with an Amazon S3 bucket as the origin. Create an Amazon Elastic File System (Amazon EFS) volume and mount it on all web servers. Configure Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1) volumes and mount them on all web servers.
Every day, a business gets structured and semi-structured data from a variety of sources. A solutions architect must create a solution that makes use of frameworks for big data processing. SQL queries and business intelligence tools should be able to access the data. What should the solutions architect advocate in order to provide the most performant solution possible? Use AWS Glue to process data and Amazon S3 to store data. Use Amazon EMR to process data and Amazon Redshift to store data. Use Amazon EC2 to process data and Amazon Elastic Block Store (Amazon EBS) to store data. Use Amazon Kinesis Data Analytics to process data and Amazon Elastic File System (Amazon EFS) to store data.
A MySQL database instance on Amazon RDS is used by an application. The RDS database is rapidly depleting its storage capacity. A solutions architect wants to expand disk capacity without causing downtime. Which method satisfies these criteria with the minimum amount of effort? Enable storage auto scaling in RDS. Increase the RDS database instance size. Change the RDS database instance storage type to Provisioned IOPS. Back up the RDS database, increase the storage capacity, restore the database and stop the previous instance.
A business runs an ecommerce application in a single VPC. A single web server and an Amazon RDS Multi-AZ database instance comprise the application stack. Twice a month, the firm introduces new items. This results in a 400% increase in website traffic for a minimum of 72 hours. Users' browsers encounter poor response times and numerous timeout issues during product launches. What should a solutions architect do to minimize response times and timeout failures while maintaining a minimal operational overhead? Increase the instance size of the web server. Add an Application Load Balancer and an additional web server. Add Amazon EC2 Auto Scaling and an Application Load Balancer. Deploy an Amazon ElastiCache cluster to store frequently accessed data.
A solutions architect is developing a two-step order process application. The first step is synchronous and must return with minimal delay to the user. Because the second stage is more time consuming, it will be done as a distinct component. Orders must be processed precisely once and in their original sequence of receipt. How should the solutions architect integrate these components? Use Amazon SQS FIFO queues. Use an AWS Lambda function along with Amazon SQS standard queues. Create an SNS topic and subscribe an Amazon SQS FIFO queue to that topic. Create an SNS topic and subscribe an Amazon SQS Standard queue to that topic.
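For the FIFO-queue answer, ordering and exactly-once processing come from two message parameters. A sketch of the `send_message` parameters a producer might build (the queue URL and group name are hypothetical):

```python
import hashlib
import json

def fifo_message(order):
    """Build parameters for an SQS FIFO send_message call (sketch).

    MessageGroupId keeps all orders in one group so they are delivered
    in the order received; MessageDeduplicationId gives exactly-once
    semantics within SQS's 5-minute deduplication window.
    """
    body = json.dumps(order, sort_keys=True)
    return {
        # Hypothetical queue URL; FIFO queue names must end in ".fifo".
        "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo",
        "MessageBody": body,
        "MessageGroupId": "orders",
        # Content-based hash: resending the same order deduplicates.
        "MessageDeduplicationId": hashlib.sha256(body.encode()).hexdigest(),
    }

params = fifo_message({"order_id": "ORD-1", "step": "submit"})
```

Standard queues offer neither guarantee (at-least-once delivery, best-effort ordering), which is why the FIFO option fits this question.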
A business is using AWS to operate an application that processes weather sensor data stored in an Amazon S3 bucket. Three batch tasks are scheduled to run hourly to process data in the S3 bucket for various reasons. The organization wishes to minimize total processing time by employing an event-based strategy to run the three programs in parallel. What actions should a solutions architect take to ensure that these criteria are met? Enable S3 Event Notifications for new objects to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Subscribe all applications to the queue for processing. Enable S3 Event Notifications for new objects to an Amazon Simple Queue Service (Amazon SQS) standard queue. Create an additional SQS queue for all applications, and subscribe all applications to the initial queue for processing. Enable S3 Event Notifications for new objects to separate Amazon Simple Queue Service (Amazon SQS) FIFO queues. Create an additional SQS queue for each application, and subscribe each queue to the initial topic for processing. Enable S3 Event Notifications for new objects to an Amazon Simple Notification Service (Amazon SNS) topic. Create an Amazon Simple Queue Service (Amazon SQS) queue for each application, and subscribe each queue to the topic for processing.
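The SNS fan-out answer additionally requires each subscribed queue to permit the topic to deliver to it. A sketch of the queue access policy involved (both ARNs are hypothetical):

```python
import json

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:new-objects"   # hypothetical
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:app-a-queue"   # hypothetical

# Queue access policy allowing the SNS topic to deliver messages;
# each of the three application queues would carry the same statement
# so that one S3 event notification fans out to all three in parallel.
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": QUEUE_ARN,
            "Condition": {"ArnEquals": {"aws:SourceArn": TOPIC_ARN}},
        }
    ],
}
print(json.dumps(queue_policy, indent=2))
```

The `aws:SourceArn` condition keeps the queue closed to every SNS topic except the one receiving the S3 event notifications.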
Multiple Amazon EC2 instances in a single Availability Zone are used by a gaming firm to host a multiplayer game that connects with players using Layer 4 communication. The chief technology officer (CTO) desires a highly accessible and cost-effective architecture. What actions should a solutions architect take to ensure that these criteria are met? (Select two.) Increase the number of EC2 instances. Decrease the number of EC2 instances. Configure a Network Load Balancer in front of the EC2 instances. Configure an Application Load Balancer in front of the EC2 instances. Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically.
A business runs a static website on Amazon S3. A solutions architect must guarantee that data is recoverable in the event of an accidentally deleted file. Which action is necessary to achieve this? Enable Amazon S3 versioning. Enable Amazon S3 Intelligent-Tiering. Enable an Amazon S3 lifecycle policy. Enable Amazon S3 cross-Region replication.
A solutions architect must develop a managed storage solution with high-performance machine learning capability for a company's application. This application is hosted on AWS Fargate, and the storage attached to it must support concurrent file access and deliver high performance. Which storage option should the solutions architect recommend? Create an Amazon S3 bucket for the application and establish an IAM role for Fargate to communicate with Amazon S3. Create an Amazon FSx for Lustre file share and establish an IAM role that allows Fargate to communicate with FSx for Lustre. Create an Amazon Elastic File System (Amazon EFS) file share and establish an IAM role that allows Fargate to communicate with Amazon Elastic File System (Amazon EFS). Create an Amazon Elastic Block Store (Amazon EBS) volume for the application and establish an IAM role that allows Fargate to communicate with Amazon Elastic Block Store (Amazon EBS).
A firm gathers data on temperature, humidity, and air pressure in cities across the world. Each day, an average of 500 GB of data is gathered at each station. Each location is equipped with a high-speed internet connection. The company's weather forecasting tools are regionally focused and do daily data analysis. What is the fastest method of collecting data from all of these worldwide sites? Enable Amazon S3 Transfer Acceleration on the destination bucket. Use multipart uploads to directly upload site data to the destination bucket. Upload site data to an Amazon S3 bucket in the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket. Schedule AWS Snowball jobs daily to transfer data to the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket. Upload the data to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Once a day take an EBS snapshot and copy it to the centralized Region. Restore the EBS volume in the centralized Region and run an analysis on the data daily.
A business uses an Amazon EC2 instance to host a web server on a public subnet with an Elastic IP address. The EC2 instance is assigned to the default security group. The default network access control list (ACL) has been updated to deny all traffic. A solutions architect must ensure that the web server is accessible from any location through port 443. Which sequence of procedures will achieve this objective? (Select two.) Create a security group with a rule to allow TCP port 443 from source 0.0.0.0/0. Create a security group with a rule to allow TCP port 443 to destination 0.0.0.0/0. Update the network ACL to allow TCP port 443 from source 0.0.0.0/0. Update the network ACL to allow inbound/outbound TCP port 443 from source 0.0.0.0/0 and to destination 0.0.0.0/0. Update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 and outbound TCP port 32768-65535 to destination 0.0.0.0/0.
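The reason the correct network ACL choice pairs inbound 443 with an outbound ephemeral-port range is that network ACLs are stateless, unlike security groups. Expressed as data (rule numbers are illustrative):

```python
# Network ACLs are stateless, so return traffic needs its own rule:
# inbound 443 for client requests, and outbound 32768-65535 (the
# ephemeral port range commonly cited for Linux clients) for responses.
nacl_rules = [
    {"rule": 100, "direction": "inbound", "protocol": "tcp",
     "port_range": (443, 443), "cidr": "0.0.0.0/0", "action": "allow"},
    {"rule": 100, "direction": "outbound", "protocol": "tcp",
     "port_range": (32768, 65535), "cidr": "0.0.0.0/0", "action": "allow"},
]

# Security groups are stateful: one inbound rule from 0.0.0.0/0 on
# port 443 is enough, because response traffic is allowed automatically.
sg_ingress = {"protocol": "tcp", "port": 443, "source": "0.0.0.0/0"}
```

An NACL rule allowing outbound 443 only would block the server's replies, which leave on the client's ephemeral port, not on 443.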
A business has developed a three-tiered picture sharing platform. It runs the front-end layer on one Amazon EC2 instance, the backend layer on another, and the MySQL database on a third. A solutions architect has been entrusted with the responsibility of developing a solution that is highly available and needs the fewest modifications to the application as possible. Which solution satisfies these criteria? Use Amazon S3 to host the front-end layer and AWS Lambda functions for the backend layer. Move the database to an Amazon DynamoDB table and use Amazon S3 to store and serve users' images. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend layers. Move the database to an Amazon RDS instance with multiple read replicas to store and serve users' images. Use Amazon S3 to host the front-end layer and a fleet of Amazon EC2 instances in an Auto Scaling group for the backend layer. Move the database to a memory optimized instance type to store and serve users' images. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend layers. Move the database to an Amazon RDS instance with a Multi-AZ deployment. Use Amazon S3 to store and serve users' images.
A business has developed a virtual private cloud (VPC) with various private subnets distributed across different Availability Zones (AZs) and one public subnet located in one of the AZs. A NAT gateway is launched on the public subnet. Instances in the private subnets use the NAT gateway to connect to the internet. The organization wants to ensure that not all instances lose internet connectivity in the event of an AZ failure and that a backup plan is in place. Which solution, according to a solutions architect, is the MOST highly available? Create a new public subnet with a NAT gateway in the same AZ. Distribute the traffic between the two NAT gateways. Create an Amazon EC2 NAT instance in a new public subnet. Distribute the traffic between the NAT gateway and the NAT instance. Create public subnets in each AZ and launch a NAT gateway in each subnet. Configure the traffic from the private subnets in each AZ to the respective NAT gateway. Create an Amazon EC2 NAT instance in the same public subnet. Replace the NAT gateway with the NAT instance and associate the instance with an Auto Scaling group with an appropriate scaling policy.
A business maintains numerous AWS accounts and deploys apps in the us-west-2 Region. Each account's application logs are kept in Amazon S3 buckets. The organization wishes to create a centralized log analysis system based on a single Amazon S3 bucket. Logs must not leave us-west-2, and the company wants to incur the lowest possible operating costs. Which option satisfies these criteria and is the MOST cost-effective? Create an S3 Lifecycle policy that copies the objects from one of the application S3 buckets to the centralized S3 bucket. Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis. Write a script that uses the PutObject API operation every day to copy the entire contents of the buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis. Write AWS Lambda functions in these accounts that are triggered every time logs are delivered to the S3 buckets (s3:ObjectCreated:* event). Copy the logs to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
A business is migrating its on-premises apps to Amazon Elastic Compute Cloud instances. However, due to variable compute needs, EC2 instances must always be available for usage between the hours of 8 a.m. and 5 p.m. in designated Availability Zones. Which Amazon Elastic Compute Cloud instances should the business use to execute the applications? Scheduled Reserved Instances On-Demand Instances Spot Instances as part of a Spot Fleet EC2 instances in an Auto Scaling group.
A solutions architect is developing a solution that entails coordinating a number of Amazon Elastic Container Service (Amazon ECS) task types that are operating on Amazon EC2 instances that are members of an ECS cluster. All tasks' output and status data must be saved. Each job outputs around 10 MB of data, and hundreds of tasks may be operating concurrently. The system should be tuned for high-speed reading and writing. Because old outputs are archived and deleted, the total storage space should not exceed 1 TB. Which storage option should be recommended by the solutions architect? An Amazon DynamoDB table accessible by all ECS cluster instances. An Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode. An Amazon Elastic File System (Amazon EFS) file system with Bursting Throughput mode. An Amazon Elastic Block Store (Amazon EBS) volume mounted to the ECS cluster instances.
A solutions architect must design a bastion host architecture that is highly available. The solution must be robust inside a single AWS Region and need little maintenance effort. What actions should the solutions architect take to ensure that these criteria are met? Create a Network Load Balancer backed by an Auto Scaling group with a UDP listener. Create a Network Load Balancer backed by a Spot Fleet with instances in a partition placement group. Create a Network Load Balancer backed by the existing servers in different Availability Zones as the target. Create a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones as the target.
A business's customer relationship management (CRM) application stores data on an Amazon RDS database instance running Microsoft SQL Server. The database is administered by the company's information technology personnel. The database includes confidential information. The organization wants to guarantee that data is inaccessible to IT professionals and is only seen by authorized people. What steps should a solutions architect take to safeguard data? Use client-side encryption with an Amazon RDS managed key. Use client-side encryption with an AWS Key Management Service (AWS KMS) customer managed key. Use Amazon RDS encryption with an AWS Key Management Service (AWS KMS) default encryption key. Use Amazon RDS encryption with an AWS Key Management Service (AWS KMS) customer managed key.
Amazon Route 53 latency-based routing is being used by a firm to route requests to their UDP-based application for customers worldwide. The program is hosted on redundant servers inside the company's own data centers in the United States, Asia, and Europe. The application must be hosted on-premises in accordance with the company's compliance standards. The organization wants to enhance the application's performance and availability. What actions should a solutions architect take to ensure that these criteria are met? Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the NLBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the ALBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three NLBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three ALBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS.
A business is developing a massively multiplayer online game. The game communicates through UDP, thus it is critical that the client and backend have a low latency. The backend is hosted on Amazon EC2 instances that may be scaled across various AWS Regions. The firm requires a high level of availability for the game in order for consumers worldwide to have access to it at all times. What actions should a solutions architect take to ensure that these criteria are met? Deploy Amazon CloudFront to support the global traffic. Configure CloudFront with an origin group to allow access to EC2 instances in multiple Regions. Deploy an Application Load Balancer in one Region to distribute traffic to EC2 instances in each Region that hosts the game's backend instances. Deploy Amazon CloudFront to support an origin access identity (OAI). Associate the OAI with EC2 instances in each Region to support global traffic. Deploy a Network Load Balancer in each Region to distribute the traffic. Use AWS Global Accelerator to route traffic to the correct Regional endpoint.
A business wishes to relocate its accounting system from an on-premises data center to an AWS Region. Priority one should be given to data security and an unalterable audit log. The organization must conduct compliance audits on all AWS operations. Although the organization has activated AWS CloudTrail, it wants to ensure that it complies with these criteria. Which safeguards and security measures should a solutions architect use to safeguard and secure CloudTrail? (Select two.) Enable CloudTrail log file validation. Install the CloudTrail Processing Library. Enable logging of Insights events in CloudTrail. Enable custom logging from the on-premises resources. Create an AWS Config rule to monitor whether CloudTrail is configured to use server-side encryption with AWS KMS managed encryption keys (SSE-KMS).
A solutions architect is developing a new service to be used in conjunction with Amazon API Gateway. The service's request patterns will be erratic, ranging from zero to over 500 per second. The entire quantity of data that must be persisted in a backend database is now less than 1 GB, with unpredictability about future expansion. Simple key-value queries may be used to query data. Which AWS service combination would best suit these requirements? (Select two.) AWS Fargate AWS Lambda Amazon DynamoDB Amazon EC2 Auto Scaling MySQL-compatible Amazon Aurora.
Management has chosen to allow IPv6 on all AWS VPCs. After a period of time, a solutions architect attempts to create a new instance and gets an error indicating that the subnet does not have enough accessible IP address space. What should the solutions architect do to resolve this? Check to make sure that only IPv6 was used during the VPC creation. Create a new IPv4 subnet with a larger range, and then launch the instance. Create a new IPv6-only subnet with a large range, and then launch the instance. Disable the IPv4 subnet and migrate all instances to IPv6 only. Once that is complete, launch the instance.
A business uses AWS Organizations in conjunction with two AWS accounts: Logistics and Sales. The Logistics account is responsible for the operation of an Amazon Redshift cluster. Amazon EC2 instances are included in the Sales account. The Sales account requires access to the Amazon Redshift cluster owned by the Logistics account. What should a solutions architect propose as the most cost-effective way to accomplish this requirement? Set up VPC sharing with the Logistics account as the owner and the Sales account as the participant to transfer the data. Create an AWS Lambda function in the Logistics account to transfer data to the Amazon EC2 instances in the Sales account. Create a snapshot of the Amazon Redshift cluster, and share the snapshot with the Sales account. In the Sales account, restore the cluster by using the snapshot ID that is shared by the Logistics account. Run COPY commands to load data from Amazon Redshift into Amazon S3 buckets in the Logistics account. Grant permissions to the Sales account to access the S3 buckets of the Logistics account.
A business is presenting their application to the internet using an Application Load Balancer (ALB). The organization identifies out-of-the-ordinary traffic access patterns across the application. A solutions architect must increase visibility into the infrastructure in order to assist the business in comprehending these anomalies. What is the most optimal option that satisfies these requirements? Create a table in Amazon Athena for AWS CloudTrail logs. Create a query for the relevant information. Enable ALB access logging to Amazon S3. Create a table in Amazon Athena, and query the logs. Enable ALB access logging to Amazon S3. Open each file in a text editor, and search each line for the relevant information. Use Amazon EMR on a dedicated Amazon EC2 instance to directly query the ALB to acquire traffic access log information.
The operations team of a business already has an Amazon S3 bucket set to send notifications to an Amazon SQS queue when new items are generated in the bucket. Additionally, the development team wants to receive notifications when new objects are generated. The present workflow of the operations team must be maintained. Which solution would meet these criteria? Create another SQS queue. Update the S3 events in the bucket to also update the new queue when a new object is created. Create a new SQS queue that only allows Amazon S3 to access the queue. Update Amazon S3 to update this queue when a new object is created. Create an Amazon SNS topic and SQS queue for the bucket updates. Update the bucket to send events to the new topic. Update both queues to poll Amazon SNS. Create an Amazon SNS topic and SQS queue for the bucket updates. Update the bucket to send events to the new topic. Add subscriptions for both queues in the topic.
A solutions architect is creating storage for an Amazon Linux-based high performance computing (HPC) environment. The workload saves and analyzes a huge number of engineering drawings, which necessitates the use of shared storage and high-performance computation. Which storage choice is the best? Amazon Elastic File System (Amazon EFS) Amazon FSx for Lustre Amazon EC2 instance store Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1).
A business is planning to build a public-facing web application on Amazon Web Services (AWS). The architecture consists of Amazon EC2 instances inside a Virtual Private Cloud (VPC) and protected by an Elastic Load Balancer (ELB). The DNS is managed by a third-party provider. The solutions architect of the business must offer a solution for detecting and defending against large-scale DDoS assaults. Which solution satisfies these criteria? Enable Amazon GuardDuty on the account. Enable Amazon Inspector on the EC2 instances. Enable AWS Shield and assign Amazon Route 53 to it. Enable AWS Shield Advanced and assign the ELB to it.
A business is shifting to the Amazon Web Services (AWS) Cloud. The initial workload to move is a file server. The file share must be accessible through the Server Message Block (SMB) protocol. Which AWS managed service satisfies these criteria? Amazon Elastic Block Store (Amazon EBS) Amazon EC2 Amazon FSx Amazon S3.
A business is deploying a new application on an Amazon Elastic Container Service (Amazon ECS) cluster, using the Fargate ECS task launch type. The firm is monitoring CPU and memory use in anticipation of the program receiving a significant volume of traffic upon launch. However, the corporation desires cost savings as usage declines. What recommendations should a solutions architect make? Use Amazon EC2 Auto Scaling to scale at certain periods based on previous traffic patterns. Use an AWS Lambda function to scale Amazon ECS based on metric breaches that trigger an Amazon CloudWatch alarm. Use Amazon EC2 Auto Scaling with simple scaling policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm. Use AWS Application Auto Scaling with target tracking policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm.
A business has an AWS-hosted website. The database backend is hosted on Amazon RDS for MySQL and consists of a main instance and five read replicas to accommodate scalability requirements. To provide a consistent user experience, read replicas should be no more than one second behind the original instance. As the website's traffic continues to grow, the copies lag farther behind at peak moments, resulting in user complaints when searches return inconsistent results. A solutions architect's goal should be to minimize replication latency with minimal modifications to the application's code or operational requirements. Which solution satisfies these criteria? Migrate the database to Amazon Aurora MySQL. Replace the MySQL read replicas with Aurora Replicas and enable Aurora Auto Scaling. Deploy an Amazon ElastiCache for Redis cluster in front of the database. Modify the website to check the cache before querying the database read endpoints. Migrate the database from Amazon RDS to MySQL running on Amazon EC2 compute instances. Choose very large compute optimized instances for all replica nodes. Migrate the database to Amazon DynamoDB. Initially provision a large number of read capacity units (RCUs) to support the required throughput with on-demand capacity scaling enabled.
A business has users from all over the world using an application that is installed in many AWS Regions, exposing public static IP addresses. When users use the program through the internet, they encounter performance issues. What should a solutions architect propose as a means of lowering internet latency? Set up AWS Global Accelerator and add endpoints. Set up AWS Direct Connect locations in multiple Regions. Set up an Amazon CloudFront distribution to access an application. Set up an Amazon Route 53 geoproximity routing policy to route traffic.
A business requires the development of a reporting solution on AWS. SQL queries must be supported by the solution for data analysts to execute on the data. Each day, the data analysts will run fewer than ten queries. Each day, the corporation adds 3 GB of fresh data to its on-premises relational database. This data must be sent to AWS in order for reporting tasks to be performed. What should a solutions architect propose as the cheapest way to achieve these requirements? Use AWS Database Migration Service (AWS DMS) to replicate the data from the on-premises database into Amazon S3. Use Amazon Athena to query the data. Use an Amazon Kinesis Data Firehose delivery stream to deliver the data into an Amazon Elasticsearch Service (Amazon ES) cluster. Run the queries in Amazon ES. Export a daily copy of the data from the on-premises database. Use an AWS Storage Gateway file gateway to store and copy the export into Amazon S3. Use an Amazon EMR cluster to query the data. Use AWS Database Migration Service (AWS DMS) to replicate the data from the on-premises database and load it into an Amazon Redshift cluster. Use the Amazon Redshift cluster to query the data.
A business is using an Amazon S3 bucket to store data that has been submitted by several departments from various locations. During an AWS Well-Architected review, the finance manager notices that 10 TB of S3 Standard storage data is billed each month. However, selecting all files and folders in the AWS Management Console for Amazon S3 shows a total size of 5 TB. What may be the potential reasons for this discrepancy? (Select two.) Some files are stored with deduplication. The S3 bucket has versioning enabled. There are incomplete S3 multipart uploads. The S3 bucket has AWS Key Management Service (AWS KMS) enabled. The S3 bucket has Intelligent-Tiering enabled.
On AWS, a business is creating an ecommerce website. This website is constructed on a three-tier design that contains a MySQL database in an Amazon Aurora MySQL Multi-AZ deployment. The web application must be highly available and will initially be deployed in an AWS Region with three Availability Zones. The application publishes a metric that indicates the amount of load it is experiencing. Which solution satisfies these criteria? Configure an Application Load Balancer (ALB) with Amazon EC2 Auto Scaling behind the ALB with scheduled scaling Configure an Application Load Balancer (ALB) and Amazon EC2 Auto Scaling behind the ALB with a simple scaling policy. Configure a Network Load Balancer (NLB) and launch a Spot Fleet with Amazon EC2 Auto Scaling behind the NLB Configure an Application Load Balancer (ALB) and Amazon EC2 Auto Scaling behind the ALB with a target tracking scaling policy.
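The target tracking option in the last choice can be illustrated as a configuration document. A minimal sketch in Python, assuming the load metric is ALB request count per target; the policy name, resource label, and target value are illustrative, not from the question:

```python
# Sketch of a target tracking scaling policy for an Auto Scaling group
# behind an ALB. EC2 Auto Scaling adds or removes instances to hold the
# chosen metric near TargetValue. All names and numbers are illustrative.
target_tracking_policy = {
    "PolicyName": "alb-request-count-tracking",   # hypothetical name
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            # app/<load-balancer>/<id>/targetgroup/<target-group>/<id>
            "ResourceLabel": "app/my-alb/123/targetgroup/my-tg/456",
        },
        "TargetValue": 1000.0,    # desired requests per target
        "DisableScaleIn": False,  # allow scale-in when load drops
    },
}
```

This is the shape accepted by the EC2 Auto Scaling `PutScalingPolicy` API; scheduled and simple scaling policies from the other options use different fields and react more coarsely to load.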
A gaming firm uses AWS to host a browser-based application. The application's users consume a high volume of videos and images stored on Amazon S3. This content is consistent across all users. The application has grown in popularity, with millions of users accessing these media files daily. The firm wants to deliver the files to users while minimizing load on the origin. Which option meets these criteria MOST cost-effectively? Deploy an AWS Global Accelerator accelerator in front of the web servers. Deploy an Amazon CloudFront web distribution in front of the S3 bucket. Deploy an Amazon ElastiCache for Redis instance in front of the web servers. Deploy an Amazon ElastiCache for Memcached instance in front of the web servers.
Every day, a business processes data. The processes' output is kept in an Amazon S3 bucket, examined daily for one week, and then must remain readily available for ad hoc examination. Which storage option is the MOST cost-effective alternative to the existing configuration? Configure a lifecycle policy to delete the objects after 30 days. Configure a lifecycle policy to transition the objects to Amazon S3 Glacier after 30 days. Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days. Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.
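The Standard-IA option can be sketched as the lifecycle configuration an S3 bucket would accept (this is the shape boto3's `put_bucket_lifecycle_configuration` takes); the rule ID and prefix are hypothetical:

```python
# Sketch of an S3 lifecycle rule matching the Standard-IA option:
# objects leave S3 Standard 30 days after creation but remain
# immediately retrievable for ad hoc analysis. ID/prefix illustrative.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "reports-to-standard-ia",       # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "daily-output/"},  # hypothetical prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"}
            ],
        }
    ]
}
```

Glacier would be cheaper per GB but objects would no longer be readily available, and One Zone-IA trades away the multi-AZ durability the other classes provide.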
A business is developing an application. The program receives data through Amazon API Gateway and stores it in an Amazon Aurora PostgreSQL database using an AWS Lambda function. During the proof-of-concept stage, the firm must drastically raise the Lambda quotas to manage the large amounts of data that must be loaded into the database. A solutions architect must provide a recommendation for a new design that maximizes scalability and reduces setup effort. Which solution will satisfy these criteria? Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect the database by using native Java Database Connectivity (JDBC) drivers. Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point the existing DynamoDB API calls at the DAX cluster. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS). Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.
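The SQS-based option decouples ingestion from loading. A rough sketch of the two Lambda handlers, with the queue URL, the SQS send call, and the database insert replaced by injected stand-ins so the flow can be shown without AWS access:

```python
import json

# Sketch of the two-function pattern from the last option: the first
# Lambda receives API Gateway input and enqueues it; the second Lambda
# is triggered by SQS and loads records into the database. The queue
# URL, send_message, and insert_row are hypothetical stand-ins.

def receiver_handler(event, _context=None, send_message=None):
    """First Lambda: validate the API Gateway payload and enqueue it."""
    body = json.loads(event["body"])
    message = json.dumps(body)
    if send_message:  # stand-in for sqs_client.send_message
        send_message(QueueUrl="https://sqs.example/queue", MessageBody=message)
    return {"statusCode": 202, "body": "queued"}

def loader_handler(event, _context=None, insert_row=None):
    """Second Lambda: drain the SQS batch into the database."""
    loaded = 0
    for record in event["Records"]:  # SQS event batch shape
        row = json.loads(record["body"])
        if insert_row:  # stand-in for the Aurora insert
            insert_row(row)
        loaded += 1
    return {"loaded": loaded}
```

The queue buffers bursts so the loader can drain at a rate the database tolerates; SNS in the neighboring option pushes without that buffering, so failed deliveries are harder to absorb.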
A business wishes to migrate its on-premises MySQL database to Amazon Web Services (AWS). Regular imports from a client-facing application result in a huge amount of write operations in the database. The organization is worried that the volume of traffic may be affecting the application's performance. How should a solutions architect approach the design of an AWS architecture? Provision an Amazon RDS for MySQL DB instance with Provisioned IOPS SSD storage. Monitor write operation metrics by using Amazon CloudWatch. Adjust the provisioned IOPS if necessary. Provision an Amazon RDS for MySQL DB instance with General Purpose SSD storage. Place an Amazon ElastiCache cluster in front of the DB instance. Configure the application to query ElastiCache instead. Provision an Amazon DocumentDB (with MongoDB compatibility) instance with a memory optimized instance type. Monitor Amazon CloudWatch for performance-related issues. Change the instance class if necessary. Provision an Amazon Elastic File System (Amazon EFS) file system in General Purpose performance mode. Monitor Amazon CloudWatch for IOPS bottlenecks. Change to Provisioned Throughput performance mode if necessary.
A firm is using AWS to create a multi-instance application that needs low latency between the instances. What recommendations should a solutions architect make? Use an Auto Scaling group with a cluster placement group. Use an Auto Scaling group with single Availability Zone in the same AWS Region. Use an Auto Scaling group with multiple Availability Zones in the same AWS Region. Use a Network Load Balancer with multiple Amazon EC2 Dedicated Hosts as the targets.
For the last 15 years, a corporation has been operating a web application using an Oracle relational database in an on-premises data center. The company's database must be migrated to AWS. The business wants to cut operating costs without modifying the application's code. Which solution satisfies these criteria? Use AWS Database Migration Service (AWS DMS) to migrate the database servers to Amazon RDS. Use Amazon EC2 instances to migrate and operate the database servers. Use AWS Database Migration Service (AWS DMS) to migrate the database servers to Amazon DynamoDB. Use an AWS Snowball Edge Storage Optimized device to migrate the data from Oracle to Amazon Aurora.
A development team keeps the user name and password for its Amazon RDS MySQL DB instance in a configuration file. The configuration file is saved in plaintext on the root device disk of the team's Amazon EC2 instance. When the team's application needs to connect to the database, the file is read and the credentials are loaded into the code. The team adjusted the configuration file's permissions so that only the application may access its contents. A solutions architect must design a more secure solution. What should the solutions architect do to satisfy this requirement? Store the configuration file in Amazon S3. Grant the application access to read the configuration file. Create an IAM role with permission to access the database. Attach this IAM role to the EC2 instance. Enable SSL connections on the database instance. Alter the database user to require SSL when logging in. Move the configuration file to an EC2 instance store, and create an Amazon Machine Image (AMI) of the instance. Launch new instances from this AMI.
A business's application is operating on Amazon EC2 instances contained inside a VPC. One of the apps must make a request to the Amazon S3 API in order to store and retrieve items. The company's security regulations prohibit programs from sending any internet-bound traffic. Which course of action will satisfy these needs while still maintaining security? Configure an S3 interface endpoint. Configure an S3 gateway endpoint. Create an S3 bucket in a private subnet. Create an S3 bucket in the same Region as the EC2 instance.
A MySQL database is used by a business's order fulfillment service. The database must be able to handle a high volume of concurrent requests and transactions. The database is patched and tuned by developers, which results in delays in the introduction of new product features. The organization wishes to use cloud-based services to help address this challenge. The solution must enable developers to migrate the database with little or no modification to the code and must maximize performance. Which AWS service should the solutions architect recommend to achieve these requirements? Amazon Aurora Amazon DynamoDB Amazon ElastiCache MySQL on Amazon EC2.
A corporation needs to create a relational database with a 1 second Recovery Point Objective (RPO) and a 1 minute Recovery Time Objective (RTO) for multi-region disaster recovery. Which AWS solution is capable of doing this? Amazon Aurora Global Database Amazon DynamoDB global tables Amazon RDS for MySQL with Multi-AZ enabled Amazon RDS for MySQL with a cross-Region snapshot copy.
A solutions architect is tasked with the responsibility of designing the implementation of a new static website. The solution must be cost effective and maintain a minimum of 99 percent availability. Which solution satisfies these criteria? Deploy the application to an Amazon S3 bucket in one AWS Region that has versioning disabled. Deploy the application to Amazon EC2 instances that run in two AWS Regions and two Availability Zones. Deploy the application to an Amazon S3 bucket that has versioning and cross-Region replication enabled. Deploy the application to an Amazon EC2 instance that runs in one AWS Region and one Availability Zone.
A business operates a three-tier web application for the purpose of processing credit card payments. The front-end user interface consists of static websites. The application layer may include long-running processes. MySQL is used in the database layer. Currently, the application is running on a single large general-purpose Amazon EC2 instance. A solutions architect must decouple the services in order to maximize the availability of the web application. Which of the following solutions would give the HIGHEST level of availability? Move static assets to Amazon CloudFront. Leave the application in EC2 in an Auto Scaling group. Move the database to Amazon RDS to deploy Multi-AZ. Move static assets and the application into a medium EC2 instance. Leave the database on the large instance. Place both instances in an Auto Scaling group. Move static assets to Amazon S3. Move the application to AWS Lambda with the concurrency limit set. Move the database to Amazon DynamoDB with on-demand enabled. Move static assets to Amazon S3. Move the application to Amazon Elastic Container Service (Amazon ECS) containers with Auto Scaling enabled. Move the database to Amazon RDS to deploy Multi-AZ.
The security team of a corporation requires that network traffic be captured in VPC Flow Logs. The logs will be viewed often for 90 days and then viewed only occasionally. What should a solutions architect do when configuring the logs to satisfy these requirements? Use Amazon CloudWatch as the target. Set the CloudWatch log group with an expiration of 90 days. Use Amazon Kinesis as the target. Configure the Kinesis stream to always retain the logs for 90 days. Use AWS CloudTrail as the target. Configure CloudTrail to save to an Amazon S3 bucket, and enable S3 Intelligent-Tiering. Use Amazon S3 as the target. Enable an S3 Lifecycle policy to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
Amazon S3 is being used by a solutions architect to develop the storage architecture for a new digital media application. The media files must be robust in the event of an Availability Zone failure. Certain files are routinely visited, while others are viewed infrequently and in an unexpected fashion. The architect of the solution must keep the expenses of storing and retrieving media files to a minimum. Which storage choice satisfies these criteria? S3 Standard S3 Intelligent-Tiering S3 Standard-Infrequent Access (S3 Standard-IA) S3 One Zone-Infrequent Access (S3 One Zone-IA).
Monthly reports are stored in an Amazon S3 bucket by a company's financial application. The vice president of finance has directed that all access to these reports be documented, as well as any adjustments to the log files. What activities can a solutions architect take to ensure compliance with these requirements? Use S3 server access logging on the bucket that houses the reports with the read and write data events and log file validation options enabled. Use S3 server access logging on the bucket that houses the reports with the read and write management events and log file validation options enabled. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write data events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write management events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation.
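The CloudTrail data-events option can be sketched as the event selector a trail would use (the shape accepted by the `PutEventSelectors` API); the bucket name is illustrative:

```python
# Sketch of a CloudTrail event selector for the S3 data-events option:
# the trail records object-level reads and writes on the reports bucket.
# Object-level access is a data event; bucket-level configuration calls
# are management events, which is why the data-event option is the fit.
event_selectors = [
    {
        "ReadWriteType": "All",            # both read and write data events
        "IncludeManagementEvents": False,  # this selector covers data events only
        "DataResources": [
            {
                "Type": "AWS::S3::Object",
                # trailing slash = every object in the bucket (name illustrative)
                "Values": ["arn:aws:s3:::finance-reports-bucket/"],
            }
        ],
    }
]
```

Log file validation, enabled on the trail itself, adds signed digest files so any adjustment to the delivered logs is detectable.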
A business that specializes in online gaming is developing a game that is predicted to be very popular around the globe. A solutions architect must create an AWS Cloud architecture capable of capturing and presenting near-real-time game data for each participant, as well as the names of the world's top 25 players at any one moment. Which AWS database solution and configuration should be used to satisfy these requirements? Use Amazon RDS for MySQL as the data store for player activity. Configure the RDS DB instance for Multi-AZ support. Use Amazon DynamoDB as the data store for player activity. Configure DynamoDB Accelerator (DAX) for the player data. Use Amazon DynamoDB as the data store for player activity. Configure global tables in each required AWS Region for the player data. Use Amazon RDS for MySQL as the data store for player activity. Configure cross-Region read replicas in each required AWS Region based on player proximity.
A business is building a serverless web application that will allow users to engage with real-time game stats. The data generated by the games must be transmitted live. The business need a robust, low-latency database solution for user data. The corporation is unsure about the application's anticipated user base. Any design considerations must ensure single-digit millisecond response rates as the application grows. Which AWS service combination will suit these requirements? (Select two.) Amazon CloudFront Amazon DynamoDB Amazon Kinesis Amazon RDS AWS Global Accelerator.
A solutions architect is developing a solution that will allow users to browse a collection of photos and make requests for customized images. Parameters for image customisation will be included in each request made to an AWS API Gateway API. The personalized picture will be created on demand, and consumers will get a link to see or download it. The solution must be very user-friendly in terms of viewing and modifying photos. Which approach is the MOST cost-effective in meeting these requirements? Use Amazon EC2 instances to manipulate the original image into the requested customizations. Store the original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2 instances. Use AWS Lambda to manipulate the original image to the requested customizations. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin. Use AWS Lambda to manipulate the original image to the requested customizations. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2 instances. Use Amazon EC2 instances to manipulate the original image into the requested customizations. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin.
A business runs an Amazon EC2 instance on a private subnet and requires access to a public website in order to get patches and upgrades. The organization does not want other websites to be able to see or start connections to the EC2 instance's IP address. How can a solutions architect accomplish this goal? Create a site-to-site VPN connection between the private subnet and the network in which the public site is deployed. Create a NAT gateway in a public subnet. Route outbound traffic from the private subnet through the NAT gateway. Create a network ACL for the private subnet where the EC2 instance deployed only allows access from the IP address range of the public website. Create a security group that only allows connections from the IP address range of the public website. Attach the security group to the EC2 instance.
A business uses AWS to host its website. The website is protected by an Application Load Balancer (ALB) configured to manage HTTP and HTTPS traffic independently. The firm wishes to route all queries to the website through HTTPS. What solution should a solutions architect implement to satisfy this criterion? Update the ALB's network ACL to accept only HTTPS traffic. Create a rule that replaces the HTTP in the URL with HTTPS. Create a listener rule on the ALB to redirect HTTP traffic to HTTPS. Replace the ALB with a Network Load Balancer configured to use Server Name Indication (SNI).
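The listener-rule option works by attaching a redirect action to the HTTP listener. A sketch of that action's configuration, following the shape the Elastic Load Balancing API documents for redirect actions (the `#{...}` placeholders preserve the original host, path, and query):

```python
# Sketch of the ALB listener action from the correct option: the HTTP:80
# listener's default action issues a permanent redirect to HTTPS:443.
http_to_https_redirect = {
    "Type": "redirect",
    "RedirectConfig": {
        "Protocol": "HTTPS",
        "Port": "443",
        "Host": "#{host}",        # keep the requested hostname
        "Path": "/#{path}",       # keep the requested path
        "Query": "#{query}",      # keep the query string
        "StatusCode": "HTTP_301", # permanent redirect
    },
}
```

A network ACL cannot rewrite requests, which is why the first option fails: blocking port 80 would simply drop HTTP users rather than send them to HTTPS.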
A business hosts a multilingual website using a fleet of Amazon EC2 instances protected by an Application Load Balancer (ALB). While this design is presently operational in the us-west-1 Region, it exhibits significant request delay for customers in other regions of the globe. The website must respond fast and effectively to user queries regardless of their location. The organization, however, does not want to duplicate the present infrastructure across numerous Regions. How is this to be accomplished by a solutions architect? Replace the existing architecture with a website served from an Amazon S3 bucket. Configure an Amazon CloudFront distribution with the S3 bucket as the origin. Configure an Amazon CloudFront distribution with the ALB as the origin. Set the cache behavior settings to only cache based on the Accept-Language request header. Set up Amazon API Gateway with the ALB as an integration. Configure API Gateway to use an HTTP integration type. Set up an API Gateway stage to enable the API cache. Launch an EC2 instance in each additional Region and configure NGINX to act as a cache server for that Region. Put all the instances plus the ALB behind an Amazon Route 53 record set with a geolocation routing policy.
On AWS, a business is developing a prototype of an ecommerce website. The website runs behind an Application Load Balancer, with an Auto Scaling group of Amazon EC2 instances for web servers and an Amazon RDS for MySQL DB instance configured in Single-AZ mode. The website is sluggish to respond during product catalog searches. The product catalog is a collection of tables in the MySQL database that the firm uses to store its products; the catalog is not regularly updated. A solutions architect has established that when product catalog searches occur, the CPU consumption on the database instance is significant. What should the solutions architect propose to optimize the website's performance during product catalog searches? Migrate the product catalog to an Amazon Redshift database. Use the COPY command to load the product catalog tables. Implement an Amazon ElastiCache for Redis cluster to cache the product catalog. Use lazy loading to populate the cache. Add an additional scaling policy to the Auto Scaling group to launch additional EC2 instances when database response is slow. Turn on the Multi-AZ configuration for the DB instance. Configure the EC2 instances to throttle the product catalog queries that are sent to the database.
A business has developed a bespoke application that utilizes embedded credentials to get data from an Amazon RDS MySQL DB instance. According to management, the application's security must be enhanced with the least amount of development work possible. What actions should a solutions architect take to ensure that these criteria are met? Use AWS Key Management Service (AWS KMS) customer master keys (CMKs) to create keys. Configure the application to load the database credentials from AWS KMS. Enable automatic key rotation. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the application to load the database credentials from Secrets Manager. Create an AWS Lambda function that rotates the credentials in Secrets Manager. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the application to load the database credentials from Secrets Manager. Set up a credentials rotation schedule for the application user in the RDS for MySQL database using Secrets Manager. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Systems Manager Parameter Store. Configure the application to load the database credentials from Parameter Store. Set up a credentials rotation schedule for the application user in the RDS for MySQL database using Parameter Store.
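On the application side of the Secrets Manager options, the code fetches and parses the secret at connect time instead of embedding credentials. A sketch assuming the JSON key layout Secrets Manager commonly uses for RDS secrets; the secret name is hypothetical and the client call is injected so the parsing can be shown standalone:

```python
import json

# Sketch of loading RDS credentials from a secret instead of a config
# file. get_secret_value stands in for the boto3 Secrets Manager client
# method of the same name; the secret ID is a made-up example.

def load_db_credentials(get_secret_value, secret_id="prod/app/mysql"):
    """Fetch and parse an RDS-style secret into connection parameters."""
    response = get_secret_value(SecretId=secret_id)
    secret = json.loads(response["SecretString"])
    return {
        "user": secret["username"],
        "password": secret["password"],
        "host": secret["host"],
        "port": int(secret.get("port", 3306)),
    }
```

Because the application re-reads the secret rather than a baked-in string, a managed rotation schedule can change the password without a code change, which is what keeps the development effort low.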
A solutions architect is developing a system for analyzing financial market performance while the markets are closed. Each night, the system will conduct a succession of compute-intensive operations for four hours. The time required to finish compute tasks is supposed to be constant, and once begun, jobs cannot be stopped. After completion, the system is scheduled to operate for at least one year. Which Amazon EC2 instance type should be utilized to lower the system's cost? Spot Instances On-Demand Instances Standard Reserved Instances Scheduled Reserved Instances.
A business's on-premises data center hosts its critical network services, such as directory services and DNS. AWS Direct Connect (DX) connects the data center to the AWS Cloud. Additional AWS accounts are anticipated, which will need continuous, rapid, and cost-effective access to these network services. What measures should a solutions architect take to ensure that these criteria are met with the least amount of operational overhead possible? Create a DX connection in each new account. Route the network traffic to the on-premises servers. Configure VPC endpoints in the DX VPC for all required services. Route the network traffic to the on-premises servers. Create a VPN connection between each new account and the DX VPC. Route the network traffic to the on-premises servers. Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers.
A business is developing a new application for storing a big volume of data. Hourly data analysis and modification will be performed by many Amazon EC2 Linux instances distributed across several Availability Zones. The application team anticipates that the required quantity of space will continue to expand over the following six months. Which course of action should a solutions architect pursue in order to meet these requirements? Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the application instances. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on the application instances. Store the data in Amazon S3 Glacier. Update the S3 Glacier vault policy to allow access to the application instances. Store the data in an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume shared between the application instances.
A solutions architect is tasked with designing the cloud architecture for a new application that will be hosted on AWS. The processing should be parallelized, with the number of jobs to be handled dictating the number of application nodes added and removed. The processor application does not maintain state. The solutions architect must guarantee that the application is loosely coupled and that the job items are stored durably. Which design should the solutions architect use? Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic.
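The queue-depth scaling in the third option usually tracks "backlog per instance" (queue length divided by in-service nodes), the pattern AWS documents for SQS-based scaling. A small sketch of that arithmetic; the numbers in the tests are illustrative:

```python
# Sketch of the SQS-based scaling signal: divide the visible backlog
# (ApproximateNumberOfMessagesVisible) by the number of in-service
# instances, then target-track on that custom metric.

def backlog_per_instance(visible_messages: int, in_service_instances: int) -> float:
    """Messages waiting per worker; the policy holds this near a target."""
    if in_service_instances == 0:
        return float(visible_messages)  # forces a scale-out from zero
    return visible_messages / in_service_instances

def desired_capacity(visible_messages: int, acceptable_backlog_per_instance: int) -> int:
    """Instances needed so each worker's share of the queue stays acceptable."""
    return -(-visible_messages // acceptable_backlog_per_instance)  # ceiling division
```

For example, desired_capacity(1000, 100) is 10 workers, and one extra message pushes it to 11, which is exactly the "add and remove nodes based on the number of items in the SQS queue" behavior the option describes.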
A business is constructing a file-sharing application that will store its files in an Amazon S3 bucket. The firm wants to distribute all files using Amazon CloudFront and does not want the files to be accessible directly via the S3 URL. What actions should a solutions architect take to ensure that these criteria are met? Write individual policies for each S3 bucket to grant read permission for only CloudFront access. Create an IAM user. Grant the user read permission to objects in the S3 bucket. Assign the user to CloudFront. Write an S3 bucket policy that assigns the CloudFront distribution ID as the Principal and assigns the target S3 bucket as the Amazon Resource Name (ARN). Create an origin access identity (OAI). Assign the OAI to the CloudFront distribution. Configure the S3 bucket permissions so that only the OAI has read permission.
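The OAI option in the last choice hinges on a bucket policy that names only the OAI as a readable principal. A sketch of such a policy; the OAI ID and bucket name are made up:

```python
# Sketch of the S3 bucket policy for the origin access identity (OAI)
# option: only the CloudFront OAI may read objects, so a direct S3 URL
# returns Access Denied while CloudFront can still fetch the content.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                # illustrative OAI ID; CloudFront issues the real one
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::file-share-bucket/*",
        }
    ],
}
```

With public access blocked and this as the only read grant, the files are reachable solely through the CloudFront distribution the OAI is attached to.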
Numerous business processes inside a corporation need access to data kept in a file share. The file share will be accessed by business systems using the Server Message Block (SMB) protocol. The file sharing solution should be available from both the on-premises and cloud environments of the business. Which services are required by the business? (Select two.) Amazon Elastic Block Store (Amazon EBS) Amazon Elastic File System (Amazon EFS) Amazon FSx for Windows Amazon S3 AWS Storage Gateway file gateway.
A business is ingesting data from on-premises data sources utilizing a fleet of Amazon EC2 instances. The data is in JSON format and may be ingested at a rate of up to 1 MB/s. When an EC2 instance is restarted, any data that was in transit is lost. The data science team at the organization wishes to query imported data in near-real time. Which method enables near-real-time data querying while being scalable and causing the least amount of data loss? Publish data to Amazon Kinesis Data Streams. Use Kinesis Data Analytics to query the data. Publish data to Amazon Kinesis Data Firehose with Amazon Redshift as the destination. Use Amazon Redshift to query the data. Store ingested data in an EC2 instance store. Publish data to Amazon Kinesis Data Firehose with Amazon S3 as the destination. Use Amazon Athena to query the data. Store ingested data in an Amazon Elastic Block Store (Amazon EBS) volume. Publish data to Amazon ElastiCache for Redis. Subscribe to the Redis channel to query the data.
A business relies on a traditional on-premises analytics application that runs on terabytes of .csv files containing months of data. The legacy application is unable to cope with the increasing size of the .csv files. New .csv files are uploaded daily to a shared on-premises storage location from numerous data sources. The organization wants to maintain support for the traditional application while users familiarize themselves with AWS analytics capabilities. To do this, the solutions architect wants to keep two synchronized copies of all .csv files, on-premises and in Amazon S3. Which solution should the solutions architect recommend? Deploy AWS DataSync on-premises. Configure DataSync to continuously replicate the .csv files between the company's on-premises storage and the company's S3 bucket. Deploy an on-premises file gateway. Configure data sources to write the .csv files to the file gateway. Point the legacy analytics application to the file gateway. The file gateway should replicate the .csv files to Amazon S3. Deploy an on-premises volume gateway. Configure data sources to write the .csv files to the volume gateway. Point the legacy analytics application to the volume gateway. The volume gateway should replicate data to Amazon S3. Deploy AWS DataSync on-premises. Configure DataSync to continuously replicate the .csv files between on-premises and Amazon Elastic File System (Amazon EFS). Enable replication from Amazon Elastic File System (Amazon EFS) to the company's S3 bucket.
A business is transferring a three-tier application to Amazon Web Services. A MySQL database is required for the program. Previously, application users complained about the program's slow performance while adding new entries. These performance difficulties occurred as a result of users creating various real-time reports from the program during business hours. Which solution will optimize the application's performance when it is migrated to AWS? Import the data into an Amazon DynamoDB table with provisioned capacity. Refactor the application to use DynamoDB for reports. Create the database on a compute optimized Amazon EC2 instance. Ensure compute resources exceed the on-premises database. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the application to use the reader endpoint for reports. Create an Amazon Aurora MySQL Multi-AZ DB cluster. Configure the application to use the backup instance of the cluster as an endpoint for the reports.
A corporation has an on-premises MySQL database that is used infrequently by the worldwide sales staff. The sales team requires minimal database downtime. A database administrator wishes to move this database to AWS without selecting an instance type, in anticipation of increased user traffic in the future. Which service should the solutions architect recommend? Amazon Aurora MySQL Amazon Aurora Serverless for MySQL Amazon Redshift Spectrum Amazon RDS for MySQL.
A corporation is developing an architecture for a mobile application that needs the least amount of delay possible for its consumers. The company's architecture consists of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The Amazon EC2 instances communicate with Amazon RDS. Beta testing of the application revealed slow data reads. However, the metrics show that the EC2 instances do not exceed any CPU utilization thresholds. How can this problem be resolved? Reduce the threshold for CPU utilization in the Auto Scaling group. Replace the Application Load Balancer with a Network Load Balancer. Add read replicas for the RDS instances and direct read traffic to the replica. Add Multi-AZ support to the RDS instances and direct read traffic to the new EC2 instance.
A business's application architecture is two-tiered and distributed over public and private subnets. The public subnet contains Amazon EC2 instances that run the web application, whereas the private subnet holds a database. The web application instances and the database are both contained inside a single Availability Zone (AZ). Which combination of measures should a solutions architect take to ensure this architecture's high availability? (Select two.)

A. Create new public and private subnets in the same AZ for high availability.
B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs.
C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer.
D. Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ.
E. Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment.
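The multi-AZ web tier in these options comes down to an Auto Scaling group whose subnets span at least two Availability Zones behind a load balancer. A sketch of the boto3-style parameter shape (subnet IDs and target group ARN are placeholder assumptions):

```python
# Sketch: parameters for autoscaling_client.create_auto_scaling_group(**asg_params).
# Subnet IDs and the target group ARN are hypothetical placeholders.
asg_params = {
    "AutoScalingGroupName": "web-asg",
    "MinSize": 2,
    "MaxSize": 6,
    # Two public subnets, each in a different AZ, so instances spread across AZs.
    "VPCZoneIdentifier": "subnet-aaa111,subnet-bbb222",
    "TargetGroupARNs": [
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"
    ],
}
```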
A business wants to utilize Amazon S3 as a supplementary storage location for its on-premises dataset. The business will seldom need to access this copy. The cost of the storage solution should be kept to a minimum. Which storage option satisfies these criteria?

A. S3 Standard
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access (S3 Standard-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
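A storage class can be chosen per object at upload time rather than per bucket. A minimal sketch of the `put_object` parameter shape for landing the secondary copy directly in One Zone-IA (bucket and key names are hypothetical):

```python
# Sketch: parameters for s3_client.put_object(**params). Bucket/key are examples.
def put_object_params(bucket: str, key: str, body: bytes) -> dict:
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        # One Zone-IA: lowest storage price of the four options; acceptable here
        # because this is a secondary copy of data that still exists on premises.
        "StorageClass": "ONEZONE_IA",
    }

params = put_object_params("backup-copies", "dataset/part-0001", b"...")
```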
A security team is responsible for restricting access to certain services or actions across all of the team's AWS accounts. All accounts belong to a large organization in AWS Organizations. The solution must be scalable, and permissions must be managed centrally. What should a solutions architect do to achieve this?

A. Create an ACL to provide access to the services or actions.
B. Create a security group to allow accounts and attach it to user groups.
C. Create cross-account roles in each account to deny access to the services or actions.
D. Create a service control policy in the root organizational unit to deny access to the services or actions.
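A service control policy is an IAM-style JSON document attached to the organization root or an OU. A sketch of a deny-based SCP; the specific denied actions here are made-up examples, not taken from the question:

```python
import json

# Sketch: an SCP that denies example actions for every account under the
# OU it is attached to. The Action list is purely illustrative.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyRestrictedServices",
            "Effect": "Deny",
            "Action": ["redshift:*", "dynamodb:DeleteTable"],
            "Resource": "*",
        }
    ],
}

# The JSON string is what would be passed as the policy content when
# creating the policy in AWS Organizations.
policy_document = json.dumps(scp)
```

Because the policy lives at the root OU, new accounts inherit the restriction automatically, which is what makes the approach scalable and centrally managed.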
A business has an application with a REST-based interface that enables near-real-time data retrieval from a third-party vendor. After receiving the data, the application analyzes and saves it for further analysis. Amazon EC2 instances are used to host the application. When delivering data to the application, the third-party vendor received numerous 503 Service Unavailable errors. When data volume increases, the compute capacity reaches its limit and the application becomes unable to process all requests. Which design should a solutions architect recommend in order to achieve greater scalability?

A. Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions.
B. Use Amazon API Gateway on top of the existing application. Create a usage plan with a quota limit for the third-party vendor.
C. Use Amazon Simple Notification Service (Amazon SNS) to ingest the data. Put the EC2 instances in an Auto Scaling group behind an Application Load Balancer.
D. Repackage the application as a container. Deploy the application using Amazon Elastic Container Service (Amazon ECS) using the EC2 launch type with an Auto Scaling group.
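In the Kinesis-plus-Lambda design, Lambda receives batches of stream records whose payloads arrive base64-encoded. A minimal handler sketch; the processing step is a stub standing in for the real analyze-and-store logic:

```python
import base64
import json

# Sketch: an AWS Lambda handler consuming an Amazon Kinesis Data Streams event.
# Record payloads are base64-encoded JSON; real code would analyze and persist them.
def handler(event, context):
    processed = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        processed.append(payload)  # stub: analysis/storage would happen here
    return {"batch_size": len(processed)}
```

Because Kinesis buffers the vendor's writes and Lambda scales its concurrency with the stream, bursts no longer translate into 503 errors at the compute layer.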
A business has developed an application that analyzes inventory data by using overnight digital photographs of items on shop shelves. The application is deployed on Amazon EC2 instances behind an Application Load Balancer (ALB) and retrieves photos from an Amazon S3 bucket for metadata processing by worker nodes. A solutions architect must guarantee that worker nodes process each picture. What should the solutions architect do to meet this requirement in the MOST cost-effective manner?

A. Send the image metadata from the application directly to a second ALB for the worker nodes that use an Auto Scaling group of EC2 Spot Instances as the target group.
B. Process the image metadata by sending it directly to EC2 Reserved Instances in an Auto Scaling group. With a dynamic scaling policy, use an Amazon CloudWatch metric for average CPU utilization of the Auto Scaling group as soon as the front-end application obtains the images.
C. Write messages to Amazon Simple Queue Service (Amazon SQS) when the front-end application obtains an image. Process the images with EC2 On-Demand instances in an Auto Scaling group with instance scale-in protection and a fixed number of instances with periodic health checks.
D. Write messages to Amazon Simple Queue Service (Amazon SQS) when the application obtains an image. Process the images with EC2 Spot Instances in an Auto Scaling group with instance scale-in protection and a dynamic scaling policy using a custom Amazon CloudWatch metric for the current number of messages in the queue.
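The custom CloudWatch metric in option D is typically "backlog per instance": visible SQS messages divided by running workers, with the desired capacity derived from a per-instance processing target. A sketch of that arithmetic (the per-instance target is an assumed tuning parameter):

```python
# Sketch: the scaling math behind a queue-depth-based custom metric.

def backlog_per_instance(visible_messages: int, running_instances: int) -> float:
    """The custom CloudWatch metric: queued messages per active worker."""
    if running_instances == 0:
        return float(visible_messages)
    return visible_messages / running_instances

def desired_capacity(visible_messages: int, msgs_per_instance: int) -> int:
    """Instances needed so each handles roughly msgs_per_instance queued images."""
    return max(1, -(-visible_messages // msgs_per_instance))  # ceiling division
```

Scale-in protection keeps an instance from being terminated mid-image, and SQS redelivers any message whose worker dies, which together guarantee every picture is processed.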
In the us-east-1 Region, a corporation has three VPCs designated Development, Testing, and Production. The three VPCs must be linked to an on-premises data center and are meant to be self-contained in order to ensure security and avoid resource sharing. A solutions architect must identify a solution that is both scalable and secure. What should the solutions architect recommend?

A. Create an AWS Direct Connect connection and a VPN connection for each VPC to connect back to the data center.
B. Create VPC peers from all the VPCs to the Production VPC. Use an AWS Direct Connect connection from the Production VPC back to the data center.
C. Connect VPN connections from all the VPCs to a VPN in the Production VPC. Use a VPN connection from the Production VPC back to the data center.
D. Create a new VPC called Network. Within the Network VPC, create an AWS Transit Gateway with an AWS Direct Connect connection back to the data center. Attach all the other VPCs to the Network VPC.
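In the transit gateway design, each VPC is joined to the gateway with its own attachment. A sketch of the attachment parameter shape (gateway, VPC, and subnet IDs are hypothetical):

```python
# Sketch: parameters for ec2_client.create_transit_gateway_vpc_attachment(**params),
# one attachment per VPC. All IDs below are made-up placeholders.
def attachment_params(tgw_id: str, vpc_id: str, subnet_ids: list) -> dict:
    return {
        "TransitGatewayId": tgw_id,
        "VpcId": vpc_id,
        "SubnetIds": subnet_ids,  # one subnet per AZ the attachment should cover
    }

attachments = [
    attachment_params("tgw-0abc1234", vpc, ["subnet-1111"])
    for vpc in ("vpc-dev", "vpc-test", "vpc-prod")
]
```

Route tables on the transit gateway can then allow each VPC to reach the data center while blocking VPC-to-VPC routes, preserving the isolation requirement.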
A business has retained the services of a solutions architect to develop a dependable architecture for its application. The application consists of a single Amazon RDS database instance and two manually deployed Amazon EC2 instances running web servers. All of the EC2 instances are in a single Availability Zone. An employee recently removed the database instance, resulting in the application being offline for 24 hours. The firm is concerned with the environment's general dependability. What should the solutions architect do to make the application's infrastructure as reliable as possible?

A. Delete one EC2 instance and enable termination protection on the other EC2 instance. Update the DB instance to be Multi-AZ, and enable deletion protection.
B. Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances behind an Application Load Balancer, and run them in an EC2 Auto Scaling group across multiple Availability Zones.
C. Create an additional DB instance along with an Amazon API Gateway and an AWS Lambda function. Configure the application to invoke the Lambda function through API Gateway. Have the Lambda function write the data to the two DB instances.
D. Place the EC2 instances in an EC2 Auto Scaling group that has multiple subnets located in multiple Availability Zones. Use Spot Instances instead of On-Demand Instances. Set up Amazon CloudWatch alarms to monitor the health of the instances. Update the DB instance to be Multi-AZ, and enable deletion protection.
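The database half of these options is a single modification call on the existing instance. A sketch of the boto3-style parameter shape for enabling Multi-AZ and deletion protection (the instance identifier is a hypothetical example):

```python
# Sketch: parameters for rds_client.modify_db_instance(**modify_params).
# The instance identifier is a made-up placeholder.
modify_params = {
    "DBInstanceIdentifier": "app-db",
    "MultiAZ": True,              # synchronous standby in a second AZ for failover
    "DeletionProtection": True,   # blocks the accidental-delete scenario above
    "ApplyImmediately": True,
}
```

With deletion protection enabled, a delete request fails until the flag is explicitly turned off, which directly addresses the 24-hour outage described in the question.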