Title of test: DIVA-201-300
Description: DIVA-201-300
Author: DIVA
Creation Date: 16/02/2022
Category: Logical
Number of questions: 100
Content:
A business maintains an application that processes incoming messages. These messages are then consumed within seconds by dozens of other applications and microservices. The number of messages fluctuates significantly and sometimes spikes above 100,000 per second. The company wants to decouple the solution from the underlying infrastructure and improve its scalability. Which solution meets these requirements?
A. Persist the messages to Amazon Kinesis Data Analytics. All the applications will read and process the messages.
B. Deploy the application on Amazon EC2 instances in an Auto Scaling group, which scales the number of EC2 instances based on CPU metrics.
C. Write the messages to Amazon Kinesis Data Streams with a single shard. All applications will read from the stream and process the messages.
D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with one or more Amazon Simple Queue Service (Amazon SQS) subscriptions. All applications then process the messages from the queues.

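If the SNS-to-SQS fan-out described in option D were chosen, a minimal boto3 sketch of the wiring might look like the following. The topic and queue names are hypothetical placeholders, not values from the question.

```python
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# Hypothetical names used only for illustration.
topic_arn = sns.create_topic(Name="incoming-messages")["TopicArn"]
queue_url = sqs.create_queue(QueueName="processor-a")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Allow the topic to deliver messages to the queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "sns.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": queue_arn,
        "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
    }],
}
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})

# Subscribe the queue; every published message is fanned out to each subscribed queue.
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
sns.publish(TopicArn=topic_arn, Message=json.dumps({"event": "example"}))
```

Each consuming application would repeat the queue creation and subscription steps with its own queue, so all of them receive a copy of every message independently.
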
A business is transferring its infrastructure from on premises to the AWS Cloud. One of the company's applications stores data on a Windows file server farm that uses Distributed File System Replication (DFSR) to keep data consistent. A solutions architect must replace the file server farm. Which service should the solutions architect use?
A. Amazon Elastic File System (Amazon EFS)
B. Amazon FSx
C. Amazon S3
D. AWS Storage Gateway

A business operates a high-performance computing (HPC) workload on AWS. The workload requires low network latency and high network throughput with tightly coupled node-to-node communication. Amazon EC2 instances are launched with default settings and are appropriately sized for compute and storage capacity. What should a solutions architect recommend to improve the workload's performance?
A. Choose a cluster placement group while launching Amazon EC2 instances.
B. Choose dedicated instance tenancy while launching Amazon EC2 instances.
C. Choose an Elastic Inference accelerator while launching Amazon EC2 instances.
D. Choose the required capacity reservation while launching Amazon EC2 instances.

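As background on option A, a cluster placement group packs instances close together inside one Availability Zone. A minimal boto3 sketch is shown below; the group name, AMI ID, and instance type are assumptions for illustration only.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical group name; "cluster" strategy packs instances for low latency.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="c5n.18xlarge",       # placeholder instance type
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
)
```
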
A business has two applications: one that sends messages with payloads to be processed and another that receives messages with payloads. The organization wants an AWS solution to handle communication between the two applications. The sender application sends around 1,000 messages each hour. Processing a message may take up to two days. If a message fails to process, it must be retained so that it does not block the processing of subsequent messages. Which solution meets these requirements and is MOST operationally efficient?
A. Set up an Amazon EC2 instance running a Redis database. Configure both applications to use the instance. Store, process, and delete the messages, respectively.
B. Use an Amazon Kinesis data stream to receive the messages from the sender application. Integrate the processing application with the Kinesis Client Library (KCL).
C. Integrate the sender and processor applications with an Amazon Simple Queue Service (Amazon SQS) queue. Configure a dead-letter queue to collect the messages that failed to process.
D. Subscribe the processing application to an Amazon Simple Notification Service (Amazon SNS) topic to receive notifications to process. Integrate the sender application to write to the SNS topic.

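For option C specifically, an SQS queue with a dead-letter queue can be declared in a few calls. The queue names, retention period, and receive count below are illustrative assumptions.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Placeholder queue names.
dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main queue: after 5 failed receives a message moves to the DLQ
# instead of blocking later messages.
sqs.create_queue(
    QueueName="orders",
    Attributes={
        "MessageRetentionPeriod": "345600",  # 4 days, covers 2-day processing
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        ),
    },
)
```
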
A business hosts its corporate content management platform on AWS in a single Region but requires the platform to operate across several Regions. The organization runs its microservices on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The EKS cluster stores and retrieves objects from Amazon S3. The EKS cluster also uses Amazon DynamoDB to store and retrieve information. Which combination of actions should a solutions architect take to deploy the platform across several Regions? (Select two.)
A. Replicate the EKS cluster with cross-Region replication.
B. Use Amazon API Gateway to create a global endpoint to the EKS cluster.
C. Use AWS Global Accelerator endpoints to distribute the traffic to multiple Regions.
D. Use Amazon S3 access points to give access to the objects across multiple Regions. Configure DynamoDB Accelerator (DAX). Connect DAX to the relevant tables.
E. Deploy an EKS cluster and an S3 bucket in another Region. Configure cross-Region replication on both S3 buckets. Turn on global tables for DynamoDB.

A business uses a VPC provisioned with a CIDR block of 10.10.1.0/24. Because of continuing growth, the block's IP address space may soon be exhausted. A solutions architect must expand the VPC's IP address capacity. Which approach meets these requirements with the LEAST operational overhead?
A. Create a new VPC. Associate a larger CIDR block.
B. Add a secondary CIDR block of 10.10.2.0/24 to the VPC.
C. Resize the existing VPC CIDR block from 10.10.1.0/24 to 10.10.1.0/16.
D. Establish VPC peering with a new VPC that has a CIDR block of 10.10.1.0/16.

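As an illustration of option B, associating a secondary CIDR block is a single API call, after which new subnets can be carved out of the added range. The VPC ID and Availability Zone below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# vpc-0123456789abcdef0 is a placeholder VPC ID.
ec2.associate_vpc_cidr_block(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.10.2.0/24",
)

# New subnets can then be created from the secondary block.
ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.10.2.0/25",
    AvailabilityZone="us-west-2a",
)
```
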
A business hosts its website on Amazon EC2 instances behind an ELB Application Load Balancer. DNS is handled by Amazon Route 53. The company wants to set up a backup website with a message, phone number, and email address for users to contact in the event that the primary website becomes unavailable. How should this solution be implemented?
A. Use Amazon S3 website hosting for the backup website and a Route 53 failover routing policy.
B. Use Amazon S3 website hosting for the backup website and a Route 53 latency routing policy.
C. Deploy the application in another AWS Region and use ELB health checks for failover routing.
D. Deploy the application in another AWS Region and use server-side redirection on the primary website.

A firm's on-premises business application creates hundreds of files daily. These files are kept on an SMB file share and require a low-latency connection to the application servers. A new company policy requires that all files created by applications be moved to AWS. A VPN connection to AWS already exists. The application development team does not have the time required to modify the application's code to migrate it to AWS. Which service should a solutions architect recommend so the application can move files to AWS?
A. Amazon Elastic File System (Amazon EFS)
B. Amazon FSx for Windows File Server
C. AWS Snowball
D. AWS Storage Gateway

A business uses WebSockets to host a live chat application on its on-premises servers. The company wants to migrate the application to AWS. Traffic to the application is uneven, and the company expects more traffic with sudden spikes in the future. The company needs a highly scalable solution that requires minimal server maintenance and no complex capacity planning. Which solution meets these requirements?
A. Use Amazon API Gateway and AWS Lambda with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for provisioned capacity.
B. Use Amazon API Gateway and AWS Lambda with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for on-demand capacity.
C. Run Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for on-demand capacity.
D. Run Amazon EC2 instances behind a Network Load Balancer in an Auto Scaling group with an Amazon DynamoDB table as the data store. Configure the DynamoDB table for provisioned capacity.

A business intends to migrate many gigabytes of data to AWS. Offline data is collected from ships. The organization wants to perform complex transformations before transferring the data. Which AWS service should a solutions architect recommend for this migration?
A. AWS Snowball
B. AWS Snowmobile
C. AWS Snowball Edge Storage Optimized
D. AWS Snowball Edge Compute Optimized

A media organization runs two video conversion tools on Amazon EC2 instances. One tool is Windows-based and the other is Linux-based. Each video file is large, and both tools must process every file. The organization needs a storage solution that provides a centralized file system that can be mounted on all of the EC2 instances used in this workflow. Which solution meets these requirements?
A. Use Amazon FSx for Windows File Server for the Windows instances. Use Amazon Elastic File System (Amazon EFS) with Max I/O performance mode for the Linux instances.
B. Use Amazon FSx for Windows File Server for the Windows instances. Use Amazon FSx for Lustre for the Linux instances. Link both Amazon FSx file systems to the same Amazon S3 bucket.
C. Use Amazon Elastic File System (Amazon EFS) with General Purpose performance mode for the Windows instances and the Linux instances.
D. Use Amazon FSx for Windows File Server for the Windows instances and the Linux instances.

A business uses Amazon RDS to power a web application. A new database administrator accidentally deleted data from a database table. To help recover from such an event, the organization wants the ability to restore the database to the state it was in five minutes before any change, at any point during the past 30 days. Which capability should the solutions architect include in the design to meet this requirement?
A. Read replicas
B. Manual snapshots
C. Automated backups
D. Multi-AZ deployments

A business is building a video converter application that will be hosted on AWS. The application will be offered in two tiers: a free tier and a paid tier. Users on the paid tier will have their videos converted first, followed by users on the free tier. Which solution meets these requirements and is the MOST cost-effective?
A. One FIFO queue for the paid tier and one standard queue for the free tier.
B. A single FIFO Amazon Simple Queue Service (Amazon SQS) queue for all file types.
C. A single standard Amazon Simple Queue Service (Amazon SQS) queue for all file types.
D. Two standard Amazon Simple Queue Service (Amazon SQS) queues with one for the paid tier and one for the free tier.

A business has an application that sends messages to an Amazon Simple Queue Service (Amazon SQS) queue. Another application polls the queue and performs I/O-intensive processing on the messages. The organization has a service level agreement (SLA) that specifies the maximum time allowed between message receipt and response to users. Because of an increase in message volume, the organization is having trouble meeting its SLA consistently. What should a solutions architect do to help increase the application's processing speed and ensure it can handle any level of load?
A. Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with a larger size.
B. Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with an Amazon EC2 Dedicated Instance.
C. Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy to keep its aggregate CPU utilization below 70%.
D. Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy based on the age of the oldest message in the SQS queue.

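To make option D concrete, a target tracking policy can track a CloudWatch metric such as the SQS queue's ApproximateAgeOfOldestMessage. The group name, queue name, and target value in this sketch are assumptions for illustration.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# "workers" and "orders" are placeholder names for the Auto Scaling group
# and the SQS queue.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="workers",
    PolicyName="scale-on-queue-age",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateAgeOfOldestMessage",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "orders"}],
            "Statistic": "Maximum",
        },
        # Add capacity whenever the oldest message is older than 5 minutes.
        "TargetValue": 300.0,
    },
)
```
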
A business operates a multi-tier ecommerce web application in the AWS Cloud. The application runs on Amazon EC2 instances with an Amazon RDS for MySQL Multi-AZ DB instance. Amazon RDS is configured with the latest-generation DB instance and 2,000 GB of storage on a General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volume. During periods of high demand, database performance affects the application. After analyzing the logs in Amazon CloudWatch Logs, a database administrator finds that application performance always degrades when the number of read and write IOPS exceeds 6,000. What should a solutions architect do to improve the application's performance?
A. Replace the volume with a Magnetic volume.
B. Increase the number of IOPS on the gp2 volume.
C. Replace the volume with a Provisioned IOPS (PIOPS) volume.
D. Replace the 2,000 GB gp2 volume with two 1,000 GB gp2 volumes.

A business developed a multi-tier application for its ecommerce website. The website uses an Application Load Balancer in a public subnet, a web tier in a public subnet, and a MySQL cluster hosted on Amazon EC2 instances in a private subnet. The MySQL database must retrieve product catalog and pricing information from a third-party provider's website. A solutions architect must design a strategy that maximizes security without increasing operational overhead. What should the solutions architect do to meet these requirements?
A. Deploy a NAT instance in the VPC. Route all internet-based traffic through the NAT instance.
B. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all internet-bound traffic to the NAT gateway.
C. Configure an internet gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the internet gateway.
D. Configure a virtual private gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the virtual private gateway.

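For reference on option B, a NAT gateway lives in a public subnet and the private subnet's route table points internet-bound traffic at it. The subnet and route table IDs below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# All IDs below are placeholders.
allocation_id = ec2.allocate_address(Domain="vpc")["AllocationId"]
nat_gateway_id = ec2.create_nat_gateway(
    SubnetId="subnet-public-0123",          # public subnet
    AllocationId=allocation_id,
)["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before routing through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gateway_id])

# Send internet-bound traffic from the private subnet through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-private-0123",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gateway_id,
)
```
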
A business recently introduced a new type of internet-connected sensor. The company expects to sell thousands of sensors, which are designed to stream large amounts of data to a central location every second. A solutions architect must design a solution that ingests and stores the data in near-real time with millisecond responsiveness so that engineering teams can analyze it. Which solution should the solutions architect recommend?
A. Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon Redshift.
B. Use an Amazon SQS queue to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon DynamoDB.
C. Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon Redshift.
D. Use Amazon Kinesis Data Streams to ingest the data. Consume the data with an AWS Lambda function, which then stores the data in Amazon DynamoDB.

A business must share an Amazon S3 bucket with a third-party vendor. The bucket owner must be able to access all objects. Which action should be taken to share the S3 bucket?
A. Update the bucket to be a Requester Pays bucket.
B. Update the bucket to enable cross-origin resource sharing (CORS).
C. Create a bucket policy to require users to grant bucket-owner-full-control when uploading objects.
D. Create an IAM policy to require users to grant bucket-owner-full-control when uploading objects.

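As an illustration of option C, the bucket policy can deny any PutObject request that does not set the bucket-owner-full-control ACL. The bucket name here is a placeholder.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "shared-uploads-example"   # placeholder bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireOwnerFullControlOnUpload",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        "Condition": {
            "StringNotEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
        },
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```
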
A corporation wants to migrate a 143 TB MySQL database to AWS. The objective is to continue using Amazon Aurora MySQL as the platform. The organization connects to its Amazon VPC over a 100 Mbps AWS Direct Connect connection. Which option best meets the company's needs and takes the LEAST amount of time?
A. Use a gateway endpoint for Amazon S3. Migrate the data to Amazon S3. Import the data into Aurora.
B. Upgrade the Direct Connect link to 500 Mbps. Copy the data to Amazon S3. Import the data into Aurora.
C. Order an AWS Snowmobile and copy the database backup to it. Have AWS import the data into Amazon S3. Import the backup into Aurora.
D. Order four 50-TB AWS Snowball devices and copy the database backup onto them. Have AWS import the data into Amazon S3. Import the data into Aurora.

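For context on the transfer-time constraint in this question, a rough back-of-the-envelope calculation follows, assuming the 100 Mbps link is fully utilized and ignoring protocol overhead.

```python
# Rough transfer-time estimate for 143 TB over a 100 Mbps link.
size_bits = 143 * 10**12 * 8          # 143 TB expressed in bits
link_bps = 100 * 10**6                # 100 Mbps

seconds = size_bits / link_bps
days = seconds / 86_400
print(f"{days:.0f} days")             # roughly 132 days at line rate
```
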
A business stores static photos for its website in an Amazon S3 bucket. Permissions were specified to restrict access to Amazon S3 objects to privileged users only. What steps should a solutions architect take to prevent data loss? (Select two.)
A. Enable versioning on the S3 bucket.
B. Enable access logging on the S3 bucket.
C. Enable server-side encryption on the S3 bucket.
D. Configure an S3 lifecycle rule to transition objects to Amazon S3 Glacier.
E. Use MFA Delete to require multi-factor authentication to delete an object.

A solutions architect is developing a document review application whose documents will be stored in an Amazon S3 bucket. The solution must prevent unintentional document deletion and guarantee that all document versions are accessible. Users must be able to download, change, and upload documents. Which combination of actions meets these requirements? (Select two.)
A. Enable a read-only bucket ACL.
B. Enable versioning on the bucket.
C. Attach an IAM policy to the bucket.
D. Enable MFA Delete on the bucket.
E. Encrypt the bucket using AWS KMS.

A corporation hosts more than 300 websites and applications worldwide. The organization needs a platform capable of analyzing more than 30 TB of clickstream data each day. What should a solutions architect do to transmit and process the clickstream data?
A. Design an AWS Data Pipeline to archive the data to an Amazon S3 bucket and run an Amazon EMR cluster with the data to generate analytics.
B. Create an Auto Scaling group of Amazon EC2 instances to process the data and send it to an Amazon S3 data lake for Amazon Redshift to use for analysis.
C. Cache the data to Amazon CloudFront. Store the data in an Amazon S3 bucket. When an object is added to the S3 bucket, run an AWS Lambda function to process the data for analysis.
D. Collect the data from Amazon Kinesis Data Streams. Use Amazon Kinesis Data Firehose to transmit the data to an Amazon S3 data lake. Load the data into Amazon Redshift for analysis.

A business is developing a three-tier web application that includes a web server, an application server, and a database server. The application will track the GPS coordinates of packages while they are being delivered. The application will update the database every 0-5 seconds. The tracking data must be read as quickly as possible so that users can check the status of their deliveries. Only a few packages might be tracked on some days, while millions of packages might be tracked on other days. The tracking data must be searchable by tracking ID, customer ID, and order ID. Orders older than one month no longer need to be tracked. What should a solutions architect recommend to accomplish this with the LOWEST total cost of ownership?
A. Use Amazon DynamoDB. Enable Auto Scaling on the DynamoDB table. Schedule an automatic deletion script for items older than 1 month.
B. Use Amazon DynamoDB with global secondary indexes. Enable Auto Scaling on the DynamoDB table and the global secondary indexes. Enable TTL on the DynamoDB table.
C. Use an Amazon RDS On-Demand instance with Provisioned IOPS (PIOPS). Enable Amazon CloudWatch alarms to send notifications when PIOPS are exceeded. Increase and decrease PIOPS as needed.
D. Use an Amazon RDS Reserved Instance with Provisioned IOPS (PIOPS). Enable Amazon CloudWatch alarms to send notifications when PIOPS are exceeded. Increase and decrease PIOPS as needed.

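To show how the TTL mechanism mentioned in option B works, the sketch below enables TTL on a table and writes an item carrying an expiry timestamp. The table and attribute names are hypothetical.

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# "package-tracking" and "expires_at" are placeholder names.
dynamodb.update_time_to_live(
    TableName="package-tracking",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Each item carries an epoch timestamp; DynamoDB removes it after that time.
dynamodb.put_item(
    TableName="package-tracking",
    Item={
        "tracking_id": {"S": "TRK-0001"},
        "customer_id": {"S": "CUST-42"},
        "expires_at": {"N": str(int(time.time()) + 30 * 24 * 3600)},  # ~1 month
    },
)
```
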
A news organization with correspondents around the globe uses AWS to host its broadcast system. The correspondents feed live streams into the broadcast system. They transmit live broadcasts over the Real-Time Messaging Protocol (RTMP) using software installed on their phones. A solutions architect must design a solution that lets correspondents deliver the highest-quality streams possible. The solution must accelerate the TCP connections to the broadcast system. What should the solutions architect recommend to meet these requirements?
A. Amazon CloudFront
B. AWS Global Accelerator
C. AWS Client VPN
D. Amazon EC2 instances and AWS Elastic IP addresses

A business is migrating a set of Linux-based web servers to AWS. For certain content, the web servers must access files stored in shared file storage. To meet the migration deadline, only minimal changes can be made. What should a solutions architect do to meet these requirements?
A. Create an Amazon S3 Standard bucket with access to the web servers.
B. Configure an Amazon CloudFront distribution with an Amazon S3 bucket as the origin.
C. Create an Amazon Elastic File System (Amazon EFS) volume and mount it on all web servers.
D. Configure Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1) volumes and mount them on all web servers.

Every day, a business receives structured and semi-structured data from a variety of sources. A solutions architect must design a solution that uses big data processing frameworks. The data must be accessible through SQL queries and business intelligence tools. What should the solutions architect recommend to build the MOST high-performing solution?
A. Use AWS Glue to process data and Amazon S3 to store data.
B. Use Amazon EMR to process data and Amazon Redshift to store data.
C. Use Amazon EC2 to process data and Amazon Elastic Block Store (Amazon EBS) to store data.
D. Use Amazon Kinesis Data Analytics to process data and Amazon Elastic File System (Amazon EFS) to store data.

An application uses a MySQL database instance on Amazon RDS. The RDS database is running out of storage space. A solutions architect wants to increase the disk capacity without downtime. Which solution meets these requirements with the LEAST amount of effort?
A. Enable storage auto scaling in RDS.
B. Increase the RDS database instance size.
C. Change the RDS database instance storage type to Provisioned IOPS.
D. Back up the RDS database, increase the storage capacity, restore the database, and stop the previous instance.

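For option A, RDS storage autoscaling is enabled by setting a maximum allocated storage ceiling on the DB instance. The identifier and ceiling below are illustrative assumptions.

```python
import boto3

rds = boto3.client("rds")

# "app-mysql" is a placeholder DB instance identifier.
# Setting MaxAllocatedStorage above the current allocation turns on
# storage autoscaling; the instance grows toward this ceiling as needed.
rds.modify_db_instance(
    DBInstanceIdentifier="app-mysql",
    MaxAllocatedStorage=1000,   # GiB ceiling for automatic growth
    ApplyImmediately=True,
)
```
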
A business runs an ecommerce application in a single VPC. The application stack consists of a single web server and an Amazon RDS Multi-AZ DB instance. The company launches new products twice a month. This results in a 400% increase in website traffic for a minimum of 72 hours. During product launches, users experience slow response times and frequent timeout errors. What should a solutions architect do to minimize response times and timeout errors while keeping operational overhead minimal?
A. Increase the instance size of the web server.
B. Add an Application Load Balancer and an additional web server.
C. Add Amazon EC2 Auto Scaling and an Application Load Balancer.
D. Deploy an Amazon ElastiCache cluster to store frequently accessed data.

A solutions architect is designing a two-step order process application. The first step is synchronous and must return to the user with minimal latency. The second step is more time consuming, so it will be implemented as a separate component. Orders must be processed exactly once and in the order in which they are received. How should the solutions architect integrate these components?
A. Use Amazon SQS FIFO queues.
B. Use an AWS Lambda function along with Amazon SQS standard queues.
C. Create an SNS topic and subscribe an Amazon SQS FIFO queue to that topic.
D. Create an SNS topic and subscribe an Amazon SQS Standard queue to that topic.

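As a reference point for option A, a FIFO queue preserves ordering within a message group and deduplicates messages. The queue name and message contents below are placeholders.

```python
import boto3

sqs = boto3.client("sqs")

# Placeholder queue name; FIFO queue names must end in ".fifo".
queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={
        "FifoQueue": "true",
        # Deduplicate on message body so each order is enqueued only once.
        "ContentBasedDeduplication": "true",
    },
)["QueueUrl"]

# Messages that share a MessageGroupId are delivered in strict order.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"order_id": "1001", "step": "fulfil"}',
    MessageGroupId="orders",
)
```
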
A business uses AWS to operate an application that processes weather sensor data stored in an Amazon S3 bucket. Three batch jobs run every hour to process the data in the S3 bucket for different purposes. The organization wants to reduce the overall processing time by using an event-driven approach to run the three jobs in parallel. What should a solutions architect do to meet these requirements?
A. Enable S3 Event Notifications for new objects to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Subscribe all applications to the queue for processing.
B. Enable S3 Event Notifications for new objects to an Amazon Simple Queue Service (Amazon SQS) standard queue. Create an additional SQS queue for all applications, and subscribe all applications to the initial queue for processing.
C. Enable S3 Event Notifications for new objects to separate Amazon Simple Queue Service (Amazon SQS) FIFO queues. Create an additional SQS queue for each application, and subscribe each queue to the initial topic for processing.
D. Enable S3 Event Notifications for new objects to an Amazon Simple Notification Service (Amazon SNS) topic. Create an Amazon Simple Queue Service (Amazon SQS) queue for each application, and subscribe each queue to the topic for processing.

A gaming firm uses multiple Amazon EC2 instances in a single Availability Zone to host a multiplayer game that communicates with players over Layer 4. The chief technology officer (CTO) wants a highly available and cost-effective architecture. What should a solutions architect do to meet these requirements? (Select two.)
A. Increase the number of EC2 instances.
B. Decrease the number of EC2 instances.
C. Configure a Network Load Balancer in front of the EC2 instances.
D. Configure an Application Load Balancer in front of the EC2 instances.
E. Configure an Auto Scaling group to add or remove instances in multiple Availability Zones automatically.

A business runs a static website on Amazon S3. A solutions architect must ensure that data can be recovered if a file is accidentally deleted. Which action is necessary to achieve this?
A. Enable Amazon S3 versioning.
B. Enable Amazon S3 Intelligent-Tiering.
C. Enable an Amazon S3 lifecycle policy.
D. Enable Amazon S3 cross-Region replication.

A solutions architect must design a managed storage solution with high-performance machine learning capability for a company's application. The application runs on AWS Fargate, and the attached storage must support concurrent file access and provide high performance. Which storage option should the solutions architect recommend?
A. Create an Amazon S3 bucket for the application and establish an IAM role for Fargate to communicate with Amazon S3.
B. Create an Amazon FSx for Lustre file share and establish an IAM role that allows Fargate to communicate with FSx for Lustre.
C. Create an Amazon Elastic File System (Amazon EFS) file share and establish an IAM role that allows Fargate to communicate with Amazon Elastic File System (Amazon EFS).
D. Create an Amazon Elastic Block Store (Amazon EBS) volume for the application and establish an IAM role that allows Fargate to communicate with Amazon Elastic Block Store (Amazon EBS).

A firm gathers temperature, humidity, and air pressure data from sites in cities around the world. An average of 500 GB of data is collected from each site every day. Each site has a high-speed internet connection. The company's weather forecasting tools are based in a single Region and analyze the data daily. What is the FASTEST way to aggregate data from all of these worldwide sites?
A. Enable Amazon S3 Transfer Acceleration on the destination bucket. Use multipart uploads to directly upload site data to the destination bucket.
B. Upload site data to an Amazon S3 bucket in the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket.
C. Schedule AWS Snowball jobs daily to transfer data to the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket.
D. Upload the data to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Once a day, take an EBS snapshot and copy it to the centralized Region. Restore the EBS volume in the centralized Region and run an analysis on the data daily.

A business uses an Amazon EC2 instance with an Elastic IP address in a public subnet to host a web server. The EC2 instance is assigned to the default security group. The default network access control list (ACL) has been modified to deny all traffic. A solutions architect must make the web server accessible from everywhere on port 443. Which combination of steps will accomplish this? (Select two.)
A. Create a security group with a rule to allow TCP port 443 from source 0.0.0.0/0.
B. Create a security group with a rule to allow TCP port 443 to destination 0.0.0.0/0.
C. Update the network ACL to allow TCP port 443 from source 0.0.0.0/0.
D. Update the network ACL to allow inbound/outbound TCP port 443 from source 0.0.0.0/0 and to destination 0.0.0.0/0.
E. Update the network ACL to allow inbound TCP port 443 from source 0.0.0.0/0 and outbound TCP port 32768-65535 to destination 0.0.0.0/0.

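To illustrate the difference between the stateful security group and the stateless network ACL that this question turns on, here is a hedged sketch; the security group and ACL IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Security groups are stateful: allowing inbound 443 is enough for replies.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Network ACLs are stateless: allow inbound 443 and outbound ephemeral ports.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=100, Protocol="6", RuleAction="allow", Egress=False,
    CidrBlock="0.0.0.0/0", PortRange={"From": 443, "To": 443},
)
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=100, Protocol="6", RuleAction="allow", Egress=True,
    CidrBlock="0.0.0.0/0", PortRange={"From": 32768, "To": 65535},
)
```
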
A business has developed a three-tier picture-sharing application. It runs the front-end layer on one Amazon EC2 instance, the backend layer on another, and the MySQL database on a third. A solutions architect has been asked to design a solution that is highly available and requires the fewest changes to the application. Which solution meets these requirements?
A. Use Amazon S3 to host the front-end layer and AWS Lambda functions for the backend layer. Move the database to an Amazon DynamoDB table and use Amazon S3 to store and serve users' images.
B. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend layers. Move the database to an Amazon RDS instance with multiple read replicas to store and serve users' images.
C. Use Amazon S3 to host the front-end layer and a fleet of Amazon EC2 instances in an Auto Scaling group for the backend layer. Move the database to a memory optimized instance type to store and serve users' images.
D. Use load-balanced Multi-AZ AWS Elastic Beanstalk environments for the front-end and backend layers. Move the database to an Amazon RDS instance with a Multi-AZ deployment. Use Amazon S3 to store and serve users' images.

A business has created a VPC with several private subnets distributed across different Availability Zones (AZs) and one public subnet in one of the AZs. A NAT gateway is launched in the public subnet. Instances in the private subnets use the NAT gateway to connect to the internet. In the event of an AZ failure, the organization wants to ensure that not all instances lose internet connectivity and that a backup plan is ready. Which solution should a solutions architect recommend as the MOST highly available?
A. Create a new public subnet with a NAT gateway in the same AZ. Distribute the traffic between the two NAT gateways.
B. Create an Amazon EC2 NAT instance in a new public subnet. Distribute the traffic between the NAT gateway and the NAT instance.
C. Create public subnets in each AZ and launch a NAT gateway in each subnet. Configure the traffic from the private subnets in each AZ to the respective NAT gateway.
D. Create an Amazon EC2 NAT instance in the same public subnet. Replace the NAT gateway with the NAT instance and associate the instance with an Auto Scaling group with an appropriate scaling policy.

A business maintains numerous AWS accounts and deploys applications in the us-west-2 Region. Application logs are kept in Amazon S3 buckets in each account. The organization wants to build a centralized log analysis solution that uses a single S3 bucket. Logs must not leave us-west-2, and the company wants to incur minimal operational overhead. Which option meets these requirements and is the MOST cost-effective?
A. Create an S3 Lifecycle policy that copies the objects from one of the application S3 buckets to the centralized S3 bucket.
B. Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
C. Write a script that uses the PutObject API operation every day to copy the entire contents of the buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
D. Write AWS Lambda functions in these accounts that are triggered every time logs are delivered to the S3 buckets (s3:ObjectCreated:* event). Copy the logs to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.

A business is migrating its on-premises applications to Amazon EC2 instances. Because of variable compute needs, the EC2 instances must always be available for use between 8 AM and 5 PM in specific Availability Zones. Which EC2 instances should the business use to run the applications?
A. Scheduled Reserved Instances
B. On-Demand Instances
C. Spot Instances as part of a Spot Fleet
D. EC2 instances in an Auto Scaling group

A solutions architect is designing a solution that involves orchestrating a series of Amazon Elastic Container Service (Amazon ECS) task types running on Amazon EC2 instances that are part of an ECS cluster. The output and state data for all tasks must be stored. Each task outputs about 10 MB of data, and hundreds of tasks may run concurrently. The storage must be optimized for frequent reads and writes. Because old outputs are archived and deleted, the total storage space will not exceed 1 TB. Which storage solution should the solutions architect recommend?
A. An Amazon DynamoDB table accessible by all ECS cluster instances.
B. An Amazon Elastic File System (Amazon EFS) file system with Provisioned Throughput mode.
C. An Amazon Elastic File System (Amazon EFS) file system with Bursting Throughput mode.
D. An Amazon Elastic Block Store (Amazon EBS) volume mounted to the ECS cluster instances.

A solutions architect must design a highly available bastion host architecture. The solution must be resilient within a single AWS Region and require minimal maintenance effort. What should the solutions architect do to meet these requirements?
A. Create a Network Load Balancer backed by an Auto Scaling group with a UDP listener.
B. Create a Network Load Balancer backed by a Spot Fleet with instances in a partition placement group.
C. Create a Network Load Balancer backed by the existing servers in different Availability Zones as the target.
D. Create a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones as the target.

A business's customer relationship management (CRM) application stores data in an Amazon RDS DB instance running Microsoft SQL Server. The database is administered by the company's IT staff. The database contains sensitive data. The organization wants to guarantee that the data is inaccessible to the IT staff and is only visible to authorized personnel. What steps should a solutions architect take to secure the data?
A. Use client-side encryption with an Amazon RDS managed key.
B. Use client-side encryption with an AWS Key Management Service (AWS KMS) customer managed key.
C. Use Amazon RDS encryption with an AWS Key Management Service (AWS KMS) default encryption key.
D. Use Amazon RDS encryption with an AWS Key Management Service (AWS KMS) customer managed key.

A firm uses Amazon Route 53 latency-based routing to route requests to its UDP-based application for customers worldwide. The application is hosted on redundant servers in the company's own data centers in the United States, Asia, and Europe. The company's compliance requirements state that the application must be hosted on premises. The company wants to improve the application's performance and availability. What should a solutions architect do to meet these requirements?
A. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the NLBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS.
B. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the ALBs as its endpoints. Provide access to the application by using a CNAME that points to the accelerator DNS.
C. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three NLBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS.
D. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a latency-based record that points to the three ALBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by using a CNAME that points to the CloudFront DNS.

A business is developing a massively multiplayer online game. The game communicates over UDP, so low latency between the client and the backend is critical. The backend is hosted on Amazon EC2 instances that can be deployed in multiple AWS Regions. The company needs the game to be highly available so that players worldwide can access it at all times. What should a solutions architect do to meet these requirements?
A. Deploy Amazon CloudFront to support the global traffic. Configure CloudFront with an origin group to allow access to EC2 instances in multiple Regions.
B. Deploy an Application Load Balancer in one Region to distribute traffic to EC2 instances in each Region that hosts the game's backend instances.
C. Deploy Amazon CloudFront to support an origin access identity (OAI). Associate the OAI with EC2 instances in each Region to support global traffic.
D. Deploy a Network Load Balancer in each Region to distribute the traffic. Use AWS Global Accelerator to route traffic to the correct Regional endpoint.

A business wants to move its accounting system from an on-premises data center to an AWS Region. Data security and an immutable audit log are the top priorities. The organization must monitor all AWS activities for compliance auditing. The organization has enabled AWS CloudTrail but wants to make sure it meets these requirements. Which safeguards and security measures should a solutions architect incorporate to protect and secure CloudTrail? (Select two.)
A. Enable CloudTrail log file validation.
B. Install the CloudTrail Processing Library.
C. Enable logging of Insights events in CloudTrail.
D. Enable custom logging from the on-premises resources.
E. Create an AWS Config rule to monitor whether CloudTrail is configured to use server-side encryption with AWS KMS managed encryption keys (SSE-KMS).

A solutions architect is designing a new service that will use Amazon API Gateway. The request patterns for the service will be unpredictable and can range from zero to over 500 requests per second. The total amount of data that must be persisted in a backend database is currently less than 1 GB, with unpredictable future growth. Data can be queried with simple key-value requests. Which combination of AWS services would best meet these requirements? (Select two.)
A. AWS Fargate
B. AWS Lambda
C. Amazon DynamoDB
D. Amazon EC2 Auto Scaling
E. MySQL-compatible Amazon Aurora

Management has decided to enable IPv6 on all of the company's AWS VPCs. After some time, a solutions architect tries to launch a new instance and receives an error stating that the subnet does not have enough available IP address space. What should the solutions architect do to resolve this?
A. Check to make sure that only IPv6 was used during the VPC creation.
B. Create a new IPv4 subnet with a larger range, and then launch the instance.
C. Create a new IPv6-only subnet with a large range, and then launch the instance.
D. Disable the IPv4 subnet and migrate all instances to IPv6 only. Once that is complete, launch the instance.

A business uses AWS Organizations with two AWS accounts: Logistics and Sales. The Logistics account operates an Amazon Redshift cluster. The Sales account includes Amazon EC2 instances. The Sales account needs access to the Amazon Redshift cluster owned by the Logistics account. What should a solutions architect recommend as the MOST cost-effective way to meet this requirement?
A. Set up VPC sharing with the Logistics account as the owner and the Sales account as the participant to transfer the data.
B. Create an AWS Lambda function in the Logistics account to transfer data to the Amazon EC2 instances in the Sales account.
C. Create a snapshot of the Amazon Redshift cluster, and share the snapshot with the Sales account. In the Sales account, restore the cluster by using the snapshot ID that is shared by the Logistics account.
D. Run COPY commands to load data from Amazon Redshift into Amazon S3 buckets in the Logistics account. Grant permissions to the Sales account to access the S3 buckets of the Logistics account.

A business exposes its application to the internet through an Application Load Balancer (ALB). The organization notices unusual traffic access patterns across the application. A solutions architect must improve visibility into the infrastructure to help the business understand these anomalies. What is the MOST operationally efficient solution that meets these requirements?
A. Create a table in Amazon Athena for AWS CloudTrail logs. Create a query for the relevant information.
B. Enable ALB access logging to Amazon S3. Create a table in Amazon Athena, and query the logs.
C. Enable ALB access logging to Amazon S3. Open each file in a text editor, and search each line for the relevant information.
D. Use Amazon EMR on a dedicated Amazon EC2 instance to directly query the ALB to acquire traffic access log information.

A business's operations team already has an Amazon S3 bucket configured to send notifications to an Amazon SQS queue when new objects are created in the bucket. The development team also wants to receive notifications when new objects are created. The operations team's existing workflow must be preserved. Which solution meets these requirements?
A. Create another SQS queue. Update the S3 events in the bucket to also update the new queue when a new object is created.
B. Create a new SQS queue that only allows Amazon S3 to access the queue. Update Amazon S3 to update this queue when a new object is created.
C. Create an Amazon SNS topic and SQS queue for the bucket updates. Update the bucket to send events to the new topic. Update both queues to poll Amazon SNS.
D. Create an Amazon SNS topic and SQS queue for the bucket updates. Update the bucket to send events to the new topic. Add subscriptions for both queues in the topic.

A solutions architect is designing storage for an Amazon Linux-based high performance computing (HPC) environment. The workload stores and analyzes a large number of engineering drawings, which requires shared storage and high-performance computation. Which storage option is the best choice?
A. Amazon Elastic File System (Amazon EFS)
B. Amazon FSx for Lustre
C. Amazon EC2 instance store
D. Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1)

A business plans to build a public-facing web application on AWS. The architecture consists of Amazon EC2 instances within a VPC behind an Elastic Load Balancer (ELB). A third-party provider manages the DNS. The company's solutions architect must provide a solution to detect and protect against large-scale DDoS attacks. Which solution meets these requirements?
A. Enable Amazon GuardDuty on the account.
B. Enable Amazon Inspector on the EC2 instances.
C. Enable AWS Shield and assign Amazon Route 53 to it.
D. Enable AWS Shield Advanced and assign the ELB to it.

A business is moving to the AWS Cloud. The first workload to migrate is a file server. The file share must be accessible through the Server Message Block (SMB) protocol. Which AWS managed service meets these requirements?
A. Amazon Elastic Block Store (Amazon EBS)
B. Amazon EC2
C. Amazon FSx
D. Amazon S3

A business is deploying a new application on an Amazon Elastic Container Service (Amazon ECS) cluster using the Fargate launch type. The company is monitoring CPU and memory usage because it expects the application to receive a significant volume of traffic at launch. However, the company wants to reduce costs as usage declines. What should a solutions architect recommend?
A. Use Amazon EC2 Auto Scaling to scale at certain periods based on previous traffic patterns.
B. Use an AWS Lambda function to scale Amazon ECS based on metric breaches that trigger an Amazon CloudWatch alarm.
C. Use Amazon EC2 Auto Scaling with simple scaling policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm.
D. Use AWS Application Auto Scaling with target tracking policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm.

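To ground option D, Application Auto Scaling can target-track an ECS service's average CPU so the task count grows under load and shrinks as traffic drops. The cluster name, service name, capacity bounds, and target value below are illustrative assumptions.

```python
import boto3

aas = boto3.client("application-autoscaling")

# "web-cluster" and "web-service" are placeholder ECS names.
resource_id = "service/web-cluster/web-service"

aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Keep average CPU near 60%; scaling in as traffic falls is where the
# cost savings come from.
aas.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```
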
A business has an AWS-hosted website. The database backend runs on Amazon RDS for MySQL with a primary instance and five read replicas to support scaling needs. The read replicas must lag no more than one second behind the primary instance to provide a consistent user experience. As traffic on the website continues to grow, the replicas fall further behind during peak periods, leading to user complaints when searches return inconsistent results. A solutions architect must reduce replication lag as much as possible with minimal changes to the application code or operational requirements. Which solution meets these requirements?
A. Migrate the database to Amazon Aurora MySQL. Replace the MySQL read replicas with Aurora Replicas and enable Aurora Auto Scaling.
B. Deploy an Amazon ElastiCache for Redis cluster in front of the database. Modify the website to check the cache before querying the database read endpoints.
C. Migrate the database from Amazon RDS to MySQL running on Amazon EC2 compute instances. Choose very large compute optimized instances for all replica nodes.
D. Migrate the database to Amazon DynamoDB. Initially provision a large number of read capacity units (RCUs) to support the required throughput with on-demand capacity scaling enabled.

A business has users all over the world using an application that is deployed in multiple AWS Regions and exposes public static IP addresses. When users access the application over the internet, they experience performance issues. What should a solutions architect recommend to reduce internet latency?
A. Set up AWS Global Accelerator and add endpoints.
B. Set up AWS Direct Connect locations in multiple Regions.
C. Set up an Amazon CloudFront distribution to access the application.
D. Set up an Amazon Route 53 geoproximity routing policy to route traffic.

A business needs to build a reporting solution on AWS. The solution must support SQL queries that data analysts run on the data. The data analysts will run fewer than ten queries each day. The company adds 3 GB of new data to its on-premises relational database each day. This data must be transferred to AWS so that reporting tasks can be performed. What should a solutions architect recommend as the LEAST expensive way to meet these requirements?
A. Use AWS Database Migration Service (AWS DMS) to replicate the data from the on-premises database into Amazon S3. Use Amazon Athena to query the data.
B. Use an Amazon Kinesis Data Firehose delivery stream to deliver the data into an Amazon Elasticsearch Service (Amazon ES) cluster. Run the queries in Amazon ES.
C. Export a daily copy of the data from the on-premises database. Use an AWS Storage Gateway file gateway to store and copy the export into Amazon S3. Use an Amazon EMR cluster to query the data.
D. Use AWS Database Migration Service (AWS DMS) to replicate the data from the on-premises database and load it into an Amazon Redshift cluster. Use the Amazon Redshift cluster to query the data.

A business uses an Amazon S3 bucket to store data uploaded by several departments from various locations. During an AWS Well-Architected review, the finance manager notices that 10 TB of S3 Standard storage is billed each month. However, selecting all files and folders in the AWS Management Console for Amazon S3 shows a total size of 5 TB. What are the possible causes of this discrepancy? (Select two.)
A. Some files are stored with deduplication.
B. The S3 bucket has versioning enabled.
C. There are incomplete S3 multipart uploads.
D. The S3 bucket has AWS Key Management Service (AWS KMS) enabled.
E. The S3 bucket has Intelligent-Tiering enabled.

A business is building an ecommerce website on AWS. The website uses a three-tier architecture that includes a MySQL database in an Amazon Aurora MySQL Multi-AZ deployment. The web application must be highly available and will initially be deployed in an AWS Region with three Availability Zones. The application produces a metric that indicates the load it is experiencing. Which solution meets these requirements?
A. Configure an Application Load Balancer (ALB) with Amazon EC2 Auto Scaling behind the ALB with scheduled scaling.
B. Configure an Application Load Balancer (ALB) and Amazon EC2 Auto Scaling behind the ALB with a simple scaling policy.
C. Configure a Network Load Balancer (NLB) and launch a Spot Fleet with Amazon EC2 Auto Scaling behind the NLB.
D. Configure an Application Load Balancer (ALB) and Amazon EC2 Auto Scaling behind the ALB with a target tracking scaling policy.

A gaming firm uses AWS to host a browser-based application. The application's users view a high volume of videos and images stored in Amazon S3. This content is the same for all users. The application has grown in popularity, with millions of users accessing these media files every day. The company wants to deliver the files to users while reducing the load on the origin. Which option meets these requirements MOST cost-effectively?
A. Deploy an AWS Global Accelerator accelerator in front of the web servers.
B. Deploy an Amazon CloudFront web distribution in front of the S3 bucket.
C. Deploy an Amazon ElastiCache for Redis instance in front of the web servers.
D. Deploy an Amazon ElastiCache for Memcached instance in front of the web servers.

A business processes data every day. The output of the processing is kept in an Amazon S3 bucket, analyzed daily for one week, and then must remain readily available for ad hoc analysis. Which storage solution is the MOST cost-effective alternative to the current configuration?
A. Configure a lifecycle policy to delete the objects after 30 days.
B. Configure a lifecycle policy to transition the objects to Amazon S3 Glacier after 30 days.
C. Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
D. Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.

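For reference, a lifecycle rule of the kind described in options B-D is configured per bucket; the sketch below transitions objects to S3 Standard-IA after 30 days. The bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")

# "daily-processing-output" is a placeholder bucket name.
s3.put_bucket_lifecycle_configuration(
    Bucket="daily-processing-output",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-standard-ia",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # apply to every object in the bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"}
            ],
        }]
    },
)
```
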
A business is developing an application. The application receives data through Amazon API Gateway and stores it in an Amazon Aurora PostgreSQL database by using an AWS Lambda function. During the proof-of-concept stage, the company has to significantly raise its Lambda quotas to handle the large volumes of data that must be loaded into the database. A solutions architect must recommend a new design that maximizes scalability and minimizes configuration effort. Which solution meets these requirements?
A. Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect to the database by using native Java Database Connectivity (JDBC) drivers.
B. Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point the existing DynamoDB API calls at the DAX cluster.
C. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS).
D. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.

A business wants to migrate its on-premises MySQL database to AWS. Regular imports from a client-facing application result in a large volume of write operations against the database. The organization is concerned that the volume of traffic might affect the application's performance. How should a solutions architect design the AWS architecture?
A. Provision an Amazon RDS for MySQL DB instance with Provisioned IOPS SSD storage. Monitor write operation metrics by using Amazon CloudWatch. Adjust the provisioned IOPS if necessary.
B. Provision an Amazon RDS for MySQL DB instance with General Purpose SSD storage. Place an Amazon ElastiCache cluster in front of the DB instance. Configure the application to query ElastiCache instead.
C. Provision an Amazon DocumentDB (with MongoDB compatibility) instance with a memory optimized instance type. Monitor Amazon CloudWatch for performance-related issues. Change the instance class if necessary.
D. Provision an Amazon Elastic File System (Amazon EFS) file system in General Purpose performance mode. Monitor Amazon CloudWatch for IOPS bottlenecks. Change to Provisioned Throughput performance mode if necessary.

A firm is using AWS to build a multi-instance application that requires low latency between the instances. What should a solutions architect recommend?
A. Use an Auto Scaling group with a cluster placement group.
B. Use an Auto Scaling group with a single Availability Zone in the same AWS Region.
C. Use an Auto Scaling group with multiple Availability Zones in the same AWS Region.
D. Use a Network Load Balancer with multiple Amazon EC2 Dedicated Hosts as the targets.

For the last 15 years, a corporation has been running a web application with an Oracle relational database in an on-premises data center. The company must migrate the database to AWS. The business wants to reduce operational costs without changing the application's code. Which solution meets these requirements?
A. Use AWS Database Migration Service (AWS DMS) to migrate the database servers to Amazon RDS.
B. Use Amazon EC2 instances to migrate and operate the database servers.
C. Use AWS Database Migration Service (AWS DMS) to migrate the database servers to Amazon DynamoDB.
D. Use an AWS Snowball Edge Storage Optimized device to migrate the data from Oracle to Amazon Aurora.

A development team keeps the user name and password for its Amazon RDS for MySQL DB instance in a configuration file. The configuration file is stored in plaintext on the root device volume of the team's Amazon EC2 instance. When the team's application needs to connect to the database, the file is read and the credentials are loaded into the code. The team has changed the configuration file's permissions so that only the application can read its contents. A solutions architect must design a more secure solution. What should the solutions architect do to meet this requirement?
A. Store the configuration file in Amazon S3. Grant the application access to read the configuration file.
B. Create an IAM role with permission to access the database. Attach this IAM role to the EC2 instance.
C. Enable SSL connections on the database instance. Alter the database user to require SSL when logging in.
D. Move the configuration file to an EC2 instance store, and create an Amazon Machine Image (AMI) of the instance. Launch new instances from this AMI.

A business's application runs on Amazon EC2 instances within a VPC. One of the applications must call the Amazon S3 API to store and retrieve objects. The company's security policies prohibit applications from sending any internet-bound traffic. Which course of action will meet these requirements and maintain security?
A. Configure an S3 interface endpoint.
B. Configure an S3 gateway endpoint.
C. Create an S3 bucket in a private subnet.
D. Create an S3 bucket in the same Region as the EC2 instance.

A business's order fulfillment service uses a MySQL database. The database must handle a large number of concurrent requests and transactions. Developers spend time patching and tuning the database, which delays the release of new product features. The organization wants to use cloud-based services to help address this challenge. The solution must allow the developers to migrate the database with little or no code modification and must maximize performance. Which service should the solutions architect use to meet these requirements?
A. Amazon Aurora
B. Amazon DynamoDB
C. Amazon ElastiCache
D. MySQL on Amazon EC2

A corporation needs to create a relational database with a 1 second Recovery Point Objective (RPO) and a 1 minute Recovery Time Objective (RTO) for multi-region disaster recovery. Which AWS solution is capable of doing this? Amazon Aurora Global Database Amazon DynamoDB global tables Amazon RDS for MySQL with Multi-AZ enabled Amazon RDS for MySQL with a cross-Region snapshot copy.
A solutions architect is tasked with the responsibility of designing the implementation of a new static website. The solution must be cost effective and maintain a minimum of 99 percent availability. Which solution satisfies these criteria? Deploy the application to an Amazon S3 bucket in one AWS Region that has versioning disabled. Deploy the application to Amazon EC2 instances that run in two AWS Regions and two Availability Zones. Deploy the application to an Amazon S3 bucket that has versioning and cross-Region replication enabled. Deploy the application to an Amazon EC2 instance that runs in one AWS Region and one Availability Zone.
A business operates a three-tier web application for the purpose of processing credit card payments. Static websites comprise the front-end user interface. The application layer may include lengthy procedures. MySQL is used in the database layer. Currently, the application is running on a single large general-purpose Amazon EC2 instance. A solutions architect must decouple the services in order to maximize the availability of the web application. Which of the following solutions would give the HIGHEST level of availability? Move static assets to Amazon CloudFront. Leave the application in EC2 in an Auto Scaling group. Move the database to Amazon RDS to deploy Multi-AZ. Move static assets and the application into a medium EC2 instance. Leave the database on the large instance. Place both instances in an Auto Scaling group. Move static assets to Amazon S3. Move the application to AWS Lambda with the concurrency limit set. Move the database to Amazon DynamoDB with on-demand enabled. Move static assets to Amazon S3. Move the application to Amazon Elastic Container Service (Amazon ECS) containers with Auto Scaling enabled. Move the database to Amazon RDS to deploy Multi-AZ.
The security team of a corporation requires that network traffic be captured in VPC Flow Logs. The logs will be viewed often for 90 days and then accessed only occasionally. What should a solutions architect do when configuring the logs to satisfy these requirements? Use Amazon CloudWatch as the target. Set the CloudWatch log group with an expiration of 90 days. Use Amazon Kinesis as the target. Configure the Kinesis stream to always retain the logs for 90 days. Use AWS CloudTrail as the target. Configure CloudTrail to save to an Amazon S3 bucket, and enable S3 Intelligent-Tiering. Use Amazon S3 as the target. Enable an S3 Lifecycle policy to transition the logs to S3 Standard-Infrequent Access (S3 Standard-IA) after 90 days.
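The last option pairs an S3 destination for the flow logs with a lifecycle rule that moves objects to a cheaper storage class once the 90 days of frequent access are over. A boto3 sketch of both pieces, using hypothetical VPC and bucket names:

    import boto3

    ec2 = boto3.client("ec2")
    s3 = boto3.client("s3")

    # Hypothetical VPC ID and bucket name.
    ec2.create_flow_logs(
        ResourceType="VPC",
        ResourceIds=["vpc-0123456789abcdef0"],
        TrafficType="ALL",
        LogDestinationType="s3",
        LogDestination="arn:aws:s3:::example-flow-logs",
    )

    s3.put_bucket_lifecycle_configuration(
        Bucket="example-flow-logs",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "age-out-flow-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                # After the 90-day period of frequent access, shift the logs to Standard-IA.
                "Transitions": [{"Days": 90, "StorageClass": "STANDARD_IA"}],
            }]
        },
    )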
Amazon S3 is being used by a solutions architect to develop the storage architecture for a new digital media application. The media files must be robust in the event of an Availability Zone failure. Certain files are routinely visited, while others are viewed infrequently and in an unexpected fashion. The architect of the solution must keep the expenses of storing and retrieving media files to a minimum. Which storage choice satisfies these criteria? S3 Standard S3 Intelligent-Tiering S3 Standard-Infrequent Access (S3 Standard-IA) S3 One Zone-Infrequent Access (S3 One Zone-IA).
Monthly reports are stored in an Amazon S3 bucket by a company's financial application. The vice president of finance has directed that all access to these reports be documented, as well as any adjustments to the log files. What activities can a solutions architect take to ensure compliance with these requirements? Use S3 server access logging on the bucket that houses the reports with the read and write data events and log file validation options enabled. Use S3 server access logging on the bucket that houses the reports with the read and write management events and log file validation options enabled. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write data events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation. Use AWS CloudTrail to create a new trail. Configure the trail to log read and write management events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation.
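Object-level read and write activity on an S3 bucket is captured by CloudTrail data events, and log file validation provides the tamper evidence for the trail's output. A sketch with hypothetical trail and bucket names:

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    cloudtrail.create_trail(
        Name="finance-reports-trail",
        S3BucketName="finance-trail-logs",     # a separate bucket receives the trail output
        EnableLogFileValidation=True,          # digests detect tampering with the log files
    )

    # Record object-level (data) events for the bucket that holds the reports.
    cloudtrail.put_event_selectors(
        TrailName="finance-reports-trail",
        EventSelectors=[{
            "ReadWriteType": "All",
            "IncludeManagementEvents": False,
            "DataResources": [{
                "Type": "AWS::S3::Object",
                "Values": ["arn:aws:s3:::finance-monthly-reports/"],
            }],
        }],
    )

    cloudtrail.start_logging(Name="finance-reports-trail")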
A business that specializes in online gaming is developing a game that is predicted to be very popular around the globe. A solutions architect must create an AWS Cloud architecture capable of capturing and presenting near-real-time game data for each participant, as well as the names of the world's top 25 players at any one moment. Which AWS database solution and configuration should be used to satisfy these requirements? Use Amazon RDS for MySQL as the data store for player activity. Configure the RDS DB instance for Multi-AZ support. Use Amazon DynamoDB as the data store for player activity. Configure DynamoDB Accelerator (DAX) for the player data. Use Amazon DynamoDB as the data store for player activity. Configure global tables in each required AWS Region for the player data. Use Amazon RDS for MySQL as the data store for player activity. Configure cross-Region read replicas in each required AWS Region based on player proximity.
A business is building a serverless web application that will allow users to engage with real-time game stats. The data generated by the games must be transmitted live. The business needs a robust, low-latency database solution for user data. The corporation is unsure about the application's anticipated user base. Any design considerations must ensure single-digit millisecond response rates as the application grows. Which AWS service combination will suit these requirements? (Select two.) Amazon CloudFront Amazon DynamoDB Amazon Kinesis Amazon RDS AWS Global Accelerator.
A solutions architect is developing a solution that will allow users to browse a collection of photos and make requests for customized images. Parameters for image customisation will be included in each request made to an AWS API Gateway API. The personalized picture will be created on demand, and consumers will get a link to see or download it. The solution must be very user-friendly in terms of viewing and modifying photos. Which approach is the MOST cost-effective in meeting these requirements? Use Amazon EC2 instances to manipulate the original image into the requested customizations. Store the original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2 instances. Use AWS Lambda to manipulate the original image to the requested customizations. Store the original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the S3 bucket as the origin. Use AWS Lambda to manipulate the original image to the requested customizations. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2 instances. Use Amazon EC2 instances to manipulate the original image into the requested customizations. Store the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin.
A business runs an Amazon EC2 instance on a private subnet and requires access to a public website in order to get patches and upgrades. The organization does not want other websites to be able to see or start connections to the EC2 instance's IP address. How can a solutions architect accomplish this goal? Create a site-to-site VPN connection between the private subnet and the network in which the public site is deployed. Create a NAT gateway in a public subnet. Route outbound traffic from the private subnet through the NAT gateway. Create a network ACL for the private subnet where the EC2 instance deployed only allows access from the IP address range of the public website. Create a security group that only allows connections from the IP address range of the public website. Attach the security group to the EC2 instance.
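A NAT gateway in a public subnet lets the private instance initiate outbound connections for patches while refusing inbound connections started from the internet. A boto3 sketch with placeholder subnet and route table IDs:

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical IDs: the NAT gateway sits in a public subnet with an Elastic IP,
    # and the private subnet's route table points internet-bound traffic at it.
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-0aaa1111bbbb22222",          # public subnet
        AllocationId=eip["AllocationId"],
    )
    ec2.create_route(
        RouteTableId="rtb-0ccc3333dddd44444",         # private subnet's route table
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat["NatGateway"]["NatGatewayId"],
    )

(The NAT gateway takes a short while to become available before the route starts working.)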
A business uses AWS to host its website. The website is protected by an Application Load Balancer (ALB) configured to manage HTTP and HTTPS traffic independently. The firm wishes to route all queries to the website through HTTPS. What solution should a solutions architect implement to satisfy this criterion? Update the ALB's network ACL to accept only HTTPS traffic. Create a rule that replaces the HTTP in the URL with HTTPS. Create a listener rule on the ALB to redirect HTTP traffic to HTTPS. Replace the ALB with a Network Load Balancer configured to use Server Name Indication (SNI).
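An ALB listener on port 80 can answer every HTTP request with a redirect to HTTPS. A minimal boto3 sketch, assuming a placeholder load balancer ARN:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Hypothetical ALB ARN. The default action returns an HTTP 301 that points the
    # client at the same host and path over HTTPS on port 443.
    elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/abc123",
        Protocol="HTTP",
        Port=80,
        DefaultActions=[{
            "Type": "redirect",
            "RedirectConfig": {"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"},
        }],
    )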
A business hosts a multilingual website using a fleet of Amazon EC2 instances protected by an Application Load Balancer (ALB). While this design is presently operational in the us-west-1 Region, it exhibits significant request delay for customers in other regions of the globe. The website must respond fast and effectively to user queries regardless of their location. The organization, however, does not want to duplicate the present infrastructure across numerous Regions. How is this to be accomplished by a solutions architect? Replace the existing architecture with a website served from an Amazon S3 bucket. Configure an Amazon CloudFront distribution with the S3 bucket as the origin. Configure an Amazon CloudFront distribution with the ALB as the origin. Set the cache behavior settings to only cache based on the Accept-Language request header. Set up Amazon API Gateway with the ALB as an integration. Configure API Gateway to use an HTTP integration type. Set up an API Gateway stage to enable the API cache. Launch an EC2 instance in each additional Region and configure NGINX to act as a cache server for that Region. Put all the instances plus the ALB behind an Amazon Route 53 record set with a geolocation routing policy.
On AWS, a business is developing a prototype of an ecommerce website. The website consists of an Application Load Balancer, an Auto Scaling group of Amazon EC2 instances for web servers, and an Amazon RDS for MySQL database instance configured in Single-AZ mode. The website is sluggish to react while doing product catalog searches. The product catalog is a collection of tables in the MySQL database that the firm uses to store its products and is not regularly updated. A solutions architect has established that when product catalog searches occur, the CPU consumption on the database instance is significant. What should the solutions architect propose to optimize the website's performance during product catalog searches? Migrate the product catalog to an Amazon Redshift database. Use the COPY command to load the product catalog tables. Implement an Amazon ElastiCache for Redis cluster to cache the product catalog. Use lazy loading to populate the cache. Add an additional scaling policy to the Auto Scaling group to launch additional EC2 instances when database response is slow. Turn on the Multi-AZ configuration for the DB instance. Configure the EC2 instances to throttle the product catalog queries that are sent to the database.
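Lazy loading, mentioned in the ElastiCache option, populates the cache only on a miss, so the rarely updated catalog stops generating repeated MySQL queries. A sketch of the pattern with redis-py and PyMySQL, using hypothetical endpoints and a hypothetical products table:

    import json
    import redis
    import pymysql

    cache = redis.Redis(host="catalog-cache.example.cache.amazonaws.com", port=6379)
    db = pymysql.connect(host="catalog-db.example.rds.amazonaws.com",
                         user="app", password="REPLACE_ME", database="catalog")

    def get_product(product_id):
        key = f"product:{product_id}"
        cached = cache.get(key)
        if cached is not None:                                  # cache hit: no database work
            return json.loads(cached)
        with db.cursor(pymysql.cursors.DictCursor) as cur:      # cache miss: query MySQL once
            cur.execute("SELECT * FROM products WHERE id = %s", (product_id,))
            row = cur.fetchone()
        cache.setex(key, 3600, json.dumps(row, default=str))    # keep the entry warm for an hour
        return row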
A business has developed a bespoke application that utilizes embedded credentials to get data from an Amazon RDS MySQL DB instance. According to management, the application's security must be enhanced with the least amount of development work possible. What actions should a solutions architect take to ensure that these criteria are met? Use AWS Key Management Service (AWS KMS) customer master keys (CMKs) to create keys. Configure the application to load the database credentials from AWS KMS. Enable automatic key rotation. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the application to load the database credentials from Secrets Manager. Create an AWS Lambda function that rotates the credentials in Secrets Manager. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the application to load the database credentials from Secrets Manager. Set up a credentials rotation schedule for the application user in the RDS for MySQL database using Secrets Manager. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Systems Manager Parameter Store. Configure the application to load the database credentials from Parameter Store. Set up a credentials rotation schedule for the application user in the RDS for MySQL database using Parameter Store.
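Reading the credentials from AWS Secrets Manager at runtime removes them from the application code entirely and is a small code change. A sketch assuming a hypothetical secret named prod/app/mysql that stores a JSON document with username and password keys:

    import json
    import boto3

    secrets = boto3.client("secretsmanager")
    secret = json.loads(
        secrets.get_secret_value(SecretId="prod/app/mysql")["SecretString"]
    )

    db_user = secret["username"]
    db_password = secret["password"]

Rotation can then be scheduled on the secret itself, so the application keeps loading whatever the current value is.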
A solutions architect is developing a system for analyzing financial market performance while the markets are closed. Each night, the system will conduct a succession of compute-intensive operations for four hours. The time required to finish compute tasks is expected to be constant, and once begun, jobs cannot be stopped. Once implemented, the system is expected to operate for at least one year. Which Amazon EC2 instance type should be utilized to lower the system's cost? Spot Instances On-Demand Instances Standard Reserved Instances Scheduled Reserved Instances.
A business's on-premises data center hosts its critical network services, such as directory services and DNS. AWS Direct Connect (DX) connects the data center to the AWS Cloud. Additional AWS accounts are anticipated, which will need continuous, rapid, and cost-effective access to these network services. What measures should a solutions architect take to ensure that these criteria are met with the least amount of operational overhead possible? Create a DX connection in each new account. Route the network traffic to the on-premises servers. Configure VPC endpoints in the DX VPC for all required services. Route the network traffic to the on-premises servers. Create a VPN connection between each new account and the DX VPC. Route the network traffic to the on-premises servers. Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers.
A business is developing a new application for storing a big volume of data. Hourly data analysis and modification will be performed by many Amazon EC2 Linux instances distributed across several Availability Zones. The application team anticipates that the required quantity of space will continue to expand over the following six months. Which course of action should a solutions architect pursue in order to meet these requirements? Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the application instances. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on the application instances. Store the data in Amazon S3 Glacier. Update the S3 Glacier vault policy to allow access to the application instances. Store the data in an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume shared between the application instances.
A solutions architect is tasked with the responsibility of creating the cloud architecture for a new application that will be hosted on AWS. The process should be parallelized, with the number of jobs to be handled dictating the number of application nodes added and removed. State is not maintained by the processor program. The solutions architect must guarantee that the application is loosely coupled and that the task items are kept in a durable manner. Which design should the solutions architect use? Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic.
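Scaling the processor fleet on queue depth, as the SQS-based options describe, ties capacity to the number of outstanding jobs rather than to CPU or network load. A boto3 sketch of a target tracking policy on the SQS backlog, with hypothetical group and queue names:

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="processor-asg",
        PolicyName="scale-on-queue-depth",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "CustomizedMetricSpecification": {
                "Namespace": "AWS/SQS",
                "MetricName": "ApproximateNumberOfMessagesVisible",
                "Dimensions": [{"Name": "QueueName", "Value": "jobs-queue"}],
                "Statistic": "Average",
            },
            "TargetValue": 10.0,   # try to keep the visible backlog near 10 messages
        },
    )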
A business is constructing a file-sharing application whose files will be stored in an Amazon S3 bucket. The firm wants to distribute all files using Amazon CloudFront. The firm does not want the files to be accessible directly via the S3 URL. What actions should a solutions architect take to ensure that these criteria are met? Write individual policies for each S3 bucket to grant read permission for only CloudFront access. Create an IAM user. Grant the user read permission to objects in the S3 bucket. Assign the user to CloudFront. Write an S3 bucket policy that assigns the CloudFront distribution ID as the Principal and assigns the target S3 bucket as the Amazon Resource Name (ARN). Create an origin access identity (OAI). Assign the OAI to the CloudFront distribution. Configure the S3 bucket permissions so that only the OAI has read permission.
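The origin access identity approach attaches an OAI to the CloudFront distribution's S3 origin and then limits the bucket policy to that identity, so the objects are not readable through their direct S3 URLs. A boto3 sketch with a hypothetical bucket name:

    import json
    import boto3

    cloudfront = boto3.client("cloudfront")
    s3 = boto3.client("s3")

    oai = cloudfront.create_cloud_front_origin_access_identity(
        CloudFrontOriginAccessIdentityConfig={
            "CallerReference": "file-sharing-oai",
            "Comment": "Read-only access for the file-sharing distribution",
        }
    )
    canonical_user = oai["CloudFrontOriginAccessIdentity"]["S3CanonicalUserId"]

    # Only the OAI may read objects; requests to the plain S3 URL are not authorized.
    s3.put_bucket_policy(
        Bucket="file-sharing-bucket",
        Policy=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"CanonicalUser": canonical_user},
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::file-sharing-bucket/*",
            }],
        }),
    )

The OAI still has to be referenced in the distribution's S3 origin configuration so CloudFront signs its requests with it.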
Numerous business processes inside a corporation need access to data kept in a file share. The file share will be accessed by business systems using the Server Message Block (SMB) protocol. The file sharing solution should be available from both the on-premises and cloud environments of the business. Which services are required by the business? (Select two.) Amazon Elastic Block Store (Amazon EBS) Amazon Elastic File System (Amazon EFS) Amazon FSx for Windows Amazon S3 AWS Storage Gateway file gateway.
A business is ingesting data from on-premises data sources utilizing a fleet of Amazon EC2 instances. The data is in JSON format and may be ingested at a rate of up to 1 MB/s. When an EC2 instance is restarted, any data that was in transit is lost. The data science team at the organization wishes to query imported data in near-real time. Which method enables near-real-time data querying while being scalable and causing the least amount of data loss? Publish data to Amazon Kinesis Data Streams. Use Kinesis Data Analytics to query the data. Publish data to Amazon Kinesis Data Firehose with Amazon Redshift as the destination. Use Amazon Redshift to query the data. Store ingested data in an EC2 instance store. Publish data to Amazon Kinesis Data Firehose with Amazon S3 as the destination. Use Amazon Athena to query the data. Store ingested data in an Amazon Elastic Block Store (Amazon EBS) volume. Publish data to Amazon ElastiCache for Redis. Subscribe to the Redis channel to query the data.
A business relies on a traditional on-premises analytics solution that runs on terabytes of .csv files and contains months of data. The older program is unable to cope with the increasing size of .csv files. Daily, new .csv files are uploaded to a common on-premises storage site from numerous data sources. The organization wants to maintain support for the traditional application while customers familiarize themselves with AWS analytics capabilities. To do this, the solutions architect wants to keep two synchronized copies of all .csv files on-premises and on Amazon S3. Which solution should the solutions architect recommend? Deploy AWS DataSync on-premises. Configure DataSync to continuously replicate the .csv files between the company's on-premises storage and the company's S3 bucket. Deploy an on-premises file gateway. Configure data sources to write the .csv files to the file gateway. Point the legacy analytics application to the file gateway. The file gateway should replicate the .csv files to Amazon S3. Deploy an on-premises volume gateway. Configure data sources to write the .csv files to the volume gateway. Point the legacy analytics application to the volume gateway. The volume gateway should replicate data to Amazon S3. Deploy AWS DataSync on-premises. Configure DataSync to continuously replicate the .csv files between on-premises and Amazon Elastic File System (Amazon EFS). Enable replication from Amazon Elastic File System (Amazon EFS) to the company's S3 bucket.
A business is transferring a three-tier application to Amazon Web Services. A MySQL database is required for the program. Previously, application users complained about the program's slow performance while adding new entries. These performance difficulties occurred as a result of users creating various real-time reports from the program during business hours. Which solution will optimize the application's performance when it is migrated to AWS? Import the data into an Amazon DynamoDB table with provisioned capacity. Refactor the application to use DynamoDB for reports. Create the database on a compute optimized Amazon EC2 instance. Ensure compute resources exceed the on-premises database. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the application to use the reader endpoint for reports. Create an Amazon Aurora MySQL Multi-AZ DB cluster. Configure the application to use the backup instance of the cluster as an endpoint for the reports.
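With an Aurora MySQL cluster, report queries can be pointed at the reader endpoint, which load-balances across the read replicas, while inserts keep going to the writer (cluster) endpoint. A PyMySQL sketch with hypothetical endpoints and a hypothetical orders table:

    import pymysql

    # Writes use the cluster (writer) endpoint; reporting uses the reader endpoint.
    writer = pymysql.connect(host="app.cluster-abc123.us-east-1.rds.amazonaws.com",
                             user="app", password="REPLACE_ME", database="orders")
    reader = pymysql.connect(host="app.cluster-ro-abc123.us-east-1.rds.amazonaws.com",
                             user="app", password="REPLACE_ME", database="orders")

    with reader.cursor() as cur:
        cur.execute("SELECT status, COUNT(*) FROM orders GROUP BY status")
        report_rows = cur.fetchall()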
A corporation has an on-premises MySQL database that is used infrequently by the worldwide sales staff. The sales team requires minimal database downtime. A database administrator wishes to move this database to AWS without selecting an instance type in anticipation of increased user traffic in the future. Which solution architect service should be recommended? Amazon Aurora MySQL Amazon Aurora Serverless for MySQL Amazon Redshift Spectrum Amazon RDS for MySQL.
A corporation is developing an architecture for a mobile application that needs the least amount of delay possible for its consumers. The company's architecture consists of Amazon EC2 instances that run in an Auto Scaling group behind an Application Load Balancer. The Amazon EC2 instances communicate with Amazon RDS. Beta testing of the application revealed slow reads of the data; however, the metrics show that the EC2 instances do not exceed any CPU utilization thresholds. How can this problem be resolved? Reduce the threshold for CPU utilization in the Auto Scaling group. Replace the Application Load Balancer with a Network Load Balancer. Add read replicas for the RDS instances and direct read traffic to the replica. Add Multi-AZ support to the RDS instances and direct read traffic to the new EC2 instance.
A business's application architecture is two-tiered and distributed over public and private subnets. The public subnet contains Amazon EC2 instances that execute the web application, whereas the private subnet has a database. The web application instances and database are both contained inside a single Availability Zone (AZ). Which combination of measures should a solutions architect take to ensure this architecture's high availability? (Select two.) Create new public and private subnets in the same AZ for high availability. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer. Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in one AZ. Create new public and private subnets in the same VPC, each in a new AZ. Migrate the database to an Amazon RDS multi-AZ deployment.
A business wants to utilize Amazon S3 as a supplementary storage location for its on-premises dataset. The business would seldom need access to this copy. The cost of the storage solution should be kept to a minimum. Which storage option satisfies these criteria? S3 Standard S3 Intelligent-Tiering S3 Standard-Infrequent Access (S3 Standard-IA) S3 One Zone-Infrequent Access (S3 One Zone-IA).
A security team is responsible for restricting access to certain services or actions across all of the team's AWS accounts. All of the accounts belong to a large organization in AWS Organizations. The solution must be scalable, and permissions must be managed centrally. What actions should a solutions architect take to achieve this? Create an ACL to provide access to the services or actions. Create a security group to allow accounts and attach it to user groups. Create cross-account roles in each account to deny access to the services or actions. Create a service control policy in the root organizational unit to deny access to the services or actions.
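A service control policy attached at the organization root applies to every account beneath it, including accounts created later, which keeps the restrictions centrally managed. A boto3 sketch with a hypothetical deny list:

    import json
    import boto3

    org = boto3.client("organizations")

    scp = org.create_policy(
        Name="deny-restricted-services",
        Description="Block services the security team has not approved",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Deny",
                "Action": ["redshift:*", "sagemaker:*"],   # placeholder list of denied services
                "Resource": "*",
            }],
        }),
    )

    # Attach at the root so every organizational unit and account inherits the policy.
    root_id = org.list_roots()["Roots"][0]["Id"]
    org.attach_policy(PolicyId=scp["Policy"]["PolicySummary"]["Id"], TargetId=root_id)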
A business has an application with a REST-based interface that enables near-real-time data retrieval from a third-party vendor. After receiving the data, the program analyzes and saves it for further analysis. Amazon EC2 instances are used to host the application. When delivering data to the program, the third-party vendor saw many 503 Service Unavailable errors. When data volume increases, the compute capacity approaches its limit and the application becomes unable to process all requests. Which design should a solutions architect advocate in order to achieve more scalability? Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions. Use Amazon API Gateway on top of the existing application. Create a usage plan with a quota limit for the third-party vendor. Use Amazon Simple Notification Service (Amazon SNS) to ingest the data. Put the EC2 instances in an Auto Scaling group behind an Application Load Balancer. Repackage the application as a container. Deploy the application using Amazon Elastic Container Service (Amazon ECS) using the EC2 launch type with an Auto Scaling group.
A business has developed an application that analyzes inventory data by using overnight digital photographs of items on shop shelves. The application is deployed on Amazon EC2 instances behind an Application Load Balancer (ALB) and retrieves photos from an Amazon S3 bucket for metadata processing by worker nodes. A solutions architect must guarantee that worker nodes process each picture. What actions should the solutions architect take to ensure that this need is met in the MOST cost-effective manner possible? Send the image metadata from the application directly to a second ALB for the worker nodes that use an Auto Scaling group of EC2 Spot Instances as the target group. Process the image metadata by sending it directly to EC2 Reserved Instances in an Auto Scaling group. With a dynamic scaling policy, use an Amazon CloudWatch metric for average CPU utilization of the Auto Scaling group as soon as the front-end application obtains the images. Write messages to Amazon Simple Queue Service (Amazon SQS) when the front-end application obtains an image. Process the images with EC2 On- Demand instances in an Auto Scaling group with instance scale-in protection and a fixed number of instances with periodic health checks. Write messages to Amazon Simple Queue Service (Amazon SQS) when the application obtains an image. Process the images with EC2 Spot Instances in an Auto Scaling group with instance scale-in protection and a dynamic scaling policy using a custom Amazon CloudWatch metric for the current number of messages in the queue.
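The queue-driven option scales the Spot worker fleet on the image backlog rather than on CPU. One common way to do that is to publish a "backlog per instance" custom metric on a schedule and hang a dynamic scaling policy off it; the sketch below computes and publishes the metric, with all names hypothetical:

    import boto3

    sqs = boto3.client("sqs")
    cloudwatch = boto3.client("cloudwatch")
    autoscaling = boto3.client("autoscaling")

    queue_url = sqs.get_queue_url(QueueName="image-metadata-queue")["QueueUrl"]
    attrs = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["ApproximateNumberOfMessages"]
    )
    backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=["image-workers"]
    )["AutoScalingGroups"][0]
    instances = max(len(group["Instances"]), 1)

    # Each worker's share of the queue; a target tracking policy can then aim for a
    # fixed value of this metric.
    cloudwatch.put_metric_data(
        Namespace="ImagePipeline",
        MetricData=[{
            "MetricName": "BacklogPerInstance",
            "Value": backlog / instances,
            "Unit": "Count",
        }],
    )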
In the us-east-1 Region, a corporation has three VPCs designated Development, Testing, and Production. The three virtual private clouds must be linked to an on-premises data center and are meant to be self-contained in order to ensure security and avoid resource sharing. A solutions architect must identify a solution that is both scalable and safe. What recommendations should the solutions architect make? Create an AWS Direct Connect connection and a VPN connection for each VPC to connect back to the data center. Create VPC peers from all the VPCs to the Production VPC. Use an AWS Direct Connect connection from the Production VPC back to the data center. Connect VPN connections from all the VPCs to a VPN in the Production VPC. Use a VPN connection from the Production VPC back to the data center. Create a new VPC called Network. Within the Network VPC, create an AWS Transit Gateway with an AWS Direct Connect connection back to the data center. Attach all the other VPCs to the Network VPC.
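A transit gateway in a dedicated Network VPC can attach each environment VPC and the Direct Connect connection; its route tables can be arranged so every VPC reaches the data center without the three environments reaching one another. A partial boto3 sketch with placeholder IDs (the Direct Connect association and route table setup are omitted):

    import boto3

    ec2 = boto3.client("ec2")

    tgw = ec2.create_transit_gateway(Description="shared connectivity hub")
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

    # Hypothetical VPC/subnet IDs for the Development, Testing, and Production VPCs.
    for vpc_id, subnet_id in [("vpc-dev0123", "subnet-dev0123"),
                              ("vpc-test0123", "subnet-test0123"),
                              ("vpc-prod0123", "subnet-prod0123")]:
        ec2.create_transit_gateway_vpc_attachment(
            TransitGatewayId=tgw_id,
            VpcId=vpc_id,
            SubnetIds=[subnet_id],
        )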
A business has retained the services of a solutions architect to develop a dependable architecture for its application. The application is comprised of a single Amazon RDS database instance and two manually deployed Amazon EC2 instances running web servers. A single Availability Zone contains all of the EC2 instances. An employee recently removed the database instance, resulting in the application being offline for 24 hours. The firm is concerned with the environment's general dependability. What should the solutions architect do to ensure the application's infrastructure is as reliable as possible? Delete one EC2 instance and enable termination protection on the other EC2 instance. Update the DB instance to be Multi-AZ, and enable deletion protection. Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances behind an Application Load Balancer, and run them in an EC2 Auto Scaling group across multiple Availability Zones. Create an additional DB instance along with an Amazon API Gateway and an AWS Lambda function. Configure the application to invoke the Lambda function through API Gateway. Have the Lambda function write the data to the two DB instances. Place the EC2 instances in an EC2 Auto Scaling group that has multiple subnets located in multiple Availability Zones. Use Spot Instances instead of On- Demand Instances. Set up Amazon CloudWatch alarms to monitor the health of the instances. Update the DB instance to be Multi-AZ, and enable deletion protection.
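Several of the options turn on Multi-AZ and deletion protection for the existing DB instance; both settings are a single modification call. A boto3 sketch with a hypothetical instance identifier:

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_instance(
        DBInstanceIdentifier="app-db",
        MultiAZ=True,               # synchronous standby in a second Availability Zone
        DeletionProtection=True,    # blocks the kind of accidental delete that caused the outage
        ApplyImmediately=True,
    )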