ARCH Prof
A business wishes to enable its marketing department to run SQL queries on customer information in order to discover market segments. Hundreds of files contain the data. Encryption of records in transit and at rest is required. While the Team Manager must be able to manage users and groups, no team member should have access to services or resources that are not necessary to run the SQL queries. Additionally, administrators must audit queries and get warnings when a query breaches the Security team's set guidelines. AWS Organizations was utilized to establish a new Team Manager account and an AWS IAM user with administrator access. Which design satisfies these criteria?. Apply a service control policy (SCP) that allows access to IAM, Amazon RDS, and AWS CloudTrail. Load customer records in Amazon RDS MySQL and train users to execute queries using the AWS CLI. Stream the query logs to Amazon CloudWatch Logs from the RDS database instance. Use a subscription filter with AWS Lambda functions to audit and alarm on queries against personal data. Apply a service control policy (SCP) that denies access to all services except IAM, Amazon Athena, Amazon S3, and AWS CloudTrail. Store customer record files in Amazon S3 and train users to execute queries using the CLI via Athena. Analyze CloudTrail events to audit and alarm on queries against personal data. Apply a service control policy (SCP) that denies access to all services except IAM, Amazon DynamoDB, and AWS CloudTrail. Store customer records in DynamoDB and train users to execute queries using the AWS CLI. Enable DynamoDB streams to track the queries that are issued and use an AWS Lambda function for real-time monitoring and alerting. Apply a service control policy (SCP) that allows access to IAM, Amazon Athena, Amazon S3, and AWS CloudTrail. Store customer records as files in Amazon S3 and train users to leverage the Amazon S3 Select feature and execute queries using the AWS CLI. 
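As an aside, the "deny access to all services except IAM, Amazon Athena, Amazon S3, and AWS CloudTrail" SCP described above is normally written with the `NotAction` pattern. A minimal sketch follows; the `Sid` and the exact action list are illustrative assumptions, not values from the question:

```python
import json

# Hypothetical sketch of a deny-by-default service control policy (SCP).
# "NotAction" inverts the match: the Deny applies to every action that is
# NOT in the list, so only IAM, Athena, S3, and CloudTrail remain usable.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllOutsideAllowedServices",  # illustrative Sid
            "Effect": "Deny",
            "NotAction": [
                "iam:*",
                "athena:*",
                "s3:*",
                "cloudtrail:*",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Because SCPs only filter permissions rather than grant them, users in the member account still need IAM policies allowing the Athena and S3 actions they use.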
Enable S3 object-level logging and analyze CloudTrail events to audit and alarm on queries against personal data. An organization wishes to use a SaaS solution provided by a third party. The SaaS application must have the ability to execute multiple API calls in order to find Amazon EC2 resources that are operating inside the enterprise's account. The company has internal security standards requiring that any external access to its environment adhere to the concept of least privilege and that procedures are in place to guarantee that the SaaS vendor's credentials cannot be utilized by another third party. Which of the following options would satisfy all of these criteria?. From the AWS Management Console, navigate to the Security Credentials page and retrieve the access and secret key for your account. Create an IAM user within the enterprise account, assign a user policy to the IAM user that allows only the actions required by the SaaS application, create a new access and secret key for the user, and provide these credentials to the SaaS provider. Create an IAM role for cross-account access that allows the SaaS provider's account to assume the role, and assign it a policy that allows only the actions required by the SaaS application. Create an IAM role for EC2 instances, assign it a policy that allows only the actions required for the SaaS application to work, and provide the role ARN to the SaaS provider to use when launching their application instances. AWS Organizations is used by a business to handle more than 1,000 AWS accounts. The corporation has established a new developer division. There are 540 member accounts for developers that must be transferred to the new developer organization. Each account is configured with the necessary information to function independently. Which actions should a solutions architect perform in combination to migrate all developer accounts to the new developer organization? (Select three.). 
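The cross-account role option in the SaaS question above is normally paired with an external ID, which is what prevents the vendor's credentials from being reused by another third party (the confused-deputy problem). A minimal sketch of such a trust policy; the vendor account ID and external ID here are placeholders, not values from the question:

```python
import json

VENDOR_ACCOUNT_ID = "111122223333"   # hypothetical vendor account
EXTERNAL_ID = "example-external-id"  # hypothetical, agreed with the vendor

# Trust policy attached to the cross-account role in the enterprise account.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{VENDOR_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
            # The external ID condition blocks any other third party from
            # assuming the role even if they learn its ARN.
            "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

The permissions policy attached to the same role would then list only the EC2 describe actions the SaaS application actually needs, satisfying least privilege.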
Call the MoveAccount operation in the Organizations API from the old organization's management account to migrate the developer accounts to the new developer organization. From the management account, remove each developer account from the old organization using the RemoveAccountFromOrganization operation in the Organizations API. From each developer account, remove the account from the old organization using the RemoveAccountFromOrganization operation in the Organizations API. Sign in to the new developer organization's management account and create a placeholder member account that acts as a target for the developer account migration. Call the InviteAccountToOrganization operation in the Organizations API from the new developer organization's management account to send invitations to the developer accounts. Have each developer sign in to their account and confirm to join the new developer organization. Which of the following formats does DynamoDB utilize solely as a transport mechanism, not as a data store format?. WDDX. XML. SGML. JSON. To use Amazon SNS and ADM to deliver push notifications to mobile devices, you need each of the following, except: Device token. Client ID. Registration ID. Client secret. You're considering migrating your Development (Dev) and Test environments to Amazon Web Services (AWS). You've chosen to host each environment using a different AWS account. You want to use Consolidated Billing to connect each account's bill to a Master AWS account. To ensure budget compliance, you'd like to develop a mechanism that allows administrators in the Master account to halt, remove, and/or terminate resources in both the Dev and Test accounts. Determine which choice will enable you to accomplish this aim. Create IAM users in the Master account with full Admin permissions. Create cross-account roles in the Dev and Test accounts that grant the Master account access to the resources in the account by inheriting permissions from the Master account. 
Create IAM users and a cross-account role in the Master account that grants full Admin permissions to the Dev and Test accounts. Create IAM users in the Master account. Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access. Link the accounts using Consolidated Billing. This will give IAM users in the Master account access to resources in the Dev and Test accounts. A business commissions a solutions architect to reduce the cost of a solution. The solution manages many client demands. Amazon API Gateway, AWS Lambda, AWS Fargate, Amazon Simple Queue Service (Amazon SQS), and Amazon EC2 are all used in the solution's multi-tier architecture. In the present configuration, queries are routed via API Gateway to Lambda, which then starts a Fargate container or pushes a message to a SQS queue. An EC2 Fleet is a collection of EC2 instances that act as workers for a SQS queue. The size of the EC2 Fleet is proportional to the number of items in the SQS queue. Which sequence of activities should the solutions architect prescribe to get the most cost savings? (Select three.). Determine the minimum number of EC2 instances that are needed during a day. Reserve this number of instances in a 3-year plan with payment all upfront. Examine the last 6 months of compute utilization across the services. Use this information to determine the needed compute for the solution. Commit to a Savings Plan for this amount. Determine the average number of EC2 instances that are needed during a day. Reserve this number of instances in a 3-year plan with payment all upfront. Remove the SQS queue from the solution and from the solution infrastructure. Change the solution so that it runs as a container instead of on EC2 instances. Configure Lambda to start up the solution in Fargate by using environment variables to give the solution the message. 
Change the Lambda function so that it posts the message directly to the EC2 instances through an Application Load Balancer. To access AWS services, a company's AWS architecture presently relies on access keys and secret access keys saved on each instance. Each instance's database credentials are hard-coded. SSH keys are kept in a secure Amazon S3 bucket, enabling command-line remote access. The organization has tasked its solutions architect with enhancing the architecture's security posture without increasing operational complexity. Which actions should the solutions architect take in combination to achieve this? (Select three.). Use Amazon EC2 instance profiles with an IAM role. Use AWS Secrets Manager to store access keys and secret access keys. Use AWS Systems Manager Parameter Store to store database credentials. Use a secure fleet of Amazon EC2 bastion hosts for remote access. Use AWS KMS to store database credentials. Use AWS Systems Manager Session Manager for remote access. On AWS, a business is operating a .NET three-tier web application. The team is presently storing and serving the website's picture and video assets on local instance storage through XL storage optimized instances. The firm has experienced data loss as a result of replication and instance failures. The Solutions Architect has been tasked with redesigning this application in order to increase its dependability while maintaining a cost-effective architecture. Which solution will satisfy these criteria?. Set up a new Amazon EFS share, move all image and video files to this share, and then attach this new drive as a mount point to all existing servers. Create an Elastic Load Balancer with Auto Scaling general purpose instances. Enable Amazon CloudFront to the Elastic Load Balancer. Enable Cost Explorer and use AWS Trusted Advisor checks to continue monitoring the environment for future savings. Implement Auto Scaling with general purpose instance types and an Elastic Load Balancer. 
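The instance-profile option in the security-posture question above replaces static access keys with short-lived role credentials the instance obtains automatically. Such a role carries the standard EC2 service-principal trust policy, sketched here (the role's permissions policy is omitted and would vary):

```python
import json

# Trust policy for an IAM role used via an EC2 instance profile. With this
# in place, applications on the instance get temporary credentials from the
# instance metadata service instead of hard-coded access keys.
ec2_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(ec2_trust_policy, indent=2))
```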
Enable an Amazon CloudFront distribution to Amazon S3 and move images and video files to Amazon S3. Reserve general purpose instances to meet base performance requirements. Use Cost Explorer and AWS Trusted Advisor checks to continue monitoring the environment for future savings. Move the entire website to Amazon S3 using the S3 website hosting feature. Remove all the web servers and have Amazon S3 communicate directly with the application servers in Amazon VPC. Use AWS Elastic Beanstalk to deploy the .NET application. Move all images and video files to Amazon EFS. Create an Amazon CloudFront distribution that points to the EFS share. Reserve the m4.4xl instances needed to meet base performance requirements. Which of the following predefined policy condition keys in AWS IAM examines how recently (in seconds) the MFA-validated security credentials used to make the request were issued?. aws:MultiFactorAuthAge. aws:MultiFactorAuthLast. aws:MFAAge. aws:MultiFactorAuthPrevious. You're creating a social networking site and contemplating how to protect it against distributed denial-of-service (DDoS) assaults. Which of the following mitigation strategies are suitable options? (Select three.). Add multiple elastic network interfaces (ENIs) to each EC2 instance to increase the network bandwidth. Use dedicated instances to ensure that each instance has the maximum performance possible. Use an Amazon CloudFront distribution for both static and dynamic content. Use an Elastic Load Balancer with auto scaling groups at the web, app and Amazon Relational Database Service (RDS) tiers. Add Amazon CloudWatch alerts to look for high Network In and CPU utilization. Create processes and capabilities to quickly add and remove rules to the instance OS firewall. A medical business is using the AWS Cloud to host an application. The application simulates the effects of developing new medicinal drugs. Two components comprise the application: setup and simulation. 
The configuration portion of the application is executed in AWS Fargate containers inside an Amazon Elastic Container Service (Amazon ECS) cluster. The simulation component is executed on massively parallelized Amazon EC2 instances. If a simulation is interrupted, it may be restarted. The setup portion of the application runs 24 hours a day with a constant load. The simulation portion runs for a few hours each night under varied load conditions. The corporation maintains simulation findings in Amazon S3, and the researchers have 30 days to utilize them. The firm must keep simulations for a minimum of ten years and be able to recover them within five hours. Which option best fits these criteria in terms of cost-effectiveness?. Purchase an EC2 Instance Savings Plan to cover the usage for the configuration part. Run the simulation part by using EC2 Spot Instances. Create an S3 Lifecycle policy to transition objects that are older than 30 days to S3 Intelligent-Tiering. Purchase an EC2 Instance Savings Plan to cover the usage for the configuration part and the simulation part. Create an S3 Lifecycle policy to transition objects that are older than 30 days to S3 Glacier. Purchase Compute Savings Plans to cover the usage for the configuration part. Run the simulation part by using EC2 Spot Instances. Create an S3 Lifecycle policy to transition objects that are older than 30 days to S3 Glacier. Purchase Compute Savings Plans to cover the usage for the configuration part. Purchase EC2 Reserved Instances for the simulation part. Create an S3 Lifecycle policy to transition objects that are older than 30 days to S3 Glacier Deep Archive. A firm that operates apps on Amazon Web Services (AWS) just signed up for a new software-as-a-service (SaaS) data provider. The vendor supplies data through a REST API that the vendor hosts on AWS. The vendor provides numerous connectivity options for the API and is collaborating with the firm to determine the optimal method of connection. 
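Returning to the simulation-storage question above: the S3 Glacier lifecycle option could look roughly like the following configuration. The bucket prefix, rule ID, and ten-year expiration shown here are illustrative assumptions layered on the question's 30-day and ten-year figures:

```python
import json

# Hypothetical S3 Lifecycle configuration: move simulation results to the
# GLACIER storage class after 30 days and expire them after ten years
# (3,650 days). Rule ID and prefix are made-up names.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "simulation-results",
            "Filter": {"Prefix": "results/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 3650},
        }
    ]
}

print(json.dumps(lifecycle_configuration, indent=2))
```

A standard retrieval from S3 Glacier typically completes within a few hours, which fits the five-hour recovery requirement; a standard retrieval from Glacier Deep Archive can take around twelve hours, which does not.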
The company's Amazon Web Services (AWS) account does not permit outbound internet connectivity from inside its AWS environment. The vendor's services are hosted on AWS in the same region as the company's apps. A solutions architect must integrate connectivity to the vendor's API in order for the API to be highly available inside the company's virtual private cloud (VPC). Which solution will satisfy these criteria?. Connect to the vendor's public API address for the data service. Connect to the vendor by way of a VPC peering connection between the vendor's VPC and the company's VPC. Connect to the vendor by way of a VPC endpoint service that uses AWS PrivateLink. Connect to a public bastion host that the vendor provides. Tunnel the API traffic. A Solutions Architect has been tasked with the responsibility of examining a company's Amazon Redshift cluster, which has fast become a vital element of its technology and supports critical business processes. The Solutions Architect's role is to strengthen the cluster's dependability and availability and to give alternatives for restoring the cluster within four hours if a problem occurs. Which of the following solution alternatives BEST meets the business requirement at the lowest possible cost?. Ensure that the Amazon Redshift cluster has been set up to make use of Auto Scaling groups with the nodes in the cluster spread across multiple Availability Zones. Ensure that the Amazon Redshift cluster creation has been templated using AWS CloudFormation so it can easily be launched in another Availability Zone and data populated from the automated Redshift backups stored in Amazon S3. Use Amazon Kinesis Data Firehose to collect the data ahead of ingestion into Amazon Redshift and create clusters using AWS CloudFormation in another region and stream the data to both clusters. Create two identical Amazon Redshift clusters in different regions (one as the primary, one as the secondary). 
Use Amazon S3 cross-region replication from the primary to secondary region, which triggers an AWS Lambda function to populate the cluster in the secondary region. Your website is providing your staff with on-demand training videos. Monthly videos in high-resolution MP4 format are uploaded. Your staff is geographically dispersed and often on the road, using company-provided tablets that need the HTTP Live Streaming (HLS) protocol to view video. Your organization lacks experience in video transcoding, and as a result, you may need to hire a consultant. How can you build the most cost-effective architecture possible while maintaining high availability and video transmission quality?. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS volumes to host videos and EBS snapshots to incrementally backup original files after a few days. CloudFront to serve HLS transcoded videos from EC2. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. EBS volumes to host videos and EBS snapshots to incrementally backup original files after a few days. CloudFront to serve HLS transcoded videos from EC2. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier. A greeting card firm recently advertised that clients could use the company's platform to send cards to their favorite celebrities. 
Since the advertising was released, the site has consistently received traffic from 10,000 unique users every second. The platform is powered by m5.xlarge Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are part of an Auto Scaling group and operate on an Amazon Linux-based custom AMI. The platform makes use of an Amazon Aurora MySQL DB cluster with highly available writer and reader endpoints. Additionally, the platform makes use of an Amazon ElastiCache for Redis cluster, which is accessible through its cluster endpoint. Each client is assigned a unique process, and the platform maintains open database connections to MySQL for the length of the customer's session. However, overall resource consumption on the platform remains low. Numerous clients are experiencing connection issues while attempting to connect to the platform. Connections to the Aurora database are failing, as seen in the logs. Amazon CloudWatch data indicate that the platform's CPU load is minimal and that connections to the platform are established successfully through the ALB. Which option will most effectively correct the errors?. Set up an Amazon CloudFront distribution. Set the ALB as the origin. Move all customer traffic to the CloudFront distribution endpoint. Use Amazon RDS Proxy. Reconfigure the database connections to use the proxy. Increase the number of reader nodes in the Aurora MySQL cluster. Increase the number of nodes in the ElastiCache for Redis cluster. A firm based in the United States of America (US) purchased a business in Europe. Both businesses rely on the AWS Cloud. The firm in the United States has developed a new application using a microservices architecture. The US-based corporation hosts the application across five virtual private clouds (VPCs) in the us-east-2 Region. The application must have access to resources contained inside a single VPC in the eu-west-1 Region. The application, however, must be unable to access any other VPCs. 
There are no overlapping CIDR ranges between the VPCs in any Region. In AWS Organizations, all accounts are already integrated into a single organization. Which approach will be the most cost-effective in meeting these requirements?. Create one transit gateway in eu-west-1. Attach the VPCs in us-east-2 and the VPC in eu-west-1 to the transit gateway. Create the necessary route entries in each VPC so that the traffic is routed through the transit gateway. Create one transit gateway in each Region. Attach the involved subnets to the regional transit gateway. Create the necessary route entries in the associated route tables for each subnet so that the traffic is routed through the regional transit gateway. Peer the two transit gateways. Create a full mesh VPC peering connection configuration between all the VPCs. Create the necessary route entries in each VPC so that the traffic is routed through the VPC peering connection. Create one VPC peering connection for each VPC in us-east-2 to the VPC in eu-west-1. Create the necessary route entries in each VPC so that the traffic is routed through the VPC peering connection. Multiple business divisions comprise a huge global financial services corporation. The organization wants to encourage developers to experiment with new services, but there are many compliance requirements for various workloads. The Security team is worried about the on-premises and AWS access strategies. They want to impose control on AWS services used by business teams to manage regulatory workloads, such as Payment Card Industry (PCI) compliance. Which option will allay the Security team's fears while enabling Developers to experiment with new services?. Implement a strong identity and access management model that includes users, groups, and roles in various AWS accounts. Ensure that centralized AWS CloudTrail logging is enabled to detect anomalies. Build automation with AWS Lambda to tear down unapproved AWS resources for governance. 
Build a multi-account strategy based on business units, environments, and specific regulatory requirements. Implement SAML-based federation across all AWS accounts with an on-premises identity store. Use AWS Organizations and build organizational units (OUs) structure based on regulations and service governance. Implement service control policies across OUs. Implement a multi-account strategy based on business units, environments, and specific regulatory requirements. Ensure that only PCI-compliant services are approved for use in the accounts. Build IAM policies to give access to only PCI-compliant services for governance. Build one AWS account for the company for strong security controls. Ensure that all the service limits are raised to meet company scalability requirements. Implement SAML federation with an on-premises identity store, and ensure that only approved services are used in the account. A firm is using AWS CodePipeline to automate the continuous integration and delivery of an application to an Amazon EC2 Auto Scaling group. AWS CloudFormation templates specify all AWS resources. The application artifacts are saved in an Amazon S3 bucket and deployed utilizing instance user data scripts to the Auto Scaling group. Due to the increased complexity of the application, recent resource modifications in the CloudFormation templates resulted in unintended downtime. How could a solutions architect optimize the CI/CD pipeline to minimize the risk of downtime due to template changes?. Adapt the deployment scripts to detect and report CloudFormation error conditions when performing deployments. Write test plans for a testing team to execute in a non-production environment before approving the change for production. Implement automated testing using AWS CodeBuild in a test environment. Use CloudFormation change sets to evaluate changes before deployment. 
Use AWS CodeDeploy to leverage blue/green deployment patterns to allow evaluations and the ability to revert changes, if needed. Use plugins for the integrated development environment (IDE) to check the templates for errors, and use the AWS CLI to validate that the templates are correct. Adapt the deployment code to check for error conditions and generate notifications on errors. Deploy to a test environment and execute a manual test plan before approving the change for production. Use AWS CodeDeploy and a blue/green deployment pattern with CloudFormation to replace the user data deployment scripts. Have the operators log in to running instances and go through a manual test plan to verify the application is running as expected. On AWS, a business runs a serverless multi-tenant content management system. A web-based front end communicates with an Amazon API Gateway API using a bespoke AWS Lambda authorizer. The authorizer verifies a user's identity against its tenant ID and stores the information in a JSON Web Token (JWT). Following authentication, each API connection made via API Gateway is sent to a Lambda function that processes requests by interacting with a single Amazon DynamoDB table. To meet security requirements, the corporation requires a greater degree of separation between tenants. Within the first year, the firm will have hundreds of consumers. Which method satisfies these criteria with the LEAST amount of operational overhead?. Create a DynamoDB table for each tenant by using the tenant ID in the table name. Create a service that uses the JWT token to retrieve the appropriate Lambda execution role that is tenant-specific. Attach IAM policies to the execution role to allow access only to the DynamoDB table for the tenant. Add tenant ID information to the partition key of the DynamoDB table. Create a service that uses the JWT token to retrieve the appropriate Lambda execution role that is tenant-specific. 
Attach IAM policies to the execution role to allow access to items in the table only when the key matches the tenant ID. Create a separate AWS account for each tenant of the application. Use dedicated infrastructure for each tenant. Ensure that no cross-account network connectivity exists. Add tenant ID as a sort key in every DynamoDB table. Add logic to each Lambda function to use the tenant ID that comes from the JWT token as the sort key in every operation on the DynamoDB table. A retailer hosts a mission-critical online service on an Amazon Elastic Container Service (Amazon ECS) cluster that is comprised of Amazon EC2 instances. The web service accepts POST requests from end users and publishes data to a MySQL database running on its own EC2 instance. The business must take precautions to avoid data loss. Currently, the process of deploying code requires manual changes to the ECS service. End users reported sporadic 502 Bad Gateway failures in response to genuine web requests during a recent deployment. The organization wants to develop a dependable solution to avoid a recurrence of this situation. Additionally, the organization wishes to automate code deployments. The solution should be highly available and cost-effective. Which combination of actions will satisfy these criteria? (Select three.). Run the web service on an ECS cluster that has a Fargate launch type. Use AWS CodePipeline and AWS CodeDeploy to perform a blue/green deployment with validation testing to update the ECS service. Migrate the MySQL database to run on an Amazon RDS for MySQL Multi-AZ DB instance that uses Provisioned IOPS SSD (io2) storage. Configure an Amazon Simple Queue Service (Amazon SQS) queue as an event source to receive the POST requests from the web service. Configure an AWS Lambda function to poll the queue. Write the data to the database. Run the web service on an ECS cluster that has a Fargate launch type. 
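The partition-key option in the multi-tenant DynamoDB question above is typically enforced with the `dynamodb:LeadingKeys` condition key, which restricts a caller to items whose partition key matches its own tenant ID. A sketch follows; the table ARN, action list, and the principal tag carrying the tenant ID are illustrative assumptions:

```python
import json

TABLE_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/Content"  # hypothetical

# IAM policy for a tenant-specific Lambda execution role. The LeadingKeys
# condition limits every call to items whose partition key equals the
# tenant ID carried on the principal (here, via a hypothetical session tag).
tenant_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": TABLE_ARN,
            "Condition": {
                "ForAllValues:StringEquals": {
                    "dynamodb:LeadingKeys": ["${aws:PrincipalTag/tenant_id}"]
                }
            },
        }
    ],
}

print(json.dumps(tenant_policy, indent=2))
```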
Use AWS CodePipeline and AWS CodeDeploy to perform a canary deployment to update the ECS service. Configure an Amazon Simple Queue Service (Amazon SQS) queue. Install the SQS agent on the containers that run in the ECS cluster to poll the queue. Write the data to the database. Migrate the MySQL database to run on an Amazon RDS for MySQL Multi-AZ DB instance that uses General Purpose SSD (gp3) storage. A firm needs to guarantee that each of its business units' workloads on AWS have total autonomy and a small blast radius. The security team must be able to manage access to the account's resources and services in order to prevent certain services from being utilized by business units. How can a Solutions Architect ensure that all criteria for isolation are met?. Create individual accounts for each business unit and add the account to an OU in AWS Organizations. Modify the OU to ensure that the particular services are blocked. Federate each account with an IdP, and create separate roles for the business units and the Security team. Create individual accounts for each business unit. Federate each account with an IdP and create separate roles and policies for business units and the Security team. Create one shared account for the entire company. Create separate VPCs for each business unit. Create individual IAM policies and resource tags for each business unit. Federate each account with an IdP, and create separate roles for the business units and the Security team. Create one shared account for the entire company. Create individual IAM policies and resource tags for each business unit. Federate the account with an IdP, and create separate roles for the business units and the Security team. A Solutions Architect is tasked with the responsibility of developing a highly available infrastructure for a successful worldwide video game running on a mobile phone platform. The application is deployed on Amazon EC2 instances that are routed via an Application Load Balancer. 
The instances are distributed across several Availability Zones in an Auto Scaling group. An Amazon RDS MySQL Multi-AZ instance serves as the database layer. In both us-east-1 and eu-central-1, the whole application stack is deployed. Using a latency-based routing strategy, Amazon Route 53 routes traffic to the two installations. In Route 53, a weighted routing policy is implemented as a failover to another region in the event that a region's installation becomes unresponsive. During disaster recovery testing, after access to the Amazon RDS MySQL instance in eu-central-1 was blocked from all application instances operating in that region, Route 53 did not fail over all traffic to us-east-1 automatically. Which adjustments, in light of this circumstance, would enable the infrastructure to fail over to us-east-1? (Select two.). Specify a weight of 100 for the record pointing to the primary Application Load Balancer in us-east-1 and a weight of 60 for the record pointing to the primary Application Load Balancer in eu-central-1. Specify a weight of 100 for the record pointing to the primary Application Load Balancer in us-east-1 and a weight of 0 for the record pointing to the primary Application Load Balancer in eu-central-1. Set the value of Evaluate Target Health to Yes on the latency alias resources for both eu-central-1 and us-east-1. Write a URL in the application that performs a health check on the database layer. Add it as a health check within the weighted routing policy in both regions. Disable any existing health checks for the resources in the policies and set a weight of 0 for the records pointing to primary in both eu-central-1 and us-east-1, and set a weight of 100 for the primary Application Load Balancer only in the region that has healthy resources. A business is transferring a portion of its application APIs from Amazon EC2 instances to a serverless environment. For the new application, the business has used Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. 
The Lambda function's principal job is to get data from a third-party Software as a Service (SaaS) provider. The Lambda function is tied to the same virtual private cloud (VPC) as the original EC2 instances for consistency. Test users report being unable to access the newly relocated feature, and the organization is getting API Gateway 5xx issues. The SaaS provider's monitoring records indicate that the queries never made it to its systems. The organization finds that the Lambda services are generating Amazon CloudWatch Logs. When the same functionality is tested on Amazon EC2 instances, it works correctly. What is the source of the problem?. Lambda is in a subnet that does not have a NAT gateway attached to it to connect to the SaaS provider. The end-user application is misconfigured to continue using the endpoint backed by EC2 instances. The throttle limit set on API Gateway is too low and the requests are not making their way through. API Gateway does not have the necessary permissions to invoke Lambda. A business has over 100 AWS accounts, each with its own VPC, that need outbound HTTPS communication to the internet. Currently, each VPC has one NAT gateway per Availability Zone (AZ). To save expenses and get visibility into outward traffic, management has requested a new internet access architecture. Which solution will satisfy existing requirements and expand when more accounts are provided, all while lowering costs?. Create a transit VPC across two AZs using a third-party routing appliance. Create a VPN connection to each VPC. Default route internet traffic to the transit VPC. Create multiple hosted-private AWS Direct Connect VIFs, one per account, each with a Direct Connect gateway. Default route internet traffic back to an on- premises router to route to the internet. Create a central VPC for outbound internet traffic. Use VPC peering to default route to a set of redundant NAT gateway in the central VPC. Create a proxy fleet in a central VPC account. 
Create an AWS PrivateLink endpoint service in the central VPC. Use PrivateLink interface for internet connectivity through the proxy fleet. A business has many AWS accounts. A development team is now working on automating cloud governance and remediation procedures. The automation framework makes use of AWS Lambda services, which are managed centrally. A solutions architect must develop a policy allowing Lambda functions to execute in each of the company's AWS accounts with the least privilege possible. Which combination of actions will satisfy these criteria? (Select two.). In the centralized account, create an IAM role that has the Lambda service as a trusted entity. Add an inline policy to assume the roles of the other AWS accounts. In the other AWS accounts, create an IAM role that has minimal permissions. Add the centralized account's Lambda IAM role as a trusted entity. In the centralized account, create an IAM role that has roles of the other accounts as trusted entities. Provide minimal permissions. In the other AWS accounts, create an IAM role that has permissions to assume the role of the centralized account. Add the Lambda service as a trusted entity. In the other AWS accounts, create an IAM role that has minimal permissions. Add the Lambda service as a trusted entity. A business has adopted an event-driven architecture for its ordering system. The system ceased processing orders during first testing. Further log examination indicated that a single order message in an Amazon Simple Queue Service (Amazon SQS) standard queue was triggering a backend problem and preventing further order messages from being processed. The queue's visibility timeout is 30 seconds, while the backend processing timeout is 10 seconds. A solutions architect must assess erroneous order messages and guarantee that succeeding messages are processed by the system. Which approach should the solutions architect employ in order to satisfy these requirements?. 
Increase the backend processing timeout to 30 seconds to match the visibility timeout. Reduce the visibility timeout of the queue to automatically remove the faulty message. Configure a new SQS FIFO queue as a dead-letter queue to isolate the faulty messages. Configure a new SQS standard queue as a dead-letter queue to isolate the faulty messages. A company is implementing a multi-site solution in which the application operates on-premises as well as on AWS in order to meet the target of the shortest possible recovery time (RTO). Which of the following configurations does not match the criteria of the scenario involving a multi-site solution?. Configure data replication based on RTO. Keep an application running on premises as well as in AWS with full capacity. Set up a single DB instance which will be accessed by both sites. Set up a weighted DNS service like Route 53 to route traffic across sites. A business is transferring apps from its on-premises infrastructure to the AWS Cloud. These apps serve as the foundation for the company's internal web forms. These web forms gather data on certain occurrences on a quarterly basis. The web forms use simple SQL statements to save the data to a local relational database. Each event generates data, and the on-premises servers remain idle for the majority of the time. The company's goal is to reduce the quantity of idle infrastructure supporting the web forms. Which solution will satisfy these criteria?. Use Amazon EC2 Image Builder to create AMIs for the legacy servers. Use the AMIs to provision EC2 instances to recreate the applications in the AWS Cloud. Place an Application Load Balancer (ALB) in front of the EC2 instances. Use Amazon Route 53 to point the DNS names of the web forms to the ALB. Create one Amazon DynamoDB table to store data for all the data input. Use the application form name as the table key to distinguish data items.
Create an Amazon Kinesis data stream to receive the data input and store the input in DynamoDB. Use Amazon Route 53 to point the DNS names of the web forms to the Kinesis data stream's endpoint. Create Docker images for each server of the legacy web form applications. Create an Amazon Elastic Container Service (Amazon ECS) cluster on AWS Fargate. Place an Application Load Balancer in front of the ECS cluster. Use Fargate task storage to store the web form data. Provision an Amazon Aurora Serverless cluster. Build multiple schemas for each web form's data storage. Use Amazon API Gateway and an AWS Lambda function to recreate the data input forms. Use Amazon Route 53 to point the DNS names of the web forms to their corresponding API Gateway endpoint. One of your AWS Data Pipeline activities has failed and reached a hard failure state after three retries. You want to attempt it again. Can the number of automated retries be increased to more than three?. Yes, you can increase the number of automatic retries to 6. Yes, you can increase the number of automatic retries to an indefinite number. No, you cannot increase the number of automatic retries. Yes, you can increase the number of automatic retries to 10. A business is arranging connectivity to a multi-account AWS environment in order to handle application workloads serving a single geographic region's users. The workloads are dependent on an on-premises legacy system that is highly available and distributed over two sites. Connectivity to the legacy system is important for the AWS workloads, and a minimum of 5 Gbps of bandwidth is needed. All AWS application workloads must be connected to one another. Which solution will satisfy these criteria?. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from a DX partner for each on-premises location. Create private virtual interfaces on each connection for each AWS account VPC.
Associate the private virtual interface with a virtual private gateway attached to each VPC. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from two DX partners for each on-premises location. Create and attach a virtual private gateway for each AWS account VPC. Create a DX gateway in a central network account and associate it with the virtual private gateways. Create a public virtual interface on each DX connection and associate the interface with the DX gateway. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from two DX partners for each on-premises location. Create a transit gateway and a DX gateway in a central network account. Create a transit virtual interface for each DX interface and associate them with the DX gateway. Create a gateway association between the DX gateway and the transit gateway. Configure multiple AWS Direct Connect (DX) 10 Gbps dedicated connections from a DX partner for each on-premises location. Create and attach a virtual private gateway for each AWS account VPC. Create a transit gateway in a central network account and associate it with the virtual private gateways. Create a transit virtual interface on each DX connection and attach the interface to the transit gateway. A Solutions Architect is tasked with migrating an existing on-premises web application that contains 70 TB of static files and is used to support a public open-data project. As part of the migration process, the Architect wants to upgrade to the newest version of the host operating system. Which method of migration is the FASTEST and MOST cost-effective?. Run a physical-to-virtual conversion on the application server. Transfer the server image over the internet, and transfer the static data to Amazon S3. Run a physical-to-virtual conversion on the application server. Transfer the server image over AWS Direct Connect, and transfer the static data to Amazon S3.
Re-platform the server to Amazon EC2, and use AWS Snowball to transfer the static data to Amazon S3. Re-platform the server by using the AWS Server Migration Service to move the code and data to a new Amazon EC2 instance. A firm intends to migrate regulated and security-sensitive operations to AWS. The Security team is establishing a framework to ensure that AWS best practices and industry-recognized compliance requirements are being followed. For teams, the AWS Management Console is the primary way of resource provisioning. Which tactics should a Solutions Architect use to ensure that business needs are met and that the configurations of AWS resources are regularly assessed, audited, and monitored? (Select two.). Use AWS Config rules to periodically audit changes to AWS resources and monitor the compliance of the configuration. Develop AWS Config custom rules using AWS Lambda to establish a test-driven development approach, and further automate the evaluation of configuration changes against the required controls. Use Amazon CloudWatch Logs agent to collect all the AWS SDK logs. Search the log data using a pre-defined set of filter patterns that matches mutating API calls. Send notifications using Amazon CloudWatch alarms when unintended changes are performed. Archive log data by using a batch export to Amazon S3 and then Amazon Glacier for a long-term retention and auditability. Use AWS CloudTrail events to assess management activities of all AWS accounts. Ensure that CloudTrail is enabled in all accounts and available AWS services. Enable trails, encrypt CloudTrail event log files with an AWS KMS key, and monitor recorded activities with CloudWatch Logs. Use the Amazon CloudWatch Events near-real-time capabilities to monitor system events patterns, and trigger AWS Lambda functions to automatically revert non-authorized changes in AWS resources. Also, target Amazon SNS topics to enable notifications and improve the response time of incident responses. 
Use CloudTrail integration with Amazon SNS to automatically notify unauthorized API activities. Ensure that CloudTrail is enabled in all accounts and available AWS services. Evaluate the usage of Lambda functions to automatically revert non-authorized changes in AWS resources. A business is building a new service that will be accessible through TCP on a fixed port. A solutions architect must guarantee that the service is highly available, redundant across Availability Zones, and reachable through the publicly accessible DNS name my.service.com. The service must use fixed address assignments in order for other businesses to add the addresses to their allow list. Which solution will fulfill these criteria if resources are distributed across several Availability Zones within a single Region?. Create Amazon EC2 instances with an Elastic IP address for each instance. Create a Network Load Balancer (NLB) and expose the static TCP port. Register EC2 instances with the NLB. Create a new name server record set named my.service.com, and assign the Elastic IP addresses of the EC2 instances to the record set. Provide the Elastic IP addresses of the EC2 instances to the other companies to add to their allow lists. Create an Amazon ECS cluster and a service definition for the application. Create and assign public IP addresses for the ECS cluster. Create a Network Load Balancer (NLB) and expose the TCP port. Create a target group and assign the ECS cluster name to the NLB. Create a new A record set named my.service.com, and assign the public IP addresses of the ECS cluster to the record set. Provide the public IP addresses of the ECS cluster to the other companies to add to their allow lists. Create Amazon EC2 instances for the service. Create one Elastic IP address for each Availability Zone. Create a Network Load Balancer (NLB) and expose the assigned TCP port. Assign the Elastic IP addresses to the NLB for each Availability Zone. 
Create a target group and register the EC2 instances with the NLB. Create a new A (alias) record set named my.service.com, and assign the NLB DNS name to the record set. Create an Amazon ECS cluster and a service definition for the application. Create and assign a public IP address for each host in the cluster. Create an Application Load Balancer (ALB) and expose the static TCP port. Create a target group and assign the ECS service definition name to the ALB. Create a new CNAME record set and associate the public IP addresses to the record set. Provide the Elastic IP addresses of the Amazon EC2 instances to the other companies to add to their allow lists. A business must establish a centralized logging infrastructure for all of its Amazon Web Services accounts. The architecture should provide near-real-time data analysis across all AWS CloudTrail and VPC Flow logs. The organization intends to analyze logs in the logging account using Amazon Elasticsearch Service (Amazon ES). Which method should a solutions architect use in order to satisfy these requirements?. Configure CloudTrail and VPC Flow Logs in each AWS account to send data to a centralized Amazon S3 bucket in the logging account. Create an AWS Lambda function to load data from the S3 bucket to Amazon ES in the logging account. Configure CloudTrail and VPC Flow Logs to send data to a log group in Amazon CloudWatch Logs in each AWS account. Configure a CloudWatch subscription filter in each AWS account to send data to Amazon Kinesis Data Firehose in the logging account. Load data from Kinesis Data Firehose into Amazon ES in the logging account. Configure CloudTrail and VPC Flow Logs to send data to a separate Amazon S3 bucket in each AWS account. Create an AWS Lambda function triggered by S3 events to copy the data to a centralized logging bucket. Create another Lambda function to load data from the S3 bucket to Amazon ES in the logging account.
Configure CloudTrail and VPC Flow Logs to send data to a log group in Amazon CloudWatch Logs in each AWS account. Create AWS Lambda functions in each AWS account to subscribe to the log groups and stream the data to an Amazon S3 bucket in the logging account. Create another Lambda function to load data from the S3 bucket to Amazon ES in the logging account. A solutions architect must assess a business's Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to determine how effectively the business is using resources. The organization uses numerous big, high-memory Amazon EC2 instances to host database clusters in active/passive setups. The organization has not detected a pattern in how these EC2 instances are used by the apps that access the databases. The solutions architect must conduct an analysis of the environment and take appropriate action depending on the results. Which option best fits these criteria in terms of cost-effectiveness?. Create a dashboard by using AWS Systems Manager OpsCenter. Configure visualizations for Amazon CloudWatch metrics that are associated with the EC2 instances and their EBS volumes. Review the dashboard periodically, and identify usage patterns. Rightsize the EC2 instances based on the peaks in the metrics. Turn on Amazon CloudWatch detailed monitoring for the EC2 instances and their EBS volumes. Create and review a dashboard that is based on the metrics. Identify usage patterns. Rightsize the EC2 instances based on the peaks in the metrics. Install the Amazon CloudWatch agent on each of the EC2 instances. Turn on AWS Compute Optimizer, and let it run for at least 12 hours. Review the recommendations from Compute Optimizer, and rightsize the EC2 instances as directed. Sign up for the AWS Enterprise Support plan. Turn on AWS Trusted Advisor. Wait 12 hours. Review the recommendations from Trusted Advisor, and rightsize the EC2 instances as directed.
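The Compute Optimizer option above depends on memory metrics, which only become available after the CloudWatch agent is installed on the instances. As an illustrative sketch only (the thresholds and the `rightsizing_finding` function below are invented for demonstration, not Compute Optimizer's actual algorithm), the rightsizing decision can be thought of as comparing peak utilization over the lookback window against simple bounds:

```python
# Hypothetical, simplified rightsizing classifier in the spirit of AWS Compute
# Optimizer. Thresholds and instance data are invented for demonstration; real
# findings come from the Compute Optimizer service itself.

def rightsizing_finding(peak_cpu_pct, peak_mem_pct, low_threshold=40.0):
    """Classify an instance from its peak CPU/memory utilization (percent)."""
    if peak_cpu_pct > 90.0 or peak_mem_pct > 90.0:
        return "UNDER_PROVISIONED"
    if peak_cpu_pct < low_threshold and peak_mem_pct < low_threshold:
        return "OVER_PROVISIONED"
    return "OPTIMIZED"

# An active/passive database pair: the passive node shows low peaks.
active = rightsizing_finding(peak_cpu_pct=72.0, peak_mem_pct=85.0)
passive = rightsizing_finding(peak_cpu_pct=8.0, peak_mem_pct=30.0)
print(active, passive)  # OPTIMIZED OVER_PROVISIONED
```

In the active/passive scenario described above, the passive node's low peaks would surface as an over-provisioning finding, which is the pattern this analysis is meant to catch.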
On-premises, a business runs a high-volume media-sharing program. It presently stores over 400 terabytes of data, including millions of video clips. The organization is transferring this application to AWS in order to increase the application's stability and save expenses. The Solutions Architecture team intends to store the films in an Amazon S3 bucket and distribute them to customers using Amazon CloudFront. The organization needs to transition this application to AWS within ten days with minimal downtime. Currently, the firm has a 1 Gbps connection to the Internet, with 30% of available capacity. Which of the following options would allow the organization to shift the workload to AWS while remaining compliant with all requirements?. Use a multi-part upload in Amazon S3 client to parallel-upload the data to the Amazon S3 bucket over the Internet. Use the throttling feature to ensure that the Amazon S3 client does not use more than 30 percent of available Internet capacity. Request an AWS Snowmobile with 1 PB capacity to be delivered to the data center. Load the data into Snowmobile and send it back to have AWS download that data to the Amazon S3 bucket. Sync the new data that was generated while migration was in flight. Use an Amazon S3 client to transfer data from the data center to the Amazon S3 bucket over the Internet. Use the throttling feature to ensure the Amazon S3 client does not use more than 30 percent of available Internet capacity. Request multiple AWS Snowball devices to be delivered to the data center. Load the data concurrently into these devices and send it back. Have AWS download that data to the Amazon S3 bucket. Sync the new data that was generated while migration was in flight. On AWS, a business is developing a new highly accessible web application. The application needs constant and dependable communication between its AWS application servers and a backend REST API housed on-premises. 
The backend connection between AWS and on-premises will be handled over a private virtual interface using an AWS Direct Connect connection. Amazon Route 53 will be utilized to handle the application's private DNS records for resolving the IP address of the backend REST API. Which architecture would be most likely to establish a resilient connection to the backend API?. Implement at least two backend endpoints for the backend REST API, and use Route 53 health checks to monitor the availability of each backend endpoint and perform DNS-level failover. Install a second Direct Connect connection from a different network carrier and attach it to the same virtual private gateway as the first Direct Connect connection. Install a second cross connect for the same Direct Connect connection from the same network carrier, and join both connections to the same link aggregation group (LAG) on the same private virtual interface. Create an IPSec VPN connection routed over the public internet from the on-premises data center to AWS and attach it to the same virtual private gateway as the Direct Connect connection. A business is distributing both static and dynamic content from a web application operating behind an Application Load Balancer through an Amazon CloudFront distribution. For dynamic content, the web application needs user authorization and session monitoring. The CloudFront distribution is set with a single cache behavior that forwards the HTTP whitelist headers Authorization, Host, and User-Agent, as well as a session cookie, to the origin. All other cache behavior parameters are left alone. A valid ACM certificate is deployed to the CloudFront distribution through the distribution settings, along with a corresponding CNAME. Additionally, the ACM certificate is applied to the Application Load Balancer's HTTPS listener. CloudFront's origin protocol policy is configured to use exclusively HTTPS.
According to the cache statistics report, this distribution has an extremely high miss rate. What can the Solutions Architect do to increase this distribution's cache hit rate without jeopardizing the SSL/TLS handshake between CloudFront and the Application Load Balancer?. Create two cache behaviors for static and dynamic content. Remove the User-Agent and Host HTTP headers from the whitelist headers section on both of the cache behaviors. Remove the session cookie from the whitelist cookies section and the Authorization HTTP header from the whitelist headers section for cache behavior configured for static content. Remove the User-Agent and Authorization HTTP headers from the whitelist headers section of the cache behavior. Then update the cache behavior to use presigned cookies for authorization. Remove the Host HTTP header from the whitelist headers section and remove the session cookie from the whitelist cookies section for the default cache behavior. Enable automatic object compression and use Lambda@Edge viewer request events for user authorization. Create two cache behaviors for static and dynamic content. Remove the User-Agent HTTP header from the whitelist headers section on both of the cache behaviors. Remove the session cookie from the whitelist cookies section and the Authorization HTTP header from the whitelist headers section for cache behavior configured for static content. An online retailer runs its stateful web application and MySQL database on a single server in an on-premises data center. The corporation wishes to expand its consumer base via the use of additional marketing campaigns and promotions. The firm intends to transition its application and database to AWS in preparation, in order to boost the stability of its architecture. Which option should be the most dependable?. Migrate the database to an Amazon RDS MySQL Multi-AZ DB instance. 
Deploy the application in an Auto Scaling group on Amazon EC2 instances behind an Application Load Balancer. Store sessions in Amazon Neptune. Migrate the database to Amazon Aurora MySQL. Deploy the application in an Auto Scaling group on Amazon EC2 instances behind an Application Load Balancer. Store sessions in an Amazon ElastiCache for Redis replication group. Migrate the database to Amazon DocumentDB (with MongoDB compatibility). Deploy the application in an Auto Scaling group on Amazon EC2 instances behind a Network Load Balancer. Store sessions in Amazon Kinesis Data Firehose. Migrate the database to an Amazon RDS MariaDB Multi-AZ DB instance. Deploy the application in an Auto Scaling group on Amazon EC2 instances behind an Application Load Balancer. Store sessions in Amazon ElastiCache for Memcached. A business operates an application that is spread over many Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. All application access attempts must be made available to the security team for examination. The IP address of the client, the type of connection, and the user agent must all be supplied. Which solution will satisfy these criteria?. Enable EC2 detailed monitoring, and include network logs. Send all logs through Amazon Kinesis Data Firehose to an Amazon Elasticsearch Service (Amazon ES) cluster that the security team uses for analysis. Enable VPC Flow Logs for all EC2 instance network interfaces. Publish VPC Flow Logs to an Amazon S3 bucket. Have the security team use Amazon Athena to query and analyze the logs. Enable access logs for the Application Load Balancer, and publish the logs to an Amazon S3 bucket. Have the security team use Amazon Athena to query and analyze the logs. Enable Traffic Mirroring and specify all EC2 instance network interfaces as the source.
Send all traffic information through Amazon Kinesis Data Firehose to an Amazon Elasticsearch Service (Amazon ES) cluster that the security team uses for analysis. You are responsible for a web application that utilizes an Elastic Load Balancing (ELB) load balancer in front of an Amazon Elastic Compute Cloud (EC2) Auto Scaling group of instances. A new Amazon Machine Image (AMI) was built for a recent deployment of a new version of the application, and the Auto Scaling group was modified with a new launch configuration that references the new AMI. During the rollout, users reported that the website was responding incorrectly. All instances were found to be in good health by the ELB. What should you do to ensure that future deployments are error-free? (Select two.). Add an Elastic Load Balancing health check to the Auto Scaling group. Set a short period for the health checks to operate as soon as possible in order to prevent premature registration of the instance to the load balancer. Enable EC2 instance CloudWatch alerts to change the launch configuration's AMI to the previous one. Gradually terminate instances that are using the new AMI. Set the Elastic Load Balancing health check configuration to target a part of the application that fully tests application health and returns an error if the tests fail. Create a new launch configuration that refers to the new AMI, and associate it with the group. Double the size of the group, wait for the new instances to become healthy, and reduce back to the original size. If new instances do not become healthy, associate the previous launch configuration. Increase the Elastic Load Balancing Unhealthy Threshold to a higher value to prevent an unhealthy instance from going into service behind the load balancer. A business is expanding its list of permitted external vendors to include a vendor that supports only IPv6 connectivity. The company's backend systems are located inside an Amazon VPC's private subnet.
The organization utilizes a NAT gateway to facilitate communication between these systems and external suppliers through IPv4. According to company policy, systems that connect with external vendors must be protected by a security group that restricts access to only authorized external suppliers. The virtual private cloud (VPC) makes use of the default network access control list (ACL). Each backend system is successfully assigned IPv6 addresses by the Systems Operator. Additionally, the Systems Operator modifies the outgoing security group to include the external vendor's IPv6 CIDR (destination). The computers included inside the VPC are capable of effectively pinging one another via IPv6. These systems, however, are incapable of communicating with the external vendor. What modifications are necessary to facilitate communication with the external vendor?. Create an IPv6 NAT instance. Add a route for destination 0.0.0.0/0 pointing to the NAT instance. Enable IPv6 on the NAT gateway. Add a route for destination ::/0 pointing to the NAT gateway. Enable IPv6 on the internet gateway. Add a route for destination 0.0.0.0/0 pointing to the IGW. Create an egress-only internet gateway. Add a route for destination ::/0 pointing to the gateway. On AWS, a business is developing an application. For analysis, the application transmits log files to an Amazon Elasticsearch Service (Amazon ES) cluster. Each piece of data must be contained inside a VPC. A number of the company's developers work remotely. Other developers are based at three distinct business locations. The developers must connect to Amazon ES directly from their local development computers in order to study and display logs. Which solution will satisfy these criteria?. Configure and set up an AWS Client VPN endpoint. Associate the Client VPN endpoint with a subnet in the VPC. Configure a Client VPN self-service portal. Instruct the developers to connect by using the client for Client VPN. 
Create a transit gateway, and connect it to the VPC. Create an AWS Site-to-Site VPN. Create an attachment to the transit gateway. Instruct the developers to connect by using an OpenVPN client. Create a transit gateway, and connect it to the VPC. Order an AWS Direct Connect connection. Set up a public VIF on the Direct Connect connection. Associate the public VIF with the transit gateway. Instruct the developers to connect to the Direct Connect connection. Create and configure a bastion host in a public subnet of the VPC. Configure the bastion host security group to allow SSH access from the company CIDR ranges. Instruct the developers to connect by using SSH. On AWS, a business hosts a software-as-a-service (SaaS) application. The application is composed of AWS Lambda functions and a MySQL Multi-AZ database on Amazon RDS. During market events, the application's workload increases significantly. Users experience slower response times during peak hours due to the high volume of database connections. The organization needs to enhance the database's scalability and availability. Which solution satisfies these criteria?. Create an Amazon CloudWatch alarm action that triggers a Lambda function to add an Amazon RDS for MySQL read replica when resource utilization hits a threshold. Migrate the database to Amazon Aurora, and add a read replica. Add a database connection pool outside of the Lambda handler function. Migrate the database to Amazon Aurora, and add a read replica. Use Amazon Route 53 weighted records. Migrate the database to Amazon Aurora, and add an Aurora Replica. Configure Amazon RDS Proxy to manage database connection pools. A collection of Amazon EC2 instances has been set up as a high-performance computing (HPC) cluster. The instances are operating in a placement group and are capable of communicating at up to 20 Gbps network rates.
The cluster must communicate with an EC2 instance that is not a member of the placement group. The control instance is set up with a public IP address and uses the same instance type and AMI as the other instances. How can the Solutions Architect optimize network performance between the control instance and the placement group instances?. Terminate the control instance and relaunch it in the placement group. Ensure that the instances are communicating using their private IP addresses. Ensure that the control instance is using an Elastic Network Adapter. Move the control instance inside the placement group. A business is building a web application that will be hosted on Amazon EC2 instances in an Auto Scaling group behind a public-facing Application Load Balancer (ALB). The application is only accessible to users from a specified country. The firm needs the ability to track prohibited access requests. The solution should be as low-maintenance as feasible. Which solution satisfies these criteria?. Create an IPSet containing a list of IP ranges that belong to the specified country. Create an AWS WAF web ACL. Configure a rule to block any requests that do not originate from an IP range in the IPSet. Associate the rule with the web ACL. Associate the web ACL with the ALB. Create an AWS WAF web ACL. Configure a rule to block any requests that do not originate from the specified country. Associate the rule with the web ACL. Associate the web ACL with the ALB. Configure AWS Shield to block any requests that do not originate from the specified country. Associate AWS Shield with the ALB. Create a security group rule that allows ports 80 and 443 from IP ranges that belong to the specified country. Associate the security group with the ALB. An online store must regularly process vast product catalogs in batches.
These are delivered to Amazon Mechanical Turk users for processing, but the company has requested its Solutions Architect to develop a workflow orchestration system that enables it to manage many concurrent Mechanical Turk operations, manage the outcome evaluation process, and reprocess failures. Which of the following choices provides the retailer with the LEAST amount of implementation effort for interrogating the status of each workflow?. Trigger Amazon CloudWatch alarms based upon message visibility in multiple Amazon SQS queues (one queue per workflow stage) and send messages via Amazon SNS to trigger AWS Lambda functions to process the next step. Use Amazon ES and Kibana to visualize Lambda processing logs to see the workflow states. Hold workflow information in an Amazon RDS instance with AWS Lambda functions polling RDS for status changes. Worker Lambda functions then process the next workflow steps. Amazon QuickSight will visualize workflow states directly out of Amazon RDS. Build the workflow in AWS Step Functions, using it to orchestrate multiple concurrent workflows. The status of each workflow can be visualized in the AWS Management Console, and historical data can be written to Amazon S3 and visualized using Amazon QuickSight. Use Amazon SWF to create a workflow that handles a single batch of catalog records with multiple worker tasks to extract the data, transform it, and send it through Mechanical Turk. Use Amazon ES and Kibana to visualize AWS Lambda processing logs to see the workflow states. A corporation has an on-premises monitoring system that stores events in a PostgreSQL database. Due to high ingestion, the database is unable to scale and regularly runs out of storage. The business is pursuing a hybrid approach and has already established a VPN link between its network and AWS. The solution must include the following characteristics: ✑ Managed Amazon Web Services (AWS) services to reduce operational complexity. 
✑ A buffer that grows automatically in response to data traffic and needs no continuing management. ✑ A dashboard-creation tool for monitoring events in near-real time. ✑ Support for JSON data that is semi-structured and dynamic schemas. Which component combination will allow the business to develop a monitoring system that satisfies these requirements? (Select two.). Use Amazon Kinesis Data Firehose to buffer events. Create an AWS Lambda function to process and transform events. Create an Amazon Kinesis data stream to buffer events. Create an AWS Lambda function to process and transform events. Configure an Amazon Aurora PostgreSQL DB cluster to receive events. Use Amazon QuickSight to read from the database and create near-real-time visualizations and dashboards. Configure Amazon Elasticsearch Service (Amazon ES) to receive events. Use the Kibana endpoint deployed with Amazon ES to create near-real-time visualizations and dashboards. Configure an Amazon Neptune DB instance to receive events. Use Amazon QuickSight to read from the database and create near-real-time visualizations and dashboards. A business has built a web application that is hosted on Amazon EC2 instances in a single AWS Region. The firm has expanded its operations into new nations and needs to expand its application into additional locations to fulfill its consumers' low-latency requirements. The regions may be partitioned, and an application operating in one area need not interact with instances running in other regions. How could the company's Solutions Architect automate the application's deployment so that it may be deployed MOST EFFECTIVELY across numerous regions?. Write a bash script that uses the AWS CLI to query the current state in one region and output a JSON representation. Pass the JSON representation to the AWS CLI, specifying the --region parameter to deploy the application to other regions. 
Write a bash script that uses the AWS CLI to query the current state in one region and output an AWS CloudFormation template. Create a CloudFormation stack from the template by using the AWS CLI, specifying the --region parameter to deploy the application to other regions. Write a CloudFormation template describing the application's infrastructure in the resources section. Create a CloudFormation stack from the template by using the AWS CLI, specify multiple regions using the --regions parameter to deploy the application. Write a CloudFormation template describing the application's infrastructure in the Resources section. Use a CloudFormation stack set from an administrator account to launch stack instances that deploy the application to other regions. A business uses Amazon CloudFront, Amazon API Gateway, and AWS Lambda services to power a serverless application. Currently, the application code is deployed by creating a new version number for the Lambda function and updating it using an AWS CLI script. If an error occurs with the new function version, another CLI script reverts to the prior functioning version of the function. The organization wishes to lower the time required to deploy new versions of the application logic given by Lambda functions, as well as the time required to discover and reverse problems. Create and deploy nested AWS CloudFormation stacks with the parent stack consisting of the AWS CloudFront distribution and API Gateway, and the child stack containing the Lambda function. For changes to Lambda, create an AWS CloudFormation change set and deploy; if errors are triggered, revert the AWS CloudFormation change set to the previous version. Use AWS SAM and built-in AWS CodeDeploy to deploy the new Lambda version, gradually shift traffic to the new version, and use pre-traffic and post-traffic test functions to verify code. Rollback if Amazon CloudWatch alarms are triggered. 
Refactor the AWS CLI scripts into a single script that deploys the new Lambda version. When deployment is complete, the script runs tests. If errors are detected, revert to the previous Lambda version. Create and deploy an AWS CloudFormation stack that consists of a new API Gateway endpoint that references the new Lambda version. Change the CloudFront origin to the new API Gateway endpoint, monitor for errors and, if detected, change the AWS CloudFront origin to the previous API Gateway endpoint. Which of the following cannot be done after configuring an AWS Direct Connect Virtual Interface?. You can exchange traffic between the two ports in the same region connecting to different Virtual Private Gateways (VGWs) if you have more than one virtual interface. You can change the region of your virtual interface. You can delete a virtual interface; if its connection has no other virtual interfaces, you can delete the connection. You can create a hosted virtual interface. A business is using the AWS Cloud to host a bespoke database. Amazon Elastic Compute Cloud (Amazon EC2) is used for compute while Amazon Elastic Block Store (Amazon EBS) is used for storage. The database is hosted on Amazon EC2 instances of the newest generation and data is stored on a General Purpose SSD (gp2) EBS volume. The current volume of data is as follows: ✑ The volume is 512 GB in size. ✑ The volume never goes above 256 GB utilization. ✑ The volume consistently uses around 1,500 IOPS. A solutions architect must analyze the present database storage layer and give recommendations on cost-cutting measures. Which method will result in the MOST cost reduction while maintaining the database's performance?. Convert the data volume to the Cold HDD (sc1) type. Leave the volume as 512 GB. Set the volume IOPS to 1,500. Convert the data volume to the Provisioned IOPS SSD (io2) type. Resize the volume to 256 GB. Set the volume IOPS to 1,500.
Convert the data volume to the Provisioned IOPS SSD (io2) Block Express type. Leave the volume as 512 GB. Set the volume IOPS to 1,500. Convert the data volume to the General Purpose SSD (gp3) type. Resize the volume to 256 GB. Set the volume IOPS to 1,500. A bank is migrating its mainframe-based credit card acceptance processing program to the AWS cloud. At peak demand, the new application will get up to 1,000 requests per second. Each transaction consists of numerous stages, each of which must receive the outcome of the preceding step. The full request must return an authorized response with no data loss in less than two seconds. Each request must be addressed. Payment Card Industry Data Security Standard (PCI DSS) compliance is required. Which solution satisfies all of the bank's goals with the LEAST amount of complexity and expense while still complying with regulatory requirements?. Create an Amazon API Gateway to process inbound requests using a single AWS Lambda task that performs multiple steps and returns a JSON object with the approval status. Open a support case to increase the limit for the number of concurrent Lambdas to allow room for bursts of activity due to the new application. Create an Application Load Balancer with an Amazon ECS cluster on Amazon EC2 Dedicated Instances in a target group to process incoming requests. Use Auto Scaling to scale the cluster out/in based on average CPU utilization. Deploy a web service that processes all of the approval steps and returns a JSON object with the approval status. Deploy the application on Amazon EC2 on Dedicated Instances. Use an Elastic Load Balancer in front of a farm of application servers in an Auto Scaling group to handle incoming requests. Scale out/in based on a custom Amazon CloudWatch metric for the number of inbound requests per second after measuring the capacity of a single instance. 
Create an Amazon API Gateway to process inbound requests using a series of AWS Lambda processes, each with an Amazon SQS input queue. As each step completes, it writes its result to the next step's queue. The final step returns a JSON object with the approval status. Open a support case to increase the limit for the number of concurrent Lambdas to allow room for bursts of activity due to the new application. A solutions architect must advise a business on how to transition its on-premises data processing application to Amazon Web Services (AWS). At the moment, users submit input files using a web site. The web server then uploads the files to the NAS and communicates with the processing server through a message queue. Processing each media file might take up to an hour. The organization has established that the volume of media files awaiting processing is much larger during business hours and swiftly decreases after hours. Which migration suggestion is the MOST cost-effective?. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. When there are messages in the queue, invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in an Amazon S3 bucket. Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue, create a new Amazon EC2 instance to pull requests from the queue and process the files. Store the processed files in Amazon EFS. Shut down the EC2 instance after the task is complete. Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue, invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in Amazon EFS. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. 
Use Amazon EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the processed files in an Amazon S3 bucket. An IAM user is attempting to perform an action on an object that is part of another AWS account's bucket. Which of the following will AWS S3 not verify?. The object owner has provided access to the IAM user. Permission provided by the parent of the IAM user on the bucket. Permission provided by the bucket owner to the IAM user. Permission provided by the parent of the IAM user. Amazon Aurora MySQL is being used by a business to power a customer relationship management (CRM) application. The program needs regular database and Amazon EC2 instance maintenance. System administrators authenticate against AWS Identity and Access Management (IAM) using an internal identity provider to obtain access to the AWS Management Console. Each system administrator has a user name and password that were previously set inside the database for database access. A recent security assessment discovered that database passwords are not changed on a regular basis. The organization wishes to replace the passwords with temporary credentials using the AWS access controls already in place. Which collection of solutions best meets the needs of the business?. Create a new AWS Systems Manager Parameter Store entry for each database password. Enable parameter expiration to invoke an AWS Lambda function to perform password rotation by updating the parameter value. Create an IAM policy allowing each system administrator to retrieve their current password from the Parameter Store. Use the AWS CLI to retrieve credentials when connecting to the database. Create a new AWS Secrets Manager entry for each database password. Configure password rotation for each secret using an AWS Lambda function in the same VPC as the database cluster.
Create an IAM policy allowing each system administrator to retrieve their current password. Use the AWS CLI to retrieve credentials when connecting to the database. Enable IAM database authentication on the database. Attach an IAM policy to each system administrator's role to map the role to the database user name. Install the Amazon Aurora SSL certificate bundle to the system administrators' certificate trust store. Use the AWS CLI to generate an authentication token used when connecting to the database. Enable IAM database authentication on the database. Configure the database to use the IAM identity provider to map the administrator roles to the database user. Install the Amazon Aurora SSL certificate bundle to the system administrators' certificate trust store. Use the AWS CLI to generate an authentication token used when connecting to the database. A business wishes to use a bespoke application hosted on AWS to examine log data by date range. Each day, the application creates around 10 GB of data, and this volume is expected to grow. A Solutions Architect is responsible for storing the data in Amazon S3 and analyzing it using Amazon Athena. Which combination of steps will provide the best performance as the data grows? (Select two.). Store each object in Amazon S3 with a random string at the front of each key. Store the data in multiple S3 buckets. Store the data in Amazon S3 in a columnar format, such as Apache Parquet or Apache ORC. Store the data in Amazon S3 in objects that are smaller than 10 MB. Store the data using Apache Hive partitioning in Amazon S3 using a key that includes a date, such as dt=2019-02. AWS Organizations is being used by a business to manage several AWS accounts. For security reasons, the organization requires the creation of an Amazon Simple Notification Service (Amazon SNS) topic in each Organizations member account that permits interaction with a third-party alerting system.
To automate the deployment of CloudFormation stacks, a solutions architect used an AWS CloudFormation template to construct the SNS topic and stack sets. Trusted access has been enabled in Organizations. What should the solutions architect do to ensure that the CloudFormation StackSets are deployed across all AWS accounts?. Create a stack set in the Organizations member accounts. Use service-managed permissions. Set deployment options to deploy to an organization. Use CloudFormation StackSets drift detection. Create stacks in the Organizations member accounts. Use self-managed permissions. Set deployment options to deploy to an organization. Enable the CloudFormation StackSets automatic deployment. Create a stack set in the Organizations master account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets automatic deployment. Create stacks in the Organizations master account. Use service-managed permissions. Set deployment options to deploy to the organization. Enable CloudFormation StackSets drift detection. Currently, a web design business maintains numerous FTP servers, which are used by its 250 clients to upload and download huge graphic assets. The business wishes to migrate this system to AWS in order to increase its scalability, while preserving client privacy and keeping expenses low. Which Amazon Web Services architecture would you recommend?. Ask their customers to use an S3 client instead of an FTP client. Create a single S3 bucket. Create an IAM user for each customer. Put the IAM users in a group that has an IAM policy that permits access to subdirectories within the bucket via use of the 'username' policy variable. Create a single S3 bucket with Reduced Redundancy Storage turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a bucket policy that permits access only to that one customer.
Create an auto-scaling group of FTP servers with a scaling policy to automatically scale in when minimum network traffic on the auto-scaling group is below a given threshold. Load a central list of FTP users from S3 as part of the User Data startup script on each instance. Create a single S3 bucket with Requester Pays turned on and ask their customers to use an S3 client instead of an FTP client. Create a bucket for each customer with a bucket policy that permits access only to that one customer. A large organization wants to enable its developers to buy third-party software through AWS Marketplace. The corporation employs an AWS Organizations account structure with all features enabled, and each organizational unit (OU) has a shared services account that procurement managers will use. According to the procurement team's guideline, developers should be allowed to purchase third-party software only from an authorized list and should do so via AWS Marketplace's Private Marketplace. The procurement team wants management of Private Marketplace to be limited to a role named procurement-manager-role, which procurement managers may assume. Other IAM users, groups, roles, and account administrators within the organization should be denied administrative access to the Private Marketplace. What is the MOST EFFECTIVE method for developing an architecture that satisfies these requirements?. Create an IAM role named procurement-manager-role in all AWS accounts in the organization. Add the PowerUserAccess managed policy to the role. Apply an inline policy to all IAM users and roles in every AWS account to deny permissions on the AWSPrivateMarketplaceAdminFullAccess managed policy. Create an IAM role named procurement-manager-role in all AWS accounts in the organization. Add the AdministratorAccess managed policy to the role. Define a permissions boundary with the AWSPrivateMarketplaceAdminFullAccess managed policy and attach it to all the developer roles.
Create an IAM role named procurement-manager-role in all the shared services accounts in the organization. Add the AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an organization root-level SCP to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role. Create another organization root-level SCP to deny permissions to create an IAM role named procurement-manager-role to everyone in the organization. Create an IAM role named procurement-manager-role in all AWS accounts that will be used by developers. Add the AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an SCP in Organizations to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role. Apply the SCP to all the shared services accounts in the organization. A business is using the AWS Cloud to host numerous workloads. The organization has distinct software development departments. The organization leverages AWS Organizations and SAML-based federation to provide developers authority to handle resources in their AWS accounts. Each development unit deploys its production workloads to a single shared production account. Recently, on the production account, an event happened in which members of one development unit terminated an EC2 instance that belonged to another development unit. A solutions architect must provide a solution that eliminates the possibility of a similar situation occurring in the future. Additionally, the solution must enable developers to control the instances utilized to run their workloads. Which technique will satisfy these criteria?. Create separate OUs in AWS Organizations for each development unit. Assign the created OUs to the company AWS accounts. Create separate SCPs with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag that matches the development unit name. Assign the SCP to the corresponding OU. 
Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation. Update the IAM policy for the developers' assumed IAM role with a deny action and a StringNotEquals condition for the DevelopmentUnit resource tag and aws:PrincipalTag/DevelopmentUnit. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS) session tag during SAML federation. Create an SCP with an allow action and a StringEquals condition for the DevelopmentUnit resource tag and aws:PrincipalTag/DevelopmentUnit. Assign the SCP to the root OU. Create separate IAM policies for each development unit. For every IAM policy, add an allow action and a StringEquals condition for the DevelopmentUnit resource tag and the development unit name. During SAML federation, use AWS Security Token Service (AWS STS) to assign the IAM policy and match the development unit name to the assumed IAM role. On AWS, a business is developing a software-as-a-service (SaaS) offering. The organization has implemented an Amazon API Gateway REST API integrated with AWS Lambda across various AWS Regions and in the same production account. The company's pricing structure is tiered, allowing users to pay for the capability to perform a certain number of API requests each second. The premium tier enables users to make up to 3,000 calls per second and is identifiable by a unique API key. Several premium tier customers across several Regions report receiving 429 Too Many Requests error responses from multiple API calls during high-use hours. The Lambda function is never invoked, as seen in the logs. What may be causing these customers' error messages?. The Lambda function reached its concurrency limit. The Lambda function reached its Region limit for concurrency. The company reached its API Gateway account limit for calls per second. The company reached its API Gateway default per-method limit for calls per second.
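The 429 scenario above comes down to throttling: API Gateway rejects the excess requests before they ever reach Lambda, which is why the Lambda logs show no invocations. A minimal token-bucket sketch of that behavior (illustrative only; the rate and burst values below are assumptions for the demo, not API Gateway's actual limits):

```python
class TokenBucket:
    """Illustrative token-bucket throttle: requests beyond the steady-state
    rate (plus an initial burst allowance) are rejected with 429 and never
    reach the backend."""

    def __init__(self, rate_per_sec, burst, now=0.0):
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = burst         # maximum bucket size (burst allowance)
        self.tokens = float(burst)    # bucket starts full
        self.last = now

    def allow(self, now):
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True               # request is forwarded to the backend
        return False                  # request is rejected with 429; Lambda never runs

# One shared account-level bucket: even keys individually rated for
# 3,000 calls/sec collectively exhaust it during a peak-hour burst.
bucket = TokenBucket(rate_per_sec=10_000, burst=5_000)
results = [bucket.allow(now=0.0) for _ in range(6_000)]  # 6,000 calls in one instant
print(results.count(True), results.count(False))          # burst absorbed, rest throttled
```

After the burst, refill resumes: a call at `now=0.1` succeeds again because 0.1 s of refill restores 1,000 tokens. The same mechanic explains why the failures cluster at high-use hours and disappear once traffic subsides.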
A business intends to create a management network in its AWS VPC. The business is attempting to protect the web server on a single VPC instance in such a way that both internet and back-end administration traffic are permitted. The business wants to configure the back-end administration network interface to accept SSH traffic exclusively from a certain IP range, while the internet-facing web server interface will have an IP address that accepts traffic from all internet IPs. How can the business do this with a single web server instance?. It is not possible to have two IP addresses for a single instance. The organization should create two network interfaces with the same subnet and security group to assign separate IPs to each network interface. The organization should create two network interfaces with separate subnets so one instance can have two subnets and the respective security groups for controlled access. The organization should launch an instance with two separate subnets using the same network interface, which allows it to have a separate CIDR as well as security groups. A huge on-premises Apache Hadoop cluster with a 20 PB HDFS database is used by a business. Each quarter, the cluster grows by around 200 instances and 1 PB. The company's objectives are to maintain Hadoop data resilience, to mitigate the effect of cluster node failures, and to dramatically cut expenses. The present cluster is available 24 hours a day and is capable of handling a wide range of analytical workloads, including interactive queries and batch processing. Which solution would match these criteria with the LEAST amount of cost and downtime?. Use AWS Snowmobile to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the interactive workload based on historical data from the on-premises cluster. Store the data on EMRFS.
Minimize costs using Reserved Instances for master and core nodes and Spot Instances for task nodes, and auto scale task nodes based on Amazon CloudWatch metrics. Create job-specific, optimized clusters for batch workloads that are similarly optimized. Use AWS Snowmobile to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster of a similar size and configuration to the current cluster. Store the data on EMRFS. Minimize costs by using Reserved Instances. As the workload grows each quarter, purchase additional Reserved Instances and add to the cluster. Use AWS Snowball to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the interactive workloads based on historical data from the on-premises cluster. Store the data on EMRFS. Minimize costs using Reserved Instances for master and core nodes and Spot Instances for task nodes, and auto scale task nodes based on Amazon CloudWatch metrics. Create job-specific, optimized clusters for batch workloads that are similarly optimized. Use AWS Direct Connect to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the interactive workload based on historical data from the on-premises cluster. Store the data on EMRFS. Minimize costs using Reserved Instances for master and core nodes and Spot Instances for task nodes, and auto scale task nodes based on Amazon CloudWatch metrics. Create job-specific, optimized clusters for batch workloads that are similarly optimized. A user is running a batch process on EC2 instances that are EBS-backed. The batch process starts a few Amazon EC2 instances to perform Hadoop Map reduce tasks, which may take between 50 and 600 minutes or even longer. The user desires a setting that allows the instance to be terminated only when the procedure is complete. Configure a job which terminates all instances after 600 minutes. 
It is not possible to terminate instances automatically. Configure the CloudWatch action to terminate the instance when the CPU utilization falls below 5%. Set up CloudWatch with Auto Scaling to terminate all the instances. A business created a Java application and deployed it on an Amazon EC2 instance running an Apache Tomcat server. The company's Engineering team utilized AWS CloudFormation and Chef Automate to automate the provisioning and updating of the application's infrastructure and configuration in development, test, and production environments. These implementations have resulted in a considerable increase in the dependability of change releases. The Engineering team indicates that service outages occur often as a result of unanticipated issues encountered when upgrading the application on the Apache Tomcat server. Which option will make all releases more reliable?. Implement a blue/green deployment methodology. Implement the canary release methodology. Configure Amazon CloudFront to serve all requests from the cache while deploying the updates. Implement the all-at-once deployment methodology. A client has created a connection to AWS using AWS Direct Connect. Although the connection is operational and routes are listed on the client's end, the customer is unable to connect EC2 instances inside its VPC to servers in its datacenter. Which of the following choices represents a feasible means of resolving this situation? (Select two.). Add a route to the route table with an IPsec VPN connection as the target. Enable route propagation to the virtual private gateway (VGW). Enable route propagation to the customer gateway (CGW). Modify the route table of all instances using the 'route' command. Modify the instances' VPC subnet route table by adding a route back to the customer's on-premises environment. A life sciences business processes genomics data using a mix of open source tools and Docker containers running on servers in its on-premises data center.
Data for sequencing is created and stored on a local storage area network (SAN), followed by processing. The research and development teams are experiencing capacity constraints and have chosen to re-architect their genomics analysis platform on AWS to enable it to grow in response to workload needs and shorten turnaround time from weeks to days. The business is connected to AWS through a high-speed AWS Direct Connect connection. Sequencers create around 200 GB of data for each genome, and processing the data with optimal computational capability may take several hours. The resulting file will be uploaded to Amazon S3. The organization anticipates receiving 10-15 job requests each day. Which solution satisfies these criteria?. Use regularly scheduled AWS Snowball Edge devices to transfer the sequencing data into AWS. When AWS receives the Snowball Edge device and the data is loaded into Amazon S3, use S3 events to trigger an AWS Lambda function to process the data. Use AWS Data Pipeline to transfer the sequencing data to Amazon S3. Use S3 events to trigger an Amazon EC2 Auto Scaling group to launch custom-AMI EC2 instances running the Docker containers to process the data. Use AWS DataSync to transfer the sequencing data to Amazon S3. Use S3 events to trigger an AWS Lambda function that starts an AWS Step Functions workflow. Store the Docker images in Amazon Elastic Container Registry (Amazon ECR) and trigger AWS Batch to run the container and process the sequencing data. Use an AWS Storage Gateway file gateway to transfer the sequencing data to Amazon S3. Use S3 events to trigger an AWS Batch job that executes on Amazon EC2 instances running the Docker containers to process the data. A business is experiencing difficulties with its recently installed serverless infrastructure, which makes use of Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The application operates as anticipated in steady state.
However, during periods of high load, tens of thousands of concurrent invocations are required, and user requests often fail before succeeding. The organization examined the logs for each component, with a particular emphasis on the Amazon CloudWatch Logs for Lambda. The services and apps have not registered any faults. What may be causing this issue?. Lambda has very low memory assigned, which causes the function to fail at peak load. Lambda is in a subnet that uses a NAT gateway to reach out to the internet, and the function instance does not have sufficient Amazon EC2 resources in the VPC to scale with the load. The throttle limit set on API Gateway is very low. During peak load, the additional requests are not making their way through to Lambda. DynamoDB is set up in an auto scaling mode. During peak load, DynamoDB adjusts capacity and throughput behind the scenes, which is causing the temporary downtime. Once the scaling completes, the retries go through successfully. A corporation wishes to increase cost awareness on its Amazon EMR platform. The organization has budgeted for each team's use of Amazon EMR. When a budgetary threshold is crossed, an email should be sent to the budget office's distribution list notifying them. Teams should be able to check the total cost of their EMR cluster. A solutions architect must develop a system that proactively and centrally enforces the policy in a multi-account environment. Which measures should the solutions architect take in combination to satisfy these requirements? (Select two.). Update the AWS CloudFormation template to include the AWS::Budgets::Budget resource with the NotificationsWithSubscribers property. Implement Amazon CloudWatch dashboards for Amazon EMR usage. Create an EMR bootstrap action that runs at startup and calls the Cost Explorer API to set the budget on the cluster with the GetCostForecast and NotificationsWithSubscribers actions. Create an AWS Service Catalog portfolio for each team.
Add each team's Amazon EMR cluster as an AWS CloudFormation template to their Service Catalog portfolio as a Product. Create an Amazon CloudWatch metric for billing. Create a custom alert when costs exceed the budgetary threshold. An education corporation manages an online program that is utilized by college students worldwide. The application is hosted in an Amazon Elastic Container Service (Amazon ECS) cluster behind an Application Load Balancer (ALB). A system administrator notices a weekly increase in the number of unsuccessful login attempts, which overwhelms the authentication service for the application. All unsuccessful login attempts come from around 500 unique IP addresses that vary on a weekly basis. A solutions architect must ensure that the authentication service is not overwhelmed by unsuccessful login attempts. Which option satisfies these conditions the most efficiently?. Use AWS Firewall Manager to create a security group and security group policy to deny access from the IP addresses. Create an AWS WAF web ACL with a rate-based rule, and set the rule action to Block. Connect the web ACL to the ALB. Use AWS Firewall Manager to create a security group and security group policy to allow access only to specific CIDR ranges. Create an AWS WAF web ACL with an IP set match rule, and set the rule action to Block. Connect the web ACL to the ALB. A business runs an application on an Amazon EC2 instance and requires file storage in Amazon S3. The data should never be sent over the public internet, and access to a particular Amazon S3 bucket should be restricted to the application's EC2 instances. A solutions architect has constructed an Amazon S3 VPC endpoint and linked it to the application's VPC. What further efforts should the solutions architect take to ensure compliance with these requirements?. Assign an endpoint policy to the endpoint that restricts access to a specific S3 bucket. 
  Attach a bucket policy to the S3 bucket that grants access to the VPC endpoint. Add the gateway prefix list to a NACL of the instances to limit access to the application EC2 instances only.
- Attach a bucket policy to the S3 bucket that grants access to application EC2 instances only using the aws:SourceIp condition. Update the VPC route table so only the application EC2 instances can access the VPC endpoint.
- Assign an endpoint policy to the VPC endpoint that restricts access to a specific S3 bucket. Attach a bucket policy to the S3 bucket that grants access to the VPC endpoint. Assign an IAM role to the application EC2 instances and only allow access to this role in the S3 bucket's policy.
- Assign an endpoint policy to the VPC endpoint that restricts access to S3 in the current Region. Attach a bucket policy to the S3 bucket that grants access to the VPC private subnets only. Add the gateway prefix list to a NACL to limit access to the application EC2 instances only.

A business wants to operate a serverless application on AWS. The firm intends to deploy its application using Docker containers on an Amazon ECS cluster. A MySQL database is required for the application, and the firm intends to utilize Amazon RDS. The firm has documents that must be viewed regularly for the first three months and then very seldom afterwards. The documents must be kept for a period of seven years. Which approach is the MOST cost-effective in meeting these requirements?
- Create an ECS cluster using On-Demand Instances. Provision the database and its read replicas in Amazon RDS using Spot Instances. Store the documents in an encrypted EBS volume, and create a cron job to delete the documents after 7 years.
- Create an ECS cluster using a fleet of Spot Instances, with Spot Instance draining enabled. Provision the database and its read replicas in Amazon RDS using Reserved Instances. Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move documents that are older than 3 months to Amazon S3 Glacier, then delete from S3 Glacier the documents that are more than 7 years old.
- Create an ECS cluster using On-Demand Instances. Provision the database and its read replicas in Amazon RDS using On-Demand Instances. Store the documents in Amazon EFS. Create a cron job to move documents that are older than 3 months to Amazon S3 Glacier. Create an AWS Lambda function to delete the documents in S3 Glacier that are older than 7 years.
- Create an ECS cluster using a fleet of Spot Instances with Spot Instance draining enabled. Provision the database and its read replicas in Amazon RDS using On-Demand Instances. Store the documents in a secured Amazon S3 bucket with a lifecycle policy to move documents that are older than 3 months to Amazon S3 Glacier, then delete the documents in S3 Glacier after 7 years.

A business maintains many apps in an on-premises data center. The data center hosts a mixture of Windows and Linux virtual machines that are controlled by VMware vCenter. A solutions architect must develop a strategy for migrating the apps to AWS. The solutions architect, however, realizes that the application documentation is out of date and that there are no complete infrastructure diagrams. The company's developers are unable to meet with the solutions architect to discuss their apps and current usage. What should the solutions architect do to assemble the necessary data?
- Deploy the AWS Server Migration Service (AWS SMS) connector using the OVA image on the VMware cluster to collect configuration and utilization data from the VMs.
- Use the AWS Migration Portfolio Assessment (MPA) tool to connect to each of the VMs to collect the configuration and utilization data.
- Install the AWS Application Discovery Service on each of the VMs to collect the configuration and utilization data.
- Register the on-premises VMs with the AWS Migration Hub to collect configuration and utilization data.

The site reliability engineer for a corporation is doing an evaluation of Amazon FSx for Windows File Server installations within a newly acquired account. Company policy requires that all Amazon FSx file systems be designed to be highly available across Availability Zones. During the evaluation, the site reliability engineer learns that one of the Amazon FSx file systems was deployed using the Single-AZ 2 deployment type. A solutions architect must reduce downtime while bringing this Amazon FSx file system into compliance with the policy. What actions should the solutions architect take to ensure that these criteria are met?
- Reconfigure the deployment type to Multi-AZ for this Amazon FSx file system.
- Create a new Amazon FSx file system with a deployment type of Multi-AZ. Use AWS DataSync to transfer data to the new Amazon FSx file system. Point users to the new location.
- Create a second Amazon FSx file system with a deployment type of Single-AZ 2. Use AWS DataSync to keep the data in sync. Switch users to the second Amazon FSx file system in the event of failure.
- Use the AWS Management Console to take a backup of the Amazon FSx file system. Create a new Amazon FSx file system with a deployment type of Multi-AZ. Restore the backup to the new Amazon FSx file system. Point users to the new location.

A large-scale migration to AWS was just completed by a corporation. Development teams that serve many business units each have their own AWS Organizations account. A central cloud team is in charge of determining which services and resources may be used, as well as developing operational plans for all corporate teams. Certain teams are nearing their account service limits. The cloud team must develop an automated and operationally efficient system for monitoring service quotas in real time.
Monitoring should occur every 15 minutes and trigger alarms when a team's usage surpasses 80%. Which solution will satisfy these criteria?
- Create a scheduled AWS Config rule to trigger an AWS Lambda function to call the GetServiceQuota API. If any service utilization is above 80%, publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to alert the cloud team. Create an AWS CloudFormation template and deploy the necessary resources to each account.
- Create an Amazon EventBridge (Amazon CloudWatch Events) rule that triggers an AWS Lambda function to refresh the AWS Trusted Advisor service limits checks and retrieve the most current utilization and service limit data. If the current utilization is above 80%, publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to alert the cloud team. Create AWS CloudFormation StackSets that deploy the necessary resources to all Organizations accounts.
- Create an Amazon CloudWatch alarm that triggers an AWS Lambda function to call the Amazon CloudWatch GetInsightRuleReport API to retrieve the most current utilization and service limit data. If the current utilization is above 80%, publish an Amazon Simple Email Service (Amazon SES) notification to alert the cloud team. Create AWS CloudFormation StackSets that deploy the necessary resources to all Organizations accounts.
- Create an Amazon EventBridge (Amazon CloudWatch Events) rule that triggers an AWS Lambda function to refresh the AWS Trusted Advisor service limits checks and retrieve the most current utilization and service limit data. If the current utilization is above 80%, use Amazon Pinpoint to send an alert to the cloud team. Create an AWS CloudFormation template and deploy the necessary resources to each account.

You're developing a personal document archiving system for your multinational corporation, which employs thousands of people. Each employee's data is potentially several gigabytes in size and must be backed up by this archiving system. Employees will access the solution through an application that allows them to simply drag and drop their files into the archiving system, and they may access their archives through a web-based interface. The corporate network is connected to AWS through a high-bandwidth AWS Direct Connect connection. You are required by law to encrypt all data before uploading it to the cloud. How can you achieve this in a manner that is both highly available and cost-effective?
- Manage encryption keys on-premises in an encrypted relational database. Set up an on-premises server with sufficient storage to temporarily store files, and then upload them to Amazon S3, providing a client-side master key.
- Manage encryption keys in a Hardware Security Module (HSM) appliance on-premises, with a server that has sufficient storage to temporarily store, encrypt, and upload files directly into Amazon Glacier.
- Manage encryption keys in AWS Key Management Service (KMS), upload to Amazon Simple Storage Service (S3) with client-side encryption using a KMS customer master key ID, and configure Amazon S3 lifecycle policies to store each object using the Amazon Glacier storage tier.
- Manage encryption keys in an AWS CloudHSM appliance. Encrypt files prior to uploading on the employee desktop, and then upload directly into Amazon Glacier.

Amazon Elastic File System (EFS) reports the amount of space used by an object through the network file system's space-used attribute. This attribute reports the current metered data size of the object, not the metadata size. Which of the following tools would you use to determine how much disk space a file consumes?
- blkid utility.
- du utility.
- sfdisk utility.
- pydf utility.

All data uploaded to an Amazon S3 bucket must be encrypted according to the company's security policy. The encryption keys must be highly available, and the organization must be able to regulate access on a per-user basis, with each user having access to a unique encryption key. Which of the following architectures satisfies these criteria? (Select two.)
- Use Amazon S3 server-side encryption with Amazon S3-managed keys. Allow Amazon S3 to generate an AWS/S3 master key, and use IAM to control access to the data keys that are generated.
- Use Amazon S3 server-side encryption with AWS KMS-managed keys, create multiple customer master keys, and use key policies to control access to them.
- Use Amazon S3 server-side encryption with customer-managed keys, and use AWS CloudHSM to manage the keys. Use CloudHSM client software to control access to the keys that are generated.
- Use Amazon S3 server-side encryption with customer-managed keys, and use two AWS CloudHSM instances configured in high-availability mode to manage the keys. Use the CloudHSM client software to control access to the keys that are generated.
- Use Amazon S3 server-side encryption with customer-managed keys, and use two AWS CloudHSM instances configured in high-availability mode to manage the keys. Use IAM to control access to the keys that are generated in CloudHSM.

A large European corporation intends to move its apps to the AWS Cloud. The corporation has many AWS accounts for different business units. According to a data privacy rule, the corporation is required to limit developers' access to AWS European Regions exclusively. What should the solutions architect do to satisfy this requirement with the LEAST amount of administrative overhead?
- Create IAM users and IAM groups in each account. Create IAM policies to limit access to non-European Regions. Attach the IAM policies to the IAM groups.
- Enable AWS Organizations, attach the AWS accounts, and create OUs for European Regions and non-European Regions. Create SCPs to limit access to non-European Regions and attach the policies to the OUs.
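A Region-restriction SCP of the kind described here typically hinges on the aws:RequestedRegion condition key. The following is a minimal sketch, built as a Python dictionary so it can be checked programmatically; the statement ID and the Region list are illustrative, and real policies usually also exempt global services such as IAM, which this sketch omits.

```python
import json

# Illustrative Region allow list for the SCP.
ALLOWED_REGIONS = ["eu-west-1", "eu-central-1"]

# Deny any action requested outside the allowed Regions.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideEU",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ALLOWED_REGIONS}
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Attached to an OU, a deny-outside-allowed-Regions statement like this applies to every account in that OU, which is what keeps the administrative overhead low compared with per-account IAM policies.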
- Set up AWS Single Sign-On and attach AWS accounts. Create permission sets with policies to restrict access to non-European Regions. Create IAM users and IAM groups in each account.
- Enable AWS Organizations, attach the AWS accounts, and create OUs for European Regions and non-European Regions. Create permission sets with policies to restrict access to non-European Regions. Create IAM users and IAM groups in the primary account.

An AWS client has an on-premises web application. The web application retrieves data from a firewall-protected third-party API. The third party accepts just one public CIDR block in each client's allow list. The client wants to transfer the web application to Amazon Web Services (AWS). The application will be hosted on a collection of Amazon EC2 instances in a virtual private cloud (VPC) behind an Application Load Balancer (ALB). The ALB is distributed among public subnets. The EC2 instances are hosted in private subnets. NAT gateways connect the private subnets to the internet. How can a solutions architect guarantee that the web application can continue to make calls to the third-party API after the migration?
- Associate a block of customer-owned public IP addresses to the VPC. Enable public IP addressing for public subnets in the VPC.
- Register a block of customer-owned public IP addresses in the AWS account. Create Elastic IP addresses from the address block and assign them to the NAT gateways in the VPC.
- Create Elastic IP addresses from the block of customer-owned IP addresses. Assign the static Elastic IP addresses to the ALB.
- Register a block of customer-owned public IP addresses in the AWS account. Set up AWS Global Accelerator to use Elastic IP addresses from the address block. Set the ALB as the accelerator endpoint.

A startup is nearing completion of the architecture for its backup solution for AWS apps. All apps are hosted on AWS, and each tier utilizes at least two Availability Zones. Company policy requires nightly backups of all data in at least two locations: production and disaster recovery. The locations must be geographically distinct. Additionally, the firm requires that the backup be immediately accessible for restoration at the production data center and within 24 hours at the disaster recovery site. Ideally, all backup operations should be completely automated. Which backup system is the MOST cost-effective and meets all requirements?
- Back up all the data to a large Amazon EBS volume attached to the backup media server in the production region. Run automated scripts to snapshot these volumes nightly, and copy these snapshots to the disaster recovery region.
- Back up all the data to Amazon S3 in the disaster recovery region. Use a lifecycle policy to move this data to Amazon Glacier in the production region immediately. Only the data is replicated; remove the data from the S3 bucket in the disaster recovery region.
- Back up all the data to Amazon Glacier in the production region. Set up cross-region replication of this data to Amazon Glacier in the disaster recovery region. Set up a lifecycle policy to delete any data older than 60 days.
- Back up all the data to Amazon S3 in the production region. Set up cross-region replication of this S3 bucket to another region and set up a lifecycle policy in the second region to immediately move this data to Amazon Glacier.

A business is in the process of transferring an application to the AWS Cloud. Each night, the application publishes thousands of photos to a mounted NFS file system in an on-premises data center. After migration, the application will be hosted on an Amazon EC2 instance with a mounted Amazon Elastic File System (Amazon EFS) file system. The firm has established connectivity to AWS using AWS Direct Connect. Prior to the cutover, a solutions architect must develop a mechanism for replicating newly produced on-premises pictures to the EFS file system. What is the MOST efficient method for replicating the images?
- Configure a periodic process to run the aws s3 sync command from the on-premises file system to Amazon S3. Configure an AWS Lambda function to process event notifications from Amazon S3 and copy the images from Amazon S3 to the EFS file system.
- Deploy an AWS Storage Gateway file gateway with an NFS mount point. Mount the file gateway file system on the on-premises server. Configure a process to periodically copy the images to the mount point.
- Deploy an AWS DataSync agent to an on-premises server that has access to the NFS file system. Send data over the Direct Connect connection to an S3 bucket by using a public VIF. Configure an AWS Lambda function to process event notifications from Amazon S3 and copy the images from Amazon S3 to the EFS file system.
- Deploy an AWS DataSync agent to an on-premises server that has access to the NFS file system. Send data over the Direct Connect connection to an AWS PrivateLink interface VPC endpoint for Amazon EFS by using a private VIF. Configure a DataSync scheduled task to send the images to the EFS file system every 24 hours.

A business operates a distributed application on an Auto Scaling group of Amazon EC2 instances. The application saves massive volumes of data on an Amazon Elastic File System (Amazon EFS) file system and generates fresh data monthly. The organization needs to back up its data in a secondary AWS Region to use as a fallback in the event of a performance issue in the main Region. The company's RTO is one hour. A solutions architect must develop a backup plan while keeping the additional expense to a minimum. Which backup method should the solutions architect propose to satisfy these requirements?
- Create a pipeline in AWS Data Pipeline. Copy the data to an EFS file system in the secondary Region. Create a lifecycle policy to move files to the EFS One Zone-Infrequent Access storage class.
- Set up automatic backups by using AWS Backup. Create a copy rule to copy backups to an Amazon S3 bucket in the secondary Region. Create a lifecycle policy to move backups to the S3 Glacier storage class.
- Set up AWS DataSync and continuously copy the files to an Amazon S3 bucket in the secondary Region. Create a lifecycle policy to move files to the S3 Glacier Deep Archive storage class.
- Turn on EFS Cross-Region Replication and set the secondary Region as the target. Create a lifecycle policy to move files to the EFS Infrequent Access storage class in the secondary Region.

You're successfully operating a multitier web application on AWS, and your marketing department has asked you to add a reporting tier to the service. Every 30 minutes, the reporting tier will aggregate and publish status reports based on user-generated data stored in your web application's database. You are currently operating a Multi-AZ RDS MySQL instance for the database tier. You have also used ElastiCache to provide a caching layer between the application and database tiers. Choose the option that will enable you to deploy the reporting tier with the least possible impact on your database.
- Continually send transaction logs from your master database to an S3 bucket and generate the reports off the S3 bucket using S3 byte-range requests.
- Generate the reports by querying the synchronously replicated standby RDS MySQL instance maintained through Multi-AZ.
- Launch an RDS read replica connected to your Multi-AZ master database and generate reports by querying the read replica.
- Generate the reports by querying the ElastiCache database caching tier.

Your application specializes in data transformation. Transformable files are uploaded to Amazon S3 and subsequently transformed by a fleet of Spot EC2 instances. Files provided by your premium clients must be processed immediately. How should such a system be implemented?
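A common way to give premium files precedence is to keep two work queues and always drain the higher-priority one first. A minimal sketch of that polling order follows, simulated with in-memory deques rather than real Amazon SQS queues; the names and file labels are illustrative.

```python
from collections import deque

# Simulated queues; in a real system these would be two SQS queues
# polled with receive_message.
high_priority = deque()
default_priority = deque()

def submit(task, premium=False):
    """Premium clients' files go to the high-priority queue."""
    (high_priority if premium else default_priority).append(task)

def next_task():
    """Workers poll the high-priority queue first; only when it is
    empty do they fall back to the default queue."""
    if high_priority:
        return high_priority.popleft()
    if default_priority:
        return default_priority.popleft()
    return None

submit("free-user.csv")
submit("premium-user.csv", premium=True)
print(next_task())  # prints "premium-user.csv"
```

The same two-queue polling order is what lets premium work jump ahead without any sorting or scanning on the workers' part.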
- Use a DynamoDB table with an attribute defining the priority level. Transformation instances will scan the table for tasks, sorting the results by priority level.
- Use Route 53 latency-based routing to send high-priority tasks to the closest transformation instances.
- Use two SQS queues, one for high-priority messages and the other for default priority. Transformation instances first poll the high-priority queue; if there is no message, they poll the default-priority queue.
- Use a single SQS queue. Each message contains the priority level. Transformation instances poll high-priority messages first.

After launching an instance on a public subnet to act as a NAT (Network Address Translation) device, you adjust your route tables to make the NAT device the target of your private subnet's internet-bound traffic. When attempting to establish an outbound connection to the internet from an instance in the private subnet, you are unsuccessful. Which of the following approaches would be most effective in resolving the issue?
- Disabling the Source/Destination Check attribute on the NAT instance.
- Attaching an Elastic IP address to the instance in the private subnet.
- Attaching a second Elastic Network Interface (ENI) to the NAT instance, and placing it in the private subnet.
- Attaching a second Elastic Network Interface (ENI) to the instance in the private subnet, and placing it in the public subnet.

Which of the following commands takes arguments in the form of binary data?
- --user-data.
- cipher text-key.
- --aws-customer-key.
- --describe-instances-user.

The data center of a business is linked to the AWS Cloud through a low-latency 10 Gbps AWS Direct Connect connection that includes a private virtual interface to the virtual private cloud (VPC). The firm's internet connection is 200 Mbps, and each Friday the company creates a 150 TB dataset. The data must be moved and made accessible on Amazon S3 by Monday morning.
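Whether any network path can move this dataset over a weekend comes down to simple bandwidth arithmetic. A rough back-of-the-envelope check, ignoring protocol overhead and retransmits:

```python
# Transfer-time estimate: 150 TB over the two links in the scenario.
DATASET_BITS = 150 * 10**12 * 8        # 150 TB expressed in bits

def transfer_hours(link_bps):
    """Hours to push the dataset through a link of the given speed."""
    return DATASET_BITS / link_bps / 3600

internet_hours = transfer_hours(200 * 10**6)   # 200 Mbps internet
dx_hours = transfer_hours(10 * 10**9)          # 10 Gbps Direct Connect

print(f"200 Mbps internet: ~{internet_hours:,.0f} hours")
print(f"10 Gbps Direct Connect: ~{dx_hours:.0f} hours")
```

Under these assumptions the internet link would need roughly 1,700 hours (about 69 days), while the Direct Connect link finishes in roughly 33 hours, comfortably inside the Friday-to-Monday window.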
Which is the LEAST expensive method of meeting the criteria while still allowing for growth in data transfer?
- Order two 80 TB AWS Snowball appliances. Offload the data to the appliances and ship them to AWS. AWS will copy the data from the Snowball appliances to Amazon S3.
- Create a VPC endpoint for Amazon S3. Copy the data to Amazon S3 by using the VPC endpoint, forcing the transfer to use the Direct Connect connection.
- Create a VPC endpoint for Amazon S3. Set up a reverse proxy farm behind a Classic Load Balancer in the VPC. Copy the data to Amazon S3 using the proxy.
- Create a public virtual interface on a Direct Connect connection, and copy the data to Amazon S3 over the connection.

A business is developing an account strategy as it begins using AWS. The security team will grant each team only the permissions necessary, adhering to the principle of least privilege. Teams want to keep their resources separate from those of other groups, while the Finance team wants to bill each team separately for its resource utilization. Which account creation method satisfies these criteria and allows for future changes?
- Create a new AWS Organizations account. Create groups in Active Directory and assign them to roles in AWS to grant federated access. Require each team to tag their resources, and separate bills based on tags. Control access to resources through IAM, granting the minimum required privileges.
- Create individual accounts for each team. Assign the security account as the master account, and enable consolidated billing for all other accounts. Create a cross-account role for security to manage accounts, and send logs to a bucket in the security account.
- Create a new AWS account, and use AWS Service Catalog to provide teams with the required resources. Implement a third-party billing solution to provide the Finance team with the resource use for each team based on tagging. Isolate resources using IAM to avoid account sprawl. Security will control and monitor logs and permissions.
- Create a master account for billing using Organizations, and create each team's account from that master account. Create a security account for logs and cross-account access. Apply service control policies on each account, and grant the Security team cross-account access to all accounts. Security will create IAM policies for each account to maintain least privilege access.

A solutions architect is developing a solution for a cluster of Amazon EC2 instances that must be highly available and reliable. The solutions architect must ensure that each EC2 instance in the cluster automatically recovers after a system failure. The solution must guarantee that the recovered instance retains its original IP address. How can these requirements be met?
- Create an AWS Lambda script to restart any EC2 instances that shut down unexpectedly.
- Create an Auto Scaling group for each EC2 instance that has a minimum and maximum size of 1.
- Create a new t2.micro instance to monitor the cluster instances. Configure the t2.micro instance to issue an aws ec2 reboot-instances command upon failure.
- Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric, and then configure an EC2 action to recover the instance.

A healthcare organization uses AWS to host a production workload that holds extremely sensitive personal information. The security team has mandated that each AWS API action performed with the root user credentials of an AWS account must immediately open a high-priority ticket in the company's ticketing system for auditing purposes. The ticketing system has a monthly maintenance window of three hours during which no tickets may be generated. To comply with the security standards, the organization activated AWS CloudTrail logs and created a scheduled AWS Lambda function that uses Amazon Athena to query API operations performed by the root user. The Lambda function notifies the ticketing system API of any activity discovered. During a recent security assessment, several tickets were not generated because the ticketing system was unavailable due to scheduled maintenance. Which combination of measures should a solutions architect take to guarantee that events are reported to the ticketing system even while scheduled maintenance is being performed? (Select two.)
- Create an Amazon SNS topic to which Amazon CloudWatch alarms will be published. Configure a CloudWatch alarm to invoke the Lambda function.
- Create an Amazon SQS queue to which Amazon CloudWatch alarms will be published. Configure a CloudWatch alarm to publish to the SQS queue.
- Modify the Lambda function to be triggered by messages published to an Amazon SNS topic. Update the existing application code to retry every 5 minutes if the ticketing system's API endpoint is unavailable.
- Modify the Lambda function to be triggered when there are messages in the Amazon SQS queue and to return successfully when the ticketing system API has processed the request.
- Create an Amazon EventBridge rule that triggers on all API events where the invoking user identity is root. Configure the EventBridge rule to write the event to an Amazon SQS queue.

Your firm maintains an on-premises multi-tier PHP web application that recently suffered an outage as a result of a huge spike in web traffic after a corporate announcement. You anticipate that similar announcements will drive similar unanticipated bursts in the coming days and are searching for strategies to swiftly boost your infrastructure's capacity to handle unexpected surges in traffic. Currently, the application is divided into two tiers: a web tier comprised of a load balancer and many Linux Apache web servers, and a database tier comprised of a Linux server hosting a MySQL database. Which of the following scenarios will provide full site functionality while also helping to increase your application's capacity in the short period required?
- Failover environment: Create an S3 bucket and configure it for website hosting. Migrate your DNS to Route 53 using zone file import, and leverage Route 53 DNS failover to fail over to the S3-hosted website.
- Hybrid environment: Create an AMI that can be used to launch web servers in EC2. Create an Auto Scaling group that uses the AMI to scale the web tier based on incoming traffic. Leverage Elastic Load Balancing to balance traffic between on-premises web servers and those hosted in AWS.
- Offload traffic from the on-premises environment: Set up a CloudFront distribution, and configure CloudFront to cache objects from a custom origin. Choose to customize your object cache behavior, and select a TTL that objects should exist in cache.
- Migrate to AWS: Use VM Import/Export to quickly convert an on-premises web server to an AMI. Create an Auto Scaling group that uses the imported AMI to scale the web tier based on incoming traffic. Create an RDS read replica and set up replication between the RDS instance and the on-premises MySQL server to migrate the database.

You are the administrator of a news website that updates every 15 minutes in the eu-west-1 Region. The website is accessible to a global audience. It makes use of an Auto Scaling group and an Elastic Load Balancer in conjunction with an Amazon RDS database. Amazon S3 is used to store static content, which is delivered through Amazon CloudFront. Your Auto Scaling group is configured to initiate a scale-up event when CPU usage reaches 60%. You're using an Amazon RDS extra large database instance with 10,000 Provisioned IOPS, CPU usage of roughly 80%, and free RAM in the region of 2 GB. Web analytics records indicate that the average load time for your web pages is between 1.5 and 2 seconds, while your SEO consultant wants a load time of less than 0.5 seconds. How might you improve your visitors' page load times? (Select three.)
- Lower the scale-up trigger of your Auto Scaling group to 30% so it scales more aggressively.
- Add an Amazon ElastiCache caching layer to your application for storing sessions and frequent DB queries.
- Configure Amazon CloudFront dynamic content support to enable caching of reusable content from your site.
- Switch the Amazon RDS database to the high-memory extra large instance type.
- Set up a second installation in another region, and use the Amazon Route 53 latency-based routing feature to select the right region.

A company in the United States develops software for the CIA. The CIA agreed to host the application on Amazon Web Services (AWS), but in a secure environment. The firm is considering hosting the application in the AWS GovCloud Region. Which of the following statements is incorrect when an enterprise hosts in AWS GovCloud rather than a standard AWS Region?
- The billing for AWS GovCloud will be in a different account than the standard AWS account.
- GovCloud Region authentication is isolated from Amazon.com.
- Physical and logical administrative access is limited to U.S. persons.
- It is physically isolated and has logical network isolation from all the other Regions.

A business has created a web application. The enterprise hosts the application on a cluster of Amazon EC2 instances behind an Application Load Balancer. The organization wishes to enhance the application's security posture and intends to use AWS WAF web ACLs. The solution must have no harmful effect on legitimate application traffic. How should a solutions architect configure the web ACLs to meet these requirements?
- Set the action of the web ACL rules to Count.
Enable AWS WAF logging. Analyze the requests for false positives. Modify the rules to avoid any false positive. Over time, change the action of the web ACL rules from Count to Block. Use only rate-based rules in the web ACLs, and set the throttle limit as high as possible. Temporarily block all requests that exceed the limit. Define nested rules to narrow the scope of the rate tracking. Set the action of the web ACL rules to Block. Use only AWS managed rule groups in the web ACLs. Evaluate the rule groups by using Amazon CloudWatch metrics with AWS WAF sampled requests or AWS WAF logs. Use only custom rule groups in the web ACLs, and set the action to Allow. Enable AWS WAF logging. Analyze the requests for false positives. Modify the rules to avoid any false positive. Over time, change the action of the web ACL rules from Allow to Block. A business has launched a web service in two AWS Regions: us-west-2 and us-est-1. Each AWS region hosts a single instance of the web service. Amazon Route 53 is used to direct clients to the least-latency AWS Region. The organization want to increase the web service's availability in the event of an outage in one of the two AWS Regions. A Solutions Architect has advised that a health check of Route 53 be conducted. The health check must identify a certain piece of text on an endpoint. Which requirements must the endpoint satisfy in order to pass the Route 53 health check? (Select two.). The endpoint must establish a TCP connection within 10 seconds. The endpoint must return an HTTP 200 status code. The endpoint must return an HTTP 2xx or 3xx status code. The specific text string must appear within the first 5,120 bytes of the response. The endpoint must respond to the request within the number of seconds specified when creating the health check. A corporation with worldwide offices connects to a single AWS Region using a single 1 Gbps AWS Direct Connect connection. 
The link is used by the company's on-premises network to communicate with its AWS Cloud services. The connection consists of a single private virtual interface that terminates in a single virtual private cloud (VPC). A solutions architect must design a solution that adds a redundant Direct Connect connection in the same Region. The solution must also allow access to other Regions over the same pair of Direct Connect connections as the business expands into additional Regions. Which solution satisfies these criteria?
A. Provision a Direct Connect gateway. Delete the existing private virtual interface from the existing connection. Create the second Direct Connect connection. Create a new private virtual interface on each connection, and connect both private virtual interfaces to the Direct Connect gateway. Connect the Direct Connect gateway to the single VPC.
B. Keep the existing private virtual interface. Create the second Direct Connect connection. Create a new private virtual interface on the new connection, and connect the new private virtual interface to the single VPC.
C. Keep the existing private virtual interface. Create the second Direct Connect connection. Create a new public virtual interface on the new connection, and connect the new public virtual interface to the single VPC.
D. Provision a transit gateway. Delete the existing private virtual interface from the existing connection. Create the second Direct Connect connection. Create a new private virtual interface on each connection, and connect both private virtual interfaces to the transit gateway. Associate the transit gateway with the single VPC.

A business runs a centralized Amazon EC2 application hosted in a single shared virtual private cloud (VPC). The centralized application must be accessible to client applications running in various business units' VPCs.
For scalability, the centralized application is fronted by a Network Load Balancer (NLB). Up to ten VPCs per business unit must be linked to the shared VPC. Certain CIDR blocks in the business unit VPCs overlap with those in the shared VPC, while others overlap with one another. Network connectivity to the shared VPC's centralized application must be restricted to approved business unit VPCs. Which network configuration should a solutions architect use to connect client applications in the business unit VPCs to the centralized application in the shared VPC?
A. Create an AWS Transit Gateway. Attach the shared VPC and the authorized business unit VPCs to the transit gateway. Create a single transit gateway route table and associate it with all of the attached VPCs. Allow automatic propagation of routes from the attachments into the route table. Configure VPC routing tables to send traffic to the transit gateway.
B. Create a VPC endpoint service using the centralized application NLB and enable the option to require endpoint acceptance. Create a VPC endpoint in each of the business unit VPCs using the service name of the endpoint service. Accept authorized endpoint requests from the endpoint service console.
C. Create a VPC peering connection from each business unit VPC to the shared VPC. Accept the VPC peering connections from the shared VPC console. Configure VPC routing tables to send traffic to the VPC peering connection.
D. Configure a virtual private gateway for the shared VPC and create customer gateways for each of the authorized business unit VPCs. Establish a Site-to-Site VPN connection from the business unit VPCs to the shared VPC. Configure VPC routing tables to send traffic to the VPN connection.
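The string-matching Route 53 health check above is created with `route53.create_health_check`. The helper below is a minimal sketch that only builds the `HealthCheckConfig` parameter (the domain, path, and search string are illustrative); for `*_STR_MATCH` check types, Route 53 considers the endpoint healthy only when it returns an HTTP 2xx or 3xx status code and the search string appears within the first 5,120 bytes of the response body.

```python
def str_match_health_check_config(domain, path, search_string, https=True):
    """Build the HealthCheckConfig argument for route53.create_health_check.

    For *_STR_MATCH types, the endpoint passes the check only if it returns
    an HTTP 2xx or 3xx status code AND search_string appears within the
    first 5,120 bytes of the response body.
    """
    if len(search_string) > 255:
        raise ValueError("SearchString is limited to 255 characters")
    return {
        "Type": "HTTPS_STR_MATCH" if https else "HTTP_STR_MATCH",
        "FullyQualifiedDomainName": domain,
        "ResourcePath": path,
        "SearchString": search_string,
        "RequestInterval": 30,   # seconds between checks
        "FailureThreshold": 3,   # consecutive failures before unhealthy
    }
```

In practice the result is passed as `HealthCheckConfig` to `route53.create_health_check` along with a unique `CallerReference`, and the returned health check ID is attached to the latency records.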
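The staged Count-to-Block rollout described in the first AWS WAF option can be scripted once false positives are resolved. The sketch below flips the action on wafv2-style rule dictionaries; the rule names are hypothetical, and in practice the returned list would be passed as the `Rules` parameter of `wafv2.update_web_acl` together with the web ACL's current `LockToken`.

```python
import copy

def promote_to_block(rules, verified_rule_names):
    """Return a copy of wafv2-style rules in which rules that were running
    in Count mode and have been verified free of false positives are
    switched to Block. All other rules are left untouched."""
    promoted = copy.deepcopy(rules)  # never mutate the caller's rules
    for rule in promoted:
        if rule["Name"] in verified_rule_names and rule.get("Action") == {"Count": {}}:
            rule["Action"] = {"Block": {}}
    return promoted
```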
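Because AWS PrivateLink addresses the consumer side through interface endpoints rather than routes between VPC CIDRs, it tolerates the overlapping address ranges in the last question. A minimal sketch of the two calls involved, written as parameter builders (the ARN, VPC IDs, and service name are placeholders): the provider side exposes the NLB via `ec2.create_vpc_endpoint_service_configuration` with acceptance required, and each business unit VPC then calls `ec2.create_vpc_endpoint`.

```python
def endpoint_service_params(nlb_arn):
    # Parameters for ec2.create_vpc_endpoint_service_configuration in the
    # shared VPC's account: expose the NLB as a PrivateLink service and
    # require manual acceptance so only approved VPCs can connect.
    return {
        "NetworkLoadBalancerArns": [nlb_arn],
        "AcceptanceRequired": True,
    }

def interface_endpoint_params(vpc_id, service_name, subnet_ids):
    # Parameters for ec2.create_vpc_endpoint in each business unit VPC;
    # service_name comes from the endpoint service created above.
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,
        "ServiceName": service_name,
        "SubnetIds": subnet_ids,
    }
```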




