
Test Knowledge 14

Description:
75 Questions

Creation Date: 2025/10/15

Category: Others

Number of questions: 53

Content:

A company is designing a web application with an internet-facing Application Load Balancer (ALB). The company needs the ALB to receive HTTPS web traffic from the public internet. The ALB must send only HTTPS traffic to the web application servers hosted on the Amazon EC2 instances on port 443. The ALB must perform a health check of the web application servers over HTTPS on port 8443. Which combination of configurations of the security group that is associated with the ALB will meet these requirements? (Choose three.). A. Allow HTTPS inbound traffic from 0.0.0.0/0 for port 443. B. Allow all outbound traffic to 0.0.0.0/0 for port 443. C. Allow HTTPS outbound traffic to the web application instances for port 443. D. Allow HTTPS inbound traffic from the web application instances for port 443. E. Allow HTTPS outbound traffic to the web application instances for the health check on port 8443. F. Allow HTTPS inbound traffic from the web application instances for the health check on port 8443.
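
For reference, the inbound and outbound rules described above can be expressed with the AWS SDK for Python. This is a minimal sketch only, assuming hypothetical security group IDs (sg-alb for the ALB, sg-app for the web application instances).

import boto3

ec2 = boto3.client("ec2")

# Inbound: HTTPS from the public internet to the ALB (hypothetical group ID sg-alb).
ec2.authorize_security_group_ingress(
    GroupId="sg-alb",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Outbound: application traffic on 443 and health checks on 8443,
# restricted to the instances' security group (hypothetical sg-app).
ec2.authorize_security_group_egress(
    GroupId="sg-alb",
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "UserIdGroupPairs": [{"GroupId": "sg-app"}]},
        {"IpProtocol": "tcp", "FromPort": 8443, "ToPort": 8443,
         "UserIdGroupPairs": [{"GroupId": "sg-app"}]},
    ],
)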

A company hosts an application on AWS. The application gives users the ability to upload photos and store the photos in an Amazon S3 bucket. The company wants to use Amazon CloudFront and a custom domain name to upload the photo files to the S3 bucket in the eu-west-1 Region. Which solution will meet these requirements? (Choose two.). A. Use AWS Certificate Manager (ACM) to create a public certificate in the us-east-1 Region. Use the certificate in CloudFront. B. Use AWS Certificate Manager (ACM) to create a public certificate in eu-west-1. Use the certificate in CloudFront. C. Configure Amazon S3 to allow uploads from CloudFront. Configure S3 Transfer Acceleration. D. Configure Amazon S3 to allow uploads from CloudFront origin access control (OAC). E. Configure Amazon S3 to allow uploads from CloudFront. Configure an Amazon S3 website endpoint.

A weather forecasting company collects temperature readings from various sensors on a continuous basis. An existing data ingestion process collects the readings and aggregates the readings into larger Apache Parquet files. Then the process encrypts the files by using client-side encryption with KMS managed keys (CSE-KMS). Finally, the process writes the files to an Amazon S3 bucket with separate prefixes for each calendar day. The company wants to run occasional SQL queries on the data to take sample moving averages for a specific calendar day. Which solution will meet these requirements MOST cost-effectively?. A. Configure Amazon Athena to read the encrypted files. Run SQL queries on the data directly in Amazon S3. B. Use Amazon S3 Select to run SQL queries on the data directly in Amazon S3. C. Configure Amazon Redshift to read the encrypted files. Use Redshift Spectrum and Redshift query editor v2 to run SQL queries on the data directly in Amazon S3. D. Configure Amazon EMR Serverless to read the encrypted files. Use Apache SparkSQL to run SQL queries on the data directly in Amazon S3.

A company is implementing a new application on AWS. The company will run the application on multiple Amazon EC2 instances across multiple Availability Zones within multiple AWS Regions. The application will be available through the internet. Users will access the application from around the world. The company wants to ensure that each user who accesses the application is sent to the EC2 instances that are closest to the user’s location. Which solution will meet these requirements?. A. Implement an Amazon Route 53 geolocation routing policy. Use an internet-facing Application Load Balancer to distribute the traffic across all Availability Zones within the same Region. B. Implement an Amazon Route 53 geoproximity routing policy. Use an internet-facing Network Load Balancer to distribute the traffic across all Availability Zones within the same Region. C. Implement an Amazon Route 53 multivalue answer routing policy. Use an internet-facing Application Load Balancer to distribute the traffic across all Availability Zones within the same Region. D. Implement an Amazon Route 53 weighted routing policy. Use an internet-facing Network Load Balancer to distribute the traffic across all Availability Zones within the same Region.

A financial services company plans to launch a new application on AWS to handle sensitive financial transactions. The company will deploy the application on Amazon EC2 instances. The company will use Amazon RDS for MySQL as the database. The company’s security policies mandate that data must be encrypted at rest and in transit. Which solution will meet these requirements with the LEAST operational overhead?. A. Configure encryption at rest for Amazon RDS for MySQL by using AWS KMS managed keys. Configure AWS Certificate Manager (ACM) SSL/TLS certificates for encryption in transit. B. Configure encryption at rest for Amazon RDS for MySQL by using AWS KMS managed keys. Configure IPsec tunnels for encryption in transit. C. Implement third-party application-level data encryption before storing data in Amazon RDS for MySQL. Configure AWS Certificate Manager (ACM) SSL/TLS certificates for encryption in transit. D. Configure encryption at rest for Amazon RDS for MySQL by using AWS KMS managed keys. Configure a VPN connection to enable private connectivity to encrypt data in transit.

A company is migrating its on-premises Oracle database to an Amazon RDS for Oracle database. The company needs to retain data for 90 days to meet regulatory requirements. The company must also be able to restore the database to a specific point in time for up to 14 days. Which solution will meet these requirements with the LEAST operational overhead?. A. Create Amazon RDS automated backups. Set the retention period to 90 days. B. Create an Amazon RDS manual snapshot every day. Delete manual snapshots that are older than 90 days. C. Use the Amazon Aurora Clone feature for Oracle to create a point-in-time restore. Delete clones that are older than 90 days. D. Create a backup plan that has a retention period of 90 days by using AWS Backup for Amazon RDS.
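
As a point of reference, a 90-day retention rule in AWS Backup (the approach named in one of the options above) might look roughly like the following sketch with the AWS SDK for Python; the plan name, vault, schedule, role, and RDS ARN are all placeholders.

import boto3

backup = boto3.client("backup")

# Hypothetical daily rule that keeps recovery points for 90 days.
plan = backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "rds-90-day-retention",
    "Rules": [{
        "RuleName": "daily",
        "TargetBackupVaultName": "Default",
        "ScheduleExpression": "cron(0 5 * * ? *)",
        "Lifecycle": {"DeleteAfterDays": 90},
    }],
})

# Assign the RDS for Oracle instance (ARN is a placeholder) to the plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "oracle-db",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:rds:us-east-1:123456789012:db:prod-oracle"],
    },
)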

A company is developing a new application that uses a relational database to store user data and application configurations. The company expects the application to have steady user growth. The company expects the database usage to be variable and read-heavy, with occasional writes. The company wants to cost-optimize the database solution. The company wants to use an AWS managed database solution that will provide the necessary performance. Which solution will meet these requirements MOST cost-effectively?. A. Deploy the database on Amazon RDS. Use Provisioned IOPS SSD storage to ensure consistent performance for read and write operations. B. Deploy the database on Amazon Aurora Serverless to automatically scale the database capacity based on actual usage to accommodate the workload. C. Deploy the database on Amazon DynamoDB. Use on-demand capacity mode to automatically scale throughput to accommodate the workload. D. Deploy the database on Amazon RDS. Use magnetic storage and use read replicas to accommodate the workload.

A company hosts its application on several Amazon EC2 instances inside a VPC. The company creates a dedicated Amazon S3 bucket for each customer to store their relevant information in Amazon S3. The company wants to ensure that the application running on EC2 instances can securely access only the S3 buckets that belong to the company’s AWS account. Which solution will meet these requirements with the LEAST operational overhead?. A. Create a gateway endpoint for Amazon S3 that is attached to the VPC. Update the IAM instance profile policy to provide access to only the specific buckets that the application needs. B. Create a NAT gateway in a public subnet with a security group that allows access to only Amazon S3. Update the route tables to use the NAT gateway. C. Create a gateway endpoint for Amazon S3 that is attached to the VPC. Update the IAM instance profile policy with a Deny action and the following condition key:. D. Create a NAT gateway in a public subnet. Update route tables to use the NAT gateway. Assign.

A company is building a cloud-based application on AWS that will handle sensitive customer data. The application uses Amazon RDS for the database, Amazon S3 for object storage, and S3 Event Notifications that invoke AWS Lambda for serverless processing. The company uses AWS IAM Identity Center to manage user credentials. The development, testing, and operations teams need secure access to Amazon RDS and Amazon S3 while ensuring the confidentiality of sensitive customer data. The solution must comply with the principle of least privilege. Which solution meets these requirements with the LEAST operational overhead?. A. Use IAM roles with least privilege to grant all the teams access. Assign IAM roles to each team with customized IAM policies defining specific permission for Amazon RDS and S3 object access based on team responsibilities. B. Enable IAM Identity Center with an Identity Center directory. Create and configure permission sets with granular access to Amazon RDS and Amazon S3. Assign all the teams to groups that have specific access with the permission sets. C. Create individual IAM users for each member in all the teams with role-based permissions. Assign the IAM roles with predefined policies for RDS and S3 access to each user based on user needs. Implement IAM Access Analyzer for periodic credential evaluation. D. Use AWS Organizations to create separate accounts for each team. Implement cross-account IAM roles with least privilege. Grant specific permission for RDS and S3 access based on team roles and responsibilities.

A company needs to migrate its data and applications to AWS. The data is stored on a network-attached storage (NAS) device and needs to be moved over a 1 Gbps internet connection. Which AWS service should the company use to migrate the data?. A. AWS DataSync. B. AWS Snowball. C. AWS Snowcone. D. AWS Snowmobile.

A company is hosting a web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The ALB is configured to use an AWS Certificate Manager (ACM) SSL/TLS certificate. How should the company configure the application to redirect all HTTP requests to HTTPS?. A. Create a redirect rule in the ALB listener to redirect HTTP (port 80) to HTTPS (port 443). B. Modify the web server configuration to redirect HTTP to HTTPS. C. Use AWS WAF rules to block HTTP traffic. D. Enable HTTP to HTTPS redirection in ACM.
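
For illustration, an HTTP listener whose default action redirects to HTTPS (as described in one of the options above) can be created as follows; the load balancer ARN is a placeholder.

import boto3

elbv2 = boto3.client("elbv2")

# HTTP (port 80) listener whose only action is a permanent redirect to HTTPS (port 443).
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web/0123456789abcdef",
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "redirect",
        "RedirectConfig": {"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"},
    }],
)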

A company runs an application that writes log files to Amazon S3. A solutions architect must ensure that all objects are encrypted at rest using server-side encryption with AWS KMS-managed keys (SSE-KMS). What should the solutions architect do to ensure compliance?. A. Create an S3 bucket policy that denies any PUT request that does not include x-amz-server-side-encryption: aws:kms. B. Enable default encryption using AWS KMS on the S3 bucket. C. Use an AWS Config rule to detect unencrypted objects. D. Apply an IAM policy requiring encryption.
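
The two enforcement mechanisms mentioned in the options above (a bucket policy that denies unencrypted PUTs, and default SSE-KMS encryption on the bucket) can be sketched as follows with the AWS SDK for Python; the bucket name and key alias are hypothetical.

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-log-bucket"  # placeholder name

# Default encryption: apply SSE-KMS to new objects automatically.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={"Rules": [{
        "ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms",
            "KMSMasterKeyID": "alias/log-key",  # placeholder key alias
        },
    }]},
)

# Bucket policy: deny any PutObject request that does not specify aws:kms encryption.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedPuts",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))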

A company is deploying a web application that must scale automatically based on user demand. The application should remain stateless. Which architecture should a solutions architect recommend?. A. Launch EC2 instances in an Auto Scaling group behind an Application Load Balancer. Store session data in Amazon DynamoDB. B. Use Amazon ECS with EC2 launch type and store session state locally. C. Launch EC2 instances without a load balancer and use Amazon EFS for session data. D. Use AWS Elastic Beanstalk with instance storage for session management.

A solutions architect must design a highly available architecture for an application that uses an Amazon RDS MySQL DB instance. The database must remain available in case of an Availability Zone failure. Which solution meets this requirement?. A. Deploy RDS MySQL with Multi-AZ enabled. B. Deploy RDS MySQL in a single AZ with read replicas. C. Use Amazon Aurora Serverless. D. Back up the database daily to Amazon S3.

A company is building a cloud-based application on AWS that will handle sensitive customer data. The application uses Amazon RDS for the database, Amazon S3 for object storage, and S3 Event Notifications that invoke AWS Lambda for serverless processing. The company uses AWS IAM Identity Center to manage user credentials. The development, testing, and operations teams need secure access to Amazon RDS and Amazon S3 while ensuring the confidentiality of sensitive customer data. The solution must comply with the principle of least privilege. Which solution meets these requirements with the LEAST operational overhead?. A. Use IAM roles with least privilege to grant all the teams access. Assign IAM roles to each team with customized IAM policies defining specific permissions for Amazon RDS and S3 object access based on team responsibilities. B. Enable IAM Identity Center with an Identity Center directory. Create and configure permission sets with granular access to Amazon RDS and Amazon S3. Assign all the teams to groups that have specific access with the permission sets. C. Create individual IAM users for each member in all the teams with role-based permissions. Assign the IAM roles with predefined policies for RDS and S3 access to each user based on user needs. Implement IAM Access Analyzer for periodic credential evaluation. D. Use AWS Organizations to create separate accounts for each team. Implement cross-account IAM roles with least privilege. Grant specific permission for RDS and S3 access based on team roles and responsibilities.

A company has an Amazon S3 bucket that contains sensitive data files. The company has an application that runs on virtual machines in an on-premises data center. The company currently uses AWS IAM Identity Center. The application requires temporary access to files in the S3 bucket. The company wants to grant the application secure access to the files in the S3 bucket. Which solution will meet these requirements?. A. Create an S3 bucket policy that permits access to the bucket from the public IP address range of the company’s on-premises data center. B. Use IAM Roles Anywhere to obtain security credentials in IAM Identity Center that grant access to the S3 bucket. Configure the virtual machines to assume the role by using the AWS CLI. C. Install the AWS CLI on the virtual machine. Configure the AWS CLI with access keys from an IAM user that has access to the bucket. D. Create an IAM user and policy that grants access to the bucket. Store the access key and secret key for the IAM user in AWS Secrets Manager. Configure the application to retrieve the access key and secret key at startup.

A company hosts its core network services, including directory services and DNS, in its on-premises data center. The data center is connected to the AWS Cloud using AWS Direct Connect (DX). Additional AWS accounts are planned that will require quick, cost-effective, and consistent access to these network services. What should a solutions architect implement to meet these requirements with the LEAST amount of operational overhead?. A. Create a DX connection in each new account. Route the network traffic to the on-premises servers. B. Configure VPC endpoints in the DX VPC for all required services. Route the network traffic to the on-premises servers. C. Create a VPN connection between each new account and the DX VPC. Route the network traffic to the on-premises servers. D. Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers.

A company hosts its main public web application in one AWS Region across multiple Availability Zones. The application uses an Amazon EC2 Auto Scaling group and an Application Load Balancer (ALB). A web development team needs a cost-optimized compute solution to improve the company’s ability to serve dynamic content globally to millions of customers. Which solution will meet these requirements?. A. Create an Amazon CloudFront distribution. Configure the existing ALB as the origin. B. Use Amazon Route 53 to serve traffic to the ALB and EC2 instances based on the geographic location of each customer. C. Create an Amazon S3 bucket with public read access enabled. Migrate the web application to the S3 bucket. Configure the S3 bucket for website hosting. D. Use AWS Direct Connect to directly serve content from the web application to the location of each customer.

A company stores user data in AWS. The data is used continuously with peak usage during business hours. Access patterns vary, with some data not being used for months at a time. A solutions architect must choose a cost-effective solution that maintains the highest level of durability while maintaining high availability. Which storage solution meets these requirements?. A. Amazon S3 Standard. B. Amazon S3 Intelligent-Tiering. C. Amazon S3 Glacier Deep Archive. D. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).

A company is testing an application that runs on an Amazon EC2 Linux instance. A single 500 GB Amazon EBS General Purpose SSD (gp2) volume is attached to the EC2 instance. The company will deploy the application on multiple EC2 instances in an Auto Scaling group. All instances require access to the data that is stored in the EBS volume. The company needs a highly available and resilient solution that does not introduce significant changes to the application's code. Which solution will meet these requirements?. A. Provision an EC2 instance that uses NFS server software. Attach a single 500 GB gp2 EBS volume to the instance. B. Provision an Amazon FSx for Windows File Server file system. Configure the file system as an SMB file store within a single Availability Zone. C. Provision an EC2 instance with two 250 GB Provisioned IOPS SSD EBS volumes. D. Provision an Amazon Elastic File System (Amazon EFS) file system. Configure the file system to use General Purpose performance mode.

A company recently launched a new application for its customers. The application runs on multiple Amazon EC2 instances across two Availability Zones. End users use TCP to communicate with the application. The application must be highly available and must automatically scale as the number of users increases. Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.). A. Add a Network Load Balancer in front of the EC2 instances. B. Configure an Auto Scaling group for the EC2 instances. C. Add an Application Load Balancer in front of the EC2 instances. D. Manually add more EC2 instances for the application. E. Add a Gateway Load Balancer in front of the EC2 instances.

A company is designing the architecture for a new mobile app that uses the AWS Cloud. The company uses organizational units (OUs) in AWS Organizations to manage its accounts. The company wants to tag Amazon EC2 instances with data sensitivity by using values of sensitive and nonsensitive. IAM identities must not be able to delete a tag or create instances without a tag. Which combination of steps will meet these requirements? (Choose two.). A. In Organizations, create a new tag policy that specifies the data sensitivity tag key and the required values. Enforce the tag values for the EC2 instances. Attach the tag policy to the appropriate OU. B. In Organizations, create a new service control policy (SCP) that specifies the data sensitivity tag key and the required tag values. Enforce the tag values for the EC2 instances. Attach the SCP to the appropriate OU. C. Create a tag policy to deny running instances when a tag key is not specified. Create another tag policy that prevents identities from deleting tags. Attach the tag policies to the appropriate OU. D. Create a service control policy (SCP) to deny creating instances when a tag key is not specified. Create another SCP that prevents identities from deleting tags. Attach the SCPs to the appropriate OU. E. Create an AWS Config rule to check if EC2 instances use the data sensitivity tag and the specified values. Configure an AWS Lambda function to delete the resource if a noncompliant resource is found.
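
A service control policy along the lines of the tag-enforcement options above could be expressed roughly as follows; the tag key (DataSensitivity), policy name, and target OU ID are hypothetical.

import json
import boto3

org = boto3.client("organizations")

# Deny launching instances without the data-sensitivity tag, and deny deleting tags.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireSensitivityTagOnLaunch",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"Null": {"aws:RequestTag/DataSensitivity": "true"}},
        },
        {
            "Sid": "PreventTagDeletion",
            "Effect": "Deny",
            "Action": "ec2:DeleteTags",
            "Resource": "*",
        },
    ],
}

policy = org.create_policy(
    Name="ec2-data-sensitivity-tags",  # placeholder name
    Description="Require and protect the DataSensitivity tag on EC2 instances",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",  # placeholder OU ID
)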

A company runs database workloads on AWS that are the backend for the company's customer portals. The company runs a Multi-AZ database cluster on Amazon RDS for PostgreSQL. The company needs to implement a 30-day backup retention policy. The company currently has both automated RDS backups and manual RDS backups. The company wants to maintain both types of existing RDS backups that are less than 30 days old. Which solution will meet these requirements MOST cost-effectively?. A. Configure the RDS backup retention policy to 30 days for automated backups by using AWS Backup. Manually delete manual backups that are older than 30 days. B. Disable RDS automated backups. Delete automated backups and manual backups that are older than 30 days. Configure the RDS backup retention policy to 30 days for automated backups. C. Configure the RDS backup retention policy to 30 days for automated backups. Manually delete manual backups that are older than 30 days. D. Disable RDS automated backups. Delete automated backups and manual backups that are older than 30 days automatically by using AWS CloudFormation. Configure the RDS backup retention policy to 30 days for automated backups.

A company is planning to migrate a legacy application to AWS. The application currently uses NFS to communicate with an on-premises storage solution to store application data. The application cannot be modified to use any communication protocol other than NFS for this purpose. Which storage solution should a solutions architect recommend for use after the migration?. A. AWS DataSync. B. Amazon Elastic Block Store (Amazon EBS). C. Amazon Elastic File System (Amazon EFS). D. Amazon EMR File System (Amazon EMRFS).

A company uses GPS trackers to document the migration patterns of thousands of sea turtles. The trackers check every 5 minutes to see if a turtle has moved more than 100 yards (91.4 meters). If a turtle has moved, its tracker sends the new coordinates to a web application running on three Amazon EC2 instances that are in multiple Availability Zones in one AWS Region. Recently, the web application was overwhelmed while processing an unexpected volume of tracker data. Data was lost with no way to replay the events. A solutions architect must prevent this problem from happening again and needs a solution with the least operational overhead. What should the solutions architect do to meet these requirements?. A. Create an Amazon S3 bucket to store the data. Configure the application to scan for new data in the bucket for processing. B. Create an Amazon API Gateway endpoint to handle transmitted location coordinates. Use an AWS Lambda function to process each item concurrently. C. Create an Amazon Simple Queue Service (Amazon SQS) queue to store the incoming data. Configure the application to poll for new messages for processing. D. Create an Amazon DynamoDB table to store transmitted location coordinates. Configure the application to query the table for new data for processing. Use TTL to remove data that has been processed.
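
A buffering queue such as the one described in the options above decouples ingestion from processing so a traffic spike is absorbed rather than lost. A minimal sketch with hypothetical queue and payload names follows.

import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="turtle-coordinates")["QueueUrl"]  # placeholder name

# Producer: the tracker-facing endpoint enqueues each reading instead of
# calling the web application directly, so spikes pile up in the queue.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"turtle_id": "t-0001", "lat": 21.3, "lon": -157.8}),
)

# Consumer: the application polls, processes, and only then deletes the message,
# so nothing is lost if processing fails.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    reading = json.loads(msg["Body"])
    # ... process the coordinates ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])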

A company's software development team needs an Amazon RDS Multi-AZ cluster. The RDS cluster will serve as a backend for a desktop client that is deployed on premises. The desktop client requires direct connectivity to the RDS cluster. The company must give the development team the ability to connect to the cluster by using the client when the team is in the office. Which solution provides the required connectivity MOST securely?. A. Create a VPC and two public subnets. Create the RDS cluster in the public subnets. Use AWS Site-to-Site VPN with a customer gateway in the company's office. B. Create a VPC and two private subnets. Create the RDS cluster in the private subnets. Use AWS Site-to-Site VPN with a customer gateway in the company's office. C. Create a VPC and two private subnets. Create the RDS cluster in the private subnets. Use RDS security groups to allow the company's office IP ranges to access the cluster. D. Create a VPC and two public subnets. Create the RDS cluster in the public subnets. Create a cluster user for each developer. Use RDS security groups to allow the users to access the cluster.

A solutions architect is creating an application that will handle batch processing of large amounts of data. The input data will be held in Amazon S3 and the output data will be stored in a different S3 bucket. For processing, the application will transfer the data over the network between multiple Amazon EC2 instances. What should the solutions architect do to reduce the overall data transfer costs?. A. Place all the EC2 instances in an Auto Scaling group. B. Place all the EC2 instances in the same AWS Region. C. Place all the EC2 instances in the same Availability Zone. D. Place all the EC2 instances in private subnets in multiple Availability Zones.

A company hosts a multi-tier web application that uses an Amazon Aurora MySQL DB cluster for storage. The application tier is hosted on Amazon EC2 instances. The company's IT security guidelines mandate that the database credentials be encrypted and rotated every 14 days. What should a solutions architect do to meet this requirement with the LEAST operational effort?. A. Create a new AWS Key Management Service (AWS KMS) encryption key. Use AWS Secrets Manager to create a new secret that uses the KMS key with the appropriate credentials. Associate the secret with the Aurora DB cluster. Configure a custom rotation period of 14 days. B. Create two parameters in AWS Systems Manager Parameter Store: one for the username and one SecureString for the password (encrypted with KMS). Load these parameters in the application tier. Implement an AWS Lambda function that rotates the password every 14 days. C. Store credentials in an AWS KMS–encrypted Amazon EFS file system mounted on the EC2 instances. Implement a Lambda function to rotate keys every 14 days. D. Store credentials in an AWS KMS–encrypted Amazon S3 bucket and use Lambda to rotate credentials every 14 days.
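
A 14-day rotation schedule in AWS Secrets Manager (the mechanism named in option A) can be sketched as follows; the secret contents, secret name, and rotation Lambda ARN are placeholders.

import json
import boto3

sm = boto3.client("secretsmanager")

# Store the Aurora credentials as a secret (values are placeholders).
secret = sm.create_secret(
    Name="prod/aurora-mysql/app",
    SecretString=json.dumps({"username": "appuser", "password": "CHANGE_ME"}),
)

# Turn on automatic rotation every 14 days using a rotation Lambda function
# (ARN is a placeholder; Secrets Manager provides rotation templates for RDS/Aurora).
sm.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerMySQLRotation",
    RotationRules={"AutomaticallyAfterDays": 14},
)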

A streaming media company is rebuilding its infrastructure to accommodate increasing demand for video content that users consume daily. The company needs to process terabyte-sized videos to block some content in the videos. Video processing can take up to 20 minutes. The company needs a solution that will scale with demand and remain cost-effective. Which solution will meet these requirements?. A. Use AWS Lambda functions to process videos. Store video metadata in Amazon DynamoDB. Store video content in Amazon S3 Intelligent-Tiering. B. Use Amazon Elastic Container Service (Amazon ECS) and AWS Fargate to implement microservices to process videos. Store video metadata in Amazon Aurora. Store video content in Amazon S3 Intelligent-Tiering. C. Use Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB) to process videos. Store video content in Amazon S3 Standard. Use Amazon Simple Queue Service (Amazon SQS) for queuing. D. Deploy a containerized video processing application on Amazon EKS on EC2. Store metadata in Amazon RDS in a single Availability Zone. Store video content in Amazon S3 Glacier Deep Archive.

A company runs an on-premises application on a Kubernetes cluster. After gaining millions of new customers, its infrastructure cannot handle the load. The company needs to migrate the application to AWS and wants to avoid managing compute infrastructure. Which solution meets these requirements with the LEAST operational overhead?. A. Use a self-managed node group for compute capacity and deploy the app to Amazon EKS. B. Use managed node groups for compute capacity and deploy the app to Amazon EKS. C. Use AWS Fargate to supply compute capacity. Create a Fargate profile and deploy the app. D. Use managed node groups with Karpenter for compute capacity.

A company is launching a new application that requires a structured database to store user profiles, settings, and transactional data. The database must scale with traffic and offer backups. Which solution is MOST cost-effective?. A. Deploy a self-managed database on EC2 with open-source software, using Spot Instances and S3 backups. B. Use Amazon RDS in on-demand capacity mode with GP SSD storage and automatic backups (7 days). C. Use Amazon Aurora Serverless with automatic scaling and S3 backups. D. Deploy a self-managed NoSQL database on EC2 with Reserved Instances and Glacier backups.

A legacy web application on AWS stores customer-uploaded images on an EC2 instance’s EBS volume and uploads them nightly to Amazon S3 for backup. The uploads currently use the public S3 endpoint. The architect must ensure uploads occur privately. Which solution meets these requirements?. A. Create a gateway VPC endpoint for S3 with the proper permissions and update the route table to use it. B. Move the S3 bucket inside the VPC and configure routing to use private IPs. C. Create an S3 access point inside the VPC and configure uploads through it. D. Use AWS Direct Connect between the VPC and S3.
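
A gateway VPC endpoint for S3, as in option A, is created against the VPC and its route tables; the IDs below are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3: traffic to S3 from the VPC stays on the AWS network,
# routed through the endpoint via the associated route table (IDs are placeholders).
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],
)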

A prototype ecommerce website uses an ALB, EC2 Auto Scaling group, and RDS MySQL (Single-AZ). Product catalog searches are slow and cause high CPU usage on the database. What should the architect recommend to improve search performance?. A. Migrate the product catalog to Amazon Redshift and use the COPY command. B. Implement Amazon ElastiCache for Redis to cache product catalog data with lazy loading. C. Add an Auto Scaling policy to launch more EC2 web servers during load spikes. D. Enable RDS Multi-AZ and throttle catalog queries from EC2 instances.

A company currently stores 5 TB of data in on-premises block storage systems. The system has limited space for expansion. The company’s on-premises applications require low-latency access to frequently used data. Which cloud-based storage solution meets these requirements with the MOST operational efficiency?. A. Use Amazon S3 File Gateway and integrate it with the on-premises applications via SMB. B. Use AWS Storage Gateway – Volume Gateway (cached volumes) as iSCSI targets. C. Use AWS Storage Gateway – Volume Gateway (stored volumes) as iSCSI targets. D. Use AWS Storage Gateway – Tape Gateway to store virtual tapes in Amazon S3.

A food delivery company’s order processing system struggles to scale during peak traffic. It runs two Auto Scaling groups — one for order collection and one for fulfillment. Order fulfillment is slower, and data loss cannot occur during scaling events. Which solution meets these requirements?. A. Use CloudWatch metrics to keep each Auto Scaling group at peak capacity. B. Use CloudWatch alarms to trigger SNS notifications to create new Auto Scaling groups on demand. C. Use two SQS queues — one for order collection and one for fulfillment — and scale based on notifications. D. Use two SQS queues — one for order collection and one for fulfillment — and scale based on queue message count.

An online gaming company is moving user data to DynamoDB. The data includes user profiles, achievements, and in-game transactions. The company needs a highly available, resilient, and cost-effective architecture. Which solution meets these requirements?. A. Create DynamoDB tables in a single Region, use on-demand mode, and configure global tables for replication. B. Use DAX caching with tables in one Region, auto scaling, and manual cross-region replication. C. Create DynamoDB tables in multiple Regions, use on-demand mode, and replicate with DynamoDB Streams. D. Use DynamoDB global tables with automatic multi-region replication and provisioned capacity mode with auto scaling.

A company runs its media rendering application on premises. It moved all data to Amazon S3 but still needs low-latency access for rendering. Which storage solution meets these requirements most cost-effectively?. A. Use Mountpoint for Amazon S3 to access the data directly. B. Configure an Amazon S3 File Gateway to provide storage for the on-premises application. C. Copy the data to Amazon FSx for Windows File Server and access via FSx File Gateway. D. Configure an on-premises file server using the Amazon S3 API.

A company hosts its ERP system in the us-east-1 Region on Amazon EC2. Customers use a public API to interact with it, but international users report slow response times. Which solution improves response times most cost-effectively?. A. Create an AWS Direct Connect public VIF from each customer data center to us-east-1. B. Use Amazon CloudFront in front of the API with a caching-optimized policy. C. Use AWS Global Accelerator with endpoint groups in multiple Regions. D. Use AWS Site-to-Site VPN for direct connections from customer networks.

A company collects thousands of customer surveys hourly through its website. Results are currently emailed and manually analyzed. The company wants to automate the process, keeping results for 12 months. Which solution meets these requirements most scalably?. A. Send survey data to API Gateway → SQS → Lambda → Amazon Comprehend → DynamoDB, with TTL = 365 days. B. Send data to an API on EC2, which writes to DynamoDB and calls Comprehend. C. Write survey data to S3, trigger Lambda via S3 Events, use Comprehend, and apply lifecycle policies. D. Send data to API Gateway → SQS → Lambda → Amazon Lex → DynamoDB, with TTL = 365 days.
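
The Lambda stage of a pipeline like option A might look roughly like this; the table name, TTL attribute name, survey fields, and the SQS-triggered event shape are all assumptions for the sketch.

import json
import time
import boto3

comprehend = boto3.client("comprehend")
table = boto3.resource("dynamodb").Table("SurveyResults")  # placeholder table name

ONE_YEAR = 365 * 24 * 60 * 60  # 12-month retention enforced via DynamoDB TTL

def handler(event, context):
    # Triggered by SQS: each record carries one survey response from API Gateway.
    for record in event["Records"]:
        survey = json.loads(record["body"])
        sentiment = comprehend.detect_sentiment(Text=survey["comments"], LanguageCode="en")
        table.put_item(Item={
            "survey_id": survey["id"],
            "sentiment": sentiment["Sentiment"],
            "expires_at": int(time.time()) + ONE_YEAR,  # TTL attribute (placeholder name)
        })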

A company uses AWS Systems Manager for patching EC2 instances in an IP address–type target group behind an ALB. Security policy requires instances be removed from service during patching, but errors occur when following this protocol. Which combination resolves the issue? (Choose two.). A. Change target group type from IP to instance type. B. Keep existing Systems Manager document. C. Use AWSEC2-PatchLoadBalancerInstance Systems Manager Automation document. D. Use Maintenance Windows to remove and patch instances automatically. E. Use State Manager to manage removal and patching, relying on ALB health checks.

A medical company processes clinical trial data from multiple customers. Data comes from relational databases, undergoes complex transformations, and is stored in S3. Data must be encrypted during processing using customer-specific keys. Which solution requires the least operational effort?. A. Use one AWS Glue job per customer with SSE-S3 encryption. B. Use one Amazon EMR cluster per customer with client-side encryption (CSE-Custom). C. Use one AWS Glue job per customer with client-side encryption (CSE-KMS) using AWS KMS keys. D. Use one Amazon EMR cluster per customer with SSE-KMS encryption.

A company hosts a website analytics app on a single EC2 On-Demand instance. The app is stateless and resilient but shows degraded performance and 5xx errors during peak loads. Which solution is most cost-effective for seamless scaling?. A. Create an AMI, launch another EC2 On-Demand instance, and load balance with an ALB. B. Create an AMI, launch another EC2 instance, and use Route 53 weighted routing. C. Create a Lambda function to stop and resize the instance at 75% CPU. D. Create an AMI and use it in a launch template and Auto Scaling group with a Spot Fleet and ALB.

A company stores frequently accessed data in an S3 bucket and uses AWS KMS for encryption. They want to reduce encryption costs and avoid additional KMS API calls. Which solution meets these requirements?. A. Use SSE-S3 (Amazon S3 managed keys). B. Use an S3 Bucket Key for SSE-KMS on new objects. C. Use client-side encryption with KMS CMKs. D. Use SSE-C (customer-provided keys).
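
Enabling an S3 Bucket Key with SSE-KMS (option B) is a single change to the bucket's default encryption; the bucket name and key alias below are placeholders.

import boto3

s3 = boto3.client("s3")

# SSE-KMS with an S3 Bucket Key: S3 derives data keys from a bucket-level key,
# which reduces the number of requests made to AWS KMS for new objects.
s3.put_bucket_encryption(
    Bucket="example-data-bucket",  # placeholder name
    ServerSideEncryptionConfiguration={"Rules": [{
        "ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms",
            "KMSMasterKeyID": "alias/data-key",  # placeholder key alias
        },
        "BucketKeyEnabled": True,
    }]},
)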

A company runs multiple on-premises VMs but needs to migrate quickly to AWS using lift-and-shift for non-critical workloads. Which steps meet these requirements? (Choose three.). A. Use AWS SCT to collect VM data. B. Use AWS Application Migration Service (MGN) and install the replication agent on VMs. C. Perform initial replication and launch test instances. D. Stop VM operations and launch cutover instances. E. Use App2Container (A2C) for data collection. F. Use AWS DMS to migrate VMs.

A private application uses Amazon Cognito user pools for authentication. The app must now securely store user documents in S3. Which steps securely integrate S3? (Choose two.). A. Create an Amazon Cognito identity pool to generate temporary S3 access tokens after login. B. Use the existing user pool to generate tokens directly. C. Create an S3 VPC endpoint in the same VPC. D. Create a NAT gateway and deny non-Cognito requests via bucket policy. E. Attach a bucket policy allowing only specific IP addresses.

A three-tier web application processes customer orders. The web tier (EC2 + ALB) communicates with the processing tier via Amazon SQS, and the storage tier uses DynamoDB. During peak times, EC2 CPU usage hits 100%, and the SQS queue fills up. Which solution improves performance?. A. Schedule Auto Scaling based on CPU utilization. B. Add ElastiCache for Redis in front of DynamoDB. C. Add CloudFront caching to the web tier. D. Use Auto Scaling target tracking with SQS ApproximateNumberOfMessages to scale the processing tier.
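
A target tracking policy keyed to the queue depth (option D) could be sketched as below. In practice AWS recommends a backlog-per-instance custom metric, but the raw queue metric shown here keeps the example short; the group, policy, and queue names are placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

# Scale the processing tier so the visible-message backlog stays near the target.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="order-processing-asg",  # placeholder name
    PolicyName="sqs-backlog-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "Namespace": "AWS/SQS",
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Dimensions": [{"Name": "QueueName", "Value": "order-processing-queue"}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,  # desired backlog; tune per workload
    },
)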

A company runs EC2 On-Demand Instances Monday–Saturday and only 12 hours on Sunday. Instances cannot tolerate interruptions, and cost optimization is required. Which solution is MOST cost-effective?. A. Use Scheduled Reserved Instances for Sunday workloads and Standard Reserved Instances for Mon–Sat. B. Use Convertible RIs for Sunday and Standard RIs for Mon–Sat. C. Use Spot for Sunday and Standard RIs for Mon–Sat. D. Use Spot for Sunday and Convertible RIs for Mon–Sat.

A digital image processing company wants to migrate its on-premises monolithic application to AWS. It processes thousands of images and generates large files, needing less manual work and no infrastructure management. Which solution meets these requirements with the least operational overhead?. A. ECS on EC2 Spot with SQS orchestration, store files in EFS. B. AWS Batch with Step Functions orchestration, store files in S3. C. Lambda + EC2 Spot to process, store files in FSx. D. EC2 instances with Step Functions, store files in EBS.

An image-hosting site in S3 faces latency when users upload/download images globally. Which solution improves performance with the least effort?. A. Configure CloudFront for S3 to improve download speed and enable S3 Transfer Acceleration for uploads. B. Migrate to EC2s across Regions with Global Accelerator and ALB. C. Use CloudFront for download and multi-Region S3 buckets with replication for upload. D. Use Global Accelerator for the S3 bucket.
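
The upload half of option A, S3 Transfer Acceleration, is enabled per bucket; the CloudFront distribution for downloads is omitted here and the bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")

# Turn on Transfer Acceleration; clients then upload via the
# <bucket>.s3-accelerate.amazonaws.com endpoint over edge locations.
s3.put_bucket_accelerate_configuration(
    Bucket="example-image-bucket",  # placeholder name
    AccelerateConfiguration={"Status": "Enabled"},
)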

A company runs an application in a private subnet behind an ALB in a VPC with a NAT gateway and internet gateway. The app calls the Amazon S3 API to store objects. Security policy prohibits traffic from traversing the internet. Which solution meets these requirements most cost-effectively?. A. Configure an S3 interface endpoint and allow outbound traffic. B. Configure an S3 gateway endpoint and update the route table. C. Configure an S3 bucket policy to allow traffic from the NAT gateway’s Elastic IP. D. Create a second NAT gateway for the subnet.

A company runs an app on Amazon EKS. The UI pods use DynamoDB; data service pods use S3. Each must only access its respective service. Which solution meets these requirements?. A. Attach both IAM policies to the EC2 instance profile and control access via RBAC. B. Attach IAM policies directly to the pods. C. Create separate Kubernetes service accounts with AmazonS3FullAccess and AmazonDynamoDBFullAccess. D. Use IAM Roles for Service Accounts (IRSA) — one for UI (DynamoDB) and one for data services (S3).

A globally distributed dev team needs secure access to AWS resources using the company’s on-premises Active Directory. The company uses AWS Organizations for multiple accounts. Which solution meets these requirements with the least operational overhead?. A. Use AWS Directory Service (Managed AD) with a trust to on-prem AD. B. Create IAM users for each developer with MFA. C. Use AD Connector integrated with AWS IAM Identity Center, with permission sets for each AD group. D. Use Amazon Cognito with federation to on-prem AD.

An AWS-hosted application uses Amazon API Gateway to publish an HTTP API with critical data. Access must be restricted to specific trusted IP ranges. Which solution meets this requirement?. A. Use a private integration in API Gateway for restricted IPs. B. Create a resource policy for API Gateway denying all IPs except the allowed ones. C. Deploy API in a private subnet and control via NACL. D. Modify API Gateway’s security group for trusted IPs.
