Choosing the Right Path for Your SAA-C03 Exam Preparation
Welcome to PassExamHub's comprehensive study guide for the AWS Certified Solutions Architect - Associate (SAA-C03) exam. Our SAA-C03 dumps are designed to equip you with the knowledge and resources you need to confidently prepare for and succeed in the SAA-C03 certification exam.
What Our Amazon SAA-C03 Study Material Offers
PassExamHub's SAA-C03 dumps PDF is carefully crafted to provide you with a comprehensive and effective learning experience. Our study material includes:
In-depth Content: Our study guide covers all the key concepts, topics, and skills you need to master for the SAA-C03 exam. Each topic is explained in a clear and concise manner, making it easy to understand even the most complex concepts.
Online Test Engine: Test your knowledge and build your confidence with a wide range of practice questions that simulate the actual exam format. Our test engine covers every exam objective and provides detailed explanations for both correct and incorrect answers.
Exam Strategies: Get valuable insights into exam-taking strategies, time management, and how to approach different types of questions.
Real-world Scenarios: Gain practical insights into applying your knowledge in real-world scenarios, ensuring you're well-prepared to tackle challenges in your professional career.
Why Choose PassExamHub?
Expertise: Our SAA-C03 exam questions answers are developed by experienced Amazon certified professionals who have a deep understanding of the exam objectives and industry best practices.
Comprehensive Coverage: We leave no stone unturned in covering every topic and skill that could appear on the SAA-C03 exam, ensuring you're fully prepared.
Engaging Learning: Our content is presented in a user-friendly and engaging format, making your study sessions enjoyable and effective.
Proven Success: Countless students have used our study materials to achieve their SAA-C03 certifications and advance their careers.
Start Your Journey Today!
Embark on your journey to AWS Certified Solutions Architect - Associate (SAA-C03) success with PassExamHub. Our study material is your trusted companion in preparing for the SAA-C03 exam and unlocking exciting career opportunities.
Amazon SAA-C03 Sample Question Answers
Question # 1
A company is developing a mobile game that streams score updates to a backend processor and then posts results on a leaderboard. A solutions architect needs to design a solution that can handle large traffic spikes, process the mobile game updates in order of receipt, and store the processed updates in a highly available database. The company also wants to minimize the management overhead required to maintain the solution.
What should the solutions architect do to meet these requirements?
A. Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams with AWS Lambda. Store the processed updates in Amazon DynamoDB.
B. Push score updates to Amazon Kinesis Data Streams. Process the updates with a fleet of Amazon EC2 instances set up for Auto Scaling. Store the processed updates in Amazon Redshift.
C. Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS Lambda function to the SNS topic to process the updates. Store the processed updates in a SQL database running on Amazon EC2.
D. Push score updates to an Amazon Simple Queue Service (Amazon SQS) queue. Use a fleet of Amazon EC2 instances with Auto Scaling to process the updates in the SQS queue. Store the processed updates in an Amazon RDS Multi-AZ DB instance.
Answer: A
Explanation: Amazon Kinesis Data Streams is a scalable and reliable service that can
ingest, buffer, and process streaming data in real-time. It can handle large traffic spikes
and preserve the order of the incoming data records. AWS Lambda is a serverless
compute service that can process the data streams from Kinesis Data Streams without
requiring any infrastructure management. It can also scale automatically to match the
throughput of the data stream. Amazon DynamoDB is a fully managed, highly available,
and fast NoSQL database that can store the processed updates from Lambda. It can also
handle high write throughput and provide consistent performance. By using these services,
the solutions architect can design a solution that meets the requirements of the company
with the least operational overhead.
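The Kinesis-to-Lambda-to-DynamoDB flow in option A can be sketched as a minimal Lambda handler. Everything specific here is illustrative, not from the exam scenario: the `player_id`/`score` fields, the injectable `table` parameter (added so the handler can be exercised locally without AWS), and the fake Kinesis event are all assumptions.

```python
import base64
import json

def handler(event, context=None, table=None):
    """Process Kinesis records in order of receipt and persist each
    score update. The DynamoDB table object is injected so the handler
    can be run locally; in Lambda it would come from boto3."""
    processed = []
    for record in event["Records"]:
        # Kinesis delivers each record's payload base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        item = {"player_id": payload["player_id"], "score": payload["score"]}
        if table is not None:
            table.put_item(Item=item)  # DynamoDB write
        processed.append(item)
    return processed

# A fabricated Kinesis event for local demonstration.
_data = base64.b64encode(
    json.dumps({"player_id": "p1", "score": 42}).encode()
).decode()
event = {"Records": [{"kinesis": {"data": _data}}]}
result = handler(event)
```

Because Kinesis invokes Lambda with batches per shard, ordering within a shard (for example, per player) is preserved automatically.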
Question # 2
A company runs an SMB file server in its data center. The file server stores large files that the company frequently accesses for up to 7 days after the file creation date. After 7 days, the company needs to be able to access the files with a maximum retrieval time of 24 hours.
Which solution will meet these requirements?
A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 File Gateway to increase the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
C. Create an Amazon FSx File Gateway to increase the company's storage space. Create an Amazon S3 Lifecycle policy to transition the data after 7 days.
D. Configure access to Amazon S3 for each user. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.
Answer: B
Explanation:
Amazon S3 File Gateway is a service that provides a file-based interface to Amazon S3,
which appears as a network file share. It enables you to store and retrieve Amazon S3
objects through standard file storage protocols such as SMB. S3 File Gateway can also
cache frequently accessed data locally for low-latency access. S3 Lifecycle policy is a
feature that allows you to define rules that automate the management of your objects
throughout their lifecycle. You can use S3 Lifecycle policy to transition objects to different
storage classes based on their age and access patterns. S3 Glacier Deep Archive is a
storage class that offers the lowest cost for long-term data archiving, with a retrieval time of
12 hours or 48 hours. This solution will meet the requirements, as it allows the company to
store large files in S3 with SMB file access, and to move the files to S3 Glacier Deep
Archive after 7 days for cost savings and compliance.
References:
Amazon S3 File Gateway overview and benefits
Managing object storage lifecycle with S3 Lifecycle policies
S3 Glacier Deep Archive storage class features and use cases
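The lifecycle rule from option B can be expressed as the configuration dictionary that boto3's `put_bucket_lifecycle_configuration` accepts. This is a minimal sketch: the rule ID and the empty prefix (meaning "all objects") are assumptions.

```python
# Lifecycle configuration in the shape boto3's
# put_bucket_lifecycle_configuration expects.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-after-7-days",   # illustrative rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},       # apply to every object
            "Transitions": [
                # After 7 days, move objects to Glacier Deep Archive,
                # whose standard retrieval (within 12 hours) satisfies
                # the 24-hour requirement.
                {"Days": 7, "StorageClass": "DEEP_ARCHIVE"}
            ],
        }
    ]
}
```

Because the S3 File Gateway stores files as ordinary S3 objects, this rule applies to the gateway's bucket with no further changes.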
Question # 3
A company has an organization in AWS Organizations that has all features enabled. The company requires that all API calls and logins in any existing or new AWS account must be audited. The company needs a managed solution to prevent additional work and to minimize costs. The company also needs to know when any AWS account is not compliant with the AWS Foundational Security Best Practices (FSBP) standard.
Which solution will meet these requirements with the LEAST operational overhead?
A. Deploy an AWS Control Tower environment in the Organizations management account. Enable AWS Security Hub and AWS Control Tower Account Factory in the environment.
B. Deploy an AWS Control Tower environment in a dedicated Organizations member account. Enable AWS Security Hub and AWS Control Tower Account Factory in the environment.
C. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC to self-service provision Amazon GuardDuty in the MALZ.
D. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC to self-service provision AWS Security Hub in the MALZ.
Answer: A
Explanation: AWS Control Tower is a fully managed service that simplifies the setup and
governance of a secure, compliant, multi-account AWS environment. It establishes a
landing zone that is based on best-practices blueprints, and it enables governance using
controls you can choose from a pre-packaged list. The landing zone is a well-architected,
multi-account baseline that follows AWS best practices. Controls implement governance
rules for security, compliance, and operations. AWS Security Hub is a service that provides
a comprehensive view of your security posture across your AWS accounts. It aggregates,
organizes, and prioritizes security alerts and findings from multiple AWS services, such as
IAM Access Analyzer, as well as from AWS Partner solutions. AWS Security Hub
continuously monitors your environment using automated compliance checks based on the
AWS best practices and industry standards, such as the AWS Foundational Security Best
Practices (FSBP) standard. AWS Control Tower Account Factory is a feature that
automates the provisioning of new AWS accounts that are preconfigured to meet your
business, security, and compliance requirements. By deploying an AWS Control Tower
environment in the Organizations management account, you can leverage the existing
organization structure and policies, and enable AWS Security Hub and AWS Control Tower
Account Factory in the environment. This way, you can audit all API calls and logins in any
existing or new AWS account, monitor the compliance status of each account with the FSBP standard, and provision new accounts with ease and consistency. This solution
meets the requirements with the least operational overhead, as you do not need to manage
any infrastructure, perform any data migration, or submit any requests for changes.
References:
AWS Control Tower
AWS Security Hub
AWS Control Tower Account Factory
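Enabling the FSBP standard in Security Hub comes down to a `BatchEnableStandards` call that names the standard's ARN. The sketch below builds that request payload; the region is a placeholder assumption, and the ARN format shown is the one Security Hub publishes for FSBP v1.0.0.

```python
# Placeholder region; in practice this matches the Security Hub region.
region = "us-east-1"

# ARN of the AWS Foundational Security Best Practices standard.
fsbp_arn = (
    "arn:aws:securityhub:" + region +
    "::standards/aws-foundational-security-best-practices/v/1.0.0"
)

# Payload in the shape boto3's securityhub.batch_enable_standards expects.
enable_request = {
    "StandardsSubscriptionRequests": [{"StandardsArn": fsbp_arn}]
}
```

With Control Tower managing the organization, this enablement can be applied across accounts so each new Account Factory account is checked against FSBP automatically.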
Question # 4
A solutions architect is designing a user authentication solution for a company. The solution must invoke two-factor authentication for users that log in from inconsistent geographic locations, IP addresses, or devices. The solution must also be able to scale up to accommodate millions of users.
Which solution will meet these requirements?
A. Configure Amazon Cognito user pools for user authentication. Enable the risk-based adaptive authentication feature with multi-factor authentication (MFA).
B. Configure Amazon Cognito identity pools for user authentication. Enable multi-factor authentication (MFA).
C. Configure AWS Identity and Access Management (IAM) users for user authentication. Attach an IAM policy that allows the AllowManageOwnUserMFA action.
D. Configure AWS IAM Identity Center (AWS Single Sign-On) authentication for user authentication. Configure the permission sets to require multi-factor authentication (MFA).
Answer: A
Explanation: Amazon Cognito user pools provide a secure and scalable user directory for
user authentication and management. User pools support various authentication methods,
such as username and password, email and password, phone number and password, and
social identity providers. User pools also support multi-factor authentication (MFA), which
adds an extra layer of security by requiring users to provide a verification code or a
biometric factor in addition to their credentials. User pools can also enable risk-based
adaptive authentication, which dynamically adjusts the authentication challenge based on
the risk level of the sign-in attempt. For example, if a user tries to sign in from an unfamiliar
device or location, the user pool can require a stronger authentication factor, such as SMS
or email verification code. This feature helps to protect user accounts from unauthorized
access and reduce the friction for legitimate users. User pools can scale up to millions of
users and integrate with other AWS services, such as Amazon SNS, Amazon SES, AWS
Lambda, and AWS KMS.
Amazon Cognito identity pools provide a way to federate identities from multiple identity
providers, such as user pools, social identity providers, and corporate identity providers.
Identity pools allow users to access AWS resources with temporary, limited-privilege
credentials. Identity pools do not provide user authentication or management features,
such as MFA or adaptive authentication. Therefore, option B is not correct.
AWS Identity and Access Management (IAM) is a service that helps to manage access to
AWS resources. IAM users are entities that represent people or applications that need to
interact with AWS. IAM users can be authenticated with a password or an access key. IAM
users can also enable MFA for their own accounts, by using the
AllowManageOwnUserMFA action in an IAM policy. However, IAM users are not suitable
for user authentication for web or mobile applications, as they are intended for
administrative purposes. IAM users also do not support adaptive authentication based on
risk factors. Therefore, option C is not correct.
AWS IAM Identity Center (AWS Single Sign-On) is a service that enables users to sign in
to multiple AWS accounts and applications with a single set of credentials. AWS SSO
supports various identity sources, such as AWS SSO directory, AWS Managed Microsoft
AD, and external identity providers. AWS SSO also supports MFA for user authentication,
which can be configured in the permission sets that define the level of access for each
user. However, AWS SSO does not support adaptive authentication based on risk factors.
Therefore, option D is not correct.
References:
Amazon Cognito User Pools
Adding Multi-Factor Authentication (MFA) to a User Pool
Risk-Based Adaptive Authentication
Amazon Cognito Identity Pools
IAM Users
Enabling MFA Devices
AWS Single Sign-On
How AWS SSO Works
Question # 5
A solutions architect needs to design the architecture for an application that a vendor provides as a Docker container image. The container needs 50 GB of storage available for temporary files. The infrastructure must be serverless.
Which solution meets these requirements with the LEAST operational overhead?
A. Create an AWS Lambda function that uses the Docker container image with an Amazon S3 mounted volume that has more than 50 GB of space.
B. Create an AWS Lambda function that uses the Docker container image with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume. Create a service with that task definition.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the Amazon EC2 launch type with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space. Create a task definition for the container image. Create a service with that task definition.
Answer: C
Explanation:
The AWS Fargate launch type is a serverless way to run containers on Amazon ECS,
without having to manage any underlying infrastructure. You only pay for the resources
required to run your containers, and AWS handles the provisioning, scaling, and security of
the cluster. Amazon EFS is a fully managed, elastic, and scalable file system that can be
mounted to multiple containers, and provides high availability and durability. By using AWS
Fargate and Amazon EFS, you can run your Docker container image with 50 GB of storage available for temporary files, with the least operational overhead. This solution meets the
requirements of the question.
References:
AWS Fargate
Amazon Elastic File System
Using Amazon EFS file systems with Amazon ECS
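The Fargate-plus-EFS design from option C maps to an ECS task definition with an `efsVolumeConfiguration` volume mounted into the container. The sketch below shows the payload `register_task_definition` expects; the family name, image URI, file system ID, mount path, and CPU/memory sizing are all placeholder assumptions.

```python
# Task definition in the shape boto3's ecs.register_task_definition expects.
task_definition = {
    "family": "vendor-app",                 # placeholder task family
    "requiresCompatibilities": ["FARGATE"], # serverless launch type
    "networkMode": "awsvpc",                # required for Fargate
    "cpu": "1024",
    "memory": "2048",
    "containerDefinitions": [
        {
            "name": "vendor-container",
            # Placeholder ECR image URI for the vendor's container.
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/vendor:latest",
            "mountPoints": [
                # Temporary files land on the EFS mount, which is elastic
                # and easily accommodates the 50 GB requirement.
                {"sourceVolume": "scratch", "containerPath": "/tmp/scratch"}
            ],
        }
    ],
    "volumes": [
        {
            "name": "scratch",
            "efsVolumeConfiguration": {"fileSystemId": "fs-12345678"},
        }
    ],
}
```

Because EFS grows automatically, no capacity has to be pre-provisioned, which is what keeps operational overhead lower than the EBS-backed options.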
Question # 6
A company uses AWS Organizations to run workloads within multiple AWS accounts. A tagging policy adds department tags to AWS resources when the company creates tags. An accounting team needs to determine spending on Amazon EC2 consumption. The accounting team must determine which departments are responsible for the costs regardless of AWS account. The accounting team has access to AWS Cost Explorer for all AWS accounts within the organization and needs to access all reports from Cost Explorer.
Which solution meets these requirements in the MOST operationally efficient way?
A. From the Organizations management account billing console, activate a user-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
B. From the Organizations management account billing console, activate an AWS-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
C. From the Organizations member account billing console, activate a user-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by the tag name, and filter by EC2.
D. From the Organizations member account billing console, activate an AWS-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
Answer: B
Explanation: This solution meets the following requirements:
It is operationally efficient, as it only requires one activation of the cost allocation
tag and one creation of the cost report from the management account, which has
access to all the member accounts’ data and billing preferences.
It is consistent, as it uses the AWS-defined cost allocation tag named department,
which is automatically applied to resources when the company creates tags using
the tagging policy enforced by AWS Organizations. This ensures that the tag name
and value are the same across all the resources and accounts, and avoids any
discrepancies or errors that might arise from user-defined tags.
It is informative, as it creates one cost report in Cost Explorer grouping by the tag
name, and filters by EC2. This allows the accounting team to see the breakdown
of EC2 consumption and costs by department, regardless of the AWS account.
The team can also use other features of Cost Explorer, such as charts, filters, and
forecasts, to analyze and optimize the spending.
References:
Using AWS cost allocation tags - AWS Billing
User-defined cost allocation tags - AWS Billing
Cost Tagging and Reporting with AWS Organizations
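The report described here corresponds to a Cost Explorer `GetCostAndUsage` query that groups by the department tag and filters to EC2. The sketch builds that request payload; the time period is a placeholder, and the service dimension value shown is the one Cost Explorer uses for EC2 compute.

```python
# Query in the shape boto3's ce.get_cost_and_usage expects.
cost_request = {
    "TimePeriod": {"Start": "2024-01-01", "End": "2024-02-01"},  # placeholder
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
    # Group costs by the activated 'department' cost allocation tag,
    # so spend rolls up per department regardless of AWS account.
    "GroupBy": [{"Type": "TAG", "Key": "department"}],
    # Restrict the report to EC2 compute consumption.
    "Filter": {
        "Dimensions": {
            "Key": "SERVICE",
            "Values": ["Amazon Elastic Compute Cloud - Compute"],
        }
    },
}
```

Run from the management account, this single query spans every member account's usage, which is what makes the one-report approach operationally efficient.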
Question # 7
A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its workloads. All secrets that are stored in Amazon EKS must be encrypted in the Kubernetes etcd key-value store.
Which solution will meet these requirements?
A. Create a new AWS Key Management Service (AWS KMS) key. Use AWS Secrets Manager to manage, rotate, and store all secrets in Amazon EKS.
B. Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption on the Amazon EKS cluster.
C. Create the Amazon EKS cluster with default options. Use the Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver as an add-on.
D. Create a new AWS Key Management Service (AWS KMS) key with the alias aws/ebs. Enable default Amazon Elastic Block Store (Amazon EBS) volume encryption for the account.
Answer: B
Explanation: This option is the most secure and simple way to encrypt the secrets that are
stored in Amazon EKS. AWS Key Management Service (AWS KMS) is a service that
allows you to create and manage encryption keys that can be used to encrypt your data.
Amazon EKS KMS secrets encryption is a feature that enables you to use a KMS key to
encrypt the secrets that are stored in the Kubernetes etcd key-value store. This provides an
additional layer of protection for your sensitive data, such as passwords, tokens, and keys.
You can create a new KMS key or use an existing one, and then enable the Amazon EKS
KMS secrets encryption on the Amazon EKS cluster. You can also use IAM policies to
control who can access or use the KMS key.
Option A is not correct because using AWS Secrets Manager to manage, rotate, and store
all secrets in Amazon EKS is not necessary or efficient. AWS Secrets Manager is a service
that helps you securely store, retrieve, and rotate your secrets, such as database
credentials, API keys, and passwords. You can use it to manage secrets that are used by
your applications or services outside of Amazon EKS, but it is not designed to encrypt the
secrets that are stored in the Kubernetes etcd key-value store. Moreover, using AWS
Secrets Manager would incur additional costs and complexity, and it would not leverage the
native Amazon EKS KMS secrets encryption feature.
Option C is not correct because using the Amazon EBS Container Storage Interface (CSI)
driver as an add-on does not encrypt the secrets that are stored in Amazon EKS. The
Amazon EBS CSI driver is a plugin that allows you to use Amazon EBS volumes as
persistent storage for your Kubernetes pods. It is useful for providing durable and scalable
storage for your applications, but it does not affect the encryption of the secrets that are
stored in the Kubernetes etcd key-value store. Moreover, using the Amazon EBS CSI
driver would require additional configuration and resources, and it would not provide the
same level of security as using a KMS key.
Option D is not correct because creating a new AWS KMS key with the alias aws/ebs and
enabling default Amazon EBS volume encryption for the account does not encrypt the
secrets that are stored in Amazon EKS. The alias aws/ebs is a reserved alias that is used
by AWS to create a default KMS key for your account. This key is used to encrypt the
Amazon EBS volumes that are created in your account, unless you specify a different KMS
key. Enabling default Amazon EBS volume encryption for the account is a setting that ensures that all new Amazon EBS volumes are encrypted by default. However, these
features do not affect the encryption of the secrets that are stored in the Kubernetes etcd
key-value store. Moreover, using the default KMS key or the default encryption setting
would not provide the same level of control and security as using a custom KMS key and
enabling the Amazon EKS KMS secrets encryption feature. References:
Encrypting secrets used in Amazon EKS
What Is AWS Key Management Service?
What Is AWS Secrets Manager?
Amazon EBS CSI driver
Encryption at rest
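Option B corresponds to passing an `encryptionConfig` when the cluster is created. The sketch below shows that payload in the shape `eks.create_cluster` expects; the cluster name, role ARN, subnet IDs, and KMS key ARN are placeholder assumptions.

```python
# Request in the shape boto3's eks.create_cluster expects.
create_cluster_request = {
    "name": "workload-cluster",  # placeholder cluster name
    "roleArn": "arn:aws:iam::123456789012:role/eks-cluster-role",  # placeholder
    "resourcesVpcConfig": {
        "subnetIds": ["subnet-aaa111", "subnet-bbb222"]  # placeholders
    },
    "encryptionConfig": [
        {
            # "secrets" tells EKS to envelope-encrypt Kubernetes Secret
            # objects in etcd with the given KMS key.
            "resources": ["secrets"],
            "provider": {
                "keyArn": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE"
            },
        }
    ],
}
```

After this is enabled, every Kubernetes Secret written to etcd is encrypted with a data key that is itself protected by the customer-managed KMS key.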
Question # 8
A retail company has several businesses. The IT team for each business manages its own AWS account. Each team account is part of an organization in AWS Organizations. Each team monitors its product inventory levels in an Amazon DynamoDB table in the team's own AWS account.
The company is deploying a central inventory reporting application into a shared AWS account. The application must be able to read items from all the teams' DynamoDB tables.
Which authentication option will meet these requirements MOST securely?
A. Integrate DynamoDB with AWS Secrets Manager in the inventory application account. Configure the application to use the correct secret from Secrets Manager to authenticate and read the DynamoDB table. Schedule secret rotation for every 30 days.
B. In every business account, create an IAM user that has programmatic access. Configure the application to use the correct IAM user access key ID and secret access key to authenticate and read the DynamoDB table. Manually rotate IAM access keys every 30 days.
C. In every business account, create an IAM role named BU_ROLE with a policy that gives the role access to the DynamoDB table and a trust policy to trust a specific role in the inventory application account. In the inventory account, create a role named APP_ROLE that allows access to the STS AssumeRole API operation. Configure the application to use APP_ROLE and assume the cross-account role BU_ROLE to read the DynamoDB table.
D. Integrate DynamoDB with AWS Certificate Manager (ACM). Generate identity certificates to authenticate DynamoDB. Configure the application to use the correct certificate to authenticate and read the DynamoDB table.
Answer: C
Explanation: This solution meets the requirements most securely because it uses IAM
roles and the STS AssumeRole API operation to authenticate and authorize the inventory
application to access the DynamoDB tables in different accounts. IAM roles are more
secure than IAM users or certificates because they do not require long-term credentials or
passwords. Instead, IAM roles provide temporary security credentials that are automatically
rotated and can be configured with a limited duration. The STS AssumeRole API operation
enables you to request temporary credentials for a role that you are allowed to assume. By
using this operation, you can delegate access to resources that are in different AWS
accounts that you own or that are owned by third parties. The trust policy of the role defines
which entities can assume the role, and the permissions policy of the role defines which
actions can be performed on the resources. By using this solution, you can avoid hardcoding
credentials or certificates in the inventory application, and you can also avoid
storing them in Secrets Manager or ACM. You can also leverage the built-in security
features of IAM and STS, such as MFA, access logging, and policy conditions.
References: IAM Roles
STS AssumeRole
Tutorial: Delegate Access Across AWS Accounts Using IAM Roles
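The cross-account pattern in option C has two moving parts: BU_ROLE's trust policy naming APP_ROLE as a trusted principal, and the `AssumeRole` call the application makes. Both are sketched below; the account IDs are placeholders (111111111111 for the inventory account, 222222222222 for a business account).

```python
# Trust policy attached to BU_ROLE in each business account.
# It allows only the inventory account's APP_ROLE to assume BU_ROLE.
bu_role_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                # Placeholder ARN of APP_ROLE in the inventory account.
                "AWS": "arn:aws:iam::111111111111:role/APP_ROLE"
            },
            "Action": "sts:AssumeRole",
        }
    ],
}

# Request in the shape boto3's sts.assume_role expects; the application
# (running as APP_ROLE) calls this per business account to obtain
# temporary credentials scoped to that account's DynamoDB table.
assume_role_request = {
    "RoleArn": "arn:aws:iam::222222222222:role/BU_ROLE",  # placeholder
    "RoleSessionName": "inventory-report",
    "DurationSeconds": 900,  # short-lived credentials
}
```

The returned temporary credentials expire automatically, so nothing long-lived ever has to be stored or rotated by the application.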
Question # 9
A company built an application with Docker containers and needs to run the application in the AWS Cloud. The company wants to use a managed service to host the application. The solution must scale in and out appropriately according to demand on the individual container services. The solution also must not result in additional operational overhead or infrastructure to manage.
Which solutions will meet these requirements? (Select TWO.)
A. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.
C. Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers.
D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes.
Answer: A,B
Explanation: These options are the best solutions because they allow the company to run
the application with Docker containers in the AWS Cloud using a managed service that
scales automatically and does not require any infrastructure to manage. By using AWS
Fargate, the company can launch and run containers without having to provision, configure,
or scale clusters of EC2 instances. Fargate allocates the right amount of compute
resources for each container and scales them up or down as needed. By using Amazon
ECS or Amazon EKS, the company can choose the container orchestration platform that
suits its needs. Amazon ECS is a fully managed service that integrates with other AWS
services and simplifies the deployment and management of containers. Amazon EKS is a
managed service that runs Kubernetes on AWS and provides compatibility with existing
Kubernetes tools and plugins.
C. Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the
containers. This option is not feasible because AWS Lambda does not run arbitrary
Docker containers. Lambda functions execute in a sandboxed environment, and container
images deployed to Lambda must implement the Lambda Runtime API. To run the vendor-style
Docker containers on Lambda, the company would need a custom runtime or a wrapper that
emulates the expected interface, which introduces additional complexity and overhead.
D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
This option is not optimal because it requires the company to manage the EC2 instances
that host the containers. The company would need to provision, configure, scale, patch,
and monitor the EC2 instances, which can increase the operational overhead and
infrastructure costs.
E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker
nodes. This option is not ideal because it requires the company to manage the EC2
instances that host the containers. The company would need to provision, configure, scale,
patch, and monitor the EC2 instances, which can increase the operational overhead and
infrastructure costs.
References:
AWS Fargate - Amazon Web Services
Amazon Elastic Container Service - Amazon Web Services
Amazon Elastic Kubernetes Service - Amazon Web Services
AWS Lambda FAQs - Amazon Web Services
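The "scale in and out according to demand on the individual container services" requirement is typically met on ECS/Fargate with Application Auto Scaling target tracking. The sketch below shows the two request payloads involved; the cluster/service names, capacity bounds, and the 60% CPU target are illustrative assumptions.

```python
# Payload for application-autoscaling register_scalable_target:
# declares which ECS service dimension can scale, and its bounds.
scalable_target = {
    "ServiceNamespace": "ecs",
    "ResourceId": "service/app-cluster/app-service",  # placeholder names
    "ScalableDimension": "ecs:service:DesiredCount",
    "MinCapacity": 1,
    "MaxCapacity": 10,
}

# Payload for application-autoscaling put_scaling_policy:
# target tracking keeps average CPU near the target by adding or
# removing Fargate tasks automatically.
scaling_policy = {
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 60.0,  # aim for ~60% average CPU utilization
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
}
```

Each container service in the application can get its own scalable target and policy, which is how demand on individual services drives independent scaling.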
Question # 10
A company uses Amazon S3 as its data lake. The company has a new partner that must use SFTP to upload data files. A solutions architect needs to implement a highly available SFTP solution that minimizes operational overhead.
Which solution will meet these requirements?
A. Use AWS Transfer Family to configure an SFTP-enabled server with a publicly accessible endpoint. Choose the S3 data lake as the destination.
B. Create an Amazon S3 File Gateway to increase the company's storage space. Share the S3 File Gateway endpoint URL with the new partner.
C. Launch an Amazon EC2 instance in a private subnet in a VPC. Instruct the new partner to upload files to the EC2 instance by using a VPN. Run a cron job script on the EC2 instance to upload files to the S3 data lake.
D. Launch Amazon EC2 instances in a private subnet in a VPC. Place a Network Load Balancer (NLB) in front of the EC2 instances. Create an SFTP listener port for the NLB. Share the NLB hostname with the new partner. Run a cron job script on the EC2 instances to upload files to the S3 data lake.
Answer: A
Explanation: This option is the most cost-effective and simple way to enable SFTP access
to the S3 data lake. AWS Transfer Family is a fully managed service that supports secure
file transfers over SFTP, FTPS, and FTP protocols. You can create an SFTP-enabled
server with a public endpoint and associate it with your S3 bucket. You can also use AWS
Identity and Access Management (IAM) roles and policies to control access to your S3 data
lake. The service scales automatically to handle any volume of file transfers and provides
high availability and durability. You do not need to provision, manage, or patch any servers
or load balancers.
Option B is not correct because Amazon S3 File Gateway is not an SFTP server. It is a
hybrid cloud storage service that provides a local file system interface to S3. You can use it
to store and retrieve files as objects in S3 using standard file protocols such as NFS and
SMB. However, it does not support SFTP protocol, and it requires deploying a file gateway
appliance on-premises or on EC2.
Option C is not cost-effective or scalable because it requires launching and managing an
EC2 instance in a private subnet and setting up a VPN connection for the new partner. This
would incur additional costs for the EC2 instance, the VPN connection, and the data
transfer. It would also introduce complexity and security risks to the solution. Moreover, it
would require running a cron job script on the EC2 instance to upload files to the S3 data
lake, which is not efficient or reliable.
Option D is not cost-effective or scalable because it requires launching and managing
multiple EC2 instances in a private subnet and placing a NLB in front of them. This would
incur additional costs for the EC2 instances, the NLB, and the data transfer. It would also
introduce complexity and security risks to the solution. Moreover, it would require running a
cron job script on the EC2 instances to upload files to the S3 data lake, which is not
efficient or reliable. References:
What Is AWS Transfer Family?
What Is Amazon S3 File Gateway?
What Is Amazon EC2?
What Is Amazon Virtual Private Cloud?
What Is a Network Load Balancer?
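The Transfer Family setup from option A reduces to two API calls: create the SFTP server, then create a service-managed user mapped to an S3 home directory. The sketch shows both payloads; the server ID, role ARN, bucket path, and user name are placeholder assumptions.

```python
# Request in the shape boto3's transfer.create_server expects.
create_server_request = {
    "Protocols": ["SFTP"],
    "Domain": "S3",                        # store uploads as S3 objects
    "EndpointType": "PUBLIC",              # publicly accessible endpoint
    "IdentityProviderType": "SERVICE_MANAGED",  # users managed by the service
}

# Request in the shape boto3's transfer.create_user expects.
create_user_request = {
    "ServerId": "s-1234567890abcdef0",     # placeholder server ID
    "UserName": "partner",
    # Placeholder IAM role granting the user access to the data lake bucket.
    "Role": "arn:aws:iam::123456789012:role/transfer-s3-access",
    # Placeholder bucket/prefix where the partner's uploads land.
    "HomeDirectory": "/data-lake-bucket/partner-uploads",
}
```

Because the endpoint is fully managed and multi-AZ, high availability comes without any servers or load balancers to operate.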
Question # 11
A company hosts an application used to upload files to an Amazon S3 bucket. Once uploaded, the files are processed to extract metadata, which takes less than 5 seconds. The volume and frequency of the uploads varies from a few files each hour to hundreds of concurrent uploads. The company has asked a solutions architect to design a cost-effective architecture that will meet these requirements.
What should the solutions architect recommend?
A. Configure AWS CloudTrail trails to log S3 API calls. Use AWS AppSync to process the files.
B. Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the files.
C. Configure Amazon Kinesis Data Streams to process and send data to Amazon S3. Invoke an AWS Lambda function to process the files.
D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to process the files uploaded to Amazon S3. Invoke an AWS Lambda function to process the files.
Answer: B
Explanation: This option is the most cost-effective and scalable way to process the files
uploaded to S3. AWS CloudTrail is used to log API calls, not to trigger actions based on
them. AWS AppSync is a service for building GraphQL APIs, not for processing files.
Amazon Kinesis Data Streams is used to ingest and process streaming data, not to send
data to S3. Amazon SNS is a pub/sub service that can be used to notify subscribers of
events, not to process files.
References:
Using AWS Lambda with Amazon S3
AWS CloudTrail FAQs
What Is AWS AppSync?
What Is Amazon Kinesis Data Streams?
What Is Amazon Simple Notification Service?
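The event-driven flow in option B needs no polling: S3 invokes the Lambda function directly with the object details. Below is a minimal sketch of such a handler; `handler` is the conventional Lambda entry point, and the actual metadata-extraction step is omitted since the source does not describe it.

```python
import urllib.parse

def handler(event, context):
    """Handle S3 object-created event notifications.

    Returns the (bucket, key) pairs referenced by the event so a
    metadata-extraction step (not shown) could fetch and process them.
    """
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        results.append((bucket, key))
    return results
```

Because S3 invokes one function execution per event, concurrency scales automatically from a few files per hour to hundreds of concurrent uploads with no servers to manage.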
Question # 12
A company runs analytics software on Amazon EC2 instances. The software accepts job requests from users to process data that has been uploaded to Amazon S3. Users report that some submitted data is not being processed. Amazon CloudWatch reveals that the EC2 instances have a consistent CPU utilization at or near 100%. The company wants to improve system performance and scale the system based on user load.
What should a solutions architect do to meet these requirements?
A. Create a copy of the instance. Place all instances behind an Application Load Balancer.
B. Create an S3 VPC endpoint for Amazon S3. Update the software to reference the endpoint.
C. Stop the EC2 instances. Modify the instance type to one with a more powerful CPU and more memory. Restart the instances.
D. Route incoming requests to Amazon Simple Queue Service (Amazon SQS). Configure an EC2 Auto Scaling group based on queue size. Update the software to read from the queue.
Answer: D
Explanation: This option is the best solution because it allows the company to decouple
the analytics software from the user requests and scale the EC2 instances dynamically
based on the demand. By using Amazon SQS, the company can create a queue that
stores the user requests and acts as a buffer between the users and the analytics software.
This way, the software can process the requests at its own pace without losing any data or
overloading the EC2 instances. By using EC2 Auto Scaling, the company can create an
Auto Scaling group that launches or terminates EC2 instances automatically based on the
size of the queue. This way, the company can ensure that there are enough instances to
handle the load and optimize the cost and performance of the system. By updating the
software to read from the queue, the company can enable the analytics software to
consume the requests from the queue and process the data from Amazon S3.
A. Create a copy of the instance Place all instances behind an Application Load Balancer.
This option is not optimal because it does not address the root cause of the problem, which
is the high CPU utilization of the EC2 instances. An Application Load Balancer can
distribute the incoming traffic across multiple instances, but it cannot scale the instances
based on the load or reduce the processing time of the analytics software. Moreover, this
option can incur additional costs for the load balancer and the extra instances.
B. Create an S3 VPC endpoint for Amazon S3 Update the software to reference the
endpoint. This option is not effective because it does not solve the issue of the high CPU
utilization of the EC2 instances. An S3 VPC endpoint can enable the EC2 instances to
access Amazon S3 without going through the internet, which can improve the network
performance and security. However, it cannot reduce the processing time of the analytics
software or scale the instances based on the load.
C. Stop the EC2 instances. Modify the instance type to one with a more powerful CPU and
more memory. Restart the instances. This option is not scalable because it does not
account for the variability of the user load. Changing the instance type to a more powerful
one can improve the performance of the analytics software, but it cannot adjust the number
of instances based on the demand. Moreover, this option can increase the cost of the
system and cause downtime during the instance modification.
References:
Using Amazon SQS queues with Amazon EC2 Auto Scaling - Amazon EC2 Auto Scaling
Tutorial: Set up a scaled and load-balanced application - Amazon EC2 Auto Scaling
Amazon EC2 Auto Scaling FAQs
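Scaling "based on queue size" in option D is usually implemented as target tracking on backlog per instance. A hedged sketch of the arithmetic such a policy effectively performs; the per-instance capacity and group limits are illustrative assumptions.

```python
import math

def desired_capacity(queue_depth: int, msgs_per_instance: int, current: int,
                     min_size: int = 1, max_size: int = 20) -> int:
    """Compute the Auto Scaling group size needed to drain the SQS backlog.

    msgs_per_instance is the acceptable backlog per instance, derived from
    how many messages one instance can process within the latency target.
    The values used here are assumptions for illustration.
    """
    needed = math.ceil(queue_depth / msgs_per_instance) if queue_depth else min_size
    # Clamp to the group's configured bounds.
    return max(min_size, min(max_size, needed))
```

With this shape, a burst of job requests deepens the queue, the group scales out to drain it, and the group scales back in when the backlog clears, so no submitted data is dropped even while instances are busy.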
Question # 13
A company is deploying an application that processes streaming data in near-real time. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to provide the lowest possible latency between nodes.
Which combination of network solutions will meet these requirements? (Select TWO)
A. Enable and configure enhanced networking on each EC2 instance.
B. Group the EC2 instances in separate accounts.
C. Run the EC2 instances in a cluster placement group.
D. Attach multiple elastic network interfaces to each EC2 instance.
E. Use Amazon Elastic Block Store (Amazon EBS) optimized instance types.
Answer: A,C
Explanation: These options are the most suitable ways to configure the network
architecture to provide the lowest possible latency between nodes. Option A enables and
configures enhanced networking on each EC2 instance, which is a feature that improves
the network performance of the instance by providing higher bandwidth, lower latency, and
lower jitter. Enhanced networking uses single root I/O virtualization (SR-IOV) or Elastic
Fabric Adapter (EFA) to provide direct access to the network hardware. You can enable
and configure enhanced networking by choosing a supported instance type and a
compatible operating system, and installing the required drivers. Option C runs the EC2
instances in a cluster placement group, which is a logical grouping of instances within a
single Availability Zone that are placed close together on the same underlying hardware.
Cluster placement groups provide the lowest network latency and the highest network
throughput among the placement group options. You can run the EC2 instances in a
cluster placement group by creating a placement group and launching the instances into it.
Option B is not suitable because grouping the EC2 instances in separate accounts does
not provide the lowest possible latency between nodes. Separate accounts are used to
isolate and organize resources for different purposes, such as security, billing, or
compliance. However, they do not affect the network performance or proximity of the
instances. Moreover, grouping the EC2 instances in separate accounts would incur
additional costs and complexity, and it would require setting up cross-account networking
and permissions.
Option D is not suitable because attaching multiple elastic network interfaces to each EC2
instance does not provide the lowest possible latency between nodes. Elastic network
interfaces are virtual network interfaces that can be attached to EC2 instances to provide
additional network capabilities, such as multiple IP addresses, multiple subnets, or
enhanced security. However, they do not affect the network performance or proximity of the
instances. Moreover, attaching multiple elastic network interfaces to each EC2 instance
would consume additional resources and limit the instance type choices.
Option E is not suitable because using Amazon EBS optimized instance types does not
provide the lowest possible latency between nodes. Amazon EBS optimized instance types
are instances that provide dedicated bandwidth for Amazon EBS volumes, which are block
storage volumes that can be attached to EC2 instances. EBS optimized instance types
improve the performance and consistency of the EBS volumes, but they do not affect the
network performance or proximity of the instances. Moreover, using EBS optimized
instance types would incur additional costs and may not be necessary for the streaming
data workload.
References:
Enhanced networking on Linux
Placement groups
Elastic network interfaces
Amazon EBS-optimized instances
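For options A and C, the setup reduces to choosing an ENA/EFA-capable instance type and launching into a cluster placement group. The dictionaries below sketch the request parameters as they would be passed to boto3's EC2 client; the group name, instance type, and AMI ID are placeholders, and no API call is made.

```python
# Parameters for creating a cluster placement group (option C), shaped as
# boto3 ec2.create_placement_group arguments.
placement_group_request = {
    "GroupName": "analytics-cluster",  # illustrative name
    "Strategy": "cluster",             # pack instances close together in one AZ
}

# Parameters for launching instances into the group (option A + C), shaped
# as boto3 ec2.run_instances arguments. Enhanced networking is a property of
# the instance type and AMI (ENA/EFA support), not a separate flag here.
run_instances_request = {
    "ImageId": "ami-xxxxxxxx",         # placeholder AMI with ENA drivers
    "InstanceType": "c5n.9xlarge",     # illustrative ENA/EFA-capable type
    "MinCount": 2,
    "MaxCount": 2,
    "Placement": {"GroupName": placement_group_request["GroupName"]},
}
```

In real code these dictionaries would be unpacked into `ec2.create_placement_group(**placement_group_request)` and `ec2.run_instances(**run_instances_request)`.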
Question # 14
A company runs a container application on a Kubernetes cluster in the company's data center. The application uses Advanced Message Queuing Protocol (AMQP) to communicate with a message queue. The data center cannot scale fast enough to meet the company's expanding business needs. The company wants to migrate the workloads to AWS.
Which solution will meet these requirements with the LEAST operational overhead?
A. Migrate the container application to Amazon Elastic Container Service (Amazon ECS). Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages.
B. Migrate the container application to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon MQ to retrieve the messages.
C. Use highly available Amazon EC2 instances to run the application. Use Amazon MQ to retrieve the messages.
D. Use AWS Lambda functions to run the application. Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages.
Answer: B
Explanation: This option is the best solution because it allows the company to migrate the
container application to AWS with minimal changes and leverage a managed service to run
the Kubernetes cluster and the message queue. By using Amazon EKS, the company can
run the container application on a fully managed Kubernetes control plane that is
compatible with the existing Kubernetes tools and plugins. Amazon EKS handles the
provisioning, scaling, patching, and security of the Kubernetes cluster, reducing the
operational overhead and complexity. By using Amazon MQ, the company can use a fully
managed message broker service that supports AMQP and other popular messaging
protocols. Amazon MQ handles the administration, maintenance, and scaling of the
message broker, ensuring high availability, durability, and security of the messages.
A. Migrate the container application to Amazon Elastic Container Service (Amazon ECS)
Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages. This option
is not optimal because it requires the company to change the container orchestration
platform from Kubernetes to ECS, which can introduce additional complexity and risk.
Moreover, it requires the company to change the messaging protocol from AMQP to SQS,
which can also affect the application logic and performance. Amazon ECS and Amazon
SQS are both fully managed services that simplify the deployment and management of
containers and messages, but they may not be compatible with the existing application
architecture and requirements.
C. Use highly available Amazon EC2 instances to run the application Use Amazon MQ to
retrieve the messages. This option is not ideal because it requires the company to manage
the EC2 instances that host the container application. The company would need to
provision, configure, scale, patch, and monitor the EC2 instances, which can increase the
operational overhead and infrastructure costs. Moreover, the company would need to
install and maintain the Kubernetes software on the EC2 instances, which can also add
complexity and risk. Amazon MQ is a fully managed message broker service that supports
AMQP and other popular messaging protocols, but it cannot compensate for the lack of a
managed Kubernetes service.
D. Use AWS Lambda functions to run the application. Use Amazon Simple Queue Service
(Amazon SQS) to retrieve the messages. This option is not suitable because AWS Lambda
is not designed for long-running container workloads. Although Lambda can run functions
packaged as container images, each invocation is limited to 15 minutes of execution time
and to preset memory and CPU allocations, which does not suit a long-running backend
that needs its compute and memory adjusted. Amazon SQS is a fully managed message
queue service that supports asynchronous communication, but it does not support AMQP
or other messaging protocols, so the application's messaging code would also have to
change.
References:
Amazon Elastic Kubernetes Service - Amazon Web Services
Amazon MQ - Amazon Web Services
Amazon Elastic Container Service - Amazon Web Services
AWS Lambda FAQs - Amazon Web Services
Question # 15
A company runs a real-time data ingestion solution on AWS. The solution consists of the most recent version of Amazon Managed Streaming for Apache Kafka (Amazon MSK). The solution is deployed in a VPC in private subnets across three Availability Zones.
A solutions architect needs to redesign the data ingestion solution to be publicly available over the internet. The data in transit must also be encrypted.
Which solution will meet these requirements with the MOST operational efficiency?
A. Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets. Update the MSK cluster security settings to enable mutual TLS authentication.
B. Create a new VPC that has public subnets. Deploy an MSK cluster in the public subnets. Update the MSK cluster security settings to enable mutual TLS authentication.
C. Deploy an Application Load Balancer (ALB) that uses private subnets. Configure an ALB security group inbound rule to allow inbound traffic from the VPC CIDR block for HTTPS protocol.
D. Deploy a Network Load Balancer (NLB) that uses private subnets. Configure an NLB listener for HTTPS communication over the internet.
Answer: A
Explanation: The solution that meets the requirements with the most operational efficiency
is to configure public subnets in the existing VPC and deploy an MSK cluster in the public
subnets. This solution allows the data ingestion solution to be publicly available over the
internet without creating a new VPC or deploying a load balancer. The solution also
ensures that the data in transit is encrypted by enabling mutual TLS authentication, which
requires both the client and the server to present certificates for verification. This solution
leverages the public access feature of Amazon MSK, which is available for clusters running
Apache Kafka 2.6.0 or later versions.
The other solutions are not as efficient as the first one because they either create
unnecessary resources or do not encrypt the data in transit. Creating a new VPC with
public subnets would incur additional costs and complexity for managing network resources
and routing. Deploying an ALB or an NLB would also add more costs and latency for the
data ingestion solution. Moreover, an ALB or an NLB would not encrypt the data in transit
by itself, unless they are configured with HTTPS listeners and certificates, which would
require additional steps and maintenance. Therefore, these solutions are not optimal for the
given requirements.
References:
Public access - Amazon Managed Streaming for Apache Kafka
Question # 16
A company runs a Java-based job on an Amazon EC2 instance. The job runs every hour and takes 10 seconds to run. The job runs on a scheduled interval and consumes 1 GB of memory. The CPU utilization of the instance is low except for short surges during which the job uses the maximum CPU available. The company wants to optimize the costs to run the job.
Which solution will meet these requirements?
A. Use AWS App2Container (A2C) to containerize the job. Run the job as an Amazon Elastic Container Service (Amazon ECS) task on AWS Fargate with 0.5 virtual CPU (vCPU) and 1 GB of memory.
B. Copy the code into an AWS Lambda function that has 1 GB of memory. Create an Amazon EventBridge scheduled rule to run the code each hour.
C. Use AWS App2Container (A2C) to containerize the job. Install the container in the existing Amazon Machine Image (AMI). Ensure that the schedule stops the container when the task finishes.
D. Configure the existing schedule to stop the EC2 instance at the completion of the job and restart the EC2 instance when the next job starts.
Answer: B
Explanation: AWS Lambda is a serverless compute service that allows you to run code
without provisioning or managing servers. You can create Lambda functions in various
languages, including Java, and specify the amount of memory allocated to each function;
CPU power is allocated in proportion to the memory setting. Lambda charges you only for
the compute time you consume, which is calculated from the number of requests and the
duration of your code execution. You can use Amazon EventBridge to trigger your Lambda
function on a schedule, such as every hour, using cron or rate expressions. This solution
optimizes the cost of running the job because you do not pay for idle time or unused
resources, unlike running the job on an EC2 instance.
References:
AWS Lambda - FAQs
Tutorial: Schedule AWS Lambda functions using EventBridge
Schedule expressions using rate or cron - AWS Lambda
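The cost argument for option B can be made concrete with back-of-envelope math. The prices below are illustrative assumptions, not quoted rates; check current AWS Lambda pricing before relying on them.

```python
# Rough monthly Lambda cost for the hourly job in option B.
# Prices are illustrative assumptions; verify against current AWS pricing.
GB_SECOND_PRICE = 0.0000166667    # per GB-second (example x86 rate)
REQUEST_PRICE = 0.20 / 1_000_000  # per request (example rate)

invocations = 24 * 30             # one run per hour for 30 days
duration_s = 10                   # the job runs for about 10 seconds
memory_gb = 1.0                   # the job needs 1 GB of memory

compute_cost = invocations * duration_s * memory_gb * GB_SECOND_PRICE
request_cost = invocations * REQUEST_PRICE
total = compute_cost + request_cost
# Around a tenth of a dollar per month under these assumptions, versus a
# continuously running EC2 instance that bills for ~99.7% idle time.
```

The point of the sketch: with 720 ten-second runs a month, the function is billed for only about two hours of compute, which is why the serverless option dominates on cost for this access pattern.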
Question # 17
An ecommerce company runs applications in AWS accounts that are part of an organization in AWS Organizations. The applications run on Amazon Aurora PostgreSQL databases across all the accounts. The company needs to prevent malicious activity and must identify abnormal failed and incomplete login attempts to the databases.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Attach service control policies (SCPs) to the root of the organization to identify the failed login attempts.
B. Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the organization.
C. Publish the Aurora general logs to a log group in Amazon CloudWatch Logs. Export the log data to a central Amazon S3 bucket.
D. Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central Amazon S3 bucket.
Answer: C
Explanation: This option is the most operationally efficient way to meet the requirements
because it allows the company to monitor and analyze the database login activity across all
the accounts in the organization. By publishing the Aurora general logs to a log group in
Amazon CloudWatch Logs, the company can enable the logging of the database
connections, disconnections, and failed authentication attempts. By exporting the log data
to a central Amazon S3 bucket, the company can store the log data in a durable and
cost-effective way and use other AWS services or tools to perform further analysis or alerting on
the log data. For example, the company can use Amazon Athena to query the log data in
Amazon S3, or use Amazon SNS to send notifications based on the log data.
A. Attach service control policies (SCPs) to the root of the organization to identify the failed
login attempts. This option is not effective because SCPs are not designed to identify the
failed login attempts, but to restrict the actions that the users and roles can perform in the
member accounts of the organization. SCPs are applied to the AWS API calls, not to the
database login attempts. Moreover, SCPs do not provide any logging or analysis
capabilities for the database activity.
B. Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member
accounts of the organization. This option is not optimal for this requirement because
GuardDuty RDS Protection generates security findings about anomalous login behavior
rather than a complete, exportable record of the failed and incomplete login attempts, so it
does not by itself give the company a centralized store of login activity to analyze across
all the accounts.
D. Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central
Amazon S3 bucket. This option is not sufficient because AWS CloudTrail does not capture
the database login attempts, but only the AWS API calls made by or on behalf of the
Aurora PostgreSQL database. For example, AWS CloudTrail can record the events such
as creating, modifying, or deleting the database instances, clusters, or snapshots, but not
the events such as connecting, disconnecting, or failing to authenticate to the database.
References:
Working with Amazon Aurora PostgreSQL - Amazon Aurora
Working with log groups and log streams - Amazon CloudWatch Logs
Exporting Log Data to Amazon S3 - Amazon CloudWatch Logs
Amazon GuardDuty FAQs
Logging Amazon RDS API Calls with AWS CloudTrail - Amazon Relational Database Service
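Once the exported log data sits in the central S3 bucket, identifying failed logins is a matter of scanning the connection log entries, for example with Athena or a small script. The sketch below assumes PostgreSQL's standard failed-authentication message format; the sample log lines are illustrative.

```python
import re

# PostgreSQL emits this message on a failed password authentication attempt.
FAILED_LOGIN = re.compile(r'password authentication failed for user "(\w+)"')

def failed_login_users(log_lines):
    """Return the user names behind failed authentication attempts
    found in a sequence of exported Aurora PostgreSQL log lines."""
    users = []
    for line in log_lines:
        m = FAILED_LOGIN.search(line)
        if m:
            users.append(m.group(1))
    return users
```

An abnormal spike in the length of this list for a given user or source would be the signal the company is looking for; the same filter could be expressed as an Athena query or a CloudWatch Logs metric filter.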
Question # 18
A company needs to provide customers with secure access to its data. The company processes customer data and stores the results in an Amazon S3 bucket.
All the data is subject to strong regulations and security requirements. The data must be encrypted at rest. Each customer must be able to access only their data from their AWS account. Company employees must not be able to access the data.
Which solution will meet these requirements?
A. Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-side. In the private certificate policy, deny access to the certificate for all principals except an IAM role that the customer provides.
B. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the data server-side. In the S3 bucket policy, deny decryption of data for all principals except an IAM role that the customer provides.
C. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the data server-side. In each KMS key policy, deny decryption of data for all principals except an IAM role that the customer provides.
D. Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-side. In the public certificate policy, deny access to the certificate for all principals except an IAM role that the customer provides.
Answer: C
Explanation: The correct solution is to provision a separate AWS KMS key for each
customer and encrypt the data server-side. This way, the company can use the S3
encryption feature to protect the data at rest and delegate the control of the encryption keys
to the customers. The customers can then use their own IAM roles to access and decrypt
their data. The company employees will not be able to access the data because they are
not authorized by the KMS key policies. The other options are incorrect because:
Option A and D are using ACM certificates to encrypt the data client-side. This is
not a recommended practice for S3 encryption because it adds complexity and
overhead to the encryption process. Moreover, the company will have to manage
the certificates and their policies for each customer, which is not scalable and
secure.
Option B uses a separate KMS key for each customer but relies on the S3 bucket
policy to control decryption. This is not secure because an S3 bucket policy governs
access to the bucket and its objects, not the use of KMS keys: the kms:Decrypt
permission is evaluated against the key policy and IAM policies, not against the
bucket policy. Unless each key policy itself restricts decryption to the customer's
role, company employees who hold KMS permissions could still decrypt the data.
References:
S3 encryption
KMS key policies
ACM certificates
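The per-customer isolation in option C lives entirely in each KMS key policy. The function below sketches such a policy as a Python dictionary; the statement set and ARNs are illustrative assumptions, and a production policy would be reviewed against the company's actual security requirements.

```python
import json

def customer_key_policy(account_id: str, customer_role_arn: str) -> str:
    """Build a per-customer KMS key policy (sketch of option C).

    Only the customer-provided IAM role may use the key for decryption.
    Account IDs and ARNs passed in are placeholders for illustration.
    """
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # The account root keeps administrative actions only, so the
                # key remains manageable without granting data access.
                "Sid": "KeyAdministration",
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
                "Action": ["kms:Create*", "kms:Describe*", "kms:Put*",
                           "kms:ScheduleKeyDeletion"],
                "Resource": "*",
            },
            {   # Only the customer's role can use the key on data.
                "Sid": "CustomerDecryptOnly",
                "Effect": "Allow",
                "Principal": {"AWS": customer_role_arn},
                "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
                "Resource": "*",
            },
        ],
    }
    return json.dumps(policy)
```

Because company employees appear in neither statement with data-access actions, they cannot decrypt the customer's objects even if they can read the encrypted bytes from S3.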
Question # 19
A company has a nightly batch processing routine that analyzes report files that an on-premises file system receives daily through SFTP. The company wants to move the solution to the AWS Cloud. The solution must be highly available and resilient. The solution also must minimize operational effort.
Which solution meets these requirements?
A. Deploy AWS Transfer for SFTP and an Amazon Elastic File System (Amazon EFS) file system for storage. Use an Amazon EC2 instance in an Auto Scaling group with a scheduled scaling policy to run the batch operation.
B. Deploy an Amazon EC2 instance that runs Linux and an SFTP service. Use an Amazon Elastic Block Store (Amazon EBS) volume for storage. Use an Auto Scaling group with the minimum number of instances and desired number of instances set to 1.
C. Deploy an Amazon EC2 instance that runs Linux and an SFTP service. Use an Amazon Elastic File System (Amazon EFS) file system for storage. Use an Auto Scaling group with the minimum number of instances and desired number of instances set to 1.
D. Deploy AWS Transfer for SFTP and an Amazon S3 bucket for storage. Modify the application to pull the batch files from Amazon S3 to an Amazon EC2 instance for processing. Use an EC2 instance in an Auto Scaling group with a scheduled scaling policy to run the batch operation.
Answer: D
Explanation: Option D replaces both self-managed components with managed services.
AWS Transfer for SFTP (part of the AWS Transfer Family) provides a fully managed, highly
available SFTP endpoint, so the company no longer operates its own file transfer servers,
and Amazon S3 provides durable, resilient storage for the incoming report files. Because
the batch runs on a known nightly schedule, an EC2 Auto Scaling group with a scheduled
scaling policy can launch instances only for the processing window and terminate them
afterward, which minimizes both operational effort and cost.
Options B and C require the company to install, patch, and operate its own SFTP service
on EC2, and an Auto Scaling group with minimum and desired capacity of one instance
only replaces a failed instance; it does not remove the operational burden of managing
the SFTP software. Option A would work, but it stores the files on Amazon EFS; for a
write-once, process-nightly workload, S3 is the simpler and more cost-effective store
and integrates directly with the batch processing workflow. References:
What Is AWS Transfer Family?
Scheduled scaling for Amazon EC2 Auto Scaling
Amazon S3 FAQs
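The "scheduled scaling policy" part of option D can be expressed as two scheduled actions: scale out before the nightly window and scale in afterward. The dictionaries below follow the parameter shape of boto3's `put_scheduled_update_group_action`; the group name, schedule times, and capacities are assumptions, and no API call is made.

```python
# Scale out shortly before the nightly batch window starts.
nightly_scale_out = {
    "AutoScalingGroupName": "batch-processing-asg",  # illustrative name
    "ScheduledActionName": "nightly-batch-start",
    "Recurrence": "0 1 * * *",   # 01:00 UTC daily (assumed batch start)
    "MinSize": 1,
    "MaxSize": 4,
    "DesiredCapacity": 2,
}

# Scale back to zero once processing is expected to have finished.
nightly_scale_in = {
    "AutoScalingGroupName": "batch-processing-asg",
    "ScheduledActionName": "nightly-batch-stop",
    "Recurrence": "0 5 * * *",   # 05:00 UTC daily (assumed batch end)
    "MinSize": 0,
    "MaxSize": 4,
    "DesiredCapacity": 0,
}
```

In real code each dictionary would be unpacked into `autoscaling.put_scheduled_update_group_action(**...)`, so compute is billed only during the nightly window.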
Question # 20
A company uses high concurrency AWS Lambda functions to process a constantly increasing number of messages in a message queue during marketing events. The Lambda functions use CPU-intensive code to process the messages. The company wants to reduce the compute costs and to maintain service latency for its customers.
Which solution will meet these requirements?
A. Configure reserved concurrency for the Lambda functions. Decrease the memory allocated to the Lambda functions.
B. Configure reserved concurrency for the Lambda functions. Increase the memory according to AWS Compute Optimizer recommendations.
C. Configure provisioned concurrency for the Lambda functions. Decrease the memory allocated to the Lambda functions.
D. Configure provisioned concurrency for the Lambda functions. Increase the memory according to AWS Compute Optimizer recommendations.
Answer: D
Explanation: The company wants to reduce the compute costs and maintain service
latency for its Lambda functions that process a constantly increasing number of messages
in a message queue. The Lambda functions use CPU intensive code to process the
messages. To meet these requirements, a solutions architect should recommend the
following solution:
Configure provisioned concurrency for the Lambda functions. Provisioned
concurrency is the number of pre-initialized execution environments that are
allocated to the Lambda functions. These execution environments are prepared to
respond immediately to incoming function requests, reducing the cold start latency.
Configuring provisioned concurrency also helps to avoid throttling errors due to
reaching the concurrency limit of the Lambda service.
Increase the memory according to AWS Compute Optimizer recommendations.
AWS Compute Optimizer is a service that provides recommendations for optimal
AWS resource configurations based on your utilization data. By increasing the
memory allocated to the Lambda functions, you can also increase the CPU power
and improve the performance of your CPU intensive code. AWS Compute
Optimizer can help you find the optimal memory size for your Lambda functions
based on your workload characteristics and performance goals.
This solution will reduce the compute costs by avoiding unnecessary over-provisioning of
memory and CPU resources, and maintain service latency by using provisioned
concurrency and optimal memory size for the Lambda functions.
References:
Provisioned Concurrency
AWS Compute Optimizer
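The reason "increase the memory" can cut cost for CPU-bound code is that Lambda bills memory × duration while CPU power scales with memory. The sketch below assumes a near-linear speedup for illustration; real speedups must be measured, for example via AWS Compute Optimizer recommendations.

```python
def invocation_cost(memory_gb: float, duration_s: float,
                    gb_second_price: float = 0.0000166667) -> float:
    """Cost of one Lambda invocation (price is an illustrative assumption)."""
    return memory_gb * duration_s * gb_second_price

# For CPU-bound code, doubling memory also roughly doubles CPU, so the
# duration can fall to about half under the (assumed) linear-speedup model.
base = invocation_cost(1.0, 8.0)     # 1 GB, 8 s
doubled = invocation_cost(2.0, 4.0)  # 2 GB, assumed ~4 s
# Cost is roughly unchanged while per-invocation latency halves.
```

Past the point where the code can no longer use the extra CPU, duration stops shrinking and cost rises again, which is exactly the sweet spot Compute Optimizer's memory recommendations aim for.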
Question # 21
A company runs applications on AWS that connect to the company's Amazon RDS database. The applications scale on weekends and at peak times of the year. The company wants to scale the database more effectively for its applications that connect to the database.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon DynamoDB with connection pooling with a target group configuration for the database. Change the applications to use the DynamoDB endpoint.
B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint.
C. Use a custom proxy that runs on Amazon EC2 as an intermediary to the database. Change the applications to use the custom proxy endpoint.
D. Use an AWS Lambda function to provide connection pooling with a target group configuration for the database. Change the applications to use the Lambda function.
Answer: B
Explanation:
Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon
Relational Database Service (RDS) that makes applications more scalable, more resilient
to database failures, and more secure. RDS Proxy allows applications to pool and share
connections established with the database, improving database efficiency and application
scalability. RDS Proxy also reduces failover times for Aurora and RDS databases by up to
66% and enables IAM authentication and Secrets Manager integration for database
access. RDS Proxy can be enabled for most applications with no code changes.
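"No code changes" in practice usually means only the connection endpoint moves from the database to the proxy. A sketch with placeholder hostnames; the database name and port are illustrative assumptions.

```python
# Hostnames below are illustrative placeholders, not real endpoints.
DB_ENDPOINT_DIRECT = "mydb.cluster-abc123.us-east-1.rds.amazonaws.com"
DB_ENDPOINT_PROXY = "my-proxy.proxy-abc123.us-east-1.rds.amazonaws.com"

def connection_params(use_proxy: bool = True) -> dict:
    """Connection settings; only the host changes when adopting RDS Proxy."""
    return {
        "host": DB_ENDPOINT_PROXY if use_proxy else DB_ENDPOINT_DIRECT,
        "port": 5432,      # assumed PostgreSQL engine
        "dbname": "app",   # illustrative database name
        # Credentials could come from Secrets Manager when using RDS Proxy
        # with IAM authentication; omitted in this sketch.
    }
```

Because the proxy pools connections behind that single endpoint, application fleets that scale out on weekends no longer exhaust the database's connection limit.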
Question # 22
A company wants to run its payment application on AWS. The application receives payment notifications from mobile devices. Payment notifications require a basic validation before they are sent for further processing. The backend processing application is long running and requires compute and memory to be adjusted. The company does not want to manage the infrastructure.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere. Create a standalone cluster.
B. Create an Amazon API Gateway API. Integrate the API with an AWS Step Functions state machine to receive payment notifications from mobile devices. Invoke the state machine to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure an EKS cluster with self-managed nodes.
C. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon EC2 Spot Instances. Configure a Spot Fleet with a default allocation strategy.
D. Create an Amazon API Gateway API. Integrate the API with AWS Lambda to receive payment notifications from mobile devices. Invoke a Lambda function to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS with an AWS Fargate launch type.
Answer: D
Explanation:
This option is the best solution because it allows the company to run its payment
application on AWS with minimal operational overhead and infrastructure management. By
using Amazon API Gateway, the company can create a secure and scalable API to receive
payment notifications from mobile devices. By using AWS Lambda, the company can run a
serverless function to validate the payment notifications and send them to the backend
application. Lambda handles the provisioning, scaling, and security of the function,
reducing the operational complexity and cost. By using Amazon ECS with AWS Fargate,
the company can run the backend application on a fully managed container service that
scales the compute resources automatically and does not require any EC2 instances to
manage. Fargate allocates the right amount of CPU and memory for each container and
adjusts them as needed.
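The API Gateway-to-Lambda validation step described above can be sketched as a minimal handler. The payload fields (`payment_id`, `amount`) and the forwarding step are illustrative assumptions, not an actual payment schema:

```python
import json

def validate_notification(notification: dict) -> bool:
    """Return True when the notification carries the fields we expect."""
    return (
        isinstance(notification.get("payment_id"), str)
        and isinstance(notification.get("amount"), (int, float))
        and notification["amount"] > 0
    )

def handler(event, context):
    # API Gateway proxy integration delivers the request body as a string.
    notification = json.loads(event.get("body") or "{}")
    if not validate_notification(notification):
        return {"statusCode": 400, "body": json.dumps({"error": "invalid notification"})}
    # In the real system the validated notification would be forwarded to the
    # backend service running on ECS/Fargate (e.g. via an internal HTTP call).
    return {"statusCode": 202, "body": json.dumps({"accepted": notification["payment_id"]})}
```

Lambda scales this handler automatically per request, which is what keeps the operational overhead minimal.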
A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere. Create a standalone cluster. This option is not optimal
because it requires the company to manage the Kubernetes cluster that runs the backend
application. Amazon EKS Anywhere is a deployment option that allows the company to
create and operate Kubernetes clusters on-premises or in other environments outside
AWS. The company would need to provision, configure, scale, patch, and monitor the
cluster nodes, which can increase the operational overhead and complexity. Moreover, the
company would need to ensure the connectivity and security between the AWS services
and the EKS Anywhere cluster, which can also add challenges and risks.
B. Create an Amazon API Gateway API. Integrate the API with an AWS Step Functions state machine to receive payment notifications from mobile devices. Invoke the state machine to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure an EKS cluster with self-managed nodes. This option is not ideal
because it requires the company to manage the EC2 instances that host the Kubernetes
cluster that runs the backend application. Amazon EKS is a fully managed service that runs
Kubernetes on AWS, but it still requires the company to manage the worker nodes that run
the containers. The company would need to provision, configure, scale, patch, and monitor
the EC2 instances, which can increase the operational overhead and infrastructure costs.
Moreover, using AWS Step Functions to validate the payment notifications may be
unnecessary and complex, as the validation logic can be implemented in a simpler way
with Lambda or other services.
C. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon EC2 Spot Instances. Configure a Spot Fleet with a default allocation strategy. This option is not cost-effective
because it requires the company to manage the EC2 instances that run the backend
application. The company would need to provision, configure, scale, patch, and monitor the
EC2 instances, which can increase the operational overhead and infrastructure costs.
Moreover, using Spot Instances can introduce the risk of interruptions, as Spot Instances
are reclaimed by AWS when the demand for On-Demand Instances increases. The
company would need to handle the interruptions gracefully and ensure the availability and
reliability of the backend application.
References:
1 Amazon API Gateway - Amazon Web Services
2 AWS Lambda - Amazon Web Services
3 Amazon Elastic Container Service - Amazon Web Services
4 AWS Fargate - Amazon Web Services
Question # 23
A company has multiple AWS accounts with applications deployed in the us-west-2 Region. Application logs are stored within Amazon S3 buckets in each account. The company wants to build a centralized log analysis solution that uses a single S3 bucket. Logs must not leave us-west-2, and the company wants to incur minimal operational overhead.
Which solution meets these requirements and is MOST cost-effective?
A. Create an S3 Lifecycle policy that copies the objects from one of the application S3 buckets to the centralized S3 bucket.
B. Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
C. Write a script that uses the PutObject API operation every day to copy the entire contents of the buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
D. Write AWS Lambda functions in these accounts that are triggered every time logs are delivered to the S3 buckets (s3:ObjectCreated:* event). Copy the logs to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
Answer: B
Explanation: This solution meets the following requirements:
It is cost-effective, as it only charges for the storage and data transfer of the
replicated objects, and does not require any additional AWS services or custom
scripts. S3 Same-Region Replication (SRR) is a feature that automatically
replicates objects across S3 buckets within the same AWS Region. SRR can help
you aggregate logs from multiple sources to a single destination for analysis and
auditing. SRR also preserves the metadata, encryption, and access control of the
source objects.
It is operationally efficient, as it does not require any manual intervention or
scheduling. SRR replicates objects as soon as they are uploaded to the source
bucket, ensuring that the destination bucket always has the latest log data. SRR
also handles any updates or deletions of the source objects, keeping the
destination bucket in sync. SRR can be enabled with a few clicks in the S3 console
or with a simple API call.
It is secure, as it does not allow the logs to leave the us-west-2 Region. SRR only
replicates objects within the same AWS Region, ensuring that the data sovereignty
and compliance requirements are met. SRR also supports encryption of the source
and destination objects, using either server-side encryption with AWS KMS or S3-
managed keys, or client-side encryption.
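To make the setup concrete, here is a sketch of the replication configuration in the shape S3's PutBucketReplication API accepts. The bucket names, prefix, and IAM role ARN are placeholders for illustration:

```python
# Same-Region Replication configuration: objects under "logs/" in the source
# bucket are replicated to a centralized bucket in the same Region (us-west-2).
replication_configuration = {
    "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
    "Rules": [
        {
            "ID": "replicate-logs",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {"Prefix": "logs/"},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                # The destination bucket must also live in us-west-2 for SRR.
                "Bucket": "arn:aws:s3:::central-log-analysis-bucket"
            },
        }
    ],
}
# With boto3 this would be applied roughly as:
# s3.put_bucket_replication(Bucket="source-bucket",
#                           ReplicationConfiguration=replication_configuration)
```

Each application account would carry a rule like this pointing at the same centralized bucket.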
References:
Same-Region Replication - Amazon Simple Storage Service
How do I replicate objects across S3 buckets in the same AWS Region?
Question # 24
A company runs a highly available web application on Amazon EC2 instances behind an Application Load Balancer. The company uses Amazon CloudWatch metrics.
As the traffic to the web application increases, some EC2 instances become overloaded with many outstanding requests. The CloudWatch metrics show that the number of requests processed and the time to receive the responses from some EC2 instances are both higher compared to other EC2 instances. The company does not want new requests to be forwarded to the EC2 instances that are already overloaded.
Which solution will meet these requirements?
A. Use the round robin routing algorithm based on the RequestCountPerTarget and ActiveConnectionCount CloudWatch metrics.
B. Use the least outstanding requests algorithm based on the RequestCountPerTarget and ActiveConnectionCount CloudWatch metrics.
C. Use the round robin routing algorithm based on the RequestCount and TargetResponseTime CloudWatch metrics.
D. Use the least outstanding requests algorithm based on the RequestCount and TargetResponseTime CloudWatch metrics.
Answer: D
Explanation: The least outstanding requests (LOR) algorithm is a load balancing algorithm
that distributes incoming requests to the target with the fewest outstanding requests. This
helps to avoid overloading any single target and improves the overall performance and
availability of the web application. The LOR algorithm can use the RequestCount and
TargetResponseTime CloudWatch metrics to determine the number of outstanding
requests and the response time of each target. These metrics measure the number of
requests processed by each target and the time elapsed after the request leaves the load
balancer until a response from the target is received by the load balancer, respectively. By
using these metrics, the LOR algorithm can route new requests to the targets that are less
busy and more responsive, and avoid sending requests to the targets that are already
overloaded or slow. This solution meets the requirements of the company.
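The routing decision the least outstanding requests algorithm makes can be illustrated with a small sketch. This is a simplified model of the idea, not the ALB's actual implementation:

```python
# Each new request goes to the target with the fewest in-flight requests,
# so a slow or overloaded target naturally stops receiving new traffic.

def route_request(outstanding: dict) -> str:
    """Pick the target with the fewest outstanding requests and record it."""
    target = min(outstanding, key=outstanding.get)
    outstanding[target] += 1
    return target

# Hypothetical in-flight request counts per target instance.
outstanding = {"i-aaa": 4, "i-bbb": 1, "i-ccc": 9}
chosen = route_request(outstanding)  # picks i-bbb, the least busy target
```

Under round robin, by contrast, `i-ccc` would keep receiving every third request regardless of its backlog.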
References:
Application Load Balancer now supports Least Outstanding Requests algorithm for load balancing requests
Question # 25
An analytics company uses Amazon VPC to run its multi-tier services. The company wants to use RESTful APIs to offer a web analytics service to millions of users. Users must be verified by using an authentication service to access the APIs.
Which solution will meet these requirements with the MOST operational efficiency?
A. Configure an Amazon Cognito user pool for user authentication. Implement Amazon API Gateway REST APIs with a Cognito authorizer.
B. Configure an Amazon Cognito identity pool for user authentication. Implement Amazon API Gateway HTTP APIs with a Cognito authorizer.
C. Configure an AWS Lambda function to handle user authentication. Implement Amazon API Gateway REST APIs with a Lambda authorizer.
D. Configure an IAM user to handle user authentication. Implement Amazon API Gateway HTTP APIs with an IAM authorizer.
Answer: A
Explanation: This solution will meet the requirements with the most operational efficiency
because:
Amazon Cognito user pools provide a secure and scalable user directory that can
store and manage user profiles, and handle user sign-up, sign-in, and access
control. User pools can also integrate with social identity providers and enterprise
identity providers via SAML or OIDC. User pools can issue JSON Web Tokens
(JWTs) that can be used to authenticate users and authorize API requests.
Amazon API Gateway REST APIs enable you to create and deploy APIs that
expose your backend services to your clients. REST APIs support multiple
authorization mechanisms, including Cognito user pools, IAM, Lambda, and
custom authorizers. A Cognito authorizer is a built-in authorizer type that uses
a Cognito user pool as the identity source. When a client makes a request to a
REST API method that is configured with a Cognito authorizer, API Gateway
verifies the JWTs that are issued by the user pool and grants access based on the
token’s claims and the authorizer’s configuration.
By using Cognito user pools and API Gateway REST APIs with a Cognito
authorizer, you can achieve a high level of security, scalability, and performance
for your web analytics service. You can also leverage the built-in features of
Cognito and API Gateway, such as user management, token validation, caching,
throttling, and monitoring, without having to implement them yourself. This reduces
the operational overhead and complexity of your solution.
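To make the token flow concrete, here is a rough sketch of reading the claims from a user pool JWT. The claims and token below are fabricated for illustration, and signature verification against the user pool's JWKS, which API Gateway performs for you, is deliberately omitted:

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def decode_claims(jwt: str) -> dict:
    """Read the (unverified) claims from the JWT's payload segment."""
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a sample unsigned token with claims typical of a Cognito ID token.
claims = {"sub": "user-123", "token_use": "id", "aud": "example-app-client"}
sample_jwt = ".".join([b64url(b'{"alg":"none"}'),
                       b64url(json.dumps(claims).encode()), ""])
```

In the real flow, API Gateway validates the signature and expiry first, then evaluates claims such as `aud` and `token_use` before invoking the backend.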
References:
Amazon Cognito User Pools
Amazon API Gateway REST APIs
Use API Gateway Lambda authorizers
Question # 26
A company has an AWS Direct Connect connection from its on-premises location to an AWS account. The AWS account has 30 different VPCs in the same AWS Region. The VPCs use private virtual interfaces (VIFs). Each VPC has a CIDR block that does not overlap with other networks under the company's control.
The company wants to centrally manage the networking architecture while still allowing each VPC to communicate with all other VPCs and on-premises networks.
Which solution will meet these requirements with the LEAST amount of operational overhead?
A. Create a transit gateway and associate the Direct Connect connection with a new transit VIF. Turn on the transit gateway's route propagation feature.
B. Create a Direct Connect gateway. Recreate the private VIFs to use the new gateway. Associate each VPC by creating new virtual private gateways.
C. Create a transit VPC. Connect the Direct Connect connection to the transit VPC. Create a peering connection between all other VPCs in the Region. Update the route tables.
D. Create AWS Site-to-Site VPN connections from on premises to each VPC. Ensure that both VPN tunnels are UP for each connection. Turn on the route propagation feature.
Answer: A
Explanation: This solution meets the following requirements:
It is operationally efficient, as it only requires one transit gateway and one transit
VIF to connect the Direct Connect connection to all the VPCs in the same AWS
Region. The transit gateway acts as a regional network hub that simplifies the
network management and reduces the number of VIFs and gateways needed.
It is scalable, as it can support up to 5000 attachments per transit gateway, which
can include VPCs, VPNs, Direct Connect gateways, and peering connections. The
transit gateway can also be connected to other transit gateways in different
Regions or accounts using peering connections, enabling cross-Region and cross-account connectivity.
It is flexible, as it allows each VPC to communicate with all other VPCs and on-premises networks using dynamic routing protocols such as Border Gateway
Protocol (BGP). The transit gateway’s route propagation feature automatically
propagates the routes from the attached VPCs and VPNs to the transit gateway
route table, eliminating the need to manually update the route tables.
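As a sketch, the route propagation setting is enabled when the transit gateway is created. The dict below mirrors the shape boto3's `ec2.create_transit_gateway` accepts; the values are illustrative and nothing is sent to AWS here:

```python
# Transit gateway creation parameters with default route propagation enabled.
create_transit_gateway_params = {
    "Description": "Regional hub for 30 VPCs and Direct Connect",
    "Options": {
        "AmazonSideAsn": 64512,
        "DefaultRouteTableAssociation": "enable",
        # Propagation pushes routes from each attachment into the transit
        # gateway route table automatically, so per-VPC routes need not be
        # added by hand as new VPCs are attached.
        "DefaultRouteTablePropagation": "enable",
    },
}
```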
References:
Transit Gateways - Amazon Virtual Private Cloud
Working with transit gateways - AWS Direct Connect
Question # 27
A solutions architect is designing a shared storage solution for a web application that is deployed across multiple Availability Zones. The web application runs on Amazon EC2 instances that are in an Auto Scaling group. The company plans to make frequent changes to the content. The solution must have strong consistency in returning the new content as soon as the changes occur.
Which solutions meet these requirements? (Select TWO.)
A. Use AWS Storage Gateway Volume Gateway Internet Small Computer Systems Interface (iSCSI) block storage that is mounted to the individual EC2 instances.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the individual EC2 instances.
C. Create a shared Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the individual EC2 instances.
D. Use AWS DataSync to perform continuous synchronization of data between EC2 hosts in the Auto Scaling group.
E. Create an Amazon S3 bucket to store the web content. Set the metadata for the Cache-Control header to no-cache. Use Amazon CloudFront to deliver the content.
Answer: B,E
Explanation: These options are the most suitable ways to design a shared storage
solution for a web application that is deployed across multiple Availability Zones and
requires strong consistency. Option B uses Amazon Elastic File System (Amazon EFS) as
a shared file system that can be mounted on multiple EC2 instances in different Availability
Zones. Amazon EFS provides high availability, durability, scalability, and performance for
file-based workloads. It also supports strong consistency, which means that any changes
made to the file system are immediately visible to all clients. Option E uses Amazon S3 as
a shared object store that can store the web content and serve it through Amazon
CloudFront, a content delivery network (CDN). Amazon S3 provides high availability,
durability, scalability, and performance for object-based workloads. It also supports strong
consistency for read-after-write and list operations, which means that any changes made to
the objects are immediately visible to all clients. By setting the metadata for the Cache-Control header to no-cache, the web content can be prevented from being cached by the
browsers or the CDN edge locations, ensuring that the latest content is always delivered to
the users.
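Setting that header happens at upload time. The dict below shows the arguments in the shape boto3's `s3.put_object` accepts; the bucket, key, and body are placeholders:

```python
# Upload arguments for web content with caching disabled.
put_object_args = {
    "Bucket": "web-content-bucket",
    "Key": "index.html",
    "Body": b"<html>...</html>",
    "ContentType": "text/html",
    # no-cache forces browsers and CloudFront edge locations to revalidate
    # with the origin, so updated content is served as soon as it changes.
    "CacheControl": "no-cache",
}
# With boto3: s3.put_object(**put_object_args)
```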
Option A is not suitable because using AWS Storage Gateway Volume Gateway as a
shared storage solution for a web application is not efficient or scalable. AWS Storage
Gateway Volume Gateway is a hybrid cloud storage service that provides block storage
volumes that can be mounted on-premises or on EC2 instances as iSCSI devices. It is
useful for migrating or backing up data to AWS, but it is not designed for serving web
content or providing strong consistency. Moreover, using Volume Gateway would incur
additional costs and complexity, and it would not leverage the native AWS storage
services.
Option C is not suitable because creating a shared Amazon EBS volume and mounting it
on multiple EC2 instances is not possible or reliable. Amazon EBS is a block storage
service that provides persistent and high-performance volumes for EC2 instances.
However, EBS volumes can only be attached to one EC2 instance at a time, and they are
constrained to a single Availability Zone. Therefore, creating a shared EBS volume for a
web application that is deployed across multiple Availability Zones is not feasible.
Moreover, EBS volumes do not support strong consistency, which means that any changes
made to the volume may not be immediately visible to other clients.
Option D is not suitable because using AWS DataSync to perform continuous
synchronization of data between EC2 hosts in the Auto Scaling group is not efficient or
scalable. AWS DataSync is a data transfer service that helps you move large amounts of
data to and from AWS storage services. It is useful for migrating or archiving data, but it is
not designed for serving web content or providing strong consistency. Moreover, using
DataSync would incur additional costs and complexity, and it would not leverage the native AWS storage services.
References:
What Is Amazon Elastic File System?
What Is Amazon Simple Storage Service?
What Is Amazon CloudFront?
What Is AWS Storage Gateway?
What Is Amazon Elastic Block Store?
What Is AWS DataSync?
Question # 28
A company needs to extract the names of ingredients from recipe records that are stored as text files in an Amazon S3 bucket. A web application will use the ingredient names to query an Amazon DynamoDB table and determine a nutrition score.
The application can handle non-food records and errors. The company does not have any employees who have machine learning knowledge to develop this solution.
Which solution will meet these requirements MOST cost-effectively?
A. Use S3 Event Notifications to invoke an AWS Lambda function when PutObject requests occur. Program the Lambda function to analyze the object and extract the ingredient names by using Amazon Comprehend. Store the Amazon Comprehend output in the DynamoDB table.
B. Use an Amazon EventBridge rule to invoke an AWS Lambda function when PutObject requests occur. Program the Lambda function to analyze the object by using Amazon Forecast to extract the ingredient names. Store the Forecast output in the DynamoDB table.
C. Use S3 Event Notifications to invoke an AWS Lambda function when PutObject requests occur. Use Amazon Polly to create audio recordings of the recipe records. Save the audio files in the S3 bucket. Use Amazon Simple Notification Service (Amazon SNS) to send a URL as a message to employees. Instruct the employees to listen to the audio files and calculate the nutrition score. Store the ingredient names in the DynamoDB table.
D. Use an Amazon EventBridge rule to invoke an AWS Lambda function when a PutObject request occurs. Program the Lambda function to analyze the object and extract the ingredient names by using Amazon SageMaker. Store the inference output from the SageMaker endpoint in the DynamoDB table.
Answer: A
Explanation: This solution meets the following requirements:
It is cost-effective, as it only uses serverless components that are charged based
on usage and do not require any upfront provisioning or maintenance.
It is scalable, as it can handle any number of recipe records that are uploaded to
the S3 bucket without any performance degradation or manual intervention.
It is easy to implement, as it does not require any machine learning knowledge or
complex data processing logic. Amazon Comprehend is a natural language
processing service that can automatically extract entities such as ingredients from
text files. The Lambda function can simply invoke the Comprehend API and store
the results in the DynamoDB table.
It is reliable, as it can handle non-food records and errors gracefully. Amazon
Comprehend can detect the language and domain of the text files and return an
appropriate response. The Lambda function can also implement error handling
and logging mechanisms to ensure the data quality and integrity.
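A sketch of the Lambda step that turns a Comprehend entity-detection response into ingredient names follows. The response below is hand-written in the shape `detect_entities` returns; in the real function it would come from `comprehend.detect_entities(Text=recipe_text, LanguageCode="en")`:

```python
def extract_ingredients(response: dict, min_score: float = 0.8) -> list:
    """Keep confidently detected entities; drop low-confidence noise."""
    return [
        entity["Text"]
        for entity in response.get("Entities", [])
        if entity["Score"] >= min_score
    ]

# Fabricated sample response for illustration only.
sample_response = {
    "Entities": [
        {"Text": "flour", "Type": "OTHER", "Score": 0.97},
        {"Text": "butter", "Type": "OTHER", "Score": 0.95},
        {"Text": "page 3", "Type": "QUANTITY", "Score": 0.41},  # dropped
    ]
}
ingredients = extract_ingredients(sample_response)
```

The score threshold is one simple way the function can tolerate non-food records without any machine learning expertise.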
References:
Using AWS Lambda with Amazon S3 - AWS Lambda
What Is Amazon Comprehend? - Amazon Comprehend
Working with Tables - Amazon DynamoDB
Question # 29
A company has a new mobile app. Anywhere in the world, users can see local news on topics they choose. Users also can post photos and videos from inside the app.
Users access content often in the first minutes after the content is posted. New content quickly replaces older content, and then the older content disappears. The local nature of the news means that users consume 90% of the content within the AWS Region where it is uploaded.
Which solution will optimize the user experience by providing the LOWEST latency for content uploads?
A. Upload and store content in Amazon S3. Use Amazon CloudFront for the uploads.
B. Upload and store content in Amazon S3. Use S3 Transfer Acceleration for the uploads.
C. Upload content to Amazon EC2 instances in the Region that is closest to the user. Copy the data to Amazon S3.
D. Upload and store content in Amazon S3 in the Region that is closest to the user. Use multiple distributions of Amazon CloudFront.
Answer: B
Explanation: The most suitable solution for optimizing the user experience by providing
the lowest latency for content uploads is to upload and store content in Amazon S3 and
use S3 Transfer Acceleration for the uploads. This solution will enable the company to
leverage the AWS global network and edge locations to speed up the data transfer
between the users and the S3 buckets.
Amazon S3 is a storage service that provides scalable, durable, and highly available object
storage for any type of data. Amazon S3 allows users to store and retrieve data from
anywhere on the web, and offers various features such as encryption, versioning, lifecycle
management, and replication1.
S3 Transfer Acceleration is a feature of Amazon S3 that helps users transfer data to and
from S3 buckets more quickly. S3 Transfer Acceleration works by using optimized network
paths and Amazon’s backbone network to accelerate data transfer speeds. Users can
enable S3 Transfer Acceleration for their buckets and use a distinct URL to access them,
such as <bucket>.s3-accelerate.amazonaws.com2.
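Constructing that endpoint is a simple string substitution. The bucket name below is a placeholder, and acceleration must first be enabled on the bucket (with boto3, via `s3.put_bucket_accelerate_configuration`):

```python
def accelerate_endpoint(bucket: str) -> str:
    """Return the S3 Transfer Acceleration endpoint for a bucket."""
    return f"https://{bucket}.s3-accelerate.amazonaws.com"

endpoint = accelerate_endpoint("user-uploads")
```

Uploads sent to this endpoint enter the AWS network at the nearest edge location and travel Amazon's backbone to the bucket's Region.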
The other options are not correct because they either do not provide the lowest latency or
are not suitable for the use case. Uploading and storing content in Amazon S3 and using Amazon CloudFront for the uploads is not correct because this solution is not designed for
optimizing uploads, but rather for optimizing downloads. Amazon CloudFront is a content
delivery network (CDN) that helps users distribute their content globally with low latency
and high transfer speeds. CloudFront works by caching the content at edge locations
around the world, so that users can access it quickly and easily from anywhere3. Uploading
content to Amazon EC2 instances in the Region that is closest to the user and copying the
data to Amazon S3 is not correct because this solution adds unnecessary complexity and
cost to the process. Amazon EC2 is a computing service that provides scalable and secure
virtual servers in the cloud. Users can launch, stop, or terminate EC2 instances as needed,
and choose from various instance types, operating systems, and configurations4.
Uploading and storing content in Amazon S3 in the Region that is closest to the user and
using multiple distributions of Amazon CloudFront is not correct because this solution is not
cost-effective or efficient for the use case. As mentioned above, Amazon CloudFront is a
CDN that helps users distribute their content globally with low latency and high transfer
speeds. However, creating multiple CloudFront distributions for each Region would incur
additional charges and management overhead, and would not be necessary since 90% of
the content is consumed within the same Region where it is uploaded3.
References:
What Is Amazon Simple Storage Service? - Amazon Simple Storage Service
Amazon S3 Transfer Acceleration - Amazon Simple Storage Service
What Is Amazon CloudFront? - Amazon CloudFront
What Is Amazon EC2? - Amazon Elastic Compute Cloud
Question # 30
An ecommerce application uses a PostgreSQL database that runs on an Amazon EC2 instance. During a monthly sales event, database usage increases and causes database connection issues for the application. The traffic is unpredictable for subsequent monthly sales events, which impacts the sales forecast. The company needs to maintain performance when there is an unpredictable increase in traffic.
Which solution resolves this issue in the MOST cost-effective way?
A. Migrate the PostgreSQL database to Amazon Aurora Serverless v2.
B. Enable auto scaling for the PostgreSQL database on the EC2 instance to accommodate increased usage.
C. Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a larger instance type.
D. Migrate the PostgreSQL database to Amazon Redshift to accommodate increased usage.
Answer: A
Explanation: Amazon Aurora Serverless v2 is a cost-effective solution that can
automatically scale the database capacity up and down based on the application’s needs. It
can handle unpredictable traffic spikes without requiring any provisioning or management
of database instances. It is compatible with PostgreSQL and offers high performance.
Question # 31
A company's marketing data is uploaded from multiple sources to an Amazon S3 bucket. A series of data preparation jobs aggregate the data for reporting. The data preparation jobs need to run at regular intervals in parallel. A few jobs need to run in a specific order later.
The company wants to remove the operational overhead of job error handling, retry logic, and state management.
Which solution will meet these requirements?
A. Use an AWS Lambda function to process the data as soon as the data is uploaded to the S3 bucket. Invoke other Lambda functions at regularly scheduled intervals.
B. Use Amazon Athena to process the data. Use Amazon EventBridge Scheduler to invoke Athena on a regular interval.
C. Use AWS Glue DataBrew to process the data. Use an AWS Step Functions state machine to run the DataBrew data preparation jobs.
D. Use AWS Data Pipeline to process the data. Schedule Data Pipeline to process the data once at midnight.
Answer: C
Explanation: AWS Glue DataBrew is a visual data preparation tool that allows you to
easily clean, normalize, and transform your data without writing any code. You can create
and run data preparation jobs on your data stored in Amazon S3, Amazon Redshift, or
other data sources. AWS Step Functions is a service that lets you coordinate multiple AWS
services into serverless workflows. You can use Step Functions to orchestrate your
DataBrew jobs, define the order and parallelism of execution, handle errors and retries, and
monitor the state of your workflow. By using AWS Glue DataBrew and AWS Step
Functions, you can meet the requirements of the company with minimal operational
overhead, as you do not need to write any code, manage any servers, or deal with complex
dependencies.
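The orchestration described above can be sketched as an Amazon States Language definition: two DataBrew jobs run in parallel, then a third that depends on them. The job names are placeholders; `arn:aws:states:::databrew:startJobRun.sync` is the optimized Step Functions integration that waits for each job to finish and surfaces failures, so retries live in the state machine rather than in custom code:

```python
import json

definition = {
    "StartAt": "PrepareInParallel",
    "States": {
        "PrepareInParallel": {
            "Type": "Parallel",
            "Branches": [
                {"StartAt": "CleanJob", "States": {"CleanJob": {
                    "Type": "Task",
                    "Resource": "arn:aws:states:::databrew:startJobRun.sync",
                    "Parameters": {"Name": "clean-marketing-data"},
                    # Built-in retry replaces hand-written error handling.
                    "Retry": [{"ErrorEquals": ["States.TaskFailed"],
                               "MaxAttempts": 2}],
                    "End": True}}},
                {"StartAt": "NormalizeJob", "States": {"NormalizeJob": {
                    "Type": "Task",
                    "Resource": "arn:aws:states:::databrew:startJobRun.sync",
                    "Parameters": {"Name": "normalize-marketing-data"},
                    "End": True}}},
            ],
            "Next": "AggregateJob",
        },
        # Runs only after both parallel branches succeed.
        "AggregateJob": {
            "Type": "Task",
            "Resource": "arn:aws:states:::databrew:startJobRun.sync",
            "Parameters": {"Name": "aggregate-for-reporting"},
            "End": True,
        },
    },
}
state_machine_json = json.dumps(definition)
```

An EventBridge Scheduler rule could start this state machine at the required regular intervals.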
References:
AWS Glue DataBrew
AWS Step Functions
Orchestrate AWS Glue DataBrew jobs using AWS Step Functions
Question # 32
A research company uses on-premises devices to generate data for analysis. The company wants to use the AWS Cloud to analyze the data. The devices generate .csv files and support writing the data to an SMB file share. Company analysts must be able to use SQL commands to query the data. The analysts will run queries periodically throughout the day.
Which combination of steps will meet these requirements MOST cost-effectively? (Select THREE.)
A. Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode.
B. Deploy an AWS Storage Gateway on premises in Amazon FSx File Gateway mode.
C. Set up an AWS Glue crawler to create a table based on the data that is in Amazon S3.
D. Set up an Amazon EMR cluster with EMR File System (EMRFS) to query the data that is in Amazon S3. Provide access to analysts.
E. Set up an Amazon Redshift cluster to query the data that is in Amazon S3. Provide access to analysts.
F. Set up Amazon Athena to query the data that is in Amazon S3. Provide access to analysts.
Answer: A,C,F
Explanation: To meet the requirements of the use case in a cost-effective way, the
following steps are recommended:
Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode.
This will allow the company to write the .csv files generated by the devices to an
SMB file share, which will be stored as objects in Amazon S3 buckets. AWS
Storage Gateway is a hybrid cloud storage service that integrates on-premises
environments with AWS storage. Amazon S3 File Gateway mode provides a
seamless way to connect to Amazon S3 and access a virtually unlimited amount of
cloud storage1.
Set up an AWS Glue crawler to create a table based on the data that is in Amazon
S3. This will enable the company to use standard SQL to query the data stored in
Amazon S3 buckets. AWS Glue is a serverless data integration service that
simplifies data preparation and analysis. AWS Glue crawlers can automatically
discover and classify data from various sources, and create metadata tables in the
AWS Glue Data Catalog2. The Data Catalog is a central repository that stores
information about data sources and how to access them3.
Set up Amazon Athena to query the data that is in Amazon S3. This will provide
the company analysts with a serverless and interactive query service that can
analyze data directly in Amazon S3 using standard SQL. Amazon Athena is
integrated with the AWS Glue Data Catalog, so users can easily point Athena at
the data source tables defined by the crawlers. Amazon Athena charges only for
the queries that are run, and offers a pay-per-query pricing model, which makes it
a cost-effective option for periodic queries4.
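For illustration, this is the kind of standard SQL an analyst could run once the Glue crawler has created a table over the .csv objects. The database, table, and column names are assumptions; with boto3 the query would be submitted via `athena.start_query_execution(QueryString=query, ...)`:

```python
# Example Athena query against a crawler-created table over the device .csv data.
query = """
SELECT device_id, COUNT(*) AS samples
FROM research_db.device_readings
WHERE reading_date = DATE '2024-01-15'
GROUP BY device_id
ORDER BY samples DESC
"""
```

Because Athena bills per query scanned, periodic ad hoc queries like this cost far less than keeping an EMR or Redshift cluster running.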
The other options are not correct because they are either not cost-effective or not suitable
for the use case. Deploying an AWS Storage Gateway on premises in Amazon FSx File
Gateway mode is not correct because this mode provides low-latency access to fully
managed Windows file shares in AWS, which is not required for the use case. Setting up
an Amazon EMR cluster with EMR File System (EMRFS) to query the data that is in
Amazon S3 is not correct because this option involves setting up and managing a cluster of
EC2 instances, which adds complexity and cost to the solution. Setting up an Amazon
Redshift cluster to query the data that is in Amazon S3 is not correct because this option
also involves provisioning and managing a cluster of nodes, which adds overhead and cost
to the solution.
References:
What is AWS Storage Gateway?
What is AWS Glue?
AWS Glue Data Catalog
What is Amazon Athena?
Question # 33
A company website hosted on Amazon EC2 instances processes classified data. The application writes data to Amazon Elastic Block Store (Amazon EBS) volumes. The company needs to ensure that all data that is written to the EBS volumes is encrypted at rest.
Which solution will meet this requirement?
A. Create an IAM role that specifies EBS encryption. Attach the role to the EC2 instances.
B. Create the EBS volumes as encrypted volumes. Attach the EBS volumes to the EC2 instances.
C. Create an EC2 instance tag that has a key of Encrypt and a value of True. Tag all instances that require encryption at the EBS level.
D. Create an AWS Key Management Service (AWS KMS) key policy that enforces EBS encryption in the account. Ensure that the key policy is active.
Answer: B
Explanation: The simplest and most effective way to ensure that all data that is written to
the EBS volumes is encrypted at rest is to create the EBS volumes as encrypted volumes.
You can do this by selecting the encryption option when you create a new EBS volume, or
by copying an existing unencrypted volume to a new encrypted volume. You can also
specify the AWS KMS key that you want to use for encryption, or use the default AWS managed key. When you attach the encrypted EBS volumes to the EC2 instances, the data
will be automatically encrypted and decrypted by the EC2 host. This solution does not
require any additional IAM roles, tags, or policies. References:
Amazon EBS encryption
Creating an encrypted EBS volume
Encrypting an unencrypted EBS volume
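As a hedged illustration of option B, a minimal boto3 sketch follows. The volume size, Availability Zone, and gp3 volume type are illustrative assumptions, not part of the question:

```python
# Build the EC2 create_volume request for an encrypted EBS volume.
# Volume type, size, and Availability Zone below are illustrative assumptions.
def encrypted_volume_params(availability_zone, size_gib, kms_key_id=None):
    params = {
        "AvailabilityZone": availability_zone,
        "Size": size_gib,
        "VolumeType": "gp3",
        "Encrypted": True,  # data at rest is encrypted transparently by EBS
    }
    if kms_key_id:
        # Optional: a customer managed KMS key; omit to use the default
        # AWS managed key (aws/ebs).
        params["KmsKeyId"] = kms_key_id
    return params

# Usage (requires AWS credentials):
# import boto3
# ec2 = boto3.client("ec2")
# volume = ec2.create_volume(**encrypted_volume_params("us-east-1a", 100))
```

Because encryption is a property of the volume itself, no change to the application or the instance's IAM role is needed.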
Question # 34
A company has Amazon EC2 instances that run nightly batch jobs to process data. The EC2 instances run in an Auto Scaling group that uses On-Demand billing. If a job fails on one instance, another instance will reprocess the job. The batch jobs run between 12:00 AM and 06:00 AM local time every day. Which solution will provide EC2 instances to meet these requirements MOST cost-effectively?
A. Purchase a 1-year Savings Plan for Amazon EC2 that covers the instance family of the Auto Scaling group that the batch job uses.
B. Purchase a 1-year Reserved Instance for the specific instance type and operating system of the instances in the Auto Scaling group that the batch job uses.
C. Create a new launch template for the Auto Scaling group. Set the instances to Spot Instances. Set a policy to scale out based on CPU usage.
D. Create a new launch template for the Auto Scaling group. Increase the instance size. Set a policy to scale out based on CPU usage.
Answer: C
Explanation: This option is the most cost-effective solution because it leverages the Spot
Instances, which are unused EC2 instances that are available at up to 90% discount
compared to On-Demand prices. Spot Instances can be interrupted by AWS when the
demand for On-Demand instances increases, but since the batch jobs are fault-tolerant and
can be reprocessed by another instance, this is not a major issue. By using a launch
template, the company can specify the configuration of the Spot Instances, such as the
instance type, the operating system, and the user data. By using an Auto Scaling group,
the company can automatically scale the number of Spot Instances based on the CPU
usage, which reflects the load of the batch jobs. This way, the company can optimize the
performance and the cost of the EC2 instances for the nightly batch jobs.
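As a hedged sketch of option C, the launch-template data might look like the following (the AMI ID, instance type, and template name are placeholder assumptions):

```python
# Launch-template data that makes the Auto Scaling group launch Spot Instances.
# The AMI ID and instance type are placeholders.
def spot_launch_template_data(ami_id, instance_type):
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "InstanceMarketOptions": {
            "MarketType": "spot",  # request Spot capacity instead of On-Demand
            "SpotOptions": {
                # The batch jobs are fault tolerant: an interrupted job is
                # simply reprocessed by another instance.
                "SpotInstanceType": "one-time",
                "InstanceInterruptionBehavior": "terminate",
            },
        },
    }

# Usage (requires AWS credentials):
# import boto3
# boto3.client("ec2").create_launch_template(
#     LaunchTemplateName="nightly-batch-spot",
#     LaunchTemplateData=spot_launch_template_data("ami-0123456789abcdef0", "c6i.large"),
# )
```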
A. Purchase a 1-year Savings Plan for Amazon EC2 that covers the instance family of the
Auto Scaling group that the batch job uses. This option is not optimal because it requires a
commitment to a consistent amount of compute usage per hour for a one-year term,
regardless of the instance type, size, region, or operating system. This can limit the flexibility and scalability of the Auto Scaling group and result in overpaying for unused
compute capacity. Moreover, Savings Plans do not provide a capacity reservation; if the
company needed guaranteed capacity, it would still have to create On-Demand Capacity
Reservations separately, which Savings Plans pricing can then discount.
B. Purchase a 1-year Reserved Instance for the specific instance type and operating
system of the instances in the Auto Scaling group that the batch job uses. This option is not
ideal because it requires a commitment to a specific instance configuration for a one-year
term, which can reduce the flexibility and scalability of the Auto Scaling group and result in
overpaying for unused compute capacity. Moreover, regional Reserved Instances do not
provide a capacity reservation, so guaranteed capacity would still require zonal Reserved
Instances or On-Demand Capacity Reservations.
D. Create a new launch template for the Auto Scaling group Increase the instance size Set
a policy to scale out based on CPU usage. This option is not cost-effective because it does
not take advantage of the lower prices of Spot Instances. Increasing the instance size can
improve the performance of the batch jobs, but it can also increase the cost of the On-
Demand instances. Moreover, scaling out based on CPU usage can result in launching
more instances than needed, which can also increase the cost of the system.
References:
1 Spot Instances - Amazon Elastic Compute Cloud
2 Launch templates - Amazon Elastic Compute Cloud
3 Auto Scaling groups - Amazon EC2 Auto Scaling
[4] Savings Plans - Amazon EC2 Reserved Instances and Other AWS Reservation
Models
Question # 35
A company hosts a three-tier web application in the AWS Cloud. A Multi-AZ Amazon RDS for MySQL server forms the database layer. Amazon ElastiCache forms the cache layer. The company wants a caching strategy that adds or updates data in the cache when a customer adds an item to the database. The data in the cache must always match the data in the database. Which solution will meet these requirements?
A. Implement the lazy loading caching strategy.
B. Implement the write-through caching strategy.
C. Implement the adding TTL caching strategy.
D. Implement the AWS AppConfig caching strategy.
Answer: B
Explanation: A write-through caching strategy adds or updates data in the cache
whenever data is written to the database. This ensures that the data in the cache is always
consistent with the data in the database. A write-through caching strategy also reduces the
cache miss penalty, as data is always available in the cache when it is requested.
However, a write-through caching strategy can increase the write latency, as data has to be
written to both the cache and the database. A write-through caching strategy is suitable for
applications that require high data consistency and low read latency.
A lazy loading caching strategy only loads data into the cache when it is requested, and
updates the cache when there is a cache miss. This can result in stale data in the cache,
as data is not updated in the cache when it is changed in the database. A lazy loading
caching strategy is suitable for applications that can tolerate some data inconsistency and
have a low cache miss rate.
An adding TTL caching strategy assigns a time-to-live (TTL) value to each data item in the cache, and removes the data from the cache when the TTL expires. This can help prevent
stale data in the cache, as data is periodically refreshed from the database. However, an
adding TTL caching strategy can also increase the cache miss rate, as data can be evicted
from the cache before it is requested. An adding TTL caching strategy is suitable for
applications that have a high cache hit rate and can tolerate some data inconsistency.
An AWS AppConfig caching strategy is not a valid option, as AWS AppConfig is a service
that enables customers to quickly deploy validated configurations to applications of any
size and scale. AWS AppConfig does not provide a caching layer for web applications.
References: Caching strategies - Amazon ElastiCache, Caching for high-volume workloads
with Amazon ElastiCache
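The write-through behavior described above can be sketched in a few lines of plain Python. This is an in-process stand-in for illustration, not ElastiCache code; the dictionaries stand in for the database and cache layers:

```python
# Minimal in-process sketch of the write-through pattern: every database write
# also updates the cache, so reads never see stale data.
class WriteThroughStore:
    def __init__(self):
        self.database = {}  # stands in for RDS for MySQL
        self.cache = {}     # stands in for ElastiCache

    def write(self, key, value):
        self.database[key] = value  # write to the database...
        self.cache[key] = value     # ...and to the cache, in the same operation

    def read(self, key):
        if key in self.cache:       # cache hit: low-latency path
            return self.cache[key]
        value = self.database[key]  # cache miss: fall back to the database
        self.cache[key] = value
        return value

store = WriteThroughStore()
store.write("item-1", {"name": "widget", "qty": 3})
assert store.read("item-1") == store.database["item-1"]  # cache matches database
```

The extra write latency (two writes per update) is the trade-off for the guarantee that the cache always matches the database.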
Question # 36
A company wants to analyze and troubleshoot Access Denied errors and Unauthorized errors that are related to IAM permissions. The company has AWS CloudTrail turned on. Which solution will meet these requirements with the LEAST effort?
A. Use AWS Glue and write custom scripts to query CloudTrail logs for the errors.
B. Use AWS Batch and write custom scripts to query CloudTrail logs for the errors.
C. Search CloudTrail logs with Amazon Athena queries to identify the errors.
D. Search CloudTrail logs with Amazon QuickSight. Create a dashboard to identify the errors.
Answer: C
Explanation: This solution meets the following requirements:
It is the least effort, as it does not require any additional AWS services, custom
scripts, or data processing steps. Amazon Athena is a serverless interactive query
service that allows you to analyze data in Amazon S3 using standard SQL. You
can use Athena to query CloudTrail logs directly from the S3 bucket where they
are stored, without any data loading or transformation. You can also use the AWS
Management Console, the AWS CLI, or the Athena API to run and manage your
queries.
It is effective, as it allows you to filter, aggregate, and join CloudTrail log data using
SQL syntax. You can use various SQL functions and operators to specify the
criteria for identifying Access Denied and Unauthorized errors, such as the error
code, the user identity, the event source, the event name, the event time, and the
resource ARN. You can also use subqueries, views, and common table
expressions to simplify and optimize your queries.
It is flexible, as it allows you to customize and save your queries for future use.
You can also export the query results to other formats, such as CSV or JSON, or
integrate them with other AWS services, such as Amazon QuickSight, for further
analysis and visualization.
References:
Querying AWS CloudTrail Logs - Amazon Athena
Analyzing Data in S3 using Amazon Athena | AWS Big Data Blog
Troubleshoot IAM permission access denied or unauthorized errors | AWS re:Post
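For illustration, a hedged sketch of such a query follows. The table name cloudtrail_logs and the results bucket are assumptions; the table must first be defined over the CloudTrail logs in S3 (for example, via the Glue Data Catalog):

```python
# SQL that Athena can run against a CloudTrail table to surface the errors.
# "cloudtrail_logs" is a hypothetical table name.
QUERY = """
SELECT eventtime, useridentity.arn, eventsource, eventname, errorcode, errormessage
FROM cloudtrail_logs
WHERE errorcode IN ('AccessDenied', 'Client.UnauthorizedOperation')
ORDER BY eventtime DESC
LIMIT 100
"""

# Usage (requires AWS credentials and an existing Athena table over the logs):
# import boto3
# boto3.client("athena").start_query_execution(
#     QueryString=QUERY,
#     ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
# )
```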
Question # 37
A global company runs its applications in multiple AWS accounts in AWS Organizations. The company's applications use multipart uploads to upload data to multiple Amazon S3 buckets across AWS Regions. The company wants to report on incomplete multipart uploads for cost compliance purposes. Which solution will meet these requirements with the LEAST operational overhead?
A. Configure AWS Config with a rule to report the incomplete multipart upload object count.
B. Create a service control policy (SCP) to report the incomplete multipart upload object count.
C. Configure S3 Storage Lens to report the incomplete multipart upload object count.
D. Create an S3 Multi-Region Access Point to report the incomplete multipart upload object count.
Answer: C
Explanation: S3 Storage Lens is a cloud storage analytics feature that provides
organization-wide visibility into object storage usage and activity across multiple AWS
accounts in AWS Organizations. S3 Storage Lens can report the incomplete multipart
upload object count as one of the metrics that it collects and displays on an interactive
dashboard in the S3 console. S3 Storage Lens can also export metrics in CSV or Parquet
format to an S3 bucket for further analysis. This solution will meet the requirements with the
least operational overhead, as it does not require any code development or policy changes.
References:
1 explains how to use S3 Storage Lens to gain insights into S3 storage usage and
activity.
2 describes the concept and benefits of multipart uploads.
Question # 38
A company has stored 10 TB of log files in Apache Parquet format in an Amazon S3 bucket. The company occasionally needs to use SQL to analyze the log files. Which solution will meet these requirements MOST cost-effectively?
A. Create an Amazon Aurora MySQL database. Migrate the data from the S3 bucket into Aurora by using AWS Database Migration Service (AWS DMS). Issue SQL statements to the Aurora database.
B. Create an Amazon Redshift cluster. Use Redshift Spectrum to run SQL statements directly on the data in the S3 bucket.
C. Create an AWS Glue crawler to store and retrieve table metadata from the S3 bucket. Use Amazon Athena to run SQL statements directly on the data in the S3 bucket.
D. Create an Amazon EMR cluster. Use Apache Spark SQL to run SQL statements directly on the data in the S3 bucket.
Answer: C
Explanation: AWS Glue is a serverless data integration service that can crawl, catalog,
and prepare data for analysis. AWS Glue can automatically discover the schema and
partitioning of the data stored in Apache Parquet format in S3, and create a table in the
AWS Glue Data Catalog. Amazon Athena is a serverless interactive query service that can
run SQL queries directly on data in S3, without requiring any data loading or
transformation. Athena can use the table metadata from the AWS Glue Data Catalog to
query the data in S3. By using AWS Glue and Athena, you can analyze the log files in S3
most cost-effectively, as you only pay for the resources consumed by the crawler and the
queries, and you do not need to provision or manage any servers or clusters.
References:
AWS Glue
Amazon Athena
Analyzing Data in S3 using Amazon Athena
Question # 39
A pharmaceutical company is developing a new drug. The volume of data that the company generates has grown exponentially over the past few months. The company's researchers regularly require a subset of the entire dataset to be immediately available with minimal lag. However, the entire dataset does not need to be accessed on a daily basis. All the data currently resides in on-premises storage arrays, and the company wants to reduce ongoing capital expenses. Which storage solution should a solutions architect recommend to meet these requirements?
A. Run AWS DataSync as a scheduled cron job to migrate the data to an Amazon S3 bucket on an ongoing basis.
B. Deploy an AWS Storage Gateway file gateway with an Amazon S3 bucket as the target storage. Migrate the data to the Storage Gateway appliance.
C. Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as the target storage. Migrate the data to the Storage Gateway appliance.
D. Configure an AWS Site-to-Site VPN connection from the on-premises environment to AWS. Migrate data to an Amazon Elastic File System (Amazon EFS) file system.
Answer: C
Explanation: AWS Storage Gateway is a hybrid cloud storage service that allows you to
seamlessly integrate your on-premises applications with AWS cloud storage. Volume
Gateway is a type of Storage Gateway that presents cloud-backed iSCSI block storage
volumes to your on-premises applications. Volume Gateway operates in either cache mode
or stored mode. In cache mode, your primary data is stored in Amazon S3, while retaining
your frequently accessed data locally in the cache for low latency access. In stored mode,
your primary data is stored locally and your entire dataset is available for low latency
access on premises while also asynchronously getting backed up to Amazon S3.
For the pharmaceutical company’s use case, cache mode is the most suitable option, as it
meets the following requirements:
It reduces the need to scale the on-premises storage infrastructure, as most of the
data is stored in Amazon S3, which is scalable, durable, and cost-effective.
It provides low latency access to the subset of the data that the researchers
regularly require, as it is cached locally in the Storage Gateway appliance.
It does not require the entire dataset to be accessed on a daily basis, as it is
stored in Amazon S3 and can be retrieved on demand.
It offers flexible data protection and recovery options, as it allows taking point-in-time
copies of the volumes using AWS Backup, which are stored in AWS as
Amazon EBS snapshots.
Therefore, the solutions architect should recommend deploying an AWS Storage Gateway
volume gateway with cached volumes with an Amazon S3 bucket as the target storage and
migrating the data to the Storage Gateway appliance.
References:
Volume Gateway | Amazon Web Services
How Volume Gateway works (architecture) - AWS Storage Gateway
Question # 40
A company runs a three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances run in an Auto Scaling group for the application tier. The company needs to make an automated scaling plan that will analyze each resource's daily and weekly historical workload trends. The configuration must scale resources appropriately according to both the forecast and live changes in utilization. Which scaling strategy should a solutions architect recommend to meet these requirements?
A. Implement dynamic scaling with step scaling based on average CPU utilization from the EC2 instances.
B. Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking.
C. Create an automated scheduled scaling action based on the traffic patterns of the web application.
D. Set up a simple scaling policy. Increase the cooldown period based on the EC2 instance startup time.
Answer: B
Explanation:
This solution meets the requirements because it allows the company to use both predictive
scaling and dynamic scaling to optimize the capacity of its Auto Scaling group. Predictive
scaling uses machine learning to analyze historical data and forecast future traffic patterns.
It then adjusts the desired capacity of the group in advance of the predicted changes.
Dynamic scaling uses target tracking to maintain a specified metric (such as CPU
utilization) at a target value. It scales the group in or out as needed to keep the metric close to the target. By using both scaling methods, the company can benefit from faster, simpler,
and more accurate scaling that responds to both forecasted and live changes in utilization.
References:
Predictive scaling for Amazon EC2 Auto Scaling
[Target tracking scaling policies for Amazon EC2 Auto Scaling]
Question # 41
A company deployed a serverless application that uses Amazon DynamoDB as a database layer. The application has experienced a large increase in users. The company wants to improve database response time from milliseconds to microseconds and to cache requests to the database. Which solution will meet these requirements with the LEAST operational overhead?
A. Use DynamoDB Accelerator (DAX).
B. Migrate the database to Amazon Redshift.
C. Migrate the database to Amazon RDS.
D. Use Amazon ElastiCache for Redis.
Answer: A
Explanation: DynamoDB Accelerator (DAX) is a fully managed, highly available caching
service built for Amazon DynamoDB. DAX delivers up to a 10 times performance
improvement—from milliseconds to microseconds—even at millions of requests per
second. DAX does all the heavy lifting required to add in-memory acceleration to your
DynamoDB tables, without requiring developers to manage cache invalidation, data
population, or cluster management. Now you can focus on building great applications for
your customers without worrying about performance at scale. You do not need to modify
application logic because DAX is compatible with existing DynamoDB API calls. This
solution will meet the requirements with the least operational overhead, as it does not
require any code development or manual intervention. References:
1 provides an overview of Amazon DynamoDB Accelerator (DAX) and its benefits.
2 explains how to use DAX with DynamoDB for in-memory acceleration.
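The API compatibility mentioned above is the key operational point: the data-access code stays the same and only the client object is swapped. The sketch below illustrates this; the table name, key attribute, and cluster endpoint are assumptions, and the DAX client shown assumes the amazondax Python package:

```python
# Because DAX is API-compatible with DynamoDB, the data-access code is
# unchanged; only the client object is swapped. Table and key names are
# illustrative assumptions.
def get_user(client, user_id):
    response = client.get_item(
        TableName="Users",
        Key={"UserId": {"S": user_id}},
    )
    return response.get("Item")

# Plain DynamoDB client (millisecond latency):
#   import boto3
#   client = boto3.client("dynamodb")
# DAX client (microsecond latency on cache hits), assuming the amazondax
# package and a hypothetical cluster endpoint:
#   from amazondax import AmazonDaxClient
#   client = AmazonDaxClient(
#       endpoint_url="dax://my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com")
# Either client works with get_user() unchanged.
```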
Question # 42
An online video game company must maintain ultra-low latency for its game servers. The game servers run on Amazon EC2 instances. The company needs a solution that can handle millions of UDP internet traffic requests each second. Which solution will meet these requirements MOST cost-effectively?
A. Configure an Application Load Balancer with the required protocol and ports for the internet traffic. Specify the EC2 instances as the targets.
B. Configure a Gateway Load Balancer for the internet traffic. Specify the EC2 instances as the targets.
C. Configure a Network Load Balancer with the required protocol and ports for the internet traffic. Specify the EC2 instances as the targets.
D. Launch an identical set of game servers on EC2 instances in separate AWS Regions. Route internet traffic to both sets of EC2 instances.
Answer: C
Explanation: The most cost-effective solution for the online video game company is to
configure a Network Load Balancer with the required protocol and ports for the internet
traffic and specify the EC2 instances as the targets. This solution will enable the company
to handle millions of UDP requests per second with ultra-low latency and high performance.
A Network Load Balancer is a type of Elastic Load Balancing that operates at the
connection level (Layer 4) and routes traffic to targets (EC2 instances, microservices, or
containers) within Amazon VPC based on IP protocol data. A Network Load Balancer is
ideal for load balancing of both TCP and UDP traffic, as it is capable of handling millions of
requests per second while maintaining high throughput at ultra-low latency. A Network
Load Balancer also preserves the source IP address of the clients to the back-end
applications, which can be useful for logging or security purposes1.
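A hedged boto3 sketch of option C follows; the load balancer and target group names, the game port 7777, and the VPC/subnet IDs are placeholder assumptions:

```python
# Parameters for a UDP target group behind a Network Load Balancer.
def udp_target_group_params(vpc_id, port=7777):
    return {
        "Name": "game-servers",
        "Protocol": "UDP",         # NLB load-balances UDP at layer 4
        "Port": port,
        "VpcId": vpc_id,
        "TargetType": "instance",  # the EC2 game servers are the targets
    }

# Usage (requires AWS credentials):
# import boto3
# elbv2 = boto3.client("elbv2")
# nlb = elbv2.create_load_balancer(Name="game-nlb", Type="network",
#                                  Scheme="internet-facing",
#                                  Subnets=["subnet-0abc", "subnet-0def"])
# tg = elbv2.create_target_group(**udp_target_group_params("vpc-0abc"))
# elbv2.create_listener(
#     LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
#     Protocol="UDP", Port=7777,
#     DefaultActions=[{"Type": "forward",
#                      "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}])
```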
Question # 43
A company maintains an Amazon RDS database that maps users to cost centers. The company has accounts in an organization in AWS Organizations. The company needs a solution that will tag all resources that are created in a specific AWS account in the organization. The solution must tag each resource with the cost center ID of the user who created the resource. Which solution will meet these requirements?
A. Move the specific AWS account to a new organizational unit (OU) in Organizations from the management account. Create a service control policy (SCP) that requires all existing resources to have the correct cost center tag before the resources are created. Apply the SCP to the new OU.
B. Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function.
C. Create an AWS CloudFormation stack to deploy an AWS Lambda function. Configure the Lambda function to look up the appropriate cost center from the RDS database and to tag resources. Create an Amazon EventBridge scheduled rule to invoke the CloudFormation stack.
D. Create an AWS Lambda function to tag the resources with a default value. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function when a resource is missing the cost center tag.
Answer: B
Explanation: AWS Lambda is a serverless compute service that lets you run code without
provisioning or managing servers. Lambda can be used to tag resources with the cost
center ID of the user who created the resource, by querying the RDS database that maps
users to cost centers. Amazon EventBridge is a serverless event bus service that enables
event-driven architectures. EventBridge can be configured to react to AWS CloudTrail
events, which are recorded API calls made by or on behalf of the AWS account.
EventBridge can invoke the Lambda function when a resource is created in the specific
AWS account, passing the user identity and resource information as parameters. This
solution will meet the requirements, as it enables automatic tagging of resources based on
the user and cost center mapping.
References:
1 provides an overview of AWS Lambda and its benefits.
2 provides an overview of Amazon EventBridge and its benefits.
3 explains the concept and benefits of AWS CloudTrail events.
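A hedged sketch of the Lambda handler for option B follows. The CloudTrail record shape shown is for an EC2 RunInstances call (other APIs expose resource IDs differently), and the cost-center lookup is stubbed where the real function would query the RDS database:

```python
def lookup_cost_center(user_arn):
    # Stub: the real function would query the RDS table that maps
    # users to cost centers.
    return "cc-1234"

def handler(event, context=None):
    detail = event["detail"]  # the CloudTrail record delivered by EventBridge
    user_arn = detail["userIdentity"]["arn"]
    # RunInstances reports the new instance IDs in responseElements;
    # other APIs expose their resource IDs differently.
    instance_ids = [
        item["instanceId"]
        for item in detail["responseElements"]["instancesSet"]["items"]
    ]
    cost_center = lookup_cost_center(user_arn)
    # Tag the resources (requires AWS credentials):
    # import boto3
    # boto3.client("ec2").create_tags(
    #     Resources=instance_ids,
    #     Tags=[{"Key": "CostCenter", "Value": cost_center}],
    # )
    return {"resources": instance_ids, "CostCenter": cost_center}
```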
Question # 44
A company is designing a tightly coupled high performance computing (HPC) environment in the AWS Cloud. The company needs to include features that will optimize the HPC environment for networking and storage. Which combination of solutions will meet these requirements? (Select TWO.)
A. Create an accelerator in AWS Global Accelerator. Configure custom routing for the accelerator.
B. Create an Amazon FSx for Lustre file system. Configure the file system with scratch storage.
C. Create an Amazon CloudFront distribution. Configure the viewer protocol policy to be HTTP and HTTPS.
D. Launch Amazon EC2 instances. Attach an Elastic Fabric Adapter (EFA) to the instances.
E. Create an AWS Elastic Beanstalk deployment to manage the environment.
Answer: B,D
Explanation: These two solutions will optimize the HPC environment for networking and
storage. Amazon FSx for Lustre is a fully managed service that provides cost-effective,
high-performance, scalable storage for compute workloads. It is built on the world’s most
popular high-performance file system, Lustre, which is designed for applications that
require fast storage, such as HPC and machine learning. By configuring the file system
with scratch storage, you can achieve sub-millisecond latencies, up to hundreds of GBs/s
of throughput, and millions of IOPS. Scratch file systems are ideal for temporary storage
and shorter-term processing of data. Data is not replicated and does not persist if a file
server fails. For more information, see Amazon FSx for Lustre.
Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables
customers to run applications requiring high levels of inter-node communications at scale
on AWS. Its custom-built operating system (OS) bypass hardware interface enhances the
performance of inter-instance communications, which is critical to scaling HPC and
machine learning applications. EFA provides a low-latency, low-jitter channel for
inter-instance communications, enabling your tightly-coupled HPC or distributed machine
learning applications to scale to thousands of cores. EFA uses libfabric interface and
libfabric APIs for communications, which are supported by most HPC programming
models. For more information, see Elastic Fabric Adapter.
The other solutions are not suitable for optimizing the HPC environment for networking and
storage. AWS Global Accelerator is a networking service that helps you improve the
availability, performance, and security of your public applications by using the AWS global
network. It provides two global static public IPs, deterministic routing, fast failover, and TCP
termination at the edge for your application endpoints. However, it does not support
OS-bypass capabilities or high-performance file systems that are required for HPC and
machine learning applications. For more information, see AWS Global Accelerator.
Amazon CloudFront is a content delivery network (CDN) service that securely delivers
data, videos, applications, and APIs to customers globally with low latency, high transfer
speeds, all within a developer-friendly environment. CloudFront is integrated with AWS
services such as Amazon S3, Amazon EC2, AWS Elemental Media Services, AWS Shield,
AWS WAF, and AWS Lambda@Edge. However, CloudFront is not designed for HPC and
machine learning applications that require high levels of inter-node communications and
fast storage. For more information, see [Amazon CloudFront].
AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web
applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go,
and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. You can
simply upload your code and Elastic Beanstalk automatically handles the deployment, from
capacity provisioning, load balancing, auto-scaling to application health monitoring.
However, Elastic Beanstalk is not optimized for HPC and machine learning applications
that require OS-bypass capabilities and high-performance file systems. For more
information, see [AWS Elastic Beanstalk].
References: Amazon FSx for Lustre, Elastic Fabric Adapter, AWS Global Accelerator,
[Amazon CloudFront], [AWS Elastic Beanstalk].
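As a hedged sketch of options B and D, the two API requests might look like the following; the subnet ID, AMI, storage capacity, and the EFA-capable instance type are assumptions:

```python
# Request parameters for an FSx for Lustre scratch file system.
def scratch_lustre_params(subnet_id, capacity_gib=1200):
    return {
        "FileSystemType": "LUSTRE",
        "StorageCapacity": capacity_gib,  # scratch deployments start at 1200 GiB
        "SubnetIds": [subnet_id],
        "LustreConfiguration": {"DeploymentType": "SCRATCH_2"},
    }

# Request parameters for an EC2 instance with an Elastic Fabric Adapter.
def efa_instance_params(ami_id, subnet_id):
    return {
        "ImageId": ami_id,
        "InstanceType": "c5n.18xlarge",  # an EFA-capable instance type
        "MinCount": 1,
        "MaxCount": 1,
        "NetworkInterfaces": [{
            "DeviceIndex": 0,
            "SubnetId": subnet_id,
            "InterfaceType": "efa",  # attaches the EFA instead of a plain ENI
        }],
    }

# Usage (requires AWS credentials):
# import boto3
# boto3.client("fsx").create_file_system(**scratch_lustre_params("subnet-0abc"))
# boto3.client("ec2").run_instances(
#     **efa_instance_params("ami-0123456789abcdef0", "subnet-0abc"))
```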
Question # 45
A company is running a photo hosting service in the us-east-1 Region. The service enables users across multiple countries to upload and view photos. Some photos are heavily viewed for months, and others are viewed for less than a week. The application allows uploads of up to 20 MB for each photo. The service uses the photo metadata to determine which photos to display to each user. Which solution provides the appropriate user access MOST cost-effectively?
A. Store the photos in Amazon DynamoDB. Turn on DynamoDB Accelerator (DAX) to cache frequently viewed items.
B. Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3 location in DynamoDB.
C. Store the photos in the Amazon S3 Standard storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Use the object tags to keep track of metadata.
D. Store the photos in the Amazon S3 Glacier storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Glacier Deep Archive storage class. Store the photo metadata and its S3 location in Amazon OpenSearch Service.
Answer: B
Explanation: This solution provides the appropriate user access most cost-effectively
because it uses the Amazon S3 Intelligent-Tiering storage class, which automatically
optimizes storage costs by moving data to the most cost-effective access tier when access patterns change, without performance impact or operational overhead1. This storage class
is ideal for data with unknown, changing, or unpredictable access patterns, such as photos
that are heavily viewed for months or less than a week. By storing the photo metadata and
its S3 location in DynamoDB, the application can quickly query and retrieve the relevant
photos for each user. DynamoDB is a fast, scalable, and fully managed NoSQL database
service that supports key-value and document data models2.
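A hedged sketch of option B follows; the bucket, table, key, and attribute names are illustrative assumptions:

```python
# Upload parameters that place a photo directly in Intelligent-Tiering.
def photo_upload_params(bucket, key):
    return {
        "Bucket": bucket,
        "Key": key,
        # S3 then moves the object between access tiers automatically.
        "StorageClass": "INTELLIGENT_TIERING",
    }

# DynamoDB item that records each photo's metadata and S3 location.
def photo_metadata_item(photo_id, bucket, key, uploaded_at):
    return {
        "PhotoId": {"S": photo_id},
        "S3Location": {"S": f"s3://{bucket}/{key}"},
        "UploadedAt": {"S": uploaded_at},
    }

# Usage (requires AWS credentials):
# import boto3
# with open("photo.jpg", "rb") as f:
#     boto3.client("s3").put_object(Body=f,
#         **photo_upload_params("photo-bucket", "u-42/photo.jpg"))
# boto3.client("dynamodb").put_item(TableName="PhotoMetadata",
#     Item=photo_metadata_item("p-1", "photo-bucket", "u-42/photo.jpg",
#                              "2024-01-01T00:00:00Z"))
```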
Question # 46
A company is designing a new web application that will run on Amazon EC2 instances. The application will use Amazon DynamoDB for backend data storage. The application traffic will be unpredictable. The company expects that the application read and write throughput to the database will be moderate to high. The company needs to scale in response to application traffic. Which DynamoDB table configuration will meet these requirements MOST cost-effectively?
A. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard table class. Set DynamoDB auto scaling to a maximum defined capacity.
B. Configure DynamoDB in on-demand mode by using the DynamoDB Standard table class.
C. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class. Set DynamoDB auto scaling to a maximum defined capacity.
D. Configure DynamoDB in on-demand mode by using the DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class.
Answer: B
Explanation: The most cost-effective DynamoDB table configuration for the web
application is to configure DynamoDB in on-demand mode by using the DynamoDB
Standard table class. This configuration will allow the company to scale in response to
application traffic and pay only for the read and write requests that the application performs
on the table.
On-demand mode is a flexible billing option that can handle thousands of requests per
second without capacity planning. On-demand mode automatically adjusts the table’s
capacity based on the incoming traffic, and charges only for the read and write requests
that are actually performed. On-demand mode is suitable for applications with
unpredictable or variable workloads, or applications that prefer the ease of paying for only
what they use1.
The DynamoDB Standard table class is the default and recommended table class for most
workloads. The DynamoDB Standard table class offers lower throughput costs than the
DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class, and is more
cost-effective for tables where throughput is the dominant cost. The DynamoDB Standard
table class also offers the same performance, durability, and availability as the DynamoDB
Standard-IA table class2.
The other options are not correct because they are either not cost-effective or not suitable
for the use case. Configuring DynamoDB with provisioned read and write by using the
DynamoDB Standard table class, and setting DynamoDB auto scaling to a maximum
defined capacity is not correct because this configuration requires manual estimation and
management of the table’s capacity, which adds complexity and cost to the solution.
Provisioned mode is a billing option that requires users to specify the amount of read and
write capacity units for their tables, and charges for the reserved capacity regardless of
usage. Provisioned mode is suitable for applications with predictable or stable workloads,
or applications that require finer-grained control over their capacity settings1. Configuring
DynamoDB with provisioned read and write by using the DynamoDB Standard-Infrequent
Access (DynamoDB Standard-IA) table class, and setting DynamoDB auto scaling to a
maximum defined capacity is not correct because this configuration is not cost-effective for
tables with moderate to high throughput. The DynamoDB Standard-IA table class offers
lower storage costs than the DynamoDB Standard table class, but higher throughput costs.
The DynamoDB Standard-IA table class is optimized for tables where storage is the
dominant cost, such as tables that store infrequently accessed data. Configuring
DynamoDB in on-demand mode by using the DynamoDB Standard-Infrequent Access
(DynamoDB Standard-IA) table class is not correct because this configuration is not cost-effective
for tables with moderate to high throughput. As mentioned above, the DynamoDB
Standard-IA table class has higher throughput costs than the DynamoDB Standard table
class, which can offset the savings from lower storage costs.
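The recommended configuration can be sketched as the parameters one might pass to a boto3 `create_table` call. This is a minimal illustration only; the table and attribute names ("Reservations", "ReservationId") are hypothetical:

```python
# Sketch of CreateTable parameters for on-demand billing with the
# Standard table class. Names are examples, not from the scenario.
def build_create_table_params(table_name="Reservations"):
    return {
        "TableName": table_name,
        "AttributeDefinitions": [
            {"AttributeName": "ReservationId", "AttributeType": "S"},
        ],
        "KeySchema": [
            {"AttributeName": "ReservationId", "KeyType": "HASH"},
        ],
        # On-demand mode: pay per request, no capacity planning required.
        "BillingMode": "PAY_PER_REQUEST",
        # Standard table class: lowest throughput cost.
        "TableClass": "STANDARD",
    }

params = build_create_table_params()
print(params["BillingMode"], params["TableClass"])
```

Note that no `ProvisionedThroughput` block appears: with `PAY_PER_REQUEST`, no read or write capacity units are specified.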
References:
Table classes - Amazon DynamoDB
Read/write capacity mode - Amazon DynamoDB
Question # 47
A company's web application that is hosted in the AWS Cloud recently increased in popularity. The web application currently exists on a single Amazon EC2 instance in a single public subnet. The web application has not been able to meet the demand of the increased web traffic. The company needs a solution that will provide high availability and scalability to meet the increased user demand without rewriting the web application. Which combination of steps will meet these requirements? (Select TWO.)
A. Replace the EC2 instance with a larger compute optimized instance.
B. Configure Amazon EC2 Auto Scaling with multiple Availability Zones in private subnets.
C. Configure a NAT gateway in a public subnet to handle web requests.
D. Replace the EC2 instance with a larger memory optimized instance.
E. Configure an Application Load Balancer in a public subnet to distribute web traffic.
Answer: B,E
Explanation:
These two steps will meet the requirements because they will provide high availability and
scalability for the web application without rewriting it. Amazon EC2 Auto Scaling allows you
to automatically adjust the number of EC2 instances in response to changes in demand. By
configuring Auto Scaling with multiple Availability Zones in private subnets, you can ensure
that your web application is distributed across isolated and fault-tolerant locations, and that
your instances are not directly exposed to the internet. An Application Load Balancer
operates at the application layer and distributes incoming web traffic across multiple
targets, such as EC2 instances, containers, or Lambda functions. By configuring an
Application Load Balancer in a public subnet, you can enable your web application to
handle requests from the internet and route them to the appropriate targets in the private
subnets.
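As a rough illustration of the two steps, the key settings could be sketched as the parameter shapes below. The names, subnet IDs, and ARN are placeholders, not real resources:

```python
# Hypothetical parameters for an internet-facing ALB in public subnets.
alb_params = {
    "Name": "web-alb",
    "Scheme": "internet-facing",          # reachable from the internet
    "Subnets": ["subnet-public-a", "subnet-public-b"],
    "Type": "application",
}

# Hypothetical Auto Scaling group spanning private subnets in two AZs,
# registered with the ALB's target group so traffic reaches the instances.
asg_params = {
    "AutoScalingGroupName": "web-asg",
    "MinSize": 2,
    "MaxSize": 10,
    "VPCZoneIdentifier": "subnet-private-a,subnet-private-b",
    "TargetGroupARNs": ["arn:aws:elasticloadbalancing:...:targetgroup/web"],
}

print(len(asg_params["VPCZoneIdentifier"].split(",")), "AZs covered")
```

The point of the split is visible in the parameters: only the load balancer sits in public subnets, while the scaled instances stay private.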
References:
What is Amazon EC2 Auto Scaling?
What is an Application Load Balancer?
Question # 48
A company is designing a web application on AWS. The application will use a VPN connection between the company's existing data centers and the company's VPCs. The company uses Amazon Route 53 as its DNS service. The application must use private DNS records to communicate with the on-premises services from a VPC. Which solution will meet these requirements in the MOST secure manner?
A. Create a Route 53 Resolver outbound endpoint. Create a resolver rule. Associate the resolver rule with the VPC.
B. Create a Route 53 Resolver inbound endpoint. Create a resolver rule. Associate the resolver rule with the VPC.
C. Create a Route 53 private hosted zone. Associate the private hosted zone with the VPC.
D. Create a Route 53 public hosted zone. Create a record for each service to allow service communication.
Answer: A
Explanation: To meet the requirements of the web application in the most secure manner,
the company should create a Route 53 Resolver outbound endpoint, create a resolver rule,
and associate the resolver rule with the VPC. This solution will allow the application to use
private DNS records to communicate with the on-premises services from a VPC. Route 53
Resolver is a service that enables DNS resolution between on-premises networks and
AWS VPCs. An outbound endpoint is a set of IP addresses that Resolver uses to forward
DNS queries from a VPC to resolvers on an on-premises network. A resolver rule is a rule
that specifies the domain names for which Resolver forwards DNS queries to the IP
addresses that you specify in the rule. By creating an outbound endpoint and a resolver
rule, and associating them with the VPC, the company can securely resolve DNS queries
for the on-premises services using private DNS records.
The other options are not correct because they do not meet the requirements or are not
secure. Creating a Route 53 Resolver inbound endpoint, creating a resolver rule, and
associating the resolver rule with the VPC is not correct because this solution will allow
DNS queries from on-premises networks to access resources in a VPC, not vice versa. An
inbound endpoint is a set of IP addresses that Resolver uses to receive DNS queries from
resolvers on an on-premises network. Creating a Route 53 private hosted zone and
associating it with the VPC is not correct because this solution will only allow DNS
resolution for resources within the VPC or other VPCs that are associated with the same
hosted zone. A private hosted zone is a container for DNS records that are only accessible
from one or more VPCs. Creating a Route 53 public hosted zone and creating a record for
each service to allow service communication is not correct because this solution will expose the on-premises services to the public internet, which is not secure. A public hosted
zone is a container for DNS records that are accessible from anywhere on the internet.
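The outbound endpoint and forwarding rule described above might be expressed with parameter shapes like these. The domain name, IP addresses, and IDs are illustrative placeholders:

```python
# Sketch of Route 53 Resolver parameters: an OUTBOUND endpoint forwards
# queries from the VPC toward the on-premises DNS servers named in a rule.
outbound_endpoint = {
    "Direction": "OUTBOUND",
    "SecurityGroupIds": ["sg-resolver"],
    "IpAddresses": [{"SubnetId": "subnet-a"}, {"SubnetId": "subnet-b"}],
}

resolver_rule = {
    "RuleType": "FORWARD",
    # Queries for this private domain go to the on-premises resolver.
    "DomainName": "corp.example.com",
    "TargetIps": [{"Ip": "10.0.100.53", "Port": 53}],
}

# The rule is then associated with the VPC so its instances use it.
association = {"ResolverRuleId": "rslvr-rr-example", "VPCId": "vpc-example"}
print(outbound_endpoint["Direction"], resolver_rule["RuleType"])
```

The `Direction` field is the crux of the exam distinction: `OUTBOUND` forwards VPC queries toward on premises, while an `INBOUND` endpoint would do the reverse.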
References:
Resolving DNS queries between VPCs and your network - Amazon Route 53
Working with rules - Amazon Route 53
Working with private hosted zones - Amazon Route 53
Question # 49
A media company stores movies in Amazon S3. Each movie is stored in a single video file that ranges from 1 GB to 10 GB in size. The company must be able to provide the streaming content of a movie within 5 minutes of a user purchase. There is higher demand for movies that are less than 20 years old than for movies that are more than 20 years old. The company wants to minimize hosting service costs based on demand. Which solution will meet these requirements?
A. Store all media content in Amazon S3. Use S3 Lifecycle policies to move media data into the Infrequent Access tier when the demand for a movie decreases.
B. Store newer movie video files in S3 Standard. Store older movie video files in S3 Standard-Infrequent Access (S3 Standard-IA). When a user orders an older movie, retrieve the video file by using standard retrieval.
C. Store newer movie video files in S3 Intelligent-Tiering. Store older movie video files in S3 Glacier Flexible Retrieval. When a user orders an older movie, retrieve the video file by using expedited retrieval.
D. Store newer movie video files in S3 Standard. Store older movie video files in S3 Glacier Flexible Retrieval. When a user orders an older movie, retrieve the video file by using bulk retrieval.
Answer: C
Explanation: This solution will meet the requirements of minimizing hosting service costs
based on demand and providing the streaming content of a movie within 5 minutes of a user purchase. S3 Intelligent-Tiering is a storage class that automatically optimizes storage
costs by moving data to the most cost-effective access tier when access patterns
change. It is suitable for data with unknown, changing, or unpredictable access patterns,
such as newer movies that may have higher demand. S3 Glacier Flexible Retrieval is a
storage class that provides low-cost storage for archive data that is retrieved
asynchronously. It offers flexible data retrieval options from minutes to hours, and free bulk
retrievals in 5-12 hours. It is ideal for backup, disaster recovery, and offsite data storage
needs. By using expedited retrieval, the user can access the older movie video file in 1-5
minutes, which meets the requirement of 5 minutes.
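Triggering the expedited retrieval of an archived movie could be sketched as the parameters of an S3 `RestoreObject` request. The bucket and key names are hypothetical:

```python
# Sketch of a RestoreObject request using the Expedited retrieval tier,
# which typically makes Glacier Flexible Retrieval data available in
# 1-5 minutes, within the scenario's 5-minute requirement.
def build_restore_request(bucket, key, days=1, tier="Expedited"):
    return {
        "Bucket": bucket,
        "Key": key,
        "RestoreRequest": {
            "Days": days,  # how long the restored copy remains available
            "GlacierJobParameters": {"Tier": tier},
        },
    }

req = build_restore_request("movies-archive", "older/movie-1987.mp4")
print(req["RestoreRequest"]["GlacierJobParameters"]["Tier"])
```

Swapping `"Expedited"` for `"Standard"` or `"Bulk"` is the difference between minutes and hours, which is why options B and D fail the 5-minute requirement.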
References:
Amazon S3 Glacier Flexible Retrieval and Glacier Deep Archive Retrieval, Amazon S3 Glacier Flexible Retrieval section
Amazon S3 Glacier Flexible Retrieval and Glacier Deep Archive Retrieval, Retrieval Rates section
Question # 50
A business application is hosted on Amazon EC2 and uses Amazon S3 for encrypted object storage. The chief information security officer has directed that no application traffic between the two services should traverse the public internet. Which capability should the solutions architect use to meet the compliance requirements?
A. AWS Key Management Service (AWS KMS)
B. VPC endpoint
C. Private subnet
D. Virtual private gateway
Answer: B
Question # 51
To meet security requirements, a company needs to encrypt all of its application data in transit while communicating with an Amazon RDS MySQL DB instance. A recent security audit revealed that encryption at rest is enabled using AWS Key Management Service (AWS KMS), but data in transit is not enabled. What should a solutions architect do to satisfy the security requirements?
A. Enable IAM database authentication on the database.
B. Provide self-signed certificates. Use the certificates in all connections to the RDS instance.
C. Take a snapshot of the RDS instance. Restore the snapshot to a new instance with encryption enabled.
D. Download AWS-provided root certificates. Provide the certificates in all connections to the RDS instance.
Answer: D
Explanation: To satisfy the security requirements, the solutions architect should download
AWS-provided root certificates and provide the certificates in all connections to the RDS
instance. This will enable SSL/TLS encryption for data in transit between the application
and the RDS instance. SSL/TLS encryption provides a layer of security by encrypting data
that moves between the client and the server. Amazon RDS creates an SSL certificate and installs the certificate on the DB instance when the instance is provisioned. The application
can use the AWS-provided root certificates to verify the identity of the DB instance and
establish a secure connection.
The other options are not correct because they do not enable encryption for data in transit
or are not relevant for the use case. Enabling IAM database authentication on the database
is not correct because this option only provides a method of authentication, not encryption.
IAM database authentication allows users to use AWS Identity and Access Management
(IAM) users and roles to access a database, instead of using a database user name and
password. Providing self-signed certificates is not correct because this option is not
secure or reliable. Self-signed certificates are certificates that are signed by the same entity
that issued them, instead of by a trusted certificate authority (CA). Self-signed certificates
can be easily forged or compromised, and are not recognized by most browsers and
applications. Taking a snapshot of the RDS instance and restoring it to a new instance
with encryption enabled is not correct because this option only enables encryption at rest,
not encryption in transit. Encryption at rest protects data that is stored on disk, but does not
protect data that is moving between the client and the server.
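A TLS connection using the AWS-provided root certificate bundle might be configured with parameters like the following sketch. The hostname is a placeholder, and `global-bundle.pem` stands in for the downloaded AWS root certificate bundle:

```python
# Sketch of TLS connection parameters for an RDS MySQL endpoint, as one
# might pass to a MySQL client library. The CA bundle lets the client
# verify the server certificate that RDS installed on the DB instance.
def build_tls_connection_params(host, ca_path="global-bundle.pem"):
    return {
        "host": host,
        "port": 3306,
        "ssl_ca": ca_path,           # verify server cert against AWS root CA
        "ssl_verify_identity": True, # also check the hostname matches
    }

params = build_tls_connection_params("mydb.example.us-east-1.rds.amazonaws.com")
print(params["ssl_ca"], params["ssl_verify_identity"])
```

Verifying both the certificate chain and the hostname is what prevents a man-in-the-middle from impersonating the DB instance, which self-signed certificates cannot guarantee.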
References:
Using SSL/TLS to encrypt a connection to a DB instance - Amazon Relational
Database Service
IAM database authentication for MySQL and PostgreSQL - Amazon Relational
Database Service
What are self-signed certificates?
Encrypting Amazon RDS resources - Amazon Relational Database Service
Question # 52
A company stores text files in Amazon S3. The text files include customer chat messages, date and time information, and customer personally identifiable information (PII). The company needs a solution to provide samples of the conversations to an external service provider for quality control. The external service provider needs to randomly pick sample conversations up to the most recent conversation. The company must not share the customer PII with the external service provider. The solution must scale when the number of customer conversations increases. Which solution will meet these requirements with the LEAST operational overhead?
A. Create an Object Lambda Access Point. Create an AWS Lambda function that redacts the PII when the function reads the file. Instruct the external service provider to access the Object Lambda Access Point.
B. Create a batch process on an Amazon EC2 instance that regularly reads all new files, redacts the PII from the files, and writes the redacted files to a different S3 bucket. Instruct the external service provider to access the bucket that does not contain the PII.
C. Create a web application on an Amazon EC2 instance that presents a list of the files, redacts the PII from the files, and allows the external service provider to download new versions of the files that have the PII redacted.
D. Create an Amazon DynamoDB table. Create an AWS Lambda function that reads only the data in the files that does not contain PII. Configure the Lambda function to store the non-PII data in the DynamoDB table when a new file is written to Amazon S3. Grant the external service provider access to the DynamoDB table.
Answer: A
Explanation: The correct solution is to create an Object Lambda Access Point and an
AWS Lambda function that redacts the PII when the function reads the file. This way, the
company can use the S3 Object Lambda feature to modify the S3 object content on the fly,
without creating a copy or changing the original object. The external service provider can
access the Object Lambda Access Point and get the redacted version of the file. This
solution has the least operational overhead because it does not require any additional
storage, processing, or synchronization. The solution also scales automatically with the
number of customer conversations and the demand from the external service provider.
The other options are incorrect because:
Option B is using a batch process on an EC2 instance to read, redact, and write
the files to a different S3 bucket. This solution has more operational overhead
because it requires managing the EC2 instance, the batch process, and the
additional S3 bucket. It also introduces latency and inconsistency between the
original and the redacted files.
Option C is using a web application on an EC2 instance to present, redact, and
download the files. This solution has more operational overhead because it
requires managing the EC2 instance, the web application, and the download
process. It also exposes the original files to the web application, which increases
the risk of leaking the PII.
Option D is using a DynamoDB table and a Lambda function to store the non-PII
data from the files. This solution has more operational overhead because it
requires managing the DynamoDB table, the Lambda function, and the data
transformation. It also changes the format and the structure of the original files,
which may affect the quality control process.
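A minimal sketch of the redaction idea: the Lambda function behind the Object Lambda Access Point transforms the object content on the fly. The regex patterns below are assumptions for illustration only; a real handler would also fetch the original object from the presigned URL in the event and return the result via `WriteGetObjectResponse`:

```python
import re

# Toy redaction logic for the sketched Object Lambda handler: mask email
# addresses and long digit runs (e.g. phone numbers) before returning text.
def redact_pii(text):
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED-EMAIL]", text)
    text = re.sub(r"\b\d{7,}\b", "[REDACTED-NUMBER]", text)
    return text

sample = "Customer jane@example.com called from 5551234567 about billing."
print(redact_pii(sample))
```

Because the transformation happens at read time, the original objects are never copied or modified, which is why this option has the least operational overhead.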
References:
S3 Object Lambda
Object Lambda Access Point
Lambda function
Question # 53
A company wants to deploy its containerized application workloads to a VPC across three Availability Zones. The company needs a solution that is highly available across Availability Zones. The solution must require minimal changes to the application. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS Service Auto Scaling to use target tracking scaling. Set the minimum capacity to 3. Set the task placement strategy type to spread with an Availability Zone attribute.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) self-managed nodes. Configure Application Auto Scaling to use target tracking scaling. Set the minimum capacity to 3.
C. Use Amazon EC2 Reserved Instances. Launch three EC2 instances in a spread placement group. Configure an Auto Scaling group to use target tracking scaling. Set the minimum capacity to 3.
D. Use an AWS Lambda function. Configure the Lambda function to connect to a VPC. Configure Application Auto Scaling to use Lambda as a scalable target. Set the minimum capacity to 3.
Answer: A
Explanation: The company wants to deploy its containerized application workloads to a
VPC across three Availability Zones, with high availability and minimal changes to the
application. The solution that will meet these requirements with the least operational
overhead is:
Use Amazon Elastic Container Service (Amazon ECS). Amazon ECS is a fully
managed container orchestration service that allows you to run and scale
containerized applications on AWS. Amazon ECS eliminates the need for you to
install, operate, and scale your own cluster management infrastructure. Amazon
ECS also integrates with other AWS services, such as VPC, ELB,
CloudFormation, CloudWatch, IAM, and more.
Configure Amazon ECS Service Auto Scaling to use target tracking scaling.
Amazon ECS Service Auto Scaling allows you to automatically adjust the number
of tasks in your service based on the demand or custom metrics. Target tracking
scaling is a policy type that adjusts the number of tasks in your service to keep a
specified metric at a target value. For example, you can use target tracking scaling
to maintain a target CPU utilization or request count per task for your service.
Set the minimum capacity to 3. This ensures that your service always has at least
three tasks running across three Availability Zones, providing high availability and
fault tolerance for your application.
Set the task placement strategy type to spread with an Availability Zone attribute.
This ensures that your tasks are evenly distributed across the Availability Zones in
your cluster, maximizing the availability of your service.
This solution will provide high availability across Availability Zones, require minimal
changes to the application, and reduce the operational overhead of managing your own
cluster infrastructure.
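The settings above could be sketched like this (cluster and service names are hypothetical placeholders):

```python
# Sketch of ECS service parameters: at least 3 tasks, spread across AZs.
service_params = {
    "cluster": "app-cluster",
    "serviceName": "web-service",
    "desiredCount": 3,  # minimum capacity of 3 tasks
    "placementStrategy": [
        # Spread tasks evenly across Availability Zones.
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
    ],
}

# Target tracking scaling keeps a chosen metric (here, average service
# CPU utilization) at a target value by adjusting the task count.
scaling_policy = {
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
}

print(service_params["desiredCount"],
      service_params["placementStrategy"][0]["type"])
```

The spread strategy with the availability-zone attribute is what guarantees the three tasks land in three different AZs rather than piling onto one.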
References:
Amazon Elastic Container Service
Amazon ECS Service Auto Scaling
Target Tracking Scaling Policies for Amazon ECS Services
Amazon ECS Task Placement Strategies
Question # 54
A company needs to use its on-premises LDAP directory service to authenticate its users to the AWS Management Console. The directory service is not compatible with Security Assertion Markup Language (SAML). Which solution meets these requirements?
A. Enable AWS IAM Identity Center (AWS Single Sign-On) between AWS and the on-premises LDAP.
B. Create an IAM policy that uses AWS credentials, and integrate the policy into LDAP.
C. Set up a process that rotates the IAM credentials whenever LDAP credentials are updated.
D. Develop an on-premises custom identity broker application or process that uses AWS Security Token Service (AWS STS) to get short-lived credentials.
Answer: D
Explanation: The solution that meets the requirements is to develop an on-premises
custom identity broker application or process that uses AWS Security Token Service (AWS
STS) to get short-lived credentials. This solution allows the company to use its existing LDAP directory service to authenticate its users to the AWS Management Console, without
requiring SAML compatibility. The custom identity broker application or process can act as
a proxy between the LDAP directory service and AWS STS, and can request temporary
security credentials for the users based on their LDAP attributes and roles. The users can
then use these credentials to access the AWS Management Console via a sign-in URL
generated by the identity broker. This solution also enhances security by using short-lived
credentials that expire after a specified duration.
The other solutions do not meet the requirements because they either require SAML
compatibility or do not provide access to the AWS Management Console. Enabling AWS
IAM Identity Center (AWS Single Sign-On) between AWS and the on-premises LDAP
would require the LDAP directory service to support SAML 2.0, which is not the case for
this scenario. Creating an IAM policy that uses AWS credentials and integrating the policy
into LDAP would not provide access to the AWS Management Console, but only to the
AWS APIs. Setting up a process that rotates the IAM credentials whenever LDAP
credentials are updated would also not provide access to the AWS Management Console,
but only to the AWS CLI. Therefore, these solutions are not suitable for the given
requirements.
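The broker flow can be sketched in outline: after validating the user against LDAP, the broker obtains temporary STS credentials, exchanges them for a sign-in token at the AWS federation endpoint, and builds a console sign-in URL. The helper below only assembles the URL shape; the token value and issuer URL are placeholders, not a real STS response:

```python
from urllib.parse import urlencode

# Sketch of building an AWS Management Console federation sign-in URL
# from a signin token (which a real broker would obtain by exchanging
# temporary STS credentials at the federation endpoint).
def build_console_signin_url(signin_token,
                             destination="https://console.aws.amazon.com/"):
    query = urlencode({
        "Action": "login",
        "Issuer": "https://idp.corp.example.com",  # hypothetical broker URL
        "Destination": destination,
        "SigninToken": signin_token,
    })
    return "https://signin.aws.amazon.com/federation?" + query

url = build_console_signin_url("EXAMPLE-TOKEN")
print(url.startswith("https://signin.aws.amazon.com/federation?"))
```

The user follows this URL to reach the console with short-lived credentials, so no long-term IAM credentials ever leave the broker.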
Question # 55
A company wants to migrate its on-premises Microsoft SQL Server Enterprise edition database to AWS. The company's online application uses the database to process transactions. The data analysis team uses the same production database to run reports for analytical processing. The company wants to reduce operational overhead by moving to managed services wherever possible. Which solution will meet these requirements with the LEAST operational overhead?
A. Migrate to Amazon RDS for Microsoft SQL Server. Use read replicas for reporting purposes.
B. Migrate to Microsoft SQL Server on Amazon EC2. Use Always On read replicas for reporting purposes.
C. Migrate to Amazon DynamoDB. Use DynamoDB on-demand replicas for reporting purposes.
D. Migrate to Amazon Aurora MySQL. Use Aurora read replicas for reporting purposes.
Answer: A
Explanation: Amazon RDS for Microsoft SQL Server is a fully managed service that offers
SQL Server 2014, 2016, 2017, and 2019 editions while offloading database administration
tasks such as backups, patching, and scaling. Amazon RDS supports read replicas, which
are read-only copies of the primary database that can be used for reporting purposes
without affecting the performance of the online application. This solution will meet the
requirements with the least operational overhead, as it does not require any code changes
or manual intervention.
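Creating the reporting replica could be sketched as the parameters for an RDS read-replica request. The instance identifiers and class are hypothetical:

```python
# Sketch of the parameters for creating an RDS read replica that the
# analytics team queries, leaving the primary to serve transactions.
replica_params = {
    "DBInstanceIdentifier": "sqlserver-reporting-replica",  # new replica
    "SourceDBInstanceIdentifier": "sqlserver-primary",      # existing primary
    "DBInstanceClass": "db.m5.xlarge",
}

# Reporting tools point at the replica's endpoint, not the primary's.
print(replica_params["SourceDBInstanceIdentifier"])
```

Pointing the reporting workload at the replica endpoint is the only change needed; the application keeps writing to the primary unchanged.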
References:
Amazon RDS for Microsoft SQL Server overview and benefits
Creating and using read replicas with Amazon RDS
Question # 56
A company's website is used to sell products to the public. The site runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). There is also an Amazon CloudFront distribution, and AWS WAF is being used to protect against SQL injection attacks. The ALB is the origin for the CloudFront distribution. A recent review of security logs revealed an external malicious IP that needs to be blocked from accessing the website. What should a solutions architect do to protect the application?
A. Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP address.
B. Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address.
C. Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the malicious IP address.
D. Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the malicious IP address.
Answer: B
Explanation: AWS WAF is a web application firewall that helps protect web applications
from common web exploits that could affect application availability, compromise security, or
consume excessive resources. AWS WAF allows users to create rules that block, allow, or
count web requests based on customizable web security rules. One of the types of rules
that can be created is an IP match rule, which allows users to specify a list of IP addresses
or IP address ranges that they want to allow or block. By modifying the configuration of
AWS WAF to add an IP match condition to block the malicious IP address, the solution
architect can prevent the attacker from accessing the website through the CloudFront
distribution and the ALB.
The other options are not correct because they do not effectively block the malicious IP
address from accessing the website. Modifying the network ACL on the CloudFront
distribution or the EC2 instances in the target groups behind the ALB will not work because
network ACLs are stateless and do not evaluate traffic at the application layer. Modifying
the security groups for the EC2 instances in the target groups behind the ALB will not work
because security groups are stateful and only evaluate traffic at the instance level, not at
the load balancer level.
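The blocking rule could be sketched with WAFv2-style structures like these. The IP address is from the documentation range and the names and ARN are placeholders:

```python
# Sketch of a WAFv2 setup: an IP set holding the malicious address and
# a web ACL rule that blocks any request matching that IP set.
ip_set = {
    "Name": "blocked-ips",
    "Scope": "CLOUDFRONT",             # web ACL attached to CloudFront
    "IPAddressVersion": "IPV4",
    "Addresses": ["203.0.113.10/32"],  # single malicious IP, /32 CIDR
}

block_rule = {
    "Name": "block-malicious-ip",
    "Priority": 0,  # evaluated before other rules
    "Statement": {
        "IPSetReferenceStatement": {
            "ARN": "arn:aws:wafv2:...:ipset/blocked-ips"
        }
    },
    "Action": {"Block": {}},
}

print(ip_set["Addresses"][0], list(block_rule["Action"]))
```

Because the rule evaluates at the CloudFront edge, the request is dropped before it ever reaches the ALB or the EC2 instances.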
References:
AWS WAF
How AWS WAF works
Working with IP match conditions
Question # 57
A company has a web application for travel ticketing. The application is based on a database that runs in a single data center in North America. The company wants to expand the application to serve a global user base. The company needs to deploy the application to multiple AWS Regions. Average latency must be less than 1 second on updates to the reservation database. The company wants to have separate deployments of its web platform across multiple Regions. However, the company must maintain a single primary reservation database that is globally consistent. Which solution should a solutions architect recommend to meet these requirements?
A. Convert the application to use Amazon DynamoDB. Use a global table for the center reservation table. Use the correct Regional endpoint in each Regional deployment.
B. Migrate the database to an Amazon Aurora MySQL database. Deploy Aurora Read Replicas in each Region. Use the correct Regional endpoint in each Regional deployment for access to the database.
C. Migrate the database to an Amazon RDS for MySQL database. Deploy MySQL read replicas in each Region. Use the correct Regional endpoint in each Regional deployment for access to the database.
D. Migrate the application to an Amazon Aurora Serverless database. Deploy instances of the database to each Region. Use the correct Regional endpoint in each Regional deployment to access the database. Use AWS Lambda functions to process event streams in each Region to synchronize the databases.
Question # 58
A company has an application that uses an Amazon DynamoDB table for storage. A solutions architect discovers that many requests to the table are not returning the latest data. The company's users have not reported any other issues with database performance. Latency is in an acceptable range. Which design change should the solutions architect recommend?
A. Add read replicas to the table.
B. Use a global secondary index (GSI).
C. Request strongly consistent reads for the table.
D. Request eventually consistent reads for the table.
Answer: C
Explanation: The most suitable design change for the company’s application is to request
strongly consistent reads for the table. This change will ensure that the requests to the
table return the latest data, reflecting the updates from all prior write operations.
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and
predictable performance with seamless scalability. DynamoDB supports two types of read
consistency: eventually consistent reads and strongly consistent reads. By default,
DynamoDB uses eventually consistent reads, unless users specify otherwise.
Eventually consistent reads are reads that may not reflect the results of a recently
completed write operation. The response might not include the changes because of the
latency of propagating the data to all replicas. If users repeat their read request after a
short time, the response should return the updated data. Eventually consistent reads are
suitable for applications that do not require up-to-date data or can tolerate eventual
consistency.
Strongly consistent reads are reads that return a result that reflects all writes that received a successful response prior to the read. Users can request a strongly consistent read by
setting the ConsistentRead parameter to true in their read operations, such as GetItem,
Query, or Scan. Strongly consistent reads are suitable for applications that require up-to-date data or cannot tolerate eventual consistency.
The other options are not correct because they do not address the issue of read
consistency or are not relevant for the use case. Adding read replicas to the table is not
correct because this option is not supported by DynamoDB. Read replicas are copies of a
primary database instance that can serve read-only traffic and improve availability and
performance. Read replicas are available for some relational database services, such as
Amazon RDS or Amazon Aurora, but not for DynamoDB. Using a global secondary index
(GSI) is not correct because this option is not related to read consistency. A GSI is an
index that has a partition key and an optional sort key that are different from those on the
base table. A GSI allows users to query the data in different ways, with eventual
consistency. Requesting eventually consistent reads for the table is not correct because
this option is already the default behavior of DynamoDB and does not solve the problem of
requests not returning the latest data.
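The recommended change amounts to one flag on each read request, as this sketch of GetItem parameters shows (table and key names are hypothetical):

```python
# Sketch of a DynamoDB GetItem request: setting ConsistentRead to True
# makes the read reflect all previously acknowledged writes, at the cost
# of slightly higher latency and double the read capacity consumption.
def build_get_item_params(table, key, strongly_consistent=True):
    return {
        "TableName": table,
        "Key": key,
        "ConsistentRead": strongly_consistent,
    }

params = build_get_item_params("Orders", {"OrderId": {"S": "1234"}})
print(params["ConsistentRead"])
```

Leaving `ConsistentRead` unset (or `False`) gives the default eventually consistent behavior that caused the stale reads in the scenario.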
References:
Read consistency - Amazon DynamoDB
Working with read replicas - Amazon Relational Database Service
Working with global secondary indexes - Amazon DynamoDB
Question # 59
A company has an AWS Direct Connect connection from its corporate data center to its VPC in the us-east-1 Region. The company recently acquired a corporation that has several VPCs and a Direct Connect connection between its on-premises data center and the eu-west-2 Region. The CIDR blocks for the VPCs of the company and the corporation do not overlap. The company requires connectivity between two Regions and the data centers. The company needs a solution that is scalable while reducing operational overhead. What should a solutions architect do to meet these requirements?
A. Set up inter-Region VPC peering between the VPC in us-east-1 and the VPCs in eu-west-2.
B. Create private virtual interfaces from the Direct Connect connection in us-east-1 to the VPCs in eu-west-2.
C. Establish VPN appliances in a fully meshed VPN network hosted by Amazon EC2. Use AWS VPN CloudHub to send and receive data between the data centers and each VPC.
D. Connect the existing Direct Connect connection to a Direct Connect gateway. Route traffic from the virtual private gateways of the VPCs in each Region to the Direct Connect gateway.
Answer: D
Explanation: This solution meets the requirements because it allows the company to use a
single Direct Connect connection to connect to multiple VPCs in different Regions using a
Direct Connect gateway. A Direct Connect gateway is a globally available resource that
enables you to connect your on-premises network to VPCs in any AWS Region, except the
AWS China Regions. You can associate a Direct Connect gateway with a transit gateway
or a virtual private gateway in each Region. By routing traffic from the virtual private
gateways of the VPCs to the Direct Connect gateway, you can enable inter-Region and on-premises connectivity for your VPCs. This solution is scalable because you can add more
VPCs in different Regions to the Direct Connect gateway without creating additional
connections. This solution also reduces operational overhead because you do not need to
manage multiple VPN appliances, VPN connections, or VPC peering connections.
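The topology can be sketched as one gateway with an association per Region. The gateway name, ASN, and IDs are placeholders:

```python
# Sketch of the Direct Connect gateway topology: one global gateway,
# associated with the virtual private gateway of a VPC in each Region.
dx_gateway = {
    "directConnectGatewayName": "global-dxgw",
    "amazonSideAsn": 64512,  # private ASN for the Amazon side of the BGP session
}

associations = [
    {"directConnectGatewayId": "dxgw-example", "virtualGatewayId": "vgw-useast1"},
    {"directConnectGatewayId": "dxgw-example", "virtualGatewayId": "vgw-euwest2"},
]

# Adding a VPC in another Region is just one more association entry,
# not a new physical connection, which is why this design scales.
print(len(associations))
```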
References:
Direct Connect gateways
Inter-Region VPC peering
Question # 60
A company has five organizational units (OUs) as part of its organization in AWS Organizations. Each OU correlates to the five businesses that the company owns. The company's research and development (R&D) business is separating from the company and will need its own organization. A solutions architect creates a separate new management account for this purpose. What should the solutions architect do next in the new management account?
A. Have the R&D AWS account be part of both organizations during the transition.
B. Invite the R&D AWS account to be part of the new organization after the R&D AWS account has left the prior organization.
C. Create a new R&D AWS account in the new organization. Migrate resources from the prior R&D AWS account to the new R&D AWS account.
D. Have the R&D AWS account join the new organization. Make the new management account a member of the prior organization.
Answer: B
Explanation: it allows the solutions architect to create a separate organization for the
research and development (R&D) business and move its AWS account to the new organization. By inviting the R&D AWS account to be part of the new organization after it
has left the prior organization, the solutions architect can ensure that there is no overlap or
conflict between the two organizations. The R&D AWS account can accept or decline the
invitation to join the new organization. Once accepted, it will be subject to any policies and
controls applied by the new organization.
References:
Inviting an AWS Account to Join Your Organization
Leaving an Organization as a Member Account
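The invite-and-accept flow in option B can be sketched with the AWS CLI; the account ID and handshake ID below are placeholders, not values from the question.

```shell
# From the new management account: invite the existing R&D account
# (placeholder account ID) to join the new organization.
aws organizations invite-account-to-organization \
    --target '{"Id": "111122223333", "Type": "ACCOUNT"}'

# From the R&D member account: accept the pending invitation
# (placeholder handshake ID returned by the invite call).
aws organizations accept-handshake \
    --handshake-id "h-examplehandshakeid111"
```

Note that an account can belong to only one organization at a time, which is why it must leave the prior organization before accepting the invitation.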
Question # 61
A company runs a real-time data ingestion solution on AWS. The solution consists of the most recent version of Amazon Managed Streaming for Apache Kafka (Amazon MSK). The solution is deployed in a VPC in private subnets across three Availability Zones.
A solutions architect needs to redesign the data ingestion solution to be publicly available over the internet. The data in transit must also be encrypted.
Which solution will meet these requirements with the MOST operational efficiency?
A. Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets. Update the MSK cluster security settings to enable mutual TLS authentication.
B. Create a new VPC that has public subnets. Deploy an MSK cluster in the public subnets. Update the MSK cluster security settings to enable mutual TLS authentication.
C. Deploy an Application Load Balancer (ALB) that uses private subnets. Configure an ALB security group inbound rule to allow inbound traffic from the VPC CIDR block for HTTPS protocol.
D. Deploy a Network Load Balancer (NLB) that uses private subnets. Configure an NLB listener for HTTPS communication over the internet.
Answer: A
Explanation: The solution that meets the requirements with the most operational efficiency
is to configure public subnets in the existing VPC and deploy an MSK cluster in the public subnets. This solution allows the data ingestion solution to be publicly available over the
internet without creating a new VPC or deploying a load balancer. The solution also
ensures that the data in transit is encrypted by enabling mutual TLS authentication, which
requires both the client and the server to present certificates for verification. This solution
leverages the public access feature of Amazon MSK, which is available for clusters running
Apache Kafka 2.6.0 or later versions1.
The other solutions are not as efficient as the first one because they either create
unnecessary resources or do not encrypt the data in transit. Creating a new VPC with
public subnets would incur additional costs and complexity for managing network resources
and routing. Deploying an ALB or an NLB would also add more costs and latency for the
data ingestion solution. Moreover, an ALB or an NLB would not encrypt the data in transit
by itself, unless they are configured with HTTPS listeners and certificates, which would
require additional steps and maintenance. Therefore, these solutions are not optimal for the
given requirements.
References:
Public access - Amazon Managed Streaming for Apache Kafka
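Turning on public access for an existing cluster can be sketched with the AWS CLI. The cluster ARN and version string are placeholders; an authentication mechanism such as mutual TLS must already be enabled on the cluster before public access is allowed.

```shell
# Enable public access on an existing MSK cluster (placeholder ARN and
# current-version; the cluster must already use client authentication).
aws kafka update-connectivity \
    --cluster-arn "arn:aws:kafka:us-east-1:111122223333:cluster/ingest/abc" \
    --current-version "K3AEGXETSR30VB" \
    --connectivity-info '{"PublicAccess": {"Type": "SERVICE_PROVIDED_EIPS"}}'
```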
Question # 62
A company is building a shopping application on AWS. The application offers a catalog that changes once each month and needs to scale with traffic volume. The company wants the lowest possible latency from the application. Data from each user's shopping cart needs to be highly available. User session data must be available even if the user is disconnected and reconnects.
What should a solutions architect do to ensure that the shopping cart data is preserved at all times?
A. Configure an Application Load Balancer to enable the sticky sessions feature (session affinity) for access to the catalog in Amazon Aurora.
B. Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and shopping cart data from the user's session.
C. Configure Amazon OpenSearch Service to cache catalog data from Amazon DynamoDB and shopping cart data from the user's session.
D. Configure an Amazon EC2 instance with Amazon Elastic Block Store (Amazon EBS) storage for the catalog and shopping cart. Configure automated snapshots.
Answer: B
Explanation:
To ensure that the shopping cart data is preserved at all times, a solutions architect should
configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB
and shopping cart data from the user’s session. This solution has the following benefits:
It offers the lowest possible latency from the application, as ElastiCache for Redis
is a blazing fast in-memory data store that provides sub-millisecond latency to
power internet-scale real-time applications1.
It scales with traffic volume, as ElastiCache for Redis supports horizontal scaling
by adding more nodes or shards to the cluster, and vertical scaling by changing
the node type2.
It is highly available, as ElastiCache for Redis supports replication across multiple
Availability Zones and automatic failover in case of a primary node failure3.
It preserves user session data even if the user is disconnected and reconnects, as
ElastiCache for Redis can store session data, such as user login information and
shopping cart contents, in a persistent and durable manner using snapshots or
append-only file (AOF) persistence.
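The session-caching pattern described above can be sketched with redis-cli; the key and field names are hypothetical.

```shell
# Store a user's cart as a Redis hash (hypothetical key/field names),
# with a TTL so the cart survives disconnects and expires with the session.
redis-cli HSET cart:user42 item:sku123 2 item:sku456 1
redis-cli EXPIRE cart:user42 86400

# On reconnect, the full cart can be retrieved in one call.
redis-cli HGETALL cart:user42
```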
Question # 63
A company has deployed a multiplayer game for mobile devices. The game requires live location tracking of players based on latitude and longitude. The data store for the game must support rapid updates and retrieval of locations.
The game uses an Amazon RDS for PostgreSQL DB instance with read replicas to store the location data. During peak usage periods, the database is unable to maintain the performance that is needed for reading and writing updates. The game's user base is increasing rapidly.
What should a solutions architect do to improve the performance of the data tier?
A. Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled.
B. Migrate from Amazon RDS to Amazon OpenSearch Service with OpenSearch Dashboards.
C. Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game to use DAX.
D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis.
Answer: D
Explanation: The solution that will improve the performance of the data tier is to deploy an
Amazon ElastiCache for Redis cluster in front of the existing DB instance and modify the
game to use Redis. This solution will enable the game to store and retrieve the location data of the players in a fast and scalable way, as Redis is an in-memory data store that
supports geospatial data types and commands. By using ElastiCache for Redis, the game
can reduce the load on the RDS for PostgreSQL DB instance, which is not optimized for
high-frequency updates and queries of location data. ElastiCache for Redis also supports
replication, sharding, and auto scaling to handle the increasing user base of the game.
The other solutions are not as effective as the first one because they either do not improve
the performance, do not support geospatial data, or do not leverage caching. Taking a
snapshot of the existing DB instance and restoring it with Multi-AZ enabled will not improve
the performance of the data tier, as it only provides high availability and durability, but not
scalability or low latency. Migrating from Amazon RDS to Amazon OpenSearch Service
with OpenSearch Dashboards will not improve the performance of the data tier, as
OpenSearch Service is designed primarily for full-text search and analytics; although it
supports geospatial field types, it is not optimized for the high-frequency transactional
updates that live location tracking requires. Deploying Amazon DynamoDB Accelerator (DAX) in
front of the existing DB instance and modifying the game to use DAX will not improve the
performance of the data tier, as DAX is only compatible with DynamoDB, not with RDS for
PostgreSQL. DAX also does not support geospatial data types and commands.
References:
Amazon ElastiCache for Redis
Geospatial Data Support - Amazon ElastiCache for Redis
Amazon RDS for PostgreSQL
Amazon OpenSearch Service
Amazon DynamoDB Accelerator (DAX)
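The geospatial commands mentioned above can be sketched with redis-cli. Key names, member names, and coordinates are placeholders, and GEOSEARCH requires Redis 6.2 or later.

```shell
# Record player positions (longitude latitude member) in a geo set.
redis-cli GEOADD players:live -122.4194 37.7749 "player:1001"
redis-cli GEOADD players:live -122.4089 37.7837 "player:1002"

# Find players within 5 km of a point, nearest first.
redis-cli GEOSEARCH players:live FROMLONLAT -122.41 37.78 BYRADIUS 5 km ASC
```

GEOADD updates are O(log N) writes to an in-memory sorted set, which is why this pattern handles rapid location updates far better than repeated UPDATE statements against a relational table.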
Question # 64
A company wants to run its experimental workloads in the AWS Cloud. The company has a budget for cloud spending. The company's CFO is concerned about cloud spending accountability for each department. The CFO wants to receive notification when the spending threshold reaches 60% of the budget.
Which solution will meet these requirements?
A. Use cost allocation tags on AWS resources to label owners. Create usage budgets in AWS Budgets. Add an alert threshold to receive notification when spending exceeds 60% of the budget.
B. Use AWS Cost Explorer forecasts to determine resource owners. Use AWS Cost Anomaly Detection to create alert threshold notifications when spending exceeds 60% of the budget.
C. Use cost allocation tags on AWS resources to label owners. Use AWS Support API on AWS Trusted Advisor to create alert threshold notifications when spending exceeds 60% of the budget.
D. Use AWS Cost Explorer forecasts to determine resource owners. Create usage budgets in AWS Budgets. Add an alert threshold to receive notification when spending exceeds 60% of the budget.
Answer: A
Explanation: This solution meets the requirements because it allows the company to track
and manage its cloud spending by using cost allocation tags to assign costs to different
departments, creating usage budgets to set spending limits, and adding alert thresholds to
receive notifications when the spending reaches a certain percentage of the budget. This
way, the company can monitor its experimental workloads and avoid overspending on the
cloud.
References:
Using Cost Allocation Tags
Creating an AWS Budget
Creating an Alert for an AWS Budget
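The budget and 60% alert in option A can be sketched with the AWS CLI; the account ID, budget amount, and subscriber email are placeholders.

```shell
# Create a monthly cost budget with an alert at 60% of actual spend
# (placeholder account ID, limit, and email address).
aws budgets create-budget \
    --account-id 111122223333 \
    --budget '{"BudgetName": "experimental-workloads",
               "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
               "TimeUnit": "MONTHLY", "BudgetType": "COST"}' \
    --notifications-with-subscribers '[{
        "Notification": {"NotificationType": "ACTUAL",
                         "ComparisonOperator": "GREATER_THAN",
                         "Threshold": 60,
                         "ThresholdType": "PERCENTAGE"},
        "Subscribers": [{"SubscriptionType": "EMAIL",
                         "Address": "cfo@example.com"}]}]'
```

Per-department accountability comes from the cost allocation tags, which can also be used to filter individual budgets per department.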
Question # 65
A city has deployed a web application running on Amazon EC2 instances behind an Application Load Balancer (ALB). The application's users have reported sporadic performance, which appears to be related to DDoS attacks originating from random IP addresses. The city needs a solution that requires minimal configuration changes and provides an audit trail for the DDoS sources.
Which solution meets these requirements?
A. Enable an AWS WAF web ACL on the ALB, and configure rules to block traffic from unknown sources.
B. Subscribe to Amazon Inspector. Engage the AWS DDoS Response Team (DRT) to integrate mitigating controls into the service.
C. Subscribe to AWS Shield Advanced. Engage the AWS DDoS Response Team (DRT) to integrate mitigating controls into the service.
D. Create an Amazon CloudFront distribution for the application, and set the ALB as the origin. Enable an AWS WAF web ACL on the distribution, and configure rules to block traffic from unknown sources.
Answer: C
Explanation: To protect the web application from DDoS attacks originating from random IP
addresses, a solutions architect should subscribe to AWS Shield Advanced and engage
the AWS DDoS Response Team (DRT) to integrate mitigating controls into the service.
AWS Shield Advanced is a managed service that provides protection against large and
sophisticated DDoS attacks, with access to 24/7 support and response from the DRT. The
DRT can help the city configure proactive and reactive safeguards, such as AWS WAF
rules, rate-based rules, and network ACLs, to block malicious traffic and improve the
application’s resilience. The service also provides an audit trail for the DDoS sources
through detailed attack reports and Amazon CloudWatch metrics.
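The subscription and DRT engagement can be sketched with the AWS CLI; the IAM role ARN is a placeholder, and Shield Advanced carries a paid subscription commitment.

```shell
# Subscribe the account to AWS Shield Advanced.
aws shield create-subscription

# Grant the DDoS Response Team (DRT) a role it can assume to apply
# mitigations on your behalf (placeholder role ARN).
aws shield associate-drt-role \
    --role-arn "arn:aws:iam::111122223333:role/DrtAccessRole"
```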
Question # 66
A company runs a web application on Amazon EC2 instances in an Auto Scaling group that has a target group. The company designed the application to work with session affinity (sticky sessions) for a better user experience.
The application must be available publicly over the internet as an endpoint. A WAF must be applied to the endpoint for additional security. Session affinity (sticky sessions) must be configured on the endpoint.
Which combination of steps will meet these requirements? (Select TWO)
A. Create a public Network Load Balancer. Specify the application target group.
B. Create a Gateway Load Balancer. Specify the application target group.
C. Create a public Application Load Balancer. Specify the application target group.
D. Create a second target group. Add Elastic IP addresses to the EC2 instances.
E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint.
Answer: C,E
Explanation: C and E are the correct answers because they allow the company to create a
public endpoint for its web application that supports session affinity (sticky sessions) and
has a WAF applied for additional security. By creating a public Application Load Balancer,
the company can distribute incoming traffic across multiple EC2 instances in an Auto
Scaling group and specify the application target group. By creating a web ACL in AWS
WAF and associating it with the Application Load Balancer, the company can protect its
web application from common web exploits. By enabling session stickiness on the
Application Load Balancer, the company can ensure that subsequent requests from a user
during a session are routed to the same target.
References:
Application Load Balancers
AWS WAF
Target Groups for Your Application Load Balancers
How Application Load Balancer Works with Sticky Sessions
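The two chosen steps can be sketched with the AWS CLI; the target group, web ACL, and load balancer ARNs are truncated placeholders.

```shell
# Enable load-balancer-generated cookie stickiness on the target group
# (placeholder ARN).
aws elbv2 modify-target-group-attributes \
    --target-group-arn "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/app-tg/1234567890abcdef" \
    --attributes Key=stickiness.enabled,Value=true \
                 Key=stickiness.type,Value=lb_cookie

# Associate a WAF web ACL with the Application Load Balancer
# (placeholder ARNs).
aws wafv2 associate-web-acl \
    --web-acl-arn "arn:aws:wafv2:us-east-1:111122223333:regional/webacl/app-acl/abcd1234" \
    --resource-arn "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/app-alb/1234567890abcdef"
```

This pairing is why answer A is wrong: a Network Load Balancer supports neither cookie-based stickiness of this kind nor direct WAF web ACL association.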
Question # 67
A security audit reveals that Amazon EC2 instances are not being patched regularly. A solutions architect needs to provide a solution that will run regular security scans across a large fleet of EC2 instances. The solution should also patch the EC2 instances on a regular schedule and provide a report of each instance's patch status.
Which solution will meet these requirements?
A. Set up Amazon Macie to scan the EC2 instances for software vulnerabilities. Set up a cron job on each EC2 instance to patch the instance on a regular schedule.
B. Turn on Amazon GuardDuty in the account. Configure GuardDuty to scan the EC2 instances for software vulnerabilities. Set up AWS Systems Manager Session Manager to patch the EC2 instances on a regular schedule.
C. Set up Amazon Detective to scan the EC2 instances for software vulnerabilities. Set up an Amazon EventBridge scheduled rule to patch the EC2 instances on a regular schedule.
D. Turn on Amazon Inspector in the account. Configure Amazon Inspector to scan the EC2 instances for software vulnerabilities. Set up AWS Systems Manager Patch Manager to patch the EC2 instances on a regular schedule.
Answer: D
Explanation: Amazon Inspector is an automated security assessment service that helps
improve the security and compliance of applications deployed on AWS. Amazon Inspector
automatically assesses applications for exposure, vulnerabilities, and deviations from best
practices. After performing an assessment, Amazon Inspector produces a detailed list of
security findings prioritized by level of severity1. Amazon Inspector can scan the EC2
instances for software vulnerabilities and provide a report of each instance’s patch status.
AWS Systems Manager Patch Manager is a capability of AWS Systems Manager that
automates the process of patching managed nodes with both security-related updates and
other types of updates. Patch Manager uses patch baselines, which include rules for
auto-approving patches within days of their release, in addition to optional lists of approved and
rejected patches. Patch Manager can patch fleets of Amazon EC2 instances, edge
devices, on-premises servers, and virtual machines (VMs) by operating system type2.
Patch Manager can patch the EC2 instances on a regular schedule and provide a report of
each instance’s patch status. Therefore, the combination of Amazon Inspector and AWS
Systems Manager Patch Manager will meet the requirements of the question.
The other options are not valid because:
Amazon Macie is a security service that uses machine learning to automatically
discover, classify, and protect sensitive data in AWS. Amazon Macie does not
scan the EC2 instances for software vulnerabilities, but rather for data
classification and protection3. A cron job is a Linux command for scheduling a task
to be executed sometime in the future. A cron job is not a reliable way to patch the
EC2 instances on a regular schedule, as it may fail or be interrupted by other
processes4.
Amazon GuardDuty is a threat detection service that continuously monitors for
malicious activity and unauthorized behavior to protect your AWS accounts and
workloads. Amazon GuardDuty does not scan the EC2 instances for software
vulnerabilities, but rather for network and API activity anomalies5. AWS Systems
Manager Session Manager is a fully managed AWS Systems Manager capability
that lets you manage your Amazon EC2 instances, edge devices, on-premises
servers, and virtual machines (VMs) through an interactive one-click browser-based
shell or the AWS Command Line Interface (AWS CLI). Session Manager
does not patch the EC2 instances on a regular schedule, but rather provides
secure and auditable node management2.
Amazon Detective is a security service that makes it easy to analyze, investigate,
and quickly identify the root cause of potential security issues or suspicious
activities. Amazon Detective does not scan the EC2 instances for software
vulnerabilities, but rather collects and analyzes data from AWS sources such as
Amazon GuardDuty, Amazon VPC Flow Logs, and AWS CloudTrail. Amazon EventBridge is a serverless event bus that makes it easy to connect applications
using data from your own applications, integrated Software-as-a-Service (SaaS)
applications, and AWS services. EventBridge delivers a stream of real-time data
from event sources, such as Zendesk, Datadog, or Pagerduty, and routes that
data to targets like AWS Lambda. EventBridge does not patch the EC2 instances
on a regular schedule, but rather triggers actions based on events.
References: Amazon Inspector, AWS Systems Manager Patch Manager
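The scan-plus-scheduled-patch combination in option D can be sketched with the AWS CLI; the tag values and cron schedule are placeholders.

```shell
# Enable Amazon Inspector vulnerability scanning for EC2 in the account.
aws inspector2 enable --resource-types EC2

# Schedule weekly patching via Patch Manager by associating the
# AWS-RunPatchBaseline document with tagged instances (placeholder
# tag value and schedule).
aws ssm create-association \
    --name "AWS-RunPatchBaseline" \
    --targets "Key=tag:PatchGroup,Values=web-servers" \
    --schedule-expression "cron(0 2 ? * SUN *)" \
    --parameters "Operation=Install"
```

Patch compliance status per instance can then be reviewed in Systems Manager Compliance, satisfying the reporting requirement.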
Question # 68
A manufacturing company runs its report generation application on AWS. The application generates each report in about 20 minutes. The application is built as a monolith that runs on a single Amazon EC2 instance. The application requires frequent updates to its tightly coupled modules. The application becomes complex to maintain as the company adds new features.
Each time the company patches a software module, the application experiences downtime. Report generation must restart from the beginning after any interruptions. The company wants to redesign the application so that the application can be flexible, scalable, and gradually improved. The company wants to minimize application downtime.
Which solution will meet these requirements?
A. Run the application on AWS Lambda as a single function with maximum provisioned concurrency.
B. Run the application on Amazon EC2 Spot Instances as microservices with a Spot Fleet default allocation strategy.
C. Run the application on Amazon Elastic Container Service (Amazon ECS) as microservices with service auto scaling.
D. Run the application on AWS Elastic Beanstalk as a single application environment with an all-at-once deployment strategy.
Answer: C
Explanation: The solution that will meet the requirements is to run the application on
Amazon Elastic Container Service (Amazon ECS) as microservices with service auto
scaling. This solution will allow the application to be flexible, scalable, and gradually
improved, as well as minimize application downtime. By breaking down the monolithic
application into microservices, the company can decouple the modules and update them
independently, without affecting the whole application. By running the microservices on
Amazon ECS, the company can leverage the benefits of containerization, such as
portability, efficiency, and isolation. By enabling service auto scaling, the company can
adjust the number of containers running for each microservice based on demand, ensuring optimal performance and cost. Amazon ECS also supports various deployment strategies,
such as rolling update or blue/green deployment, that can reduce or eliminate downtime
during updates.
The other solutions are not as effective as the first one because they either do not meet the
requirements or introduce new challenges. Running the application on AWS Lambda as a
single function with maximum provisioned concurrency will not meet the requirements, as it
will not break down the monolith into microservices, nor will it reduce the complexity of
maintenance. Lambda functions are also limited by execution time (15 minutes), memory
size (10 GB), and concurrency quotas, which may not be sufficient for the report generation
application. Running the application on Amazon EC2 Spot Instances as microservices with
a Spot Fleet default allocation strategy will not meet the requirements, as it will introduce
the risk of interruptions due to spot price fluctuations. Spot Instances are not guaranteed to
be available or stable, and may be reclaimed by AWS at any time with a two-minute
warning. This may cause report generation to fail or restart from scratch. Running the
application on AWS Elastic Beanstalk as a single application environment with an all-at-once
deployment strategy will not meet the requirements, as it will not break down the
monolith into microservices, nor will it minimize application downtime. The all-at-once
deployment strategy will deploy updates to all instances simultaneously, causing a brief
outage for the application.
References:
Amazon Elastic Container Service
Microservices on AWS
Service Auto Scaling - Amazon Elastic Container Service
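The service auto scaling piece can be sketched with the AWS CLI; the cluster, service, and policy names are placeholders.

```shell
# Register an ECS service with Application Auto Scaling
# (placeholder cluster/service names).
aws application-autoscaling register-scalable-target \
    --service-namespace ecs \
    --resource-id "service/report-cluster/report-service" \
    --scalable-dimension ecs:service:DesiredCount \
    --min-capacity 2 --max-capacity 10

# Add a target-tracking policy that keeps average CPU near 70%.
aws application-autoscaling put-scaling-policy \
    --service-namespace ecs \
    --resource-id "service/report-cluster/report-service" \
    --scalable-dimension ecs:service:DesiredCount \
    --policy-name "cpu-target-tracking" \
    --policy-type TargetTrackingScaling \
    --target-tracking-scaling-policy-configuration '{
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"}}'
```

Combined with a rolling or blue/green deployment configuration on the service, this lets each microservice be updated and scaled independently with little or no downtime.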