Design High-Performing Architectures

1) A Docker application, which is running on an Amazon ECS cluster behind a load balancer, heavily uses Amazon DynamoDB. The application requires improved database performance by distributing the workload evenly and using the provisioned throughput efficiently. Which of the following should be implemented for the DynamoDB table?
A) Avoid using a composite primary key, which is composed of a partition key and a sort key.
B) Use partition keys with low-cardinality attributes, which have a small number of distinct values for each item.
C) Use partition keys with high-cardinality attributes, which have a large number of distinct values for each item.
D) Reduce the number of partition keys in the DynamoDB table.

2) A company is using Amazon S3 to store frequently accessed data. When an object is created or deleted, the S3 bucket sends an event notification to an Amazon SQS queue. A solutions architect needs to create a solution that will notify both the development and operations teams about the created or deleted objects. Which of the following would satisfy this requirement?
A) Create an Amazon SNS topic and configure two SQS queues to subscribe to the topic. Grant S3 permission to send notifications to SNS and update the bucket to use the new SNS topic.
B) Set up an Amazon SNS topic and configure two SQS queues to poll the SNS topic. Grant S3 permission to send notifications to SNS and update the bucket to use the new SNS topic.
C) Set up another SQS queue for the other team. Grant S3 permission to send a notification to the second SQS queue.
D) Create a new Amazon SNS FIFO topic for the other team. Grant S3 permission to send the notification to the second SNS topic.

3) A popular social network is hosted in AWS and is using an Amazon DynamoDB table as its database.
There is a requirement to implement a 'follow' feature where users can subscribe to updates made by a particular user and be notified via email. Which of the following is the most suitable solution to meet this requirement?
A) Create an AWS Lambda function that uses the DynamoDB Streams Kinesis Adapter to fetch data from the DynamoDB Streams endpoint. Set up an Amazon SNS topic that will notify the subscribers via email when there is an update made by a particular user.
B) Set up a DAX cluster to access the source DynamoDB table. Create a new DynamoDB trigger and an AWS Lambda function. For every update made to the user data, the trigger will send data to the Lambda function, which will then notify the subscribers via email using Amazon SNS.
C) Enable DynamoDB Streams and create an AWS Lambda trigger, as well as the IAM role that contains all of the permissions the Lambda function will need at runtime. The data from the stream record will be processed by the Lambda function, which will then publish a message to an Amazon SNS topic that notifies the subscribers via email.
D) Using the Amazon Kinesis Client Library (KCL), write an application that leverages the DynamoDB Streams Kinesis Adapter to fetch data from the DynamoDB Streams endpoint. When there are updates made by a particular user, notify the subscribers via email using Amazon SNS.

4) A startup is using Amazon RDS to store data from a web application. Most of the time, the application has low user activity, but it receives bursts of traffic within seconds whenever there is a new product announcement. The Solutions Architect needs to create a solution that will allow users around the globe to access the data using an API. What should the Solutions Architect do to meet the above requirement?
A) Create an API using Amazon API Gateway and use an Auto Scaling group of Amazon EC2 instances to handle the bursts of traffic in seconds.
B) Create an API using Amazon API Gateway and use an Amazon ECS cluster with Service Auto Scaling to handle the bursts of traffic in seconds.
C) Create an API using Amazon API Gateway and use AWS Lambda to handle the bursts of traffic in seconds.
D) Create an API using Amazon API Gateway and use AWS Elastic Beanstalk with Auto Scaling to handle the bursts of traffic in seconds.

5) A company wishes to query data that resides in multiple AWS accounts from a central data lake. Each account has its own Amazon S3 bucket that stores data unique to its business function. Access to the data lake must be granted based on user roles. Which solution will minimize overhead and costs while meeting the required access patterns?
A) Use Amazon Data Firehose to consolidate data from multiple accounts into a single account.
B) Use AWS Control Tower to centrally manage each account's S3 buckets.
C) Use AWS Lake Formation to consolidate data from multiple accounts into a single account.
D) Create a scheduled AWS Lambda function using Amazon EventBridge for transferring data from multiple accounts to the S3 buckets of the central account.

6) A tech company has a CRM application hosted on an Auto Scaling group of On-Demand EC2 instances with different instance types and sizes. The application is used extensively during office hours, from 9 in the morning to 5 in the afternoon. Users are complaining that the performance of the application is slow at the start of the day but works normally after a couple of hours. Which of the following is the MOST operationally efficient solution to ensure the application works properly at the beginning of the day?
A) Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the Memory utilization.
B) Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the CPU utilization.
C) Configure a Predictive scaling policy for the Auto Scaling group to automatically adjust the number of Amazon EC2 instances.
D) Configure a Scheduled scaling policy for the Auto Scaling group to launch new instances before the start of the day.

7) A company is using a combination of API Gateway and AWS Lambda for the web services of an online web portal that is accessed by hundreds of thousands of clients each day. The company will be announcing a new revolutionary product, and it is expected that the web portal will receive a massive number of visitors from all around the globe. How can the back-end systems and applications be protected from traffic spikes?
A) Manually upgrade the Amazon EC2 instances being used by API Gateway.
B) Use throttling limits in API Gateway.
C) API Gateway will automatically scale and handle massive traffic spikes, so you do not have to do anything.
D) Deploy Multi-AZ in API Gateway with Read Replica.

8) A company has a web application that uses Internet Information Services (IIS) for Windows Server. A file share is used to store the application data on the network-attached storage of the company's on-premises data center. To achieve a highly available system, the company plans to migrate the application and file share to AWS. Which of the following can be used to fulfill this requirement?
A) Migrate the existing file share configuration to Amazon EFS.
B) Migrate the existing file share configuration to Amazon FSx for Windows File Server.
C) Migrate the existing file share configuration to Amazon EBS.
D) Migrate the existing file share configuration to AWS Storage Gateway.

9) A company has a highly available architecture consisting of an Elastic Load Balancer and multiple Amazon EC2 instances configured with Auto Scaling across three Availability Zones.
The company needs to monitor the EC2 instances based on a specific metric that is not readily available in Amazon CloudWatch. Which of the following is a custom metric in CloudWatch that requires manual setup?
A) Network packets out of an EC2 instance.
B) CPU Utilization of an EC2 instance.
C) Memory Utilization of an EC2 instance.
D) Disk Read activity of an EC2 instance.

10) A company plans to launch an Amazon EC2 instance in a private subnet for its internal corporate web portal. For security purposes, the EC2 instance must send data to Amazon DynamoDB and Amazon S3 via private endpoints that don't pass through the public Internet. Which of the following can meet the above requirements?
A) Use AWS Direct Connect to route all access to S3 and DynamoDB via private endpoints.
B) Use AWS VPN CloudHub to route all access to S3 and DynamoDB via private endpoints.
C) Enable DynamoDB Encryption at Rest with the default AWS-managed key and S3 Server-Side Encryption with the default AWS KMS key to route all traffic to DynamoDB and S3 via private endpoints.
D) Use a DynamoDB VPC endpoint and an S3 VPC endpoint to route all access to these services via private endpoints.

11) A cryptocurrency trading platform is using an API built on AWS Lambda and API Gateway. Due to recent news and rumors about an upcoming price surge of Bitcoin, Ethereum, and other cryptocurrencies, the trading platform expects a significant increase in site visitors and new users in the coming days. In this scenario, how can you protect the backend systems of the platform from traffic spikes?
A) Enable throttling limits and result caching in API Gateway.
B) Move the Lambda function into a VPC.
C) Use CloudFront in front of API Gateway to act as a cache.
D) Switch from AWS Lambda and API Gateway to a more scalable and highly available architecture using EC2 instances, ELB, and Auto Scaling.
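For context on the throttling options in questions 7 and 11: API Gateway throttling follows a token-bucket model with a steady-state rate and a burst capacity. The sketch below is an illustration of that model only, not API Gateway's actual implementation; the class and parameter names are made up for this example.

```python
class TokenBucket:
    """Toy token-bucket limiter: `rate` tokens/second steady state, `burst` capacity."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.capacity = burst
        self.tokens = burst  # bucket starts full
        self.last = 0.0

    def allow(self, now):
        # Refill tokens for the elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # request passes through to the backend
        return False      # request is throttled (API Gateway would return HTTP 429)


# A spike of 150 simultaneous requests against rate=100/s, burst=100:
bucket = TokenBucket(rate=100, burst=100)
results = [bucket.allow(now=0.0) for _ in range(150)]
print(sum(results))  # 100 accepted; the remaining 50 are rejected
```

The point of the model: the backend never sees more than `burst` requests at once, and sustained load is capped at `rate`, which is why enabling throttling limits protects Lambda and the systems behind it during a spike.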
12) A healthcare organization wants to build a system that can predict drug prescription abuse. The organization will gather real-time data from multiple sources, which include Personally Identifiable Information (PII). It is crucial that this sensitive information is anonymized before landing in a NoSQL database for further processing. Which solution would meet the requirements?
A) Create a data lake in Amazon S3 and use it as the primary storage for patient health data. Use an S3 trigger to run an AWS Lambda function that performs anonymization. Send the anonymized data to Amazon DynamoDB.
B) Ingest real-time data using Amazon Kinesis Data Streams. Use an AWS Lambda function to anonymize the PII, then store it in Amazon DynamoDB.
C) Stream the data into an Amazon DynamoDB table. Enable DynamoDB Streams, and configure an AWS Lambda function with AmazonDynamoDBFullAccess permissions to perform anonymization on newly written items.
D) Deploy an Amazon Data Firehose stream to capture and transform the streaming data. Deliver the anonymized data to Amazon Redshift for analysis.

13) An e-commerce company runs a highly scalable web application that depends on an Amazon Aurora database. As the number of users increases, the read replica has difficulty keeping up with the increasing read traffic, causing performance bottlenecks during peak periods. Which of the following will resolve the issue with the most cost-effective solution?
A) Implement read scaling with Aurora Global Database.
B) Use automatic scaling for the Aurora read replica using Aurora Auto Scaling.
C) Set up a read replica that can operate across different Regions.
D) Increase the size of the Aurora DB cluster.

14) An AI-powered Forex trading application consumes thousands of data sets to train its machine learning model. The application's workload requires high-performance, parallel hot storage to process the training datasets concurrently.
It also needs cost-effective cold storage to archive the datasets that yield low profit. Which of the following Amazon storage services should the developer use?
A) Use Amazon FSx for Lustre and Amazon S3 for hot and cold storage, respectively.
B) Use Amazon FSx for Windows File Server and Amazon S3 for hot and cold storage, respectively.
C) Use Amazon FSx for Lustre and the Provisioned IOPS SSD (io1) volumes of Amazon EBS for hot and cold storage, respectively.
D) Use Amazon Elastic File System and Amazon S3 for hot and cold storage, respectively.

15) A retail company receives raw .csv data files into its Amazon S3 bucket from multiple sources on an hourly basis, with an average file size of 2 GB. An automated process must be implemented to convert these .csv files into the more efficient Apache Parquet format and store the converted files in another S3 bucket. Additionally, the conversion process must be automatically initiated each time a new file is uploaded into the S3 bucket. Which of the following options must be implemented to meet these requirements with the LEAST operational overhead?
A) Set up an Apache Spark job running on an Amazon EC2 instance and create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor S3 PUT events in the S3 bucket. Configure AWS Lambda to invoke the Spark job for every new .csv file added via a Function URL.
B) Use an AWS Lambda function triggered by an S3 PUT event to convert the .csv files to Parquet format. Use the AWS Transfer Family with SFTP service to move the output files to the target S3 bucket.
C) Create an ETL (Extract, Transform, Load) job and a Data Catalog table in AWS Glue. Configure the Glue crawler to run on a schedule to check for new files in the S3 bucket every hour and convert them to Parquet format.
D) Use an AWS Glue extract, transform, and load (ETL) job to process and convert the .csv files to Apache Parquet format and store the output files in the target S3 bucket.
Set up an S3 Event Notification to track every S3 PUT event and invoke the ETL job in Glue through Amazon SQS.

16) A company collects atmospheric data such as temperature, air pressure, and humidity from different countries. Each site location is equipped with various weather instruments and a high-speed Internet connection. The average collected data in each location is around 500 GB and will be analyzed by a weather forecasting application hosted in Northern Virginia. The Solutions Architect must determine the fastest way to aggregate all the data. Which of the following options can satisfy the given requirement?
A) Use AWS Snowball Edge to transfer large amounts of data.
B) Upload the data to the closest Amazon S3 bucket. Set up cross-region replication and copy the objects to the destination bucket.
C) Set up a Site-to-Site VPN connection.
D) Enable Transfer Acceleration in the destination bucket and upload the collected data using multipart upload.

17) A car dealership website hosted on Amazon EC2 stores car listings in an Amazon Aurora database managed by Amazon RDS. Once a vehicle has been sold, its data must be removed from the current listings and forwarded to a distributed processing system. Which of the following options can satisfy the given requirement?
A) Create an RDS event subscription and send the notifications to Amazon SQS. Configure the SQS queues to fan out the event notifications to multiple Amazon SNS topics. Process the data using AWS Lambda functions.
B) Create an RDS event subscription and send the notifications to AWS Lambda. Configure the Lambda function to fan out the event notifications to multiple Amazon SQS queues to update the processing system.
C) Create an RDS event subscription and send the notifications to Amazon SNS. Configure the SNS topic to fan out the event notifications to multiple Amazon SQS queues.
D) Create a native function or a stored procedure that invokes an AWS Lambda function.
Configure the Lambda function to send event notifications to an Amazon SQS queue for the processing system to consume.

18) An online learning company hosts its Microsoft .NET e-Learning application on a Windows Server in its on-premises data center. The application uses an Oracle Database Standard Edition as its backend database. The company wants a high-performing solution to migrate this workload to the AWS cloud to take advantage of the cloud's high availability. The migration process should minimize development changes, and the environment should be easier to manage. Which of the following options should be implemented to meet the company requirements? (Select TWO.)
A) Rehost the on-premises .NET application to an AWS Elastic Beanstalk Multi-AZ environment that runs in multiple Availability Zones.
B) Provision and replatform the application to Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes. Use the Windows Server Amazon Machine Image (AMI) and deploy the .NET application to the ECS cluster via the ECS Anywhere service.
C) Perform a homogeneous migration by moving the Oracle database to Amazon RDS for Oracle in a Multi-AZ deployment using AWS Database Migration Service (AWS DMS).
D) Refactor the application to .NET Core and run it as a serverless container service using Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.
E) Use AWS Application Migration Service (AWS MGN) to migrate the on-premises Oracle database server to a new Amazon EC2 instance.

19) A global IT company with offices around the world has multiple AWS accounts. To improve efficiency and drive costs down, the Chief Information Officer (CIO) wants to set up a solution that centrally manages their AWS resources. This will allow them to procure AWS resources centrally and share resources such as AWS Transit Gateways, AWS License Manager configurations, or Amazon Route 53 Resolver rules across their various accounts.
As the Solutions Architect, which combination of options should you implement in this scenario? (Select TWO.)
A) Use AWS Control Tower to easily and securely share your resources with your AWS accounts.
B) Consolidate all of the company's accounts using AWS ParallelCluster.
C) Consolidate all of the company's accounts using AWS Organizations.
D) Use the AWS Identity and Access Management service to set up cross-account access that will easily and securely share your resources with your AWS accounts.
E) Use the AWS Resource Access Manager (RAM) service to easily and securely share your resources with your AWS accounts.
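Questions 2 and 17 above both rest on the SNS-to-SQS fan-out pattern, in which a single published message is copied to every subscribed queue so that independent consumers each get their own copy. A minimal in-memory sketch of those semantics (this is not the AWS API; the `Topic` class and its methods are illustrative stand-ins):

```python
from collections import deque


class Topic:
    """Toy model of SNS fan-out: every subscribed queue receives its own copy."""

    def __init__(self):
        self.queues = []

    def subscribe(self, queue):
        self.queues.append(queue)

    def publish(self, message):
        # SNS pushes a copy of the message to each subscriber, so consumers
        # on different queues never compete for the same message.
        for queue in self.queues:
            queue.append(message)


# One S3-style event notification reaches both teams' queues independently.
dev_queue, ops_queue = deque(), deque()
topic = Topic()
topic.subscribe(dev_queue)
topic.subscribe(ops_queue)
topic.publish({"event": "s3:ObjectCreated:Put", "key": "reports/q1.csv"})
print(len(dev_queue), len(ops_queue))  # each queue holds its own copy
```

Contrast this with publishing directly to a single SQS queue, where each message is consumed by exactly one receiver; that is why a second team in question 2 cannot simply share the existing queue.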