{"id":110502,"date":"2025-09-20T03:07:13","date_gmt":"2025-09-20T03:07:13","guid":{"rendered":"https:\/\/www.dumpsbase.com\/freedumps\/?p=110502"},"modified":"2025-09-20T03:07:13","modified_gmt":"2025-09-20T03:07:13","slug":"amazon-dea-c01-free-dumps-part-2-q41-q70-are-also-available-online-helping-you-check-the-aws-certified-data-engineer-associate-dumps-v10-02","status":"publish","type":"post","link":"https:\/\/www.dumpsbase.com\/freedumps\/amazon-dea-c01-free-dumps-part-2-q41-q70-are-also-available-online-helping-you-check-the-aws-certified-data-engineer-associate-dumps-v10-02.html","title":{"rendered":"Amazon DEA-C01 Free Dumps (Part 2, Q41-Q70) Are Also Available Online, Helping You Check the AWS Certified Data Engineer &#8211; Associate Dumps (V10.02)"},"content":{"rendered":"<p>Attempting the Amazon DEA-C01 dumps (V10.02) from DumpsBase is a great way to prepare for your AWS Certified Data Engineer &#8211; Associate certification exam. With the DEA-C01 dumps (V10.02), you will receive 100% validated practice questions and answers, covering every exam subject in depth, including clear explanations and insights that resolve any uncertainties. You can check our quality by reading our <a href=\"https:\/\/www.dumpsbase.com\/freedumps\/aws-certified-data-engineer-associate-dea-c01-dumps-v10-02-are-available-online-come-here-and-check-the-amazon-dea-c01-free-dumps-part-1-q1-q40.html\"><em><strong>DEA-C01 free dumps (Part 1, Q1-Q40) of V10.02<\/strong><\/em><\/a>. From these demo questions, you can 100% confirm that DumpsBase must be your good partner to complete the AWS Certified Data Engineer &#8211; Associate (DEA-C01) certification exam. Choose DumpsBase and learn the DEA-C01 updated dumps. This helps you become comfortable with timing, question types, and difficulty levels. Even more importantly, these DEA-C01 dumps (V10.02) identify your weak points so you can concentrate on your review effectively. 
Today, you can continue to check more free demos online.<\/p>\n<h2>Below are the <span style=\"background-color: #00ff00;\"><em>Amazon DEA-C01 free dumps (Part 2, Q41-Q70) of V10.02<\/em><\/span> for checking more:<\/h2>\n<div  id=\"watupro_quiz\" class=\"quiz-area single-page-quiz\">\n<p id=\"submittingExam10579\" style=\"display:none;text-align:center;\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.dumpsbase.com\/freedumps\/wp-content\/plugins\/watupro\/img\/loading.gif\" width=\"16\" height=\"16\"><\/p>\n\n<div class=\"watupro-exam-description\" id=\"description-quiz-10579\"><\/div>\n\n<form action=\"\" method=\"post\" class=\"quiz-form\" id=\"quiz-10579\"  enctype=\"multipart\/form-data\" >\n<div class='watu-question ' id='question-1' style=';'><div id='questionWrap-1'  class='   watupro-question-id-418551'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>1. 
<\/span>A company wants to implement real-time analytics capabilities. The company wants to use Amazon Kinesis Data Streams and Amazon Redshift to ingest and process streaming data at the rate of several gigabytes per second. The company wants to derive near real-time insights by using existing business intelligence (BI) and analytics tools. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_1' value='418551' \/><input type='hidden' id='answerType418551' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418551[]' id='answer-id-1621554' class='answer   answerof-418551 ' value='1621554'   \/><label for='answer-id-1621554' id='answer-label-1621554' class=' answer'><span>Use Kinesis Data Streams to stage data in Amazon S3. Use the COPY command to load data from Amazon S3 directly into Amazon Redshift to make the data immediately available for real-time analysis.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418551[]' id='answer-id-1621555' class='answer   answerof-418551 ' value='1621555'   \/><label for='answer-id-1621555' id='answer-label-1621555' class=' answer'><span>Access the data from Kinesis Data Streams by using SQL queries. Create materialized views directly on top of the stream. Refresh the materialized views regularly to query the most recent stream data.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418551[]' id='answer-id-1621556' class='answer   answerof-418551 ' value='1621556'   \/><label for='answer-id-1621556' id='answer-label-1621556' class=' answer'><span>Create an external schema in Amazon Redshift to map the data from Kinesis Data Streams to an Amazon Redshift object. 
Create a materialized view to read data from the stream. Set the materialized view to auto refresh.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418551[]' id='answer-id-1621557' class='answer   answerof-418551 ' value='1621557'   \/><label for='answer-id-1621557' id='answer-label-1621557' class=' answer'><span>Connect Kinesis Data Streams to Amazon Kinesis Data Firehose. Use Kinesis Data Firehose to stage the data in Amazon S3. Use the COPY command to load the data from Amazon S3 to a table in Amazon Redshift.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-2' style=';'><div id='questionWrap-2'  class='   watupro-question-id-418552'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>2. <\/span>A company stores petabytes of data in thousands of Amazon S3 buckets in the S3 Standard storage class. The data supports analytics workloads that have unpredictable and variable data access patterns. <br \/>\r<br>The company does not access some data for months. However, the company must be able to retrieve all data within milliseconds. The company needs to optimize S3 storage costs. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_2' value='418552' \/><input type='hidden' id='answerType418552' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418552[]' id='answer-id-1621558' class='answer   answerof-418552 ' value='1621558'   \/><label for='answer-id-1621558' id='answer-label-1621558' class=' answer'><span>Use S3 Storage Lens standard metrics to determine when to move objects to more cost-optimized storage classes. 
Create S3 Lifecycle policies for the S3 buckets to move objects to cost-optimized storage classes. Continue to refine the S3 Lifecycle policies in the future to optimize storage costs.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418552[]' id='answer-id-1621559' class='answer   answerof-418552 ' value='1621559'   \/><label for='answer-id-1621559' id='answer-label-1621559' class=' answer'><span>Use S3 Storage Lens activity metrics to identify S3 buckets that the company accesses infrequently. Configure S3 Lifecycle rules to move objects from S3 Standard to the S3 Standard-Infrequent Access (S3 Standard-IA) and S3 Glacier storage classes based on the age of the data.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418552[]' id='answer-id-1621560' class='answer   answerof-418552 ' value='1621560'   \/><label for='answer-id-1621560' id='answer-label-1621560' class=' answer'><span>Use S3 Intelligent-Tiering. Activate the Deep Archive Access tier.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418552[]' id='answer-id-1621561' class='answer   answerof-418552 ' value='1621561'   \/><label for='answer-id-1621561' id='answer-label-1621561' class=' answer'><span>Use S3 Intelligent-Tiering. Use the default access tier.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-3' style=';'><div id='questionWrap-3'  class='   watupro-question-id-418553'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>3. <\/span>A company uses an Amazon Redshift cluster that runs on RA3 nodes. The company wants to scale read and write capacity to meet demand. A data engineer needs to identify a solution that will turn on concurrency scaling. 
<br \/>\r<br>Which solution will meet this requirement?<\/div><input type='hidden' name='question_id[]' id='qID_3' value='418553' \/><input type='hidden' id='answerType418553' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418553[]' id='answer-id-1621562' class='answer   answerof-418553 ' value='1621562'   \/><label for='answer-id-1621562' id='answer-label-1621562' class=' answer'><span>Turn on concurrency scaling in workload management (WLM) for Redshift Serverless workgroups.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418553[]' id='answer-id-1621563' class='answer   answerof-418553 ' value='1621563'   \/><label for='answer-id-1621563' id='answer-label-1621563' class=' answer'><span>Turn on concurrency scaling at the workload management (WLM) queue level in the Redshift cluster.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418553[]' id='answer-id-1621564' class='answer   answerof-418553 ' value='1621564'   \/><label for='answer-id-1621564' id='answer-label-1621564' class=' answer'><span>Turn on concurrency scaling in the settings during the creation of a new Redshift cluster.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418553[]' id='answer-id-1621565' class='answer   answerof-418553 ' value='1621565'   \/><label for='answer-id-1621565' id='answer-label-1621565' class=' answer'><span>Turn on concurrency scaling for the daily usage quota for the Redshift cluster.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-4' style=';'><div id='questionWrap-4'  class='   watupro-question-id-418554'>\n\t\t\t<div class='question-content'><div><span 
class='watupro_num'>4. <\/span>A data engineer needs Amazon Athena queries to finish faster. The data engineer notices that all the files the Athena queries use are currently stored in uncompressed .csv format. The data engineer also notices that users perform most queries by selecting a specific column. <br \/>\r<br>Which solution will MOST speed up the Athena query performance?<\/div><input type='hidden' name='question_id[]' id='qID_4' value='418554' \/><input type='hidden' id='answerType418554' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418554[]' id='answer-id-1621566' class='answer   answerof-418554 ' value='1621566'   \/><label for='answer-id-1621566' id='answer-label-1621566' class=' answer'><span>Change the data format from .csv to JSON format. Apply Snappy compression.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418554[]' id='answer-id-1621567' class='answer   answerof-418554 ' value='1621567'   \/><label for='answer-id-1621567' id='answer-label-1621567' class=' answer'><span>Compress the .csv files by using Snappy compression.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418554[]' id='answer-id-1621568' class='answer   answerof-418554 ' value='1621568'   \/><label for='answer-id-1621568' id='answer-label-1621568' class=' answer'><span>Change the data format from .csv to Apache Parquet. 
Apply Snappy compression.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418554[]' id='answer-id-1621569' class='answer   answerof-418554 ' value='1621569'   \/><label for='answer-id-1621569' id='answer-label-1621569' class=' answer'><span>Compress the .csv files by using gzip compression.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-5' style=';'><div id='questionWrap-5'  class='   watupro-question-id-418555'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>5. <\/span>A data engineer is using Amazon Athena to analyze sales data that is in Amazon S3. The data engineer writes a query to retrieve sales amounts for 2023 for several products from a table named sales_data. However, the query does not return results for all of the products that are in the sales_data table. <br \/>\r<br>The data engineer needs to troubleshoot the query to resolve the issue. 
<br \/>\r<br>The data engineer's original query is as follows: <br \/>\r<br>SELECT product_name, sum(sales_amount) <br \/>\r<br>FROM sales_data <br \/>\r<br>WHERE year = 2023 <br \/>\r<br>GROUP BY product_name <br \/>\r<br>How should the data engineer modify the Athena query to meet these requirements?<\/div><input type='hidden' name='question_id[]' id='qID_5' value='418555' \/><input type='hidden' id='answerType418555' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418555[]' id='answer-id-1621570' class='answer   answerof-418555 ' value='1621570'   \/><label for='answer-id-1621570' id='answer-label-1621570' class=' answer'><span>Replace sum(sales_amount) with count(*) for the aggregation.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418555[]' id='answer-id-1621571' class='answer   answerof-418555 ' value='1621571'   \/><label for='answer-id-1621571' id='answer-label-1621571' class=' answer'><span>Change WHERE year = 2023 to WHERE extract(year FROM sales_data) = 2023.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418555[]' id='answer-id-1621572' class='answer   answerof-418555 ' value='1621572'   \/><label for='answer-id-1621572' id='answer-label-1621572' class=' answer'><span>Add HAVING sum(sales_amount) &gt; 0 after the GROUP BY clause.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418555[]' id='answer-id-1621573' class='answer   answerof-418555 ' value='1621573'   \/><label for='answer-id-1621573' id='answer-label-1621573' class=' answer'><span>Remove the GROUP BY clause.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-6' style=';'><div id='questionWrap-6'  
class='   watupro-question-id-418556'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>6. <\/span>A company is migrating its database servers from Amazon EC2 instances that run Microsoft SQL Server to Amazon RDS for Microsoft SQL Server DB instances. The company's analytics team must export large data elements every day until the migration is complete. The data elements are the result of SQL joins across multiple tables. The data must be in Apache Parquet format. The analytics team must store the data in Amazon S3. <br \/>\r<br>Which solution will meet these requirements in the MOST operationally efficient way?<\/div><input type='hidden' name='question_id[]' id='qID_6' value='418556' \/><input type='hidden' id='answerType418556' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418556[]' id='answer-id-1621574' class='answer   answerof-418556 ' value='1621574'   \/><label for='answer-id-1621574' id='answer-label-1621574' class=' answer'><span>Create a view in the EC2 instance-based SQL Server databases that contains the required data elements. Create an AWS Glue job that selects the data directly from the view and transfers the data in Parquet format to an S3 bucket. Schedule the AWS Glue job to run every day.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418556[]' id='answer-id-1621575' class='answer   answerof-418556 ' value='1621575'   \/><label for='answer-id-1621575' id='answer-label-1621575' class=' answer'><span>Schedule SQL Server Agent to run a daily SQL query that selects the desired data elements from the EC2 instance-based SQL Server databases. Configure the query to direct the output .csv objects to an S3 bucket. 
Create an S3 event that invokes an AWS Lambda function to transform the output format from .csv to Parquet.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418556[]' id='answer-id-1621576' class='answer   answerof-418556 ' value='1621576'   \/><label for='answer-id-1621576' id='answer-label-1621576' class=' answer'><span>Use a SQL query to create a view in the EC2 instance-based SQL Server databases that contains the required data elements. Create and run an AWS Glue crawler to read the view. Create an AWS Glue job that retrieves the data and transfers the data in Parquet format to an S3 bucket. Schedule the AWS Glue job to run every day.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418556[]' id='answer-id-1621577' class='answer   answerof-418556 ' value='1621577'   \/><label for='answer-id-1621577' id='answer-label-1621577' class=' answer'><span>Create an AWS Lambda function that queries the EC2 instance-based databases by using Java Database Connectivity (JDBC). Configure the Lambda function to retrieve the required data, transform the data into Parquet format, and transfer the data into an S3 bucket. Use Amazon EventBridge to schedule the Lambda function to run every day.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-7' style=';'><div id='questionWrap-7'  class='   watupro-question-id-418557'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>7. <\/span>A company loads transaction data for each day into Amazon Redshift tables at the end of each day. The company wants to have the ability to track which tables have been loaded and which tables still need to be loaded. <br \/>\r<br>A data engineer wants to store the load statuses of Redshift tables in an Amazon DynamoDB table. 
The data engineer creates an AWS Lambda function to publish the details of the load statuses to DynamoDB. <br \/>\r<br>How should the data engineer invoke the Lambda function to write load statuses to the DynamoDB table?<\/div><input type='hidden' name='question_id[]' id='qID_7' value='418557' \/><input type='hidden' id='answerType418557' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418557[]' id='answer-id-1621578' class='answer   answerof-418557 ' value='1621578'   \/><label for='answer-id-1621578' id='answer-label-1621578' class=' answer'><span>Use a second Lambda function to invoke the first Lambda function based on Amazon CloudWatch events.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418557[]' id='answer-id-1621579' class='answer   answerof-418557 ' value='1621579'   \/><label for='answer-id-1621579' id='answer-label-1621579' class=' answer'><span>Use the Amazon Redshift Data API to publish an event to Amazon EventBridge. Configure an EventBridge rule to invoke the Lambda function.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418557[]' id='answer-id-1621580' class='answer   answerof-418557 ' value='1621580'   \/><label for='answer-id-1621580' id='answer-label-1621580' class=' answer'><span>Use the Amazon Redshift Data API to publish a message to an Amazon Simple Queue Service (Amazon SQS) queue. 
Configure the SQS queue to invoke the Lambda function.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418557[]' id='answer-id-1621581' class='answer   answerof-418557 ' value='1621581'   \/><label for='answer-id-1621581' id='answer-label-1621581' class=' answer'><span>Use a second Lambda function to invoke the first Lambda function based on AWS CloudTrail events.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-8' style=';'><div id='questionWrap-8'  class='   watupro-question-id-418558'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>8. <\/span>A financial company wants to use Amazon Athena to run on-demand SQL queries on a petabyte-scale dataset to support a business intelligence (BI) application. An AWS Glue job that runs during non-business hours updates the dataset once every day. The BI application has a standard data refresh frequency of 1 hour to comply with company policies. <br \/>\r<br>A data engineer wants to cost optimize the company's use of Amazon Athena without adding any additional infrastructure costs. 
<br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_8' value='418558' \/><input type='hidden' id='answerType418558' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418558[]' id='answer-id-1621582' class='answer   answerof-418558 ' value='1621582'   \/><label for='answer-id-1621582' id='answer-label-1621582' class=' answer'><span>Configure an Amazon S3 Lifecycle policy to move data to the S3 Glacier Deep Archive storage class after 1 day.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418558[]' id='answer-id-1621583' class='answer   answerof-418558 ' value='1621583'   \/><label for='answer-id-1621583' id='answer-label-1621583' class=' answer'><span>Use the query result reuse feature of Amazon Athena for the SQL queries.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418558[]' id='answer-id-1621584' class='answer   answerof-418558 ' value='1621584'   \/><label for='answer-id-1621584' id='answer-label-1621584' class=' answer'><span>Add an Amazon ElastiCache cluster between the BI application and Athena.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418558[]' id='answer-id-1621585' class='answer   answerof-418558 ' value='1621585'   \/><label for='answer-id-1621585' id='answer-label-1621585' class=' answer'><span>Change the format of the files that are in the dataset to Apache Parquet.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-9' style=';'><div id='questionWrap-9'  class='   watupro-question-id-418559'>\n\t\t\t<div class='question-content'><div><span 
class='watupro_num'>9. <\/span>A company maintains multiple extract, transform, and load (ETL) workflows that ingest data from the company's operational databases into an Amazon S3 based data lake. The ETL workflows use AWS Glue and Amazon EMR to process data. <br \/>\r<br>The company wants to improve the existing architecture to provide automated orchestration and to require minimal manual effort. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_9' value='418559' \/><input type='hidden' id='answerType418559' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418559[]' id='answer-id-1621586' class='answer   answerof-418559 ' value='1621586'   \/><label for='answer-id-1621586' id='answer-label-1621586' class=' answer'><span>AWS Glue workflows<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418559[]' id='answer-id-1621587' class='answer   answerof-418559 ' value='1621587'   \/><label for='answer-id-1621587' id='answer-label-1621587' class=' answer'><span>AWS Step Functions tasks<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418559[]' id='answer-id-1621588' class='answer   answerof-418559 ' value='1621588'   \/><label for='answer-id-1621588' id='answer-label-1621588' class=' answer'><span>AWS Lambda functions<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418559[]' id='answer-id-1621589' class='answer   answerof-418559 ' value='1621589'   \/><label for='answer-id-1621589' id='answer-label-1621589' class=' answer'><span>Amazon Managed Workflows for Apache Airflow (Amazon MWAA) workflows<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end 
questionWrap--><\/div><\/div><div class='watu-question ' id='question-10' style=';'><div id='questionWrap-10'  class='   watupro-question-id-418560'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>10. <\/span>A data engineer must orchestrate a series of Amazon Athena queries that will run every day. Each query can run for more than 15 minutes. <br \/>\r<br>Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_10' value='418560' \/><input type='hidden' id='answerType418560' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418560[]' id='answer-id-1621590' class='answer   answerof-418560 ' value='1621590'   \/><label for='answer-id-1621590' id='answer-label-1621590' class=' answer'><span>Use an AWS Lambda function and the Athena Boto3 client start_query_execution API call to invoke the Athena queries programmatically.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418560[]' id='answer-id-1621591' class='answer   answerof-418560 ' value='1621591'   \/><label for='answer-id-1621591' id='answer-label-1621591' class=' answer'><span>Create an AWS Step Functions workflow and add two states. Add the first state before the Lambda function. Configure the second state as a Wait state to periodically check whether the Athena query has finished using the Athena Boto3 get_query_execution API call. 
Configure the workflow to invoke the next query when the current query has finished running.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418560[]' id='answer-id-1621592' class='answer   answerof-418560 ' value='1621592'   \/><label for='answer-id-1621592' id='answer-label-1621592' class=' answer'><span>Use an AWS Glue Python shell job and the Athena Boto3 client start_query_execution API call to invoke the Athena queries programmatically.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418560[]' id='answer-id-1621593' class='answer   answerof-418560 ' value='1621593'   \/><label for='answer-id-1621593' id='answer-label-1621593' class=' answer'><span>Use an AWS Glue Python shell script to run a sleep timer that checks every 5 minutes to determine whether the current Athena query has finished running successfully. Configure the Python shell script to invoke the next query when the current query has finished running.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418560[]' id='answer-id-1621594' class='answer   answerof-418560 ' value='1621594'   \/><label for='answer-id-1621594' id='answer-label-1621594' class=' answer'><span>Use Amazon Managed Workflows for Apache Airflow (Amazon MWAA) to orchestrate the Athena queries in AWS Batch.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-11' style=';'><div id='questionWrap-11'  class='   watupro-question-id-418561'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>11. <\/span>A company stores data from an application in an Amazon DynamoDB table that operates in provisioned capacity mode. The workloads of the application have predictable throughput load on a regular schedule. 
Every Monday, there is an immediate increase in activity early in the morning. The application has very low usage during weekends. <br \/>\r<br>The company must ensure that the application performs consistently during peak usage times. <br \/>\r<br>Which solution will meet these requirements in the MOST cost-effective way?<\/div><input type='hidden' name='question_id[]' id='qID_11' value='418561' \/><input type='hidden' id='answerType418561' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418561[]' id='answer-id-1621595' class='answer   answerof-418561 ' value='1621595'   \/><label for='answer-id-1621595' id='answer-label-1621595' class=' answer'><span>Increase the provisioned capacity to the maximum capacity that is currently present during peak load times.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418561[]' id='answer-id-1621596' class='answer   answerof-418561 ' value='1621596'   \/><label for='answer-id-1621596' id='answer-label-1621596' class=' answer'><span>Divide the table into two tables. Provision each table with half of the provisioned capacity of the original table. Spread queries evenly across both tables.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418561[]' id='answer-id-1621597' class='answer   answerof-418561 ' value='1621597'   \/><label for='answer-id-1621597' id='answer-label-1621597' class=' answer'><span>Use AWS Application Auto Scaling to schedule higher provisioned capacity for peak usage times. 
Schedule lower capacity during off-peak times.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418561[]' id='answer-id-1621598' class='answer   answerof-418561 ' value='1621598'   \/><label for='answer-id-1621598' id='answer-label-1621598' class=' answer'><span>Change the capacity mode from provisioned to on-demand. Configure the table to scale up and scale down based on the load on the table.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-12' style=';'><div id='questionWrap-12'  class='   watupro-question-id-418562'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>12. <\/span>A data engineer must orchestrate a data pipeline that consists of one AWS Lambda function and one AWS Glue job. The solution must integrate with AWS services. <br \/>\r<br>Which solution will meet these requirements with the LEAST management overhead?<\/div><input type='hidden' name='question_id[]' id='qID_12' value='418562' \/><input type='hidden' id='answerType418562' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418562[]' id='answer-id-1621599' class='answer   answerof-418562 ' value='1621599'   \/><label for='answer-id-1621599' id='answer-label-1621599' class=' answer'><span>Use an AWS Step Functions workflow that includes a state machine. Configure the state machine to run the Lambda function and then the AWS Glue job.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418562[]' id='answer-id-1621600' class='answer   answerof-418562 ' value='1621600'   \/><label for='answer-id-1621600' id='answer-label-1621600' class=' answer'><span>Use an Apache Airflow workflow that is deployed on an Amazon EC2 instance. 
Define a directed acyclic graph (DAG) in which the first task is to call the Lambda function and the second task is to call the AWS Glue job.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418562[]' id='answer-id-1621601' class='answer   answerof-418562 ' value='1621601'   \/><label for='answer-id-1621601' id='answer-label-1621601' class=' answer'><span>Use an AWS Glue workflow to run the Lambda function and then the AWS Glue job.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418562[]' id='answer-id-1621602' class='answer   answerof-418562 ' value='1621602'   \/><label for='answer-id-1621602' id='answer-label-1621602' class=' answer'><span>Use an Apache Airflow workflow that is deployed on Amazon Elastic Kubernetes Service (Amazon EKS). Define a directed acyclic graph (DAG) in which the first task is to call the Lambda function and the second task is to call the AWS Glue job.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-13' style=';'><div id='questionWrap-13'  class='   watupro-question-id-418563'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>13. <\/span>A company is planning to migrate on-premises Apache Hadoop clusters to Amazon EMR. The company also needs to migrate a data catalog into a persistent storage solution. <br \/>\r<br>The company currently stores the data catalog in an on-premises Apache Hive metastore on the Hadoop clusters. The company requires a serverless solution to migrate the data catalog. 
<br \/>\r<br>Which solution will meet these requirements MOST cost-effectively?<\/div><input type='hidden' name='question_id[]' id='qID_13' value='418563' \/><input type='hidden' id='answerType418563' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418563[]' id='answer-id-1621603' class='answer   answerof-418563 ' value='1621603'   \/><label for='answer-id-1621603' id='answer-label-1621603' class=' answer'><span>Use AWS Database Migration Service (AWS DMS) to migrate the Hive metastore into Amazon S3. \r\nConfigure AWS Glue Data Catalog to scan Amazon S3 to produce the data catalog.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418563[]' id='answer-id-1621604' class='answer   answerof-418563 ' value='1621604'   \/><label for='answer-id-1621604' id='answer-label-1621604' class=' answer'><span>Configure a Hive metastore in Amazon EMR.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418563[]' id='answer-id-1621605' class='answer   answerof-418563 ' value='1621605'   \/><label for='answer-id-1621605' id='answer-label-1621605' class=' answer'><span>Migrate the existing on-premises Hive metastore into Amazon EMR.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418563[]' id='answer-id-1621606' class='answer   answerof-418563 ' value='1621606'   \/><label for='answer-id-1621606' id='answer-label-1621606' class=' answer'><span>Use AWS Glue Data Catalog to store the company's data catalog as an external data catalog.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418563[]' id='answer-id-1621607' class='answer   answerof-418563 ' value='1621607'   \/><label for='answer-id-1621607' id='answer-label-1621607' 
class=' answer'><span>Configure an external Hive metastore in Amazon EMR.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418563[]' id='answer-id-1621608' class='answer   answerof-418563 ' value='1621608'   \/><label for='answer-id-1621608' id='answer-label-1621608' class=' answer'><span>Migrate the existing on-premises Hive metastore into Amazon EMR.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418563[]' id='answer-id-1621609' class='answer   answerof-418563 ' value='1621609'   \/><label for='answer-id-1621609' id='answer-label-1621609' class=' answer'><span>Use Amazon Aurora MySQL to store the company's data catalog.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418563[]' id='answer-id-1621610' class='answer   answerof-418563 ' value='1621610'   \/><label for='answer-id-1621610' id='answer-label-1621610' class=' answer'><span>Configure a new Hive metastore in Amazon EMR.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418563[]' id='answer-id-1621611' class='answer   answerof-418563 ' value='1621611'   \/><label for='answer-id-1621611' id='answer-label-1621611' class=' answer'><span>Migrate the existing on-premises Hive metastore into Amazon EMR.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418563[]' id='answer-id-1621612' class='answer   answerof-418563 ' value='1621612'   \/><label for='answer-id-1621612' id='answer-label-1621612' class=' answer'><span>Use the new metastore as the company's data catalog.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-14' style=';'><div id='questionWrap-14'  class='   watupro-question-id-418564'>\n\t\t\t<div class='question-content'><div><span 
class='watupro_num'>14. <\/span>A data engineer is building a data pipeline on AWS by using AWS Glue extract, transform, and load (ETL) jobs. The data engineer needs to process data from Amazon RDS and MongoDB, perform transformations, and load the transformed data into Amazon Redshift for analytics. The data updates must occur every hour. <br \/>\r<br>Which combination of tasks will meet these requirements with the LEAST operational overhead? (Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_14' value='418564' \/><input type='hidden' id='answerType418564' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418564[]' id='answer-id-1621613' class='answer   answerof-418564 ' value='1621613'   \/><label for='answer-id-1621613' id='answer-label-1621613' class=' answer'><span>Configure AWS Glue triggers to run the ETL jobs every hour.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418564[]' id='answer-id-1621614' class='answer   answerof-418564 ' value='1621614'   \/><label for='answer-id-1621614' id='answer-label-1621614' class=' answer'><span>Use AWS Glue DataBrew to clean and prepare the data for analytics.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418564[]' id='answer-id-1621615' class='answer   answerof-418564 ' value='1621615'   \/><label for='answer-id-1621615' id='answer-label-1621615' class=' answer'><span>Use AWS Lambda functions to schedule and run the ETL jobs every hour.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418564[]' id='answer-id-1621616' class='answer   answerof-418564 ' value='1621616'   \/><label for='answer-id-1621616' id='answer-label-1621616' class=' answer'><span>Use AWS Glue connections 
to establish connectivity between the data sources and Amazon Redshift.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418564[]' id='answer-id-1621617' class='answer   answerof-418564 ' value='1621617'   \/><label for='answer-id-1621617' id='answer-label-1621617' class=' answer'><span>Use the Redshift Data API to load transformed data into Amazon Redshift.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-15' style=';'><div id='questionWrap-15'  class='   watupro-question-id-418565'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>15. <\/span>A company uses Amazon RDS for MySQL as the database for a critical application. The database workload is mostly writes, with a small number of reads. <br \/>\r<br>A data engineer notices that the CPU utilization of the DB instance is very high. The high CPU utilization is slowing down the application. The data engineer must reduce the CPU utilization of the DB Instance. <br \/>\r<br>Which actions should the data engineer take to meet this requirement? (Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_15' value='418565' \/><input type='hidden' id='answerType418565' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418565[]' id='answer-id-1621618' class='answer   answerof-418565 ' value='1621618'   \/><label for='answer-id-1621618' id='answer-label-1621618' class=' answer'><span>Use the Performance Insights feature of Amazon RDS to identify queries that have high CPU utilization. 
Optimize the problematic queries.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418565[]' id='answer-id-1621619' class='answer   answerof-418565 ' value='1621619'   \/><label for='answer-id-1621619' id='answer-label-1621619' class=' answer'><span>Modify the database schema to include additional tables and indexes.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418565[]' id='answer-id-1621620' class='answer   answerof-418565 ' value='1621620'   \/><label for='answer-id-1621620' id='answer-label-1621620' class=' answer'><span>Reboot the RDS DB instance once each week.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418565[]' id='answer-id-1621621' class='answer   answerof-418565 ' value='1621621'   \/><label for='answer-id-1621621' id='answer-label-1621621' class=' answer'><span>Upgrade to a larger instance size.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418565[]' id='answer-id-1621622' class='answer   answerof-418565 ' value='1621622'   \/><label for='answer-id-1621622' id='answer-label-1621622' class=' answer'><span>Implement caching to reduce the database query load.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-16' style=';'><div id='questionWrap-16'  class='   watupro-question-id-418566'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>16. <\/span>A data engineer has a one-time task to read data from objects that are in Apache Parquet format in an Amazon S3 bucket. The data engineer needs to query only one column of the data. 
<br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_16' value='418566' \/><input type='hidden' id='answerType418566' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418566[]' id='answer-id-1621623' class='answer   answerof-418566 ' value='1621623'   \/><label for='answer-id-1621623' id='answer-label-1621623' class=' answer'><span>Configure an AWS Lambda function to load data from the S3 bucket into a pandas dataframe. Write a SQL SELECT statement on the dataframe to query the required column.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418566[]' id='answer-id-1621624' class='answer   answerof-418566 ' value='1621624'   \/><label for='answer-id-1621624' id='answer-label-1621624' class=' answer'><span>Use S3 Select to write a SQL SELECT statement to retrieve the required column from the S3 objects.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418566[]' id='answer-id-1621625' class='answer   answerof-418566 ' value='1621625'   \/><label for='answer-id-1621625' id='answer-label-1621625' class=' answer'><span>Prepare an AWS Glue DataBrew project to consume the S3 objects and to query the required column.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418566[]' id='answer-id-1621626' class='answer   answerof-418566 ' value='1621626'   \/><label for='answer-id-1621626' id='answer-label-1621626' class=' answer'><span>Run an AWS Glue crawler on the S3 objects. 
Use a SQL SELECT statement in Amazon Athena to query the required column.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-17' style=';'><div id='questionWrap-17'  class='   watupro-question-id-418567'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>17. <\/span>A company created an extract, transform, and load (ETL) data pipeline in AWS Glue. A data engineer must crawl a table that is in Microsoft SQL Server. The data engineer needs to extract, transform, and load the output of the crawl to an Amazon S3 bucket. The data engineer also must orchestrate the data pipeline. <br \/>\r<br>Which AWS service or feature will meet these requirements MOST cost-effectively?<\/div><input type='hidden' name='question_id[]' id='qID_17' value='418567' \/><input type='hidden' id='answerType418567' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418567[]' id='answer-id-1621627' class='answer   answerof-418567 ' value='1621627'   \/><label for='answer-id-1621627' id='answer-label-1621627' class=' answer'><span>AWS Step Functions<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418567[]' id='answer-id-1621628' class='answer   answerof-418567 ' value='1621628'   \/><label for='answer-id-1621628' id='answer-label-1621628' class=' answer'><span>AWS Glue workflows<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418567[]' id='answer-id-1621629' class='answer   answerof-418567 ' value='1621629'   \/><label for='answer-id-1621629' id='answer-label-1621629' class=' answer'><span>AWS Glue Studio<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418567[]' id='answer-id-1621630' 
class='answer   answerof-418567 ' value='1621630'   \/><label for='answer-id-1621630' id='answer-label-1621630' class=' answer'><span>Amazon Managed Workflows for Apache Airflow (Amazon MWAA)<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-18' style=';'><div id='questionWrap-18'  class='   watupro-question-id-418568'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>18. <\/span>A company has used an Amazon Redshift table that is named Orders for 6 months. The company performs weekly updates and deletes on the table. The table has an interleaved sort key on a column that contains AWS Regions.<br \/>\r\n<br \/>\r\nThe company wants to reclaim disk space so that the company will not run out of storage space. The company also wants to analyze the sort key column.<br \/>\r\n<br \/>\r\nWhich Amazon Redshift command will meet these requirements?<\/div><input type='hidden' name='question_id[]' id='qID_18' value='418568' \/><input type='hidden' id='answerType418568' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418568[]' id='answer-id-1621631' class='answer   answerof-418568 ' value='1621631'   \/><label for='answer-id-1621631' id='answer-label-1621631' class=' answer'><span>VACUUM FULL Orders<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418568[]' id='answer-id-1660512' class='answer   answerof-418568 ' value='1660512'   \/><label for='answer-id-1660512' id='answer-label-1660512' class=' answer'><span>VACUUM DELETE ONLY Orders<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418568[]' id='answer-id-1660513' class='answer   answerof-418568 ' value='1660513'   \/><label for='answer-id-1660513' 
id='answer-label-1660513' class=' answer'><span>VACUUM REINDEX Orders<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418568[]' id='answer-id-1660514' class='answer   answerof-418568 ' value='1660514'   \/><label for='answer-id-1660514' id='answer-label-1660514' class=' answer'><span>VACUUM SORT ONLY Orders<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-19' style=';'><div id='questionWrap-19'  class='   watupro-question-id-418569'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>19. <\/span>A company uses Amazon RDS to store transactional data. The company runs an RDS DB instance in a private subnet. A developer wrote an AWS Lambda function with default settings to insert, update, or delete data in the DB instance.<br \/>\r\n<br \/>\r\nThe developer needs to give the Lambda function the ability to connect to the DB instance privately without using the public internet.<br \/>\r\n<br \/>\r\nWhich combination of steps will meet this requirement with the LEAST operational overhead? 
(Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_19' value='418569' \/><input type='hidden' id='answerType418569' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418569[]' id='answer-id-1621632' class='answer   answerof-418569 ' value='1621632'   \/><label for='answer-id-1621632' id='answer-label-1621632' class=' answer'><span>Turn on the public access setting for the DB instance.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418569[]' id='answer-id-1660515' class='answer   answerof-418569 ' value='1660515'   \/><label for='answer-id-1660515' id='answer-label-1660515' class=' answer'><span>Update the security group of the DB instance to allow only Lambda function invocations on the database port.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418569[]' id='answer-id-1660516' class='answer   answerof-418569 ' value='1660516'   \/><label for='answer-id-1660516' id='answer-label-1660516' class=' answer'><span>Configure the Lambda function to run in the same subnet that the DB instance uses.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418569[]' id='answer-id-1660517' class='answer   answerof-418569 ' value='1660517'   \/><label for='answer-id-1660517' id='answer-label-1660517' class=' answer'><span>Attach the same security group to the Lambda function and the DB instance. 
Include a self-referencing rule that allows access through the database port.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418569[]' id='answer-id-1660518' class='answer   answerof-418569 ' value='1660518'   \/><label for='answer-id-1660518' id='answer-label-1660518' class=' answer'><span>Update the network ACL of the private subnet to include a self-referencing rule that allows access through the database port.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-20' style=';'><div id='questionWrap-20'  class='   watupro-question-id-418570'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>20. <\/span>A company has a frontend ReactJS website that uses Amazon API Gateway to invoke REST APIs. The APIs perform the functionality of the website. A data engineer needs to write a Python script that can be occasionally invoked through API Gateway. The code must return results to API Gateway. 
<br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_20' value='418570' \/><input type='hidden' id='answerType418570' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418570[]' id='answer-id-1621633' class='answer   answerof-418570 ' value='1621633'   \/><label for='answer-id-1621633' id='answer-label-1621633' class=' answer'><span>Deploy a custom Python script on an Amazon Elastic Container Service (Amazon ECS) cluster.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418570[]' id='answer-id-1621634' class='answer   answerof-418570 ' value='1621634'   \/><label for='answer-id-1621634' id='answer-label-1621634' class=' answer'><span>Create an AWS Lambda Python function with provisioned concurrency.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418570[]' id='answer-id-1621635' class='answer   answerof-418570 ' value='1621635'   \/><label for='answer-id-1621635' id='answer-label-1621635' class=' answer'><span>Deploy a custom Python script that can integrate with API Gateway on Amazon Elastic Kubernetes \r\nService (Amazon EKS).<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418570[]' id='answer-id-1621636' class='answer   answerof-418570 ' value='1621636'   \/><label for='answer-id-1621636' id='answer-label-1621636' class=' answer'><span>Create an AWS Lambda function. 
Ensure that the function is warm by scheduling an Amazon EventBridge rule to invoke the Lambda function every 5 minutes by using mock events.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-21' style=';'><div id='questionWrap-21'  class='   watupro-question-id-418571'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>21. <\/span>A data engineer runs Amazon Athena queries on data that is in an Amazon S3 bucket. The Athena queries use AWS Glue Data Catalog as a metadata table. <br \/>\r<br>The data engineer notices that the Athena query plans are experiencing a performance bottleneck. The data engineer determines that the cause of the performance bottleneck is the large number of partitions that are in the S3 bucket. The data engineer must resolve the performance bottleneck and reduce Athena query planning time. <br \/>\r<br>Which solutions will meet these requirements? (Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_21' value='418571' \/><input type='hidden' id='answerType418571' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418571[]' id='answer-id-1621637' class='answer   answerof-418571 ' value='1621637'   \/><label for='answer-id-1621637' id='answer-label-1621637' class=' answer'><span>Create an AWS Glue partition index. 
Enable partition filtering.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418571[]' id='answer-id-1621638' class='answer   answerof-418571 ' value='1621638'   \/><label for='answer-id-1621638' id='answer-label-1621638' class=' answer'><span>Bucket the data based on a column that the data have in common in a WHERE clause of the user query.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418571[]' id='answer-id-1621639' class='answer   answerof-418571 ' value='1621639'   \/><label for='answer-id-1621639' id='answer-label-1621639' class=' answer'><span>Use Athena partition projection based on the S3 bucket prefix.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418571[]' id='answer-id-1621640' class='answer   answerof-418571 ' value='1621640'   \/><label for='answer-id-1621640' id='answer-label-1621640' class=' answer'><span>Transform the data that is in the S3 bucket to Apache Parquet format.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418571[]' id='answer-id-1621641' class='answer   answerof-418571 ' value='1621641'   \/><label for='answer-id-1621641' id='answer-label-1621641' class=' answer'><span>Use the Amazon EMR S3DistCP utility to combine smaller objects in the S3 bucket into larger objects.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-22' style=';'><div id='questionWrap-22'  class='   watupro-question-id-418572'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>22. <\/span>A company needs to set up a data catalog and metadata management for data sources that run in the AWS Cloud. The company will use the data catalog to maintain the metadata of all the objects that are in a set of data stores. 
The data stores include structured sources such as Amazon RDS and Amazon Redshift. The data stores also include semi-structured sources such as JSON files and .xml files that are stored in Amazon S3.<br \/>\r\n<br \/>\r\nThe company needs a solution that will update the data catalog on a regular basis. The solution also must detect changes to the source metadata.<br \/>\r\n<br \/>\r\nWhich solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_22' value='418572' \/><input type='hidden' id='answerType418572' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418572[]' id='answer-id-1621642' class='answer   answerof-418572 ' value='1621642'   \/><label for='answer-id-1621642' id='answer-label-1621642' class=' answer'><span>Use Amazon Aurora as the data catalog. Create AWS Lambda functions that will connect to the data catalog. Configure the Lambda functions to gather the metadata information from multiple sources and to update the Aurora data catalog.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418572[]' id='answer-id-1660519' class='answer   answerof-418572 ' value='1660519'   \/><label for='answer-id-1660519' id='answer-label-1660519' class=' answer'><span>Use the AWS Glue Data Catalog as the central metadata repository. Use AWS Glue crawlers to connect to multiple data stores and to update the Data Catalog with metadata changes. 
Schedule the crawlers to run periodically to update the metadata catalog.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418572[]' id='answer-id-1660520' class='answer   answerof-418572 ' value='1660520'   \/><label for='answer-id-1660520' id='answer-label-1660520' class=' answer'><span>Use Amazon DynamoDB as the data catalog. Create AWS Lambda functions that will connect to the data catalog. Configure the Lambda functions to gather the metadata information from multiple sources and to update the DynamoDB data catalog. Schedule the Lambda functions to run periodically.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418572[]' id='answer-id-1660521' class='answer   answerof-418572 ' value='1660521'   \/><label for='answer-id-1660521' id='answer-label-1660521' class=' answer'><span>Use the AWS Glue Data Catalog as the central metadata repository. Extract the schema for Amazon RDS and Amazon Redshift sources, and build the Data Catalog. Use AWS Glue crawlers for data that is in Amazon S3 to infer the schema and to automatically update the Data Catalog.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-23' style=';'><div id='questionWrap-23'  class='   watupro-question-id-418573'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>23. <\/span>A company currently stores all of its data in Amazon S3 by using the S3 Standard storage class. <br \/>\r<br>A data engineer examined data access patterns to identify trends. During the first 6 months, most data files are accessed several times each day. Between 6 months and 2 years, most data files are accessed once or twice each month. After 2 years, data files are accessed only once or twice each year. 
<br \/>\r<br>The data engineer needs to use an S3 Lifecycle policy to develop new data storage rules. The new storage solution must continue to provide high availability. <br \/>\r<br>Which solution will meet these requirements in the MOST cost-effective way?<\/div><input type='hidden' name='question_id[]' id='qID_23' value='418573' \/><input type='hidden' id='answerType418573' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418573[]' id='answer-id-1621643' class='answer   answerof-418573 ' value='1621643'   \/><label for='answer-id-1621643' id='answer-label-1621643' class=' answer'><span>Transition objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 6 months. Transfer objects to S3 Glacier Flexible Retrieval after 2 years.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418573[]' id='answer-id-1621644' class='answer   answerof-418573 ' value='1621644'   \/><label for='answer-id-1621644' id='answer-label-1621644' class=' answer'><span>Transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 6 months. Transfer objects to S3 Glacier Flexible Retrieval after 2 years.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418573[]' id='answer-id-1621645' class='answer   answerof-418573 ' value='1621645'   \/><label for='answer-id-1621645' id='answer-label-1621645' class=' answer'><span>Transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 6 months. 
Transfer objects to S3 Glacier Deep Archive after 2 years.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418573[]' id='answer-id-1621646' class='answer   answerof-418573 ' value='1621646'   \/><label for='answer-id-1621646' id='answer-label-1621646' class=' answer'><span>Transition objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 6 months. Transfer \r\nobjects to S3 Glacier Deep Archive after 2 years.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-24' style=';'><div id='questionWrap-24'  class='   watupro-question-id-418574'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>24. <\/span>A company uses Amazon Athena for one-time queries against data that is in Amazon S3. The company has several use cases. The company must implement permission controls to separate query processes and access to query history among users, teams, and applications that are in the same AWS account. <br \/>\r<br>Which solution will meet these requirements?<\/div><input type='hidden' name='question_id[]' id='qID_24' value='418574' \/><input type='hidden' id='answerType418574' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418574[]' id='answer-id-1621647' class='answer   answerof-418574 ' value='1621647'   \/><label for='answer-id-1621647' id='answer-label-1621647' class=' answer'><span>Create an S3 bucket for each use case. Create an S3 bucket policy that grants permissions to appropriate individual IAM users. 
Apply the S3 bucket policy to the S3 bucket.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418574[]' id='answer-id-1621648' class='answer   answerof-418574 ' value='1621648'   \/><label for='answer-id-1621648' id='answer-label-1621648' class=' answer'><span>Create an Athena workgroup for each use case. Apply tags to the workgroup. Create an IAM policy that uses the tags to apply appropriate permissions to the workgroup.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418574[]' id='answer-id-1621649' class='answer   answerof-418574 ' value='1621649'   \/><label for='answer-id-1621649' id='answer-label-1621649' class=' answer'><span>Create an IAM role for each use case. Assign appropriate permissions to the role for each use case. Associate the role with Athena.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418574[]' id='answer-id-1621650' class='answer   answerof-418574 ' value='1621650'   \/><label for='answer-id-1621650' id='answer-label-1621650' class=' answer'><span>Create an AWS Glue Data Catalog resource policy that grants permissions to appropriate individual IAM users for each use case. Apply the resource policy to the specific tables that Athena uses.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-25' style=';'><div id='questionWrap-25'  class='   watupro-question-id-418575'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>25. <\/span>A company's data engineer needs to optimize the performance of table SQL queries. The company stores data in an Amazon Redshift cluster. The data engineer cannot increase the size of the cluster because of budget constraints. 
<br \/>\r<br>The company stores the data in multiple tables and loads the data by using the EVEN distribution style. Some tables are hundreds of gigabytes in size. Other tables are less than 10 MB in size. <br \/>\r<br>Which solution will meet these requirements?<\/div><input type='hidden' name='question_id[]' id='qID_25' value='418575' \/><input type='hidden' id='answerType418575' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418575[]' id='answer-id-1621651' class='answer   answerof-418575 ' value='1621651'   \/><label for='answer-id-1621651' id='answer-label-1621651' class=' answer'><span>Keep using the EVEN distribution style for all tables. Specify primary and foreign keys for all tables.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418575[]' id='answer-id-1621652' class='answer   answerof-418575 ' value='1621652'   \/><label for='answer-id-1621652' id='answer-label-1621652' class=' answer'><span>Use the ALL distribution style for large tables. Specify primary and foreign keys for all tables.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418575[]' id='answer-id-1621653' class='answer   answerof-418575 ' value='1621653'   \/><label for='answer-id-1621653' id='answer-label-1621653' class=' answer'><span>Use the ALL distribution style for rarely updated small tables. 
Specify primary and foreign keys for all tables.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418575[]' id='answer-id-1621654' class='answer   answerof-418575 ' value='1621654'   \/><label for='answer-id-1621654' id='answer-label-1621654' class=' answer'><span>Specify a combination of distribution, sort, and partition keys for all tables.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-26' style=';'><div id='questionWrap-26'  class='   watupro-question-id-418576'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>26. <\/span>A company uses AWS Step Functions to orchestrate a data pipeline. The pipeline consists of Amazon EMR jobs that ingest data from data sources and store the data in an Amazon S3 bucket. The pipeline also includes EMR jobs that load the data to Amazon Redshift. <br \/>\r<br>The company's cloud infrastructure team manually built a Step Functions state machine. The cloud infrastructure team launched an EMR cluster into a VPC to support the EMR jobs. However, the deployed Step Functions state machine is not able to run the EMR jobs. <br \/>\r<br>Which combination of steps should the company take to identify the reason the Step Functions state machine is not able to run the EMR jobs? (Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_26' value='418576' \/><input type='hidden' id='answerType418576' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418576[]' id='answer-id-1621655' class='answer   answerof-418576 ' value='1621655'   \/><label for='answer-id-1621655' id='answer-label-1621655' class=' answer'><span>Use AWS CloudFormation to automate the Step Functions state machine deployment. 
Create a step to pause the state machine during the EMR jobs that fail. Configure the step to wait for a human user to send approval through an email message. Include details of the EMR task in the email message for further analysis.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418576[]' id='answer-id-1621656' class='answer   answerof-418576 ' value='1621656'   \/><label for='answer-id-1621656' id='answer-label-1621656' class=' answer'><span>Verify that the Step Functions state machine code has all IAM permissions that are necessary to create and run the EMR jobs. Verify that the Step Functions state machine code also includes IAM permissions to access the Amazon S3 buckets that the EMR jobs use. Use Access Analyzer for S3 to check the S3 access properties.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418576[]' id='answer-id-1621657' class='answer   answerof-418576 ' value='1621657'   \/><label for='answer-id-1621657' id='answer-label-1621657' class=' answer'><span>Check for entries in Amazon CloudWatch for the newly created EMR cluster. 
Change the AWS Step Functions state machine code to use Amazon EMR on EKS.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418576[]' id='answer-id-1621658' class='answer   answerof-418576 ' value='1621658'   \/><label for='answer-id-1621658' id='answer-label-1621658' class=' answer'><span>Change the IAM access policies and the security group configuration for the Step Functions state machine code to reflect inclusion of Amazon Elastic Kubernetes Service (Amazon EKS).<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418576[]' id='answer-id-1621659' class='answer   answerof-418576 ' value='1621659'   \/><label for='answer-id-1621659' id='answer-label-1621659' class=' answer'><span>Query the flow logs for the VPC.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418576[]' id='answer-id-1621660' class='answer   answerof-418576 ' value='1621660'   \/><label for='answer-id-1621660' id='answer-label-1621660' class=' answer'><span>Determine whether the traffic that originates from the EMR cluster can successfully reach the data providers. Determine whether any security group that might be attached to the Amazon EMR cluster allows connections to the data source servers on the informed ports.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418576[]' id='answer-id-1621661' class='answer   answerof-418576 ' value='1621661'   \/><label for='answer-id-1621661' id='answer-label-1621661' class=' answer'><span>Check the retry scenarios that the company configured for the EMR jobs. Increase the number of seconds in the interval between each EMR task. Validate that each fallback state has the appropriate catch for each decision state. 
Configure an Amazon Simple Notification Service (Amazon SNS) topic to store the error messages.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-27' style=';'><div id='questionWrap-27'  class='   watupro-question-id-418577'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>27. <\/span>A retail company has a customer data hub in an Amazon S3 bucket. Employees from many countries use the data hub to support company-wide analytics. A governance team must ensure that the company's data analysts can access data only for customers who are within the same country as the analysts. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational effort?<\/div><input type='hidden' name='question_id[]' id='qID_27' value='418577' \/><input type='hidden' id='answerType418577' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418577[]' id='answer-id-1621662' class='answer   answerof-418577 ' value='1621662'   \/><label for='answer-id-1621662' id='answer-label-1621662' class=' answer'><span>Create a separate table for each country's customer data. Provide access to each analyst based on the country that the analyst serves.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418577[]' id='answer-id-1621663' class='answer   answerof-418577 ' value='1621663'   \/><label for='answer-id-1621663' id='answer-label-1621663' class=' answer'><span>Register the S3 bucket as a data lake location in AWS Lake Formation. 
Use the Lake Formation row-level security features to enforce the company's access policies.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418577[]' id='answer-id-1621664' class='answer   answerof-418577 ' value='1621664'   \/><label for='answer-id-1621664' id='answer-label-1621664' class=' answer'><span>Move the data to AWS Regions that are close to the countries where the customers are. Provide access to each analyst based on the country that the analyst serves.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418577[]' id='answer-id-1621665' class='answer   answerof-418577 ' value='1621665'   \/><label for='answer-id-1621665' id='answer-label-1621665' class=' answer'><span>Load the data into Amazon Redshift. Create a view for each country. Create separate IAM roles for each country to provide access to data from each country. Assign the appropriate roles to the analysts.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-28' style=';'><div id='questionWrap-28'  class='   watupro-question-id-418578'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>28. <\/span>A data engineer must use AWS services to ingest a dataset into an Amazon S3 data lake. The data engineer profiles the dataset and discovers that the dataset contains personally identifiable information (PII). The data engineer must implement a solution to profile the dataset and obfuscate the PII. 
<br \/>\r<br>Which solution will meet this requirement with the LEAST operational effort?<\/div><input type='hidden' name='question_id[]' id='qID_28' value='418578' \/><input type='hidden' id='answerType418578' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418578[]' id='answer-id-1621666' class='answer   answerof-418578 ' value='1621666'   \/><label for='answer-id-1621666' id='answer-label-1621666' class=' answer'><span>Use an Amazon Kinesis Data Firehose delivery stream to process the dataset. Create an AWS Lambda transform function to identify the PII.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418578[]' id='answer-id-1621667' class='answer   answerof-418578 ' value='1621667'   \/><label for='answer-id-1621667' id='answer-label-1621667' class=' answer'><span>Use an AWS SDK to obfuscate the PII.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418578[]' id='answer-id-1621668' class='answer   answerof-418578 ' value='1621668'   \/><label for='answer-id-1621668' id='answer-label-1621668' class=' answer'><span>Set the S3 data lake as the target for the delivery stream.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418578[]' id='answer-id-1621669' class='answer   answerof-418578 ' value='1621669'   \/><label for='answer-id-1621669' id='answer-label-1621669' class=' answer'><span>Use the Detect PII transform in AWS Glue Studio to identify the PII.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418578[]' id='answer-id-1621670' class='answer   answerof-418578 ' value='1621670'   \/><label for='answer-id-1621670' id='answer-label-1621670' class=' answer'><span>Obfuscate the 
PII.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418578[]' id='answer-id-1621671' class='answer   answerof-418578 ' value='1621671'   \/><label for='answer-id-1621671' id='answer-label-1621671' class=' answer'><span>Use an AWS Step Functions state machine to orchestrate a data pipeline to ingest the data into the S3 data lake.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418578[]' id='answer-id-1621672' class='answer   answerof-418578 ' value='1621672'   \/><label for='answer-id-1621672' id='answer-label-1621672' class=' answer'><span>Use the Detect PII transform in AWS Glue Studio to identify the PII.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418578[]' id='answer-id-1621673' class='answer   answerof-418578 ' value='1621673'   \/><label for='answer-id-1621673' id='answer-label-1621673' class=' answer'><span>Create a rule in AWS Glue Data Quality to obfuscate the PII.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418578[]' id='answer-id-1621674' class='answer   answerof-418578 ' value='1621674'   \/><label for='answer-id-1621674' id='answer-label-1621674' class=' answer'><span>Use an AWS Step Functions state machine to orchestrate a data pipeline to ingest the data into the S3 data lake.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418578[]' id='answer-id-1621675' class='answer   answerof-418578 ' value='1621675'   \/><label for='answer-id-1621675' id='answer-label-1621675' class=' answer'><span>Ingest the dataset into Amazon DynamoDB.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418578[]' id='answer-id-1621676' class='answer   answerof-418578 ' value='1621676'   \/><label for='answer-id-1621676' 
id='answer-label-1621676' class=' answer'><span>Create an AWS Lambda function to identify and obfuscate the PII in the DynamoDB table and to transform the data. Use the same Lambda function to ingest the data into the S3 data lake.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-29' style=';'><div id='questionWrap-29'  class='   watupro-question-id-418579'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>29. <\/span>A media company wants to improve a system that recommends media content to customers based on user behavior and preferences. To improve the recommendation system, the company needs to incorporate insights from third-party datasets into the company's existing analytics platform. <br \/>\r<br>The company wants to minimize the effort and time required to incorporate third-party datasets. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_29' value='418579' \/><input type='hidden' id='answerType418579' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418579[]' id='answer-id-1621677' class='answer   answerof-418579 ' value='1621677'   \/><label for='answer-id-1621677' id='answer-label-1621677' class=' answer'><span>Use API calls to access and integrate third-party datasets from AWS Data Exchange.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418579[]' id='answer-id-1621678' class='answer   answerof-418579 ' value='1621678'   \/><label for='answer-id-1621678' id='answer-label-1621678' class=' answer'><span>Use API calls to access and integrate third-party datasets from AWS<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input 
type='radio' name='answer-418579[]' id='answer-id-1621679' class='answer   answerof-418579 ' value='1621679'   \/><label for='answer-id-1621679' id='answer-label-1621679' class=' answer'><span>Use Amazon Kinesis Data Streams to access and integrate third-party datasets from AWS CodeCommit repositories.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418579[]' id='answer-id-1621680' class='answer   answerof-418579 ' value='1621680'   \/><label for='answer-id-1621680' id='answer-label-1621680' class=' answer'><span>Use Amazon Kinesis Data Streams to access and integrate third-party datasets from Amazon Elastic Container Registry (Amazon ECR).<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-30' style=';'><div id='questionWrap-30'  class='   watupro-question-id-418580'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>30. <\/span>A data engineer needs to maintain a central metadata repository that users access through Amazon EMR and Amazon Athena queries. The repository needs to provide the schema and properties of many tables. Some of the metadata is stored in Apache Hive. The data engineer needs to import the metadata from Hive into the central metadata repository. 
<br \/>\r<br>Which solution will meet these requirements with the LEAST development effort?<\/div><input type='hidden' name='question_id[]' id='qID_30' value='418580' \/><input type='hidden' id='answerType418580' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418580[]' id='answer-id-1621681' class='answer   answerof-418580 ' value='1621681'   \/><label for='answer-id-1621681' id='answer-label-1621681' class=' answer'><span>Use Amazon EMR and Apache Ranger.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418580[]' id='answer-id-1621682' class='answer   answerof-418580 ' value='1621682'   \/><label for='answer-id-1621682' id='answer-label-1621682' class=' answer'><span>Use a Hive metastore on an EMR cluster.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418580[]' id='answer-id-1621683' class='answer   answerof-418580 ' value='1621683'   \/><label for='answer-id-1621683' id='answer-label-1621683' class=' answer'><span>Use the AWS Glue Data Catalog.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418580[]' id='answer-id-1621684' class='answer   answerof-418580 ' value='1621684'   \/><label for='answer-id-1621684' id='answer-label-1621684' class=' answer'><span>Use a metastore on an Amazon RDS for MySQL DB instance.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div style='display:none' id='question-31'>\n\t<div class='question-content'>\n\t\t<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.dumpsbase.com\/freedumps\/wp-content\/plugins\/watupro\/img\/loading.gif\" width=\"16\" height=\"16\" alt=\"Loading...\" title=\"Loading...\" \/>&nbsp;Loading...\t<\/div>\n<\/div>\n\n<br \/>\n\t\n\t\t\t<div 
class=\"watupro_buttons flex \" id=\"watuPROButtons10579\" >\n\t\t  <div id=\"prev-question\" style=\"display:none;\"><input type=\"button\" value=\"&lt; Previous\" onclick=\"WatuPRO.nextQuestion(event, 'previous');\"\/><\/div>\t\t  \t\t  \t\t   \n\t\t   \t  \t\t<div><input type=\"button\" name=\"action\" class=\"watupro-submit-button\" onclick=\"WatuPRO.submitResult(event)\" id=\"action-button\" value=\"View Results\"  \/>\n\t\t<\/div>\n\t\t<\/div>\n\t\t\n\t<input type=\"hidden\" name=\"quiz_id\" value=\"10579\" id=\"watuPROExamID\"\/>\n\t<input type=\"hidden\" name=\"start_time\" id=\"startTime\" value=\"2026-05-11 15:01:46\" \/>\n\t<input type=\"hidden\" name=\"start_timestamp\" id=\"startTimeStamp\" value=\"1778511706\" \/>\n\t<input type=\"hidden\" name=\"question_ids\" value=\"\" \/>\n\t<input type=\"hidden\" name=\"watupro_questions\" value=\"418551:1621554,1621555,1621556,1621557 | 418552:1621558,1621559,1621560,1621561 | 418553:1621562,1621563,1621564,1621565 | 418554:1621566,1621567,1621568,1621569 | 418555:1621570,1621571,1621572,1621573 | 418556:1621574,1621575,1621576,1621577 | 418557:1621578,1621579,1621580,1621581 | 418558:1621582,1621583,1621584,1621585 | 418559:1621586,1621587,1621588,1621589 | 418560:1621590,1621591,1621592,1621593,1621594 | 418561:1621595,1621596,1621597,1621598 | 418562:1621599,1621600,1621601,1621602 | 418563:1621603,1621604,1621605,1621606,1621607,1621608,1621609,1621610,1621611,1621612 | 418564:1621613,1621614,1621615,1621616,1621617 | 418565:1621618,1621619,1621620,1621621,1621622 | 418566:1621623,1621624,1621625,1621626 | 418567:1621627,1621628,1621629,1621630 | 418568:1621631,1660512,1660513,1660514 | 418569:1621632,1660515,1660516,1660517,1660518 | 418570:1621633,1621634,1621635,1621636 | 418571:1621637,1621638,1621639,1621640,1621641 | 418572:1621642,1660519,1660520,1660521 | 418573:1621643,1621644,1621645,1621646 | 418574:1621647,1621648,1621649,1621650 | 418575:1621651,1621652,1621653,1621654 | 
418576:1621655,1621656,1621657,1621658,1621659,1621660,1621661 | 418577:1621662,1621663,1621664,1621665 | 418578:1621666,1621667,1621668,1621669,1621670,1621671,1621672,1621673,1621674,1621675,1621676 | 418579:1621677,1621678,1621679,1621680 | 418580:1621681,1621682,1621683,1621684\" \/>\n\t<input type=\"hidden\" name=\"no_ajax\" value=\"0\">\t\t\t<\/form>\n\t<p>&nbsp;<\/p>\n<\/div>\n\n<script type=\"text\/javascript\">\n\/\/jQuery(document).ready(function(){\ndocument.addEventListener(\"DOMContentLoaded\", function(event) { \t\nvar question_ids = \"418551,418552,418553,418554,418555,418556,418557,418558,418559,418560,418561,418562,418563,418564,418565,418566,418567,418568,418569,418570,418571,418572,418573,418574,418575,418576,418577,418578,418579,418580\";\nWatuPROSettings[10579] = {};\nWatuPRO.qArr = question_ids.split(',');\nWatuPRO.exam_id = 10579;\t    \nWatuPRO.post_id = 110502;\nWatuPRO.store_progress = 0;\nWatuPRO.curCatPage = 1;\nWatuPRO.requiredIDs=\"0\".split(\",\");\nWatuPRO.hAppID = \"0.46685200 1778511706\";\nvar url = \"https:\/\/www.dumpsbase.com\/freedumps\/wp-content\/plugins\/watupro\/show_exam.php\";\nWatuPRO.examMode = 1;\nWatuPRO.siteURL=\"https:\/\/www.dumpsbase.com\/freedumps\/wp-admin\/admin-ajax.php\";\nWatuPRO.emailIsNotRequired = 0;\nWatuPROIntel.init(10579);\nWatuPRO.inCategoryPages=1;});    \t \n<\/script>\n","protected":false},"excerpt":{"rendered":"<p>Attempting the Amazon DEA-C01 dumps (V10.02) from DumpsBase is a great way to prepare for your AWS Certified Data Engineer &#8211; Associate certification exam. With the DEA-C01 dumps (V10.02), you will receive 100% validated practice questions and answers, covering every exam subject in depth, including clear explanations and insights that resolve any uncertainties. 
You can [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[175,18249],"tags":[19891,18538],"class_list":["post-110502","post","type-post","status-publish","format-standard","hentry","category-amazon","category-data-engineer-associate","tag-amazon-dea-c01-updated-dumps","tag-aws-certified-data-engineer-associate-dea-c01"],"_links":{"self":[{"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/posts\/110502","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/comments?post=110502"}],"version-history":[{"count":1,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/posts\/110502\/revisions"}],"predecessor-version":[{"id":110503,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/posts\/110502\/revisions\/110503"}],"wp:attachment":[{"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/media?parent=110502"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/categories?post=110502"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/tags?post=110502"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}