{"id":112981,"date":"2025-10-31T06:12:45","date_gmt":"2025-10-31T06:12:45","guid":{"rendered":"https:\/\/www.dumpsbase.com\/freedumps\/?p=112981"},"modified":"2025-12-11T07:17:50","modified_gmt":"2025-12-11T07:17:50","slug":"pass-the-aws-certified-data-engineer-associate-dea-c01-exam-by-using-the-dea-c01-dumps-v11-02-read-dea-c01-free-dumps-part-1-q1-q40-first","status":"publish","type":"post","link":"https:\/\/www.dumpsbase.com\/freedumps\/pass-the-aws-certified-data-engineer-associate-dea-c01-exam-by-using-the-dea-c01-dumps-v11-02-read-dea-c01-free-dumps-part-1-q1-q40-first.html","title":{"rendered":"Pass the AWS Certified Data Engineer &#8211; Associate (DEA-C01) Exam By Using the DEA-C01 Dumps (V11.02): Read DEA-C01 Free Dumps (Part 1, Q1-Q40) First"},"content":{"rendered":"<p>Passing the AWS Certified Data Engineer &#8211; Associate (DEA-C01) certification exam requires a proper study guide, so we highly recommend that you visit DumpsBase and download the updated AWS DEA-C01 dumps (V11.02). This is the most current version, offering 190 practice exam questions and answers that combine reliable materials with real exam content. The Amazon DEA-C01 dumps follow a systematic training method that keeps your learning planned and efficient. By using these refreshed exam questions, you not only save precious time but also build confidence in achieving results beyond your expectations. 
In short, the updated DEA-C01 dumps (V11.02) are an effective way to prepare for and pass the AWS Certified Data Engineer &#8211; Associate (DEA-C01) exam.<\/p>\n<h2><span style=\"background-color: #99ccff;\"><em>Read DEA-C01 free dumps (Part 1, Q1-Q40) of V11.02<\/em><\/span> to check the quality before downloading:<\/h2>\n<div  id=\"watupro_quiz\" class=\"quiz-area single-page-quiz\">\n<p id=\"submittingExam11028\" style=\"display:none;text-align:center;\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.dumpsbase.com\/freedumps\/wp-content\/plugins\/watupro\/img\/loading.gif\" width=\"16\" height=\"16\"><\/p>\n\n<div class=\"watupro-exam-description\" id=\"description-quiz-11028\"><\/div>\n\n<form action=\"\" method=\"post\" class=\"quiz-form\" id=\"quiz-11028\"  enctype=\"multipart\/form-data\" >\n<div class='watu-question ' id='question-1' style=';'><div id='questionWrap-1'  class='   watupro-question-id-434260'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>1. <\/span>A data engineer must manage the ingestion of real-time streaming data into AWS. The data engineer wants to perform real-time analytics on the incoming streaming data by using time-based aggregations over a window of up to 30 minutes. The data engineer needs a solution that is highly fault tolerant. 
<br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_1' value='434260' \/><input type='hidden' id='answerType434260' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434260[]' id='answer-id-1680323' class='answer   answerof-434260 ' value='1680323'   \/><label for='answer-id-1680323' id='answer-label-1680323' class=' answer'><span>Use an AWS Lambda function that includes both the business and the analytics logic to perform time-based aggregations over a window of up to 30 minutes for the data in Amazon Kinesis Data Streams.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434260[]' id='answer-id-1680324' class='answer   answerof-434260 ' value='1680324'   \/><label for='answer-id-1680324' id='answer-label-1680324' class=' answer'><span>Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) to analyze the data that might occasionally contain duplicates by using multiple types of aggregations.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434260[]' id='answer-id-1680325' class='answer   answerof-434260 ' value='1680325'   \/><label for='answer-id-1680325' id='answer-label-1680325' class=' answer'><span>Use an AWS Lambda function that includes both the business and the analytics logic to perform aggregations for a tumbling window of up to 30 minutes, based on the event timestamp.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434260[]' id='answer-id-1680326' class='answer   answerof-434260 ' value='1680326'   \/><label for='answer-id-1680326' id='answer-label-1680326' class=' answer'><span>Use Amazon Managed Service for 
Apache Flink (previously known as Amazon Kinesis Data Analytics) to analyze the data by using multiple types of aggregations to perform time-based analytics over a window of up to 30 minutes.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-2' style=';'><div id='questionWrap-2'  class='   watupro-question-id-434261'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>2. <\/span>A company stores details about transactions in an Amazon S3 bucket. The company wants to log all writes to the S3 bucket into another S3 bucket that is in the same AWS Region. <br \/>\r<br>Which solution will meet this requirement with the LEAST operational effort?<\/div><input type='hidden' name='question_id[]' id='qID_2' value='434261' \/><input type='hidden' id='answerType434261' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434261[]' id='answer-id-1680327' class='answer   answerof-434261 ' value='1680327'   \/><label for='answer-id-1680327' id='answer-label-1680327' class=' answer'><span>Configure an S3 Event Notifications rule for all activities on the transactions S3 bucket to invoke an AWS Lambda function. Program the Lambda function to write the event to Amazon Kinesis Data Firehose. 
Configure Kinesis Data Firehose to write the event to the logs S3 bucket.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434261[]' id='answer-id-1680328' class='answer   answerof-434261 ' value='1680328'   \/><label for='answer-id-1680328' id='answer-label-1680328' class=' answer'><span>Create a trail of management events in AWS CloudTrail<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434261[]' id='answer-id-1680329' class='answer   answerof-434261 ' value='1680329'   \/><label for='answer-id-1680329' id='answer-label-1680329' class=' answer'><span>Configure the trail to receive data from the transactions S3 bucket. Specify an empty prefix and write-only events. Specify the logs S3 bucket as the destination bucket.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434261[]' id='answer-id-1680330' class='answer   answerof-434261 ' value='1680330'   \/><label for='answer-id-1680330' id='answer-label-1680330' class=' answer'><span>Configure an S3 Event Notifications rule for all activities on the transactions S3 bucket to invoke an AWS Lambda function. Program the Lambda function to write the events to the logs S3 bucket.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434261[]' id='answer-id-1680331' class='answer   answerof-434261 ' value='1680331'   \/><label for='answer-id-1680331' id='answer-label-1680331' class=' answer'><span>Create a trail of data events in AWS CloudTrail<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434261[]' id='answer-id-1680332' class='answer   answerof-434261 ' value='1680332'   \/><label for='answer-id-1680332' id='answer-label-1680332' class=' answer'><span>Configure the trail to receive data from the transactions S3 bucket. 
Specify an empty prefix and write-only events. Specify the logs S3 bucket as the destination bucket.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-3' style=';'><div id='questionWrap-3'  class='   watupro-question-id-434262'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>3. <\/span>A financial services company stores financial data in Amazon Redshift. A data engineer wants to run real-time queries on the financial data to support a web-based trading application. The data engineer wants to run the queries from within the trading application. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_3' value='434262' \/><input type='hidden' id='answerType434262' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434262[]' id='answer-id-1680333' class='answer   answerof-434262 ' value='1680333'   \/><label for='answer-id-1680333' id='answer-label-1680333' class=' answer'><span>Establish WebSocket connections to Amazon Redshift.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434262[]' id='answer-id-1680334' class='answer   answerof-434262 ' value='1680334'   \/><label for='answer-id-1680334' id='answer-label-1680334' class=' answer'><span>Use the Amazon Redshift Data API<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434262[]' id='answer-id-1680335' class='answer   answerof-434262 ' value='1680335'   \/><label for='answer-id-1680335' id='answer-label-1680335' class=' answer'><span>Set up Java Database Connectivity (JDBC) connections to Amazon Redshift.<\/span><\/label><\/div><div class='watupro-question-choice  ' 
dir='auto' ><input type='radio' name='answer-434262[]' id='answer-id-1680336' class='answer   answerof-434262 ' value='1680336'   \/><label for='answer-id-1680336' id='answer-label-1680336' class=' answer'><span>Store frequently accessed data in Amazon S3. Use Amazon S3 Select to run the queries.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-4' style=';'><div id='questionWrap-4'  class='   watupro-question-id-434263'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>4. <\/span>A company needs to partition the Amazon S3 storage that the company uses for a data lake. <br \/>\r<br>The partitioning will use a path of the S3 object keys in the following format: s3:\/\/bucket\/prefix\/year=2023\/month=01\/day=01. <br \/>\r<br>A data engineer must ensure that the AWS Glue Data Catalog synchronizes with the S3 storage when the company adds new partitions to the bucket. <br \/>\r<br>Which solution will meet these requirements with the LEAST latency?<\/div><input type='hidden' name='question_id[]' id='qID_4' value='434263' \/><input type='hidden' id='answerType434263' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434263[]' id='answer-id-1680337' class='answer   answerof-434263 ' value='1680337'   \/><label for='answer-id-1680337' id='answer-label-1680337' class=' answer'><span>Schedule an AWS Glue crawler to run every morning.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434263[]' id='answer-id-1680338' class='answer   answerof-434263 ' value='1680338'   \/><label for='answer-id-1680338' id='answer-label-1680338' class=' answer'><span>Manually run the AWS Glue CreatePartition API twice each day.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' 
><input type='radio' name='answer-434263[]' id='answer-id-1680339' class='answer   answerof-434263 ' value='1680339'   \/><label for='answer-id-1680339' id='answer-label-1680339' class=' answer'><span>Use code that writes data to Amazon S3 to invoke the Boto3 AWS Glue create partition API call.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434263[]' id='answer-id-1680340' class='answer   answerof-434263 ' value='1680340'   \/><label for='answer-id-1680340' id='answer-label-1680340' class=' answer'><span>Run the MSCK REPAIR TABLE command from the AWS Glue console.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-5' style=';'><div id='questionWrap-5'  class='   watupro-question-id-434264'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>5. <\/span>A company is planning to migrate on-premises Apache Hadoop clusters to Amazon EMR. The company also needs to migrate a data catalog into a persistent storage solution. <br \/>\r<br>The company currently stores the data catalog in an on-premises Apache Hive metastore on the Hadoop clusters. The company requires a serverless solution to migrate the data catalog. <br \/>\r<br>Which solution will meet these requirements MOST cost-effectively?<\/div><input type='hidden' name='question_id[]' id='qID_5' value='434264' \/><input type='hidden' id='answerType434264' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434264[]' id='answer-id-1680341' class='answer   answerof-434264 ' value='1680341'   \/><label for='answer-id-1680341' id='answer-label-1680341' class=' answer'><span>Use AWS Database Migration Service (AWS DMS) to migrate the Hive metastore into Amazon S3. 
\r\nConfigure AWS Glue Data Catalog to scan Amazon S3 to produce the data catalog.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434264[]' id='answer-id-1680342' class='answer   answerof-434264 ' value='1680342'   \/><label for='answer-id-1680342' id='answer-label-1680342' class=' answer'><span>Configure a Hive metastore in Amazon EMR<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434264[]' id='answer-id-1680343' class='answer   answerof-434264 ' value='1680343'   \/><label for='answer-id-1680343' id='answer-label-1680343' class=' answer'><span>Migrate the existing on-premises Hive metastore into Amazon EMR<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434264[]' id='answer-id-1680344' class='answer   answerof-434264 ' value='1680344'   \/><label for='answer-id-1680344' id='answer-label-1680344' class=' answer'><span>Use AWS Glue Data Catalog to store the company's data catalog as an external data catalog.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434264[]' id='answer-id-1680345' class='answer   answerof-434264 ' value='1680345'   \/><label for='answer-id-1680345' id='answer-label-1680345' class=' answer'><span>Configure an external Hive metastore in Amazon EMR<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434264[]' id='answer-id-1680346' class='answer   answerof-434264 ' value='1680346'   \/><label for='answer-id-1680346' id='answer-label-1680346' class=' answer'><span>Migrate the existing on-premises Hive metastore into Amazon EMR<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434264[]' id='answer-id-1680347' class='answer   answerof-434264 ' value='1680347'   \/><label for='answer-id-1680347' 
id='answer-label-1680347' class=' answer'><span>Use Amazon Aurora MySQL to store the company's data catalog.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434264[]' id='answer-id-1680348' class='answer   answerof-434264 ' value='1680348'   \/><label for='answer-id-1680348' id='answer-label-1680348' class=' answer'><span>Configure a new Hive metastore in Amazon EMR<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434264[]' id='answer-id-1680349' class='answer   answerof-434264 ' value='1680349'   \/><label for='answer-id-1680349' id='answer-label-1680349' class=' answer'><span>Migrate the existing on-premises Hive metastore into Amazon EMR<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434264[]' id='answer-id-1680350' class='answer   answerof-434264 ' value='1680350'   \/><label for='answer-id-1680350' id='answer-label-1680350' class=' answer'><span>Use the new metastore as the company's data catalog.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-6' style=';'><div id='questionWrap-6'  class='   watupro-question-id-434265'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>6. <\/span>A data engineer maintains custom Python scripts that perform a data formatting process that many AWS Lambda functions use. When the data engineer needs to modify the Python scripts, the data engineer must manually update all the Lambda functions. <br \/>\r<br>The data engineer requires a less manual way to update the Lambda functions. 
<br \/>\r<br>Which solution will meet this requirement?<\/div><input type='hidden' name='question_id[]' id='qID_6' value='434265' \/><input type='hidden' id='answerType434265' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434265[]' id='answer-id-1680351' class='answer   answerof-434265 ' value='1680351'   \/><label for='answer-id-1680351' id='answer-label-1680351' class=' answer'><span>Store a pointer to the custom Python scripts in the execution context object in a shared Amazon \r\nS3 bucket.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434265[]' id='answer-id-1680352' class='answer   answerof-434265 ' value='1680352'   \/><label for='answer-id-1680352' id='answer-label-1680352' class=' answer'><span>Package the custom Python scripts into Lambda layers. Apply the Lambda layers to the Lambda functions.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434265[]' id='answer-id-1680353' class='answer   answerof-434265 ' value='1680353'   \/><label for='answer-id-1680353' id='answer-label-1680353' class=' answer'><span>Store a pointer to the custom Python scripts in environment variables in a shared Amazon S3 bucket.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434265[]' id='answer-id-1680354' class='answer   answerof-434265 ' value='1680354'   \/><label for='answer-id-1680354' id='answer-label-1680354' class=' answer'><span>Assign the same alias to each Lambda function. 
Call each Lambda function by specifying the function's alias.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-7' style=';'><div id='questionWrap-7'  class='   watupro-question-id-434266'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>7. <\/span>A company uses Amazon Athena to run SQL queries for extract, transform, and load (ETL) tasks by using Create Table As Select (CTAS). The company must use Apache Spark instead of SQL to generate analytics. <br \/>\r<br>Which solution will give the company the ability to use Spark to access Athena?<\/div><input type='hidden' name='question_id[]' id='qID_7' value='434266' \/><input type='hidden' id='answerType434266' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434266[]' id='answer-id-1680355' class='answer   answerof-434266 ' value='1680355'   \/><label for='answer-id-1680355' id='answer-label-1680355' class=' answer'><span>Athena query settings<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434266[]' id='answer-id-1680356' class='answer   answerof-434266 ' value='1680356'   \/><label for='answer-id-1680356' id='answer-label-1680356' class=' answer'><span>Athena workgroup<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434266[]' id='answer-id-1680357' class='answer   answerof-434266 ' value='1680357'   \/><label for='answer-id-1680357' id='answer-label-1680357' class=' answer'><span>Athena data source<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434266[]' id='answer-id-1680358' class='answer   answerof-434266 ' value='1680358'   \/><label for='answer-id-1680358' id='answer-label-1680358' class=' 
answer'><span>Athena query editor<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-8' style=';'><div id='questionWrap-8'  class='   watupro-question-id-434267'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>8. <\/span>A company stores data in a data lake that is in Amazon S3. Some data that the company stores in the data lake contains personally identifiable information (PII). Multiple user groups need to access the raw data. The company must ensure that user groups can access only the PII that they require. <br \/>\r<br>Which solution will meet these requirements with the LEAST effort?<\/div><input type='hidden' name='question_id[]' id='qID_8' value='434267' \/><input type='hidden' id='answerType434267' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434267[]' id='answer-id-1680359' class='answer   answerof-434267 ' value='1680359'   \/><label for='answer-id-1680359' id='answer-label-1680359' class=' answer'><span>Use Amazon Athena to query the data. Set up AWS Lake Formation and create data filters to establish levels of access for the company's IAM roles. Assign each user to the IAM role that matches the user's PII access requirements.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434267[]' id='answer-id-1680360' class='answer   answerof-434267 ' value='1680360'   \/><label for='answer-id-1680360' id='answer-label-1680360' class=' answer'><span>Use Amazon QuickSight to access the data. Use column-level security features in QuickSight to limit the PII that users can retrieve from Amazon S3 by using Amazon Athena. 
Define QuickSight access levels based on the PII access requirements of the users.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434267[]' id='answer-id-1680361' class='answer   answerof-434267 ' value='1680361'   \/><label for='answer-id-1680361' id='answer-label-1680361' class=' answer'><span>Build a custom query builder UI that will run Athena queries in the background to access the data. Create user groups in Amazon Cognito. Assign access levels to the user groups based on the PII access requirements of the users.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434267[]' id='answer-id-1680362' class='answer   answerof-434267 ' value='1680362'   \/><label for='answer-id-1680362' id='answer-label-1680362' class=' answer'><span>Create IAM roles that have different levels of granular access. Assign the IAM roles to IAM user groups. Use an identity-based policy to assign access levels to user groups at the column level.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-9' style=';'><div id='questionWrap-9'  class='   watupro-question-id-434268'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>9. <\/span>A data engineer must build an extract, transform, and load (ETL) pipeline to process and load data from 10 source systems into 10 tables that are in an Amazon Redshift database. All the source systems generate .csv, JSON, or Apache Parquet files every 15 minutes. The source systems all deliver files into one Amazon S3 bucket. The file sizes range from 10 MB to 20 GB. The ETL pipeline must function correctly despite changes to the data schema. <br \/>\r<br>Which data pipeline solutions will meet these requirements? 
(Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_9' value='434268' \/><input type='hidden' id='answerType434268' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434268[]' id='answer-id-1680363' class='answer   answerof-434268 ' value='1680363'   \/><label for='answer-id-1680363' id='answer-label-1680363' class=' answer'><span>Use an Amazon EventBridge rule to run an AWS Glue job every 15 minutes. Configure the AWS Glue job to process and load the data into the Amazon Redshift tables.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434268[]' id='answer-id-1680364' class='answer   answerof-434268 ' value='1680364'   \/><label for='answer-id-1680364' id='answer-label-1680364' class=' answer'><span>Use an Amazon EventBridge rule to invoke an AWS Glue workflow job every 15 minutes. Configure the AWS Glue workflow to have an on-demand trigger that runs an AWS Glue crawler and then runs an AWS Glue job when the crawler finishes running successfully. Configure the AWS Glue job to process and load the data into the Amazon Redshift tables.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434268[]' id='answer-id-1680365' class='answer   answerof-434268 ' value='1680365'   \/><label for='answer-id-1680365' id='answer-label-1680365' class=' answer'><span>Configure an AWS Lambda function to invoke an AWS Glue crawler when a file is loaded into the S3 bucket. Configure an AWS Glue job to process and load the data into the Amazon Redshift tables. Create a second Lambda function to run the AWS Glue job. 
Create an Amazon EventBridge rule to invoke the second Lambda function when the AWS Glue crawler finishes running successfully.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434268[]' id='answer-id-1680366' class='answer   answerof-434268 ' value='1680366'   \/><label for='answer-id-1680366' id='answer-label-1680366' class=' answer'><span>Configure an AWS Lambda function to invoke an AWS Glue workflow when a file is loaded into the S3 bucket. Configure the AWS Glue workflow to have an on-demand trigger that runs an AWS Glue crawler and then runs an AWS Glue job when the crawler finishes running successfully. Configure the AWS Glue job to process and load the data into the Amazon Redshift tables.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434268[]' id='answer-id-1680367' class='answer   answerof-434268 ' value='1680367'   \/><label for='answer-id-1680367' id='answer-label-1680367' class=' answer'><span>Configure an AWS Lambda function to invoke an AWS Glue job when a file is loaded into the S3 bucket. Configure the AWS Glue job to read the files from the S3 bucket into an Apache Spark DataFrame. Configure the AWS Glue job to also put smaller partitions of the DataFrame into an Amazon Kinesis Data Firehose delivery stream. Configure the delivery stream to load data into the Amazon Redshift tables.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-10' style=';'><div id='questionWrap-10'  class='   watupro-question-id-434269'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>10. <\/span>A company currently stores all of its data in Amazon S3 by using the S3 Standard storage class. <br \/>\r<br>A data engineer examined data access patterns to identify trends. During the first 6 months, most data files are accessed several times each day. 
Between 6 months and 2 years, most data files are accessed once or twice each month. After 2 years, data files are accessed only once or twice each year. <br \/>\r<br>The data engineer needs to use an S3 Lifecycle policy to develop new data storage rules. The new storage solution must continue to provide high availability. <br \/>\r<br>Which solution will meet these requirements in the MOST cost-effective way?<\/div><input type='hidden' name='question_id[]' id='qID_10' value='434269' \/><input type='hidden' id='answerType434269' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434269[]' id='answer-id-1680368' class='answer   answerof-434269 ' value='1680368'   \/><label for='answer-id-1680368' id='answer-label-1680368' class=' answer'><span>Transition objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 6 months. Transfer objects to S3 Glacier Flexible Retrieval after 2 years.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434269[]' id='answer-id-1680369' class='answer   answerof-434269 ' value='1680369'   \/><label for='answer-id-1680369' id='answer-label-1680369' class=' answer'><span>Transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 6 months. Transfer objects to S3 Glacier Flexible Retrieval after 2 years.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434269[]' id='answer-id-1680370' class='answer   answerof-434269 ' value='1680370'   \/><label for='answer-id-1680370' id='answer-label-1680370' class=' answer'><span>Transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 6 months. 
Transfer objects to S3 Glacier Deep Archive after 2 years.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434269[]' id='answer-id-1680371' class='answer   answerof-434269 ' value='1680371'   \/><label for='answer-id-1680371' id='answer-label-1680371' class=' answer'><span>Transition objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 6 months. Transfer \r\nobjects to S3 Glacier Deep Archive after 2 years.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-11' style=';'><div id='questionWrap-11'  class='   watupro-question-id-434270'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>11. <\/span>A company stores data from an application in an Amazon DynamoDB table that operates in provisioned capacity mode. The workloads of the application have predictable throughput load on a regular schedule. Every Monday, there is an immediate increase in activity early in the morning. The application has very low usage during weekends. <br \/>\r<br>The company must ensure that the application performs consistently during peak usage times. 
<br \/>\r<br>Which solution will meet these requirements in the MOST cost-effective way?<\/div><input type='hidden' name='question_id[]' id='qID_11' value='434270' \/><input type='hidden' id='answerType434270' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434270[]' id='answer-id-1680372' class='answer   answerof-434270 ' value='1680372'   \/><label for='answer-id-1680372' id='answer-label-1680372' class=' answer'><span>Increase the provisioned capacity to the maximum capacity that is currently present during peak load times.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434270[]' id='answer-id-1680373' class='answer   answerof-434270 ' value='1680373'   \/><label for='answer-id-1680373' id='answer-label-1680373' class=' answer'><span>Divide the table into two tables. Provision each table with half of the provisioned capacity of the original table. Spread queries evenly across both tables.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434270[]' id='answer-id-1680374' class='answer   answerof-434270 ' value='1680374'   \/><label for='answer-id-1680374' id='answer-label-1680374' class=' answer'><span>Use AWS Application Auto Scaling to schedule higher provisioned capacity for peak usage times. Schedule lower capacity during off-peak times.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434270[]' id='answer-id-1680375' class='answer   answerof-434270 ' value='1680375'   \/><label for='answer-id-1680375' id='answer-label-1680375' class=' answer'><span>Change the capacity mode from provisioned to on-demand. 
Configure the table to scale up and scale down based on the load on the table.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-12' style=';'><div id='questionWrap-12'  class='   watupro-question-id-434271'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>12. <\/span>A company is planning to upgrade its Amazon Elastic Block Store (Amazon EBS) General Purpose SSD storage from gp2 to gp3. The company wants to prevent any interruptions in its Amazon EC2 instances that will cause data loss during the migration to the upgraded storage. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_12' value='434271' \/><input type='hidden' id='answerType434271' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434271[]' id='answer-id-1680376' class='answer   answerof-434271 ' value='1680376'   \/><label for='answer-id-1680376' id='answer-label-1680376' class=' answer'><span>Create snapshots of the gp2 volumes. Create new gp3 volumes from the snapshots. Attach the new gp3 volumes to the EC2 instances.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434271[]' id='answer-id-1680377' class='answer   answerof-434271 ' value='1680377'   \/><label for='answer-id-1680377' id='answer-label-1680377' class=' answer'><span>Create new gp3 volumes. Gradually transfer the data to the new gp3 volumes. 
When the transfer is complete, mount the new gp3 volumes to the EC2 instances to replace the gp2 volumes.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434271[]' id='answer-id-1680378' class='answer   answerof-434271 ' value='1680378'   \/><label for='answer-id-1680378' id='answer-label-1680378' class=' answer'><span>Change the volume type of the existing gp2 volumes to gp3. Enter new values for volume size, IOPS, and throughput.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434271[]' id='answer-id-1680379' class='answer   answerof-434271 ' value='1680379'   \/><label for='answer-id-1680379' id='answer-label-1680379' class=' answer'><span>Use AWS DataSync to create new gp3 volumes. Transfer the data from the original gp2 volumes to the new gp3 volumes.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-13' style=';'><div id='questionWrap-13'  class='   watupro-question-id-434272'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>13. <\/span>A company stores daily records of the financial performance of investment portfolios in .csv format in an Amazon S3 bucket. A data engineer uses AWS Glue crawlers to crawl the S3 data. The data engineer must make the S3 data accessible daily in the AWS Glue Data Catalog. 
<br \/>\r<br>Which solution will meet these requirements?<\/div><input type='hidden' name='question_id[]' id='qID_13' value='434272' \/><input type='hidden' id='answerType434272' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434272[]' id='answer-id-1680380' class='answer   answerof-434272 ' value='1680380'   \/><label for='answer-id-1680380' id='answer-label-1680380' class=' answer'><span>Create an IAM role that includes the AmazonS3FullAccess policy. Associate the role with the crawler. Specify the S3 bucket path of the source data as the crawler's data store. Create a daily schedule to run the crawler. Configure the output destination to a new path in the existing S3 bucket.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434272[]' id='answer-id-1680381' class='answer   answerof-434272 ' value='1680381'   \/><label for='answer-id-1680381' id='answer-label-1680381' class=' answer'><span>Create an IAM role that includes the AWSGlueServiceRole policy. Associate the role with the crawler. Specify the S3 bucket path of the source data as the crawler's data store. Create a daily schedule to run the crawler. Specify a database name for the output.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434272[]' id='answer-id-1680382' class='answer   answerof-434272 ' value='1680382'   \/><label for='answer-id-1680382' id='answer-label-1680382' class=' answer'><span>Create an IAM role that includes the AmazonS3FullAccess policy. Associate the role with the crawler. Specify the S3 bucket path of the source data as the crawler's data store. Allocate data processing units (DPUs) to run the crawler every day. 
Specify a database name for the output.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434272[]' id='answer-id-1680383' class='answer   answerof-434272 ' value='1680383'   \/><label for='answer-id-1680383' id='answer-label-1680383' class=' answer'><span>Create an IAM role that includes the AWSGlueServiceRole policy. Associate the role with the crawler. Specify the S3 bucket path of the source data as the crawler's data store. Allocate data processing units (DPUs) to run the crawler every day. Configure the output destination to a new path in the existing S3 bucket.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-14' style=';'><div id='questionWrap-14'  class='   watupro-question-id-434273'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>14. <\/span>A company uses an on-premises Microsoft SQL Server database to store financial transaction data. The company migrates the transaction data from the on-premises database to AWS at the end of each month. The company has noticed that the cost to migrate data from the on-premises database to an Amazon RDS for SQL Server database has increased recently. <br \/>\r<br>The company requires a cost-effective solution to migrate the data to AWS. The solution must cause minimal downtime for the applications that access the database. 
<br \/>\r<br>Which AWS service should the company use to meet these requirements?<\/div><input type='hidden' name='question_id[]' id='qID_14' value='434273' \/><input type='hidden' id='answerType434273' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434273[]' id='answer-id-1680384' class='answer   answerof-434273 ' value='1680384'   \/><label for='answer-id-1680384' id='answer-label-1680384' class=' answer'><span>AWS Lambda<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434273[]' id='answer-id-1680385' class='answer   answerof-434273 ' value='1680385'   \/><label for='answer-id-1680385' id='answer-label-1680385' class=' answer'><span>AWS Database Migration Service (AWS DMS)<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434273[]' id='answer-id-1680386' class='answer   answerof-434273 ' value='1680386'   \/><label for='answer-id-1680386' id='answer-label-1680386' class=' answer'><span>AWS Direct Connect<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434273[]' id='answer-id-1680387' class='answer   answerof-434273 ' value='1680387'   \/><label for='answer-id-1680387' id='answer-label-1680387' class=' answer'><span>AWS DataSync<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-15' style=';'><div id='questionWrap-15'  class='   watupro-question-id-434274'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>15. <\/span>A manufacturing company wants to collect data from sensors. A data engineer needs to implement a solution that ingests sensor data in near real time. <br \/>\r<br>The solution must store the data to a persistent data store. 
The solution must store the data in nested JSON format. The company must have the ability to query from the data store with a latency of less than 10 milliseconds. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_15' value='434274' \/><input type='hidden' id='answerType434274' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434274[]' id='answer-id-1680388' class='answer   answerof-434274 ' value='1680388'   \/><label for='answer-id-1680388' id='answer-label-1680388' class=' answer'><span>Use a self-hosted Apache Kafka cluster to capture the sensor data. Store the data in Amazon S3 for querying.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434274[]' id='answer-id-1680389' class='answer   answerof-434274 ' value='1680389'   \/><label for='answer-id-1680389' id='answer-label-1680389' class=' answer'><span>Use AWS Lambda to process the sensor data. Store the data in Amazon S3 for querying.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434274[]' id='answer-id-1680390' class='answer   answerof-434274 ' value='1680390'   \/><label for='answer-id-1680390' id='answer-label-1680390' class=' answer'><span>Use Amazon Kinesis Data Streams to capture the sensor data. Store the data in Amazon DynamoDB for querying.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434274[]' id='answer-id-1680391' class='answer   answerof-434274 ' value='1680391'   \/><label for='answer-id-1680391' id='answer-label-1680391' class=' answer'><span>Use Amazon Simple Queue Service (Amazon SQS) to buffer incoming sensor data. 
Use AWS Glue to store the data in Amazon RDS for querying.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-16' style=';'><div id='questionWrap-16'  class='   watupro-question-id-434275'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>16. <\/span>A data engineer needs to schedule a workflow that runs a set of AWS Glue jobs every day. The data engineer does not require the Glue jobs to run or finish at a specific time. <br \/>\r<br>Which solution will run the Glue jobs in the MOST cost-effective way?<\/div><input type='hidden' name='question_id[]' id='qID_16' value='434275' \/><input type='hidden' id='answerType434275' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434275[]' id='answer-id-1680392' class='answer   answerof-434275 ' value='1680392'   \/><label for='answer-id-1680392' id='answer-label-1680392' class=' answer'><span>Choose the FLEX execution class in the Glue job properties.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434275[]' id='answer-id-1680393' class='answer   answerof-434275 ' value='1680393'   \/><label for='answer-id-1680393' id='answer-label-1680393' class=' answer'><span>Use the Spot Instance type in Glue job properties.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434275[]' id='answer-id-1680394' class='answer   answerof-434275 ' value='1680394'   \/><label for='answer-id-1680394' id='answer-label-1680394' class=' answer'><span>Choose the STANDARD execution class in the Glue job properties.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434275[]' id='answer-id-1680395' class='answer   answerof-434275 ' 
value='1680395'   \/><label for='answer-id-1680395' id='answer-label-1680395' class=' answer'><span>Choose the latest version in the GlueVersion field in the Glue job properties.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-17' style=';'><div id='questionWrap-17'  class='   watupro-question-id-434276'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>17. <\/span>A company has used an Amazon Redshift table that is named Orders for 6 months. The company performs weekly updates and deletes on the table. The table has an interleaved sort key on a column that contains AWS Regions.<br \/>\r\n<br \/>\r\nThe company wants to reclaim disk space so that the company will not run out of storage space. The company also wants to analyze the sort key column.<br \/>\r\n<br \/>\r\nWhich Amazon Redshift command will meet these requirements?<\/div><input type='hidden' name='question_id[]' id='qID_17' value='434276' \/><input type='hidden' id='answerType434276' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434276[]' id='answer-id-1680396' class='answer   answerof-434276 ' value='1680396'   \/><label for='answer-id-1680396' id='answer-label-1680396' class=' answer'><span>VACUUM FULL Orders<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434276[]' id='answer-id-1682615' class='answer   answerof-434276 ' value='1682615'   \/><label for='answer-id-1682615' id='answer-label-1682615' class=' answer'><span>VACUUM DELETE ONLY Orders<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434276[]' id='answer-id-1682616' class='answer   answerof-434276 ' value='1682616'   \/><label for='answer-id-1682616' id='answer-label-1682616' 
class=' answer'><span>VACUUM REINDEX Orders<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434276[]' id='answer-id-1682617' class='answer   answerof-434276 ' value='1682617'   \/><label for='answer-id-1682617' id='answer-label-1682617' class=' answer'><span>VACUUM SORT ONLY Orders<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-18' style=';'><div id='questionWrap-18'  class='   watupro-question-id-434277'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>18. <\/span>A company wants to implement real-time analytics capabilities. The company wants to use Amazon Kinesis Data Streams and Amazon Redshift to ingest and process streaming data at the rate of several gigabytes per second. The company wants to derive near real-time insights by using existing business intelligence (BI) and analytics tools. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_18' value='434277' \/><input type='hidden' id='answerType434277' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434277[]' id='answer-id-1680397' class='answer   answerof-434277 ' value='1680397'   \/><label for='answer-id-1680397' id='answer-label-1680397' class=' answer'><span>Use Kinesis Data Streams to stage data in Amazon S3. 
Use the COPY command to load data from Amazon S3 directly into Amazon Redshift to make the data immediately available for real-time analysis.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434277[]' id='answer-id-1680398' class='answer   answerof-434277 ' value='1680398'   \/><label for='answer-id-1680398' id='answer-label-1680398' class=' answer'><span>Access the data from Kinesis Data Streams by using SQL queries. Create materialized views directly on top of the stream. Refresh the materialized views regularly to query the most recent stream data.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434277[]' id='answer-id-1680399' class='answer   answerof-434277 ' value='1680399'   \/><label for='answer-id-1680399' id='answer-label-1680399' class=' answer'><span>Create an external schema in Amazon Redshift to map the data from Kinesis Data Streams to an Amazon Redshift object. Create a materialized view to read data from the stream. Set the materialized view to auto refresh.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434277[]' id='answer-id-1680400' class='answer   answerof-434277 ' value='1680400'   \/><label for='answer-id-1680400' id='answer-label-1680400' class=' answer'><span>Connect Kinesis Data Streams to Amazon Kinesis Data Firehose. Use Kinesis Data Firehose to stage the data in Amazon S3. Use the COPY command to load the data from Amazon S3 to a table in Amazon Redshift.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-19' style=';'><div id='questionWrap-19'  class='   watupro-question-id-434278'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>19. <\/span>A retail company has a customer data hub in an Amazon S3 bucket. 
Employees from many countries use the data hub to support company-wide analytics. A governance team must ensure that the company's data analysts can access data only for customers who are within the same country as the analysts. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational effort?<\/div><input type='hidden' name='question_id[]' id='qID_19' value='434278' \/><input type='hidden' id='answerType434278' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434278[]' id='answer-id-1680401' class='answer   answerof-434278 ' value='1680401'   \/><label for='answer-id-1680401' id='answer-label-1680401' class=' answer'><span>Create a separate table for each country's customer data. Provide access to each analyst based on the country that the analyst serves.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434278[]' id='answer-id-1680402' class='answer   answerof-434278 ' value='1680402'   \/><label for='answer-id-1680402' id='answer-label-1680402' class=' answer'><span>Register the S3 bucket as a data lake location in AWS Lake Formation. Use the Lake Formation row-level security features to enforce the company's access policies.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434278[]' id='answer-id-1680403' class='answer   answerof-434278 ' value='1680403'   \/><label for='answer-id-1680403' id='answer-label-1680403' class=' answer'><span>Move the data to AWS Regions that are close to the countries where the customers are. 
Provide access to each analyst based on the country that the analyst serves.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434278[]' id='answer-id-1680404' class='answer   answerof-434278 ' value='1680404'   \/><label for='answer-id-1680404' id='answer-label-1680404' class=' answer'><span>Load the data into Amazon Redshift. Create a view for each country. Create separate IAM roles for each country to provide access to data from each country. Assign the appropriate roles to the analysts.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-20' style=';'><div id='questionWrap-20'  class='   watupro-question-id-434279'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>20. <\/span>A media company wants to improve a system that recommends media content to customers based on user behavior and preferences. To improve the recommendation system, the company needs to incorporate insights from third-party datasets into the company's existing analytics platform. <br \/>\r<br>The company wants to minimize the effort and time required to incorporate third-party datasets. 
<br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_20' value='434279' \/><input type='hidden' id='answerType434279' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434279[]' id='answer-id-1680405' class='answer   answerof-434279 ' value='1680405'   \/><label for='answer-id-1680405' id='answer-label-1680405' class=' answer'><span>Use API calls to access and integrate third-party datasets from AWS Data Exchange.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434279[]' id='answer-id-1680406' class='answer   answerof-434279 ' value='1680406'   \/><label for='answer-id-1680406' id='answer-label-1680406' class=' answer'><span>Use API calls to access and integrate third-party datasets from AWS<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434279[]' id='answer-id-1680407' class='answer   answerof-434279 ' value='1680407'   \/><label for='answer-id-1680407' id='answer-label-1680407' class=' answer'><span>Use Amazon Kinesis Data Streams to access and integrate third-party datasets from AWS Code Commit repositories.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434279[]' id='answer-id-1680408' class='answer   answerof-434279 ' value='1680408'   \/><label for='answer-id-1680408' id='answer-label-1680408' class=' answer'><span>Use Amazon Kinesis Data Streams to access and integrate third-party datasets from Amazon Elastic Container Registry (Amazon ECR).<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-21' style=';'><div id='questionWrap-21'  class='   
watupro-question-id-434280'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>21. <\/span>A company loads transaction data for each day into Amazon Redshift tables at the end of each day. The company wants to have the ability to track which tables have been loaded and which tables still need to be loaded. <br \/>\r<br>A data engineer wants to store the load statuses of Redshift tables in an Amazon DynamoDB table. The data engineer creates an AWS Lambda function to publish the details of the load statuses to DynamoDB. <br \/>\r<br>How should the data engineer invoke the Lambda function to write load statuses to the DynamoDB table?<\/div><input type='hidden' name='question_id[]' id='qID_21' value='434280' \/><input type='hidden' id='answerType434280' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434280[]' id='answer-id-1680409' class='answer   answerof-434280 ' value='1680409'   \/><label for='answer-id-1680409' id='answer-label-1680409' class=' answer'><span>Use a second Lambda function to invoke the first Lambda function based on Amazon CloudWatch events.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434280[]' id='answer-id-1680410' class='answer   answerof-434280 ' value='1680410'   \/><label for='answer-id-1680410' id='answer-label-1680410' class=' answer'><span>Use the Amazon Redshift Data API to publish an event to Amazon EventBridge. 
Configure an EventBridge rule to invoke the Lambda function.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434280[]' id='answer-id-1680411' class='answer   answerof-434280 ' value='1680411'   \/><label for='answer-id-1680411' id='answer-label-1680411' class=' answer'><span>Use the Amazon Redshift Data API to publish a message to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the SQS queue to invoke the Lambda function.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434280[]' id='answer-id-1680412' class='answer   answerof-434280 ' value='1680412'   \/><label for='answer-id-1680412' id='answer-label-1680412' class=' answer'><span>Use a second Lambda function to invoke the first Lambda function based on AWS CloudTrail events.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-22' style=';'><div id='questionWrap-22'  class='   watupro-question-id-434281'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>22. <\/span>A data engineer needs Amazon Athena queries to finish faster. The data engineer notices that all the files the Athena queries use are currently stored in uncompressed .csv format. The data engineer also notices that users perform most queries by selecting a specific column. 
<br \/>\r<br>Which solution will MOST speed up the Athena query performance?<\/div><input type='hidden' name='question_id[]' id='qID_22' value='434281' \/><input type='hidden' id='answerType434281' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434281[]' id='answer-id-1680413' class='answer   answerof-434281 ' value='1680413'   \/><label for='answer-id-1680413' id='answer-label-1680413' class=' answer'><span>Change the data format from .csv to JSON format. Apply Snappy compression.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434281[]' id='answer-id-1680414' class='answer   answerof-434281 ' value='1680414'   \/><label for='answer-id-1680414' id='answer-label-1680414' class=' answer'><span>Compress the .csv files by using Snappy compression.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434281[]' id='answer-id-1680415' class='answer   answerof-434281 ' value='1680415'   \/><label for='answer-id-1680415' id='answer-label-1680415' class=' answer'><span>Change the data format from .csv to Apache Parquet. Apply Snappy compression.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434281[]' id='answer-id-1680416' class='answer   answerof-434281 ' value='1680416'   \/><label for='answer-id-1680416' id='answer-label-1680416' class=' answer'><span>Compress the .csv files by using gzip compression.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-23' style=';'><div id='questionWrap-23'  class='   watupro-question-id-434282'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>23. 
<\/span>A company uses Amazon RDS for MySQL as the database for a critical application. The database workload is mostly writes, with a small number of reads. <br \/>\r<br>A data engineer notices that the CPU utilization of the DB instance is very high. The high CPU utilization is slowing down the application. The data engineer must reduce the CPU utilization of the DB Instance. <br \/>\r<br>Which actions should the data engineer take to meet this requirement? (Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_23' value='434282' \/><input type='hidden' id='answerType434282' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434282[]' id='answer-id-1680417' class='answer   answerof-434282 ' value='1680417'   \/><label for='answer-id-1680417' id='answer-label-1680417' class=' answer'><span>Use the Performance Insights feature of Amazon RDS to identify queries that have high CPU utilization. 
Optimize the problematic queries.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434282[]' id='answer-id-1680418' class='answer   answerof-434282 ' value='1680418'   \/><label for='answer-id-1680418' id='answer-label-1680418' class=' answer'><span>Modify the database schema to include additional tables and indexes.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434282[]' id='answer-id-1680419' class='answer   answerof-434282 ' value='1680419'   \/><label for='answer-id-1680419' id='answer-label-1680419' class=' answer'><span>Reboot the RDS DB instance once each week.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434282[]' id='answer-id-1680420' class='answer   answerof-434282 ' value='1680420'   \/><label for='answer-id-1680420' id='answer-label-1680420' class=' answer'><span>Upgrade to a larger instance size.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434282[]' id='answer-id-1680421' class='answer   answerof-434282 ' value='1680421'   \/><label for='answer-id-1680421' id='answer-label-1680421' class=' answer'><span>Implement caching to reduce the database query load.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-24' style=';'><div id='questionWrap-24'  class='   watupro-question-id-434283'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>24. <\/span>A company has five offices in different AWS Regions. Each office has its own human resources (HR) department that uses a unique IAM role. The company stores employee records in a data lake that is based on Amazon S3 storage.<br \/>\r\n<br \/>\r\nA data engineering team needs to limit access to the records. 
Each HR department should be able to access records for only employees who are within the HR department's Region.<br \/>\r\n<br \/>\r\nWhich combination of steps should the data engineering team take to meet this requirement with the LEAST operational overhead? (Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_24' value='434283' \/><input type='hidden' id='answerType434283' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434283[]' id='answer-id-1680422' class='answer   answerof-434283 ' value='1680422'   \/><label for='answer-id-1680422' id='answer-label-1680422' class=' answer'><span>Use data filters for each Region to register the S3 paths as data locations.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434283[]' id='answer-id-1682618' class='answer   answerof-434283 ' value='1682618'   \/><label for='answer-id-1682618' id='answer-label-1682618' class=' answer'><span>Register the S3 path as an AWS Lake Formation location.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434283[]' id='answer-id-1682619' class='answer   answerof-434283 ' value='1682619'   \/><label for='answer-id-1682619' id='answer-label-1682619' class=' answer'><span>Modify the IAM roles of the HR departments to add a data filter for each department's Region.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434283[]' id='answer-id-1682620' class='answer   answerof-434283 ' value='1682620'   \/><label for='answer-id-1682620' id='answer-label-1682620' class=' answer'><span>Enable fine-grained access control in AWS Lake Formation. 
Add a data filter for each Region.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434283[]' id='answer-id-1682621' class='answer   answerof-434283 ' value='1682621'   \/><label for='answer-id-1682621' id='answer-label-1682621' class=' answer'><span>Create a separate S3 bucket for each Region. Configure an IAM policy to allow S3 access. Restrict access based on Region.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-25' style=';'><div id='questionWrap-25'  class='   watupro-question-id-434284'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>25. <\/span>A data engineer needs to securely transfer 5 TB of data from an on-premises data center to an Amazon S3 bucket. Approximately 5% of the data changes every day. Updates to the data need to be regularly propagated to the S3 bucket. The data includes files that are in multiple formats. The data engineer needs to automate the transfer process and must schedule the process to run periodically. 
<br \/>\r<br>Which AWS service should the data engineer use to transfer the data in the MOST operationally efficient way?<\/div><input type='hidden' name='question_id[]' id='qID_25' value='434284' \/><input type='hidden' id='answerType434284' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434284[]' id='answer-id-1680423' class='answer   answerof-434284 ' value='1680423'   \/><label for='answer-id-1680423' id='answer-label-1680423' class=' answer'><span>AWS DataSync<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434284[]' id='answer-id-1680424' class='answer   answerof-434284 ' value='1680424'   \/><label for='answer-id-1680424' id='answer-label-1680424' class=' answer'><span>AWS Glue<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434284[]' id='answer-id-1680425' class='answer   answerof-434284 ' value='1680425'   \/><label for='answer-id-1680425' id='answer-label-1680425' class=' answer'><span>AWS Direct Connect<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434284[]' id='answer-id-1680426' class='answer   answerof-434284 ' value='1680426'   \/><label for='answer-id-1680426' id='answer-label-1680426' class=' answer'><span>Amazon S3 Transfer Acceleration<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-26' style=';'><div id='questionWrap-26'  class='   watupro-question-id-434285'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>26. <\/span>A data engineer runs Amazon Athena queries on data that is in an Amazon S3 bucket. The Athena queries use AWS Glue Data Catalog as a metadata table. 
<br \/>\r<br>The data engineer notices that the Athena query plans are experiencing a performance bottleneck. The data engineer determines that the cause of the performance bottleneck is the large number of partitions that are in the S3 bucket. The data engineer must resolve the performance bottleneck and reduce Athena query planning time. <br \/>\r<br>Which solutions will meet these requirements? (Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_26' value='434285' \/><input type='hidden' id='answerType434285' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434285[]' id='answer-id-1680427' class='answer   answerof-434285 ' value='1680427'   \/><label for='answer-id-1680427' id='answer-label-1680427' class=' answer'><span>Create an AWS Glue partition index. Enable partition filtering.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434285[]' id='answer-id-1680428' class='answer   answerof-434285 ' value='1680428'   \/><label for='answer-id-1680428' id='answer-label-1680428' class=' answer'><span>Bucket the data based on a column that the data have in common in a WHERE clause of the user query<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434285[]' id='answer-id-1680429' class='answer   answerof-434285 ' value='1680429'   \/><label for='answer-id-1680429' id='answer-label-1680429' class=' answer'><span>Use Athena partition projection based on the S3 bucket prefix.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434285[]' id='answer-id-1680430' class='answer   answerof-434285 ' value='1680430'   \/><label for='answer-id-1680430' id='answer-label-1680430' class=' answer'><span>Transform the data that is in the S3 bucket 
to Apache Parquet format.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434285[]' id='answer-id-1680431' class='answer   answerof-434285 ' value='1680431'   \/><label for='answer-id-1680431' id='answer-label-1680431' class=' answer'><span>Use the Amazon EMR S3DistCP utility to combine smaller objects in the S3 bucket into larger objects.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-27' style=';'><div id='questionWrap-27'  class='   watupro-question-id-434286'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>27. <\/span>A company maintains an Amazon Redshift provisioned cluster that the company uses for extract, transform, and load (ETL) operations to support critical analysis tasks. A sales team within the company maintains a Redshift cluster that the sales team uses for business intelligence (BI) tasks. The sales team recently requested access to the data that is in the ETL Redshift cluster so the team can perform weekly summary analysis tasks. The sales team needs to join data from the ETL cluster with data that is in the sales team's BI cluster. <br \/>\r<br>The company needs a solution that will share the ETL cluster data with the sales team without interrupting the critical analysis tasks. The solution must minimize usage of the computing resources of the ETL cluster. 
<br \/>\r<br>Which solution will meet these requirements?<\/div><input type='hidden' name='question_id[]' id='qID_27' value='434286' \/><input type='hidden' id='answerType434286' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434286[]' id='answer-id-1680432' class='answer   answerof-434286 ' value='1680432'   \/><label for='answer-id-1680432' id='answer-label-1680432' class=' answer'><span>Set up the sales team BI cluster as a consumer of the ETL cluster by using Redshift data sharing.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434286[]' id='answer-id-1680433' class='answer   answerof-434286 ' value='1680433'   \/><label for='answer-id-1680433' id='answer-label-1680433' class=' answer'><span>Create materialized views based on the sales team's requirements. Grant the sales team direct access to the ETL cluster.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434286[]' id='answer-id-1680434' class='answer   answerof-434286 ' value='1680434'   \/><label for='answer-id-1680434' id='answer-label-1680434' class=' answer'><span>Create database views based on the sales team's requirements. Grant the sales team direct access to the ETL cluster.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434286[]' id='answer-id-1680435' class='answer   answerof-434286 ' value='1680435'   \/><label for='answer-id-1680435' id='answer-label-1680435' class=' answer'><span>Unload a copy of the data from the ETL cluster to an Amazon S3 bucket every week. 
Create an Amazon Redshift Spectrum table based on the content of the ETL cluster.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-28' style=';'><div id='questionWrap-28'  class='   watupro-question-id-434287'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>28. <\/span>A data engineer is using Amazon Athena to analyze sales data that is in Amazon S3. The data engineer writes a query to retrieve sales amounts for 2023 for several products from a table named sales_data. However, the query does not return results for all of the products that are in the sales_data table. <br \/>\r<br>The data engineer needs to troubleshoot the query to resolve the issue. <br \/>\r<br>The data engineer's original query is as follows: <br \/>\r<br>SELECT product_name, sum(sales_amount) <br \/>\r<br>FROM sales_data <br \/>\r<br>WHERE year = 2023 <br \/>\r<br>GROUP BY product_name <br \/>\r<br>How should the data engineer modify the Athena query to meet these requirements?<\/div><input type='hidden' name='question_id[]' id='qID_28' value='434287' \/><input type='hidden' id='answerType434287' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434287[]' id='answer-id-1680436' class='answer   answerof-434287 ' value='1680436'   \/><label for='answer-id-1680436' id='answer-label-1680436' class=' answer'><span>Replace sum(sales_amount) with count(*) for the aggregation.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434287[]' id='answer-id-1680437' class='answer   answerof-434287 ' value='1680437'   \/><label for='answer-id-1680437' id='answer-label-1680437' class=' answer'><span>Change WHERE year = 2023 to WHERE extract(year FROM sales_data) = 2023.<\/span><\/label><\/div><div 
class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434287[]' id='answer-id-1680438' class='answer   answerof-434287 ' value='1680438'   \/><label for='answer-id-1680438' id='answer-label-1680438' class=' answer'><span>Add HAVING sum(sales_amount) &gt; 0 after the GROUP BY clause.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434287[]' id='answer-id-1680439' class='answer   answerof-434287 ' value='1680439'   \/><label for='answer-id-1680439' id='answer-label-1680439' class=' answer'><span>Remove the GROUP BY clause.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-29' style=';'><div id='questionWrap-29'  class='   watupro-question-id-434288'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>29. <\/span>A data engineer must use AWS services to ingest a dataset into an Amazon S3 data lake. The data engineer profiles the dataset and discovers that the dataset contains personally identifiable information (PII). The data engineer must implement a solution to profile the dataset and obfuscate the PII. <br \/>\r<br>Which solution will meet this requirement with the LEAST operational effort?<\/div><input type='hidden' name='question_id[]' id='qID_29' value='434288' \/><input type='hidden' id='answerType434288' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434288[]' id='answer-id-1680440' class='answer   answerof-434288 ' value='1680440'   \/><label for='answer-id-1680440' id='answer-label-1680440' class=' answer'><span>Use an Amazon Kinesis Data Firehose delivery stream to process the dataset. 
Create an AWS Lambda transform function to identify the PII.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434288[]' id='answer-id-1680441' class='answer   answerof-434288 ' value='1680441'   \/><label for='answer-id-1680441' id='answer-label-1680441' class=' answer'><span>Use an AWS SDK to obfuscate the PII.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434288[]' id='answer-id-1680442' class='answer   answerof-434288 ' value='1680442'   \/><label for='answer-id-1680442' id='answer-label-1680442' class=' answer'><span>Set the S3 data lake as the target for the delivery stream.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434288[]' id='answer-id-1680443' class='answer   answerof-434288 ' value='1680443'   \/><label for='answer-id-1680443' id='answer-label-1680443' class=' answer'><span>Use the Detect PII transform in AWS Glue Studio to identify the PII.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434288[]' id='answer-id-1680444' class='answer   answerof-434288 ' value='1680444'   \/><label for='answer-id-1680444' id='answer-label-1680444' class=' answer'><span>Obfuscate the PII.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434288[]' id='answer-id-1680445' class='answer   answerof-434288 ' value='1680445'   \/><label for='answer-id-1680445' id='answer-label-1680445' class=' answer'><span>Use an AWS Step Functions state machine to orchestrate a data pipeline to ingest the data into the S3 data lake.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434288[]' id='answer-id-1680446' class='answer   answerof-434288 ' value='1680446'   \/><label for='answer-id-1680446' id='answer-label-1680446' class=' answer'><span>Use 
the Detect PII transform in AWS Glue Studio to identify the PII.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434288[]' id='answer-id-1680447' class='answer   answerof-434288 ' value='1680447'   \/><label for='answer-id-1680447' id='answer-label-1680447' class=' answer'><span>Create a rule in AWS Glue Data Quality to obfuscate the PII.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434288[]' id='answer-id-1680448' class='answer   answerof-434288 ' value='1680448'   \/><label for='answer-id-1680448' id='answer-label-1680448' class=' answer'><span>Use an AWS Step Functions state machine to orchestrate a data pipeline to ingest the data into the S3 data lake.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434288[]' id='answer-id-1680449' class='answer   answerof-434288 ' value='1680449'   \/><label for='answer-id-1680449' id='answer-label-1680449' class=' answer'><span>Ingest the dataset into Amazon DynamoDB.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434288[]' id='answer-id-1680450' class='answer   answerof-434288 ' value='1680450'   \/><label for='answer-id-1680450' id='answer-label-1680450' class=' answer'><span>Create an AWS Lambda function to identify and obfuscate the PII in the DynamoDB table and to transform the data. Use the same Lambda function to ingest the data into the S3 data lake.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-30' style=';'><div id='questionWrap-30'  class='   watupro-question-id-434289'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>30. <\/span>A company has a frontend ReactJS website that uses Amazon API Gateway to invoke REST APIs. The APIs perform the functionality of the website. 
A data engineer needs to write a Python script that can be occasionally invoked through API Gateway. The code must return results to API Gateway. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_30' value='434289' \/><input type='hidden' id='answerType434289' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434289[]' id='answer-id-1680451' class='answer   answerof-434289 ' value='1680451'   \/><label for='answer-id-1680451' id='answer-label-1680451' class=' answer'><span>Deploy a custom Python script on an Amazon Elastic Container Service (Amazon ECS) cluster.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434289[]' id='answer-id-1680452' class='answer   answerof-434289 ' value='1680452'   \/><label for='answer-id-1680452' id='answer-label-1680452' class=' answer'><span>Create an AWS Lambda Python function with provisioned concurrency.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434289[]' id='answer-id-1680453' class='answer   answerof-434289 ' value='1680453'   \/><label for='answer-id-1680453' id='answer-label-1680453' class=' answer'><span>Deploy a custom Python script that can integrate with API Gateway on Amazon Elastic Kubernetes \r\nService (Amazon EKS).<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434289[]' id='answer-id-1680454' class='answer   answerof-434289 ' value='1680454'   \/><label for='answer-id-1680454' id='answer-label-1680454' class=' answer'><span>Create an AWS Lambda function. 
Ensure that the function is warm by scheduling an Amazon EventBridge rule to invoke the Lambda function every 5 minutes by using mock events.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-31' style=';'><div id='questionWrap-31'  class='   watupro-question-id-434290'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>31. <\/span>A data engineering team is using an Amazon Redshift data warehouse for operational reporting. The team wants to prevent performance issues that might result from long-running queries. A data engineer must choose a system table in Amazon Redshift to record anomalies when a query optimizer identifies conditions that might indicate performance issues. <br \/>\r<br>Which table view should the data engineer use to meet this requirement?<\/div><input type='hidden' name='question_id[]' id='qID_31' value='434290' \/><input type='hidden' id='answerType434290' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434290[]' id='answer-id-1680455' class='answer   answerof-434290 ' value='1680455'   \/><label for='answer-id-1680455' id='answer-label-1680455' class=' answer'><span>STL_USAGE_CONTROL<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434290[]' id='answer-id-1680456' class='answer   answerof-434290 ' value='1680456'   \/><label for='answer-id-1680456' id='answer-label-1680456' class=' answer'><span>STL_ALERT_EVENT_LOG<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434290[]' id='answer-id-1680457' class='answer   answerof-434290 ' value='1680457'   \/><label for='answer-id-1680457' id='answer-label-1680457' class=' answer'><span>STL_QUERY_METRICS<\/span><\/label><\/div><div 
class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434290[]' id='answer-id-1680458' class='answer   answerof-434290 ' value='1680458'   \/><label for='answer-id-1680458' id='answer-label-1680458' class=' answer'><span>STL_PLAN_INFO<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-32' style=';'><div id='questionWrap-32'  class='   watupro-question-id-434291'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>32. <\/span>A financial company wants to implement a data mesh. The data mesh must support centralized data governance, data analysis, and data access control. The company has decided to use AWS Glue for data catalogs and extract, transform, and load (ETL) operations. <br \/>\r<br>Which combination of AWS services will implement a data mesh? (Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_32' value='434291' \/><input type='hidden' id='answerType434291' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434291[]' id='answer-id-1680459' class='answer   answerof-434291 ' value='1680459'   \/><label for='answer-id-1680459' id='answer-label-1680459' class=' answer'><span>Use Amazon Aurora for data storage. Use an Amazon Redshift provisioned cluster for data analysis.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434291[]' id='answer-id-1680460' class='answer   answerof-434291 ' value='1680460'   \/><label for='answer-id-1680460' id='answer-label-1680460' class=' answer'><span>Use Amazon S3 for data storage. 
Use Amazon Athena for data analysis.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434291[]' id='answer-id-1680461' class='answer   answerof-434291 ' value='1680461'   \/><label for='answer-id-1680461' id='answer-label-1680461' class=' answer'><span>Use AWS Glue DataBrew for centralized data governance and access control.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434291[]' id='answer-id-1680462' class='answer   answerof-434291 ' value='1680462'   \/><label for='answer-id-1680462' id='answer-label-1680462' class=' answer'><span>Use Amazon RDS for data storage. Use Amazon EMR for data analysis.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434291[]' id='answer-id-1680463' class='answer   answerof-434291 ' value='1680463'   \/><label for='answer-id-1680463' id='answer-label-1680463' class=' answer'><span>Use AWS Lake Formation for centralized data governance and access control.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-33' style=';'><div id='questionWrap-33'  class='   watupro-question-id-434292'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>33. <\/span>A company needs to set up a data catalog and metadata management for data sources that run in the AWS Cloud. The company will use the data catalog to maintain the metadata of all the objects that are in a set of data stores. The data stores include structured sources such as Amazon RDS and Amazon Redshift. The data stores also include semistructured sources such as JSON files and .xml files that are stored in Amazon S3.<br \/>\r\n<br \/>\r\nThe company needs a solution that will update the data catalog on a regular basis. 
The solution also must detect changes to the source metadata.<br \/>\r\n<br \/>\r\nWhich solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_33' value='434292' \/><input type='hidden' id='answerType434292' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434292[]' id='answer-id-1680464' class='answer   answerof-434292 ' value='1680464'   \/><label for='answer-id-1680464' id='answer-label-1680464' class=' answer'><span>Use Amazon Aurora as the data catalog. Create AWS Lambda functions that will connect to the data catalog. Configure the Lambda functions to gather the metadata information from multiple sources and to update the Aurora data catalog. Schedule the Lambda functions to run periodically.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434292[]' id='answer-id-1682612' class='answer   answerof-434292 ' value='1682612'   \/><label for='answer-id-1682612' id='answer-label-1682612' class=' answer'><span>Use the AWS Glue Data Catalog as the central metadata repository. Use AWS Glue crawlers to connect to multiple data stores and to update the Data Catalog with metadata changes. Schedule the crawlers to run periodically to update the metadata catalog.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434292[]' id='answer-id-1682613' class='answer   answerof-434292 ' value='1682613'   \/><label for='answer-id-1682613' id='answer-label-1682613' class=' answer'><span>Use Amazon DynamoDB as the data catalog. Create AWS Lambda functions that will connect to the data catalog. Configure the Lambda functions to gather the metadata information from multiple sources and to update the DynamoDB data catalog. 
Schedule the Lambda functions to run periodically.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434292[]' id='answer-id-1682614' class='answer   answerof-434292 ' value='1682614'   \/><label for='answer-id-1682614' id='answer-label-1682614' class=' answer'><span>Use the AWS Glue Data Catalog as the central metadata repository. Extract the schema for Amazon RDS and Amazon Redshift sources, and build the Data Catalog. Use AWS Glue crawlers for data that is in Amazon S3 to infer the schema and to automatically update the Data Catalog.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-34' style=';'><div id='questionWrap-34'  class='   watupro-question-id-434293'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>34. <\/span>A company uses Amazon Redshift for its data warehouse. The company must automate refresh schedules for Amazon Redshift materialized views. 
<br \/>\r<br>Which solution will meet this requirement with the LEAST effort?<\/div><input type='hidden' name='question_id[]' id='qID_34' value='434293' \/><input type='hidden' id='answerType434293' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434293[]' id='answer-id-1680465' class='answer   answerof-434293 ' value='1680465'   \/><label for='answer-id-1680465' id='answer-label-1680465' class=' answer'><span>Use Apache Airflow to refresh the materialized views.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434293[]' id='answer-id-1680466' class='answer   answerof-434293 ' value='1680466'   \/><label for='answer-id-1680466' id='answer-label-1680466' class=' answer'><span>Use an AWS Lambda user-defined function (UDF) within Amazon Redshift to refresh the materialized views.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434293[]' id='answer-id-1680467' class='answer   answerof-434293 ' value='1680467'   \/><label for='answer-id-1680467' id='answer-label-1680467' class=' answer'><span>Use the query editor v2 in Amazon Redshift to refresh the materialized views.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434293[]' id='answer-id-1680468' class='answer   answerof-434293 ' value='1680468'   \/><label for='answer-id-1680468' id='answer-label-1680468' class=' answer'><span>Use an AWS Glue workflow to refresh the materialized views.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-35' style=';'><div id='questionWrap-35'  class='   watupro-question-id-434294'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>35. 
<\/span>A data engineer needs to maintain a central metadata repository that users access through Amazon EMR and Amazon Athena queries. The repository needs to provide the schema and properties of many tables. Some of the metadata is stored in Apache Hive. The data engineer needs to import the metadata from Hive into the central metadata repository. <br \/>\r<br>Which solution will meet these requirements with the LEAST development effort?<\/div><input type='hidden' name='question_id[]' id='qID_35' value='434294' \/><input type='hidden' id='answerType434294' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434294[]' id='answer-id-1680469' class='answer   answerof-434294 ' value='1680469'   \/><label for='answer-id-1680469' id='answer-label-1680469' class=' answer'><span>Use Amazon EMR and Apache Ranger.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434294[]' id='answer-id-1680470' class='answer   answerof-434294 ' value='1680470'   \/><label for='answer-id-1680470' id='answer-label-1680470' class=' answer'><span>Use a Hive metastore on an EMR cluster.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434294[]' id='answer-id-1680471' class='answer   answerof-434294 ' value='1680471'   \/><label for='answer-id-1680471' id='answer-label-1680471' class=' answer'><span>Use the AWS Glue Data Catalog.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434294[]' id='answer-id-1680472' class='answer   answerof-434294 ' value='1680472'   \/><label for='answer-id-1680472' id='answer-label-1680472' class=' answer'><span>Use a metastore on an Amazon RDS for MySQL DB instance.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end 
questionWrap--><\/div><\/div><div class='watu-question ' id='question-36' style=';'><div id='questionWrap-36'  class='   watupro-question-id-434295'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>36. <\/span>A company is planning to use a provisioned Amazon EMR cluster that runs Apache Spark jobs to perform big data analysis. The company requires high reliability. A big data team must follow best practices for running cost-optimized and long-running workloads on Amazon EMR. The team must find a solution that will maintain the company's current level of performance. <br \/>\r<br>Which combination of resources will meet these requirements MOST cost-effectively? (Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_36' value='434295' \/><input type='hidden' id='answerType434295' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434295[]' id='answer-id-1680473' class='answer   answerof-434295 ' value='1680473'   \/><label for='answer-id-1680473' id='answer-label-1680473' class=' answer'><span>Use Hadoop Distributed File System (HDFS) as a persistent data store.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434295[]' id='answer-id-1680474' class='answer   answerof-434295 ' value='1680474'   \/><label for='answer-id-1680474' id='answer-label-1680474' class=' answer'><span>Use Amazon S3 as a persistent data store.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434295[]' id='answer-id-1680475' class='answer   answerof-434295 ' value='1680475'   \/><label for='answer-id-1680475' id='answer-label-1680475' class=' answer'><span>Use x86-based instances for core nodes and task nodes.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input 
type='checkbox' name='answer-434295[]' id='answer-id-1680476' class='answer   answerof-434295 ' value='1680476'   \/><label for='answer-id-1680476' id='answer-label-1680476' class=' answer'><span>Use Graviton instances for core nodes and task nodes.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-434295[]' id='answer-id-1680477' class='answer   answerof-434295 ' value='1680477'   \/><label for='answer-id-1680477' id='answer-label-1680477' class=' answer'><span>Use Spot Instances for all primary nodes.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-37' style=';'><div id='questionWrap-37'  class='   watupro-question-id-434296'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>37. <\/span>A company is developing an application that runs on Amazon EC2 instances. Currently, the data that the application generates is temporary. However, the company needs to persist the data, even if the EC2 instances are terminated. <br \/>\r<br>A data engineer must launch new EC2 instances from an Amazon Machine Image (AMI) and configure the instances to preserve the data. <br \/>\r<br>Which solution will meet this requirement?<\/div><input type='hidden' name='question_id[]' id='qID_37' value='434296' \/><input type='hidden' id='answerType434296' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434296[]' id='answer-id-1680478' class='answer   answerof-434296 ' value='1680478'   \/><label for='answer-id-1680478' id='answer-label-1680478' class=' answer'><span>Launch new EC2 instances by using an AMI that is backed by an EC2 instance store volume that contains the application data. 
Apply the default settings to the EC2 instances.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434296[]' id='answer-id-1680479' class='answer   answerof-434296 ' value='1680479'   \/><label for='answer-id-1680479' id='answer-label-1680479' class=' answer'><span>Launch new EC2 instances by using an AMI that is backed by a root Amazon Elastic Block Store (Amazon EBS) volume that contains the application data. Apply the default settings to the EC2 instances.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434296[]' id='answer-id-1680480' class='answer   answerof-434296 ' value='1680480'   \/><label for='answer-id-1680480' id='answer-label-1680480' class=' answer'><span>Launch new EC2 instances by using an AMI that is backed by an EC2 instance store volume. Attach an Amazon Elastic Block Store (Amazon EBS) volume to contain the application data. Apply the default settings to the EC2 instances.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434296[]' id='answer-id-1680481' class='answer   answerof-434296 ' value='1680481'   \/><label for='answer-id-1680481' id='answer-label-1680481' class=' answer'><span>Launch new EC2 instances by using an AMI that is backed by an Amazon Elastic Block Store (Amazon EBS) volume. Attach an additional EC2 instance store volume to contain the application data. Apply the default settings to the EC2 instances.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-38' style=';'><div id='questionWrap-38'  class='   watupro-question-id-434297'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>38. <\/span>A company uses an Amazon Redshift provisioned cluster as its database. The Redshift cluster has five reserved ra3.4xlarge nodes and uses key distribution. 
<br \/>\r<br>A data engineer notices that one of the nodes frequently has a CPU load over 90%. SQL queries that run on the node are queued. The other four nodes usually have a CPU load under 15% during daily operations. <br \/>\r<br>The data engineer wants to maintain the current number of compute nodes. The data engineer also wants to balance the load more evenly across all five compute nodes. <br \/>\r<br>Which solution will meet these requirements?<\/div><input type='hidden' name='question_id[]' id='qID_38' value='434297' \/><input type='hidden' id='answerType434297' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434297[]' id='answer-id-1680482' class='answer   answerof-434297 ' value='1680482'   \/><label for='answer-id-1680482' id='answer-label-1680482' class=' answer'><span>Change the sort key to be the data column that is most often used in a WHERE clause of the SQL SELECT statement.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434297[]' id='answer-id-1680483' class='answer   answerof-434297 ' value='1680483'   \/><label for='answer-id-1680483' id='answer-label-1680483' class=' answer'><span>Change the distribution key to the table column that has the largest dimension.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434297[]' id='answer-id-1680484' class='answer   answerof-434297 ' value='1680484'   \/><label for='answer-id-1680484' id='answer-label-1680484' class=' answer'><span>Upgrade the reserved node from ra3.4xlarge to ra3.16xlarge.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434297[]' id='answer-id-1680485' class='answer   answerof-434297 ' value='1680485'   \/><label for='answer-id-1680485' id='answer-label-1680485' class=' 
answer'><span>Change the primary key to be the data column that is most often used in a WHERE clause of the SQL SELECT statement.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-39' style=';'><div id='questionWrap-39'  class='   watupro-question-id-434298'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>39. <\/span>A data engineer is configuring an AWS Glue job to read data from an Amazon S3 bucket. The data engineer has set up the necessary AWS Glue connection details and an associated IAM role. However, when the data engineer attempts to run the AWS Glue job, the data engineer receives an error message that indicates that there are problems with the Amazon S3 VPC gateway endpoint. The data engineer must resolve the error and connect the AWS Glue job to the S3 bucket. <br \/>\r<br>Which solution will meet this requirement?<\/div><input type='hidden' name='question_id[]' id='qID_39' value='434298' \/><input type='hidden' id='answerType434298' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434298[]' id='answer-id-1680486' class='answer   answerof-434298 ' value='1680486'   \/><label for='answer-id-1680486' id='answer-label-1680486' class=' answer'><span>Update the AWS Glue security group to allow inbound traffic from the Amazon S3 VPC gateway endpoint.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434298[]' id='answer-id-1680487' class='answer   answerof-434298 ' value='1680487'   \/><label for='answer-id-1680487' id='answer-label-1680487' class=' answer'><span>Configure an S3 bucket policy to explicitly grant the AWS Glue job permissions to access the S3 bucket.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' 
name='answer-434298[]' id='answer-id-1680488' class='answer   answerof-434298 ' value='1680488'   \/><label for='answer-id-1680488' id='answer-label-1680488' class=' answer'><span>Review the AWS Glue job code to ensure that the AWS Glue connection details include a fully qualified domain name.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434298[]' id='answer-id-1680489' class='answer   answerof-434298 ' value='1680489'   \/><label for='answer-id-1680489' id='answer-label-1680489' class=' answer'><span>Verify that the VPC's route table includes inbound and outbound routes for the Amazon S3 VPC gateway endpoint.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-40' style=';'><div id='questionWrap-40'  class='   watupro-question-id-434299'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>40. <\/span>A media company uses software as a service (SaaS) applications to gather data by using third-party tools. The company needs to store the data in an Amazon S3 bucket. The company will use Amazon Redshift to perform analytics based on the data. 
<br \/>\r<br>Which AWS service or feature will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_40' value='434299' \/><input type='hidden' id='answerType434299' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434299[]' id='answer-id-1680490' class='answer   answerof-434299 ' value='1680490'   \/><label for='answer-id-1680490' id='answer-label-1680490' class=' answer'><span>Amazon Managed Streaming for Apache Kafka (Amazon MSK)<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434299[]' id='answer-id-1680491' class='answer   answerof-434299 ' value='1680491'   \/><label for='answer-id-1680491' id='answer-label-1680491' class=' answer'><span>Amazon AppFlow<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434299[]' id='answer-id-1680492' class='answer   answerof-434299 ' value='1680492'   \/><label for='answer-id-1680492' id='answer-label-1680492' class=' answer'><span>AWS Glue Data Catalog<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-434299[]' id='answer-id-1680493' class='answer   answerof-434299 ' value='1680493'   \/><label for='answer-id-1680493' id='answer-label-1680493' class=' answer'><span>Amazon Kinesis<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div style='display:none' id='question-41'>\n\t<div class='question-content'>\n\t\t<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.dumpsbase.com\/freedumps\/wp-content\/plugins\/watupro\/img\/loading.gif\" width=\"16\" height=\"16\" alt=\"Loading...\" title=\"Loading...\" \/>&nbsp;Loading...\t<\/div>\n<\/div>\n\n<br \/>\n\t\n\t\t\t<div class=\"watupro_buttons flex \" 
id=\"watuPROButtons11028\" >\n\t\t  <div id=\"prev-question\" style=\"display:none;\"><input type=\"button\" value=\"&lt; Previous\" onclick=\"WatuPRO.nextQuestion(event, 'previous');\"\/><\/div>\t\t  \t\t  \t\t   \n\t\t   \t  \t\t<div><input type=\"button\" name=\"action\" class=\"watupro-submit-button\" onclick=\"WatuPRO.submitResult(event)\" id=\"action-button\" value=\"View Results\"  \/>\n\t\t<\/div>\n\t\t<\/div>\n\t\t\n\t<input type=\"hidden\" name=\"quiz_id\" value=\"11028\" id=\"watuPROExamID\"\/>\n\t<input type=\"hidden\" name=\"start_time\" id=\"startTime\" value=\"2026-05-05 20:06:33\" \/>\n\t<input type=\"hidden\" name=\"start_timestamp\" id=\"startTimeStamp\" value=\"1778011593\" \/>\n\t<input type=\"hidden\" name=\"question_ids\" value=\"\" \/>\n\t<input type=\"hidden\" name=\"watupro_questions\" value=\"434260:1680323,1680324,1680325,1680326 | 434261:1680327,1680328,1680329,1680330,1680331,1680332 | 434262:1680333,1680334,1680335,1680336 | 434263:1680337,1680338,1680339,1680340 | 434264:1680341,1680342,1680343,1680344,1680345,1680346,1680347,1680348,1680349,1680350 | 434265:1680351,1680352,1680353,1680354 | 434266:1680355,1680356,1680357,1680358 | 434267:1680359,1680360,1680361,1680362 | 434268:1680363,1680364,1680365,1680366,1680367 | 434269:1680368,1680369,1680370,1680371 | 434270:1680372,1680373,1680374,1680375 | 434271:1680376,1680377,1680378,1680379 | 434272:1680380,1680381,1680382,1680383 | 434273:1680384,1680385,1680386,1680387 | 434274:1680388,1680389,1680390,1680391 | 434275:1680392,1680393,1680394,1680395 | 434276:1680396,1682615,1682616,1682617 | 434277:1680397,1680398,1680399,1680400 | 434278:1680401,1680402,1680403,1680404 | 434279:1680405,1680406,1680407,1680408 | 434280:1680409,1680410,1680411,1680412 | 434281:1680413,1680414,1680415,1680416 | 434282:1680417,1680418,1680419,1680420,1680421 | 434283:1680422,1682618,1682619,1682620,1682621 | 434284:1680423,1680424,1680425,1680426 | 434285:1680427,1680428,1680429,1680430,1680431 | 
434286:1680432,1680433,1680434,1680435 | 434287:1680436,1680437,1680438,1680439 | 434288:1680440,1680441,1680442,1680443,1680444,1680445,1680446,1680447,1680448,1680449,1680450 | 434289:1680451,1680452,1680453,1680454 | 434290:1680455,1680456,1680457,1680458 | 434291:1680459,1680460,1680461,1680462,1680463 | 434292:1680464,1682612,1682613,1682614 | 434293:1680465,1680466,1680467,1680468 | 434294:1680469,1680470,1680471,1680472 | 434295:1680473,1680474,1680475,1680476,1680477 | 434296:1680478,1680479,1680480,1680481 | 434297:1680482,1680483,1680484,1680485 | 434298:1680486,1680487,1680488,1680489 | 434299:1680490,1680491,1680492,1680493\" \/>\n\t<input type=\"hidden\" name=\"no_ajax\" value=\"0\">\t\t\t<\/form>\n\t<p>&nbsp;<\/p>\n<\/div>\n\n<script type=\"text\/javascript\">\n\/\/jQuery(document).ready(function(){\ndocument.addEventListener(\"DOMContentLoaded\", function(event) { \t\nvar question_ids = \"434260,434261,434262,434263,434264,434265,434266,434267,434268,434269,434270,434271,434272,434273,434274,434275,434276,434277,434278,434279,434280,434281,434282,434283,434284,434285,434286,434287,434288,434289,434290,434291,434292,434293,434294,434295,434296,434297,434298,434299\";\nWatuPROSettings[11028] = {};\nWatuPRO.qArr = question_ids.split(',');\nWatuPRO.exam_id = 11028;\t    \nWatuPRO.post_id = 112981;\nWatuPRO.store_progress = 0;\nWatuPRO.curCatPage = 1;\nWatuPRO.requiredIDs=\"0\".split(\",\");\nWatuPRO.hAppID = \"0.50077400 1778011593\";\nvar url = \"https:\/\/www.dumpsbase.com\/freedumps\/wp-content\/plugins\/watupro\/show_exam.php\";\nWatuPRO.examMode = 1;\nWatuPRO.siteURL=\"https:\/\/www.dumpsbase.com\/freedumps\/wp-admin\/admin-ajax.php\";\nWatuPRO.emailIsNotRequired = 0;\nWatuPROIntel.init(11028);\nWatuPRO.inCategoryPages=1;});    \t \n<\/script>\n<p>&nbsp;<\/p>\n<h3><a 
href=\"https:\/\/www.dumpsbase.com\/freedumps\/aws-dea-c01-dumps-v11-02-help-you-pass-the-aws-certified-data-engineer-associate-exam-dea-c01-free-dumps-part-2-q41-q65-are-available.html\"><span style=\"background-color: #99ccff;\"><em>AWS DEA-C01 free dumps (Part 2, Q41-Q65) of V11.02<\/em><\/span><\/a> are here to help you check more about the latest dumps.<\/h3>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Passing the AWS Certified Data Engineer &#8211; Associate (DEA-C01) certification exam requires a proper study guide. So you are highly recommended to come to DumpsBase and download our updated AWS DEA-C01 dumps (V11.02). This is the most current version with 190 practice exam questions and answers, combining reliable materials with real exam questions and answers. [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[175,18249],"tags":[20211,20212],"class_list":["post-112981","post","type-post","status-publish","format-standard","hentry","category-amazon","category-data-engineer-associate","tag-amazon-dea-c01","tag-aws-dea-c01"],"_links":{"self":[{"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/posts\/112981","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/comments?post=112981"}],"version-history":[{"count":3,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/posts\/112981\/revisions"}],"predecessor-version":[{"id":115896,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/posts\/112981\/revisions
\/115896"}],"wp:attachment":[{"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/media?parent=112981"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/categories?post=112981"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/tags?post=112981"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}