{"id":108809,"date":"2025-08-18T02:38:52","date_gmt":"2025-08-18T02:38:52","guid":{"rendered":"https:\/\/www.dumpsbase.com\/freedumps\/?p=108809"},"modified":"2025-09-20T03:11:09","modified_gmt":"2025-09-20T03:11:09","slug":"aws-certified-data-engineer-associate-dea-c01-dumps-v10-02-are-available-online-come-here-and-check-the-amazon-dea-c01-free-dumps-part-1-q1-q40","status":"publish","type":"post","link":"https:\/\/www.dumpsbase.com\/freedumps\/aws-certified-data-engineer-associate-dea-c01-dumps-v10-02-are-available-online-come-here-and-check-the-amazon-dea-c01-free-dumps-part-1-q1-q40.html","title":{"rendered":"AWS Certified Data Engineer &#8211; Associate DEA-C01 Dumps (V10.02) Are Available Online: Come Here and Check the Amazon DEA-C01 Free Dumps (Part 1, Q1-Q40)"},"content":{"rendered":"<p>At DumpsBase, we believe that the best way to achieve success in the AWS Certified Data Engineer &#8211; Associate (DEA-C01) exam is to consistently practice with real exam questions and answers. The Amazon DEA-C01 dumps (V10.02) are available online, providing verified and accurate Amazon DEA-C01 exam questions to help you reduce exam-related stress and significantly improve overall performance. Our team has crafted 187 exam questions and answers, which are designed to mirror the real exam format, allowing you to test your knowledge under exam-like conditions. Each exam question comes with detailed explanations to help you understand the reasoning behind the answers. This not only strengthens your concepts but also boosts your confidence. 
With DumpsBase, you are not just preparing\u2014you are preparing to pass the AWS Certified Data Engineer &#8211; Associate (DEA-C01) exam.<\/p>\n<h2>Share <span style=\"background-color: #00ffff;\"><em>Amazon DEA-C01 free dumps (Part 1, Q1-Q40)<\/em><\/span> online to help you check V10.02:<\/h2>\n<div  id=\"watupro_quiz\" class=\"quiz-area single-page-quiz\">\n<p id=\"submittingExam10578\" style=\"display:none;text-align:center;\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.dumpsbase.com\/freedumps\/wp-content\/plugins\/watupro\/img\/loading.gif\" width=\"16\" height=\"16\"><\/p>\n\n<div class=\"watupro-exam-description\" id=\"description-quiz-10578\"><\/div>\n\n<form action=\"\" method=\"post\" class=\"quiz-form\" id=\"quiz-10578\"  enctype=\"multipart\/form-data\" >\n<div class='watu-question ' id='question-1' style=';'><div id='questionWrap-1'  class='   watupro-question-id-418511'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>1. <\/span>A manufacturing company collects sensor data from its factory floor to monitor and enhance operational efficiency. The company uses Amazon Kinesis Data Streams to publish the data that the sensors collect to a data stream. Then Amazon Kinesis Data Firehose writes the data to an Amazon S3 bucket.<br \/>\r\n<br \/>\r\nThe company needs to display a real-time view of operational efficiency on a large screen in the manufacturing facility.<br \/>\r\n<br \/>\r\nWhich solution will meet these requirements with the LOWEST latency?<\/div><input type='hidden' name='question_id[]' id='qID_1' value='418511' \/><input type='hidden' id='answerType418511' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418511[]' id='answer-id-1621390' class='answer   answerof-418511 ' value='1621390'   \/><label for='answer-id-1621390' id='answer-label-1621390' class=' answer'><span>Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) to process the sensor data. Use a connector for Apache Flink to write data to an Amazon Timestream database. 
Use the Timestream database as a source to create a Grafana dashboard.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418511[]' id='answer-id-1637766' class='answer   answerof-418511 ' value='1637766'   \/><label for='answer-id-1637766' id='answer-label-1637766' class=' answer'><span>Configure the S3 bucket to send a notification to an AWS Lambda function when any new object is created. Use the Lambda function to publish the data to Amazon Aurora. Use Aurora as a source to create an Amazon QuickSight dashboard.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418511[]' id='answer-id-1637767' class='answer   answerof-418511 ' value='1637767'   \/><label for='answer-id-1637767' id='answer-label-1637767' class=' answer'><span>Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) to process the sensor data. Create a new Data Firehose delivery stream to publish data directly to an Amazon Timestream database. Use the Timestream database as a source to create an Amazon QuickSight dashboard.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418511[]' id='answer-id-1637768' class='answer   answerof-418511 ' value='1637768'   \/><label for='answer-id-1637768' id='answer-label-1637768' class=' answer'><span>Use AWS Glue bookmarks to read sensor data from the S3 bucket in real time. Publish the data to an Amazon Timestream database. Use the Timestream database as a source to create a Grafana dashboard.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-2' style=';'><div id='questionWrap-2'  class='   watupro-question-id-418512'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>2. <\/span>A company has five offices in different AWS Regions. 
Each office has its own human resources (HR) department that uses a unique IAM role. The company stores employee records in a data lake that is based on Amazon S3 storage.<br \/>\r\n<br \/>\r\nA data engineering team needs to limit access to the records. Each HR department should be able to access records for only employees who are within the HR department's Region.<br \/>\r\n<br \/>\r\nWhich combination of steps should the data engineering team take to meet this requirement with the LEAST operational overhead? (Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_2' value='418512' \/><input type='hidden' id='answerType418512' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418512[]' id='answer-id-1621391' class='answer   answerof-418512 ' value='1621391'   \/><label for='answer-id-1621391' id='answer-label-1621391' class=' answer'><span>Use data filters for each Region to register the S3 paths as data locations.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418512[]' id='answer-id-1637769' class='answer   answerof-418512 ' value='1637769'   \/><label for='answer-id-1637769' id='answer-label-1637769' class=' answer'><span>Register the S3 path as an AWS Lake Formation location.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418512[]' id='answer-id-1637770' class='answer   answerof-418512 ' value='1637770'   \/><label for='answer-id-1637770' id='answer-label-1637770' class=' answer'><span>Modify the IAM roles of the HR departments to add a data filter for each department's Region.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418512[]' id='answer-id-1637771' class='answer   answerof-418512 ' value='1637771'   \/><label 
for='answer-id-1637771' id='answer-label-1637771' class=' answer'><span>Enable fine-grained access control in AWS Lake Formation. Add a data filter for each Region.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418512[]' id='answer-id-1637772' class='answer   answerof-418512 ' value='1637772'   \/><label for='answer-id-1637772' id='answer-label-1637772' class=' answer'><span>Create a separate S3 bucket for each Region. Configure an IAM policy to allow S3 access. Restrict access based on Region.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-3' style=';'><div id='questionWrap-3'  class='   watupro-question-id-418513'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>3. <\/span>A data engineer is configuring an AWS Glue job to read data from an Amazon S3 bucket. The data engineer has set up the necessary AWS Glue connection details and an associated IAM role. However, when the data engineer attempts to run the AWS Glue job, the data engineer receives an error message that indicates that there are problems with the Amazon S3 VPC gateway endpoint. The data engineer must resolve the error and connect the AWS Glue job to the S3 bucket. 
<br \/>\r<br>Which solution will meet this requirement?<\/div><input type='hidden' name='question_id[]' id='qID_3' value='418513' \/><input type='hidden' id='answerType418513' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418513[]' id='answer-id-1621392' class='answer   answerof-418513 ' value='1621392'   \/><label for='answer-id-1621392' id='answer-label-1621392' class=' answer'><span>Update the AWS Glue security group to allow inbound traffic from the Amazon S3 VPC gateway endpoint.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418513[]' id='answer-id-1621393' class='answer   answerof-418513 ' value='1621393'   \/><label for='answer-id-1621393' id='answer-label-1621393' class=' answer'><span>Configure an S3 bucket policy to explicitly grant the AWS Glue job permissions to access the S3 bucket.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418513[]' id='answer-id-1621394' class='answer   answerof-418513 ' value='1621394'   \/><label for='answer-id-1621394' id='answer-label-1621394' class=' answer'><span>Review the AWS Glue job code to ensure that the AWS Glue connection details include a fully qualified domain name.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418513[]' id='answer-id-1621395' class='answer   answerof-418513 ' value='1621395'   \/><label for='answer-id-1621395' id='answer-label-1621395' class=' answer'><span>Verify that the VPC's route table includes inbound and outbound routes for the Amazon S3 VPC gateway endpoint.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-4' style=';'><div id='questionWrap-4'  class='   
watupro-question-id-418514'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>4. <\/span>A data engineer maintains custom Python scripts that perform a data formatting process that many AWS Lambda functions use. When the data engineer needs to modify the Python scripts, the data engineer must manually update all the Lambda functions. <br \/>\r<br>The data engineer requires a less manual way to update the Lambda functions. <br \/>\r<br>Which solution will meet this requirement?<\/div><input type='hidden' name='question_id[]' id='qID_4' value='418514' \/><input type='hidden' id='answerType418514' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418514[]' id='answer-id-1621396' class='answer   answerof-418514 ' value='1621396'   \/><label for='answer-id-1621396' id='answer-label-1621396' class=' answer'><span>Store a pointer to the custom Python scripts in the execution context object in a shared Amazon \r\nS3 bucket.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418514[]' id='answer-id-1621397' class='answer   answerof-418514 ' value='1621397'   \/><label for='answer-id-1621397' id='answer-label-1621397' class=' answer'><span>Package the custom Python scripts into Lambda layers. 
Apply the Lambda layers to the Lambda functions.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418514[]' id='answer-id-1621398' class='answer   answerof-418514 ' value='1621398'   \/><label for='answer-id-1621398' id='answer-label-1621398' class=' answer'><span>Store a pointer to the custom Python scripts in environment variables in a shared Amazon S3 bucket.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418514[]' id='answer-id-1621399' class='answer   answerof-418514 ' value='1621399'   \/><label for='answer-id-1621399' id='answer-label-1621399' class=' answer'><span>Assign the same alias to each Lambda function. Call each Lambda function by specifying the function's alias.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-5' style=';'><div id='questionWrap-5'  class='   watupro-question-id-418515'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>5. <\/span>A security company stores IoT data that is in JSON format in an Amazon S3 bucket. The data structure can change when the company upgrades the IoT devices. The company wants to create a data catalog that includes the IoT data. The company's analytics department will use the data catalog to index the data. <br \/>\r<br>Which solution will meet these requirements MOST cost-effectively?<\/div><input type='hidden' name='question_id[]' id='qID_5' value='418515' \/><input type='hidden' id='answerType418515' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418515[]' id='answer-id-1621400' class='answer   answerof-418515 ' value='1621400'   \/><label for='answer-id-1621400' id='answer-label-1621400' class=' answer'><span>Create an AWS Glue Data Catalog. 
Configure an AWS Glue Schema Registry. Create a new AWS Glue workload to orchestrate the ingestion of the data that the analytics department will use into Amazon Redshift Serverless.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418515[]' id='answer-id-1621401' class='answer   answerof-418515 ' value='1621401'   \/><label for='answer-id-1621401' id='answer-label-1621401' class=' answer'><span>Create an Amazon Redshift provisioned cluster. Create an Amazon Redshift Spectrum database for the analytics department to explore the data that is in Amazon S3. Create Redshift stored procedures to load the data into Amazon Redshift.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418515[]' id='answer-id-1621402' class='answer   answerof-418515 ' value='1621402'   \/><label for='answer-id-1621402' id='answer-label-1621402' class=' answer'><span>Create an Amazon Athena workgroup. Explore the data that is in Amazon S3 by using Apache Spark through Athena. Provide the Athena workgroup schema and tables to the analytics department.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418515[]' id='answer-id-1621403' class='answer   answerof-418515 ' value='1621403'   \/><label for='answer-id-1621403' id='answer-label-1621403' class=' answer'><span>Create an AWS Glue Data Catalog. Configure an AWS Glue Schema Registry. 
Create AWS Lambda user defined functions (UDFs) by using the Amazon Redshift Data API.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418515[]' id='answer-id-1621404' class='answer   answerof-418515 ' value='1621404'   \/><label for='answer-id-1621404' id='answer-label-1621404' class=' answer'><span>Create an AWS Step Functions job to orchestrate the ingestion of the data that the analytics department will use into Amazon Redshift Serverless.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-6' style=';'><div id='questionWrap-6'  class='   watupro-question-id-418516'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>6. <\/span>A company maintains an Amazon Redshift provisioned cluster that the company uses for extract, transform, and load (ETL) operations to support critical analysis tasks. A sales team within the company maintains a Redshift cluster that the sales team uses for business intelligence (BI) tasks. The sales team recently requested access to the data that is in the ETL Redshift cluster so the team can perform weekly summary analysis tasks. The sales team needs to join data from the ETL cluster with data that is in the sales team's BI cluster. <br \/>\r<br>The company needs a solution that will share the ETL cluster data with the sales team without interrupting the critical analysis tasks. The solution must minimize usage of the computing resources of the ETL cluster. 
<br \/>\r<br>Which solution will meet these requirements?<\/div><input type='hidden' name='question_id[]' id='qID_6' value='418516' \/><input type='hidden' id='answerType418516' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418516[]' id='answer-id-1621405' class='answer   answerof-418516 ' value='1621405'   \/><label for='answer-id-1621405' id='answer-label-1621405' class=' answer'><span>Set up the sales team BI cluster as a consumer of the ETL cluster by using Redshift data sharing.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418516[]' id='answer-id-1621406' class='answer   answerof-418516 ' value='1621406'   \/><label for='answer-id-1621406' id='answer-label-1621406' class=' answer'><span>Create materialized views based on the sales team's requirements. Grant the sales team direct access to the ETL cluster.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418516[]' id='answer-id-1621407' class='answer   answerof-418516 ' value='1621407'   \/><label for='answer-id-1621407' id='answer-label-1621407' class=' answer'><span>Create database views based on the sales team's requirements. Grant the sales team direct access to the ETL cluster.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418516[]' id='answer-id-1621408' class='answer   answerof-418516 ' value='1621408'   \/><label for='answer-id-1621408' id='answer-label-1621408' class=' answer'><span>Unload a copy of the data from the ETL cluster to an Amazon S3 bucket every week. 
Create an Amazon Redshift Spectrum table based on the content of the ETL cluster.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-7' style=';'><div id='questionWrap-7'  class='   watupro-question-id-418517'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>7. <\/span>A company receives .csv files that contain physical address data. The data is in columns that have the following names: Door_No, Street_Name, City, and Zip_Code. <br \/>\r<br>The company wants to create a single column to store these values in the following format: <br \/>\r<br><br><img decoding=\"async\" width=405 height=145 id=\"\u56fe\u7247 7\" src=\"https:\/\/www.dumpsbase.com\/freedumps\/wp-content\/uploads\/2025\/08\/image001.jpg\"><br><br \/>\r<br>Which solution will meet this requirement with the LEAST coding effort?<\/div><input type='hidden' name='question_id[]' id='qID_7' value='418517' \/><input type='hidden' id='answerType418517' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418517[]' id='answer-id-1621409' class='answer   answerof-418517 ' value='1621409'   \/><label for='answer-id-1621409' id='answer-label-1621409' class=' answer'><span>Use AWS Glue DataBrew to read the files. Use the NEST TO ARRAY transformation to create the new column.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418517[]' id='answer-id-1621410' class='answer   answerof-418517 ' value='1621410'   \/><label for='answer-id-1621410' id='answer-label-1621410' class=' answer'><span>Use AWS Glue DataBrew to read the files. 
Use the NEST TO MAP transformation to create the new column.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418517[]' id='answer-id-1621411' class='answer   answerof-418517 ' value='1621411'   \/><label for='answer-id-1621411' id='answer-label-1621411' class=' answer'><span>Use AWS Glue DataBrew to read the files. Use the PIVOT transformation to create the new column.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418517[]' id='answer-id-1621412' class='answer   answerof-418517 ' value='1621412'   \/><label for='answer-id-1621412' id='answer-label-1621412' class=' answer'><span>Write a Lambda function in Python to read the files. Use the Python data dictionary type to create the new column.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-8' style=';'><div id='questionWrap-8'  class='   watupro-question-id-418518'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>8. <\/span>A data engineer needs to create an AWS Lambda function that converts the format of data from .csv to Apache Parquet. The Lambda function must run only if a user uploads a .csv file to an Amazon S3 bucket. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_8' value='418518' \/><input type='hidden' id='answerType418518' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418518[]' id='answer-id-1621413' class='answer   answerof-418518 ' value='1621413'   \/><label for='answer-id-1621413' id='answer-label-1621413' class=' answer'><span>Create an S3 event notification that has an event type of s3:ObjectCreated:*. 
Use a filter rule to generate notifications only when the suffix includes .csv. Set the Amazon Resource Name (ARN) of the Lambda function as the destination for the event notification.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418518[]' id='answer-id-1621414' class='answer   answerof-418518 ' value='1621414'   \/><label for='answer-id-1621414' id='answer-label-1621414' class=' answer'><span>Create an S3 event notification that has an event type of s3:ObjectTagging:* for objects that have a tag set to .csv. Set the Amazon Resource Name (ARN) of the Lambda function as the destination for the event notification.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418518[]' id='answer-id-1621415' class='answer   answerof-418518 ' value='1621415'   \/><label for='answer-id-1621415' id='answer-label-1621415' class=' answer'><span>Create an S3 event notification that has an event type of s3:*. Use a filter rule to generate notifications only when the suffix includes .csv. Set the Amazon Resource Name (ARN) of the Lambda function as the destination for the event notification.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418518[]' id='answer-id-1621416' class='answer   answerof-418518 ' value='1621416'   \/><label for='answer-id-1621416' id='answer-label-1621416' class=' answer'><span>Create an S3 event notification that has an event type of s3:ObjectCreated:*. Use a filter rule to generate notifications only when the suffix includes .csv. Set an Amazon Simple Notification Service (Amazon SNS) topic as the destination for the event notification. 
Subscribe the Lambda function to the SNS topic.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-9' style=';'><div id='questionWrap-9'  class='   watupro-question-id-418519'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>9. <\/span>A company is developing an application that runs on Amazon EC2 instances. Currently, the data that the application generates is temporary. However, the company needs to persist the data, even if the EC2 instances are terminated. <br \/>\r<br>A data engineer must launch new EC2 instances from an Amazon Machine Image (AMI) and configure the instances to preserve the data. <br \/>\r<br>Which solution will meet this requirement?<\/div><input type='hidden' name='question_id[]' id='qID_9' value='418519' \/><input type='hidden' id='answerType418519' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418519[]' id='answer-id-1621417' class='answer   answerof-418519 ' value='1621417'   \/><label for='answer-id-1621417' id='answer-label-1621417' class=' answer'><span>Launch new EC2 instances by using an AMI that is backed by an EC2 instance store volume that contains the application data. Apply the default settings to the EC2 instances.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418519[]' id='answer-id-1621418' class='answer   answerof-418519 ' value='1621418'   \/><label for='answer-id-1621418' id='answer-label-1621418' class=' answer'><span>Launch new EC2 instances by using an AMI that is backed by a root Amazon Elastic Block Store (Amazon EBS) volume that contains the application data. 
Apply the default settings to the EC2 instances.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418519[]' id='answer-id-1621419' class='answer   answerof-418519 ' value='1621419'   \/><label for='answer-id-1621419' id='answer-label-1621419' class=' answer'><span>Launch new EC2 instances by using an AMI that is backed by an EC2 instance store volume. Attach an Amazon Elastic Block Store (Amazon EBS) volume to contain the application data. Apply the default settings to the EC2 instances.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418519[]' id='answer-id-1621420' class='answer   answerof-418519 ' value='1621420'   \/><label for='answer-id-1621420' id='answer-label-1621420' class=' answer'><span>Launch new EC2 instances by using an AMI that is backed by an Amazon Elastic Block Store (Amazon EBS) volume. Attach an additional EC2 instance store volume to contain the application data. Apply the default settings to the EC2 instances.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-10' style=';'><div id='questionWrap-10'  class='   watupro-question-id-418520'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>10. <\/span>A company uses an Amazon QuickSight dashboard to monitor usage of one of the company's applications. The company uses AWS Glue jobs to process data for the dashboard. The company stores the data in a single Amazon S3 bucket. The company adds new data every day. <br \/>\r<br>A data engineer discovers that dashboard queries are becoming slower over time. The data engineer determines that the root cause of the slowing queries is long-running AWS Glue jobs. <br \/>\r<br>Which actions should the data engineer take to improve the performance of the AWS Glue jobs? 
(Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_10' value='418520' \/><input type='hidden' id='answerType418520' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418520[]' id='answer-id-1621421' class='answer   answerof-418520 ' value='1621421'   \/><label for='answer-id-1621421' id='answer-label-1621421' class=' answer'><span>Partition the data that is in the S3 bucket. Organize the data by year, month, and day.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418520[]' id='answer-id-1621422' class='answer   answerof-418520 ' value='1621422'   \/><label for='answer-id-1621422' id='answer-label-1621422' class=' answer'><span>Increase the AWS Glue instance size by scaling up the worker type.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418520[]' id='answer-id-1621423' class='answer   answerof-418520 ' value='1621423'   \/><label for='answer-id-1621423' id='answer-label-1621423' class=' answer'><span>Convert the AWS Glue schema to the DynamicFrame schema class.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418520[]' id='answer-id-1621424' class='answer   answerof-418520 ' value='1621424'   \/><label for='answer-id-1621424' id='answer-label-1621424' class=' answer'><span>Adjust AWS Glue job scheduling frequency so the jobs run half as many times each day.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418520[]' id='answer-id-1621425' class='answer   answerof-418520 ' value='1621425'   \/><label for='answer-id-1621425' id='answer-label-1621425' class=' answer'><span>Modify the IAM role that grants access to AWS Glue to grant access to all S3 
features.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-11' style=';'><div id='questionWrap-11'  class='   watupro-question-id-418521'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>11. <\/span>A company needs to build a data lake in AWS. The company must provide row-level data access and column-level data access to specific teams. The teams will access the data by using Amazon Athena, Amazon Redshift Spectrum, and Apache Hive from Amazon EMR. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_11' value='418521' \/><input type='hidden' id='answerType418521' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418521[]' id='answer-id-1621426' class='answer   answerof-418521 ' value='1621426'   \/><label for='answer-id-1621426' id='answer-label-1621426' class=' answer'><span>Use Amazon S3 for data lake storage. Use S3 access policies to restrict data access by rows and columns. Provide data access through Amazon S3.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418521[]' id='answer-id-1621427' class='answer   answerof-418521 ' value='1621427'   \/><label for='answer-id-1621427' id='answer-label-1621427' class=' answer'><span>Use Amazon S3 for data lake storage. Use Apache Ranger through Amazon EMR to restrict data access by rows and columns. 
Provide data access by using Apache Pig.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418521[]' id='answer-id-1621428' class='answer   answerof-418521 ' value='1621428'   \/><label for='answer-id-1621428' id='answer-label-1621428' class=' answer'><span>Use Amazon Redshift for data lake storage. Use Redshift security policies to restrict data access by rows and columns. Provide data access by using Apache Spark and Amazon Athena federated queries.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418521[]' id='answer-id-1621429' class='answer   answerof-418521 ' value='1621429'   \/><label for='answer-id-1621429' id='answer-label-1621429' class=' answer'><span>Use Amazon S3 for data lake storage. Use AWS Lake Formation to restrict data access by rows and columns. Provide data access through AWS Lake Formation.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-12' style=';'><div id='questionWrap-12'  class='   watupro-question-id-418522'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>12. <\/span>A company uses Amazon Redshift for its data warehouse. The company must automate refresh schedules for Amazon Redshift materialized views. 
<br \/>\r<br>Which solution will meet this requirement with the LEAST effort?<\/div><input type='hidden' name='question_id[]' id='qID_12' value='418522' \/><input type='hidden' id='answerType418522' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418522[]' id='answer-id-1621430' class='answer   answerof-418522 ' value='1621430'   \/><label for='answer-id-1621430' id='answer-label-1621430' class=' answer'><span>Use Apache Airflow to refresh the materialized views.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418522[]' id='answer-id-1621431' class='answer   answerof-418522 ' value='1621431'   \/><label for='answer-id-1621431' id='answer-label-1621431' class=' answer'><span>Use an AWS Lambda user-defined function (UDF) within Amazon Redshift to refresh the materialized views.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418522[]' id='answer-id-1621432' class='answer   answerof-418522 ' value='1621432'   \/><label for='answer-id-1621432' id='answer-label-1621432' class=' answer'><span>Use the query editor v2 in Amazon Redshift to refresh the materialized views.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418522[]' id='answer-id-1621433' class='answer   answerof-418522 ' value='1621433'   \/><label for='answer-id-1621433' id='answer-label-1621433' class=' answer'><span>Use an AWS Glue workflow to refresh the materialized views.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-13' style=';'><div id='questionWrap-13'  class='   watupro-question-id-418523'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>13. 
<\/span>A data engineer needs to securely transfer 5 TB of data from an on-premises data center to an Amazon S3 bucket. Approximately 5% of the data changes every day. Updates to the data need to be regularly propagated to the S3 bucket. The data includes files that are in multiple formats. The data engineer needs to automate the transfer process and must schedule the process to run periodically. <br \/>\r<br>Which AWS service should the data engineer use to transfer the data in the MOST operationally efficient way?<\/div><input type='hidden' name='question_id[]' id='qID_13' value='418523' \/><input type='hidden' id='answerType418523' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418523[]' id='answer-id-1621434' class='answer   answerof-418523 ' value='1621434'   \/><label for='answer-id-1621434' id='answer-label-1621434' class=' answer'><span>AWS DataSync<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418523[]' id='answer-id-1621435' class='answer   answerof-418523 ' value='1621435'   \/><label for='answer-id-1621435' id='answer-label-1621435' class=' answer'><span>AWS Glue<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418523[]' id='answer-id-1621436' class='answer   answerof-418523 ' value='1621436'   \/><label for='answer-id-1621436' id='answer-label-1621436' class=' answer'><span>AWS Direct Connect<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418523[]' id='answer-id-1621437' class='answer   answerof-418523 ' value='1621437'   \/><label for='answer-id-1621437' id='answer-label-1621437' class=' answer'><span>Amazon S3 Transfer Acceleration<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div
class='watu-question ' id='question-14' style=';'><div id='questionWrap-14'  class='   watupro-question-id-418524'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>14. <\/span>A company uses an on-premises Microsoft SQL Server database to store financial transaction data. The company migrates the transaction data from the on-premises database to AWS at the end of each month. The company has noticed that the cost to migrate data from the on-premises database to an Amazon RDS for SQL Server database has increased recently. <br \/>\r<br>The company requires a cost-effective solution to migrate the data to AWS. The solution must cause minimal downtime for the applications that access the database. <br \/>\r<br>Which AWS service should the company use to meet these requirements?<\/div><input type='hidden' name='question_id[]' id='qID_14' value='418524' \/><input type='hidden' id='answerType418524' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418524[]' id='answer-id-1621438' class='answer   answerof-418524 ' value='1621438'   \/><label for='answer-id-1621438' id='answer-label-1621438' class=' answer'><span>AWS Lambda<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418524[]' id='answer-id-1621439' class='answer   answerof-418524 ' value='1621439'   \/><label for='answer-id-1621439' id='answer-label-1621439' class=' answer'><span>AWS Database Migration Service (AWS DMS)<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418524[]' id='answer-id-1621440' class='answer   answerof-418524 ' value='1621440'   \/><label for='answer-id-1621440' id='answer-label-1621440' class=' answer'><span>AWS Direct Connect<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio'
name='answer-418524[]' id='answer-id-1621441' class='answer   answerof-418524 ' value='1621441'   \/><label for='answer-id-1621441' id='answer-label-1621441' class=' answer'><span>AWS DataSync<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-15' style=';'><div id='questionWrap-15'  class='   watupro-question-id-418525'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>15. <\/span>A data engineer must build an extract, transform, and load (ETL) pipeline to process and load data from 10 source systems into 10 tables that are in an Amazon Redshift database. All the source systems generate .csv, JSON, or Apache Parquet files every 15 minutes. The source systems all deliver files into one Amazon S3 bucket. The file sizes range from 10 MB to 20 GB. The ETL pipeline must function correctly despite changes to the data schema. <br \/>\r<br>Which data pipeline solutions will meet these requirements? (Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_15' value='418525' \/><input type='hidden' id='answerType418525' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418525[]' id='answer-id-1621442' class='answer   answerof-418525 ' value='1621442'   \/><label for='answer-id-1621442' id='answer-label-1621442' class=' answer'><span>Use an Amazon EventBridge rule to run an AWS Glue job every 15 minutes. 
Configure the AWS Glue job to process and load the data into the Amazon Redshift tables.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418525[]' id='answer-id-1621443' class='answer   answerof-418525 ' value='1621443'   \/><label for='answer-id-1621443' id='answer-label-1621443' class=' answer'><span>Use an Amazon EventBridge rule to invoke an AWS Glue workflow job every 15 minutes. Configure the AWS Glue workflow to have an on-demand trigger that runs an AWS Glue crawler and then runs an AWS Glue job when the crawler finishes running successfully. Configure the AWS Glue job to process and load the data into the Amazon Redshift tables.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418525[]' id='answer-id-1621444' class='answer   answerof-418525 ' value='1621444'   \/><label for='answer-id-1621444' id='answer-label-1621444' class=' answer'><span>Configure an AWS Lambda function to invoke an AWS Glue crawler when a file is loaded into the S3 bucket. Configure an AWS Glue job to process and load the data into the Amazon Redshift tables. Create a second Lambda function to run the AWS Glue job. Create an Amazon EventBridge rule to invoke the second Lambda function when the AWS Glue crawler finishes running successfully.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418525[]' id='answer-id-1621445' class='answer   answerof-418525 ' value='1621445'   \/><label for='answer-id-1621445' id='answer-label-1621445' class=' answer'><span>Configure an AWS Lambda function to invoke an AWS Glue workflow when a file is loaded into the S3 bucket. Configure the AWS Glue workflow to have an on-demand trigger that runs an AWS Glue crawler and then runs an AWS Glue job when the crawler finishes running successfully. 
Configure the AWS Glue job to process and load the data into the Amazon Redshift tables.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418525[]' id='answer-id-1621446' class='answer   answerof-418525 ' value='1621446'   \/><label for='answer-id-1621446' id='answer-label-1621446' class=' answer'><span>Configure an AWS Lambda function to invoke an AWS Glue job when a file is loaded into the S3 bucket. Configure the AWS Glue job to read the files from the S3 bucket into an Apache Spark DataFrame. Configure the AWS Glue job to also put smaller partitions of the DataFrame into an Amazon Kinesis Data Firehose delivery stream. Configure the delivery stream to load data into the Amazon Redshift tables.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-16' style=';'><div id='questionWrap-16'  class='   watupro-question-id-418526'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>16. <\/span>A company stores details about transactions in an Amazon S3 bucket. The company wants to log all writes to the S3 bucket into another S3 bucket that is in the same AWS Region. <br \/>\r<br>Which solution will meet this requirement with the LEAST operational effort?<\/div><input type='hidden' name='question_id[]' id='qID_16' value='418526' \/><input type='hidden' id='answerType418526' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418526[]' id='answer-id-1621447' class='answer   answerof-418526 ' value='1621447'   \/><label for='answer-id-1621447' id='answer-label-1621447' class=' answer'><span>Configure an S3 Event Notifications rule for all activities on the transactions S3 bucket to invoke an AWS Lambda function. 
Program the Lambda function to write the event to Amazon Kinesis Data Firehose. Configure Kinesis Data Firehose to write the event to the logs S3 bucket.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418526[]' id='answer-id-1621448' class='answer   answerof-418526 ' value='1621448'   \/><label for='answer-id-1621448' id='answer-label-1621448' class=' answer'><span>Create a trail of management events in AWS CloudTrail.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418526[]' id='answer-id-1621449' class='answer   answerof-418526 ' value='1621449'   \/><label for='answer-id-1621449' id='answer-label-1621449' class=' answer'><span>Configure the trail to receive data from the transactions S3 bucket. Specify an empty prefix and write-only events. Specify the logs S3 bucket as the destination bucket.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418526[]' id='answer-id-1621450' class='answer   answerof-418526 ' value='1621450'   \/><label for='answer-id-1621450' id='answer-label-1621450' class=' answer'><span>Configure an S3 Event Notifications rule for all activities on the transactions S3 bucket to invoke an AWS Lambda function.
Program the Lambda function to write the events to the logs S3 bucket.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418526[]' id='answer-id-1621451' class='answer   answerof-418526 ' value='1621451'   \/><label for='answer-id-1621451' id='answer-label-1621451' class=' answer'><span>Create a trail of data events in AWS CloudTrail.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418526[]' id='answer-id-1621452' class='answer   answerof-418526 ' value='1621452'   \/><label for='answer-id-1621452' id='answer-label-1621452' class=' answer'><span>Configure the trail to receive data from the transactions S3 bucket. Specify an empty prefix and write-only events. Specify the logs S3 bucket as the destination bucket.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-17' style=';'><div id='questionWrap-17'  class='   watupro-question-id-418527'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>17. <\/span>A company needs to partition the Amazon S3 storage that the company uses for a data lake. <br \/>\r<br>The partitioning will use a path of the S3 object keys in the following format: s3:\/\/bucket\/prefix\/year=2023\/month=01\/day=01. <br \/>\r<br>A data engineer must ensure that the AWS Glue Data Catalog synchronizes with the S3 storage when the company adds new partitions to the bucket.
<br \/>\r<br>Which solution will meet these requirements with the LEAST latency?<\/div><input type='hidden' name='question_id[]' id='qID_17' value='418527' \/><input type='hidden' id='answerType418527' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418527[]' id='answer-id-1621453' class='answer   answerof-418527 ' value='1621453'   \/><label for='answer-id-1621453' id='answer-label-1621453' class=' answer'><span>Schedule an AWS Glue crawler to run every morning.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418527[]' id='answer-id-1621454' class='answer   answerof-418527 ' value='1621454'   \/><label for='answer-id-1621454' id='answer-label-1621454' class=' answer'><span>Manually run the AWS Glue CreatePartition API twice each day.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418527[]' id='answer-id-1621455' class='answer   answerof-418527 ' value='1621455'   \/><label for='answer-id-1621455' id='answer-label-1621455' class=' answer'><span>Use code that writes data to Amazon S3 to invoke the Boto3 AWS Glue create_partition API call.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418527[]' id='answer-id-1621456' class='answer   answerof-418527 ' value='1621456'   \/><label for='answer-id-1621456' id='answer-label-1621456' class=' answer'><span>Run the MSCK REPAIR TABLE command from the AWS Glue console.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-18' style=';'><div id='questionWrap-18'  class='   watupro-question-id-418528'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>18.
<\/span>A data engineer must manage the ingestion of real-time streaming data into AWS. The data engineer wants to perform real-time analytics on the incoming streaming data by using time-based aggregations over a window of up to 30 minutes. The data engineer needs a solution that is highly fault tolerant. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_18' value='418528' \/><input type='hidden' id='answerType418528' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418528[]' id='answer-id-1621457' class='answer   answerof-418528 ' value='1621457'   \/><label for='answer-id-1621457' id='answer-label-1621457' class=' answer'><span>Use an AWS Lambda function that includes both the business and the analytics logic to perform time-based aggregations over a window of up to 30 minutes for the data in Amazon Kinesis Data Streams.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418528[]' id='answer-id-1621458' class='answer   answerof-418528 ' value='1621458'   \/><label for='answer-id-1621458' id='answer-label-1621458' class=' answer'><span>Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) to analyze the data that might occasionally contain duplicates by using multiple types of aggregations.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418528[]' id='answer-id-1621459' class='answer   answerof-418528 ' value='1621459'   \/><label for='answer-id-1621459' id='answer-label-1621459' class=' answer'><span>Use an AWS Lambda function that includes both the business and the analytics logic to perform aggregations for a tumbling window of up to 30 minutes, based on the event 
timestamp.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418528[]' id='answer-id-1621460' class='answer   answerof-418528 ' value='1621460'   \/><label for='answer-id-1621460' id='answer-label-1621460' class=' answer'><span>Use Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) to analyze the data by using multiple types of aggregations to perform time-based analytics over a window of up to 30 minutes.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-19' style=';'><div id='questionWrap-19'  class='   watupro-question-id-418529'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>19. <\/span>A data engineer needs to join data from multiple sources to perform a one-time analysis job. The data is stored in Amazon DynamoDB, Amazon RDS, Amazon Redshift, and Amazon S3. <br \/>\r<br>Which solution will meet this requirement MOST cost-effectively?<\/div><input type='hidden' name='question_id[]' id='qID_19' value='418529' \/><input type='hidden' id='answerType418529' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418529[]' id='answer-id-1621461' class='answer   answerof-418529 ' value='1621461'   \/><label for='answer-id-1621461' id='answer-label-1621461' class=' answer'><span>Use an Amazon EMR provisioned cluster to read from all sources. 
Use Apache Spark to join the data and perform the analysis.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418529[]' id='answer-id-1621462' class='answer   answerof-418529 ' value='1621462'   \/><label for='answer-id-1621462' id='answer-label-1621462' class=' answer'><span>Copy the data from DynamoDB, Amazon RDS, and Amazon Redshift into Amazon S3. Run Amazon Athena queries directly on the S3 files.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418529[]' id='answer-id-1621463' class='answer   answerof-418529 ' value='1621463'   \/><label for='answer-id-1621463' id='answer-label-1621463' class=' answer'><span>Use Amazon Athena Federated Query to join the data from all data sources.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418529[]' id='answer-id-1621464' class='answer   answerof-418529 ' value='1621464'   \/><label for='answer-id-1621464' id='answer-label-1621464' class=' answer'><span>Use Redshift Spectrum to query data from DynamoDB, Amazon RDS, and Amazon S3 directly from Redshift.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-20' style=';'><div id='questionWrap-20'  class='   watupro-question-id-418530'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>20. <\/span>A company stores daily records of the financial performance of investment portfolios in .csv format in an Amazon S3 bucket. A data engineer uses AWS Glue crawlers to crawl the S3 data. The data engineer must make the S3 data accessible daily in the AWS Glue Data Catalog. 
<br \/>\r<br>Which solution will meet these requirements?<\/div><input type='hidden' name='question_id[]' id='qID_20' value='418530' \/><input type='hidden' id='answerType418530' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418530[]' id='answer-id-1621465' class='answer   answerof-418530 ' value='1621465'   \/><label for='answer-id-1621465' id='answer-label-1621465' class=' answer'><span>Create an IAM role that includes the AmazonS3FullAccess policy. Associate the role with the crawler. Specify the S3 bucket path of the source data as the crawler's data store. Create a daily schedule to run the crawler. Configure the output destination to a new path in the existing S3 bucket.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418530[]' id='answer-id-1621466' class='answer   answerof-418530 ' value='1621466'   \/><label for='answer-id-1621466' id='answer-label-1621466' class=' answer'><span>Create an IAM role that includes the AWSGlueServiceRole policy. Associate the role with the crawler. Specify the S3 bucket path of the source data as the crawler's data store. Create a daily schedule to run the crawler. Specify a database name for the output.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418530[]' id='answer-id-1621467' class='answer   answerof-418530 ' value='1621467'   \/><label for='answer-id-1621467' id='answer-label-1621467' class=' answer'><span>Create an IAM role that includes the AmazonS3FullAccess policy. Associate the role with the crawler. Specify the S3 bucket path of the source data as the crawler's data store. Allocate data processing units (DPUs) to run the crawler every day.
Specify a database name for the output.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418530[]' id='answer-id-1621468' class='answer   answerof-418530 ' value='1621468'   \/><label for='answer-id-1621468' id='answer-label-1621468' class=' answer'><span>Create an IAM role that includes the AWSGlueServiceRole policy. Associate the role with the crawler. Specify the S3 bucket path of the source data as the crawler's data store. Allocate data processing units (DPUs) to run the crawler every day. Configure the output destination to a new path in the existing S3 bucket.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-21' style=';'><div id='questionWrap-21'  class='   watupro-question-id-418531'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>21. <\/span>A data engineer must ingest a source of structured data that is in .csv format into an Amazon S3 data lake. The .csv files contain 15 columns. Data analysts need to run Amazon Athena queries on one or two columns of the dataset. The data analysts rarely query the entire file.
<br \/>\r<br>Which solution will meet these requirements MOST cost-effectively?<\/div><input type='hidden' name='question_id[]' id='qID_21' value='418531' \/><input type='hidden' id='answerType418531' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418531[]' id='answer-id-1621469' class='answer   answerof-418531 ' value='1621469'   \/><label for='answer-id-1621469' id='answer-label-1621469' class=' answer'><span>Use an AWS Glue PySpark job to ingest the source data into the data lake in .csv format.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418531[]' id='answer-id-1621470' class='answer   answerof-418531 ' value='1621470'   \/><label for='answer-id-1621470' id='answer-label-1621470' class=' answer'><span>Create an AWS Glue extract, transform, and load (ETL) job to read from the .csv structured data source. Configure the job to ingest the data into the data lake in JSON format.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418531[]' id='answer-id-1621471' class='answer   answerof-418531 ' value='1621471'   \/><label for='answer-id-1621471' id='answer-label-1621471' class=' answer'><span>Use an AWS Glue PySpark job to ingest the source data into the data lake in Apache Avro format.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418531[]' id='answer-id-1621472' class='answer   answerof-418531 ' value='1621472'   \/><label for='answer-id-1621472' id='answer-label-1621472' class=' answer'><span>Create an AWS Glue extract, transform, and load (ETL) job to read from the .csv structured data source. 
Configure the job to write the data into the data lake in Apache Parquet format.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-22' style=';'><div id='questionWrap-22'  class='   watupro-question-id-418532'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>22. <\/span>A data engineering team is using an Amazon Redshift data warehouse for operational reporting. The team wants to prevent performance issues that might result from long-running queries. A data engineer must choose a system table in Amazon Redshift to record anomalies when a query optimizer identifies conditions that might indicate performance issues. <br \/>\r<br>Which table views should the data engineer use to meet this requirement?<\/div><input type='hidden' name='question_id[]' id='qID_22' value='418532' \/><input type='hidden' id='answerType418532' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418532[]' id='answer-id-1621473' class='answer   answerof-418532 ' value='1621473'   \/><label for='answer-id-1621473' id='answer-label-1621473' class=' answer'><span>STL_USAGE_CONTROL<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418532[]' id='answer-id-1621474' class='answer   answerof-418532 ' value='1621474'   \/><label for='answer-id-1621474' id='answer-label-1621474' class=' answer'><span>STL_ALERT_EVENT_LOG<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418532[]' id='answer-id-1621475' class='answer   answerof-418532 ' value='1621475'   \/><label for='answer-id-1621475' id='answer-label-1621475' class=' answer'><span>STL_QUERY_METRICS<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio'
name='answer-418532[]' id='answer-id-1621476' class='answer   answerof-418532 ' value='1621476'   \/><label for='answer-id-1621476' id='answer-label-1621476' class=' answer'><span>STL_PLAN_INFO<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-23' style=';'><div id='questionWrap-23'  class='   watupro-question-id-418533'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>23. <\/span>A manufacturing company wants to collect data from sensors. A data engineer needs to implement a solution that ingests sensor data in near real time. <br \/>\r<br>The solution must store the data to a persistent data store. The solution must store the data in nested JSON format. The company must have the ability to query from the data store with a latency of less than 10 milliseconds. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_23' value='418533' \/><input type='hidden' id='answerType418533' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418533[]' id='answer-id-1621477' class='answer   answerof-418533 ' value='1621477'   \/><label for='answer-id-1621477' id='answer-label-1621477' class=' answer'><span>Use a self-hosted Apache Kafka cluster to capture the sensor data. Store the data in Amazon S3 for querying.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418533[]' id='answer-id-1621478' class='answer   answerof-418533 ' value='1621478'   \/><label for='answer-id-1621478' id='answer-label-1621478' class=' answer'><span>Use AWS Lambda to process the sensor data.
Store the data in Amazon S3 for querying.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418533[]' id='answer-id-1621479' class='answer   answerof-418533 ' value='1621479'   \/><label for='answer-id-1621479' id='answer-label-1621479' class=' answer'><span>Use Amazon Kinesis Data Streams to capture the sensor data. Store the data in Amazon DynamoDB for querying.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418533[]' id='answer-id-1621480' class='answer   answerof-418533 ' value='1621480'   \/><label for='answer-id-1621480' id='answer-label-1621480' class=' answer'><span>Use Amazon Simple Queue Service (Amazon SQS) to buffer incoming sensor data. Use AWS Glue to store the data in Amazon RDS for querying.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-24' style=';'><div id='questionWrap-24'  class='   watupro-question-id-418534'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>24. <\/span>A company uses Amazon Athena to run SQL queries for extract, transform, and load (ETL) tasks by using Create Table As Select (CTAS). The company must use Apache Spark instead of SQL to generate analytics. 
<br \/>\r<br>Which solution will give the company the ability to use Spark to access Athena?<\/div><input type='hidden' name='question_id[]' id='qID_24' value='418534' \/><input type='hidden' id='answerType418534' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418534[]' id='answer-id-1621481' class='answer   answerof-418534 ' value='1621481'   \/><label for='answer-id-1621481' id='answer-label-1621481' class=' answer'><span>Athena query settings<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418534[]' id='answer-id-1621482' class='answer   answerof-418534 ' value='1621482'   \/><label for='answer-id-1621482' id='answer-label-1621482' class=' answer'><span>Athena workgroup<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418534[]' id='answer-id-1621483' class='answer   answerof-418534 ' value='1621483'   \/><label for='answer-id-1621483' id='answer-label-1621483' class=' answer'><span>Athena data source<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418534[]' id='answer-id-1621484' class='answer   answerof-418534 ' value='1621484'   \/><label for='answer-id-1621484' id='answer-label-1621484' class=' answer'><span>Athena query editor<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-25' style=';'><div id='questionWrap-25'  class='   watupro-question-id-418535'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>25. <\/span>A company uses an Amazon Redshift provisioned cluster as its database. The Redshift cluster has five reserved ra3.4xlarge nodes and uses key distribution. 
<br \/>\r<br>A data engineer notices that one of the nodes frequently has a CPU load over 90%. SQL queries that run on the node are queued. The other four nodes usually have a CPU load under 15% during daily operations. <br \/>\r<br>The data engineer wants to maintain the current number of compute nodes. The data engineer also wants to balance the load more evenly across all five compute nodes. <br \/>\r<br>Which solution will meet these requirements?<\/div><input type='hidden' name='question_id[]' id='qID_25' value='418535' \/><input type='hidden' id='answerType418535' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418535[]' id='answer-id-1621485' class='answer   answerof-418535 ' value='1621485'   \/><label for='answer-id-1621485' id='answer-label-1621485' class=' answer'><span>Change the sort key to be the data column that is most often used in a WHERE clause of the SQL SELECT statement.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418535[]' id='answer-id-1621486' class='answer   answerof-418535 ' value='1621486'   \/><label for='answer-id-1621486' id='answer-label-1621486' class=' answer'><span>Change the distribution key to the table column that has the largest dimension.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418535[]' id='answer-id-1621487' class='answer   answerof-418535 ' value='1621487'   \/><label for='answer-id-1621487' id='answer-label-1621487' class=' answer'><span>Upgrade the reserved node from ra3.4xlarge to ra3.16xlarge.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418535[]' id='answer-id-1621488' class='answer   answerof-418535 ' value='1621488'   \/><label for='answer-id-1621488' id='answer-label-1621488' class=' 
answer'><span>Change the primary key to be the data column that is most often used in a WHERE clause of the SQL SELECT statement.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-26' style=';'><div id='questionWrap-26'  class='   watupro-question-id-418536'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>26. <\/span>A company is planning to use a provisioned Amazon EMR cluster that runs Apache Spark jobs to perform big data analysis. The company requires high reliability. A big data team must follow best practices for running cost-optimized and long-running workloads on Amazon EMR. The team must find a solution that will maintain the company's current level of performance. <br \/>\r<br>Which combination of resources will meet these requirements MOST cost-effectively? (Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_26' value='418536' \/><input type='hidden' id='answerType418536' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418536[]' id='answer-id-1621489' class='answer   answerof-418536 ' value='1621489'   \/><label for='answer-id-1621489' id='answer-label-1621489' class=' answer'><span>Use Hadoop Distributed File System (HDFS) as a persistent data store.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418536[]' id='answer-id-1621490' class='answer   answerof-418536 ' value='1621490'   \/><label for='answer-id-1621490' id='answer-label-1621490' class=' answer'><span>Use Amazon S3 as a persistent data store.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418536[]' id='answer-id-1621491' class='answer   answerof-418536 ' value='1621491'   \/><label 
for='answer-id-1621491' id='answer-label-1621491' class=' answer'><span>Use x86-based instances for core nodes and task nodes.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418536[]' id='answer-id-1621492' class='answer   answerof-418536 ' value='1621492'   \/><label for='answer-id-1621492' id='answer-label-1621492' class=' answer'><span>Use Graviton instances for core nodes and task nodes.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418536[]' id='answer-id-1621493' class='answer   answerof-418536 ' value='1621493'   \/><label for='answer-id-1621493' id='answer-label-1621493' class=' answer'><span>Use Spot Instances for all primary nodes.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-27' style=';'><div id='questionWrap-27'  class='   watupro-question-id-418537'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>27. <\/span>A company is building an analytics solution. The solution uses Amazon S3 for data lake storage and Amazon Redshift for a data warehouse. The company wants to use Amazon Redshift Spectrum to query the data that is in Amazon S3. <br \/>\r<br>Which actions will provide the FASTEST queries? 
(Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_27' value='418537' \/><input type='hidden' id='answerType418537' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418537[]' id='answer-id-1621494' class='answer   answerof-418537 ' value='1621494'   \/><label for='answer-id-1621494' id='answer-label-1621494' class=' answer'><span>Use gzip compression to compress individual files to sizes that are between 1 GB and 5 GB.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418537[]' id='answer-id-1621495' class='answer   answerof-418537 ' value='1621495'   \/><label for='answer-id-1621495' id='answer-label-1621495' class=' answer'><span>Use a columnar storage file format.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418537[]' id='answer-id-1621496' class='answer   answerof-418537 ' value='1621496'   \/><label for='answer-id-1621496' id='answer-label-1621496' class=' answer'><span>Partition the data based on the most common query predicates.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418537[]' id='answer-id-1621497' class='answer   answerof-418537 ' value='1621497'   \/><label for='answer-id-1621497' id='answer-label-1621497' class=' answer'><span>Split the data into files that are less than 10 KB.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418537[]' id='answer-id-1621498' class='answer   answerof-418537 ' value='1621498'   \/><label for='answer-id-1621498' id='answer-label-1621498' class=' answer'><span>Use file formats that are not splittable.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' 
id='question-28' style=';'><div id='questionWrap-28'  class='   watupro-question-id-418538'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>28. <\/span>A company has a production AWS account that runs company workloads. The company's security team created a security AWS account to store and analyze security logs from the production AWS account. The security logs in the production AWS account are stored in Amazon CloudWatch Logs. The company needs to use Amazon Kinesis Data Streams to deliver the security logs to the security AWS account. <br \/>\r<br>Which solution will meet these requirements?<\/div><input type='hidden' name='question_id[]' id='qID_28' value='418538' \/><input type='hidden' id='answerType418538' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418538[]' id='answer-id-1621499' class='answer   answerof-418538 ' value='1621499'   \/><label for='answer-id-1621499' id='answer-label-1621499' class=' answer'><span>Create a destination data stream in the production AWS account. In the security AWS account, create an IAM role that has cross-account permissions to Kinesis Data Streams in the production AWS account.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418538[]' id='answer-id-1621500' class='answer   answerof-418538 ' value='1621500'   \/><label for='answer-id-1621500' id='answer-label-1621500' class=' answer'><span>Create a destination data stream in the security AWS account. Create an IAM role and a trust policy to grant CloudWatch Logs the permission to put data into the stream. 
Create a subscription filter in the security AWS account.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418538[]' id='answer-id-1621501' class='answer   answerof-418538 ' value='1621501'   \/><label for='answer-id-1621501' id='answer-label-1621501' class=' answer'><span>Create a destination data stream in the production AWS account. In the production AWS account, create an IAM role that has cross-account permissions to Kinesis Data Streams in the security AWS account.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418538[]' id='answer-id-1621502' class='answer   answerof-418538 ' value='1621502'   \/><label for='answer-id-1621502' id='answer-label-1621502' class=' answer'><span>Create a destination data stream in the security AWS account. Create an IAM role and a trust policy to grant CloudWatch Logs the permission to put data into the stream. Create a subscription filter in the production AWS account.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-29' style=';'><div id='questionWrap-29'  class='   watupro-question-id-418539'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>29. <\/span>A data engineer needs to schedule a workflow that runs a set of AWS Glue jobs every day. The data engineer does not require the Glue jobs to run or finish at a specific time. 
<br \/>\r<br>Which solution will run the Glue jobs in the MOST cost-effective way?<\/div><input type='hidden' name='question_id[]' id='qID_29' value='418539' \/><input type='hidden' id='answerType418539' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418539[]' id='answer-id-1621503' class='answer   answerof-418539 ' value='1621503'   \/><label for='answer-id-1621503' id='answer-label-1621503' class=' answer'><span>Choose the FLEX execution class in the Glue job properties.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418539[]' id='answer-id-1621504' class='answer   answerof-418539 ' value='1621504'   \/><label for='answer-id-1621504' id='answer-label-1621504' class=' answer'><span>Use the Spot Instance type in Glue job properties.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418539[]' id='answer-id-1621505' class='answer   answerof-418539 ' value='1621505'   \/><label for='answer-id-1621505' id='answer-label-1621505' class=' answer'><span>Choose the STANDARD execution class in the Glue job properties.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418539[]' id='answer-id-1621506' class='answer   answerof-418539 ' value='1621506'   \/><label for='answer-id-1621506' id='answer-label-1621506' class=' answer'><span>Choose the latest version in the GlueVersion field in the Glue job properties.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-30' style=';'><div id='questionWrap-30'  class='   watupro-question-id-418540'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>30. 
<\/span>A company is planning to upgrade its Amazon Elastic Block Store (Amazon EBS) General Purpose SSD storage from gp2 to gp3. The company wants to prevent any interruptions in its Amazon EC2 instances that will cause data loss during the migration to the upgraded storage. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_30' value='418540' \/><input type='hidden' id='answerType418540' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418540[]' id='answer-id-1621507' class='answer   answerof-418540 ' value='1621507'   \/><label for='answer-id-1621507' id='answer-label-1621507' class=' answer'><span>Create snapshots of the gp2 volumes. Create new gp3 volumes from the snapshots. Attach the new gp3 volumes to the EC2 instances.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418540[]' id='answer-id-1621508' class='answer   answerof-418540 ' value='1621508'   \/><label for='answer-id-1621508' id='answer-label-1621508' class=' answer'><span>Create new gp3 volumes. Gradually transfer the data to the new gp3 volumes. When the transfer is complete, mount the new gp3 volumes to the EC2 instances to replace the gp2 volumes.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418540[]' id='answer-id-1621509' class='answer   answerof-418540 ' value='1621509'   \/><label for='answer-id-1621509' id='answer-label-1621509' class=' answer'><span>Change the volume type of the existing gp2 volumes to gp3. 
Enter new values for volume size, IOPS, and throughput.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418540[]' id='answer-id-1621510' class='answer   answerof-418540 ' value='1621510'   \/><label for='answer-id-1621510' id='answer-label-1621510' class=' answer'><span>Use AWS DataSync to create new gp3 volumes. Transfer the data from the original gp2 volumes to the new gp3 volumes.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-31' style=';'><div id='questionWrap-31'  class='   watupro-question-id-418541'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>31. <\/span>A company receives call logs as Amazon S3 objects that contain sensitive customer information. The company must protect the S3 objects by using encryption. The company must also use encryption keys that only specific employees can access. <br \/>\r<br>Which solution will meet these requirements with the LEAST effort?<\/div><input type='hidden' name='question_id[]' id='qID_31' value='418541' \/><input type='hidden' id='answerType418541' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418541[]' id='answer-id-1621511' class='answer   answerof-418541 ' value='1621511'   \/><label for='answer-id-1621511' id='answer-label-1621511' class=' answer'><span>Use an AWS CloudHSM cluster to store the encryption keys. Configure the process that writes to Amazon S3 to make calls to CloudHSM to encrypt and decrypt the objects. 
Deploy an IAM policy that restricts access to the CloudHSM cluster.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418541[]' id='answer-id-1621512' class='answer   answerof-418541 ' value='1621512'   \/><label for='answer-id-1621512' id='answer-label-1621512' class=' answer'><span>Use server-side encryption with customer-provided keys (SSE-C) to encrypt the objects that contain customer information. Restrict access to the keys that encrypt the objects.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418541[]' id='answer-id-1621513' class='answer   answerof-418541 ' value='1621513'   \/><label for='answer-id-1621513' id='answer-label-1621513' class=' answer'><span>Use server-side encryption with AWS KMS keys (SSE-KMS) to encrypt the objects that contain customer information. Configure an IAM policy that restricts access to the KMS keys that encrypt the objects.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418541[]' id='answer-id-1621514' class='answer   answerof-418541 ' value='1621514'   \/><label for='answer-id-1621514' id='answer-label-1621514' class=' answer'><span>Use server-side encryption with Amazon S3 managed keys (SSE-S3) to encrypt the objects that contain customer information. Configure an IAM policy that restricts access to the Amazon S3 managed keys that encrypt the objects.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-32' style=';'><div id='questionWrap-32'  class='   watupro-question-id-418542'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>32. <\/span>An airline company is collecting metrics about flight activities for analytics. 
The company is conducting a proof of concept (POC) test to show how analytics can provide insights that the company can use to increase on-time departures. <br \/>\r<br>The POC test uses objects in Amazon S3 that contain the metrics in .csv format. The POC test uses Amazon Athena to query the data. The data is partitioned in the S3 bucket by date. <br \/>\r<br>As the amount of data increases, the company wants to optimize the storage solution to improve query performance. <br \/>\r<br>Which combination of solutions will meet these requirements? (Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_32' value='418542' \/><input type='hidden' id='answerType418542' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418542[]' id='answer-id-1621515' class='answer   answerof-418542 ' value='1621515'   \/><label for='answer-id-1621515' id='answer-label-1621515' class=' answer'><span>Add a randomized string to the beginning of the keys in Amazon S3 to get more throughput across partitions.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418542[]' id='answer-id-1621516' class='answer   answerof-418542 ' value='1621516'   \/><label for='answer-id-1621516' id='answer-label-1621516' class=' answer'><span>Use an S3 bucket that is in the same account that uses Athena to query the data.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418542[]' id='answer-id-1621517' class='answer   answerof-418542 ' value='1621517'   \/><label for='answer-id-1621517' id='answer-label-1621517' class=' answer'><span>Use an S3 bucket that is in the same AWS Region where the company runs Athena queries.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418542[]' 
id='answer-id-1621518' class='answer   answerof-418542 ' value='1621518'   \/><label for='answer-id-1621518' id='answer-label-1621518' class=' answer'><span>Preprocess the .csv data to JSON format by fetching only the document keys that the query requires.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418542[]' id='answer-id-1621519' class='answer   answerof-418542 ' value='1621519'   \/><label for='answer-id-1621519' id='answer-label-1621519' class=' answer'><span>Preprocess the .csv data to Apache Parquet format by fetching only the data blocks that are needed for predicates.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-33' style=';'><div id='questionWrap-33'  class='   watupro-question-id-418543'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>33. <\/span>A company is migrating a legacy application to an Amazon S3 based data lake. A data engineer reviewed data that is associated with the legacy application. The data engineer found that the legacy data contained some duplicate information. <br \/>\r<br>The data engineer must identify and remove duplicate information from the legacy application data. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_33' value='418543' \/><input type='hidden' id='answerType418543' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418543[]' id='answer-id-1621520' class='answer   answerof-418543 ' value='1621520'   \/><label for='answer-id-1621520' id='answer-label-1621520' class=' answer'><span>Write a custom extract, transform, and load (ETL) job in Python. 
Use the DataFrame.drop_duplicates() function by importing the Pandas library to perform data deduplication.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418543[]' id='answer-id-1621521' class='answer   answerof-418543 ' value='1621521'   \/><label for='answer-id-1621521' id='answer-label-1621521' class=' answer'><span>Write an AWS Glue extract, transform, and load (ETL) job. Use the FindMatches machine learning (ML) transform to transform the data to perform data deduplication.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418543[]' id='answer-id-1621522' class='answer   answerof-418543 ' value='1621522'   \/><label for='answer-id-1621522' id='answer-label-1621522' class=' answer'><span>Write a custom extract, transform, and load (ETL) job in Python. Import the Python dedupe library. Use the dedupe library to perform data deduplication.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418543[]' id='answer-id-1621523' class='answer   answerof-418543 ' value='1621523'   \/><label for='answer-id-1621523' id='answer-label-1621523' class=' answer'><span>Write an AWS Glue extract, transform, and load (ETL) job. Import the Python dedupe library. Use the dedupe library to perform data deduplication.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-34' style=';'><div id='questionWrap-34'  class='   watupro-question-id-418544'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>34. <\/span>A data engineer needs to use AWS Step Functions to design an orchestration workflow. The workflow must process a large collection of data files in parallel and apply a specific transformation to each file. 
<br \/>\r<br>Which Step Functions state should the data engineer use to meet these requirements?<\/div><input type='hidden' name='question_id[]' id='qID_34' value='418544' \/><input type='hidden' id='answerType418544' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418544[]' id='answer-id-1621524' class='answer   answerof-418544 ' value='1621524'   \/><label for='answer-id-1621524' id='answer-label-1621524' class=' answer'><span>Parallel state<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418544[]' id='answer-id-1621525' class='answer   answerof-418544 ' value='1621525'   \/><label for='answer-id-1621525' id='answer-label-1621525' class=' answer'><span>Choice state<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418544[]' id='answer-id-1621526' class='answer   answerof-418544 ' value='1621526'   \/><label for='answer-id-1621526' id='answer-label-1621526' class=' answer'><span>Map state<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418544[]' id='answer-id-1621527' class='answer   answerof-418544 ' value='1621527'   \/><label for='answer-id-1621527' id='answer-label-1621527' class=' answer'><span>Wait state<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-35' style=';'><div id='questionWrap-35'  class='   watupro-question-id-418545'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>35. <\/span>A financial services company stores financial data in Amazon Redshift. A data engineer wants to run real-time queries on the financial data to support a web-based trading application. 
The data engineer wants to run the queries from within the trading application. <br \/>\r<br>Which solution will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_35' value='418545' \/><input type='hidden' id='answerType418545' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418545[]' id='answer-id-1621528' class='answer   answerof-418545 ' value='1621528'   \/><label for='answer-id-1621528' id='answer-label-1621528' class=' answer'><span>Establish WebSocket connections to Amazon Redshift.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418545[]' id='answer-id-1621529' class='answer   answerof-418545 ' value='1621529'   \/><label for='answer-id-1621529' id='answer-label-1621529' class=' answer'><span>Use the Amazon Redshift Data API.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418545[]' id='answer-id-1621530' class='answer   answerof-418545 ' value='1621530'   \/><label for='answer-id-1621530' id='answer-label-1621530' class=' answer'><span>Set up Java Database Connectivity (JDBC) connections to Amazon Redshift.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418545[]' id='answer-id-1621531' class='answer   answerof-418545 ' value='1621531'   \/><label for='answer-id-1621531' id='answer-label-1621531' class=' answer'><span>Store frequently accessed data in Amazon S3. 
Use Amazon S3 Select to run the queries.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-36' style=';'><div id='questionWrap-36'  class='   watupro-question-id-418546'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>36. <\/span>A financial company wants to implement a data mesh. The data mesh must support centralized data governance, data analysis, and data access control. The company has decided to use AWS Glue for data catalogs and extract, transform, and load (ETL) operations. <br \/>\r<br>Which combination of AWS services will implement a data mesh? (Choose two.)<\/div><input type='hidden' name='question_id[]' id='qID_36' value='418546' \/><input type='hidden' id='answerType418546' value='checkbox'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418546[]' id='answer-id-1621532' class='answer   answerof-418546 ' value='1621532'   \/><label for='answer-id-1621532' id='answer-label-1621532' class=' answer'><span>Use Amazon Aurora for data storage. Use an Amazon Redshift provisioned cluster for data analysis.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418546[]' id='answer-id-1621533' class='answer   answerof-418546 ' value='1621533'   \/><label for='answer-id-1621533' id='answer-label-1621533' class=' answer'><span>Use Amazon S3 for data storage. 
Use Amazon Athena for data analysis.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418546[]' id='answer-id-1621534' class='answer   answerof-418546 ' value='1621534'   \/><label for='answer-id-1621534' id='answer-label-1621534' class=' answer'><span>Use AWS Glue DataBrew for centralized data governance and access control.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418546[]' id='answer-id-1621535' class='answer   answerof-418546 ' value='1621535'   \/><label for='answer-id-1621535' id='answer-label-1621535' class=' answer'><span>Use Amazon RDS for data storage. Use Amazon EMR for data analysis.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='checkbox' name='answer-418546[]' id='answer-id-1621536' class='answer   answerof-418546 ' value='1621536'   \/><label for='answer-id-1621536' id='answer-label-1621536' class=' answer'><span>Use AWS Lake Formation for centralized data governance and access control.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-37' style=';'><div id='questionWrap-37'  class='   watupro-question-id-418547'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>37. <\/span>A company stores data in a data lake that is in Amazon S3. Some data that the company stores in the data lake contains personally identifiable information (PII). Multiple user groups need to access the raw data. The company must ensure that user groups can access only the PII that they require. 
<br \/>\r<br>Which solution will meet these requirements with the LEAST effort?<\/div><input type='hidden' name='question_id[]' id='qID_37' value='418547' \/><input type='hidden' id='answerType418547' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418547[]' id='answer-id-1621537' class='answer   answerof-418547 ' value='1621537'   \/><label for='answer-id-1621537' id='answer-label-1621537' class=' answer'><span>Use Amazon Athena to query the data. Set up AWS Lake Formation and create data filters to establish levels of access for the company's IAM roles. Assign each user to the IAM role that matches the user's PII access requirements.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418547[]' id='answer-id-1621538' class='answer   answerof-418547 ' value='1621538'   \/><label for='answer-id-1621538' id='answer-label-1621538' class=' answer'><span>Use Amazon QuickSight to access the data. Use column-level security features in QuickSight to limit the PII that users can retrieve from Amazon S3 by using Amazon Athena. Define QuickSight access levels based on the PII access requirements of the users.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418547[]' id='answer-id-1621539' class='answer   answerof-418547 ' value='1621539'   \/><label for='answer-id-1621539' id='answer-label-1621539' class=' answer'><span>Build a custom query builder UI that will run Athena queries in the background to access the data. Create user groups in Amazon Cognito. 
Assign access levels to the user groups based on the PII access requirements of the users.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418547[]' id='answer-id-1621540' class='answer   answerof-418547 ' value='1621540'   \/><label for='answer-id-1621540' id='answer-label-1621540' class=' answer'><span>Create IAM roles that have different levels of granular access. Assign the IAM roles to IAM user groups. Use an identity-based policy to assign access levels to user groups at the column level.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-38' style=';'><div id='questionWrap-38'  class='   watupro-question-id-418548'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>38. <\/span>A company uses Amazon S3 to store semi-structured data in a transactional data lake. Some of the data files are small, but other data files are tens of terabytes. <br \/>\r<br>A data engineer must perform a change data capture (CDC) operation to identify changed data from the data source. The data source sends a full snapshot as a JSON file every day. The data engineer must ingest the changed data into the data lake. <br \/>\r<br>Which solution will capture the changed data MOST cost-effectively?<\/div><input type='hidden' name='question_id[]' id='qID_38' value='418548' \/><input type='hidden' id='answerType418548' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418548[]' id='answer-id-1621541' class='answer   answerof-418548 ' value='1621541'   \/><label for='answer-id-1621541' id='answer-label-1621541' class=' answer'><span>Create an AWS Lambda function to identify the changes between the previous data and the current data. 
Configure the Lambda function to ingest the changes into the data lake.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418548[]' id='answer-id-1621542' class='answer   answerof-418548 ' value='1621542'   \/><label for='answer-id-1621542' id='answer-label-1621542' class=' answer'><span>Ingest the data into Amazon RDS for MySQL.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418548[]' id='answer-id-1621543' class='answer   answerof-418548 ' value='1621543'   \/><label for='answer-id-1621543' id='answer-label-1621543' class=' answer'><span>Use AWS Database Migration Service (AWS DMS) to write the changed data to the data lake.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418548[]' id='answer-id-1621544' class='answer   answerof-418548 ' value='1621544'   \/><label for='answer-id-1621544' id='answer-label-1621544' class=' answer'><span>Use an open source data lake format to merge the data source with the S3 data lake to insert the new data and update the existing data.<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418548[]' id='answer-id-1621545' class='answer   answerof-418548 ' value='1621545'   \/><label for='answer-id-1621545' id='answer-label-1621545' class=' answer'><span>Ingest the data into an Amazon Aurora MySQL DB instance that runs Aurora Serverless. Use AWS Database Migration Service (AWS DMS) to write the changed data to the data lake.<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-39' style=';'><div id='questionWrap-39'  class='   watupro-question-id-418549'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>39. <\/span>A company is migrating on-premises workloads to AWS. 
The company wants to reduce overall operational overhead. The company also wants to explore serverless options. <br \/>\r<br>The company's current workloads use Apache Pig, Apache Oozie, Apache Spark, Apache HBase, and Apache Flink. The on-premises workloads process petabytes of data in seconds. The company must maintain similar or better performance after the migration to AWS. <br \/>\r<br>Which extract, transform, and load (ETL) service will meet these requirements?<\/div><input type='hidden' name='question_id[]' id='qID_39' value='418549' \/><input type='hidden' id='answerType418549' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418549[]' id='answer-id-1621546' class='answer   answerof-418549 ' value='1621546'   \/><label for='answer-id-1621546' id='answer-label-1621546' class=' answer'><span>AWS Glue<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418549[]' id='answer-id-1621547' class='answer   answerof-418549 ' value='1621547'   \/><label for='answer-id-1621547' id='answer-label-1621547' class=' answer'><span>Amazon EMR<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418549[]' id='answer-id-1621548' class='answer   answerof-418549 ' value='1621548'   \/><label for='answer-id-1621548' id='answer-label-1621548' class=' answer'><span>AWS Lambda<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418549[]' id='answer-id-1621549' class='answer   answerof-418549 ' value='1621549'   \/><label for='answer-id-1621549' id='answer-label-1621549' class=' answer'><span>Amazon Redshift<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div class='watu-question ' id='question-40' style=';'><div id='questionWrap-40'  
class='   watupro-question-id-418550'>\n\t\t\t<div class='question-content'><div><span class='watupro_num'>40. <\/span>A media company uses software as a service (SaaS) applications to gather data by using third-party tools. The company needs to store the data in an Amazon S3 bucket. The company will use Amazon Redshift to perform analytics based on the data. <br \/>\r<br>Which AWS service or feature will meet these requirements with the LEAST operational overhead?<\/div><input type='hidden' name='question_id[]' id='qID_40' value='418550' \/><input type='hidden' id='answerType418550' value='radio'><!-- end question-content--><\/div><div class='question-choices watupro-choices-columns '><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418550[]' id='answer-id-1621550' class='answer   answerof-418550 ' value='1621550'   \/><label for='answer-id-1621550' id='answer-label-1621550' class=' answer'><span>Amazon Managed Streaming for Apache Kafka (Amazon MSK)<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418550[]' id='answer-id-1621551' class='answer   answerof-418550 ' value='1621551'   \/><label for='answer-id-1621551' id='answer-label-1621551' class=' answer'><span>Amazon AppFlow<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418550[]' id='answer-id-1621552' class='answer   answerof-418550 ' value='1621552'   \/><label for='answer-id-1621552' id='answer-label-1621552' class=' answer'><span>AWS Glue Data Catalog<\/span><\/label><\/div><div class='watupro-question-choice  ' dir='auto' ><input type='radio' name='answer-418550[]' id='answer-id-1621553' class='answer   answerof-418550 ' value='1621553'   \/><label for='answer-id-1621553' id='answer-label-1621553' class=' answer'><span>Amazon Kinesis<\/span><\/label><\/div><!-- end question-choices--><\/div><!-- end questionWrap--><\/div><\/div><div style='display:none' 
id='question-41'>\n\t<div class='question-content'>\n\t\t<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.dumpsbase.com\/freedumps\/wp-content\/plugins\/watupro\/img\/loading.gif\" width=\"16\" height=\"16\" alt=\"Loading...\" title=\"Loading...\" \/>&nbsp;Loading...\t<\/div>\n<\/div>\n\n<br \/>\n\t\n\t\t\t<div class=\"watupro_buttons flex \" id=\"watuPROButtons10578\" >\n\t\t  <div id=\"prev-question\" style=\"display:none;\"><input type=\"button\" value=\"&lt; Previous\" onclick=\"WatuPRO.nextQuestion(event, 'previous');\"\/><\/div>\t\t  \t\t  \t\t   \n\t\t   \t  \t\t<div><input type=\"button\" name=\"action\" class=\"watupro-submit-button\" onclick=\"WatuPRO.submitResult(event)\" id=\"action-button\" value=\"View Results\"  \/>\n\t\t<\/div>\n\t\t<\/div>\n\t\t\n\t<input type=\"hidden\" name=\"quiz_id\" value=\"10578\" id=\"watuPROExamID\"\/>\n\t<input type=\"hidden\" name=\"start_time\" id=\"startTime\" value=\"2026-04-21 12:54:58\" \/>\n\t<input type=\"hidden\" name=\"start_timestamp\" id=\"startTimeStamp\" value=\"1776776098\" \/>\n\t<input type=\"hidden\" name=\"question_ids\" value=\"\" \/>\n\t<input type=\"hidden\" name=\"watupro_questions\" value=\"418511:1621390,1637766,1637767,1637768 | 418512:1621391,1637769,1637770,1637771,1637772 | 418513:1621392,1621393,1621394,1621395 | 418514:1621396,1621397,1621398,1621399 | 418515:1621400,1621401,1621402,1621403,1621404 | 418516:1621405,1621406,1621407,1621408 | 418517:1621409,1621410,1621411,1621412 | 418518:1621413,1621414,1621415,1621416 | 418519:1621417,1621418,1621419,1621420 | 418520:1621421,1621422,1621423,1621424,1621425 | 418521:1621426,1621427,1621428,1621429 | 418522:1621430,1621431,1621432,1621433 | 418523:1621434,1621435,1621436,1621437 | 418524:1621438,1621439,1621440,1621441 | 418525:1621442,1621443,1621444,1621445,1621446 | 418526:1621447,1621448,1621449,1621450,1621451,1621452 | 418527:1621453,1621454,1621455,1621456 | 418528:1621457,1621458,1621459,1621460 | 
418529:1621461,1621462,1621463,1621464 | 418530:1621465,1621466,1621467,1621468 | 418531:1621469,1621470,1621471,1621472 | 418532:1621473,1621474,1621475,1621476 | 418533:1621477,1621478,1621479,1621480 | 418534:1621481,1621482,1621483,1621484 | 418535:1621485,1621486,1621487,1621488 | 418536:1621489,1621490,1621491,1621492,1621493 | 418537:1621494,1621495,1621496,1621497,1621498 | 418538:1621499,1621500,1621501,1621502 | 418539:1621503,1621504,1621505,1621506 | 418540:1621507,1621508,1621509,1621510 | 418541:1621511,1621512,1621513,1621514 | 418542:1621515,1621516,1621517,1621518,1621519 | 418543:1621520,1621521,1621522,1621523 | 418544:1621524,1621525,1621526,1621527 | 418545:1621528,1621529,1621530,1621531 | 418546:1621532,1621533,1621534,1621535,1621536 | 418547:1621537,1621538,1621539,1621540 | 418548:1621541,1621542,1621543,1621544,1621545 | 418549:1621546,1621547,1621548,1621549 | 418550:1621550,1621551,1621552,1621553\" \/>\n\t<input type=\"hidden\" name=\"no_ajax\" value=\"0\">\t\t\t<\/form>\n\t<p>&nbsp;<\/p>\n<\/div>\n\n<script type=\"text\/javascript\">\n\/\/jQuery(document).ready(function(){\ndocument.addEventListener(\"DOMContentLoaded\", function(event) { \t\nvar question_ids = \"418511,418512,418513,418514,418515,418516,418517,418518,418519,418520,418521,418522,418523,418524,418525,418526,418527,418528,418529,418530,418531,418532,418533,418534,418535,418536,418537,418538,418539,418540,418541,418542,418543,418544,418545,418546,418547,418548,418549,418550\";\nWatuPROSettings[10578] = {};\nWatuPRO.qArr = question_ids.split(',');\nWatuPRO.exam_id = 10578;\t    \nWatuPRO.post_id = 108809;\nWatuPRO.store_progress = 0;\nWatuPRO.curCatPage = 1;\nWatuPRO.requiredIDs=\"0\".split(\",\");\nWatuPRO.hAppID = \"0.75627500 1776776098\";\nvar url = \"https:\/\/www.dumpsbase.com\/freedumps\/wp-content\/plugins\/watupro\/show_exam.php\";\nWatuPRO.examMode = 
1;\nWatuPRO.siteURL=\"https:\/\/www.dumpsbase.com\/freedumps\/wp-admin\/admin-ajax.php\";\nWatuPRO.emailIsNotRequired = 0;\nWatuPROIntel.init(10578);\nWatuPRO.inCategoryPages=1;});    \t \n<\/script>\n<p>&nbsp;<\/p>\n<h3>Continue to check the <a href=\"https:\/\/www.dumpsbase.com\/freedumps\/amazon-dea-c01-free-dumps-part-2-q41-q70-are-also-available-online-helping-you-check-the-aws-certified-data-engineer-associate-dumps-v10-02.html\"><span style=\"background-color: #00ffff;\"><em>Amazon DEA-C01 free dumps (Part 2, Q41-Q70) of V10.02<\/em><\/span><\/a> online.<\/h3>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>At DumpsBase, we believe that the best way to achieve success in the AWS Certified Data Engineer &#8211; Associate (DEA-C01) exam is to consistently practice with real exam questions and answers. The Amazon DEA-C01 dumps (V10.02) are available online, providing verified and accurate Amazon DEA-C01 exam questions to help you reduce exam-related stress and significantly 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[175,18249],"tags":[18250,18538],"class_list":["post-108809","post","type-post","status-publish","format-standard","hentry","category-amazon","category-data-engineer-associate","tag-amazon-dea-c01-dumps","tag-aws-certified-data-engineer-associate-dea-c01"],"_links":{"self":[{"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/posts\/108809","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/comments?post=108809"}],"version-history":[{"count":3,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/posts\/108809\/revisions"}],"predecessor-version":[{"id":110505,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/posts\/108809\/revisions\/110505"}],"wp:attachment":[{"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/media?parent=108809"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/categories?post=108809"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dumpsbase.com\/freedumps\/wp-json\/wp\/v2\/tags?post=108809"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}