Pass Your Microsoft Fabric Analytics Engineer DP-600 Exam with Confidence Using DP-600 Dumps (V13.02) – You Can Check DP-600 Free Dumps (Part 2, Q41-Q70) First

Passing the Microsoft Fabric Analytics Engineer DP-600 exam earns you the Microsoft Certified: Fabric Analytics Engineer Associate certification, your gateway to proving your expertise in data analytics, Power BI, and advanced Microsoft Fabric solutions. To pass successfully, you can use the Microsoft DP-600 dumps (V13.02) as your learning materials. DumpsBase's DP-600 dumps (V13.02) come with verified, up-to-date, and reliable questions and answers, giving you the ultimate edge in mastering the exam objectives and passing with confidence. To check the quality of the dumps, you can read our free dumps online. Previously, we shared the DP-600 free dumps (Part 1, Q1-Q40); today we continue with the DP-600 free dumps (Part 2, Q41-Q70).

Below are the DP-600 free dumps (Part 2, Q41-Q70) to help you check more of the questions:

1. HOTSPOT

You need to migrate the Research division data for Productline2. The solution must meet the data preparation requirements.

How should you complete the code? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

2. Which syntax should you use in a notebook to access the Research division data for Productline1?

3. HOTSPOT

You need to migrate the Research division data for Productline1. The solution must meet the data preparation requirements.

How should you complete the code? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

4. You need to refresh the Orders table of the Online Sales department. The solution must meet the semantic model requirements.

What should you include in the solution?

5. Which syntax should you use in a notebook to access the Research division data for Productline1?

6. Prepare data

Testlet 2

Case study

This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.

To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study

To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Overview

Litware, Inc. is a manufacturing company that has offices throughout North America. The analytics team at Litware contains data engineers, analytics engineers, data analysts, and data scientists.

Existing Environment

Fabric Environment

Litware has been using a Microsoft Power BI tenant for three years. Litware has NOT enabled any Fabric capacities and features.

Available Data

Litware has data that must be analyzed as shown in the following table.

The Product data contains a single table and the following columns.

The customer satisfaction data contains the following tables:

- Survey

- Question

- Response

For each survey submitted, the following occurs:

- One row is added to the Survey table.

- One row is added to the Response table for each question in the survey.

The Question table contains the text of each survey question. The third question in each survey response is an overall satisfaction score. Customers can submit a survey after each purchase.

User Problems

The analytics team has large volumes of data, some of which is semi-structured. The team wants to use Fabric to create a new data store.

Product data is often classified into three pricing groups: high, medium, and low. This logic is implemented in several databases and semantic models, but the logic does NOT always match across implementations.

Requirements

Planned Changes

Litware plans to enable Fabric features in the existing tenant. The analytics team will create a new data store as a proof of concept (PoC). The remaining Litware users will only get access to the Fabric features once the PoC is complete. The PoC will be completed by using a Fabric trial capacity.

The following three workspaces will be created:

- AnalyticsPOC: Will contain the data store, semantic models, reports, pipelines, dataflows, and notebooks used to populate the data store

- DataEngPOC: Will contain all the pipelines, dataflows, and notebooks used to populate OneLake

- DataSciPOC: Will contain all the notebooks and reports created by the data scientists

The following will be created in the AnalyticsPOC workspace:

- A data store (type to be decided)

- A custom semantic model

- A default semantic model

- Interactive reports

The data engineers will create data pipelines to load data to OneLake either hourly or daily depending on the data source. The analytics engineers will create processes to ingest, transform, and load the data to the data store in the AnalyticsPOC workspace daily. Whenever possible, the data engineers will use low-code tools for data ingestion. The choice of which data cleansing and transformation tools to use will be at the data engineers’ discretion.

All the semantic models and reports in the AnalyticsPOC workspace will use the data store as the sole data source.

Technical Requirements

The data store must support the following:

- Read access by using T-SQL or Python

- Semi-structured and unstructured data

- Row-level security (RLS) for users executing T-SQL queries

Files loaded by the data engineers to OneLake will be stored in the Parquet format and will meet Delta Lake specifications.

Data will be loaded without transformation in one area of the AnalyticsPOC data store. The data will then be cleansed, merged, and transformed into a dimensional model.

The data load process must ensure that the raw and cleansed data is updated completely before populating the dimensional model.

The dimensional model must contain a date dimension. There is no existing data source for the date dimension. The Litware fiscal year matches the calendar year. The date dimension must always contain dates from 2010 through the end of the current year.
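For illustration, here is a minimal PySpark sketch of one way to generate such a date dimension in a notebook. The table name DimDate and the extra columns are assumptions for the sketch, not values from the case study:

from pyspark.sql import functions as F

# Illustrative only: one row per date from 2010-01-01 through December 31 of the current year.
dim_date = spark.sql(
    "SELECT explode(sequence(to_date('2010-01-01'), "
    "make_date(year(current_date()), 12, 31), interval 1 day)) AS DateKey"
)
dim_date = dim_date.withColumn("Year", F.year("DateKey")).withColumn("Month", F.month("DateKey"))
dim_date.write.format("delta").mode("overwrite").saveAsTable("DimDate")  # hypothetical table name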

The product pricing group logic must be maintained by the analytics engineers in a single location. The pricing group data must be made available in the data store for T-SQL queries and in the default semantic model.

The following logic must be used (a code sketch of this logic appears after the list):

- List prices that are less than or equal to 50 are in the low pricing group.

- List prices that are greater than 50 and less than or equal to 1,000 are in the medium pricing group.

- List prices that are greater than 1,000 are in the high pricing group.
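As a quick illustration of the thresholds above, a PySpark sketch of the classification could look like the following. The DataFrame products_df and the column names ListPrice and PricingGroup are assumptions for the sketch (question 10 below asks for the equivalent T-SQL):

from pyspark.sql import functions as F

# Illustrative only: apply the three pricing-group thresholds in order.
products = products_df.withColumn(
    "PricingGroup",
    F.when(F.col("ListPrice") <= 50, "low")
     .when(F.col("ListPrice") <= 1000, "medium")  # reached only when ListPrice > 50
     .otherwise("high"),                          # ListPrice > 1,000
)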

Security Requirements

Only Fabric administrators and the analytics team must be able to see the Fabric items created as part of the PoC.

Litware identifies the following security requirements for the Fabric items in the AnalyticsPOC workspace:

- Fabric administrators will be the workspace administrators.

- The data engineers must be able to read from and write to the data store. No access must be granted to datasets or reports.

- The analytics engineers must be able to read from, write to, and create schemas in the data store. They also must be able to create and share semantic models with the data analysts and view and modify all reports in the workspace.

- The data scientists must be able to read from the data store, but not write to it. They will access the data by using a Spark notebook.

- The data analysts must have read access to only the dimensional model objects in the data store. They also must have access to create Power BI reports by using the semantic models created by the analytics engineers.

- The date dimension must be available to all users of the data store.

- The principle of least privilege must be followed.

Both the default and custom semantic models must include only tables or views from the dimensional model in the data store.

Litware already has the following Microsoft Entra security groups:

- FabricAdmins: Fabric administrators

- AnalyticsTeam: All the members of the analytics team

- DataAnalysts: The data analysts on the analytics team

- DataScientists: The data scientists on the analytics team

- DataEngineers: The data engineers on the analytics team

- AnalyticsEngineers: The analytics engineers on the analytics team

Report Requirements

The data analysts must create a customer satisfaction report that meets the following requirements:

- Enables a user to select a product to filter customer survey responses to only those submitted by customers who have purchased that product.

- Displays the average overall satisfaction score of all the surveys submitted during the last 12 months up to a selected date.

- Shows data as soon as the data is updated in the data store.

- Ensures that the report and the semantic model only contain data from the current and previous year.

- Ensures that the report respects any table-level security specified in the source data store.

- Minimizes the execution time of report queries.

What should you recommend using to ingest the customer data into the data store in the AnalyticsPOC workspace?

7. You need to recommend a solution to prepare the tenant for the PoC.

Which two actions should you recommend performing from the Fabric Admin portal? Each correct answer presents part of the solution. NOTE: Each correct answer is worth one point.

8. You need to implement the date dimension in the data store. The solution must meet the technical requirements.

What are two ways to achieve the goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

9. You need to ensure the data loading activities in the AnalyticsPOC workspace are executed in the appropriate sequence. The solution must meet the technical requirements.

What should you do?

10. HOTSPOT

You need to resolve the issue with the pricing group classification.

How should you complete the T-SQL statement? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

11. Prepare data

Question Set 3

You have a Fabric tenant that contains a machine learning model registered in a Fabric workspace.

You need to use the model to generate predictions by using the PREDICT function in a Fabric notebook.

Which two languages can you use to perform model scoring? Each correct answer presents a complete solution. NOTE: Each correct answer is worth one point.

12. You have a Fabric workspace that contains a DirectQuery semantic model. The model queries a data source that has 500 million rows.

You have a Microsoft Power BI report named Report1 that uses the model. Report1 contains visuals on multiple pages.

You need to reduce the query execution time for the visuals on all the pages.

What are two features that you can use? Each correct answer presents a complete solution. NOTE: Each correct answer is worth one point.

13. HOTSPOT

You have a Fabric tenant that contains two lakehouses.

You are building a dataflow that will combine data from the lakehouses.

The applied steps from one of the queries in the dataflow are shown in the following exhibit.

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.

14. You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains a table named Table1.

You are creating a new data pipeline.

You plan to copy external data to Table1. The schema of the external data changes regularly.

You need the copy operation to meet the following requirements:

- Replace Table1 with the schema of the external data.

- Replace all the data in Table1 with the rows in the external data.

You add a Copy data activity to the pipeline.

What should you do for the Copy data activity?

15. You have a Fabric tenant that contains a lakehouse.

You plan to query sales data files by using the SQL endpoint. The files will be in an Amazon Simple Storage Service (Amazon S3) storage bucket.

You need to recommend which file format to use and where to create a shortcut.

Which two actions should you include in the recommendation? Each correct answer presents part of the solution. NOTE: Each correct answer is worth one point.

16. You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains a subfolder named Subfolder1 that contains CSV files.

You need to convert the CSV files into the delta format that has V-Order optimization enabled.

What should you do from Lakehouse explorer?

17. You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains an unpartitioned table named Table1.

You plan to copy data to Table1 and partition the table based on a date column in the source data.

You create a Copy activity to copy the data to Table1.

You need to specify the partition column in the Destination settings of the Copy activity.

What should you do first?

18. You have source data in a folder on a local computer.

You need to create a solution that will use Fabric to populate a data store.

The solution must meet the following requirements:

- Support the use of dataflows to load and append data to the data store.

- Ensure that Delta tables are V-Order optimized and compacted automatically.

Which two types of data stores should you use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

19. HOTSPOT

You have a Fabric tenant that contains a lakehouse.

You are using a Fabric notebook to save a large DataFrame by using the following code.

df.write.partitionBy("year", "month", "day").mode("overwrite").parquet("Files/SalesOrder")

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:
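As background for the statements above, partitionBy writes the output as a nested folder hierarchy rather than a single file, and mode("overwrite") replaces any existing content at the target path. Assuming the path in the code, the layout would resemble (file names illustrative):

Files/SalesOrder/year=2024/month=1/day=15/part-00000-<id>.snappy.parquet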

20. You have a Fabric tenant that contains a data pipeline.

You need to ensure that the pipeline runs every four hours on Mondays and Fridays.

To what should you set Repeat for the schedule?

21. DRAG DROP

You are creating a dataflow in Fabric to ingest data from an Azure SQL database by using a T-SQL statement.

You need to ensure that any foldable Power Query transformation steps are processed by the Microsoft SQL Server engine.

How should you complete the code? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

22. HOTSPOT

You have a Fabric tenant that contains a lakehouse named Lakehouse1.

Lakehouse1 contains a table named Nyctaxi_raw. Nyctaxi_raw contains the following columns:

You create a Fabric notebook and attach it to Lakehouse1.

You need to use PySpark code to transform the data.

The solution must meet the following requirements:

- Add a column named pickupDate that will contain only the date portion of pickupDateTime.

- Filter the DataFrame to include only rows where fareAmount is a positive number that is less than 100.

How should you complete the code? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
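The answer area is not reproduced here, but a minimal PySpark sketch that satisfies both requirements, assuming the column names given above, could look like this:

from pyspark.sql import functions as F

df = spark.read.table("Nyctaxi_raw")

# Keep only the date portion of the timestamp, then keep fares strictly between 0 and 100.
df = (
    df.withColumn("pickupDate", F.col("pickupDateTime").cast("date"))
      .filter((F.col("fareAmount") > 0) & (F.col("fareAmount") < 100))
)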

23. HOTSPOT

You have a Fabric tenant.

You need to configure OneLake security for users shown in the following table.

The solution must follow the principle of least privilege.

Which permission should you assign to each user? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.

24. DRAG DROP

You are implementing a medallion architecture in a single Fabric workspace.

You have a lakehouse that contains the Bronze and Silver layers and a warehouse that contains the Gold layer.

You create the items required to populate the layers as shown in the following table.

You need to ensure that the layers are populated daily in sequential order such that Silver is populated only after Bronze is complete, and Gold is populated only after Silver is complete. The solution must minimize development effort and complexity.

What should you use to execute each set of items? To answer, drag the appropriate options to the correct items. Each option may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.

25. DRAG DROP

You are building a solution by using a Fabric notebook.

You have a Spark DataFrame assigned to a variable named df. The DataFrame returns four columns.

You need to change the data type of a string column named Age to integer. The solution must return a DataFrame that includes all the columns.

How should you complete the code? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.
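Although the drag-and-drop values are not shown, a common PySpark pattern for this, offered here as a sketch rather than the official answer, is withColumn with a cast, which returns all columns while replacing Age:

from pyspark.sql.functions import col

# Replaces the string Age column with an integer version; the other three columns pass through unchanged.
df = df.withColumn("Age", col("Age").cast("int"))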

26. HOTSPOT

You have an Azure Data Lake Storage Gen2 account named storage1 that contains a Parquet file named sales.parquet.

You have a Fabric tenant that contains a workspace named Workspace1.

Using a notebook in Workspace1, you need to load the content of the file to the default lakehouse. The solution must ensure that the content will display automatically as a table named Sales in Lakehouse explorer.

How should you complete the code? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
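The answer options are not shown, but a hedged PySpark sketch of one approach follows. The abfss URI is a placeholder, not a value from the question:

# Placeholder URI: substitute the real container and folder for storage1.
df = spark.read.parquet("abfss://<container>@storage1.dfs.core.windows.net/sales.parquet")

# Writing a managed Delta table makes it appear automatically as Sales under Tables in Lakehouse explorer.
df.write.format("delta").mode("overwrite").saveAsTable("Sales")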

27. You have a Fabric workspace named Workspace1 that contains a lakehouse named Lakehouse1.

In Workspace1, you create a data pipeline named Pipeline1.

You have CSV files stored in an Azure Storage account.

You need to add an activity to Pipeline1 that will copy data from the CSV files to Lakehouse1. The activity must support Power Query M formula language expressions.

Which type of activity should you add?

28. HOTSPOT

You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains a Delta table with eight columns.

You receive new data that contains the same eight columns and two additional columns.

You create a Spark DataFrame and assign the DataFrame to a variable named df. The DataFrame contains the new data.

You need to add the new data to the Delta table to meet the following requirements:

- Keep all the existing rows.

- Ensure that all the new data is added to the table.

How should you complete the code? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
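For reference, appending with Delta Lake schema evolution typically looks like the sketch below; the table name Table1 is an assumption, since the question does not name the table:

# mode("append") keeps all existing rows; mergeSchema adds the two new columns to the table schema.
df.write.format("delta").option("mergeSchema", "true").mode("append").saveAsTable("Table1")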

29. You have a Fabric tenant that contains a lakehouse.

You plan to use a visual query to merge two tables.

You need to ensure that the query returns all the rows in both tables.

Which type of join should you use?

30. DRAG DROP

You are implementing two dimension tables named Customers and Products in a Fabric warehouse.

You need to use slowly changing dimension (SCD) to manage the versioning of data.

The solution must meet the requirements shown in the following table.

Which type of SCD should you use for each table? To answer, drag the appropriate SCD types to the correct tables. Each SCD type may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.


 

