Microsoft DP-200 Exam Actual Questions (P. 19)

The questions for DP-200 were last updated on May 1, 2024.
  • Viewing page 19 out of 38 pages.
  • Viewing questions 181-190 out of 372 questions
Question #27 Topic 4

You have a SQL pool in Azure Synapse.
You discover that some queries fail or take a long time to complete.
You need to monitor for transactions that have rolled back.
Which dynamic management view should you query?

  • A. sys.dm_pdw_nodes_tran_database_transactions
  • B. sys.dm_pdw_waits
  • C. sys.dm_pdw_request_steps
  • D. sys.dm_pdw_exec_sessions

Correct Answer: A
You can use dynamic management views (DMVs) to monitor your workload, including investigating query execution in a SQL pool.
If your queries are failing or taking a long time to complete, you can check whether any transactions are rolling back.
Example:
-- Monitor rollback: a transaction with a non-null next-undo LSN is rolling back.
SELECT
    SUM(CASE WHEN t.database_transaction_next_undo_lsn IS NOT NULL THEN 1 ELSE 0 END) AS rolling_back_count,
    t.pdw_node_id,
    nod.[type]
FROM sys.dm_pdw_nodes_tran_database_transactions t
JOIN sys.dm_pdw_nodes nod ON t.pdw_node_id = nod.pdw_node_id
GROUP BY t.pdw_node_id, nod.[type];
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor#monitor-transaction-log-rollback

Question #28 Topic 4

You have an alert on a SQL pool in Azure Synapse that uses the signal logic shown in the exhibit.

On the same day, failures occur at the following times:
  • 08:01
  • 08:03
  • 08:04
  • 08:06
  • 08:11
  • 08:16
  • 08:19
The evaluation period starts on the hour.
At which times will alert notifications be sent?

  • A. 08:15 only
  • B. 08:10, 08:15, and 08:20
  • C. 08:05 and 08:10 only
  • D. 08:10 only
  • E. 08:05 only

Correct Answer: B
The signal logic exhibit is not reproduced here, but answer B is consistent with a rule that counts failures over a 10-minute window evaluated every 5 minutes, with the first complete window ending at 08:10: the window ending at 08:10 contains four failures (08:01, 08:03, 08:04, 08:06), the window ending at 08:15 contains two (08:06, 08:11), and the window ending at 08:20 contains three (08:11, 08:16, 08:19). Each evaluation crosses the failure threshold, so notifications are sent at 08:10, 08:15, and 08:20.
Reference:
https://docs.microsoft.com/en-us/azure/azure-sql/database/alerts-insights-configure-portal

Question #29 Topic 4

You plan to monitor the performance of Azure Blob storage by using Azure Monitor.
You need to be notified when there is a change in the average time it takes for a storage service or API operation type to process requests.
For which two metrics should you set up alerts? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. SuccessE2ELatency
  • B. SuccessServerLatency
  • C. UsedCapacity
  • D. Egress
  • E. Ingress

Correct Answer: AB
SuccessE2ELatency: the average end-to-end latency of successful requests made to a storage service or the specified API operation. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.
SuccessServerLatency: the average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.
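If these platform metrics are also routed to a Log Analytics workspace through a diagnostic setting, both averages can be charted side by side. A minimal sketch, assuming the metrics land in the standard AzureMetrics table:

AzureMetrics
| where MetricName in ("SuccessE2ELatency", "SuccessServerLatency")
| summarize AvgLatencyMs = avg(Average) by MetricName, bin(TimeGenerated, 5m)
| render timechart

A sustained gap between the two series points to network latency rather than service-side processing time.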
Reference:
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-scalable-app-verify-metrics

Question #30 Topic 4

HOTSPOT
You have an Azure data factory that has two pipelines named PipelineA and PipelineB.
PipelineA has four activities as shown in the following exhibit.

PipelineB has two activities as shown in the following exhibit.

You create an alert for the data factory that uses Failed pipeline runs metrics for both pipelines and all failure types. The metric has the following settings:
  • Operator: Greater than
  • Aggregation type: Total
  • Threshold value: 2
  • Aggregation granularity (Period): 5 minutes
  • Frequency of evaluation: Every 5 minutes
Data Factory monitoring records the failures shown in the following table.

For each of the following statements, select yes if the statement is true. Otherwise, select no.
NOTE: Each correct selection is worth one point.
Hot Area:


Correct Answer:
Box 1: No
Only one failure had occurred at that point, which does not exceed the threshold of 2.

Box 2: No
Only two failures occurred within the 5-minute window; the total must be greater than 2 to fire the alert.

Box 3: Yes
More than two failures (three) occurred within the 5-minute window.
Reference:
https://docs.microsoft.com/en-us/azure/azure-sql/database/alerts-insights-configure-portal

Question #31 Topic 4

You have an enterprise data warehouse in Azure Synapse Analytics named DW1 on a server named Server1.
You need to verify whether the size of the transaction log file for each distribution of DW1 is smaller than 160 GB.
What should you do?

  • A. On the master database, execute a query against the sys.dm_pdw_nodes_os_performance_counters dynamic management view.
  • B. From Azure Monitor in the Azure portal, execute a query against the logs of DW1.
  • C. On DW1, execute a query against the sys.database_files dynamic management view.
  • D. Execute a query against the logs of DW1 by using the Get-AzOperationalInsightsSearchResult PowerShell cmdlet.

Correct Answer: A
The following query returns the transaction log size on each distribution. If one of the log files is reaching 160 GB, you should consider scaling up your instance or limiting your transaction size.
-- Transaction log size: cntr_value is reported in KB; divide by 1048576 to convert to GB.
SELECT
    instance_name AS distribution_db,
    cntr_value * 1.0 / 1048576 AS log_file_size_used_GB,
    pdw_node_id
FROM sys.dm_pdw_nodes_os_performance_counters
WHERE instance_name LIKE 'Distribution_%'
    AND counter_name = 'Log File(s) Used Size (KB)';
Reference:
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-manage-monitor

Question #32 Topic 4

HOTSPOT
You need to collect application metrics, streaming query events, and application log messages for an Azure Databricks cluster.
Which type of library and workspace should you implement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:


Correct Answer:
You can send application logs and metrics from Azure Databricks to a Log Analytics workspace by using the Azure Databricks Monitoring Library, which is available on GitHub.
References:
https://docs.microsoft.com/en-us/azure/architecture/databricks-monitoring/application-logs

Question #33 Topic 4

HOTSPOT
You have an Azure Cosmos DB database.
You need to use Azure Stream Analytics to check for uneven distributions of queries that can affect performance.
Which two settings should you configure? To answer, select the appropriate settings in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:


Correct Answer:
Box 1: RIGHT
Use RANGE RIGHT for date boundaries. With RANGE RIGHT, each boundary value belongs to the partition on its right, so rows are assigned using >= against the boundary; with RANGE LEFT, each boundary value belongs to the partition on its left, so rows are assigned using <=.
Box 2: 20090101, 20100101, 20110101, 20120101
Four boundary values create five partitions, which spreads the data more evenly than three or two values would.
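As a sketch of the RANGE RIGHT behavior described above (the partition function name and the use of integer date keys are hypothetical):

-- Four boundary values create five partitions.
-- RANGE RIGHT places each boundary value in the partition to its right, so each
-- middle partition covers exactly one calendar year, e.g. [20090101, 20100101).
CREATE PARTITION FUNCTION pfYearlyDates (INT)
AS RANGE RIGHT FOR VALUES (20090101, 20100101, 20110101, 20120101);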
Reference:
https://medium.com/@selcukkilinc23/what-it-means-range-right-and-left-in-table-partitioning-2d654cb99ade

Question #34 Topic 4

HOTSPOT
You have an Azure Cosmos DB database.
You need to use Azure Stream Analytics to check for uneven distributions of queries that can affect performance.
Which two settings should you configure? To answer, select the appropriate settings in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:


Correct Answer:
PartitionKeyStatistics: Select this option to log the statistics of the partition keys. This is currently represented with the storage size (KB) of the partition keys.
PartitionKeyRUConsumption: This log reports the aggregated per-second RU/s consumption of partition keys. Currently, Azure Cosmos DB reports partition keys for SQL API accounts only, and only for point read/write and stored procedure operations; other APIs and operation types are not supported, and for them the partition key column in the diagnostic log table will be empty. This log contains data such as subscription ID, region name, database name, collection name, partition key, operation type, and request charge.
Note:
How to get partition key statistics to evaluate skew across the top three partitions for a database account:

AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DOCUMENTDB" and Category == "PartitionKeyStatistics"
| project SubscriptionId, regionName_s, databaseName_s, collectionName_s, partitionKey_s, sizeKb_d, ResourceId
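A similar query can rank partition keys by request-unit consumption to spot hot partitions. A sketch, assuming the PartitionKeyRUConsumption category is enabled and follows the same AzureDiagnostics column-naming convention (requestCharge_s arrives as a string, hence the todouble conversion):

AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DOCUMENTDB" and Category == "PartitionKeyRUConsumption"
| summarize TotalRUs = sum(todouble(requestCharge_s)) by partitionKey_s, bin(TimeGenerated, 1h)
| order by TotalRUs desc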
Incorrect Answers:
DataPlaneRequests: Select this option to log back-end requests to the SQL API accounts in Azure Cosmos DB.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/cosmosdb-monitor-resource-logs

Question #35 Topic 4

DRAG DROP
You are implementing an Azure Blob storage account for an application that has the following requirements:
  • Data created during the last 12 months must be readily accessible.
  • Blobs older than 24 months must use the lowest storage costs. This data will be accessed infrequently.
  • Data created 12 to 24 months ago will be accessed infrequently but must be readily accessible at the lowest storage costs.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:


Correct Answer:
Step 1: Create a block blob in a Blob storage account
First create the block blob.
Azure Blob storage lifecycle management offers a rich, rule-based policy for GPv2 and Blob storage accounts.
Step 2: Use an Azure Resource Manager template that has a lifecycle management policy
Step 3: Create a rule that has the rule actions of TierToCool and TierToArchive
Each rule definition includes a filter set and an action set. The filter set limits rule actions to a certain set of objects within a container or to object names.
Note: You can add a rule through the Azure portal:
1. Sign in to the Azure portal.
2. Search for and select your storage account.
3. Under Blob service, select Lifecycle Management to view or change your rules.
4. Select the List View tab.
5. Select Add a rule and name your rule on the Details form. You can also set the Rule scope, Blob type, and Blob subtype values.
6. Select Base blobs to set the conditions for your rule. For example, blobs are moved to cool storage if they haven't been modified for 30 days.
Incorrect Answers:
  • Schedule the lifecycle management policy to run: You do not schedule the lifecycle management policy to run. The platform runs the lifecycle policy once a day, and once you configure a policy, it can take up to 24 hours for some actions to run for the first time.
  • Create a rule filter: No rule filter is needed here. Rule filters limit rule actions to a subset of blobs within the storage account.
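Expressed as a lifecycle management policy, the rule might look like the following (a sketch; the rule name is hypothetical, and 365 and 730 days approximate the 12- and 24-month requirements):

{
  "rules": [
    {
      "enabled": true,
      "name": "tier-by-age",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 365 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 730 }
          }
        },
        "filters": { "blobTypes": [ "blockBlob" ] }
      }
    }
  ]
}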
Reference:
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts

Question #36 Topic 4

You have an Azure Cosmos DB database that uses the SQL API.
You need to delete stale data from the database automatically.
What should you use?

  • A. soft delete
  • B. Low Latency Analytical Processing (LLAP)
  • C. schema on read
  • D. Time to Live (TTL)

Correct Answer: D
With Time to Live (TTL), Azure Cosmos DB can delete items automatically from a container after a certain time period. You set the TTL at the container level and can override the value on a per-item basis. After the TTL is set at the container or item level, Azure Cosmos DB automatically removes these items once the time period has elapsed, measured from the time they were last modified.
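For example (a sketch; the id and status fields are hypothetical), with a container default TTL of 86400 seconds, an individual item can override it:

{
  "id": "order-1001",
  "status": "processed",
  "ttl": 3600
}

This item expires 3600 seconds after its last modification regardless of the container default, while an item with "ttl": -1 never expires as long as the container has a default TTL set.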
References:
https://docs.microsoft.com/en-us/azure/cosmos-db/time-to-live
