You have a SQL pool in Azure Synapse.
You discover that some queries fail or take a long time to complete.
You need to monitor for transactions that have rolled back.
Which dynamic management view should you query?
Correct Answer:
A
You can use Dynamic Management Views (DMVs) to monitor your workload, including investigating query execution in a SQL pool.
If your queries are failing or taking a long time to complete, you can check whether any transactions are rolling back.
Example:
-- Monitor rollback
SELECT
    SUM(CASE WHEN t.database_transaction_next_undo_lsn IS NOT NULL THEN 1 ELSE 0 END),
    t.pdw_node_id,
    nod.[type]
FROM sys.dm_pdw_nodes_tran_database_transactions t
JOIN sys.dm_pdw_nodes nod ON t.pdw_node_id = nod.pdw_node_id
GROUP BY t.pdw_node_id, nod.[type]
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor#monitor-transaction-log-rollback
You have an alert on a SQL pool in Azure Synapse that uses the signal logic shown in the exhibit.
On the same day, failures occur at the following times:
- 08:01
- 08:03
- 08:04
- 08:06
- 08:11
- 08:16
- 08:19
The evaluation period starts on the hour.
At which times will alert notifications be sent?
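Because the exhibit with the signal logic is not reproduced here, the mechanics can only be illustrated under an assumption. The sketch below assumes the alert fires when more than 2 failures occur in a 10-minute aggregation period, evaluated every 10 minutes starting on the hour; the threshold, period, and frequency are assumptions, not taken from the exhibit.

```python
from datetime import datetime, timedelta

def fired_evaluations(failures, threshold, period, freq, start, end):
    """Return evaluation times at which the failure count in the trailing
    window exceeds the threshold (Greater than / Count semantics)."""
    alerts = []
    t = start + freq
    while t <= end:
        # Count failures in the trailing aggregation period [t - period, t)
        count = sum(t - period <= f < t for f in failures)
        if count > threshold:
            alerts.append(t)
        t += freq
    return alerts

# Failure times from the question (the date itself is arbitrary)
failures = [datetime(2021, 1, 1, 8, m) for m in (1, 3, 4, 6, 11, 16, 19)]

# Assumed signal logic: more than 2 failures per 10-minute period,
# evaluated every 10 minutes, with evaluation starting on the hour
alerts = fired_evaluations(failures, threshold=2,
                           period=timedelta(minutes=10),
                           freq=timedelta(minutes=10),
                           start=datetime(2021, 1, 1, 8, 0),
                           end=datetime(2021, 1, 1, 9, 0))
print([a.strftime("%H:%M") for a in alerts])  # ['08:10', '08:20']
```

Under these assumed settings, the 08:00-08:10 window holds four failures and the 08:10-08:20 window holds three, so notifications would go out at 08:10 and 08:20; different exhibit settings would shift these times.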
Correct Answer:
B
Reference:
https://docs.microsoft.com/en-us/azure/azure-sql/database/alerts-insights-configure-portal
You plan to monitor the performance of Azure Blob storage by using Azure Monitor.
You need to be notified when there is a change in the average time it takes for a storage service or API operation type to process requests.
For which two metrics should you set up alerts? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Correct Answer:
AB
๐ณ๏ธ
Success E2E Latency: The average end-to-end latency of successful requests made to a storage service or the specified API operation. This value includes the required processing time within Azure Storage to read the request, send the response, and receive acknowledgment of the response.
Success Server Latency: The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency.
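Since SuccessE2ELatency covers the full round trip while SuccessServerLatency covers only in-service processing, the difference between the two approximates the network and client share of a request. A minimal sketch with assumed (hypothetical) metric values:

```python
# Hypothetical averages for the same interval (assumed values, not real telemetry)
success_e2e_latency_ms = 45.0     # SuccessE2ELatency: full round trip
success_server_latency_ms = 12.0  # SuccessServerLatency: server processing only

# SuccessE2ELatency = SuccessServerLatency + network/client time, so the
# network share is the difference between the two metrics
network_latency_ms = success_e2e_latency_ms - success_server_latency_ms
print(network_latency_ms)  # 33.0
```

A large gap between the two metrics points at the network or the client rather than at Azure Storage itself, which is why alerting on both is useful.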
Reference:
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-scalable-app-verify-metrics
HOTSPOT -
You have an Azure data factory that has two pipelines named PipelineA and PipelineB.
PipelineA has four activities as shown in the following exhibit.
PipelineB has two activities as shown in the following exhibit.
You create an alert for the data factory that uses Failed pipeline runs metrics for both pipelines and all failure types. The metric has the following settings:
- Operator: Greater than
- Aggregation type: Total
- Threshold value: 2
- Aggregation granularity (Period): 5 minutes
- Frequency of evaluation: Every 5 minutes
Data Factory monitoring records the failures shown in the following table.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Correct Answer:
Box 1: No -
Only one failure occurred at this point.
Box 2: No -
Only two failures occurred within the 5-minute period, which does not exceed the threshold of 2.
Box 3: Yes -
More than two failures (three) occurred within the 5-minute period.
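The failure table from the exhibit is not reproduced here, so the evaluation logic can only be sketched with hypothetical timestamps. The sketch below applies the stated settings (Total aggregation, Greater than, threshold 2, 5-minute period) to failures from both pipelines combined; the specific failure times are assumptions.

```python
from datetime import datetime, timedelta

def alert_fires(failure_times, eval_time, threshold=2,
                period=timedelta(minutes=5)):
    """Total failed pipeline runs (PipelineA and PipelineB combined) in the
    trailing 5-minute period, compared as Greater than / threshold 2."""
    total = sum(eval_time - period <= t < eval_time for t in failure_times)
    return total > threshold

# Hypothetical failure times across both pipelines
failures = [datetime(2021, 3, 1, 10, m) for m in (1, 2, 6, 7, 8)]

print(alert_fires(failures, datetime(2021, 3, 1, 10, 5)))   # False: 2 failures, not > 2
print(alert_fires(failures, datetime(2021, 3, 1, 10, 10)))  # True: 3 failures in window
```

Note the threshold semantics: with Greater than and a threshold of 2, exactly two failures in the period does not fire the alert, which is what makes Box 2 a No.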
Reference:
https://docs.microsoft.com/en-us/azure/azure-sql/database/alerts-insights-configure-portal
You have an enterprise data warehouse in Azure Synapse Analytics named DW1 on a server named Server1.
You need to verify whether the size of the transaction log file for each distribution of DW1 is smaller than 160 GB.
What should you do?
Correct Answer:
A
The following query returns the transaction log size on each distribution. If one of the log files is reaching 160 GB, you should consider scaling up your instance or limiting your transaction size.
-- Transaction log size
SELECT
    instance_name AS distribution_db,
    cntr_value*1.0/1048576 AS log_file_size_used_GB,
    pdw_node_id
FROM sys.dm_pdw_nodes_os_performance_counters
WHERE
    instance_name LIKE 'Distribution_%'
    AND counter_name = 'Log File(s) Used Size (KB)'
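The conversion in the query divides by 1,048,576 because the counter reports kilobytes and 1 GB = 1,048,576 KB. A minimal sketch of the same check against the 160 GB limit, using an assumed counter value:

```python
def log_file_size_gb(cntr_value_kb):
    # The counter reports 'Log File(s) Used Size (KB)'; 1 GB = 1,048,576 KB
    return cntr_value_kb / 1048576

# Hypothetical counter reading for one distribution database
used_kb = 41943040  # 40 GB worth of KB
size_gb = log_file_size_gb(used_kb)
print(size_gb)        # 40.0
print(size_gb < 160)  # True: below the 160 GB threshold
```

If any distribution approaches 160 GB, the guidance above applies: scale up the instance or limit transaction size.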
Reference:
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-manage-monitor
HOTSPOT -
You need to collect application metrics, streaming query events, and application log messages for an Azure Databricks cluster.
Which type of library and workspace should you implement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Correct Answer:
You can send application logs and metrics from Azure Databricks to a Log Analytics workspace. It uses the Azure Databricks Monitoring Library, which is available on GitHub.
References:
https://docs.microsoft.com/en-us/azure/architecture/databricks-monitoring/application-logs
HOTSPOT -
You have an Azure Cosmos DB database.
You need to use Azure Stream Analytics to check for uneven distributions of queries that can affect performance.
Which two settings should you configure? To answer, select the appropriate settings in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Correct Answer:
PartitionKeyStatistics: Select this option to log the statistics of the partition keys. This is currently represented with the storage size (KB) of the partition keys.
PartitionKeyRUConsumption: This log reports the aggregated, per-second RU/s consumption of partition keys. Currently, Azure Cosmos DB reports partition keys for SQL API accounts only, and only for point read/write and stored procedure operations; other APIs and operation types are not supported, and for them the partition key column in the diagnostic log table will be empty. This log contains data such as subscription ID, region name, database name, collection name, partition key, operation type, and request charge.
Note:
How to get partition key statistics to evaluate skew across top 3 partitions for a database account:
AzureDiagnostics
| where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="PartitionKeyStatistics"
| project SubscriptionId, regionName_s, databaseName_s, collectionName_s, partitionKey_s, sizeKb_d, ResourceId
Incorrect Answers:
DataPlaneRequests: Select this option to log back-end requests to the SQL API accounts in Azure Cosmos DB.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/cosmosdb-monitor-resource-logs
DRAG DROP -
You are implementing an Azure Blob storage account for an application that has the following requirements:
- Data created during the last 12 months must be readily accessible.
- Blobs older than 24 months must use the lowest storage costs. This data will be accessed infrequently.
- Data created 12 to 24 months ago will be accessed infrequently but must be readily accessible at the lowest storage costs.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
Correct Answer:
Step 1: Create a block blob in a Blob storage account
First create the block blob.
Azure Blob storage lifecycle management offers a rich, rule-based policy for GPv2 and Blob storage accounts.
Step 2: Use an Azure Resource Manager template that has a lifecycle management policy
Step 3: Create a rule that has the rule actions of TierToCool and TierToArchive
Each rule definition includes a filter set and an action set. The filter set limits rule actions to a certain set of objects within a container or to object names.
Note: You can add a rule through the Azure portal:
1. Sign in to the Azure portal.
2. Search for and select your storage account.
3. Under Blob service, select Lifecycle Management to view or change your rules.
4. Select the List View tab.
5. Select Add a rule and name your rule on the Details form. You can also set the Rule scope, Blob type, and Blob subtype values.
6. Select Base blobs to set the conditions for your rule. For example, blobs are moved to cool storage if they haven't been modified for 30 days.
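The ARM-template approach in Step 2 ultimately delivers a lifecycle policy document like the sketch below. It maps the requirements to days (assuming 12 months ≈ 365 days and 24 months ≈ 730 days); the rule name is hypothetical, and the action names follow the lifecycle policy schema.

```json
{
  "rules": [
    {
      "name": "tier-aging-blobs",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 365 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 730 }
          }
        }
      }
    }
  ]
}
```

Here tierToCool keeps 12-to-24-month-old data readily accessible at lower cost, and tierToArchive moves data older than 24 months to the cheapest tier, accepting slower retrieval.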
Incorrect Answers:
- Schedule the lifecycle management policy to run:
You do not schedule the lifecycle management policy to run. The platform runs the lifecycle policy once a day, and once you configure a policy, it can take up to 24 hours for some actions to run for the first time.
- Create a rule filter:
No rule filter is needed. Rule filters limit rule actions to a subset of blobs within the storage account.
Reference:
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts
You have an Azure Cosmos DB database that uses the SQL API.
You need to delete stale data from the database automatically.
What should you use?
Correct Answer:
D
With Time to Live (TTL), Azure Cosmos DB provides the ability to delete items automatically from a container after a certain time period. You set time to live at the container level and can override the value on a per-item basis. After you set the TTL at the container or item level, Azure Cosmos DB automatically removes these items once that time period has elapsed since they were last modified.
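The container-level default and per-item override can be sketched as follows. The container and item names are hypothetical; `defaultTtl` and `ttl` are in seconds, with the item value overriding the container default.

```json
{
  "container": {
    "id": "ordersContainer",
    "partitionKey": { "paths": [ "/customerId" ] },
    "defaultTtl": 2592000
  },
  "item": {
    "id": "order-1001",
    "customerId": "c42",
    "ttl": 86400
  }
}
```

With this configuration, items in the container expire 30 days (2,592,000 seconds) after their last modification by default, while `order-1001` expires after just 1 day because its own `ttl` takes precedence.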
References:
https://docs.microsoft.com/en-us/azure/cosmos-db/time-to-live