Using SemPy in Fabric Notebooks to extract capacity consumption for DAX queries

When working with Power BI semantic models, it’s not always obvious how much capacity a single query actually consumes. If you’re running DAX queries against large models, understanding this usage is key to optimizing performance and managing capacity costs.
In this article, we’ll walk through how to use Fabric Notebooks with the SemPy library to execute DAX queries, capture their capacity unit (CU) consumption, and even extend the approach to analyze usage across workspaces or entire capacities.
So far, the only reliable way to check capacity consumption in Microsoft Fabric has been through the Fabric Capacity Metrics App. While the app does a decent job of visualizing usage, we can all agree it’s a bit cumbersome — it demands a solid understanding of capacity concepts, how the visuals work, where to apply filters, and usually several clicks before you find what you need.
A more direct approach is to query the semantic model that powers the Capacity Metrics App and extract exactly the data you want. This method requires some familiarity with the model’s structure and a bit of time to craft the appropriate queries, but it provides far more control. We covered different ways to connect and query the Metrics App’s semantic model in an earlier post.
One of the most common needs is to determine the capacity consumed by a specific visual — or more precisely, its underlying DAX query. While this information is available in the Metrics App, retrieving it can be challenging, especially on a busy capacity that’s serving requests from many users across your organization.
Let’s simplify this by programmatically tracking capacity consumption for a DAX query using Fabric Notebooks and the SemPy (Semantic Link Python) library. In this approach, we’ll first execute a DAX query against a semantic model, and then run another DAX query against the Capacity Metrics semantic model to retrieve the corresponding capacity unit usage data.
Prerequisites
Before getting started, make sure you have the following in place:
- Permissions: You should have the rights to create Fabric items.
- DAX query: A DAX query ready to execute against your target semantic model.
- Build permission to semantic models: You need Build permission on both the Fabric Capacity Metrics semantic model and the semantic model against which you'll run the DAX query.
- UTC_offset: You should know the value of the UTC_offset parameter in the Capacity Metrics semantic model. Your model will use timestamps adjusted to this offset, rather than the local time on your machine.
- Workspace setup: The workspace containing the Capacity Metrics semantic model should be hosted on a dedicated capacity (P SKU, F SKU, or Fabric Trial capacity).
- XMLA endpoint: The capacity’s XMLA endpoint should be enabled.
How do you get the DAX query?
There are several ways to capture the DAX query behind a report visual. You can extract it using Performance Analyzer in Power BI Desktop or Power BI Service (Web), through DAX Studio, or by using SQL Server Profiler. If you’ve set up a Workspace Monitoring Eventhouse, that’s another option as well. It’s perfectly fine to write one yourself, but test it in Power BI Desktop or DAX Studio to ensure it works.
Tracking Capacity Consumption
In Fabric, all operations are categorized as either interactive or background. DAX queries fall under interactive operations since they involve a user actively waiting for results.
In the Fabric Capacity Metrics App, capacity consumption details are available at the timepoint level on the TimePoint Detail page. You can access this page by using the Drill Through feature from the Utilization or Throttling visuals on the Compute page.

On the TimePoint Detail page, the capacity consumed by a query is displayed in the Interactive Operations for the selected time range visual. This visual also includes other helpful information, such as the operation's start and end times, the user who triggered it, its duration, the total capacity units (CUs) consumed, and the CUs billed for the selected timepoint.

For our purpose, the key metric is the total CUs consumed by the operation. We will use the DAX query behind this visual and customize it to extract the necessary data.
Capacity Consumption for all Interactive Operations on the Capacity
The base query for tracking CU usage by a DAX query returns the capacity consumption of all interactive operations on a capacity at a given timepoint. Once you understand how it works and what inputs it requires, it's easy to adjust for different scenarios.
// DAX query to return the capacity consumption by the interactive operations for a capacity during a TimePoint
DEFINE
    // Change the TimePoint MPARAMETER to get details of a desired TimePoint
    MPARAMETER 'TimePoint' =
        ( DATE ( 2025, 10, 4 ) + TIME ( 00, 00, 30 ) )
    // Provide a capacity ID
    MPARAMETER 'CapacitiesList' = { "<CAPACITY_ID>" }
    VAR __TimePointInteractiveOperations =
        SUMMARIZECOLUMNS (
            'Timepoint Interactive Detail'[Operation start time],
            'Timepoint Interactive Detail'[Operation end time],
            'Timepoint Interactive Detail'[Status],
            'Timepoint Interactive Detail'[Operation],
            'Timepoint Interactive Detail'[User],
            'Items'[Workspace Id],
            'Items'[Workspace name],
            'Items'[Item kind],
            'Items'[Item Id],
            'Items'[Item name],
            "Operation Duration (s)", CALCULATE ( SUM ( 'Timepoint Interactive Detail'[Duration (s)] ) ),
            "TimePoint CU", CALCULATE ( SUM ( 'Timepoint Interactive Detail'[Timepoint CU (s)] ) ),
            "Total CU", CALCULATE ( SUM ( 'Timepoint Interactive Detail'[Total CU (s)] ) )
        )

EVALUATE
    __TimePointInteractiveOperations

ORDER BY
    'Timepoint Interactive Detail'[Operation start time] DESC,
    'Timepoint Interactive Detail'[Operation end time]
Key points to understand this DAX query
- The MPARAMETER "TimePoint" drives the query. The result set includes only the interactive operations whose CU consumption is being smoothed within the defined TimePoint.
  - Smoothing distributes the consumed CUs across future timepoints.
  - Each timepoint in Fabric spans 30 seconds, so setting TimePoint anywhere between (00, 00, 30) and (00, 00, 59) will return the same results.
  - Every interactive operation is smoothed over at least 5 minutes (10 timepoints) and up to 64 minutes (128 timepoints). This means you don't need an exact timestamp to retrieve CU consumption for an operation; as long as your TimePoint falls within that smoothing window, you'll find the operation in the results.
  - The TimePoint you define in the query must follow the UTC_offset set in your Capacity Metrics semantic model, rather than your local time.
- The MPARAMETER "CapacitiesList" specifies which capacity (or capacities) to include in the result set.
- Operation Start & End Time shows when the operation began and was completed. The TimePoint parameter doesn't filter on these timestamps, but on the operations being smoothed during that timepoint, so it's normal to see operations outside the defined TimePoint.
- Status, Operation, and User indicate whether the operation succeeded or failed, the type of operation performed, and the user who triggered it.
- Operation Duration (s) represents the time taken for the operation — essentially, End Time – Start Time.
- TimePoint CU shows the capacity units billed for the specified timepoint.
- Total CU represents the total capacity units consumed by the operation. This is the key metric we’re after for identifying the CU consumption of a DAX query.
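Because every interactive operation is smoothed over at least five minutes, any TimePoint shortly after the operation finishes is guaranteed to surface it. A small sketch of that window math in plain Python (the function name and return shape are my own, derived from the smoothing rules above):

```python
from datetime import datetime, timedelta

def safe_timepoint_window(operation_end: datetime) -> tuple:
    """Return (earliest, latest) TimePoints guaranteed to include an operation.

    Billing starts at the next 30-second boundary after the operation ends,
    and every interactive operation is smoothed over at least 5 minutes,
    so any TimePoint inside this window will return the operation.
    """
    # Round the end time up to the next 30-second boundary
    if operation_end.second < 30:
        start = operation_end.replace(second=30, microsecond=0)
    else:
        start = operation_end.replace(second=0, microsecond=0) + timedelta(minutes=1)
    # Minimum smoothing window: 5 minutes (10 timepoints)
    return start, start + timedelta(minutes=5)

# An operation ending at 11:30:15 starts billing at the 11:30:30 timepoint
earliest, latest = safe_timepoint_window(datetime(2025, 10, 4, 11, 30, 15))
```

Operations smoothed over more than five minutes remain visible even later, so this window is conservative rather than exhaustive.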
You can easily customize this query to track CU consumption for a specific workspace, semantic model, or user by applying the appropriate filters. Here’s a sample run of this query in DAX Studio to give you a visual reference of the expected output.

Capacity Consumption for a DAX Query
Now that we understand how interactive operations and timepoints work in Fabric, the rest is pretty straightforward. We’ll use a Fabric Notebook with SemPy to execute a DAX query against a target semantic model, and then run a customized version of the base query we discussed earlier to retrieve the capacity unit consumption details for that DAX query.
Here’s the code snippet to run the DAX query against a semantic model.
# Import the Semantic Link Python library
import sempy.fabric as fabric
# Import other libraries and modules
from datetime import datetime, timedelta
import time
## ........ ENSURE THE USER NAME DISPLAYED MATCHES YOUR UPN; IF NOT, MANUALLY SET THE UPN IN THE dax_query_user VARIABLE ........ ##
# Get the name of the user running the notebook
# This user name will be used to filter the operations in the DAX query to the Capacity Metric semantic model
dax_query_user = mssparkutils.env.getUserName()
print(dax_query_user)
## ........ PROVIDE THE NAME OR ID OF YOUR TARGET WORKSPACE, TARGET SEMANTIC MODEL, THE CAPACITY ID OF THE TARGET WORKSPACE, AND THE DAX QUERY IN FOLLOWING VARIABLES ........ ##
target_workspace = "<TARGET_WORKSPACE_NAME_OR_ID>"
target_semantic_model = "<TARGET_SEMANTIC_MODEL_NAME_OR_ID>"
target_capacity_Id = "<CAPACITY_ID_OF_TARGET_WORKSPACE>"
# DAX query to run against the target semantic model
target_dax_query = """
<DAX_QUERY_TO_RUN>
"""
# Execute DAX query against the target semantic model
dax_result = fabric.evaluate_dax(
    workspace = target_workspace,
    dataset = target_semantic_model,
    dax_string = target_dax_query
)
display(dax_result)
A few pointers about this code snippet
- dax_query_user retrieves and stores the username of the person running the Fabric Notebook. The DAX query is executed under this user's identity, which we'll later reference to filter operations in the Capacity Metrics semantic model to this specific user.
- The target_ variables hold the details required to execute the DAX query.
- evaluate_dax is the SemPy function used to run the DAX query.
Note: It’s a good idea to verify the output of the dax_query_user variable, as it may change over time and might not always return the correct UPN of the user executing the notebook.
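One way to automate that verification is a loose shape check on the returned value (a sketch; looks_like_upn is my own helper, not part of SemPy or mssparkutils):

```python
import re

def looks_like_upn(value: str) -> bool:
    """Loose check: a UPN is shaped like an e-mail address (user@tenant.domain)."""
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value))

# In the notebook this would come from mssparkutils.env.getUserName()
dax_query_user = "someone@contoso.com"

if not looks_like_upn(dax_query_user):
    # Fall back to a manually supplied UPN so the User filter still matches
    dax_query_user = "<YOUR_UPN>"
```

If the check fails, the User filter in the third snippet would silently return no rows, so failing fast here saves a confusing empty result later.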
An important point to remember is to provide a relevant TimePoint value when retrieving operation details. As mentioned earlier, the query only returns operations that are being billed during the specified TimePoint. You don't need an exact TimePoint value as long as it falls within the range in which the operation is being smoothed.
Another key detail is that billing for an operation begins at the next 30-second interval. For example, if an operation ends at 11:30:15, its billing will not start until 11:30:30. Therefore, it's crucial to reference the relevant TimePoint in your query to obtain the CU details of the operation.
The second code snippet handles this automatically. Once the target DAX query finishes execution, it takes the current UTC time, rounds it up to the next 30-second mark, and formats it to match the DAX syntax (DATE() + TIME()). If needed, you can also set the TimePoint manually using the operation_TimePoint variable.
# The billing of Capacity Units starts from the beginning of the next TimePoint i.e., next 30 seconds mark.
# To automate the TimePoint in the DAX query to the Capacity Metric semantic model:
# 1. Get the current utc time in the dax_query_execution_time variable
# 2. Round up the time to the next 30 second mark
# 3. Format the time to fit the DAX query as DAX DATE() + TIME()
# This time will be used as the TimePoint in the DAX query to the Capacity Metric semantic model.
# 1. Get the current utc time in the dax_query_execution_time variable
dax_query_execution_time = datetime.utcnow()
# 2. Round up the time to the next 30 second mark
rounded_seconds = 30 if dax_query_execution_time.second < 30 else 0
extra_minutes = 1 if dax_query_execution_time.second >= 30 else 0
rounded_time = dax_query_execution_time.replace(
    second=rounded_seconds,
    microsecond=0
) + timedelta(minutes=extra_minutes)
# 3. Format the time to fit the DAX query as DAX DATE() + TIME()
operation_TimePoint = f"DATE({rounded_time.year}, {rounded_time.month}, {rounded_time.day}) + TIME({rounded_time.hour}, {rounded_time.minute}, {rounded_time.second})"
## ........ ALTERNATIVELY, SET THE TIMEPOINT MANUALLY ........ ##
# operation_TimePoint = "DATE(YYYY, MM, DD) + TIME(HH, MM, SS)"
Note: The datetime.utcnow() function was tested with a Capacity Metrics semantic model where the UTC_offset parameter is set to 0. If your model uses a different UTC_offset value, you may need to adjust datetime.utcnow() accordingly to align with the correct timestamps.
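For a non-zero UTC_offset, one way to align the timestamps (a sketch, not part of the original notebook; UTC_OFFSET_HOURS and to_model_time are my own names) is to shift the captured UTC time into the model's timeline before rounding:

```python
from datetime import datetime, timedelta

# Assumption: mirrors the UTC_offset parameter of your Capacity Metrics model
UTC_OFFSET_HOURS = 5.5  # e.g. a model configured for IST; set to your value

def to_model_time(utc_now: datetime, utc_offset_hours: float) -> datetime:
    """Shift a UTC timestamp into the model's UTC_offset timeline."""
    return utc_now + timedelta(hours=utc_offset_hours)

# Use this in place of the bare datetime.utcnow() before the rounding step
dax_query_execution_time = to_model_time(datetime.utcnow(), UTC_OFFSET_HOURS)
```

With UTC_OFFSET_HOURS set to 0 this degenerates to the original datetime.utcnow() behavior.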
The third code snippet captures the capacity consumption of the DAX query. We'll do this by running a customized version of the base query against the Capacity Metrics semantic model. This query returns all operations that are being smoothed during the specified TimePoint, with filters that narrow the results to the target semantic model and the user who triggered the DAX query operation from the Fabric Notebook. Note that these filters use TREATAS against the Workspace Id and Item Id columns, so target_workspace and target_semantic_model should be set to IDs rather than names for the filters to match.
An important point to note is that it can take a few minutes for the operation and CU details to appear in the Capacity Metrics semantic model. The code snippet includes a sleep function to introduce a delay before running the DAX query. In most cases, a 6-minute delay works well; however, you may need to increase this value if CU details don’t appear in the results, even after verifying that all filters are correctly set.
# The details of an operation appear in the Capacity Metrics semantic model after approximately 6 minutes
# The waitTime variable is used to set the sleep timer before running the DAX query to the Capacity Metric semantic model
# Set wait time in seconds
waitTime = 6 * 60 # 6 minutes
## ........ PROVIDE THE NAME OR ID OF YOUR FABRIC CAPACITY METRICS WORKSPACE AND FABRIC CAPACITY METRICS SEMANTIC MODEL ........ ##
capacity_metric_workspace = "<CAPACITY_METRIC_WORKSPACE_NAME_OR_ID>"
capacity_metric_semantic_model = "<CAPACITY_METRIC_SEMANTIC_MODEL_NAME_OR_ID>"
# DAX query to extract capacity consumption data from the capacity metrics semantic model
capacity_consumption_dax_query = f"""
// DAX query to return the capacity consumption by the interactive operation for a capacity during a TimePoint
DEFINE
    // Change the TimePoint MPARAMETER to get details of a desired TimePoint
    MPARAMETER 'TimePoint' =
        {operation_TimePoint}
    // Provide a capacity ID
    MPARAMETER 'CapacitiesList' = {{ "{target_capacity_Id}" }}
    VAR __WorkspaceId =
        TREATAS ( {{ "{target_workspace}" }}, 'Items'[Workspace Id] )
    VAR __SemanticModelId =
        TREATAS ( {{ "{target_semantic_model}" }}, 'Items'[Item Id] )
    VAR __User =
        TREATAS ( {{ "{dax_query_user}" }}, 'Timepoint Interactive Detail'[User] )
    VAR __TimePointInteractiveOperations =
        SUMMARIZECOLUMNS (
            'Timepoint Interactive Detail'[Operation start time],
            'Timepoint Interactive Detail'[Operation end time],
            'Timepoint Interactive Detail'[Status],
            'Timepoint Interactive Detail'[Operation],
            'Timepoint Interactive Detail'[User],
            'Items'[Workspace Id],
            'Items'[Workspace name],
            'Items'[Item kind],
            'Items'[Item Id],
            'Items'[Item name],
            __WorkspaceId,
            __SemanticModelId,
            __User,
            "Operation Duration (s)", CALCULATE ( SUM ( 'Timepoint Interactive Detail'[Duration (s)] ) ),
            "TimePoint CU", CALCULATE ( SUM ( 'Timepoint Interactive Detail'[Timepoint CU (s)] ) ),
            "Total CU", CALCULATE ( SUM ( 'Timepoint Interactive Detail'[Total CU (s)] ) )
        )

EVALUATE
    __TimePointInteractiveOperations

ORDER BY
    'Timepoint Interactive Detail'[Operation start time] DESC,
    'Timepoint Interactive Detail'[Operation end time]
"""
# Wait (waitTime seconds) before running the DAX query to the Capacity Metric semantic model to ensure operation details are available.
print(f"Waiting for {waitTime} seconds before running the DAX query to the Capacity Metric semantic model to ensure operation details are available.")
time.sleep(waitTime)
# Execute DAX query to extract capacity consumption data from the capacity metrics semantic model
capacity_consumption_result = fabric.evaluate_dax(
    workspace = capacity_metric_workspace,
    dataset = capacity_metric_semantic_model,
    dax_string = capacity_consumption_dax_query
)
display(capacity_consumption_result)
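If you'd rather not gamble on a fixed sleep, you could poll until the operation shows up. A generic sketch (poll_until_rows is my own helper; fetch stands in for a zero-argument wrapper around the fabric.evaluate_dax call above):

```python
import time

def poll_until_rows(fetch, timeout_s=900, interval_s=60):
    """Call fetch() every interval_s seconds until it returns a non-empty
    result or timeout_s elapses; returns the last result (possibly empty)."""
    deadline = time.monotonic() + timeout_s
    result = fetch()
    while len(result) == 0 and time.monotonic() < deadline:
        time.sleep(interval_s)
        result = fetch()
    return result

# Usage sketch in the notebook:
# result = poll_until_rows(
#     lambda: fabric.evaluate_dax(
#         workspace=capacity_metric_workspace,
#         dataset=capacity_metric_semantic_model,
#         dax_string=capacity_consumption_dax_query,
#     )
# )
```

This returns as soon as the operation lands in the model instead of always waiting the full six minutes, and the timeout caps the worst case.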
⚠️ Note
The DAX query was written and tested against the Fabric Capacity Metrics App version 1031 (updated on 11 Sep 2025). It may not work with older or newer versions, as the semantic model structure and object names are subject to change.
Running these code snippets is straightforward:
- Create a new Fabric Notebook.
- Attach a Lakehouse to it.
- Paste the code snippets into cell blocks, replace the placeholders with your values and data, and you’re ready to go.

Note: DAX queries generated by report visuals or executed via the REST API appear as operation type “Query.” In contrast, DAX queries run through DAX Studio, Fabric Notebooks, or similar tools appear as “XMLA Read Operation” as they connect to the semantic model via the XMLA endpoint.
You can find the complete Fabric notebook here.
That’s all for this article! Let us know how it worked for you or if you discovered any other useful variations.
Next Steps
If you’re exploring different ways to extract data from the Fabric Capacity Metrics App, check out our detailed guide covering various tools and methods to connect and retrieve data.
We also have an article that explains how to extract the memory size of all semantic models, either within a single capacity or across multiple capacities, from the Capacity Metrics App.
And if you’re interested in going deeper with SemPy, don’t miss our walkthrough on using SemPy to retrieve memory size and VertiPaq statistics for a semantic model.