Exceeding capacity limits makes your Fabric environment fail hard; it makes Power BI etc. inaccessible. To protect my report layer, I am considering moving my reports to a separate capacity if possible. Is that a good idea? For example, replacing an F8 with two F4s.
In my case, where I want to separate the reports: what else do I need to have in the "reports capacity"? The warehouse and the semantic model?
Anyone with similar experiences?
Are you using Direct Lake or Import mode?
If you're using Direct Lake, you could create a new Lakehouse on your "report capacity", and use shortcuts to bring data into that Lakehouse. Then build a semantic model and report.
If you're using Import mode, you can have the semantic model and reports in a Pro-license workspace (shared capacity).
Direct Lake.
Do I understand it correctly that I could create shortcuts in a lakehouse pointing to my warehouse in a separate workspace, and then create a semantic model on top of the shortcut objects in the lakehouse?
Workspace A (capacity A):
- This is your existing workspace on the existing capacity.
- It contains your existing Warehouse A.
New capacity B, with new workspace B:
- Create a new Lakehouse B.
- Create shortcuts inside Lakehouse B that point to the tables in Warehouse A (the tables in Warehouse A are the target paths for the shortcuts). This means the tables from Warehouse A will be available inside Lakehouse B. (A scripted sketch of this step follows below.)
- Create a new, custom Direct Lake semantic model using the shortcut tables in Lakehouse B.
- Build reports on that semantic model.
Warehouse A (capacity A) -> shortcuts -> Lakehouse B (capacity B) -> semantic model (capacity B) -> reports (I think they can live anywhere, but put them on capacity B for simplicity. Anyway, it's the semantic model that consumes the resources.)
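If you'd rather script the shortcut step than click through the UI, OneLake shortcuts can also be created via the Fabric REST API. Here is a minimal sketch; the GUIDs, table name, and token are placeholders, and the request shape is worth double-checking against the current OneLake shortcuts API reference:

```python
import requests

# All IDs below are placeholders - substitute your own GUIDs and a valid
# Microsoft Entra bearer token with the Fabric API scope.
WORKSPACE_B = "<workspace-B-guid>"   # workspace that holds Lakehouse B
LAKEHOUSE_B = "<lakehouse-B-guid>"   # item id of Lakehouse B
WORKSPACE_A = "<workspace-A-guid>"   # workspace that holds Warehouse A
WAREHOUSE_A = "<warehouse-A-guid>"   # item id of Warehouse A
TOKEN = "<bearer-token>"

# Create a shortcut in Lakehouse B's Tables area that points at a table
# in Warehouse A (a OneLake-to-OneLake shortcut).
url = (f"https://api.fabric.microsoft.com/v1/workspaces/"
       f"{WORKSPACE_B}/items/{LAKEHOUSE_B}/shortcuts")
body = {
    "path": "Tables",            # where the shortcut appears in Lakehouse B
    "name": "DimCustomer",       # example table name - use your own
    "target": {
        "oneLake": {
            "workspaceId": WORKSPACE_A,
            "itemId": WAREHOUSE_A,
            "path": "Tables/dbo/DimCustomer",  # target path inside Warehouse A
        }
    },
}

resp = requests.post(url, json=body,
                     headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
print(resp.json())  # metadata of the created shortcut
```

Repeat per table (or loop over a list of table names); after that, the Direct Lake model in workspace B can use the shortcut tables directly.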
Thanks, I will give it a try.
Is the total capacity of two F4s equal to an F8?
Kind of, but not entirely.
You will be able to take greater advantage of smoothing when everything is on the same, big capacity. It's like having a bigger buffer reservoir.
However, if you run into problems on a single, bigger capacity, then it affects everything you have on that capacity.
When you split into two capacities, they are isolated from each other, so issues should not spill over. But each of them will run into problems more easily, because each is smaller than the one big capacity would have been.
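A toy illustration of that trade-off (my own simplification, not Fabric's actual smoothing/throttling logic): an F8 provides 8 CUs and an F4 provides 4, so two F4s have the same total, but the two halves cannot lend capacity to each other.

```python
# Toy model of the split (not Fabric's real smoothing/throttling algorithm):
# an F8 has 8 CUs, an F4 has 4, so 2 x F4 = 8 CUs in total - but the two
# halves cannot borrow from each other.
F8, F4 = 8, 4

spike = 6  # hypothetical burst needing 6 CUs at one moment

print("single F8:", "absorbed" if spike <= F8 else "throttled")        # absorbed
print("one of two F4s:", "absorbed" if spike <= F4 else "throttled")   # throttled

# The flip side: a runaway job that saturates F4 "A" cannot slow down
# anything on F4 "B", while on a single F8 it competes with everything.
```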
Have you checked the Capacity Metrics app to see what caused the F8 capacity to fail? Background operations or interactive operations?
Both of these capacities are toy capacities. An F64 has only 25 GB of RAM. That is considered tiny.
And an F64 is approx. AUD $9,000/month - is that correct?
No, that would have been a P1. F SKUs are consumption based. The cost will be whatever your usage is.
"F SKUs are consumption based. The cost will be whatever your usage is."
No.
The cost is a fixed hourly rate.
The fixed hourly rate depends on which F SKU size you choose.
The fixed hourly price also depends on which Azure region you choose to deploy your capacity in.
https://azure.microsoft.com/en-us/pricing/details/microsoft-fabric/
If our usage (utilization %) is 20%, or 0%, or 80%, or 100%, we will pay the same price. We will pay the fixed hourly price, irrespective of our consumption.
The price of an F64 is ~9000 USD/month with pay-as-you-go option (~13 USD/hour), or ~6000 USD/month with reservation option (~8 USD/hour).
But the price depends on which Azure region you use. So the F64 price can be higher or lower than what I mentioned, depending on your chosen region.
A reserved F64 (or more precisely: a reservation of 64 CUs) is approximately the same as a P1.
In general, the hourly rate for reservation capacity is ~40% cheaper than pay-as-you-go. With reservation, I think you are committed for 1 year minimum.
However: with the pay-as-you-go, it is possible to pause the capacity, and only pay for the hours when the capacity is not paused.
But pausing the capacity also means you will be charged for any remaining throttling and smoothing.
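To put rough numbers on the two options, using the illustrative ~13 and ~8 USD/hour figures above (your region's rates will differ):

```python
# Ballpark F64 monthly cost from the hourly rates quoted above.
# Actual rates vary by Azure region - check the Azure pricing page.
HOURS_PER_MONTH = 730          # Azure's standard billing month

payg = 13.0                    # ~USD/hour, pay-as-you-go
reserved = 8.0                 # ~USD/hour, 1-year reservation

print(f"pay-as-you-go: ~{payg * HOURS_PER_MONTH:,.0f} USD/month")      # ~9,490
print(f"reserved     : ~{reserved * HOURS_PER_MONTH:,.0f} USD/month")  # ~5,840
```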
Microsoft Fabric has an array of capacities that you can buy. The capacities are split into Stock Keeping Units (SKU). Each SKU provides a different amount of computing power, measured by its Capacity Unit (CU) value. Refer to the Capacity and SKUs table to see how many CUs each SKU provides.
Microsoft Fabric operates on two types of SKUs:
- Azure - Billed per second with no commitment.
- Microsoft 365 - Billed monthly or yearly, with a monthly commitment.
Buy a Microsoft Fabric subscription - Microsoft Fabric | Microsoft Learn
You cannot buy P SKUs any more.
I see, the billing seems to be per second (and we can calculate a fixed rate per second).
But the principle remains the same:
If we buy an F64, we pay the fixed rate per second for the entire F64, regardless of whether our usage is 5% or 50% or 100%.
With the pay-as-you-go option, we can choose to pause the capacity when we don't need it to be available (running). We will not be billed for the compute in the period when it is paused. But as long as the capacity is not paused, we pay the full fixed rate per second.
(Pausing a capacity also means we will be charged for any remaining throttling and smoothing.)
Reserved capacity units (CUs) are ~40% cheaper than the pay-as-you-go rate, and the reservation is a commitment for 1 year. Even if we pause the capacity, the billing for reserved capacity units will not be paused. (Although there seems to be an option to cancel a reservation and get Azure credits in return - I haven't looked into the details and limitations regarding that.)
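The same arithmetic per second, with a pause window factored in (illustrative only, using the same ballpark F64 pay-as-you-go rate as above):

```python
# Per-second view of pay-as-you-go billing with a pause window (illustrative).
payg_hourly = 13.0                  # ~USD/hour for an F64, region-dependent
per_second = payg_hourly / 3600

day = 24 * 3600
paused = 10 * 3600                  # hypothetical: paused 10 h overnight
billed = day - paused

print(f"billed {billed} s -> ~{billed * per_second:.2f} USD for the day")  # ~182 USD
# A reservation keeps billing even while paused, and pausing a
# pay-as-you-go capacity still settles any outstanding smoothed usage.
```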
The amount of RAM depends on which Fabric workload we're looking at.
For example, Spark on an F64 has a lot more than 25 GB of RAM. https://learn.microsoft.com/en-us/fabric/data-engineering/spark-compute
For Power BI, the max memory limit on an F64 is 25 GB.
The "muscle power" on an F8 is generally a bit stronger than an F4. For the Power BI workload, though, they have the same max memory (both of them have a max Power BI memory of 3GB). But there are some differences, see the link as an example.
Generally a good idea, but you need to make sure that the workspaces you move don't have any leftover dependencies on workspaces you didn't move.
(Don't worry about the details - this all happens at the workspace level.)