Hi all,
I've configured a custom Spark environment in Microsoft Fabric. Everything works fine functionally — the notebook runs, Spark starts, and I can process data as expected.
However, I’ve noticed that when using this custom environment, I do not get any of the usual startup logs in the Prelaunch (stdout), stderr-active, or stdout-active sections during the notebook’s early lifecycle. Specifically, I’m missing:
Diagnostic output like SAS token retrieval, Spark context injection, ZooKeeper init, etc.
Fabric-injected environment metadata (e.g., workspace ID, session ID)
Any logs prior to the first Python cell execution
When I run the exact same notebook in the default Fabric Spark environment, all of those logs are present. In contrast, my custom environment shows only the generic Prelaunch boilerplate:
Setting up env variables
Setting up job resources
Copying debugging information
Launching container
End of LogType:prelaunch.out
Spark itself is launching fine (confirmed via event logs and job success).
spark.livy.synapse.session-warmup.enabled is false in custom, true in default.
My custom environment includes all the expected spark.sql.extensions (even more than the default).
spark.plugins = org.apache.spark.microsoft.tools.api.plugin.MSToolsRedirectExecutorConsoleOutputPlugin does not appear to be set by default in my environment; I'm trying to add it manually.
What is responsible for emitting those startup logs? Is it the Docker image’s entrypoint script, or something else injected by Fabric?
What does Fabric require from a custom image in order to capture and redirect early log output into the notebook UI (stderr-active, stdout-active, etc.)?
Is there an officially recommended base image or bootstrap process that ensures Fabric-compatible diagnostics are preserved?
Any insights into what’s missing or how to fix this would be hugely appreciated.
Thanks!
Hi @HoneHealthMB ,
Thanks for reaching out to the Microsoft Fabric community forum.
Thanks for the detailed explanation — this is a great question and you've already done a solid job narrowing down the issue. Based on your description, it sounds like your custom Spark environment in Microsoft Fabric is not emitting early lifecycle logs because it bypasses some of the Fabric-specific bootstrap mechanisms.
Let me walk through what’s likely happening and how to resolve it.
In Microsoft Fabric, the early logs (like SAS token retrieval, Spark context injection, ZooKeeper init, etc.) are not emitted by Spark directly. Instead, they come from Fabric's internal runtime orchestration: the session warm-up process, the console-redirect plugin, and the entrypoint scripts baked into Fabric's base image, each covered below.
When using a custom environment, especially if it uses a different base image or overrides startup behavior, these components may not get initialized — resulting in missing logs in prelaunch.out, stderr-active, and stdout-active.
You've already identified the main causes: session warm-up is disabled (spark.livy.synapse.session-warmup.enabled=false), and the MSTools console-redirect plugin is absent from spark.plugins. Both play a role in suppressing the log output normally visible in the default environment.
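A quick way to confirm exactly which settings differ is to dump them from a notebook cell in both environments and compare the output. A minimal sketch, using only the key names already discussed in this thread ("<not set>" is just a fallback for missing keys):

# Run this cell in BOTH the default and the custom environment,
# then diff the two outputs. "spark" is the session Fabric
# pre-creates in every notebook.
keys = [
    "spark.livy.synapse.session-warmup.enabled",
    "spark.plugins",
    "spark.sql.extensions",
]
for key in keys:
    print(key, "=", spark.conf.get(key, "<not set>"))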
3. How to Fix This
Here’s how to bring back the diagnostics and logs in your custom Spark environment:
4. Enable session warm-up
You've noticed this is disabled:
spark.livy.synapse.session-warmup.enabled=false
This warm-up process initializes components that emit early diagnostics and environmental metadata. You should enable it by setting:
spark.livy.synapse.session-warmup.enabled=true
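The canonical place to set this is the Spark properties tab of your custom environment. For a quick session-level test, Fabric notebooks also accept the %%configure magic in the first cell; whether this particular property is honored at session scope is an assumption worth verifying:

%%configure
{
    "conf": {
        "spark.livy.synapse.session-warmup.enabled": "true"
    }
}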
5. Add the console-redirect plugin
This plugin redirects executor and Spark context output to the notebook UI:
spark.plugins=org.apache.spark.microsoft.tools.api.plugin.MSToolsRedirectExecutorConsoleOutputPlugin
If not set, logs won’t be redirected properly, especially during executor startup.
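Keep in mind that spark.plugins is a static setting: it must be in place before the session starts (via the environment's Spark properties or %%configure), and it takes a comma-separated list. If your environment already registers its own plugins, append rather than replace:

# com.example.ExistingPlugin is a hypothetical placeholder for any plugin you already register
spark.plugins=com.example.ExistingPlugin,org.apache.spark.microsoft.tools.api.plugin.MSToolsRedirectExecutorConsoleOutputPlugin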
6. Use a Fabric-compatible base image
Microsoft's default Spark environment in Fabric uses a custom base image that includes the entrypoint scripts, warm-up hooks, and MSTools plugins described above.
If you’re starting from a generic Spark base image (e.g., bitnami/spark or spark:latest), these components are missing.
7. Recommended action: Contact Microsoft support and request access to, or documentation for, the appropriate Fabric-compatible Spark base image. They may provide one like mcr.microsoft.com/synapse/spark, or internal documentation on the expected image structure.
8. Preserve the entrypoint scripts
Ensure your Docker image preserves or replicates Microsoft's entrypoint scripts, which typically emit the SAS token retrieval, environment-metadata, and ZooKeeper initialization messages you're missing.
If your custom image overrides the entrypoint with a simplified one, all of this early logging gets skipped.
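One way to narrow down where the pipeline breaks is a probe in the very first notebook cell. This is only a diagnostic sketch, assuming the notebook-to-UI redirection is separate from the entrypoint logging:

import sys

# If these two lines show up in stdout-active / stderr-active while the
# Fabric bootstrap messages still do not, the gap is in the image's
# entrypoint (pre-session) logging rather than in the notebook-to-UI
# redirection path.
print("stdout probe: first cell reached", flush=True)
print("stderr probe: first cell reached", file=sys.stderr, flush=True)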
Related documentation:
Monitor Apache Spark applications with Azure Log Analytics - Microsoft Fabric | Microsoft Learn
Collect your Apache Spark applications logs and metrics using Azure Event Hubs - Microsoft Fabric | ...
Create, Configure, and Use an Environment in Fabric - Microsoft Fabric | Microsoft Learn
If you found this post helpful, please consider giving it Kudos and marking it as the Accepted Solution to assist other members in finding it more easily.
Thank you.
Hi @HoneHealthMB ,
If your question has been answered, kindly mark the appropriate response as the Accepted Solution. This small step goes a long way in helping others with similar issues.
We appreciate your collaboration and support!
Best regards,
LakshmiNarayana
Hi @HoneHealthMB ,
If your issue has been resolved, please mark the most helpful reply as the Accepted Solution to close the thread. This helps ensure the discussion remains useful for other community members.
Thank you for your attention, and we look forward to your confirmation.
Best regards,
LakshmiNarayana
Hi @HoneHealthMB ,
May I ask if this issue has been resolved? If so, kindly mark the most helpful reply as Accepted and consider giving it Kudos. Doing so helps other community members with similar issues find solutions more quickly. If you're still facing challenges, feel free to let us know—we’ll be glad to assist you further.
Looking forward to your response.
Thank you for your cooperation.
Best regards,
LakshmiNarayana