What I learned

Never call spark.stop() in Databricks Workflows. If you need to end the run explicitly, call sys.exit() instead.

Context

I ran into a bug on my team where a pipeline (which was vibe-coded) kept running even after reaching the end of the file, because spark.stop() had been called.

This happens because spark.stop() only stops the Spark session (which would be enough in a vanilla JVM environment), but whether a Databricks Workflows run actually terminates depends on configuration and cluster policies, so the job can keep running after the session is gone. There is more information in the Databricks documentation linked below.
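A minimal sketch of the fix, assuming a Python job script; run_pipeline and the tiny DataFrame step inside it are placeholders for the real pipeline logic:

```python
import sys

from pyspark.sql import SparkSession


def run_pipeline(spark: SparkSession) -> None:
    # Placeholder for the real pipeline logic.
    row_count = spark.range(10).count()
    print(f"Processed {row_count} rows")


if __name__ == "__main__":
    # On Databricks the platform provides the session; getOrCreate() returns it.
    spark = SparkSession.builder.getOrCreate()
    run_pipeline(spark)

    # Don't do this: stopping the platform-managed session can leave the
    # Workflows run hanging instead of ending it.
    # spark.stop()

    # End the run explicitly instead.
    sys.exit(0)
```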

Source: Databricks Knowledge Base