
PySpark Performance Tuning

Advanced techniques for optimizing large stateful joins. Prevent executor memory spills by forcing Broadcast Hash Joins intelligently using AI hints in your pipeline canvas.

Important: Out Of Memory (OOM) Errors

Improperly tuned joins can trigger unrecoverable OOM errors on executor nodes. Before applying manual overrides, review the automated shuffle telemetry available in the Data Observability dashboard.

Mitigating Shuffle Overheads

The most expensive operation in any distributed computing framework is the network shuffle: the redistribution of data across nodes prior to a wide aggregation or join. DataFlow AI automates most tuning, but provides strict manual controls when required.

Broadcast Variables

When joining a massive 5 TB transaction table against a tiny 5 MB country dimension table, you should instruct the cluster not to shuffle the large side at all.

DataFlow AI allows you to attach a Broadcast() hint directly onto the visual node. The 5 MB table is copied in full to every active worker, so the terabyte-scale table streams through its partitions in parallel without a shuffle.
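Under the hood, a broadcast hint turns the join into a map-side lookup: each partition of the large table joins locally against a replicated copy of the small table. The sketch below illustrates that mechanics in plain Python; the table and column names (`country_dim`, `country_code`, `txn_id`) are illustrative, and the commented PySpark line shows the equivalent hint using the real `pyspark.sql.functions.broadcast` function.

```python
# Map-side (broadcast-style) join sketch: the small dimension table is
# replicated to every worker as a plain dict, so each partition of the
# large table joins locally with no network shuffle.
# In PySpark itself, the equivalent hint would look like:
#   df_large.join(broadcast(df_dim), "country_code")
# where broadcast comes from pyspark.sql.functions.

# The "5 MB" side: small enough to ship to every worker.
country_dim = {"US": "United States", "DE": "Germany", "JP": "Japan"}

def map_side_join(partition, dim):
    """Join one partition of the large table against the broadcast dict."""
    return [
        {**row, "country_name": dim.get(row["country_code"], "UNKNOWN")}
        for row in partition
    ]

# One partition of the "5 TB" transaction table (illustrative rows).
partition = [
    {"txn_id": 1, "country_code": "US", "amount": 9.99},
    {"txn_id": 2, "country_code": "JP", "amount": 4.50},
]

joined = map_side_join(partition, country_dim)
```

Because every worker holds the full dictionary, no row of the large table ever leaves its partition, which is exactly the shuffle the hint avoids.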

Dynamic Salting

If your aggregation is grouped by UserID, and 80% of your records belong to a single "Guest" account, the default hash partitioner assigns that entire 80% to a single partition, overloading one task while the rest of the cluster sits idle.

Our system provides automated query-injection techniques that append randomized salt values to the key, forcing an even redistribution of rows before the grouping phase.
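The effect of salting can be seen with a small simulation, a sketch assuming an 8-partition hash layout and an 80/20 "Guest" skew as described above (the salt-bucket count and key names are illustrative). Without a salt, every hot-key row hashes to the same partition; with a salt appended, the hot key fans out across partitions, and a second lightweight aggregation over the unsalted key merges the partial results.

```python
import random
from collections import Counter

NUM_PARTITIONS = 8
SALT_BUCKETS = 8  # hypothetical salt range; tune to the observed skew factor

# Skewed workload: 80% of rows carry the same hot "guest" key.
keys = ["guest"] * 8000 + [f"user{i}" for i in range(2000)]

def partition_of(key: str) -> int:
    """Default hash partitioner: partition = hash(key) mod NUM_PARTITIONS."""
    return hash(key) % NUM_PARTITIONS

# Default partitioning: every "guest" row lands on exactly one partition.
plain = Counter(partition_of(k) for k in keys)

# Salted partitioning: append a random salt so the hot key spreads out.
# The grouping then runs in two phases: partial aggregate on the salted
# key, followed by a cheap final aggregate on the original key.
salted = Counter(
    partition_of(f"{k}#{random.randrange(SALT_BUCKETS)}") for k in keys
)

hottest_plain = max(plain.values())    # one partition holds all 8000 guest rows
hottest_salted = max(salted.values())  # guest rows now spread across partitions
```

The trade-off is the extra aggregation pass: salting buys balanced partitions at the cost of combining partial results afterward, which is almost always cheaper than one straggler task processing 80% of the data.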


© 2026 DataFlow AI Docs