spark.shuffle.maxChunksBeingTransferred controls the maximum number of chunks allowed to be transferred at the same time, and it defaults to Long.MAX_VALUE, so you are unlikely to hit that limit unless your dataset is really huge. I'm not sure the change you have in mind will actually fix your problem, but here is how to set it. You can set it either in your application code before the SparkContext is created, or on the spark-submit command line.
Programmatically, set it on the SparkConf before building the context:

val conf = new SparkConf().set("spark.shuffle.maxChunksBeingTransferred", "NEW_VALUE")
val sc = new SparkContext(conf)

Or via spark-submit, using the --conf flag (a bare --spark.shuffle.maxChunksBeingTransferred=... option is not recognized by spark-submit):

./bin/spark-submit <all your existing options> --conf spark.shuffle.maxChunksBeingTransferred=NEW_VALUE
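If you want the setting to apply to every job without editing each submit command, you can also put it in conf/spark-defaults.conf (NEW_VALUE is a placeholder for whatever limit you choose):

```properties
# conf/spark-defaults.conf
# Cap on chunks transferred concurrently by the shuffle service
spark.shuffle.maxChunksBeingTransferred  NEW_VALUE
```

Properties set this way are overridden by --conf flags and by values set on the SparkConf in code, so it works as a cluster-wide default.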