Ideally, you would use Snappy compression (the default), since Snappy-compressed Parquet files are splittable.
Snappy compresses less aggressively than gzip, so using it will significantly increase file size; if storage space is a concern, that trade-off needs to be considered.
To override the default Snappy compression, pass .option("compression", "gzip") to the DataFrame writer.
If you need to resize/repartition your Dataset/DataFrame/RDD, call .coalesce(<num_partitions>) or, in the worst case, .repartition(<num_partitions>); coalesce merges existing partitions without a full shuffle, whereas repartition shuffles all the data.
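A quick sketch of the difference, reusing the df from above (the target count of 10 is just for illustration):

```scala
// coalesce: cheaper, no shuffle, can only reduce the partition count
val merged = df.coalesce(10)

// repartition: full shuffle, but can increase the count and rebalance skew
val rebalanced = df.repartition(10)

merged.write.parquet("/data/out_merged")  // hypothetical output path
```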
Also, Parquet files, and for that matter files in general, should be larger than the HDFS block size (128 MB by default).
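If it helps, here is one rough way to pick a partition count so each output file fills at least one block (the 10 GB total size is a made-up assumption; plug in your own estimate of the data size):

```scala
// Hypothetical sizing heuristic: one partition per HDFS block of data
val totalSizeBytes = 10L * 1024 * 1024 * 1024               // e.g. ~10 GB of data
val blockSizeBytes = 128L * 1024 * 1024                     // default HDFS block size
val numPartitions  = math.max(1, (totalSizeBytes / blockSizeBytes).toInt)

df.coalesce(numPartitions).write.parquet("/data/out_sized") // hypothetical path
```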
Refer to the following links to learn more:
https://forums.databricks.com/questions/101/what-is-an-optimal-size-for-file-partitions-using.html
http://boristyukin.com/is-snappy-compressed-parquet-file-splittable/
Hope this helps!