You can do this using globbing. See the Spark DataFrameReader `load` method. `load` can take a single path string, multiple paths, or no argument at all for data sources that don't read from paths (i.e. sources other than HDFS, S3, or another file system).
val df = sqlContext.read.format("com.databricks.spark.xml")
  .option("rowTag", "address") // the root node of your XML to be treated as a row
  .load("/path/to/files/*.xml") // glob pattern; path is illustrative
`load` is also variadic, so it can take several comma-separated path arguments.
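As a sketch of both variants (assuming a Spark 2.x+ `SparkSession` named `spark`; all paths here are illustrative, not from the question):

```scala
// Assumes spark-xml is on the classpath and `spark` is a SparkSession.

// Variant 1: a glob pattern matching many XML files
val dfGlob = spark.read.format("com.databricks.spark.xml")
  .option("rowTag", "address")      // XML element to treat as a row
  .load("/data/xml/*.xml")          // glob expanded by the file system

// Variant 2: load(paths: String*) accepts multiple explicit paths
val dfMulti = spark.read.format("com.databricks.spark.xml")
  .option("rowTag", "address")
  .load("/data/xml/part1.xml", "/data/xml/part2.xml")
```

Both calls return a single DataFrame covering all matched files, so downstream transformations don't need to know how many files were read.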