You can do this using globbing. See the Spark `DataFrameReader.load` method: it can take a single path string (which may contain glob patterns like `*`), multiple path strings, or no argument for data sources that don't have paths (i.e. not HDFS, S3, or other file systems).
val df = sqlContext.read.format("com.databricks.spark.xml")
  .option("inferSchema", "true")
  .option("rowTag", "address") // each <address> element in the XML becomes one row
  .load("/path/to/files/*.xml")
`load` can also take multiple paths as separate (varargs) arguments:

  .load("/path/to/files/File1.xml", "/path/to/files/File2.xml")