I have a file (testfile) on HDFS, and I want to know how many lines it contains.
In Linux, I can do:
wc -l <filename>
Can I do something similar with the "hadoop fs" command? I can print the file contents with:
hadoop fs -text /user/mklein/testfile
How can I get the line count? I want to avoid copying the file to the local filesystem and then running the wc command.
Note: my file is compressed with Snappy, which is why I have to use -text instead of -cat.
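One approach I'm considering (a sketch, not verified on my cluster) is to keep using -text for the Snappy decompression, but pipe its stdout straight into wc -l so nothing is copied locally:

```shell
# On the cluster, this would be (path as above, untested here):
#   hadoop fs -text /user/mklein/testfile | wc -l
#
# Local demonstration of the same pipe shape with a plain file
# (/tmp/wc_demo.txt is just a throwaway example file):
printf 'line1\nline2\nline3\n' > /tmp/wc_demo.txt
cat /tmp/wc_demo.txt | wc -l   # prints 3
```

Since -text writes the decompressed text to stdout, wc -l should see the same stream that would otherwise be printed to the terminal. Is that the idiomatic way, or is there a built-in "hadoop fs" line-count option?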