Friday, 11 January 2019
Lines and Word Count of a Given File using Scala
package com.spark.scala.learning

import scala.io.Source

object LinesWordsCount {
  def main(args: Array[String]): Unit = {
    // First pass: count the lines
    var file = Source.fromFile("D:\\iEd\\sample.txt")
    val countL = file.getLines().length
    file.close()

    // Second pass: count the words.
    // Split on whitespace ("\\s+"), not on "" -- splitting on the
    // empty string would count characters instead of words.
    file = Source.fromFile("D:\\iEd\\sample.txt")
    var countW = 0
    for (j <- file.getLines()) {
      countW += j.split("\\s+").count(_.nonEmpty)
    }
    file.close()

    println("The Number of Lines : " + countL)
    println("The Number of words : " + countW)
  }
}
Output:
The Number of Lines : 4
The Number of words : 91
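The file does not really need to be opened twice: both counts can be gathered in a single pass over the lines. A minimal sketch of that idea, using `Source.fromString` on an in-memory sample so it runs without the `D:\iEd\sample.txt` file (the object and method names here are illustrative, not from the original post):

```scala
import scala.io.Source

object SinglePassCount {
  // Count lines and words in one pass over the input.
  def countLinesAndWords(src: Source): (Int, Int) = {
    var lines = 0
    var words = 0
    for (line <- src.getLines()) {
      lines += 1
      // split on runs of whitespace; drop empty tokens from blank lines
      words += line.split("\\s+").count(_.nonEmpty)
    }
    (lines, words)
  }

  def main(args: Array[String]): Unit = {
    val sample = "Hadoop in real world\nSpark with Scala"
    val (l, w) = countLinesAndWords(Source.fromString(sample))
    println(s"Lines: $l, Words: $w") // Lines: 2, Words: 7
  }
}
```

The same `countLinesAndWords` works on a file as well: pass it `Source.fromFile(path)` and close the source afterwards.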