[cloudera@quickstart ~]$ spark-shell --master local
[cloudera@quickstart ~]$ spark-shell --master yarn-client
// To view the GUI-based Spark job control panel (the Spark web UI), open:
http://localhost:4040/executors/
How much of the machine's resources should be allotted to a Spark application?
spark-shell --master local[2] // use 2 cores, with the default 1 GB of executor memory
spark-shell --master local[4] // use 4 cores
// allot 2 GB of executor memory
[cloudera@quickstart ~]$ spark-shell --master local[4] --executor-memory 2G
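The `local[N]` master URL encodes how many worker threads (cores) spark-shell should use: `local` means one thread, `local[N]` means N threads, and `local[*]` means all available cores. As a minimal illustration of that convention, the bracketed count can be parsed with a small Scala helper (`MasterUrl` is a hypothetical name for this sketch, not a Spark API):

```scala
// Hypothetical helper (not part of Spark): extract the core count
// encoded in a local-mode master URL such as "local", "local[2]", "local[*]".
object MasterUrl {
  // Returns Some(n) for "local[n]" and "local"; None for "local[*]"
  // (all cores, count unknown here) and for non-local masters like "yarn".
  def localCores(master: String): Option[Int] = master match {
    case "local"    => Some(1)   // single worker thread
    case "local[*]" => None      // use every available core
    case s if s.startsWith("local[") && s.endsWith("]") =>
      s.stripPrefix("local[").stripSuffix("]").toIntOption
    case _ => None
  }
}

println(MasterUrl.localCores("local[4]")) // Some(4)
println(MasterUrl.localCores("yarn"))     // None
```

Inside a running spark-shell session, the actual master URL is available as `sc.master`, so the same string format is what Spark itself reports back.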