Friday, 18 January 2019

Spark-shell startup parameters

[cloudera@quickstart ~]$ spark-shell --master local

[cloudera@quickstart ~]$ spark-shell --master yarn-client  // run on YARN in client mode (in Spark 2.x, write: --master yarn --deploy-mode client)

// to view the web-based Spark UI (available while the shell is running)
http://localhost:4040/executors/

How much of each resource do you want to allot to a Spark application?

spark-shell --master local[2]  // use 2 cores, with the default 1 GB of memory

spark-shell --master local[4]  // use 4 cores

// allot 2 GB of memory. In local mode everything runs inside a single driver
// JVM, so --driver-memory is the flag that takes effect; --executor-memory
// applies when running against a cluster manager such as YARN.
[cloudera@quickstart ~]$ spark-shell --master local[4] --driver-memory 2G
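Once the shell is up, you can confirm what was actually allotted from inside it. A minimal sketch (the exact values shown depend on the flags you launched with):

```scala
// Inside spark-shell, `sc` is the pre-created SparkContext
sc.master                 // e.g. "local[4]" -- the master URL in use
sc.defaultParallelism     // in local[N] mode this reflects the N cores requested
sc.getConf.getAll.foreach(println)  // dump all effective configuration entries
```

If a setting does not appear in `getAll`, Spark is falling back to its built-in default (e.g. 1 GB of memory), which is why the `local[2]` example above needed no memory flag.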

