[cloudera@quickstart ~]$ spark-shell --master local
[cloudera@quickstart ~]$ spark-shell --master yarn-client  // Spark 2+ spells this: --master yarn --deploy-mode client
// to view the GUI-based Spark job control panel (the Spark web UI), open:
http://localhost:4040/executors/
How much resource do you want to allot to the Spark application?
spark-shell --master local[2] // use 2 cores and the default 1 GB of memory
spark-shell --master local[4] // use 4 cores
// allot 2 GB of memory. In local mode the executor runs inside the driver JVM,
// so --driver-memory is the flag that actually takes effect; --executor-memory
// matters in cluster modes (YARN, standalone).
[cloudera@quickstart ~]$ spark-shell --master local[4] --driver-memory 2G
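As a minimal sketch of how the `local[N]` master URL maps to a core count (this is illustrative plain Scala, not Spark's own code; `parseLocalCores` is a hypothetical helper):

```scala
object MasterUrl {
  // Number of cores implied by a local master URL:
  //   "local"     -> 1 core
  //   "local[N]"  -> N cores
  //   "local[*]"  -> all cores available to the JVM
  def parseLocalCores(master: String): Int = master match {
    case "local"    => 1
    case "local[*]" => Runtime.getRuntime.availableProcessors
    case s if s.startsWith("local[") && s.endsWith("]") =>
      s.stripPrefix("local[").stripSuffix("]").toInt
    case other => sys.error(s"not a local master URL: $other")
  }

  def main(args: Array[String]): Unit = {
    println(parseLocalCores("local"))    // 1
    println(parseLocalCores("local[4]")) // 4
  }
}
```

So `spark-shell --master local[4]` asks for four worker threads in a single JVM, which is why it only makes sense to request as many cores as the machine actually has.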