Banking.scala:
-------------
package com.spark.scala.learning
object Banking {
  def main(args: Array[String]): Unit = {
    val ob: BankingTransactions = new BankingTransactions
    while (true) {
      println("1. Deposit\n2. Withdrawal\n3. Balance Check\n4. Close")
      println("Enter your choice:\t")
      val ch: Int = scala.io.StdIn.readInt()
      ch match {
        case 1 =>
          println("Enter the amount to deposit:\t")
          val depositAmount: Long = scala.io.StdIn.readLong()
          val balAfterDeposit: Long = ob.deposit(depositAmount)
          println("The balance after deposit is " + balAfterDeposit)
        case 2 =>
          println("Enter the amount to withdraw:\t")
          val withdrawalAmount: Long = scala.io.StdIn.readLong()
          val balAfterWithdrawal: Long = ob.withdrawal(withdrawalAmount)
          if (balAfterWithdrawal < 0)
            println("Insufficient Funds")
          else
            println("The Balance after Withdrawal is " + balAfterWithdrawal)
        case 3 =>
          val balance: Long = ob.balanceCheck()
          println("The balance is " + balance)
        case 4 =>
          System.exit(0)
        case _ =>
          println("Bad Choice")
      }
    }
  }
}
BankingTransactions.scala:
-------------------------
package com.spark.scala.learning
class BankingTransactions {
  var bal: Long = 0L

  def deposit(amount: Long): Long = {
    bal = bal + amount
    bal
  }

  def withdrawal(amount: Long): Long = {
    if (bal - amount < 0) {
      // Insufficient funds: leave the balance unchanged and signal failure
      // with a negative value, which the caller checks for. (The original
      // version fell through and subtracted anyway, driving the stored
      // balance negative.)
      -1L
    } else {
      bal = bal - amount
      bal
    }
  }

  def balanceCheck(): Long = bal
}
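A side note on the error signalling: `withdrawal` returns a negative number as an in-band failure marker, which the caller must remember to check. A more idiomatic Scala alternative is to return an `Either`, so a failed withdrawal cannot be confused with a balance. The sketch below is a standalone variant under that assumption (`SafeAccount` is a hypothetical name, not part of the post's code):

```scala
// Sketch: a variant of the withdrawal logic that uses Either instead of a
// negative sentinel. SafeAccount is a hypothetical class for illustration.
class SafeAccount {
  private var bal: Long = 0L

  def deposit(amount: Long): Long = { bal += amount; bal }

  // Left = error message, Right = new balance; bal is untouched on failure.
  def withdrawal(amount: Long): Either[String, Long] =
    if (amount > bal) Left("Insufficient Funds")
    else { bal -= amount; Right(bal) }

  def balanceCheck(): Long = bal
}

object SafeAccountDemo {
  def main(args: Array[String]): Unit = {
    val acct = new SafeAccount
    println(acct.deposit(100L))    // 100
    println(acct.withdrawal(30L))  // Right(70)
    println(acct.withdrawal(500L)) // Left(Insufficient Funds)
    println(acct.balanceCheck())   // 70
  }
}
```

With `Either`, the caller can pattern match on `Left`/`Right` instead of comparing against a magic negative value.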
1. Deposit
2. Withdrawal
3. Balance Check
4. Close
Enter your choice:
1
Enter the amount to deposit:
100
The balance after deposit is 100
1. Deposit
2. Withdrawal
3. Balance Check
4. Close
Enter your choice:
2
Enter the amount to withdraw:
30
The Balance after Withdrawal is 70
1. Deposit
2. Withdrawal
3. Balance Check
4. Close
Enter your choice:
3
The balance is 70
1. Deposit
2. Withdrawal
3. Balance Check
4. Close
Enter your choice:
234
Bad Choice
1. Deposit
2. Withdrawal
3. Balance Check
4. Close
Enter your choice:
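One caveat about the run above: `scala.io.StdIn.readInt` throws a `NumberFormatException` if the input is not a number (e.g. letters), which would crash the loop before the `case _` branch can print "Bad Choice". A common hardening is to read a line and parse it with `scala.util.Try`; the `parseChoice` helper below is a hypothetical illustration of that idea, not part of the original program:

```scala
// Sketch: parsing menu input without crashing on non-numeric text.
import scala.util.Try

object SafeInput {
  // Returns -1 (matched by the default case) when the input is not a number.
  def parseChoice(s: String): Int = Try(s.trim.toInt).getOrElse(-1)

  def main(args: Array[String]): Unit = {
    println(parseChoice("3"))   // 3
    println(parseChoice("abc")) // -1
  }
}
```

In the menu loop, `parseChoice(scala.io.StdIn.readLine())` could then replace the bare `readInt()` call.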