Running a Job on Apache Spark 2
Upload sherlock.txt from ~/hadoop-admin/data to HDFS
hdfs dfs -put sherlock.txt /user/training/
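To confirm the upload, list the file in HDFS (this assumes /user/training is your HDFS home directory, matching the put command above):
hdfs dfs -ls /user/training/sherlock.txt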
Open the PySpark shell
pyspark --master yarn
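The PySpark shell creates a SparkContext for you under the name sc. As a quick sanity check (not part of the original steps), you can confirm it is attached to YARN:
sc          # SparkContext created by the shell
sc.master   # should print 'yarn'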
Create an RDD from the text file (a relative path like "sherlock.txt" resolves against your HDFS home directory, /user/training here)
avglens = sc.textFile("sherlock.txt")
avglens
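Evaluating the name only prints the RDD's description, because transformations are lazy. To peek at the data, run an action such as take(); the exact output depends on the file contents:
avglens.take(3)   # returns the first three lines as a list of strings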
avglensFM = avglens.flatMap(lambda line: line.split())
avglensFM
avglensMap = avglensFM.map(lambda word: (word[0], len(word)))
avglensMap
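map turns every word into a (first letter, word length) pair; for example, the word Sherlock becomes ('S', 8). To inspect a few pairs (actual values depend on the text):
avglensMap.take(5)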
avglensGrp = avglensMap.groupByKey(2)
avglensGrp
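groupByKey(2) groups the pairs by first letter into two partitions, producing (letter, iterable of lengths) pairs. To view a sample, you can materialize the iterable first (mapValues(list) here is only for display; output varies with the data):
avglensGrp.mapValues(list).take(1)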
avglensGMap = avglensGrp.map(lambda kv: (kv[0], sum(kv[1]) / len(kv[1])))   # average word length per first letter
avglensGMap
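Nothing has executed yet; the whole pipeline runs only when an action is called. A minimal sketch to compute and print the average word length per starting letter (sortByKey is added here only to make the output readable):
for letter, avg in avglensGMap.sortByKey().take(10):
    print(letter, avg)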