Running a Job on Apache Spark 2
Upload sherlock.txt from ~/hadoop-admin/data to HDFS
hdfs dfs -put ~/hadoop-admin/data/sherlock.txt /user/training/

Open the Spark shell
pyspark --master yarn

Making an RDD from the text file
avglens = sc.textFile("sherlock.txt")
avglens
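
textFile returns an RDD with one element per line of the file; entering the variable name only prints its lineage, because nothing is computed until an action runs. To peek at the actual data, an action such as take can be used, for example:

avglens.take(3)   # take() is an action: reads just the first three lines from HDFS
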
avglensFM = avglens.flatMap(lambda line: line.split())
avglensFM
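
split() with no arguments splits a line on runs of whitespace, and flatMap flattens the per-line word lists into a single RDD of words. In plain Python (the sample sentence is just an illustration):

"the game is afoot".split()   # ['the', 'game', 'is', 'afoot']
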
avglensMap = avglensFM.map(lambda word: (word[0], len(word)))
avglensMap
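
Each word is mapped to a (first letter, word length) pair, so the key is the word's initial letter. For a single word, in plain Python:

word = "sherlock"        # illustrative value, not from the dataset
(word[0], len(word))     # ('s', 8)
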
avglensGrp = avglensMap.groupByKey(2)
avglensGrp
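
The argument 2 is the number of partitions for the grouped RDD. Note that groupByKey shuffles every individual word length across the network; a more shuffle-friendly sketch of the same computation keeps only a running (sum, count) per key with aggregateByKey (the names sumCount and avgByLetter are illustrative, not from the original session):

sumCount = avglensMap.aggregateByKey(
    (0, 0),
    lambda acc, n: (acc[0] + n, acc[1] + 1),   # fold one word length into a partition's accumulator
    lambda a, b: (a[0] + b[0], a[1] + b[1]))   # merge accumulators across partitions
avgByLetter = sumCount.map(lambda kv: (kv[0], kv[1][0] / kv[1][1]))
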
avglensGMap = avglensGrp.map(lambda kv: (kv[0], sum(kv[1]) / len(kv[1])))
avglensGMap
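
Everything so far is a transformation, so Spark has not read the file or run anything on the cluster yet. To actually run the job on YARN and see the average word length per first letter, finish with an action, for example:

for letter, avg in sorted(avglensGMap.collect()):   # collect() triggers the whole job
    print(letter, avg)
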