This is a very broad question, but the short of it is:
1. Create an RDD of the input data.
2. Call `map` with your mapper code, outputting key-value pairs.
3. Call `reduceByKey` with your reducer code.
4. Write the resulting RDD to disk.
Spark is more flexible than MapReduce: there is a great variety of methods that you could use between steps 1 and 4 to transform the data.
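Here is a minimal plain-Python sketch of those four steps, using the classic word count as the example. It runs without a Spark cluster; the helper names `map_phase` and `reduce_by_key` are illustrative stand-ins, not the Spark API. In PySpark the equivalent pipeline would be roughly `sc.textFile(path).flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b).saveAsTextFile(out)`.

```python
from collections import defaultdict
from functools import reduce

def map_phase(lines):
    # Step 2: the mapper emits (key, value) pairs -- here (word, 1).
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_by_key(pairs, reducer):
    # Step 3: group values by key, then fold each group with the reducer,
    # mimicking what Spark's reduceByKey does per partition and key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: reduce(reducer, values) for key, values in groups.items()}

lines = ["to be or not to be"]                     # step 1: the input "RDD"
pairs = map_phase(lines)                           # step 2: map
counts = reduce_by_key(pairs, lambda a, b: a + b)  # step 3: reduceByKey
print(counts)  # step 4 would instead write out, e.g. with saveAsTextFile
# -> {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

The reducer must be associative (like `+` here), because Spark combines values within each partition before shuffling, so the fold order is not guaranteed.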