My file is stored in HDFS as 3 blocks. Does MapReduce need to run a separate process for each of the 3 blocks, or is a single process sufficient, given that each replica holds the same data?
MapReduce works in two phases, Map and Reduce. The Map phase splits the input into independent chunks and processes them in parallel: Hadoop launches one map task per input split, and by default each split corresponds to one HDFS block, so a file stored as 3 blocks is processed by 3 map tasks (replicas are not processed again; they exist only for fault tolerance). The Reduce phase takes the output of the map tasks, groups it, and produces the final result. MapReduce operates on (key, value) pairs: map output keys need not be unique, and all values that share the same key are grouped together before being handed to a single reduce call.
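To make the flow concrete, here is a minimal sketch (plain Python, not the Hadoop API) that simulates the two phases: each "block" is mapped independently, then the intermediate pairs are grouped by key and reduced. The block contents and function names are illustrative assumptions.

```python
from collections import defaultdict

def map_phase(chunk):
    # Map: emit (key, value) pairs from one input split (word count example)
    return [(word, 1) for word in chunk.split()]

def reduce_phase(pairs):
    # Shuffle/group: collect all values that share the same key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    # Reduce: combine each key's values into a final result
    return {key: sum(values) for key, values in groups.items()}

# Three "blocks" of one file; in Hadoop each would get its own map task,
# running in parallel on the nodes that hold the blocks
blocks = ["to be or", "not to be", "that is the question"]

intermediate = []
for block in blocks:
    intermediate.extend(map_phase(block))

result = reduce_phase(intermediate)
print(result)  # e.g. "to" and "be" each appear twice across the splits
```

Note that the same key ("to") is emitted by two different map tasks; the shuffle step is what brings those values together so the reducer sees them as one group.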