How can I turn off Hadoop speculative execution from Java?
After reading
I'm trying to stop speculative execution using the new Java API, but it has no effect.
This is my main class:
```java
public class Main {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // old API:
        //conf.setBoolean("mapred.map.tasks.speculative.execution", false);
        // new API:
        conf.setBoolean("mapreduce.map.speculative", false);
        int res = ToolRunner.run(conf, new LoggersMapReduce(), args);
        System.exit(res);
    }
}
```
and my MapReducer starts like this:
```java
@Override
public int run(String[] args) throws Exception {
    Configuration conf = super.getConf();
    /*
     * Instantiate a Job object for your job's configuration
     */
    Job job = Job.getInstance(conf);
```
But when I check the logs I see this:
```
2014-04-24 10:06:21,418 INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat (main): Total input paths to process : 16
2014-04-24 10:06:21,574 INFO org.apache.hadoop.mapreduce.JobSubmitter (main): number of splits:26
```
If I understand correctly, this means speculative execution is still on; otherwise, if I have only 16 input files, why would there be 26 splits? Am I making a mistake?
Note: I believe I am using the new API, since I see warnings like this in the log:
```
2014-04-24 10:06:21,590 INFO org.apache.hadoop.conf.Configuration.deprecation (main): mapred.job.classpath.files is deprecated. Instead, use mapreduce.job.classpath.files
```
"16 files = 16 mappers" is a misconception.
"16 files = at least 16 mappers" is correct.
If some of the 16 files are larger than the HDFS block size, they are split across multiple mappers. So your 16 files producing 26 mappers is probably not due to speculative execution.
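To illustrate the arithmetic (with made-up file sizes, since the question doesn't list them): each file contributes roughly ceil(fileSize / splitSize) splits, so 16 files can easily yield 26 splits without any speculation involved.

```java
public class SplitCount {
    // Rough split count per file: ceil(fileSize / splitSize).
    // (FileInputFormat's real logic also honors min/max split size and a
    // small slack factor, but this is the essential arithmetic.)
    static long splitsFor(long fileSize, long splitSize) {
        return (fileSize + splitSize - 1) / splitSize;
    }

    public static void main(String[] args) {
        long splitSize = 128L * 1024 * 1024; // assume the default 128 MB block size
        long small = 10L * 1024 * 1024;      // hypothetical: 15 files of 10 MB, 1 split each
        long big = 11L * 128 * 1024 * 1024;  // hypothetical: one 1408 MB file, 11 splits

        long total = 15 * splitsFor(small, splitSize) + splitsFor(big, splitSize);
        System.out.println(total); // 15 + 11 = 26 splits from only 16 files
    }
}
```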
The conf.setBoolean call definitely works, and you can verify it by looking at your job.xml.
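For reference when checking job.xml, these are the relevant property-name pairs. A minimal sketch (java.util.Properties stands in for Hadoop's Configuration so it runs without the Hadoop jars; the property names themselves are the real ones):

```java
import java.util.Properties;

public class SpeculativeConfig {
    // Set both new-API keys; Hadoop still honors the deprecated
    // old-API names shown in the comments.
    static Properties disableSpeculation(Properties conf) {
        conf.setProperty("mapreduce.map.speculative", "false");    // old: mapred.map.tasks.speculative.execution
        conf.setProperty("mapreduce.reduce.speculative", "false"); // old: mapred.reduce.tasks.speculative.execution
        return conf;
    }

    public static void main(String[] args) {
        Properties conf = disableSpeculation(new Properties());
        System.out.println(conf.getProperty("mapreduce.map.speculative")); // prints "false"
    }
}
```

These are the values to look for in job.xml to confirm the setting actually reached your job.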