# An Introduction to Apache Spark's Blacklist Mechanism

2018/01/12 17:29

• A disk on one node develops an intermittent fault for some reason, leaving certain sectors unreadable. If the data our Spark job needs happens to sit on those sectors, the task reading it will fail.
• The job's driver learns that the task failed, so it resubmits the task.
• When the scheduler handles the resubmission, it takes data locality into account, so it is very likely to dispatch the task to the same machine again, because it has no idea that the machine's disk is faulty.
• Since the disk is still faulty, the task will probably fail again. The driver then repeats this cycle, and eventually the whole Spark job fails!
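The retry loop above is bounded by `spark.task.maxFailures` (4 by default): once a single task has failed that many times, Spark aborts the whole job. Enabling the blacklist steers retries away from the suspect executor before that limit is hit. A minimal sketch of the submission; the master, class name, and jar path are placeholders, not values from this article:

```shell
# Hypothetical submission: turn on blacklisting so failed tasks
# are retried on other executors instead of the faulty machine.
# com.example.MyApp and my-app.jar are placeholders.
spark-submit \
  --master yarn \
  --conf spark.blacklist.enabled=true \
  --class com.example.MyApp \
  my-app.jar
```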

| Property | Default | Meaning |
| --- | --- | --- |
| spark.blacklist.enabled | false | If set to true, Spark will no longer schedule tasks on blacklisted executors. The blacklisting algorithm can be further controlled by the other spark.blacklist configuration options described below. |
| spark.blacklist.timeout | 1h | (Experimental) How long a node or executor is blacklisted for the entire application, before it is unconditionally removed from the blacklist to attempt running new tasks. |
| spark.blacklist.task.maxTaskAttemptsPerExecutor | 1 | (Experimental) For a given task, how many times it can be retried on one executor before the executor is blacklisted for that task. |
| spark.blacklist.task.maxTaskAttemptsPerNode | 2 | (Experimental) For a given task, how many times it can be retried on one node, before the entire node is blacklisted for that task. |
| spark.blacklist.stage.maxFailedTasksPerExecutor | 2 | (Experimental) How many different tasks must fail on one executor, within one stage, before the executor is blacklisted for that stage. |
| spark.blacklist.stage.maxFailedExecutorsPerNode | 2 | (Experimental) How many different executors are marked as blacklisted for a given stage, before the entire node is marked as failed for the stage. |
| spark.blacklist.application.maxFailedTasksPerExecutor | 2 | (Experimental) How many different tasks must fail on one executor, in successful task sets, before the executor is blacklisted for the entire application. Blacklisted executors will be automatically added back to the pool of available resources after the timeout specified by spark.blacklist.timeout. Note that with dynamic allocation, though, the executors may get marked as idle and be reclaimed by the cluster manager. |
| spark.blacklist.application.maxFailedExecutorsPerNode | 2 | (Experimental) How many different executors must be blacklisted for the entire application, before the node is blacklisted for the entire application. Blacklisted nodes will be automatically added back to the pool of available resources after the timeout specified by spark.blacklist.timeout. Note that with dynamic allocation, though, the executors on the node may get marked as idle and be reclaimed by the cluster manager. |
| spark.blacklist.killBlacklistedExecutors | false | (Experimental) If set to "true", allow Spark to automatically kill, and attempt to re-create, executors when they are blacklisted. Note that, when an entire node is added to the blacklist, all of the executors on that node will be killed. |
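Putting several of these options together, a submission might look like the following sketch. The threshold values are simply the defaults listed above restated explicitly for illustration, and the class name and jar path are placeholders:

```shell
# Sketch: enable blacklisting and set its thresholds explicitly.
# The values shown mirror the defaults from the table above;
# com.example.MyApp and my-app.jar are placeholders.
spark-submit \
  --conf spark.blacklist.enabled=true \
  --conf spark.blacklist.timeout=1h \
  --conf spark.blacklist.task.maxTaskAttemptsPerExecutor=1 \
  --conf spark.blacklist.task.maxTaskAttemptsPerNode=2 \
  --conf spark.blacklist.stage.maxFailedTasksPerExecutor=2 \
  --conf spark.blacklist.stage.maxFailedExecutorsPerNode=2 \
  --conf spark.blacklist.killBlacklistedExecutors=true \
  --class com.example.MyApp \
  my-app.jar
```

With `killBlacklistedExecutors` set to true, a blacklisted executor is torn down and replaced rather than sitting idle until the timeout expires.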
