1. executor.instances * executor.memory <= memory allocatable to Spark * 0.75 (e.g., in the figure above: 48G * 0.75 = 36G)
2. executor.instances * executor.cores <= CPU cores allocatable to Spark * 0.75 (e.g., in the figure above: 6 cores * 0.75 = 4.5, rounded down to 4)
3. cores.max = executor.instances * executor.cores
By default, executor.memory is set to 8G (unless the total memory is smaller than 8G). Applying the rules above, the remaining options are derived as follows:
executor.instances = memory allocatable to Spark * 0.75 / executor.memory = 48 * 0.75 / 8 = 4.5, rounded down to 4
executor.cores = CPU cores allocatable to Spark * 0.75 / executor.instances = 6 * 0.75 / 4 = 1.125, rounded down to 1
cores.max = executor.instances * executor.cores = 4 * 1 = 4
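The derivation above can be sketched as a small helper. This is a minimal illustration of the sizing rules, assuming a 75% resource budget and rounding down at each step; the function name and defaults are hypothetical, not part of any Spark API.

```python
import math

def size_executors(total_mem_gb, total_cores, executor_memory_gb=8):
    """Hypothetical helper: derive executor settings from allocatable resources.

    Budgets 75% of memory and cores for Spark, then rounds down at each
    step, mirroring the worked example above (48G memory, 6 cores).
    """
    usable_mem = total_mem_gb * 0.75    # e.g. 48 * 0.75 = 36 GB
    usable_cores = total_cores * 0.75   # e.g. 6 * 0.75 = 4.5 cores
    executor_instances = math.floor(usable_mem / executor_memory_gb)
    executor_cores = math.floor(usable_cores / executor_instances)
    cores_max = executor_instances * executor_cores
    return executor_instances, executor_cores, cores_max

print(size_executors(48, 6))  # -> (4, 1, 4)
```

With the example resources (48G, 6 cores) this reproduces the values derived above: 4 executor instances, 1 core each, cores.max = 4.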