Problem
Your Apache Spark job fails with an IndexOutOfBoundsException error message:

java.lang.IndexOutOfBoundsException: index: 0, length: 107741824 (expected: range(0, 0))
When you review the stack trace, you see something similar to this:
Py4JJavaError: An error occurred while calling o617.count.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 7.0 failed 4 times, most recent failure: Lost task 0.3 in stage 7.0 (TID 2195, 10.207.232.8, executor 0): java.lang.IndexOutOfBoundsException: index: 0, length: 107741824
  at io.netty.buffer.ArrowBuf.checkIndex(ArrowBuf.java:716)
  at io.netty.buffer.ArrowBuf.setBytes(ArrowBuf.java:954)
  at org.apache.arrow.vector.BaseVariableWidthVector.reallocDataBuffer(BaseVariableWidthVector.java:508)
  at org.apache.arrow.vector.BaseVariableWidthVector.handleSafe(BaseVariableWidthVector.java:1239)
  at org.apache.arrow.vector.BaseVariableWidthVector.setSafe(BaseVariableWidthVector.java:1066)
  at org.apache.spark.sql.execution.arrow.StringWriter.setValue(ArrowWriter.scala:287)
  at org.apache.spark.sql.execution.arrow.ArrowFieldWriter.write(ArrowWriter.scala:151)
  at org.apache.spark.sql.execution.arrow.ArrowWriter.write(ArrowWriter.scala:105)
  at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.$anonfun$writeIteratorToStream$1(ArrowPythonRunner.scala:100)
  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1581)
  at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.writeIteratorToStream(ArrowPythonRunner.scala:122)
  at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:478)
  at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:2146)
  at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:270)

Driver stacktrace:
  at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2519)
  at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2466)
  at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2460)
  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
  at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2460)
  at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1152)
  at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1152)
  at scala.Option.foreach(Option.scala:407)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1152)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2721)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2668)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2656)
  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
Caused by: java.lang.IndexOutOfBoundsException: index: 0, length: 107741824
  at io.netty.buffer.ArrowBuf.checkIndex(ArrowBuf.java:716)
  at io.netty.buffer.ArrowBuf.setBytes(ArrowBuf.java:954)
  at org.apache.arrow.vector.BaseVariableWidthVector.reallocDataBuffer(BaseVariableWidthVector.java:508)
  at org.apache.arrow.vector.BaseVariableWidthVector.handleSafe(BaseVariableWidthVector.java:1239)
  at org.apache.arrow.vector.BaseVariableWidthVector.setSafe(BaseVariableWidthVector.java:1066)
  at org.apache.spark.sql.execution.arrow.StringWriter.setValue(ArrowWriter.scala:287)
  at org.apache.spark.sql.execution.arrow.ArrowFieldWriter.write(ArrowWriter.scala:151)
  at org.apache.spark.sql.execution.arrow.ArrowWriter.write(ArrowWriter.scala:105)
  at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.$anonfun$writeIteratorToStream$1(ArrowPythonRunner.scala:100)
  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1581)
  at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.writeIteratorToStream(ArrowPythonRunner.scala:122)
  at org.apache.spark.api.python.BasePythonRunner$WriterThread.$anonfun$run$1(PythonRunner.scala:478)
  at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:2146)
  at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:270)
Cause
This error occurs because of the Arrow buffer limit. It happens when the cluster runs a grouped apply operation with Pandas, as in the sketch below.
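As a minimal sketch of the pattern involved (assuming Spark 3.x with the applyInPandas grouped-map API; the DataFrame, the key and v columns, and subtract_mean are invented for illustration, not taken from this article):

import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical data: a small key space forces very large groups.
df = spark.range(0, 10_000_000).selectExpr("id % 4 AS key", "rand() AS v")

def subtract_mean(pdf: pd.DataFrame) -> pd.DataFrame:
    # Spark hands EVERY row of one group to this function as a single
    # Pandas DataFrame. The rows are shipped through an Arrow buffer, and
    # an oversized group can overflow that buffer, surfacing as
    # java.lang.IndexOutOfBoundsException in ArrowBuf on the executor.
    return pdf.assign(v=pdf.v - pdf.v.mean())

result = df.groupby("key").applyInPandas(subtract_mean, schema="key long, v double")
result.count()  # triggers execution; this is where the job can fail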
Solution
To work around the problem, set the following value in the cluster's Spark config (AWS | Azure | GCP):
spark.databricks.execution.pandasZeroConfConversion.groupbyApply.enabled=true
This setting allows the cluster to run the Pandas operation correctly.
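For completeness, here is a notebook-level sketch of checking, and setting, the same key from a running session; whether this particular flag is accepted on an already-running session is an assumption, so the cluster Spark config above remains the documented place for it:

# Assumption: spark is the session provided by the Databricks notebook.
spark.conf.set(
    "spark.databricks.execution.pandasZeroConfConversion.groupbyApply.enabled",
    "true",
)

# Confirm the value currently in effect.
print(spark.conf.get(
    "spark.databricks.execution.pandasZeroConfConversion.groupbyApply.enabled"
))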