| Stage Id | Pool Name | Submitted | Duration | Tasks: Succeeded/Total | Input | Output | Shuffle Read | Shuffle Write |
|---|---|---|---|---|---|---|---|---|
| 3 | tenants-pool-768 | 2026/04/10 15:03:21 | 0.3 s | 0/1 (8 failed) | | | | |

**Description**

Activity context:

- replenishmentRunId = 10000000279
- tenantId = 1699186316786241789
- activityType = GetDemands
- activityId = f040ccd4-a735-3816-9ede-2f92f7eff62a
- workflowType = GetDemandsWorkflow
- workflowId = 32dadaef-b067-375c-aaa0-35b7156aef6c
- attempt = 4
- cornerstoneTenantId = 8449
- marketUnit = IW_MU_CRP-127007_1
- scenario = STANDARD

Call site: `load at FileStorageHandler.java:153`

```
org.apache.spark.sql.classic.DataFrameReader.load(DataFrameReader.scala:58)
com.sap.s4hana.eureka.business.crpdemandservice.storageaccess.FileStorageHandler.readFileWithSchemaCheck(FileStorageHandler.java:153)
com.sap.s4hana.eureka.business.crpdemandservice.storageaccess.WorklistReaderImpl.readWorklist(WorklistReaderImpl.java:37)
com.sap.s4hana.eureka.business.crpdemandservice.core.business.WorkloadDetermination.determineWorkload(WorkloadDetermination.java:74)
com.sap.s4hana.eureka.business.crpdemandservice.core.business.DemandDetermination.determineDemands(DemandDetermination.java:98)
com.sap.s4hana.eureka.business.crpdemandservice.core.controller.GetDemandsActivityImpl.getDemands(GetDemandsActivityImpl.java:57)
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
java.base/java.lang.reflect.Method.invoke(Unknown Source)
io.temporal.internal.activity.RootActivityInboundCallsInterceptor$POJOActivityInboundCallsInterceptor.executeActivity(RootActivityInboundCallsInterceptor.java:44)
io.temporal.internal.activity.RootActivityInboundCallsInterceptor.execute(RootActivityInboundCallsInterceptor.java:23)
io.temporal.internal.activity.ActivityTaskExecutors$BaseActivityTaskExecutor.execute(ActivityTaskExecutors.java:88)
io.temporal.internal.activity.ActivityTaskHandlerImpl.handle(ActivityTaskHandlerImpl.java:105)
io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handleActivity(ActivityWorker.java:294)
io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:258)
io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:221)
io.temporal.internal.worker.PollTaskExecutor.lambda$process$1(PollTaskExecutor.java:76)
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
```

**Failure Reason**

```
Job aborted due to stage failure: Task 0 in stage 3.0 failed 8 times, most recent failure: Lost task 0.7 in stage 3.0 (TID 31) (10.96.2.69 executor 1):
org.apache.spark.SparkException: Exception thrown in awaitResult:
	at org.apache.spark.util.SparkThreadUtils$.awaitResult(SparkThreadUtils.scala:53)
	at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:342)
	at org.apache.spark.util.ThreadUtils$.parmap(ThreadUtils.scala:419)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$.readParquetFootersInParallel(ParquetFileFormat.scala:444)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$.$anonfun$mergeSchemasInParallel$1(ParquetFileFormat.scala:494)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$.$anonfun$mergeSchemasInParallel$1$adapted(ParquetFileFormat.scala:486)
	at org.apache.spark.sql.execution.datasources.SchemaMergeUtils$.$anonfun$mergeSchemasInParallel$2(SchemaMergeUtils.scala:80)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:866)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:866)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:374)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:338)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:171)
	at org.apache.spark.scheduler.Task.run(Task.scala:147)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$5(Executor.scala:647)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:80)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:77)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:650)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source)
Caused by: java.io.IOException: Unable to instantiate custom ConnectionConfigurator class.
	at com.sap.hana.datalake.files.shaded.org.apache.hadoop.hdfs.web.URLConnectionFactory.newCustomURLConnectionFactory(URLConnectionFactory.java:130)
	at com.sap.hana.datalake.files.shaded.org.apache.hadoop.hdfs.web.WebHdfsFileSystem.initialize(WebHdfsFileSystem.java:298)
	at com.sap.hana.datalake.files.HdlfsFileSystem.initializeWebHdfsFileSystem(HdlfsFileSystem.java:531)
	at com.sap.hana.datalake.files.HdlfsFileSystem.initialize(HdlfsFileSystem.java:167)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3615)
	at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:172)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3716)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3667)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:557)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:366)
	at org.apache.parquet.hadoop.util.HadoopInputFile.fromStatus(HadoopInputFile.java:50)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFooterReader.readFooter(ParquetFooterReader.java:76)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$.$anonfun$readParquetFootersInParallel$1(ParquetFileFormat.scala:451)
	at org.apache.spark.util.ThreadUtils$.$anonfun$parmap$2(ThreadUtils.scala:416)
	at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:687)
	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:467)
	at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(Unknown Source)
	at java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source)
	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown Source)
	at java.base/java.util.concurrent.ForkJoinPool.scan(Unknown Source)
	at java.base/java.util.concurrent.ForkJoinPool.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(Unknown Source)
Caused by: java.lang.reflect.InvocationTargetException
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
	at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Unknown Source)
	at java.base/java.lang.reflect.Constructor.newInstance(Unknown Source)
	at com.sap.hana.datalake.files.shaded.org.apache.hadoop.hdfs.web.URLConnectionFactory.newCustomURLConnectionFactory(URLConnectionFactory.java:128)
	... 21 more
Caused by: java.io.IOException: com.sap.s4hana.eureka.business.hdlfs.exception.InvalidHdlfsConfigException: Failed to find valid temporal context for null
	at com.sap.hana.datalake.files.HdlfsConnectionConfigurator.<init>(HdlfsConnectionConfigurator.java:107)
	at com.sap.hana.datalake.files.HdlfsConnectionConfigurator.<init>(HdlfsConnectionConfigurator.java:91)
	... 27 more
Caused by: com.sap.s4hana.eureka.business.hdlfs.exception.InvalidHdlfsConfigException: Failed to find valid temporal context for null
	at com.sap.s4hana.eureka.business.hdlfs.config.HdlfsTenantContextManager.getHdlfsDetailsByTenant(HdlfsTenantContextManager.java:62)
	at com.sap.s4hana.eureka.business.hdlfs.keystore.impl.HdlfsTenantKeyStoresFactory.init(HdlfsTenantKeyStoresFactory.java:43)
	at com.sap.hana.datalake.files.HdlfsConnectionConfigurator.createSocketFactoryForKeyStores(HdlfsConnectionConfigurator.java:294)
	at com.sap.hana.datalake.files.HdlfsConnectionConfigurator.createSocketFactory(HdlfsConnectionConfigurator.java:272)
	at com.sap.hana.datalake.files.HdlfsConnectionConfigurator.<init>(HdlfsConnectionConfigurator.java:103)
	... 28 more
Driver stacktrace:
```
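Reading the `Caused by` chain bottom-up: the HDLFS filesystem initialization on the executor failed because `HdlfsTenantContextManager.getHdlfsDetailsByTenant` was asked for tenant `null`, even though the activity context clearly carries `tenantId = 1699186316786241789`. The SAP classes here are internal, so the following is only an illustrative sketch of one common way a "context for null" error arises (the `TENANT` holder and `readFooter` names are hypothetical, not the real implementation): a tenant id kept in a `ThreadLocal` on the submitting thread is invisible to the worker threads that actually run the parallel footer reads (`ThreadUtils.parmap` on a `ForkJoinPool` in the trace), so the lookup on those threads sees `null`.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TenantContextDemo {
    // Hypothetical stand-in for a ThreadLocal-based tenant context holder.
    static final ThreadLocal<String> TENANT = new ThreadLocal<>();

    // Stand-in for executor-side work (e.g. a Parquet footer read) that
    // needs the tenant id to set up its connection.
    static String readFooter() {
        String tenant = TENANT.get(); // null on any thread that never set it
        if (tenant == null) {
            // Same shape as the real error: the "null" is the missing tenant.
            throw new IllegalStateException(
                "Failed to find valid temporal context for " + tenant);
        }
        return "footer for tenant " + tenant;
    }

    public static void main(String[] args) throws Exception {
        TENANT.set("1699186316786241789"); // visible only to this thread

        System.out.println(readFooter()); // same thread: context is found

        // A worker thread (like the ForkJoinPool threads in the trace)
        // never saw TENANT.set(), so the lookup fails with "... for null".
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> result = pool.submit(TenantContextDemo::readFooter);
        try {
            result.get();
        } catch (ExecutionException e) {
            System.out.println("worker thread: " + e.getCause().getMessage());
        } finally {
            pool.shutdown();
        }
    }
}
```

If this reading is right, the fix direction is to propagate the tenant explicitly (e.g. via Hadoop `Configuration` entries that travel with the filesystem URI) rather than relying on thread-confined state surviving into Spark's task threads.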
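The `mergeSchemasInParallel` / `readParquetFootersInParallel` frames show how the read reached this code: the `DataFrameReader.load` behind `readFileWithSchemaCheck` was merging Parquet schemas across files, and Spark performs that merge by reading file footers inside a Spark job of its own, which is why a connection-setup problem surfaces as a stage failure before any data is read. In stock Spark this path is gated by the schema-merge setting (off by default); a hypothetical configuration fragment, not taken from this service's config:

```
# Parquet schema merging (stock Spark settings; values illustrative).
# Session-wide:
spark.sql.parquet.mergeSchema  true
# Per read, equivalently: spark.read.option("mergeSchema", "true").parquet(path)
```

Note that disabling the merge would only move the failure, not fix it: the first real data read would hit the same `HdlfsFileSystem.initialize` error.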