when I run the bin/workloads/streaming/repartition/spark/run.sh

When I run the bin/workloads/streaming/repartition/spark/run.sh script, the Kafka cluster logs INFO Closing socket connection to /10.0.5.46. (kafka.network.processor) and HiBench throws an org.apache.spark.SparkException: ArrayBuffer (java.nio.channels.ClosedChannelException) exception.
Please help me; I have been struggling with this for a month now, and the same happens on the Flink cluster. No data is generated and I cannot get any results.

The HiBench log shows the following error:

ERROR kafka.DirectKafkaInputDStream: ArrayBuffer(java.nio.channels.ClosedChannelException)
19/01/21 12:34:42 ERROR scheduler.JobScheduler: Error generating jobs for time 1548074082000 ms
org.apache.spark.SparkException: ArrayBuffer(java.nio.channels.ClosedChannelException)
at org.apache.spark.streaming.kafka.DirectKafkaInputDStream.latestLeaderOffsets(DirectKafkaInputDStream.scala:123)
at org.apache.spark.streaming.kafka.DirectKafkaInputDStream.compute(DirectKafkaInputDStream.scala:145)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:342)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:341)
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:336)
at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:334)
at scala.Option.orElse(Option.scala:289)
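(For context: a ClosedChannelException from DirectKafkaInputDStream usually means the Spark driver could not connect to one of the Kafka broker leaders it got from metadata, often because the configured broker list or the brokers' advertised host names are not reachable from the driver. A sketch of the relevant HiBench settings — the host names and ports below are placeholders, not values from this cluster:

```
# conf/hibench.conf -- broker list must be host:port pairs reachable from the Spark driver
hibench.streambench.kafka.brokerList    kafka-host-1:9092,kafka-host-2:9092
hibench.streambench.zkHost              zk-host:2181
```
)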

 ERROR spark.SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Required executor memory (12288+1228 MB) is above the max threshold (12288 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/o$
        at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:302)
        at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:166)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
        at org.apache.spark.streaming.StreamingContext$.createNewSparkContext(StreamingContext.scala:839)
        at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:85)
        at com.intel.hibench.sparkbench.streaming.RunBench$.run(RunBench.scala:90)
        at com.intel.hibench.sparkbench.streaming.RunBench$.main(RunBench.scala:74)
        at com.intel.hibench.sparkbench.streaming.RunBench.main(RunBench.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
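(A note on this second error: the 12288+1228 MB figure is the requested executor memory plus Spark's default off-heap overhead, which in Spark versions of this vintage is max(10% of executor memory, 384 MB). A small sketch of that arithmetic:

```python
# Spark's default executor memory overhead (older Spark-on-YARN behavior):
# max(10% of executor memory, 384 MB), truncated to whole MB.
executor_memory_mb = 12288
overhead_mb = max(int(executor_memory_mb * 0.10), 384)
required_mb = executor_memory_mb + overhead_mb
max_allocation_mb = 12288  # yarn.scheduler.maximum-allocation-mb

print(overhead_mb)                      # 1228, matching the log
print(required_mb)                      # 13516
print(required_mb > max_allocation_mb)  # True -> YARN rejects the container
```

So either yarn.scheduler.maximum-allocation-mb (and yarn.nodemanager.resource.memory-mb) would need to be raised above 13516 MB, or the executor memory HiBench requests would need to be lowered, to get past this check.)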

Welcome @isabelbongo,

You might want to update your post:

  • with a better fitting title
  • to put code and log output inside code blocks, which makes your post more readable
  • to strip non-relevant output
  • and to clearly explain
    • what you would like to achieve
    • how you ended up in this situation
    • what you think went wrong
    • what you’ve already tried

That is a long list - and you definitely don’t need to answer all of those questions. Answering some of them, however, gives us a better picture of the situation and allows people on the forum to help you better.