Spark worker insufficient memory


I have a Spark/Cassandra setup that uses the Spark Cassandra Java connector to query a table. So far, I have one Spark master node (2 cores) and one worker node (4 cores). Both of them have the following spark-env.sh under conf/:

    #!/usr/bin/env bash
    export SPARK_LOCAL_IP=127.0.0.1
    export SPARK_MASTER_IP="192.168.4.134"
    export SPARK_WORKER_MEMORY=1g
    export SPARK_EXECUTOR_MEMORY=2g

Here is the Spark execution code:

    SparkConf conf = new SparkConf();
    conf.setAppName("testing");
    conf.setMaster("spark://192.168.4.134:7077");
    conf.set("spark.cassandra.connection.host", "192.168.4.129");
    conf.set("spark.logConf", "true");
    conf.set("spark.driver.maxResultSize", "50m");
    conf.set("spark.executor.memory", "200m");
    conf.set("spark.eventLog.enabled", "true");
    conf.set("spark.eventLog.dir", "/tmp/");
    conf.set("spark.executor.extraClassPath", "/home/enlighted/ebd.jar");
    conf.set("spark.cores.max", "1");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // Read the testing.ec table and stringify each row.
    JavaRDD<String> cassandraRowsRDD = CassandraJavaUtil.javaFunctions(sc)
            .cassandraTable("testing", "ec")
            .map(new Function<CassandraRow, String>() {
                private static final long serialVersionUID = -6263533266898869895L;

                @Override
                public String call(CassandraRow cassandraRow) throws Exception {
                    return cassandraRow.toString();
                }
            });
    System.out.println("Data as CassandraRows: \n" + StringUtils.join(cassandraRowsRDD.toArray(), "\n"));
    sc.close();

Now I start the Spark master on the first node and the worker on the second node, then run the above code. It creates an executor on the worker, but I see the following message in the application-side logs:

[Timer-0] WARN org.apache.spark.scheduler.TaskSchedulerImpl - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
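The warning itself points at the first thing to check: whether the worker ever registered with the master. Besides the web UI on port 8080, the standalone master serves the same status as JSON, which is convenient from a shell. A quick check, assuming the default web UI port and the master IP above:

    # Query the standalone master's status as JSON (default web UI port 8080).
    # A registered worker should show up in the "workers" array with its
    # state, cores, and memory.
    curl http://192.168.4.134:8080/json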

Now, keeping the same setup, when I run spark/sbin/start-all.sh on the master server, it creates both a master instance and a worker instance on the first node. When I run the same code again and the job is assigned to this new worker, it works fine.
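For reference, this is the difference between the two launch modes (script names as shipped in Spark's sbin/ directory; start-slave.sh is the older name for what newer releases call start-worker.sh):

    # Original setup: master and worker started separately on two nodes.
    # On the master node (192.168.4.134):
    sbin/start-master.sh
    # On the second node, registering against the remote master:
    sbin/start-slave.sh spark://192.168.4.134:7077

    # Alternative: start-all.sh on the master node launches the master plus
    # a worker for every host listed in conf/slaves (localhost by default),
    # which is why a local worker appeared on the first node.
    sbin/start-all.sh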

What is the issue with the original worker, which runs on a different node from the master?

I figured out the root cause. The master was assigning a random port for worker communication, and because of a firewall on the master, the worker couldn't send messages back to the master (presumably the resource details). It's odd that the worker didn't bother to throw an error either.
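If the firewall has to stay, a workaround is to pin the otherwise random ports to fixed values and open only those. A minimal sketch using Spark's standard port settings; the specific port numbers 7078-7080 are arbitrary choices for illustration, not defaults:

    # conf/spark-env.sh on the master and worker nodes
    export SPARK_MASTER_PORT=7077   # master RPC port (this one is the default)
    export SPARK_WORKER_PORT=7078   # pin the worker RPC port instead of a random one

    # conf/spark-defaults.conf on the driver side
    spark.driver.port        7079   # pin the driver RPC port
    spark.blockManager.port  7080   # pin the block manager port

    # Then allow the pinned range through the master's firewall, e.g.:
    sudo iptables -A INPUT -p tcp --dport 7077:7080 -j ACCEPT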

