If I run the following Main.scala file:
object main extends App {
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession
    .builder()
    .config("spark.master", "local")
    .config("spark.network.timeout", "10000s")
    .config("spark.executor.heartbeatInterval", "5000s")
    .getOrCreate()

  println("Hello World")

  // stop Spark
  spark.stop()
}
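(For what it's worth, I'm aware that Spark's verbosity can be lowered programmatically right after the session is created; a one-line sketch is below. SparkContext.setLogLevel is a real Spark API, though the "WARN" choice here is mine, and this only changes what gets logged, not why sbt tags the lines as [error].)

  // Hypothetical addition right after getOrCreate(): drop INFO lines at the
  // source. Valid levels include "DEBUG", "INFO", "WARN", "ERROR" and "OFF".
  spark.sparkContext.setLogLevel("WARN")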
Edit: here is my log4j.properties file, which is located under main\resources:
log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.Threshold=INFO
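(Those two lines are the relevant excerpt. As I understand it, a complete file would also need a layout and a rootCategory line wiring the appender in; a sketch of what I think that looks like is below, where the WARN level and the conversion pattern are my assumptions, not what I currently have.)

  # Sketch of a fuller log4j.properties; root level and pattern are guesses
  log4j.rootCategory=WARN, A1
  log4j.appender.A1=org.apache.log4j.ConsoleAppender
  log4j.appender.A1.Threshold=INFO
  log4j.appender.A1.layout=org.apache.log4j.PatternLayout
  log4j.appender.A1.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n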
log4j does log all INFO messages to the console, but sbt classifies every one of those INFO lines as [error], like so:
[info] Running controller.main
[error] Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
[error] 19/03/14 22:35:08 INFO SparkContext: Running Spark version 2.3.0
[error] 19/03/14 22:35:09 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[error] 19/03/14 22:35:09 INFO SparkContext: Submitted application: 289d7336-196c-4b3a-8bf9-6e247b7a6883
[error] 19/03/14 22:35:09 INFO SecurityManager: Changing view acls to: elicrane
[error] 19/03/14 22:35:09 INFO SecurityManager: Changing modify acls to: elicrane
[error] 19/03/14 22:35:09 INFO SecurityManager: Changing view acls groups to:
[error] 19/03/14 22:35:09 INFO SecurityManager: Changing modify acls groups to:
[error] 19/03/14 22:35:09 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(elicrane); groups with view permissions: Set(); users with modify permissions: Set(elicrane); groups with modify permissions: Set()
[error] 19/03/14 22:35:09 INFO Utils: Successfully started service 'sparkDriver' on port 51903.
[error] 19/03/14 22:35:09 INFO SparkEnv: Registering MapOutputTracker
[error] 19/03/14 22:35:09 INFO SparkEnv: Registering BlockManagerMaster
[error] 19/03/14 22:35:09 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
[error] 19/03/14 22:35:09 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
[error] 19/03/14 22:35:09 INFO DiskBlockManager: Created local directory at /private/var/folders/tj/qnhq1s7x6573bbxlkps7p3wc0000gn/T/blockmgr-e2e86467-a8bc-4cec-8ee9-fa55ef8ce99b
[error] 19/03/14 22:35:09 INFO MemoryStore: MemoryStore started with capacity 912.3 MB
[error] 19/03/14 22:35:09 INFO SparkEnv: Registering OutputCommitCoordinator
[error] 19/03/14 22:35:09 INFO Utils: Successfully started service 'SparkUI' on port 4040.
[error] 19/03/14 22:35:10 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.158:4040
[error] 19/03/14 22:35:10 INFO Executor: Starting executor ID driver on host localhost
[error] 19/03/14 22:35:10 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51904.
[error] 19/03/14 22:35:10 INFO NettyBlockTransferService: Server created on 192.168.1.158:51904
[error] 19/03/14 22:35:10 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
[error] 19/03/14 22:35:10 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.158, 51904, None)
[error] 19/03/14 22:35:10 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.158:51904 with 912.3 MB RAM, BlockManagerId(driver, 192.168.1.158, 51904, None)
[error] 19/03/14 22:35:10 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.158, 51904, None)
[error] 19/03/14 22:35:10 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.1.158, 51904, None)
[info] Hello World
[error] 19/03/14 22:35:10 INFO SparkUI: Stopped Spark web UI at http://192.168.1.158:4040
[error] 19/03/14 22:35:10 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
[error] 19/03/14 22:35:10 INFO MemoryStore: MemoryStore cleared
[error] 19/03/14 22:35:10 INFO BlockManager: BlockManager stopped
[error] 19/03/14 22:35:10 INFO BlockManagerMaster: BlockManagerMaster stopped
[error] 19/03/14 22:35:10 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
[error] 19/03/14 22:35:10 INFO SparkContext: Successfully stopped SparkContext
[error] 19/03/14 22:35:10 INFO ShutdownHookManager: Shutdown hook called
[error] 19/03/14 22:35:10 INFO ShutdownHookManager: Deleting directory /private/var/folders/tj/qnhq1s7x6573bbxlkps7p3wc0000gn/T/spark-bd7cacce-4431-442e-8942-62eec678717a
[success] Total time: 11 s, completed Mar 14, 2019 10:35:10 PM
These [error] tags do not appear to prevent the application from functioning, but they are annoying. So I have two questions:
1) What is causing these INFO lines to be tagged as errors?
2) Should I just suppress them, and if so, how?
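From what I've read, sbt may simply label everything a program writes to stderr as [error], and Spark's default console appender writes to stderr, which would explain the tagging. If that's right, I'd guess a build.sbt tweak along these lines could reroute the output (untested on my side; outputStrategy and StdoutOutput are sbt's own names, but whether this is the idiomatic fix is exactly what I'm asking):

  // Guess: fork the run and pass the child JVM's output straight through,
  // instead of letting sbt re-log its stderr at error level.
  fork := true
  outputStrategy := Some(StdoutOutput)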