Feb 16, 2017
sparklyr Error: Failed to launch Spark shell. Ports file does not exist.
I just got this error while connecting R to Spark using sparklyr.
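For context, nothing exotic is needed to trigger it; a plain connection attempt is enough. A minimal sketch (the master and version values here are my assumptions, chosen to match the cached Spark 2.0.0 build in the paths below):

# Attempt a local connection; this is the call that fails with
# "Failed to launch Spark shell. Ports file does not exist."
library(sparklyr)
sc <- spark_connect(master = "local", version = "2.0.0")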
Error in start_shell(scon, list(), jars, packages) :
Failed to launch Spark shell. Ports file does not exist.
Path: C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin\spark-submit.cmd
Parameters: --packages "com.databricks:spark-csv_2.11:1.3.0,com.amazonaws:aws-java-sdk-pom:1.10.34" --jars "C:\Users\User\Documents\R\win-library\3.3\sparklyr\java\rspark_utils.jar" sparkr-shell C:\Users\User\Temp\RtmpO0cLos\file23c0703c73bf.out
In addition: Warning message:
running command '"C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin\spark-submit.cmd" --packages "com.databricks:spark-csv_2.11:1.3.0,com.amazonaws:aws-java-sdk-pom:1.10.34" --jars "C:\Users\User\Documents\R\win-library\3.3\sparklyr\java\rspark_utils.jar" sparkr-shell C:\Users\User\Temp\RtmpO0cLos\file23c0703c73bf.out' had status 127
According to this article, it is just a file-permission problem: the user has no execute permission on the files in C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin\*.cmd. The article also describes a fix.
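You can check this without leaving R. A quick sketch, assuming a Windows build of R (shell() only exists there); running cacls with just a file name prints its current ACL, so a missing or restricted entry for your user points at the permission problem:

# Show the current ACL of spark-submit.cmd; if the current user's
# entry is missing or lacks access, that explains status 127.
shell('cacls "C:\\Users\\User\\AppData\\Local\\rstudio\\spark\\Cache\\spark-2.0.0-bin-hadoop2.7\\bin\\spark-submit.cmd"')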
Since it turns out there is more than one file and I am a lazy person, I looked for a way to solve this with a single command. And I found this.
So the following command can be run:
C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin>cacls *.cmd /G User:F
Are you sure (Y/N)?Y
processed file: C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin\beeline.cmd
processed file: C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin\load-spark-env.cmd
processed file: C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin\pyspark.cmd
processed file: C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin\pyspark2.cmd
processed file: C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin\run-example.cmd
processed file: C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin\spark-class.cmd
processed file: C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin\spark-class2.cmd
processed file: C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin\spark-shell.cmd
processed file: C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin\spark-shell2.cmd
processed file: C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin\spark-submit.cmd
processed file: C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin\spark-submit2.cmd
processed file: C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin\sparkR.cmd
processed file: C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin\sparkR2.cmd

C:\Users\User\AppData\Local\rstudio\spark\Cache\spark-2.0.0-bin-hadoop2.7\bin>
Done. Spark now runs as it should.
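To verify, a minimal round trip through the connection works. This is just a sanity-check sketch of mine (the mtcars copy is an arbitrary smoke test, not from the original session):

library(sparklyr)
sc <- spark_connect(master = "local", version = "2.0.0")
# Copy a small built-in data frame into Spark and read it back.
mtcars_tbl <- sdf_copy_to(sc, mtcars, overwrite = TRUE)
head(mtcars_tbl)
spark_disconnect(sc)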