I'm trying to install a module on my heroku app. Running this locally (minus the heroku run at the start) works, but I get an error when trying to run it on Heroku.
heroku run play install securesocial-0.2.2
and here's the output
...
~ Do you want to install this version (y/n)? y
~ Installing module securesocial-0.2.2...
~
~ Fetching http://www.playframework.org/modules/securesocial-0.2.2.zip
Traceback (most recent call last):
  File ".play/play", line 153, in <module>
    status = cmdloader.commands[play_command].execute(command=play_command, app=play_app, args=remaining_args, env=play_env, cmdloader=cmdloader)
  File "/app/.play/framework/pym/play/commands/modulesrepo.py", line 58, in execute
    install(app, args, env)
  File "/app/.play/framework/pym/play/commands/modulesrepo.py", line 378, in install
    Downloader().retrieve(fetch, archive)
  File "/app/.play/framework/pym/play/commands/modulesrepo.py", line 88, in retrieve
    try: urllib.urlretrieve(url, destination, self.progress)
  File "/usr/local/lib/python2.7/urllib.py", line 91, in urlretrieve
    return _urlopener.retrieve(url, filename, reporthook, data)
  File "/usr/local/lib/python2.7/urllib.py", line 241, in retrieve
    tfp = open(filename, 'wb')
IOError: [Errno 2] No such file or directory: u'/app/.play/modules/securesocial-0.2.2.zip'
What's the proper way to do this? I've been searching, but I can't find any documentation on it.
Never used Heroku, but perhaps this step-by-step tutorial might help you out.
After you add the module locally, you should be able to commit the changes it makes to git and then push a new version of your app to Heroku.
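A rough sketch of that workflow, assuming the module and app from the question (exactly which files the install changes depends on your Play setup):
$ play install securesocial-0.2.2        # run locally, without the heroku run prefix
$ git add .                              # pick up whatever files the install changed (e.g. conf/)
$ git commit -m "Add securesocial 0.2.2 module"
$ git push heroku master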
Related
I have created a web application using Java (Java 8) on Eclipse IDE (version 2019-06 (4.12.0)). When trying to deploy it to Google App Engine, the following error messages are shown:
Beginning deployment of service [default]...
Traceback (most recent call last):
  File "C:\Work\google-cloud-sdk\platform\bundledpython\lib\logging\__init__.py", line 861, in emit
    msg = self.format(record)
  File "C:\Work\google-cloud-sdk\platform\bundledpython\lib\logging\__init__.py", line 734, in format
    return fmt.format(record)
  File "C:\Work\google-cloud-sdk\lib\googlecloudsdk\core\log.py", line 337, in format
    msg = super(_LogFileFormatter, self).format(record)
  File "C:\Work\google-cloud-sdk\platform\bundledpython\lib\logging\__init__.py", line 465, in format
    record.message = record.getMessage()
  File "C:\Work\google-cloud-sdk\platform\bundledpython\lib\logging\__init__.py", line 329, in getMessage
    msg = msg % self.args
UnicodeDecodeError: 'ascii' codec can't decode byte 0x8e in position 10: ordinal not in range(128)
Logged from file context_util.py, line 386
I guess it is something about the Python files in my google-cloud-sdk, but I never edited or even touched them. Could it be a version problem? My application seems to deploy successfully, but this error bugs me.
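For what it's worth, this looks like a Python 2 quirk in the SDK's bundled interpreter rather than anything you did: logging crashes when a unicode format string is combined with a byte string that contains a non-ASCII byte (0x8e here), for example from a path or message. A minimal sketch of the mechanism, with made-up values (not the SDK's actual message), run under Python 2:
msg = u"deploying %s"       # unicode format string, like the SDK's log message
arg = "C:\\Work\\\x8e"      # byte string containing a non-ASCII byte, e.g. from a path
msg % arg                   # raises UnicodeDecodeError: 'ascii' codec can't decode byte 0x8e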
Here is my gcloud version:
Google Cloud SDK 277.0.0
app-engine-java 1.9.77
app-engine-python 1.9.88
bq 2.0.52
cloud-datastore-emulator 2.1.0
core 2020.01.17
gsutil 4.47
I'm setting up GeoSpark Python, and after installing all the prerequisites I'm running this very basic code example to test it:
from pyspark.sql import SparkSession
from geo_pyspark.register import GeoSparkRegistrator
spark = SparkSession.builder.\
getOrCreate()
GeoSparkRegistrator.registerAll(spark)
df = spark.sql("""SELECT st_GeomFromWKT('POINT(6.0 52.0)') as geom""")
df.show()
I tried running it with python3 basic.py and with spark-submit basic.py; both give me this error:
Traceback (most recent call last):
File "/home/jessica/Downloads/geo_pyspark/basic.py", line 8, in <module>
GeoSparkRegistrator.registerAll(spark)
File "/home/jessica/Downloads/geo_pyspark/geo_pyspark/register/geo_registrator.py", line 22, in registerAll
cls.register(spark)
File "/home/jessica/Downloads/geo_pyspark/geo_pyspark/register/geo_registrator.py", line 27, in register
spark._jvm. \
TypeError: 'JavaPackage' object is not callable
I'm using Java 8, Python 3, and Apache Spark 2.4 on Linux Mint 19; my JAVA_HOME is set correctly, and my SPARK_HOME is also set:
$ printenv SPARK_HOME
/home/jessica/spark/
How can I fix this?
The jars for GeoSpark are not correctly registered with your Spark session. There are a few ways around this, ranging from a tad inconvenient to pretty seamless. For example, if, when you call spark-submit, you specify:
--jars jar1.jar,jar2.jar,jar3.jar
then the problem will go away. You can also pass the same flag to pyspark, if that's your poison.
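In full, using the jars listed further down in this answer (the /jars/ location is just an example) and the script from the question, that would look something like:
$ spark-submit --jars /jars/geo_wrapper_2.11-0.3.0.jar,/jars/geospark-1.2.0.jar,/jars/geospark-sql_2.3-1.2.0.jar,/jars/geospark-viz_2.3-1.2.0.jar basic.py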
If, like me, you don't really want to be doing this every time you boot (and setting this as a .conf() in Jupyter will get tiresome), then instead you can go into $SPARK_HOME/conf/spark-defaults.conf and set:
spark.jars jar1.jar,jar2.jar,jar3.jar
These will then be loaded whenever you create a Spark instance. If you've not used the conf file before, it'll be there as spark-defaults.conf.template.
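If you haven't got a spark-defaults.conf yet, something along these lines creates one from the template and appends the setting (jar names are placeholders, as above):
$ cp $SPARK_HOME/conf/spark-defaults.conf.template $SPARK_HOME/conf/spark-defaults.conf
$ echo "spark.jars jar1.jar,jar2.jar,jar3.jar" >> $SPARK_HOME/conf/spark-defaults.conf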
Of course, when I say jar1.jar..., what I really mean is something along the lines of:
/jars/geo_wrapper_2.11-0.3.0.jar,/jars/geospark-1.2.0.jar,/jars/geospark-sql_2.3-1.2.0.jar,/jars/geospark-viz_2.3-1.2.0.jar
but that's up to you to get the right ones from the geo_pyspark package.
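If you'd rather set it in code (the .conf() route mentioned above), here's a sketch reusing the question's script; the jar paths are just an example, and the config has to be applied before the session, and hence its JVM, is created:
from pyspark.sql import SparkSession
from geo_pyspark.register import GeoSparkRegistrator

# Example paths only; point these at wherever your geospark jars actually live.
jars = ",".join([
    "/jars/geo_wrapper_2.11-0.3.0.jar",
    "/jars/geospark-1.2.0.jar",
    "/jars/geospark-sql_2.3-1.2.0.jar",
    "/jars/geospark-viz_2.3-1.2.0.jar",
])

spark = SparkSession.builder \
    .config("spark.jars", jars) \
    .getOrCreate()

GeoSparkRegistrator.registerAll(spark)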
If you are using an EMR:
You need to set your cluster config JSON to:
[
  {
    "classification": "spark-defaults",
    "properties": {
      "spark.jars": "/jars/geo_wrapper_2.11-0.3.0.jar,/jars/geospark-1.2.0.jar,/jars/geospark-sql_2.3-1.2.0.jar,/jars/geospark-viz_2.3-1.2.0.jar"
    },
    "configurations": []
  }
]
and also get your jars uploaded as part of your bootstrap. You can pull them from Maven, but I just threw them in an S3 bucket:
#!/bin/bash
sudo mkdir /jars
sudo aws s3 cp s3://geospark-test-ds/bootstrap/geo_wrapper_2.11-0.3.0.jar /jars/
sudo aws s3 cp s3://geospark-test-ds/bootstrap/geospark-1.2.0.jar /jars/
sudo aws s3 cp s3://geospark-test-ds/bootstrap/geospark-sql_2.3-1.2.0.jar /jars/
sudo aws s3 cp s3://geospark-test-ds/bootstrap/geospark-viz_2.3-1.2.0.jar /jars/
If you are using an EMR Notebook:
You need a magic cell at the top of your notebook:
%%configure -f
{
    "jars": [
        "s3://geospark-test-ds/bootstrap/geo_wrapper_2.11-0.3.0.jar",
        "s3://geospark-test-ds/bootstrap/geospark-1.2.0.jar",
        "s3://geospark-test-ds/bootstrap/geospark-sql_2.3-1.2.0.jar",
        "s3://geospark-test-ds/bootstrap/geospark-viz_2.3-1.2.0.jar"
    ]
}
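To double-check that the jars actually made it into the session, one hypothetical sanity check in a normal cell (how the jars surface can depend on how Livy passes them through) is to print the resolved spark.jars value:
# spark is the session the notebook provides
print(spark.sparkContext.getConf().get("spark.jars", ""))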
I was seeing a similar kind of issue with the SparkMeasure jars on a Windows 10 machine:
self.stagemetrics = self.sc._jvm.ch.cern.sparkmeasure.StageMetrics(self.sparksession._jsparkSession)
TypeError: 'JavaPackage' object is not callable
So what I did was:
Went into SPARK_HOME and installed the required jar via the PySpark shell:
bin/pyspark --packages ch.cern.sparkmeasure:spark-measure_2.12:0.16
Grabbed that jar (ch.cern.sparkmeasure_spark-measure_2.12-0.16.jar) and copied it into the jars folder of SPARK_HOME.
Reran the script, and it worked without the above error.
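For reference, --packages downloads the jar into the local Ivy cache (typically ~/.ivy2/jars, or the equivalent folder under your user profile on Windows), so the copy step is roughly this, with the paths being an assumption about your setup:
$ cp ~/.ivy2/jars/ch.cern.sparkmeasure_spark-measure_2.12-0.16.jar "$SPARK_HOME/jars/"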
When I run the sample IBM Bluemix Liberty for Java application https://github.com/ibmjstart/bluemix-java-postgresql-uploader.git, I get the following error:
-----> Downloaded app package (1.9M)
-----> Downloaded app buildpack cache (4.0K)
OK
/var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:101:in `build_pack': Unable to detect a supported application type (RuntimeError)
    from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:74:in `block in compile_with_timeout'
    from /usr/lib/ruby/1.9.1/timeout.rb:68:in `timeout'
    from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:73:in `compile_with_timeout'
    from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:54:in `block in stage_application'
    from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:50:in `chdir'
    from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:50:in `stage_application'
    from /var/vcap/packages/dea_next/buildpacks/bin/run:10:in `<main>'
FAILED
Server error, status code: 400, error code: 170001, message: Staging error: cannot get instances since staging failed
TIP: use 'cf logs jpu-henryhan --recent' for more information
The top error looks like you left off the -p <path_to_war> parameter when doing a push. If you just push a directory containing a WAR file, it will not be detected by the Java buildpack.
The tip provided in the output of your cf push request is relevant.
TIP: use 'cf logs jpu-henryhan --recent' for more information
Running that command will tail the log files produced during the staging process and let you see what error may have been raised. Often, it can be a missing dependency or a transient failure of some sort.
I just successfully deployed the sample using the "deploy to Bluemix" button and manually via the cf command line tool. Unless you changed the code, it is most likely that this error is a transient failure.
Run the following command:
$ cf push jpu- -b https://github.com/cloudfoundry/java-buildpack --no-manifest --no-start -p PostgreSQLUpload.war
The -b https://github.com/cloudfoundry/java-buildpack parameter explicitly sets the buildpack.
I have set up a single-node Hadoop 1.2.1 cluster and am trying to run this script:
pydoop script transpose.py matrix.txt t_matrix
The script returns nothing, and the job stays in pending status for more than 10 minutes after I run it. Why is the job not running properly?
And this is the output generated while running:
Traceback (most recent call last):
  File "/home/hduser/hadoop/tmp/mapred/local/taskTracker/distcache/-2030848362897089950_-2130723868_1886929692/localhost/user/hduser/pydoop_script_91c491cf7e6b42f6bcbeda09edae9385/exe90d967507f86405a9606c35582b2fc43", line 10, in <module>
    import pydoop.pipes
  File "/usr/local/lib/python2.7/dist-packages/pydoop/pipes.py", line 29, in <module>
    pp = pydoop.import_version_specific_module('_pipes')
  File "/usr/local/lib/python2.7/dist-packages/pydoop/__init__.py", line 107, in import_version_specific_module
    return import_module(complete_mod_name(name))
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
ImportError: /usr/local/lib/python2.7/dist-packages/pydoop/_pipes_1_2_1.so: undefined symbol: BIO_s_mem
You are missing a required SSL library.
You will need to find and link libssl.so.1.0.0 in your environment.
Try to execute the following before running your pydoop script:
export LD_PRELOAD=PATH_TO/libssl.so.1.0.0
For example:
export LD_PRELOAD=/lib/x86_64-linux-gnu/libssl.so.1.0.0
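If you're not sure where (or whether) libssl.so.1.0.0 exists on your machine, something like this can help track it down first:
$ ldconfig -p | grep libssl
$ find / -name 'libssl.so.1.0.0' 2>/dev/null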
I want to start creating mods and so on for Minecraft. I am a Java programmer myself, so the coding shouldn't be a problem, but to start programming I need to decompile the server file from Minecraft. Everything needed to do so is given and explained in a txt document, but nonetheless I still get an annoying error.
I have checked my path for the JDK and JRE, and I have the latest version of both.
I really hope someone can help me. Here's the traceback:
File "runtime\decompile.py", line 143, in <module> main()
File "runtime\decompile.py", line 143, in <module> main()
File "runtime(options.config, options.force_jad) commands = Commands(donffile)
File "C:\Program Files\Java\MCP\runtime\commands.py", line 158, in __init__ self.checkfolders()
File "C.\Program Files\Java\MCP\runtime\commands.py", line 530, in checkfolders os.makedirs(self.dirtemp)
File "os.pyc", line 157, in makedirs
WindowsError: [Error 5] Access is denied: 'temp'
If I run it as administrator, I get the following message:
The system cannot find the path specified.
You'd save my day if you could help me with this problem.
Which decompiler do you use? I'd start by checking commands.py line 530 for any clues about what value dirtemp has. I'd recommend using the JD decompiler.
Also, I'd be surprised if Notch hadn't used any obfuscators to prevent decompiling. That means you'll get a bunch of unreadable gibberish Java sources.