AccessDenied error when using the psutil library on macOS via Jython

I am trying to use the psutil library on macOS via Jython, but when I call the psutil.cpu_times function, I get the following error:
except AccessDenied as err:
In the psutil documentation the reason is explained as follows:
Note (OSX) psutil.AccessDenied is always raised unless running as root
(lsof does the same).
Is there a way to overcome this problem? I will run the program in a Linux environment after development, so a temporary solution would be fine.
Thanks.

For those who land on this page looking for an answer to the same question: I couldn't find a solution to this problem and went back to extracting system status from /sys.
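For anyone taking the same route, here is a minimal sketch of that fallback in Java (note: on Linux the aggregate CPU times actually live in /proc/stat; adjust the path if you read other counters from /sys):

import java.nio.file.Files;
import java.nio.file.Paths;

// Minimal sketch of the procfs fallback described above. The "cpu"
// line of /proc/stat holds the aggregate times: user, nice, system,
// idle, iowait, irq, softirq, ...
public class CpuTimes {
    public static void main(String[] args) throws Exception {
        for (String line : Files.readAllLines(Paths.get("/proc/stat"))) {
            if (line.startsWith("cpu ")) {
                System.out.println(line);
            }
        }
    }
}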

Related

AnyLogic: Error when running a model as a standalone Java application

I want to export my model as a standalone Java application. When I run the .bat file (my OS is Windows 7), the following error appears: it says that it cannot create the Java virtual machine, and the reported error is about illegal access: deny.
What should I do?
And is there any other way to run a model on a computer where AnyLogic is not installed?
Thanks in advance.
What version of AnyLogic are you using? This option has been taken care of in the latest versions of AnyLogic.
Simply delete the following line in the .bat file
set OPTIONS_XJAL=--illegal-access=deny
or something similar related to the option --illegal-access=deny.
Depending on what Java version you are using, this option might not be available. Most models (depending on what Java functions you use in your model) should run just fine. If they don't, you need to check the error they give and investigate further.
In the latest AnyLogic they handle this using the following code
set OPTIONS_XJAL=--illegal-access=deny
IF "%VERSION:~0,2%"=="1." set OPTIONS_XJAL=

Problem while running a JAR file in Ubuntu

I am not very familiar with Ubuntu. I have moved a JAR file related to Blazegraph, which I used on my Windows machine, to my Ubuntu VM (Ubuntu 18.04 LTS Bionic).
I have also used chmod +x filename to make it executable. But when running the file, I get the following error:
ERROR: Banner.java:160: Uncaught exception in thread
java.lang.NullPointerException at
com.bigdata.rdf.sail.webapp.StandaloneNanoSparqlServer.main(StandaloneNanoSparqlServer.java:142)
Why do I get this message? I also found this thread on GitHub, but it seems no one has managed to fix it!
Note: The file is blazegraph.jar, which acts as a local Blazegraph server so I can run SPARQL queries on some ontologies. Could this be because the file is trying to act as a server, so there are possibly firewall issues? However, the server will be at http://localhost:9999/blazegraph/, which I think shouldn't have anything to do with the firewall (if there is one on Linux).
It seems this is a bug in Blazegraph.
See these links to read more about the problem: [1] [2].
PS: There have been some suggested ways of getting rid of the problem, but they did not work for me or I couldn't make them work. I initially wanted to remove this question but thought maybe others have the same problem.
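If anyone wants to rule out the firewall theory from the question first, a quick reachability probe is enough; this sketch assumes the default endpoint http://localhost:9999/blazegraph/ mentioned above:

import java.net.HttpURLConnection;
import java.net.URL;

// Probes the (assumed) default Blazegraph endpoint; getting any HTTP
// status back means the server is reachable and the firewall is not
// the problem.
public class BlazegraphPing {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:9999/blazegraph/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(2000);
        conn.setReadTimeout(2000);
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}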

Spark RDD Class not Found

I am new to Spark and need help with the error:
java.lang.NoClassDefFoundError: org/apache/spark/rdd/RDD$
I am creating a standalone Spark example in Scala. I ran sbt clean package and sbt assembly to package the Scala Spark code. Both completed successfully without any errors. Any operation on an RDD throws the error. Any pointers to fix this issue would be really helpful.
I invoke the job using the spark-submit command:
$SPARK_HOME/bin/spark-submit --class org.apache.spark.examples.GroupTest /Users/../spark_workspace/spark/examples/target/scala-2.10/spark-examples_2.10-1.3.0-SNAPSHOT.jar
I managed to hit this error and get past it. This is definitely a YMMV answer, but I leave it here in case it eventually helps someone.
In my case, I was running a Homebrew-installed Spark (1.2.0) and Mahout (0.11.0) on a Mac. It was pretty baffling to me because if I ran a Mahout command line by hand I didn't get the error, but if I invoked it from within some Python code it threw the error.
I realized that I had updated my SPARK_HOME variable in my profile to use 1.4.1 and had re-sourced it in my by-hand terminal. The terminal where I was running the Python code was still using 1.2.0. I re-sourced my profile in my Python terminal and now it "just works."
The whole thing feels very black-magicky; if I were to guess at a rational reason for this error, it would be that one moving part assumes a different Spark version, architecture, or whatever, than the one you actually have. That seems to be the solution hinted at in the comments, too.
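One concrete way to check for that kind of mismatch is to ask the JVM where it actually loads the Spark classes from; a small sketch, using the class from the error above:

// Sanity check: print which jar the Spark RDD class is loaded from,
// to catch a SPARK_HOME / classpath version mismatch.
public class SparkVersionCheck {
    public static void main(String[] args) throws Exception {
        Class<?> rdd = Class.forName("org.apache.spark.rdd.RDD");
        System.out.println(rdd.getProtectionDomain()
                .getCodeSource().getLocation());
    }
}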

Hadoop setup - Java trying to access window server

When I launch start-dfs.sh in Hadoop 2.2.0 on Mac OS X, buried in the error messages, I have this:
2014-02-24 14:48:23,448 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.lang.InternalError: Can't connect to window server - not enough permissions.
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1833)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1730)
...
I've dealt with this error before - it happens when Java tries to access the window server (even though it's a command-line program). For security reasons, this isn't allowed without a graphical login. This of course means that if you're running a headless server, you have to physically log in before your daemons can run.
Now, without going into a rant about how STUPID Java and/or the developers are for doing this (it seems to be a trend; the only other Java server component I use does the same thing), I found the option:
-Djava.awt.headless=true
which looks like a possible solution. But not only do I have no idea where to pass the option to Hadoop, I also tried it on the other software, and it still gives the error.
I'd appreciate any help.
So I tried it, and it turns out there are two things going on. The option DOES in fact work... for Hadoop. But here's where it gets weird:
Hadoop won't append to a log file - if one exists, it leaves it alone and silently fails to write any logs. So I was looking at old log files.
The other software still doesn't work, but they said they're aware of the bug and are fixing it.
I fixed it using:
export HADOOP_OPTS="-Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk -Djava.awt.headless=true"
in my .bashrc.
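To verify the flag actually reaches the JVM, a tiny check like this (run it with and without -Djava.awt.headless=true) can help:

import java.awt.GraphicsEnvironment;

// Prints whether the JVM considers itself headless; useful to confirm
// that -Djava.awt.headless=true was picked up from HADOOP_OPTS.
public class HeadlessCheck {
    public static void main(String[] args) {
        System.out.println("java.awt.headless = "
                + System.getProperty("java.awt.headless"));
        System.out.println("isHeadless() = "
                + GraphicsEnvironment.isHeadless());
    }
}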

clj-ssh / JSch unable to load library 'c' on Windows

I've added clj-ssh as a dependency to a Leiningen project, and I can (use 'clj-ssh.ssh), but calling (ssh-agent {}) gives the error:
UnsatisfiedLinkError Unable to load library 'c': The specified module cannot be found.
at com.sun.jna.NativeLibrary.loadNativeLibrary
...
at org.jcraft.jsch.agentproxy.usocket.JNAUSocketFactory$CLibrary.<clinit>
...
Anyone know why this is? I'm thinking it could be to do with Windows not coming with a C standard library, in which case could installing e.g. Cygwin help?
Try installing Cygwin and add something like:
-Djava.library.path=...path to lib dir ...
if it doesn't find the library on its own.
I'm the author of jsch-agent-proxy, which is used in clj-ssh. I think it will not work with Cygwin's ssh-agent, because JNA does not provide the native library for it. How about trying PuTTY's Pageant? If you need to use Cygwin's ssh-agent and the "nc" command exists in your Cygwin environment, how about using NCUSocketFactory? I'm not so familiar with clj-ssh, but it should be possible to use NCUSocketFactory instead of JNAUSocketFactory, according to agent.clj.
UPDATE:
I have confirmed that I can successfully run clj-ssh with ssh-agent in my Cygwin environment by applying the following commit:
GitHub clj-ssh commit f1109e2c0dfa25c9db563b2f64d2b7dcb4653adf
OK, after some digging in the source, it seems that clj-ssh attempts to use the system ssh-agent by default (which seems like strange behaviour if it isn't Windows compatible). This makes clj-ssh.cli unusable, but clj-ssh.ssh is fine with the fix
(ssh-agent {:use-system-ssh-agent false})
If you do want to use a system ssh-agent, the clj-ssh README and ymnk below mention PuTTY's Pageant; I couldn't find any info on setting this up, but it should be doable with Cygwin.
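For anyone working at the JSch level rather than through clj-ssh, the rough Java equivalent of that fix is to build a session without attaching any agent proxy at all (host and credentials below are placeholders):

import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

// Rough JSch-level equivalent of {:use-system-ssh-agent false}:
// a plain JSch instance with no agent proxy wired in.
public class NoAgentSession {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();  // no identity repository/agent attached
        Session session = jsch.getSession("user", "host.example.com", 22);
        session.setPassword("secret");  // placeholder credentials
        session.setConfig("StrictHostKeyChecking", "no");
        // session.connect();  // uncomment with real host and credentials
    }
}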
I've seen this when the SSH_AUTH_SOCK environment variable is set. Clearing this environment variable before starting the JVM might solve the issue. When SSH_AUTH_SOCK is not set, clj-ssh should automatically use Pageant if it is running.
It looks like the best solution would be to support NCUSocketFactory as per ymnk's commit, and to add documentation about Cygwin's ssh-agent. Happy to take a pull request for that.
