I'm trying to find the total number of open file descriptors for a process, and found that the Sigar API exposes that information. However, the following call
Sigar sigar = new Sigar();
sigar.getProcFd(<pid>);
with <pid> replaced by an actual process id, throws the following exception:
org.hyperic.sigar.SigarNotImplementedException: This method has not been implemented on this platform
at org.hyperic.sigar.SigarNotImplementedException.<clinit>(SigarNotImplementedException.java:28)
at org.hyperic.sigar.ProcFd.gather(Native Method)
at org.hyperic.sigar.ProcFd.fetch(ProcFd.java:30)
at org.hyperic.sigar.Sigar.getProcFd(Sigar.java:531)
From the exception it's clear that the native method gather() hasn't been implemented (or isn't available) on my OS (Mac OS X). How do I fix this? I tried adding "libsigar-universal64-macosx.dylib" to the classpath, but with no luck.
Also, I tried creating a ProcFd directly, like below, instead of getting it from Sigar:
ProcFd proc = new ProcFd();
System.out.println("Total FD: " + proc.getTotal());
In this case the output is always 0. Based on the API doc, it should return the total number of open file descriptors (http://cpansearch.perl.org/src/DOUGM/hyperic-sigar-1.6.3-src/docs/javadoc/org/hyperic/sigar/ProcFd.html). I'm not sure if it's returning 0 for the same reason as above, i.e. a missing implementation for my OS. Is that correct?
Also, why is it that when the ProcFd is obtained via sigar.getProcFd() it throws the exception above, but when it's created with new ProcFd() it doesn't, yet proc.getTotal() always returns 0?
I ended up using lsof from a shell script instead of the Sigar library; I never got this to work on the Mac. I tried it on Linux and it worked without any issues.
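In case it helps anyone, the same lsof approach can also be driven directly from Java. Here is a minimal sketch (countOpenFds is my own helper, not part of SIGAR; it assumes lsof is installed and on the PATH):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class FdCounter {
    // Counts the lines printed by `lsof -p <pid>`; the first line is a
    // header, so the count starts at -1.
    static long countOpenFds(long pid) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("lsof", "-p", Long.toString(pid)).start();
        long count = -1;
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            while (r.readLine() != null) {
                count++;
            }
        }
        p.waitFor();
        return count;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Total FD: " + countOpenFds(Long.parseLong(args[0])));
    }
}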
The answer is in the documentation (http://cpansearch.perl.org/src/DOUGM/hyperic-sigar-1.6.3-src/docs/javadoc/org/hyperic/sigar/ProcFd.html), and as per your finding: OS X is not supported. That also explains your second observation: new ProcFd() merely constructs an empty data bean whose fields are only populated by the native gather() call, so getTotal() returns the default of 0, whereas sigar.getProcFd() is what actually invokes gather(), and that is why it throws on an unsupported platform.
getTotal

public long getTotal()

Get the Total number of open file descriptors.

Supported Platforms: AIX, HPUX, Linux, Solaris, Win32.

System equivalent commands:
    AIX:     lsof
    Darwin:  lsof
    FreeBSD: lsof
    HPUX:    lsof
    Linux:   lsof
    Solaris: lsof
    Win32:

Returns:
    Total number of open file descriptors
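Note that Darwin is missing from the "Supported Platforms" line even though lsof is listed as its system-equivalent command. If the code also has to run on OS X, one option (just a sketch, not official SIGAR guidance) is to guard the call and fall back:

import org.hyperic.sigar.ProcFd;
import org.hyperic.sigar.Sigar;
import org.hyperic.sigar.SigarException;
import org.hyperic.sigar.SigarNotImplementedException;

public class FdProbe {
    public static void main(String[] args) throws SigarException {
        Sigar sigar = new Sigar();
        long pid = sigar.getPid(); // the current process, just for demonstration
        try {
            ProcFd fd = sigar.getProcFd(pid);
            System.out.println("Total FD: " + fd.getTotal());
        } catch (SigarNotImplementedException e) {
            // Unsupported platform (e.g. OS X): fall back to lsof or similar
            System.err.println("ProcFd not implemented on this platform");
        }
    }
}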
I'm trying to port over some old Ruby code I used to run on Heroku to a Python-based Google Cloud Function.
This code runs Apple's Reporter tool which is "a command-line tool that you can use to download your Sales and Trends reports and Payments and Financial Reports". Docs can be found here.
The Ruby code worked well for years on Heroku with a Ruby + Java buildpack, until yesterday. A small snippet of it, where options are the args received:
options = [
  vendor_id,
  file_type,
  sub_file_type,
  'Daily',
  trimmed_date,
  version
]

Dir.chdir("#{Rails.root}/tmp/") do
  stdout, stderr, status = Open3.capture3("java -jar #{Rails.root}/public/jars/Reporter.jar p=Reporter.properties m=Robot.XML Sales.getReport #{options.join(', ')}")
  return {:status => status, :error => stderr.to_s, :stdout => stdout.to_s}
end
The error I'm seeing on Heroku, after no code or stack updates, is: Network is available but cannot connect to application. Check your proxy and firewall settings and try again.
Most of our other similar processes have been moved to Google Cloud Functions, so after getting nowhere with the above error I thought I'd move this also.
So a similar snippet this time in Python:
from subprocess import Popen, PIPE

def execute_reporter_jar(vendor_id, trimmed_date, file_type, api_version):
    process = Popen(["java -jar Reporter.jar p=Reporter.properties Sales.getReportVersion Sales, Detailed"], stdin=PIPE, stdout=PIPE, shell=True)
    out, err = process.communicate()
    print("returncode = %s" % process.returncode)
    print("stdout = %s" % out)
    print("stderr = %s" % err)
This works well locally, but when I deploy to Google Cloud it seemingly runs successfully in a few ms; however, nothing actually happens, and when I dig deeper it turns out the subprocess is returning 127 (command not found). So it seems the cloud function can't access Java.
After a good 24hrs, I've hit a wall with this. Can anyone help? I have zero Java knowledge and I know cloud functions have a Java runtime, but I would prefer to stick with Python.
The ultimate aim is for Apple's reporter to run and save the requested file to Google Cloud Storage.
Thanks in advance!
The execution environment for Cloud Functions with the Python runtime (both 3.7 and 3.8) is currently based on Ubuntu 18.04 (check the information in this link).
The runtime includes only a limited set of system packages, so shelling out with subprocess is usually not recommended; a JRE is not among those packages, which is consistent with the 127 (command not found) you are seeing.
If sticking with Python is paramount for you, you could try deploying your function with the Buildpacks CLI and extending the builder image to install Java on top of the Python runtime. Alternatively, if your application can be dockerized, consider building an image that includes Java yourself and deploying it to Cloud Run.
What I am basically doing is automating some shell commands (these include Hadoop shell commands) using Java code. On bash I currently run the following:
hadoop fs -mkdir path/to/folder
hadoop fs -chmod a+w path/to/folder
Everything works fine. Now, when trying to use Java code to perform the same actions:
org.apache.hadoop.fs.FileSystem.mkdir(new Path("path/to/folder"), new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL))
unfortunately this method:
public void setPermission(Path p, FsPermission permission) throws IOException {
}

is not implemented (i.e. it has an empty body) in Hadoop v2.6.0 through v2.8.0.
My question: how can I add read/write permissions to a Hadoop path using Java code?
First of all, you might want to cross-check the results of your analysis. If you look here, for example, you'll find that FileSystem is actually an abstract class, so it wouldn't surprise me if the specific subclass that actually gets instantiated at some point overrides that empty setPermission() method, based on the underlying OS for example.
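For what it's worth, the concrete subclass is what FileSystem.get(Configuration) hands back, so the Java equivalent of your two shell commands would look roughly like the sketch below (my sketch, assuming a reachable cluster and the usual *-site.xml files on the classpath); whether setPermission() then actually does anything depends on that concrete subclass, which is exactly the point above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

public class MakeWritableDir {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // reads core-site.xml etc.
        FileSystem fs = FileSystem.get(conf);     // the concrete subclass, e.g. DistributedFileSystem
        Path dir = new Path("path/to/folder");
        fs.mkdirs(dir);                           // hadoop fs -mkdir
        // hadoop fs -chmod a+w equivalent (rwx for user, group and others here)
        fs.setPermission(dir, new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL));
        fs.close();
    }
}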
In any case, there is a simple but ugly workaround: use ProcessBuilder and run
hadoop fs -chmod a+w path/to/folder
from within Java. And write down:
// TODO: check with next version of hadoop if fs.FileSystem.setPermission() is now implemented
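A sketch of that workaround might look like the following (makeWorldWritable is my own name for it; it assumes the hadoop binary is on the PATH of the JVM process):

import java.io.IOException;

public class ChmodWorkaround {
    // Hypothetical helper wrapping `hadoop fs -chmod a+w <path>`
    static void makeWorldWritable(String path) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("hadoop", "fs", "-chmod", "a+w", path)
                .inheritIO() // pass hadoop's own stdout/stderr through
                .start();
        int exit = p.waitFor();
        if (exit != 0) {
            throw new IOException("hadoop fs -chmod exited with " + exit + " for " + path);
        }
    }
}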
I was attempting to use StreamSets to query an Oracle database and publish the data into Kafka. I downloaded the StreamSets tarball on my Mac and unzipped it into my home directory. Running $HOME/streamsets-datacollector-2.1.0.2/bin/streamsets dc started the Data Collector on my first try. I then followed the instructions here to add the JDBC driver, then the instructions here to configure my StreamSets job. However, I got an error: JDBC_00 - Cannot connect to specified database: com.streamsets.pipeline.api.StageException: JDBC_06 - Failed to initialize connection pool: java.sql.SQLRecoverableException: IO Error: Bad file descriptor.
This wound up having something to do with the limit on the number of files a process can have open. When I ran ulimit -n on the laptop, it showed 4864; I raised it to 10,000 via ulimit -n 10000, restarted the StreamSets server, and it worked! If I need to keep running this, I will find a more permanent way of setting the ulimit for this process to work around the issue.
You definitely need to increase ulimit -n. To change it permanently for all users in Ubuntu, try the following (the file name under /etc/security/limits.d/ is just an example; any name ending in .conf works):
echo "* soft nofile 60000" > /etc/security/limits.d/90-nofile.conf && echo "* hard nofile 60000" >> /etc/security/limits.d/90-nofile.conf
Substitute 60000 with whatever number you want. I've never had a problem in StreamSets using 60k. You should be able to do something similar on BSD.
I'm trying to use RemoveDrive.exe, found here, in my Java application. I have it in my JAR, and I'm extracting it to a temporary file using the following code. However, when I try to run it, I get an IOException saying CreateProcess error=5, Access is denied. The program doesn't normally need admin privileges, though. Any ideas on what could be causing the issue?
File RDexe = File.createTempFile("rmvd", ".exe");

// copy the executable out of the JAR into the temp file
InputStream exesrc = GraphicUI.class.getResource("RemoveDrive.exe").openStream();
FileOutputStream out = new FileOutputStream(RDexe);
byte[] temp = new byte[1024];
int rc;
while ((rc = exesrc.read(temp)) > 0)
    out.write(temp, 0, rc);
exesrc.close();
out.close();
RDexe.deleteOnExit();

// run executable
Runtime runtime = Runtime.getRuntime();
System.out.println(RDexe.getPath() + " " + "F:\\" + " -b -s");
Process proc = runtime.exec(RDexe.getPath() + " " + "F:\\" + " -b");

// parse its output
InputStream is = proc.getInputStream();
BufferedReader reader = new BufferedReader(new InputStreamReader(is));
String line;
boolean ejected = false;
while ((line = reader.readLine()) != null)
    if (line.equalsIgnoreCase("failed")) ejected = false;
    else if (line.equalsIgnoreCase("success")) ejected = true;
reader.close();
is.close();
UPDATE: If I enable the built-in Administrator account (net user administrator /active:yes), everything works fine from there. However if I right click and run as administrator in my standard account, I still get the error and UAC doesn't even ask for permission.
EDIT: Seeing as though the bounty is nearly finished, please see my SuperUser question which has helped me solve this problem... I'll be awarding the bounty and accepting an answer soon.
This may not be the problem in your situation, but some anti-virus programs will prevent executables or scripts inside temporary folders from being run. Instead of creating a temporary file, try putting it in the user directory:
File rdExe = new File(System.getProperty("user.home") + "/.yourProgramName/rmvd.exe");
rdExe.getParentFile().mkdirs();
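To complete the picture, the copy from the JAR to that location could look like this (a sketch; extractRemoveDrive is my own name, and GraphicUI is the class from the question that bundles the resource):

import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

class ExeExtractor {
    static File extractRemoveDrive() throws IOException {
        File rdExe = new File(System.getProperty("user.home") + "/.yourProgramName/rmvd.exe");
        rdExe.getParentFile().mkdirs();
        // copy the bundled executable next to the user's home directory
        try (InputStream in = GraphicUI.class.getResourceAsStream("RemoveDrive.exe")) {
            Files.copy(in, rdExe.toPath(), StandardCopyOption.REPLACE_EXISTING);
        }
        return rdExe;
    }
}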
Just a heads-up on another way to run files: have you thought of using the Java Desktop class? http://docs.oracle.com/javase/6/docs/api/java/awt/Desktop.html
I've found it useful when needing to run programs from my Java program. Something like this could work for you:
Desktop.getDesktop().open(new File("enter path and name of the file"));
Hope you find it useful.
I am not a Java user, but couldn't this be a 32- vs. 64-bit issue?
On 64-bit Windows, error code 5 usually means that the executable is not 64-bit compatible. Sometimes this happens even when the executable only needs to access some (older Windows) system directory that no longer exists. To test this, try running the executable from the command line; if you can get it to work there, it's a different issue. If not, find an executable built for your OS.
Another possibility is that the file has to be physically present on some drive.
You wrote that you have it as a temporary file; I'm not sure what that means in Java. If it is copied to a real file and deleted after use, that's fine, but if it only exists somewhere in memory, that could be a problem if the executable needs access to itself. To test this, just copy the file to some known location and run it from there (from Java). If that works, you will need to handle it (copy the executable to a physical disk before execution and delete it afterwards, or whatever).
Another possibility is that error code 5 comes from the Java environment and not from the OS.
In that case I have no clue what it means (not a Java user).
Seeing as it has only been touched on here, I will say that the issue was related to permissions in Windows, and had nothing to do with Java.
As stated in the SuperUser question I've linked to in my original question, I found that my usual account did not have ownership of that folder for some unknown reason - so nothing could be executed; it wasn't just the temporary file I had created in Java.
Even though I am an administrator, in order to take ownership of the folder I had to enable the Built-In administrator account and grant myself ownership. Since I did that, it has all worked as expected.
Thanks to all for their efforts, I will award the bounty to the answer that was most detailed and put me on the right tracks.
What version of Windows are you running? Microsoft significantly tightened the restrictions around executing programs in Windows 7. My guess is that the OS won't allow you to fork something that wasn't authenticated at the time your program was launched. I'd try running it on Windows 2000 or XP and see if you have the same issues.
I have two servers running Glassfish 2.1, both with the same web app.
This error has occurred twice: some JSP pages stop displaying, showing only a blank page, and the following errors are printed in the logs...
PWC1231: Servlet.service() for servlet jsp threw exception
java.io.FileNotFoundException: /path/to/jsp/file/jsp_file.jsp.java (Permission denied)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:179)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:70)
    at org.apache.jasper.compiler.AntJavaCompiler.getJavaWriter(AntJavaCompiler.java:213)
    at org.apache.jasper.compiler.Compiler.generateJava(Compiler.java:173)
    at org.apache.jasper.compiler.Compiler.compile(Compiler.java:409)
    at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:592)
    at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:344)
    at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:470)
    at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:364)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:831)
    at org.apache.catalina.core.ApplicationFilterChain.servletService(ApplicationFilterChain.java:411)
    at org.apache.catalina.core.ApplicationDispatcher.doInvoke(ApplicationDispatcher.java:855)
    at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:703)
    at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:542)
    at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:474)
    at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:366)
    at org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1056)
    at org.apache.struts.tiles.TilesRequestProcessor.doForward(TilesRequestProcessor.java:261)
    at org.apache.struts.action.RequestProcessor.processForwardConfig(RequestProcessor.java:388)
    at org.apache.struts.tiles.TilesRequestProcessor.processForwardConfig(TilesRequestProcessor.java:316)
    at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:231)
    at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1164)
    at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:415)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:738)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:831)
    at org.apache.catalina.core.ApplicationFilterChain.servletService(ApplicationFilterChain.java:411)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:317)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:198)
    at com.my.app.filtro.FiltroCallcenter.doFilter(FiltroCallcenter.java:90)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:230)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:198)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:288)
    at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:271)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:202)
    at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:632)
    at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:577)
    at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:94)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:206)
    at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:632)
    at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:577)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:571)
    at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:1080)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:150)
    at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:632)
    at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:577)
    at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:571)
    at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:1080)
    at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:272)
    at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.invokeAdapter(DefaultProcessorTask.java:637)
    at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.doProcess(DefaultProcessorTask.java:568)
    at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.process(DefaultProcessorTask.java:813)
    at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.executeProcessorTask(DefaultReadTask.java:341)
    at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:263)
    at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:214)
    at com.sun.enterprise.web.connector.grizzly.TaskBase.run(TaskBase.java:265)
    at com.sun.enterprise.web.connector.grizzly.ssl.SSLWorkerThread.run(SSLWorkerThread.java:106)
Followed by this:
PWC6344: Unable to create output writer for file /path/to/jsp/file/jsp_file.jsp.java|#]
Sometimes only the PWC6344 error is printed and sometimes both; the PWC1231 error is always followed by PWC6344 (which makes sense, because that exception is thrown when an IOException occurs).
Both times this error happened, the only thing I did was stop and start the instance, and everything was alright again. Also, this error has only occurred on one of the servers.
What is happening? Or how can I diagnose what is causing this, so I can fix the problem instead of stopping and restarting for eternity?
UPDATES:
I was looking into the possibility of this being a file descriptor problem, as suggested by sbridges, but: the maximum number of file handles is 811975, with 4520 files open, on one server, and 359532, with only 6894 open, on the other.
So I guess it's safe to say this is not the problem!
Does someone have another theory?
It looks like the permissions are set incorrectly and you can't write the compiled JSP page to disk:
/path/to/jsp/file/jsp_file.jsp.java (Permission denied)
Are the permissions correct on that directory/file?
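If you'd rather check from code than from the shell, a trivial probe along these lines (my sketch; the path is the placeholder from your log, so substitute the real directory) will tell you whether the server user can write there:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class WriteProbe {
    public static void main(String[] args) {
        // Directory where Jasper tries to write the generated .java file
        Path workDir = Paths.get("/path/to/jsp/file");
        System.out.println("exists:   " + Files.exists(workDir));
        System.out.println("writable: " + Files.isWritable(workDir));
    }
}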
Rename jsp_file.jsp.java to jsp_file.jsp.
PWC6344: Unable to create output writer for file /path/to/jsp/file/jsp_file.jsp.java|#]
This can also happen if the underlying operating system has run out of file handles/descriptors, which are required in order to open a file for reading or writing. I'm not closely familiar with CentOS, but judging by the many search results on this problem, it has a relatively low default limit of 1024. Among the results you'll see a lot of questions and answers on how to increase it, such as the following blog post:
Increase the number of file descriptors on Centos and Fedora Linux
Raising the number of file descriptors for a regular user on CentOS/Fedora/Red Hat is surprisingly difficult to learn how to do. There are lots of incomplete walkthroughs on the web, some with typos and other problems.
Here are the steps that worked for me to raise the open file descriptor limit from 1024 (the default) to 65535:
As root, edit /etc/sysctl.conf and add the line:
fs.file-max = 512000
At the bash prompt, run:
$ sysctl -p
That will cause the settings to take effect. You can also echo 512000 > /proc/sys/fs/file-max, but that may reset on reboot.
Edit /etc/security/limits.conf and add the following:
* - nofile 65535
See the inline comments for more details on what that does and how to make it more restrictive if you prefer.
As root, run
$ ulimit -n 65535
and make sure you have no errors. To double check, run ulimit -n and make sure the response is 65535.
Ensure that PAM authentication is turned on for SSH, or else when you try to connect as a regular user, you won’t see the new limits. Edit /etc/ssh/sshd_config and make sure you have:
UsePAM yes
Restart SSH /sbin/service sshd restart if you made any changes.
Login as a regular user with a new SSH session & shell and run:
$ ulimit -n 65535
Run ulimit -n again to check and good luck!