When I try to attach to a process with code like:
vm = VirtualMachine.attach(pid);
vm.loadAgent(attachJarPath, properties);
I get this error:
java.io.IOException: Premature EOF
at sun.tools.attach.HotSpotVirtualMachine.readInt(HotSpotVirtualMachine.java:248)
at sun.tools.attach.LinuxVirtualMachine.execute(LinuxVirtualMachine.java:199)
at sun.tools.attach.HotSpotVirtualMachine.loadAgentLibrary(HotSpotVirtualMachine.java:58)
at sun.tools.attach.HotSpotVirtualMachine.loadAgentLibrary(HotSpotVirtualMachine.java:79)
at sun.tools.attach.HotSpotVirtualMachine.loadAgent(HotSpotVirtualMachine.java:103)
What can I do to fix this error?
I think I've found the reason:
the "properties" argument passed to the target JVM in:
vm.loadAgent(attachJarPath, properties);
cannot be too long, because of a length limit in the attach mechanism.
You can use a file to transfer your data instead.
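For example, here is a minimal sketch of that approach (the class name, the temp-file handling and the idea that the agent reads the file itself are my illustrative assumptions, not from the original post):

import com.sun.tools.attach.VirtualMachine;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class AttachWithFile {
    public static void main(String[] args) throws Exception {
        String pid = args[0];
        String attachJarPath = args[1];
        String longProperties = args[2]; // the payload that was too long to pass directly

        // Write the long payload to a temporary file...
        Path optionsFile = Files.createTempFile("agent-options", ".properties");
        Files.write(optionsFile, longProperties.getBytes(StandardCharsets.UTF_8));

        VirtualMachine vm = VirtualMachine.attach(pid);
        try {
            // ...and pass only the short file path as the agent options.
            // The agent's agentmain(String agentArgs, Instrumentation inst)
            // then reads the file contents itself.
            vm.loadAgent(attachJarPath, optionsFile.toAbsolutePath().toString());
        } finally {
            vm.detach();
        }
    }
}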
Environment Information
OS: Linux (Java program running in a Docker container)
Java version: 1.8
Node version: 10.22.1
I am trying to create a process from an executable JS file with the Java ProcessBuilder class, like below:
String command = "/node/app/bin/app.js";
final ProcessBuilder pb = new ProcessBuilder(command);
Process process = pb.start();
OutputStream out = process.getOutputStream();
out.write("\n[:--:]".getBytes());
out.flush();// this line throws the error java.io.IOException: Stream closed
After running pb.start() the process is created successfully. The problem is that the out.flush() call throws an exception when executed. Below is the execution error:
Caused by: java.io.IOException: Stream closed
at java.lang.ProcessBuilder$NullOutputStream.write(ProcessBuilder.java:433) ~[?:1.8.0_292]
at java.io.OutputStream.write(OutputStream.java:116) ~[?:1.8.0_292]
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[?:1.8.0_292]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[?:1.8.0_292]
The /node/app/bin/app.js content is like below:
#!/usr/bin/env node
var path = require('path');
var appJsFile = path.resolve(__dirname, '../app.js'); // app.js is working correctly
require(appJsFile);
Your sub-process is likely reporting an error and exiting while you are writing to its stdin via getOutputStream(), hence "Stream closed". Because you are not consuming the stdout and stderr streams, you haven't had a chance to read any error message.
An easy way to check is to redirect stdout and stderr to files that you can inspect afterwards:
pb.redirectOutput(new File("std.out"));
pb.redirectError(new File("std.err"));
Process process = pb.start();
...
int rc = process.waitFor();
Those files may help you spot the actual problem. The most robust way to deal with ProcessBuilder is to use consumer tasks in background threads to handle the sub-process stdin, stdout and stderr, and to wait for those consumers to finish after waitFor(). That prevents the sub-process from stalling or freezing if I/O is pending on any of the streams: getOutputStream(), getInputStream() and getErrorStream(), as sketched below.
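A minimal sketch of that pattern (the RunNodeApp class name and the drain helper are my own illustration, not from the original code):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.PrintStream;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RunNodeApp {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Process process = new ProcessBuilder("/node/app/bin/app.js").start();

        // Drain stdout and stderr in background threads so the child never
        // blocks on a full pipe buffer and its error output is not lost.
        Future<?> out = pool.submit(() -> drain(process.getInputStream(), System.out));
        Future<?> err = pool.submit(() -> drain(process.getErrorStream(), System.err));

        // Write to the child's stdin and close it so the child sees EOF.
        try (OutputStream stdin = process.getOutputStream()) {
            stdin.write("\n[:--:]".getBytes());
            stdin.flush();
        }

        int rc = process.waitFor();
        out.get();   // wait for both consumers to finish
        err.get();
        pool.shutdown();
        System.out.println("exit code: " + rc);
    }

    private static void drain(InputStream in, PrintStream sink) {
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
            reader.lines().forEach(sink::println);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}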
The command /node/app/bin/app.js was not working when executed directly in the Linux container.
Since pb.start() was not throwing any exception, I assumed the command itself was not the problem, because a broken command would have caused the start method to throw an exception.
After fixing /node/app/bin/app.js (it had a formatting issue and the shebang was not working properly), the error caused by out.flush() disappeared.
I want to read a CSV file locally using the Flink API, with the following code:
String csvPath = "data/weather.csv";
List<Tuple2<String, Double>> csv= env.readCsvFile(csvPath)
.types(String.class,Double.class).collect();
I tried files of different sizes (from 800 MB to 6 GB). Sometimes the operation completes successfully and sometimes it does not, because of the following timeout exception:
Exception in thread "main" java.util.concurrent.TimeoutException: Futures timed out after [10000 milliseconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:153)
at scala.concurrent.Await$$anonfun$ready$1.apply(package.scala:169)
at scala.concurrent.Await$$anonfun$ready$1.apply(package.scala:169)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.ready(package.scala:169)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.shutdown(FlinkMiniCluster.scala:439)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.stop(FlinkMiniCluster.scala:408)
at org.apache.flink.client.LocalExecutor.stop(LocalExecutor.java:127)
at org.apache.flink.client.LocalExecutor.executePlan(LocalExecutor.java:195)
at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:91)
at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:923)
at org.apache.flink.api.java.DataSet.collect(DataSet.java:410)
at org.apache.flink.simpleCSV.run(simpleCSV.java:83)
How can I fix this problem? Can I increase this timeout programmatically? Should I put a config file somewhere? Is there a specific heap size that I should set based on the file size?
collect() transfers the data from the cluster to the local client. This only works for very small data sets (< 10 MB).
If you have larger data sets, you need to process them on the cluster and emit the results through an output format, e.g., write them to a file, as sketched below.
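For example, a sketch along those lines (the output path and job name are placeholders):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class WriteWeatherCsv {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Read the CSV as before, but write the result out instead of collecting it.
        DataSet<Tuple2<String, Double>> csv = env
                .readCsvFile("data/weather.csv")
                .types(String.class, Double.class);

        csv.writeAsCsv("file:///tmp/weather-out", "\n", ",");

        // With a file sink the job must be triggered explicitly:
        env.execute("read weather csv");
    }
}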
If you are debugging this program, you can set a breakpoint in the constructor of org.apache.flink.api.java.LocalEnvironment (the constructor that takes a config) and evaluate the following expression to change the timeout to 200 seconds (Alt+F8 in IntelliJ IDEA):
config.setString("akka.ask.timeout", "200 s")
To find the LocalEnvironment class in IntelliJ IDEA, press Ctrl+N, check "Include non-project classes" in the pop-up window, then type "LocalEnvironment" in the edit box.
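Alternatively, if you construct the local environment yourself, the same key can be set programmatically; this is a sketch assuming a Flink version where ExecutionEnvironment.createLocalEnvironment(Configuration) is available:

// org.apache.flink.configuration.Configuration and
// org.apache.flink.api.java.ExecutionEnvironment
Configuration config = new Configuration();
config.setString("akka.ask.timeout", "200 s");
ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(config);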
I have a problem which I think is the same as that described here:
Error when opening a lucene index: Map failed
However, the solution does not apply in this case, so I am providing more details and asking again.
The index is created using Solr 5.3
The line of code causing the exception is:
IndexReader indexReader = DirectoryReader.open(FSDirectory.open(Paths.get("the_path")));
The exception stacktrace is:
Exception in thread "main" java.io.IOException: Map failed: MMapIndexInput(path="/mnt/fastdata/ac1zz/JATE/solr-5.3.0/server/solr/jate/data_aclrd/index/_5t.tvd") [this may be caused by lack of enough unfragmented virtual address space or too restrictive virtual memory limits enforced by the operating system, preventing us to map a chunk of 434505698 bytes. Please review 'ulimit -v', 'ulimit -m' (both should return 'unlimited'), and 'sysctl vm.max_map_count'. More information: http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:907)
at org.apache.lucene.store.MMapDirectory.map(MMapDirectory.java:265)
at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:239)
at org.apache.lucene.codecs.compressing.CompressingTermVectorsReader.<init>(CompressingTermVectorsReader.java:144)
at org.apache.lucene.codecs.compressing.CompressingTermVectorsFormat.vectorsReader(CompressingTermVectorsFormat.java:91)
at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:120)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:65)
at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:58)
at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:50)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:731)
at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:50)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63)
at uk.ac.shef.dcs.jate.app.AppATTF.extract(AppATTF.java:39)
at uk.ac.shef.dcs.jate.app.AppATTF.main(AppATTF.java:33)
The solutions suggested in the exception message do not work in this case because I am running the application on a server and I do not have permission to change those settings.
Namely,
ulimit -v unlimited
prints: "-bash: ulimit: virtual memory: cannot modify limit: Operation not permitted"
and
sysctl -w vm.max_map_count=10000000
gives: "error: permission denied on key 'vm.max_map_count'"
Is there any way I can solve this?
Thanks
I have found a solution and so I am answering myself.
If you really cannot set ulimit or vm.max_map_count, the only solution, according to http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html, is to configure Solr (or, if you work with the Lucene API directly, to choose explicitly) to use SimpleFSDirectory (on Windows) or NIOFSDirectory; both are slower than the default.
For example:
DirectoryReader.open(new NIOFSDirectory(Paths.get("path_to_index"), FSLockFactory.getDefault()))
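A slightly fuller sketch with the relevant imports (the index path is a placeholder, as above):

import java.nio.file.Paths;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.FSLockFactory;
import org.apache.lucene.store.NIOFSDirectory;

// Open the index with NIOFSDirectory instead of the default memory-mapped
// MMapDirectory, which avoids the mmap limits entirely (at some cost in speed).
IndexReader indexReader = DirectoryReader.open(
        new NIOFSDirectory(Paths.get("path_to_index"), FSLockFactory.getDefault()));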
I call a Java class from PL/SQL Developer. My Logger.java calls FileHandler with the string "C:\Logs\mylog.TXT":
fileHndlr = new FileHandler(logFileName, false);
and the result is:
Exception in thread "Root Thread" java.lang.Error: java.io.IOException: sjonfile_fileinfo fais to get fileinfo
at sun.nio.ch.FileKey.create(FileKey.java:41)
at sun.nio.ch.FileChannelImpl$SharedFileLockTable.<init>(FileChannelImpl.java:1037)
at sun.nio.ch.FileChannelImpl.fileLockTable(FileChannelImpl.java:806)
at sun.nio.ch.FileChannelImpl.tryLock(FileChannelImpl.java:867)
at java.nio.channels.FileChannel.tryLock(FileChannel.java:962)
at java.util.logging.FileHandler.openFiles(FileHandler.java:394)
at java.util.logging.FileHandler.<init>(FileHandler.java:268)
at xxx.logger.Logger.logSetup(Logger.java:194)
Caused by: java.io.IOException: sjonfile_fileinfo fais to get fileinfo
at sun.nio.ch.FileKey.init(Native Method)
at sun.nio.ch.FileKey.create(FileKey.java:39)
... 9 more
It seems some rights are missing to create the file; what needs to be checked?
UPDATE:
We installed Process Monitor on the server where the file operations happen, and we found two suspicious events:
1) BUFFER OVERFLOW error at QueryAllInformationFile
Date & Time: 2015.03.02. 9:23:14
Event Class: File System
Operation: QueryAllInformationFile
Result: BUFFER OVERFLOW
Path: C:\Logs\...\filename.lck
TID: 2708
Duration: 0.0000138
CreationTime: 2015.02.25. 15:49:13
LastAccessTime: 2015.02.25. 15:49:13
LastWriteTime: 2015.03.02. 9:23:14
ChangeTime: 2015.03.02. 9:23:14
FileAttributes: A
AllocationSize: 0
EndOfFile: 0
NumberOfLinks: 1
DeletePending: False
Directory: False
IndexNumber: 0x1500000007d793
EaSize: 0
Access: Generic Write, Read Attributes
Position: 0
Mode: Synchronous IO Non-Alert
AlignmentRequirement: Long
2) SHARING VIOLATION error at CreateFile operation
Date & Time: 2015.03.02. 9:23:20
Event Class: File System
Operation: CreateFile
Result: SHARING VIOLATION
Path: C:\Logs\...\filename.lck
TID: 2708
Duration: 0.0000284
Desired Access: Read Attributes, Delete
Disposition: Open
Options: Non-Directory File, Open Reparse Point
Attributes: n/a
ShareMode: Read, Write, Delete
AllocationSize: n/a
The following actions need to be executed as the SYSDBA user:
GRANT READ,WRITE ON DIRECTORY userDirectory TO userSchema;
Execute dbms_java.grant_permission('userSchema', 'java.io.FilePermission', 'userDirectory/*', 'read,write,execute,delete');
We are porting a simple Java application between Tandem NonStop systems, from G-Series to H-Series. Java version is 1.5.0_02.
When performing basic I/O tasks like getting output stream from or opening a client socket, we receive exceptions like
java.io.IOException: Value out of range
or
java.net.SocketException: Value out of range
("value out of range" is Tandem native jargon for, well, quite everything I suppose).
Has anybody had similar issues, i.e. I/O corruption while, for example, working with JNI?
I suppose there is something wrong with the system, but where might it be?
Thank you.
EDIT:
adding snippets as requested
Sample snippet (a), using Runtime.exec() (adapted):
Properties envVars = new Properties();
Runtime r = Runtime.getRuntime();
Process p = r.exec("/bin/env");
envVars.load(p.getInputStream());
Stack trace (a):
java.io.IOException: Value out of range (errno:4034)
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:194)
at java.lang.UNIXProcess$DeferredCloseInputStream.read(UNIXProcess.java:221)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:254)
at java.io.BufferedInputStream.read(BufferedInputStream.java:313)
at sun.nio.cs.StreamDecoder$CharsetSD.readBytes(StreamDecoder.java:411)
at sun.nio.cs.StreamDecoder$CharsetSD.implRead(StreamDecoder.java:453)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:183)
at java.io.InputStreamReader.read(InputStreamReader.java:167)
at java.io.BufferedReader.fill(BufferedReader.java:136)
at java.io.BufferedReader.readLine(BufferedReader.java:299)
at java.io.BufferedReader.readLine(BufferedReader.java:362)
at util.Environment.getVariables(Environment.java:39)
The last line fails, and output gets redirected to the console (!).
Sample snippet (b), using HttpURLConnection:
public WorkerThread (HttpURLConnection conn, String requestData, Logger logger)
{
this.conn = conn;
...
}
public void run ()
{
OutputStream out = conn.getOutputStream ();
}
Stack trace (b):
java.net.SocketException: Value out of range (errno:4034)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
at java.net.Socket.connect(Socket.java:507)
at sun.net.NetworkClient.doConnect(NetworkClient.java:155)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:365)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:477)
at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:280)
at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:337)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:176)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:736)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:162)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:828)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:230)
Case (a) can be avoided because it was a workaround for other issues with a previous JRE version (!), but the same behaviour with sockets is really nasty.
Error code 4034 seems to indicate that a specific server is not running in your NonStop cluster. Are you sure that your system is set up properly?
Update: The problem was caused by a spurious .so library.