I compiled a MATLAB function using Library Compiler in MATLAB 2015b. I suspect the pca function is the source of the exception, because a simple addition function that I made executed without any problems. How can I execute the pca function?
MATLAB function:
function [ COEFF,SCORE,latent] = ACP( path )
Data = fileread(path);
Data = strrep(Data, ',', '.');
FID = fopen('comma2pointData.txt', 'w');
fwrite(FID, Data, 'char');
fclose(FID);
Data=importdata('comma2pointData.txt','\t');
[COEFF,SCORE,latent] = pca(Data);
end
Java code:
String path = "/Desktop/datamicro.txt";
Object[] result = null;
acpClass acp = null;
try {
    acp = new acpClass();
    result = acp.ACP(3, path);
} catch (MWException ex) {
    Logger.getLogger(CalculAcpFrame.class.getName()).log(Level.SEVERE, null, ex);
} finally {
    MWArray.disposeArray(result);
    if (acp != null) {
        acp.dispose();
    }
}
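For reference, once the call succeeds the three outputs could be read from the returned Object[] roughly like this. This is only a sketch and is not part of the original code: MWNumericArray and toArray() are the standard javabuilder way to copy a MATLAB matrix back into Java, and the printed row count is purely illustrative.
MWNumericArray coeff  = (MWNumericArray) result[0];    // COEFF
MWNumericArray score  = (MWNumericArray) result[1];    // SCORE
MWNumericArray latent = (MWNumericArray) result[2];    // latent
double[][] coeffValues = (double[][]) coeff.toArray(); // MATLAB matrix copied into a Java 2-D array
System.out.println("COEFF rows: " + coeffValues.length);
These lines would go inside the try block, after the result assignment and before disposeArray runs in the finally block.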
datamicro.txt
0,25 0,16 0,95 0,53 0,22 1,17 549,00
0,20 0,06 0,39 0,62 0,18 1,09 293,25
0,16 0,05 0,31 0,39 0,14 0,78 935,00
0,19 0,06 0,40 0,62 0,23 1,14 380,00
The exception:
Caught "std::exception" Exception message is:
Timed out waiting for Thread to Process
avr. 06, 2017 11:59:57 PM microarchi_proj.Microarchi_proj main
GRAVE: null
... Matlab M-code Stack Trace ...
com.mathworks.toolbox.javabuilder.MWException: Timed out waiting for Thread to Process
at com.mathworks.toolbox.javabuilder.internal.MWMCR.mclFeval(Native Method)
at com.mathworks.toolbox.javabuilder.internal.MWMCR.access$600(MWMCR.java:31)
at com.mathworks.toolbox.javabuilder.internal.MWMCR$6.mclFeval(MWMCR.java:861)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.mathworks.toolbox.javabuilder.internal.MWMCR$5.invoke(MWMCR.java:759)
at com.sun.proxy.$Proxy0.mclFeval(Unknown Source)
at com.mathworks.toolbox.javabuilder.internal.MWMCR.invoke(MWMCR.java:427)
at ACPFunction.acpClass.ACP(acpClass.java:210)
at microarchi_proj.Microarchi_proj.main(Microarchi_proj.java:1145)
Edited:
After the data was provided, the code was edited to accommodate the data formatting.
The edited code is:
function [ COEFF,SCORE,latent] = ACP(path)
%% Reading file as a string
Data = fileread(path);
%% Converting comma decimal to point decimal values
Data = strrep(Data, ',', '.');
%% writing point decimal values to new file
FID = fopen('comma2pointData.txt', 'w');
fwrite(FID, Data, 'char');
fclose(FID);
% Delete unwanted variables
clear Data FID
%% Reading new file
nData=importdata('comma2pointData.txt','\t');
% determine rows and columns
rows = length(nData);
[~,columns] = size(strsplit(nData{1},' '));
b = zeros(rows,columns);
%
for row = 1:rows
    line = nData{row};
    a = strsplit(line,' ');
    b(row,:) = cellfun(@str2num, a);
end
% Delete unwanted variables
clear nData a line row rows columns
%% Calling pca function
[COEFF,SCORE,latent] = pca(b);
% Delete unwanted variables
clear b
% End of Function
end
I hope this solves your problem.
My code is here:
function MsgReceivedInPastHour(channelId, connectorID, status) {
    var client = new com.mirth.connect.client.core.Client('https://127.0.0.1:443/');
    try {
        var loginStatus = client.login('userID', 'psw');
    } catch (ex) {
        client.close();
        throw 'Unable to log on to the server, Error: ' + ex.message;
    }
    var filter = new com.mirth.connect.model.filters.MessageFilter;
    var calendar = java.util.Calendar;
    var startDate = new calendar.getInstance();
    var endDate = new calendar.getInstance();
    ..
    .. logic to set start/end date
    ..
    filter.setStartDate(startDate);
    filter.setEndDate(endDate);
    var statuses = new java.util.HashSet();
    var Status = com.mirth.connect.donkey.model.message.Status;
    var list = Lists.list().append(connectorID);
    var metricStatus = Status.SENT;
    statuses.add(metricStatus);
    filter.setStatuses(statuses);
    filter.setIncludedMetaDataIds(list);
    var nCount = client.getMessageCount(channelId, filter);
    client.close();
    return nCount;
}
Reference:
Mirth getMessageCount using Javascript not working
Mostly it works fine, but it randomly throws an exception at line number 218, which is:
var client = new com.mirth.connect.client.core.Client('https://127.0.0.1:443/')
Does anyone have experience with this error, or a solution to get rid of it?
[2021-06-30 02:00:02,000] ERROR (com.mirth.connect.connectors.js.JavaScriptDispatcher:193):
Error evaluating JavaScript Writer (JavaScript Writer "Submit Hx channel status to DataDog" on channel 1xxxxxxxxxxxxxxxx4).
com.mirth.connect.server.MirthJavascriptTransformerException:
CHANNEL: ChannelStatus-Poller-Count
CONNECTOR: Submit Hx channel status to DataDog
SCRIPT SOURCE: JavaScript Writer
SOURCE CODE:
218: var client = new com.mirth.connect.client.core.Client('https://127.0.0.1:443/')
221: // log on to the server
222: try{
LINE NUMBER: 218
DETAILS: Wrapped java.lang.IllegalStateException: zip file closed
at a7fa25a9-af95-4410-bb4f-f4f08ae0badb:218 (MsgReceivedInPastHour)
at a7fa25a9-af95-4410-bb4f-f4f08ae0badb:1013 (doScript)
at a7fa25a9-af95-4410-bb4f-f4f08ae0badb:1033
at com.mirth.connect.connectors.js.JavaScriptDispatcher$JavaScriptDispatcherTask.doCall(JavaScriptDispatcher.java:184)
at com.mirth.connect.connectors.js.JavaScriptDispatcher$JavaScriptDispatcherTask.doCall(JavaScriptDispatcher.java:122)
at com.mirth.connect.server.util.javascript.JavaScriptTask.call(JavaScriptTask.java:113)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)Caused by: java.lang.IllegalStateException: zip file closed
at java.util.zip.ZipFile.ensureOpen(ZipFile.java:686)
at java.util.zip.ZipFile.access$200(ZipFile.java:60)
at java.util.zip.ZipFile$ZipEntryIterator.hasNext(ZipFile.java:508)
at java.util.zip.ZipFile$ZipEntryIterator.hasMoreElements(ZipFile.java:503)
at java.util.jar.JarFile$JarEntryIterator.hasNext(JarFile.java:253)
at java.util.jar.JarFile$JarEntryIterator.hasMoreElements(JarFile.java:262)
at org.reflections.vfs.ZipDir$1$1.computeNext(ZipDir.java:30)
at org.reflections.vfs.ZipDir$1$1.computeNext(ZipDir.java:26)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:141)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:136)
at org.reflections.Reflections.scan(Reflections.java:240)
at org.reflections.Reflections.scan(Reflections.java:204)
at org.reflections.Reflections.<init>(Reflections.java:129)
at org.reflections.Reflections.<init>(Reflections.java:170)
at org.reflections.Reflections.<init>(Reflections.java:143)
at com.mirth.connect.client.core.Client.<init>(Client.java:176)
at com.mirth.connect.client.core.Client.<init>(Client.java:143)
at sun.reflect.GeneratedConstructorAccessor159.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.mozilla.javascript.MemberBox.newInstance(MemberBox.java:159)
at org.mozilla.javascript.NativeJavaClass.constructInternal(NativeJavaClass.java:266)
at org.mozilla.javascript.NativeJavaClass.constructSpecific(NativeJavaClass.java:205)
at org.mozilla.javascript.NativeJavaClass.construct(NativeJavaClass.java:166)
at org.mozilla.javascript.Interpreter.interpretLoop(Interpreter.java:1525)
at org.mozilla.javascript.Interpreter.interpret(Interpreter.java:815)
at org.mozilla.javascript.InterpretedFunction.call(InterpretedFunction.java:109)
at org.mozilla.javascript.ContextFactory.doTopCall(ContextFactory.java:405)
at org.mozilla.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:3508)
at org.mozilla.javascript.InterpretedFunction.exec(InterpretedFunction.java:120)
at com.mirth.connect.server.util.javascript.JavaScriptTask.executeScript(JavaScriptTask.java:150)
at com.mirth.connect.connectors.js.JavaScriptDispatcher$JavaScriptDispatcherTask.doCall(JavaScriptDispatcher.java:149)
... 6 more
The issue could not be solved from within the Mirth Connect Administrator environment, so to work around it I used a DB query instead.
Go to the Mirth DB: you can find the related tables there, and it is safer to query the DB directly.
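As a rough sketch of that kind of query from a JavaScript context (the table names d_channels and d_mm<localChannelId>, the single-character status code 'S', and the PostgreSQL connection details are assumptions based on a default Mirth Connect 3.x schema, not the exact query I used; verify them against your own database):
function msgReceivedInPastHourFromDb(channelId) {
    // Mirth's DatabaseConnectionFactory gives us a plain JDBC-backed connection
    var dbConn = DatabaseConnectionFactory.createDatabaseConnection(
        'org.postgresql.Driver', 'jdbc:postgresql://localhost:5432/mirthdb', 'mirth', 'password');
    try {
        // resolve the channel's local id, which is part of the per-channel table names
        var rs = dbConn.executeCachedQuery(
            "SELECT local_channel_id FROM d_channels WHERE channel_id = '" + channelId + "'");
        rs.next();
        var localId = rs.getLong(1);
        // count connector messages marked SENT ('S') that were received in the past hour
        rs = dbConn.executeCachedQuery(
            "SELECT COUNT(*) FROM d_mm" + localId +
            " WHERE status = 'S' AND received_date > NOW() - INTERVAL '1 hour'");
        rs.next();
        return rs.getLong(1);
    } finally {
        dbConn.close();
    }
}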
The reason not to invoke the Client class, according to an answer from the Mirth Slack general channel:
"When using the Client class you're pretty much looping back to the server, since all the code is executed on the server anyways."
So avoid using the com.mirth.connect.client.core.Client class in Mirth code.
Issue closed.
We want to create partitions in a Hive table, but the partition name contains spaces, so the partitions cannot be created. We are currently using Java. We tried to escape the space, but every attempt throws an exception.
URL: s3n://comp-data-bckp/data/datav/sample_test/sample_test_inner/2016-06-27/hive/warehouse/datav/sample_test_inner/platform=SONY PS3
HiveConf hiveConf = new HiveConf();
hiveConf.setVar(HiveConf.ConfVars.METASTOREURIS, URL);
hiveConf.setIntVar(HiveConf.ConfVars.METASTORE_CLIENT_SOCKET_TIMEOUT, 60);
CliSessionState css = new CliSessionState(hiveConf);
css.in = System.in;
css.out = null;
css.err = null;
css.setIsSilent(true);
SessionState.start(css);
CliDriver cli = new CliDriver();
int response = cli.processLine(statements);
SessionState s = SessionState.get();
if (s != null && s.out != null && s.out != System.out) {
    s.out.close();
}
return response;
java.net.URISyntaxException: Illegal character in path at index 126: s3n://comp-data-bckp/data/datav/sample_test/sample_test_inner/2016-06-27/hive/warehouse/datav/sample_test_inner/platform=SONY PS3
When we try to escape the space character with the options below, the system still throws the above error.
- \\ (e.g. s3n://…./platform=SONY\\ PS3)
- %20 (e.g. s3n://…./platform=SONY%20PS3)
- + (e.g. s3n://…./platform=SONY+PS3)
Please advise if there are any options to escape it, and provide guidance on how to proceed.
I have broken a wiki XML dump into many small parts of 1 MB each and tried to clean and index them (they had already been cleaned with another program written by somebody else). I get an out-of-memory error which I don't know how to solve. Can anyone enlighten me?
I get the following error message:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.<init>(FreqProxTermsWriterPerField.java:212)
at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.newInstance(FreqProxTermsWriterPerField.java:235)
at org.apache.lucene.index.ParallelPostingsArray.grow(ParallelPostingsArray.java:48)
at org.apache.lucene.index.TermsHashPerField$PostingsBytesStartArray.grow(TermsHashPerField.java:252)
at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:292)
at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:151)
at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:645)
at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:342)
at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:301)
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:241)
at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:454)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1541)
at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1256)
at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1237)
at qa.main.ja.Indexing$$anonfun$5$$anonfun$apply$4.apply(SearchDocument.scala:234)
at qa.main.ja.Indexing$$anonfun$5$$anonfun$apply$4.apply(SearchDocument.scala:224)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.Iterator$class.foreach(Iterator.scala:750)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1202)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at qa.main.ja.Indexing$$anonfun$5.apply(SearchDocument.scala:224)
at qa.main.ja.Indexing$$anonfun$5.apply(SearchDocument.scala:220)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
Where line 234 is as follows:
writer.addDocument(document)
It adds documents to Lucene.
Line 224 is as follows:
for (doc <- target_xml \\ "doc") yield {
It is the first line of a for loop for adding various elements as fields in the index.
Is it a code problem, a settings problem, or a hardware problem?
EDIT
Hi, this is my for loop:
(for (knowledgeFile <- knowledgeFiles) yield {
  System.err.println(s"processing file: ${knowledgeFile}")
  val target_xml = XML.loadString(" <file>" + cleanFile(knowledgeFile).mkString + "</file>")
  for (doc <- target_xml \\ "doc") yield {
    val id = (doc \ "@id").text
    val title = (doc \ "@title").text
    val text = doc.text
    val document = new Document()
    document.add(new StringField("id", id, Store.YES))
    document.add(new TextField("text", new StringReader(title + text)))
    writer.addDocument(document)
    val xml_doc = <page><title>{ title }</title><text>{ text }</text></page>
    id -> xml_doc
  }
}).flatten.toArray
The inner loop just loops through every doc element; the outer loop loops through every file. Is the nested for the source of the problem?
Below is the cleanFile function for reference:
def cleanFile(fileName: String): Array[String] = {
  val tagRe = """<\/?doc.*?>""".r
  val lines = Source.fromFile(fileName).getLines.toArray
  val outLines = new Array[String](lines.length)
  for ((line, lineNo) <- lines.zipWithIndex) yield {
    if (tagRe.findFirstIn(line) != None) {
      outLines(lineNo) = line
    } else {
      outLines(lineNo) = StringEscapeUtils.escapeXml11(line)
    }
  }
  outLines
}
Thanks again
It looks like you should try increasing the heap size with the -Xmx JVM argument.
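For example (a sketch only; the 4g figure and the launch commands are placeholders to adapt to your classpath, main class, and available RAM):
java -Xmx4g -cp <your-classpath> <your-main-class>
or, if the indexer is launched through sbt, pass the option through to the JVM:
sbt -J-Xmx4g run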
This is my FULL test code with the main method:
import java.io.*;
import java.sql.*;

public class TestSetAscii {
    public static void main(String[] args) throws SQLException, FileNotFoundException {
        String dataFile = "FastLoad1.csv";
        String insertTable = "INSERT INTO " + "myTableName" + " VALUES(?,?,?)";
        Connection conStd = DriverManager.getConnection("jdbc:xxxxx", "xxxxxx", "xxxxx");
        InputStream dataStream = new FileInputStream(new File(dataFile));
        PreparedStatement pstmtFld = conStd.prepareStatement(insertTable);
        // Until this line everything is awesome
        pstmtFld.setAsciiStream(1, dataStream, -1); // This line fails
        System.out.println("works");
    }
}
I get the "cbColDef value out of range" error
Exception in thread "main" java.sql.SQLException: [Teradata][ODBC Teradata Driver] Invalid precision: cbColDef value out of range
at sun.jdbc.odbc.JdbcOdbc.createSQLException(Unknown Source)
at sun.jdbc.odbc.JdbcOdbc.standardError(Unknown Source)
at sun.jdbc.odbc.JdbcOdbc.SQLBindInParameterAtExec(Unknown Source)
at sun.jdbc.odbc.JdbcOdbcPreparedStatement.setStream(Unknown Source)
at sun.jdbc.odbc.JdbcOdbcPreparedStatement.setAsciiStream(Unknown Source)
at file.TestSetAscii.main(TestSetAscii.java:21)
Here is the link to my FastLoad1.csv file. I guess that setAsciiStream fails because of the FastLoad1.csv file, but I am not sure.
(In my previous question I was not able to narrow down the problem that I had. Now I have shortened the code.)
It would depend on the table schema, but the third parameter of setAsciiStream is length.
So
pstmtFld.setAsciiStream(1, dataStream, 4);
would work for a field of length 4 bytes.
But I don't think it would work as you expect in your code. For each bind you should have a separate stream.
The setAsciiStream() function is designed to be used for large data values such as BLOBs or long VARCHARs. It is not designed to read a CSV file line by line and split it into separate values.
Basically, it just binds one of the question marks to the InputStream.
After looking into the provided example, it looks like Teradata can handle CSV, but you have to explicitly request it with:
String urlFld = "jdbc:teradata://whomooz/TMODE=ANSI,CHARSET=UTF8,TYPE=FASTLOADCSV";
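With that setting, the whole CSV can then be handed to the driver in one go. The following is only a sketch based on Teradata's FastLoadCSV sample: it assumes the Teradata JDBC driver (not the JDBC-ODBC bridge shown in the stack trace above), and the URL, credentials, and table name are placeholders.
String urlFld = "jdbc:teradata://whomooz/TMODE=ANSI,CHARSET=UTF8,TYPE=FASTLOADCSV";
Connection con = DriverManager.getConnection(urlFld, "user", "password");
con.setAutoCommit(false);                      // FastLoad batches are committed explicitly
PreparedStatement pstmtFld = con.prepareStatement("INSERT INTO myTableName VALUES(?,?,?)");
InputStream dataStream = new FileInputStream(new File("FastLoad1.csv"));
pstmtFld.setAsciiStream(1, dataStream, -1);    // with FASTLOADCSV the driver parses the whole CSV itself
pstmtFld.executeUpdate();
con.commit();
pstmtFld.close();
con.close();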
I don't have enough reputation to comment, but I feel that this info can be valuable to those navigating fast load via JDBC for the first time.
This code will get the full stack trace and is very helpful for diagnosing problems with fast load:
catch (SQLException ex) {
    for (; ex != null; ex = ex.getNextException())
        ex.printStackTrace();
}
In the case of the code above, it works if you specify TYPE=FASTLOADCSV in the connection string, but when run multiple times it will fail due to the creation of the error tables _ERR_1 and _ERR_2. Drop these tables and clear out the destination tables to run again.
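The cleanup before a re-run could look roughly like this. This is only a sketch: the <table>_ERR_1/_ERR_2 naming follows the default behaviour described above, myTableName and conStd come from the question's code, and a regular (non-FASTLOADCSV) connection should be used for this housekeeping. Verify the actual error-table names in your database first.
Statement cleanup = conStd.createStatement();
cleanup.executeUpdate("DROP TABLE myTableName_ERR_1"); // FastLoad error tables
cleanup.executeUpdate("DROP TABLE myTableName_ERR_2");
cleanup.executeUpdate("DELETE FROM myTableName");      // clear out the destination table
cleanup.close();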
I am trying to load my own UDF in Pig. I have made it into a jar using Eclipse's export function. I am getting this 1066 error when running my Pig script. I am not sure what is wrong with B, as I can dump A but I cannot dump B.
Script
REGISTER myudfs.jar;
DEFINE HOUR myudfs.HOUR;
A = load 'access_log_Jul95' using PigStorage(' ') as (ip:chararray, dash1:chararray, dash2:chararray, date:chararray, getRequset:chararray, status:int, port:int);
B = FOREACH A GENERATE HOUR(ip);
DUMP B;
Function
package myudfs;
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
import org.apache.pig.impl.util.WrappedIOException;
public class HOUR extends EvalFunc<String>
{
    @SuppressWarnings("deprecation")
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0)
            return null;
        try {
            String str = (String) input.get(0);
            return str.toUpperCase();
        } catch (Exception e) {
            throw WrappedIOException.wrap("Caught exception processing input row ", e);
        }
    }
}
Running command
pig -x mapreduce 2.pig
Data Format
199.72.81.55 - - [01/Jul/1995:00:00:01 -0400] "GET /history/apollo/ HTTP/1.0" 200 6245
(the columns used are: ip, date, getRequest, status, port)
Pig Stack Trace
ERROR 1066: Unable to open iterator for alias B
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias B
at org.apache.pig.PigServer.openIterator(PigServer.java:836)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:696)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:320)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:194)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:170)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
at org.apache.pig.Main.run(Main.java:604)
at org.apache.pig.Main.main(Main.java:157)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: java.io.IOException: Job terminated with anomalous status FAILED
at org.apache.pig.PigServer.openIterator(PigServer.java:828)
... 12 more
I am extremely unfamiliar with Pig, and any and all pointers would be greatly appreciated. I know this is a lot of information to look at, but I have had no luck mutating any data in a UDF, and I am just not sure where I went wrong.
Thanks