I am trying to save some R data frames as .xlsx files using the write.xlsx function from the xlsx package, like this:
write.xlsx(tab,file="test",sheetName="testsheet",col.names=TRUE,row.names=FALSE,append=FALSE)
where the object tab is a data frame, as shown here:
> class(tab)
[1] "data.frame"
When I run the code, I get the following error message:
> write.xlsx(tab,file="test.xlsx",sheetName="testsheet",col.names=TRUE,row.names=FALSE,append=FALSE)
Error in .jcall("RJavaTools", "Z", "hasField", .jcast(x, "java/lang/Object"), :
RcallMethod: cannot determine object class
and I have no idea what the problem could be.
PS: I'm running R 2.14.1 in the StatET 2.0 plugin in Eclipse 3.7 on a 64-bit machine.
When you work in Eclipse, you can start R using either rj (a Java-based terminal) or RTerm (the native R terminal).
If you are using the rj terminal and something doesn't work, try the same thing with RTerm.
I have never tried to figure out why, but a few things don't work properly in rj. This includes all use of RCOM as well as printing of the return value of system().
I use rj by default because I like the way it deals with help (amongst other benefits).
But if things don't work, I try it in RTerm. One day I'll have some spare time and I'll take it up with the author.
PS. I want to stress that I absolutely love StatET in Eclipse. These oddities of rj are very minor inconveniences in the grand scheme of things.
From my experience, these kinds of errors are produced when the standard rj package is installed instead of the one supplied by the StatET developer.
Check the installation guide here:
http://www.walware.de/goto/statet
If you happen to be using Debian or Ubuntu, you can also use the repository from OpenAnalytics to install StatET and the correct rj packages in one go.
http://deb.openanalytics.eu/howto.html
I had the same problem. Two approaches worked for me:
First, convert the vector to a data frame and write it with the xlsx package:
library(xlsx)
data <- data.frame(c(1,2,3))
write.xlsx(data, file = "C:/Users/Name/Downloads/data.xlsx")
Second, use another library:
# Using the openxlsx package (no Java dependency)
library(openxlsx)
dataD1 <- data.frame(c(1,2,3))
write.xlsx(dataD1, "C:/Users/Name/Downloads/dataD1.xlsx")
I hope you have solved your problem.
Hello Palantir community on StackOverflow - if you exist?
I'm having a problem with pXML and PXZ files on a QuickStart instance (see below for details). If I export from Graph (even if only a few relatively small Objects), then try and reimport that file, I get the error message,
Error: The file [file path & name] is not a valid .pxz file:
com.palantir.exception.PalantirUserMessageException: Unexpected error
while validating PalantirXML; please see the log for details.
The log will then give some version of:
Value "" with length = '0' is not facet-valid with respect to minLength '1' for type '#AnonType_namedataSource'.
Multiple Java error references will then follow (195, 131, 384, 318, etc).
So this seems to be an issue with Palantir writing an XML file badly, then not recognising it when you try to reimport.
The XML file itself seems OK - it's not very small, all the XML tags close off, etc. But clearly there's a value somewhere that's meant to be non-empty, and it's not being populated in the correct way. The errant tag isn't obvious, if that's the case.
Weirdly, I can usually export a single Object (or maybe two or three) - but not if the Object is too complex (eg has lots of Properties).
I'm using an installation of Palantir Quick Start 3.8 (3.8.2.8.603030, Java Version: 1.6.0_30 Sun Microsystems Inc. - Java HotSpot(TM) 64-Bit Server VM build 20.5-b03 64-bit).
I've tried various configurations of Java updates (6.3 32- & 64-bit, 7.25 32- & 64-bit, and no Java update; Pal 3.8 comes with 6.3).
The computer is an Intel, 2.7 GHz with 16 GB of RAM, running Windows 7 (SP1), 64-bit.
I tried disabling the AV (McAfee) and Windows firewall - no difference.
I'll leave it there for now - very grateful for any advice / suggestions.
R
That's an old version of Palantir! I worked on the code you're seeing errors with through many versions of Gotham. The problem is that the first step in the import process is validating the pXML against its .xsd file. While writing, the library makes sure the XML is syntactically valid, but doesn't verify it against the schema.
That error makes it sound like a DataSource is missing some value that's required by the schema. Exporting from a new Investigation may work, but this bug would need to be fixed by a Palantir developer.
You could also try a later version, where it may already be fixed.
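If you want to sanity-check an exported file yourself before re-importing it (a suggestion of mine, not something the original answer covers), schema validation is straightforward with the standard javax.xml.validation API. A minimal sketch follows; the file names are hypothetical, and it assumes you can locate the .xsd the importer validates against:

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class PxmlValidator {
    public static void main(String[] args) throws Exception {
        // Build a validator from the schema the importer checks against
        // ("palantir-xml.xsd" is a placeholder path).
        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("palantir-xml.xsd"));
        Validator validator = schema.newValidator();

        // A SAXException here names the offending element and value,
        // which should point straight at the empty dataSource name.
        validator.validate(new StreamSource(new File("export.pxml")));
        System.out.println("Valid against the schema.");
    }
}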
So, here's my problem. I've got my ANTLR4 code successfully compiled, without errors, and now I want to test it out. The ANTLR4 documentation tells me that to test my application, I should do this:
java org.antlr.v4.runtime.misc.TestRig
I've tried this and got following error:
Error: Could not find or load main class org.antlr.v4.runtime.misc.TestRig
I've checked whether my CLASSPATH was set, and everything was correctly set as it should be. I also tried moving the file directly into my test folder, opened CMD there, and tried again; I get the same error. Searching the Internet didn't help, as no one seems to have encountered this error with ANTLR4 before.
Specs:
Java 1.7.0_55
ANTLR 4.4
There seems to be something wrong with your classpath, contrary to your belief that everything is okay.
When I download the ANTLR 4 JAR and run TestRig:
wget http://www.antlr.org/download/antlr-4.4-complete.jar
...
java -cp antlr-4.4-complete.jar org.antlr.v4.runtime.misc.TestRig
I see the following on my console:
java org.antlr.v4.runtime.misc.TestRig GrammarName startRuleName
[-tokens] [-tree] [-gui] [-ps file.ps] [-encoding encodingname]
[-trace] [-diagnostics] [-SLL]
[input-filename(s)]
Use startRuleName='tokens' if GrammarName is a lexer grammar.
Omitting input-filename makes rig read from stdin.
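If you want to confirm from inside the JVM which classpath is actually in effect, a small diagnostic like the following can help (the class name ClasspathCheck is mine; compile and run it with the same -cp argument you pass when invoking TestRig):

public class ClasspathCheck {
    public static void main(String[] args) {
        // Print the classpath the JVM was actually started with.
        System.out.println(System.getProperty("java.class.path"));
        try {
            // Succeeds only if the ANTLR JAR is really on that classpath.
            Class.forName("org.antlr.v4.runtime.misc.TestRig");
            System.out.println("TestRig found.");
        } catch (ClassNotFoundException e) {
            System.out.println("TestRig is NOT on the classpath.");
        }
    }
}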
I am new to Mahout. I want to install it and try it out. So far I have Maven 3 and Java 1.6 installed and configured on my Mac. My question is:
Do I have to install Hadoop first, before installing Mahout?
Some tutorials include installing Hadoop and some don't, which confuses me. I know Mahout is built on top of Hadoop, but not all of Mahout depends on Hadoop.
Can someone provide some useful detailed resources about installation?
http://chimpler.wordpress.com/2013/02/20/playing-with-the-mahout-recommendation-engine-on-a-hadoop-cluster/
http://chimpler.wordpress.com/2013/03/13/using-the-mahout-naive-bayes-classifier-to-automatically-classify-twitter-messages/
These two links helped me get up and running on OS X. It's not strictly necessary to use Hadoop with Mahout; however, it would almost certainly be useful to gain experience with both as you go, if you are planning to use them in a scalable system ...
Giving another answer to this question now that it's two years later and I finally got an itemsimilarity command to run on a Mac, after a lot of cursing and some blood spilled... Hope this saves someone some time and misery. Except my coworkers! Your weakness disgusts me! Anyway...
First, for the "do I need $FINICKY_BIG_DATA_PLATFORM" question, see:
http://mahout.apache.org/users/basics/algorithms.html
Hadoop and/or Spark are not hard requirements; some algorithms run on a single machine. But the algorithm you're interested in may only run on Hadoop and/or Spark. The docs on recommendations also steer you pretty strongly toward running the Spark-based algorithms. They also encourage you to use the black-box command-line tools, which can take different arguments in the single-machine and Spark versions (itemsimilarity, for example). So you don't NEED it, but you'll probably still need it.
I tried brew installs of hadoop, apache-spark and mahout. If you use the absolute latest versions (mahout 0.11.0, apache-spark 1.4.1, hadoop 2.7.1), you may hit some of these problems:
" Got error Cannot find Spark class path. Is 'SPARK_HOME' set? " To fix this, not only do you need that environment variable set (mine is set to "/usr/local/Cellar/apache-spark/1.4.1/libexec"), you also need the apparently now-deprecated compute-classpath.sh script in ${SPARK_HOME}/bin/. I had a Spark 1.2.0 installation handy, so I lifted one from there.
Bonus gotcha: in that 1.2.0 install there are two compute-classpath.sh scripts, one of which is just a one-liner invoking the other. You will probably be happier if you copy over the "real" one, so use less to check.
" java.lang.UnsatisfiedLinkError: no snappyjava in java.library.path " To fix this, the Internet will tell you to get a copy of libsnappyjava.jnilib, put it in /usr/lib/java, and rename it libsnappyjava.dylib. I did "brew install snappy", which installed version 1.1.3 and included symlinks named libsnappy.dylib and libsnappy.jnilib. Note that these are just symlinks and that the names aren't quite right... So after copying and renaming the main lib file I at least got a new error, which brings us to...
" Exception in thread "main" java.lang.UnsatisfiedLinkError: org.xerial.snappy.SnappyNative.maxCompressedLength(I)I " The Internet was less forthcoming with suggestions. I did see one post saying that version 1.0.xxx didn't have whatever magic pony code but version 1.1.1.3 did. I went to http://central.maven.org/maven2/org/xerial/snappy/snappy-java/, downloaded snappy-java-1.1.1.3.jar, and dropped it as-is into /usr/lib/java, with no name changes. This made the snappy errors go away, and I could run a "mahout spark-itemsimilarity" command to completion. YMMV; this advice is provided as-is with no warranty.
Please note that snappy-error-induced despair may drive you to download the Spark .tgz and build it from scratch. The build process will take up ~2 hours of your life that you will never get back, and you will still get snappy errors at the end. Ultimately I could run the same command with the hand-built version as with the brew-installed version; the snappy JAR ended up being the main thing.
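If you want to verify that the snappy-java JAR you dropped in can actually load its native library, a tiny round-trip test is enough. This is a sketch of mine (the class name SnappyCheck is made up; compile and run it with the snappy JAR on the classpath):

import org.xerial.snappy.Snappy;

public class SnappyCheck {
    public static void main(String[] args) throws Exception {
        // The compress/uncompress round trip forces the native library to
        // load, so this fails with the same UnsatisfiedLinkError if the
        // JAR or its native binding is broken.
        byte[] compressed = Snappy.compress("hello snappy".getBytes("UTF-8"));
        byte[] restored = Snappy.uncompress(compressed);
        System.out.println(new String(restored, "UTF-8"));
    }
}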
You don't need Hadoop at all to try out Mahout. Below is sample code that reads a data model from a file and prints recommendations.
package com.ml.recommend;

import java.io.File;
import java.io.IOException;
import java.util.List;

import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.CachingRecommender;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class App {
    public static void main(String[] args) throws IOException, TasteException {
        // Load user-item preferences from a local file -- no Hadoop involved.
        DataModel model = new FileDataModel(new File("data.txt"));

        // Score user-user similarity with Pearson correlation.
        UserSimilarity userSimilarity = new PearsonCorrelationSimilarity(model);

        // Consider each user's 3 nearest neighbours.
        UserNeighborhood neighborhood =
                new NearestNUserNeighborhood(3, userSimilarity, model);

        // Classic user-based collaborative filtering, wrapped in a cache.
        Recommender recommender =
                new GenericUserBasedRecommender(model, neighborhood, userSimilarity);
        Recommender cachingRecommender = new CachingRecommender(recommender);

        // Top 10 recommendations for the given user ID.
        List<RecommendedItem> recommendations =
                cachingRecommender.recommend(1000000000000006075L, 10);
        System.out.println(recommendations);
    }
}
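For reference, FileDataModel expects a plain text file with one preference per line, in the form userID,itemID,preference (an optional timestamp may follow). The values below are made-up sample data for the data.txt referenced above, not from the original answer:

1000000000000006075,101,4.5
1000000000000006075,102,3.0
1000000000000006076,101,5.0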
Yesterday we migrated to Windows 7 in our firm and also updated the Java packages and R (to 2.14).
Then I tried to load the xlsx package, because I rely heavily on it, but I get the following error:
Error : .onAttach in attachNamespace()
Error: .jnew("org/apache/poi/xssf/usermodel/XSSFWorkbook")
I tried the following, but it did not work:
Sys.setenv(PATH=paste(Sys.getenv("PATH"),"C:\\Program Files (x86)\\Java\\jre6\\bin\\client",collapse=';'))
options(java.parameters = "-Xmx1000m")
Since I never work with Java, I have no clue what I can do. Can you help me?
Thank you!
sessionInfo()
R version 2.14.1 (2011-12-22)
Platform: i386-pc-mingw32/i386 (32-bit)
locale:
[1] LC_COLLATE=German_Austria.1252 LC_CTYPE=German_Austria.1252
[3] LC_MONETARY=German_Austria.1252 LC_NUMERIC=C
[5] LC_TIME=German_Austria.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] xlsxjars_0.4.0 rJava_0.9-3
loaded via a namespace (and not attached):
[1] tools_2.14.1 xlsx_0.4.2
The interesting thing is that the package XLConnect loads without problems. EDIT: OK, it loads without problems, but loading a workbook does not work:
Error: NoSuchMethodError (Java): org.apache.xmlbeans.XmlOptions.setSaveAggressiveNamespaces()Lorg/apache/xmlbeans/XmlOptions;
So maybe it really isn't a Java problem. But I don't want to rewrite all my code for XLConnect!
Does nobody have any ideas what I could try?
I encountered exactly the same error and found a workaround: the error occurs if you specify a library location on the network to install the package into.
## Example where the error occurs (note the doubled backslashes needed
## in R string literals, and lib.loc rather than lib when loading):
install.packages('xlsx', lib='\\\\network\\R\\library')
library('xlsx', lib.loc='\\\\network\\R\\library')
However, if you install to R's default package library instead, you should be able to load the package without the error. That is, simply typing install.packages('xlsx'), and having the package install automatically to its default location (the first entry of .libPaths()), allowed the package to work properly.
This is doing my head right in!
I am messing about with JRuby, trying to make some Java calls. Here is the source I'm messing with:
require 'java'

module JavaLang
  include_package "java.lang"
end

module JavaSql
  include_package 'java.sql'
end

begin
  JavaLang::Class.forName("com.mysql.jdbc.Driver").newInstance
  jdbcconnection = JavaSql::DriverManager.getConnection("jdbc:mysql://localhost:3306/accounts", 'root', '')
  puts 'Werked'
rescue Exception => ex
  connectmsg = "Could not connect to the database: " + ex.message
  puts connectmsg
end
I am using Netbeans 6.8 as the IDE.
When I run the script, it all works fine and I get Werked printed in the output.
When I try to run it through the debugger, I get:
Could not connect to the database: java.lang.ClassNotFoundException: com/mysql/jdbc/Driver
I'm sure it's just something basic to do with a debugger configuration setting, but I can't find anything anywhere to give me a clue.
Why would the debugger not pick up these java classes?
Edit
Just to follow up: this is a bug in NetBeans 6.8. Here is the bug report.
Relieved that I'm not going mad!
This seems like a general classpath issue; the fact that the reflective class lookup fails supports this. Are you sure that your debugger classpath is the same as (or similar to) your runtime classpath?
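One quick way to compare the two (a suggestion of mine, not from the original answer): after require 'java', add puts java.lang.System.getProperty('java.class.path') at the top of the script, then run it once normally and once under the debugger. The MySQL connector JAR should appear in both outputs; if it is missing from the debugger's, add it to the debug configuration's classpath.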