HBase connection with Java client

I have a Java program (Eclipse, Maven, Windows 7) that fills HBase, which runs on a VirtualBox VM, from a service.
The program works fine when I run it from Eclipse.
But when I create an executable JAR using the Maven assembly plugin and run it from cmd or Cygwin, I get this error:
> [2016-05-03 14:46:44,663][DEBUG] Reading reply sessionid:0x154769ed563000a, packet:: clientPath:null serverPath:null finished:false header:: 300,4 replyHeader:: 300,3632,-101 request:: '/hbase-unsecure/meta-region-server,F response::
> [2016-05-03 14:46:44,663][DEBUG] hconnection-0x6b63f5ff-0x154769ed563000a, quorum=sandbox:2181, baseZNode=/hbase-unsecure Unable to get data of znode /hbase-unsecure/meta-region-server because node does not exist (not an error)
> [2016-05-03 14:46:44,663][DEBUG] Looked up meta region location, connection=org.apache.hadoop.hbase.client.ZooKeeperRegistry@695e5335; servers = null
> [2016-05-03 14:46:44,689][DEBUG] Closing scanner id=-1
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the locations
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:312)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:295)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:160)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:155)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:811)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:303)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:313)
Could anyone help me with this?

Some more information like the following may be helpful. Run these commands on the VirtualBox VM where you have the "hbase" command:
hbase zkCli -server sandbox:2181
Once you connect, you can run, at the zkCli prompt:
ls /
Please paste the output of the above command. The base znode "/hbase-unsecure" does not seem to exist; in general the base znode is "/hbase".
Kindly cross-check the classpath of your running program from Java code, as suggested by @RamPrasadG. Note the property name is java.class.path ("java.classpath" does not exist):
System.out.println(System.getProperty("java.class.path"));
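If the base znode on the cluster really is /hbase-unsecure (as on Hortonworks sandboxes), the client must be told so explicitly; a common cause of "works in Eclipse, fails as a fat JAR" is that the assembled JAR no longer carries the hbase-site.xml that Eclipse had on its classpath. Below is a minimal sketch, assuming the HBase 1.x client API and the quorum/znode values taken from the DEBUG log above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseConnectCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Values from the DEBUG log above; adjust to your cluster.
        conf.set("hbase.zookeeper.quorum", "sandbox");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        // Must match the znode the cluster actually uses (the default is /hbase).
        conf.set("zookeeper.znode.parent", "/hbase-unsecure");
        try (Connection connection = ConnectionFactory.createConnection(conf)) {
            System.out.println("Connected: " + !connection.isClosed());
        }
    }
}

Alternatively, bundle the cluster's hbase-site.xml into the JAR (or put it on the classpath) so that HBaseConfiguration.create() picks these values up automatically.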

Related

Cannot open server.jar file

So I was trying to set up a 1.17.1 Minecraft server on my Mac. I couldn't open my 1.17.1_server.jar with Java 8, so I downloaded Java 16.0.2.
Unfortunately, every time I opened the 1.17.1_server.jar file, I got
"The Java JAR file "1.17._server.jar" could not be launched."
I first thought that it was because the file was launched by Java 8 instead of 16.
So I went into the terminal and ran: <path to java> -jar 1.17.1_server.jar
I then got this: Error: Unable to access jarfile 1.17.1_server.jar
Finally I tried to put the path of the jar file in the command...
So I ran: <path to java> -jar <path to server>
and got this:
[main/ERROR]: Failed to load properties from file: server.properties
[15:57:35] [main/WARN]: Failed to load eula.txt
[15:57:35] [main/INFO]: You need to agree to the EULA in order to run the server. Go to eula.txt for more info.
So why do I have to agree to the EULA if I've never launched it? Does it think that it has already been launched?
As stated in the error message
[15:57:35] [main/INFO]: You need to agree to the EULA in order to run the server. Go to eula.txt for more info.
you have to open eula.txt and change eula=false to eula=true.
Okay, found the solution: my eula.txt and server.properties files were in my user folder, I don't know why (I never moved them). The server writes those files to the current working directory, so launching the JAR from the user folder puts them there.

Running tests on travis using mysql

I've been trying for the last week or so to make integration tests work on Travis for a school project. I've debugged a fair bit of the project, but now I'm blocked and need external help.
To give a bit of context: so far, I've debugged the Java project so that the tests can be launched from Eclipse or from Maven on the command line. I've worked on the Travis file so that a database is created, the database scripts run, and the Java tests launch. However, the tests fail on Travis because of a "table missing" error in the database.
This is a link to our repo.
This is the travis.yml file's code:
language: java
jdk:
- oraclejdk8
services:
- mysql
before_script:
- mysql -e 'DROP DATABASE IF EXISTS koalatest'
- mysql -e 'CREATE DATABASE IF NOT EXISTS koalatest;'
- mysql -u root --default-character-set=utf8 koalatest < backend/koalacal-backend/koalacal.sql
script: cd backend && cd koalacal-backend && mvn test -X
after_success:
- bash <(curl -s https://codecov.io/bash)
The Java project being built and run by Maven is located under rootfolder -> backend -> koalacal-backend.
Here is a link to the error log Maven produces on Travis.
This line seems to be the source of the error:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'koalatest.Calendar' doesn't exist
I have two hypotheses:
1- The SQL script that creates all the tables is not being run properly by Travis.
To test this hypothesis, I changed the name of the script called by Travis. As expected, I got an error saying that Travis can't find the file. So at least I know that this line of code causes Travis to look up an SQL file:
- mysql -u root --default-character-set=utf8 koalatest < backend/koalacal-backend/koalacal.sql
That being said, I have no idea if the file is run properly against the database.
For the sake of putting all relevant information in this post, here is a link to the database script.
2- The tests can't connect properly to the database.
Here is the config file that contains the info regarding which database to connect to:
TestInstance=true
user=root
password=
serverName=localhost
databaseName=koalacal
portNumber=3306
testUser=root
testPassword=
testServerName=127.0.0.1
testDatabaseName=koalatest
testPortNumber=3306
If the parameter TestInstance is set to true, the tests use testUser, testPassword, testServerName, testDatabaseName, and testPortNumber to connect to the relevant database.
I believe the connection information currently contained in the config file matches how the Travis documentation says to connect to a MySQL database. I tried changing testUser to something invalid (like root3) and got error messages as expected.
Maybe somehow the tests can't connect to the database and don't produce a related error message, but I doubt it.
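To illustrate how these properties would translate into a JDBC connection, here is a minimal sketch; the file name config.properties and the wiring are assumptions for illustration, not the project's actual code:

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class ConfigConnect {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        // "config.properties" is a placeholder name; the project's file may differ.
        try (FileInputStream in = new FileInputStream("config.properties")) {
            p.load(in);
        }
        // Pick the test* properties when TestInstance=true, as described above.
        boolean test = Boolean.parseBoolean(p.getProperty("TestInstance"));
        String host = p.getProperty(test ? "testServerName" : "serverName");
        String port = p.getProperty(test ? "testPortNumber" : "portNumber");
        String db = p.getProperty(test ? "testDatabaseName" : "databaseName");
        String user = p.getProperty(test ? "testUser" : "user");
        String pass = p.getProperty(test ? "testPassword" : "password");
        String url = "jdbc:mysql://" + host + ":" + port + "/" + db;
        try (Connection c = DriverManager.getConnection(url, user, pass)) {
            System.out.println("Connected to " + url);
        }
    }
}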
Can someone look at my problem and see if I've missed something obvious (or not)? I don't know what else to try, and I don't want to be blocked for another week on a technical issue.
For anyone who googles "travis mysql" and has a similar error to the one I had: I solved my problem.
The error was caused by a case-sensitivity issue. The Java code tried to connect to tables like 'Calendar' and 'Event' while the SQL script created the tables 'calendar' and 'event'.
It took a long time to troubleshoot because the case sensitivity didn't pose any problem on my machine; Maven could run the tests locally without any issue. It's only on the Travis servers that the case sensitivity of table names started to matter. (MySQL table names are case-sensitive on Linux unless lower_case_table_names is set, but not on the case-insensitive filesystems that Windows and macOS use by default.)
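A quick way to see exactly which table names exist, and with what casing, is to list them through JDBC metadata from the test JVM itself. A minimal sketch, assuming the MySQL JDBC driver and the test connection settings from the config above:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ListTables {
    public static void main(String[] args) throws Exception {
        // Connection values taken from the test config shown above.
        try (Connection c = DriverManager.getConnection(
                "jdbc:mysql://127.0.0.1:3306/koalatest", "root", "")) {
            DatabaseMetaData md = c.getMetaData();
            // All tables in the current database, printed with their exact stored casing.
            try (ResultSet rs = md.getTables(null, null, "%", new String[] {"TABLE"})) {
                while (rs.next()) {
                    System.out.println(rs.getString("TABLE_NAME"));
                }
            }
        }
    }
}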

Error when push Liberty for Java application in IBM Bluemix

When I push the sample IBM Bluemix Liberty for Java application https://github.com/ibmjstart/bluemix-java-postgresql-uploader.git, I get the following error:
-----> Downloaded app package (1.9M)
-----> Downloaded app buildpack cache (4.0K)
OK
/var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:101:in `build_pack': Unable to detect a supported application type (RuntimeError)
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:74:in `block in compile_with_timeout'
from /usr/lib/ruby/1.9.1/timeout.rb:68:in `timeout'
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:73:in `compile_with_timeout'
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:54:in `block in stage_application'
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:50:in `chdir'
from /var/vcap/packages/dea_next/buildpacks/lib/buildpack.rb:50:in `stage_application'
from /var/vcap/packages/dea_next/buildpacks/bin/run:10:in `<main>'
FAILED
Server error, status code: 400, error code: 170001, message: Staging error: cannot get instances since staging failed
TIP: use 'cf logs jpu-henryhan --recent' for more information
The top error looks like you left off the -p <path_to_war> parameter when doing a push. If you just push a directory containing a WAR file, it will not be detected by the Java buildpack.
The tip provided in the output of your cf push request is relevant.
TIP: use 'cf logs jpu-henryhan --recent' for more information
Running that command will tail the log files produced during the staging process and let you see what error may have been raised. Often, it can be a missing dependency or a transient failure of some sort.
I just successfully deployed the sample using the "deploy to Bluemix" button and manually via the cf command line tool. Unless you changed the code, it is most likely that this error is a transient failure.
Run the following command:
$ cf push jpu- -b https://github.com/cloudfoundry/java-buildpack --no-manifest --no-start -p PostgreSQLUpload.war
That is, add the parameter "-b https://github.com/cloudfoundry/java-buildpack" to set the buildpack explicitly.

Websphere works when run but fails when debug

I'm using IntelliJ with WebSphere 8. When I run the server from within the IDE, it works normally. When I try to run it in debug mode, however, it fails with the following error:
C:\IBM\WebSphere\AppServer\profiles\AppSrv01\bin\generated_websphere_server_start_script.cmd
C:\IBM\WebSphere\AppServer\java\bin\java -Dfile.encoding=windows-1252 -classpath "C:\IBM\WebSphere\AppServer\runtimes\com.ibm.ws.admin.client_8.5.0.jar;C:\IBM\WebSphere\AppServer\plugins\com.ibm.ws.security.crypto.jar;C:\Program Files (x86)\JetBrains\IntelliJ IDEA 13.1.3\plugins\webSphereIntegration\lib\webSphereIntegration.jar;C:\Program Files (x86)\JetBrains\IntelliJ IDEA 13.1.3\plugins\JavaEE\lib\javaee-impl.jar;C:\Program Files (x86)\JetBrains\IntelliJ IDEA 13.1.3\lib\openapi.jar;C:\Program Files (x86)\JetBrains\IntelliJ IDEA 13.1.3\plugins\webSphereIntegration\lib\specifics\webSphereClientImpl.jar" com.intellij.javaee.oss.process.JavaeeProcess 62847 com.intellij.j2ee.webSphere.agent.WebSphereAgent
Error: JDWP agent already loaded - please check java command line options
[2014-08-11 01:58:59,248] Artifact x.ear: Server is not connected. Deploy is not available.
JVMJ9TI064E Agent initialization function Agent_OnLoad failed for library jdwp, return code -1
Detected server admin port: 8880
JVMJ9VM015W Initialization error for library j9jvmti26(-3): JVMJ9VM009E J9VMDllMain failed
Detected server http port: 9080
Disconnected from server
I've tried almost everything and I have no idea what the problem is. I googled it for several hours with no luck.
Does anyone know what this is all about and how it can be fixed?
Here is my server configuration:
Solution 1
Uncheck the Pass environment variables check box and restart the server in debug mode; it should work properly.
Run --> Edit Configurations --> WebSphere Server --> Startup/Connections tab
Select debug, and you will see the Pass environment variables check box. It needs to be unchecked for debug to work.
Solution 2
If debug mode of WebSphere works in Eclipse but not in IntelliJ, the reason I found is that the debugging service on WebSphere is already started and IntelliJ is trying to start the debugging service again. So stop the service from the WebSphere console (Servers > Server Types > WebSphere application servers > [serverName] > Debugging Service), and the default configuration in IntelliJ should work.
I had the same problem. Finally I figured it out; I hope this solution helps. I'm using IntelliJ IDEA 2019.1.3 and WebSphere 8.5.5.13.
Check the WebSphere start server script to find the debug environment variable name (in my script, WebSphere\AppServer\bin\startServer.bat, it is WAS_DEBUG).
Add the same debug variable name to IntelliJ IDEA in the run/debug configuration's Environment tab.
Since the default variable there is DEBUG and cannot be overridden, either:
check Pass environment variables and add the WAS_DEBUG option as I did,
or update WAS_DEBUG to DEBUG (same as the IntelliJ default) in the server startServer.bat script.
Both should work.
Pretty old, but I experienced this too!
They will try to fix this in: https://youtrack.jetbrains.com/issue/IDEA-193580
Firstly, sorry if the translation is not good; I am Brazilian and would like to share the solution I found.
First: locate the file startServer.bat in WebSphere\AppServer\bin and open it with any text editor.
Second: search for the second occurrence of the word WAS_DEBUG and replace it with DEBUG, as shown in the image...
Third: in IntelliJ, go to the server settings and, in the Startup/Connection tab, select debug.
Fourth: now uncheck the "Use default" checkbox for "Startup script" and point it to the location of the startServer.bat file in the WebSphere directory.
Fifth: on the left side of the checkbox there is an option to enter a parameter. Click it and a "Program Arguments" field will appear. Enter the server name, in my case "server1".
Okay, now just test.

Running MapReduce job written in Java through my PHP web page

My PHP server is hosted on the JobTracker machine, and I am trying to run a MapReduce job from my web page by executing the hadoop jar command on the command line,
but I get no response and the job does not start.
However, if I run a command that lists HDFS contents using the same approach, it works fine. Please guide me.
The following command does not return anything and the job does not run:
exec("HADOOP_DIR/bin/hadoop jar /usr/local/MapReduce.jar Mapreduce [input Path] [output Path]");
But if I do this:
exec("HADOOP_DIR/bin/hadoop dfs -ls /user/hadoop");
It is running fine.
I solved this problem by changing the PHP server user to hduser (a user which has permission to write files in HDFS). Without changing this user, only the commands which read from HDFS were working, not the ones which need to create files or write to HDFS.
When I tried to run the command for creating a directory in HDFS through my PHP script, I got the following error in my PHP server logs (/var/log/apache2/error.log):
mkdir: org.apache.hadoop.security.AccessControlException: Permission denied: user=www-data, access=WRITE, inode="hduser":hduser:supergroup:rwxr-xr-x
And on running the jar command to trigger the MapReduce program, I got the following error:
Exception in thread "main" java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createTempFile(File.java:1879)
at org.apache.hadoop.util.RunJar.main(RunJar.java:115)
Then what I did was change the user in /etc/apache2/apache2.conf to my Hadoop user, restart the server, and everything worked fine.
I should reference the post Execute hadoop jar from PHP Server fails. Permission denied, which helped me a lot in solving this problem. I hope this post helps others too.
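As an alternative to changing the Apache user, under Hadoop's simple (non-Kerberos) authentication a Java client can issue HDFS calls as another user via UserGroupInformation. A hedged sketch, assuming the Hadoop 2.x Java API and a hypothetical namenode address:

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class MkdirAsHduser {
    public static void main(String[] args) throws Exception {
        // Act as hduser instead of the calling process's user (www-data above).
        // This works only with simple authentication, not on secured clusters.
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hduser");
        ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://localhost:9000"); // hypothetical address
            try (FileSystem fs = FileSystem.get(conf)) {
                fs.mkdirs(new Path("/user/hadoop/output"));
            }
            return null;
        });
    }
}

Note this only addresses HDFS permissions; the second error above comes from java.io.File.createTempFile, a local filesystem permission problem for the user running hadoop jar, so changing the Apache user as described remains the simpler fix.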
