Connect to a remote Hadoop server using Eclipse - Java

I am trying to run some map-reduce programs on a remote Hadoop 2.0.0 server, which runs on CentOS 6.4, over SSH.
I am using Eclipse Luna on my Windows 8 machine.
Is there a way to run the programs directly from Eclipse without packaging them into JAR files?

If Hadoop is running on a Linux machine, you cannot connect directly from Windows; the connection has to go over SSH.
I hope you are looking for something like this:
https://help.eclipse.org/kepler/index.jsp?topic=%2Forg.eclipse.rse.doc.user%2Fgettingstarted%2Fgusing.html
A similar question with the correct answer is here:
Work on a remote project with Eclipse via SSH

Related

How to use JDBC code written on Windows and deployed to Ubuntu?

Is it possible to develop a Java application using JDBC on a Windows laptop, and then deploy the JAR to an Ubuntu VM?
I have a small application that connects to a database on the VM (by IP address). It works fine on Windows when run inside IntelliJ, and also if I move the project to IntelliJ on Ubuntu and run it within the IDE. But when I deploy a JAR file to the Ubuntu platform and run it, I get "No suitable driver found for jdbc...".
I have tried:
Installing the JDBC driver on Ubuntu using "sudo apt-get install libmysql-java".
Adding the current "Platform Independent" JDBC driver as a library.
Including this library in the JAR artifact when building.
Adding the Ubuntu JDBC jar to the CLASSPATH (export CLASSPATH=$CLASSPATH:/usr/share/java/mysql.jar).
I can't be the only person who wants to develop on Windows and deploy to Ubuntu?
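
One likely cause worth noting: when a jar is launched with java -jar, the JVM ignores both the CLASSPATH environment variable and any -cp flag, so the driver must be bundled into the jar or listed in the manifest's Class-Path. A minimal smoke test to check whether the driver is visible at runtime, assuming MySQL Connector/J (the host, database, and credentials below are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DriverSmokeTest {
    public static void main(String[] args) throws Exception {
        // Force-load the driver class; with JDBC 4 drivers this is only
        // needed when the service-loader mechanism cannot see the jar.
        Class.forName("com.mysql.jdbc.Driver"); // Connector/J 5.x class name

        // Placeholder connection details - replace with your own.
        String url = "jdbc:mysql://192.168.1.10:3306/testdb";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            rs.next();
            System.out.println("Driver OK: " + rs.getInt(1));
        }
    }
}

If this runs with java -cp app.jar:/usr/share/java/mysql.jar DriverSmokeTest but fails with java -jar app.jar, the missing piece is a Class-Path entry in the manifest (or a fat jar that bundles the driver).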

Issue while configuring Hadoop 2.6.0 in Eclipse on Windows

I've been trying to configure Hadoop 2.6.0 in Eclipse on Windows using this tutorial - http://www.srccodes.com/p/article/47/build-install-configure-eclipse-plugin-apache-hadoop
I'm able to build the Hadoop 2.6.0 plugin jar for Eclipse. My Hadoop cluster daemons are up and running on Windows. But when I try to connect Eclipse to HDFS as per the tutorial, nothing shows up.
I've also tried Map/Reduce(V2) Master port 50070 (the NameNode web UI port) and DFS Master port 8020 (the fs default port), but no luck.
Any advice would be of great help.
Fixed this issue by running Eclipse as an administrator. Note that the Hadoop location shows '0' if there is no data in HDFS, so at first glance it looks as though Eclipse cannot connect to HDFS even when the connection is fine.
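
To rule out plugin issues, it can help to verify the DFS Master settings directly from a small Java program; a minimal sketch, assuming the cluster from the question listens on localhost:8020 (adjust fs.defaultFS to match your core-site.xml):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same host and port as the DFS Master field in the Eclipse plugin.
        conf.set("fs.defaultFS", "hdfs://localhost:8020");
        try (FileSystem fs = FileSystem.get(conf)) {
            // An empty HDFS prints nothing here, which mirrors the '0'
            // the plugin shows when there is no data yet.
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
        }
    }
}

If this lists the root directory but the plugin still shows nothing, the problem is on the Eclipse side (permissions, as in the fix above) rather than on the cluster.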

NetBeans 8.0 and network projects are too slow

I have a VMware virtual machine on my PC that I use as a development environment; the VM runs CentOS 6.5. I have a Samba shared folder on that VM and I connect to it from Windows 7 over a "host-only" connection without any problem.
When I create a project that uses the shared folder for its sources, opening NetBeans and waiting for the project to load sometimes takes 15 minutes or more.
I disabled "background scanning" following the instructions in this post, but it is still too slow. Any advice? What is the best approach? Using remote project sources over an SFTP connection?
The real fix would be to fix your Samba setup. There is very likely some locking due to a network timeout, most likely a WINS/DNS configuration problem. Only you can debug it (tcpdump/strace the Samba process).
However, the best approach is not fixing Samba but keeping a separate copy for deployment. Your question suggests that your development environment lives on your Windows host machine and you deploy your code to CentOS.
Use a native filesystem on your host, typically under version control (most likely Git), and make deployment part of your build scripts. For example, you could install Cygwin and upload your compiled code (or even the sources) to the VM with rsync. That rsync call can be made from the build script itself (for example via the exec-maven-plugin in a pom.xml, if you use Maven), as sketched below.
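
A sketch of that binding, assuming Maven and an rsync available on the PATH (host, user, and target path are placeholders):

<!-- In pom.xml under <build><plugins>: bind an rsync upload of the
     built jar to the package phase. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>1.6.0</version>
  <executions>
    <execution>
      <id>deploy-to-vm</id>
      <phase>package</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <executable>rsync</executable>
        <arguments>
          <argument>-az</argument>
          <argument>${project.build.directory}/${project.build.finalName}.jar</argument>
          <argument>devuser@192.168.56.101:/opt/app/</argument>
        </arguments>
      </configuration>
    </execution>
  </executions>
</plugin>

With this in place, mvn package builds the jar and pushes it to the VM in one step, so NetBeans never has to read project sources over Samba.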

Configuring Eclipse (on Windows) for a Hadoop environment (on a VM running SUSE Linux Enterprise Server 11)

I have installed Hadoop in pseudo-distributed mode on a VMware VM running SUSE Linux Enterprise Server 11. I am able to run the hello-world examples like word count. I also used WinSCP to connect to that VM and uploaded several XML files into the Hadoop cluster.
My question is: how can I configure Eclipse on my local Windows 7 machine to connect to that VM and write Java code to work with the data I dumped into the cluster? I did some work and got the Map/Reduce perspective showing in Eclipse, but I cannot figure out how to connect to Hadoop on the VM from my local machine, write my Java code (mapper and reducer classes) against the data, and save the results back into the cluster.
If someone can help me with this, that would be great. Thanks in advance.
Let me know if more information is needed.
I am using hadoop-0.20.2-cdh3u5 and Eclipse Europa 3.3.1.
I am struggling with this as well at the moment. Maybe you will find these links helpful:
http://www.bigdatafarm.com/?p=4
http://developer.yahoo.com/hadoop/tutorial/module3.html
Cheers
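
For what it's worth, independent of the Eclipse plugin, a job driver can point at the remote VM directly through the old (0.20.x) configuration keys. A minimal sketch with a placeholder VM address and typical CDH3 ports (check core-site.xml and mapred-site.xml on the VM for the real values); it uses the identity mapper/reducer just to prove connectivity:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RemoteJobDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder VM address; the ports must match fs.default.name in
        // core-site.xml and mapred.job.tracker in mapred-site.xml.
        conf.set("fs.default.name", "hdfs://192.168.1.50:8020");
        conf.set("mapred.job.tracker", "192.168.1.50:8021");

        Job job = new Job(conf, "remote test");
        job.setJarByClass(RemoteJobDriver.class);
        // Identity mapper/reducer; swap in your own classes once the
        // remote submission itself works.
        job.setMapperClass(Mapper.class);
        job.setReducerClass(Reducer.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        // Input and output paths live in HDFS on the VM.
        FileInputFormat.addInputPath(job, new Path("/user/me/input"));
        FileOutputFormat.setOutputPath(job, new Path("/user/me/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

One caveat: Hadoop ships the job to the cluster as a jar (job.setJarByClass relies on it), so the project still has to be exported to a jar before the mapper and reducer code can run remotely.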

Remote Java Debugging with Eclipse's Remote System Explorer

Does anyone have experience debugging Java code (where the code lives on a remote Linux server) using Eclipse's Remote System Explorer? I'm able to explore files and use the built-in shell, but I can't get it to stop on any breakpoints within the Eclipse IDE.
I'm taking a punt that your remote Java server needs to be started using something like:
java -Xdebug -Xrunjdwp:transport=dt_socket,address=8001,server=y,suspend=y -jar stockTradingGUI.jar
Then you will be able to connect using the Eclipse remote debugger.
For a full run down see: http://javarevisited.blogspot.com/2011/02/how-to-setup-remote-debugging-in.html
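
On Java 5 and later the same options are usually written with the -agentlib form (8001 matches the example above; the port is arbitrary):

java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8001 -jar stockTradingGUI.jar

Either way, attach from Eclipse via Run > Debug Configurations > Remote Java Application, pointing the host and port at the server. Note that Remote System Explorer only browses files and runs shells; it does not attach a debugger by itself, which is why breakpoints are ignored without a remote debug configuration.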
