I am trying to add a node to my Hudson master.
The node runs Windows Server 2008 Enterprise edition and it has Java, Ant and .NET installed on it.
The connection log of that machine shows this output, and the node is never able to connect.
Connecting to machine01
Checking if Java exists
java full version "1.6.0_25-b06"
Copying slave.jar
Starting the service
Connecting to machine01
Checking if Java exists
java full version "1.6.0_25-b06"
Copying slave.jar
Starting the service
Connecting to machine01
This sequence keeps repeating, and the node never connects.
Upon further investigation, I see that the "Hudson Slave at <FS Root>" service is registered, but hudson-slave.exe is not present in the FS root. That means this .exe file is never copied onto the slave at all. I have checked the entire hudson.war, but no .exe file exists in it - maybe it is generated at runtime? Only slave.jar is being copied.
I wonder why no error is reported while the master keeps retrying. Can anyone suggest a solution for this?
Try this:
Convert your slave into a JNLP (Java Web Start) slave, start the slave agent from your slave machine, and then use it to install the service (File > Install as Service)
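As a sketch (the master host, port, and node name below are placeholders, and the exact URL path may vary between Hudson versions), launching the JNLP agent from the slave's console looks roughly like this:

java -jar slave.jar -jnlpUrl http://your-hudson-master:8080/computer/machine01/slave-agent.jnlp

Once the agent window is open, the File > Install as Service menu item becomes available.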
Also, check to make sure the folder you have assigned as FS Root is writeable by the user you have specified.
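On Windows, one way to inspect and fix this is with icacls; this is only a sketch, assuming C:\hudson is the FS Root and the slave service runs as Local System:

icacls C:\hudson
icacls C:\hudson /grant "NT AUTHORITY\SYSTEM:(OI)(CI)F"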
I'd like my webapp, which is deployed as a war (ROOT.war), to have write access to /var/www/html/static/images so that it can write uploaded and converted images to that folder for nginx to serve statically. Currently it doesn't work and triggers a java.nio.file.FileSystemException together with the message Filesystem is read-only.
But the filesystem is not read-only and is in great condition. The folder has already been chmodded 777.
Extra info:
The Tomcat setup is running on an Ubuntu 18.04 Azure VM with a managed disk. The folder resides on an ext4-formatted drive.
Let's start with this: chmod 777 is great for testing, but absolutely unfit for the real world, and you shouldn't get used to this setting. Rather, set the owner/group correctly before you give world write permissions.
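For example, a minimal sketch, assuming Tomcat runs as a (hypothetical) tomcat user and nginx only needs to read the files:

sudo chown -R tomcat:tomcat /var/www/html/static/images
sudo chmod -R u+rwX,go+rX,go-w /var/www/html/static/images

This keeps the directory writable for the webapp without opening it up to every local user.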
Edit: A similar question just came up on the Tomcat mailing list, and Emmanuel Bourg pointed out that Debian's Tomcat package is sandboxed by systemd. Read your /usr/share/doc/tomcat9/README.Debian, which contains this paragraph:
Tomcat is sandboxed by systemd and only has write access to the
following directories:
/var/lib/tomcat9/conf/Catalina (actually /etc/tomcat9/Catalina)
/var/lib/tomcat9/logs (actually /var/log/tomcat9)
/var/lib/tomcat9/webapps
/var/lib/tomcat9/work (actually /var/cache/tomcat9)
If write access to other directories is required the service settings
have to be overridden. This is done by creating an override.conf file
in /etc/systemd/system/tomcat9.service.d/ containing:
[Service]
ReadWritePaths=/path/to/the/directory/
The service has to be restarted afterward with:
systemctl daemon-reload
systemctl restart tomcat9
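Applied to the directory from the question, creating the override could look like this (a sketch; adjust the path if yours differs):

sudo mkdir -p /etc/systemd/system/tomcat9.service.d
sudo tee /etc/systemd/system/tomcat9.service.d/override.conf <<'EOF'
[Service]
ReadWritePaths=/var/www/html/static/images/
EOF
sudo systemctl daemon-reload
sudo systemctl restart tomcat9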
Edit 2022: Note that these are the 2019 paths - validate the file locations for later versions. From the comments on this answer (thank you to V H and Ng Sek Long), here are some updates:
In current Ubuntu file is here: sudo vi /etc/systemd/system/multi-user.target.wants/tomcat9.service – V H Feb 26, 2022 at 19:55
Mine (Ubuntu 20) is installed here /lib/systemd/system/tomcat9.service smh everybody use a different path. – Ng Sek Long Mar 28, 2022 at 8:36
End of edit, continuing with the passage that didn't solve OP's problem, but should stay in:
If - all things tested - Tomcat should have write access to that directory but doesn't have it, the error message points me to an assumption: could it be that
Tomcat is running as root?
The directory is mounted through NFS?
The default configuration for NFS is that root has no permissions whatsoever on that external filesystem (or was it just no write permission? This is ancient history for me - look up "NFS root squash" to get the full story)
If this condition matches what you are running, you should stop running Tomcat as root and instead run it as an unprivileged user. Then you can set the permissions on the directory in question to be writable by your tomcat user and readable by nginx, and you're done.
Running Tomcat as root is a recipe for disaster: You don't want a process that's available from the internet to run as root.
If these conditions don't match your configuration: elaborate on the configuration. I'd still stand by this description for others who might find this question/answer later.
This is a possible duplicate of Intellij docker with live update of EAR inside websphere, but that question is 3 years old and its answer ("not supported") may be outdated.
Via IntelliJ IDEA (2020.3 Ultimate) on Windows 10, I want to deploy an application (EAR) to a WebSphere Liberty server which is running in a Docker container. This worked with traditional WebSphere; now we are switching to WLP. Is this possible at all?
The server is up and running. I'm perfectly able to copy the ear to the (mounted) "dropins" folder and the server picks it up, expands it and deploys it without problems.
I'm also able to attach the debugger in IntelliJ to the server and it will stop at breakpoints, it can even successfully update code ("hot swap classes").
What I did already:
I configured the ports (7777, 8880, 9043, 9443, 9080) to be forwarded 1:1 from the container to my local machine. I can successfully access at least port 9080 through a local browser.
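For reference, a hypothetical docker run invocation matching that description (the image name and flags are assumptions, not taken from my actual setup):

docker run -d -p 7777:7777 -p 8880:8880 -p 9043:9043 -p 9443:9443 -p 9080:9080 websphere-liberty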
I downloaded WebSphere Liberty (Full Java EE 8 Profile, the same version as in the Docker container) from the official IBM website to my local hard drive and expanded the zip.
Then I tried to follow the guide https://www.jetbrains.com/help/idea/run-debug-configuration-websphere-server.html and added a Run/Debug Configuration of type "Websphere Remote", choosing the expanded folder as the "Application Server" (I know that IntelliJ needed a local installation of WebSphere for the traditional version, so this may hold true for WLP as well).
I added the EAR-artifact for deployment in the Run Configuration. I copied the server name from the one inside docker and set the connection settings (localhost:9080).
The first try ("Test Connection") resulted in an error from IntelliJ: Error running 'WebSphere Application': JMX file not found: C:\[...]\wlp-javaee8-20.0.0.12\usr\servers\defaultServer\workarea\com.ibm.ws.jmx.local.address
Then I copied this file from my Docker container to the local path; it had the content service:jmx:rmi://127.0.0.1/stub/[... some seemingly base64-encoded string]. This resulted in a different error: Error connecting to the Application Server: java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is: java.net.ConnectException: Connection refused: connect
I also tried to use the content of the file com.ibm.ws.jmx.rest.address, which was service:jmx:rest://localhost:9443/IBMJMXConnectorREST (where I replaced the internal host name of the Docker container with "localhost"), but that resulted in Error running 'WebSphere Application': java.net.MalformedURLException: Unsupported protocol: rest
If I start the local server on my machine (not in Docker), the connection works and so does deployment. But that is not my aim.
PS: My server.xml contains <applicationMonitor updateTrigger="mbean"/>.
I just updated my macOS to Catalina and my Hybris server stopped starting. On the command ./hybrisserver.sh start I get this:
MacBook-Pro-Sasha:platform sashayukhimchuk$ ./hybrisserver.sh start
Starting hybrisPlatform on Tomcat...
/Users/sashayukhimchuk/hybris/CXCOMM181100P_1-70004085/hybris/bin/platform/tomcat/bin/wrapper.sh:
line 1388: 4614 Killed: 9
"/Users/sashayukhimchuk/hybris/CXCOMM181100P_1-70004085/hybris/bin/platform/tomcat/bin/./wrapper-macosx-universal-64"
"/Users/sashayukhimchuk/hybris/CXCOMM181100P_1-70004085/hybris/bin/platform/tomcat/conf/wrapper.conf"
wrapper.syslog.ident="hybrisPlatform"
wrapper.pidfile="/Users/sashayukhimchuk/hybris/CXCOMM181100P_1-70004085/hybris/bin/platform/tomcat/bin/hybrisPlatform.pid"
wrapper.daemonize=TRUE wrapper.name="hybrisPlatform"
wrapper.displayname="hybrisPlatform on Tomcat"
wrapper.statusfile="/Users/sashayukhimchuk/hybris/CXCOMM181100P_1-70004085/hybris/bin/platform/tomcat/bin/hybrisPlatform.status"
wrapper.java.statusfile="/Users/sashayukhimchuk/hybris/CXCOMM181100P_1-70004085/hybris/bin/platform/tomcat/bin/hybrisPlatform.java.status" wrapper.script.version=3.5.29 --
Waiting for hybrisPlatform on Tomcat..................
WARNING: hybrisPlatform on Tomcat may have failed to start.
I had to allow wrapper-macosx-universal-64 and some other libs in System Settings > Security > General while starting the Hybris server.
Remove the app (and all the necessary helper binaries) from macOS quarantine:
xattr -d com.apple.quarantine wrapper-macosx-universal-64
You can find the blocked wrapper under /bin/platform/tomcat/bin
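If several files are affected, you can also clear the attribute recursively; a sketch, assuming you trust everything under that directory (adjust the path to your installation):

xattr -dr com.apple.quarantine /path/to/hybris/bin/platform/tomcat/bin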
If you get an error saying that the wrapper-macosx-universal-64 or libwrapper-macosx-universal-64.jnilib cannot be opened because the developer cannot be verified, proceed as follows:
Go to System Preferences > Security & Privacy.
Click the General tab.
In the list, select the executable that you tried to run, and unblock it.
Run the command again.
To fix it, you need a fresh tomcat folder in bin/platform. I created a new Hybris project from the .zip, copied its tomcat folder into my Hybris project, and after confirming a few security prompts in the macOS settings, it worked.
This can happen when wrapper-macosx-universal-64 is deleted by macOS during a security check.
You can copy the file from its original path in the Hybris zip archive -
hybris-dir/bin/platform/tomcat/wrapper-macosx-universal-64
- to your installation, then follow the macOS developer-security steps described above to allow Tomcat to start.
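A sketch of that restore, assuming the distribution zip is named hybris-commerce.zip (your archive name will differ) and using the archive path from the answer above:

unzip -j hybris-commerce.zip 'hybris/bin/platform/tomcat/wrapper-macosx-universal-64' -d hybris/bin/platform/tomcat/
xattr -d com.apple.quarantine hybris/bin/platform/tomcat/wrapper-macosx-universal-64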
I have a remote host set up with a standalone Spark instance (one master and one slave on the same machine for now). I also have local Java code with the spark-core dependency, and a packaged jar with the actual Spark application. I'm trying to start it using the SparkLauncher class as described in its Javadoc.
Here is the dependency:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>${spark.version}</version>
</dependency>
And here is the code of the launcher:
new SparkLauncher()
    .setVerbose(true)
    .setDeployMode("cluster")
    .setSparkHome("/opt/spark/current")
    .setAppResource(Resources.getResource("validation.jar").getPath())
    .setMainClass("com.blah.SparkTestApplication")
    .setMaster("spark://" + sparkMasterHostWithPort)
    .startApplication();
The error I'm getting is either path not found /opt/spark/current/ or, if I remove the setSparkHome call, Spark home not found; set it explicitly or use the SPARK_HOME environment variable.
Here are my naive questions: Is there any workaround that would allow me not to have the Spark binaries installed on the local host where I only want to run the Launcher? Why is the Spark Java code referenced in the dependencies not capable of connecting to a configured remote Spark master and submitting the application jar? Even if I put the Spark binaries, the application code, and if needed even the Spark Java jar into an HDFS location and used a different deployment approach such as YARN, would it be enough to use the Launcher just to trigger the submission and start the application remotely?
The reason is that I want to avoid installing the Spark binaries on multiple client nodes only to submit and start dynamically created/modified Spark applications from there; that sounds like a waste to me. Not to mention the necessity of packaging the application into a jar for each submission.
Short answer: you must have the Spark binaries on the client machine, with the SPARK_HOME environment variable pointing to them.
Long answer: however, if you want to launch the job on a remote cluster, you can make use of the following configuration in your Spark job:
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.master("yarn")
  .config("spark.submit.deployMode", "cluster")
  .config("spark.driver.host", "remote.spark.driver.host.on.the.cluster")
  .config("spark.driver.port", "35000")
  .config("spark.blockManager.port", "36000")
  .getOrCreate()
spark.driver.port and spark.blockManager.port are not mandatory, but they are needed if you are working in a closed environment (say, a Kubernetes network) and have some port-gateway service defined for the Spark client pod.
Having the remote host defined in the master setting of the SparkLauncher will not work. You need to get the Hadoop configuration from the cluster; usually it is located in /etc/hadoop/conf on the cluster nodes. Place the Hadoop config directory on the client machine and point the HADOOP_CONF_DIR environment variable at it. This should be enough to get started.
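A sketch of that client-side setup (the host name and local paths are placeholders):

scp -r user@cluster-node:/etc/hadoop/conf ./hadoop-conf
export HADOOP_CONF_DIR=$PWD/hadoop-conf
export SPARK_HOME=/opt/spark/current   # the local Spark binaries are still required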
I tried to install Confluence on my own Ubuntu server, but it always fails. The error is:
com.atlassian.util.concurrent.LazyReference$InitializationException: java.lang.IllegalStateException: Spring Application context has not been set
at com.atlassian.util.concurrent.LazyReference.getInterruptibly(LazyReference.java:149)
Caused by: java.lang.IllegalStateException: Spring Application context has not been set
at com.atlassian.spring.container.SpringContainerContext.getComponent(SpringContainerContext.java:48)
I saw some solutions on the Jira/Confluence forum suggesting to fix the permissions of the installation directory and the home directory. I tried that, but it failed again. How can I fix the problem?
In my case the issue was a corrupted confluence.cfg.xml file (it contains DB connection strings and other settings). The file size was 0 bytes.
I would suggest using a VM to create a fresh installation and borrowing confluence.cfg.xml from that installation.
It's embarrassing that this behavior has been allowed to exist for nearly 7 years in a commercial product. This is basic stuff...
I wish this were in the instructions somewhere:
Make a single backup copy of confluence.cfg.xml immediately before any write to it by the application. The application should be able to restore from the backup copy if the file gets corrupted.
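As a sketch, assuming the default Linux home directory (adjust the path to your setup):

cp /var/atlassian/application-data/confluence/confluence.cfg.xml \
   /var/atlassian/application-data/confluence/confluence.cfg.xml.bak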
Atlassian documentation lists the following causes of this problem:
The user running Confluence does not have write permissions to the home folder defined in <install>/confluence/WEB-INF/classes/confluence-init.properties or to the install directory (see the permissions sketch after this list).
You are running Confluence as the root user, or you have an application firewall enabled (SELinux or AppArmor).
The database driver is not located in the <install>/confluence/WEB-INF/lib folder or you are using a database version that is incompatible with the bundled driver.
The hostname of the server cannot be resolved.
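For the first cause, a sketch assuming the default installer paths and a (typical) confluence service user:

sudo chown -R confluence:confluence /opt/atlassian/confluence
sudo chown -R confluence:confluence /var/atlassian/application-data/confluence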
In my case, I was running it as the root user inside a Docker container.