I need to disable IPv6. For that, the Java documentation indicates setting the JVM property java.net.preferIPv4Stack=true.
But I don't understand how to do it from the code itself.
Many forums demonstrated doing it from the command prompt, but I need to do it at runtime.
You can use System.setProperty("java.net.preferIPv4Stack", "true");
This is equivalent to passing it on the command line via -Djava.net.preferIPv4Stack=true
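For example (note that, as other answers here point out, this has to run before the first networking call, or the value may already have been cached):

// Equivalent to starting the JVM with -Djava.net.preferIPv4Stack=true,
// provided it runs before any networking classes are initialized.
System.setProperty("java.net.preferIPv4Stack", "true");
System.out.println(System.getProperty("java.net.preferIPv4Stack")); // prints "true"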
Another approach, if you're desperate and don't have access to (a) the code or (b) the command line, is to use environment variables:
http://docs.oracle.com/javase/7/docs/webnotes/tsg/TSG-Desktop/html/plugin.html.
Specifically, for Java Web Start set the environment variable:
JAVAWS_VM_ARGS
and for applets:
_JPI_VM_OPTIONS
e.g.
_JPI_VM_OPTIONS=-Djava.net.preferIPv4Stack=true
Additionally, under Windows, global options (for general Java applications) can be set in the Java Control Panel under the "Java" tab.
I ran into this very problem trying to send mail with javax.mail from a web application on a web server running Java 7. Internal mail server destinations failed with "network unreachable", even though telnet and ping worked from the same host and external mail servers worked fine. I tried
System.setProperty("java.net.preferIPv4Stack" , "true");
in the code, but that failed. So the parameter value was probably cached earlier by the system. Setting the VM argument
-Djava.net.preferIPv4Stack=true
in the web server startup script worked.
One further bit of evidence: in a very small targeted test program, setting the system property in the code did work. So the parameter is probably cached when the first Socket is used, not just as the JVM starts.
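For reference, a sketch of what such a small targeted test program might look like (the host name is only an example):

import java.net.InetAddress;

public class PreferIPv4Test {
    public static void main(String[] args) throws Exception {
        // Set the property before the first Socket/InetAddress is ever used;
        // if networking code had already run, the value would likely be ignored.
        System.setProperty("java.net.preferIPv4Stack", "true");
        for (InetAddress addr : InetAddress.getAllByName("www.example.com")) {
            // With the IPv4 stack preferred, only IPv4 addresses should come back.
            System.out.println(addr);
        }
    }
}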
Well, I used System.setProperty("java.net.preferIPv4Stack", "true"); and it works from plain Java, but it doesn't work on JBoss AS7.
Here is my workaround:
Add the line below to the end of the file ${JBOSS_HOME}/bin/standalone.conf.bat (just after :JAVA_OPTS_SET):
set "JAVA_OPTS=%JAVA_OPTS% -Djava.net.preferIPv4Stack=true"
Note: restart the JBoss server afterwards.
You can set the environment variable JAVA_TOOL_OPTIONS as follows; it will be picked up by the JVM for any application.
set JAVA_TOOL_OPTIONS=-Djava.net.preferIPv4Stack=true
You can set this from the command prompt or in the system environment variables, depending on your need. Note that this will affect every Java application that runs on your machine, even a Java interpreter you have in a private setup.
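A quick way to check that the variable was actually picked up is to read it back from inside the JVM (the JVM also normally echoes "Picked up JAVA_TOOL_OPTIONS: ..." on stderr at startup):

public class ToolOptionsCheck {
    public static void main(String[] args) {
        System.out.println("JAVA_TOOL_OPTIONS = " + System.getenv("JAVA_TOOL_OPTIONS"));
        System.out.println("java.net.preferIPv4Stack = " + System.getProperty("java.net.preferIPv4Stack"));
    }
}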
System.setProperty does not work for applets, because the JVM is already running before the applet starts. In this case we use applet parameters like this:
deployJava.runApplet({
        id: 'MyApplet',
        code: 'com.mkysoft.myapplet.SomeClass',
        archive: 'com.mkysoft.myapplet.jar'
    }, {
        java_version: "1.6*", // Target version
        cache_option: "no",
        cache_archive: "",
        codebase_lookup: true,
        java_arguments: "-Djava.net.preferIPv4Stack=true"
    },
    "1.6" // Minimum version
);
You can find deployJava.js at https://www.java.com/js/deployJava.js
I'm using a Kerberos-enabled Spark cluster to run our Spark applications. Kerberos was set up previously by other members of the organization, and I have no idea how it works. In the initial days, we used the Kerberos debug logs to understand the exception "Unable to obtain password from user", which was raised due to the absence of a JCE certificate in the cacerts folder of the JRE security directory. However, we no longer need the logs, so we used the -Dsun.security.krb5.debug=false parameter to disable the logging, but this did not have any effect. Is there any other parameter that could do the trick? Please help me.
Excerpt from the GitBook "Hadoop and Kerberos: The Madness Beyond the Gate" by Steve Loughran, chapter Low-Level Secrets
JVM Library logging
You can turn Kerberos low-level logging on with -Dsun.security.krb5.debug=true
This doesn't come out via Log4J, or java.util logging; it just comes
out on the console. Which is somewhat inconvenient, but bear in mind
they are logging at a very low level part of the system. And it does
at least log. If you find yourself down at this level you are in
trouble. Bear that in mind.
If you want to debug what is happening in SPNEGO, another system
property lets you enable this: -Dsun.security.spnego.debug=true
You can ask for both of these in the HADOOP_OPTS environment variable
export HADOOP_OPTS="-Dsun.security.krb5.debug=true -Dsun.security.spnego.debug=true"
Hadoop-side JAAS debugging
Set the env variable HADOOP_JAAS_DEBUG to true and UGI will set the
"debug" flag on any JAAS files it creates.
You can do this on the client, before issuing a hadoop, hdfs or yarn
command, and set it in the environment script of a YARN service to
turn it on there.
export HADOOP_JAAS_DEBUG=true
On the next Hadoop command, you'll see a trace like (.........)
Caveat: the Java properties starting with sun.security. apply to the Sun/Oracle Java runtime and also to the OpenJDK runtime and its variants, but not to IBM Java, etc.
Excerpt from the Java 8 documentation under Troubleshooting Security
If you want to monitor security access, you can set the
java.security.debug System property.
(.......) Separate multiple options with a comma.
When troubleshooting Kerberos specifically, I personally use this combination:
-Djava.security.debug=gssloginconfig,configfile,configparser,logincontext
Excerpt from the Oracle JDK 9 Release Notes section tools/launcher
JDK 9 supports a new environment variable JDK_JAVA_OPTIONS to
prepend options to those specified on the command line. The new
environment variable has several advantages over the
legacy/unsupported _JAVA_OPTIONS environment variable including the
ability to include java launcher options (...)
These two env variables are a very dirty (and utterly difficult to detect) way to inject Java system properties without them appearing on the command line.
What does that mean for you? Well, you have to search for multiple Java system props and environment variables, which might be set:
for env variables: globally (cf. /etc/profile.d/*.sh), at account level (cf. ~/.bashrc and friends), inside Hadoop "include files", or directly inside a shell script that runs your Spark job
for system props: in any shell or env variable that is later expanded in a shell script (...), in any env var picked up by Java on startup, in YARN configuration files (when using Spark-on-YARN), or directly on a Java command line
Good luck.
I personally would run a dummy Spark job that just dumps all env variables and Java system props; then inspect the dump to detect what to search for; then run a brute-force find ... -exec grep ... on the Linux filesystem (repeat as needed).
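For what it's worth, a plain-Java sketch of such a dump (in a real Spark job you would run the equivalent code in the driver and, ideally, in the executors too):

import java.util.Map;
import java.util.Properties;

public class DumpEnvAndProps {
    public static void main(String[] args) {
        System.out.println("== Environment variables ==");
        for (Map.Entry<String, String> e : System.getenv().entrySet()) {
            System.out.println(e.getKey() + "=" + e.getValue());
        }
        System.out.println("== Java system properties ==");
        Properties props = System.getProperties();
        for (String name : props.stringPropertyNames()) {
            System.out.println(name + "=" + props.getProperty(name));
        }
    }
}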
I've been trying to automate the creation of our development environment by combining batch files and WLST, but I am struggling to change the memory the WebLogic server will start with.
Currently we are manually changing the memory settings in the <DOMAIN_HOME>/bin/setDomainEnv.cmd script, but this is a workaround. It should be possible to do it automatically without much effort.
Setting the Domain
The script that sets up the domain is pretty simple:
set JAVA_HOME=C:\Program Files\Java\jdk1.6.0_45
set MW_HOME=C:\dev\wls1036_dev
set DOMAIN_HOME=C:\dev\domain
cd %MW_HOME%
call configure.cmd
mkdir %DOMAIN_HOME%
cd %DOMAIN_HOME%
%JAVA_HOME%\bin\java.exe -Xmx1024m -XX:MaxPermSize=256m -Dweblogic.management.username=weblogic -Dweblogic.management.password=welcome1 weblogic.Server
I've tried to use some variables in this script, such as MEM_ARGS and JAVA_OPTIONS, but none of these are forwarded to the final configuration of the domain; later it always starts with a 512 MB heap and 128 MB perm gen, which is not enough.
WLST memory start args
We are using Eclipse, and it does call startWebLogic.cmd as the start script. It is the standard configuration.
I tried the following WLST script. It does set the server start arguments, but WebLogic is not using those properties and does not allocate enough memory.
edit()
startEdit()
cd('/Servers/myserver/ServerStart/myserver')
cmo.setArguments('-Xmx1024m -XX:MaxPermSize=256m')
activate()
Any ideas?
You can use this trick for getting the ServerStart arguments:
Write a simple offline WLST script that gets the arguments from config.xml:
getArguments.py
import sys
readDomain(sys.argv[1])
cd('Server/%s/ServerStart/NO_NAME_0' % sys.argv[2])
argsFile = open('arguments.txt', 'w')
print >>argsFile, cmo.arguments
argsFile.close()
Add this script to startWebLogic.cmd like this:
startWebLogic.cmd
...
set DOMAIN_HOME=%~dp0
path\to\wlst.cmd getArguments.py %DOMAIN_HOME% admin_server_name
set /p EXTRA_JAVA_PROPERTIES=<arguments.txt
call "%DOMAIN_HOME%\bin\startWebLogic.cmd" %*
There is no easy way of setting these values when executing WebLogic from Eclipse. It calls the batch script and, at least in the current version, does not allow you to pass dynamic parameters.
We solved it by making the setDomainEnv.cmd file part of our versioned configuration:
Copy the setDomainEnv.cmd file to your version control configuration.
Edit whatever you want (memory, etc.).
Copy the file, e.g. copy custom\setDomainEnv.cmd %DOMAIN_HOME%\bin /y, when running your development environment configuration script.
Now, every time you configure your development environment, the memory values will be set without manual intervention.
You will have to re-edit the file when updating WebLogic, so you don't end up with an outdated copy.
My Java Webstart application runs in a controlled trusted environment. This is a closed internal network where I have some control on how the application is started.
How can I pass JVM arguments to the application, even if the JVM considers them 'insecure' for use with Web Start?
There are several options to pass JVM arguments to webstart.
Through JNLP file.
Through the JAVA_TOOL_OPTIONS environment variable.
Through the deployment settings on the local computer.
Through the javaws command (I was unable to get this to work).
Note that I have included links to the Java 8 version of this documentation. All of these ideas are supported and documented in other Java versions; however, sometimes they work slightly differently or have slightly different restrictions.
Through JNLP file.
The JNLP file supports many JVM arguments. Some can be set through direct attributes, such as initial-heap-size and max-heap-size. For other settings, java-vm-args can be used.
The JNLP File Syntax documentation lists some supported java-vm-args for 'this version'; however, it is unclear whether that means the 1.4+ version from the example or JRE 8. I know some unlisted settings are actually supported, such as -XX:MaxGCPauseMillis and activating the G1 garbage collector. You can create a JNLP and then use jinfo -flag MaxGCPauseMillis <pid> to test whether a setting has been correctly propagated.
This is the preferred method, because it does not require any direct control of the JVM. The downside is that it only supports specific parameters that are considered 'safe'.
Through the JAVA_TOOL_OPTIONS environment variable
When you start Java Web Start using the javaws command, you can use JAVA_TOOL_OPTIONS to set any parameter you want on all JVMs started from that environment.
So on Linux you can do the following to set an unsupported parameter:
export JAVA_TOOL_OPTIONS=-XX:SoftRefLRUPolicyMSPerMB=2000
javaws <my jnlp>
Note that this will affect all Java applications run with this environment variable set. So setting it for all users, or for a specific user, should be done with great care. Setting it only for a single application, as in the example above, is much safer.
The advantage of this solution is that you can pass any parameter you want. The downside is that it requires a specific way of launching the application, or a very broad setting. It also requires control over the client system.
Through the deployment settings on the local computer
You can also pass JVM arguments by changing the deployment settings of the JVM. This can be done through the Java Control Panel, which allows you to set default runtime settings.
If you want to automate these settings, you can use the deployment properties file. Unfortunately, the JRE-specific section of this file is undocumented. Manually, it is very easy to adapt this file:
deployment.javaws.jre.0.args=-XX\:SoftRefLRUPolicyMSPerMB\=2000
When automating this file, you have to watch very carefully, because it contains these settings for all detected JVMs, so you have to be sure to change the correct one. Also, this will be used for all Web Start applications and applets on your system.
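If you do script it, here is a hedged sketch using java.util.Properties (the file path and the index 0 are assumptions: the location of deployment.properties varies per platform and user, and the correct deployment.javaws.jre.<n>.* block depends on which JRE was detected):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.Properties;

public class PatchDeploymentProperties {
    public static void main(String[] args) throws Exception {
        String path = args[0]; // assumed: path to the user-level deployment.properties file
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        // Assumes index 0 is the JRE you want to change; verify before writing.
        props.setProperty("deployment.javaws.jre.0.args", "-XX:SoftRefLRUPolicyMSPerMB=2000");
        // Note: Properties.store rewrites the file and drops existing comments/ordering.
        try (FileOutputStream out = new FileOutputStream(path)) {
            props.store(out, "updated JRE runtime args");
        }
    }
}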
Through the javaws command (I was unable to get this to work)
There should be another way (besides the JAVA_TOOL_OPTIONS method) to change the parameters using the command line. The javaws documentation lists the -J option to pass arguments to the JVM, for example by running your JNLP as follows:
javaws -J-XX:SoftRefLRUPolicyMSPerMB=2000 <my jnlp>
However I have not been able to get this to actually set the JVM parameters.
I'm running WebLogic locally, but the application will also run in production on server instances administered from the WebLogic server.
I have set a system property in WebLogic using "-DRUNTIME_ENVIRONMENT=LOCALHEST" under Servers -> Configuration -> Server Start -> Arguments.
In my Java file, I have System.out.println("ENVR_:" + System.getProperty("RUNTIME_ENVIRONMENT"));
And it prints null. Is there some argument I have missed?
You have to add "set JAVA_OPTIONS=%JAVA_OPTIONS% -Druntime.environment=local" to the startWebLogic.cmd file.
I believe the settings on that page apply only if Node Manager is used, so you will need to start your application server with Node Manager rather than from the command line or by other means.
If you are using Linux/macOS (I am using WebLogic 12.2 on Mac):
Find the file startWebLogic.sh and edit it.
Find the line that sets JAVA_OPTIONS and change it to: JAVA_OPTIONS="${SAVE_JAVA_OPTIONS} -Denv=dev"
-Denv=dev is the environment you want.
I've been trying the whole day to make Tomcat 6 use the system proxy settings. I've tried various ways, about 200 different versions of
tomcat6 //US/Tomcat6 ++JvmOptions "-Djava.net.useSystemProxies=true"
I tried to set the property in service.bat in the "install" section like this (also many similar versions):
...
:foundJvm
echo Using JVM: "%PR_JVM%"
"%EXECUTABLE%" //IS//%SERVICE_NAME% --StartClass org.apache.catalina.startup.Bootstrap --StopClass org.apache.catalina.startup.Bootstrap --StartParams start --StopParams stop --JvmOptions "-Djava.net.useSystemProxies=true"
I tried setting this with the tomcat6w GUI. Not sure if it does anything anyway.
I also tried setting JAVA_HOME to both the JRE and the JDK. No difference.
I tried setting -Dhttp.proxyHost=proxyhostURL and -Dhttp.proxyPort=proxyPortNumber. Those at least don't seem to be ignored, because the connection then failed (I used a random local IP and port).
Now the fun fact: I can run it through catalina.bat, set the parameter there (CATALINA_OPTS=...), and it works like a charm. So what is that doing there? I would like to have it as a service, which would be way more user friendly, but if there's no way to achieve that, I'm willing to consider just putting catalina.bat into autorun.
So... did anybody ever get this working? Or does anybody have ideas/advice?
Assuming this is on Windows, I found a Registry key under:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\<app-name>\Parameters\Java
The entry is named Options and is of type REG_MULTI_SZ. It contained all the -D JVM options, one line per option. I added our HTTP/HTTPS proxy settings (we're using NTLM authentication proxies):
-Dhttp.proxyHost=proxy.company.local
-Dhttp.proxyPort=8080
-Dhttps.proxyHost=proxy.company.local
-Dhttps.proxyPort=8080
-Dhttp.proxyUser=svc_account
-Dhttp.proxyPassword=svc_Password
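To double-check inside the JVM which proxy is actually selected (for example from a small test class, or a scratch JSP deployed in Tomcat), here is a minimal sketch using the standard ProxySelector API; the URL is just an example:

import java.net.Proxy;
import java.net.ProxySelector;
import java.net.URI;

public class ProxyCheck {
    public static void main(String[] args) {
        // With -Djava.net.useSystemProxies=true or explicit http.proxyHost/http.proxyPort
        // set at JVM startup, the default ProxySelector should report the proxy here
        // instead of DIRECT.
        for (Proxy proxy : ProxySelector.getDefault().select(URI.create("http://www.example.com/"))) {
            System.out.println("Proxy: " + proxy);
        }
    }
}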