In the Solr logs I see this error:
java.lang.UnsupportedOperationException: Serialization support for
org.apache.commons.collections.functors.InvokerTransformer is disabled for
security reasons. To enable it set system property
'org.apache.commons.collections.enableUnsafeSerialization' to 'true',
but you must ensure that your application does not de-serialize
objects from untrusted sources.
I am trying to add the flag -Dorg.apache.commons.collections.enableUnsafeSerialization=true, but it doesn't help.
How do I correctly enable this property? (I don't have access to solrconfig.xml.)
You can add it to the SOLR_OPTS environment variable or pass it directly to the start script:
bin/solr start -Dorg.apache.commons.collections.enableUnsafeSerialization=true
As per the Configuring solrconfig.xml docs:
In general, any Java system property that you want to set can be passed through the bin/solr script using the standard -Dproperty=value syntax. Alternatively, you can add common system properties to the SOLR_OPTS environment variable defined in the Solr include file (bin/solr.in.sh or bin/solr.in.cmd).
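If you cannot pass it on the command line, appending it to SOLR_OPTS in the include file has the same effect. A minimal sketch for bin/solr.in.sh (adjust for bin/solr.in.cmd on Windows):
# bin/solr.in.sh -- append to whatever SOLR_OPTS already contains
SOLR_OPTS="$SOLR_OPTS -Dorg.apache.commons.collections.enableUnsafeSerialization=true"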
I have producer and consumer Java code and I'm trying to upgrade it to connect to a Kafka cluster secured with SSL. I'm in a situation where the SSL-related passwords may only be supplied via environment variables.
So is it possible to directly reference the values of environment variables in the KafkaProducer.properties and KafkaConsumer.properties files?
For example:
I declare an environment variable on the Linux system: SSL_KEY_PASSWORD=password
And inside the KafkaProducer/Consumer properties, I declare:
ssl.key.password=${SSL_KEY_PASSWORD}
A sample Kafka consumer/producer property file might look like:
# For SSL
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/client.truststore.jks
ssl.truststore.password=${TRUSTSTORE_PASS_ENV_VARIABLE}
# For SSL auth
ssl.keystore.location=/var/private/ssl/client.keystore.jks
ssl.keystore.password=${KEYSTORE_PASS_ENV_VARIABLE}
ssl.key.password=${KEY_PASS_ENV_VARIABLE}
No, they cannot.
It's unclear how you are using the files or the Kafka clients. If from shell commands, you should create a bash wrapper around the command you're running that uses sed or another templating tool to generate the files before running the final command.
If you are actually writing Java code, then build the Properties from the environment there instead of using files.
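A minimal sketch of such a wrapper, assuming the template file is called consumer.properties.template and the rendered file consumer.properties (both names, the broker address, and the topic are placeholders), and that the passwords contain no characters special to sed:
#!/bin/bash
# Render the template by substituting the exported environment variables,
# then run the client against the generated file.
sed -e "s|\${TRUSTSTORE_PASS_ENV_VARIABLE}|${TRUSTSTORE_PASS_ENV_VARIABLE}|g" \
    -e "s|\${KEYSTORE_PASS_ENV_VARIABLE}|${KEYSTORE_PASS_ENV_VARIABLE}|g" \
    -e "s|\${KEY_PASS_ENV_VARIABLE}|${KEY_PASS_ENV_VARIABLE}|g" \
    consumer.properties.template > consumer.properties

kafka-console-consumer.sh --bootstrap-server broker:9093 \
    --topic my-topic --consumer.config consumer.properties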
I don't think property file values are interpolated, but you can test that once to be sure. Alternatively, you can remove these lines from the property file and set them from code, something like below...
import java.io.FileInputStream;
import java.util.Properties;
final Properties properties = new Properties();
try (FileInputStream input = new FileInputStream(yourExistingFile)) {
    properties.load(input); // load whatever is still kept in the file
}
properties.put("ssl.key.password", System.getenv("SSL_KEY_PASSWORD")); // the additional property, taken from the environment
We run our Micronaut integration tests in the cloud, in a Docker container.
We're setting MICRONAUT_ENVIRONMENTS=staging in the Docker environment variables to force our application to read the config values from application-staging.yaml.
However, Micronaut automatically adds "test" as an environment and then reads the config values from application-test.yaml.
From the docs (https://docs.micronaut.io/2.2.1/guide/index.html#propertySource), environment variables should have priority over deduced environments when loading the config.
Is there any reason why Micronaut gives priority to the application-test.yaml values here?
The test environment is added when Micronaut tests are running, even when the MICRONAUT_ENVIRONMENTS environment variable is set.
After a bit of digging, it seems the "test" environment is added before the DefaultEnvironment class is initialized, hence it is added even if micronaut.env.deduction is set to false.
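To confirm exactly which environments end up active, a small diagnostic test can help; this is only a sketch, assuming JUnit 5 and micronaut-test 2.x (adjust the imports to your versions):
import javax.inject.Inject;

import org.junit.jupiter.api.Test;

import io.micronaut.context.ApplicationContext;
import io.micronaut.test.extensions.junit5.annotation.MicronautTest;

// Prints the environment names Micronaut actually activated, e.g. [test, staging],
// which shows whether "test" is added alongside the one from MICRONAUT_ENVIRONMENTS.
@MicronautTest
class ActiveEnvironmentsTest {

    @Inject
    ApplicationContext applicationContext;

    @Test
    void printActiveEnvironments() {
        System.out.println(applicationContext.getEnvironment().getActiveNames());
    }
}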
We are using a WLS server instance running as a Windows service (Setting Up a WebLogic Server Instance as a Windows Service) and we are facing some class collisions, therefore we set the -verbose:class parameter to see class-loader resource details.
But after a server restart it did not write the JRE process output to the file specified in the system property parameter:
-Dweblogic.Stdout="d:\Oracle\Middleware\user_projects\domains\myWLSdomain\
stdout.txt"
I searched for any verbose output such as:
[Loaded java.lang.Object from C:\Java\jdk1.7.0_75\jre\lib\rt.jar]
but I didn't find any. Therefore I checked the Windows registry key generated by the WL_HOME\server\bin\wlsvc script to see whether it contains all of our specified parameters, and also searched whether the verbose output can be directed by some other JRE parameter, but without result.
After some searching in the registry I found that WLSVC generates the service parameter keys at the following location:
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\{service_name}\Parameters]
..
"Log"=
..
One of the parameters is the Log sub-key; once I set it properly, the JRE process started to use the specified location for logging all the details.
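For illustration only, a .reg-style fragment for that value could look like the following; the path simply reuses the stdout location from the question, with backslashes doubled per .reg syntax:
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\{service_name}\Parameters]
"Log"="d:\\Oracle\\Middleware\\user_projects\\domains\\myWLSdomain\\stdout.txt"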
I have downloaded the commons-daemon tool and used it with a Java application. I have created a bat file as shown below:
set SERVICE_NAME=sample
set PR_INSTALL=D:\commons-daemon-1.0.15-bin-windows-signed\prunsrv.exe
REM Service log configuration
set PR_LOGPREFIX=%SERVICE_NAME%
set PR_LOGPATH=D:\logs
set PR_STDOUTPUT=D:\logs\stdout.txt
set PR_STDERROR=D:\logs\stderr.txt
set PR_LOGLEVEL=Error
REM Path to java installation
set PR_JVM=C:\Java\jre7\bin\client\jvm.dll
set PR_CLASSPATH=D:\commons-daemon-1.0.15-bin-windows-signed\Daemon.jar
REM Startup configuration
set PR_STARTUP=auto
set PR_STARTMODE=jvm
set PR_STARTCLASS=com.SomeService
set PR_STARTMETHOD=start
REM Shutdown configuration
set PR_STOPMODE=jvm
set PR_STOPCLASS=com.SomeService
set PR_STOPMETHOD=stop
REM JVM configuration
set PR_JVMMS=256
set PR_JVMMX=1024
set PR_JVMSS=4000
set PR_JVMOPTIONS=-Duser.language=DE;-Duser.region=de
In cmd, I install the service using the command:
prunsrv.exe //IS//sample
After this, a service named sample becomes available in the list of services, and when I try to start it, it shows:
Windows could not start the sample service on Local Computer. For more information, review the System event log. If this is a non-Microsoft service, contact the service vendor, and refer to the service-specific
error code 1
UPDATED
When I run
prunsrv.exe //ES//sample
it shows
The data area passed to a system call is too small.
Failed to start service
Can anyone help me with this?
I had the same problem. In my case (not exactly yours), the problem was the jvm.dll path, because the %JAVA_HOME% variable contains spaces. So to solve this, I modified the assignment
set CG_PATH_TO_JVM=%JAVA_HOME%\jre\bin\server\jvm.dll
to
set CG_PATH_TO_JVM="%JAVA_HOME%\jre\bin\server\jvm.dll"
and that's all.
Also, you could check the variable assignments with this command:
prunmgr //ES//yourservicename_as_in_windows
To help others troubleshoot:
If you look at:
https://commons.apache.org/proper/commons-daemon/procrun.html
There is a parameter:
--LogPath
which defaults to:
%SystemRoot%\System32\LogFiles\Apache
A log file is generated there which contains some additional error messages and possibly useful information.
The original questioner changed the log path to:
set PR_LOGPATH=D:\logs
So looking there would be the appropriate thing to do in their case.
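The same page also documents a --LogLevel parameter, so if the default output is not detailed enough, something like the following (a sketch, run from the prunsrv directory) should raise the verbosity before reproducing the failure:
prunsrv.exe //US//sample --LogLevel=Debug --LogPath=D:\logs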
I also had this cryptic error message 'The data area passed to a system call is too small.' with no further information in either the startup log or the Windows/System32/LogFiles/Apache/ logs on Win 8/Server 2008.
I had renamed my packages and the --StartClass and --StopClass parameters were wrong.
I agree with OscarSan that a space in %JAVA_HOME% could cause the "error code 1" problem. I solved it by re-installing JDK 1.8 to change the installation path from C:\Program Files\Java\jdk1.8.0_144 to C:\Java\jdk1.8.0_144.
In the Java API example they create a Datastore by using DatastoreHelper.getOptionsFromEnv.
But this creates the warning:
WARNING: Not using any credentials
and ultimately leads to:
DatastoreException(null): beginTransaction 401
I set my environment variables to the following:
export DATASTORE_DATASET={Project-ID}
export DATASTORE_HOST="https://www.googleapis.com/datastore/v1/datasets/{Project-ID}"
export DATASTORE_SERVICE_ACCOUNT="{email address}"
export DATASTORE_PRIVATE_KEY_FILE="{path to local p12 keyfile}"
But still, when I try to see what the credentials are:
println("Datastore helper: " +DatastoreHelper.getOptionsfromEnv
.dataset(datasetId).build().getCredential)
I get null. What could be missing?
Also, is there a way to set the credentials inside the project (instead of using getOptionsFromEnv)?
The problem was that even though I used
source ~/.bash_profile
to refresh my environment variables, and the echo command showed me that they were indeed updated, apparently I needed to restart my terminal (using Mac OS X) for them to also be picked up by sbt and Scala.
I am not sure why this is the case, or whether it is Scala-specific, but now I have managed to authenticate and communicate with the server.
I figured this out by using the local installation of the Datastore server and continuing to have the same problems there.
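As for setting the credentials inside the project instead of relying on getOptionsFromEnv: with the old datastore v1beta2 client, something along the lines of the sketch below should work (the project id, account email, and key path are placeholders, and the builder/helper methods are from memory, so double-check them against the client version you use):
import com.google.api.services.datastore.client.Datastore;
import com.google.api.services.datastore.client.DatastoreFactory;
import com.google.api.services.datastore.client.DatastoreHelper;
import com.google.api.services.datastore.client.DatastoreOptions;

public class DatastoreFromCode {
    // Builds a Datastore client with the credential set explicitly in code
    // rather than read from the DATASTORE_* environment variables.
    public static Datastore create() throws Exception {
        DatastoreOptions options = new DatastoreOptions.Builder()
            .dataset("your-project-id")                    // placeholder project id
            .credential(DatastoreHelper.getServiceAccountCredential(
                "your-service-account-email",              // placeholder account
                "/path/to/private-key.p12"))               // placeholder p12 key file
            .build();
        return DatastoreFactory.get().create(options);
    }
}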