Cannot load custom File System on Flink's shadow jar - java

I needed some metadata on my S3 objects, so I had to override the S3 file system provided by Flink.
I followed this guide to the letter, and now I have a custom file system which works on my local machine when I run my application in the IDE.
Now I am trying to use it on a local Kafka cluster or on my Docker deployment, and I keep getting this error: "Could not find a file system implementation for scheme 's3c'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded."
I package my application using shadowJar, using the following configuration:
shadowJar {
    configurations = [project.configurations.flinkShadowJar]
    mainClassName = "dev.vox.collect.delivery.Application"
    mergeServiceFiles()
}
I have my service file in src/main/resources/META-INF/services/org.apache.flink.core.fs.FileSystemFactory, which contains a single line with the fully qualified name of my factory: dev.vox.collect.delivery.filesystem.S3CFileSystemFactory
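(For reference, a factory like this can be as small as the following sketch; the class name and scheme match my setup, while the body is an illustrative assumption rather than my exact code.)

package dev.vox.collect.delivery.filesystem;

import org.apache.flink.fs.s3hadoop.S3FileSystemFactory;

// Minimal sketch: reuse Flink's S3 implementation under the custom 's3c' scheme.
public class S3CFileSystemFactory extends S3FileSystemFactory {
    @Override
    public String getScheme() {
        return "s3c";
    }
}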
If I unzip my shadowJar I can see that its org.apache.flink.core.fs.FileSystemFactory file contains both my factory and the ones declared by Flink, which should be correct:
dev.vox.collect.delivery.filesystem.S3CFileSystemFactory
org.apache.flink.fs.s3hadoop.S3FileSystemFactory
org.apache.flink.fs.s3hadoop.S3AFileSystemFactory
When I use the S3 file system provided by Flink everything works; it is just mine that does not.
I am assuming the service loader is not loading my factory, either because it does not find it or because it is not declared correctly.
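One way to verify this is to ask the ServiceLoader directly from code running inside the packaged jar (an illustrative debugging snippet, not part of my application):

import java.util.ServiceLoader;

import org.apache.flink.core.fs.FileSystemFactory;

public class FactoryCheck {
    public static void main(String[] args) {
        // Print every FileSystemFactory the ServiceLoader can discover
        // on the current classpath, together with its scheme.
        for (FileSystemFactory factory : ServiceLoader.load(FileSystemFactory.class)) {
            System.out.println(factory.getClass().getName() + " -> " + factory.getScheme());
        }
    }
}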
How can I make it work? Am I missing something?

Related

Custom LiquibaseDataTypes not found in docker image classpath

I am trying to build a custom Liquibase docker image (based on the official liquibase/liquibase:4.3.5 image) for running database migrations in Kubernetes.
I am using some custom types for the database, implemented using the @DataTypeInfo annotation and extending existing LiquibaseDataTypes such as liquibase.datatype.core.VarcharType (class discovery is implemented using the META-INF/services/liquibase.datatype.LiquibaseDataType mechanism introduced in Liquibase 4+).
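(For illustration, such a type looks roughly like the sketch below; the class name and the target mapping are assumptions, not the actual schema-impl code.)

import liquibase.database.Database;
import liquibase.datatype.DataTypeInfo;
import liquibase.datatype.DatabaseDataType;
import liquibase.datatype.LiquibaseDataType;
import liquibase.datatype.core.VarcharType;

// Hypothetical custom type: maps the changelog type "my-string" to a concrete
// database type, taking priority over the built-in VarcharType.
@DataTypeInfo(name = "my-string", minParameters = 0, maxParameters = 0,
        priority = LiquibaseDataType.PRIORITY_DEFAULT + 1)
public class MyStringType extends VarcharType {
    @Override
    public DatabaseDataType toDatabaseDataType(Database database) {
        return new DatabaseDataType("VARCHAR(255)"); // assumed target type
    }
}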
These extensions are implemented inside their own Maven module called "schema-impl", which generates a schema-impl.jar. Everything was working fine when the migrations ran as part of the app startup process, but now we want this to be done by the dedicated Docker image.
The only information in the Liquibase documentation regarding this topic is the "Drivers and extensions" section of this document. According to this, I added schema-impl.jar to the /liquibase/classpath directory during the image build and also modified liquibase.docker.properties to add this jar file explicitly to the classpath property:
classpath: /liquibase/changelog:/liquibase/classpath:/liquibase/classpath/schema-impl.jar
liquibase.headless: true
However, when I try to run my changesets with the Docker image, I always get an error because it cannot find the custom type definition:
liquibase.exception.DatabaseException: ERROR: type "my-string" does not exist
Any help would be really appreciated. Thanks in advance.
OK, I found it. Basically the problem was that I needed to include the classpath in the entrypoint command, not in the liquibase.docker.properties file (which seems to be useless for this use case), like this:
--classpath=/liquibase/changelog:/liquibase/classpath/schema-impl.jar
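For anyone hitting the same issue, the full invocation then looks something like this (an illustrative sketch; the image name and changelog file are placeholders, flag spelling as in Liquibase 4.3):

docker run --rm -v "$(pwd)/changelog:/liquibase/changelog" my-custom-liquibase \
    --classpath=/liquibase/changelog:/liquibase/classpath/schema-impl.jar \
    --changeLogFile=changelog.xml update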

Spring Boot external properties file

I'm trying to run a Java 8 app using Spring Boot version 2.2.4. The app is then packed into a Docker image.
I run my app as specified in a Dockerfile, which ends like this:
FROM openjdk:8
.....
CMD /usr/local/openjdk-8/bin/java -jar -Dspring.config.location=/opt/$APP/ /opt/$APP/$APP.jar
The problem I encounter is the loading of external properties files.
For example, I have an application.properties file similar to this, which is packaged inside the JAR:
spring.data.mongodb.host=localhost
spring.data.mongodb.port=27017
spring.data.mongodb.database=db1
application.queue.sqs.queue_name=somesqs
In addition, I also inject an additional application.properties file into the Docker image, located at /opt/myapp/. This external file is similar to this:
spring.data.mongodb.uri=mongodb://username:password@MONGO_URL:27017/db_name
application.queue.sqs.queue_name=another_sqs
Expected behavior: the app will pick up both the new another_sqs queue name and the external Mongo connection.
However, actual behavior: reading the logs, I can see that the new SQS queue name (i.e. another_sqs) is loaded properly, but the new value for the Mongo connection is discarded, so the app uses the local embedded Mongo engine.
I consulted the following post on Stack Overflow to try to understand what I am experiencing:
Spring Boot and multiple external configuration files
But to my understanding, when using Spring 2.x and above, -Dspring.config.location should override all other properties files.
Here is where I started debugging:
Try 1: I attached to the Docker container, cd'd into /opt/$APP/ where both my app.jar and application.properties are located, executed java -jar app.jar, and voilà, it works! A connection to the external Mongo source is established. This may be explained by the priority with which Spring loads properties files, as specified in Spring's docs.
Try 2: Attach to the container, cd into $HOME/, execute java -jar /opt/$APP/app.jar -Dspring.config.location=/opt/$APP/. It does not connect to the external Mongo, but it does connect to the another_sqs queue. Strange thing: only part of the application.properties values are loaded? Isn't that the way Spring 1.x works, adding values from multiple files?
Try 3: Attach to the container, cd into $HOME/, execute java -jar /opt/$APP/app.jar -Dspring.config.location=file:/opt/$APP/application.properties. Same behavior.
Try 4: Edited the Dockerfile to include the following execution:
CMD /usr/local/openjdk-8/bin/java -jar -Dspring.config.location=classpath:/application.properties,file:/opt/$APP/application.properties /opt/$APP/$APP-$VER.jar
And it works again. Both another_sqs and the external Mongo are loaded properly in Try 4.
My question is therefore:
Why should I explicitly specify classpath:/application.properties? Shouldn't -Dspring.config.location=/opt/$APP/ or -Dspring.config.location=file:/opt/$APP/application.properties be enough?
When you specify -Dspring.config.location=file:/opt/$APP/application.properties you're overriding the default value of config.location with your application.properties. If you want to use another application.properties while still using the default properties without redeclaring them, you should use
-Dspring.config.additional-location=file:/opt/$APP/application.properties
This way, config.location will keep its default value and you will load the external properties as an additional location.
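Applied to the Dockerfile from the question, that would look like this (same layout as the original CMD, only the flag changes):

CMD /usr/local/openjdk-8/bin/java -Dspring.config.additional-location=file:/opt/$APP/application.properties -jar /opt/$APP/$APP.jar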
From the Spring Documentation:
You can also refer to an explicit location by using the spring.config.location environment property (which is a comma-separated list of directory locations or file paths).
When custom config locations are configured by using spring.config.location, they replace the default locations
Alternatively, when custom config locations are configured by using spring.config.additional-location, they are used in addition to the default locations.

passing variables between groovy files

I'm managing many jobs in Jenkins with the DSL plugin. That plugin uses .groovy definitions, so I think even someone who doesn't use Jenkins but knows Groovy may be able to help.
Generally, I want to create an additional file; it may be a Groovy file, JSON, or YAML, whatever. What is important is the possibility to connect that file with my .groovy file.
In that file I'm defining variables (really just strings), for example an IP address or other values, e.g.:
ip_gitlab: 1.2.3.4
default_user: admin
In my Groovy files, I want to be able to use these variables.
Is that approach possible in Groovy?
I suggest using a properties file, as @JBaruch wrote:
ip_gitlab=1.2.3.4
default_user=admin
And load it
Properties properties = new Properties()
File propertiesFile = new File('test.properties')
propertiesFile.withInputStream {
    properties.load(it)
}
Then you can use it; to get the IP, for example:
def ipPropertyName = 'ip_gitlab'
properties."$ipPropertyName"
Make a Groovy file, define some general information, and use load.
E.g., hello.conf (written in Groovy):
build_name = 'hello'
build_config = [
    'git': 'your git repository',
    'build_job': ['build_a', 'build_b']
]
And use it via load:
load 'hello.conf'
println(build_name)
for (job in build_config['build_job']) {
    build job: job
}
If you want a Jenkins-specific answer:
There's a Config File Provider Plugin for Jenkins.
You can store config/properties files via Managed files.
Go to Manage Jenkins > Managed files and create a new file. It supports .groovy, .json, .xml and many others.
Once you have that, you can load the said file inside a job using the Provide Config file checkbox, which will load the file into an environment variable automatically.

JUnit test failed to read property file from another module class

I am creating a project which has multiple modules. I am using the Gradle build tool and IntelliJ IDEA. I have two modules, webservice and utilities.
Project structure: [screenshot omitted]
I am reading the config.properties file in my utilities module, in which I define the server port and other values. When I call the utilities-module method (which reads the property file and returns the values) from my webservice module classes, it works fine and returns the proper values.
But when I try to call the same method from the test classes of the webservice module, the utility class method fails to read the property file.
I do not understand what is going wrong.
Thanks.
Be sure that the property file exists in /src/test/resources as well as /src/main/resources, as the classpath used when executing tests differs from your regular application classpath.
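To make the classpath dependence concrete, classpath-based loading usually looks like the sketch below (an illustrative example, not the asker's actual utility class); during tests it resolves against src/test/resources first, while the regular application resolves against src/main/resources:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class ConfigReader {
    public static Properties load() throws IOException {
        Properties props = new Properties();
        // Resolved against whatever classpath is active: src/main/resources
        // for the application, src/test/resources first during tests.
        try (InputStream in = ConfigReader.class.getClassLoader()
                .getResourceAsStream("config.properties")) {
            if (in == null) {
                throw new IOException("config.properties not found on classpath");
            }
            props.load(in);
        }
        return props;
    }
}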

How to load jndi bindings from within a Storm jar?

We have a project moving to Storm, and as such our code must be packaged in a jar. We had previously used com.sun.jndi.fscontext.RefFSContextFactory as our InitialContextFactory implementation to load the JNDI context bindings from a file in the system config directory on the classpath (this worked fine). However, when attempting to use this factory to load the context from within the jar, we get the following:
javax.naming.InvalidNameException: unknown protocol: jar
    at com.sun.jndi.fscontext.FSContextFactory.getFileNameFromURLString(FSContextFactory.java:139)
    at com.sun.jndi.fscontext.RefFSContextFactory.createContext(RefFSContextFactory.java:31)
This is because the factory is attempting to load the JNDI context from the following URL:
"jar:file:/mount/storm-dir/data/storm.jar!/jndicontext"
This is a valid URL, yet the factory does not understand how to open a jar. Is there an implementation of javax.naming.spi.InitialContextFactory that does? Alternatively, is there a way I could work around this issue and add a config directory to Storm's classpath?
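One possible workaround is to copy the bindings file out of the jar into a plain directory at startup and point the provider at the resulting file: URL instead of the jar: URL. A sketch, assuming the bindings resource is the provider's default .bindings file under /jndicontext (untested with Storm):

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;

public final class JndiFromJar {
    public static Context load() throws Exception {
        // Extract the bindings file from the jar to a temp directory.
        Path dir = Files.createTempDirectory("jndicontext");
        try (InputStream in = JndiFromJar.class
                .getResourceAsStream("/jndicontext/.bindings")) { // assumed resource path
            if (in == null) {
                throw new IllegalStateException("/jndicontext/.bindings not found in jar");
            }
            Files.copy(in, dir.resolve(".bindings"), StandardCopyOption.REPLACE_EXISTING);
        }
        // Point the fscontext factory at the extracted plain-file directory.
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.fscontext.RefFSContextFactory");
        env.put(Context.PROVIDER_URL, dir.toUri().toString()); // a file: URL, not jar:
        return new InitialContext(env);
    }
}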
