How to deal with QueryDSL when multiple schemas have the same table name? - java

I ran into a problem when trying to use QueryDSL code generation with multiple schemas. I added multiple schemas as below:
<schemaPattern>ABC,DEF</schemaPattern>
and my table name pattern is:
<tableNamePattern>PQR,STU</tableNamePattern>
Suppose both schemas have a DEF table; then when I compile the Maven project it gives me the error below.
Failed to execute goal com.querydsl:querydsl-maven-plugin:4.2.1:export (default) on project TestProject:
Execution default of goal com.querydsl:querydsl-maven-plugin:4.2.1:export failed: Attempted to write multiple times to D:\test\repos\testProject\target\generated-sources\testPackage\domain\dependency\QDEF.java, please check your configuration
Can anyone tell me a way to resolve this, and also explain how to access the generated classes of a specific schema? For example, QDEF qdef = QDEF.qdef is the normal way to declare it, but how can I declare the QDEF from the STU schema?

I believe this was resolved here. It looks like <schemaToPackage>true</schemaToPackage> is what you need.
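For reference, a minimal sketch of how the plugin configuration might look with that flag enabled (the JDBC settings and target package below are illustrative, not taken from the question):

<plugin>
    <groupId>com.querydsl</groupId>
    <artifactId>querydsl-maven-plugin</artifactId>
    <version>4.2.1</version>
    <configuration>
        <jdbcDriver>oracle.jdbc.OracleDriver</jdbcDriver>
        <jdbcUrl>jdbc:oracle:thin:@localhost:1521:XE</jdbcUrl>
        <schemaPattern>ABC,DEF</schemaPattern>
        <tableNamePattern>PQR,STU</tableNamePattern>
        <packageName>testPackage.domain.dependency</packageName>
        <!-- Generates each schema's classes into their own sub-package,
             so identically named tables no longer collide -->
        <schemaToPackage>true</schemaToPackage>
        <targetFolder>target/generated-sources</targetFolder>
    </configuration>
</plugin>

With schemaToPackage enabled, each schema's classes should end up in their own sub-package (something like testPackage.domain.dependency.abc.QDEF vs. testPackage.domain.dependency.def.QDEF), so you pick the QDEF you want via its import rather than having a single ambiguous class.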

Related

How to customize the generated rowmappers in jOOQ?

I have a project using jOOQ code generation in one of the modules.
That module is using a Converter to map from OffsetDateTime to Instant and vice-versa. The problem I am having is that while this seems to work for the generated Table/Pojo/Record definitions, it is not working in the generated RowMappers class. The generated RowMappers code looks like:
public static Function<Row, com.redhat.runtimes.data.access.tables.pojos.Todos> getTodosMapper() {
    return row -> {
        com.redhat.runtimes.data.access.tables.pojos.Todos pojo = new com.redhat.runtimes.data.access.tables.pojos.Todos();
        pojo.setId(row.getUUID("ID"));
        pojo.setTitle(row.getString("TITLE"));
        pojo.setDescription(row.getString("DESCRIPTION"));
        pojo.setCreated(row.getInstant("CREATED"));
        pojo.setDueDate(row.getInstant("DUE_DATE"));
        pojo.setComplete(row.getBoolean("COMPLETE"));
        pojo.setAuthor(row.getString("AUTHOR"));
        return pojo;
    };
}
But on the Row object, there is no method getInstant. Ideally, it should be calling DateTimeConverter.from(row.getOffsetDateTime("<field>")) using my Converter class. Is there a way to customize this behavior?
Thanks in advance!
EDIT
Something else I just noticed in the Maven output:
[INFO] 'com.redhat.runtimes.data.access.converters.UUIDConverter' to map 'java.util.UUID' could not be accessed from code generator.
[INFO] 'com.redhat.runtimes.data.access.converters.UUIDConverter' to map 'java.util.UUID' could not be accessed from code generator.
[INFO] 'com.redhat.runtimes.data.access.converters.DateTimeConverter' to map 'java.time.Instant' could not be accessed from code generator.
[INFO] 'com.redhat.runtimes.data.access.converters.DateTimeConverter' to map 'java.time.Instant' could not be accessed from code generator.
Perhaps that is why the RowMappers are not working?
EDIT AGAIN
I fixed the error above by including the converters dependency in the plugin dependencies, but I am still getting an error compiling the RowMappers class.
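For anyone hitting the same "could not be accessed from code generator" messages: the fix amounts to declaring the module that contains the Converter classes as a dependency of the code-generation plugin itself, so they are on the generator's classpath. A rough sketch (the artifact coordinates of the converters module are assumptions, not taken from the question):

<plugin>
    <groupId>org.jooq</groupId>
    <artifactId>jooq-codegen-maven</artifactId>
    <!-- ... existing <configuration> with the forcedTypes referencing the converters ... -->
    <dependencies>
        <!-- Makes com.redhat.runtimes.data.access.converters.* visible to the generator -->
        <dependency>
            <groupId>com.redhat.runtimes</groupId>
            <artifactId>data-access-converters</artifactId>
            <version>${project.version}</version>
        </dependency>
    </dependencies>
</plugin>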
EDIT PART 3
@lukas-eder - The RowMappers/DAOs are getting generated by the jooq-vertx library, I guess. I will have to reach out to Jan Klingsporn and ask his advice.

@Select raises an error when used with no parameters instead of selecting all entities (Datastax Java Driver Mapper)

I'm using the Datastax Java Driver Mapper.
The docs and GitHub describe the possibility of using the @Select annotation with no parameters to select all rows within a table.
https://docs.datastax.com/en/developer/java-driver/4.4/manual/mapper/daos/select/
https://github.com/datastax/java-driver/pull/1285
So I did the following:
@Dao
public interface SchaduleJobDao {
    (...)
    @Select
    @StatementAttributes(consistencyLevel = "LOCAL_QUORUM")
    PagingIterable<ScheduleJobEntity> all();
    (..)
However, an error is raised by Eclipse in the all() method line:
"Invalid parameter list: Select methods that don't use a custom clause must take the partition key components in the exact order (expected PK of ScheduleJobEntity: [java.lang.String])"
According to the references above, it should be allowed.
I did check the version: this feature should be available since 4.2, and I'm using 4.4, so it doesn't seem to be version-related.
My pom file:
<dependency>
    <groupId>com.datastax.oss</groupId>
    <artifactId>java-driver-mapper-processor</artifactId>
    <version>4.4.0</version>
</dependency>
What may I be doing wrong? Is there a way to solve this?
Thanks
I think this might be a configuration issue. Could you check that your POM doesn't reference an older version of the processor in another place? In particular, another way to provide it is in the annotationProcessorPaths section of the compiler plugin, as shown here.
We have an integration test that covers this case and I just double-checked that it passes in 4.4.0. Also, the error message slightly changed after we introduced the feature, it used to say "must take the partition key components", now it says "must match"; you're quoting the old message.
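The annotationProcessorPaths configuration mentioned above looks roughly like this; when a processor path is declared there it takes precedence over processors found on the regular dependency classpath, so an older version pinned here would explain seeing the pre-4.2 error message:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <annotationProcessorPaths>
            <!-- Make sure this matches the 4.4.0 dependency from the question -->
            <path>
                <groupId>com.datastax.oss</groupId>
                <artifactId>java-driver-mapper-processor</artifactId>
                <version>4.4.0</version>
            </path>
        </annotationProcessorPaths>
    </configuration>
</plugin>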

Customize JUnit 4 XML report

Background:
TestNG supports adding your own Reporter classes in order to modify the reports being generated or generating new reports as needed.
However, JUnit doesn't have such a functionality, so a brute-force way would be to write your own Runner and then generate your own custom report.
But I ask this question to find out whether there is something better.
Basically, I want to add a custom attribute to every executed test method.
<testcase name="test_test_something" classname="some.class.name" time="0.069" my-own-attribute="somevalue"/>
So my question is:
How is this XML report generated by JUnit and Gradle?
Is there a way to modify this process of report generation to add custom data to the report while doing minimal changes?
How is this XML report generated by JUnit and Gradle?
It’s eventually generated by the Gradle internal class org.gradle.api.internal.tasks.testing.junit.result.JUnitXmlResultWriter.
Basically, I want to add a custom attribute to every executed test method.
<testcase name="test_test_something" classname="some.class.name" time="0.069" my-own-attribute="somevalue"/>
[…]
Is there a way to modify this process of report generation to add custom data to the report while doing minimal changes?
Unfortunately, there is no way to add further attributes to the <testcase/> element. This code shows how the element and its attributes are currently created; there is no way to hook into that creation process.
If you can live with a hacky solution, then you could try writing your custom data to StdOut/StdErr during the test and set the outputPerTestCase property as follows:
// (configuring the default `test` task of the `java` plugin here; works with
// any task of Gradle’s `Test` type, though)
test {
    reports {
        junitXml {
            outputPerTestCase = true
        }
    }
}
The written output will then end up at least somewhere within the <testcase/> element, and you might be able to use it from there somehow.
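To make the hack concrete, here is a minimal sketch of a JUnit 4 test that smuggles a key/value pair into the report this way (the attribute name and value are the illustrative ones from the question, not anything the framework understands):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class SomethingTest {

    @Test
    public void test_test_something() {
        // Written to StdOut; with outputPerTestCase = true, Gradle places this
        // inside a <system-out> child of this test's <testcase/> element
        System.out.println("my-own-attribute=somevalue");

        assertEquals(2, 1 + 1);
    }
}

A consumer of the report then has to parse the <system-out> text itself, since the data never becomes a real XML attribute.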

How to create a simple plugin in Neo4j?

I configured Maven and managed to run example-Plugins like FullTextIndex (https://github.com/neo4j-contrib/neo4j-rdf/blob/master/src/main/java/org/neo4j/rdf/fulltext/FulltextIndex.java).
Still, I struggle to create a simple function myself. I want to have a Java function that can find a node by ID and return its properties.
I know I can do this in Cypher, but the target is to understand the logic of plugins for Neo4j.
So after importing the plugin I should be able to type in:
INPUT ID
call example.function(217)
OUTPUT e. g.
name = Tree, age = 85, label = Plant, location = Munich
Thanks a lot!
In Neo4j, user-defined procedures are simple .jar files that you put in the $NEO4J_HOME/plugins directory. Logically, to create a new user-defined procedure you will need to generate this jar file. You can do that by configuring a new Maven project or by using the Neo4j Procedure Template repository.
User-defined procedures are simply Java classes with methods annotated with @Procedure. If the procedure writes to the database, then mode = WRITE should be defined (not your case).
You will also need to query the database to get the node by ID and return its properties. To do that, you need to inject the GraphDatabaseService into your Java class using the @Context annotation.
To achieve your goal, I believe you will need to use the getNodeById() method from GraphDatabaseService and getAllProperties() on the returned Node.
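A minimal sketch of such a procedure, assuming the Neo4j 3.x procedure API (org.neo4j.procedure); the procedure name example.function matches the question, while the result class name is illustrative:

import java.util.Map;
import java.util.stream.Stream;

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.procedure.Context;
import org.neo4j.procedure.Name;
import org.neo4j.procedure.Procedure;

public class ExampleProcedure {

    // Injected by Neo4j at runtime
    @Context
    public GraphDatabaseService db;

    // Callable from Cypher as: CALL example.function(217)
    @Procedure(name = "example.function")
    public Stream<PropertiesResult> function(@Name("id") long id) {
        Node node = db.getNodeById(id);
        return Stream.of(new PropertiesResult(node.getAllProperties()));
    }

    // Procedures return a Stream of records whose public fields become the output columns
    public static class PropertiesResult {
        public Map<String, Object> properties;

        public PropertiesResult(Map<String, Object> properties) {
            this.properties = properties;
        }
    }
}

Package it as a jar, drop it into $NEO4J_HOME/plugins, restart Neo4j, and CALL example.function(217) should return the node's properties as a map.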
What you are looking for is user-defined functions / procedures. There is a dedicated section in the Neo4j documentation:
https://neo4j.com/developer/procedures-functions/#_extending_cypher
http://neo4j.com/docs/developer-manual/current/extending-neo4j/procedures/#user-defined-procedures
You can also look at APOC which contains hundreds of such examples used in real life.
https://github.com/neo4j-contrib/neo4j-apoc-procedures

Unable to use multiple ebean databases in Play 2

We are setting up a slightly complicated project using Play Framework 2.0.3.
We need to access several (pre-existing) databases and would like to do it using the framework's built-in facilities (i.e. Ebean).
We tried to create all model classes within the "models" package, and then map each class by its FQN to the corresponding ebean.* property in the application.conf:
ebean.firstDB="models.ClassA,models.ClassB,models.ClassC"
ebean.secondDB="models.ClassD"
ebean.thirdDB="models.ClassE,models.ClassF"
This doesn't seem to work:
PersistenceException: Error with [models.SomeClass] It has not been enhanced but it's superClass [class play.db.ebean.Model] is? (You are not allowed to mix enhancement in a single inheritance hierarchy) marker[play.db.ebean.Model] className[models.SomeClass]
We checked and re-checked and the configuration is OK!
We then tried to use a different Java package for each database model classes and map them accordingly in the application.conf:
ebean.firstDB = "packageA.*"
ebean.secondDB = "packageB.*"
ebean.thirdDB = "packageC.*"
This works fine when reading information from the database, but when we try to save/update objects we get:
PersistenceException: The default EbeanServer has not been defined? This is normally set via the ebean.datasource.default property. Otherwise it should be registered programatically via registerServer()
Any ideas?
Thanks!
Ricardo
You have to specify in your query which database you want to access.
For example, if you want to retrieve all users from your secondDB:
// Get access to your secondDB
EbeanServer secondDB = Ebean.getServer("secondDB");
// Get all users in secondDB
List<User> userList = secondDB.find(User.class).findList();
When using save(), delete(), update() or refresh(), you have to specify the Ebean server, for instance for the save() method:
classA.save("firstDB");
I encountered the same problem and wasted a whole day investigating it; here is what I finally got working.
1. Define named Ebean servers
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://localhost:3306/db1"
db.default.user=root
db.default.password=123456
db.aux.driver=com.mysql.jdbc.Driver
db.aux.url="jdbc:mysql://localhost:3306/db2"
db.aux.user=root
db.aux.password=123456
Now you have two Ebean servers, [default] and [aux], at run time.
2. Map entity packages in the application.conf file
ebean.default="models.*"
ebean.aux= "secondary.*"
Now entities under the models.* package are mapped to the [default] server and entities under the secondary.* package are mapped to the [aux] server. I think this is related to Java class enhancement or something similar. You don't have to separate entities into different packages, but if entities of different Ebean servers are under the same package, it may cause weird trouble and exceptions.
When using your models, the save/delete/update-related methods should take the server name as a parameter:
Student s = new Student();
s.save("aux");
When using a finder, you should define it as:
public static Finder<Long, Student> find = new Finder<Long, Student>("aux", Long.class, Student.class);
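Putting the pieces together, a minimal sketch of an entity bound to the [aux] server (assuming the play.db.ebean.Model API of Play 2.0/2.1; the Student fields are invented for illustration):

package secondary;

import javax.persistence.Entity;
import javax.persistence.Id;

import play.db.ebean.Model;

@Entity
public class Student extends Model {

    @Id
    public Long id;

    public String name;

    // Finder bound explicitly to the "aux" Ebean server
    public static final Finder<Long, Student> find =
            new Finder<Long, Student>("aux", Long.class, Student.class);

    // Convenience helper so callers can't forget which server to write to
    public void saveToAux() {
        save("aux"); // Model.save(String server) saves via the named Ebean server
    }
}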
It might not be the same case, but I ran into this "SomeClass not enhanced" PersistenceException with Play 2.1.0,
and all that was missing was a public declaration in the SomeClass model class that I had forgotten.
In Play 2.1.0 the error message was a little different:
PersistenceException: java.lang.IllegalStateException: Class [class play.db.ebean.Model] is enhanced and [class models.Address] is not - (you can not mix!!)
This solved my issue with saving to my db table and resolved the error:
"javax.persistence.PersistenceException: The default EbeanServer has not been defined ? This is normally set via the ebean.datasource.default property. Otherwise it should be registered programatically via registerServer()"
