Custom entities do not load within a Spring Cloud Dataflow Server - java

Once I add @EnableDataFlowServer to my Spring Boot application, my own custom entities no longer load (I get the 'type not managed' exception that occurs when JPA can't find your entities).
These entities live in another Spring module that I import, like:
@Import({MyDomainsModule.class})
I'm using 2.0.0.M2 of Spring Cloud Data Flow.
Some debugging I've done:
If I add this to my Spring Boot application main class:
@EntityScan({
    "com.company.mydomain.entities"
})
Then my entities start to load as usual, but then Spring Data Flow breaks. For example, any time I try to load the UI, I get:
|ne.jdbc.spi.SqlExceptionHelper| Table 'dataflow.appregistration' doesn't exist
That makes me think that by simply adding the @EntityScan, I broke some naming strategy, since the actual name of the table is of course app_registration.
I think this is mostly a "how do I have multiple locations of JPA-based code in one project" question, rather than a Spring Cloud Data Flow question. But knowing the fix might require a better understanding of how SCDF works. I've checked out the project and am reading up on both Spring Boot and how SCDF configures itself.
Any help is greatly appreciated!

It turned out I had a bad naming strategy coming in from one of my properties files, overriding what SCDF configures.
So to be explicit, I set this in my properties:
spring.jpa.hibernate.naming.physical-strategy=org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
spring.jpa.hibernate.naming.implicit-strategy=org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
And then my Spring Boot application looks like:
@SpringBootApplication(exclude = LocalDataFlowServerAutoConfiguration.class)
@Import({MyDomainModule.class})
@EnableDataFlowServer
// EnableDataFlowServer has an @EntityScan, which causes ours not to be picked up!
// Look in DataFlowControllerAutoConfiguration for more information
@EntityScan({
    "com.company.mydomain.entities"
})

Related

How to integrate Olingo (OData) in a Java Spring Boot project

Can anyone please help me with how to integrate Olingo (OData) into a Spring Boot Java application?
I'm pretty new to Spring Boot; I have implemented one project and want to convert it to Olingo (OData).
I have gone through various resources, but with a bunch of different approaches I'm not sure how to do it the correct way.
Please let me know if someone has worked on it and can guide me.
Link to the project on which I applied Spring Boot.
If you are trying to integrate Olingo JPA, there are a couple of things you will have to do (a rough sketch follows below):
Implement a JPAServiceFactory and initializeODataJPAContext; basically this is about defining the persistence unit and entity manager.
Then you can create a Spring Boot configuration to mount your OData endpoint and initialize the EntityManagerFactory.
Then you can point to your database here.
And finally you can define the JPA entities you want your service to expose.
The full Spring Boot + JPA project sample is located on GitHub. Feel free to go through it, raise an issue, or submit a pull request for improvements.
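As a sketch of the first step, assuming the Olingo V2 JPA processor is being used (the class and persistence-unit names here are illustrative, not from the question):

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

import org.apache.olingo.odata2.jpa.processor.api.ODataJPAContext;
import org.apache.olingo.odata2.jpa.processor.api.ODataJPAServiceFactory;
import org.apache.olingo.odata2.jpa.processor.api.exception.ODataJPARuntimeException;

public class MyODataJPAServiceFactory extends ODataJPAServiceFactory {

    private static final String PERSISTENCE_UNIT = "my-pu"; // illustrative unit name

    @Override
    public ODataJPAContext initializeODataJPAContext() throws ODataJPARuntimeException {
        // wire the persistence unit and EntityManagerFactory into the OData context
        ODataJPAContext ctx = getODataJPAContext();
        EntityManagerFactory emf = Persistence.createEntityManagerFactory(PERSISTENCE_UNIT);
        ctx.setEntityManagerFactory(emf);
        ctx.setPersistenceUnitName(PERSISTENCE_UNIT);
        return ctx;
    }
}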

Why Spring does not recognize the @BatchSize annotation

Scenario:
I'm supporting an enterprise application that runs in Wildfly 10. The application (.war) uses J2EE technologies (EJBs, JPA, JAX-RS) and Spring Boot features (like Spring MVC, Spring REST, Spring Data, Spring Data REST) ... Both stacks co-exist "happily" because they don't interact with each other; however, they do share common classes, like utility or entity classes (the stacks map to the same database model). Why the application uses both stacks is out of the scope of the question.
Currently, I'm trying to improve the performance of a @RestController that pulls some data from the database using a Spring JPA repository. I found that we're suffering from the N + 1 queries problem when calling the @RestController. In past projects (where there were only J2EE technologies), I have used the @BatchSize Hibernate annotation to mitigate this problem with total success.
But in this project, Spring seems to be skipping that annotation. How do I know that? Because I turned on Hibernate SQL logging (hibernate.show_sql) and I can see the N + 1 queries still happening ...
Key Points:
Here are some insights about the application that you must know before providing (or trying to guess) any answer:
The application has many sub-modules encapsulated as libraries inside the WAR file (/WEB-INF/lib) ... Some of these libraries are the jars that encapsulate the entity classes; others are the jars that encapsulate the REST services (which could be JAX-RS services or Spring controllers).
The Spring configuration is done in the classes defined in the WAR artifact: in there, we have a class (that extends SpringBootServletInitializer) annotated with @SpringBootApplication, and another class (that extends RepositoryRestConfigurerAdapter) annotated with @Configuration. Spring customization is done in the latter class.
The application works with multiple datasources, which are defined in the Wildfly server. Spring Data JPA must route any query to the right datasource. To accomplish this requirement, the application (Spring) was configured like this:
@Bean(destroyMethod = "")
@ConfigurationProperties(prefix = "app.datasource")
public DataSource dataSource() {
    // the following class extends AbstractRoutingDataSource
    // and resolves datasources using JNDI names (the Wildfly mode!)
    return new DataSourceRouter();
}

@Bean("entityManagerFactory")
public LocalContainerEntityManagerFactoryBean getEntityManagerFactoryBean() {
    LocalContainerEntityManagerFactoryBean lemfb = new LocalContainerEntityManagerFactoryBean();
    lemfb.setPersistenceUnitName("abcd-pu");
    lemfb.setDataSource(dataSource());
    return lemfb;
}
The last @Bean declaration favors the use of a persistence.xml file, which we do have in the path /WEB-INF/classes/META-INF/ (i.e. Spring does find this file!) ... In that file, we define our domain classes so that Spring JPA can see the entities. Also, we can define special JPA properties like hibernate.show_sql and hibernate.use_sql_comments without issues (this is how I detected the N + 1 queries problem in the first place) ...
What I have done so far?
I tried to add the @BatchSize annotation to the problematic collection. No luck!
I created a new JAX-RS service whose purpose was to mimic the behavior of the @RestController. I confirmed that the @BatchSize annotation does work in the application's deployment, at least in JAX-RS services! (NOTE: the service uses its own persistence.xml) ...
Test details (updated 2020/07/30): What I did here was create a new JAX-RS service and deploy it inside the WAR application, next to the @RestController that presents the problem (I mean, it is the same WAR and the same physical JVM). Both services pull the same entity from the database (same class, same classloader), which has a lazy collection annotated with @BatchSize! ... If I invoke both services, the JAX-RS one honors the @BatchSize and pulls the collection using the expected strategy; the @RestController does not ... So, what is happening here? The only thing different between the services is that each one has a different persistence.xml: the persistence.xml for the JAX-RS service is picked up by Wildfly directly, the other one is picked up by Spring and delegated to Wildfly (I guess) ...
I tried to add the properties hibernate.batch_fetch_style (=dynamic) and hibernate.default_batch_fetch_size (=10) to the persistence.xml read by Spring (see the fragment after this list) ... No luck. I debugged the Spring startup process and saw that those properties are passed to the Spring engine, but Spring does not care about them. The weird thing here is that Spring does honor properties like hibernate.show_sql ... For those who are asking "what do these properties do?": they are the global equivalent of applying @BatchSize to every JPA lazy collection or proxy without declaring the annotation on any entity.
I set up a small Spring Boot project using the same Spring version as the enterprise application (which is 1.5.8.RELEASE, by the way) and both the annotation and the properties approach worked as they are supposed to.
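For reference, the fragment mentioned in the list above would sit in the persistence unit's properties, roughly like this (unit name taken from the configuration above, values as tried):

<persistence-unit name="abcd-pu">
    <properties>
        <!-- honored by Spring's bootstrap -->
        <property name="hibernate.show_sql" value="true"/>
        <property name="hibernate.use_sql_comments" value="true"/>
        <!-- seemingly ignored in this setup: global batch fetching -->
        <property name="hibernate.batch_fetch_style" value="dynamic"/>
        <property name="hibernate.default_batch_fetch_size" value="10"/>
    </properties>
</persistence-unit>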
I've been stuck with this issue for two days, any help to fix this will be appreciated ... thanks!
There are 2-3 possible issues that I can think of.
For some reason, whatever you modify isn't picked up by Wildfly. Wildfly classpath resolution is a separate topic, and some missing configuration can cause you a nightmare. You can identify this if you have access to debug the query: if you put a breakpoint in the constructor of your entity class, you will get a chance to evaluate the entity configuration being used somewhere in the execution context.
@BatchSize doesn't work on @OneToOne; it only works on @OneToMany relationships.
A typical way to define @BatchSize is to use it along with lazy loading, as mentioned in the example here. If you are not using lazy fetching, Hibernate assumes that you want an eager load and makes another select query to fetch all the details. Please confirm you are using the same syntax as given in the example (see the sketch after this list).
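A minimal sketch of that shape (entity names are invented for illustration):

import java.util.HashSet;
import java.util.Set;

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

import org.hibernate.annotations.BatchSize;

@Entity
public class Author {

    @Id
    @GeneratedValue
    private Long id;

    // lazy one-to-many; Hibernate initializes up to 10 pending collections
    // per SELECT instead of one query per Author (the N + 1 pattern)
    @OneToMany(mappedBy = "author", fetch = FetchType.LAZY)
    @BatchSize(size = 10)
    private Set<Book> books = new HashSet<>();
}

@Entity
class Book {

    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    private Author author;
}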
New Addition:
Put conditional breakpoints in the PropertyBinder#setLazy() function, and maybe backtrace it and put relevant breakpoints in CollectionBinder and AnnotationBinder. Then restart/redeploy the server and see what data you are getting for the relevant properties. That will give you a fair idea of where it is failing.
Why a conditional breakpoint? Because you will have thousands of properties, and if you do not add a condition to the breakpoint, it will take you an hour to reach the one you actually care about.
What should the condition be? If it's PropertyBinder, something like `this.name == ...` (comparing against the property you are debugging). For the other classes you can use the same approach.
Sorry for the overly detailed description of conditional breakpoints; you might find it redundant.
It looks like the only way to debug your problem is to debug the Hibernate framework from server startup; only then will we be able to find the root cause.

Spring Boot REST · @Constraint for delete?

I'm working on a system's back end that uses Spring Boot, REST, HATEOAS, Hibernate and PostgreSQL. For validation, I started using classes that extend org.springframework.validation.Validator. It works well, but only for calls made by the front end. They don't fire for calls made in the back end, such as via EntityManager. I've managed to have another validator called in this situation by using @Constraint with ElementType.TYPE, but it only gets called for create and save methods.
Is it possible to use this validator to validate on delete methods too? There's a project here that's a non-operational subset of the project I'm working on, containing the validators I mentioned.
Thanks in advance.
P.S.: I'd rather avoid manually calling the validators whenever I call a repository method in the back end.
P.P.S.: This answer makes me believe it's possible, but I couldn't translate the XML configuration to JavaConfig.
I finally found the answer. In application.properties, add:
spring.jpa.properties.javax.persistence.validation.group.pre-remove=javax.validation.groups.Default
The linked question told me which property I needed, but I didn't know where to place it. I tried to use custom Java configuration and even persistence.xml configuration, but several other things failed.
Here, I learned that "[...] all properties in spring.jpa.properties.* are passed through as normal JPA properties (with the prefix stripped) when the local EntityManagerFactory is created." So I just added that prefix and it worked.
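For anyone who still wants the JavaConfig route, a rough sketch would be to put the same property (minus the spring.jpa.properties prefix) into the factory bean's JPA property map. This assumes you build the EntityManagerFactory yourself; the package name and wiring are illustrative:

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;

@Configuration
public class JpaValidationConfig {

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setDataSource(dataSource);
        emf.setPackagesToScan("com.example.domain"); // illustrative package
        emf.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        // same property as above, minus the spring.jpa.properties prefix
        emf.getJpaPropertyMap().put(
                "javax.persistence.validation.group.pre-remove",
                "javax.validation.groups.Default");
        return emf;
    }
}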

Ease of rolling back from Spring Boot to regular Spring and viewing hybrid of Spring Context and configurations while using Spring Boot

I am assessing spring-boot and how I could migrate to using it.
One question I have is whether a project that uses Spring Boot can easily be converted back to a regular Spring project that uses the traditional Spring configuration files, if that is required. This would be useful in my mind for a few reasons:
1) Merging with legacy projects, because from what I have read, moving from legacy Spring to Spring Boot is somewhat tedious.
2) Obtaining a view of the Spring application context file and webapp configuration files, to understand what the actual configurations being used are.
Another question I have is regarding the lack of an application-context file: is there a way to have some kind of hybrid where there is still an application-context file that can be seen? Part of my concern is that Spring Boot auto-configures components without us knowing and learning how they are configured and work together.
Spring Boot provides auto-configuration.
When @SpringBootApplication is encountered, it triggers a search of the CLASSPATH for a file called META-INF/spring.factories, which is just a regular text file that enumerates a list of Java configuration classes. Java configuration was introduced in 2006 and then merged into Spring 3 in 2009, so it's been around for a long time. These Java configuration classes define beans in the same way that XML does. Each class is annotated with @Configuration, and therein you find beans defined using methods (factory methods, or provider methods) whose return value is managed and exposed via Spring. Each such provider method is annotated with @Bean. This tells Spring to consider the method and its return value the definition of the bean.
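That file is just a key/value list; an illustrative entry (the class name is invented) looks like:

# META-INF/spring.factories
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.example.autoconfigure.MyRabbitAutoConfiguration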
Spring Boot tries to launch all the Java configurations it sees enumerated in that text file. It tries to launch RabbitAutoConfiguration.class, which in turn provides beans for connecting to RabbitMQ, and so on. Of course, you don't want those beans in certain cases, so Spring Boot takes advantage of Spring Framework 4's @Conditional mechanism to conditionally register those beans only if certain conditions are met: is a type on the CLASSPATH? Is a property exposed through the environment? Has another bean of the same type already been defined by the user? Etc. Spring Boot uses these to create the RabbitMQ-specific beans only if, for example, the dependencies that you would get from org.springframework.boot:spring-boot-starter-amqp are on the CLASSPATH. It also considers that the user may have provided a different implementation of RabbitTemplate in some other bean definition (be it XML or Java configuration), and uses that if it's there.
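A sketch of what such a conditional configuration class looks like (the class name matches the illustrative spring.factories entry above; the @Conditional* annotations are the real Spring Boot ones):

import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// contributes RabbitMQ beans only when the conditions hold
@Configuration
@ConditionalOnClass(RabbitTemplate.class) // the amqp jar must be on the CLASSPATH
public class MyRabbitAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean(RabbitTemplate.class) // back off if the user defined one
    public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        return new RabbitTemplate(connectionFactory);
    }
}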
These Java configuration classes are the same sort of Java configuration classes you would write without Spring Boot. BUT... WHY? 80% of the time, the auto-configuration that Spring Boot provides is going to be as good as or better than the configuration you would write yourself. There are only so many ways to configure Spring MVC, Spring Data, Spring Batch, etc., and the wager you take with Spring Boot is that the leaders and engineers on those various projects can provide the most sensible 80%-case configuration that you probably don't care to write anyway.
So, yes, you could use Spring Boot with existing applications, but you'd have to move those existing applications to Spring 4 (which is easy to do if you're using the spring-boot-starter-* dependencies) to take advantage of @Conditional. Spring Boot prefers: no configuration, Java configuration, XML configuration, in that order.
If you have an existing application, I'd do the following:
Find out which dependencies you can remove from your Gradle/Maven build and have taken care of for you by the various spring-boot-starter-* dependencies.
Add @SpringBootApplication to a root component class. E.g., if your package is a.b.c, put a class Application in package a (a.Application) and annotate it with @SpringBootApplication.
You can run it as a Servlet 3 application or in an embedded servlet container. It might be easier to just run in a standard servlet container while you take baby steps. Go to http://start.spring.io and make sure to choose war in the packaging drop-down. Copy the ServletInitializer class and the specification from the pom.xml to ensure that your application is a .war, not a .jar. Then, once everything works on Spring Boot, rm -rf the initializer and revert the Maven build to a simpler .jar using the Spring Boot plugin for extra win.
If your application has lots of existing configuration, import it using @Import(OldConfig.class) or @ImportResource("old-config.xml") on the a.Application configuration class (see the sketch below). The auto-configuration will kick in, but it will see, for example, that you may have already defined a DataSource, and will plug that in in the right places. What I like to do now is just start the application up, see if everything's OK, then start removing code in my old Java or XML configuration. Don't remove your business code, but remove things related to turning on parts of Spring: anything that looks like @Enable..* or <...:annotation-driven/>. Restart and verify things still work. The goal is to let Spring Boot do as much of the heavy lifting as possible. Viewing the source is very handy here so you can see what Spring Boot will try to do for you. Spring Boot logs information on which @Conditional conditions evaluated to true for you; simply provide -Ddebug=true when you start the application to see the output. You could also export DEBUG=true and then restart your IDE to run it, as long as the environment variable is visible in your shell.
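Putting those last steps together, a minimal root class might look like this (OldConfig and old-config.xml are the stand-ins used above):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Import;
import org.springframework.context.annotation.ImportResource;

@SpringBootApplication
@Import(OldConfig.class)                    // legacy Java configuration
@ImportResource("classpath:old-config.xml") // legacy XML bean definitions
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}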

How do I properly set up cross-store persistence using Spring Data JPA + Neo4j?

I am trying to get a very minimal JPA + SDN (Spring Data Neo4j) cross-store project running, and want to demonstrate that saving a partial entity using a JPA repository call will create a corresponding node in Neo4j.
I have followed the instructions / advice that I have been able to find on SO, Google and Spring's site but am currently still having trouble standing things up. I currently have a minimal test project created at:
https://github.com/simon-lam/sdn-cross-store-poc
The project uses Spring Boot and has a simple domain containing a graph entity, GraphNodeEntity.java, and a partial entity, PartialEntity.java. I have written a very basic test, PartialEntityRepositoryTest.java, to do a save on the partial entity and am seeing:
The wrong transaction manager seems to be used, because the CrossStoreNeo4jConfiguration class does not properly autowire entityManagerFactory; it is null.
As a result of the above, no ID is assigned to my entity.
I do not see any SDN activity in the logs at all
Am I doing something glaringly wrong?
More generally, I was hoping to confirm some assumptions and better understand cross-store persistence support in general:
To enable it, do I need to enable advanced mapping?
As part of enabling advanced mapping, I need to set up AspectJ; does this include enabling load-time weaving? If so, is this accomplished through the @EnableLoadTimeWeaving config?
Assuming all my configuration is eventually fixed, should I expect to see partial nodes persisted in Neo4j when I persist them using a JPA repository? This should be handled by the cross-store support, which is driven by aspects, right?
Thank you for any help that can be offered!
I sent a message to the Neo4j Google Group and got some feedback from Michael Hunger so I'm going to share here:
Turns out the cross-store lib has been dormant for a while.
JPA repositories are not supported; only the EntityManager operations are.
The cross-store setup was not meant for a remote server and was not tested.
So in summary my core understanding / assumptions were off!
Source: https://groups.google.com/forum/#!topic/neo4j/FGI8692AVJQ
