DataTablesRepository in Spring Data Elasticsearch (Java)

Currently, we are using Spring Data JPA with a MySQL database, where DataTablesRepository works well. Now we are moving our data to Spring Data Elasticsearch, but DataTablesRepository does not work with it. Is there an alternative, or how can I implement a custom repository for this?

As you say, spring-data-jpa-datatables does not support ElasticsearchRepository. It relies on the Specification API, which Spring Data Elasticsearch does not implement, so extending it takes some work.
What you need to do is create your own ElasticsearchRepositoryFactoryBean (e.g. ElasticsearchDataTablesRepositoryFactoryBean) and your own implementation of AbstractElasticsearchRepository that implements the specifics of spring-data-jpa-datatables, just like DataTablesRepositoryImpl does. You should also define your own equivalent of DataTablesRepository (an ElasticsearchDataTablesRepository that extends ElasticsearchRepository) with the same methods.
The org.springframework.data.jpa.datatables.mapping classes can be reused, but you'll have to recreate the logic found in SpecificationFactory for Elasticsearch using QueryBuilders, which I imagine will be the most time-consuming part.
When you're done, you can use @EnableElasticsearchRepositories just as described by spring-data-jpa-datatables, e.g.:
@EnableElasticsearchRepositories(repositoryFactoryBeanClass = ElasticsearchDataTablesRepositoryFactoryBean.class)
Then have your repositories extend your ElasticsearchDataTablesRepository interface and you're good to go.
For reference, you should look at SpecificationFactory and AbstractElasticsearchRepository (its search method), and get familiar with Elasticsearch QueryBuilders.
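To make those pieces concrete, here is a minimal sketch, assuming the org.springframework.data.jpa.datatables.mapping classes are reused as suggested. ElasticsearchDataTablesRepository mirrors DataTablesRepository, and DataTablesQueryFactory is a hypothetical stand-in for the ported SpecificationFactory logic; the exact accessors on Column and Search may differ between library versions:

import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;
import org.springframework.data.jpa.datatables.mapping.Column;
import org.springframework.data.jpa.datatables.mapping.DataTablesInput;
import org.springframework.data.jpa.datatables.mapping.DataTablesOutput;

// Mirror of DataTablesRepository, built on ElasticsearchRepository instead of JPA.
public interface ElasticsearchDataTablesRepository<T, ID extends java.io.Serializable>
        extends ElasticsearchRepository<T, ID> {

    DataTablesOutput<T> findAll(DataTablesInput input);
}

// Hypothetical counterpart of SpecificationFactory: translate the per-column
// search values from the DataTables request into an Elasticsearch bool query.
class DataTablesQueryFactory {

    static QueryBuilder toQuery(DataTablesInput input) {
        BoolQueryBuilder query = QueryBuilders.boolQuery();
        for (Column column : input.getColumns()) {
            String value = column.getSearch() != null ? column.getSearch().getValue() : null;
            if (Boolean.TRUE.equals(column.getSearchable()) && value != null && !value.isEmpty()) {
                // One match clause per searched column; global search, paging and
                // ordering would be handled similarly in the repository implementation.
                query.must(QueryBuilders.matchQuery(column.getData(), value));
            }
        }
        return query;
    }
}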

Related

Using SqlEncryptionService in java flyway migration

I am trying to write a Java Flyway migration that updates column values by encrypting them with SqlEncryptionService. But I am facing an issue: the call made to the KeyStoreRepository never returns.
@Repository
public interface KeyStoreRepository extends JpaRepository<KeyStoreEntity, UUID> {
    KeyStoreEntity findTopByMasterKeyIdOrderByCreatedDateDesc(UUID masterKeyId);
}
My guess is that we cannot use a JPA repository inside a Java Flyway migration script, but if I am wrong, please correct me with the actual reason.
Also, is there an alternative way to achieve this?
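In case a sketch of such an alternative helps: a commonly suggested approach is to avoid Spring-managed beans inside the migration entirely and use the plain JDBC connection that Flyway hands you. Below is a rough sketch assuming a recent Flyway version with BaseJavaMigration; the SQL, the table and column names, and the encrypt stand-in for SqlEncryptionService are all placeholders:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.flywaydb.core.api.migration.BaseJavaMigration;
import org.flywaydb.core.api.migration.Context;

// Hypothetical migration: read rows over plain JDBC, encrypt a column value,
// and write it back, without touching any Spring-managed bean.
public class V42__EncryptColumnValues extends BaseJavaMigration {

    @Override
    public void migrate(Context context) throws Exception {
        Connection connection = context.getConnection();
        try (PreparedStatement select = connection.prepareStatement(
                     "SELECT id, secret FROM some_table"); // placeholder SQL
             ResultSet rows = select.executeQuery()) {
            while (rows.next()) {
                String encrypted = encrypt(rows.getString("secret"));
                try (PreparedStatement update = connection.prepareStatement(
                        "UPDATE some_table SET secret = ? WHERE id = ?")) {
                    update.setString(1, encrypted);
                    update.setObject(2, rows.getObject("id"));
                    update.executeUpdate();
                }
            }
        }
    }

    private String encrypt(String plaintext) {
        // Stand-in for SqlEncryptionService; wire in the real encryption here.
        return plaintext;
    }
}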

Apache Spark: is it possible to use Google Guice as a dependency injection technique?

Is it possible to use Google Guice as the dependency injection provider for an Apache Spark Java application?
I am able to achieve this when execution happens on the driver, but I have no control when execution happens on the executors.
Is it even possible to use the injected objects on the executors? It's hard to manage the code without dependency injection in Spark applications.
I think the neutrino framework addresses exactly this requirement.
Disclaimer: I am the author of the neutrino framework.
The framework provides the ability to use dependency injection (DI) to generate objects and control their scope on both the driver and the executors.
How does it do that?
As we know, to adopt a DI framework, we first need to build a dependency graph, which describes the dependency relationships between various types and can be used to generate instances along with their dependencies. Guice uses its Module API to build the graph, while the Spring framework uses XML files or annotations.
Neutrino is built on the Guice framework and, of course, builds the dependency graph with the Guice Module API. It doesn't only keep the graph on the driver; it also has the same graph running on every executor.
In the dependency graph, some nodes may generate objects that get passed to the executors, and the neutrino framework assigns unique ids to these nodes. Since every JVM has the same graph, the graph on each JVM has the same set of node ids.
When an instance to be transferred is requested from the graph on the driver, instead of creating the actual instance, the framework returns a stub object which holds the object-creation method (including the node id). When the stub object is passed to an executor, the framework finds the corresponding node in the executor JVM's graph by id and recreates the same object and its dependencies there.
Here is a simple example (it just filters an event stream based on Redis data):
trait EventFilter[T] {
  def filter(t: T): Boolean
}

// The RedisEventFilter class depends on JedisCommands directly,
// and doesn't extend the `java.io.Serializable` interface.
class RedisEventFilter @Inject()(jedis: JedisCommands)
    extends EventFilter[ClickEvent] {
  override def filter(e: ClickEvent): Boolean = {
    // filter logic based on redis
  }
}

/* create injector */
val injector = ...
val eventFilter = injector.instance[EventFilter[ClickEvent]]

val eventStream: DStream[ClickEvent] = ...
eventStream.filter(e => eventFilter.filter(e))
Here is how to configure the bindings:
class FilterModule(redisConfig: RedisConfig) extends SparkModule {
  override def configure(): Unit = {
    // the magic is here:
    // the method `withSerializableProxy` generates a proxy that extends the
    // `EventFilter` and `java.io.Serializable` interfaces via a Scala macro.
    // The module must extend `SparkModule` or `SparkPrivateModule` to get it.
    bind[EventFilter[ClickEvent]].withSerializableProxy
      .to[RedisEventFilter].in[SingletonScope]
  }
}
With neutrino, RedisEventFilter doesn't even need to care about serialization problems. Everything just works as in a single JVM.
For details, please refer to the neutrino readme file.
Limitation
Since this framework uses Scala macros to generate the proxy classes, the Guice modules and the logic for wiring them up need to be written in Scala. Other classes, such as EventFilter and its implementations, can be written in Java.

Use Spring Data repositories to fill in test data

I'd like to ask whether it is all right to use the app's own repositories (Spring Data based) to fill in test data. I know I can use an SQL file with data, but sometimes I need something more dynamic, and I find writing SQL or dataset definitions cumbersome (and hard to maintain when the schema changes). Is there anything wrong with using the app's repositories? All the basic CRUD operations are already there. Note we are talking especially about integration testing.
It feels kind of weird to use part of the app to test itself. Maybe I could create another set of repositories to be used in test contexts.
No, there is absolutely nothing wrong with using Spring Data repositories to create test data.
I even prefer that since it often allows for simpler refactoring.
As with any use of JPA in tests, keep in mind that JPA implementations use a write-behind cache. You probably want to flush and clear the EntityManager after setting up the test data, so that you don't get anything from the first-level cache that really should come from the database. This also ensures the data is actually written to the database, so any problems with that will surface.
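As an illustration, here is a minimal sketch of such a setup; CustomerRepository, Customer, and findByName are made-up names, and the flush/clear calls apply the advice above (use javax.persistence instead of jakarta.persistence on older stacks):

import static org.assertj.core.api.Assertions.assertThat;

import jakarta.persistence.EntityManager;
import jakarta.persistence.PersistenceContext;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.transaction.annotation.Transactional;

@SpringBootTest
@Transactional // roll the test data back after each test
class CustomerQueryIntegrationTest {

    @Autowired
    private CustomerRepository customers;

    @PersistenceContext
    private EntityManager entityManager;

    @BeforeEach
    void insertTestData() {
        customers.save(new Customer("Alice"));
        customers.save(new Customer("Bob"));

        // Push the inserts to the database and empty the first-level cache,
        // so the test reads real rows instead of cached entities.
        entityManager.flush();
        entityManager.clear();
    }

    @Test
    void findsCustomerByName() {
        assertThat(customers.findByName("Alice")).isNotEmpty();
    }
}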
You might be interested in a couple of articles about testing with Hibernate. They don't use Spring Data, but it would work with Spring Data JPA just the same.
I would recommend using Flyway to set up your databases and the Flyway test extension for integration testing.
Then you can do something like this:
@ContextConfiguration(locations = {"/context/simple_applicationContext.xml"})
@TestExecutionListeners({DependencyInjectionTestExecutionListener.class,
        FlywayTestExecutionListener.class})
@Test
@FlywayTest(locationsForMigrate = {"loadmsql"}) // execution once per class
public class MethodTest extends AbstractTestNGSpringContextTests {

    @BeforeClass
    @FlywayTest(locationsForMigrate = {"loadmsql"}) // execution once per class
    public static void beforeClass() {
        // maybe some additional things
    }

    @BeforeMethod
    @FlywayTest(locationsForMigrate = {"loadmsql"}) // execution before each test method
    public void beforeMethod() {
        // maybe before every test method
    }

    @Test
    @FlywayTest(locationsForMigrate = {"loadmsql"}) // as method annotation
    public void simpleCountWithoutAny() {
        // or just with an annotation above the test method where you need it
    }
}

Objectify Transactions using VoidWork

I am using Objectify version 4. I want to use transactions in my project, which is based on GWT, Java, and Objectify. As per the Objectify tutorials, I found that the ofy().transact() method should be used, so I tried the following:
ofy().transact(new VoidWork() {
    public void vrun() {
        // code that saves data to the entity goes here
    }
});
When I execute the project on the local development server, I get an error message stating:
No source code is available for type com.googlecode.objectify.VoidWork; did you forget to inherit a required module?
The method createBillingDocs() is undefined for the type new VoidWork(){}
createBillingDocs is my method, which I want to execute in the transaction.
So, any help? Thanks in advance.
You can't run transactions or use Objectify client-side; it is a server-side framework for accessing the datastore. You need to separate your client-side logic from your server-side logic and define your GWT modules carefully.
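Sketched out, that separation might look like the following; BillingService/BillingServiceImpl and the RPC wiring are illustrative names, with createBillingDocs taken from the question:

// Server-side only: keep this class out of the GWT module's client source path,
// so the GWT compiler never sees Objectify types.
import static com.googlecode.objectify.ObjectifyService.ofy;

import com.google.gwt.user.server.rpc.RemoteServiceServlet;
import com.googlecode.objectify.VoidWork;

public class BillingServiceImpl extends RemoteServiceServlet implements BillingService {

    @Override
    public void createBillingDocs() {
        // The transaction runs entirely on the server.
        ofy().transact(new VoidWork() {
            @Override
            public void vrun() {
                // save the billing doc entities here
            }
        });
    }
}

The client then calls createBillingDocs through the corresponding async RPC interface, so no Objectify class ever reaches client code.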

Preserve Generics when generating JSON schema

I'm using the jackson-module-jsonSchema and jsonschema2pojo APIs.
Brief explanation: I'm trying to json-schemify my server's Spring controller contract objects (objects that the controllers return and objects that they accept as parameters) and package them up for use with a packaged Retrofit client, in order to break the binary dependency between the client and server. The overall solution uses an annotation processor to read the Spring annotations on the controllers and generate a Retrofit client.
I've got it mostly working, but realized today I've got a problem where generic objects are part of the contract, e.g.
public class SomeContractObject<T> {
    ...
}
Of course, when I generate the schema for said object, generic types aren't directly supported. So when I send it through the jsonschema2pojo API I end up with a class like so:
public class SomeContractObject {
}
So my question is simple but may have a non-trivial answer: is there any way to pass that information through the JSON schema to jsonschema2pojo?
