I have a PagingAndSortingRepository:
public interface BrowserLinkPagination extends PagingAndSortingRepository<BrowserLink, Long> {
    List<BrowserLink> findByUserAndUriLike(User user, String uri, Pageable pageable);
}
Now what I want to do is to search for multiple words in the uri column. LIKE comes pretty close, but it depends on the order in which the words occur in the string.
EDIT for clarification: order dependence is exactly what I do not want. I want the search terms to be independent of order, so LIKE is not what I am looking for.
I guess this is a pretty common requirement, having several search terms to look for in a string. I know I could implement it by providing the SQL to execute, but I am curious whether there is a way to express that in terms of Spring Data.
I am using PostgreSQL in production and H2 for development/tests.
After reading more about the topic it is kind of obvious that I need some kind of full-text search.
I will start with the one provided by PostgreSQL. I found a short working example: http://rachbelaid.com/postgres-full-text-search-is-good-enough/
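For example, here is a rough sketch of what such a native PostgreSQL full-text query could look like in a Spring Data JPA repository. The table and column names (browser_link, uri, user_id), the 'simple' text search configuration, and the method name are assumptions, not a tested implementation:

import java.util.List;

import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.PagingAndSortingRepository;
import org.springframework.data.repository.query.Param;

public interface BrowserLinkSearchRepository extends PagingAndSortingRepository<BrowserLink, Long> {

    // plainto_tsquery ANDs all words together, so the match does not depend on word order.
    @Query(value = "SELECT * FROM browser_link b "
                 + "WHERE b.user_id = :userId "
                 + "AND to_tsvector('simple', b.uri) @@ plainto_tsquery('simple', :terms)",
           nativeQuery = true)
    List<BrowserLink> searchByUserAndTerms(@Param("userId") Long userId,
                                           @Param("terms") String terms,
                                           Pageable pageable);
}

Note that this ties the query to PostgreSQL, so it will not run against the H2 database used for development and tests.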
Thanks for the comments.
Use In in the method name. In your case it could look like this:
public interface BrowserLinkPagination extends PagingAndSortingRepository<BrowserLink, Long> {
    List<BrowserLink> findByUserAndUriIn(User user, List<String> uri, Pageable pageable);
}
When you are using the standard API created by Spring, calling it from the browser is pretty simple - just type in the address:
/your_api/browserlink/search/findByUserAndUriIn?user=xxx&uri=uri1,uri2,uri3
Related
I am using jOOQ for working with a relational database. I have a SELECT query for which I need to write unit tests with mocking. Based on this doc and this post, I need to define my own data provider, which should look something like this:
class MyProvider implements MockDataProvider {

    DSLContext create = DSL.using(SQLDialect.MYSQL);

    @Override
    public MockResult[] execute(MockExecuteContext mockExecuteContext) throws SQLException {
        MockResult[] mock = new MockResult[1];
        String sql = mockExecuteContext.sql();

        if (sql.startsWith("select")) {
            Result<Record2<String, String>> result = create.newResult(COL_1, COL_2);
            result.add(create.newRecord(COL_1, COL_2)
                             .values("val1", "val2"));
            mock[0] = new MockResult(1, result);
        }

        return mock;
    }
}
where COL_1 and COL_2 are defined as follows:
Field<String> COL_1 = field("Column1", String.class);
Field<String> COL_2 = field("Column2", String.class);
It's quite simple and straightforward when the SELECT is a small one (as in the above example, just 2 columns). I am wondering how it should be done in the case of complex and large selects. For instance, I have a SELECT statement which selects 30+ columns from multiple table joins. It seems the same approach of
Result<Record_X<String, ...>> result = create.newResult(COL_1, ...);
result.add(create.newRecord(COL_1, ...)
.values("val1", ...));
does not work in case of more than 22 columns.
Any help is appreciated.
Answering your question
There is no such limitation as a maximum of 22 columns. As documented here:
Higher-degree records
jOOQ chose to explicitly support degrees up to 22 to match Scala's typesafe tuple, function and product support. Unlike Scala, however, jOOQ also supports higher degrees without the additional typesafety.
You can still construct a record with more than 22 fields using DSLContext.newRecord(Field...). Now, there is no values(Object...) method on the Record type, because the Record type is the super type of all the Record1 - Record22 types. If such an overload were present, then the type safety on the sub types would be lost, because the values(Object...) method is applicable for all types of arguments. This might be fixed in the future by introducing a new RecordN subtype.
But you can load data into your record with other means, e.g. by calling Record.fromArray(Object...):
Record record = create.newRecord(COL_1, ...);
record.fromArray("val1", ...);
result.add(record);
The values() method is mere convenience (adding type safety) on top of fromArray().
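For illustration, here is a minimal sketch of a provider that builds a result with more than 22 columns this way; the column names and values are hypothetical placeholders, not taken from your query:

import org.jooq.DSLContext;
import org.jooq.Field;
import org.jooq.Record;
import org.jooq.Result;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.jooq.tools.jdbc.MockDataProvider;
import org.jooq.tools.jdbc.MockExecuteContext;
import org.jooq.tools.jdbc.MockResult;

class WideMockProvider implements MockDataProvider {

    private final DSLContext create = DSL.using(SQLDialect.MYSQL);

    @Override
    public MockResult[] execute(MockExecuteContext ctx) {
        // Hypothetical set of 25 columns; a real test would use the fields
        // that the query under test actually selects.
        Field<?>[] columns = new Field<?>[25];
        Object[] values = new Object[25];
        for (int i = 0; i < columns.length; i++) {
            columns[i] = DSL.field("column" + (i + 1), String.class);
            values[i] = "value" + (i + 1);
        }

        // newResult(Field...) and newRecord(Field...) accept any number of fields;
        // beyond degree 22 they simply return the untyped Result<Record> / Record.
        Result<Record> result = create.newResult(columns);
        Record record = create.newRecord(columns);
        record.fromArray(values);
        result.add(record);

        return new MockResult[] { new MockResult(1, result) };
    }
}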
Disclaimer:
I'm assuming you read the disclaimer on the documentation page you've linked. I'm posting it here anyway for other readers of this question, who might not have read the disclaimer:
Disclaimer: The general idea of mocking a JDBC connection with this jOOQ API is to provide quick workarounds, injection points, etc. using a very simple JDBC abstraction. It is NOT RECOMMENDED to emulate an entire database (including complex state transitions, transactions, locking, etc.) using this mock API. Once you have this requirement, please consider using an actual database instead for integration testing, rather than implementing your test database inside of a MockDataProvider.
It seems you're about to re-implement a database which can "run" any type of query, including a query with 23+ columns, and every time you change the query under test, you will also change this test here. I still recommend you do integration testing instead, using testcontainers or even with H2, which will help cover many more queries than any such unit test approach. Here's a quick example showing how to do that: https://github.com/jOOQ/jOOQ/tree/main/jOOQ-examples/jOOQ-testcontainers-example
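For completeness, a rough sketch of that integration-testing approach with Testcontainers (not taken from the linked example; the image tag, schema, and query are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;

class SelectIntegrationTest {

    @Test
    void runsTheRealQueryAgainstARealDatabase() throws Exception {
        try (PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15")) {
            postgres.start();

            try (Connection connection = DriverManager.getConnection(
                    postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword())) {

                DSLContext ctx = DSL.using(connection, SQLDialect.POSTGRES);

                // Create the schema / insert test data, then run the actual query
                // under test, e.g. via ctx.fetch(...). Details depend on your project.
            }
        }
    }
}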
Also, integration tests will help test query correctness. Unit tests like these will only provide dummy results, irrespective of the actual query. It is likely that such mocks can be implemented much more easily on a higher level than the SQL level, i.e. by mocking the DAO, or repository, or whatever methods, instead.
I'm coming from a C# background and trying to implement an Android app. In .NET C#, retrieving specific data from a database is relatively easy using Entity Framework and LINQ; my usual approach is something like this (simplified for clarity):
public IQueryable<T> GetElements<T>()
where T : class, IDBKeyProvider
{
return this.db.Set<T>().Where(e => e.Dbstate == (int)DBState.Active);
}
This method call results in a generic IQueryable and later on, I can use the power of deferred execution and expression trees to specify exactly which elements I want using a predicate, loading only desired elements in memory.
This is something I would very much like to go for in my Android App, however, I'm not exactly sure how I could arrive at a similar result, if I can at all.
I looked into some Java Predicate examples, which seemed promising, and I also found Room to be delightfully familiar. My problem, however, is that I cannot make my queries fully customizable, due to the fact that Room still needs some hard-coded information about my DB (original here):
@Dao
public interface MyDao {
    @Query("SELECT first_name, last_name FROM user WHERE region IN (:regions)")
    public LiveData<List<User>> loadUsersFromRegionsSync(List<String> regions);
}
I could perhaps extract the relevant pieces of information with Java Reflection from the predicate parameter, but I feel this to be a hack rather than a proper solution.
I'm trying to write a query that returns a fairly large amount of data (200ish nodes). The nodes are very simple:
public class MyPojo {
    private String type;
    private String value;
    private Long createdDate;
    ...
}
I originally used the Spring Data Neo4j template interface, but found that it was very slow after around 100 nodes were returned.
public interface MyPojoRepository extends GraphRepository<MyPojo> {
    public List<MyPojo> findByType(String type);
}
I turned on debugging to see why it was so slow, and it turned out SDN was making a query for each node's labels. This makes sense; as I understand it, SDN needs the labels to do its duck typing. However, Cypher returns all pertinent data in one go, so there's no need for this.
So, I tried rewriting it as a Cypher query instead:
public interface MyPojoRepository extends GraphRepository<MyPojo> {
    @Query("MATCH(n:MyPojo) WHERE n.type = {0} RETURN n")
    public List<MyPojo> findByType(String type);
}
This had the same problem. I dug a little deeper, and while this query returned all node data in one go, it leaves out the labels. There is a way to get them, which works in the Neo4j console so I tried it with SDN:
"MATCH(n:MyPojo) WHERE n.type = {0} RETURN n, labels(n)"
Unfortunately, this caused an exception about having multiple columns. After looking through the source code, this makes sense because Neo4j returns a map of returned columns which in my case looked like: n, labels(n). SDN would have no way of knowing that there was a label column to read.
So, my question is this: is there a way to provide labels as part of the Cypher query to prevent needing to query again for each node? Or, barring that, is there a way to feed SDN a Node containing labels and properties and convert it to a POJO?
Note: I realize that the SDN team is working on using Cypher completely under the hood in a future release. As of now, a lot of the codebase uses the old (and, I believe, deprecated) REST services. If there is any future work going on that would affect this, I would be overjoyed to hear about it.
You're right, it would be solvable for the simple use case, and it should also be solved.
Unfortunately, the current APIs don't return the labels as part of the node, so we would have to rewrite the inner workings to generate the extra meta-information and return all of it correctly.
One idea is to use RETURN {id: id(n), labels: labels(n), data: n} as n for the full representation.
The problem is that this breaks with user-defined queries.
Not sure when and how we can schedule that work. Feel free to raise it as a JIRA issue or to watch/upvote any related issues.
I work with MongoDB and Java. Currently I use Criteria to build select queries,
for example:
Criteria.where("customerType").is(customerType) (1)
For me, customerType is a key in MongoDB. If I change it in the future, my query (1) becomes useless, and worse, if I have used it in many places my code will be broken. I have googled, but found no results. My question is: is there any project or workaround for such a problem?
I'm not sure I've understood your question completely, but take a look at Spring Data MongoDB: http://projects.spring.io/spring-data-mongodb/
You would just need to declare an interface:
public interface CustomerRepository extends MongoRepository<Customer, String> {
    public Customer findByCustomerType(String customerType);
}
and Spring will do its magic, providing the implementation under the hood.
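For instance, here is a minimal sketch (the Customer entity and its fields are assumptions) of how the mapping keeps the stored key name in a single place:

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.core.mapping.Field;

@Document(collection = "customers")
public class Customer {

    @Id
    private String id;

    // If the stored key is ever renamed, only this annotation value changes;
    // findByCustomerType(...) keeps referring to the Java property name.
    @Field("customerType")
    private String customerType;

    // getters and setters omitted for brevity
}

With that in place, a call like customerRepository.findByCustomerType("someType") keeps working even if the key stored in MongoDB is later renamed.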
I have been brushing up on my design patterns and came across a thought that I could not find a good answer for anywhere. So maybe someone with more experience can help me out.
Is the DAO pattern only meant to be used to access data in a database?
Most of the answers I found imply yes; in fact, most who talk or write about the DAO pattern tend to automatically assume that you are working with some kind of database.
I disagree though. I could have a DAO like follows:
public interface CountryData {
    public List<Country> getByCriteria(Criteria criteria);
}

public final class SQLCountryData implements CountryData {
    public List<Country> getByCriteria(Criteria criteria) {
        // Get from an SQL database.
    }
}

public final class GraphCountryData implements CountryData {
    public List<Country> getByCriteria(Criteria criteria) {
        // Get from an injected in-memory graph data structure.
    }
}
Here I have a DAO interface and 2 implementations, one that works with an SQL database and one that works with say an in-memory graph data structure. Is this correct? Or is the graph implementation meant to be created in some other kind of layer?
And if it is correct, what is the best way to abstract implementation specific details that are required by each DAO implementation?
For example, take the Criteria Class I reference above. Suppose it is like this:
public final class Criteria {

    private String countryName;

    public String getCountryName() {
        return this.countryName;
    }

    public void setCountryName(String countryName) {
        this.countryName = countryName;
    }
}
For the SQLCountryData, it needs to somehow map the countryName property to an SQL identifier so that it can generate the proper SQL. For the GraphCountryData, perhaps some sort of Predicate Object against the countryName property needs to be created to filter out vertices from the graph that fail.
What's the best way to abstract details like this without coupling client code working against the abstract CountryData with implementation specific details like this?
Any thoughts?
EDIT:
The example I included of the Criteria class is simple enough, but consider that I want to allow the client to construct complex criteria, where they specify not only the property to filter on, but also the equality operator, logical operators for compound criteria, and the value.
DAOs are part of the DAL (Data Access Layer), and your data can be backed by any kind of implementation (XML, RDBMS, etc.). You just need to ensure that the proper instance is injected/used at runtime. DI frameworks like Spring/Guice shine in this case. Also, your Criteria interface/implementation should be generic enough that only business details are captured (i.e. a country name criterion) and the actual mapping is again handled by the implementation class.
For SQL, in your case, you can either hand-generate the SQL, generate it using a helper library like Spring, or use a full-fledged framework like MyBatis. In our project, Spring XML configuration files were used to decouple the client and the implementation; it might vary in your case.
EDIT: I see that you have raised a similar concern in the previous question. The answer still remains the same. You can add as much flexibility as you want in your interface; you just need to ensure that the implementation is smart enough to make sense of all the arguments it receives and maps them appropriately to the underlying source. In our case, we retrieved the value object from the business layer and converted it to a map in the SQL implementation layer which can be used by MyBatis. Again, this process was pretty much transparent and the only way for the service layer to communicate with DAO was via the interface defined value objects.
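As an illustration of that last step, a minimal sketch of converting a criteria value object into a parameter map; the class, method, and key names here are assumptions rather than anything prescribed by MyBatis:

import java.util.HashMap;
import java.util.Map;

public final class CriteriaMapper {

    private CriteriaMapper() {}

    // Converts the business-level criteria into the parameter map the SQL layer expects.
    // The "country_name" key is an assumed column/parameter name.
    public static Map<String, Object> toParameterMap(Criteria criteria) {
        Map<String, Object> params = new HashMap<>();
        if (criteria.getCountryName() != null) {
            params.put("country_name", criteria.getCountryName());
        }
        return params;
    }
}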
No, I don't believe it's tied to only databases. The acronym is for Data Access Object, not "Database Access Object" so it can be usable with any type of data source.
The whole point of it is to separate the application from the backing data store so that the store can be modified at will, provided it still follows the same rules.
That doesn't just mean turfing Oracle and putting in DB2. It could also mean switching to a totally non-DBMS-based solution.
OK, this is a bit of a philosophical question, so I'll tell you what I think about it.
DAO usually stands for Data Access Object. The source of data is not always a database, although in real-world implementations it usually comes down to that.
It can be XML, a text file, some remote system, or, as you stated, an in-memory graph of objects.
From what I've seen in real-world projects, yes, you are right: you should provide different DAO implementations for accessing the data in different ways.
In this case one DAO goes to the DB, and another DAO implementation goes to the object graph.
The interface of the DAO has to be designed very carefully. Your Criteria has to be generic enough to stay independent of where you're going to get the data from.
How do you achieve this level of decoupling? The answer can vary depending on your system, but in general I would say, "as usual, by adding another level of indirection" :)
You can also think of your criteria object as a data object where you supply only the data needed for the query. In that case you won't even need to support different Criteria types.
Each particular DAO implementation will take this data and treat it in its own way: one will construct a query for the graph, another will bind it to your SQL.
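To make that concrete, here is a minimal sketch reusing the CountryData and Criteria types from the question; the Country.getName() accessor, the column name, and the query helper are hypothetical placeholders rather than a definitive implementation:

import java.util.List;
import java.util.stream.Collectors;

// GraphCountryData.java: interprets the criteria as an in-memory predicate.
public final class GraphCountryData implements CountryData {

    private final List<Country> vertices; // injected in-memory graph, simplified to a list

    public GraphCountryData(List<Country> vertices) {
        this.vertices = vertices;
    }

    @Override
    public List<Country> getByCriteria(Criteria criteria) {
        return vertices.stream()
                       .filter(c -> c.getName().equals(criteria.getCountryName())) // getName() is assumed
                       .collect(Collectors.toList());
    }
}

// SQLCountryData.java: interprets the same criteria as a bound SQL parameter.
public final class SQLCountryData implements CountryData {

    @Override
    public List<Country> getByCriteria(Criteria criteria) {
        // The column name "country_name" and the query helper are assumptions.
        String sql = "SELECT * FROM country WHERE country_name = ?";
        return runQuery(sql, criteria.getCountryName());
    }

    private List<Country> runQuery(String sql, String countryName) {
        // JDBC/ORM plumbing omitted in this sketch.
        throw new UnsupportedOperationException("not implemented in this sketch");
    }
}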
To minimize maintenance hassle I would suggest using a dependency injection framework (Spring, for example). These frameworks are well suited to instantiating your DAO objects and they play well together.
Good Luck!
No, that DAO is only for databases is a common misconception.
DAO stands for "Data Access Object", not "Database Access Object". Hence anywhere you need to CRUD data to/from some store (e.g. a file, memory, a database, etc.), you can use a DAO.
In Domain Driven Design there is a Repository pattern. While Repository as a word is far better than three random letters (DAO), the concept is the same.
The purpose of the DAO/Repository pattern is to abstract a backing data store, which can be anything that can hold a state.