What's the easiest way to load the address field content into a String array from a Postgres database:
id | name | address
-----------------------------------------------------------------
1 | John | {"line1","line2","line3"}
2 | Steve | {"addr1","addr2","addr3"}
For the first row I want to end up with:
String[0] = "line1"
String[1] = "line2"
String[2] = "line3"
I'm not sure if this is a serialized string array, but somehow I'm failing at this simple task.
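Since the address column is a native Postgres text[] array, JDBC can hand it back as a java.sql.Array and you can unwrap it straight into a String[]. A minimal sketch, assuming a JDBC Connection named conn and a hypothetical table name people:
import java.sql.Array;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
// Read the text[] column and unwrap it into a String[]
try (PreparedStatement ps = conn.prepareStatement("SELECT address FROM people WHERE id = ?")) {
    ps.setInt(1, 1);
    try (ResultSet rs = ps.executeQuery()) {
        if (rs.next()) {
            Array pgArray = rs.getArray("address");           // java.sql.Array wrapper
            String[] address = (String[]) pgArray.getArray(); // {"line1","line2","line3"}
        }
    }
}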
I am reading City and Country data from 2 CSV files and need to merge the results using a Java Stream (I need to keep the same order as the first result). I thought about using a parallel stream or CompletableFuture, but since I need the result of the first fetch as a parameter to the second fetch, I am not sure they are suitable for this scenario.
So, in order to read the data from the first query, pass its result to the second one, and obtain the final result, what should I do with a Java Stream?
Here are the related entities. I have to relate them using country code values.
Assume that I just need the country names for the following cities. Please keep in mind that I need to keep the same order as the first result. For example, if the first result is [Berlin, Kopenhag, Paris], then the second result should be in the same order: [Germany, Denmark, France].
City:
id | name | countryCode |
------------------------------
1 | Berlin | DE |
2 | Munich | DE |
3 | Köln | DE |
4 | Paris | FR |
5 | Kopenhag | DK |
...
Country:
id | name | code |
----------------------------------
100 | Germany | DE |
105 | France | FR |
108 | Denmark | DK |
...
Here are the related classes:
public class City {
    @CsvBindByPosition(position = 0)
    private Integer id;
    @CsvBindByPosition(position = 1)
    private String name;
    @CsvBindByPosition(position = 2)
    private String countryCode;
    // setters, getters, etc.
}
public class Country {
    @CsvBindByPosition(position = 0)
    private Integer id;
    @CsvBindByPosition(position = 1)
    private String name;
    @CsvBindByPosition(position = 2)
    private String code;
    // setters, getters, etc.
}
You can merge your data with a stream, for example by adding a countryName field to City:
List<Country> countries = // Your CSV Country lines
List<City> cities = // Your CSV City lines
cities.forEach(city -> city.setCountryName(countries.stream()
        .filter(country -> country.getCode().equals(city.getCountryCode()))
        .map(Country::getName)
        .findAny()
        .orElse(null)));
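If the country list is large, a variation of the same idea is to build a lookup map once and then stream the cities over it, which keeps the original city order and avoids rescanning the country list for every city. A sketch, assuming the getters from the classes above:
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
// Index the countries by code once
Map<String, String> nameByCode = countries.stream()
        .collect(Collectors.toMap(Country::getCode, Country::getName));
// Map each city to its country name; the stream preserves the order of 'cities'
List<String> countryNames = cities.stream()
        .map(city -> nameByCode.get(city.getCountryCode()))  // null if the code is unknown
        .collect(Collectors.toList());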
I am building my index like this:
graph = JanusGraphFactory.open("conf/janusgraph-cql-es-server.properties");
final JanusGraphManagement mt = graph.openManagement();
PropertyKey key = mt.getPropertyKey("myID");
mt.buildIndex("byID", Vertex.class).addKey(key).buildCompositeIndex();
mt.commit();
ManagementSystem.awaitGraphIndexStatus(graph, "byID").call();
...
final JanusGraphManagement updateMt = graph.openManagement();
updateMt.updateIndex(updateMt.getGraphIndex("byID"), SchemaAction.REINDEX).get();
updateMt.commit();
But when I call:
graph.traversal().V().has("myID", "100");
I get a full scan, which does return the correct result:
o.j.g.transaction.StandardJanusGraphTx : Query requires iterating over all vertices [(myID = 100)]. For better performance, use indexes
Also if I print the schema I have:
---------------------------------------------------------------------------------------------------
Vertex Index Name | Type | Unique | Backing | Key: Status |
---------------------------------------------------------------------------------------------------
byID | Composite | false | internalindex | myID: INSTALLED |
---------------------------------------------------------------------------------------------------
Edge Index (VCI) Name | Type | Unique | Backing | Key: Status |
---------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------
Relation Index | Type | Direction | Sort Key | Order | Status |
---------------------------------------------------------------------------------------------------
Also, looking at the Backing column it says internalindex, so I wonder if I misconfigured something.
Edit:
There were 2 problems.
The index was only INSTALLED, not ENABLED.
For string properties you also need to do:
mgmt.buildIndex("byID", Vertex.class).addKey(ID, Mapping.TEXT.asParameter())...
Shot in the dark, but it looks like you're not creating the PropertyKey myID before trying to use it.
Try something like:
final JanusGraphManagement mt = graph.openManagement();
PropertyKey key = mt.getPropertyKey("myID");
if (key == null) {
    key = mt.makePropertyKey("myID").dataType(String.class).make();
}
mt.buildIndex("byID", Vertex.class).addKey(key).buildCompositeIndex();
mt.commit();
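After the key and index are created, the index still has to move through the JanusGraph index lifecycle before traversals will use it. A sketch of that follow-up, assuming the graph and the "byID" index from above (the exact waits depend on your setup):
import org.janusgraph.core.schema.JanusGraphManagement;
import org.janusgraph.core.schema.SchemaAction;
import org.janusgraph.core.schema.SchemaStatus;
import org.janusgraph.graphdb.database.management.ManagementSystem;
// Wait until the new index is at least REGISTERED on all instances
ManagementSystem.awaitGraphIndexStatus(graph, "byID")
        .status(SchemaStatus.REGISTERED)
        .call();
// Reindex any existing data, which also enables the index when it finishes
JanusGraphManagement updateMt = graph.openManagement();
updateMt.updateIndex(updateMt.getGraphIndex("byID"), SchemaAction.REINDEX).get();
updateMt.commit();
// Once the status is ENABLED, has("myID", "100") should hit the index instead of scanning
ManagementSystem.awaitGraphIndexStatus(graph, "byID")
        .status(SchemaStatus.ENABLED)
        .call();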
I have an instance of Elasticsearch running with thousands of documents. My index has 2 fields like this:
| type    | date_added              |
| walking | 2018-11-27T00:00:00.000 |
| walking | 2018-11-26T00:00:00.000 |
| running | 2018-11-24T00:00:00.000 |
| running | 2018-11-25T00:00:00.000 |
| walking | 2018-11-27T04:00:00.000 |
I want to group by and count how many matches were found for the "type" field, in a certain range.
In SQL I would do something like this:
select type,
count(type)
from index
where date_added between '2018-11-20' and '2018-11-30'
group by type
I want to get something like this:
| type | count |
| running | 2 |
| walking | 3 |
I'm using the High Level REST Client API in my project. So far my query looks like this; it only filters by the start and end time:
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
sourceBuilder.query(QueryBuilders.boolQuery()
        .must(QueryBuilders.rangeQuery("date_added")
                .from(start.getTime())
                .to(end.getTime())));
How can I do a "group by" on the "type" field? Is it possible to do this in Elasticsearch?
That's a good start! Now you need to add a terms aggregation to your query:
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
sourceBuilder.query(QueryBuilders.boolQuery()
        .must(QueryBuilders.rangeQuery("date_added")
                .from(start.getTime())
                .to(end.getTime())));
// add these two lines
TermsAggregationBuilder groupBy = AggregationBuilders.terms("byType").field("type.keyword");
sourceBuilder.aggregation(groupBy);
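To run it and get a response you can read the aggregation from, execute the request with the high-level client; a sketch assuming an existing RestHighLevelClient named client and a hypothetical index name "index":
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
// Execute the range query plus the terms aggregation built above
SearchRequest searchRequest = new SearchRequest("index");
searchRequest.source(sourceBuilder);
SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);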
After using Val's reply to aggregate the fields, I wanted to print the aggregations of my query together with their values. Here's what I did:
Terms terms = searchResponse.getAggregations().get("byType");
for (Terms.Bucket bucket : terms.getBuckets()) {
    System.out.println("Type: " + bucket.getKeyAsString() + " = Count(" + bucket.getDocCount() + ")");
}
This is the output after running the query in an index with 2700 documents with a field called "type" and 2 different types:
Type: walking = Count(900)
Type: running = Count(1800)
The following is the list of the different kinds of books that customers read in a library. The values are stored as powers of 2 in a column called bookType.
I need to fetch the list of persons by the combinations of books they read:
only Novel, or only Fairytale, or only BedTime, or both Novel + Fairytale,
from the database using a logical (bitwise) operation in the query.
Fetch the list for the following combinations:
person who reads only novel(Stored in DB as 1)
person who reads both novel and fairy tale(Stored in DB as 1+2 = 3)
person who reads all the three i.e {novel + fairy tale + bed time} (stored in DB as 1+2+4 = 7)
The count of these are stored in the database in a column called BookType(marked with red in fig.)
How can I fetch the above list using a MySQL query?
From the example, I need to fetch users like novel readers (1,3,5,7).
The heart of this question is the conversion of decimal to binary, and MySQL has a function to do just that: CONV(num, from_base, to_base).
In this case from_base would be 10 and to_base would be 2.
I would wrap this in a UDF.
So given
MariaDB [sandbox]> select id,username
-> from users
-> where id < 8;
+----+----------+
| id | username |
+----+----------+
| 1 | John |
| 2 | Jane |
| 3 | Ali |
| 6 | Bruce |
| 7 | Martha |
+----+----------+
5 rows in set (0.00 sec)
MariaDB [sandbox]> select * from t;
+------+------------+
| id | type |
+------+------------+
| 1 | novel |
| 2 | fairy Tale |
| 3 | bedtime |
+------+------------+
3 rows in set (0.00 sec)
This UDF
drop function if exists book_type;
delimiter //
CREATE DEFINER=`root`@`localhost` FUNCTION `book_type`(
    `indec` int
)
RETURNS varchar(255) CHARSET latin1
LANGUAGE SQL
NOT DETERMINISTIC
CONTAINS SQL
SQL SECURITY DEFINER
COMMENT ''
begin
    declare tempstring varchar(100);
    declare outstring varchar(100);
    declare book_types varchar(100);
    declare bin_position int;
    declare str_length int;
    declare checkit int;
    -- binary form of the input, reversed so position 1 is the least significant bit
    set tempstring = reverse(lpad(conv(indec,10,2),4,0));
    set str_length = length(tempstring);
    set checkit = 0;
    set bin_position = 0;
    set book_types = '';
    -- walk the bits; each set bit maps to the id of a book type in table t
    looper: while bin_position < str_length do
        set bin_position = bin_position + 1;
        set outstring = substr(tempstring,bin_position,1);
        if outstring = 1 then
            set book_types = concat(book_types,(select trim(type) from t where id = bin_position),',');
        end if;
    end while;
    set outstring = book_types;
    return outstring;
end //
delimiter ;
Results in
+----+----------+---------------------------+
| id | username | book_type(id) |
+----+----------+---------------------------+
| 1 | John | novel, |
| 2 | Jane | fairy Tale, |
| 3 | Ali | novel,fairy Tale, |
| 6 | Bruce | fairy Tale,bedtime, |
| 7 | Martha | novel,fairy Tale,bedtime, |
+----+----------+---------------------------+
5 rows in set (0.00 sec)
Note the loop in the UDF that walks through the binary string; the positions of the 1s correspond to the ids in the lookup table.
I leave it to you to code for errors and tidy up.
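As a side note on why the novel readers are ids 1, 3, 5 and 7: those are exactly the bookType values with the Novel bit (1) set. Purely for illustration, the same bitwise test expressed in Java, with hypothetical constants that are not part of the original schema:
// Hypothetical flag values mirroring the bookType column
static final int NOVEL = 1;      // 2^0
static final int FAIRY_TALE = 2; // 2^1
static final int BEDTIME = 4;    // 2^2
// True for bookType values 1, 3, 5, 7 (any value with the Novel bit set)
static boolean readsNovel(int bookType) {
    return (bookType & NOVEL) != 0;
}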
I'm using modelmapper-jooq to map jOOQ records to custom pojos. Let's assume I have table like
   | name | second_name | surname
   ------------------------------
 1 | Mary | Jane        | McLeod
 2 | John | Henry       | Newman
 3 | Paul |             | Signac
 4 | Anna |             | Pavlova
so the second_name can be null. My Person POJO looks like:
public class Person {
private String name;
private String secondName;
private String surname;
// assume getters and setters
}
When I map Result<Record> into Collection<Person>, every element in this collection has secondName equal to null. When I map only the first two rows, everything is OK. How can I handle this properly, so that the secondName field is null only when the corresponding column in the database is null? I've checked that the fields in the Record instances have the proper values. I configure ModelMapper in this way:
ModelMapper modelMapper = new ModelMapper();
modelMapper.getConfiguration().addValueReader(new RecordValueReader());
modelMapper.getConfiguration().setSourceNameTokenizer(NameTokenizers.UNDERSCORE);
Also, I'm doing the mapping like this:
//...
private final Type collectionPersonType = new TypeToken<Collection<Person>>() {}.getType();
//...
Result<Record> result = query.fetch();
return modelMapper.map(result, collectionPersonType);
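This doesn't explain the ModelMapper behaviour itself, but as a cross-check it can help to map the same Result<Record> with jOOQ's built-in record mapper and compare the output; a sketch assuming the Person class above:
import java.util.List;
import org.jooq.Record;
import org.jooq.Result;
// jOOQ's DefaultRecordMapper matches SECOND_NAME to secondName by name,
// which gives a baseline to compare the ModelMapper result against
Result<Record> result = query.fetch();
List<Person> people = result.into(Person.class);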