I'm using Hibernate Validator (JSR 303) and I'm trying to tame the Eclipse formatter to put nested annotations on separate lines. Example:
@DefinedParametersMatchesResultPresence.List( {
    @DefinedParametersMatchesResultPresence( measurement = Measurement.penetrationLength ),
    @DefinedParametersMatchesResultPresence( measurement = Measurement.coneResistance ),
    @DefinedParametersMatchesResultPresence( measurement = Measurement.depth ),
    @DefinedParametersMatchesResultPresence( measurement = Measurement.electricalConductivity ),
} )
However, I can't get the annotations inside the DefinedParametersMatchesResultPresence.List onto new lines when running the formatter. Additionally, the formatter does not respect my maximum line length and wrap to a new line.
I'm using:
Version: Neon.2 Release (4.6.2)
Build id: 20161208-0600
Although the formatting concerns annotations, the array formatting settings also apply here. After setting the array formatting correctly, it works as expected.
Next: it's also possible to use the @Repeatable annotation when defining your own annotations, which leads to a nicer syntax without the explicit .List wrapper; see the sketch below.
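A minimal sketch of that approach, reusing the annotation and enum names from the example above (the JSR 303 constraint/validator wiring is omitted):

import java.lang.annotation.ElementType;
import java.lang.annotation.Repeatable;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

enum Measurement { penetrationLength, coneResistance, depth, electricalConductivity }

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Repeatable(DefinedParametersMatchesResultPresence.List.class)
@interface DefinedParametersMatchesResultPresence {
    Measurement measurement();

    @Target(ElementType.TYPE)
    @Retention(RetentionPolicy.RUNTIME)
    @interface List {
        DefinedParametersMatchesResultPresence[] value();
    }
}

// With @Repeatable, the explicit .List wrapper disappears at the use site:
@DefinedParametersMatchesResultPresence(measurement = Measurement.penetrationLength)
@DefinedParametersMatchesResultPresence(measurement = Measurement.coneResistance)
class SomeConstrainedType { }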
I am trying to do a windowed aggregation query on a data stream that contains over 40 attributes in Flink. The stream's schema contains an epoch timestamp which I want to use for the WatermarkStrategy so I can actually define tumbling windows over it.
I know from the docs that you can define a timestamp using the SQL API in a CREATE TABLE query by first applying TO_TIMESTAMP_LTZ to the epochs to convert them to a proper timestamp, which can then be used in the following WATERMARK FOR statement. Since I have a really huge schema though, I want to deserialise and provide the schema NOT by writing out the complete CREATE TABLE statement containing all columns, BUT by using a custom class derived from the proto file that contains the schema. As far as I know, this is only possible by providing a deserializer for the KafkaSourceBuilder and calling the returns function of the stream with the class generated from the proto file by protoc. This means that I have to define the table using the Stream API; a sketch of that wiring is shown below.
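A rough sketch of the source wiring (the broker address, topic name and starting offsets are placeholder assumptions; BiddingEvent.BidEvent is the protoc-generated class used below):

import java.io.IOException;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.AbstractDeserializationSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;

// Sketch: a KafkaSource that deserializes records straight into the
// protoc-generated class, so no CREATE TABLE column list is needed.
KafkaSource<BiddingEvent.BidEvent> source = KafkaSource.<BiddingEvent.BidEvent>builder()
        .setBootstrapServers("localhost:9092")              // assumption
        .setTopics("bidevents")                             // assumption
        .setStartingOffsets(OffsetsInitializer.earliest())  // assumption
        .setValueOnlyDeserializer(new AbstractDeserializationSchema<BiddingEvent.BidEvent>() {
            @Override
            public BiddingEvent.BidEvent deserialize(byte[] message) throws IOException {
                return BiddingEvent.BidEvent.parseFrom(message); // protobuf's generated parser
            }
        })
        .build();

DataStream<BiddingEvent.BidEvent> stream =
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "bid-events");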
Inspired by the answer to this question, I do it like this:
WatermarkStrategy watermarkStrategy = WatermarkStrategy
.<Row>forBoundedOutOfOrderness(Duration.ofSeconds(10))
.withTimestampAssigner( (event, ts) -> (Long) event.getField("ts"));
tableEnv.createTemporaryView(
"bidevents",
stream
.returns(BiddingEvent.BidEvent.class)
.map(e -> Row.of(
e.getTracking().getCampaign().getId(),
e.getTracking().getAuction().getId(),
Timestamp.from(Instant.ofEpochSecond(e.getTimestamp().getMilliseconds() / 1000))
)
)
.returns(Types.ROW_NAMED(new String[] {"campaign_id", "auction_id", "ts"}, Types.STRING, Types.STRING, Types.SQL_TIMESTAMP))
.assignTimestampsAndWatermarks(watermarkStrategy)
);
tableEnv.executeSql("DESCRIBE bidevents").print();
Table resultTable = tableEnv.sqlQuery("" +
"SELECT " +
" TUMBLE_START(ts, INTERVAL '1' DAY) AS window_start, " +
" TUMBLE_END(ts, INTERVAL '1' DAY) AS window_end, " +
" campaign_id, " +
" count(distinct auction_id) auctions " +
"FROM bidevents " +
"GROUP BY TUMBLE(ts, INTERVAL '1' DAY), campaign_id");
DataStream<Row> resultStream = tableEnv.toDataStream(resultTable);
resultStream.print();
env.execute();
I get this error:
Caused by: org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Window aggregate can only be defined over a time attribute column, but TIMESTAMP(9) encountered.
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372) ~[flink-dist-1.15.1.jar:1.15.1]
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222) ~[flink-dist-1.15.1.jar:1.15.1]
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114) ~[flink-dist-1.15.1.jar:1.15.1]
at org.apache.flink.client.deployment.application.ApplicationDispatcherBootstrap.runApplicationEntryPoint(ApplicationDispatcherBootstrap.java:291) ~[flink-dist-1.15.1.jar:1.15.1]
This seems kind of logical, since in line 3 I cast a java.sql.Timestamp to a Long value, which it is not (though the stack trace does not indicate that the error occurred during the cast). But when I do not convert the epoch (in Long format) to a Timestamp during the map statement, I get this exception:
"Cannot apply '$TUMBLE' to arguments of type '$TUMBLE(<BIGINT>, <INTERVAL DAY>)'"
How can I assign the watermark AFTER the map-statement and use the column in the later SQL Query to create a tumbling window?
====== UPDATE ======
Thanks to a comment from David, I understand that I need the column to be of type TIMESTAMP(p) with precision p <= 3. To my understanding this means that my timestamp may not be more precise than full milliseconds. So I tried different ways to create Java timestamps (java.sql.Timestamp and java.time.LocalDateTime) which correspond to the Flink timestamps.
Some examples are:
1. Trying to convert epochs into a LocalDateTime by setting the nanoseconds (the 2nd parameter of ofEpochSecond) to 0:
LocalDateTime.ofEpochSecond(e.getTimestamp().getMilliseconds() / 1000, 0, ZoneOffset.UTC )
2. After reading the answer from Svend in this question, who uses LocalDateTime.parse on timestamps that look like "2021-11-16T08:19:30.123", I tried this:
LocalDateTime.parse(
DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss").format(
LocalDateTime.ofInstant(
Instant.ofEpochSecond(e.getTimestamp().getMilliseconds() / 1000),
ZoneId.systemDefault()
)
)
)
As you can see, the timestamps here even have only seconds granularity (which I checked by looking at the printed output of the stream I created), which I assume should mean they have a precision of 0. But when using this stream to define a table/view, it once again has the type TIMESTAMP(9).
3. I also tried it with SQL timestamps:
new Timestamp(e.getTimestamp().getMilliseconds())
This also did not change anything. I somehow always end up with a precision of 9.
Can somebody please tell me how I can fix this?
OK, I found the solution to the problem. If you have a stream containing a timestamp that you want to define as the event-time column for watermarks, you can use this function:
Table inputTable = tableEnv.fromDataStream(
stream,
Schema.newBuilder()
.column("campaign_id", "STRING")
.column("auction_id", "STRING")
.column("ts", "TIMESTAMP(3)")
.watermark("ts", "SOURCE_WATERMARK()")
.build()
);
The important part is that you can "cast" the timestamp ts from TIMESTAMP(9) "down" to TIMESTAMP(3) (or any other precision below 4), and that you can set the column to carry the watermark.
Another point that seems important to me: only timestamps of type java.time.LocalDateTime actually worked for later use as watermarks for tumbling windows.
Any other attempts to influence the precision of the timestamps by creating java.sql.Timestamp or java.time.LocalDateTime differently failed. This seemed to be the only viable way. A sketch of the corresponding map step follows below.
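Here's what that map step can look like, so that the ts column arrives as a java.time.LocalDateTime (names taken from the snippets above; the UTC offset is an assumption):

// Map the protobuf event to a Row whose "ts" field is a java.time.LocalDateTime
// with millisecond precision.
stream
    .map(e -> Row.of(
        e.getTracking().getCampaign().getId(),
        e.getTracking().getAuction().getId(),
        LocalDateTime.ofInstant(
            Instant.ofEpochMilli(e.getTimestamp().getMilliseconds()),
            ZoneOffset.UTC)                      // time zone is an assumption
    ))
    .returns(Types.ROW_NAMED(
        new String[] {"campaign_id", "auction_id", "ts"},
        Types.STRING, Types.STRING, Types.LOCAL_DATE_TIME));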
I have an ontology:
<owl:ObjectProperty rdf:about="http://purl.obolibrary.org/obo/BFO_0000050">
<owl:inverseOf rdf:resource="http://purl.obolibrary.org/obo/BFO_0000051"/>
<rdf:type rdf:resource="http://www.w3.org/2002/07/owl#TransitiveProperty"/>
<oboInOwl:hasDbXref rdf:datatype="http://www.w3.org/2001/XMLSchema#string">BFO:0000050</oboInOwl:hasDbXref>
<oboInOwl:hasOBONamespace rdf:datatype="http://www.w3.org/2001/XMLSchema#string">external</oboInOwl:hasOBONamespace>
<oboInOwl:id rdf:datatype="http://www.w3.org/2001/XMLSchema#string">part_of</oboInOwl:id>
<oboInOwl:shorthand rdf:datatype="http://www.w3.org/2001/XMLSchema#string">part_of</oboInOwl:shorthand>
<rdfs:label rdf:datatype="http://www.w3.org/2001/XMLSchema#string">part of</rdfs:label>
</owl:ObjectProperty>
I'm trying to extract all the ObjectProperties:
for (OWLObjectProperty obp : ont.getObjectPropertiesInSignature()){
System.out.println(obp.toString());
}
This will print the name of the ObjectProperty, e.g. http://purl.obolibrary.org/obo/BFO_0000050.
I wonder how to get the rdfs:label, e.g. "part of".
The rdfs:label in OWL is an annotation.
To get the label you must query for the annotation of the objectProperty you want.
To display all annotations of an ontology you can do something like this:
final OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File(my_file));
final List<OWLAnnotation> annotations = ontology.objectPropertiesInSignature()//
.filter(objectProperty -> objectProperty.equals(the_object_property_I_want))//
.flatMap(objectProperty -> ontology.annotationAssertionAxioms(objectProperty.getIRI()))//
.map(OWLAnnotationAssertionAxiom::getAnnotation)//
.collect(Collectors.toList());
for (final OWLAnnotation annotation : annotations)
System.out.println(annotation.getProperty() + "\t" + annotation.getValue());
getObjectPropertiesInSignature() is deprecated in the modern (more than a year old) version of the OWL API (version 5). So please consider using the Java 8 stream version, objectPropertiesInSignature(). Java 9 was released a few days ago, so it is a good time to learn the stream functionality.
NB: annotations are mostly free-form, but OWL 2 has standardised some of them, so there are annotations with 'predefined semantics'.
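Building on the snippet above, here is a sketch that keeps only the rdfs:label values (OWLAPI 5 assumed, where asLiteral() returns a java.util.Optional):

final List<String> labels = ontology.annotationAssertionAxioms(objectProperty.getIRI())
        .map(OWLAnnotationAssertionAxiom::getAnnotation)
        .filter(annotation -> annotation.getProperty().isLabel()) // rdfs:label only
        .map(annotation -> annotation.getValue().asLiteral())     // Optional<OWLLiteral>
        .filter(Optional::isPresent)
        .map(optional -> optional.get().getLiteral())              // e.g. "part of"
        .collect(Collectors.toList());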
I'm using ElasticSearch 2.4.2 (via Hibernate Search 5.7.1.Final from Java).
I have a problem with string sorting.
The language of my application has diacritics, which have a specific alphabetic
ordering. For example Ł goes directly after L, Ó goes after O, etc.
So you are supposed to sort the strings like this:
Dla
Dła
Doa
Dóa
Dza
Eza
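(For reference, this is exactly the order a locale-aware collator produces. A minimal plain-Java sketch, independent of ElasticSearch:)

import java.text.Collator;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public class PolishSortDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("Dza", "Dła", "Eza", "Dóa", "Dla", "Doa");
        // The Polish Collator applies the national collation rules (Ł after L, Ó after O, ...)
        words.sort(Collator.getInstance(new Locale("pl", "PL")));
        System.out.println(words); // [Dla, Dła, Doa, Dóa, Dza, Eza]
    }
}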
ElasticSearch sorts by the typical letters first, and moves all the
letters with diacritics to the end:
Dla
Doa
Dza
Dła
Dóa
Eza
Can I add a custom letter ordering for ElasticSearch?
Maybe there are some plugins for this?
Do I need to write my own plugin? How do I start?
I found a plugin for the Polish language for ElasticSearch,
but as I understand it, it is for analysis, and analysis is not a solution
in my case, because it will ignore diacritics and leave words with L and Ł mixed:
Dla
Dłb
Dlc
This would sometimes be acceptable, but it is not acceptable in my specific use case.
I will be grateful for any remarks on this.
I've never used it, but there is a plugin that could fit your needs: the ICU collation plugin.
You will have to use the icu_collation token filter, which turns the tokens into collation keys. For that reason you will need to use a separate @Field (e.g. myField_sort) in Hibernate Search.
You can assign a specific analyzer to your field with @Field(name = "myField_sort", analyzer = @Analyzer(definition = "myCollationAnalyzer")), and define this analyzer (type, parameters) with something like this on one of your entities:
@Entity
@Indexed
@AnalyzerDef(
    name = "myCollationAnalyzer",
    filters = {
        @TokenFilterDef(
            name = "polish_collation",
            factory = ElasticsearchTokenFilterFactory.class,
            params = {
                @Parameter(name = "type", value = "'icu_collation'"),
                @Parameter(name = "language", value = "'pl'")
            }
        )
    }
)
public class MyEntity {
See the documentation for more information: https://docs.jboss.org/hibernate/stable/search/reference/en-US/html_single/#_custom_analyzers
It's admittedly a bit clumsy right now, but analyzer configuration will get a bit cleaner in the next Hibernate Search version with normalizers and analyzer definition providers.
Note: as usual, your field will need to be declared as sortable (@SortableField(forField = "myField_sort")).
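Putting it together on the entity field, a minimal sketch (the field name is just an example):

@Field(name = "myField_sort", analyzer = @Analyzer(definition = "myCollationAnalyzer"))
@SortableField(forField = "myField_sort")
private String myField;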
I'm processing Excel files with ExcelExporter, based on StringTemplate4 (ST).
The files contain several columns with dates.
By default, the dates are rendered following the "MM/dd/yy" date format.
Is there a way to render the dates as "dd/MM/yyyy"?
I've tried it in several ways:
I've tried defining it via the command line, without success.
Defining LC_ALL=fr_FR doesn't work.
Defining LC_TIME="dd/MM/yyyy" doesn't work.
See Setting java locale settings
Calling java with the following command line options doesn't work.
java -Duser.language=fr -Duser.country=FR -Duser.variant=UTF-8 ...
I've tried the following templates without success:
renderRow(row) ::= <<
<row.MyDate; format="dd/MM/yyyy">
>>
Although attribute MyDate is defined as a Date type, the above doesn't work. I don't want to define MyDate as a Date type in Java myself, as proposed in Format date in String Template email.
NB: After checking, I found out that ExcelExporter/ST defines attribute MyDate as a Date type!
The following template doesn't work either:
renderRow(row; format="dd/MM/yyyy") ::= <<
<row.MyDate>
>>
You need to add a renderer to your STGroup for each class you want to format:
STGroupDir dir = new STGroupDir(templateDirectory, '$', '$');
dir.registerRenderer(Number.class, new NumberRenderer());
dir.registerRenderer(Date.class, new DateRenderer());
Now, in my templates, I can use
<row.MyDate; format="dd/MM/yyyy"> (the format string is used with java.text.SimpleDateFormat)
or
<row.MyNumber; format="%,d"> (the format string is used with java.util.Formatter)
If you need a custom formatter, take a look at the DateRenderer, it would be pretty straightforward to create your own.
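For instance, here is a sketch of a made-up renderer that falls back to dd/MM/yyyy when the template gives no format string (ST 4's non-generic AttributeRenderer interface is assumed):

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import org.stringtemplate.v4.AttributeRenderer;

// Like the built-in DateRenderer, but defaults to dd/MM/yyyy instead of the
// locale's default format when no format string is given in the template.
public class DefaultingDateRenderer implements AttributeRenderer {
    @Override
    public String toString(Object value, String formatString, Locale locale) {
        String pattern = (formatString == null) ? "dd/MM/yyyy" : formatString;
        return new SimpleDateFormat(pattern, locale).format((Date) value);
    }
}

Register it the same way as the built-in renderers: dir.registerRenderer(Date.class, new DefaultingDateRenderer());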
Here's the documentation:
https://github.com/antlr/stringtemplate4/blob/master/doc/renderers.md
Our project contains many statements in the method chaining fluent style:
int totalCount = ((Number) em
    .createQuery("select count(up) from UserPermission up where " +
            "up.database.id = :dbId and " +
            "up.user.id <> :currentUserId ")
    .setParameter("dbId", cmd.getDatabaseId())
    .setParameter("currentUserId", currentUser.getId())
    .getSingleResult())
    .intValue();
I've got checkstyle mostly configured to match our existing code style, but now it's failing on these snippets, preferring instead:
int totalCount = ((Number) em
        .createQuery("select count(up) from UserPermission up where " +
                "up.database.id = :dbId and " +
                "up.user.id <> :currentUserId ")
                .setParameter("dbId", cmd.getDatabaseId())
                .setParameter("currentUserId", currentUser.getId())
                .getSingleResult())
                .intValue();
Which is totally inappropriate. Is there any way to configure Checkstyle to accept the method chaining style? Is there an alternate tool I can run from Maven to enforce this kind of indentation?
I never made this work in Eclipse, so we barely use Format Source. In the end it is often best to extend the formatter yourself; we tried hard and failed, about a year and a half ago. These days we only format selected lines in Eclipse, or use formatting as a rough pre-pass before we format by hand.
Usually the formatting done by an engineer carries a certain meaning, so automatic formatting will never fully work. Especially if you do something like:
public static void myMethod(
        int value, String value2, String value3)
If you auto-format this, it fails in a way similar to your example.
So feel free to join the club of not using automatic formatting, except as a step before you format it the human way.
With IntelliJ, this can be done by selecting "Align when multiline" for "Chained method calls", so I guess this property is misconfigured in your configuration.