I've put a HashMap<String, Set<Long>> object into a MongoDB document under "disabled_channels" but I can't figure out how to retrieve it and turn it back into a HashMap<String, Set<Long>> object in local memory. I'm usually very good at reading in lists, individual values, etc., with something like found.getList("disabled_commands", String.class), but I'm really lost on how to approach this.
MongoCollection<Document> collection = bot.getDataManager().getConfig();
Document config = new Document("guild", guild.getIdLong());
Document found = collection.find(config).first();
// I get lost here
Document itself is a Map implementation internally. Reference
You need to use the get method on the found document and cast the result to Document, as below:
Document channels = (Document) found.get("disabled_channels");
Then you can access elements in channels using the same get method, casting as needed.
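For example, a minimal sketch of rebuilding the original map from that sub-document, assuming each set was stored as an array of longs (which is how the driver typically writes a Set<Long>):
Document channels = (Document) found.get("disabled_channels");
Map<String, Set<Long>> disabledChannels = new HashMap<>();
for (String key : channels.keySet()) {
    // each value comes back as a BSON array, i.e. a List<Long>
    disabledChannels.put(key, new HashSet<>(channels.getList(key, Long.class)));
}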
Elasticsearch Java High Level REST Client's GET API provides a way to control which fields of the _source are fetched.
val request = GetRequest(index)
.id(id)
.fetchSourceContext(FetchSourceContext(true, includedFields, excludedFields))
elasticClient.get(request, RequestOptions.DEFAULT)
How can I achieve this with the Search APIs?
For example for the following search request:
val source = SearchSourceBuilder()
source.query(QueryBuilders.matchAllQuery())
val request = SearchRequest(index)
.source(source)
elasticClient.search(request, RequestOptions.DEFAULT)
Please refer to this from the official ES docs:
This method also accepts an array of one or more wildcard patterns to control which fields get included or excluded in a more fine-grained way:
String[] includeFields = new String[] {"title", "innerObject.*"};
String[] excludeFields = new String[] {"user"};
sourceBuilder.fetchSource(includeFields, excludeFields);
Similar to the GET API you already mentioned, you can provide arrays of
includeFields and excludeFields to control which fields are fetched from _source.
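Applied to the search request from the question, that looks roughly like this (shown in Java to match the snippet above; index and elasticClient are the same names as in your code):
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
sourceBuilder.query(QueryBuilders.matchAllQuery());
// fetch only these parts of _source, drop the rest
sourceBuilder.fetchSource(new String[] {"title", "innerObject.*"}, new String[] {"user"});

SearchRequest searchRequest = new SearchRequest(index).source(sourceBuilder);
SearchResponse response = elasticClient.search(searchRequest, RequestOptions.DEFAULT);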
I am using the Elasticsearch REST high level client in my Java application. The documentation can be found here.
In my application, at startup I delete the index named "posts" where the Elasticsearch data is stored, and then create the index "posts" again, following this link:
CreateIndexRequest request = new CreateIndexRequest("posts");
But inside the index I need to create one type named "doc", which is not mentioned on the website.
A temporary fix is that when I post some data following this link, the type gets created:
Map<String, Object> jsonMap = new HashMap<>();
jsonMap.put("user", "kimchy");
jsonMap.put("postDate", new Date());
jsonMap.put("message", "trying out Elasticsearch");
IndexRequest indexRequest = new IndexRequest("posts", "doc", "1")
.source(jsonMap);
But in this process the type "doc" only gets created when I post something. If I don't post anything and try to hit a controller that reads data from index "posts" and type "doc", it gives an error saying the "doc" type is not there.
Does anyone have any idea how to create a type using the Elasticsearch REST high level client in Java?
By type you mean document type?
What about the second section Index Mappings in the link you provided?
Does this not work for you?
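For reference, a minimal sketch of what that Index Mappings section describes, assuming the 6.x flavour of CreateIndexRequest (in 7.x the type goes away and the mapping is passed without a type name):
CreateIndexRequest request = new CreateIndexRequest("posts");
// define the "doc" type together with the index instead of waiting for the first document
request.mapping("doc",
    "{ \"properties\": { \"message\": { \"type\": \"text\" } } }",
    XContentType.JSON);
client.indices().create(request, RequestOptions.DEFAULT);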
I needed to set the type to "_doc" to make it work with ES 7.6.
If you know how to insert documents through the API, then this way makes it much easier for you to do anything with similar API calls (DELETE, POST, PUT, ...).
First, you will need a RestHighLevelClient, and all you have to do is:
String index = "/indexName/_doc"; // your path or type here
Request request = new Request("POST", index); // your HTTP method
request.setJsonEntity(
    "{ \"message\": \" example add insert\" }" // your request body
);
client.getLowLevelClient().performRequest(request);
This will execute just like a direct call to the REST API.
I'm a noob to Kafka and Avro, so I have been trying to get the Producer/Consumer running. So far I have been able to produce and consume simple bytes and strings, using the following:
Configuration for the Producer:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
Schema.Parser parser = new Schema.Parser();
Schema schema = parser.parse(USER_SCHEMA);
Injection<GenericRecord, byte[]> recordInjection = GenericAvroCodecs.toBinary(schema);
KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props);
for (int i = 0; i < 1000; i++) {
GenericData.Record avroRecord = new GenericData.Record(schema);
avroRecord.put("str1", "Str 1-" + i);
avroRecord.put("str2", "Str 2-" + i);
avroRecord.put("int1", i);
byte[] bytes = recordInjection.apply(avroRecord);
ProducerRecord<String, byte[]> record = new ProducerRecord<>("mytopic", bytes);
producer.send(record);
Thread.sleep(250);
}
producer.close();
Now this is all well and good; the problem comes when I'm trying to serialize a POJO.
I was able to get the Avro schema from the POJO using the utility provided with Avro.
I hardcoded the schema, and then tried to create a GenericRecord to send through the KafkaProducer.
The producer is now set up as:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.KafkaAvroSerializer");
Schema.Parser parser = new Schema.Parser();
Schema schema = parser.parse(USER_SCHEMA); // this is the Generated AvroSchema
KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props);
This is where the problem is: the moment I use KafkaAvroSerializer, the producer doesn't come up due to:
missing mandatory parameter : schema.registry.url
I read up on why this is required, so that my consumer is able to decipher whatever the producer is sending to me.
But isn't the schema already embedded in the AvroMessage?
It would be really great if someone could share a working example of using KafkaProducer with KafkaAvroSerializer without having to specify schema.registry.url.
I would also really appreciate any insights/resources on the utility of the schema registry.
thanks!
Note first: KafkaAvroSerializer is not provided in vanilla Apache Kafka; it is provided by the Confluent Platform (https://www.confluent.io/), as part of its open source components (http://docs.confluent.io/current/platform.html#confluent-schema-registry).
Quick answer: no, if you use KafkaAvroSerializer, you will need a schema registry. See some samples here:
http://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html
The basic idea with the schema registry is that each topic will refer to an Avro schema (i.e., you will only be able to send data coherent with each other; but a schema can have multiple versions, so you still need to identify the schema for each record).
We don't want to write the schema with every piece of data as you imply; often, the schema is bigger than your data! That would be a waste of time parsing it on every read, and a waste of resources (network, disk, CPU).
Instead, a schema registry instance maintains a binding avro schema <-> int schemaId, and the serializer writes only this id before the data, after getting it from the registry (and caching it for later use).
So inside Kafka, your record will be [<id> <avro bytes>] (plus a magic byte for technical reasons), which is an overhead of only 5 bytes (compare that to the size of your schema).
And when reading, your consumer will look up the schema corresponding to the id and deserialize the Avro bytes accordingly. You can find much more in the Confluent docs.
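For illustration, a minimal consumer-side sketch, assuming a registry running at http://localhost:8081 (the URL is a placeholder):
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "avro-demo");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
// KafkaAvroDeserializer reads the embedded schema id and fetches the matching schema from the registry
props.put("value.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer");
props.put("schema.registry.url", "http://localhost:8081");

KafkaConsumer<String, GenericRecord> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("mytopic"));
for (ConsumerRecord<String, GenericRecord> record : consumer.poll(Duration.ofSeconds(1))) {
    System.out.println(record.value().get("str1"));
}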
If you really have a use case where you want to write the schema with every record, you will need another serializer (I think you'd have to write your own, but that would be easy: just reuse https://github.com/confluentinc/schema-registry/blob/master/avro-serializer/src/main/java/io/confluent/kafka/serializers/AbstractKafkaAvroSerializer.java and remove the schema registry part, replacing it with the schema, and do the same for reading). But if you use Avro, I would really discourage this; sooner or later, you will need to implement something like a schema registry to manage versioning.
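A minimal sketch of that idea, embedding the full schema with every record via Avro's file container format (the class name is made up, not part of any library):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Map;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.common.serialization.Serializer;

// Hypothetical serializer that writes the schema alongside every record, so no registry is needed
public class SchemaEmbeddingAvroSerializer implements Serializer<GenericRecord> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) { }

    @Override
    public byte[] serialize(String topic, GenericRecord record) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            DataFileWriter<GenericRecord> writer =
                    new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(record.getSchema()));
            // the file container format stores the writer schema in its header
            writer.create(record.getSchema(), out);
            writer.append(record);
            writer.close();
            return out.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException("Avro serialization failed", e);
        }
    }

    @Override
    public void close() { }
}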
While the accepted answer is all correct, it should also be mentioned that automatic schema registration can be disabled.
Simply set auto.register.schemas to false.
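For example, a sketch of the producer properties with auto-registration turned off (note that schema.registry.url is still required so the serializer can look up the already-registered schema; the URL is a placeholder):
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
props.put("schema.registry.url", "http://localhost:8081");
// only use schemas that are already registered, never register new ones from this producer
props.put("auto.register.schemas", false);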
You can create your own custom Avro serializer; then, even without a Schema Registry, you would be able to produce records to topics. Check the article below:
https://codenotfound.com/spring-kafka-apache-avro-serializer-deserializer-example.html
There they use KafkaTemplate. I have tried using
KafkaProducer<String, User> UserKafkaProducer
and it is working fine.
But if you want to use KafkaAvroSerializer, you need to give a Schema Registry URL.
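Roughly, the custom serializer from that article boils down to something like this sketch (assuming User is an Avro-generated class extending SpecificRecordBase; on older kafka-clients versions you also need empty configure() and close() overrides):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.specific.SpecificDatumWriter;
import org.apache.avro.specific.SpecificRecordBase;
import org.apache.kafka.common.serialization.Serializer;

// Registry-free serializer: the consumer must already know the schema (e.g. use the same generated class)
public class AvroSerializer<T extends SpecificRecordBase> implements Serializer<T> {

    @Override
    public byte[] serialize(String topic, T data) {
        if (data == null) {
            return null;
        }
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
            new SpecificDatumWriter<T>(data.getSchema()).write(data, encoder);
            encoder.flush();
            return out.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException("Can't serialize Avro record", e);
        }
    }
}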
As others have pointed out, KafkaAvroSerializer requires the Schema Registry, which is part of the Confluent Platform, and usage requires licensing.
The main advantage of using the schema registry is that your bytes on the wire will be smaller, as opposed to writing a binary payload with the schema for every message.
I wrote a blog post detailing the advantages
You can always make your value classes implement Serializer<T>, Deserializer<T> (and Serde<T> for Kafka Streams) manually. Java classes are usually generated from Avro files, so editing them directly isn't a good idea, but wrapping them is a perhaps verbose yet workable approach.
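For instance, the wrapping can be as small as building a Serde from an existing serializer/deserializer pair with the stock helper (AvroSerializer and AvroDeserializer here are hypothetical names, like the sketch in the answer above):
// Kafka Streams Serde built from a hand-written serializer and deserializer
Serde<User> userSerde = Serdes.serdeFrom(new AvroSerializer<User>(), new AvroDeserializer<>(User.class));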
Another way is to tune the Avro generator templates that are used for Java class generation and generate implementations of all those interfaces automatically. Both the Avro Maven and Gradle plugins support custom templates, so it should be easy to configure.
I've created https://github.com/artemyarulin/avro-kafka-deserializable which has the changed template files and a simple CLI tool that you can use for file generation.
I came across this issue while implementing i18n using Spring resource bundles, for a Spring-based Java project which has an HTML+JS UI.
I need to read all content from a properties file, for a particular locale, and pass this info to the client side, so that the relevant messages can be shown on the UI.
However, the ResourceBundleMessageSource/ReloadableResourceBundleMessageSource objects seem to allow retrieving only a single message at a time.
ReloadableResourceBundleMessageSource msgSource = new ReloadableResourceBundleMessageSource();
//Methods to get only a single message at a time
String message = msgSource.getMessage("edit.delete.success", null, new Locale(localeString));
I am currently using java.util.ResourceBundle and looping over the object using its keySet:
ResourceBundle rB = ResourceBundle.getBundle("messages", new Locale(localeString));
Map<String, String> msgs = new HashMap<>();
for(String messageKey : rB.keySet()){
msgs.put(messageKey, rB.getString(messageKey));
}
Q1: Is there any better/more elegant way to solve this?
Q2: What is the reason for the authors not allowing access to all properties from a file?