Suppose I have 2 bins in an Aerospike set:
1. number (key)
2. timeLeft
I want to get the timeLeft value from Aerospike for a number. But if the particular record is not present, then I want to create the record, set a default value of 6000 for timeLeft, and then get the value, all in a single transaction.
public Record someMethod(String num) {
    WritePolicy writePolicy = aerospikeRepo.getWritePolicy(null, ttl, true);
    return aerospikeRepo.operate(writePolicy, namespace, set, num, Operation.get());
}
Personally, I think the .operate() method of the Aerospike client will be used somehow, but I did not find a relevant Operation to set the default value if the record is not present.
You can do it using Expressions. Here is sample code:
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.policy.WritePolicy;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;
import com.aerospike.client.Record;
import com.aerospike.client.Value;
import com.aerospike.client.policy.RecordExistsAction;
import com.aerospike.client.AerospikeException;
import com.aerospike.client.ResultCode;
import com.aerospike.client.Operation;
import com.aerospike.client.exp.Exp;
import com.aerospike.client.exp.ExpOperation;
import com.aerospike.client.exp.ExpWriteFlags;
import com.aerospike.client.exp.Expression;
import java.util.List;
System.out.println("Client modules imported.");
AerospikeClient client = new AerospikeClient("localhost", 3000);
WritePolicy wP = new WritePolicy();
wP.respondAllOps = true;
int iNumber = 11;
int iTimeLeft = 6000;
for (int i = 0; i < 5; i++) {
    Key key = new Key("test", "testset", iNumber);
    Expression tlExp = Exp.build(Exp.val(iTimeLeft));
    Record record = client.operate(wP, key,
            ExpOperation.write("timeLeft", tlExp, ExpWriteFlags.CREATE_ONLY | ExpWriteFlags.POLICY_NO_FAIL),
            //ExpOperation.write("timeLeft", tlExp, ExpWriteFlags.DEFAULT),
            Operation.get("timeLeft"));
    List<?> list = record.getList("timeLeft");
    System.out.println(list.get(1));
    iTimeLeft = iTimeLeft - 1000; // should not alter record value
}
This gives the following output:
Client modules imported.
6000
6000
6000
6000
6000
However, if I use DEFAULT, the bin value is modified on every call, which is what you don't want. Compare with the correct flags above: CREATE_ONLY | POLICY_NO_FAIL writes the bin only if it does not already exist, and otherwise silently goes on to the next operation.
Client modules imported.
6000
5000
4000
3000
2000
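Folded back into the method from the question, a minimal sketch could look like this (assuming your aerospikeRepo.operate wrapper forwards varargs Operations to the client's operate, and that the policy it builds has respondAllOps enabled as above):
public Record someMethod(String num) {
    WritePolicy writePolicy = aerospikeRepo.getWritePolicy(null, ttl, true);
    writePolicy.respondAllOps = true; // results come back as a list per bin
    Expression defaultTimeLeft = Exp.build(Exp.val(6000));
    // Write 6000 only if the bin does not exist yet (CREATE_ONLY), without failing
    // when it does (POLICY_NO_FAIL), then read it back -- all in one transaction.
    return aerospikeRepo.operate(writePolicy, namespace, set, num,
            ExpOperation.write("timeLeft", defaultTimeLeft,
                    ExpWriteFlags.CREATE_ONLY | ExpWriteFlags.POLICY_NO_FAIL),
            Operation.get("timeLeft"));
}
With respondAllOps = true, record.getList("timeLeft").get(1) holds the value read back.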
(note, I've resolved my problem and posted the code at the bottom)
I'm playing around with TensorFlow, and the backend processing must take place in Java. I've taken one of the models from https://developers.google.com/machine-learning/crash-course and saved it with tf.saved_model.save(my_model,"house_price_median_income") (using a Docker container). I copied the model off and loaded it into Java (using the 2.0 stuff built from source, because I'm on Windows).
I can load the model and run it:
try (SavedModelBundle model = SavedModelBundle.load("./house_price_median_income", "serve")) {
    try (Session session = model.session()) {
        Session.Runner runner = session.runner();
        float[][] in = new float[][] { { 2.1518f } };
        Tensor<?> jack = Tensor.create(in);
        runner.feed("serving_default_layer1_input", jack);
        float[][] probabilities = runner.fetch("StatefulPartitionedCall").run().get(0).copyTo(new float[1][1]);
        for (int i = 0; i < probabilities.length; ++i) {
            System.out.println(String.format("-- Input #%d", i));
            for (int j = 0; j < probabilities[i].length; ++j) {
                System.out.println(String.format("Class %d - %f", i, probabilities[i][j]));
            }
        }
    }
}
The above is hardcoded to an input and output but I want to be able to read the model and provide some information so the end-user can select the input and output, etc.
I can get the inputs and outputs with the python command: saved_model_cli show --dir ./house_price_median_income --all
What I want to do is get the inputs and outputs via Java, so my code doesn't need to execute a Python script to get them. I can get the operations via:
Graph graph = model.graph();
Iterator<Operation> itr = graph.operations();
while (itr.hasNext()) {
    GraphOperation e = (GraphOperation) itr.next();
    System.out.println(e);
}
And this outputs both the inputs and outputs as "operations", BUT how do I know which is an input and/or an output? The Python tool uses the SignatureDef, but that doesn't seem to appear in the TensorFlow 2.0 Java stuff at all. Am I missing something obvious, or is it just missing from the TensorFlow 2.0 Java library?
NOTE: I've sorted my issue with the help of the answer below. Here is my full bit of code in case somebody would like it in the future. Note this is TF 2.0 and uses the SNAPSHOT mentioned below. I make a few assumptions, but it shows how to pull the input and output and then use them to run a model.
import org.tensorflow.SavedModelBundle;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
import org.tensorflow.exceptions.TensorFlowException;
import org.tensorflow.Session.Run;
import org.tensorflow.Graph;
import org.tensorflow.Operation;
import org.tensorflow.Output;
import org.tensorflow.GraphOperation;
import org.tensorflow.proto.framework.SignatureDef;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.tensorflow.proto.framework.MetaGraphDef;
import java.util.Map;
import org.tensorflow.proto.framework.TensorInfo;
import org.tensorflow.types.TFloat32;
import org.tensorflow.tools.Shape;
import java.nio.FloatBuffer;
import org.tensorflow.tools.buffer.DataBuffers;
import org.tensorflow.tools.ndarray.FloatNdArray;
import org.tensorflow.tools.ndarray.StdArrays;
public class v2tensor {
    public static void main(String[] args) {
        try (SavedModelBundle savedModel = SavedModelBundle.load("./house_price_median_income", "serve")) {
            SignatureDef modelInfo = savedModel.metaGraphDef().getSignatureDefMap().get("serving_default");
            TensorInfo input1 = null;
            TensorInfo output1 = null;
            Map<String, TensorInfo> inputs = modelInfo.getInputsMap();
            for (Map.Entry<String, TensorInfo> input : inputs.entrySet()) {
                if (input1 == null) {
                    input1 = input.getValue();
                    System.out.println(input1.getName());
                }
                System.out.println(input);
            }
            Map<String, TensorInfo> outputs = modelInfo.getOutputsMap();
            for (Map.Entry<String, TensorInfo> output : outputs.entrySet()) {
                if (output1 == null) {
                    output1 = output.getValue();
                }
                System.out.println(output);
            }
            try (Session session = savedModel.session()) {
                Session.Runner runner = session.runner();
                FloatNdArray matrix = StdArrays.ndCopyOf(new float[][] { { 2.1518f } });
                try (Tensor<TFloat32> jack = TFloat32.tensorOf(matrix)) {
                    runner.feed(input1.getName(), jack);
                    try (Tensor<TFloat32> rezz = runner.fetch(output1.getName()).run().get(0).expect(TFloat32.DTYPE)) {
                        TFloat32 data = rezz.data();
                        data.scalars().forEachIndexed((i, s) -> {
                            System.out.println(s.getFloat());
                        });
                    }
                }
            }
        } catch (TensorFlowException ex) {
            ex.printStackTrace();
        }
    }
}
What you need to do is read the SavedModelBundle metadata as a MetaGraphDef; from there you can retrieve the input and output names from the SignatureDef, like in Python.
In TF Java 1.* (i.e. the client you are using in your example), the proto definitions are not available out-of-the-box from the tensorflow artifact, you need to add a dependency to org.tensorflow:proto as well and deserialize the result of SavedModelBundle.metaGraphDef() into a MetaGraphDef proto.
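A minimal sketch of that 1.x route (assuming the org.tensorflow:proto artifact is on the classpath; the model path is reused from the question):
import org.tensorflow.SavedModelBundle;
import org.tensorflow.framework.MetaGraphDef;
import org.tensorflow.framework.SignatureDef;

public static void printSignature() throws Exception {
    try (SavedModelBundle model = SavedModelBundle.load("./house_price_median_income", "serve")) {
        // In TF Java 1.x, metaGraphDef() returns the raw serialized proto bytes.
        MetaGraphDef metaGraph = MetaGraphDef.parseFrom(model.metaGraphDef());
        SignatureDef sig = metaGraph.getSignatureDefMap().get("serving_default");
        sig.getInputsMap().forEach((name, info) ->
                System.out.println("input:  " + name + " -> " + info.getName()));
        sig.getOutputsMap().forEach((name, info) ->
                System.out.println("output: " + name + " -> " + info.getName()));
    }
}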
In TF Java 2.* (the new client actually only available as snapshots from here), the protos are present right away so you can simply call this line to retrieve the right SignatureDef:
savedModel.metaGraphDef().getSignatureDefMap().get("serving_default")
I'm comparing 2 Excel files cell by cell, and when I find a difference I print it, for example: DIFF Cell values at: Sch HI (1 of 4)!K40 => '6.0' v/s '5.0' (cell position, old value, and new value).
So instead of the cell position I need to print the box name.
@Override
public void reportDiffCell(CellPos c1, CellPos c2) {
    sheets.add(c1.getSheetName());
    rows.add(c1.getRow());
    cols.add(c1.getColumn());
    results.add("DIFF Cell values at: " + c1.getCellPosition() + " => '" + c1.getCellValue()
            + "' v/s '" + c2.getCellValue() + "'");
}
An example of gathering the range names from a spreadsheet, so that they can be compared for a "diff" report...
For example, here is a spreadsheet with two named ranges:
Name : animals
Refers to: Sheet1!$C$3:$D$4,Sheet1!$C$5
Name : birds
Refers to: Sheet1!$B$8:$B$9
The following code populates the range names and references into a map:
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.util.List;
import java.util.Map;
import java.util.HashMap;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;
import org.apache.poi.ss.usermodel.Name;
...
public Map<String, String> compare(String fileName) {
    Map<String, String> namesMap = new HashMap<>();
    File file = new File(fileName);
    try (InputStream is = new FileInputStream(file)) {
        Workbook wb = WorkbookFactory.create(is);
        List<? extends Name> names = wb.getAllNames();
        names.forEach((name) -> {
            namesMap.put(name.getNameName(), name.getRefersToFormula());
        });
    } catch (FileNotFoundException ex) {
        // handler
    } catch (IOException ex) {
        // handler
    }
    return namesMap;
}
Now you can repeat this for each of your two Excel files, and then compare the keys and values in the two map objects (different range names; same names but different ranges of cells).
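For example, a hedged sketch of that comparison (the file names are placeholders):
Map<String, String> oldNames = compare("old.xlsx");
Map<String, String> newNames = compare("new.xlsx");
// Ranges that were removed or whose cell references changed
oldNames.forEach((name, oldRef) -> {
    String newRef = newNames.get(name);
    if (newRef == null) {
        System.out.println("DIFF range '" + name + "' removed");
    } else if (!newRef.equals(oldRef)) {
        System.out.println("DIFF range '" + name + "': '" + oldRef + "' v/s '" + newRef + "'");
    }
});
// Ranges that only exist in the new file
newNames.keySet().stream()
        .filter(name -> !oldNames.containsKey(name))
        .forEach(name -> System.out.println("DIFF range '" + name + "' added"));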
UPDATE: The above sample was written using OpenJDK 13. The following POI dependencies were used (assuming Maven):
<dependencies>
    <dependency>
        <groupId>org.apache.poi</groupId>
        <artifactId>poi</artifactId>
        <version>4.1.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.poi</groupId>
        <artifactId>poi-ooxml</artifactId>
        <version>4.1.2</version>
    </dependency>
</dependencies>
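If you also want to translate a diffed cell position back to the named range ("box name") that contains it, a hedged sketch along these lines could work with the same POI version (AreaReference.generateContiguous splits discontinuous names such as the animals range above; the fallback is the plain cell address):
import org.apache.poi.ss.usermodel.Name;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.util.AreaReference;
import org.apache.poi.ss.util.CellReference;
...
public String boxName(Workbook wb, String sheetName, int row, int col) {
    for (Name name : wb.getAllNames()) {
        for (AreaReference area : AreaReference.generateContiguous(
                wb.getSpreadsheetVersion(), name.getRefersToFormula())) {
            for (CellReference ref : area.getAllReferencedCells()) {
                if (sheetName.equals(ref.getSheetName())
                        && ref.getRow() == row && ref.getCol() == col) {
                    return name.getNameName();
                }
            }
        }
    }
    // No named range contains the cell; fall back to its plain address
    return new CellReference(sheetName, row, col, false, false).formatAsString();
}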
You could add a VBA function to the workbook and call that from Java...
Function CellName(r As Range)
    On Error Resume Next
    CellName = r.Name.Name
    If Err Then CellName = r.Address(0, 0)
End Function
I have trained custom NER and Relation Extraction models, and I have checked generating triples with the CoreNLP server. But when I use OpenIEDemo.java to generate triples, it generates only triples having the relations "has" and "have", not the relations my Relation Extraction model was trained on.
I'm loading the custom NER and Relation Extraction models while running the same script. Here is my OpenIEDemo.java file...
package edu.stanford.nlp.naturalli;
import edu.stanford.nlp.ie.util.RelationTriple;
import edu.stanford.nlp.io.IOUtils;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations;
import edu.stanford.nlp.util.CoreMap;
import edu.stanford.nlp.util.PropertiesUtils;
import java.util.Collection;
import java.util.List;
import java.util.Properties;
/**
* A demo illustrating how to call the OpenIE system programmatically.
* You can call this code with:
*
* <pre>
* java -mx1g -cp stanford-openie.jar:stanford-openie-models.jar edu.stanford.nlp.naturalli.OpenIEDemo
* </pre>
*
*/
public class OpenIEDemo {

    private OpenIEDemo() {} // static main

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, pos, lemma, depparse, natlog, openie");
        props.setProperty("ner.model", "./ner/ner-model.ser.gz");
        props.setProperty("sup.relation.model", "./relation_extractor/relation_model_pipeline.ser.ser");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        // Annotate an example document.
        String text;
        if (args.length > 0) {
            text = args[0];
        } else {
            text = "Obama was born in Hawaii. He is our president.";
        }
        Annotation doc = new Annotation(text);
        pipeline.annotate(doc);

        // Loop over sentences in the document
        int sentNo = 0;
        for (CoreMap sentence : doc.get(CoreAnnotations.SentencesAnnotation.class)) {
            System.out.println("Sentence #" + ++sentNo + ": " + sentence.get(CoreAnnotations.TextAnnotation.class));

            // Print SemanticGraph
            System.out.println(sentence.get(SemanticGraphCoreAnnotations.EnhancedDependenciesAnnotation.class).toString(SemanticGraph.OutputFormat.LIST));

            // Get the OpenIE triples for the sentence
            Collection<RelationTriple> triples = sentence.get(NaturalLogicAnnotations.RelationTriplesAnnotation.class);

            // Print the triples
            for (RelationTriple triple : triples) {
                System.out.println(triple.confidence + "\t" +
                        triple.subjectLemmaGloss() + "\t" +
                        triple.relationLemmaGloss() + "\t" +
                        triple.objectLemmaGloss());
            }

            // Alternately, to only run e.g., the clause splitter:
            List<SentenceFragment> clauses = new OpenIE(props).clausesInSentence(sentence);
            for (SentenceFragment clause : clauses) {
                System.out.println(clause.parseTree.toString(SemanticGraph.OutputFormat.LIST));
            }
            System.out.println();
        }
    }
}
Thanks in advance.
Since the OpenIE module of Stanford CoreNLP does not use a custom relation model (I don't know why), I could not use my custom Relation Extraction model with this code. Instead, I had to run the StanfordCoreNLP pipeline with the paths to my custom NER and Relation Extraction models added in the server.properties file, and generate the triples that way. If someone knows the reason why OpenIE does not use a custom Relation Extraction model, please comment; it will be very useful for others.
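For reference, the entries added to server.properties mirror the property keys already used in the demo code above (the paths are of course specific to my setup):
ner.model = ./ner/ner-model.ser.gz
sup.relation.model = ./relation_extractor/relation_model_pipeline.ser.ser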
How can we create a topic in Kafka from the IDE using the API? When I do this:
bin/kafka-create-topic.sh --topic mytopic --replica 3 --zookeeper localhost:2181
I get the error:
bash: bin/kafka-create-topic.sh: No such file or directory
And I followed the developer setup as it is.
In Kafka 0.8.1+ -- the latest version of Kafka as of today -- you can programmatically create a new topic via AdminCommand. The functionality of CreateTopicCommand (part of the older Kafka 0.8.0) that was mentioned in one of the previous answers to this question was moved to AdminCommand.
Scala example for Kafka 0.8.1:
import java.util.Properties
import kafka.admin.AdminUtils
import kafka.utils.ZKStringSerializer
import org.I0Itec.zkclient.ZkClient
// Create a ZooKeeper client
val sessionTimeoutMs = 10000
val connectionTimeoutMs = 10000
// Note: You must initialize the ZkClient with ZKStringSerializer. If you don't, then
// createTopic() will only seem to work (it will return without error). The topic will exist in
// only ZooKeeper and will be returned when listing topics, but Kafka itself does not create the
// topic.
val zkClient = new ZkClient("zookeeper1:2181", sessionTimeoutMs, connectionTimeoutMs,
ZKStringSerializer)
// Create a topic named "myTopic" with 8 partitions and a replication factor of 3
val topicName = "myTopic"
val numPartitions = 8
val replicationFactor = 3
val topicConfig = new Properties
AdminUtils.createTopic(zkClient, topicName, numPartitions, replicationFactor, topicConfig)
Build dependencies, using sbt as example:
libraryDependencies ++= Seq(
"com.101tec" % "zkclient" % "0.4",
"org.apache.kafka" % "kafka_2.10" % "0.8.1.1"
exclude("javax.jms", "jms")
exclude("com.sun.jdmk", "jmxtools")
exclude("com.sun.jmx", "jmxri"),
...
)
EDIT: Added Java example for Kafka 0.9.0.0 (latest version as of Jan 2016).
Maven dependencies:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.9.0.0</version>
</dependency>
<dependency>
    <groupId>com.101tec</groupId>
    <artifactId>zkclient</artifactId>
    <version>0.7</version>
</dependency>
Code:
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.ZkConnection;
import java.util.Properties;
import kafka.admin.AdminUtils;
import kafka.utils.ZKStringSerializer$;
import kafka.utils.ZkUtils;
public class KafkaJavaExample {
    public static void main(String[] args) {
        String zookeeperConnect = "zkserver1:2181,zkserver2:2181";
        int sessionTimeoutMs = 10 * 1000;
        int connectionTimeoutMs = 8 * 1000;
        // Note: You must initialize the ZkClient with ZKStringSerializer. If you don't, then
        // createTopic() will only seem to work (it will return without error). The topic will exist
        // in ZooKeeper only and will be returned when listing topics, but Kafka itself does not
        // create the topic.
        ZkClient zkClient = new ZkClient(
                zookeeperConnect,
                sessionTimeoutMs,
                connectionTimeoutMs,
                ZKStringSerializer$.MODULE$);
        // Security for Kafka was added in Kafka 0.9.0.0
        boolean isSecureKafkaCluster = false;
        ZkUtils zkUtils = new ZkUtils(zkClient, new ZkConnection(zookeeperConnect), isSecureKafkaCluster);
        String topic = "my-topic";
        int partitions = 2;
        int replication = 3;
        Properties topicConfig = new Properties(); // add per-topic configuration settings here
        AdminUtils.createTopic(zkUtils, topic, partitions, replication, topicConfig);
        zkClient.close();
    }
}
EDIT 2: Added Java example for Kafka 0.10.2.0 (latest version as of April 2017).
Maven dependencies:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.2.0</version>
</dependency>
<dependency>
    <groupId>com.101tec</groupId>
    <artifactId>zkclient</artifactId>
    <version>0.9</version>
</dependency>
Code:
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.ZkConnection;
import java.util.Properties;
import kafka.admin.AdminUtils;
import kafka.admin.RackAwareMode;
import kafka.utils.ZKStringSerializer$;
import kafka.utils.ZkUtils;
public class KafkaJavaExample {
    public static void main(String[] args) {
        String zookeeperConnect = "zkserver1:2181,zkserver2:2181";
        int sessionTimeoutMs = 10 * 1000;
        int connectionTimeoutMs = 8 * 1000;
        String topic = "my-topic";
        int partitions = 2;
        int replication = 3;
        Properties topicConfig = new Properties(); // add per-topic configuration settings here
        // Note: You must initialize the ZkClient with ZKStringSerializer. If you don't, then
        // createTopic() will only seem to work (it will return without error). The topic will exist
        // in ZooKeeper only and will be returned when listing topics, but Kafka itself does not
        // create the topic.
        ZkClient zkClient = new ZkClient(
                zookeeperConnect,
                sessionTimeoutMs,
                connectionTimeoutMs,
                ZKStringSerializer$.MODULE$);
        // Security for Kafka was added in Kafka 0.9.0.0
        boolean isSecureKafkaCluster = false;
        ZkUtils zkUtils = new ZkUtils(zkClient, new ZkConnection(zookeeperConnect), isSecureKafkaCluster);
        AdminUtils.createTopic(zkUtils, topic, partitions, replication, topicConfig, RackAwareMode.Enforced$.MODULE$);
        zkClient.close();
    }
}
As of 0.11.0.0 all you need is:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.11.0.0</version>
</dependency>
This artifact now contains the AdminClient (org.apache.kafka.clients.admin).
AdminClient can handle many Kafka admin tasks, including topic creation:
Properties config = new Properties();
config.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
AdminClient admin = AdminClient.create(config);
Map<String, String> configs = new HashMap<>();
int partitions = 1;
short replication = 1; // NewTopic expects a short for the replication factor
admin.createTopics(asList(new NewTopic("topic", partitions, replication).configs(configs)));
The result of createTopics() is a CreateTopicsResult, which you can use to get a Future for the whole operation or for each individual topic creation:
to get a future for the whole operation, use CreateTopicsResult#all().
to get Futures for all the topics individually, use CreateTopicsResult#values().
For example:
CreateTopicsResult result = ...
KafkaFuture<Void> all = result.all();
or:
CreateTopicsResult result = ...
for (Map.Entry<String, KafkaFuture<Void>> entry : result.values().entrySet()) {
    try {
        entry.getValue().get();
        log.info("topic {} created", entry.getKey());
    } catch (InterruptedException | ExecutionException e) {
        if (Throwables.getRootCause(e) instanceof TopicExistsException) {
            log.info("topic {} existed", entry.getKey());
        }
    }
}
KafkaFuture is "a flexible future which supports call chaining and other asynchronous programming patterns," and "will eventually become a thin shim on top of Java 8's CompletableFuture."
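For instance, a non-blocking variant of the loop above could chain a callback instead of blocking on get() (the log object is hypothetical, and this assumes a client version where KafkaFuture#whenComplete is available):
result.all().whenComplete((ignored, throwable) -> {
    if (throwable == null) {
        log.info("all topics created");
    } else {
        log.error("topic creation failed", throwable);
    }
});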
For creating a topic through the Java API with Kafka 0.8+, try the following.
First import this statement (you will also need kafka.admin.AdminUtils and java.util.Properties):
import kafka.utils.ZKStringSerializer$;
Then create the ZkClient object and the topic in the following way:
ZkClient zkClient = new ZkClient("localhost:2181", 10000, 10000, ZKStringSerializer$.MODULE$);
AdminUtils.createTopic(zkClient, "myTopic", 10, 1, new Properties());
You can try the kafka.admin.CreateTopicCommand Scala class to create a topic from Java code, providing the necessary arguments:
String [] arguments = new String[8];
arguments[0] = "--zookeeper";
arguments[1] = "10.***.***.***:2181";
arguments[2] = "--replica";
arguments[3] = "1";
arguments[4] = "--partition";
arguments[5] = "1";
arguments[6] = "--topic";
arguments[7] = "test-topic-Biks";
CreateTopicCommand.main(arguments);
NB: You should add the Maven dependencies for jopt-simple-4.5 and zkclient-0.1.
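In a pom.xml those would look roughly like this (the coordinates are my assumption of the usual Maven Central artifacts for those versions):
<dependency>
    <groupId>net.sf.jopt-simple</groupId>
    <artifactId>jopt-simple</artifactId>
    <version>4.5</version>
</dependency>
<dependency>
    <groupId>com.101tec</groupId>
    <artifactId>zkclient</artifactId>
    <version>0.1</version>
</dependency>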
Based on the latest kafka-clients API and Kafka 2.1.1, a working version of the code follows.
Import the latest kafka-clients using sbt:
// https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients
libraryDependencies ++= Seq(
  "org.apache.kafka" % "kafka-clients" % "2.1.1",
  "org.apache.kafka" %% "kafka" % "2.1.1")
The code for topic creation in scala:
import java.util.Arrays
import java.util.Properties
import org.apache.kafka.clients.admin.NewTopic
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig}
class CreateKafkaTopic {
  def create(): Unit = {
    val config = new Properties()
    config.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.30.1.5:9092")
    val localKafkaAdmin = AdminClient.create(config)

    val partitions = 3
    val replication = 1.toShort
    val topic = new NewTopic("integration-02", partitions, replication)
    val topics = Arrays.asList(topic)

    val topicStatus = localKafkaAdmin.createTopics(topics).values()
    //topicStatus.values()
    println(topicStatus.keySet())
  }
}
Validate the new topic using:
./kafka-topics.sh --zookeeper 192.30.1.5:2181 --list
Hope it is helpful to someone.
Reference: http://kafka.apache.org/21/javadoc/index.html?org/apache/kafka/clients/admin/AdminClient.html
If you are using Kafka 0.10.0.0+, creating a topic from Java requires passing a parameter of type RackAwareMode. It's a Scala case object, and getting its instance from Java is tricky (see, for example, How do I "get" a Scala case object from Java? -- though that approach is not applicable in our case).
Luckily, rackAwareMode is an optional parameter, yet Java does not support optional parameters. How do we solve that? Here is a solution, using the compiler-generated default-argument methods:
AdminUtils.createTopic(zkUtils, topic, 1, 1,
AdminUtils.createTopic$default$5(),
AdminUtils.createTopic$default$6());
Use it with miguno's answer, and you are good to go.
A few ways your call wouldn't work:
- Your Kafka cluster doesn't have enough nodes to support a replication value of 3.
- There is a chroot path prefix; you have to append it after the ZooKeeper port (e.g. localhost:2181/chroot).
- You aren't in the Kafka install directory when running it (this is the most likely).
From the Kafka 0.8 Producer Example: the sample below will create a topic named page_visits and also start producing, if the auto.create.topics.enable attribute is set to true (the default) in the Kafka broker config file.
import java.util.*;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
public class TestProducer {
    public static void main(String[] args) {
        long events = Long.parseLong(args[0]);
        Random rnd = new Random();

        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092,broker2:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("partitioner.class", "example.producer.SimplePartitioner");
        props.put("request.required.acks", "1");
        ProducerConfig config = new ProducerConfig(props);
        Producer<String, String> producer = new Producer<String, String>(config);

        for (long nEvents = 0; nEvents < events; nEvents++) {
            long runtime = new Date().getTime();
            String ip = "192.168.2." + rnd.nextInt(255);
            String msg = runtime + ",www.example.com," + ip;
            KeyedMessage<String, String> data = new KeyedMessage<String, String>("page_visits", ip, msg);
            producer.send(data);
        }
        producer.close();
    }
}
There is AdminZkClient, which we can use to manage topics on the Kafka server:
import java.util.Properties;
import kafka.admin.RackAwareMode;
import kafka.zk.AdminZkClient;
import kafka.zk.KafkaZkClient;
import org.apache.kafka.common.utils.Time;

String zookeeperHost = "127.0.0.1:2181";
boolean isSecure = false;
int sessionTimeoutMs = 200000;
int connectionTimeoutMs = 15000;
int maxInFlightRequests = 10;
Time time = Time.SYSTEM;
String metricGroup = "myGroup";
String metricType = "myType";
KafkaZkClient zkClient = KafkaZkClient.apply(zookeeperHost, isSecure, sessionTimeoutMs,
        connectionTimeoutMs, maxInFlightRequests, time, metricGroup, metricType);
AdminZkClient adminZkClient = new AdminZkClient(zkClient);

String topicName1 = "myTopic";
int partitions = 3;
int replication = 1;
Properties topicConfig = new Properties();
adminZkClient.createTopic(topicName1, partitions, replication, topicConfig, RackAwareMode.Disabled$.MODULE$);
You can refer to this link for details:
https://www.analyticshut.com/streaming-services/kafka/create-and-list-kafka-topics-in-java/
From which IDE are you trying? Please provide the complete path. Below are the commands from a terminal which will create a topic:
cd kafka/bin
./kafka-create-topic.sh --topic test --zookeeper localhost:2181
As of Kafka 0.10.1 the ZKStringSerializer mentioned by Michael is private (for Scala). You can use the factory methods createZkClient or createZkClientAndConnection in ZkUtils.
Scala example for Kafka 0.10.1:
import kafka.utils.ZkUtils
val sessionTimeoutMs = 10000
val connectionTimeoutMs = 10000
val (zkClient, zkConnection) = ZkUtils.createZkClientAndConnection(
"localhost:2181", sessionTimeoutMs, connectionTimeoutMs)
Then just create the topic as Michael suggested:
import java.util.Properties
import kafka.admin.AdminUtils
val zkUtils = new ZkUtils(zkClient, zkConnection, false)
val numPartitions = 4
val replicationFactor = 1
val topicConfig = new Properties
val topic = "my-topic"
AdminUtils.createTopic(zkUtils, topic, numPartitions, replicationFactor, topicConfig)