RxJava2 load testing inconsistencies as a result of onComplete event - java

There is code that is supposed to load test some function that performs an HTTP call (we call it callInit here) and collect some data in a LoaTestMetricsData object:
the collected responses
and the total duration of the execution.
import io.reactivex.Observable;
import io.reactivex.Scheduler;
import io.reactivex.Single;
import io.reactivex.observers.TestObserver;
import io.reactivex.schedulers.Schedulers;
import io.reactivex.subjects.PublishSubject;
import io.reactivex.subjects.Subject;
import io.restassured.internal.RestAssuredResponseImpl;
import io.restassured.response.Response;
import org.junit.jupiter.api.Test;
import java.time.Duration;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;
import static java.lang.Thread.sleep;
import static org.hamcrest.CoreMatchers.equalTo;
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.allOf;
import static org.hamcrest.Matchers.greaterThanOrEqualTo;
import static org.hamcrest.Matchers.lessThan;
public class TestRx {
@Test
public void loadTest() {
int CALL_N_TIMES = 10;
final long CALL_NIT_EVERY_MILLISECONDS = 100;
final LoaTestMetricsData loaTestMetricsData = loadTestHttpCall(
this::callInit,
CALL_N_TIMES,
CALL_NIT_EVERY_MILLISECONDS
);
assertThat(loaTestMetricsData.responseList.size(), is(equalTo(Long.valueOf(CALL_N_TIMES).intValue())));
long errorCount = loaTestMetricsData.responseList.stream().filter(x -> x.getStatusCode() != 200).count();
long executionTime = loaTestMetricsData.duration.getSeconds();
//assertThat(errorCount, is(equalTo(0)));
assertThat(executionTime , allOf(greaterThanOrEqualTo(1L),lessThan(3L)));
}
// --
private Single<Response> callInit() {
try {
return Single.fromCallable(() -> {
System.out.println("...");
sleep(1000);
Response response = new RestAssuredResponseImpl();
return response;
});
} catch (Exception ex) {
throw new RuntimeException(ex.getMessage());
}
}
// --
private LoaTestMetricsData loadTestHttpCall(final Supplier<Single<Response>> restCallFunction, long callnTimes, long callEveryMilisseconds) {
long startTimeMillis = System.currentTimeMillis();
final LoaTestMetricsData loaDestMetricsData = new LoaTestMetricsData();
final AtomicInteger atomicInteger = new AtomicInteger(0);
final TestObserver<Response> testObserver = new TestObserver<Response>() {
public void onNext(Response response) {
loaDestMetricsData.responseList.add(response);
super.onNext(response);
}
public void onComplete() {
loaDestMetricsData.duration = Duration.ofMillis(System.currentTimeMillis() - startTimeMillis);
super.onComplete();
}
};
final Subject<Response> subjectInitCallResults = PublishSubject.create(); // Memo: Subjects are hot, so if you don't observe them at the right time you may not get events. Thus: subscribe first, then emit (onNext)
final Scheduler schedulerIo = Schedulers.io();
subjectInitCallResults
.subscribeOn(schedulerIo)
.subscribe(testObserver); // subscribe first
final Observable<Long> source = Observable.interval(callEveryMilisseconds, TimeUnit.MILLISECONDS).take(callnTimes);
source.subscribe(x -> {
final Single<Response> singleResult = restCallFunction.get();
singleResult
.subscribeOn(schedulerIo)
.subscribe( result -> {
int count = atomicInteger.incrementAndGet();
if(count == callnTimes) {
subjectInitCallResults.onNext(result); // then emit
subjectInitCallResults.onComplete();
} else {
subjectInitCallResults.onNext(result);
}
});
});
testObserver.awaitTerminalEvent();
testObserver.assertComplete();
testObserver.assertValueCount(Long.valueOf(callnTimes).intValue()); // !!!
return loaDestMetricsData;
}
}
The: LoaTestMetricsData is defined as:
public class LoaTestMetricsData {
public List<Response> responseList = new ArrayList<>();
public Duration duration;
}
Sometimes the test fails with this error:
java.lang.AssertionError: Value counts differ; expected: 10 but was: 9 (latch = 0, values = 9, errors = 0, completions = 1)
Expected :10
Actual :9 (latch = 0, values = 9, errors = 0, completions = 1)
Could someone tell me why?
It is as if some of the subjectInitCallResults.onNext() calls have not been executed, or not been consumed. But why? I understand that PublishSubject is a hot observable, so I subscribe for the events before emitting anything to it (onNext).
UPDATE:
What fixes it is this ugly code, which waits for the subject to fill up:
while(subjectInitCallResults.count().blockingGet() != callnTimes) {
Thread.sleep(100);
}
..
testObserver.awaitTerminalEvent();
But what is the proper / better way of doing it?
Thanks.
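One cleaner shape worth considering (an editorial sketch, not from the original question): because the results arrive on different io() threads, concurrent onNext calls on a plain PublishSubject violate the Observable contract; either wrap it with subjectInitCallResults.toSerialized(), or drop the Subject entirely and let the pipeline itself signal completion. A minimal sketch of the latter, reusing restCallFunction, callnTimes and callEveryMilisseconds from the question (java.util.List would additionally need to be imported):
// Drive the calls with interval + flatMapSingle; toList() completes only after
// every inner Single has completed, so no Subject or busy-wait is needed.
long start = System.currentTimeMillis();
List<Response> responses = Observable
        .interval(callEveryMilisseconds, TimeUnit.MILLISECONDS)
        .take(callnTimes)
        .flatMapSingle(tick -> restCallFunction.get().subscribeOn(Schedulers.io()))
        .toList()
        .blockingGet();
LoaTestMetricsData metrics = new LoaTestMetricsData();
metrics.responseList.addAll(responses);
metrics.duration = Duration.ofMillis(System.currentTimeMillis() - start);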

Related

Flink Inner Join Missing records and adding duplicates

We are running a Flink application on AWS Kinesis Analytics.
We use Kafka as our source and sink, and event time for watermark generation. We have a window of 5 seconds and perform an inner join on a common field.
The Kafka topics have 12 partitions and Flink runs with a parallelism of 3.
Issues observed: for some windows we are missing records; records that should join based on event time do not, while for other windows we are seeing duplicate records.
Sample records:
{"empName":"ted","timestamp":"0","uuid":"f2c2e48a44064d0fa8da5a3896e0e42a","empId":"23698"}
{"empName":"ted","timestamp":"1","uuid":"069f2293ad144dd38a79027068593b58","empId":"23145"}
{"empName":"john","timestamp":"2","uuid":"438c1f0b85154bf0b8e4b3ebf75947b6","empId":"23698"}
{"empName":"john","timestamp":"0","uuid":"76d1d21ed92f4a3f8e14a09e9b40a13b","empId":"23145"}
{"empName":"ted","timestamp":"0","uuid":"bbc3bad653aa44c4894d9c4d13685fba","empId":"23698"}
{"empName":"ted","timestamp":"0","uuid":"530871933d1e4443ade447adc091dcbe","empId":"23145"}
{"empName":"ted","timestamp":"1","uuid":"032d7be009cb448bb40fe5c44582cb9c","empId":"23698"}
{"empName":"john","timestamp":"1","uuid":"e5916821bd4049bab16f4dc62d4b90ea","empId":"23145"}
{"empId":"23698","timestamp":"0","expense":"234"}
{"empId":"23698","timestamp":"0","expense":"34"}
{"empId":"23698","timestamp":"1","expense":"234"}
{"empId":"23145","timestamp":"2","expense":"234"}
{"empId":"23698","timestamp":"2","expense":"234"}
{"empId":"23698","timestamp":"0","expense":"234"}
{"empId":"23145","timestamp":"0","expense":"234"}
{"empId":"23698","timestamp":"0","expense":"34"}
{"empId":"23145","timestamp":"1","expense":"34"}
Below is the code for your reference.
As you can see, for the two streams there are many event timestamps that can repeat. There can be thousands of employee and empId combinations (in the real data there are many more dimensions), and they all come in on a single Kafka topic.
import java.text.SimpleDateFormat;
import java.time.Duration;
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoUnit;
import java.util.Properties;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.api.common.functions.JoinFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple1;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.std.StringDeserializer;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.streaming.api.functions.co.KeyedCoProcessFunction;
import org.apache.flink.streaming.api.functions.co.RichCoFlatMapFunction;
import org.apache.flink.streaming.api.functions.sink.PrintSinkFunction;
import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.Semantic;
import org.apache.flink.util.Collector;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class Main {
private static final Logger LOG = LoggerFactory.getLogger(Main.class);
// static String TOPIC_IN = "event_hub_all-mt-partitioned";
static String TOPIC_ONE = "kafka_one_multi";
static String TOPIC_TWO = "kafka_two_multi";
static String TOPIC_OUT = "final_join_topic_multi";
static String BOOTSTRAP_SERVER = "localhost:9092";
public static void main(String[] args) {
Producer<String> emp = new Producer<String>(BOOTSTRAP_SERVER, StringSerializer.class.getName());
Producer<String> dept = new Producer<String>(BOOTSTRAP_SERVER, StringSerializer.class.getName());
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
Properties props = new Properties();
props.put("bootstrap.servers", BOOTSTRAP_SERVER);
props.put("client.id", "flink-example1");
FlinkKafkaConsumer<Employee> kafkaConsumerOne = new FlinkKafkaConsumer<>(TOPIC_ONE, new EmployeeSchema(),
props);
LOG.info("Coming to main function");
//Commenting event timestamp for watermark generation!!
var empDebugStream = kafkaConsumerOne.assignTimestampsAndWatermarks(
WatermarkStrategy.<Employee>forBoundedOutOfOrderness(Duration.ofSeconds(5))
.withTimestampAssigner((employee, timestamp) -> employee.getTimestamp().getTime())
.withIdleness(Duration.ofSeconds(1)));
// for allowing Flink to handle late elements
kafkaConsumerOne.setStartFromLatest();
FlinkKafkaConsumer<EmployeeExpense> kafkaConsumerTwo = new FlinkKafkaConsumer<>(TOPIC_TWO,
new DepartmentSchema(), props);
//Commenting event timestamp for watermark generation!!
kafkaConsumerTwo.assignTimestampsAndWatermarks(
WatermarkStrategy.<EmployeeExpense>forBoundedOutOfOrderness(Duration.ofSeconds(5))
.withTimestampAssigner((employeeExpense, timestamp) -> employeeExpense.getTimestamp().getTime())
.withIdleness(Duration.ofSeconds(1)));
kafkaConsumerTwo.setStartFromLatest();
// EventSerializationSchema<EmployeeWithExpenseAggregationStats> employeeWithExpenseAggregationSerializationSchema = new EventSerializationSchema<EmployeeWithExpenseAggregationStats>(
// TOPIC_OUT);
EventSerializationSchema<EmployeeWithExpense> employeeWithExpenseSerializationSchema = new EventSerializationSchema<EmployeeWithExpense>(
TOPIC_OUT);
// FlinkKafkaProducer<EmployeeWithExpenseAggregationStats> sink = new FlinkKafkaProducer<EmployeeWithExpenseAggregationStats>(
// TOPIC_OUT,
// employeeWithExpenseAggregationSerializationSchema,props,
// FlinkKafkaProducer.Semantic.AT_LEAST_ONCE);
FlinkKafkaProducer<EmployeeWithExpense> sink = new FlinkKafkaProducer<EmployeeWithExpense>(TOPIC_OUT,
employeeWithExpenseSerializationSchema, props, FlinkKafkaProducer.Semantic.AT_LEAST_ONCE);
DataStream<Employee> empStream = env.addSource(kafkaConsumerOne)
.transform("debugFilter", empDebugStream.getProducedType(), new StreamWatermarkDebugFilter<>())
.keyBy(emps -> emps.getEmpId());
DataStream<EmployeeExpense> expStream = env.addSource(kafkaConsumerTwo).keyBy(exps -> exps.getEmpId());
// DataStream<EmployeeWithExpense> aggInputStream = empStream.join(expStream)
empStream.join(expStream).where(new KeySelector<Employee, Tuple1<Integer>>() {
/**
*
*/
private static final long serialVersionUID = 1L;
@Override
public Tuple1<Integer> getKey(Employee value) throws Exception {
return Tuple1.of(value.getEmpId());
}
}).equalTo(new KeySelector<EmployeeExpense, Tuple1<Integer>>() {
/**
*
*/
private static final long serialVersionUID = 1L;
@Override
public Tuple1<Integer> getKey(EmployeeExpense value) throws Exception {
return Tuple1.of(value.getEmpId());
}
}).window(TumblingEventTimeWindows.of(Time.seconds(5))).allowedLateness(Time.seconds(15))
.apply(new JoinFunction<Employee, EmployeeExpense, EmployeeWithExpense>() {
/**
*
*/
private static final long serialVersionUID = 1L;
@Override
public EmployeeWithExpense join(Employee first, EmployeeExpense second) throws Exception {
return new EmployeeWithExpense(second.getTimestamp(), first.getEmpId(), second.getExpense(),
first.getUuid(), LocalDateTime.now()
.format(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSS'+0000'")));
}
}).addSink(sink);
// KeyedStream<EmployeeWithExpense, Tuple3<Integer, Integer,Long>> inputKeyedByGWNetAccountProductRTG = aggInputStream
// .keyBy(new KeySelector<EmployeeWithExpense, Tuple3<Integer, Integer,Long>>() {
//
// /**
// *
// */
// private static final long serialVersionUID = 1L;
//
// @Override
// public Tuple3<Integer, Integer,Long> getKey(EmployeeWithExpense value) throws Exception {
// return Tuple3.of(value.empId, value.expense,Instant.ofEpochMilli(value.timestamp.getTime()).truncatedTo(ChronoUnit.SECONDS).toEpochMilli());
// }
// });
//
// inputKeyedByGWNetAccountProductRTG.window(TumblingEventTimeWindows.of(Time.seconds(2)))
// .aggregate(new EmployeeWithExpenseAggregator()).addSink(sink);
// streamOne.print();
// streamTwo.print();
// DataStream<KafkaRecord> streamTwo = env.addSource(kafkaConsumerTwo);
//
// streamOne.connect(streamTwo).flatMap(new CoFlatMapFunction<KafkaRecord, KafkaRecord, R>() {
// })
//
// // Create Kafka producer from Flink API
// Properties prodProps = new Properties();
// prodProps.put("bootstrap.servers", BOOTSTRAP_SERVER);
//
// FlinkKafkaProducer<KafkaRecord> kafkaProducer =
//
// new FlinkKafkaProducer<KafkaRecord>(TOPIC_OUT,
//
// ((record, timestamp) -> new ProducerRecord<byte[], byte[]>(TOPIC_OUT, record.key.getBytes(), record.value.getBytes())),
//
// prodProps,
//
// Semantic.EXACTLY_ONCE);;
//
// DataStream<KafkaRecord> stream = env.addSource(kafkaConsumer);
//
// stream.filter((record) -> record.value != null && !record.value.isEmpty()).keyBy(record -> record.key)
// .timeWindow(Time.seconds(15)).allowedLateness(Time.milliseconds(500))
// .reduce(new ReduceFunction<KafkaRecord>() {
// /**
// *
// */
// private static final long serialVersionUID = 1L;
// KafkaRecord result = new KafkaRecord();
// @Override
// public KafkaRecord reduce(KafkaRecord record1, KafkaRecord record2) throws Exception
// {
// result.key = "outKey";
//
// result.value = record1.value + " " + record2.value;
//
// return result;
// }
// }).addSink(kafkaProducer);
// produce a number as string every second
new MessageGenerator(emp, TOPIC_ONE, "EMP").start();
new MessageGenerator(dept, TOPIC_TWO, "EXP").start();
// for visual topology of the pipeline. Paste the below output in
// https://flink.apache.org/visualizer/
// System.out.println(env.getExecutionPlan());
// start flink
try {
env.execute();
LOG.debug("Starting flink application!!");
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
Questions
How do we debug when the window is emitted? Is there a way to add
both streams to a sink (Kafka) and see when the records are
emitted, window by window?
Can we send the late-arriving records to a sink to check more about them? (See the sketch after these questions.)
What is the cause of the duplicates? How do we debug them?
Any help in this direction is greatly appreciated. Thanks in advance.
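Regarding the question about routing late records to a sink for inspection, here is an editorial sketch (not from the original post): a ProcessFunction applied before the join can compare each element's timestamp with the current watermark and divert anything already behind it to a side output. The class name LateRecordTap and the "late-records" tag are illustrative, not part of the original code.
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;
// Flags elements that are already behind the watermark, i.e. records an
// event-time window would treat as late, and passes everything through unchanged.
public class LateRecordTap<T> extends ProcessFunction<T, T> {
    public static final OutputTag<String> LATE = new OutputTag<String>("late-records") {};
    @Override
    public void processElement(T value, Context ctx, Collector<T> out) {
        Long ts = ctx.timestamp();                                // element's event timestamp
        long watermark = ctx.timerService().currentWatermark();
        if (ts != null && ts < watermark) {
            ctx.output(LATE, "late by " + (watermark - ts) + " ms: " + value);
        }
        out.collect(value);
    }
}
Applying it with .process(new LateRecordTap<>()) on each input stream before the keyBy/join, and reading getSideOutput(LateRecordTap.LATE) from the resulting operator, gives a stream of late records that can be written to its own Kafka sink.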

External service call with Java 11 HttpClient sync vs async

My microservice makes a POST call to an external service and I want to use the Java 11 HttpClient. How do the send() and sendAsync() methods differ here? I have tested with different numbers of requests and the latency is almost the same. I tried executing 100 calls to my endpoint with 10, 20, or more threads; the results for both methods are almost the same.
I use sendAsync() followed by thenApply().get() when receiving the response.
I would like to know which is the preferred way and why. Is async also supposed to be faster (which is not what my current results show)?
Thanks in advance for your answers!
Here's a test of both methods:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpRequest.Builder;
import java.net.http.HttpResponse;
import java.net.http.HttpResponse.BodyHandlers;
import java.time.Duration;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.function.Function;
import java.util.function.Supplier;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
public class HttpClientTest {
static final int REQUEST_COUNT = 100;
static final String URI_TEMPLATE = "https://jsonplaceholder.typicode.com/posts/%d";
public static void main(final String[] args) throws Exception {
final List<HttpRequest> requests = IntStream.rangeClosed(1, REQUEST_COUNT)
.mapToObj(i -> String.format(URI_TEMPLATE, i))
.map(URI::create)
.map(HttpRequest::newBuilder)
.map(Builder::build)
.collect(Collectors.toList());
final HttpClient client = HttpClient.newBuilder()
.executor(Executors.newFixedThreadPool(REQUEST_COUNT))
.build();
final ThrowingFunction<HttpRequest, String> sendSync =
request -> client.send(request, BodyHandlers.ofString()).body();
final ThrowingFunction<CompletableFuture<HttpResponse<String>>, String> getSync =
future -> future.get().body();
benchmark("sync", () -> requests.stream()
.map(sendSync)
.collect(Collectors.toList()));
benchmark("async", () -> requests.stream()
.map(request -> client.sendAsync(request, BodyHandlers.ofString()))
.collect(Collectors.toList()) // materialize to send the requests
.stream()
.map(getSync)
.collect(Collectors.toList()));
}
static void benchmark(final String name, final Supplier<List<String>> supplier) {
new Thread(() -> {
final long start = System.nanoTime();
System.out.printf("%s: start%n", name);
final List<String> result = supplier.get();
final Duration duration = Duration.ofNanos(System.nanoTime() - start);
final int size = result.stream()
.mapToInt(String::length)
.sum();
System.out.printf("%s: end, got %d chars, took %s%n", name, size, duration);
}, name).start();
}
@FunctionalInterface
static interface ThrowingFunction<T, R> extends Function<T, R> {
default R apply(final T t) {
try {
return applyThrowing(t);
} catch (final Exception e) {
throw new RuntimeException(e);
}
}
R applyThrowing(T t) throws Exception;
}
}
Example output:
sync: start
async: start
async: end, got 26118 chars, took PT1.6102532S
sync: end, got 26118 chars, took PT4.3368509S
The higher the parallelism level of the API, the better the asynchronous method will perform.
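A common variant of the async branch (an editorial sketch, not part of the original answer) joins all the futures with CompletableFuture.allOf instead of streaming over them a second time; it reuses the client, requests and BodyHandlers from the benchmark above:
// Fire all requests first, then wait for every response at once.
List<CompletableFuture<HttpResponse<String>>> futures = requests.stream()
        .map(request -> client.sendAsync(request, BodyHandlers.ofString()))
        .collect(Collectors.toList());
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
List<String> bodies = futures.stream()
        .map(future -> future.join().body())   // join() cannot block here: allOf already completed
        .collect(Collectors.toList());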

Error with StreamIdentifier when using MultiStreamTracker in kinesis

I'm getting an error with StreamIdentifier when trying to use MultiStreamTracker in a kinesis consumer application.
java.lang.IllegalArgumentException: Unable to deserialize StreamIdentifier from first-stream-name
What is causing this error? I can't find a good example of using the tracker with kinesis.
The stream name works when using a consumer with a single stream, so I'm not sure what is happening. It looks like the consumer is trying to parse the accountId and streamCreationEpoch, but when I create the identifiers I am using the singleStreamInstance method. Is the stream name required to include these values? They appear to be optional in the code.
This test is part of a complete example on github.
package kinesis.localstack.example;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.UUID;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import com.amazonaws.services.kinesis.producer.KinesisProducer;
import com.amazonaws.services.kinesis.producer.KinesisProducerConfiguration;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.cloudwatch.CloudWatchAsyncClient;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
import software.amazon.kinesis.common.ConfigsBuilder;
import software.amazon.kinesis.common.InitialPositionInStream;
import software.amazon.kinesis.common.InitialPositionInStreamExtended;
import software.amazon.kinesis.common.KinesisClientUtil;
import software.amazon.kinesis.common.StreamConfig;
import software.amazon.kinesis.common.StreamIdentifier;
import software.amazon.kinesis.coordinator.Scheduler;
import software.amazon.kinesis.exceptions.InvalidStateException;
import software.amazon.kinesis.exceptions.ShutdownException;
import software.amazon.kinesis.lifecycle.events.InitializationInput;
import software.amazon.kinesis.lifecycle.events.LeaseLostInput;
import software.amazon.kinesis.lifecycle.events.ProcessRecordsInput;
import software.amazon.kinesis.lifecycle.events.ShardEndedInput;
import software.amazon.kinesis.lifecycle.events.ShutdownRequestedInput;
import software.amazon.kinesis.processor.FormerStreamsLeasesDeletionStrategy;
import software.amazon.kinesis.processor.FormerStreamsLeasesDeletionStrategy.NoLeaseDeletionStrategy;
import software.amazon.kinesis.processor.MultiStreamTracker;
import software.amazon.kinesis.processor.ShardRecordProcessor;
import software.amazon.kinesis.processor.ShardRecordProcessorFactory;
import software.amazon.kinesis.retrieval.KinesisClientRecord;
import software.amazon.kinesis.retrieval.polling.PollingConfig;
import static java.util.stream.Collectors.toList;
import static org.assertj.core.api.Assertions.assertThat;
import static org.awaitility.Awaitility.await;
import static org.testcontainers.containers.localstack.LocalStackContainer.Service.CLOUDWATCH;
import static org.testcontainers.containers.localstack.LocalStackContainer.Service.DYNAMODB;
import static org.testcontainers.containers.localstack.LocalStackContainer.Service.KINESIS;
import static software.amazon.kinesis.common.InitialPositionInStream.TRIM_HORIZON;
import static software.amazon.kinesis.common.StreamIdentifier.singleStreamInstance;
@Testcontainers
public class KinesisMultiStreamTest {
static class TestProcessorFactory implements ShardRecordProcessorFactory {
private final TestKinesisRecordService service;
public TestProcessorFactory(TestKinesisRecordService service) {
this.service = service;
}
@Override
public ShardRecordProcessor shardRecordProcessor() {
throw new UnsupportedOperationException("must have streamIdentifier");
}
public ShardRecordProcessor shardRecordProcessor(StreamIdentifier streamIdentifier) {
return new TestRecordProcessor(service, streamIdentifier);
}
}
static class TestRecordProcessor implements ShardRecordProcessor {
public final TestKinesisRecordService service;
public final StreamIdentifier streamIdentifier;
public TestRecordProcessor(TestKinesisRecordService service, StreamIdentifier streamIdentifier) {
this.service = service;
this.streamIdentifier = streamIdentifier;
}
@Override
public void initialize(InitializationInput initializationInput) {
}
@Override
public void processRecords(ProcessRecordsInput processRecordsInput) {
service.addRecord(streamIdentifier, processRecordsInput);
}
@Override
public void leaseLost(LeaseLostInput leaseLostInput) {
}
@Override
public void shardEnded(ShardEndedInput shardEndedInput) {
try {
shardEndedInput.checkpointer().checkpoint();
} catch (Exception e) {
throw new IllegalStateException(e);
}
}
@Override
public void shutdownRequested(ShutdownRequestedInput shutdownRequestedInput) {
}
}
static class TestKinesisRecordService {
private List<ProcessRecordsInput> firstStreamRecords = Collections.synchronizedList(new ArrayList<>());
private List<ProcessRecordsInput> secondStreamRecords = Collections.synchronizedList(new ArrayList<>());
public void addRecord(StreamIdentifier streamIdentifier, ProcessRecordsInput processRecordsInput) {
if(streamIdentifier.streamName().contains(firstStreamName)) {
firstStreamRecords.add(processRecordsInput);
} else if(streamIdentifier.streamName().contains(secondStreamName)) {
secondStreamRecords.add(processRecordsInput);
} else {
throw new IllegalStateException("no list for stream " + streamIdentifier);
}
}
public List<ProcessRecordsInput> getFirstStreamRecords() {
return Collections.unmodifiableList(firstStreamRecords);
}
public List<ProcessRecordsInput> getSecondStreamRecords() {
return Collections.unmodifiableList(secondStreamRecords);
}
}
public static final String firstStreamName = "first-stream-name";
public static final String secondStreamName = "second-stream-name";
public static final String partitionKey = "partition-key";
DockerImageName localstackImage = DockerImageName.parse("localstack/localstack:latest");
@Container
public LocalStackContainer localstack = new LocalStackContainer(localstackImage)
.withServices(KINESIS, CLOUDWATCH)
.withEnv("KINESIS_INITIALIZE_STREAMS", firstStreamName + ":1," + secondStreamName + ":1");
public Scheduler scheduler;
public TestKinesisRecordService service = new TestKinesisRecordService();
public KinesisProducer producer;
@BeforeEach
void setup() {
KinesisAsyncClient kinesisClient = KinesisClientUtil.createKinesisAsyncClient(
KinesisAsyncClient.builder().endpointOverride(localstack.getEndpointOverride(KINESIS)).region(Region.of(localstack.getRegion()))
);
DynamoDbAsyncClient dynamoClient = DynamoDbAsyncClient.builder().region(Region.of(localstack.getRegion())).endpointOverride(localstack.getEndpointOverride(DYNAMODB)).build();
CloudWatchAsyncClient cloudWatchClient = CloudWatchAsyncClient.builder().region(Region.of(localstack.getRegion())).endpointOverride(localstack.getEndpointOverride(CLOUDWATCH)).build();
MultiStreamTracker tracker = new MultiStreamTracker() {
private List<StreamConfig> configs = List.of(
new StreamConfig(singleStreamInstance(firstStreamName), InitialPositionInStreamExtended.newInitialPosition(TRIM_HORIZON)),
new StreamConfig(singleStreamInstance(secondStreamName), InitialPositionInStreamExtended.newInitialPosition(TRIM_HORIZON)));
@Override
public List<StreamConfig> streamConfigList() {
return configs;
}
@Override
public FormerStreamsLeasesDeletionStrategy formerStreamsLeasesDeletionStrategy() {
return new NoLeaseDeletionStrategy();
}
};
ConfigsBuilder configsBuilder = new ConfigsBuilder(tracker, "KinesisPratTest", kinesisClient, dynamoClient, cloudWatchClient, UUID.randomUUID().toString(), new TestProcessorFactory(service));
scheduler = new Scheduler(
configsBuilder.checkpointConfig(),
configsBuilder.coordinatorConfig(),
configsBuilder.leaseManagementConfig(),
configsBuilder.lifecycleConfig(),
configsBuilder.metricsConfig(),
configsBuilder.processorConfig().callProcessRecordsEvenForEmptyRecordList(false),
configsBuilder.retrievalConfig()
);
new Thread(scheduler).start();
producer = producer();
}
@AfterEach
public void teardown() throws ExecutionException, InterruptedException, TimeoutException {
producer.destroy();
Future<Boolean> gracefulShutdownFuture = scheduler.startGracefulShutdown();
gracefulShutdownFuture.get(60, TimeUnit.SECONDS);
}
public KinesisProducer producer() {
var configuration = new KinesisProducerConfiguration()
.setVerifyCertificate(false)
.setCredentialsProvider(localstack.getDefaultCredentialsProvider())
.setMetricsCredentialsProvider(localstack.getDefaultCredentialsProvider())
.setRegion(localstack.getRegion())
.setCloudwatchEndpoint(localstack.getEndpointOverride(CLOUDWATCH).getHost())
.setCloudwatchPort(localstack.getEndpointOverride(CLOUDWATCH).getPort())
.setKinesisEndpoint(localstack.getEndpointOverride(KINESIS).getHost())
.setKinesisPort(localstack.getEndpointOverride(KINESIS).getPort());
return new KinesisProducer(configuration);
}
@Test
void testFirstStream() {
String expected = "Hello";
producer.addUserRecord(firstStreamName, partitionKey, ByteBuffer.wrap(expected.getBytes(StandardCharsets.UTF_8)));
var result = await().timeout(600, TimeUnit.SECONDS)
.until(() -> service.getFirstStreamRecords().stream()
.flatMap(r -> r.records().stream())
.map(KinesisClientRecord::data)
.map(r -> StandardCharsets.UTF_8.decode(r).toString())
.collect(toList()), records -> records.size() > 0);
assertThat(result).anyMatch(r -> r.equals(expected));
}
@Test
void testSecondStream() {
String expected = "Hello";
producer.addUserRecord(secondStreamName, partitionKey, ByteBuffer.wrap(expected.getBytes(StandardCharsets.UTF_8)));
var result = await().timeout(600, TimeUnit.SECONDS)
.until(() -> service.getSecondStreamRecords().stream()
.flatMap(r -> r.records().stream())
.map(KinesisClientRecord::data)
.map(r -> StandardCharsets.UTF_8.decode(r).toString())
.collect(toList()), records -> records.size() > 0);
assertThat(result).anyMatch(r -> r.equals(expected));
}
}
Here is the error I am getting.
[Thread-9] ERROR software.amazon.kinesis.coordinator.Scheduler - Worker.run caught exception, sleeping for 1000 milli seconds!
java.lang.IllegalArgumentException: Unable to deserialize StreamIdentifier from first-stream-name
at software.amazon.kinesis.common.StreamIdentifier.multiStreamInstance(StreamIdentifier.java:75)
at software.amazon.kinesis.coordinator.Scheduler.getStreamIdentifier(Scheduler.java:1001)
at software.amazon.kinesis.coordinator.Scheduler.buildConsumer(Scheduler.java:917)
at software.amazon.kinesis.coordinator.Scheduler.createOrGetShardConsumer(Scheduler.java:899)
at software.amazon.kinesis.coordinator.Scheduler.runProcessLoop(Scheduler.java:419)
at software.amazon.kinesis.coordinator.Scheduler.run(Scheduler.java:330)
at java.base/java.lang.Thread.run(Thread.java:829)
According to the documentation:
The serialized stream identifier should be in the following format: account-id:StreamName:streamCreationTimestamp
So your code should look like this:
private List<StreamConfig> configs = List.of(
new StreamConfig(multiStreamInstance("111111111:multiStreamTest-1:12345"), InitialPositionInStreamExtended.newInitialPosition(TRIM_HORIZON)),
new StreamConfig(multiStreamInstance("111111111:multiStreamTest-2:12389"), InitialPositionInStreamExtended.newInitialPosition(TRIM_HORIZON)));
Note: this will also change the leaseKey format to account-id:StreamName:streamCreationTimestamp:ShardId.

Google Pub/Sub Java examples

I'm not able to find a way to read messages from Pub/Sub using Java.
I'm using this Maven dependency in my pom:
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-pubsub</artifactId>
<version>0.17.2-alpha</version>
</dependency>
I implemented this main method to create a new topic:
public static void main(String... args) throws Exception {
// Your Google Cloud Platform project ID
String projectId = ServiceOptions.getDefaultProjectId();
// Your topic ID
String topicId = "my-new-topic-1";
// Create a new topic
TopicName topic = TopicName.create(projectId, topicId);
try (TopicAdminClient topicAdminClient = TopicAdminClient.create()) {
topicAdminClient.createTopic(topic);
}
}
The above code works well and, indeed, I can see the new topic I created using the google cloud console.
I implemented the following main method to write a message to my topic:
public static void main(String a[]) throws InterruptedException, ExecutionException{
String projectId = ServiceOptions.getDefaultProjectId();
String topicId = "my-new-topic-1";
String payload = "Hellooooo!!!";
PubsubMessage pubsubMessage =
PubsubMessage.newBuilder().setData(ByteString.copyFromUtf8(payload)).build();
TopicName topic = TopicName.create(projectId, topicId);
Publisher publisher;
try {
publisher = Publisher.defaultBuilder(
topic)
.build();
publisher.publish(pubsubMessage);
System.out.println("Sent!");
} catch (IOException e) {
System.out.println("Not Sended!");
e.printStackTrace();
}
}
Now I'm not able to verify whether this message was really sent.
I would like to implement a message reader using a subscription to my topic.
Could someone show me a correct, working Java example of reading messages from a topic?
Can anyone help me?
Thanks in advance!
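One detail worth checking before the answers below (an editorial aside, not from the original post): publish() hands the message to the client, which may batch it, so to confirm it was really sent you can block on the returned future and shut the publisher down. A minimal sketch against recent versions of the google-cloud-pubsub client, where publish() returns an ApiFuture<String>; the 0.17.2-alpha API used in the question may differ slightly:
// Requires: import com.google.api.core.ApiFuture;
// Reuses the publisher and pubsubMessage built in the question.
ApiFuture<String> future = publisher.publish(pubsubMessage);
String messageId = future.get();      // blocks; throws if the publish failed
System.out.println("Published, messageId = " + messageId);
publisher.shutdown();                 // flushes any pending batched messages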
Here is the version using the Google Cloud client libraries.
package com.techm.data.client;
import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.cloud.pubsub.v1.SubscriptionAdminClient;
import com.google.common.util.concurrent.MoreExecutors;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.ProjectTopicName;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.PushConfig;
/**
* A snippet for Google Cloud Pub/Sub showing how to create a Pub/Sub pull
* subscription and asynchronously pull messages from it.
*/
public class CreateSubscriptionAndConsumeMessages {
private static String projectId = "projectId";
private static String topicId = "topicName";
private static String subscriptionId = "subscriptionName";
public static void createSubscription() throws Exception {
ProjectTopicName topic = ProjectTopicName.of(projectId, topicId);
ProjectSubscriptionName subscription = ProjectSubscriptionName.of(projectId, subscriptionId);
try (SubscriptionAdminClient subscriptionAdminClient = SubscriptionAdminClient.create()) {
subscriptionAdminClient.createSubscription(subscription, topic, PushConfig.getDefaultInstance(), 0);
}
}
public static void main(String... args) throws Exception {
ProjectSubscriptionName subscription = ProjectSubscriptionName.of(projectId, subscriptionId);
createSubscription();
MessageReceiver receiver = new MessageReceiver() {
@Override
public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
System.out.println("Received message: " + message.getData().toStringUtf8());
consumer.ack();
}
};
Subscriber subscriber = null;
try {
subscriber = Subscriber.newBuilder(subscription, receiver).build();
subscriber.addListener(new Subscriber.Listener() {
@Override
public void failed(Subscriber.State from, Throwable failure) {
// Handle failure. This is called when the Subscriber encountered a fatal error
// and is
// shutting down.
System.err.println(failure);
}
}, MoreExecutors.directExecutor());
subscriber.startAsync().awaitRunning();
// In this example, we will pull messages for one minute (60,000ms) then stop.
// In a real application, this sleep-then-stop is not necessary.
// Simply call stopAsync().awaitTerminated() when the server is shutting down,
// etc.
Thread.sleep(60000);
} finally {
if (subscriber != null) {
subscriber.stopAsync().awaitTerminated();
}
}
}
}
This is working fine for me.
The Cloud Pub/Sub Pull Subscriber Guide has sample code for reading messages from a topic.
I haven't used the Google Cloud client libraries, but I have used the API client libraries. Here is how I created a subscription.
package com.techm.datapipeline.client;
import java.io.IOException;
import java.security.GeneralSecurityException;
import com.google.api.client.googleapis.json.GoogleJsonResponseException;
import com.google.api.client.http.HttpStatusCodes;
import com.google.api.services.pubsub.Pubsub;
import com.google.api.services.pubsub.Pubsub.Projects.Subscriptions.Create;
import com.google.api.services.pubsub.Pubsub.Projects.Subscriptions.Get;
import com.google.api.services.pubsub.Pubsub.Projects.Topics;
import com.google.api.services.pubsub.model.ExpirationPolicy;
import com.google.api.services.pubsub.model.Subscription;
import com.google.api.services.pubsub.model.Topic;
import com.techm.datapipeline.factory.PubsubFactory;
public class CreatePullSubscriberClient {
private final static String PROJECT_NAME = "yourProjectId";
private final static String TOPIC_NAME = "yourTopicName";
private final static String SUBSCRIPTION_NAME = "yourSubscriptionName";
public static void main(String[] args) throws IOException, GeneralSecurityException {
Pubsub pubSub = PubsubFactory.getService();
String topicName = String.format("projects/%s/topics/%s", PROJECT_NAME, TOPIC_NAME);
String subscriptionName = String.format("projects/%s/subscriptions/%s", PROJECT_NAME, SUBSCRIPTION_NAME);
Topics.Get listReq = pubSub.projects().topics().get(topicName);
Topic topic = listReq.execute();
if (topic == null) {
System.err.println("Topic doesn't exist...run CreateTopicClient...to create the topic");
System.exit(0);
}
Subscription subscription = null;
try {
Get getReq = pubSub.projects().subscriptions().get(subscriptionName);
subscription = getReq.execute();
} catch (GoogleJsonResponseException e) {
if (e.getStatusCode() == HttpStatusCodes.STATUS_CODE_NOT_FOUND) {
System.out.println("Subscription " + subscriptionName + " does not exist...will create it");
}
}
if (subscription != null) {
System.out.println("Subscription already exists ==> " + subscription.toPrettyString());
System.exit(0);
}
subscription = new Subscription();
subscription.setTopic(topicName);
subscription.setPushConfig(null); // indicating a pull
ExpirationPolicy expirationPolicy = new ExpirationPolicy();
expirationPolicy.setTtl(null); // never expires;
subscription.setExpirationPolicy(expirationPolicy);
subscription.setAckDeadlineSeconds(null); // so defaults to 10 sec
subscription.setRetainAckedMessages(true);
Long _week = 7L * 24 * 60 * 60;
subscription.setMessageRetentionDuration(String.valueOf(_week)+"s");
subscription.setName(subscriptionName);
Create createReq = pubSub.projects().subscriptions().create(subscriptionName, subscription);
Subscription createdSubscription = createReq.execute();
System.out.println("Subscription created ==> " + createdSubscription.toPrettyString());
}
}
And once you create the subscription (pull type), this is how you pull the messages from the topic.
package com.techm.datapipeline.client;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.ArrayList;
import java.util.List;
import com.google.api.client.googleapis.json.GoogleJsonResponseException;
import com.google.api.client.http.HttpStatusCodes;
import com.google.api.client.util.Base64;
import com.google.api.services.pubsub.Pubsub;
import com.google.api.services.pubsub.Pubsub.Projects.Subscriptions.Acknowledge;
import com.google.api.services.pubsub.Pubsub.Projects.Subscriptions.Get;
import com.google.api.services.pubsub.Pubsub.Projects.Subscriptions.Pull;
import com.google.api.services.pubsub.model.AcknowledgeRequest;
import com.google.api.services.pubsub.model.Empty;
import com.google.api.services.pubsub.model.PullRequest;
import com.google.api.services.pubsub.model.PullResponse;
import com.google.api.services.pubsub.model.ReceivedMessage;
import com.techm.datapipeline.factory.PubsubFactory;
public class PullSubscriptionsClient {
private final static String PROJECT_NAME = "yourProjectId";
private final static String SUBSCRIPTION_NAME = "yourSubscriptionName";
private final static String SUBSCRIPTION_NYC_NAME = "test";
public static void main(String[] args) throws IOException, GeneralSecurityException {
Pubsub pubSub = PubsubFactory.getService();
String subscriptionName = String.format("projects/%s/subscriptions/%s", PROJECT_NAME, SUBSCRIPTION_NAME);
//String subscriptionName = String.format("projects/%s/subscriptions/%s", PROJECT_NAME, SUBSCRIPTION_NYC_NAME);
try {
Get getReq = pubSub.projects().subscriptions().get(subscriptionName);
getReq.execute();
} catch (GoogleJsonResponseException e) {
if (e.getStatusCode() == HttpStatusCodes.STATUS_CODE_NOT_FOUND) {
System.out.println("Subscription " + subscriptionName
+ " does not exist...run CreatePullSubscriberClient to create");
}
}
PullRequest pullRequest = new PullRequest();
pullRequest.setReturnImmediately(false); // wait until you get a message
pullRequest.setMaxMessages(1000);
Pull pullReq = pubSub.projects().subscriptions().pull(subscriptionName, pullRequest);
PullResponse pullResponse = pullReq.execute();
List<ReceivedMessage> msgs = pullResponse.getReceivedMessages();
List<String> ackIds = new ArrayList<String>();
int i = 0;
if (msgs != null) {
for (ReceivedMessage msg : msgs) {
ackIds.add(msg.getAckId());
//System.out.println(i++ + ":===:" + msg.getAckId());
String object = new String(Base64.decodeBase64(msg.getMessage().getData()));
System.out.println("Decoded object String ==> " + object );
}
//acknowledge all the received messages
AcknowledgeRequest content = new AcknowledgeRequest();
content.setAckIds(ackIds);
Acknowledge ackReq = pubSub.projects().subscriptions().acknowledge(subscriptionName, content);
Empty empty = ackReq.execute();
}
}
}
Note: this client only waits until it receives at least one message, and terminates once it has received messages (up to the maximum set via MaxMessages) in a single pull.
Let me know if this helps. I'm going to try the cloud client libraries soon and will post an update once I get my hands on them.
And here's the missing factory class, if you plan to run it:
package com.techm.datapipeline.factory;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.logging.Level;
import java.util.logging.Logger;
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.pubsub.Pubsub;
import com.google.api.services.pubsub.PubsubScopes;
public class PubsubFactory {
private static Pubsub instance = null;
private static final Logger logger = Logger.getLogger(PubsubFactory.class.getName());
public static synchronized Pubsub getService() throws IOException, GeneralSecurityException {
if (instance == null) {
instance = buildService();
}
return instance;
}
private static Pubsub buildService() throws IOException, GeneralSecurityException {
logger.log(Level.FINER, "Start of buildService");
HttpTransport transport = GoogleNetHttpTransport.newTrustedTransport();
JsonFactory jsonFactory = new JacksonFactory();
GoogleCredential credential = GoogleCredential.getApplicationDefault(transport, jsonFactory);
// Depending on the environment that provides the default credentials (for
// example: Compute Engine, App Engine), the credentials may require us to
// specify the scopes we need explicitly.
if (credential.createScopedRequired()) {
Collection<String> scopes = new ArrayList<>();
scopes.add(PubsubScopes.PUBSUB);
credential = credential.createScoped(scopes);
}
logger.log(Level.FINER, "End of buildService");
// TODO - Get the application name from outside.
return new Pubsub.Builder(transport, jsonFactory, credential).setApplicationName("Your Application Name/Version")
.build();
}
}
The message receiver is injected into the subscriber. This part of the code handles the messages:
MessageReceiver receiver =
new MessageReceiver() {
@Override
public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
// handle incoming message, then ack/nack the received message
System.out.println("Id : " + message.getMessageId());
System.out.println("Data : " + message.getData().toStringUtf8());
consumer.ack();
}
};

add DataTable data not working?

I'm trying to get the method data.setValue(...) working in the asynchronous callback in the method getNames. Unfortunately it doesn't work, although data.setValue(...) does work in the synchronous method createColumnChartView.
What could be the cause of this problem? Please explain why setting the data doesn't work in getNames. Thanks in advance!
import java.util.ArrayList;
import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.ui.Widget;
import com.google.gwt.visualization.client.DataTable;
import com.google.gwt.visualization.client.AbstractDataTable.ColumnType;
import com.google.gwt.visualization.client.visualizations.corechart.ColumnChart;
import com.google.gwt.visualization.client.visualizations.corechart.CoreChart;
import com.google.gwt.visualization.client.visualizations.corechart.Options;
import com.practicum.client.Product;
import com.practicum.client.rpc.ProductService;
import com.practicum.client.rpc.ProductServiceAsync;
public class DataOutColumnChart {
private final DataTable data = DataTable.create();
private final Options options = CoreChart.createOptions();
private final ProductServiceAsync productService = GWT.create(ProductService.class);
public DataOutColumnChart(Runnable runnable) {
}
public Widget createColumnChartView() {
/* create a datatable */
data.addColumn(ColumnType.STRING, "Price");
data.addColumn(ColumnType.NUMBER, "EUR");
data.addRows(2);
data.setValue(0, 0, "Bar 1");
data.setValue(0, 1, 123);
getNames();
/* create column chart */
options.setWidth(400);
options.setHeight(300);
options.setBackgroundColor("#e8e8e9");
return new ColumnChart(data, options);
}
public void getNames() {
productService.getNames(new AsyncCallback<ArrayList<Product>>() {
public void onFailure(Throwable caught) {
}
public void onSuccess(ArrayList<Product> result) {
for (Product p : result) {
data.setValue(0, 0, "Bar 2"); // DOESN'T WORK, NOTHING HAPPENS
data.setValue(0, 1, 345); // DOESN'T WORK, NOTHING HAPPENS
System.out.println("Bla bla test"); // THIS WORKS
}
}
});
}
}
The problem is occurring because you're setting data on a DataTable that has already been rendered. Your asynchronous call in getNames() completes too slowly to affect the DataTable in time for the rendering of the ColumnChart, and even if it did complete fast enough, it would always be a race condition. Ideally, you would not render the chart until after you've received all the necessary data from the RPC call.
Another option is to store a reference to that ColumnChart and call columnChart.draw(...) after you get your data back from RPC.
Edit:
Here's the example you requested.
import java.util.ArrayList;
import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.ui.Widget;
import com.google.gwt.visualization.client.DataTable;
import com.google.gwt.visualization.client.AbstractDataTable.ColumnType;
import com.google.gwt.visualization.client.visualizations.corechart.ColumnChart;
import com.google.gwt.visualization.client.visualizations.corechart.CoreChart;
import com.google.gwt.visualization.client.visualizations.corechart.Options;
import com.practicum.client.Product;
import com.practicum.client.rpc.ProductService;
import com.practicum.client.rpc.ProductServiceAsync;
public class DataOutColumnChart {
private final DataTable data = DataTable.create();
private final Options options = CoreChart.createOptions();
private final ProductServiceAsync productService = GWT.create(ProductService.class);
private ColumnChart chart = null;
public DataOutColumnChart(Runnable runnable) {
}
public void initColumnChart() {
/* create a datatable */
data.addColumn(ColumnType.STRING, "Price");
data.addColumn(ColumnType.NUMBER, "EUR");
/* create column chart */
options.setWidth(400);
options.setHeight(300);
options.setBackgroundColor("#e8e8e9");
chart = new ColumnChart(data, options);
}
public void getNames() {
productService.getNames(new AsyncCallback<ArrayList<Product>>() {
public void onFailure(Throwable caught) {
}
public void onSuccess(ArrayList<Product> result) {
if (result != null && result.size() > 0) {
// if there is data...
data.addRows(result.size()); // add a row for each result
for (int i = 0; i < result.size(); i++) {
// loop through the results
Product product = result.get(i); // get out the product
// ...then set the column values for this row
data.setValue(i, 0, product.getSomeProperty());
data.setValue(i, 1, product.getSomeOtherProperty());
}
updateChart();
}
}
});
}
public void updateChart() {
chart.draw(data, options);
}
}
