Inserting an object from REST API to Kafka using Kafka Connect API - java

I have some issues developing a Kafka source connector using the Kafka Connect API.
I get data from a REST API using Retrofit and GSON, and then try to insert it into Kafka.
Here is my source task class:
public class BitfinexSourceTask extends SourceTask implements BitfinexTickerGetter.OnTickerReadyListener {
    private static final String DATETIME_FIELD = "datetime";
    private BitfinexService service;
    private ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
    private BlockingQueue<SourceRecord> queue = null;
    private BitfinexTickerGetter tickerGetter;
    private final Runnable runnable = new Runnable() {
        @Override
        public void run() {
            try {
                tickerGetter.get();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    };
    private ScheduledFuture<?> scheduledFuture;

    @Override
    public String version() {
        return VersionUtil.getVersion();
    }

    @Override
    public void start(Map<String, String> map) {
        service = BitfinexServiceFactory.create();
        queue = new LinkedBlockingQueue<>();
        tickerGetter = new BitfinexTickerGetter(service, this);
        scheduledFuture = scheduler.scheduleAtFixedRate(runnable, 0, 5, TimeUnit.MINUTES);
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        List<SourceRecord> result = new LinkedList<>();
        // Block until at least one record is available, then drain whatever else is queued.
        if (queue.isEmpty()) result.add(queue.take());
        queue.drainTo(result);
        return result;
    }

    @Override
    public void stop() {
        scheduledFuture.cancel(true);
        scheduler.shutdown();
    }

    @Override
    public void onTickerReady(Ticker ticker) {
        Map<String, ?> srcOffset = Collections.singletonMap(DATETIME_FIELD, ticker.getDatetime());
        Map<String, ?> srcPartition = Collections.singletonMap("from", "bitfinex");
        SourceRecord record = new SourceRecord(srcPartition, srcOffset, ticker.getSymbol(),
                Schema.STRING_SCHEMA, ticker.getDatetime(), Ticker.SCHEMA, ticker);
        queue.offer(record);
    }
}
I was actually able to build and add the connector. It runs without any errors, but the topic was not created. I decided to create the topic manually and re-run the connector, but the topic remained empty. Ticker is my POJO, containing String and double fields.
Can someone help me with this?
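One thing worth checking: when a value schema is declared, Kafka Connect's converters generally expect the record value to be an org.apache.kafka.connect.data.Struct built against that schema, not an arbitrary POJO. A minimal sketch of such a conversion, where the Ticker field names and getters are assumptions based on the description above:

    // Hedged sketch: build a Struct matching the declared schema instead of passing the POJO.
    // Field names (symbol, datetime, lastPrice) and getters are assumed for illustration.
    Schema TICKER_SCHEMA = SchemaBuilder.struct().name("Ticker")
            .field("symbol", Schema.STRING_SCHEMA)
            .field("datetime", Schema.STRING_SCHEMA)
            .field("lastPrice", Schema.FLOAT64_SCHEMA)
            .build();

    Struct value = new Struct(TICKER_SCHEMA)
            .put("symbol", ticker.getSymbol())
            .put("datetime", ticker.getDatetime())
            .put("lastPrice", ticker.getLastPrice());

    SourceRecord record = new SourceRecord(srcPartition, srcOffset, ticker.getSymbol(),
            Schema.STRING_SCHEMA, ticker.getDatetime(), TICKER_SCHEMA, value);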

Related

Unable to create Kafka Redis Sink with Single Message Transformations

I am trying to create a Kafka Redis sink that deletes a particular key in Redis. One way is to produce a record in Kafka with a specific key and a null value. But as per the use case, generating the keys at the producer is not possible. As a workaround, I wrote a Single Message Transform that takes the message from Kafka, sets a particular key, and sets the value to null.
Here are my Kafka Connect configurations:
"connector.class": "com.github.jcustenborder.kafka.connect.redis.RedisSinkConnector",
"transforms.invalidaterediskeys.type": "com.github.cjmatta.kafka.connect.smt.InvalidateRedisKeys",
"redis.database": "0",
"redis.client.mode": "Standalone",
"topics": "test_redis_deletion2",
"tasks.max": "1",
"redis.hosts": "REDIS-HOST",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"transforms": "invalidaterediskeys"
}
Here is the code for the transformation:
public class InvalidateRedisKeys<R extends ConnectRecord<R>> implements Transformation<R> {
    private static final Logger LOG = LoggerFactory.getLogger(InvalidateRedisKeys.class);
    private static final ObjectMapper mapper = new ObjectMapper()
            .configure(DeserializationConfig.Feature.FAIL_ON_UNKNOWN_PROPERTIES, false);

    @Override
    public ConfigDef config() {
        return new ConfigDef();
    }

    @Override
    public void configure(Map<String, ?> settings) {
    }

    @Override
    public void close() {
    }

    @Override
    public R apply(R r) {
        try {
            // Emit a tombstone: the generated key with a null value schema and null value.
            return r.newRecord(
                    r.topic(),
                    r.kafkaPartition(),
                    Schema.STRING_SCHEMA,
                    getKey(r.value()),
                    null,
                    null,
                    r.timestamp()
            );
        } catch (IOException e) {
            LOG.error("a.jsonhandling.{}", e.getMessage());
            return null;
        } catch (Exception e) {
            LOG.error("a.exception.{}", e.getMessage());
            return null;
        }
    }

    private String getKey(Object value) throws IOException {
        A a = mapper.readValue(value.toString(), A.class);
        long userId = a.getUser_id();
        int roundId = a.getRound_id();
        return KeyGeneratorUtil.getKey(userId, roundId);
    }
}
where A is
public class A {
    private long user_id;
    private int round_id;
    // getters getUser_id() / getRound_id() are assumed (omitted in the post)
}
And KeyGeneratorUtil contains a static function that generates the relevant string and returns the result.
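A hedged sketch of what that helper might look like; the actual key format is an assumption:

public final class KeyGeneratorUtil {
    private KeyGeneratorUtil() {
    }

    // Assumed format: "<userId>:<roundId>"; substitute whatever the real generator produces.
    public static String getKey(long userId, int roundId) {
        return userId + ":" + roundId;
    }
}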
I took help from
https://github.com/cjmatta/kafka-connect-insert-uuid
https://github.com/apache/kafka/tree/trunk/connect/transforms/src/main/java/org/apache/kafka/connect/transforms
When I try to initialize Kafka Connect, it fails with invalid configurations. Is there something that I am missing?
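For comparison, a complete submission to the Connect REST API wraps the settings in a name/config envelope, and a value.converter matching the message payloads is usually needed as well; a hedged sketch, where the connector name and hosts are placeholders:

{
  "name": "redis-invalidation-sink",
  "config": {
    "connector.class": "com.github.jcustenborder.kafka.connect.redis.RedisSinkConnector",
    "topics": "test_redis_deletion2",
    "tasks.max": "1",
    "redis.hosts": "REDIS-HOST",
    "redis.database": "0",
    "redis.client.mode": "Standalone",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    "transforms": "invalidaterediskeys",
    "transforms.invalidaterediskeys.type": "com.github.cjmatta.kafka.connect.smt.InvalidateRedisKeys"
  }
}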

Using non-blocking code from blocking code to reduce number of threads being used

In the case of blocking APIs, the caller expects the code to return some value,
e.g. response = blockingAPI.execute()
In the case of non-blocking code, we communicate via callbacks.
The overall (web) request pipeline is blocking and sequential: steps depend on the response of previous steps. In one of the steps we need to call a web service. The current setup assumes this service is called using a blocking API.
How can we call it in a non-blocking way and still return to the pipeline like blocking code? The main reason for trying a non-blocking call here is to be able to serve more HTTP requests per server by not using a new thread per service call. The intention is not to perform some other task while IO completes, as the sequence needs to be maintained. One way could be to get a Future, loop to check whether it is done, and sleep in between. That sounds bad.
We are using Spring and have a custom execution pipeline per request. I was thinking of using async-http-client for the service call.
Sample Sync Client
@Component
public class SyncClient {
    private static final int CONNECTION_TIMEOUT = 1000;
    private static final int CONNECTION_REQUEST_TIMEOUT = 1000;
    private static final int SOCKET_TIMEOUT = 1000;
    private static final String URL = "http://localhost:8080/returnback";
    private static final String RETURN_BACK = "returnBack";
    private final HttpClientBuilder httpClientBuilder;

    public SyncClient() {
        httpClientBuilder = HttpClientBuilder.create();
        httpClientBuilder.useSystemProperties();
        httpClientBuilder.setRedirectStrategy(new LaxRedirectStrategy());
    }

    public Map<String, String> send() throws Exception {
        final CloseableHttpClient closeableHttpClient = httpClientBuilder.build();
        final HttpRequestBase httpRequestBase = new HttpGet(URL);
        httpRequestBase.setConfig(getCustomConfig());
        final String returnBackValue = UUID.randomUUID().toString();
        httpRequestBase.addHeader(new BasicHeader(RETURN_BACK, returnBackValue));
        CloseableHttpResponse response = null;
        final Map<String, String> output = new HashMap<>();
        try {
            response = closeableHttpClient.execute(httpRequestBase);
            if (response == null) {
                throw new RuntimeException("Unexpected null http response");
            }
            output.put(returnBackValue, response.getFirstHeader(RETURN_BACK).getValue());
        } finally {
            HttpClientUtils.closeQuietly(response);
        }
        return output;
    }

    private RequestConfig getCustomConfig() {
        return RequestConfig.copy(RequestConfig.DEFAULT)
                .setConnectTimeout(CONNECTION_TIMEOUT)
                .setSocketTimeout(SOCKET_TIMEOUT)
                .setConnectionRequestTimeout(CONNECTION_REQUEST_TIMEOUT).build();
    }
}
Sample Polling Async Client
@Component
public class PollingAsyncClient {
    private static final String URL = "http://localhost:8080/returnback";
    private static final String RETURN_BACK = "returnBack";
    private static final int CONNECTION_TIMEOUT = 1000;
    private static final int REQUEST_TIMEOUT = 1000;
    private final AsyncHttpClient asyncHttpClient;

    public PollingAsyncClient() {
        final AsyncHttpClientConfig asyncHttpClientConfig = new DefaultAsyncHttpClientConfig.Builder()
                .setMaxConnections(200)
                .setMaxConnectionsPerHost(20)
                .setConnectTimeout(CONNECTION_TIMEOUT).build();
        this.asyncHttpClient = new DefaultAsyncHttpClient(asyncHttpClientConfig);
    }

    public Map<String, String> send() {
        final String returnBackValue = UUID.randomUUID().toString();
        // The header must be added before build(), otherwise it never makes it onto the request.
        final Request request = new RequestBuilder("GET")
                .setUrl(URL)
                .setRequestTimeout(REQUEST_TIMEOUT)
                .addHeader(RETURN_BACK, returnBackValue)
                .build();
        ListenableFuture<Response> listenableFuture = null;
        final Map<String, String> output = new HashMap<>();
        try {
            listenableFuture = asyncHttpClient.executeRequest(request);
            while (!listenableFuture.isDone()) {
                Thread.sleep(50);
            }
            Response response = listenableFuture.get();
            output.put(returnBackValue, response.getHeader(RETURN_BACK));
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
        return output;
    }
}
Now the consumer can call both clients as:
Map<String, String> map = null;
try {
    map = client.send();
} catch (Exception e) {
    e.printStackTrace();
}
And thus, to the consumer, both of them are blocking. Per my understanding, the non-blocking client will consume fewer resources (threads), but the sleep loop will eat up CPU. Another way could be to use wait and notify, but that would result in creating a lot of new objects per request. I will try that and update, though.
EDIT (for comments):
By "sleep loop eating up CPU" I meant the while loop: the thread keeps waking up and trying again.
EDIT:
Sample Waiting Async Client
public class WaitingAsyncClient {
    private static final String URL = "http://localhost:8080/returnback";
    private static final String RETURN_BACK = "returnBack";
    private static final int CONNECTION_TIMEOUT = 2000;
    private static final int REQUEST_TIMEOUT = 2000;
    private final AsyncHttpClient asyncHttpClient;

    public WaitingAsyncClient() {
        final AsyncHttpClientConfig asyncHttpClientConfig = new DefaultAsyncHttpClientConfig.Builder()
                .setMaxConnections(200)
                .setMaxConnectionsPerHost(20)
                .setConnectTimeout(CONNECTION_TIMEOUT).build();
        this.asyncHttpClient = new DefaultAsyncHttpClient(asyncHttpClientConfig);
    }

    public Map<String, String> send() {
        final Object lock = new Object();
        // Completion flag guards against a lost notify if the response
        // arrives before the caller reaches wait().
        final boolean[] done = {false};
        final String returnBackValue = UUID.randomUUID().toString();
        // As in the polling client, the header must be added before build().
        final Request request = new RequestBuilder("GET")
                .setUrl(URL)
                .setRequestTimeout(REQUEST_TIMEOUT)
                .addHeader(RETURN_BACK, returnBackValue)
                .build();
        ListenableFuture<Response> listenableFuture = null;
        final Map<String, String> output = new HashMap<>();
        try {
            listenableFuture = asyncHttpClient.executeRequest(request, new AsyncHandler<Response>() {
                private final Response.ResponseBuilder builder = new Response.ResponseBuilder();

                @Override
                public void onThrowable(Throwable t) {
                    synchronized (lock) {
                        done[0] = true;
                        lock.notify();
                    }
                }

                @Override
                public State onBodyPartReceived(HttpResponseBodyPart bodyPart) throws Exception {
                    builder.accumulate(bodyPart);
                    return State.CONTINUE;
                }

                @Override
                public State onStatusReceived(HttpResponseStatus responseStatus) throws Exception {
                    builder.accumulate(responseStatus);
                    return State.CONTINUE;
                }

                @Override
                public State onHeadersReceived(HttpResponseHeaders headers) throws Exception {
                    builder.accumulate(headers);
                    return State.CONTINUE;
                }

                @Override
                public Response onCompleted() throws Exception {
                    synchronized (lock) {
                        done[0] = true;
                        lock.notify();
                    }
                    return builder.build();
                }
            });
            synchronized (lock) {
                while (!done[0]) {
                    lock.wait();
                }
            }
            Response response = listenableFuture.get();
            output.put(returnBackValue, response.getHeader(RETURN_BACK));
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
        return output;
    }
}
Now, I think I need to do a performance check.

How to implement retry policies while sending data to another application?

I am working on an application which sends data to ZeroMQ. Below is what my application does:
I have a class SendToZeroMQ that sends data to ZeroMQ.
It adds the same data to a retryQueue in the same class so that it can be retried later on if an acknowledgement is not received. It uses a Guava cache with a maximumSize limit.
It has a separate thread which receives acknowledgements from ZeroMQ for the data that was sent earlier; if an acknowledgement is not received, then SendToZeroMQ will retry sending that same piece of data, and if an acknowledgement is received, we remove it from the retryQueue so that it cannot be retried again.
The idea is very simple, and I have to make sure my retry policy works fine so that I don't lose my data in the (very rare) case where we don't receive acknowledgements.
I am thinking of building two types of RetryPolicies, but I am not able to understand how to build them here, corresponding to my program:
RetryNTimes: In this it will retry N times with a particular sleep between each retry and after that, it will drop the record.
ExponentialBackoffRetry: In this it will exponentially keep retrying. We can set some max retry limit and after that it won't retry and will drop the record.
Below is my SendToZeroMQ class, which sends data to ZeroMQ, retries every 30 seconds from a background thread, and starts the ResponsePoller runnable, which keeps running forever:
public class SendToZeroMQ {
    private final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(5);
    private final Cache<Long, byte[]> retryQueue =
            CacheBuilder
                    .newBuilder()
                    .maximumSize(10000000)
                    .concurrencyLevel(200)
                    .removalListener(
                            RemovalListeners.asynchronous(new CustomListener(), executorService)).build();

    private static class Holder {
        private static final SendToZeroMQ INSTANCE = new SendToZeroMQ();
    }

    public static SendToZeroMQ getInstance() {
        return Holder.INSTANCE;
    }

    private SendToZeroMQ() {
        executorService.submit(new ResponsePoller());
        // retry every 30 seconds for now
        executorService.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                for (Entry<Long, byte[]> entry : retryQueue.asMap().entrySet()) {
                    sendTo(entry.getKey(), entry.getValue());
                }
            }
        }, 0, 30, TimeUnit.SECONDS);
    }

    public boolean sendTo(final long address, final byte[] encodedRecords) {
        Optional<ZMQSocketInfo> liveSockets = PoolManager.getInstance().getNextSocket();
        if (!liveSockets.isPresent()) {
            return false;
        }
        return sendTo(address, encodedRecords, liveSockets.get().getSocket());
    }

    public boolean sendTo(final long address, final byte[] encodedByteArray, final Socket socket) {
        ZMsg msg = new ZMsg();
        msg.add(encodedByteArray);
        boolean sent = msg.send(socket);
        msg.destroy();
        // adding to retry queue
        retryQueue.put(address, encodedByteArray);
        return sent;
    }

    public void removeFromRetryQueue(final long address) {
        retryQueue.invalidate(address);
    }
}
Below is my ResponsePoller class, which polls all the acknowledgements from ZeroMQ. If we get an acknowledgement back from ZeroMQ, then we remove that record from the retry queue so that it doesn't get retried; otherwise it will get retried.
public class ResponsePoller implements Runnable {
    private static final Random random = new Random();

    @Override
    public void run() {
        ZContext ctx = new ZContext();
        Socket client = ctx.createSocket(ZMQ.PULL);
        String identity = String.format("%04X-%04X", random.nextInt(), random.nextInt());
        client.setIdentity(identity.getBytes(ZMQ.CHARSET));
        client.bind("tcp://" + TestUtils.getIpaddress() + ":8076");
        PollItem[] items = new PollItem[] {new PollItem(client, Poller.POLLIN)};
        while (!Thread.currentThread().isInterrupted()) {
            // Tick once per second, pulling in arriving messages
            for (int centitick = 0; centitick < 100; centitick++) {
                ZMQ.poll(items, 10);
                if (items[0].isReadable()) {
                    ZMsg msg = ZMsg.recvMsg(client);
                    Iterator<ZFrame> it = msg.iterator();
                    while (it.hasNext()) {
                        ZFrame frame = it.next();
                        try {
                            long address = TestUtils.getAddress(frame.getData());
                            // remove from retry queue since we got the acknowledgement for this record
                            SendToZeroMQ.getInstance().removeFromRetryQueue(address);
                        } catch (Exception ex) {
                            // log error
                        } finally {
                            frame.destroy();
                        }
                    }
                    msg.destroy();
                }
            }
        }
        ctx.destroy();
    }
}
Question:
As you can see above, I am sending encodedRecords to ZeroMQ using the SendToZeroMQ class, and then they get retried every 30 seconds depending on whether we got an acknowledgement back via the ResponsePoller class or not.
For each encodedRecords there is a unique key called address, and that's what we will get back from ZeroMQ as an acknowledgement.
How can I extend this example to build the two retry policies mentioned above, so that I can pick which retry policy to use while sending data? I came up with the interface below, but I am not able to understand how I should move forward to implement those retry policies and use them in my code above.
public interface RetryPolicy {
    /**
     * Called when an operation has failed for some reason. This method should return
     * true to make another attempt.
     */
    public boolean allowRetry(int retryCount, long elapsedTimeMs);
}
Can I use guava-retrying or failsafe here, because these libraries already have many retry policies which I could use?
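As a sketch of what those libraries offer, Failsafe (2.x) lets you declare both policies directly; here sendOnce(...) is a placeholder for the actual send, and the exact delays are assumptions:

import net.jodah.failsafe.Failsafe;
import net.jodah.failsafe.RetryPolicy;
import java.time.Duration;
import java.time.temporal.ChronoUnit;

// RetryNTimes: fixed delay between attempts, bounded attempt count.
RetryPolicy<Boolean> retryNTimes = new RetryPolicy<Boolean>()
        .handleResult(false)               // retry while the send reports failure
        .withDelay(Duration.ofSeconds(30))
        .withMaxRetries(10);

// ExponentialBackoffRetry: delay grows exponentially up to a cap.
RetryPolicy<Boolean> exponentialBackoff = new RetryPolicy<Boolean>()
        .handleResult(false)
        .withBackoff(1, 64, ChronoUnit.SECONDS)
        .withMaxRetries(10);

boolean sent = Failsafe.with(exponentialBackoff).get(() -> sendOnce(address, encodedRecords));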
I am not able to work out all the details regarding how to use the relevant APIs, but as for the algorithm, you could try the following (a small interface sketch follows this list):
the retry policy needs some sort of state attached to each message (at least the number of times the current message has been retried, and possibly what the current delay is). You need to decide whether the RetryPolicy should keep that itself or whether you want to store it inside the message.
instead of allowRetry, you could have a method calculating when the next retry should occur (in absolute time, or as a number of milliseconds in the future), which will be a function of the state mentioned above
the retry queue should contain information on when each message should be retried.
instead of using scheduleAtFixedRate, find the message in the retry queue which has the lowest when_is_next_retry (possibly by sorting on absolute retry timestamp and picking the first), and let the executorService reschedule itself using schedule and the time_to_next_retry
for each retry, pull it from the retry queue, send the message, use the RetryPolicy to calculate when the next retry should be (if it is to be retried at all), and insert it back into the retry queue with a new value for when_is_next_retry (if the RetryPolicy returns -1, it could mean that the message shall not be retried any more)
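A minimal sketch of the interface those points describe, with the names assumed:

public interface RetryPolicy {
    /**
     * @param retryCount how many times this message has been retried so far
     * @return millis until the next retry, or -1 to drop the message
     */
    long nextRetryDelayMs(int retryCount);
}

// Example: exponential backoff capped at 10 attempts.
public class ExponentialBackoff implements RetryPolicy {
    @Override
    public long nextRetryDelayMs(int retryCount) {
        return retryCount >= 10 ? -1 : 1000L * (1L << retryCount);
    }
}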
Not a perfect way, but it can be achieved in the following way as well.
public interface RetryPolicy {
    public boolean allowRetry();
    public void decreaseRetryCount();
}
Create two implementations. For RetryNTimes:
public class RetryNTimes implements RetryPolicy {
    private int maxRetryCount;

    public RetryNTimes(int maxRetryCount) {
        this.maxRetryCount = maxRetryCount;
    }

    public boolean allowRetry() {
        return maxRetryCount > 0;
    }

    public void decreaseRetryCount() {
        maxRetryCount = maxRetryCount - 1;
    }
}
For ExponentialBackoffRetry:
public class ExponentialBackoffRetry implements RetryPolicy {
    private int maxRetryCount;
    private final Date retryUpto;

    public ExponentialBackoffRetry(int maxRetryCount, Date retryUpto) {
        this.maxRetryCount = maxRetryCount;
        this.retryUpto = retryUpto;
    }

    // Allows retries until either the attempt count is exhausted or the deadline passes.
    public boolean allowRetry() {
        Date date = new Date();
        return maxRetryCount > 0 && date.compareTo(retryUpto) < 0;
    }

    public void decreaseRetryCount() {
        maxRetryCount = maxRetryCount - 1;
    }
}
You need to make some changes in the SendToZeroMQ class:
public class SendToZeroMQ {
    private final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(5);
    private final Cache<Long, RetryMessage> retryQueue =
            CacheBuilder
                    .newBuilder()
                    .maximumSize(10000000)
                    .concurrencyLevel(200)
                    .removalListener(
                            RemovalListeners.asynchronous(new CustomListener(), executorService)).build();

    private static class Holder {
        private static final SendToZeroMQ INSTANCE = new SendToZeroMQ();
    }

    public static SendToZeroMQ getInstance() {
        return Holder.INSTANCE;
    }

    private SendToZeroMQ() {
        executorService.submit(new ResponsePoller());
        // retry every 30 seconds for now
        executorService.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                for (Map.Entry<Long, RetryMessage> entry : retryQueue.asMap().entrySet()) {
                    RetryMessage retryMessage = entry.getValue();
                    if (retryMessage.getRetryPolicy().allowRetry()) {
                        retryMessage.getRetryPolicy().decreaseRetryCount();
                        entry.setValue(retryMessage);
                        sendTo(entry.getKey(), retryMessage.getMessage(), retryMessage);
                    } else {
                        retryQueue.asMap().remove(entry.getKey());
                    }
                }
            }
        }, 0, 30, TimeUnit.SECONDS);
    }

    public boolean sendTo(final long address, final byte[] encodedRecords, RetryMessage retryMessage) {
        Optional<ZMQSocketInfo> liveSockets = PoolManager.getInstance().getNextSocket();
        if (!liveSockets.isPresent()) {
            return false;
        }
        if (null == retryMessage) {
            RetryPolicy retryPolicy = new RetryNTimes(10);
            retryMessage = new RetryMessage(retryPolicy, encodedRecords);
            retryQueue.asMap().put(address, retryMessage);
        }
        return sendTo(address, encodedRecords, liveSockets.get().getSocket());
    }

    public boolean sendTo(final long address, final byte[] encodedByteArray, final ZMQ.Socket socket) {
        ZMsg msg = new ZMsg();
        msg.add(encodedByteArray);
        boolean sent = msg.send(socket);
        msg.destroy();
        return sent;
    }

    public void removeFromRetryQueue(final long address) {
        retryQueue.invalidate(address);
    }
}
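Callers would then pass null on the first attempt so a fresh policy gets attached; a hedged usage sketch (RetryMessage is the wrapper class assumed by this answer):

// First send: the null retryMessage attaches a RetryNTimes(10) policy and enqueues for retry.
SendToZeroMQ.getInstance().sendTo(address, encodedRecords, null);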
Here is a little working simulation of your environment that shows how this can be done. Note that the Guava cache is the wrong data structure here, since you aren't interested in eviction (I think), so I'm using a ConcurrentHashMap:
package experimental;

import static java.util.concurrent.TimeUnit.MILLISECONDS;

import java.util.Arrays;
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;

class Experimental {
    /** Return the desired backoff delay in millis for the given retry number, which is 1-based. */
    interface RetryStrategy {
        long getDelayMs(int retry);
    }

    enum ConstantBackoff implements RetryStrategy {
        INSTANCE;

        @Override
        public long getDelayMs(int retry) {
            return 1000L;
        }
    }

    enum ExponentialBackoff implements RetryStrategy {
        INSTANCE;

        @Override
        public long getDelayMs(int retry) {
            return 100 + (1L << retry);
        }
    }

    static class Sender {
        private final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(4);
        private final ConcurrentMap<Long, Retrier> pending = new ConcurrentHashMap<>();

        /** Send the given data with given address on the given socket. */
        void sendTo(long addr, byte[] data, int socket) {
            System.err.println("Sending " + Arrays.toString(data) + "#" + addr + " on " + socket);
        }

        private class Retrier implements Runnable {
            private final RetryStrategy retryStrategy;
            private final long addr;
            private final byte[] data;
            private final int socket;
            private int retry;
            private Future<?> future;

            Retrier(RetryStrategy retryStrategy, long addr, byte[] data, int socket) {
                this.retryStrategy = retryStrategy;
                this.addr = addr;
                this.data = data;
                this.socket = socket;
                this.retry = 0;
            }

            synchronized void start() {
                if (future == null) {
                    future = executorService.submit(this);
                    pending.put(addr, this);
                }
            }

            synchronized void cancel() {
                if (future != null) {
                    future.cancel(true);
                    future = null;
                }
            }

            private synchronized void reschedule() {
                if (future != null) {
                    future = executorService.schedule(this, retryStrategy.getDelayMs(++retry), MILLISECONDS);
                }
            }

            @Override
            public synchronized void run() {
                sendTo(addr, data, socket);
                reschedule();
            }
        }

        long getVerifiedAddr() {
            System.err.println("Pending messages: " + pending.size());
            Iterator<Long> i = pending.keySet().iterator();
            long addr = i.hasNext() ? i.next() : 0;
            return addr;
        }

        class CancellationPoller implements Runnable {
            @Override
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException ex) {
                        Thread.currentThread().interrupt();
                    }
                    long addr = getVerifiedAddr();
                    if (addr == 0) {
                        continue;
                    }
                    System.err.println("Verified message (to be cancelled) " + addr);
                    Retrier retrier = pending.remove(addr);
                    if (retrier != null) {
                        retrier.cancel();
                    }
                }
            }
        }

        Sender initialize() {
            executorService.submit(new CancellationPoller());
            return this;
        }

        void sendWithRetriesTo(RetryStrategy retryStrategy, long addr, byte[] data, int socket) {
            new Retrier(retryStrategy, addr, data, socket).start();
        }
    }

    public static void main(String[] args) {
        Sender sender = new Sender().initialize();
        for (long i = 1; i <= 10; i++) {
            sender.sendWithRetriesTo(ConstantBackoff.INSTANCE, i, null, 42);
        }
        for (long i = -1; i >= -10; i--) {
            sender.sendWithRetriesTo(ExponentialBackoff.INSTANCE, i, null, 37);
        }
    }
}
You can use Apache Camel. It provides a component for ZeroMQ, and tools like error handlers, redelivery policies, and dead letter channels are provided natively.
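A hedged sketch of what that might look like in Camel's Java DSL; the zeromq endpoint URI is an assumption, so check the camel-zeromq component docs for the exact form:

import org.apache.camel.builder.RouteBuilder;

public class ZeroMqRetryRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Redeliver up to 10 times with exponential backoff, then park the message.
        errorHandler(deadLetterChannel("seda:dead")
                .maximumRedeliveries(10)
                .redeliveryDelay(1000)
                .useExponentialBackOff());

        from("direct:send")
                .to("zeromq:tcp://localhost:8076?socketType=PUSH"); // assumed endpoint URI
    }
}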

HbaseSink Flume Exception

Following is my Flume sink code to split an event and store it in HBase. It gives me an error when it takes a null event.
public class MyHbaseEventSerializer implements HbaseEventSerializer {
    // Fields implied by initialize() below (not declared in the original post).
    private byte[] payload;
    private byte[] cf;
    private Event e;

    @Override
    public void configure(Context context) {
    }

    @Override
    public void initialize(Event event, byte[] columnFamily) {
        this.payload = event.getBody();
        this.cf = columnFamily;
        this.e = event;
    }

    @Override
    public List<Row> getActions() throws FlumeException {
        List<Row> actions = Lists.newArrayList();
        try {
            // here splitting event and store in Hbase.
        } catch (Exception e) {
            throw new FlumeException("Could not get row key!", e);
        }
        return actions;
    }

    @Override
    public List<Increment> getIncrements() {
        List<Increment> incs = new LinkedList<Increment>();
        return incs; // return was missing in the original post
    }

    @Override
    public void close() {
    }
}
It continues infinitely with this error:
ERROR : [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.SinkRunner$PollingRunner.run:160) - Unable to deliver event. Exception follows.
java.lang.IllegalStateException: begin() called when transaction is OPEN!
    at org.apache.flume.channel.BasicTransactionSemantics.begin(BasicTransactionSemantics.java:131)
    at org.apache.flume.sink.hbase.HBaseSink.process(HBaseSink.java:234)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
    at java.lang.Thread.run(Thread.java:724)
Does anyone have a solution to resolve this?
Thanks in advance.
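For what it's worth, one common trigger for this kind of loop is the serializer throwing on an empty or null event body, so the sink fails and re-processes the same transaction over and over. A hedged guard, assuming the fields above:

@Override
public List<Row> getActions() throws FlumeException {
    List<Row> actions = Lists.newArrayList();
    // Hedged guard: skip null/empty bodies instead of throwing, so the sink
    // can commit the transaction and move past the bad event.
    if (payload == null || payload.length == 0) {
        return actions;
    }
    try {
        // split event and store in HBase as before
    } catch (Exception ex) {
        throw new FlumeException("Could not get row key!", ex);
    }
    return actions;
}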

Play Test - Data persistence issues - POSTed data is not available when using GET

I'm working on a Play 2.0 based RESTful API implementation, and when I run the test cases (CRUD operations), I see that the content POSTed in one test (successful 201 response) is not available when I do a GET operation in a subsequent test case.
Please take a look at my JUnit test class -
public class TagTest {
    public static FakeApplication app;
    private static String AUTH_HEADER = "Authorization";
    private static String AUTH_VALUE = "Basic a25paadsfdfasdfdsfasdmeSQxMjM=";
    private static int tagId = 0;
    private static Map<String, String> postDataMap = new HashMap<String, String>();
    private static Map<String, String> updateDataMap = new HashMap<String, String>();
    private static String searchText = null;

    @BeforeClass
    public static void setUpBeforeClass() {
        // Set up new FakeApplication before running any tests
        app = Helpers.fakeApplication();
        Helpers.start(app);
        postDataMap.put("text", "Created");
        updateDataMap.put("text", "Updated");
        searchText = "Date"; // case insensitive substring pattern for "Updated"
    }

    @Test
    public void createTagTest() {
        // Note: each test also starts its own fresh FakeApplication below.
        app = Helpers.fakeApplication();
        running(fakeApplication(), new Runnable() {
            public void run() {
                JsonNode json = Json.toJson(postDataMap);
                FakeRequest request = new FakeRequest().withJsonBody(json);
                Result result = callAction(controllers.routes.ref.Application.createTag(), request.withHeader(TagTest.AUTH_HEADER, TagTest.AUTH_VALUE));
                Map<String, String> headerMap = Helpers.headers(result);
                String location = headerMap.get(Helpers.LOCATION);
                String tagIdStr = location.replace("/tags/", "");
                try {
                    tagId = Integer.parseInt(tagIdStr);
                    assertThat(status(result)).isEqualTo(Helpers.CREATED);
                    // Getting the resource URI back from the API means the create succeeded
                    System.out.println("Tag Id : " + tagId + " Location : " + headerMap.get(Helpers.LOCATION));
                } catch (NumberFormatException e) {
                    System.out.println("Inside NumberFormatException");
                    e.printStackTrace();
                    assertThat(0).isEqualTo(1); // force a failure
                }
                System.out.println("createTagTest is successful");
            }
        });
    }

    @Test
    public void getTagTest() {
        app = Helpers.fakeApplication();
        running(fakeApplication(), new Runnable() {
            public void run() {
                FakeRequest request = new FakeRequest();
                Result result = callAction(controllers.routes.ref.Application.getTag(tagId), request.withHeader(TagTest.AUTH_HEADER, TagTest.AUTH_VALUE));
                String content = contentAsString(result);
                if (content.length() == 0) {
                    assertThat(status(result)).isEqualTo(Helpers.NO_CONTENT);
                } else {
                    assertThat(status(result)).isEqualTo(Helpers.OK);
                }
                System.out.println("getTagTest is successful");
            }
        });
    }

    @Test
    public void updateTagTest() {
        app = Helpers.fakeApplication();
        running(fakeApplication(), new Runnable() {
            public void run() {
                JsonNode json = Json.toJson(updateDataMap);
                FakeRequest request = new FakeRequest().withJsonBody(json);
                Result result = callAction(controllers.routes.ref.Application.updateTag(tagId),
                        request.withHeader(TagTest.AUTH_HEADER, TagTest.AUTH_VALUE));
                assertThat(status(result)).isEqualTo(Helpers.OK);
                System.out.println("updateTagTest is successful");
            }
        });
    }

    @Test
    public void getAllTagsTest() {
        app = Helpers.fakeApplication();
        running(fakeApplication(), new Runnable() {
            public void run() {
                FakeRequest request = new FakeRequest();
                Result result = callAction(controllers.routes.ref.Application.getAllTags(null), request.withHeader(TagTest.AUTH_HEADER, TagTest.AUTH_VALUE));
                String content = contentAsString(result);
                System.out.println(content);
                if (content.length() == 0) {
                    System.out.println("No content");
                    assertThat(status(result)).isEqualTo(Helpers.NO_CONTENT);
                } else {
                    System.out.println("Content");
                    assertThat(status(result)).isEqualTo(Helpers.OK);
                }
                System.out.println("getAllTagsTest is successful");
            }
        });
    }

    @Test
    public void getTagsByTextTest() {
        app = Helpers.fakeApplication();
        running(fakeApplication(), new Runnable() {
            public void run() {
                FakeRequest request = new FakeRequest();
                Result result = callAction(controllers.routes.ref.Application.getAllTags(searchText), request.withHeader(TagTest.AUTH_HEADER, TagTest.AUTH_VALUE));
                String content = contentAsString(result);
                if (content.length() == 0) {
                    assertThat(status(result)).isEqualTo(Helpers.NO_CONTENT);
                } else {
                    assertThat(status(result)).isEqualTo(Helpers.OK);
                }
                System.out.println("getAllTagsByTextTest is successful");
            }
        });
    }

    @Test
    public void deleteTagTest() {
        app = Helpers.fakeApplication();
        running(fakeApplication(), new Runnable() {
            public void run() {
                FakeRequest request = new FakeRequest();
                Result result = callAction(controllers.routes.ref.Application.deleteTag(tagId), request.withHeader(TagTest.AUTH_HEADER, TagTest.AUTH_VALUE));
                assertThat(status(result)).isEqualTo(Helpers.OK);
                System.out.println("deleteTagTest is successful");
            }
        });
    }

    @AfterClass
    public static void tearDownAfterClass() {
        // Stop FakeApplication after all tests complete
        Helpers.stop(app);
    }
}
When I run the tests, the tag is created, but it is not picked up in the subsequent test when doing GET /tags/1, which results in 204 No Content.
Please throw some light on what could be the reason behind this. Another observation: it worked yesterday, and all of a sudden this issue has come into the picture.
It would be a great help if someone can help me resolve this issue.
JUnit does not support an ordered sequence of test methods. That's a feature--not a bug. Tests should be independent. As a result, you can't guarantee that getTagTest comes after createTagTest. Sometimes it will; sometimes it won't.
The individual operations should have their own test fixtures, with the appropriate preconditions defined with @BeforeClass.
If you insist on a defined order, then use dependsOnMethods in TestNG.
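If you do want a deterministic order while staying on JUnit 4 (4.11+), method-name ordering is also available, though independent fixtures remain the better fix; a minimal sketch:

import org.junit.FixMethodOrder;
import org.junit.runners.MethodSorters;

// Runs test methods in lexicographic name order, e.g. t1_createTag, t2_getTag, ...
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class TagTest {
    // name the tests so the required order matches lexicographic order
}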