Now I'm working with Apache Kafka and have a task:
We have some CSV files in a directory; they are mini-batch files, each about 25-30 MB. All I need to do is parse each file and put its contents into Kafka.
As I can see, Kafka has an interesting thing called a Connector.
I can create a SourceConnector and a SourceTask, but I don't understand one thing:
when I have finished handling a file, how can I stop or delete my task?
For example, I have a dummy connector:
public class DummySourceConnector extends SourceConnector {

    private static final Logger logger = LogManager.getLogger();

    @Override
    public String version() {
        logger.info("version");
        return "1";
    }

    @Override
    public ConfigDef config() {
        logger.info("config");
        return new ConfigDef();
    }

    @Override
    public Class<? extends Task> taskClass() {
        return DummySourceTask.class;
    }

    @Override
    public void start(Map<String, String> props) {
        logger.info("start {}", props);
    }

    @Override
    public void stop() {
        logger.info("stop");
    }

    @Override
    public List<Map<String, String>> taskConfigs(int maxTasks) {
        logger.info("taskConfigs {}", maxTasks);
        return ImmutableList.of(ImmutableMap.of("key", "value"));
    }
}
And the Task:
public class DummySourceTask extends SourceTask {

    private static final Logger logger = LogManager.getLogger();
    private long offset = 0;

    @Override
    public String version() {
        logger.info("version");
        return "1";
    }

    @Override
    public void start(Map<String, String> props) {
        logger.info("start {}", props);
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        Thread.sleep(3000);
        final String value = "Offset " + offset++ + " Timestamp " + Instant.now().toString();
        logger.info("poll value {}", value);
        return ImmutableList.of(new SourceRecord(
                ImmutableMap.of("partition", 0),
                ImmutableMap.of("offset", offset),
                "topic-dummy",
                SchemaBuilder.STRING_SCHEMA,
                value
        ));
    }

    @Override
    public void stop() {
        logger.info("stop");
    }
}
But how can I close my task when it's all done?
Or maybe you can suggest another approach for this task.
Thanks for your help!
First, I encourage you to have a look at existing connectors here. I feel like the spooldir connector would be helpful to you. It may even be possible for you to just download and install it without having to write any code at all.
Second, if I'm understanding correctly, you want to stop a task. I believe this discussion is what you want.
A not so elegant solution for terminating a Task when an event happens is to check for the event in the task's source code and call System.exit(1).
Nevertheless, the most elegant solution I have found is this:
When the event occurs, the Connector Task makes a REST call to the Connect worker in order to stop the Connector that runs the Task.
To do this, the Task itself needs to know the name of the Connector that runs it, which you can find by following the steps of this discussion.
So the name of the connector is in the properties argument of the Task: there is a property with the "name" key, whose value is the name of the Connector that executes the Task (the one we want to stop when the event occurs).
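For example, a minimal sketch of capturing that name in the task's start() method (assuming, as described above, that the worker passes the standard "name" property through; connectorName is a field you add yourself):

private String connectorName;

@Override
public void start(Map<String, String> props) {
    // the worker includes the connector's name under the "name" key;
    // keep it for the REST call shown below
    connectorName = props.get("name");
}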
Finally, we make a REST call and we get a 204 answer with no content if the task stops.
The code of the call is this:
try {
    URL url = new URL("url/" + connectorName);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("DELETE");
    conn.setRequestProperty("Accept", "application/json");
    if (conn.getResponseCode() != 204) {
        throw new RuntimeException("Failed : HTTP error code : "
                + conn.getResponseCode());
    }
    BufferedReader br = new BufferedReader(new InputStreamReader(
            (conn.getInputStream())));
    String output;
    System.out.println("Task Stopped \n");
    while ((output = br.readLine()) != null) {
        System.out.println(output);
    }
    conn.disconnect();
} catch (MalformedURLException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
Now all the Connector's Tasks stop.
(Of course, as mentioned previously, you have to keep in mind that the logic of each SourceTask and each SinkTask is never-ending. They are not supposed to stop when an event occurs, but to keep searching continuously for new entries in the files you provide them. So usually you stop them with a REST call, and if you want them to stop when an event occurs, you put that REST call in their own code.)
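As a rough sketch (not the asker's code) of how that could look for the CSV use case: readNextBatch(), fileFullyRead and deleteConnector() are hypothetical helpers, with deleteConnector() wrapping the DELETE call shown above:

@Override
public List<SourceRecord> poll() throws InterruptedException {
    List<SourceRecord> records = readNextBatch();   // hypothetical: parse the next chunk of the CSV file
    if (records.isEmpty() && fileFullyRead) {       // hypothetical end-of-file flag
        deleteConnector(connectorName);             // DELETE /connectors/<name> against the worker's REST API
        return null;                                // returning null simply means "no data right now"
    }
    return records;
}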
Related
I'm using Akka 2.5.6 with Java 8 and I want to know the right way to shut down the ActorSystem. Part of the functionality of my code is to process some XML files and validate them; to achieve this I have created 3 actors:
Controller, Processor and Validator.
The Controller is responsible for initiating the process and sending the files, one by one, together with other information, to the Processor. The Processor then creates a digital signature of the file and sends the response to the Validator, which finally validates the status and sends an OK message back to the Controller. The Controller counts the number of validated files and compares it with the total number of files. Once the total of files and the total of validated files are equal, I shut down the ActorSystem with the terminate() method.
The method to finish is as follows:
private void endActors()
{
    ActorSystem actorSystem = getContext().system();
    Future<Terminated> terminated = actorSystem.terminate();
    do {
        log.info("Waiting to finish ...");
        try {
            Thread.sleep(30000L);
        } catch (InterruptedException ex) {
            log.error("Error in Thread.");
        }
    } while (!terminated.isCompleted());
    log.info("Actors finished processing.");
}
The loop never ends because the future never completes. I don't know if this is the right way; I hope you have understood me and can help me or give me some advice.
Try the following (the key here is the onComplete callback). I wrote a class along these lines to use in the setup and teardown of JUnit tests, to avoid issues with an actor system not having fully terminated in the teardown of one test before a new one is created in another test (that caused "port already in use" issues).
private static ActorSystem system = null;
private static Future<Terminated> terminatedFuture;

public static ActorSystem getFreshActorSystem() {
    tearDownActorSystem();
    while (system != null) {
        try {
            Thread.sleep(500L);
        } catch (InterruptedException e) {
        }
    }
    system = ActorSystem.create();
    return system;
}

public static void tearDownActorSystem() {
    if (system != null && !isInMiddleOfTerminating()) {
        terminatedFuture = system.terminate();
        terminatedFuture.onComplete(new OnComplete<Terminated>() {
            @Override
            public void onComplete(Throwable failure, Terminated success) throws Throwable {
                system = null;
                terminatedFuture = null;
            }
        }, system.dispatcher());
    }
}

private static boolean isInMiddleOfTerminating() {
    return terminatedFuture != null;
}
At the moment I am trying to find the best way to manage concurrent API calls within my application. Currently I have been using HttpURLConnection to make my HTTP calls, and although it works fine, I eventually run into some 'Socket exception: connection reset' errors while calls are being made. I am using multithreading, since I have many different API calls running concurrently.
I have looked into using AsyncRestTemplate, and although it works, I see the pool and thread printed in the console, e.g. [pool-6-thread-1]; however, when it reaches [pool-2018-thread-1] it stops making any more API calls.
This is the code that I am using:
//This method is inside another class in my actual application but is shown here for simplicity
public static ListenableFuture<ResponseEntity<String>> getLastPrice(AsyncRestTemplate asyncRestTemplate) {
    String url = "https://bittrex.com/api/v1.1/public/getmarketsummary?market=btc-dar";
    asyncRestTemplate = new AsyncRestTemplate(new ConcurrentTaskExecutor(Executors.newCachedThreadPool()));
    return asyncRestTemplate.exchange(url, HttpMethod.GET, new HttpEntity<>("result"), String.class);
}

public PriceThread(JTextField lastPriceJT) {
    this.lastPriceJT = lastPriceJT;
}

@Override
public void run() {
    AsyncRestTemplate asyncRestTemplate = new AsyncRestTemplate(new ThreadPoolTaskExecutor());
    while (true) {
        try {
            getLastPrice(asyncRestTemplate)
                    .addCallback(new ListenableFutureCallback<ResponseEntity<String>>() {
                        @Override
                        public void onSuccess(ResponseEntity<String> response) {
                            //TODO: Add real response handling
                            try {
                                JSONObject result = new JSONObject(response.getBody());
                                String status = LOGGER.printResponseToLogger(result);
                                BigDecimal last = result.getJSONArray("result").getJSONObject(0).getBigDecimal("Last");
                                lastPriceJT.setText(last.toPlainString());
                            } catch (Exception e) {
                                LOGGER.printResponseToLogger(e.getMessage());
                            }
                        }

                        @Override
                        public void onFailure(Throwable ex) {
                            //TODO: Add real logging solution
                            LOGGER.printResponseToLogger(ex.getMessage());
                        }
                    });
        } catch (Exception e) {
            LOGGER.printResponseToLogger(e.getMessage());
        }
    }
}
Currently I'm thinking the solution to this issue would be to reuse the pools so that the count doesn't climb to 2018, if that is possible, but I have not found a way to do so.
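A sketch of that idea (not a drop-in fix, just the shape of it): build one executor and one AsyncRestTemplate up front and hand the same instance to every call, instead of constructing a new cached thread pool inside getLastPrice:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.scheduling.concurrent.ConcurrentTaskExecutor;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.web.client.AsyncRestTemplate;

public final class PriceApi {

    // one fixed pool and one template shared by every request, so the
    // pool number in the logs stays constant instead of climbing to 2018
    private static final ExecutorService POOL = Executors.newFixedThreadPool(8); // size is an arbitrary example
    private static final AsyncRestTemplate TEMPLATE =
            new AsyncRestTemplate(new ConcurrentTaskExecutor(POOL));

    public static ListenableFuture<ResponseEntity<String>> getLastPrice() {
        String url = "https://bittrex.com/api/v1.1/public/getmarketsummary?market=btc-dar";
        return TEMPLATE.exchange(url, HttpMethod.GET, new HttpEntity<>("result"), String.class);
    }
}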
I'm very new to Openfire and this is my first time using Java. I got stuck when trying to develop a plugin for CRUD. Could you give me a sample showing how to give a plugin CRUD abilities? Thanks for your help...
You can start from this answer: Mapping Openfire Custom plugin with aSmack Client
and follow the official tutorial together with the first 3 points of the answer.
About CRUD:
Let's assume you want to audit all your messages as XML in your database, so you'll implement a PacketInterceptor to keep the scenario simple.
Your plugin class will look like:
public class MyCustomPlugin implements Plugin, PacketInterceptor {//foo}
In the initializePlugin method you'll have an invocation like:
public void initializePlugin(PluginManager manager, File pluginDirectory)
{
    InterceptorManager.getInstance().addInterceptor(this);
}
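Assuming the usual pairing, you would also remove the interceptor again in destroyPlugin so the plugin can be unloaded cleanly:

public void destroyPlugin()
{
    // unregister the interceptor so the plugin can be reloaded or unloaded
    InterceptorManager.getInstance().removeInterceptor(this);
}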
and in the interceptPacket method something like this:
@Override
public void interceptPacket(Packet packet, Session session,
        boolean incoming, boolean processed) throws PacketRejectedException {
    boolean done = true;
    if (!processed)
    {
        done = doMyCRUDAction(packet);
    }
    if (!done)
    { //do something if an error occurred
    }
}
Now let's write to the database:
private static final String AUDIT_CHAT =
        "INSERT INTO MYTABLE(MESSAGEASXML) VALUES (?)";

private boolean doMyCRUDAction(Packet packet)
{
    if ((packet instanceof Message))
    {
        Message message = (Message) packet.createCopy();
        boolean isAudited = false;
        Connection con = null;
        PreparedStatement statement = null;
        try {
            con = DbConnectionManager.getConnection();
            statement = con.prepareStatement(AUDIT_CHAT);
            statement.setString(1, message.toString());
            statement.executeUpdate();
            isAudited = true;
        }
        catch (SQLException e) {
            Log.error(e.getMessage(), e);
        }
        catch (Exception ex)
        {
            Log.error(ex.getMessage(), ex);
        }
        finally {
            DbConnectionManager.closeConnection(statement, con);
        }
        return isAudited;
    }
    return false;
}
Please keep in mind this is a reduced snippet of working code, so there may still be some syntax to fix.
If your CRUD must respond to an explicit IQ request, you'll have to extend IQHandler, create a custom IQ, and send it back to the client from the handleIQ(IQ packet) method. You can check the Openfire source code for detailed and more complex implementations.
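As a rough, from-memory skeleton of that approach (the element name, namespace and class name are made up; double-check the exact IQHandler signatures against your Openfire version):

public class MyCrudIQHandler extends IQHandler {

    private final IQHandlerInfo info = new IQHandlerInfo("query", "com:example:crud");

    public MyCrudIQHandler() {
        super("My CRUD IQ handler");
    }

    @Override
    public IQ handleIQ(IQ packet) throws UnauthorizedException {
        // run the CRUD operation for this request...
        IQ reply = IQ.createResultIQ(packet);
        // ...and attach whatever result element the client expects to the reply
        return reply;
    }

    @Override
    public IQHandlerInfo getInfo() {
        return info;
    }
}

The handler would then be registered from initializePlugin, for example via XMPPServer.getInstance().getIQRouter().addHandler(new MyCrudIQHandler());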
I'm working with Java and MySQL for the database, and I ran into a weird problem:
One of my clients has a very unstable connection and sometimes packet loss can be high. OK, that's not the software's fault, I know, but I went there to test and, when the program calls DriverManager.getConnection() and the network connection gets unstable, that line can lock up the application (or the given thread) for several minutes. I have of course added some logic to use another data source for caching data locally and then saving it to the network host when possible, but I can't let the program hang for longer than about 10 s (and this method doesn't seem to have any timeout setting).
So I came up with a workaround like this:
public class CFGBanco implements Serializable {
    public String driver = "com.mysql.jdbc.Driver";
    public String host;
    public String url = "";
    public String proto = "jdbc:mysql://";
    public String database;
    public String user;
    public String password;
}

private static java.sql.Connection Connect(CFGBanco dataHost) throws java.sql.SQLException, ClassNotFoundException
{
    dataHost.url = dataHost.proto + dataHost.host;
    if (dataHost.database != null && !dataHost.database.equals("")) dataHost.url += "/" + dataHost.database;
    java.lang.Class.forName(dataHost.driver);
    ArrayList<Object> lh = new ArrayList<>();
    lh.add(0, null);
    Thread ConThread = new Thread(() -> {
        try {
            lh.add(0, java.sql.DriverManager.getConnection(
                    dataHost.url, dataHost.user, dataHost.password));
        } catch (Exception x) {
            System.out.println(x.getMessage());
        }
    }, "ConnThread-" + SessId);
    ConThread.start();
    Thread TimeoutThread = new Thread(() -> {
        int c = 0;
        int delay = 100;
        try {
            try {
                do {
                    try {
                        if (ConThread.isAlive())
                            Thread.sleep(delay);
                        else
                            break;
                    } catch (Exception x) {}
                } while ((c += delay) < 10000);
            } catch (Exception x) {}
        } finally {
            try {
                ConThread.stop();
            } catch (Exception x) {}
        }
    }, "ConTimeout-" + SessId);
    TimeoutThread.start();
    try {
        ConThread.join();
    } catch (Exception x) {}
    if (lh.get(0) == null)
        throw new SQLException();
    return (Connection) lh.get(0);
}
I call getConnection from another thread, then create a secondary "timeout" thread to watch it, and then join the calling thread to the ConThread.
I have been getting results close to what I expected, but it got me wondering:
Is there a better way to do this? Does the creation of 2 threads eat up enough system resources to make this approach impractical?
You need connection pooling. Pool the connections and reuse them rather than recreating them every time. One such library for DB connection pooling is DBCP by Apache.
It will take care of connections getting dropped, and so on. You can set a validation query, and it will be run against the DB before a connection is borrowed from the pool; once it validates successfully, your actual query is fired.
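A minimal sketch of that with Commons DBCP2 (host, credentials, pool sizes and the MySQL connectTimeout/socketTimeout URL options are placeholder values to adapt):

import java.sql.Connection;
import java.sql.SQLException;

import org.apache.commons.dbcp2.BasicDataSource;

public final class Database {

    private static final BasicDataSource DS = new BasicDataSource();

    static {
        DS.setDriverClassName("com.mysql.jdbc.Driver");
        // connectTimeout/socketTimeout (ms) are Connector/J URL options, so a dead
        // network fails fast instead of hanging getConnection() for minutes
        DS.setUrl("jdbc:mysql://your-host/your-db?connectTimeout=10000&socketTimeout=10000");
        DS.setUsername("user");
        DS.setPassword("password");
        DS.setValidationQuery("SELECT 1");  // run before a connection is handed out
        DS.setTestOnBorrow(true);
        DS.setMaxTotal(10);                 // example pool size
        DS.setMaxWaitMillis(10000);         // give up borrowing after 10 s
    }

    public static Connection getConnection() throws SQLException {
        return DS.getConnection();
    }
}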
Hey, I have been trying to work this out for the last day or so but am hitting a brick wall. I'm trying to unit test this bit of code, but I'm not sure whether I need to use EasyMock or not. I've seen a few examples online, but they seem to use older techniques.
public boolean verifyConnection(final String url) {
    boolean result;
    final int timeout = getConnectionTimeout();
    if (timeout < 0) {
        log.info("No need to verify connection to client. Supplied timeout = {}", timeout);
        result = true;
    } else {
        try {
            log.debug("URL: {} Timeout: {} ", url, timeout);
            final URL targetUrl = new URL(url);
            final HttpURLConnection connection = (HttpURLConnection) targetUrl.openConnection();
            connection.setConnectTimeout(timeout);
            connection.connect();
            result = true;
        } catch (ConnectException e) {
            log.warn("Could not connect to client supplied url: " + url, e);
            result = false;
        } catch (MalformedURLException e) {
            log.error("Malformed client supplied url: " + url, e);
            result = false;
        } catch (IOException e) {
            log.warn("Could not connect to client supplied url: " + url, e);
            result = false;
        }
    }
    return result;
}
It just takes in a URL, checks that it's valid, and returns true or false.
I have always observed that mocking should be avoided as much as possible, because it can lead to hard-to-maintain JUnit tests and defeat the whole purpose.
My suggestion would be to create a temporary server on your local machine from the JUnit test itself.
At the beginning of the test you can create a server (not more than 10-15 lines of code required) using Java sockets, and then in your code pass the URL of the local server. This way you reduce mocking and ensure maximum code coverage.
Something like this -
public class SimpleServer extends Thread {

    private final int port;
    private ServerSocket serverSocket;

    public SimpleServer(int port) {
        this.port = port;
    }

    public void run() {
        try {
            serverSocket = new ServerSocket(port);
            while (true) {
                Socket s = serverSocket.accept();
            }
        }
        catch (IOException e) {
            e.printStackTrace();
        }
        finally {
            serverSocket = null;
        }
    }
}
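A hypothetical test using it could then look like this (it assumes the port-taking constructor added above; MyClient stands in for whatever class owns verifyConnection):

import static org.junit.Assert.assertTrue;

import org.junit.Before;
import org.junit.Test;

public class VerifyConnectionTest {

    private static final int PORT = 8099;    // any free local port
    private SimpleServer server;

    @Before
    public void startServer() {
        server = new SimpleServer(PORT);
        server.start();
    }

    @Test
    public void returnsTrueWhenServerIsReachable() {
        MyClient client = new MyClient();    // hypothetical class owning verifyConnection
        assertTrue(client.verifyConnection("http://localhost:" + PORT));
    }
}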
If you want to mock this method, I'd recommend passing in the URL rather than the String. Don't have your method create the URL it needs; let the client create the URL for you and pass it in. That way your test can substitute a mock if it needs to.
It's almost a dependency injection idea - your method should be given its dependencies and not create them on its own. The call to "new" is the dead giveaway.
It's not a drastic change. You could overload the method and have two signatures: one that accepts a URL string and another that accepts the URL itself. Have the first method create the URL and call the second. That way you can test it and still have the method with the String signature in your API for convenience.
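A sketch of that overloading idea, reusing the logging and timeout logic from the question (error handling collapsed to IOException, which ConnectException extends):

// convenience overload keeps the String-based API...
public boolean verifyConnection(final String url) {
    try {
        return verifyConnection(new URL(url));
    } catch (MalformedURLException e) {
        log.error("Malformed client supplied url: " + url, e);
        return false;
    }
}

// ...while the real work takes a URL, so a test can hand in whatever it likes
public boolean verifyConnection(final URL targetUrl) {
    final int timeout = getConnectionTimeout();
    if (timeout < 0) {
        log.info("No need to verify connection to client. Supplied timeout = {}", timeout);
        return true;
    }
    try {
        final HttpURLConnection connection = (HttpURLConnection) targetUrl.openConnection();
        connection.setConnectTimeout(timeout);
        connection.connect();
        return true;
    } catch (IOException e) {
        log.warn("Could not connect to client supplied url: " + targetUrl, e);
        return false;
    }
}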
Trying to set up a mock implementation of HttpURLConnection, like:
public class MockHttpURLConnection extends HttpURLConnection {
then added this method to the class to override:
protected HttpURLConnection createHttpURLConnection(URL url)
        throws IOException {
    return (HttpURLConnection) url.openConnection();
}
So the test looks something like this:
@Test
public void testGetContentOk() throws Exception
{
    String url = "http://localhost";
    MockHttpURLConnection mockConnection = new MockHttpURLConnection();
    TestableWebClient client = new TestableWebClient();
    client.setHttpURLConnection(mockConnection);
    boolean result = client.verify(url);
    assertEquals(true, result);
}

@Test
public void testDoesNotGetContentOk() throws Exception
{
    String url = "http://1.2.3.4";
    MockHttpURLConnection mockConnection = new MockHttpURLConnection();
    TestableWebClient client = new TestableWebClient();
    client.setHttpURLConnection(mockConnection);
    boolean result = client.verify(url);
    assertEquals(false, result);
}

/**
 * An inner, private class that extends WebClient and allows us
 * to override the createHttpURLConnection method.
 */
private class TestableWebClient extends WebClient1 {

    private HttpURLConnection connection;

    /**
     * Setter method for the HttpURLConnection.
     *
     * @param connection
     */
    public void setHttpURLConnection(HttpURLConnection connection)
    {
        this.connection = connection;
    }

    /**
     * A method that we override to create the URL connection.
     */
    @Override
    public HttpURLConnection createHttpURLConnection(URL url) throws IOException
    {
        return this.connection;
    }
}
The first part passed, but the test that should fail (the dummy "false" case) is also getting true. Thanks for the feedback so far; best site I have found for help. Let me know if you think I'm on the right track.