Is there a way in the JVM to guarantee that some function will run before an AWS Lambda function exits? I would like to flush an internal buffer to stdout as a last action in a Lambda function, even if some exception is thrown.
As far as I understand, you want to execute some code before your Lambda function is stopped, regardless of what your execution state is (running/waiting/exception handling/etc.).
This is not possible out of the box with Lambda, i.e. there is no event fired or anything similar that can serve as a shutdown hook. The JVM will be frozen as soon as you hit the timeout. However, you can observe the remaining execution time by using the method getRemainingTimeInMillis() from the Context object. From the docs:
Returns the number of milliseconds left before the execution times out.
So, when initializing your function you can schedule a task that regularly checks how much time is left until your Lambda function reaches its timeout. Then, if fewer than X (milli-)seconds are left, you do Y.
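For illustration, a minimal sketch of that idea (the TimeoutWatchdog class, the 500 ms threshold, and the cleanup callback are assumptions of mine, not an official API):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import com.amazonaws.services.lambda.runtime.Context;

public class TimeoutWatchdog {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Call this at the start of handleRequest().
    public void arm(Context context, Runnable cleanup) {
        scheduler.scheduleAtFixedRate(() -> {
            // If fewer than 500 ms remain, run the cleanup (e.g. flush stdout) once.
            if (context.getRemainingTimeInMillis() < 500) {
                try {
                    cleanup.run();
                } finally {
                    scheduler.shutdown(); // make sure we only fire once
                }
            }
        }, 0, 100, TimeUnit.MILLISECONDS);
    }
}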
Alternatively, aws-samples shows how to do it with a JVM shutdown hook here:
package helloworld;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.IOException;
import java.net.URL;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

/**
 * Handler for requests to Lambda function.
 */
public class App implements RequestHandler<Object, Object> {

    static {
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                System.out.println("[runtime] ShutdownHook triggered");
                System.out.println("[runtime] Cleaning up");
                // perform actual clean up work here.
                try {
                    Thread.sleep(200);
                } catch (Exception e) {
                    System.out.println(e);
                }
                System.out.println("[runtime] exiting");
                System.exit(0);
            }
        });
    }

    public Object handleRequest(final Object input, final Context context) {
        Map<String, String> headers = new HashMap<>();
        headers.put("Content-Type", "application/json");
        headers.put("X-Custom-Header", "application/json");
        try {
            final String pageContents = this.getPageContents("https://checkip.amazonaws.com");
            String output = String.format("{ \"message\": \"hello world\", \"location\": \"%s\" }", pageContents);
            return new GatewayResponse(output, headers, 200);
        } catch (IOException e) {
            return new GatewayResponse("{}", headers, 500);
        }
    }

    private String getPageContents(String address) throws IOException {
        URL url = new URL(address);
        try (BufferedReader br = new BufferedReader(new InputStreamReader(url.openStream()))) {
            return br.lines().collect(Collectors.joining(System.lineSeparator()));
        }
    }
}
I've been doing some reading about CompletableFuture.
As of now I understand that CompletableFuture differs from Future in the sense that it provides the means to chain futures together and to use callbacks to handle a Future's result without actually blocking the code.
However, there is this complete() method that I'm having a hard time wrapping my head around. I only know that it allows us to complete a future manually, but what is the use for it? The most common example I found for this method is that while doing some async task, we can immediately return a string. But what is the point of doing so if the return value doesn't reflect the actual result? If we want to do something asynchronously, why don't we just use a regular future instead? The only use I can think of is when we want to conditionally cancel an ongoing future. But I think I'm missing some important key points here.
complete() lets you supply the future's value yourself. In the example below, the result of getResponse("a1=Chittagong&a2=city") is handed to the future via complete(). You can run getResponse() in a different thread, so nobody is blocked; once the response is available, the thenApply() callback is invoked to print the log. The example shows a scenario where a log is printed as the response arrives through complete().
Code
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class CompletableFutureEx {

    Logger logger = Logger.getLogger(CompletableFutureEx.class.getName());

    public static void main(String[] args) {
        new CompletableFutureEx().completableFutureEx();
    }

    private void completableFutureEx() {
        var completableFuture = new CompletableFuture<String>();
        completableFuture.thenApply(response -> {
            logger.log(Level.INFO, "Response : " + response);
            return response;
        });
        // some long process response
        try {
            completableFuture.complete(getResponse("a1=Chittagong&a2=city"));
        } catch (Exception e) {
            completableFuture.completeExceptionally(e);
        }
        try {
            System.out.println(completableFuture.get());
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
    }

    private String getResponse(String url) throws URISyntaxException, IOException, InterruptedException {
        var finalUrl = "http://localhost:8081/api/v1/product/add?" + url;
        // http://localhost:8081/api/v1/product/add?a1=Chittagong&a2=city
        var request = HttpRequest.newBuilder()
                .uri(new URI(finalUrl)).GET().build();
        var response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("response body " + response.body());
        return response.body();
    }
}
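To make the non-blocking point concrete, here is a small variant of my own (a fragment reusing getResponse() from the class above) that completes the future from a separate thread; the caller only attaches a callback and is never blocked:

var future = new CompletableFuture<String>();
future.thenAccept(r -> logger.log(Level.INFO, "Response : " + r)); // callback, no get()

new Thread(() -> {
    try {
        // The slow call runs off the caller's thread ...
        future.complete(getResponse("a1=Chittagong&a2=city"));
    } catch (Exception e) {
        future.completeExceptionally(e);
    }
}).start();
// ... and the caller is free to do other work here; the callback
// fires as soon as complete() delivers the response.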
I am trying to develop a Camunda process, but I don't know how to implement a multi-instance subprocess to iterate through a collection.
For example:
SubProcess subProcess = modelInstance.getModelElementById("elementVersionId-" + element.getId().toString());
subProcess.builder().multiInstance().multiInstanceDone() // Can't add a start event after multiInstanceDone()
After adding multiInstanceDone() to the subprocess, I can't start the subprocess with a start event.
Does anyone have an idea or example to help me?
Hope this helps:
import lombok.extern.slf4j.Slf4j;
import org.camunda.bpm.model.bpmn.Bpmn;
import org.camunda.bpm.model.bpmn.BpmnModelInstance;
import org.camunda.bpm.model.bpmn.builder.MultiInstanceLoopCharacteristicsBuilder;
import org.camunda.bpm.model.bpmn.instance.*;

import java.io.File;

@Slf4j
public class MultiInstanceSubprocess {

    public static final String MULTI_INSTANCE_PROCESS = "myMultiInstanceProcess";

    // @see https://docs.camunda.org/manual/latest/user-guide/model-api/bpmn-model-api/fluent-builder-api/
    public static void main(String[] args) {
        BpmnModelInstance modelInst;
        try {
            File file = new File("./src/main/resources/multiInstance.bpmn");
            modelInst = Bpmn.createProcess()
                    .id("MyParentProcess")
                    .executable()
                    .startEvent("ProcessStarted")
                    .subProcess(MULTI_INSTANCE_PROCESS)
                    // first create sub process content
                    .embeddedSubProcess()
                    .startEvent("subProcessStartEvent")
                    .userTask("UserTask1")
                    .endEvent("subProcessEndEvent")
                    .subProcessDone()
                    .endEvent("ParentEnded").done();

            // Add multi-instance loop characteristics to embedded sub process
            SubProcess subProcess = modelInst.getModelElementById(MULTI_INSTANCE_PROCESS);
            subProcess.builder()
                    .multiInstance()
                    .camundaCollection("myCollection")
                    .camundaElementVariable("myVar")
                    .multiInstanceDone();

            log.info("Flow Elements - Name : Id : Type Name");
            modelInst.getModelElementsByType(FlowNode.class)
                    .forEach(e -> log.info("{} : {} : {}", e.getName(), e.getId(), e.getElementType().getTypeName()));

            Bpmn.writeModelToFile(file, modelInst);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I have a directory that contains 200 million HTML files (don't look at me, I didn't create this mess, I just have to deal with it). I need to index every HTML file in that directory into Solr. I've been reading guides on getting the job done, and I've got something going right now. After about an hour, I've got about 100k indexed, meaning this is going to take roughly 85 days.
I'm indexing the files to a standalone Solr server, running on a c4.8xlarge AWS EC2 instance. Here's the output from free -m with the Solr server running, and the indexer I wrote running as well:
             total       used       free     shared    buffers     cached
Mem:         60387      12981      47405          0         19       4732
-/+ buffers/cache:       8229      52157
Swap:            0          0          0
As you can see, I'm doing pretty good on resources. I increased the number of maxWarmingSearchers to 200 in my Solr config, because I was getting the error:
Exceeded limit of maxWarmingSearchers=2, try again later
Alright, but I don't think increasing that limit was really the right approach. I think the issue is that I am doing a commit for each file, when I should be doing this in bulk (say 50k files per commit), but I'm not entirely sure how to adapt this code for that, and every example I see handles a single file at a time. I really need to do everything I can to make this run as fast as possible, since I don't really have 85 days to wait to get the data into Solr.
Here's my code:
Index.java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Index {

    public static void main(String[] args) {
        String directory = "/opt/html";
        String solrUrl = "URL";

        final int QUEUE_SIZE = 250000;
        final int MAX_THREADS = 300;

        BlockingQueue<String> queue = new LinkedBlockingQueue<>(QUEUE_SIZE);

        SolrProducer producer = new SolrProducer(queue, directory);
        new Thread(producer).start();

        for (int i = 1; i <= MAX_THREADS; i++)
            new Thread(new SolrConsumer(queue, solrUrl)).start();
    }
}
Producer.java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.concurrent.BlockingQueue;

public class SolrProducer implements Runnable {

    private BlockingQueue<String> queue;
    private String directory;

    public SolrProducer(BlockingQueue<String> queue, String directory) {
        this.queue = queue;
        this.directory = directory;
    }

    @Override
    public void run() {
        try {
            Path path = Paths.get(directory);
            Files.walkFileTree(path, new SimpleFileVisitor<Path>() {
                @Override
                public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                    if (!attrs.isDirectory()) {
                        try {
                            queue.put(file.toString());
                        } catch (InterruptedException e) {
                        }
                    }
                    return FileVisitResult.CONTINUE;
                }
            });
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Consumer.java
import co.talentiq.common.net.SolrManager;
import org.apache.solr.client.solrj.SolrServerException;

import java.io.IOException;
import java.util.concurrent.BlockingQueue;

public class SolrConsumer implements Runnable {

    private BlockingQueue<String> queue;
    private static SolrManager sm;

    public SolrConsumer(BlockingQueue<String> queue, String url) {
        this.queue = queue;
        if (sm == null)
            sm = new SolrManager(url);
    }

    @Override
    public void run() {
        try {
            while (true) {
                String file = queue.take();
                sm.indexFile(file);
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (SolrServerException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
SolrManager.java
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

import java.io.File;
import java.io.IOException;
import java.util.UUID;

public class SolrManager {

    private static String urlString;
    private static SolrClient solr;

    public SolrManager(String url) {
        urlString = url;
        if (solr == null)
            solr = new HttpSolrClient(url);
    }

    public void indexFile(String fileName) throws IOException, SolrServerException {
        ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update/extract");
        String solrId = UUID.randomUUID().toString();
        up.addFile(new File(fileName), solrId);
        up.setParam("literal.id", solrId);
        up.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
        solr.request(up);
    }
}
You can use up.setCommitWithin(10000); to make Solr just commit automagically at least every ten seconds. Increase the value to make Solr commit each minute (60000) or each ten minutes (600000). Remove the explicit commit (setAction(..)).
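Applied to the indexFile() method from the question, that change would look roughly like this (the same method, with the explicit commit swapped for commitWithin):

public void indexFile(String fileName) throws IOException, SolrServerException {
    ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update/extract");
    String solrId = UUID.randomUUID().toString();
    up.addFile(new File(fileName), solrId);
    up.setParam("literal.id", solrId);
    // Ask Solr to commit within 10 s instead of forcing a commit per request.
    up.setCommitWithin(10000);
    solr.request(up);
}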
Another option is to configure autoCommit in your configuration file.
You might also be able to index quicker by moving the HTML extraction process out of Solr (and just submitting the text to be indexed), or by expanding the number of servers you're posting to (more nodes in the cluster).
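As a sketch of that extraction idea (assuming Jsoup for the HTML parsing; the title/content field names are illustrative and must exist in your schema):

// Sketch (added to SolrManager): parse the HTML locally with Jsoup and send
// plain fields, so Solr's Tika extraction is bypassed entirely.
// Requires org.apache.solr.common.SolrInputDocument and org.jsoup.Jsoup.
public void indexFileLocally(String fileName) throws IOException, SolrServerException {
    org.jsoup.nodes.Document html = org.jsoup.Jsoup.parse(new File(fileName), "UTF-8");
    org.apache.solr.common.SolrInputDocument doc = new org.apache.solr.common.SolrInputDocument();
    doc.addField("id", UUID.randomUUID().toString());
    doc.addField("title", html.title());
    doc.addField("content", html.text()); // extracted text only, no markup
    solr.add(doc); // no explicit commit; rely on commitWithin/autoCommit
}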
I'm guessing you won't be searching the index in parallel while documents are being indexed. So here are the things you could do.
You can configure the auto-commit option in your solrconfig.xml. It can be done based on a number of documents or a time interval. For you, the number-of-documents option would make more sense.
Remove the call to the setAction() method on the ContentStreamUpdateRequest object, and instead maintain a count of the calls made to indexFile(). Once it reaches, say, 25000 or 10000 (depending on your heap), perform a commit for that one indexing call using the SolrClient object (solr.commit()), so that the commit is made once for the specified count.
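A rough sketch of that counting approach (the AtomicLong counter and the threshold are illustrative):

// Sketch (inside SolrManager): commit once every N documents instead of per file.
// COMMIT_EVERY is illustrative; tune it to your heap and ingestion rate.
private static final java.util.concurrent.atomic.AtomicLong indexed =
        new java.util.concurrent.atomic.AtomicLong();
private static final long COMMIT_EVERY = 25_000;

public void indexFile(String fileName) throws IOException, SolrServerException {
    ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update/extract");
    String solrId = UUID.randomUUID().toString();
    up.addFile(new File(fileName), solrId);
    up.setParam("literal.id", solrId);
    solr.request(up);                                // no per-document commit
    if (indexed.incrementAndGet() % COMMIT_EVERY == 0) {
        solr.commit();                               // one commit per batch
    }
}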
Let me know the results. Good Luck!
Lately I have been trying to make a Minecraft server (running Java) communicate with Scratch (running JavaScript).
I have already written the Java code:
package me.yotam180;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;

import org.bukkit.Bukkit;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

public class HttpProcessor {

    public MainClass plugin;

    public HttpProcessor(MainClass plug) throws IOException {
        plugin = plug;
        plugin.getLogger().info("CREATED HTTTP PROCESSOR");
        HttpServer server = HttpServer.create(new InetSocketAddress(9090), 0);
        server.createContext("/pollplayer", new PollPlayerHandler());
        server.createContext("/killplayer", new KillPlayerHandler());
        plugin.getLogger().info("STARTED HTTTP SERVER");
        server.setExecutor(null); // creates a default executor
        server.start();
    }

    static class PollPlayerHandler implements HttpHandler {
        @SuppressWarnings("deprecation")
        @Override
        public void handle(HttpExchange httpExchange) throws IOException {
            Map<String, String> parms = HttpProcessor.queryToMap(httpExchange.getRequestURI().getQuery());
            StringBuilder response = new StringBuilder();
            response.append(Bukkit.getPlayer(parms.get("name")).getLocation().toString());
            HttpProcessor.writeResponse(httpExchange, response.toString());
        }
    }

    static class KillPlayerHandler implements HttpHandler {
        @SuppressWarnings("deprecation")
        @Override
        public void handle(HttpExchange httpExchange) throws IOException {
            Map<String, String> parms = HttpProcessor.queryToMap(httpExchange.getRequestURI().getQuery());
            Bukkit.getPlayer(parms.get("name")).setHealth(0);
            HttpProcessor.writeResponse(httpExchange, "SUCCESS");
        }
    }

    public static void writeResponse(HttpExchange httpExchange, String response) throws IOException {
        httpExchange.sendResponseHeaders(200, response.length());
        OutputStream os = httpExchange.getResponseBody();
        os.write(response.getBytes());
        os.close();
    }

    public static Map<String, String> queryToMap(String query) {
        Map<String, String> result = new HashMap<String, String>();
        for (String param : query.split("&")) {
            String pair[] = param.split("=");
            if (pair.length > 1) {
                result.put(pair[0], pair[1]);
            } else {
                result.put(pair[0], "");
            }
        }
        return result;
    }
}
Now I have to make the Scratch-side HTTP client. Every way I tried just didn't work. If I open my browser and go to http://localhost:9090/pollplayer?name=yotam_salmon, it reports my player location beautifully. Now my problem is the Scratch JS.
Here it is:
new (function () {
    var ext = this;

    // Cleanup function when the extension is unloaded
    ext._shutdown = function () { };

    // Status reporting code
    // Use this to report missing hardware, plugin or unsupported browser
    ext._getStatus = function () {
        return { status: 2, msg: 'Ready' };
    };

    ext.get_Player = function (name, callback) {
        // In this function I need to call http://localhost:9090/pollplayer?name= + name,
        // wait for the response and then call back with it. The response can't be
        // "return response;", and it cannot be called back from another function.
        // If this function was called, it has to report the location back as a string.
    };

    // Block and block menu descriptions
    var descriptor = {
        blocks: [
            ['R', 'location of %s', 'get_Player', 'Player'],
        ]
    };

    // Register the extension
    ScratchExtensions.register('ScratchCraft', descriptor, ext);
})();
I cannot format my JS code differently, because Scratch only works with this format (it is explained here: http://llk.github.io/scratch-extension-docs/). In the ext.get_Player function I have to call the Java HTTP server, request /pollplayer?name= + name, and call the callback with the result.
I would be happy to get a solution :) Thanks!
The solution was very simple. I just had to add an Access-Control-Allow-Origin header, and it was solved:
httpExchange.getResponseHeaders().set("Access-Control-Allow-Origin", "*");
httpExchange.getResponseHeaders().set("Content-Type", "text/plain");
Constantly monitor an HTTP endpoint: if it returns code 200, no action is taken, but if a 404 is returned, the administrator should be alerted via a warning or mail.
I wanted to know how to approach this from a Java perspective. The code samples available are not very useful.
First of all, you should consider using an existing tool designed for this job (e.g. Nagios or the like). Otherwise you'll likely find yourself rewriting many of the same features. You probably want to send only one email once a problem has been detected, otherwise you'll spam the admin. Likewise you might want to wait until the second or third failure before sending an alert, otherwise you could be sending false alarms. Existing tools do handle these things and more for you.
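If you do roll your own, a sketch of that debouncing idea (the threshold of 3 is illustrative) might look like:

// Sketch (fields and method added to the monitor class): alert only after
// 3 consecutive failures, and only once per outage.
private int consecutiveFailures = 0;
private boolean alerted = false;

private void recordResult(boolean ok) {
    if (ok) {
        consecutiveFailures = 0;
        alerted = false;                  // outage over; re-arm the alert
    } else if (++consecutiveFailures >= 3 && !alerted) {
        sendAlertEmail();                 // see the example below
        alerted = true;                   // don't mail the admin again
    }
}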
That said, what you specifically asked for isn't too difficult in Java. Below is a simple working example that should help you get started. It monitors a URL by making a request to it every 30 seconds. If it detects a status code 404 it'll send out an email. It depends on the JavaMail API and requires Java 5 or higher.
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Properties;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class UrlMonitor implements Runnable {

    public static void main(String[] args) throws Exception {
        URL url = new URL("http://www.example.com/");
        Runnable monitor = new UrlMonitor(url);
        ScheduledExecutorService service = Executors.newScheduledThreadPool(1);
        service.scheduleWithFixedDelay(monitor, 0, 30, TimeUnit.SECONDS);
    }

    private final URL url;

    public UrlMonitor(URL url) {
        this.url = url;
    }

    public void run() {
        try {
            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            if (con.getResponseCode() == HttpURLConnection.HTTP_NOT_FOUND) {
                sendAlertEmail();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private void sendAlertEmail() {
        try {
            Properties props = new Properties();
            props.setProperty("mail.transport.protocol", "smtp");
            props.setProperty("mail.host", "smtp.example.com");

            Session session = Session.getDefaultInstance(props, null);
            Message message = new MimeMessage(session);
            message.setFrom(new InternetAddress("me@example.com", "Monitor"));
            message.addRecipient(Message.RecipientType.TO,
                    new InternetAddress("me@example.com"));
            message.setSubject("Alert!");
            message.setText("Alert!");
            Transport.send(message);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I'd start with the Quartz scheduler and create a SimpleTrigger. The SimpleTrigger would use HttpClient to create the connection, and the JavaMail API to send the mail if an unexpected answer occurred. I'd probably wire it up using Spring, as that has good Quartz integration and would allow simple mock implementations for testing.
A quick and dirty example without Spring, combining Quartz and HttpClient (for JavaMail, see How do I send an e-mail in Java?):
imports (so you know where I got the classes from):
import java.io.IOException;
import org.apache.http.HttpResponse;
import org.apache.http.StatusLine;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
code:
public class CheckJob implements Job {

    public static final String PROP_URL_TO_CHECK = "URL";

    public void execute(JobExecutionContext context)
            throws JobExecutionException {
        String url = context.getJobDetail().getJobDataMap()
                .getString(PROP_URL_TO_CHECK);
        System.out.println("Starting execution with URL: " + url);
        if (url == null) {
            throw new IllegalStateException("No URL in JobDataMap");
        }
        HttpClient client = new DefaultHttpClient();
        HttpGet get = new HttpGet(url);
        try {
            processResponse(client.execute(get));
        } catch (ClientProtocolException e) {
            mailError("Got a protocol exception " + e);
            return;
        } catch (IOException e) {
            mailError("Got an IO exception " + e);
            return;
        }
    }

    private void processResponse(HttpResponse response) {
        StatusLine status = response.getStatusLine();
        int statusCode = status.getStatusCode();
        System.out.println("Received status code " + statusCode);
        // You may wish a better check with more valid codes!
        if (statusCode < 200 || statusCode >= 300) {
            mailError("Expected OK status code (between 200 and 299) but got " + statusCode);
        }
    }

    private void mailError(String message) {
        // See https://stackoverflow.com/questions/884943/how-do-i-send-an-e-mail-in-java
    }
}
and the main class which runs forever and checks every 2 minutes:
imports:
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.SchedulerException;
import org.quartz.SchedulerFactory;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.SimpleTrigger;
import org.quartz.TriggerBuilder;
code:
public class Main {

    public static void main(String[] args) {
        JobDetail detail = JobBuilder.newJob(CheckJob.class)
                .withIdentity("CheckJob").build();
        detail.getJobDataMap().put(CheckJob.PROP_URL_TO_CHECK,
                "http://www.google.com");
        SimpleTrigger trigger = TriggerBuilder.newTrigger()
                .withSchedule(SimpleScheduleBuilder
                        .repeatMinutelyForever(2)).build();
        SchedulerFactory fac = new StdSchedulerFactory();
        try {
            fac.getScheduler().scheduleJob(detail, trigger);
            fac.getScheduler().start();
        } catch (SchedulerException e) {
            throw new RuntimeException(e);
        }
    }
}