First of all, I'm a Java n00b. I understand most concepts, but in this situation I'd like someone to help me. I'm using JBoss Netty to handle a simple HTTP request, and I use MemCachedClient to check for the existence of the client IP in memcached.
import org.jboss.netty.channel.ChannelHandler;
import static org.jboss.netty.handler.codec.http.HttpHeaders.*;
import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.*;
import static org.jboss.netty.handler.codec.http.HttpResponseStatus.*;
import static org.jboss.netty.handler.codec.http.HttpVersion.*;
import com.danga.MemCached.*;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.handler.codec.http.Cookie;
import org.jboss.netty.handler.codec.http.CookieDecoder;
import org.jboss.netty.handler.codec.http.CookieEncoder;
import org.jboss.netty.handler.codec.http.DefaultHttpResponse;
import org.jboss.netty.handler.codec.http.HttpChunk;
import org.jboss.netty.handler.codec.http.HttpChunkTrailer;
import org.jboss.netty.handler.codec.http.HttpRequest;
import org.jboss.netty.handler.codec.http.HttpResponse;
import org.jboss.netty.handler.codec.http.HttpResponseStatus;
import org.jboss.netty.handler.codec.http.QueryStringDecoder;
import org.jboss.netty.util.CharsetUtil;
/**
* @author The Netty Project
* @author Andy Taylor (andy.taylor@jboss.org)
* @author Trustin Lee
*
* @version $Rev: 2368 $, $Date: 2010-10-18 17:19:03 +0900 (Mon, 18 Oct 2010) $
*/
@SuppressWarnings({"ALL"})
public class HttpRequestHandler extends SimpleChannelUpstreamHandler {
private HttpRequest request;
private boolean readingChunks;
/** Buffer that stores the response content */
private final StringBuilder buf = new StringBuilder();
protected MemCachedClient mcc = new MemCachedClient();
private static SockIOPool poolInstance = null;
static {
// server list and weights
String[] servers =
{
"lcalhost:11211"
};
//Integer[] weights = { 3, 3, 2 };
Integer[] weights = {1};
// grab an instance of our connection pool
SockIOPool pool = SockIOPool.getInstance();
// set the servers and the weights
pool.setServers(servers);
pool.setWeights(weights);
// set some basic pool settings
// 5 initial, 5 min, and 250 max conns
// and set the max idle time for a conn
// to 6 hours
pool.setInitConn(5);
pool.setMinConn(5);
pool.setMaxConn(250);
pool.setMaxIdle(21600000); //1000 * 60 * 60 * 6
// set the sleep for the maint thread
// it will wake up every x seconds and
// maintain the pool size
pool.setMaintSleep(30);
// set some TCP settings
// disable nagle
// set the read timeout to 3 secs
// and don't set a connect timeout
pool.setNagle(false);
pool.setSocketTO(3000);
pool.setSocketConnectTO(0);
// initialize the connection pool
pool.initialize();
// lets set some compression on for the client
// compress anything larger than 64k
//mcc.setCompressEnable(true);
//mcc.setCompressThreshold(64 * 1024);
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
HttpRequest request = this.request = (HttpRequest) e.getMessage();
if(mcc.get(request.getHeader("X-Real-Ip")) != null)
{
HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK);
response.setHeader("X-Accel-Redirect", request.getUri());
ctx.getChannel().write(response).addListener(ChannelFutureListener.CLOSE);
}
else {
sendError(ctx, NOT_FOUND);
}
}
private void writeResponse(MessageEvent e) {
// Decide whether to close the connection or not.
boolean keepAlive = isKeepAlive(request);
// Build the response object.
HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK);
response.setContent(ChannelBuffers.copiedBuffer(buf.toString(), CharsetUtil.UTF_8));
response.setHeader(CONTENT_TYPE, "text/plain; charset=UTF-8");
if (keepAlive) {
// Add 'Content-Length' header only for a keep-alive connection.
response.setHeader(CONTENT_LENGTH, response.getContent().readableBytes());
}
// Encode the cookie.
String cookieString = request.getHeader(COOKIE);
if (cookieString != null) {
CookieDecoder cookieDecoder = new CookieDecoder();
Set<Cookie> cookies = cookieDecoder.decode(cookieString);
if(!cookies.isEmpty()) {
// Reset the cookies if necessary.
CookieEncoder cookieEncoder = new CookieEncoder(true);
for (Cookie cookie : cookies) {
cookieEncoder.addCookie(cookie);
}
response.addHeader(SET_COOKIE, cookieEncoder.encode());
}
}
// Write the response.
ChannelFuture future = e.getChannel().write(response);
// Close the non-keep-alive connection after the write operation is done.
if (!keepAlive) {
future.addListener(ChannelFutureListener.CLOSE);
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e)
throws Exception {
e.getCause().printStackTrace();
e.getChannel().close();
}
private void sendError(ChannelHandlerContext ctx, HttpResponseStatus status) {
HttpResponse response = new DefaultHttpResponse(HTTP_1_1, status);
response.setHeader(CONTENT_TYPE, "text/plain; charset=UTF-8");
response.setContent(ChannelBuffers.copiedBuffer(
"Failure: " + status.toString() + "\r\n",
CharsetUtil.UTF_8));
// Close the connection as soon as the error message is sent.
ctx.getChannel().write(response).addListener(ChannelFutureListener.CLOSE);
}
}
When I try to send a request like http://127.0.0.1:8090/1/2/3
I'm getting
java.lang.NoClassDefFoundError: com/danga/MemCached/MemCachedClient
at httpClientValidator.server.HttpRequestHandler.<clinit>(HttpRequestHandler.java:66)
I believe it's not related to the classpath. Maybe it's related to a context in which mcc doesn't exist.
Any help is appreciated.
EDIT:
Original code http://docs.jboss.org/netty/3.2/xref/org/jboss/netty/example/http/snoop/package-summary.html
I've modified some parts to fit my needs.
Why do you think this is not classpath related? That's the kind of error you get when the jar you need is not available. How do you start your app?
EDIT
Sorry, I loaded the java_memcached-release_2.5.2 bundle in Eclipse and found no issue so far. Debugging the class loading revealed nothing unusual. I can't help beyond a few more hints to double-check:
Make sure your download is correct; download and unpack it again. (Are the com.schooner.* classes available?)
Make sure you are using a Java version newer than 1.5.
Make sure your classpath is correct and complete (a quick diagnostic is sketched below). The example you have shown does not include Netty; where is it?
I'm not familiar with interactions stemming from adding a classpath to the manifest. Maybe revert to the plain style: put all needed jars (memcached, Netty, yours) on the classpath and reference the main class to start, rather than a runnable jar.
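One quick way to run that diagnostic (a minimal, hypothetical check; the class name is taken from the stack trace in the question) is to print the runtime classpath and try to load the class explicitly:
public class ClasspathCheck {
    public static void main(String[] args) {
        // Print the classpath the JVM actually sees at runtime.
        System.out.println("java.class.path = " + System.getProperty("java.class.path"));
        try {
            // If this throws, the memcached client jar is not on the runtime classpath.
            Class.forName("com.danga.MemCached.MemCachedClient");
            System.out.println("MemCachedClient is visible on the classpath.");
        } catch (ClassNotFoundException e) {
            System.out.println("MemCachedClient is NOT on the classpath: " + e);
        }
    }
}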
Related
I am debugging a connection reset problem and need some help.
Here is the background:
Using Java 8 and Apache HttpClient 4.5.2.
I have the following program, which runs successfully on Windows 10 and 7 but ends up with a connection reset on an Azure Windows Server 2016 VM.
import java.io.IOException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import org.apache.commons.codec.binary.Base64;
import org.apache.http.Header;
import org.apache.http.HttpResponse;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;
public class TestConnectionReset
{
static PoolingHttpClientConnectionManager connManager = new PoolingHttpClientConnectionManager();
static {
connManager.setMaxTotal(10);
connManager.setDefaultMaxPerRoute(2);
}
public static void main(String[] args) throws ClientProtocolException, IOException, InterruptedException {
while (true) {
HttpClientBuilder clientBuilder = HttpClientBuilder.create();
RequestConfig config = RequestConfig.custom().setConnectTimeout(1800000).setConnectionRequestTimeout(1800000)
.setSocketTimeout(1800000).build();
clientBuilder.setDefaultRequestConfig(config);
clientBuilder.setConnectionManager(connManager);
String userName = "xxxxx";
String password = "xxxxx";
String userNamePasswordPair = String.valueOf(userName) + ":" + password;
String authenticationData = "Basic " + new String((new Base64()).encode(userNamePasswordPair.getBytes()));
HttpPost post = new HttpPost("https://url/rest/oauth/token");
Map<String, String> requestBodyMap = new HashMap<String, String>();
requestBodyMap.put("grant_type", "client_credentials");
String req = getFormUrlEncodedBodyFromMap(requestBodyMap);
StringEntity stringEntity = new StringEntity(req);
post.setEntity(stringEntity);
post.setHeader("Authorization", authenticationData);
post.setHeader("Content-Type", "application/x-www-form-urlencoded");
CloseableHttpClient closeableHttpClient = clientBuilder.build();
HttpResponse response = closeableHttpClient.execute(post);
Header[] hs = response.getAllHeaders();
for (Header header : hs) {
System.out.println(header.toString());
}
System.out.println(EntityUtils.toString(response.getEntity()));
Thread.sleep(10*60*1000L);
}
}
public static String getFormUrlEncodedBodyFromMap(Map<String, String> formData) {
StringBuilder requestBody = new StringBuilder();
Iterator<Map.Entry<String, String>> itrFormData = formData.entrySet().iterator();
while (itrFormData.hasNext()) {
Map.Entry<?, ?> entry = (Map.Entry)itrFormData.next();
requestBody.append(entry.getKey()).append("=").append(entry.getValue());
if (itrFormData.hasNext()) {
requestBody.append("&");
}
}
return requestBody.toString();
}
}
I am using the pooling HttpClient connection manager. The first request in the first loop iteration succeeds, but the next request in the subsequent iteration fails.
My findings
Looking at the underlying socket connection on Windows 10: after the first request the socket goes into the CLOSE_WAIT state, and the next request closes the existing connection and creates a new one.
The server actually closes the connection after 5 minutes, but Windows 10 is able to detect this and re-initiates the connection when the next request is triggered.
On Windows Server 2016, however, netstat shows the socket in the ESTABLISHED state. That means the connection appears ready to use, so the pool picks the same connection even though the server has already closed it, which results in a connection reset error.
I suspect it's an environmental issue: Server 2016 keeps the socket ESTABLISHED even after the server has terminated it, while on Windows 10 the socket status changes to CLOSE_WAIT.
Help on this is much appreciated.
Finally got the root cause.
It's an issue with Microsoft Azure: it uses SNAT and closes outbound TCP connections after 4 minutes of idle time. This wasted 5 days of my time to figure out.
It means that if you are connected to a server with keep-alive and hope to reuse the connection until the server times out and sends a FIN, but the idle period reaches 4 minutes before that, Azure kills the connection. BOOM!! The worst part is that it doesn't even notify the server or client with an RST, which violates TCP and calls its reliability into question.
clientBuilder.setKeepAliveStrategy(new ConnectionKeepAliveStrategy() {
@Override
public long getKeepAliveDuration(HttpResponse response, HttpContext context) {
// Expire pooled connections after 3 minutes, i.e. before Azure's 4-minute idle timeout.
return 3 * 60 * 1000;
}
});
Using the code above, I make connections expire after 3 minutes, so they are closed before Azure kills them.
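A complementary approach (a sketch, assuming the same static connManager from the program above, started once before the request loop) is to proactively evict expired and idle connections from the pool before Azure's 4-minute limit is reached, so a dead connection is never reused:
// Background thread that evicts stale connections from the shared pool.
Thread evictor = new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                Thread.sleep(30 * 1000);
                // Drop connections the server (or Azure SNAT) has already closed.
                connManager.closeExpiredConnections();
                // Drop connections idle for 3+ minutes, i.e. before the 4-minute SNAT timeout.
                connManager.closeIdleConnections(3, java.util.concurrent.TimeUnit.MINUTES);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
});
evictor.setDaemon(true);
evictor.start();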
I would like to use Jetty 9 (v9.2.12.v20150709) embedded for my test cases.
But I am unable to change the HTTP session timeout programmatically.
This call webapp.getSessionHandler().getSessionManager().setMaxInactiveInterval(timeoutInSeconds); doesn't seem to work.
Here is a reduced code segment that shows what I do:
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;
#SuppressWarnings("javadoc")
public class EmbeddedJetty
{
#SuppressWarnings("serial")
public static class TimeoutServlet extends HttpServlet
{
@Override
protected void doGet(HttpServletRequest request,
HttpServletResponse response) throws IOException
{
// return the value of the currently used HTTP-Session Timeout
response.setContentType("text/html");
response.setStatus(HttpServletResponse.SC_OK);
response.getWriter().println("<h1>Timeout: " + request.getSession()
.getMaxInactiveInterval() + "</h1>");
}
}
public static void main(String[] args) throws Exception
{
// the custom timeout, which I am trying to set
int timeoutInSeconds = 1234;
Server server = new Server(0);
WebAppContext webapp = new WebAppContext();
webapp.setContextPath("/");
webapp.setResourceBase(System.getProperty("user.dir"));
// Can't set custom timeout. Following Statement doesn't work.
webapp.getSessionHandler().getSessionManager().setMaxInactiveInterval(
timeoutInSeconds);
server.setHandler(webapp);
webapp.addServlet(TimeoutServlet.class.getName(), "/*");
server.start();
// get current URL of the server
String url = server.getURI().toString();
System.out.println("\n URL: " + url);
// server.dumpStdErr();
// make a request to get the used timeout setting
HttpClient httpClient = new HttpClient();
httpClient.start();
ContentResponse response = httpClient.GET(url);
httpClient.stop();
String timeoutInfo = response.getContentAsString();
System.out.println(timeoutInfo);
// check if the custom timeout is used
if( timeoutInfo.contains(String.valueOf(timeoutInSeconds)) )
{
System.out.println("Custom Timeout is used.");
}
else
{
// Unfortunately, I get the default(?) everytime
System.out.println("Default Timeout? Custom Value is NOT used.");
}
System.out.println("Press Enter to exit ...");
System.in.read();
server.stop();
server.join();
}
}
I am using the WebAppContext style of setup because it allowed me to get my ServletContextListeners working via WebAppContext.addEventListener(), which I couldn't achieve with a ServletHandler.
I am also using Jetty version 9.2.12.v20150709 because it is classpath-compatible with Selenium v2.5.2 (which supports Java 7, a project requirement).
Do you have any suggestions about what I am doing wrong?
Thank you for your time.
A WebAppContext has some defaults, which are loaded during server.start() (WebAppContext.startContext()).
These defaults also include a default web descriptor, located in jetty-webapp.jar under /org/eclipse/jetty/webapp/webdefault.xml. This descriptor includes a session-config that sets the timeout to the default of 30 minutes (1800 s).
To override the defaults, setMaxInactiveInterval() must be called after the server has started:
server.start();
webapp.getSessionHandler().getSessionManager().setMaxInactiveInterval(timeoutInSeconds);
Or, to avoid these defaults entirely, it might be better to use a ServletContextHandler instead, as sketched below.
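A minimal sketch of that alternative, assuming Jetty 9.2 (where SessionHandler still exposes getSessionManager()); since no webdefault.xml is applied to a ServletContextHandler, the timeout set before start() is not overwritten:
// imports: org.eclipse.jetty.server.Server, org.eclipse.jetty.servlet.ServletContextHandler
Server server = new Server(0);
// SESSIONS enables a SessionHandler on the context.
ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
context.setContextPath("/");
context.addServlet(TimeoutServlet.class, "/*");
// No default descriptor is involved here, so this value survives server.start().
context.getSessionHandler().getSessionManager().setMaxInactiveInterval(1234);
server.setHandler(context);
server.start();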
I have a Grizzly HTTP server with async processing added. It queues my requests and processes only one request at a time, despite the async support.
The path the HttpHandler is bound to is: "/"
Port number: 7777
The behavior I observe when I hit http://localhost:7777 from two browsers simultaneously is:
The second call waits until the first one completes. I want my second HTTP call to work simultaneously, in tandem with the first.
EDIT Github link of my project
Here are the classes
GrizzlyMain.java
package com.grizzly;
import java.io.IOException;
import java.net.URI;
import javax.ws.rs.core.UriBuilder;
import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;
import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
import org.glassfish.grizzly.strategies.WorkerThreadIOStrategy;
import org.glassfish.grizzly.threadpool.ThreadPoolConfig;
import com.grizzly.http.IHttpHandler;
import com.grizzly.http.IHttpServerFactory;
public class GrizzlyMain {
private static HttpServer httpServer;
private static void startHttpServer(int port) throws IOException {
URI uri = getBaseURI(port);
httpServer = IHttpServerFactory.createHttpServer(uri,
new IHttpHandler(null));
TCPNIOTransport transport = getListener(httpServer).getTransport();
ThreadPoolConfig config = ThreadPoolConfig.defaultConfig()
.setPoolName("worker-thread-").setCorePoolSize(6).setMaxPoolSize(6)
.setQueueLimit(-1)/* same as default */;
transport.configureBlocking(false);
transport.setSelectorRunnersCount(3);
transport.setWorkerThreadPoolConfig(config);
transport.setIOStrategy(WorkerThreadIOStrategy.getInstance());
transport.setTcpNoDelay(true);
System.out.println("Blocking Transport(T/F): " + transport.isBlocking());
System.out.println("Num SelectorRunners: "
+ transport.getSelectorRunnersCount());
System.out.println("Num WorkerThreads: "
+ transport.getWorkerThreadPoolConfig().getCorePoolSize());
httpServer.start();
System.out.println("Server Started #" + uri.toString());
}
public static void main(String[] args) throws InterruptedException,
IOException, InstantiationException, IllegalAccessException,
ClassNotFoundException {
startHttpServer(7777);
System.out.println("Press any key to stop the server...");
System.in.read();
}
private static NetworkListener getListener(HttpServer httpServer) {
return httpServer.getListeners().iterator().next();
}
private static URI getBaseURI(int port) {
return UriBuilder.fromUri("https://0.0.0.0/").port(port).build();
}
}
HttpHandler (with async support built in)
package com.grizzly.http;
import java.io.IOException;
import java.util.Date;
import java.util.concurrent.ExecutorService;
import javax.ws.rs.core.Application;
import org.glassfish.grizzly.http.server.HttpHandler;
import org.glassfish.grizzly.http.server.Request;
import org.glassfish.grizzly.http.server.Response;
import org.glassfish.grizzly.http.util.HttpStatus;
import org.glassfish.grizzly.threadpool.GrizzlyExecutorService;
import org.glassfish.grizzly.threadpool.ThreadPoolConfig;
import org.glassfish.jersey.server.ApplicationHandler;
import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.server.spi.Container;
import com.grizzly.Utils;
/**
* Jersey {@code Container} implementation based on Grizzly
* {@link org.glassfish.grizzly.http.server.HttpHandler}.
*
* @author Jakub Podlesak (jakub.podlesak at oracle.com)
* @author Libor Kramolis (libor.kramolis at oracle.com)
* @author Marek Potociar (marek.potociar at oracle.com)
*/
public final class IHttpHandler extends HttpHandler implements Container {
private static int reqNum = 0;
final ExecutorService executorService = GrizzlyExecutorService
.createInstance(ThreadPoolConfig.defaultConfig().copy()
.setCorePoolSize(4).setMaxPoolSize(4));
private volatile ApplicationHandler appHandler;
/**
* Create a new Grizzly HTTP container.
*
* #param application
* JAX-RS / Jersey application to be deployed on Grizzly HTTP
* container.
*/
public IHttpHandler(final Application application) {
}
@Override
public void start() {
super.start();
}
@Override
public void service(final Request request, final Response response) {
System.out.println("\nREQ_ID: " + reqNum++);
System.out.println("THREAD_ID: " + Utils.getThreadName());
response.suspend();
// Instruct Grizzly to not flush response, once we exit service(...) method
executorService.execute(new Runnable() {
@Override
public void run() {
try {
System.out.println("Executor Service Current THREAD_ID: "
+ Utils.getThreadName());
Thread.sleep(25 * 1000);
} catch (Exception e) {
response.setStatus(HttpStatus.INTERNAL_SERVER_ERROR_500);
} finally {
String content = updateResponse(response);
System.out.println("Response resumed > " + content);
response.resume();
}
}
});
}
@Override
public ApplicationHandler getApplicationHandler() {
return appHandler;
}
@Override
public void destroy() {
super.destroy();
appHandler = null;
}
// Auto-generated stuff
@Override
public ResourceConfig getConfiguration() {
return null;
}
@Override
public void reload() {
}
@Override
public void reload(ResourceConfig configuration) {
}
private String updateResponse(final Response response) {
String data = null;
try {
data = new Date().toLocaleString();
response.getWriter().write(data);
} catch (IOException e) {
data = "Unknown error from our server";
response.setStatus(500, data);
}
return data;
}
}
IHttpServerFactory.java
package com.grizzly.http;
import java.net.URI;
import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;
import org.glassfish.grizzly.http.server.ServerConfiguration;
/**
* @author smc
*/
public class IHttpServerFactory {
private static final int DEFAULT_HTTP_PORT = 80;
public static HttpServer createHttpServer(URI uri, IHttpHandler handler) {
final String host = uri.getHost() == null ? NetworkListener.DEFAULT_NETWORK_HOST
: uri.getHost();
final int port = uri.getPort() == -1 ? DEFAULT_HTTP_PORT : uri.getPort();
final NetworkListener listener = new NetworkListener("IGrizzly", host, port);
listener.setSecure(false);
final HttpServer server = new HttpServer();
server.addListener(listener);
final ServerConfiguration config = server.getServerConfiguration();
if (handler != null) {
config.addHttpHandler(handler, uri.getPath());
}
config.setPassTraceRequest(true);
return server;
}
}
It seems the problem is the browser waiting for the first request to complete, and thus more a client-side than a server-side issue. It disappears if you test with two different browser processes, or even if you open two distinct paths (let's say localhost:7777/foo and localhost:7777/bar) in the same browser process (note: the query string participates in making up the path in the HTTP request line).
How I understood it
Connections in HTTP/1.1 are persistent by default, i.e. browsers recycle the same TCP connection over and over again to speed things up. However, this doesn't mean that all requests to the same domain will be serialized: in fact, a connection pool is allocated on a per-hostname basis (source). Unfortunately, requests with the same path are effectively enqueued (at least on Firefox and Chrome); I guess it's a device that browsers employ to protect server resources (and thus the user experience).
Real-world applications don't suffer from this because different resources are deployed to different URLs.
DISCLAIMER: I wrote this answer based on my observations and some educated guesses. I think things may actually be like this; however, a tool like Wireshark should be used to follow the TCP stream and definitively assert that this is what happens.
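To take the browser out of the equation entirely, a small test client can fire two requests from separate threads; this is just a sketch (the paths /foo and /bar are arbitrary, and the handler above answers any path under "/"):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ConcurrencyTest {
    private static Thread request(final String path) {
        return new Thread(new Runnable() {
            public void run() {
                try {
                    HttpURLConnection con = (HttpURLConnection)
                            new URL("http://localhost:7777" + path).openConnection();
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(con.getInputStream()));
                    System.out.println(path + " -> " + in.readLine()
                            + " (finished at " + System.currentTimeMillis() + ")");
                    in.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        // Two requests to distinct paths, started at the same time; if the server
        // really serves them concurrently, both finish ~25s later, not ~50s.
        Thread t1 = request("/foo");
        Thread t2 = request("/bar");
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}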
I have some housekeeping tasks within an Elastic Beanstalk Java application running on Tomcat, and I need to run them every so often. I want these tasks run only on the leader node (or, more correctly, on a single node, but the leader seems like an obvious choice).
I was looking at running cron jobs within Elastic Beanstalk, but it feels like this should be more straightforward than what I've come up with. Ideally, I'd like one of these two options within my web app:
Some way of testing within the current JRE whether or not this server is the leader node
Some way to hit a specific URL (wget?) to trigger the task, but also restrict that URL to requests from localhost.
Suggestions?
It is not possible by design (leaders are only assigned during deployment and are not needed in other contexts). However, you can tweak and use the EC2 metadata for this exact purpose.
Here's a working example of how to achieve this result (original source). Once you call getLeader, it will find, or assign, an instance to be set as the leader:
package br.com.ingenieux.resource;
import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import org.apache.commons.io.IOUtils;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.model.CreateTagsRequest;
import com.amazonaws.services.ec2.model.DeleteTagsRequest;
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
import com.amazonaws.services.ec2.model.Filter;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.Reservation;
import com.amazonaws.services.ec2.model.Tag;
import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalk;
import com.amazonaws.services.elasticbeanstalk.model.DescribeEnvironmentsRequest;
#Path("/admin/leader")
public class LeaderResource extends BaseResource {
@Inject
AmazonEC2 amazonEC2;
@Inject
AWSElasticBeanstalk elasticBeanstalk;
@GET
public String getLeader() throws Exception {
/*
* Avoid running if we're not in AWS after all
*/
try {
IOUtils.toString(new URL(
"http://169.254.169.254/latest/meta-data/instance-id")
.openStream());
} catch (Exception exc) {
return "i-FFFFFFFF/localhost";
}
String environmentName = getMyEnvironmentName();
List<Instance> environmentInstances = getInstances(
"tag:elasticbeanstalk:environment-name", environmentName,
"tag:leader", "true");
if (environmentInstances.isEmpty()) {
environmentInstances = getInstances(
"tag:elasticbeanstalk:environment-name", environmentName);
Collections.shuffle(environmentInstances);
if (environmentInstances.size() > 1)
environmentInstances.removeAll(environmentInstances.subList(1,
environmentInstances.size()));
amazonEC2.createTags(new CreateTagsRequest().withResources(
environmentInstances.get(0).getInstanceId()).withTags(
new Tag("leader", "true")));
} else if (environmentInstances.size() > 1) {
DeleteTagsRequest deleteTagsRequest = new DeleteTagsRequest().withTags(new Tag().withKey("leader").withValue("true"));
for (Instance i : environmentInstances.subList(1,
environmentInstances.size())) {
deleteTagsRequest.getResources().add(i.getInstanceId());
}
amazonEC2.deleteTags(deleteTagsRequest);
}
return environmentInstances.get(0).getInstanceId() + "/" + environmentInstances.get(0).getPublicIpAddress();
}
@GET
@Produces("text/plain")
@Path("am-i-a-leader")
public boolean isLeader() {
/*
* Avoid running if we're not in AWS after all
*/
String myInstanceId = null;
String environmentName = null;
try {
myInstanceId = IOUtils.toString(new URL(
"http://169.254.169.254/latest/meta-data/instance-id")
.openStream());
environmentName = getMyEnvironmentName();
} catch (Exception exc) {
return false;
}
List<Instance> environmentInstances = getInstances(
"tag:elasticbeanstalk:environment-name", environmentName,
"tag:leader", "true", "instance-id", myInstanceId);
return (1 == environmentInstances.size());
}
protected String getMyEnvironmentHost(String environmentName) {
return elasticBeanstalk
.describeEnvironments(
new DescribeEnvironmentsRequest()
.withEnvironmentNames(environmentName))
.getEnvironments().get(0).getCNAME();
}
private String getMyEnvironmentName() throws IOException,
MalformedURLException {
String instanceId = IOUtils.toString(new URL(
"http://169.254.169.254/latest/meta-data/instance-id"));
/*
* Grab the current environment name
*/
DescribeInstancesRequest request = new DescribeInstancesRequest()
.withInstanceIds(instanceId)
.withFilters(
new Filter("instance-state-name").withValues("running"));
for (Reservation r : amazonEC2.describeInstances(request)
.getReservations()) {
for (Instance i : r.getInstances()) {
for (Tag t : i.getTags()) {
if ("elasticbeanstalk:environment-name".equals(t.getKey())) {
return t.getValue();
}
}
}
}
return null;
}
public List<Instance> getInstances(String... args) {
Collection<Filter> filters = new ArrayList<Filter>();
filters.add(new Filter("instance-state-name").withValues("running"));
for (int i = 0; i < args.length; i += 2) {
String key = args[i];
String value = args[1 + i];
filters.add(new Filter(key).withValues(value));
}
DescribeInstancesRequest req = new DescribeInstancesRequest()
.withFilters(filters);
List<Instance> result = new ArrayList<Instance>();
for (Reservation r : amazonEC2.describeInstances(req).getReservations())
result.addAll(r.getInstances());
return result;
}
}
You can keep a secret URL (a long URL is effectively unguessable, almost as safe as a password) and hit it from somewhere; on that hit you execute the task.
One problem, however, is that if the task takes too long, your server capacity will be limited during that time. Another approach would be for the URL hit to post a message to AWS SQS; another EC2 instance can then run code that waits on SQS and executes the task (a sketch follows below). You can also look into http://aws.amazon.com/swf/
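A minimal sketch of that SQS hand-off, using the AWS SDK for Java v1 (the queue URL and the runHousekeepingTask method are placeholders; credentials setup and error handling are omitted):
import java.util.List;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.DeleteMessageRequest;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class HousekeepingQueue {
    // Placeholder queue URL; substitute your own.
    private static final String QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/housekeeping";
    private static final AmazonSQSClient sqs = new AmazonSQSClient();

    // Called by the web app when the secret URL is hit: just enqueue the work.
    public static void enqueueTask(String taskName) {
        sqs.sendMessage(new SendMessageRequest(QUEUE_URL, taskName));
    }

    // Run on the worker instance: long-poll the queue and execute tasks one by one.
    public static void workerLoop() {
        while (true) {
            List<Message> messages = sqs.receiveMessage(
                    new ReceiveMessageRequest(QUEUE_URL).withWaitTimeSeconds(20)).getMessages();
            for (Message m : messages) {
                runHousekeepingTask(m.getBody());
                sqs.deleteMessage(new DeleteMessageRequest(QUEUE_URL, m.getReceiptHandle()));
            }
        }
    }

    // Placeholder for the actual housekeeping work.
    private static void runHousekeepingTask(String taskName) {
        System.out.println("Running housekeeping task: " + taskName);
    }
}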
Another approach if you're running on the Linux-type EC2 instance:
Write a shell script that does (or triggers) your periodic task
Leveraging the .ebextensions feature to customize your Elastic Beanstalk instance, create a container command that specifies the parameter leader_only: true -- this command will only run on an instance that is designated the leader in your Auto Scaling group
Have your container command copy your shell script into /etc/cron.hourly (or daily or whatever).
The result will be that your "leader" EC2 instance will have a cron job running hourly (or daily or whatever) to do your periodic task and the other instances in your Auto Scaling group will not.
I'm working on a tool that analyzes SSL services, and right now I'm trying to test client-initiated renegotiation.
I'm using BouncyCastle to do so, with a TlsClientProtocol subclass and a custom method, because BC doesn't natively handle renegotiation.
So, right now I'm using this class:
package org.bouncycastle.crypto.tls;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.util.Hashtable;
import org.bouncycastle.crypto.tls.Certificate;
import org.bouncycastle.crypto.tls.CipherSuite;
import org.bouncycastle.crypto.tls.DefaultTlsClient;
import org.bouncycastle.crypto.tls.ExtensionType;
import org.bouncycastle.crypto.tls.ServerOnlyTlsAuthentication;
import org.bouncycastle.crypto.tls.TlsAuthentication;
import org.bouncycastle.crypto.tls.TlsClientProtocol;
import org.bouncycastle.util.encoders.Hex;
/**
*
* @version 1.0
*/
public class TestProtocol extends TlsClientProtocol {
private byte[] verifyData;
public TestProtocol(InputStream input, OutputStream output) {
super(input, output);
}
// I need to replace this method to have the verifyData,
// because we need to send it into the renegotiation_info ext
@Override
protected void sendFinishedMessage() throws IOException {
verifyData = createVerifyData(getContext().isServer());
ByteArrayOutputStream bos = new ByteArrayOutputStream();
TlsUtils.writeUint8(HandshakeType.finished, bos);
TlsUtils.writeUint24(verifyData.length, bos);
bos.write(verifyData);
byte[] message = bos.toByteArray();
safeWriteRecord(ContentType.handshake, message, 0, message.length);
}
public void renegotiate() throws IOException {
this.connection_state = CS_START;
sendClientHelloMessage();
this.connection_state = CS_CLIENT_HELLO;
completeHandshake();
}
public static void main(String[] args) throws IOException, InterruptedException {
Socket s = new Socket("10.0.0.101", 443);
final TestProtocol proto = new TestProtocol(s.getInputStream(), s.getOutputStream());
proto.connect(new DefaultTlsClient() {
public TlsAuthentication getAuthentication() throws IOException {
return new ServerOnlyTlsAuthentication() {
public void notifyServerCertificate(Certificate serverCertificate) throws IOException {}
};
}
@Override
public int[] getCipherSuites() {
return new int[]{CipherSuite.TLS_RSA_WITH_NULL_SHA, CipherSuite.TLS_RSA_WITH_NULL_MD5};
}
private boolean first = true;
@Override
public Hashtable getClientExtensions() throws IOException {
#SuppressWarnings("unchecked")
Hashtable<Integer, byte[]> clientExtensions = super.getClientExtensions();
if (clientExtensions == null) {
clientExtensions = new Hashtable<Integer, byte[]>();
}
// If this is the first ClientHello, we're not doing anything
if (first) {
first = false;
}
else {
// If this is the second, we add the renegotiation_info extension
byte[] ext = new byte[proto.verifyData.length + 1];
ext[0] = (byte) proto.verifyData.length;
System.arraycopy(proto.verifyData, 0, ext, 1, proto.verifyData.length);
clientExtensions.put(ExtensionType.renegotiation_info, ext);
}
clientExtensions.put(ExtensionType.session_ticket, new byte[] {});
return clientExtensions;
}
});
proto.renegotiate();
}
}
And it's working... almost.
When I call the renegotiate() method, it:
- sends the ClientHello
- receives the ServerHello
- receives the Certificate
- receives the ServerHelloDone
- sends the ClientKeyExchange
- sends the ChangeCipherSpec
- sends the Finish
- receives an alert: Fatal, Decrypt Error; instead of NewSessionTicket, ChangeCipherSpec, Finished
And I just can't figure out why. I thought it could be the sequence number used to create the MAC that needs a refresh, but no: when I give it an obviously wrong value, I also receive a MAC error alert.
For my testing, I'm using a server that allows cleartext (NULL) cipher suites and, obviously, allows client-initiated renegotiation.
When I try with OpenSSL, it works fine, and I can't see where the difference is or what I'm doing wrong.
The server is on a private VPN, so you can't use it to test things, but here are the .cap files of the handshakes.
https://stuff.stooit.com/d/1/528b4a314e35d/openssl.cap
https://stuff.stooit.com/d/1/528b4a54a68cd/my.cap
The first one is the working one, using OpenSSL.
The second one is mine, using BouncyCastle.
I'm aware that it won't be very easy to help with this case, but hey, thanks to the people who'll try :)
OK, as always, I found the answer a short time after posting my question (even though I'd been on it for hours/days).
The problem is with the "Finished" message the client sends. The verify_data is a hash covering all previous handshake messages of the current negotiation.
But in my case it also contained the handshake messages of the first negotiation, so the verify_data didn't have the right value.
So to make it work, I need to reset the RecordStream hash by calling RecordStream.hash.reset() before starting the renegotiation, as sketched below.
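For illustration, the renegotiate() method from the question could then look roughly like this (a sketch; the recordStream and hash field names match the answer above but are version-dependent in BouncyCastle, and the access only works because TestProtocol is declared in the org.bouncycastle.crypto.tls package):
public void renegotiate() throws IOException {
    // Clear the handshake hash so verify_data covers only the messages
    // of this (second) negotiation, not the first one as well.
    recordStream.hash.reset();
    this.connection_state = CS_START;
    sendClientHelloMessage();
    this.connection_state = CS_CLIENT_HELLO;
    completeHandshake();
}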