How to validate whether PoolingHttpClientConnectionManager is applied on a Jersey client - java

Below is the code snippet I am using for Jersey client connection pooling.
ClientConfig clientConfig = new ClientConfig();
clientConfig.property(ClientProperties.CONNECT_TIMEOUT, defaultConnectTimeout);
clientConfig.property(ClientProperties.READ_TIMEOUT, defaultReadTimeout);
PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
cm.setMaxTotal(50);
cm.setDefaultMaxPerRoute(5);
clientConfig.property(ApacheClientProperties.CONNECTION_MANAGER, cm);
clientConfig.connectorProvider(new ApacheConnectorProvider());
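The Client itself is then built from this config, which for reference would look something like this (a minimal sketch, assuming the standard JAX-RS ClientBuilder):
Client client = ClientBuilder.newClient(clientConfig);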
How can I validate that my client is actually using connection pooling? Is checking poolStats.getAvailable() a valid way of making sure? In my case this available count was 1 when I tested the client.

Yes, the count can be 1, but to confirm, you can try the following steps.
First, add a background thread that keeps running and prints the current state of the pool stats at some interval, say every 60 seconds. You can use the logic below; make sure the code running in the background thread refers to the same PoolingHttpClientConnectionManager instance that the client uses. A sketch of the thread setup follows the logic block.
Then repeatedly call the code that invokes the external service through this Jersey client (for example, in a for loop).
You should see the values in your background thread's logs change, which confirms that the Jersey client is actually using the pooled configuration.
Logic:
PoolStats poolStats = cm.getTotalStats();
Set<HttpRoute> routes = cm.getRoutes();
if (CollectionUtils.isNotEmpty(routes)) {
    for (HttpRoute route : routes) {
        // per-route stats from the same connection manager instance ("cm")
        PoolStats routeStats = cm.getStats(route);
        int routeAvailable = routeStats.getAvailable();
        int routeLeased = routeStats.getLeased();
        int routeIdle = (routeAvailable - routeLeased);
        log.info("Pool Stats for Route - Host = {}, Available = {}, Leased = {}, Idle = {}, Pending = {}, Max = {}",
                route.getTargetHost(), routeAvailable, routeLeased, routeIdle, routeStats.getPending(), routeStats.getMax());
    }
}
int available = poolStats.getAvailable();
int leased = poolStats.getLeased();
int idle = (available - leased);
log.info("Pool Stats - Available = {}, Leased = {}, Idle = {}, Pending = {}, Max = {}",
        available, leased, idle, poolStats.getPending(), poolStats.getMax());
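As a sketch, the logic above could run on a scheduled background thread like this (the 60-second interval is only an example; cm is the same connection-manager instance the client was configured with):
ScheduledExecutorService monitor = Executors.newSingleThreadScheduledExecutor();
monitor.scheduleAtFixedRate(() -> {
    // print the total pool state; the per-route loop above can go here as well
    PoolStats totalStats = cm.getTotalStats();
    log.info("Pool Stats - Available = {}, Leased = {}, Pending = {}, Max = {}",
            totalStats.getAvailable(), totalStats.getLeased(),
            totalStats.getPending(), totalStats.getMax());
}, 0, 60, TimeUnit.SECONDS);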

Related

Pod list size returned zero post Kubernetes job creation

We are creating a Kubernetes Job using the Java Kubernetes client API (v5.12.2) like below.
I am stuck at two places. Could someone please help with these?
podList.getItems().size() in the code snippet below sometimes returns zero even though I can see the pod got created, alongside pods of other existing jobs.
How do I assign a particular label to the job's pod?
KubernetesClient kubernetesClient = new DefaultKubernetesClient();
String namespace = System.getenv(POD_NAMESPACE);
String jobName = TextUtils.concatenateToString("flatten" + Constants.HYPHEN + flattenId);
Job jobRequest = createJob(flattenId, authValue);
var jobResult = kubernetesClient.batch().v1().jobs().inNamespace(namespace)
.create(jobRequest);
PodList podList = kubernetesClient.pods().inNamespace(namespace)
.withLabel("job-name", jobName).list();
// Wait for pod to complete
var pods = podList.getItems().size();
var terminalPodStatus = List.of("succeeded", "failed");
_LOGGER.info("pods created size:" + pods);
if (pods > 0) {
// returns zero some times.
var k8sPod = podList.getItems().get(0);
var podName = k8sPod.getMetadata().getName();
kubernetesClient.pods().inNamespace(namespace).withName(podName)
.waitUntilCondition(pod -> {
var podPhase = pod.getStatus().getPhase();
//some logic
return terminalPodStatus.contains(podPhase.toLowerCase());
}, JOB_TIMEOUT, TimeUnit.MINUTES);
kubernetesClient.close();
}
private Job createJob(String flattenId, String authValue) {
return new JobBuilder()
.withApiVersion(API_VERSION)
.withNewMetadata().withName(jobName)
.withLabels(labels)
.endMetadata()
.withNewSpec()
.withTtlSecondsAfterFinished(300)
.withBackoffLimit(0)
.withNewTemplate()
.withNewMetadata().withAnnotations(LINKERD_INJECT_ANNOTATIONS)
.endMetadata()
.withNewSpec()
.withServiceAccount(Constants.TEST_SERVICEACCOUNT)
.addNewContainer()
.addAllToEnv(envVars)
.withImage(System.getenv(BUILD_JOB_IMAGE))
.withName("test")
.withCommand("/bin/bash", "-c", "java -jar test.jar")
.endContainer()
.withRestartPolicy(RESTART_POLICY_NEVER)
.endSpec()
.endTemplate()
.endSpec()
.build();
}
Pods are not instantly created as a consequence of creating a Job: the Job controller needs to become active and create the Pods accordingly. Depending on the load on your control plane and the number of Job instances, you may need to wait more or less time.
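A minimal sketch of polling until the pod shows up before reading the list (the retry count and sleep interval are arbitrary illustrations, not part of the original code):
PodList podList = null;
for (int attempt = 0; attempt < 30; attempt++) {
    podList = kubernetesClient.pods().inNamespace(namespace)
            .withLabel("job-name", jobName).list();
    if (!podList.getItems().isEmpty()) {
        break; // the Job controller has created the pod
    }
    Thread.sleep(2000); // give the control plane time to create the pod
}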

Java: Kafka AdminClient not establishing connection (or so it seems)

Hi everyone.
This is my first post here, so please pardon my lack of finesse in writing Stack Overflow questions.
I am having trouble using AdminClient from org.apache.kafka.clients.admin.AdminClient.
The issue at hand is this:
I initiate a secure connection to our broker server (running Kafka 1.0.0) using SASL_SSL.
It works just fine when I run a consumer against that same broker with the same security settings. However, when I do the AdminClient work, it seems to have worked, but I see no traffic going from my machine to the broker server whatsoever in Wireshark, and what I am trying to do does not happen on the broker side.
here is my code:
public class AclProvisioner {
//set up variables
private static Properties props = new Properties();
private static ClassLoader classloader = Thread.currentThread().getContextClassLoader();
static String mid = null;
static String topic = null;
public static void main(String... args) {
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafkabroker.mydomain.com:9094");
props.put("security.protocol","SASL_SSL");
props.put("ssl.truststore.location", "C:\\Temp\\mydomain.root.jks" );
props.put("ssl.truststore.password","my_truststore_password");
props.put("sasl.mechanism","GSSAPI");
props.put("sasl.kerberos.service.name","kafka_admin_username");
AdminClient adminClient = AdminClient.create(props);
// generate ACLs
AclBinding newTopicReadAcl = new AclBinding( new Resource(ResourceType.TOPIC, "TestTopic"),
new AccessControlEntry("MY_TESTID", "*", AclOperation.READ, AclPermissionType.ALLOW) );
AclBinding newTopicDescribeAcl = new AclBinding( new Resource(ResourceType.TOPIC, "TestTopic"),
new AccessControlEntry("MY_TESTID", "*", AclOperation.DESCRIBE, AclPermissionType.ALLOW) );
AclBinding newGroupReadAcl = new AclBinding( new Resource(ResourceType.GROUP, "TestGroup"),
new AccessControlEntry("MY_TESTID", "*", AclOperation.READ, AclPermissionType.ALLOW) );
Collection<AclBinding> aclList = Arrays.asList(newTopicReadAcl, newTopicDescribeAcl, newGroupReadAcl);
adminClient.createAcls(aclList);
// create topic
int numPartitions = 6;
short replicasFactor = 2;
NewTopic newTopic = new NewTopic("Demo.JavaAdminClientTest", numPartitions, replicasFactor);
Map<String, String> configMap = new HashMap<>();
configMap.put(TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT);
configMap.put(TopicConfig.COMPRESSION_TYPE_CONFIG, "gzip");
newTopic.configs(configMap);
List<NewTopic> topics = Arrays.asList(newTopic);
adminClient.createTopics( topics );
}
}
If I ssh to the server itself and export my keytab and kinit, I am able to generate ACLs just fine using CLI method. I am also able to run a consumer using the same exact properties (as far as security goes).
Another thing I have discovered is that if I point it at a server that does not exist or cannot be reached, the program does fail, telling me that it could not resolve the BOOTSTRAP_SERVER_NAME.
The same exact behavior happens if instead of ACLs I attempt to create topics. Once again, that works just fine from the CLI.
I appreciate any pointers!
Cheers
All AdminClient methods are asynchronous and only return Future objects.
So if you don't explicitly wait for the futures to complete, your program just terminates before the AdminClient has time to send anything over the network.
You can use all() or values() on the CreateAclsResult [0] and CreateTopicsResult [1] to retrieve KafkaFuture [2] objects, then call get() on them to wait for completion.
[0] http://kafka.apache.org/11/javadoc/org/apache/kafka/clients/admin/CreateAclsResult.html
[1] http://kafka.apache.org/11/javadoc/org/apache/kafka/clients/admin/CreateTopicsResult.html
[2] http://kafka.apache.org/11/javadoc/org/apache/kafka/common/KafkaFuture.html
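Applied to the code in the question, the waiting could look like this (a minimal sketch; exception handling omitted):
adminClient.createAcls(aclList).all().get();   // block until all ACLs are created
adminClient.createTopics(topics).all().get();  // block until the topic is created
adminClient.close();                           // flush pending requests and release resources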

How to set setMaxPerRoute in PoolingHttpClientConnectionManager?

I have a PoolingHttpClientConnectionManager where I want to set the max connections per route. I'm doing it the following way:
poolingHttpClientConnectionManager.setDefaultMaxPerRoute(5);
poolingHttpClientConnectionManager.setMaxPerRoute(new HttpRoute(HttpHost.create(url)), 3);
where, for example, my url is https://repo.maven.apache.org/maven2.
So I have a default max per route of 5, and 3 for that specific url. Then if I call
poolingHttpClientConnectionManager.getStats(new HttpRoute(HttpHost.create(url)));
I receive as a result a PoolStats with max = 3, so everything is OK so far.
But when I create a client with this pooling connection manager and call the same url, I can see in the logs:
PoolingHttpClientConnectionManager - Connection leased: [id: 0][route: {s}->https://repo.maven.apache.org:443][total kept alive: 0; route allocated: 1 of 5; total allocated: 1 of 200]
As I can see, it still uses 5 as the max number of connections for that example url.
So my question is: how do I set the max connections per route so that it actually takes effect?
OK, I've managed to fix it with the following code:
// code to create the HttpRoute the same way the Apache library does internally
private HttpRoute getHttpRouteForUrl(String url) throws URISyntaxException
{
    URI uri = new URI(url);
    boolean secure = uri.getScheme().equalsIgnoreCase("https");
    int port = uri.getPort();
    if (port <= 0)
    {
        // no explicit port in the url; derive it from the scheme
        if (uri.getScheme().equalsIgnoreCase("https"))
        {
            port = 443;
        }
        else if (uri.getScheme().equalsIgnoreCase("http"))
        {
            port = 80;
        }
        else
        {
            LOGGER.warn("Unknown port of uri {}", url);
        }
    }
    HttpHost httpHost = new HttpHost(uri.getHost(), port, uri.getScheme());
    // TODO check whether we need this InetAddress as second param
    return new HttpRoute(httpHost, null, secure);
}
If we use this HttpRoute for setMaxPerRoute, everything works as expected. The route created from HttpHost.create(url) did not match the route the connection manager actually uses, because for an https url the real route carries the resolved port (443) and the secure flag.
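Usage would then look something like this (a minimal sketch, reusing the connection manager and example url from the question):
poolingHttpClientConnectionManager.setDefaultMaxPerRoute(5);
poolingHttpClientConnectionManager.setMaxPerRoute(
        getHttpRouteForUrl("https://repo.maven.apache.org/maven2"), 3);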

Gradle P4Java java.net.SocketTimeoutException: Read timed out

I am using the P4Java library in my build.gradle file to sync a large zip file (>200MB) residing in a remote Perforce repository, but I am encountering a "java.net.SocketTimeoutException: Read timed out" error either during the sync process or (mostly) while deleting the temporary client created for the sync operation. I am referring to http://razgulyaev.blogspot.in/2011/08/p4-java-api-how-to-work-with-temporary.html for working with temporary clients using the P4Java API.
I tried increasing the socket read timeout from the default 30 sec as suggested in http://answers.perforce.com/articles/KB/8044 and also introducing sleeps, but neither approach solved the problem. Probing the server to verify the connection using getServerInfo() right before performing the sync or delete operations results in a successful connection check. Can someone please point me to where I should look for answers?
Thank you.
Providing the code snippet:
void perforceSync(String srcPath, String destPath, String server) {
// Generating the file(s) to sync-up
String[] pathUnderDepot = [
srcPath + "*"
]
// Increasing timeout from default 30 sec to 60 sec
Properties defaultProps = new Properties()
defaultProps.put(PropertyDefs.PROG_NAME_KEY, "CustomBuildApp")
defaultProps.put(PropertyDefs.PROG_VERSION_KEY, "tv_1.0")
defaultProps.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000")
// Instantiating the server
IOptionsServer p4Server = ServerFactory.getOptionsServer("p4java://" + server, defaultProps)
p4Server.connect()
// Authorizing
p4Server.setUserName("perforceUserName")
p4Server.login("perforcePassword")
// Just check if connected successfully
IServerInfo serverInfo = p4Server.getServerInfo()
println 'Server info: ' + serverInfo.getServerLicense()
// Creating new client
IClient tempClient = new Client()
// Setting up the name and the root folder
tempClient.setName("tempClient" + UUID.randomUUID().toString().replace("-", ""))
tempClient.setRoot(destPath)
tempClient.setServer(p4Server)
// Setting the client as the current one for the server
p4Server.setCurrentClient(tempClient)
// Creating Client View entry
ClientViewMapping tempMappingEntry = new ClientViewMapping()
// Setting up the mapping properties
tempMappingEntry.setLeft(srcPath + "...")
tempMappingEntry.setRight("//" + tempClient.getName() + "/...")
tempMappingEntry.setType(EntryType.INCLUDE)
// Creating Client view
ClientView tempClientView = new ClientView()
// Attaching client view entry to client view
tempClientView.addEntry(tempMappingEntry)
tempClient.setClientView(tempClientView)
// Registering the new client on the server
println p4Server.createClient(tempClient)
// Surrounding the underlying block with try as we want some action
// (namely client removing) to be performed in any way
try {
// Forming the FileSpec collection to be synced-up
List<IFileSpec> fileSpecsSet = FileSpecBuilder.makeFileSpecList(pathUnderDepot)
// Syncing up the client
println "Syncing..."
tempClient.sync(FileSpecBuilder.getValidFileSpecs(fileSpecsSet), true, false, false, false)
}
catch (Exception e) {
println "Sync failed. Trying again..."
sleep(60 * 1000)
tempClient.sync(FileSpecBuilder.getValidFileSpecs(fileSpecsSet), true, false, false, false)
}
finally {
println "Done syncing."
try {
p4Server.connect()
IServerInfo serverInfo2 = p4Server.getServerInfo()
println '\nServer info: ' + serverInfo2.getServerLicense()
// Removing the temporary client from the server
println p4Server.deleteClient(tempClient.getName(), false)
}
catch(Exception e) {
println 'Ignoring exception caught while deleting tempClient!'
/*sleep(60 * 1000)
p4Server.connect()
IServerInfo serverInfo3 = p4Server.getServerInfo()
println '\nServer info: ' + serverInfo3.getServerLicense()
sleep(60 * 1000)
println p4Server.deleteClient(tempClient.getName(), false)*/
}
}
}
One unusual thing I observed while deleting tempClient was that it actually deleted the client but still threw "java.net.SocketTimeoutException: Read timed out", which is why I ended up commenting out the second delete attempt in the second catch block.
Which version of P4Java are you using? Have you tried this with the newest P4Java? There are notable fixes dealing with RPC sockets since the 2013.2 version, as can be seen in the release notes:
http://www.perforce.com/perforce/doc.current/user/p4javanotes.txt
Here are some variations that you can try in the part of your code that increases the timeout and instantiates the server:
a] Have you tried passing props as its own argument? For example:
Properties prop = new Properties();
prop.setProperty(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "300000");
UsageOptions uop = new UsageOptions(prop);
server = ServerFactory.getOptionsServer(ServerFactory.DEFAULT_PROTOCOL_NAME + "://" + serverPort, prop, uop);
Or something like the following:
IOptionsServer p4Server = ServerFactory.getOptionsServer("p4java://" + server, defaultProps)
You can also set the timeout to "0" to give it no timeout.
b]
props.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000");
props.put(RpcPropertyDefs.RPC_SOCKET_POOL_SIZE_NICK, "5");
c]
Properties props = System.getProperties();
props.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000");
IOptionsServer server =
ServerFactory.getOptionsServer("p4java://perforce:1666", props, null);
d] In case you have Eclipse users using our P4Eclipse plugin, the property can be set in the plugin preferences (Team->Perforce->Advanced) under the Custom P4Java Properties.
"sockSoTimeout" : "3000000"
REFERENCES
Class RpcPropertyDefs
http://perforce.com/perforce/doc.current/manuals/p4java-javadoc/com/perforce/p4java/impl/mapbased/rpc/RpcPropertyDefs.html
P4Eclipse or P4Java: SocketTimeoutException: Read timed out
http://answers.perforce.com/articles/KB/8044

Communication of separate processes through Esper events

I am trying to let multiple Java processes exchange events using Esper. One process should send events; the other prepares a query and reacts according to the reported events.
When both operations are done within the same Java process, everything works fine. But when I use two different processes, they just don't see each other.
I am wondering what the key to this communication is. I used the same name for the provider; this is all I could do so far.
The Producer:
String aType = espertest.dummy.A.class.getName();
Configuration cepConfig = new Configuration();
cepConfig.addEventType("A",aType);
EPServiceProvider epService = EPServiceProviderManager.getProvider("DummyProvider", cepConfig);
Object o = new A();
epService.getEPRuntime().sendEvent(o);
The Consumer:
String aType = A.class.getName();
String expression = "select count(*) from " + aType;
System.out.println("Our Query: " + expression);
Configuration cepConfig = new Configuration();
cepConfig.addEventType("A",aType);
EPServiceProvider epService = EPServiceProviderManager.getProvider("DummyProvider", cepConfig);
EPStatement statement = epService.getEPAdministrator().createEPL(expression);
DummyListener listener = new DummyListener();
statement.addListener(listener);
System.out.println("Anything");
try{
A a = new A();
epService.getEPRuntime().sendEvent(a);
Thread.sleep(60000);
} catch (Exception e) {
System.out.println("Exception: " + e);
}
The consumer tries to count the events of type A. It also sends an instance of A as a test, and this works fine. The listener is called as expected.
The code above is just an excerpt.
You need to configure middleware (a message queue, distributed cache, networked filesystem, socket connection, etc.) to get the events from the producer JVM to the consumer JVM. If you can deploy the producer and consumer to a container that supports Apache Camel (e.g. ServiceMix), it should be trivial to stand up a prototype that uses ActiveMQ to transport your objects into Esper, as Camel has support for both products.
JVM 1:
  From Data Source
  To CEP Engine 1
  To Message Queue
JVM 2 (could also host the MQ Broker):
  From Message Queue
  To CEP Engine 2
  To Destination
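As an illustration, the bridge on the consumer JVM could look like this (a minimal sketch assuming plain JMS with ActiveMQ rather than Camel; the broker URL and queue name are hypothetical, A must be Serializable, and epService is the local Esper engine):
// Receive serialized A events from the queue and replay them into the local Esper engine
ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
Connection connection = factory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageConsumer consumer = session.createConsumer(session.createQueue("esper.events"));
consumer.setMessageListener(message -> {
    try {
        // Unwrap the event object and hand it to Esper, same as a local sendEvent call
        epService.getEPRuntime().sendEvent(((ObjectMessage) message).getObject());
    } catch (JMSException e) {
        e.printStackTrace();
    }
});
connection.start();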
Update:
If the producer and consumer can be threads in the same JVM, then the issue may be in the consumer. I cannot see where the consumer does anything with the event from the producer. Try something like this instead (the Esper reference is passed to the producer and consumer, and the consumer is reworked with an update method that handles the results of the select statement).
Test Driver:
public Driver() {
String aType = espertest.dummy.A.class.getName();
Configuration cepConfig = new Configuration();
cepConfig.addEventType("A",aType);
EPServiceProvider epService = EPServiceProviderManager.getProvider("DummyProvider", cepConfig);
Consumer c = new Consumer(epService);
Producer p = new Producer(epService);
}
Producer:
public Producer(EPServiceProvider epsp) {
Object o = new A();
epsp.getEPRuntime().sendEvent(o);
}
Consumer:
public Consumer(EPServiceProvider epsp) {
// "input" was undefined in the original snippet; assuming a select over the registered event type A
EPStatement statement = epsp.getEPAdministrator().createEPL("select * from A");
statement.setSubscriber(this);
}
public void update(A event) {
System.out.println("Consumer received event!");
}
