Why were the messages delayed in NSQ? It's unexpected… - java

I use NSQ as the MQ in my project: a Java producer publishes messages to NSQ and a Go consumer consumes them. The strange thing is that the consumer always gets the messages a few seconds late. There are only a few messages, so I really don't know how to explain why this happens.
Here is the test result; please pay attention to the timestamps. Both consume the same topic. For the second message, for example, Go is 17 s behind Java (17:22:28 vs 17:22:11), and for the last one 7 s (17:25:38 vs 17:25:31).
Java result:
INFO | jvm 1 | 2018/07/11 17:22:01 | Msg
receive:{"did":"XSQ000200000005","msg":{"id":"5560","type":1,"content":"ZBINh6CBsLw7k2xjr1wslSjY+5QavEgYU6AzzLZn0lOgON9ZYHnNP4UJVUGB+/SpsxZQnrWR9PlULzpSP/p9l9t8wiAwj8qhznRaT8jeyx1/EUrDE0oXJB8GxWaLJUICCbC92j4BMA2HU8vgcfDOp9nSy1KFafi9zgFiCf9Igqo="}}
INFO | jvm 1 | 2018/07/11 17:22:11 | Msg
receive:{"did":"XSQ000200000005","msg":{"id":"5560","type":1,"content":"ZBINh6CBsLw7k2xjr1wslSjY+5QavEgYU6AzzLZn0lOgON9ZYHnNP4UJVUGB+/SpsxZQnrWR9PlULzpSP/p9l9t8wiAwj8qhznRaT8jeyx1/EUrDE0oXJB8GxWaLJUICCbC92j4BMA2HU8vgcfDOp9nSy1KFafi9zgFiCf9Igqo="}}
INFO | jvm 1 | 2018/07/11 17:23:21 | Msg
receive:{"did":"XSQ000200000005","msg":{"id":"5560","type":1,"content":"ZBINh6CBsLw7k2xjr1wslSjY+5QavEgYU6AzzLZn0lOgON9ZYHnNP4UJVUGB+/SpsxZQnrWR9PlULzpSP/p9l9t8wiAwj8qhznRaT8jeyx1/EUrDE0oXJB8GxWaLJUICCbC92j4BMA2HU8vgcfDOp9nSy1KFafi9zgFiCf9Igqo="}}
INFO | jvm 1 | 2018/07/11 17:25:31 | Msg
receive:{"did":"XSQ000200000005","msg":{"id":"5560","type":1,"content":"ZBINh6CBsLw7k2xjr1wslSjY+5QavEgYU6AzzLZn0lOgON9ZYHnNP4UJVUGB+/SpsxZQnrWR9PlULzpSP/p9l9t8wiAwj8qhznRaT8jeyx1/EUrDE0oXJB8GxWaLJUICCbC92j4BMA2HU8vgcfDOp9nSy1KFafi9zgFiCf9Igqo="}}
Go result:
2018-07-11 17:22:03 broker.go DEBUG Ready to send msg 5560 with type 1 to XSQ000200000005
2018-07-11 17:22:28 broker.go DEBUG Ready to send msg 5560 with type 1 to XSQ000200000005
2018-07-11 17:23:21 broker.go DEBUG Ready to send msg 5560 with type 1 to XSQ000200000005
2018-07-11 17:25:38 broker.go DEBUG Ready to send msg 5560 with type 1 to XSQ000200000005
Please ignore the other errors; they are just business logic.
Here is my Go consumer:
func (b *Broker) createConsumer(topic string, vendor int32) error {
    config := nsq.NewConfig()
    laddr := "127.0.0.1"
    // so that the test can simulate binding the consumer to a specified address
    config.LocalAddr, _ = net.ResolveTCPAddr("tcp", laddr+":0")
    // so that the test can simulate reaching max requeues and a call to LogFailedMessage
    config.DefaultRequeueDelay = 0
    // so that the test won't time out from backing off
    config.MaxBackoffDuration = time.Millisecond * 50

    c, err := nsq.NewConsumer(topic, "channel_box_" + util.String(vendor), config)
    if err != nil {
        return log.Error("Failed to create nsq consumer.")
    }
    c.AddConcurrentHandlers(nsq.HandlerFunc(func(message *nsq.Message) error {
        if err := b.handle(message, vendor); err != nil {
            log.Errorf("Handle message %v for vendor %d from mq failed.", message.ID, vendor)
        }
        // always return nil, so failed messages are not requeued
        return nil
    }), 5)
    if err = c.ConnectToNSQLookupds(b.Opts.Nsq.Lookup); err != nil {
        return log.Error("Failed to connect to nsq lookup server.")
    }
    b.consumers = append(b.consumers, c)
    return nil
}
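No answer appears in this thread. One way to narrow the delay down (my own suggestion; the sentAt field and the payload shape are invented for illustration, not part of the original protocol) is to stamp each message with a send time on the Java producer side and log the delta on receipt:

// Hypothetical diagnostic on the Java producer: embed the send time so each
// consumer can log end-to-end latency. "sentAt" is an invented field.
long sentAt = System.currentTimeMillis();
String payload = "{\"did\":\"XSQ000200000005\",\"sentAt\":" + sentAt
        + ",\"msg\":{\"id\":\"5560\",\"type\":1}}";
// Publish `payload` with the existing NSQ producer; in the Go handler, log
// time.Now().UnixMilli() - sentAt to see whether the lag occurs before or after nsqd.

If the gap is already present when the Go handler is entered, the next place to look is nsqd itself (e.g. its logs and queue depth) rather than the consumer configuration.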

Stub is not returning the mocked response for QueryForParameter request

I am trying to mock the QueryForParameter response, but I get the message 'Request was not matched'. I tried the different ways below but got the same result. Can anyone correct me to unblock?
ResponseDefinitionBuilder mockResponse1 = new ResponseDefinitionBuilder();
mockResponse1.withStatus(201);
mockResponse1.withBodyFile("pp/isOpenResponse.json");
//WireMock.stubFor(WireMock.get(endpoint).withQueryParam("city", equalTo("1")).willReturn(mockResponse1));
//WireMock.stubFor(get(endpoint + "?city=1 ").willReturn(mockResponse1));
WireMock.stubFor(WireMock.get(endpoint + "?city=1 ").willReturn(mockResponse1));
startServer();
String testapi = "http://localhost:8080" + endpoint;
System.out.print("service to be hit : " + testapi);
Response mockResponse = RestAssured.given().queryParam("city", "1").get(testapi).then().extract().response();
System.out.println(mockResponse.prettyPrint());
The response on the console is below:
Request was not matched
=======================
-----------------------------------------------------------------------
| Closest stub                     | Request                          |
-----------------------------------------------------------------------
| GET                              | GET                              |
| /readfromfile/index/?city=1      | /readfromfile/index/?city=1      | <<<<< URL does not match
-----------------------------------------------------------------------
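No accepted answer appears in this thread, but note that the stub is registered with a trailing space in the URL ("?city=1 "), which makes exact-URL matching fail even though the two printed URLs look identical in the diff. A minimal sketch of a stub that should match (the endpoint path is taken from the diff output above):

import static com.github.tomakehurst.wiremock.client.WireMock.*;

// Match the path and the query parameter separately, which sidesteps the
// trailing-space problem of exact-URL matching.
stubFor(get(urlPathEqualTo("/readfromfile/index/"))
        .withQueryParam("city", equalTo("1"))
        .willReturn(aResponse()
                .withStatus(201)
                .withBodyFile("pp/isOpenResponse.json")));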

Flux - parallel flatMap with webclient - limit to fixed batched rate

The code I have is this:
return Flux.fromIterable(new Generator()).log()
        .flatMap(
                s -> webClient
                        .head()
                        .uri(MessageFormat.format("/my-{2,number,#00}.xml", channel, timestamp, s))
                        .exchangeToMono(r -> Mono.just(r.statusCode()))
                        .filter(HttpStatus::is2xxSuccessful)
                        .map(r -> s),
                6) // only request 6 segments in parallel via webClient
        .take(6)   // we need only 6 200 OK responses
        .sort();
It just issues HEAD requests until the first 6 succeed.
Parallelization works here, but the problem is that as soon as one request completes, the next one is immediately triggered (to maintain a parallelism level of 6). What I need is a parallelism level of 6, but in batches. So: trigger 6 requests, wait until all complete, trigger the next 6 requests, and so on.
This is the output of the log() above:
: | request(6)
: | onNext(7)
: | onNext(17)
: | onNext(27)
: | onNext(37)
: | onNext(47)
: | onNext(57)
: | request(1) <---- from here NOT OK; wait until all complete and request(6)
: | onNext(8)
: | request(1)
: | onNext(18)
: | request(1)
: | onNext(28)
: | request(1)
: | onNext(38)
: | request(1)
: | onNext(48)
: | request(1)
: | onNext(58)
: | cancel()
UPDATE
This is what I tried with buffer:
return Flux.fromIterable(new Generator())
        .buffer(6)
        .flatMap(Flux::fromIterable)
        .log()
        .flatMap(
                s -> webClient
                        .head()
                        .uri(MessageFormat.format("/my-{2,number,#00}.xml", channel, timestamp, s))
                        .exchangeToMono(r -> Mono.just(r.statusCode()))
                        .filter(HttpStatus::is2xxSuccessful)
                        .map(r -> s),
                6) // only request 6 segments in parallel via webClient
        .take(6)
        .sort();
OK, it seems I now have code that works (the buffer variant above didn't help, since flatMap(Flux::fromIterable) simply re-flattens the batches before the downstream flatMap sees them). Here I use window:
return Flux.fromIterable(new Generator())
        .window(6) // group 0,1,2,3,... into windows [0..5], [6..11], [12..17], ...
        .log()
        .flatMap(
                s -> s.log().flatMap(
                        x -> webClient
                                .head()
                                .uri(MessageFormat.format("/my-{2,number,#00}.xml", channel, timestamp, x))
                                .exchangeToMono(r -> Mono.just(r.statusCode()))
                                .filter(HttpStatus::is2xxSuccessful)
                                .map(r -> x),
                        6), // inner concurrency 6: request all elements of a window in parallel
                1)          // outer concurrency 1: process only one window at a time
        .take(6, true) // pass through only 6 elements (cancel afterwards)
        .sort();
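A slightly more idiomatic spelling of the same idea (my suggestion, not from the original thread) is concatMap over the windows, which is equivalent to the outer flatMap with concurrency 1. Here headRequest is a hypothetical helper wrapping the WebClient call shown above:

return Flux.fromIterable(new Generator())
        .window(6)                                        // batches of 6
        .concatMap(w -> w.flatMap(this::headRequest, 6))  // one window at a time, 6 in parallel within it
        .take(6, true)
        .sort();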

SearchRequest in RootDSE

I have the following function to query users from an AD server:
public List<LDAPUserDTO> getUsersWithPaging(String filter)
{
    List<LDAPUserDTO> userList = new ArrayList<>();
    try (LDAPConnection connection = new LDAPConnection(config.getHost(), config.getPort(), config.getUsername(), config.getPassword()))
    {
        SearchRequest searchRequest = new SearchRequest("", SearchScope.SUB, filter, null);
        ASN1OctetString resumeCookie = null;
        while (true)
        {
            searchRequest.setControls(new SimplePagedResultsControl(100, resumeCookie));
            SearchResult searchResult = connection.search(searchRequest);
            for (SearchResultEntry e : searchResult.getSearchEntries())
            {
                LDAPUserDTO tmp = new LDAPUserDTO();
                tmp.distinguishedName = e.getAttributeValue("distinguishedName");
                tmp.name = e.getAttributeValue("name");
                userList.add(tmp);
            }
            LDAPTestUtils.assertHasControl(searchResult, SimplePagedResultsControl.PAGED_RESULTS_OID);
            SimplePagedResultsControl responseControl = SimplePagedResultsControl.get(searchResult);
            if (responseControl.moreResultsToReturn())
            {
                resumeCookie = responseControl.getCookie();
            }
            else
            {
                break;
            }
        }
        return userList;
    } catch (LDAPException e) {
        logger.error(e.getExceptionMessage());
        return null;
    }
}
However, this breaks when I try to search on the RootDSE.
What I've tried so far:
baseDN = null
baseDN = ""
baseDN = RootDSE.getRootDSE(connection).getDN()
baseDN = "RootDSE"
All resulting in various exceptions or empty results:
Caused by: LDAPSDKUsageException(message='A null object was provided where a non-null object is required (non-null index 0).
2020-04-01 10:42:22,902 ERROR [de.dbz.service.LDAPService] (default task-1272) LDAPException(resultCode=32 (no such object), numEntries=0, numReferences=0, diagnosticMessage='0000208D: NameErr: DSID-03100213, problem 2001 (NO_OBJECT), data 0, best match of:
''
', ldapSDKVersion=4.0.12, revision=aaefc59e0e6d110bf3a8e8a029adb776f6d2ce28')
So, I really spent a lot of time on this. It is possible to query the RootDSE, but it's not as straightforward as one might think.
I mainly used Wireshark to see what the folks at Softerra are doing with their LDAP Browser.
Turns out I wasn't that far away:
As you can see, the baseObject is empty here.
Also, there is one additional control with the OID LDAP_SERVER_SEARCH_OPTIONS_OID and the ASN.1 string 308400000003020102.
So what does this 308400000003020102 (more readably: 30 84 00 00 00 03 02 01 02) actually do?
First of all, we decode it into something we can read; in this case, that is the int 2.
In binary, this gives us: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
As we know from the documentation, we have the following notation:
| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 |
|---|---|---|---|---|---|---|---|---|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|-------|-------|
| x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | x | SSFPR | SSFDS |
or we just take the int values from the documentation:
1 = SSFDS -> SERVER_SEARCH_FLAG_DOMAIN_SCOPE
2 = SSFPR -> SERVER_SEARCH_FLAG_PHANTOM_ROOT
So, in my example, we have SSFPR which is defined as follows:
For AD DS, instructs the server to search all NC replicas except
application NC replicas that are subordinate to the search base, even
if the search base is not instantiated on the server. For AD LDS, the
behavior is the same except that it also includes application NC
replicas in the search. For AD DS and AD LDS, this will cause the
search to be executed over all NC replicas (except for application NCs
on AD DS DCs) held on the DC that are subordinate to the search base.
This enables search bases such as the empty string, which would cause
the server to search all of the NC replicas (except for application
NCs on AD DS DCs) that it holds.
NC stands for Naming Context; these are stored as an operational attribute in the RootDSE named namingContexts.
The other value, SSFDS, does the following:
Prevents continuation references from being generated when the search
results are returned. This performs the same function as the
LDAP_SERVER_DOMAIN_SCOPE_OID control.
So, someone might ask why I even do this. As it turns out, I have a customer with several sub-DCs under one DC. If I tell the search to follow referrals, the execution time is far too high, so that wasn't really an option for me. But when I turned referrals off, I wasn't getting all the results when I set the base DN to the group whose members I wanted to retrieve.
Searching via the RootDSE option in Softerra's LDAP Browser was much faster and returned the results in less than one second.
I personally have no clue why this is so much faster; Active Directory without any interface or tool from Microsoft is kind of black magic for me anyway. But to be frank, that's not really my area of expertise.
In the end, I ended up with the following Java code:
SearchRequest searchRequest = new SearchRequest("", SearchScope.SUB, filter, null);
[...]
Control globalSearch = new Control("1.2.840.113556.1.4.1340", true, new ASN1OctetString(Hex.decode("308400000003020102")));
searchRequest.setControls(new SimplePagedResultsControl(100, resumeCookie, true), globalSearch);
[...]
The Hex.decode() used here is org.bouncycastle.util.encoders.Hex.
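As a side note (my own sketch, not from the original post), the UnboundID SDK can also build that control value programmatically instead of hard-coding the hex. Note that this emits the canonical short-form length encoding 30 03 02 01 02 rather than the long-form 30 84 00 00 00 03 02 01 02 captured from Softerra; both decode to the same INTEGER 2:

import com.unboundid.asn1.ASN1Integer;
import com.unboundid.asn1.ASN1OctetString;
import com.unboundid.asn1.ASN1Sequence;
import com.unboundid.ldap.sdk.Control;

// SEQUENCE { INTEGER 2 }  -- 2 = SERVER_SEARCH_FLAG_PHANTOM_ROOT
byte[] value = new ASN1Sequence(new ASN1Integer(2)).encode();
Control globalSearch = new Control(
        "1.2.840.113556.1.4.1340",   // LDAP_SERVER_SEARCH_OPTIONS_OID
        true,                        // criticality
        new ASN1OctetString(value));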
A huge thanks to the folks at Softerra, who more or less brought my journey into the abyss of AD to an end.
You can't query users from the RootDSE.
Use either a domain or, if you need to query users across domains in a forest, use the global catalog (which runs on different ports, not the default 389/636 for LDAP(S)).
The RootDSE only contains metadata. This question should probably be asked elsewhere for more information, but first read up on the documentation from Microsoft, e.g.:
https://learn.microsoft.com/en-us/windows/win32/ad/where-to-search
https://learn.microsoft.com/en-us/windows/win32/adschema/rootdse
E.g. the namingContexts attribute can be read to find which other contexts you may want to query for actual users.
Maybe start with this nice article as an introduction:
http://cbtgeeks.com/2016/06/02/what-is-rootdse/

Apache Camel quartz2 timer starting multiple exchanges

I have an application that creates routes to connect to a REST endpoint and process the responses for several vendors. Each route is triggered by a quartz2 timer. Recently, when the timer fires, it creates multiple exchanges instead of just one, and I cannot determine what is causing it.
The method that creates the routes is here:
public String generateRoute(String vendorId) {
    routeBuilders.add(new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            System.out.println("Building REST input route for vendor " + vendorId);
            String vendorCron = vendorProps.getProperty(vendorId + ".rest.cron");
            String vendorEndpoint = vendorProps.getProperty(vendorId + ".rest.endpoint");
            String vendorAuth = vendorProps.getProperty(vendorId + ".rest.auth");
            int vendorTimer = Integer.valueOf(vendorId) * 10000;
            GsonDataFormat format = new GsonDataFormat(RestResponse.class);
            from("quartz2://timer" + vendorId + "?cron=" + vendorCron)
                    .routeId("Rte-vendor" + vendorId)
                    .streamCaching()
                    .log("Starting route " + vendorId)
                    .setHeader("Authorization", constant(vendorAuth))
                    .to("rest:get:" + vendorEndpoint)
                    .to("direct:processRestResponse")
                    .end();
        }
    });
    return "direct:myRoute." + vendorId;
}
A sample 'vendorCron' string is
"*+5+*+*+*+?&trigger.timeZone=America/New_York".
When the quartz route fires, I see this type of output in the log:
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
15:39| INFO | CamelLogger.java 159 | Starting route 4
I should (and used to) see only one of these.
Any ideas what would cause this?
Thanks!
This is because of your vendorCron:
If the cron trigger is every 5 seconds, you see this log entry every 5 seconds.
If the cron trigger is every 5 minutes/hours, you see these entries every 5 minutes/hours.
I was staring so hard I missed the obvious: I need a 0 in the seconds place of the cron expression.
Thank you for your time.
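For concreteness (my addition, not from the thread): with the sample cron above, * in the Quartz seconds field fires once per second for the whole of minute 5, producing one exchange per second; putting a 0 in the seconds place fires exactly once:

// Before: fires every second during minute 5 of every hour
//   quartz2://timer4?cron=*+5+*+*+*+?&trigger.timeZone=America/New_York
// After: fires once, at second 0 of minute 5
String vendorCron = "0+5+*+*+*+?&trigger.timeZone=America/New_York";
from("quartz2://timer" + vendorId + "?cron=" + vendorCron)
        .routeId("Rte-vendor" + vendorId);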

Message: listener timeout after waiting for [30000] ms

I am trying to implement simple CRUD operations on Elasticsearch using Groovy and Grails.
Sometimes I am able to create an index, and sometimes I get the exception below (a timeout). I have tried many ways; none of them work. I am stuck here; can someone help me get out of this?
Below the exception I have attached the code I am using. Please go through it and check whether it is correct or not.
Exception
Error |2018-05-29 23:13:18,320 [http-bio-8080-exec-10] ERROR errors.GrailsExceptionResolver - IOException occurred when processing request: [GET] /Sharama1/person/addPerson
listener timeout after waiting for [30000] ms. Stacktrace follows:
Message: listener timeout after waiting for [30000] ms
Line | Method
->> 661 | get in org.elasticsearch.client.RestClient$SyncResponseListener
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 220 | performRequest in org.elasticsearch.client.RestClient
| 192 | performRequest . . . . . . . . . . . in ''
| 428 | performRequest in org.elasticsearch.client.RestHighLevelClient
| 414 | performRequestAndParseEntity . . . . in ''
| 299 | index in ''
| -2 | invoke0 . . . . . . . . . . . . . . . in sun.reflect.NativeMethodAccessorImpl
| 62 | invoke in ''
| 43 | invoke . . . . . . . . . . . . . . . in sun.reflect.DelegatingMethodAccessorImpl
| 497 | invoke in java.lang.reflect.Method
| 1426 | jlrMethodInvoke . . . . . . . . . . . in org.springsource.loaded.ri.ReflectiveInterceptor
| 189 | invoke in org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoCachedMethodSite
Create index code
def addPerson = {
    RestHighLevelClient client = ESService.getClient()
    Map<String, Object> jsonMap = new HashMap<>()
    jsonMap.put("firstName", "abcd")
    jsonMap.put("lastName", "xyz")
    jsonMap.put("date", new Date())
    jsonMap.put("message", "Hugh data Index mapping")
    IndexRequest indexRequest = new IndexRequest("person1", "hughdata", "4").source(jsonMap)
    IndexResponse res = client.index(indexRequest)
    String index = res.getIndex()
    String type = res.getType()
    String id = res.id
    long version = res.getVersion()
    DocWriteResponse.Result result = res.getResult()
    if (result == DocWriteResponse.Result.CREATED) {
        println("index created = " + result)
    } else if (result == DocWriteResponse.Result.UPDATED) {
        println("index Updated = " + result)
    }
    ["index": index, "type": type, "id": id, "version": version]
}
Client creation code
class ESService {
    RestHighLevelClient client = null
    //TransportClient client = null

    def RestHighLevelClient getClient() {
        try {
            String hostname = "localhost"
            int port = 9200
            String scheme = "http"
            client = new RestHighLevelClient(RestClient.builder(new HttpHost(hostname, port, scheme)))
            boolean pingResponse = client.ping()
            if (pingResponse) {
                print("connection established... " + pingResponse)
            } else {
                print("connection not established. Try again: " + pingResponse)
            }
            /*return client*/
        }
        catch (ElasticsearchException e) {
            e.printStackTrace()
        }
        return client
    }
}
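No answer was posted in this thread. One common mitigation for this specific error (my suggestion, not from the thread; verify against your client version) is to raise the low-level REST client's timeouts, since the "listener timeout after waiting for [30000] ms" message comes from the 6.x client's max retry timeout, which defaults to 30 s:

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;

RestClientBuilder builder = RestClient.builder(new HttpHost("localhost", 9200, "http"))
        // raise the socket timeout from its 30 s default
        .setRequestConfigCallback(rc -> rc.setConnectTimeout(5_000).setSocketTimeout(60_000))
        // the retry timeout that produces the "listener timeout" message (6.x API, removed in 7.x)
        .setMaxRetryTimeoutMillis(60_000);
// Matches the constructor style used in the question; some 6.x versions
// expect the built client instead: new RestHighLevelClient(builder.build())
RestHighLevelClient client = new RestHighLevelClient(builder);

If raising the timeouts only masks the problem, it is also worth checking whether the Elasticsearch node itself is slow to respond (e.g. GC pauses or heavy indexing) while these requests run.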
BuildConfig.groovy
dependencies {
    // specify dependencies here under either 'build', 'compile', 'runtime', 'test' or 'provided' scopes e.g.
    // runtime 'mysql:mysql-connector-java:5.1.27'
    // runtime 'org.postgresql:postgresql:9.3-1100-jdbc41'
    test "org.grails:grails-datastore-test-support:1.0-grails-2.3"
    compile group: 'org.elasticsearch', name: 'elasticsearch', version: '6.0.1'
    compile group: 'org.elasticsearch.client', name: 'elasticsearch-rest-high-level-client', version: '6.0.1'
    compile('com.amazonaws:aws-java-sdk-elasticsearch:1.11.123')
    compile('com.amazonaws:aws-java-sdk-elasticloadbalancingv2:1.11.123')
}
