Always getting error “Non HTTP response code: java.net.UnknownHostException” - java

I tried stress testing a web site with JMeter because it crashed after an SMS campaign. The site has since been moved to a physical server.
I tested multiple times while increasing the thread count: it worked with a few errors above 1000 threads, and ran with no errors at 400 threads. So I tried distributed testing with 4 PCs, including my own.
Afterwards I tried again with only my PC sending requests to the site, using 400 threads (ramp-up = 1, loop = 1). But every single request gives the error. Then I tried with 1 thread and got the same error.
I checked my network connection and there is no problem. I also browsed the web site "http://www.myjobs.lk/", and it works fine.
These are the values I have given in the test.
Under these conditions I cannot perform the test because it always gives errors. How can I overcome this problem?

You're using an incorrect JMeter configuration; change it as follows:
Remove http:// from the "Server Name or IP" input.
Put http into the "Protocol" input.
It is also possible to put the full URL in the "Path" field, e.g. http://www.myjobs.lk/.
But using "http://" in "Server Name or IP" won't work.
Also, once you define the hostname, port, path, etc. in HTTP Request Defaults, they are automatically applied to all HTTP Request samplers. You can still override any option for a particular sampler; if you don't, the default value is used. See Why It's SO Important To Use JMeter's HTTP Request Defaults for a more detailed explanation and some use cases.
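If you drive JMeter from code rather than the GUI, the same split is visible on the sampler's setters. A minimal sketch, assuming the JMeter Java API (the ApacheJMeter_http jar) is on the classpath; the setters mirror the GUI fields described above:

import org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy;

public class SamplerConfigSketch {
    public static HTTPSamplerProxy buildSampler() {
        HTTPSamplerProxy sampler = new HTTPSamplerProxy();
        sampler.setProtocol("http");          // "Protocol" field
        sampler.setDomain("www.myjobs.lk");   // "Server Name or IP" field - no http:// prefix here
        sampler.setPort(80);                  // "Port Number" field
        sampler.setPath("/");                 // "Path" field
        sampler.setMethod("GET");
        return sampler;
    }
}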

For me it was helpful to set up a proxy server.

Looks like JMeter tries to connect to myjobs.lk, while you browse to www.myjobs.lk. Try changing the server name so that JMeter also connects to www.myjobs.lk.

Related

2 Same HTTP Requests Give Different Results?

What differentiates these 2 requests and causes them to get different results/responses from the server, even though they should be the same?
A request initiated by Chrome after a simple click/navigation succeeds (response code is 302).
I simply copied that request as cURL and imported it into Postman, and then Postman hung.
I did the same with Java's HttpURLConnection (mimicking all the request headers and cookies Chrome sent), but it hung and waited forever. Is this simply because of server logic that doesn't accept non-browser clients?
Here are the steps that I tried:
1. Visited this link: https://www.tokopedia.com/p/handphone-tablet/handphone
2. I opened the inspector and opened the Network - All tab
3. I clicked one of the products
4. I clicked the top request from the Network - All tab
5. I copied it as cURL bash
6. I imported it to Postman
7. I ran that request
8. Postman hung
Actually the problem might even go deeper than what the other answers say.
So neither the User-Agent request header nor telnet may solve the problem (unless you also perform the TLS handshake manually over telnet, which is nearly impossible to complete).
TLS fingerprinting
If the connection is an SSL/TLS connection, the server can detect which ciphers and algorithms the client offers during the handshake, and most applications have their own specific signature/cipher list.
So by the TLS handshake alone you can tell Chrome from Postman, Firefox, or Java. Java usually - unless a JVM implementation really wants to go off-road - has the same signature across all platforms, using the same ciphers/algorithms across all implementations.
The technique is known as JA3 fingerprinting. Salesforce published an article about JA3 analysis; it describes the technique and shows a list of signatures and applications, so you can guesstimate what app you're talking to without even needing to decrypt the data: https://engineering.salesforce.com/tls-fingerprinting-with-ja3-and-ja3s-247362855967
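As an illustration of why the Java client looks different on the wire, here is a minimal sketch (plain JSSE, no external libraries) that prints the cipher suites the default JVM TLS stack offers; this offered list, together with the handshake extensions, is the kind of data a JA3-style fingerprint hashes:

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;

public class CipherSuiteDump {
    public static void main(String[] args) throws Exception {
        // The default cipher suite list is essentially the same on every JVM,
        // which is why Java traffic is so easy to recognize.
        SSLContext ctx = SSLContext.getDefault();
        SSLSocketFactory factory = ctx.getSocketFactory();
        for (String suite : factory.getDefaultCipherSuites()) {
            System.out.println(suite);
        }
    }
}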
My Solution
I had the same problem too: I wanted to scan the NVidia and AMD servers for graphics card availability. It did not work from Java, so after a lot of research (and finding the project mentioned above), I simply used Selenium to control Firefox, which got the proper server responses, and I achieved my goal this way.
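For reference, a minimal sketch of that Selenium approach, assuming the Selenium Java bindings and geckodriver are installed (the URL is the one from the question; any page works):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class BrowserFetch {
    public static void main(String[] args) {
        // A real Firefox process performs the TLS handshake,
        // so the server sees a genuine browser fingerprint.
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("https://www.tokopedia.com/p/handphone-tablet/handphone");
            System.out.println("Page size: " + driver.getPageSource().length());
        } finally {
            driver.quit();
        }
    }
}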
The only way to be sure that the exact same data is sent is to send it yourself manually through something like telnet. I had a similar problem once - it turned out that the browser was sending the data in one big chunk, while my code was sending it line by line. No site should have this problem, but it's possible that it exists.
The server might be checking the User-Agent request header and blocking traffic that does not originate from a browser. Try setting the header in curl or your Java code to a value corresponding to (any) browser. I've encountered such behavior on some e-shops and commercial websites.
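A minimal sketch of setting that header from Java, using HttpURLConnection as in the question (the User-Agent string is just an example browser value):

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class UserAgentRequest {
    public static void main(String[] args) throws IOException {
        URL url = new URL("https://www.tokopedia.com/p/handphone-tablet/handphone");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Pretend to be a desktop browser; some servers block the default Java user agent.
        conn.setRequestProperty("User-Agent",
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0 Safari/537.36");
        conn.setInstanceFollowRedirects(false); // the expected response in the question is a 302
        System.out.println("Response code: " + conn.getResponseCode());
    }
}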

Tomcat 7 , Spring rest template application producing err_invalid_chunked_encoding in browser

I have a Tomcat 7, Spring 4.2 RestController implementation of a REST API which seems to produce ERR_INVALID_CHUNKED_ENCODING for a few API calls when returning a JSON response.
It is the same code that creates the ResponseEntity, but for some API calls the Content-Length is set properly, while for other calls Transfer-Encoding is set to chunked.
private CacheControl cacheControl = CacheControl.noStore().mustRevalidate();
protected <T> ResponseEntity<TNRestResponse<T>> createEntity(TNRestResponse<T> res) {
return ResponseEntity.ok().cacheControl(cacheControl).body(res);
}
The weird part is that the same API call that produces ERR_INVALID_CHUNKED_ENCODING works fine in another environment. The only difference is that in the problematic scenario the client and the service are running on the same server.
We already tried setting the Content-Length manually, which seems to result in a premature end of file on the client. The JSON length is only around 468 characters, but the client receives only 409 characters, even though the server logs show that the full response has been sent and the connection closed.
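For comparison, here is a minimal sketch (not the original code; the helper name and the up-front JSON serialization are assumptions) of forcing a known Content-Length on the Spring side. Note that Content-Length must be the byte length of the encoded body, not the character count; setting the character count is a common cause of the truncated-body symptom described above.

import java.nio.charset.StandardCharsets;
import org.springframework.http.CacheControl;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;

public class FixedLengthResponses {
    private final CacheControl cacheControl = CacheControl.noStore().mustRevalidate();

    // Serialize the JSON yourself so the exact byte length is known up front.
    protected ResponseEntity<byte[]> createEntityWithLength(String json) {
        byte[] body = json.getBytes(StandardCharsets.UTF_8);
        return ResponseEntity.ok()
                .cacheControl(cacheControl)
                .contentType(MediaType.APPLICATION_JSON)
                .contentLength(body.length)   // byte length, not character count
                .body(body);
    }
}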
We are quite lost on a solution for this problem because it is the same code acting strangely in different environments. I checked the compression settings in server.xml on both Tomcats, but everything looks fine.
We also disabled the proxy settings in both IE and Chrome.
Any helpful inputs or insights would be appreciated. Thanks in advance.
Follow these steps:
1) Go to your OS's Control Panel > Internet Options > Connections > LAN Settings, or to your browser settings.
2) Deselect "Use Proxy" for your LAN or for your browser.

is there any deep level of host redirection in HttpClient 4.3.6?

I am writing an API in Java to scrape a site which redirects through multiple hosts before delivering the required page.
For example:
** Main Host **
www.abc.com
First redirection from the Main Host URL response:
www.pqr.com/test?a=1&b=2
Second redirection from the first redirection response:
www.xzy.com/result?sum=3
HttpClient works flawlessly up to the first redirection and gets the correct response, but then the program redirects to
www.pqr.com/result?sum=3
which gives me a 404 :(
So, is there any deep level of redirection in HttpClient? Or am I missing something?
Network traffic was monitored using Fiddler. The application is written in Java.
You can set the maximum number of redirects when building the client object via RequestConfig.Builder.setMaxRedirects(int maxRedirects) (see the docs).
But by default this number is 50, which is obviously greater than the number of redirects needed in your case. That means the problem lies somewhere else, and without seeing your code or the exact name of the initial host you are connecting to, it is impossible to find the reason for the problem.
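A minimal sketch of building such a client with HttpClient 4.3.x (the host is the placeholder from the example above); setMaxRedirects and LaxRedirectStrategy are standard 4.3 APIs:

import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.client.LaxRedirectStrategy;

public class RedirectExample {
    public static void main(String[] args) throws Exception {
        RequestConfig config = RequestConfig.custom()
                .setMaxRedirects(50)   // 50 is already the default
                .build();
        try (CloseableHttpClient client = HttpClients.custom()
                .setDefaultRequestConfig(config)
                .setRedirectStrategy(new LaxRedirectStrategy()) // also follow redirects for POST, etc.
                .build();
             CloseableHttpResponse response = client.execute(new HttpGet("http://www.abc.com"))) {
            // The status line of the final (post-redirect) response.
            System.out.println(response.getStatusLine());
        }
    }
}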

AWS cloudFront signed cookie fails intermittently for the same server

We use AWS to store audio/video content for our website.
We use Signed Cookies Using a Canned Policy:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-setting-signed-cookie-canned-policy.html
So we have 3 cookies set for each request to retrieve the data:
CloudFront-Policy;
CloudFront-Signature;
CloudFront-Key-Pair-Id;
And it is used to access a resource URL like http://cloudfront.org_name.com/2016%2F7%2F1%2FStanding+Meditation_updated+91615.mp3
All three cookies are set anew by the (Java-based) server for each request, to the correct pre-set values.
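For illustration, a minimal sketch (hypothetical class; the policy, signature and key pair id values are assumed to come from the existing signing code) of how a Java servlet back end might attach the three cookies named above:

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class CloudFrontCookieWriter {

    public void addSignedCookies(HttpServletResponse response,
                                 String policy, String signature, String keyPairId) {
        response.addCookie(cookie("CloudFront-Policy", policy));
        response.addCookie(cookie("CloudFront-Signature", signature));
        response.addCookie(cookie("CloudFront-Key-Pair-Id", keyPairId));
    }

    private Cookie cookie(String name, String value) {
        Cookie c = new Cookie(name, value);
        // Must cover the cloudfront.org_name.com host used for the resource URLs.
        c.setDomain(".org_name.com");
        c.setPath("/");
        c.setHttpOnly(true);
        return c;
    }
}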
It all works most of the time for most of the content, but for some resources it just fails with a 403 Forbidden error.
If I open two contents (one working, one not) in separate browser tabs, all the cookies and the rest look exactly the same, except for the resource URL.
And yet - one works, while the other does not.
What is even more confusing, sometimes the same resource requested from the same physical client machine - once in Firefox, another time in Chrome - works in one browser but fails in the other.
Also, sometimes clearing the user's browser cookies works, and other times it fails, with no discernible pattern.
It's been driving me insane as I struggle to see what's wrong.
Can anyone provide any insight as to what the reason could be and what remedies could be tried?
Okay, the answer is in my reply to Michael:
I noticed later on that the resource URLs for the working and failing content were different. Close enough not to spot the difference at first sight, but different. Everything else was the same - cookies, headers, other parameters. But I was comparing 2 different contents. The first URL always worked, the second always failed.
Lesson learnt: carefully curl the two resources and analyse the URLs to see what is actually different.
A tip: use Chrome's developer tools to derive curl commands:
Right-click the failing URL -> Copy -> Copy as cURL, then paste it in the command line to test.
BTW, we just re-uploaded the failing resource and updated the referring web page - everything works again.

HttpResponse body is being altered

We are facing a peculiar issue at the moment and we have no clue what is causing this.
We have a web-service hosted on serverA.
When this web-service is invoked from serverB (using the command, curl http://serverA:8008/service/getId), we get the required response. (the web service returns an Id which is an integer).
When the same web-service is invoked from serverC, we get the required response but the digit 2 in the response is getting replaced by _ .
For example, we get 5002 when the web-service is invoked from serverB.
When the same web service is invoked from serverC, we get 500_
We checked the wireshark details from serverA and the data going out from serverA is the same for both the servers.
We have no clue at the moment why this is happening. I would like to add that serverC is in DMZ while serverB is not.
Any input/help in this regard is highly appreciated.
Gathering the facts:
1. The server doesn't change the response on its own.
2. The web service gives the same response for the same input.
The only remaining culprit is your firewall. Can you stop it for testing purposes and see if the response comes back as expected? Or
try checking the firewall settings and creating a hole/exception for the web service.
Thanks everyone for your efforts, the issue is now resolved. It was an incorrect firewall rule that was causing this. I asked our network engineer how a firewall setting can alter an HTTP response body, and the following is the reply I got:
For certain protocols the firewall does deep-level packet inspection, so rather than just checking the port number it actually looks into the payload. This allows it to block malware, malformed packets that might be exploiting a vulnerability, and the like. So that it knows what to inspect, you have to specify in the rule what the traffic is, so you say it's on port 8008 and it's HTTP. The problem was that for some reason this rule had been set to use port 8008, but the traffic type was set to passive mode FTP rather than HTTP. Once I corrected it to HTTP, it started working.
Try putting serverB in the DMZ too and see what happens.
If it acts the same, it's a network issue.
If not, you might have 2 different versions of the code on the servers.
This sounds to me like you have special characters in your URL and they cause the overwriting of the port number, but only if the characters are recognized in the character set. Can you use a hex editor to check the URL for special characters (backspace, specifically)?
I can't solve your problem, but look for any transcoders on the path.
Send a request from serverC to serverA.
1) Wireshark at A, to see if it receives the request correctly. A possible transcoder may convert host-less URLs to host-ful ones (GET /service/getId to GET http://serverA:8008/service/getId), or may drop the Host header, etc. If you see nothing wrong here, proceed to step 2.
2) Wireshark at C, to see if the response is valid. Check whether Content-Type is set correctly. If it is set correctly and the body still gets manipulated, try adding the header Cache-Control: no-transform (a minimal filter sketch follows this list); many transcoders respect that. If this also fails and you can't remove any possible transcoders or viruses you may have, go to step 3.
3) Just go HTTPS; it is immune to such things.
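A minimal sketch of the Cache-Control: no-transform header from step 2, written as a hypothetical servlet filter (not code from the question):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class NoTransformFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Ask intermediaries (proxies, transcoders) not to rewrite the payload.
        ((HttpServletResponse) res).setHeader("Cache-Control", "no-transform");
        chain.doFilter(req, res);
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}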
This is a feature of Apache, designed to hide parts of the HTTP response.
I did not see a fix immediately, and do not have the time to look right now. I'll try to edit one in later.
If you want to try to find it, here is the link to the documentation: http://xianshield.org/guides/apache2.0guide.html
Use [Ctrl] + [F] to find this statement (without quotes): "Configure and build the Apache Server"
