Stackify Prefix sfclient ERR_CONNECTION_REFUSED - java

A two-part question; the parts may or may not be related to each other.
I am running Stackify Prefix v3.0.28 for a Java application on Win10 and it generally seems to work OK: I can see the traces of various actions in our application.
Part 1:
When navigating to any page of our application I get two failed requests to load JS files:
http://127.0.0.1:2/scripts/sfclient.xhr.min.js
http://127.0.0.1:2/scripts/sfclient.perf.prefix.min.js
Both of these requests fail with ERR_CONNECTION_REFUSED. Those script references are not in my JSP page so I assume they are injected by Prefix.
Here is the raw HTML that tries to load the 2 scripts:
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"><script src="http://127.0.0.1:2/scripts/sfclient.xhr.min.js"></script>
<script>var SPerfLib = window.SPerfLib || {}; SPerfLib.RequestId = '54fd58d1-7f7e-d3a4-0001-331676a83598'; if(!SPerfLib.isAttached) { document.addEventListener('DOMContentLoaded', function() { var l = document.createElement('script'); l.src = 'http://127.0.0.1:2/scripts/sfclient.perf.prefix.min.js'; document.body.appendChild(l);}); SPerfLib.isAttached = true;}</script>
I have tried looking for configuration options, but found none. I was not sure whether the scripts should be served from port 2 or not. The Prefix trace output comes from port 2012, and that seems correct.
I tried uninstalling and re-installing Prefix, but with the same results. There does not seem to be any later version of Prefix to try.
How do I get those scripts to load successfully?
Part 2:
On one particular page we have an XHR that retrieves some JSON data. The server returns the data correctly, but it is somehow dropped before it reaches the browser: the response headers show status 200 but a content-length of 0, which then causes some of our JS on the page to fail. If I run the same request without Prefix, everything works as expected - status is still 200, but content-length is 37 and the JSON payload is visible.
This is the response header for the XHR when Prefix is in play (note content-length: 0)
cache-control: no-cache, must-revalidate
content-language: en-US
content-length: 0
content-type: text/html
date: Mon, 31 Aug 2020 14:19:24 GMT
expires: Thu, 01 Jan 1970 00:00:00 GMT
last-modified: Mon, 31 Aug 2020 14:19:24 GMT
pragma: no-cache
server: WildFly/10
status: 200
x-powered-by: Undertow/1
x-powered-by: JSP/2.3
x-stackifyid: V1|8bbdce1c-a507-bbdc-0001-3378bff33740|
If I remove the Stackify agent from the JVM options and disable the profiler, then the response header looks like this:
cache-control: no-cache, must-revalidate
content-language: en-US
content-length: 37
content-type: text/html;charset=UTF-8
date: Mon, 31 Aug 2020 14:25:12 GMT
expires: Thu, 01 Jan 1970 00:00:00 GMT
last-modified: Mon, 31 Aug 2020 14:25:12 GMT
pragma: no-cache
server: WildFly/10
status: 200
x-powered-by: Undertow/1
I'm appreciative of any suggestions!

These are known issues with Prefix. We are working on a complete re-write of Prefix (one reason there has been such a long delay since our last release), and these items are among the things being fixed in the new version. We are getting very close to releasing a beta; if you would like to be on the list to try the Prefix beta, email the Stackify Support Team: support#stackify.com


Failed to pass the challenge for domain

Using certbot fails to generate certificate with this error:
org.shredzone.acme4j.exception.AcmeException: Failed to pass the challenge for domain www.mysampledomain123.com, ... Giving up.
I manually checked the challenge file and got
http://www.mysampledomain123.com/.well-known/acme-challenge/jU--PkDrn5tDZw2RN6NNJHbPD00ovHFkLFvN3mJdeQX
Inside the file:
jU--PkDrn5tDZw2RN6NNJHbPD00ovHFkLFvN3mJdeQX.tuMr-UijwpsJ1KVZkdWTYgodWZ2SxxKdB7_CMAAEfpg
And here's the complete HTTP response header:
Accept-Ranges: bytes
Connection: keep-alive
Content-Encoding: gzip
Content-Type: text/plain;charset=iso-8859-1
Date: Sun, 16 Feb 2020 14:15:22 GMT
Server: nginx/1.14.0 (Ubuntu)
Transfer-Encoding: chunked
Vary: Accept-Charset, Accept-Encoding, Accept-Language, Accept
X-Powered-By: MyServer
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 999
X-RateLimit-Reset: 0
I'm wondering whether the problem is with the HTTP response headers or the content itself.
Any ideas would be appreciated.
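For reference, the body the ACME validator expects in that challenge file is just the token, a dot, and the account key's thumbprint, with nothing else around it. A minimal sketch of checking a served body against that expectation (the class and helper names here are mine, not part of acme4j):

```java
// Sketch: verify that the served challenge file matches the expected
// key authorization. Class and method names are illustrative only.
public class AcmeChallengeCheck {

    // The expected challenge-file body is "<token>.<account-key-thumbprint>".
    public static String keyAuthorization(String token, String thumbprint) {
        return token + "." + thumbprint;
    }

    // Compare the served body against the expectation. A trailing newline
    // appended by the web server is tolerated here; anything else (an HTML
    // error page, charset mangling, extra bytes) fails the comparison,
    // just as it would fail the real challenge.
    public static boolean challengeBodyMatches(String servedBody, String token, String thumbprint) {
        return servedBody != null
                && servedBody.trim().equals(keyAuthorization(token, thumbprint));
    }
}
```

If the body matches byte-for-byte but validation still fails, the gzip Content-Encoding and the text/plain;charset=iso-8859-1 Content-Type in your response may be worth ruling out next, since anything that alters the bytes on the wire can break the comparison.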

Google Drive API responds with 416 HTTP code

For some reason the previously working code stopped working and server started to respond with 416.
Here are the logs of HTTP client during failing interaction:
-------------- REQUEST --------------
GET https://www.googleapis.com/drive/v3/files/0B02Nopv3SQOvOVNKaDIwTEZ3MHd?alt=media
Accept-Encoding: gzip
Authorization: <Not Logged>
Range: bytes=0-33554431
User-Agent: My app Google-API-Java-Client Google-HTTP-Java-Client/1.22.0 (gzip)
-------------- RESPONSE --------------
HTTP/1.1 416 Requested range not satisfiable
Alt-Svc: quic=":443"; ma=2592000; v="39,38,37,35"
Server: UploadServer
Cache-Control: private, max-age=0
Content-Range: bytes */0
X-GUploader-UploadID: AEnB2UqBx9B09Lnr8tG761gdoz3DkhHSNO_OzHh1LkU6B2908v17rnBGQZSNW4ZVTjbRdFtvPWWIqZGdtSrTo6ZWN7YW9nxf6d
Vary: X-Origin
Vary: Origin
Expires: Mon, 11 Sep 2017 15:23:20 GMT
Content-Length: 225
Date: Mon, 11 Sep 2017 15:23:20 GMT
Content-Type: application/json; charset=UTF-8
I was trying to download a file which is around 200000 bytes, so I thought the meaning of "chunk size" had changed somewhere and it could not give 33554431 bytes of a 282177-byte file. I tried changing that to a smaller value, but with no success.
Drive.Files.Get get = drive.files().get(file.getId())
MediaHttpDownloader downloader = get.getMediaHttpDownloader()
downloader.directDownloadEnabled = false
def stream = localFile.newOutputStream()
get.executeMediaAndDownloadTo(stream)
Direct download does not work either, it just downloads "0" bytes.
Does anyone know how to overcome this issue?
A 416 Range Not Satisfiable error means the server is not able to serve the requested ranges. The most likely reason is that the document doesn't contain such ranges, or that the Range header value, though syntactically correct, doesn't make sense.
One resolution suggested on this forum is to add "Accept-Ranges: none" to the response headers.
In my case the culprit turned out to be the Drive web interface when used from Firefox: in certain cases it uploaded "empty" (zero-byte) files.
https://productforums.google.com/forum/#!topic/drive/S03wEknc75g;context-place=forum/drive
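Note the Content-Range: bytes */0 in the 416 response: the server is reporting the object as zero bytes long, so no byte range can be satisfied, which fits the "empty files" explanation above. One defensive option is to read the size the server reports before issuing a ranged download and clamp (or skip) the range accordingly. A minimal sketch, with a hypothetical helper name:

```java
// Sketch: clamp a ranged-download request to the size the server reports.
// The helper name is mine; in Drive v3 the size can be fetched up front
// with a files.get metadata call requesting the "size" field.
public class DriveRangeHelper {

    // Returns a Range header value for the first chunk, clamped to the
    // reported file size, or null when the file is empty. A zero-byte file
    // satisfies no byte range, which is what "Content-Range: bytes */0"
    // in the 416 response is signalling.
    public static String firstChunkRange(long fileSize, long chunkSize) {
        if (fileSize <= 0) {
            return null; // nothing to fetch; skip the ranged download
        }
        long end = Math.min(chunkSize, fileSize) - 1;
        return "bytes=0-" + end;
    }
}
```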

Json doesn't work with specified URL

I am a new programmer trying to build an app that consumes JSON.
With this URL it doesn't work: http://zsuzsafodraszat.hostzi.com/boltok.json
With this one my app works: https://api.myjson.com/bins/3zm8i
Both JSON files are exactly the same.
Can you tell me what I am doing wrong? Is it a bad extension, or is web000 not a good service for JSON? Can you recommend a good free JSON host? Thanks
Those 2 URLs do not have the same content or the same headers. You can see this if you run curl commands from the command line:
$ curl -i "http://zsuzsafodraszat.hostzi.com/boltok.json"
HTTP/1.1 200 OK
Date: Wed, 13 Apr 2016 22:52:50 GMT
Server: Apache
Last-Modified: Wed, 13 Apr 2016 16:48:23 GMT
Accept-Ranges: bytes
Content-Length: 1020
Connection: close
Content-Type: application/json
??{"Aldi":"http://catalog.aldi.com/emag/hu_HU/print/Online_katalogus_04_07/Online_katalogus_04_07.pdf",
"Lidl":"http://www.lidl.hu/statics/lidl-hu/ds_doc/HU_HHZ_kw14_2016.pdf",
"Spar":"http://ajanlatok.spar.hu/view/download/?d=1279",
"Penny":"https://view.publitas.com/16538/136265/pdfs/016f82fb5b00bc97b5a8c35f512d89b01cd3e3ce.pdf",
"Coop":"https://view.publitas.com/2556/133497/pdfs/16603d7e9bf30e8a8a4efec7f01d3fa2caf92fe0.pdf",
"Auchan":"http://www.lidl.hu/statics/lidl-hu/ds_doc/HU_HHZ_kw14_2016.pdf"}
$ curl -i "https://api.myjson.com/bins/3zm8i"
HTTP/1.1 200 OK
Server: nginx/1.5.8
Date: Wed, 13 Apr 2016 22:52:56 GMT
Content-Type: application/json
Content-Length: 500
Connection: keep-alive
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
{"Aldi":"http://catalog.aldi.com/emag/hu_HU/print/Online_katalogus_04_07/Online_katalogus_04_07.pdf","Lidl":"http://www.lidl.hu/statics/lidl-hu/ds_doc/HU_HHZ_kw14_2016.pdf","Spar":"http://ajanlatok.spar.hu/view/download/?id=1279","Penny":"https://view.publitas.com/16538/136265/pdfs/016f82fb5b00bc97b5a8c35f512d89b01cd3e3ce.pdf","Coop":"https://view.publitas.com/2556/133497/pdfs/16603d7e9bf30e8a8a4efec7f01d3fa2caf92fe0.pdf","Auchan":"http://www.lidl.hu/statics/lidl-hu/ds_doc/HU_HHZ_kw14_2016.pdf"}
As you can see, one of them has a couple of junk bytes at the beginning, which my terminal displays as question marks. The HTTP headers are different as well, and the Content-Lengths are wildly different too. Did you use something other than a plain text editor to create the JSON payload in the failing example?
Try removing the junk characters and adding these http headers:
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
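Leading junk bytes like those are very often a byte order mark written by the editor that created the file. Assuming a UTF-8 BOM (EF BB BF), a minimal sketch of stripping it from a downloaded payload before JSON parsing (the helper class is mine):

```java
import java.util.Arrays;

// Sketch: strip a UTF-8 byte order mark from a downloaded payload before
// handing it to a JSON parser. UTF-16 BOMs (FE FF / FF FE) would need the
// same treatment with different magic bytes.
public class BomStripper {
    private static final byte[] UTF8_BOM = { (byte) 0xEF, (byte) 0xBB, (byte) 0xBF };

    // Returns the input without a leading UTF-8 BOM, if one is present.
    public static byte[] stripUtf8Bom(byte[] data) {
        if (data.length >= UTF8_BOM.length
                && data[0] == UTF8_BOM[0]
                && data[1] == UTF8_BOM[1]
                && data[2] == UTF8_BOM[2]) {
            return Arrays.copyOfRange(data, UTF8_BOM.length, data.length);
        }
        return data;
    }
}
```

The cleaner fix is still to re-save the file without a BOM at the source, since other clients of that URL will hit the same problem.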

IBM Social Business Toolkit - no support for IBM Connections 5.5?

I'm wondering whether IBM Connections 5.5 is supported by IBM SBT.
The version I use is "1.1.11.20151208-1200".
My test procedure is the following:
public static void testCreateCommunity() throws ClientServicesException {
    String g = Variables.aCommunityService.createCommunity("TEST", "TESTDESCRIPTION", "public");
    if (g.isEmpty()) {
        System.out.println("[Failed] Creating Community has failed.");
    } else {
        System.out.println("Creating Community successfully done.");
    }
}
This code works perfectly in Connections 5.0 CR3, but does not work in IBM Connections 5.5.
I always get:
com.ibm.sbt.services.client.ClientServicesException: Request to url https://blabla.com/communities/service/atom/communities/my returned an error response 400:Bad Request HTTP/1.1 400 Bad Request [Date: Tue, 26 Jan 2016 10:20:02 GMT, X-Frame-Options: SAMEORIGIN, Strict-Transport-Security: max-age=max-age=31536000;includeSubDomains, X-XSS-Protection: 1;mode=block, X-Permitted-Cross-Domain-Policies: master-only, X-Powered-By: Servlet/3.0, Expires: Thu, 1 Jan 1970 00:00:00 GMT, Cache-Control: no-store, no-cache, must-revalidate, X-LConn-Auth: false, X-UA-Compatible: IE=edge, Last-Modified: Tue, 26 Jan 2016 10:20:02 GMT, Set-Cookie: LtpaToken2=kx9gO87/cDI8zHT1v8iwsFCP6WAbAH7FusrA8VU7jOC78KqkTEghj1XsNPRLMDT4tmIEI+diSer+++TZw1gSiC79jveQoTerr53Ggdf/zVwOVACyzA9kcpzPsaWn2+u83SkHC4s3ZCAoDGe1eq6Mb9sF2lnrn2GDrbsSzzvCPdo+pSzx4AG+0OEOa1rPX2gVF5mCfYXeqtNxUeFMc/Eibzt0zszHX5RDXZz5pcU+D1LW98B8rnar3YJjEgp8QdLT1IvhRYIo1zQQs920c9kU0tgw+CccC97fD/SRucqsHWqh2aHhs2hlTaEzMKo21o/5lD+Qwkn3QwWYFtKZntmQGLlAlJvPBQNgR2+38E4Y8uEyFy8jaBbZE0tE6MdK9zSY9Pz6zGPZaMHSV6msS+veXncynS5mcFg7jpLdsHqbQRw0Hb9w3Pe7XChaQ+yrbwTiF+mooWrCoSOYCYkA6fEVVKUbCDF0imKFWVZXOdCaszl/Ank9DFbiBSXfNGWoiXk1pJHSnoJs8C4+jBqjhbcYebpbLLTmjtS2DytMW15r97bpDekGMqFywms539c4c9QKMmjPli6L7fgYAGVsopqlMmp8AwhhuH9tXaqc6mOtbspMAKGZTn8GmvAFIVTxqfumyYLCUQvsCOgRIhdC0WlXxx/Zq+usQcvHUXwQarFhycU=; Path=/; Domain=.blabla.com, Set-Cookie: JSESSIONID=0000H65mMCw0ijcsS5e19kYaAyB:1a9lvgg03; Path=/; HttpOnly, Vary: Accept-Encoding,User-Agent, Connection: close, Transfer-Encoding: chunked, Content-Type: text/xml;charset=UTF-8, Content-Language: de-DE]
Does anybody know whether IBM SBT generally supports Connections 5.5?
We have similar issues in our Communities (and Activities) use cases. Most of the functionality works fine, but you have stumbled upon an issue that we also encountered.
For now we are using the REST APIs to work around this.
I know we could contribute to the open source project, but that will take some time.
So I would say it generally supports 5.5, but in this specific case ...
After running into the same issue, I found a way to add the header before the createCommunity() service call. In your case it would look like this:
Variables.aCommunityService.addDefaultHeader("Content-Type", "application/atom+xml");
// then create the community
https://github.com/OpenNTF/SocialSDK/issues/1772#issuecomment-239517941
HTH

How to get Rackspace Cloud object header

I am trying to update the header data of some Rackspace objects I have uploaded, for example a header attribute like X-Object-Meta-name.
But to do that, currently I need to download the whole object and parse the header from the downloaded object, then do some checking and update it if necessary, and then upload the object again. This makes the updating process very slow.
Is there a way to download only the header part of an object and update it alone? Thanks in advance!
https://github.com/jclouds/jclouds/blob/master/apis/openstack-swift/src/main/java/org/jclouds/openstack/swift/v1/features/ObjectApi.java#L207
If you give it a map with "name"->"the updated header value", it should update the header and automatically add the x-object-meta- prefix.
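The key-to-header expansion described above is simple to sketch on its own. This is my own illustration of the Swift object-metadata convention, not the jclouds source:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: expand bare metadata keys ("name" -> value) into full Swift
// object-metadata headers ("X-Object-Meta-name" -> value).
public class SwiftMetadataHeaders {
    private static final String PREFIX = "X-Object-Meta-";

    public static Map<String, String> toHeaders(Map<String, String> metadata) {
        Map<String, String> headers = new LinkedHashMap<>();
        for (Map.Entry<String, String> entry : metadata.entrySet()) {
            headers.put(PREFIX + entry.getKey(), entry.getValue());
        }
        return headers;
    }
}
```

In Swift, a POST carrying these headers updates the object's metadata in one round trip, without downloading or re-uploading the object body.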
Is there a way to only download the header part of an object and updating it alone?
I am not a Java developer, but the Cloud Files API is a RESTful one, so I will provide examples using curl. If you are using a library, then you may want to edit your question to say which library, as many of them abstract these operations and a better answer can probably be provided in the context of that library.
To download the headers without the object content, perform a HTTP HEAD request.
$ curl -I -XHEAD -H'X-Auth-Token:******' \
> https://storage101.dfw1.clouddrive.com/v1/MossoCloudFS_******/container/object
HTTP/1.1 200 OK
Content-Length: 400
Accept-Ranges: bytes
Last-Modified: Tue, 21 Apr 2015 12:06:23 GMT
Etag: 81dc9bdb52d04dc20036dbd8313ed055
X-Timestamp: 1429617982.70468
X-Object-Meta-Foo: Bar
Content-Type: text/html
X-Trans-Id: txd337e4634c98475baf1a4-0055363d42dfw1
Date: Tue, 21 Apr 2015 12:06:26 GMT
To update just the headers on the object, you can then perform a HTTP POST request.
$ curl -i -XPOST -H'X-Auth-Token:******' \
> -H'X-Object-Meta-Foo: Bar' \
> -H'X-Object-Meta-Foo2: Bar2' \
> https://storage101.dfw1.clouddrive.com/v1/MossoCloudFS_******/container/object
HTTP/1.1 202 Accepted
Content-Length: 76
Content-Type: text/html; charset=UTF-8
X-Trans-Id: txc262dfe86727440cbfcb1-0055363d5cdfw1
Date: Tue, 21 Apr 2015 12:06:53 GMT
<html><h1>Accepted</h1><p>The request is accepted for processing.</p></html>
Performing another HEAD request will show that both headers are now present.
$ curl -I -XHEAD -H'X-Auth-Token:******' \
> https://storage101.dfw1.clouddrive.com/v1/MossoCloudFS_******/container/object
HTTP/1.1 200 OK
Content-Length: 400
Accept-Ranges: bytes
Last-Modified: Tue, 21 Apr 2015 12:06:53 GMT
Etag: 81dc9bdb52d04dc20036dbd8313ed055
X-Timestamp: 1429618012.98354
X-Object-Meta-Foo: Bar
X-Object-Meta-Foo2: Bar2
Content-Type: text/html
X-Trans-Id: txdd9365b54e8f4d8c8451d-0055363d6adfw1
Date: Tue, 21 Apr 2015 12:07:06 GMT
