I have a Java/Wicket page that generates a JNLP file that launches my company's software. The page optionally takes some URL parameters and embeds them as arguments in the JNLP. When the user launches this JNLP file, the client application performs some function based on those parameters. If the client software is already running on the machine, hitting the JNLP page should instead feed these parameters to the running client via a remote call rather than launching a new instance.
This part is where I'm having issues. On IE, Firefox and Chrome I can open a new client, but hitting the same URL again just returns a JNLP file instead of reaching the running client. I found that clearing the browser cache fixes this on all browsers. Also, I cannot seem to hit breakpoints in the JNLP class, which reinforces my hunch that this is an issue with the request rather than something strange with Wicket.
I put the following code in my page class, which extends org.apache.wicket.markup.html.WebPage:
@Override
protected void setHeaders(WebResponse response) {
    getPageMap().remove(this);
    HttpServletResponse httpServletResponse = response.getHttpServletResponse();
    if (httpServletResponse != null) {
        httpServletResponse.setDateHeader("Expires", 0);
        httpServletResponse.addHeader("Cache-Control",
                "no-cache,no-store,private,must-revalidate,max-stale=0,post-check=0,pre-check=0");
        httpServletResponse.addHeader("Keep-Alive", "timeout=3, max=993");
    }
}
This doesn't seem to work, as Firefox 3.6 still caches the result. IE 7 works, but only after I try the link a few times. I don't know much about web development or Wicket, and this is new to me, so it's possible I'm missing something simple.
TL;DR: How do I get a Wicket page to not cache on the client browser?
A hack used in some of the Wicket internals (see for example the source for org.apache.wicket.markup.html.image.NonCachingImage) is to add random noise to the URL.
Basically, if you're generating the URLs that the browser calls, you can add a parameter ignored by the web application that varies randomly and fools the browser into ignoring its cache.
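A minimal sketch of the idea (the helper method and parameter name are made up for illustration):

// Build the JNLP link with an extra parameter the server ignores; because its
// value changes on every call, the browser never serves the response from cache.
private String nonCachingUrl(String baseUrl) {   // baseUrl is your existing JNLP page URL
    return baseUrl + (baseUrl.contains("?") ? "&" : "?")
            + "antiCache=" + System.currentTimeMillis();
}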
Please check the following page:
https://web.archive.org/web/20120104201334/http://palisade.plynt.com:80/issues/2008Jul/cache-control-attributes/
Firefox should honor the "Cache-Control" header.
I don't know Wicket very well, but have you tried using WebResponse.setLastModifiedTime(Time time)? Firefox sends an If-Modified-Since header, to which your server replies with either 304 Not Modified or the normal response.
It would seem natural to me that your server would check the lastModifiedTime on WebResponse to decide.
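If you want to try that, a rough sketch in the page class might look like this (assuming the Wicket 1.4-era WebResponse and org.apache.wicket.util.time.Time APIs):

@Override
protected void setHeaders(WebResponse response) {
    // Advertise a fresh modification time on every render so a conditional
    // If-Modified-Since request from the browser never gets a 304 back.
    response.setLastModifiedTime(Time.now());
    super.setHeaders(response);
}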
If that doesn't help I would suggest you get Firebug for Firefox and take a look at the requests and responses.
response.setHeader( "Expires", "0" );
response.setHeader( "Cache-Control", "no-store, no-cache, must-revalidate, max-age=0, private" );
response.setHeader( "Pragma", "no-cache" );
This works with IE, Firefox, and so on; the only browser with which it definitely does not work is Konqueror.
Wicket 6.11.0:
Application.get().getResourceSettings().setDefaultCacheDuration(Duration.NONE);
Have you ever tried to load pages using window.location.replace?
I am trying to upload a sound to myinstants.com using Java and OkHttp.
I used the Chrome dev tools to look at what requests get made and try to recreate them using OkHttp but I'm failing at the login part.
Chrome dev tools tells me that this is the form post, with a content-type of application/x-www-form-urlencoded.
And I try to replicate the post using the following code:
RequestBody loginBody = new FormBody.Builder()
        // this token comes from inside the <input> tag in the HTML returned by a normal
        // GET request to https://myinstants.com/accounts/login and is different every time you load the page
        .add("csrfmiddlewaretoken", token)
        .add("login", username)
        .add("password", password)
        .add("remember", "on")
        .add("next", "/new/")
        .build();
Request login = new Request.Builder()
        .url("https://www.myinstants.com/accounts/login/?next=/new/")
        // cookie generated from the "set-cookie" response headers of the GET request
        // to https://myinstants.com/accounts/login
        .addHeader("cookie", CookieHandler.getCookie())
        .addHeader("content-type", "application/x-www-form-urlencoded")
        .post(loginBody)
        .build();
Response response = new OkHttpClient().newCall(login).execute();
According to the chrome dev tools, the response of the above post request should have a couple of set-cookie response headers, but they are not present for me.
I don't think the issue is with the cookie I'm using, because when comparing to what is found in the Chrome dev tools, the cookie matches it exactly (except for some things that are new every time you visit the site), so I think the issue is with the form post. Any ideas what I am doing wrong?
Some servers block the requests if they see you're making the requests from outside a browser.
What often works (but not always) is to try tricking the server into thinking you're using a browser. You can do this by setting the "User-Agent" header.
To do so, open the dev tools of your browser (F12), access the "Network" tab and make a request to any site. Then, look into the "Request Headers" section for the "User-Agent" value. Just copy it and send it with your request.
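With the OkHttp request from the question, that might look like this (the User-Agent string below is only an example; copy the exact value from your own browser's dev tools):

Request login = new Request.Builder()
        .url("https://www.myinstants.com/accounts/login/?next=/new/")
        // Example browser User-Agent; replace with the one shown in your dev tools.
        .addHeader("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                + "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0 Safari/537.36")
        .addHeader("cookie", CookieHandler.getCookie())
        .addHeader("content-type", "application/x-www-form-urlencoded")
        .post(loginBody)
        .build();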
If all this fails, it's possible that the site has bot protection based on JavaScript. On this kind of site, the login page contains a script that runs right before the login process and generates a random token that you need to send along with the credentials in order to log in successfully. Since you're accessing the site without a browser, you can't run JavaScript and thus you can't generate this token.
If that's the case, the best thing you can do is to use a real browser controlled programmatically. For Java you can use Selenium, but I personally prefer Puppeteer from NodeJS. In essence they're the same thing: an API to remote-control a modified version of the Chromium/Chrome browser.
But with Puppeteer you have a little more flexibility than with Java, because you don't need to convert between Java and JavaScript objects and vice versa.
I don't know what may be wrong and I don't have time to test your code now. But as a suggestion, you could try to make the upload using another library and see if it works.
I recommend Apache Fluent API:
https://mvnrepository.com/artifact/org.apache.httpcomponents/fluent-hc
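A rough sketch of the same login post with the fluent API (URL and form fields taken from the question; untested):

import org.apache.http.client.fluent.Form;
import org.apache.http.client.fluent.Request;

// token, username and password are the same values used in the OkHttp version above.
String html = Request.Post("https://www.myinstants.com/accounts/login/?next=/new/")
        .bodyForm(Form.form()
                .add("csrfmiddlewaretoken", token)
                .add("login", username)
                .add("password", password)
                .add("remember", "on")
                .add("next", "/new/")
                .build())
        .execute()
        .returnContent()
        .asString();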
I want to download the source of a webpage to a file (*.htm), i.e. the entire content with all HTML markup, from this URL:
http://isap.sejm.gov.pl/DetailsServlet?id=WDU20061831353
which works perfectly fine with the FileUtils.copyURLToFile method.
However, the said URL has also some links, for instance one which I'm very interested in:
http://isap.sejm.gov.pl/RelatedServlet?id=WDU20061831353&type=9&isNew=true
This link works perfectly fine if I open it with a regular browser, but when I try to download it in Java by means of FileUtils I get only a no-content page with the single message "trwa ladowanie danych" (which means "loading data..."), and then nothing happens; the target page is not loaded.
Could anyone help me with this? From the URL I can see that the page uses Servlets -- is there a special way to download pages created with servlets?
This isn't a servlet issue - that just happens to be the technology used to implement the server, but generally clients don't need to care about that. I strongly suspect it's just that the server is responding with different data depending on the request headers (e.g. User-Agent). I see a very different response when I fetch it with curl compared to when I load it in Chrome, for example.
I suggest you experiment with curl, making a request which looks as close as possible to a request from a browser, and then fiddling until you can find out exactly which headers are involved. You might want to use Wireshark or Fiddler to make it easy to see the exact requests/responses involved.
Of course, even if you can fetch the original HTML correctly, there's still all the JavaScript - it would be entirely feasible for the HTML to contain none of the data, but for it to include JavaScript which does the actual data fetching. I don't believe that's the case for this particular page, but you may well find it happens for other pages you try to fetch.
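If you'd rather experiment from Java than from curl, a minimal sketch that sends a browser-like User-Agent (the header value here is just an example) could look like this:

import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

URLConnection conn = new URL(
        "http://isap.sejm.gov.pl/RelatedServlet?id=WDU20061831353&type=9&isNew=true")
        .openConnection();
// Pretend to be a browser; keep adding/adjusting headers until the response
// matches what you see in Chrome.
conn.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/78.0");
try (InputStream in = conn.getInputStream()) {
    Files.copy(in, Paths.get("related.htm"), StandardCopyOption.REPLACE_EXISTING);
}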
Try using Selenium WebDriver to load the main page:
HtmlUnitDriver driver = new HtmlUnitDriver(true);
driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
driver.get(baseUrl);
and then navigate to the link
driver.findElement(By.name("name of link")).click();
UPDATE: I checked the following: if I turn off the cookies in Firefox and then try to load my page:
http://isap.sejm.gov.pl/RelatedServlet?id=WDU20061831353&type=9&isNew=true
then I get the incorrect result, just like in my Java app (i.e. the page with the "loading data" message instead of the proper content).
Now, how can I manage cookies in Java to download this page properly?
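One thing I could try (assuming FileUtils.copyURLToFile goes through HttpURLConnection, which honors the default CookieHandler) is to install an in-memory cookie manager, hit the main page first so the session cookie is stored, and then fetch the related page:

import java.io.File;
import java.net.CookieHandler;
import java.net.CookieManager;
import java.net.CookiePolicy;
import java.net.URL;
import org.apache.commons.io.FileUtils;

// Installed once, before any connection is opened; HttpURLConnection will then
// store and resend cookies automatically.
CookieHandler.setDefault(new CookieManager(null, CookiePolicy.ACCEPT_ALL));

// First request establishes the session cookie...
FileUtils.copyURLToFile(
        new URL("http://isap.sejm.gov.pl/DetailsServlet?id=WDU20061831353"),
        new File("details.htm"));
// ...so this one should be served the real content instead of the "loading data" stub.
FileUtils.copyURLToFile(
        new URL("http://isap.sejm.gov.pl/RelatedServlet?id=WDU20061831353&type=9&isNew=true"),
        new File("related.htm"));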
Is there a way to delete older version of an applet from browser's cache? The things I have already tried to prevent the cache problem in first place are:
1- To set "no-cache" in HTTP response header, I placed following script on the top of my jsp:
<%
    if (request.getProtocol().compareTo("HTTP/1.0") == 0) {
        response.setHeader("Pragma", "no-cache");
    } else if (request.getProtocol().compareTo("HTTP/1.1") == 0) {
        response.setHeader("Cache-Control", "no-cache");
    }
    response.setDateHeader("Expires", 0);
%>
2- While deploying applet 'cache_option' is set to 'no'
But to no avail. I was now wondering if there is a way to programmatically delete this applet jar file from the cache?
[UPDATE]
Providing a unique URL for the applet each time doesn't look like a good idea in my case, because the applet reloads (refreshes) itself after a time (say, at midnight, using a Timer) by hitting a URL:
applet.getAppletContext().showDocument(url);
It would be difficult to communicate the new URL to the applet.
The answer you got on your other question also applies here: provide a unique URL for your applet each time. It's not a joke; lots of people use this technique, and it would solve your problem.
These links might be of help:
Applet Caching and Installation in Java Plug-in
how-to-clear-cache-to-reload-applet
How to disable http caching in applet
How to disable browser applet cache..?
Java applet cached forever, not downloading new version?
There are a couple of solutions to your problem. The one discussed most in the links is to use a different name for every applet jar file. Append a version number or anything else, so as to ensure that the browser loads the applet from the server every time it runs, rather than from the cache (a rough sketch follows below). You can get more help from the links pasted above. Thanks.
P.S. The links are pasted in order of relevance.
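As a sketch of the "different name per release" idea (all class, file and version names here are made up), the page that embeds the applet, whether a JSP or a servlet like the one below, points at a jar whose name changes with every deploy, so neither the browser nor the Java plug-in can reuse a stale copy:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AppletPageServlet extends HttpServlet {
    // Bump this (or derive it from your build number) on every deploy, and ship
    // the jar under the matching versioned file name.
    private static final String APPLET_VERSION = "2.4.1";

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<applet code='com.example.MyApplet'"
                + " archive='myapplet-" + APPLET_VERSION + ".jar'"
                + " width='400' height='300'></applet>");
    }
}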
We are using IBM Websphere Application Server 6.1 and browser is Internet Explorer 8.
We have a java servlet which dynamically generates PDF and MS Word documents. On the first attempt some users are saying they are getting
"Internet Explorer was unable to open this site. The requested site is either unavailable or cannot be found. Please try again later."
As per Microsoft Support article id 323308
When you try to open a Microsoft Office document or a PDF document over HTTPS (SSL), IE fails with the above error message.
This issue occurs if the server sends a "Cache-control:no-store" header or sends a "Cache-control:no-cache" header.
For IE8, Microsoft suggests adding a registry entry on the user's Windows XP desktop. This is not very practical for us, as we don't control our users' desktops. This does not happen with IE9, Firefox, Chrome, etc.
As per PK20531, WAS 6.1 adds Cache-Control: no-cache="set-cookie, set-cookie2" and Expires HTTP headers when a cookie is being set in the response.
Note - We are not setting the cookie in the servlet. The cookie is set by single sign-on software.
On the first attempt, the single sign-on (LTPA) cookie is being set, and WAS adds HTTP headers which the IE browser does not like.
Does Java servlet api provide a way to remove http headers? Is there a technique to use Filter api to remove http headers?
If you remove the Cache-Control header from the response, then you're not sending any instructions about caching and therefore the caching behavior would be unpredictable.
It would be better to set the header to something else, rather than remove it. Presumably you want to enable caching on the browser for your pages. So you could add these lines to your servlet to enable caching in the browser:
response.setHeader("Pragma", "cache");
response.setHeader("Cache-Control", "private, must-revalidate");
You could do this in a Filter too, because filters have access to the HTTP response object. But if you've written your own servlet then it's probably more efficient — and clearer — to do it in the servlet.
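If you do want the Filter variant, a sketch might look like this (mapped to the servlet's URL pattern in web.xml; depending on when WAS adds its own headers you may need to set these after chain.doFilter() or via a response wrapper instead):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class CacheHeaderFilter implements Filter {
    public void init(FilterConfig config) { }
    public void destroy() { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        // Overwrite rather than remove: give IE a cache policy it accepts over SSL.
        response.setHeader("Pragma", "cache");
        response.setHeader("Cache-Control", "private, must-revalidate");
        chain.doFilter(req, res);
    }
}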
It's all controllable by you. If you don't put it there, there will be nothing to remove.
I have a small AJAX problem, related to cross-domain access as I see it.
On my local machine I created an example HTML page with some AJAX: in the registration text field the user types a username, and on every keystroke AJAX sends it to the local Tomcat, where a servlet checks whether that username is already used and sends a 'taken' response back.
No problem on localhost at all: as soon as I type a used username, the servlet sends the 'taken' response and the browser displays it.
But when I put the test HTML page with the AJAX on a remote machine (some free hosting on a remote network) that sends the validation request to my localhost Tomcat, the connection is made, in the Tomcat console I see the request coming in, and in Firebug in Mozilla this is the Console output:
GET http://89.216.182.25:8080/Dinamicki1/UsernameServlet?username=zik 200 OK
...but in the Response tab there is no servlet response 'taken', and the message in Firebug is in red.
So the servers communicate well, there are no firewall problems, and the response is 200 OK, but the response body is empty.
Any ideas what these red messages in Firebug are?
Thank you very much in advance. And if anyone can recommend a serious AJAX tutorial for Java, it would be highly appreciated :)
You need to use a domain-relative URL in your Ajax request:
/Dinamicki1/UsernameServlet?username=zik
Or a context-relative URL (assuming that the page is served from /Dinamicki1):
UsernameServlet?username=zik
With regard to "Ajax tutorial for Java", start here: How to use Servlets and Ajax?
You cannot use AJAX to read replies from other domains.
Your HTML must be on the same server (and same domain, port, and protocol) as the AJAX servlet.
The 200 status reported in Firebug does not tell you whether the cross-domain AJAX call actually succeeded.
You might want to try using a proxy method to perform the call.
E.g. JavaScript: Use a Web Proxy for Cross-Domain XMLHttpRequest Calls
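A sketch of such a proxy, as a servlet deployed on the same origin as your HTML page (the target URL is the one from your Firebug output):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// The page calls this servlet on its own origin; the cross-domain request is
// made server-side, where the browser's same-origin policy does not apply.
public class UsernameProxyServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String username = req.getParameter("username");
        URL target = new URL("http://89.216.182.25:8080/Dinamicki1/UsernameServlet?username="
                + URLEncoder.encode(username, "UTF-8"));
        HttpURLConnection conn = (HttpURLConnection) target.openConnection();
        resp.setContentType("text/plain");
        InputStream in = conn.getInputStream();
        OutputStream out = resp.getOutputStream();
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        in.close();
    }
}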
I figured out how to solve it from this site:
"To allow directory browsing via Apache Tomcat change the parameter "listings" in the file conf/web.xml from false to true."
Call your page not as C:/Documents and Settings/.../page.html but as localhost:8080/your_servlet_name (page is better named index.html).
This way, you will be able to make AJAX requests to localhost:8080/your_servlet_name/something_else.
A solution that worked for me was adding "www" to the URL! I was using URL Rewrite, so for every URL I had (image, JS, GET, load, POST) I needed to use the full URL, and it was missing "www"!
For me, it was a Web API (C# .NET) request and CORS was not enabled.
Adding the CORS attribute on the controller solved the problem:
[EnableCors(origins: "*", headers: "*", methods: "*")]