I have a Java web app that deploys as a WAR to Tomcat 7.0.41 (myapp.war). I noticed that when I deploy the WAR to a Tomcat that lives in one part of our network, the web pages display perfectly fine. However, and this only happens in IE 11, if I take the exact same WAR and deploy it to the exact same version/Chef-configured-instance of a Tomcat server that lives in another part of our network, the page stylings look way different and completely wrong. Again, this is specific to IE11 and the location in the network that the app is served from. If I go to the app in IE 11 from a "good" location on the network, the frontend renders perfectly fine. Or if I view the app from a "bad" location on the network, but in a non-IE browser, again all is well.
I have a feeling that we might have some IT proxy (nginx, etc.) that is preventing Tomcat from serving certain CSS/JS files, and so the end result is a partially-complete frontend that looks all wonky in the browser. And somehow, this only crops up in IE 11.
I have (sort of) confirmed this by viewing the source of all my HTML, JS and CSS files and copying them to files in a local folder. I then open up one of the HTML files (locally) in a browser and the site displays perfectly.
The problem here is that my JS files use a bunch of open source JS libraries. And those libraries have dependencies on other libraries. So on and so forth, and the dependency graph is pretty huge. It's tough for me to tell which files are not being downloaded properly/completely.
Here's the kicker: if I add html5shiv to my app, the problem goes away entirely, no matter which browser (IE or not) or which location on the network I choose. However, adding html5shiv breaks other things in my app and, for reasons outside the scope of this question, it can't be used.
Anyone have any idea how I could troubleshoot/fix this? Why would this only affect IE 11 and not other browsers? And why does html5shiv solve it?
You need to start using Wireshark.
What it does is capture all network traffic and allow you to view it exactly as it was sent/received by your network card.
What I would do is capture the complete traffic that occurs between your computer and the server in the location where it is working, when you visit the webpage that has the problem. Then repeat that for the server that is not working.
You will then have the complete traffic and can compare the two captures side by side. Even if it doesn't tell you the cause of the problem, Wireshark will show you where the difference occurs in the packets sent by the two different servers.
You could also do it the other way round by running tcpdump on the server (with a command like tcpdump -i eth0 -w file.cap -s 0 to capture the complete packets rather than just the first X bytes) and then opening the capture in Wireshark.
"Does Wireshark offer such file-level abstractions or is it all nitty-gritty, byte-level output I need to read?"
Kind of both. Once you have the capture in front of you, you can spot where individual requests start by looking for GET entries in the packets.
Once you've identified where a file starts, you can right-click on that packet and choose Follow TCP Stream, and Wireshark will give you a summarised view of that TCP stream.
If you need the detailed differences between the files, they will be there, but honestly it's probably going to be something obvious, like a file being completely truncated or mangled, rather than just a byte or two being wrong in one of the files.
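If raw packets feel too heavy, the same "compare the two servers" idea can also be applied at the file level with a small Java sketch like the one below. The hostnames and resource paths are placeholders, not from the question; point it at the CSS/JS files your pages reference and look for resources whose sizes differ between the two servers.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Rough sketch: fetch the same resource from the "good" and the "bad" server and
// report the declared Content-Length vs. the bytes actually received, to spot
// truncated or mangled CSS/JS files. Hostnames and paths below are placeholders.
public class CompareServers {

    static String fetch(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        long declared = conn.getContentLengthLong();
        long received = 0;
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                received += n;
            }
        }
        return "declared=" + declared + " received=" + received;
    }

    public static void main(String[] args) throws Exception {
        String[] resources = { "/myapp/css/app.css", "/myapp/js/vendor.js" }; // placeholders
        for (String r : resources) {
            System.out.println(r);
            System.out.println("  good server: " + fetch("http://good-server:8080" + r));
            System.out.println("  bad server:  " + fetch("http://bad-server:8080" + r));
        }
    }
}

Since the problem also depends on where the client sits, it's worth running this from both a "good" and a "bad" network location.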
Related
I have a Java web application that runs on a Tomcat server in a production environment. The application, which is built with the Stripes framework, works fine, but almost every day exceptions are logged in the catalina.log files. Here is, for example, one of the log messages:
net.sourceforge.stripes.exception.ActionBeanNotFoundException: Could not locate an ActionBean that is bound to the URL [/admin/start/Welcome.action].
“/start/Welcome.action” is a valid URL but the URL “/admin/start/Welcome.action” is not present anywhere in my project files. I have no idea where it originates from.
Here are other invalid URLs that are also listed in the log files:
/wordpress/start/Welcome.action
/downloader/start/Welcome.action
/manager/start/Welcome.action
/admin/content/sitetree/start/Welcome.action
These URLs do not exist and have never existed in my application. Apart from them, there is another group of ActionBeanNotFoundException messages about URLs that once existed in the application but no longer do.
Do you have an explanation for this? I asked my hosting provider about it, but they were unable to answer me. Any ideas would be appreciated!
There are two possibilities. The first is that some component in your web application generates such URLs, just in obscure places that people rarely click on. If your application is available on the internet, links in those obscure places give the Google bot (or any other bot, for that matter) something to follow, resulting in requests for the nonexistent URLs. You don't even need bots for this; some browsers prefetch URLs before you click.
The other possibility is internet background chatter: various computers worldwide randomly probe for vulnerable systems by requesting well-known pages of old software in order to find security holes. The URLs you mention (ending in .action) don't look like those, though.
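If the log noise from such probes bothers you, one option (not something the answer above proposes, just a hedged sketch using the plain Servlet API) is a filter, mapped in web.xml in front of the Stripes dispatcher, that rejects path prefixes you know can never be valid in your application:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hedged sketch: answer obvious probe URLs with a 404 before they reach the
// Stripes dispatcher, so they never raise ActionBeanNotFoundException.
// The prefixes below are taken from the log entries quoted in the question;
// adjust them to paths that are never valid in your application.
public class ProbeFilter implements Filter {

    private static final String[] BLOCKED_PREFIXES = {
        "/wordpress/", "/downloader/", "/manager/", "/admin/"
    };

    public void init(FilterConfig config) {
    }

    public void destroy() {
    }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        // Path inside the web application, without the context path.
        String path = request.getRequestURI().substring(request.getContextPath().length());
        for (String prefix : BLOCKED_PREFIXES) {
            if (path.startsWith(prefix)) {
                ((HttpServletResponse) res).sendError(HttpServletResponse.SC_NOT_FOUND);
                return;
            }
        }
        chain.doFilter(req, res);
    }
}

Map this filter before the Stripes filter in web.xml so the probe requests are cut off early.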
We are developing an application which uploads CSV files.
In order to be sure about our code, the upload has been tested with two different frameworks: ZK (which manages the upload itself) and classic JSP/Spring REST.
On our local server (Windows, Tomcat 5.5) everything is OK.
On the client's system (Unix Solaris 10, Tomcat 5.5) we have a problem: the first time, the file is uploaded correctly; the second time, if we change something in the data (or even delete the file), we get the same file as the first upload.
It seems that a cache or something else is disturbing the upload.
Any idea?
Thank you.
[Edit] Additional information
For information, we are on Citrix MetaFrame Program Neighborhood (an old version, v9.0).
For people on site at the customer (with or without Citrix), CSV files are uploaded correctly every time.
For us, connecting from outside, it is not working.
File A is uploaded, then we modify it (A') and upload it again... and the result is: file A is deleted (as expected, by our code), then a new file appears which is identical to A (not A' as expected).
If we stop Tomcat, or even make other HTTP requests first, the upload works correctly.
We tested the upload with two different frameworks: ZK (which manages the upload itself) and Spring MVC (REST). Both work on our servers with the same Tomcat (5.5).
Another strange thing: we also have access to another server (via VPN, not Citrix) where we deployed the application on a Tomcat 7 (already installed by the client). Everything is OK there.
Is it possible that this is a hardware problem, for example with a router?
First of all, it is very difficult to understand your question. From what I understood, you are not able to upload a file the second time because the details of the first file are still present in memory/variables. Post your code so that it is easier to help.
Try these steps:
1. Start the application, upload a file, say A.csv, for the first time, then stop the application.
2. Start the application again, upload another file, B.csv, and see whether it is uploaded correctly.
If steps 1 and 2 work correctly, you can be sure that no one has hard-coded anything in the code.
Now, go through your code and see if you have any static variables being set with the contents of the file (see the sketch below for the kind of pattern to look for).
If removing static variables doesn't help, try printing all the relevant variables to narrow down the issue.
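To illustrate the kind of static-state problem the last two points are hinting at, here is a contrived sketch (not your code; the class and method names are made up) that would produce exactly the "second upload still contains file A" symptom:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Contrived sketch of the anti-pattern to look for: a static field keeps the
// first upload's bytes, so every later upload appears to contain file A.
public class CsvUploadHolder {

    private static byte[] cachedCsv;               // BAD: shared across all requests

    public static byte[] readUpload(InputStream uploadStream) throws IOException {
        if (cachedCsv == null) {                   // only the first upload is ever read
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = uploadStream.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            cachedCsv = out.toByteArray();
        }
        return cachedCsv;                          // later uploads silently get file A's data
    }
}

The fix for this kind of bug is to drop the static cache and read the stream on every request, or to keep the data in request/session scope instead of a static field.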
Good luck!
We develop a Java web application which is run on Tomcat. It's been installed on many computers and it's working without problems. Recently, on a single remote installation, it exhibits very strange behavior: Sometimes content coming through HTTP to the browser is randomly shuffled - wrong data being served for a given URL. Most often it manifests as images being randomly exchanged on a web page. But it's not limited to images, it happened to me that instead of a HTML page the browser got one of the images instead.
I tried to debug the problem using FireBug + NetExport, and so far I gathered:
It appears randomly. Most of the time content is error-free, the problem occurs only from time to time.
The same application runs on many installations, but only this single installation produces the error.
It's not connected to a particular browser - we tried different browsers from different computers and the problem persists.
It occurs when viewed from the server itself (localhost) too, which rules out some broken transparent proxy on the way.
Both images and HTML pages are affected.
If wrong content is received, the data itself is consistent: Content-Length, Content-Type, ETags, etc. - everything matches the content. Just that it's wrong data for a given URL.
I'm truly puzzled, I've never seen such an error. I'd be grateful for any ideas how to investigate the problem further.
We have a web application hosted on a WebLogic server on a UNIX machine. It's primarily a JSP/servlet based app. Whenever we make a modification/enhancement to any of those JSPs or servlets, I precompile them on my local machine and deploy them to the UNIX system. For example, if there is a file called GetIdServlet.class, we usually rename the existing file to, say, GetIdServlet.class1 and then put in the new file as GetIdServlet.class. This is just to be able to revert to the original file in case it is needed. However, I notice very strange behaviour. The application loses some functionality whenever we stop and start the server. The functionality may come back on the next restart or a few restarts after that. For example, a submit button that is supposed to direct to the next page just stops working. It may start working again after a few restarts.
However, on my local setup (Eclipse + WebLogic) there is absolutely no issue. Everything works fine. Any ideas on what's going wrong?
You are using a Unix environment, and I assume that your local desktop setup is Windows or Mac OS. Thus, when you copy the class files, you are using some tool like WinSCP.
If so, please set the transfer settings of that tool to use the binary method of copying files.
For example, in WinSCP: go to Options -> Preferences -> Transfer (in the side menu), and under the Transfer Mode section select the Binary option as the transfer mode. This ensures that an exact binary replica is created in the Unix environment and that no data is lost in the transfer.
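To verify that the copy really is byte-identical after switching to binary mode, you could compare checksums of the local and deployed class files. A minimal sketch (the default file name is just a placeholder; pass the real path as an argument):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: print an MD5 checksum for a .class file so you can compare the value
// computed on your Windows machine with the one computed on the Unix server.
public class ClassChecksum {
    public static void main(String[] args) throws IOException, NoSuchAlgorithmException {
        String path = args.length > 0 ? args[0] : "GetIdServlet.class"; // placeholder
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        try (InputStream in = new FileInputStream(path)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                md5.update(buf, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md5.digest()) {
            hex.append(String.format("%02x", b));
        }
        System.out.println(path + "  md5=" + hex);
    }
}

If the two checksums differ, the transfer (not the application) is the problem.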
We will have large files (up to 2 GB) on a web page, and we want users to be able to continue a download if it gets interrupted.
At the moment, the only solution I can come up with is a Java applet. I have tried searching for existing open source projects with this functionality but haven't found any so far.
I would be thankful for any tips on how to achieve this, or pointers to existing projects or documentation that could be useful.
I am open to any solution, so it does not have to be a Java applet (the important thing is that it works in the most common browsers).
You could serve it as a torrent instead of a simple file and let the user's BitTorrent client figure it out.
Assuming you don't want to do that, HTTP lets the client specify a range of bytes to download. The user's browser has to recognize that the file the user wants to download is the same as the one that already exists in partial form and send the proper range headers to the server, and the server has to honor them. You can probably take care of the server side, but the browser will have to hold up its end. Some "download manager" programs do this too.
For more information:
http://www.west-wind.com/Weblog/posts/244.aspx
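For what the client side of this looks like in plain Java, here is a minimal sketch; the URL and file name are placeholders, and real code would also need to validate the server's response more carefully (e.g. ETag/If-Range handling):

import java.io.File;
import java.io.InputStream;
import java.io.RandomAccessFile;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of resuming an interrupted download with an HTTP Range request.
// The URL and local file name are placeholders.
public class ResumeDownload {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/files/big-archive.zip"); // placeholder
        File target = new File("big-archive.zip.part");

        long alreadyDownloaded = target.exists() ? target.length() : 0;

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        if (alreadyDownloaded > 0) {
            // Ask the server for the remaining bytes only.
            conn.setRequestProperty("Range", "bytes=" + alreadyDownloaded + "-");
        }

        int status = conn.getResponseCode();
        boolean resuming = (status == HttpURLConnection.HTTP_PARTIAL); // 206
        if (!resuming) {
            alreadyDownloaded = 0; // server ignored the Range header, start over
        }

        try (InputStream in = conn.getInputStream();
             RandomAccessFile out = new RandomAccessFile(target, "rw")) {
            out.seek(alreadyDownloaded);
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }
}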
What you need is a Java Download Manager
1. An open source project exists on Google Code:
http://code.google.com/p/jdownloadmon/
(see HTTPDownloadConnection.java, method connect)
2. Check what partial content range types the server supports; it advertises them in the response header:
Accept-Ranges: bytes
http://en.wikipedia.org/wiki/List_of_HTTP_header_field
Accept-Ranges support is what allows the user to pause and resume a download.
3. http://stackoverflow.com/questions/3414438/java-resume-download-in-urlconnection
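On the server side, a very simplified sketch of a servlet honoring a single open-ended "bytes=start-" range is shown below; the file path is a placeholder, and a real implementation would also have to handle "start-end" and multi-part ranges, If-Range, and invalid ranges (416):

import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.io.RandomAccessFile;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Very simplified sketch of a servlet that honors "Range: bytes=start-" requests
// so interrupted downloads can be resumed. The file path is a placeholder.
public class ResumableDownloadServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        File file = new File("/data/big-archive.zip"); // placeholder
        long length = file.length();
        long start = 0;

        String range = req.getHeader("Range");
        if (range != null && range.startsWith("bytes=") && range.endsWith("-")) {
            start = Long.parseLong(range.substring("bytes=".length(), range.length() - 1));
            resp.setStatus(HttpServletResponse.SC_PARTIAL_CONTENT); // 206
            resp.setHeader("Content-Range", "bytes " + start + "-" + (length - 1) + "/" + length);
        }

        resp.setHeader("Accept-Ranges", "bytes");
        resp.setContentType("application/octet-stream");
        resp.setHeader("Content-Length", String.valueOf(length - start));

        try (RandomAccessFile in = new RandomAccessFile(file, "r");
             OutputStream out = resp.getOutputStream()) {
            in.seek(start);
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }
}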