My company has a bunch of websites; some of them redirect and some do not.
My job is to automate this with Selenium in Java: find which ones are redirecting, grab the Fiddler log, and send it to the advertiser responsible.
I tried to find a way to export the log from Fiddler automatically
(a .saz file or even a plain text log), but I could not find any way to automate that.
P.S.
If there's another way to get all the connections from a proxy server without Fiddler, that would be great too, but I need everything that Fiddler captures (the web view, the headers, the raw data).
Any help?
Fiddler should be capturing all traffic, and its logs can be saved in an automated way (you can also write a custom rule if you have a specific need). I have done this before, so I know it works.
The log (.saz or .txt) has all the required information; Fiddler's different views just show you the same data formatted and organized.
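On the "another way without Fiddler" point: one common setup, not mentioned above and offered here only as a hedged sketch, is to route the Selenium-driven browser through BrowserMob Proxy and export a HAR file per URL. The class names below are from the BrowserMob Proxy 2.x Java API; the test URL and output file name are placeholders.

```java
import net.lightbody.bmp.BrowserMobProxyServer;
import net.lightbody.bmp.client.ClientUtil;
import net.lightbody.bmp.core.har.Har;
import net.lightbody.bmp.proxy.CaptureType;
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

import java.io.File;

public class RedirectCapture {
    public static void main(String[] args) throws Exception {
        // Start an embedded proxy and capture headers plus bodies, similar to what Fiddler records.
        BrowserMobProxyServer proxy = new BrowserMobProxyServer();
        proxy.start(0); // 0 = pick any free port
        proxy.enableHarCaptureTypes(CaptureType.REQUEST_HEADERS, CaptureType.RESPONSE_HEADERS,
                CaptureType.REQUEST_CONTENT, CaptureType.RESPONSE_CONTENT);

        // Point the Selenium-driven Chrome instance at the proxy.
        Proxy seleniumProxy = ClientUtil.createSeleniumProxy(proxy);
        ChromeOptions options = new ChromeOptions();
        options.setProxy(seleniumProxy);
        WebDriver driver = new ChromeDriver(options);

        String url = "http://example.com/"; // placeholder for one of the company sites
        proxy.newHar("redirect-check");
        driver.get(url);

        // Crude redirect check: if the browser ended up somewhere else, keep the traffic log.
        if (!driver.getCurrentUrl().startsWith(url)) {
            Har har = proxy.getHar();
            har.writeTo(new File("redirect-capture.har"));
        }

        driver.quit();
        proxy.stop();
    }
}
```

A HAR file isn't a .saz, but it does hold the request and response headers and bodies, so it can serve the same "send the log to the advertiser" purpose.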
Related
I'm testing with JMeter and I need the HTTP request data to build the test plan.
I tried to see that information in the F12 Network tab of the Chrome browser, but the information doesn't appear there.
Does anyone know how I can get that information?
You can't see the request because the browser clears the Network tab when you navigate to another page, but you can persist these requests by checking the Preserve log option, like so:
(Screenshots: the Preserve log checkbox before the request, and the persisted requests after the request.)
You can find more information about the Network tab in the Chrome DevTools documentation.
The easiest way is to capture a HAR file in the Chrome Developer Tools; it will have all the information for the request data, including parameters, headers, cookies, etc.
Once done, you will be able to inspect it with, for example, the HAR Analyzer, or simply convert it into a JMeter .jmx script.
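Since a HAR file is just JSON, you can also inspect it programmatically. A minimal sketch, assuming Jackson is on the classpath and a hypothetical capture.har file, that lists the recorded request methods and URLs:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.File;

public class HarDump {
    public static void main(String[] args) throws Exception {
        // HAR layout: log.entries[] -> each entry has a request and a response object.
        JsonNode root = new ObjectMapper().readTree(new File("capture.har"));
        for (JsonNode entry : root.path("log").path("entries")) {
            JsonNode request = entry.path("request");
            System.out.println(request.path("method").asText() + " " + request.path("url").asText());
        }
    }
}
```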
The best way to capture HTTP requests is by using JMeter's proxy.
Inspecting network traffic in the browser and constructing the JMeter script manually takes a lot of effort. Alternatively, you can just set up JMeter as a proxy server and capture your browser's network traffic.
Follow this post for more information on how to set up the proxy in JMeter and record web applications.
My company is in the process of changing all our URLs from HTTP to HTTPS. One of the most useful techniques for us is manually opening the Chrome DevTools console and viewing the Security and Network tabs to find the items that prevent a page from being secure (showing the green padlock).
I am using Java code that reads through a local text file of HTTPS URLs to test.
Using ChromeDriver, can it tell me if a URL is secure or insecure (as Chrome would show me if I used it manually)? Also, can it tell me what is causing a page to not be secure?
I'm hoping someone has tackled this one already, but I can't find anything on it. Thanks in advance!
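ChromeDriver doesn't expose the padlock UI directly, but one hedged approach (a sketch, not a confirmed solution) is to enable browser log collection and scan the console output for mixed-content messages, which are what usually breaks the green padlock. The capability name below targets recent ChromeDriver versions; the URL is a placeholder for one read from the text file.

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.logging.LogEntry;
import org.openqa.selenium.logging.LogType;
import org.openqa.selenium.logging.LoggingPreferences;

import java.util.logging.Level;

public class MixedContentCheck {
    public static void main(String[] args) {
        // Ask Chrome to hand its console output back through the WebDriver logging API.
        LoggingPreferences logPrefs = new LoggingPreferences();
        logPrefs.enable(LogType.BROWSER, Level.ALL);
        ChromeOptions options = new ChromeOptions();
        options.setCapability("goog:loggingPrefs", logPrefs);

        WebDriver driver = new ChromeDriver(options);
        driver.get("https://example.com/"); // placeholder for a URL from the text file

        // Mixed-content problems show up in the console as "Mixed Content: ..." messages.
        for (LogEntry entry : driver.manage().logs().get(LogType.BROWSER)) {
            if (entry.getMessage().contains("Mixed Content")) {
                System.out.println("Insecure item: " + entry.getMessage());
            }
        }
        driver.quit();
    }
}
```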
I'm learning how to send HTTP requests in Java. Is there a way to visually see the POST/GET responses in a browser, UI and all? I know how to perform the requests in Java and receive the HTML printed out in the console, but it would be easier for me to just see the website itself in the browser.
Do I need a plugin? Or do I need to open up a socket connection and do something with localhost?
Sorry if this question is a duplicate/is not clear: I'm very new to this.
In Firefox, you can press Ctrl+Shift+Q and click on Network. Once you visit a page, it will show the requests in the list area. If you click on a request, it will show you the request and response headers. Very useful for debugging sites. I hope that's what you were asking. BTW, I have Firefox 30.0 on Windows 8.1; I don't know about previous versions.
EDIT: If you want to intercept the HTTP requests generated from Java, you can use Fiddler. It may have what you're looking for.
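For the Fiddler route, the usual trick (a sketch assuming Fiddler is running locally on its default port 8888) is to point Java's standard proxy system properties at it, so HttpURLConnection traffic shows up in Fiddler's session list:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class ProxiedRequest {
    public static void main(String[] args) throws Exception {
        // Route plain-HTTP and HTTPS traffic through the local Fiddler listener.
        // For HTTPS decryption, Fiddler's root certificate also has to be trusted by the JVM.
        System.setProperty("http.proxyHost", "127.0.0.1");
        System.setProperty("http.proxyPort", "8888");
        System.setProperty("https.proxyHost", "127.0.0.1");
        System.setProperty("https.proxyPort", "8888");

        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://example.com/").openConnection();
        System.out.println("Status: " + conn.getResponseCode());
        conn.disconnect();
    }
}
```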
I use Firebug for Firefox. It shows requests, responses, and all headers in real time (including latency measurements), so it's really convenient for development. It's an add-on: https://addons.mozilla.org/en-US/firefox/addon/firebug/
You should be able to console.log the object and, in most browsers, click on the object in the developer tools, expand it, and see all of its properties.
In Chrome you can right click -> Inspect Element and go to the Network tab. Refreshing the page will begin tracking the page you are on. When you send out requests, the request log will record them.
Recently I used a Mac application called Spotflux. I think it's written in Java (because if you hover over its icon it literally says "java"...).
It's just a VPN app. However, to support itself, it can show you ads... while browsing. You can be browsing in Chrome, and the page will load with a banner at the bottom.
Since it is a VPN app, it obviously can control what goes in and out of your machine, so I guess it simply appends some HTML to any website response before passing it to your browser.
I'm not interested in making a VPN or anything like that. The real question is: how, using Java, can you intercept the HTML response from a website and append more HTML to it before it reaches your browser? Suppose I want to make an app that literally puts a picture at the bottom of every site you visit.
This is, of course, a hypothetical answer - I don't really know how Spotflux works.
However, I'm guessing that as part of its VPN, it installs a proxy server. Proxy servers intercept HTTP requests and responses, for a variety of reasons - most corporate networks use proxy servers for caching, monitoring internet usage, and blocking access to NSFW content.
Because a proxy server can see all HTTP traffic between your browser and the internet, it can modify that traffic. A proxy server will often inject an HTTP header, for instance, and injecting an additional HTML tag for an image would be relatively easy.
Here's a sample implementation of a proxy server in Java.
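The linked sample isn't reproduced here, but to make the injection step itself concrete, here is a hypothetical helper a proxy could run on a buffered HTML response (a sketch of the idea only; it ignores real-world concerns like Content-Length, compression, and chunked transfer encoding):

```java
public final class BannerInjector {
    // Placeholder banner markup; a real deployment would host its own image.
    private static final String BANNER =
            "<img src=\"http://example.com/banner.png\" "
            + "style=\"position:fixed;bottom:0;left:0\">";

    private BannerInjector() {
    }

    // Insert the banner just before </body>; if the page has no closing body tag,
    // fall back to appending it at the end.
    public static String inject(String html) {
        int i = html.toLowerCase().lastIndexOf("</body>");
        if (i < 0) {
            return html + BANNER;
        }
        return html.substring(0, i) + BANNER + html.substring(i);
    }
}
```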
There are many ways to do this. Probably the easiest would be to proxy HTTP requests through a web proxy like RabbIT (written in Java), and then extend the proxy to modify the response being sent back to the browser.
In the case of RabbIT, this can be done either with custom code or with a special Filter. See their FAQ.
WARNING: this is not as simple as you might think. Adding an image to the bottom of every page will be hard, depending on what kind of HTML the server returns. Depending on what CSS, JavaScript, etc. the remote site uses, you can't just insert the same HTML markup and expect it to behave the same everywhere.
I am trying to crawl some website content using a combination of jsoup and Java, save the relevant details to my database, and repeat the same activity daily.
But here's the deal: when I open the website in a browser, I get the fully rendered HTML (with all the element tags present). The JavaScript part, when I test it, works just fine (the part I'm supposed to use to extract the correct data).
But when I do a parse/get with jsoup (from a Java class), only the initial page is downloaded for parsing. In other words, some parts of the website are dynamic, and I want that data, but since it is rendered asynchronously after the initial GET, I'm unable to capture it with jsoup.
Does anybody know a way around this? Am I using the right toolset? More experienced people, I'd appreciate your advice.
You first need to check whether the website you're crawling requires any of the following to show all of its content:
Authentication with Login/Password
Some sort of session validation on HTTP headers
Cookies
Some sort of time delay to load all the content (sites heavy on JavaScript libraries, CSS, and asynchronous data may need this)
A specific User-Agent browser string
A proxy password if, for example, you're behind a corporate network security configuration
If anything on this list is needed, you can supply that data as parameters on your jsoup.connect() call (see the sketch after the link below). Please refer to the official documentation:
http://jsoup.org/cookbook/input/load-document-from-url
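As a rough illustration of where those items plug in (all header, cookie, and URL values below are placeholders):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class CrawlExample {
    public static void main(String[] args) throws Exception {
        Document doc = Jsoup.connect("http://example.com/page")          // placeholder URL
                .userAgent("Mozilla/5.0 (Windows NT 10.0; Win64; x64)")  // specific User-Agent
                .header("Accept-Language", "en-US")                      // extra HTTP header
                .cookie("SESSIONID", "abc123")                           // session cookie
                .timeout(30_000)                                         // give slow pages more time
                .get();
        System.out.println(doc.title());
    }
}
```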