Blocking a website using Java

I am trying to block certain websites using a web application. So, when I type a URL, say "http://www.google.com", it should first check whether Google is blocked by my application or not: if not, open the website; otherwise, reject the browser's request to open it. I am unable to find a way to capture all HTTP requests from the browser so that I can process them.
I know proxies are the most suitable option, but is there any alternative solution to this? After some searching I found a library, jpcap (a network packet capture library), and I was wondering whether it could help me or not.

What you are trying to create is a proxy-server.
You have to configure the browser to go through the proxy, then you can deny websites, reroute them etc.
There are many proxies already there (open source and commercial) that offer what you want.
For example: Squid http://www.squid-cache.org/
See Wikipedia description of a proxy here: https://en.wikipedia.org/wiki/Proxy_server
Many firewall products offer a transparent proxy service, redirecting all HTTP/HTTPS traffic that passes through the firewall into a proxy server. It looks as though you have a direct connection, but your packets are really being filtered; hence the name transparent proxy.
If your assignment does not allow that approach, you should re-read the assignment to check that you have understood the scope of the filtering correctly.
You cannot take over the browser's IP communication from a servlet or servlet filter. Using a (servlet) filter, you can only filter requests directed to your own application. One step above that, using an application-server valve (Tomcat uses this term; others may use a different one), you can only filter requests directed at that server. One step above (or below) your application server is the physical server and the network it is running in.
If your client does not share the same network as your server, you cannot even apply a transparent proxy to it. Since browsers run on the client's computer, most clients in the world do not share the same network zone as the server.
It just does not work the way you expect it to.
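To see the scope limitation concretely, here is a minimal, illustrative sketch of a servlet Filter (the class name and the /admin path are made up). It can reject requests, but only requests that were already addressed to your own web application; the browser's traffic to google.com or anywhere else never reaches it.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter: blocks access to /admin/* inside THIS web application only.
public class BlockingFilter implements Filter {

    public void init(FilterConfig config) throws ServletException { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // This URI always belongs to your own application; requests the browser
        // sends directly to other sites never pass through here.
        if (request.getRequestURI().startsWith(request.getContextPath() + "/admin")) {
            response.sendError(HttpServletResponse.SC_FORBIDDEN, "Blocked by filter");
            return;
        }
        chain.doFilter(req, res);
    }

    public void destroy() { }
}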

Related

How to put my own proxy between any client and any server (via a web page)

What I want to do is build a web application (a proxy) that the user uses to request the web page he wants, and
my application forwards the request to the main server,
modifies the HTML code,
and sends the modified version back to the client.
The question now is:
How do I keep my application between the client and the main server
(for example, when the user clicks any link inside the modified page,
makes an AJAX request, submits a form, and so on)?
In other words:
How do I guarantee that every request from the client (after the first URL request) is sent to my proxy, and that every response comes to my proxy first?
The question is: why do you need a proxy? Why do you want to build it yourself; why not use an already existing one like HAProxy?
EDIT: sorry, I didn't read your whole post correctly. You can start with:
http://www.jtmelton.com/2007/11/27/a-simple-multi-threaded-java-http-proxy-server/
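That article walks through a complete implementation. Purely as an illustrative sketch of the core idea (plain HTTP only, no keep-alive, no CONNECT/HTTPS handling, request bodies ignored), a minimal proxy loop looks roughly like this:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.URL;

// Bare-bones illustrative proxy: accepts a browser connection, forwards the
// request headers to the origin server, and streams the response back.
// A real proxy must handle keep-alive, chunked encoding, POST bodies,
// HTTPS CONNECT tunnelling, and so on.
public class TinyProxy {

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(8080)) {  // browser proxy setting: localhost:8080
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handle(client)).start();
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client) {
            BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
            String requestLine = in.readLine();               // e.g. "GET http://example.com/ HTTP/1.1"
            if (requestLine == null) {
                return;
            }
            StringBuilder head = new StringBuilder(requestLine).append("\r\n");
            String line;
            while ((line = in.readLine()) != null && !line.isEmpty()) {
                head.append(line).append("\r\n");             // copy the remaining request headers
            }
            head.append("\r\n");

            URL target = new URL(requestLine.split(" ")[1]);
            int port = target.getPort() == -1 ? 80 : target.getPort();
            try (Socket origin = new Socket(target.getHost(), port)) {
                origin.getOutputStream().write(head.toString().getBytes("ISO-8859-1"));

                // This is the point where a rewriting proxy would buffer and
                // modify the response; here it is copied back untouched.
                InputStream fromOrigin = origin.getInputStream();
                OutputStream toClient = c.getOutputStream();
                byte[] buf = new byte[8192];
                int n;
                while ((n = fromOrigin.read(buf)) != -1) {
                    toClient.write(buf, 0, n);
                }
            }
        } catch (IOException | RuntimeException e) {
            e.printStackTrace();                              // illustrative only
        }
    }
}

With the browser pointed at localhost:8080, every page request passes through handle(), which is where a deny rule or HTML rewriting would be inserted.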
If the user is willing to, or can be forced¹ to, configure his clients (e.g. web browser) to use a web proxy, then your problem is already solved. Another way to do this (assuming that the user is cooperative) is to get them to install a trusted browser plugin that dynamically routes selected URLs through your proxy. But you can't do this using an untrusted webapp: the browser sandbox won't (shouldn't) let you.
Doing it without the user's knowledge and consent requires some kind of interference at the network level. For example, a "smart" switch could recognize TCP/IP packets on port 80 and deliberately route them to your proxy instead of to the IP address that the client's browser specified. This kind of thing is known as "deep packet inspection". It would be very difficult to implement yourself, and it requires significant compute power in your network switch if you are going to achieve high network rates through the switch.
The second problem is that making meaningful on-the-fly modifications to arbitrary HTML + Javascript responses is a really difficult problem.
The final problem is that this is only going to work with HTTP. HTTPS protects against "man in the middle" attacks ... such as this ... that monitor or interfere with the requests and responses. The best you could hope to do would be to capture the encrypted traffic between the client and the server.
¹ The normal way to force a user to do this is to implement a firewall that blocks all outgoing HTTP connections apart from those made via your proxy.
UPDATE
The problem now is: what should I change in the HTML code to force the client to request everything through my app? For example, a link's href attribute may become www.aaaa.com?url=www.google.com, but what should I do for AJAX requests and forms?
Like I said, it is a difficult task. You have to deal with the following problems:
Finding and updating absolute URLs in the HTML. (Not hard; a rough sketch of this step follows the list.)
Finding and dealing with the base URL (if any). (Not hard)
Dealing with the URLs that you don't want to change; e.g. links to CSS, javascript (maybe), etc. (Harder ...)
Dealing with HTML that is syntactically invalid ... but not to the extent that the browser can't cope. (Hard)
Dealing with cross-site issues. (Uncertain ...)
Dealing with URLs in requests being made by javascript embedded in / called from the page. This is extremely difficult, given the myriad ways that javascript could assemble the URL.
Dealing with HTTPS. (Impossible to do securely; i.e. without requiring the user to trust the proxy with private info such as passwords and credit card numbers that are normally sent securely.)
and so on.
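For the first couple of items, here is a rough, illustrative sketch of the link-rewriting step using the jsoup HTML parser (jsoup resolves relative links against the base URL for you). The proxy entry point www.aaaa.com?url=... is the hypothetical scheme from the question.

import java.net.URLEncoder;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class LinkRewriter {

    // Rewrites every <a href> in the page so it points back at the proxy.
    // pageUrl is the URL the HTML was fetched from; jsoup uses it as the base
    // URL when resolving relative links.
    public static String rewrite(String html, String pageUrl) throws Exception {
        Document doc = Jsoup.parse(html, pageUrl);
        for (Element link : doc.select("a[href]")) {
            String absolute = link.attr("abs:href");   // absolute form of the href
            if (absolute.isEmpty()) {
                continue;                              // e.g. javascript: or malformed links
            }
            link.attr("href",
                    "http://www.aaaa.com?url=" + URLEncoder.encode(absolute, "UTF-8"));
        }
        return doc.html();
    }
}

Forms, CSS url(...) references, and URLs assembled inside JavaScript each need their own handling, which is where most of the difficulty in the list above comes from.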

Confirm understanding: Tomcat-served applet, and the applet's network traffic?

All..
I am hoping someone can confirm for me what I have read and what I have observed regarding the Tomcat Java applet server.
I have a Linux server running Tomcat (I built two new ones, but based their configuration on the previous two that were present when I came on the job). I am fairly new to Tomcat servers as opposed to plain web servers.
When a client connects to the Tomcat server address...
A static web page is served, with a link to a Java applet.
When they click the link, Tomcat serves up the applet to the browser.
When the applet is served:
Are all connections and traffic that the applet creates tunneled back to the Tomcat server? (I am pretty sure this is happening, and it is what is supposed to happen.)
Do all connections go through the client's network connection? (None of the tests I have done can confirm this.)
Is the tunneling a reason why Tomcat is used rather than just serving up the Java applet via an Apache server?
We have an SSL connection with certificates set up to allow HTTPS connections to the Tomcat server, and I am assuming all the data between the Tomcat server and the applet is encrypted because of this?
Thanks!
There's no good reason, from what you've told us so far, to use Tomcat over a lighter httpd such as Apache or nginx, if it's really just serving a Java applet and a static web page. Tomcat is an application server, which implies a little more than just static content, although it will serve static content just fine too. But there is no default integration between the two technologies. In particular, your data will not be encrypted by default; you have to make sure that your applet makes secure requests. Serving the applet over SSL only protects the connection that actually serves the applet, not subsequent ones. There is no reason those shouldn't also go through the same SSL endpoint, but the applet has to initiate that; there is nothing "magical" going on.
Here's a good article on when you'd want to use one or the other.
As for the other part: there is a security model that comes with an applet. By default, the applet will only be able to make connections back to the server from which it came; this is to prevent certain kinds of "cross-site" attacks that were seen in the past. These days it is more common for different sites to interoperate, so there are many technologies you could use for that if you need to, and the end user can also configure applets to get around this default policy. Note, though, that applets are largely considered outdated and are not widely used any more.
Here is information about the applet security model, including network restrictions.
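Tying the two answers together, a minimal, illustrative sketch of an applet talking back to the server it was loaded from (the relative path "data" is a made-up servlet on the same host). The connection is only encrypted if the codebase itself is an https:// URL; nothing is encrypted automatically.

import java.applet.Applet;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class DataApplet extends Applet {

    // getCodeBase() is the URL the applet was loaded from, so connecting back
    // to it satisfies the default applet sandbox rule. The cast below assumes
    // the applet was served from an https:// codebase.
    public String fetchData() throws Exception {
        URL endpoint = new URL(getCodeBase(), "data");
        HttpsURLConnection conn = (HttpsURLConnection) endpoint.openConnection();
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line).append('\n');
            }
        }
        return body.toString();
    }
}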

How to track all the web requests on my machine?

I tried using Muffin's web proxy to record the URLs that were hit in the browser.
I am able to track internet requests like google.com, stackoverflow, etc.
But I was unable to track intranet requests, i.e. the ones that do not need the internet. I am not sure how the intranet requests work, because those URL requests were not being tracked.
Is there a way to track those requests as well (intranet URLs)?
1) I am able to track the web URLs because I am redirecting all requests to a socket I created in Java, by setting it up as the proxy in the browser settings.
2) Usually intranet sites do not rely on the proxy server; the browser talks to them directly via DNS. How can I make those requests also go through my socket?
Note: I am trying to achieve this using Java sockets.
Use a Windows application called Fiddler. It can track all types of inbound and outbound connections.
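If you want to stay with the Java-socket approach from the question, the logging side is only a few lines; the harder part is the browser configuration, because browsers are commonly set to bypass the proxy for local/intranet addresses, which is exactly why those requests never reach your socket. A rough sketch, assuming the browser is configured to use localhost:8080 as its proxy for all addresses with no bypass list:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

// Logs the request line of everything the browser sends through the proxy.
// Intranet hosts only show up here if the browser's proxy settings do NOT
// exclude local addresses (and no proxy auto-config script skips them).
public class RequestLogger {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()))) {
                    String requestLine = in.readLine();   // e.g. "GET http://intranet-host/ HTTP/1.1"
                    if (requestLine != null) {
                        System.out.println(requestLine);
                    }
                    // Forwarding the request to its destination is omitted here;
                    // see the proxy sketch earlier on this page for that part.
                }
            }
        }
    }
}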

How to append html to a website response before it reaches the browser in Java?

Recently I used a Mac application called Spotflux. I think it's written in Java (because if you hover over its icon it literally says "java"...).
It's just a VPN app. However, to support itself, it can show you ads... while browsing. You can be browsing on chrome, and the page will load with a banner at the bottom.
Since it is a VPN app, it obviously can control what goes in and out of your machine, so I guess that it simply appends some html to any website response before passing it to your browser.
I'm not interested in making a VPN or anything like that. The real question is: how, using Java, can you intercept the html response from a website and append more html to it before it reaches your browser? Suppose that I want to make an app that literally puts a picture at the bottom of every site you visit.
This is, of course, a hypothetical answer - I don't really know how Spotflux works.
However, I'm guessing that as part of its VPN, it installs a proxy server. Proxy servers intercept HTTP requests and responses, for a variety of reasons - most corporate networks use proxy servers for caching, monitoring internet usage, and blocking access to NSFW content.
As a proxy server can see all HTTP traffic between your browser and the internet, it can modify that traffic; a proxy server will often inject an HTTP header, for instance. Injecting an additional HTML tag for an image would be relatively easy.
Here's a sample implementation of a proxy server in Java.
There are many ways to do this. Probably the easiest would be to proxy HTTP requests through a web proxy like RabbIT (written in Java), then extend the proxy to modify the response being sent back to the browser.
In the case of RabbIT, this can be done either with custom code or with a special Filter. See their FAQ.
WARNING: this is not as simple as you think. Adding an image to the bottom of every page will be hard to do reliably, depending on what kind of HTML is returned by the server. Depending on the CSS, JavaScript, etc. that the remote site uses, you can't just insert the same HTML markup everywhere and expect it to look and behave the same.
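To make the idea concrete, and to show why the warning holds, here is a deliberately naive sketch of the rewriting step such a proxy or filter might apply to a decompressed text/html response body (the banner URL is just a placeholder). A page without a literal </body> tag, or one whose content is built by JavaScript after load, slips straight past it.

// Naive response rewriting: inject an <img> just before </body>.
// Assumes the response body has already been decompressed and decoded to a
// String, and that Content-Length (or chunked encoding) is fixed up afterwards.
public class BannerInjector {

    private static final String BANNER =
            "<img src=\"http://example.com/banner.png\""
            + " style=\"position:fixed;bottom:0;left:0\" alt=\"banner\"/>";

    public static String inject(String html) {
        int idx = html.toLowerCase().lastIndexOf("</body>");
        if (idx < 0) {
            return html;    // no closing body tag: give up rather than corrupt the page
        }
        return html.substring(0, idx) + BANNER + html.substring(idx);
    }
}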

Security matter: are parameters in the URL secure?

Over the last few months I have been teaching myself web development in Java (servlets and JSP). I am developing a web server which mainly serves an application; it actually runs on Google App Engine. My concern is that, although I am using SSL connections, sending parameters in the URL (e.g. https://www.xyz.com/server?password=1234&username=uname) may not be secure. Should I use another way, or is it really secure? I don't know whether this URL is delivered as plain text as a whole (with the parameters).
Any help would be appreciated!
Everything is encrypted, including the URL and its parameters. You might still avoid them because they might be stored in server-side logs and in the browser history, though.
Your problem goes beyond the web server and Google App Engine.
Sending a password through a web form to your server is a very common security concern. See these SO threads:
Is either GET or POST more secure than the other? (meaning that POST simply does not display the parameters in the URL, so that alone is not enough)
Are https URLs encrypted? (describes something similar to what you intend to do)
The complete HTTP request including the request line is encrypted inside SSL.
Example http request for the above URL which will all be contained within the SSL tunnel:
GET /server?password=1234&username=uname HTTP/1.1
Host: www.xyz.com
...
It is possible, though, that your application will log the requested URL; as this contains the user's password, that may not be OK.
Well, apart from the issues to do with logging and visibility of URLs (i.e., what happens before and after the secure communication) both GET and POST are equally secure; there is very little information that is exchanged before the encrypted channel is established, not even the first line of the HTTP protocol. But that doesn't mean you should use GET for this.
The issue is that logging in is changing the state of the server and should not be repeated without the user getting properly notified that this is happening (to prevent surprises with Javascript). The state that is being changed is of the user session information on the server, because what logging in does is associate a verified identity with that session. Because it is a (significant) change of state, the operation should not be done by GET; while you could do it by PUT technically, POST is better because of the non-idempotency assumptions associated with it (which in turn encourages browsers to pop up a warning dialog).
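As a minimal client-side sketch of the POST alternative (the /login path and the parameter names are placeholders), using plain HttpURLConnection so the credentials travel in the request body rather than in the URL. The body is still protected only by the SSL tunnel, exactly as with GET, but it no longer shows up in access logs or the browser history.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class LoginClient {
    public static void main(String[] args) throws Exception {
        String body = "username=" + URLEncoder.encode("uname", "UTF-8")
                    + "&password=" + URLEncoder.encode("1234", "UTF-8");

        URL url = new URL("https://www.xyz.com/login");          // hypothetical endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));    // credentials in the body, not the URL
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}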
