I am trying to integrate Meek in an Android application. There is a sample here that shows how to instantiate the transport:
https://github.com/guardianproject/AndroidPluggableTransports/blob/master/sample/src/main/java/info/pluggeabletransports/sample/SampleClientActivity.java
The question is what to do from there. Assuming I have an application that uses OkHttp3, is there a way to reconcile the two and use Meek as the underlying transport mechanism while the app only interacts with OkHttp3?
I am also quite conflicted about how to instantiate the transport and what each option means. In the link provided above, the transport is instantiated as follows:
private void initMeekTransport() {
    new MeekTransport().register();

    Properties options = new Properties();
    String remoteAddress = "185.152.65.180:9001"; // a public Tor guard to test

    options.put(MeekTransport.OPTION_URL, "https://meek.azureedge.net/"); // the public Tor Meek endpoint
    options.put(MeekTransport.OPTION_FRONT, "ajax.aspnetcdn.com"); // the domain-fronting address to use
    options.put(MeekTransport.OPTION_KEY, "97700DFE9F483596DDA6264C4D7DF7641E1E39CE"); // the key that is needed for this endpoint

    init(DispatchConstants.PT_TRANSPORTS_MEEK, remoteAddress, options);
}
However, in the README (https://github.com/guardianproject/AndroidPluggableTransports), the suggested approach is:
Properties options = new Properties();
String bridgeAddress = "https://meek.actualdomain.com";

options.put(MeekTransport.OPTION_FRONT, "www.somefrontabledomain.com");
options.put(MeekTransport.OPTION_KEY, "18800CFE9F483596DDA6264C4D7DF7331E1E39CE");

init("meek", bridgeAddress, options);

Transport transport = Dispatcher.get().getTransport(this, PT_TRANSPORTS_MEEK, options);
if (transport != null) {
    Connection conn = transport.connect(bridgeAddress);

    // now use the connection, either as a proxy, or to read and write bytes directly
    if (conn.getLocalAddress() != null && conn.getLocalPort() != -1)
        setSocksProxy(conn.getLocalAddress(), conn.getLocalPort());

    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    baos.write("GET https://somewebsite.org/TheProject.html HTTP/1.0".getBytes());
    conn.write(baos.toByteArray());

    byte[] buffer = new byte[1024 * 64];
    int read = conn.read(buffer, 0, buffer.length);
    String response = new String(buffer);
}
In the latter approach, the bridge address is passed to the init() method. In the first approach, the remote address is passed to init() while the Meek endpoint URL is passed as an option. Which of these approaches is the right one?
Furthermore, I would appreciate some comments on OPTION_URL, OPTION_FRONT, and OPTION_KEY. Where does each of these pieces of information come from? If I do not want to use these defaults, how do I go about setting up my own "Meek endpoint", for instance, so as not to use the default Tor one? Does anything particular need to be done on the CDN? How would I use Amazon or Google instead of aspnetcdn?
As you can see, I am fairly confused.
Thanks for your interest in this library. Unfortunately, we haven't made a formal release yet, and it is still quite early in its development.
However, we think we can still help.
First, it should be made clear that the major cloud providers (Google, Amazon, Azure) that support domain fronting are now very much against it. Thus, you might run into limitations on any of those platforms, especially with a fronted domain that you do not control.
You can read about Tor and Signal's own struggle with this here: https://signal.org/blog/looking-back-on-the-front/ and https://blog.torproject.org/domain-fronting-critical-open-web
Now, that said, if you still want to proceed with domain fronting, and all you are using is HTTP/S, then you can do so without using Meek at all. You simply need to set the "Host" header on your request to the actual domain you want to reach, while the request itself goes to the fronted domain. It is actually quite simple.
You can read more here: https://www.bamsoftware.com/papers/fronting/#sec:introduction
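With OkHttp3, for example, that amounts to pointing the request URL at the front domain and overriding the Host header. A minimal sketch, reusing the placeholder domains from the README snippet above (replace them with your own CDN front and backend):
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class FrontedRequest {
    public static void main(String[] args) throws Exception {
        OkHttpClient client = new OkHttpClient();

        // The TCP/TLS connection (and SNI) goes to the fronted domain served by the CDN,
        // while the Host header names the actual backend you want to reach.
        Request request = new Request.Builder()
                .url("https://www.somefrontabledomain.com/") // placeholder front domain
                .header("Host", "meek.actualdomain.com")     // placeholder real backend
                .build();

        try (Response response = client.newCall(request).execute()) {
            System.out.println(response.code());
            System.out.println(response.body().string());
        }
    }
}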
Related
I'm trying to get mail from a POP3 server through a proxy. Most "tutorials" suggest doing something like:
Properties p = System.getProperties();
p.setProperty("proxySet", "true");//does this line even do anything?
p.setProperty("socksProxyHost", proxyHost);
p.setProperty("socksProxyPort", proxyPort);
p.setProperty("socksProxyVersion", "5");//or 4 if you want to use 4
p.setProperty("mail.pop3.socketFactory.class", SSL_FACTORY);
p.setProperty("mail.pop3.socketFactory.fallback", "false");//also not sure what it does
p.setProperty("mail.pop3.port", portOnHostYouWantToTalkTo);
p.setProperty("mail.pop3.socketFactory.port", portOnHostYouWantToTalkTo);
Session session = Session.getDefaultInstance(p, null);
//or session = Session.getInstance(p, null);
URLName urlName = new URLName(protocol, hostYouWantToTalkTo, portOnHostYouWantToTalkTo, null, mailbox, mailboxPassword);
Store store = session.getStore(urlName);
Now, if I do something like this I get an exception:
java.net.SocketException: Can't connect to SOCKS proxy:Connection timed out: connect.
My POP3 server does not log any connections, suggesting there is a proxy issue or an error in my code. I am using 73.29.157.190:29099 for now.
2) If, however, I do
Properties p = new Properties();
//all the same logic and stuff
Session session = Session.getInstance(p, null);
My POP3 server logs a connection from localhost, and works properly, suggesting that I am NOT using a proxy to connect to it and everything else is fine.
My question is: why do "tutorials" use System.getProperties() and pass it to getInstance()? Every Session instance will keep a reference to System.properties, so effectively every Session instance will be affected whenever you create a new one or alter System.getProperties() in any way; you might as well reuse the same one.
Does JavaMail need something set in System.properties specifically, and not in the Properties passed to Session?
Also, what parameters do you need to set in order to get javamail to use a proxy? What does System.properties have that makes it work unlike my new Properties? A link to a good tutorial or documentation that explains it would be greatly appreciated.
Thanks!
First, get rid of all the socket factory stuff; you don't need it.
Next, make sure you really have a SOCKS proxy and not just a web proxy. If you do, see this JavaMail FAQ entry.
Setting the System properties for a SOCKS proxy will cause all network connections from your program to go through the proxy server, which may not be what you want.
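If a process-wide SOCKS proxy is acceptable, a minimal sketch looks like the following (host names, port, and credentials are placeholders). Note that newer JavaMail releases (1.5 and later) also document per-protocol SOCKS properties such as mail.pop3.socks.host, which avoid the global setting.
import java.util.Properties;
import javax.mail.Session;
import javax.mail.Store;

public class Pop3ViaSocksProxy {
    public static void main(String[] args) throws Exception {
        // JVM-wide SOCKS proxy: every socket opened by this process will use it,
        // not just the mail connection (see the caveat above).
        System.setProperty("socksProxyHost", "proxy.example.com"); // placeholder
        System.setProperty("socksProxyPort", "1080");              // placeholder

        Properties props = new Properties();
        props.setProperty("mail.store.protocol", "pop3s");

        Session session = Session.getInstance(props, null);
        Store store = session.getStore();
        store.connect("pop.example.com", "user", "password");      // placeholders
        System.out.println("Connected: " + store.isConnected());
        store.close();
    }
}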
I was just looking around to find out how to make a program that would sniff my network traffic in Java, but I couldn't find anything. I wanted to know if there is any way to view the network traffic going by. I heard of an idea involving a Socket, but I don't get how that would work. So anyway, I'm just looking for an API or a way to write it myself.
EDIT:
I would gladly take an API, but I would also like clarification on how to sniff traffic with a Socket.
jpcap and jNetPcap are pcap wrapper projects in Java.
Kraken is a similar project, well documented with lots of examples.
A simple example from the Kraken web site:
public static void main(String[] args) throws Exception {
    File f = new File("sample.pcap");
    EthernetDecoder eth = new EthernetDecoder();
    IpDecoder ip = new IpDecoder();
    TcpDecoder tcp = new TcpDecoder(new TcpPortProtocolMapper());
    UdpDecoder udp = new UdpDecoder(new UdpPortProtocolMapper());

    eth.register(EthernetType.IPV4, ip);
    ip.register(InternetProtocol.TCP, tcp);
    ip.register(InternetProtocol.UDP, udp);

    PcapInputStream is = new PcapFileInputStream(f);
    try {
        while (true) {
            // getPacket() throws an EOFException at the end of the file
            PcapPacket packet = is.getPacket();
            eth.decode(packet);
        }
    } catch (EOFException eof) {
        is.close();
    }
}
Another Java libpcap wrapper is https://github.com/kaitoy/pcap4j
Pcap4J is a Java library for capturing, crafting and sending packets. It wraps a native packet capture library (libpcap or WinPcap) via JNA and provides Java-oriented APIs.
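As a rough sketch of what live capture with Pcap4J looks like (assuming libpcap/WinPcap is installed and you have capture privileges; check the project's README for the exact API of the version you use):
import org.pcap4j.core.PcapHandle;
import org.pcap4j.core.PcapNetworkInterface;
import org.pcap4j.core.PcapNetworkInterface.PromiscuousMode;
import org.pcap4j.core.Pcaps;
import org.pcap4j.packet.Packet;

public class Pcap4jSketch {
    public static void main(String[] args) throws Exception {
        // Pick the first capture-capable interface; in practice you would choose by name.
        PcapNetworkInterface nif = Pcaps.findAllDevs().get(0);

        // 65536-byte snapshot length, promiscuous mode, 10 ms read timeout.
        PcapHandle handle = nif.openLive(65536, PromiscuousMode.PROMISCUOUS, 10);

        for (int i = 0; i < 10; i++) {
            Packet packet = handle.getNextPacket(); // null if nothing arrived before the timeout
            if (packet != null) {
                System.out.println(packet);
            }
        }
        handle.close();
    }
}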
You need a packet sniffer API; maybe netutils is what you need:
The 'netutils' package gives a low-level Java network library. It contains extensive infrastructure for sniffing, injecting, building and parsing Ethernet/IP/TCP/UDP/ICMP packets.
This isn't an API or anything Java-related, but if you really just want to sniff data for analysis purposes, give Wireshark a try. It is an application used for network analysis, and worth mentioning in case you are not aware of it.
I'm not sure if what I'm trying to do is possible, it might not. Here is my problem:
I'm trying to use a Servlet to pass information from a client to a server via HTTP. This communication is very frequent (I'm passing UI information, so every single mouse event), so I want to have as little overhead as possible to avoid latency issues, which is why I would like to not do a GET call for each transmission. HTTP is a requirement. I'm using an older Tomcat version (Servlet API 2.4). I guess this is somewhat of a web sockets use case, but I don't have any web sockets support available.
What I tried was to open a URL connection on the client side, and to open the input stream (otherwise the doGet() of the servlet never gets called). I'm passing an argument for initialization purposes to the client.
URLConnection uiConnection = url.openConnection();
uiConnection.setRequestProperty("Authorization",
        "Basic " + encode("xyz" + ":" + "xyz"));
uiConnection.setReadTimeout(0);
uiConnection.setDoOutput(true);
uiConnection.setAllowUserInteraction(true);
DataInputStream is = new DataInputStream(uiConnection.getInputStream());
When I later try to retrieve an output stream from this connection, I get a ProtocolException (cannot write output after reading input).
out = new BufferedWriter(new OutputStreamWriter(uiConnection.getOutputStream()));
out.write(uiUpdate);
On the servlet end I did something like this:
DataInputStream is = new DataInputStream(request.getInputStream());
Am I completely on the wrong track or is something like this possible without using a new connection for each transmission?
Thanks,
Mark
I think the key question here is: do you also have HTTP traffic going to this IP? If so, there may not be anything you can do using just Java. If not, then open a server socket that listens on port 80 and parse the incoming data directly, as sketched below.
http://download.oracle.com/javase/tutorial/networking/sockets/clientServer.html
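For example (a rough sketch; it listens on port 8080 rather than 80, since binding to 80 normally needs elevated privileges):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class RawHttpListener {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()))) {
                    // Print the request line and headers; a blank line marks the end of the headers.
                    String line;
                    while ((line = in.readLine()) != null && !line.isEmpty()) {
                        System.out.println(line);
                    }
                }
            }
        }
    }
}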
I'd like to fetch a web page: just fetching the data (not parsing or rendering anything), simply catching the data returned after an HTTP request.
I'm trying to do this using the high-level Socket class of the Java runtime library.
I wonder if this is possible, since I'm not at ease with the layers beneath this two-point communication, and I don't know whether the trouble is coming from my own system.
Here's what my code is doing:
1) setting the socket.
this.socket = new Socket( "www.example.com", 80 );
2) setting the appropriate streams used for this communication.
this.out = new PrintWriter( socket.getOutputStream(), true);
this.in = new BufferedReader( new InputStreamReader( socket.getInputStream() ) );
3) requesting the page (and this is where I'm not sure it's alright to do it like this).
String query = "";
query += "GET / HTTP/1.1\r\n";
query += "Host: www.example.com\r\n";
...
query += "\r\n";
this.out.print(query);
4) reading the result (nothing in my case).
System.out.print( this.in.readLine() );
5) closing socket and streams.
If you're on a *nix system, look into curl, which lets you retrieve information off the internet from the command line. It's more lightweight than a Java socket connection.
If you want to use Java, and are just retrieving information from a webpage, check out the Java URL library (java.net.URL). Some sample Java code:
URL ur = new URL("http://www.google.com"); // the protocol prefix is required
URLConnection conn = ur.openConnection();
InputStream is = conn.getInputStream();
String foo = new Scanner(is).useDelimiter("\\A").next();
System.out.println(foo);
That'll grab the specified URL, fetch the data (HTML in this case) and print it to the console. You might have to tweak the delimiter a bit, but this will work with most network endpoints that send data.
Your code looks pretty close; your GET request is probably malformed in some way. Try this: open up a telnet client, connect to a web server, paste in the GET request as you believe it should work, and see if that returns anything. If it doesn't, there is a problem with the GET request. The easiest thing to do at that point would be to write a program that listens on a socket (more or less the inverse of what you're doing), point a web browser to localhost:[correct port], and see what the web browser sends you. Use that as your template for the GET request.
Alternatively you could try and piece it together from the HTTP specification.
I had to add the full URL to the GET line to make it work, although I see you can also specify the Host header if you want.
Socket socket = new Socket("youtube.com", 80);
PrintWriter out = new PrintWriter(new BufferedWriter(
        new OutputStreamWriter(socket.getOutputStream())));
out.println("GET http://www.youtube.com/yts/img/favicon_48-vflVjB_Qk.png HTTP/1.0");
out.println();
out.flush();
Yes, it is possible. You just need to get the protocol right. You are close.
I would create a simple server socket that prints out whatever it receives. You can then use your browser to connect to the socket using a URL like http://localhost:8080, and then use your client socket to mimic the HTTP requests the browser sends.
Not sure why you're going lower than URLConnection - it's designed to do what you want to do: http://download.oracle.com/javase/tutorial/networking/urls/readingWriting.html.
The Java Tutorial on Sockets even says: "URLs and URLConnections provide a relatively high-level mechanism for accessing resources on the Internet. Sometimes your programs require lower-level network communication, for example, when you want to write a client-server application." Since you're not going lower than HTTP, I'm not sure what the point is of using a Socket.
I want to read the contents of a URL but don't want to "hang" if the URL is unresponsive. I've created a BufferedReader using the URL...
URL theURL = new URL(url);
URLConnection urlConn = theURL.openConnection();
urlConn.setDoOutput(true);
BufferedReader urlReader = new BufferedReader(new InputStreamReader(urlConn.getInputStream()));
...and then begun the loop to read the contents...
do
{
    buf = urlReader.readLine();
    if (buf != null)
    {
        resultBuffer.append(buf);
        resultBuffer.append("\n");
    }
}
while (buf != null);
...but if the read hangs then the application hangs.
Is there a way, without grinding the code down to the socket level, to "time out" the read if necessary?
I think URLConnection.setReadTimeout is what you are looking for.
If you have Java 1.4:
I assume the connection timeout (URLConnection.setConnectTimeout(int timeout)) is of no use because you are doing some kind of streaming.
Do not kill the thread: it may cause unknown problems, open descriptors, etc.
Spawn a java.util.TimerTask in which you check whether you have finished the process; if not, close the BufferedReader and the OutputStream of the URLConnection.
Insert a boolean flag isFinished, set it to false before the loop and to true at the end of the loop.
TimerTask ft = new TimerTask() {
    public void run() {
        if (!isFinished) {
            try {
                urlConn.getInputStream().close();
                urlConn.getOutputStream().close();
            } catch (IOException e) {
                // ignore: the goal is just to force the blocked read to fail
            }
        }
    }
};
(new Timer()).schedule(ft, timeout);
This will probably cause an IOException in the reading thread, so you have to catch it; the exception is not a bad thing in itself.
I'm omitting some declarations (i.e. finals) so the anonymous class can access your variables. If not, then create a POJO that maintains a reference and pass that to the TimerTask.
Since Java 1.5, it is possible to set the read timeout in milliseconds on the underlying socket via the 'setReadTimeout(int timeout)' method on the URLConnection class.
Note that there is also the 'setConnectTimeout(int timeout)' which will do the same thing for the initial connection to the remote server, so it is important to set that as well.
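Putting those two calls together, a minimal sketch (the URL and the 5-second values are placeholders):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class TimedUrlRead {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://www.example.com/");               // placeholder URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000); // fail if the connection cannot be established within 5 s
        conn.setReadTimeout(5000);    // fail if a read blocks for more than 5 s

        StringBuilder resultBuffer = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                resultBuffer.append(line).append("\n");
            }
            System.out.println(resultBuffer);
        } catch (SocketTimeoutException e) {
            System.err.println("Read timed out: " + e.getMessage());
        }
    }
}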
I have been working on this issue in a JVM 1.4 environment just recently. The stock answer is to use the system properties sun.net.client.defaultReadTimeout (read timeout) and/or sun.net.client.defaultConnectTimeout. These are documented at Networking Properties and can be set via the -D argument on the Java command line or via a System.setProperty method call.
Supposedly these are cached by the implementation, so you can't change them from one value to another; once they are used, the values are retained.
Also, they don't really work for SSL connections (HttpsURLConnection). There are other ways to deal with that using a custom SSLSocketFactory.
Again, all this applies to JVM 1.4.x. At 1.5 and above you have more methods available to you in the API (as noted by the other responders above).
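For completeness, the system-property route described above looks roughly like this (the millisecond values are placeholders; the same pair can also be passed as -D flags on the command line):
public class LegacyTimeoutDefaults {
    public static void main(String[] args) throws Exception {
        // JVM-wide defaults for the built-in HTTP protocol handler (pre-1.5 approach);
        // set them before the first connection is made, since the values are then retained.
        System.setProperty("sun.net.client.defaultConnectTimeout", "5000");
        System.setProperty("sun.net.client.defaultReadTimeout", "5000");

        // Any URLConnection opened after this point picks up the defaults.
        new java.net.URL("http://www.example.com/").openConnection().getInputStream().close();
    }
}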
For Java 1.4, you may use SimpleHttpConnectionManager.getConnectionWithTimeout(hostConf, CONNECTION_TIMEOUT) from Apache Commons HttpClient.