I am currently trying to transfer a file from an Android device to a Java TCP server, but I can't find a good example of the structure I would need to implement this. There are many Java client/server examples that explain file transfer, but I want to make sure this will still work once an Android device is thrown into the mix.
My question is: how do I implement this sort of structure? And if it doesn't work, would I be better off sending the file over an HTTP connection to a PHP server? I see a lot of examples and documentation online for the latter method, so I presume it is more reliable. I would, however, prefer to use a Java server.
The file consists of a large set of coordinates recorded by the Android device, which will then be sent to the server. I have not yet decided how I will store this data, but I was originally going to store it in a plain text file.
Design
The first thing you need is something to allow you to run Java code on your server.
There are a number of options. Two of the most popular technologies are Glassfish and Apache Tomcat.
Crudely speaking, Apache Tomcat is sufficient for simple client-server communication, while Glassfish is used if you need to do more complex things. Both allow servlets (which are essentially self-contained server classes written in Java) to run on the server side.
They handle communication with the client inside a JVM (Java Virtual Machine): the container dispatches each incoming request to a servlet, which can do some processing if required before sending a response back to the client. Each request is handled on its own worker thread managed by the container, which makes dealing with multiple concurrent requests simpler (no need to write your own threading code).
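For illustration, here is a minimal sketch of what such a servlet might look like: it accepts a file POSTed by the client and stores it on the server. The class name, target path, and URL mapping are placeholders of my own, not anything prescribed.

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet: receives a file POSTed as the raw request body
// and saves it server-side. Map it to a URL (e.g. /upload) in web.xml.
public class UploadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        try (InputStream in = req.getInputStream()) {
            // Store the body under a fixed name; a real service would
            // validate the input and generate a unique file name.
            Files.copy(in, Paths.get("/tmp/coordinates.txt"),
                    StandardCopyOption.REPLACE_EXISTING);
        }
        resp.setStatus(HttpServletResponse.SC_OK);
    }
}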
Networking (sending data to and from the server)
In networking situations the client can be a PC, an Android phone, or any other device capable of connecting to the internet. As far as the server is concerned, if the client can communicate using HTTP (a standard protocol it understands), then it doesn't care what sort of device it is. This means that solutions for desktop client-server applications are much the same as those for a phone.
You can use a library such as Apache HttpComponents to make it easier to handle HTTP requests and responses between the device and the server. You could, of course, write your own classes to do this using sockets, but that would be very time-consuming, particularly if you have never done it before.
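As a rough sketch, assuming Apache HttpClient 4.x with the httpmime module (on Android you would bundle these yourself, since the platform's built-in HttpClient is older), posting the coordinates file could look like this; the URL and form field name are placeholders:

import java.io.File;
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.mime.MultipartEntityBuilder;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

// Hypothetical client-side upload: POST the file as a multipart form.
public class FileUploader {
    public static void main(String[] args) throws Exception {
        CloseableHttpClient client = HttpClients.createDefault();
        HttpPost post = new HttpPost("http://example.com/upload"); // placeholder URL
        post.setEntity(MultipartEntityBuilder.create()
                .addBinaryBody("file", new File("coordinates.txt"))
                .build());
        HttpResponse response = client.execute(post);
        System.out.println(response.getStatusLine()); // e.g. HTTP/1.1 200 OK
        client.close();
    }
}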
Storage of Data
If you have time I would recommend implementing some sort of database to store the information.
Databases have a number of benefits, such as data recovery mechanisms, indexing for fast searches, data integrity guarantees, better structuring of data, and so on.
If you decide to use a database, I recommend MySQL. It is free and, more importantly, well documented.
Aside: JDBC can be used to communicate with the database from Java.
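For example, a minimal sketch of inserting one coordinate pair into MySQL via JDBC; the JDBC URL, credentials, and table layout are assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Hypothetical example: store one coordinate pair in a MySQL table.
public class CoordinateStore {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/tracking", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO coordinates (latitude, longitude) VALUES (?, ?)")) {
            ps.setDouble(1, 51.5074);   // example latitude
            ps.setDouble(2, -0.1278);   // example longitude
            ps.executeUpdate();
        }
    }
}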
Source: Personal experience from implementing a similar design.
Related
I have a central server, to which many distributed servers need to transmit data in the form of somewhat large files, 500MB - 10GB+. The servers are not on the same physical network and can't be connected to one another via a VPN. While we're trying to get other ports opened, currently we can only talk over 443, HTTPS, which is great for our REST services but terrible for file transfer between servers.
I know this isn't as specific a question as one would like for Stack Overflow, but I would like to know: what methods might work better than the ones I've tried?
Server A -> generate file -> transfer over https -> DMZ -> proxy pass -> receive at Server B
Both servers use Java 1.8, Tomcat, and Spring 4.1.4.RELEASE. The DMZ is just Apache and pretty much out of our control.
Things I've tried...
Make RPC calls to a service using Spring's HttpInvokerProxyFactoryBean (this works fine for smaller sites, but the larger sites often drop connections while transferring data)
Multipart form POST using Apache HttpPost (this also works, but we have to configure file limits in Apache/Tomcat, and its connection is unreliable as well)
Using a library called RMIIO, which basically simulates RMI over HTTP if configured properly. This seemed promising, as it requests a stream from the server and writes to that stream from the remote server. I haven't really gotten this to work over HTTPS yet; the library was written in 2007 (with some updates up through July 2016), but it feels dated and not highly maintained, and I suspect there are better ways to do this sort of thing nowadays (not that I can find them)
Looked at gRPC, but realized it's just a binary protocol and I'd basically have to handle chunking the files myself if I wanted a streaming effect (roughly the loop sketched just after this list)
Read an article about developing non-blocking REST services with Spring MVC (http://callistaenterprise.se/blogg/teknik/2014/04/22/c10k-developing-non-blocking-rest-services-with-spring-mvc/), which again looked interesting if we were receiving a lot of files at the same time, but I don't see how it helps with a single file transfer
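To make the chunking point concrete, this is roughly the kind of loop I mean (the transport call is hypothetical):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

// Rough sketch of manual chunking: read the file in fixed-size pieces
// and hand each piece, with its index, to some transport call.
public class Chunker {
    static final int CHUNK_SIZE = 1024 * 1024; // 1 MB

    public static void main(String[] args) throws IOException {
        try (InputStream in = new FileInputStream(args[0])) {
            byte[] buf = new byte[CHUNK_SIZE];
            int read, index = 0;
            while ((read = in.read(buf)) != -1) {
                sendChunk(index++, buf, read); // hypothetical transport call
            }
        }
    }

    static void sendChunk(int index, byte[] data, int length) {
        // e.g. a gRPC stream write, or an HTTP PUT with a Content-Range header
    }
}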
I've looked at a lot of other things and tried a few more, but it all seems wrong. When I read about big data and Spark streams or any of the million streaming options out there, I feel like there should be something similar for transferring a single file from one server to another without a broker in the middle. Maybe there is, just not over HTTPS.
It would be nice to know the progress of the transfer (on both ends) and be able to recover should there be connectivity issues or transfer errors.
But any direction or thoughts would be immensely helpful. Thanks for your time and input.
I need to secure the connection between my primary Java app and my MySQL server. Right now I have a class in my primary Java app with the info about my SQL server (login details: user, password, schema, etc.).
I tried obfuscating that class, but that didn't work. Then I heard about calling an external Java app that holds the connection info and retrieving that info securely.
How can I execute such a thing?
import java.io.InputStream;

// A bare .jar is not directly executable, so launch it through "java -jar"
Runtime run = Runtime.getRuntime();
Process pr = run.exec(new String[] { "java", "-jar", "yourprogram.jar" });
pr.getInputStream().close();           // not reading the child's stdout here
InputStream eos = pr.getErrorStream(); // keep the error stream for diagnostics
and you can use a file to pass your info to the jar application
When dealing with a client/server style application, all the business logic, including the persistence layer, should be maintained on the server side.
That is, the client connects to some server process and makes requests. It should never care about how the data is managed or stored; it just cares about getting and manipulating the data. This also means that you centralise the business logic associated with that data, so that should it change, you are less likely to need to change the client.
This also means that all the access information for the database never leaves the domain of the server.
Now the question is, how do you achieve this? The answer will come down to exactly what it is you want to achieve and the means by which you want to achieve it. I would also add that the client should authenticate with the server first, meaning the user must enter a user name and password in order to access the data (unless it's a publicly accessible API, in which case you probably don't care).
You could use
RMI. This would allow you to expose server objects that the client could interact with. This is good if you wish to send objects from the server to the client. It allows the client to interact with Java objects as if they were local objects.
From a coding point of view, this is a (relatively) simple solution, as you are dealing with Java objects. The problem, though, is that only Java clients (with the right libraries) will be able to access the server.
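A minimal sketch of the RMI approach might look like this; the interface and names are made up for illustration:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical remote interface the client would call.
interface DataService extends Remote {
    String fetchData(String query) throws RemoteException;
}

// Server side: implement the interface and register it with an RMI registry.
public class DataServer implements DataService {
    public String fetchData(String query) {
        return "result for " + query; // stand-in for real data access
    }

    public static void main(String[] args) throws Exception {
        DataService stub = (DataService)
                UnicastRemoteObject.exportObject(new DataServer(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("DataService", stub);
    }
}

The client would look the service up via LocateRegistry.getRegistry(host) and call fetchData as if it were a local method.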
You could use
Plain Sockets. This will allow you to connect to a service on the server and communicate with it.
You can even serialize objects between the client and server, allowing the application to deal with Java objects as well.
This is also a much more difficult approach, as you become responsible for dealing with the low level protocol and error handling (which RMI takes care of for you).
This approach does, however, give other clients the opportunity to connect to your server (so long as you don't rely on plain Java object serialization as the wire format ;)).
This is a lot of work...
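For flavour, a bare-bones sketch of the socket approach with a made-up line-based protocol (a real server would need threading and proper error handling):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical protocol: one request per line, one reply per line.
public class LineServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String request = in.readLine();
                    out.println("echo: " + request); // stand-in for real handling
                }
            }
        }
    }
}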
You could use
Some kind of web service (servlets under Tomcat, for example, or even a full J2EE server) that would use simple HTTP requests against a list of available services/functions and return something like a JSON or XML response, which the client would then need to parse.
This is, by far, the most open and probably the most common solution. It would take some work to get running, but it is far less involved than using something like sockets and is also the most flexible, as you wouldn't need to release new libraries each time you want to change or update a service.
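A minimal sketch of such a service, returning a hard-coded JSON payload (the class name, mapping, and payload are placeholders):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical read-only endpoint: any client that speaks HTTP and JSON
// can consume it, regardless of language.
public class DataServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("application/json");
        resp.getWriter().write("{\"status\":\"ok\",\"value\":42}");
    }
}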
All of these approaches allow you to secure the connection over the wire through SSL; you just need to establish the correct kind of connection from the client to the server, so you've got an added level of security.
Each hides the database access behind a server layer, adding additional protection to the database.
I'm looking to establish low-latency, two-way communication between a JavaScript interface (client) and a Java server.
The client has to request data from the server (it can ask for different sets of data, needs to be async, and the data are small sets of sensor data).
I was thinking of implementing this using WebSockets because of their low latency. However, I'm stuck choosing a Java WebSocket server implementation (I found Jetty, but there are so many). There is also a case to be made for Node.js and socket.io, but there are not going to be a lot of clients in this case, just one client sending multiple requests, so correct me if I'm wrong, but there doesn't seem to be a reason to go down the Node.js path.
Last but not least, the server is running on a Raspberry Pi and receives its sensor data over a special protocol (but I don't think that's important for this question).
Is there anyone with some experience in this field who wants to share his/her thoughts? Thanks.
I've been using Kaazing (HTML5 edition) to proxy traffic received via a WebSocket to a Java process listening on a traditional TCP server socket.
It's working well; latency is low, and it consistently handled over 1000 messages/second (though we found our Java code was the limiting factor in that respect).
Kaazing also provides client APIs for Java, JavaScript and Flex, which allowed us to write an acceptance test suite using the familiar APIs (Concordion in my case).
I don't know how well it'd run on a Raspberry Pi, but given it's free to download there's a simple way to find out.
I've solved my problem by using Atmosphere, a framework that provides compatibility with all major Java servers and web browsers. (The official Java standard is still in the works.)
https://github.com/Atmosphere/atmosphere
I've got the chat demo up and running.
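For comparison, a minimal echo endpoint under the standard javax.websocket (JSR 356) API might look roughly like this; the path is a placeholder, and you need a container that supports the API:

import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Minimal echo endpoint under the javax.websocket (JSR 356) API.
@ServerEndpoint("/sensors")
public class SensorEndpoint {
    @OnMessage
    public String onMessage(String message, Session session) {
        // The return value is sent back to the client as the reply.
        return "received: " + message;
    }
}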
I have an SQLite database on a web server. I would like to access the database from a typical Java desktop application. Presently, I'm doing this:
Download the database file to a local directory and perform the queries as necessary.
But I'm unable to perform any update queries this way. How can I do this [on the actual database]?
Another question: is it possible to directly access the database on the web from Java, making queries, updates, and so on? How can I achieve that?
I've written code for connecting Java to SQLite, and it works fine if the db file is in a local directory. What changes do I have to make to work with the file on the web server without having to download it?
Thanks in advance...
CL. is right in saying that if you need to access a web database from desktop applications, SQLite is not an appropriate choice.
Using SQLite is fine for small web sites and applications where your data has to be accessed from, and only from, the web site itself; but if you need to access your data from, say, your desktop without downloading the data file, you can't achieve that with SQLite and HTTP.
An appropriate choice for your web application would be MySQL or another client/server database, so that you could connect to the database service from any place other than your web application, provided the server access rules permit it (e.g. firewalls, granted authentication, etc.).
In your usage scenario, you would be facing several orders of problems.
1) Security
You would be forced to violate the safety principle that database files must be protected from direct web exposure; to access your web SQLite database file from your desktop, you would have to expose it directly, and this is wrong, as anyone would be able to download it and access your data, which by definition should be accessible only by you.
2) Updatability without downloading
Using HTTP to access the database file can only lead to downloading the requested resource, because HTTP is a stateless protocol: when you request GET or even POST access to the database, the web server hands the file to you in one go, full stop.
In short, there is no way to directly write changes back to the database file.
3) Updatability with downloading
You could download your file with an HTTP GET request, read the data, make changes and so on, but what if the online database changes in the meantime? Data consistency would easily be compromised.
There could be a way
If you give up using HTTP for your desktop application's access to the database, you could pick FTP instead (provided you have access credentials for the resource).
FTP lets you read data from and write data to files, so on Linux you could use FUSE to mount a remote FTP share and access it just as if it were part of your local file system (see this article, for example).
In short, you:
Create a mount point (i.e. a local directory) for the FTP share
Use curlftpfs to link the remote FTP resource to your mount point
Access this directory from your application as if it were a conventional directory
This way you could preserve security, keeping the database file from being exposed on the web, and you would be able to access it from your desktop application.
That said, please consider that concurrent access to a single database file by several processes (desktop app + web server instance) could lead to problems (see this SO post to get an idea). Keep that in mind before architecting your solution.
Finally, in your usage scenario my suggestion is to write some server-side web service or REST interface that, under authentication, lets you interact with the database file and perform the key operations you need.
It is safe, reliable and "plastic" enough to let you do whatever you want.
EDIT:
MySQL is widely used for web sites and web applications, as it is fast, quite scalable, and reasonably reliable. Activating a MySQL server is a little bit off-topic on Stack Overflow and quite long-winded to cover here, but you can google around to find plenty of articles discussing the topic for your operating system of choice.
Then use the MySQL JDBC driver to access the database from your Java desktop application.
If your idea is to stick with SQLite, though, you could basically prepare four web endpoints:
http://yourwebsite.com/select
http://yourwebsite.com/insert
http://yourwebsite.com/update
http://yourwebsite.com/delete
(Notice I specified "http", but you should consider moving to an SSL-encrypted HTTP connection, a.k.a. "https"; find details here and here. I don't know which web server you are running, but googling a little should point you to a good resource for properly configuring https.)
Obviously you could add any endpoint you like for any kind of operation, even a more generic execute, but play my game just for a while.
Requests towards those endpoints are POST requests, and every endpoint receives appropriate parameters, such as:
table name
fields
where clause
... and the like. But most important is security, so you have to remember two things:
1. Sign every request. You could achieve this by defining a secret operation key (a string known to your client and your server but which never travels in clear text) and using it in a hashing function to produce a digest, sent together with the other parameters as incontrovertible proof for the server that the request comes from a genuine source. This saves you from sending the username and password in every request, which would introduce the problem of password encryption if you don't use https, and it requires that the server be able to reconstruct the same signature for the same request using the same algorithm. (I flew over this thing at 400 mph, but the topic is too large to be treated properly here; anyway, I hope this points you in the right direction. A minimal sketch follows point 2.)
2. Properly escape request parameters. Some call it "sanitizing" parameters, and I think the metaphor is correct. Generally speaking this process involves some filtering performed by the server's endpoint, but it basically boils down to "use prepared statements for your queries". If you don't, it's likely that some malicious attacker will inject SQL code into requests to exploit your server.
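A minimal sketch of the signing idea from point 1, assuming an HMAC-SHA256 digest over the concatenated parameters (the key handling and parameter layout are simplified for illustration):

import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Hypothetical request signing: client and server share SECRET and both
// compute an HMAC over the request parameters; the server rejects any
// request whose signature does not match its own computation.
public class RequestSigner {
    private static final String SECRET = "change-me"; // shared, never sent over the wire

    public static String sign(String params) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(
                SECRET.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] digest = mac.doFinal(params.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString(); // sent along with the request parameters
    }
}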
SQLite is an embedded database and assumes that the database file is directly accessible. Your application is not an appropriate use of SQLite; you should use a client/server database.
In any case, you should never make a database directly accessible on the internet; the data should go through a web service.
I have a situation where I want a Java client to have a two-way data channel with a servlet (I have control over both), so that either side can begin transferring data without having to wait for the other to do something first; to get through firewalls, this needs to be tunnelled in HTTP or HTTPS.
I have looked around, but I do not believe I know the right terms for asking Google.
I was originally looking at HTTP-tunneling modules, but realizing that I have a web container at the other end, I believe the appropriate way is to think of it as a fat client needing to communicate home. I was thinking that the persistent connections in HTTP 1.1 might be very useful here; I can easily do heartbeat transfers to keep the connection from idling.
At this point in time I just need to do a proof of concept so I primarily need something that works now, which can then be optimized or even replaced later.
So, I'd appreciate pointers to projects that allow me to have a connection where either side can push information at will (like a serialized object or a descriptive stream of bytes) to the other side. I'd prefer pure Java, if at all possible.
EDIT: Thanks for the pointers. It appears that what I need, will be available in the servlet 3.0 specification, which I might end up using in the long term depending on when it will be supported in the various web containers.
For now I am investigating the Cometd package, which appears to be able to do exactly what I need for my prototype.
Search terms: comet, long-polling
These are mostly used in an AJAX context, but I see no reason why you could not use them in a Java project.
Please take a look at Eclipse Net4J,
http://wiki.eclipse.org/Net4j
It supports all the features you mentioned. An especially nice feature is that it supports HTTP connection pooling, so you can have lots of channels between client and server while using only a few HTTP connections.
The only problem is that it doesn't have documentation at all. You just have to read the source code. Once you figure it out, it's very easy to use.
There are a few more diagrams on the old Net4J site,
http://net4j.berlios.de/
How fast does it need to be? You could always just do polling on the client. Just check for new messages every so often.
You can use the Hessian protocol over HTTP. It's a fast binary protocol for serializing data. Typically used for a web-services style RPC communication, but there's no reason it couldn't be 2-way - see Hessian mux. It's pure Java, too :-)
Generally this is done by having the server not respond to an HTTP request immediately: it waits for some update (or a timeout) before sending a response. Obviously some care is needed to ensure the server will handle this under load. (A minimal sketch follows below.)
See, for instance, Comet.
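A minimal sketch of that idea as a blocking servlet (purely illustrative: the queue stands in for whatever produces updates, and holding a thread per request is exactly the load concern mentioned above, which Servlet 3.0 async processing addresses):

import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical long-polling servlet: each GET blocks until an update
// arrives or 30 seconds pass; the client re-issues the request on return.
public class LongPollServlet extends HttpServlet {
    // Elsewhere in the application, a producer calls updates.offer(...).
    private static final BlockingQueue<String> updates = new LinkedBlockingQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String update = null;
        try {
            update = updates.poll(30, TimeUnit.SECONDS); // wait for data or time out
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        resp.getWriter().write(update != null ? update : ""); // empty body = timeout
    }
}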