Java + Google Web Toolkit (Google App Engine)

I need to upload an image to the server where a SmartGWT web application is running. After trying this solution (Basic File upload in GWT - first answer) and creating an independent HTTP servlet with a mapping in web.xml, I am able to receive the uploaded file on the server side (in the linked solution, "out" is a ByteArrayOutputStream), so the file is in server RAM. The problem is how to save the file to the server's file system storage.
When I tried to create a FileOutputStream instead of the ByteArrayOutputStream, an exception was thrown saying it is a restricted class in Google App Engine.
Any ideas how to store a file on the server when this is restricted in GAE? Or how can I specify that I don't want the file-upload servlet to run under GAE? Thanks for any ideas.

You have no write permissions. You may, however, store the image as a blob or by using the distributed datastore.
From TFA:
Writing to local files is not supported in App Engine due to the distributed nature of your application. Instead, data which must be persisted should be stored in the distributed datastore. For more information see the documentation on the runtime sandbox.
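As a rough sketch of the datastore route using the low-level datastore API (the entity kind, property names and the fileName variable below are made up; note that a single datastore entity is limited to about 1 MB, so larger uploads belong in the Blobstore instead):

import com.google.appengine.api.datastore.Blob;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;

// Inside the upload servlet, once the bytes are in the
// ByteArrayOutputStream ("out") from the linked answer:
DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
Entity image = new Entity("UploadedImage");              // hypothetical kind name
image.setProperty("filename", fileName);                 // hypothetical variable from the upload
image.setProperty("data", new Blob(out.toByteArray()));  // the raw image bytes
datastore.put(image);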


Upload image in Spring MVC permanently in another location

I want to upload images in Spring MVC to the file system. I can do it, but when I redeploy the project all the images are removed from the directory. My question is: how do I upload images permanently? I want this for a real application.
You probably uploaded your images under your project directory/war. Every time you deploy/build your project/war, the images get deleted.
You need to save the image files outside of your project/war.
For a real application, I suggest you at least save/serve your uploaded images on a separate server. Amazon S3 is a pretty good one. You can just store the object/file name relative to the S3 base URL in your database. They have Java APIs for you to upload the files too.
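As a hedged illustration of that last point (the bucket name, key prefix, fileName and localPath variables are hypothetical, and this assumes the AWS SDK for Java is on the classpath with credentials already configured):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import java.io.File;

AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
// Store only the key ("uploads/" + fileName) in your database; the public
// URL is then the bucket's base URL plus this key.
s3.putObject("my-image-bucket", "uploads/" + fileName, new File(localPath));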
As mentioned, in order to persist these uploaded images it is best to save them outside of your project/WAR file. The reason, as you've already experienced, is that each time you redeploy your application you will lose anything (i.e. your uploaded images) that had been written into the previously deployed project/WAR.
The suggested solution of using an Amazon S3 bucket to store these images is a good one, and you could definitely accomplish what you want (having a URL such as www.example.com/upload/ showing all these images). With Spring MVC, within your controller you can set up a method and annotate it with @RequestMapping like so:
@Controller
@RequestMapping("/") // this maps to www.example.com
public class MainController {

    @RequestMapping("/upload") // this maps to www.example.com/upload
    public String showUploads() {
        return "redirect:http://pathToAmazonS3Bucket";
    }
}
In Spring MVC the redirect: prefix allows you to redirect to an absolute URL path. See the docs.
But as already mentioned, you still have to host your images somewhere outside of your project path/WAR file. Amazon S3 works, but since you asked for another solution, here is one.
You can save all the images to a folder on your own PC and then run Python 3 or Node as a simple HTTP server. This solution requires you to use your own computer and internet connection to host the images on the web. It assumes you are fine with leaving your PC running non-stop, or have an old one lying around to use as a web server. It also assumes your ISP is okay with you hosting a web server on your network. Lastly, you can obtain your own unique URL from numerous services online (some free, some for small fees), so people don't have to type in your IP address.
I am running a similar setup on my own network with a free domain name from No-IP.com.
Also, how do you plan to host your Spring web application on the web? Will you be doing this via an online hosting service or hosting it yourself? If hosting yourself, will you use Apache Tomcat, GlassFish, or another container/application server?

How to set up persistent storage for Eclipse/Tomcat for backup and streaming (not using a database)?

I am working on a Java web application project using servlets, Eclipse, and Tomcat.
I would like to be able to dynamically store/create persistent files from servlets and allow the user to access the files using a link, without storing the files in the database.
I have read that getServletContext().getRealPath("/") is volatile and gets reset every time the server is restarted.
I have also read that creating a directory like "$HOME/.ourapp" would solve this. However, I cannot seem to find how to set up Tomcat under Eclipse so that the user can access those files via a link.
Question: how do I set up Eclipse/Tomcat so that the link to the website "http://localhost/" and the file "http://localhost/temp-xx.txt" share the same root, while "temp-xx.txt" is generated dynamically by a servlet, remains accessible to the user, and does not get deleted when the server is restarted?
This gets complicated, because Tomcat can serve files using the DefaultServlet (it just sends files back to the client, exactly as you'd expect from a web server), but it caches files internally, so modifying the file system underneath it can produce some surprising behavior.
You can disable caching for the DefaultServlet, but I've seen reports that it still behaves in surprising ways. The only fool-proof solution I've seen is to write your own servlet that streams the files from wherever they are stored.
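A very rough sketch of such a servlet, assuming the files live under $HOME/.ourapp as mentioned in the question (Servlet 3.x API; it deliberately omits range requests, ETags and caching headers):

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/files/*")
public class FileServlet extends HttpServlet {
    private final File baseDir = new File(System.getProperty("user.home"), ".ourapp");

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String path = req.getPathInfo();
        if (path == null) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        File file = new File(baseDir, path).getCanonicalFile();
        // Reject path-traversal attempts and missing files.
        if (!file.getPath().startsWith(baseDir.getCanonicalPath()) || !file.isFile()) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        resp.setContentType(getServletContext().getMimeType(file.getName()));
        resp.setContentLengthLong(file.length());
        Files.copy(file.toPath(), resp.getOutputStream());
    }
}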
But a production-quality streaming servlet isn't as simple as the sketch above might suggest. If you want it to be high-performance, you'll want to support all the nice HTTP features like range requests, ETags, If-Modified-Since and everything else the DefaultServlet already provides. Perhaps you should start with the DefaultServlet and see how far it gets you.
The configuration is actually quite easy: add a <Resources> element to your META-INF/context.xml file with a nested <PostResources> entry pointing at the external directory. You can find the details in the Tomcat user's guide under Resources.
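A minimal sketch of that context.xml fragment for Tomcat 8+ (the mount point and directory below are placeholders):

<Context>
    <Resources>
        <PostResources className="org.apache.catalina.webresources.DirResourceSet"
                       base="/home/you/.ourapp"
                       webAppMount="/files" />
    </Resources>
</Context>

Files under /home/you/.ourapp are then served under the application's /files/* path, and they survive redeploys and restarts because they live outside the webapp itself.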

Java Application on Amazon Web Services with Mobile and Web Clients

Our new start-up company is trying to build a mobile app with an accompanying website. We are trying to set up our application on Amazon Web Services.
We have Java code running in an EC2 instance, which will store data in S3. We want clients (iOS and Web for now) to communicate with the Java Backend via a REST API. Ideally the website would be hosted under the same AWS account.
The Java code and REST API are already set up in a very basic form, but the setup of the website is unclear, since this is new to all of us. We would also like to evaluate beforehand whether such a setup is even feasible.
Since I am in charge of the website, I have already spent hours researching this specific setup, but I simply lack experience in cloud/backend development to come to a conclusion.
Here are some questions we would like to answer:
Where would the HTML files and accompanying JavaScript etc. files be stored?
How can data (images etc.) that is stored within S3 by the Java code be accessed from the website directly?
How could something like bootstrapping of data within HTML files be achieved (in JSON format preferably)?
How could the server be set up to compress certain files like CSS or JavaScript?
Please point me into the right direction, any comment is appreciated.
Where would the HTML files and accompanying JavaScript etc. files be stored?
Either on the same AWS EC2 box or a different one; just give it a static IP, point the domain you want at that IP, and you're done. Just remember to have port 80 open as a firewall rule.
How can data (images etc.) that is stored within S3 by the Java code be accessed from the website directly?
The files will have some URL that you can link to directly in your HTML, so it's essentially just a URL.
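If the objects are not public, one option (sketched here with the AWS SDK for Java; the bucket and key names are made up) is to hand the page a time-limited presigned URL instead:

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import java.net.URL;
import java.util.Date;

AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
Date expiration = new Date(System.currentTimeMillis() + 3600 * 1000); // valid for one hour
URL url = s3.generatePresignedUrl("my-bucket", "images/photo-123.jpg",
        expiration, HttpMethod.GET);
// Embed url.toString() in the HTML, or return it from the REST API.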
How could something like bootstrapping of data within HTML files be achieved (in JSON format preferably)?
You have a number of choices here. You could create some JSP files to generate the HTML, load them on access and cache them so they load up super fast. You could argue, however, that this is overkill, and in some ways the REST endpoint should be robust enough to handle the requests.
Part of me thinks you should endeavor to use the REST API for this information so you only have to manage one endpoint; why create an extra endpoint or an over-engineered solution for the HTML that you then have to maintain? Build once and reuse.
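If you do go the server-rendered route, a bare-bones sketch of bootstrapping JSON into a page on the servlet side might look like this (assuming Jackson is on the classpath; the attribute name, view path and initialData object are hypothetical):

import com.fasterxml.jackson.databind.ObjectMapper;

// Serialize whatever the page needs, stash it as a request attribute, and
// let the JSP print it inside a <script> tag, e.g.
// <script>window.initialData = ${initialDataJson};</script>
// (escape appropriately if the data can contain user input).
String json = new ObjectMapper().writeValueAsString(initialData);
request.setAttribute("initialDataJson", json);
request.getRequestDispatcher("/WEB-INF/index.jsp").forward(request, response);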
How could the server be set up to compress certain files like CSS or JavaScript?
During the build process, run the files through a minification step. This can be hooked into Maven so it happens automatically, or done by hand with something like JSCompress. The Minify plugin shows how to minify your resources automatically; bear in mind you would need to be using Maven as your build tool.

Back-end server for Play 2 framework app

I'm planning a web application where users will be able to upload and process their files. The specifics of the application are irrelevant to my questions, but let's assume that the application will deal with mp3 audio files. I'm going to split my application into two distinct parts: the front-end and the back-end.
The front-end application will be a usual web application serving HTML pages to users. Typically a user will upload his file and fill in an HTML form to specify which operations he would like to perform on the file. The files will initially be uploaded to a storage facility, such as Amazon S3, and later processed by a back-end server. I'm using the Play 2.0.4 framework to develop the front-end application and this is going very well for me. I managed to implement user authorization, drafted most of the UI and also implemented file upload to S3. The application is currently deployed on Heroku without any problems.
For my back-end server I'm considering using the Play 2 framework once again. The back-end server will receive a notification (HTTP request) from the front-end server about the creation of a new job. The job specification will include a link to the original user file in storage and arguments describing the job. The job should be added to a queue. Now the most important part is to delegate the actual processing job to a third-party program, which most certainly will be a compiled command-line utility, such as SoX in the case of audio processing, written by good people using a programming language of their choice. As far as I know it is possible to call an external program from Java, pass command-line arguments and collect the result. After processing is done, the back-end server will upload the processed file back to storage, and send a notification (HTTP request) to the front-end application, which will store a link to the processed file and display it to the user at some later time. To be able to use the command-line utility I'm going to deploy the back-end application to an Amazon EC2 instance with a Typesafe stack installation.
Here are some questions about this basic plan:
Is Play 2 a reasonable choice for the back-end, or should I look into alternatives? One of them seems to be CGI, which according to Wikipedia "is a standard method for web server software to delegate the generation of web content to executable files." Unfortunately I don't have any experience with that.
There shouldn't be any problem implementing a job queue with Play, should there?
Is it possible to install a command line utility on EC2 and call it from Play?
Should I expect any problems installing the Typesafe stack on EC2? This post briefly describes what I'm planning to do: https://www.assembla.com/spaces/bufferine/wiki/Typesafe_stack_on_Amazon_EC2
Assuming that in the future the application will grow, how would I split the jobs among multiple instances on EC2? Should I create a separate job-balancing application in between my front-end and back-end?
I would appreciate any advice! Thanks!
Note: I'm using the Java API for the Play 2 framework, since I'm not familiar with the Scala language.
You may consider Akka for the processing; it's built into Play 2. It will help you manage tasks easily, and can even save hardware resources if you use its advanced features. There is a Java API that should cover all your needs. And a separate back-end app isn't strictly necessary: if you need more power you can scale just as well with two identical instances. Play and Akka are stateless, so you can just add new instances to scale. To make it run on EC2, just use the play dist command.
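A bare-bones worker actor in the Java API might look like the following sketch (ProcessingJob is a hypothetical message class carrying the S3 link and the job arguments):

import akka.actor.UntypedActor;

public class JobWorker extends UntypedActor {
    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof ProcessingJob) {
            ProcessingJob job = (ProcessingJob) message;
            // download the job's source file from S3, run the command-line tool,
            // upload the result, then notify the front-end over HTTP
        } else {
            unhandled(message);
        }
    }
}

The handler for the front-end's HTTP notification would then simply tell() a ProcessingJob message to this actor (or to a router in front of a pool of them), so the actor's mailbox effectively gives you the job queue for free.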
And yes, you can install whatever you want on EC2 and call it from your app.
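For the "call a command-line utility from Java" part, a minimal sketch (the SoX arguments and paths are made up, and the surrounding method is assumed to declare IOException and InterruptedException):

ProcessBuilder pb = new ProcessBuilder(
        "sox", "/tmp/input.mp3", "/tmp/output.mp3", "reverb");
pb.redirectErrorStream(true);      // merge stderr into stdout for easier logging
Process process = pb.start();
int exitCode = process.waitFor();  // block until SoX finishes
if (exitCode != 0) {
    throw new RuntimeException("sox exited with code " + exitCode);
}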
You may like:
http://akka.io/
http://www.playframework.com/documentation/2.1.0/JavaAkka
http://www.playframework.com/documentation/2.1.0/ProductionDist
Also (but in Scala):
http://blog.greweb.fr/2013/01/playcli-play-iteratees-unix-pipe/
http://blog.greweb.fr/2012/11/play-framework-enumerator-outputstream/

How to provide access to an external resource (file) for a GlassFish web application?

I am a bit of a GlassFish beginner, so please forgive my ignorance on the subject.
Basically we are serving a game website, and to make the client downloadable from our web app we copy it into a directory within domain1. The problem with this is that when redeploying the web app the client downloadable is lost and we have to copy it across again.
I'd like to be able to store the client downloadable in some external location and have GlassFish provide access to it.
I could just hardcode the link into the web app, but then we would lose portability so that's the reason for having GlassFish handle it.
I could also put the client downloadable into our database but that seems like poor use of a database and could also result in poor database performance.
The third option I have found is to add a custom resource mapping from some name to the file location, and then provide a method in one of my beans to retrieve the file location. This seems like a lot of work just to have an external resource, I feel like there must be an easier way.
So what should I do?
With GlassFish you can define an alternate document root to serve files from outside the war. From the documentation:
Alternate Document Roots

An alternate document root (docroot) allows a web application to serve requests for certain resources from outside its own docroot, based on whether those requests match one (or more) of the URI patterns of the web application's alternate docroots.

To specify an alternate docroot for a web application or a virtual server, use the alternatedocroot_n property, where n is a positive integer that allows specification of more than one. This property can be a subelement of a sun-web-app element in the sun-web.xml file or a virtual server property. For more information about these elements, see sun-web-app in the Oracle GlassFish Server 3.0.1 Application Deployment Guide.
So you could configure something like this:
<property name="alternatedocroot_1" value="from=/ext/* dir=/path/to/ext"/>
Refer to the documentation for full details.
The link to your downloadables needn't be in the same application as the game servlets, right?
One solution would be to create a new "pseudo" application containing only a web.xml and your static file content. You would of course not deploy it in war form (well, only if you really want to) but just copy the files into the unpacked directory when you want to change content. I use a setup like this to serve a bunch of files from a Web app server I run.
At work, in an "enterprise" kind of environment, we do things differently. We have an Apache HTTPD server working as the front end. It forwards to the app server for stuff that needs to be done in Java, but any static content, as well as cookie management, SSL, load balancing and other "web server-y" stuff is done by HTTPD. This yields a bit of a performance advantage with heavily loaded sites and lots of big but static files. It also lets us split the work among different physical boxes, which again can help with performance.
