Space restriction for client-side caching - Java

I have a search results page that returns around 40 images, which I store in MongoHQ.
These images will never change; they will either be removed or left as is.
My Spring servlet streams each image after reading it from MongoHQ based on the image id:
/app/download/{uniqueImageId}
This all works, except for the time it takes to stream the images. Since these images remain constant for their unique ids, why not cache them? I can add a filter that applies to the above URL pattern and sets a caching header, which I plan to give a really long value, maybe caching the images for a week.
My question is: if I start telling the client's browser to cache all these 40+ images, will it cache all of them?
Aren't there any space restrictions on the client side?
Do you see any better option to handle this scenario?

My question is, if I start telling the client's browser to cache all these 40+ images, will it cache all of them? Aren't there any space restrictions on the client side?
Of course there are space restrictions on the client side (the storage space of the whole world is limited... uhm, sorry for that...). The user may restrict the cache size, and/or the browser simply uses whatever free space is available for caching.
Typically I would expect the browser cache to be at least some megabytes (let's say 100+), so frequently needed images, like icons transferred during a session, will be cached. Whether an image is still in the cache when the user visits your site three days later depends on the cache size and the user's activity in between. So you never know.
What the client or any intermediate proxies do is out of your direct control. The only thing you do by setting the caching headers is declare that it is legal not to refresh the resource for a while. Make sure you understand the HTTP/1.1 caching headers correctly if you set them in your application.
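A minimal sketch of computing such a header value, assuming the one-week lifetime mentioned in the question (the class and method names here are made up, and the servlet Filter wiring is omitted so the sketch stays free of the servlet-api dependency):

```java
import java.time.Duration;

public class CacheHeaders {
    // Builds a Cache-Control value for an image that never changes for a
    // given id. In a servlet Filter mapped to /app/download/*, you would
    // call response.setHeader("Cache-Control", cacheControlFor(...))
    // before streaming the image body.
    static String cacheControlFor(Duration maxAge) {
        return "public, max-age=" + maxAge.getSeconds();
    }
}
```

For a one-week lifetime, `cacheControlFor(Duration.ofDays(7))` yields `public, max-age=604800`.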
Do you see any better option to handle such scenario?
The term "better" isn't very exact here. What exactly do you need to optimize?
If you get a lot of requests for the same image set, you can reduce server and database load by putting an edge server, like nginx, configured as a caching reverse proxy in front of your application. In this case your own edge server interprets the caching headers. In general, I consider it good design if an application has no significant load from serving static resources.

Related

Most secure way to load sensitive information into protocol buffers

My application uses Google protocol buffers to send sensitive data between client and server instances. The network link is encrypted with SSL, so I'm not worried about eavesdroppers on the network. I am worried about the actual loading of sensitive data into the protobuf because of memory concerns explained in this SO question.
For example:
Login login = Login.newBuilder()
        .setPassword(password) // problem
        .build();
Is there no way to do this securely since protocol buffers are immutable?
Protobuf does not provide any option to use char[] instead of String. Quite the opposite: protobuf messages are intentionally designed to be fully immutable. This provides a different kind of security: you can share a single message instance between multiple sandboxed components of a program without worrying that one may modify the data in order to interfere with another.
In my personal opinion as a security engineer -- though others will disagree -- the "security" described in the SO question to which you link is security theater, not actually worth pursuing, for a number of reasons:
If an attacker can read your process's memory, you've already lost. Even if you overwrite the secret's memory before discarding it, if the attacker reads your memory at the right time, they'll find the password. But, worse, if an attacker is in a position to read your process's memory, they're probably in a position to do much worse things than extract temporary passwords: they can probably extract long-lived secrets (e.g. your server's TLS private key), overwrite parts of memory to change your app's behavior, access any and all resources to which your app has access, etc. This simply isn't a problem that can be meaningfully addressed by zeroing certain fields after use.
Realistically, there are too many ways that your secrets may be copied anyway, over which you have no control, making the whole exercise moot:
Even if you are careful, the garbage collector could have made copies of the secret while moving memory around, defeating the purpose. To avoid this you probably need to use a ByteBuffer backed by non-managed memory.
When reading the data into your process, it almost certainly passes through library code that doesn't overwrite its data in this way. For example, an InputStream may do internal buffering, and probably doesn't zero out its buffer afterwards.
The operating system may page your data out to swap space on disk at any time, and is not obliged to zero that data afterwards. So even if you zero out the memory, it may persist in swap. (Encrypting swap ensures that these secrets are effectively gone when the system shuts down, but doesn't necessarily protect against an attacker present on the local machine who is able to extract the swap encryption key out of the kernel.)
Etc.
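For completeness, the direct-buffer workaround mentioned in the first bullet might be sketched like this (a hypothetical SecretBuffer class, assuming an ASCII secret; as argued above, this buys little real security):

```java
import java.nio.ByteBuffer;

public class SecretBuffer {
    // Holds a secret in a direct (off-heap) buffer so the garbage collector
    // never copies it while compacting the heap, and zeros it explicitly
    // when done. Protobuf itself offers no such option.
    public static ByteBuffer load(char[] secret) {
        ByteBuffer buf = ByteBuffer.allocateDirect(secret.length);
        for (char c : secret) {
            buf.put((byte) c); // assumes an ASCII secret for simplicity
        }
        buf.flip();
        return buf;
    }

    // Overwrites the whole buffer with zeros.
    public static void wipe(ByteBuffer buf) {
        buf.clear();
        while (buf.hasRemaining()) {
            buf.put((byte) 0);
        }
    }
}
```

allocateDirect keeps the bytes off the managed heap, and wipe overwrites them when you are done; this addresses only the GC-copying bullet, not the other copies listed above.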
So, in my opinion, using mutable objects in Java specifically to be able to overwrite secrets in this way is not a useful strategy. These threats need to be addressed elsewhere.

Aside from RPC calls, what could be taking my App Engine Program so long

I'm trying to performance optimize a page load in GAE, and I'm a bit stumped what is taking so long to serve the page.
When I first got Appstats running, I found the page was making about 500-600 RPC calls. I've now got that down to 3.
However, I'm still seeing a massive amount of extra time in Appstats. Another page on my site (using the same Django framework and templating) loads in about 60ms, doing a small query against a small data set.
Question is, what is this overhead, and where should I be looking for trouble points?
The data in the request has about 350 records, with about 30 properties per record. I'm fine with the data call itself taking the datastore API time; it's the other time I'm confused about. The data does get stepped through a biggish iterator, and I've now used fetch on most of these requests to keep the RPC count down and to make sure things are in memory rather than being queried as they go.
[Screenshot: slow request, with a lot of extra (non-RPC) blue time]
[Screenshot: fast request, where the RPC (blue) time matches the overall time]
EDIT
OK, so I have created a new model called FastModel, and copied the bare minimum items needed for the page to it, so it can load as quickly as possible, and it does make a big difference. Seems there are things on the Model that slow it all down. Will investigate further.
Deserializing 350 records, especially large ones, takes a long time. That's probably what's taking up the bulk of your execution time.

Transferring large arrays from server to client in GWT

I'm attempting to transfer a large two-dimensional array (17955 × 3) from my server to the client using asynchronous RPC calls. This takes a very long time, which is especially bad because the data is needed in order to initialize the application. I've read that using a JSON object might be faster, but I'm not sure how to do the conversion in Java, as I'm pretty new to the language and to GWT, and I don't know if the speed difference is significant. I also read somewhere that I can zip the data, but I only saw that in a forum and couldn't find information about it elsewhere, so I'm not sure it's actually possible. Is there any way to transfer large amounts of data from server to client? Thanks for your time.
Read this article on adding JSON capabilities to GWT. Regarding compression, this article explains gzipping with GWT.
Also, even with the compression you may achieve with gzipping (which will vary depending on how much data is repeated in your array), your array is still very large. You may want to consider logically breaking the array up into multiple RPC calls if at all possible.
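Breaking the array up as suggested might look like this on the server side (a hypothetical helper; 1000 rows per chunk is an arbitrary figure to tune, and the chunks share the original row arrays rather than deep-copying them):

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedTransfer {
    // Splits a large table into row chunks so that each GWT-RPC response
    // stays small. The client requests chunk 0, 1, 2, ... in turn (or in
    // parallel) and reassembles them.
    static List<double[][]> chunkRows(double[][] table, int rowsPerChunk) {
        List<double[][]> chunks = new ArrayList<>();
        for (int start = 0; start < table.length; start += rowsPerChunk) {
            int end = Math.min(start + rowsPerChunk, table.length);
            double[][] chunk = new double[end - start][];
            // Copies row references only; the row data itself is shared.
            System.arraycopy(table, start, chunk, 0, end - start);
            chunks.add(chunk);
        }
        return chunks;
    }
}
```

For the 17955 × 3 array from the question, 1000-row chunks give 18 RPC-sized pieces, the last one holding 955 rows.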
I would recommend revisiting your design if your application needs such a large amount of data to initialize.
As others pointed out, you should reconsider your design, because even if you somehow solve the data-transfer speed issue, you will likely find other issues waiting for you:
Processing a large amount of data in the browser can be slow.
A lot of data means a lot of used-up memory.
What you can think about is:
Partitioning the data:
How is your user going to cope with a lot of data? Your user will probably need some kind of user-interface aid to be able to work with such a huge amount of data. If you are going to use paging, tabs, or other means to partition the data for the user's consumption, why not load the data on demand? For example, you can load a single page of records if you are using a paging grid, or a single tab's worth of records if you are going to use tabs. Similarly, if you are going to allow filtering of the records, you can set a default filter after the load to keep the data to a minimum.
Summarizing the data:
You can also summarize the data on the server if you are not going to show every row to the user. For example, you can initially show a summary for each group of records and let the user drill down into a specific group.

Retrieve multiple images from server quickly

For my BlackBerry application, I am using a single thread to retrieve images from the server one at a time. My application has a number of images and it takes too long to load all the images. How can I speed this up?
If these are static images, you can also do something like CSS sprites - stitch them all together into one big image, then in code you display the portion of the large image that corresponds to the original image you want.
The last two arguments to Graphics.drawImage(...) indicate where to start drawing from the original image, and that's how you would select the part you want.
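A small helper for the sprite approach, computing where a given sprite starts inside the stitched image (the names are illustrative; the resulting offsets would be passed as the source left/top arguments of drawImage described above):

```java
public class SpriteSheet {
    // Computes the source x/y offset of sprite #index inside a sheet of
    // equally sized sprites laid out in a grid with the given number of
    // columns. Returns {left, top} in pixels within the big image.
    static int[] sourceOffset(int index, int spriteWidth, int spriteHeight,
                              int columns) {
        int col = index % columns;
        int row = index / columns;
        return new int[] { col * spriteWidth, row * spriteHeight };
    }
}
```

For example, sprite 5 in a 4-column sheet of 32 × 32 sprites starts at (32, 32): column 1, row 1.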
Use multiple threads instead of one. Also, if this is a server that you control, consider pre-sizing the images for the target devices or having the device send its size to the server to generate and cache device specific images.
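A sketch of the multi-threaded approach in plain Java (the Fetcher interface is made up so the idea stays testable; on a BlackBerry device, where java.util.concurrent may not be available, you would start plain Thread objects instead):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelFetch {
    // Abstracts the actual download so the logic is testable; in the real
    // app this would open an HTTP connection and read the image bytes.
    interface Fetcher { byte[] fetch(String url) throws Exception; }

    // Downloads several images concurrently instead of one at a time.
    static List<byte[]> fetchAll(List<String> urls, Fetcher fetcher,
                                 int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<byte[]>> futures = new ArrayList<>();
            for (String url : urls) {
                futures.add(pool.submit(() -> fetcher.fetch(url)));
            }
            List<byte[]> results = new ArrayList<>();
            for (Future<byte[]> f : futures) {
                results.add(f.get()); // blocks; preserves request order
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```

Collecting the Futures in request order means the results come back in the same order as the URL list, even though the downloads overlap.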
It's a bit late, but sorry for that. I have used the Observer pattern for this.
Link: http://en.wikipedia.org/wiki/Observer_pattern
Thanks.
@Peter: Threads on a mobile phone are a bad idea. Firstly, threading on phones is poor; secondly, phones can't really handle more than one HTTP connection at a time, so things bomb out.
@Userbb: You can do sneaky things like streaming them over a socket connection, or including multiple images in a single HTTP request (creating a connection and sending HTTP headers both have overhead).
And also definitely do what @Peter suggested about resizing server-side.

On Android/Java, how many bytes has a connection downloaded?

An Android app I'm writing involves quite a lot of downloading of content (think podcatcher/RSS).
I would like to be able to give the user an indication of how many bytes they've downloaded, so they can make the decision whether they want to use Wifi or not.
To this end, I have found a way of counting the number of bytes read by the app, by wrapping an InputStream in a simple CountingInputStream.
However, this does not take into consideration basic things like packet headers and HTTP headers. More importantly, it does not take into consideration any compression that content may be encoded with.
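Such a counting wrapper might look like this (it counts only the decoded payload bytes, which is exactly the limitation described above):

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class CountingInputStream extends FilterInputStream {
    // Counts the payload bytes read through the wrapped stream. This sees
    // only the decoded entity body: not HTTP headers, not TCP/IP overhead,
    // and not the compressed on-the-wire size.
    private long count;

    public CountingInputStream(InputStream in) {
        super(in);
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        if (b != -1) count++;
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n > 0) count += n;
        return n;
    }

    public long getCount() {
        return count;
    }
}
```

Both read overloads must be overridden, otherwise bulk reads would bypass the counter.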
So, how many bytes did my application download over the network? I'm not so interested in the number of bytes uploaded, but if you know how, don't be shy.
I have gone for a fairly low-level approach, as I am feeding the input stream into an XML pull parser. I will also need to do a similar exercise when dumping bytes (images in this case) straight onto the SD card.
Is this the best approach? What am I missing?
Ufff... I think this is pretty much hidden by the underlying protocol layers, so you can't count the bytes used at the session or link layer, and operators like to charge even for control bytes that are not visible to the end user in any way. They also count traffic in both directions (your request to the server takes some bytes too). So the good question is: how do you measure the traffic (and money) needed to download that picture?
This isn't a direct answer, but you could try asking someone who has solved a similar problem before, e.g. a data counter application. I've used NetCounter by Cyril Jaquier (http://www.jaqpot.net/netcounter/), and he claims his software is open source. I couldn't get his download link to work, but there's a contact email address. If you got his source code, you should be able to use the same method as him.
As far as I know, there are two ways to count data traffic. One is /sys/class/net/{interface}/statistics, as used by the Android app NetCounter; the other is /proc/net/dev, which is used by the Android app wifi-tether. I don't know the difference between these two methods, nor which is better.
The number of bytes received by a particular app is stored in /proc/uid_stat/<uid>/tcp_rcv, where <uid> is the uid of your app on the particular device.
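A sketch of reading that counter (the path is passed in so the parsing is testable; not every kernel exposes this file, so callers should handle its absence, and newer Android releases also offer android.net.TrafficStats.getUidRxBytes(uid)):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class UidTraffic {
    // Parses one line of a /proc-style counter file: a single decimal
    // number of bytes, possibly surrounded by whitespace.
    static long parseCounter(String firstLine) {
        return Long.parseLong(firstLine.trim());
    }

    // Reads the cumulative received-byte counter for one app, e.g. from
    // /proc/uid_stat/<uid>/tcp_rcv. Throws IOException if the kernel does
    // not expose the file on this device.
    static long readCounter(String procPath) throws IOException {
        try (BufferedReader r = new BufferedReader(new FileReader(procPath))) {
            return parseCounter(r.readLine());
        }
    }
}
```

Unlike the InputStream-wrapping approach, this counter is maintained by the kernel, so it includes TCP payload as delivered over the wire for that uid.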
