Looking for the best way to read images from MongoDB - Java

I'll provide a link to the website so you can better understand what I'm trying to achieve.
If you follow the link and then go to Adventure Holidays > Summer Camps, you land on a page where the site shows you a random document from the collection. So far there are just two documents, but you get the idea. So, I want to show a different image for each document the user gets.
If the user gets Kieve Summer Camp, show them an image of Kieve Camp, and so on. But I'm stuck.
Is it a better option to create a Photo collection, connect that collection with every other collection, and then compare the IDs from the photo collection with the documents in the other collections? Or is it better to add a photo field to each document and fetch the image the way I'm now fetching the title, description, etc.?

It depends on your needs. The sentence with "collection" five times in it is very confusing. I think your question is whether you should embed or reference the images. In my opinion it would be best to store the images file-based and keep just a link in MongoDB. MongoDB is not a file store and is limited to 16 MB per document. I don't know how big your images are, but file-based storage would be the best solution.
So if you stick with your solution, I would store each image as its own document, maybe with metadata, in a collection just for images.
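A minimal sketch of the file-based approach this answer recommends: write the image to disk and store only its path in the document's photo field. The `imageRoot` path, the `.jpg` extension, and the field name are illustrative, not from the question; for files larger than the 16 MB document limit MongoDB also offers GridFS.

```java
import java.io.IOException;
import java.nio.file.*;

/** Sketch: keep image bytes on disk, store only the relative path in MongoDB.
    Names (imageRoot, documentId) are illustrative. */
public class ImageStore {
    private final Path imageRoot;

    ImageStore(Path imageRoot) {
        this.imageRoot = imageRoot;
    }

    /** Saves the image and returns the relative path to put in the document,
        e.g. {"title": "Kieve Summer Camp", "photo": "kieve.jpg"}. */
    String save(String documentId, byte[] imageBytes) throws IOException {
        Files.createDirectories(imageRoot);
        Path target = imageRoot.resolve(documentId + ".jpg");
        Files.write(target, imageBytes);
        return imageRoot.relativize(target).toString();
    }
}
```

Reading the image back is then just a file read using the path stored in the document.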

Android: Storage Options

I am creating an application that allows the user to store information about food recipes that they can share on social media. I have an activity that allows the user to upload an image, write ingredients and directions for the recipe. In the past I have worked with shared preferences for saving user information, however, I know that the information stored is unordered. So I want to know what type of storage I should use to achieve this outcome....
My activity that saves user data:
From this I want to load the information into a previous activity's list view, which will have this type of list element layout:
From this, what type of storage approach should I take? If I use shared preferences, can I just place the parts I need into the list elements manually, for example by extracting the image saved by the user and placing it inside the image section of the list view? Or will the limitations of shared preferences get in the way, in which case should I maybe use internal storage? What would be the best approach?
Form a structured JSON object out of your data and you can use Realm. It is a new and easy alternative to SQLite.
It is very fast, and easy to learn and integrate into your app.
It is also scalable: if you later decide to send the same data to your backend, it will be easy for you to handle large amounts of data.

Design Pattern for storing most requested images in a map object

I have a website that lets users generate an image. I then provide an embed link which they can paste on their blog/website. The link consists of a simple HTML img element which requests and returns the image from my web app to their website.
Currently, I'm reading the image from the file system and returning it via the response output stream.
My question is: is there a better, more efficient way of doing this? I would mostly like to keep the top 10 images in memory for faster access.
I currently have a singleton object that stores some data during app startup. My idea was to create a Map/List object and then store the image bytes in there. My images have unique names so that should make it a bit simpler.
I'd imagine I'd need to store the image name, the time it was last accessed, and how frequently it is accessed, and then evict the images that were accessed longest ago or least often.
I'd rather not re-invent the wheel if there is already a design pattern for this. Anyone ever implement something similar? Any general idea of what is the best way to implement this would be helpful.
I use Tomcat 7 and Java 7.
What you're looking for is a caching library.
Check out:
Guava
Commons JCS
EHCache
Have you already taken a look at the Guava libraries from Google? You can find a LoadingCache there, which may do exactly what you want. Look at the CachesExplained wiki page.
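As a rough illustration of what these caching libraries do, here is a minimal LRU sketch using only the JDK's `LinkedHashMap` in access order, which keeps the N most recently used images in memory. Guava's LoadingCache or EHCache add loading, expiry, and statistics on top of this idea.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Minimal LRU cache for image bytes, JDK only.
    Capacity of 10 matches the "top 10 images" requirement above. */
public class ImageCache extends LinkedHashMap<String, byte[]> {
    private final int maxEntries;

    public ImageCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true -> iteration order is LRU
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
        // Called after every put(); evict the least recently used image.
        return size() > maxEntries;
    }
}
```

Since image names are unique, they work directly as keys. Note that a plain map is not thread-safe under concurrent servlet requests; the libraries above handle that for you.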

Parsing a web page which changes in real time in Java

Here's what I want to do. I'm quite a beginner at this, so maybe it's a lame question, but I want to implement a GUI application in Java which gets data from sports livescore pages,
e.g.
http://www.futbol24.com/Live/
http://livescore.com/
and parses it (somehow) in my app. Then I'll be able to store it in, for example, a JTable, save full-time results in a database, play a sound after a goal is scored, and so on.
What is the best way to do this ?
It would be almost impossible to parse an HTML document from a live web page and get specific information from it. If you did manage to work out exactly where in the document the data is, the page structure could change at any time. The scores might not even be in the HTML - they could be fetched by Javascript in the page.
I suggest you find an RSS feed of the information you want. Then you'll only have a nice, small piece of XML to parse. That's what it's for.
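If an RSS feed is available, the JDK's built-in XML parser is enough; no extra libraries are needed. A sketch, assuming a made-up feed structure (a real scores feed would have its own item fields):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

/** Sketch: extract the <title> of every <item> from an RSS feed. */
public class RssScores {
    static List<String> itemTitles(String rssXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                        rssXml.getBytes(StandardCharsets.UTF_8)));
        NodeList items = doc.getElementsByTagName("item");
        List<String> titles = new ArrayList<>();
        for (int i = 0; i < items.getLength(); i++) {
            Element item = (Element) items.item(i);
            titles.add(item.getElementsByTagName("title")
                          .item(0).getTextContent());
        }
        return titles;
    }
}
```

In a real app you would fetch the feed over HTTP on a timer and compare the new items against the previous ones to detect goals.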

File-based Document Storage in android

I'm in the early stages of a note-taking application for android and I'm hoping that somebody can point me to a nice solution for storing the note data.
Ideally, I'm looking to have a solution where:
Each note document is a separate file (for dropbox syncing)
A note can be composed of multiple pages
Note pages can have binary data (such as images)
A single page can be loaded without having to parse the entire document into memory
Thread-safety: Multiple reads/writes can occur at the same time.
XML is out (at least for the entire file), since I don't have a good way to extract a single page at a time. I considered using zip files, but (especially when compressed) I think they'd be stuck loading the entire file as well.
It seems like there should be a Java library out there that does this, but my google-fu is failing me. The only other alternative I can think of is to make a separate sqlite database for every note.
Does anybody know of a good solution to this problem? Thanks!
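One note on the zip idea: `java.util.zip.ZipFile` reads the archive's central directory and can stream a single entry without inflating the rest, so a zip-of-pages format would not have to load the whole note into memory. A sketch (entry names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

/** Sketch: read one page out of a note stored as a zip archive,
    without decompressing the other pages. */
public class NoteArchive {
    static byte[] readPage(Path noteFile, String pageName) throws IOException {
        try (ZipFile zip = new ZipFile(noteFile.toFile())) {
            ZipEntry entry = zip.getEntry(pageName); // e.g. "page-3.xml"
            if (entry == null) {
                throw new IOException("No such page: " + pageName);
            }
            return zip.getInputStream(entry).readAllBytes();
        }
    }
}
```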
Seems like a relational database would work here. You just need to play around with the schema a little.
Maybe make a Pages table with each page including, say, a field for the document it belongs to and a field for its order in the document. Pages could also have a field for binary data, which might be contained in another table. If the document itself has additional data, maybe you have a table for documents too.
I haven't used SQLite transactions on an Android device, but it seems like that would be a good way to address thread safety.
I would recommend using SQLite to store the documents. Ultimately, it'll be easier than trying to deal with file I/O every time you access a note. Then, when somebody wants to upload to Dropbox, you generate the file on the fly and upload it.
It would make sense to have a Notes table and a Pages table, at least. That way you can load each page individually, and a note is just a collection of pages anyway. Additionally, you can store images as BLOBs in the database for a particular page.
Basically, if you only want one type of content per page, then in the Pages table you would have something like an id column and a content column. Alternatively, if you wanted to support something more complex, such as multiple types of content, you would need to make your pages a collection of something else, like "entities."
IMO, a relational database is going to be the easiest way to accomplish your requirement of reading from particular pages without having to load the entire file.
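A sketch of the two-table layout both answers describe, as the CREATE statements you would run from an Android SQLiteOpenHelper (table and column names are illustrative, not from the question):

```java
/** Sketch: one row per note, one row per page, pages ordered within
    their note and able to hold either text or binary image data. */
public class NoteSchema {
    static final String CREATE_NOTES =
        "CREATE TABLE notes (" +
        " _id INTEGER PRIMARY KEY," +
        " title TEXT NOT NULL)";

    static final String CREATE_PAGES =
        "CREATE TABLE pages (" +
        " _id INTEGER PRIMARY KEY," +
        " note_id INTEGER NOT NULL REFERENCES notes(_id)," + // owning note
        " page_order INTEGER NOT NULL," + // position within the note
        " content TEXT," +                // text content, if any
        " image BLOB)";                   // binary data, if any
}
```

Loading a single page is then `SELECT content, image FROM pages WHERE note_id = ? AND page_order = ?`, which never touches the rest of the note.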

Indexing uploaded documents - searchable only by the users that uploaded them

If someone could point me in the right direction that would be most helpful.
I have written a custom CMS where I want to be able to allow each individual user to upload documents (.doc .docx .pdf .rtf .txt etc) and then be able to search the contents of those files for keywords.
The CMS is written entirely in PHP and MySQL within a Linux environment.
Once uploaded, the documents would be stored in the user's private folder on the server "as is". There will be hundreds if not thousands of documents stored by each user.
It is very important that the specific users files are searchable only by that user.
Could anyone point me in the right direction? I have had a look at Solr but these types of solutions seem so complicated. I have spent an entire week looking at different solutions and this is my last attempt at finding a solution.
Thank you in advance.
I see two choices.
A search index per user. Their documents are indexed separately from everyone else's. When they do a search, they hit their own search index. There is no danger of seeing other's results, or getting scores based on contents from other's documents. The downside is having to store and update the index separately. I would look into using Lucene for something like this, as the indices will be small.
A single search index. The users all share a search index. The results from searches would have to be filtered down so that only results were returned for that user. The upside is implementing a single search index (Solr would be great for this). The down side is the risk of cross talk between users searches. Scoring would be impacted by other users documents, resulting in poorer search results.
I hate to say it, but from a quality standpoint, I'd lean towards number 1. Number 2 seems more efficient and easier, but user results are more important to me.
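The two options can be sketched as follows. The directory layout and the `owner_id` field name are illustrative, not part of Lucene's or Solr's own API:

```java
import java.nio.file.Path;

/** Sketch of the two isolation strategies described above. */
public class PerUserSearch {
    /** Option 1: a separate index directory per user.
        Each user's Lucene index would live under its own path. */
    static Path indexDirFor(Path indexRoot, long userId) {
        return indexRoot.resolve("user-" + userId);
    }

    /** Option 2: one shared index, with every query filtered by owner.
        In Solr this would typically be a filter query (fq) on an owner field. */
    static String ownerFilter(long userId) {
        return "owner_id:" + userId;
    }
}
```

With option 2, the filter must be applied server-side on every request; it should never be possible for the client to omit it.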
Keep the files outside of the public directory tree, and keep a reference to each file's path and its creator's user ID in a database table; then users can search for the files using database queries. You will of course have to let users create accounts and log in. You can then let them download the files using PHP.
As long as each user's files are located in an isolated directory, or there is some way to identify one user's documents, such as adding the user ID to the filename, you could use grep.
The disadvantages:
Each search would have to go through all the documents, so if you have a lot of documents or very large documents it would be slow.
Binary document formats, like Word or PDF, might not produce accurate results.
This is not an enterprise solution.
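A sketch of that grep-style scan in plain Java, which makes the first disadvantage visible: every file in the user's directory is read on every search, and only plain-text formats are handled (unreadable or binary files are simply skipped):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

/** Sketch: brute-force keyword search over one user's document directory. */
public class FileScanSearch {
    static List<Path> search(Path userDir, String keyword) throws IOException {
        try (Stream<Path> files = Files.walk(userDir)) {
            return files.filter(Files::isRegularFile)
                        .filter(p -> {
                            try {
                                // Reads the whole file on every search.
                                return Files.readString(p).contains(keyword);
                            } catch (IOException e) {
                                return false; // skip binary/unreadable files
                            }
                        })
                        .collect(Collectors.toList());
        }
    }
}
```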
Revised answer: Try mnoGoSearch
