I'm finding my way around Android, and so far so good. My next big challenge is coming to grips with web services. I would like to build an app that reads data from a web site or a database on a web server and stores the data in my app.
Basically, it will be an app that I build in conjunction with a news website, and it will pull their latest articles into the app. What I'm finding difficult is how to bridge the gap between my application and the data in the SQL Server database.
I'm familiar with building asp websites that read data from a database, but how would I do something similar with an app?
Do I ask the website to store the articles in an XML format? Or is there another way that I can request a specific article and be provided with the content?
I hope I'm phrasing the question correctly and that someone can just guide me to the right way to approach this.
Thanks in advance.
You can approach this problem from different perspectives.
The common solution is to build a web service that bridges the gap between your mobile application and the data that lives on your server. I personally prefer to set up a Rails backend and thus have a RESTful API that helps me access my data. For instance, to retrieve the list of articles I could just request the following URL: http://my_server_host/articles. So for the web service part you can have whatever you want: Rails, J2EE, .NET etc. And you can choose the model that fits your needs (REST, SOAP, XML-RPC etc.).
Then you will have to write a class that contains all the necessary calls to the web service you have built. Basically, if your web service returns its results in XML format, you will have to:
Send the request to the appropriate URL (see HttpGet, or HttpPost if you want to modify a resource).
Parse the XML returned. In short, you can use SAX or DOM to parse the XML response and transform it into a business entity (an Article, a User etc.).
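For instance, a minimal sketch of such a class, assuming the hypothetical http://my_server_host/articles endpoint above returns <article> elements with <title> and <body> children (those element names are my assumption, not a fixed format), could look like this:

import java.io.InputStream;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Hypothetical business entity
class Article {
    String title;
    String body;
}

public class ArticleClient {
    // Requests /articles and maps each <article> element to an Article.
    public static List<Article> fetchArticles() throws Exception {
        List<Article> articles = new ArrayList<>();
        try (InputStream in = new URL("http://my_server_host/articles").openStream()) {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(in);
            NodeList nodes = doc.getElementsByTagName("article");
            for (int i = 0; i < nodes.getLength(); i++) {
                Element e = (Element) nodes.item(i);
                Article a = new Article();
                a.title = e.getElementsByTagName("title").item(0).getTextContent();
                a.body = e.getElementsByTagName("body").item(0).getTextContent();
                articles.add(a);
            }
        }
        return articles;
    }
}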
This hopefully gives you a hint about a possible solution. By the way, Google is your friend, but I will probably come back to add external links/resources to help you more.
Edit
Another possible solution could work for you, since all you need is to retrieve some articles: just set up a simple Wordpress blog, for instance. Wordpress gives you a URL for the blog's RSS feed, so all you have to do is parse that RSS feed (XML). There is a great article on the IBM website about parsing an RSS feed that you can find here. By the way, this solution is only possible if you want to keep your articles on a Wordpress blog, but hopefully you get the point.
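To give you a rough idea of that parsing step, here is a minimal sketch using XmlPullParser (available out of the box on Android; on plain Java you would need a pull-parser implementation such as kXML) that prints the title of every <item> in the feed. The feed URL is a placeholder for your own blog's feed:

import java.io.InputStream;
import java.net.URL;
import org.xmlpull.v1.XmlPullParser;
import org.xmlpull.v1.XmlPullParserFactory;

public class RssTitles {
    public static void main(String[] args) throws Exception {
        try (InputStream in = new URL("http://yourblog.example.com/feed").openStream()) {
            XmlPullParser parser = XmlPullParserFactory.newInstance().newPullParser();
            parser.setInput(in, null);
            boolean inItem = false; // skip the channel-level <title>
            for (int ev = parser.getEventType(); ev != XmlPullParser.END_DOCUMENT; ev = parser.next()) {
                if (ev == XmlPullParser.START_TAG && "item".equals(parser.getName())) {
                    inItem = true;
                } else if (ev == XmlPullParser.START_TAG && inItem && "title".equals(parser.getName())) {
                    System.out.println(parser.nextText()); // the article's title
                } else if (ev == XmlPullParser.END_TAG && "item".equals(parser.getName())) {
                    inItem = false;
                }
            }
        }
    }
}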
Reading your data directly from the database on the server would be bad practice. You'd have to open up some ports, and that's definitely not what you want (and if you don't have root access, you can't anyway).
For non-interactive content (what you want) you would use XML or JSON.
Related
I am creating an Android app for my website (for ex: the fossbytes Android app).
I want to fetch all the article properties, like title, publish date, tags, and category of each article, and store them in an array.
For example: an array containing all the article titles,
another array containing all the article categories,
and so on.
I am somewhat familiar with SQLite and totally unfamiliar with fetching data from servers and things like that.
My website runs on Wordpress. I have tables in it that contain all this information. How can I accomplish this?
I cannot try anything, as I know nothing about fetching data from server database tables.
You should look at how client and server interact using REST APIs. Check if Wordpress provides APIs to access the data stored in the SQL tables. If it does, you can hit those APIs from your Android app and use the response you receive from the API server to populate the UI as you want. That is the basic approach.
There are plenty of libraries available to help you do network operations hassle-free, like Retrofit + GSON. You might have to do network operations in the background, so you could look into building background services in Android.
You can read this tutorial; it might help you.
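As a rough sketch of the Retrofit + GSON route, assuming your Wordpress site has the standard REST API enabled at /wp-json/wp/v2/posts (check that this is the case for your install), the client side could look like this:

import java.util.List;
import retrofit2.Call;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;
import retrofit2.http.GET;

// Model with a couple of the top-level fields the Wordpress REST API returns;
// map the other properties you need (title, tags, categories) the same way.
class Post {
    int id;
    String date;
}

// Retrofit generates the HTTP client from this interface.
interface WordPressApi {
    @GET("wp-json/wp/v2/posts")
    Call<List<Post>> getPosts();
}

public class ApiClient {
    public static WordPressApi create() {
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl("https://yoursite.example.com/") // placeholder
                .addConverterFactory(GsonConverterFactory.create())
                .build();
        return retrofit.create(WordPressApi.class);
    }
}

You would then call create().getPosts().enqueue(...) so the request runs off the main thread, and fill your arrays from the returned list.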
I am trying to build a web site built on semantic technologies. It is a CMS; to keep it simple, let's say it's a blog. I need to be able to do simple CRUD operations. All data will be saved in Jena: blog posts, user information, blog categories etc.
I have a PHP system. Here is the path I am planning to follow:
Use Apache Jena as an RDF store
Use Apache Jena for storing and retrieving the data.
Write a web service in Java
Communicate through the web service with PHP in JSON format to view and control the data.
My main focus is to build a web site on semantic technologies.
Is there anything wrong with my approach?
If not, the main question is: when a user makes a blog post, how will I create a relation between the blog post and the user?
With MySQL it was just a foreign key. How can I make a relation in Jena between a new blog post and an existing user?
I can't see anything wrong with your approach. Maybe I would suggest using JSON-LD as the interchange format, because Jena can read and write it directly, instead of you having to create your own converters to RDF (see https://jena.apache.org/documentation/io/).
Regarding the modeling question, I strongly recommend to have a look at the SIOC vocabulary (http://rdfs.org/sioc/spec/), which aims to represent exactly what you are looking for, and more.
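To answer the relation question concretely: the RDF equivalent of a foreign key is simply a triple whose object is the user's URI. A minimal Jena sketch, using sioc:has_creator and made-up URIs for the post and the user:

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class LinkPostToUser {
    public static void main(String[] args) {
        String SIOC = "http://rdfs.org/sioc/ns#";
        Model model = ModelFactory.createDefaultModel();
        Property hasCreator = model.createProperty(SIOC, "has_creator");
        // Placeholder URIs; in your CMS these would identify real resources
        Resource user = model.createResource("http://example.com/users/alice");
        Resource post = model.createResource("http://example.com/posts/42");
        post.addProperty(hasCreator, user); // the "foreign key" triple
        model.write(System.out, "TURTLE");
    }
}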
Another, more hardcore solution would be to create the pages of the website in RDF (serialized as RDF/XML) and use XSLT to generate the HTML version on demand for each page. It really depends on the size of your website.
I want to store the temperatures for a year from a weather forecast web site like this one in a database so I can use them later in an Android application. I tried to use Jsoup, but I only get pieces of the table containing the temperatures.
Is there any way to get that HTML table content so I can store it?
It would be a whole lot better to use the API provided by Wunderground instead of using Jsoup to screen-scrape the page.
The main reasons are that the implementation will be a lot cleaner, and it will be immune to stylistic changes in Wunderground's web pages.
Here is a guide on how to consume a REST web service with Spring.
Once you have retrieved the data from the API, you could easily store it in a database using an ORM framework like Hibernate, since you would have already created the objects to retrieve the data.
You can make your life even easier if you use Spring with Hibernate integration to save the data. Check out this guide.
The guides mentioned above use Spring Boot to make it extremely easy to get started with the Spring framework (gone are the days when it was almost impossible for a novice to get started with a Spring project alone).
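To make the first step concrete, here is a minimal RestTemplate sketch; the endpoint URL and the Forecast fields are placeholders rather than Wunderground's actual API (consult their docs for the real paths and JSON shape), and it assumes Jackson is on the classpath so the JSON binds to the POJO:

import org.springframework.web.client.RestTemplate;

public class WeatherClient {
    // Placeholder model; mirror the fields of the real API response
    static class Forecast {
        public double temperature;
    }

    public static void main(String[] args) {
        RestTemplate restTemplate = new RestTemplate();
        Forecast f = restTemplate.getForObject(
                "https://api.example.com/forecast?city=London", Forecast.class);
        System.out.println("Temperature: " + f.temperature);
    }
}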
Broadly speaking, the HTML document displayed on the website would have to be parsed programmatically, tokenized, converted to suitable data types, and finally stored in the database. However, you should first check whether the data on the website can be read via a SOAP webservice or something similar, as the interface would be cleaner and the approach more robust.
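If no such webservice is available and parsing the HTML is the only option, a minimal Jsoup sketch of that step could look like the following; the URL and the CSS selector are placeholders, so inspect the real page to find the table's id or class:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class TableScraper {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and selector
        Document doc = Jsoup.connect("https://www.example.com/history").get();
        for (Element row : doc.select("table#obsTable tr")) {
            for (Element cell : row.select("td")) {
                System.out.print(cell.text() + "\t");
            }
            System.out.println();
        }
    }
}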
I thought of making the following application for my college project in Java. I know core Java. I want to know what I should read "specifically" for this project, as there is little time:
It will have an interface to enter a query. This string would go as a query to internet search engines, and with the help of the search engine the application would find the data (the first web page that we see; that is the data for my application for now :) ).
I do not want to display the data. I just want the HTML file, or the source code, of the generated web page. Does this sound like the Common Gateway Interface (CGI)? I do not know about this.
But I think it is for the same purpose. If it is, please guide me on how to implement it. Whatever it is, please specify.
Problem 1: What should I read? Direct help at this point is not my intention; I want to implement it myself.
Problem 2: Does connecting to the internet require some JNLP knowledge too?
For example, when we search for something on Google, it shows us links to websites. I can see the source code of this generated web page. I just want this page for my application to work on.
EDIT:
I do not want to rely only on Google or on any particular web server. I want my application to decide that.
Please also refer to my problem 2.
I discovered that websites have Terms and Conditions, so should I try to make my own crawler? If I do, would my application be breaking the rules? This is important to me.
Ashish,
Here is what I would recommend.
Learn the basics of JSON from these links (Introduction, lib download).
Then look at the Google Web Search JSON API here.
Learn how to GET data from servers using the HttpClient library here.
Now what you have to do is fire a GET request for the search, read the JSON response, parse it using the JSON lib from #1, and you have the search results.
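Putting those steps together, a rough sketch with Apache HttpClient and the org.json lib might look like this; the URL and the JSON field names ("results", "title") are placeholders you would match to the actual search API:

import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.json.JSONArray;
import org.json.JSONObject;

public class SearchClient {
    public static void main(String[] args) throws Exception {
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            // Fire the GET request and read the body as a string
            HttpGet get = new HttpGet("https://api.example.com/search?q=android");
            String body = EntityUtils.toString(client.execute(get).getEntity());
            // Parse the JSON and walk the result list
            JSONArray results = new JSONObject(body).getJSONArray("results");
            for (int i = 0; i < results.length(); i++) {
                System.out.println(results.getJSONObject(i).getString("title"));
            }
        }
    }
}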
Most of the search engines (Bing etc.) offer JSON/REST APIs, so you can do the same for other search engines.
Note: JSON APIs are normally used from JavaScript on the UI side, but since it's very easy and quick to learn, that is what I suggested. You can also explore the XML-based APIs if time permits.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

URL url = new URL("http://fooooo.com");
// Read the page's source line by line and print it
try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
    String inputLine;
    while ((inputLine = in.readLine()) != null) {
        System.out.println(inputLine);
    }
}
That should be enough to get you started.
And yes, do check that you are not violating a website's usage terms. Search engines don't really like you trying to access them via a program.
Many, including Google, have APIs specifically designed for this purpose.
You can do everything you want using HtmlUnit. It's like a web browser, but for Java. Check out some examples on their website.
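For example, a minimal sketch (assuming a recent HtmlUnit version) that loads a page and dumps its source; the URL is a placeholder:

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class PageSource {
    public static void main(String[] args) throws Exception {
        try (WebClient webClient = new WebClient()) {
            HtmlPage page = webClient.getPage("http://example.com");
            System.out.println(page.asXml()); // the page's source
        }
    }
}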
Read "Working with URL's" in the Java tutorial to get an idea what is behind the available libs like HTMLUnit, HttpClient, etc
I do not want to display the data. I just want the HTML file or the source code of the generated web page.
You probably don't need the HTML either. Google provides its search results as a web service using this API. Similarly for other search engines; GIYF. You get the search results as XML, which is far easier for you to parse. Plus, the XML won't have any unwanted data like ads.
I don't even know if what I'm asking is possible and I don't know what to search for on Google.
Basically, there are multiple projects that would require me to fetch some data from websites. The example I'm thinking of right now is grabbing my account info from a banking site, http://www.americanexpress.ca. I'd like to know how I'd make it so my login info is entered in the fields on the left, and then grab the data from the resulting page. I'd then write methods to parse that data.
Obviously, this would need to be secure as I don't want my banking info stolen.
Sorry if the solution is obvious as I've never tried grabbing data from websites.
As mentioned, Apache HttpClient is one option, though personally I've always found HtmlUnit to be a bit more convenient to work with (from an API standpoint) for doing things like this. HtmlUnit is built on top of HttpClient, and exposes a higher-level API for interacting with and manipulating page content.
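To make that concrete, here is a hedged sketch of the form-filling flow with HtmlUnit. The URL, form name, and input names are made up; you would inspect the real login page's HTML to find them (and be very careful with real banking credentials):

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlForm;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class LoginScrape {
    public static void main(String[] args) throws Exception {
        try (WebClient webClient = new WebClient()) {
            HtmlPage loginPage = webClient.getPage("https://www.example-bank.com/login");
            // Placeholder form and field names
            HtmlForm form = loginPage.getFormByName("loginForm");
            form.getInputByName("username").setValueAttribute("myUser");
            form.getInputByName("password").setValueAttribute("myPassword");
            HtmlPage accountPage = form.getInputByName("submit").click();
            System.out.println(accountPage.asText()); // parse this however you like
        }
    }
}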
You can use the Apache HttpClient library (or a similar one). It has all the required classes for you.