Hey everyone,
I've got a task from work where I need to put our customer database on an OSM map using PHP and JS. Is this possible? And is it possible to get the info from our database without having the latitude and longitude of the addresses?
I'm also not very good at programming. Thanks to everyone who tries to help.
There are a great number of things you need to learn to be able to build the kind of site you're talking about. You need to know several programming languages (not just one), how relational databases work, how HTML and JavaScript work together, and the many different ways geographic information is processed.
To put it another way, it's like saying "I've been asked to build a house, where can I learn the bricklaying I need?". You're being asked too large a task, especially for an intern.
There is a simple example of overlaying markers on an OSM map, but that needs you to know some JavaScript yourself, which you'll need to learn separately.
I got this from someone on another forum; maybe it helps other people too. I'm still searching for more, though, if anyone knows of anything.
Related
In one of our projects we have some HTML files stored in an Oracle database, though we could keep them in plain files as well, or in some NoSQL database if that's more appropriate. We are given some keywords, and based on them we need to find the relevant sections in those files. The files are basic company declarations, news articles, financial reports, etc. We need to find sections pertaining to categories like the following:
Risk, using keywords like crime, theft, litigation, accuse, etc.
High Rank Changes, using keywords like 'will be leaving', 'Appointment of Certain Officers', 'Election of Director', etc.
Shareholder Rights, using keywords like 'shareholder rights', 'shareholder lawsuits', 'financial restatements', etc.
There are other categories as well, each with defined keywords to search for. So the requirement is to extract, per category, the sections/paragraphs that are MOST relevant.
The emphasis is on high accuracy in finding the most relevant section.
If technologies like Solr, Elasticsearch, or Jackrabbit provide that, we are open to them. We just need a pointer toward the right tech stack here.
Currently we are trying Oracle Text search, but I believe there might be a better programmatic solution as well, perhaps using machine learning, NLP, or some Java library. Kindly give me some insights. I am an experienced Java developer and have worked with machine learning and NLP. I am language-agnostic, so a good solution using any language or technique is welcome.
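For what it's worth, before reaching for Solr or Elasticsearch, the keyword side of this is easy to baseline in plain Java. The sketch below is only an illustration of the categorized-keyword idea from the lists above; the class and method names are my own, and a simple hit count is not the "most relevant" ranking you ultimately need (that's where a real scorer like Lucene's BM25 earns its keep):

```java
import java.util.*;

public class SectionClassifier {
    // Category -> keywords, abbreviated from the lists in the question.
    static final Map<String, List<String>> CATEGORIES = Map.of(
        "Risk", List.of("crime", "theft", "litigation", "accuse"),
        "High Rank Changes", List.of("will be leaving",
                "appointment of certain officers", "election of director"),
        "Shareholder Rights", List.of("shareholder rights",
                "shareholder lawsuits", "financial restatements"));

    // Score a paragraph for each category by counting keyword occurrences.
    public static Map<String, Integer> score(String paragraph) {
        String text = paragraph.toLowerCase();
        Map<String, Integer> scores = new HashMap<>();
        for (var e : CATEGORIES.entrySet()) {
            int hits = 0;
            for (String kw : e.getValue()) {
                int idx = 0;
                while ((idx = text.indexOf(kw, idx)) >= 0) {
                    hits++;
                    idx += kw.length();
                }
            }
            if (hits > 0) scores.put(e.getKey(), hits);
        }
        return scores;
    }

    // Best-matching category for a paragraph, or empty if nothing matched.
    public static Optional<String> bestCategory(String paragraph) {
        return score(paragraph).entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey);
    }
}
```

You would run this per paragraph/section and keep the top-scoring ones per category; replacing the raw count with TF-IDF or BM25 scoring (which Solr/Elasticsearch give you for free) is the natural next step toward the accuracy requirement.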
The direction you seem to be going with this question is word/phrase search [easy] vs. semantic search [hard]. Several people have worked on such solutions over the years [I met folks from a company in Scotland who were building a Java-based solution, but I can't recall the name]. Where you get into trouble with semantic search is that there are so many problem domains [and very relevant taxonomies within each domain] where the semantics are very different for the same words or phrases. Then of course some folks make the "semantic" job easier by meta-tagging the data (examples: images, video, complex documents), then searching the metadata.
When I was an Enterprise Architect a few years back, we used Verity to essentially Google the enterprise. I have no idea if it is still a product, but it leveraged Oracle Text and layered its own code on top.
Back in the day, the state of the art was what Forrester Research called "Connecting Data, Content, And Text With Organic Information Abstraction", but I don't know where the state of the practice is right now.
I'll bet Google might have some tools you could use :) .
Sounds like a fun project!!!
I know that this question was asked before, but the answer was not satisfying (in the sense that the answer was just a link).
So my question is: is there any way to extend the existing OpenNLP models? I already know about the technique with DBpedia/Wikipedia. But what if I just want to append some lines of text to improve the models? Is there really no way? (If so, that would be really stupid...)
Unfortunately, you can't. See this question, which has a detailed answer to the same problem.
I think that is a tough problem, because when you deal with texts you often have licensing issues. For example, you cannot build a corpus on Twitter data and publish it to the community (see this paper for some more information).
Therefore, companies often build domain-specific corpora and use them internally, as we did in our research project. We built a tool (the Quick Pad Tagger) to create annotated corpora efficiently (see here).
OK, I think this needs a separate answer.
I found the Yago database: http://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago//
This database seems just fantastic (at first look). You can download all the tagged data and put it in a database (they already deliver the tools for that).
The next stage is to "refactor" the tagged entities so that OpenNLP can use them (OpenNLP uses something like this: <START:person> Pierre Vinken <END>).
Then you create some text files and train with the training tool OpenNLP delivers.
I'm not 100% sure this works, but I will come back and tell you.
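The "refactor" step above, turning known entity mentions into OpenNLP's name-finder training markup, could be sketched like this. The class, method, and input shape are my own invention just to illustrate producing the `<START:type> ... <END>` format the post mentions; the actual training would then go through OpenNLP's TokenNameFinder training tool:

```java
import java.util.*;

public class OpenNlpFormatter {
    // Wrap each known entity mention in OpenNLP name-finder training markup,
    // e.g. "Pierre Vinken" -> "<START:person> Pierre Vinken <END>".
    // entityToType maps a mention string to its entity type (person, location, ...).
    public static String toTrainingLine(String sentence, Map<String, String> entityToType) {
        String out = sentence;
        for (var e : entityToType.entrySet()) {
            out = out.replace(e.getKey(),
                    "<START:" + e.getValue() + "> " + e.getKey() + " <END>");
        }
        return out;
    }
}
```

Writing one such line per sentence into a text file gives you the input format the trainer expects; note this naive string replace would mis-tag overlapping or repeated mentions, so a real converter should work on token offsets.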
For my Java project I am looking for a convenient way to store my data. I have the following requirements:
It should be easy to synchronize with subversion (which I use for my Java code and other stuff). So I guess file-based is appropriate.
I want to be able to get certain elements without having to read all data into memory. Like in a database ("give me all objects with/without property x", "give me all information about object with certain ID").
I want to be able to read and write in this way.
I guess a database is overkill for my purpose, difficult to sync, and I'd have to be admin/root on all machines to install it. (Right?)
So I was thinking of using XML, but I've heard that XML parsing in Java does not work very well. Can anyone point me to a good library?
Then I was thinking of CSV. But all examples I saw (here and elsewhere) read the data into memory before processing it, which is not what I want.
I hope you can help me with this problem, because I am not so experienced with Java.
Edit:
Thank you for downvoting this question without any comment. This is not helpful at all because now I have no new information on my problem and I also have no idea what I did wrong with respect to this community's rules.
You can use DataNucleus (an ORM) with an XML datastore:
http://www.datanucleus.org/products/datanucleus/datastores/xml.html
I'm thinking about making a simple web application to practice custom tags, EL, ...
Now I'm thinking about how to make a simple front page.
I want to have a front page where I'll show a short description of a post and then the user can click it to see the full article.
Further down the line I'd like to attach a poster to it, and even further down the line I'd like to allow people to leave a comment.
Now I see two ways to do this:
a) put it all into a database
b) put the short description and the article into a .tag file and put the comments and users into the database.
Now I'm wondering which way would you go, or would you go for something entirely else?
The first way is probably the easiest but it does require access to the database "often".
The second way is a bit more "sloppy", especially depending on my implementation but it does have the advantage of accessing the database less often.
And any recommendations on keeping the data up to date?
I could either load everything each time somebody accesses the news page, or I could put the articles into a bean in application scope and use a listener.
And do you use hibernate/jdbc/... for a database connection?
I'm getting the feeling that the actual programming will be the easiest part.
Any directions (or book recommendations for that matter) are welcome.
I've read head first servlets & jsp, and while it does a wonderful job of explaining how to develop the application I find it a bit lacking in the when/how to connect with the database and how to optimize it.
Sorry for the long post that possibly doesn't really fall under the scope of this site.
As far as I can see, you are thinking too much about performance. You shouldn't; it is of little concern at the start. Go with what feels right and tackle performance when it's actually lacking.
I would suggest the following:
Store your data in the database, even the short description, in some appropriate table.
Use a pooling mechanism for the database connection. It's very important and makes the process much more efficient. Take a look at DBCP or C3P0 or something similar.
Don't load everything when somebody accesses the page; much of it may go unused, it will take a lot more time, and the user will get frustrated.
Cache data later, when you feel it's a good idea. Hibernate makes caching really easy, so you might incorporate Hibernate, as you mentioned yourself.
Use AJAX calls wherever appropriate to get rapid request/response.
These are a few things I'd like to mention.
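On the pooling point: in practice you just configure DBCP or C3P0, but the mechanism itself is simple enough to sketch. This is an illustrative toy generic over any resource, not a replacement for a real pool (which also validates, evicts, and times out connections):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

public class TinyPool<T> {
    private final BlockingQueue<T> idle;

    // Pre-create `size` resources (e.g. database connections) up front,
    // so requests reuse them instead of paying the creation cost each time.
    public TinyPool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) idle.add(factory.get());
    }

    // Borrow a resource, blocking until one is free.
    public T borrow() throws InterruptedException {
        return idle.take();
    }

    // Return it so another caller can reuse it.
    public void release(T resource) {
        idle.add(resource);
    }
}
```

Since opening a JDBC connection is expensive, reusing a small fixed set like this is exactly the win DBCP/C3P0 give you, plus all the production details this sketch leaves out.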
I have a simple task that I feel there has to be an app out there for (or is easy to build or extend an open-source version).
I need to run a MySQL query repeatedly and look for changes in the results between runs (the data is coming in in real time).
I have built several of these queries, and throughout the day I find myself jumping between tabs in my MySQL client, running them and trying to see what has changed. This becomes difficult, as there are hundreds of rows of data and you can't easily remember the previous values.
Ideally I'd have a simple app (or web app) that stores the query and refreshes it over and over again. As the data fills into the table, it could compare against the old results and change the color to red or green (or something).
I would need sorting, and simple filtering (possibly with string replaces into the query based on the inputs).
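The compare step described above boils down to diffing two keyed result snapshots. A sketch of that core, assuming (my assumption, not stated above) that each row has an ID column you can key on and the rest of the row is a list of column values:

```java
import java.util.*;

public class ResultDiff {
    public enum Change { ADDED, REMOVED, CHANGED }

    // Compare two query snapshots keyed by row ID. A UI could then render
    // ADDED/CHANGED rows in green/red and grey out REMOVED rows.
    public static Map<String, Change> diff(Map<String, List<String>> previous,
                                           Map<String, List<String>> current) {
        Map<String, Change> changes = new LinkedHashMap<>();
        for (String id : current.keySet()) {
            if (!previous.containsKey(id)) {
                changes.put(id, Change.ADDED);
            } else if (!previous.get(id).equals(current.get(id))) {
                changes.put(id, Change.CHANGED);
            }
        }
        for (String id : previous.keySet()) {
            if (!current.containsKey(id)) changes.put(id, Change.REMOVED);
        }
        return changes;
    }
}
```

Run the query on a timer, diff against the previous snapshot, color accordingly, and store the new snapshot; rows absent from the diff are unchanged.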
We run Ubuntu at work and I have tried doing this via terminal scripts (we use Ruby), but I feel a more visual output would give me better results.
Googling around I see several for-pay apps, but there has to be something out there to do this.
I don't mind coding one up, but I don't like to re-invent the wheel if I don't have to.
Many thanks!
For simple things like this you are not reinventing the wheel so much as making your own sandwich: some things don't make much sense to buy. Just build the simplest web page possible (e.g. a table with the table names you are interested in, and maybe a timestamp for the last time each was checked). Have some JavaScript run your query and color the cells based on the change you are looking for, repeating this operation as needed. I could give you more specific info if you can tell me how the data changes: more entries into a table? Updates to existing data?
I often use JDBC servlets via Tomcat for this. Here's an excellent tutorial and a very simple example.
I've done something similar in the past using Excel. Just build a connected spreadsheet, make your queries, and the results will be output to Excel; then you format them the way you like. Very flexible, and if you need some kind of logic beyond the query itself, there are always Excel's built-in functions and VBA.
Here is a useful link to help you. It is very simple:
http://port25.technet.com/archive/2007/04/10/connecting-office-applications-to-mysql-and-postgresql-via-odbc.aspx