How to fetch real-time changing data into a local server? - Java

We have to develop a local server that loads itself with real-time data from an industrial plant (in particular, time-stamped data points such as boiler temperatures, pressure values, etc.). These values are stored on an industrial server, and we want to fetch them and populate our own server with them. The data is not streamed at the server end, so how do we fetch it continuously and keep our server populated?
We would like to store only the past 2-3 days of history as time advances. Any recommendations about the server and the back-end process to be used to fetch the data are welcome; we don't have any idea where to start.
please help...

As others have stated, you need to provide more information on how you intend to populate your server. What API do you have for the "real-time server"?
I worked on a management system for solar energy devices (i.e., devices that produce electricity from solar energy; they are called photovoltaic cells, if I remember correctly). In my case these devices offered FTP access, which provided me with files containing time-based information.
I constructed a Java server that used the following technologies:
A. Apache Tomcat web container - This web container allowed me, on the one hand, to host the Java logic and, on the other hand, to expose an HTTP-based interface to the customer.
The Java logic was located in a servlet, which exposes methods to handle HTTP requests (and allows writing returned data using response objects).
B. The servlet has an init method; I used it to perform some initialization, such as starting a Quartz periodic task to probe the FTP servers of the devices (see the first sketch after this list).
C. I used a database (PostgreSQL, which is open source) to store configuration for the application and also to store results.
D. I used another periodic task to archive old data into an archiving table, so that the main data table holds only relatively new data.
I ran the archiving task once every few days; it simply checked for records that were "too" old, inserted them into the archiving table, and deleted them from the main data table. To perform this efficiently, I decided to use a function that I coded on the database (see the second sketch after this list).
E. To access the database from the application, I used the Hibernate object-relational mapping technology.
This technology allowed me to define mappings between tables and their relations to Java objects, and gave me generated create, read (by id), update and delete SQL statements.
Using the HQL query language, I wrote some more complex queries.
F. For presentation/client side - I used plain JSP.
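To make item B more concrete, here is a minimal sketch of a servlet whose init method schedules a Quartz job that polls the devices' FTP servers. The class names and the five-minute interval are made up for the example; the actual FTP download and parsing would go into the job's execute method.

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

// Hypothetical servlet that starts a periodic polling job when the web application comes up.
public class DataCollectorServlet extends HttpServlet {

    @Override
    public void init() throws ServletException {
        try {
            Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
            JobDetail job = JobBuilder.newJob(FtpProbeJob.class)
                    .withIdentity("ftpProbeJob")
                    .build();
            Trigger trigger = TriggerBuilder.newTrigger()
                    .withIdentity("ftpProbeTrigger")
                    .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                            .withIntervalInMinutes(5)  // example interval only
                            .repeatForever())
                    .build();
            scheduler.scheduleJob(job, trigger);
            scheduler.start();
        } catch (SchedulerException e) {
            throw new ServletException("Could not start the polling scheduler", e);
        }
    }
}

// In its own file: the Quartz job that does the actual work.
public class FtpProbeJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // Fetch the latest time-stamped files from the device's FTP server,
        // parse them, and store the readings in the database.
    }
}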
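And for item D, the archiving step itself boils down to moving old rows into the archive table and deleting them from the main table inside one transaction. A sketch in plain JDBC (table and column names are invented; in my case the equivalent logic lived in a database function that the periodic task invoked):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

// Hypothetical archiving step: move records older than 3 days into an archive table.
public class ArchiveTask {

    public void archiveOldRecords(Connection connection) throws SQLException {
        Timestamp cutoff = Timestamp.from(Instant.now().minus(3, ChronoUnit.DAYS));
        connection.setAutoCommit(false);
        try (PreparedStatement copy = connection.prepareStatement(
                 "INSERT INTO measurement_archive SELECT * FROM measurement WHERE sample_time < ?");
             PreparedStatement purge = connection.prepareStatement(
                 "DELETE FROM measurement WHERE sample_time < ?")) {
            copy.setTimestamp(1, cutoff);
            copy.executeUpdate();
            purge.setTimestamp(1, cutoff);
            purge.executeUpdate();
            connection.commit();
        } catch (SQLException e) {
            connection.rollback();
            throw e;
        }
    }
}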
You may choose other alternatives, such as GWT, Apache Wicket, or JSF.
You may also consider using an MVC framework to get some separation between the logic and the presentation. Such frameworks include Spring MVC, Struts, and many others.
To conclude, you must understand that Java offers you a variety of technologies; you must define your requirements well and then start investigating which technology can meet your needs.

Related

Fields retrieved by the REST API do not correspond to fields in the object manager

I need to back up all the data on a regular basis from Salesforce to a local database, so I wrote a program that calls the REST API /services/data/v53.0/sobjects to list all the sObjects and then, for each one, calls /services/data/v53.0/sobjects/XXX/describe by name to get its fields. However, I found that the fields I got did not match the fields in the Object Manager.
I've also tried using SOQL directly:
SELECT EntityDefinition.QualifiedApiName, QualifiedApiName, DataType
FROM FieldDefinition
WHERE EntityDefinition.QualifiedApiName = 'xxx'
But it still doesn't work. If I need to back up the CRM data to my own local database, what do I need to do? How do I get all the tables and all the fields and export them?
please help me!
There are a few ways to do this, but none of them are easy. In the past I have used add-ons that connect directly to Salesforce via MSSQL. One such application is purpose-built for this use case; it's called DBAmp. Unfortunately, it is rather pricey. You can also connect to your Salesforce instance with integration software like Jitterbit, MuleSoft, Dell Boomi or Talend. That approach would require building an integration catered to each object you want backed up.
On the free side, you could use Excel to connect to your Salesforce instance and pull down whatever object you want, though this is probably not an ideal solution: Data tab > Get Data > From Online Service > From Salesforce Object.
I have seen other solutions like creating full-copy sandboxes every week. The last option is connecting MSSQL to Salesforce via SSIS and an ODBC connector, but that has been a pretty bad experience for me in the past; it could just be me, though.
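If you do end up rolling your own backup with the REST API mentioned in the question, the describe call is just an authenticated GET. A minimal sketch (the instance URL, access token and object name are placeholders, and obtaining the token via OAuth is left out):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DescribeExample {
    public static void main(String[] args) throws Exception {
        String instanceUrl = "https://yourInstance.my.salesforce.com"; // placeholder
        String accessToken = "<OAuth access token>";                   // placeholder
        String objectName = "Account";                                 // placeholder

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(instanceUrl + "/services/data/v53.0/sobjects/" + objectName + "/describe"))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The JSON body contains a "fields" array with one entry per field;
        // parse it with your JSON library of choice and compare against Object Manager.
        System.out.println(response.body());
    }
}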

Different applications to the same database

I have 3 different applications
ASP.NET web application
Java Desktop application
Android Studio mobile application
These 3 applications share the same database, and they need to connect to it from any part of the world with an internet connection. They share almost all the information, so if you change something in one application, it has to update the information in the other 2 applications.
I have the database on a physical server, and I want to know the best way to make this connection.
I have searched, but I couldn't find out whether I should connect directly to the server with SQL Server, use a web service, or something like that.
I hope someone could help.
Thank you.
I believe the best way is to first create a Web API layer (REST/SOAP) that will be used to perform all the relevant operations on the centralized DB. Once that is set up, any of your applications, written in any language, can use the exposed web API methods to manipulate the data of the same DB.
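As a rough illustration of that API layer in Java, a minimal Spring Boot controller could expose the shared operations over HTTP so that the ASP.NET, Java desktop and Android clients all go through the same endpoints. Everything below (paths, the Item record, the in-memory map standing in for the real database access) is invented for the example:

import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import org.springframework.web.bind.annotation.*;

// Hypothetical REST layer; in the real application the map would be replaced
// by calls into the shared SQL Server database (e.g. via JPA or JDBC).
@RestController
@RequestMapping("/api/items")
public class ItemController {

    public record Item(Long id, String name) { }

    private final Map<Long, Item> store = new ConcurrentHashMap<>();
    private final AtomicLong ids = new AtomicLong();

    @GetMapping
    public Collection<Item> list() {
        return store.values();
    }

    @PostMapping
    public Item create(@RequestBody Item item) {
        long id = ids.incrementAndGet();
        Item saved = new Item(id, item.name());
        store.put(id, saved);
        return saved;
    }

    @PutMapping("/{id}")
    public Item update(@PathVariable Long id, @RequestBody Item item) {
        Item saved = new Item(id, item.name());
        store.put(id, saved);
        return saved;
    }
}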
If you are looking at a global solution - will you have multiple copies of the applications in different parts of the world as well?
In this scenario you should be looking at a cloud-hosted database with some form of geo-replication so that you can keep latency to a minimum.
There are no restrictions on the number of applications that can connect to a specific database; you do not have to create a different database for each, and you may be able to reuse stored procedures between applications if they perform the same task.
I would, however, look at the concept of schemas: any database objects that are specific to one app should be separated from the others, so put them in a schema for "App1". Shared objects can go in a shared schema.

Handling multiple POST requests at the same time and writing to database

I started working with REST services recently. I have several tools joined into a framework of integrated tools. The tools communicate over a common component (CC), which handles their requests (using REST services) and is effectively an interface between all the tools. For every POST request a new resource is created and stored in memory. Every time the CC goes down, all the data is lost. For that case, I created an Apache Derby database to store all the resources: with every resource creation, an entry is created in the database, every time the CC starts up it fetches all the data from the database, and the data is regularly synced.
The problem is that multiple tools can POST at almost the same time. How does REST handle these requests? I hoped that it would manage the requests in a queue-like way, but from what I see it handles them at the same time, in a thread-like way. My database goes down instantly. Am I on the right track, or could something else be wrong?
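To illustrate the behavior described above: a JAX-RS/servlet container dispatches each POST on its own worker thread, so the resource-creation code runs concurrently unless it is explicitly serialized. A hypothetical sketch (names are made up; this is not the actual CC code):

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Hypothetical CC-side resource: the container invokes post() once per incoming
// request, each call on its own worker thread.
@Path("/resources")
public class ToolResource {

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response post(String payload) {
        // Two tools posting at the same time reach this point in parallel.
        // The in-memory store and the Derby INSERT must therefore be thread-safe,
        // e.g. by using a connection pool and transactions rather than one shared
        // connection, or by handing the writes to a single-threaded queue/executor.
        return Response.status(Response.Status.CREATED).build();
    }
}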

data integration services between java system and sql server

I am currently architecting some integration services for a web application. External Java applications produce a data feed; the data is massaged as necessary and then inserted into a SQL Server database. The data is managed there and used as the basis for WCF and HTTP REST services, which are accessed by web applications, mobile devices, etc.
That is the current setup. I am now changing it because we have some issues with the integration of the Java system and the SQL Server database. The main issue is the quality of the data supplied: it can have missing fields, etc. The current integration is a comma-separated file placed on an FTP server; the file is picked up and processed, the data is massaged, and the data is inserted into SQL Server. Where we are currently getting "burned" is that data is inserted into the SQL Server database even though its quality is not up to the necessary standard.
So this process is being changed, and I am looking for options to both modernize it and make the integration services more robust.
I am looking for both suggestions and recommendations to improve the above.
Some options that spring to mind are:
Expose a WCF service that the Java system calls; data gets passed to it via the SOAP protocol and is then validated in the service before being inserted into SQL Server.
Move the format of the supplied data from a comma-separated file to an XML file, and validate the XML file against a schema before the data is massaged.
Any other suggestions?
Neither of your solutions is going to solve your data quality problem at its source. I'd look more critically at the applications producing the data and put validation there, in addition to validating before the INSERT into the database. You want to validate prior to INSERT because you should never trust clients, but clients ought to honor a contract when they send you data.
One advantage the web service offers that the others don't is the possibility of real-time INSERTs into the database. Let the source applications send their requests to this broker service; it validates the requests and inserts them in real time. No more batch.
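If the XML option from the question is chosen, the schema check is cheap to add on the Java side before anything touches the database. A minimal sketch using the standard javax.xml.validation API (the file names and the schema itself are placeholders):

import java.io.File;
import java.io.IOException;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.SAXException;

// Hypothetical feed validation: reject the file before any data is massaged or inserted.
public class FeedValidator {

    public static boolean isValid(File feedFile, File schemaFile) {
        try {
            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(schemaFile);
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(feedFile));
            return true;
        } catch (SAXException e) {
            // The feed violates the contract; log e.getMessage() and reject the file.
            return false;
        } catch (IOException e) {
            return false;
        }
    }
}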

Using JPL (Java + Prolog) in a Java EE web application

I would like to develop a Java EE web application that requires Prolog, via JPL, for certain search related tasks.
The web application will be deployed in the JBoss application server.
The Prolog engine can be either YAP or SWI (afaik the only Prolog engines compatible with JPL at the moment).
The Prolog queries depend on information stored in a (potentially large) database.
If someone has tried this or something similar, could you please give me feedback on the following questions?
What is the best way to manage concurrent HTTP sessions that need access to the Prolog engine? Is it possible (and desirable?) to assign each session its own Prolog engine? If that works, is it possible to implement something like a 'Prolog engine pool' to quickly assign Prolog engines to new sessions? Or is the best solution to have a single Prolog engine that handles all the query requests synchronously (and slowly)?
How could the interaction of Prolog with the database be managed? If the data in the database changes often and Prolog needs this data to solve its queries, what is the best strategy to keep the facts in the Prolog engine synchronized with the data in the database? The naive option of starting from scratch in each new session (e.g., reloading all the data from the database as Prolog facts) does not seem to be a good idea if the database grows large.
Are there any other issues or difficulties to expect in the Java-Prolog-database interaction during the implementation?
Thanks in advance!
What is the best way to manage concurrent HTTP sessions that need access to the Prolog engine?
If I look at the source of JPL, it looks like it uses an engine pool. The query data type implements the enumerator pattern plus a close() operation. I guess an engine is automatically assigned to a query as long as it is active.
So each HTTP request can independently access the Prolog system via new query objects. If you don't want to close your query object during an HTTP request, I guess you can also attach it to an HTTP session and reuse it in another request.
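For reference, this is roughly what the per-request usage of JPL's query type looks like (org.jpl7 package names; the consulted file and the goal are placeholders). The Query object is the enumerator, and close() releases the underlying Prolog engine back to JPL:

import java.util.Map;
import org.jpl7.Query;
import org.jpl7.Term;

// Hypothetical request-handling fragment: one Query object per HTTP request.
public class PrologSearch {

    public static void runSearch() {
        // Load the knowledge base once at application startup in a real deployment.
        Query consult = new Query("consult('search_rules.pl')");  // placeholder file
        consult.hasSolution();
        consult.close();

        Query q = new Query("solve(task1, Result)");              // placeholder goal
        try {
            while (q.hasMoreSolutions()) {
                Map<String, Term> binding = q.nextSolution();
                System.out.println("Result = " + binding.get("Result"));
            }
        } finally {
            q.close();  // returns the engine to JPL's pool
        }
    }
}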
How could the interaction of Prolog with the database be managed?
This depends on the usage pattern of the data in the database and the available access paths. It could be that you can quickly access very large databases during a request and refetch the data on each request, for example if the matching data set you need is small and the database has good indexes, so that the matching data can be accessed quickly.
Otherwise you would need to implement some intelligent caching. I am currently working on a solution where I use a kind of check-in/check-out pattern, but this is not suitable for a web server where you have multiple users. I am using this pattern for a standalone solution where there is one user and one checked-out chunk of data in memory. For a web server with multiple, varying users the chunks could overflow the web server's memory.
So caching only works if you can limit and throttle the chunks, or if you have a very large web server memory. Maybe you can find such an invariant for your application. Otherwise the conclusion could be that you cannot go Java EE, independently of whether you use Prolog or not.
