Can I use the Play Framework linked to an existing database (for example, SAP DB) to only display:
Dashboards (queries)
Charts (queries)
I developed an authentication page, and then suddenly realized I had not thought about how to extract data from an existing database without declaring models!
What is the right way to simply run many queries and display the data in Scala views (Play Framework, Java)?
Thank you so much.
We also have many existing databases without a complete model of every table in our Play applications. I create a case class in the Play application for just the necessary fields, i.e. a subset of all columns. It's Scala code, but of course the same is possible in Java.
case class Positionstext(akt_abnr: Int,
                         akt_text_art: Int,
                         akt_text_pos: Int,
                         akt_text: String)
The carefully assembled SQL command retrieves rows from one or more tables.
import scala.collection.mutable.ListBuffer

def positionstexte(x: Int): List[Positionstext] = {
  DB.withConnection { connection =>
    val select =
      """ select plr_ak_texte.akt_abnr
               , plr_ak_texte.akt_text_art
               , plr_ak_texte.akt_text_pos
               , plr_ak_texte.akt_text
          from plrv11.plr_ak_texte
             , plrv11.plr_auftr_status
          where plr_ak_texte.akt_abnr = plr_auftr_status.as_abnr
            and plr_ak_texte.akt_aend_ix = plr_auftr_status.as_aend_ix
            and plr_ak_texte.akt_abnr = ?
            and plr_ak_texte.akt_text_art = 7
            and plr_auftr_status.as_aend_ix <> 99
      """
    val prepareStatement = connection.prepareStatement(select)
    prepareStatement.setInt(1, x)
    val rs = prepareStatement.executeQuery
    val list: ListBuffer[Positionstext] = ListBuffer()
    while (rs.next) {
      list += Positionstext(rs.getInt("akt_abnr"),
                            rs.getInt("akt_text_art"),
                            rs.getInt("akt_text_pos"),
                            rs.getString("akt_text"))
    }
    rs.close()
    prepareStatement.close()
    list.toList
  }
}
And that's it! The SQL command already does most of the work via sub-queries, joins, etc.
All the desired objects are now in the list and can be displayed in the view.
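The pattern above (a display-only class holding just the columns a view needs) is plain row-to-DTO mapping. A minimal, framework-free Java sketch of the same idea, with each row represented as a Map standing in for a JDBC ResultSet row (all class and column names here are illustrative, not taken from the project):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PositionstextDemo {
    // Display-only DTO: a subset of the table's columns, no ORM model needed.
    record Positionstext(int aktAbnr, int aktTextArt, int aktTextPos, String aktText) {}

    // Stand-in for JDBC rows: each Map plays the role of one ResultSet row.
    static List<Positionstext> fromRows(List<Map<String, Object>> rows) {
        return rows.stream()
                .map(r -> new Positionstext(
                        (Integer) r.get("akt_abnr"),
                        (Integer) r.get("akt_text_art"),
                        (Integer) r.get("akt_text_pos"),
                        (String) r.get("akt_text")))
                .collect(Collectors.toList());
    }
}
```

The list of DTOs can then be handed directly to the view, exactly as in the answer above.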
Thank you for your reply.
This is what I did, and it works pretty well:
package controllers;
import models.Sysuser;
import models.DataI;
import models.DataII;
import play.mvc.Controller;
import play.mvc.Result;
import play.mvc.Security;
import views.html.sitemap.index;
import javax.inject.*;
import java.util.concurrent.CompletionStage;
import play.libs.concurrent.HttpExecutionContext;
import static java.util.concurrent.CompletableFuture.supplyAsync;
import io.ebean.*;
import play.Logger;
import java.util.List;
import play.libs.Json;
import java.util.*;
import java.util.stream.*;
@Security.Authenticated(Secured.class)
public class SiteMap extends Controller {

    private final HttpExecutionContext httpExecutionContext;
    private static final Logger.ALogger logger = Logger.of(SiteMap.class);

    @Inject
    public SiteMap(HttpExecutionContext httpExecutionContext) {
        this.httpExecutionContext = httpExecutionContext;
    }

    public CompletionStage<Result> index() {
        return supplyAsync(() -> {
            return ok(views.html.sitemap.index.render(Sysuser.findByUserName(request().username()), QueryI(), QueryII()));
        }, httpExecutionContext.current());
    }

    /**
     * Custom Query 1
     */
    public List<DataI> QueryI() {
        final String sql = "SELECT sysuser_id, role_id "
                + "from sysuser_role "
                + "where sysuser_id = '1' "
                + "and role_id in ('1','2','3','4','5') ";
        final RawSql rawSql = RawSqlBuilder.parse(sql).create();
        Query<DataI> query = Ebean.find(DataI.class);
        query.setRawSql(rawSql);
        List<DataI> L = query.findList();
        return L;
    }

    /**
     * Custom Query 2
     */
    public List<DataII> QueryII() {
        final String sql = "SELECT sysuser.name, sysuser.active, department.description "
                + "from sysuser "
                + "left join department on department.id = sysuser.department_id "
                + "where sysuser.id = '2' ";
        final RawSql rawSql = RawSqlBuilder.parse(sql).create();
        Query<DataII> query = Ebean.find(DataII.class);
        query.setRawSql(rawSql);
        List<DataII> L = query.findList();
        return L;
    }
}
I am using Java instead of Scala; however, I don't think there is any need for code such as:
1. DB.withConnection{ connection =>
2. val prepareStatement = connection.prepareStatement(select)
...and so on...
What do you think about my code? Is it optimal? I am going to use complex queries to fill some dashboards in this template: https://adminlte.io/themes/v3/index.html
I'm trying to drop a table when deleting a record in the database, but it gives me the following error:
Error logging in: Request processing failed; nested exception is javax.persistence.TransactionRequiredException: Executing an update/delete query
I have read a couple of articles and even some Stack Overflow questions about this error, but none of the answers work. The one that seemed most promising was adding the @Transactional annotation, which I put over the method executeDropTable(), but it gives me the same error. This is my code:
package com.ssc.test.cb3.service;
import com.ssc.test.cb3.dto.ReportRequestDTO;
import com.ssc.test.cb3.dto.mapper.ReportRequestMapper;
import com.ssc.test.cb3.repository.ReportRequestRepository;
import java.util.List;
import org.springframework.stereotype.Service;
import com.ssc.test.cb3.model.ReportRequest;
import com.ssc.test.cb3.repository.ReportTableRepository;
import java.util.Map;
import javax.persistence.EntityManager;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.transaction.annotation.Transactional;
/**
 * Class to prepare the services to be dispatched to the database upon request.
 *
 * @author ssc
 */
@Service
@RequiredArgsConstructor
@Slf4j
public class ReportRequestService {

    private final ReportRequestRepository reportRequestRepository;
    private final EntityManager entityManager;
    private final ReportTableRepository reportTableRepository;

    private static String SERVER_LOCATION = "D:\\JavaProjectsNetBeans\\sscb3Test\\src\\main\\resources\\";

    /**
     * Function to delete a report from the database
     *
     * @param id from the report request object to identify the specific report
     */
    public void delete(int id) {
        ReportRequest reportRequest = reportRequestRepository.findById(id).orElse(null);
        String fileName = reportRequest.getFileName();
        if (reportRequest == null || reportRequest.getStatus() == 1) {
            log.error("It was not possible to delete the selected report as it hasn't been processed yet or it was not found");
        } else {
            reportRequestRepository.deleteById(id);
            log.info("The report request {} was successfully deleted", id);
            new File(SERVER_LOCATION + reportRequest.getFileName()).delete(); // Delete file
            log.info("The file {} was successfully deleted from the server", fileName);
            // DROP created tables with file name without extension
            executeDropTable(fileName);
            log.info("The table {} was successfully deleted from the data base", fileName);
        }
    }

    /**
     * Service to drop the report request tables created on the database when a
     * report request is generated and serviced to be downloaded. This method
     * will be called when a user deletes, in the front-end, a report request in
     * finished status.
     *
     * @param tableName will be the name of the table that was created on the
     * database
     */
    @Transactional
    public void executeDropTable(String tableName) {
        int subtract = 4;
        tableName = tableName.substring(0, tableName.length() - subtract);
        System.out.println("Table name: " + tableName);
        String query = "DROP TABLE :tableName"; // IF EXISTS
        entityManager.createNativeQuery(query)
                .setParameter("tableName", tableName)
                .executeUpdate();
    }
}
Can anyone please help me to sort this out?
A native query literally means "execute this SQL statement on the database", but you are trying to use JPQL-style parameter binding, which does not work for identifiers such as table names.
Your SQL string is invalid; try:
String query = "DROP TABLE " + tableName;
entityManager.createNativeQuery(query).executeUpdate();
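Since DDL cannot take bound parameters, string concatenation is the way to go, but the table name should then be validated before it is spliced into the SQL to avoid injection. A minimal sketch of such a guard; the class and method names here are my own, not part of JPA:

```java
import java.util.regex.Pattern;

public class TableNameGuard {
    // Allow only identifiers that start with a letter and contain
    // letters, digits, or underscores -- nothing a DROP statement could abuse.
    private static final Pattern SAFE = Pattern.compile("[A-Za-z][A-Za-z0-9_]*");

    public static boolean isSafeTableIdentifier(String name) {
        return name != null && SAFE.matcher(name).matches();
    }

    // Build the DDL string only after the identifier passes validation.
    public static String dropTableSql(String name) {
        if (!isSafeTableIdentifier(name)) {
            throw new IllegalArgumentException("Unsafe table name: " + name);
        }
        return "DROP TABLE " + name;
    }
}
```

The resulting string can then be passed to entityManager.createNativeQuery(...).executeUpdate() as in the answer above.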
I have two HBase tables, 'hbaseTable' and 'hbaseTable2', and a Hive table 'hiveTable'.
My query looks like:
'insert overwrite hiveTable select col1, h2.col2, col3 from hbaseTable h1, hbaseTable2 h2 where h1.col = h2.col2';
I need to do an inner join in HBase and bring the data into Hive. We are using Hive with Java, which gives very poor performance, so I am planning to change the approach and use Spark, i.e., Spark with Java.
How do I connect to HBase from my Java code using Spark? My Spark code should do the join in HBase and bring the data into Hive per the above query.
Please provide sample code.
If you are using Spark to load the HBase data, then why load it into Hive at all? You can use Spark SQL, which is similar to Hive (and thus SQL), and query the data without using Hive.
For example:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
import scala.Tuple2;
import java.util.Arrays;
public class SparkHbaseHive {

    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.set(TableInputFormat.INPUT_TABLE, "test");
        JavaSparkContext jsc = new JavaSparkContext(new SparkConf().setAppName("Spark-Hbase").setMaster("local[3]"));
        JavaPairRDD<ImmutableBytesWritable, Result> source = jsc
                .newAPIHadoopRDD(conf, TableInputFormat.class,
                        ImmutableBytesWritable.class, Result.class);
        SQLContext sqlContext = new SQLContext(jsc);
        JavaRDD<Table1Bean> rowJavaRDD =
                source.map((Function<Tuple2<ImmutableBytesWritable, Result>, Table1Bean>) object -> {
                    Table1Bean table1Bean = new Table1Bean();
                    table1Bean.setRowKey(Bytes.toString(object._1().get()));
                    table1Bean.setColumn1(Bytes.toString(object._2().getValue(Bytes.toBytes("colfam1"), Bytes.toBytes("col1"))));
                    return table1Bean;
                });
        DataFrame df = sqlContext.createDataFrame(rowJavaRDD, Table1Bean.class);
        // similarly create df2
        // use df.join() and then register as joinedtable, or register two tables and join
        // execute sql queries
        // Example of an sql query on df
        df.registerTempTable("table1");
        Arrays.stream(sqlContext.sql("select * from table1").collect())
                .forEach(row -> System.out.println(row.getString(0) + "," + row.getString(1)));
    }
}
public class Table1Bean {
    private String rowKey;
    private String column1;

    public String getRowKey() {
        return rowKey;
    }

    public void setRowKey(String rowKey) {
        this.rowKey = rowKey;
    }

    public String getColumn1() {
        return column1;
    }

    public void setColumn1(String column1) {
        this.column1 = column1;
    }
}
If you need to use Hive for some reason, use HiveContext to read from Hive and persist the data using saveAsTable.
Let me know if you have any doubts.
The teacher of my Java coding class has published on his web page an example of what he expects us to code ourselves. However, when I tried to recreate the same code in my own lab, I ran into several errors. Here is the code:
public class StudentRepositoryImpl extends Repository<Student, Long> implements StudentRepositoryCustom {

    private static final Logger log = LoggerFactory.getLogger(StudentRepositoryImpl.class);

    public StudentRepositoryImpl() {
        super(Student.class);
    }

    @Override
    @Transactional
    public List<Student> findAllWithLabsSqlQuery() {
        log.trace("findAllWithLabsSqlQuery: method entered");
        HibernateEntityManager hibernateEntityManager = getEntityManager().unwrap(HibernateEntityManager.class);
        Session session = hibernateEntityManager.getSession();
        Query query = session.createSQLQuery("select distinct {s.*}, {sd.*}, {d.*}" +
                " from student s" +
                " left join student_lab sd on sd.student_id = s.id" +
                " left join lab d on d.id = sd.lab_id")
                .addEntity("s", Student.class)
                .addJoin("sd", "s.studentLabs")
                .addJoin("d", "sd.lab")
                .addEntity("s", Student.class)
                .setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY);
        List<Student> students = query.list();
        log.trace("findAllWithLabsSqlQuery: students={}", students);
        return students;
    }

    @Override
    @Transactional
    public List<Student> findAllWithLabsJpql() {
        log.trace("findAllWithLabsJpql: method entered");
        javax.persistence.Query query = getEntityManager().createQuery("select distinct s from Student s" +
                " left join fetch s.studentLabs sd" +
                " left join fetch sd.lab d");
        List<Student> students = query.getResultList();
        log.trace("findAllWithLabsJpql: students={}", students);
        return students;
    }

    @Override
    @Transactional
    public List<Student> findAllWithLabsJpaCriteria() {
        log.trace("findAllWithLabsJpaCriteria: method entered");
        CriteriaBuilder criteriaBuilder = getEntityManager().getCriteriaBuilder();
        CriteriaQuery<Student> query = criteriaBuilder.createQuery(Student.class);
        query.distinct(Boolean.TRUE);
        Root<Student> from = query.from(Student.class);
        Fetch<Student, StudentLab> studentLabFetch = from.fetch(Student_.studentLabs, JoinType.LEFT);
        studentLabFetch.fetch(StudentLab_.discipline, JoinType.LEFT);
        List<Student> students = getEntityManager().createQuery(query).getResultList();
        log.trace("findAllWithLabsJpaCriteria: students={}", students);
        return students;
    }
}
The errors I am facing are:
1 - the super method ("Cannot resolve method 'super(java.lang.Class)'")
2 - every getEntityManager function ("Cannot resolve method 'getEntityManager'")
3 - the "Student_" and "StudentLab_" ("Cannot resolve symbol")
In his project the same code, albeit with class name modifications, works. Is it something to do with the .iml or pom.xml files?
Here are the imported libraries:
import org.hibernate.Criteria;
import org.hibernate.Query;
import org.hibernate.Session;
import org.hibernate.jpa.HibernateEntityManager;
import org.hibernate.jpa.internal.EntityManagerFactoryImpl;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.transaction.annotation.Transactional;
import javax.persistence.criteria.*;
import java.util.List;
import ro.ubb.books.core.model.*;
import ro.ubb.books.core.repository.*;
If you have also copied pom.xml, try running the following command:
mvn clean package
Note: mvn (Maven) must be installed.
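For context: the unresolved Student_ and StudentLab_ symbols are the JPA static metamodel classes, which are generated at build time by an annotation processor such as hibernate-jpamodelgen. If the teacher's pom.xml includes it and yours does not, those symbols will never exist. A typical dependency entry (the version here is illustrative; match it to your Hibernate version):

```xml
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-jpamodelgen</artifactId>
    <version>5.4.32.Final</version>
    <scope>provided</scope>
</dependency>
```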
I have a simple Query method that runs cypher queries as noted below. If I run the EXACT same query in the web console (yes, same db instance, correct path), I get a non-empty iterator in the console. Shouldn't I 1) not get that message and 2) get the results I see in my database?
This class has other methods that add data to the database and that functionality works well. This query method is not working...
Class:
import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.cypher.javacompat.ExecutionResult;
import org.neo4j.graphdb.Direction;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Relationship;
import org.neo4j.graphdb.RelationshipType;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.helpers.collection.IteratorUtil;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.sql.*;
public class NeoProcessor {

    //private GraphDatabaseService handle;
    private static final String DB_PATH = "/usr/local/Cellar/neo4j/2.0.1/libexec/data/new_graph.db";
    static GraphDatabaseService graphDb = new GraphDatabaseFactory().newEmbeddedDatabase(DB_PATH);

    public NeoProcessor() {
    }

    public void myQuery(String cypherText) {
        //System.out.println("executing the above query");
        cypherText = "MATCH (n:Phone{id:'you'}) MATCH n-[r:calling]->m WHERE n<>m RETURN n, r, m";
        ExecutionEngine engine = new ExecutionEngine(this.graphDb);
        ExecutionResult result;
        try (Transaction ignored = graphDb.beginTx()) {
            result = engine.execute(cypherText + ";");
            System.out.println(result);
            ignored.success();
        }
    }
}
Below is a pic showing how the query returns results from the DB:
result = engine.execute(cypherText + ";");
System.out.println(result.dumpToString());
Specified by:
http://api.neo4j.org/2.0.3/org/neo4j/cypher/javacompat/ExecutionResult.html#dumpToString()
To consume the result you need to use the iterator. If you just want a string representation, use ExecutionResult.dumpToString(). Be aware that this method exhausts the iterator.
You should be calling:
System.out.println(result.dumpToString());
which will prettify it for you. Of course, there is always the possibility that your match returns no results. You should also close the transaction in a finally block, although that won't matter much here.
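The "exhausts the iterator" caveat is ordinary iterator semantics, not anything Neo4j-specific: once the rows have been consumed (by dumpToString() or by your own loop), a second pass over the same result sees nothing. A plain Java illustration of that behavior (the names here are illustrative):

```java
import java.util.Iterator;
import java.util.List;

public class ExhaustDemo {
    // Consume every remaining element once and count them.
    static int drain(Iterator<?> it) {
        int n = 0;
        while (it.hasNext()) {
            it.next();
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        Iterator<String> rows = List.of("n1", "n2", "n3").iterator();
        System.out.println(drain(rows)); // prints 3: first pass sees all rows
        System.out.println(drain(rows)); // prints 0: iterator is exhausted
    }
}
```

So if you call dumpToString() first and then try to iterate the ExecutionResult yourself, the second pass will be empty.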
EDIT: Taking a second look at this, your Cypher query is wrongly formed. It should be:
MATCH (n:Phone) - [r:calling] -> (m)
WHERE n.id = 'you'
RETURN n, r, m
Can somebody give a Java example of a SPARQL insert/delete query in Stardog?
There is only a queryExecution.execSelect() method available; there is no queryExecution.execInsert() or queryExecution.execDelete().
Please give one working example.
EDIT
I found this on the Stardog docs page:
http://stardog.com/docs/#notes
"As of 1.1.5, Stardog's SPARQL 1.1 support does not include: UPDATE query language"
Does that mean there is no way to edit a tuple once it is entered?
Stardog does not yet support SPARQL Update, but as was pointed out to you on the mailing list, there are five ways you can modify the data once it's loaded: you can use our HTTP protocol directly, any of the three Java APIs we support, or the command-line interface.
Below is a sample program that inserts a graph and then removes it.
package com.query;
import java.util.List;
import org.openrdf.model.Graph;
import org.openrdf.model.Statement;
import org.openrdf.model.URI;
import org.openrdf.model.impl.GraphImpl;
import org.openrdf.model.impl.ValueFactoryImpl;
import org.openrdf.query.QueryEvaluationException;
import com.clarkparsia.stardog.StardogDBMS;
import com.clarkparsia.stardog.StardogException;
import com.clarkparsia.stardog.api.Connection;
import com.clarkparsia.stardog.api.ConnectionConfiguration;
import com.clarkparsia.stardog.jena.SDJenaFactory;
import com.hp.hpl.jena.query.ParameterizedSparqlString;
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.ResultSet;
import com.hp.hpl.jena.query.ResultSetFormatter;
import com.hp.hpl.jena.rdf.model.Model;
public class test {

    public static void main(String[] args) throws StardogException, QueryEvaluationException {
        String appDbName = "memDb";
        String selectQuery = "SELECT DISTINCT ?s ?p ?o WHERE { ?s ?p ?o }";
        StardogDBMS dbms = StardogDBMS.toServer("snarl://localhost:5820/")
                .credentials("admin", "admin".toCharArray()).login();
        List<String> dbList = (List<String>) dbms.list();
        if (dbList.contains(appDbName)) {
            System.out.println("dropping " + appDbName);
            dbms.drop(appDbName);
        }
        dbms.createMemory(appDbName);
        dbms.logout();

        Connection aConn = ConnectionConfiguration
                .to("memDb")                   // the name of the db to connect to
                .credentials("admin", "admin") // credentials to use while connecting
                .url("snarl://localhost:5820/")
                .connect();
        Model aModel = SDJenaFactory.createModel(aConn);
        System.out.println("################ GRAPH IS EMPTY BEFORE SUBMITTING = " + aModel.getGraph() + " ################");

        URI order = ValueFactoryImpl.getInstance().createURI("RDF:president1");
        URI givenName = ValueFactoryImpl.getInstance().createURI("RDF:lincoln");
        URI predicate = ValueFactoryImpl.getInstance().createURI("RDF:GivenName");
        Statement aStmt = ValueFactoryImpl.getInstance().createStatement(order, predicate, givenName);
        Graph aGraph = new GraphImpl();
        aGraph.add(aStmt);
        insert(aConn, aGraph);

        ParameterizedSparqlString paraQuery = new ParameterizedSparqlString(selectQuery);
        QueryExecution qExecution = QueryExecutionFactory.create(paraQuery.asQuery(), aModel);
        ResultSet queryResult = qExecution.execSelect();
        System.out.println("############### 1 TUPLE CAME AFTER INSERT ################");
        ResultSetFormatter.out(System.out, queryResult);

        aGraph.add(aStmt);
        remove(aConn, aGraph);
        paraQuery = new ParameterizedSparqlString(selectQuery);
        qExecution = QueryExecutionFactory.create(paraQuery.asQuery(), aModel);
        queryResult = qExecution.execSelect();
        System.out.println("################ DB AGAIN EMPTY AFTER REMOVE ################");
        ResultSetFormatter.out(System.out, queryResult);

        System.out.println("closing connection and model");
        aModel.close();
        aConn.close();
    }

    private static void insert(final Connection theConn, final Graph theGraph) throws StardogException {
        theConn.begin();
        theConn.add().graph(theGraph);
        theConn.commit();
    }

    private static void remove(final Connection theConn, final Graph theGraph) throws StardogException {
        theConn.begin();
        theConn.remove().graph(theGraph);
        theConn.commit();
    }
}