Null Pointer in BigQueryIO Write - java

I'm getting a null pointer exception in my Dataflow pipeline, yet all of the values at the location of the exception appear to be defined correctly. The code reads from a database, applies some conversions to the result set, and then tries to create a table in an existing dataset from the resulting table rows. I've confirmed that the values passed to the BigQueryIO.writeTableRows() call are all valid, but I still get an exception on the line where the write to BigQuery happens. I've marked the location of the null pointer exception in the code below.
// Gather First query results
WriteResult results = pipeline
    .apply("Connect", JdbcIO.<TableRow>read()
        .withDataSourceConfiguration(buildDataSourceConfig(options, URL))
        .withQuery(query)
        .withRowMapper(new JdbcIO.RowMapper<TableRow>() {
            // Convert each ResultSet row to a TableRow
            public TableRow mapRow(ResultSet rs) throws Exception {
                ResultSetMetaData md = rs.getMetaData();
                int columnCount = md.getColumnCount();
                TableRow tr = new TableRow();
                for (int i = 1; i <= columnCount; i++) {
                    String name = md.getColumnName(i);
                    tr.set(name, rs.getString(name));
                }
                return tr;
            }
        }))
    .setCoder(TableRowJsonCoder.of())
    .apply("Write to BQ", // *** the NullPointerException is thrown here ***
        BigQueryIO.writeTableRows()
            .withSchema(schema)
            .to(dataset)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
            .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
2023-01-10T20:33:22.4214526Z WARNING: Unable to infer a schema for type com.google.api.services.bigquery.model.TableRow. Attempting to infer a coder without a schema.
2023-01-10T20:33:22.4216783Z Exception in thread "main" java.lang.NullPointerException
2023-01-10T20:33:22.4218945Z at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$Write.validateNoJsonTypeInSchema(BigQueryIO.java:3035)
2023-01-10T20:33:22.4221029Z at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$Write.continueExpandTyped(BigQueryIO.java:2949)
2023-01-10T20:33:22.4222727Z at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$Write.expandTyped(BigQueryIO.java:2880)
2023-01-10T20:33:22.4226464Z at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$Write.expand(BigQueryIO.java:2776)
2023-01-10T20:33:22.4228072Z at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$Write.expand(BigQueryIO.java:1786)
2023-01-10T20:33:22.4234778Z at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:548)
2023-01-10T20:33:22.4237961Z at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
2023-01-10T20:33:22.4240010Z at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:376)
2023-01-10T20:33:22.4242466Z at edu.mayo.mcc.aide.sqaTransfer.SqaTransfer.buildPipeline(SqaTransfer.java:133)
2023-01-10T20:33:22.4244722Z at edu.mayo.mcc.aide.sqaTransfer.SqaTransfer.main(SqaTransfer.java:99)
2023-01-10T20:33:22.4246444Z . exit status 1

The error is caused by a bad schema being passed to the withSchema(schema) method of BigQueryIO.
The schema can be created with a TableSchema object:
TableSchema schema =
    new TableSchema()
        .setFields(
            Arrays.asList(
                new TableFieldSchema()
                    .setName("string_field")
                    .setType("STRING")
                    .setMode("REQUIRED"),
                new TableFieldSchema()
                    .setName("int64_field")
                    .setType("INT64")
                    .setMode("NULLABLE"),
                new TableFieldSchema()
                    .setName("float64_field")
                    .setType("FLOAT64"), // default mode is "NULLABLE"
                new TableFieldSchema().setName("numeric_field").setType("NUMERIC"),
                new TableFieldSchema().setName("bool_field").setType("BOOL"),
                new TableFieldSchema().setName("bytes_field").setType("BYTES"),
                new TableFieldSchema().setName("date_field").setType("DATE"),
                new TableFieldSchema().setName("datetime_field").setType("DATETIME"),
                new TableFieldSchema().setName("time_field").setType("TIME"),
                new TableFieldSchema().setName("timestamp_field").setType("TIMESTAMP"),
                new TableFieldSchema().setName("geography_field").setType("GEOGRAPHY"),
                new TableFieldSchema()
                    .setName("array_field")
                    .setType("INT64")
                    .setMode("REPEATED")
                    .setDescription("Setting the mode to REPEATED makes this an ARRAY<INT64>."),
                new TableFieldSchema()
                    .setName("struct_field")
                    .setType("STRUCT")
                    .setDescription(
                        "A STRUCT accepts a custom data class, the fields must match the custom class fields.")
                    .setFields(
                        Arrays.asList(
                            new TableFieldSchema().setName("string_value").setType("STRING"),
                            new TableFieldSchema().setName("int64_value").setType("INT64")))));
// In the IO:
rows.apply(
    "Write to BigQuery",
    BigQueryIO.writeTableRows()
        .to(String.format("%s:%s.%s", project, dataset, table))
        .withSchema(schema)
        ...
You can also use a JSON schema:
String tableSchemaJson =
""
+ "{"
+ " \"fields\": ["
+ " {"
+ " \"name\": \"source\","
+ " \"type\": \"STRING\","
+ " \"mode\": \"NULLABLE\""
+ " },"
+ " {"
+ " \"name\": \"quote\","
+ " \"type\": \"STRING\","
+ " \"mode\": \"REQUIRED\""
+ " }"
+ " ]"
+ "}";
// In the IO:
rows.apply(
    "Write to BigQuery",
    BigQueryIO.writeTableRows()
        .to(String.format("%s:%s.%s", project, dataset, table))
        .withJsonSchema(tableSchemaJson)
        ...
You can check the documentation for more details.
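If the column set is only known at runtime (as in the JdbcIO pipeline above), the schema can also be built programmatically. Here is a minimal sketch, assuming the column names can be listed up front; since the RowMapper above stores every value with rs.getString(...), declaring each field as STRING keeps the schema consistent with the rows being written:
import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableSchema;
import java.util.ArrayList;
import java.util.List;

// Sketch: build a TableSchema where every column is a nullable STRING,
// matching rows produced by a mapper that calls rs.getString(...) per column.
static TableSchema buildStringSchema(List<String> columnNames) {
    List<TableFieldSchema> fields = new ArrayList<>();
    for (String name : columnNames) {
        fields.add(new TableFieldSchema()
            .setName(name)
            .setType("STRING")
            .setMode("NULLABLE"));
    }
    return new TableSchema().setFields(fields);
}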

Related

parse strange Json response to a list

I want to parse this object into a list of strings. I don't need the keys; I just want the values as a list of strings.
I can't use a simple model class because the keys are random and there can be more than 1000 of them in some responses.
So, any idea how to parse this into a list in Kotlin or Java?
{
"data": {
"21": "593754434425",
"22": "4560864343802",
"23": "7557134347529",
"24": "5937544344255",
"25": "45608643438024",
"26": "75571343475293"
}
}
You could first deserialize it as it is, and then convert to a list.
The JSON can be represented this way:
data class Response(val data: Map<String, String>)
You can mark this class @Serializable and use kotlinx.serialization to deserialize it, or you can use other libraries like Moshi or Jackson (with jackson-module-kotlin).
Once it's deserialized, simply get the values of the map (it's a collection):
val response = Json.decodeFromString<Response>(yourJsonString)
// this is a Collection, not List, but it should be good enough
val stringValues = response.data.values
// if you really need a List<String>
val list = stringValues.toList()
If you want to get the values in the natural order of the keys, you can also use something like:
val values = response.data.toSortedMap(compareBy<String> { it.toInt() }).values
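If you are on the Java side, the same map-based idea works with Jackson; a sketch, assuming jackson-databind is on the classpath:
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch: deserialize the payload as a map of maps, then keep only the values.
static List<String> parseValues(String json) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    Map<String, Map<String, String>> root =
        mapper.readValue(json, new TypeReference<Map<String, Map<String, String>>>() {});
    // The keys ("21", "22", ...) are discarded; only the values are kept.
    return new ArrayList<>(root.getOrDefault("data", Map.of()).values());
}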
You can use this to parse your data (assuming json is the "data" JSONObject):
val it: Iterator<String> = json.keys()
val arrayList = ArrayList<String>()
while (it.hasNext()) {
    val key = it.next()
    arrayList.add(json.getString(key))
}
A better way is to change the JSON model, if you have access to it:
{
"data": [
"593754434425","4560864343802",
"7557134347529","5937544344255",
"45608643438024","75571343475293"
]
}
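With that array shape, the parsing collapses to a few lines; a sketch using org.json (the same library the next answer uses):
import org.json.JSONArray;
import org.json.JSONObject;
import java.util.ArrayList;
import java.util.List;

// Sketch: with "data" as a plain JSON array, the values can be read directly.
JSONArray data = new JSONObject(input).getJSONArray("data");
List<String> values = new ArrayList<>();
for (int i = 0; i < data.length(); i++) {
    values.add(data.getString(i));
}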
For this problem, it's handy to use the org.json library.
See the following code snippet:
import org.json.JSONObject;
import java.util.ArrayList;
public class Main {
public static void main(String[] args) {
// Defining the input
String input = "{\n" +
" \"data\": {\n" +
" \"21\": \"593754434425\",\n" +
" \"22\": \"4560864343802\",\n" +
" \"23\": \"7557134347529\",\n" +
" \"24\": \"5937544344255\",\n" +
" \"25\": \"45608643438024\",\n" +
" \"26\": \"75571343475293\"\n" +
" }\n" +
"}\n";
// Parsing it to a json object with org.json
JSONObject inputJson = new JSONObject(input);
// If inputJson does not contain the key data, we return
if(!inputJson.has("data")) return;
// Else we read this data object to a new JSONObject
JSONObject dataJson = inputJson.getJSONObject("data");
// Define an array list where all the values will be contained
ArrayList<String> values = new ArrayList<>();
// Get the key set of the data JSON object. For each key, get its respective value and add it to the values list
for (String key : dataJson.keySet()) values.add(dataJson.getString(key));
// Print all values
for (String value : values) System.out.println(value);
}
}
=>
4560864343802
7557134347529
5937544344255
45608643438024
75571343475293
593754434425
Installing org.json is easiest with a build tool like Maven or Gradle.
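For Maven, for example, it is a single dependency (the version here is just an example; use whatever is current):
<dependency>
    <groupId>org.json</groupId>
    <artifactId>json</artifactId>
    <version>20230227</version>
</dependency>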
I came up with a similar solution for this problem. This is my model class:
data class UnVerifiedTagIds(
    @SerializedName("data")
    val data: Map<String, String>
)
and this is how I parse the response:
val list_of_tag_ids: ArrayList<String> = ArrayList(response.data.values)
The first snippet is the data class for the response; I'm using Retrofit for the API call itself.
I'm using Kotlin. Declare a data class like this: data class Result(val data: Map<String, String>)
and use the Gson library to convert the JSON string into this model:
val json = "{\n" +
" \"data\": {\n" +
" \"21\": \"593754434425\",\n" +
" \"22\": \"4560864343802\",\n" +
" \"23\": \"7557134347529\",\n" +
" \"24\": \"5937544344255\",\n" +
" \"25\": \"45608643438024\",\n" +
" \"26\": \"75571343475293\"\n" +
" }\n" +
"}"
val dat = Gson().fromJson(json,Result::class.java)
if (dat.data.isNotEmpty()){
val list= dat.data.values.toMutableList()
print(list)
}
That works fine for me.

Unable to collect data from metric query language MQL - GCP

I want to execute MQL (Monitoring Query Language) queries using the library below.
<dependency>
<groupId>com.google.apis</groupId>
<artifactId>google-api-services-monitoring</artifactId>
<version>v3-rev540-1.25.0</version>
</dependency>
Here is my code snippet, which creates a Monitoring client and tries to collect data from GCP Monitoring.
public void queryTimeSeriesData() throws IOException {
// create monitoring
Monitoring m = createAuthorizedMonitoringClient();
QueryTimeSeriesRequest req = new QueryTimeSeriesRequest();
String query = "fetch consumed_api\n" +
"| metric 'serviceruntime.googleapis.com/api/request_count'\n" +
"| align rate(2m)\n" +
"| every 2m\n" +
"| group_by [metric.response_code],\n" +
" [value_request_count_max: max(value.request_count)]";
req.setQuery(query);
HashMap<String, Object> queryTransformationSpec = new HashMap<String, Object>();
HashMap<String, Object> timingState = new HashMap<String, Object>();
HashMap<String, Object> absoluteWindow = new HashMap<String, Object>();
absoluteWindow.put("startTime", "2020-09-03T12:40:00.000Z");
absoluteWindow.put("endTime", "2020-09-03T13:41:00.000Z");
timingState.put("absoluteWindow", absoluteWindow);
timingState.put("graphPeriod", "60s");
timingState.put("queryPeriod", "60s");
queryTransformationSpec.put("timingState", timingState);
req.set("queryTransformationSpec", queryTransformationSpec);
req.set("reportPeriodicStats", false);
req.set("reportQueryPlan", false);
QueryTimeSeriesResponse res = m.projects().timeSeries().query("projects/MY_PROJECT_NAME", req).execute();
System.out.println(res);
}
The code above works, but it does not return data for the given startTime and endTime;
it always returns the latest data point available. Is there a problem with my code?
I found a way to execute an MQL query with a given time range. The new working code is the following:
public void queryTimeSeriesData() throws IOException {
// create monitoring
Monitoring m = createAuthorizedMonitoringClient();
QueryTimeSeriesRequest req = new QueryTimeSeriesRequest();
String query = "fetch consumed_api\n" +
"| metric 'serviceruntime.googleapis.com/api/request_count'\n" +
"| align rate(5m)\n" +
"| every 5m\n" +
"| group_by [metric.response_code],\n" +
" [value_request_count_max: max(value.request_count)]" +
"| within d'2020/09/03-12:40:00', d'2020/09/03-12:50:00'\n";
req.setQuery(query);
QueryTimeSeriesResponse res = m.projects().timeSeries().query("projects/MY_PROJECT_NAME", req).execute();
System.out.println(res);
}
I included the query start and end times in the query itself using the within operator. Per the Google docs for MQL queries:
within - Specifies the time range of the query output.
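Since the time window is now part of the query text, it can be parameterized with ordinary string formatting; a sketch, reusing the d'YYYY/MM/DD-HH:MM:SS' literal format from the working query above:
// Sketch: append a "within" clause for an arbitrary window before setting the query.
String baseQuery = "fetch consumed_api\n"
    + "| metric 'serviceruntime.googleapis.com/api/request_count'\n"
    + "| align rate(5m)\n"
    + "| every 5m\n";
String start = "2020/09/03-12:40:00";
String end = "2020/09/03-12:50:00";
req.setQuery(baseQuery + String.format("| within d'%s', d'%s'", start, end));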

DQL drop group_name in DFC

I have a list of group names which the app reads from an external .txt file.
I want to pass the group names to a method as a List<String> and execute a DQL query for each of them, something like:
for (String s : groupnames) {
    dql = "DROP GROUP " + s;
    System.out.println("dropped group: " + s);
}
How do I write/execute the DQL?
I have solved it myself:
private static void deleteGroups(List<String> groupsToDelete) {
try {
DfClientX clientX = new DfClientX();
IDfQuery query = clientX.getQuery();
for (String s : groupsToDelete){
query.setDQL("DROP GROUP '" + s + "'");
printInfo("Executing DQL: " + query.getDQL());
query.execute(_session, 0);
}
} catch (DfException e) {
printError(e.getMessage());
DfLogger.error("app", "DQL DROP GROUP execution", null,e);
}
}
I'm not quite sure whether the Content Server permits deleting a group via DFC, but it should look something like:
IDfQuery query = new DfQuery();
query.setDQL("DROP GROUP <group_name>");
query.execute(getSession(), IDfQuery.DF_EXEC_QUERY);
There is certainly a way to instantiate the group object in memory and call a delete method on it; I'll try to check it out.
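For that object-based route, something like the following should work; this is a sketch only, since getObjectByQualification and destroy are standard DFC calls, but whether the Content Server allows destroying dm_group objects this way depends on your setup and permissions:
// Hypothetical sketch: fetch the group object by qualification and destroy it.
IDfPersistentObject group = session.getObjectByQualification(
    "dm_group where group_name = '" + groupName + "'");
if (group != null) {
    group.destroy(); // server-side equivalent of DROP GROUP, permissions permitting
}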

Error in Lucene text Search

I'm new to text search and I'm studying some examples related to Lucene. I found one example at this link: http://javatechniques.com/blog/lucene-in-memory-text-search-example/ I tried it in my Eclipse IDE, but it gives some errors even though I imported all the relevant jar files.
Here is the code:
public class InMemoryExample {
public static void main(String[] args) {
// Construct a RAMDirectory to hold the in-memory representation
// of the index.
RAMDirectory idx = new RAMDirectory();
try {
// Make an writer to create the index
IndexWriter writer =
new IndexWriter(idx, new StandardAnalyzer(Version.LUCENE_48),
IndexWriter.MaxFieldLength.LIMITED);
// Add some Document objects containing quotes
writer.addDocument(createDocument("Theodore Roosevelt",
"It behooves every man to remember that the work of the " +
"critic, is of altogether secondary importance, and that, " +
"in the end, progress is accomplished by the man who does " +
"things."));
writer.addDocument(createDocument("Friedrich Hayek",
"The case for individual freedom rests largely on the " +
"recognition of the inevitable and universal ignorance " +
"of all of us concerning a great many of the factors on " +
"which the achievements of our ends and welfare depend."));
writer.addDocument(createDocument("Ayn Rand",
"There is nothing to take a man's freedom away from " +
"him, save other men. To be free, a man must be free " +
"of his brothers."));
writer.addDocument(createDocument("Mohandas Gandhi",
"Freedom is not worth having if it does not connote " +
"freedom to err."));
// Optimize and close the writer to finish building the index
writer.optimize();
writer.close();
// Build an IndexSearcher using the in-memory index
Searcher searcher = new IndexSearcher(idx);
// Run some queries
search(searcher, "freedom");
search(searcher, "free");
search(searcher, "progress or achievements");
searcher.close();
}
catch (IOException ioe) {
// In this example we aren't really doing an I/O, so this
// exception should never actually be thrown.
ioe.printStackTrace();
}
catch (ParseException pe) {
pe.printStackTrace();
}
}
/**
* Make a Document object with an un-indexed title field and an
* indexed content field.
*/
private static Document createDocument(String title, String content) {
Document doc = new Document();
// Add the title as an unindexed field...
doc.add(new Field("title", title, Field.Store.YES, Field.Index.NO));
// ...and the content as an indexed field. Note that indexed
// Text fields are constructed using a Reader. Lucene can read
// and index very large chunks of text, without storing the
// entire content verbatim in the index. In this example we
// can just wrap the content string in a StringReader.
doc.add(new Field("content", content, Field.Store.YES, Field.Index.ANALYZED));
return doc;
}
/**
* Searches for the given string in the "content" field
*/
private static void search(Searcher searcher, String queryString)
throws ParseException, IOException {
// Build a Query object
//Query query = QueryParser.parse(
QueryParser parser = new QueryParser("content", new StandardAnalyzer(Version.LUCENE_48));
Query query = parser.parse(queryString);
int hitsPerPage = 10;
// Search for the query
TopScoreDocCollector collector = TopScoreDocCollector.create(5 * hitsPerPage, false);
searcher.search(query, collector);
ScoreDoc[] hits = collector.topDocs().scoreDocs;
int hitCount = collector.getTotalHits();
System.out.println(hitCount + " total matching documents");
// Examine the Hits object to see if there were any matches
if (hitCount == 0) {
System.out.println(
"No matches were found for \"" + queryString + "\"");
} else {
System.out.println("Hits for \"" +
queryString + "\" were found in quotes by:");
// Iterate over the Documents in the Hits object
for (int i = 0; i < hitCount; i++) {
// Document doc = hits.doc(i);
ScoreDoc scoreDoc = hits[i];
int docId = scoreDoc.doc;
float docScore = scoreDoc.score;
System.out.println("docId: " + docId + "\t" + "docScore: " + docScore);
Document doc = searcher.doc(docId);
// Print the value that we stored in the "title" field. Note
// that this Field was not indexed, but (unlike the
// "contents" field) was stored verbatim and can be
// retrieved.
System.out.println(" " + (i + 1) + ". " + doc.get("title"));
System.out.println("Content: " + doc.get("content"));
}
}
System.out.println();
} }
but it shows a few syntax errors on the following lines:
Error 1: MaxFieldLength underlined in red:
IndexWriter writer =
    new IndexWriter(idx, new StandardAnalyzer(Version.LUCENE_48),
        IndexWriter.MaxFieldLength.LIMITED);
Error 2: optimize() underlined in red:
writer.optimize();
Error 3: new IndexSearcher(idx) underlined in red:
Searcher searcher = new IndexSearcher(idx);
Error 4: search underlined in red:
searcher.search(query, collector);
Could you please help me get rid of these errors? It would be a great help. Thanks.
Modified code:
public class InMemoryExample {
public static void main(String[] args) throws Exception{
// Construct a RAMDirectory to hold the in-memory representation
// of the index.
RAMDirectory idx = new RAMDirectory();
// Make an writer to create the index
IndexWriterConfig cfg = new IndexWriterConfig(Version.LUCENE_48, new
StandardAnalyzer(Version.LUCENE_48));
IndexWriter writer = new IndexWriter(idx, cfg);
// Add some Document objects containing quotes
writer.addDocument(createDocument("Theodore Roosevelt",
"It behooves every man to remember that the work of the " +
"critic, is of altogether secondary importance, and that, " +
"in the end, progress is accomplished by the man who does " +
"things."));
writer.addDocument(createDocument("Friedrich Hayek",
"The case for individual freedom rests largely on the " +
"recognition of the inevitable and universal ignorance " +
"of all of us concerning a great many of the factors on " +
"which the achievements of our ends and welfare depend."));
writer.addDocument(createDocument("Ayn Rand",
"There is nothing to take a man's freedom away from " +
"him, save other men. To be free, a man must be free " +
"of his brothers."));
writer.addDocument(createDocument("Mohandas Gandhi",
"Freedom is not worth having if it does not connote " +
"freedom to err."));
// Optimize and close the writer to finish building the index
writer.commit();
writer.close();
// Build an IndexSearcher using the in-memory index
IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(idx));
// Run some queries
search(searcher, "freedom");
search(searcher, "free");
search(searcher, "progress or achievements");
//searcher.close();
}
/**
* Make a Document object with an un-indexed title field and an
* indexed content field.
*/
private static Document createDocument(String title, String content) {
Document doc = new Document();
// Add the title as an unindexed field...
doc.add(new Field("title", title, Field.Store.YES, Field.Index.NO));
// ...and the content as an indexed field. Note that indexed
// Text fields are constructed using a Reader. Lucene can read
// and index very large chunks of text, without storing the
// entire content verbatim in the index. In this example we
// can just wrap the content string in a StringReader.
doc.add(new Field("content", content, Field.Store.YES, Field.Index.ANALYZED));
return doc;
}
/**
* Searches for the given string in the "content" field
*/
private static void search(IndexSearcher searcher, String queryString)
throws ParseException, IOException {
// Build a Query object
//Query query = QueryParser.parse(
QueryParser parser = new QueryParser("content", new StandardAnalyzer(Version.LUCENE_48));
Query query = parser.parse(queryString);
int hitsPerPage = 10;
// Search for the query
TopScoreDocCollector collector = TopScoreDocCollector.create(5 * hitsPerPage, false);
searcher.search(query, collector);
ScoreDoc[] hits = collector.topDocs().scoreDocs;
int hitCount = collector.getTotalHits();
System.out.println(hitCount + " total matching documents");
// Examine the Hits object to see if there were any matches
if (hitCount == 0) {
System.out.println(
"No matches were found for \"" + queryString + "\"");
} else {
System.out.println("Hits for \"" +
queryString + "\" were found in quotes by:");
// Iterate over the Documents in the Hits object
for (int i = 0; i < hitCount; i++) {
// Document doc = hits.doc(i);
ScoreDoc scoreDoc = hits[i];
int docId = scoreDoc.doc;
float docScore = scoreDoc.score;
System.out.println("docId: " + docId + "\t" + "docScore: " + docScore);
Document doc = searcher.doc(docId);
// Print the value that we stored in the "title" field. Note
// that this Field was not indexed, but (unlike the
// "contents" field) was stored verbatim and can be
// retrieved.
System.out.println(" " + (i + 1) + ". " + doc.get("title"));
System.out.println("Content: " + doc.get("content"));
}
}
System.out.println();
} }
and this is the output:
Exception in thread "main" java.lang.VerifyError: class
org.apache.lucene.analysis.SimpleAnalyzer overrides final method
tokenStream.(Ljava/lang/String;Ljava/io/Reader;)Lorg/apache/lucene/analysis/TokenStream;
at java.lang.ClassLoader.defineClass1(Native Method) at
java.lang.ClassLoader.defineClass(Unknown Source) at
java.security.SecureClassLoader.defineClass(Unknown Source) at
java.net.URLClassLoader.defineClass(Unknown Source) at
java.net.URLClassLoader.access$100(Unknown Source) at
java.net.URLClassLoader$1.run(Unknown Source) at
java.net.URLClassLoader$1.run(Unknown Source) at
java.security.AccessController.doPrivileged(Native Method) at
java.net.URLClassLoader.findClass(Unknown Source) at
java.lang.ClassLoader.loadClass(Unknown Source) at
sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) at
java.lang.ClassLoader.loadClass(Unknown Source) at
beehex.inmemeory.textsearch.InMemoryExample.search(InMemoryExample.java:98)
at
beehex.inmemeory.textsearch.InMemoryExample.main(InMemoryExample.java:58)
I don't see a three-argument IndexWriter constructor. You should modify the code to fit the new Lucene API, like so:
IndexWriterConfig cfg = new IndexWriterConfig(Version.LUCENE_48, new StandardAnalyzer(Version.LUCENE_48));
IndexWriter writer = new IndexWriter(idx, cfg);
Also, rather than catching the exceptions here, I'd make the main method throw Exception and let the program fail altogether.
EDIT:
2) Remove the optimize() call, as the IndexWriter class no longer has that method (I think commit() will do the trick here).
3) Define the IndexSearcher like so:
IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(idx));
As for the VerifyError in your output: that error typically means two incompatible Lucene versions are on the classpath (the old org.apache.lucene.analysis.SimpleAnalyzer class overrides a method that is final in Lucene 4.x), so check that no Lucene 3.x jar is still on the build path.

calling a java class in a servlet

In my servlet I create an instance of a Java class (a class that constructs an HTML table) in order to render this table in my JSP.
The servlet looks like the following:
String report=request.getParameter("selrep");
String datev=request.getParameter("datepicker");
String op=request.getParameter("operator");
String batch =request.getParameter("selbatch");
System.out.println("report kind was:"+report);
System.out.println("date was:"+datev);
System.out.println("operator:"+op);
System.out.println("batch:"+batch);
if(report.equalsIgnoreCase("Report Denied"))
{
DeniedReportDisplay rd = new DeniedReportDisplay();
rd.ConstruireReport();
}
else if(report.equalsIgnoreCase("Report Locked"))
{
LockedReportDisplay rl = new LockedReportDisplay();
rl.ConstruireReport();
}
request.getRequestDispatcher("EspaceValidation.jsp").forward(request, response);
In my JSP I cannot display this table, whether empty or full.
Note: as an example, the class that constructs the denied report has this structure:
/* Constructor */
public DeniedReportDisplay() {}
/* Methods */
@SuppressWarnings("unchecked")
public StringBuffer ConstruireReport()
{
StringBuffer retour=new StringBuffer();
int i = 0;
retour.append("<table border = 1 width=900 id=sheet align=left>");
retour.append("<tr bgcolor=#0099FF>" );
retour.append("<label> Denied Report</label>");
retour.append("</tr>");
retour.append("<tr>");
String[] nomCols ={"Nom","Prenom","trackingDate","activity","projectcode","WAName","taskCode","timeSpent","PercentTaskComplete","Comment"};
//String HQL_QUERY = null;
for(i=0;i< nomCols.length;i++)
{
retour.append(("<td bgcolor=#0066CC>")+ nomCols[i] + "</td>");
}
retour.append("</tr>");
retour.append("<tr>");
try {
s= HibernateUtil.currentSession();
tx=s.beginTransaction();
Query query = s.createQuery("select opcemployees.Nom,opcemployees.Prenom,dailytimesheet.TrackingDate,dailytimesheet.Activity," +
"dailytimesheet.ProjectCode,dailytimesheet.WAName,dailytimesheet.TaskCode," +
"dailytimesheet.TimeSpent,dailytimesheet.PercentTaskComplete from Opcemployees opcemployees,Dailytimesheet dailytimesheet " +
"where opcemployees.Matricule=dailytimesheet.Matricule and dailytimesheet.Etat=3 " +
"group by opcemployees.Nom,opcemployees.Prenom" );
for(Iterator it=query.iterate();it.hasNext();)
{
if(it.hasNext()){
Object[] row = (Object[]) it.next();
retour.append("<td>" +row [0]+ "</td>");//Nom
retour.append("<td>" + row [1] + "</td>");//Prenom
retour.append("<td>" + row [2] + "</td>");//trackingdate
retour.append("<td>" + row [3]+ "</td>");//activity
retour.append("<td>" + row [4] +"</td>");//projectcode
retour.append("<td>" + row [5]+ "</td>");//waname
retour.append("<td>" + row [6] + "</td>");//taskcode
retour.append("<td>" + row [7] + "</td>");//timespent
retour.append("<td>" + row [8] + "</td>");//perecnttaskcomplete
retour.append("<td><input type=text /></td>");//case de commentaire
}
retour.append("</tr>");
}
// finish the table.
retour.append ("</table>");
tx.commit();
} catch (HibernateException e)
{
retour.append ("</table><H1>ERREUR:</H1>" +e.getMessage());
e.printStackTrace();
}
return retour;
}
Thanks for the help.
1) The instances of DeniedReportDisplay and LockedReportDisplay are created locally; there is no way to refer to them outside the if..else block.
2) The invoked method (rd.ConstruireReport()) returns a StringBuffer, and you should store it somewhere. Try using response.getWriter() and writing the response string into that writer.
3) I suggest you find a good tutorial book on designing Servlets/JSPs; the solution you are trying to build is quite weird.
The problem is that you are not doing anything with the return value of ConstruireReport(), so it just gets lost. You should set it as a request attribute so your JSP can find the string.
EDIT: Suggestion to use getWriter() on the servlet removed - misunderstood scenario.
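Concretely, the request-attribute approach means capturing the returned StringBuffer in the servlet and exposing it to the JSP; a sketch, where the attribute name reportTable is arbitrary:
// In the servlet: keep the generated markup and hand it to the JSP.
StringBuffer table;
if (report.equalsIgnoreCase("Report Denied")) {
    table = new DeniedReportDisplay().ConstruireReport();
} else {
    table = new LockedReportDisplay().ConstruireReport();
}
request.setAttribute("reportTable", table.toString());
request.getRequestDispatcher("EspaceValidation.jsp").forward(request, response);
In EspaceValidation.jsp the table can then be printed with ${reportTable} (or <%= request.getAttribute("reportTable") %> on older containers).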
