I have around 27,000 documents that I would like to write into my Firestore collection.
In order to do so, I use a WriteBatch and commit it every 500 documents.
I get the following error:
java.lang.IllegalStateException: A write batch can no longer be used after commit() has been called.
The code I'm using is:
private void addData2DB(ArrayList<String> titles, ArrayList<String> authors, ArrayList<String> publishers, ArrayList<String> genres, ArrayList<String> pages) {
    db = FirebaseFirestore.getInstance();
    WriteBatch batch = db.batch();
    double counter = 1.0;
    for (int i = 0; i <= titles.size(); i++) {
        Map<String, Object> data = new HashMap<>();
        String title = titles.get(i);
        String[] TitleWords = title.split("\\s+");
        if (!title.isEmpty()) {
            String docID = db.collection("GeneratingID").document().getId();
            data.put("bookAuthor", authors.get(i));
            data.put("bookTitle", titles.get(i));
            data.put("genre", genres.get(i));
            data.put("pages", pages.get(i));
            data.put("publishedBy", publishers.get(i));
            DocumentReference dr = db.collection("BooksDB").document(docID);
            batch.set(dr, data);
        }
        if ((double) i / 500 == counter) {
            batch.commit();
            counter = counter + 1;
        }
    }
}
Is there a way to fix this?
Thank you
Move the WriteBatch batch = db.batch(); inside your for loop.
You are getting the following error:
java.lang.IllegalStateException: A write batch can no longer be used after commit() has been called.
Most likely this happens because a batch object cannot be modified again after commit() has been called on it. Writing 500 documents to Firestore takes some time, as it's an asynchronous operation, but even once the commit completes, the same batch can't be reused.
So you need to create a new WriteBatch for each group of 500 writes before adding new write operations and calling commit() again.
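The pattern can be sketched generically - plain lists stand in for the Firestore WriteBatch and commit() here, since this sketch doesn't touch the real SDK. The two key points: start a fresh batch after every commit, and commit the final partial batch after the loop (the original i / 500 == counter check also never commits a last, partial batch):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchChunks {
    static final int BATCH_LIMIT = 500; // Firestore's per-batch write limit

    // Splits `docs` into chunks of at most BATCH_LIMIT, mirroring the way
    // a new WriteBatch must be created after each commit().
    static List<List<String>> chunk(List<String> docs) {
        List<List<String>> committed = new ArrayList<>();
        List<String> batch = new ArrayList<>();   // stands in for db.batch()
        for (String doc : docs) {
            batch.add(doc);                       // stands in for batch.set(ref, data)
            if (batch.size() == BATCH_LIMIT) {
                committed.add(batch);             // stands in for batch.commit()
                batch = new ArrayList<>();        // key fix: a NEW batch afterwards
            }
        }
        if (!batch.isEmpty()) {
            committed.add(batch);                 // commit the final partial batch
        }
        return committed;
    }

    public static void main(String[] args) {
        List<String> docs = new ArrayList<>();
        for (int i = 0; i < 1234; i++) docs.add("doc" + i);
        System.out.println(chunk(docs).size());   // 3 batches: 500 + 500 + 234
    }
}
```

In the real code this corresponds to assigning batch = db.batch() again right after batch.commit(), and issuing one more commit() after the loop for the remainder. Note also that the loop bound should be i < titles.size(), not <=.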
I'm trying to parse JSON in VBA.
I'm collecting data from an API that returns JSON as a string.
I use JsonConverter to parse my string.
Now, when I want to search in it, I get an error 13, type mismatch.
See my Java API below:
@GetMapping("/rest/collectData/excel/exportAll")
public HashMap<Object, Object> collectAll() {
    HashMap<Object, Object> result = new HashMap<>();
    String sql = "SELECT affair_code AS codeAffair, name, amount, end_date AS state FROM service_record WHERE affair_code IS NOT NULL AND affair_code != ''";
    List<Map<String, Object>> allServiceRecords = jdbcTemplate.queryForList(sql);
    if (allServiceRecords != null && allServiceRecords.size() > 0) {
        result.put("result", true);
        for (Map<String, Object> serviceRecord : allServiceRecords) {
            HashMap<Object, Object> details = new HashMap<>();
            if (result.containsKey(serviceRecord.get("codeAffair"))) {
                details.put("alone", false);
                details.put("message", "Plusieurs prestations ont été trouvées.");
            } else {
                details.put("alone", true);
                details.put("name", (String) serviceRecord.get("name"));
                details.put("amount", (Double) serviceRecord.get("amount"));
                details.put("state", ((Date) serviceRecord.get("state")).compareTo(new Date()) < 0 ? "En cours" : "Clos");
            }
            result.put(serviceRecord.get("codeAffair"), details);
        }
    } else {
        result.put("result", false);
        result.put("error", "La liste n'est pas définie, ou vide.");
    }
    return result;
}
It returns this JSON:
{"03-045251":{"alone":true,"amount":0.0,"name":"name1","state":"En cours"},"03_05494":{"alone":true,"amount":16743.0,"name":"name2","state":"En cours"}}
First, I execute the SQL query to collect my data and put it in a map.
Then, I send this map to my Excel VBA.
Now see my VBA:
Sub JsonDataSqwal()
    firstRow = Range("A" & 11).End(xlDown).Row
    lastRow = Range("A" & Rows.Count).End(xlUp).Row

    Dim httpObject As Object
    Set httpObject = CreateObject("MSXML2.XMLHTTP")

    sUrl = "http://localhost/rest/collectData/excel/exportAll"
    sRequest = sUrl
    httpObject.Open "GET", sRequest, False
    httpObject.send
    sGetResult = httpObject.responseText

    If Not IsNull(sGetResult) Then
        Dim oJson As Object
        JsonConverter.JsonOptions.AllowUnquotedKeys = True
        Set oJson = JsonConverter.ParseJson(sGetResult)

        Dim i As Long
        For i = firstRow To lastRow
            Dim codeAffString As String
            codeAffString = Cells(i, 4)
            Debug.Print oJson(codeAffString)("name")
        Next i
    End If
End Sub
For the moment, I'm just trying to print my data. The loop collects values from a column which contains all my codeAffair values, formatted like 00_00000 or 00-00000.
It is this data that I try to use in my VBA code via the variable codeAffString.
When I execute my code, I always get error 13 (type mismatch).
To solve this, I tried many things:
adding quotes to my variable
renaming my HashMap as HashMap<String, Object>
allowing unquoted keys
changing my back-office program
replacing my value with """" + codeAffString + """"
replacing my variable with a fixed String "00_00000" (it works in this case)
checking the type of my variable with the VarType function, which returns 8 (String)
Now I have no other idea how to solve my problem.
If someone sees where my mistake is...
Thank you!
Just a quick test:
I used the JSON string you gave and the value you gave for codeAffString to build a minimal reproducible example, and it does not produce any errors:
Sub test()
    Const JsonString As String = "{""03-045251"":{""alone"":true,""amount"":0.0,""name"":""name1"",""state"":""En cours""},""03_05494"":{""alone"":true,""amount"":16743.0,""name"":""name2"",""state"":""En cours""}}"
    Const codeAffString As String = "03-045251"

    Dim oJson As Object
    JsonConverter.JsonOptions.AllowUnquotedKeys = True
    Set oJson = JsonConverter.ParseJson(JsonString)

    Debug.Print oJson(codeAffString)("name") ' outputs name1
End Sub
The error you describe occurs if codeAffString cannot be found in the JSON.
Test it with the following in your code:
For i = firstRow To lastRow
    Dim codeAffString As String
    codeAffString = Cells(i, 4)
    If IsEmpty(oJson(codeAffString)) Then
        Debug.Print codeAffString & " does not exist in the json"
    Else
        Debug.Print oJson(codeAffString)("name")
    End If
Next i
How can I optimize my for loop:
public List<XYZ.Users> getUsers(List<ResponseDocument> responseDocuments) {
    List<XYZ.Users> users = new ArrayList<>();
    for (ResponseDocument solrUser : responseDocuments) {
        Map<String, Object> field = solrUser.getFields();
        String name = field.getOrDefault(Constants.NAME, "").toString();
        int classId = (int) field.getOrDefault(Constants.ID, -1);
        XYZ.Users user = XYZ.Users.newBuilder()
                .setUser(name)
                .setId(classId)
                .setDir(Constants.DIR)
                .build();
        users.add(user);
    }
    return users;
}
I track the time taken by this for loop in production (500 requests/sec) and I have seen some random spikes in the time taken. I am not able to figure out why that happens.
What optimizations would you recommend?
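Hard to say without profiling - at 500 requests/sec, random latency spikes in a loop like this are often GC pauses rather than the loop's own work. One common, low-risk tweak is presizing the output list so ArrayList never has to grow (and copy its backing array) mid-loop. A minimal sketch, with a plain string list standing in for XYZ.Users since that type isn't shown in the question:

```java
import java.util.ArrayList;
import java.util.List;

public class PresizedLoop {
    // Builds the output list with an initial capacity equal to the input
    // size, so no intermediate reallocations happen during the loop.
    static List<String> build(List<String> names) {
        List<String> users = new ArrayList<>(names.size()); // presized
        for (String name : names) {
            users.add("user:" + name); // stands in for the builder call
        }
        return users;
    }

    public static void main(String[] args) {
        System.out.println(build(List.of("a", "b"))); // [user:a, user:b]
    }
}
```

This only reduces allocation churn; if the spikes persist, GC logs would be the next thing to look at.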
I am working with JPA. My web application takes 60 seconds to execute this method, and I want it to execute faster. How can I achieve that?
public boolean evaluateStudentTestPaper(long testPostID, long studentID, long howManyTimeWroteExam) {
    Gson uday = new Gson();
    Logger custLogger = Logger.getLogger("StudentDao.java");
    // custLogger.info("evaluateTestPaper test paper for testPostID: " + testPostID);
    long subjectID = 0;
    // checking in table
    EntityManagerFactory EMF = EntityManagerFactoryProvider.get();
    EntityManager em = EMF.createEntityManager();
    List<StudentExamResponse> studentExamResponses = null;
    try {
        studentExamResponses = em
                .createQuery(
                        "SELECT o FROM StudentExamResponse o where o.studentId=:studentId And o.testPostID=:testPostID and o.howManyTimeWroteExam=:howManyTimeWroteExam")
                .setParameter("studentId", studentID).setParameter("testPostID", testPostID)
                .setParameter("howManyTimeWroteExam", howManyTimeWroteExam).getResultList();
        System.out.println("studentExamResponses--------------------------------------------------"
                + uday.toJson(studentExamResponses) + "---------------------------------------");
    } catch (Exception e) {
        custLogger.info("exception at getting student details:" + e.toString());
        studentExamResponses = null;
    }
    int studentExamResponseSize = studentExamResponses.size();
    if (AppConstants.SHOWLOGS.equalsIgnoreCase("true")) {
        custLogger.info("student questions list:" + studentExamResponseSize);
    }
    // Get all questions based on student id and test post id
    List<ExamPaperRequest> examPaperRequestList = new ArrayList<ExamPaperRequest>();
    List<Questions> questionsList = new ArrayList<Questions>();
    // StudentExamResponse[] studentExamResponsesArgs = (StudentExamResponse[]) studentExamResponses.toArray();
    // custLogger.info("Total questions to be evaluated: " + examPaperRequestList.size());
    List<StudentTestResults> studentTestResultsList = new ArrayList<StudentTestResults>();
    StudentTestResults studentTestResults = null;
    StudentResults studentResults = null;
    String subjectnames = "", subjectMarks = "";
    int count = 0;
    boolean lastIndex = false;
    if (studentExamResponses != null && studentExamResponseSize > 0) {
        // studentExamResponses.forEach(studentExamResponses -> {
        for (StudentExamResponse o : studentExamResponses.stream().parallel()) {
            // 900 lines of code inside, which includes getting data from database queries
        }
    }
}
As #Nikos Paraskevopoulos mentioned, it is most likely the ~900 * N database iterations inside that for loop.
I'd avoid DB round trips inside a loop like that as much as you can.
You can try to extend your current StudentExamResponse query with more clauses - mainly those you're applying inside your for loop - which could also reduce the number of items you iterate over.
My guess would be that your select query is taking the time.
If possible, set the query timeout to less than 60 seconds and confirm this.
Ways of setting a query timeout can be found here - How to set the timeout period on a JPA EntityManager query
If the query is indeed the cause, you will need to work on making the select query optimal.
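As a sketch of that suggestion: the standard JPA 2 hint javax.persistence.query.timeout (in milliseconds) can be set per query via setHint, or globally in persistence.xml. Note this is a hint - whether it is honored depends on the JPA provider and JDBC driver (Hibernate, for example, maps it to a JDBC statement timeout):

```xml
<!-- persistence.xml: global query timeout in milliseconds.
     javax.persistence.query.timeout is the standard JPA 2 hint;
     provider/driver support varies. -->
<property name="javax.persistence.query.timeout" value="30000"/>
```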
I'm building a real-time pipeline where I connect Spark Streaming with HBase. As part of this process, I have to execute a filter on an HBase table, specifically a prefix filter, since I want to match the records whose key starts with a certain string.
The table I'm filtering is called "hm_notificaciones". I can connect successfully to the HBase shell and scan the table from the command line. Running the following command:
scan "hm_notificaciones"
I get the following records:
ROW COLUMN+CELL
46948854-20180307 column=info_oferta:id_oferta, timestamp=1520459448795, value=123456
46948854-20180312170423 column=info_oferta:id_establecimiento, timestamp=1520892403770, value=9999
46948854-20180312170423 column=info_oferta:id_oferta, timestamp=1520892390858, value=123445
46948854-20180312170536 column=info_oferta:id_establecimiento, timestamp=1520892422044, value=9239
46948854-20180312170536 column=info_oferta:id_oferta, timestamp=1520892435173, value=4432
46948854-20180313110824 column=info_oferta:id_establecimiento, timestamp=1520957374921, value=9990
46948854-20180313110824 column=info_oferta:id_oferta, timestamp=1520957362458, value=12313
I've been trying to run a prefix filter using the HBase API. I'm writing some Scala code to connect to the API and apply the filter. The following code compiles and executes, but it returns an empty result:
def scanTable(table_name: String, family: String, search_key: String) = {
    val conf: Configuration = HBaseConfiguration.create()
    val connection: Connection = ConnectionFactory.createConnection(conf)

    // This is a test to verify that I can connect to the HBase API.
    // These statements work and print all the table names in HBase
    val admin = connection.getAdmin
    println("Listing all tablenames")
    val list_table_names = admin.listTableNames()
    list_table_names.foreach(println)

    val table: Table = connection.getTable(TableName.valueOf(table_name))
    //val htable = new HTable(conf, tableName)

    var colValueMap: Map[String, String] = Map()
    var keyColValueMap: Map[String, Map[String, String]] = Map()

    val prefix = Bytes.toBytes(search_key)
    val scan = new Scan(prefix)
    scan.addFamily(Bytes.toBytes(family))
    val prefix_filter = new PrefixFilter(prefix)
    scan.setFilter(prefix_filter)

    val scanner = table.getScanner(scan)
    for (row <- scanner) {
        val content = row.getNoVersionMap
        for (entry <- content.entrySet) {
            for (sub_entry <- entry.getValue.entrySet) {
                colValueMap += (Bytes.toString(sub_entry.getKey) -> Bytes.toString(sub_entry.getValue))
            }
            keyColValueMap += (Bytes.toString(row.getRow) -> colValueMap)
        }
    }

    // this doesn't execute
    for ((k, v) <- colValueMap) {
        printf("key: %s", "value: %s\n", k, v)
    }

    // this never executes since scanner is null (or empty)
    for (result <- scanner) {
        for (cell <- result.rawCells) {
            println("Cell: " + cell + ", Value: " + Bytes.toString(cell.getValueArray, cell.getValueOffset, cell.getValueLength))
        }
    }

    scanner.close
    table.close
    connection.close
}
I've tried two approaches to print/get the data: composing a Map and iterating over the ResultScanner. However, it seems that my filter is not working, since it returns a null/empty set.
Do you know if there is an alternative way to execute a prefix filter on HBase?
The code I'm using to test the above is the following:
val user_key = "46948854-20181303144609"
scanTable("hm_notificaciones", "info_oferta", user_key)
The second loop will not be entered, because you have already iterated the scanner in the previous step (a ResultScanner can only be traversed once).
for (result <- scanner) {
    for (cell <- result.rawCells) {
        println("Cell: " + cell + ", Value: " + Bytes.toString(cell.getValueArray, cell.getValueOffset, cell.getValueLength))
    }
}
And use keyColValueMap to print, so the values stay grouped by row key. It worked for me; check your prefix filter again.
for ((k, v) <- keyColValueMap) {
    printf("key: %s, value: %s\n", k, v)
}
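One detail worth checking against the data in the question: none of the row keys shown by the shell scan begin with the prefix used in the test call - the test key's timestamp part ("20181303...") doesn't follow the layout of the stored keys ("20180313..."), so an empty scan is the expected result. A small self-contained check, using the row keys copied from the scan output:

```java
public class PrefixCheck {
    // Row keys copied from the `scan "hm_notificaciones"` output above.
    static final String[] ROW_KEYS = {
        "46948854-20180307",
        "46948854-20180312170423",
        "46948854-20180312170536",
        "46948854-20180313110824"
    };

    // A PrefixFilter only matches rows whose key starts with the prefix.
    static boolean anyMatch(String prefix) {
        for (String key : ROW_KEYS) {
            if (key.startsWith(prefix)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // The prefix used in the question's test call never matches:
        System.out.println(anyMatch("46948854-20181303144609")); // false
        // A prefix that follows the stored key layout does:
        System.out.println(anyMatch("46948854-201803"));         // true
    }
}
```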
I have the following code.
GraphDatabaseFactory dbFactory = new GraphDatabaseFactory();
GraphDatabaseService db= dbFactory.newEmbeddedDatabase("C:/Users/shadid/Documents/Neo4j/DB");
ExecutionEngine execEngine = new ExecutionEngine(db, null);
ExecutionResult execResult = execEngine.execute("MATCH (mat:TheMatrix) RETURN mat");
String results = execResult.dumpToString();
System.out.println(results);
I am getting a null pointer exception. I have tried running the query in the neo4j command line, so the data does exist. I am not quite sure where the error is. I'm quite a noob in neo4j, so could someone please help me out?
Here's the error I am getting, by the way:
Exception in thread "main" java.lang.NullPointerException
at org.neo4j.cypher.internal.CypherCompiler.(CypherCompiler.scala:69)
at org.neo4j.cypher.ExecutionEngine.createCompiler(ExecutionEngine.scala:237)
at org.neo4j.cypher.ExecutionEngine.(ExecutionEngine.scala:64)
at App.Main.main(Main.java:53)
Just found a more intuitive way of doing the same thing and it works yay!!!
try (Transaction ignored = db.beginTx();
     Result result = db.execute("MATCH (n:TheIronGiant) RETURN n.`title: `")) {
    String rows = "";
    while (result.hasNext()) {
        Map<String, Object> row = result.next();
        for (Entry<String, Object> column : row.entrySet()) {
            rows += column.getKey() + ": " + column.getValue() + "; ";
        }
        rows += "\n";
    }
    System.out.println(rows);
}
You are using the ExecutionEngine constructor that takes a LogProvider as its second parameter, but passing a null value for it. If you call the single-parameter constructor instead, which does not take a LogProvider, you may not get the exception.
Also, the ExecutionEngine class is now deprecated; you should use GraphDatabaseService.execute() instead.