How can I retrieve a long response from MongoDB? - java

In a Spring MVC + MongoDB application, I have 400k documents. If a query needs to return 300k of them, how can I do that?
Following is the stack trace:
HTTP Status 500 - Request processing failed; nested exception is java.lang.IllegalArgumentException: response too long: 1634887426
type Exception report
message Request processing failed; nested exception is java.lang.IllegalArgumentException: response too long: 1634887426
description The server encountered an internal error that prevented it from fulfilling this request.
exception
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is java.lang.IllegalArgumentException: response too long: 1634887426
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:973)
org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:863)
javax.servlet.http.HttpServlet.service(HttpServlet.java:646)
org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837)
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
root cause
java.lang.IllegalArgumentException: response too long: 1634887426
com.mongodb.Response.<init>(Response.java:49)
com.mongodb.DBPort$1.execute(DBPort.java:141)
com.mongodb.DBPort$1.execute(DBPort.java:135)
com.mongodb.DBPort.doOperation(DBPort.java:164)
com.mongodb.DBPort.call(DBPort.java:135)
com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:292)
com.mongodb.DBTCPConnector.call(DBTCPConnector.java:271)
com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:84)
com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:66)
com.mongodb.DBCursor._check(DBCursor.java:458)
com.mongodb.DBCursor._hasNext(DBCursor.java:546)
com.mongodb.DBCursor.hasNext(DBCursor.java:571)
org.springframework.data.mongodb.core.MongoTemplate.executeFindMultiInternal(MongoTemplate.java:1803)
org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:1628)
org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:1611)
org.springframework.data.mongodb.core.MongoTemplate.find(MongoTemplate.java:535)
com.AnnaUnivResults.www.service.ResultService.getStudentList(ResultService.java:38)
com.AnnaUnivResults.www.service.ResultService$$FastClassBySpringCGLIB$$1f19973d.invoke(<generated>)
org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:711)
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:136)
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:644)
com.AnnaUnivResults.www.service.ResultService$$EnhancerBySpringCGLIB$$f9296292.getStudentList(<generated>)
com.AnnaUnivResults.www.controller.ResultController.searchStudentByCollOrDept(ResultController.java:87)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
java.lang.reflect.Method.invoke(Method.java:597)
org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:215)
org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132)
org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:749)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:690)
org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:83)
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:945)
org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:876)
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:961)
org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:863)
javax.servlet.http.HttpServlet.service(HttpServlet.java:646)
org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837)
javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
I guess the above stack trace occurs because the result set returned is very large. How can I manage this? I raised the Tomcat memory setting to 4096M, but I still have the problem.

The math first: you are trying to load more than 1.5 GB in a single query. That will take a while and, except for very rare use cases, points to - sorry - bad application design.
Don't think about how to deal with that on the database side; you should refactor your code.
There are two common scenarios for loading a lot of documents.
Scenario 1: Calculations over the result set
Sometimes you want to do calculations over a large part of your result set. Let's say you want to find out how much turnover all customers from EMEA generated so far and your order documents look like this (simplified for the sake of this example):
{
  _id: <...>,
  customerId: <...>,
  deliveryAddress: {<...>},
  region: "EMEA",
  items: [{<...>}, {<...>}, ...],
  total: 12345.78
}
Now, what you could do to a certain extent is to load all the orders from the EMEA region with the equivalent of
db.orders.find({region:"EMEA"})
// the repository method would be something like
// findByRegion(String region)
and iterate over the result set, building a sum of total. This approach has several problems. First of all, even done this way, you load a lot of data you don't need (items, deliveryAddress). So the first way to reduce the amount of data returned by MongoDB is to use projection:
db.orders.find({region:"EMEA"},{_id:0,total:1})
// as of now, you would have to create a custom method
// and a custom repository implementation
// See "Further Reading"
which will give you a lot of documents, each containing only the total of an order from EMEA, vastly reducing the amount of data returned from the database. As far as I know, this can't be done automagically with spring-data's dynamic finders (repositories).
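For illustration, a hedged sketch of how that projection could look inside such a custom repository implementation, via MongoTemplate; the Order class is a hypothetical mapping of the document above, not something from your code:
import java.util.List;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

Query query = new Query(Criteria.where("region").is("EMEA"));
// Projection: only fetch "total"; everything else stays on the server.
query.fields().include("total").exclude("_id");
List<Order> totals = mongoTemplate.find(query, Order.class);
Note that only the total property of each mapped Order will be populated; all other fields remain null.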
But this approach still has the drawback that it doesn't scale too well, since there might be a point in time when you have more orders from EMEA than you can load in a single transaction. You could use a server-side cursor and an iterator (see scenario 2 for details), but this is still a bit awkward.
A far better approach would be to let MongoDB do the calculations. For this, you would use MongoDB's aggregation framework. As for the example, the query would look like
db.orders.aggregate([{$match:{region:"EMEA"}},{$group:{_id:"$region",totalTurnover:{$sum:"$total"}}}])
which would return a single document looking like
{_id:"EMEA",totalTurnover:<very large Sum>}
The advantage is obvious: you keep the load off your application, you don't need to load all the data, and performance improves drastically. And it is scalable.
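In spring-data-mongodb, the same pipeline could look roughly like this (a minimal sketch; TurnoverResult is a hypothetical result class with _id and totalTurnover properties):
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;

import org.springframework.data.mongodb.core.aggregation.Aggregation;
import org.springframework.data.mongodb.core.aggregation.AggregationResults;
import org.springframework.data.mongodb.core.query.Criteria;

Aggregation agg = newAggregation(
        match(Criteria.where("region").is("EMEA")),         // $match stage
        group("region").sum("total").as("totalTurnover"));  // $group stage
AggregationResults<TurnoverResult> results =
        mongoTemplate.aggregate(agg, "orders", TurnoverResult.class);
TurnoverResult emea = results.getUniqueMappedResult(); // the single document shown above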
Scenario 2: You really need a lot of the documents
Even when you really need a lot of the documents, loading them all in one huge result set is bad practice, as that approach isn't scalable, as you found out. A better approach is to request parts of the result set. For this you use server-side cursors.
With spring-data-mongodb you would use PagingAndSortingRepository instead of a CrudRepository or any other. Since PagingAndSortingRepository is an extension of CrudRepository, migration should be quite easy. The advantage is that you only request a part of the result set at a given point in time, which makes your query scalable at the cost of manually iterating over it.
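A minimal sketch of that migration and the manual iteration, assuming a hypothetical Student document (going by the getStudentList in your stack trace) and the pre-2.0 Spring Data API (PageRequest gained factory methods later):
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.repository.PagingAndSortingRepository;

public interface StudentRepository extends PagingAndSortingRepository<Student, String> {
}

// Only one page of (at most) 1000 documents is held in memory at a time.
int pageNumber = 0;
Page<Student> page;
do {
    page = studentRepository.findAll(new PageRequest(pageNumber++, 1000));
    for (Student student : page) {
        // process each document here
    }
} while (page.hasNext());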
Further reading
Customization of spring-data-mongodb repositories
Aggregation framework support in spring-data-mongodb
PagingAndSortingRepository in "Core Concepts" of the spring-data-mongodb docs

Related

Hyperledger Fabric - Java-SDK - Future completed exceptionally: sendTransaction

I'm running an HL Fabric private network and submitting transactions to the ledger from a Java application using Fabric-Java-Sdk.
Occasionally, roughly 1 in 10,000 times, the Java application throws an exception when I submit a transaction to the ledger, like the message below:
ERROR 196664 --- [ Thread-4] org.hyperledger.fabric.sdk.Channel : Future completed exceptionally: sendTransaction
java.lang.IllegalArgumentException: The proposal responses have 2 inconsistent groups with 0 that are invalid. Expected all to be consistent and none to be invalid.
    at org.hyperledger.fabric.sdk.Channel.doSendTransaction(Channel.java:5574) ~[fabric-sdk-java-2.1.1.jar:na]
    at org.hyperledger.fabric.sdk.Channel.sendTransaction(Channel.java:5533) ~[fabric-sdk-java-2.1.1.jar:na]
    at org.hyperledger.fabric.gateway.impl.TransactionImpl.commitTransaction(TransactionImpl.java:138) ~[fabric-gateway-java-2.1.1.jar:na]
    at org.hyperledger.fabric.gateway.impl.TransactionImpl.submit(TransactionImpl.java:96) ~[fabric-gateway-java-2.1.1.jar:na]
    at org.hyperledger.fabric.gateway.impl.ContractImpl.submitTransaction(ContractImpl.java:50) ~[fabric-gateway-java-2.1.1.jar:na]
    at com.apidemoblockchain.RepositoryDao.BaseFunctions.Implementations.PairTrustBaseFunction.sendTrustTransactionMessage(PairTrustBaseFunction.java:165) ~[classes/:na]
    at com.apidemoblockchain.RepositoryDao.Implementations.PairTrustDataAccessRepository.run(PairTrustDataAccessRepository.java:79) ~[classes/:na]
    at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
My submitting method looks like this:
public void sendTrustTransactionMessage(Gateway gateway, Contract trustContract, String payload) throws TimeoutException, InterruptedException, InvalidArgumentException, TransactionException, ContractException {
    // Prepare
    checkIfChannelIsReady(gateway);
    // Execute
    trustContract.submitTransaction(getCreateTrustMethod(), payload);
}
I'm using a 4-org network with 2 peers each, and I am using 3 channels, one per chaincode DataType, to keep things clean.
I think the error coming from the Channel doesn't make sense, because I am using the Contract to submit it...
I open the gateway and then keep it open to continuously submit the txs.
try (Gateway gateway = getBuilder(getTrustPeer()).connect()) {
    Contract trustContract = gateway.getNetwork(getTrustChaincodeChannelName()).getContract(getTrustChaincodeId(), getTrustChaincodeName());
    while (!terminateLoop) {
        if (message) {
            String payload = preparePayload();
            sendTrustTransactionMessage(gateway, trustContract, payload);
        }
        ...
        wait();
    }
    ...
}
EDIT:
After reading @bestbeforetoday's advice, I've managed to catch the ContractException and analyze the logs. Still, I don't fully understand where the bug might be and, therefore, how to fix it.
I'll add 3 screenshots I've taken of the ProposalResponses received in the exception, and a comment after them.
ProposalResponses-1
ProposalResponses-2
ProposalResponses-3
So, in the first picture, I can see that 3 proposal responses were received in the exception, and the exception's cause message says:
"The proposal responses have 2 inconsistent groups with 0 that are invalid. Expected all to be consistent and none to be invalid."
Pictures 2 and 3 show the content of those responses, and I notice there are 2 fields holding null values, namely "ProposalResponsePayload" and "timestamp_"; however, I don't know if those are the "two groups" referred to in the exception's cause message.
Thanks in advance...
It seems that, while the endorsing peers all successfully endorsed your transaction proposal, those peer responses were not all byte-for-byte identical.
There are several things that might differ, including read/write sets or the value returned from the transaction function invocation. There are several reasons why differences might occur, including non-deterministic transaction function implementation, different transaction function behaviour between peers, or different ledger state at different peers.
To figure out what caused this specific failure you probably need to look at the peer responses to identify how they differ. You should be getting a ContractException thrown back from your transaction submit call, and this should allow you to access the proposal responses by calling e.getProposalResponses():
https://hyperledger.github.io/fabric-gateway-java/release-2.2/org/hyperledger/fabric/gateway/ContractException.html#getProposalResponses()
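A hedged sketch of what that inspection could look like; only the submitTransaction call is taken from your code, the rest is an assumption about the fabric-gateway-java / fabric-sdk-java 2.x API linked above:
import org.hyperledger.fabric.gateway.ContractException;
import org.hyperledger.fabric.sdk.ProposalResponse;

try {
    trustContract.submitTransaction(getCreateTrustMethod(), payload);
} catch (ContractException e) {
    // Compare the per-peer endorsements to see where they diverge.
    for (ProposalResponse response : e.getProposalResponses()) {
        System.out.println("peer=" + response.getPeer().getName()
                + " status=" + response.getStatus()
                + " message=" + response.getMessage());
    }
}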

Apache Camel: aggregate on pollEnrich results rather than from and how to preserve headers

In my Camel route I consume messages from a queue; each message contains headers "pad" (the path) and "file" (a file prefix). E.g.:
message1: pad="/some/dir", file="AAA"
message2: pad="/another/dir", file="BRD"
Per message I want a file created:
message1: /some/dir/AAA.tar (containing all files /some/dir/AAA*)
message2: /another/dir/BRD.tar (containing all files /another/dir/BRD*)
The directories and filenames are collected in another route.
So far I have this Camel route:
from("broker1:files.queue")
.log("starting with message ${header.file}")
.pollEnrich()
.simple("file:${header.pad}?antInclude=${header.file}.*")
.aggregate(new TarAggregationStrategy(false,true))
.constant(true)
.completionFromBatchConsumer()
.eagerCheckCompletion()
.parallelProcessing(false)
.setHeader("file", simple("${header.file}"))
.setHeader("pad", simple("${header.pad}"))
.log("tarring to: ${header.pad}${header.file}.tar")
.setHeader(Exchange.FILE_NAME, simple("${header.file}.tar"))
.setHeader(Exchange.FILE_PATH, simple("${header.pad}"))
.to("file://ignored")
.log("Going to do other stuff here on ${header.file}");
I have a few issues here:
- When running this route, I see multiple "starting with message" lines before I see a log line "tarring to"
- the log line "tarring to" actually says ".tar", the headers are empty...
- The ".tar" file created is stored in "./ignored" and contains one file from each jms message file header.
This leads me to believe the aggregation happens at a level I am not expecting; I want to aggregate the results of the pollEnrich, not the other messages on the queue. Why, and how can I make it behave as I want?
The other issue is the lost headers; it might be due to aggregating the wrong items... Anyway, I would think that the setHeader()s in the aggregation should set them, but they're lost anyway. How can I preserve them?
I'm relatively new to Camel programming, so please forgive my shortcomings. The indentation in the code is how I think the scope should be, which probably is totally off. I am using camel-2.20.1, but can switch to any other version.
Edit
That reading made me change the route a bit, as written in the comments; it now looks like this (the TarAggregationStrategy is created in my CamelContext and added to the registry there):
from("broker1:files.queue")
.log("starting with message ${header.file}")
.pollEnrich()
.simple("file:${header.pad}?antInclude=${header.file}.*")
.aggregationStrategyRef("tarAggregationStrategy")
.log("tarring to: ${header.pad}${header.file}.tar")
.setHeader(Exchange.FILE_NAME, simple("${header.file}.tar"))
.setHeader(Exchange.FILE_PATH, simple("${header.pad}"))
.to("file://ignored")
.log("Going to do other stuff here on ${header.file}");
It does seem to go better now, except that the actual tar does not occur because a temp file could not be created, as per the stack trace:
org.apache.camel.component.file.GenericFileOperationFailedException: Could not make temp file (c9db039a-1585-4e63-85dc-e21ca268b290)
at org.apache.camel.processor.aggregate.tarfile.TarAggregationStrategy.aggregate(TarAggregationStrategy.java:174)
at org.apache.camel.processor.PollEnricher.process(PollEnricher.java:280)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:138)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:101)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201)
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:97)
at org.apache.camel.component.jms.EndpointMessageListener.onMessage(EndpointMessageListener.java:112)
at org.springframework.jms.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:719)
at org.springframework.jms.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:679)
at org.springframework.jms.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:649)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:317)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:255)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1166)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1158)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1055)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Could not make temp file (c9db039a-1585-4e63-85dc-e21ca268b290)
at org.apache.camel.processor.aggregate.tarfile.TarAggregationStrategy.addFileToTar(TarAggregationStrategy.java:199)
at org.apache.camel.processor.aggregate.tarfile.TarAggregationStrategy.aggregate(TarAggregationStrategy.java:167)
... 19 more
The thing I noticed is that the part between ( and ) after "Could not make temp file" is actually the content of the body (which I could've left empty, but for no apparent reason filled with the file id).
If you want to preserve headers from your messages so that they still exist after aggregation, your aggregation strategy has to do this. I do not think that TarAggregationStrategy does this.
Think of the aggregator as a boundary. It collects Camel Exchanges (Camel-wrapped Messages) and creates a new Exchange according to the AggregationStrategy. I guess that most out-of-the-box aggregators focus on merging or appending message bodies but not headers.
So if you want your headers header.file and header.pad to survive the aggregation, you have to implement that in your own aggregation strategy.
Since you use TarAggregationStrategy, you could probably extend or decorate it: implement the header handling yourself and delegate to TarAggregationStrategy for the body.
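A minimal sketch of such a decorator, assuming Camel 2.20's AggregationStrategy SPI; the header names are the ones from your route, and registering the strategy in the registry is left out:
import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;
import org.apache.camel.processor.aggregate.tarfile.TarAggregationStrategy;

public class HeaderPreservingTarAggregationStrategy implements AggregationStrategy {

    private final TarAggregationStrategy delegate = new TarAggregationStrategy(false, true);

    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        // Let TarAggregationStrategy handle the body (building the tar file)...
        Exchange result = delegate.aggregate(oldExchange, newExchange);
        // ...and copy the headers forward ourselves so they survive aggregation.
        result.getIn().setHeader("file", newExchange.getIn().getHeader("file"));
        result.getIn().setHeader("pad", newExchange.getIn().getHeader("pad"));
        return result;
    }
}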

How to find unclosed Spring Data JPA streaming result set

I'm using the streaming result sets provided by Spring Data JPA's repositories along with MySQL in order to reduce the memory consumption of methods that involve scanning large result sets (which is looking increasingly like a hopelessly vain attempt; in theory the idea of using streams for this is brilliant; in practice, the constraints are very difficult to work with).
If I attempt to start using a second query in a thread while a stream produced by a previous query is unclosed, I get an exception like this:
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.hibernate.exception.GenericJDBCException: could not extract ResultSet
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:982)
org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:872)
javax.servlet.http.HttpServlet.service(HttpServlet.java:661)
...
Root cause: org.hibernate.exception.GenericJDBCException: could not extract ResultSet
org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:47)
org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:111)
...
java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic#15f2d1f is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
com.mysql.jdbc.SQLError.createSQLException(SQLError.java:868)
com.mysql.jdbc.SQLError.createSQLException(SQLError.java:864)
com.mysql.jdbc.MysqlIO.checkForOutstandingStreamingData(MysqlIO.java:3214)
...
Unfortunately, when I locate my code in the very long stack trace, I don't see anything that is allocated and not disposed, so I'm not really sure what's going on. How can I go about finding which query was not closed in time?
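For reference, the pattern that keeps such a stream closed is try-with-resources inside a read-only transaction; a minimal sketch with hypothetical names (PersonRepository, streamAllBy) follows, and what I'm effectively hunting for is the place where my code deviates from it:
import java.util.stream.Stream;
import org.springframework.transaction.annotation.Transactional;

@Transactional(readOnly = true)
public void scanLargeTable(PersonRepository repository) {
    // Spring Data JPA query methods may return Stream<T>; the stream holds the
    // underlying JDBC result set open until it is closed.
    try (Stream<Person> stream = repository.streamAllBy()) {
        stream.forEach(person -> {
            // process each row while the cursor is open
        });
    } // closing the stream releases the streaming MySQL result set
}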

NoHostAvailableException With Cassandra & DataStax Java Driver If Large ResultSet

The setup:
2-node Cassandra 1.2.6 cluster
replicas=2
very large CQL3 table with no secondary index
Rowkey is a UUID.randomUUID().toString()
read consistency set to ONE
Using DataStax java driver 1.0
The request:
Attempting to do a table scan with "SELECT some-col FROM schema.table LIMIT nnn;"
The fail:
Once I go beyond a certain nnn LIMIT, I start to get NoHostAvailableExceptions from the driver.
It reads like this:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:64)
at com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:214)
at com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:169)
at com.jpmc.es.rtm.storage.impl.EventExtract.main(EventExtract.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered))
at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:98)
at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:165)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
Given: this is probably not the most enlightened thing to do to a large table with millions of rows, but this is how I learn what not to do, so I would really appreciate it if someone could volunteer how this kind of error can be debugged.
For example, when this happens, there is no indication that the nodes in the cluster ever had an issue with the request (there is nothing in the logs on either node indicating any timeout or failure). Also, I enabled tracing on the driver, which gives you some nice autotrace (à la Oracle) info as long as the query succeeds. But in this case, the driver throws a NoHostAvailableException and no ExecutionInfo is available, so tracing has not provided any benefit in this case.
I also find it interesting that this does not seem to be recorded as a timeout (my JMX consoles tell me no timeouts have occurred). So, I am left not understanding WHERE the failure is actually occurring. I am left with the idea that it is the driver that is having a problem, but I don't know how to debug it (and I would really like to).
I have read several posts from folks stating that querying for result sets of more than 10,000 rows is probably not a good idea, and I am willing to accept this, but I would like to understand what is causing the exception and where it is happening.
FWIW, I also tried bumping the timeout properties in the cassandra.yaml, but this made no difference whatsoever.
I welcome any suggestions, anecdotes, insults, or monetary contributions for my registration in the house of moron-developers.
Regards!!
My guess (and perhaps others can confirm) is that the query is putting too high a load on the cluster, which causes the timeout. So, yes, it's a little difficult to debug, as the root cause is not obvious: was the limit I set too large, or is the cluster actually down?
You want to avoid setting large limits on the amount of data you request in a single query, typically by setting a reasonable limit and paging through the results, e.g.,
SELECT * FROM messages WHERE user_id = 101 LIMIT 1000;
SELECT * FROM messages WHERE user_id = 101 AND msg_id > [Last message ID received] LIMIT 1000;
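In Java, against the 1.0 driver you're using, the manual paging loop might look roughly like this (a hedged sketch; the messages(user_id, msg_id) schema is the hypothetical one from the CQL above, with msg_id assumed to be a timeuuid clustering column):
import java.util.List;
import java.util.UUID;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

Cluster cluster = Cluster.builder().addContactPoint("10.181.13.239").build();
Session session = cluster.connect("schema");
UUID lastMsgId = null;
while (true) {
    String cql = (lastMsgId == null)
            ? "SELECT msg_id FROM messages WHERE user_id = 101 LIMIT 1000"
            : "SELECT msg_id FROM messages WHERE user_id = 101 AND msg_id > " + lastMsgId + " LIMIT 1000";
    List<Row> rows = session.execute(cql).all();
    if (rows.isEmpty()) {
        break; // no more pages
    }
    for (Row row : rows) {
        lastMsgId = row.getUUID("msg_id"); // remember the last key seen
        // process the row here
    }
}
cluster.shutdown(); // close() in driver 2.0 and later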
The Automatic Paging functionality added in a later driver release (see this document, from which the code examples in this answer are copied) is a big improvement in the DataStax java-driver, as it removes the need to page manually and lets you do the following:
Statement stmt = new SimpleStatement("SELECT * FROM images");
stmt.setFetchSize(100);
ResultSet rs = session.execute(stmt);
// Iterate over the ResultSet here
While this won't necessarily solve your problem, it will minimise the possibility that it was a "too-big" query.

Struts Log Error - Method Invocation Failed

Ever since a recent change that allows our users to keep their lost work in case of a system failure on our contact log form, we have had reports of a problem with one of our input forms. When a user tries to enter a very basic contact log (no files, one option, a date to mark it, and the log itself), they are bounced out of our app. Immediately afterwards, if they log in and try again, they can successfully submit the form.
We have been so far unable to duplicate this error on our test server, but we have narrowed down the error thrown when it happens. In our log, it looks like this:
[5/23/13 13:18:47:837 EDT] 46b24806 PropertyUtils E org.apache.commons.beanutils.PropertyUtils Method invocation failed.
[5/23/13 13:18:47:853 EDT] 46b24806 PropertyUtils E org.apache.commons.beanutils.PropertyUtils TRAS0014I: The following exception was logged java.lang.IllegalArgumentException: argument type mismatch
at java.lang.reflect.Method.invoke(Native Method)
at org.apache.commons.beanutils.PropertyUtilsBean.invokeMethod(PropertyUtilsBean.java(Inlined Compiled Code))
It goes on for several lines, but the gist of the errors seems to indicate a Struts problem. The method being invoked is a do method that writes to an SQL database, though the database itself doesn't seem to be the source of this problem.
Any sort of help or guidance would be appreciated. We have thrown multiple theories across the table, but without being able to duplicate the problem, it's fairly difficult to find a solution. Thank you in advance.
ONE ADDITIONAL NOTE: We are using Struts v. 1.2
