I am facing an out of memory issue due to high heap allocation. I verified it with the HP Diagnostics tool, and it points to a section of my code where I am adding elements to an ArrayList. I cannot figure out how else to write this code so that objects are released earlier. Below is the code:
private List<UpperDTO> populateRecords(List<BaseEntity> baseEntityList, List<DataEntity> dataEntityList) {
    List<UpperDTO> masterDTOList = new ArrayList<UpperDTO>();
    if (baseEntityList != null && !baseEntityList.isEmpty()) {
        BigDecimal conId = null;
        for (BaseEntity baseEntity : baseEntityList) {
            conId = baseEntity.getConsignmentId();
            ArrayList<StatusData> statusDataList = new ArrayList<StatusData>();
            if (dataEntityList != null && !dataEntityList.isEmpty()) {
                for (DataEntity data : dataEntityList) {
                    if (conId.equals(data.getConsignmentId())) {
                        // making it null to suppress it in the response
                        data.setConsignmentId(null);
                        statusDataList.add(TrackServiceHelper.convertStatusDataToDTO(data));
                    }
                }
            }
            masterDTOList.add(TrackServiceHelper.populateDTO(baseEntity, statusDataList));
        }
    }
    return masterDTOList;
}

public static UpperDTO populateDTO(TrackBaseEntity baseEntity,
        List<StatusData> statusList) {
    UpperDTO upperDTO = new UpperDTO();
    // setter methods called
    upperDTO.setStatusData(statusList);
    return upperDTO;
}
The issue is pointed at the following line in the code:
masterDTOList.add(TrackServiceHelper.populateDTO(baseEntity, statusDataList));
This is a REST API that receives messages from JMS queues; an MDB listens for these messages. I am not able to reproduce this locally or in the dev environment, as the issue only appears during performance testing when the number of requests is high. How can I fix this?
This is the stacktrace of Collection Leak from HP Diagnostics:
Chart Collection Class Contained Type Probe Collection Growth Rate Collection Size Leak Stack Trace Maximum Size
0, 0, 255 java.util.ArrayList com.rex.ih2.dtos.UpperDTO gtatsh645 3,848 122,312 java.util.ArrayList.add(ArrayList.java:413)
com.rex.ih2.utils.AppDAO.populateConsignment(AppDAO.java:168)
com.rex.ih2.utils.AppDAO.searchConsignment(AppDAO.java:93)
com.rex.ih2.service.AppService.fetchConDetail(AppService.java:131)
com.rex.ih2.service.AppService.getConDetail(AppService.java:69)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:76)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:607)
org.apache.webbeans.intercept.InterceptorHandler.invoke(InterceptorHandler.java:297)
org.apache.webbeans.intercept.NormalScopedBeanInterceptorHandler.invoke(NormalScopedBeanInterceptorHandler.java:98)
com.rex.ih2.service.TrackService_$$_javassist_0.getConsignmentDetail(TrackService_$$_javassist_0.java)
com.rex.ih2.beans.TrackBean.action(TrackBean.java:35)
com.tnt.integration.bean.AbstractServiceBean.invokeService(AbstractServiceBean.java:259)
com.tnt.integration.bean.AbstractServiceBean.onMessage(AbstractServiceBean.java:157)
com.rex.ih2.beans.TrackBean.onMessage(TrackBean.java)
I agree with dcsohi. This is actually a design problem. You may want to look at the approaches below:
1) Reduce the size of the objects being added to the list, if they can be optimized.
2) Handle the data in chunks instead of adding it all to the list at once.
3) Tune the JVM arguments to increase the heap size so that it can hold more objects.
You can try to simulate this by increasing the number of test objects and reducing the heap size in the dev environment, or perhaps by taking a production dump and replaying the same volume.
Ok, it looks to me like you only care about DataEntity objects and BaseEntity objects where their "consignment IDs" match. You really should do this sort of thing in the database query. The use of "entity" objects makes it seem like your DB interactions are via JPA/Hibernate, in which case you may want to create a DB view that joins the two tables by consignment ID, and provides the necessary information for your output. Next, create a custom read-only entity that matches this view. Then you can apply pagination to your query of this view (if it's still necessary) and retrieve the information in smaller batches.
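Moving the join into the database is the cleaner fix, but even in memory the nested loop in the question can be flattened by indexing the DataEntity list by consignment ID once, which also makes it easier to process base entities in chunks. A minimal sketch with plain types (the String keys and String[] records below are hypothetical stand-ins for the BigDecimal consignment IDs and the real entity classes):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class GroupByKey {
    // Index the "data" side once, so each base entity looks up its matching
    // entries in O(1) instead of rescanning the whole list every iteration.
    static <K, V> Map<K, List<V>> groupBy(List<V> items, Function<V, K> key) {
        Map<K, List<V>> byKey = new HashMap<>();
        for (V item : items) {
            byKey.computeIfAbsent(key.apply(item), k -> new ArrayList<>()).add(item);
        }
        return byKey;
    }

    public static void main(String[] args) {
        // "consignmentId" -> status strings, standing in for DataEntity records
        List<String[]> data = List.of(
                new String[]{"c1", "PICKED_UP"},
                new String[]{"c2", "IN_TRANSIT"},
                new String[]{"c1", "DELIVERED"});
        Map<String, List<String[]>> byCon = groupBy(data, d -> d[0]);
        System.out.println(byCon.get("c1").size()); // 2
    }
}
```

This does not shrink the final masterDTOList, so it complements rather than replaces the pagination advice above; what it removes is the repeated rescan of dataEntityList for every base entity.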
I am running a hierarchical Spring Statemachine and, after walking through the initial transitions into state UP with the default substate STOPPED, want to use statemachine.getState(). The trouble is, it gives me only the parent state UP, and I cannot find an obvious way to retrieve both the parent state and the substate.
The machine has states constructed like so:
StateMachineBuilder.Builder<ToolStates, ToolEvents> builder = StateMachineBuilder.builder();
builder.configureStates()
    .withStates()
        .initial(ToolStates.UP)
        .state(ToolStates.UP, new ToolUpEventAction(), null)
        .state(ToolStates.DOWN)
        .and()
    .withStates()
        .parent(ToolStates.UP)
        .initial(ToolStates.STOPPED)
        .state(ToolStates.STOPPED, new ToolStoppedEventAction(), null)
        .state(ToolStates.IDLE)
        .state(ToolStates.PROCESSING,
            new ToolBeginProcessingPartAction(),
            new ToolDoneProcessingPartAction());
...
builder.build();
ToolStates and ToolEvents are just enums. In the client class, after running the builder code above, the state machine is started with statemachine.start(). When I subsequently call statemachine.getState().getId(), it gives me UP. No events are sent to the state machine before that call.
I have been up and down the Spring Statemachine docs and examples. I know from debugging that the entry actions of both states UP and STOPPED have been invoked, so I am assuming they are both "active" and would expect both states to be presented when querying the state machine. Is there a clean way to achieve this? I want to avoid storing the substate somewhere from inside the Action classes, since I believe I have delegated all state management issues to the freakin Statemachine in the first place, and I would rather learn how to use its API for this purpose.
Hopefully this is something embarrassingly obvious...
Any advice most welcome!
The documentation describes getStates():
https://docs.spring.io/spring-statemachine/docs/current/api/org/springframework/statemachine/state/State.html
java.util.Collection<State<S,E>> getStates()
Gets all possible states this state knows about including itself and substates.
stateMachine.getState().getStates();
To wrap it up after SMA's most helpful advice: it turns out that stateMachine.getState().getStates() does, in my case, return a list of four elements:
a StateMachineState instance containing UP and STOPPED
three ObjectState instances containing IDLE, STOPPED and PROCESSING, respectively.
This leads me to go forward, for the time being, with the following solution:
public List<ToolStates> getStates() {
    List<ToolStates> result = new ArrayList<>();
    Collection<State<ToolStates, ToolEvents>> states = this.stateMachine.getState().getStates();
    Iterator<State<ToolStates, ToolEvents>> iter = states.iterator();
    while (iter.hasNext()) {
        State<ToolStates, ToolEvents> candidate = iter.next();
        if (!candidate.isSimple()) {
            Collection<ToolStates> ids = candidate.getIds();
            Iterator<ToolStates> i = ids.iterator();
            while (i.hasNext()) {
                result.add(i.next());
            }
        }
    }
    return result;
}
This could probably be more elegant with some streaming and filtering, but it does the trick for now. I don't like it much, though. It's a lot of error-prone logic, and I'll have to see whether it holds up in the future. I wonder why there isn't a function in Spring Statemachine that gives me a list of the enum values of all the currently active states, rather than giving me everything possible and forcing me to poke around in it with external logic...
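For what it's worth, the nested iterators can be collapsed with the Stream API. A minimal, self-contained sketch (the local State interface below is a hypothetical stand-in for Spring Statemachine's State<S, E>, reduced to just the two members the filtering logic uses):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;

public class ActiveStates {
    // Hypothetical stand-in for org.springframework.statemachine.state.State<S, E>.
    interface State<S> {
        boolean isSimple();       // true for leaf states with no substates
        Collection<S> getIds();   // state id plus substate ids for composite states
    }

    // Same logic as the nested while-loops: keep only composite states
    // and flatten their id collections into one list.
    static <S> List<S> activeIds(Collection<State<S>> states) {
        return states.stream()
                .filter(s -> !s.isSimple())
                .flatMap(s -> s.getIds().stream())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        State<String> composite = new State<String>() {
            public boolean isSimple() { return false; }
            public Collection<String> getIds() { return Arrays.asList("UP", "STOPPED"); }
        };
        State<String> leaf = new State<String>() {
            public boolean isSimple() { return true; }
            public Collection<String> getIds() { return Collections.singletonList("IDLE"); }
        };
        System.out.println(activeIds(Arrays.asList(composite, leaf))); // [UP, STOPPED]
    }
}
```

Against the real API the same filter/flatMap chain should apply unchanged to the collection returned by stateMachine.getState().getStates().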
My code is throwing the error below:
java.lang.StackOverflowError
at jetbrains.exodus.entitystore.EntityIterableCache.putIfNotCached(EntityIterableCache.java:100)
at jetbrains.exodus.entitystore.iterate.EntityIterableBase.asProbablyCached(EntityIterableBase.java:578)
at jetbrains.exodus.entitystore.iterate.EntityIterableBase.iterator(EntityIterableBase.java:138)
at jetbrains.exodus.entitystore.iterate.binop.MinusIterable$SortedIterator.<init>(MinusIterable.java:72)
at jetbrains.exodus.entitystore.iterate.binop.MinusIterable$SortedIterator.<init>(MinusIterable.java:59)
at jetbrains.exodus.entitystore.iterate.binop.MinusIterable.getIteratorImpl(MinusIterable.java:55)
at jetbrains.exodus.entitystore.iterate.binop.MinusIterable.getIteratorImpl(MinusIterable.java:23)
at jetbrains.exodus.entitystore.iterate.EntityIterableBase.iterator(EntityIterableBase.java:138)
at jetbrains.exodus.entitystore.iterate.binop.MinusIterable$SortedIterator.<init>(MinusIterable.java:72)
at jetbrains.exodus.entitystore.iterate.binop.MinusIterable$SortedIterator.<init>(MinusIterable.java:59)
at jetbrains.exodus.entitystore.iterate.binop.MinusIterable.getIteratorImpl(MinusIterable.java:55)
at jetbrains.exodus.entitystore.iterate.binop.MinusIterable.getIteratorImpl(MinusIterable.java:23)
at jetbrains.exodus.entitystore.iterate.EntityIterableBase.iterator(EntityIterableBase.java:138)
at jetbrains.exodus.entitystore.iterate.binop.MinusIterable$SortedIterator.<init>(MinusIterable.java:72)
at jetbrains.exodus.entitystore.iterate.binop.MinusIterable$SortedIterator.<init>(MinusIterable.java:59)
at jetbrains.exodus.entitystore.iterate.binop.MinusIterable.getIteratorImpl(MinusIterable.java:55)
at jetbrains.exodus.entitystore.iterate.binop.MinusIterable.getIteratorImpl(MinusIterable.java:23)
It does not point exactly to where in my code the root cause is, but I know that my code contains this:
EntityIterable tempEntities = txn.findWithProp(entityType, propertyName);
for (Entity entity : tempEntities) {
    if (!match(entity.getProperty(propertyName))) {
        tempEntities = tempEntities.minus(txn.getSingletonIterable(entity));
    }
}
And I know that the count of tempEntities is 10,000+ items, since the code saved 10,000+ entities prior to this throwing.
Does this mean you can't iterate over roughly 10K entities with Xodus?
Obviously, you can iterate over 10K entities. Don't create iterables via singletons and binary operations (union, intersect, minus): that is API misuse. Each minus wraps the previous iterable, so thousands of exclusions build a correspondingly deep chain of MinusIterables that overflows the stack when iterated, which is exactly what the repeated frames in your trace show. Even if you provided a sufficient stack size for the JVM, performance problems would haunt you.
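The shape of the fix, sketched with plain collections (Entity and the match predicate are hypothetical stand-ins for the Xodus types and the original check): collect the entities you want in one pass instead of chaining one minus per excluded entity.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class FilterOnce {
    // One pass, no nested iterable chain: O(n) time, constant stack depth,
    // regardless of how many entities are excluded.
    static <E> List<E> keepMatching(Iterable<E> all, Predicate<E> match) {
        List<E> kept = new ArrayList<>();
        for (E e : all) {
            if (match.test(e)) {
                kept.add(e);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<Integer> props = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) props.add(i);
        // Filtering 10K "property values" in one pass is no problem:
        List<Integer> kept = keepMatching(props, p -> p % 2 == 0);
        System.out.println(kept.size()); // 5000
    }
}
```

In the original code that would mean iterating txn.findWithProp(...) once and adding the matching entities (or their ids) to a plain list, rather than rebuilding tempEntities inside the loop.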
I'm trying to create an SNMP4j agent and am finding it difficult to understand the process correctly. I have successfully created an agent that can be queried from the command line using snmpwalk. What I am having difficulty with is understanding how I am meant to update the values stored in my implemented MIB.
The following shows the relevant code I use for creating the MIB (I implement Host-Resources-MIB)
agent = new Agent("0.0.0.0/" + port);
agent.start();
agent.unregisterManagedObject(agent.getSnmpv2MIB());
modules = new Modules(DefaultMOFactory.getInstance());
HrSWRunEntryRow thisRow = modules.getHostResourcesMib().getHrSWRunEntry()
.createRow(oidHrSWRunEntry);
final OID ashEnterpriseMIB = new OID(".1.3.6.1.4.1.49266.0");
thisRow.setHrSWRunIndex(new Integer32(1));
thisRow.setHrSWRunName(new OctetString("RunnableAgent"));
thisRow.setHrSWRunID(ashEnterpriseMIB);
thisRow.setHrSWRunPath(new OctetString("All is good in the world")); // Max 128 characters
thisRow.setHrSWRunParameters(new OctetString("Everything is working")); // Max 128 characters
thisRow.setHrSWRunType(new Integer32(HrSWRunTypeEnum.application));
thisRow.setHrSWRunStatus(new Integer32(HrSWRunStatusEnum.running));
modules.getHostResourcesMib().getHrSWRunEntry().addRow(thisRow);
agent.registerManagedObject(modules.getHostResourcesMib());
This appears to be sufficient to create a runnable agent. What I do not understand is how I am meant to change the values stored in the MIB (how do I, for example, change the value of HrSWRunStatus?). There seem to be a few kludgy ways, but they don't seem to fit with the way the library is written.
I have come across numerous references to using/overriding the methods
prepare
commit
undo
cleanup
But cannot find any examples where this is done. Any help would be gratefully received.
In protected void registerManagedObjects(), you need to do something like new MOMutableColumn(columnId, SMIConstants.SYNTAX_INTEGER, MOAccessImpl.ACCESS_READ_WRITE, null); for your HrSWRunStatus. Take a look at the TestAgent.java example in the SNMP4J-Agent source.
I am implementing REST through Restlet. It is an amazing framework for building a RESTful web service; it is easy to learn and its syntax is compact. However, I have found that when somebody or some program wants to access a resource, it takes time to print/output the XML. I use JaxbRepresentation. Let's see my code:
@Override
@Get
public Representation toXml() throws IOException {
    if (this.requireAuthentication) {
        if (!this.app.authenticate(getRequest(), getResponse())) {
            return new EmptyRepresentation();
        }
    }
    // check whether this representation has been requested before,
    // in which case the data is already in the cache
    Object dataInCache = this.app.getCachedData().get(getURI);
    if (dataInCache != null) {
        System.out.println("Representing from Cache");
        // unchecked warning: unless we can verify that dataInCache is of
        // type T, we cannot get rid of it
        this.dataToBeRepresented = (T) dataInCache;
    } else {
        System.out.println("NOT IN CACHE");
        this.dataToBeRepresented = whenDataIsNotInCache();
        // automatically add the data to the cache
        this.app.getCachedData().put(getURI, this.dataToBeRepresented, cached_duration);
    }
    // now represent it (unless an EmptyRepresentation was returned above)
    JaxbRepresentation<T> jaxb = new JaxbRepresentation<T>(dataToBeRepresented);
    jaxb.setFormattedOutput(true);
    return jaxb;
}
As you can see (and you might ask me): yes, I am implementing a cache through Kitty-Cache. So if some XML is expensive to produce and really looks like it will never change for seven decades, then I cache it... I also use the cache for data that is likely static. The maximum time an entry remains in memory is one hour.
Even when I cache the output, the response is sometimes unresponsive: it hangs, prints partially, and takes time before it prints the remaining document. The XML document is accessed through a browser and also programmatically, via GET.
What is actually the problem? I would humbly like to hear an answer from the Restlet developers too, if possible. Thanks
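One thing worth ruling out: the code above caches the unmarshalled object, so every cache hit still pays the full JAXB marshalling cost. A sketch of caching the serialized output instead, as a small TTL map (this is plain Java, not Restlet or Kitty-Cache API; the class and method names are assumptions for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class XmlCache {
    // Holder for the already-serialized XML plus its insertion time.
    static final class Entry {
        final String xml;
        final long createdAtMillis;
        Entry(String xml, long createdAtMillis) {
            this.xml = xml;
            this.createdAtMillis = createdAtMillis;
        }
    }

    private final Map<String, Entry> entries = new ConcurrentHashMap<>();
    private final long ttlMillis;

    XmlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    // Return the cached XML string, or null if absent or expired.
    String get(String uri, long nowMillis) {
        Entry e = entries.get(uri);
        if (e == null || nowMillis - e.createdAtMillis > ttlMillis) {
            entries.remove(uri);
            return null;
        }
        return e.xml;
    }

    void put(String uri, String xml, long nowMillis) {
        entries.put(uri, new Entry(xml, nowMillis));
    }
}
```

On a hit you would then wrap the string in a StringRepresentation with the XML media type instead of building a new JaxbRepresentation, so serialization happens once per TTL window rather than once per request.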
Dear all,
I am using Java RMI for my program. From the client side I call an interface method, passing a single argument. Using this argument, the interface runs a query and returns 40,000 rows (each row containing 10 columns) as the result. All of these are stored in a vector-of-vectors structure: [[0,1,2,3,4,5,6,7,8,9],[],[],[],[],[]...]. This happens when I click one button. The first time it works, but when I try to do the same again (i.e. click the button), it throws java.lang.OutOfMemoryError on the client side. Please help me. I am using a PostgreSQL DB.
Client side:
Vector data = new Vector();
data = Inter.getEndProductDetailsForCopyChain(endProductId);
Server side:
public Vector getEndProductDetailsForCopyChain(int endProductId1) {
    Connection OPConnect = StreamLineConnection.GetStreamline_Connection();
    Vector data = new Vector();
    Statement st = null;
    ResultSet rs = null;
    try {
        System.out.println("Before query, data vector size >>> " + data.size());
        String sqlQry = "select distinct style_no,version_no,matNo,type,specs,color,size,ref_no,uom1 from garment where id=" + endProductId1;
        System.out.println("sqlQry: " + sqlQry);
        st = OPConnect.createStatement();
        rs = st.executeQuery(sqlQry);
        while (rs.next()) {
            Vector row = new Vector();
            row.add(rs.getString("style_no"));
            row.add(rs.getString("version_no"));
            row.add(rs.getString("matNo"));
            row.add(rs.getString("type"));
            row.add(rs.getString("specs"));
            row.add(rs.getString("color"));
            row.add(rs.getString("size"));
            row.add(rs.getString("ref_no"));
            row.add(rs.getString("uom1"));
            row.add(new Boolean(false));
            data.add(row);
        }
        System.out.println("After query, data vector size >>> " + data.size());
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        // close the ResultSet, Statement and Connection even on success,
        // otherwise they leak on every request
        try { if (rs != null) rs.close(); } catch (Exception ignore) {}
        try { if (st != null) st.close(); } catch (Exception ignore) {}
        closeConnection(OPConnect);
    }
    return data;
}
I cleared all the vectors and hashmaps after finishing my process, but it still throws OutOfMemoryError on the client side. This happens when the data (the query-result vector) is dispatched to the client.
A direct response to your question: if you can change the client JVM command-line arguments, then start with more memory allocated. For example, use -Xmx256M to set the maximum heap to 256 MB.
A more useful response to your question: The way you phrased your question suggests you know the real problem: a program architecture that tries to obtain so much data on a single click. Do you really need to have so much data on the client side? Can you do some processing with it on the server side and send much less? Can you add paging? or lazy loading?
Consider Google's search model as a possible solution...a Google search for "hello" has about 310,000,000 matches, yet Google only sends me 10 results at a time. Then I click "Next" to get more... this is paging. Users cannot typically make much sense of 40,000 rows of data at once. Would this work for you?
If this is for export, fetch 100 or so rows at a time, export them, then fetch the next rows... you really don't want to be transferring so much data via RMI in one call.
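The chunking idea above can be sketched independently of JDBC and RMI. A minimal paging helper (the page size of 100 and the method name are assumptions; in the real code each page would come from a LIMIT/OFFSET query rather than an in-memory list):

```java
import java.util.ArrayList;
import java.util.List;

public class Pager {
    // Split a full result set into fixed-size pages so the client can
    // request one page per call instead of all 40,000 rows at once.
    static <T> List<List<T>> pages(List<T> rows, int pageSize) {
        if (pageSize <= 0) throw new IllegalArgumentException("pageSize must be positive");
        List<List<T>> result = new ArrayList<>();
        for (int from = 0; from < rows.size(); from += pageSize) {
            int to = Math.min(from + pageSize, rows.size());
            // copy the view so each page is independent of the source list
            result.add(new ArrayList<>(rows.subList(from, to)));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 40_000; i++) rows.add(i);
        List<List<Integer>> paged = pages(rows, 100);
        System.out.println(paged.size());        // 400 pages
        System.out.println(paged.get(0).size()); // 100 rows each
    }
}
```

With RMI the server would expose something like getPage(endProductId, pageNo) and keep each response small, instead of one call returning the entire vector of vectors.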
Try to re-use data, do not create a new vector on every request. data.clear(); // fill it then.