Dear all,
I am using Java RMI in my program. From the client side I call an interface method, passing a single argument. Using this argument, the server runs a query and returns 40,000 rows (each row containing 10 elements) as the result. Everything is stored in a vector-of-vectors structure [[0,1,2,3,4,5,6,7,8,9],[],[],[],[],[]...]. This happens when I click one button. The first time it works, but when I try to do the same thing again (i.e. click the button), it throws java.lang.OutOfMemoryError on the client side. Please help me. I am using a PostgreSQL database.
Client side:
Vector data = new Vector();
data = Inter.getEndProductDetailsForCopyChain(endProductId);
Server side:
public Vector getEndProductDetailsForCopyChain(int endProductId1)
{
    Connection OPConnect = StreamLineConnection.GetStreamline_Connection();
    Vector data = new Vector();
    try {
        System.out.println("Before query data vector size>>>>>>>>" + data.size());
        String sqlQry = "select distinct style_no,version_no,matNo,type,specs,color,size,ref_no,uom1 from garment where id=" + endProductId1;
        System.out.println("sqlQry" + sqlQry);
        Statement st = OPConnect.createStatement();
        ResultSet rs = st.executeQuery(sqlQry);
        while (rs.next()) {
            Vector row = new Vector();
            row.add(rs.getString("style_no"));
            row.add(rs.getString("version_no"));
            row.add(rs.getString("matNo"));
            row.add(rs.getString("type"));
            row.add(rs.getString("specs"));
            row.add(rs.getString("color"));
            row.add(rs.getString("size"));
            row.add(rs.getString("ref_no"));
            row.add(rs.getString("uom1"));
            row.add(new Boolean(false));
            data.add(row);
        }
        System.out.println("After query data vector size>>>>>>>>" + data.size());
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        // close the connection on both the success and failure paths,
        // otherwise each click leaks a connection on the server
        closeConnection(OPConnect);
    }
    return data;
}
I clear all the vectors and hashmaps after finishing my processing, but the client still throws an OutOfMemoryError. It happens when data (the query result vector) is dispatched to the client side.
A direct response to your question: if you can change the client JVM command-line arguments, start it with more memory allocated. For example, use -Xmx256M to allow a maximum heap of 256 MB.
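For instance, assuming the client is started from a plain java command line (the jar and class names here are just placeholders), the flag goes on that command line:
java -Xmx256m -cp myclient.jar com.example.ClientMain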
A more useful response to your question: The way you phrased your question suggests you know the real problem: a program architecture that tries to obtain so much data on a single click. Do you really need to have so much data on the client side? Can you do some processing with it on the server side and send much less? Can you add paging? or lazy loading?
Consider Google's search model as a possible solution...a Google search for "hello" has about 310,000,000 matches, yet Google only sends me 10 results at a time. Then I click "Next" to get more... this is paging. Users cannot typically make much sense of 40,000 rows of data at once. Would this work for you?
If this is for export, fetch 100 or so rows at a time, export them, then fetch the next rows... you really don't want to be transferring so much data via RMI in one call.
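As a rough sketch of what a paged variant of your remote method could look like (the method name, page size handling, and the ORDER BY column are assumptions, not your actual interface), pushing the paging into the PostgreSQL query with LIMIT/OFFSET:
// Hypothetical paged variant of the remote method: the client asks for one page at a time.
public Vector getEndProductDetailsPage(int endProductId1, int offset, int pageSize) throws SQLException {
    Connection con = StreamLineConnection.GetStreamline_Connection();
    Vector page = new Vector();
    String sql = "select distinct style_no,version_no,matNo,type,specs,color,size,ref_no,uom1 "
               + "from garment where id=? order by style_no limit ? offset ?";
    try {
        PreparedStatement ps = con.prepareStatement(sql);
        ps.setInt(1, endProductId1);
        ps.setInt(2, pageSize);
        ps.setInt(3, offset);
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            Vector row = new Vector();
            for (int col = 1; col <= 9; col++) {
                row.add(rs.getString(col));
            }
            row.add(Boolean.FALSE);
            page.add(row);
        }
    } finally {
        closeConnection(con);
    }
    return page; // the client calls again with offset += pageSize until an empty page comes back
}
The client then loops over pages, so no single RMI call ever carries 40,000 rows.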
Try to re-use data; do not create a new vector on every request. Call data.clear(), then refill it.
Related
I am building tests in SoapUI. I have a very large response (more than 3,500 duties). Based on that response I need to build and execute further requests. Currently the code (Java) works, but I would like to optimize it.
For each duty I build a request to fetch additional employee data and execute it inside a for loop. Below is an example of the large XML response that I get.
<DUTIES>
<DUTY>
<EMPNO>1</EMPNO>
<LOCATION>AMS</LOCATION>
<ACTUALTIME>2019-02-20T06:00:00</ACTUALTIME>
<POSITIONING_CODE>1</POSITIONING_CODE>
</DUTY>
<DUTY>
<EMPNO>2</EMPNO>
<LOCATION>RTM</LOCATION>
<ACTUALTIME>2019-02-20T06:00:00</ACTUALTIME>
<POSITIONING_CODE/>
</DUTY>
<DUTY>
<EMPNO>1</EMPNO>
<LOCATION>AMS</LOCATION>
<ACTUALTIME>2019-02-21T06:00:00</ACTUALTIME>
<POSITIONING_CODE>1</POSITIONING_CODE>
</DUTY>
</DUTIES>
As you can see from the sample, the same employee appears multiple times in the response, so currently I am executing the follow-up request multiple times for the same employee. I would like to optimize this.
In SoapUI I can use the statement:
String[] emps = resp.getNodeValues("/Duties/Duty/string(EMPNO)");
String[] locs = resp.getNodeValues("/Duties/Duty/string(LOCATION)");
String[] tims = resp.getNodeValues("/Duties/Duty/string(ACTUALTIME)");
Then I would like to sort the arrays on emps and only build a request to get additional employee data when the employee changes. This will make the code much faster.
Now my questions:
What is the best way to do this? Work with multidimensional arrays and sort them? Or is there a better way?
Thanks in advance,
Said
I would create an instance of java.util.HashMap<String,String> or java.util.HashMap<Long,String>, depending on which datatype you get back when you retrieve the empno.
Just blindly do a map.put(empno, null) for each duty element, and afterwards each employee will be in the hashmap only once, since each additional put of the same key overwrites the existing entry.
After that, simply
for (String key : map.keySet()) {
// do your stuff
}
As I see it, you really don't need to sort anything to get there.
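A minimal sketch of that idea in plain Java (in a SoapUI Groovy script it would look essentially the same; buildAndRunEmployeeRequest is a hypothetical helper standing in for your existing request code):
import java.util.HashMap;
import java.util.Map;

// emps is the String[] you already get from resp.getNodeValues(...)
Map<String, String> uniqueEmps = new HashMap<String, String>();
for (String emp : emps) {
    uniqueEmps.put(emp, null);          // duplicate EMPNOs simply overwrite the same key
}
for (String emp : uniqueEmps.keySet()) {
    buildAndRunEmployeeRequest(emp);    // hypothetical helper: build and execute the follow-up request once per employee
}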
I am facing an out-of-memory issue due to high heap allocation. I verified it with the HP Diagnostics tool and it points to a section in my code where I am adding elements to an ArrayList. I am not able to figure out how else I can write this code so that objects are released early. Below is the code:
private List<UpperDTO> populateRecords(List<BaseEntity> baseEntityList, List<DataEntity> dataEntityList) {
    List<UpperDTO> masterDTOList = new ArrayList<UpperDTO>();
    if (baseEntityList != null && baseEntityList.size() > 0) {
        BigDecimal conId = null;
        for (BaseEntity baseEntity : baseEntityList) {
            conId = baseEntity.getConsignmentId();
            ArrayList<StatusData> statusDataList = new ArrayList<StatusData>();
            if (dataEntityList != null && dataEntityList.size() > 0) {
                for (DataEntity data : dataEntityList) {
                    if (conId.equals(data.getConsignmentId())) {
                        // making null to suppress from the response
                        data.setConsignmentId(null);
                        statusDataList.add(TrackServiceHelper.convertStatusDataToDTO(data));
                    }
                }
            }
            masterDTOList.add(TrackServiceHelper.populateDTO(baseEntity, statusDataList));
        }
    }
    return masterDTOList;
}
public static UpperDTO populateDTO(TrackBaseEntity baseEntity, List<StatusData> statusList) {
    UpperDTO upperDTO = new UpperDTO();
    // Setter methods called
    upperDTO.setStatusData(statusList);
    return upperDTO;
}
The issue points at the following line in the code:
masterDTOList.add(TrackServiceHelper.populateDTO(baseEntity, statusDataList));
This is a REST API that receives messages from JMS queues; an MDB listens to these messages. I am not able to simulate this in my local or dev environments, as the issue only appears during performance testing when the number of requests is high. How can I fix this?
This is the stacktrace of Collection Leak from HP Diagnostics:
Chart Collection Class Contained Type Probe Collection Growth Rate Collection Size Leak Stack Trace Maximum Size
0, 0, 255 java.util.ArrayList com.rex.ih2.dtos.UpperDTO gtatsh645 3,848 122,312 java.util.ArrayList.add(ArrayList.java:413)
com.rex.ih2.utils.AppDAO.populateConsignment(AppDAO.java:168)
com.rex.ih2.utils.AppDAO.searchConsignment(AppDAO.java:93)
com.rex.ih2.service.AppService.fetchConDetail(AppService.java:131)
com.rex.ih2.service.AppService.getConDetail(AppService.java:69)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:76)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:607)
org.apache.webbeans.intercept.InterceptorHandler.invoke(InterceptorHandler.java:297)
org.apache.webbeans.intercept.NormalScopedBeanInterceptorHandler.invoke(NormalScopedBeanInterceptorHandler.java:98)
com.rex.ih2.service.TrackService_$$_javassist_0.getConsignmentDetail(TrackService_$$_javassist_0.java)
com.rex.ih2.beans.TrackBean.action(TrackBean.java:35)
com.tnt.integration.bean.AbstractServiceBean.invokeService(AbstractServiceBean.java:259)
com.tnt.integration.bean.AbstractServiceBean.onMessage(AbstractServiceBean.java:157)
com.rex.ih2.beans.TrackBean.onMessage(TrackBean.java)
I agree with dcsohi. This is actually a design problem. You may want to look at the approaches below:
1) Check the size of the objects being added to the list and whether they can be slimmed down.
2) Handle the data in chunks instead of adding everything to the list at once.
3) Tune the JVM arguments to increase the heap size so that it can hold more objects.
You can try to simulate this by increasing the number of test objects and reducing the heap size in the dev environment, or maybe by taking a production dump and running with the same volume.
Ok, it looks to me like you only care about DataEntity objects and BaseEntity objects where their "consignment IDs" match. You really should do this sort of thing in the database query. The use of "entity" objects makes it seem like your DB interactions are via JPA/Hibernate, in which case you may want to create a DB view that joins the two tables by consignment ID, and provides the necessary information for your output. Next, create a custom read-only entity that matches this view. Then you can apply pagination to your query of this view (if it's still necessary) and retrieve the information in smaller batches.
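A rough sketch of that idea, assuming JPA is available (the view name, entity, and column names below are invented for illustration and would have to match your actual schema):
import java.math.BigDecimal;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.Table;

// Read-only entity mapped onto a DB view that already joins base and status data by consignment ID.
// (With Hibernate you could additionally mark it @Immutable.)
@Entity
@Table(name = "V_CONSIGNMENT_STATUS")   // hypothetical view name
class ConsignmentStatusView {
    @Id
    private Long id;
    private BigDecimal consignmentId;
    private String statusCode;           // only the columns the response actually needs
    // getters/setters omitted
}

class ConsignmentStatusDao {
    private final EntityManager em;

    ConsignmentStatusDao(EntityManager em) {
        this.em = em;
    }

    // Fetch one page of the view instead of materialising everything in one ArrayList.
    List<ConsignmentStatusView> fetchPage(int offset, int pageSize) {
        return em.createQuery(
                        "select v from ConsignmentStatusView v order by v.id",
                        ConsignmentStatusView.class)
                 .setFirstResult(offset)
                 .setMaxResults(pageSize)
                 .getResultList();
    }
}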
Background
My application connects to the Genesys Interaction Server in order to receive events for actions performed on the Interaction Workspace. I am using the Platform SDK 8.5 for Java.
I make the connection to the Interaction Server using the method described in the API reference.
InteractionServerProtocol interactionServerProtocol =
new InteractionServerProtocol(
new Endpoint(
endpointName,
interactionServerHost,
interactionServerPort));
interactionServerProtocol.setClientType(InteractionClient.AgentApplication);
interactionServerProtocol.open();
Next, I need to register a listener for each Place I wish to receive events for.
RequestStartPlaceAgentStateReporting requestStartPlaceAgentStateReporting = RequestStartPlaceAgentStateReporting.create();
requestStartPlaceAgentStateReporting.setPlaceId("PlaceOfGold");
requestStartPlaceAgentStateReporting.setTenantId(101);
isProtocol.send(requestStartPlaceAgentStateReporting);
The way it is now, my application requires the user to manually specify each Place he wishes to observe. This requires him to know the names of all the Places, which he may not necessarily have [easy] access to.
Question
How do I programmatically obtain a list of Places available? Preferably from the Interaction Server to limit the number of connections needed.
There is a method you can use. If you check the methods of the application blocks, you will see the cfg and query objects. You can use them to get a list of all DNs. When building the query, leave DBID, name and number blank.
Here is some .NET code; the Java code is similar (actually exactly the same):
List<CfgDN> list = new List<CfgDN>();
List<DN> dnlist = new List<DN>();
CfgDNQuery query = new CfgDNQuery(m_ConfService);
list = m_ConfService.RetrieveMultipleObjects<CfgDN>(query).ToList();
foreach (CfgDN item in list)
{
foo = (DN) item.DBID;
......
dnlist.Add(foo);
}
Note: DN is my own class, which contains some properties from the Platform SDK.
KeyValueCollection tenantList = new KeyValueCollection();
tenantList.addString("tenant", "Resources");
RequestStartPlaceAgentStateReportingAll all = RequestStartPlaceAgentStateReportingAll.create(tenantList);
interactionServerProtocol.send(all);
I am working on refactoring an existing application written in PowerBuilder and Java and which runs on Sybase EA Server (Jaguar). I am building a small framework to wrap around Jaguar API functions that are available in EA Server. One of the classes is to get runtime statistics from EA Server using the Monitoring class.
Without going into too much detail, Monitoring is a class in EA Server API that provides Jaguar Runtime Monitoring statistics (actual classes are in C++; EA Server provides a wrapper for these in Java, so they can be accessed through CORBA).
Below is the simplified version of my class. (I made a superclass which I inherit from for getting stats for components, conn. caches, HTTP etc).
public class JagMonCompStats {
    ...
    public void dumpStats(String type, String entity) {
        // e.g. type = "Component", entity = "web_business_rules"
        String[] header = {"Active", "Pooled", "invoke"};
        // This has a lot more keys, simplified for this discussion
        short[] compKeys = {
            (short) (MONITOR_COMPONENT_ACTIVE.value),
            (short) (MONITOR_COMPONENT_POOLED.value),
            (short) (MONITOR_COMPONENT_INVOKE.value)
        };
        double[] data = null;
        ...
        /* Call to Jaguar API */
        Monitoring jm = MonitoringHelper.narrow(session.create("Jaguar/Monitoring"));
        data = jm.monitor(type, entity, compKeys);
        ...
        printStats(entity, header, data);
        ...
    }

    protected void printStats(String entityName, String[] header, double[] data) {
        /* print the header and the data in a formatted way */
    }
}
The line data = jm.monitor(...) is the call to the Jaguar API. It takes the type of the entity, the name of the entity, and the keys of the stats we want, and returns a double array. I then print the header and data as formatted output.
The program works, but I would like to get experts' opinions on the OO design. For one, I want to be able to customize printStats to print in different formats (e.g. a full-blown report or a one-liner). Beyond that, I am also thinking of showing the stats on a web page or a PowerBuilder screen, in which case printStats may not even be relevant. How would you do this in a properly OO way?
Well, it's quite simple. Don't print stats from this class. Return them. And let the caller decide how the returned stats should be displayed.
Now that you can get stats, you can create a OneLinerStatsPrinter, a DetailedStatsPrinter, an HtmlStatsFormatter, or whatever you want.
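A minimal sketch of that separation (the formatter names are just the ones suggested above; each type would live in its own source file):
// Retrieval and presentation are decoupled: the stats class returns data,
// and the caller picks whichever formatter it needs.
public interface StatsFormatter {
    String format(String entityName, String[] header, double[] data);
}

public class OneLinerStatsFormatter implements StatsFormatter {
    @Override
    public String format(String entityName, String[] header, double[] data) {
        StringBuilder sb = new StringBuilder(entityName).append(':');
        for (int i = 0; i < header.length && i < data.length; i++) {
            sb.append(' ').append(header[i]).append('=').append(data[i]);
        }
        return sb.toString();
    }
}
Your dumpStats then shrinks to returning the double[] (plus the header), and the caller decides whether to hand it to a OneLinerStatsFormatter, a DetailedStatsPrinter, or straight to a web page or PowerBuilder screen.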
I have been given the task of creating a SQL database and a GUI in Java to access it. I pretty much have it, but I have a question about threads. Before today I did not use any threads in my program, and as a result, just pulling 150 records from the database took around 5-10 seconds. This was very inconvenient and I was not sure I could fix it. Today I read up on using threads in programs similar to mine and decided to use a single thread in this method:
public Vector VectorizeView(final String viewName) {
final Vector table = new Vector();
int cCount = 0;
try {
cCount = getColumnCount(viewName);
} catch (SQLException e1) {
e1.printStackTrace();
}
final int viewNameCount = cCount;
Thread runner = new Thread(){
public void run(){
try {
Connection connection = DriverManager.getConnection(getUrl(),
getUser(), getPassword());
Statement statement = connection.createStatement();
ResultSet result = statement.executeQuery("Select * FROM "
+ viewName);
while (result.next()) {
Vector row = new Vector();
for (int i = 1; i <= viewNameCount; i++) {
String resultString = result.getString(i);
if (result.wasNull()) {
resultString = "NULL";
} else {
resultString = result.getString(i);
}
row.addElement(resultString);
}
table.addElement(row);
}
} catch (SQLException e) {
e.printStackTrace();
}
}
};
runner.start();
return table;
}
The only thing I really changed was adding the thread 'runner', and the performance increased dramatically. Pulling 500 records happens almost instantly this way.
The method looked like this before:
public Vector VectorizeTable(String tableName) {
Vector<Vector> table = new Vector<Vector>();
try {
Connection connection = DriverManager.getConnection(getUrl(),
getUser(), getPassword());
Statement statement = connection.createStatement();
ResultSet result = statement.executeQuery("Select * FROM "
+ tableName);
while (result.next()) {
Vector row = new Vector();
for (int i = 1; i <= this.getColumnCount(tableName); i++) {
String resultString = result.getString(i);
if (result.wasNull()) {
resultString = "NULL";
} else {
resultString = result.getString(i);
}
row.addElement(resultString);
}
table.addElement(row);
}
} catch (SQLException e) {
e.printStackTrace();
}
return table;
}
My question is: why is the method with the thread so much faster than the one without? I don't use multiple threads anywhere else in my program. I have looked online, but nothing seems to answer my question.
Any information anyone could give would be greatly appreciated. I'm a noob on threads XO
If you need any other additional information to help understand what is going on let me know!
Answer:
Look at Aaron's answer; this wasn't an issue with threads at all. I feel very noobish right now :(. Thanks, #Aaron!
I think that what you are doing is appearing to make the database load faster because the VectorizeView method is returning before the data has been loaded. The load is then proceeding in the background, and completing in (probably) the same time as before.
You could test this theory by adding a thread.join() call after the thread.start() call.
If this is what is going on, you probably need to do something to stop other parts of your application from accessing the table object before loading has completed. Otherwise your application is liable to behave incorrectly if the user does something too soon after launch.
FWIW, loading 100 or 500 records from a database should be quick, unless the query itself is expensive for the database. That shouldn't be the case for a simple select from a table ... unless you are actually selecting from a view rather than the table, and the view is poorly designed. Either way, you probably would be better off focussing on why such a simple query is taking so long, rather than trying to run it in a separate thread.
In your follow-up you say that the version with the join after the start is just as fast as the version without it.
My first reaction is to say: "Leave the join there. You've fixed the problem."
But this doesn't explain what is actually going on, and I'm now completely baffled. The best I can think of is that something your application is doing before this point, on the current thread, is the cause.
Maybe you should investigate what the application is doing in the period in which this is occurring. See if you can figure out where all the time is being spent.
Take a thread dump and look at the threads.
Run it under the debugger to see where the "pause" is occurring.
Profile it.
Set the application logging to a high level and see if there are any clues.
Check the database logs.
Etcetera
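For the thread-dump and heap suggestions, the stock JDK command-line tools are usually enough; assuming a HotSpot JDK is on the path (the PID below is a placeholder):
jps                  # list running JVM process IDs
jstack 12345         # print a thread dump of that JVM
jmap -histo 12345    # per-class heap histogram, useful for spotting what is piling up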
It looks like you kick off (i.e. start) a background thread to perform the query, but you don't join to wait for the computation to complete. When you return table, it won't be filled in with the results of the query yet -- the other thread will fill it in over time, after your method returns. The method returns almost instantly, because it's doing no real work.
If you want to ensure that the data is loaded before the method returns, you'll need to call runner.join(). If you do so, you'll see that loading the data is taking just as long as it did before. The only difference with the new code is that the work is performed in a separate thread of execution, allowing the rest of your code to get on with other work that it needs to perform. Note that failing to call join could lead to errors if code in your main thread tries to use the data in the Vector before it's actually filled in by the background thread.
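For example, a minimal change to the end of VectorizeView, keeping everything else as it is:
runner.start();
try {
    runner.join();   // wait for the background thread to finish filling 'table'
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
return table;        // the Vector is now fully populated, and the timing will match the old version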
Update: I just noticed that you're also precomputing getColumnCount in the multi-threaded version, while in the single-threaded version you're computing it for each iteration of the inner loop. Depending on the complexity of that method, that might explain part of the speedup (if there is any).
Are you sure that it is faster? Since you start a separate thread, you return table immediately. Are you sure that you measure the time after it is fully populated with data?
Update
To measure the time correctly, keep a reference to the runner object and call runner.join(). You can even do it in the same method for testing.
Ok, I think that if you examine table at the end of this method you will find it's empty. That's because start starts running the thread in the background, and you immediately return table without the background thread having a chance to populate it. So it appears to be going faster but actually isn't.