Today I tried to fix some potential memory leaks in my web application.
I use the following libraries.
spring-webmvc-3.2.9.RELEASE
spring-data-mongodb-1.5.0.RELEASE
mongo-java-driver-2.12.1
At first I forgot to close the MongoClient, so I changed my configuration as follows.
@Configuration
public class MongoDBConfiguration implements DisposableBean {
private MongoClient mongoClient;
@Bean
public MongoTemplate mongoTemplate() {
try {
final Properties props = loadProperties();
log.debug("Initializing Mongo DB client");
mongoClient =
new MongoClient(getProperty(props, "host", "localhost"), cint(getProperty(props, "port",
"27017")));
UserCredentials credentials = null;
final String auth = getProperty(props, "auth", null);
if (auth != null && auth.equalsIgnoreCase("true")) {
final String user = getProperty(props, "user", null);
final String pass = getProperty(props, "pass", null);
if (user != null && pass != null) {
credentials = new UserCredentials(user, pass);
}
}
final MongoDbFactory mongoDbFactory =
new SimpleMongoDbFactory(mongoClient, getProperty(props, "dbname", "Feeder"), credentials);
final MappingMongoConverter mongoConverter =
new MappingMongoConverter(new DefaultDbRefResolver(mongoDbFactory),
new MongoMappingContext());
mongoConverter.setCustomConversions(customConversions(mongoConverter));
mongoConverter.afterPropertiesSet();
final MongoTemplate mongoTemplate = new MongoTemplate(mongoDbFactory, mongoConverter);
return mongoTemplate;
} catch (final IOException e) {
log.error("", e);
}
return null;
}
/**
* Close the Mongo client to avoid memory leaks.
*/
@Override
public void destroy() {
log.debug("Shutdown Mongo DB connection");
mongoClient.close();
log.debug("Mongo DB connection shutdown completed");
}
}
When stopping or reloading the web application, there are still messages complaining about probable memory leaks.
2014-06-24 07:58:02,114 DEBUG d.p.f.s.m.MongoDBConfiguration - Shutdown Mongo DB connection
2014-06-24 07:58:02,118 DEBUG d.p.f.s.m.MongoDBConfiguration - Mongo DB connection shutdown completed
Jun 24, 2014 7:58:02 AM org.apache.catalina.loader.WebappClassLoader checkThreadLocalMapForLeaks
SEVERE: The web application [/feeder##1.5.1] created a ThreadLocal with key of type [com.mongodb.BaseCluster$1] (value [com.mongodb.BaseCluster$1@766465]) and a value of type [java.util.Random] (value [java.util.Random@5cb9231f]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
The last two log lines are repeated up to nine times, as far as I have seen.
How can I fix this? Can it be ignored?
You need to clean up the thread locals. See my answer here: stop/interrupt a thread after jdbc Driver has been deregistered
/**
* Cleanup function which cleans all thread local variables. Using thread
* local variables is not a good practice but unfortunately some libraries
* are still using them. We need to clean them up to prevent memory leaks.
*
* @return the number of ThreadLocal values cleared
*/
private int immolate() {
int count = 0;
try {
final Field threadLocalsField = Thread.class
.getDeclaredField("threadLocals");
threadLocalsField.setAccessible(true);
final Field inheritableThreadLocalsField = Thread.class
.getDeclaredField("inheritableThreadLocals");
inheritableThreadLocalsField.setAccessible(true);
for (final Thread thread : Thread.getAllStackTraces().keySet()) {
count += clear(threadLocalsField.get(thread));
count += clear(inheritableThreadLocalsField.get(thread));
}
log.info("Immolated " + count + " values in ThreadLocals");
} catch (Exception e) {
log.error("ThreadLocalImmolater.immolate()", e);
}
return count;
}
/**
* Cleaner for thread local map.
*
* @param threadLocalMap thread local map to clean, or null
* @return number of cleaned objects
* @throws Exception in case of error
*/
private int clear(@NotNull final Object threadLocalMap) throws Exception {
if (threadLocalMap == null) {
return 0;
}
int count = 0;
final Field tableField = threadLocalMap.getClass().getDeclaredField(
"table");
tableField.setAccessible(true);
final Object table = tableField.get(threadLocalMap);
for (int i = 0, length = Array.getLength(table); i < length; ++i) {
final Object entry = Array.get(table, i);
if (entry != null) {
final Object threadLocal = ((WeakReference<?>) entry).get();
if (threadLocal != null) {
log(i, threadLocal);
Array.set(table, i, null);
++count;
}
}
}
return count;
}
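For threads and libraries you control, a less invasive fix than reflection is to have each task remove its own ThreadLocals when it finishes, so nothing survives into the container's recycled pool threads. A minimal sketch of that pattern (the class and buffer below are illustrative, not part of the code above):

```java
public class ThreadLocalHygiene {
    // Illustrative per-thread scratch buffer, analogous to what a library
    // might keep in a ThreadLocal.
    private static final ThreadLocal<StringBuilder> BUFFER =
            ThreadLocal.withInitial(StringBuilder::new);

    public static String process(String input) {
        try {
            StringBuilder sb = BUFFER.get();
            sb.setLength(0);
            return sb.append(input).append("-processed").toString();
        } finally {
            // Remove the entry so the value cannot pin the webapp's class
            // loader when the container reuses this pool thread.
            BUFFER.remove();
        }
    }

    public static void main(String[] args) {
        System.out.println(process("job")); // job-processed
    }
}
```

For third-party libraries like the Mongo driver, of course, you cannot add the remove() call yourself, which is why the reflective sweep above exists.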
I'm using Ignite cache, and I want to cache a view where the id is not relevant, so implementing loadCache seems somehow tricky to me when there is no id.
How should I update the example below?
public class CacheJdbcPersonStore extends CacheStoreAdapter<Long, Person> {
...
// This method is called whenever "IgniteCache.loadCache()" or
// "IgniteCache.localLoadCache()" methods are called.
@Override public void loadCache(IgniteBiInClosure<Long, Person> clo, Object... args) {
if (args == null || args.length == 0 || args[0] == null)
throw new CacheLoaderException("Expected entry count parameter is not provided.");
final int entryCnt = (Integer)args[0];
try (Connection conn = connection()) {
try (PreparedStatement st = conn.prepareStatement("select * from PERSONS")) {
try (ResultSet rs = st.executeQuery()) {
int cnt = 0;
while (cnt < entryCnt && rs.next()) {
Person person = new Person(rs.getLong(1), rs.getString(2), rs.getString(3));
clo.apply(person.getId(), person);
cnt++;
}
}
}
}
catch (SQLException e) {
throw new CacheLoaderException("Failed to load values from cache store.", e);
}
}
...
}
The clo.apply(person.getId(), person) part is the issue: in my logic the view doesn't have an ID.
You need some unique ID to store data in Ignite. If there is nothing suitable in the actual data, your options are:
UUID.randomUUID()
Simple counter (id++) or LongAdder/AtomicLong - works only if you are loading from a single node
IgniteAtomicSequence - works across entire cluster
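A minimal sketch of the first two options (the class and method names are illustrative; IgniteAtomicSequence needs a running cluster, so it is not shown):

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

public class KeyGenerators {
    // Option 1: random UUIDs -- globally unique with no coordination,
    // but the keys carry no ordering information.
    public static String uuidKey() {
        return UUID.randomUUID().toString();
    }

    // Option 2: an in-process counter -- compact, ordered keys, but only
    // safe when a single node performs the load.
    private static final AtomicLong SEQ = new AtomicLong();

    public static long counterKey() {
        return SEQ.incrementAndGet();
    }

    public static void main(String[] args) {
        System.out.println(uuidKey());
        System.out.println(counterKey());
        System.out.println(counterKey());
    }
}
```

Inside loadCache, the generated key would then replace person.getId() in the clo.apply(...) call.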
I am new to using SSE with Jersey, and have the following situation.
I have a JAXB annotated class that represents and acts on the I/O of a Raspberry Pi (class GpioRepresentation).
The client class accesses the status of I/O through the method getUpdate() which returns the XML object
representation of the class.
@XmlRootElement
public class GpioRepresentation implements GpioSubject
{
...
/**
* Returns an object of this class with the current
* representation of the I/O states
* @return this
*/
public synchronized GpioRepresentation getUpdate()
{
this.getGarageDoorInputState();
this.getZoneOneFeedback();
this.getZoneTwoFeedback();
this.getZoneThreeFeedback();
this.getGarageDoorRelayState();
this.getZoneOneRelayState();
this.getZoneTwoRelayState();
this.getZoneThreeRelayState();
return this;
}
...
}
The client that uses getUpdate() is the class HomeResource, method getPiStatusStream(). This is a JAX-RS annotated method,
and provides remote clients Server Sent Events. Currently this method is written as shown here with a continuous loop
in a separate thread which polls for updates.
@Path("/homeservice")
@RolesAllowed({"ADMIN", "USER"})
@Produces(MediaType.APPLICATION_XML)
@Consumes(MediaType.APPLICATION_XML)
public class HomeResource
{
private static final Logger LOGGER = LoggerFactory.getLogger(HomeResource.class);
private GpioRepresentation piService;
...
/**
* gets status information on the Raspberry Pi's
* I/O and returns it to the client on a continuous basis
* and only if it changes.
* @return EventOutput
*/
@GET
@Path("/iostatus")
@Produces(SseFeature.SERVER_SENT_EVENTS)
public EventOutput getPiStatusStream()
{
final EventOutput eventOutput = new EventOutput();
new Thread(new Runnable()
{
public void run()
{
try {
String gdState = null;
String zOneState = null;
String zTwoState = null;
String zThreeState = null;
String gdRState = null;
String zOneRState = null;
String zTwoRState = null;
String zThreeRState = null;
String lastgdState = null;
String lastzOneState = null;
String lastzTwoState = null;
String lastzThreeState = null;
String lastgdRState = null;
String lastzOneRState = null;
String lastzTwoRState = null;
String lastzThreeRState = null;
while(true) {
final OutboundEvent.Builder eventBuilder = new OutboundEvent.Builder();
final GpioRepresentation iostatus = piService.getUpdate();
gdState = piService.getGarageDoorInputState();
zOneState = piService.getZoneOneFeedback();
zTwoState = piService.getZoneTwoFeedback();
zThreeState = piService.getZoneThreeFeedback();
gdRState = piService.getGarageDoorRelayState();
zOneRState = piService.getZoneOneRelayState();
zTwoRState = piService.getZoneTwoRelayState();
zThreeRState = piService.getZoneThreeRelayState();
if (!(gdState.equals(lastgdState) && zOneState.equals(lastzOneState) && zTwoState.equals(lastzTwoState) && zThreeState.equals(lastzThreeState)
&& gdRState.equals(lastgdRState) && zOneRState.equals(lastzOneRState) && zTwoRState.equals(lastzTwoRState) && zThreeRState.equals(lastzThreeRState)))
{
OutboundEvent event = eventBuilder.data(GpioRepresentation.class, iostatus)
.mediaType(MediaType.APPLICATION_XML_TYPE)
.build();
eventOutput.write(event);
lastgdState = gdState;
lastzOneState = zOneState;
lastzTwoState = zTwoState;
lastzThreeState = zThreeState;
lastgdRState = gdRState;
lastzOneRState = zOneRState;
lastzTwoRState = zTwoRState;
lastzThreeRState = zThreeRState;
}
Thread.sleep(100);
}
}
catch (Exception exception)
{
System.err.println("Error: " + exception);
}
finally
{
try
{
eventOutput.close();
}
catch (IOException ioClose)
{
throw new RuntimeException("Error when closing the event output.", ioClose);
}
}
}
}).start();
return eventOutput;
}
...
}
The issue I see with this is that it doesn't scale well. Creating a thread for every GET from a remote client takes time and eats CPU resources, and I don't think it is an elegant solution. What I would like to do is encapsulate the event code into a separate class and use some sort of observer pattern that can trigger the creation of an event. However, how do I tie this into the resource method so that it can be returned to the remote client?
Can anyone point me to some examples, or provide advice on designing a solution for this?
The solution was to utilize the SseBroadcaster class. I made the HomeResource class an observer of the GpioRepresentation class, and then called a new method (broadcastIOUpdateMessage()) which outputs my event to the remote clients.
public void broadcastIOUpdateMessage()
{
GpioRepresentation iostatus = piService.getUpdate();
OutboundEvent.Builder eventBuilder = new OutboundEvent.Builder();
OutboundEvent event = eventBuilder.data(GpioRepresentation.class, iostatus)
.mediaType(MediaType.APPLICATION_XML_TYPE)
.build();
broadcaster.broadcast(event);
}
@GET
@Path("/iostatus")
@Produces(SseFeature.SERVER_SENT_EVENTS)
public EventOutput getPiStatusStream()
{
final EventOutput eventOutput = new EventOutput();
this.broadcaster.add(eventOutput);
return eventOutput;
}
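The observer wiring behind this answer can be sketched without any Jersey types; the interface and class names below are made up for illustration, not taken from the project:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Stand-in for the role GpioRepresentation plays as a subject.
class IoSubject {
    interface IoObserver {
        void onIoChanged(String snapshot);
    }

    private final List<IoObserver> observers = new CopyOnWriteArrayList<>();
    private String lastSnapshot = "";

    void addObserver(IoObserver o) {
        observers.add(o);
    }

    // Called by whatever detects a pin change; observers are notified only
    // when the snapshot actually differs, mirroring the old
    // "compare against last state" polling loop.
    void update(String snapshot) {
        if (!snapshot.equals(lastSnapshot)) {
            lastSnapshot = snapshot;
            for (IoObserver o : observers) {
                o.onIoChanged(snapshot);
            }
        }
    }
}
```

In the real resource, the observer callback is where broadcastIOUpdateMessage() builds the OutboundEvent and hands it to the SseBroadcaster, so no per-client polling thread is needed.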
I'm using Jedis to perform a lot of insertions/reads in Redis.
The Redis server is using the default configuration.
The problem appears when I start using a few threads and the exception is:
redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketException: Pipe quebrado (Write failed)
I've searched a lot about this problem ("Pipe quebrado" is Portuguese for "broken pipe") but couldn't find its cause or a fix. The code I'm using to perform these tests is below:
public class RedisFacade {
private static RedisFacade instancia = null;
// Initialize the Connection
final JedisPoolConfig poolConfig = buildPoolConfig();
JedisPool pool = new JedisPool(poolConfig, "localhost");
Jedis jedis;
int i = 0;
private RedisFacade() {
}
public static RedisFacade getInstancia() {
if (instancia == null) {
instancia = new RedisFacade();
}
return instancia;
}
// returns a Jedis client from the pool
public Jedis getDB() {
if (jedis == null) {
jedis = pool.getResource();
}
return jedis;
}
//inserting
public void insert(Document d) {
String key = i + d.getString("date") + d.getString("time");
String value = d.toString();
this.getDB().set(key, value);
i++;
}
//reading
public void read(String date, String time) {
Object doc = this.getDB().get(i + date + time);
i++;
System.out.println(doc);
}
public void destroyPool() {
this.pool.destroy();
}
private JedisPoolConfig buildPoolConfig() {
final JedisPoolConfig poolConfig = new JedisPoolConfig();
poolConfig.setMaxTotal(1100);
poolConfig.setMaxIdle(16);
poolConfig.setMinIdle(16);
poolConfig.setTestOnBorrow(true);
poolConfig.setTestOnReturn(true);
poolConfig.setTestWhileIdle(true);
poolConfig.setMinEvictableIdleTimeMillis(Duration.ofSeconds(60).toMillis());
poolConfig.setTimeBetweenEvictionRunsMillis(Duration.ofSeconds(30).toMillis());
poolConfig.setNumTestsPerEvictionRun(3);
poolConfig.setBlockWhenExhausted(true);
return poolConfig;
}
}
Seems like it's a timeout issue.
See this thread: Configure Jedis timeout
And also this discussion: https://github.com/xetorthio/jedis/issues/185
So I would try instantiating JedisPool with a timeout parameter
(i.e. https://github.com/xetorthio/jedis/blob/master/src/main/java/redis/clients/jedis/JedisPool.java#L201, but there are many other constructors)
and setting CONFIG SET timeout 600 in redis (with a 10 minute timeout for instance).
Edit
The JedisPool timeout is in milliseconds it seems.
After trying to implement new constructors, new configurations for the pool and clients, I tried a simple way to fix the problem: close the resources I was getting from the pool. To do so, I've changed the following code:
public Jedis getDB() {
jedis = pool.getResource();
return jedis;
}
// creates an _id to be reused later when fetching the documents
public void insert(Document d) {
String key = "key" + i;
String value = d.toString();
Jedis jedis = this.getDB();
jedis.set(key, value);
jedis.close();
i++;
}
// look up by _id
public void read() {
Jedis jedis = this.getDB();
Object doc = jedis.get("key" + i);
jedis.close();
i++;
System.out.println(doc);
}
After changing the code, the service started to work as I had planned, so I'll accept this as the solution.
I am using Redis (3.2.100) for Windows to cache my database data in Java. This is my Redis init code:
private static Dictionary<Integer, JedisPool> pools = new Hashtable();
static {
JedisPoolConfig config = new JedisPoolConfig();
config.setMaxIdle(2);
config.setMaxTotal(10);
config.setTestOnBorrow(true);
config.setMaxWaitMillis(2000);
for (int i = 0; i < 16; i++) {
JedisPool item = new JedisPool(config, "127.0.0.1", 6379,10*1000);
pools.put(i, item);
}
}
This is the cache code:
public static String get(String key, Integer db) {
JedisPool poolItem = pools.get(db);
Jedis jredis = poolItem.getResource();
String result = jredis.get(key);
return result;
}
The problem is that when the program runs for a while, the getResource method throws:
redis.clients.jedis.exceptions.JedisException: Could not get a resource from the pool
So how do I reuse or close the connection? I am using this command to confirm that the client count has reached the maximum:
D:\Program Files\Redis>redis-cli.exe info clients
# Clients
connected_clients:11
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
How to fix it?
Remember to close the Redis connection. Modify the function like this:
public static String get(String key, Integer db) {
JedisPool poolItem = pools.get(db);
Jedis jredis = null;
String result = null;
try {
jredis = poolItem.getResource();
result = jredis.get(key);
} catch (Exception e) {
log.error("get value error", e);
} finally {
if (jredis != null) {
jredis.close();
}
}
return result;
}
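Since Jedis implements Closeable, the same guarantee can be written more compactly with try-with-resources. The pattern is shown here with a plain AutoCloseable stand-in so the sketch compiles without the Jedis jar; with the real client it would be try (Jedis j = poolItem.getResource()) { return j.get(key); }:

```java
public class PooledGetDemo {
    // Stand-in for a pooled client; Jedis.close() likewise returns the
    // connection to its pool rather than tearing it down.
    static class FakeClient implements AutoCloseable {
        boolean returnedToPool = false;

        String get(String key) {
            return "value-of-" + key;
        }

        @Override
        public void close() {
            returnedToPool = true;
        }
    }

    static String get(FakeClient client, String key) {
        // The client is closed (returned to the pool) even if get() throws.
        try (FakeClient c = client) {
            return c.get(key);
        }
    }

    public static void main(String[] args) {
        FakeClient c = new FakeClient();
        System.out.println(get(c, "mykey")); // value-of-mykey
        System.out.println(c.returnedToPool); // true
    }
}
```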
I am currently developing a Java Web application on a machine with the following attributes:
Ubuntu-Server Edition Linux 12.04 64-bit
Sun Java JDK version 7
Apache Tomcat 7.0.30
NetBeans IDE, version 7.1.2
My project consists of a SOAP Web service interface that maintains a shared in-memory object. In order to make the object visible to all threads, I developed a Singleton. I post the code of my application below:
@WebService()
public class ETL_WS {
private Singleton CAP_offer_coupon_map;
public ETL_WS() { }
/**
* This method adds a single coupon record to the memory map.
*/
@WebMethod
synchronized public int singleCouponLoad(@WebParam(name = "CouponID") long coupon_id,
@WebParam(name = "ProductCategoryID") long product_category_id,
@WebParam(name = "DateTo") Date dateTo,
@WebParam(name = "LocationID") Location location_id) {
Coupon _c = new Coupon(coupon_id, product_category_id, dateTo);
if(location_id != null)
_c.setLocation(location_id);
CAP_CouponOfferCollection _data = CAP_offer_coupon_map.getModel();
Coupon _tmp = _data.getCoupon(coupon_id);
if(_tmp == null)
return -1;
_data.insertCoupon(_c);
return 0;
}
/**
* This method adds a single offer record to the memory map.
*/
@WebMethod
synchronized public int singleOfferLoad(@WebParam(name = "OfferID") long offer_id,
@WebParam(name = "ProductCategoryID") long product_category_id,
@WebParam(name = "DateTo") Date dateTo,
@WebParam(name = "LocationID") Location location_id) {
Offer _o = new Offer(offer_id, product_category_id, dateTo);
if(location_id != null)
_o.setLocation(location_id);
CAP_CouponOfferCollection _data = CAP_offer_coupon_map.getModel();
Offer _tmp = _data.getOffer(offer_id);
if(_tmp == null)
return -1;
_data.insertOffer(_o);
return 0;
}
@WebMethod
synchronized public String getAllCoupons() {
CAP_CouponOfferCollection _data = CAP_offer_coupon_map.getModel();
HashMap<Long, Coupon> _c = _data.getCoupons_map();
return _c.toString();
}
@WebMethod
synchronized public String getAllOffers() {
CAP_CouponOfferCollection _data = CAP_offer_coupon_map.getModel();
HashMap<Long, Offer> _o = _data.getOffers_map();
return _o.toString();
}
@WebMethod
synchronized public long getProductIdFromCouponId(@WebParam(name = "CouponID") long coupon_id) {
long _product_id = -1;
CAP_CouponOfferCollection _data = CAP_offer_coupon_map.getModel();
Coupon _c = _data.getCoupon(coupon_id);
if(_c != null)
_product_id = _c.getCoupon_id();
return _product_id;
}
@WebMethod
synchronized public long getProductIdFromOfferId(@WebParam(name = "OfferID") long offer_id) {
long _product_id = -1;
CAP_CouponOfferCollection _data = CAP_offer_coupon_map.getModel();
Offer _o = _data.getOffer(offer_id);
if(_o != null)
_product_id = _o.getOffer_id();
return _product_id;
}
}
The Singleton wrapper-class is shown below:
public class Singleton {
private static boolean _instanceFlag = false;
private static final Singleton _instance = new Singleton();
private static CAP_CouponOfferCollection _data;
private Singleton() {
_data = new CAP_CouponOfferCollection();
_instanceFlag = true;
}
public static synchronized CAP_CouponOfferCollection getModel() {
if(_instanceFlag == false) {
_data = new CAP_CouponOfferCollection();
_instanceFlag = true;
}
return _data;
}
}
and the CAP_CouponOfferCollection class is shown below:
public class CAP_CouponOfferCollection {
private HashMap<Long, Coupon> _coupons_map;
private HashMap<Long, ArrayList<Long>> _product_to_coupons_map;
private HashMap<Long, Offer> _offers_map;
private HashMap<Long, ArrayList<Long>> _product_cat_to_offers_map;
private static long _creation_time;
public CAP_CouponOfferCollection() {
_creation_time = System.currentTimeMillis();
System.out.println("Creation of CAP_CouponOffer object: " +
_creation_time);
}
synchronized public void insertCoupon(Coupon newCoupon) {
if(_coupons_map == null) {
_coupons_map = new HashMap<Long, Coupon>();
_product_to_coupons_map =
new HashMap<Long, ArrayList<Long>>();
}
Long key = newCoupon.getCoupon_id();
if(!_coupons_map.containsKey(key)) {
_coupons_map.put(key, newCoupon);
key = newCoupon.getProductCategory_id();
if(_product_to_coupons_map.containsKey(key)) {
ArrayList<Long> _c_list = _product_to_coupons_map.get(key);
_c_list.add(newCoupon.getCoupon_id());
_product_to_coupons_map.remove(key);
_product_to_coupons_map.put(key, _c_list);
}else {
ArrayList<Long> _c_list = new ArrayList<Long>();
_c_list.add(newCoupon.getCoupon_id());
_product_to_coupons_map.put(key, _c_list);
}
}
}
synchronized public void insertOffer(Offer newOffer) {
if(_offers_map == null) {
_offers_map = new HashMap<Long, Offer>();
_product_cat_to_offers_map =
new HashMap<Long, ArrayList<Long>>();
}
Long key = newOffer.getOffer_id();
if(!_offers_map.containsKey(key)) {
_offers_map.put(key, newOffer);
key = newOffer.getProductCategory_id();
if(_product_cat_to_offers_map.containsKey(key)) {
ArrayList<Long> _o_list = _product_cat_to_offers_map.get(key);
_o_list.add(newOffer.getOffer_id());
_product_cat_to_offers_map.remove(key);
_product_cat_to_offers_map.put(key, _o_list);
}else {
ArrayList<Long> _o_list = new ArrayList<Long>();
_o_list.add(newOffer.getOffer_id());
_product_cat_to_offers_map.put(key, _o_list);
}
}
}
synchronized public void removeCoupon(long couponId) {
Coupon _c;
Long key = new Long(couponId);
if(_coupons_map != null && _coupons_map.containsKey(key)) {
_c = (Coupon) _coupons_map.get(key);
_coupons_map.remove(key);
Long product = new Long(_c.getCoupon_id());
ArrayList<Long> _c_list =
(ArrayList<Long>) _product_to_coupons_map.get(product);
_c_list.remove(key);
_product_to_coupons_map.remove(product);
_product_to_coupons_map.put(product, _c_list);
}
}
synchronized public void removeOffer(long offerId) {
Offer _o;
Long key = new Long(offerId);
if(_offers_map != null && _offers_map.containsKey(key)) {
_o = (Offer) _offers_map.get(key);
_offers_map.remove(key);
Long product = new Long(_o.getOffer_id());
ArrayList<Long> _o_list =
(ArrayList<Long>) _product_cat_to_offers_map.get(product);
_o_list.remove(key);
_product_cat_to_offers_map.remove(product);
_product_cat_to_offers_map.put(product, _o_list);
}
}
synchronized public Coupon getCoupon(long CouponID) {
Long key = new Long(CouponID);
if(_coupons_map != null && _coupons_map.containsKey(key)) {
Coupon _c = (Coupon) _coupons_map.get(key);
Date _now = new Date();
if(_now.compareTo(_c.getDateTo()) > 0) {
this.removeCoupon(CouponID);
return null;
}
return (Coupon) _coupons_map.get(key);
}else
return null;
}
synchronized public Offer getOffer(long OfferID) {
Long key = new Long(OfferID);
if(_offers_map != null && _offers_map.containsKey(key)) {
Offer _o = (Offer) _offers_map.get(key);
Date _now = new Date();
if(_now.compareTo(_o.getDateTo()) > 0) {
this.removeOffer(OfferID);
return null;
}
return (Offer) _offers_map.get(key);
}else
return null;
}
synchronized public ArrayList<Long> getCoupons(long ProductID) {
Long key = new Long(ProductID);
if(_product_to_coupons_map != null && _product_to_coupons_map.containsKey(key))
{
ArrayList<Long> _c_list =
(ArrayList<Long>) _product_to_coupons_map.get(key);
Iterator itr = _c_list.iterator();
while(itr.hasNext()) {
Long l = (Long) itr.next();
if(this.getCoupon(l.longValue()) == null)
_c_list.remove(l.intValue());
}
_product_to_coupons_map.remove(key);
_product_to_coupons_map.put(key, _c_list);
return _c_list;
}else
return null;
}
synchronized public ArrayList<Long> getOffers(long ProductID) {
Long key = new Long(ProductID);
if(_product_cat_to_offers_map != null &&
_product_cat_to_offers_map.containsKey(key)) {
ArrayList<Long> _o_list = _product_cat_to_offers_map.get(key);
Iterator itr = _o_list.iterator();
while(itr.hasNext()) {
Long l = (Long) itr.next();
if(this.getOffer(l.longValue()) == null)
_o_list.remove(l.intValue());
}
_product_cat_to_offers_map.remove(key);
_product_cat_to_offers_map.put(key, _o_list);
return _o_list;
}else
return null;
}
synchronized public HashMap<Long, Coupon> getCoupons_map() {
return _coupons_map;
}
synchronized public void setCoupons_map(HashMap<Long, Coupon> _coupons_map) {
this._coupons_map = _coupons_map;
}
synchronized public static long getCreation_time() {
return _creation_time;
}
synchronized public void cleanup_offers() {
if(_product_cat_to_offers_map != null) {
Set _offers_key_set = _product_cat_to_offers_map.keySet();
Iterator itr = _offers_key_set.iterator();
while(itr.hasNext()) {
Long l = (Long) itr.next();
this.getOffers(l.longValue());
}
}
}
synchronized public void cleanup_coupons() {
if(_product_to_coupons_map != null) {
Set _coupons_key_set = _product_to_coupons_map.keySet();
Iterator itr = _coupons_key_set.iterator();
while(itr.hasNext()) {
Long l = (Long) itr.next();
this.getCoupons(l.longValue());
}
}
}
}
The problem is that when I deploy the above application (its name is ETL_Procedures) to Apache Tomcat, I get the following SEVERE log entries:
SEVERE: The web application [/ETL_Procedures] appears to have started a thread named [maintenance-task-executor-thread-1] but has failed to stop it. This is very likely to create a memory leak.
Oct 03, 2012 5:35:03 PM org.apache.catalina.loader.WebappClassLoader checkThreadLocalMapForLeaks
SEVERE: The web application [/ETL_Procedures] created a ThreadLocal with key of type [org.glassfish.gmbal.generic.OperationTracer$1] (value [org.glassfish.gmbal.generic.OperationTracer$1@4c24821]) and a value of type [java.util.ArrayList] (value [[]]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
Oct 03, 2012 5:35:03 PM org.apache.catalina.loader.WebappClassLoader checkThreadLocalMapForLeaks
SEVERE: The web application [/ETL_Procedures] created a ThreadLocal with key of type [com.sun.xml.bind.v2.ClassFactory$1] (value [com.sun.xml.bind.v2.ClassFactory$1@6f0d70f7]) and a value of type [java.util.WeakHashMap] (value [{class com.sun.xml.ws.runtime.config.TubeFactoryList=java.lang.ref.WeakReference@5b73a116, class javax.xml.bind.annotation.adapters.CollapsedStringAdapter=java.lang.ref.WeakReference@454da42, class com.sun.xml.ws.runtime.config.MetroConfig=java.lang.ref.WeakReference@5ec52546, class com.sun.xml.ws.runtime.config.TubeFactoryConfig=java.lang.ref.WeakReference@61124745, class java.util.ArrayList=java.lang.ref.WeakReference@770534cc, class com.sun.xml.ws.runtime.config.Tubelines=java.lang.ref.WeakReference@76cd7a1f, class javax.xml.bind.annotation.W3CDomHandler=java.lang.ref.WeakReference@2c0cc628, class com.sun.xml.ws.runtime.config.TubelineDefinition=java.lang.ref.WeakReference@7aa582af}]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
Oct 03, 2012 5:35:03 PM org.apache.catalina.loader.WebappClassLoader checkThreadLocalMapForLeaks
SEVERE: The web application [/ETL_Procedures] created a ThreadLocal with key of type [com.sun.xml.bind.v2.runtime.Coordinator$1] (value [com.sun.xml.bind.v2.runtime.Coordinator$1@826ee11]) and a value of type [java.lang.Object[]] (value [[Ljava.lang.Object;@33d7a245]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
Oct 03, 2012 5:35:03 PM org.apache.catalina.loader.WebappClassLoader checkThreadLocalMapForLeaks
SEVERE: The web application [/ETL_Procedures] created a ThreadLocal with key of type [org.glassfish.gmbal.generic.OperationTracer$1] (value [org.glassfish.gmbal.generic.OperationTracer$1@4c24821]) and a value of type [java.util.ArrayList] (value [[]]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
I really do not know what causes those memory leaks. My questions are the following:
Can anyone see what may be the issue with my web service? If the cause of the memory leaks is the Singleton object, what else can I do to meet my application's requirements and avoid memory leaks?
Is there any tool that I can use to monitor my threads and see what exactly causes these SEVERE messages?
If I leave my application deployed for a long time, I get an IllegalStateException with the following message:
INFO: Illegal access: this web application instance has been stopped already. Could not load com.sun.xml.ws.rx.rm.localization.LocalizationMessages. The eventual following stack trace is caused by an error thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access, and has no functional impact.
java.lang.IllegalStateException at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1600)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1559)
at com.sun.xml.ws.rx.rm.runtime.sequence.SequenceMaintenanceTask.run(SequenceMaintenanceTask.java:81)
at com.sun.xml.ws.commons.DelayedTaskManager$Worker.run(DelayedTaskManager.java:91)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
How can I see the whole call path that causes this Exception?
Thank you for your time and I am really sorry for the long message.
It looks like you got the java.lang.IllegalStateException because your application was redeployed but failed to undeploy cleanly, and some old threads were still running. Maybe when you redeployed the application some threads were waiting while another thread held a synchronization lock, or maybe it is some scheduled process; do you have any? It is difficult to say what it was.
I suggest registering a listener in web.xml and cleaning up all resources in its contextDestroyed() method. Also run a memory analysis tool, for example MAT.
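The cleanup that belongs in contextDestroyed() can be sketched with plain JDK types; here an ExecutorService stands in for background workers like the maintenance-task-executor thread from the log (the listener class itself is omitted since it needs the servlet API):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownSketch {
    // In a real webapp, this body would run inside
    // ServletContextListener.contextDestroyed().
    static boolean stopWorkers(ExecutorService pool) throws InterruptedException {
        pool.shutdown(); // stop accepting new tasks
        if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
            pool.shutdownNow(); // interrupt anything still running
            return pool.awaitTermination(5, TimeUnit.SECONDS);
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(() -> System.out.println("background work"));
        System.out.println(stopWorkers(pool)); // true
        System.out.println(pool.isTerminated()); // true
    }
}
```

Threads started by libraries (such as the Metro RM maintenance task here) cannot always be stopped this way, but stopping everything you own narrows the SEVERE messages down to the frameworks.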