Is it possible to salvage messages from Weblogic JMS file store? - java

I have a couple of JMS file stores from a WebLogic 10.3 server, and I would like to retrieve the messages contained in them, if possible without using WebLogic. Is this possible?
Many years ago I was able to read the JMS file store for a previous version of WebLogic using Java serialization (ObjectInputStream), but the files I have are giving me a
java.io.StreamCorruptedException: invalid stream header: C001BEAD
exception when I open them using ObjectInputStream. I'm wondering if there is a file header I need to skip before I can deserialize the messages, or whether this version of WebLogic doesn't use Java serialization at all.
The messages in the file are MapMessages. I can see the strings that correspond to the map keys when I hex dump the file, but of course the values are not readable this way. Still, the fact that I can see the map keys makes me hopeful that the messages are serialized in the file.
Any ideas on how to salvage the data?

Set aside in a safe place all the *.dat files you wish to salvage.
Start up WebLogic and log in to the Admin console
Go to Home -> Summary of JMS Servers -> XL-JMS-Server
Enable “Insertion Paused At Startup”
Enable “Production Paused At Startup”
Enable “Consumption Paused At Startup”
Save the settings
Shut down WebLogic
Swap in a JMS data store you wish to salvage
Start WebLogic
Browse the JMS monitoring page to see which queues and topics have messages persisted.
At this point, the data store is ready to be inspected/dumped using a QueueBrowser or a TopicSubscriber that you write; a sketch follows below. Alternatively, you could walk the messages ad hoc using Hermes JMS ( http://www.hermesjms.com ). Hermes has message renderers that you can implement for your custom message types.
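For reference, a minimal QueueBrowser sketch that dumps MapMessages without consuming them (the JNDI names and provider URL are placeholders; substitute the values from your own domain configuration):

    import java.util.Enumeration;
    import java.util.Hashtable;
    import javax.jms.*;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class QueueDumper {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://localhost:7001"); // adjust to your server
            InitialContext ctx = new InitialContext(env);

            // JNDI names are placeholders; use the ones bound in your domain
            QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/MyQueue");

            QueueConnection conn = cf.createQueueConnection();
            QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            conn.start();

            // A QueueBrowser reads without consuming, so the store stays intact
            QueueBrowser browser = session.createBrowser(queue);
            Enumeration<?> messages = browser.getEnumeration();
            while (messages.hasMoreElements()) {
                Message msg = (Message) messages.nextElement();
                if (msg instanceof MapMessage) {
                    MapMessage map = (MapMessage) msg;
                    Enumeration<?> keys = map.getMapNames();
                    while (keys.hasMoreElements()) {
                        String key = (String) keys.nextElement();
                        System.out.println(key + " = " + map.getObject(key));
                    }
                }
            }
            browser.close();
            session.close();
            conn.close();
        }
    }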

The only way we and Oracle support were able to come up with was to create another WebLogic instance configured the same way, and let that instance pick up and process the messages.

Related

Reading huge file and writing in RDBMS

I have a huge text file which is continuously being appended to from a common place. I need to read it line by line from my Java application and update a SQL RDBMS in such a way that if the Java application crashes, it resumes from where it left off and not from the beginning.
It's a plain text file. Each row contains:
<Datatimestamp> <service name> <paymentType> <success/failure> <session ID>
Also, the data retrieved from the database should be available in real time, without any performance or availability issues in the web application.
Here is my approach:
Deploy the application on two boxes, each running a heartbeat that pings the other system to check service availability.
When you get a success response to the heartbeat, you also get the timestamp of the last successfully read line.
When the next heartbeat response fails, the application on the other system can take over, based on:
1. the failed response
2. the last successful timestamp
Also, since the need for data retrieval is very real-time and the data is huge, can I crawl the database and put the data into Solr or Elasticsearch for faster retrieval, instead of making database calls?
There are various ways to do this; what is the best way?
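For the resume-from-where-it-left-off requirement, one common approach is to persist the byte offset of the last processed row and seek to it on restart. A minimal sketch (Java 11+ for Files.readString/writeString; the offset file is my assumption, and in practice you might store the offset in the same RDBMS transaction as the inserted row):

    import java.io.RandomAccessFile;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class CheckpointedReader {
        private static final Path OFFSET_FILE = Paths.get("reader.offset"); // illustrative

        public static void main(String[] args) throws Exception {
            long offset = Files.exists(OFFSET_FILE)
                    ? Long.parseLong(Files.readString(OFFSET_FILE).trim())
                    : 0L;
            try (RandomAccessFile raf = new RandomAccessFile(args[0], "r")) {
                raf.seek(offset); // resume where the last run left off
                String line;
                while ((line = raf.readLine()) != null) {
                    processRow(line); // e.g. insert into the RDBMS
                    // checkpoint after each processed row
                    Files.writeString(OFFSET_FILE, Long.toString(raf.getFilePointer()));
                }
            }
        }

        private static void processRow(String line) {
            // parse <Datatimestamp> <service name> <paymentType> <success/failure> <session ID>
            // and write the row to the database (not shown)
        }
    }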
I would put a messaging system (for example RabbitMQ) between the text file and the DB-writing applications; in this case, the messaging system functions as a queue. One application constantly reads the file and inserts the rows as messages to the broker; on the other side, multiple "DB-writing applications" can read from the queue and write to the DB.
The advantage of the messaging system is its support for multiple clients reading from the queue. The messaging system takes care of synchronizing the clients, dealing with errors, dead letters, etc. The clients don't care about what payload was processed by other instances.
Regarding maintaining multiple instances of the "DB-writing applications": I would go for a ready-made cluster solution, perhaps a Docker cluster managed by Kubernetes.
Another viable alternative is a streaming platform, like Apache Kafka.
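A minimal sketch of the file-reading side, assuming the RabbitMQ Java client and a durable queue named "lines" (host and queue name are illustrative):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.MessageProperties;
    import java.io.BufferedReader;
    import java.io.FileReader;

    public class FileTailer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // broker host is an assumption
            try (Connection conn = factory.newConnection();
                 Channel channel = conn.createChannel();
                 BufferedReader reader = new BufferedReader(new FileReader(args[0]))) {
                channel.queueDeclare("lines", true, false, false, null); // durable queue
                while (true) {
                    String line = reader.readLine();
                    if (line == null) {          // reached EOF: wait for more data
                        Thread.sleep(1000);
                        continue;
                    }
                    // persistent delivery so messages survive a broker restart
                    channel.basicPublish("", "lines",
                            MessageProperties.PERSISTENT_TEXT_PLAIN,
                            line.getBytes("UTF-8"));
                }
            }
        }
    }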
You can use software like Filebeat to read the file and direct its output to RabbitMQ or Kafka. From there a Java program can subscribe to / consume the data and put it into an RDBMS.

what is best option for creating log message buffer

I am working on a web application which needs to be deployed to the cloud. There is a cloud service which can store log messages for applications securely. It is exposed via a REST API which can take a maximum of 25 log messages in JSON format per call. We are currently using log4j (open to any other framework) to log to a file. Now we need to transition our application from file-based logging to the cloud REST API.
I am assuming it would be expensive to make a REST API call for every log message, and that it would slow down the application.
In this context, I am considering writing a custom appender which writes to a buffer. The buffer can be in-memory, or a persistent buffer which is read and emptied periodically by a separate thread or process that sends messages to the cloud REST API in batches of 25.
Option 1: using an in-memory buffer
My custom appender would write messages to an in-memory list and keep filling it.
There would be a daemon thread which keeps removing 25 messages at a time from the buffer and writing them to the cloud via the REST API. The downside of this approach is that in the event of the application/server/node crashing, we lose critical log messages that could explain why the crash occurred. I am not sure if this is the right way of thinking.
Option 2: using a persistent buffer (database/message queue)
The appender can log messages to a database table temporarily, or post them to a message queue, which is then processed by a separate long-running job that picks messages up from the DB or queue and posts them to the cloud REST API.
Please guide me on which option looks best.
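For option 1, a minimal sketch of such a buffering appender (log4j 1.x style; postToCloud is a hypothetical wrapper around the cloud REST call):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import org.apache.log4j.AppenderSkeleton;
    import org.apache.log4j.spi.LoggingEvent;

    public class BufferingCloudAppender extends AppenderSkeleton {
        private final BlockingQueue<String> buffer = new LinkedBlockingQueue<>();

        public BufferingCloudAppender() {
            Thread drainer = new Thread(this::drainLoop, "log-drainer");
            drainer.setDaemon(true);
            drainer.start();
        }

        @Override
        protected void append(LoggingEvent event) {
            buffer.offer(getLayout().format(event)); // never blocks the caller
        }

        private void drainLoop() {
            List<String> batch = new ArrayList<>(25);
            while (true) {
                try {
                    // block for the first message, then grab up to 24 more
                    batch.add(buffer.take());
                    buffer.drainTo(batch, 24);
                    postToCloud(batch); // hypothetical: one REST call, max 25 msgs
                    batch.clear();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }

        private void postToCloud(List<String> batch) {
            // POST the batch as JSON to the cloud logging endpoint (not shown)
        }

        @Override
        public void close() {}

        @Override
        public boolean requiresLayout() { return true; }
    }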
There are a lot of built-in appenders in log4j: https://logging.apache.org/log4j/2.x/manual/appenders.html, and if you use a dedicated service in a cloud, it may provide a specific appender.
If it's available in your environment, maybe try a stack like ELK with the log4j RollingFile appender; with that technique you won't lose log entries.

JMS taking too long to process messages

An application has a JMS queue responsible for delivering audit logs. The application sends logs to a JMS queue, and this queue is consumed by an MDB.
However, the messages sent are big XML files that vary from 20 MB to 100 MB. The problem is that the JMS queue takes too long to consume the messages, leading to an OutOfMemoryError.
What should I do to solve this problem?
This answer may or may not help jguilhermemv; I just want to share an idea for those who read this post: a workaround for big messages.
The first thing is to try not to send big messages. That leaves two options (both require implementation changes, which can be done at the start, or later if system implementation changes are allowed):
Save the log in the DB and send just the log IDs in the JMS messages. (Saving logs in the DB is not recommended, as the size and time needed to save them will again become a problem at a later stage.)
Save the logs as files at a common location, store the file names in the DB, and share those file-name IDs via JMS. After consuming a message, the consumer can read the corresponding log file.
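This second option is essentially the Claim Check pattern from Enterprise Integration Patterns. A minimal sketch of the producer side, assuming a shared directory and a JMS Session and Queue obtained elsewhere (the path is illustrative):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.UUID;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class AuditLogSender {
        private final Session session;
        private final Queue auditQueue; // looked up via JNDI elsewhere
        private final Path sharedDir = Paths.get("/shared/audit-logs"); // assumption

        public AuditLogSender(Session session, Queue auditQueue) {
            this.session = session;
            this.auditQueue = auditQueue;
        }

        public void send(byte[] bigXml) throws Exception {
            // Write the large payload to shared storage...
            Path file = sharedDir.resolve(UUID.randomUUID() + ".xml");
            Files.write(file, bigXml);

            // ...and send only a small reference over JMS
            TextMessage msg = session.createTextMessage(file.getFileName().toString());
            MessageProducer producer = session.createProducer(auditQueue);
            producer.send(msg);
            producer.close();
        }
    }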

How can we save Java Message Queues for reference?

How can we keep track of every message that goes into our Java message queue? We need to save the messages for later reference. We already log them into an application log (log4j), but we need to query them later.
You can store them
in memory - in a collection or in an in-memory database
in a standalone database
You could create a database logging table for the messages, storing the message as-is in a BLOB column, the timestamp it was created / posted to the MQ, and a simple counter as the primary key. You can also add fields like message type etc. if you want to create statistical reports on messages sent.
Cleanup of the table can be done simply by deleting all messages older than the retention period, using the timestamp column.
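A minimal sketch of such an insert using plain JDBC (the table and column names are illustrative):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.Timestamp;

    public class MessageArchiver {
        // Hypothetical table: MESSAGE_LOG(ID auto-increment, CREATED, MSG_TYPE, PAYLOAD BLOB)
        private static final String INSERT_SQL =
                "INSERT INTO MESSAGE_LOG (CREATED, MSG_TYPE, PAYLOAD) VALUES (?, ?, ?)";

        public void archive(Connection conn, String msgType, byte[] payload) throws Exception {
            try (PreparedStatement ps = conn.prepareStatement(INSERT_SQL)) {
                ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
                ps.setString(2, msgType);
                ps.setBytes(3, payload); // message stored as-is in the BLOB column
                ps.executeUpdate();
            }
        }
    }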
I implemented such a solution in the past: we chose to store messages with all their characteristics in a database and developed a search, replay and cancel application on top of it. This is the Message Store pattern (see eaipatterns.com).
We also used this application for the Dead Letter Channel (see eaipatterns.com).
If you don't want to build a custom solution, have a look at the ReplayService for JMS from CodeStreet.
The best way to do this is to use whatever tracing facility your middleware provider offers. Alternatively, you could set up an intermediate listener whose only job is to log messages and forward them on to your existing application.
In most cases, you will find that the middleware provider already has the ability to do this for you with no changes or awareness by your application.
I would change the queue to a topic, keep the original consumer that processes the messages, and add another consumer that audits the messages to a database.
Some JMS providers cater for topic-to-queue bridge definitions; the consumers then receive from their own dedicated queues and don't have to read past messages that are left on the queue because other consumers are inactive.
Alternatively, you could write a log4j appender, which writes your logged messages to a database.

Getting multiple Java pop3 clients to work with GMail

I have written a nice program in Java that connects to a Gmail account and downloads attachments sent to it. Once an attachment has been downloaded, it is marked as read and is never downloaded again. This program has to run in multiple instances, with each instance downloading unique attachments, so that a single attachment is never downloaded twice. The problem is that at the moment, if the attachment is of a decent size, one instance is still downloading it when another instance connects and also starts to download it before it has been marked as read.
I have tried checking and setting various flags and checking whether the folder is open; nothing seems to work. Any solutions?
Update: Thank you for the quick answers, sadly IMAP is not an option due to other reasons.
Consider using IMAP instead - it is designed for client-server interaction.
From RFC1939 (Post Office Protocol - Version 3):
POP3 is not intended to provide extensive manipulation operations of mail on the server; normally, mail is downloaded and then deleted. A more advanced (and complex) protocol, IMAP4, is discussed in RFC1730.
I don't think POP3 is designed for multiple simultaneous accesses.
Ask yourself this: do I really need multiple processes accessing the same mailbox?
If you do, you'll have to find a way for these processes to communicate with each other.
Use a common database or server process to coordinate their actions.
IMAP does have more options, but I'm not sure you can "lock" a single mail to mark it as being processed.
As the others have mentioned, POP3 isn't really intended for this kind of scenario.
If you absolutely have to use POP3, I'd suggest downloading all the e-mail to an intermediate server which sorts the messages and makes them available for each of the other clients.
It sounds like you're just trying to distribute the processing of the e-mails. If that's the case, you can just have each client connect to your intermediate server to retrieve the next available message.
I'm not sure what your constraints are, but you may even want to consider receiving the attachments some other way besides e-mail. If people are uploading files, you could set up a web form that automatically sends each file to the next available instance of your application for processing.
If you need to stay with a POP3 connection, you could keep a local database of previously downloaded message IDs; new instances could then check against that before downloading again, as in the sketch below. The best solution is just to use IMAP, though, as IMAP is able to set the read/unread flags before downloading.
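A minimal sketch of the message-ID approach with JavaMail's POP3 provider (credentials are placeholders; the seen-UID set would in practice be backed by a database shared between instances):

    import java.util.Set;
    import javax.mail.Folder;
    import javax.mail.Message;
    import javax.mail.Session;
    import javax.mail.Store;
    import com.sun.mail.pop3.POP3Folder;

    public class AttachmentPoller {
        public void poll(Session session, Set<String> seenUids) throws Exception {
            Store store = session.getStore("pop3s");
            store.connect("pop.gmail.com", "user@gmail.com", "password"); // placeholders
            POP3Folder inbox = (POP3Folder) store.getFolder("INBOX");
            inbox.open(Folder.READ_ONLY);
            for (Message msg : inbox.getMessages()) {
                String uid = inbox.getUID(msg); // stable POP3 UIDL identifier
                // Claim the UID first so other instances skip this message;
                // a real implementation would do this atomically in the shared DB
                if (seenUids.add(uid)) {
                    // download and process attachments here (not shown)
                }
            }
            inbox.close(false);
            store.close();
        }
    }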
You could mark the mail as read first, and only then start the download.
