I am connecting from server 1 to server 2 using IBM MQ Series (a single queue manager group) through a single-subnet NAT IP address. Server 2's destination is a single IP address endpoint. The problem is: whenever I connect different custodian (client) data uploads and a server restart happens for one custodian point, the other custodian clients reset and can no longer connect to the server 2 IP address.
The solution I have attempted: instead of IBM MQ Series and regular C# code, I am using a combination of microservices with Java 8 parallel processing and Apache Kafka to run parallel threads that connect through the single NAT IP subnet, so that none of the custodian processes get reset. Is this solution correct? The destination server only gives a single IP address endpoint for n custodians to connect to. Is there something I can do in the NAT gateway to send data in parallel to the destination IP address? Please help me with my solution; I will be happy to provide any additional info.
Your question is not very clear, and then you go off on a tangent about your solution, which again is not clear and appears to be creating an overly complex system.
So, let me guess what your setup is:
Server 1 has the IBM MQ client installed and runs a C# .NET application. Is this correct? If so, what version is the IBM MQ client software (e.g. 9.1.0.9)?
Server 2 is running an IBM MQ queue manager. Is this correct? If so, what version is the IBM MQ server software (e.g. 9.2.0.4)? Use the dspmqver command to get this information.
If the above is correct then it is pretty much a basic IBM MQ environment.
So, when your application has this issue, MQ would have thrown an MQException. What were the Completion Code and Reason Code? The reason code would probably have been one of the following:
2009 [MQRC_CONNECTION_BROKEN]
2059 [MQRC_Q_MGR_NOT_AVAILABLE]
2538 [MQRC_HOST_NOT_AVAILABLE]
What did your application do when it received the MQException? Crash or die or loop forever?
Did you add reconnection logic to your code to reconnect to the queue manager after the failure?
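If not, a minimal reconnect loop is usually enough. Below is a rough sketch using the IBM MQ classes for Java (the same pattern applies to the .NET classes); the host name, port, channel and queue manager name are placeholders you would replace with your own values. IBM MQ also has a built-in automatic client reconnection feature that may be worth looking at.

```java
import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQException;
import com.ibm.mq.MQQueueManager;

public class ReconnectingClient {

    public static void main(String[] args) throws InterruptedException {
        // All connection details below are placeholders.
        MQEnvironment.hostname = "server2.example.com";  // NAT address of server 2
        MQEnvironment.port     = 1414;
        MQEnvironment.channel  = "APP.SVRCONN";

        MQQueueManager qMgr = null;
        while (qMgr == null) {
            try {
                qMgr = new MQQueueManager("QM1");
            } catch (MQException e) {
                // 2009 = connection broken, 2059 = queue manager not available,
                // 2538 = host not available: all are worth retrying after a delay
                // instead of letting the application die.
                System.err.println("Connect failed, CC=" + e.completionCode
                        + " RC=" + e.reasonCode + "; retrying in 10 seconds");
                Thread.sleep(10_000);
            }
        }
        // ... open queues and put/get messages here ...
    }
}
```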
I'm new-ish to networking, and I'm swimming (drowning) in semantics.
I have a VM which runs a Java application. Ideally, it would be fed inputs from the host through a RabbitMQ queue. The Java application would then place the results on another RabbitMQ queue on a different port, where they will be used by the host application. After researching it for a bit, it seems like RabbitMQ only exists in the localhost space with listeners on different ports. Am I correct about this?
Do I need two RabbitMQ servers running in tandem (one on the VM and one on the host), each listening on the same port? Or do I just need one RabbitMQ server running, with both applications pointed at the same IP address/port?
I have also read that you cannot connect as 'guest/guest' unless it is on localhost, which I understand, but how is RabbitMQ supposed to be configured so that it is reachable from anything besides localhost?
I've been researching for several hours, but the documentation does not point to a direct answer/how-to guide. Perhaps it is my lack of network experience. If anyone could elaborate on these questions or point me to some articles/helpful guides, I would be much obliged.
P.S. -- I don't even know what code to display to give context. Let me know and I'll edit the code into the post.
RabbitMQ listens on TCP port 5672 on all network interfaces out of the box. This includes the "loopback" interface (to allow fast connections to self) and the interfaces visible to other remote hosts (including VMs).
For your use case, you probably need a single RabbitMQ instance for both directions. The application on the host will publish messages to one queue and the Java application in the VM will consume messages from that queue and push the result to a second queue. This second queue can be consumed by the application on the host.
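As a rough sketch using the RabbitMQ Java client (queue names, host and credentials below are placeholders), the VM side could consume from one queue and publish to the other, over the same broker and the same port 5672:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class VmWorker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("192.168.56.1");  // the host machine's IP as seen from the VM (placeholder)
        factory.setUsername("appuser");   // a non-guest user, see below
        factory.setPassword("secret");

        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();
        channel.queueDeclare("inputs", true, false, false, null);
        channel.queueDeclare("results", true, false, false, null);

        // Consume from "inputs" and publish the processed result to "results".
        DeliverCallback onDeliver = (consumerTag, delivery) -> {
            byte[] result = process(delivery.getBody());
            channel.basicPublish("", "results", null, result);
        };
        channel.basicConsume("inputs", true, onDeliver, consumerTag -> { });
    }

    private static byte[] process(byte[] input) {
        return input; // placeholder for the real work
    }
}
```

The host-side application does the mirror image: it publishes to "inputs" and consumes from "results", using the same broker address and port.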
For the credentials, you need to create a new user with the appropriate rights. This is documented in the access control article. You can create the user from the management web UI (after you enable the management plugin) or with the rabbitmqctl command-line tool.
The last part is networking between the host and the VM. It really depends on the technology you use. It may work out-of-the-box or you may have to configure how VMs are connected to the network. Refer to the documentation of your hypervisor.
I am actually coding an application that allows users to post things.
There is just one server and multiple clients, but since it is not certain which IP the server will run on, I want the server to be discovered automatically. The port is fixed: 55001.
Going through all IPs from 1 to 255 takes ages; I have already tried that. Does anyone have a clean and fast solution for this?
a) Use broadcast to discover the server (only IPv4 has this; see the sketch after this list)
b) Use multicast to discover the server (Same code-base for IPv4 and IPv6)
c) Register the server into a DNS entry
d) Register the server, into a central server
e) Let the user type in the server IP/hostname
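For option a), here is a rough sketch in Java of both sides of a broadcast-based discovery. The discovery port 8888 and the probe/reply strings are arbitrary choices; the client broadcasts a probe, the server answers, and the client then connects to the reply's source address on your fixed port 55001.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class ServerDiscovery {

    private static final int DISCOVERY_PORT = 8888;          // arbitrary discovery port
    private static final byte[] PROBE = "WHO_IS_SERVER".getBytes();

    /** Client side: broadcast a probe; the first reply tells us the server's address. */
    public static InetAddress discover() throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);
            socket.setSoTimeout(3000);
            socket.send(new DatagramPacket(PROBE, PROBE.length,
                    InetAddress.getByName("255.255.255.255"), DISCOVERY_PORT));

            byte[] buf = new byte[64];
            DatagramPacket reply = new DatagramPacket(buf, buf.length);
            socket.receive(reply);        // throws SocketTimeoutException if nobody answers
            return reply.getAddress();    // then connect to this address on port 55001
        }
    }

    /** Server side: listen for probes and answer so clients learn our address. */
    public static void answerProbes() throws Exception {
        try (DatagramSocket socket = new DatagramSocket(DISCOVERY_PORT)) {
            byte[] buf = new byte[64];
            while (true) {
                DatagramPacket probe = new DatagramPacket(buf, buf.length);
                socket.receive(probe);
                byte[] reply = "I_AM_SERVER".getBytes();
                socket.send(new DatagramPacket(reply, reply.length,
                        probe.getAddress(), probe.getPort()));
            }
        }
    }
}
```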
So I wrote a program to connect to a clustered WebLogic server behind a VIP, with 4 servers and 4 queues that are all connected (I think they call them distributed...). When I run the program from my local machine and just get JMS connections, look for messages, and disconnect, it works great. And by that I mean it:
iteration #1
connects to server 1.
looks for a message
disconnects
iteration #2
connects to server 2.
looks for a message
disconnects
and so on.
When I run it on the server, though, the application picks a server and sticks to it. It will never pick a new server, so the queues on the other servers never get worked, like with a "sticky session" setup... My OS is Win7 and the server OS is Win2008r2; the JDK is identical on both machines. How is this configured client side? The server implementation uses "Apache Procrun" to run it as a service, but I haven't seen too many issues with that part...
Is there a session cookie getting written out somewhere?
Any ideas?
Thanks!
Try disabling 'Server Affinity' on the JMS connection factory. If you are using the Default Connection Factory, define your own and disable Server Affinity.
EDIT:
Server Affinity is a server-side setting, but it controls how messages are routed to consumers after a WebLogic JMS server receives the message. The other option is to use round-robin DNS and connect to a single hostname that resolves to a different IP (managed server) each time, so that each connection goes to a different server.
I'm pretty sure this is the setting you're looking for :)
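For reference, once you have defined your own connection factory with Server Affinity disabled, the client simply looks it up by its JNDI name; in this sketch the JNDI name, host names and port are placeholders:

```java
import java.util.Hashtable;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

public class CfLookup {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        // List all managed servers, or use a DNS name that round-robins across them.
        env.put(Context.PROVIDER_URL, "t3://server1:7001,server2:7001,server3:7001,server4:7001");

        Context ctx = new InitialContext(env);
        // JNDI name of your own connection factory with Server Affinity disabled (placeholder).
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/NoAffinityCF");
        // ... create connection, session and consumer as usual ...
    }
}
```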
I have a client/server application that communicates through JNDI/RMI/IIOP using, on the client side, some Glassfish client code (NOT packaged as a Glassfish client) and, on the server side, a Glassfish instance.
I have some Glassfish multimode scripts that I use to make sure the domains I create on any machines are totally identical and correctly configured.
Using that script on the local network, I have already made sure I could access a remote Glassfish server instance from client code on my machine (that was quite a reasonable guess; however, I tend to test all the things I'm not totally sure of).
The next step is to have that client/server application working over (I should instead say "through") the internet: with my client code in my company LAN (in other words, on my machine) and my server code on an Amazon VM running my Glassfish server. For some reason, the remote Glassfish is running on a Windows VM.
Obviously (as I'm asking this question), you can safely guess that the through-internet test is NOT working. And you're right.
So, to gather more clues, I started SmartSniffer both on my machine and on the server.
On my machine, I can only see one TCP packet going to that server instance (and nothing coming back).
On the server instance, I can see one packet entering (the client query) and one packet exiting (the server answer). That server answer looks like this:
[4/4/2012 11:47:13 AM:917]
GIOP.......(................NameService....._is_a...................NEO................ª.......(IDL:omg.org/SendingContext/CodeBase:1.0............n........172.27.63.145.Ô2....¯«Ë........e...........................
...................
... ...........&...............(IDL:omg.org/CosNaming/NamingContext:1.0.
That 172.27.63.145 address is my IP in local network.
[4/4/2012 11:47:13 AM:917]
GIOP.......2............NEO................0.......(IDL:omg.org/SendingContext/CodeBase:1.0............ô........46.137.114.###.'5....¯«Ë........d...........................
...................
... ...........&...........!...|...............$...
...f............10.241.42.###.'6.#........g..............g........default...................g...............+IDL:omg.org/CosNaming/NamingContextExt:1.0.............¢........10.241.42.208.'5...M¯«Ë....
...d... S1AS-ORB............RootPOA....
TNameService............................... ...................
... ...........&......
That 46.137.114.### is the external IP of my Amazon VM, and 10.241.42.### is its internal IP inside Amazon's magical virtual network.
So it seems the server is answering, no?
But that answer never finds its way to my machine in my network.
So... how can I check where it gets lost? It seems like the packet sniffer has done its job, but what can I do now?
NOTE This question is a clarification of "How to Connect a glassfish client to glassfish server over NATs?"
Perhaps a stupid question, but is your Amazon EC2 instance configured with all the ports required by your communication protocol open? You can see the configured open ports in the security group your instance is assigned to in the AWS console, under EC2 -> Security Groups.
First of all, I am not very familiar with Tibco; please keep that in mind ;).
I have a task to write an application which reads/writes to a JMS queue (not a big deal). The problem is, the customer uses Tibco and allowed me to connect to their server to run some tests. Unfortunately, I am only allowed to connect via NATted IPs, and as soon as I try to connect to a QueueConnectionFactory, I receive an error because Tibco itself tries to connect to the "private" IP.
The interesting thing is that receiving the Queue, QueueConnectionFactory, ... objects from the context works fine - but when I do a toString() I see that the connection factory received has the "private" IP configured.
Example: I set this URL as the provider URL -> tibjmsnaming://213.133.111.182:7222
Receiving the QueueConnectionFactory object works fine; doing a toString() returns "QueueConnectionFactory[URL=tcp://145.12.51.4:7222;clientID=null]".
So as soon as I call "createQueueConnectionFactory()" I receive this exception:
javax.jms.JMSException: Failed to connect to the server at tcp://145.12.51.4:7222
Is there a way to override this behavior and tell the Tibco server to use the configured provider URL instead?
I know this is ancient, but if you - like me - come from Google, here's the correct answer:
the URL above uses JNDI to look up the actual connection; the client does not connect through the NATted IP (213.133.111.182) directly, it only contacts that address to look up the "real" IP (145.12.51.4) and then tries to connect to it, which doesn't work due to the NATting.
Solution: either change the registered IP in the JNDI store or connect directly, circumventing JNDI.
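If you go the "connect directly" route, a minimal sketch using the EMS client classes (from the tibjms client jar; the class name is quoted from memory, and the URL, credentials and queue name are placeholders) could look like this:

```java
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;

import com.tibco.tibjms.TibjmsQueueConnectionFactory;

public class DirectEmsClient {
    public static void main(String[] args) throws Exception {
        // Point straight at the NATted address, skipping the JNDI lookup
        // that would otherwise hand back the private 145.12.51.4 address.
        QueueConnectionFactory cf =
                new TibjmsQueueConnectionFactory("tcp://213.133.111.182:7222");

        QueueConnection connection = cf.createQueueConnection("user", "password");
        QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("my.test.queue");  // placeholder queue name
        connection.start();
        // ... create a sender/receiver on the queue as usual ...
    }
}
```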
1) Check from the client machine whether you are able to ping the EMS server IP.
2) Check whether you can connect to the EMS IP:port via telnet (or a plain socket test; see the sketch below).
3) If both succeed, then your EMS client should be able to connect to the EMS server.
4) If it still does not connect, verify that the EMS client library (DLL) is correct and that the client can at least connect when you run the EMS client and server on the same machine.
5) If point 4 succeeds, review the client firewall and server firewall policies with your network admin.
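If telnet is not available on the client machine, a plain socket connect does the same reachability check as step 2; the host and port below are placeholders:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) {
        String host = "213.133.111.182";  // EMS server address (placeholder)
        int port = 7222;                  // EMS listen port (placeholder)

        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 5000);  // 5 second timeout
            System.out.println("TCP connection to " + host + ":" + port + " succeeded");
        } catch (Exception e) {
            System.out.println("Cannot reach " + host + ":" + port + " -> " + e);
        }
    }
}
```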
-hB
The only way you're going to be able to directly send ad hoc messages to a private port is if the firewall / router that is doing the NAT is configured to pass through messages on that port to the correct destination. Otherwise they'll go nowhere.
I think you would have to investigate whether JMS or Tibco has a mode that allows the client to maintain a connection to the server, or to poll the server for messages, because it will not be able to receive ad hoc messages in the other direction.
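In plain JMS terms that usually means the client behind the NAT opens the connection itself and registers an asynchronous listener, so all traffic travels over that single client-initiated TCP connection. A rough sketch (how the ConnectionFactory is obtained is left out, and the queue name is a placeholder):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class ListeningClient {

    // How the ConnectionFactory is obtained (JNDI, direct instantiation, ...)
    // depends on the provider; it is assumed to already be available here.
    public static void listen(ConnectionFactory cf) throws Exception {
        Connection connection = cf.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("replies");  // placeholder name

        MessageConsumer consumer = session.createConsumer(queue);
        // Messages are pushed to us over the connection *we* opened,
        // so the NAT/firewall never has to accept an inbound connection.
        consumer.setMessageListener(message -> System.out.println("Got: " + message));

        connection.start();
    }
}
```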
In extreme cases (e.g. corporate firewalls and proxies where all ports are off-limits), the client might not even be able to connect to your server on some random port. It might have to open a connection via an HTTP/1.1 pipeline to receive any messages from your server.