I have a problem setting up a Camel Netty consumer on port 514 in order to catch syslog messages.
My route:
from("netty:udp://127.0.0.1:514?sync=false")
.process(new Processor(){
public void process(Exchange exchange) throws Exception {
processor.processAntyMalwareLog(exchange);
}
}).log("I've got message");
The application starts:
Route: route3 started and consuming from: Endpoint[udp://127.0.0.1:514]
and port 514 is opened, but it is not listening:
>netstat -lnp | grep 514
udp6 0 0 127.0.0.1:514 :::* 21513/java
With tcpdump -i eth1 -nn -A -s 0 port 514 and udp I can see that the messages are being sent and received properly.
Can anyone point out where I am making a mistake?
You need to use client mode, e.g. set clientMode=true. See more details in the netty docs:
http://camel.apache.org/netty.html
And upgrade and use Netty 4 if possible:
http://camel.apache.org/netty4.html
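For illustration, a minimal sketch of the original route with that option applied, using the netty4 component (the processor call is taken from the question; see the linked docs for the exact option semantics):

from("netty4:udp://127.0.0.1:514?sync=false&clientMode=true")
    .process(new Processor() {
        public void process(Exchange exchange) throws Exception {
            processor.processAntyMalwareLog(exchange);
        }
    })
    .log("I've got message");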
CentOS 7, Tomcat 8.5.
a.war and rest.war are deployed in the same Tomcat.
a.war uses the following code to call rest.war:
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.message.BasicHeader;
import org.apache.http.protocol.HTTP;

DefaultHttpClient httpClient = new DefaultHttpClient();
HttpPost httpPost = new HttpPost(url);
httpPost.addHeader(HTTP.CONTENT_TYPE, "application/json");
StringEntity se = new StringEntity(json.toString());
se.setContentType("text/json");
se.setContentEncoding(new BasicHeader(HTTP.CONTENT_TYPE, "application/json"));
httpPost.setEntity(se);
HttpResponse response = httpClient.execute(httpPost);
However, if the url given to HttpPost(url) is <public ip>:80, then httpClient.execute(httpPost) throws "connection refused",
while if the url is localhost:80 or 127.0.0.1:80, httpClient.execute(httpPost) succeeds.
Why, and how can I solve this problem?
Note: if I access a.war from a browser on my own computer using the public IP, e.g. http://<public ip>/a, all operations succeed.
My Tomcat connector is:
<Connector
port="80"
protocol="HTTP/1.1"
connectionTimeout="60000"
keepAliveTimeout="15000"
maxKeepAliveRequests="-1"
maxThreads="1000"
minSpareThreads="200"
maxSpareThreads="300"
minProcessors="100"
maxProcessors="900"
acceptCount="1000"
enableLookups="false"
executor="tomcatThreadPool"
maxPostSize="-1"
compression="on"
compressionMinSize="1024"
redirectPort="8443" />
My server has no domain name, only a public IP; its /etc/hosts is:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Updated with the output of some commands run on the server:
ss -nltp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:111 *:* users:(("rpcbind",pid=643,fd=8))
LISTEN 0 128 *:80 *:* users:(("java",pid=31986,fd=53))
LISTEN 0 128 *:22 *:* users:(("sshd",pid=961,fd=3))
LISTEN 0 1 127.0.0.1:8005 *:* users:(("java",pid=31986,fd=68))
LISTEN 0 128 :::111 :::* users:(("rpcbind",pid=643,fd=11))
LISTEN 0 128 :::22 :::* users:(("sshd",pid=961,fd=4))
LISTEN 0 80 :::3306 :::* users:(("mysqld",pid=1160,fd=19))
netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 643/rpcbind
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 31986/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 961/sshd
tcp 0 0 127.0.0.1:8005 0.0.0.0:* LISTEN 31986/java
tcp6 0 0 :::111 :::* LISTEN 643/rpcbind
tcp6 0 0 :::22 :::* LISTEN 961/sshd
tcp6 0 0 :::3306 :::* LISTEN 1160/mysqld
ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 1396428 bytes 179342662 (171.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1396428 bytes 179342662 (171.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
p2p1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.25 netmask 255.255.255.0 broadcast 192.168.1.255
ether f8:bc:12:a3:4f:b7 txqueuelen 1000 (Ethernet)
RX packets 5352432 bytes 3009606926 (2.8 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2839034 bytes 559838396 (533.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: p2p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether f8:bc:12:a3:4f:b7 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.25/24 brd 192.168.1.255 scope global noprefixroute dynamic p2p1
valid_lft 54621sec preferred_lft 54621sec
route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 p2p1
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 p2p1
ip route
default via 192.168.1.1 dev p2p1 proto dhcp metric 100
192.168.1.0/24 dev p2p1 proto kernel scope link src 192.168.1.25 metric 100
iptables -L -n -v --line-numbers
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
You probably have configured one of these:
A firewall on the public IP's ports, so that nothing goes through.
Tomcat may bind to a specific IP, e.g. localhost (see the Connector elements in Tomcat's server.xml).
Apache httpd, nginx or another reverse proxy might handle various virtual host names, and they might also handle localhost differently from the public IP.
Port forwarding - if you only forward localhost:80 to localhost:8080 (Tomcat's default port), you might not have anything on publicip:80 that forwards that traffic as well.
Edit after your comment:
Incoming traffic seems to be fine, but outgoing traffic does have those problems. Adding from @stringy05's comment: check whether the IP in question is routable from your server. You're connecting to that IP from the server, so use another means to create an outgoing connection, e.g. curl.
Explanation for #1 & #3:
If you connect to an external http server, it will handle the request differently based on the hostname used. It might well be that the IP "hostname" is blocked, either by a high level firewall, or just handled differently than the URL by the webserver itself. In most cases you can check this by connecting to the webserver in question from any other system, e.g. your own browser.
If Tomcat is listening on (bound to) your public IP address it should work, but maybe your public IP address belongs to some other device, like a SOHO router; then your problem is similar to this:
https://superuser.com/questions/208710/public-ip-address-answered-by-router-not-internal-web-server-with-port-forwardi
Without a DNS name you cannot simply add a line to /etc/hosts, but you can add the public IP address to one of your network interface cards (NICs), like lo (loopback), eth0, etc., as described in one of these articles:
https://www.garron.me/en/linux/add-secondary-ip-linux.html
https://www.thegeekdiary.com/centos-rhel-6-how-to-addremove-additional-ip-addresses-to-a-network-interface/
E.g. with public IP address 1.2.3.4 you would need the following (which will only be effective until the next reboot, and in the worst case might interfere with your ability to connect to the server, e.g. via SSH!):
sudo ip addr add 1.2.3.4/32 dev lo
It may be useful to have the output of these commands to better understand your setup; feel free to share it in your question (with a consistently anonymized public IP address):
Either one of these (ss = socket statistics, a newer replacement for good old netstat):
ss -nltp
netstat -nltp
And one of these:
ifconfig
ip addr show
And last but not least either one of these:
route
ip route
I don't expect that we need to know your firewall config, but if you use it, it may be interesting to keep an eye on it while you are at it:
iptables -L -n -v --line-numbers
Try putting your public domain names into the local /etc/hosts file of your server like this:
127.0.0.1 localhost YOURPUBLIC.DOMAIN.NAME
This way your Java code does not need to use the external IP address but instead connects directly to Tomcat.
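For illustration, assuming that mapping and a hypothetical REST path, the client code from the question would then target the name instead of the raw IP:

// Hypothetical: the host must match the name added to /etc/hosts above,
// and the path depends on how rest.war actually exposes its endpoints.
HttpPost httpPost = new HttpPost("http://YOURPUBLIC.DOMAIN.NAME/rest/someEndpoint");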
Good luck!
I think the curl timeout explains it - you have a firewall rule somewhere that is stopping the server from accessing the public IP address.
If there's no reason the service can't be accessed using localhost or the local hostname, then do that; but if you need to call the service via a public IP, it's a matter of working out why the request from the server times out.
Some usual suspects:
The server might not actually have internet access - can you curl https://www.google.com?
A forward proxy might be required - a sysadmin will know this sort of thing.
There might be IP whitelisting on some infra around your server - think AWS security groups, load balancer IP whitelists, that sort of thing. To fix that you need to know the public IP of your server (curl https://canihazip.com/s) and get it added to the whitelist.
I spent some time looking at the ports used by the JVMs on the krt boxes, and I see that each JVM opens 10 ports.
Five are defined on the command line, for mgmt, http, debug, jmx and ajp. Of the other five I can account for one for ActiveMQ and two for JDBC, but two are unknown to me: one connects back to the server, and the other does not show what it is listening to.
The one option I read about online is to increase the range of ephemeral ports (our range starts at 32k; we could go down to 16k), but I am not sure how we can dictate the port numbers for the five ports that are not defined today.
Some commands to describe the situation.
[krtdev7@surya:/env/krtdev7/bin]$ krtport KRTDataHistory-1
PORT ASSIGNMENTS:
=================
mgmt/shutdown=17091
http=17291
ajp=17491
jmx=17691
debug=17891
[krtdev7@surya:/env/krtdev7/bin]$ netstat -ap|grep 16831
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 0.0.0.0:17291 0.0.0.0:* LISTEN 16831/java
tcp 0 0 0.0.0.0:17491 0.0.0.0:* LISTEN 16831/java
tcp 0 0 0.0.0.0:36691 0.0.0.0:* LISTEN 16831/java
tcp 0 0 0.0.0.0:40596 0.0.0.0:* LISTEN 16831/java
tcp 0 0 0.0.0.0:17691 0.0.0.0:* LISTEN 16831/java
tcp 0 0 localhost:17091 0.0.0.0:* LISTEN 16831/java
tcp 0 0 0.0.0.0:17891 0.0.0.0:* LISTEN 16831/java
tcp 0 0 surya.internal.su:51631 sky.internal.s:ncube-lm ESTABLISHED 16831/java
tcp 0 0 surya.internal.su:40938 agni.internal.sun:61616 ESTABLISHED 16831/java
tcp 0 0 surya.internal.su:51630 sky.internal.s:ncube-lm ESTABLISHED 16831/java
unix 2 [ ] STREAM CONNECTED 16386441 16831/java
Now we can see that the 5 extra ports are assigned. Could anybody let me know how to control these 5 extra port assignments, or rather how to make the JVM choose these 5 extra ports from a given range?
I developed a client/server application that receives an initial connection on TCP port 10000; after negotiating, the server creates a game room by binding a UDP socket to another port (like 10001), and a client that connects to this room should connect to that port using UDP.
This is the code I run to create each game room:
...
EventLoopGroup udpBossGroup = new NioEventLoopGroup(1);
Bootstrap bUdp = new Bootstrap();
bUdp.group(udpBossGroup);
bUdp.handler(new LoggingHandler(LogLevel.INFO));
bUdp.handler(new UDPInitializer());
bUdp.channel(NioDatagramChannel.class);
bUdp.bind(udpPortCounter).sync();
...
I tried checking with netstat, but it shows the same process ID, maybe it's the parent process ID:
netstat -lanp
udp6 0 0 :::10024 :::* 26568/java
udp6 0 0 :::10025 :::* 26568/java
udp6 0 0 :::10026 :::* 26568/java
ps shows me the same PID but different LWPs, so I believe they are using different threads:
ps -eLF | grep -i java
UID PID PPID LWP C NLWP SZ RSS PSR STIME TTY TIME CMD
root 26568 4088 26568 0 26 620767 66144 0 10:16 pts/2 00:00:00 java -jar gameserver.jar
root 26568 4088 26569 0 26 620767 66144 0 10:16 pts/2 00:00:00 java -jar gameserver.jar
root 26568 4088 26570 0 26 620767 66144 1 10:16 pts/2 00:00:00 java -jar gameserver.jar
root 26568 4088 26571 0 26 620767 66144 0 10:16 pts/2 00:00:00 java -jar gameserver.jar
My questions are:
Is this mode really multithreaded on the UDP sockets (does every socket run on a different thread)?
How can I make sure it's using a different thread on every UDP socket?
Print the thread ID with Thread.currentThread().getId() to confirm that the server is multithreaded, while a multithreaded client (or multiple clients) sends data continuously.
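For example, a minimal handler sketch (a hypothetical handler, not the question's UDPInitializer) that prints the serving thread for every datagram could look like this:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.DatagramPacket;

// Prints which event-loop thread handles each packet arriving on each bound UDP port.
public class ThreadLoggingUdpHandler extends SimpleChannelInboundHandler<DatagramPacket> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, DatagramPacket packet) {
        System.out.println("port " + ctx.channel().localAddress()
                + " handled by thread " + Thread.currentThread().getId());
    }
}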
I see that you used only one NioEventLoopGroup(1), which means only one thread does the work of connecting, receiving data, etc. If only one thread ID is printed in step 1, try this:
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup();
b.group(bossGroup, workerGroup)
Netty processes tasks using a thread pool (with NioDatagramChannel), so threads are reused to process data from different sockets. As for "How can I make sure it's using a different thread on every UDP socket?": I don't suggest doing this, because it will cost a lot of threads when many clients connect to your server; you should keep your data thread-safe in another way.
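As a purely illustrative sketch of keeping shared state thread-safe no matter which event-loop thread handles a packet (the room and registry types below are assumptions, not taken from the question):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical per-room state; the counter is just an example of shared mutable data.
class GameRoom {
    final AtomicInteger packetsSeen = new AtomicInteger();
}

// Hypothetical registry of rooms keyed by UDP port, safe to call from any Netty thread.
class GameRoomRegistry {
    private final ConcurrentMap<Integer, GameRoom> rooms = new ConcurrentHashMap<>();

    GameRoom roomFor(int udpPort) {
        return rooms.computeIfAbsent(udpPort, p -> new GameRoom());
    }
}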
How can I make a GemFire 8.2.0 P2P cluster of GemFire servers using only statically defined server list in the cache.xml configuration file?
I cannot use multicast. I do not want to use a separate locator process.
My cache.xml for server nodes
<!DOCTYPE cache PUBLIC
"-//GemStone Systems, Inc.//GemFire Declarative Caching 8.0//EN"
"http://www.gemstone.com/dtd/cache8_0.dtd">
<cache>
<cache-server port="40404" />
<pool name="serverPool">
<server host="10.0.0.192" port="40404" />
<server host="10.0.0.193" port="40404" />
</pool>
</cache>
I have read in the documentation that I can have a static list of servers in the pool. On the client side this style of configuration works, and my clients connect to the listed servers,
but GemFire server / peer-to-peer clustering using only static cluster configuration is not working for me.
I am now using
serverCache = new CacheFactory().set("cache-xml-file", "server-cache.xml").set("mcast-port", "0")
.set("start-locator","localhost[13489]").set("locators", "localhost[13489]").create();
In the logs of this JVM I see:
```
[info 2016/02/08 15:47:34.922 UTC tid=0x1] Starting peer location for Distribution Locator on localhost/127.0.0.1[13489]
[info 2016/02/08 15:47:34.925 UTC tid=0x1] Starting Distribution Locator on localhost/127.0.0.1[13489]
[info 2016/02/08 15:47:48.093 UTC tid=0x1] Starting server location for Distribution Locator on localhost/127.0.0.1[13489]
```
On a 2nd box I use the following, and get this exception:
serverCache = new CacheFactory().set("cache-xml-file", "server-cache.xml").set("mcast-port", "0").set("locators", "IP-of-1stbox[13489]").create();
com.gemstone.gemfire.GemFireConfigException: Unable to contact a Locator service. Operation either timed out or Locator does not exist. Configured list of locators is "[ip-of-1stbox(null)<v0>:13489]".
at com.gemstone.org.jgroups.protocols.TCPGOSSIP.sendGetMembersRequest(TCPGOSSIP.java:222)
at com.gemstone.org.jgroups.protocols.PingSender.run(PingSender.java:85)
at java.lang.Thread.run(Thread.java:745)
I have port 13489 open. I can see:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN -
tcp 0 0 ::ffff:127.0.0.1:13489 :::* LISTEN 5137/java
tcp 0 0 :::40404 :::* LISTEN 5137/java
tcp 0 0 :::22 :::* LISTEN -
tcp 0 0 ::ffff:10.0.0.193:21145 :::* LISTEN 5137/java
tcp 0 0 ::ffff:10.0.0.193:65148 :::* LISTEN 5137/java
udp 0 0 0.0.0.0:68 0.0.0.0:* -
udp 0 0 10.0.0.193:123 0.0.0.0:* -
udp 0 0 127.0.0.1:123 0.0.0.0:* -
udp 0 0 0.0.0.0:123 0.0.0.0:* -
udp 0 0 ::ffff:10.0.0.193:2300 :::* 5137/java
Port 13489 is in use on the first box.
When I did get them connected, I found this:
[warn 2016/02/08 16:38:12.688 UTC <locator request thread[1]> tid=0x20] Expected one of these: [class com.gemstone.gemfire.cache.client.internal.locator.LocatorListRequest, class com.gemstone.gemfire.management.internal.JmxManagerLocatorRequest, class com.gemstone.gemfire.cache.client.internal.locator.ClientReplacementRequest, class com.gemstone.gemfire.cache.client.internal.locator.QueueConnectionRequest, class com.gemstone.org.jgroups.stack.GossipData, class com.gemstone.gemfire.cache.client.internal.locator.ClientConnectionRequest, class com.gemstone.gemfire.cache.client.internal.locator.LocatorStatusRequest, class com.gemstone.gemfire.cache.client.internal.locator.GetAllServersRequest] but received ConfigurationRequest for groups :
cluster[cluster]
There is a mix-up in the cache.xml. You will need two cache.xml files, one for the server and one for the client. In the server cache.xml you define the port on which the server will listen for client communication, define your regions, etc. Something like the following:
<?xml version="1.0" encoding="UTF-8"?>
<cache
xmlns="http://schema.pivotal.io/gemfire/cache"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://schema.pivotal.io/gemfire/cache http://schema.pivotal.io/gemfire/cache/cache-8.1.xsd"
version="8.1">
<cache-server port="40404" />
<region name="MyRegion" refid="PARTITION" />
</cache>
To start an embedded locator and point the server to other running servers in the system, you can do
import com.gemstone.gemfire.cache.Cache;
import com.gemstone.gemfire.cache.CacheFactory;

CacheFactory cf = new CacheFactory();
cf.set("cache-xml-file", "server-cache1.xml");
cf.set("mcast-port", "0");
cf.set("start-locator", "12345");
cf.set("locators", "localhost[12345],localhost[6789]");
Cache cache = cf.create(); // create the peer cache once the properties are set
In a second process, use the exact same locators property, and use 6789 as the start-locator port.
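For instance, a sketch of that second process under the same assumptions (the server-cache2.xml file name is hypothetical):

CacheFactory cf2 = new CacheFactory();
cf2.set("cache-xml-file", "server-cache2.xml"); // hypothetical cache.xml for the second server
cf2.set("mcast-port", "0");
cf2.set("start-locator", "6789"); // this member embeds the second locator
cf2.set("locators", "localhost[12345],localhost[6789]"); // exact same locators list as the first
Cache cache2 = cf2.create();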
For the client cache.xml, you define a connection pool and provide it with a list of running servers:
<?xml version="1.0" encoding="UTF-8"?>
<client-cache
xmlns="http://schema.pivotal.io/gemfire/cache"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://schema.pivotal.io/gemfire/cache http://schema.pivotal.io/gemfire/cache/cache-8.1.xsd"
version="8.1">
<pool name="serverPool">
<server host="localhost" port="40404" />
<server host="localhost" port="40405" />
</pool>
<region name="MyRegion" refid="CACHING_PROXY"/>
</client-cache>
For your client application, you should create a ClientCache using the above cache.xml like so:
ClientCacheFactory ccf = new ClientCacheFactory();
ccf.set("cache-xml-file", "client.xml");
ClientCache clientCache = ccf.create();
Region r = clientCache.getRegion("MyRegion");
r.put("1", "one");
You can start a locator in embedded mode (i.e. within the GemFire server process) by using the start-locator gemfire property.
One way to do this is:
Put start-locator=address1[port1] in the gemfire.properties file.
Start the server using:
gfsh>start server --name=server1 --properties-file=/path/to/gemfire.properties
Start the second server by pointing it to the locator port in the first server:
gfsh>start server --name=server2 --locators=address1[port1]
I'm trying a scenario where my local IP is pinging server1_ip and server2_ip, but it's causing hogging on the server, as there is more than one connection on the same IP's port, as shown below:
[root@local ~]# netstat -antup -p|grep 8000
tcp 1 1 ::ffff:local_ip:58972 ::ffff:server1_ip:8000 LAST_ACK -
tcp 1 1 ::ffff:local_ip:49169 ::ffff:server2_ip:8000 LAST_ACK -
tcp 1 0 ::ffff:local_ip:49172 ::ffff:server2_ip:8000 CLOSE_WAIT 25544/java
tcp 1 0 ::ffff:local_ip:58982 ::ffff:server1_ip:8000 CLOSE_WAIT 25544/java
tcp 1 1 ::ffff:local_ip:58975 ::ffff:server1_ip:8000 LAST_ACK -
tcp 1 1 ::ffff:local_ip:49162 ::ffff:server2_ip:8000 LAST_ACK -
There are 2 threads. For some functionality I need to stop a thread and also close its socket connection on port 8000,
which I'm doing with the following method, which is part of my thread.
protected void disconnect() {
if (this.mSocket != null) {
try {
this.mSocket.shutdownInput();
this.mSocket.shutdownOutput();
this.mOutputStream.flush();
this.mOutputStream.close();
this.mInputStream.close();
this.mSocket.close();
} catch (Exception vException) {
vException.printStackTrace();
}
}
this.mInputStream = null;
this.mOutputStream = null;
this.mSocket = null;
}
But when this method is called, it leaves the connection in the LAST_ACK state.
Please let me know the cause of this and a solution to this problem.
CLOSE_WAIT and LAST_ACK are intermediate states of a TCP connection, reached just before it closes. The TCP connection should eventually reach the CLOSED state: CLOSE_WAIT -> LAST_ACK -> CLOSED. So what you are seeing in netstat is normal.
See this diagram of a tcp connection transition state: http://www.cs.northwestern.edu/~agupta/cs340/project2/TCPIP_State_Transition_Diagram.pdf
The only issue in your code is that you call flush() on the OutputStream after shutting down output. You can also remove the shutdownInput and shutdownOutput calls; they are useful when you want to close only one direction of communication: input or output.
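A minimal corrected sketch of the question's disconnect() method along those lines (flush before closing, and without the shutdown calls) could be:

protected void disconnect() {
    if (this.mSocket != null) {
        try {
            // Flush any buffered data, then close the streams and the socket itself.
            this.mOutputStream.flush();
            this.mOutputStream.close();
            this.mInputStream.close();
            this.mSocket.close();
        } catch (Exception vException) {
            vException.printStackTrace();
        }
    }
    this.mInputStream = null;
    this.mOutputStream = null;
    this.mSocket = null;
}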