twitter hosebird client not working - java

I am trying to get tweets via hbc-twitter4j-v3. The example code is: https://github.com/twitter/hbc/blob/master/hbc-example/src/main/java/com/twitter/hbc/example/Twitter4jSampleStreamExample.java
To enable authentication on the proxy, I have also set the system properties for host, port, and authentication. But it shows the following error:
[main] INFO com.twitter.hbc.httpclient.BasicClient - New connection executed: hosebird-client-0, endpoint: /1.1/statuses/sample.json?delimited=length&stall_warnings=true
[hosebird-client-io-thread-0] INFO com.twitter.hbc.httpclient.ClientBase - hosebird-client-0 Establishing a connection
[main] INFO com.twitter.hbc.httpclient.BasicClient - Stopping the client: hosebird-client-0, endpoint: /1.1/statuses/sample.json?delimited=length&stall_warnings=true
[main] INFO com.twitter.hbc.httpclient.ClientBase - hosebird-client-0 exit event - Stopped by user: waiting for 5000 ms
[main] WARN com.twitter.hbc.httpclient.ClientBase - hosebird-client-0 Client thread failed to finish in 5000 millis
[main] INFO com.twitter.hbc.httpclient.BasicClient - Successfully stopped the client: hosebird-client-0, endpoint: /1.1/statuses/sample.json?delimited=length&stall_warnings=true
[hosebird-client-io-thread-0] WARN com.twitter.hbc.httpclient.ClientBase - hosebird-client-0 Unknown host - stream.twitter.com
[hosebird-client-io-thread-0] WARN com.twitter.hbc.httpclient.ClientBase - hosebird-client-0 failed to establish connection properly
[hosebird-client-io-thread-0] INFO com.twitter.hbc.httpclient.ClientBase - hosebird-client-0 Done processing, preparing to close connection
[hosebird-client-io-thread-0] INFO com.twitter.hbc.httpclient.ClientBase - hosebird-client-0 Shutting down httpclient connection manager
Any help?
Thanks in advance.

Hopefully I haven't overlooked something but this is how it appears to me...
If by setting properties you mean the http.proxy* ones, I don't think that will work, as hosebird-client uses Apache HttpClient under the hood, which doesn't seem to use them.
From a cursory glance at the code, specifically around the ClientBuilder, it doesn't look like hbc supports proxy configuration - perhaps they have a good reason not to or just don't need the feature themselves, maybe try requesting it?
It looks like one of the ways you can get HttpClient to use a proxy is by adding it to the HttpParams object, e.g.:
// HttpHost and ConnRoutePNames come from Apache HttpClient 4.x
// (org.apache.http and org.apache.http.conn.params)
HttpParams params = ...; // the HttpParams the client will be built with
HttpHost proxy = new HttpHost(hostname, port);
params.setParameter(ConnRoutePNames.DEFAULT_PROXY, proxy);
Whilst the HttpParams object isn't exposed anywhere you could potentially extend the ClientBuilder in order to supply your proxy configuration. If you look at the ClientBuilder#build() method, you can see where the HttpParams object is being set up. Good luck!
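For illustration, here is a minimal sketch (my own names, not part of hbc) of applying the JVM's http.proxy* system properties to an Apache HttpClient 4.x instance by hand - the same parameter a ClientBuilder subclass would ultimately need to set on the params hbc assembles:
import org.apache.http.HttpHost;
import org.apache.http.conn.params.ConnRoutePNames;
import org.apache.http.impl.client.DefaultHttpClient;

public class ProxiedClientSketch {
    // Hypothetical helper: Apache HttpClient 4.x ignores the standard JVM
    // http.proxy* properties, so read them manually and set DEFAULT_PROXY.
    public static DefaultHttpClient buildProxiedClient() {
        DefaultHttpClient httpClient = new DefaultHttpClient();
        String host = System.getProperty("http.proxyHost");
        String port = System.getProperty("http.proxyPort");
        if (host != null && port != null) {
            HttpHost proxy = new HttpHost(host, Integer.parseInt(port));
            httpClient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, proxy);
        }
        return httpClient;
    }
}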
EDIT: Additionally, this issue indicates there are no plans to add proxy support directly in hbc.

Related

RabbitMQ ConvertAndSend no exchange "event bus name"

I want to connect two microservices, Java with C#. In my C# service I have set up RabbitMQ, and I am running my services on Docker. I believe everything is set up correctly, but when trying to send a message I get an error telling me my event bus does not exist, even though I can see that the event bus does in fact exist on the RabbitMQ localhost website.
Would anyone know what causes this and how I could fix it? Thank you for your time and help.
My RabbitMQ ports:
"15672:15672"
"5672:5672"
I can go to localhost:15672 and see my exchanges and queues in RabbitMQ.
Yet when I try to use the "convertAndSend" method in Java, I get this error:
2022-08-30 12:01:47.801 INFO 90768 --- [nio-8090-exec-1] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:5672]
2022-08-30 12:01:48.100 INFO 90768 --- [nio-8090-exec-1] o.s.a.r.c.CachingConnectionFactory : Created new connection: rabbitConnectionFactory#1a1cc163:0/SimpleConnection@39d3d67d [delegate=amqp://guest@127.0.0.1:5672/, localPort= 56701]
2022-08-30 12:01:48.249 ERROR 90768 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'StudyPlannerAndMonitor_event_bus' in vhost '/', class-id=60, method-id=40)
I have also let a friend try running my code, and he does not have the issue; everything seems to work fine for him.
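One hedged thing to check, since the error means the broker the Java service actually reaches has no exchange named 'StudyPlannerAndMonitor_event_bus': with Spring AMQP you can declare the exchange from the Java side, so it is created on whatever broker that service connects to. A minimal sketch (the DirectExchange type is an assumption; match whatever the C# side declares):
import org.springframework.amqp.core.DirectExchange;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch only: Spring Boot's RabbitAdmin auto-declares @Bean exchanges on the
// broker this service connects to, so convertAndSend cannot hit a missing one.
@Configuration
public class RabbitConfig {
    @Bean
    public DirectExchange eventBusExchange() {
        // name copied from the error message; durable = true, autoDelete = false
        return new DirectExchange("StudyPlannerAndMonitor_event_bus", true, false);
    }
}
If the exchange then appears only once the Java service starts, the two services were likely pointing at different brokers (e.g. a host-installed RabbitMQ vs. the Docker one).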

How to Log/ View AWS Java SDK HTTP(S) Requests

I am developing a Spring Boot application that uses HTTPS only. I am using AWS services and the corresponding AWS Java SDK. How can I view the HTTP(S) requests that the SDK methods make on the backend of my application? I want to make sure that uploads to S3, etc., happen over HTTPS only, as the security of this application is important. I'm a little confused about how to see when the backend of the application is interacting with the AWS services. Thanks in advance.
AWS uses HTTPS by default for all communications, and gives you options (such as VPC endpoints) that prevent traffic from leaving the AWS VPC.
Unfortunately, I couldn't find a reference in the Java SDK documentation that says it follows this practice. You can find guarantees for individual services (for example, S3), and the page that describes how to enforce TLS 1.2 implies that the SDK uses TLS.
However, if you really want to be sure, you need to enable logging at the wire level.
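For example, with log4j-style configuration the relevant logger names are the Apache HttpClient wire logger and the AWS SDK v2 request logger (a sketch; adapt the syntax to your logging backend):
# Wire-level logging for the AWS SDK for Java v2 over Apache HttpClient.
# org.apache.http.wire dumps every byte on the wire - very verbose.
log4j.logger.org.apache.http.wire=DEBUG
log4j.logger.software.amazon.awssdk.request=DEBUG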
Update in response to comment:
I ran the following program with debugging on:
public static void main(String[] argv)
throws Exception
{
    // S3Client and Bucket are from the AWS SDK for Java v2
    // (software.amazon.awssdk.services.s3 and ...s3.model)
    S3Client client = S3Client.builder().build();
    for (Bucket bucket : client.listBuckets().buckets())
    {
        System.out.println(bucket.name());
    }
}
Looking at the logs, it's definitely making an HTTPS connection:
2021-11-17 17:51:24,802 [main] DEBUG request - Sending Request: DefaultSdkHttpFullRequest(httpMethod=GET, protocol=https, host=s3.amazonaws.com, port=443, encodedPath=/, headers=[amz-sdk-invocation-id, User-Agent], queryParameters=[])
...
2021-11-17 17:51:24,832 [main] DEBUG PoolingHttpClientConnectionManager - Connection request: [route: {s}->https://s3.amazonaws.com:443][total kept alive: 0; route allocated: 0 of 50; total allocated: 0 of 50]
2021-11-17 17:51:24,839 [main] DEBUG PoolingHttpClientConnectionManager - Connection leased: [id: 0][route: {s}->https://s3.amazonaws.com:443][total kept alive: 0; route allocated: 1 of 50; total allocated: 1 of 50]
2021-11-17 17:51:24,840 [main] DEBUG MainClientExec - Opening connection {s}->https://s3.amazonaws.com:443
2021-11-17 17:51:24,871 [main] DEBUG DefaultHttpClientConnectionOperator - Connecting to s3.amazonaws.com/52.216.251.102:443
2021-11-17 17:51:24,871 [main] DEBUG SdkTlsSocketFactory - Connecting socket to s3.amazonaws.com/52.216.251.102:443 with timeout 2000
2021-11-17 17:51:24,902 [main] DEBUG SdkTlsSocketFactory - Enabled protocols: [TLSv1.2]
That's followed by the TLS negotiation, and then the actual request. One thing that may be confusing, if you're not familiar with the HTTP protocol, is this:
2021-11-17 17:51:25,022 [main] DEBUG MainClientExec - Executing request GET / HTTP/1.1
That's the HTTP request line, and the "HTTP/1.1" indicates the protocol version. This information is sent over the secure connection.

Unable to connect to a Kerberos-secured Phoenix datasource

I want to test pulling data from Apache HBase with a Java application. The application will use SQL-like queries via JDBC to Apache Phoenix.
I've set up my Hadoop "cluster" on one machine using Ambari and the HortonWorks HDP 2.5 platform. I've also Kerberized the environment using Ambari's wizard, where my KDC is a separate machine running Windows Active Directory.
Ambari shows no errors, and I am able to use sqlline.py to successfully make SQL-like calls to HBase through Phoenix. I set up some example tables this way (cf. HortonWorks Phoenix & ODBC tutorial, although I had to kinit etc. first).
However, I am having problems creating a JDBC datasource to be used by the Java application. In my case, I am planning to host the webapp on WildFly 10.1 and I am developing with Eclipse JEE with the JBoss Tools plugin.
These are the steps I used to create the datasource:
Datasource Explorer > Database Connections > New...
Connection Profile: Generic JDBC
URL: jdbc:phoenix:hdfs.eaa.local:2181/hbase-secure:HTTP/hbase.eaa.local@EAA.LOCAL:jboss.server.temp.dir/spnego.service.keytab
Username: hbase (I'm unsure what to put here)
Driver: I've created a new driver of the type "Generic JDBC Driver" and I had to add JAR files for all of the dependencies of phoenix-core-[version].jar. The Driver Class is org.apache.phoenix.jdbc.PhoenixDriver.
I got the connection string from an extant post in the HortonWorks community, which is why it includes the Kerberos principal and keytab used for the connection.
When I try to test the datasource connection, it churns for about 5 minutes before spitting out an error message (after something like 35 attempts). The client returns Java exceptions saying that the sockets are in a "closing state", and the Zookeeper logs are less helpful:
INFO [SyncThread:0:ZooKeeperServer@617] - Established session 0x157aef451560217 with negotiated timeout 40000 for client /192.168.40.3:52674
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.40.41:43860
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /192.168.40.41:43860
INFO [Thread-1448:NIOServerCnxn@1007] - Closed socket connection for client /192.168.40.41:43860 (no session established for client)
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.40.41:43922
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@868] - Client attempting to establish new session at /192.168.40.41:43922
INFO [SyncThread:0:ZooKeeperServer@617] - Established session 0x157aef451560218 with negotiated timeout 40000 for client /192.168.40.41:43922
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:SaslServerCallbackHandler@118] - Successfully authenticated client: authenticationID=hbase/hdfs.eaa.local@EAA.LOCAL; authorizationID=hbase/hdfs.eaa.local@EAA.LOCAL.
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:SaslServerCallbackHandler@134] - Setting authorizedID: hbase
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@964] - adding SASL authorization for authorizationID: hbase
INFO [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x157aef451560218
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /192.168.40.41:43922 which had sessionid 0x157aef451560218
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /192.168.40.41:44008
INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /192.168.40.41:44008
INFO [Thread-1449:NIOServerCnxn@1007] - Closed socket connection for client /192.168.40.41:44008 (no session established for client)
NB. 192.168.40.3 is the VPN server, which my host machine is using to tunnel into the environment with the Hadoop cluster. 192.168.40.41 is the machine running the cluster, hdfs.eaa.local.
There are plenty of accepted socket connections which are then immediately closed. Occasionally the client authenticates successfully (so I'm confident in my Kerberos settings) but then there is a session termination immediately afterward.
I've also tried to deploy the Datasource directly in WildFly with jboss-cli and standalone.xml and module.xml modifications. But I get lots of problems with missing dependencies that I'm not sure how to resolve without creating a new module for each required JAR (and there are a lot) for phoenix-core-[version].jar. I followed this guide.
What can I do to fix the issue or diagnose further? I've been pulling my hair out for a couple of days now.
You need to add hbase-site.xml and core-site.xml to your classpath.
See How to connect to a Kerberos-secured Apache Phoenix data source with WildFly? for more information.
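For completeness, a hedged smoke-test sketch of opening that connection from plain Java once the two files are on the classpath (the URL pieces are the asker's values; PhoenixSmokeTest is my own name):
import java.sql.Connection;
import java.sql.DriverManager;

public class PhoenixSmokeTest {
    public static void main(String[] args) throws Exception {
        // hbase-site.xml and core-site.xml must be on the classpath so the
        // driver picks up the cluster's Kerberos and ZooKeeper settings.
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        String url = "jdbc:phoenix:hdfs.eaa.local:2181/hbase-secure:"
                + "HTTP/hbase.eaa.local@EAA.LOCAL:jboss.server.temp.dir/spnego.service.keytab";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Connected, closed=" + conn.isClosed());
        }
    }
}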

Spark Cannot Connect to HBaseClusterSingleton

I start an HBase cluster for my test class. I use this helper class:
HBaseClusterSingleton.java
and use it like this:
private static final HBaseClusterSingleton cluster = HBaseClusterSingleton.build(1);
I retrieve the configuration object as follows:
cluster.getConf()
and I use it in Spark as follows:
sparkContext.newAPIHadoopRDD(conf, MyInputFormat.class, clazzK, clazzV);
When I run my test there should be no need to start up an external HBase cluster, because Spark should connect to my dummy cluster. However, when I run my test method it throws an error:
2015-08-26 01:19:59,558 INFO [Executor task launch worker-0-SendThread(localhost:2181)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2015-08-26 01:19:59,559 WARN [Executor task launch worker-0-SendThread(localhost:2181)] zookeeper.ClientCnxn (ClientCnxn.java:run(1089)) - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
HBase tests which do not run on Spark work well. When I check the logs I see that the cluster and Spark are started up correctly:
2015-08-26 01:35:21,791 INFO [main] hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2055)) - Cluster is active
2015-08-26 01:35:40,334 INFO [main] util.Utils (Logging.scala:logInfo(59)) - Successfully started service 'sparkDriver' on port 56941.
I realized that when I start up HBase from the command line, my Spark test method connects to it!
So, does that mean it doesn't care about the conf I passed to it? Any ideas about how to solve this?
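Since the stack trace shows the worker dialing the default localhost:2181, one hedged first check is to print what the passed conf actually contains (property names are the standard HBase keys; cluster is the helper from the question):
import org.apache.hadoop.conf.Configuration;

// Diagnostic sketch: a mini-cluster usually binds a randomized ZooKeeper
// client port. If this prints 2181 while the mini-cluster listens elsewhere,
// the Configuration handed to newAPIHadoopRDD is not the mini-cluster's.
Configuration conf = cluster.getConf();
System.out.println("hbase.zookeeper.quorum = "
        + conf.get("hbase.zookeeper.quorum"));
System.out.println("hbase.zookeeper.property.clientPort = "
        + conf.get("hbase.zookeeper.property.clientPort"));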

Cassandra Downed Host Retry (Hector)

Can someone assist me in understanding and troubleshooting this issue? I do not know what is causing Hector to fail when it tries to connect to the Cassandra cluster.
How can I find out where the issue is?
0 [main] INFO me.prettyprint.cassandra.connection.CassandraHostRetryService - Downed Host Retry service started with queue size 10 and retry delay 30s
168 [main] INFO me.prettyprint.cassandra.service.JmxMonitor - Registering JMX me.prettyprint.cassandra.service_keyspace-name:ServiceType=hector,MonitorType=hector
399 [main] INFO me.prettyprint.cassandra.model.ConfigurableConsistencyLevel - READ ConsistencyLevel set to QUORUM for ColumnFamily Files
400 [main] INFO me.prettyprint.cassandra.model.ConfigurableConsistencyLevel - WRITE ConsistencyLevel set to QUORUM for ColumnFamily Files
406 [main] INFO me.prettyprint.cassandra.model.ConfigurableConsistencyLevel - READ ConsistencyLevel set to QUORUM for ColumnFamily FileList
407 [main] INFO me.prettyprint.cassandra.model.ConfigurableConsistencyLevel - WRITE ConsistencyLevel set to QUORUM for ColumnFamily FileList
From the trace it seems you are using QUORUM as the consistency level. Try using ONE and see if it works. It seems that one or more of the nodes that should satisfy your request are down. Use nodetool ring or nodetool status to see if any node in your cluster is down.
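A hedged sketch of what switching Hector to ONE could look like (the keyspace name and cluster reference are placeholders; the policy class is the one already shown in the log):
import me.prettyprint.cassandra.model.ConfigurableConsistencyLevel;
import me.prettyprint.hector.api.HConsistencyLevel;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;

// Sketch: default reads and writes to ONE so a single live replica is enough;
// QUORUM fails whenever a majority of replicas for the key is unavailable.
ConfigurableConsistencyLevel policy = new ConfigurableConsistencyLevel();
policy.setDefaultReadConsistencyLevel(HConsistencyLevel.ONE);
policy.setDefaultWriteConsistencyLevel(HConsistencyLevel.ONE);
Keyspace keyspace = HFactory.createKeyspace("keyspace-name", cluster, policy);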
