I am getting the exception below when I execute the following code, which uses Ribbon and tries to get the server list from Eureka.
Exception
3122 [main] WARN com.netflix.loadbalancer.RoundRobinRule - No up servers available from load balancer: DynamicServerListLoadBalancer:{NFLoadBalancer:name=origin,current list of Servers=[],Load balancer stats=Zone stats: {},Server stats: []}ServerList:DiscoveryEnabledNIWSServerList:; clientName:origin; Effective vipAddresses:localhost; isSecure:false; datacenter:null
Exception in thread "main" com.netflix.client.ClientException: LoadBalancer returned null Server for :origin
at com.netflix.client.LoadBalancerContext.computeFinalUriWithLoadBalancer(LoadBalancerContext.java:418)
at com.netflix.client.AbstractLoadBalancerAwareClient.executeWithLoadBalancer(AbstractLoadBalancerAwareClient.java:166)
at com.netflix.zuul.RibbonTest.main(RibbonTest.java:23)
Code
public static void main(String[] args) throws Exception {
    ConfigurationManager.loadPropertiesFromResources("ribbon-client.properties");
    RestClient client = (RestClient) ClientFactory.getNamedClient("origin");
    HttpRequest request = HttpRequest.newBuilder().uri(new URI("/serviceContext/api/v1/")).build();
    for (int i = 0; i < 20; i++) {
        HttpResponse response = client.executeWithLoadBalancer(request);
    }
}
ribbon-client.properties
origin.ribbon.NIWSServerListClassName=com.netflix.niws.loadbalancer.DiscoveryEnabledNIWSServerList
#also tried unsuccessfully with localhost:8091, localhost:8091/serviceContext
origin.ribbon.DeploymentContextBasedVipAddresses=localhost
The service is a Spring Boot app (not using Spring Cloud) with the following eureka-client.properties:
eureka-client.properties
eureka.registration.enabled=true
eureka.preferSameZone=true
eureka.shouldUseDns=false
eureka.serviceUrl.default=http://localhost:9080/eureka/v2/
eureka.region=default
eureka.name=service-Eureka
eureka.vipAddress=localhost
eureka.port=8091
eureka.instanceId=CHNSHL119363L
The service registers successfully with Eureka, which is deployed in a local Tomcat on port 9080, and is discoverable at http://localhost:9080/eureka/ and http://localhost:9080/eureka/v2/apps/.
Without using Spring Cloud, what needs to be fixed in the above code/configuration to get the list of servers dynamically from Eureka via Ribbon?
Another post pointed me in the right direction. The issue was solved by:
1. Correcting the vipAddress configuration for the service and for Ribbon
2. Configuring and registering the Ribbon client as a Eureka client
1. Changes for vipAddress
eureka-client.properties (service)
eureka.vipAddress=my.eureka.local
ribbon-client.properties
origin.ribbon.DeploymentContextBasedVipAddresses=my.eureka.local
2. Changes to register the Ribbon client as a Eureka client (instanceConfig and clientConfig below are the standard config-driven implementations, which read the eureka.* properties from eureka-client.properties on the classpath):
public static void main(String[] args) throws Exception {
    ConfigurationManager.loadPropertiesFromResources("ribbon-client.properties");

    // Register this Ribbon process as a Eureka client so that DiscoveryEnabledNIWSServerList
    // can fetch the registry. The config-based implementations below read the eureka.*
    // properties from eureka-client.properties on the classpath.
    EurekaInstanceConfig instanceConfig = new MyDataCenterInstanceConfig();
    InstanceInfo instanceInfo = new EurekaConfigBasedInstanceInfoProvider(instanceConfig).get();
    ApplicationInfoManager applicationInfoManager = new ApplicationInfoManager(instanceConfig, instanceInfo);
    EurekaClient eurekaClient = new DiscoveryClient(applicationInfoManager, new DefaultEurekaClientConfig());

    RestClient client = (RestClient) ClientFactory.getNamedClient("origin");
    HttpRequest request = HttpRequest.newBuilder().uri(new URI("/serviceContext/api/v1/")).build();
    for (int i = 0; i < 20; i++) {
        HttpResponse response = client.executeWithLoadBalancer(request);
    }
}
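As a quick sanity check (my own sketch, not part of the original fix), the named load balancer behind the client can be asked which servers it resolved from Eureka; this assumes the "origin" client configured in ribbon-client.properties above:

// should now print the Eureka-registered instances instead of an empty list
ILoadBalancer lb = ClientFactory.getNamedLoadBalancer("origin");
System.out.println("Servers resolved for 'origin': " + lb.getAllServers());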
With the following super simple Java application:
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class Main {
    static final private Logger logger = LoggerFactory.getLogger(Main.class);

    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(null, "localhost", 9999);
        logger.info("got JMXServiceURL");
        logger.info(String.format("JMXServiceURL=%s", url.toString()));
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            logger.info("got JMXConnector");
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            logger.info("got MBeanServerConnection");
            logger.info(String.format("connection.getMBeanCount()=%d", connection.getMBeanCount()));
        }
        logger.info("exiting");
    }
}
I'm using a minimal build.gradle file with dependencies:
dependencies {
    implementation 'ch.qos.logback:logback-classic:1.2.3'
    implementation 'org.jvnet.opendmk:jmxremote_optional:1.0_01-ea'
}
Without the jmxremote_optional dependency, I get a java.net.MalformedURLException: Unsupported protocol: jmxmp error. I presume I've added the correct Maven dependency to resolve that.
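As an aside, a minimal sketch of the two URL forms in play here, assuming the target JVM exposes the stock RMI connector (the usual -Dcom.sun.management.jmxremote.port setup) rather than a JMXMP connector server:

// What new JMXServiceURL(null, "localhost", 9999) resolves to: a JMXMP URL.
// This only works if the target JVM is actually running a JMXMP connector server.
JMXServiceURL jmxmpUrl = new JMXServiceURL("service:jmx:jmxmp://localhost:9999");

// The RMI/JNDI form used by the standard com.sun.management.jmxremote agent:
JMXServiceURL rmiUrl = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");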
When I run this, I get the following and then the application hangs indefinitely:
120 18:43:33.693 [main] INFO jmxclient.Main - got JMXServiceURL
123 18:43:33.696 [main] INFO jmxclient.Main - JMXServiceURL=service:jmx:jmxmp://localhost:9999
I definitely have a Java application exposing JMX metrics on that port:
time curl localhost:9999
curl: (52) Empty reply from server
real 0m0.020s
user 0m0.012s
sys 0m0.000s
I am working in a microservices architecture that works as follows: two service web applications (REST services) register themselves correctly with a Eureka server. A client application then fetches the Eureka registry and, using Ribbon as a client-side load balancer, decides which service instance to call (at the moment a simple round robin is used).
My problem is that when I stop one of the service applications (they currently run in Docker containers, by the way), Eureka does not remove it from the registry right away (it seems to take a few minutes), so Ribbon still thinks there are two available servers and roughly 50% of the calls fail.
Unfortunately I am not using Spring Cloud (for reasons out of my control), so my Eureka config is as follows.
For the service applications:
eureka.registration.enabled=true
eureka.name=skeleton-service
eureka.vipAddress=skeleton-service
eureka.statusPageUrlPath=/health/ping
eureka.healthCheckUrlPath=/health/check
eureka.port.enabled=8042
eureka.port=8042
eureka.appinfo.replicate.interval=5
## configuration related to reaching the eureka servers
eureka.preferSameZone=true
eureka.shouldUseDns=false
eureka.serviceUrl.default=http://eureka-container:8080/eureka/v2/
eureka.decoderName=JacksonJson
For the client application (Eureka + Ribbon):
###Eureka Client configuration for Sample Eureka Client
eureka.registration.enabled=false
eureka.name=skeleton-web
eureka.vipAddress=skeleton-web
eureka.statusPageUrlPath=/health/ping
eureka.healthCheckUrlPath=/health/check
eureka.port.enabled=8043
eureka.port=8043
## configuration related to reaching the eureka servers
eureka.preferSameZone=true
eureka.shouldUseDns=false
eureka.serviceUrl.default=http://eureka-container:8080/eureka/v2/
eureka.decoderName=JacksonJson
eureka.renewalThresholdUpdateIntervalMs=3000
#####################
# RIBBON STUFF HERE #
#####################
sample-client.ribbon.NIWSServerListClassName=com.netflix.niws.loadbalancer.DiscoveryEnabledNIWSServerList
# expressed in milliseconds
sample-client.ribbon.ServerListRefreshInterval=3000
# skeleton-service is the virtual address that the target server(s) use to register with the Eureka server
sample-client.ribbon.DeploymentContextBasedVipAddresses=skeleton-service
I had faced a similar issue in my development. There are multiple things I tried that worked for me.
1) Instead of relying on the Eureka registry, use only the underlying Ribbon server list and adapt its health-check mechanism to your needs by providing your own IPing implementation:
import com.netflix.loadbalancer.Server;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

public class PingUrl implements com.netflix.loadbalancer.IPing {

    private final boolean isSecure;
    private final String pingAppendString;
    private final RestTemplate restTemplate = new RestTemplate();

    public PingUrl(String pingAppendString) {
        this(false, pingAppendString);
    }

    public PingUrl(boolean isSecure, String pingAppendString) {
        this.isSecure = isSecure;
        this.pingAppendString = pingAppendString;
    }

    @Override
    public boolean isAlive(Server server) {
        // server.getId() is "host:port"; build http(s)://host:port<pingAppendString>
        String urlStr = (isSecure ? "https://" : "http://") + server.getId() + pingAppendString;
        boolean isAlive = false;
        try {
            ResponseEntity<String> response = restTemplate.getForEntity(urlStr, String.class);
            isAlive = response.getStatusCode().value() == 200;
        } catch (Exception e) {
            // any failure (connection refused, timeout, non-2xx) marks the server as down
        }
        return isAlive;
    }
}
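Since Spring Cloud is not available to the asker, note that such an IPing can also be wired in purely through Ribbon properties instead of the @Bean shown below. This is only a sketch: Ribbon instantiates the class reflectively, so it needs a no-arg constructor, the package name is a placeholder, and the interval key should be verified against your Ribbon version.

sample-client.ribbon.NFLoadBalancerPingClassName=com.test.PingUrl
# how often the ping runs, in seconds (assumed key name from CommonClientConfigKey)
sample-client.ribbon.NFLoadBalancerPingInterval=5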
Override the load-balancing behaviour:
@SpringBootApplication
@EnableZuulProxy
@RibbonClients(defaultConfiguration = LoadBalancer.class)
@ComponentScan(basePackages = {"com.test"})
public class APIGateway {
    public static void main(String[] args) throws Exception {
        SpringApplication.run(APIGateway.class, args);
    }
}

public class LoadBalancer {

    @Autowired
    IClientConfig ribbonClientConfig;

    @Bean
    public IPing ribbonPing() {
        return new PingUrl(getRoute() + "/ping");
    }

    private String getRoute() {
        return RequestContext.getCurrentContext().getRequest().getServletPath();
    }
}
Provide an availability-filtering rule:
public class AvailabilityBasedServerSelectionRule extends AvailabilityFilteringRule {

    @Override
    public Server choose(Object key) {
        Server chosenServer = super.choose(key);
        int count = 1;
        List<Server> reachableServers = this.getLoadBalancer().getReachableServers();
        List<Server> allServers = this.getLoadBalancer().getAllServers();
        if (reachableServers.size() > 0) {
            while (!reachableServers.contains(chosenServer) && count++ < allServers.size()) {
                chosenServer = reachableServers.get(0);
            }
        }
        return chosenServer;
    }
}
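A custom rule like this can likewise be plugged in through properties rather than Java config; a sketch, assuming the class sits in a hypothetical com.test package on the client's classpath:

sample-client.ribbon.NFLoadBalancerRuleClassName=com.test.AvailabilityBasedServerSelectionRule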
2) You can also specify the time interval for refreshing the server list:
ribbon.eureka.ServerListRefreshInterval={time in ms}
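Beyond the Ribbon refresh interval, the delay described in the question is largely governed by Eureka's own lease and cache timings, and the Eureka server additionally has its own response cache and eviction cycle, so instances never disappear instantly. A heavily hedged sketch of the client-side knobs, using property names as I understand them for the plain Netflix eureka-client (verify them against your client version before relying on this):

# how often each service instance renews its lease with the Eureka server (seconds)
eureka.lease.renewalInterval=10
# how long the server waits without a renewal before evicting the instance (seconds)
eureka.lease.duration=30
# how often the client application re-fetches the registry (seconds)
eureka.client.refresh.interval=10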
When I run this code:
public class test2 {
    public static void main(String[] args) {
        String podName = "xrdpprocan";
        String namespace = "default";
        String master = "https://my_ip_adress";
        Config config = new ConfigBuilder().withMasterUrl(master).withTrustCerts(true).build();
        try (final KubernetesClient client = new DefaultKubernetesClient(config)) {
            String log = client.pods().inNamespace(namespace).withName(podName).getLog(true);
            System.out.println("Log of pod " + podName + " in " + namespace + " is:");
            System.out.println("------------------");
            System.out.println(log);
        } catch (KubernetesClientException e) {
            System.out.println(e.getMessage());
        }
    }
}
I get this error: Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
Where the problem is: your current client configuration is incomplete; you are missing the client authentication settings.
Please be aware that when you run your code from outside the cluster (this type of client configuration is called out-of-cluster client configuration), you need to explicitly specify a bare minimum for a successful connection to the Kubernetes control plane from outside:
Kubernetes Master URL
At least one method of user authentication, which can be any of:
client certificates
bearer tokens
HTTP basic auth
You see the problem? You have specified none of the options from the second condition for user authentication (the key word here is: user).
Right now the Java Kubernetes client falls back to the service-account-based authentication strategy, assuming you are not a human but a robot (a Pod running in the context of a ServiceAccount).
Put technically, the client is resolving to the last-resort option:
KUBERNETES_AUTH_TRYSERVICEACCOUNT
(4th on the list of configuration options supported by fabric8io/kubernetes-client, see below), which involves reading the service account token placed in the filesystem inside the Pod's container at the following path:
/var/run/secrets/kubernetes.io/serviceaccount/token
Officially, the fabric8io/kubernetes-client Java client supports the following ways of configuring the client:
This will use settings from different sources in the following order
of priority:
System properties
Environment variables
Kube config file
Service account token & mounted CA certificate <== your client code tries this
System properties are preferred over environment variables. (The full table of supported system properties and environment variables is in the project's README.)
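For illustration only, a rough sketch of the explicit out-of-cluster route for the fabric8 client used in the question, supplying the master URL plus a bearer token for the user; the token value and CA path are placeholders, not real values:

Config config = new ConfigBuilder()
        .withMasterUrl("https://my_ip_adress")
        .withOauthToken("<token-for-your-user>")   // any valid user credential works; a token is just one option
        .withCaCertFile("/path/to/ca.crt")         // or keep withTrustCerts(true) while experimenting
        .build();
try (KubernetesClient client = new DefaultKubernetesClient(config)) {
    System.out.println(client.pods().inNamespace("default").list().getItems().size());
}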
The easiest solution is to rely on Kube config file option to access cluster from outside, e.g.:
public class KubeConfigFileClientExample {
    public static void main(String[] args) throws IOException, ApiException {
        // file path to your KubeConfig
        String kubeConfigPath = System.getenv("HOME") + "/.kube/config";

        // loading the out-of-cluster config, a kubeconfig from the file system
        ApiClient client =
                ClientBuilder.kubeconfig(KubeConfig.loadKubeConfig(new FileReader(kubeConfigPath))).build();

        // set the global default api-client to the out-of-cluster one from above
        Configuration.setDefaultApiClient(client);

        // the CoreV1Api loads the default api-client from the global configuration
        CoreV1Api api = new CoreV1Api();

        // invokes the CoreV1Api client
        V1PodList list =
                api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null, null);
        for (V1Pod item : list.getItems()) {
            System.out.println(item.getMetadata().getName());
        }
    }
}
Full code sample can be found here.
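Note that the sample above uses the official kubernetes-client/java library, while the question uses fabric8. A minimal equivalent sketch for fabric8, which can auto-configure itself from ~/.kube/config (pod name and namespace taken from the question):

// Config.autoConfigure(null) picks up system properties, environment variables and ~/.kube/config in that priority order
Config config = Config.autoConfigure(null);
try (KubernetesClient client = new DefaultKubernetesClient(config)) {
    String log = client.pods().inNamespace("default").withName("xrdpprocan").getLog(true);
    System.out.println(log);
}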
On Google Compute Engine, I created 3 VMs and installed Elasticsearch 5.1.2 on them.
I installed GCE Discovery Plugin for unicast discovery.
From my local web browser (Windows 7), I can access these Elasticsearch nodes successfully.
On Google Cloud Platform, I have added a firewall rule accepting tcp:9300 and tcp:9200.
Now I'd like to use the Java transport client to talk to the remote Elasticsearch nodes from my local Java application.
I'm sure cluster.name is correct.
Code and Error are as follows:
public class NativeClient {

    @SuppressWarnings({ "resource", "unchecked" })
    public static Client createTransportClient() throws UnknownHostException {
        Settings settings = Settings.builder().put("cluster.name", "elasticsearch").put("client.transport.sniff", true)
                .build();
        TransportClient client = new PreBuiltTransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("104.100.100.96"), 9300));
        return client;
    }

    private final Node node = null;
    private final Client client = null;
}
public class IndicesOperations {
    private final Client client;

    public IndicesOperations(Client client) {
        this.client = client;
    }

    public boolean checkIndexExists(String name) {
        IndicesExistsResponse response = client.admin().indices().prepareExists(name).execute().actionGet();
        return response.isExists();
    }

    public static void main(String[] args) throws InterruptedException, UnknownHostException {
        Client client = NativeClient.createTransportClient();
        IndicesOperations io = new IndicesOperations(client);
        String myIndex = "test";
        if (io.checkIndexExists(myIndex))
            io.deleteIndex(myIndex);
        io.createIndex(myIndex);
        Thread.sleep(1000);
        io.closeIndex(myIndex);
        io.openIndex(myIndex);
        io.deleteIndex(myIndex);
    }
}
Exception in thread "main" NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{lu8DzekbSWOrNEgFgXxpgQ}{104.100.100.96}{104.100.100.96:9300}]]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:328)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:226)
at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:59)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:345)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:403)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1226)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:80)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:54)
at com.packtpub.IndicesOperations.checkIndexExists(IndicesOperations.java:16)
at com.packtpub.IndicesOperations.main(IndicesOperations.java:49)
elasticsearch.yml
network.host: _gce_
cloud:
  gce:
    project_id: es-cloud
    zone: asia-east1-b
discovery:
  type: gce
Edit:
After deploying my Java application on Google Compute Engine, it can access the Elasticsearch instance running there; in that deployment I just changed the address to InetAddress.getByName("10.140.0.2"), the VM's internal address. When running from my local machine, I used the external IP of that VM.
What else do I have to modify to run it on my local machine?
I added the external IP of my VM as the network.publish_host property in elasticsearch.yml, and then I could access Elasticsearch running on the remote VM:
network.host: _gce_
network.publish_host: 104.100.100.96
cloud:
  gce:
    project_id: es-cloud
    zone: asia-east1-b
discovery:
  type: gce
I don't understand exactly why, but fortunately it works.
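My reading of why it works (not stated in the original answer): with client.transport.sniff=true the transport client pulls the cluster state and then connects to each node's publish address, so when the nodes publish only their internal GCE IPs the sniffed addresses are unreachable from outside; publishing the external IP fixes that. An alternative sketch is to disable sniffing so the client sticks to the address you add explicitly:

Settings settings = Settings.builder()
        .put("cluster.name", "elasticsearch")
        .put("client.transport.sniff", false)   // only use the explicitly added address
        .build();
TransportClient client = new PreBuiltTransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("104.100.100.96"), 9300));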
I am trying to connect to the Object Storage service in Bluemix. However, I keep getting an exception. Here is my code:
public static void main(String[] args) {
    SwiftApi swiftApi;
    String endpoint = "https://identity.open.softlayer.com/v2.0";
    String tenantName = "object_storage_aedba606_1c69_4a54_b12c_2cecxxxxxx";
    String userName = "e8ee36a1fa38432abcxxxxxxx";
    String password = "Y6R(cY3xxxxxxxx";
    String identity = tenantName + ":" + userName;
    String provider = "openstack-swift";
    String region = "dallas";

    Properties overrides = new Properties();
    overrides.setProperty(Constants.PROPERTY_LOGGER_WIRE_LOG_SENSITIVE_INFO, "true");

    swiftApi = ContextBuilder.newBuilder(provider)
            .endpoint(endpoint)
            .credentials(identity, password)
            .overrides(overrides)
            .buildApi(SwiftApi.class);

    System.out.println("List Containers");
    ContainerApi containerApi = swiftApi.getContainerApi(region);
    Set<Container> containers = containerApi.list().toSet();
    System.out.println("Listing Containers: ");
    for (Container container : containers) {
        System.out.println("  " + container);
    }
    System.out.println(" ");
}
I keep getting the following exception:
Exception in thread "main" org.jclouds.rest.AuthorizationException: request:
POST https://identity.open.softlayer.com/v2.0/tokens HTTP/1.1 [Sensitive data in payload,
use PROPERTY_LOGGER_WIRE_LOG_SENSITIVE_INFO override to enable logging this data.]
failed with response: HTTP/1.1 401 Unauthorized
at org.jclouds.openstack.swift.v1.handlers.SwiftErrorHandler.handleError
(SwiftErrorHandler.java:52)
at org.jclouds.http.handlers.DelegatingErrorHandler.handleError
(DelegatingErrorHandler.java:65)
at org.jclouds.http.internal.BaseHttpCommandExecutorService.shouldContinue
(BaseHttpCommandExecutorService.java:136)
My application is a standalone Java application. I am using the credentials that are supplied within my Object Storage service from Bluemix.
Any help is greatly appreciated.
This blog will help you connect to Object Storage in Bluemix https://developer.ibm.com/recipes/tutorials/connecting-to-ibm-object-storage-for-bluemix-with-java/
You can also access Object Storage using the Swift CLI https://console.ng.bluemix.net/docs/services/ObjectStorage/objectstorge_usingobjectstorage.html#using-swift-cli