Fallback Mechanism - Best approach? - java

I have three different types of server connection. These can be configured in a properties file.
Say there are three servers:
Server1
Server2
Server3
In Properties file, I've configured as below:
ServerPref1 = Server1
ServerPref2 = Server2
ServerPref3 = Server3
At the code level, my fallback mechanism is as below:
private static void getServerAndConnect() {
    try {
        connect(Properties.ServerPref1);
    } catch (ServerException se1) {
        try {
            connect(Properties.ServerPref2);
        } catch (ServerException se2) {
            try {
                connect(Properties.ServerPref3);
            } catch (ServerException se3) {
                // Unable to connect
            }
        }
    }
}
The connect() method throws a custom ServerException if it is unable to connect to the server.
Everything works as expected.
My question is: is this the correct or best way to implement a fallback mechanism?

I'd recommend keeping the server connections in a list so you can use a loop instead of nesting; this also lets you add more servers without code changes.
Since you have separate attributes for each connection, the best I can offer without seeing the rest of your code is to put those fields into a temporary list and loop over that.
Ideally, make your properties-parsing code write the connections into a List as well, so you can have an arbitrary number of servers without adding new fields to your Properties class.
private static void getServerAndConnect() {
    List<ServerPref> serverPrefs = Arrays.asList(
            Properties.ServerPref1, Properties.ServerPref2, Properties.ServerPref3);
    for (ServerPref serverPref : serverPrefs) {
        try {
            connect(serverPref);
            // connection succeeded, so stop trying further servers
            break;
        } catch (ServerException se) {
            // log the error and move on to the next server
        }
    }
}

The general approach is OK. Depending on your needs you could make a few improvements:
Are there always exactly three servers? If that number can change, put your servers in a list and iterate over that list to find the first functioning server.
If you want your workload more evenly distributed over the servers, instead of all connections going to the first server whenever it is available, randomize the list of servers before you iterate over it, or use a round-robin approach.
If the getServerAndConnect() method is called often, consider remembering the server that was eventually used and trying it first the next time, since the probability is high that it is still reachable.
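Here is a minimal sketch of that last idea, assuming a List<ServerPref> as in the other answer; the field names and the modulo rotation are illustrative, not taken from your code:

private static final List<ServerPref> SERVER_PREFS = Arrays.asList(
        Properties.ServerPref1, Properties.ServerPref2, Properties.ServerPref3);
private static int lastGoodIndex = 0; // index of the most recently successful server

private static void getServerAndConnect() {
    for (int i = 0; i < SERVER_PREFS.size(); i++) {
        int index = (lastGoodIndex + i) % SERVER_PREFS.size(); // start from the last good server
        try {
            connect(SERVER_PREFS.get(index));
            lastGoodIndex = index; // remember the working server for the next call
            return;
        } catch (ServerException se) {
            // log and fall through to the next server
        }
    }
    // all servers failed; handle "unable to connect" here
}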

How to return only new items in real time when they are posted by another endpoint?

I am trying to build a live-streaming example app where I can live-update a list in the browser. I want to return all elements and then keep listening (not stop the stream) so that when a new item is added to the database, I can show it in the browser. My current solution prints all items over and over again (second by second), but I think there is a better solution, where I can either a) find the difference between the list from the last repository.findAll() call and return only currList - prevList, or b) listen to some kind of events, like an insert into the table, and add the new item to the still-open stream.
Here is my current code:
@RestController
@RequestMapping("/songs")
public class SongController {

    private final SongRepository songRepository;

    public SongController(SongRepository songRepository) {
        this.songRepository = songRepository;
    }

    @GetMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Song> getAllSongs() {
        return Flux.interval(Duration.ofSeconds(1))
                .flatMap(x -> songRepository.findAll());
    }

    @PostMapping
    public Mono<Song> addSong(@RequestBody Song song) {
        return songRepository.save(song);
    }
}
Here is how it looks now:
As you can see, it's obviously looping, and I just need a plain list with 7 elements at the beginning and then +1 element every time I post a new song (via addSong()).
I don't need an entire ready-made solution, I just don't know what I should use.
Thank you in advance, cheers
In my experience there are three options, each with different pros and cons.
You could create a WebSocket connection from the browser to your backend service. This creates a bi-directional connection that allows you to push updates from the server to your browser. In this case, whenever you add a song you would write that song to the WebSocket connection and handle it on the browser side, adding it to the list in the browser.
The con is that, in my experience, WebSocket connections are finicky and aren't the most stable or reliable.
You could use server-sent events. I haven't used this personally, but I have heard it can be a viable option for pushing events from the server to the browser: https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events
You could poll the endpoint. I know this approach has gotten a lot of hate in recent years, but it is a viable option. The benefit of polling the endpoint is that it is resilient to failures: if your backend is overloaded and can't respond to one request, it will likely be able to respond to a subsequent one. There are also ways of improving commonly used endpoints so you're not hammering your database, such as a cache or something of that nature.
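Since your controller already produces TEXT_EVENT_STREAM, the server-sent-events option can be built on what you have. Here is a minimal sketch using Reactor's Sinks API to keep the stream open and push each newly saved song into it; the songSink field and its wiring are assumptions I'm adding, not part of your original code:

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.publisher.Sinks;

@RestController
@RequestMapping("/songs")
public class SongController {

    private final SongRepository songRepository;
    // Multicast sink used to broadcast newly saved songs to all open streams.
    private final Sinks.Many<Song> songSink = Sinks.many().multicast().onBackpressureBuffer();

    public SongController(SongRepository songRepository) {
        this.songRepository = songRepository;
    }

    @GetMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Song> getAllSongs() {
        // Emit everything currently in the database, then keep the stream open
        // and append each newly saved song as it arrives on the sink.
        return songRepository.findAll().concatWith(songSink.asFlux());
    }

    @PostMapping
    public Mono<Song> addSong(@RequestBody Song song) {
        return songRepository.save(song)
                .doOnNext(songSink::tryEmitNext); // push the saved song to open streams
    }
}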

AWS: Causes of ConditionalCheckFailedException in DynamoDB?

I have about 30 instances running and submitting data to Dynamo, but in my logs I'm getting a ton of ConditionalCheckFailedException failure messages. The weird thing is, I'm not saving with any conditional check, unless I'm missing something:
private void save(DynamoObject myObject) {
    try {
        mapper.save(myObject);
    } catch (ConditionalCheckFailedException e) {
        // metrics and logging
    } catch (Exception e) {
        // metrics and logging
    }
}
What could be causing this?
It looks like you are using DynamoDBMapper and, specifically, @DynamoDBVersionAttribute somewhere, and your put-item failure is related to the mapper's optimistic locking strategy. The item version on the server is different from the one on the client side because of another write to that item, so DynamoDB rejects the put.
You'll need to reconcile the item differences client-side and re-submit.
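To illustrate, here is a hypothetical sketch of the kind of mapped class that triggers this behaviour; the table and field names are made up, not taken from your code:

import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBVersionAttribute;

@DynamoDBTable(tableName = "MyTable")
public class DynamoObject {

    private String id;
    private Long version;

    @DynamoDBHashKey
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    // DynamoDBMapper adds a conditional expression to every save() comparing this
    // attribute to the value it last read; if another writer incremented it first,
    // the put is rejected with ConditionalCheckFailedException.
    @DynamoDBVersionAttribute
    public Long getVersion() { return version; }
    public void setVersion(Long version) { this.version = version; }
}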

Establish 50+ URLConnections simultaneously within 1-2 seconds

I'm trying to scrape live data from 50+ dynamic webpages and need the data to be updated every 1-2 seconds. To do so, I have a Timer scheduled every 1/2 second that iterates through the following method 50 times (for 50 URLs):
public String fetchData(String link) {
    String data = null;
    try {
        URL url = new URL(link);
        URLConnection urlConn = url.openConnection();
        InputStreamReader inStream = new InputStreamReader(urlConn.getInputStream());
        BufferedReader buff = new BufferedReader(inStream);
        /*code that scrapes webpage, stores value in "data"*/
        buff.close();
        inStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return data;
}
This method works but takes about a second per URL, or 50 sec total. I've also tried JSoup in hopes that the delay may be overcome using the following code:
public String fetchData(String link, String identifier) {
    String data = null;
    try {
        Document doc = Jsoup.connect(link).timeout(10 * 1000).get();
        data = doc.getElementById(identifier).parent().child(0).text();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return data;
}
but have run into approximately the same processing time. Are there any faster ways to draw data from dynamic webpages simultaneously, whether through URLConnection, JSoup, or some other method?
The short answer is "use threads". Create a thread for each of the 50+ URLs that you want to scrape repeatedly.
It will most likely make little difference whether you use URLConnection, JSoup or some other way to do the scraping. The actual bottleneck is likely to be due to:
load and performance of the load on the server(s) you are scraping from
network bandwidth
network latency
The first of those is outside of your control (in a positive way!). The last two ... you might be able to address but only by throwing money at the problem. For example, you could pay for a better network connection / path, or pay for alternative hosting to move your scraper close to the sites you are trying to scrape.
Switching to multi-threaded scraping will ameliorate some of those bottlenecks, but not eliminate them.
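As a rough sketch of the threading suggestion, using an ExecutorService rather than one hand-rolled Thread per URL; the pool size and the fetchData stub are assumptions, not your code:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelScraper {

    private final ExecutorService pool = Executors.newFixedThreadPool(50);

    public void scrapeAll(List<String> links) throws InterruptedException {
        List<Callable<String>> tasks = new ArrayList<>();
        for (String link : links) {
            tasks.add(() -> fetchData(link));
        }
        // All 50+ requests run concurrently, so the total time is roughly the
        // slowest single request rather than the sum of all of them.
        for (Future<String> future : pool.invokeAll(tasks)) {
            try {
                String data = future.get();
                // store/use the scraped value
            } catch (ExecutionException e) {
                e.getCause().printStackTrace();
            }
        }
    }

    private String fetchData(String link) {
        // your existing fetch logic goes here
        return null;
    }
}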
But I don't think what you are doing is a good idea.
If you write something that repeatedly re-scrapes the same pages once every 1 or 2 seconds, the site operators are going to notice. And they are going to take steps to stop you. Steps that will be difficult to deal with. Things like:
rate limiting your requests
blocking your IPs or IP range
sending you "cease and desist" letters
And if that doesn't help, maybe more serious things.
The real solution may be to get the information in a more efficient way; e.g. via an API. This may cost you money too, because (when it boils down to it) your scraping will be costing them money for either no return ... or a negative return if your activity ends up reducing real people's clicks on their site.

Handle multiple clients with one server

I'm trying to connect to multiple clients using sockets in Java. Everything seems to work, but the problem is that the server only listens to the first client. If there are multiple clients, the server can send messages to all of them, but it only hears messages that come from the first client. I've tried everything I can think of (I've been at this problem since yesterday), so I'm pretty sure the fault is in the class "ClientListener".
Explanation:
There is a List of clients (connections used to communicate via Strings). In the GUI there is a list where I can choose which client I'd like to communicate with. If I change the client, the variable currentClient (an int) switches to another number.
networkClients is an ArrayList where all the different connections are stored.
The first connected client is exactly the same as the other clients; there is nothing special about it. It is used when the variable currentClient is set to 0 (the default). The variable switching works. Like I said, all the clients respond if I send them an order, but only networkClients.get(0) is heard by the server (ClientListener).
class ClientListener implements Runnable {
    String request;

    @Override
    public void run() {
        try {
            while (networkClients.size() < 1) {
                Thread.sleep(1000);
            }
            //***I'm pretty sure that the problem is in this line
            while ((request = networkClients.get(currentClient).getCommunicationReader().readLine()) != null) {
            //***
                myFileList.add(new MyFile(request));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I hope someone can help me. I tried many things, but nothing worked.
EDIT: Like I wrote in the code example, is it possible that the while-loop isn't able to pick up changes to "currentClient" (which is set by another Thread)? I tested something similar in a test class, and the result was that a while-loop can of course pick up such a change (meaning that if a variable used in the while-loop's condition changes, it is re-checked on every iteration).
You should take a look at multithreading.
Your server program should be made out of:
- The main thread
- A thread that handles new connections (upon accepting a new connection, start a new thread and pass the connection on to that thread)
- A thread for each connected client, listening to each client separately
Take a look at some examples like: (1) (2)
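For example, here is a minimal sketch of that layout; the class name and port are illustrative, not from your project:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class MultiClientServer {

    public static void main(String[] args) throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(9000)) {
            while (true) {
                Socket client = serverSocket.accept();          // one thread handles new connections
                new Thread(() -> handleClient(client)).start(); // one thread per connected client
            }
        }
    }

    private static void handleClient(Socket client) {
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(client.getInputStream()))) {
            String request;
            while ((request = in.readLine()) != null) {
                // each client is listened to independently, so no client blocks another
                System.out.println(client.getRemoteSocketAddress() + ": " + request);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}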
I found the solution:
The thread sits in the method I mentioned in the starting post (in the code snippet) and waits indefinitely for a new response from the client.
So changing the index into the list "networkClients" doesn't do anything, because nothing happens there until the client sends a new order (which lets the thread continue).
So you need to implement a separate listener for each client.

Developing a Java Application that uses an AppEngine database

This might be a very trivial question, but I'm having trouble finding an answer:
Using the Google Plugin for Eclipse, I would like to develop a plain old Java application (not a web-app), that uses AppEngine for cloud storage.
For this, I could, of course, simply create two projects, one containing the AppEngine server and one containing the Java application.
But I'm wondering whether it is possible to set up a single project in Eclipse that contains both the server and the client code (like for a GWT project). To execute it for local debugging, I would then want Eclipse to launch Tomcat to make my servlets available and then launch my Main.java from the client directory of the project as if the project was just a simple Java application. Is this what the "Launch and deploy from this directory" checkbox is for in the "Google" -> "Web Application" settings? If so, how do I use it?
I found one way to do it, but it's a bit cheesy.
First, add the following helper-class to the project:
// other imports
import com.google.appengine.tools.development.DevAppServerMain;
public class DevServer {

    public static void launch(final String[] args) {
        Logger logger = Logger.getLogger("");
        logger.info("Launching AppEngine server...");
        Thread server = new Thread() {
            @Override
            public void run() {
                try {
                    DevAppServerMain.main(args); // run DevAppServer
                } catch (Exception e) { e.printStackTrace(); }
            }
        };
        server.setDaemon(true); // shut down server when rest of app completes
        server.start(); // run server in separate thread
        URL serverUrl;
        try {
            serverUrl = new URL("http://localhost:8888");
        } catch (IOException e) { return; } // should never happen
        boolean running = false;
        while (!running) { // maybe add timeout in case server fails to load
            try {
                serverUrl.openConnection().connect(); // open a fresh connection each attempt
                running = true;
                // Maybe limit rate with a Thread.sleep(...) here
            } catch (Exception e) {}
        }
        logger.info("Server running.");
    }
}
Then, add the following line to the entry class:
public static void main(String[] args) {
    DevServer.launch(args); // launch AppEngine Dev Server (blocks until ready)
    // Do everything else
}
Finally, create the appropriate Run Configuration:
Simply click "Run As" -> "Web Application" to create a default Run Configuration.
In the created Run Configuration, under the "Main"-tab select your own entry class as the "Main class" instead of the default "com.google.appengine.tools.development.DevAppServerMain".
Now, if you launch this Run Configuration, it will first bring up the AppEngine server and then continue with the rest of the main(...) method in the entry class. Since the server thread is marked as a daemon thread, once the other code in main(...) completes, the application quits normally, shutting down the server as well.
Not sure if this is the most elegant solution, but it works. If someone else has a way to achieve this without the DevServer helper-class, please do post it!
Also, there might be a more elegant way to check whether the AppEngine server is running, other than pinging it with a URL connection as I did above.
Note: The AppEngine Dev Server registers its own URLStreamHandlerFactory to automatically map Http(s)URLConnections onto AppEngine's URL-fetch infrastructure. This means that you get errors complaining about missing url-fetch capabilities if you then use HttpURLConnections in your client code. Luckily, this can be fixed in two ways, as described here: Getting a reference to Java's default http(s) URLStreamHandler.
If you definitely want to use AppEngine, then you will end up creating two projects: one on AppEngine and the other a standalone application (no servlets). In this case you can take a look at the AppEngine Remote API.
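As a rough illustration of the Remote API route, based on Google's documented usage; the app ID, entity kind, and credential call are assumptions, so check the Remote API docs for your SDK version:

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.tools.remoteapi.RemoteApiInstaller;
import com.google.appengine.tools.remoteapi.RemoteApiOptions;

public class RemoteApiClient {

    public static void main(String[] args) throws Exception {
        // "your-app-id.appspot.com" is a placeholder; point it at your deployed app
        RemoteApiOptions options = new RemoteApiOptions()
                .server("your-app-id.appspot.com", 443)
                .useApplicationDefaultCredential();
        RemoteApiInstaller installer = new RemoteApiInstaller();
        installer.install(options);
        try {
            // Once installed, datastore calls from this plain Java app go to the remote app
            DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
            ds.put(new Entity("Greeting"));
        } finally {
            installer.uninstall();
        }
    }
}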
