I'm writing an RCON client for an Insurgency (Source engine game) dedicated server, using the RCON protocol defined by Valve that all Source engine games share. I can successfully send commands to the server and display the server's response to those commands. However, I have no idea how to read or request the feed displayed by the in-game console, which contains the part I'm primarily interested in: the killfeed. I have looked through the server query options for a request that would return this feed, but no such functionality is listed.
How would I go about retrieving the console feed from the server?
You cannot request the console feed from the server via RCON.
Two alternative solutions come to mind:
Save the output of the server application
Insurgency (like most Source servers, for that matter) prints the information you are looking for to stdout. The most elegant way to save this output is to start the server via systemd and read it back from the syslog via journalctl.
A simpler solution is to just write it to a file using a pipe:
./start_server.sh > output.log
Or if you still want to see the output as it is printed:
./start_server.sh | tee output.log
Use SourceMod
You can write a SourceMod plugin, or use an existing one that records and provides that information. The SuperLogs plugin comes to mind, but I haven't used it in a long time. This will be significantly more work.
I have been using the first solution for a long time now. Be aware that Insurgency buffers the output and only writes it once that buffer is full, leading to delays of 20 minutes and upwards. This can be improved by setting sv_logflush 1 in the Insurgency config.
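If you go with the log-file route, a minimal sketch of tailing that file from Java and filtering for killfeed lines might look like the following. The file name output.log and the "killed" keyword are assumptions; adjust them to your start script and the actual log format.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class LogTail {
    public static void main(String[] args) throws IOException, InterruptedException {
        try (BufferedReader reader = new BufferedReader(new FileReader("output.log"))) {
            while (true) {
                String line = reader.readLine();
                if (line == null) {
                    // no new data yet, wait a bit and try again
                    Thread.sleep(500);
                    continue;
                }
                // crude filter: adapt to the real killfeed format of your server
                if (line.contains("killed")) {
                    System.out.println(line);
                }
            }
        }
    }
}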
Related
I made a web server that runs on an ESP32 on my LAN, and I have made it possible to send information to the ESP itself via the server's URL (example: 192.168.1.39/?userInput=123). The number 123 is what I want to send from the application; it depends on the user's input, which I pack into 8 bits, so the maximum number is 255. The server has an XML file and some basic UI for viewing the information passed back and forth. I want to be able to send this so-called packet to the server and have it passed on to the ESP32 with almost no delay. I used Google Firebase before, but it has way too much delay to be usable. I also tried using a WebView and loading the URL with the number from the packet. I've run out of ideas on how to approach this and would love some advice :)
I tried searching other questions here on the site, asked friends and teachers, watched a few tutorials and asked ChatGPT for help, but nothing was helpful.
From reading your question, it seems you are getting lost by setting up the server and the client at the same time. Divide the task into chunks you can digest:
First, set up your ESP32 web server. Follow a tutorial like https://randomnerdtutorials.com/esp32-web-server-arduino-ide/ and test it using a normal web browser. It can handle GET requests easily, and for the amount of data you need to transfer that should definitely be enough. Alternatively, you can use curl to send client requests.
Next, develop your Java client to send the appropriate request (a minimal sketch follows below). You can test the behaviour against any standard web server and check its logs.
Finally, put the ESP32 URL into your client and see whether the two work together.
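Here is that client sketch, using HttpURLConnection. The IP address and the userInput parameter name come from your question; the value 123 is just an example.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class EspClient {
    public static void main(String[] args) throws IOException {
        int value = 123; // 0-255, your 8-bit "packet"
        URL url = new URL("http://192.168.1.39/?userInput=" + value);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setConnectTimeout(2000);

        // read the ESP32's response (optional, but useful for debugging)
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            conn.disconnect();
        }
    }
}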
I have a huge text file which is continuously being appended to from a common place. I need to read it line by line from my Java application and write the data into a SQL RDBMS, such that if the Java application crashes it resumes from where it left off rather than starting from the beginning.
It's a plain text file. Each row contains:
<Datatimestamp> <service name> <paymentType> <success/failure> <session ID>
The data retrieved from the database should also be available in near real time, without any performance or availability issues in the web application.
Here is my approach:
Deploy the application on two boxes, each running a heartbeat that pings the other system to check service availability.
With each successful heartbeat response, you also get the timestamp of the last successfully read line.
When the next heartbeat response fails, the application on the other system can take over, based on:
1. the failed response
2. the last successful timestamp
Also, since data retrieval needs to be near real time and the data is huge, can I crawl the database and put the data into Solr or Elasticsearch for faster retrieval, instead of making database calls?
There are various ways to do this; what is the best way?
I would put a messaging system (for example RabbitMQ) between the text file and the DB-writing applications. In this setup the messaging system functions as a queue: one application constantly reads the file and publishes the rows as messages to the broker, while on the other side multiple "DB-writing applications" read from the queue and write to the DB.
The advantage of a messaging system is its support for multiple clients reading from the queue. The messaging system takes care of synchronizing the clients, dealing with errors, dead letters, etc.; the clients don't need to care about which payloads were processed by other instances.
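A minimal sketch of the file-reading side publishing lines with the official RabbitMQ Java client. The queue name log-lines, the file name payments.log and the localhost broker are assumptions:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.io.BufferedReader;
import java.io.FileReader;
import java.nio.charset.StandardCharsets;

public class LineProducer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker location

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel();
             BufferedReader reader = new BufferedReader(new FileReader("payments.log"))) {

            // durable queue so messages survive a broker restart
            channel.queueDeclare("log-lines", true, false, false, null);

            String line;
            while ((line = reader.readLine()) != null) {
                channel.basicPublish("", "log-lines", null,
                        line.getBytes(StandardCharsets.UTF_8));
            }
        }
    }
}

The DB-writing side would consume from the same queue and acknowledge each message only after the row has been committed, which is what gives you the resume-after-crash behaviour.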
Regarding maintaining multiple instances of the "DB-writing applications": I would go for a ready-made cluster solution, perhaps a Docker cluster managed by Kubernetes.
Another viable alternative is a streaming platform like Apache Kafka.
You can use software like Filebeat to read the file and direct its output to RabbitMQ or Kafka. From there a Java program can subscribe to and consume the data and put it into an RDBMS.
I'm building an Android app in which I want to display some real time data (updated every second) which I want to stream directly from my server to the App. There will be multiple Apps connected at the same time, which should all get the same stream. I am now looking for a way to do this from both the server and the client/Android side. From the server side I can basically build anything, so I thought I'd start from the client side.
In the Android docs I found the InputStream class, which I guess is what I need for this. So my first question: is InputStream the right tool for the job?
If so, I guess I can set it up (I found some examples on the net), but from here I'm still unsure how to build the service on the server side. Do I need to build a simple page which I constantly update, or should I use a messaging library such as ZeroMQ with multicasting? Any more tips/hints/pointers on which technology to use for the server side would be very welcome as well!
This depends on your data. For example, if you need to keep your clients updated on some values, like weather data for a location, a simple polling mechanism will suffice. You would have to build a web page that shows the current values, and the clients would keep polling and parsing that page at the desired interval.
On the other hand, if you've got a stream of binary data that needs to be transferred to the client, you will need to do some socket programming. There are tons of samples like this to help you get started. Also keep in mind that to keep your socket connection to the server alive, you will have to run it in the background as a service.
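For the socket route, a minimal client-side sketch in plain Java. In a real Android app you would run this inside a Service (or at least off the main thread); the host and port here are placeholders:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

public class StreamClient {
    public static void main(String[] args) {
        // run the blocking read loop on a background thread
        new Thread(() -> {
            try (Socket socket = new Socket("example.com", 9000);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    // each line is one update pushed by the server
                    System.out.println("update: " + line);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }).start();
    }
}

On the server side you would keep a list of connected sockets and write the same line to each of them every second.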
My question is about reading a remote XML file by using Java.
My files are stored on a device that runs Windows CE. I need to access a few of these devices several times per day.
Which solution is more efficient, considering network constraints, the establishment of a TCP session and possible data loss: opening and reading the file remotely, or copying it locally to the server and processing it there?
Thank you very much.
It seems you want files on the client to be read by the server, whereas in most cases it's the other way round. In this case you should have some push functionality from the client to the server, and this can be done over HTTP.
Alternatively, you can have an HTTP listener running on the client which accepts requests from the server and sends the XML file back to it. Essentially it is like a server thread running on the client.
I'm not sure whether you are running Java on Windows CE; look for Windows CE HTTP listener solutions.
See if it helps
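If the device does end up exposing the file over HTTP, a minimal sketch of the server-side fetch and parse in Java; the device URL and file name are placeholders:

import java.io.InputStream;
import java.net.URL;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class RemoteXmlReader {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://192.168.0.10/data.xml"); // placeholder device address
        try (InputStream in = url.openStream()) {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(in);
            System.out.println("Root element: " + doc.getDocumentElement().getNodeName());
        }
    }
}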
I need a simple application, preferably a cross-platform one, that enables sending of files between two computers.
It just needs to send and receive the files and show a progress bar. What applications could I use, or how could I write one?
Sending and Receiving Files
The sending and receiving of a file basically breaks down to two simple pieces of code.
Receiving code:
ServerSocket serverSoc = new ServerSocket(LISTENING_PORT);
Socket connection = serverSoc.accept();
// code to read from connection.getInputStream();
Sending code:
File fileToSend;
InputStream fileStream = new BufferedInputStream(new FileInputStream(fileToSend));
Socket connection = new Socket(CONNECTION_ADDRESS, LISTENING_PORT);
OutputStream out = connection.getOutputStream();
// my method to move data from the file inputstream to the output stream of the socket
copyStream(fileStream, out);
The sending code runs on the computer that is sending the file, whenever it wants to send one.
The receiving code needs to be put inside a loop, so that every time someone connects to the server, the server can handle the request and then go back to waiting on serverSoc.accept().
To allow sending files between both computers, each computer will need to run the server (receiving code) to listen for incoming files, and they will both need to run the sending code when they want to send a file.
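The copyStream method referenced in the sending code is not shown above; a straightforward version might look like this (the 8 KB buffer size is arbitrary):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamUtil {
    // copies everything from in to out in 8 KB chunks
    public static void copyStream(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        int bytesRead;
        while ((bytesRead = in.read(buffer)) != -1) {
            out.write(buffer, 0, bytesRead);
        }
        out.flush();
    }
}

The same method can be used on the receiving side to copy connection.getInputStream() into a FileOutputStream.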
Progress Bar
The JProgressBar in Swing is easy enough to use. However, getting it to work properly and show current progress of the file transfer is slightly more difficult.
To get a progress bar to show up on a form, you only need to drop a JProgressBar onto a JFrame, perhaps calling setIndeterminate(true) so that it at least shows that your program is working.
To implement a progress bar correctly, you will need to create your own implementation of a SwingWorker. The Java tutorials have a good example of this in their lesson on concurrency.
This is a fairly difficult issue on its own, though. I would recommend asking about it in its own question if you need more help with it.
Woof is a cool Python script that might work for you:
http://www.home.unix-ag.org/simon/woof.html
I would strongly consider using FTP. Apache has an FTP client and a server.
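Assuming the Apache client meant here is Commons Net, a minimal upload sketch; the host, credentials and file name are placeholders:

import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class FtpUpload {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect("example.com", 21);
        ftp.login("user", "password");
        ftp.enterLocalPassiveMode();
        ftp.setFileType(FTP.BINARY_FILE_TYPE);

        try (InputStream in = new FileInputStream("report.pdf")) {
            ftp.storeFile("report.pdf", in); // upload under the same name
        } finally {
            ftp.logout();
            ftp.disconnect();
        }
    }
}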
Edit: spdenne's suggestion of HTTP is also good, especially if everyone has Java 6. If not, you can use something like Tiny Java Web Server.
You can write one by using Socket programming in Java. You would need to write a Server and a Client program. The server would use a ServerSocket to listen for connections, and the Client would use a Socket to connect to that server on the specified port.
Here's a tutorial: http://www.javaworld.com/jw-12-1996/jw-12-sockets.html
Sun's Java 6 includes a light-weight HTTP server API and implementation. You could fairly easily use this to serve your file, using URLConnection to obtain it.
Check out this tutorial; it's a really basic example. You would probably also want to send control headers before the actual file, containing the size of the file, the filename, etc.
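A minimal sketch of serving a single file with the built-in HttpServer (written with Java 8 lambda syntax for brevity, even though the answer mentions Java 6; the port and file name are placeholders, and a fuller version would add the control headers mentioned above):

import com.sun.net.httpserver.HttpServer;
import java.io.File;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.file.Files;

public class FileServer {
    public static void main(String[] args) throws Exception {
        File file = new File("shared.zip"); // placeholder file to share
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/file", exchange -> {
            // announce the exact length so the client can show a progress bar
            exchange.getResponseHeaders().add("Content-Disposition",
                    "attachment; filename=" + file.getName());
            exchange.sendResponseHeaders(200, file.length());
            try (OutputStream out = exchange.getResponseBody()) {
                Files.copy(file.toPath(), out);
            }
        });
        server.start();
        System.out.println("Serving " + file.getName() + " on http://localhost:8080/file");
    }
}

The client can then download it with a plain URLConnection and read Content-Length to drive the progress bar.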
Alternatively, base it on an existing protocol, like this project.
Can you install an FTP server on (one of) your machines?
If you can, you will just have to use an FTP client (FileZilla, for example, which has a progress bar).
Two popular apps are "scp" and "rsync". These are standard on Linux, are generally available on Unix and can be run on Windows under cygwin, although you may be able to find windows-native apps that can do it as well. (PuTTY can serve as an SCP client).
For any sort of PC-to-PC file transfer, you need to have a listener on the destination PC. This can be a daemon app (or Windows system process), or it can be a Unix-style "superserver" that's configured to load and run the actual file-copy app when someone contacts the listening port.
SCP and one of the rsync modes do require some sort of remote login capability. Rsync can also publish resources that it will handle directly. Since the concept of a Windows "remote login" isn't as well-established as it is under Linux, this may be preferable. Plus, it limits remote access to defined sources/targets on the destination machine instead of allowing access to any (authorized) part of the filesystem.
To transfer data over a network more efficiently, take a look at this article, which explains efficient data transfer through zero copy.
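In Java, zero copy is available through FileChannel.transferTo, which hands the copy to the OS instead of shuffling bytes through user space. A minimal sketch sending a file over a socket; the host, port and file name are placeholders:

import java.io.FileInputStream;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

public class ZeroCopySend {
    public static void main(String[] args) throws Exception {
        try (FileChannel fileChannel = new FileInputStream("bigfile.bin").getChannel();
             SocketChannel socketChannel = SocketChannel.open(
                     new InetSocketAddress("example.com", 9000))) {
            long position = 0;
            long size = fileChannel.size();
            // transferTo may send less than requested, so loop until done
            while (position < size) {
                position += fileChannel.transferTo(position, size - position, socketChannel);
            }
        }
    }
}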