Spring doesn't track changes to files stored in the "./resources/" folder - java

I'm new to Spring Boot, so I'm not sure how to store and manipulate files (i.e. use persistence within Spring). Use case: store a list of films (title, director, ...) in a JSON file kept on the API server, with persistence, instead of using a DB.
I have a favorites.json at src/main/resources. This file is updated when a request arrives, as I said. Code here: GitHub Repo
A kind person has pointed out in the comments what is probably the problem: changing files on the classpath won't work. I'm still struggling with how to store data as JSON without a database.
Problem I'm facing:
Files are updated correctly on a POST request via an OutputStream, but favorites.json seems to be treated as a static resource, so any update is ignored until the API starts again. I have tried restarting the API when the file is updated (see this), but it doesn't change anything; it still has to be stopped and started manually. A bash script might help, but I'd prefer another solution if possible.
Maybe I'm looking for a file-based repository, or a specific project path where Spring detects updates to this file.
I think I'm missing some important concepts of Spring's behaviour.
Here is the POST resource:
@CrossOrigin(origins = "http://localhost:3000")
@PostMapping(path = TaskLinks.FAVORITES, consumes = "application/json", produces = "application/json")
@ResponseBody
public String updateFavs(@RequestBody List<Show> newFavorites) {
    showService.updateFavorites(newFavorites);
    return "All right";
}
Methods that modify the file:
public boolean updateFavorites(List<Show> newFavorites) {
    if (newFavorites == null)
        return false;
    setNewFavorites(newFavorites);
    return true;
}

private void setNewFavorites(List<Show> newFavorites) {
    Gson gson = new Gson();
    // try-with-resources closes (and flushes) the writer even if an exception is thrown
    try (FileWriter fileW = new FileWriter(FAVORITES_PATH)) {
        String strNewFavs = gson.toJson(newFavorites);
        fileW.write(strNewFavs);
    } catch (JsonIOException | IOException e) {
        e.printStackTrace();
    }
}

If someone needs a Spring Boot file-persistence approach, I'll leave here what I've found.
The only solution I found for file persistence in a Spring Boot API is to hard-reload the whole API, which I don't think is clean.
So I ended up storing the JSON in MySQL.
Maybe Spring has specific tools that I've missed, but I don't have time to check right now.
The closest approach I got was writing to a system temporary file, which is updated correctly because it is allocated outside the application.
I didn't manage to access files outside the application other than temporary ones.
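For reference, a minimal sketch of what a file-based store outside the classpath could look like; the favorites.path property, its default location, and the class name are assumptions for illustration, not code from the original project (Show is the entity from the question):

import com.google.gson.Gson;
import com.google.gson.reflect.TypeToken;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

import java.io.IOException;
import java.io.Reader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

@Service
public class FavoritesFileStore {

    // External location, configurable via application.properties, e.g. favorites.path=/var/data/favorites.json.
    // Falls back to the user's home directory; both the property name and the default are assumptions.
    @Value("${favorites.path:${user.home}/favorites.json}")
    private String favoritesPath;

    private final Gson gson = new Gson();

    public void save(List<Show> favorites) throws IOException {
        Files.write(Paths.get(favoritesPath),
                gson.toJson(favorites).getBytes(StandardCharsets.UTF_8));
    }

    public List<Show> load() throws IOException {
        Path path = Paths.get(favoritesPath);
        if (!Files.exists(path)) {
            return new ArrayList<>();
        }
        try (Reader reader = Files.newBufferedReader(path)) {
            return gson.fromJson(reader, new TypeToken<List<Show>>() {}.getType());
        }
    }
}

Because the file lives outside the jar/classpath, each request reads the current contents instead of the copy packaged as a static resource.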
Now I'm working with NodeJS/Express and implemented a PNG delivery API. I don't really know how I would have done it with Spring, but there's probably a file-focused database or something that works fine with Spring. If I have to face this situation again, I will post the solution I find most suitable. At the moment Express works fine.

Related

Tensorflow serving get all active models metadata in Java maven

I'm looking for a way to get model metadata from all currently active models on Tensorflow Serving in Java Maven.
I have some working code for retrieving metadata from a specific model and version, so if it were possible to get a list of all model names and versions through gRPC (or the API) that would be great. Working code using tensorflow-client (com.yesup.oss):
static ManagedChannel channel = ManagedChannelBuilder.forAddress(TF_SERVICE_HOST, TF_SERVICE_PORT)
        .usePlaintext(true).build();
static PredictionServiceGrpc.PredictionServiceBlockingStub stub = PredictionServiceGrpc.newBlockingStub(channel);

public static void getMetadata(String model, Integer version) {
    System.out.println("Create request");
    GetModelMetadataRequest request = GetModelMetadataRequest.newBuilder()
            .setModelSpec(ModelSpec.newBuilder()
                    .setName(model)
                    .setSignatureName("serving_default")
                    .setVersion(Int64Value.newBuilder().setValue(version)))
            .addMetadataField("signature_def")
            .build();
    System.out.println("Collecting metadata...");
    GetModelMetadataResponse response = stub.getModelMetadata(request);
    System.out.println("Done");
    try {
        SignatureDefMap sdef = SignatureDefMap.parseFrom(
                response.getMetadataMap().get("signature_def").getValue());
        System.out.println(sdef);
    } catch (InvalidProtocolBufferException e1) {
        e1.printStackTrace();
    }
}
Own thoughts
I have thought about a couple of solutions, but none of them is preferable.
Create a server on the same device running Tensorflow Serving that can share the contents of the Tensorflow Serving config file (a rough sketch follows below). The config file contains model names and versions, but we will not know whether they are currently active.
Use Jython or Python to access other libraries (tensorflow-serving-api), which seem to contain "list-all-model-names" and "retriveConfig".
Any advice is appreciated, thanks in advance!
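A rough sketch of the first idea, exposing the Tensorflow Serving config file over HTTP so a Java client can read the configured model names and versions; the config path, port, and class name are assumptions, and this still won't tell you whether a model is actually active:

import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ModelConfigServer {
    public static void main(String[] args) throws Exception {
        // Tiny HTTP server running next to Tensorflow Serving
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/models.config", exchange -> {
            // Serve the raw model server config file as the response body
            byte[] body = Files.readAllBytes(Paths.get("/etc/tf_serving/models.config"));
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}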

How can I get the path of my Neo4j <config storeDirectory=""> in a Batch Inserter method?

I'm using Neo4j 2.2.8 and Spring Data in a web application. I'm using XML to configure my database, like:
<neo4j:config storeDirectory="S:\Neo4j\mybase" />
But I'm trying to use a Batch Inserter to add more than 1 million nodes sourced from a .txt file. After reading the file and building the list of objects, my batch code looks something like:
public void batchInserter(List<Objects> objects) {
    BatchInserter inserter = null;
    try {
        inserter = BatchInserters.inserter("S:\\Neo4j\\mybase");
        Label movimentosLabel = DynamicLabel.label("Movimentos");
        inserter.createDeferredSchemaIndex(movimentosLabel).on("documento").create();
        for (Objects objs : objects) {
            Map<String, Object> properties = new HashMap<>();
            properties.put("documento", objs.getDocumento());
            long movimento = inserter.createNode(properties, movimentosLabel);
            DynamicRelationshipType relacionamento = DynamicRelationshipType.withName("CONTA_MOVIMENTO");
            inserter.createRelationship(movimento, objs.getConta().getId(), relacionamento, null);
        }
    } finally {
        if (inserter != null) {
            inserter.shutdown();
        }
    }
}
Is it possible to get the path of the database configured in my XML inside the "inserter"? Because with the above configuration Neo4j gives me an error about multiple connections. Can I set a property to solve this error? Has anyone had this problem and has any idea how to solve it? Ideas are welcome.
Thanks to everyone!
Your question has several pieces to it:
Error About Multiple Connections
If you're using spring-data with a local database tied to a particular directory or file, be aware that you can't have two Neo4j processes opening the same DB at the same time. This means that if you've decided to use the BatchInserter against the same file/directory, this cannot happen at all while the JVM that's using the spring-data DB is running. There's no way I know of to get around that problem. One option would be to not use the batch inserter against the file, but to use the REST API to do the inserting.
get the path of my database configured in my xml
Sure, there's a way to do that; you'd have to consult the relevant documentation. I can't give you the code for it because it depends on which config file you're talking about and how it's structured, but in essence there should be a way to inject the right thing into your code here and read the property from the XML file out of that injected object.
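One common pattern, just a sketch (the neo4j.storeDir property name and the neo4j.properties file are assumptions), is to move the path into a property that both the XML config and your batch code resolve:

<context:property-placeholder location="classpath:neo4j.properties" />
<neo4j:config storeDirectory="${neo4j.storeDir}" />

and then inject the same value into the class that runs the BatchInserter:

// Resolved from the same neo4j.properties entry the XML config uses
@Value("${neo4j.storeDir}")
private String storeDir;

public void batchInserter(List<Objects> objects) {
    BatchInserter inserter = BatchInserters.inserter(storeDir);
    // ... same loop as in the question ...
}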
But that won't help you given your "Multiple connections" issue mentioned above.
Broadly, I think your solution is either:
Don't run your Spring app and your batch inserter at the same time.
Run your Spring app, but do the insertion via the REST API or another method, so there isn't a multiple-connection issue to begin with (a rough sketch follows).
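For the second option, here is a rough sketch of creating a node through the transactional Cypher endpoint of a running Neo4j 2.x server; the URL, label, and payload are assumptions based on a default local install, and Neo4j 2.2 may additionally require an Authorization header:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Send a Cypher CREATE over HTTP so the load goes through the running server
// instead of opening the store directory a second time.
public class RestInsertSketch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:7474/db/data/transaction/commit");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        String payload = "{\"statements\":[{\"statement\":"
                + "\"CREATE (m:Movimentos {documento: {doc}})\","
                + "\"parameters\":{\"doc\":\"12345\"}}]}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}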

Netflix Zuul filters pulled from Cassandra not working

I have been using Zuul in combination with other Netflix OSS components like Eureka, Hystrix, etc. At first, I was able to create my own filter and make it work from the file system. Then I wanted to use the ZuulFilterDAOCassandra API to pull filters from a Cassandra database. The problem is that I can't make it work.
I translated the Java filter I used from the file system into a Groovy filter, added it to Cassandra using the ZuulFilterDAO.addFilter() method, and activated it using ZuulFilterDAO.setFilterActive(). After that, I started the ZuulFilterPoller to start pulling the filters from the database, if it already had any. For the database table model, I inferred it by looking at the FilterInfo object and its attributes, and at the ZuulFilterDAOCassandra.addFilter() method.
Here's a more complete section of code:
ZuulFilterDAO dao = new ZuulFilterDAOCassandra(keyspace);
try {
    FilterInfo fi = dao.addFilter(code, "pre", "testFilter", "false", "1");
    dao.setFilterActive(fi.getFilterName(), 1); // 1 is the filter revision
    ZuulFilterPoller.start(dao);
} catch (Exception e) {
    // Logging
}

// Show every active filter
List<FilterInfo> allActiveFilters = dao.getAllActiveFilters();
for (FilterInfo fi : allActiveFilters) {
    LOG.info("FilterName: " + fi.getFilterName()); // outputs: FilterName: testFilter
}
In the Zuul console, I also get the following :
adding filter to diskFilterInfo{filter_id='zuul-server:testFilter:pre',
filter_name='testFilter', filter_type='pre', revision=1, creationDate=/*date*/,
isActive=true, isCanary=false, application_name=zuul-server}
filter written testFilter.groovy
I can then open testFilter.groovy at the root of my project, which contains the code I passed to the dao.addFilter() method. I know the code works, since it worked when pulling filters from the FS. And when adding a filter, the code is verified by FilterVerifier.
Yet when I send my HTTP requests to my application, Zuul doesn't filter them anymore. Am I missing something?
Try using the FilterFileManager. My understanding is that ZuulFilterPoller saves the filters from Cassandra to disk, and the FilterFileManager is what picks them up and puts them to work.
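A minimal sketch of wiring that up, based on my reading of the zuul-simple-webapp sample; the polling interval and the directory are assumptions, so point it at wherever the poller writes the .groovy files (the root of your project, in your case):

import com.netflix.zuul.FilterFileManager;
import com.netflix.zuul.FilterLoader;
import com.netflix.zuul.groovy.GroovyCompiler;
import com.netflix.zuul.groovy.GroovyFileFilter;

public class FilterSetup {
    public static void initFilters() throws Exception {
        // Compile .groovy filters and watch the directory ZuulFilterPoller writes them to
        FilterLoader.getInstance().setCompiler(new GroovyCompiler());
        FilterFileManager.setFilenameFilter(new GroovyFileFilter());
        // 5 = polling interval in seconds; "." = assumed directory where testFilter.groovy lands
        FilterFileManager.init(5, ".");
    }
}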

Any Java API in Azure to get existing ServiceBusContract?

I am using the tutorial here for pushing data to and consuming data from Azure Service Bus. When I run the example a second time, I get back the error PUT https://asbtest.servicebus.windows.net/TestQueue?api-version=2012-08 returned a response status of 409 Conflict, which is a way of saying a configuration with that name already exists, so don't create it again. Most probably this is the guilty code:
Configuration config =
    ServiceBusConfiguration.configureWithWrapAuthentication(
        "HowToSample",
        "your_service_bus_owner",
        "your_service_bus_key",
        ".servicebus.windows.net",
        "-sb.accesscontrol.windows.net/WRAPv0.9");
ServiceBusContract service = ServiceBusService.create(config);
QueueInfo queueInfo = new QueueInfo("TestQueue");
That is, calling create() again is causing the problem, I would guess. But all the methods in com.microsoft.windowsazure.services.serviceBus.ServiceBusService from http://dl.windowsazure.com/javadoc/ are create methods only, and I am unable to find a method like
ServiceBusContract service = A_class_that_finds_existing_bus_contract.find(config);
Am I thinking about this the wrong way, or is there another way out? Any pointers are appreciated.
EDIT:
I realized my code example showed the config, not the service bus contract. Updated it to reflect that.
Turns out I was wrong. The create() function in ServiceBusService does not throw any exception, as I gathered from the Javadocs. Also, you can create the service bus contract multiple times, as it is only a connection. The exception arises when you attempt to create a queue with a name that already exists, i.e. this line:
String path = "TestQueue";
QueueInfo queueInfo = new QueueInfo(path);
To overcome this, you can go this way.
import com.microsoft.windowsazure.services.serviceBus.Util;
...
...
Iterable<QueueInfo> iqnf = Util.iterateQueues(service);
boolean queue_created = false;
for (QueueInfo qi : iqnf) {
    if (path.toLowerCase().equals(qi.getPath())) {
        System.out.println("Queue already exists. Do not create one.");
        queue_created = true;
    }
}
if (!queue_created) {
    service.createQueue(queueInfo);
}
Hope this helps anybody who may be stuck on create conflicts for queues on Azure.
EDIT: Even after I got the path code, my code refused to work. Turns out there is another caveat: Azure makes all queue names lower case. I have edited the code to use toLowerCase() for this workaround.
I upvoted Soham's question and answer. I did not know about the lower-casing, though I have not verified it. It did confirm the problem I am having right now as well.
The way @Soham has addressed it is good, but not great for a large Service Bus where we may have tons of queues; iterating them all adds overhead. The only alternative is to catch the ServiceException, which is very generic, and ignore it.
Example:
QueueInfo queueInfo = new QueueInfo(queName);
try {
    CreateQueueResult qr = service.createQueue(queueInfo);
} catch (ServiceException e) {
    // Silently ignore for now.
}
The right way would be for the Azure library to extend ServiceException and throw, e.g., a "ConflictException", which is reflected in the httpStatusCode of ServiceException, but unfortunately that is set to private.
Since it is not exposed, we would have to extend ServiceException and override the httpStatusCode setter.
Again, not the best way, but the library can improve if we leave this as feedback on their GitHub issues.
Note: ServiceBus is still in the preview phase.

Which path should I use when calling a java properties file?

Basically I have the model of my project set up this way:
ModelFolder
-----src
-----bin, etc.
-----PropertiesFolder1
--------File1.properties
--------File2.properties, etc.
-----PropertiesFolder2
--------File1.properties
--------File2.properties, etc.
-----MainPropertiesFile1.properties
-----MainPropertiesFile2.properties
I am trying to use it with my View, which is a Dynamic Web Project, and I finally got the properties files to load in my Web Project after changing
foo.load(new FileInputStream("foo.properties"));
to
foo.load(Thread.currentThread().getContextClassLoader().getResourceAsStream("foo.properties"));
and exporting the project to a JAR file, which I then included in WEB-INF/lib. However, I had to add another method to the Model, and when I tried testing that method, the Model was not able to read my properties file. I know that I can use FileInputStream with the full path to get the properties file working in the Model and the View, but are there any alternatives?
I don't want to keep changing the full path every time I switch computers (I use H:\username\...\Java\Workspace at work, whereas at home it's just C:\Java\Workspace).
I also don't want to have to move my properties files to different folders; and finally I don't want to change the way I load the properties file every time I test my Model or my View.
Is there a way to accomplish this?
This is driving me crazy; I've tried all of the following:
try
{
    foo.load(this.getClass().getResourceAsStream("foo.properties"));
    //foo.load(Thread.currentThread().getContextClassLoader().getResourceAsStream("foo.properties"));
    //foo.getClass().getResourceAsStream("foo.properties");
    //foo.load(new FileInputStream("foo.properties"));
} catch (IOException ex)
{
    al.logIntoProgrammerLog(ex);
}
All of those lines work either in the Model or in the View, but not in both. Is there any way I can load those properties files via a relative path in the Model, and then somehow properly connect the Model with the View so that all the files are found and loaded?
Any help would be greatly appreciated; I am new to Java, so I might be missing something really simple. Thank you.
EDIT:
Sorry for not clarifying this: the Model is a Java Project, whereas the View is a Dynamic Web Project running on a local Tomcat 6.0 server.
Better (I hope) explanation:
My View has a LoginServlet with the following doPost method:
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException
{
    String username = request.getParameter("usernameField");
    String password = request.getParameter("passwordField");
    ActivityLogger al = new ActivityLogger();
    LoginController l_c = new LoginController();
    // loginUser will call my UserStorage class and
    // will return true if UserStorage finds the User with those credentials in the db.
    // UserStorage is using a prepared sql statement stored in a properties file to find the User.
    // That properties file is not being read unless I specify the full path to it.
    // Both Login and UserStorage are in the Model.
    if (l_c.loginUser(username, password))
    {
        // take user to welcome page
    }
    else
    {
        // display error
    }
}
Thanks again
Take a look at this thread for possible solutions for sharing a .properties file. But I would suggest that you review this approach to see if it's really what you want. More specifically, if your .properties file contains SQL queries, then are you sure you need it in your View? Sounds like you may want to make a clean separation there.
If you are in a web application, you should be able to get the resource (via a path relative to the webapp context, so it will work on any machine) by using the ServletContext's getResourceAsStream(). For example, if you wanted to load it from your servlet's doGet() method:
public void doGet(HttpServletRequest request, HttpServletResponse response) {
    InputStream in = getServletContext().getResourceAsStream(<path>);
}
where <path> would be the relative path to your .properties file (i.e. /WEB-INF/foo.properties, or wherever that properties file is located... look in your deployment folder to find out for sure)
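To round this off, a small sketch of loading the Properties from that stream inside a servlet method, assuming the usual java.util/java.io imports; the /WEB-INF/foo.properties location is just an assumption, so use wherever the file actually sits in your deployment folder:

Properties foo = new Properties();
// /WEB-INF/ is assumed; adjust to the real location inside the deployed webapp
try (InputStream in = getServletContext().getResourceAsStream("/WEB-INF/foo.properties")) {
    if (in == null) {
        throw new FileNotFoundException("foo.properties not found in the webapp");
    }
    foo.load(in);
}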
But re-reading your post, it seems like perhaps your "Model" is not a webapp and your "View" is a webapp? Maybe you could clarify whether these are two different applications, possibly one being a webapp running within a servlet container (i.e. Tomcat, Glassfish, etc.) and one being a standalone app? If that's the case, then this is more of a shared-file issue than a 'cannot find resource' issue. A bit of clarification about the nature of your application(s) would help steer the correct response...
