Extracting ConfigMaps from services in a Kubernetes cluster using a Spring microservice (Java)

We are trying to get live configuration data from our Kubernetes cluster, so we would like to read the ConfigMaps of each of our services.
Is there a way to extract this data with a Spring microservice that runs alongside the rest of the services?
Or are there other (better?) ways / tools to get this information?

Using the Kubernetes APIs you can get the ConfigMaps you need. I am not familiar with the Java client myself, but here it is:
https://github.com/kubernetes-client/java
You can retrieve a list of ConfigMaps and their contents using these APIs. If you're using RBAC, your application will need a cluster role and a cluster role binding that allow it to read ConfigMap resources.
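As a rough sketch with the official Java client linked above (method signatures vary between client versions; the parameter list below roughly matches the 11.x API, and newer releases use a builder style instead):

    import io.kubernetes.client.openapi.ApiClient;
    import io.kubernetes.client.openapi.Configuration;
    import io.kubernetes.client.openapi.apis.CoreV1Api;
    import io.kubernetes.client.openapi.models.V1ConfigMap;
    import io.kubernetes.client.openapi.models.V1ConfigMapList;
    import io.kubernetes.client.util.Config;

    public class ConfigMapReader {
        public static void main(String[] args) throws Exception {
            // Picks up the in-cluster service account when running inside
            // the cluster, or your local kubeconfig when running outside.
            ApiClient client = Config.defaultClient();
            Configuration.setDefaultApiClient(client);
            CoreV1Api api = new CoreV1Api();

            // List ConfigMaps in one namespace; the nulls are optional
            // filter parameters (label selector, limit, watch, and so on).
            V1ConfigMapList list = api.listNamespacedConfigMap(
                    "default", null, null, null, null, null, null, null, null, null, null);
            for (V1ConfigMap cm : list.getItems()) {
                System.out.println(cm.getMetadata().getName() + " -> " + cm.getData());
            }
        }
    }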

To extract the information you can query the Kubernetes API, in your case most likely with the Java Kubernetes client. The biggest issue you will face is probably ensuring you have read access to the namespace(s) the ConfigMaps are in.
The bigger question, about a 'better way', is why you want to read all of the ConfigMaps for your applications in the first place. The goal you are trying to accomplish should guide the solution.

Related

Containerized Drools rules engine?

We have this Java codebase that has a good amount of business rules written in Drools. I have been tasked with designing and recommending an alternative cloud-based rules engine, so that other services and applications within the company can utilize it too. Here is the high-level plan:
Perform a "Lift and shift" by decoupling the rule execution from the java code base
Create a containerized rules service that takes in an input via HTTP or a message queue and returns output, or perform some actions (Send notifications, queue something, etc)
Host it on Azure or GCP
I'm trying to create a baby POC. I need some help with some of the implementation details. For example, would creating a .NET REST endpoint and then passing the data to the Drools Java container be a feasible idea? Or would it be simpler to just create a simple Java REST endpoint that uses Drools behind the scenes?
Any tips or examples of this would be highly appreciated, as I don't want to re-invent the wheel!
Drools has a native built-in REST web service that can be deployed in Java containers (JBoss, Tomcat).
This framework is the KIE Server, and it can be activated to host your built Drools processes/rules.
https://docs.jboss.org/drools/release/7.69.0.Final/drools-docs/html_single/#_ch.kie.server
There are some Docker images that contain a default KIE Server you can deploy your rules to.
E.g.: https://hub.docker.com/r/jboss/kie-server/
Hope this helps,
Best,
Emmanuel
Or would it be simpler to just create simple Java REST endpoint that uses Drools behind the scenes?
You might want to consider using Kogito for your DRL rules, instead of having to deploy a containerised Kie Server.
Then, to have a Docker image generated easily, it's enough to add the Quarkus JIB extension to your Kogito-on-Quarkus app.
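For the 'simple Java REST endpoint that uses Drools behind the scenes' option, a minimal sketch with the KIE API and Spring could look like this (the Order fact, session name, and endpoint path are made up for illustration; the session name must match one declared in your kmodule.xml):

    import org.kie.api.KieServices;
    import org.kie.api.runtime.KieContainer;
    import org.kie.api.runtime.KieSession;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RestController;

    // Hypothetical fact class the rules operate on.
    class Order {
        public double total;
        public double discount;
    }

    @RestController
    public class RulesController {
        // Loads the KIE base defined in src/main/resources/META-INF/kmodule.xml.
        private final KieContainer kieContainer =
                KieServices.Factory.get().getKieClasspathContainer();

        @PostMapping("/evaluate")
        public Order evaluate(@RequestBody Order order) {
            KieSession session = kieContainer.newKieSession("rulesSession");
            try {
                session.insert(order);
                session.fireAllRules(); // rules mutate the inserted fact
                return order;
            } finally {
                session.dispose();
            }
        }
    }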

Migrating to the cloud and a multi-tenant DB

We have a web-based Java application which we are planning to migrate to the cloud, with the intention that multiple clients will use it in a SaaS environment. The current architecture of the application is quite asynchronous in nature. There are 4 different modules, each with a database of its own. When data needs to be exchanged between modules, we push it using Pentaho and use a directory structure to store the interim data file, which is then picked up by the other module to populate its database. Given the nature of our application, this asynchronous communication is very important for us.
Now we are facing a couple of challenges while migrating this application to cloud:
We are planning to use multi-tenancy on our database server, but how do we ensure that the flat files we use for transferring the data between different modules are also routed to their respective tenants in the DB?
Since we are planning to host this in the cloud, we would like your views on whether keeping a text file on a cloud server is safe from a data security perspective.
File storage in the cloud is safe, and you can use IAM roles to control the permissions on a file. Cloud providers like Google (Cloud Storage), Amazon (AWS S3), etc. provide a secure and scalable infrastructure for maintaining files in the cloud.
In a typical setup, cloud storage provides you with buckets, each tagged with a globally unique identifier. For a multi-tenant setup you can create a bucket per tenant and store the necessary data feeds in it. Then you can run batch or streaming jobs using Kettle (Pentaho) to push the data to the right database based on the bucket it came from.
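As a sketch of the bucket-per-tenant idea with the AWS SDK for Java (v1; the bucket naming scheme is hypothetical):

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.S3ObjectSummary;

    public class TenantFeedReader {
        public static void main(String[] args) {
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
            // One bucket per tenant, e.g. "feeds-acme"; the bucket name alone
            // tells the ingest job which tenant database to load into.
            String bucket = "feeds-" + "acme";
            for (S3ObjectSummary obj : s3.listObjectsV2(bucket).getObjectSummaries()) {
                System.out.println(bucket + "/" + obj.getKey());
            }
        }
    }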
Alternatively, as other answers suggest, you can push the data to a messaging setup (ActiveMQ, Kafka, etc.) with tenant-specific topics and have a streaming service (in Java or Pentaho) ingest the data into the respective database based on the topic.
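And a sketch of the topic-per-tenant variant with a recent Kafka Java client (2.x+; the topic naming scheme and routing step are placeholders):

    import java.time.Duration;
    import java.util.Properties;
    import java.util.regex.Pattern;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class TenantTopicRouter {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "tenant-ingest");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // One topic per tenant, e.g. "feed.acme", "feed.globex".
                consumer.subscribe(Pattern.compile("feed\\..*"));
                while (true) { // poll forever in this sketch
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        // The topic name identifies the tenant, and hence the target database.
                        String tenant = record.topic().substring("feed.".length());
                        System.out.println(tenant + " <- " + record.value());
                    }
                }
            }
        }
    }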
Hope this helps :)
I cannot realistically give any specific advice without knowing more about your system. However, based on my experience, I would recommend switching to message queues; something like Kafka would work nicely.
Yes, cloud providers offer enough security for static file storage. You can limit access however you see fit, for example using AWS S3.
1. Multi-tenancy may create some issues while transferring the files, but from the information you have given, the flat-file movement across applications should not be impacted. You could still consider moving to an MQ-based mode for passing the data across.
2. From a data security view, AWS provides a lot of features at the access level, MFA, etc. If it needs to be highly secure, I would recommend an AWS private cloud, where nothing is shared with anyone at any level.

Using MetricsServlet to fetch metrics in Cassandra

I want to fetch various metrics, like read/write latency and disk utilisation, from each of my Cassandra nodes (without using JMX) as a JSON object. It seems to me that MetricsServlet can do exactly that. However, I'm still not able to figure out what I need to do in order to use it (metrics-servlets does not come with Cassandra). I'd appreciate some advice/sample code (for fetching any metric).
Cassandra is not a Java web server; it doesn't support servlets. You would need to start a Java web server in the same JVM as Cassandra and load those servlets. While that is possible, it's probably a lot less work to just query the metrics via JMX and convert them to JSON with an external application, or to expose JMX over HTTP with something like MX4J (which is what I would recommend).
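As a sketch of the external-application route (the MBean name below is the standard Cassandra read-latency metric; the attribute names come from the Codahale/Dropwizard JMX timer and may differ by Cassandra version; 7199 is Cassandra's default JMX port):

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class CassandraJmxToJson {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                ObjectName readLatency = new ObjectName(
                        "org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency");
                Object count = mbs.getAttribute(readLatency, "Count");
                Object mean = mbs.getAttribute(readLatency, "Mean");
                // Emit the values as JSON however you like; printf keeps it simple here.
                System.out.printf("{\"readLatency\":{\"count\":%s,\"mean\":%s}}%n", count, mean);
            }
        }
    }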

Monitoring Spring Boot applications: gather service/node availability data for offline reporting

I would like to gather and store data on the availability of a service or node. The day after, I could summarize the figures, like { day-1: service = 98.5%; day-2: service = 99% }.
I could get the data by calling a simple REST (ping) service (e.g. via Actuator or the like). Then I would need to write a custom scheduled application calling the Actuator/ping services.
Is there a simple solution for collecting/storing the availability data? Via Spring Batch?
UPDATE 31-05: I read about Spring Boot Admin. Is this the right solution? See also this introduction.
The data could be extracted and formatted in a CSV, JasperReporting, etc.
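As a sketch of the 'custom scheduled application' idea described above (the target URL, rate, and persistence step are placeholders; assumes @EnableScheduling is set on the application):

    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;
    import org.springframework.web.client.RestClientException;
    import org.springframework.web.client.RestTemplate;

    @Component
    public class AvailabilityProbe {
        private final RestTemplate rest = new RestTemplate();

        @Scheduled(fixedRate = 60_000) // ping once a minute
        public void probe() {
            boolean up;
            try {
                up = rest.getForEntity("http://my-service/actuator/health", String.class)
                         .getStatusCode().is2xxSuccessful();
            } catch (RestClientException e) {
                up = false;
            }
            // Persist the sample (timestamp, up/down) here so the daily
            // percentages can be aggregated later.
            System.out.println(System.currentTimeMillis() + " up=" + up);
        }
    }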
I hope I can help. I think what you need is a way of monitoring your applications persistently. You can build your own solution by creating a ping resource and scheduling a client to collect availability information from time to time. But, to not re-invent the wheel, I really suggest you use a professional tool.
I recommend that you use a dashboard tool like Grafana to create these reports, and I suggest you try Prometheus to capture the monitoring information.
I have listed some links below.
Actuator and Prometheus
monitoring-spring-boot-applications-with-prometheus
Prometheus dashboard in Grafana
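To illustrate the Prometheus side: with Spring Boot 2.x and the micrometer-registry-prometheus dependency on the classpath, the application exposes /actuator/prometheus for scraping, and you can record a custom availability metric like this (metric and tag names are made up):

    import io.micrometer.core.instrument.MeterRegistry;
    import org.springframework.stereotype.Component;

    @Component
    public class UptimeMetrics {
        private final MeterRegistry registry;

        public UptimeMetrics(MeterRegistry registry) {
            this.registry = registry;
        }

        // Call this from your probe; Prometheus scrapes the counters, and
        // Grafana can turn them into a daily availability percentage.
        public void record(String service, boolean up) {
            registry.counter("availability.checks",
                    "service", service,
                    "result", up ? "up" : "down").increment();
        }
    }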

(Architecture) Grabbing data for an Angular2 app: query MongoDB directly, or go through my Java REST API?

I have a quick architecture question as this is one of my first web applications.
On the frontend I have an Angular2/NodeJS app; on the backend I have a Java server aggregating some data for me in MongoDB.
My question is simple. Should I create REST controllers in my Java server to get data from the database? Or should I call the database directly from the Angular app?
I am leaning towards the Java REST idea. I just feel it is more secure and easier to do, and when I scale I can have processing done in Java when a REST call is made.
But I am worried this may slow things down too much. I could call the database directly and get the info to put on my Angular site. Does anyone know if speed is a real concern here?
Keep in mind the data returned from the calls could be thousands of lines of JSON and hundreds of objects.
I think you can benefit from checking out this link:
https://www.mongodb.com/blog/post/building-your-first-application-mongodb-creating-rest-api-using-mean-stack-part-1
or
https://www.mongodb.com/blog/post/the-modern-application-stack-part-1-introducing-the-mean-stack?jmp=blog
As a side note (maybe it's just me), I prefer Elastic to MongoDB, as it comes with a Java-based REST API out of the box and handles all the complexities of scalability and load balancing among nodes in the cluster.
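For the Java REST route the asker is leaning towards, a minimal sketch with Spring Data MongoDB (the Item type and paths are hypothetical; PageRequest.of requires Spring Data 2.x). Paging is the usual answer to the 'thousands of lines of JSON' worry, since it keeps each response small:

    import java.util.List;
    import org.springframework.data.annotation.Id;
    import org.springframework.data.domain.PageRequest;
    import org.springframework.data.mongodb.repository.MongoRepository;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.RestController;

    // Hypothetical document type.
    class Item {
        @Id public String id;
        public String name;
    }

    interface ItemRepository extends MongoRepository<Item, String> {}

    @RestController
    class ItemController {
        private final ItemRepository repo;

        ItemController(ItemRepository repo) {
            this.repo = repo;
        }

        // Page through the collection 100 documents at a time instead of
        // shipping everything to the browser in one call.
        @GetMapping("/api/items")
        List<Item> items(@RequestParam(defaultValue = "0") int page) {
            return repo.findAll(PageRequest.of(page, 100)).getContent();
        }
    }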
