Background
I’ve been using the Serverless Framework successfully since version 0.5. The project was written in Python using Lambda and API Gateway, and we kept all our APIs in the same git repo, separated into folders that mirror the structure of our services. In the end this is a nanoservice architecture, and it was integrated with Cognito, custom authorizers, stages, the entire deal. An example of the structure:
functions/V1/
  users/
    post/
      handler.py
      s-function.json
    delete/
    get/
  groups/
    get/
s-project.json
s-resources-cf.json
Problem
Now I'm trying to do the same in Java, and since Java is not supported in 0.5, I moved to v1. The first issue I found was how to use the same API Gateway for multiple resources with a nanoservice architecture. Assuming that will be fixed soon, I also want to bring CodePipeline and CodeBuild into the process. Looking at all the Serverless examples on the internet, everyone builds a single Java package with several handlers (for POST, GET, ... requests), one serverless.yml with the configuration, one buildspec.yml, and one git repo for all of it. That works great, but if I'm going to use a combination of micro- and nanoservices, am I supposed to have N git repos just so I can isolate deploys with CodePipeline? For me that means exponential overhead in repositories, CodePipeline builds, and so on. On the other hand, if I want to edit a single function, push, and have CodePipeline build, deploy, and test only that one Java handler rather than the entire infrastructure, how can I achieve that?
In the real world, does everyone have one git repo per micro/nanoservice? (We could easily have 100+ resources in a single API Gateway project.) Are all CI deployments isolated that way? And how do you group an entire API to keep things organized in local development, recreating the same arrangement of resources with folders, or is this approach wrong?
Hopefully someone else has solved this problem before and can give me some guidance.
Related
I'm not sure how to phrase this so I apologize if the title of the question does not make sense to you.
Due to various historical reasons, I have multiple teams contributing to the same code repo, which serves multiple service endpoints. Currently all teams' deployments and releases are done together, which creates a lot of churn.
I'm trying to get to this state: team A and team B can still share the same code base, but they can deploy separately using different Kubernetes namespaces. Something like:
Team A's code is all under com/mycompany/team_a, team B's under com/mycompany/team_b
Somewhere in the repo there is a config that does the mapping:
com/mycompany/team_a/* => config_team_a.yaml, that has a Kubernetes config, maybe with namespace TeamA/ServiceA
com/mycompany/team_b/* => config_team_b.yaml with namespace TeamB/ServiceB
So that they can build their image separately and, of course, deploy separately.
Correct me if I'm wrong, but from the description of your problem it looks like you actually have two problems:
The fact that code for separate services (team A and team B) lives in the same repo;
The fact that you have several environments (development/production, for example)
The second issue can be easily solved if you use Helm, for example. It allows you to template your builds and pass different configs to them.
The first one can also be partly solved by Helm, since you can separate your teams' builds using templating as well.
However, a few years ago I was working on a .NET monorepo and faced yet another problem: every time a PR was merged to our git repo, a build was triggered in Jenkins for every service we had, even those that had no changes. From the description of your problem it is not clear to me whether you have a Jenkins pipeline configured and/or whether you are facing something similar, but if you are, you can have a look at what I did to work around the issue: repo. Feel free to have a look, and I hope that helps.
I just started a new project where I'm going to use Java, Spring cloud functions and AWS Lambda.
It's my first time building a serverless application and I've been looking at different example projects and tutorials on how to get started.
However, the projects I've found have been so small that it's hard to understand how to map it to a real project.
As I understand it you build a jar file and upload it to AWS Lambda where you specify which function to run.
However, as the project grows, won't more and more functions that are never even going to run (unreachable code) make the jar bigger and bigger and cause each Lambda startup to be slower and slower?
I could create separate modules for each Lambda function with its own Application class in order to build separate jars, but it doesn't feel like the intended architecture.
Also, I would like to be able to run all of the functions locally using tomcat in a single application.
I guess I could build a separate module specifically designed to run locally, but again it doesn't feel like the intended architecture.
Any suggestions or references to best practices would be greatly appreciated.
TL;DR:
One JAR per function, not all functions in one JAR.
Use Maven modules. One module per Lambda function.
Don't run the Lambda locally, use unit tests with mocks.
Deploy to AWS to test if the Lambda works as intended.
Reading the question, I get the feeling that there are a few misconceptions about how AWS Lambda works that need to be addressed first.
However, as the project grows, more and more functions that aren't even going to run (unreachable code) will make the jar bigger and bigger [...]
You do not deploy a single JAR that contains all your Lambda functions. Every function is deployed as a single JAR. So if you have 20 Lambda functions, you deploy 20 JAR files.
The size of the JAR file is determined by the individual dependencies of the function. A function might use a specific dependency and another might not. So JAR size will differ depending on your dependencies.
One way to improve this is to split your code from the dependencies by putting the dependencies into Lambda layers. This way, you only deploy a small JAR with your own code. The dependency JAR only needs to be redeployed when the dependencies have been updated. Unfortunately, this makes deployments more complex, but it is doable.
I could create separate modules for each Lambda function with its own Application class in order to build separate jars, but it doesn't feel like the intended architecture.
That's what I'd recommend, and it is more or less the only way. AWS Lambda has a 1-to-1 relationship between the JAR and the function: one JAR per Lambda function. If you need a second Lambda function, you need to create it and deploy another JAR.
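As a rough illustration (all names are made up, and this assumes Spring Cloud Function with its AWS adapter, where the deployed handler is the adapter's generic FunctionInvoker), one such module might contain little more than:

    package com.example.orders;

    import java.util.function.Function;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.context.annotation.Bean;

    // One Maven module, one deployable JAR, one Lambda function.
    @SpringBootApplication
    public class OrderCreatedApplication {

        public static void main(String[] args) {
            SpringApplication.run(OrderCreatedApplication.class, args);
        }

        // The single function bean this module exposes; on AWS it is invoked
        // through the adapter's generic handler.
        @Bean
        public Function<OrderEvent, String> handleOrderCreated() {
            return event -> "processed order " + event.orderId();
        }

        public record OrderEvent(String orderId) {}
    }

Each additional Lambda function gets its own module shaped like this, producing its own JAR.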
Also, I would like to be able to run all of the functions locally using tomcat in a single application. I guess I could build a separate module specifically designed to run locally, but again it doesn't feel like the intended architecture.
There are tools to run Lambdas locally, like the Serverless Framework. But running all the Lambdas in a single Tomcat is probably going to be hard work.
In general, running Lambdas locally is something I'd not recommend. Write unit tests to run the code locally and deploy to AWS to test the Lambda. There is not really any better way I can think of to do testing efficiently.
Most Lambdas communicate with other services, like DynamoDB, S3 or RDS. So how would you run those locally? There are options, but it just makes everything more and more complicated. And what about services that you can't easily emulate locally (EventBridge, IAM, etc.)? That's why in my experience, running serverless applications locally is unrealistic and will not give you confidence that they'll work once deployed. So why not deploy during development and test the "real" thing?
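To make the unit-testing suggestion above a bit more concrete, here is a minimal sketch (assuming JUnit 5 and Mockito on the test classpath; all names are illustrative) that tests the function logic directly and mocks the AWS-facing dependency instead of running any Lambda runtime locally:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import java.util.function.Function;

    import org.junit.jupiter.api.Test;

    class HandleOrderCreatedTest {

        // Hypothetical abstraction over the DynamoDB/S3 access the function needs.
        interface OrderStore {
            String statusOf(String orderId);
        }

        @Test
        void enrichesTheEventWithTheStoredStatus() {
            OrderStore store = mock(OrderStore.class);
            when(store.statusOf("42")).thenReturn("NEW");

            // The function under test, wired the same way the Lambda module wires it.
            Function<String, String> handleOrderCreated =
                    orderId -> "order " + orderId + " is " + store.statusOf(orderId);

            assertEquals("order 42 is NEW", handleOrderCreated.apply("42"));
        }
    }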
From my experience, I would recommend using multiple Maven modules, one function per Maven module, with shared modules for common logic. This approach requires you to implement a somewhat smart deployment pipeline that can tell which functions must be redeployed when you change a common library shared between many functions. If you don't have shared modules, a hash of /src might be enough; otherwise you need to add some metadata describing the relations between the Maven modules. I haven't investigated it, but it might be possible to get the module relations out of Maven and feed them into your CI/CD, so the build tool helps sort out continuous deployment.
It's also possible to keep all functions in the same JAR and deploy that one JAR multiple times with different entry points. The downside is tight coupling between all functions: changes for one function might have side effects on the others. Coupling all functions in one JAR can also make each function slower, as it creates a single Spring context containing all the different beans, and the Spring Boot autoconfiguration approach doesn't help when one function needs a DB connection configured and another needs messaging configured. Of course, you can mitigate some of these downsides, but I think the idea of functions is similar to microservices: a small, well-encapsulated unit of deployment.
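For reference, the "one JAR, several entry points" setup described above might look roughly like the sketch below (names are illustrative): every function bean lives in a single Spring context, and each deployed Lambda selects its entry point, for example through the spring.cloud.function.definition property.

    import java.util.function.Function;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.context.annotation.Bean;

    // One JAR containing every function; each Lambda deployment of this JAR
    // points at a different bean (e.g. spring.cloud.function.definition=createUser).
    @SpringBootApplication
    public class AllFunctionsApplication {

        public static void main(String[] args) {
            SpringApplication.run(AllFunctionsApplication.class, args);
        }

        @Bean
        public Function<String, String> createUser() {
            return name -> "created user " + name;
        }

        @Bean
        public Function<String, String> deleteUser() {
            return name -> "deleted user " + name;
        }
    }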
Finally, you could create a repository per function. It's the most flexible solution, but it also brings some caveats. I could easily imagine every function ending up on a different version of Spring Boot, some functions written in Java and some in Kotlin, every function tested and run in a slightly different way, and so on. All of this would make maintenance very hard for you in the long run. I believe keeping all functions in one repo, with a common set of libraries and configuration, will benefit you in terms of maintenance cost.
Thanks to the Spring Cloud Function abstraction, you can use a standalone web application by importing the required starter: https://docs.spring.io/spring-cloud-function/docs/current/reference/html/spring-cloud-function.html#_standalone_web_applications. This allows you to trigger your function as an HTTP endpoint. Additionally, Spring Cloud Function provides a Maven plugin with a function:run goal that lets you run the function locally (GCP only): https://docs.spring.io/spring-cloud-function/docs/current/reference/html/spring-cloud-function.html#_getting_started_3
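As a small sketch of that standalone-web option (assuming spring-cloud-starter-function-web is on the classpath instead of the AWS adapter; the bean name is illustrative), the same kind of function bean is then served locally as an HTTP endpoint, by convention at a path matching the bean name (here POST /uppercase):

    import java.util.function.Function;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.context.annotation.Bean;

    @SpringBootApplication
    public class LocalWebApplication {

        public static void main(String[] args) {
            // Starts an ordinary web app; the function below answers POST /uppercase.
            SpringApplication.run(LocalWebApplication.class, args);
        }

        @Bean
        public Function<String, String> uppercase() {
            return String::toUpperCase;
        }
    }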
Currently I'm working on a Java project which needs to have Google Cloud integration.
I need to get all folders and all projects from a service account using Cloud Resource Management API.
The problem is that folders are new and only available in version 2 of the API, while projects are in version 1. I cannot include both jar files, because there will be a conflict and only one of them will be used.
Has anyone had a similar issue and solved the problem?
Thanks.
In simple terms, you can make two programs, one for each API version, and make them talk to each other (have one launch the other).
Have the version 2 program grab all the folder info you need, and then pass the relevant parts to your program that uses the version 1 API.
It's not great, but it works.
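Sketched out (the jar name, class and one-id-per-line output format are all made up; it only illustrates the idea), the version 1 program could start the version 2 helper as a separate process and read the folder list from its standard output:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.util.ArrayList;
    import java.util.List;

    public class FolderFetcher {

        public static List<String> fetchFoldersViaV2Helper() throws IOException, InterruptedException {
            // Hypothetical helper jar built against the v2 client library.
            Process helper = new ProcessBuilder("java", "-jar", "folders-v2-helper.jar").start();

            List<String> folders = new ArrayList<>();
            try (BufferedReader reader =
                         new BufferedReader(new InputStreamReader(helper.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    folders.add(line); // the helper prints one folder id per line
                }
            }
            helper.waitFor();
            return folders;
        }
    }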
Better yet, you could make a converter to update each project as it is opened, so that it will only use the version 2 API going forward.
I have to write a (Java) web app which fetches data from an AWS RDS PostgreSQL instance and renders the data using Vaadin Charts. So my two constraints are: Java-based, and using Vaadin to do so.
Thing is, I have never developed any form of web app and am completely lost. I've read stuff about Maven, Spring, Gradle, containers, and it's safe to say I have absolutely no clue where to start...
Could anyone point me to some complete tutorials on how to develop web apps from the ground up? Every time I google something I read something different and am completely overwhelmed by information...
If you want to start with something working ASAP, you can clone existing repos with Vaadin examples. You will get existing code that builds, manages dependencies, starts a web server, etc.:
https://github.com/vaadin/dashboard-demo
https://github.com/vaadin/book-examples
https://github.com/vaadin/spreadsheet-demo
Everything else is probably opinion-based, like "should I use Maven or not?", etc.
I have the following scenario:
I have a view in an Oracle server, and all I want is to show that view in a web browser, along with an input field or two for basic filtering. No users, no authentication, just this view, maybe with a column or two linking to a second page for master-detail viewing. The children are just string descriptions of the columns of the master that contain IDs. No inserts or updates.
The question is: which is the Java-based web framework of choice that can accomplish the above with the minimum amount of
code lines
code time (subjective, but also kind of objective if someone has experience with more than one or two frameworks)
configuration effort
deployment effort and requirements.
dependencies and mem footprint
Also:
6. Oracle APEX is not an option.
3, 4 and 5 are maybe the same, in the sense that they are everything except the functionality coding.
I want something that I can compile, deploy by just FTPing it to the database host, run, and forget. (E.g. for the deployment aspect, the Hudson way comes to mind: java -jar hudson.war and that's all.)
Also: 3 and 4 have priority over 1 and 2. (Explanation, with a rant: I don't mind coding a lot as long as it is application code and not "why do we still use JavaScript over HTTP for everything" code.)
Thanks.
EDIT 1: ROO attempt.
First I tried Spring Roo, but here is what happened, and it is exactly the kind of thing I want to avoid:
Downloaded Roo (setup env vars, path, etc)
Saw that it requires Maven (1st smell)
Installed maven
Set up my project in the Roo shell
Tried to run it, and it could not build because Maven could not locate artifacts.
Searched the web and eventually found that I needed to tweak the generated pom because of a problem between the SpringSource repositories and Maven Central when Oracle is used, caused by a minor bug in Roo that includes the Maven repo and not the Spring one... etc., etc.
Abandoned Roo because:
I wanted a simple one-page presentation of a table view in a locally installed database, and after 30 minutes I had made no progress except for searching Maven forums for why Maven can't find something called an "artifact" in a list of something called a "repository".
Take a look at Spring MVC and Spring Roo. The latter will generate your Spring application in a matter of minutes, with the database access in place, and then you can add your filtering.
The Hudson-like deployment should be easy if you're happy with the features an embedded servlet container like Jetty or Winstone can provide. Just add a main class that fires up the server and sets a few config variables. That should be possible with any Java web framework.
Here's how hudson did it:
http://weblogs.java.net/blog/2007/02/11/hudson-became-self-executable
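For illustration, such a main class might look roughly like this (assuming embedded Jetty, i.e. jetty-server and jetty-servlet, on the classpath; the servlet is a placeholder for whatever actually renders the Oracle view):

    import java.io.IOException;

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;

    public class Launcher {

        // Placeholder servlet; the real one would query the view and render an HTML table.
        public static class TableViewServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                resp.setContentType("text/html");
                resp.getWriter().println("<h1>view goes here</h1>");
            }
        }

        public static void main(String[] args) throws Exception {
            Server server = new Server(8080);
            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");
            context.addServlet(new ServletHolder(new TableViewServlet()), "/*");
            server.setHandler(context);
            server.start();  // after this, "java -jar app.jar" serves on http://localhost:8080
            server.join();
        }
    }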
Try (µ)Micro and see if it works for you. It is open source, of course, and I have also provided a couple of useful examples to start with. HTH - florin