Spring simple example: where to create a list of beans in JavaConfig? - java

I'm learning Spring but I don't understand where I have to fill my structures... For example, I want a list of Teams where each team has a list of Players.
This is the code. I have my TeamApplication:
@SpringBootApplication
public class TeamApplication {
    public static void main(String[] args) {
        SpringApplication.run(TeamApplication.class, args);
        AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(AppConfig.class);
        Team team = context.getBean(Team.class);
        Player player = context.getBean(Player.class);
    }
}
Then I have AppConfig:
@Configuration
public class AppConfig {

    @Bean
    public Team team() {
        return new Team();
    }

    @Bean
    public Player player() {
        return new Player();
    }
}
So Player is:
public class Player {
    private static final Logger LOG = LoggerFactory.getLogger(Player.class);

    private String name;
    private int age;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    @PostConstruct
    public void init() {
        LOG.info("Player PostConstruct");
    }

    @PreDestroy
    public void destroy() {
        LOG.info("Player PreDestroy");
    }
}
And Team is:
public class Team {
    private static final Logger LOG = LoggerFactory.getLogger(Team.class);

    private String name;
    private List<Player> listPlayer;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public List<Player> getListPlayer() {
        return listPlayer;
    }

    public void setListPlayer(List<Player> listPlayer) {
        this.listPlayer = listPlayer;
    }

    @PostConstruct
    public void init() {
        LOG.info("Team PostConstruct");
    }

    @PreDestroy
    public void destroy() {
        LOG.info("Team PreDestroy");
    }
}
Now:
- Where do I have to fill those lists? In @PostConstruct? But that way I always have the same data... In a simple Java application I would first create the players:
Player p1 = new Player();
p1.setName("A");
p1.setAge(20);
Player p2 = new Player();
p2.setName("B");
p2.setAge(21);
Player p3 = new Player();
p3.setName("C");
p3.setAge(22);
Then I create my teams:
List<Player> l1 = new LinkedList<>();
l1.add(p1);
l1.add(p2);
Team t1 = new Team();
t1.setListPlayer(l1);
List<Player> l2 = new LinkedList<>();
l2.add(p3);
Team t2 = new Team();
t2.setListPlayer(l2);
So... in Spring:
Where can I initialize my players (in @PostConstruct I will always get the same name/age)?
Where do I have to create my listTeam? After getBean in TeamApplication?
Kind regards!

I've created a quick example project on GitHub. I need to emphasize that this is not production-ready code and you shouldn't follow its patterns; I didn't refactor it to be pretty, but rather understandable and simple.
First, you don't have to define the set of teams and players. As you said, the data will be loaded from a DB, so let the users do this work. :) Instead, you need to define the services (as Spring beans) that contain the business logic for the users to do their tasks.
How does Spring know my DB table structure and the DB table <-> Java object mapping? If you want to persist your teams and players to some DB, you should mark them with annotations for Spring. In the example project I put the @Entity annotation on them so Spring will know it has to store them. Spring uses convention over configuration, so if I don't define any DB table names, Spring will generate them from the entity class names, in this case PLAYER, TEAM and TEAM_PLAYERS. I also annotated the Java class fields I wanted to store with the following annotations (a sketch of an annotated entity follows the list):
@Column: this field will be stored in a DB column. Without any further config Spring will generate the column's name.
@Id and @GeneratedValue: Spring will auto-generate the id of the persisted entities and store its value in this annotated field.
@OneToMany: this annotation tells Spring to create a relation between two entities. Spring will create the TEAM_PLAYERS join table because of this annotation, and store the team-player id pairs in it.
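To make that concrete, a Team entity annotated this way might look roughly as follows. This is a minimal sketch assuming standard JPA (javax.persistence) annotations; the exact code in the linked project may differ:
import java.util.List;
import javax.persistence.*;

@Entity
public class Team {

    // Auto-generated primary key, via @Id and @GeneratedValue
    @Id
    @GeneratedValue
    private Long id;

    // Stored in a column whose name is generated by convention
    @Column
    private String name;

    // Creates the relation; the example project's join table is TEAM_PLAYERS
    @OneToMany
    private List<Player> listPlayer;

    // getters and setters omitted
}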
How does Spring know the database's URL? As I imported the H2 DB in Maven's pom.xml, Spring will use it, and without any configuration it will store data in memory (which will be lost between app restarts). If you look at application.yaml you can find the configuration for the DB, which Spring will apply. If you uncomment the commented lines, Spring will store your data under your home directory.
How does Spring sync these entities to the DB? I've created two repositories, and Spring will use them to save, delete and find data. They're interfaces (PlayerRepository and TeamRepository) and they extend the CrudRepository interface, which gives them basic CRUD operations without any further work.
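Roughly like this, assuming Long ids as in the entity sketch above:
import org.springframework.data.repository.CrudRepository;

// Extending CrudRepository is enough; Spring Data generates the implementation
// with save, delete, findById, findAll, etc.
public interface PlayerRepository extends CrudRepository<Player, Long> {
}

public interface TeamRepository extends CrudRepository<Team, Long> {
}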
So far so good, but how can the users use these services? I've published these functionalities through HTTP endpoints (PlayerController and TeamController). I marked them as @RestControllers, so Spring will map HTTP requests to them. Through them, users can create, delete and find players and teams, and assign players to teams or remove a player from a team.
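A hedged sketch of what one such endpoint could look like; the paths and method names here are illustrative, not necessarily those of the example project:
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/players")
public class PlayerController {

    private final PlayerRepository playerRepository;

    // Constructor injection: Spring supplies the repository bean
    public PlayerController(PlayerRepository playerRepository) {
        this.playerRepository = playerRepository;
    }

    // POST /players creates a new player from the request body
    @PostMapping
    public Player create(@RequestBody Player player) {
        return playerRepository.save(player);
    }

    // GET /players lists all stored players
    @GetMapping
    public Iterable<Player> findAll() {
        return playerRepository.findAll();
    }
}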
You can try this example if you build and start it with Maven, and send some queries to these endpoints with curl or by navigating to the http://localhost:8080/swagger-ui.html page.
I've done some more configuration for this project, but it's not relevant to your question. I haven't explained the project in depth, but you can investigate my solutions and find documentation on Spring's site.
Conclusion:
My Spring-managed classes are:
@Entity: Player, Team
Repository: PlayerRepository, TeamRepository
@RestController: PlayerController, TeamController
The flow of a call: User -(HTTP)-> @RestController -> Repository(Entity) -> DB

Spring isn't really meant for defining beans like your Player and Team classes, which are basically POJOs that will likely be different in each instance. Where Spring beans really shine is in defining singletons that will be injected into other beans or components, such as a controller, a service, or similar.
That said, it is possible to define beans that are not singletons. Just change the scope of the bean to prototype, like so:
@Bean
@Scope("prototype")
public Team team() {
    return new Team();
}
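With prototype scope, every lookup creates a new object, so each instance can be populated with different data. A quick check, assuming the AppConfig above:
AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(AppConfig.class);
Team t1 = context.getBean(Team.class);
Team t2 = context.getBean(Team.class);
// Prints false for prototype scope (two distinct objects); true for singleton
System.out.println(t1 == t2);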

Related

Multi-tenancy with a separate database per customer, using Spring Data ArangoDB

So far, the only way I know to set the name of the database to use with Spring Data ArangoDB is by hardcoding it in a database() method while extending AbstractArangoConfiguration, like so:
@Configuration
@EnableArangoRepositories(basePackages = { "com.company.mypackage" })
public class MyConfiguration extends AbstractArangoConfiguration {

    @Override
    public ArangoDB.Builder arango() {
        return new ArangoDB.Builder();
    }

    @Override
    public String database() {
        // Name of the database to be used
        return "example-database";
    }
}
What if I'd like to implement multi-tenancy, where each tenant has data in a separate database and use e.g. a subdomain to determine which database name should be used?
Can the database used by Spring Data ArangoDB be determined at runtime, dynamically?
This question is related to the discussion here: Manage multi-tenancy ArangoDB connection - but is Spring Data ArangoDB specific.
Turns out this is delightfully simple: just change the ArangoConfiguration database() method @Override to return a Spring Expression (SpEL):
@Override
public String database() {
    return "#{tenantProvider.getDatabaseName()}";
}
which in this example references a TenantProvider @Component, which can be implemented like so:
@Component
public class TenantProvider {

    private final ThreadLocal<String> databaseName;

    public TenantProvider() {
        super();
        databaseName = new ThreadLocal<>();
    }

    public String getDatabaseName() {
        return databaseName.get();
    }

    public void setDatabaseName(final String databaseName) {
        this.databaseName.set(databaseName);
    }
}
This component can then be @Autowired anywhere in your code to set the database name, such as in a servlet filter (see the sketch below), or in my case in an Apache Camel route Processor and in database service methods.
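For instance, a filter along these lines could derive the database name from the request's subdomain. This is a sketch; the subdomain parsing and the filter itself are my illustration, not part of the original answer:
import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class TenantFilter extends OncePerRequestFilter {

    @Autowired
    private TenantProvider tenantProvider;

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain filterChain) throws ServletException, IOException {
        // e.g. "tenant1.example.com" -> database name "tenant1"
        String subdomain = request.getServerName().split("\\.")[0];
        tenantProvider.setDatabaseName(subdomain);
        filterChain.doFilter(request, response);
    }
}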
P.s. I became aware of this possibility by reading the ArangoTemplate code and a Spring Expression support documentation section (via), and one merged pull request.

How to set tableName dynamically using environment variable in spring boot?

I am using AWS ECS to host my application and DynamoDB for all database operations. So I'll have the same database with different table names for different environments, such as "dev_users" (for the dev env), "test_users" (for the test env), etc. (This is how our company uses the same DynamoDB account for different environments.)
So I would like to change the "tableName" of the model class using the environment variable passed through the "AWS ECS task definition" environment parameters.
For example, my model class is:
@DynamoDBTable(tableName = "dev_users")
public class User {
Now I need to replace the "dev" with "test" when I deploy my container in the test environment. I know I can use
@Value("${DOCKER_ENV:dev}")
to access environment variables. But I'm not sure how to use variables outside the class. Is there any way that I can use the Docker env variable to select my table prefix?
My intent is to use it like this:
I know this is not possible like this. But is there any other way or workaround for this?
Edit 1:
I am working on Rahul's answer and facing some issues. Before describing the issues, I'll explain the process I followed.
Process:
I have created the beans in my config class (com.myapp.users.config).
As I don't have repositories, I have given my model class package name as the "basePackage" path. (Please check the image.)
For 1) I have replaced the table name over-rider bean injection to avoid the error.
For 2) I printed the name that is passed to this method, but it is null. So I'm checking all the possible ways to pass the value here.
Check the image for the error:
I haven't changed anything in my user model class, as the beans should replace the name of the DynamoDBTable when they are executed. But the table name overriding is not happening: data is pulled from the table name given at the model class level only.
What am I missing here?
The table names can be altered via an altered DynamoDBMapperConfig bean. For your case, where you have to prefix each table with a literal, you can add the bean as shown below; the prefix can be the environment name in your case.
@Bean
public TableNameOverride tableNameOverrider() {
    String prefix = ... // Use @Value to inject values via Spring or use any logic to define the table prefix
    return TableNameOverride.withTableNamePrefix(prefix);
}
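For example, the prefix could come straight from the environment variable mentioned in the question. A sketch, assuming DOCKER_ENV holds the environment name (defaulting to "dev"):
@Value("${DOCKER_ENV:dev}")
private String dockerEnv;

@Bean
public TableNameOverride tableNameOverrider() {
    // Yields table names like "dev_users" / "test_users"
    return TableNameOverride.withTableNamePrefix(dockerEnv + "_");
}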
For more details check out the complete details here:
https://github.com/derjust/spring-data-dynamodb/wiki/Alter-table-name-during-runtime
I was able to achieve table names prefixed with the active profile name.
First I added a TableNameResolver class as below:
@Component
public class TableNameResolver extends DynamoDBMapperConfig.DefaultTableNameResolver {

    private String envProfile;

    public TableNameResolver() {}

    public TableNameResolver(String envProfile) {
        this.envProfile = envProfile;
    }

    @Override
    public String getTableName(Class<?> clazz, DynamoDBMapperConfig config) {
        String stageName = envProfile.concat("_");
        String rawTableName = super.getTableName(clazz, config);
        return stageName.concat(rawTableName);
    }
}
Then I set up the DynamoDBMapper bean as below:
@Bean
@Primary
public DynamoDBMapper dynamoDBMapper(AmazonDynamoDB amazonDynamoDB) {
    DynamoDBMapperConfig config = new DynamoDBMapperConfig.Builder()
            .withTableNameResolver(new TableNameResolver(envProfile))
            .build();
    return new DynamoDBMapper(amazonDynamoDB, config);
}
I added the variable envProfile, which holds the active profile property value accessed from the application.properties file:
@Value("${spring.profiles.active}")
private String envProfile;
We have the same issue with regard to the need to change table names at runtime. We are using spring-data-dynamodb 5.0.2 and the following configuration seems to provide the solution we need.
First I annotated my bean accessor:
@EnableDynamoDBRepositories(dynamoDBMapperConfigRef = "getDynamoDBMapperConfig", basePackages = "my.company.base.package")
I also set up an environment variable called ENV_PREFIX, which is wired by Spring via SpEL:
@Value("#{systemProperties['ENV_PREFIX']}")
private String envPrefix;
Then I set up a TableNameOverride bean:
@Bean
public DynamoDBMapperConfig.TableNameOverride getTableNameOverride() {
    return DynamoDBMapperConfig.TableNameOverride.withTableNamePrefix(envPrefix);
}
Finally, I set up the DynamoDBMapperConfig bean using TableNameOverride injection. In 5.0.2 we had to set a standard DynamoDBTypeConverterFactory in the DynamoDBMapperConfig builder to avoid an NPE:
@Bean
public DynamoDBMapperConfig getDynamoDBMapperConfig(DynamoDBMapperConfig.TableNameOverride tableNameOverride) {
    DynamoDBMapperConfig.Builder builder = new DynamoDBMapperConfig.Builder();
    builder.setTableNameOverride(tableNameOverride);
    builder.setTypeConverterFactory(DynamoDBTypeConverterFactory.standard());
    return builder.build();
}
In hindsight, I could have set up a DynamoDBTypeConverterFactory bean that returns a standard DynamoDBTypeConverterFactory and injected that into the getDynamoDBMapperConfig() method using the DynamoDBMapperConfig builder. But this will also do the job.
I upvoted the other answer, but here is an idea:
Create a base class with all your user details:
@MappedSuperclass
public abstract class AbstractUser {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    private String firstName;
    private String lastName;
}
Create two implementations with different table names and Spring profiles:
@Profile(value = {"dev", "default"})
@Entity(name = "dev_user")
public class DevUser extends AbstractUser {
}

@Profile(value = {"prod"})
@Entity(name = "prod_user")
public class ProdUser extends AbstractUser {
}
Create a single JPA repository that uses the mapped superclass:
public interface UserRepository extends CrudRepository<AbstractUser, Long> {
}
Then switch the implementation with the Spring profile:
@RunWith(SpringJUnit4ClassRunner.class)
@DataJpaTest
@Transactional
public class UserRepositoryTest {

    @Autowired
    protected DataSource dataSource;

    @BeforeClass
    public static void setUp() {
        System.setProperty("spring.profiles.active", "prod");
    }

    @Test
    public void test1() throws Exception {
        DatabaseMetaData metaData = dataSource.getConnection().getMetaData();
        ResultSet tables = metaData.getTables(null, null, "PROD_USER", new String[] { "TABLE" });
        tables.next();
        assertEquals("PROD_USER", tables.getString("TABLE_NAME"));
    }
}

Creating Spring @Repository and @Controller for every item I'm working with (from database)

While working on a project that involves requesting multiple data types from a database, I came to the following question:
Let's say I have 2 Java classes that correspond to database entities:
Routes
public class Route {
    // fields inferred from the constructor parameters
    private int n;
    private int region;
    private Date fdate;
    private boolean changed;
    private int points;
    private int length;

    public Route(int n, int region, Date fdate, boolean changed, int points, int length) {
        super();
        this.n = n;
        this.region = region;
        this.fdate = fdate;
        this.changed = changed;
        this.points = points;
        this.length = length;
    }
}
Carrier
public class Carrier {
    // fields and the constructor's source parameter are elided in the question
    public Carrier(...) {
        this.id = src.getId();
        this.name = src.getName();
        this.instId = src.getInstId();
        this.depotId = src.getDepotId();
    }
}
If so, what's the correct approach to creating DAO interfaces and classes? I'm doing it like this:
@Repository
public class CarrierDaoImpl implements CarrierDao {
    @Autowired
    DataSource dataSource;

    public List<Carrier> getAllOrgs() { ... }
}

@Repository
public class RoutesDaoImpl implements RoutesDao {
    @Autowired
    DataSource dataSource;

    public ArrayList<AtmRouteItem> getRoutes(AtmRouteFilter filter) { ... }
}
I'm creating a @Repository DAO for every Java class / DB entity, and then 2 separate controllers for requests about carriers and routes, like this:
@RestController
@RequestMapping(path = "/routes")
public class RoutesController {
    @Autowired
    RoutesDao routesDao;

    @GetMapping(value = {"/getRoutes/", "/getRoutes"})
    public ArrayList<Route> getRoutes() { ... }
}
And the same for the Carriers controller. Is this correct, and if not, what's the correct approach?
Sorry for styling issues, this is my first question on Stack Overflow :)
I would suggest creating services marked with the @Service annotation (i.e. a CarrierService interface and a CarrierServiceImpl implementation), then injecting them into the controllers. Use repositories within services, because some database operations will require transactions, and the better place for managing transactions is the service layer. Services can also do more specialized jobs that require access to multiple repositories, so you can inject several of them. And don't forget to mark your services with the @Transactional annotation. A sketch of this layering follows below.
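A minimal sketch of that layering, reusing the CarrierDao from the question; the service names are illustrative:
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

public interface CarrierService {
    List<Carrier> getAllOrgs();
}

@Service
public class CarrierServiceImpl implements CarrierService {

    private final CarrierDao carrierDao;

    @Autowired
    public CarrierServiceImpl(CarrierDao carrierDao) {
        this.carrierDao = carrierDao;
    }

    // Transactions are managed at the service layer, as suggested above
    @Override
    @Transactional
    public List<Carrier> getAllOrgs() {
        return carrierDao.getAllOrgs();
    }
}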
It's correct to have a DAO for each entity.
When working with JPA repositories you have no choice but to provide the entity. For instance:
public interface FooRepository extends JpaRepository<Foo,Long>{}
The same goes for the REST controllers: you have to bring functionalities together by object, as you do.
You can improve your mapping to be more RESTful. To retrieve all routes, don't specify a path:
@GetMapping
public ArrayList<RouteResource> getRoutes() { ... }
(I haven't used @GetMapping yet, but it should work like that.)
And if you want a specific route:
@GetMapping("/get/{id}")
public RouteResource getRoute(@PathVariable("id") Long id) { ... }
You should return resources instead of entities to the client.

Querying Spring Data Elasticsearch for nested properties

I am trying to query Spring Data Elasticsearch repositories for nested properties. My repository looks like this:
public interface PersonRepository extends ElasticsearchRepository<Person, Long> {
    List<Person> findByAddressZipCode(String zipCode);
}
The domain objects Person and Address (without getters/setters) are defined as follows:
@Document(indexName="person")
public class Person {
    @Id
    private Long id;

    private String name;

    @Field(type=FieldType.Nested, store=true, index = FieldIndex.analyzed)
    private Address address;
}

public class Address {
    private String zipCode;
}
My test saves one Person document and tries to read it with the repository method, but no results are returned. Here is the test method:
@Test
public void testPersonRepo() throws Exception {
    Person person = new Person();
    person.setName("Rene");
    Address address = new Address();
    address.setZipCode("30880");
    person.setAddress(address);
    personRepository.save(person);
    elasticsearchTemplate.refresh(Person.class, true);
    assertThat(personRepository.findByAddressZipCode("30880"), hasSize(1));
}
Does Spring Data Elasticsearch support the default Spring Data query generation?
Elasticsearch indexes the new document asynchronously, in near real-time; the default refresh interval is typically 1s, I think. So you must explicitly request a refresh (to force a flush and make the document available for search) if you want the document immediately searchable, as in a unit test. Your unit test therefore needs to include the ElasticsearchTemplate bean so that you can explicitly call refresh. Make sure you set waitForOperation to true to force a synchronous refresh. See this related answer. Kinda like this:
elasticsearchTemplate.refresh("myindex",true);

Stripes MVC Model Data

I am experienced with Spring MVC and am trying out Stripes to decide whether to use it for a new project.
In Spring MVC I would prepare model data and pass it to the view by adding it to a map in the ModelAndView instance created by my controller. I am having trouble finding the equivalent of this in Stripes.
It seems like the closest parallel is to have an ActionBean prepare my model data and add it to the HttpSession. A ForwardResolution is used to load the view and the data is accessed from the session.
Is there better support for a front controller provided by Stripes, or is this a totally different design principle from Spring MVC? (i.e. do I have to invoke methods from the view using EL to retrieve data, as some of the examples do?)
Thanks!
A typical MVC design in Stripes would look something like the code below.
The JPA entity is automatically loaded by a Stripes interceptor provided by Stripersist (but this can also easily be implemented on your own if you wish). Thus, in this example, requesting http://your.app/show-order-12.html will load the order with id 12 from the database and show it on the page.
Controller (OrderAction.java):
@UrlBinding("/show-order-{order=}.html")
public class OrderAction implements ActionBean {
    private ActionBeanContext context;
    private Order order;

    public ActionBeanContext getContext() {
        return context;
    }

    public void setContext(ActionBeanContext context) {
        this.context = context;
    }

    public void setOrder(Order order) {
        this.order = order;
    }

    public Order getOrder() {
        return order;
    }

    @DefaultHandler
    public Resolution view() {
        return new ForwardResolution("/WEB-INF/jsp/order.jsp");
    }
}
View (order.jsp):
<html><body>
Order id: ${actionBean.order.id}<br/>
Order name: ${actionBean.order.name}<br/>
Order total: ${actionBean.order.total}<br/>
</body></html>
Model (Order.java):
@Entity
public class Order implements Serializable {

    @Id @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Integer id;

    private String name;
    private Integer total;

    public String getName() {
        return name;
    }

    public Integer getTotal() {
        return total;
    }
}
BTW, there is a really excellent short(!) book on Stripes that covers all these things:
Stripes: ...and Java Web Development Is Fun Again
Okay, I have figured it out. Attributes added to the HttpServletRequest (retrieved from the context) ARE available in the page receiving the ForwardResolution.
I.e.
context.getRequest().setAttribute("attr1", "request attribute 1");
return new ForwardResolution("/WEB-INF/pages/hello.jsp");
In hello.jsp
${attr1}
is available... yay!
