I have a simple model class:
@Entity
public class Task {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    @Size(min = 1, max = 80)
    @NotNull
    private String text;
    @NotNull
    private boolean isCompleted;
And here is my Spring Data REST repository:
@CrossOrigin // TODO: configure specific domains
@RepositoryRestResource(collectionResourceRel = "task", path = "task")
public interface TaskRepository extends CrudRepository<Task, Long> {
}
So just as a sanity check, I was creating some tests to verify the endpoints that are automatically created. POST, DELETE, and GET work just fine. However, I am unable to properly update the isCompleted property.
Here are my test methods. The FIRST one passes no problem, but the SECOND one fails.
@Test
void testUpdateTaskText() throws Exception {
    Task task1 = new Task("task1");
    taskRepository.save(task1);
    // update task text and hit update end point
    task1.setText("updatedText");
    String json = gson.toJson(task1);
    this.mvc.perform(patch("/task/1")
            .contentType(MediaType.APPLICATION_JSON_UTF8)
            .content(json))
            .andExpect(status().isNoContent());
    // Pull the task from the repository and verify text="updatedText"
    Task updatedTask = taskRepository.findById((long) 1).get();
    assertEquals("updatedText", updatedTask.getText());
}
@Test
void testUpdateTaskCompleted() throws Exception {
    Task task1 = new Task("task1");
    task1.setCompleted(false);
    taskRepository.save(task1);
    // ensure repository properly stores isCompleted = false
    Task updatedTask = taskRepository.findById((long) 1).get();
    assertFalse(updatedTask.isCompleted());
    // Update isCompleted = true and hit update end point
    task1.setCompleted(true);
    String json = gson.toJson(task1);
    this.mvc.perform(patch("/task/1")
            .contentType(MediaType.APPLICATION_JSON_UTF8)
            .content(json))
            .andExpect(status().isNoContent());
    // Pull the task from the repository and verify isCompleted=true
    updatedTask = taskRepository.findById((long) 1).get();
    assertTrue(updatedTask.isCompleted());
}
EDIT: Modified test methods to be clear.
Finally figured it out. It turns out the getter and setter in my model class were named incorrectly.
They should have been:
public boolean getIsCompleted() {
return isCompleted;
}
public void setIsCompleted(boolean isCompleted) {
this.isCompleted = isCompleted;
}
Found the answer per this SO Post:
JSON Post request for boolean field sends false by default
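As context for why the rename works (my explanation, not part of the linked post): Jackson derives the property name from the accessors, so isCompleted()/setCompleted() expose the property as "completed", while the Gson-serialized test payload uses the field name "isCompleted"; the unrecognized key is ignored and the primitive defaults to false. An alternative sketch, assuming Jackson is the deserializer behind Spring Data REST here, is to keep the conventional accessors and rename the JSON property explicitly:
// Hypothetical alternative inside the Task entity
@JsonProperty("isCompleted")
public boolean isCompleted() {
    return isCompleted;
}

@JsonProperty("isCompleted")
public void setCompleted(boolean isCompleted) {
    this.isCompleted = isCompleted;
}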
I'm totally new to reactive programming and I'm having trouble coding even an elementary task. The following method of a RestController should:
Take as a parameter a DoiReservationRequest object that represents a reservation of a yet-to-be-published DOI number (https://www.doi.org/). This reservation is meaningful only within our internal systems. The parameter is passed in the body of the POST request. A DOI reservation request is a simple object:
public record DoiReservationRequest(String doi) {
}
Check that there is no previous reservation of the same number, or that the DOI number is not actually already submitted and published. For this purpose, try to find submissions with the same DOI in DoiSubmissionRepository, which is defined as:
@EnableMongoRepositories
@Repository
public interface DoiSubmissionRepository extends ReactiveMongoRepository<DoiSubmission, String> {
    Flux<DoiSubmission> findAllByDoi(Publisher<String> doi);
}
DoiSubmission is itself defined as:
@Getter
@NoArgsConstructor(access = AccessLevel.PROTECTED)
@AllArgsConstructor
@ToString
@Document
public final class DoiSubmission {
    @Id
    private String id;
    @Indexed
    private String doi;
    private Integer version;
    private String xml;
    private Date timestamp;
}
If no submission exists, then return HTTP 201 with a body that for now is empty, but before that save the reservation as a DOI submission with version 0 and empty xml content.
If submissions with the same doi exist (several different versions of the same DOI number with different xml data), return HTTP 409 with a body, yet to be determined, that describes the error.
The code hangs indefinitely when a POST request is made:
@PostMapping("/api/v1/reservation/")
public Mono<ResponseEntity<String>> create(@RequestBody Publisher<DoiReservationRequest> doi) {
    return doiSubmissionRepository
            .findAllByDoi(Mono.from(doi)
                    .map(DoiReservationRequest::doi))
            .hasElements()
            .flatMap(hasElements -> {
                if (hasElements) {
                    return Mono.just(ResponseEntity.status(HttpStatus.CONFLICT).body(""));
                } else {
                    return Mono.from(doi)
                            .map(doiReservationRequest -> new DoiSubmission(
                                    UUID.randomUUID().toString(),
                                    doiReservationRequest.doi(), 0, "", new Date()))
                            .flatMap(doiSubmissionRepository::save)
                            .then(Mono.just(ResponseEntity.status(HttpStatus.OK).body("")));
                }
            });
}
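One possible restructuring (a sketch, based on the assumption that the hang comes from subscribing to the request-body Publisher twice - once inside findAllByDoi and again in the else branch - while a request body can only be consumed once) is to let the framework resolve the body into a single Mono and reuse the resolved value in both branches:
@PostMapping("/api/v1/reservation/")
public Mono<ResponseEntity<String>> create(@RequestBody Mono<DoiReservationRequest> body) {
    // Resolve the body once, then work with the concrete request object.
    return body.flatMap(request -> doiSubmissionRepository
            .findAllByDoi(Mono.just(request.doi()))
            .hasElements()
            .flatMap(hasElements -> {
                if (hasElements) {
                    return Mono.just(ResponseEntity.status(HttpStatus.CONFLICT).body(""));
                }
                return doiSubmissionRepository
                        .save(new DoiSubmission(UUID.randomUUID().toString(), request.doi(), 0, "", new Date()))
                        // HTTP 201 to match the stated requirement (the original code used HttpStatus.OK here)
                        .thenReturn(ResponseEntity.status(HttpStatus.CREATED).body(""));
            }));
}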
I am sorry for the long post, but I believe it is important to mention everything related to the issue.
I am dealing with a requirement for my web service that sends out notifications to 20k+ users at a time. Since this is quite a heavy task, I thought that making it async was probably the best approach, as it takes some time to process the data. This feature is available to a vast majority of users on the platform, so there can be multiple requests at once. The number of users that will receive a notification can vary from 1k to 20k+. The request processing takes quite a long time - I basically create a notification, assign it to the correct talents and then send it out in waves. This feature alone seems to have a massive impact on performance when there are multiple concurrent notification requests active at the same time, and I end up with an out-of-memory error. I am not sure if this can be optimized at all, or if I should just choose a completely different approach. I apologize for the long post, but I believe it is important that I mention everything.
I designed the system to act as follows:
I receive a notification request, which is created in a separate table
I receive a token that indicates which users should get the notification
I fetch the users via a mapped class that is used as a predicate inside a findAll method (QueryDSL)
I created a relational table that contains the notificationId, talentId and an extra 'sent' column. Every talent that should receive the message is added to this table along with the notificationId
I have a @Scheduled method that periodically picks up a portion of the notification/talent relations and sends out the notifications
My Async configuration class is as follows:
@Component
@Configuration
@EnableAsync
public class AsyncConfiguration implements AsyncConfigurer {

    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setMaxPoolSize(5);
        executor.setCorePoolSize(5);
        executor.setThreadNamePrefix("asyncExec-");
        executor.initialize();
        return executor;
    }

    @Override
    public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
        return (ex, method, params) -> {
            System.out.println("Exception with message :" + ex.getMessage());
            System.out.println("Method :" + method);
            System.out.println("Number of parameters :" + params.length);
        };
    }
}
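As a side note on this configuration (my assumption, not something verified against the profiling below): ThreadPoolTaskExecutor queues tasks in an unbounded queue by default, so bursts of @Async submissions can accumulate in memory. A bounded sketch of the same executor could look like this:
@Override
public Executor getAsyncExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(5);
    executor.setMaxPoolSize(5);
    executor.setQueueCapacity(100); // bound the queue instead of the default Integer.MAX_VALUE
    // When the queue is full, run the task on the calling thread to apply back-pressure.
    executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
    executor.setThreadNamePrefix("asyncExec-");
    executor.initialize();
    return executor;
}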
My user entity has the following mapping:
@OneToMany(mappedBy = "talent", cascade = CascadeType.ALL, orphanRemoval = true)
private List<TalentRoleNotificationRelations> roleNotifications;
My notification has the following mapping:
@OneToMany(mappedBy = "roleNotification", cascade = CascadeType.ALL, orphanRemoval = true)
private List<TalentRoleNotificationRelations> roleNotifications;
And my relational table entity is as follows:
@Entity
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class TalentRoleNotificationRelations implements Serializable {

    private static final long serialVersionUID = 1L;

    @EmbeddedId
    private TalentRoleNotificationIdentity identity;

    @ManyToOne(fetch = FetchType.LAZY)
    @MapsId("talentId")
    @JoinColumn(name = "talent_id")
    private Talent talent;

    @ManyToOne(fetch = FetchType.LAZY)
    @MapsId("roleNotificationId")
    @JoinColumn(name = "role_notification_id")
    private RoleNotification roleNotification;

    private Boolean sent;
}
And the composite key identity:
@Embeddable
@Builder
@AllArgsConstructor
@NoArgsConstructor
@EqualsAndHashCode
@Data
public class TalentRoleNotificationIdentity implements Serializable {

    private static final long serialVersionUID = 1L;

    @Column(name = "talent_id")
    private String talentId;

    @Column(name = "role_notification_id")
    private String roleNotificationId;
}
This mapping was done following the guide here, which shows how a many-to-many relation with an extra column should be implemented.
And the actual process of creating the notification:
Controller method:
@PostMapping(value = "notify/{searchToken}/{roleId}", produces = MediaType.APPLICATION_JSON_VALUE)
public RoleNotificationInfo notifyTalentsOfNewRole(@PathVariable String searchToken, @PathVariable String roleId) {
    var roleProfileInfo = (RoleProfileInfo) Optional.ofNullable(searchToken)
            .map(token -> searchFactory.fromToken(token, RoleProfileInfo.class))
            .orElse(null);
    return productionService.notifyTalentsOfMatchingRole(roleProfileInfo, roleId);
}
Service method (this is where I believe the issue is, and possibly a very bad use of a many-to-many table mapping):
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Async
public RoleNotificationInfo notifyTalentsOfMatchingRole(RoleProfileInfo roleProfileInfo, String roleId) {
    var predicate = Optional.ofNullable(userService.getPredicate(roleProfileInfo)).orElse(new BooleanBuilder());
    var role = roleDao.findById(roleId).orElseThrow(NoSuchRole::new);
    var isNotificationLimitReached = isNotificationLimitReached(roleId);
    if (!isNotificationLimitReached) {
        var notificationBody = notificationBodyDao.findById(NotificationBodyIdentifier.MATCHING_ROLE)
                .orElseThrow(NoSuchNotificationBody::new);
        var newNotification = RoleNotification.builder()
                .notificationBody(notificationBody)
                .role(role)
                .build();
        roleNotificationDao.saveAndFlush(newNotification);
        talentDao.findAll(predicate)
                .forEach(talent -> {
                    var identity = TalentRoleNotificationIdentity.builder()
                            .talentId(talent.getId())
                            .roleNotificationId(newNotification.getId())
                            .build();
                    var talentRoleNotification = TalentRoleNotificationRelations.builder()
                            .identity(identity)
                            .roleNotification(newNotification)
                            .talent(talent)
                            .sent(false)
                            .build();
                    talentRoleNotification.setIdentity(identity);
                    talentRoleNotification.setTalent(talent);
                    talentRoleNotificationRelationsDao.save(talentRoleNotification);
                });
        return dtoFactory.toInfo(newNotification);
    } else throw new RoleNotificationLimitReached();
}
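A sketch of how the loop over talentDao.findAll(predicate) could be kept memory-bounded (assumptions: talentDao also exposes Querydsl's findAll(predicate, pageable), and an EntityManager can be injected and flushed/cleared between chunks; neither is shown in the question):
// Hypothetical replacement for the findAll(...).forEach(...) block above
int page = 0;
Page<Talent> talents;
do {
    talents = talentDao.findAll(predicate, PageRequest.of(page, 500));
    talents.forEach(talent -> {
        var identity = TalentRoleNotificationIdentity.builder()
                .talentId(talent.getId())
                .roleNotificationId(newNotification.getId())
                .build();
        talentRoleNotificationRelationsDao.save(TalentRoleNotificationRelations.builder()
                .identity(identity)
                .roleNotification(newNotification)
                .talent(talent)
                .sent(false)
                .build());
    });
    entityManager.flush(); // push the chunk to the database
    entityManager.clear(); // detach processed entities so the persistence context stays small
    page++;
} while (talents.hasNext());
// Note: clear() also detaches newNotification, so re-read it if its lazy state is needed afterwards.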
Scheduled method that sends out the notifications:
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Scheduled(fixedDelay = 5)
public void sendRoleMessage() {
    var unsentNotificationIds = talentRoleNotificationRelationsDao.findAll(
            QTalentRoleNotificationRelations.talentRoleNotificationRelations.sent.isFalse()
                    .and(QTalentRoleNotificationRelations.talentRoleNotificationRelations.talent.notificationToken.isNotNull()),
            PageRequest.of(0, 50)
    );
    unsentNotificationIds.forEach(unsentNotification -> {
        var talent = unsentNotification.getTalent();
        var roleNotification = unsentNotification.getRoleNotification();
        markAsSent(unsentNotification);
        talentRoleNotificationRelationsDao.saveAndFlush(unsentNotification);
        notificationPusher.push(notificationFactory.buildComposite(talent.getNotificationToken()), dtoFactory.toInfo(roleNotification));
    });
}
The notificationPusher method itself:
@Override
public void push(PushMessageComposite composite, RoleNotificationInfo roleNotificationInfo) {
    String roleId = roleNotificationInfo.getRoleId();
    String title = "New matching role!";
    var push = Message.builder()
            .setToken(composite.getMeta().getDeviceToken())
            .setAndroidConfig(AndroidConfig.builder()
                    .setNotification(AndroidNotification.builder()
                            .setTitle(title)
                            .setBody(roleNotificationInfo.getBody())
                            .setSound(composite.getMeta().getSound())
                            .build())
                    .build())
            .setApnsConfig(ApnsConfig.builder()
                    .setAps(Aps.builder()
                            .setAlert(ApsAlert.builder()
                                    .setTitle(title)
                                    .setBody(roleNotificationInfo.getBody())
                                    .build())
                            .setBadge(composite.getMeta().getBadge().intValue())
                            .setSound(composite.getMeta().getSound())
                            .build())
                    .build())
            .putData("roleId", roleId)
            .build();
    Try.run(() -> FirebaseMessaging.getInstance().sendAsync(push).get())
            .onFailure(e -> {
                log.error("Firebase Cloud Messaging failed during sendNotification", e);
                var talent = talentDao.findTalentByNotificationToken(composite.getMeta().getDeviceToken());
                talent.setNotificationToken(null);
                talentDao.save(talent);
            });
}
And the dto factory mapper method:
@Transactional(propagation = Propagation.MANDATORY)
public RoleNotificationInfo toInfo(RoleNotification source) {
    return RoleNotificationInfo.builder()
            .id(source.getId())
            .body(source.getNotificationBody().getBody())
            .created(source.getCreated())
            .roleId(source.getRole().getId())
            .build();
}
I am unsure where the problem lies. I assume it is due to the large number of users fetched by certain queries (20k+). I did some profiling, and these were the results:
Memory/CPU charts:
My question is, should this even be possible at all? Is there a different approach that would be much more efficient? Should I use an external service for this? Is the problem something very obvious that I do not see? I am not sure where to look. If anything in my code is unclear and needs further clarification, please let me know and I'll try to edit and format it the best way I can.
I've followed an open course on Spring web and written some code to list all orders from a database and return them through a REST API. This works perfectly. Now I'm writing some code that takes the ID of the order in the request, finds 0 or 1 orders, and returns them. However, when no Order is found with the given ID, a NullPointerException is thrown. I can't figure out what is causing this - I'm assuming the .orElse(null) statement. Please advise.
Controller:
@RequestMapping("api/V1/order")
@RestController
public class OrderController {

    private final OrderService orderService;

    @Autowired
    public OrderController(OrderService orderService) {
        this.orderService = orderService;
    }

    @GetMapping(path = "{id}")
    public Order getOrderById(@PathVariable("id") int id) {
        return orderService.getOrderById(id)
                .orElse(null);
    }
}
Service:
@Service
public class OrderService {

    private final OrderDao orderDao;

    @Autowired
    public OrderService(@Qualifier("oracle") OrderDao orderDao) {
        this.orderDao = orderDao;
    }

    public Optional<Order> getOrderById(int orderNumber) {
        return orderDao.selectOrderById(orderNumber);
    }
}
Dao:
@Override
public Optional<Order> selectOrderById(int searchedOrderNumber) {
    final String sql = "SELECT \"order\", sender, receiver, patient, orderdate, duedate, paymentref, status, netprice from \"ORDER\" where \"order\" = ?";
    Order order = jdbcTemplate.queryForObject(sql, new Object[] {searchedOrderNumber}, (resultSet, i) -> {
        int orderNumber = resultSet.getInt("\"order\"");
        String sender = resultSet.getString("sender");
        String receiver = resultSet.getString("receiver");
        String patient = resultSet.getString("patient");
        String orderDate = resultSet.getString("orderdate");
        String dueDate = resultSet.getString("duedate");
        String paymentRef = resultSet.getString("paymentref");
        String status = resultSet.getString("status");
        int netPrice = resultSet.getInt("netprice");
        return new Order(orderNumber, sender, receiver, patient, orderDate, dueDate, paymentRef, status, netPrice);
    });
    return Optional.ofNullable(order);
}
For the JDBC exception, use the general query method instead of queryForObject, or use try/catch to convert the JDBC-related exception; otherwise Spring itself will handle these internally via its exception translation mechanism (ExceptionTranslator, exception handlers, etc.).
To handle the Optional case in controllers, just throw an exception there, for example PostController.java#L63,
and handle it in the PostExceptionHandler.
EDIT (based on the comment about the stack trace):
For your error, please check: Jdbctemplate query for string: EmptyResultDataAccessException: Incorrect result size: expected 1, actual 0
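For illustration, here is a version of the DAO method from the question that uses query(...) instead of queryForObject(...); query(...) returns an empty list rather than throwing EmptyResultDataAccessException when no row matches (a sketch, reusing the question's Order constructor):
@Override
public Optional<Order> selectOrderById(int searchedOrderNumber) {
    final String sql = "SELECT \"order\", sender, receiver, patient, orderdate, duedate, paymentref, status, netprice from \"ORDER\" where \"order\" = ?";
    List<Order> orders = jdbcTemplate.query(sql, (resultSet, i) -> new Order(
            resultSet.getInt("order"),
            resultSet.getString("sender"),
            resultSet.getString("receiver"),
            resultSet.getString("patient"),
            resultSet.getString("orderdate"),
            resultSet.getString("duedate"),
            resultSet.getString("paymentref"),
            resultSet.getString("status"),
            resultSet.getInt("netprice")), searchedOrderNumber);
    // Empty result -> Optional.empty(), exactly one row -> Optional.of(order)
    return orders.stream().findFirst();
}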
To solve the problem of orderService.getOrderById(id) returning null, you can return a ResponseEntity. ResponseEntity gives you more flexibility in terms of status code and headers. If you can change your code to return ResponseEntity, then you can do something like:
@GetMapping(path = "{id}")
public ResponseEntity<?> getOrderById(@PathVariable("id") int id) {
    return orderService
            .getOrderById(id)
            .map(order -> new ResponseEntity<>(order.getId(), HttpStatus.OK))
            .orElse(new ResponseEntity<>(HttpStatus.NOT_FOUND));
}
You can even write a generic exception handler using @ControllerAdvice and throw an OrderNotFoundException with .orElseThrow(OrderNotFoundException::new). Check more information here.
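A minimal sketch of that handler (OrderNotFoundException and the handler class are assumptions for illustration, not code from the question):
public class OrderNotFoundException extends RuntimeException {
}

@ControllerAdvice
public class OrderExceptionHandler {

    @ExceptionHandler(OrderNotFoundException.class)
    public ResponseEntity<String> handleOrderNotFound(OrderNotFoundException e) {
        return new ResponseEntity<>("Order not found", HttpStatus.NOT_FOUND);
    }
}
The controller method would then end with .orElseThrow(OrderNotFoundException::new) instead of .orElse(null).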
I would like to have Documents stored with a UUID id and createdAt / updatedAt fields. My solution was working with Spring Boot 2.1.x. After I upgraded from Spring Boot 2.1.11.RELEASE to 2.2.0.RELEASE, my test for MongoAuditing failed with createdAt = null. What do I need to do to get the createdAt field filled again?
This is not just a test problem. I ran the application and it shows the same behaviour as my test: all auditing fields stay null.
I have a Configuration to enable MongoAuditing and UUID generation:
@Configuration
@EnableMongoAuditing
public class MongoConfiguration {

    @Bean
    public GenerateUUIDListener generateUUIDListener() {
        return new GenerateUUIDListener();
    }
}
The listener hooks into onBeforeConvert - I guess that's where the trouble starts.
public class GenerateUUIDListener extends AbstractMongoEventListener<IdentifiableEntity> {

    @Override
    public void onBeforeConvert(BeforeConvertEvent<IdentifiableEntity> event) {
        IdentifiableEntity entity = event.getSource();
        if (entity.isNew()) {
            entity.setId(UUID.randomUUID());
        }
    }
}
The document itself (I dropped the getters and setters):
@Document
public class MyDocument extends InsertableEntity {
    private String name;
}

public abstract class InsertableEntity extends IdentifiableEntity {
    @CreatedDate
    @JsonIgnore
    private Instant createdAt;
}

public abstract class IdentifiableEntity implements Persistable<UUID> {
    @Id
    private UUID id;

    @JsonIgnore
    public boolean isNew() {
        return getId() == null;
    }
}
A complete minimal example can be found here (including a test): https://github.com/mab/auditable
With 2.1.11.RELEASE the test succeeds; with 2.2.0.RELEASE it fails.
For me the best solution was to switch from event-based UUID generation to a callback-based one. By implementing Ordered we can set the new callback to be executed after the AuditingEntityCallback.
public class IdEntityCallback implements BeforeConvertCallback<IdentifiableEntity>, Ordered {

    @Override
    public IdentifiableEntity onBeforeConvert(IdentifiableEntity entity, String collection) {
        if (entity.isNew()) {
            entity.setId(UUID.randomUUID());
        }
        return entity;
    }

    @Override
    public int getOrder() {
        return 101;
    }
}
I registered the callback with the MongoConfiguration. For a more general solution you might want to take a look at how the AuditingEntityCallback is registered by the MongoAuditingBeanDefinitionParser.
@Configuration
@EnableMongoAuditing
public class MongoConfiguration {

    @Bean
    public IdEntityCallback registerCallback() {
        return new IdEntityCallback();
    }
}
MongoTemplate works in the following way in doInsert():
this.maybeEmitEvent - emits an event (onBeforeConvert, onBeforeSave and such) so any AbstractMappingEventListener can catch it and act upon it, like you did with GenerateUUIDListener
this.maybeCallBeforeConvert - calls before-convert callbacks, such as the Mongo auditing callback
as you can see in the source code of MongoTemplate.class (lines 831-832):
protected <T> T doInsert(String collectionName, T objectToSave, MongoWriter<T> writer) {
    BeforeConvertEvent<T> event = new BeforeConvertEvent(objectToSave, collectionName);
    T toConvert = ((BeforeConvertEvent)this.maybeEmitEvent(event)).getSource(); //emit event
    toConvert = this.maybeCallBeforeConvert(toConvert, collectionName); //call some before convert handlers
    ...
}
Mongo auditing sets createdAt only on new entities, by checking whether entity.isNew() == true.
Because your code has already set the id (the UUID), createdAt is not populated (the entity is not considered new).
You can do one of the following (ordered from best to worst):
forget about the UUID and use String for your id; let Mongo itself create and manage its entities' ids (this is how MongoTemplate actually works, lines 811-812)
keep the UUID at the code level and convert from/to String when inserting into and retrieving from the db
create a custom repository like in this post
stay with 2.1.11.RELEASE
set updatedAt in GenerateUUIDListener as well as the id (and rename it NewEntityListener or something), i.e. basically implement the auditing yourself
implement new isNew() logic that doesn't depend only on the entity id (see the sketch at the end of this answer)
In version 2.1.11.RELEASE the order of those two methods was flipped (MongoTemplate.class, lines 804-805), so your code worked fine.
As a more abstract point, events are by nature send-and-forget (async compatible), so it's bad practice to mutate the object itself; there is NO guarantee about the order of computation, if any.
This is why the auditing is built on callbacks and not events, and that's why Pivotal doesn't (need to) keep the order stable between versions.
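As an illustration of the last option in the list above (a sketch, assuming a @Version field is acceptable on the entities): with a version property, isNew() no longer has to rely on the id being null, so the pre-assigned UUID no longer hides new entities from the auditing.
public abstract class IdentifiableEntity implements Persistable<UUID> {

    @Id
    private UUID id;

    @Version
    private Long version; // stays null until the document has been persisted once

    @JsonIgnore
    public boolean isNew() {
        // New as long as no version has been assigned, even if the UUID is already set
        return version == null;
    }

    public UUID getId() { return id; }
    public void setId(UUID id) { this.id = id; }
}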
My Spring REST program, a slight extension of a Stephen Zerhusen demo using JSON Web Tokens (JWT), works OK -- as far as it goes. I added an Option object, and I can GET, PUT and POST using just an Option class (@Entity) and an OptionRepository interface (extends JpaRepository).
I'm now trying, but failing, to restrict the returned data to just what the logged-in user has rights to. As an example, suppose that my logged in user only has rights to Option values 1, 3, and 5.
If I have a service call like GET /option I should not return Option values 2 or 4.
If I have a service call like GET /option/2 I should get back a HTTP 404 result.
I understand that once the user has logged in I can get their user information through a Principal object reference. Such a solution was offered in this previous Stack Overflow question, and other pages also offer similar solutions.
My immediate problem is to find where I can affect the GET and PUT behavior of /option. Here is all that I added to an existing, working demo. First, the entity-defining class:
@Entity
@Table(name="choice")
public class Option implements Serializable {

    @Id
    @Column(name="id")
    @GeneratedValue(strategy=GenerationType.AUTO)
    private Long id = Utilities.INVALID_ID;

    @Column(name="value", length=50, nullable=false)
    private String value;

    @Column(name="name", length=100, nullable=false)
    private String name;

    public Long getId() { return this.id; }
    public void setId(Long id) { this.id = id; }
    public String getValue() { return this.value; }
    public void setValue(String value) { this.value = value; }
    public String getName() { return this.name; }
    public void setName(String name) { this.name = name; }
}
Now the JpaRepository interface extension:
@RepositoryRestResource(collectionResourceRel="option", path="option")
public interface OptionRepository extends JpaRepository<Option, Long> {
}
I merely added those two files to the program, and GET, PUT and POST work. BTW, it turns out that if I comment out the @RepositoryRestResource annotation, the call to /option/1 returns HTTP 404. Some documentation suggests it isn't needed, but I guess it really is.
Now to filter the output. Let's pretend to filter by making the server always return the Option with id = 5. I do this by:
@RepositoryRestResource(collectionResourceRel="option", path="option")
public interface OptionRepository extends JpaRepository<Option, Long> {

    @RequestMapping(path = "/option/{id}", method = RequestMethod.GET)
    @Query("from Option o where o.id = 5")
    public Iterable<Option> getById(@PathVariable("id") Long id);
}
When I run this server and do GET /option/1 I get back ... Option 1, not Option 5. The @Query isn't used.
What is the magic needed to affect the GET, PUT, etc?
Thanks,
Jerome.
You can use a ResourceProcessor to manipulate the returned resources:
@Component
public class OptionResourceProcessor implements ResourceProcessor<Resource<Option>> {

    @Override
    public Resource<Option> process(Resource<Option> resource) {
        Option option = resource.getContent();
        if (/* Logged User is not allowed to get this Option */ ) {
            throw new MyCustomException(...);
        } else {
            return resource;
        }
    }
}
Then you can create a custom exception handler, for example:
@ControllerAdvice
public class ExceptionsHandler {

    @ExceptionHandler(MyCustomException.class)
    public ResponseEntity<?> handleMyCustomException(MyCustomException e) {
        return new ResponseEntity<>(new MyCustomMessage(e), HttpStatus.FORBIDDEN);
    }
}
To add some logic to PUT/POST/DELETE requests, you can use a custom event handler, for example:
@RepositoryEventHandler(Option.class)
public class OptionEventHandler {

    @HandleBeforeSave
    public void handleBeforeSave(Option option) {
        if (/* Logged User is not allowed to save this Option */ ) {
            throw new MyCustomException(...);
        }
    }
}
You can find more SDR usage examples in my sample project...