Hygieia JIRA Collector NullPointerException - java

I tried to connect our Jira instance to the Hygieia dashboard, but I always get a NullPointerException in the Jira collector.
This is the log file:
hygieia-jira | 2016-05-10 08:27:06,304 ERROR c.c.d.c.s.FeatureDataClientSetupImpl - Unexpected error in Jira paging request of java.lang.NullPointerException
hygieia-jira | [null]
And the docker-compose settings:
hygieia-jira-feature-collector:
  image: hygieia-jira-feature-collector:latest
  container_name: hygieia-jira
  volumes:
    - ./logs:/hygieia/logs
  links:
    - mongodb:mongo
    - hygieia-api
  environment:
    - JIRA_BASE_URL=https://our-jira.com/tracker/base64
    - JIRA_CREDENTIALS=b2ZhMmFidDpibsdfGlkbWlsdfzYWfsfdub37527lpbmc=
    - JIRA_ISSUE_TYPE_ID=267
    - JIRA_SPRINT_DATA_FIELD_NAME=customfield_14290
    - JIRA_EPIC_FIELD_NAME=customfield_15590
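(Not part of the original report, but a quick sanity check that may help isolate the NPE: verify that the base URL and the Base64 credentials work outside the collector. The endpoint below is the standard Jira REST v2 "myself" resource; the host and credentials placeholders are assumptions to be substituted with your real values.)
# Hypothetical sanity check: if this fails, JIRA_BASE_URL or
# JIRA_CREDENTIALS is the likely culprit rather than the collector itself.
curl -H "Authorization: Basic <your-base64-credentials>" \
     https://our-jira.com/tracker/rest/api/2/myself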

Related

Microservices communication with a spring gateway

I'm very new to Spring Boot and I'm trying to explore this world.
I created three microservices that communicate with each other. Everything seems to work except the Spring gateway that I just added.
The API call returns:
Error: Socket hang up
This is the configuration I made, but I'm sure it is not 100% correct. Can you help me find the bad config?
This is the docker-compose:
version: '3.4'
x-common-variables: &common-variables
  DATASOURCE_USER: ${DB_USER}
  DATASOURCE_PASSWORD: ${DB_PASSWORD}
  DATASOURCE_PORT: ${DB_PORT}
services:
  apigateway:
    build:
      context: .
      dockerfile: APIgateway/Dockerfile
    ports:
      - "4444:4444"
    restart: always
  paymysqldb:
    container_name: paymysqldb
    image: mysql
    ports:
      - "3313:3306"
    environment:
      - MYSQL_DATABASE=${DB_DATABASE_PAY}
      - MYSQL_USER=${DB_USER}
      - MYSQL_PASSWORD=${DB_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DB_ROOT_PASSWORD}
    volumes:
      - paystorage:/var/lib/mysql
  usermysqldb:
    container_name: usermysqldb
    image: mysql
    ports:
      - "3311:3306"
    environment:
      - MYSQL_DATABASE=${DB_DATABASE_USER}
      - MYSQL_USER=${DB_USER}
      - MYSQL_PASSWORD=${DB_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DB_ROOT_PASSWORD}
    volumes:
      - userstorage:/var/lib/mysql
  catalogmysqldb:
    container_name: catalogmysqldb
    image: mysql
    ports:
      - "3312:3306"
    environment:
      - MYSQL_DATABASE=${DB_DATABASE_CATALOG}
      - MYSQL_USER=${DB_USER}
      - MYSQL_PASSWORD=${DB_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DB_ROOT_PASSWORD}
    volumes:
      - catalogstorage:/var/lib/mysql
  paymanager:
    container_name: paymanager
    image: arausa/payimage
    build:
      context: .
      dockerfile: MicroServices/PaymentManager/Dockerfile
    depends_on:
      - paymysqldb
    ports:
      - "3333:3333"
    restart: always
    environment:
      <<: *common-variables
      PM_DATASOURCE_HOST: ${DB_HOST_PAY}
      PM_DATASOURCE_NAME: ${DB_DATABASE_PAY}
  usermanager:
    container_name: usermanager
    image: arausa/userimage
    build:
      context: .
      dockerfile: MicroServices/UserManager/Dockerfile
    depends_on:
      - usermysqldb
    ports:
      - "1111:1111"
    restart: always
    environment:
      <<: *common-variables
      UM_DATASOURCE_HOST: ${DB_HOST_USER}
      UM_DATASOURCE_NAME: ${DB_DATABASE_USER}
    expose:
      - "1111"
  catalogmanager:
    container_name: catalogmanager
    image: arausa/catalogimage
    build:
      context: .
      dockerfile: MicroServices/CatalogManager/Dockerfile
    depends_on:
      - catalogmysqldb
    ports:
      - "2222:2222"
    restart: always
    environment:
      <<: *common-variables
      CM_DATASOURCE_HOST: ${DB_HOST_CATALOG}
      CM_DATASOURCE_NAME: ${DB_DATABASE_CATALOG}
  # Kafka uses ZooKeeper to keep track of brokers, network topology, and synchronization info
  zookeeper:
    image: wurstmeister/zookeeper
  # identifies the Kafka broker
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092" # default port for the Kafka broker
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 # tells Kafka where ZooKeeper is running
volumes:
  userstorage:
  catalogstorage:
  paystorage:
This is the API gateway class
package com.example.apigateway;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
@EnableDiscoveryClient
public class APIgatewayApplication {

    public static void main(String[] args) {
        SpringApplication.run(APIgatewayApplication.class, args);
    }

    @Bean
    public RouteLocator myRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route(p -> p
                        .path("/user/**")
                        .uri("http://usermanager:1111"))
                .route(p -> p
                        .path("/catalog/**")
                        .uri("http://catalogmanager:2222"))
                .route(p -> p
                        .path("/payment/**")
                        .uri("http://paymanager:3333"))
                .build();
    }
}
This is the application.properties:
spring.application.name=apigateway
server.port=4444
And finally, this is the .env file, even if I don't think it is relevant to this problem:
DB_DATABASE_PAY=PayDB
DB_HOST_PAY=paymysqldb
DB_DATABASE_USER=UserDB
DB_HOST_USER=usermysqldb
DB_DATABASE_CATALOG=CatalogDB
DB_HOST_CATALOG=catalogmysqldb
DB_USER=db_user
DB_PASSWORD=ale2022
DB_ROOT_PASSWORD=user
DB_PORT=3306
My bad, I forgot to rebuild the API image.
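(For anyone hitting the same "Socket hang up": docker-compose keeps using the previously built image, so after changing the gateway code the service has to be rebuilt. A standard way to do that for a single service is:)
docker-compose up -d --build apigateway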
The error is now:
500 Server Error for HTTP POST "/user/addUser"
apigateway_1 |
apigateway_1 | io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: usermanager/172.18.0.10:1111
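A plausible cause, offered as a hedged guess rather than a confirmed diagnosis: "Connection refused" from the gateway means nothing was listening on port 1111 inside the usermanager container when the request arrived, either because the service binds a different port or because it was still starting (e.g. waiting for MySQL). Assuming the user service is a standard Spring Boot app, its application.properties should pin the port the gateway targets (the property names below are standard Spring Boot; applying them to this particular service is the assumption):
spring.application.name=usermanager
server.port=1111
If the port already matches, retrying after the service has fully started, or gating the gateway on a health check, may be what's needed instead.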

Elasticsearch client can't connect to Elasticsearch in a Docker container

I'm trying to use Elasticsearch inside a container, from a Java app which is also inside a container. Without Docker containers, my app connects to the local Elasticsearch correctly. My docker-compose file:
version: "3.7"
volumes:
postgis:
services:
database:
container_name: database
build:
postgis/
ports:
- 5432:5432
volumes:
- ./postgis:/var/lib/postgresql:rw
restart: on-failure
networks:
- net
application:
depends_on:
- database
- es
container_name: application
build:
application/
ports:
- $LORRYAPP_DEBUG_PORT:8080
volumes:
- ./application:/app:rw
environment:
LORRYAPP_OPTS: $LORRYAPP_OPTS
restart: on-failure
networks:
- net
es:
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
ports:
- "9200:9200"
- "9300:9300"
environment:
- discovery.type=single-node
networks:
- net
networks:
net:
driver: bridge
Initialising the ES client in the constructor:
public ElasticSearchDao(ObjectMapper mapper) {
    this.esClient = new RestHighLevelClient(RestClient.builder(HttpHost.create("http://localhost:9200")));
    this.mapper = mapper;
}
Stacktrace:
java.net.ConnectException: Connection refused
at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:788) ~[elasticsearch-rest-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:218) ~[elasticsearch-rest-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:205) ~[elasticsearch-rest-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1454) ~[elasticsearch-rest-high-level-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1424) ~[elasticsearch-rest-high-level-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1394) ~[elasticsearch-rest-high-level-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:836) ~[elasticsearch-rest-high-level-client-7.4.0.jar!/:7.4.0]
ES is available through the browser: http://0.0.0.0:9200/, http://127.0.0.1:9200/, and http://localhost:9200/ all give this response:
{
  "name" : "254bdb7bcc2a",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "vm537RNGSiG3dW8ag2MDTw",
  "version" : {
    "number" : "7.6.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "aa751e09be0a5072e8570670309b1f12348f023b",
    "build_date" : "2020-02-29T00:15:25.529771Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
I think this could be related to a question I asked a long while ago:
Accessing Elasticsearch Docker instance using NEST
It's targeted at C#, but this is really a container issue. What happened in my case was that the client would connect to Docker, obtain an internal IP (which is used for inter-container communication only), and then try to use that IP to keep the connection open, which obviously doesn't work.
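Applied to the Java setup above, the same reasoning suggests pointing the client at the compose service name rather than localhost, since localhost inside the application container is the container itself. A minimal sketch, assuming the service name es from the compose file above:
public ElasticSearchDao(ObjectMapper mapper) {
    // "es" is the compose service name; both containers share the "net" network,
    // so Docker's embedded DNS resolves it to the Elasticsearch container.
    this.esClient = new RestHighLevelClient(
            RestClient.builder(HttpHost.create("http://es:9200")));
    this.mapper = mapper;
}
The published 9200 port is only needed for access from the host (e.g. the browser checks above); container-to-container traffic goes over the shared network.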

Streams deploying stuck and then fails with no errors in the logs

I'm trying to deploy the following streams:
STREAM_2=:messages > filter --expression="#jsonPath(payload, '$.id')==1" | rabbit --queues=id_1 --host=rabbitmq --routing-key=id_1 --exchange=ex_1 --own-connection=true
STREAM_3=:messages > filter --expression="#jsonPath(payload, '$.id')==2" | rabbit --queues=id_2 --host=rabbitmq --routing-key=id_2 --exchange=ex_1
STREAM_4=:messages > filter --expression="#jsonPath(payload, '$.id')==3" | rabbit --queues=id_3 --host=rabbitmq --routing-key=id_3 --exchange=ex_1
STREAM_1=rabbit --queues=hello_queue --host=rabbitmq > :messages
I'm listening on one queue and then routing each message to a different queue depending on one of the message's attributes.
I'm running a local system, using this docker-compose.yml, but I switched to RabbitMQ instead of Kafka for communication.
When I deploy the streams, it takes a couple of minutes until the dataflow-server container reaches its maximum memory usage, and the deployment finally fails on random streams (and sometimes kills the container).
The logs (both stdout and stderr) don't show errors.
I'm running with the latest versions as follows:
DATAFLOW_VERSION=2.0.1.RELEASE SKIPPER_VERSION=2.0.0.RELEASE docker-compose up
Another thing I noticed: in the logs I keep getting:
2019-03-27 09:35:00.485 WARN 70 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node -1 could not be established. Broker may not be available.
although I have nothing related to Kafka in my docker-compose.yml. Any ideas where it's coming from?
Relevant parts from my YAML:
version: '3'
services:
  mysql:
    image: mysql:5.7.25
    environment:
      MYSQL_DATABASE: dataflow
      MYSQL_USER: root
      MYSQL_ROOT_PASSWORD: rootpw
    expose:
      - 3306
  dataflow-server:
    image: springcloud/spring-cloud-dataflow-server:${DATAFLOW_VERSION:?DATAFLOW_VERSION is not set!}
    container_name: dataflow-server
    ports:
      - "9393:9393"
    environment:
      - spring.datasource.url=jdbc:mysql://mysql:3306/dataflow
      - spring.datasource.username=root
      - spring.datasource.password=rootpw
      - spring.datasource.driver-class-name=org.mariadb.jdbc.Driver
      - spring.cloud.skipper.client.serverUri=http://skipper-server:7577/api
      - spring.cloud.dataflow.applicationProperties.stream.spring.rabbitmq.host=rabbitmq
    depends_on:
      - rabbitmq
  rabbitmq:
    image: "rabbitmq:3-management"
    ports:
      - "5672:5672"
      - "15672:15672"
    expose:
      - "5672"
  app-import:
    ...
  skipper-server:
    image: springcloud/spring-cloud-skipper-server:${SKIPPER_VERSION:?SKIPPER_VERSION is not set!}
    container_name: skipper
    ports:
      - "7577:7577"
      - "9000-9010:9000-9010"
volumes:
  scdf-targets:
Looks like I was a victim of the OOM killer: the container was crashing with an exit code of 137.
The easiest solution for me right now is giving Docker more memory:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
9a0e3ff0beb8 dataflow-server 0.18% 1.293GiB / 11.71GiB 11.04% 573kB / 183kB 92.1MB / 279kB 49
2a448b3583a3 scdf_kafka_1 7.00% 291.6MiB / 11.71GiB 2.43% 4.65MB / 3.64MB 40.4MB / 36.9kB 73
eb9a70ce2a0e scdf_rabbitmq_1 2.15% 94.21MiB / 11.71GiB 0.79% 172kB / 92.5kB 41.7MB / 139kB 128
06dd2d6a1501 scdf_zookeeper_1 0.16% 81.72MiB / 11.71GiB 0.68% 77.8kB / 99.2kB 36.7MB / 45.1kB 25
1f1b782ad66d skipper 8.64% 6.55GiB / 11.71GiB 55.93% 3.63MB / 4.73MB 213MB / 0B 324
The skipper container is now using 6.55 GiB of memory; if someone knows what could be causing this, I would be grateful.
For now, I'm accepting my own answer since it does provide a workaround, although I feel there should be a better solution than increasing the memory limit for Docker.
EDIT:
Looks like this is indeed the solution, from this GitHub issue:
Stream components (parts of the pipe) are deployed as applications. Those applications are deployed into the Skipper container (alongside the Skipper application itself), since Skipper deploys streams. The more applications that get deployed (parts of the pipe, streams, etc.), the more memory is used.
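For readers who want to confirm the same diagnosis on their own setup: Docker records whether a container was OOM-killed, so an exit code of 137 can be cross-checked against the container state. These are standard docker inspect fields; the container name matches the compose file above:
docker inspect -f '{{.State.OOMKilled}} {{.State.ExitCode}}' skipper
If it prints true 137, the container was killed by the kernel for exceeding available memory rather than failing on its own.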

Grails, Jobs, Static Helper Methods and Hibernate Session

I have a Grails job class (grails-app/jobs) that needs to call a static (helper) method defined in src/groovy. This method calls get- and find-methods on two different domain classes. The method returns a simple String (it could return anything for that matter; it doesn't matter).
My question is: how do I use .withTransaction or .withSession in the job class when I'm calling a static method that fetches two (or more) different domain classes?
Or, how do I declare/use a Hibernate session in a job class so that I don't have to use .withBlaBla?
EDIT (another EDIT at the bottom - sorry):
The lines where EZTable and EZRow are fetched are working. EmailReminder I had to wrap in EmailReminder.withTransaction. Now the lines calling ServiceUtils.handleSubjectOrMessageString(ezTable, ezRow, emailReminder.subject) are causing an exception (these were added just now; the entire job class was working earlier with simple String values).
class EmailReminderJob implements Job {

    EmailReminder emailReminder
    EZTable ezTable
    EZRow ezRow

    static triggers = {}

    void execute(JobExecutionContext context) {
        List<String> emails = new ArrayList<String>(0)
        ezTable = EZTable.get(new Long(context.mergedJobDataMap.get('ezTableId')))
        ezRow = EZRow.get(new Long(context.mergedJobDataMap.get('ezRowId')))
        EmailReminder.withTransaction { status ->
            emailReminder = EmailReminder.get(new Long(context.mergedJobDataMap.get('emailReminderId')))
            if(emailReminder.sendMessageToOwnerUser && emailReminder.ownerUser.email != null)
                emails.add(emailReminder.ownerUser.email)
            if(emailReminder.sendMessageToOwnerCompany && emailReminder.ownerCompany.email != null)
                emails.add(emailReminder.ownerCompany.email)
            if(emailReminder.emails != null && emails.size() > 0)
                emails.addAll(new ArrayList<String>(emailReminder.emails))
            if(emailReminder.messageReceiverUsers != null && emailReminder.messageReceiverUsers.size() > 0) {
                for(user in emailReminder.messageReceiverUsers) {
                    if(user.email != null)
                        emails.add(user.email)
                }
            }
        }
        if(emails.size() > 0) {
            String host = "localhost";
            Properties properties = System.getProperties();
            properties.setProperty("mail.smtp.host", host);
            Session session = Session.getDefaultInstance(properties);
            try {
                // Create a default MimeMessage object.
                MimeMessage message = new MimeMessage(session);
                message.setFrom(new InternetAddress(emailReminder.emailFrom));
                for(email in emails) {
                    message.addRecipient(
                        Message.RecipientType.TO,
                        new InternetAddress(email)
                    );
                }
                message.setSubject(ServiceUtils.handleSubjectOrMessageString(ezTable, ezRow, emailReminder.subject));
                message.setText(ServiceUtils.handleSubjectOrMessageString(ezTable, ezRow, emailReminder.definedMessage));
                Transport.send(message);
            } catch (MessagingException mex) {
                mex.printStackTrace();
            }
        }
    }
}
The static method in my util class under src/groovy (the line EZColumn ezcolumn = EZColumn.get(id) and the next one are causing the exception):
static String handleSubjectOrMessageString(EZTable eztable, EZRow ezrow, String subjectOrMessage) {
    String regex = '(?<=\\$\\$)(.*?)(?=\\$\\$)'
    Pattern pattern = Pattern.compile(regex)
    Matcher matcher = pattern.matcher(subjectOrMessage)
    StringBuffer stringBuffer = new StringBuffer();
    while(matcher.find()) {
        if(subjectOrMessage.substring(matcher.start(), matcher.end()).contains('#')) {
            String stringId = subjectOrMessage.substring(matcher.start(), matcher.end()).split('#')[0]
            String name = subjectOrMessage.substring(matcher.start(), matcher.end()).split('#')[1]
            try {
                Long id = new Long(stringId)
                EZColumn ezcolumn = EZColumn.get(id)
                EZCell ezcell = EZCell.findByEzTableAndEzRowAndEzColumn(eztable, ezrow, ezcolumn)
                matcher.appendReplacement(stringBuffer, fetchCellValues(ezcell, ezcolumn))
            } catch(NumberFormatException nfe) {
                if(stringId.equals("id")) {
                    if(name.equals("row"))
                        matcher.appendReplacement(stringBuffer, ezrow.id.toString())
                    else if(name.equals("table"))
                        matcher.appendReplacement(stringBuffer, eztable.id.toString())
                    else
                        matcher.appendReplacement(stringBuffer, "???")
                }
            }
        }
    }
    matcher.appendTail(stringBuffer);
    println stringBuffer.toString().replaceAll('\\$', "")
    return stringBuffer.toString().replaceAll('\\$', "")
}
The exception:
| Error 2015-02-11 10:33:33,954 [quartzScheduler_Worker-1] ERROR core.JobRunShell - Job EmailReminderGroup.ER_3_EZTable_3 threw an unhandled Exception:
Message: null
Line | Method
->> 202 | run in org.quartz.core.JobRunShell
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
^ 573 | run in org.quartz.simpl.SimpleThreadPool$WorkerThread
| Error 2015-02-11 10:33:33,996 [quartzScheduler_Worker-1] ERROR core.ErrorLogger - Job (EmailReminderGroup.ER_3_EZTable_3 threw an exception.
Message: Job threw an unhandled exception.
Line | Method
->> 213 | run in org.quartz.core.JobRunShell
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
^ 573 | run in org.quartz.simpl.SimpleThreadPool$WorkerThread
Caused by NullPointerException: null
->> 202 | run in org.quartz.core.JobRunShell
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
^ 573 | run in org.quartz.simpl.SimpleThreadPool$WorkerThread
| Error 2015-02-11 10:33:34,005 [quartzScheduler_Worker-1] ERROR listeners.ExceptionPrinterJobListener - Exception occurred in job: null
Message: org.quartz.SchedulerException: Job threw an unhandled exception. [See nested exception: java.lang.NullPointerException]
Line | Method
->> 218 | run in org.quartz.core.JobRunShell
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
^ 573 | run in org.quartz.simpl.SimpleThreadPool$WorkerThread
Caused by SchedulerException: Job threw an unhandled exception.
->> 213 | run in org.quartz.core.JobRunShell
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
^ 573 | run in org.quartz.simpl.SimpleThreadPool$WorkerThread
Caused by NullPointerException: null
->> 202 | run in org.quartz.core.JobRunShell
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
^ 573 | run in org.quartz.simpl.SimpleThreadPool$WorkerThread
EDIT AGAIN :( :
I have many nested calls in my static method (the fetchCellValues(ezcell, ezcolumn) call inside matcher.appendReplacement(stringBuffer, fetchCellValues(ezcell, ezcolumn)) goes deeper to fetch values), and I actually get a "no Session" exception at one call (a regular call, like all the others, trying to fetch another domain object):
Message: org.quartz.SchedulerException: Job threw an unhandled exception. [See nested exception: org.hibernate.LazyInitializationException: could not initialize proxy - no Session]
You use them like you would anywhere. Both are independent of the class they're called on; withTransaction just runs the wrapped code in a transaction, joining a current active transaction if there is one, and withSession makes the current Hibernate Session available to the wrapped code but otherwise doesn't do anything.
You don't indicate any reason for needing either, so it's not obvious what to advise specifically. You don't need a transaction if you're only reading data, and if you're calling domain class methods you shouldn't need access to the session.
One use for withTransaction that I've advocated in the past (pretty much the only use for it since it's typically misused) is to avoid lazy loading exceptions when there isn't an active session already. Wrapping code in a withTransaction block has the side effect of creating a session and keeping it open for the duration of the block and that lets you work with lazy-loaded instances and collections. Controllers have an active session because there's an open-session-in-view interceptor that starts a session at the beginning of the request, stores it in a ThreadLocal, and flushes and closes it at the end of the request. Jobs are similar because the plugin uses Quartz job start/end events to do the same thing.
But whether you are making your code transactional because of lazy loading or because you're updating, you should usually be doing the work in a transactional service.
Services are great for transactional work because they're transactional by default (only services that have no @Transactional annotations and include static transactional = false are non-transactional), and it's easy to configure transaction demarcation per-class and per-method with the @Transactional annotation. They are also great for encapsulating business logic, independent of how they're called; there's usually no need for a service method to have any HTTP/Job/etc. awareness, just pass it the data needed in String/number/boolean/object arguments and let it do its work.
I like to keep controllers simple, doing data binding from the request params and calling services to do the real work, then rendering a response or routing to the next page, and I do the same thing in Quartz jobs. Use Quartz for its scheduling functionality, but do the real work in a service. Dependency-inject the service like any bean (def fooService) and put all of the business logic and database work there. It keeps things cleanly delineated in the code, and makes testing easier since you can test the service methods without having to mock HTTP calls or Quartz.
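As a hedged illustration of that last point (the service and method names are invented for the example, not taken from the question), the job body above could shrink to a call into a transactional Grails service that does all the domain access inside one session:
class EmailReminderService {

    // Grails services are transactional by default, so a Hibernate session
    // is open for the whole method and lazy loading works throughout.
    String buildSubject(Long ezTableId, Long ezRowId, Long emailReminderId) {
        def ezTable = EZTable.get(ezTableId)
        def ezRow = EZRow.get(ezRowId)
        def emailReminder = EmailReminder.get(emailReminderId)
        return ServiceUtils.handleSubjectOrMessageString(ezTable, ezRow, emailReminder.subject)
    }
}
The job would then inject it like any bean (def emailReminderService) and pass the three ids from the merged job data map, keeping all Hibernate work behind the service boundary.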

Grails dynamic scaffolding error

I am new to Grails. I installed Grails 2.4 on Ubuntu 14.04.
I have this Grails Domain:
class EndUser {

    String username
    String password
    String fullName

    String toString() {
        "$fullName"
    }

    static hasMany = [projects: Project, tasks: Task]

    static constraints = {
        fullName()
        username()
        password()
    }
}
and its controller:
class EndUserController {

    static scaffold = true

    def index = {
        // redirect(action: "list")
    }
}
I am getting the error below; every time I create a new EndUser, I get an error page showing this message:
Error 500: Internal Server Error
URI
/ProjectTracker/endUser/create
Class
java.lang.NullPointerException
Message
null
What did I do wrong, and how can I fix it?
Please let me know if I need to provide more info.
|Running Grails application
|Server running. Browse to http://localhost:8080/ProjectTracker
| Error 2014-06-09 01:59:47,663 [http-bio-8080-exec-7] ERROR errors.GrailsExceptionResolver - NullPointerException occurred when processing request: [GET] /ProjectTracker/endUser/edit/1
Stacktrace follows:
Message: Error processing GroovyPageView: Error executing tag <g:form>: Error executing tag <g:render>: null
Line | Method
->> 527 | doFilter in /endUser/edit
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Caused by GrailsTagException: Error executing tag <g:form>: Error executing tag <g:render>: null
->> 38 | doCall in /endUser/edit
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Caused by GrailsTagException: Error executing tag <g:render>: null
->> 33 | doCall in /endUser/edit
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Caused by NullPointerException: null
->> 333 | hash in java.util.concurrent.ConcurrentHashMap
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 988 | get in ''
| 141 | getValue in grails.util.CacheEntry
| 81 | getValue in ''
| 33 | doCall . in endUser_edit$_run_closure2_closure27
| 38 | doCall in endUser_edit$_run_closure2
| 40 | run . . . in endUser_edit
| 189 | doFilter in grails.plugin.cache.web.filter.PageFragmentCachingFilter
| 63 | doFilter in grails.plugin.cache.web.filter.AbstractFilter
| 1145 | runWorker in java.util.concurrent.ThreadPoolExecutor
| 615 | run . . . in java.util.concurrent.ThreadPoolExecutor$Worker
^ 745 | run in java.lang.Thread
Thanks,
Khalil.
It looks like you hit this bug: https://jira.grails.org/browse/GRAILS-11430. It will be fixed in Grails 2.4.1; you can either wait until that version is released or try the workaround provided in the comments of that link, which has been reported to work.
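For reference, once 2.4.1 is released, the upgrade for a Grails 2.x app is a one-line change; application.properties in the project root is the standard place for this (verify the released version number before applying):
# application.properties
app.grails.version=2.4.1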
